URL (stringlengths 15–1.68k) | text_list (sequencelengths 1–199) | image_list (sequencelengths 1–199) | metadata (stringlengths 1.19k–3.08k) |
---|---|---|---|
https://gre.kmf.com/explain/index/61k85j | [
"### Latest question\n\nIf one resident is to be randomly selected from among the 2,000 residents, what is the probability that the selected resident's response to question 2 was either \"Local government\" or \"Private donations\"?\n• A$\\frac{1}{4}$\n• B$\\frac{2}{7}$\n• C$\\frac{1}{3}$\n• D$\\frac{2}{5}$\n• E$\\frac{1}{2}$\n\nA Show answer\n\n• KMF explanation\n• User explanations\n• Question data\n• Collapse\n\n### Question discussion",
null,
"• All comments\n• Questions only\n• Sort by time\n• Sort by likes"
] | [
null,
"http://www.kmf.com/images/tx.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7358511,"math_prob":0.99661124,"size":386,"snap":"2020-34-2020-40","text_gpt3_token_len":141,"char_repetition_ratio":0.11518325,"word_repetition_ratio":0.031746034,"special_character_ratio":0.2253886,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9812717,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-13T08:43:11Z\",\"WARC-Record-ID\":\"<urn:uuid:21746ae2-e71e-4069-8412-a2e7c4a3533f>\",\"Content-Length\":\"43669\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:249cd80f-3195-4a43-9417-8eab24a20200>\",\"WARC-Concurrent-To\":\"<urn:uuid:0159f9ca-7b59-4a4a-b6cd-6f0b69be2dae>\",\"WARC-IP-Address\":\"47.254.45.31\",\"WARC-Target-URI\":\"https://gre.kmf.com/explain/index/61k85j\",\"WARC-Payload-Digest\":\"sha1:FHJBHIO6EUCJORRDUMEM7F3NAU5MDTSB\",\"WARC-Block-Digest\":\"sha1:7JCIV5XFNHOEAI3CK24MUDMO2G7VOM6D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738964.20_warc_CC-MAIN-20200813073451-20200813103451-00118.warc.gz\"}"} |
https://puzzling.stackexchange.com/questions/1855/what-is-the-smallest-positive-integer-which-can-not-be-written-without-repetiti | [
"# What is the smallest positive integer, which can not be written without repetitions of digits and using arithmetics only?\n\nSuppose you are allowed to use all 10 digits (0,1,2,...9) at most once each; 4 arithmetic operations ($-$,$+$,$\\times$,$\\div$), each any number of times; parentheses to group operations; and you can create numbers from digits by writing them together.\nWhat is the smallest natural number which you would not be able to write?\n\nFor example, you can write:\n$135 = 12*3*(9+6)/4$\nand you can't write:\n$11 = 11$\n$3 = 2+3/3$\n$27 = 3+4!$\n$81 = 3^4$\n$1 = 5/3$\n\n• @JoeZ don't forget you can \"create numbers from digits writing them together\" so $4817*(90+23)=544321$ Jul 3, 2014 at 14:41\n• What about parentheses, or how is operator precedence handled? Jul 3, 2014 at 15:12\n• @Cephalopod, you can use parentheses. Thank you. Jul 3, 2014 at 16:23\n• I don't think there's any algorithm which will help solve this - it's just going to have to be bruteforce. Jul 3, 2014 at 18:41\n• @N3buchadnezzar, em? You just wrote this number using only digits from 1 to 9 and stringing them together.
Jul 3, 2014 at 20:06\n\nConcatenation is the \"most efficient\" among the allowed operations, which is exemplified by this observation: If $x$ is a nonzero rational number obtained according to the rules using the $k$ digits $d_1>d_2>\\ldots >d_k$, then $$\\tag1 \\max\\left\\{|x|,\\tfrac1{|x|}\\right\\}\\le 10^k\\cdot 0.d_1d_2\\ldots d_k$$ This is clear for direct concatenations and otherwise follows by induction and using $(1)$ for the summands, factors, etc.\n\nWe can strengthen this: If the last operation is $\\times$ or $\\div$, then $$\\tag2\\max\\left\\{|x|,\\tfrac1{|x|}\\right\\}<10^k\\cdot 0.85598232$$ This follows because not both parts can use the digit $9$, hence the largest digit in one subterm is either $8$, leading to a bounding factor $0.97654321\\cdot 0.87654321$; or the largest digit in one factor is $\\le 7$, leading to a bounding factor $0.987654321\\cdot 0.7654321$. If one takes these case distinctions a bit further, one readily finds that $$\\tag{2'} \\max\\left\\{|x|,\\tfrac1{|x|}\\right\\}\\le10^k\\cdot 0.843973902$$ if the last operation is $\\times$ or $\\div$ (with the extreme given by $9642\\cdot 875310$).\n\nMoreover, if the last operation is $+$ or $-$, then $$\\tag3|x|\\le (10^{k-1}+1)\\cdot 0.987654321$$\n\nWe thus are led to believe that $987654323$ is likely not expressible: Because of two digits $3$ it cannot be obtained from concatenation; because of $(3)$ it cannot be obtained as sum or difference; because it is prime, it cannot be obtained as product of integers (unless one factor is $1$ and we are effectively using at most $9$ digits). There remains the case that the number is obtained as a product or quotient of fractions, but at first sight this seems implausible.\n\nMeanwhile I brute-forced all numbers that can be obtained with only $+,-,\\times$ and digit concatenation (and with all intermediate results $\\le 10^7$).
The first number that cannot be expressed this way turns out to be $$8480902.$$ So unless someone manages to express $8480902$ under the original rules (i.e. with division allowed), this is the answer to the original problem.\n\n• \"This follows because not both parts can use the digit 9\" Why? This assumes that all previous operations were concatenations, but why must they be? Jul 7, 2014 at 5:49\n• @klm123 Each digit shall be used at most once, hence $9$ cannot occur as used in both parts (it may of course occur as digit in both parts, e.g. if we combine (e.g. multiply) $4+5$ and $92$, but then $9$ is used only in the second part). - After the brute-force result, however, the introductory estimates have become mostly useless. Jul 7, 2014 at 14:10\n• I'm missing something here. I followed you up to the brute-force consideration. How did you do that? And why are you discarding the division? Using just 0-5 digits, I have found numbers that are expressed as division (i.e. 27155=54310/2), although I cannot be certain that all of them are expressible only as a division. Jul 7, 2014 at 21:57\n• Exhaustive searches show that for digits up to 6 and 7 the solution is unchanged by the removal of division from the permitted operations, so it's quite likely that 8480902 is the desired answer. Jul 9, 2014 at 17:05\n\nI have written a Delphi application to brute force a solution. Unfortunately I see no way of solving the problem: what expression(s) results in number N?
That is I see no solution other than generating all expressions and then seeing what natural numbers they result in.\n\nSo I can't really help with Bobson's crowd-sourcing approach.\n\nI am following klm123's idea, and I have run the application with:\n\nN = 3 : 0,1,2\n\nExecution time: 00:00:00\nExpressions processed: 441 [Average: 27.562,5/sec]\nExpressions calculated: 34 [Average: 2.125,0/sec]\nNatural numbers calculated: 28 [Average: 1.750,0/sec]\nDistinct natural numbers found: 14 [Average: 875,0/sec]\nHighest natural found: 210\n\n\nN = 4 : 0,1,2,3\n\nExecution time: 00:00:00\nExpressions processed: 22.924 [Average: 363.873,0/sec]\nExpressions calculated: 1.464 [Average: 23.238,1/sec]\nNatural numbers calculated: 871 [Average: 13.825,4/sec]\nDistinct natural numbers found: 110 [Average: 1.746,0/sec]\nHighest natural found: 3.210\n\n\nN = 5 : 0,1,2,3,4\n\nExecution time: 00:00:04\nExpressions processed: 1.679.977 [Average: 401.812,2/sec]\nExpressions calculated: 98.228 [Average: 23.493,9/sec]\nNatural numbers calculated: 45.423 [Average: 10.864,1/sec]\nDistinct natural numbers found: 884 [Average: 211,4/sec]\nHighest natural found: 43.210\n\n\nN = 6 : 0,1,2,3,4,5\n\nExecution time: 00:08:39\nExpressions processed: 159.888.346 [Average: 307.590,0/sec]\nExpressions calculated: 8.867.950 [Average: 17.060,0/sec]\nNatural numbers calculated: 3.236.479 [Average: 6.226,3/sec]\nDistinct natural numbers found: 8.661 [Average: 16,7/sec]\nHighest natural found: 543.210\n\n\nN = 7 : 0,1,2,3,4,5,6\n\nExecution time: 19:36:02\nExpressions processed: 18.788.082.577 [Average: 266.262,5/sec]\nExpressions calculated: 1.014.272.742 [Average: 14.374,2/sec]\nNatural numbers calculated: 308.146.445 [Average: 4.367,0/sec]\nDistinct natural numbers found: 93.219 [Average: 1,3/sec]\nHighest natural found: 6.543.210\n\n\nNow on to N=8.
I've got a few small optimizations in mind, but nothing that will cut this drastically, I presume.\n\n## Explanation of what the application is doing\n\nI have two input parameters: Digits and Operators. I also have some other parameters to handle output dump to disk, but they have nothing to do with the algo.\n\nThe application starts processing the empty expression: 0 length string.\n\nEvery time an expression E is processed, I do the following:\n\n1. check all Digits - for any single digit D not already included in E, I process the expression E+D (concatenation, not sum). To optimize a bit I avoid this step if E ends with a closed parenthesis. I also avoid this if D is 0 and E does not end with any other Digit.\n\n2. check all Operators - for any single operator O, I process the expression E+O. To optimize I avoid this step if E is empty or ends with another operator or with an open parenthesis.\n\n3. check \")\" - if E ends with a Digit or a closed parenthesis, and it has a pending unclosed parenthesis, and this contains at least one operator, then I process the expression E+).\n\n4. check \"(\" - if E is empty or ends with an Operator or an open parenthesis, then I process the expression E+(. To avoid infinite recursion there is some magic here... I do it only if the number of Digits still unused in E is greater than the number of pending unclosed parentheses in E plus 1.\n\n5. calculate the expression - if E ends with a Digit or a closed parenthesis, and all parentheses are balanced and are useful (contain at least one operator), and it does not start with ( and end with ) then I calculate the expression. If it turns out to be a natural number, then I check it against the highest found natural and lowest unfound natural.\n\nProcessing of derived expressions is obviously done through recursion. The algo seems quite efficient, apart from the parentheses part.
I have made some changes to this code and it now validates parentheses better, avoiding some recursion in useless branches and evaluation of redundant expressions. It's not perfect, but it does cut times as N goes up. (I am using Artem V. Parlyuk's ArtFormula package of nonvisual Delphi components for symbolic expression parsing and evaluation.)\n\nAs an aside, I also keep track of the shortest expression that can generate each natural number found. The shortest is meaningful, as it is usually the most immediate and readable. It is also always the one with no useless parentheses ;)\n\n## Update\n\nTweaking the parentheses logic, I have stumbled upon a small bug. It was generating a lot of useless expressions (with redundant parentheses) but it was not generating some useful expressions. With N<=5 this bug did not have any effect. With N=6 I did find 6 more natural numbers, though the lowest unfound natural is 653.\n\n• This should run faster if you only search the numbers less than the lowest one not found, in theory. That way, you can answer the question without waiting years :P\n– user20\nJul 7, 2014 at 17:51\n• And how do you propose to \"search\" for a specific value? I generate expressions, and then evaluate them... using a specific library, which is actually quite fast. But if you have ideas on how to limit the generation/evaluation of expressions, please let me know. Apart from that... when I say that 2908 is the lowest, I mean the application has already found expressions for all numbers up to 2907. So what I actually need is to verify if 2908 has an expression or not. Jul 7, 2014 at 18:36\n• Hmm, I see what you're saying. Guess it isn't as straightforward as I thought... I'm sure there's a way to, but I'll have to think on it.\n– user20\nJul 7, 2014 at 19:10\n• I confirm your values for the lowest natural number not found, although I think your code is quite inefficient. Mine takes 20 seconds to check N=7.
Jul 9, 2014 at 16:31\n• 20 secs to evaluate about one billion expressions... wow. I don't think choice of programming language or tool can make so much of a difference. Would you mind sharing the logic or the code? What are you using? How long will it take to process N=10? Jul 9, 2014 at 16:41\n\nHere is my attempt at finding an upper bound:\n\nWe have 10 atomic values ($0..9$) and five operations ($+$, $-$, $*$, $/$, $○$; according to the rules concatenation may not be used on results of other operators, let's ignore this for now)\n\nSince all operations are binary, we need 9 operators to get a single result out of 10 values. There are 4862 binary trees with nine nodes.\n\nThere are $5^9 = 1953125$ ways to choose operators for a tree and $10! = 3628800$ ways to arrange the digit leaves.\n\nThus, the number must be smaller than $3628800*1953125*4862 = 34459425000000000$. Hey, this still fits into a signed 64-bit integer, so good luck to the brute forcers :-)\n\nIt is possible to reduce the upper limit. The following rules were not considered by my approximation\n\n• Concatenation may not be used on results of other operators\n• $+$ and $*$ are commutative\n• not all combinations result in integers\n• some integers can be computed in more than one way\n• The maximum you can get with 9 digits and all operators is 96420*87531, so the upper bound can be much smaller: 8439739021. Jul 4, 2014 at 11:21\n• You can do better than that just by stringing all the digits in descending order to make 9876543210, though.\n– user88\nJul 4, 2014 at 14:12\n• @JoeZ. It may technically have to be an expression of some sort, meaning the upper bound would therefore be 8439739021.\n– Neil\nJul 4, 2014 at 15:52\n• @neil It depends on if \"any number of times\" includes zero times?
Jul 7, 2014 at 20:23\n• @AdamSpeight That cannot be the case, or else the problem would be trivial: every value from 0 to 9876543210 could be represented as a plain number without operations, hence everything could be represented and there would be no solution.\n– Neil\nJul 8, 2014 at 8:33\n\nIn the interest of crowd-sourcing the brute forcing, I've made a google spreadsheet with the list of the 4765 numbers from 1 to 9999 which can't be formed by simple concatenation. Please pick a couple of them and work out solutions.\n\nIf we finish this set with solutions for all numbers, I'll add more.\n\nI don't really expect it to be in this set, but we've got to start somewhere...\n\n• I brute-forced expressions using only digits $0,\\ldots, 7$ and allowing only $+,-,\\times$ and initially concatenation. Then the bound is at $48528$. Jul 6, 2014 at 15:14\n• @HagenvonEitzen - Good to know. I didn't really expect it to be solvable by hand, but it was worth the attempt. Jul 7, 2014 at 19:53\n\nI'm trying to approach the task by solving simpler tasks with $N<9$ digits allowed.\n\n1. N = 3: 0,1,2.\nThe answer would be $A=4$.\nObviously, you can't reach $(N-1)N/2+1 = 4$ if you use only $-,\\div,+$ operations. The $\\times$ operation wouldn't help here either.\nSo you need to use the grouping operation. But then you will get a number $R \\ge 10$ and the only way to decrease it drastically to $4$ is to use division. But $10/2 = 5$ - the division is not enough either.\n\nIt is easy to check that you can make any number $\\le3$.\n\n2. N = 4: 0,1,2,3.\nThe answer is $A=25$.\nSimilarly you can't reach $7$ if you use only $-,\\div,+$ operations. The maximum you can get by adding the $\\times$ operation is (2+1)*3 = 9 (you have to multiply the maximum possible numbers to reach the maximum).\nSo you need to use the grouping operation.\nIf you group 1 and another digit, you would need to use multiplication to reach 25, but 10*2 = 20 and 10*3 = 30 leave us with 3 or 2 to finish...
12*3 = 36 and 13*2 = 26 leave us with 0... so you can do nothing here.\nIf you group 2 and another digit, for example, 20 you would need to add something to it and can't reach 25 with an additional 1 and 3. The same for 21, 23.\nIf you group 3 and another digit, like 30, you would need to subtract something from it; similarly you can't reach 25.\nIf you group three digits, then you would get $R>100$ and you would need to use division, but $100/3 > 25$.\n\nYou can check that you can make any number $\\le24$.\n\nThis really looks like there is no simple pattern; $A$ increases rapidly with $N$, and so does the number of possible ways to achieve the result.\nThe only pattern I see is that $A < 10^{N-2}/(N-1)$, possibly this can be proven. And $10^{N-3} < A$, which will most probably fail for big $N$.\n\nMost probably there is no pattern for $A$ at all, and one would need to brute force it for big $N$.\n\n• It is a ridiculously difficult problem to try to take on mathematically. Even brute forcing a solution would likely take more seconds than have passed since the universe began. Don't beat yourself up.\n– Neil\nJul 4, 2014 at 8:00\n• @Neil: Can you prove that it's mathematically difficult? Jul 4, 2014 at 8:55\n• @justhalf How do you expect I go about that exactly? Divide the number of head scratches by the inverse frequency of pencil eraser usage? I didn't say that to prove anything. If you wish to demonstrate that it is, in fact, easy, then I think it's safe to assume you could provide a solution, no?\n– Neil\nJul 4, 2014 at 9:06\n• I think this might be a good approach, but you'd need to do it in a base-$N$ system. Otherwise you get gaps - there's probably no way to represent 79 using only the digits 0-3, for instance. Jul 6, 2014 at 14:58\n• @Bobson, I don't understand why you are talking about 79 via 0-3 digits. Maybe you see more in my approach than I do? Could you explain it, please? Jul 6, 2014 at 15:04"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8811545,"math_prob":0.96851385,"size":4839,"snap":"2022-40-2023-06","text_gpt3_token_len":1276,"char_repetition_ratio":0.18676318,"word_repetition_ratio":0.040789474,"special_character_ratio":0.29303575,"punctuation_ratio":0.20349908,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9952251,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T01:02:15Z\",\"WARC-Record-ID\":\"<urn:uuid:2be0558b-3ef1-4935-8c7a-5d49def9e12f>\",\"Content-Length\":\"307007\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7842a8e5-273a-46f7-8af9-724b2af53571>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b496f09-f3f3-4de5-a009-d811a53ce5f7>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://puzzling.stackexchange.com/questions/1855/what-is-the-smallest-positive-integer-which-can-not-be-written-without-repetiti\",\"WARC-Payload-Digest\":\"sha1:RIQUJEAVCXX5HEO56IRZWTIOPHWZEAO5\",\"WARC-Block-Digest\":\"sha1:F7RKARXAYKSV5DATJEYRH5N57XYZ63ZE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337680.35_warc_CC-MAIN-20221005234659-20221006024659-00467.warc.gz\"}"} |
http://umj.imath.kiev.ua/article/?lang=en&article=8127 | [
"# Strongly nonlinear degenerate elliptic equations with discontinuous coefficients. II\n\nBonafede S.\n\nAbstract\n\nWe use energy methods to prove the existence and uniqueness of solutions of the Dirichlet problem for an elliptic nonlinear second-order equation of divergence form with a superlinear term [i.e., g(x, u) = v(x)a(x)|u|^(p-1)u, p > 1] in unbounded domains. Degeneracy in the ellipticity condition is allowed. The coefficients a_{ij}(x, r) may be discontinuous with respect to the variable r.\n\nEnglish version (Springer): Ukrainian Mathematical Journal 49 (1997), no. 12, pp. 1798-1809.\n\nCitation Example: Bonafede S. Strongly nonlinear degenerate elliptic equations with discontinuous coefficients. II // Ukr. Mat. Zh. - 1997. - 49, № 12. - pp. 1601–1609.\n\nFull text"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7779281,"math_prob":0.8453903,"size":757,"snap":"2020-10-2020-16","text_gpt3_token_len":208,"char_repetition_ratio":0.11155379,"word_repetition_ratio":0.094339624,"special_character_ratio":0.26023778,"punctuation_ratio":0.1849315,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9849411,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-18T07:33:02Z\",\"WARC-Record-ID\":\"<urn:uuid:fefb11c2-386d-49ca-826c-76abbb89aa81>\",\"Content-Length\":\"20336\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:448e151d-9b16-4890-ac24-f36599001ff5>\",\"WARC-Concurrent-To\":\"<urn:uuid:1db6e784-9600-4e21-86bc-bcdc3d03951b>\",\"WARC-IP-Address\":\"194.44.31.54\",\"WARC-Target-URI\":\"http://umj.imath.kiev.ua/article/?lang=en&article=8127\",\"WARC-Payload-Digest\":\"sha1:5M2EFB2WDODIMSLCRK2BWKECGNDWPXEU\",\"WARC-Block-Digest\":\"sha1:2MKMVGCZQBO72QRDNRMS4GKCAOOUL5UM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875143635.54_warc_CC-MAIN-20200218055414-20200218085414-00251.warc.gz\"}"} |
http://escholarship.lib.okayama-u.ac.jp/en/journal/mmc/38/--/article/40177 | [
"",
null,
"",
null,
"Studies on the effects of hot spring therapy on blood pH, P(CO(2)) and P(O(2)). Part 1: Examination of measuring instruments and measurement conditions\n\nYahata, Takaaki\nAbstract\nHuman venous blood pH, P(CO(2)) and P(O(2)) were measured with an I. L. Meter. Its reproducibility and response rate were evaluated, as well as the methods of the procedures. 1) Time required to reach the stability of the pH, P(CO(2)) and P(O(2)) reading after the sample injection was 1~1.5 min., 2~2.5 min. and 45~75 sec., respectively. It is recommended in the measurement of these parameters at the same time that the sample injection start with the P(CO(2)) electrode, followed by pH and P(O(2)) in this order, and that readings be recorded in the order of pH, P(O(2)) and P(CO(2)). 2) Ranges of differences between the two values measured in the interval of 3~5 min. were pH : -0.010~0.020 (mean: 0.003), P(CO(2)) : -1.0~1.0mmHg (mean: 0.4) and P(O(2)) : -1.0~0.0mmHg (mean: -0.5). Their 5% rejection limits were 0.021≧x(o)≧-0.015, 2.0≧x(o)≧-1.2mmHg and 0.3≧x(o)≧-1.3mmHg, respectively. 3) The pH, P(CO(2)) and P(O(2)) of the heparinized venous blood stored in ice water showed no significant changes in 60 min. and gave practically the same results as the measurement just after shedding.\nNote\n\nISSN\n0369-7142\nNCID\nAN00032853\nNAID"
] | [
null,
"http://escholarship.lib.okayama-u.ac.jp/files/static/eprints/mmc/header.gif",
null,
"http://escholarship.lib.okayama-u.ac.jp/files/static/eprints/mmc/cover.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9321991,"math_prob":0.9707771,"size":1085,"snap":"2022-05-2022-21","text_gpt3_token_len":371,"char_repetition_ratio":0.1480111,"word_repetition_ratio":0.0,"special_character_ratio":0.33824885,"punctuation_ratio":0.17518248,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9535764,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-27T09:08:03Z\",\"WARC-Record-ID\":\"<urn:uuid:052b9f53-479a-4dfa-aae1-3a66b11b62d0>\",\"Content-Length\":\"15142\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b605c4fe-da2f-47a8-bfe6-3e02ac58e951>\",\"WARC-Concurrent-To\":\"<urn:uuid:421ea5c0-ed95-4c05-8ab1-dd2e2e4df5ea>\",\"WARC-IP-Address\":\"150.46.138.203\",\"WARC-Target-URI\":\"http://escholarship.lib.okayama-u.ac.jp/en/journal/mmc/38/--/article/40177\",\"WARC-Payload-Digest\":\"sha1:IUV22UO3ZWXSTYXW5PNASUEJBO2GNTQR\",\"WARC-Block-Digest\":\"sha1:HO7LWDXUCLSJ3ZM5N2EG54CIA7QO5UAN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305242.48_warc_CC-MAIN-20220127072916-20220127102916-00472.warc.gz\"}"} |
http://www.mas.ncl.ac.uk/~nser/abstracts/graphprod_ja.html | [
"## Generalising some results about right-angled Artin groups to graph products of groups\n\n### ncl\n\n#### Keywords\n\ngraph products, hyperbolic groups\n\n#### Status\n\npublished in Journal of Algebra\n\n#### Abstract\n\nWe prove three results about the graph product $G=\\G(\\Gamma;G_v, v \\in V(\\Gamma))$ of groups $G_v$ over a graph $\\Gamma$. The first result generalises a result of Servatius, Droms and Servatius, proved by them for right-angled Artin groups; we prove a necessary and sufficient condition on a finite graph $\\Gamma$ for the kernel of the map from $G$ to the associated direct product to be free (one part of this result already follows from a result in S. Kim's Ph.D. thesis). The second result generalises a result of Hermiller and Šunić, again from right-angled Artin groups; we prove that, for a graph $\\Gamma$ with finite chromatic number, $G$ has a series in which every factor is a free product of vertex groups. The third result provides an alternative proof of a theorem due to Meier, which provides necessary and sufficient conditions on a finite graph $\\Gamma$ for $G$ to be hyperbolic.\n\nThe preprint is available as gzipped dvi (1 kB), postscript (1 kB) and pdf files.\n\nAlternatively, you can request a copy by e-mailing me."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.82115674,"math_prob":0.90602905,"size":1222,"snap":"2022-05-2022-21","text_gpt3_token_len":312,"char_repetition_ratio":0.13382594,"word_repetition_ratio":0.040816326,"special_character_ratio":0.22504091,"punctuation_ratio":0.094017096,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98294663,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-17T08:39:31Z\",\"WARC-Record-ID\":\"<urn:uuid:3a47f8e9-bb7c-4a6d-81d1-7329401e90bc>\",\"Content-Length\":\"1996\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:18cdb3a1-378a-4743-b1df-a7b42de167d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6c407b5-6223-4639-bd50-973da2495eff>\",\"WARC-IP-Address\":\"128.240.212.127\",\"WARC-Target-URI\":\"http://www.mas.ncl.ac.uk/~nser/abstracts/graphprod_ja.html\",\"WARC-Payload-Digest\":\"sha1:6O2M4SYWFPRTVPZKLS24JNDJEZWBTQOS\",\"WARC-Block-Digest\":\"sha1:PTQFA2GOX66LNFVMINF7MGCCDJR5SN2N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662517018.29_warc_CC-MAIN-20220517063528-20220517093528-00560.warc.gz\"}"} |
https://studysoupquestions.com/questions/business/1165/the-atomic-number-of-nitrogen-is-7 | [
"# The atomic number of nitrogen is 7.\n\n## The atomic mass is 14.01. What can we deduce from this?\n\nThe number of protons in the nucleus of a nitrogen atom is 7.\nThe atomic mass is the mass of an atom of a chemical element expressed in atomic mass units.\nIt is approximately equivalent to the number of protons and neutrons in the atom (the mass number) or to the average number allowing for the relative abundances of different isotopes.\nIn the case of nitrogen the atomic mass is 14.01"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9064852,"math_prob":0.9753784,"size":805,"snap":"2021-31-2021-39","text_gpt3_token_len":197,"char_repetition_ratio":0.12609239,"word_repetition_ratio":0.0,"special_character_ratio":0.24223602,"punctuation_ratio":0.10429448,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98297423,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-27T23:20:47Z\",\"WARC-Record-ID\":\"<urn:uuid:852981a0-5fc8-4391-96ba-d98c518118ba>\",\"Content-Length\":\"54422\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d0d52080-beb1-4fa2-980d-554bc61b4f2e>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a948dec-62b5-4878-84fb-0ff25c7c8299>\",\"WARC-IP-Address\":\"54.189.254.180\",\"WARC-Target-URI\":\"https://studysoupquestions.com/questions/business/1165/the-atomic-number-of-nitrogen-is-7\",\"WARC-Payload-Digest\":\"sha1:ELKMQRXRBGVNFHZHBUIKQUGY52LWPHZD\",\"WARC-Block-Digest\":\"sha1:N532NHXD24YUELQONJYK5IGULUNKGALK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780058552.54_warc_CC-MAIN-20210927211955-20210928001955-00641.warc.gz\"}"} |
https://result.vg/abstract-energy-is-the-most-important-source-in-the-world/ | [
"# ABSTRACT by creating an open system and\n\nABSTRACT\n\nEnergy is the most important source in the world, it is well defined as the ability to do work or heat transfer . The issues surrounding energy are its sources, production, distribution and consumption. The study of fundamental foundations of thermodynamics gives and idea about energy and transfer of heat through calorimetry technique. Calorimetry is a technique used to measure into and out of a matter. In this experiment we will measure heat flow into and out of matter by creating an open system and a close system of ice, liquid water and harm water respectively, and measure the temperature change of the system by placing thermometer into the system for 3 minutes at 15 seconds intervals. If heat energy flows out of the system , the energy transferred to the matter and to the walls of the calorimeter causes a rise in temperature of the matter. The change in temperature can be measured as the difference between the final and initial temperatures of the matter in the calorimeter.\n?\nINTRODUCTION\nThe science of heat flow or energy flow, is called thermodynamics. Heat energy flows spontaneously from hotter body to colder body. The First Law of Thermodynamics also known as law of conservation energy which states that in a process energy cannot be created nor destroyed, it can be transferred between a system and the surroundings and can be converted from one form to another, but the total energy of the universe remains constant . Therefore, all energy transferred between a system and its surroundings is determined as either heat(q) or work(w).\nEnergy is measured in joules, It takes 4.184 J which equals 1 calorie to raise the temperature of one gram of water by 1° C. The quantity of heat needed to raise the temperature of one gram of a substance by 1° C is called specific heat (S). The heat capacity (C) of a substance is the amount of energy needed to raise the temperature of a substance by 1° C. 
Different from specific heat, the heat capacity does not account for the mass of the matter. The quantity of heat needed to melt one gram of a solid is called a heat of fusion(?)\nIn this experiment we will pay our attention on one particular area of thermodynamics, namely calorimetry which is a technique used to measure heat flow into and out of matter. Our system is created in a container called a calorimeter that separates the thermal process we are interested in studying from the rest of the universe. As changes occur to that matter, we can follow the movement of heat from one portion of the matter to another by observing temperature changes. The container we use as a calorimeter should thermally insulate the matter we are interested in studying, it should prevent matter from entering or exiting once our measurement has begun, and it should allow for easy measurement of temperature changes. The heat transfer in a calorimeter may be expressed with the following equation:\nQ = mS?T\nThe heat transferred is Q is measured in joules or calories, the specific heat is S, and ?T is the change in temperature.\n\nWe Will Write a Custom Essay Specifically\nFor You For Only \\$13.90/page!\n\norder now\n\nEQUIPMENT AND REAGENTS\nStyrofoam cups\nIce 100 mL\nCardboard lid with a hole\nWater\nHot plate\nThermometer (-10 to 110 °C)\n50 mL Beaker\nStop Watch\nPROCEDURE\nThe Heat Capacity of the Calorimeter.\n1. Create an open-system of Ice (solid water) and measure the temperature change of the system by placing a thermometer into the system for 4minutes at 15 seconds intervals.\n2. Heat/boil 100ml of liquid water to a range of 60-80°C\n3. Repeat step 1, by creating an open system of hot water.\n4. Create a closed-system of Ice (solid water) and measure the temperature change of the system by placing a thermometer into the system for 4minutes at 15 seconds intervals.\n5. Repeat step 4, by creating a closed-system of warm water.\n6. 
Measure 50 mL of hot water into one Styrofoam cup and 50 mL of ice (solid water) into a separate Styrofoam cup. Create a closed system by pouring the two prepared contents into a new Styrofoam cup, making a total of 100 mL of liquid water and ice. Measure the temperature variation for 4 minutes at 10-second intervals."
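The heat-transfer relation Q = mSΔT used above can be checked numerically. The sketch below is not part of the lab handout; the masses and temperatures are illustrative values for the mixing step, assuming an ideal (perfectly insulating) calorimeter and ignoring the heat of fusion of any remaining ice:

```python
# Heat transfer Q = m * S * dT, with S = 4.184 J/(g*degC) for liquid water.
S_WATER = 4.184  # specific heat of liquid water, J/(g*degC)

def heat_transferred(mass_g, dT):
    """Q = m * S * dT for liquid water."""
    return mass_g * S_WATER * dT

def mixing_temperature(m1, T1, m2, T2):
    """Equilibrium temperature of two water samples mixed in an ideal
    (perfectly insulating) calorimeter: heat lost by one sample equals
    heat gained by the other, so T_f is the mass-weighted average."""
    return (m1 * T1 + m2 * T2) / (m1 + m2)

# Illustrative values: 50 g of water at 70 degC mixed with 50 g at 0 degC.
Tf = mixing_temperature(50, 70.0, 50, 0.0)   # 35.0 degC
Q_hot = heat_transferred(50, 70.0 - Tf)      # heat released by the hot water
Q_cold = heat_transferred(50, Tf - 0.0)      # heat absorbed by the cold water
print(Tf, Q_hot, Q_cold)  # -> 35.0 degC, ~7322 J released, ~7322 J absorbed
```

Because the calorimeter is assumed ideal, the heat released equals the heat absorbed; in the real experiment the Styrofoam walls and cardboard lid only approximate this.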
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9259641,"math_prob":0.9444676,"size":4170,"snap":"2020-45-2020-50","text_gpt3_token_len":906,"char_repetition_ratio":0.14474316,"word_repetition_ratio":0.11404729,"special_character_ratio":0.20935252,"punctuation_ratio":0.07954545,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9525898,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-25T01:50:55Z\",\"WARC-Record-ID\":\"<urn:uuid:65b24160-97ae-4595-9806-84470aefb8ce>\",\"Content-Length\":\"32010\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:98b0273c-072e-4af2-8bc1-64944315749c>\",\"WARC-Concurrent-To\":\"<urn:uuid:03cbe4e8-c1dd-429a-a310-e373f4f64163>\",\"WARC-IP-Address\":\"104.18.50.91\",\"WARC-Target-URI\":\"https://result.vg/abstract-energy-is-the-most-important-source-in-the-world/\",\"WARC-Payload-Digest\":\"sha1:3VLWXJ2PLPABY4C3TGNVKHJZGKABQE5E\",\"WARC-Block-Digest\":\"sha1:RSAEZYP6IAD6OSMJWADGHQ3UD2YRX5E7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107885126.36_warc_CC-MAIN-20201025012538-20201025042538-00460.warc.gz\"}"} |
https://www.lmfdb.org/LocalNumberField/23.13.0.1 | [
"# Properties\n\n Label 23.13.0.1 Base $$\\Q_{23}$$ Degree $$13$$ e $$1$$ f $$13$$ c $$0$$ Galois group $C_{13}$ (as 13T1)\n\n# Related objects\n\n## Defining polynomial\n\n $$x^{13} - 5 x + 3$$\n\n## Invariants\n\n Base field: $\\Q_{23}$ Degree $d$ : $13$ Ramification exponent $e$ : $1$ Residue field degree $f$ : $13$ Discriminant exponent $c$ : $0$ Discriminant root field: $\\Q_{23}$ Root number: $1$ $|\\Gal(K/\\Q_{ 23 })|$: $13$ This field is Galois and abelian over $\\Q_{23}$.\n\n## Intermediate fields\n\n The extension is primitive: there are no intermediate fields between this field and $\\Q_{ 23 }$.\n\n## Unramified/totally ramified tower\n\n Unramified subfield: 23.13.0.1 $\\cong \\Q_{23}(t)$ where $t$ is a root of $$x^{13} - 5 x + 3$$ Relative Eisenstein polynomial: $x - 23 \\in\\Q_{23}(t)[x]$\n\n## Invariants of the Galois closure\n\n Galois group: $C_{13}$ (as 13T1) Inertia group: Trivial Unramified degree: $13$ Tame degree: $1$ Wild slopes: None Galois mean slope: $0$ Galois splitting model: $x^{13} - x^{12} - 60 x^{11} + 27 x^{10} + 1199 x^{9} - 33 x^{8} - 9610 x^{7} - 3352 x^{6} + 33548 x^{5} + 20328 x^{4} - 47723 x^{3} - 34869 x^{2} + 21271 x + 15667$"
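Since the invariants above give e = 1 and f = 13, the displayed polynomial must remain irreducible modulo 23; that is exactly what makes $\Q_{23}(t)$ the unramified degree-13 extension. The following pure-Python check (not part of the LMFDB page) verifies this with Rabin's irreducibility test: because 13 is prime, a degree-13 polynomial over $\mathbb{F}_{23}$ is irreducible if and only if it has no roots in $\mathbb{F}_{23}$ and $x^{23^{13}} \equiv x$ modulo the polynomial.

```python
P = 23   # residue characteristic
N = 13   # degree
# x^13 - 5x + 3, coefficients listed from the constant term upward
F = [3, -5 % P] + [0] * 11 + [1]

def mulmod(a, b):
    """Multiply two polynomials over F_23 and reduce modulo F,
    using the relation x^13 = 5x - 3 (mod F)."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % P
    for k in range(len(prod) - 1, N - 1, -1):   # clear all terms of degree >= 13
        c, prod[k] = prod[k], 0
        prod[k - N + 1] = (prod[k - N + 1] + 5 * c) % P
        prod[k - N] = (prod[k - N] - 3 * c) % P
    return (prod + [0] * N)[:N]

def x_pow(e):
    """Compute x^e modulo F over F_23 by square-and-multiply."""
    result, base = [1] + [0] * (N - 1), [0, 1] + [0] * (N - 2)
    while e:
        if e & 1:
            result = mulmod(result, base)
        base = mulmod(base, base)
        e >>= 1
    return result

# Rabin's test for prime degree N: no roots in F_P, and x^(P^N) = x (mod F).
no_roots = all((pow(a, N, P) - 5 * a + 3) % P for a in range(P))
frobenius_fixed = x_pow(P ** N) == [0, 1] + [0] * (N - 2)
print(no_roots and frobenius_fixed)  # True: reduction mod 23 is irreducible
```

Irreducibility modulo 23 confirms the field has residue degree 13 and ramification index 1, matching the table.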
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.52462626,"math_prob":1.0000068,"size":647,"snap":"2020-10-2020-16","text_gpt3_token_len":239,"char_repetition_ratio":0.13685848,"word_repetition_ratio":0.082474224,"special_character_ratio":0.42658424,"punctuation_ratio":0.13559322,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997437,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-19T18:50:07Z\",\"WARC-Record-ID\":\"<urn:uuid:c57f4fc2-e61e-4ac8-94df-7c300e1eea70>\",\"Content-Length\":\"18114\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:96553556-cea6-4c1d-b9d7-a085fd8d857f>\",\"WARC-Concurrent-To\":\"<urn:uuid:69f640ec-fb3b-4ae1-8745-65cbebdce876>\",\"WARC-IP-Address\":\"35.241.19.59\",\"WARC-Target-URI\":\"https://www.lmfdb.org/LocalNumberField/23.13.0.1\",\"WARC-Payload-Digest\":\"sha1:GXZSNXDW6WC6FCFCH7PDIJMH6R7LNY2C\",\"WARC-Block-Digest\":\"sha1:65AFCQGY7VZUNYM4NVE7G2PDZAN2B2TA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875144167.31_warc_CC-MAIN-20200219184416-20200219214416-00024.warc.gz\"}"} |
https://riptutorial.com/python/example/31585/part-2--parsing-tokenized-input-with-yacc | [
"# Python Language Python Lex-Yacc Part 2: Parsing Tokenized Input with Yacc\n\n## Example\n\nThis section explains how the tokenized input from Part 1 is processed - it is done using Context Free Grammars (CFGs). The grammar must be specified, and the tokens are processed according to the grammar. Under the hood, the parser uses an LALR parser.\n\n``````# Yacc example\n\nimport ply.yacc as yacc\n\n# Get the token map from the lexer. This is required.\nfrom calclex import tokens\n\ndef p_expression_plus(p):\n    'expression : expression PLUS term'\n    p[0] = p[1] + p[3]\n\ndef p_expression_minus(p):\n    'expression : expression MINUS term'\n    p[0] = p[1] - p[3]\n\ndef p_expression_term(p):\n    'expression : term'\n    p[0] = p[1]\n\ndef p_term_times(p):\n    'term : term TIMES factor'\n    p[0] = p[1] * p[3]\n\ndef p_term_div(p):\n    'term : term DIVIDE factor'\n    p[0] = p[1] / p[3]\n\ndef p_term_factor(p):\n    'term : factor'\n    p[0] = p[1]\n\ndef p_factor_num(p):\n    'factor : NUMBER'\n    p[0] = p[1]\n\ndef p_factor_expr(p):\n    'factor : LPAREN expression RPAREN'\n    p[0] = p[2]\n\n# Error rule for syntax errors\ndef p_error(p):\n    print(\"Syntax error in input!\")\n\n# Build the parser\nparser = yacc.yacc()\n\nwhile True:\n    try:\n        s = raw_input('calc > ')\n    except EOFError:\n        break\n    if not s: continue\n    result = parser.parse(s)\n    print(result)\n``````\n\n## Breakdown\n\n• Each grammar rule is defined by a function where the docstring to that function contains the appropriate context-free grammar specification. The statements that make up the function body implement the semantic actions of the rule. Each function accepts a single argument p that is a sequence containing the values of each grammar symbol in the corresponding rule. The values of `p[i]` are mapped to grammar symbols as shown here:\n\n`````` def p_expression_plus(p):\n     'expression : expression PLUS term'\n     #      ^         ^        ^    ^\n     #     p[0]      p[1]     p[2] p[3]\n\n     p[0] = p[1] + p[3]\n``````\n• For tokens, the \"value\" of the corresponding `p[i]` is the same as the `p.value` attribute assigned in the lexer module. 
So, `PLUS` will have the value `+`.\n\n• For non-terminals, the value is determined by whatever is placed in `p[0]`. If nothing is placed, the value is None. Also, `p[-1]` is not the same as `p[3]`, since `p` is not a simple list (`p[-1]` can specify embedded actions (not discussed here)).\n\nNote that the function can have any name, as long as it is preceded by `p_`.\n\n• The `p_error(p)` rule is defined to catch syntax errors (same as `yyerror` in yacc/bison).\n\n• Multiple grammar rules can be combined into a single function, which is a good idea if productions have a similar structure.\n\n`````` def p_binary_operators(p):\n     '''expression : expression PLUS term\n                   | expression MINUS term\n        term       : term TIMES factor\n                   | term DIVIDE factor'''\n     if p[2] == '+':\n         p[0] = p[1] + p[3]\n     elif p[2] == '-':\n         p[0] = p[1] - p[3]\n     elif p[2] == '*':\n         p[0] = p[1] * p[3]\n     elif p[2] == '/':\n         p[0] = p[1] / p[3]\n``````\n• Character literals can be used instead of tokens.\n\n`````` def p_binary_operators(p):\n     '''expression : expression '+' term\n                   | expression '-' term\n        term       : term '*' factor\n                   | term '/' factor'''\n     if p[2] == '+':\n         p[0] = p[1] + p[3]\n     elif p[2] == '-':\n         p[0] = p[1] - p[3]\n     elif p[2] == '*':\n         p[0] = p[1] * p[3]\n     elif p[2] == '/':\n         p[0] = p[1] / p[3]\n``````\n\nOf course, the literals must be specified in the lexer module.\n\n• Empty productions have the form `'''symbol : '''`\n\n• To explicitly set the start symbol, use `start = 'foo'`, where `foo` is some non-terminal.\n\n• Setting precedence and associativity can be done using the precedence variable.\n\n`````` precedence = (\n     ('nonassoc', 'LESSTHAN', 'GREATERTHAN'),  # Nonassociative operators\n     ('left', 'PLUS', 'MINUS'),\n     ('left', 'TIMES', 'DIVIDE'),\n     ('right', 'UMINUS'),  # Unary minus operator\n )\n``````\n\nTokens are ordered from lowest to highest precedence. `nonassoc` means that those tokens do not associate. This means that something like `a < b < c` is illegal whereas `a < b` is still legal.\n\n• `parser.out` is a debugging file that is created when the yacc program is executed for the first time. 
Whenever a shift/reduce conflict occurs, the parser always shifts.",
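To sanity-check what the grammar above computes without installing PLY, here is a small hand-rolled recursive-descent evaluator for the same expression grammar (an independent sketch, not part of PLY or this tutorial). It encodes the same precedence the grammar does: TIMES/DIVIDE bind tighter than PLUS/MINUS, and parentheses group.

```python
import re

def evaluate(text):
    """Evaluate +, -, *, / and parentheses with the same grammar as the
    yacc example: expression -> term {(+|-) term}, term -> factor {(*|/) factor}."""
    tokens = re.findall(r'\d+|[()+\-*/]', text)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor():
        if peek() == '(':
            take()                  # LPAREN
            value = expression()
            take()                  # RPAREN
            return value
        return int(take())          # NUMBER

    def term():
        value = factor()
        while peek() in ('*', '/'):
            if take() == '*':
                value *= factor()
            else:
                value //= factor()  # integer division, for simplicity
        return value

    def expression():
        value = term()
        while peek() in ('+', '-'):
            if take() == '+':
                value += term()
            else:
                value -= term()
        return value

    return expression()

print(evaluate('2 + 3 * 4'), evaluate('(2 + 3) * 4'))  # 14 20
```

Running the actual PLY parser built from the grammar above on the same inputs should yield the same values via the `p[0]` assignments.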
null,
""
] | [
null,
"https://riptutorial.com/Images/icon-pdf-2.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6670539,"math_prob":0.9243867,"size":3790,"snap":"2021-43-2021-49","text_gpt3_token_len":1084,"char_repetition_ratio":0.15795034,"word_repetition_ratio":0.12191358,"special_character_ratio":0.31715038,"punctuation_ratio":0.12418301,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99779326,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T16:12:35Z\",\"WARC-Record-ID\":\"<urn:uuid:8aaf3815-2fca-4839-b0ff-25df2f21f1f5>\",\"Content-Length\":\"93975\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:43a2f8c9-c401-4209-97f1-6d1e59e709e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:8ad83f3a-5dd2-4022-8012-3b4d16cfeb4d>\",\"WARC-IP-Address\":\"40.83.160.29\",\"WARC-Target-URI\":\"https://riptutorial.com/python/example/31585/part-2--parsing-tokenized-input-with-yacc\",\"WARC-Payload-Digest\":\"sha1:QXOOKPNLC3PPOYNU4AG6VGURCRKVBUWA\",\"WARC-Block-Digest\":\"sha1:BO3GI4BPOE5UY5HJ3OGDRE3O2DBG2JKA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588341.58_warc_CC-MAIN-20211028131628-20211028161628-00261.warc.gz\"}"} |
https://datasciencetexts.com/subjects/elementary_algorithms.html | [
"# Elementary Algorithms\n\n## Recommended Prerequisites\n\n• None\n\n### Last Updated: 7/18/2019\n\nAn algorithm is a well-defined procedure for accomplishing a well-defined task. Since computers require well-defined procedures to accomplish tasks, the study of algorithms is an essential component of both computer science and data science. The study of algorithms chiefly concerns proving that a particular algorithm will accomplish a particular task and determining the resources the algorithm will require to do so. Elementary algorithm texts describe the basic techniques used for these two goals, and they also catalog a number of algorithms that are useful in common situations. Since algorithms must often exploit and manipulate data, elementary algorithm textbooks also describe many efficient options for structuring data. Data scientists are often tasked with problems that require large amounts of computational resources, so it is important that they understand the tradeoffs of different algorithms.\n\n# Recommended Books\n\n1. ## Introduction To Algorithms\n\n### Key Topics\n\n• Amortized Analysis\n• Approximation Algorithms\n• Asymptotic Analysis\n• Computational Geometry Algorithms\n• Data Structures\n• Divide and Conquer Strategies\n• Dynamic Programming\n• Fast Fourier Transform\n• Graphs and Trees\n• Greedy Algorithms\n• Linear Programming\n• Matrix Operations\n• NP-Completeness\n• Number-Theoretic Algorithms\n• Probabilistic Analysis\n• Randomized Algorithms\n• Recursion\n• Sorting and Searching\n• String Matching\n\n### Description\n\nCLRS is the canonical elementary algorithms textbook. This is partly because it is an excellent algorithms text, and partly because so many people use it that otherwise obscure examples in it have become canonical. 
CLRS will teach you how to prove an algorithm is correct and show you how the resources an algorithm requires change as the problem size grows; it introduces the standard data structures and contains many well-known algorithms as examples. It also offers a lot of information about randomized algorithms and their analysis. CLRS does have a few potential drawbacks. Some of the exercises are quite difficult for the level of the text, and reliable solutions to the difficult ones are hard to find if you get stuck. Additionally, all the algorithms in the textbook are written in pseudocode, so you're responsible for your own implementation in a real programming language if you wish to experiment. Overall, though, this is an outstanding book for both learning and reference.\n\n2. ## The Algorithm Design Manual\n\n### Key Features\n\n• In-text exercises\n• C++ algorithm implementations\n• Errata\n\n### Key Topics\n\n• Approximation Algorithms\n• Asymptotic Analysis\n• Combinatorial Algorithms\n• Computational Geometry Algorithms\n• Data Structures\n• Divide and Conquer Strategies\n• Dynamic Programming\n• Graphs and Trees\n• Greedy Algorithms\n• Numerical Algorithms\n• Recursion\n• Sorting and Searching\n• String Algorithms\n\n### Description\n\nThe Algorithm Design Manual is somewhat less well-known than CLRS, but it has a lot going for it as an algorithms text. It is relatively inexpensive, offers a lot of algorithms that are absent in other books, and provides C++ implementations for many of the algorithms. We consider The Algorithm Design Manual a great companion to CLRS, but it can stand alone as an algorithms text, especially for a first pass. CLRS offers more analytic tools, and does a better job with randomized algorithms, but any user of algorithms would benefit from reading this book."
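As a small illustration of the two concerns named above (arguing that an algorithm is correct, and bounding the resources it uses), here is a binary search annotated with its loop invariant and cost. This is a standard textbook example, not an excerpt from either book:

```python
def binary_search(a, target):
    """Return an index i with a[i] == target in sorted list a, or -1.

    Correctness: the loop maintains the invariant that if target occurs
    in a, it lies in the half-open slice a[lo:hi]. Resources: each
    iteration halves hi - lo, so the loop runs O(log n) times with O(1)
    work per iteration.
    """
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1      # target (if present) is in a[mid+1:hi]
        else:
            hi = mid          # target (if present) is in a[lo:mid]
    return -1                 # invariant + empty range => target is absent

print(binary_search([1, 3, 5, 8, 13], 8))   # 3
print(binary_search([1, 3, 5, 8, 13], 7))   # -1
```

The invariant argument is exactly the style of correctness proof these texts teach, and the halving argument is the standard way they derive the O(log n) bound.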
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8812206,"math_prob":0.8620833,"size":3572,"snap":"2019-26-2019-30","text_gpt3_token_len":699,"char_repetition_ratio":0.15610987,"word_repetition_ratio":0.0858209,"special_character_ratio":0.1774916,"punctuation_ratio":0.07899462,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.973041,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-21T18:13:31Z\",\"WARC-Record-ID\":\"<urn:uuid:85c7356f-f115-4293-8dc3-718cac61889d>\",\"Content-Length\":\"8886\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec3f6089-c7dd-4cfc-a7b9-c0d42ee13430>\",\"WARC-Concurrent-To\":\"<urn:uuid:0390ef41-e712-489a-bb2d-88aaeda863f1>\",\"WARC-IP-Address\":\"99.84.216.63\",\"WARC-Target-URI\":\"https://datasciencetexts.com/subjects/elementary_algorithms.html\",\"WARC-Payload-Digest\":\"sha1:UULH4KK2QKZGUEPU3VGFQS42ZVRTC76B\",\"WARC-Block-Digest\":\"sha1:NOEITP7AG4UPUMN3WVPKCRIO26FZNATG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195527089.77_warc_CC-MAIN-20190721164644-20190721190644-00298.warc.gz\"}"} |
https://www.saildart.org/COMMEN%5B1,LMM%5D1 | [
"perm filename COMMEN[1,LMM]1 blob sn#029047 filedate 1973-03-12 generic text, type T, neo UTF8\n``` (PROGN (LISPXPRIN1 (QUOTE \"FILE CREATED \")\nT)\n(LISPXPRIN1 (QUOTE \"12-MAR-73 03:01:06\")\nT)\n(LISPXTERPRI T))\n(LISPXPRINT (QUOTE COMMENTVARS)\nT)\n(RPAQQ COMMENTVARS ((FNS COLLECT* FIL*)\n(DEFINEQ\n\n(COLLECT*\n[APPLY (QUOTE EDITF)\n(LIST FN (QUOTE (LP F *(E (SETQ COMMENTS (CONS (##)\nT]\n(CONS FN (CONS (ARGLIST FN)\n\n(FIL*\n[LAMBDA (FIL)\n(PROG ([FILCOMMENTS (CONS (QUOTE :)\n(MAPCAR (FILEFNSLST FIL)\n(FUNCTION COLLECT*]\n(AL (ASSOC FIL COMMENTS)))\n(COND\n(AL (RPLACD AL FILCOMMENTS))\n(T (SETQ COMMENTS (CONS (CONS FIL FILCOMMENTS)\n)\n[RPAQQ COMMENTS ([CYCLIC : ((VALENCE (X)\n(* This function finds the\nVALENCE of an atom or\nSTRUCTURE or STRUCFORM))\n(FVPARTITION1 (N VL S)\n(* This function is a\nsub-function of\nFVPARTITIONS - I'm not too\nsure what it does offhand))\n(FVPART1 (N MAXSUM MAXOCCUR)\n(* Again , i'm not too sure what\nthis function does))\n(MINLOOPS (VALENCELIST)\n(* This function computes the\nminimum number of loops that\nany STRUCTURE with the given\nVALENCELIST must have - See\ncycgen paper for derivation of\nformula))\n(MAXLOOPS (VALENCELIST)\n(* This function computes the\nmaximum number of loops that\nany STRUCTURE with the given\nVALENCELIST may have - See\ncycgen paper for derivation of\nformula))\n(SUPERATOMPARTITIONS (CL U)\n(* This function finds\nall partitions of\nCL and unsaturation\nU into superatom\nparts and remaining\natoms, according to\nthe constraints\ngiven in the cycgen\nsuperatom parts -\nThe value is a list\nof\nSUPERATOMPARTITION\nrecords))\n(MAXUNSATL (PC U)\n(* This function, i think, takes\na composition list of\ncomposition lists (pc)\nand an unsaturation (U)\nand returns a list of the\nmaximum unsaturation that may\nbe assigned to each\nindividual part in pc such\nthat the final structures\nwill each have correct TOTAL\nunsaturation and a free\nVALENCE of at least one)\n(* Note U is either NIL\n(normal)\nor it 
is equal to the\nunsaturation IN the case\nWHERE remats is NIL and there\nis only one part here))\n(COMPUTEFV (U CL)\n(* This function computes the\nfree VALENCE of a composition\nand saturation - I.e. Any\nSTRUCTURE with the given\ncomposition and unsaturation\nwill have the resulting\nnumber of free valences))\n(ROWS (LL)\n(* Ll is a list of lists - If one\nenvisions it as a matrix, this\nfunction computes the transpose))\n(BIVALENTPARTITIONS (VL)\n(* This function takes\na valence list\n(starting with\nbivalents)\nand partitions\n(CAR VL)\ninto\n(number\nof EDGES IN a\nSTRUCTURE built on\n(CDR VL))\nparts))\n(TRIMZEROS (L)\n(* This function takes a list of\nnumbers and returns the list\nwith trailing zeros removed))\n(TD (VL J)\n(* This function takes a VALENCE list\nstarting with j-valents and returns\nthe TOTAL VALENCE))\n(M2/2 (N)\n(* Silly function - Computes\n(n/2)\n-1))\n(LOOPPARTITIONS1 (P VL J)\n(* SUBFUNCTION OF\nLOOPPARTITIONS - DONT\nREMEMBER WHAT IT DOES))\n(JLIST (LL N)\n(* AGAIN, I DON'T REMEMBER WHAT THIS\nONE DOES))\n(LPROWS (LPP VL)\n(* AGAIN, I DON'T REMEMBER WHAT\nTHIS ONE DOES))\n(LOOPPARTITIONS (P VL)\n(* THIS FUNCTION FINDS ALL\nLOOP PARTITIONS - I'M\nNOT SURE HOW, THOUGH))\n(CLPARTLP1 (CL ROW N)\n(* Again, i don't remember what\nthis one does))\n(STRUCTURESWITHATOMS (CLL STRUC)\n(* CLL is a list of\ncomposition lists\nthe first CL\ncontains bivalent\natoms, the second,\ntrivalent atoms,\nand so forth -\nStruc is a\nSTRUCTURE -\nSTRUCTURESWITHATOMS\nfinds all ways of\nattaching the given\natoms to the\nSTRUCTURE by\nlabelling))\n(NUMPARTITIONS (N NUMPARTS MINPART MAXPART)\n(* This function finds all\npartitions of the number\nN into numparts parts,\nWHERE each part is\ngreater or equal to\nminpart and less than or\nequal to maxpart - The\nresult is a list of\npartitions, WHERE a\npartition here is a list\nof numbers, the sum of\nwhich is N))\n(NUMPARTITIONS* (U MN MAXIMA OCCURLIST)\n(* Again, i don't remember\nwhat this one 
does))\n(FVPARTITIONS (FV VL)\n(* This function finds all\nways of partitioning free\nvalences fv among the\n\"ATOMS\" of vl\n(vl is a VALENCE list)\naccording to appropriate\nconstraints]\n[STRGEN : ((STARTUP NIL (* This function does all of the\nthings necessary to LOAD the\nSTRUCTURE generator]\n[CL : ((CLDIFF (CL1 CL2)\n(* This function computes the DIFFERENCE of\ntwo composition lists - Zero terms are\neliminated))\n(CLCOUNT (CL)\n(* This function computes the number of\nelements IN a composition list))\n[CLPARTS (CL PARTSIZE)\n(* This function finds all SUB compositions of\nthe composition list cl1 which are of SIZE\nparsizze, and returns a list of the\npossibilities - I.e. (CLPARTS\n'\n((A . 3)\n(B . 2))\n2)\nreturns\n(((a . 2))\n,\n((a . 1)\n(b . 1))\n,\n((b . 2]\n(CLPARTITIONSN (CL N MINPARTSIZE MAXPARTSIZE)\n(* This function finds all partitions of\nCL into N parts WHERE each part has a\nCLCOUNT of at least MINPARTSIZE and at\nmost MAXPARTSIZE))\n(CLPARTITIONS (CL PARTSIZES)\n(* PARTSIZES IS a list of numbers - This\nfunction finds all partitions of CL\ninto PARTS WHERE each PART IS of the\ncorresponding SIZE IN PARTSIZES - The\nsum of PARTSIZES must be equal to the\nCLCOUNT of CL or ELSE the value will be\nNIL - The value IS a list of\npartitions; a partition IS a list of\ncomposition lists))\n[CLCREATE (L)\n(* This function takes a list which may have\nduplicates, and returns a composition list\nwhich corresponds to it - I.e.\n(CLCREATE '\n(A A A B B C))\nreturns\n((a . 3)\n(b . 2)\n(C . 
1]\n(CLINSERT (ITEM CL)\n(* This function returns the composition list\nCL with \"ITEM\" inserted))\n(CL=PARTS (CL NPARTS PARTSIZE)\n(* This function finds all partitions of CL\ninto NPARTS parts, where every part is of\nsize PARTSIZE - NPARTS*PARTSIZE must be\nequal to the CLCOUNT of CL))\n(CLBYVALENCE (CL)\n(* CL must be a composition list of things\nwith a VALENCE - This function returns a\nlist of composition lists; the first CL\ncontains those things with VALENCE 2 -\nThe second those with VALENCE 3, and so\non))\n(CLPARTITIONSL (CL LL)\n(* Damn if i can remember what this one\ndoes))\n(CLEXPAND (CL)\n(* This function is the inverse of CLCREATE -\nIt takes a composition list and returns a\nlist with the appropriate number of copies\nof each item IN the composition list - I.e.\n(CLEXPAND ' ((A . 3)\n(B . 2)))\ngives\n(a a a b b]\n(* This function takes a composition\nlist of lists, and returns all\npossible selections of items from\nthose lists; e.g. doing GROUPRADS\non (((a b C D e) . 3)\n((F g H i) . 
2))\nwill return all lists in which\nthree elements come from\n(a b C D e)\nand 2 elements come from\n(F g H i)\n- Duplication is allowed; i.e.\n(a a a F F)\nwill be among the resulting lists))\n(* Subfunction of GROUPRADS))\n(FIX+ (X)\n(* Rounds a number upward))\n(GROUPBY (FN L)\n(* FN is a functional argument, l is a\nlist - This function groups l by the\nvalue of FN applied to its elements -\nIt returns a list of groups - The CDR\nof a GROUP contains elements of l which\nall have the same value of FN - The CAR\nis that value - Can be used to GROUP a\nlist of atom names by their VALENCE,\nfor EXAMPLE))\n[*CARLIST (L)\n(* (MAPCAR L (FUNCTION CAR]\n[CDRLIST (L)\n(* (MAPCAR L (FUNCTION CDR]\n[LCARLIST (L)\n(* (MAPCAR L (FUNCTION CARLIST]\n[LCDRLIST (L)\n(* (MAPCAR L (FUNCTION CDRLIST]\n(DELETE (I L)\n(* Returns l with the first instance of i\ndeleted))\n(DIFF (L1 L2)\n(* Returns the list of elements of l1 which\nare not elements of l2))\n(ORDPAIR (X1 X2)\n(* Returns either (CONS X1 X2)\nor\n(CONS X2 X1)\ndepending on whether x1<x2 or not -\nUses a generalized ordering in which\nanumbers are in order numerically, and\nless than atoms which are ordered\nalphabetically, and are less than lists\nwhich are ordered first by their CAR\nand then by their CDR))\n(MAXREST (VL J)\n(* Silly function used somewhere - Again\ndamn if i know))\n(MAX (X Y)\n(* MAX of X and Y))\n(MIN (X Y)\n(* MIN of X and Y - Uses INTEGER arithmetic))\n(TWICE (X)\n(* X*2))\n(LEQ (X Y)\n(* Generalized ordering function - See ORDPAIR)\n)\n(*PLUS (L)\n(* L is a list of integers - *PLUS computes\ntheir sum))\n(LMASSOC (X Y Z)\n(* This is similar to the ASSOC on the 360\n- If there is an element of Y whose CAR\nis equal to Y, returns its CDR -\nOtherwise returns z))\n(*MAX (L)\n(* L is a list of integers - *MAX returns\ntheir maximum]\nSTOP\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.70753455,"math_prob":0.96819806,"size":8171,"snap":"2022-40-2023-06","text_gpt3_token_len":2435,"char_repetition_ratio":0.1586874,"word_repetition_ratio":0.078242965,"special_character_ratio":0.27254927,"punctuation_ratio":0.046657383,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9877182,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-01T22:08:42Z\",\"WARC-Record-ID\":\"<urn:uuid:75e6148e-dd40-4378-b238-391bfb94f5aa>\",\"Content-Length\":\"20107\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:88300b96-b032-459c-a6ac-06a7f6472c00>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe5f70f3-4e1d-4055-b6e0-c81d3cde3c76>\",\"WARC-IP-Address\":\"50.247.119.156\",\"WARC-Target-URI\":\"https://www.saildart.org/COMMEN%5B1,LMM%5D1\",\"WARC-Payload-Digest\":\"sha1:BF3QWCL4IMRVQY427KA6SVEYUEFBQU6R\",\"WARC-Block-Digest\":\"sha1:OJNOWK3C5NNPDJVFNDWRYLYT3CAIGBQL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030336921.76_warc_CC-MAIN-20221001195125-20221001225125-00275.warc.gz\"}"} |
https://link.springer.com/article/10.1007%2Fs11425-018-9465-8 | [
"Science China Mathematics, Volume 62, Issue 11, pp 2057–2072\n\n# Global regularity of optimal mappings in non-convex domains\n\nArticles\n\n## Abstract\n\nIn this paper, we establish a global regularity result for the optimal transport problem with the quadratic cost, where the domains may not be convex. This result is obtained by a perturbation argument, using a recent global regularity result for optimal transportation in convex domains by the authors.\n\n## Keywords\n\nMonge-Ampère equation optimal transportation global regularity\n\n## MSC(2010)\n\n35J96 35J25 35B65\n\n## References\n\n1.\nBakelman I. Convex analysis and nonlinear geometric elliptic equations. Berlin: Springer-Verlag, 1994\n2.\nBrenier Y. Polar factorization and monotone rearrangement of vector-valued functions. Comm Pure Appl Math, 1991, 44: 375–417\n3.\nCaffarelli L. Interior W^{2,p} estimates for solutions of Monge-Ampère equations. Ann of Math (2), 1990, 131: 135–150\n4.\nCaffarelli L. The regularity of mappings with a convex potential. J Amer Math Soc, 1992, 5: 99–104\n5.\nCaffarelli L. Allocation maps with general cost functions. In: Partial Differential Equations and Applications. Lecture Notes in Pure and Applied Mathematics, 177. New York: Dekker, 1996, 29–35\n6.\nCaffarelli L. Boundary regularity of maps with convex potentials, II. Ann of Math (2), 1996, 144: 453–496\n7.\nCaffarelli L, González M, Nguyen T. A perturbation argument for a Monge-Ampère equation with periodic data. Arch Ration Mech Anal, 2014, 212: 359–414\n8.\nCaffarelli L, McCann R. Free boundaries in optimal transport and Monge-Ampère obstacle problems. Ann of Math (2), 2010, 171: 673–730\n9.\nChen S. Boundary C^{1,α} regularity of an optimal transport problem with cost close to −x·y. SIAM J Math Anal, 2015, 47: 2689–2698\n10.\nChen S. Regularity of free boundaries in optimal transportation. Preprint\n11.\nChen S, Figalli A. Boundary ε-regularity in optimal transportation. 
Adv Math, 2015, 273: 540–567\n12.\nChen S, Figalli A. Partial W^{2,p} regularity for optimal transport maps. J Funct Anal, 2017, 272: 4588–4605\n13.\nChen S, Liu J, Wang X-J. Global regularity for the Monge-Ampère equation with natural boundary condition. ArXiv:1802.07518\n14.\nChen S, Liu J, Wang X-J. Boundary regularity for the second boundary-value problem of Monge-Ampère equations in dimension two. ArXiv:1806.09482\n15.\nDe Philippis G, Figalli A. Partial regularity for optimal transport maps. Publ Math de l'IHÉS, 2015, 121: 81–112\n16.\nDe Philippis G, Figalli A, Savin O. A note on interior W^{2,1+ε} estimates for the Monge-Ampère equation. Math Ann, 2013, 357: 11–22\n17.\nDelanoë Ph. Classical solvability in dimension two of the second boundary value problem associated with the Monge-Ampère operator. Ann Inst Henri Poincaré Anal Non Linéaire, 1991, 8: 443–457\n18.\nEvans L. Partial differential equations and Monge-Kantorovich mass transfer. In: Current Development in Mathematics. Boston: International Press, 1999, 65–126\n19.\nFigalli A. The Monge-Ampère Equation and its Applications. Zürich Lectures in Advanced Mathematics. Zürich: Euro Math Soc, 2017\n20.\nGilbarg D, Trudinger N. Elliptic Partial Differential Equations of Second Order. Berlin: Springer-Verlag, 1983\n21.\nGutiérrez C. The Monge-Ampère Equation. Progress in Nonlinear Differential Equations and their Applications, 44. Boston: Birkhäuser, 2001\n22.\nJian H Y, Wang X-J. Continuity estimates for the Monge-Ampère equation. SIAM J Math Anal, 2007, 39: 608–626\n23.\nKitagawa J, McCann R. Free discontinuities in optimal transport. Arch Ration Mech Anal, 2019, in press\n24.\nLions P-L, Trudinger N, Urbas J. The Neumann problem for equations of Monge-Ampère type. Comm Pure Appl Math, 1986, 39: 539–563\n25.\nMa X N, Trudinger N, Wang X-J. Regularity of potential functions of the optimal transportation problem. 
Arch Ration Mech Anal, 2005, 177: 151–183\n26.\nMonge G. Mémoire sur la théorie des déblais et des remblais. In: Histoire de l'Académie Royale des Sciences de Paris, avec les Mémoires de Mathématique et de Physique pour la même année, 1781, 666–704\n27.\nPogorelov A. Monge-Ampère Equations of Elliptic Type. Groningen: Noordhoff, 1964\n28.\nSavin O. Pointwise C^{2,α} estimates at the boundary for the Monge-Ampère equation. J Amer Math Soc, 2013, 26: 63–99\n29.\nSavin O. Global W^{2,p} estimates for the Monge-Ampère equation. Proc Amer Math Soc, 2013, 141: 3573–3578\n30.\nTrudinger N, Wang X-J. Boundary regularity of the Monge-Ampère and affine maximal surface equations. Ann of Math (2), 2008, 167: 993–1028\n31.\nTrudinger N, Wang X-J. The Monge-Ampère equation and its geometric applications. In: Handbook of Geometric Analysis, vol. 1. Adv Lect Math (ALM), vol. 7. Somerville: International Press, 2008, 467–524\n32.\nUrbas J. On the second boundary value problem of Monge-Ampère type. J Reine Angew Math, 1997, 487: 115–124\n33.\nVillani C. Optimal Transport. Old and New. Grundlehren Math Wiss, vol. 338. Berlin: Springer-Verlag, 2006\n34.\nWang X-J, Wu Y T. A new proof for the regularity of Monge-Ampère type equations. J Differential Geom, 2019, in press\n35.\nWolfson J. Minimal Lagrangian diffeomorphisms and the Monge-Ampère equation. J Differential Geom, 1997, 46: 335–373",
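For context (standard background from the optimal transport literature, e.g. Brenier's theorem in reference 2 and the second boundary value problem in references 17 and 32, not a statement taken from this paper): with quadratic cost the optimal map is the gradient of a convex potential u, and the regularity question studied here reduces to a Monge-Ampère equation with the natural boundary condition:

```latex
% u convex; T = \nabla u transports the density f on \Omega to g on \Omega^*.
\det D^2 u(x) = \frac{f(x)}{g(\nabla u(x))} \quad \text{in } \Omega,
\qquad \nabla u(\Omega) = \Omega^* .
```

The second relation, prescribing the image of the gradient map rather than boundary values of u, is what makes boundary regularity in non-convex domains delicate.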
null,
""
] | [
null,
"https://link.springer.com/track/controlled/article/granted/10.1007/s11425-018-9465-8",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.54582095,"math_prob":0.89371204,"size":6298,"snap":"2019-43-2019-47","text_gpt3_token_len":1830,"char_repetition_ratio":0.20734033,"word_repetition_ratio":0.01183432,"special_character_ratio":0.26659256,"punctuation_ratio":0.21750212,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9897956,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-11T22:27:47Z\",\"WARC-Record-ID\":\"<urn:uuid:ad1a4abf-900b-4505-892c-5966f882a3fc>\",\"Content-Length\":\"89480\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d42f35c0-3aeb-4fff-804a-324ea3cf3588>\",\"WARC-Concurrent-To\":\"<urn:uuid:d5aa94cc-2123-4a92-b10b-153fcba49d31>\",\"WARC-IP-Address\":\"151.101.200.95\",\"WARC-Target-URI\":\"https://link.springer.com/article/10.1007%2Fs11425-018-9465-8\",\"WARC-Payload-Digest\":\"sha1:LLMSR5IHXWSN4HWF4A43YXVWHJDNR5L6\",\"WARC-Block-Digest\":\"sha1:AFK2NG6FBOXRUAMTCEV7CFWDVIK3IK6S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496664439.7_warc_CC-MAIN-20191111214811-20191112002811-00265.warc.gz\"}"} |
https://la.mathworks.com/help/stats/bootci.html | [
"# bootci\n\nBootstrap confidence interval\n\n## Syntax\n\n``ci = bootci(nboot,bootfun,d)``\n``ci = bootci(nboot,bootfun,d1,...,dN)``\n``ci = bootci(nboot,{bootfun,d},Name,Value)``\n``ci = bootci(nboot,{bootfun,d1,...,dN},Name,Value)``\n``[ci,bootstat] = bootci(___)``\n\n## Description\n\nexample\n\n````ci = bootci(nboot,bootfun,d)` computes a 95% bootstrap confidence interval for each statistic computed by the function `bootfun`. The `bootci` function uses `nboot` bootstrap samples in its computation, and creates each bootstrap sample by sampling with replacement from the rows of `d`.```\n\nexample\n\n````ci = bootci(nboot,bootfun,d1,...,dN)` creates each bootstrap sample by sampling with replacement from the rows of the nonscalar data arguments in `d1,...,dN`. These nonscalar arguments must have the same number of rows. The `bootci` function passes the samples of nonscalar data and the unchanged scalar data arguments in `d1,...,dN` to `bootfun`.```\n\nexample\n\n````ci = bootci(nboot,{bootfun,d},Name,Value)` specifies options using one or more name-value arguments. For example, you can change the type of confidence interval by specifying the `'Type'` name-value argument.Note that you must pass the `bootfun` and `d` arguments to `bootci` as a single cell array.```\n\nexample\n\n````ci = bootci(nboot,{bootfun,d1,...,dN},Name,Value)` specifies options using one or more name-value arguments. For example, you can change the significance level of the confidence interval by specifying the `'Alpha'` name-value argument.Note that you must pass the `bootfun` and `d1,...,dN` arguments to `bootci` as a single cell array.```\n\nexample\n\n````[ci,bootstat] = bootci(___)` also returns the bootstrapped statistic computed for each of the `nboot` bootstrap replicate samples, using any of the input argument combinations in the previous syntaxes. 
Each row of `bootstat` contains the results of applying `bootfun` to one bootstrap sample.```\n\n## Examples\n\ncollapse all\n\nCompute the confidence interval for the capability index in statistical process control.\n\nGenerate 30 random numbers from the normal distribution with mean 1 and standard deviation 1.\n\n```rng('default') % For reproducibility y = normrnd(1,1,30,1);```\n\nSpecify the lower and upper specification limits of the process. Define the capability index.\n\n```LSL = -3; USL = 3; capable = @(x)(USL-LSL)./(6*std(x));```\n\nCompute the 95% confidence interval for the capability index by using 2000 bootstrap samples. By default, `bootci` uses the bias corrected and accelerated percentile method to construct the confidence interval.\n\n`ci = bootci(2000,capable,y)`\n```ci = 2×1 0.5937 0.9900 ```\n\nCompute the studentized confidence interval for the capability index.\n\n`sci = bootci(2000,{capable,y},'Type','student')`\n```sci = 2×1 0.5193 0.9930 ```\n\nCompute bootstrap confidence intervals for the coefficients of a nonlinear regression model. The technique used in this example involves bootstrapping the predictor and response values, and assumes that the predictor variable is random. For a technique that assumes the predictor variable is fixed and bootstraps the residuals, see Bootstrap Confidence Intervals for Linear Regression Model Coefficients.\n\nNote: This example uses `nlinfit`, which is useful when you only need the coefficient estimates or residuals of a nonlinear regression model and you need to repeat fitting a model multiple times, as in the case of bootstrapping. If you need to investigate a fitted regression model further, create a nonlinear regression model object by using `fitnlm`. 
You can create confidence intervals for the coefficients of the resulting model by using the `coefCI` object function, although this function does not use bootstrapping.\n\nGenerate data from the nonlinear regression model $y = b_1 + b_2 \\exp(-b_3 x) + \\epsilon$, where $b_1 = 1$, $b_2 = 3$, and $b_3 = 2$ are coefficients; the predictor variable x is exponentially distributed with mean 2; and the error term $\\epsilon$ is normally distributed with mean 0 and standard deviation 0.1.\n\n```modelfun = @(b,x)(b(1)+b(2)*exp(-b(3)*x)); rng('default') % For reproducibility b = [1;3;2]; x = exprnd(2,100,1); y = modelfun(b,x) + normrnd(0,0.1,100,1);```\n\nCreate a function handle for the nonlinear regression model that uses the initial values in `beta0`.\n\n```beta0 = [2;2;2]; beta = @(predictor,response)nlinfit(predictor,response,modelfun,beta0)```\n```beta = function_handle with value: @(predictor,response)nlinfit(predictor,response,modelfun,beta0) ```\n\nCompute the 95% bootstrap confidence intervals for the coefficients of the nonlinear regression model. Create the bootstrap samples from the generated data `x` and `y`.\n\n`ci = bootci(1000,beta,x,y)`\n```ci = 2×3 0.9821 2.9552 2.0180 1.0410 3.1623 2.2695 ```\n\nThe first two confidence intervals include the true coefficient values $b_1 = 1$ and $b_2 = 3$, respectively. However, the third confidence interval does not include the true coefficient value $b_3 = 2$.\n\nNow compute the 99% bootstrap confidence intervals for the model coefficients.\n\n`newci = bootci(1000,{beta,x,y},'Alpha',0.01)`\n```newci = 2×3 0.9730 2.9112 1.9562 1.0469 3.1876 2.3133 ```\n\nAll three confidence intervals include the true coefficient values.\n\nCompute bootstrap confidence intervals for the coefficients of a linear regression model. The technique used in this example involves bootstrapping the residuals and assumes that the predictor variable is fixed. 
For a technique that assumes the predictor variable is random and bootstraps the predictor and response values, see Bootstrap Confidence Intervals for Nonlinear Regression Model Coefficients.\n\nNote: This example uses `regress`, which is useful when you only need the coefficient estimates or residuals of a regression model and you need to repeat fitting a model multiple times, as in the case of bootstrapping. If you need to investigate a fitted regression model further, create a linear regression model object by using `fitlm`. You can create confidence intervals for the coefficients of the resulting model by using the `coefCI` object function, although this function does not use bootstrapping.\n\n`load hald`\n\nPerform a linear regression and compute the residuals.\n\n```x = [ones(size(heat)),ingredients]; y = heat; b = regress(y,x); yfit = x*b; resid = y - yfit;```\n\nCompute the 95% bootstrap confidence intervals for the coefficients of the linear regression model. Create the bootstrap samples from the residuals. Use normal approximated intervals with bootstrapped bias and standard error by specifying `'Type','normal'`. You cannot use the default confidence interval type in this case.\n\n```ci = bootci(1000,{@(bootr)regress(yfit+bootr,x),resid}, ... 'Type','normal')```\n```ci = 2×5 -47.7130 0.3916 -0.6298 -1.0697 -1.2604 172.4899 2.7202 1.6495 1.2778 0.9704 ```\n\nPlot the estimated coefficients `b`, omitting the intercept term, and display error bars showing the coefficient confidence intervals.\n\n```slopes = b(2:end)'; lowerBarLengths = slopes-ci(1,2:end); upperBarLengths = ci(2,2:end)-slopes; errorbar(1:4,slopes,lowerBarLengths,upperBarLengths) xlim([0 5]) title('Coefficient Confidence Intervals')```",
null,
"Only the first nonintercept coefficient is significantly different from 0.\n\nCompute the mean and standard deviation of 100 bootstrap samples. Find the 95% confidence interval for each statistic.\n\nGenerate 100 random numbers from the exponential distribution with mean 5.\n\n```rng('default') % For reproducibility y = exprnd(5,100,1);```\n\nDraw 100 bootstrap samples from the vector `y`. For each bootstrap sample, compute the mean and standard deviation. Find the 95% bootstrap confidence interval for the mean and standard deviation.\n\n`[ci,bootstat] = bootci(100,@(x)[mean(x) std(x)],y);`\n\n`ci(:,1)` contains the lower and upper bounds of the mean confidence interval, and `c(:,2)` contains the lower and upper bounds of the standard deviation confidence interval. Each row of `bootstat` contains the mean and standard deviation of a bootstrap sample.\n\nPlot the mean and standard deviation of each bootstrap sample as a point. Plot the lower and upper bounds of the mean confidence interval as dotted vertical lines, and plot the lower and upper bounds of the standard deviation confidence interval as dotted horizontal lines.\n\n```plot(bootstat(:,1),bootstat(:,2),'o') xline(ci(1,1),':') xline(ci(2,1),':') yline(ci(1,2),':') yline(ci(2,2),':') xlabel('Mean') ylabel('Standard Deviation')```",
null,
"## Input Arguments\n\ncollapse all\n\nNumber of bootstrap samples to draw, specified as a positive integer scalar. To create each bootstrap sample, `bootci` randomly selects with replacement `n` out of the `n` rows of nonscalar data in `d` or `d1,...,dN`.\n\nExample: `100`\n\nData Types: `single` | `double`\n\nFunction to apply to each sample, specified as a function handle. The function can be a custom or built-in function. You must specify `bootfun` with the `@` symbol.\n\nExample: `@mean`\n\nData Types: `function_handle`\n\nData to sample from, specified as a column vector or matrix. The `n` rows of `d` correspond to observations. When you use multiple data input arguments `d1,...,dN`, you can specify some arguments as scalar values, but all nonscalar arguments must have the same number of rows.\n\nIf you use a single vector argument `d`, you can specify it as a row vector. `bootci` then samples from the elements of the vector.\n\nData Types: `single` | `double`\n\n### Name-Value Arguments\n\nSpecify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.\n\nExample: `bootci(100,{@mean,1:6'},'Alpha',0.1)` specifies to draw 100 bootstrap samples from the values 1 through 6, take the mean of each sample, and then compute the 90% confidence interval for the sample mean.\n\nSignificance level, specified as a positive scalar between 0 and 1. 
`bootci` computes the `100*(1-Alpha)` bootstrap confidence interval of each statistic defined by the function `bootfun`.\n\nExample: `'Alpha',0.01`\n\nData Types: `single` | `double`\n\nConfidence interval type, specified as one of the values in this table.\n\n| Value | Description |\n| --- | --- |\n| `'norm'` or `'normal'` | Normal approximated interval with bootstrapped bias and standard error |\n| `'per'` or `'percentile'` | Basic percentile method |\n| `'cper'` or `'corrected percentile'` | Bias corrected percentile method |\n| `'bca'` (default) | Bias corrected and accelerated percentile method — Involves a z0 factor that is computed using the proportion of bootstrap values that are less than the original sample value. To produce reasonable results when the sample is lumpy, the software computes z0 by including half of the bootstrap values that are the same as the original sample value. |\n| `'stud'` or `'student'` | Studentized confidence interval |\n\nExample: `'Type','student'`\n\nNumber of bootstrap samples for the studentized standard error estimate, specified as a positive integer scalar.\n\n`bootci` computes the studentized bootstrap confidence interval of the statistic defined by the function `bootfun`, and estimates the standard error of the bootstrap statistics by using `NBootStd` bootstrap data samples.\n\nNote\n\nTo use this name-value argument, the `Type` value must be `'stud'` or `'student'`. Specify either `NBootStd` or `StdErr`, but not both.\n\nExample: `'NBootStd',50`\n\nData Types: `single` | `double`\n\nFunction used to compute the studentized standard error estimate, specified as a function handle.\n\n`bootci` computes the studentized bootstrap confidence interval of the statistic defined by the function `bootfun`, and estimates the standard error of the bootstrap statistics by using the function `StdErr`. 
The `StdErr` function must take the same arguments as `bootfun` and return the standard error of the statistic computed by `bootfun`.\n\nNote\n\nTo use this name-value argument, the `Type` value must be `'stud'` or `'student'`. Specify either `NBootStd` or `StdErr`, but not both.\n\nExample: `'StdErr',@std`\n\nData Types: `function_handle`\n\nObservation weights, specified as a nonnegative vector with at least one positive element. The number of elements in `Weights` must be equal to the number of rows `n` in the data `d` or `d1,...,dN`. To obtain one bootstrap sample, `bootci` randomly selects with replacement `n` out of `n` rows of data using these weights as multinomial sampling probabilities.\n\nData Types: `single` | `double`\n\nOptions for computing bootstrap iterations in parallel and setting random numbers during the bootstrap sampling, specified as a structure. Create the `Options` structure with `statset`. This table lists the option fields and their values.\n\n| Field Name | Value | Default |\n| --- | --- | --- |\n| `UseParallel` | Set this value to `true` to compute bootstrap iterations in parallel. | `false` |\n| `UseSubstreams` | Set this value to `true` to run computations in parallel in a reproducible fashion. To compute reproducibly, set `Streams` to a type that allows substreams: `'mlfg6331_64'` or `'mrg32k3a'`. | `false` |\n| `Streams` | Specify this value as a `RandStream` object or cell array of such objects. Use a single object except when the `UseParallel` value is `true` and the `UseSubstreams` value is `false`. In that case, use a cell array that has the same size as the parallel pool. | If you do not specify `Streams`, then `bootci` uses the default stream or streams. |\n\nNote\n\nYou need Parallel Computing Toolbox™ to run computations in parallel.\n\nExample: `'Options',statset('UseParallel',true)`\n\nData Types: `struct`\n\n## Output Arguments\n\ncollapse all\n\nConfidence interval bounds, returned as a vector, matrix, or multidimensional array with two rows.\n\n• If `bootfun` returns a scalar, then `ci` is a vector containing the lower and upper bounds of the confidence interval.\n\n• If `bootfun` returns a vector of length m, then `ci` is a matrix of size 2-by-m, where `ci(1,:)` are lower bounds and `ci(2,:)` are upper bounds.\n\n• If `bootfun` returns a multidimensional array, then `ci` is an array, where `ci(1,:,...)` is an array of lower bounds and `ci(2,:,...)` is an array of upper bounds.\n\nBootstrap statistics, returned as a column vector or matrix with `nboot` rows. The `i`th row of `bootstat` corresponds to the results of applying `bootfun` to the `i`th bootstrap sample. If `bootfun` returns a matrix or array, then the `bootci` function first converts this output to a row vector before storing it in `bootstat`.\n\n## References\n\n[1] Davison, A. C., and D. V. Hinkley. Bootstrap Methods and Their Applications. Cambridge University Press, 1997.\n\n[2] Efron, Bradley. The Jackknife, the Bootstrap and Other Resampling Plans. Philadelphia: The Society for Industrial and Applied Mathematics, 1982.\n\n[3] DiCiccio, Thomas J., and Bradley Efron. “Bootstrap Confidence Intervals.” Statistical Science 11, no. 3 (1996): 189–228.\n\n[4] Efron, Bradley, and Robert J. Tibshirani. An Introduction to the Bootstrap. New York: Chapman & Hall, 1993."
] | [
null,
"https://la.mathworks.com/help/examples/stats/win64/BootstrapConfidenceIntervalsForLinearCoefficientsExample_01.png",
null,
"https://la.mathworks.com/help/examples/stats/win64/ConfidenceIntervalForMultipleStatisticsExample_01.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5915537,"math_prob":0.9672096,"size":1267,"snap":"2021-31-2021-39","text_gpt3_token_len":355,"char_repetition_ratio":0.1266825,"word_repetition_ratio":0.0,"special_character_ratio":0.2573007,"punctuation_ratio":0.2706767,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961317,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T00:01:12Z\",\"WARC-Record-ID\":\"<urn:uuid:01900d72-76f6-4775-bd44-22de5313b841>\",\"Content-Length\":\"139195\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:553f9916-8a2e-46f9-a6bf-2c5d635cff02>\",\"WARC-Concurrent-To\":\"<urn:uuid:03e46230-59b4-4586-9125-1879f5295406>\",\"WARC-IP-Address\":\"23.197.108.134\",\"WARC-Target-URI\":\"https://la.mathworks.com/help/stats/bootci.html\",\"WARC-Payload-Digest\":\"sha1:XZBOWSTRMVLGB3XASNO6RUKZIYUDAWGN\",\"WARC-Block-Digest\":\"sha1:2XBRGNY3NONJ7SAYB442456OA4ZCCE5B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057403.84_warc_CC-MAIN-20210922223752-20210923013752-00062.warc.gz\"}"} |
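The basic percentile method described in the `bootci` documentation above (`'Type','percentile'`) is easy to sketch outside MATLAB. The following Python snippet is a minimal illustration of that idea only; the function name `bootci_percentile` and all details are my own, not MathWorks' implementation, and it handles just a scalar statistic.

```python
import random
import statistics

def bootci_percentile(nboot, bootfun, data, alpha=0.05, seed=0):
    """Basic percentile bootstrap CI for a scalar statistic.

    Draws nboot resamples (with replacement, same size as data),
    applies bootfun to each, and returns the empirical alpha/2
    and 1 - alpha/2 quantiles of the resampled statistics.
    """
    rng = random.Random(seed)  # fixed seed, in the spirit of rng('default')
    n = len(data)
    stats = sorted(
        bootfun([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(nboot)
    )
    lo = stats[int((alpha / 2) * nboot)]
    hi = stats[int((1 - alpha / 2) * nboot) - 1]
    return lo, hi

# 95% percentile CI for the mean of the values 1..100
lo, hi = bootci_percentile(2000, statistics.mean, list(range(1, 101)))
```

The bias-corrected `'bca'` default and the studentized interval add corrections on top of this basic percentile scheme.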
https://physics.stackexchange.com/users/47373/apt45?tab=topactivity | [
"### Questions (38)\n\n 11 Identically vanishing trace of $T^{\\mu\\nu}$ and trace anomaly 10 Performing Wick Rotation to get Euclidean action of a scalar field $\\Psi$ 7 Anomalous Ward Identities and anomalous dimensions 5 How to write the Clebsch-Gordan decomposition in tensor notation 5 Perturbation of an operator - Meaning of matrix element [closed]\n\n### Reputation (1,017)\n\n +5 Why do we impose de Donder gauge? +10 Why do we impose de Donder gauge? +20 Identically vanishing trace of $T^{\\mu\\nu}$ and trace anomaly +5 Why do we study anomalies with the triangle diagram?\n\n 6 Trace of 4 Gell-Mann matrices 6 “Dimensional analysis” arguments in quantum field theory 4 Vertex factor for $\\frac{g}{4} (A_{\\nu}A^{\\nu})^2$ in QED 3 Conformal theory with zero central charge 2 Invariant terms of Chiral Lagrangian\n\n### Tags (81)\n\n 17 quantum-field-theory × 25 6 quantum-chromodynamics × 3 10 homework-and-exercises × 7 6 regularization × 3 7 renormalization × 7 6 trace 7 lie-algebra × 4 6 dimensional-analysis 6 feynman-diagrams × 6 4 lagrangian-formalism × 6"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.65768397,"math_prob":0.9358622,"size":479,"snap":"2019-43-2019-47","text_gpt3_token_len":140,"char_repetition_ratio":0.19157895,"word_repetition_ratio":0.0,"special_character_ratio":0.33194155,"punctuation_ratio":0.013157895,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97401625,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-12T02:48:36Z\",\"WARC-Record-ID\":\"<urn:uuid:93c22f69-9012-47d0-a733-e93b6daa948f>\",\"Content-Length\":\"128580\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a23c88f3-8213-42d1-9f65-87a3baedba2a>\",\"WARC-Concurrent-To\":\"<urn:uuid:58fb8611-7438-48dd-8775-ce1be598d270>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/users/47373/apt45?tab=topactivity\",\"WARC-Payload-Digest\":\"sha1:C4I2N5V7M2ALPDZCUYDI2RPMQDFEQOSM\",\"WARC-Block-Digest\":\"sha1:W7IO3AIN7TKHMMZQDPLWBRHC4AC3GO4E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496664567.4_warc_CC-MAIN-20191112024224-20191112052224-00497.warc.gz\"}"} |
https://stackoverflow.com/questions/50457921/cost-function-mean-squared-error-formulae | [
"# Cost Function & Mean Squared Error Formulae\n\nI am new to machine learning and statistics and am confused by the cost function and Mean Squared Error (MSE) formulas. In the Machine Learning class at Stanford (Coursera), the cost function formula is given as shown below:\n\nCost Function formula"
null,
"And at some other sources, cost function is termed as mean squared error (MSE) and it is given with the formula as shown in picture below.\n\nMean Squared Error formula",
null,
"What is the correct cost function formula, and are the cost function and MSE the same or different? Please let me know why the formulas are different."
] | [
null,
"https://i.stack.imgur.com/5cydJ.png",
null,
"https://i.stack.imgur.com/yCHvT.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9051005,"math_prob":0.9646506,"size":570,"snap":"2021-43-2021-49","text_gpt3_token_len":122,"char_repetition_ratio":0.16784452,"word_repetition_ratio":0.0,"special_character_ratio":0.19824561,"punctuation_ratio":0.067307696,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989672,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-17T11:17:01Z\",\"WARC-Record-ID\":\"<urn:uuid:630fcda4-ea0e-449c-8f86-c2982c37ec54>\",\"Content-Length\":\"168882\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6e248e3e-6d77-4c7c-91b0-849bbda57d96>\",\"WARC-Concurrent-To\":\"<urn:uuid:dc7b6f8b-d53f-45a8-aad1-78671a3d0de6>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/50457921/cost-function-mean-squared-error-formulae\",\"WARC-Payload-Digest\":\"sha1:Y3M7K7HB2XTXSAB6BSSHM7LYH3NXH4ST\",\"WARC-Block-Digest\":\"sha1:DQURV2VL7DKKQFS2B7COQOZPU3G4F4ZK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585171.16_warc_CC-MAIN-20211017082600-20211017112600-00463.warc.gz\"}"} |
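For context on the question above: the Coursera cost function J = (1/(2m)) Σ(h(x) − y)² is simply MSE scaled by 1/2, so both are minimized by the same parameters; the extra 1/2 only cancels the factor of 2 produced when differentiating the square. A quick numerical check in Python (my own function names and made-up data):

```python
def mse(y_true, y_pred):
    """Mean squared error: (1/m) * sum of squared errors."""
    m = len(y_true)
    return sum((yp - yt) ** 2 for yt, yp in zip(y_true, y_pred)) / m

def cost_j(y_true, y_pred):
    """Coursera-style cost: (1/(2m)) * sum of squared errors.

    The extra 1/2 is cosmetic: d/dw (1/2)e^2 = e * de/dw, so the
    gradient comes out without a stray factor of 2.
    """
    return mse(y_true, y_pred) / 2

y_true = [1.0, 2.0, 3.0]
y_pred = [1.5, 2.0, 2.0]
# Same minimizer; values differ only by the constant factor 1/2.
print(mse(y_true, y_pred))     # 0.4166...
print(cost_j(y_true, y_pred))  # 0.2083...
```

Because scaling a function by a positive constant does not move its minimum, gradient descent on either version converges to the same coefficients.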
https://math.stackexchange.com/questions/2995166/are-second-countable-metric-spaces-sigma-compact/2995189 | [
"# Are second-countable metric spaces $\\sigma$-compact?\n\nI was curious about the relations between second-countable, separable, Lindelöf and $$\\sigma$$-compact topologies in the context of metric spaces.\n\nI am aware of the following implications in general topological spaces:\n\n• second-countable $$\\Rightarrow$$ separable $$\\not \\Rightarrow$$ Lindelöf, $$\\;$$ [thanks bof]\n• $$\\sigma$$-compact $$\\Rightarrow$$ Lindelöf\n• second-countable + locally compact $$\\Rightarrow$$ $$\\sigma$$-compact\n\nas well as the reversed implications in the case of metric spaces:\n\n• Lindelöf $$\\Leftrightarrow$$ separable $$\\Leftrightarrow$$ second-countable\n\nSince all the proofs I've seen so far require the LC condition, I assume it is not true in general that second-countable topological spaces are $\\sigma$-compact (although seeing an actual counterexample would be nice).\nSo what about metrizable topological spaces?\n\nIdeas so far:\nIf we can prove that every subset of a $$\\sigma$$-compact space is again $$\\sigma$$-compact, then this would follow from the fact that every separable metric space is homeomorphic to a subset of the Hilbert cube (which is compact). $$\\;$$[debunked by bof]\n\n• The set of irrational numbers is a separable metric space which is not $\\sigma$-compact. (Every $\\sigma$-compact subset of the irrational numbers is meager.) Hilbert space is another example. – bof Nov 12 '18 at 11:10\n• By the way, separable does not imply Lindelöf in general topological spaces. For example, the Sorgenfrey plane is separable but not Lindelöf. – bof Nov 12 '18 at 11:12\n• Every closed subset of a $\\sigma$-compact space is again $\\sigma$-compact. Not all subspaces are, as witnessed by the rationals and the irrationals as subspaces of the reals. – Henno Brandsma Nov 12 '18 at 11:55\n• @bof I also wanted to disprove local compactness. – Henno Brandsma Nov 12 '18 at 12:25\n• Any infinite-dimensional separable Banach space is a counterexample. 
This follows from the fact that the unit ball of a Banach space $E$ is compact iff $E$ is finite dimensional, so a compact subset of an infinite dimensional Banach space has empty interior. Then we just apply the Baire category theorem. – Robert Furber Nov 11 '19 at 11:02\n\nA counterexample is the \"Baire space\" $$\\mathcal{N} = \\mathbb{N}^{\\mathbb{N}}$$. This is one of the main examples of a Polish space: a separable, completely metrizable space.\nOne fact about this space is that all compact subsets have empty interior, that is, all compact subsets are nowhere dense. By the Baire Category Theorem, $$\\mathcal{N}$$ is not the countable union of nowhere dense subsets, and so together with the above fact it cannot be $$\\sigma$$-compact."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7997143,"math_prob":0.9974065,"size":1085,"snap":"2020-34-2020-40","text_gpt3_token_len":262,"char_repetition_ratio":0.15356152,"word_repetition_ratio":0.0,"special_character_ratio":0.2156682,"punctuation_ratio":0.08139535,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99965703,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-29T15:04:54Z\",\"WARC-Record-ID\":\"<urn:uuid:f4abd082-232b-43cf-978a-d2048b53e6d4>\",\"Content-Length\":\"153071\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a4a70291-fa4d-4873-91cb-6b1ba1d77fd2>\",\"WARC-Concurrent-To\":\"<urn:uuid:45eb0736-ec5e-4e66-86a0-f67bdc9d7b62>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2995166/are-second-countable-metric-spaces-sigma-compact/2995189\",\"WARC-Payload-Digest\":\"sha1:6NPGCU6EJG77KWKGDY7CUSR4JZHATV7M\",\"WARC-Block-Digest\":\"sha1:QGMNOH6HP5YA33O64RFYLK73MOMIIUXT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401643509.96_warc_CC-MAIN-20200929123413-20200929153413-00638.warc.gz\"}"} |
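The key fact in the accepted answer above, that compact subsets of $\mathcal{N}$ have empty interior, deserves a line of justification. A sketch of the standard argument (my addition, not part of the original thread):

```latex
% If K \subseteq \mathcal{N} = \mathbb{N}^{\mathbb{N}} is compact, each
% coordinate projection \pi_n(K) \subseteq \mathbb{N} is compact, hence
% finite, so
K \subseteq \prod_{n \in \mathbb{N}} \{0, 1, \dots, f(n)\},
\qquad f(n) := \max \pi_n(K).
% A basic open set [s] = \{ x \in \mathcal{N} : s \text{ is an initial
% segment of } x \} constrains only finitely many coordinates, so it
% contains points whose later coordinates exceed every f(n); hence no
% basic open set fits inside K, i.e. \operatorname{int}(K) = \emptyset.
% If \mathcal{N} were \sigma-compact, it would be a countable union of
% nowhere dense sets, contradicting the Baire Category Theorem.
```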
https://www.cssexaminations.com/a-number-is-doubled-and-9-is-added/ | [
"# A Number is Doubled and 9 is Added. If the Resultant is Trebled, It Becomes 75. What is that Number?\n\n## Explanation\n\n• A number is doubled and 9 is added.\n• The resultant is trebled, and it becomes 75.\n\nThe number can be figured out as follows.\n\nSuppose the number is y.\n\nThe number y is doubled and 9 is added:\n\n2y + 9\n\nThe resultant (2y + 9) is trebled:\n\n3(2y + 9) = 75 ________ (i)\n\nThe value of y can be found easily from equation (i) by simplifying it (y = 8).\n\nNumber = ?\n\n## Solution\n\nLet\n\nNumber = y\n\nAccording to the given condition:\n\n3(2y + 9) = 75\n\n2y + 9 = 25\n\n2y = 25 – 9\n\n2y = 16\n\ny = 8\n\n## Conclusion\n\nA number is doubled and 9 is added. If the resultant is trebled, it becomes 75. The required number is 8."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.901303,"math_prob":0.9776326,"size":668,"snap":"2021-31-2021-39","text_gpt3_token_len":225,"char_repetition_ratio":0.13403614,"word_repetition_ratio":0.08450704,"special_character_ratio":0.35179642,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9778466,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T02:56:26Z\",\"WARC-Record-ID\":\"<urn:uuid:e55f306a-990e-40ec-97fd-5ff793ed2092>\",\"Content-Length\":\"484060\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4f57e262-6dfb-4f6e-b6d7-31c493911e37>\",\"WARC-Concurrent-To\":\"<urn:uuid:67fc87e1-9f12-49ed-9ed1-86d375eecc46>\",\"WARC-IP-Address\":\"18.232.245.187\",\"WARC-Target-URI\":\"https://www.cssexaminations.com/a-number-is-doubled-and-9-is-added/\",\"WARC-Payload-Digest\":\"sha1:5F227GO5ICRW6LLCYF4RGWWZMW7NFN4C\",\"WARC-Block-Digest\":\"sha1:SDNQNBT6RQJYEURD6QTNUKDSERLU4AWU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057496.18_warc_CC-MAIN-20210924020020-20210924050020-00613.warc.gz\"}"} |
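The solution above can also be checked mechanically by undoing each operation in reverse order. A throwaway Python sketch (my own code, not from the original page):

```python
# 3(2y + 9) = 75 -- undo each operation in reverse order.
resultant = 75
after_adding_nine = resultant / 3       # undo the trebling: 25
after_doubling = after_adding_nine - 9  # undo the "+9": 16
number = after_doubling / 2             # undo the doubling: 8

# Forward check: double, add 9, treble -> 75.
assert 3 * (2 * number + 9) == resultant
print(int(number))  # 8
```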
https://emathematics.net/sistecuaciones.php?a=3 | [
"",
null,
"Four methods of solving systems of two linear equations are illustrated here:\n1. Solution by substitution. Find the value of one unknown in either of the given equations and substitute this value in the other equation.\n2. Solution by equating. Find the value of the same unknown in both of the given equations and equate the resulting expressions.\n3. Solution by addition or subtraction. If necessary, multiply the given equations by such numbers as will make the coefficients of one unknown in the resulting equations numerically equal. If the signs of the equal coefficients are unlike, add the resulting equations; if like, subtract them.\n4. Graphical solution. Graph both equations, obtaining two straight lines. The simultaneous solution is given by the coordinates (x,y) of the point of intersection of these lines.\n\nSolve the following system:"
] | [
null,
"https://emathematics.net/cabecera.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8779661,"math_prob":0.9992434,"size":850,"snap":"2023-40-2023-50","text_gpt3_token_len":170,"char_repetition_ratio":0.17612293,"word_repetition_ratio":0.015267176,"special_character_ratio":0.18941176,"punctuation_ratio":0.13375796,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999442,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T07:01:52Z\",\"WARC-Record-ID\":\"<urn:uuid:b71b81f2-3fea-481c-b704-cefd55a83a50>\",\"Content-Length\":\"17085\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d5101859-a0c7-4e8e-b945-88925ff2e4c6>\",\"WARC-Concurrent-To\":\"<urn:uuid:56f5448b-4913-4283-a1f8-063c4e1ebdb2>\",\"WARC-IP-Address\":\"172.67.155.6\",\"WARC-Target-URI\":\"https://emathematics.net/sistecuaciones.php?a=3\",\"WARC-Payload-Digest\":\"sha1:Z4ULBIOCZDIV52AMRQ4SIFJ2ZKG36PWZ\",\"WARC-Block-Digest\":\"sha1:CY3ZPBBD6QEOUQ6CFPNT7DQU5CNVZZXZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510368.33_warc_CC-MAIN-20230928063033-20230928093033-00157.warc.gz\"}"} |
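Method 3 above (addition or subtraction) can be mechanized directly. A small Python sketch; the example system 2x + 3y = 8, x − y = −1 is my own, since the page generates its exercises dynamically:

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination.

    Scaling the equations so one unknown's coefficients match and then
    subtracting is algebraically equivalent to Cramer's rule, so we use
    the determinant directly.
    """
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("no unique solution: the lines do not cross once")
    y = (a1 * c2 - a2 * c1) / det  # x eliminated
    x = (c1 - b1 * y) / a1 if a1 != 0 else (c2 - b2 * y) / a2
    return x, y

# 2x + 3y = 8,  x - y = -1  ->  x = 1, y = 2
x, y = solve_2x2(2, 3, 8, 1, -1, -1)
```

A zero determinant corresponds to the graphical case in method 4 where the two lines are parallel (or coincident) and never cross in a single point.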
https://www.geeksforgeeks.org/find-n-th-term-series-9-33-73129/ | [
"# Find n-th term in the series 9, 33, 73, 129 …\n\n• Difficulty Level : Easy\n• Last Updated : 16 Apr, 2021\n\nGiven a series 9, 33, 73, 129… Find the n-th term of the series.\nExamples:\n\n```Input : n = 4\nOutput : 129\n\nInput : n = 5\nOutput : 201```\n\nThe given series has a pattern which becomes visible after subtracting the series from a copy of itself shifted by one term:\n\n```S = 9 + 33 + 73 + 129 + … + tn-1 + tn\nS =     9 + 33 + 73 + … + tn-2 + tn-1 + tn\n———————————————\n0 = 9 + (24 + 40 + 56 + ….) - tn\n\nSince 24 + 40 + 56 … is an A.P. with first term 24 and\ncommon difference d = 16, we get\n\ntn = 9 + [((n-1)/2)*(2*24 + (n-1-1)*d)]\n\nOn solving this we get\n\ntn = 8n^2 + 1```\n\nBelow is the implementation of the above approach:\n\n## C++\n\n```cpp\n// Program to find n-th element in the\n// series 9, 33, 73, 129...\n#include <iostream>\nusing namespace std;\n\n// Returns n-th element of the series\nint series(int n)\n{\n    return (8 * n * n) + 1;\n}\n\n// Driver program to test the above function\nint main()\n{\n    int n = 5;\n    cout << series(n);\n    return 0;\n}\n```\n\n## Java\n\n```java\n// Program to find n-th element in the\n// series 9, 33, 73, 129...\nimport java.io.*;\n\nclass GFG {\n\n    // Returns n-th element of the series\n    static int series(int n)\n    {\n        return (8 * n * n) + 1;\n    }\n\n    // Driver program to test the above function\n    public static void main(String args[])\n    {\n        int n = 5;\n        System.out.println(series(n));\n    }\n}\n\n/* This code is contributed by Nikita Tiwari. */\n```\n\n## Python3\n\n```python\n# Python program to find n-th element\n# in the series 9, 33, 73, 129...\n\n# Returns n-th element of the series\ndef series(n):\n    print((8 * n ** 2) + 1)\n\n# Driver Code\nseries(5)\n\n# This code is contributed by Abhishek Agrawal.\n```\n\n## C#\n\n```csharp\n// C# program to find n-th element in the\n// series 9, 33, 73, 129...\nusing System;\n\nclass GFG {\n\n    // Returns n-th element of the series\n    static int series(int n)\n    {\n        return (8 * n * n) + 1;\n    }\n\n    // Driver function\n    public static void Main()\n    {\n        int n = 5;\n        Console.WriteLine(series(n));\n    }\n}\n\n/* This code is contributed by vt_m. */\n```\n\n## PHP\n\n```php\n<?php\n// Port of the C++ implementation above.\nfunction series($n)\n{\n    return (8 * $n * $n) + 1;\n}\n\n// Driver code\necho series(5);\n?>\n```\n\n## Javascript\n\n```javascript\n// Port of the C++ implementation above.\nfunction series(n)\n{\n    return (8 * n * n) + 1;\n}\n\n// Driver code\nconsole.log(series(5));\n```\n\nOutput:\n\n`201`\n\nTime complexity: O(1)\n\nThis article is contributed by Striver."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.68465894,"math_prob":0.83444196,"size":3122,"snap":"2021-43-2021-49","text_gpt3_token_len":959,"char_repetition_ratio":0.15169981,"word_repetition_ratio":0.2413793,"special_character_ratio":0.35874438,"punctuation_ratio":0.15264797,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99164826,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-16T16:20:00Z\",\"WARC-Record-ID\":\"<urn:uuid:d7a09db3-3ec5-4496-826e-4807bfbf1479>\",\"Content-Length\":\"127083\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9519db1f-7e65-4f2f-bc40-3e08c44c9699>\",\"WARC-Concurrent-To\":\"<urn:uuid:00f88d42-da61-41be-a14d-2af1bcf4fd23>\",\"WARC-IP-Address\":\"23.207.202.207\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/find-n-th-term-series-9-33-73129/\",\"WARC-Payload-Digest\":\"sha1:CWHR2ZQMLSJWDTWLNELBEDK3DWCDVJTK\",\"WARC-Block-Digest\":\"sha1:GT3B3JQJZCJBN36U7YWYXODL4O5BSLIC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323584886.5_warc_CC-MAIN-20211016135542-20211016165542-00022.warc.gz\"}"} |
https://www.geeksforgeeks.org/utility-methods-of-wrapper-classes-in-java/?ref=rp | [
"# Utility Methods of Wrapper Classes in Java

Prerequisite: Wrapper Classes

The objective of the Wrapper classes is to define several utility methods which are required for the primitive types. There are 4 kinds of utility methods for primitive types defined by the Wrapper classes:

### 1. valueOf() Method

We can use the valueOf() method to create a Wrapper object for a given primitive or String. There are 3 types of valueOf() methods:

A. Wrapper valueOf(String s): Every wrapper class except the Character class contains a static valueOf() method to create a Wrapper class object for a given String.

Syntax:

`public static Wrapper valueOf(String s);`

## Java

```
// Java program to illustrate the valueOf() method

class GFG {
    public static void main(String[] args)
    {
        Integer I = Integer.valueOf("10");
        System.out.println(I);

        Double D = Double.valueOf("10.0");
        System.out.println(D);

        Boolean B = Boolean.valueOf("true");
        System.out.println(B);

        // Here we will get a RuntimeException
        Integer I1 = Integer.valueOf("ten");
    }
}
```

Output:

```
10
10.0
true
Exception in thread "main" java.lang.NumberFormatException: For input string: "ten"
```

B. Wrapper valueOf(String s, int radix): Every integral Wrapper class (Byte, Short, Integer, Long) contains the following valueOf() method to create a Wrapper object for the given String with the specified radix. The range of the radix is 2 to 36.

Syntax:

`public static Wrapper valueOf(String s, int radix)`

## Java

```
// Java program to illustrate the valueOf() method

class GFG {
    public static void main(String[] args)
    {
        Integer I = Integer.valueOf("1111", 2);
        System.out.println(I);

        Integer I1 = Integer.valueOf("1111", 4);
        System.out.println(I1);
    }
}
```

Output:

```
15
85
```

C. Wrapper valueOf(primitive p): Every Wrapper class, including the Character class, contains the following method to create a Wrapper object for the given primitive type.

Syntax:

`public static Wrapper valueOf(primitive p);`

## Java

```
// Java program to illustrate the valueOf() method

class GFG {
    public static void main(String[] args)
    {
        Integer I = Integer.valueOf(10);
        Double D = Double.valueOf(10.5);
        Character C = Character.valueOf('a');

        System.out.println(I);
        System.out.println(D);
        System.out.println(C);
    }
}
```

Output:

```
10
10.5
a
```

### 2. xxxValue() Method

We can use the xxxValue() methods to get the primitive for a given Wrapper object. Every numeric Wrapper class (Byte, Short, Integer, Long, Float, Double) contains the following 6 methods to get a primitive for the given Wrapper object:

1. public byte byteValue()
2. public short shortValue()
3. public int intValue()
4. public long longValue()
5. public float floatValue()
6. public double doubleValue()

### 3. parseXxx() Method

We can use the parseXxx() methods to convert a String to a primitive. There are two types of parseXxx() methods:

A. primitive parseXxx(String s): Every Wrapper class except the Character class contains the following parseXxx() method to find the primitive for the given String object.

Syntax:

`public static primitive parseXxx(String s);`

## Java

```
// Java program to illustrate the parseXxx() method

class GFG {
    public static void main(String[] args)
    {
        int i = Integer.parseInt("10");
        double d = Double.parseDouble("10.5");
        boolean b = Boolean.parseBoolean("true");

        System.out.println(i);
        System.out.println(d);
        System.out.println(b);
    }
}
```

Output:

```
10
10.5
true
```

B. primitive parseXxx(String s, int radix): Every integral Wrapper class (Byte, Short, Integer, Long) contains the following parseXxx() method to convert a String in the specified radix to a primitive.

Syntax:

`public static primitive parseXxx(String s, int radix);`

## Java

```
// Java program to illustrate the parseXxx() method

class GFG {
    public static void main(String[] args)
    {
        int i = Integer.parseInt("1000", 2);
        long l = Long.parseLong("1111", 4);

        System.out.println(i);
        System.out.println(l);
    }
}
```

Output:

```
8
85
```

### 4. toString() Method

We can use the toString() method to convert a Wrapper object or a primitive to a String. There are a few forms of the toString() method:

A. public String toString(): Every wrapper class contains the following toString() method to convert a Wrapper object to String type.

Syntax:

`public String toString();`

## Java

```
// Java program to illustrate the toString() method

class GFG {
    public static void main(String[] args)
    {
        Integer I = new Integer(10);
        String s = I.toString();
        System.out.println(s);
    }
}
```

Output:

```
10
```

B. toString(primitive p): Every Wrapper class, including the Character class, contains the following static toString() method to convert a primitive to a String.

Syntax:

`public static String toString(primitive p);`

## Java

```
// Java program to illustrate the toString() method

class GFG {
    public static void main(String[] args)
    {
        String s = Integer.toString(10);
        System.out.println(s);

        String s1 = Character.toString('a');
        System.out.println(s1);
    }
}
```

Output:

```
10
a
```

C. toString(primitive p, int radix): The Integer and Long classes contain the following toString() method to convert a primitive to a String in the specified radix.

Syntax:

`public static String toString(primitive p, int radix);`

## Java

```
// Java program to illustrate the toString() method

class GFG {
    public static void main(String[] args)
    {
        String s = Integer.toString(15, 2);
        System.out.println(s);

        String s1 = Long.toString(11110000, 4);
        System.out.println(s1);
    }
}
```

Output:

```
1111
222120121300
```
"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5853295,"math_prob":0.69275934,"size":5129,"snap":"2023-40-2023-50","text_gpt3_token_len":1226,"char_repetition_ratio":0.184,"word_repetition_ratio":0.20104438,"special_character_ratio":0.26106453,"punctuation_ratio":0.19650206,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9851277,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-07T10:39:52Z\",\"WARC-Record-ID\":\"<urn:uuid:d2d7d5d0-83b7-47c0-b882-f73c2421bdfb>\",\"Content-Length\":\"373048\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:02082e04-9d6b-448a-9641-f47053b7f707>\",\"WARC-Concurrent-To\":\"<urn:uuid:df91bbf2-b7e0-4cd9-9b8c-9e4529d3e9a7>\",\"WARC-IP-Address\":\"108.138.64.13\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/utility-methods-of-wrapper-classes-in-java/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:O7V2OOXIBIS5COFMTHANF2Q3ZG74BYYW\",\"WARC-Block-Digest\":\"sha1:CRVNVGZWKXSC7XSAS7LSIL6QD3356YO4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100651.34_warc_CC-MAIN-20231207090036-20231207120036-00846.warc.gz\"}"} |
https://www.traditionaloven.com/tutorials/angle/convert-angle-unit-second-to-cycle-angle-unit.html | [
" Convert '' to 360° | second to cycles

angle units conversion

Amount: 1 second ('') of angle. Equals: 0.00000077 cycles (360°) in angle.

Converting a second value to cycles on the angle units scale.

How many cycles are in 1 second? The answer is: 1 '' equals 0.00000077 360°.

0.00000077 360° is converted to 1 of what?

The cycles unit number 0.00000077 360° converts to 1 '', one second. It is the EQUAL angle value of 1 second, but in the cycles angle unit alternative.

''/360° angle conversion result:

| From | Symbol | Equals | Result | Symbol |
| --- | --- | --- | --- | --- |
| 1 | '' | = | 0.00000077 | 360° |

Conversion chart - seconds to cycles

1 second to cycles = 0.00000077 360°

2 seconds to cycles = 0.0000015 360°

3 seconds to cycles = 0.0000023 360°

4 seconds to cycles = 0.0000031 360°

5 seconds to cycles = 0.0000039 360°

6 seconds to cycles = 0.0000046 360°

7 seconds to cycles = 0.0000054 360°

8 seconds to cycles = 0.0000062 360°

9 seconds to cycles = 0.0000069 360°

10 seconds to cycles = 0.0000077 360°

11 seconds to cycles = 0.0000085 360°

12 seconds to cycles = 0.0000093 360°

13 seconds to cycles = 0.000010 360°

14 seconds to cycles = 0.000011 360°

15 seconds to cycles = 0.000012 360°

Convert angle of second ('') and cycles (360°) units in reverse, from cycles into seconds.

Angles

This calculator is based on the conversion of two angle units. An angle consists of two rays (the sides of the angle) sharing a common vertex, also called the endpoint. Some angle units belong to rotation measurements - spherical angles measured by arc lengths from the center, together with the radius.
For a whole set of multiple units of angle on one page, try the multiunit converter tool, which has all the angle unit variations built in.

Converter type: angle units

First unit: second ('') is used for measuring angle.
Second: cycle (360°) is a unit of angle.

QUESTION:
15 '' = ? 360°

ANSWER:
15 '' = 0.000012 360°

Abbreviation, or prefix, for second is: ''
Abbreviation for cycle is: 360°

Other applications for this angle calculator ...

With the two-unit calculating service it provides, this angle converter has also proved useful as a teaching tool:
1. in practicing exchanges between seconds and cycles ( '' vs. 360° ).
2. for conversion factors between unit pairs.
3. for working with an angle's values and properties.

I've done my best to build this site for you - please send feedback to let me know how you enjoyed visiting."
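The factor used throughout this page follows from 1 cycle = 360° and 1° = 3600 seconds of arc, i.e. 1,296,000 arcseconds per full turn. A quick sketch (mine, not part of the converter) reproduces the chart values:

```python
ARCSECONDS_PER_CYCLE = 360 * 3600  # 1,296,000 arcseconds in one full turn

def arcseconds_to_cycles(seconds):
    """Convert an angle in seconds of arc ('') to cycles (360 deg)."""
    return seconds / ARCSECONDS_PER_CYCLE

# Reproduce two rows of the conversion chart above
print(round(arcseconds_to_cycles(1), 8))   # 7.7e-07, i.e. 0.00000077
print(round(arcseconds_to_cycles(15), 6))  # 1.2e-05, i.e. 0.000012
```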
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8116263,"math_prob":0.987518,"size":2439,"snap":"2019-43-2019-47","text_gpt3_token_len":681,"char_repetition_ratio":0.27268994,"word_repetition_ratio":0.0,"special_character_ratio":0.36121362,"punctuation_ratio":0.13963039,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9931215,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T02:01:58Z\",\"WARC-Record-ID\":\"<urn:uuid:c0f7e718-3d9c-4877-a926-29b5c24aab11>\",\"Content-Length\":\"45975\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3b149a39-1ad2-496c-a1b5-9f37859545d5>\",\"WARC-Concurrent-To\":\"<urn:uuid:378ca251-3ee6-418e-9d69-6e55db8841e5>\",\"WARC-IP-Address\":\"162.241.171.12\",\"WARC-Target-URI\":\"https://www.traditionaloven.com/tutorials/angle/convert-angle-unit-second-to-cycle-angle-unit.html\",\"WARC-Payload-Digest\":\"sha1:OOLPFSRGKKF55KPY2OPJPZM234QLPKVD\",\"WARC-Block-Digest\":\"sha1:LGQPUJC2DKJCLGAKYZRWJ4VWKX4GEBMT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986688674.52_warc_CC-MAIN-20191019013909-20191019041409-00248.warc.gz\"}"} |
https://quant.stackexchange.com/questions/18408/does-heteroskedasticity-of-returns-depend-on-the-time-frame?noredirect=1 | [
"# Does heteroskedasticity of returns depend on the time frame?

Similarly to my last question, for which I obtained very interesting and useful answers, I would like to know whether there have been any studies on heteroskedasticity and the time frame of the returns.

As an example, could it be that the lower the time frame (take the 5-minute returns), the less heteroskedastic the returns are?

• Hi Monolite! Can you specify what is the model you're referring to (GARCH, ARCH, ...)? – Quantopik Jun 17 '15 at 17:57

## 2 Answers

In practice, for heavily traded assets (above the 60% quantile of average daily dollar volume), an individual asset's return is fairly scalable across different time frames by a factor of $\sqrt{T}$.

However, for the covariance among different assets, moving between different time frames is not linearly scalable (although in theory it should be). This is known as the "Epps Effect".

This paper states that heteroskedasticity is a stylized fact in daily as well as intra-day returns: https://statistik.econ.kit.edu/download/doc_secure1/HandbookITandFinan.pdf"
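To make the "stylized fact" concrete, here is a small self-contained illustration (my own sketch, not from any of the sources): returns simulated with a volatility that changes half-way through the sample have a clearly non-constant standard deviation over time, while within the i.i.d. part, aggregating $T$ short-period returns still scales the standard deviation by roughly $\sqrt{T}$:

```python
import math
import random
import statistics

random.seed(42)

# Simulated 5-minute returns: a calm regime followed by a turbulent one.
# The changing dispersion over time is exactly what heteroskedasticity means.
calm = [random.gauss(0, 0.01) for _ in range(2000)]
turbulent = [random.gauss(0, 0.05) for _ in range(2000)]

sigma_calm = statistics.pstdev(calm)
sigma_turbulent = statistics.pstdev(turbulent)
print(sigma_calm, sigma_turbulent)  # the second is roughly five times the first

# sqrt(T) scaling within one regime: aggregate T = 12 five-minute returns
# into one-hour returns and compare standard deviations.
T = 12
hourly = [sum(calm[i:i + T]) for i in range(0, len(calm), T)]
print(statistics.pstdev(hourly) / (sigma_calm * math.sqrt(T)))  # close to 1
```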
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9517579,"math_prob":0.77124417,"size":320,"snap":"2021-04-2021-17","text_gpt3_token_len":70,"char_repetition_ratio":0.12658228,"word_repetition_ratio":0.0,"special_character_ratio":0.190625,"punctuation_ratio":0.06779661,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9665758,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-15T07:12:48Z\",\"WARC-Record-ID\":\"<urn:uuid:0ac47efb-a050-4c11-b1b9-35e41e212f78>\",\"Content-Length\":\"169091\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1bc7c93b-3756-495b-9e72-fe70d4f34ff4>\",\"WARC-Concurrent-To\":\"<urn:uuid:f825dffc-e95a-45dc-a780-1d888ef7de6a>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://quant.stackexchange.com/questions/18408/does-heteroskedasticity-of-returns-depend-on-the-time-frame?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:VY4HYBQZS3LWNIXJD7XXAZXEIQMWBUAN\",\"WARC-Block-Digest\":\"sha1:GBDSSLPPD4K3KUGYUR5D2PSCH4RT42PT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038084601.32_warc_CC-MAIN-20210415065312-20210415095312-00039.warc.gz\"}"} |
https://answers.yahoo.com/question/index?qid=20070620091706AA2SS8W | [
"# I need more help with this 1/(g-(k/m)v^2) dv?

I need to write a paper explaining how to solve this integral. I need to factor fractions and end with a hyperbolic trig function. I will appreciate any help, thanks.

Essentially you want to write the integral as:

(m/k)*dv / [ mg/k - v^2]

And replace mg/k with a^2.

The easiest method is to get the hyperbolic trig function directly from the integration.

Use the substitution

x = a*tanh θ

dx = a*sech^2 θ dθ

a^2 - x^2 = a^2 * (1 - tanh^2 θ) = a^2 * sech^2 θ

So the integral of dx / (a^2 - x^2) becomes

the integral of (1/a) dθ

which gives you (1/a) θ = (1/a) * tanh^-1 (x/a)

Now you just need to replace a with sqrt(mg/k) and x with v.

------------

If you wish to integrate using partial fractions, then convert to hyperbolic trig, check out the links below where I answered very similar questions in the past."
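For completeness (this last step is only sketched in the answer above), substituting back and including the overall factor $m/k$ gives the antiderivative in closed form, valid for $0 \le v < \sqrt{mg/k}$:

```latex
\int \frac{dv}{g - \frac{k}{m}v^2}
  = \frac{m}{k}\int \frac{dv}{\frac{mg}{k} - v^2}
  = \frac{m}{k}\cdot\frac{1}{a}\,\tanh^{-1}\!\left(\frac{v}{a}\right) + C
  = \sqrt{\frac{m}{gk}}\,\tanh^{-1}\!\left(v\sqrt{\frac{k}{mg}}\right) + C,
\qquad a = \sqrt{\frac{mg}{k}}.
```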
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.76937884,"math_prob":0.96077484,"size":1023,"snap":"2020-34-2020-40","text_gpt3_token_len":312,"char_repetition_ratio":0.11383709,"word_repetition_ratio":0.0,"special_character_ratio":0.30400783,"punctuation_ratio":0.11061947,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9983061,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T12:42:55Z\",\"WARC-Record-ID\":\"<urn:uuid:e8b4f290-8f16-4a78-a77a-03e77214900b>\",\"Content-Length\":\"118237\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4394089e-5c33-4aca-aa15-75734c420b7a>\",\"WARC-Concurrent-To\":\"<urn:uuid:8fcb1b77-d836-4c61-9806-97a2b3567d63>\",\"WARC-IP-Address\":\"76.13.32.153\",\"WARC-Target-URI\":\"https://answers.yahoo.com/question/index?qid=20070620091706AA2SS8W\",\"WARC-Payload-Digest\":\"sha1:NQY3FAAPF4VBVFJ2KS3JX46YQ2SZKUC4\",\"WARC-Block-Digest\":\"sha1:77MZMRL2NHMJB57JPT4SBJ75KGUFSOIW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740838.3_warc_CC-MAIN-20200815094903-20200815124903-00433.warc.gz\"}"} |
https://ask.sagemath.org/question/30456/need-clarity-on-plotting-the-y-cordinate-from-the-x-co-ordinate-in-elliptic-curve-cryptography/ | [
"# Need clarity on plotting the y-coordinate from the x-coordinate in Elliptic curve cryptography

I'm just new to elliptic curve cryptography. I have been working on RSA for quite some time. Moreover, I'm not from a mathematical background, and the whole concept looks very complex. So tell me whether my understanding is correct or not. I was looking at a sample implementation at http://www.enggjournals.com/ijcse/doc... In the pdf given there, the curve equation is y^2=x^3+ax+b, and the domain parameters are p(751), a(-1), b(188), n(727). I want to encode a letter 'b', and it is first encoded as 11. Now x=mk+1, i.e. 11*20+1=221, cannot be solved for a y such that y^2 = x^3 + ax + b mod p.
So go for x=mk+2, i.e. x=222: no y exists. x=mk+3, x=223: no y exists. x=mk+4, so x=224, can be solved for y, and y=248.

1) Can somebody explain how exactly x=224 solves the equation for y?

2) On what basis is mk+1 taken? Is there any standard formula for choosing the y co-ordinate?

How exactly is this related to sagemath?

If it is not related to sagemath perhaps a better site for your question would be here.

The source is hard to read, both mathematically and with an xpdf viewer, so let us restrict ourselves to understanding only the paragraph related to the questions (1) and (2) above. We are now on page 1906 and the cook book tells us to initialize the elliptic curve $$E_p(a,b)\ :\ y^2=x^3 +ax+b\ ,$$ defined over the field $\mathbb F_p$ with $p$ elements. This field is GF(p) in sage. So in order to initialize this object in sage, for the special choice of the example at loc. cit., $p=751$, $a=-1$, $b=188$, we type (in the sage interpreter console):

sage: E = EllipticCurve( GF(751), [ -1, 188 ] )

sage: E
Elliptic Curve defined by y^2 = x^3 + 750*x + 188 over Finite Field of size 751

sage: E.order()
727

OK, I was too curious to see the "order of the curve", i.e. the number of points lying on it. 
So the order is this $n=727$; the source uses $n$, sometimes $N$, for this number.

In order to produce some confusion, the source wants to send the letter b as a message. (They could have taken some J or so... since the curve has parameters a and b.) In the next second we want to send the B instead. No problem, we are sending the $11$; by convention, the letters A, B, C, ... are converted to 10, 11, 12, ... . "Same information".

There is also some parameter, $k=20$. Setting this parameter as Step 5 makes it better. So the confusion on my side takes shape. (Why not set it once and for all at the beginning as a parameter? Does this $k=20$ depend on something chosen in the previous steps?)

The cook book now wants to associate the $x$-value to the $B$ given by the "formula": $$x = mk+1 \overset ?= 11\cdot 20+1\ .$$ With this occasion, we record the fact that the $11$ is in fact an $m$. Or at least ask ourselves, and accept it, since we cannot change the pdf. Starting with the natural number $221$ from above, we seek the first $x$ in the sequence $221, 222, 223, 224, 225,\dots$ so that $(x,?)$ is a point on the given curve. Let us do this in sage (setting F = GF(751) first, since the loop below works with elements of this field):

sage: F = GF(751)
sage: for x in [ 221..751 ]:
....:     if F(x^3 - x + 188).is_square():
....:         print x
....:         break
....:
224

We then take this first occurrence and associate the corresponding $?$ value. There are two of them, of course:

sage: sqrt( F(224)^3 - F(224) + F(188) )
248

Both sage and the source consider that $248$ is the better square root. (The value $-248=751-248=503$ is the other one.)

The source then claims:

6. Now the point (224,248) is encrypted and decrypted as a message.

And this is a good point to stop. I think we can now decrypt the way things can / could be done in the setting of the article.

N.B. Please excuse the many personal comments. 
But it was really hard and frustrating to get the message from the article; after that, doing the job in sage took seconds. It would have been best if the author had provided sage code in the first place..."
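The same search can also be reproduced without sage, using only Euler's criterion and the fact that $751 \equiv 3 \pmod 4$, so a square root of a quadratic residue $v$ is simply $v^{(p+1)/4} \bmod p$. A plain-Python sketch of this (mine, not from the article):

```python
p, a, b = 751, -1, 188

def rhs(x):
    # Right-hand side of y^2 = x^3 + a*x + b over GF(p)
    return (x**3 + a*x + b) % p

def is_square(v):
    # Euler's criterion: v is a quadratic residue iff v^((p-1)/2) = 1 (or v = 0)
    return v == 0 or pow(v, (p - 1) // 2, p) == 1

# Find the first x >= 221 whose curve value is a square in GF(751)
x = 221
while not is_square(rhs(x)):
    x += 1

# Since p = 751 is 3 mod 4, a square root is v^((p+1)/4) mod p
y = pow(rhs(x), (p + 1) // 4, p)
print(x, sorted({y, p - y}))  # 224 [248, 503]
```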
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90802664,"math_prob":0.9946773,"size":2951,"snap":"2023-14-2023-23","text_gpt3_token_len":827,"char_repetition_ratio":0.09840516,"word_repetition_ratio":0.0,"special_character_ratio":0.32565233,"punctuation_ratio":0.1890332,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99920577,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-31T08:32:47Z\",\"WARC-Record-ID\":\"<urn:uuid:1fb43131-62c6-4c1e-9ef7-eb7c388c1aad>\",\"Content-Length\":\"56581\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fc8bb1c6-e746-4b66-acc4-e62ac428be42>\",\"WARC-Concurrent-To\":\"<urn:uuid:7d7dff92-9490-40c6-86ac-ca1859cec0bc>\",\"WARC-IP-Address\":\"194.254.163.53\",\"WARC-Target-URI\":\"https://ask.sagemath.org/question/30456/need-clarity-on-plotting-the-y-cordinate-from-the-x-co-ordinate-in-elliptic-curve-cryptography/\",\"WARC-Payload-Digest\":\"sha1:FX76QYM6A5MQCY47IZOVNNJASGJWY4V2\",\"WARC-Block-Digest\":\"sha1:EMMIWCOBFI6JWJ2JW4UMEGZCB2MEUP4L\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949598.87_warc_CC-MAIN-20230331082653-20230331112653-00604.warc.gz\"}"} |
https://plantpot.works/8581 | [
"09/30/2021

In JavaScript, you can pad a number with leading zeros to make it a certain number of digits long. Padding a number with leading zeros is useful when you need to display numbers with a consistent number of digits, such as when displaying dates or times.

### Using the padStart() method

The padStart() method is a built-in method in JavaScript that pads a string with a specific character or characters until it reaches a specified length. Here's the syntax for using the padStart() method to pad a number with leading zeros:

```
number.toString().padStart(length, "0")
```

The number is the number that you want to pad with leading zeros. The length is the total number of digits that the padded number should have. The "0" is the character that you want to use for padding the number.

Here's an example of using the padStart() method to pad a number with leading zeros:

```
let number = 42;
let padded = number.toString().padStart(5, "0");
console.log(padded); // "00042"
```

In this example, we use the padStart() method to pad the number 42 with leading zeros so that it has a total of 5 digits. The resulting padded number is then assigned to the padded variable and logged to the console.

### Using string concatenation

You can also pad a number with leading zeros using string concatenation. Here's the syntax for using string concatenation to pad a number with leading zeros:

```
"0".repeat(length - number.toString().length) + number.toString()
```

The number is the number that you want to pad with leading zeros. The length is the total number of digits that the padded number should have.

Here's an example of using string concatenation to pad a number with leading zeros:

```
let number = 42;
let padded = "0".repeat(5 - number.toString().length) + number.toString();
console.log(padded); // "00042"
```

In this example, we use string concatenation to pad the number 42 with leading zeros so that it has a total of 5 digits. The resulting padded number is then assigned to the padded variable and logged to the console.

### Using the slice() method

You can also pad a number with leading zeros using the slice() method. Here's the syntax for using the slice() method to pad a number with leading zeros:

```
("00000" + number).slice(-length)
```

The number is the number that you want to pad with leading zeros. The length is the total number of digits that the padded number should have.

Here's an example of using the slice() method to pad a number with leading zeros:

```
let number = 42;
let padded = ("00000" + number).slice(-5);
console.log(padded); // "00042"
```

In this example, we use the slice() method to pad the number 42 with leading zeros so that it has a total of 5 digits."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7615683,"math_prob":0.9882337,"size":2933,"snap":"2023-14-2023-23","text_gpt3_token_len":651,"char_repetition_ratio":0.21304199,"word_repetition_ratio":0.5638945,"special_character_ratio":0.24002728,"punctuation_ratio":0.10278746,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9962897,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T11:57:31Z\",\"WARC-Record-ID\":\"<urn:uuid:9971d6c2-33d4-456c-b58a-5434adc47017>\",\"Content-Length\":\"129481\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8656d079-7bb6-4dfd-80bc-e6a0157bff6e>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1c0738c-3f7a-42d8-984b-70e231a44dc2>\",\"WARC-IP-Address\":\"157.7.107.216\",\"WARC-Target-URI\":\"https://plantpot.works/8581\",\"WARC-Payload-Digest\":\"sha1:YLLZKWU7TBHQ5324VMNEL5RXAW5BIWMG\",\"WARC-Block-Digest\":\"sha1:4OF5BE3Y46P3ASR527ZNSQI255ML7HAI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224656675.90_warc_CC-MAIN-20230609100535-20230609130535-00599.warc.gz\"}"} |
https://eng.kakprosto.ru/how-9862-how-to-make-a-proportion | [
"Instruction
1
Suppose that your salary is 10,000 rubles a month. This number will be the numerator of the first fraction. Since your salary is your entire income for the month, we will take it as 100 percent. This number will be the denominator of the first fraction. So the first fraction is 10000/100. Make a similar fraction with your own numbers.
2
You need to calculate the tax that will be withheld from your paycheck per month. The tax on the income of physical persons in our country is 13 percent. This number will be the denominator of the second fraction. And since we don't know the amount of tax deducted from you, let's call it "x". The number "x" is the numerator of the second fraction. So the second fraction is x/13.
3
Make the proportion, that is, write both fractions and put an equality sign between them. Our proportion is 10000/100 = x/13. In order to solve the proportion, you need to cross-multiply: multiply the two known terms on the diagonal and divide by the remaining known term. For example: x = 10000*13/100. Hence, x = 1300. This is the amount of tax withheld from you in a month with an income of 10,000 rubles. Now solve your own proportion."
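The cross-multiplication in step 3 is easy to check mechanically; here is the same calculation written out in Python (just as an illustration, any calculator works):

```python
salary = 10000   # monthly income, taken as 100 percent
tax_rate = 13    # personal income tax rate, in percent

# Proportion: salary / 100 = x / tax_rate  =>  x = salary * tax_rate / 100
x = salary * tax_rate / 100
print(x)  # 1300.0
```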
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9345704,"math_prob":0.98581177,"size":1098,"snap":"2019-26-2019-30","text_gpt3_token_len":265,"char_repetition_ratio":0.1380256,"word_repetition_ratio":0.05235602,"special_character_ratio":0.2650273,"punctuation_ratio":0.12340426,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9941334,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-21T05:01:24Z\",\"WARC-Record-ID\":\"<urn:uuid:ba53c927-f493-46c7-a893-c2532d249db7>\",\"Content-Length\":\"28225\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dea698bd-c280-476d-a30b-a75d2663b322>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a7f4e20-2eff-4ab6-9906-484f1c6fb55b>\",\"WARC-IP-Address\":\"95.213.175.86\",\"WARC-Target-URI\":\"https://eng.kakprosto.ru/how-9862-how-to-make-a-proportion\",\"WARC-Payload-Digest\":\"sha1:S76CFVGGS4PJZJHE2CFHHPYE6X6FY6TV\",\"WARC-Block-Digest\":\"sha1:3POORXKRB2CUG7QC33K2FPITR7PGAOB2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526888.75_warc_CC-MAIN-20190721040545-20190721062545-00065.warc.gz\"}"} |
https://learnphp.org/php-array-intersect-uassoc-function-with-example | [
"",
null,
"# PHP array_intersect_uassoc() function (with example)

Hi everyone,

I hope you're doing well. I recently came across the PHP `array_intersect_uassoc()` function and I'm having a bit of trouble understanding how it works. I've read the documentation, but I'm still a bit confused. I was hoping someone here could help shed some light on it for me.

To provide some context, I'm currently working on a project where I need to compare two arrays and find the common elements based on their keys and values, but with a custom comparison of the keys. I understand that `array_intersect_assoc()` can be used to find the common elements based on their keys and values, but it compares keys with a plain string comparison and doesn't allow for custom comparisons. This is where `array_intersect_uassoc()` comes in.

I've looked at the syntax and I understand that it takes in two or more arrays as arguments, followed by a callback function that is used for comparing the keys. However, I'm not entirely sure how to write the callback function and how it affects the comparison process.

It would be really helpful if someone could provide me with an example of using `array_intersect_uassoc()` with a custom callback function. I think having a real-world example would make it easier for me to understand and implement it in my project.

Thank you so much in advance for your help. I really appreciate it.

Best regards,

## All Replies",
null,
"I've actually used `array_intersect_uassoc()` before, so I hope I can help clarify things for you.

When using `array_intersect_uassoc()`, the callback function you provide is used for comparing the keys only; the values are still compared with PHP's regular loose comparison (`==`). The callback takes two parameters, the two keys being compared, and must return an integer less than, equal to, or greater than zero depending on how the first key compares to the second.

Let's say you have two arrays, `$array1` and `$array2`, and you want to find the entries that appear in both arrays with the same key (under your own key comparison) and the same value. Here's an example:

```php
$array1 = array("apple" => 2, "banana" => 5, "cherry" => 3);
$array2 = array("apple" => 4, "banana" => 5, "cherry" => 8);

$result = array_intersect_uassoc($array1, $array2, function ($a, $b) {
    return strcmp($a, $b); // custom key comparison logic goes here
});

print_r($result);
```

In this example, the callback compares the keys with `strcmp()`. All three keys exist in both arrays, but only "banana" has an equal value in both (`5 == 5`), so it is the only entry kept.

The result would be:

```
Array
(
    [banana] => 5
)
```

Since "banana" is the only key present in both arrays whose values are also equal, it is the only element returned by `array_intersect_uassoc()`.

I hope this example helps you understand how to use `array_intersect_uassoc()` with a custom callback function. Let me know if you have any further questions!

Best regards,",
null,
"Hey there,\n\nI noticed your question regarding `array_intersect_uassoc()` and thought I could share my personal experience with using it.\n\nIn one of my recent projects, I had to compare two arrays based on their keys and values, but with a complex comparison logic. `array_intersect_uassoc()` came to my rescue, allowing me to define my custom callback function.\n\nTo give you an example, let's suppose we have two arrays: `\\$array1` and `\\$array2`. We want to find the common elements between them, comparing both keys and values using a customized logic in the callback function. Here's how I accomplished it:\n\n```php```\\$array1 = array(\"apple\" => 2, \"banana\" => 5, \"cherry\" => 3);\n\\$array2 = array(\"apple\" => 4, \"banana\" => 7, \"cherry\" => 3);\n\n\\$result = array_intersect_uassoc(\\$array1, \\$array2, function (\\$a_key, \\$b_key, \\$a_value, \\$b_value) {\nif (\\$a_key === \"apple\" && \\$b_key === \"apple\") {\nreturn \\$a_value - (\\$b_value * 2); // Custom comparison logic for \"apple\" key\n}\n\n// Default comparison for other keys\nreturn \\$a_value <=> \\$b_value;\n});\n\nprint_r(\\$result);\n``````\n\nIn this example, I added a specific comparison condition for the \"apple\" key. If both arrays have this key, I compared their values using a custom logic (`\\$a_value - (\\$b_value * 2)`). For other keys, I used the default comparison operator `<=>` to compare their values.\n\nThe resulting output would be:\n\n``````Array\n(\n[cherry] => 3\n)\n``````\n\nBased on the custom comparison logic, only the \"cherry\" element is considered common between the two arrays and is returned by `array_intersect_uassoc()`.\n\nI hope this provides you with a better understanding of how to use `array_intersect_uassoc()` with a custom callback function. If you have any further questions, feel free to ask!\n\nBest regards,"
] | [
null,
"https://learnphp.org/upload/spinner-logo.png",
null,
"https://www.gravatar.com/avatar/a90ae39cdee305081dbeb45c2433a63b",
null,
"https://www.gravatar.com/avatar/5ef410540b84bc0b6835022d0a233cd0",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9208091,"math_prob":0.8194593,"size":2601,"snap":"2023-40-2023-50","text_gpt3_token_len":558,"char_repetition_ratio":0.14863303,"word_repetition_ratio":0.047732696,"special_character_ratio":0.2237601,"punctuation_ratio":0.10212766,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96272,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T09:55:13Z\",\"WARC-Record-ID\":\"<urn:uuid:26bfb498-9b7f-4efc-85f0-5a6b8743a254>\",\"Content-Length\":\"82470\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c1966367-ba40-4041-a3aa-d0b1c415a94d>\",\"WARC-Concurrent-To\":\"<urn:uuid:7547c66d-f30c-422b-b54b-ce4631ea6d1d>\",\"WARC-IP-Address\":\"35.213.167.160\",\"WARC-Target-URI\":\"https://learnphp.org/php-array-intersect-uassoc-function-with-example\",\"WARC-Payload-Digest\":\"sha1:QJHLQXQT3Z2KJZ27AF7QYGOJQ3VUTISJ\",\"WARC-Block-Digest\":\"sha1:3MDCOW6QQJ46Q2YKX7UM34A3D52QB5J5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510387.77_warc_CC-MAIN-20230928095004-20230928125004-00607.warc.gz\"}"} |
https://socratic.org/questions/how-do-you-find-the-general-solutions-for-sinx-2tanx-0#163780 | [
"# How do you find the general solutions for sinx+2tanx=0?\n\nNote that since $\\tan x = \\sin \\frac{x}{\\cos} x$, $\\sin x = \\cos x \\tan x$\n$\\sin x + 2 \\tan x = 0$\n$\\cos x \\tan x + 2 \\tan x = 0$\n$\\tan x \\left(\\cos x + 2\\right) = 0$\n$\\cos x = - 2$ (no solutions) or $\\tan x = 0 \\setminus R i g h t a r r o w x = k \\pi , k \\in \\mathbb{Z}$"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5924982,"math_prob":1.00001,"size":236,"snap":"2021-43-2021-49","text_gpt3_token_len":62,"char_repetition_ratio":0.12068965,"word_repetition_ratio":0.0,"special_character_ratio":0.24152543,"punctuation_ratio":0.069767445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000098,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-15T23:06:51Z\",\"WARC-Record-ID\":\"<urn:uuid:3a4e16de-aebc-4763-ab38-5c840ab2cab2>\",\"Content-Length\":\"32924\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fab7b335-7ef9-4468-a7e1-b0391b1970b2>\",\"WARC-Concurrent-To\":\"<urn:uuid:97ae33ab-dcf9-47ad-8f4d-762daf53048f>\",\"WARC-IP-Address\":\"216.239.36.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-find-the-general-solutions-for-sinx-2tanx-0#163780\",\"WARC-Payload-Digest\":\"sha1:7GP52TFHVNCVSVULVRMWIRGEYRGCLRRT\",\"WARC-Block-Digest\":\"sha1:RAG4TLQNLEW5YCV2XHWHC6WTVXZN2QMS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323583087.95_warc_CC-MAIN-20211015222918-20211016012918-00505.warc.gz\"}"} |
https://www.circuitsell.com/en/indicator-verification | [
"# Indicator verification\n\nIndicator verification instructions\n\nThe following two practical methods can be used to detect the accuracy of 0.5% FS, and the detection accuracy is higher than 0.00625% FS.\n\n1Input and output signal equivalent product testing:\n\nThe relays and the semaphore switch in the figure are common components (the foot switch can be used), and there is no special requirement for contact resistance and withstand voltage. The accuracy, drift, linearity and other indicators of the meter can be 0.5% FS, but the resolution should be 1μA (4-bit 1/2 portable or benchtop).\n\nThis is a relative error measurement method; the instrument with the input and output signals equivalent (such as input and output signals are 4-20 mA), and the size of the two signals is detected by a single measurement meter. In a short period of time (such as 10 seconds), the meter will not change, and the relative errors of the two signals measured during that time will not change. The measurement accuracy is independent of the size of the input signal and is independent of the absolute value of the measurement table, only related to the resolution of the measurement table. The resolution of the measurement table reaches 1μA, and the absolute error of the measured meter is very close to 1μA.\n\nRelative error: The measurement error of 1 μA relative to 4-20 mA signal is 0.00625% FS, which is 80 times higher than the accuracy of the measurement table itself (0.5%).\n\nAbsolute error: Depending on the resolution of the meter and the linearity error, if the linearity error of the meter is 0.5% FS, set the input and output difference to ∆I, and the linear error of the difference is ∆I/20mA*0.5%. The sum of this value and the resolution of the meter is the absolute error. 
For example, the input-output difference is ±10μA, and the absolute error of the measurement itself is 1μA+10uA/20mA*0.5%=1.0000025μA, which is regarded as 1μA, that is, the absolute error is measured as the resolution of the measurement meter.\n\nThe ratio of automatic control \"equivalent\" instruments is higher than 80%, so the relative value measurement method is widely used. Metering problems for most \"equivalent\" meters can be solved on site with low-cost meters.\n\n2Input and output signals are not equivalent to product detection:\n\nThe universal safety barrier (isolator) uses four input terminals to switch signal inputs such as power distribution, current, thermocouple, thermal resistance, millivolt, and slip line resistance. The system commonly used input signals are included. During on-site construction, the signal configuration can be performed in the state where the power supply, input and output terminals are suspended (not energized), and the instrument and signal generator are not required. The accuracy after power-on is better than 0.03% FS.\n\nVerification is necessary, but if all of the general barrier (isolator) input signals are detected, the workload is huge and even an unfinished task. Therefore, the actual inspection is mainly based on the process requirements, and the verification is spot-checking.\n\nThe general-purpose safety barrier (isolator) contains 4~20mA signal and current input signals, which are equivalent to the output signal and can be detected by relative error detection method. The internal reference and measurement circuit used by the distributor signal and current input signal are the same component as the other non-equivalent signals. 
If the output value corresponding to the distribution signal or current input signal is qualified, from the verification accuracy, After the other input signals are configured, more tests are performed, but they can be left unchecked.\n\nIn addition to the necessary metrological verification (concentrated on pre-factory calibration), the relative error detection method avoids the use of high-precision detection instruments and signal generators to detect the safety barrier (isolator), and does not require the inspection table. Periodic measurement calibration (relative value measurement does not have transmission error). At the same time, the calculation of the error is simplified, and the measurement can be performed at any point within the range. The difference between the input and output differences is 0.01% FS per 1.6 μA.",
null,
""
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8965908,"math_prob":0.96233237,"size":4380,"snap":"2020-45-2020-50","text_gpt3_token_len":927,"char_repetition_ratio":0.17413163,"word_repetition_ratio":0.0116959065,"special_character_ratio":0.20913242,"punctuation_ratio":0.11193112,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95308894,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T16:15:03Z\",\"WARC-Record-ID\":\"<urn:uuid:043e5e86-9d98-4740-9943-fe23f19cd6bc>\",\"Content-Length\":\"48008\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38a72f5d-035f-4220-869a-b68fdb3a3f3c>\",\"WARC-Concurrent-To\":\"<urn:uuid:df52e24c-0c54-4d24-b2fa-9d7e3dccf7f1>\",\"WARC-IP-Address\":\"50.116.8.160\",\"WARC-Target-URI\":\"https://www.circuitsell.com/en/indicator-verification\",\"WARC-Payload-Digest\":\"sha1:L2ZSIXBRGGF6XLCCP5NDHOFOWIZW3ZDV\",\"WARC-Block-Digest\":\"sha1:LC3J6SR5IGIIPAXYYPUTOCSL4H3ESNNX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107904834.82_warc_CC-MAIN-20201029154446-20201029184446-00071.warc.gz\"}"} |
http://laxbytes.com/binwomstats19/PRImidf001010002.php | [
"``` EXPLANATION\nRANK PERCENTILE VALUE * WEIGHT = NET\nGames Played 10 * 0.20 = 2.00\nGames Started 9 * 0.30 = 2.70\nGoals >100 90 14 * 0.25 = 3.50\nAssists >100 74 1 * 0.21 = 0.21\nTotal Points >100 86 15 * 0.05 = 0.75\nDraw Controls >100 94 21 * 0.45 = 9.45\nTotal Shots >100 95 44 * 0.10 = 4.40\nShots on Goal >100 93 29 * 0.10 = 2.90\nMissed Shots Off Goal >100 96 15 * -0.05 = -0.75\nShot Percentage >100 65 0.32 * 1.00 = 0.32\nShots-on-goal Percentage >100 62 0.66 * 1.00 = 0.66\nGround Balls >100 81 11 * 0.45 = 4.95\nTurnovers >100 85 11 * -0.60 = -6.60\nCaused Turnovers >100 88 7 * 2.00 = 14.00\nFree Position Goals >100 89 2 * 0.03 = 0.05\nFree Position Misses >100 95 6 * -0.10 = -0.60\nGoals Saved -- -- 0 * 0.80 = 0.00\nGoals Allowed -- -- 0 * -0.80 = -0.00\nShots Faced -- -- 0 * 0.01 = 0.00\nSave Percentage -- -- 0.00 * 5.00 = 0.00\n\nPIR RAW (NET/GamesPlayed) 4.49\nOffensive Factor (OF) 0.57\nDefensive Factor (DF) 0.68\nStrength Schedule (SOS) 0.86\nBasis Points 2.50\n-------\nPIR = (PRI RAW)*((OF+DF)/2)*SOS + Basis Points 4.93\n\nList of all players for Bucknell\nList of all players for Division I\n\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5014467,"math_prob":0.9945052,"size":1083,"snap":"2019-13-2019-22","text_gpt3_token_len":459,"char_repetition_ratio":0.14272475,"word_repetition_ratio":0.00896861,"special_character_ratio":0.58171743,"punctuation_ratio":0.17562725,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9815631,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-25T15:51:52Z\",\"WARC-Record-ID\":\"<urn:uuid:5e1eff12-139d-4904-aef0-1a58a172a9a9>\",\"Content-Length\":\"52699\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:05da9cd0-eb75-4df6-9cf2-a68a294cdc83>\",\"WARC-Concurrent-To\":\"<urn:uuid:336f8a02-2279-4883-a3be-2b2943c3ab41>\",\"WARC-IP-Address\":\"50.63.209.1\",\"WARC-Target-URI\":\"http://laxbytes.com/binwomstats19/PRImidf001010002.php\",\"WARC-Payload-Digest\":\"sha1:HNNH23KVQKC7DNFSVQN5O7MUMRCEN4TG\",\"WARC-Block-Digest\":\"sha1:5VFUYFVOODOVVIB23TA23ZUAZ2KWBVL3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912204077.10_warc_CC-MAIN-20190325153323-20190325175323-00050.warc.gz\"}"} |
https://zbmath.org/?q=an:0674.35008 | [
"# zbMATH — the first resource for mathematics\n\nRegularity for minima of functionals with p-growth. (English) Zbl 0674.35008\nWe prove that the first derivatives of scalar minima of functionals of the type $I(u)=\\int f(x,u,\\nabla u)dx,$ are Hölder continuous. Here $$f(x,u,\\nabla u)\\approx | \\nabla u|^ p,$$ $$1<p<\\infty$$ and f is assumed Hölder continuous in x and u.\nWe give two applications. One to the regularity theory of quasiregular mappings and the other to quasilinear degenerate elliptic equations with p-growth.\nReviewer: J.J.Manfredi\n\n##### MSC:\n 35B65 Smoothness and regularity of solutions to PDEs 35J70 Degenerate elliptic equations 35J60 Nonlinear elliptic equations 35B05 Oscillation, zeros of solutions, mean value theorems, etc. in context of PDEs 35A15 Variational methods applied to PDEs 35D10 Regularity of generalized solutions of PDE (MSC2000)\nFull Text:\n##### References:\n Bojarski, B; Iwaniec, T, Analytical foundations of the theory of quasiconformal mappings in ofr^{n}, Ann. acad. sci. fenn. ser. AI math., 8, 257-324, (1983) · Zbl 0548.30016 Campanato, S, Equazione ellittiche del secondo ordine e spazi L2,λ, Ann. mat. pura appl., 69, 321-380, (1965) · Zbl 0145.36603 DiBenedetto, E, C1 + α local regularity of weak solutions of degenerate elliptic equations, Nonlinear anal.: theory, methods appl., 7, 827-850, (1983) · Zbl 0539.35027 Giaquinta, M, Multiple integrals in the calculus of variations and nonlinear elliptic systems, () · Zbl 1006.49030 Giaquinta, M; Giusti, E, Differentiability of minima of non-differentiable functionals, Invent. math., 72, 285-298, (1983) · Zbl 0513.49003 Giaquinta, M; Giusti, E, On the regularity of the minima of variational integrals, Acta math., 148, 31-46, (1982) · Zbl 0494.49031 Granlund, S; Lindqvist, P; Martio, O, Conformally invariant variational integrals, Trans. amer. math. 
soc., 277, 43-73, (1983) · Zbl 0518.30024 Giaquinta, M; Modica, G, Remarks on the regularity of the minimizers of certain degenerate functionals, Manuscripta math., 57, 55-99, (1986), Preprint · Zbl 0607.49003 Gilbart, D; Trudinger, N.S, Elliptic partial differential equations of second order, (1977), Springer Berlin/Heidelberg/New York Iwaniec, T, Regularity theorems for solutions of partial differential equations for quasiconformal mappings in several dimensions, Dissertationes math., 198, (1982) · Zbl 0524.35019 Lewis, J, Regularity of the derivatives of solutions to certain elliptic equations, Indiana univ. math. J., 32, 849-858, (1983) · Zbl 0554.35048 Ladyzhenskaya, O.A; Ural’tseva, N.N, Linear and quasilinear elliptic equations, (1968), Academic Press New York · Zbl 0164.13002 Manfredi, J, Regularity of the gradient for a class of nonlinear possibly degenerate elliptic equations, () Tolksdorff, P, Regularity for a more general class of quasilinear elliptic equations, J. differential equations, 51, 126-150, (1984) Uhlenbeck, K, Regularity for a class of nonlinear elliptic systems, Acta math., 138, 219-240, (1977) · Zbl 0372.35030 Ural’tseva, N, Degenerate quasilinear elliptic systems, (), 184-222, [In Russian] · Zbl 0199.42502\nThis reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7139162,"math_prob":0.97168034,"size":3819,"snap":"2021-31-2021-39","text_gpt3_token_len":1202,"char_repetition_ratio":0.1457405,"word_repetition_ratio":0.010968922,"special_character_ratio":0.3257397,"punctuation_ratio":0.24904214,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97577447,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T07:43:49Z\",\"WARC-Record-ID\":\"<urn:uuid:c2d22e2c-7449-4da8-937c-f8cd207ec9a3>\",\"Content-Length\":\"54960\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7bb10f1c-1d2b-42d6-a23d-e81794fd582d>\",\"WARC-Concurrent-To\":\"<urn:uuid:10c61314-1c09-405f-a950-952de9f6c29c>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/?q=an:0674.35008\",\"WARC-Payload-Digest\":\"sha1:UV67X4IFNMFSHR7OXFVHAQY67ISAPHRJ\",\"WARC-Block-Digest\":\"sha1:F5ND5RQFIQ6X6T25RJ3CKXT5ZZ727EFF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056348.59_warc_CC-MAIN-20210918062845-20210918092845-00262.warc.gz\"}"} |
http://www.mytechinterviews.com/globe-walker | [
"# Globe Walker",
null,
"Question: How many points are there on the globe where, by walking one mile south, then one mile east and then one mile north, you would reach the place where you started?\n\nAnswer: The trivial answer to this question is one point, namely, the North Pole. But if you think that answer should suffice, you might want to think again! 🙂\n\nLet’s think this through methodically. If we consider the southern hemisphere, there is a ring near the South Pole that has a circumference of one mile. So what if we were standing at any point one mile north of this ring? If we walked one mile south, we would be on the ring. Then one mile east would bring us back to same point on the ring (since it’s circumference is one mile). One mile north from that point would bring us back to the point were we started from. If we count, there would be an infinite number of points north of this one mile ring.\n\nSo what’s our running total of possible points? We have 1 + infinite points. But we’re not done yet!\n\nConsider a ring that is half a mile in circumference near the South Pole. Walking a mile along this ring would cause us to circle twice, but still bring us to back to the point we started from. As a result, starting from a point that is one mile north of a half mile ring would also be valid. Similarly, for any positive integer n, there is a circle with radius\n\nr = 1 / (2 * pi * n)\n\ncentered at the South Pole. Walking one mile along these rings would cause us to circle n times and return to the same point as we started. There are infinite possible values for n. Furthermore, there are infinite ways of determining a starting point that is one mile north of these n rings, thus giving us (infinity * infinity) possible points that satisfy the required condition.\n\nSo the real answer to this question is 1 + infinity * infinity = infinite possible points!\n\nLiked this post? 
Please Digg or Stumble it!\n\nIf you're looking for some serious preparation for your interviews, I'd recommend this book written by a lead Google interviewer. It has 189 programming questions and solutions:",
null,
"",
null,
"## 4 Responses\n\n1.",
null,
"Andrei says:\n\nAnother answer is on the ecuator, +- half a mile on longitude, than the mile is equal on bouth longitude and latitude and you came back where you started.\n\n2.",
null,
"Swift says:\n\nOh yeah, nicely explained. It is true that there are infinite points where that could be the case. I was glad when I read through your answer because I already knew the answer to this one, and you definitely got it right.\n\nBut let’s be picky shall we? The problem did state that it was on the globe, so in fact the place at the north pole where this occurs is in fact at the magnetic north pole (which is constantly changing position, and it physically a south pole). Likewise, the ring of infinite positions just over a mile north of the south pole is in fact centered on the magnetic south pole.\n\n3.",
null,
"manful says:\n\nWhich part of the solution don’t you understand?\n\n4.",
null,
"asad naqvi says:\n\nhey,\ni didnt quite understand this solution.\ncan someone pls xplain it in a simpler manner.\n\nthanks\n\nXHTML: These are some of the tags you can use: `<a href=\"\"> <b> <blockquote> <code> <em> <i> <strike> <strong>`"
] | [
null,
"http://www.mytechinterviews.com/wp-content/uploads/2010/02/cartoon-globe.png",
null,
"http://images.amazon.com/images/P/0984782850.01.LZZZZZZZ.jpg",
null,
"http://www.mytechinterviews.com/wp-content/uploads/2016/03/button_get-the-book-now.png",
null,
"http://0.gravatar.com/avatar/38ac81712799e9600327f13c5626d7ba",
null,
"http://1.gravatar.com/avatar/1c995ccbab6a3841d64405f1b408ce09",
null,
"http://1.gravatar.com/avatar/45657283f602e28cc77a9f3e470a184b",
null,
"http://0.gravatar.com/avatar/cd735f545ca77136e53726b32cbec244",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9541976,"math_prob":0.87974775,"size":2786,"snap":"2019-51-2020-05","text_gpt3_token_len":608,"char_repetition_ratio":0.15276779,"word_repetition_ratio":0.02303263,"special_character_ratio":0.21931084,"punctuation_ratio":0.10034602,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9847225,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,4,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-07T06:48:13Z\",\"WARC-Record-ID\":\"<urn:uuid:263fde6d-ff49-4b02-9dfa-83eb0f4d3607>\",\"Content-Length\":\"31314\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:964685df-190e-4690-8702-4fe7d4710fc1>\",\"WARC-Concurrent-To\":\"<urn:uuid:cea62c63-900a-4a63-9f61-ff76849572db>\",\"WARC-IP-Address\":\"34.198.120.227\",\"WARC-Target-URI\":\"http://www.mytechinterviews.com/globe-walker\",\"WARC-Payload-Digest\":\"sha1:FRNOO3A3BOTNPXNARFK2NSNFP5FTJMOF\",\"WARC-Block-Digest\":\"sha1:3Z2IFL2OMVT2RHI2LX4G22YX2CIJLJJA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540496492.8_warc_CC-MAIN-20191207055244-20191207083244-00230.warc.gz\"}"} |
https://www.hindawi.com/journals/aaa/2014/381753/ | [
"/ / Article\n\nResearch Article | Open Access\n\nVolume 2014 |Article ID 381753 | 11 pages | https://doi.org/10.1155/2014/381753\n\n# Analytical Solutions of a Space-Time Fractional Derivative of Groundwater Flow Equation\n\nAccepted12 Sep 2013\nPublished21 Jan 2014\n\n#### Abstract\n\nThe classical Darcy law is generalized by regarding the water flow as a function of a noninteger order derivative of the piezometric head. This generalized law and the law of conservation of mass are then used to derive a new equation for groundwater flow. Two methods including Frobenius and Adomian decomposition method are used to obtain an asymptotic analytical solution to the generalized groundwater flow equation. The solution obtained via Frobenius method is valid in the vicinity of the borehole. This solution is in perfect agreement with the data observed from the pumping test performed by the institute for groundwater study on one of their boreholes settled on the test site of the University of the Free State. The test consisted of the pumping of the borehole at the constant discharge rate and monitoring the piezometric head for 350 minutes. Numerical solutions obtained via Adomian method are compared with the Barker generalized radial flow model for which a fractal dimension for the flow is assumed. Proposition for uncertainties in groundwater studies was given.\n\n#### 1. Introduction\n\nThe real problem encounter in groundwater studies up to now is the real shape of the geological formation in which water flows in the aquifer under investigation. However, there are many fractured rock aquifers where the flow of groundwater does not fit conventional geometries , for example, in South Africa, the Karoo aquifers, characterized by the presence of a very few bedding parallel fractures that serve as the main conduits of water in the aquifers . 
Attempts to fit the solution of the groundwater flow equation to experimental data from field observations (in particular, the observed drawdown) for all times yield a fit that undervalues the observed drawdown at early times and overvalues it at later times. The deviation of observations from theoretically predicted values is usually an indication of uncertainties in the prediction. To investigate the first possibility, Botha et al. developed a three-dimensional model for the Karoo aquifer on the campus of the University of the Free State. This model is based on the conventional, saturated groundwater flow equation for density-independent flow: $S_{0}\,\partial_{t}\Phi=\nabla\cdot\left(K\nabla\Phi\right)+f(x,t)$, (1) where $S_{0}$ is the specific storativity, $K$ the hydraulic conductivity tensor of the aquifer, $\Phi$ the piezometric head, $f(x,t)$ the strength of any sources or sinks, with $x$ and $t$ the usual spatial and time coordinates, $\nabla$ the gradient operator, and $\partial_{t}$ the time derivative.\n\nThis model showed that the dominant flow field in these aquifers is vertical and linear and not horizontal and radial as commonly assumed. However, more recent investigations suggest that the flow is also influenced by the geometry of the bedding-parallel fractures, a feature that (1) cannot account for. It is therefore possible that the equation may not be applicable to flow in these fractured aquifers.\n\nIn an attempt to circumvent this problem, Barker introduced a model in which the geometry of the aquifer is regarded as a fractal. Although this model has been applied with reasonable success in the analysis of hydraulic tests from boreholes in Karoo aquifers, it introduces parameters for which no sound definition exists in the case of noninteger flow dimensions.\n\nAs a review of the derivation of (1) will show, the Darcy law is used as a keystone in the derivation of (1). 
This law, proposed by Darcy early in the 19th century, relies on experimental results obtained from the flow of water through a one-dimensional sand column, the geometry of which differs completely from that of a fracture. There is therefore a possibility that the Darcy law may not be valid for flow in fractured rock formations and is only a very crude idealization of reality. Nevertheless, the relative success achieved by Botha et al. in describing many of the properties of Karoo aquifers suggests that the basic principle underlying this law may be correct: the observed drawdown is to be related to either a variation in the hydraulic conductivity of the aquifer or a change in the piezometric head. Any new form of the law should therefore reduce to the classical form under the more common conditions. Because the hydraulic conductivity $K$ is essentially determined by the permeability of the rocks, and not by the flow pattern, the gradient term in (2) is the most likely cause of the deviation between the observed and theoretical drawdown in the Karoo formations. In the same direction, Cloot and Botha introduced the concept of a non-integer (fractional) derivative to investigate a radially symmetric form of (1), replacing the classical first-order derivative of the piezometric head by a complementary fractional derivative. However, that generalized model for the groundwater flow equation was solved numerically. In this work, a more general form of the groundwater flow equation is introduced; the Frobenius and Adomian decomposition methods are used to give an asymptotic solution of the generalized model. Because the concepts of fractional (or non-integer) order derivatives and complementary fractional order derivatives may not be widely known, both concepts are first briefly discussed below.\n\n#### 2. 
Fractional Order Derivatives\n\nOn one hand, the concept of fractional calculus is popularly believed to have stemmed from a question raised in the year 1695 by the Marquis de L’Hospital (1661–1704) to Gottfried Wilhelm Leibniz (1646–1716), which sought the meaning of Leibniz’s currently popular notation $d^{n}y/dx^{n}$ for the derivative of order $n$ when $n = 1/2$ (what if $n = 1/2$?). In his reply, dated September 30, 1695, Leibniz wrote to L’Hospital as follows: “This is an apparent paradox from which, one day, useful consequences will be drawn.” On the other hand, the concept of fractional order derivatives for a function, say $f(x)$, is based on a generalization of the Abel integral $J^{n}f(x)=\frac{1}{\Gamma(n)}\int_{0}^{x}(x-t)^{n-1}f(t)\,dt$, where $n$ is a nonzero positive integer and $\Gamma$ the Gamma function. This represents an integral of order $n$ for the continuous function $f$, whenever $f$ and all its derivatives vanish at the origin, $x=0$. This result can be extended to the concept of an integral of arbitrary order $\alpha$, defined as $J^{\alpha}f(x)=\frac{1}{\Gamma(\alpha)}\int_{0}^{x}(x-t)^{\alpha-1}f(t)\,dt$, (4) where $\alpha$ is a positive real number.\n\nLet now $m$ be the least positive integer larger than $\alpha$. Equation (4) can then be used to define the derivative of (positive) fractional order, say $\alpha$, of a function $f$ as $D^{\alpha}f(x)=\frac{d^{m}}{dx^{m}}\left[J^{m-\alpha}f(x)\right]$. Note that these results, like Abel’s integral, are only valid subject to the condition that $f^{(k)}(0)=0$ for $k=0,1,\ldots,m$.\n\n##### 2.1. Properties\n\nProperties of the operator $J^{\alpha}$ can be found in [14, 15]; we mention only the following: for $f\in C_{\mu}$, $\mu\ge -1$, $\alpha,\beta\ge 0$ and $\gamma>-1$: $J^{\alpha}J^{\beta}f(x)=J^{\alpha+\beta}f(x)$, $J^{\alpha}J^{\beta}f(x)=J^{\beta}J^{\alpha}f(x)$, and $J^{\alpha}x^{\gamma}=\frac{\Gamma(\gamma+1)}{\Gamma(\alpha+\gamma+1)}x^{\alpha+\gamma}$.\n\n#### 3. A Generalized Mathematical Groundwater Flow Model\n\nFor the sake of clarity, the generalization of the classical model for groundwater flow in the case of density-independent flow in a uniform homogeneous aquifer is considered in this paper. Consider the following radially symmetric groundwater flow equation, in which both the specific storativity, $S_{0}$, and the hydraulic conductivity, $K$, are scalar and constant quantities and $r$ is the radial dimension: $S_{0}\frac{\partial\Phi(r,t)}{\partial t}=\frac{K}{r}\frac{\partial}{\partial r}\left(r\frac{\partial\Phi(r,t)}{\partial r}\right)$. 
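As an aside, the fractional integral of order $\alpha$ discussed above can be checked numerically against the power-function rule $J^{\alpha}x^{\gamma}=\Gamma(\gamma+1)/\Gamma(\gamma+1+\alpha)\,x^{\gamma+\alpha}$. The Python sketch below (not from the paper; the midpoint quadrature and test values are illustrative) evaluates the Riemann-Liouville integral directly:

```python
import math

def rl_integral(f, x, alpha, n=100000):
    # Riemann-Liouville fractional integral
    #   J^alpha f(x) = 1/Gamma(alpha) * integral_0^x (x - t)^(alpha - 1) f(t) dt,
    # evaluated with a midpoint rule; the endpoint singularity at t = x
    # is integrable for alpha > 0.
    h = x / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += (x - t) ** (alpha - 1.0) * f(t)
    return total * h / math.gamma(alpha)

alpha, gam, x = 0.5, 2.0, 1.5
# Closed form: J^alpha t^gam = Gamma(gam+1)/Gamma(gam+1+alpha) * x^(gam+alpha)
exact = math.gamma(gam + 1.0) / math.gamma(gam + 1.0 + alpha) * x ** (gam + alpha)
approx = rl_integral(lambda t: t ** gam, x, alpha)
```

The quadrature agrees with the closed form to a fraction of a percent, which is enough to confirm the Gamma-function bookkeeping in the property list above.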
To be complete, the following set of initial and boundary conditions is added. The initial condition means that before a pumping test begins, the level of water (the initial hydraulic head) in the aquifer is a linear function of space with a positive gradient to be found; the far-field boundary condition means that during the pumping test the level of water is not affected at a very large distance from the borehole; and the borehole boundary condition means that the rate of pumping is proportional to the hydraulic conductivity.\n\nHere $Q$ is the discharge rate of the borehole, $r_{b}$ its radius, and $d$ the thickness of the aquifer from which the borehole taps. In order to include explicitly the possible effect of the geometry in the mathematical model, the radial component of the gradient of the piezometric head, $\partial\Phi/\partial r$, is replaced by a Weyl fractional derivative of order $\alpha$; the fractional derivative used in this paper is the Caputo derivative, defined as $D^{\alpha}f(t)=\frac{1}{\Gamma(m-\alpha)}\int_{0}^{t}(t-\tau)^{m-\alpha-1}\frac{d^{m}f(\tau)}{d\tau^{m}}\,d\tau$, $m-1<\alpha\le m$. This provides a generalized form (10) of the classical flow equation (1). The resulting integrodifferential equation contains the additional parameter $\alpha$, which can be viewed as a new physical parameter that characterizes the flow through the geological formations. The same transformation also generates a more general form (11) for the boundary condition at the borehole. The relations (10) and (11), together with the initial condition, represent a complete set of equations for which a solution exists. The integro-differential character of the relations makes the search for an analytical solution very difficult, however. Nevertheless, in this paper we make use of the Frobenius and Adomian decomposition methods to give an asymptotic solution.\n\n#### 4. Solution of the Generalized Flow Equation\n\n##### 4.1. Frobenius Method\n\nIn this work, to perform the Frobenius method, we consider the groundwater flow governed by the fractional (Caputo-Weyl) partial differential equation of order $\alpha$, where $\alpha$ is a real number. Also, we consider the dimension of the flow to be 2. 
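The Caputo derivative defined above can likewise be sanity-checked against the known closed form for powers, $D^{\alpha}t^{\gamma}=\Gamma(\gamma+1)/\Gamma(\gamma+1-\alpha)\,t^{\gamma-\alpha}$ for $0<\alpha<1$. The Python sketch below is illustrative (not from the paper); it feeds the first derivative of $f(t)=t^{2}$ into the Caputo integral:

```python
import math

def caputo(df, t, alpha, n=100000):
    # Caputo derivative of order 0 < alpha < 1:
    #   D^alpha f(t) = 1/Gamma(1 - alpha) * integral_0^t f'(tau) (t - tau)^(-alpha) dtau,
    # with df the first derivative of f; the midpoint rule handles the
    # integrable singularity at tau = t.
    h = t / n
    total = 0.0
    for k in range(n):
        tau = (k + 0.5) * h
        total += df(tau) * (t - tau) ** (-alpha)
    return total * h / math.gamma(1.0 - alpha)

alpha, t = 0.5, 1.0
# f(t) = t^2, so f'(tau) = 2*tau; the closed form gives
# D^0.5 t^2 = Gamma(3)/Gamma(2.5) * t^1.5.
exact = math.gamma(3.0) / math.gamma(2.5) * t ** 1.5
approx = caputo(lambda tau: 2.0 * tau, t, alpha)
```

The numerical value matches the closed form to within a fraction of a percent, which illustrates why the Caputo form is convenient here: it acts on the ordinary derivative of the head, so classical boundary data keep their meaning.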
Therefore, for (10) can then be transformed into the following equation: Applying the Laplace operator on both sides of the above equation, we have the following ordinary differential equation where is the variable of Laplace. In this matter we choose or we choose , meaning that the level of water is the same everywhere in the aquifer if the water is not taken out from the aquifer. If we let then (13) becomes We put and . To meet the condition under which Frobenius method can be used, we have to prove that and are analytical around that means we have to prove that and can be written as series. It is very obvious to see that and can be expressed as follows: We start here with the coefficients of : implying that .\n\nPutting we have that and equating the coefficients of same power we obtain the following set of equations: Therefore, the coefficients can be given with the general following recursive formula: And obviously the coefficients are given below as From the above expression we can see that and are analytical around , which follows from Frobenius method that the solution of (15) can be in the form This solution is not convergent for a large ; that is, the solution will diverge if we observe the drawdown at a position very far from the borehole from which the water is taken out. Therefore, we restrict our solution in the vicinity of the borehole; more precisely we investigate the solution when . 
That means we pump water from the borehole and we observe the drawdown in the vicinity of the borehole.\n\nThus substituting (21), , and into (15) and equating the coefficients of the same power, we obtain the following recursive formula for which the coefficients , coefficients of our series: Here we have the following: However, making use of the boundary condition in (11), we have the following: Then the general solution of the fractional Caputo-Weyl derivation of groundwater flow of order in Laplace space is given below as: To observe the behavior of the solution at the borehole, which corresponds to , the series solution is reduced here to the coefficient with order zero which is obtained when and it’s given below as In the following section the analytical asymptotic solution obtained in Laplace space via Frobenius method will be compared with the experimental data.\n\n##### 4.2. Numerical Results\n\nIn order to examine the validation of this solution, the above asymptotic solution is compared with the experimental data from the pumping test performed by the Institute for Groundwater Studies on one of their borehole settled on the campus test site of the University of the Free State. The test consisted of the pumping of the borehole at the constant discharge rate and monitoring the piezometric head for 350 minutes. The first step in the section is to discretize the range of Laplace transform since the exact Laplace transform cannot be obtained in practice. This is done as follows: For we approximate where for ; then the results obtained in the fields and Laplace transform become Using this numerical scheme, the physical data was transformed into Laplace space. A comparison between these values and asymptotic computed data can only be provided in the in real space not in Laplace space. Since it is not worth concluding the validity of this solution in Laplace space, the inverse Laplace transform is applied in (26). 
To test the validity of this solution in real space and applying the inverse Laplace transform on (26), (29) is obtained\n\nThe above solution is compared graphically to the experimental data from the pumping test performed by the institute for groundwater study on one of their borehole settled on the test site of the University of the Free State. The small difference observed in the above graph (Figure 1) is due to uncertainties in measurement and this will be discussed in Section 6 of this work. The aquifer parameters used in this models are recorded in Table 1, the observed data from field observation will be attached to this paper. Although this solution is in agreement with the experimental data, there will be the need to investigate the case where the observation can be done for a long distance. In the next section another approach will be introduced to solve the space-time-fractional derivative of groundwater flow equation, and this method is the Adomian decomposition method.\n\n Parameters Values Units 83.1 (m3 s−1) 0.025 (m) 13.1 (m s−1) 1 (m) 13.1 (m−1) 0 (m)\n\n##### 5.1. Example 1\n\nThis section is concerned with the groundwater equation with time- and space-fractional derivatives of the form Subject to the initial and boundary conditions described in (8), the level of water is assumed to be the same throughout the aquifer before the pumping so that the gradient described in (8) is zero. Furthermore, it is assumed that a fractional change in drawdown is constant for meaning .\n\nThe method used here is based on applying the operator on both sides of (30) to obtain The Adomian decomposition method [16, 17] assumes a series solution for (31) to be where the components are determined recursively. 
Substituting (32) into both sides of (30) gives Following the decomposition method, the recursive relations are introduced as It is worth noting that if the component is defined, then the remaining components can be completely determined such that each term is determined by using the previous terms, and the series solutions are thus entirely determined. Finally, the solution is approximated by the truncated series However, the inclusion of boundary conditions in fractional differential equations introduces additional difficulties. The Adomian decomposition method can handle these difficulties by using the space-fractional operator and the initial conditions only. The method provides the solution in the form of a rapidly convergent series that may lead to the exact solution in the case of integer derivatives and to an efficient numerical solution with high accuracy for 0 . The convergence of the decomposition series has been investigated in [18, 19].\n\nFollowing the recursive formula equation (35) and using the fact that , the equations below are obtained: The component was also determined and will be used, but for brevity it is not listed. In this matter five components of the decomposition series (30) were obtained for which was evaluated to have the following expansion: Applying the boundary condition yields where\n\n##### 5.2. Example 2\n\nConsider the groundwater flow equation with time- and space-fractional derivatives of the form subject to the initial condition described in (11). Furthermore we suppose that the gradient , meaning that the water level is everywhere in the aquifer at and the fractional change in drawdown is a constant and boundary condition: Following the discussion earlier, we have the below recursive formula: It follows from the recursive formula that The component was also determined and will be used, but for brevity it is not listed. 
In this matter five components of the decomposition series (41) were obtained of which was evaluated to have the following expansion: Applying the boundary condition yields where\n\n##### 5.3. Example 3\n\nConsider time fractional derivative of the groundwater flow equation with time-fractional derivatives of the form Subject to the initial and boundary conditions described in (8) it is assumed that . Following the discussion presented earlier, we obtained the below recursive formula: The component was also determined and will be used, but for brevity not listed. In this matter five components of the decomposition series (31) were obtained of which was evaluated to have the following expansion Applying the boundary conditions yield to where The normalized solutions with of (50), (51) and (52) are illustrated graphically in Figures 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, and 13 for different values of various parameters; these solutions are compared to the solution proposed by Barker . These graphs show the behaviour of the drawdown during the pumping test, first as a function of space and time from the borehole to the point of observation, secondly as function of time for a fixed distance from the borehole, and finally as a function of space for a fixed time. Based on the assumption that groundwater in an aquifer flows through an equipotential surface that are projections of -dimensional spheres onto two-dimensional space, Barker obtained the following analytical solution for an infinite aquifer with a line source: where is the incomplete gamma function and the dimension of the flow which equals the special dimension, being an integer, and the other quantities all have the same meaning as before. Although this model has been applied with reasonable success in the analysis of hydraulic tests from boreholes in Karoo aquifers, it introduces parameters for which no sound definition exists in the case of non-integer flow dimensions. 
The main raison of comparison of these results obtained via the Adomian decomposition methods with those of Barker’s fractal radial flow model is to establish a possible relationship between the fractional order of the derivative and the parameter fractal introduced earlier by Barker.\n\nTable 2 shows the theoretical values of the discharge rate and aquifer parameters used in the numerical simulations.\n\n Parameters Values Units Parameters Values Units 10.8 (m3 s−1) 0.005 None 0.15 (m) 1.05 None 0.65 (m s−1) 1.005 None 40 (m) 0.012 None 0.75 (m−1) 0.5 None 0 (m) 0.005 None 0.025 None\n\n#### 6. Discussion and Propositions\n\nAlthough the analytical solution obtained via Frobenius method fit the experimental data or have described successfully the events taking place in the vicinity of the borehole, on one hand, and the analytical solutions obtained via Adomian decomposition method was successfully compared to the solution proposed by Barker, on the other hand, the problem of choosing an appropriate geometry for the geological system in which the flow occurs still remains a challenge in groundwater studies. We personally believe that we do not propose solution to a problem because it going to be useful for the generation in which we are, but we propose solution because we hope that they will one day be useful for mankind; therefore the following proposition may not be useful for this generation because we may not have the adequate technology to perform the steps involved but it will be in the future. We think that describing the groundwater flow with one equation for the whole aquifer is unrealistic, because from one point of the aquifer to another properties including geology and geometry change, therefore the flow. Assumption such as homogeneity, isotropy, uniform thickness over the area under investigation and so on render the study of groundwater uncertain. 
In order to study the geology or the geometry of an aquifer, we have to divide the aquifer in small portions, from north to south and from east to west. The geology and the geometry of each portion can then be studied. If the results of the study reveal that the portion under investigation is for instance a hyperboloid, since there is no exact solution to groundwater flow model by hyperboloid flow and there are solutions to circular flow, then a suitable transformation can be done including transformation of hyperboloid coordinates to Cartesian coordinates, then to Cartesian coordinates to cylindrical coordinate such that this new coordinate can now be put in the equation describing groundwater flow, and the solution to the new equation can then be investigated. Henceforth knowing the real geology and geometry of this portion, the real paths flow will be known. Then having a good knowledge of each small portion including its geometry and geology, the real geometry and geology of the aquifer can be not exactly but more accurately known.\n\nFor groundwater remediation it will be possible to know where the maximum, minimum, and average chemical concentrations are found in the aquifer. We have no proof for this but we believe that this proposition will be useful in reducing uncertainties in groundwater study. It is believed that the field test gives the characteristic of an aquifer, but we believe that the field test gives both uncertainties and characteristics of the aquifer; therefore quantify uncertainties in this measurement lead us to the real picture of aquifer characteristics, henceforth we propose that the studies in groundwater should focus on both uncertainties and fields observations, because what is known is bounded by what is not known; knowing what is not known give a real picture to what was known, and it follows that the knowledge of uncertainties in groundwater study will give a clear picture of what we already know in groundwater.\n\n#### 7. 
Conclusion\n\nThe classical Darcy Law has been generalized using the concept of complementary fractional order derivatives of Weyl fractional derivative. This leads to the formulation of a new generalized form of the groundwater flow equation . The applications of Adomian decomposition and Frobenius methods were extended to obtain explicit and numerical solutions of the space-time fractional groundwater flow. The two methods were very clearly efficient and powerful techniques in finding the solutions of the proposed equations. The solution obtained via Frobenius takes into account the events taking place in the vicinity of the borehole during the pumping test whereas the solution obtained via Adomian decomposition methods takes into account the events that take place far from the point where water is pumped out, that is, in the borehole. The solution obtained via Frobenius method was in perfect agreement with the observed data obtained from the pumping test performed by the Institute for Groundwater Studies on one of their borehole settled on the campus test site of the University of the Free State. The Adomian decomposition method requires less computational work than existing approaches while supplying quantitatively reliable results. The obtained results demonstrate the reliability of the algorithms and their wider applicability to fractional evolution equations. A comparison of these results obtained via the Adomian decomposition methods with those of Barker’s fractal radial flow model suggests that there exists a relation between the fractional order of the derivative and the non-integral dimension of the flow.\n\n#### Authors’ Contribution\n\nAbdon Atangana wrote the first draft and P. D. Vermeulen read and revised it; the revised version was read and corrected by both authors.\n\n#### Conflict of Interests\n\nThe authors declare that there is no conflict of interests for this paper.\n\n1. J. H. Black, J. A. Barber, and D. J. 
Noy, “Crosshole investigations: the method, theory and analysis of crosshole sinusoidal pressure tests in fissured rocks,” Stripa Projects Internal Reports 86-03, SKB, Stockholm, Sweden. View at: Google Scholar\n2. J. F. Botha, I. Verwey van Voort, J. J. P. Viviers, W. P. Collinston, and J. C. Loock, “Karoo aquifers. Their geology, geometry and physical behaviour,” WRC Report 487/1/98, Water Research Commission, Pretoria, 1998. View at: Google Scholar\n3. G. J. van Tonder, J. F. Botha, W.-H. Chiang, H. Kunstmann, and Y. Xu, “Estimation of the sustainable yields of boreholes in fractured rock formations,” Journal of Hydrology, vol. 241, no. 1-2, pp. 70–90, 2001. View at: Publisher Site | Google Scholar\n4. J. A. Barker, “A generalized radial flow model for hydraulic tests in fractured rock,” Water Resources Research, vol. 24, no. 10, pp. 1796–1804, 1988. View at: Publisher Site | Google Scholar\n5. J. Bear, Dynamics of Fluids in Porous Media, American Elsevier Environmental Science Series, Elsevier, New York, NY, USA, 1972.\n6. A. Cloot and J. F. Botha, “A generalised groundwater flow equation using the concept of non-integer order derivatives,” Water SA, vol. 32, no. 1, pp. 1–7, 2006. View at: Google Scholar\n7. R. Courant and F. John, Introduction to Calculus and Analysis, vol. 2, John Wiley & Sons, New York, NY, USA, 1974.\n8. Y. Cherruault and G. Adomian, “Decomposition methods: a new proof of convergence,” Mathematical and Computer Modelling, vol. 18, no. 12, pp. 103–106, 1993. View at: Publisher Site | Google Scholar | MathSciNet\n9. I. Podlubny, Fractional Differential Equations, Academic Press, San Diego, Calif, USA, 1999. View at: MathSciNet\n10. I. Podlubny, “Geometric and physical interpretation of fractional integration and fractional differentiation,” Fractional Calculus & Applied Analysis, vol. 5, no. 4, pp. 367–386, 2002. View at: Google Scholar | MathSciNet\n11. K. 
Adolfsson, “Nonlinear fractional order viscoelasticity at large strains,” Nonlinear Dynamics, vol. 38, no. 1–4, pp. 233–246, 2004. View at: Publisher Site | Google Scholar | MathSciNet\n12. O. P. Agrawal, “Application of fractional derivatives in thermal analysis of disk brakes,” Nonlinear Dynamics, vol. 38, pp. 191–206, 2004. View at: Publisher Site | Google Scholar\n13. G. Afken, Mathematical Methods for Physicists, Academic Press, London, UK, 1985.\n14. K. S. Miller and B. Ross, An Introduction to the Fractional Calculus and Fractional Differential Equations, John Wiley & Sons, New York, NY, USA, 1993. View at: MathSciNet\n15. K. B. Oldham and J. Spanier, The Fractional Calculus, Academic Press, New York, NY, USA, 1974. View at: MathSciNet\n16. G. Adomian, “A review of the decomposition method in applied mathematics,” Journal of Mathematical Analysis and Applications, vol. 135, no. 2, pp. 501–544, 1988. View at: Publisher Site | Google Scholar | MathSciNet\n17. A. Atangana, “New class of boundary value problems,” Information Science Letters, vol. 1, no. 2, pp. 67–76, 2012. View at: Publisher Site | Google Scholar\n18. G. Adomian, Solving Frontier Problems of Physics: The Decomposition Method, Kluwer Academic, Dodrecht, The Netherlands, 1994. View at: MathSciNet\n19. Y. Cherruault, “Convergence of Adomian's method,” Kybernetes, vol. 18, no. 2, pp. 31–38, 1989. View at: Publisher Site | Google Scholar | MathSciNet\n\n#### More related articles\n\nWe are committed to sharing findings related to COVID-19 as quickly and safely as possible. Any author submitting a COVID-19 paper should notify us at help@hindawi.com to ensure their research is fast-tracked and made available on a preprint server as soon as possible. We will be providing unlimited waivers of publication charges for accepted articles related to COVID-19. Sign up here as a reviewer to help fast-track new submissions."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9199511,"math_prob":0.9231943,"size":27192,"snap":"2020-24-2020-29","text_gpt3_token_len":5660,"char_repetition_ratio":0.15771665,"word_repetition_ratio":0.112969756,"special_character_ratio":0.20351574,"punctuation_ratio":0.11641609,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98536974,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-31T01:33:34Z\",\"WARC-Record-ID\":\"<urn:uuid:f352a6cb-d787-43bf-a426-51c3479a390a>\",\"Content-Length\":\"1049260\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3097eda3-1859-4b07-9c90-8cd8475fa89d>\",\"WARC-Concurrent-To\":\"<urn:uuid:b84b7e85-daa5-4355-9dd5-dec988db833b>\",\"WARC-IP-Address\":\"99.84.191.103\",\"WARC-Target-URI\":\"https://www.hindawi.com/journals/aaa/2014/381753/\",\"WARC-Payload-Digest\":\"sha1:NLQNCIA5LW65766B7NSQOYBRAJ5YSRL7\",\"WARC-Block-Digest\":\"sha1:WEYO6SGQA6UVS6YGNYILDAIOLHFL4E3W\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347410535.45_warc_CC-MAIN-20200530231809-20200531021809-00594.warc.gz\"}"} |
https://www.excelfunctions.net/excel-sumsq-function.html | [
"# The Excel SUMSQ Function\n\nRelated Function:\nSUMPRODUCT\n\n## Function Description\n\nThe Excel Sumsq function returns the sum of squares of a supplied set of values.\n\nThe syntax of the function is:\n\nSUMSQ( number1, [number2], ... )\n\nwhere the number arguments are numeric values (or arrays of numeric values) that you want to find the summed squares of.\n\nIf the values supplied to the function are text values, or logical values, these will be handled as follows:\n\n Text Values: If contained in cells that are referenced by the Sumsq function, text values are ignored;If supplied directly to the Sumsq function:Text values that can be interpreted as numeric values are treated as Numbers;Text values that cannot be recognised as numeric values cause the function to return the #VALUE! error. Logical Values: If contained in cells that are referenced by the Sumsq function, logical values are ignored;If supplied directly to the Sumsq function logical values are treated as numeric values (TRUE=1, FALSE=0).\n\nIn current versions of Excel (Excel 2007 and later), you can provide up to 255 number arguments to the Sumsq function, but in Excel 2003, the function can only handle up to 30 number arguments.\n\n## Sumsq Function Examples\n\nThe spreadsheets below show two examples of the Sumsq function.\n\nFormulas:\nAB\n1ValuesSumsq\n25=SUMSQ( A2:A5 )\n32=SUMSQ( A2, A3, A4, 6 )\n41\n53\nResults:\nABC\n1ValuesSumsq\n2539= 5^2 + 2^2 + 1^2 + 3^2\n3266= 5^2 + 2^2 + 1^2 + 6^2\n41\n53\n\nThe above examples show how:\n\n• The arguments to the Sumsq function can be input directly or as references to cells containing values;\n• Each of the arguments can be an individual value or an array of values.\n\nFurther details and examples of the Excel Sumsq function are provided on the Microsoft Office website.\n\n## Sumsq Function Error\n\nIf you get an error from the Sumsq function, this is likely to be the #VALUE! error:\n\nCommon Error\n #VALUE! 
- Occurs if a value that is supplied directly to the function cannot be interpreted as a number."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6611327,"math_prob":0.97327805,"size":2140,"snap":"2019-51-2020-05","text_gpt3_token_len":571,"char_repetition_ratio":0.18024345,"word_repetition_ratio":0.09018568,"special_character_ratio":0.27476636,"punctuation_ratio":0.10705596,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9983958,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-22T14:19:34Z\",\"WARC-Record-ID\":\"<urn:uuid:4da95de2-08b4-4ea6-ae48-5c0ee5c729e6>\",\"Content-Length\":\"14867\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:48bffd54-57ad-4720-bcc0-aefde56e81b3>\",\"WARC-Concurrent-To\":\"<urn:uuid:4d3504e6-cc7a-4519-99c6-eeeda9fdf77d>\",\"WARC-IP-Address\":\"173.247.245.106\",\"WARC-Target-URI\":\"https://www.excelfunctions.net/excel-sumsq-function.html\",\"WARC-Payload-Digest\":\"sha1:Q5A7PXWGCTJ77QBAFEKL4IM5RUOJVCRW\",\"WARC-Block-Digest\":\"sha1:KOSI3PHGWAAIGRT27A4DM4GISSAFOMAB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250607118.51_warc_CC-MAIN-20200122131612-20200122160612-00130.warc.gz\"}"} |
https://brilliant.org/problems/the-well-known-mu-puzzle/ | [
"# The Well-known MU Puzzle\n\nLogic Level 3",
null,
"In Douglas Hofstadter's book, Godel, Escher, Bach, he proposes the following puzzle about formal systems.\n\nAxiom: MI is an axiom.\n\nTheorem: The axiom is a theorem by default. All the (and only all the) strings that can be derived using the following transformations on a theorem is also a theorem.\n\n Formal Rule Informal Description Example xI → xIU Changing any terminal I to IU MII → MIIU Mx → Mxx Double the string after M MIU → MIUIU xIIIy → xUy Replace three Is with an U MIIIU → MUU xUUy → xy Remove an occurence of double Us MUUI → MI\n\nProblem: Is MU a theorem?\n\nGuessing the answer to this problem is not very difficult. However, an interested problem solver could explore this questions:\n\n• If your answer was yes, what is the proof of the MU theorem, i.e, what is the sequence of transformation rules that will take you from MI to MU?\n• Is the MIU system decidable, i.e, is there a computer program to decide if a given string is a theorem?\n• If not, is the system semidecidable, i.e, is there a computer program that will eventually tell if a string is a theorem, but never tell if it isn't?\n×"
] | [
null,
"https://ds055uzetaobb.cloudfront.net/brioche/uploads/Wv4nGOQDyF-11-waterfall.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.83074456,"math_prob":0.836749,"size":1066,"snap":"2021-31-2021-39","text_gpt3_token_len":299,"char_repetition_ratio":0.1054614,"word_repetition_ratio":0.051813472,"special_character_ratio":0.239212,"punctuation_ratio":0.16814159,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9973989,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-05T00:48:30Z\",\"WARC-Record-ID\":\"<urn:uuid:2df5faed-97f5-46ab-bc8d-c64cb932c5b1>\",\"Content-Length\":\"35119\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e032337a-67a5-4f7f-a379-01cf98734050>\",\"WARC-Concurrent-To\":\"<urn:uuid:4148af5a-29d5-43de-ba19-eeaa024a8428>\",\"WARC-IP-Address\":\"104.20.35.242\",\"WARC-Target-URI\":\"https://brilliant.org/problems/the-well-known-mu-puzzle/\",\"WARC-Payload-Digest\":\"sha1:PGYHCAAECJHHNTSYR2DUJZNT3O3WWG4W\",\"WARC-Block-Digest\":\"sha1:2YYZYCHKDV7BTR2DWEIYPVRA7DZMIIWT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046155268.80_warc_CC-MAIN-20210805000836-20210805030836-00560.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/1707.00104/ | [
# On Nörlund summation and Ergodic Theory, with applications to power series of Hilbert contractions.

Christophe Cuny, Université de la Nouvelle-Calédonie, Equipe ERIM, B.P. 4477, F-98847 Noumea Cedex, and Michel Weber, IRMA, 10 rue du Général Zimmer, 67084 Strasbourg Cedex, France

###### Abstract.

We show that if $a=(a_n)_{n\ge 0}$ is a good weight for the dominated weighted ergodic theorem in $\ell^p$, $1<p<\infty$, then the Nörlund matrix $\mathcal N_a=(a_{i-j}/A_i)_{0\le j\le i}$ is bounded on $\ell^p$. We study the regularity (convergence in norm, almost everywhere) of operators in ergodic theory: power series of Hilbert contractions, and power series of $L^p$-contractions, and establish similar tight relations with the Nörlund operator associated to the modulus coefficient sequence $(|a_n|)_{n\ge 0}$.

47A35, 47D37

## 1. Introduction

Let $a=(a_n)_{n\ge 0}$ be a sequence of complex or real numbers (we take the convention …). We associate with $a$ an infinite matrix $\mathcal N_a=(a_{ij})_{i,j\ge 0}$, called a Nörlund matrix, in the following way. For every $i\ge 0$, set $A_i:=\sum_{j=0}^{i}a_j$. We then define

$$a_{ij}:=\begin{cases}a_{i-j}/A_i&\text{if }0\le j\le i\text{ and }A_i>0,\\ 0&\text{if }j>i\text{ or }A_i=0.\end{cases}\tag{1}$$

Some authors consider instead $a_{ij}:=a_j/A_i$, assuming then that $A_i$ does not vanish.

Then $\mathcal N_a$ induces naturally a (possibly unbounded) operator on $\ell^p(\mathbb N)$ for any $1\le p\le\infty$. The matter of deciding whether this operator is bounded on some (or any) $\ell^p$ is far from being solved. As noted by Bennett, it seems, so far, that the best general known condition guaranteeing that $\mathcal N_a$ is bounded on any $\ell^p$, $1<p<\infty$, is that $na_n=O(A_n)$; see for instance Borwein and Cass.
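To make definition (1) concrete, here is a small self-contained Python sketch (ours, not part of the paper) that applies the truncated Nörlund matrix of a sequence to a vector. With the constant weight $a_n\equiv 1$ it reduces to Cesàro averaging, for which Hardy's inequality gives the bound $\|\mathcal N_{\mathbf 1}u\|_{\ell^2}\le 2\|u\|_{\ell^2}$; the test vector below is an arbitrary illustrative choice.

```python
import math

# Illustrative sketch: the truncated Noerlund matrix N_a with entries
# a_{i-j}/A_i for 0 <= j <= i (and A_i = a_0 + ... + a_i), applied to u.

def norlund_apply(a, u):
    A, s = [], 0.0
    for x in a:
        s += x
        A.append(s)
    out = []
    for i in range(len(u)):
        if A[i] > 0:
            out.append(sum(a[i - j] * u[j] for j in range(i + 1)) / A[i])
        else:
            out.append(0.0)  # convention (1): the row vanishes when A_i = 0
    return out

# With a_n = 1 the matrix is the Cesaro operator: (N u)_i = (u_0+...+u_i)/(i+1).
u = [1.0, 2.0, 3.0, 4.0]
cesaro = norlund_apply([1.0] * len(u), u)

# Hardy's inequality: the Cesaro operator has l^2 operator norm 2.
ratio = math.sqrt(sum(x * x for x in cesaro)) / math.sqrt(sum(x * x for x in u))
```

On this example the output is the running averages $(1,\,1.5,\,2,\,2.5)$, and the $\ell^2$-norm ratio stays below the Hardy constant $2$.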
That condition is realized when, for instance, $(a_n)_{n\ge 0}$ is a non-increasing sequence of positive numbers.

Let $1<p<\infty$ and assume that $\mathcal N_a$ is bounded on $\ell^p(\mathbb N)$, with norm $C_p$. Then, for every $(u_i)_{i\ge 0}\in\ell^p(\mathbb N)$,

$$\sum_{i\ge 0}\Big|\frac{1}{A_i}\sum_{j=0}^{i}a_{i-j}u_j\Big|^p\le C_p^p\sum_{i\ge 0}|u_i|^p.\tag{2}$$

Equivalently, we have the dual formulation: for any sequence $(v_i)_{i\ge 0}\in\ell^q(\mathbb N)$, where $1/p+1/q=1$,

$$\sum_{j\ge 0}\Big|\sum_{i\ge j}a_{i-j}v_i/A_i\Big|^q\le C_p^q\sum_{j\ge 0}|v_j|^q.\tag{3}$$

The latter is easily seen to be also equivalent to: for any sequence $(v_i)_{i\ge 0}$,

$$\sum_{j\ge 0}\Big|\sum_{i\ge j}a_{i-j}v_i\Big|^q=\sum_{j\ge 0}\Big|\sum_{i\ge 0}a_iv_{i+j}\Big|^q\le C_p^q\sum_{j\ge 0}|A_jv_j|^q.\tag{4}$$

Moreover, it follows from (2) (applied to $u=(1,0,0,\dots)$) that

$$\sum_{i\ge 0}\frac{|a_i|^p}{A_i^p}\le C_p^p,\tag{5}$$

where $|a_i|^p/A_i^p$ has to be interpreted as $0$ when $A_i=0$.

We show that Nörlund matrices are connected with two different topics from ergodic theory. We establish tight relations between regularity (convergence in norm, almost everywhere) of operators in ergodic theory (power series of Hilbert contractions, power series of $L^p$-contractions, dominated weighted ergodic theorems) and naturally associated Nörlund matrices. We obtain conditions ensuring norm convergence of power series of Hilbert contractions, and also almost everywhere convergence of power series of $L^p$-contractions. These conditions are expressed in terms of the Nörlund operator associated to the modulus coefficient sequence $(|a_n|)_{n\ge 0}$.

## 2. Nörlund matrices and dominated weighted ergodic theorems

We first observe a connection between Nörlund matrices and dominated weighted ergodic theorems.

We say that a sequence $(a_n)_{n\ge 0}$ of complex numbers is good for the dominated weighted ergodic theorem in $L^p$, $1\le p<\infty$, if there exists $C>0$ such that for every dynamical system $(X,\mathcal A,\nu,\tau)$, writing $A_n:=\sum_{k=0}^{n}a_k$, we have

$$\Big\|\sup_{n\ge 0}\frac{1}{A_n}\Big|\sum_{k=0}^{n}a_k\,f\circ\tau^k\Big|\Big\|_{L^p(\nu)}\le C\,\|f\|_{L^p(\nu)}\qquad\forall f\in L^p(\nu).\tag{6}$$

Here again we take the convention that the $n$-th term is $0$ if $A_n=0$.

The next lemma is well known; it is in the spirit of the so-called Conze principle, see for instance [29, Th. 5.4.3]. It states a converse of Calderón’s transference principle.

###### Lemma 1.

Let $(a_n)_{n\ge 0}$ be good for the dominated weighted ergodic theorem in $L^p$, $1\le p<\infty$. Then, with $C$ the best constant appearing in (6), we have for every $(v_i)_{i\in\mathbb Z}\in\ell^p(\mathbb Z)$,

$$\sum_{i\in\mathbb Z}\Big(\sup_{n\ge 0}\frac{1}{A_n}\Big|\sum_{j=0}^{n}a_jv_{i+j}\Big|\Big)^p\le C^p\sum_{i\in\mathbb Z}|v_i|^p.\tag{7}$$

Proof. Let $(v_i)_{i\in\mathbb Z}\in\ell^p(\mathbb Z)$. Let $M\le N$ be integers (one has in mind that $M\ll N$).
Let be integers (one has in mind that ). Take , and the transformation given by if and . Define on , by for every . By (6), we have\n\n 12N+1N+1−M∑i=−N(sup0≤m≤M1Am|m∑k=0akvi+k|)p≤∥supn≥01An|n∑k=0akf∘τk|∥pLp(ν) (8) ≤Cp∥f∥pLp(ν)=Cp2N+1N∑i=−N|vi|p.\n\nMultiplying (8) by , letting first and then , we derive (7).\n\nRemark. Our proof is based on the use of the dominated weighted ergodic theorem on periodic systems (the rotations on ). To give a proof based on the dominated weighted ergodic theorem on a single (but ergodic and non-atomic) dynamical system, one could use Rohlin’s lemma (see for instance Weber [29, p. 270] for a statement of the lemma).\n\nWe deduce the following.\n\n###### Proposition 2.\n\nLet be a good weight for the dominated weighted ergodic theorem in , . Then, the Nörlund matrix is bounded on . Moreover, for every non-increasing sequence of nonnegative numbers , writing , is bounded on .\n\nRemark. It is unclear whether ” bounded on ” implies ” bounded on ”, in general.\n\nProof. Let . Define as follows. if and if . Using (7) and for every the trivial estimate\n\n 1Ai|i∑j=0ajv−i+j|≤supn≥01An|n∑j=0ajv−i+j|,\n\nwe infer that\n\n ∑i≥0(1Ai|i∑j=0ajv−i+j|)p≤Cp∑i∈Z|vi|p=Cp∑i≥0|ui|p.\n\nUsing that when , we derive that is bounded on .\n\nTo prove the last assertion, one just has to notice that, using Abel summation, is a good weight for the dominated weighted ergodic theorem. .\n\nOf course, as one can see from the above proof, the fact that be a good weight for the dominated weighted ergodic theorem in is a much stronger statement than the fact that be bounded on . Hence, Proposition 2 should not be seen as a method to prove boundedness of some Nörlund matrices, but as a source of examples of Nörlund matrices, since there are many examples of sequences that are known to be good for the dominated weighted ergodic theorem. We provide some of them below. 
One may also consult the survey by Bellow and Losert for dominated weighted ergodic theorems with bounded weights. More arithmetical sequences may be found in Cuny and Weber.

Examples. The following sequences are good for the dominated weighted ergodic theorem in $L^p$, for every $1<p<\infty$:

(a) (Bourgain and Wierdl) Let $\mathcal P$ be the set of prime numbers and take $a_n:=\mathbf 1_{\mathcal P}(n)$, for every $n\ge 0$.

(b) (Bourgain) Let $\mathcal S$ be the set of squares and take $a_n:=\mathbf 1_{\mathcal S}(n)$, for every $n\ge 0$.

(c) (Cuny and Weber) Take $a_0:=0$ and, for every $n\ge 1$, take $a_n:=d(n)$, the number of divisors of $n$.

We now give an example which does not work on every $L^p$, $1<p<\infty$. Let $(Y,\mathcal B,\mu,\theta)$ be an ergodic dynamical system. Let $g\in L^r(\mu)$, for some $r>1$.

(d) (Bourgain, Demeter, Lacey, Tao and Thiele) There exists $Y_0\subset Y$ with $\mu(Y_0)=1$ such that for every $y\in Y_0$, $(g(\theta^ny))_{n\ge 0}$ is good for the dominated weighted ergodic theorem in $L^p$ for every $p$ such that $1/p+1/r<3/2$.

Let us notice that none of the above examples satisfies the previously mentioned criterion $na_n=O(A_n)$. The fact that the Nörlund matrix associated with one of the above sequences is bounded has been proved by Borwein.

## 3. Norm convergence of power series of Hilbert contractions

Let $P$ be a contraction of a (real or complex) Hilbert space $H$. Given a sequence $(a_n)_{n\ge 0}$ of complex numbers and $f\in H$, we are interested in finding conditions involving $(\|P^nf\|_H)_{n\ge 0}$ sufficient for the norm convergence of $\sum_{n\ge 0}a_nP^nf$.

An obvious condition is the following:

$$\sum_{n\in\mathbb N}|a_n|\,\|P^nf\|_H<\infty.\tag{9}$$

Sufficient conditions involving $(\|P^nf\|_H)_{n\ge 0}$ have been obtained when $P$ is unitary (i.e. $P^*P=PP^*=\mathrm{Id}$) or, more generally, normal (i.e. $P^*P=PP^*$), if moreover $(a_n)_{n\ge 0}$ is regular (at least nonnegative and nonincreasing). Let us mention the papers … , see also … for some versions.

Recall that (see for instance Nagy and Foias; see also Schäffer for an explicit matrix construction) $P$ admits a unitary dilation, that is, there exist another Hilbert space $K$, with $H\subset K$, and a unitary operator $U$ on $K$ such that $P^nf=\pi_HU^nf$ for every $f\in H$ and $n\ge 0$, where $\pi_H$ is the orthogonal projection onto $H$.

We start with some simple lemmas.
The first one appears in Cuny and Lin , but we recall the short proof.\n\n###### Lemma 3.\n\nFor every and every , the spaces and are orthogonal (in ).\n\nProof. Let . Let and . We have\n\n ⟨(U−nPn−U−n−1Pn+1)f,U−n−ℓPn+ℓg⟩K=⟨UℓPnf,Pn+ℓg⟩K−⟨Uℓ−1Pn+1f,Pn+ℓg⟩K =⟨Pn+ℓf,Pn+ℓg⟩K−⟨Pn+ℓf,Pn+ℓg⟩K=0.\n\n###### Lemma 4.\n\nLet be such that as . Then, for every , . In particular, for any positive and non-decreasing sequence , the following are equivalent (setting ).\n\n• ;\n\n• .\n\nRemarks. Notice that by Kronecker’s lemma, if holds as . In particular, since is non-decreasing, . Item is satisfied if .\n\nProof. Since , for every , we have (with convergence in )\n\n (10) Pnf=∑k≥0(U−kPn+kf−U−k−1Pn+k+1f).\n\nBy the above lemma the terms of that series lie in orthogonal spaces. Hence,\n\n ∥Pnf∥2K =∑k≥0∥U−kPn+kf−U−k−1Pn+k+1∥2K =∑k≥n∥U−kPkf−U−k−1Pk+1∥2K,\n\nwhere we used that is unitary (and a change of variable) for the last identity. Then, the equivalence of and follows by Fubini.\n\nGiven a sequence of complex numbers , consider the following conditions\n\n (11) ∑n∈N|an|(n∑k=0|ak|)∥Pnf∥2H<∞, (12) ∑n∈N(n∑k=0|ak|)2∥U−nPnf−U−n−1Pn+1f∥2K<∞\n\nBy Lemma 4, when , (11) and (12) are equivalent. Assume that (9) holds. Then, since is nonincreasing, and (11) holds. Hence, (11) is always weaker than (9).\n\n###### Proposition 5.\n\nLet be such that be bounded on where . Let be such that either of conditions (11) or (12) holds. Then, the series converges in .\n\nProof. Since is bounded on , by (5) (with )\n\n (13) ∑n∈Na2nA2n<∞.\n\nLet be integers and write . For every , let and , where . Finally, let . By Lemma 4 and using that is unitary, we have\n\n (14) ∥Vp,qf∥2K=∑n∈N∥U−nPnVp,qf−U−n−1Pn+1Vp,qf∥2K≤∑n∈N(q∑k=p|ak|un+k)2.\n\nBy Cauchy’s criterion, one has to prove that as .
Using the Lebesgue dominated convergence theorem for the counting measure on , it suffices to prove that\n\n (15) q∑k=p|ak|un+k⟶p,q→+∞0,\n\nand that\n\n (16) ∑n∈N(∑k≥0|ak|un+k)2<∞.\n\nThe convergence (15) follows from Cauchy-Schwarz combined with the assumed conditions (12) and (13).\n\nTo prove (16), it suffices to notice that\n\n ∑n∈N(∑k≥0|ak|un+k)2≤∑n∈N(∑k≥n|ak−n|uk)2=∥N∗|a|v∥2ℓ2(N) ≤∥N∗|a|∥2∥v∥2ℓ2(N)=∥N|a|∥2∑n∈NA2nu2n.\n\nThe proposition has been proved in in the case where . An important case corresponds to the situation where for every . Then, the proposition gives a sufficient condition (namely ) for to be a coboundary (i.e. for some ). This sufficient condition has been obtained independently by Volný in the special case where is a Markov operator on . His proof (which does not appeal to the notion of Nörlund matrices) is essentially the same, since the shift on the space of trajectories of the associated Markov chain plays the role of the unitary dilation.\n\n###### Proposition 6.\n\nLet . Assume that for every contraction on a Hilbert space the following property holds: ”If (11) holds for some then converges in ”. Then, is bounded on .\n\nProof. Let be a contraction on a Hilbert space satisfying the above property. Let . Then, is a Hilbert space and we define an operator on , by setting for every . Then, by the Banach-Steinhaus theorem, is continuous. Hence, there exists , such that .\n\nLet us prove the proposition. We give a probabilistic proof. Let be the probability space given by , the product -algebra and , with . Let be the shift on and be the coordinate process. In particular, and is iid.\n\nDenote . Set and and define two operators and on and respectively by for every and for every (then is a Markov operator). Clearly, is a unitary dilation of . Let and define . Assume moreover that , or equivalently (by Lemma 4), . Notice that and that . Moreover, . Hence, , i.e.
(4) holds with , and the proof is complete.\n\nWe shall now prove that Proposition 5 cannot be improved.\n\n###### Definition 1.\n\nWe say that a contraction on is Ritt if .\n\n###### Proposition 7.\n\nLet be a contraction on . For every , consider the following properties.\n\n• The series converges in ;\n\n• .\n\nThen, . If moreover is Ritt then .\n\nRemark. By , when is a positive operator on then of the proposition implies that the series converges -almost everywhere and the associated maximal function is in . The fact that has been proved by Cohen, Cuny and Lin using results from Arhancet and Le Merdy when and is a positive Ritt contraction of some (there are also analogous results in in ).\n\nProof. The fact that is a direct application of Proposition 5. Assume that is a Ritt operator and that converges in .\n\nWe start with the case . By Proposition 4.6 of Cohen, Cuny and Lin (see also their example (v) page 8), we have\n\n ∑n≥0∥Pf+⋯+P2nf∥2H22αn<∞.\n\nThen, using (3) of Cohen, Cuny and Lin combined with Lemma 13 below, we infer that , which finishes the proof in that case.\n\nAssume now that . Let . Then, . Hence, by Theorem 8.1 of Le Merdy ,\n\n ∑n∈Nn∥Pnf∥2H=∑n∈Nn∥Pn(I−P)g∥2H≤∥g∥H,\n\nwhich is the desired result.\n\n## 4. Almost everywhere convergence of power series of L2-contractions\n\nOnce norm convergence has been proven, one may wonder, in the case where , whether almost everywhere convergence holds. As mentioned in the remark following Proposition 7, for ”regular” sequences, if is a positive contraction of then norm convergence implies almost everywhere convergence. However, as we shall see below (see Proposition 10), there is no such result for contractions that are not positive. Let us mention that the almost everywhere convergence of power series (for regular ) for unitary or normal operators on has been proven under conditions involving in and , see also for -versions.\n\n###### Theorem 8.\n\nLet be such that be bounded on where . Let .
Let be a contraction on . Let such that\n\n (17) ∑n≥1(log(n+1))2A22n+1∥P2nf∥2L2(m)<∞.\n\nThen, the series converges -almost everywhere and\n\n supN≥1∣∣N∑n=0anPnf∣∣∈L2(m).\n\nRemark. A sufficient condition for (17) is the following\n\n (18) ∑n≥1(loglog(n+3))2A24nn+1∥Pnf∥2L2(m)<∞.\n\nProof. Let . We have\n\n max2N≤n≤2N+1−1|n∑k=2NanPkf|≤2N+1−1∑k=2N|ak||Pkf|.\n\nHence,\n\n ∑N∈N∥max2N≤n≤2N+1−1|n∑k=2NanPkf|∥2L2(m)≤∑N∈NA22N+1∥P2Nf∥2L2(m)<∞.\n\nIn particular, it suffices to prove that converges and that .\n\nBy (14), for every , we have\n\n (19) ∥2q−1∑n=2panPnf∥2L2(m)≤∑n∈N(2q−1∑k=2p|ak|un+k)2.\n\nSet and notice that is super-additive in the following sense: for every , . By Proposition 2.2 of Cohen and Lin , there exists , such that for every ,\n\n ∥max22n≤m≤22n+1−1|m∑k=22nakPkf|∥2L2(m)≤C(n+1)2d(2n,2n+1−1).\n\nAssume that\n\n (20) ∑n≥0(n+1)2d(2n,2n+1−1)<∞.\n\nThen, using (19) and Cauchy-Schwarz we see that\n\n (∑n∈N∥22n+1−1∑k=22nakPkf∥L2(m))2≤∑n∈N1(n+1)2∑n∈N(n+1)2d(2n,2n+1−1).\n\nThis finishes the proof, provided that we can show (20).\n\n ∑n∈N∑ℓ≥0(ℓ+1)2(22ℓ+1−1∑k=22ℓ|ak|un+k)2<∞.\n\nUsing that , we infer that\n\n ∑n∈N∑ℓ≥0(ℓ+1)2(22ℓ+1−1∑k=22ℓ|ak|un+k)2≤∑n∈N(∑k≥0(loglog(k+3))2|ak|un+k)2 ≤∑n∈N(∑k≥0(loglog(n+k+3))2|ak|un+k)2.\n\nThen, proceeding as in the (end of the) proof of Proposition 5 we see that (20) holds provided that\n\n ∑n∈N(loglog(n+3))2A2nu2n<∞,\n\nwhich follows from (17) using that is non-decreasing and that .\n\n###### Corollary 9.\n\nLet be an ergodic dynamical system. Let for some . There exists with such that for every , setting the following holds: for every , every contraction on and every such that\n\n ∑n∈N(loglog(n+3))2(n+1)1−2α∥Pnf∥22<∞,\n\nthe sequence converges -almost everywhere and the associated maximal function is in .\n\nProof. Let and let . Let be the set appearing in the example . Modifying if necessary, we may assume that , for some finite . Then, for every , is good for the dominated weighted ergodic theorem.
Applying Proposition 2, we see that, with , is bounded on . Set (we see as a function on ). By Theorem 8 (see the remark after the theorem), we are back to prove that"
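As a concrete instance of the boundedness discussed in this paper, take the constant weight a_n ≡ 1: the associated Nörlund matrix is the Cesàro averaging matrix, which is bounded on ℓ² with norm 2 by Hardy's inequality. The quick numerical check below is illustrative only and is not taken from the paper:

```python
import math
import random

def cesaro(x):
    # (Cx)_i = (x_0 + ... + x_i) / (i + 1): the Norlund matrix with a_n = 1
    out, running = [], 0.0
    for i, v in enumerate(x):
        running += v
        out.append(running / (i + 1))
    return out

def l2_norm(x):
    return math.sqrt(sum(v * v for v in x))

random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(500)]
# Hardy's inequality: ||Cx||_2 <= 2 ||x||_2 holds on every finite section
assert l2_norm(cesaro(x)) <= 2 * l2_norm(x)
```

For p ≠ 2 the same experiment works with the ℓᵖ norm and the constant p/(p−1) in place of 2.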
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9266614,"math_prob":0.98087263,"size":12640,"snap":"2023-14-2023-23","text_gpt3_token_len":2997,"char_repetition_ratio":0.1460114,"word_repetition_ratio":0.0750111,"special_character_ratio":0.24628164,"punctuation_ratio":0.15569824,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9963343,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-01T18:20:10Z\",\"WARC-Record-ID\":\"<urn:uuid:f9253468-bb53-4af5-bc53-ef9131165af4>\",\"Content-Length\":\"1049382\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b925104-1a4d-4447-9805-3de0bb1c8dc1>\",\"WARC-Concurrent-To\":\"<urn:uuid:1bf778e2-c840-455a-9160-98468b5045c5>\",\"WARC-IP-Address\":\"104.21.14.110\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/1707.00104/\",\"WARC-Payload-Digest\":\"sha1:2WMBS6V6BLBWQUHEGIOXPYTYHZ7K5O6Z\",\"WARC-Block-Digest\":\"sha1:AJYB6K3OSBRYXO5W72Y7HBZZTB2OGDWV\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648000.54_warc_CC-MAIN-20230601175345-20230601205345-00388.warc.gz\"}"} |
https://answers.yahoo.com/question/index?qid=20090506143256AAPQMGw | [
"# If 113 is a prime number, how would you factor x^2+2x-113=0?\n\nOne of my algebra assignments was to find two consecutive integers such that the sum of their square is 113. I made the equation x(x+2)=113 and distributed, and got x^2+2x=113. Then I subtracted 113 from each side to get the Zero-Product property. But now, the only factors of 113 is 113 x 1, and that doesn't... show more One of my algebra assignments was to find two consecutive integers such that the sum of their square is 113. I made the equation x(x+2)=113 and distributed, and got x^2+2x=113. Then I subtracted 113 from each side to get the Zero-Product property. But now, the only factors of 113 is 113 x 1, and that doesn't add up to be 2. Please help! What am I doing wrong?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9645905,"math_prob":0.9838614,"size":809,"snap":"2019-35-2019-39","text_gpt3_token_len":226,"char_repetition_ratio":0.10310559,"word_repetition_ratio":0.69863015,"special_character_ratio":0.3065513,"punctuation_ratio":0.11111111,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998915,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-20T03:47:20Z\",\"WARC-Record-ID\":\"<urn:uuid:a128f633-ed5a-4465-9f6f-94e498c5a1b3>\",\"Content-Length\":\"131977\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:239ba90b-42c8-4ec5-937e-92fca0fb635e>\",\"WARC-Concurrent-To\":\"<urn:uuid:57c4185b-918a-40b4-83f3-02a0b15d07d9>\",\"WARC-IP-Address\":\"69.147.92.11\",\"WARC-Target-URI\":\"https://answers.yahoo.com/question/index?qid=20090506143256AAPQMGw\",\"WARC-Payload-Digest\":\"sha1:W3GDPH5HNEF5K3RNDZCB6IIRBEXPMDW6\",\"WARC-Block-Digest\":\"sha1:5RCE3JZDWJYPGQWYOSKSYXVW4LYIATMN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573827.2_warc_CC-MAIN-20190920030357-20190920052357-00405.warc.gz\"}"} |
https://dml.cz/handle/10338.dmlcz/108051 | [
"# Article\n\nKeywords:\nperiodic solutions; second order functional difference equation; fixed-point theorem; growth condition\nSummary:\nSufficient conditions for the existence of at least one \\$T-\\$periodic solution of second order nonlinear functional difference equations are established. We allow \\$f\\$ to be at most linear, superlinear or sublinear in obtained results.\nReferences:\n Atici F. M., Gusenov G. Sh.: Positive periodic solutions for nonlinear difference equations with periodic coefficients. J. Math. Anal. Appl. 232 (1999), 166–182. MR 1683041\n Atici F. M., Cabada A.: Existence and uniqueness results for discrete second order periodic boundary value problems. Comput. Math. Appl. 45 (2003), 1417–1427. MR 2000606 | Zbl 1057.39008\n Deimling K.: Nonlinear Functional Analysis. Springer-Verlag, New York, 1985. MR 0787404 | Zbl 0559.47040\n Guo Z., Yu J.: The existence of periodic and subharmonic solutions for second order superlinear difference equations. Science in China (Series A) 3 (2003), 226–235. MR 2014482\n Jiang D., O’Regan D., Agarwal R. P.: Optimal existence theory for single and multiple positive periodic solutions to functional difference equations. Appl. Math. Lett. 161 (2005), 441–462. MR 2112417 | Zbl 1068.39009\n Kocic V. L., Ladas G.: Global behivior of nonlinear difference equations of higher order with applications. Kluwer Academic Publishers, Dordrecht-Boston-London, 1993. MR 1247956\n Ma M., Yu J.: Existence of multiple positive periodic solutions for nonlinear functional difference equations. J. Math. Anal. Appl. 305 (2005), 483–490. MR 2130716 | Zbl 1070.39019\n Mickens R. E.: Periodic solutions of second order nonlinear difference equations containing a small parameter-II. Equivalent linearization. J. Franklin Inst. B 320 (1985), 169–174. MR 0818865 | Zbl 0589.39004\n Mickens R. E.: Periodic solutions of second order nonlinear difference equations containing a small parameter-III. Perturbation theory. J. Franklin Inst. 
B 321 (1986), 39–47. MR 0825907 | Zbl 0592.39005\n Mickens R. E.: Periodic solutions of second order nonlinear difference equations containing a small parameter-IV. Multi-discrete time method. J. Franklin Inst. B 324 (1987), 263–271. MR 0910641 | Zbl 0629.39002\n Raffoul Y. N.: Positive periodic solutions for scalar and vector nonlinear difference equations. Pan-American J. Math. 9 (1999), 97–111.\n Wang Y., Shi Y.: Eigenvalues of second-order difference equations with periodic and antiperiodic boundary conditions. J. Math. Anal. Appl. 309 (2005), 56–69. MR 2154027 | Zbl 1083.39019\n Zeng Z.: Existence of positive periodic solutions for a class of nonautonomous difference equations. Electronic J. Differential Equations 3 (2006), 1–18. MR 2198916 | Zbl 1093.39014\n Zhang R., Wang Z., Chen Y., Wu J.: Periodic solutions of a single species discrete population model with periodic harvest/stock. Comput. Math. Appl. 39 (2000), 77–90. MR 1729420 | Zbl 0970.92019\n Zhu L., Li Y.: Positive periodic solutions of higher-dimensional functional difference equations with a parameter. J. Math. Anal. Appl. 290 (2004), 654–664. MR 2033049 | Zbl 1042.39005"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5937239,"math_prob":0.95154285,"size":2681,"snap":"2021-31-2021-39","text_gpt3_token_len":808,"char_repetition_ratio":0.18229361,"word_repetition_ratio":0.07692308,"special_character_ratio":0.37001118,"punctuation_ratio":0.26537785,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9837594,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-24T02:50:53Z\",\"WARC-Record-ID\":\"<urn:uuid:f755f9be-9e3e-4602-b48e-7fcebd3d420d>\",\"Content-Length\":\"17084\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:06c1e6cd-6779-4c7e-8eb3-560cf5777909>\",\"WARC-Concurrent-To\":\"<urn:uuid:e2567bb7-6f84-4ac8-b671-9b938755a128>\",\"WARC-IP-Address\":\"147.251.6.150\",\"WARC-Target-URI\":\"https://dml.cz/handle/10338.dmlcz/108051\",\"WARC-Payload-Digest\":\"sha1:FQ62NJH5ZJ6WQYFC6TJFIAVG2OJINKPU\",\"WARC-Block-Digest\":\"sha1:FPOTECYPSQUI3RI7ZZHVKBVWXECDZBQW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046150067.87_warc_CC-MAIN-20210724001211-20210724031211-00279.warc.gz\"}"} |
http://redwoodsmedia.com/fraction-decimal-percent-worksheets/ | [
"Home - Fraction Decimal Percent Worksheets\n\n# Fraction Decimal Percent Worksheets\n\nHere is the Fraction Decimal Percent Worksheets section. Here you will find all we have for Fraction Decimal Percent Worksheets. For instance there are many worksheet that you can print here, and if you want to preview the Fraction Decimal Percent Worksheets simply click the link or image and you will take to save page section.\n\nMath Worksheets Converting Fractions To Decimals Muzjikmandiafo 311 Best School Math Images On Pinterest Fractions Decimals And Percents Worksheets Grade Into Convert Sixth Grade Decimals A Number Line Worksheet 05 E Page Worksheets Converting Fractions Decimals And Percentages Resources Fractions Decimals And Percents Worksheets Croefit Basic Percentages Worksheets Simple Grade 7 Math Fractions Decimals Paring And Ordering Fractions And Decimals Worksheet Worksheets Converting Fractions To Percentages Word Problems Worksheets Grade 5 Percentage For 5th Fraction Decimal Converting Percents Worksheet Percent Worksheets Word Problems 5th Paring And Ordering Fractions And Decimals Worksheet Worksheets Percentage Worksheets For Grade 5 Postjoint Sixth Grade Decimals A Number Line Worksheet 05 E Page Worksheets Math Worksheets Converting Fractions To Decimals Muzjikmandiafo."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.61131746,"math_prob":0.8579123,"size":1248,"snap":"2019-43-2019-47","text_gpt3_token_len":239,"char_repetition_ratio":0.29180065,"word_repetition_ratio":0.1744186,"special_character_ratio":0.15384616,"punctuation_ratio":0.027624309,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.994011,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-14T02:39:25Z\",\"WARC-Record-ID\":\"<urn:uuid:7318bd54-1aa1-4a42-826e-1063a2750ff2>\",\"Content-Length\":\"38664\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4af62086-16c1-4895-a4af-3cc7932d153c>\",\"WARC-Concurrent-To\":\"<urn:uuid:3b277143-8ccc-4416-a279-25cb76647352>\",\"WARC-IP-Address\":\"104.24.99.31\",\"WARC-Target-URI\":\"http://redwoodsmedia.com/fraction-decimal-percent-worksheets/\",\"WARC-Payload-Digest\":\"sha1:NEV7NVHF5TOX5MEAQ3SVV4QKUEUOCKQ2\",\"WARC-Block-Digest\":\"sha1:PGYVFKR2S62LWSYM3CK33YPWMHTTTOX7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496667767.6_warc_CC-MAIN-20191114002636-20191114030636-00521.warc.gz\"}"} |
https://fantasyplayer.link/program/%E8%87%AA%E5%88%B6%E8%84%9A%E6%9C%AC%E8%AF%AD%E8%A8%804.1-%E6%95%B0%E5%80%BC%E8%BF%90%E7%AE%97/ | [
"## 详细内容\n\n``````a = 1;\nb = 2;\n\nc = a + b;\nd = a - b;\ne = a / d;\nf = a * b;\ng = a % d;\n\nh = a ^ b;\ni = a | b;\nj = a & b;\nk = ~ a ;\n``````\n\n### 基本逻辑\n\n1. 取出N个数字\n2. 进行运算\n3. 将结果放回去\n\nN个操作数, 产生一个结果的运算符(+,-,*,/等)的第一步和第三步的内容基本是一致的。\n\n### 基于Map 的实现逻辑\n\n1. 从Map中取出key为a, b 的数字。\n\n2. 在宿主语言中进行运算 (c/cpp/java/…) 然后获得结果\n\n3. 将结果保存到Map中。 (Key为 c)\n\n``````// java 代码, 代码并没有经过测试,只是解释思路使用\n// c = a + b;\n\npublic class Test {\n\npublic void test (){\n// key: 变量名, value: 变量值\nMap<String, Object> map = new HashMap<>();\n// 第一个操作数\nString first = \"a\";\n// 第二个操作数\nString second = \"b\";\n\nString result = \"c\";\n\n// 1\nObject a = map.get(first);\nObject b = map.get(second);\n\n// null check 和 类型check 这里就省略了。\n\n// 2\n// Number 是一个自定义类型, 因为篇幅问题,对声明进行了省略。\nNumber c = Number.add( ((Number) a), ((Number) b) );\n\n// 3\nmap.put( result, c );\n}\n\n}\n``````\n\n### 基于栈的实现逻辑\n\n1. 取出 a 对应的数值, 进行压栈\n2. 取出 b 对应的数值, 进行压栈\n3. 执行 加号指令 | 这里应该是基于栈的核心部分\n1. 取出栈顶的2个元素\n2. 进行相加\n3. 将结果压栈\n4. 将栈顶元素出栈,放入到c对应的位置。\n\n``````// 同样使用 Java做示例, 代码只是演示说明使用\n// c = a + b ;\n\npublic class Test {\n\n// 局部变量表\nprivate Object[] localVarTable = new Object ;\n\n// 栈\nprivate Stack<Object> stack = new Stack<>();\n\n/**\n* 根据变量名获取变量对应的 局部变量表的索引\n*/\npublic int getIndex(String name){\n// todo\nreturn 0;\n}\n\n/**\n* 执行加号指令\n*/\npublic void add(){\n// check size 和别的check 暂时都直接省略了\n\nNumber a = (Number) stack.pop();\nNumber b = (Number) stack.pop();\n\n// 运算, 并将结果入栈\nstack.push( Number.add(a,b) );\n\n}\n\npublic void test(){\n\n// 第一个操作数\nString first = \"a\";\n// 第二个操作数\nString second = \"b\";\n\nString result = \"c\";\n\n// 1, 2\n// 因为要节省篇幅, 所以大部分的 error check 就不做了\nObject a = localVarTable[getIndex(first)];\nObject b = localVarTable[getIndex(second)];\n\n// 入栈\nstack.push(a);\nstack.push(b);\n\n// 3\nadd();\n\n// 4\nObject c = stack.pop();\nlocalVarTable[getIndex(result)] = c;\n\n}\n\n}\n\n``````\n\n### 基于 寄存器的实现逻辑\n\n1. 取出变量a的值, 放入a 寄存器\n2. 取出变量b的值, 放入b 寄存器\n3. 执行加法运算, 将结果放入c寄存器\n4. 
取出c寄存器的值, 放回局部变量表\n5. 清空所有寄存器\n\n``````// c = a + b ;\n\npublic class Test {\n\n// 局部变量表\nprivate Object[] localVarTable = new Object ;\n\n// 寄存器\nprivate Object a = null;\nprivate Object b = null;\nprivate Object c = null;\n\n/**\n* 根据变量名获取变量对应的 局部变量表的索引\n*/\npublic int getIndex(String name){\n// todo\nreturn 0;\n}\n\n/**\n* 清理寄存器\n*/\npublic void clearReg(){\na = b = c = null;\n}\n\n/**\n* 执行加法运算, 运算逻辑总是将 寄存器 a,b 相加,并把值放入寄存器 c\n*/\npublic void add(){\n// 将会省略 check\n\nc = Number.add( (Number) a, (Number) b ) ;\n}\n\n/**\n* 这里的变量名, 改用传参的形式\n*/\npublic void test(String first,String second, String result ){\n\n// 1, 2\na = localVarTable[getIndex(first)];\nb = localVarTable[getIndex(second)];\n\n// 3\nadd();\n\n// 4\nlocalVarTable[getIndex(result)] = c;\n\n// 5\nclearReg();\n}\n\n}\n\n``````"
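The stack-based approach above can be condensed into a short runnable sketch (Python is used here for brevity; the names are illustrative and not from the original post):

```python
def eval_add(local_vars, first, second, result):
    """Evaluate `result = first + second` the stack-machine way."""
    stack = []
    stack.append(local_vars[first])    # push the value of a
    stack.append(local_vars[second])   # push the value of b
    # the "add" instruction: pop two operands, push their sum
    rhs = stack.pop()
    lhs = stack.pop()
    stack.append(lhs + rhs)
    local_vars[result] = stack.pop()   # store the top of stack into c's slot
    return local_vars[result]

print(eval_add({"a": 1, "b": 2}, "a", "b", "c"))  # → 3
```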
] | [
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.79165316,"math_prob":0.9934028,"size":2980,"snap":"2021-04-2021-17","text_gpt3_token_len":1678,"char_repetition_ratio":0.13205644,"word_repetition_ratio":0.18871595,"special_character_ratio":0.34731543,"punctuation_ratio":0.15811089,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99383813,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-18T06:32:31Z\",\"WARC-Record-ID\":\"<urn:uuid:a14afda2-462e-4d21-8c06-f804f472d219>\",\"Content-Length\":\"29418\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9cf83828-c681-4b17-a5e4-e4b3908d6c6d>\",\"WARC-Concurrent-To\":\"<urn:uuid:9ec165cb-2d85-421a-b9f9-52b39e64d4e2>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://fantasyplayer.link/program/%E8%87%AA%E5%88%B6%E8%84%9A%E6%9C%AC%E8%AF%AD%E8%A8%804.1-%E6%95%B0%E5%80%BC%E8%BF%90%E7%AE%97/\",\"WARC-Payload-Digest\":\"sha1:CHQG7WSKFK56ARV4KEBNLDFAHH4ZOEUY\",\"WARC-Block-Digest\":\"sha1:LAFFOZCVIPBY5THHMMIB7575TC5VIITD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038468066.58_warc_CC-MAIN-20210418043500-20210418073500-00439.warc.gz\"}"} |
https://cs.stackexchange.com/questions/102829/when-is-this-even-possible-even-for-a-dense-graphs-e-theta-v2 | [
"# When is this even possible (even for a dense graphs) $|E| = \\Theta (|V|^2)$\n\nWikipedia says that \"a dense graph is a graph in which the number of edges is close to the maximal number of edges.\" and \"The maximum number of edges for an undirected graph is $$|V|(|V|-1)/2$$\". Then why do even use $$|E| = \\Theta (|V|^2)$$?, understand that $$\\Theta$$ is the correct(tightest) bound in asymptomatic notation. It seems to me that $$|E| = \\Theta (|V|^2)$$ can never happen, so why do we use it?\n\nNote that asymptotic bounds only apply to infinite sequences.\n\nIn this case, $$|E| = \\Theta(|V|^2)$$ applies to an implicit infinite sequence of graphs $$G_i=(V_i,E_i)$$, meaning that there are two positive constants $$c,c'$$ such that, $$c\\cdot|V_i|^2 \\leq |E_i| \\leq c'\\cdot|V_i|^2$$ whenever $$i$$ is large enough.\n\nThis constraint can be satisfied. For every $$i\\in \\mathbb N$$, take $$G_i$$ to be the complete graph on $$\\{1,\\ldots, i\\}$$. Hence, $$G_i$$ has exactly $$i\\cdot(i-1)/2$$ edges. For large enough $$i$$, we have $$\\frac{1}{4}i^2 \\leq \\frac{i\\cdot(i-1)}{2} \\leq \\frac{1}{2}i^2$$ So, we can say that $$|E| = \\Theta(|V|^2)$$.\n\nAnother sequence could be constructed taking \"almost complete\" graphs, where we remove one edge from each complete graph $$G_i$$ in the previous sequence. This would still satisfy the bound.\n\nWe could even remove, say, $$100*i$$ edges from each $$G_i$$ (when possible) and still satisfy the bound. This is because we only care about $$|E_i|$$ growing with \"quadratic speed\".\n\n• Balanced complete bipartite graphs $K_{\\lceil n/2\\rceil,\\lfloor n/2\\rfloor}$ (with about $n^2/4$ edges) are a good example to show that there are dense graphs that aren't just \"cliques with a few edges removed\". – David Richerby Jan 14 '19 at 16:50\n\nYou're absolutely right that $$\\Theta$$ is the tightest asymptotic bound. 
But it's still asymptotic, and that means that we don't care about constant factors or terms of lower degree: when $$n$$ (or $$v$$ or whatever) gets big enough, the smaller terms become negligible.\n\nIn this case, $$\\frac{v(v-1)}{2} = \\frac{1}{2} v^2 + \\frac{-1}{2} v$$. So removing the constant factors and lower terms leaves us with $$v^2$$.\n\n• I don't understand your last paragraph. It doesn't make sense to use $\\Theta$ on a single graph anyway, since there's nothing to go to infinity. But $\\Theta$ absolutely includes a lower bound. – David Richerby Jan 14 '19 at 15:16\n• @DavidRicherby Sorry, let me rephrase: using $\\Theta$ implies that you can put a lower bound on how many edges are in a graph, in terms of the number of vertices. But the lower bound is always zero. So I would use $O$, which is only concerned with the upper bound (and it is in fact bounded above in terms of the number of vertices). – Draconis Jan 14 '19 at 16:43\n• But asymptotic bounds don't apply to single graphs. We need to be talking about classes of graphs being dense and, in that case, we absolutely need to use $\\Theta$ because it's precisely the lower-bound that defines density. If we just said \"A class of graphs is dense if the $n$-vertex members of the class have $O(n^2)$ edges\", then every class would be dense. – David Richerby Jan 14 '19 at 16:49\n• @DavidRicherby Ahh, I see what you mean, I misread the question. I thought they were saying the number of edges in a general simple undirected graph was $\\Theta(n^2)$ (not specifically in a dense graph), which makes no sense because there isn't a lower bound there. – Draconis Jan 14 '19 at 16:53"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91776764,"math_prob":0.9996212,"size":3475,"snap":"2021-31-2021-39","text_gpt3_token_len":997,"char_repetition_ratio":0.109190434,"word_repetition_ratio":0.038528897,"special_character_ratio":0.29755396,"punctuation_ratio":0.09798271,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99995863,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-02T13:48:15Z\",\"WARC-Record-ID\":\"<urn:uuid:d21ba9ad-3a39-4ba6-835e-9e16f7beb801>\",\"Content-Length\":\"180378\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:68255ead-1fe4-4f8e-aa62-59661c48be65>\",\"WARC-Concurrent-To\":\"<urn:uuid:290c0d69-ed97-4064-9934-3202077c4bac>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/102829/when-is-this-even-possible-even-for-a-dense-graphs-e-theta-v2\",\"WARC-Payload-Digest\":\"sha1:YZDKUBIGCKDBH5DHLIJ3AC4NVFR55TRJ\",\"WARC-Block-Digest\":\"sha1:ULZFTXRK5BPOYNE6XGBZGWXCBW4JPAGO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154320.56_warc_CC-MAIN-20210802110046-20210802140046-00480.warc.gz\"}"} |
https://www.encyclopediaofmath.org/index.php/Stein_space | [
"# Stein space\n\nholomorphically-complete space\n\nA paracompact complex analytic space",
null,
"with the following properties:\n\n1) any compact analytic subset in",
null,
"is finite (cf. Analytic set 6));\n\n2) any compact set",
null,
"has an open neighbourhood",
null,
"in",
null,
"such that",
null,
"is compact (weak holomorphic convexity).\n\nA complex manifold",
null,
"is a Stein space if and only if",
null,
"is a Stein manifold. A complex space is a Stein space if and only if its reduction has this property. Any holomorphically-convex open subspace in a Stein space is a Stein space. A reduced complex space is a Stein space if and only if its normalization is a Stein space. Any closed analytic subspace in a Stein space, for instance in",
null,
", is a Stein space. Any finite-dimensional Stein space has a proper injective holomorphic mapping (cf. Proper morphism) into some",
null,
"which is regular at every non-singular point. Any unramified covering of a Stein space is a Stein space. The direct product of two Stein spaces is a Stein space. In many cases a holomorphic fibre space whose base and fibres are Stein spaces is a Stein space (e.g. if the structure group is a complex Lie group with a finite number of connected components). However, there are holomorphic fibre spaces with fibre",
null,
"and base",
null,
"that are not Stein manifolds .\n\nLet",
null,
"be a coherent analytic sheaf on a Stein space",
null,
". Then the following theorems",
null,
"and",
null,
"of H. Cartan (cf. Cartan theorem) hold:\n\nA) The space",
null,
"generates the stalk",
null,
"of the sheaf",
null,
"at any point",
null,
";\n\nB)",
null,
"for all",
null,
".\n\nConversely, if",
null,
"for any coherent sheaf of ideals",
null,
", then",
null,
"is a Stein space. A domain",
null,
"is a Stein manifold if and only if",
null,
".\n\nFrom the Cartan theorems it follows that on a Stein space the first Cousin problem is always solvable, and if",
null,
", then the second Cousin problem is solvable as well (see Cousin problems). On any Stein manifold",
null,
"the Poincaré problem, i.e. can any meromorphic function be represented in the form",
null,
", where",
null,
",",
null,
", is solvable. Furthermore, if",
null,
", then",
null,
"and",
null,
"can be chosen in such a way that the germs",
null,
"at any point",
null,
"are relatively prime. The group of divisor classes of an irreducible reduced Stein space",
null,
"is isomorphic to",
null,
". For any",
null,
"-dimensional Stein space",
null,
", the homology groups",
null,
"for",
null,
", and the group",
null,
"is torsion-free. If",
null,
"is a manifold, then",
null,
"is homotopy equivalent to an",
null,
"-dimensional cell complex. On the other hand, for any countable Abelian group",
null,
"and any",
null,
"there is a domain of holomorphy",
null,
"such that",
null,
".\n\nAn important trend in the theory of Stein spaces is connected with studies of the plurisubharmonic functions on them (see Levi problem; Pseudo-convex and pseudo-concave). The basic result here is that a Stein space is characterized as a space on which there exists a strongly",
null,
"-pseudoconvex function exhausting it.\n\nAlgebras of holomorphic functions",
null,
"on a Stein space",
null,
"(so-called Stein algebras) have the following properties. For a maximal ideal",
null,
"the following conditions are equivalent:",
null,
"is closed in",
null,
"with respect to the topology of compact convergence;",
null,
"for some point",
null,
"; and",
null,
"is finitely generated. If",
null,
"is finite-dimensional, then each character",
null,
"is of the form",
null,
"for some",
null,
". If",
null,
",",
null,
"are two finite-dimensional Stein spaces with isomorphic algebras",
null,
", then",
null,
"; moreover, any isomorphism",
null,
"is continuous and is induced by some isomorphism",
null,
"of complex spaces.\n\nA significant role in the theory of Stein spaces is played by the so-called Oka principle, which states that a problem in the class of analytic functions on a Stein space is solvable if and only if it is solvable in the class of continuous functions. The second Cousin problem satisfies this principle. The following statement is still more general: The classification of the principal analytic fibrations (cf. Principal analytic fibration) with as basis a given reduced Stein space",
null,
"and as structure group a given complex Lie group",
null,
", coincides with the classification of the topological fibrations with the same basis and the same structure group. The groups of connected components in the groups of analytic and continuous functions",
null,
"also coincide."
https://www.zbmath.org/?q=ut%3Alogics+with+integrals+and+conditional+expectations | [
"# zbMATH — the first resource for mathematics\n\nProbability quantifiers and operators. (English) Zbl 0933.03044\nSeries in Pure and Applied Mathematics. Belgrade: Vesta Company. iv, 121 p. (1996).\nThis short monograph extends work by Keisler and others. The first four chapters present an integrated and quite elegant exposition of nonstandard analysis and measure theory, Keisler’s logic of probability operators, and logics with integrals and conditional expectations, also introduced by Keisler. Chapter 5 introduces a logic embodying a $${\\sigma}$$-finite measure in place of Keisler’s general probability measure. Chapter 6 presents a logic appropriate to the study of structures with two measures. Chapter 8 introduces a second order probability logic. In chapter 9 we have an intuitionistic logic with probability operators; these operators are general in that they may apply to either classical or intuitionistic logic. Finally, in chapter 10 the notion of cylindric probability algebra is introduced, relating the theory of deductive systems of probability logic to a geometry associated with basic set-theoretic notions. The volume is not easy reading, but presents as full a treatment of logics with probability operators as there is.\n\n##### MSC:\n 03C80 Logic with extra quantifiers and operators 03B48 Probability and inductive logic 03H05 Nonstandard models in mathematics 60A10 Probabilistic measure theory 28E05 Nonstandard measure theory 60B05 Probability measures on topological spaces 60A05 Axioms; other general questions in probability"
http://bms.zeugner.eu/doc/htmlhelp/plotConv.html | [
"plotConv {BMS} R Documentation\n\n## Plot Convergence of BMA Sampler\n\n### Description\n\nPlots the posterior model probabilites based on 1) marginal likelihoods and 2) MCMC frequencies for the best models in a 'bma' object and details the sampler's convergence by their correlation\n\n### Usage\n\n```plotConv(bmao, include.legend = TRUE, add.grid = TRUE, ...)\n```\n\n### Arguments\n\n `bmao` an object of class 'bma' - see `bms` `include.legend` whether to include a `legend` in the plot `add.grid` whether to include a `grid` in the plot `...` other parameters for `matplot`\n\n### Details\n\nA call to bms with a MCMC sampler (e.g. `bms(datafls,mcmc=\"bd\",nmodel=100)` uses a Metropolis-Hastings algorithm to sample through the model space: the frequency of how often models are drawn converges to the distribution of their posterior marginal likelihoods.\nWhile sampling, each 'bma' object stores the best models encountered by its sampling chain with their marginal likelihood and their MCMC frequencies.\n`plotConv` compares the MCMC frequencies to marginal likelihoods, and thus visualizes how well the sampler has converged.\n\n### Note\n\n`plotConv` is also used by `plot.bma`\n\n`pmp.bma` for posterior model probabilites based on the two concepts, `bms` for creating objects of class 'bma'\n\n### Examples\n\n```\n\ndata(datafls)\nmm=bms(datafls[,1:12],user.int=FALSE)\n\nplotConv(mm)\n\n#is similar to\nmatplot(pmp.bma(mm),type=\"l\")\n\n```\n\n[Package BMS version 0.3.5 Index]"
https://www.pre-scient.com/knowledge-center/geometric-modelling/types-of-geometric-modelling.html | [
"## Geometric Modelling",
null,
"# Types of Geometric Modeling\n\nThe previous edition gave a brief introduction of Geometric Modeling and its features. Geometric modeling is the mathematical representation of an object’s geometry. It incorporates the use of curves to create models. It can be viewed either in 2D or 3D perspective.\n\nThis edition details the primary types of geometric modeling. Geometric modeling can be classified into the following:\n\n##### SOLID MODELING\n\nAlso known as volume modeling, this is the most widely used method as it provides a complete description of solid modeling. Solid modeling defines an object by its nodes, edges, and surfaces; therefore, it gives a perfect and explicit mathematical representation of a precisely enclosed and filled volume. Solid modeling requires the use of topology rules to guarantee that all surfaces are stitched together correctly. This geometry modeling procedure is based upon the “Half-Space” concept.",
null,
"There are two prevalent ways for representing solid models –\n\nConstructive solid geometry: Constructive solid geometry is a combination of primary solid objects (prism, cylinder, cone, sphere, etc.). These shapes are either added or deleted to form the final solid shape.\n\nBoundary representation: In boundary representation, an object’s definition is determined by their spatial boundaries. It describes the points, edges, surfaces of a volume, and issues command to rotate, sweep a binds facets into a third dimensional solid. The union of these surfaces enables the formation of a surface that explicitly encloses a volume.\n\nSolid Modeling is the most widely used geometric modeling in three dimensions, and it serves the following purpose:\n\n• Solid modeling supports weight or volume calculation, centroids, moments of inertia calculation, stress analysis, heat conduction calculations, dynamic analysis, system dynamics analysis.\n• Solid modeling supports the generation of codes, robotic and assembly simulation\n• Solid modeling stores both geometric and topological information; can verify if the two objects occupy same space\n• Solid modeling improves the quality of design, enhances visualization, and has the potential for functional automation and integration.\n\nDifferent solid modeling techniques are as follows:\n\n• Constructive Solid Geometry\n• Boundary Representation\n• Feature-based modeling\n• Primitive Instancing\n• Cell decomposition, spatial enumeration, octree\n##### SURFACE MODELING\n\nSurface modeling represents the solid appearing object. Although it is a complicated method of representation than wireframe modeling, it is not as refined as solid modeling. Although surface models and solid models look identical, the former cannot be sliced open the way solid models can be. This model makes use of B-splines and Bezier for controlling curves.",
A typical surface modeling process involves the following steps:

- Generation of a model combining three-dimensional surfaces and solids
- Conversion of the model to surfaces, taking advantage of associative modeling
- Validation of imperfections with surface analysis tools
- Reconstruction of object surfaces to apply smoothness to the object

Surface modeling is used:

- For shape design and representation of complicated objects such as car, ship, and airplane bodies, as well as castings.
- When models imported from another CAD system lack details of the features they are comprised of. If the surfaces are complex, applying changes to this type of geometry can be quite the task. In such cases, surface modeling techniques can be applied to one or more faces of the model to make the desired changes.
- To build one face at a time, so that one can control the exact contour and direction of any face. This comes in handy when the solid modeling technique fails to create the complex shape of a feature, since solid modeling builds up several sides of a shape at once.
- As reference geometry: since surface modeling is not limited to the direct construction of a model face, surfaces can serve as a transitional step towards the creation of the required model face.
- In a combined technique that uses both solid and surface modeling to create a solid model. This generally involves starting the model as a solid and using surfaces to modify it; or changing the solid to surfaces to shape and contour it, then turning it back into a solid when done.

##### WIREFRAME MODELING

The wireframe model is perhaps one of the earliest ways of representing a solid model. It consists of vertices and lines and is a skeletal representation of a real-world 3D object. Developed back in the 1960s, it is also referred to as "stick figure" or "edge representation."
The lines within a wireframe connect to create polygons, such as triangles and rectangles, that represent three-dimensional shapes when bound together. The outcome may range from a cube to a complex three-dimensional scene with people and objects. The number of polygons within a model is a good indicator of how detailed the wireframe 3D model is.

Wireframe modeling helps in matching a 3D drawing model to its reference. It allows the creator to match the vertex points so they are in alignment with the desired reference, and to see the reference through the model as well. Wireframe modeling is a quick and easy way to demonstrate concepts: creating a fully detailed, precisely constructed model for an idea can be extremely time-consuming, and if it does not match what was visualized for the project, all that time and effort is wasted. With wireframe modeling, one can skip the detailed work and present a very skeletal framework that is simple to create and comprehensible to others.
https://blog.conductrics.com/pvalues | [
"# Easy Introduction to AB Testing and P-Values\n\nUPDATED: 5/20/2021\n\nA version of this post was originally published over at Conversion XL\n\nFor all of the talk about how awesome (and big, don’t forget big) Big data is, one of the favorite tools in the conversion optimization toolkit, AB Testing, is decidedly small data. Optimization, winners and losers, Lean this that or the other thing, at the end of the day, A/B Testing is really just an application of sampling.\n\nYou take couple of alternative options (eg. ‘50% off’ v ‘Buy One Get One Free’ ) and try them out with a portion of your users. You see how well each one did, and then make a decision about which one you think will give you the most return. Sounds simple, and in a way it is, yet there seem to be lots of questions around significance testing. In particular what the heck the p-value is, and how to interpret it to help best make sound business decisions.\n\nThese are actually deep questions, and in order to begin to get a handle on them, we will need to have a basic grasp of sampling.\n\n## A few preliminaries\n\nBefore we get going, we should quickly go over the basic building blocks of AB Testing. I am sure you know most of this stuff already, but can’t hurt to make sure everyone is on the same page:\n\nThe Mean – often informally called the average. This is a measure of the center of the data. It is a useful descriptor, and predictor, of the data, if the data under consideration tends to clump near the mean AND if the data has some symmetry to it.\n\nThe Variance – This can be thought of as the average variability of our data around the mean (center) of the data. For example, consider we collect two data sets with five observations each: {3,3,3,3,3} and {1,2,3,4,5}. They both have the same mean (its 3) but the first group has no variability, whereas the second group does take different values than its mean. The variance is a way to quantify just how much variability we have in our data. 
The main take away is that the higher the variability, the less precise the mean will be as a predictor of any individual data point.\n\nThe Probability Distribution – this is a function (if you don’t like ‘function’, just think of it as a rule) that assigns a probability to a result or outcome. For example, the roll of a standard die follows a uniform distribution, since each outcome is assigned an equal probability of occurring (all the numbers have a 1 in 6 chance of coming up). In our discussion of sampling, we will make heavy use of the normal distribution, which has the familiar bell shape. Remember that the probability of the entire distribution sums to 1 (or 100%).\n\nThe Test Statistic or Yet Another KPI\nThe test statistic is the value that we use in our statistical tests to compare the results of our two (or more) options, our ‘A’ and ‘B’. It might make it easier to just think of the test statistic as just another KPI. If our test KPI is close to zero, then we don’t have much evidence to show that the two options are really that different. However, the further from zero our KPI is, the more evidence we have that the two options are not really performing the same.\n\nOur new KPI combines both the differences in the averages of our test options, and incorporates the variability in our test results. The test statistics looks something like this:",
$$t = \frac{\bar{x}_A - \bar{x}_B}{\sqrt{\dfrac{s_A^2}{n_A} + \dfrac{s_B^2}{n_B}}}$$
"While it might look complicated, don’t get too hung up on the math. All it is saying is take the difference between ‘A’ and ‘B’ – just like you normally would when comparing two objects, but then shrink that difference by how much variability (uncertainty) there is in the data.\n\nSo, for example, say I have two cups of coffee, and I want to know which one is hotter and by how much. First, I would measure the temperature of each coffee. Next, I would see which one has the highest temp. Finally, I would subtract the lower temp coffee from the higher to get the difference in temperature. Obvious and super simple.\n\nNow, let’s say you want to ask, “which place in my town has the hotter coffee, McDonald’s or Starbucks?” Well, each place makes lots of cups of coffee, so I am going to have to compare a collection of cups of coffee. Any time we have to measure and compare collections of things, we need to use our test statistics.\n\nThe more variability in the temperature of coffee at each restaurant, the more we weigh down the observed difference to account for our uncertainty. So, even if we have a pretty sizable difference on top, if we have lots of variability on the bottom, our test statistic will still be close to zero. As a result of this, the more variability in our data, the greater an observed difference we will need to get a high score on our test KPI.\n\nRemember, high test KPI -> more evidence that any difference isn’t just by chance.\n\n## Always Sample before you Buy\n\nOkay now that we have that out of the way, we can spend a bit of time on sampling in order to shed some light on the mysterious P-Value.\n\nFor sake of illustration, let say we are trying to promote a conference that specializes in Web analytics and Conversion optimization. Since our conference will be a success only if we have at least certain minimum of attendees, we want to incent users to buy their tickets early. 
In the past, we have used ‘Analytics200’ as our early bird promotional discount code to reduce the conference price by \\$200. However, given that AB Testing is such a hot topic right now, maybe if we use ‘ABTesting200’ as our promo code, we might get even more folks to sign up early. So we plan on running an AB test between our control, ‘Analytics200’ and our alternative ‘ABTesting200’.\n\nWe often talk about AB Testing as one activity or task. However, there are really two main parts of the actual mechanics of testing.\n\nData Collection – this is the part where we expose users to either ‘Analytics200’ or ‘ABTesting200’. As we will see, there is going to be a tradeoff between more information (less variability) and cost. Why cost? Because we are investing time and foregoing potentially better options, in the hopes that we will find something better than what we are currently doing. We spend resources now in order to improve our estimates of the set of possible actions that we might take in the future. AB Testing, in of itself, is not optimization. It is an investment in information.\n\nData Analysis – this is where we select a method, or framework, for drawing conclusions from the data we have collected. For most folks running AB Tests online, it will be the classic null significance testing approach. This is the part where we pick statistical significance, calculate the p-values and draw our conclusions.\n\n## The Indirect logic of Significance Testing\n\nToni and Nate are waiting for the subway. Toni is a local and takes the subway almost every night. This is Nate’s first time in the city and has never taken the subway before. Nate asks Toni about how long it normally takes for the train to come. Toni tells him that usually she waits about five minutes. After about 15 minutes of waiting, Nate starts to get worried and thinks maybe there is a problem with the train and perhaps they should just go and try to catch a cab instead. 
He asks Toni, ‘Hey, I am getting worried the train might not come. You said the train takes about five minutes on average, how often do you have to wait 15 minutes or more?’ Toni, replies, ‘don’t worry, while it usually comes in about five minutes, it is not uncommon to have wait this long or even a bit longer. I’d say based on experience, a wait like this, or worse, even when there is no issue with the subway, probably happens about 15% of the time.’ Nate relaxes a bit, and they chat about the day while they wait for the train.\n\nNotice that Nate only asked about the frequency of long wait times. Once he heard that a wait time of 15 min or more wasn’t too uncommon even when the trains are running normally, he felt more comfortable that the train was going to show up. What is interesting is what he learns is NOT the probability that trains aren’t working. Rather he learns, based on all the times that Toni has taken the train, the probability of the train running late more than 15 minutes when there are no service issues. He concludes that the train is likely working fine since his wait time isn’t too uncommon, NOT because he knows the probability that the train has an issue. This indirect, almost contrarian logic is the essence of hypothesis testing. The P-Value in this case is the probability that the wait times are 15 min or more given that the trains are running normally.\n\n## Back to our Conference\n\nFor the sake of argument, let’s say that the ‘Analytics200’ promotion has a true conversion rate of 0.1, or 10%. In the real world, this true rate is hidden from us – which is why we go and collect samples in the first place – but in our simulation we know it is 0.1. So each time we send out ‘Analytics200’, approximately 10% sign up.\n\nIf we go out and offer 50 prospects our ‘Analytics200’ promotion we would expect, on average, to have 5 conference signups. However, we wouldn’t really be that surprised if we saw a few less or a few more. But what is a few? 
Would we be surprised if we saw 4? What about 10, or 25, or zero? It turns out that the P-Value answers the question, How surprising is this result?\n\nExtending this idea, rather than taking just one sample of 50 conference prospects, we take 100 separate samples of 50 prospects (so a total of 5,000 prospects, but selected in 100 buckets of 50 prospects each). After running this simulation, I plotted the results of the 100 samples (this plot is called a histogram) below:",
*(Figure: histogram of the conversion rates observed in the 100 simulated samples of 50 prospects each.)*
Our simulated results ranged from 2% to 20%, and the average conversion rate of our 100 samples was 10.1% – remarkably close to the true conversion rate of 10%.

### Amazing Sampling Fact Number 1

The mean (average) of repeated samples will equal the mean of the population we are sampling from.

### Amazing Sampling Fact Number 2

Our sample conversion rates will be distributed roughly according to a normal distribution – most of the samples will be clustered around the true mean, and samples far from the mean will occur very infrequently. In fact, because we know that our samples are distributed roughly normally, we can use the properties of the normal (or Student's t) distribution to tell us exactly how surprising a given result is.

This is important because, while our sample conversion rate may not be exactly the true conversion rate, it is more likely to be close to the true rate than not. In our simulated results, 53% of our samples were between 7% and 13%. This spread in our sample results is known as the sampling error.

Ah, now we are cooking – but what about sample size, you may be asking? We have all of this sampling goodness and we haven't even talked about the size of each of our individual samples. So let's talk.

There are two components that determine how much sampling error we are going to have:

• The natural variability already in our population (different coffee temperatures at each Starbucks or McDonald's)
• The size of our samples

We have no control over the variability of the population; it is what it is. However, we can control our sample size. By increasing the sample size we reduce the error and hence can have greater confidence that our sample result is going to be close to the true mean.

### Amazing Sampling Fact Number 3

The spread of our samples decreases as we increase the 'N' of each sample. The larger the sample size, the more our samples will be squished together around the true mean.

For example, if we collect another set of simulated samples, but this time increase the sample size to 200 from 50, the results are now less spread out – with a range of 5% to 16.5%, rather than 2% to 20%. Also, notice that 84% of our samples are between 7% and 13%, versus just 53% when our samples included only 50 prospects.

We can think of the sample size as a control knob that we can turn to increase or decrease the precision of our estimates. If we were to take an infinite number of samples, we would get the smooth normal curves below – each centered on the true mean, but with a width (variance) determined by the size of each sample.
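Amazing Sampling Fact Number 3 follows directly from the standard-error formula for a proportion, sqrt(p(1-p)/n): the spread shrinks with the square root of the sample size, so quadrupling n (50 to 200, as in the example) halves the spread:

```python
import math

def standard_error(p, n):
    """Standard error of a sample proportion: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1 - p) / n)

p = 0.10
se_50 = standard_error(p, 50)    # ~0.042: samples of 50 spread widely
se_200 = standard_error(p, 200)  # ~0.021: four times the sample size, half the spread
```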
### Why Data doesn't always need to be BIG

Economics often takes a beating for not being a real science, and maybe it isn't ;-). However, it does make at least a few useful statements about the world. One of them is that we should expect, all else equal, that each successive input will have less value than the preceding one. This principle of diminishing marginal returns is at play in our A/B tests.
Reading right to left, as we increase the size of our sample, our sampling error falls. However, it falls at a decreasing rate – which means we get less and less information from each addition to our sample. So in this particular case, moving to a sample size of 50 drastically reduces our uncertainty, but moving from 150 to 200 decreases our uncertainty by much less. Stated another way, we face increasing costs for any additional precision in our results. This notion of the marginal value of data is an important one to keep in mind when thinking about your tests. It is why it is more costly and time consuming to establish differences between test options that have very similar conversion rates. The hardest decisions to make are often the ones that make the least difference.

Our test statistic, as noted earlier, accounts both for how much difference we see between our results and for how much variability (uncertainty) we have in our data. As the observed difference goes up, our test statistic goes up. However, as the total variance goes up, our test statistic goes down.
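For two conversion rates, that relationship is visible in the usual two-proportion z-statistic: the observed difference sits in the numerator and the (pooled) sampling variability in the denominator. The counts below are made up for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference of two conversion rates, using a pooled SE."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# hypothetical results: A converts 100/1000 (10%), B converts 140/1000 (14%)
z = two_proportion_z(conv_a=100, n_a=1000, conv_b=140, n_b=1000)  # ~2.75
```

A bigger observed difference pushes z up; more variability (a larger `se`) pulls it back down.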
Now, without getting into more of the nitty gritty, we can think of our test statistic in essentially the same way we did when we drew samples for our means. Whereas before we were looking at just one mean, now we are looking at the difference of two means, B and A. It turns out that our three amazing sampling facts apply to differences of means as well.

Whew – okay, I know that might seem like TMI, but now that we have covered the basics, we can finally tackle the p-value.

## Assume There is No Difference

Here is how it works. We collect our data for both the ABTesting200 and Analytics200 promotions. But then we pretend that we really ran an A/A test, rather than an A/B test. So we look at the results as if we had just presented everyone with the Analytics200 promotion. Because of what we now know about sampling, both groups should be centered on the same mean and have the same variance – remember, we are pretending that both samples really come from the same population (the Analytics200 population). Since we are interested in the difference, we expect that, on average, Analytics200 minus Analytics200 will be '0', since on average the two samples have the same mean.
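We can simulate this imagined A/A test directly: draw two samples from the same population over and over and record the difference in their conversion rates. The differences pile up around zero, exactly as the argument predicts (the sample size and rate here are illustrative):

```python
import random

random.seed(11)  # arbitrary seed for reproducibility

def aa_difference(n, p):
    """Conversion-rate difference between two samples drawn from the SAME population."""
    rate_a = sum(1 for _ in range(n) if random.random() < p) / n
    rate_b = sum(1 for _ in range(n) if random.random() < p) / n
    return rate_b - rate_a

diffs = [aa_difference(200, 0.10) for _ in range(1000)]
mean_diff = sum(diffs) / len(diffs)  # hugs zero: no true difference exists
```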
So, using our three facts of sampling, we can construct how the imagined A/A test will be distributed, and we expect that our A/A test will, on average, show no difference between the two samples. However, because of sampling error, we aren't that surprised when we see values that are near zero but not quite zero. Again, how surprised we are by the result is determined by how far away from zero our result is. We will use the fact that our data is normally distributed to tell us exactly how probable a result a given distance from zero is. Something way to the right of zero, like a point at 3 or greater, will have a low probability of occurring.

## Contrarians and the P-Value, Finally!

The final step is to see where our test statistic falls on this distribution. For many researchers, if it is somewhere between -2 and 2, then it wouldn't be too surprising to see if we were running an A/A test. However, if we see something outside of -2 and 2, then we start getting into fairly infrequent results. One thing to note: what is 'surprising' is determined by you, the person running the test. There is no free lunch; at the end of the day, your judgement is still an integral part of the testing process.

Now let's place our test statistic (t-score, z-score, etc.) on the A/A test distribution. We can then see how far away it is from zero and compare it to the probability of seeing that result if we ran an A/A test.
Here our test statistic is in the surprising region. The probability of the surprise region is the p-value. Formally, the p-value is the probability of seeing a particular result (or one more extreme) away from zero, assuming that the null hypothesis is TRUE. If 'null hypothesis is true' is tripping you up, just think instead, 'assuming we had really run an A/A test.'

If our test statistic is in the surprise region, we reject the null (reject that it was really an A/A test). If the result is within the not-surprising area, then we fail to reject the null. That's it.

## Conclusion: 7 Points

Here are a few important points about p-values that you should keep in mind:

• What is 'surprising' is determined by the person running the test. So in a real sense, the conclusion of the test will depend on who is running it. How often you are surprised is a function of how small a p-value you demand before you will be 'surprised' (or, relatedly, the confidence level in a Pearson-Neyman approach, e.g. 95%).
• The logic behind the use of the p-value is a bit convoluted. We need to assume that the null is true in order to evaluate the evidence that might suggest we should reject the null. This is an evergreen source of confusion.
• It is not the case that the p-value tells us the probability that B is better than A. Nor does it tell us the probability that we will make a mistake in selecting B over A. These are both extraordinarily common misconceptions, but they are false. This is an error that even 'experts' often make, so now you can help explain it to them ;-). Remember, the p-value is just the probability of seeing a result (or one more extreme) given that the null hypothesis is true.
• While many folks in the industry tout classical significance testing as some sort of gold standard, there is actually debate in the scientific community about the value of p-values for drawing testing conclusions. Along with Berger's paper below, also check out Andrew Gelman's blog for frequent discussions of the topic: http://andrewgelman.com/2013/02/08/p-values-and-statistical-practice/
• You can always increase the precision of your experiment, but you have to pay for it. Remember that the standard error is a function of both the variation in the actual population and the sample size of the experiment. The population variation is fixed, but there is nothing stopping us, if we are willing to 'pay' for it, from collecting a larger sample. The question really becomes: is this level of precision going to be worth the cost? Just because a result has a low p-value (or is statistically significant in the Pearson-Neyman approach) doesn't mean it has any practical value.
• Don't sweat it, unless you need to. Look, the main thing is to sample stuff first to get an idea whether it might work out. Often the hardest decisions for people to make are the ones that make the least difference. It is very hard to pick a 'winner' when the options lead to similar results, but since they are so similar it probably means there is very little upside or downside to just picking one. Stop worrying about getting it right or wrong. Think of your testing program more like a portfolio investment strategy: you are trying to run the bundle of tests whose expected additional information will give you the highest return.
• The p-value is not a stopping rule. This is another frequent mistake. In order to get all of the goodness from sampling that lets us interpret our p-value, you select your sample size first, then you run the test. There are versions of Wald's sequential tests (SPRT, or similar) that do correct for early stopping, but these often are not robust in the presence of nonexchangeable data – which is often the case in online settings. So if you do find the need to use them, do so carefully.

This could be another entire post or two, and it is a nice jumping-off point for looking into the multi-armed bandit problem (see Conductrics: http://blog.conductrics.com/balancing-earning-with-learning-bandits-and-adaptive-optimization/).

*One final note: what makes all of this even more confusing is that there isn't just one agreed-upon approach to testing. For more, check out Berger's paper for a comparison of the different approaches (http://www.stat.duke.edu/~berger/papers/02-01.pdf) and Biau et al. (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2816758/).
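Numerically, the two-sided p-value for a z-score is just the tail probability of the standard normal null distribution, which the standard library's complementary error function gives directly (p = erfc(|z| / sqrt(2))):

```python
import math

def p_value_two_sided(z):
    """P(|Z| >= |z|) under a standard normal null: 2 * (1 - Phi(|z|))."""
    return math.erfc(abs(z) / math.sqrt(2))

p_mild = p_value_two_sided(1.0)        # ~0.32: well inside the "not surprising" zone
p_surprising = p_value_two_sided(3.0)  # ~0.0027: deep in the surprise region
```

Whether 0.0027 is "surprising enough" is still your call; the code only reports how rare the result would be under the imagined A/A test.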
---

Source: https://www.utwente.nl/en/bms/omd/education/pre-master/
# Pre-master education

The department OMD teaches research methodology and statistics to all pre-master students of the Faculty of BMS. It is meant as a preparation for the Master's education and especially for the Master's thesis, for which a basic knowledge of these topics is absolutely necessary. The theory is treated in the lectures and is applied ('learning by doing') in tutorials and in-class SPSS computer exercises. Knowledge of the theory is assessed by tests, and knowledge of SPSS by (individual) computer assignments. (The lectures in the pre-master education are given by dr. ir. H.J. Vos, whereas ir. W.M.M. Tielen gives an introduction to the statistical software package SPSS.)

Brief description of the courses taught by the department OMD in the pre-master:

1) Research Methodology and Descriptive Statistics (201300063; 1st and 3rd quartile; 5 ECTS).

In this course students are introduced to the basic principles of empirical research in the social sciences. The role of research in testing theories (the 'empirical cycle') is treated. Some attention is also paid to the science-philosophical background (especially Popper's critical rationalism). The students get acquainted with some important types of research (experimental, quasi-experimental, correlational and qualitative research) and their potential threats to internal and external validity.

The following phases of conducting scientific research are dealt with in more detail:

A) formulating the research question (embedded within a relevant theoretical framework and mostly emanating from a literature study)

B) splitting up the research question into subquestions

C) formulating hypotheses

D) setting up a research design

E) developing measurement instruments (including reliability and validity as the two most important quality criteria)

F) collecting data (including sampling strategy)

G) analyzing data

H) drawing conclusions and reporting results (including discussion)

In addition, attention is paid in this course to descriptive statistics. Students become familiar with methods and techniques for representing data, both graphically and numerically: among others, bar charts, pie charts, stem-and-leaf displays, histograms, boxplots, contingency tables, percentile scores, means, medians, modes, standard deviations, variances, interquartile ranges, correlation coefficients and linear regression. The normal distribution (including the z-transformation) is also introduced during the descriptive statistics part.

2) Inferential Statistics (201300064; 2nd and 4th quartile; 5 ECTS).

This course introduces inferential statistics (i.e., hypothesis tests and the construction of confidence intervals). Basic concepts from inferential statistics are discussed on the basis of conclusions concerning an average with a known population standard deviation (i.e., the z-test). Additionally, several commonly used statistical tests are treated: t-tests (both independent and dependent), binomial tests (both for a single proportion and for two proportions), and chi-square tests. Several commonly used, more advanced statistical techniques are also addressed: simple linear regression, multiple linear regression, one-way and two-way analysis of variance (ANOVA), and nonparametric tests. During the in-class computer exercises, students are taught to analyze small and large data files using the IBM SPSS statistical program.
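As a taste of what the Inferential Statistics course covers, the basic z-test for a mean with a known population standard deviation fits in a few lines. The numbers below are made up for illustration; in the course itself such tests are run in SPSS:

```python
import math

def z_test_mean(sample_mean, pop_mean, pop_sd, n):
    """z-statistic for a sample mean when the population SD is known."""
    return (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))

# hypothetical data: 36 scores averaging 104, tested against mu = 100, sigma = 12
z = z_test_mean(sample_mean=104, pop_mean=100, pop_sd=12, n=36)  # 2.0
```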
---

Source: https://myassignmentguru.com/assignments/brooks_chapter_3_1/
## Project Description:

In this problem, you will prepare a portfolio growth model to determine how long it will take a couple to reach their retirement goal.

## Steps to Perform:

| Step | Instructions | Points Possible |
| --- | --- | --- |
| 1 | Start Excel. Download and open the workbook named Brooks_Chapter_3_1_Start. | 0 |
| 2 | In cell D19, by using cell references, calculate the future value of the money market account after one year. Use the Excel FV function. Make absolute references to the account balance in cell D18 and the relevant growth rate from the Given Data section. Note: the output of the function you typed in this cell is expected as a positive number. | 1 |
| 3 | In cell E19, by using cell references, calculate the future value of the government bond mutual fund account after one year. Use the Excel FV function. Make absolute references to the account balance in cell E18 and the relevant growth rate from the Given Data section. Note: the output of the function you typed in this cell is expected as a positive number. | 1 |
| 4 | In cell F19, by using cell references, calculate the future value of the large capital mutual fund account after one year. Use the Excel FV function. Make absolute references to the account balance in cell F18 and the relevant growth rate from the Given Data section. Note: the output of the function you typed in this cell is expected as a positive number. | 1 |
| 5 | In cell G19, by using cell references, calculate the future value of the small capital mutual fund account after one year. Use the Excel FV function. Make absolute references to the account balance in cell G18 and the relevant growth rate from the Given Data section. Note: the output of the function you typed in this cell is expected as a positive number. | 1 |
| 6 | In cell H19, by using cell references, calculate the future value of the real estate trust fund account after one year. Use the Excel FV function. Make absolute references to the account balance in cell H18 and the relevant growth rate from the Given Data section. Note: the output of the function you typed in this cell is expected as a positive number. | 1 |
| 7 | In cell range D20:H38, by using cell references, calculate the future values of each of the five accounts at the end of each year for years 2 through 20. Copy the contents of cell range D19:H19 down the columns to row 38. | 1 |
| 8 | In cell I18, by using cell references, calculate the current value of the total portfolio. Use the Excel SUM function. | 1 |
| 9 | In cell range I19:I38, by using cell references, calculate the future values of the total portfolio at the end of each year for the next 20 years. Copy the function in cell I18 down the column to row 38. | 1 |
| 10 | In cell H40, enter the number of years it will take for Rachel and Richard to reach their goal. Do not use the equal sign when entering a numeric value. | 1 |
| 11 | Save the workbook. Close the workbook and then exit Excel. Submit the workbook as directed. | 0 |

Total Points: 9
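If you want to sanity-check the worksheet outside Excel, the FV formula behind those steps can be replicated in Python. Note Excel's cash-flow sign convention: money paid in (the starting balance) is entered as a negative `pv`, which is why the function's output comes back positive. The 5% rate below is a made-up stand-in for whatever the workbook's Given Data section specifies:

```python
def excel_fv(rate, nper, pmt=0.0, pv=0.0):
    """Replicate Excel's FV(rate, nper, pmt, [pv]) future-value function."""
    growth = (1 + rate) ** nper
    if rate == 0:
        return -(pv + pmt * nper)
    return -(pv * growth + pmt * (growth - 1) / rate)

# a $10,000 balance growing at a hypothetical 5% for one year
after_one_year = excel_fv(rate=0.05, nper=1, pmt=0, pv=-10_000)  # 10500.0
```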
---

Source: https://rebelviral.com/modern-scientific-calculator-in-problem-solving/

# Modern Scientific Calculator in Problem Solving
## The Role of a Scientific Calculator in Problem Solving

A scientific calculator plays a crucial role in problem solving across science, engineering, mathematics, and even everyday life. Its advanced mathematical functions go beyond basic arithmetic, enabling users to perform complex calculations efficiently. Here is an overview of its role in problem solving:

**Complex Calculations:** Scientific calculators are equipped with functions for trigonometry, logarithms, exponentials, roots, and more. These functions are essential for solving equations, analyzing data, and performing calculations that involve advanced mathematical concepts.

**Efficiency:** Problem solving often involves large numbers or intricate calculations. A scientific calculator handles these swiftly and accurately, saving time and reducing the errors that creep into manual work.

**Unit Conversions:** Many scientific calculators can perform unit conversions, allowing users to convert between measurement systems (e.g., inches to centimeters, pounds to kilograms). This is particularly helpful in physics, engineering, and chemistry.

**Statistical Analysis:** Scientific calculators often include statistical functions such as mean, median, standard deviation, and regression analysis. These aid in analyzing data sets, making predictions, and drawing conclusions from experimental results.

**Graphing Capabilities:** Some advanced scientific calculators can graph functions, allowing users to visualize mathematical relationships. This is especially valuable in calculus, physics, and engineering, where graphical representations provide insight into complex relationships.

**Programming and Algorithms:** Certain scientific calculators are programmable, letting users create and run custom algorithms. This is valuable for problems that involve repetitive calculations or specific sequences of operations.

**Education and Learning:** Scientific calculators are widely used in education to teach mathematical concepts and problem-solving techniques. They let students focus on understanding concepts rather than getting bogged down in manual calculations.

**Real-World Applications:** In professions such as engineering, architecture, finance, and science, a scientific calculator is indispensable for calculations related to design, analysis, modeling, and decision-making.

**Quick Verifications:** While solving problems, a calculator can quickly verify calculations and results, helping users identify errors and ensure the accuracy of their work.

**Portability:** Scientific calculators are compact and portable, making them convenient for solving problems on the go or in environments where access to computers or smartphones is limited.

## Exploring Advanced Functions: A Deep Dive into Scientific Calculators

**Trigonometric Functions:** Scientific calculators offer a range of trigonometric functions such as sine, cosine, tangent, and their inverses. These are crucial in geometry, physics, and engineering, where angles and circular relationships play a vital role.

**Logarithmic and Exponential Functions:** Calculators provide logarithms (base 10 and natural) as well as exponential functions. These are fundamental in scenarios involving exponential growth, decay, and complex mathematical transformations.

**Roots and Powers:** Advanced calculators can compute nth roots and raise numbers to fractional powers, which is essential for solving equations and working with radicals and fractional exponents.

**Statistical and Data Analysis Functions:** Many scientific calculators include statistical functions such as mean, median, standard deviation, and regression analysis. These aid in analyzing data sets, identifying trends, and making informed decisions based on empirical evidence.

**Complex Number Calculations:** Scientific calculators handle complex numbers, supporting addition, subtraction, multiplication, and division of complex values. This capability is essential in fields like electrical engineering and quantum mechanics.

**Matrix Operations:** Some calculators offer matrix addition, multiplication, determinant calculation, and matrix inversion. These functions are valuable in linear algebra and engineering for solving systems of linear equations and performing transformations.

**Equation Solving:** Calculators can solve equations, including linear, quadratic, cubic, and transcendental equations, helping users find solutions quickly without manual trial and error.

**Graphing Capabilities:** Advanced calculators with graphing support let users visualize functions, equations, and data plots, providing insight into mathematical relationships and helping users understand concepts visually.

**Programming and Customization:** Certain calculators allow users to create and store custom functions or programs. This programmability is beneficial for automating repetitive calculations and implementing specialized algorithms.

**Unit Conversions and Constants:** Scientific calculators often include built-in unit conversion functions and constants for physical quantities. This simplifies calculations involving different measurement units and ensures accuracy in computations.

**Numeric Integration and Differentiation:** Some calculators offer numerical integration and differentiation, useful in calculus for approximating areas under curves and finding derivatives.

**Memory and Recall Functions:** Calculators have memory storage and recall functions that let users keep intermediate results, constants, or variables for use in subsequent calculations.

## Navigating Complex Equations with a Modern Scientific Calculator

**Entering Numbers and Operations:**

- Use the numerical keypad to enter real and imaginary numbers.
- For real numbers, simply press the digits (0-9) and the decimal point.
- To enter imaginary numbers, use the "i" button, or the key labeled "√-1" or "j", depending on the calculator model.

**Basic Arithmetic:**

- Addition (+), subtraction (-), multiplication (*), and division (/) work as expected.
- Use parentheses to control the order of operations and handle complex expressions.

**Complex Numbers:**

- Represent imaginary numbers using "i" or "j" (e.g., 3 + 2i).
- When performing operations on complex numbers, handle the real and imaginary parts separately.

**Trigonometric Functions:**

- Common trigonometric functions such as sine (sin), cosine (cos), and tangent (tan) are available.
- Ensure the calculator is in the correct angle mode (degrees, radians, or grads) before performing trigonometric calculations.

**Logarithmic and Exponential Functions:**

- The natural logarithm (ln) and base-10 logarithm (log) are typically available.
- Exponential values (e^x) can be computed using the "e^x" button or the "^" key.

**Inverse Functions:**

- Most functions have inverses accessible through the calculator.
- For instance, "sin⁻¹" represents the inverse sine function (arcsine), and "e^x" computes the exponential function, the inverse of the natural logarithm.

**Memory Functions:**

- Modern scientific calculators often provide memory keys (M+, M-, MR, MC) for storing and recalling values.

**Solver and Equation Modes:**

- Some advanced calculators have equation-solving modes that can solve linear, quadratic, and other equations for their variables.
- Refer to the user manual to learn how to input and solve equations in these modes.

**Graphing Features:**

- Some calculators have graphing capabilities to visualize functions and equations, letting you graph functions and analyze their behavior.

**Unit Conversions:**

- Many scientific calculators include unit conversion functions to switch between different units (e.g., converting between degrees and radians).

**Online Resources and Manuals:**

- Consult the user manual for your specific calculator model to understand its features and functionality.
- Online resources and forums can provide tips and guidance for using complex functions effectively.

## Enhancing Learning and Efficiency: The Modern Scientific Calculator

**Streamlined Complex Calculations:** The calculator's ability to handle intricate calculations involving complex numbers, trigonometry, logarithms, and exponential functions simplifies solving difficult equations. This empowers learners to focus on understanding concepts rather than getting lost in manual arithmetic.

**Mathematical Exploration:** The calculator's graphing capabilities enable users to visualize functions and equations graphically. This facilitates a deeper understanding of mathematical relationships and allows users to explore the behavior of various functions.

**Efficient Problem-Solving:** Students and professionals can use the calculator to swiftly solve equations and systems of equations, common tasks in fields like physics, engineering, and economics. This efficiency encourages critical thinking and problem-solving skills.

**Time Savings:** Performing calculations manually can be time-consuming and error-prone. A scientific calculator accelerates the process, freeing up valuable time to tackle more complex problems or engage in other productive activities.

**Enhanced Learning Experience:** The calculator's functionality encourages active learning. Users can experiment with different inputs, observe outcomes, and draw conclusions. This hands-on approach promotes a deeper understanding of mathematical concepts.

**Unit Conversions and Constants:** Many calculators offer unit conversion features and built-in constants. This is particularly valuable in science and engineering fields, where conversions between units and the use of constants are frequent.

For students studying advanced mathematics, physics, or engineering, the calculator also serves as a gateway to functions like matrix operations, numerical integration, differentiation, and more. These functions empower learners to tackle sophisticated problems.

**Real-World Applications:** The calculator's versatility makes it applicable in a wide range of disciplines, including finance, statistics, biology, and chemistry. As a result, learners can connect theoretical concepts to real-world scenarios, making their education more relevant and engaging.

**Reduced Error Margin:** Manual calculations often come with a risk of errors. The calculator's accuracy minimizes such errors, contributing to more reliable results in both academic and professional contexts.

**Preparation for Standardized Tests:** Many standardized tests, such as the SAT, ACT, and GRE, and various professional exams, allow the use of specific scientific calculators. Familiarity with these calculators and their features can give test-takers an advantage.

**Interdisciplinary Learning:** As fields become more interdisciplinary, the calculator becomes a bridge between different subjects, allowing users to apply mathematical concepts to solve problems across various domains.

## From Analog to Digital: The Evolution of the Scientific Calculator

**Analog Calculators:** The earliest calculators were analog devices, often referred to as slide rules. These tools used logarithmic scales and sliding mechanisms to perform basic arithmetic and logarithmic calculations. While limited in functionality, they were groundbreaking in their time, aiding engineers, scientists, and students in the mid-20th century.

**Early Digital Calculators:** The shift from analog to digital calculators began in the 1960s with the introduction of electronic calculators. These devices featured numeric displays and used transistors to perform calculations. However, they were limited in functionality and mostly capable of basic arithmetic operations.

**Scientific Functionality:** The true evolution of the scientific calculator started in the 1970s, when manufacturers incorporated scientific functions such as square roots, trigonometry, logarithms, and exponentiation. These calculators revolutionized mathematics education and made complex calculations more accessible.

**Graphing Calculators:** In the 1980s and 1990s, graphing calculators emerged as a game-changer. These devices could graph functions and equations, aiding in visualizing mathematical relationships. They also featured larger screens, allowing users to input and view complex expressions more easily.

**Symbolic Calculators:** Symbolic calculators, introduced around the late 1990s, took functionality to the next level. They could manipulate algebraic expressions, solve equations symbolically, and perform calculus operations, making them invaluable tools for advanced mathematics and engineering.

**Integration of Computer Features:** With the advent of smartphones and handheld devices, scientific calculators merged with general computational devices. Many smartphones now have built-in calculator apps that replicate the functions of traditional scientific calculators. Furthermore, dedicated apps and software expand the calculator's capabilities even further.

**Online Calculators and Software:** The internet brought about online scientific calculators and software tools. These platforms give users access to advanced mathematical functions without requiring a physical calculator, from anywhere with an internet connection.

**Integration with Education:** Scientific calculators became a staple in educational settings, often required for standardized tests and exams. Their evolving features have influenced how mathematics is taught and learned, emphasizing conceptual understanding over manual calculation.

**Connectivity and Communication:** Modern scientific calculators often have connectivity features, enabling data exchange between calculators, computers, and other devices. This facilitates collaborative learning and data analysis.

**Future Trends:** As technology continues to advance, the line between traditional calculators and digital devices blurs further. We can expect calculators to integrate more seamlessly with other technologies, embrace artificial intelligence, and adapt to changing educational needs.
#### By Rebel Viral Experts

Hi, I am from Rebel Viral Experts. Let me tell you that writing has always been one of the things I’m passionate about. Good writers define reality and turn fact into truth. I believe that you never really understand a person until you consider things from his point of view. In short, a good novel can change the world.
https://mathoverflow.net/questions/247898/determinant-of-a-specific-4-times-4-symmetric-matrix/247900
# Determinant of a specific $4 \times 4$ symmetric matrix

In a recent research work, I have come across the following nice identity, where the entries $a,b,x$ belong to an arbitrary commutative unital ring:

$$\begin{vmatrix} 2 & a & b & ab-x \\ a & 2 & x & b \\ b & x & 2 & a \\ ab-x & b & a & 2 \end{vmatrix}=(x^2-abx+a^2+b^2-4)^2.$$

Note that if the ring has characteristic $2$ then the formula is an obvious application of the Pfaffian.

The only way I have been able to check this identity is through a tedious computation (of course, running any formal computing software will do).

My question: Is there any elegant way to prove it?

• If you generalize the element 2 in the diagonal to $c$, you get $(x^2-abx+a^2+b^2-D_1)(x^2-abx+a^2+b^2-D_2)$ with $D_{1,2}=c^2 \pm (2-c)ab$. – Andreas Rüdinger Aug 20 '16 at 11:29

---

Here's a method for calculating the determinant, explaining at least why it ends up as a product. I don't know if there's any significance to your determinant being a square.

Define
$$H= \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & -1 \end{pmatrix}.$$
(The tensor product of a one-dimensional Hadamard matrix with a two-by-two identity matrix.)

Then $\det H=1$ and for any $a,b,c,d,e,f,g,h$,
$$H \begin{pmatrix} a & b & c & d \\ b & a & d & c \\ e & f & g & h \\ f & e & h & g \end{pmatrix} H =\begin{pmatrix} a+b & 0 & c+d & 0 \\ 0 & a-b & 0 & c-d \\ e+f & 0 & g+h & 0 \\ 0 & e-f & 0 & g-h \end{pmatrix}$$
which is (similar to)
$$\begin{pmatrix} a+b & c+d \\ e+f & g+h \end{pmatrix} \oplus \begin{pmatrix} a-b & c-d \\ e-f & g-h \end{pmatrix}.$$

Plugging in a rotated version of your matrix gives
$$\begin{vmatrix} 2 & x & b & a \\ x & 2 & a & b \\ b & a & 2 & ab-x \\ a & b & ab-x & 2 \end{vmatrix} = \begin{vmatrix} 2+x & a+b \\ a+b & 2+ab-x \end{vmatrix} \cdot \begin{vmatrix} 2-x & b-a \\ b-a & 2-ab+x \end{vmatrix} = (4-x^2+abx-a^2-b^2)(4-x^2+abx-a^2-b^2).$$

• Thank you very much. On the significance of the determinant being a square: if the ring is a field and the determinant is nonzero then the matrix under consideration turns out to represent the polar form $(x,y) \mapsto N(x+y)-N(x)-N(y)$ of the norm $N$ of some quaternion algebra. This can be used to give an alternative proof of the formula but it requires a lot of additional background. – Clément de Seguins Pazzis Aug 20 '16 at 12:12

---

The following answer is inspired by Colin's.

As noted by Colin MacQuillan, the matrix under consideration turns out to be similar to the block matrix $\begin{bmatrix} A & C \\ C & B \end{bmatrix}$ where $A=2I_2+x K$, $C=a I_2+ b K$, $B=2 I_2+(ab-x) K$, with $K:=\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$. Since $I_2$ and $K$ commute, it is then standard that
$$\begin{vmatrix} A & C \\ C & B \end{vmatrix}=\det(AB-C^2).$$
Here, we see that $AB=(4+x(ab-x))I_2+2ab K$ and $C^2=(a^2+b^2) I_2+2ab K$. Hence, $AB-C^2=(4-a^2-b^2+x(ab-x))I_2$, which yields the claimed result.

Generalizing the diagonal entry of the initial matrix to a $c$ gives, with a similar method, that the determinant equals that of $(c^2-a^2-b^2+x(ab-x))I_2+(c-2)ab K$, yielding Andreas's formula.

---

In short: the determinant of every symmetric matrix is a square!

Consider three arbitrary matrices $a, b, c \in SL_2$. One can wonder what relations the traces of their products satisfy. The answer is given by the famous Jimbo-Fricke cubic:
$$\begin{equation} \begin{split} &tr(ab)\ tr(bc) \ tr(ac)+ tr(ab)^2+tr(bc)^2+tr(ac)^2\\ &+tr(a)^2+tr(b)^2+tr(c)^2+tr(abc)^2\\ &-(tr(a)tr(b)+tr(c)tr(abc))tr(ab)\\ &-(tr(b)tr(c)+tr(a)tr(abc))tr(bc)\\ &-(tr(a)tr(c)+tr(b)tr(abc))tr(ac)\\ &+tr(a)tr(b)tr(c)tr(abc)-4=0.\\ \end{split} \end{equation}$$

Every determinant of a symmetric matrix can be written in the following form for some matrices $a, b, c$:
$$G=\begin{vmatrix} 2 & -tr(a) & -tr(b) & -tr(bc)\\ -tr(a) & 2 & -tr(ab)& -tr(abc)\\ -tr(b) & -tr(ab) & 2 & -tr(c)\\ -tr(bc) & -tr(abc) & -tr(c)& 2\\ \end{vmatrix}.$$
The relation above is equivalent to the following:
$$(2tr(ac)+tr(ab)tr(bc)-tr(a)tr(c)-tr(b)tr(abc))^2=G.$$

Usually a symmetric determinant is not a square, because $tr(ac)$ is not a polynomial in the entries of $G$. The case of the matrix in the question corresponds to $c=a^{-1}$, because $tr(c)=tr(c^{-1})$, $tr(aba^{-1})=tr(a)$ and $tr(ab)+tr(ba^{-1})=tr(a)+tr(b)$. The square root of $G$ is algebraic, because $tr(ac)=2$.

I have seen this presentation of the Jimbo-Fricke cubic only in one place: https://arxiv.org/pdf/1308.4092.pdf, formula (3.9), and I will be really grateful for any references.
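The identity in the question is easy to spot-check numerically for particular values. A sketch using naive Laplace expansion along the first row (the sample values `a = 1, b = 2, x = 3` are arbitrary, not from the original post):

```csharp
using System;

public static class DetCheck
{
    // Determinant by Laplace (cofactor) expansion along the first row.
    // Fine for a one-off 4x4 check; do not use this for large matrices.
    public static double Det(double[,] m)
    {
        int n = m.GetLength(0);
        if (n == 1) return m[0, 0];
        double sum = 0;
        for (int j = 0; j < n; j++)
        {
            // Build the minor obtained by deleting row 0 and column j.
            var minor = new double[n - 1, n - 1];
            for (int r = 1; r < n; r++)
                for (int c = 0, cc = 0; c < n; c++)
                    if (c != j) minor[r - 1, cc++] = m[r, c];
            sum += (j % 2 == 0 ? 1 : -1) * m[0, j] * Det(minor);
        }
        return sum;
    }

    public static void Main()
    {
        double a = 1, b = 2, x = 3;
        var M = new double[,]
        {
            { 2,         a, b, a * b - x },
            { a,         2, x, b         },
            { b,         x, 2, a         },
            { a * b - x, b, a, 2         }
        };
        double lhs = Det(M);
        double q = x * x - a * b * x + a * a + b * b - 4;  // = 4 for these values
        Console.WriteLine($"{lhs} == {q * q}");            // 16 == 16
    }
}
```

Here $x^2-abx+a^2+b^2-4 = 9-6+1+4-4 = 4$, so both sides equal $16$.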
https://calculatorsedge.com/d-exponent
# D-EXPONENT CALCULATOR

Calculate the d-exponent of any number for free. Enter the values below to calculate the d-exponent.
- R = Penetration Rate
- N = Rotary Speed
- W = Weight on Bit
- D = Drill Bit Diameter
- n = Mud Weight Equivalent
- m = Mud Weight Used
- C = Shale Compactibility Coefficient

| Input | Unit |
| --- | --- |
| Penetration Rate | ft/hr |
| Rotary Speed | rev/minute |
| Weight on Bit | 1000 lb |
| Drill Bit Diameter | inches |
| Mud Weight Equivalent | lb/gal |
| Mud Weight Used | lb/gal |
| Shale Compactibility Coefficient | — |

The d-exponent calculator is a tool that helps to determine the maximum depth of a drilling well in a particular area. It is an essential tool for drilling engineers involved in designing and planning drilling operations.

In the oil and gas industry, the d-exponent is a significant factor used to determine the drilling efficiency of a particular formation. The d-exponent is calculated by analyzing the drilling rate with respect to the depth of the well. The higher the value of the d-exponent, the more difficult it is to drill the well.

The d-exponent calculator takes into account various factors such as the characteristics of the rock formation, the drilling techniques used, and the type of drill bit used. It provides accurate information that helps drilling engineers make informed decisions about the drilling process.

Many d-exponent calculators are available online, and they are easy to use. Users need to input specific information such as the drilling rate, the depth of the well, the drilling fluid weight, the drill bit type, and the type of rock formation being drilled. The calculator then provides the d-exponent value, which helps to determine the maximum depth that can be achieved in that particular location.

In conclusion, the d-exponent calculator is a crucial tool for drilling engineers, and its accuracy is critical to the success of drilling operations. With new advancements in technology, the accuracy of d-exponent calculations is continually improving, which further enhances the effectiveness of the drilling process.
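The page's own formula images did not survive extraction. The widely used relation in the drilling literature is the Jorden & Shirley d-exponent with a mud-weight correction; treating that as the intended formula (an assumption — the page's exact variant, including how it uses the shale compactibility coefficient, is unknown), a sketch:

```csharp
using System;

public static class DExponent
{
    // Jorden & Shirley d-exponent (assumed form):
    //   d = log10(R / (60 N)) / log10(12 W / (10^6 D))
    // R: penetration rate (ft/hr), N: rotary speed (rpm),
    // W: weight on bit (lb, i.e. the page's "1000 Lb" input x 1000),
    // Dbit: bit diameter (inches).
    public static double D(double R, double N, double W, double Dbit)
        => Math.Log10(R / (60.0 * N)) / Math.Log10(12.0 * W / (1e6 * Dbit));

    // Corrected d-exponent: scale by normal (equivalent) mud weight n
    // over the mud weight in use m, both in lb/gal.
    public static double Dc(double d, double n, double m) => d * (n / m);

    public static void Main()
    {
        // Hypothetical inputs: 50 ft/hr, 100 rpm, 30,000 lb WOB, 12 in bit.
        double d = D(R: 50, N: 100, W: 30_000, Dbit: 12);
        Console.WriteLine(d.ToString("F3"));        // 1.365
        Console.WriteLine(Dc(d, n: 9.0, m: 10.0));  // corrected value
    }
}
```

A rising corrected d-exponent with depth normally indicates increasing compaction; a departure below the trend is the classic sign of abnormal pore pressure.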
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_(Zumdahl_and_Decoste)/06%3A_Thermochemistry/9.1%3A_The_Nature_of_Energy
# 9.1: The Nature of Energy

## What is Energy?

Energy is one of the most fundamental and universal concepts of physical science, but one that is remarkably difficult to define in a way that is meaningful to most people. This perhaps reflects the fact that energy is not a “thing” that exists by itself, but is rather an attribute of matter (and also of electromagnetic radiation) that can manifest itself in different ways. It can be observed and measured only indirectly through its effects on matter that acquires, loses, or possesses it.

The concept that we call energy was very slow to develop; it took more than a hundred years just to get people to agree on the definitions of many of the terms we use to describe energy and the interconversion between its various forms. But even now, most people have some difficulty in explaining what it is; somehow, the definition we all learned in elementary science ("the capacity to do work") seems less than adequate to convey its meaning.

Although the term "energy" was not used in science prior to 1802, it had long been suggested that certain properties related to the motions of objects exhibit an endurance which is incorporated into the modern concept of "conservation of energy". In the 17th Century, the great mathematician Gottfried Leibnitz (1646-1716) suggested the distinction between vis viva ("live force") and vis mortua ("dead force"), which later became known as kinetic energy (1829) and potential energy (1853).

### Kinetic energy and potential energy

Whatever energy may be, there are basically two kinds.

Kinetic energy is associated with the motion of an object, and its direct consequences are part of everyone's daily experience; the faster the ball you catch in your hand, and the heavier it is, the more you feel it. Quantitatively, a body with a mass m and moving at a velocity v possesses the kinetic energy mv²/2.

**Example 1**

A rifle shoots a 4.25 g bullet at a velocity of 965 m s⁻¹. What is its kinetic energy?

**Solution**

The only additional information you need here is that 1 J = 1 kg m² s⁻²:

KE = ½ × (0.00425 kg)(965 m s⁻¹)² = 1980 J

Potential energy is energy a body has by virtue of its location. But there is more: the body must be subject to a "restoring force" of some kind that tends to move it to a location of lower potential energy. Think of an arrow that is subjected to the force from a stretched bowstring; the more tightly the arrow is pulled back against the string, the more potential energy it has.

More generally, the restoring force comes from what we call a force field — a gravitational, electrostatic, or magnetic field. We observe the consequences of gravitational potential energy all the time, such as when we walk, but seldom give it any thought.

If an object of mass m is raised off the floor to a height h, its potential energy increases by mgh, where g is a proportionality constant known as the acceleration of gravity; its value at the earth's surface is 9.8 m s⁻².

**Example 2**

Find the change in potential energy of a 2.6 kg textbook that falls from the 66-cm height of a table top onto the floor.

**Solution**

PE = m g h = (2.6 kg)(9.8 m s⁻²)(0.66 m) = 16.8 kg m² s⁻² = 16.8 J

Similarly, the potential energy of a particle having an electric charge q depends on its location in an electrostatic field.

## "Chemical energy"

Electrostatic potential energy plays a major role in chemistry; the potential energies of electrons in the force field created by atomic nuclei lie at the heart of the chemical behavior of atoms and molecules. "Chemical energy" usually refers to the energy that is stored in the chemical bonds of molecules. These bonds form when electrons are able to respond to the force fields created by two or more atomic nuclei, so they can be regarded as manifestations of electrostatic potential energy. In an exothermic chemical reaction, the electrons and nuclei within the reactants undergo rearrangement into products possessing lower energies, and the difference is released to the environment in the form of heat.

### Interconversion of potential and kinetic energy

Transitions between potential and kinetic energy are such an intimate part of our daily lives that we hardly give them a thought. It happens in walking as the body moves up and down. Our bodies utilize the chemical energy in glucose to keep us warm and to move our muscles. In fact, life itself depends on the conversion of chemical energy to other forms.
*Figure: Conservation of energy applied to a bicyclist and a hill.*

Energy is conserved: it can neither be created nor destroyed. So when you go uphill, your kinetic energy is transformed into potential energy, which gets changed back into kinetic energy as you coast down the other side. And where did the kinetic energy you expended in pedaling uphill come from? By conversion of some of the chemical potential energy in your breakfast cereal.

- When you drop a book, its potential energy is transformed into kinetic energy. When it strikes the floor, this transformation is complete. What happens to the energy then? The kinetic energy that, at the moment of impact, was situated exclusively in the moving book now becomes shared between the book and the floor, in the form of randomized thermal motions of the molecular units of which they are made; we can observe this effect as a rise in temperature.
- Much of the potential energy of falling water can be captured by a water wheel or other device that transforms the kinetic energy of the exiting water into mechanical energy. The output of a hydroelectric power plant is directly proportional to its height above the level of the generator turbines in the valley below. At this point, the kinetic energy of the exiting water is transferred to that of the turbine, most of which (up to 90 percent in the largest installations) is then converted into electrical energy.
- Will the temperature of the water at the bottom of a waterfall be greater than that at the top? James Joule himself predicted that it would be. It has been calculated that at Niagara Falls, complete conversion of the potential energy of 1 kg of water at the top into kinetic energy when it hits the plunge pool 58 meters below will result in a temperature increase of about 0.14 C°. (But there are lots of complications. For example, some of the water breaks up into tiny droplets as it falls, and water evaporates from droplets quite rapidly, producing a cooling effect.)
- Chemical energy can also be converted, at least partially, into electrical energy: this is what happens in a battery. If a highly exothermic reaction also produces gaseous products, the latter may expand so rapidly that the result is an explosion — a net conversion of chemical energy into kinetic energy (including sound).

### Thermal energy

Kinetic energy is associated with motion, but in two different ways. For a macroscopic object such as a book or a ball, or a parcel of flowing water, it is simply given by ½ mv². However, as we mentioned above, when an object is dropped onto the floor, or when an exothermic chemical reaction heats surrounding matter, the kinetic energy gets dispersed into the molecular units in the environment. This "microscopic" form of kinetic energy, unlike that of a speeding bullet, is completely random in the kinds of motions it exhibits and in its direction. We refer to this as "thermalized" kinetic energy, or more commonly simply as thermal energy. We observe the effects of this as a rise in the temperature of the surroundings. The temperature of a body is a direct measure of the quantity of thermal energy it contains.

## Thermal energy is never completely recoverable

Once kinetic energy is thermalized, only a portion of it can be converted back into potential energy. The remainder simply gets dispersed and diluted into the environment, and is effectively lost.

To summarize, then:

- Potential energy can be converted entirely into kinetic energy.
- Potential energy can also be converted, with varying degrees of efficiency, into electrical energy.
- The kinetic energy of macroscopic objects can be transferred between objects (barring the effects of friction).
- Once kinetic energy becomes thermalized, only a portion of it can be converted back into potential energy or concentrated back into the kinetic energy of a macroscopic object. This limitation, which has nothing to do with technology but is a fundamental property of nature, is the subject of the second law of thermodynamics.
- A device that is intended to accomplish the partial transformation of thermal energy into organized kinetic energy is known as a heat engine.

### Energy scales are always arbitrary

You might at first think that a book sitting on the table has zero kinetic energy since it is not moving. But if you think about it, the earth itself is moving; it is spinning on its axis, it is orbiting the sun, and the sun itself is moving away from the other stars in the general expansion of the universe. Since these motions are normally of no interest to us, we are free to adopt an arbitrary scale in which the velocity of the book is measured with respect to the table; on this so-called laboratory coordinate system, the kinetic energy of the book can be considered zero.

We do the same thing with potential energy. If the book is on the table, its potential energy with respect to the surface of the table will be zero. If we adopt this as our zero of potential energy, and then push the book off the table, its potential energy will be negative after it reaches the floor.

### Energy units

Energy is measured in terms of its ability to perform work or to transfer heat. Mechanical work is done when a force f displaces an object by a distance d:

$w = f × d$

The basic unit of energy is the joule. One joule is the amount of work done when a force of 1 newton acts over a distance of 1 m; thus 1 J = 1 N·m. The newton is the amount of force required to accelerate a 1-kg mass by 1 m/sec², so the basic dimensions of the joule are kg m² s⁻². The other two units in wide use, the calorie and the BTU (British thermal unit), are defined in terms of the heating effect on water. Because of the many forms that energy can take, there are a correspondingly large number of units in which it can be expressed, a few of which are summarized below.

| Unit | Definition / notes | Conversions |
| --- | --- | --- |
| calorie | 1 calorie will raise the temperature of 1 g of water by 1 C°. The “dietary” calorie is actually 1 kcal. An average young adult expends about 1800 kcal per day just to stay alive. (You should know this definition.) | 1 cal = 4.184 J |
| BTU | 1 BTU (British Thermal Unit) will raise the temperature of 1 lb of water by 1 F°. | 1 BTU = 1055 J |
| erg | The erg is the c.g.s. unit of energy and a very small one; the work done when a 1-dyne force acts over a distance of 1 cm. | 1 J = 10⁷ ergs; 1 erg = 1 dyne·cm = 1 g cm² s⁻² |
| electron-volt | Even tinier: 1 eV is the work required to move one elementary charge through a potential difference of 1 volt. | 1 J = 6.24 × 10¹⁸ eV |
| watt-hour | The watt is a unit of power, which measures the rate of energy flow in J s⁻¹. Thus the watt-hour is a unit of energy. An average human consumes energy at a rate of about 100 watts; the brain alone runs at about 5 watts. | 1 J = 2.78 × 10⁻⁴ W·h; 1 W·h = 3.6 kJ |
| liter-atmosphere | A variant of force-displacement work associated with volume changes in gases. | 1 L·atm = 101.325 J |
| quad, therm | The huge quantities of energy consumed by cities and countries are expressed in quads; the therm is a similar but smaller unit. | 1 quad = 10¹⁵ Btu = 1.05 × 10¹⁸ J |
| ton of TNT equivalent | If the object is to obliterate cities or countries with nuclear weapons, the energy unit of choice is the ton of TNT equivalent. | 1 ton of TNT = 4.184 GJ (by definition) |
| fossil-fuel equivalents | In terms of fossil fuels, we have the barrel-of-oil equivalent, cubic-meter-of-natural-gas equivalent, and ton-of-coal equivalent. | 1 bboe = 6.1 GJ; 1 cmge = 37–39 MJ; 1 toce = 29 GJ |

## Heat and Work

Heat and work are both measured in energy units, so they must both represent energy. How do they differ from each other, and from just plain “energy” itself? In our daily language, we often say that "this object contains a lot of heat", but this is gibberish in thermodynamics terms, although it is ok to say that the object is "hot", indicating that its temperature is high. The term "heat" has a special meaning in thermodynamics: it is a process in which a body (the contents of a tea kettle, for example) acquires or loses energy as a direct consequence of its having a different temperature than its surroundings. Hence, thermal energy can only flow from a higher temperature to a lower temperature. It is this flow that constitutes "heat". Use of the term "flow" of heat recalls the incorrect 18th-century notion that heat is an actual substance called “caloric” that could flow like a liquid.

> **Note: Heat.** We often say that "this object contains a lot of heat," however, this makes no sense since heat represents an energy transfer.

Transfer of thermal energy can be accomplished by bringing two bodies into physical contact (the kettle on top of the stove, or through an electric heating element inside the kettle). Another mechanism of thermal energy transfer is by radiation; a hot object will convey energy to any body in sight of it via electromagnetic radiation in the infrared part of the spectrum. In many cases, both modes will be active.

Work refers to the transfer of energy by some means that does not depend on temperature difference. Work, like energy, can take various forms, the most familiar being mechanical and electrical.

- Mechanical work arises when an object moves a distance Δx against an opposing force f: $w = f Δx$
- Electrical work is done when a body having a charge q moves through a potential difference ΔV.

> **Note: Work.** A transfer of energy to or from a system by any means other than heat is called “work”.

Work can be completely converted into heat (by friction, for example), but heat can only be partially converted to work. Conversion of heat into work is accomplished by means of a heat engine, the most common example of which is an ordinary gasoline engine. The science of thermodynamics developed out of the need to understand the limitations of steam-driven heat engines at the beginning of the Industrial Age. The Second Law of Thermodynamics states that the complete conversion of heat into work is impossible. Something to think about when you purchase fuel for your car!

## Contributors

Stephen Lower, Professor Emeritus (Simon Fraser U.) Chem1 Virtual Textbook
https://romig.dev/blog/c-oop-primer-operator-overloading/
The concept of overloading a method (same method, different inputs) can also be applied to operators. Operator overloading gives us the ability to use an operator to provide its services to not just basic data types, but custom types like classes. This gives us a more natural abstraction of implementing those operations.
"About to rain down some operations.\n\nRefresher: What are operators?\n\nIn basic terms, operators are symbols that specify what type of operation (such as addition & substraction) to perform in an expression. Math isn't the only available operation in programming though!\n\n• Unary operators (one operand e.g. +x)\n\n• +x - returns the value of x\n• -x - negates x\n• !x - logically negate x (translates to NOT x)\n• ~x - bitwise complement (changes all bits in value by swapping 1s & 0s)\n• ++x - prefixed increment (returns value after updating value)\n• x++ - postfixed increment (returns value before updating value)\n• --x - prefixed decrement (returns value after updating value)\n• x-- - postfixed decrement (returns value before updating value)\n• Binary operators (two operands e.g. x + y)\n\n• - - subtraction\n• * - multiplication\n• / - division\n• % - modulo / remainder\n• Relational (Comparison) operators (two operands, returns a boolean value e.g. x == y)\n\n• == - equal to\n• != - not equal to\n• >, >= - greater than, greater than or equal to\n• <, <= - less than, greather than or equal to\n\nResource: C# Operators @ Microsoft\n\nSyntax\n\nThe overloaded operator method follows these rules:\n\n• It should be a member of the class it is returning.\n• The method must be static.\n• The method must have the keyword operator followed by the operator that will be overridden.\n• The parameters will be the operand(s).\n\nSo the syntax looks something like the below, replacing [return type] with a data type (e.g. class type, bool), [?] 
with an operator, and [parameters] with the appropriate parameters like in a regular method.\n\n// public static [return type] operator [?]([parameters])\n\n// Example:\npublic static Foo operator +(Foo foo1, Foo foo2)\n{\n}\n\nOn a Unary Operator\n\nA unary operator takes only one operand, so its overload syntax would look something like below.\n\nWe were able to reverse the signs of both numbers within the Calc object at the same time. Crazy, huh?\n\nOn a Binary Operator\n\nLet's try another example. Let's say we have this Box class.\n\npublic class Box\n{\npublic int Height { get; set; }\npublic int Width { get; set; }\n\npublic Box(int h, int w)\n{\nHeight = h;\nWidth = w;\n}\n}\n\nAnd we would like to add Box objects to make a bigger Box. We're thinking it should work with a simple equation like Box b3 = b1 + b2;, resulting in b3 having dimensions of the combined heights and widths of the other two boxes. We'll overload the binary plus operator for this.\n\nThe overloaded operator method takes the Height and Width values from both of the Box operands and adds them together, respectively. Then it creates a new Box object with those new values. Working just like we imagined it - possibly more! See the code above where a new box was made by adding 3 boxes together. The operator overload works when chained to multiple operands!\n\nWhat if we wanted to make a new box by increasing both height and width by an integer value? Like Box b4 = b1 + 10;. Absolutely! There's no rule saying both operands have to be the class's type.\n\npublic static Box operator +(Box box1, int num)\n{\nint h = box1.Height + num;\nint w = box1.Width + num;\nBox result = new Box(h, w);\nreturn result;\n}\n\nThis is very similar to the overload we already made except that the second operand is an integer.\n\nNow what if the integer was the first operand? Like Box b5 = 10 + b1;. No problem. We'll just have to create an overloaded method of that operator overload. 
This will be easier than that last sentence sounds.\n\npublic static Box operator +(int num, Box box1)\n{\nreturn box1 + num;\n}\n\nFor this overload, we can simplify it by just calling the previous operator overload we already made with the operands in the order it accepts. Check out the complete code below:\n\nOn a Relational Operator\n\nRelational operators, like < and >, can be set up similar to binary operators (with two parameters). The important thing to remember is that they need to be overloaded in pairs. For example, if we overload the < operator, we need to overload the > as well. The same is true for the <= & >= and == & != operator pairs.\n\nLet's see how we can implement this with our Box class.\n\nFor this example, we'll use the box's area as the basis of comparison. And notice the return type is bool because relational operations return a true or false value.\n\nNeat stuff, right?\n\nYou can see from these examples how operator overloading can be an intuitive and natural abstraction. It is possible to abuse this though and unintentionally create convoluted implementations. A good rule of thumb is to use operator overloading when it brings simplicity to the class type and overall maintainability to the application.\n\nNext in this series, we'll learn more about methods and classes, specifically Generics."
] | [
null,
"https://romig.dev/assets/images/blog/oop_primer/operator_cloud.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.856583,"math_prob":0.96636134,"size":5362,"snap":"2022-05-2022-21","text_gpt3_token_len":1221,"char_repetition_ratio":0.14464353,"word_repetition_ratio":0.0125,"special_character_ratio":0.24412532,"punctuation_ratio":0.12334353,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9686195,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-17T09:57:13Z\",\"WARC-Record-ID\":\"<urn:uuid:025458c1-1287-4e84-9455-baa2ea9e976f>\",\"Content-Length\":\"41068\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d3346fd6-0b17-4ff6-a3e6-88b8a91e6f13>\",\"WARC-Concurrent-To\":\"<urn:uuid:66c4c70f-a4d9-4b90-a478-e611ff7549d0>\",\"WARC-IP-Address\":\"104.198.14.52\",\"WARC-Target-URI\":\"https://romig.dev/blog/c-oop-primer-operator-overloading/\",\"WARC-Payload-Digest\":\"sha1:GXOKUK6BNF2T6NR4RHVTCJIDN3TJ6662\",\"WARC-Block-Digest\":\"sha1:WYM4TZXWPIXRQR2T6WEZIRJDE6PQS4T5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300533.72_warc_CC-MAIN-20220117091246-20220117121246-00016.warc.gz\"}"} |
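The C# walk-through above generalizes to any language with operator-overloading support. As a quick cross-language sketch (a hypothetical Python `Box` mirroring the article's class — not code from the article itself), Python's `__add__`/`__radd__` play the role of the two `operator +` overloads, and the comparison dunders are likewise defined in pairs:

```python
class Box:
    """Hypothetical Python analogue of the article's C# Box class."""

    def __init__(self, h, w):
        self.height, self.width = h, w

    def __add__(self, other):
        # Box + Box: combine heights and widths, as in the C# overload
        if isinstance(other, Box):
            return Box(self.height + other.height, self.width + other.width)
        # Box + int: grow both dimensions by the integer
        if isinstance(other, int):
            return Box(self.height + other, self.width + other)
        return NotImplemented

    # int + Box: delegate to Box + int, like the second C# overload
    __radd__ = __add__

    @property
    def area(self):
        return self.height * self.width

    # Relational operators are defined in pairs, comparing by area
    def __lt__(self, other):
        return self.area < other.area

    def __gt__(self, other):
        return self.area > other.area


b3 = Box(1, 2) + Box(3, 4)   # Box(4, 6)
b4 = Box(1, 2) + 10          # Box(11, 12)
b5 = 10 + Box(1, 2)          # also Box(11, 12), via __radd__
```

As in the C# version, `b1 + b2 + b3` chains naturally, and assigning `__radd__ = __add__` covers the `10 + box` case without duplicating the addition logic.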
http://codeforces.com/problemset/problem/500/E | [
"E. New Year Domino\ntime limit per test\n2 seconds\nmemory limit per test\n256 megabytes\ninput\nstandard input\noutput\nstandard output\n\nCelebrating the new year, many people post videos of falling dominoes; here's a list of them: https://www.youtube.com/results?search_query=New+Years+Dominos\n\nUser ainta, who lives in a 2D world, is going to post a video as well.\n\nThere are n dominoes on a 2D Cartesian plane. The i-th domino (1 ≤ i ≤ n) can be represented as a line segment which is parallel to the y-axis and whose length is li. The lower point of the domino is on the x-axis. Let's denote the x-coordinate of the i-th domino as pi. Dominoes are placed one after another, so p1 < p2 < ... < p(n-1) < pn holds.\n\nUser ainta wants to take a video of falling dominoes. To make dominoes fall, he can push a single domino to the right. Then, the domino will fall down drawing a circle-shaped orbit until the line segment totally overlaps with the x-axis.",
null,
"Also, if the s-th domino touches the t-th domino while falling down, the t-th domino will also fall down towards the right, following the same procedure above. Domino s touches domino t if and only if the segments representing s and t intersect.",
null,
"See the picture above. If he pushes the leftmost domino to the right, it falls down, touching dominoes (A), (B) and (C). As a result, dominoes (A), (B), (C) will also fall towards the right. However, domino (D) won't be affected by pushing the leftmost domino, but eventually it will fall because it is touched by domino (C) for the first time.",
null,
"The picture above is an example of falling dominoes. Each red circle denotes a touch of two dominoes.\n\nUser ainta has q plans of posting the video. The j-th of them starts with pushing the xj-th domino, and lasts until the yj-th domino falls. But sometimes, it could be impossible to achieve such a plan, so he has to lengthen some dominoes. It costs one dollar to increase the length of a single domino by 1. User ainta wants to know, for each plan, the minimum cost needed to achieve it. Plans are processed independently, i.e. if a domino's length is increased in some plan, it doesn't affect its length in other plans. The set of dominoes that will fall, other than the xj-th and yj-th dominoes, doesn't matter, but the initial push should be on domino xj.\n\nInput\n\nThe first line contains an integer n (2 ≤ n ≤ 2 × 10^5) — the number of dominoes.\n\nNext n lines describe the dominoes. The i-th line (1 ≤ i ≤ n) contains two space-separated integers pi, li (1 ≤ pi, li ≤ 10^9) — the x-coordinate and the length of the i-th domino. It is guaranteed that p1 < p2 < ... < p(n-1) < pn.\n\nThe next line contains an integer q (1 ≤ q ≤ 2 × 10^5) — the number of plans.\n\nNext q lines describe the plans. The j-th line (1 ≤ j ≤ q) contains two space-separated integers xj, yj (1 ≤ xj < yj ≤ n). It means the j-th plan is to push the xj-th domino, and shoot a video until the yj-th domino falls.\n\nOutput\n\nFor each plan, print a line containing the minimum cost needed to achieve it. If no cost is needed, print 0.\n\nExamples\nInput\n6\n1 5\n3 3\n4 4\n9 2\n10 1\n12 1\n4\n1 2\n2 4\n2 5\n2 6\nOutput\n0\n1\n1\n2\nNote\n\nConsider the example. The dominoes are set like the picture below.",
null,
"Let's take a look at the 4th plan. To make the 6th domino fall by pushing the 2nd domino, the length of the 3rd domino (whose x-coordinate is 4) should be increased by 1, and the 5th domino (whose x-coordinate is 9) should be increased by 1 (other option is to increase 4th domino instead of 5th also by 1). Then, the dominoes will fall like in the picture below. Each cross denotes a touch between two dominoes.",
null,
"",
null,
"",
null,
"",
null,
"",
null,
""
] | [
null,
"http://codeforces.com/predownloaded/30/78/30782af303f8c96372d271e40c3b730c7ac062b9.png",
null,
"http://codeforces.com/predownloaded/ad/2c/ad2cdd944a89bb823f1e5b559ce383174fe7f832.png",
null,
"http://codeforces.com/predownloaded/ec/33/ec339764951e25610bba22fdd63db4de911de4ae.png",
null,
"http://codeforces.com/predownloaded/13/8e/138ebc6f8479f4ff1be98f10d736f70fb8d61660.png",
null,
"http://codeforces.com/predownloaded/06/58/0658a665d7c7dff9205ff7ba3ee47b2b143f38b9.png",
null,
"http://codeforces.com/predownloaded/a2/30/a23000eaa3652a65d9fefe7a8d0cbbe2d8f506ca.png",
null,
"http://codeforces.com/predownloaded/d0/9c/d09c051b9a8f9cdf86dbe6590485dcf763e9f7c6.png",
null,
"http://codeforces.com/predownloaded/37/a7/37a75aef86d500ebbde8bbe8b9f269b131d2ec44.png",
null,
"http://codeforces.com/predownloaded/bb/71/bb7145011c78bdc01e451e389081bf6127ab9b3f.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8976201,"math_prob":0.810062,"size":3428,"snap":"2019-43-2019-47","text_gpt3_token_len":964,"char_repetition_ratio":0.15303738,"word_repetition_ratio":0.03343465,"special_character_ratio":0.27158692,"punctuation_ratio":0.11788079,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9574753,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T06:20:31Z\",\"WARC-Record-ID\":\"<urn:uuid:1543b12f-1e73-4479-ac87-0f104e65f277>\",\"Content-Length\":\"55172\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1403922f-6e1c-4d3c-b665-c3e0a099fabb>\",\"WARC-Concurrent-To\":\"<urn:uuid:2b3bb708-b565-420e-b24f-6a848ace476f>\",\"WARC-IP-Address\":\"81.27.240.126\",\"WARC-Target-URI\":\"http://codeforces.com/problemset/problem/500/E\",\"WARC-Payload-Digest\":\"sha1:XXNQCAO5A6XYATTU2RDCVH2UO56RYSPK\",\"WARC-Block-Digest\":\"sha1:KBTRB64F4GHYBXJKAJ3AT46P7S3VY3WM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987756350.80_warc_CC-MAIN-20191021043233-20191021070733-00005.warc.gz\"}"} |
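The sample above can be reproduced with a naive per-query sweep: push domino x, keep the furthest x-coordinate reached so far, and whenever the next domino starts beyond that reach, pay exactly the gap (lengthening an already-fallen domino by that amount). This is O(n) per query — far too slow for the full limits of 2 × 10^5 dominoes and plans, where the intended solutions use faster interval/sparse-table techniques — but it matches all four sample answers:

```python
def min_cost(dominoes, x, y):
    """Minimum lengthening cost so that pushing domino x (1-indexed)
    makes domino y fall. dominoes is a list of (p, l) pairs, p ascending."""
    p, l = dominoes[x - 1]
    reach = p + l                 # the pushed domino covers [p, p + l]
    cost = 0
    for p, l in dominoes[x:y]:    # dominoes x+1 .. y
        if p > reach:             # gap: the chain stops short of this domino
            cost += p - reach
            reach = p
        reach = max(reach, p + l)
    return cost


doms = [(1, 5), (3, 3), (4, 4), (9, 2), (10, 1), (12, 1)]
answers = [min_cost(doms, x, y) for x, y in [(1, 2), (2, 4), (2, 5), (2, 6)]]
# answers == [0, 1, 1, 2], matching the sample output
```

For the 4th plan, the sweep pays 1 for the gap between x=8 and x=9 and 1 more for the gap between x=11 and x=12, giving the cost of 2 described in the note.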
http://techgeekbuzz.com/category/python/page/2/ | [
"",
null,
"Category:\n\nPython Global, Local and Nonlocal Variables\n\nIn this tutorial, we will discuss the variable use in python and how they are categorized in Global, local and Non-local. Table of Content Python Global Variable Python Local Variable\n\nPosted On :",
null,
"Category:\n\nPython Global Keyword\n\nIn this tutorial, we will learn more about the global keyword and learn where and when to use the global keyword Table of Content Python global keyword Global variable across\n\nPosted On :",
null,
"Category:\n\nPython Lambda Function\n\nIn this tutorial, we will discuss one of the main topics of python know as lambda function aka anonymous function. we will discuss how and where to use it. Table\n\nPosted On :",
null,
"Category:\n\nPython Functions Arguments\n\nIn this tutorial, we will discuss those function which takes variables as arguments and perform tasks with the help of passed arguments. Table of content: Arguments Default argument Attributes arguments\n\nPosted On :",
null,
"Category:\n\nPython Recursion\n\nIn this tutorial, we will discuss what are recursive functions and how to create them in python and why we need them. Table of content What are recursive functions Python\n\nPosted On :",
null,
"Category:\n\nPython Function\n\nIn this tutorial, we are going to discuss python functions. We will learn how to write a function and how to use it? Table of Content What are Functions in\n\nPosted On :",
null,
"Category:\n\nPass Keyword in Python\n\nIn this tutorial, we will discuss the pass keyword and, when and where we can use it. We will see examples of pass keyword and understand how it works. Table\n\nPosted On :",
null,
"Category:\n\nPython Continue & Break Keywords\n\nIn this tutorial, we are going to learn two Keywords break & continue and learn how and where to use them. Table of Contents: What are break and continue keywords\n\nPosted On :",
null,
"Category:\n\nPython While Loop\n\nIn this tutorial, we will discuss one more loop, the while loop; like the for loop, the while loop is also used to repeat a block of code. In this article,\n\nPosted On :",
null,
"Category:\n\nPython For Loop\n\nIn this tutorial, we are going to discuss how we can use the for loop to iterate over a sequence of elements. Table of Contents: What is for loop What\n\nPosted On :"
] | [
null,
"http://techgeekbuzz.com/wp-content/uploads/2019/04/Python-Variable.png",
null,
"http://techgeekbuzz.com/wp-content/uploads/2019/04/Python-Global-Variable-compressor.png",
null,
"http://techgeekbuzz.com/wp-content/uploads/2019/04/Python-Lambda-Function.png",
null,
"http://techgeekbuzz.com/wp-content/uploads/2019/04/Default-Arguments.png",
null,
"http://techgeekbuzz.com/wp-content/uploads/2019/04/Python-Recursion-1-1.png",
null,
"http://techgeekbuzz.com/wp-content/uploads/2019/04/User-Defined-Function-compressor.png",
null,
"http://techgeekbuzz.com/wp-content/uploads/2019/04/Pass-Keyword-in-Python-1.png",
null,
"http://techgeekbuzz.com/wp-content/uploads/2019/04/Python-Continue-Break-Keywords.png",
null,
"http://techgeekbuzz.com/wp-content/uploads/2019/04/Python-While-Loop.png",
null,
"http://techgeekbuzz.com/wp-content/uploads/2019/04/Copy-of-Python-if-if...else-if...elif_...else-Nested-if-Statement-1.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84740263,"math_prob":0.7320028,"size":2088,"snap":"2019-26-2019-30","text_gpt3_token_len":405,"char_repetition_ratio":0.1684261,"word_repetition_ratio":0.3079096,"special_character_ratio":0.18821838,"punctuation_ratio":0.08418367,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97243506,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-17T06:57:17Z\",\"WARC-Record-ID\":\"<urn:uuid:0f1e1b52-23cc-4ebe-adac-6c41b7626313>\",\"Content-Length\":\"43710\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a4e37df3-724f-4a44-a885-e87bdbf387d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:aea50131-68b9-4408-ba9b-90379f139f6c>\",\"WARC-IP-Address\":\"160.153.137.163\",\"WARC-Target-URI\":\"http://techgeekbuzz.com/category/python/page/2/\",\"WARC-Payload-Digest\":\"sha1:C63KPNTYP72JRAYDPPXAUIDPDWMG7DL4\",\"WARC-Block-Digest\":\"sha1:VTDRD6YN2SNENQN4DRVTNL6YXABG4UTK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998440.47_warc_CC-MAIN-20190617063049-20190617085049-00141.warc.gz\"}"} |
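The tutorial listings above name several core constructs — lambda functions, recursion, and the break/continue keywords. A compact, self-contained illustration of the three (generic examples, not taken from the linked tutorials):

```python
# Lambda: an anonymous, single-expression function
square = lambda x: x * x

# Recursion: a function defined in terms of itself on a smaller input
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

# continue skips one iteration; break exits the loop entirely
evens = []
for n in range(10):
    if n % 2:      # odd: skip to the next iteration
        continue
    if n > 6:      # past 6: stop the loop
        break
    evens.append(n)
# evens == [0, 2, 4, 6]
```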
https://numbermatics.com/n/626/ | [
"# 626\n\n## 626 is an even composite number composed of two prime numbers multiplied together.\n\nWhat does the number 626 look like?\n\nThis visualization shows the relationship between its 2 prime factors (large circles) and 4 divisors.\n\n626 is an even composite number. It is composed of two distinct prime numbers multiplied together. It has a total of four divisors.\n\n## Prime factorization of 626:\n\n### 2 × 313\n\nSee below for interesting mathematical facts about the number 626 from the Numbermatics database.\n\n### Names of 626\n\n• Cardinal: 626 can be written as Six hundred twenty-six.\n\n### Scientific notation\n\n• Scientific notation: 6.26 × 10^2\n\n### Factors of 626\n\n• Number of distinct prime factors ω(n): 2\n• Total number of prime factors Ω(n): 2\n• Sum of prime factors: 315\n\n### Divisors of 626\n\n• Number of divisors d(n): 4\n• Complete list of divisors: 1, 2, 313, 626\n• Sum of all divisors σ(n): 942\n• Sum of proper divisors (its aliquot sum) s(n): 316\n• 626 is a deficient number, because the sum of its proper divisors (316) is less than itself. Its deficiency is 310\n\n### Bases of 626\n\n• Binary: 1001110010 (base 2)\n• Base-36: HE\n\n### Squares and roots of 626\n\n• 626 squared (626^2) is 391876\n• 626 cubed (626^3) is 245314376\n• The square root of 626 is 25.0199920063\n• The cube root of 626 is 8.5544372395\n\n### Scales and comparisons\n\nHow big is 626?\n• 626 seconds is equal to 10 minutes, 26 seconds.\n• To count from 1 to 626 would take you about five minutes.\n\nThis is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000.
(We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)\n\n• A cube with a volume of 626 cubic inches would be around 0.7 feet tall.\n\n### Recreational maths with 626\n\n• 626 is the same when its digits are reversed! That makes it a palindromic number.\n• The number of decimal digits it has is: 3\n• The sum of 626's digits is 14\n• More coming soon!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8788399,"math_prob":0.97913414,"size":2388,"snap":"2020-45-2020-50","text_gpt3_token_len":586,"char_repetition_ratio":0.1090604,"word_repetition_ratio":0.033333335,"special_character_ratio":0.26926297,"punctuation_ratio":0.16421053,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99518174,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-30T13:41:29Z\",\"WARC-Record-ID\":\"<urn:uuid:d2ce86a5-cffd-4414-872d-c03db398c797>\",\"Content-Length\":\"16526\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f14dba29-656d-488e-98f0-0b1eda4eb5ca>\",\"WARC-Concurrent-To\":\"<urn:uuid:44d1d4f5-5d00-4790-b339-9f7dbaa2314b>\",\"WARC-IP-Address\":\"72.44.94.106\",\"WARC-Target-URI\":\"https://numbermatics.com/n/626/\",\"WARC-Payload-Digest\":\"sha1:SCAJ7OYEO6F2RFHLAMY5IC2BGCC5YGSP\",\"WARC-Block-Digest\":\"sha1:JW7SXZXZAJDG3KOZTMLQARRQRVMB5BGE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141216175.53_warc_CC-MAIN-20201130130840-20201130160840-00112.warc.gz\"}"} |
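The number facts above are all mechanically checkable; a short script (independent of the Numbermatics database) re-derives the divisor list, divisor sums, palindrome property, binary form, and powers:

```python
n = 626

divisors = [d for d in range(1, n + 1) if n % d == 0]
sigma = sum(divisors)            # sum of all divisors, sigma(n)
aliquot = sigma - n              # sum of proper divisors, s(n)
deficiency = n - aliquot         # n is deficient when aliquot < n
palindrome = str(n) == str(n)[::-1]
binary = format(n, 'b')
```

Running it confirms divisors [1, 2, 313, 626], sigma = 942, aliquot sum 316 (deficiency 310), the palindrome property, and binary 1001110010.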
https://www.clutchprep.com/chemistry/practice-problems/143810/what-is-the-correct-net-ionic-equation-including-all-coefficients-charges-and-ph | [
"# Problem: What is the correct net ionic equation, including all coefficients, charges, and phases, for the following set of reactants? Assume that the contribution of protons from H2SO4 is near 100 %.\n\nBa(OH)2(aq) + H2SO4(aq) →\n\nMany reactions take place in aqueous solution. When potential reactants are mixed, a reaction will occur if there is some driving force that favors the formation of products. It is often convenient to categorize reactions in terms of these driving forces: precipitate formation, in which an insoluble solid is formed; weak electrolyte formation, as in a neutralization reaction involving water; or transfer of electrons, as in a redox reaction. These reactions can be represented by full molecular equations, which contain all species in the reaction mixture, or by net ionic equations, which show only the species that actually undergo a change. The latter does not contain the spectator ions, which do not undergo a net change or do not take part in the reaction.\n\n###### FREE Expert Solution\n81% (419 ratings)",
null,
"View Complete Written Solution\n###### Problem Details"
] | [
null,
"https://cdn.clutchprep.com/assets/homework_help/written_solution_bg.png 1x",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9411868,"math_prob":0.9598979,"size":1520,"snap":"2021-43-2021-49","text_gpt3_token_len":316,"char_repetition_ratio":0.12532982,"word_repetition_ratio":0.0,"special_character_ratio":0.19342105,"punctuation_ratio":0.10830325,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9533221,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-05T23:07:22Z\",\"WARC-Record-ID\":\"<urn:uuid:c80d5509-4053-4c93-9254-58327551e826>\",\"Content-Length\":\"132822\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3ccc60b6-bb4a-4dc9-bade-c1f0cd88edb2>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe11e952-5bff-432a-926f-e60cf3ba54b4>\",\"WARC-IP-Address\":\"52.20.78.240\",\"WARC-Target-URI\":\"https://www.clutchprep.com/chemistry/practice-problems/143810/what-is-the-correct-net-ionic-equation-including-all-coefficients-charges-and-ph\",\"WARC-Payload-Digest\":\"sha1:BUD65W2RWVGL3KOPQ4Z4EWG2KI5Q3RR6\",\"WARC-Block-Digest\":\"sha1:NGGDK7M4AV3PZIII5AWWU7ZHBG4W42CR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363226.68_warc_CC-MAIN-20211205221915-20211206011915-00035.warc.gz\"}"} |
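Since BaSO4 is an insoluble precipitate and water is a weak electrolyte, every ion in this mixture reacts — there are no spectators — and the expected net ionic equation is the standard textbook result Ba2+(aq) + 2 OH-(aq) + 2 H+(aq) + SO4 2-(aq) → BaSO4(s) + 2 H2O(l) (stated here from general chemistry, not taken from the page's hidden worked solution). A small script verifying it is balanced in both atoms and charge:

```python
from collections import Counter

# Net ionic equation under test:
#   Ba^2+ + 2 OH^- + 2 H^+ + SO4^2- -> BaSO4(s) + 2 H2O(l)
# Each term is ((atom counts, charge), coefficient).
reactants = [((Counter(Ba=1), +2), 1),
             ((Counter(O=1, H=1), -1), 2),
             ((Counter(H=1), +1), 2),
             ((Counter(S=1, O=4), -2), 1)]
products = [((Counter(Ba=1, S=1, O=4), 0), 1),
            ((Counter(H=2, O=1), 0), 2)]

def side_totals(side):
    """Total atom counts and total charge for one side of the equation."""
    atoms, charge = Counter(), 0
    for (counts, q), coeff in side:
        for element, k in counts.items():
            atoms[element] += k * coeff
        charge += q * coeff
    return atoms, charge

balanced = side_totals(reactants) == side_totals(products)
```

Both sides total 1 Ba, 1 S, 6 O, 4 H with a net charge of 0, so the equation is balanced.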
http://web-stage.metacpan.org/pod/release/CHM/PDL-2.019/Basic/Core/Core.pm | [
"# NAME

PDL::Core - fundamental PDL functionality and vectorization/threading

# DESCRIPTION

Methods and functions for type conversion, PDL creation, threading, etc.

# SYNOPSIS

`````` use PDL::Core; # Normal routines
use PDL::Core ':Internal'; # Hairy routines``````

PDL provides vectorized operations via a built-in engine. Vectorization is called "threading" for historical reasons. The threading engine implements simple rules for each operation.

Each PDL object has a "shape" that is a generalized N-dimensional rectangle defined by a "dim list" of sizes in an arbitrary set of dimensions. A PDL with shape 2x3 has 6 elements and is said to be two-dimensional, or may be referred to as a 2x3-PDL. The dimensions are indexed numerically starting at 0, so a 2x3-PDL has a dimension 0 (or "dim 0") with size 2 and a dimension 1 (or "dim 1") with size 3.

PDL generalizes *all* mathematical operations with the notion of "active dims": each operator has zero or more active dims that are used in carrying out the operation. Simple scalar operations like scalar multiplication ('*') have 0 active dims. More complicated operators can have more active dims. For example, matrix multiplication ('x') has 2 active dims. Additional dims are automatically vectorized across -- e.g. multiplying a 2x5-PDL with a 2x5-PDL requires 10 simple multiplication operations, and yields a 2x5-PDL result.

In any PDL expression, the active dims appropriate for each operator are used starting at the 0 dim and working forward through the dim list of each object. All additional dims after the active dims are "thread dims".
The thread dims do not have to agree exactly: they are coerced to agree according to simple rules:\n\n• Null PDLs match any dim list (see below).\n\n• Dims with sizes other than 1 must all agree in size.\n\n• Dims of size 1 are expanded as necessary.\n\n• Missing dims are expanded appropriately.\n\nThe \"size 1\" rule implements \"generalized scalar\" operation, by analogy to scalar multiplication. The \"missing dims\" rule acknowledges the ambiguity between a missing dim and a dim of size 1.\n\n## Null PDLs\n\nPDLs on the left-hand side of assignment can have the special value \"Null\". A null PDL has no dim list and no set size; its shape is determined by the computed shape of the expression being assigned to it. Null PDLs contain no values and can only be assigned to. When assigned to (e.g. via the `.=` operator), they cease to be null PDLs.\n\nTo create a null PDL, use `PDL->null()`.\n\n## Empty PDLs\n\nPDLs can represent the empty set using \"structured Empty\" variables. An empty PDL is not a null PDL.\n\nAny dim of a PDL can be set explicitly to size 0. If so, the PDL contains zero values (because the total number of values is the product of all the sizes in the PDL's shape or dimlist).\n\nScalar PDLs are zero-dimensional and have no entries in the dim list, so they cannot be empty. 1-D and higher PDLs can be empty. Empty PDLs are useful for set operations, and are most commonly encountered in the output from selection operators such as which and whichND. Not all empty PDLs have the same threading properties -- e.g. a 2x0-PDL represents a collection of 2-vectors that happens to contain no elements, while a simple 0-PDL represents a collection of scalar values (that also happens to contain no elements).\n\nNote that 0 dims are not adjustable via the threading rules -- a dim with size 0 can only match a corresponding dim of size 0 or 1.\n\nVersions of PDL through 2.4.10 have some irregularity with threading and assignments. 
Currently the threading engine performs a full expansion of both sides of the computed assignment operator `.=` (which assigns values to a pre-existing PDL). This leads to counter-intuitive behavior in some cases:\n\n• Generalized scalars and computed assignment\n\nIf the PDL on the left-hand side of `.=` has a dim of size 1, it can be treated as a generalized scalar, as in:\n\n`````` \\$a = sequence(2,3);\n\\$b = zeroes(1,3);\n\\$b .= \\$a;``````\n\nIn this case, `\\$b` is automatically treated as a 2x3-PDL during the threading operation, but half of the values from `\\$a` silently disappear. The output is, as Kernighan and Ritchie would say, \"undefined\".\n\nFurther, if the value on the right of `.=` is empty, then `.=` becomes a silent no-op:\n\n`````` \\$a = zeroes(0);\n\\$b = zeroes(1);\n\\$b .= \\$a+1;\nprint \\$b;``````\n\nwill print ``. In this case, \"\\$a+1\" is empty, and \"\\$b\" is a generalized scalar that is adjusted to be empty, so the assignment is carried out for zero elements (a no-op).\n\nBoth of these behaviors are considered harmful and should not be relied upon: they may be patched away in a future version of PDL.\n\n• Empty PDLs and generalized scalars\n\nGeneralized scalars (PDLs with a dim of size 1) can match any size in the corresponding dim, including 0. 
Thus,\n\n`````` \\$a = ones(2,0);\n\\$b = sequence(2,1);\n\\$c = \\$a * \\$b;\nprint \\$c;``````\n\nprints `Empty[2,0]`.\n\nThis behavior is counterintuitive but desirable, and will be preserved in future versions of PDL.\n\n# VARIABLES\n\nThese are important variables of global scope and are placed in the PDL namespace.\n\n### `\\$PDL::debug`\n\nWhen true, PDL debugging information is printed.\n\n### `\\$PDL::verbose`\n\nWhen true, PDL functions provide chatty information.\n\n### `\\$PDL::use_commas`\n\nWhether to insert commas when printing pdls\n\n### `\\$PDL::floatformat`, `\\$PDL::doubleformat`, `\\$PDL::indxformat`\n\nThe default print format for floats, doubles, and indx values, respectively. The default default values are:\n\n`````` \\$PDL::floatformat = \"%7g\";\n\\$PDL::doubleformat = \"%10.8g\";\n\\$PDL::indxformat = \"%12d\";``````\n\n### `\\$PDL::undefval`\n\nThe value to use instead of `undef` when creating pdls.\n\n### `\\$PDL::toolongtoprint`\n\nThe maximal size pdls to print (defaults to 10000 elements)\n\n# FUNCTIONS\n\n## barf\n\nStandard error reporting routine for PDL.\n\n`barf()` is the routine PDL modules should call to report errors. This is because `barf()` will report the error as coming from the correct line in the module user's script rather than in the PDL module.\n\nFor now, barf just calls Carp::confess()\n\nRemember `barf()` is your friend. 
*Use* it!\n\nAt the perl level:\n\n`` barf(\"User has too low an IQ!\");``\n\nIn C or XS code:\n\n`` barf(\"You have made %d errors\", count);``\n\nNote: this is one of the few functions ALWAYS exported by PDL::Core\n\n## pdl\n\nPDL constructor - creates new piddle from perl scalars/arrays, piddles, and strings\n\n`````` \\$double_pdl = pdl(SCALAR|ARRAY REFERENCE|ARRAY|STRING); # default type\n\\$type_pdl = pdl(PDL::Type,SCALAR|ARRAY REFERENCE|ARRAY|STRING);``````\n`````` \\$a = pdl [1..10]; # 1D array\n\\$a = pdl ([1..10]); # 1D array\n\\$a = pdl (1,2,3,4); # Ditto\n\\$b = pdl [[1,2,3],[4,5,6]]; # 2D 3x2 array\n\\$b = pdl \"[[1,2,3],[4,5,6]]\"; # Ditto (slower)\n\\$b = pdl \"[1 2 3; 4 5 6]\"; # Ditto\n\\$b = pdl q[1 2 3; 4 5 6]; # Ditto, using the q quote operator\n\\$b = pdl \"1 2 3; 4 5 6\"; # Ditto, less obvious, but still works\n\\$b = pdl 42 # 0-dimensional scalar\n\\$c = pdl \\$a; # Make a new copy\n\n\\$u = pdl ushort(), 42 # 0-dimensional ushort scalar\n\\$b = pdl(byte(),[[1,2,3],[4,5,6]]); # 2D byte piddle\n\n\\$n = pdl indx(), [1..5]; # 1D array of indx values\n\\$n = pdl indx, [1..5]; # ... can leave off parens\n\\$n = indx( [1..5] ); # ... still the same!\n\n\\$a = pdl([1,2,3],[4,5,6]); # 2D\n\\$a = pdl([1,2,3],[4,5,6]); # 2D``````\n\nNote the last two are equivalent - a list is automatically converted to a list reference for syntactic convenience. i.e. you can omit the outer `[]`\n\nYou can mix and match arrays, array refs, and PDLs in your argument list, and `pdl` will sort them out. 
You get back a PDL whose last (slowest running) dim runs across the top level of the list you hand in, and whose first (fastest running) dim runs across the deepest level that you supply.\n\nAt the moment, you cannot mix and match those arguments with string arguments, though we can't imagine a situation in which you would really want to do that.\n\nThe string version of pdl also allows you to use the strings `bad`, `inf`, and `nan`, and it will insert the values that you mean (and set the bad flag if you use `bad`). You can mix and match case, though you shouldn't. Here are some examples:\n\n`````` \\$bad = pdl q[1 2 3 bad 5 6]; # Set fourth element to the bad value\n\\$bad = pdl q[1 2 inf bad 5]; # now third element is IEEE infinite value\n\\$bad = pdl q[nan 2 inf -inf]; # first value is IEEE nan value``````\n\nThe default constructor uses IEEE double-precision floating point numbers. You can use other types, but you will get a warning if you try to use `nan` with integer types (it will be replaced with the `bad` value) and you will get a fatal error if you try to use `inf`.\n\nThrowing a PDL into the mix has the same effect as throwing in a list ref:\n\n`` pdl(pdl(1,2),[3,4])``\n\nis the same as\n\n`` pdl([1,2],[3,4]).``\n\nAll of the dimensions in the list are \"padded-out\" with undefval to meet the widest dim in the list, so (e.g.)\n\n`` \\$a = pdl([[1,2,3],])``\n\ngives you the same answer as\n\n`` \\$a = pdl([[1,2,3],[2,undef,undef]]);``\n\n`pdl()` is a functional synonym for the 'new' constructor, e.g.:\n\n`` \\$x = new PDL [1..10];``\n\nIn order to control how undefs are handled in converting from perl lists to PDLs, one can set the variable `\\$PDL::undefval`. 
For example:

`````` \$foo = [[1,2,undef],[undef,3,4]];
\$PDL::undefval = -999;
\$f = pdl \$foo;
print \$f
[
[ 1 2 -999]
[-999 3 4]
]``````

`\$PDL::undefval` defaults to zero.

As a final note, if you include an Empty PDL in the list of objects to construct into a PDL, it is kept as a placeholder pane -- so if you feed in (say) 7 objects, you get a size of 7 in the 0th dim of the output PDL. The placeholder panes are completely padded out. But if you feed in only a single Empty PDL, you get back the Empty PDL (no padding).

## null

Returns a 'null' piddle.

`` \$x = null;``

`null()` has a special meaning to PDL::PP. It is used to flag a special kind of empty piddle, which can grow to appropriate dimensions to store a result (as opposed to storing a result in an existing piddle).

`````` pdl> sumover sequence(10,10), \$ans=null;p \$ans
[45 145 245 345 445 545 645 745 845 945]``````

## nullcreate

Returns a 'null' piddle.

`` \$x = PDL->nullcreate(\$arg)``

This is a routine used by many of the threading primitives (i.e. sumover, minimum, etc.) to generate a null piddle for the function's output that will behave properly for derived (or subclassed) PDL objects.

For the above usage: If `\$arg` is a PDL, or a derived PDL, then `\$arg->null` is returned. If `\$arg` is a scalar (i.e.
a zero-dimensional PDL) then `PDL->null` is returned.\n\n`````` PDL::Derived->nullcreate(10)\nreturns PDL::Derived->null.\nPDL->nullcreate(\\$pdlderived)\nreturns \\$pdlderived->null.``````\n\n## nelem\n\nReturn the number of elements in a piddle\n\n`` \\$n = nelem(\\$piddle); \\$n = \\$piddle->nelem;``\n`` \\$mean = sum(\\$data)/nelem(\\$data);``\n\n## dims\n\nReturn piddle dimensions as a perl list\n\n`` @dims = \\$piddle->dims; @dims = dims(\\$piddle);``\n`````` pdl> p @tmp = dims zeroes 10,3,22\n10 3 22``````\n\n## shape\n\nReturn piddle dimensions as a piddle\n\n`` \\$shape = \\$piddle->shape; \\$shape = shape(\\$piddle);``\n`````` pdl> p \\$shape = shape zeroes 10,3,22\n[10 3 22]``````\n\n## ndims\n\nReturns the number of dimensions in a piddle. Alias for getndims.\n\n## getndims\n\nReturns the number of dimensions in a piddle\n\n`` \\$ndims = \\$piddle->getndims;``\n`````` pdl> p zeroes(10,3,22)->getndims\n3``````\n\n## dim\n\nReturns the size of the given dimension of a piddle. Alias for getdim.\n\n## getdim\n\nReturns the size of the given dimension.\n\n`` \\$dim0 = \\$piddle->getdim(0);``\n`````` pdl> p zeroes(10,3,22)->getdim(1)\n3``````\n\nNegative indices count from the end of the dims array. Indices beyond the end will return a size of 1. This reflects the idea that any pdl is equivalent to an infinitely dimensional array in which only a finite number of dimensions have a size different from one. For example, in that sense a 3D piddle of shape [3,5,2] is equivalent to a [3,5,2,1,1,1,1,1,....] piddle. Accordingly,\n\n`` print \\$a->getdim(10000);``\n\nwill print 1 for most practically encountered piddles.\n\n## topdl\n\nalternate piddle constructor - ensures arg is a piddle\n\n`` \\$a = topdl(SCALAR|ARRAY REFERENCE|ARRAY);``\n\nThe difference between pdl() and `topdl()` is that the latter will just 'fall through' if the argument is already a piddle. 
It will return a reference and NOT a new copy.\n\nThis is particularly useful if you are writing a function which is doing some fiddling with internals and assumes a piddle argument (e.g. for method calls). Using `topdl()` will ensure nothing breaks if passed with '2'.\n\nNote that `topdl()` is not exported by default (see example below for usage).\n\n`````` use PDL::Core ':Internal'; # use the internal routines of\n# the Core module\n\n\\$a = topdl 43; # \\$a is piddle with value '43'\n\\$b = topdl \\$piddle; # fall through\n\\$a = topdl (1,2,3,4); # Convert 1D array``````\n\n## get_datatype\n\nInternal: Return the numeric value identifying the piddle datatype\n\n`` \\$x = \\$piddle->get_datatype;``\n\nMainly used for internal routines.\n\nNOTE: get_datatype returns 'just a number' not any special type object, unlike type.\n\n## howbig\n\nReturns the sizeof a piddle datatype in bytes.\n\nNote that `howbig()` is not exported by default (see example below for usage).\n\n`````` use PDL::Core ':Internal'; # use the internal routines of\n# the Core module\n\n\\$size = howbig(\\$piddle->get_datatype);``````\n\nMainly used for internal routines.\n\nNOTE: NOT a method! This is because get_datatype returns 'just a number' not any special object.\n\n`````` pdl> p howbig(ushort([1..10])->get_datatype)\n2``````\n\n## get_dataref\n\nReturn the internal data for a piddle, as a perl SCALAR ref.\n\nMost piddles hold their internal data in a packed perl string, to take advantage of perl's memory management. This gives you direct access to the string, which is handy when you need to manipulate the binary data directly (e.g. for file I/O). If you modify the string, you'll need to call upd_data afterward, to make sure that the piddle points to the new location of the underlying perl variable.\n\nCalling `get_dataref` automatically physicalizes your piddle (see make_physical). 
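As a rough sketch of the intended round-trip (an illustrative example, not taken from the original docs -- the piddle contents and byte offset are assumptions), one might poke a raw byte and then resynchronise:

```perl
use PDL;

\$a = sequence byte, 4;          # held internally as a packed 4-byte string
\$ref = \$a->get_dataref;         # perl SCALAR ref to that packed string
substr(\$\$ref, 0, 1) = chr(42);  # overwrite the first byte directly
\$a->upd_data;                   # resync \$a with the (possibly moved) string
print \$a;                       # first element is now 42
```

Note that the string is edited in place here, so its length never changes; growing or shrinking it would require a matching `reshape`, as the warnings below explain.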
You definitely don't want to do anything to the SV to truncate or deallocate the string, unless you correspondingly call reshape to make the PDL match its new data dimension.

You definitely don't want to use get_dataref unless you know what you are doing (or are trying to find out): you can end up scrozzling memory if you shrink or eliminate the string representation of the variable. Here be dragons.

## upd_data

Update the data pointer in a piddle to match its perl SV.

This is useful if you've been monkeying with the packed string representation of the PDL, which you probably shouldn't be doing anyway. (see get_dataref.)

## threadids

Returns the piddle thread IDs as a perl list

Note that `threadids()` is not exported by default (see example below for usage).

`````` use PDL::Core ':Internal'; # use the internal routines of
# the Core module

@ids = threadids \$piddle;``````

## doflow

Turn on/off dataflow

`` \$x->doflow; doflow(\$x);``

## flows

Whether or not a piddle is indulging in dataflow

`` something if \$x->flows; \$hmm = flows(\$x);``

## new

new piddle constructor method

`` \$x = PDL->new(SCALAR|ARRAY|ARRAY REF|STRING);``

`````` \$x = PDL->new(42); # new from a Perl scalar
\$x = new PDL 42; # ditto
\$y = PDL->new(@list_of_vals); # new from Perl list
\$y = new PDL @list_of_vals; # ditto
\$z = PDL->new(\@list_of_vals); # new from Perl list reference
\$w = PDL->new("[1 2 3]"); # new from Perl string, using
# Matlab constructor syntax``````

Constructs piddle from perl numbers and lists and strings with Matlab/Octave style constructor syntax.

The string input is fairly versatile though not performance optimized. The goal is to make it easy to copy and paste code from PDL output and to offer a familiar Matlab syntax for piddle construction. As of May, 2010, it is a new feature, so feel free to report bugs or suggest new features.
See documentation for pdl for more examples of usage.

## copy

Make a physical copy of a piddle

`` \$new = \$old->copy;``

Since `\$new = \$old` just makes a new reference, the `copy` method is provided to allow real independent copies to be made.

## hdr_copy

Return an explicit copy of the header of a PDL.

hdr_copy is just a wrapper for the internal routine _hdr_copy, which takes the hash ref itself. That is the routine which is used to make copies of the header during normal operations if the hdrcpy() flag of a PDL is set.

General-purpose deep copies are expensive in perl, so some simple optimization happens:

If the header is a tied array or a blessed hash ref with an associated method called `copy`, then that ->copy method is called. Otherwise, all elements of the hash are explicitly copied. References are recursively deep copied.

This routine seems to leak memory.

## unwind

Return a piddle which is the same as the argument except that all threadids have been removed.

`` \$y = \$x->unwind;``

## make_physical

Make sure the data portion of a piddle can be accessed from XS code.

`````` \$a->make_physical;
\$a->call_my_xs_method;``````

Ensures that a piddle gets its own allocated copy of data. This obviously implies that there are certain piddles which do not have their own data. These are so called virtual piddles that make use of the vaffine optimisation (see PDL::Indexing). They do not have their own copy of data but instead store only access information to some (or all) of another piddle's data.

Note: this function should not be used unless absolutely necessary since otherwise memory requirements might be severely increased.
Instead of writing your own XS code with the need to call `make_physical` you might want to consider using the PDL preprocessor (see PDL::PP) which can be used to transparently access virtual piddles without the need to physicalise them (though there are exceptions).\n\n## dummy\n\nInsert a 'dummy dimension' of given length (defaults to 1)\n\nNo relation to the 'Dungeon Dimensions' in Discworld!\n\nNegative positions specify relative to last dimension, i.e. `dummy(-1)` appends one dimension at end, `dummy(-2)` inserts a dummy dimension in front of the last dim, etc.\n\nIf you specify a dimension position larger than the existing dimension list of your PDL, the PDL gets automagically padded with extra dummy dimensions so that you get the dim you asked for, in the slot you asked for. This could cause you trouble if, for example, you ask for \\$a->dummy(5000,1) because \\$a will get 5,000 dimensions, each of rank 1.\n\nBecause padding at the beginning of the dimension list moves existing dimensions from slot to slot, it's considered unsafe, so automagic padding doesn't work for large negative indices -- only for large positive indices.\n\n`` \\$y = \\$x->dummy(\\$position[,\\$dimsize]);``\n`````` pdl> p sequence(3)->dummy(0,3)\n[\n[0 0 0]\n[1 1 1]\n[2 2 2]\n]\n\npdl> p sequence(3)->dummy(3,2)\n[\n[\n[0 1 2]\n]\n[\n[0 1 2]\n]\n]\n\npdl> p sequence(3)->dummy(-3,2)\nRuntime error: PDL: For safety, <pos> < -(dims+1) forbidden in dummy. min=-2, pos=-3``````\n\n## clump\n\n\"clumps\" several dimensions into one large dimension\n\nIf called with one argument `\\$n` clumps the first `\\$n` dimensions into one. For example, if `\\$a` has dimensions `(5,3,4)` then after\n\n`` \\$b = \\$a->clump(2); # Clump 2 first dimensions``\n\nthe variable `\\$b` will have dimensions `(15,4)` and the element `\\$b->at(7,3)` refers to the element `\\$a->at(1,2,3)`.\n\nUse `clump(-1)` to flatten a piddle. 
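For instance, a small illustrative sketch (not from the original docs) of flattening with `clump(-1)`:

```perl
use PDL;

\$a = sequence 5,3,4;   # 3D piddle with 5*3*4 = 60 elements
\$b = \$a->clump(-1);    # all dims clumped into one: a 1D piddle
print \$b->info;        # PDL: Double D [60]
```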
The method flat is provided as a convenient alias.

Clumping with a negative dimension in general leaves that many dimensions behind -- e.g. clump(-2) clumps all of the first few dimensions into a single one, leaving a 2-D piddle.

If `clump` is called with an index list with more than one element it is treated as a list of dimensions that should be clumped together into one. The resulting clumped dim is placed at the position of the lowest index in the list. This convention ensures that `clump` does the expected thing in the usual cases. The following example demonstrates typical usage:

`````` \$a = sequence 2,3,3,3,5; # 5D piddle
\$c = \$a->clump(1..3); # clump all the dims 1 to 3 into one
print \$c->info; # resulting 3D piddle has clumped dim at pos 1
PDL: Double D [2,27,5]``````

## thread_define

define functions that support threading at the perl level

`````` thread_define 'tline(a(n);b(n))', over {
line \$_[0], \$_[1]; # make line compliant with threading
};``````

`thread_define` provides some support for threading (see PDL::Indexing) at the perl level. It allows you to do things for which you normally would have resorted to PDL::PP (see PDL::PP); however, it is most useful to wrap existing perl functions so that the new routine supports PDL threading.

`thread_define` is used to define new threading aware functions. Its first argument is a symbolic representation of the new function to be defined. The string is composed of the name of the new function followed by its signature (see PDL::Indexing and PDL::PP) in parentheses. The second argument is a subroutine that will be called with the slices of the actual runtime arguments as specified by its signature.
Correct dimension sizes and minimal number of dimensions for all arguments will be checked (assuming the rules of PDL threading, see PDL::Indexing).

The actual work is done by the `signature` class which parses the signature string, does runtime dimension checks and the routine `threadover` that generates the loop over all appropriate slices of pdl arguments and creates pdls as needed.

Similar to `pp_def` and its `OtherPars` option it is possible to define the new function so that it accepts normal perl args as well as piddles. You do this by using the `NOtherPars` parameter in the signature. The number of `NOtherPars` specified will be passed unaltered into the subroutine given as the second argument of `thread_define`. Let's illustrate this with an example:

`````` PDL::thread_define 'triangles(inda();indb();indc()), NOtherPars => 2',
PDL::over {
\${\$_[3]} .= \$_[4].join(',',map {\$_->at} @_[0..2]).",-1,\n";
};``````

This defines a function `triangles` that takes 3 piddles as input plus 2 arguments which are passed into the routine unaltered. This routine is used to collect lists of indices into a perl scalar that is passed by reference. Each line is preceded by a prefix passed as `\$_[4]`. Here is typical usage:

`````` \$txt = '';
triangles(pdl(1,2,3),pdl(1),pdl(0),\\$txt," "x10);
print \$txt;``````

resulting in the following output

`````` 1,1,0,-1,
2,1,0,-1,
3,1,0,-1,``````

which is used in PDL::Graphics::TriD::VRML to generate VRML output.

Currently, this is probably not much more than a POP (proof of principle) but is hoped to be useful enough for some real life work.

Check PDL::PP for the format of the signature.
Currently, the `[t]` qualifier and all type qualifiers are ignored.

## thread

Use explicit threading over specified dimensions (see also PDL::Indexing)

`` \$b = \$a->thread(\$dim,[\$dim1,...])``

`````` \$a = zeroes 3,4,5;
\$b = \$a->thread(2,0);``````

## diagonal

Returns the multidimensional diagonal over the specified dimensions.

`` \$d = \$x->diagonal(dim1, dim2,...)``

`````` pdl> \$a = zeroes(3,3,3);
pdl> (\$b = \$a->diagonal(0,1))++;
pdl> p \$a
[
[
[1 0 0]
[0 1 0]
[0 0 1]
]
[
[1 0 0]
[0 1 0]
[0 0 1]
]
[
[1 0 0]
[0 1 0]
[0 0 1]
]
]``````

## thread1

Explicit threading over specified dims using thread id 1.

`` \$xx = \$x->thread1(3,1)``

`` Wibble``

## thread2

Explicit threading over specified dims using thread id 2.

`` \$xx = \$x->thread2(3,1)``

`` Wibble``

## thread3

Explicit threading over specified dims using thread id 3.

`` \$xx = \$x->thread3(3,1)``

`` Wibble``

## sever

sever any links of this piddle to parent piddles

In PDL it is possible for a piddle to be just another view into another piddle's data. In that case we call this piddle a virtual piddle and the original piddle owning the data its parent. In other languages these alternate views sometimes run by names such as alias or smart reference.

Typical functions that return such piddles are `slice`, `xchg`, `index`, etc. Sometimes, however, you would like to separate the virtual piddle from its parent's data and just give it a life of its own (so that manipulation of its data doesn't change the parent). This is simply achieved by using `sever`. For example,

`````` \$a = \$pdl->index(pdl(0,3,7))->sever;
\$a++; # important: \$pdl is not modified!``````

In many (but not all) circumstances it acts therefore similar to copy. However, in general performance is better with `sever` and secondly, `sever` doesn't lead to futile copying when used on piddles that already have their own data. On the other hand, if you really want to make sure to work on a copy of a piddle use copy.

`````` \$a = zeroes(20);
\$a->sever; # NOOP since \$a is already its own boss!``````

Again note: `sever` is not the same as copy! For example,

`````` \$a = zeroes(1); # \$a does not have a parent, i.e.
it is not a slice etc
\$b = \$a->sever; # \$b is now pointing to the same piddle as \$a
\$b++;
print \$a;
[1]
``````

but

`````` \$a = zeroes(1);
\$b = \$a->copy; # \$b is now pointing to a new piddle
\$b++;
print \$a;
[0]
``````

## info

Return formatted information about a piddle.

`` \$x->info(\$format_string);``

`` print \$x->info("Type: %T Dim: %-15D State: %S");``

Returns a string with info about a piddle. Takes an optional argument to specify the format of information a la sprintf. Format specifiers are in the form `%<width><letter>` where the width is optional and the letter is one of

- `T` : Type
- `D` : Formatted Dimensions
- `F` : Dataflow status
- `S` : Some internal flags (P=physical,V=Vaffine,C=changed,B=may contain bad data)
- `C` : Class of this piddle, i.e. `ref \$pdl`
- `A` : Address of the piddle struct as a unique identifier
- `M` : Calculated memory consumption of this piddle's data area

## approx

test for approximately equal values (relaxed `==`)

`````` # ok if all corresponding values in
# piddles are within 1e-8 of each other
print "ok\n" if all approx \$a, \$b, 1e-8;``````

`approx` is a relaxed form of the `==` operator and often more appropriate for floating point types (`float` and `double`).

Usage:

`` \$res = approx \$a, \$b [, \$eps]``

The optional parameter `\$eps` is remembered across invocations and initially set to 1e-6, e.g.

`````` approx \$a, \$b; # last \$eps used (1e-6 initially)
approx \$a, \$b, 1e-10; # 1e-10
approx \$a, \$b; # also 1e-10``````

## mslice

Convenience interface to slice, allowing easier inclusion of dimensions in perl code.

`` \$a = \$x->mslice(...);``

`````` # below is the same as \$x->slice("5:7,:,3:4:2")
\$a = \$x->mslice([5,7],X,[3,4,2]);``````

## nslice_if_pdl

If `\$self` is a PDL, then calls `slice` with all but the last argument, otherwise \$self->(\$_[-1]) is called where \$_[-1] is the original
argument string found during PDL::NiceSlice filtering.

DEVELOPER'S NOTE: this routine is found in Core.pm.PL but would be better placed in Slices/slices.pd. It is likely to be moved there and/or changed to "slice_if_pdl" for PDL 3.0.

`` \$a = \$x->nslice_if_pdl(...,'(args)');``

## nslice

`nslice` was an internally used interface for PDL::NiceSlice, but is now merely a springboard to PDL::Slice. It is deprecated and likely to disappear in PDL 3.0.

## inplace

Flag a piddle so that the next operation is done 'in place'

`` somefunc(\$x->inplace); somefunc(inplace \$x);``

In most cases one likes to use the syntax `\$y = f(\$x)`, however in many cases the operation `f()` can be done correctly 'in place', i.e. without making a new copy of the data for output. To make it easy to use this, we write `f()` in such a way that it operates in-place, and use `inplace` to hint that a new copy should be disabled. This also makes for clear syntax.

Obviously this will not work for all functions, and if in doubt see the function's documentation. However one can assume this is true for all elemental functions (i.e. those which just operate array element by array element like `log10`).

`````` pdl> \$x = xvals zeroes 10;
pdl> log10(inplace \$x)
pdl> p \$x
[-inf 0 0.30103 0.47712125 0.60205999 0.69897 0.77815125 0.84509804 0.90308999 0.95424251]``````

## is_inplace

Test the in-place flag on a piddle

`````` \$out = (\$in->is_inplace) ? \$in : zeroes(\$in);
\$in->set_inplace(0)``````

Provides access to the inplace hint flag, within the perl milieu. That way functions you write can be inplace aware... If given an argument the inplace flag will be set or unset depending on the value at the same time. Can be used for shortcut tests that delete the inplace flag while testing:

`` \$out = (\$in->is_inplace(0)) ?
\\$in : zeroes(\\$in); # test & unset!``\n\n## set_inplace\n\nSet the in-place flag on a piddle\n\n`````` \\$out = (\\$in->is_inplace) ? \\$in : zeroes(\\$in);\n\\$in->set_inplace(0);``````\n\nProvides access to the inplace hint flag, within the perl millieu. Useful mainly for turning it OFF, as inplace turns it ON more conveniently.\n\n## new_or_inplace\n\n`````` \\$a = new_or_inplace(shift());\n\\$a = new_or_inplace(shift(),\\$preferred_type);``````\n\nReturn back either the argument pdl or a copy of it depending on whether it be flagged in-place or no. Handy for building inplace-aware functions.\n\nIf you specify a preferred type (must be one of the usual PDL type strings, a list ref containing several of them, or a string containing several of them), then the copy is coerced into the first preferred type listed if it is not already one of the preferred types.\n\nNote that if the inplace flag is set, no coersion happens even if you specify a preferred type.\n\n## new_from_specification\n\nInternal method: create piddle by specification\n\nThis is the argument processing method called by zeroes and some other functions which constructs piddles from argument lists of the form:\n\n`` [type], \\$nx, \\$ny, \\$nz,...``\n\nFor `\\$nx`, `\\$ny`, etc. 0 and 1D piddles are allowed. Giving those has the same effect as if saying `\\$arg->list`, e.g.\n\n`` 1, pdl(5,2), 4``\n\nis equivalent to\n\n`` 1, 5, 2, 4``\n\nNote, however, that in all functions using `new_from_specification` calling `func \\$piddle` will probably not do what you want. So to play safe use (e.g. 
with zeroes)

`` \$pdl = zeroes \$dimpdl->list;``

Calling

`` \$pdl = zeroes \$dimpdl;``

will rather be equivalent to

`` \$pdl = zeroes \$dimpdl->dims;``

However,

`` \$pdl = zeroes ushort, \$dimpdl;``

will again do what you intended since it is interpreted as if you had said

`` \$pdl = zeroes ushort, \$dimpdl->list;``

This is unfortunate and confusing but no good solution seems obvious that would not break existing scripts.

## isnull

Test whether a piddle is null

`````` croak("Input piddle mustn't be null!")
if \$input_piddle->isnull;``````

This function returns 1 if the piddle is null, zero if it is not. The purpose of null piddles is to "tell" any PDL::PP methods to allocate new memory for an output piddle, but only when that PDL::PP method is called in full-arg form. Of course, there's no reason you couldn't commandeer the special value for your own purposes, for which this test function would prove most helpful. But in general, you shouldn't need to test for a piddle's nullness.

## isempty

Test whether a piddle is empty

`` print "The piddle has zero dimension\n" if \$pdl->isempty;``

This function returns 1 if the piddle has zero elements. This is useful in particular when using the indexing function `which`.
In the case of no match to a specified criterion, the returned piddle has zero dimension.\n\n`````` pdl> \\$a=sequence(10)\npdl> \\$i=which(\\$a < -1)\npdl> print \"I found no matches!\\n\" if (\\$i->isempty);\nI found no matches!``````\n\nNote that having zero elements is rather different from the concept of being a null piddle, see the PDL::FAQ and PDL::Indexing manpages for discussions of this.\n\n## zeroes\n\nconstruct a zero filled piddle from dimension list or template piddle.\n\nVarious forms of usage,\n\n(i) by specification or (ii) by template piddle:\n\n`````` # usage type (i):\n\\$a = zeroes([type], \\$nx, \\$ny, \\$nz,...);\n\\$a = PDL->zeroes([type], \\$nx, \\$ny, \\$nz,...);\n\\$a = \\$pdl->zeroes([type], \\$nx, \\$ny, \\$nz,...);\n# usage type (ii):\n\\$a = zeroes \\$b;\n\\$a = \\$b->zeroes\nzeroes inplace \\$a; # Equivalent to \\$a .= 0;\n\\$a->inplace->zeroes; # \"\"``````\n`````` pdl> \\$z = zeroes 4,3\npdl> p \\$z\n[\n[0 0 0 0]\n[0 0 0 0]\n[0 0 0 0]\n]\npdl> \\$z = zeroes ushort, 3,2 # Create ushort array\n[ushort() etc. with no arg returns a PDL::Types token]``````\n\nSee also new_from_specification for details on using piddles in the dimensions list.\n\n## zeros\n\nconstruct a zero filled piddle (see zeroes for usage)\n\n## ones\n\nconstruct a one filled piddle\n\n`````` \\$a = ones([type], \\$nx, \\$ny, \\$nz,...);\netc. (see 'zeroes')``````\n`` see zeroes() and add one``\n\nSee also new_from_specification for details on using piddles in the dimensions list.\n\n## reshape\n\nChange the shape (i.e. dimensions) of a piddle, preserving contents.\n\n`` \\$x->reshape(NEWDIMS); reshape(\\$x, NEWDIMS);``\n\nThe data elements are preserved, obviously they will wrap differently and get truncated if the new array is shorter. 
If the new array is longer it will be zero-padded.

**Potential incompatibility with earlier versions of PDL:** If the list of `NEWDIMS` is empty `reshape` will just drop all dimensions of size 1 (preserving the number of elements):

`````` \$a = sequence(3,4,5);
\$b = \$a(1,3);
\$b->reshape();
print \$b->info;
PDL: Double D [5]``````

Dimensions of size 1 will also be dropped if `reshape` is invoked with the argument -1:

`` \$b = \$a->reshape(-1);``

As opposed to `reshape` without arguments, `reshape(-1)` preserves dataflow:

`````` \$a = ones(2,1,2);
\$b = \$a(0)->reshape(-1);
\$b++;
print \$a;
[
[
[2 1]
]
[
[2 1]
]
]``````

Important: Piddles are changed inplace!

Note: If `\$x` is connected to any other PDL (e.g. if it is a slice) then the connection is first severed.

`````` pdl> \$x = sequence(10)
pdl> reshape \$x,3,4; p \$x
[
[0 1 2]
[3 4 5]
[6 7 8]
[9 0 0]
]
pdl> reshape \$x,5; p \$x
[0 1 2 3 4]``````

## squeeze

eliminate all singleton dimensions (dims of size 1)

`` \$b = \$a(0,0)->squeeze;``

Alias for `reshape(-1)`. Removes all singleton dimensions and preserves dataflow. A more concise interface is provided by PDL::NiceSlice via modifiers:

`````` use PDL::NiceSlice;
\$b = \$a(0,0;-); # same as \$a(0,0)->squeeze``````

## flat

flatten a piddle (alias for `\$pdl->clump(-1)`)

`` \$srt = \$pdl->flat->qsort;``

Useful method to make a 1D piddle from an arbitrarily sized input piddle. Data flows back and forth as usual with slicing routines.
Falls through if argument already <= 1D.\n\n## convert\n\nGeneric datatype conversion function\n\n`` \\$y = convert(\\$x, \\$newtypenum);``\n`````` \\$y = convert \\$x, long\n\\$y = convert \\$x, ushort``````\n\n`\\$newtype` is a type number, for convenience they are returned by `long()` etc when called without arguments.\n\n## Datatype_conversions\n\nbyte|short|ushort|long|indx|longlong|float|double (shorthands to convert datatypes)\n\n`````` \\$y = double \\$x; \\$y = ushort [1..10];\n# all of the above listed shorthands behave similarly``````\n\nWhen called with a piddle argument, they convert to the specific datatype.\n\nWhen called with a numeric, list, listref, or string argument they construct a new piddle. This is a convenience to avoid having to be long-winded and say `\\$x = long(pdl(42))`\n\nThus one can say:\n\n`````` \\$a = float(1,2,3,4); # 1D\n\\$a = float q[1 2 3; 4 5 6]; # 2D\n\\$a = float([1,2,3],[4,5,6]); # 2D\n\\$a = float([[1,2,3],[4,5,6]]); # 2D``````\n\nNote the last three give identical results, and the last two are exactly equivalent - a list is automatically converted to a list reference for syntactic convenience. i.e. you can omit the outer `[]`\n\nWhen called with no arguments, these functions return a special type token. 
This allows syntactical sugar like:

`` \$x = ones byte, 1000,1000;``

This example creates a large piddle directly as byte datatype in order to save memory.

In order to control how undefs are handled in converting from perl lists to PDLs, one can set the variable `\$PDL::undefval`; see the function pdl() for more details.

`````` pdl> p \$x=sqrt float [1..10]
[1 1.41421 1.73205 2 2.23607 2.44949 2.64575 2.82843 3 3.16228]
pdl> p byte \$x
[1 1 1 2 2 2 2 2 3 3]``````

## byte

Convert to byte datatype

## short

Convert to short datatype

## ushort

Convert to ushort datatype

## long

Convert to long datatype

## indx

Convert to indx datatype

## longlong

Convert to longlong datatype

## float

Convert to float datatype

## double

Convert to double datatype

## type

return the type of a piddle as a blessed type object

A convenience function for use with the piddle constructors, e.g.

`````` \$b = PDL->zeroes(\$a->type,\$a->dims,3);
die "must be float" unless \$a->type == float;``````

See also the discussion of the `PDL::Type` class in PDL::Types. Note that the `PDL::Type` objects have overloaded comparison and stringify operators so that you can compare and print types:

`````` \$a = \$a->float if \$a->type < float;
\$t = \$a->type; print "Type is \$t\n";``````

## list

Convert piddle to perl list

`` @tmp = list \$x;``

Obviously this is grossly inefficient for the large datasets PDL is designed to handle. This was provided as a get out while PDL matured. It should now be mostly superseded by superior constructs, such as PP/threading.
However it is still occasionally useful and is provided for backwards compatibility.

`````` for (list \$x) {
# Do something on each value...
}``````

## unpdl

Convert piddle to nested Perl array references

`` \$arrayref = unpdl \$x;``

This function returns a reference to a Perl list-of-lists structure equivalent to the input piddle (within the limitation that while values of elements should be preserved, the detailed datatypes will not, as perl itself basically has "number" data rather than byte, short, int...). E.g., `sum(\$x - pdl( \$x->unpdl ))` should equal 0.

Obviously this is grossly inefficient in memory and processing for the large datasets PDL is designed to handle. Sometimes, however, you really want to move your data back to Perl, and with proper dimensionality, unlike `list`.

`````` use JSON;
my \$json = encode_json unpdl \$pdl;``````

## listindices

Convert piddle indices to perl list

`` @tmp = listindices \$x;``

`@tmp` now contains the values `0..nelem(\$x)-1`.

Obviously this is grossly inefficient for the large datasets PDL is designed to handle. This was provided as a get out while PDL matured. It should now be mostly superseded by superior constructs, such as PP/threading. However it is still occasionally useful and is provided for backwards compatibility.

`````` for \$i (listindices \$x) {
# Do something on each value...
}``````

## set

Set a single value inside a piddle

`` set \$piddle, @position, \$value``

`@position` is a coordinate list, of size equal to the number of dimensions in the piddle.
Occasionally useful, mainly provided for backwards compatibility as superseded by use of slice and assignment operator `.=`.

`````` pdl> \$x = sequence 3,4
pdl> set \$x, 2,1,99
pdl> p \$x
[
[ 0 1 2]
[ 3 4 99]
[ 6 7 8]
[ 9 10 11]
]``````

## at

Returns a single value inside a piddle as perl scalar.

`` \$z = at(\$piddle, @position); \$z=\$piddle->at(@position);``

`@position` is a coordinate list, of size equal to the number of dimensions in the piddle. Occasionally useful in a general context, quite useful too inside PDL internals.

`````` pdl> \$x = sequence 3,4
pdl> p \$x->at(1,2)
7``````

## sclr

return a single value from a piddle as a scalar

`````` \$val = \$a(10)->sclr;
\$val = sclr inner(\$a,\$b);``````

The `sclr` method is useful to turn a piddle into a normal Perl scalar. Its main advantage over using `at` for this purpose is the fact that you do not need to worry if the piddle is 0D, 1D or higher dimensional. Using `at` you have to supply the correct number of zeroes, e.g.

`````` \$a = sequence(10);
\$b = \$a->slice('4');
print \$b->sclr; # no problem
print \$b->at(); # error: needs at least one zero``````

`sclr` is generally used when a Perl scalar is required instead of a one-element piddle. If the input is a multielement piddle the first value is returned as a Perl scalar. You can optionally switch on checks to ensure that the input piddle has only one element:

`````` PDL->sclr({Check => 'warn'}); # carp if called with multi-el pdls
PDL->sclr({Check => 'barf'}); # croak if called with multi-el pdls``````

are the commands to switch on warnings or raise an error if a multielement piddle is passed as input. Note that these options can only be set when `sclr` is called as a class method (see example above).
Use\n\n`` PDL->sclr({Check=>0});``\n\nto switch these checks off again (default setting); When called as a class method the resulting check mode is returned (0: no checking, 1: warn, 2: barf).\n\n## cat\n\nconcatenate piddles to N+1 dimensional piddle\n\nTakes a list of N piddles of same shape as argument, returns a single piddle of dimension N+1.\n\n`````` pdl> \\$x = cat ones(3,3),zeroes(3,3),rvals(3,3); p \\$x\n[\n[\n[1 1 1]\n[1 1 1]\n[1 1 1]\n]\n[\n[0 0 0]\n[0 0 0]\n[0 0 0]\n]\n[\n[1 1 1]\n[1 0 1]\n[1 1 1]\n]\n]``````\n\nIf you compile PDL with bad value support (the default), your machine's docs will also say this:\n\nThe output piddle is set bad if any input piddles have their bad flag set.\n\nSimilar functions include append, which appends only two piddles along their first dimension, and glue, which can append more than two piddles along an arbitary dimension.\n\nAlso consider the generic constructor pdl, which can handle piddles of different sizes (with zero-padding), and will return a piddle of type 'double' by default, but may be considerably faster (up to 10x) than cat.\n\n## dog\n\nOpposite of 'cat' :). Split N dim piddle to list of N-1 dim piddles\n\nTakes a single N-dimensional piddle and splits it into a list of N-1 dimensional piddles. The breakup is done along the last dimension. 
Note the dataflown connection is still preserved by default, e.g.:\n\n`````` pdl> \\$p = ones 3,3,3\npdl> (\\$a,\\$b,\\$c) = dog \\$p\npdl> \\$b++; p \\$p\n[\n[\n[1 1 1]\n[1 1 1]\n[1 1 1]\n]\n[\n[2 2 2]\n[2 2 2]\n[2 2 2]\n]\n[\n[1 1 1]\n[1 1 1]\n[1 1 1]\n]\n]``````\n`` Break => 1 Break dataflow connection (new copy)``\n\nIf you compile PDL with bad value support (the default), your machine's docs will also say this:\n\nThe output piddles are set bad if the original piddle has its bad flag set.\n\n## gethdr\n\nRetrieve header information from a piddle\n\n`````` \\$pdl=rfits('file.fits');\n\\$h=\\$pdl->gethdr;\nprint \"Number of pixels in the X-direction=\\$\\$h{NAXIS1}\\n\";``````\n\nThe `gethdr` function retrieves whatever header information is contained within a piddle. The header can be set with sethdr and is always a hash reference or undef.\n\n`gethdr` returns undef if the piddle has not yet had a header defined; compare with `hdr` and `fhdr`, which are guaranteed to return a defined value.\n\nNote that gethdr() works by reference: you can modify the header in-place once it has been retrieved:\n\n`````` \\$a = rfits(\\$filename);\n\\$ah = \\$a->gethdr();\n\\$ah->{FILENAME} = \\$filename;``````\n\nIt is also important to realise that in most cases the header is not automatically copied when you copy the piddle. See hdrcpy to enable automatic header copying.\n\nHere's another example: a wrapper around rcols that allows your piddle to remember the file it was read from and the columns could be easily written (here assuming that no regexp is needed, extensions are left as an exercise for the reader)\n\n`````` sub ext_rcols {\nmy (\\$file, @columns)=@_;\n\n@piddles=rcols \\$file, @columns;\nreturn @piddles;\n}``````\n\n## hdr\n\nRetrieve or set header information from a piddle\n\n`` \\$pdl->hdr->{CDELT1} = 1;``\n\nThe `hdr` function allows convenient access to the header of a piddle. 
Unlike `gethdr` it is guaranteed to return a defined value, so you can use it in a hash dereference as in the example. If the header does not yet exist, it gets autogenerated as an empty hash.\n\nNote that this is usually -- but not always -- What You Want. If you want to use a tied Astro::FITS::Header hash, for example, you should either construct it yourself and use `sethdr` to put it into the piddle, or use fhdr instead. (Note that you should be able to write out the FITS file successfully regardless of whether your PDL has a tied FITS header object or a vanilla hash).\n\n## fhdr\n\nRetrieve or set FITS header information from a piddle\n\n`` \\$pdl->fhdr->{CDELT1} = 1;``\n\nThe `fhdr` function allows convenient access to the header of a piddle. Unlike `gethdr` it is guaranteed to return a defined value, so you can use it in a hash dereference as in the example. If the header does not yet exist, it gets autogenerated as a tied Astro::FITS::Header hash.\n\nAstro::FITS::Header tied hashes are better at matching the behavior of FITS headers than are regular hashes. In particular, the hash keys are CAsE INsEnSItiVE, unlike normal hash keys. See Astro::FITS::Header for details.\n\nIf you do not have Astro::FITS::Header installed, you get back a normal hash instead of a tied object.\n\n## sethdr\n\nSet header information of a piddle\n\n`````` \\$pdl = zeroes(100,100);\n\\$h = {NAXIS=>2, NAXIS1=>100, NAXIS=>100, COMMENT=>\"Sample FITS-style header\"};\n\\$\\$h{FILENAME} = 'file.fits';\n\\$pdl->sethdr( \\$h );``````\n\nThe `sethdr` function sets the header information for a piddle. You must feed in a hash ref or undef, and the header field of the PDL is set to be a new ref to the same hash (or undefined).\n\nThe hash ref requirement is a speed bump put in place since the normal use of headers is to store fits header information and the like. 
Of course, if you want you can hang whatever ugly old data structure you want off of the header, but that makes life more complex.\n\nRemember that the hash is not copied -- the header is made into a ref that points to the same underlying data. To get a real copy without making any assumptions about the underlying data structure, you can use one of the following:\n\n`````` use PDL::IO::Dumper;\n\\$pdl->sethdr( deep_copy(\\$h) );``````\n\n(which is slow but general), or\n\n`` \\$pdl->sethdr( PDL::_hdr_copy(\\$h) )``\n\n(which uses the built-in sleazy deep copier), or (if you know that all the elements happen to be scalars):\n\n`````` { my %a = %\\$h;\n\\$pdl->sethdr(\\%a);\n}``````\n\nwhich is considerably faster but just copies the top level.\n\nThe `sethdr` function must be given a hash reference or undef. For further information on the header, see gethdr, hdr, fhdr and hdrcpy.\n\n## hdrcpy\n\n`````` print \"hdrs will be copied\" if \\$a->hdrcpy;\n\\$a->hdrcpy(1); # switch on automatic header copying\n\\$b = \\$a->sumover; # and \\$b will inherit \\$a's hdr\n\\$a->hdrcpy(0); # and now make \\$a non-infectious again``````\n\n`hdrcpy` without an argument just returns the current setting of the flag. See also \"hcpy\" which returns its PDL argument (and so is useful in method-call pipelines).\n\nNormally, the optional header of a piddle is not copied automatically in pdl operations. Switching on the hdrcpy flag using the `hdrcpy` method will enable automatic hdr copying. Note that an actual deep copy gets made, which is rather processor-inefficient -- so avoid using header copying in tight loops!\n\nMost PDLs have the `hdrcpy` flag cleared by default; however, some routines (notably rfits) set it by default where that makes more sense.\n\nThe `hdrcpy` flag is viral: if you set it for a PDL, then derived PDLs will get copies of the header and will also have their `hdrcpy` flags set. 
For example:\n\n`````` \\$a = xvals(50,50);\n\\$a->hdrcpy(1);\n\\$a->hdr->{FOO} = \"bar\";\n\\$b = \\$a++;\n\\$c = \\$b++;\nprint \\$b->hdr->{FOO}, \" - \", \\$c->hdr->{FOO}, \"\\n\";\n\\$b->hdr->{FOO} = \"baz\";\nprint \\$a->hdr->{FOO}, \" - \", \\$b->hdr->{FOO}, \" - \", \\$c->hdr->{FOO}, \"\\n\";``````\n\nwill print:\n\n`````` bar - bar\nbar - baz - bar``````\n\nPerforming an operation in which more than one PDL has its hdrcpy flag causes the resulting PDL to take the header of the first PDL:\n\n`````` (\\$a,\\$b) = sequence(5,2)->dog;\n\\$a->hdrcpy(1); \\$b->hdrcpy(1);\n\\$a->hdr->{foo} = 'a';\n\\$b->hdr->{foo} = 'b';\nprint ((\\$a+\\$b)->hdr->{foo} , (\\$b+\\$a)->hdr->{foo});``````\n\nwill print:\n\n`` a b``\n\n## hcpy\n\nSwitch on/off automatic header copying, with PDL pass-through\n\n`````` \\$a = rfits('foo.fits')->hcpy(0);\n\\$a = rfits('foo.fits')->hcpy(1);``````\n\n`hcpy` sets or clears the hdrcpy flag of a PDL, and returns the PDL itself. That makes it convenient for inline use in expressions.\n\n`` set_autopthread_targ(\\$num_pthreads);``\n\n`\\$num_pthreads` is the target number of pthreads the auto-pthread process will try to achieve.\n\nSee PDL::ParallelCPU for an overview of the auto-pthread process.\n\n`````` # Example turning on auto-pthreading for a target of 2 pthreads and for functions involving\n# PDLs with greater than 1M elements\n\n# Execute a pdl function, processing will split into two pthreads as long as\n# one of the pdl-threaded dimensions is divisible by 2.\n\\$a = minimum(\\$b);\n\n# Get the actual number of pthreads that were run.\n\n`` \\$num_pthreads = get_autopthread_targ();``\n\n`\\$num_pthreads` is the target number of pthreads the auto-pthread process will try to achieve.\n\nSee PDL::ParallelCPU for an overview of the auto-pthread process.\n\nGet the actual number of pthreads executed for the last pdl processing function.\n\n`` \\$autopthread_actual = get_autopthread_actual();``\n\n`\\$autopthread_actual` is the actual number 
of pthreads executed for the last pdl processing function.\n\nSee PDL::ParallelCPU for an overview of the auto-pthread process.\n\nSet the minimum size (in M-elements or 2^20 elements) of the largest PDL involved in a function where auto-pthreading will be performed. For small PDLs, it probably isn't worth starting multiple pthreads, so this function is used to define a minimum threshold where auto-pthreading won't be attempted.\n\n`` set_autopthread_size(\\$size);``\n\n`\\$size` is the mimumum size, in M-elements or 2^20 elements (approx 1e6 elements) for the largest PDL involved in a function.\n\nSee PDL::ParallelCPU for an overview of the auto-pthread process.\n\n`````` # Example turning on auto-pthreading for a target of 2 pthreads and for functions involving\n# PDLs with greater than 1M elements\n\n# Execute a pdl function, processing will split into two pthreads as long as\n# one of the pdl-threaded dimensions is divisible by 2.\n\\$a = minimum(\\$b);\n\n# Get the actual number of pthreads that were run.\n\n`` \\$autopthread_size = get_autopthread_size();``\n`\\$autopthread_size` is the mimumum size limit for auto_pthreading to occur, in M-elements or 2^20 elements (approx 1e6 elements) for the largest PDL involved in a function"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7925588,"math_prob":0.85779244,"size":51255,"snap":"2020-34-2020-40","text_gpt3_token_len":14271,"char_repetition_ratio":0.12628046,"word_repetition_ratio":0.10418854,"special_character_ratio":0.27251977,"punctuation_ratio":0.14645836,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96233433,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-10T05:48:43Z\",\"WARC-Record-ID\":\"<urn:uuid:65739ce3-7d37-41fe-9846-0305ccbc6527>\",\"Content-Length\":\"178145\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3cc2b4ac-07fc-4327-b896-3d91f9f06c29>\",\"WARC-Concurrent-To\":\"<urn:uuid:cd49ad70-8247-43cb-a632-1753aebf311b>\",\"WARC-IP-Address\":\"199.232.66.217\",\"WARC-Target-URI\":\"http://web-stage.metacpan.org/pod/release/CHM/PDL-2.019/Basic/Core/Core.pm\",\"WARC-Payload-Digest\":\"sha1:NVZJG7DIJRBM645PWMXFZYB3EO74M5FX\",\"WARC-Block-Digest\":\"sha1:BU5UIWEJXEMQDZXMDFJUBJ6GORSVA5XO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738609.73_warc_CC-MAIN-20200810042140-20200810072140-00262.warc.gz\"}"} |
http://www.kpubs.org/article/articleMain.kpubs?articleANo=E1EEFQ_2015_v10n4_1389 | [
Research and Experimental Implementation of a CV-FOINC Algorithm Using MPPT for PV Power System
Journal of Electrical Engineering and Technology. 2015. Jul, 10(4): 1389-1399
Copyright © 2015, The Korean Institute of Electrical Engineers
This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
• Received : July 31, 2013
• Accepted : January 26, 2015
• Published : July 01, 2015
Arulmurugan R
Corresponding Author: Dept. of EEE, Research Scholar, Anna University, Chennai, Regional zone CBE, Reg. No. 11160431001, India. (arul.lect@gmail.com)
Venkatesan T
Professor, Dept. of EEE, K.S. Rangasamy College of Technology, Tiruchengode, India. (pramoth99@yahoo.co.uk)

Abstract
This research suggests maximum power point tracking (MPPT) for the solar photovoltaic (PV) power scheme using a new constant voltage (CV) fractional order incremental conductance (FOINC) algorithm. The PV panel has low conversion efficiency, and the power output of the PV panel depends on changing weather conditions. The extractable power delivered to a battery load can be raised by using an MPPT algorithm. Among all MPPT strategies, the incremental conductance (INC) algorithm is most often employed due to its easy implementation, low fluctuation and fast tracking. The proposed method not only has the merits of INC; the fractional order can also deliver a dynamic mathematical model describing non-linear characteristics. The CV-FOINC variation is exploited as a dynamic variable to regulate the PV power toward the peak operating point. For a small-scale photovoltaic conversion scheme, the suggested technique is validated by simulation under dissimilar operating conditions. Contributions are made in numerous aspects of the entire system, including new control algorithm design, system simulation, converter design, programming in the simulation environment and the experimental setup. The results confirm the short tracking period and the practicality of tracking the photovoltaic array.
Keywords
1. Introduction
Renewable energy sources are considered an important source of energy in the twenty-first century, in use to fulfil our needs and the growing demand for electricity.
Among all natural energy sources, solar energy is the most attractive, because it is easy to obtain anywhere and at any time throughout the day and does not contribute to global warming. The extraordinary diffusion of solar PV systems in electricity generation is evident from the fact that the PV scheme is anticipated to be the largest source of electricity generation among all the accessible renewable energy sources. PV modules are primarily current-source devices, and current is produced when light falls on the surface of the solar device. The characteristic curve of the PV module shows its non-linear behaviour. The non-linear V-I curve of the PV module has only one point of maximum power extraction. Therefore, harvesting energy at maximum efficiency is not simple. The existence of only one unique point of maximum power requires special techniques to operate the scheme at the point of maximum power. These operating techniques are named MPPT. MPPT techniques control the power electronic interface such that the source impedance is matched with the load impedance and hence maximum power is transferred. Because of the non-linear characteristics, MPPT techniques are vital for any solar PV system.
Different methods have been reported in the literature for tracking the maximum power point (MPP). In recent years, a large number of techniques have been proposed for tracking the MPP, as follows. Ahmed M. Kassem proposes an optimal technique to obtain the optimum chopping ratio of a buck-boost converter, optimizing its efficiency in a photovoltaic water pumping system using MPPT. In addition, a NARMA controller based on an ANN approach is applied to optimize the ratio for PV maximum power at any irradiation level, with fast response and good performance. Kashif Ishaque et al. proposed a particle swarm optimization (PSO) algorithm for MPPT that eliminates the conventional PI control method.
Since this method is based on an optimized search, it overcomes the common drawbacks of conventional MPPT. The proposed algorithm is employed in a buck-boost converter and yields an average MPPT efficiency of 95%. A. D. Karlis et al. detail an off-line trained fuzzy cognitive network (FCN) for MPPT that gives good maximum power operation of a PV array under different conditions such as insolation, temperature, etc. Hamed Mashinchi Mahery and Ebrahim Babaei proposed mathematical modelling of the buck-boost DC/DC converter in continuous conduction mode (CCM). In this method, the relations for the inductor current and output voltage are obtained using the Laplace transform, followed by calculation of the inductor current and voltage using the Z-transform. Chia-Hung Lin et al. propose the fractional order incremental conductance (FOINC) method for providing maximum power for a PV array. FOINC provides a dynamic mathematical model to adjust the PV array voltage towards the maximum power point. Yu-Chi Wu et al. suggest a three-point weighting method that incorporates midpoint tracking to improve the limitation of P&O and to enhance the efficiency of the three-point weighting method. It was established that the proposed method tracked better than the three-point weighting method, and it was capable of improving the deficiency of the P&O method, which has difficulty tracking from the open-circuit voltage, as well as enhancing the precision of the three-point weighting method in the case of zero weight. Jung-Woo Baek et al. propose a combined perturb and observe (P&O) and constant voltage (CV) method: the P&O method is simple to realize, and the CV method makes it possible to track the MPP at low radiation.
The fuzzy logic and/or neural network based MPPT techniques have good performance under fast-changing environmental circumstances and display better performance than the P&O method.
However, the main drawback of these techniques is that their efficiency is highly reliant on the technical knowledge of the engineer in calculating the error and coming up with the fuzzy rule based table. They depend strongly on how the designer assembles the system based on his experience and skill. The perturb and observe algorithm can fail under fast-varying environmental circumstances. The INC technique has partly solved the divergence of the perturb and observe model. This research proposes a new technique that tunes the on-line MPPT based on varying weather conditions. The designed algorithm modifies the existing conventional INC controller using a constant voltage-fractional order differentiator. The fractional order differentiator is designed for numerical evaluation of fractional derivatives using the Riemann-Liouville definition and the Grünwald-Letnikov definition. The suggested algorithm is implemented in the MATLAB/Simulink environment, where it is tested and validated. For a small-scale photovoltaic power conversion scheme, the results show computational efficiency and tracking time reduction. This research is organized as follows: section two discusses the problem formulation, the equivalent model of the PV, the new CV-FOINC control algorithm and the DC to DC boost-buck converter. Section three describes the simulation setup and results. Sections four and five give the experimental results and the conclusion of the suggested system.
2. Problem Formulation of Proposed System
The block diagram of the suggested system is shown in Fig. 1, which contains the PV panel, the new CV-FOINC MPPT algorithm, the DC to DC boost-buck converter and the load. The power switches of the designed DC to DC boost-buck converter are controlled by gate drivers programmed via a controller module. The converter delivers the required levels of output power to the stand-alone battery load.
The impedance of the battery load is assumed suitable for the subsequent analysis. The DC to DC converter is responsible for MPPT and voltage regulation. When the converter switch is in the 'off' state, the solar energy is transferred to the output storage capacitor through the boost-buck inductance. Varying the duty of the switching time can regulate the input voltage and current. The proposed subsystems are described as follows.
[Fig. 1: Block diagram of the proposed PV conversion system]
- 2.1 Equivalent model of PV
The photovoltaic cell is a p-n semiconductor junction that transfers solar energy into electrical energy. Photovoltaic cells, in the so-called photovoltaic module, are connected in series and/or parallel configurations to form photovoltaic arrays; the equivalent circuit of the PV model is shown in Fig. 2. It has non-linear characteristics, and the mathematical model can be expressed as follows. The intensity of solar radiation in free space is 1.367 kW/m². The non-linear V-I characteristic of a solar panel is extracted, neglecting the low value of series resistance, and the output current is as follows.
[Fig. 2: Equivalent model of the solar photovoltaic module]
[Eq. (1): image not preserved]
where I_o is the PV module output current (A), V_o the PV module output voltage (V), k Boltzmann's constant (J/K), q the charge of an electron, A the p-n junction ideality factor, T the cell temperature (K), and I_rs the cell reverse saturation current. The factor A in Eq. (1) determines the cell's deviation from the ideal p-n junction characteristic; its ideal value lies between 1 and 5. The optimal transfer efficiency occurs at light wavelengths of around 0.7 µm to 0.8 µm, in the yellow part of the spectrum. The PV array power can be computed as the product of voltage and current, i.e. P = V × I.
The voltage versus power (V-P) and current versus power (I-P) characteristics of a single PV module are non-linear, as shown in Fig. 3. The panel contains four modules of equal power rating connected side by side, installed on the floor of the Electrical and Electronics Engineering laboratory at Government College of Engineering, Salem, India, sponsored by IIT Bombay. The panel is fixed at an incline angle. It can be noticed that, at fixed irradiance and temperature, there is a unique point corresponding to the maximum power that the PV module can generate. A tool for tracking this exact point is thus required so that the energy produced by the PV system can be maximized. The specifications of a single PV module are: 10 W peak maximum power, 16.4 V maximum-power voltage, 0.610 A maximum-power current, 21 V open-circuit voltage and 0.700 A short-circuit current.
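As an illustrative sketch of the single-diode relation described above, the snippet below assumes the standard form I = I_ph − I_rs[exp(qV/(kAT)) − 1], where I_ph is the photo-generated current (a symbol not explicitly listed in the excerpt), with illustrative cell-level parameter values rather than the panel's datasheet figures:

```python
import math

# Physical constants named in the text: k (Boltzmann), q (electron charge)
K_B = 1.380649e-23     # J/K
Q_E = 1.602176634e-19  # C

def pv_current(v, i_ph=0.70, i_rs=1e-9, a=1.5, t=298.0):
    """Single-diode PV cell current with series resistance neglected:
    I = I_ph - I_rs * (exp(q*V / (k*A*T)) - 1)."""
    return i_ph - i_rs * (math.exp(Q_E * v / (K_B * a * t)) - 1.0)

def find_mpp(v_step=0.001):
    """Sweep the voltage and return (V, P) at the maximum power point."""
    best_v = best_p = 0.0
    v = 0.0
    while True:
        i = pv_current(v)
        if i <= 0.0:      # past the open-circuit voltage
            break
        p = v * i         # P = V * I, as in the text
        if p > best_p:
            best_v, best_p = v, p
        v += v_step
    return best_v, best_p
```

Sweeping P = V·I along the curve reproduces the single maximum visible in Fig. 3, which is the point the MPPT controller must hold.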
[Fig. 3: Voltage-power (V-P) and current-power (I-P) curves of a single PV module]
- 2.1.1 Estimation of maximum electrical energy output for PV system
The power production of a photovoltaic panel depends linearly on the operating temperature, decreasing with T_OT. The effect of the photovoltaic operating temperature on the photovoltaic electrical energy production can be expressed by the following equation
[Eq. (2): image not preserved]
With the essential correction applied to h_EFS, the result is the UAO, in units of watt-hours per peak watt per day. UAO is a preferred parameter for the sizing exercise in contrast to parameters such as h_EFS or global irradiation. UAO is given by the following equation
[Eq. (3): image not preserved]
Thus Eq. (2) can be modified for temperature-corrected photovoltaic electrical energy production as follows
[Eq. (4): image not preserved]
To show the significance of including the effects of photovoltaic operating temperature in the photovoltaic electrical efficiency, an existing linear expression for the temperature-corrected photovoltaic electrical efficiency η_OT is given by
[Eq. (5): image not preserved]
where η_STC is the photovoltaic electrical efficiency at standard test conditions, β is the module/array efficiency coefficient, T_STC is the reference temperature at standard test conditions for photovoltaic electrical efficiency, and T_OT is the average photovoltaic operating temperature. The values of electrical energy and UAO for the PV system are shown in Table 1. The power conversion efficiency at standard test conditions can be calculated for a component of the photovoltaic panel. An expression for calculating the photovoltaic array power conversion efficiency at standard test conditions has been established as Eq. (6), for an array consisting of four sub-modules.
Table 1. Electrical energy, unit array output for photovoltaic module
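The linear temperature correction described above is commonly written as η_OT = η_STC[1 − β(T_OT − T_STC)]. Since the image of Eq. (5) is not preserved in this excerpt, the coefficient values below are purely illustrative:

```python
def temp_corrected_efficiency(eta_stc, beta, t_op, t_stc=25.0):
    """Linear temperature-corrected PV efficiency:
    eta_OT = eta_STC * (1 - beta * (T_OT - T_STC))."""
    return eta_stc * (1.0 - beta * (t_op - t_stc))

# Example: a 15%-efficient module with beta = 0.0045 per deg C, operating at 45 deg C
eta_hot = temp_corrected_efficiency(0.15, 0.0045, 45.0)  # roughly 0.1365
```

The form makes explicit that every degree of operating temperature above T_STC shaves a fixed fraction β off the rated efficiency.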
[Eq. (6): image not preserved]
where V_ocm is the open-circuit voltage of the module, I_scm the short-circuit current of the module, M_s the number of modules in series in a sub-module and M_p the number of parallel strings of series-connected modules in a sub-module. FF is the fill factor and I_p the peak intensity, with a value of 1000 W/m². All these parameters are observed at standard test conditions and provided by the manufacturer's specifications. The same established formula can be adapted for 'n' sub-modules of any given photovoltaic array.
- 2.2 New CV-FOINC MPPT algorithm design
The perturb and observe method is widely applied in MPPT due to its simple structure, small number of required parameters, and because it does not require the solar panel characteristics [1-5]. It operates by periodically perturbing the instantaneous terminal voltage and then comparing the output power with that of the previous perturbation cycle. When it attains maximum power, it has an oscillation problem around the desired operating point and incurs some unexpected losses. To overcome this weakness of perturb and observe, the array terminal voltage in incremental conductance is always regulated according to its value relative to the voltage of the maximum power point. The incremental conductance method uses the gradient to increase tracking speed and can quickly drive the slope of the output power versus voltage to match the maximum power point. However, the non-linear functions of power versus voltage and current versus voltage do not easily yield first-order derivatives. In a PV system, the solar temperature, radiation and electricity conduction are irregular diffusion phenomena in inhomogeneous media, and their dynamic variations allow the electrical production to be described by fractional-order, non-integer-derivative-based equations. The fractional differential function depends on its entire past values, and its model behaves like a scheme with a long memory.
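The long-memory behaviour noted above is exactly what a Grünwald-Letnikov fractional difference captures numerically: every past sample contributes through a binomial weight. The following sketch is an illustration (not the authors' code) using the standard recurrence for those weights:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights w_k = (-1)^k * C(alpha, k),
    via the standard recurrence w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(samples, alpha, h):
    """Approximate the alpha-th order GL derivative at the last sample
    of a uniformly spaced series with step h."""
    w = gl_weights(alpha, len(samples))
    acc = sum(wk * samples[-1 - k] for k, wk in enumerate(w))
    return acc / (h ** alpha)
```

For alpha = 1 the weights collapse to (1, −1, 0, …), recovering the ordinary backward difference, while non-integer alpha keeps non-zero weight on the whole history, which is the long-memory property the text refers to.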
In this paper, the incremental variations in the voltage and current are described by reasonable approximations in fractional order calculus (FOC). The basic concept of FOC is shown in the following section.
- 2.2.1 Fractional order calculus
A fractional order structure comprises a fractional differential or integral equation, and systems containing a few such equations have been studied in engineering and physical applications, for example active control, signal processing, and linear and non-linear feedback control. The generally utilized approaches proposed for the numerical evaluation of fractional derivatives are the Riemann-Liouville and Grünwald-Letnikov definitions [8, 9]. Consider a continuous function f(t); its α-th order derivative can be expressed as
[Eq. (7): image not preserved]
for all α (positive, negative or zero) and m = 0, 1, 2, 3, 4, … Note that the choice of α can be seen as selecting the phenomena that will be modeled. By selecting 0 < α < 1, anomalous phenomena such as heat conduction, diffusion, viscoelasticity and electrode-electrolyte polarization can be described [15, 16].
- 2.2.2 Control process of constant voltage-fractional order INC method
The P-V and P-I characteristics of a single cell are determined and extended to determine the behaviour of PV modules, as shown in Fig. 3. It appears that dI/dV < 0: with increasing voltage V, the current I decreases. According to Eq. (1), I and V depend on atmospheric conditions and electricity conduction. The anomalous phenomena can be described by fractional order differentiation. Thus, dI/dV can be modified as
[Eq. (8): image not preserved]
[Eq. (9): image not preserved]
The effectiveness of the weighting is changed when α > 0 and α is an even number. For 0 < α < 1, the expression can be termed the fractional rate of change of the function. Eq. (9) is used to express the fractional order incremental changes of the current and voltage of the PV panel. The incremental conductance can thus be modified as
[Eq. (10): image not preserved]
where Z = 0, −1, −2, −3, −4, …, with residue Γ(0) = Res(Γ, 0) = 1. Thus the FOINC method searches the voltage as the variable at which the maximum power point is approached by increasing or decreasing the duty cycle.
Fig. 4 shows the flowchart of the proposed CV-FOINC control algorithm. Using a radiation meter (pyranometer), this control method can change the operation mode in the program. Until the output power of the solar cell array reaches the maximum power point, the proposed control method increases or decreases the solar cell output voltage in the same direction, so the MPP can be tracked. It adjusts the duty cycle using the instantaneous values I and V at the present iteration step and their corresponding values I_o and V_o stored at the end of the preceding iteration step. The incremental changes in current and voltage are approximated accordingly. To avoid misjudging the working state under various conditions, the initial voltage V can be set to zero volts or to default values according to the temperature variations. Based on four judgements, the control procedure of the CV-FOINC algorithm can be expressed as follows:
[Fig. 4: Flowchart of the new CV-FOINC MPPT algorithm]
Condition one: if
Δ^αI/ΔV^α = −I/V, or (ΔV^α = 0 and ΔI = 0), no control action is needed. Condition two: if
Δ^αI/ΔV^α > −I/V, or (ΔV^α = 0 and ΔI > 0), a control action is needed to add ΔV^α to the present voltage V, increasing the duty cycle. Condition three: if
Δ^αI/ΔV^α < −I/V, or (ΔV^α = 0 and ΔI < 0), a control action is needed to reduce the present voltage V by ΔV^α, decreasing the duty cycle. Condition four: compute the output power as the product of output voltage and current, P = VI. If P_o < P, update the voltage V_o = V and I_o = I; otherwise, terminate the control procedure. In the constant voltage control technique, the output voltage of the solar cell modules has a constant-voltage characteristic with only a small oscillation amplitude as the solar radiation changes. The constant voltage control method therefore sets the operating voltage to a fixed fraction (0.76) of the open-circuit voltage and regulates it at that constant value. This method does not require the calculated power value as an output. The duty cycle of the DC to DC converter is determined by the control circuit, and the DC voltage of the output terminal is consistently maintained at the set output voltage value. The advantage of this control method is that it reduces the number of sensors on the panel and the DC side. However, it has the drawback that it cannot track the MPP when the solar radiation changes rapidly, and the power efficiency is then reduced.
The proposed MPPT algorithm maximizes the efficiency of the output power of the solar cell modules as the solar radiation varies. The algorithm performs MPPT differently depending on the solar radiation. When the solar irradiation is low, the constant voltage control method is used; otherwise, to improve the dynamic performance, FOINC based on the fractional order incremental changes of the PV array terminal voltage and current rapidly tracks the maximum output power. The operation mode is therefore changed to the constant voltage control method at low solar radiation.
- 2.3 DC to DC boost-buck converter design
The boost-buck converter has low switching losses and the highest efficiency among non-isolated DC to DC converters. It can also provide an improved output current characteristic due to the inductor on the output stage. Thus, the boost-buck configuration is a proper converter to employ in designing the MPPT. The converter provides a negative-polarity regulated output voltage with respect to the common terminal of the input voltage, as shown in Fig. 5. Here the capacitor C_1 acts as the primary means of storing and transferring energy from the input to the output. In steady state, the average inductor voltages V_L1 and V_L2 are zero.
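The four-condition procedure above can be condensed into a single update step. This is an illustrative sketch rather than the authors' implementation: plain differences ΔV and ΔI stand in for the fractional-order increments, and the duty-step size and tolerance are assumed tuning parameters:

```python
def cv_foinc_step(v, i, v_prev, i_prev, duty, step=0.01, tol=1e-6):
    """One MPPT update following the four-condition procedure: compare
    the incremental conductance dI/dV with -I/V and nudge the converter
    duty cycle toward the maximum power point."""
    dv = v - v_prev   # stand-in for the fractional increment dV^alpha
    di = i - i_prev
    if abs(dv) < tol:
        if abs(di) < tol:
            return duty                      # condition one: hold at MPP
        return duty + step if di > 0.0 else duty - step
    g_inc = di / dv                          # incremental conductance
    g_inst = -i / v if v != 0.0 else 0.0     # instantaneous -I/V
    if abs(g_inc - g_inst) < tol:
        return duty                          # condition one: at the MPP
    # condition two (left of the MPP) / condition three (right of it)
    return duty + step if g_inc > g_inst else duty - step
```

On the linear test curve I = 1 − 0.5V (whose power peak sits at V = 1), the step leaves the duty unchanged at the peak and raises it to the left of the peak, matching conditions one and two.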
It can also provide an improved output current characteristic due to the inductor on the output stage. Thus, the boost-buck configuration is a proper converter to be employed in deceitful the MPPT. The converter provides a negative polarity regulated output voltage with respect to the common terminal of the input voltage as shown in Fig. 5 . Here the capacitor C 1 acts as the primary means of storing and transferring energy from the input to the output. In steady state, the average inductor voltage V L1 and V L2 are zero.",
null,
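The incremental-conductance-style decision rules of Section 2.2 (compare ΔI/ΔV against −I/V, and fall back to the sign of ΔI when ΔV is zero) can be sketched in code. This is an illustrative sketch only: the function and variable names, the fixed perturbation step, and the assumption that increasing the duty cycle raises the operating voltage are ours, not the paper's.

```python
def mppt_step(v, i, v_prev, i_prev, duty, step=0.01):
    """One incremental-conductance-style MPPT update; returns the new duty cycle.

    `step` (perturbation size) is an illustrative assumption, as is the sign
    convention that a larger duty cycle raises the panel operating voltage.
    """
    dv = v - v_prev
    di = i - i_prev
    if dv == 0:
        if di > 0:            # power rising at fixed voltage: move up
            duty += step
        elif di < 0:          # power falling: move down
            duty -= step
        # di == 0: at the MPP, leave the duty cycle unchanged
    else:
        inc_cond = di / dv    # incremental conductance dI/dV
        if inc_cond > -i / v:     # left of the MPP: raise voltage
            duty += step
        elif inc_cond < -i / v:   # right of the MPP: lower voltage
            duty -= step
    return min(max(duty, 0.0), 1.0)   # keep the duty cycle in [0, 1]
```

In a controller this function would be called once per sampling period with the latest sensed voltage and current.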
"PPT Slide\nLager Image\nDC to DC boost-buck converter",
null,
"PPT Slide\nLager Image\nwhere V C1 is larger than both V c and V o . Assuming C 1 to be sufficiently large, in steady sate the variation in vc1 from its average value V C1 can be assumed to be negligibly small i.e vc 1 ≅V C1 , even though it stores and transfers energy from the input to the output.\nWhen the switch is off, the inductor current I L1 and I L2 flow to the load through the diode. The circuit is shown in Fig. 6(a) capacitor C 1 is charged through the diode by energy from both the input and L 1 . Current I L1 decreases, because V C1 is larger than V d . Energy stored in L 2 feeds the output. Therefore i L2 also decreases. When the switch is on, V C1 reverse biases the diode. The inductor currents i L1 and i L2 flow through the switch in Fig. 6(b) . Since V C1 >V o , C 1 discharges through the switch, transferring energy to the output and L 2 . Therefore, i L2 increases. The input feeds energy to L 1 causing i L1 to increase. The inductor currents i L1 and i L2 are assumed to be continuous. The voltage and current expressions in steady state can be obtained in two different ways. If we assume the capacitor voltage V C1 to be constant, then equating the integral of the voltage across L 1 and L 2 over one time period to zero yields .",
null,
"PPT Slide\nLager Image\nConverter waveforms: (a) switch off (b) switch on.",
null,
"PPT Slide\nLager Image",
null,
"PPT Slide\nLager Image",
null,
"PPT Slide\nLager Image",
null,
"PPT Slide\nLager Image\nFrom Eqs. (14) and (15)",
null,
"PPT Slide\nLager Image\nAssuming P d =P o gives",
null,
"PPT Slide\nLager Image\nwhere I L1 =I d and I L2 =I o .\nThere is another way to obtain these expressions. Assume that the inductor currents i L1 and i L2 are essentially ripple free (i.e i L1 =I L1 and i L2 =I L2 ). When the switch is off, the charge delivered to C 1 equals I L1 (1−D)T s . When the switch is on, the capacitor discharges by an amount I L2 DTs. Since in steady state the net change of charge associated with C 1 over one time period must be zero.",
null,
"PPT Slide\nLager Image",
null,
"PPT Slide\nLager Image\nand V o / V d =D/1−D since P o = P d . Both methods of analysis yield identical results. The average input and output relations are similar to that of a buck-boost converter. In practical circuits, the assumption of a nearly constant V C1 is reasonably valid. An advantage of this circuit is that both the input current and the current feeding the output stage are reasonably rippled free (unlike the buck-boost converter where both these currents are highly discontinuous). It is possible to simultaneously eliminate the ripples in i L1 and i L2 completely. Leading to lower external filtering requirements. A significant disadvantage is the requirement of a capacitor C1 with a large ripple current carrying capability .\n3. Simulation Results and Discussions\nThe proposed methods were designed and tested on a Laptop Intel(R) Core(TM) i5-3210M CPU @ 2.50GHZ with 4.00GB RAM and MATLAB / Simulink software. PV array which was used in this study had a maximum power Pmax = 40Watts and an open circuit voltage of individual panel is V OC = 21 V at a solar radiation of 1.0 kW/m 2 and temperature of 25℃. The related specific parameters of the PV array and a DC to DC boost-buck converter are utilized between the PV array and the 12V battery for the purpose of MPPT. The overall control procedure consists of two stages: one is a MPPT algorithm keeping the PV array operating at the maximum power point. Second one is a voltage controllable converter adapting the PV array with the storage battery for energy transfer. However, the energy transfer is strongly influenced by the solar radiation and cell temperature (the surface of PV array), such as the angle of incidence of the sunlight and weather. These phenomena affect the efficiency of solar energy generation. The illustration of the closed loop system designed in MATLAB and Simulink is shown in Fig. 7 , which includes the PV array electrical circuit, DC to DC boost-buck converter, and the MPPT algorithm. 
The converter components are chosen according to the values presented in Table 2 . PV module is modelled using electrical characteristics to provide the output current and voltage of the PV panel. The provided current and voltage are fed to the converter and the controller simultaneously .",
null,
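The steady-state relation Vo/Vd = D/(1 − D) derived above can be inverted to pick the duty cycle for a desired conversion ratio. The sketch below assumes an ideal, lossless converter; the 17 V panel voltage in the usage example is purely illustrative.

```python
def voltage_ratio(duty):
    """Ideal boost-buck (Cuk-type) voltage transfer ratio Vo/Vd = D/(1 - D)."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must lie in [0, 1)")
    return duty / (1.0 - duty)

def duty_for_ratio(vo, vd):
    """Duty cycle for a desired Vo/Vd, from D = M/(1 + M) with M = Vo/Vd.

    Equivalently D = Vo / (Vo + Vd); ideal lossless converter assumed.
    """
    m = vo / vd
    return m / (1.0 + m)

# Example: stepping a ~17 V panel voltage down to a 12 V battery
d = duty_for_ratio(12.0, 17.0)   # ≈ 0.414
```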
"PPT Slide\nLager Image\nIllustration of the closed loop proposed system.\nDesign specifications of the proposed system.",
null,
"PPT Slide\nLager Image\nDesign specifications of the proposed system.\nThe control tasks include quantifying the analog voltage and current of the photovoltaic array using voltage and current sensors, transform them into digital by means of an Analog to Digital Control (ADC), process the acquired information in a microcontroller, then them match to the predefined values to determine the following step, revert the pulse width modulation (PWM) to the gate drive, and hence control the switching of MOSFET . The control loop frequently happens with respect to the sampling time, and the main program continues to track the Maximum power points. The key proposed power and control circuit components information is summarized in Table 3 .\nKey components used for the proposed system.",
null,
"PPT Slide\nLager Image\nKey components used for the proposed system.\n- 3.1 Simulation results and description",
null,
"PPT Slide\nLager Image\nChange in (a) duty cycle and (b) power of the system due to the change in illumination and temperature level.\n- 3.2 Comparison results\nIn order to validate the designed new controller process, an enhanced CV-FOINC controller is equated with exciting FOINC and conventional Perturb and Observe (P&O) MPPT technique with fixed irradiance and temperature at 1000W/m 2 and 25℃ respectively. Fig. 9. demonstrations the superiority of the designed technique as it exemplifies the controller output power generated by the three controllers: conventional P&O and Inc-Cond; and an improved VSS Inc-Cond. The outcome shows visibly that the designed an enhanced CV-FOINC algorithm influences quicker to the highest power with low oscillation compared to the exciting techniques.",
null,
"PPT Slide\nLager Image\nComparison waveform of proposed system.\n4. Experimental Results and Discussions\nTo verify the functionality and performance of the suggested system is shown in Fig. 10 . Which composed (a) photovoltaic panel with direct load test (b) DC to DC converter with controller. Four modules attached in one PV panel, each module contain ten watts power output made up of multi and mono crystalline silicon materials. The type of connection totally depends on the application where large current or voltage is required. The panel is fixed tilt angle in 39° south direction. A prototype of the boost-buck converter and control circuit was implemented. The ATMega 8 microcontroller was used to provide the control signals for the DC to DC converter. The C code of the CV-FOINC algorithm (based on flowchart shown in Fig. 4 ) and PWM scheme is built, debugged and run with the help of the Arr studio development tool and Proisp software. Voltage measurement is required at the point where the PV module output is connected to the input of the DC to DC converter. The voltage at this point is the operating voltage of the PV panel. On the other hand, current measurement is also necessary to indicate the generated current of the PV array on each operating point. It is particularly important to determinate the weather condition, which is vital in connection with the accuracy of maximum power point tracking. For the aforementioned reason, the PV array voltage and current are measured using resistance divider sensors .",
null,
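The sensing chain just described (resistance divider → ADC → microcontroller) reduces to a linear scaling from raw ADC counts back to the panel voltage. The sketch below is illustrative only: the 10-bit resolution matches an ATmega8-class ADC, but the 5 V reference and the 1:11 divider ratio are assumptions, not values from the paper.

```python
ADC_BITS = 10            # ATmega8-class ADC resolution (assumed)
V_REF = 5.0              # ADC reference voltage in volts (assumed)
DIVIDER_GAIN = 1 / 11.0  # e.g. a 100k:10k resistance divider (assumed)

def adc_to_panel_voltage(counts):
    """Convert a raw ADC reading back to the PV panel voltage in volts."""
    v_adc = counts * V_REF / ((1 << ADC_BITS) - 1)  # voltage at the ADC pin
    return v_adc / DIVIDER_GAIN                     # undo the divider attenuation

# With these assumed values, a full-scale reading (1023 counts) maps to 55 V
```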
"PPT Slide\nLager Image\nPhotos of prototype setup (a) PV array (b) DC to DC boost-buck converter with CV-FOINC method MPPT algorithm.\n- 4.1 Results and description\nThe experimental data for a typical day on December 29 th 2012 have been investigated. The intensity of sun solar radiation in outer space is around 1.367 kW/m 2 . Each control action will act the MPPT procedure is undertaken continuous measurements of climate condition. Maximum PV operating temperature of the panel was found 28.21℃ at 12:00 noon when ambient temperature was measured 27.3 ℃ and solar radiation was 513 W/m 2 . As the solar radiation and temperature slowly increase, the FOINC algorithm can rapidly track the maximum power at each step disturbance. The experimental tests confirm the proposed method can provide maximum power in practical applications. There are two general types of connecting modules of PV panel such as series and parallel. The type of modules connection totally depends on the application where large current or voltage is required. The purpose in the series arrangement is to increase the output voltage, while the parallel connection is prepared to increase the current. The interconnection of PV cells in a module itself is mostly in series to provide higher voltage. When modules are coupled in series, the entire voltage is the sum of each module voltage, nevertheless the current stays constant, and it is the slightest current of a module available in the configuration. In the hardware configuration, there are four modules connected in series and parallel arrangement. The sampling time of the system is selected to be 0.2 sec, which is the required time for the designed boost-buck converter to reach the steady state condition. The step size of duty cycle is preferred to be 0.2 sec, so the converter can smoothly track the maximum power point.\nFig. 11 shows the initial waveforms of current and voltage after connecting the PV module to the power circuit. 
There is some overshoot in both current and voltage waveforms, which was forecast from the simulation results shows in Fig. 8(b) . After further conducting test an indepth investigation on system performance under rapidly varying illumination levels, the numbers of modules were changed from four to three. The variations of the voltage, power and pulse width modulation of duty cycle of the proposed system is revealed in Fig. 12 . As the sudden fluctuations of simulation results are close to the experimental CV-FOINC conduction technique. Fig. 13 shows the events where the solar radiation is fluctuating with real climatic data. It shows the performance of the day .",
null,
"PPT Slide\nLager Image\nInitial current and voltage after connecting to the MPPT with one module (channel 1 is current, channel 2 is voltage waveform).",
null,
"PPT Slide\nLager Image\nChange in voltage, current and pulse width modulation when the number of PV modules is decreased from four to three.",
null,
"PPT Slide\nLager Image\nHourly difference of solar radiation.\n5. Conclusion\nThis research designed a new constant voltage fractional order incremental conductance (CV-FOINC) MPPT algorithm for a small photovoltaic transformation scheme. Under constantly changing environment conditions, the CV-FOINC algorithm reduces the tracing period and average error less than the conventional methods. MPPT controller is joined with a CV and FOINC controller to develop the competence of the entire scheme. The benefits of the suggested technique are reduces the tracking time, tracking number, switching number and ensures the maximum amount of energy is transferred to the load. It is simplest way to implement in a microcontroller. Hence, the CV-FOINC system a favourable way for further application of a standalone scheme, such as a PV power generating scheme and monitoring scheme.\nBIO",
null,
"Arulmurugan, R was born in Salem, India, in 1985. He is working towards the Ph.D. degree in Electrical Engineering, Anna University, Chennai, Regional zone Coimbatore. He is the recipient of the best faculty, best researcher award in 2012-13 and 2013-14 at Knowledge Institute of Technology. He has published more than 15 papers in refereed journals and International conference proceedings such as Elsevier-Electrical power system research, International Journal of photo energy and IEEE xplorer, etc. He is the recipient of the best paper award in 2013 IEEE international conference on research and development prospects on engineering and technology. His main are of research activity is photovoltaic, MPPT, FLC, DC to DC buck-boost converter.",
null,
"Venkatesan, T was born in Salem, India, in 1971. He received B.E degree in Electrical and Electronics Engineering from NIT, Tiruchy, in 1997, the M.E degree in Power System Engineering from Annamalai University, India, in 2002, and receives Ph.D. in Anna University, Chennai, in 2013. Currently he is working as a Professor in K S Rangasamy College of Technology, Tiruchengode, India. His research interests are Economic Dispatch, Unit Commitment problem solution using soft computing techniques, photovoltaic and Maximum power point tracking techniques.\nReferences"
https://numbermatics.com/n/39319375/
"# 39319375\n\n## 39,319,375 is an odd composite number composed of three prime numbers multiplied together.\n\nWhat does the number 39319375 look like?\n\nThis visualization shows the relationship between its 3 prime factors (large circles) and 20 divisors.\n\n39319375 is an odd composite number. It is composed of three distinct prime numbers multiplied together. It has a total of twenty divisors.\n\n## Prime factorization of 39319375:\n\n### 54 × 53 × 1187\n\n(5 × 5 × 5 × 5 × 53 × 1187)\n\nSee below for interesting mathematical facts about the number 39319375 from the Numbermatics database.\n\n### Names of 39319375\n\n• Cardinal: 39319375 can be written as Thirty-nine million, three hundred nineteen thousand, three hundred seventy-five.\n\n### Scientific notation\n\n• Scientific notation: 3.9319375 × 107\n\n### Factors of 39319375\n\n• Number of distinct prime factors ω(n): 3\n• Total number of prime factors Ω(n): 6\n• Sum of prime factors: 1245\n\n### Divisors of 39319375\n\n• Number of divisors d(n): 20\n• Complete list of divisors:\n• Sum of all divisors σ(n): 50102712\n• Sum of proper divisors (its aliquot sum) s(n): 10783337\n• 39319375 is a deficient number, because the sum of its proper divisors (10783337) is less than itself. Its deficiency is 28536038\n\n### Bases of 39319375\n\n• Binary: 100101011111110111010011112\n• Base-36: NER0V\n\n### Squares and roots of 39319375\n\n• 39319375 squared (393193752) is 1546013250390625\n• 39319375 cubed (393193753) is 60788274747077880859375\n• The square root of 39319375 is 6270.5163264279\n• The cube root of 39319375 is 340.0443281305\n\n### Scales and comparisons\n\nHow big is 39319375?\n• 39,319,375 seconds is equal to 13 weeks, 2 hours, 2 minutes, 55 seconds.\n• To count from 1 to 39,319,375 would take you about forty-five weeks!\n\nThis is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. 
If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. (We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)\n\n• A cube with a volume of 39319375 cubic inches would be around 28.3 feet tall.\n\n### Recreational maths with 39319375\n\n• 39319375 backwards is 57391393\n• The number of decimal digits it has is: 8\n• The sum of 39319375's digits is 40\n• More coming soon!"
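The divisor statistics quoted above are easy to check with a short script; plain trial division up to √n is fast enough at this size (function name is ours):

```python
def divisor_stats(n):
    """Return (divisor count, divisor sum) by trial division up to sqrt(n)."""
    count, total = 0, 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            count += 1
            total += d
            if d != n // d:          # avoid double-counting a square root
                count += 1
                total += n // d
        d += 1
    return count, total

count, sigma = divisor_stats(39319375)
aliquot = sigma - 39319375
# count == 20, sigma == 50102712, aliquot == 10783337 (a deficient number)
```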
https://takemeback.to/astronomy/31-August-1939
"## August 31st, 1939 Zodiac and Birth Chart\n\n#### Maps of the planetary positions",
null,
"NB: The dimensions and the orbits of the planets are not in scale. The zodiac signs in the geocentric view are set assuming that Aries coincides with the vernal equinox and that each of the zodiac signs occupy exactly one-twelfth of the sky.\n\nAn observer located in New York City on Thursday August 31, 1939, would see:",
null,
"Sun The sun rising at 06:20 am and setting at 07:29 pm. The moon, being 93% full, rising at 08:07 pm and setting at 08:53 am. The previous full moon was on Aug 29, 1939 while the next full moon would come on Sep 28, 1939.\n\nThe same observer would also see the other Solar System Planets as follows:\n\nastrotarot",
null,
"Mercury Mercury rising at 04:53 am and setting at 06:47 pm with a 63% of surface illuminated. Mercury would be orbiting the Sun at a distance of 0.31 astronomical units (AU) and would be found at a distance of 1.05 astronomical units (AU) from Earth. Given the planet elongation, the observer would see Mercury at an angle of -17° 07' 47.2\" with the Sun. The brightness would be -0.72 this day, as measured by the planet magnitude.",
null,
"Venus Venus would rise at 06:14 am and set at 07:31 pm. 99% of Venus would be illuminated by the Sun. Venus distance from the Sun would be 0.72 astronomical units (AU) and its distance from the Earth 1.73 astronomical units (AU). The angular distance in the sky between Venus and the Sun would be -1° 45' 02.4\".",
null,
"Mars Mars would look like 93% full, orbiting the Sun at a distance of 1.38 AU and be at 0.47 AU from Earth. The planet would be seen rising at 05:43 pm and setting at 02:28 am. The angular separation of the Sun and the planet, with Earth as the reference point, would be 136° 10' 41.1\". The brightness of Mars, that is its magnitude, would be -2.02.",
null,
"Jupiter Jupiter’s distance from the Sun would be 4.96 AU and 4.05 AU from Planet Earth. Jupiter rising and setting times would be 08:35 pm and 08:52 am respectively. Jupiter would appear with a 99% illuminated fraction of its surface and a magnitude (brightness of the star) of -2.71. Its angular separation between the Sun would be -151° 09' 21.0\".",
null,
"Saturn Saturn would rise at 09:38 pm and set at 10:50 am, appearing 99% full. The angular separation of Saturn from the Sun, as seen by the observer, would be -127° 15' 00.6\", its magnitude 0.25. Saturn would orbit the Sun at a distance of 9.32 astronomical units (AU) and would be located at a distance of 8.67 astronomical units (AU) from Earth.",
null,
"Uranus Uranus would appear 99 full, its brightness being 5.71. It would be seen rising at 10:28 pm and setting at 12:42 pm. Uranus would be 19.63 AU far from the Sun and 19.32 AU far from Planet Earth (average distance). The observer, looking up into the sky, would see Uranus with a -106° 22' 56.1\" angle of separation from the Sun.",
null,
"Neptune Neptune rising at 07:34 am, and setting at 08:05 pm, its magnitude (the degree of brightness of a star) being 8.0. The difference between the celestial longitude of the Sun and that of Neptune would be 13° 55' 55.5\". At 30.22 AU distance from the Sun and at 31.20 AU from Earth, 99 % of Neptune would be illuminated by the Sun.",
null,
"Pluto Pluto would be seen as 99% full and would have an elongation of -36° 35' 51.8\". It would orbit the Sun at an average distance of 38.97 astronomical units (AU) and be located at an average distance of 39.77 astronomical units (AU) from Earth. The observer would see Pluto rising at 03:08 am and setting at 06:05 pm, its magnitude (brightness of the star) being 14.98.\n\nFor the same New York observer looking up in the clear night sky around 10 pm of the same day, the solar system planets would be found in the following constellations and at the following coordinates:\n\nPlanetConstellationRight AscensionDeclination\nMercuryLeo9:36:16.8614:59:46.0\nVenusLeo10:39:02.6910:01:46.8"
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-8-polynomials-and-factoring-8-7-factoring-special-cases-practice-and-problem-solving-exercises-page-527/44
"## Algebra 1: Common Core (15th Edition)\n\n$143=(11)(13)$\n$143=$ ...write $143$ as a difference of two squares. $=144-1$ ...write each term as a square. $=12^{2}-1^{2}$ ...use the rule for the difference of two squares. $=(12-1)(12+1)$ ...simplify. $=(11)(13)$"
https://material-properties.org/what-is-density-physics-definition/
"What is Density – Physics – Definition\n\nWhat is density. In words, the density (ρ) of a substance is the total mass (m) of that substance divided by the total volume (V) occupied by that substance. Material Properties\n\nWhat is Density\n\nDensity is defined as the mass per unit volume. It is an intensive property, which is mathematically defined as mass divided by volume:\n\nρ = m/V\n\nIn words, the density (ρ) of a substance is the total mass (m) of that substance divided by the total volume (V) occupied by that substance. The standard SI unit is kilograms per cubic meter (kg/m3). The Standard English unit is pounds mass per cubic foot (lbm/ft3). The density (ρ) of a substance is the reciprocal of its specific volume (ν).\n\nρ = m/V = 1/ρ\n\nSpecific volume is an intensive variable, whereas volume is an extensive variable. The standard unit for specific volume in the SI system is cubic meters per kilogram (m3/kg). The standard unit in the English system is cubic feet per pound mass (ft3/lbm).\n\nChanges of Density\nIn general, density can be changed by changing either the pressure or the temperature. Increasing the pressure always increases the density of a material. The effect of pressure on the densities of liquids and solids is very very small. On the other hand, the density of gases is strongly affected by pressure. This is expressed by compressibility. Compressibility is a measure of the relative volume change of a fluid or solid as a response to a pressure change.\n\nThe effect of temperature on the densities of liquids and solids is also very important. Most substances expand when heated and contract when cooled. However, the amount of expansion or contraction varies, depending on the material. This phenomenon is known as thermal expansion. The change in volume of a material which undergoes a temperature change is given by following relation:",
"[Figure: thermal-expansion.png – the volumetric thermal expansion relation]",
"where ∆T is the change in temperature, V is the original volume, ∆V is the change in volume, and αV is the coefficient of volume expansion.\n\nIt must be noted, there are exceptions from this rule. For example, water differs from most liquids in that it becomes less dense as it freezes. It has a maximum of density at 3.98 °C (1000 kg/m3), whereas the density of ice is 917 kg/m3. It differs by about 9% and therefore ice floats on liquid water\n\nCoolant acceleration in a reactor core\n\nIt is an illustrative example, following data do not correspond to any reactor design.",
"[Figure: Continuity-Equation-Flow-Rate.png – example of flow rates in a reactor]",
"Example of flow rates in a reactor. It is an illustrative example, data do not represent any reactor design.\n\nPressurized water reactors are cooled and moderated by high-pressure liquid water (e.g. 16MPa). At this pressure water boils at approximately 350°C (662°F). Inlet temperature of the water is about 290°C (⍴ ~ 720 kg/m3). The water (coolant) is heated in the reactor core to approximately 325°C (⍴ ~ 654 kg/m3) as the water flows through the core.\n\nThe primary circuit of typical PWRs is divided into 4 independent loops (piping diameter ~ 700mm), each loop comprises a steam generator and one main coolant pump. Inside the reactor pressure vessel (RPV), the coolant first flows down outside the reactor core (through the downcomer). From the bottom of the pressure vessel, the flow is reversed up through the core, where the coolant temperature increases as it passes through the fuel rods and the assemblies formed by them.\n\nCalculate:\n\n• Pressure loss due to the coolant acceleration in an isolated fuel channel\n\nwhen\n\n• channel inlet flow velocity is equal to 5.17 m/s\n• channel outlet flow velocity is equal to 5.69 m/s\n\nSolution:\n\nThe pressure loss due to the coolant acceleration in an isolated fuel channel is then:",
"[Figure: coolant-acceleration-example.png – solution of the coolant-acceleration pressure loss example]",
"This fact has important consequences. Due to the different relative power of fuel assemblies in a core, these fuel assemblies have different hydraulic resistance and this may induce local lateral flow of primary coolant and it must be considered in thermal-hydraulic calculations.\n\nDensest Materials on the Earth\n\nSince nucleons (protons and neutrons) make up most of the mass of ordinary atoms, the density of normal matter tends to be limited by how closely we can pack these nucleons and depends on the internal atomic structure of a substance. The densest material found on earth is the metal osmium, but its density pales by comparison to the densities of exotic astronomical objects such as white dwarf stars and neutron stars.\n\nList of densest materials:\n\n1. Osmium – 22.6 x 103 kg/m3\n2. Iridium – 22.4 x 103 kg/m3\n3. Platinum – 21.5 x 103 kg/m3\n4. Rhenium – 21.0 x 103 kg/m3\n5. Plutonium – 19.8 x 103 kg/m3\n6. Gold – 19.3 x 103 kg/m3\n7. Tungsten – 19.3 x 103 kg/m3\n8. Uranium – 18.8 x 103 kg/m3\n9. Tantalum – 16.6 x 103 kg/m3\n10. Mercury – 13.6 x 103 kg/m3\n11. Rhodium – 12.4 x 103 kg/m3\n12. Thorium – 11.7 x 103 kg/m3\n13. Lead – 11.3 x 103 kg/m3\n14. Silver – 10.5 x 103 kg/m3\n\nIt must be noted, plutonium is a man-made isotope and is created from uranium in nuclear reactors. But, In fact, scientists have found trace amounts of naturally-occurring plutonium.\n\nIf we include man made elements, the densest so far is Hassium. Hassium is a chemical element with symbol Hs and atomic number 108. It is a synthetic element (first synthesised at Hasse in Germany) and radioactive. The most stable known isotope, 269Hs, has a half-life of approximately 9.7 seconds. It has an estimated density of 40.7 x 103 kg/m3. 
The density of Hassium results from its high atomic weight and from the significant decrease in ionic radii of the elements in the lanthanide series, known as lanthanide and actinide contraction.\n\nThe density of Hassium is followed by Meitnerium (element 109, named after the physicist Lise Meitner), which has an estimated density of 37.4 x 103 kg/m3.\n\nDensity - Important property in gamma rays shielding\n\nIn short, effective shielding of gamma radiation is in most cases based on use of materials with two following material properties:\n\n• high-density of material.\n• high atomic number of material (high Z materials)\n\nHowever, low-density materials and low Z materials can be compensated with increased thickness, which is as significant as density and atomic number in shielding applications.\n\nA lead is widely used as a gamma shield. Major advantage of lead shield is in its compactness due to its higher density. On the other hand depleted uranium is much more effective due to its higher Z. Depleted uranium is used for shielding in portable gamma ray sources.\n\nIn nuclear power plants shielding of a reactor core can be provided by materials of reactor pressure vessel, reactor internals (neutron reflector). Also heavy concrete is usually used to shield both neutrons and gamma radiation.\n\nDensity of various Materials – Examples\n\nDensity of Water - Specific Volume",
"[Figure: Density-of-Water-Specific-Volume-of-Water.png – density and specific volume of water]",
"Pure water has its highest density 1000 kg/m3 at temperature 3.98oC (39.2oF). Water differs from most liquids in that it becomes less dense as it freezes. It has a maximum of density at 3.98 °C (1000 kg/m3), whereas the density of ice is 917 kg/m3. It differs by about 9% and therefore ice floats on liquid water. It must be noted, the change in density is not linear with temperature, because the volumetric thermal expansion coefficient for water is not constant over the temperature range. The density of water (1 gram per cubic centimetre) was originally used to define the gram. The density (⍴) of a substance is the reciprocal of its specific volume (ν).\n\nρ = m/V = 1/ν\n\nThe specific volume (ν) of a substance is the total volume (V) of that substance divided by the total mass (m) of that substance (volume per unit mass). It has units of cubic meter per kilogram (m3/kg).\n\nDensity of Heavy Water\nPure heavy water (D2O) has a density about 11% greater than water, but is otherwise physically and chemically similar.\n\nThis difference is caused by the fact, the deuterium nucleus is twice as heavy as hydrogen nucleus. Since about 89% of the molecular weight of water comes from the single oxygen atom rather than the two hydrogen atoms, the weight of a heavy water molecule, is not substantially different from that of a normal water molecule. The molar mass of water is M(H2O) = 18.02 and the molar mass of heavy water is M(D2O) = 20.03 (each deuterium nucleus contains one neutron in contrast to hydrogen nucleus), therefore heavy water (D2O) has a density about 11% greater (20.03/18.03 = 1.112).\n\nPure heavy water (D2O) has its highest density 1110 kg/m3 at temperature 3.98oC (39.2oF). Also heavy water differs from most liquids in that it becomes less dense as it freezes. It has a maximum of density at 3.98 °C (1110 kg/m3), whereas the density of its solid form ice is 1017 kg/m3. 
It must be noted that the change in density is not linear with temperature, because the volumetric thermal expansion coefficient of water is not constant over the temperature range.\n\nDensity of Steam",
"[Figure: steam-properties.png – enthalpy, density, specific volume and specific heat of steam as functions of temperature]",
"Water and steam are a common medium because their properties are very well known. Their properties are tabulated in so called Steam Tables. In these tables the basic and key properties, such as pressure, temperature, enthalpy, density and specific heat, are tabulated along the vapor-liquid saturation curve as a function of both temperature and pressure.\n\nThe density (⍴) of any substance is the reciprocal of its specific volume ().\n\nρ = m/V = 1/\n\nThe specific volume () of a substance is the total volume (V) of that substance divided by the total mass (m) of that substance (volume per unit mass). It has units of cubic meter per kilogram (m3/kg).\n\nDensity of Steel\nThe density of steel varies based on the alloying constituents but usually ranges between 7.5 x 103 kg/m3 and 8 x 103 kg/m3.\nDensity of Zirconium\nIn general, zirconium has very low absorption cross-section of thermal neutrons, high hardness, ductility and corrosion resistance. One of the main uses of zirconium alloys is in nuclear technology, as cladding of fuel rods in nuclear reactors, due to its very low absorption cross-section (unlike the stainless steel). The density of typical zirconium alloy is about 6.6 x 103 kg/m3.\nDensity of Uranium\nUranium is a naturally-occurring chemical element with atomic number 92 which means there are 92 protons and 92 electrons in the atomic structure. Natural uranium consists primarily of isotope 238U (99.28%), therefore the atomic mass of uranium element is close to the atomic mass of 238U isotope (238.03u). Natural uranium also consists of two other isotopes: 235U (0.71%) and 234U (0.0054%). Uranium has the highest atomic weight of the primordially occurring elements. Uranium metal has a very high density of 19.1 g/cm3, denser than lead (11.3 g/cm3), but slightly less dense than tungsten and gold (19.3 g/cm3).\n\nUranium metal is one of the densest materials found on earth:\n\n1. Osmium – 22.6 x 103 kg/m3\n2. Iridium – 22.4 x 103 kg/m3\n3. 
Platinum – 21.5 x 103 kg/m3\n4. Rhenium – 21.0 x 103 kg/m3\n5. Plutonium – 19.8 x 103 kg/m3\n6. Gold – 19.3 x 103 kg/m3\n7. Tungsten – 19.3 x 103 kg/m3\n8. Uranium – 18.8 x 103 kg/m3\n9. Tantalum – 16.6 x 103 kg/m3\n10. Mercury – 13.6 x 103 kg/m3\n11. Rhodium – 12.4 x 103 kg/m3\n12. Thorium – 11.7 x 103 kg/m3\n13. Lead – 11.3 x 103 kg/m3\n14. Silver – 10.5 x 103 kg/m3\n\nBut most of LWRs use the uranium fuel, which is in the form of uranium dioxide. Uranium dioxide is a black semiconducting solid with very low thermal conductivity. On the other hand the uranium dioxide has very high melting point and has well known behavior.\n\nUranium dioxide has significantly lower density than uranium in the metal form. Uranium dioxide has a density of 10.97 g/cm3, but this value may vary with fuel burnup, because at low burnup densification of pellets can occurs and at higher burnup swelling occurs.\n\nDensity of Nuclear Matter\n\nNuclear density is the density of the nucleus of an atom. It is the ratio of mass per unit volume inside the nucleus. Since atomic nucleus carries most of atom’s mass and atomic nucleus is very small in comparison to entire atom, the nuclear density is very high.\n\nThe nuclear density for a typical nucleus can be approximately calculated from the size of the nucleus and from its mass. Typical nuclear radii are of the order 10−14 m. Assuming spherical shape, nuclear radii can be calculated according to following formula:\n\nr = r0 . A1/3\n\nwhere r0 = 1.2 x 10-15 m = 1.2 fm\n\nFor example, natural uranium consists primarily of isotope 238U (99.28%), therefore the atomic mass of uranium element is close to the atomic mass of 238U isotope (238.03u). Its radius of this nucleus will be:\n\nr = r0 . 
A^(1/3) = 7.44 fm.\n\nAssuming it is spherical, its volume will be:\n\nV = 4πr³/3 = 1.73 × 10⁻⁴² m³.\n\nThe usual definition of nuclear density then gives:\n\nρ(nucleus) = m / V = 238 × 1.66 × 10⁻²⁷ kg / (1.73 × 10⁻⁴² m³) = 2.3 × 10¹⁷ kg/m³.\n\nThus, the density of nuclear matter is more than 2 × 10¹⁴ times greater than that of water. It is an immense density. The descriptive term nuclear density is also applied to situations where similarly high densities occur, such as within neutron stars.\n\nDensity of Neutron Star\nThe densest material found on earth is the metal osmium, but its density pales by comparison to the densities of exotic astronomical objects such as white dwarf stars and neutron stars.\n\nA neutron star is the collapsed core of a large star (usually of a red giant). Neutron stars are the smallest and densest stars known to exist, and they rotate extremely rapidly. A neutron star is basically a giant atomic nucleus about 11 km in diameter made mostly of neutrons. It is believed that under the immense pressures of a collapsing massive star going supernova, it is possible for electrons and protons to combine to form neutrons via electron capture, releasing a huge number of neutrinos.\n\nNeutron stars are so dense that one teaspoon of their material would have a mass over 5.5 × 10¹² kg. They are assumed to have densities of 3.7 × 10¹⁷ to 6 × 10¹⁷ kg/m³, which is comparable to the approximate density of an atomic nucleus of 2.3 × 10¹⁷ kg/m³.\n\nReferences:\nReactor Physics and Thermal Hydraulics:\n1. J. R. Lamarsh, Introduction to Nuclear Reactor Theory, 2nd ed., Addison-Wesley, Reading, MA (1983).\n2. J. R. Lamarsh, A. J. Baratta, Introduction to Nuclear Engineering, 3rd ed., Prentice-Hall, 2001, ISBN: 0-201-82498-1.\n3. W. M. Stacey, Nuclear Reactor Physics, John Wiley & Sons, 2001, ISBN: 0-471-39127-1.\n4. S. Glasstone, A. Sesonske, Nuclear Reactor Engineering: Reactor Systems Engineering, 4th ed., Springer, 1994, ISBN: 978-0412985317.\n5. N. E. Todreas, M. S. Kazimi, Nuclear Systems Volume I: Thermal Hydraulic Fundamentals, 2nd ed., CRC Press, 2012, ISBN: 978-0415802871.\n6. B. Zohuri, P. McDaniel, Thermodynamics in Nuclear Power Plant Systems, Springer, 2015, ISBN: 978-3-319-13419-2.\n7. M. J. Moran, H. N. Shapiro, Fundamentals of Engineering Thermodynamics, 5th ed., John Wiley & Sons, 2006, ISBN: 978-0-470-03037-0.\n8. C. Kleinstreuer, Modern Fluid Dynamics, Springer, 2010, ISBN: 978-1-4020-8670-0.\n9. U.S. Department of Energy, Thermodynamics, Heat Transfer, and Fluid Flow, DOE Fundamentals Handbook, Volumes 1, 2 and 3, June 1992."
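The nuclear-density estimate worked through above (r = r0 · A^(1/3), spherical volume, ρ = m/V) can be reproduced in a few lines:

```python
import math

r0 = 1.2e-15  # m, empirical nuclear radius constant
u = 1.66e-27  # kg, atomic mass unit as rounded in the text
A = 238       # mass number of uranium-238

r = r0 * A ** (1 / 3)         # nuclear radius
V = 4 / 3 * math.pi * r ** 3  # volume of a sphere of that radius
rho = A * u / V               # mass of the nucleus over its volume

print(f"r ≈ {r / 1e-15:.2f} fm")  # ≈ 7.44 fm
print(f"ρ ≈ {rho:.2e} kg/m³")     # ≈ 2.3e17 kg/m³
```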
] | [
null,
"https://material-properties.org/wp-content/uploads/2019/05/thermal-expansion.png",
null,
"https://material-properties.org/wp-content/uploads/2019/05/Continuity-Equation-Flow-Rate-min-300x300.png",
null,
"https://material-properties.org/wp-content/uploads/2019/05/coolant-acceleration-example.png",
null,
"https://material-properties.org/wp-content/uploads/2019/05/Density-of-Water-Specific-Volume-of-Water.png",
null,
"https://material-properties.org/wp-content/uploads/2019/05/steam-properties-enthaply-density-volume-specific-heat-temperature-1024x618.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89158136,"math_prob":0.9730815,"size":4165,"snap":"2022-05-2022-21","text_gpt3_token_len":1125,"char_repetition_ratio":0.1530882,"word_repetition_ratio":0.018666666,"special_character_ratio":0.279952,"punctuation_ratio":0.10982659,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97843516,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,3,null,3,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-27T03:25:03Z\",\"WARC-Record-ID\":\"<urn:uuid:1c4a3130-5dd0-4f6b-9566-d4dbed1a6889>\",\"Content-Length\":\"116065\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7e358442-a96a-4715-a20b-44a5e0ec5b99>\",\"WARC-Concurrent-To\":\"<urn:uuid:7c31cbbf-751b-43d9-a105-836bc1304507>\",\"WARC-IP-Address\":\"50.16.49.81\",\"WARC-Target-URI\":\"https://material-properties.org/what-is-density-physics-definition/\",\"WARC-Payload-Digest\":\"sha1:XOG6WYN2FBG7D5UXZJ4CTOB2K32BRTO3\",\"WARC-Block-Digest\":\"sha1:3KXFE763SIS3VM7SY7WK3Z76SLY4OFRB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305052.56_warc_CC-MAIN-20220127012750-20220127042750-00359.warc.gz\"}"} |
https://cs50.stackexchange.com/questions/38118/pset-5-code-cant-handle-basic-words-and-substrings-properly/38119 | [
"# Pset 5: Code can't handle basic words and substrings properly\n\nWhen I run my code, there is only one word in dictionary, so I am assuming that there is a problem with my load function, but I can't find where the problem is. Thank you!\n\n:) dictionary.c, dictionary.h, and Makefile exist\n:) speller compiles\n:( handles most basic words properly - (expected \"MISSPELLED WOR...\", not \"brown\\ndog\\nfo...\")\n:) handles min length (1-char) words\n:) handles max length (45-char) words\n:) handles words with apostrophes properly\n:) spell-checking is case-insensitive\n:( handles substrings properly - (expected \"MISSPELLED WOR...\", not \"cat\\ncaterpill...\")\n:| program is free of memory errors - (can't check until a frown turns upside down)\n\n``````bool check(const char *word)\n{\n// TODO\nint hashed = hash(word); //find index of word\n\nif (table[hashed] == NULL) //if not in dictionary\n{\nreturn false;\n}\nelse\n{\nnode *curse = table[hashed];\n\nwhile (curse != NULL)\n{\nint i = strcasecmp(curse->word, word);\n\nif (i == 0)\n{\nreturn true;\n}\nelse\n{\ncurse = curse->next;\n}\n}\n}\n\nreturn false;\n}\n\n// Hashes word to a number\nunsigned int hash(const char* word)\n{\nunsigned int val = 0;\nfor(int i = 0; i < strlen(word); i++)\n{\nval += (int) word[i];\n}\n\nif(val > N)\n{\nval = val%N;\n}\nreturn val;\n}\n\n// Loads dictionary into memory, returning true if successful else false\n{\n// Initialize hash table\nfor (int i = 0; i < N; i++)\n{\ntable[i] = NULL;\n}\n\n// TODO\nFILE *diction = fopen(dictionary, \"r\");\nif (diction == NULL)\n{\nreturn false;\n}\n\n//loop file while scanning words\nchar word[LENGTH + 1];\n\nwhile (fscanf(diction, \"%s\", word) != EOF)\n{\n//make a new node and check\nnode *new_node = malloc(sizeof(node));\nif (new_node == NULL)\n{\nfree(table);\nfree(new_node);\nreturn false;\n}\n\n//copy word into node with strcpy\nstrcpy(new_node->word, word);\n\n//hashing new_node->word\nint h = hash(word);\n\nif (head == NULL) // if 
bucket is empty, insert the first node\n{\ntable[h] = new_node;\n}\nelse\n{\nnew_node->next = table[h]; //if not- make new links\ntable[h] = new_node;\n}\n}\n\n//close and return\nfclose(diction);\nreturn true;\n}\n\n// Returns number of words in dictionary if loaded else 0 if not yet loaded\nunsigned int size(void)\n{\n// TODO\nint count_words = 0;\n\nfor (int i = 0; i < N; i++)\n{\nif (table[i] != NULL)\n{\ncount_words++;\n}\n}\n\nreturn count_words;\n}\n\n// Unloads dictionary from memory, returning true if successful else false\n{\n// TODO\nfor (int i = 0; i < 27; i++)\n{\nnode *curse = table[i];\n\nwhile (curse != NULL)\n{\nnode *remover = curse;\ncurse = curse->next;\nfree(remover);\n}\n\nif (curse == NULL)\n{\nreturn true;\n}\n}\n\nreturn true;\n}\n\n``````"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.57451534,"math_prob":0.95547575,"size":2622,"snap":"2021-31-2021-39","text_gpt3_token_len":751,"char_repetition_ratio":0.13063407,"word_repetition_ratio":0.065789476,"special_character_ratio":0.33371472,"punctuation_ratio":0.18367347,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9835776,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T08:06:33Z\",\"WARC-Record-ID\":\"<urn:uuid:281dd51f-c659-4a7a-95d0-9356b70ec052>\",\"Content-Length\":\"136777\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:250f5e0c-de6b-48d7-b7ad-7552a949de25>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b41e440-1c3d-4111-94dc-784c0c80cba2>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://cs50.stackexchange.com/questions/38118/pset-5-code-cant-handle-basic-words-and-substrings-properly/38119\",\"WARC-Payload-Digest\":\"sha1:HXREFKXCH2PCPSHD64UPD2YXUK5N24AI\",\"WARC-Block-Digest\":\"sha1:Y47LRYBCHTYSUZ5A5VFFXZSS6T224KHG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056348.59_warc_CC-MAIN-20210918062845-20210918092845-00169.warc.gz\"}"} |
https://tex.stackexchange.com/questions/95495/left-column-with-a-figure-to-the-right-beamer | [
"# Left column with a figure to the right (Beamer)\n\nI can't understand how to make a column to the left with a figure to the right:\n\nHere is one example of what I did:\n\n \\documentclass{beamer}\n...\n\\begin{frame}\\frametitle{Title}\n\\begin{columns}\n\\begin{column}{\\textwidth}\nComposition:\n\\begin{itemize}\n\\item first item\n\\end{itemize}\n\\end{column}\n\\end{columns}\n\n\\begin{figure}\n\\includegraphics[scale=0.25]{fig}\n\\caption{}\n\\end{figure}\n\\end{frame}\n\n\nThe figure compares in the middle, how flush it to the right?\n\n## 1 Answer\n\nThis is a MWE that puts a box in the right column. I have kept the \\includegraphics where it should be. In your code you started the columns environment but didi not define a second column where you could put your figure.\n\n\\documentclass{beamer}\n\\usepackage{graphicx}\n\\setlength{\\unitlength}{1cm}\n\\begin{document}\n\\begin{frame}\\frametitle{Title}\n\n\\begin{columns}\n\\begin{column}{.49\\textwidth}\nComposition:\n\\begin{itemize}\n\\item first item\n\\end{itemize}\n\\end{column}\n\n\\begin{column}{.49\\textwidth}\n\\begin{picture}(2,2)\n\\put(0,0){\\dashbox{0.2}(2,2)}\n\\end{picture}\n%\\includegraphics[\\textwidth]{fig}\n\\end{column}\n\\end{columns}\n\n\\end{frame}\n\\end{document}\n\n\nYou can scale the columns widths to suit your needs."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6165583,"math_prob":0.9315938,"size":450,"snap":"2019-35-2019-39","text_gpt3_token_len":134,"char_repetition_ratio":0.16816144,"word_repetition_ratio":0.0,"special_character_ratio":0.25777778,"punctuation_ratio":0.11111111,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9891868,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-25T14:03:37Z\",\"WARC-Record-ID\":\"<urn:uuid:efab4639-43e4-44b8-9000-9f3aebc0daf8>\",\"Content-Length\":\"132967\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a66700d-4abd-44ea-99d9-50f3450e32ca>\",\"WARC-Concurrent-To\":\"<urn:uuid:aad7c671-d2ab-4e12-87d0-0f5e8ede773b>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/95495/left-column-with-a-figure-to-the-right-beamer\",\"WARC-Payload-Digest\":\"sha1:XAUDE6ROKGN6DBAP5FYPM6XCOJEUC2PM\",\"WARC-Block-Digest\":\"sha1:PMPZ2YWXQTGO6HKUPDB2PVDOTHOZ3XRD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027330233.1_warc_CC-MAIN-20190825130849-20190825152849-00319.warc.gz\"}"} |
https://offshoremechanics.asmedigitalcollection.asme.org/dynamicsystems/article-abstract/121/2/302/421228/Simultaneous-Stabilization-and-Pole-Assignment-by?redirectedFrom=fulltext | [
"A method of simultaneous pole assignment of a finite number of plant models is studied. Extending the idea of multi-structured multirate sampled-data controllers proposed in our previous work, this paper proposes a new method where a continuous-time gain feedback controller instead of a multirate sampled-data controller is used as the “internal structure.” By the proposed method, the plants precompensated by the internal structure can generally be made to satisfy an existing sufficient condition for simultaneous pole-assignability, and hence the class of the plant models for which simultaneous pole assignment can be attained becomes wider than that by the previous results. In particular, simultaneous pole assignment becomes possible for any finite number of plant models, provided that each model is controllable and observable.\n\n1.\nAraki\nM.\n, and\nHagiwara\nT.\n,\n1986\n, “\nPole Assignment by Multirate Sampled-Data Output Feedback\n,”\nInt. J. Control\n, Vol.\n44–6\n, pp.\n1661\n1673\n.\n2.\nAraki, M., Hagiwara, T., and Soma, H., 1992, “Application of Multilevel Multirate Sampled-Data Controllers to Simultaneous Pole Assignment Problem,” Proceedings of the 31st Conference on Decision and Control, pp. 1762–1767.\n3.\nChammas\nA. B.\n, and\nLeondes\nC. T.\n,\n1979\n, “\nPole Assignment by Piecewise Constant Output Feedback\n,”\nInt. J. Control\n, Vol.\n29–1\n, pp.\n31\n38\n.\n4.\nDavison\nE. J.\n, and\nWang\nS. H.\n,\n1973\n, “\nProperties of Linear Time-Invariant Multivariable Systems Subject to Arbitrary Output and State Feedback\n,”\nIEEE Trans. Automatic Control\n, Vol.\nAC-18\n, pp.\n24\n32\n.\n5.\nHagiwara\nT.\n, and\nAraki\nM.\n,\n1986\n, “\nOn the Necessary Condition for Discrete-Time Pole-Assignability by Piecewise Constant Output Feedback\n,”\nInt. J. 
Control\n, Vol.\n43–6\n, pp.\n1905\n1909\n.\n6.\nHagiwara\nT.\n,\nAraki\nM.\n, and\nSoma\nH.\n,\n1996\n, “\nSimultaneous Pole Assignment by Multi-Structured Multirate Sampled-Data Controllers—Orthogonality Consideration\n,”\nInt. J. Robust and Nonlinear Control\n, Vol.\n6–6\n, pp.\n571\n584\n.\n7.\nKabamba\nP. T.\n,\n1987\n, “\nControl of Linear Systems Using Generalized Sampled-Data Hold Functions\n,”\nIEEE Trans. Automatic Control\n, Vol.\nAC-32\n, pp.\n772\n783\n.\n8.\nKabamba\nP. T.\n, and\nYang\nC.\n,\n1991\n, “\nSimultaneous Controller Design for Linear Time-Invariant Systems\n,”\nIEEE Trans. Automatic Control\n, Vol.\nAC-36\n, pp.\n106\n111\n.\n9.\nKhargonekar\nP. P.\n,\nPoolla\nK.\n, and\nTannenbaum\nA.\n,\n1985\n, “\nRobust Control of Linear Time-Invariant Plants Using Periodic Compensation\n,”\nIEEE Trans. Automatic Control\n, Vol.\nAC-30\n, pp.\n1088\n1096\n.\n10.\nVidyasagar, M., 1985, Control System Synthesis, MIT Press.\nThis content is only available via PDF."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7804355,"math_prob":0.8044157,"size":4057,"snap":"2023-14-2023-23","text_gpt3_token_len":977,"char_repetition_ratio":0.116210215,"word_repetition_ratio":0.20289855,"special_character_ratio":0.21690905,"punctuation_ratio":0.18417463,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95359254,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-04T10:05:55Z\",\"WARC-Record-ID\":\"<urn:uuid:dea989ab-169b-4ac7-8155-bd13d00baae1>\",\"Content-Length\":\"122166\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:56d6f251-f001-44b3-bee9-9aa20fcdcfd7>\",\"WARC-Concurrent-To\":\"<urn:uuid:31ef7e0e-e22a-4a98-bf79-40b5a102de1c>\",\"WARC-IP-Address\":\"52.179.114.94\",\"WARC-Target-URI\":\"https://offshoremechanics.asmedigitalcollection.asme.org/dynamicsystems/article-abstract/121/2/302/421228/Simultaneous-Stabilization-and-Pole-Assignment-by?redirectedFrom=fulltext\",\"WARC-Payload-Digest\":\"sha1:VR52ZNQSZLZYX76W4BZDW3DUCXJTSBVA\",\"WARC-Block-Digest\":\"sha1:VKGTKYMMT75YPJF2G4TXFALR7VVX4CXX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649741.26_warc_CC-MAIN-20230604093242-20230604123242-00197.warc.gz\"}"} |
https://lists.boost.org/boost-commit/2007/07/0765.php | [
"",
null,
"# Boost-Commit :\n\nFrom: asutton_at_[hidden]\nDate: 2007-07-06 14:09:54\n\nAuthor: asutton\nDate: 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\nNew Revision: 7377\nURL: http://svn.boost.org/trac/boost/changeset/7377\n\nLog:\n- Removed the older geodesic header\n- Moved all distance computations into distance.hpp\n- Implemented geodesic_distance(), mean_geodesic_distance(),\ndiameter(), center(), and periphery() - basically anything\nI could think to build related to distance measures\n- Did some work on the reference docs\n- Experimented with some Boost.Parameter prototypes for\nconnectivity and distributions\n\nsandbox/SOC/2007/graphs/boost/graph/distance.hpp\nsandbox/SOC/2007/graphs/boost/graph/named_parameters.hpp\nsandbox/SOC/2007/graphs/libs/graph/test/components.cpp\nsandbox/SOC/2007/graphs/libs/graph/test/distance.cpp\nRemoved:\nsandbox/SOC/2007/graphs/boost/graph/geodesic.hpp\nText files modified:\nsandbox/SOC/2007/graphs/libs/graph/doc/quickbook/graph.qbk | 6\nsandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/connected_components.qbk | 8\nsandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/connectivity.qbk | 109 ++++++++++++++++++++++----\nsandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/directed_graph.qbk | 73 +++++++++++------\nsandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/distributions.qbk | 166 +++++++++++++++++++++++++--------------\nsandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/reference.qbk | 3\nsandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/undirected_graph.qbk | 2\nsandbox/SOC/2007/graphs/libs/graph/examples/movies/stats.cpp | 3\nsandbox/SOC/2007/graphs/libs/graph/test/Jamfile.v2 | 15 +++\n9 files changed, 270 insertions(+), 115 deletions(-)\n\n==============================================================================\n--- (empty file)\n+++ sandbox/SOC/2007/graphs/boost/graph/distance.hpp 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\n@@ -0,0 +1,208 @@\n+// (C) Copyright Andrew 
Sutton 2007\n+//\n+// Use, modification and distribution are subject to the\n+// Boost Software License, Version 1.0 (See accompanying file\n+\n+#ifndef BOOST_GRAPH_DISTANCE_HPP\n+#define BOOST_GRAPH_DISTANCE_HPP\n+\n+// boost includes\n+#include <boost/graph/named_parameters.hpp>\n+#include <boost/graph/properties.hpp>\n+\n+namespace boost\n+{\n+ namespace detail\n+ {\n+ template <typename Graph, typename DistanceMap>\n+ inline typename property_traits<DistanceMap>::value_type\n+ sum_distances(const Graph& g, DistanceMap dist)\n+ {\n+ size_t ret = 0;\n+ typename graph_traits<Graph>::vertex_iterator i, end;\n+ for(tie(i, end) = vertices(g); i != end; ++i) {\n+ ret += dist[*i];\n+ }\n+ return ret;\n+ }\n+ }\n+\n+\n+ // These measures operate on the first vertex given. This is to say that\n+ // closeness(g, v, dist) will return the closeness of the vertex v with\n+ // respect to all other vertices in the graph.\n+ //\n+ // Note that the target vertex in each algorithm is essentially a dummy\n+ // parameter (for now). If the distance map isn't supplied, then we\n+ // may shell to default computations.\n+ //\n+ // Vertex distance measures:\n+ // - geodesic_distance\n+ // - mean_geodesic_distance\n+ // - closeness\n+ // - eccentricity\n+ //\n+ // Graph distance measures:\n+ // - diameter\n+ //\n+ // Note that two versions of each algorithm exist. 
One takes a precomputed\n+ // distance map or matrix while the other computes it on the fly by trying\n+ // to guess an algorithm to use.\n+\n+\n+ template <typename Graph, typename DistanceMap>\n+ inline typename property_traits<DistanceMap>::value_type\n+ geodesic_distance(const Graph& g,\n+ typename graph_traits<Graph>::vertex_descriptor v,\n+ DistanceMap dist)\n+ {\n+ return dist[get(vertex_index, g, v)];\n+ }\n+\n+ template <typename Graph, typename DistanceMap>\n+ inline double\n+ mean_geodesic_distance(const Graph& g,\n+ DistanceMap dist)\n+ {\n+ return (double)detail::sum_distances(g, dist) / (double)num_vertices(g);\n+ }\n+\n+ template <typename Graph, typename DistanceMap>\n+ inline double\n+ closeness(const Graph& g,\n+ DistanceMap dist)\n+ {\n+ return 1.0 / (double)detail::sum_distances(g, dist);\n+ }\n+\n+ // Can we abstract the computation of max on distances to max of\n+ // something else that we can put into a distance map? For example,\n+ // this is the max of geodesics... What if we wanted some other\n+ // operator?\n+\n+ template <typename Graph, typename DistanceMap>\n+ inline typename property_traits<DistanceMap>::value_type\n+ eccentricity(const Graph& g,\n+ DistanceMap dist)\n+ {\n+ typename property_traits<DistanceMap>::value_type ret = 0;\n+ typename graph_traits<Graph>::vertex_iterator i, end;\n+ for(tie(i, end) = vertices(g); i != end; ++i) {\n+ ret = std::max(ret, dist[*i]);\n+ }\n+ return ret;\n+ }\n+\n+ // The computation of eccentricities, radius and diameter are all\n+ // closely related. 
Basically, these computations can be run at\n+ // the same time - compute eccentricities of all vertices, and\n+ // the radius and diameter of the graph.\n+\n+ template <typename Graph, typename DistanceMatrix, typename EccentricityMap>\n+ void\n+ eccentricities(const Graph& g, DistanceMatrix& dist, EccentricityMap ecc)\n+ {\n+ typename Graph::vertex_iterator i, j, end;\n+ for(tie(i, end) = vertices(g); i != end; ++i) {\n+ // compute the max eccentricity \"in-place\"\n+ typename property_traits<EccentricityMap>::value_type& ei = ecc[*i];\n+ for(j = vertices(g).first; j != end; ++j) {\n+ ei = std::max(ei, dist[*i][*j]);\n+ }\n+ }\n+ }\n+\n+ template <typename Graph, typename EccentricityMap>\n+ inline typename property_traits<EccentricityMap>::value_type\n+ radius(const Graph& g, EccentricityMap ecc)\n+ {\n+ typedef typename property_traits<EccentricityMap>::value_type eccentricity;\n+\n+ eccentricity ret = ecc[*vertices(g).first];\n+ typename Graph::vertex_iterator i, end;\n+ for(tie(i, end) = vertices(g); i != end; ++i) {\n+ ret = std::min(ret, ecc[*i]);\n+ }\n+ return ret;\n+ }\n+\n+ template <typename Graph, typename EccentricityMap>\n+ inline typename property_traits<EccentricityMap>::value_type\n+ diameter(const Graph& g, EccentricityMap ecc)\n+ {\n+ typedef typename property_traits<EccentricityMap>::value_type eccentricity;\n+\n+ eccentricity ret = ecc[*vertices(g).first];\n+ typename Graph::vertex_iterator i, end;\n+ for(tie(i, end) = vertices(g); i != end; ++i) {\n+ ret = std::max(ret, ecc[*i]);\n+ }\n+ return ret;\n+ }\n+\n+ // The following functions are pretty much gimmes once we've computed\n+ // some of the other properties (like eccentricities, radius, and\n+ // diameter).\n+\n+ namespace detail\n+ {\n+ template <typename Graph, typename EccentricityMap, typename Inserter>\n+ inline void\n+ EccentricityMap ecc,\n+ Inserter ins,\n+ typename property_traits<EccentricityMap>::value_type level)\n+ {\n+ typename Graph::vertex_iterator i, end;\n+ for(tie(i, 
end) = vertices(g); i != end; ++i) {\n+ if(ecc[*i] == level) {\n+ *ins++ = *i;\n+ }\n+ }\n+ }\n+ }\n+\n+ template <typename Graph, typename EccentricityMap, typename Inserter>\n+ inline void\n+ center(const Graph& g,\n+ typename property_traits<EccentricityMap>::value_type r,\n+ EccentricityMap ecc,\n+ Inserter ins)\n+ {\n+ return detail::radial_grouping(g, ecc, ins, r);\n+ }\n+\n+ template <typename Graph, typename EccentricityMap, typename Inserter>\n+ inline void\n+ center(const Graph& g,\n+ EccentricityMap ecc,\n+ Inserter ins)\n+ {\n+ return detail::radial_grouping(g, ecc, ins, radius(g, ecc));\n+ }\n+\n+\n+ template <typename Graph, typename EccentricityMap, typename Inserter>\n+ inline void\n+ periphery(const Graph& g,\n+ typename property_traits<EccentricityMap>::value_type d,\n+ EccentricityMap ecc,\n+ Inserter ins)\n+ {\n+ return detail::radial_grouping(g, ecc, ins, d);\n+ }\n+\n+ template <typename Graph, typename EccentricityMap, typename Inserter>\n+ inline void\n+ periphery(const Graph& g,\n+ EccentricityMap ecc,\n+ Inserter ins)\n+ {\n+ return detail::radial_grouping(g, ecc, ins, diameter(g, ecc));\n+ }\n+}\n+\n+#endif\n\nDeleted: sandbox/SOC/2007/graphs/boost/graph/geodesic.hpp\n==============================================================================\n--- sandbox/SOC/2007/graphs/boost/graph/geodesic.hpp 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\n+++ (empty file)\n@@ -1,71 +0,0 @@\n-\n-#ifndef DIAMETER_HXX\n-#define DIAMETER_HXX\n-\n-// std includes\n-#include <vector>\n-#include <limits>\n-\n-// boost includes\n-#include <boost/graph/johnson_all_pairs_shortest.hpp>\n-\n-/**\n- * Compute the diameter of the graoh - which is essentially the longest\n- * of all shortest paths (or the longest geodesic). At the moment, this\n- * algorithm uses johnson's variant which runs in O(n*m*log n). Note that\n- * when the graph is dense (i.e., m -> n^2), this takes O(n^3 * log n)\n- * which is actually worse than floyd warshall. However, this should run\n- * fine on power-law graphs since they're relatively sparse. Normally
Normally\n- * distributed graphs might not be so lucky.\n- *\n- * There's some strange variations of this algorithm. For example,\n- * if the graph is unconnected, then it really doesn't have an actual\n- * diameter. If we igore infinite lenghts, then we are computing the\n- * diameter of the largest connected component - which may actually\n- * by acceptable.\n- */\n-template <\n- typename Graph,\n- typename Matrix\n- >\n-int\n-diameter(Graph &g, Matrix &d)\n-{\n- using namespace std;\n- using namespace boost;\n-\n- // various graph types\n- typedef Graph graph;\n- typedef typename graph_traits<graph>::vertex_descriptor vertex;\n-\n- // matrix types\n-\n- // for convenience, get the number of vertices\n- size_t n = num_vertices(g);\n-\n- // find all pairs of shortest paths\n- int ret = 0;\n- bool success = johnson_all_pairs_shortest_paths(g, d);\n- if(success) {\n- // compute the maximum distance of elements in graph\n- for(size_t i = 0; i < n; ++i) {\n- for(size_t j = 0; j < n; ++j) {\n- int dist = d[i][j];\n-\n- // don't compute distances for disconnected\n- // vertices - this is kind of a weird point\n- // of logic\n- if(dist != numeric_limits<int>::max()) {\n- if(dist > ret) {\n- ret = dist;\n- }\n- }\n- }\n- }\n- }\n-\n- return ret;\n-}\n-\n-#endif\n-\n\n==============================================================================\n--- (empty file)\n+++ sandbox/SOC/2007/graphs/boost/graph/named_parameters.hpp 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\n@@ -0,0 +1,36 @@\n+// (C) Copyright Andrew Sutton 2007\n+//\n+// Use, modification and distribution are subject to the\n+// Boost Software License, Version 1.0 (See accompanying file\n+\n+#ifndef BOOST_GRAPH_NAMED_PARAMETERS_HPP\n+#define BOOST_GRAPH_NAMED_PARAMETERS_HPP\n+\n+// boost includes\n+#include <boost/parameter.hpp>\n+\n+namespace boost\n+{\n+ BOOST_PARAMETER_NAME(graph)\n+\n+ // data and data sets\n+ BOOST_PARAMETER_NAME(distribution)\n+ BOOST_PARAMETER_NAME(in_distribution)\n+ 
BOOST_PARAMETER_NAME(out_distribution)\n+ BOOST_PARAMETER_NAME(histogram)\n+ BOOST_PARAMETER_NAME(in_histogram)\n+ BOOST_PARAMETER_NAME(out_histogram)\n+ BOOST_PARAMETER_NAME(components)\n+ BOOST_PARAMETER_NAME(is_connected)\n+\n+ // various map-type parameters\n+ BOOST_PARAMETER_NAME(distance_map)\n+ BOOST_PARAMETER_NAME(component_map)\n+ BOOST_PARAMETER_NAME(color_map)\n+ BOOST_PARAMETER_NAME(vertex_index_map)\n+\n+ struct not_given {};\n+}\n+\n+#endif\n\nModified: sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/graph.qbk\n==============================================================================\n--- sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/graph.qbk (original)\n+++ sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/graph.qbk 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\n@@ -80,6 +80,7 @@\n\n[/ Boost Reference Documentation]\n@@ -95,8 +96,9 @@\n\n[/ Contents ]\n[include introduction.qbk]\n\nModified: sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/connected_components.qbk\n==============================================================================\n--- sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/connected_components.qbk (original)\n+++ sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/connected_components.qbk 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\n@@ -6,10 +6,10 @@\n/]\n\n[section Connected Components]\n- template <class Graph, class ComponentMap, class P, class T, class R>\n- typename property_traits<ComponentMap>::value_type\n- connected_components(const Graph &g, ComponentMap c,\n- const bgl_named_params<P,T,R>& params = ``/defaults/``);\n+ template <class Graph, class ComponentMap, class P, class T, class R>\n+ typename property_traits<ComponentMap>::value_type\n+ connected_components(const Graph &g, ComponentMap c,\n+ const bgl_named_params<P,T,R>& params = ``/defaults/``);\n\nThe connected_components() functions compute the connected components of an undirected\n\nModified: 
sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/connectivity.qbk\n==============================================================================\n--- sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/connectivity.qbk (original)\n+++ sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/connectivity.qbk 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\n@@ -6,23 +6,98 @@\n/]\n\n[section Connectivity Measures]\n+ size_t connectivity(\n+ _graph,\n+ _component_map,\n+ _color_map = not_given(),\n+ _vertex_index_map = not_given(),\n+ _components = not_given())\n+\n+The `connectivity()` algorithm is essentially a wrapper around the\n+[boost_connected_components] algorithm, but provides additional (and optional)\n+functionality for working with connected components. The parameters, with\n+the exception of `_components`, have the same purpose and requirements as\n+documented in [boost_connected_components].\n+\n+If specified, the `_components` argument is populated with the vertices that\n+appear in each component. This is to say, for example, that all vertices in the\n+`_component_map` with component id 0 will be placed into the vertex set\n+at index 0 of the `_components` argument.\n+\n+This function returns the number of connected components in the graph. Note\n+that the graph is connected if and only if this function returns 1.\n+\n+[h5 Where Defined]\n+`boost/graph/connectivity.hpp`\n+\n+[h5 Parameters]\n+[table\n+ [[Type] [Parameter] [Description]]\n+ [\n+ [required, in] [`_graph`]\n+ [\n+ The graph object for which the distribution will be computed. If\n+ the `_distribution` or `_in_distribution` arguments are supplied\n+ when calling this function then `_graph` must be a model of\n+ [BoostBidirectionalGraph]. 
If only `_out_distribution` is supplied,\n+ then `_graph` must be a model of [BoostIncidenceGraph].\n+ ]\n+ ]\n+ [\n+ [optional, out]\n+ [\n+ `_distribution`\n+\n+ `_out_distribution`\n+\n+ `_in_distribution`\n+ ]\n+ [\n+ The distribution parameters maps instances of vertex degree to the\n+ number of observed vertices in the graph with that degree.\n+\n+ These parameters must model both the [SgiSequence] and\n+ [SgiRandomAccessContainer] concepts (e.g., `std::vector`). The index type of the\n+ distribution must be the same as `degree_size_type`. The `value_type` must\n+ be integral (preferably unsigned).\n+\n+ If not supplied, these parameters assume the default value of `not_given`,\n+ implying that no computation is performed.\n+ ]\n+ ]\n+ [\n+ [optional, out]\n+ [\n+ `_histogram`\n+\n+ `_out_histogram`\n+\n+ `_in_histogram`\n+ ]\n+ [\n+ The distribution parameters maps instances of vertex degree to the\n+ number of observed vertices in the graph with that degree.\n+\n+ The histogram output parameter must be a model of both [SgiSequence]\n+ and [SgiRandomAccessContainer] (e.g., `std::vector`). The index type of the\n+ distribution must be the same as `degree_size_type`. Additionally `value_type`\n+ must be a model of the [SgiBackInsertionSequence] (e.g., `std::vector`).\n+\n+ If not supplied, these parameters assume the default value of `not_given`,\n+ implying that no computation is performed.\n+ ]\n+ ]\n+]\n+\n+[h5 Return Value]\n+This function returns the number of connected components.\n\n-Reference for connectivity measures. Apparently, these are just going to be\n-simple wrappers around connected_components or strong_components to provide\n-some extra information. These might include:\n-\n-* `is_connected()`, `is_strongly_connected()`\n-* `largest_connected_component()`, `largest_strong_component()`\n-\n-We might extend these for biconnected components also. 
Essentially, these\n-functions take the component map computed by the connectivity stuff as\n-an input and produce results. If the connectivity map isn't provided,\n-we could compute it on the fly.\n-\n-[Examples]\n-\n-``\n- connected_comp\n-``\n+[h5 Complexity]\n+\n+[h5 Notes]\n+\n+[h5 Examples]\n+\n+[h5 Rationale]\n\n[endsect]\n\\ No newline at end of file\n\nModified: sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/directed_graph.qbk\n==============================================================================\n--- sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/directed_graph.qbk (original)\n+++ sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/directed_graph.qbk 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\n@@ -7,7 +7,12 @@\n\n[section Directed Graph]\nThis section provides detailed information about the `directed_graph` class,\n-its associated types, member functions and non-member interface.\n+its associated types, member functions and non-member interface. A directed\n+graph is one in which edges have distinct direction. Edges flow from a /source/\n+vertex to a /target/ and are generally only traversed through the outgoing\n+edges of a vertex. However, incoming edges are also accessible. The class\n+provides a general purpose implementation of directed graphs and can be\n+used with algorithms in the Boost Graph Library.\n\n[h4 Notation]\nThe following notation is used in this documentation. The following type names\n@@ -54,23 +59,24 @@\n[[`v`] [The value of a property (usually a template parameter)]]\n]\n\n-[h4 Vertex and Edge Storage]\n-The `directed` graph class has four distinct storage compontents: one for each\n-of vertices, edges, out-edges per vertex, and in-edges per vertex. Each of these\n-storage components uses a `std::list`. 
This is important to remember when\n-considering the performance of oeprations on these graphs.\n-\n-Note that mutating (modifying) edge operations on a graph must operate on three\n-different lists. For example, adding an edge to the graph will insert the edge\n-or descriptors into the edge list, the out-edge list of the source vertex, and\n-the in-edge list of the target vertex. These lists are accessible via the\n-`edges(g)`, `out_edges(v,g)`, and `in_edges(v,g)` respectively.\n-\n[h4 Descriptor and Iterator Stability]\nWith the `directed_graph` class, descriptors and iterators remain stable after\n-most operations. This is to say that removing a vertex or edge will not invalidate\n-descriptors and iterators to other vertices or edges except for the edge or\n-vertex that was actually removed.\n+all operations, except for descriptors and iterators referencing the edges or\n+vertices being removed. Removing a vertex or edge will not invalidate descriptors\n+or iterators referencing other vertices or edges.\n+\n+For example, consider the following code:\n+\n+ directed_graph<> g;\n+ directed_graph<>::vertex_descriptor u = add_vertex(g);\n+ directed_graph<>::vertex_descriptor v = add_vertex(g);\n+ directed_graph<>::vertex_descriptor w = add_vertex(g);\n+ remove_vertex(u, g);\n+ add_edge(v, w, g);\n+\n+After running this program, the descriptor `u` will be invalid but `v` and `w` will\n+still be valid so the call to `add_edge(v,w,g)` is also valid. Note that this\n+property does not hold for all graph types.\n\n[h4 Vertex Indexing and Stability]\nThe `directed_graph` class provides a built-in internal properties for vertex\n@@ -81,6 +87,10 @@\nthe only operation that invalidates vertex indices, but the vertices will need\nto be renumbered using the `renumber_vertex_indices()` function.\n\n+The `remove_vertex_and_renumber_indices(vi,g)` function can be used to automatically\n+renumber indices after removing the vertex referenced by the given iterator. 
Because\n+this function runs in linear time, it should not be used for repeated removals.\n+\n[h4 Template Parameters]\nThere are three parameters to the `directed_graph` class.\n[table\n@@ -103,14 +113,15 @@\n]\n\n[h5 Model Of]\n-VertexAndEdgeListGraph, MutablePropertyGraph, CopyConstructible, Assignable, and Serializable.\n+[BoostMutableGraph], and [BoostPropertyGraph].\n\n[h5 Where Defined]\n`boost/graph/directed_graph.hpp`\n\n[h4 Associated Types]\nThere are a number of useful types associated with the `directed_graph` class.\n-Most of these are accessed through `graph_traits` or other template classes.\n+Most of these are accessed through [boost_graph_traits] or other template classes.\nFor convenience these types have been grouped by purpose.\n\n[h5 Descriptor Types]\n@@ -119,17 +130,23 @@\n[\n[`graph_traits<directed_graph>::vertex_descriptor`]\n[\n- The type for the vertex descriptors associated with the graph.\n+ The type for the vertex descriptors associated with the graph. The `vertex_descriptor`\n+ models the [BoostDescriptor] and [NoConcept Hashable] concepts.\n]\n]\n[\n[`graph_traits<directed_graph>::edge_descriptor`]\n[\n- The type for the edge descriptors associated with the graph.\n+ The type for the edge descriptors associated with the graph. The `edge_descriptor`\n+ models the [BoostDescriptor] and [NoConcept Hashable] concepts.\n]\n]\n]\n\n+Note that edge and vertex descriptors for the `unsigned_graph` can be used as keys for both\n+[SgiSortedAssociativeContainer]s and [SgiHashedAssociativeContainer]s such as `std::map` and\n+`std::tr1::unordered_map` respectively.\n+\n[h5 Iterator Types]\n[table\n[[Type] [Description]]\n@@ -137,35 +154,35 @@\n[`graph_traits<directed_graph>::vertex_iterator`]\n[\nThe type for iterators returned by `vertices()`. 
Verex iterators are\n- models of the `BidirectionalIterator` concept.\n+ models of the [SgiBidirectionalIterator] concept.\n]\n]\n[\n[`graph_traits<directed_graph>::edge_iterator`]\n[\nThe type for iterators returned by `edges()`. Edge iterators are\n- models of the `BidirectionalIterator` concept.\n+ models of the [SgiBidirectionalIterator] concept.\n]\n]\n[\n[`graph_traits<directed_graph>::out_edge_iterator`]\n[\nThe type for iterators returned by `out_edges()`. Out-edge iterators\n- are models of the `BidirectionalIterator` concept.\n+ are models of the [SgiBidirectionalIterator] concept.\n]\n]\n[\n[`graph_traits<directed_graph>::in_edge_iterator`]\n[\nThe type for iterators returned by `in_edges()`. In-edge iterators\n- are models of the `BidirectionalIterator` concept.\n+ are models of the [SgiBidirectionalIterator] concept.\n]\n]\n[\n[\n- iterators are models of the `BidirectionalIterator` concept.\n+ iterators are models of the [SgiBidirectionalIterator] concept.\n]\n]\n]\n@@ -753,4 +770,10 @@\n]\n]\n\n+[h4 Rationale]\n+Unlike most graph classes in Boost.Graph, the `directed_graph` does not model the\n+[BoostMutablePropertyGraph] concept. 
The reason for this is that it is relatively\n+difficult from a usability standpoint to easily deduce the type to be passed as a\n+property when adding vertices and edges - but especially vertices.\n+\n[endsect]\n\\ No newline at end of file\n\nModified: sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/distributions.qbk\n==============================================================================\n--- sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/distributions.qbk (original)\n+++ sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/distributions.qbk 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\n@@ -6,29 +6,22 @@\n/]\n\n[section Degree Distributions]\n- template <class Graph, class Distribution>\n- void degree_distribution(const Graph &g, Distribution& dist);\n+ void degree_distribution(\n+ _graph,\n+ _distribution = not_given(),\n+ _out_distribution = not_given(),\n+ _in_distribution = not_given())\n+\n+ void degree_histogram(\n+ _graph,\n+ _histogram = not_given(),\n+ _out_histogram = not_given(),\n+ _in_histogram = not_given())\n\n- template <class Graph, class Distribution>\n- void in_degree_distribution(const Graph &g, Distribution& dist);\n-\n- template <class Graph, class Distribution>\n- void out_degree_distribution(const Graph &g, Distribution& dist);\n-\n-\n- template <class Graph, class Histogram>\n- void degree_histogram(const Graph &g, Histogram& dist);\n-\n- template <class Graph, class Histogram>\n- void in_degree_histogram(const Graph &g, Histogram& dist);\n-\n- template <class Graph, class Histogram>\n- void out_degree_histogram(const Graph &g, Histogram& dist);\n-\n-The degree distribution functions compute distributions of the degrees\n+The degree distribution function compute distributions of the degrees\nof vertices in a graph. A distribution is mapping of an observable property\nto the number of occurences of that property. 
In this context, the observable\n-property is the degree of a vertex (or in- and out- degree), which are in\n+property is the degree of a vertex (or in- and out-degree), which are in\nthe range \\[0, /max{degree(v)}/\\] Where /max{degree(v)}/ is the maximum degree\nof any vertex in a graph /G/. Therefore, the output distribution is mapping\nof vertex degree to its number of occurences in a graph.\n@@ -39,48 +32,81 @@\nthat degree. This is very useful if you want to quickly find all vertices with\ndegree 0, or find the vertex with the highest degree.\n\n+In both of these functions, the computation of distribution or histogram\n+depends on which optional parameters are passed to the function. If called as:\n+\n+ degree_distribution(g, _distribution = dist, _in_distribution = in_dist);\n+\n+The algorithm will compute both the degree destribution and in-degree distributions.\n+Note that for undirected graphs, all three distributions or histograms will be\n+identical.\n+\n+[h5 Where Defined]\n`boost/graph/degree_distribution.hpp`\n\n+[h5 Parameters]\n[table\n[[Type] [Parameter] [Description]]\n[\n- [in] [`const Graph& g`]\n+ [required, in] [`_graph`]\n[\n- The graph object for which the distribution will be computed. For\n- `degree_distributions()` and `in_degree_distribution()`, `g` must\n- be a model of a BidirectionalGraph. For `out_degree_distribution()`,\n- `g`, must model the IncidenceGraph concept.\n+ The graph object for which the distribution will be computed. If\n+ the `_distribution` or `_in_distribution` arguments are supplied\n+ when calling this function then `_graph` must be a model of\n+ [BoostBidirectionalGraph]. 
If only `_out_distribution` is supplied,\n+ then `_graph` must be a model of [BoostIncidenceGraph].\n]\n]\n[\n- [out] [`Distribution& dist`]\n+ [optional, out]\n[\n- The distribution parameter maps instances of degrees (numerically)\n- to the number of vertices in the graph that exhibit that degree.\n+ `_distribution`\n\n- The distribution output parameter must be a model of both [SgiSequence]\n- and [SgiRandomAccessContainer] (e.g., `std::vector`). The index type of the\n+ `_out_distribution`\n+\n+ `_in_distribution`\n+ ]\n+ [\n+ The distribution parameters maps instances of vertex degree to the\n+ number of observed vertices in the graph with that degree.\n+\n+ These parameters must model both the [SgiSequence] and\n+ [SgiRandomAccessContainer] concepts (e.g., `std::vector`). The index type of the\ndistribution must be the same as `degree_size_type`. The `value_type` must\nbe integral (preferably unsigned).\n+\n+ If not supplied, these parameters assume the default value of `not_given`,\n+ implying that no computation is performed.\n]\n]\n[\n- [out] [`Histogram& hist`]\n+ [optional, out]\n[\n- The histogram parameter maps instances of degrees (numerically) to the\n- set of vertices that exhibit that degree.\n+ `_histogram`\n+\n+ `_out_histogram`\n+\n+ `_in_histogram`\n+ ]\n+ [\n+ The distribution parameters maps instances of vertex degree to the\n+ number of observed vertices in the graph with that degree.\n\nThe histogram output parameter must be a model of both [SgiSequence]\nand [SgiRandomAccessContainer] (e.g., `std::vector`). The index type of the\ndistribution must be the same as `degree_size_type`. 
Additionally `value_type`\nmust be a model of the [SgiBackInsertionSequence] (e.g., `std::vector`).\n+\n+ If not supplied, these parameters assume the default value of `not_given`,\n+ implying that no computation is performed.\n]\n]\n]\n\n-[h4 Complexity]\n+[h5 Return Value]\n+Both functions return `void`.\n+\n+[h5 Complexity]\nThe time complexity of all these functions is /O(V)/.\n\nThe space complexity for the distributions functisons is /O(max{degree(v)})/ where\n@@ -88,46 +114,62 @@\n\nThe space complexity for the histogram functions is /O(V + max{degree(v)})/.\n\n-[h4 Notes]\n+[h5 Notes]\nBecause a graph may be a multigraph, there is no determinable upper bound on the\nsize of the distribution or histogram parameters. As such they are required to\nbe dynamically resized during the execution of the algorithm.\n\n-For the distribution parameter, we recommend `std::vector<size_t>`. This satisfies\n-all the requirements. For the histogram, we recommend using a `std::vector<Sequence<Vertex> >`\n-where `Sequence` is one of `std::list`, `std::vector`, `std::deque`, or `std::queue`. The\n-choice doesn't make much difference except that a `std::list` will require more allocations,\n-but a `std::vector` will require more space. Also, note that `std::list::size()` function is\n-not required to run in constant-time. The `Vertex` type must be\n-`graph_traits<Graph>::vertex_descriptor`.\n-\n-If `dist` is the name of the output distribution after a call to `degree_distribution()`\n-then the maximum degree is `dist.size() - 1`. The minimum degree corresponds to the index\n-in `dist` with the first non-zero value.\n+The recommended type for the distribution parameters is:\n\n-[h4 Examples]\n-The first example show how to compute and print the degree distribution.\n+ std::vector<graph_traits<graph_type>::degree_size_type>\n+\n+where `graph_type` is the type of the `_graph` parameter. 
This satisfies the type\n+requirements of the algorithms, and provides exceptional performance at the cost\n+\n+The recommended type for the histogram parameters is:\n+\n+ std::vector<std::vector<graph_traits<graph_type>::vertex_descriptor> >\n\n- undirected_graph<> g;\n- // add vertices and edges to g\n+\n+Although this will consume more memory (due to the overhead of vector resizing),\n+it may perform better than using `std::list` to store vertices of the same degree.\n+This is because the `std::list::size()` function is not required to return in\n+constant time.\n+\n+Note that if `dist` is the name of the output distribution after a call to\n+`degree_distribution()` then the maximum degree is `dist.size() - 1`. The\n+minimum degree corresponds to the index in `dist` with the first non-zero value.\n+\n+[h5 Examples]\n+The first example shows how to compute and print the degree distribution.\n\n- std::vector<size_t> dist;\n- degree_distribution(g, dist);\n- copy(dist.begin(), dist.end(), ostream_iterator<size_t>(cout, \" \"));\n+ undirected_graph<> g;\n+ // add vertices and edges to g\n\n+ std::vector<size_t> dist;\n+ degree_distribution(g, dist);\n+ copy(dist.begin(), dist.end(), ostream_iterator<size_t>(cout, \" \"));\n\nThe following example shows how to access the vertex (or vertices) with the maximum\ndegree by using the `degree_histogram()` algorithm. 
This prints the index of that\nvertex.\n\n- undirected_graph<> g;\n- // add vertice and edges to g\n+ undirected_graph<> g;\n+ // add vertices and edges to g\n\n- typedef graph_traits<undirected_graph<> >::vertex_descriptor vertex_type;\n- typedef std::vector<vertex_type> vertex_vector;\n+ typedef graph_traits<undirected_graph<> >::vertex_descriptor vertex_type;\n+ typedef std::vector<vertex_type> vertex_vector;\n\n- std::vector<vertex_vector> hist;\n- degree_histogram(g, hist);\n- cout << get_vertex_index(hist.back().back()) << \"\\n\";\n+ std::vector<vertex_vector> hist;\n+ degree_histogram(g, hist);\n+ cout << get_vertex_index(hist.back().back()) << \"\\n\";\n+\n+[h5 Rationale]\n+The use of these functions varies somewhat from typical use of named parameters\n+where default values are simply used to supply default information. Here, they\n+are used to control functionality. It should also be noted that if no parameters\n+are supplied, this algorithm still runs in linear time since operations on the\n+`not_given` type are no-opped (they are instantiated as empty functions).\n\nModified: sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/reference.qbk\n==============================================================================\n--- sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/reference.qbk (original)\n+++ sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/reference.qbk 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\n@@ -13,6 +13,9 @@\n[endsect]\n\n+[section Traits Classes]\n+[endsect]\n+\n[section Algorithms]\n[section Core Algorithms]\n\nModified: sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/undirected_graph.qbk\n==============================================================================\n--- sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/undirected_graph.qbk (original)\n+++ sandbox/SOC/2007/graphs/libs/graph/doc/quickbook/reference/undirected_graph.qbk 2007-07-06 
14:09:52 EDT (Fri, 06 Jul 2007)\n@@ -10,7 +10,7 @@\nits associated types, member functions and non-member interface. An undirected graph\nis one in which edges have no direction - this is to say that edges can be \"traveled\"\nin both directions. This class provides general purpose implementation of undirected\n-graphs and that can be used with algorithms in the Boost Graph Library.\n+graphs and can be used with algorithms in the Boost Graph Library.\n\n[h4 Notation]\nThe following notation is used in this documentation. The following type names\n\nModified: sandbox/SOC/2007/graphs/libs/graph/examples/movies/stats.cpp\n==============================================================================\n--- sandbox/SOC/2007/graphs/libs/graph/examples/movies/stats.cpp (original)\n+++ sandbox/SOC/2007/graphs/libs/graph/examples/movies/stats.cpp 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\n@@ -12,9 +12,6 @@\n#include <boost/graph/degree_distribution.hpp>\n#include <boost/graph/connectivity.hpp>\n\n-#include <typeinfo>\n-#include <cxxabi.h>\n-\n#include \"movies.hpp\"\n\nusing namespace std;\n\nModified: sandbox/SOC/2007/graphs/libs/graph/test/Jamfile.v2\n==============================================================================\n--- sandbox/SOC/2007/graphs/libs/graph/test/Jamfile.v2 (original)\n+++ sandbox/SOC/2007/graphs/libs/graph/test/Jamfile.v2 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\n@@ -22,4 +22,17 @@\nexe misc\n: misc.cpp\n: <include>../../../\n- ;\n\\ No newline at end of file\n+ ;\n+\n+exe components\n+ : components.cpp\n+ : <include>\\$BOOST_ROOT\n+ : <include>../../../\n+ ;\n+\n+exe distance\n+ : distance.cpp\n+ : <include>\\$BOOST_ROOT\n+ : <include>../../../\n+ ;\n+\n\n==============================================================================\n--- (empty file)\n+++ sandbox/SOC/2007/graphs/libs/graph/test/components.cpp 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\n@@ -0,0 +1,123 @@\n+// (C) Copyright Andrew Sutton 2007\n+//\n+// Use, modification and 
distribution are subject to the\n+// Boost Software License, Version 1.0 (See accompanying file\n+\n+#include <iostream>\n+#include <iterator>\n+#include <algorithm>\n+#include <vector>\n+#include <map>\n+#include <tr1/unordered_map>\n+#include <typeinfo>\n+#include <cxxabi.h>\n+\n+#include <boost/graph/undirected_graph.hpp>\n+#include <boost/graph/connectivity.hpp>\n+\n+using namespace std;\n+using namespace boost;\n+using namespace __cxxabiv1;\n+\n+template <typename Graph>\n+void build_graph(Graph& g)\n+{\n+ typedef typename Graph::vertex_descriptor Vertex;\n+\n+ static const unsigned N = 5;\n+ vector<Vertex> v(N);\n+\n+ for(size_t i = 0; i < N; ++i) {\n+ }\n+\n+\n+ // add_edge(v, v, g); // this makes it fully connected\n+};\n+\n+void test_1()\n+{\n+ typedef adjacency_list<vecS, vecS, undirectedS> Graph;\n+ Graph g;\n+ build_graph(g);\n+\n+ vector<int> comps(num_vertices(g));\n+ connectivity(g, &comps);\n+}\n+\n+void test_2()\n+{\n+ typedef adjacency_list<vecS, vecS, undirectedS> Graph;\n+ Graph g;\n+ build_graph(g);\n+\n+ vector<int> comps(num_vertices(g));\n+ vector<default_color_type> colors(num_vertices(g));\n+ connectivity(g, &comps,\n+ _color_map = &colors);\n+}\n+\n+void test_3()\n+{\n+ typedef adjacency_list<listS, listS, undirectedS> Graph;\n+ typedef map<Graph::vertex_descriptor, int> IndexMap;\n+ typedef map<Graph::vertex_descriptor, int> CompMap;\n+ typedef associative_property_map<IndexMap> IndexProperties;\n+ typedef associative_property_map<CompMap> CompProperties;\n+\n+ Graph g;\n+ build_graph(g);\n+\n+ IndexMap indices;\n+ CompMap comps;\n+\n+ int x = 0;\n+ Graph::vertex_iterator i, end;\n+ for(tie(i, end) = vertices(g); i != end; ++i) {\n+ indices[*i] = x++;\n+ }\n+\n+ CompProperties comp_map(comps);\n+ IndexProperties index_map(indices);\n+ connectivity(g, comp_map,\n+ _vertex_index_map = index_map);\n+}\n+\n+void test_4()\n+{\n+ typedef undirected_graph<> Graph;\n+ typedef tr1::unordered_map<Graph::vertex_descriptor, int> CompMap;\n+ typedef 
associative_property_map<CompMap> CompProperties;\n+ typedef std::vector<std::vector<Graph::vertex_descriptor> > Components;\n+\n+ Graph g;\n+ build_graph(g);\n+\n+ CompMap comps;\n+ CompProperties comp_map(comps);\n+ Components ccomps;\n+ connectivity(g, comp_map, _components = ccomps);\n+\n+ for(size_t i = 0; i < ccomps.size(); ++i) {\n+ cout << i << \": \" << ccomps[i].size() << \"\\n\";\n+ }\n+}\n+\n+\n+int\n+main(int argc, char *argv[])\n+{\n+ test_1();\n+ test_2();\n+ test_3();\n+ test_4();\n+}\n\n==============================================================================\n--- (empty file)\n+++ sandbox/SOC/2007/graphs/libs/graph/test/distance.cpp 2007-07-06 14:09:52 EDT (Fri, 06 Jul 2007)\n@@ -0,0 +1,248 @@\n+// (C) Copyright Andrew Sutton 2007\n+//\n+// Use, modification and distribution are subject to the\n+// Boost Software License, Version 1.0 (See accompanying file\n+\n+#include <iostream>\n+#include <iterator>\n+#include <algorithm>\n+#include <vector>\n+#include <map>\n+#include <tr1/unordered_map>\n+\n+#include <boost/graph/undirected_graph.hpp>\n+#include <boost/graph/dijkstra_shortest_paths.hpp>\n+#include <boost/graph/johnson_all_pairs_shortest.hpp>\n+#include <boost/graph/floyd_warshall_shortest.hpp>\n+#include <boost/graph/distance.hpp>\n+\n+using namespace std;\n+using namespace boost;\n+\n+struct VertexProperty\n+{\n+ int dummy;\n+};\n+\n+struct EdgeProperty\n+{\n+ int weight;\n+};\n+\n+template <typename Graph>\n+void build_graph(Graph& g)\n+{\n+ typedef typename Graph::vertex_descriptor Vertex;\n+ typedef typename Graph::edge_descriptor Edge;\n+\n+ static const unsigned N = 5;\n+ vector<Vertex> v(N);\n+ vector<Edge> e;\n+\n+ for(size_t i = 0; i < N; ++i) {\n+ }\n+\n+ // add some edges (with weights)\n+\n+ g[e].weight = 1;\n+ g[e].weight = 1;\n+ g[e].weight = 1;\n+ g[e].weight = 1;\n+ g[e].weight = 1;\n+};\n+\n+template <typename Graph, typename DistanceMap>\n+void dump_distance_map(const Graph& g, DistanceMap dists)\n+{\n+ typename 
Graph::vertex_iterator i, end;\n+ for(tie(i, end) = vertices(g); i != end; ++i) {\n+ cout << dists[*i] << \" \";\n+ }\n+ cout << \"\\n\";\n+}\n+\n+template <typename Graph, typename DistanceMatrix>\n+void dump_distance_matrix(const Graph& g, DistanceMatrix dists)\n+{\n+ typename Graph::vertex_iterator i, j, end;\n+ for(tie(i, end) = vertices(g); i != end; ++i) {\n+ for(j = vertices(g).first; j != end; ++j) {\n+ cout << dists[*i][*j] << \" \";\n+ }\n+ cout << \"\\n\";\n+ }\n+}\n+\n+void test_1()\n+{\n+ // because this is defined w/vecS's, we don't have to work very\n+ // hard on property maps\n+\n+ typedef adjacency_list<vecS, vecS, undirectedS, VertexProperty, EdgeProperty> Graph;\n+ typedef Graph::vertex_descriptor Vertex;\n+ typedef property_map<Graph, int EdgeProperty::*>::type WeightPropertyMap;\n+ typedef vector<int> DistanceMap;\n+ typedef iterator_property_map<DistanceMap::iterator,\n+ property_map<Graph, vertex_index_t>::type> DistancePropertyMap;\n+\n+ Graph g;\n+ build_graph(g);\n+\n+ Vertex v = *vertices(g).first;\n+ WeightPropertyMap weights = get(&EdgeProperty::weight, g);\n+\n+ {\n+ DistanceMap distances(num_vertices(g));\n+ DistancePropertyMap dists(distances.begin());\n+\n+ dijkstra_shortest_paths(g, v,\n+ weight_map(weights).\n+ distance_map(dists));\n+\n+ cout << \"* dists: \"; dump_distance_map(g, dists);\n+ cout << \"* mean geo: \" << mean_geodesic_distance(g, dists) << \"\\n\";\n+ cout << \"* closeness: \" << closeness(g, dists) << \"\\n\";\n+ cout << \"* eccentricity: \" << eccentricity(g, dists) << \"\\n\";\n+ }\n+\n+ {\n+ typedef vector<DistanceMap> DistanceMatrix;\n+\n+ DistanceMatrix dists(num_vertices(g), DistanceMap(num_vertices(g)));\n+\n+ // compute all shortest paths\n+ floyd_warshall_all_pairs_shortest_paths(g, dists,\n+ weight_map(weights));\n+\n+ // use the distances in all-pairs to compute eccentricities\n+ // for each vertex\n+ DistanceMap eccentrics(num_vertices(g));\n+ DistancePropertyMap eccs(eccentrics.begin());\n+ 
eccentricities(g, dists, eccs);\n+\n+ cout << \"* dists:\\n\"; dump_distance_matrix(g, dists);\n+ cout << \"* eccs: \"; dump_distance_map(g, eccs);\n+ cout << \"* diameter: \" << diameter(g, eccs) << \"\\n\";\n+\n+ vector<Vertex> cent;\n+ center(g, eccs, back_inserter(cent));\n+ cout << \"center: \";\n+ for(size_t x = 0; x < cent.size(); ++x) {\n+ Vertex v = cent[x];\n+ cout << get(vertex_index, g, v) << \" \";\n+ }\n+ cout << \"\\n\";\n+\n+ vector<Vertex> peri;\n+ periphery(g, eccs, back_inserter(peri));\n+ cout << \"periphery: \";\n+ for(size_t x = 0; x < peri.size(); ++x) {\n+ Vertex v = peri[x];\n+ cout << get(vertex_index, g, v) << \" \";\n+ }\n+ cout << \"\\n\";\n+ }\n+}\n+\n+void test_2()\n+{\n+}\n+\n+void test_3()\n+{\n+ typedef undirected_graph<VertexProperty, EdgeProperty> Graph;\n+ typedef Graph::vertex_descriptor Vertex;\n+\n+ typedef property_map<Graph, int EdgeProperty::*>::type WeightPropertyMap;\n+\n+ typedef tr1::unordered_map<Vertex, int> DistanceMap;\n+ typedef associative_property_map<DistanceMap> DistancePropertyMap;\n+\n+ Graph g;\n+ build_graph(g);\n+\n+ Vertex v = *vertices(g).first;\n+ WeightPropertyMap weights = get(&EdgeProperty::weight, g);\n+\n+ cout << \"\\nundirected_graph<...>\\n\";\n+ {\n+ DistanceMap distances(num_vertices(g));\n+ DistancePropertyMap dists(distances);\n+\n+ // compute shortest paths\n+ dijkstra_shortest_paths(g, v,\n+ weight_map(weights).\n+ distance_map(dists)\n+ );\n+\n+ cout << \"* dists: \"; dump_distance_map(g, dists);\n+ cout << \"* mean geo: \" << mean_geodesic_distance(g, dists) << \"\\n\";\n+ cout << \"* closeness: \" << closeness(g, dists) << \"\\n\";\n+ cout << \"* eccentricity: \" << eccentricity(g, dists) << \"\\n\";\n+ }\n+\n+ {\n+ typedef tr1::unordered_map<Vertex, DistanceMap> DistanceMatrix;\n+\n+ DistanceMatrix dists(num_vertices(g));\n+ Graph::vertex_iterator i, end;\n+ for(tie(i, end) = vertices(g); i != end; ++i) {\n+ dists[*i].rehash(num_vertices(g));\n+ }\n+\n+ // compute all shortest 
paths\n+ floyd_warshall_all_pairs_shortest_paths(g, dists,\n+ weight_map(weights));\n+\n+ // use the distances in all-pairs to compute eccentricities\n+ // for each vertex\n+ DistanceMap eccentrics(num_vertices(g));\n+ DistancePropertyMap eccs(eccentrics);\n+ eccentricities(g, dists, eccs);\n+\n+ int r, d;\n+ cout << \"* dists:\\n\"; dump_distance_matrix(g, dists);\n+ cout << \"* eccs: \"; dump_distance_map(g, eccs);\n+ cout << \"* radius: \" << (r = radius(g, eccs)) << \"\\n\";\n+ cout << \"* diameter: \" << (d = diameter(g, eccs)) << \"\\n\";\n+\n+ vector<Vertex> cent;\n+ center(g, eccs, back_inserter(cent));\n+ cout << \"center: \";\n+ for(size_t x = 0; x < cent.size(); ++x) {\n+ Vertex v = cent[x];\n+ cout << get(vertex_index, g, v) << \" \";\n+ }\n+ cout << \"\\n\";\n+\n+ vector<Vertex> peri;\n+ periphery(g, eccs, back_inserter(peri));\n+ cout << \"periphery: \";\n+ for(size_t x = 0; x < peri.size(); ++x) {\n+ Vertex v = peri[x];\n+ cout << get(vertex_index, g, v) << \" \";\n+ }\n+ cout << \"\\n\";\n+ }\n+}\n+\n+\n+int\n+main(int argc, char *argv[])\n+{\n+ test_1();\n+ // test_2();\n+ test_3();\n+}"
] | [
null,
"https://lists.boost.org/boost-commit/images/boost.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5758222,"math_prob":0.9306001,"size":46971,"snap":"2020-10-2020-16","text_gpt3_token_len":12152,"char_repetition_ratio":0.16871421,"word_repetition_ratio":0.35303915,"special_character_ratio":0.3167912,"punctuation_ratio":0.19969846,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99337125,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-27T08:35:54Z\",\"WARC-Record-ID\":\"<urn:uuid:72b73414-c087-437b-bd68-64684bec4cd5>\",\"Content-Length\":\"73434\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:58d9afb7-a6f7-485c-b85d-f829f042661b>\",\"WARC-Concurrent-To\":\"<urn:uuid:830846b0-fa33-4999-aba5-76990340ea80>\",\"WARC-IP-Address\":\"146.20.110.251\",\"WARC-Target-URI\":\"https://lists.boost.org/boost-commit/2007/07/0765.php\",\"WARC-Payload-Digest\":\"sha1:AHGYQ5YSZP67IVDRGHC6FSEH3NCMY3P4\",\"WARC-Block-Digest\":\"sha1:A3KCDD6IXAWO6FGHBYOXNCE752MVTZYR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146665.7_warc_CC-MAIN-20200227063824-20200227093824-00253.warc.gz\"}"} |
https://cs.stackexchange.com/questions/113167/complexity-class-of-an-algorithm-with-two-inputs | [
"# Complexity Class of an Algorithm with two Inputs\n\nConsider a problem with two inputs like (P,L) and |P|=n and L is some positive integer. If my algorithm had a complexity of O(n^L), would that still be polynomial? Or is it exponential? I'm not sure if I should view L as a constant or as 'size of input', since it's not a size but just some number (as in, Knapsack for example, select items of weight <= L). At the same time, L is part of the input and not fixed.\n\nYou're confused because you've called part of your input $$n$$. When we say that the running time is some function of $$n$$, we almost always mean that $$n$$ is the length of the input string.\nYour algorithm runs in time $$|P|^L$$, where $$P$$ is a string contained in the input and $$L$$ is a number represented in the input, presumably in binary. If $$L$$ is a $$b$$-bit number, it could be as big as $$2^b$$. Writing $$n$$ for the length of the input, we could, for example, have $$|P|=n/2$$ with $$L$$ being an $$(n/2)$$-bit number. That case gives running time $$(n/2)^{2^{n/2}}$$, which is a long, long way from being polynomial.\n• Yes, if $L$ is fixed (or bounded above by some constant) then $|P|^L$ is polynomial. – David Richerby Aug 28 '19 at 15:35"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91161394,"math_prob":0.9998721,"size":1756,"snap":"2019-51-2020-05","text_gpt3_token_len":499,"char_repetition_ratio":0.10787671,"word_repetition_ratio":0.018867925,"special_character_ratio":0.3001139,"punctuation_ratio":0.109042555,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999875,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-29T22:29:02Z\",\"WARC-Record-ID\":\"<urn:uuid:ea50b742-7be1-44fc-a539-08909b2a6e22>\",\"Content-Length\":\"135810\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3874d327-17f7-470f-a146-d34db4a09bfe>\",\"WARC-Concurrent-To\":\"<urn:uuid:510f0054-b5e6-4e7e-beee-af159f5353b9>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/113167/complexity-class-of-an-algorithm-with-two-inputs\",\"WARC-Payload-Digest\":\"sha1:4K3G2M4JI2FXOFLEBXDDGO63PAJCX6IO\",\"WARC-Block-Digest\":\"sha1:53S4JLYDUR5MG5S644RMP2Y3OCQZLG3X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251802249.87_warc_CC-MAIN-20200129194333-20200129223333-00372.warc.gz\"}"} |
https://www.advancedconverter.com/unit-conversions/energy-conversion/joules-to-kilowatts-per-hour | [
" Converter Joules to Kilowatts per hour\n\n# Conversion Joule to Kilowatt per hour\n\nThe joule (symbol J), is a derived unit of energy, work, or amount of heat in the IS of Units. It is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre (1 newton metre or N·m), or in passing an electric current of one ampere through a resistance of one ohm for one second.\n\nThis tool converts joules to kilowatts per hour (j to kwh) and vice versa. 1 joule ≈ 2.778E-7 kilowatts per hour. The user must fill one of the two fields and the conversion will become automatically.\n\n joule (j) kilowatt per hour (kwh) <=> precision:auto0 decimal1 decimal2 decimals3 decimals4 decimals5 decimals6 decimals7 decimals8 decimals9 decimals10 decimals11 decimals12 decimals\n\n1 joules = 2.778E-7 kilowatts per hour\n\nFormula joules in kilowatts per hour (j in kwh). Kwh = joules*0.001163/4186.8 ≈ joules*2.778E-7\n\n### Conversions joules to other units\n\n Joule to Btu Joule to Watt per hour Joule to Kilowatt per hour Joule to Calorie Joule to Kilocalorie Joule to Kilojoule Joule to Erg\n\nTable joules to kwh\n1 joules = 2.778E-7 kwh11 joules = 3.056E-6 kwh21 joules = 5.833E-6 kwh\n2 joules = 5.556E-7 kwh12 joules = 3.333E-6 kwh22 joules = 6.111E-6 kwh\n3 joules = 8.333E-7 kwh13 joules = 3.611E-6 kwh23 joules = 6.389E-6 kwh\n4 joules = 1.111E-6 kwh14 joules = 3.889E-6 kwh24 joules = 6.667E-6 kwh\n5 joules = 1.389E-6 kwh15 joules = 4.167E-6 kwh25 joules = 6.944E-6 kwh\n6 joules = 1.667E-6 kwh16 joules = 4.444E-6 kwh26 joules = 7.222E-6 kwh\n7 joules = 1.944E-6 kwh17 joules = 4.722E-6 kwh27 joules = 7.5E-6 kwh\n8 joules = 2.222E-6 kwh18 joules = 5.0E-6 kwh28 joules = 7.778E-6 kwh\n9 joules = 2.5E-6 kwh19 joules = 5.278E-6 kwh29 joules = 8.056E-6 kwh\n10 joules = 2.778E-6 kwh20 joules = 5.556E-6 kwh30 joules = 8.333E-6 kwh\n40 joules = 1.111E-5 kwh70 joules = 1.944E-5 kwh100 joules = 2.778E-5 kwh\n50 joules = 1.389E-5 kwh80 joules = 2.222E-5 
kwh110 joules = 3.056E-5 kwh\n60 joules = 1.667E-5 kwh90 joules = 2.5E-5 kwh120 joules = 3.333E-5 kwh\n200 joules = 5.556E-5 kwh500 joules = 0.0001389 kwh800 joules = 0.0002222 kwh\n300 joules = 8.333E-5 kwh600 joules = 0.0001667 kwh900 joules = 0.00025 kwh\n400 joules = 0.0001111 kwh700 joules = 0.0001944 kwh1000 joules = 0.0002778 kwh\n\n### Energy Conversions\n\n Btu to Watt per hour Btu to Kwh Btu to Calorie Btu to Kcal Btu to Joule Btu to Kilojoule Btu to Erg Watt per hour to Btu Watt per hour to Kwh Watt per hour to Calorie Watt per hour to Kcal Watt per hour to Joule Watt per hour to Kilojoule Watt per hour to Erg Kwh to Btu Kwh to Watt per hour Kwh to Calorie Kwh to Kcal Kwh to Joule Kwh to Kilojoule Kwh to Erg Calorie to Btu Calorie to Watt per hour Calorie to Kwh Calorie to Kcal Calorie to Joule Calorie to Kilojoule Calorie to Erg Kcal to Btu Kcal to Watt per hour Kcal to Kwh Kcal to Calorie Kcal to Joule Kcal to Kilojoule Kcal to Erg Kilojoule to Btu Kilojoule to Watt per hour Kilojoule to Kwh Kilojoule to Calorie Kilojoule to Kcal Kilojoule to Joule Kilojoule to Erg Erg to Btu Erg to Watt per hour Erg to Kwh Erg to Calorie Erg to Kcal Erg to Joule Erg to Kilojoule"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.59835386,"math_prob":0.97321177,"size":2185,"snap":"2022-27-2022-33","text_gpt3_token_len":1008,"char_repetition_ratio":0.3209537,"word_repetition_ratio":0.0,"special_character_ratio":0.46773455,"punctuation_ratio":0.12331407,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99984515,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-13T18:30:19Z\",\"WARC-Record-ID\":\"<urn:uuid:dc910be7-2da7-4b78-966c-a1767efff8ea>\",\"Content-Length\":\"43932\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e7da74dd-57d9-4e9e-ba28-b4046945eece>\",\"WARC-Concurrent-To\":\"<urn:uuid:4a47fd04-b441-4c8e-9738-af3e4b3f697e>\",\"WARC-IP-Address\":\"138.201.133.106\",\"WARC-Target-URI\":\"https://www.advancedconverter.com/unit-conversions/energy-conversion/joules-to-kilowatts-per-hour\",\"WARC-Payload-Digest\":\"sha1:D6IADNCZBOES4SW3FTWAW2MU6NAHORDW\",\"WARC-Block-Digest\":\"sha1:3UG4L244KK23K2YIVK35GIUVMUKMUWUY\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571982.99_warc_CC-MAIN-20220813172349-20220813202349-00786.warc.gz\"}"} |
https://www.linearprogramming.info/tag/two-phase-simplex-method/ | [
"# Archivo de Etiquetas (Tags) | two phase simplex method\n\n## 10 Things You need to know about Simplex Method",
null,
"The Simplex Method was developed by George Dantzing in 1947. It is without a doubt the most popular algorithm when it comes to solving a Linear Programming (LP) model, and it plays a major role in the introduction to Operations Research (OR). Today we’re presenting a summary of 10 main concepts about the use and […]\n\n## How to Solve a Linear Programming model with Dual Simplex Method",
null,
"The Dual Simplex Method offers an alternative when solving Linear Programming (LP) models with algorithms. This method may be used in particular when the standard way to carry a linear programming model is not available from an initial basic feasible solution. Consider the following LP problem to illustrate the application of the Dual Simplex Method: […]\n\n## Fundamental Theorem of Linear Programming and its Properties",
null,
"In this following article we will address properties set by the Fundamental Theorem of Linear Programming through a conceptual discussion and practical and simple examples. These properties are essential when taking into consideration algorithmic resolutions of this kind of mathematical optimization models, among them is what we call the Simplex Method. Every Linear Programming (LP) […]\n\n## Primal Dual Relationships in Linear Programming (Duality Theory in LP)",
null,
"The dual model of a Linear Programming problem consists of an alternative modeling instance that allows us to recover the information of the original problem commonly known as primal model. Therefore it is sufficient to solve one of them (primal or dual) to obtain the optimal solution and the optimal value of the equivalent problem […]"
] | [
null,
"https://www.linearprogramming.info/wp-content/uploads/2015/09/simplex-method1-125x125.png",
null,
"https://www.linearprogramming.info/wp-content/uploads/2015/08/how-to-solve-dual-simplex-m-125x125.gif",
null,
"https://www.linearprogramming.info/wp-content/uploads/2015/08/3d-geogebra-125x125.gif",
null,
"https://www.linearprogramming.info/wp-content/uploads/2015/01/primal-dual-relationship-125x125.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88553846,"math_prob":0.92411363,"size":1654,"snap":"2019-51-2020-05","text_gpt3_token_len":311,"char_repetition_ratio":0.15212122,"word_repetition_ratio":0.007751938,"special_character_ratio":0.18440145,"punctuation_ratio":0.03731343,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98733854,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T03:32:51Z\",\"WARC-Record-ID\":\"<urn:uuid:b4a696d1-66b5-4754-9e4c-386720e76730>\",\"Content-Length\":\"68183\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8d18fd57-dbf3-410f-b36c-a55242db473d>\",\"WARC-Concurrent-To\":\"<urn:uuid:312f0a0d-ebcb-47f8-9dd7-a69822861610>\",\"WARC-IP-Address\":\"198.1.89.69\",\"WARC-Target-URI\":\"https://www.linearprogramming.info/tag/two-phase-simplex-method/\",\"WARC-Payload-Digest\":\"sha1:X4WPCGMILNHMSS4R42GLKZK3ROYGAD3X\",\"WARC-Block-Digest\":\"sha1:AIZ3BHB2DRHFY2G46PBJE2ZJRBBFF2OM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540548537.21_warc_CC-MAIN-20191213020114-20191213044114-00447.warc.gz\"}"} |
http://www.dimensionengine.com/excel/DEAddIn/quickstart.html | [
"## QuickStart\n\n#### Unit Errors",
null,
"Using DimensionEngine requires you to follow one simple convention - the units for any value must be in the cell immediately to the right of the cell containing the value, as shown in the example below. The value and units cannot be combined into a single cell.",
null,
"To demonstrate the use of DimensionEngine, consider the simple example of adding 1 ft and 0.3 m. Normally, you would do this by converting 1 ft to meters and then adding. DimensionEngine does this automatically - you will never need to convert units again! First enter the two variables (1 ft and 0.3 m) anywhere on the spreadsheet, following the convention that units are entered immediately to the right of the value. Then in a new cell, create a formula to add the two values, just as you normally would (don't worry about the incompatible units, DE will take care of this).",
null,
"Once you've created an Excel formula in the usual manner, highlight the cell with the formula and click on \"DE Convert\" under the ADD-INS tab. (Or using Mac Excel, select “DE Convert” from the DE menu, as shown here.) DimensionEngine analyzes your equation, automatically converts units as necessary, and outputs both the equation result and the associated units, in this case 0.6048 m.",
null,
"Suppose that instead of meters you wanted the sum of 1 ft and 0.3 expressed in some other unit of length, say inches. There are two ways you can tell DimensionEngine the units to output.\n\nAlthough it will work, we do not recommend the first approach, which is to simply replace the output units (in this case the \"m\" in cell B3) with the desired units, \"inch\". This will work, but DimensionEngine will no longer automatically adjust the output units if, for example, you change your equation. DimensionEngine will still tell you if your units are incorrect, but it will not automatically correct them for you.\n\nThe recommended option for specifying output units is as follows. Highlight the output units cell (in this case cell B3) and notice that DimensionEngine has inserted a formula for this cell that calls the function \"DEUnits()\".",
null,
"The only part of the DEUnits() formula you should ever edit is the first term, which has the default value of \"MKS\". This tells DimensionEngine to output the results using the appropriate combination of meters, kilograms, and seconds. DimensionEngine recognizes three other systems of default units. If you change the \"MKS\" entry in this formula to \"cgs\", DimensionEngine will output the results using the appropriate combination of centimeters, grams, and seconds. Similarly, changing this term to \"FPS\" tells DimensionEngine to output the results using the appropriate combination of feet, pounds, and seconds, or changing this term to \"IPS\" tells DimensionEngine to output the results using the appropriate combination of inches, pounds, and seconds.\n\nIf you want DimensionEngine to output specific units, in our case inches, then replace the \"MKS\" in the DEUnits() formula with those units, \"inch\" in our case (or you can enter \"inches\", DimensionEngine recognizes both).",
null,
"DimensionEngine does more than just convert units, it also checks to make sure your equations are dimensionally consistent. For example, if we change the units for our first variable from 1 ft to 1 s (i.e, change cell B1 from \"ft\" to \"s\"), DimensionEngine automatically recognizes the dimensional inconsistency of trying to add 1 second and 0.3 m and reports the error.",
null,
"Unit errors can also occur if you specify incompatible output units. For example, suppose we incorrectly specified the output units as seconds. In this case DimensionEngine returns the correct units (m), but highlights the units in red and adds a comment to the units cell (indicated by the red triangle) informing you that the units are not the ones specified.",
null,
"© DimensionEngine 2015. For questions or bug reports, email us."
] | [
null,
"http://www.dimensionengine.com/_Media/load_addin_med_hr.jpeg",
null,
"http://www.dimensionengine.com/_Media/convention_med_hr.jpeg",
null,
"http://www.dimensionengine.com/_Media/firstequation_med_hr.jpeg",
null,
"http://www.dimensionengine.com/_Media/firstconversion_med_hr.jpeg",
null,
"http://www.dimensionengine.com/_Media/firstdeunits_med_hr.jpeg",
null,
"http://www.dimensionengine.com/_Media/deeditunits_med_hr.jpeg",
null,
"http://www.dimensionengine.com/_Media/uniterror_med_hr.jpeg",
null,
"http://www.dimensionengine.com/_Media/uniterrorcomment_med_hr.jpeg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8668978,"math_prob":0.86647975,"size":4013,"snap":"2021-31-2021-39","text_gpt3_token_len":855,"char_repetition_ratio":0.17061612,"word_repetition_ratio":0.04387292,"special_character_ratio":0.20931971,"punctuation_ratio":0.108753316,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9819887,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,3,null,3,null,3,null,6,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-30T17:44:03Z\",\"WARC-Record-ID\":\"<urn:uuid:2518407f-155c-4fde-b166-61dae1a819a1>\",\"Content-Length\":\"21406\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fbe1f543-0533-4c1c-8b4b-9846e68751af>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac23b717-7ee2-4c24-8e48-ab4fbf9a6af9>\",\"WARC-IP-Address\":\"206.188.193.33\",\"WARC-Target-URI\":\"http://www.dimensionengine.com/excel/DEAddIn/quickstart.html\",\"WARC-Payload-Digest\":\"sha1:JX3DTNQCQ55CP24BHAR6G4EQQE6UWSHA\",\"WARC-Block-Digest\":\"sha1:ISY4QB62XEW5K4T5GXNL3VFY5ZBKLHIM\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153971.20_warc_CC-MAIN-20210730154005-20210730184005-00534.warc.gz\"}"} |
https://artofproblemsolving.com/wiki/index.php/1987_AIME_Problems/Problem_14 | [
"# 1987 AIME Problems/Problem 14\n\n## Problem\n\nCompute",
null,
"$$\\frac{(10^4+324)(22^4+324)(34^4+324)(46^4+324)(58^4+324)}{(4^4+324)(16^4+324)(28^4+324)(40^4+324)(52^4+324)}.$$\n\n## Solution 1 (Sophie Germain Identity)\n\nThe Sophie Germain Identity states that",
null,
"$a^4 + 4b^4$ can be factored as",
null,
"$\\left(a^2 + 2b^2 - 2ab\\right)\\left(a^2 + 2b^2 + 2ab\\right).$ Each of the terms is in the form of",
null,
"$x^4 + 324.$ Using Sophie Germain, we get that",
null,
"\\begin{align*} x^4 + 324 &= x^4 + 4\\cdot 3^4 \\\\ &= \\left(x^2 + 2 \\cdot 3^2 - 2\\cdot 3\\cdot x\\right)\\left(x^2 + 2 \\cdot 3^2 + 2\\cdot 3\\cdot x\\right) \\\\ &= (x(x-6) + 18)(x(x+6)+18), \\end{align*} so the original expression becomes",
null,
"$$\\frac{[(10(10-6)+18)(10(10+6)+18)][(22(22-6)+18)(22(22+6)+18)]\\cdots[(58(58-6)+18)(58(58+6)+18)]}{[(4(4-6)+18)(4(4+6)+18)][(16(16-6)+18)(16(16+6)+18)]\\cdots[(52(52-6)+18)(52(52+6)+18)]},$$ which simplifies to",
null,
"$$\\frac{(10(4)+18)(10(16)+18)(22(16)+18)(22(28)+18)\\cdots(58(52)+18)(58(64)+18)}{(4(-2)+18)(4(10)+18)(16(10)+18)(16(22)+18)\\cdots(52(46)+18)(52(58)+18)}.$$ Almost all of the terms cancel out! We are left with",
null,
"$\\frac{58(64)+18}{4(-2)+18} = \\frac{3730}{10} = \\boxed{373}.$\n\n~Azjps (Solution)\n\n~MRENTHUSIASM (Minor Reformatting)\n\n## Solution 2 (Completing the Square and Difference of Squares)\n\nIn both the numerator and the denominator, each factor is of the form",
null,
"$N^4+324=N^4+18^2$ for some positive integer",
null,
"$N.$\n\nWe factor",
null,
"$N^4+18^2$ by completing the square, then applying the difference of squares:",
null,
"\\begin{align*} N^4+18^2&=\\left(N^4+36N^2+18^2\\right)-36N^2 \\\\ &=\\left(N^2+18\\right)^2-(6N)^2 \\\\ &=\\left(N^2-6N+18\\right)\\left(N^2+6N+18\\right) \\\\ &=\\left((N-3)^2+9\\right)\\left((N+3)^2+9\\right). \\end{align*} The original expression now becomes",
null,
"$$\\frac{\\left[(7^2+9)(13^2+9)\\right]\\left[(19^2+9)(25^2+9)\\right]\\left[(31^2+9)(37^2+9)\\right]\\left[(43^2+9)(49^2+9)\\right]\\left[(55^2+9)(61^2+9)\\right]}{\\left[(1^2+9)(7^2+9)\\right]\\left[(13^2+9)(19^2+9)\\right]\\left[(25^2+9)(31^2+9)\\right]\\left[(37^2+9)(43^2+9)\\right]\\left[(49^2+9)(55^2+9)\\right]}=\\frac{61^2+9}{1^2+9}=\\boxed{373}.$$ ~MRENTHUSIASM\n\n## Solution 3 (Complex Numbers)\n\nIn both the numerator and the denominator, each factor is of the form",
null,
"$N^4+324=N^4+18^2$ for some positive integer",
null,
"$N.$\n\nWe factor",
null,
"$N^4+18^2$ by solving the equation",
null,
"$N^4+18^2=0,$ or",
null,
"$N^4=-18^2.$\n\n### Solution 3.1 (Polar Form)\n\nWe rewrite",
null,
"$N$ to the polar form",
null,
"$$N=r(\\cos\\theta+i\\sin\\theta)=r\\operatorname{cis}\\theta,$$ where",
null,
"$r$ is the magnitude of",
null,
"$N$ such that",
null,
"$r\\geq0,$ and",
null,
"$\\theta$ is the argument of",
null,
"$N$ such that",
null,
"$0\\leq\\theta<2\\pi.$\n\nBy De Moivre's Theorem, we have",
null,
"$$N^4=r^4\\operatorname{cis}(4\\theta)=18^2(-1),$$ from which\n\n1.",
null,
"$r^4=18^2,$ so",
null,
"$r=3\\sqrt2.$\n2.",
null,
"\\begin{cases} \\begin{aligned} \\cos(4\\theta) &= -1 \\\\ \\sin(4\\theta) &= 0 \\end{aligned}, \\end{cases} so",
null,
"$\\theta=\\frac{\\pi}{4},\\frac{3\\pi}{4},\\frac{5\\pi}{4},\\frac{7\\pi}{4}.$\n\nBy the Factor Theorem, we get",
null,
"\\begin{align*} N^4+18^2&=\\biggl(N-3\\sqrt2\\operatorname{cis}\\frac{\\pi}{4}\\biggr)\\biggl(N-3\\sqrt2\\operatorname{cis}\\frac{3\\pi}{4}\\biggr)\\biggl(N-3\\sqrt2\\operatorname{cis}\\frac{5\\pi}{4}\\biggr)\\biggl(N-3\\sqrt2\\operatorname{cis}\\frac{7\\pi}{4}\\biggr) \\\\ &=\\biggl[\\biggl(N-3\\sqrt2\\operatorname{cis}\\frac{\\pi}{4}\\biggr)\\biggl(N-3\\sqrt2\\operatorname{cis}\\frac{7\\pi}{4}\\biggr)\\biggr]\\biggl[\\biggl(N-3\\sqrt2\\operatorname{cis}\\frac{3\\pi}{4}\\biggr)\\biggl(N-3\\sqrt2\\operatorname{cis}\\frac{5\\pi}{4}\\biggr)\\biggr] \\\\ &=\\left[(N-(3+3i))(N-(3-3i))\\right]\\left[(N-(-3+3i))(N-(-3-3i))\\right] \\\\ &=\\left[((N-3)-3i)((N-3)+3i)\\right]\\left[((N+3)-3i)((N+3)+3i)\\right] \\\\ &=\\left[(N-3)^2+9\\right]\\left[(N+3)^2+9\\right]. \\end{align*} We continue with the last paragraph of Solution 2 to get the answer",
null,
"$\\boxed{373}.$\n\n~MRENTHUSIASM\n\n### Solution 3.2 (Rectangular Form)\n\nWe rewrite",
null,
"$N$ to the rectangular form",
null,
"$$N=a+bi$$ for some real numbers",
null,
"$a$ and",
null,
"$b.$\n\nNote that",
null,
"$N^2=\\pm18i,$ so there are two cases:\n\n1.",
null,
"$N^2=18i$\n\nWe have",
null,
"\\begin{align*} (a+bi)^2&=18i \\\\ a^2-b^2+2abi&=18i. \\end{align*} We need",
null,
"\\begin{cases} \\begin{aligned} a^2-b^2 &= 0 \\\\ 2ab &= 18 \\end{aligned}, \\end{cases} from which",
null,
"$(a,b)=(3,3),(-3,-3),$ or",
null,
"$N=3+3i,-3-3i.$\n\n2.",
null,
"$N^2=-18i$\n\nWe have",
null,
"\\begin{align*} (a+bi)^2&=-18i \\\\ a^2-b^2+2abi&=-18i. \\end{align*} We need",
null,
"\\begin{cases} \\begin{aligned} a^2-b^2 &= 0 \\\\ 2ab &= -18 \\end{aligned}, \\end{cases} from which",
null,
"$(a,b)=(3,-3),(-3,3),$ or",
null,
"$N=3-3i,-3+3i.$\n\nBy the Factor Theorem, we get",
null,
"\\begin{align*} N^4+18^2&=(N-(3+3i))(N-(-3-3i))(N-(3-3i))(N-(-3+3i)) \\\\ &=\\left[(N-(3+3i))(N-(3-3i))\\right]\\left[(N-(-3+3i))(N-(-3-3i))\\right] \\\\ &=\\left[((N-3)-3i)((N-3)+3i)\\right]\\left[((N+3)-3i)((N+3)+3i)\\right] \\\\ &=\\left[(N-3)^2+9\\right]\\left[(N+3)^2+9\\right]. \\end{align*} We continue with the last paragraph of Solution 2 to get the answer",
null,
"$\\boxed{373}.$\n\n~MRENTHUSIASM\n\n~ pi_is_3.14\n\n## Video Solution\n\n~Michael Penn\n\nThe problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.",
null,
""
] | [
null,
"https://latex.artofproblemsolving.com/1/6/b/16b3624dd694603e72b8f0171eb31c63ad06597c.png ",
null,
"https://latex.artofproblemsolving.com/1/e/f/1efc25fdc405baea433b65086a57c31e46c9b07e.png ",
null,
"https://latex.artofproblemsolving.com/5/6/3/563554a2361677a6fa6e4eeeea2d78d943f7699f.png ",
null,
"https://latex.artofproblemsolving.com/7/b/1/7b11f8fb8e5ea9ebc1bfd1da7de5ba3660874dd0.png ",
null,
"https://latex.artofproblemsolving.com/6/c/a/6ca03fac50be7e6b5d512118bd9920a6e83c8dc2.png ",
null,
"https://latex.artofproblemsolving.com/5/b/8/5b89b0cc43562398a38d3d662af026b83f80925b.png ",
null,
"https://latex.artofproblemsolving.com/2/a/8/2a8ef9fd2ecb50715a11b875a2d94cd2343485d9.png ",
null,
"https://latex.artofproblemsolving.com/1/3/8/138ab377aa8029df94fb07929ca94e411e7e2bc2.png ",
null,
"https://latex.artofproblemsolving.com/1/e/5/1e5a84b81a0489c4ee8f03eb0e2a4e37576ebec8.png ",
null,
"https://latex.artofproblemsolving.com/0/b/4/0b4a6d41204181432504f1108ded79e4a49e0e03.png ",
null,
"https://latex.artofproblemsolving.com/5/1/5/5151dbcc241274519dcf7d25c5b8a291135a25ef.png ",
null,
"https://latex.artofproblemsolving.com/b/0/6/b06421849d4cc6b085e007ca18d750f4d9c0690f.png ",
null,
"https://latex.artofproblemsolving.com/4/b/f/4bff1d2c80c44994e54758cdf9ba4aa2dbdd5e98.png ",
null,
"https://latex.artofproblemsolving.com/1/e/5/1e5a84b81a0489c4ee8f03eb0e2a4e37576ebec8.png ",
null,
"https://latex.artofproblemsolving.com/0/b/4/0b4a6d41204181432504f1108ded79e4a49e0e03.png ",
null,
"https://latex.artofproblemsolving.com/5/1/5/5151dbcc241274519dcf7d25c5b8a291135a25ef.png ",
null,
"https://latex.artofproblemsolving.com/7/5/2/752d13d3dd00431526cdfb3d33c782893fc0cfdb.png ",
null,
"https://latex.artofproblemsolving.com/7/4/1/741f6b87cdd80f84611698a13a47eb94ba7666cf.png ",
null,
"https://latex.artofproblemsolving.com/f/c/9/fc97ef67268cd4e91bacdf12b8901d7036c9a056.png ",
null,
"https://latex.artofproblemsolving.com/b/3/e/b3e295d3a620f5c7d6cf067722e7c651bfdacae7.png ",
null,
"https://latex.artofproblemsolving.com/b/5/5/b55ca7a0aa88ab7d58f4fc035317fdac39b17861.png ",
null,
"https://latex.artofproblemsolving.com/f/c/9/fc97ef67268cd4e91bacdf12b8901d7036c9a056.png ",
null,
"https://latex.artofproblemsolving.com/3/b/0/3b082fa500291bf2be24d7655214c4802c3e237f.png ",
null,
"https://latex.artofproblemsolving.com/5/2/e/52e8ed7a3ba22130ad3984eb2cd413406475a689.png ",
null,
"https://latex.artofproblemsolving.com/f/c/9/fc97ef67268cd4e91bacdf12b8901d7036c9a056.png ",
null,
"https://latex.artofproblemsolving.com/0/b/4/0b448bd981311c4251d88961af1b0a2bb5fd0000.png ",
null,
"https://latex.artofproblemsolving.com/d/4/c/d4c6bf2166420d7c8b4471d32730bd8ec3b3e943.png ",
null,
"https://latex.artofproblemsolving.com/7/3/6/736e8692582e56e5a20c81413728746fe95ef9f7.png ",
null,
"https://latex.artofproblemsolving.com/f/c/9/fc917cdab580d802012456ba053908a85c81beaf.png ",
null,
"https://latex.artofproblemsolving.com/0/0/0/0002923aed72681907485a890abaa0900ca22ce5.png ",
null,
"https://latex.artofproblemsolving.com/9/d/3/9d3baa6b3ca67736a003da690546bc8cd78c7cd4.png ",
null,
"https://latex.artofproblemsolving.com/7/c/f/7cfd656f858320aa6c1273e618187ab9c5b7a4f3.png ",
null,
"https://latex.artofproblemsolving.com/d/e/6/de655bac7085c838d1bf7ed292727d2927ed1e15.png ",
null,
"https://latex.artofproblemsolving.com/f/c/9/fc97ef67268cd4e91bacdf12b8901d7036c9a056.png ",
null,
"https://latex.artofproblemsolving.com/1/0/b/10bc33eabe8b7dcc1fa4ea1af0efe8523a165627.png ",
null,
"https://latex.artofproblemsolving.com/c/7/d/c7d457e388298246adb06c587bccd419ea67f7e8.png ",
null,
"https://latex.artofproblemsolving.com/6/5/8/6583a799ce46b06b1cceb493abb45a07c88fd6c4.png ",
null,
"https://latex.artofproblemsolving.com/6/c/f/6cf5ead1e976bb36d1856a60c044358606455b51.png ",
null,
"https://latex.artofproblemsolving.com/e/7/1/e714e12871f2bd818db83a5354b058acd1f1d959.png ",
null,
"https://latex.artofproblemsolving.com/6/8/8/68850adef596068ba5f036fc46b0eeaa5a13ab31.png ",
null,
"https://latex.artofproblemsolving.com/f/8/5/f855dbc5edb34b59dc9d379b108ae24d93c557b9.png ",
null,
"https://latex.artofproblemsolving.com/c/c/2/cc277eaf68327f54b73fa15d8043f0fbb1d0c8c2.png ",
null,
"https://latex.artofproblemsolving.com/0/3/f/03f7eb12e9d0c3e249d075e3c2580c3eb899b19b.png ",
null,
"https://latex.artofproblemsolving.com/4/f/8/4f87d0a9159232adda5a63524606dca1dbda55cd.png ",
null,
"https://latex.artofproblemsolving.com/8/d/6/8d68c3607b1b7fdcb33ed9b97e03e508bc9f3ce4.png ",
null,
"https://latex.artofproblemsolving.com/3/2/a/32a3bcd245752bc0f54b8d2244d5acfab0cb44d1.png ",
null,
"https://latex.artofproblemsolving.com/6/9/0/6904792dea6e0385c9697cb12523ef8ac27437ae.png ",
null,
"https://latex.artofproblemsolving.com/1/6/4/1640a5d0d01c7cd98041190c1958ed162e035dbc.png ",
null,
"https://latex.artofproblemsolving.com/8/2/6/82667c7733f2e1278f3f4f9c56be32f5223efc8d.png ",
null,
"https://latex.artofproblemsolving.com/d/e/6/de655bac7085c838d1bf7ed292727d2927ed1e15.png ",
null,
"https://wiki-images.artofproblemsolving.com//8/8b/AMC_logo.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8407399,"math_prob":1.0000046,"size":1922,"snap":"2023-40-2023-50","text_gpt3_token_len":518,"char_repetition_ratio":0.11626694,"word_repetition_ratio":0.2208589,"special_character_ratio":0.24141519,"punctuation_ratio":0.06321839,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999939,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102],"im_url_duplicate_count":[null,8,null,10,null,2,null,2,null,2,null,2,null,2,null,2,null,6,null,null,null,6,null,3,null,3,null,6,null,null,null,6,null,3,null,3,null,null,null,2,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,3,null,3,null,3,null,3,null,2,null,6,null,null,null,3,null,null,null,null,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T14:11:56Z\",\"WARC-Record-ID\":\"<urn:uuid:129551bb-1bba-4706-a229-b63462219477>\",\"Content-Length\":\"51089\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:32912bc2-00cc-4233-9450-dcc911309966>\",\"WARC-Concurrent-To\":\"<urn:uuid:984cb6dc-2699-4ab4-abc2-52e956beb8eb>\",\"WARC-IP-Address\":\"172.67.69.208\",\"WARC-Target-URI\":\"https://artofproblemsolving.com/wiki/index.php/1987_AIME_Problems/Problem_14\",\"WARC-Payload-Digest\":\"sha1:VCC43JIYWIREM72XRFSC6JALGEJ2MAWN\",\"WARC-Block-Digest\":\"sha1:5ZFGHLQHBEQVADTAM2GKGHMQURIGHF7W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506481.17_warc_CC-MAIN-20230923130827-20230923160827-00870.warc.gz\"}"} |
https://www.dailyfx.com/forex/education/trading_tips/post_of_the_day/2013/02/01/How_to_Use_Alternating_Waves_in_a_Forex_Strategy.html | [
"We use a range of cookies to give you the best possible browsing experience. By continuing to use this website, you agree to our use of cookies.\n\nEUR/USD\nBullish\nLow\nHigh\nLong Short\n\nNote: Low and High figures are for the trading day.\n\nData provided by\nNews\nOil - US Crude\nBullish\nLow\nHigh\nLong Short\n\nNote: Low and High figures are for the trading day.\n\nData provided by\nNews\nWall Street\nMixed\nLow\nHigh\nLong Short\n\nNote: Low and High figures are for the trading day.\n\nData provided by\nNews\nGold\nBullish\nLow\nHigh\nLong Short\n\nNote: Low and High figures are for the trading day.\n\nData provided by\nNews\nGBP/USD\nBullish\nLow\nHigh\nLong Short\n\nNote: Low and High figures are for the trading day.\n\nData provided by\nNews\nUSD/JPY\nBearish\nLow\nHigh\nLong Short\n\nNote: Low and High figures are for the trading day.\n\nData provided by\nNews",
null,
"# How to Use Alternating Waves in a Forex Strategy (Part 1)",
null,
"2014-07-30 03:10:00\nJeremy Wagner, CEWA-M, Head of Education\n\nArticle Summary: One forex strategy many traders use is a technical analysis strategy of comparing the length of two alternating waves. Many times, these waves are related in distance by Fibonacci ratios. This is the first of a multi-part series for projecting wave extensions using the Fibonacci expansion tool.\n\nThe Fibonacci sequence provides the foundation of trading decisions for many traders through the golden ratio. This is the first part of a multi-piece sequence on trading with Fibonacci ratios in determining price projections.\n\nFibonacci can be a complicated subject because there are a wide variety of uses and interpretations for it. My objective of the first part of this series is to focus using Fibonacci in alternating waves. Future articles will use the length of alternating waves to aid us in determining entry and exit points of a pair.\n\nTo get started, it is important to understand some common wave relationships between the alternating waves. For those who follow Elliott Wave Theory, you may find some of this familiar and a good refresher.\n\nWhat are Alternating Waves?\n\nIn a three wave move, the alternating waves will be wave A and wave C (or if you label with numbers, wave 1 and wave 3). So, in the picture below of an uptrend, the alternating waves are the first wave of the uptrend and the 2nd leg of the uptrend which happens to be the C leg of the pattern (the waves colored in blue).",
null,
"So when referencing alternating waves, we are referring to waves ‘1’ and ‘3’ or waves ‘A’ and ‘C’.\n\nWhat are the common wave relationships?\n\nThere are many ratios that traders use. We will try to keep things simple and focus only on patterns showing a .618, 1.00, 1.618 or 2.618 relationship.\n\nThese ratios are derived from the Fibonacci number sequence. The Fibonacci sequence is a numerical series where every number is the sum of the preceding two numbers. Here is an example of the first several Fibonacci numbers:\n\n1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, etc\n\nThe ratios mentioned at the beginning of this section are obtained by manipulating the numbers in this sequence.\n\n.618 = Take any number of the sequence and divide it by the number to the right. As this sequence becomes larger, this ratio closes in to .618.\n\n1.618 = Take any number of the sequence and divide it by the number to the left. This ratio closes in on 1.618 as the sequence becomes larger.\n\n2.618 = Take any number of the sequence and divide it by the number 2 positions to the left. This ratio closes in to 2.618 as the sequence becomes larger.\n\nWhere is the 1.00 description?\n\nYou may have noticed that the 1.00 description was not provided above. The 1.00 is not a part of the Fibonacci sequence (unless you want to get cute and take a number and divide it by itself). However, this 1.00 figure is very important when assessing alternating waves. A relationship of 1.00 means the alternating waves are equal in length, which is a common wave relationship.\n\nSo, if the wave ‘C’ is approaching the 1.00 wave relationship, the 1.00 levels represents the price point where the length of wave ‘A’ is equal to the length of wave ‘C’.\n\nHow do I find and draw the Fibonacci levels on the charts?\n\nFirst, assess the direction of the trend. If the trend is up, we want to look for an up-down sequence to draw the Fibonacci levels on the chart. 
(This would represent waves A and B of the 3 wave move.)",
null,
"Then, when prices retrace only a portion of the first up leg, this leaves the pair exposed to an upside break to new highs. Therefore, this partial retracement of the up leg becomes the ‘B’ leg of the three wave pattern. We will use the first two waves to estimate potential stopping points for the 3rd leg of the pattern.\n\nIf these potential price projections are near other levels of resistance, then we can build confidence in establishing short positions at the resistance zone.We can use the alternating wave relationships to pinpoint potential reversal points. Above, we can see how wave ‘C’ comes very close to reaching the 1.00 (which is the 100%) level then reverses. This means that wave ‘C’ was about the same length of wave ‘A’ which is common in a three wave movement."
] | [
null,
"https://a.c-dn.net/b/3nICkQ/How_to_Use_Alternating_Waves_in_a_Forex_Strategy_body_1.png",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACH5BAEAAAAALAAAAAABAAEAAAICRAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACH5BAEAAAAALAAAAAABAAEAAAICRAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACH5BAEAAAAALAAAAAABAAEAAAICRAEAOw==",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9271224,"math_prob":0.933131,"size":4670,"snap":"2020-34-2020-40","text_gpt3_token_len":1040,"char_repetition_ratio":0.15366481,"word_repetition_ratio":0.05263158,"special_character_ratio":0.22890793,"punctuation_ratio":0.11298838,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9747538,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,4,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-20T21:51:22Z\",\"WARC-Record-ID\":\"<urn:uuid:ca548aa7-1635-44ed-b1da-528e6c0c5361>\",\"Content-Length\":\"312382\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1250baa1-1e06-4fb1-b64e-98e87f3ab2e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c825e32-e6df-4855-ba96-754cf54820cb>\",\"WARC-IP-Address\":\"23.67.95.33\",\"WARC-Target-URI\":\"https://www.dailyfx.com/forex/education/trading_tips/post_of_the_day/2013/02/01/How_to_Use_Alternating_Waves_in_a_Forex_Strategy.html\",\"WARC-Payload-Digest\":\"sha1:LBG6Y3YPIRI3JXND57RY5M5F7C5OXOHW\",\"WARC-Block-Digest\":\"sha1:GLE4YWMFO4PNI6ZXGJOTT2CG62HBIAWX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400198652.6_warc_CC-MAIN-20200920192131-20200920222131-00230.warc.gz\"}"} |
http://www.mathematicalfoodforthought.com/2006/02/divisibility-topic-number-theory-level_22.html | [
"## Wednesday, February 22, 2006\n\n### Divisibility. Topic: Number Theory. Level: Olympiad.\n\nProblem: (Gabriel Carroll Original) Find all positive integers $a,b,c$ such that $a|(bc+1)$, $b|(ca+1)$, $c|(ab+1)$.\n\nSolution: We first claim that they must all be pairwise relatively prime. Assume the opposite, that $gcd(a,b) = k \\neq 1$. But then $bc+1$ is not divisible by $k$ so $a$ cannot divide it. Contradiction. So $a,b,c$ are pairwise relatively prime.\n\nMultiply the divisibilities together. We then have\n\n$abc|(ab+1)(bc+1)(ca+1) \\Rightarrow abc|[(abc)^2+(a+b+c)(abc)+ab+bc+ca+1] \\Rightarrow abc|(ab+bc+ca+1)$.\n\nSuppose $min(a,b,c) \\ge 3$. Then we clearly have $a,b,c$ distinct (or they wouldn't be relatively prime), so assume WLOG that $3 \\le a < b < c$.\n\nThen $abc \\ge 3bc = bc+bc+bc \\ge (ab+1)+bc+(ca+1) > ab+bc+ca+1$. But if $abc|(ab+bc+ca+1)$, we have $abc \\le ab+bc+ca+1$. Contradiction. So $min(a,b,c) < 3$.\n\nThen we check $a = 1, 2$. Again, we assume WLOG that $a \\le b \\le c$.\n\n$a = 1$: We have $b|(c+1)$ and $c|(b+1)$ so $b \\le c+1$, $c \\le b+1$. Then since $b \\le c$, we have $b = c$ or $b+1 = c$. If $b = c$, we have $b|(b+1) \\Rightarrow b|1 \\Rightarrow b = 1$. If $b+1 = c$, we have $b|(b+2) \\Rightarrow b|2 \\Rightarrow b = 1, 2$. This gives us $(b,c) = (1,1); (1,2); (2,3)$.\n\n$a = 2$: We have $b|(2c+1)$ and $c|(2b+1)$. Note that $2b+1$ cannot have any divisors between $b$ and itself. This is true because other than itself its next largest possible divisor is $\\frac{2b+1}{2}$ and the other ones are all less than $b$. But since $\\frac{2b+1}{2}$ is never an integer, this is impossible. Therefore, since $b \\le c$, we have $c = 2b+1$. This gives us $b|(4b+3) \\Rightarrow b|3 \\Rightarrow b = 3$, so $(b,c) = (3,7)$ is a solution.\n\nHence our solutions are $(a,b,c) = (1,1,1); (1,1,2); (1,2,3); (2,3,7)$ and all permutations of them. 
QED.\n\n--------------------\n\nComment: This problem required some messy casework and bounding, and it's really easy to miss solutions (especially the last one). Thorough and rigorous arguments must be maintained to find them all.\n\n1.",
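The solution set can also be double-checked by brute force. The search cutoff of 60 below is arbitrary; the proof above shows nothing beyond the four base triples exists:

```python
# Exhaustive search over a <= b <= c up to a cutoff; permutations are implied.
LIMIT = 60
found = sorted(
    (a, b, c)
    for a in range(1, LIMIT + 1)
    for b in range(a, LIMIT + 1)
    for c in range(b, LIMIT + 1)
    if (b * c + 1) % a == 0 and (c * a + 1) % b == 0 and (a * b + 1) % c == 0
)
print(found)  # [(1, 1, 1), (1, 1, 2), (1, 2, 3), (2, 3, 7)]
```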
null,
"Look at that not even Gabriel Carroll can stump you. Haha, hey did you find out abotu RSI yet?\n\n2.",
null,
"Haha. Well this is one of the problems in his Easier-Moderate packet =P. But I haven't heard from RSI yet; they don't get back to us until March 31, which is a long ways from now still.\n\n3.",
null,
"You know you want to go to SIMUW since it's soooo much cooler than any other summer program. Only loser people go to places like RSI.\n\n4.",
null,
"hello,i am a math olympiad student in china.would you plase tell me if i could post in your blog?i love your blog style very much.and my blog cannot post math quations.\n\n5.",
null,
"6.",
null,
""
] | [
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8430301,"math_prob":0.9998746,"size":3165,"snap":"2023-40-2023-50","text_gpt3_token_len":1031,"char_repetition_ratio":0.10439734,"word_repetition_ratio":0.026755853,"special_character_ratio":0.3579779,"punctuation_ratio":0.16893733,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99988985,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T07:03:03Z\",\"WARC-Record-ID\":\"<urn:uuid:95bc9252-34a8-4cf3-9b9b-aa4272abb1a0>\",\"Content-Length\":\"64947\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ea4a0716-44b5-45c5-8b71-2e3d67a65f77>\",\"WARC-Concurrent-To\":\"<urn:uuid:de448760-cfc2-4225-a915-6eddaa179f3b>\",\"WARC-IP-Address\":\"172.253.63.121\",\"WARC-Target-URI\":\"http://www.mathematicalfoodforthought.com/2006/02/divisibility-topic-number-theory-level_22.html\",\"WARC-Payload-Digest\":\"sha1:LD7ZYNY3ER5DYA2M335IYTEFYPCFRQHT\",\"WARC-Block-Digest\":\"sha1:COOQYX4SEO4HYESAL7SDRYCZMBSUREAL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100172.28_warc_CC-MAIN-20231130062948-20231130092948-00426.warc.gz\"}"} |
https://www.codesofinterest.com/2016/11/keras-image-dim-ordering.html?showComment=1500499221881 | [
"# What is the image_dim_ordering parameter in Keras, and why is it important\n\nUpdate 9/May/2017: With Keras v2, the image_dim_ordering parameter has been renamed to image_data_format. Check my updated post on how to configure it.\n\nIf you remember my earlier post about switching Keras between TensorFlow and Theano backends, you would have seen that we switched the image_dim_ordering parameter also when switching the backend. For TensorFlow, image_dim_ordering should be \"tf\", while for Theano, it should be \"th\".",
null,
"The keras.json file contains the Keras configuration options\n\nSo, what is this parameter, and where does it affect?\n\nIt has to do with how each of the backends treat the data dimensions when working with multi-dimensional convolution layers (such as Convolution2D, Convolution3D, UpSampling2D, Copping2D, … and any other 2D or 3D layer). Specifically, it defines where the 'channels' dimension is in the input data.\n\nBoth TensorFlow and Theano expects a 4 dimensional tensor as input. But where TensorFlow expects the 'channels' dimension as the last dimension (index 3, where the first is index 0) of the tensor – i.e. tensor with shape (samples, rows, cols, channels) – Theano will expect 'channels' at the second dimension (index 1) – i.e. tensor with shape (samples, channels, rows, cols). The outputs of the convolutional layers will also follow this pattern.\n\nSo, the image_dim_ordering parameter, once set in ~/.keras/keras.json, will tell Keras which dimension ordering to use in its convolutional layers.\n\nHowever, if you like to override the dimension ordering programmatically, you do it by using the dim_ordering parameter when initializing a convolutional layer:\n`````` model = Sequential()\n\nmodel.add(Convolution2D(64, 3, 3, border_mode='same', input_shape=(3, 256, 256), dim_ordering='th'))\n``````\n\nThe dim_ordering parameter is available in all the multi-dimensional convolution layers.\n\nRelated posts:\nimage_data_format vs. image_dim_ordering in Keras v2\n\nhttps://keras.io/layers/convolutional/#convolution2d\n\nBuild Deeper: The Path to Deep Learning\n\nLearn the bleeding edge of AI in the most practical way: By getting hands-on with Python, TensorFlow, Keras, and OpenCV. Go a little deeper...\n\n1.",
null,
"\"TensorFlow expects the 'channels' dimension to be at index 4 of the tensor – i.e. tensor with shape (samples, rows, cols, channels) – Theano will expect 'channels' at index 1 – i.e. tensor with shape (samples, channels, rows, cols)\"\n\nAt the risk of sounding pedantic, it seems to me that you're using two different indexing techniques (base 1 and base 0). I believe you mean to say one of the following:\n\na) TF uses index 4, and Theano uses index 2\nb) TF uses index 3, and Theano uses index 1\n\n1.",
null,
"Yes, you're right. It sounds confusing.\nI have now updated it to (hopefully) make it clear.\n\nThanks,\n\n2.",
null,
""
] | [
null,
"https://1.bp.blogspot.com/-EPtRLtKaCeU/XhLs2bExL8I/AAAAAAAAq6A/qawLNa6aDt4Dg7bFCux5nSVnt2k41AGogCLcBGAsYHQ/s320/Karas%2BJSON.PNG",
null,
"https://www.blogger.com/img/blogger_logo_round_35.png",
null,
"https://www.blogger.com/img/blogger_logo_round_35.png",
null,
"https://www.blogger.com/img/blogger_logo_round_35.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81682116,"math_prob":0.8385178,"size":2756,"snap":"2021-21-2021-25","text_gpt3_token_len":668,"char_repetition_ratio":0.14825581,"word_repetition_ratio":0.17866005,"special_character_ratio":0.22859216,"punctuation_ratio":0.17475729,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98601985,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,9,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-14T00:58:12Z\",\"WARC-Record-ID\":\"<urn:uuid:b575996d-16b6-48d3-b36a-677057af3a97>\",\"Content-Length\":\"103676\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:375d115c-90b7-4ddb-8d00-c3aa04e313d6>\",\"WARC-Concurrent-To\":\"<urn:uuid:09cb3363-a5cf-4fd9-85eb-9028db7037b9>\",\"WARC-IP-Address\":\"142.250.81.211\",\"WARC-Target-URI\":\"https://www.codesofinterest.com/2016/11/keras-image-dim-ordering.html?showComment=1500499221881\",\"WARC-Payload-Digest\":\"sha1:JHFSGXGXC2BSXMQPA4P7QHQBZCTFJ3SQ\",\"WARC-Block-Digest\":\"sha1:RS23452HZUSSBRI3EMR4LEG3UBSXKW44\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989616.38_warc_CC-MAIN-20210513234920-20210514024920-00066.warc.gz\"}"} |
https://teamtreehouse.com/community/can-someone-explain-the-random-number-equation | [
"",
null,
"Can someone explain the random number equation?\n\nI am really enjoying JavaScript so far. Everything makes sense until we always inevitably come back to this random number equation. I just can't seem to wrap my head around it. Can someone explain it to me?\n\nThe equation is: Math.floor(Math.random() * (6 - 1 + 1)) + 1;\n\nWhich from what I understand means: Math.floor(Math.random()*(max-min+1)+min);\n\nWhy do we have to do the minus min plus one? Why can't it be Math.floor(Math.random() * 6) + 1\n\nThanks everyone.",
null,
""
] | [
null,
"https://secure.gravatar.com/avatar/04813038ed9bcc623f759a19c2ece51c",
null,
"https://uploads.teamtreehouse.com/production/profile-photos/969492/micro_tool_chess_set_2.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93342,"math_prob":0.9659796,"size":1555,"snap":"2019-43-2019-47","text_gpt3_token_len":399,"char_repetition_ratio":0.09929078,"word_repetition_ratio":0.0,"special_character_ratio":0.27909967,"punctuation_ratio":0.11746988,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98939985,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T03:31:54Z\",\"WARC-Record-ID\":\"<urn:uuid:28503b06-899c-463b-8ba0-d673fa4ab39a>\",\"Content-Length\":\"55597\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:20a01e83-b840-4a06-a322-3503a0fa219c>\",\"WARC-Concurrent-To\":\"<urn:uuid:a8607349-a34c-4f03-9473-5c4cfa96135c>\",\"WARC-IP-Address\":\"18.211.40.93\",\"WARC-Target-URI\":\"https://teamtreehouse.com/community/can-someone-explain-the-random-number-equation\",\"WARC-Payload-Digest\":\"sha1:7NT7ZIQOBYLUHZFTK43R5DAQGHZZ43FH\",\"WARC-Block-Digest\":\"sha1:3BOIUHZZDODIIY5WU7LRWTWLESJGQSTD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986688674.52_warc_CC-MAIN-20191019013909-20191019041409-00045.warc.gz\"}"} |
https://math.stackexchange.com/questions/2986486/proof-that-the-circumcenters-of-sub-triangles-forms-a-triangle-congruent-with-th | [
"# Proof that the circumcenters of sub triangles forms a triangle congruent with the original triangle\n\nLet $$\\triangle ABC$$ be a triangle with orthocenter H and let $$O_A, O_B, O_C$$ be the circumcenters of triangles $$\\triangle BCH, \\triangle CAH, \\triangle ABH$$, respectively. Prove that the $$\\triangle O_AO_BO_C$$ is congruent with $$\\triangle ABC$$.\n\nCan anyone help walk me through the proof?\n\n## 1 Answer\n\nPlease insert a figure always, this shows the effort to understand the problem.",
null,
"We consider thus the triangle $$\\Delta ABC$$ with orthocenter $$H$$. Let $$A', B', C'$$ be the mid points of the segments $$AH$$, $$BH$$, $$CH$$. Then $$O_A$$, the circumcenter of the triangle $$\\Delta HBC$$ is at the intersection of the side bisectors of the sides $$HB$$ and $$HC$$, so $$O_A$$ is the intersection of the perpendiculars in $$B'$$ and $$C'$$ on $$HB$$, respectively $$HC$$. So\n\n• $$O_A$$ and $$O_B$$ are on the perpendicular in $$C'$$ on $$CH$$. So $$O_AO_B$$ is this side bisector\n• Similarly, $$O_BOC$$ is the side bisector of $$AH$$, passing through $$A'$$, and\n• similarly, $$O_CO_A$$ is the side bisector of $$BH$$, passing through $$B'$$.\n\nThe sides of $$\\Delta ABC$$ and $$O_AO_BO_C$$ are thus respectively parallel, being in pairs perpendicular on the heights of $$\\Delta ABC$$. So we have a similarity of the two triangles. Tho show their congruence (i.e. \"equality\") we need one more \"metric relation\". Well, let us get them all as follows. First of all $$\\frac 12= \\frac{B'C'}{BC}= \\frac{C'A'}{CA}= \\frac{A'B'}{AB}\\ ,$$ relations of mid segments in the triangles $$\\Delta HBC$$, $$\\Delta HCA$$, $$\\Delta HAB$$.\n\nFurthermore we have $$B'C'\\| O_BO_C$$, and the similar relations $$C'A'\\|O_CO_A$$, $$A'B'\\|O_AO_B$$. How is then the triangle $$\\Delta A'B'C'$$ placed w.r.t. $$\\Delta O_AO_BO_C$$? Is it formed by mid segments? Yes, because for instance, using the parallelities: $$\\frac{O_AB'}{B'O_C} = \\frac{O_AC'}{C'O_B} = \\frac{O_CA'}{A'O_B} = \\frac{O_CB'}{B'O_A} \\ .$$ Thew two proportions at the beginning, and the end are reciprocal. So $$C'$$ is the mid point of $$O_AO_B$$. And the other relations. We get then the metric relation(s) $$BC = 2B'C'=O_BO_C$$ (and the other two).\n\n$$\\square$$\n\nBonus: The height in $$O_A$$ in $$\\Delta O_AO_BO_C$$ is also perpendicular on $$B'C'$$, so it is the side bisector of $$B'C'$$. And so it passes through the circumcenter of $$\\Delta ABC$$. 
Same for the other vertices in $$\\Delta O_AO_BO_C$$, so its orthocenter is the circumcenter of $$\\Delta ABC$$. And conversely, as it is seen in the picture."
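The side pairing from the proof ($$O_BO_C = BC$$, $$O_CO_A = CA$$, $$O_AO_B = AB$$) can be sanity-checked numerically for an arbitrary non-right triangle. The coordinates below are arbitrary, and the check is of course no substitute for the synthetic proof:

```python
def circumcenter(A, B, C):
    # Standard closed form for the circumcenter of three points.
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy)

def orthocenter(A, B, C):
    # Euler-line identity: H = A + B + C - 2*O, with O the circumcenter.
    ox, oy = circumcenter(A, B, C)
    return (A[0] + B[0] + C[0] - 2 * ox, A[1] + B[1] + C[1] - 2 * oy)

def dist(P, Q):
    return ((P[0] - Q[0]) ** 2 + (P[1] - Q[1]) ** 2) ** 0.5

A, B, C = (0.0, 0.0), (7.0, 1.0), (2.0, 5.0)
H = orthocenter(A, B, C)
O_A = circumcenter(B, C, H)
O_B = circumcenter(C, A, H)
O_C = circumcenter(A, B, H)

# Side lengths match pairwise: O_B O_C = BC, O_C O_A = CA, O_A O_B = AB
print(abs(dist(O_B, O_C) - dist(B, C)) < 1e-9)  # True
print(abs(dist(O_C, O_A) - dist(C, A)) < 1e-9)  # True
print(abs(dist(O_A, O_B) - dist(A, B)) < 1e-9)  # True
```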
] | [
null,
"https://i.stack.imgur.com/6HVtl.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7953798,"math_prob":1.0000085,"size":2049,"snap":"2021-04-2021-17","text_gpt3_token_len":631,"char_repetition_ratio":0.1403423,"word_repetition_ratio":0.015197569,"special_character_ratio":0.31820402,"punctuation_ratio":0.14105794,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000099,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-18T20:48:39Z\",\"WARC-Record-ID\":\"<urn:uuid:911f962c-e05a-4f07-9570-d27547e6fd91>\",\"Content-Length\":\"166184\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:453aaa41-12d1-4e29-9919-2d2f9b5d69f0>\",\"WARC-Concurrent-To\":\"<urn:uuid:501cd120-96b5-4504-8cfa-f6947461a7c8>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2986486/proof-that-the-circumcenters-of-sub-triangles-forms-a-triangle-congruent-with-th\",\"WARC-Payload-Digest\":\"sha1:A7CZ3CUP2SPAO63HMUHBLNLCQQBOJF7P\",\"WARC-Block-Digest\":\"sha1:I2FJMDNB5OOMHL3I3ZC6JW2FKRIXWWH6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038860318.63_warc_CC-MAIN-20210418194009-20210418224009-00431.warc.gz\"}"} |
https://www.teachoo.com/4168/1824/Example-26---Let-X-denote-number-of-hours-you-study-during/category/Chapter-13-Class-12th-Probability/ | [
"Chapter 13 Class 12 Probability\n\nClass 12\nImportant Questions for exams Class 12",
null,
"",
null,
"",
null,
"",
null,
"### Transcript

Question 5 Let X denote the number of hours you study during a randomly selected school day. The probability that X can take the values x has the following form, where k is some unknown constant.

P(X = x) = 0.1, if x = 0
P(X = x) = kx, if x = 1 or 2
P(X = x) = k(5 − x), if x = 3 or 4
P(X = x) = 0, otherwise

(a) Find the value of k

Making it in tabular format: since X is a random variable, the sum of its probabilities is equal to 1.
P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4) = 1
0.1 + k + 2k + 2k + k = 1
6k = 1 − 0.1
6k = 0.9
k = 0.9/6
k = 0.15

Question 5 (b) What is the probability that you study at least two hours? Exactly two hours? At most two hours?

From the probability distribution table:
P(you study at least two hours) = P(X ≥ 2) = P(X = 2) + P(X = 3) + P(X = 4) = 2k + 2k + k = 5k = 5 × 0.15 = 0.75
P(you study exactly two hours) = P(X = 2) = 2k = 2 × 0.15 = 0.30
P(you study at most two hours) = P(X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2) = 0.1 + k + 2k = 0.1 + 3k = 0.1 + 0.45 = 0.55",
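The transcript's arithmetic can be reproduced exactly with rational numbers (this is an illustrative check, not part of the original solution):

```python
from fractions import Fraction

def p(x, k):
    """P(X = x) for the distribution in the question."""
    if x == 0:
        return Fraction(1, 10)
    if x in (1, 2):
        return k * x
    if x in (3, 4):
        return k * (5 - x)
    return Fraction(0)

# Probabilities sum to 1:  1/10 + k + 2k + 2k + k = 1  =>  6k = 9/10
k = (1 - Fraction(1, 10)) / 6
assert sum(p(x, k) for x in range(5)) == 1

print(k)                                # 3/20  (= 0.15)
print(sum(p(x, k) for x in (2, 3, 4)))  # 3/4   P(X >= 2) = 0.75
print(p(2, k))                          # 3/10  P(X = 2)  = 0.30
print(sum(p(x, k) for x in (0, 1, 2)))  # 11/20 P(X <= 2) = 0.55
```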
null,
""
] | [
null,
"https://d1avenlh0i1xmr.cloudfront.net/3bd3aae9-6d21-42f6-9f5e-1549616efb67/slide29.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/2c355df3-18c2-4d3c-bfa8-ea9af4e5ab80/slide30.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/3386bb2f-7b3d-48e9-8a64-b6816888f958/slide31.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/6f492e12-dee3-40ee-8e7a-89f661142f45/slide32.jpg",
null,
"https://www.teachoo.com/static/misc/Davneet_Singh.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8890415,"math_prob":0.9999943,"size":2658,"snap":"2023-14-2023-23","text_gpt3_token_len":1011,"char_repetition_ratio":0.26337603,"word_repetition_ratio":0.30518234,"special_character_ratio":0.37133184,"punctuation_ratio":0.09171076,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000068,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-31T22:16:18Z\",\"WARC-Record-ID\":\"<urn:uuid:4e1f054b-02be-43f9-845c-3ff392f9beaf>\",\"Content-Length\":\"218232\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:089acba9-54b8-4d75-9782-9de1b8bce697>\",\"WARC-Concurrent-To\":\"<urn:uuid:528d16e3-d4b7-4799-aa13-cf1b2c2e84ed>\",\"WARC-IP-Address\":\"104.21.90.10\",\"WARC-Target-URI\":\"https://www.teachoo.com/4168/1824/Example-26---Let-X-denote-number-of-hours-you-study-during/category/Chapter-13-Class-12th-Probability/\",\"WARC-Payload-Digest\":\"sha1:53PBPVPWDLKSYK23GFMR5T4XXVKMSAD5\",\"WARC-Block-Digest\":\"sha1:32PKA4FO2E4SWYIEQZTZAF7GVFM7LQ7I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224647459.8_warc_CC-MAIN-20230531214247-20230601004247-00136.warc.gz\"}"} |
https://repositorio.usp.br/item/000738742 | [
"# Infinite dimensional dynamics described by ordinary differential equations (1992)\n\n• Authors:\n• Autor USP: CARVALHO, ALEXANDRE NOLASCO DE - ICMC\n• Unidade: ICMC\n• Assunto: SISTEMAS DINÂMICOS\n• Language: Inglês\n• Abstract: Due to the complexity of infinite dimensional dynamical systems, it is of great interest for mathematicians and applied scientists, to identify those problems that can be regarded as ordinary differential equations. The purpose of this work is to identify a class of infinite dimensional dynamical systems for which we are able to reduce the study of the large time dynamics to the study of the large time dynamics of an ordinary differential equation. An abstract theory is developed to include several reaction diffusion problems with large diffusion and some parabolic problems for which the diffusion coefficient is large in parts of the region ando becomes small in some other parts of the region. Using the same ideas some damped hyperbolic problems can be regarded as ordinary differential equations for the purpose of studying the asymptotic dynamics. In several examples the upper semicontinity of attractors of the infinite dimensional dynamical system with respect to the atractor of an ordinary differential equation is obtained. In some cases, topological equivalence between the attractors is observed. For a few cases the equality of attractors is proved. The techniques are the invariant manifold theory, a detailed analysis of an eigenvalue problem and some a priori estimates which are usually obtained by considering a liapunov functionnal\n• Imprenta:\n• Data da defesa: 00.00.1992\n\n• ##### How to cite\nA citação é gerada automaticamente e pode não estar totalmente de acordo com as normas\n\n• ABNT\n\nCARVALHO, Alexandre Nolasco de; HALE, Jack K. Infinite dimensional dynamics described by ordinary differential equations. 1992.Universidade de São Paulo, Atlanta, 1992.\n• APA\n\nCarvalho, A. N. de, & Hale, J. K. (1992). 
Infinite dimensional dynamics described by ordinary differential equations. Universidade de São Paulo, Atlanta.\n• NLM\n\nCarvalho AN de, Hale JK. Infinite dimensional dynamics described by ordinary differential equations. 1992 ;\n• Vancouver\n\nCarvalho AN de, Hale JK. Infinite dimensional dynamics described by ordinary differential equations. 1992 ;\n\nDigital Library of Intellectual Production of Universidade de São Paulo 2012 - 2021"
https://akit.cyber.ee/term/10731

# proof (pruuf)

Definition: a unit of measure for the ethanol content of alcoholic beverages.
- In the US: proof = 2 × alcohol by volume (ABV, in percent)
- In the UK, until 1980: proof = 1.75 × ABV

Wiktionary:
11. (US) a measure of the alcohol content of liquor; originally, in Britain, 100 proof was defined as 57.1% by volume (no longer used). In the US, 100 proof means that the alcohol content is 50% of the total volume of the liquid; thus, absolute alcohol would be 200 proof.

Overview: https://en.wikipedia.org/wiki/Alcohol_proof
https://www.colorhexa.com/020db1
"# #020db1 Color Information\n\nIn a RGB color space, hex #020db1 is composed of 0.8% red, 5.1% green and 69.4% blue. Whereas in a CMYK color space, it is composed of 98.9% cyan, 92.7% magenta, 0% yellow and 30.6% black. It has a hue angle of 236.2 degrees, a saturation of 97.8% and a lightness of 35.1%. #020db1 color hex could be obtained by blending #041aff with #000063. Closest websafe color is: #000099.\n\n• R 1\n• G 5\n• B 69\nRGB color chart\n• C 99\n• M 93\n• Y 0\n• K 31\nCMYK color chart\n\n#020db1 color description : Dark blue.\n\n# #020db1 Color Conversion\n\nThe hexadecimal color #020db1 has RGB values of R:2, G:13, B:177 and CMYK values of C:0.99, M:0.93, Y:0, K:0.31. Its decimal value is 134577.\n\nHex triplet RGB Decimal 020db1 `#020db1` 2, 13, 177 `rgb(2,13,177)` 0.8, 5.1, 69.4 `rgb(0.8%,5.1%,69.4%)` 99, 93, 0, 31 236.2°, 97.8, 35.1 `hsl(236.2,97.8%,35.1%)` 236.2°, 98.9, 69.4 000099 `#000099`\nCIE-LAB 21.852, 56.907, -80.137 8.103, 3.474, 41.836 0.152, 0.065, 3.474 21.852, 98.287, 305.379 21.852, -6.625, -85.214 18.64, 44.979, -120.025 00000010, 00001101, 10110001\n\n# Color Schemes with #020db1\n\n• #020db1\n``#020db1` `rgb(2,13,177)``\n• #b1a602\n``#b1a602` `rgb(177,166,2)``\nComplementary Color\n• #0265b1\n``#0265b1` `rgb(2,101,177)``\n• #020db1\n``#020db1` `rgb(2,13,177)``\n• #4e02b1\n``#4e02b1` `rgb(78,2,177)``\nAnalogous Color\n• #65b102\n``#65b102` `rgb(101,177,2)``\n• #020db1\n``#020db1` `rgb(2,13,177)``\n• #b14f02\n``#b14f02` `rgb(177,79,2)``\nSplit Complementary Color\n• #0db102\n``#0db102` `rgb(13,177,2)``\n• #020db1\n``#020db1` `rgb(2,13,177)``\n• #b1020d\n``#b1020d` `rgb(177,2,13)``\n• #02b1a6\n``#02b1a6` `rgb(2,177,166)``\n• #020db1\n``#020db1` `rgb(2,13,177)``\n• #b1020d\n``#b1020d` `rgb(177,2,13)``\n• #b1a602\n``#b1a602` `rgb(177,166,2)``\n• #010765\n``#010765` `rgb(1,7,101)``\n• #01097f\n``#01097f` `rgb(1,9,127)``\n• #020b98\n``#020b98` `rgb(2,11,152)``\n• #020db1\n``#020db1` `rgb(2,13,177)``\n• #020fca\n``#020fca` 
`rgb(2,15,202)``\n• #0311e3\n``#0311e3` `rgb(3,17,227)``\n• #0313fc\n``#0313fc` `rgb(3,19,252)``\nMonochromatic Color\n\n# Alternatives to #020db1\n\nBelow, you can see some colors close to #020db1. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #0239b1\n``#0239b1` `rgb(2,57,177)``\n• #022ab1\n``#022ab1` `rgb(2,42,177)``\n• #021cb1\n``#021cb1` `rgb(2,28,177)``\n• #020db1\n``#020db1` `rgb(2,13,177)``\n• #0602b1\n``#0602b1` `rgb(6,2,177)``\n• #1402b1\n``#1402b1` `rgb(20,2,177)``\n• #2302b1\n``#2302b1` `rgb(35,2,177)``\nSimilar Colors\n\n# #020db1 Preview\n\nThis text has a font color of #020db1.\n\n``<span style=\"color:#020db1;\">Text here</span>``\n#020db1 background color\n\nThis paragraph has a background color of #020db1.\n\n``<p style=\"background-color:#020db1;\">Content here</p>``\n#020db1 border color\n\nThis element has a border color of #020db1.\n\n``<div style=\"border:1px solid #020db1;\">Content here</div>``\nCSS codes\n``.text {color:#020db1;}``\n``.background {background-color:#020db1;}``\n``.border {border:1px solid #020db1;}``\n\n# Shades and Tints of #020db1\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000002 is the darkest color, while #eeefff is the lightest one.\n\n• #000002\n``#000002` `rgb(0,0,2)``\n• #000216\n``#000216` `rgb(0,2,22)``\n• #000329\n``#000329` `rgb(0,3,41)``\n• #01043d\n``#01043d` `rgb(1,4,61)``\n• #010650\n``#010650` `rgb(1,6,80)``\n• #010763\n``#010763` `rgb(1,7,99)``\n• #010977\n``#010977` `rgb(1,9,119)``\n• #020a8a\n``#020a8a` `rgb(2,10,138)``\n• #020c9e\n``#020c9e` `rgb(2,12,158)``\n• #020db1\n``#020db1` `rgb(2,13,177)``\n• #020ec4\n``#020ec4` `rgb(2,14,196)``\n• #0210d8\n``#0210d8` `rgb(2,16,216)``\n• #0311eb\n``#0311eb` `rgb(3,17,235)``\n• #0515fc\n``#0515fc` `rgb(5,21,252)``\n• #1927fc\n``#1927fc` `rgb(25,39,252)``\n• #2c39fd\n``#2c39fd` `rgb(44,57,253)``\n• #3f4bfd\n``#3f4bfd` `rgb(63,75,253)``\n• #535efd\n``#535efd` `rgb(83,94,253)``\n• #6670fd\n``#6670fd` `rgb(102,112,253)``\n• #7a82fd\n``#7a82fd` `rgb(122,130,253)``\n• #8d94fe\n``#8d94fe` `rgb(141,148,254)``\n• #a0a6fe\n``#a0a6fe` `rgb(160,166,254)``\n• #b4b9fe\n``#b4b9fe` `rgb(180,185,254)``\n• #c7cbfe\n``#c7cbfe` `rgb(199,203,254)``\n• #dbddff\n``#dbddff` `rgb(219,221,255)``\n• #eeefff\n``#eeefff` `rgb(238,239,255)``\nTint Color Variation\n\n# Tones of #020db1\n\nA tone is produced by adding gray to any pure hue. 
In this case, #55555e is the less saturated color, while #020db1 is the most saturated one.\n\n• #55555e\n``#55555e` `rgb(85,85,94)``\n• #4e4f65\n``#4e4f65` `rgb(78,79,101)``\n• #47496c\n``#47496c` `rgb(71,73,108)``\n• #404373\n``#404373` `rgb(64,67,115)``\n• #393d7a\n``#393d7a` `rgb(57,61,122)``\n• #323781\n``#323781` `rgb(50,55,129)``\n• #2b3188\n``#2b3188` `rgb(43,49,136)``\n• #242b8f\n``#242b8f` `rgb(36,43,143)``\n• #1e2595\n``#1e2595` `rgb(30,37,149)``\n• #171f9c\n``#171f9c` `rgb(23,31,156)``\n• #1019a3\n``#1019a3` `rgb(16,25,163)``\n• #0913aa\n``#0913aa` `rgb(9,19,170)``\n• #020db1\n``#020db1` `rgb(2,13,177)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #020db1 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
https://physics.stackexchange.com/questions/532743/susy-sigma-model-in-qm-bosonic-sector
"# SUSY sigma model in QM, bosonic sector?\n\nThe bosonic sigma model in ordinary QM (i.e. a 'free' particle trapped on a curved manifold $$\\mathcal{M}$$), has a Hamiltonian which is just the negative Laplacian on $$\\mathcal{M}$$.\n\nFor any $$\\mathcal{M}$$ there is a supersymmetric extension where the Hilbert space is extended to the space of differential forms on $$\\mathcal{M}$$ (as opposed to just scalar functions, which are 0-forms), and the Hamiltonian is extended in the natural way to the Laplacian on differential forms (sometimes called the Laplace-de Rham operator).\n\nCrucially, the Hamiltonian acts in the exact same way in the bosonic sector. If a 0-form is an eigenfunction in the bosonic sigma model, it will be an eigenfunction with the exact same eigenvalue in the SUSY sigma model. This should mean that the correlation functions of the bosonic fields which generate these 0-forms are exactly the same in either model.\n\nIf this is true, how is this seen in the path integral approach?\n\nIn the path integral, the Lagrangian is $$\\frac{1}{2}g_{ab}\\dot{\\phi}^a\\dot{\\phi}^b+ig_{ab}\\left(\\bar\\psi^a\\dot\\psi^b+\\Gamma^b_{cd}\\bar\\psi^a\\psi^c\\dot{\\phi^d}\\right)-\\frac{1}{2}R_{abcd}\\psi^a\\bar{\\psi}^b\\psi^c\\bar{\\psi}^d.$$ $$\\phi$$ are the bosonic coordinate fields on $$\\mathcal{M},$$ and the geometric quantities $$g,\\Gamma,R$$ all have $$\\phi$$ dependence. If we integrate out the fermionic fields $$\\psi$$ it seems we will get a complicated determinant in terms of $$\\phi$$ and this would modify the correlation functions, which from the canonical quantization point of view we expect to be the same.\n\nIs there any way to see from the path integral alone that these correlation functions are the same?\n\n• Are you positive the bosonic correlation functions are the same? this is not obvious to me, not in the path integral formalism, not in the operator formalism. Is this a well-known property of the system? 
Do you by any chance have any reference? – AccidentalFourierTransform Feb 26 at 1:42\n• @AccidentalFourierTransform, I am positive the energy of the bosonic states is the same in the two systems. This is because the Hamiltonian of the SUSY model is just the Laplace-de Rham operator. The original reference on the QM sigma model which shows this is section 10 of Witten, Nucl Phys B 202 (1982) 253. If the Hamiltonian acting on bosonic states is the same whether there is SUSY or not, then given the correlation functions are just expectation values of the time translations of bosonic fields, they must be the same (unless there is something subtle I am missing). – octonion Feb 26 at 16:08\n• Sure the energies are the same, but that does not imply the correlation functions are also the same, does it? E.g., inserting a complete set of eigenstates, you get different answers in the regular theory, and the SUSY one, because the latter contains more states, and I don't expect the extra states to magically add up to zero -- although I definitely could be very wrong here. Witten doesn't say anything about correlators though, as far as I can see. Anyway, best of luck finding the answer! – AccidentalFourierTransform Feb 27 at 0:25\n• @AccidentalFourierTransform, the extra states magically add up to zero because the matrix element of a bosonic field between a fermionic state and the bosonic vacuum is zero – octonion Feb 27 at 18:19\n• I find the question confusing... \"This should mean that the correlation functions of the bosonic fields which generate these 0-forms are exactly the same in either model\" presumably either model refers to SUSY vs nonSUSY QM? If that is the case and the quoted claim is indeed true then it had better follow from a SUSY localisation argument. But prima facie it looks like the 0-form observables aren't BRST closed so it seems you're out of luck – alexarvanitakis Feb 29 at 16:07"
https://converthere.com/numbers/1.0e+2-written-out
"# 1.0e+2 Written Out in Numbers\n\n1.0e+2 is a scientific notation used in mathematics, physics, chemistry, astronomy and other sciences to handle either very large or very small numbers. With scientific notation adding, subtracting, multiplying and dividing numbers becomes much simpler.\n\nThis tool will help you convert any scientific notation into decimal, real numbers. Below you can find out how to convert 1.0e+2 to number.\n\nChemical Reaction Under Microscope\nChemical Reaction Under Microscope\n\n## How to Convert 1.0e+2 to Real Number?\n\n1.0e+2 = 1.0 x 102 = 100\n\nUse the following form, to convert any other scientific notation into decimal notation:\n\nScientific notation:"
https://decipher.dev/30-seconds-of-typescript/docs/flip/
"",
null,
"",
null,
"``const flip = (fn) => (first, ...rest) => fn(...rest, first);``\n``let a = { name: \"John Smith\" };let b = {};const mergeFrom = flip(Object.assign);let mergePerson = mergeFrom.bind(null, a);mergePerson(b); // == bb = {};Object.assign(b, a); // == b``"
https://stats.stackexchange.com/questions/115677/how-can-i-calculate-cosine-distance-with-multiple-feature-vectors-and-weigh-them
"# How can I calculate cosine distance with multiple feature vectors and weigh them?\n\nI have a dataset of text documents and I'm calculating pairwise cosine distances among them. For each document I have a bag of words vector, a vector built from entities extracted from the document, and the time the document was created. Each type of vector is of different dimensions.\n\nI want the distance calculation to use all of these features, but such that the distance is biased towards the concepts vector. I have been calculating the distances for each type of vector independently then combining them as a weighted average, but I suspect there's a more correct approach.\n\nJust do it.\n\nCosine boils down to computing scalar products (with each other, and each vector with itself when computing the magnitude),\n\n$$\\langle a,b\\rangle = \\sum_i a_ib_i$$\n\nwhich can trivially be weighted\n\n$$\\langle a,b\\rangle_\\Omega = \\sum_i \\omega_ia_ib_i$$\n\nChoose $\\omega_i$ such that each feature set has the same sum of weights. For example, choose $\\omega_i=1/d$ where $d$ is the number of features in this features space. You can move the $\\omega_i$ term out if it is constant for all attributes. You will likely end up with something as easy as this:\n\n$$\\langle a,b\\rangle = \\frac{1}{|F_1|}\\langle F_1(a), F_1(b)\\rangle + \\frac{1}{|F_2|}\\langle F_2(a), F_2(b)\\rangle + \\ldots$$\n\nwhich is the same as scaling/weighting each attribute in each of the feature spaces by $1/|F_i|$ where $|F_i|$ is the dimensionality of the features space.\n\nYou can further simplify the process if you first normalize all vectors to unit length (in each feature space separately). 
Then $||a||=||b||=1$, and cosine does become the scalar product: $$\\cos = \\frac{\\langle a,b\\rangle}{||a||\\cdot ||b||} = \\langle \\frac{a}{||a||},\\frac{b}{||b||}\\rangle =_{\\text{if }||a||=||b||=1} \\langle a,b\\rangle$$ As this equation shows (using bilinearity of the scalar product), you do get the same result.\n\nLet $F_i^\\prime(a)=F_i(a)/||F_i(a)||$ be the data normalized to $L_2$ norm 1. Then you can just choose\n\n$$\\text{sim}(a,b) = \\frac{1}{k}\\sum_{i=1}^k \\langle F_i^\\prime(a),F_i^\\prime(b)\\rangle$$\n\nIf $F^\\prime(a) = \\frac{1}{k} \\mathop{concat}_{i=1}^k F_i^\\prime(a)$ is the concatenation of the L2 unit normalized vectors; divided by $k$, then this is the classic scalar product/cosine:\n\n$$\\text{sim}(a,b) = \\langle F^\\prime(a), F^\\prime(b)\\rangle$$\n\nIn other words, normalize data in each feature space independently to unit length, then each features space gets the same weight.\n\n• Cosine boils down to the scalar product. The scalar product, devoid of the vectors' magnitudes, since $a \\cdot b= |a||b|cos$. – ttnphns Sep 17 '14 at 9:06\n• Magnitude is computed via the scalar product with itself - so if you have understood it for the scala product once, you can use it there, too. – Anony-Mousse Sep 17 '14 at 9:14"
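The answer's recipe, L2-normalize each feature space separately and then average (or explicitly weight) the per-space scalar products, can be sketched in plain Python. The feature values below are illustrative:

```python
import math

def normalize(v):
    """Scale a vector to unit L2 length (zero vectors are left unchanged)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else list(v)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def multi_space_cosine(spaces_a, spaces_b, weights=None):
    """Weighted sum of per-feature-space cosines.

    spaces_a / spaces_b: one vector per feature space; the i-th vectors
    must match in dimension, but different spaces may differ. `weights`
    lets you bias the result, e.g. toward the concepts vector.
    """
    k = len(spaces_a)
    weights = weights or [1.0 / k] * k   # equal weight per space by default
    return sum(w * dot(normalize(a), normalize(b))
               for w, a, b in zip(weights, spaces_a, spaces_b))

# Bag-of-words (3-dim) and entity (2-dim) spaces for two documents:
doc1 = [[1.0, 2.0, 0.0], [3.0, 1.0]]
doc2 = [[2.0, 4.0, 0.0], [0.0, 1.0]]
print(multi_space_cosine(doc1, doc2))              # equal weights
print(multi_space_cosine(doc1, doc2, [0.3, 0.7]))  # biased toward entities
```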
https://garage4hackers.com/entry.php?b=503
"#### Understanding Padding Oracle Attack - Attack on Encryption in CBC mode\n\n######",
null,
"by , 10-10-2012 at 12:57 AM (9628 Views)\nBefore we begin , a few terminologies that we should be familiar with. An Oracle is just a theoritical black box in Cryptography which responds to queries that an Adversary sends. For Example , a random Oracle would select and send a truly random value from a uniform distribution for each query that the Adversary sends to it. Propery implemented Crypto primitives behave like random Oracles ie even though the attacker intercepts any number of ciphertexts, he wont be able to derive any information whatsoever about the plain text. CBC (Cipher Block Chaining) is a mode that is secure against a adversary that can launch a chosen plaintext attack. CPA(Chosen Plaintext Attack) is where you can query the oracle with plaintexts(two at a time) of your choice and the Oracle will return the ciphertexts of either one and still the attacker wont be able to predict as to which plaintext the ciphertext belongs to.\n\nTake a look at the following image(thanks to Wikipedia):",
"Basically what it means is that, at first a IV also called the Initialization Vector is chosen from a random distribution and xor'ed with the Plain Text and then subject to the Encryption function 'E' and the resulting CipherText is used as the IV for the next PlainText block.\n\nWriting it as equation,\n\nCo=E(k,m^IV) where '^' refers to the xor operation. Now we can see xor a lot in cryptographic primitives, the reason for that is , when we xor a value from any distribution with another value from a uniform random distribution, then the resulting distribution is also a uniform random distribution. From above, message does not belong to a uniform distribution whereas an IV belongs a uniform distribution but the resulting \"m ^ IV\" belongs to a uniform distribution.\n\nTake a look at the following image(thanks again to Wikipedia) for the decryption:",
"Writing it again as equation,\n\nAs we know, Co=E(k,m^IV)\nApplying decryption wrt 'k' on both sides,we have\n\nD(k,Co)=m^IV\n\nxor with IV on both sides (note that \"A\" ^ \"A\" == 0), so we have\n\nm=D(k,Co) ^ IV\n\nOne of the caveats to remember here is that, if we modify IV as IV' such that IV'=IV ^ G, then the resulting plaintext message 'm' also gets xor'ed by G. Keep this mind as we proceed.\nNow let's discuss about the padding in CBC assuming we use AES for encryption. Note that AES block size is 16 bytes. So if we have a block that is not a multiple of block size, say \"abcdefghij\" which is of size 10 bytes, we need to pad it to 16 bytes. The padding scheme that is generally used is to pad all the remaining bytes with the number of bytes missing. so it will be \"abcdefhjij\" + 0x6 0x6 0x6 0x6 0x6 0x6 (Notice that 0x6 is different from '6' tats why i made it this way). On decryption, we will look at the last byte , in this case it is 0x6, remove 6 bytes starting from the last byte to get out original message without the pad. Naturally, if we have a block that is a multiple of blocksize, we need to add a dummy padding block ' 0x16' repeated 16 times. See the image below to understand the padding scheme.(thanks to GDS Blog) Notice that they are using 3-DES for the encryption so block size is 8 bytes.",
"So what is padding oracle ?\nThe vulnerability occurs because of the types of error that previous implementations of SSL/TLS returned to the user, one is if the pad is invalid, it returned a Invalid Pad Error, if the pad is valid but the CT is not valid then it returns a Invalid Message Error. The attacker can query the pad and completely decrypt the Plain text. The following images from GDS blog does a great job of explaining it.Thanks guys!",
"What we need to do is , take the last byte of the IV, xor it with a value (G ^ 0x1),then it means that the message also will get xored by (G ^ 0x1) [remember the caveat i told you to keep in mind] , so if the PT's last byte too happens to be 'G' which can take any value from 0-255(a byte's possible values) , then we get a valid pad, since 0x1 is a valid 1-byte pad. To get the previous byte of the plaintext, we take the correct value of the last byte of the plaintext (lets call it 'P') and xor it with 0x2, and xor the last but previous byte of the IV with 'G' xor 0x2 because a valid 2-byte padding is '0x2 0x2' ie we fix the last byte and bruteforce the last but before byte till we get a valid pad.\n\nThere is a programming assignment @ Coursera's crypto class which deals with Padding Oracles,http://crypto-class.appspot.com/po?e...7dbf7035d5eeb4 . Our goal is to decrypt the ciphertext, the first 16 bytes are the IV (no need to decrypt them ofcourse). Below is a program I wrote in Python (I am kinda new to python so excuse the sloppiness) that decrypts the first 16 bytes of the message, the rest you can do if you are interested. Basically, if the pad is valid then the server responds with a 404 HTTP Error, else it responds with a 403 HTTP Error\n\nThis is the following link to the code : http://pastebin.com/kcN5i4Ze Not able to post the code here . am getting a .htaccess error for some reason! Mods can you look up the issue.\n\n[UPDATE] Fixed the attachment image! Sorry for the inconvenience\n\nBest Regards and Peace\n\nUpdated 10-10-2012 at 03:02 PM by sebas_phoenix\n\nCategories\nUncategorized"
http://www.venkatsacademy.com/2015/01/different-theories-of-light-and-their-analysis.html
"## Wednesday, January 28, 2015\n\n### Different Theories of Light and Their Analysis\n\nLight is a form of energy. It satisfies law of conservation of energy. According to this law, energy is neither created nor destroyed. It just converts from one format to another format. Hence another format of energy can be converted into light and the light also can be converted into other formats of energies.\n\nLight exhibits a wide variety of properties. They are reflection, refraction, dispersion, interference, diffraction, polarization, photo electric effect, Compton Effect, Stark effect and Zeeman Effect. To explain all these properties of light, we have different theories of light.\n\nIf the size of the object interacting with the light is much larger than the wavelength of the light with respect to the object, light appears like a straight line and all the corresponding concepts were studied under the topic called Ray optics.\n\nIf the size of the object is comparable to the wavelength of the light, light appears like a wave and all the relevant topics are studied under the name wave optics or physical optics.\n\nThe properties of light like interference, diffraction and polarization can be explained only by assuming that light is travelling like a wave.\n\nRight from the beginning of the human evolution, human beings are very much interested in understanding the properties of light. In this process different theories are proposed. Each theory is having its own advantages and failures and here we are going to have brief discussion of the concepts.\n\nNewton’s corpuscular theory\n\nAmong different theories, Newton’s corpuscular theory is first one. According to this theory light is a stream of tiny particles called corpuscles\n.\nThis tiny particle carries energy by moving along straight lines.\n\nThere are different colors of the light. 
According to this theory, it is due to different sizes of corpuscles.\n\nIt is assumed that this tiny particles travels with enormous velocity.\n\nThis theory is successful in explaining the concepts of reflection and refraction.\n\nAnyway this theory is having some issues. According to this theory, velocity of the light is more in denser medium than the rarer medium. But practically it is proved that velocity of the light is actually more in the rarer medium and this theory failed to explain why.\n\nAs the light is leaving the source like particles, mass of the source shall decrease but that is not happening.\n\nThis theory failed to explain how reflection and refraction can happen simultaneously.\n\nTo address all these issues a new theory called Huygens wave theory is proposed.\n\nHuygens wave theory\n\nAccording to this theory light is not traveling like a straight line but it travels like a wave.\n\nHe assumed that light wave is mechanical in nature which demands a medium for propagation.\n\nHence a invisible, highly elastic low-density medium is imagined all-around and it is called ether medium.\n\nHe also proposed a principal called Huygens principle to explain the wave propagation.\n\nThis theory is successful in explaining that velocity of the light is more in the rarer medium than the denser medium and hence the problem is solved.\n\nThis theory is also successful in explaining the phenomena like interference and diffraction.\n\nBut this theory is also having some problems. 
It fails to explain the concept of polarization.\n\nIt also failed to explain photo electric effect and all other modern-day effects of light.\n\nExperimentally it is proved that there is no imagined medium that is filled over the entire space and hence there is no exact way to explain how the wave is propagating from one place to another place.\n\nTo solve all these problems electromagnetic wave theory is proposed was proposed by Maxwell.\n\nElectromagnetic wave theory\n\nAccording to this theory light propagates like a electromagnetic wave and it doesn't need any medium for propagation.\n\nA variable electric field generates a variable magnetic field around it and the propagation of the wave is perpendicular to both of them.\n\nBecause it is a non-mechanical wave we don’t need any medium for propagation itself.\nThe wave equations that we have written in waves and oscillations are valid in explaining the propagation of electric and magnetic fields.\n\nBasing on the permittivity and the permeability of the vacuum we can define the velocity of the light.\n\nIt is proved that velocity of light is constant and is maximum in the vacuum. According to this theory, velocity of light in any other is less than the velocity of the light in vacuum.\n\nThis theory is successful in explaining the concepts like interference, diffraction and polarization.\n\nAt the same time this theory is unable to explain how the photoelectric effect and Compton Effect are happening.\n\nTo address this problem Plank’s quantum theory is proposed.\n\nPlank’s quantum theory\n\nAccording to Plank’s quantum theory, light is not travelling like a wave rather it is travelling like a small packets of energy called quanta.\n\nEnergy is not emitted by the source continuously but discreetly in the form of wave packets.\n\nThis theory is successful in explaining the modern-day properties of the light like photoelectric effect and Compton Effect. 
But simultaneously it is unable to explain the concepts like interference, diffraction and polarization.\n\nHence we do not have a unified theory which can explain all the properties of light. So it is assumed that light travels like a wave and exhibits certain set of properties and when it interacts with someone, it interacts like a particle and exhibits the modern-day properties. This concept is called dual nature of the light.\n\nHuygens principle\n\nAccording to this principle, every point the primary wave front behaves like a secondary source and propagates the light in the forward direction.\n\nWave front is the locus of all the points that are in the same phase. The wave front from a point source is spherical in nature where as the wave front from the cylindrical source is cylindrical in nature. As we move far away from the point source, the radius of the spherical wave front increases to a larger value so that it appears like a plane wave front.\n\nLet us consider a point source at a particular position. Light starts from the source and it travels in all directions simultaneously with the same velocity. After a specified time, the distance traveled by the light is same in all directions from the source. The line joining of all the points is called a wave front. Every point on this primary wave front behaves as if like it is a source. Thus each point again propagates the light in the forward direction.\n\nBasing on this concept reflection and refraction are explained successfully. When the light is reflecting, angle of incidence is equal to angle of reflection. When the light is refracting, it obeys a law called Snell’s law.\n\nAccording to Snell’s law, the refractive index of the medium to where the light is going with respect to refractive index of the medium from where it is coming is equal to ratio of sin angle of incidence to the sin angle of refraction.\n\nWe can explain the refraction basing on this principle as shown below. 
As the light Ray is moving from rarer medium to denser medium, it moves towards the normal. The second light Ray travels more distance to reach the boundary that is separating the two media. Basing on some simple mathematical rules we can derive equation for the width of the refracted light beam as shown below.\n\nRelated Posts"
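The Snell's-law relationship stated above is easy to check numerically. Below is a minimal sketch in Python; the refractive indices and the 30-degree angle are illustrative values chosen here, not taken from the article:

```python
import math

def snell_refraction_angle(n1, n2, incidence_deg):
    """Angle of refraction (degrees) for light passing from a medium of
    refractive index n1 into a medium of index n2.
    Snell's law: n1 * sin(i) = n2 * sin(r)."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# Rarer (air, n = 1.0) to denser (glass, n = 1.5): the ray bends towards
# the normal, so the refraction angle is smaller than the incidence angle.
r = snell_refraction_angle(1.0, 1.5, 30.0)
print(round(r, 2))  # 19.47
```

Going the other way (denser to rarer) the same function shows the ray bending away from the normal, and it raises an error past the critical angle.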
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9380221,"math_prob":0.9586763,"size":7411,"snap":"2022-05-2022-21","text_gpt3_token_len":1427,"char_repetition_ratio":0.15957877,"word_repetition_ratio":0.046341464,"special_character_ratio":0.18256645,"punctuation_ratio":0.07766272,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9706205,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T14:16:31Z\",\"WARC-Record-ID\":\"<urn:uuid:89530946-48e6-4310-afd9-2f35a4f829ad>\",\"Content-Length\":\"89651\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:17607ed7-434d-4877-a693-a3be79ca9c5f>\",\"WARC-Concurrent-To\":\"<urn:uuid:0b67ec4a-b304-44a1-adfb-5b71d034c265>\",\"WARC-IP-Address\":\"172.253.115.121\",\"WARC-Target-URI\":\"http://www.venkatsacademy.com/2015/01/different-theories-of-light-and-their-analysis.html\",\"WARC-Payload-Digest\":\"sha1:OPJJB2ZK7T5DKHQMKD3MV7Z7NRFFA72T\",\"WARC-Block-Digest\":\"sha1:AHL4T4RKVBI432LHLTO3SBDBNETIAORV\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662606992.69_warc_CC-MAIN-20220526131456-20220526161456-00448.warc.gz\"}"} |
https://community.rstudio.com/t/rearrange-x-axis/73809 | [
"",
null,
"# Rearrange X-axis\n\nHi! I am trying to rearrange my x-axis in order of date and I want the words instead of numbers; however, it keeps putting it in alphabetical order. My code is as follows:\n\n``````load(\"H:\\\\Personal\\\\provider1.csv\")\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(forcats)\n\nd <- arrange(d,\nDate)\nd\\$Date <- forcats::as_factor(as.character(d\\$Date,format=\"%b-%Y\"))\n\ntotals<- d %>%\ngroup_by(Date) %>%\nsummarise(total=sum(Appointments))\nd %>%\ndplyr:mutate(Date = factor(Date,\nlevels = c(\"Jan-20\", \"Feb-20\", \"Jun-20\", \"Jul-20\"))) %>%\n\nggplot(d, aes(fill=Provider, y=Appointments, x=Date)) +\ngeom_bar(position=\"stack\", stat=\"identity\") +\ngeom_text(data=totals, aes(x=Date, label=total, y=total, fill=NULL), nudge_y=10) +\ntheme(text=element_text(size=18))\n``````\n\nI tried to specify the levels argument, but I keep getting an error:\n`Error in function_list[[i]](value) : object 'dplyr' not found`\n\nI am unsure how to solve this, so any help would be great!\n\nHere is a sample of my data:\n\n``````'data.frame': 32 obs. of 3 variables:\n\\$ Provider : chr \"MW, MD\" \"MW, MD\" \"MW, MD\" \"MW, MD\" ...\n\\$ Date : chr \"Jan-20\" \"Feb-20\" \"Jun-20\" \"Jul-20\" ...\n\\$ Appointments: int 81 69 80 93 80 66 81 63 64 54 ...\n``````\n\nEither use `dplyr::mutate()` or leave out the package specification, which is not required as you have already loaded `dplyr`, i.e. just use `mutate()`.\n\nyou use the %>% on the way into ggplot, but have left the original d, which is being mistaken for your aes call, because the %>% pipe already put the manipulated d in the data slot for ggplot\n\nThis topic was automatically closed 7 days after the last reply. New replies are no longer allowed.\n\nThat's a separate problem. You have already piped your dataframe `d` into the `ggplot()` function, so you need to remove that, i.e. 
… `%>% ggplot(aes(fill=Provider, y=Appointments, x=Date)) +` ...\n\nGetting rid of the dplyr specification gives me this error:\n\n``````Error: Mapping should be created with `aes() or `aes_()`.\nRun `rlang::last_error()` to see where the error occurred.\n``````\n\nThank you so much for your help!"
] | [
null,
"https://community.rstudio.com/uploads/default/original/3X/5/d/5dc960154a129282ba4283771da2fab6fde146fb.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.63422817,"math_prob":0.78215224,"size":1187,"snap":"2020-45-2020-50","text_gpt3_token_len":378,"char_repetition_ratio":0.090448014,"word_repetition_ratio":0.025157232,"special_character_ratio":0.3513058,"punctuation_ratio":0.21153846,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9890374,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T18:47:59Z\",\"WARC-Record-ID\":\"<urn:uuid:7d855d02-9454-4029-9502-7c2d171246a4>\",\"Content-Length\":\"24527\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:44320812-9a4a-4629-8ab8-0d487cf9b4bc>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f093b1b-4fc7-4329-81c7-1dc0d1e8879c>\",\"WARC-IP-Address\":\"167.99.20.217\",\"WARC-Target-URI\":\"https://community.rstudio.com/t/rearrange-x-axis/73809\",\"WARC-Payload-Digest\":\"sha1:TN3PUTHMSDTGJYB6SU6RAYHTDXT2IZBZ\",\"WARC-Block-Digest\":\"sha1:H34WESFW6QBKTO3BKYNFTA2K46CHTNKF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141748276.94_warc_CC-MAIN-20201205165649-20201205195649-00372.warc.gz\"}"} |
http://sepwww.stanford.edu/sep/prof/pvi/uni/paper_html/node9.html | [
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Next: Clarity of nonlinear picture Up: Solution by weighting functions Previous: Solution by weighting functions\n\n### A nonlinear-estimation method\n\nWhat I have described above represents my first iteration. It can be called a linear-estimation method.\" Next we will try a nonlinear-estimation method\" and see that it works better. If we think of minimizing the relative error in the residual, then in linear estimation we used the wrong divisor--that is, we used the squared data v2 where we should have used the squared residual",
null,
".Using the wrong divisor is roughly justified when the crosstalk",
null,
"is small because then v2 and",
null,
"are about the same. Also, at the outset the residual was unknown, so we had no apparent alternative to v2, at least until we found",
null,
".Having found the residual, we can now use it in a second iteration. A second iteration causes",
null,
"to change a bit, so we can try again. I found that, using the same data as in Figure 1, the sequence of iterations converged in about two iterations.",
null,
"reswait\nFigure 2\nComparison of weighting methods. Left shows crosstalk as badly removed by uniformly weighted least squares. Middle shows crosstalk removed by deriving a weighting function from the input data. Right shows crosstalk removed by deriving a weighting function from the fitting residual. Press button for movie over iterations.\n\nFigure 2 shows the results of the various weighting methods. Mathematical equations summarizing the bottom row of this figure are:",
null,
"(9) (10) (11)\nFor the top row of the figure, these equations also apply, but",
null,
"and",
null,
"should be swapped.",
"Next: Clarity of nonlinear picture Up: Solution by weighting functions Previous: Solution by weighting functions\nStanford Exploration Project\n10/21/1998"
] | [
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9410621,"math_prob":0.9672893,"size":1003,"snap":"2020-45-2020-50","text_gpt3_token_len":222,"char_repetition_ratio":0.12712713,"word_repetition_ratio":0.0,"special_character_ratio":0.22133599,"punctuation_ratio":0.10152284,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98812973,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,4,null,6,null,4,null,6,null,6,null,2,null,2,null,4,null,2,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-24T09:48:59Z\",\"WARC-Record-ID\":\"<urn:uuid:e9514828-8f7a-4f45-9c31-c407d061411a>\",\"Content-Length\":\"7513\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47c22e6a-8be7-40fa-a052-8aaa25653256>\",\"WARC-Concurrent-To\":\"<urn:uuid:8cd2923d-e216-4a9e-8ded-24df80243101>\",\"WARC-IP-Address\":\"35.203.131.140\",\"WARC-Target-URI\":\"http://sepwww.stanford.edu/sep/prof/pvi/uni/paper_html/node9.html\",\"WARC-Payload-Digest\":\"sha1:BQX7CZWMS3KBLLYYZXUOYZUDVRQQ7CCS\",\"WARC-Block-Digest\":\"sha1:2GTI7GMD2NLAPL6C4B355K3GQ2TYEYE6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107882103.34_warc_CC-MAIN-20201024080855-20201024110855-00173.warc.gz\"}"} |
https://stats.stackexchange.com/questions/370011/is-the-canonical-parameter-and-therefore-the-canonical-link-function-for-a-gam | [
"# Is the canonical parameter (and therefore the canonical link function) for a Gamma not unique?\n\nConsider $$Y_1, \\dots, Y_n$$ independent from the Gamma distribution. For $$y > 0$$: \\begin{align} f(y \\mid \\alpha, \\beta) &= \\dfrac{1}{\\beta^{\\alpha}\\Gamma(\\alpha)}y^{\\alpha-1}e^{-y/\\beta} \\\\ &= \\exp\\left[-\\log(\\beta^{\\alpha}\\Gamma(\\alpha))+(\\alpha-1)\\log(y)-\\dfrac{y}{\\beta} \\right] \\\\ &= \\exp\\left[y\\left(\\dfrac{-1}{\\beta} \\right)-\\log(\\beta^{\\alpha}\\Gamma(\\alpha))+(\\alpha-1)\\log(y) \\right] \\end{align} I define an exponential dispersion family as any distribution whose PMF/PDF is $$f(y \\mid \\boldsymbol\\theta) = \\exp\\left\\{\\phi[y\\theta - b(\\theta)] + c(y, \\phi) \\right\\}\\text{, } y \\in \\Omega$$ where $$\\Omega$$ is in the support of a random variable $$Y$$ in the family.\n\n# Method 1:\n\nTherefore, $$Y_i$$ is of the exponential dispersion family with (assuming $$\\alpha_i$$ is known) \\begin{align} \\phi &= 1 \\\\ \\theta_i &= \\dfrac{-1}{\\beta_i} \\\\ b(\\theta_i) &= \\log[\\beta_i^{\\alpha_i}\\Gamma(\\alpha_i)] \\\\ &= \\log[(-\\theta_i)^{-\\alpha_i}\\Gamma(\\alpha_i)] \\\\ &= -\\alpha_i\\log(-\\theta_i) + \\log \\Gamma(\\alpha_i) \\\\ c(\\phi, y_i) &= (\\alpha_i-1)\\log(y_i)\\text{.} \\end{align} It follows that $$\\begin{equation} \\mu_i = b^{\\prime}(\\theta_i) = \\dfrac{-\\alpha_i}{-\\theta_i}(-1) = \\dfrac{-\\alpha_i}{\\theta_i}\\text{.} \\end{equation}$$ and our canonical link is\n\n$$g(\\mu_i) = \\theta_i = -\\dfrac{\\alpha_i}{\\mu_i}\\tag{*}$$\n\n# Method 2:\n\nLet $$\\theta = -\\dfrac{1}{\\alpha\\beta}$$. 
Then rewrite \\begin{align}f(y \\mid \\alpha, \\beta) &= \\exp\\left[y\\left(\\dfrac{-1}{\\beta} \\right)-\\log(\\beta^{\\alpha}\\Gamma(\\alpha))+(\\alpha-1)\\log(y) \\right] \\\\ &= \\exp\\left[y\\left(\\dfrac{-1}{\\beta} \\right)-\\alpha\\log(\\beta)-\\log\\Gamma(\\alpha)+(\\alpha-1)\\log(y) \\right] \\\\ &= \\exp\\left[y\\alpha\\theta-\\alpha\\log(\\theta^{-1})-\\log\\Gamma(\\alpha)+(\\alpha-1)\\log(y) \\right] \\\\ &= \\exp\\left\\{\\alpha[y\\theta-\\log(\\theta^{-1})]-\\log\\Gamma(\\alpha)+(\\alpha-1)\\log(y) \\right\\} \\end{align} in which case, since $$\\phi$$ cannot vary depending on $$i$$, \\begin{align} \\phi &= \\alpha \\\\ \\theta_i &= -\\dfrac{1}{\\alpha\\beta_i} \\\\ b(\\theta_i) &= \\log(\\theta_i^{-1}) = -\\log(\\theta_i)\\\\ c(\\phi, y_i) &= (\\alpha-1)\\log(y_i)-\\log\\Gamma(\\alpha)\\text{.} \\end{align} In this case, $$b^{\\prime}(\\theta_i) =-\\dfrac{1}{\\theta_i} = \\mu_i$$, which means that $$g(\\mu_i) = \\theta_i = -\\dfrac{1}{\\mu_i}\\text{.}\\tag{**}$$\n\nWikipedia cites (**) as the canonical link function. Why is (**) preferable over (*) for the canonical link function?\n\nWhat makes this even more confusing is that http://civil.colorado.edu/~balajir/CVEN6833/lectures/GammaGLM-01.pdf says that the canonical link function is $$\\dfrac{1}{\\mu_i}$$ (also mentioned in McCullagh and Nelder).\n\nFirst main question: Are canonical link functions unique, when provided a random component?\n\nSecond main question: Is there only supposed to be one canonical link function for each random component? If so, what is wrong with the work above?\n\nThe systematic component of the GLM is $$g(\\mu_i) = \\mathbf{x}_i^{\\prime}\\boldsymbol\\beta\\text{.}$$ As long as we do not place any restrictions on the $$\\boldsymbol\\beta$$ coefficients, without loss of generality, we may ignore all constants (with respect to $$\\mu_i$$) of proportionality (these constants will just be absorbed into $$\\boldsymbol\\beta$$). 
Thus, any of the link functions above will work as a canonical link function, but $$g(\\mu_i) = \\dfrac{1}{\\mu_i}$$ is the simplest one to work with."
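The absorption argument can be made explicit with one line of algebra (a short sketch; the vector labelled with a star is just a relabeled coefficient vector, not new notation from the question):

```latex
g(\mu_i) \;=\; -\frac{\alpha}{\mu_i} \;=\; \mathbf{x}_i^{\prime}\boldsymbol\beta
\quad\Longleftrightarrow\quad
\frac{1}{\mu_i} \;=\; \mathbf{x}_i^{\prime}\left(-\tfrac{1}{\alpha}\,\boldsymbol\beta\right)
\;=\; \mathbf{x}_i^{\prime}\boldsymbol\beta^{*}
```

Both links therefore index the same family of fitted mean functions; only the labels on the coefficients differ, which is why the constant factor can be dropped and $1/\mu_i$ reported as the canonical link.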
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.65321046,"math_prob":0.9999294,"size":2890,"snap":"2020-34-2020-40","text_gpt3_token_len":1014,"char_repetition_ratio":0.17463617,"word_repetition_ratio":0.0,"special_character_ratio":0.3733564,"punctuation_ratio":0.086065575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000067,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-03T12:31:23Z\",\"WARC-Record-ID\":\"<urn:uuid:77669d29-010a-477d-a5aa-fe18e6eac1a1>\",\"Content-Length\":\"148546\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:34bbd590-e238-4d3e-b97b-d34f8a27e4da>\",\"WARC-Concurrent-To\":\"<urn:uuid:de043d5d-dfcc-4461-b543-613723254183>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/370011/is-the-canonical-parameter-and-therefore-the-canonical-link-function-for-a-gam\",\"WARC-Payload-Digest\":\"sha1:3RWBJKADX7WJAEFRLN3BXLZKWCCRNX6X\",\"WARC-Block-Digest\":\"sha1:DK73EKUQ2CVBLTPHRQI66LZ5FIZGNBPH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735810.18_warc_CC-MAIN-20200803111838-20200803141838-00486.warc.gz\"}"} |
https://www.daniweb.com/programming/software-development/threads/465141/i-cant-solve-this-problem-help-please | [
"When I type the following code, I dont get the required output and I fail to understand why.\nProblem:\nWrite a program to find the product of all the positive even numbers less than or equal to 30.\nPossible solution according to me:\n\n``````#include <stdio.h>\nvoid main(void)\n{\nint n, prod, even;\nprod=1;\nn<=30;\n{\nfor(even=2;even<n;even=even+2)\nprod=prod*even;\neven=even+2;\nprintf(\"Product of all the positive even numbers less than %d is %d\\n\", n, prod);\n}\n}\n``````\n\n## All 10 Replies\n\nThe only statement inside your for-loop is `prod = prod*even;`. The others are outside of the loop. The braces around your for-loop do not affect what's inside the for-loop - they only introduce a new local scope. To affect the extent of the for loop, the opening brace should be after the `for(...)`.\n\nThis is not helping.\n\nCan you be a bit more specific about what is actually happening, then? Simply saying, 'This is not helping' gives us little to go on.\n\n``````#include <stdio.h>\nint main(void)\n{\nint n, prod, even;\nprod = 1;\nn = 30;\nfor(even = 2; even < n; even += 2)\n{\nprod *= even;\nprintf(\"Product of all the positive even numbers less than %d is %d\\n\", n, prod);\n}\n\nreturn 0;\n}\n``````\n\nYou might note a few things here:\n* I replaced `void main(void)` with `int main(void)`. 
While 'void main' is a common idiom, it is strictly speaking incorrect, or at least not portable.\n* I corrected the assignment for `n`; as you had it, it was a comparison, not an assignment, and had no effect on the rest of the program.\n* I used the accumulate-assignment statements for the increment and the product, and eliminated an extraneous increment.\n\nAlso leave out line 10, you are doing that already in the for loop.\nBy the way: your integer type will overflow.\nWrite your for statement like this:\n\n``````for (even = 2; even <= 30; even = even + 2)\n{\nprod = prod * even;\n}\n``````\n\nfind the product of all the positive even numbers less than or equal to 30\n\nYou're only calculating the product for positive even numbers less than 30.\nSo it should be:\n\n``````prod = 1;\nn = 30;\n\nfor (even = 2; even <= n; even += 2)\nprod *= even;\n\nprintf(\"Product of all the positive even numbers less than %d is %d\\n\", n, prod);\n``````\n\nYou are incrementing the variable even twice\n\nYou are incrementing the variable even twice\n\nNo, line 10 in the OP's post is not part of the loop, has no bearing on the result and should be omitted.\n\n``````for(even=2;even<n;even=even+2)\nprod=prod*even; // only this is calculated\n``````\n\nDoing it this way, the code compiles with no errors, but no output is printed either. 
This is what I coded:\n\n``````#include <stdio.h>\nvoid main(void)\n{\nint n, prod, even;\neven=2;\nprod=1;\nn<=30;\n{\nfor(even=2;even<=n;even+=2);\nprod*=even;\nprintf(\"Product of all the positive even numbers less than %d is %d\\n\", n,prod);\n}\n}\n``````\n\nRemove the semicolon at the end of line 9, and change the variable type to `unsigned int` so that you don't get integer overflow.\n\n``````#include <stdio.h>\n\nint main(void)\n{\nunsigned int n, prod, even;\nprod = 1;\nn = 30;\n\nfor (even = 2 ; even <= n; even += 2)\nprod *= even;\n\nprintf(\"product of all positive even numbers less than or equal to %u is %u\\n\", n, prod);\n\nprintf(\"\\npress Enter to exit...\");\ngetchar();\n\nreturn 0;\n}\n``````\n\nThank you, Nullptr. This worked. :)"
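As an aside on the overflow discussion above: the exact product exceeds what any 32-bit integer (signed or unsigned) can hold, though it does fit in 64 bits, so the C code would actually need an `unsigned long long`. A quick check in Python, which uses arbitrary-precision integers:

```python
from math import prod

p = prod(range(2, 31, 2))   # product of the positive even numbers <= 30
print(p)                    # 42849873690624000
print(p > 2**32 - 1)        # True: too big for a 32-bit unsigned int
print(p < 2**64)            # True: fits in a 64-bit unsigned type
```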
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.79995483,"math_prob":0.9647674,"size":450,"snap":"2022-27-2022-33","text_gpt3_token_len":126,"char_repetition_ratio":0.10089686,"word_repetition_ratio":0.114285715,"special_character_ratio":0.28444445,"punctuation_ratio":0.17142858,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9767942,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-19T22:42:26Z\",\"WARC-Record-ID\":\"<urn:uuid:2743f237-c40f-4b19-b21c-c61b3aa24bf9>\",\"Content-Length\":\"104244\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:10322b91-65f8-412e-9610-abff882075a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:19e6965f-9770-43a3-8e60-1e8a785f0445>\",\"WARC-IP-Address\":\"172.66.42.251\",\"WARC-Target-URI\":\"https://www.daniweb.com/programming/software-development/threads/465141/i-cant-solve-this-problem-help-please\",\"WARC-Payload-Digest\":\"sha1:IUHWO6HPO2TVHXEZYVSOUXFKJGJQZ6WB\",\"WARC-Block-Digest\":\"sha1:IB7ZCYWDJT5YMVMJV5O4BUAD4HDRRCO6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573849.97_warc_CC-MAIN-20220819222115-20220820012115-00731.warc.gz\"}"} |
https://profoundtips.com/general/what-happens-to-voltage-when-load-increases/ | [
"# What happens to voltage when load increases?\n\n## What happens to voltage when load increases?\n\nWhen load current increases, the voltage drop across internal impedance (resistance in DC circuits) increases so now lesser output voltage is available at terminals.\n\n## How does load affect voltage?\n\nWhen you draw more loads you are lowering the load impedance. This increases the voltage drop across the source impedance. For example if the load impedance were the same as the source impedance the bus voltage would drop to 50\\% of the open circuit voltage.\n\nDoes resistance decrease when current increases?\n\nREAD ALSO: Why should I learn Latvian?\n\nAs the resistance increases, the current decreases, provided all other factors are kept constant.\n\n### What is the effect of resistance of a load on voltage drop in a circuit?\n\nThe greater the resistance of the circuit, the higher the voltage drop.\n\n### What is resistive and reactive load?\n\nResistive Load Banks: The most common type, resistive load banks mimic the operational load that a power source will see in actual use. A reactive/inductive load converts current into a magnetic field. It resists the change to voltage, causing the circuit current to lead voltage.\n\nWhat does a resistive load create?\n\nResistive load generally produces heat when an electrical supply provided to it. Resistive loads are very simple in construction also. They only have a coil of conductive wires.\n\n## What is the effect of increasing load on voltage and resistance?\n\nTherefore increasing load will increase resistance and decrease current with voltage staying the same. Power out put will decrease because there is less potential energy (voltage) in relation to load. , Qualified mechanic, Sparky, skilled in trades, post-sec grad.\n\nREAD ALSO: How can I make my country better?\n\n## What is the relationship between voltage and resistance in electrical circuits?\n\nThe electrical load is another way to say resistance. 
They say load because there are other forces that oppose the flow of electrons like induction. Therefore increasing load will increase resistance and decrease current with voltage staying the same."
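The 50% figure above is just the voltage-divider rule applied to the source's internal impedance. A minimal sketch (Python; the 12 V source and the impedance values are made-up numbers for illustration):

```python
def bus_voltage(v_open_circuit, z_source, z_load):
    """Terminal (bus) voltage of a source with internal impedance z_source
    feeding a load z_load, by the voltage-divider rule."""
    return v_open_circuit * z_load / (z_load + z_source)

# Lighter load (higher z_load): smaller internal drop, higher terminal voltage.
print(bus_voltage(12.0, 0.5, 1.5))  # 9.0
# Matched load (z_load == z_source): terminal voltage is 50% of open circuit.
print(bus_voltage(12.0, 0.5, 0.5))  # 6.0
```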
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9084977,"math_prob":0.8989838,"size":3260,"snap":"2023-40-2023-50","text_gpt3_token_len":647,"char_repetition_ratio":0.19840294,"word_repetition_ratio":0.35740742,"special_character_ratio":0.19539878,"punctuation_ratio":0.0929054,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9706387,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-04T06:15:13Z\",\"WARC-Record-ID\":\"<urn:uuid:3efd7f94-2248-49aa-b75d-9c37355362bc>\",\"Content-Length\":\"58816\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c48299f0-04ab-41e1-874f-83ad81e33abf>\",\"WARC-Concurrent-To\":\"<urn:uuid:f0e3e705-1fc8-40d9-aaff-c7118f919ded>\",\"WARC-IP-Address\":\"104.21.36.33\",\"WARC-Target-URI\":\"https://profoundtips.com/general/what-happens-to-voltage-when-load-increases/\",\"WARC-Payload-Digest\":\"sha1:ZQIIT22DHA3LW5XUJ43CQBOUEN5346YY\",\"WARC-Block-Digest\":\"sha1:DCAJBHCP3JMGJ5FZOQ5OMQQ4VUR6NXXK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100525.55_warc_CC-MAIN-20231204052342-20231204082342-00572.warc.gz\"}"} |
https://www.hbm.com/en/3203/tips-and-tricks-using-statistical-techniques-to-validate-field-test-data-for-fatigue-analysis/ | [
"Using statistical techniques to validate field test data fo Using statistical techniques to validate field test data fo | HBM\n\n# Using statistical techniques to validate field test data for fatigue analysis\n\nThis paper describes some statistical techniques for taking measured data and projecting what the expected time history and resulting fatigue damage would be if test data were collected for much longer times.\n\n## Abstract\n\nDuring every field test the test engineer is faced with a decision about when sufficient data has been collected to characterize a particular operation. This paper describes some statistical techniques for taking measured data and projecting what the expected time history and resulting fatigue damage would be if test data were collected for much longer times. Sufficient data has been collected when the fatigue damage computed from both the measured and the expected time histories converge.\n\n## Background\n\nA major factor in the design of a vehicle or structure is the anticipated severity of the service usage. In a new vehicle, the service usage is unknown and short term tests are frequently conducted to determine the loads acting on the structure and the resulting stresses. The loading history for a vehicle frequently contains both a deterministic and random part. For example, a dump truck may have a deterministic duty cycle of being loaded and unloaded combined with a series of random loads from driving over various surfaces. One may consider the duty cycle loads as variable in nature as well but from a different population than the driving loads. One problem facing all test engineers is to determine when they have collected enough data to characterize a particular service usage condition.\n\nFigure 1 shows a measured field loading history for the vertical front suspension loads for an all terrain vehicle ( ATV ) over a test track. The history was recorded for ten laps.",
"Figure 1. Ten laps of test track\n\nDespite being a carefully controlled test track and a single driver, the variability in loading during each lap is easily seen in the figure.\n\nFrom a fatigue damage perspective, is the data in Fig. 1 enough to characterize the service usage for the expected life of the component? This question can be answered by using the variability in the measured loading history to project what the expected fatigue damage and loading history would be if the loads were measured for longer time.\n\nAnalyzing the variability in the time domain for functions of load and time is not feasible or needed. In fatigue, the rainflow histogram of the loads is of more interest than the loading history itself since fatigue damage is computed from it.",
null,
"Figure 2. Rainflow histogram\n\nTwo dimensional rainflow histograms contain information about both the load ranges and means. Figure 2 shows the rainflow histogram of the data from Fig. 1 in a To-From format. This type of histogram preserves mean stress effects which are usually important for fatigue analysis.\n\nData from Fig. 2 is plotted as a cumulative exceedance diagram in Fig. 3.",
null,
"Figure 3. Exceedance diagram\n\nThe exceedance curve for the original loading history is shown on the left side of the diagram. If the test data were measured for 1000 laps (100 times longer), the distribution would be expected to shift to the right and have a similar shape. The dashed lines schematically show that higher loads would be expected for longer times. Although the exceedance diagram is easier to visualize, it loses valuable information about mean effects.\n\nIn this paper we show how the variability in a measured rainflow histogram can be used to estimate the expected rainflow histogram for longer times. This extrapolated histogram can then be used to reconstruct a new, longer time history for testing or analysis.\n\n## Rainflow Extrapolation\n\nExtrapolation of rainflow histograms was first proposed by Dressler [1]. A short description of the concept is given here; readers are referred to reference 1 for the details. The rainflow histogram is treated as a two dimensional probability distribution. A simple probability density function could be obtained by dividing the number of cycles in each bin of the histogram by the total number of cycles. A new histogram corresponding to any total number of cycles could then be constructed by randomly placing cycles in the histogram according to their probability of occurrence. However, this approach is essentially the same as multiplying the cycles in the histogram by an extrapolation factor, which is unrealistic. Even the same driver over the same test track cannot repeat the loading history. For example, each time a driver hits a pothole the loads will be somewhat different. This is clearly shown in the ten laps in Fig. 1. The peak load from a particular event will be placed into an individual bin in the rainflow histogram. During subsequent laps, this load will have a different maximum value and be placed into a bin in the same neighborhood as in the first lap.",
null,
"Figure 4. Histogram variability\n\nFigure 4 shows a rainflow histogram in a two dimensional view. Consider the event going from 2 to -2.5. The next time this event is repeated, it will fall somewhere in the neighborhood of (2, -2.5) indicated by the large dashed circle. There is not much data in this region of the histogram and considerable variability is expected. Next consider the cycles from -2 to -1.5. Here there is much more data and we would expect the variability to be much smaller, as indicated by the small dashed circle. Extrapolating rainflow histograms is essentially a task of finding a two dimensional probability distribution function from the original rainflow data.\n\nFor a given set of data taken from a continuous population, X, there are many ways to construct a probability distribution of the data. There are two general classes of probability density estimates: parametric and non-parametric. In parametric density estimation, an assumption is made that the given data set will fit a pre-determined theoretical probability distribution. The shape parameters for the distribution must be estimated from the data. Non-parametric density estimators make no assumptions about the distribution of the entire data set. A histogram is a non-parametric density estimator. For extrapolation purposes, we wish to convert the discrete points of a histogram into a continuous probability density. Kernel estimators [2-3] provide a convenient way to estimate the probability density. The method can be thought of as fitting an assumed probability distribution to a local area of the histogram. The size of the local area is determined by the bandwidth of the estimator. This is indicated by the size of the circle in Fig. 4. An adaptive bandwidth for the kernel is determined by how much data is in the neighborhood of the point being considered.\n\nStatistical methods are well developed for regions of the histogram where there is a lot of data. Special considerations are needed for sparsely populated regions. The expected maximum load range for the extrapolated histogram is estimated from a Weibull fit to the large amplitude load ranges. This estimate is then used to determine an adaptive bandwidth for the sparse data regions of the histogram. Once the adaptive bandwidth is determined, the probability density of the entire histogram can be computed. The expected histogram for any desired total number of cycles is constructed by randomly placing cycles in the histogram with the appropriate probability. It should be noted that this process does not produce a unique extrapolation. Many extrapolations can be done with the same extrapolation factor, so some information about the variability of the resulting loading history can be obtained.\n\nResults for a 1000-times extrapolation of the loading history are shown in Fig. 5. The results of the extrapolation are easier to visualize in terms of the exceedance diagram given in Fig. 6. Two plots are shown in the figure: one from the data in Fig. 5 and another representing 1000 repetitions of the original loading history.",
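The final sampling step described above — treating the (smoothed) histogram as a probability distribution and randomly placing the desired total number of cycles — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the bin counts are invented, and the adaptive-bandwidth kernel smoothing is omitted, so the probabilities here come straight from the raw counts.

```python
import numpy as np

def extrapolate_histogram(counts, factor, rng=None):
    # counts: 2-D array of rainflow cycle counts (To-From bins)
    # factor: extrapolation factor, e.g. 1000 for a 1000-times longer test
    # Each new cycle is placed in a bin at random with probability
    # proportional to the bin's count, so the expected result is
    # factor * counts, with realistic statistical scatter.
    rng = np.random.default_rng() if rng is None else rng
    p = counts / counts.sum()
    n_cycles = int(round(factor * counts.sum()))
    draws = rng.choice(p.size, size=n_cycles, p=p.ravel())
    return np.bincount(draws, minlength=p.size).reshape(counts.shape)

# Hypothetical 4x4 To-From histogram from a short measurement (84 cycles)
h = np.array([[0, 12, 3, 0],
              [25, 0, 7, 1],
              [4, 18, 0, 2],
              [0, 3, 9, 0]], dtype=float)
h1000 = extrapolate_histogram(h, factor=1000)
print(h1000.sum())  # 84000 cycles in the extrapolated histogram
```

In the paper's method, `p` would come from the kernel-smoothed density rather than the raw counts; that smoothing is what lets cycles land in neighboring bins and lets the expected maximum load grow with test length.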
null,
"Figure 5. Extrapolated histogram\n\nTable 1 gives the fatigue lives computed from various extrapolations of the original loading history. Fatigue lives represent the expected operating life of the structure in hours. A simple SN approach was employed in the calculations. Any convenient fatigue analysis technique may be used and can be combined with a probabilistic description of the material properties.\n\nTable 1. Fatigue Lives\n\n| History | Life (hours) | Maximum | Minimum |\n| --- | --- | --- | --- |\n| Original | 29,890 | 1.89 | -2.12 |\n| 10X | 26,760 | 2.25 | -2.50 |\n| 100X | 16,170 | 2.88 | -3.25 |\n| 1000X | 10,190 | 3.25 | -3.63 |\n\nAs expected, computed fatigue lives are lower for longer loading histories because of the higher loads in the extrapolated histograms. But the higher loads represent a more realistic forecast of the operating loads during the entire lifetime of the structure.",
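The paper does not give the SN-curve constants behind Table 1, but the calculation pattern for "a simple SN approach" is standard: rainflow cycles feed a Basquin-type SN curve and a Miner's-rule damage sum. The sketch below uses made-up constants purely to show the mechanics.

```python
def miner_life(cycles, sn_C=1e12, sn_m=5.0):
    # cycles: iterable of (load_range, count) pairs from a rainflow count
    # sn_C, sn_m: Basquin SN-curve constants, N_allowable = C * S**(-m)
    # (hypothetical values, not the paper's material data)
    # Miner's rule: damage per repetition D = sum(n_i / N_i);
    # the predicted life is 1/D repetitions of the measured history.
    damage = 0.0
    for load_range, count in cycles:
        damage += count / (sn_C * load_range ** (-sn_m))
    return 1.0 / damage

# Hypothetical rainflow result: (load range, cycle count)
history = [(1.0, 500), (2.0, 40), (3.0, 2)]
print(f"{miner_life(history):.3g}")  # roughly 4.4e8 repetitions
```

Because allowable cycles fall as the load range raised to the power m, the rare high-load cycles added by extrapolation dominate the damage sum — which is why the lives in Table 1 drop as the extrapolation factor grows.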
null,
"Figure 6. Distribution of cycles and fatigue damage\n\nOne effective method to visualize the damaging cycles is to plot the distribution of cycles and the material behavior on the same scale, as shown in Fig. 6. Material properties are scaled so that they have the same units as the loading history, and this plot represents the expected fatigue life under constant amplitude loading. The point of tangency of the two curves gives an indication of the most damaging range of cycles. The maximum load range cycles are not the most damaging in this history; rather, the most damaging cycles are at about ½ of the maximum load range in the extrapolated histogram.\n\n## Rainflow Reconstruction\n\nThe objective of rainflow reconstruction is to obtain a time history that has fatigue damage similar to that of the original rainflow histogram. In essence, we wish to perform rainflow counting backwards: starting with the rainflow histogram, reconstruct a time history, cycle by cycle, that has the same rainflow count. The largest overall cycle in the histogram is known. In the To-From format, rainflow counting distinguishes between a Peak-Valley-Peak (PV) cycle and a Valley-Peak-Valley (VP) cycle. VP cycles are stored above the principal diagonal of the histogram and PV cycles below. The row of the histogram is less than the column for any VP cycle. Similarly, the column is less than the row for any PV cycle. The process for inserting VP cycles is illustrated in Fig. 7. The numbers on the left indicate histogram bins, row or column, and are proportional to the magnitude of the resulting loading history. The reconstructed time history is denoted P and V, and the cycle to be inserted is denoted r and c.",
null,
"Figure 7. Inserting a VP cycle\n\nA VP cycle (r < c) can be inserted into any PV reversal if c <= P and r > V. A VP cycle cannot be inserted into a PV cycle of the same magnitude.",
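This insertion rule, together with the mirror-image rule for PV cycles given with Figure 8 below, amounts to a simple feasibility test. A sketch, with bin indices proportional to load magnitude as in the text:

```python
def can_insert_vp(P, V, r, c):
    # A valley-peak cycle (r, c), with r < c, fits inside an existing
    # peak-valley reversal going from P down to V when c <= P and r > V.
    # The strict r > V also excludes equal-magnitude cycles.
    return r < c and c <= P and r > V

def can_insert_pv(P, V, r, c):
    # Mirror-image rule: a peak-valley cycle (c < r) fits inside a
    # valley-peak reversal from V up to P when r < P and c >= V.
    return c < r and r < P and c >= V

print(can_insert_vp(P=5, V=1, r=2, c=4))  # True: the cycle nests inside the reversal
print(can_insert_vp(P=5, V=1, r=1, c=5))  # False: same magnitude
```

During reconstruction, every reversal in the partial history is tested this way, and one feasible location is then picked at random, as described next.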
null,
"Figure 8. Insertion of a PV cycle\n\nFigure 8 shows the insertion of a PV cycle. A PV cycle (c < r) can be inserted into any VP reversal if r < P and c >= V. A PV cycle cannot be inserted into a VP cycle of the same magnitude.\n\nThese two simple rules provide the basis for rainflow reconstruction. The process starts with the largest cycle, either PVP or VPV. The next largest cycle is then inserted in an appropriate location in the reconstructed time history. After the first few cycles, there are many possible locations to insert a smaller cycle. All possible insertion locations are determined and one is selected at random.\n\n## Summary\n\nReliability of a vehicle or structure is influenced by the distribution of loading during the entire service usage. A technique for estimating the long term durability from short term measured loads has been described.\n\n## References\n\n[1] Dressler, K., B. Grunder, M. Hack and V.B. Koettgen, \"Extrapolation of rainflow matrices\", SAE Paper 960569, 1996\n[2] Silverman, B.W., \"Density estimation for statistics and data analysis\", Chapman and Hall, New York, 1986\n[3] Scott, D.W., \"Multivariate density estimation\", Wiley, New York, 1992\n[4] Roth, J.S., \"Statistical modeling of rainflow histograms\", Materials Engineering-Mechanical Behavior Report No. 182, UILU-ENG 98-4017, University of Illinois, 1998\n[5] Khosrovaneh, A.K. and N.E. Dowling, \"Fatigue loading history reconstruction based on the rainflow technique\", International Journal of Fatigue, Vol. 12, No. 2, 1990, 99-106"
] | [
null,
"https://www.hbm.com/fileadmin/mediapool/images/products/daq/somat/edaq/atvtest-fig1.jpg",
null,
"https://www.hbm.com/fileadmin/mediapool/images/products/daq/somat/edaq/atvtest-fig2.jpg",
null,
"https://www.hbm.com/fileadmin/mediapool/images/products/daq/somat/edaq/atvtest-fig3.jpg",
null,
"https://www.hbm.com/fileadmin/mediapool/images/products/daq/somat/edaq/atvtest-fig4.jpg",
null,
"https://www.hbm.com/fileadmin/mediapool/images/products/daq/somat/edaq/atvtest-fig5.jpg",
null,
"https://www.hbm.com/fileadmin/mediapool/images/products/daq/somat/edaq/atvtest-fig6.jpg",
null,
"https://www.hbm.com/fileadmin/mediapool/images/products/daq/somat/edaq/atvtest-fig7.jpg",
null,
"https://www.hbm.com/fileadmin/mediapool/images/products/daq/somat/edaq/atvtest-fig8.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91563934,"math_prob":0.8943534,"size":12228,"snap":"2022-40-2023-06","text_gpt3_token_len":2497,"char_repetition_ratio":0.15592277,"word_repetition_ratio":0.04191617,"special_character_ratio":0.19978738,"punctuation_ratio":0.09208762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9747888,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T02:38:21Z\",\"WARC-Record-ID\":\"<urn:uuid:ecd554f8-8c4e-44e1-aa04-5e3a544a1e8d>\",\"Content-Length\":\"104685\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7336d4ec-69a5-474b-ac10-862cd8852634>\",\"WARC-Concurrent-To\":\"<urn:uuid:928062f1-cd2f-4ffc-88d2-092dc347aabb>\",\"WARC-IP-Address\":\"104.18.19.150\",\"WARC-Target-URI\":\"https://www.hbm.com/en/3203/tips-and-tricks-using-statistical-techniques-to-validate-field-test-data-for-fatigue-analysis/\",\"WARC-Payload-Digest\":\"sha1:MC3IJUDRLC5EBNIJMMHJQO3F4USCBIEE\",\"WARC-Block-Digest\":\"sha1:EOZAIQBJP5RHHXCU5B6SDDO4HYJSPYAP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499697.75_warc_CC-MAIN-20230129012420-20230129042420-00547.warc.gz\"}"} |
https://lotharlorraine.wordpress.com/2013/11/23/the-bayesian-delusion/ | [
"# A mathematical proof of Bayesianism?\n\nThis is going to be another boring post (at least for most people who are not nerds).\n\nHowever, before approaching interesting questions such as the existence of God, morality and history, a sound epistemology (theory of knowledge) must already be present. During most (heated) debates between theists and atheists, people tend to take for granted many epistemological principles which are very questionable.\n\nThis is why I spend a certain amount of my time exploring such questions, as a groundwork for more applied discussions.\n\nI highly recommend all my readers to first read my two other posts on the concept of probability before reading what follows.\n\nBayesianism is a theory of knowledge according to which our degrees of belief in theories are well defined probabilities taking on values between 0 and 1.\n\nAccording to this view, saying that string theory has a probability of 0.2 of being true is as meaningful as saying that a normal die randomly thrown has a probability of 1/6 of producing a “3”.\n\nBayesians like asserting over and over again that it is mathematically proven that we ought to compute the likelihood of all beliefs according to the laws of probability, first and foremost Bayes formula:",
null,
"Here I want to debunk this popular assertion. Bayes theorem can be mathematically proven for frequential probabilities, but there is no such proof that ALL our degrees of belief behave that way.\n\nLet us consider (as an example) the American population (360 million people) and two features a person might have.\n\nCE (Conservative Evangelical): the individual believes that the Bible contains no error.",
null,
"FH (Fag Hating): the individual passionately hates gay people.",
null,
"Let us suppose that 30% of Americans are CE and that 5.8% of Americans hate homosexuals.\n\nThe frequencies are f(CE) = 0.30 and f(FH) = 0.058.\n\nLet us now consider a random event: you meet an American by chance.\nWhat is the probability that you meet a CE person, and what is the probability that you meet a FH individual?\nAccording to a frequentist interpretation, the probability equals the frequency of meeting such kinds of persons given a very great (actually infinite) number of encounters.\nFrom this it naturally follows that p(CE) = f(CE) = 0.30 and p(FH) = f(FH) = 0.058.\n\nLet us now introduce the concept of conditional probability: if you meet a Conservative Evangelical, what is the probability that he hates faggots, p(FH|CE)? (The | stands for “given”.)\n\nIf you meet a fag-hating person, what is the probability that he believes in Biblical inerrancy, p(CE|FH)?\n\nTo answer these questions (thereby proving Bayes theorem) it is necessary to get back to our consideration of frequencies.\n\nLet us consider that 10% of all Conservative Evangelicals and 4% of people who are not CE hate faggots: f(FH|CE) = 0.1 and f(FH|⌐CE) = 0.04. The symbol ⌐ stands for the negation (denial) of a proposition.\n\nThe proportion of Americans who are both conservative Evangelicals and fag-haters is f(FH∩CE) = f(FH|CE)*f(CE) = 0.1*0.3 = 0.03.\n\nThe proportion of Americans who are NOT conservative Evangelicals but are fag-haters is f(FH∩⌐CE) = f(FH|⌐CE)*f(⌐CE) = 0.04*0.7 = 0.028.\n\nLogically, the frequency of fag-haters in the whole American population is equal to the sum of the two proportions:\n\nf(FH) = f(FH∩CE) + f(FH∩⌐CE) = 0.03 + 0.028 = 0.058\n\nBut what if we are interested in knowing the probability that a person is a conservative Evangelical IF that person hates queers, p(CE|FH)?\n\nThis corresponds to the frequency (proportion) of Conservative Evangelicals among Fag-Haters: f(CE|FH).\n\nWe know that f(FH∩CE) = f(CE∩FH) = f(CE|FH)*f(FH).\n\nThus f(CE|FH) = f(FH∩CE) / f(FH)",
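The arithmetic above is easy to check by machine. Here are a few lines of Python using the post's own numbers (the variable names are mine):

```python
# The post's assumed frequencies
f_CE = 0.30              # fraction of Americans who are conservative Evangelicals
f_FH_given_CE = 0.10     # fag-haters among CE
f_FH_given_notCE = 0.04  # fag-haters among everyone else

# Law of total probability: overall fraction of fag-haters
f_FH = f_FH_given_CE * f_CE + f_FH_given_notCE * (1 - f_CE)

# Bayes theorem: fraction of CE among fag-haters
f_CE_given_FH = f_FH_given_CE * f_CE / f_FH

print(round(f_FH, 3))           # 0.058
print(round(f_CE_given_FH, 4))  # 0.5172, i.e. the 51.72% quoted below
```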
null,
"Given a frequentist interpretation of probability, this entails that",
null,
"which is of course Bayes theorem. We have mathematically proven it in this particular case, but the rigorous mathematical demonstration would be pretty much the same for any events expressible as frequencies.\n\nIf you meet an American who hates gays, the probability that he is a Conservative Evangelical is 51.72% (given the validity of my starting values above).\n\nBut let us now consider the Bayesian interpretation of probability (our degree of confidence in a theory) in a context having nothing to do with frequencies.\n\nLet S be “String theory is true” and UEP “an Undead Elementary Particle has been detected during an experiment at the LHC”.",
null,
"In that context, the probabilities correspond to our confidence in the truth of theories and hypotheses.\n\nWe have no compelling grounds for thinking that",
null,
", that is to say, that this is the way our brains actually work, or that they ought to work that way in order to strive for truth.\n\nThe mathematical demonstration used to prove Bayes theorem relies on related frequencies and cannot be employed in a context where propositions (such as S and UEP) cannot be understood as frequencies.\nConsidering ALL our degrees of belief as probabilities is a philosophical decision and not an inevitable result of mathematics.\n\nI hope that I have not been too boring for lay people.\n\nNow I have some homework for you: what is the probability that Homeschooling Parents would like to employ my post as an introduction to probability interpretation, given that they live in the Bible Belt, p(HP|BB)?",
null,
"## 20 thoughts on “A mathematical proof of Bayesianism?”\n\n1.",
null,
"michaeleeast says:\n\nLotharson,\nI am not sure that I entirely understand the mathematics concerning non-frequencies but I found the example interesting. Although I thought the percentage would be higher.\n\n•",
null,
"lotharson says:\n\nAccording to your own experience, what percentage would you use?\n\n•",
null,
"michaeleeast says:\n\nIt’s only a guess but I would think it was more like 75%.\n\n2.",
null,
"labreuer says:\n\nI had a bit of trouble understanding your differentiation between frequentist and Bayesian probability philosophy, so I found Frequentist vs. Bayesian statistics: resources to help you choose, where the author conveniently differentiated between the two. (P is probability, D is data, H is hypothesis)\n\nP(D | H) — frequentist\nP(H | D) — Bayesian\n\nA lot of the choice between frequentist and Bayesian statistics comes down to whether you think science should comprise statements about the world, or statements about our beliefs.\n\nI’m not sure about what the author says about statements about the world vs. beliefs; he hasn’t convinced me and neither have you. :-p I just don’t know enough about the two different views to settle on one over the other.\n\nNow, you say:\n\nThe mathematical demonstration used to prove Bayes theorem relies on related frequencies and cannot be employed in a context where propositions (such as S and UEP) cannot be understood as frequencies.\n\nFirst, you mean ‘related probabilities‘, else you’re mixing frequentism and Bayesianism. That’s just a quibble. Second, why do you say that P(S) is an invalid concept? We can certainly become more and less confident in a theory. Scientific observations are not sacrosanct; see the faster-than-light neutrino anomaly. But here I am talking about statements about belief, and not about reality. Then again, there is an inherent ‘fuzz’ in reality, whereby all models of it are only accurate to some level. Some may dispute this, but no convincing argument has been advanced.\n\n•",
null,
"lotharson says:\n\nHello Luke!\n\nThere are two issues which are raised here.\n\n1) Let us suppose for the sake of the argument that the intensity of our beliefs in our brain can be mapped as a continuous function onto the interval [0;1].\nWhat does that mean that:\n– String theory has a probability of 0.3 to be true\n– String theory has a probability of 0.6 to be true\n– String theory has a probability of 0.8 to be true ?\nWhat is the relation with the real world? I don’t see how one can avoid subjectivity.\n\n2) why should the probability of String Theory to be true OBEY Bayes theorem? It is certainly not possible to demonstrate it mathematically. As I showed in the post, the demonstration only holds for probabilities understood as frequencies for an infinite number of events.\nIt goes without saying you cannot use this line of reasoning for ST.\n\nCheers.\n\n•",
null,
"labreuer says:\n\n1) I cannot help but think that our confidence in propositions exists on (0, 1)—I prefer to exclude the endpoints—and that Bayesian inference well-models the updating of our confidences.\n\nI’m reticent to talk about whether or not string theory is true, because what I really believe is that either it will or will not be a better model of reality than the ones we have so far. In other words, I think string theory is actually false, but it may be less false than what came before it. 🙂 I should adopt string theory for a given purpose if it is better at what I want than the other available options. Sometimes, good old F = ma will be sufficient. Note that I believe reality is infinite in description, and therefore we will only ever be able to approximate it better and better.\n\n2) Bayes’ theorem merely talks about connected observations and beliefs. A single, distinct observation/proposition/belief cannot obey Bayes’ theorem because we haven’t claimed that it is connected to anything.\n\nThere is a way to tweak what you’ve said, which slightly draws on the multiverse theory, but only really as inspiration: what is the probability that I live in a world where string theory is a good model? It seems to me that the only way to answer such a question is to make observations and see whether they line up with string theory or do not line up with it.\n\n3.",
null,
"Ian says:\n\nHmmm. Although I agree with your general pessimism on using Bayes in philosophy, I’m not convinced by your argument here. For a couple of reasons.\n\n1. There are frequentist interpretations of confidence. So someone can argue that they are using confidence probabilities in that way. Carrier explicitly does this in Proving History, though he thinks he is inventing a new way to ‘solve’ the Bayesian / Frequentist problem, which of course he is not. To the extent that such an interpretation is valid, your argument disintegrates. Of course, even if the interpretation works, it has troubling implications on the ability to actually calculate probabilities for confidence. But then we’re back to the effect of error on calculations, rather than the applicability of Bayes’s theorem.\n\n2. If we reject #1, we’re still left with the problem that Bayes’s theorem, acting on belief probabilities, can be checked empirically. And it appears to work. At least to within the kinds of tolerance that a debater would want to claim to use it in their arguments. I’ve relied on Bayesian interpretations of probability for my paycheck. If Bayes’s Theorem breaks down in such cases, then the errors it introduces are so small compared to the noise in any reasonable data set, that it is hardly worth mentioning as an objection in domains where the noise is so dominant.\n\n•",
null,
"lotharson says:\n\nDear Ian, thank you very much for your critical comment.\n\nIn this post, I only wanted to show that one cannot prove mathematically that ALL our degrees of belief (or confidence in a theory) obey Bayes theoerem because such a demonstration demand them to be related to frequencies.\n\nYou answer me that there are frequentist interpretations of confidence. Fair enough. But what is the frequency that string theory is the correct description of the multiverse?\nOr what is the related frequency?\n\nOtherwise I agree that applying Bayes theorem leads to success in many fields. But I have not seen a good justification of its use in fields where frequencies are not meaningful, such as String Theory or the existence of God for that matter.\n\nCheers.\n\n•",
null,
"Ian says:\n\nI agree. When you have terms that are so vaguely defined and ambiguous, there is simply no way to apply quantitative reasoning to it.\n\n•",
null,
"lotharson says:\n\nActually, I don’t think that the problem lies in the ill-definition but in the non-frequential nature of the beliefs.\nThe great philosopher of science Elliot Sober gave the example of the theory of gravitation:\n\n“Newton’s universal law of gravitation, when suitably supplemented with plausible background assumptions, can be said to confer probabilities on observations. But what does it mean to say that the law has a probability in the light of those observations? More puzzling still is the idea that it has a probability before any observations are taken into account. If God chose the laws of nature by drawing slips of paper from an urn, it would make sense to say that Newton’s law has an objective prior. But no one believes this process model, and nothing similar seems remotely plausible.”\n\nI think that his point is well taken and I don’t see how you can define this probability as being something more than a subjective brain state.\nAnd there would be no mathematical demonstration that, in that particular situation, Bayes theorem can be employed for actualizing our confidence.\n\nOtherwise I happen to think one can use an extended concept of frequentist probability for considering historical events.\nThis would have the advantage to get rid of subjectivity altogether and compute objective probabilities.\nIn the next weeks I will lay the groundwork for such an approach.\n\nTo be honest I am not very excited at the perspective of buying Carrier’s book. While he certainly has some good ideas, I don’t like his arrogant and condescending tone towards those disagreeing with his pet theories.\n\nCheers from Lancaster.\n\n•",
null,
"Ian says:\n\nGood luck then, because I’m starting out disagreeing. But I look forward to being convinced! So far you’re pretty convincing, I’m just not buying yet… 🙂\n\n4.",
null,
"gmalcolms says:\n\nBayes’s theorem, or the definition of conditional probability, doesn’t rely on a frequentist interpretation of probability.\n\nBayes’s theorem follows from the formula for conditional probability (P(A|B) = P(AB)/P(B)) and a couple of obvious axioms of probability (the probability of the entire space is 1 and probabilities of disjoint events sum). So what you’re really saying is that P(A|B) = P(AB)/P(B) is only true in a frequentist interpretation. But that’s not true, either.\n\nFirst one needs to define the conditional probability measure P(・|B). This is a probability measure on the subspace B (of the original, larger space); that is, it satisfies all the axioms of probability, with P(B|B) = 1. Moreover, ratios of probabilities for events contained with B shouldn’t depend on whether one uses the original probability measure or the condition one. This has nothing to do with frequentism but is required for our notion of probability to make sense. To give an example, let’s say that the probability that someone is a left-handed man is 2 times the probability that he is a man over 6′ tall. This ratio shouldn’t change if we condition on the fact that he is a man. Otherwise we could change the probability ratio just by arbitrarily conditioning on any event which contains the 2 events in question, say, the fact that the person is human, or a mammal.\n\nThe fact that this ratio is invariant under conditioning means that P(A|B)/P(C|B) = P(A)/P(C) for any 2 events A, C contained in B. Choosing the event B as our event C (which is permissible since B is contained in itself) gives: P(A|B)/P(B|B) = P(A)/P(B), or P(A|B) = P(AB)/P(B) for events A contained in B. 
For an arbitrary event that is not wholly contained in B, just split it up into the parts inside and outside B and use additivity of probability and the fact that P(A|B) = 0 if A is disjoint from B to get the same formula.\n\nSo Bayes’s formula just follows from the axioms of probability and a basic consistency requirement for the meaning of conditional probability. It’s hard to see what a probability theory would look like if it didn’t fulfill these conditions.\n\n•",
null,
"lotharson says:\n\nHello Malcom, thanks for your challenging comment.\n\nMy point was that we have no compelling reason to think that our degrees of beliefs have to obey the laws of conditional probabilities for situations having no connection with frequencies whatsoever.\nBayesian assume that in every situation our degrees of belief satisfy the axioms of probability. But what is a degree of belief in a situation involving no frequency at all?\n\nLet us assume, for the sake of the argument, that one can map any degree of belief in a theory T as the intensity of an ensemble of brain processes which takes on continuous values between 0 and 1: I(T).\n\nLet us consider the propositions S: String theory is true and M: we live in a multiverse.\nLet us suppose that my degree of belief we live in a multiverse is I(M) = 0.30 and my degree of belief we live in a multiverse where String theory is true is I(S and M) = 0.05.\n\nWhy should I(S given M) = I(S and M) / I(M) = 0.05/*0.30 ?\n\nAnswering “because the intensity of ALL our degrees of belief in our brain must behave like mathematical probabilities” would seem to beg the question.\n\nCheers.\n\n•",
null,
"gmalcolms says:\n\nYou seem to be mixing up a few things here.\n\n1. The laws of probability are not synonymous with frequentism. It’s true that Bayes’s theorem follows from the axioms of probability and the definition of conditional probability, but these things do not depend on frequentism but hold for other interpretations of probability as well. You can find a list of the 3 axioms of (Kolmogorov’s) probability theory online here: http://en.wikipedia.org/wiki/Probability_axioms. They are pretty basic, especially if one considers only finite additivity (which is all that is required for Bayes’s theorem). Conditional probability is usually added to the theory via a definition or additional axiom, which is just P(A|B) = P(AB)/P(B), but as I pointed out it can derived from an even more intuitive axiom, namely, that odds ratios should be independent of extraneous events.\n\n2. When one talks about “subjective probability” philosophically or mathematically, one is still assuming that these probabilities obey some rules. E.g., it wouldn’t make sense to talk about a negative probability or probabilities greater than 1, at least in any usual sense of the word “probability.” Of course, you are free to imagine any set of probabilities you like, regardless of whether they are consistent in any way. For example, you may think that there is a 10% chance that tomorrow’s high temperature is at least 20 degrees and and a 20% chance it is at least 30 degrees, but one could hardly expect that other people would put much stock in your opinion in that case.\n\nSo when Carrier or anyone else applies Bayes’s theorem to a question involving “subjective” probabilities, the implicit assumption is being made that what is being called “probability” satisfies the axioms of probability theory. This doesn’t seem, to me at least, to be that restrictive a qualification.\n\n•",
null,
"lotharson says:\n\nHello Malcom!\n\nI believe one can consider objective and frequential probabilities of historical events by considering a theoretic infinite population which might really exist if we live in an infinite multiverse.\nSo I would advise historians to give up computing the intensity of their subjective beliefs and instead to use Bayes theorem as a mean to calculate frequential probabilities existing objectively independently from us.\nScientifically it is obvious that the latter approach is superior since it directly touches the real world. In future posts, I will explain how this can play out.\n\nI am still not convinced that the intensity of my beliefs about universal gravitation or string theory ought to obey all axioms of probability, including the invariance of the ratios you mentionned.\nTaking this step is a philosophical decision and is by no means compelled by mathematics itself.\nOntologically speaking, what does that mean that string theory has 20% or 60% odds to be true? A frequential interpretation of these values seems impossible.\n\nTo my mind, it is perfectly possible to assume that the intensities of our conviction in situation NOT involving frequencies (such as the truth of quantum gravitation) have no objective meaning.\n\nAnd there is a related problem concerning the determination and even existence of PRIORS for such situations.\nAs the great philosopher Elliot Sober wrote:\n\n““Newton’s universal law of gravitation, when suitably supplemented with plausible background assumptions, can be said to confer probabilities on observations. But what does it mean to say that the law has a probability in the light of those observations? More puzzling still is the idea that it has a probability before any observations are taken into account. If God chose the laws of nature by drawing slips of paper from an urn, it would make sense to say that Newton’s law has an objective prior. 
But no one believes this process model, and nothing similar seems remotely plausible.””\n\nSo I think I am NOT compelled by rationality to accept that the intensity of my beliefs about this should behave like a probability, since they are not objectively definable.\n\nBut as I said I think one can (theoretically) use Bayes’s theorem to calculate the objective frequential probability of historical events given all what is known.\nAnd such a probability would exist objectively and would not be merely a state of our brain.\n\nCheers from the UK.\n\n•",
null,
"gmalcolms says:\n\nlotharson,\n\nHere’s the first hit that you get when you google “interpretation of probability”: http://plato.stanford.edu/entries/probability-interpret/\n\nRead section 3.3 on subjective probability, in particular. You can argue for a theory of complete subjectivity with no rationality bounds, but as the author states, “unconstrained subjectivism is not a serious proposal.” Rejecting the probability calculus also leaves you vulnerable to a Dutch book.\n\nUse of Bayes’s theorem, and all the other attempts, to discuss the likelihood of past events being true assume that such probabilities obey Kolmogorov’s axioms, at least up to finite additivity. Personally I don’t find this to be a big constraint, but no one is forcing you to be rational or consistent.\n\n(BTW, no offense intended, but I would think that someone from the UK should be able to spell my name.)\n\n5.",
null,
"TheEvangelicalLiberal says:\n\nHi lotharson, interesting post but my non-mathematically-inclined brain isn’t quite up to the equations! I’m not sure I’ve fully understood Bayesianism yet (actually, I’m fairly sure I haven’t).\n\nSo for a layman like me, was the old London bus campaign ‘There’s probably no God’ using a Bayesian view of probability?\n\nAs a non-mathematician, I find it hard to see how measures of probability can be applied to religious questions like whether there’s a God, or which religion best represents God, etc.\n\nI seem to remember a passage in Dawkins’ “God Delusion” which tried to put a numerical probability on the likelihood of God existing. That always struck me as one of his less convincing lines of reasoning.\n\nBased on certain starting assumptions I suppose you *might* be able to argue that particular religious views are more or less likely to be true. I’d certainly like to think that it’s possible to argue that Scientology is 99.999% likely to be complete b*ll*cks, for example! But in practice I’m not sure how I could justify such a statement.\n\nAnyway, interested to hear your thoughts.\n\n6.",
null,
"jesuswithoutbaggage says:\n\nHi Lothar,\n\nThis is interesting, but I would have to read a book on Bayesianism before I could peruse the argument, and right now there are a lot of books higher on my priority list.\n\n•",
null,
"lotharson says:\n\nI know that feeling too 🙂\n\n7.",
null,
"lotharlorraine says:\n\n[…] Next episode: a mathematical proof of Bayesianism? […]"
] | [
null,
"https://lotharlorraine.files.wordpress.com/2013/11/bayesgeneral.jpg",
null,
"https://lotharlorraine.files.wordpress.com/2013/11/god-said-it-believe-it-1.jpg",
null,
"https://lotharlorraine.files.wordpress.com/2013/11/homobashing.jpg",
null,
"https://lotharlorraine.files.wordpress.com/2013/11/fagbayesfrequencies.png",
null,
"https://lotharlorraine.files.wordpress.com/2013/11/fagbayes.png",
null,
"https://lotharlorraine.files.wordpress.com/2013/11/stringtheory.jpg",
null,
"https://lotharlorraine.files.wordpress.com/2013/11/stringbayes.png",
null,
"https://i0.wp.com/www.lanl.gov/bayesian/images/bayesPicture.jpg",
null,
"https://1.gravatar.com/avatar/1ac7053fc537685ff64c56e2f5937eb5",
null,
"https://1.gravatar.com/avatar/d83307ca6be96edb79a0607a29a5a905",
null,
"https://1.gravatar.com/avatar/1ac7053fc537685ff64c56e2f5937eb5",
null,
"https://0.gravatar.com/avatar/676c288b56e0d57fa431c1433b2e1741",
null,
"https://1.gravatar.com/avatar/d83307ca6be96edb79a0607a29a5a905",
null,
"https://0.gravatar.com/avatar/676c288b56e0d57fa431c1433b2e1741",
null,
"https://1.gravatar.com/avatar/dad31fc611ee735548fc53d6ed405142",
null,
"https://1.gravatar.com/avatar/d83307ca6be96edb79a0607a29a5a905",
null,
"https://1.gravatar.com/avatar/dad31fc611ee735548fc53d6ed405142",
null,
"https://1.gravatar.com/avatar/d83307ca6be96edb79a0607a29a5a905",
null,
"https://1.gravatar.com/avatar/dad31fc611ee735548fc53d6ed405142",
null,
"https://1.gravatar.com/avatar/167395046d97f79579b23df9dfd98660",
null,
"https://1.gravatar.com/avatar/d83307ca6be96edb79a0607a29a5a905",
null,
"https://1.gravatar.com/avatar/167395046d97f79579b23df9dfd98660",
null,
"https://1.gravatar.com/avatar/d83307ca6be96edb79a0607a29a5a905",
null,
"https://1.gravatar.com/avatar/167395046d97f79579b23df9dfd98660",
null,
"https://0.gravatar.com/avatar/f60db995123e189eac603a5bd2d3202e",
null,
"https://1.gravatar.com/avatar/a9e148b41ada32547e2b4f450f94346c",
null,
"https://1.gravatar.com/avatar/d83307ca6be96edb79a0607a29a5a905",
null,
"https://secure.gravatar.com/blavatar/67da03328d44fb99d83392e981a1f193",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9458435,"math_prob":0.9095347,"size":22625,"snap":"2021-43-2021-49","text_gpt3_token_len":4937,"char_repetition_ratio":0.14906503,"word_repetition_ratio":0.070500925,"special_character_ratio":0.20839779,"punctuation_ratio":0.0889811,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98523134,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,3,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-08T20:19:14Z\",\"WARC-Record-ID\":\"<urn:uuid:daf9b92f-3a87-4caa-ad66-773a47971e2a>\",\"Content-Length\":\"185721\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:853c50ad-44ee-4043-bcf4-6b79ef1162b7>\",\"WARC-Concurrent-To\":\"<urn:uuid:0de475e6-552f-49ed-8ee9-bbded97e7350>\",\"WARC-IP-Address\":\"192.0.78.12\",\"WARC-Target-URI\":\"https://lotharlorraine.wordpress.com/2013/11/23/the-bayesian-delusion/\",\"WARC-Payload-Digest\":\"sha1:F2WN2WXEG5OQ4OTM4U2I4K4JJSUEYUI2\",\"WARC-Block-Digest\":\"sha1:EOMDSV5DK3UP45W6S5IBPNKZ3HEQ4NAZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363520.30_warc_CC-MAIN-20211208175210-20211208205210-00068.warc.gz\"}"} |
https://docs.microsoft.com/en-us/qsharp/api/qsharp/microsoft.quantum.intrinsic.exp?view=qsharp-preview | [
"# Exp operation\n\nApplies the exponential of a multi-qubit Pauli operator.\n\n\\begin{align} e^{i \\theta [P_0 \\otimes P_1 \\cdots P_{N-1}]}, \\end{align} where $P_i$ is the $i$th element of paulis, and where $N =$Length(paulis).\n\noperation Exp (paulis : Pauli[], theta : Double, qubits : Qubit[]) : Unit\nFunctors\n\n## Input\n\npaulis\nPauli[]\n\nArray of single-qubit Pauli values indicating the tensor product factors on each qubit.\n\ntheta\nDouble\n\nAngle about the given multi-qubit Pauli operator by which the target register is to be rotated.\n\nqubits\nQubit[]\n\nRegister to apply the given rotation to.\n\nUnit"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5815508,"math_prob":0.9525333,"size":625,"snap":"2020-24-2020-29","text_gpt3_token_len":187,"char_repetition_ratio":0.11916264,"word_repetition_ratio":0.0,"special_character_ratio":0.2704,"punctuation_ratio":0.117117114,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997458,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-27T13:34:30Z\",\"WARC-Record-ID\":\"<urn:uuid:d7d7ad86-42af-47d4-88ad-bab30364897d>\",\"Content-Length\":\"37993\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ed821bce-b70f-45b4-9713-5e35762c0d76>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ef2cdf9-5da0-42a9-bb55-f15427ec7002>\",\"WARC-IP-Address\":\"104.86.81.75\",\"WARC-Target-URI\":\"https://docs.microsoft.com/en-us/qsharp/api/qsharp/microsoft.quantum.intrinsic.exp?view=qsharp-preview\",\"WARC-Payload-Digest\":\"sha1:DP4W7QA6YQJZP3MRGEZRE7Z5WKL5MPEA\",\"WARC-Block-Digest\":\"sha1:A5XNZ7CV2WGLB4OUGRQZCRCYTCHSGPH5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347394074.44_warc_CC-MAIN-20200527110649-20200527140649-00597.warc.gz\"}"} |
https://www.physicsforums.com/threads/eigenvectors-of-this-hamiltonian.763094/ | [
"# Eigenvectors of this Hamiltonian\n\nGold Member\n\n## Main Question or Discussion Point\n\nI've got a problem which is asking for the eigenvalues and eigenstates of the Hamiltonian $H_0=-B_0(a_1 \\sigma_z^{(1)}+a_2 \\sigma_z^{(2)})$ for a system consisting of two spin half particles in the magnetic field $\\vec{B}=B_0 \\hat z$.\nBut I think the problem is wrong and no eigenstate and eigenvalue exist. Here's my reason:\n$-B_0(a_1 \\sigma_z^{(1)}+a_2 \\sigma_z^{(2)}) \\left(\\begin{array}{c}x_1\\\\y_1 \\end{array}\\right)\\left(\\begin{array}{c}x_2\\\\y_2 \\end{array}\\right) =\\lambda\\left(\\begin{array}{c}x_1\\\\y_1 \\end{array}\\right)\\left(\\begin{array}{c}x_2\\\\y_2\\end{array}\\right) \\Rightarrow -B_0(a_1 \\left(\\begin{array}{c}x_1\\\\-y_1 \\end{array}\\right)\\left(\\begin{array}{c}x_2\\\\y_2 \\end{array}\\right)+a_2 \\left(\\begin{array}{c}x_1\\\\y_1 \\end{array}\\right)\\left(\\begin{array}{c}x_2\\\\-y_2 \\end{array}\\right)) =\\lambda\\left(\\begin{array}{c}x_1\\\\y_1 \\end{array}\\right)\\left(\\begin{array}{c}x_2\\\\y_2\\end{array}\\right)$\nBut as far as I know, that is impossible. Am I right?\nI'm asking this because I'm affraid maybe my knowledge on tensor product states and operators contain some holes because I learnt it by reading little things here and there.\n\nRelated Quantum Physics News on Phys.org\natyy\nThe final step doesn't look right. I haven't looked at the details, but I would try representing the tensor product state using a 4X1 column vector as in http://www.matfys.lth.se/education/quantinfo/QIlect1.pdf (p10) or http://www.tau.ac.il/~quantum/Reznik/lecture notes in quantum information.pdf (p32). The operators should be 4X4 matrices, and ##\\sigma_z^{(1)}## should be ##\\sigma_z^{(1)} \\otimes \\mathbb{I}^{(2)}##.\n\nEdit: rubi's and stevendaryl's posts are correct. 
The 4X1 vector should have unknowns [a b c d]T.\n\nLast edited:\nrubi\nThe problem with the calculation is that a general vector in the tensor product space isn't ##(\\sum_{i=1}^2 a_i e_i)\\otimes(\\sum_{j=1}^2 b_j e_j)## but rather ##\\sum_{i,j=1}^2 c_{ij} e_i\\otimes e_j##.\n\n•",
null,
"1 person\nstevendaryl\nStaff Emeritus\nThe state $\\left( \\begin{array}\\\\ x_1 \\\\ y_1\\end{array} \\right) \\left( \\begin{array}\\\\ x_2 \\\\ y_2\\end{array} \\right)$ is not the most general state for two spin-1/2 particles. The most general state looks something like this:\n\n$\\alpha_1 \\left( \\begin{array}\\\\ 1 \\\\ 0\\end{array} \\right) \\left( \\begin{array}\\\\ 1 \\\\ 0\\end{array} \\right) + \\alpha_2 \\left( \\begin{array}\\\\ 1 \\\\ 0\\end{array} \\right) \\left( \\begin{array}\\\\ 0 \\\\ 1\\end{array} \\right) + \\alpha_3 \\left( \\begin{array}\\\\ 0 \\\\ 1\\end{array} \\right) \\left( \\begin{array}\\\\ 1 \\\\ 0\\end{array} \\right) + \\alpha_4 \\left( \\begin{array}\\\\ 0 \\\\ 1\\end{array} \\right) \\left( \\begin{array}\\\\ 0 \\\\ 1\\end{array} \\right)$\n\nNow, operate on this general state with the Hamiltonian to find out what kind of constraints on the $\\alpha$s would make it an eigenstate.\n\n•",
null,
"1 person\nGold Member\nThanks people.\nNow another question.\nThe next part of the problem asks for the first order correction for the perturbation $k \\, \\vec\\sigma^{(1)} \\cdot \\vec\\sigma^{(2)}$. Is the following correct?\n$\\vec\\sigma^{(1)} \\cdot \\vec\\sigma^{(2)}=(\\sigma_x^{(1)*} \\,\\,\\, \\sigma_y^{(1)*} \\,\\,\\, \\sigma_z^{(1)*}) \\left( \\begin{array}{c} \\sigma_x^{(2)} \\\\ \\sigma_y^{(2)} \\\\ \\sigma_z^{(2)} \\end{array} \\right)=\\sigma_x^{(1)*}\\sigma_x^{(2)}+\\sigma_y^{(1)*}\\sigma_y^{(2)}+ \\sigma _z^{(1)*}\\sigma_z^{(2)}$"
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6705755,"math_prob":0.9996679,"size":2165,"snap":"2020-34-2020-40","text_gpt3_token_len":803,"char_repetition_ratio":0.26931977,"word_repetition_ratio":0.7192118,"special_character_ratio":0.3427252,"punctuation_ratio":0.035443038,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99981993,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-12T03:55:58Z\",\"WARC-Record-ID\":\"<urn:uuid:0416b95f-5add-48cc-9692-756a58627252>\",\"Content-Length\":\"81647\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:521bc683-0a47-4907-bfe2-8f484c1d4658>\",\"WARC-Concurrent-To\":\"<urn:uuid:2c57b427-ee6b-417d-a9b5-a2b2515848af>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/eigenvectors-of-this-hamiltonian.763094/\",\"WARC-Payload-Digest\":\"sha1:2ZIHWBBW2YFTNO56ONSFDEHSIQMNZVAF\",\"WARC-Block-Digest\":\"sha1:HK2KUNYKWJ3P7ZTZILXPSEJ6W5F75AEP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738864.9_warc_CC-MAIN-20200812024530-20200812054530-00418.warc.gz\"}"} |
https://www.statology.org/coefficient-of-variation-ti-84/ | [
"# How to Find Coefficient of Variation on a TI-84 Calculator\n\nA coefficient of variation, often abbreviated as CV, is a way to measure how spread out values are in a dataset relative to the mean. It is calculated as:\n\nCV = σ / μ\n\nwhere:\n\n• σ: The standard deviation of dataset\n• μ: The mean of dataset\n\nIn simple terms, the coefficient of variation is the ratio between the standard deviation and the mean.\n\nIt is often used to compare the variation between two different datasets. For example, in finance it is used to compare the mean expected return of an investment relative to the expected standard deviation of the investment.\n\nFor example, suppose an investor is considering investing in the following two mutual funds:\n\nMutual Fund A: mean = 9%, standard deviation = 12.4%\n\nMutual Fund B: mean = 5%, standard deviation = 8.2%\n\nThe investor can calculate the coefficient of variation for each fund:\n\n• CV for Mutual Fund A = 12.4% / 9% = 1.38\n• CV for Mutual Fund B = 8.2% / 5% = 1.64\n\nSince Mutual Fund A has a lower coefficient of variation, it offers a better mean return relative to the standard deviation.\n\nThe following step-by-step example explains how to calculate the coefficient of variation for the following dataset on a TI-84 calculator:\n\nDataset: 3, 8, 8, 13, 16, 11\n\n### Step 1: Enter the Data\n\nFirst, we will enter the data values.\n\nPress Stat, then press EDIT. Then enter the values of the dataset in column L1:",
null,
"### Step 2: Find the Coefficient of Variation\n\nNext, press Stat and then scroll over to the right and press CALC.\n\nThen press 1-Var Stats.",
null,
"In the new screen that appears, press Enter.",
null,
"Once you press Enter, a list of summary statistics will appear:",
null,
"From this screen we can observe the values for the mean and the sample standard deviation:\n\n• Mean (x): 9.8333\n• Sample standard deviation (Sx): 4.535\n\nWe can then calculate the coefficient of variation as:",
null,
"The coefficient of variation for this dataset turns out to be 0.4611. In percentage terms, this is equal to 46.11%."
] | [
null,
"https://www.statology.org/wp-content/uploads/2021/04/cvTI1.png",
null,
"https://www.statology.org/wp-content/uploads/2021/04/sum2.png",
null,
"https://www.statology.org/wp-content/uploads/2021/04/sum3.png",
null,
"https://www.statology.org/wp-content/uploads/2021/04/cvTI2.png",
null,
"https://www.statology.org/wp-content/uploads/2021/04/cvTI3.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84774125,"math_prob":0.98988795,"size":2047,"snap":"2021-43-2021-49","text_gpt3_token_len":499,"char_repetition_ratio":0.15810083,"word_repetition_ratio":0.0138121545,"special_character_ratio":0.25305325,"punctuation_ratio":0.1394799,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987387,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,3,null,null,null,null,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-16T11:38:35Z\",\"WARC-Record-ID\":\"<urn:uuid:06116667-e930-4093-9b4d-d82d11497a72>\",\"Content-Length\":\"34580\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d1b050e0-a50a-4a36-b821-92b20a470cc2>\",\"WARC-Concurrent-To\":\"<urn:uuid:9502ca26-e892-4a14-8011-d24292cb2c98>\",\"WARC-IP-Address\":\"35.206.100.4\",\"WARC-Target-URI\":\"https://www.statology.org/coefficient-of-variation-ti-84/\",\"WARC-Payload-Digest\":\"sha1:HGNFZENZPJBFDTEYKAQZSGFWIYOAJNU3\",\"WARC-Block-Digest\":\"sha1:IAYFGYWP7PCZRAWDXCDTB3FIW5EBW3SR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323584567.81_warc_CC-MAIN-20211016105157-20211016135157-00435.warc.gz\"}"} |
https://analytics-zoo.gitee.io/master/ProgrammingGuide/text-classification/ | [
"# Text Classification API\n\nAnalytics Zoo provides pre-defined models having different encoders that can be used for classifying texts.\n\nHighlights\n\n1. Easy-to-use Keras-Style defined models which provides compile and fit methods for training. Alternatively, they could be fed into NNFrames or BigDL Optimizer.\n2. The encoders we support include CNN, LSTM and GRU.\n\n## Build a TextClassifier model\n\nYou can call the following API in Scala and Python respectively to create a `TextClassifier` with pre-trained GloVe word embeddings as the first layer.\n\nScala\n\n``````val textClassifier = TextClassifier(classNum, embeddingFile, wordIndex = null, sequenceLength = 500, encoder = \"cnn\", encoderOutputDim = 256)\n``````\n• `classNum`: The number of text categories to be classified. Positive integer.\n• `embeddingFile` The path to the word embedding file. Currently only glove.6B.50d.txt, glove.6B.100d.txt, glove.6B.200d.txt, glove.6B.300d.txt, glove.42B.300d.txt, glove.840B.300d.txt are supported. You can download from here.\n• `wordIndex` Map of word (String) and its corresponding index (integer). The index is supposed to start from 1 with 0 reserved for unknown words. During the prediction, if you have words that are not in the wordIndex for the training, you can map them to index 0. Default is null. In this case, all the words in the embeddingFile will be taken into account and you can call `WordEmbedding.getWordIndex(embeddingFile)` to retrieve the map.\n• `sequenceLength`: The length of a sequence. Positive integer. Default is 500.\n• `encoder`: The encoder for input sequences. String. \"cnn\" or \"lstm\" or \"gru\" are supported. Default is \"cnn\".\n• `encoderOutputDim`: The output dimension for the encoder. Positive integer. 
Default is 256.\n\nPython\n\n``````text_classifier = TextClassifier(class_num, embedding_file, word_index=None, sequence_length=500, encoder=\"cnn\", encoder_output_dim=256)\n``````\n• `class_num`: The number of text categories to be classified. Positive int.\n• `embedding_file` The path to the word embedding file. Currently only glove.6B.50d.txt, glove.6B.100d.txt, glove.6B.200d.txt, glove.6B.300d.txt, glove.42B.300d.txt, glove.840B.300d.txt are supported. You can download from here.\n• `word_index` Dictionary of word (string) and its corresponding index (int). The index is supposed to start from 1 with 0 reserved for unknown words. During the prediction, if you have words that are not in the wordIndex for the training, you can map them to index 0. Default is None. In this case, all the words in the embedding_file will be taken into account and you can call `WordEmbedding.get_word_index(embedding_file)` to retrieve the dictionary.\n• `sequence_length`: The length of a sequence. Positive int. Default is 500.\n• `encoder`: The encoder for input sequences. String. 'cnn' or 'lstm' or 'gru' are supported. Default is 'cnn'.\n• `encoder_output_dim`: The output dimension for the encoder. Positive int. 
Default is 256.\n\n## Train a TextClassifier model\n\nAfter building the model, we can call compile and fit to train it (with validation).\n\nFor training and validation data, you can first read files as `TextSet` (see here) and then do preprocessing (see here).\n\nScala\n\n``````model.compile(optimizer = new Adagrad(learningRate), loss = SparseCategoricalCrossEntropy(), metrics = List(new Accuracy()))\nmodel.fit(trainSet, batchSize, nbEpoch, validateSet)\n``````\n\nPython\n\n``````model.compile(optimizer=Adagrad(learning_rate), loss=\"sparse_categorical_crossentropy\", metrics=['accuracy'])\nmodel.fit(train_set, batch_size, nb_epoch, validate_set)\n``````\n\n## Do prediction\n\nAfter training the model, it can be used to predict probability distributions.\n\nScala\n\n``````val predictSet = textClassifier.predict(validateSet)\n``````\n\nPython\n\n``````predict_set = text_classifier.predict(validate_set)\n``````\n\n## Examples\n\nWe provide an example to train the TextClassifier model on the 20 Newsgroups dataset and use the model to do prediction.\n\nSee here for the Scala example.\n\nSee here for the Python example."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7331183,"math_prob":0.615087,"size":3837,"snap":"2020-45-2020-50","text_gpt3_token_len":907,"char_repetition_ratio":0.13018523,"word_repetition_ratio":0.30980393,"special_character_ratio":0.22465467,"punctuation_ratio":0.2005571,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9562376,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-19T20:47:40Z\",\"WARC-Record-ID\":\"<urn:uuid:dbe44d63-b74e-4f81-a256-d612ce0f8086>\",\"Content-Length\":\"8599\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e1189d25-c403-44da-bf71-10860afa384c>\",\"WARC-Concurrent-To\":\"<urn:uuid:b8a5286c-a52c-40fd-9f92-b12d6acbefaa>\",\"WARC-IP-Address\":\"212.64.62.183\",\"WARC-Target-URI\":\"https://analytics-zoo.gitee.io/master/ProgrammingGuide/text-classification/\",\"WARC-Payload-Digest\":\"sha1:PKHFW4MAB7XUVOXTC5GJIWJPHQM7NAPU\",\"WARC-Block-Digest\":\"sha1:YNMJ37OICUPFXK7GUG5QF7WORWOGYFOT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107866404.1_warc_CC-MAIN-20201019203523-20201019233523-00202.warc.gz\"}"} |
https://plannen-nasich.com/o4j4667zhybz2/from-knots-to-km.html | [
"Home\n\n# From knots to km\n\n### Knots to Kilometers Per Hour Conversion (kn to km/h\n\n1. ute of latitude per hour.. Knots can be abbreviated as kn, and are also sometimes abbreviated as kt.For example, 1 knot can be written as 1 kn or 1 kt. Kilometers Per Hou\n2. Knots. Knots are a speed measurement that is nautical miles per hour. This unit is typically used in maritime and aviation. There have historically been different nautical miles used and, thus, different variations of knots. However, we have based our knots calculator on the international nautical mile which is commonly used thoughout the world.\n3. How to convert knots to kilometers per hour [kn to km/h]:. v km/h = 1.852 × v kn. How many kilometers per hour in a knot: If v kn = 1 then v km/h = 1.852 × 1 = 1.852 km/h. How many kilometers per hour in 87 knots: If v kn = 87 then v km/h = 1.852 × 87 = 161.124 km/h. Note: Knot is a metric unit of speed.Kilometer per hour is a metric unit of speed\n\n1 Knots = 1.852 Kilometres per hour: 10 Knots = 18.52 Kilometres per hour: 2500 Knots = 4630 Kilometres per hour: 2 Knots = 3.704 Kilometres per hour: 20 Knots = 37.04 Kilometres per hour: 5000 Knots = 9260 Kilometres per hour: 3 Knots = 5.556 Kilometres per hour: 30 Knots = 55.56 Kilometres per hour: 10000 Knots = 18520 Kilometres per hour: 4 Knots = 7.408 Kilometres per hou 1 knot (kn, kt or kts) = 1.852 kilometer per hour (kph) = 1.15077945 miles per hour (mph) = 1.68780986 foot / second (ft/s) = 0.514444444 meters per second (m/s).The knot is a non-SI unit of speed. One knot is equal to one nautical mile per hour (1.852 kph) 1 : knop = 1.85: kilometer/timme: 2 : knop = 3.70: kilometer/timme: 3 : knop = 5.56: kilometer/timme: 4 : knop = 7.41: kilometer/timme: 5 : knop = 9.26: kilometer. One knot is about 0.51444 meters per second. 1 kn ≅ 0.51444 m/s (SI unit). 
1 Kilometre per Hour: the distance of one kilometer (1,000 meters) travelled in the time span of one hour (exactly 3,600 seconds). Knot: the knot is a non-SI unit for speed. It is equal to one nautical mile (1.852 km) per hour, and approximately equal to 1.151 mph (full name: mile per hour). Without a standard abbreviation, the most commonly used is kn, but kt and NMPH are also used",
null,
"To find out how many Knots in Kilometers/Hour, multiply by the conversion factor or use the Velocity converter above. One Knots is equivalent to one point eight five two Kilometers/Hour. Definition of Knot. The knot is a unit of speed equal to one nautical mile (1.852 km) per hour, approximately 1.151 mph. The ISO Standard symbol for the knot. The knot (/ n ɒ t /) is a unit of speed equal to one nautical mile per hour, exactly 1.852 km/h (approximately 1.150 78 mph or 0.514 m/s). The ISO standard symbol for the knot is kn. The same symbol is preferred by the Institute of Electrical and Electronics Engineers (); kt is also common, especially in aviation, where it is the form recommended by the International Civil Aviation. 1 Knot = 1 Nautical Mile per hour 1 Nautical mile = 6076.12 ft. = 1852 m ** 1 Statute mile = 1760 yards = 5280 feet ** Definition: [n] a unit of length used in navigation; equivalent to the distance spanned by one minute of arc in latitude; 1,852 meter 1 meter/second is equal to 3.6 km/h, or 1.9438444924406 knots. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between kilometers/hour and knots. Type in your own numbers in the form to convert the units! ›› Quick conversion chart of km/h to knots. 1 km/h to knots = 0.53996 knots\n\nHow many kilometers per hour are in a knot? Use this easy and mobile-friendly calculator to convert between knots and kilometers per hour. Just type the number of knots into the box and the conversion will be performed automatically minute per kilometer (min/km) second per kilometer (sec/km) second per 100 meters (sec/hm) minute per mile (min/mile) second per mile (sec/mile) nautical speed units. knot (kt) nautical mile per hour (seamiles/hour) other speed units. speed of light (c) mach (speed of sound) (Ma Road speed limits are given in kilometers per hour which is abbreviated as kph or km/h. Kilometers per hour to Knots formula Knots. 
Knots are a speed measurement that is nautical miles per hour. This unit is typically used in maritime and aviation. There have historically been different nautical miles used and, thus, different variations of knots\n\n### Knots to Kilometers per hour - Metric Conversion\n\n• KNOT TO KILOMETER/HOUR (KMH) (kn TO km/h) FORMULA. To convert between Knot and Kilometer/hour (kmh) you have to do the following: First divide 1852/3600 / 1000/3600 = 1.852. Then multiply the amount of Knot you want to convert to Kilometer/hour (kmh); use the chart below to guide you\n• A kilometer per hour is a unit of speed. Something traveling at one kilometer per hour is traveling about 0.278 meters per second, or about 0.621 miles per hour. Knots to Kilometers per Hour Conversion\n• Convert Knots to Kilometers Per Hour: convert any Knots value to Kilometers-Per-Hour using the conversion calculator below. Enter your value in the left-hand textfield then click CONVERT\n• Learn how to convert from knots to km/h and what is the conversion factor as well as the conversion formula. 30.7 knots are equal to 56.8564 kilometers per hour\n• KILOMETER/HOUR (KMH) TO KNOT (km/h TO kn) FORMULA. To convert between Kilometer/hour (kmh) and Knot you have to do the following: First divide 1000/3600 / 1852/3600 = 0.5399568. Then multiply the amount of Kilometer/hour (kmh) you want to convert to Knot; use the chart below to guide you\n• Instant free online tool for knot to kilometer/hour conversion or vice versa. 
The knot [kt, kn] to kilometer/hour [km/h] conversion table and conversion steps are also listed. Also, explore tools to convert knot or kilometer/hour to other speed units or learn more about speed conversions\n\nConversion Knot to Kilometer per hour (knot to kmph). The knot (pronounced “not”) is a unit of speed equal to one nautical mile (1.852 km) per hour. This tool converts knot to kilometers per hour (knot to kmph) or (knot to km/h) and vice versa. 1 knot = 1.852 kilometers per hour. Learn how to convert from knots to km/h and what is the conversion factor as well as the conversion formula. 3.5 knots are equal to 6.482 kilometers per hour\n\n### Knots to kilometers per hour [kn to km/h] speed (velocity)\n\n1. A knot is one minute of latitude per hour. Knots can be abbreviated as kn, and are also sometimes abbreviated as kt. For example, 1 knot can be written as 1 kn or 1 kt.\n2. Kilometres per hour (also spelling: kilometer per hour) is a unit of speed, defined as the number of kilometers travelled in one hour.\n3. Free online speed conversion. Convert 3 knots to km/h (kn to kilometers/hour). How much is 3 knots to km/h? +> with much ♥ by CalculatePlus\n4. To convert any value in knots to kilometers per hour, just multiply the value in knots by the conversion factor 1.852. So, 65 knots times 1.852 is equal to 120.4 km/h\n\n### Convert Knots to Kilometres per hour (knot → km/h)\n\n1. This page features online conversion from knot to kilometer per second. These units belong to different measurement systems. The first one is from Nautical. The second one is from Metric. If you need to convert knot to another compatible unit, please pick the one you need on the page below\n2. minutes per kilometer, convert kt to min/km\n3. Kilometer/hour. Definition: The unit kilometers per hour (symbol: km/h) is a unit of speed expressing the number of kilometers traveled in one hour. 
History/origin: The unit of kilometers per hour is based on the meter, which was formally defined in 1799. According to the Oxford English Dictionary, the term kilometer first came into use in 1810. It was not until later in the mid-late 19th century.\n\n### Convert Knots to Kilometers per Hour Conversion\n\n1. This is a speed converter that can convert km/h (kph) to mph or knots, mph to km/h or knots, and knots to km/h or mph, with three virtual speedometers to show the corresponding speeds; this makes it easier to understand the differences between speed units\n2. 1 knot = 0.0015 speed of sound; 2 knots = 0.003; 3 knots = 0.0045; 4 knots = 0.006; 10 knots = 0.015; 20 knots = 0.03; 30 knots = 0.045; 40 knots = 0.06; 2500 knots = 3.7496; 5000 knots = 7.4992; 10000 knots = 14.9984; 25000 knots = 37.4959 times the speed of sound\n3. 1 Kilometer per hour (km/h) is equal to 0.53995680346 knot. To convert kilometers per hour to knots, multiply the kilometers per hour value by 0.53995680346 or divide by 1.852. For example, to convert 100 km/h to knots, divide 100 by 1.852; that makes 100 km/h about 53.99 knots\n4. This page features online conversion from knot to kilometer per hour. These units belong to different measurement systems. The first one is from Nautical. The second one is from Metric. If you need to convert knot to another compatible unit, please pick the one you need on the page below\n\nBeaufort scale excerpt: Force 3 (Gentle breeze, mean 16 km/h): 7-10 kts, 8-12 mph, 12-19 km/h; slight sea with large wavelets, crests begin to break, foam of glassy appearance, waves 2-3 ft (0.6 m); leaves and small twigs in constant motion, wind extends light flag. Force 4 (Moderate breeze, mean 13 kts / 16 mph / 24 km/h): 11-16 kts, 13-18 mph, 20-28 km/h; moderate sea, small waves becoming longer, 3½ ft. To find out how many Knots in Kilometers/Hour, multiply by the conversion factor or use the Velocity converter above. 
Ten Knots is equivalent to eighteen point five two Kilometers/Hour. Definition of Knot: the knot is a unit of speed equal to one nautical mile (1.852 km) per hour, approximately 1.151 mph. The ISO standard symbol for the knot is kn.

How to convert knots to kilometers per minute [kn to km/min]:

v km/min = 0.03086666664 × v kn

How many kilometers per minute in a knot: if v kn = 1, then v km/min = 0.03086666664 × 1 = 0.03086666664 km/min. How many kilometers per minute in 63 knots: if v kn = 63, then v km/min = 0.03086666664 × 63 = 1.94459999832 km/min. Note: the knot is a unit of speed; kilometer per minute is a metric unit of speed.

Being able to quickly convert wind speed values from units like knots, beaufort, m/s and km/h to one another is quite helpful when you're in a pinch. In addition to our wind speed converter, you can also convert precipitation values and distances (e.g. the elusive nautical mile) with our calculator.

1 Kilometre per hour = 0.54 Knots; 2 = 1.0799; 3 = 1.6199; 4 = 2.1598; 10 = 5.3996; 20 = 10.7991; 30 = 16.1987; 2500 = 1349.89; 5000 = 2699.78; 10000 = 5399.57 Knots.

Free online speed conversion. Convert 3 knots to km/h (kn to kilometers/hour). How much is 3 knots to km/h? +> with much ♥ by CalculatePlus. Free online speed conversion. Convert 10 knots to km/h (kn to kilometers/hour). How much is 10 knots to km/h?
+> with much ♥ by CalculatePlus

1 Knot = 1.1508 Miles per hour; 2 = 2.3016; 3 = 3.4523; 4 = 4.6031; 10 = 11.5078; 20 = 23.0156; 30 = 34.5234; 40 = 46.0312; 2500 = 2876.95; 5000 = 5753.89; 10000 = 11507.79 Miles per hour.

Instant free online tool for knot (UK) to kilometer/hour conversion or vice versa. The knot (UK) [kt (UK)] to kilometer/hour [km/h] conversion table and conversion steps are also listed. Also, explore tools to convert knot (UK) or kilometer/hour to other speed units or learn more about speed conversions. Free online speed conversion. Convert 4 knots to km/h (kn to kilometers/hour). How much is 4 knots to km/h? +> with much ♥ by CalculatePlus

Basics of converting kilometers per hour (km/h) to knots (kn): the knot is used almost exclusively in seafaring and aviation. Conversions may rely on other factors not accounted for or that have been estimated.

You are currently converting speed units from knot to minute per kilometer: 1 kt = 32.397408207623 min/km.

1 kilometer per hour (kph) = 0.539956803 knot (kn, kt or kts) = 0.621371192 miles per hour (mph) = 0.911344415 foot/second (ft/s) = 0.277777778 meters per second (m/s). The knot is a non-SI unit of speed. One knot is equal to one nautical mile per hour (1.852 kph). Conversion Kilometer per hour to Knot (kmph to knot): the kilometer per hour is a unit of speed, expressing the number of kilometers travelled in one hour. The unit symbol is km/h or km·h⁻¹. This tool converts kilometers per hour to knots (kmph to knot, km/h to knot) and vice versa. 1 kilometer per hour ≈ 0.54 knot. To find out how many Knots in Kilometers/Hour, multiply by the conversion factor or use the Velocity converter above. Forty Knots is equivalent to seventy-four point zero eight Kilometers/Hour.
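The knots-to-mph figures in the table above follow from 1 knot = 1.852 km/h and 1 mile = 1.609344 km; a short sketch that regenerates a few rows (names are illustrative):

```python
# Derive the knots -> mph factor (~1.15078) from the two exact
# definitions, then reproduce a few rows of the table.
MPH_PER_KNOT = 1.852 / 1.609344

def knots_to_mph(knots: float) -> float:
    return knots * MPH_PER_KNOT

for kn in (1, 10, 20, 2500):
    print(f"{kn} knots = {knots_to_mph(kn):.4f} mph")
```

The printed values match the table to four decimal places (e.g. 10 knots = 11.5078 mph).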
Definition of Knot. The knot is a unit of speed equal to one nautical mile (1.852 km) per hour, approximately 1.151 mph.

Convert Knots to MPH - Wind Speed Unit Calculator. With this simple unit conversion tool, you can quickly convert any wind speed from an initial unit (miles per hour, feet per second, meters per second, knots, and kilometers per hour) to all other units listed. Enter your value in the left-hand text field then click CONVERT. Your result will display in the right-hand text field. 1 knot (kt) = 1.15077945 miles per hour (mph).

The kilometer per second is also used when meters per second is too slow a measurement, such as in astronomical measurements and higher velocities. The kilometer per hour, and in some countries the mile per hour, are the preferred measurements of road speed. Knot to Meter/second Conversion Table.

### Convert Knots to Kilometres per Hour

Next, let's look at an example showing the work and calculations that are involved in converting from kilometers to nautical miles (km to nmi). Kilometer to Nautical Mile Conversion Example. Task: Convert 25 kilometers to nautical miles (show work). Formula: km ÷ 1.852 = nautical miles. Calculations: 25 km ÷ 1.852 = 13.49892009 nautical miles. Result: 25 km is equal to 13.49892009 nautical miles.

Luckily, converting most units is very, very simple. In this case, all you need to know is that 1 knot is equal to 1.852 km/h. Once you know what 1 knot is in kilometres per hour, you can simply multiply by the total knots you want to calculate. So for our example here we have 160 knots; all we do is multiply 160 by 1.852, which gives 296.32 km/h.

To find out how many Knots in Miles/Hour, multiply by the conversion factor or use the Velocity converter above. Thirty Knots is equivalent to thirty-four point five two three Miles/Hour. Definition of Knot: the knot is a unit of speed equal to one nautical mile (1.852 km) per hour, approximately 1.151 mph. The ISO standard symbol for the knot is kn.
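The km-to-nautical-mile worked example (25 km ÷ 1.852) can be checked with a one-line helper; an illustrative sketch:

```python
# A nautical mile is exactly 1.852 km, so km -> nmi is a division.
def km_to_nautical_miles(km: float) -> float:
    return km / 1.852

# Worked example from the text: 25 km = 13.49892009 nmi.
print(f"{km_to_nautical_miles(25):.8f}")  # 13.49892009
```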
80 Knots to Feet Per Minute = 8101.4873; 3 Knots = 303.8058; 90 Knots = 9114.1732; 4 Knots = 405.0744; 100 Knots = 10126.8591; 5 Knots = 506.343; 200 Knots = 20253.7183; 6 Knots = 607.6115; 300 Knots = 30380.5774 Feet Per Minute.

It is also used in the Canadian rail system, though the Canadian road systems use km/h. Knot. Definition: A knot (symbol: kn or kt) is a non-SI unit of speed at sea. It is defined as one nautical mile per hour, where a nautical mile is 1,852 meters. A knot is equal to 1.852 kilometers per hour and 1.15078 miles per hour.

### Knots to Kilometres Per Hour - Kyle's Converter

1. Exchange reading in knots unit kn into kilometers per day unit km/d as an equivalent measurement result (two different units but the same identical physical total value, which is also equal to their proportional parts when divided or multiplied). One knot converted into kilometers per day equals 44.45 km/d: 1 kn = 44.45 km/d.
2. Likewise, exchange knots into kilometers per minute, unit km/min.
3. Exchange reading in knots unit kn into kilometers per second unit km/sec. One knot converted into kilometers per second equals 0.00051 km/sec: 1 kn = 0.00051 km/sec.
4. Knot: a measure of speed, used in nautical and aviation contexts. Kilometer per Hour: a measure of speed; an object moving at this speed covers one kilometer in an hour.
5. Exchange reading in knots unit kn into kilometers per hour unit km/h. One knot converted into kilometers per hour equals 1.85 km/h: 1 kn = 1.85 km/h.
6. Knot is abbreviated as Kt or Kn.
Knot is used throughout the world in different fields like meteorology, maritime and air navigation. One nautical mile per hour is exactly equal to 1.852 kilometres per hour. Convert values from knots to kilometres per hour in a fraction of a second using this online conversion calculator.

### 163 Knot to Kilometer/Hour Conversion - Convert 163 Knot

Calculates the speed in knots and the distance in nautical miles in shipping. At sea, distance is commonly measured in nautical miles (nm), where one nm is exactly 1852 meters. As the dimension for speed, knots (kn) are used; one knot is one nautical mile per hour. Here, from two values and units, the third value can be calculated.

This is a conversion chart for knot (Nautical). To switch the unit simply find the one you want on the page and click it. You can also go to the universal conversion page. 2: Enter the value you want to convert (knot). Then click the Convert Me button. Your value gets instantly converted to all other units on the page.

1 Kilometer per hour (kph, km/h) = 0.277 777 778 meters per second (SI base unit); 1 km/h = 0.277777778 m/s. 1 Knot: 1 NM/h. An object travelling at one knot is covering the distance of one nautical mile in one hour. Using a nautical mile of exactly 1 852 meters, one knot is about 0.51444 meters per second: 1 kn ≅ 0.51444 m/s (SI unit). Time, Speed and Distance Calculator: find the time, speed or distance in our classic calculator. Many options, many designations.
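The "two values give the third" shipping calculator described above reduces to distance = speed × time, with distance in nautical miles, speed in knots and time in hours; a minimal sketch (the function and argument names are made up for illustration):

```python
# Solve the nautical distance/speed/time triangle: supply exactly
# two of the three values and the third is computed.
def solve_voyage(distance_nm=None, speed_kn=None, time_h=None):
    """Return (distance_nm, speed_kn, time_h) with the missing one filled in."""
    if distance_nm is None:
        distance_nm = speed_kn * time_h
    elif speed_kn is None:
        speed_kn = distance_nm / time_h
    elif time_h is None:
        time_h = distance_nm / speed_kn
    return distance_nm, speed_kn, time_h

# A 120 nm passage at 20 knots takes 6 hours.
print(solve_voyage(distance_nm=120, speed_kn=20))
```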
### What is 1 Knot in Kilometers/Hour? Convert 1 kt to km/h

1 knot (kn, kt or kts) = 1.15077945 miles per hour (mph) = 1.852 kilometers per hour (kph) = 1.68780986 foot/second (ft/s) = 0.514444444 meters per second (m/s). The knot is a non-SI unit of speed. One knot is equal to one nautical mile per hour.

Speed - Convert From Kilometer per Hour To Knot. This online converter will allow you to easily convert speed between SI units, American (English), marine, etc. Fill in any one blank (km/h, mph, knots, m/min, ft/min, m/s or ft/s) and the results for the other units will be automatically calculated. If you want to know how it is calculated, we also provide the formulas.

Speed unit conversion formulas: 1 km/h = 0.621371192 mph; 1 km/h = 0.539956803 knots; 1 mph = 1.609344 km/h; 1 mph = 0.868976242 knots.

But as for kph, that's easy: 1852 m/h = 1.852 km/h! Isn't the metric system nifty? -- Toby Bartels 05:33, 14 Oct 2004 (UTC) You were right about the knot/mph ratio, I had miscalculated. I'd remove the "precisely" after the kph conversion though, since 1.852 is rounded up (the actual precise number being 1.851999985024).

### Knot (unit) - Wikipedia

How to convert kilometers per hour to knots [km/h to kn]:

v kn = 0.539956803 × v km/h

How many knots in a kilometer per hour: if v km/h = 1, then v kn = 0.539956803 × 1 = 0.539956803 kn. How many knots in 15 kilometers per hour: if v km/h = 15, then v kn = 0.539956803 × 15 = 8.099352045 kn. Note: kilometer per hour is a metric unit of speed.

Many cabin boats and saloon boats top out around 20 knots, i.e. just under 40 km/h, and center-console boats and daycruisers usually reach around 35 knots, about 65 km/h. A couple of examples from land: moped speed, 30 km/h, is 16 knots; country-road speed, 70 km/h, corresponds to roughly 40 knots; and motorway speed, 110 km/h, is roughly 60 knots.

To convert Knots (knot) to Kilometers per Hour (km/h), you just need to know that 1 knot is equal to 1.852 km/h.
With that knowledge, you can solve any other similar conversion problem by multiplying the number of Knots (knot) by 1.852. For example, 6 knots multiplied by 1.852 is equal to 11.112 km/h.

Knots to Kilometers per hour (knots to kph) conversion calculator for Speed conversions with additional tables and formulas. Convert 25 knot to km/h - UnitConversion.io (unitconversion.io › 25-knot-to-kmp). Convert knots to km/h - Conversion of Measurement Units (www.convertunits.com › from › knots › km › h). Do a quick conversion: 1 knot = 1.852 kilometers/hour using the online calculator for metric conversions. Check the chart for more details. Convert Knots to Kilometers per Hour Conversion.

### Wind Speed Conversions: Knots to Miles per hour

Knot, in navigation, a measure of speed at sea, equal to one nautical mile per hour (approximately 1.15 statute miles per hour). Thus, a ship moving at 20 knots is traveling as fast as a land vehicle at about 23 mph (37 km/h). The term knot derives from its former use as a length measure on ships' log lines, which were used to measure the speed of a ship through the water.

Conversion knots to kilometers per hour, kt to km/h: the conversion factor is 1.852, so 1 knot = 1.852 kilometers per hour. In other words, multiply the value in kt by 1.852 to get a value in km/h. Definition of Knot: the knot is a unit of speed equal to one nautical mile (1.852 km) per hour, approximately 1.151 mph. The ISO Standard symbol for the knot is kn. The same symbol is preferred by the IEEE; kt is also common. The knot is a non-SI unit that is accepted for use with the SI.
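The conversion factors quoted in this section can be collected into one lookup table; the sketch below is illustrative, using only factors stated in the text (with mph→km/h written as the exact 1.609344):

```python
# Pairwise conversion factors quoted in the surrounding text,
# keyed by (source_unit, target_unit).
FACTORS = {
    ("km/h", "mph"):   0.621371192,
    ("km/h", "knots"): 0.539956803,
    ("mph", "km/h"):   1.609344,
    ("mph", "knots"):  0.868976242,
    ("knots", "km/h"): 1.852,
}

def convert(value, src, dst):
    """Look up the factor for (src, dst) and apply it."""
    return value * FACTORS[(src, dst)]

print(round(convert(20, "knots", "km/h"), 2))  # 37.04
```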
### Convert km/h to knots - Conversion of Measurement Units

Knots are used to measure the rate of speed on the water. The simplest way to convert knots to miles per hour is to multiply the number of knots by 1.151. For example, if you're trying to convert 17 knots to miles per hour, you would multiply 17 by 1.151 and get 19.6. Therefore, 17 knots is equal to 19.6 miles per hour.

1 mile per hour (mph) = 0.868976242 knot (kn, kt or kts) = 1.609344 kilometers per hour (kph) = 1.46666667 foot/second (ft/s) = 0.44704 meters per second (m/s). The knot is a non-SI unit of speed. One knot is equal to one nautical mile per hour (1.15077945 mph, 1.852 km/h).
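The quick multiply-by-1.151 rule from the passage above, reproducing the 17-knot example (function name is illustrative):

```python
# Approximate knots -> mph using the rounded factor 1.151
# (the exact factor is about 1.15078).
def knots_to_mph_approx(knots: float) -> float:
    return knots * 1.151

print(round(knots_to_mph_approx(17), 1))  # 19.6, as in the example
```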
### Convert Knots to Kilometers per Hour - CalculateMe

Conversion Knot to Mile per hour (knot to mph). The knot (pronounced "not") is a unit of speed equal to one nautical mile (1.852 km) per hour. This tool converts knot to miles per hour (knot to mph) and vice versa. 1 knot ≈ 1.1508 miles per hour. The user must fill one of the two fields and the conversion will happen automatically.

For most science applications, the SI unit for speed and velocity is meters per second (m/s), although kilometers per hour (km/h) is sometimes used when measuring the speed of vehicles. The English system usually expresses speed as miles per hour (mph) or sometimes feet per second (ft/s), while marine vessels typically use knots, or nautical miles per hour.

Next, let's look at an example showing the work and calculations that are involved in converting from knots to miles per hour (kn to mph). Knot to Miles per hour Conversion Example. Task: Convert 250 knots to miles per hour (show work). Formula: knots × 1.15078 = mph. Calculations: 250 knots × 1.15078 = 287.695 mph. Result: 250 knots is equal to 287.695 mph.

### Convert knots to meters per second - speed converter

Convert between the units (knot → km/h) or see the conversion table: 4 Knots = 7.408 Kilometres per hour, 40 Knots = 74.08 Kilometres per hour, and so on. The knot (pronounced "not") is a unit of speed equal to one nautical mile (1.852 km) per hour. This tool converts knot to meters per second (knot to mps, knot to m/s) and vice versa. 1 knot ≈ 0.5144 meters per second. The user must fill one of the two fields and the conversion will happen automatically.

45 kn = 83.34 km/h (45 × 1.852). You also can convert 45 Knots to other Speed (popular) units. One knot is about 0.51444 meters per second: 1 kn ≅ 0.51444 m/s (SI unit). 1 Mach Number: the ratio of the speed of an object moving through a fluid to the speed of sound in the same fluid.
Under standard conditions at sea level this is roughly 340.29 meters per second for travel through air.

### Kilometers per hour to Knots - Metric Conversion

Knots to Kilometres per Hour: distance of one kilometer or 1 000 meters travelled in the time span of one hour or exactly 3 600 seconds. 1 Kilometer per hour (kph, km/h) = 0.277 777 778 meters per second (SI base unit); 1 km/h = 0.277777778 m/s. Knots to Knots (Admiralty): 1 NM (Adm)/h. Knots to Leagues (land) per Day.

Exchange reading in kilometers per hour unit km/h into knots unit kn as an equivalent measurement result (two different units but the same identical physical total value, which is also equal to their proportional parts when divided or multiplied). One kilometer per hour converted into knots equals 0.54 kn: 1 km/h = 0.54 kn.

Knots to Kilometers per hour (knots to kph) conversion calculator for Speed conversions with additional tables and formulas. 120 Knot to Kilometers/Hour Conversion Calculator (https://www.flightpedia.org › Convert Knot to km/).

1 Knot (kn) is equal to 1.852 kilometers per hour (km/h). To convert knots to kilometers per hour, multiply the knot value by 1.852. For example, to find out how many kilometers per hour is 160 knots, multiply 160 by 1.852; that makes 296.32 kilometers per hour. To convert between knot and km/h and access the tables, please visit the knot to km/h conversion page.
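Combining 1 kn ≈ 0.51444 m/s with the 340.29 m/s sea-level speed of sound quoted above gives a knots-to-Mach estimate; a hedged sketch (the 661.47 kn figure used in the example is the conventional sea-level value for Mach 1, not taken from this page):

```python
# Knots -> m/s (1852 m per nautical mile, 3600 s per hour),
# then divide by the sea-level speed of sound for a Mach number.
M_PER_S_PER_KNOT = 1852 / 3600        # ~0.514444 m/s per knot
SPEED_OF_SOUND_SEA_LEVEL = 340.29     # m/s, standard conditions

def knots_to_mps(knots: float) -> float:
    return knots * M_PER_S_PER_KNOT

def knots_to_mach(knots: float) -> float:
    return knots_to_mps(knots) / SPEED_OF_SOUND_SEA_LEVEL

print(round(knots_to_mps(1), 5))       # 0.51444
print(round(knots_to_mach(661.47), 3))  # ~1.0 (Mach 1 at sea level)
```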
### knot to kilometer/hour (kmh) (kn to km/h) - Speed Converter

1 knot = 1 nautical mile per hour. The knot is a unit of speed equal to 1 nautical mile per hour, or about 1.151 miles per hour. Under the ISO standard, the symbol for the knot is kn; the symbol kt is also popular. The knot is not an SI (international system) unit, but it is used alongside SI units.

Task: Convert 75 miles per hour to knots (show work). Formula: mph ÷ 1.15078 = knots. Calculations: 75 mph ÷ 1.15078 = 65.17318688 knots. Result: 75 mph is equal to 65.17318688 knots. Conversion Table: for quick reference purposes, below is a conversion table that you can use to convert from mph to knots.
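The mph-to-knots worked example (75 ÷ 1.15078) as code, using the same rounded factor as the text (function name is illustrative):

```python
# mph -> knots with the rounded factor quoted in the worked example.
def mph_to_knots(mph: float) -> float:
    return mph / 1.15078

print(f"{mph_to_knots(75):.8f}")  # 65.17318688
```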
Miles to Km converter. Easily convert miles to kilometers, with formula, conversion chart, auto conversion to common lengths, and more.

Boat Speed Conversion. Canada posts speed limits in kph or km/hr for kilometres per hour (ki'-low-mee-trz). Statute miles per hour (mph) is used in cars and powerboats. Knots (kts) is nautical miles (nm) per hour, used by sailboats and ships. (TIP: Divide km/hr in half to get an approximate speed in knots.)

Conversion Kilometer per hour to Mile per hour (kmph to mph). The kilometer per hour is a unit of speed, expressing the number of kilometers travelled in one hour. The unit symbol is km/h or km·h⁻¹. This tool converts kilometers per hour to miles per hour (kmph to mph, km/h to mph) and vice versa. 1 kilometer per hour = 0.62137 miles per hour.

A knot is an intentional complication in cordage which may be practical or decorative, or both. Practical knots are classified by function, including hitches, bends, loop knots, and splices: a hitch fastens a rope to another object; a bend fastens two ends of a rope to each other; a loop knot is any knot creating a loop; and splice denotes any multi-strand knot, including bends and loops.

There are 0.2777777778 m/s in km/h. What is kilometers per hour (km/h)? Kilometers per hour is a unit of Speed or Velocity in the Metric System. It measures the number of kilometers traveled in an hour. The symbol for kilometers per hour is km/h and the International spelling for this unit is kilometres per hour.
There are 3.6 km/h in m/s.
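The m/s ↔ km/h relationship stated above (multiply by 3.6 one way, divide the other) in code (function names are illustrative):

```python
# 1 m/s = 3.6 km/h, and conversely 1 km/h = 0.2777... m/s.
def mps_to_kmh(mps: float) -> float:
    return mps * 3.6

def kmh_to_mps(kmh: float) -> float:
    return kmh / 3.6

print(mps_to_kmh(10))             # 36.0
print(round(kmh_to_mps(1), 10))   # 0.2777777778
```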
https://www.colorhexa.com/192626
# #192626 Color Information

In a RGB color space, hex #192626 is composed of 9.8% red, 14.9% green and 14.9% blue. Whereas in a CMYK color space, it is composed of 34.2% cyan, 0% magenta, 0% yellow and 85.1% black. It has a hue angle of 180 degrees, a saturation of 20.6% and a lightness of 12.4%. #192626 color hex could be obtained by blending #324c4c with #000000. Closest websafe color is: #003333.

• R 10, G 15, B 15 (RGB color chart, percent)
• C 34, M 0, Y 0, K 85 (CMYK color chart, percent)

#192626 color description: Very dark (mostly black) cyan.

# #192626 Color Conversion

The hexadecimal color #192626 has RGB values of R:25, G:38, B:38 and CMYK values of C:0.34, M:0, Y:0, K:0.85. Its decimal value is 1648166.

• Hex triplet: 192626 `#192626`
• RGB decimal: 25, 38, 38 `rgb(25,38,38)`
• RGB percent: 9.8, 14.9, 14.9 `rgb(9.8%,14.9%,14.9%)`
• CMYK: 34, 0, 0, 85
• HSL: 180°, 20.6, 12.4 `hsl(180,20.6%,12.4%)`
• HSV: 180°, 34.2, 14.9
• Websafe: 003333 `#003333`
• CIE-LAB: 14.017, -5.556, -1.813
• XYZ: 1.444, 1.733, 2.092
• xyY: 0.274, 0.329, 1.733
• CIE-LCH: 14.017, 5.844, 198.072
• CIE-LUV: 14.017, -4.834, -1.045
• Hunter-Lab: 13.164, -3.458, -0.208
• Binary: 00011001, 00100110, 00100110

# Color Schemes with #192626

• #192626
``#192626` `rgb(25,38,38)``
• #261919
``#261919` `rgb(38,25,25)``
Complementary Color
• #192620
``#192620` `rgb(25,38,32)``
• #192626
``#192626` `rgb(25,38,38)``
• #192026
``#192026` `rgb(25,32,38)``
Analogous Color
• #262019
``#262019` `rgb(38,32,25)``
• #192626
``#192626` `rgb(25,38,38)``
• #261920
``#261920` `rgb(38,25,32)``
Split Complementary Color
• #262619
``#262619` `rgb(38,38,25)``
• #192626
``#192626` `rgb(25,38,38)``
• #261926
``#261926` `rgb(38,25,38)``
Triadic Color
• #192619
``#192619` `rgb(25,38,25)``
• #192626
``#192626` `rgb(25,38,38)``
• #261926
``#261926` `rgb(38,25,38)``
• #261919
``#261919` `rgb(38,25,25)``
Tetradic Color
• #000000
``#000000` `rgb(0,0,0)``
• #050707
``#050707` `rgb(5,7,7)``
• #0f1717
``#0f1717` `rgb(15,23,23)``
• #192626
``#192626` `rgb(25,38,38)``
• #233535
``#233535` `rgb(35,53,53)``
• 
#2d4545\n``#2d4545` `rgb(45,69,69)``\n• #375454\n``#375454` `rgb(55,84,84)``\nMonochromatic Color\n\n# Alternatives to #192626\n\nBelow, you can see some colors close to #192626. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #192623\n``#192623` `rgb(25,38,35)``\n• #192624\n``#192624` `rgb(25,38,36)``\n• #192625\n``#192625` `rgb(25,38,37)``\n• #192626\n``#192626` `rgb(25,38,38)``\n• #192526\n``#192526` `rgb(25,37,38)``\n• #192426\n``#192426` `rgb(25,36,38)``\n• #192326\n``#192326` `rgb(25,35,38)``\nSimilar Colors\n\n# #192626 Preview\n\nThis text has a font color of #192626.\n\n``<span style=\"color:#192626;\">Text here</span>``\n#192626 background color\n\nThis paragraph has a background color of #192626.\n\n``<p style=\"background-color:#192626;\">Content here</p>``\n#192626 border color\n\nThis element has a border color of #192626.\n\n``<div style=\"border:1px solid #192626;\">Content here</div>``\nCSS codes\n``.text {color:#192626;}``\n``.background {background-color:#192626;}``\n``.border {border:1px solid #192626;}``\n\n# Shades and Tints of #192626\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #020303 is the darkest color, while #f6f9f9 is the lightest one.\n\n• #020303\n``#020303` `rgb(2,3,3)``\n• #090e0e\n``#090e0e` `rgb(9,14,14)``\n• #111a1a\n``#111a1a` `rgb(17,26,26)``\n• #192626\n``#192626` `rgb(25,38,38)``\n• #213232\n``#213232` `rgb(33,50,50)``\n• #293e3e\n``#293e3e` `rgb(41,62,62)``\n• #304949\n``#304949` `rgb(48,73,73)``\n• #385555\n``#385555` `rgb(56,85,85)``\n• #406161\n``#406161` `rgb(64,97,97)``\n• #486d6d\n``#486d6d` `rgb(72,109,109)``\n• #4f7979\n``#4f7979` `rgb(79,121,121)``\n• #578585\n``#578585` `rgb(87,133,133)``\n• #5f9090\n``#5f9090` `rgb(95,144,144)``\n• #689b9b\n``#689b9b` `rgb(104,155,155)``\n• #74a3a3\n``#74a3a3` `rgb(116,163,163)``\n• #7fabab\n``#7fabab` `rgb(127,171,171)``\n• #8bb3b3\n``#8bb3b3` `rgb(139,179,179)``\n• #97bbbb\n``#97bbbb` `rgb(151,187,187)``\n• #a3c2c2\n``#a3c2c2` `rgb(163,194,194)``\n• #afcaca\n``#afcaca` `rgb(175,202,202)``\n• #bbd2d2\n``#bbd2d2` `rgb(187,210,210)``\n``#c6dada` `rgb(198,218,218)``\n• #d2e2e2\n``#d2e2e2` `rgb(210,226,226)``\n• #dee9e9\n``#dee9e9` `rgb(222,233,233)``\n• #eaf1f1\n``#eaf1f1` `rgb(234,241,241)``\n• #f6f9f9\n``#f6f9f9` `rgb(246,249,249)``\nTint Color Variation\n\n# Tones of #192626\n\nA tone is produced by adding gray to any pure hue. 
In this case, #1e2121 is the less saturated color, while #013e3e is the most saturated one.\n\n• #1e2121\n``#1e2121` `rgb(30,33,33)``\n• #1b2424\n``#1b2424` `rgb(27,36,36)``\n• #192626\n``#192626` `rgb(25,38,38)``\n• #172828\n``#172828` `rgb(23,40,40)``\n• #142b2b\n``#142b2b` `rgb(20,43,43)``\n• #122d2d\n``#122d2d` `rgb(18,45,45)``\n• #0f3030\n``#0f3030` `rgb(15,48,48)``\n• #0d3232\n``#0d3232` `rgb(13,50,50)``\n• #0a3535\n``#0a3535` `rgb(10,53,53)``\n• #083737\n``#083737` `rgb(8,55,55)``\n• #063939\n``#063939` `rgb(6,57,57)``\n• #033c3c\n``#033c3c` `rgb(3,60,60)``\n• #013e3e\n``#013e3e` `rgb(1,62,62)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #192626 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
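The CMYK and HSL figures quoted at the top of this color page can be recomputed from the RGB triple (25, 38, 38); a minimal sketch using Python's standard-library `colorsys` (the CMYK helper is a common textbook formula, not colorhexa's own code):

```python
import colorsys

def hex_to_rgb(h):
    """Parse '#rrggbb' into an (r, g, b) tuple of 0-255 ints."""
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_cmyk(r, g, b):
    """Return (c, m, y, k) as fractions in [0, 1]; black maps to (0,0,0,1)."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0
    c0, m0, y0 = (1 - v / 255 for v in (r, g, b))
    k = min(c0, m0, y0)
    return (c0 - k) / (1 - k), (m0 - k) / (1 - k), (y0 - k) / (1 - k), k

r, g, b = hex_to_rgb("#192626")
c, m, y, k = rgb_to_cmyk(r, g, b)
# colorsys returns hue, lightness, saturation in that order, all in [0, 1].
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print(f"CMYK: {c:.1%} {m:.1%} {y:.1%} {k:.1%}")   # 34.2% 0.0% 0.0% 85.1%
print(f"HSL:  {h * 360:.0f} deg, {s:.1%}, {l:.1%}")  # 180 deg, 20.6%, 12.4%
```

The printed percentages match the figures given in the "#192626 Color Information" section.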