URL (string, 15–1.68k chars) | text_list (sequence, 1–199 items) | image_list (sequence, 1–199 items) | metadata (string, 1.19k–3.08k chars) |
---|---|---|---|
https://assignmentpoint.com/compound-interest/ | [
"Finance\n\n# Compound Interest",
null,
"Compound interest (or compound interest) applies to interest payments made on the amount of the initial principal and the interest accrued in advance. It is the product of reinvesting interest, rather than paying it back, so that interest on the principal sum plus previously accrued interest is then paid in the next cycle. Thought to possess originated in 17th century Italy, interest is thought of as “interest on interest”, and can make a sum grow at a faster rate than interest, which is calculated only on the principal amount.\n\nIn finance and economics compound interest is the norm. It allows investors to gain potentially very high returns over a long period of time and is effectively a risk-free way to generate wealth. Multiplying the initial principal sum by one plus the average interest rate adjusted to the number of compound periods minus one determines compound interest. It is very different from equity investments, in which capital gains are generated only if the market value of the security rises over time (i.e., buy low, sell high).\n\nWhen measuring compound interest, a major difference is in the number of compounding times. Compound interest is not fully risk-free, as the interest payer may adjust the default or interest rates. Notwithstanding, the component of accumulating funds is the thing that makes it moderately riskless contrasted with different speculations. It is stood out from basic intrigue, where recently amassed intrigue isn’t added to the chief measure of the current time frame, so there is no intensifying.",
null,
"The following are the four main components of compound interest:\n\n• Principal – The principal is the amount that was initially deposited in a compounding system (for example, bank savings account with high-interest rates). It is the starting number from which the first payment of interest is measured.\n• Interest rate – The interest rate applies to the amount of the loan that is paid. The interest charge would be equal to the rate of interest plus the account balance (which is the amount of the initial principal and any interest earlier paid).\n• Compounding Frequency – The intensifying recurrence decides how frequently a year the intrigue is paid. It will impact the loan cost itself as high-recurrence exacerbating will regularly just be accessible with lower rates. Regularly, aggravating happens on a month to month, quarterly, or yearly premise.\n• Time horizon – Time horizon alludes to the measure of time over which the progressive accrual instrument can work. The more drawn out the time skyline, the more intrigue installments that can be made, and the bigger the consummation account worth will be. Time horizon is the absolute most significant part of progressive accrual, as it basically directs the future gainfulness of a venture. An aggravating domain with low rates and low intensifying recurrence can in any case be alluring if the accessible time horizon is exceptionally long.\n\nThe rate at which compound interest accumulates depends on the compounding frequency, so the higher the number of compounding periods, the greater the compound interest. The compounding frequency is the amount of times the accrued interest is paid out or capitalized (credited to the account) on a regular basis, every year (or occasionally, another unit time). Thus, the quantity of compound interest accrued on \\$100 compounded at 10% annually is less than that on \\$100 compounded at 5% semi-annually over the identical fundamental measure. Since the interest-on-interest effect can generate increasingly positive returns supported the initial principal amount, it’s sometimes been observed because of the “miracle of compound interest.”\n\nCompound interest was once considered the worst kind of usury and was heavily condemned in many other countries by Roman law and common laws. Multiplying the initial principal sum by one plus the average interest rate adjusted to the number of compound periods minus one determines compound interest. The original cumulative loan amount is then subtracted from the resulting value.\n\nThe formula for calculating compound interest is:\n\nCompound Interest = Total amount of Principal and Interest in future (or Future Value) less Principal amount at present (or Present Value)\n\n= (P (1 + i)n) – P\n\n= P ((1 + i)n – 1)\n\nWhere:\n\nP = Principal\n\ni = nominal annual interest rate in percentage terms\n\nn = number of compounding periods\n\nThe effect of compounding depends on:\n\n• The nominal interest rate is applied and\n• The frequency interest is compounded.\n\nWhen measuring compound interest, a major difference is in the number of compounding times. The basic rule is that the greater the amount of compound interest, the higher the number of compounding periods.\n\nInformation Sources:"
] | [
null,
"https://assignmentpoint.com/wp-content/uploads/2020/09/Compound-Interest.jpg",
null,
"https://assignmentpoint.com/wp-content/uploads/2020/09/Compound-Interest-CI.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.94029546,"math_prob":0.96231127,"size":4757,"snap":"2022-40-2023-06","text_gpt3_token_len":915,"char_repetition_ratio":0.19377236,"word_repetition_ratio":0.091984235,"special_character_ratio":0.19045617,"punctuation_ratio":0.08131068,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99187595,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T20:08:20Z\",\"WARC-Record-ID\":\"<urn:uuid:45874bd3-0982-42c3-99fe-39d3ed399faf>\",\"Content-Length\":\"35996\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a5ab7085-1364-4153-a426-b625dddfa6aa>\",\"WARC-Concurrent-To\":\"<urn:uuid:7eb0b7a9-9d18-4b7a-a0e2-31a69fe0e95e>\",\"WARC-IP-Address\":\"178.63.81.55\",\"WARC-Target-URI\":\"https://assignmentpoint.com/compound-interest/\",\"WARC-Payload-Digest\":\"sha1:A6PPQ2USEHX2IMKVHMIYR7FSBP6TMXQS\",\"WARC-Block-Digest\":\"sha1:4JSBISO27EZN22J3QWFQE5NAIWGDIFEE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030338244.64_warc_CC-MAIN-20221007175237-20221007205237-00428.warc.gz\"}"} |
https://dba.stackexchange.com/questions/322124/why-changing-limit-triggers-different-query-plans-in-postgresql | [
"# Why changing limit triggers different query plans in postgresql?\n\nChanging only limit from 40 to 50 in following query triggers different execution plans. And unfortunatly one I needed is much slower. So the question is: why this happening and how can I force postgresql to use faster plan? I'm using postgresql 14.5\n\n``````SELECT \"Id\"\nFROM \"Podcasts\" AS P\nINNER JOIN \"PodcastCategories\" AS PC ON P.\"Id\"=PC.\"PodcastId\"\nWHERE \"LastPublishDate\" IS NOT NULL AND \"Dead\" = false AND \"Hidden\" = false AND PC.\"CategoryId\" = ANY (ARRAY)\nAND P.\"LastPublishDate\"<'2023-01-14 23:00:00+00'\nORDER BY \"LastPublishDate\" DESC\nLIMIT 50\n``````\n\nThis is plan for limit 40 and it is expected one and fast!\n\n``````Limit (cost=1000.87..53095.17 rows=40 width=12) (actual time=46.797..606.536 rows=40 loops=1)\n->Gather Merge (cost=1000.87..60909.31 rows=46 width=12) (actual time=46.796..606.518 rows=40 loops=1)\nWorkers Planned: 2\nWorkers Launched: 2\n->Nested Loop (cost=0.84..59903.98 rows=19 width=12) (actual time=23.367..448.066 rows=15 loops=3)\n-> Parallel Index Only Scan using \"\"IX_Podcasts_LastPublishDate\"\" on \"\"Podcasts\"\" p (cost=0.42..55488.81 rows=2442 width=12) (actual time=0.791..63.207 rows=259 loops=3)\nIndex Cond: (\"\"LastPublishDate\"\" < '2023-01-14 23:00:00+00'::timestamp with time zone)\nHeap Fetches: 776\n-> Index Only Scan using \"\"PK_PodcastCategories\"\" on \"\"PodcastCategories\"\" pc (cost=0.42..1.80 rows=1 width=4) (actual time=1.487..1.487 rows=0 loops=776)\nIndex Cond: ((\"\"PodcastId\"\" = p.\"\"Id\"\") AND (\"\"CategoryId\"\" = ANY ('{1}'::integer[])))\nHeap Fetches: 21\n\nPlanning Time: 2.468 ms\nExecution Time: 606.588 ms\n``````\n\nThis is plan, when limit is 50 and it runs much slower\n\n``````Limit (cost=59885.72..59888.83 rows=27 width=12) (actual time=34419.067..34436.304 rows=50 loops=1)\n->Gather Merge (cost=59885.72..59888.83 rows=27 width=12) (actual time=34419.065..34436.298 rows=50 loops=1)\nWorkers Planned: 1\nWorkers Launched: 1\n->Sort (cost=58885.71..58885.78 rows=27 width=12) (actual time=34415.504..34415.510 rows=40 loops=2)\nSort Method: top-N heapsort Memory: 28kB\nSort Key: p.\"\"LastPublishDate\"\" DESC\nWorker 0: Sort Method: top-N heapsort Memory: 29kB\n->Parallel Hash Join (cost=55858.19..58885.07 rows=27 width=12) (actual time=34386.500..34412.404 rows=10528 loops=2)\nHash Cond: (pc.\"\"PodcastId\"\" = p.\"\"Id\"\")\n->Parallel Bitmap Heap Scan on \"\"PodcastCategories\"\" pc (cost=336.90..3313.48 rows=19163 width=4) (actual time=94.378..2852.945 rows=16934 loops=2)\nRecheck Cond: (\"\"CategoryId\"\" = ANY ('{1}'::integer[]))\nHeap Blocks: exact=1292\"\n->Bitmap Index Scan on \"\"IX_PodcastCategories_CategoryId\"\" (cost=0.00..328.75 rows=32577 width=0) (actual time=91.542..91.543 rows=33879 loops=1)\nIndex Cond: (\"\"CategoryId\"\" = ANY ('{1}'::integer[]))\n->Parallel Hash (cost=55490.76..55490.76 rows=2442 width=12) (actual time=31518.266..31518.267 rows=130037 loops=2)\nBuckets: 131072 (originally 8192) Batches: 4 (originally 1) Memory Usage: 4128kB\n->Parallel Index Only Scan using \"\"IX_Podcasts_LastPublishDate\"\" on \"\"Podcasts\"\" p (cost=0.42..55490.76 rows=2442 width=12) (actual time=0.029..30960.929 rows=130037 loops=2)\nIndex Cond: (\"\"LastPublishDate\"\" < '2023-01-14 23:00:00+00'::timestamp with time zone)\nHeap Fetches: 260290\nPlanning Time: 0.348 ms\nExecution Time: 34436.367 ms\n``````\n\n• And, on top, I would check if `random_page_cost` is set correctly. Jan 16 at 7:03"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5865011,"math_prob":0.73946095,"size":3181,"snap":"2023-14-2023-23","text_gpt3_token_len":1104,"char_repetition_ratio":0.15738118,"word_repetition_ratio":0.071625344,"special_character_ratio":0.44797233,"punctuation_ratio":0.2299169,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9843528,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-04T13:11:35Z\",\"WARC-Record-ID\":\"<urn:uuid:29f45236-d0f4-4d14-a03d-90a7b9684c2a>\",\"Content-Length\":\"157640\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:83d5686f-b53a-4fc7-bb30-d6a7f4772681>\",\"WARC-Concurrent-To\":\"<urn:uuid:81b098fe-ff2f-4391-8a06-4b640f0bc399>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://dba.stackexchange.com/questions/322124/why-changing-limit-triggers-different-query-plans-in-postgresql\",\"WARC-Payload-Digest\":\"sha1:VVXRM32X7IXHR5JTC6JVFIKBXMPI4EXH\",\"WARC-Block-Digest\":\"sha1:HCXNWI4BLIIGBIBDZ4W4I6FC5OR63E7I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649986.95_warc_CC-MAIN-20230604125132-20230604155132-00314.warc.gz\"}"} |
https://videoclass.com/video/solving-equations-containing-decimals1 | [
"# Solving Equations Containing Decimals1",
null,
"To solve linear equations with decimals or fractions, the first step is to get rid of the decimals or fractions. There are simple techniques to get rid of both decimals and fractions and the techniques can... To solve linear equations with decimals or fractions, the first step is to get rid of the decimals or fractions. There are simple techniques to get rid of both decimals and fractions and the techniques can be used together in case both decimals and fractions appear in the same equation. Once the equations are simplified, graph all the linear equations together.\nMore... Collapse\n100 Views"
] | [
null,
"https://s3.eu-central-1.amazonaws.com/videoclass-static/user-avatars/50x50.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8569703,"math_prob":0.9972284,"size":651,"snap":"2021-43-2021-49","text_gpt3_token_len":126,"char_repetition_ratio":0.19629057,"word_repetition_ratio":0.60194176,"special_character_ratio":0.19354838,"punctuation_ratio":0.108333334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99970245,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T05:26:43Z\",\"WARC-Record-ID\":\"<urn:uuid:4531ebad-0708-47ff-bac4-09308902a30c>\",\"Content-Length\":\"52013\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b7815721-3b16-43f4-8a5b-dead5dfeda8c>\",\"WARC-Concurrent-To\":\"<urn:uuid:efeceef4-e3be-4e80-b6b9-a30c249140c3>\",\"WARC-IP-Address\":\"172.67.197.157\",\"WARC-Target-URI\":\"https://videoclass.com/video/solving-equations-containing-decimals1\",\"WARC-Payload-Digest\":\"sha1:P77E2C7ZDAKYWTDV7F54JQQZ7PQRK4FV\",\"WARC-Block-Digest\":\"sha1:R53I5E2XWZIJQUO5RV3JOBRBNNQ75ZZ7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588257.34_warc_CC-MAIN-20211028034828-20211028064828-00570.warc.gz\"}"} |
https://www.drmaciver.com/2016/05/myhill-nerode-theorem/ | [
"# The Myhill–Nerode theorem and the Minimal Deterministic Automaton\n\nI recently learned a whole bunch about a random corner of language theory that I hadn’t previously known much about. I thought it might be interesting to write up some of it. So this is the first post amongst several (I currently have two others planned but they may spawn more).\n\nThis is a post about the Myhill–Nerode theorem, behind which is a nice construction (in the abstract sense. You couldn’t perform it directly on an actual machine) for the minimal deterministic automaton matching any language.\n\nLets start with some terms: We’ve got some finite non-empty alphabet $$A$$. A string on $$A$$ is any finite sequence of elements from $$A$$. A language on $$A$$ is any set of strings.\n\nFor this post we’ll fix $$A$$ and then $$S$$ will be the set of strings on $$A$$.\n\nString notation:\n\n• $$\\epsilon$$ is the string of length $$0$$\n• If $$u, v$$ are strings then $$uv$$ is the string made by concatenating them in order.\n• Similarly if $$a \\in A$$ then $$au$$ is the string that consists of prepending $$a$$ to $$u$$ and $$ua$$ is the string that consists of appending $$a$$ to $$u$$.\n\nIn general we are interested in languages which have nice compact definitions rather than having to specify the whole set. One nice way of defining languages which often provides this is a deterministic automaton. These also have the advantage of being very simple to work with computationally once you have one.\n\nA deterministic automaton on $$A$$ consists of:\n\n• A set of states $$Q$$, (possibly infinite – if it’s a finite set we call it a deterministic finite automaton)\n• An initial state $$q_0 \\in Q$$\n• An update function $$\\delta : Q \\times A \\to Q$$\n• A set of accepting states $$F \\subseteq Q$$.\n\nWe can extend the update function to take any finite string: Let $$\\delta(q, \\epsilon) = q$$ and $$\\delta(q, au) = \\delta(\\delta(q, a), u)$$. i.e. we just follow the update function along the string.\n\nWe will additionally assume that every state $$Q$$ is reachable. That is it’s $$\\delta(q_0, s)$$ for some string $$s$$. This is a fairly harmless restriction because for any automaton we can just create the automaton that replaces $$Q$$ and $$F$$ with their reachable subsets.\n\nA deterministic automaton recognises a language $$L$$ if $$s \\in L$$ if and only if $$\\delta(q_0, s) \\in F$$. i.e. if you follow the automaton along the string then you end up in an accepting state if and only if the string is in the desired language.\n\nEvery language can be recognised by a deterministic automaton:\n\n• Let $$Q = S$$\n• Let $$q_0 = \\epsilon$$\n• Let $$\\delta(s, a) = sa$$\n• Let $$F = L$$\n\nThis works because every state in this automaton has a unique string that reaches it from the initial state (which is the string representing the state), and the state is accepting if and only if that string is in the language, so by construction recognises the language.\n\nThis simplifies matters not even slightly because we’ve just reframed the problem and determining if a state is accepting is equivalent to determining whether the string is in our language, and the automaton constructed this way is always infinite (though may have finitely many accepting states if $$L$$ is finite), even if a very small finite automaton might recognise our language. 
If anything we’ve made the problem worse because we’ve taken a potentially finite language and moved to an always infinite representation.\n\nSo, can we find a smaller automaton that recognises a given language?\n\nIn fact we can. For any language there is a unique minimal deterministic automaton. We’ll see exactly what that means in a bit, but first we’ll construct the automaton we want.\n\nThe idea for constructing it is this: We take the above automaton, and we try to collapse it by conflating equivalent states. We look for equivalent states with the notion of a test.\n\nA test is just a string. The idea is that if we have some string $$t$$ and two strings $$u$$ and $$v$$ with $$(ut \\in L) \\ne (vt \\in L)$$, then we must have $$\\delta(q_0, u) \\ne \\delta(q_0, v)$$ for any automaton recognising $$L$$ – if not then we would have $$\\delta(q_0, ut) = \\delta(q_0, vt)$$, which is impossible because one is an accepting state and the other is not.\n\nTwo strings are said to be word equivalent if there is no test that distinguishes them. The idea is that we can then shrink our above automaton by conflating all states that are word equivalent.\n\nSo if $$s$$ is a string let $$[s]$$ be the set of all strings that are word equivalent to $$s$$. We can then define the following automaton:\n\n• Let $$Q = \\{[s]: s \\in S\\}$$\n• Let $$q_0 = [\\epsilon]$$\n• Let $$\\delta([s], a) = [sa]$$\n• Let $$F = \\{[s]: s \\in L\\}$$\n\nWe’ll call this the Myhill-Nerode automaton and label it $$Z$$.\n\nWe first need to check that $$\\delta$$ is well defined. i.e. if $$u$$ is word equivalent to $$v$$ then $$ua$$ is word equivalent to $$va$$. This follows from the fact that if $$t$$ is a test distinguishing $$ua$$ and $$va$$ then $$at$$ is a test distinguishing $$u$$ and $$v$$.\n\nSo this is a well defined state machine, but does it recognise $$L$$?\n\nFirst note:\n\n• $$\\delta(q_0, s) = [s]$$ (more or less by definition)\n• If $$u, v$$ are word-equivalent then $$(u \\in L) = (v \\in L)$$ because otherwise $$\\epsilon$$ is a test distinguishing them\n\nSo $$\\delta(q_0, s)$$ is an accepting state if and only if it is $$[t]$$ for some $$t \\in L$$, which is true if and only if $$s \\in L$$. i.e. this automaton recognises $$L$$.\n\nSo we’ve got another automaton recognising $$L$$, and it sure looks smaller than the first one, but is it minimal?\n\nAnd what do we even mean by minimal?\n\nFor that we’ll need the idea of an automaton reduction: Let $$P, Q$$ be automata (call the initial state of $$P$$ $$p_0$$, and we’ll use subscripts to distinguish $$\\delta$$ and $$F$$). $$Q$$ is a reduction of $$P$$ if there is a function $$r: P \\to Q$$ such that:\n\n1. $$r(p_0) = q_0$$\n2. If $$p \\in P$$ and $$a \\in A$$ then $$r(\\delta_P(p, a)) = \\delta_Q(r(p), a)$$.\n3. $$r(p) \\in F_Q$$ if and only if $$p \\in F_P$$\n4. Every $$q \\in Q$$ is $$r(p)$$ for some $$p$$\n\ni.e. we label every state in $$P$$ with a state in $$Q$$ such that everything lines up correctly for transitions.\n\nOur claim of minimality is this: If $$Q$$ is any automaton recognising $$L$$, then $$Z$$ is a reduction of $$Q$$.\n\nThe key property for proving this is that if $$u$$ and $$v$$ are two strings with $$\\delta_Q(q_0, u) = \\delta_Q(q_0, v) = q$$ then they must be word equivalent. Otherwise there would be some test $$t$$ with $$(ut \\in L) \\ne (vt \\in L)$$, even though $$\\delta_Q(q_0, ut) = \\delta_Q(q, t) = \\delta_Q(q_0, vt)$$, which is impossible because then only one of $$ut$$ and $$vt$$ would be in $$L$$ while both lead to the same state.
So we choose as our reduction function $$r$$ the function $$r(q) = [s]$$ where $$s$$ is any string such that $$\\delta_Q(q_0, s) = q$$ (recall we’re assuming all states are reachable).\n\nLets check that this has the desired properties:\n\n1. $$\\delta_Q(q_0, \\epsilon) = q_0$$, so $$r(q_0) = [\\epsilon] = z_0$$\n2. If $$\\delta_Q(q_0, s) = q$$ then $$delta_Q(q_0, sa) = \\delta(q, a)$$, so $$r(\\delta_Q(q_0, s), a)) = [sa] = \\delta_Z([s], a) = \\delta_Z(r(q), a)$$ as desired\n3. This follows automatically because both $$Q$$ and $$Z$$ recognise $$L$$\n4. $$[s] = r(\\delta(q_0, s))$$\n\nSo for any automaton we can reduce it to $$Z$$, as desired.\n\nAnd finally we need to prove uniqueness: The uniqueness claim is that if $$Q$$ is any other automaton that has this property then a reduction $$r: Z \\to Q$$ must also have the property that if $$s \\ne t$$ then $$r(s) \\ne r(t)$$. i.e. $$r$$ is an isomorphism, just relabelling the vertices.\n\nProof: Suppose $$[u] \\ne [v]$$ are two distinct states of $$Z$$. Then there is some $$t$$ such that $$\\delta_Z([u], t) \\in F_Z \\ne \\delta_Z(([v], t) \\in F_Z$$. But then by the properties of the reduction we must have $$\\delta_Q(r([u]), t) \\in F_Q \\ne \\delta_Q((r([v]), t) \\in F_Q$$ and thus $$r([u]) \\ne r([v])$$ as desired.\n\nWe’ll finally finish with the title of this post: The Myhill–Nerode theorem.\n\nIn the language we have so far, this states that $$L$$ is regular if and only if the $$Z$$ we constructed is finite. But this follows immediately from what we’ve done above: $$Z$$ is an automaton recognizing $$L$$, so by definition if $$Z$$ is finite then $$L$$ must be regular. Conversely, if there is any finite automaton that recognizes $$L$$ then it reduces to $$Z$$, so $$Z$$ must also be finite.\n\nWhy do we care about all of this?\n\nWell, as far as I know, the Myhill–Nerode theorem itself isn’t actually very important. It’s mostly an intellectual curiousity. I do like that it shows that every language regardless of structural properties has a minimal deterministic automaton, but I’m not sure I have any actual use for that fact.\n\nBut the machinery used to prove it is independently quite interesting. In particular, minimal (or at least small) deterministic finite automaton are very useful in practice and we can use a similar construction starting from an existing finite automaton to produce one (this is Hopcroft’s Algorithm).\n\nMoreover, the idea of a test that we started from proves quite interesting, and you can use it to construct alternative (often more compact) representations of a DFA, and it forms the basis for the inference of regular languages in certain circumstances. More on that in later posts.\n\nThis entry was posted in Automata Theory on by ."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8765713,"math_prob":0.9999553,"size":9393,"snap":"2019-43-2019-47","text_gpt3_token_len":2520,"char_repetition_ratio":0.14793907,"word_repetition_ratio":0.010989011,"special_character_ratio":0.2967103,"punctuation_ratio":0.096039064,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000058,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-17T10:18:12Z\",\"WARC-Record-ID\":\"<urn:uuid:1f4242d1-b5de-4378-91f2-b8e68c51fe69>\",\"Content-Length\":\"27746\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d768a0b9-5237-46c6-9154-65fdc57d64bb>\",\"WARC-Concurrent-To\":\"<urn:uuid:9105d2df-da25-473c-a15b-f26d4958edd3>\",\"WARC-IP-Address\":\"104.18.42.229\",\"WARC-Target-URI\":\"https://www.drmaciver.com/2016/05/myhill-nerode-theorem/\",\"WARC-Payload-Digest\":\"sha1:5DVUOVWXOVQ2FW2S2SRW7U2IQMZ4C4Z7\",\"WARC-Block-Digest\":\"sha1:MNLSRHODYKLNX3XQCXCTJ7VZTIHBR4EQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668910.63_warc_CC-MAIN-20191117091944-20191117115944-00129.warc.gz\"}"} |
http://rcompanion.org/handbook/J_01.html | [
"",
null,
"## Summary and Analysis of Extension Program Evaluation in R\n\nSalvatore S. Mangiafico\n\n# Regression for Count Data\n\n### Introduction\n\n#### Count data\n\nIn general, common parametric tests like t-test and anova shouldn’t be used for count data. One reason is technical in nature: that parametric analyses require continuous data. Count data is by its nature discrete and is left-censored at zero. (That is, usually counts can’t be less than zero.)\n\nA second reason is more practical in nature. Count data are often highly skewed, and often produce skewed residuals if a parametric approach is attempted. In this case, the hypothesis tests will not be accurate.\n\nFor further discussion, see the “Count data may not be appropriate for common parametric tests” section in the Introduction to Parametric Tests chapter.\n\n#### Regression approaches for count data\n\nThe most common regression approach for handling count data is probably Poisson regression. However, Poisson regression makes assumptions about the distribution of the data that may not be appropriate in all cases. Hermite regression is a more flexible approach, but at the time of writing doesn’t have a complete set of support functions in R. Quasi-Poisson regression is also flexible with data assumptions, but also but at the time of writing doesn’t have a complete set of support functions in R. Negative binomial regression allows for overdispersion in data; and zero-inflated regression is useful when there are a high proportion of zero counts in the data.\n\n#### Cautionary note\n\nNote that model assumptions and pitfalls of these regression techniques are not discussed in depth here. The reader is urged to understand the assumptions of this kind of modeling before proceeding.\n\n#### Generalized linear regression\n\nPoisson, Hermite, and related regression approaches are a type of generalized linear model. This should not be confused with general linear model, which is implemented with the lm function. Generalized linear models are implemented with the glm function or other functions.\n\nGeneralized linear models are used when the dependent variable is count, binary, multinomial, etc. More information on using the glm function can be found by using help(glm) and help(family). For examples of logistic regression, see the chapter Models for Nominal Data; the chapter Beta Regression for Percent and Proportion Data; or Mangiafico (2015) in the “References” section. 
For a table of common uses for family and link function in generalized linear models, see the Wikipedia article in the “References” section for this chapter.\n\n### Packages used in this chapter\n\nThe packages used in this chapter include:\n\n• pysch\n\n• hermite\n\n• lattice\n\n• plyr\n\n• boot\n\n• DescTools\n\n• ggplot2\n\n• car\n\n• multcompView\n\n• emmeans\n\n• MASS\n\n• pscl\n\n• rcompanion\n\n• robust\n\nThe following commands will install these packages if they are not already installed:\n\nif(!require(psych)){install.packages(\"psych\")}\nif(!require(hermite)){install.packages(\"hermite\")}\nif(!require(lattice)){install.packages(\"lattice\")}\nif(!require(plyr)){install.packages(\"plyr\")}\nif(!require(boot)){install.packages(\"boot\")}\nif(!require(DescTools)){install.packages(\"DescTools\")}\nif(!require(ggplot2)){install.packages(\"ggplot2\")}\nif(!require(car)){install.packages(\"car\")}\nif(!require(multcompView)){install.packages(\"multcompView\")}\nif(!require(emmeans)){install.packages(\"emmeans\")}\nif(!require(MASS)){install.packages(\"MASS\")}\nif(!require(pscl)){install.packages(\"pscl\")}\nif(!require(rcompanion)){install.packages(\"rcompanion\")}\nif(!require(robust)){install.packages(\"robust\")}\n\n### Count data example\n\nIn this example, extension researchers have set up garden plots with different suites of plants, with each suite identified as a level of the variable Garden below. In September, they counted the number of monarch butterflies in each garden plot.\n\nInput = (\"\nGarden Monarchs\nA 0\nA 4\nA 2\nA 2\nA 0\nA 6\nA 0\nA 0\nB 5\nB 9\nB 7\nB 5\nB 7\nB 5\nB 9\nB 5\nC 10\nC 14\nC 12\nC 12\nC 10\nC 16\nC 10\nC 10\n\")\n\n### Order factors by the order in data frame\n\n### Otherwise, R will alphabetize them\n\nData\\$Garden = factor(Data\\$Garden,\nlevels=unique(Data\\$Garden))\n\n### Check the data frame\n\nlibrary(psych)\n\nstr(Data)\n\nsummary(Data)\n\n### Remove unnecessary objects\n\nrm(Input)\n\n#### Histograms\n\nlibrary(lattice)\n\nhistogram(~ Monarchs | Garden,\ndata=Data,\nlayout=c(1,3) # columns and rows of individual plots\n)",
null,
"### Poisson regression example\n\nPoisson regression makes certain assumptions about the relationship between the mean and the dispersion of the dependent variable. Because this assumption may not be met for all data sets, Poisson regression may not be recommended for routine use. Particularly, classic Poisson regression should be avoided if there is overdispersion in the data or if there are several zero counts in the dependent variable.\n\nAn alternate approach for data with overdispersion is negative binomial regression.\n\nAn alternative approach for data with many zeros is zero-inflated Poisson regression.\n\nFor further discussion, see the “Count data may not be appropriate for common parametric tests” section in the Introduction to Parametric Tests chapter.\n\nNote that model assumptions and pitfalls of this approach are not discussed here. The reader is urged to understand the assumptions of this kind of modeling before proceeding.\n\nmodel.p = glm(Monarchs ~ Garden,\ndata=Data,\nfamily=\"poisson\")\n\nlibrary(car)\n\nAnova(model.p,\ntype=\"II\",\ntest=\"LR\")\n\nAnalysis of Deviance Table (Type II tests)\n\nLR Chisq Df Pr(>Chisq)\nGarden 66.463 2 3.697e-15 ***\n\nlibrary(rcompanion)\n\nnagelkerke(model.p)\n\n\\$Pseudo.R.squared.for.model.vs.null\nPseudo.R.squared\nCox and Snell (ML) 0.937293\nNagelkerke (Cragg and Uhler) 0.938037\n\n\\$Likelihood.ratio.test\nDf.diff LogLik.diff Chisq p.value\n-2 -33.231 66.463 3.6967e-15\n\nlibrary(multcompView)\n\nlibrary(emmeans)\n\nmarginal = emmeans(model.p,\n~ Garden)\n\npairs(marginal,\n\ncld(marginal,\nalpha=0.05,\nLetters=letters, ### Use lower-case letters for .group\n\nGarden emmean SE df asymp.LCL asymp.UCL .group\nA 0.5596158 0.2672450 NA -0.07849527 1.197727 a\nB 1.8718022 0.1386750 NA 1.54068251 2.202922 b\nC 2.4638532 0.1031421 NA 2.21757688 2.710130 c\n\nResults are given on the log (not the response) scale.\nConfidence level used: 0.95\nConf-level adjustment: sidak method for 3 estimates\nP value adjustment: tukey method for comparing a family of 3 estimates\nTests are performed on the log scale\nsignificance level used: alpha = 0.05\n\n### Note that estimates are on log scale\n\n### Negative binomial regression example\n\nNegative binomial regression is similar in application to Poisson regression, but allows for overdispersion in the dependent count variable.\n\nThis example will use the glm.nb function in the MASS package. The Anova function in the car package will be used for an analysis of deviance, and the nagelkerke function will be used to determine a p-value and pseudo R-squared value for the model. Post-hoc analysis can be conducted with the emmeans package.\n\nNote that model assumptions and pitfalls of this approach are not discussed here. 
The reader is urged to understand the assumptions of this kind of modeling before proceeding.\n\nlibrary(MASS)\n\nmodel.nb = glm.nb(Monarchs ~ Garden,\ndata=Data,\ncontrol = glm.control(maxit=10000))\n\nlibrary(car)\n\nAnova(model.nb,\ntype=\"II\",\ntest=\"LR\")\n\nAnalysis of Deviance Table (Type II tests)\n\nLR Chisq Df Pr(>Chisq)\nGarden 66.464 2 3.694e-15 ***\n\nlibrary(rcompanion)\n\nnagelkerke(model.nb)\n\n\\$Pseudo.R.squared.for.model.vs.null\nPseudo.R.squared\nCox and Snell (ML) 0.776007\nNagelkerke (Cragg and Uhler) 0.778217\n\n\\$Likelihood.ratio.test\nDf.diff LogLik.diff Chisq p.value\n-2 -17.954 35.907 1.5952e-08\n\nlibrary(multcompView)\n\nlibrary(emmeans)\n\nmarginal = emmeans(model.nb,\n~ Garden)\n\npairs(marginal,\n\ncld(marginal,\nalpha = 0.05,\nLetters = letters, ### Use lower-case letters for .group\ntype = \"response\", ### Report emmeans in orginal scale\n\nGarden response SE df asymp.LCL asymp.UCL .group\nA 1.75 0.4677072 NA 0.9244706 3.312707 a\nB 6.50 0.9013878 NA 4.6677750 9.051422 b\nC 11.75 1.2119200 NA 9.1850474 15.031223 c\n\nConfidence level used: 0.95\nConf-level adjustment: sidak method for 3 estimates\nIntervals are back-transformed from the log scale\nP value adjustment: tukey method for comparing a family of 3 estimates\nTests are performed on the log scale\nsignificance level used: alpha = 0.05\n\n### Zero-inflated regression example\n\nZero-inflated regression is similar in application to Poisson regression, but allows for an abundance of zeros in the dependent count variable.\n\nThis example will use the zeroinfl function in the pscl package. The Anova function in the car package will be used for an analysis of deviance, and the nagelkerke function will be used to determine a p-value and pseudo R-squared value for the model. Post-hoc analysis can be conducted with the emmeans package.\n\nlibrary(pscl)\n\nmodel.zi = zeroinfl(Monarchs ~ Garden,\ndata = Data,\ndist = \"poisson\")\n\n### dist = \"negbin\" may be used\n\nsummary(model.zi)\n\nCall:\nzeroinfl(formula = Monarchs ~ Garden | Garden, data = Data, dist = \"poisson\")\n\nCount model coefficients (poisson with log link):\nEstimate Std. Error z value Pr(>|z|)\n(Intercept) 1.2182 0.2847 4.278 1.89e-05 ***\nGardenB 0.6536 0.3167 2.064 0.039 *\nGardenC 1.2457 0.3029 4.113 3.90e-05 ***\n\nZero-inflation model coefficients (binomial with logit link):\nEstimate Std. 
Error z value Pr(>|z|)\n(Intercept) -7.046e-02 7.363e-01 -0.096 0.924\nGardenB -2.057e+01 1.071e+04 -0.002 0.998\nGardenC -2.057e+01 1.071e+04 -0.002 0.998\n\n### Note that there are separate coefficients for the\n### Poisson part of the analysis and for the zero-inflation part.\n\nlibrary(car)\n\nAnova(model.zi,\ntype=\"II\",\ntest=\"Chisq\")\n\nAnalysis of Deviance Table (Type II tests)\n\nDf Chisq Pr(>Chisq)\nGarden 2 23.914 6.414e-06 ***\n\nlibrary(rcompanion)\n\nnagelkerke(model.zi)\n\n\\$Pseudo.R.squared.for.model.vs.null\nPseudo.R.squared\nCox and Snell (ML) 0.797356\nNagelkerke (Cragg and Uhler) 0.800291\n\n\\$Likelihood.ratio.test\nDf.diff LogLik.diff Chisq p.value\n-4 -19.156 38.311 9.6649e-08\n\nlibrary(multcompView)\n\nlibrary(emmeans)\n\nmarginal = emmeans(model.zi,\n~ Garden)\n\npairs(marginal,\n\ncld(marginal,\nalpha=0.05,\nLetters=letters, ### Use lower-case letters for .group\n\nGarden emmean SE df asymp.LCL asymp.UCL .group\nA 1.75 0.7586301 NA -0.06140972 3.561410 a\nB 6.50 0.9013877 NA 4.34772229 8.652278 b\nC 11.75 1.2119199 NA 8.85625304 14.643747 c\n\nConfidence level used: 0.95\nConf-level adjustment: sidak method for 3 estimates\nP value adjustment: tukey method for comparing a family of 3 estimates\nsignificance level used: alpha = 0.05\n\n### Note, emmeans are on the original measurement scale\n\n### Robust Poisson regression example\n\nRobust Poisson regression is robust to outliers in the dependent variable.\n\nThis example uses the glmRob function in the robust package. The anova function can be used to conduct an analysis of deviance. The p-value for the model can be found by comparing the model to a null model. However, at the time of writing, I don’t know of any way to determine AIC or pseudo R-squared for the model.\n\nAt the time of writing, the glmRob function can only use the Poisson and binomial families of models.\n\nAn alternate method is the glmrob function in the robustbase package.\n\nlibrary(robust)\n\nmodel.rob = glmRob(Monarchs ~ Garden,\ndata = Data,\nfamily = \"poisson\")\n\nanova(model.rob, test=\"Chisq\")\n\nDf Deviance Resid. Df Resid.\nDev Pr(>Chi)\nNULL NA NA 23 430.19850 NA\nGarden 2 400.9221 21 29.27641 3.567693e-63\n\nmodel.rob.null = glmRob(Monarchs ~ 1,\ndata = Data,\nfamily = \"poisson\")\n\nanova(model.rob.null, model.rob, test=\"Chisq\")\n\nTerms Resid. Df Resid. Dev Test Df Deviance Pr(>Chi)\n1 1 23 95.12606 NA NA NA\n2 Garden 21 29.27641 2 65.84965 5.536815e-11\n\n### Quasi-Poisson regression\n\nQuasi-Poisson regression is useful since it has a variable dispersion parameter, so that it can model over-dispersed data. It may be better than negative binomial regression in some circumstances (Verhoef and Boveng. 2007).\n\nAt the time of writing, Quasi-Poisson regression doesn’t have complete set of support functions in R. Using the quasipoisson family option in the glm function, the results will have the same parameter coefficients as with the poisson option, but the inference statistics are adjusted in the summary function. The Anova function in the car package can be used for an analysis of deviance table, and the emmeans package can be used for post-hoc comparisons. 
Since the model doesn’t produce a log-likelihood value, I don’t know a way to produce a p-value for the mode, for a pseudo R-squared value for the model.\n\n.\n\nmodel.qp = glm(Monarchs ~ Garden,\ndata=Data,\nfamily=\"quasipoisson\")\n\nlibrary(car)\n\nAnova(model.qp,\ntype=\"II\",\ntest=\"LR\")\n\nAnalysis of Deviance Table (Type II tests)\n\nResponse: Monarchs\nLR Chisq Df Pr(>Chisq)\nGarden 52.286 2 4.429e-12 ***\n\nlibrary(multcompView)\n\nlibrary(emmeans)\n\nmarginal = emmeans(model.qp,\n~ Garden)\n\npairs(marginal,\n\ncld(marginal,\nalpha=0.05,\nLetters=letters,\n\nGarden emmean SE df asymp.LCL asymp.UCL .group\nA 0.5596158 0.3013057 NA -0.1598233 1.279055 a\nB 1.8718022 0.1563493 NA 1.4984809 2.245123 b\nC 2.4638532 0.1162877 NA 2.1861887 2.741518 c\n\nResults are given on the log (not the response) scale.\nConfidence level used: 0.95\nConf-level adjustment: sidak method for 3 estimates\nP value adjustment: tukey method for comparing a family of 3 estimates\nsignificance level used: alpha = 0.05\n\n### Hermite regression\n\nThe generalized Hermite distribution is a more general distribution that can handle overdispersion or multimodality (Moriña and others, 2015). This makes generalized Hermite regression a powerful and flexible tool for modeling count data. It is implemented with the hermite package.\n\nFitting models with the hermite package can be somewhat difficult. One issue is that model fitting may fail without some parameters being specified. Often specifying an appropriate value for the m option will help.\n\nA further difficulty with this approach is that, at the time writing, the package isn’t supported by the anova function to compare models, the Anova function to test effects, or other useful functions like emmeans for factor effects.\n\nThe hermite package is used to conduct hermite regression. Here, the m=3 option is specified. Often the default m=NULL can be used. In this case, if the m value is not specified, the function cannot complete the model fitting, and errors are produced. Using m=2 often works. Here, m=3 was used because it produced a model with a lower AIC than did the m=2 option.\n\nlibrary(hermite)\n\nmodel = glm.hermite(Monarchs ~ Garden,\ndata = Data,\nm=3)\n\nsummary(model)\n\nCoefficients:\nEstimate Std. Error z value p-value\n(Intercept) 0.5081083 0.3251349 1.5627612 1.181088e-01\nGardenB 1.3700567 0.3641379 3.7624662 1.682461e-04\nGardenC 1.9596153 0.3476326 5.6370291 1.730089e-08\ndispersion.index 1.0820807 0.2877977 0.1281707 3.601681e-01\norder 3.0000000 NA NA NA\n(Likelihood ratio test against Poisson is reported by *z value* for *dispersion.index*)\n\nAIC: 112.7762\n\n#### Post-hoc analysis: Medians and confidence intervals\n\nAt the time of writing, the emmeans package does not support post-hoc analysis of regressions produced with the hermite package.\n\nOne imperfect approach for post-hoc analysis would be to examine median counts for treatments and the confidence intervals of these medians. We can conclude that groups with non-overlapping 95% confidence intervals for their medians are significantly different.\n\nHowever, this approach does not represent any information learned from the Hermite regression.\n\nA second issue is that, because the dependent variable is not continuous, the distribution of the bootstrapped confidence intervals is not likely to be continuous, and so is may not be reliable.\n\nTo get confidence intervals for the medians for each group, we will use the groupwiseMedian function. 
Here I used the percentile method for confidence intervals.\n\nlibrary(rcompanion)\n\nSum = groupwiseMedian(Monarchs ~ Garden,\ndata=Data,\nconf=0.95,\nR=5000,\npercentile=TRUE,\nbca=FALSE,\ndigits=3)\n\nSum\n\nGarden n Median Conf.level Percentile.lower Percentile.upper\n1 A 8 1 0.95 0 4\n2 B 8 6 0.95 5 8\n3 C 8 11 0.95 10 14\n\n### In this case, none of the confidence intervals overlap.\n\n##### Plot of medians and confidence intervals\n\nThe data frame Sum created above will be passed to ggplot for plotting. At the end of the code, annotate is used to add text to the plot to indicate which medians are significantly different from one another.\n\nlibrary(ggplot2)\n\nggplot(Sum, ### The data frame to use.\naes(x = Garden,\ny = Median)) +\ngeom_errorbar(aes(ymin = Percentile.lower,\nymax = Percentile.upper),\nwidth = 0.05,\nsize = 1) +\ngeom_point(shape = 15,\nsize = 5) +\ntheme_bw() +\ntheme(axis.title = element_text(face = \"bold\")) +\nylab(\"Median count of monarch butterflies\") +\n\nannotate(\"text\",\nx = 1:3,\ny = c(5, 10, 15),\nlabel = c(\"Group 3\", \"Group 2\", \"Group 1\"))",
null,
"### Optional code for chi-square goodness-of-fit test\n\nAn alternative approach to handling count data is to sum up the counts for treatments, and use a chi-square test or related test. Here, a chi-square goodness-of-fit test is used to see if counts differ from “expected” equal proportions.\n\n#### Omnibus test\n\nTabla = xtabs(Monarchs ~ Garden,\ndata = Data)\n\nTabla\n\nGarden\nA B C\n14 52 94\n\nchisq.test(Tabla)\n\nChi-squared test for given probabilities\n\nX-squared = 60.05, df = 2, p-value = 9.127e-14\n\n#### Post-hoc chi-square tests\n\nGarden.A = sum(Data\\$Monarchs[Data\\$Garden==\"A\"])\nGarden.B = sum(Data\\$Monarchs[Data\\$Garden==\"B\"])\nGarden.C = sum(Data\\$Monarchs[Data\\$Garden==\"C\"])\n\nobserved = c(Garden.A, Garden.B) # observed frequencies\nexpected = c(1/2, 1/2) # expected proportions\n\nchisq.test(x = observed,\np = expected)\n\nChi-squared test for given probabilities\n\nX-squared = 21.879, df = 1, p-value = 2.904e-06\n\nobserved = c(Garden.B, Garden.C) # observed frequencies\nexpected = c(1/2, 1/2) # expected proportions\n\nchisq.test(x = observed,\np = expected)\n\nChi-squared test for given probabilities\n\nX-squared = 12.082, df = 1, p-value = 0.0005091\n\nobserved = c(Garden.A, Garden.C) # observed frequencies\nexpected = c(1/2, 1/2) # expected proportions\n\nchisq.test(x = observed,\np = expected)\n\nChi-squared test for given probabilities\n\nX-squared = 59.259, df = 1, p-value = 1.382e-14\n\n### Optional analysis: Vuong test to compare Poisson, negative binomial, and zero-inflated models\n\nThe Vuong test, implemented by the pscl package, can test two non-nested models. It works with negbin, zeroinfl, and some glm model objects which are fitted to the same data.\n\nThe null hypothesis is that there is no difference in models. 
The function produces three tests, a “Raw” test, an AIC-corrected, and a BIC-corrected, any of which could be used.\n\nIt has been suggested that the Vuong test not be used to test for zero-inflation (Wilson, 2015).\n\nDefine models\n\nmodel.p = glm(Monarchs ~ Garden,\ndata=Data,\nfamily=\"poisson\")\n\nlibrary(MASS)\n\nmodel.nb = glm.nb(Monarchs ~ Garden,\ndata=Data,\ncontrol = glm.control(maxit=10000))\n\nlibrary(pscl)\n\nmodel.zi = zeroinfl(Monarchs ~ Garden,\ndata = Data,\ndist = \"poisson\")\n\nVuong test\n\nlibrary(pscl)\n\nvuong(model.p,\nmodel.nb,\ndigits = 4)\n\nVuong Non-Nested Hypothesis Test-Statistic:\n(test-statistic is asymptotically distributed N(0,1) under the\nnull that the models are indistinguishible)\n-------------------------------------------------------------\nVuong z-statistic H_A p-value\nRaw 0.03324988 model1 > model2 0.48674\nAIC-corrected 0.03324988 model1 > model2 0.48674\nBIC-corrected 0.03324988 model1 > model2 0.48674\n\n### Positive Vuong z-statistic suggests that model 1 is superior,\n### but, in this case, the difference is not significant,\n### and the value of the statistic is probably too tiny to be meaningful.\n\nvuong(model.p,\nmodel.zi,\ndigits = 4)\n\nVuong Non-Nested Hypothesis Test-Statistic:\n(test-statistic is asymptotically distributed N(0,1) under the\nnull that the models are indistinguishible)\n-------------------------------------------------------------\nVuong z-statistic H_A p-value\nRaw -1.4424725 model2 > model1 0.074585\nAIC-corrected -0.4335210 model2 > model1 0.332318\nBIC-corrected 0.1607786 model1 > model2 0.436134\n\n### Negative Vuong z-statistic suggests that model 2 is superior.\n### If the Raw statistic is used, p = 0.07 gives some evidence\n### that zi model is superior.\n\nvuong(model.nb,\nmodel.zi,\ndigits = 4)\n\nVuong Non-Nested Hypothesis Test-Statistic:\n(test-statistic is asymptotically distributed N(0,1) under the\nnull that the models are indistinguishible)\n-------------------------------------------------------------\nVuong z-statistic H_A p-value\nRaw -1.4424725 model2 > model1 0.074585\nAIC-corrected -0.4335210 model2 > model1 0.332318\nBIC-corrected 0.1607786 model1 > model2 0.436134\n\n### Negative Vuong z-statistic suggests that model 2 is superior.\n### If the Raw statistic is used, p = 0.07 gives some evidence\n### that zi model is superior.\n\n### References\n\nMoriña, D., M. Higueras, P. Puig, and M. Oliveira. 2015. Generalized Hermite Distribution\n\nModelling with the R Package hermite. The R Journal 7(2):263–274. journal.r-project.org/archive/2015-2/morina-higueras-puig-etal.pdf.\n\nhelp(package=\"hermite\")\n\nlibrary(hermite); ?glm.hermite\n\nlibrary(MASS); ?glm.nb\n\nlibrary(pscl); ?zeroinfl\n\nlibrary(pscl); ?vuong\n\n“Simple Logistic Regression” in Mangiafico, S.S. 2015. An R Companion for the Handbook of Biological Statistics, version 1.09. rcompanion.org/rcompanion/e_06.html.\n\n\"Generalized linear model: Link function\". No date. Wikipedia. Retrieved 31 Jan. 2016. en.wikipedia.org/wiki/Generalized_linear_model#Link_function.\n\nVerhoef, J.M. and P.L. Boveng. 2007. Quasi-Poisson vs. negative binomial regression: How should we model overdispersed count data? Ecology 88(11) 2766–2772. http://fisher.utstat.toronto.edu/reid/sta2201s/QUASI-POISSON.pdf.\n\nWilson, P. 2015. The Misuse of the Vuong Test for Non-Nested Models to Test for Zero-Inflation. Economic Letters 127: 51–53. cybermetrics.wlv.ac.uk/paperdata/misusevuong.pdf\n\n### References for count data\n\nGrace-Martin, K. No date. 
\"Regression Models for Count Data\". The Analysis Factor. www.theanalysisfactor.com/regression-models-for-count-data/.\n\nGrace-Martin, K. No date. \" Zero-Inflated Poisson Models for Count Outcomes\". The Analysis Factor. www.theanalysisfactor.com/zero-inflated-poisson-models-for-count-outcomes/.\n\n[IDRE] Institute for Digital Research and Education. 2015. “R Data Analysis Examples: Poisson Regression”. UCLA. www.ats.ucla.edu/stat/r/dae/poissonreg.htm.\n\n[IDRE] Institute for Digital Research and Education. 2015. “R Data Analysis Examples: Negative Binomial Regression”.\n\n[IDRE] Institute for Digital Research and Education. 2015. “R Data Analysis Examples: Zero-Inflated Poisson Regression”.\n\n[IDRE] Institute for Digital Research and Education. 2015. “R Data Analysis Examples: Zero-Truncated Poisson Regression”."
] | [
null,
"http://rcompanion.org/handbook/images/banner.jpg",
null,
"http://rcompanion.org/handbook/images/image178.png",
null,
"http://rcompanion.org/handbook/images/image179.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6846066,"math_prob":0.8776753,"size":22980,"snap":"2019-51-2020-05","text_gpt3_token_len":6503,"char_repetition_ratio":0.12552229,"word_repetition_ratio":0.2512547,"special_character_ratio":0.28646648,"punctuation_ratio":0.16736586,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9939243,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-16T08:32:42Z\",\"WARC-Record-ID\":\"<urn:uuid:5a01e7e4-d24e-45ae-90af-973e7080d568>\",\"Content-Length\":\"62920\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2cf82019-e6b0-49ae-a199-e8001632b514>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba6dd475-7d6c-4383-9150-a6a9da5a00d7>\",\"WARC-IP-Address\":\"75.98.175.108\",\"WARC-Target-URI\":\"http://rcompanion.org/handbook/J_01.html\",\"WARC-Payload-Digest\":\"sha1:CGKUMZAEAVHZR5EO2EUGBOHSDQGHK5QY\",\"WARC-Block-Digest\":\"sha1:PWALFL3GGREIT7NC2JB3IUE4GWNQH3LX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541318556.99_warc_CC-MAIN-20191216065654-20191216093654-00442.warc.gz\"}"} |
https://stackoverflow.com/questions/3354313/python-is-there-a-c-like-for-loop-available/3354369 | [
"# Python: is there a C-like for loop available?\n\nCan I do something like this in Python?\n\n``````for (i = 0; i < 10; i++):\nif someCondition:\ni+=1\nprint i\n``````\n\nI need to be able to skip some values based on a condition\n\nEDIT: All the solutions so far suggest pruning the initial range in one way or another, based on an already known condition. This is not useful for me, so let me explain what I want to do.\n\nI want to manually (i.e. no getopt) parse some cmd line args, where each 'keyword' has a certain number of parameters, something like this:\n\n``````for i in range(0,len(argv)):\narg = argv[i]\nif arg == '--flag1':\nopt1 = argv[i+1]\ni+=1\ncontinue\nif arg == '--anotherFlag':\noptX = argv[i+1]\noptY = argv[i+2]\noptZ = argv[i+3]\ni+=3\ncontinue\n\n...\n``````\n• what is it you're actually trying to do? Jul 28, 2010 at 15:16\n• updated question with more info... Jul 28, 2010 at 15:39\n• There is always `while` for you. Jul 28, 2010 at 20:36\n\nYes, this is how I would do it\n\n``````>>> for i in xrange(0, 10):\n... if i == 4:\n... continue\n... print i,\n...\n0 1 2 3 5 6 7 8 9\n``````\n\nEDIT\nBased on the update to your original question... I would suggest you take a look at optparse\n\n• +1 for `optparse`. Parsing command-line arguments manually is completely unnecessary. Jul 28, 2010 at 16:12\n• Apparently optparse is obsoleted in 2.7, and the new flavor is called argparse. They both seem to be able to do what I need, but I'm wondering if it's worth the learning curve. Anyway, +1 for pointing out this module. Jul 28, 2010 at 23:18\n``````for (i = 0; i < 10; i++)\nif someCondition:\ni+=1\nprint i\n``````\n\nIn python would be written as\n\n``````i = 0\nwhile i < 10\nif someCondition\ni += 1\nprint i\ni += 1\n``````\n\nthere you go, that is how to write a c for loop in python.\n\n• @Cristi yes but that was in the original example code in the question, so I thought that was what he wanted. Feb 6, 2013 at 1:01\n\nThere are two things you could do to solve your problem:\n\n• require comma-separated arguments which are going to be grouped into the following option value, you could use `getopt`, or any other module then.\n• or do more fragile own processing:\n\n``````sys.argv.pop()\ncmd = {}\nwhile sys.argv:\narg = sys.argv.pop(0)\nif arg == '--arg1':\ncmd[arg] = sys.argv.pop(0), sys.argv.pop(0)\nelif:\npass\nprint(cmd)\n``````\n• The idea of using argv as a stack is great for my needs, and it removes the indexed based access to the list. Aug 6, 2010 at 14:37\n• I love it when people suggest everything but the answer for the question 👏 Feb 24, 2020 at 19:16\n\nStrange way:\n\n``````for x in (x for x in xrange(10) if someCondition):\nprint str(x)\n``````\n\nYou should use `continue` to skip a value, in both C and Python.\n\n``````for i in range(10):\nif someCondition:\ncontinue\nprint(i)\n``````\n\nYou probably don't actually need the indices, you probably need the actual items. 
A better solution would probably be like this:\n\n``````sequence = 'whatever'\nfor item in sequence:\nif some_condition:\ncontinue\ndo_stuff_with(item)\n``````\n• I would dearly use an iterator over a for loop any time of the day, however I can't think of a way to use iteration in my situation - see updated question Jul 28, 2010 at 15:41\n\nYou could first turn the argv list into a generator:\n\n``````def g(my_list):\nfor item in my_list:\nyield item\n``````\n\nYou could then step through the items, invoking the generator as required:\n\n``````my_gen = g(sys.argv[1:]):\nwhile True:\ntry:\narg = my_gen.next()\nif arg == \"--flag1\":\noptX = my_gen.next()\nopyY = my_gen.next()\n--do something\nelif arg == \"--flag2\":\noptX = my_gen.next()\noptY = my_gen.next()\noptZ = my_gen.next()\n--do something else\n...\nexcept StopIteration:\nbreak\n``````\n• how's that any different than iterating over a list? just more hassle. Jul 28, 2010 at 18:03\n• The idea is good, but why convert `sys.argv[1:]` to a generator instance rather than an iterator? E.g. `my_gen = iter(sys.argv[1:])` will get you the same results as your function `g`. Jul 28, 2010 at 19:01\n• Also, instead of using `while` with `try: except StopIteration:` a for loop will achieve the same thing. Simply: `for arg in my_gen`. You can still use `my_gen.next()` in the loop. Jul 28, 2010 at 19:08\n\nYou can ensure that an index is incremented within a `try...finally` block. This solve the common problem of wanting to `continue` to the next index without having to copy/past `i += 1` everywhere. Which is one of the main advantages the C-like `for` loop offers.\n\nThe main disadvantage to using a `try...finally` is having to indent your code once more. but if you have a `while` loop with many `continue` conditions its probably worth it.\n\n# Example\n\nThis example demonstrates that `i` still gets incremented in the `finally` block, even with `continue` being called. If `i` is not incremented its value will remain even forever, and the `while` loop will become infinite.\n\n``````i = 0\nwhile i < 10:\ntry:\nprint(i)\n\nif i % 2 == 0:\ncontinue\n\nfinally:\ni += 1\n``````\n\nwithout it you would have to increment `i` just before calling `continue`.\n\n``````i = 0\nwhile i < 10:\nprint(i)\n\nif i % 2 == 0:\ni += 1 # duplicate code\ncontinue\n\ni += 1\n``````\n`````` for i in xrange(0, 10):\nif i % 3 == 0\ncontinue\nprint i\n``````\n\nWill only values which aren't divisible by 3.\n\nIf you need to iterate over something, and need an index, use enumerate()\n\n``````for i, arg in enumerate(argv):\n...\n``````\n\nwhich does the same as the questioner's\n\n``````for i in range(0,len(argv)):\narg = argv[i]\n``````\n\nYour problem seems to be that you should loop not raw parameters but parsed parameters. I would suggest you to consider to change your decision not to use standard module (like the others).\n\n``````increament = 4 #say\nfor i in range(n):\n#write your code here\nn=n+increment\n``````\n\nthis might be the simple solution to the problem if you just want to iterate through the array by skipping 4 members"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.75578314,"math_prob":0.7471764,"size":689,"snap":"2022-40-2023-06","text_gpt3_token_len":201,"char_repetition_ratio":0.10656934,"word_repetition_ratio":0.0,"special_character_ratio":0.31204644,"punctuation_ratio":0.14375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9731701,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T12:33:10Z\",\"WARC-Record-ID\":\"<urn:uuid:f38eba14-8746-46bf-a4db-7fd31ad8ff32>\",\"Content-Length\":\"249751\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:168d5d6a-d064-4f7b-b848-3ecec0475f2c>\",\"WARC-Concurrent-To\":\"<urn:uuid:f3eee143-00e3-4f8a-9c40-e31643b5a9b2>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/3354313/python-is-there-a-c-like-for-loop-available/3354369\",\"WARC-Payload-Digest\":\"sha1:6Y6HBLHXIRPTMO6E5HXXJHJ4U6FJCP3F\",\"WARC-Block-Digest\":\"sha1:SK46654P6FLPFXKFKDH2G3MNNVQLJFZA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335350.36_warc_CC-MAIN-20220929100506-20220929130506-00052.warc.gz\"}"} |
https://dsp.stackexchange.com/questions/69715/doopler-spread-and-delay-spread-in-an-equation-of-channel-freq-response | [
"# Doopler spread and delay spread in an equation of channel freq response\n\nAssumed the transmission system with channel impulse response is determined by the matrices $$H(\\tau_l) \\in C^{N_R \\times N_T}$$ ( $$N_R$$ and $$N_T$$ are numbers of received and transmit antennas)\n\nBy taking the Fourier Transform (FT) of the impulse response, we get the frequency response of the channel at subcarrier $$n$$.\n\n$$\\begin{equation} H(n)=\\sum_{l=1}^{L} H(\\tau_l) \\cdot e^{i \\cdot 2 \\pi (f_c+\\Delta f \\cdot n)\\tau_l}, \\end{equation}$$ where $$f_c$$ is the centre frequency, $$\\Delta f$$ is the subcarrier bandwidth, $$n$$ is a subcarrier index, $$n=1, \\ldots N$$.$$t$$ denote time, $$\\tau$$ is delay. $$\\tau_l$$ is the cluster delays.\n\nHow should I change the equation above if I need to check how the doppler delay is affected system?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.79543847,"math_prob":0.99999166,"size":719,"snap":"2020-34-2020-40","text_gpt3_token_len":212,"char_repetition_ratio":0.11888112,"word_repetition_ratio":0.0,"special_character_ratio":0.28789985,"punctuation_ratio":0.08695652,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000095,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T04:11:16Z\",\"WARC-Record-ID\":\"<urn:uuid:0537655d-7e4c-4713-b5b1-3941b4997d27>\",\"Content-Length\":\"141609\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:51970c8c-eb61-4c35-b84f-7cb2c4a12db2>\",\"WARC-Concurrent-To\":\"<urn:uuid:5d77e3e8-8314-4bf8-a1f8-4d57097100c6>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://dsp.stackexchange.com/questions/69715/doopler-spread-and-delay-spread-in-an-equation-of-channel-freq-response\",\"WARC-Payload-Digest\":\"sha1:IA6Y6GQ36XJXPIFGTEFY7LW4TM43JGKQ\",\"WARC-Block-Digest\":\"sha1:242YEATKFIQJOG57SL7MPXQ4MBSFBCDR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402130615.94_warc_CC-MAIN-20201001030529-20201001060529-00248.warc.gz\"}"} |
https://php-legacy-docs.zend.com/manual/php5/en/pht-vector.updateAt | [
"Edit Report a Bug",
null,
"# pht\\Vector::updateAt\n\n(PECL pht >= 0.0.1)\n\npht\\Vector::updateAtUpdates a value in the vector\n\n### Description\n\npublic pht\\Vector::updateAt ( mixed `\\$value` , int `\\$offset` ) : void\n\nThis method updates a value at the specified offset in the vector (in linear time). The vector will automatically be resized if it is not large enough.\n\nSince the pht\\Vector class supports array access, updating values can also be performed using the array subset notation (`[]`).\n\n### Parameters\n\n`value`\n\nThe value to be inserted into the vector. This value will be serialised (since it may be passed around between threads).\n\n`offset`\n\nThe offset at which the value will be updated at. This offset must be within the 0..(N-1) range (inclusive), where N is the size of the vector. Attempting to update at offsets outside of this range will result in an Error exception.\n\nNo return value.\n\n### Examples\n\nExample #1 Updating a value in a vector\n\n``` <?phpuse pht\\Vector;\\$vector = new Vector();\\$vector[] = 1;\\$vector[] = 2;\\$vector->updateAt(3, 0);\\$vector = 4;var_dump(\\$vector); ```\n\nThe above example will output:\n\n```object(pht\\Vector)#1 (2) {\n=>\nint(3)\n=>\nint(4)\n}\n```"
] | [
null,
"https://php-legacy-docs.zend.com/images/icon-github.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6596066,"math_prob":0.9125541,"size":1027,"snap":"2022-27-2022-33","text_gpt3_token_len":280,"char_repetition_ratio":0.16617791,"word_repetition_ratio":0.0,"special_character_ratio":0.28334957,"punctuation_ratio":0.14009662,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9838623,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T15:26:43Z\",\"WARC-Record-ID\":\"<urn:uuid:0f3af953-4248-4a16-8a39-3fab0351bdfa>\",\"Content-Length\":\"8597\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2e80a394-9a3e-45cd-9c54-3fa9d674f815>\",\"WARC-Concurrent-To\":\"<urn:uuid:4010550e-99c1-4b46-b59e-cc5a06acf1dd>\",\"WARC-IP-Address\":\"54.85.21.190\",\"WARC-Target-URI\":\"https://php-legacy-docs.zend.com/manual/php5/en/pht-vector.updateAt\",\"WARC-Payload-Digest\":\"sha1:KTIY3NP3HYTEZHKZILCA327JBR6ACXSZ\",\"WARC-Block-Digest\":\"sha1:YDLSAQIP6GTNCWDAQ436TJTHMZLHJFL7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103640328.37_warc_CC-MAIN-20220629150145-20220629180145-00352.warc.gz\"}"} |
http://www.mathematics21.org/view/compact/index-4.html | [
"",
null,
"Then\n\ng\n\n=\n\nh\n\nf\n\n×\n\nf\n\ni\n\n.\n\nProof.\n\nFrom the above\n\nh\n\nf\n\n×\n\nf\n\ni\n\ng\n\ng\n\n1\n\ng\n\n.\n\n[FIXME: Funcoids and reloids are confused.]\n\nIt’s remainded to prove\n\ng\n\n⊑ h\n\nf\n\n×\n\nf\n\ni\n\n.\n\n[FIXME: Possible errors.]\n\nSuppose there is\n\nU\n\nxyGR\n\nh\n\nf\n\n×\n\nf\n\ni\n\nsuch that\n\nU\n\nGR\n\ng\n\n.\n\nThen\n\n{\n\nV\n\n\\\n\nU\n\n|\n\nV\n\nGR\n\ng\n\n}\n\n=\n\ng\n\n\\\n\nU\n\nwould be a proper filter.\n\nThus by reflexivity\n\nh\n\nf\n\n×\n\nf\n\ni\n\n(\n\ng\n\n\\\n\nU\n\n)\n\n0\n\n.\n\nBy compactness of\n\nf\n\n×\n\nf\n\n, Cor\n\nh\n\nf\n\n×\n\nf\n\ni\n\n(\n\ng\n\n\\\n\nU\n\n)\n\n0\n\n.\n\nSuppose\n\n↑{\n\n(\n\nx\n\n;\n\nx\n\n)\n\n} ⊑ h\n\nf\n\n×\n\nf\n\ni\n\n(\n\ng\n\n\\\n\nU\n\n)\n\n; then\n\ng\n\n\\\n\nU\n\nh\n\nf\n\n1\n\n×\n\nf\n\n1\n\ni{\n\n(\n\nx\n\n;\n\nx\n\n)\n\n}\n\n;\n\nU\n\nh\n\nf\n\n1\n\n×\n\nf\n\n1\n\ni{\n\n(\n\nx\n\n;\n\nx\n\n)\n\n} ⊑ h\n\nf\n\n1\n\n×\n\nf\n\n1\n\ni\n\nwhat is impossible.\n\nThus there exist\n\nx\n\ny\n\nsuch that\n\n{\n\n(\n\nx\n\n;\n\ny\n\n)\n\n} ⊑\n\nCor\n\nh\n\nf\n\n×\n\nf\n\ni\n\n(\n\ng\n\n\\\n\nU\n\n)\n\n. Thus\n\n{\n\n(\n\nx\n\n;\n\ny\n\n)\n\n} ⊑ h\n\nf\n\n×\n\nf\n\ni\n\ng\n\n.\n\nThus by the lemma\n\n{\n\n(\n\nx\n\n;\n\ny\n\n)\n\n} ⊑\n\nwhat is impossible. So\n\nU\n\nGR\n\ng\n\n.\n\nWe have xyGR\n\nh\n\nf\n\n×\n\nf\n\ni\n\nGR\n\ng\n\n;\n\nh\n\nf\n\n×\n\nf\n\ni\n\ng\n\n.\n\nCorollary 18.\n\nLet\n\nf\n\nis a\n\nT\n\n1\n\n-separable (the same as\n\nT\n\n2\n\nfor symmetric transitive) compact funcoid\n\nand\n\ng\n\nis a uniform space (reflexive, symmetric, and transitive endoreloid) such that\n\n(\n\nFCD\n\n)\n\ng\n\n=\n\nf\n\n.\n\nThen\n\ng\n\n=\n\nh\n\nf\n\n×\n\nf\n\ni\n\n.\n\nAn (incomplete) attempt to prove one more theorem follows:\n\nTheorem 19.\n\nLet\n\nµ\n\nand\n\nν\n\nbe uniform spaces,\n\n(\n\nFCD\n\n)\n\nµ\n\nbe a compact funcoid. Then a map\n\nf\n\nis a\n\ncontinuous map from\n\n(\n\nFCD\n\n)\n\nµ\n\nto\n\n(\n\nFCD\n\n)\n\nν\n\niff\n\nf\n\nis a (uniformly) continuous map from\n\nµ\n\nto\n\nν\n\n.\n\nProof.\n\n[FIXME: errors in this proof.]\n\nWe have\n\nµ\n\n=\n\nh\n\n(\n\nFCD\n\n)\n\nµ\n\n×\n\n(\n\nFCD\n\n)\n\nµ\n\ni↑\n\nRLD\n\nf\n\nC\n\n?\n\n((\n\nFCD\n\n)\n\nµ\n\n; (\n\nFCD\n\n)\n\nν\n\n)\n\n. Then\n\nf\n\n×\n\nf\n\nC\n\n?\n\n((\n\nFCD\n\n)(\n\nµ\n\n×\n\nµ\n\n); (\n\nFCD\n\n)(\n\nν\n\n×\n\nν\n\n))\n\n(\n\nf\n\n×\n\nf\n\n)\n\n(\n\nFCD\n\n)(\n\nµ\n\n×\n\nµ\n\n)\n\n(\n\nFCD\n\n)(\n\nν\n\n×\n\nν\n\n)\n\n(\n\nf\n\n×\n\nf\n\n)\n\nFor every\n\nV\n\nGR\n\n(\n\nν\n\n×\n\nν\n\n)\n\nwe have\n\nh\n\ng\n\n1\n\ni\n\nV\n\n∈ h\n\n(\n\nFCD\n\n)(\n\nµ\n\n×\n\nµ\n\n)\n\ni{\n\ny\n\n}\n\nfor some\n\ny\n\n.\n\nh\n\ng\n\n1\n\ni\n\nV\n\n∈ h\n\n(\n\nFCD\n\n)\n\nµ\n\n×\n\n(\n\nFCD\n\n)\n\nµ\n\ni↑\n\nRLD\n\n∆ =\n\nGR\n\nµ\n\nh\n\ng\n\nih\n\ng\n\n1\n\ni\n\nV\n\nV\n\nWe need to prove\n\nf\n\nC\n\n(\n\nµ\n\n;\n\nν\n\n)\n\nthat is\n\np\n\nGR\n\nν\n\nq\n\nGR\n\nµ\n\n:\n\nh\n\nf\n\ni\n\nq\n\np\n\n. But this follows from the\n\nabove.\n\nBibliography\n\n\n\nVictor Porton. Categorical product of funcoids. At\n\nhttp://www.mathematics21.org/binaries/product.pdf\n\n.\n\n\n\nVictor Porton.\n\nAlgebraic General Topology. Volume 1\n\n. 2013.\n\n4"
] | [
null,
"http://www.mathematics21.org/view/compact/index004.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.63594574,"math_prob":0.9547413,"size":1986,"snap":"2019-35-2019-39","text_gpt3_token_len":1146,"char_repetition_ratio":0.16094853,"word_repetition_ratio":0.27631578,"special_character_ratio":0.449144,"punctuation_ratio":0.13276231,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997559,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-26T06:29:39Z\",\"WARC-Record-ID\":\"<urn:uuid:0771cb4a-2b43-4b21-a582-f9367826b8fe>\",\"Content-Length\":\"50532\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe7b91ab-8252-4507-8a80-fc4afe6f2d38>\",\"WARC-Concurrent-To\":\"<urn:uuid:2cf7cc94-bdd9-4733-b40e-40d06222391f>\",\"WARC-IP-Address\":\"104.236.49.103\",\"WARC-Target-URI\":\"http://www.mathematics21.org/view/compact/index-4.html\",\"WARC-Payload-Digest\":\"sha1:ZLHUKSLJMCVU4MRYHWZQKRY4YF3DQF3I\",\"WARC-Block-Digest\":\"sha1:MM7QNKO4AHT5Q33DP4XGXA5ECNS4TW2D\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027330968.54_warc_CC-MAIN-20190826042816-20190826064816-00522.warc.gz\"}"} |
https://www.bartleby.com/questions-and-answers/a-particle-is-at-rest-and-at-x0-and-y0-at-time-0.-the-particle-then-moves-in-the-xy-plane-with-an-ac/05cc8e10-0ff4-424f-82b0-a0fc17474aea | [
"",
null,
"",
null,
"A particle is at rest and at x=0 and y=0 at time = 0. The particle then moves in the x-y plane with an acceleration in m/s^2 of a = 5t^2 i + 5t j , where t is in seconds. Determine the magnitude of the particle's velocity vector in m/s at instant the y-component of the position vector ry = 8 meters.\n\nQuestion\n\nA particle is at rest and at x=0 and y=0 at time = 0. The particle then moves in the x-y plane with an acceleration in m/s^2 of a = 5t^2 i + 5t j , where t is in seconds. Determine the magnitude of the particle's velocity vector in m/s at instant the y-component of the position vector ry = 8 meters.\n\nStep 1",
null,
"help_outlineImage TranscriptioncloseGiven Jntormation - At fime (tzo) x-0 cand yo At time tzo) Vx =o and Uyzo Acceleration vectpr a z 5ti+5tj fullscreen\nStep 2",
null,
"help_outlineImage TranscriptioncloseGiven a5t and ay 5t Velodty of the panticle a dv dt dv adt V -Ja,dt VxSstdt Jstdt Vyay dt 5 Vy At time Cto) Vyzo At time to), Vxo C20 . V St fullscreen\nStep 3",
null,
"help_outlineImage TranscriptioncloseDisplacement is calculated as ds dt ds Vd t S vdt SVndt Sx= S- 37 Sy 6 At fime (to) Sy 0 (2 At time (t-o) S0 5t S C20 (2 fullscreen\n\nWant to see the full answer?\n\nSee Solution\n\nWant to see this answer and more?\n\nOur solutions are written by experts, many with advanced degrees, and available 24/7\n\nSee Solution\nTagged in\n\nKinematics",
null,
""
] | [
null,
"https://www.bartleby.com/static/search-icon-white.svg",
null,
"https://www.bartleby.com/static/close-grey.svg",
null,
"https://prod-qna-question-images.s3.amazonaws.com/qna-images/answer/f9772908-ce96-424c-9aea-414c39919021/4a42967f-dc42-4a20-a1a4-cbc7a8043348/s6dnafo.png",
null,
"https://prod-qna-question-images.s3.amazonaws.com/qna-images/answer/f9772908-ce96-424c-9aea-414c39919021/4a42967f-dc42-4a20-a1a4-cbc7a8043348/15eezkg.png",
null,
"https://prod-qna-question-images.s3.amazonaws.com/qna-images/answer/f9772908-ce96-424c-9aea-414c39919021/4a42967f-dc42-4a20-a1a4-cbc7a8043348/kfp5q6.png",
null,
"https://www.bartleby.com/static/logo.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87922275,"math_prob":0.94405425,"size":1865,"snap":"2019-43-2019-47","text_gpt3_token_len":494,"char_repetition_ratio":0.111767866,"word_repetition_ratio":0.037900876,"special_character_ratio":0.2756032,"punctuation_ratio":0.18077803,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98172104,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T22:46:55Z\",\"WARC-Record-ID\":\"<urn:uuid:994a6e99-3bd4-46d3-a72b-533b06277121>\",\"Content-Length\":\"102789\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1214bfa5-b45e-4a6e-adde-caaf18fc4a82>\",\"WARC-Concurrent-To\":\"<urn:uuid:9a53eae4-ca77-43f1-8d97-b9853c6b2260>\",\"WARC-IP-Address\":\"99.84.181.62\",\"WARC-Target-URI\":\"https://www.bartleby.com/questions-and-answers/a-particle-is-at-rest-and-at-x0-and-y0-at-time-0.-the-particle-then-moves-in-the-xy-plane-with-an-ac/05cc8e10-0ff4-424f-82b0-a0fc17474aea\",\"WARC-Payload-Digest\":\"sha1:IHJRKHNKGSLDT4HN6TIFJFG26IKNA32B\",\"WARC-Block-Digest\":\"sha1:JRSQ2TZ7I5ZXCZIZEJCOZXX444II3UE7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987795253.70_warc_CC-MAIN-20191021221245-20191022004745-00281.warc.gz\"}"} |
https://xarray.pydata.org/en/latest/generated/xarray.Dataset.sortby.html | [
"# xarray.Dataset.sortby¶\n\nDataset.sortby(variables, ascending=True)\n\nSort object by labels or values (along an axis).\n\nSorts the dataset, either along specified dimensions, or according to values of 1-D dataarrays that share dimension with calling object.\n\nIf the input variables are dataarrays, then the dataarrays are aligned (via left-join) to the calling object prior to sorting by cell values. NaNs are sorted to the end, following Numpy convention.\n\nIf multiple sorts along the same dimension is given, numpy’s lexsort is performed along that dimension: https://docs.scipy.org/doc/numpy/reference/generated/numpy.lexsort.html and the FIRST key in the sequence is used as the primary sort key, followed by the 2nd key, etc.\n\nParameters\nReturns\n\nsorted – A new dataset where all the specified dims are sorted by dim labels.\n\nReturn type\n\nDataset"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7662463,"math_prob":0.8780469,"size":841,"snap":"2020-34-2020-40","text_gpt3_token_len":190,"char_repetition_ratio":0.119474314,"word_repetition_ratio":0.0,"special_character_ratio":0.20451844,"punctuation_ratio":0.14375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98214656,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-22T02:05:07Z\",\"WARC-Record-ID\":\"<urn:uuid:13f1fb4f-b739-4618-b4d1-d7bb6a60c13f>\",\"Content-Length\":\"23770\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eef91f92-278c-4084-a35f-09dbb62c868b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d5b73523-0078-480b-971e-4f1f8350b04d>\",\"WARC-IP-Address\":\"104.26.1.204\",\"WARC-Target-URI\":\"https://xarray.pydata.org/en/latest/generated/xarray.Dataset.sortby.html\",\"WARC-Payload-Digest\":\"sha1:NIESAOTRJWDDJ2365UKIZJIGONWSXOTM\",\"WARC-Block-Digest\":\"sha1:75Z4OATXLIDI2TKOTJCVXRO25K72PVB5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400202686.56_warc_CC-MAIN-20200922000730-20200922030730-00684.warc.gz\"}"} |
https://gitlab.eurecom.fr/dyyu/openairinterface5g/commit/2cb4d347d04242be04383433771626bd8ca26129 | [
"### make the local tracer more robust\n\n```if a read from the remote tracer's socket fails, let's smash\nthe socket and wait for another remote tracer```\nparent 2219fe54\n ... ... @@ -131,35 +131,69 @@ process: static void *forward_remote_messages(void *_f) { #define PUT(x) do { \\ if (bufsize == bufmaxsize) { \\ bufmaxsize += 4096; \\ buf = realloc(buf, bufmaxsize); \\ if (buf == NULL) abort(); \\ } \\ buf[bufsize] = x; \\ bufsize++; \\ } while (0) #define PUT_BUF(x, l) do { \\ char *zz = (char *)(x); \\ int len = l; \\ while (len) { PUT(*zz); zz++; len--; } \\ } while (0) forward_data *f = _f; int from; int to; int l, len; char *b; char buf; char *buf = NULL; int bufsize = 0; int bufmaxsize = 0; char t; again: /* Note: if the remote socket dies while a transfer is running * then the state of the tracer will be totally messed up. * If that ever happens, things are messed up anyway, so no big * deal... (TODO: to be refined at some point, maybe) */ while (1) { from = f->socket_remote; to = f->socket_local; len = read(from, buf, 1024); if (len <= 0) break; b = buf; while (len) { l = write(to, b, len); bufsize = 0; /* let's read and process messages */ len = read(from, &t, 1); if (len <= 0) goto dead; PUT(t); switch (t) { case 0: case 1: /* message 0 and 1: get a length and then 'length' numbers */ if (read(from, &len, sizeof(int)) != sizeof(int)) goto dead; PUT_BUF(&len, 4); while (len) { if (read(from, &l, sizeof(int)) != sizeof(int)) goto dead; PUT_BUF(&l, 4); len--; } break; case 2: break; } b = buf; while (bufsize) { l = write(to, b, bufsize); if (l <= 0) abort(); len -= l; bufsize -= l; b += l; } } dead: /* socket died, let's stop all traces and wait for another tracer */ buf = 1; if (write(to, buf, 1) != 1) abort(); ... ...\nMarkdown is supported\n0% or\nYou are about to add 0 people to the discussion. Proceed with caution.\nFinish editing this message first!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.55198574,"math_prob":0.9879138,"size":1968,"snap":"2020-10-2020-16","text_gpt3_token_len":675,"char_repetition_ratio":0.11608961,"word_repetition_ratio":0.094444446,"special_character_ratio":0.414126,"punctuation_ratio":0.24213076,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9762399,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-05T11:27:48Z\",\"WARC-Record-ID\":\"<urn:uuid:2ae49cbc-f705-4b04-8c75-00db31533f91>\",\"Content-Length\":\"155323\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99a58201-fd3f-4631-9af9-41377c76071a>\",\"WARC-Concurrent-To\":\"<urn:uuid:97b460b4-6bf3-427f-bb4f-57c8b6356904>\",\"WARC-IP-Address\":\"193.55.113.249\",\"WARC-Target-URI\":\"https://gitlab.eurecom.fr/dyyu/openairinterface5g/commit/2cb4d347d04242be04383433771626bd8ca26129\",\"WARC-Payload-Digest\":\"sha1:ZGBGOGP6S5WCMVTEZWZH5DGZFDDBNG5K\",\"WARC-Block-Digest\":\"sha1:ISVUKT5BQKLPBT3PAWKPKWHANRIHGIDX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371576284.74_warc_CC-MAIN-20200405084121-20200405114121-00357.warc.gz\"}"} |
https://pressbooks.online.ucf.edu/osuniversityphysics2/chapter/power-in-an-ac-circuit/ | [
"Chapter 15. Alternating-Current Circuits\n\n# 15.4 Power in an AC Circuit\n\n### Learning Objectives\n\nBy the end of the section, you will be able to:\n\n• Describe how average power from an ac circuit can be written in terms of peak current and voltage and of rms current and voltage\n• Determine the relationship between the phase angle of the current and voltage and the average power, known as the power factor\n\nA circuit element dissipates or produces power according to $P=IV,$ where I is the current through the element and V is the voltage across it. Since the current and the voltage both depend on time in an ac circuit, the instantaneous power $p\\left(t\\right)=i\\left(t\\right)v\\left(t\\right)$ is also time dependent. A plot of p(t) for various circuit elements is shown in Figure 15.16. For a resistor, i(t) and v(t) are in phase and therefore always have the same sign (see Figure 15.5). For a capacitor or inductor, the relative signs of i(t) and v(t) vary over a cycle due to their phase differences (see Figure 15.7 and Figure 15.9). Consequently, p(t) is positive at some times and negative at others, indicating that capacitive and inductive elements produce power at some instants and absorb it at others.",
null,
"Figure 15.16 Graph of instantaneous power for various circuit elements. (a) For the resistor, ${P}_{\\text{ave}}={I}_{0}{V}_{0}\\text{/}2,$ whereas for (b) the capacitor and (c) the inductor, ${P}_{\\text{ave}}=0.$ (d) For the source, ${P}_{\\text{ave}}={I}_{0}{V}_{0}\\left(\\text{cos}\\phantom{\\rule{0.2em}{0ex}}\\varphi \\right)\\text{/}2,$ which may be positive, negative, or zero, depending on $\\varphi .$\n\nBecause instantaneous power varies in both magnitude and sign over a cycle, it seldom has any practical importance. What we’re almost always concerned with is the power averaged over time, which we refer to as the average power. It is defined by the time average of the instantaneous power over one cycle:\n\n${P}_{\\text{ave}}=\\frac{1}{T}{\\int }_{0}^{T}p\\left(t\\right)dt,$\n\nwhere $T=2\\pi \\text{/}\\omega$ is the period of the oscillations. With the substitutions $v\\left(t\\right)={V}_{0}\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}\\omega t$ and $i\\left(t\\right)={I}_{0}\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}\\left(\\omega t-\\varphi \\right),$ this integral becomes\n\n${P}_{\\text{ave}}=\\frac{{I}_{0}{V}_{0}}{T}{\\int }_{0}^{T}\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}\\left(\\omega t-\\varphi \\right)\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}\\omega t\\phantom{\\rule{0.2em}{0ex}}dt.$\n\nUsing the trigonometric relation $\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}\\left(A-B\\right)=\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}A\\text{cos}\\phantom{\\rule{0.2em}{0ex}}B-\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}B\\phantom{\\rule{0.2em}{0ex}}\\text{cos}\\phantom{\\rule{0.2em}{0ex}}A,$ we obtain\n\n${P}_{\\text{ave}}=\\frac{{I}_{0}{V}_{0}\\phantom{\\rule{0.2em}{0ex}}\\text{cos}\\phantom{\\rule{0.2em}{0ex}}\\varphi }{T}{\\int }_{0}^{T}\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}\\omega tdt-\\frac{{I}_{0}{V}_{0}\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}\\varphi }{T}{\\int }_{0}^{T}{\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}}^{2}\\omega t\\text{cos}\\phantom{\\rule{0.2em}{0ex}}\\omega tdt.$\n\nEvaluation of these two integrals yields\n\n$\\frac{1}{T}{\\int }_{0}^{T}{\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}}^{2}\\omega tdt=\\frac{1}{2}$\n\nand\n\n$\\frac{1}{T}{\\int }_{0}^{T}\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}\\omega t\\text{cos}\\phantom{\\rule{0.2em}{0ex}}\\omega tdt=0.$\n\nHence, the average power associated with a circuit element is given by\n\n${P}_{\\text{ave}}=\\frac{1}{2}{I}_{0}{V}_{0}\\phantom{\\rule{0.2em}{0ex}}\\text{cos}\\phantom{\\rule{0.2em}{0ex}}\\varphi .$\n\nIn engineering applications, $\\text{cos}\\phantom{\\rule{0.2em}{0ex}}\\varphi$ is known as the power factor, which is the amount by which the power delivered in the circuit is less than the theoretical maximum of the circuit due to voltage and current being out of phase. For a resistor, $\\varphi =0,$ so the average power dissipated is\n\n${P}_{\\text{ave}}=\\frac{1}{2}{I}_{0}{V}_{0}.$\n\nA comparison of p(t) and ${P}_{\\text{ave}}$ is shown in Figure 15.16(d). 
To make ${P}_{\\text{ave}}=\\left(1\\text{/}2\\right){I}_{0}{V}_{0}$ look like its dc counterpart, we use the rms values ${I}_{\\text{rms}}\\phantom{\\rule{0.2em}{0ex}}\\text{and}\\phantom{\\rule{0.2em}{0ex}}{V}_{\\text{rms}}$ of the current and the voltage. By definition, these are\n\n${I}_{\\text{rms}}=\\sqrt{{i}_{\\text{ave}}^{2}}\\phantom{\\rule{0.2em}{0ex}}\\text{and}\\phantom{\\rule{0.2em}{0ex}}{V}_{\\text{rms}}=\\sqrt{{v}_{\\text{ave}}^{2}},$\n\nwhere\n\n${i}_{\\text{ave}}^{2}=\\frac{1}{T}{\\int }_{0}^{T}{i}^{2}\\left(t\\right)dt\\phantom{\\rule{0.2em}{0ex}}\\text{and }\\phantom{\\rule{0.2em}{0ex}}{v}_{\\text{ave}}^{2}=\\frac{1}{T}{\\int }_{0}^{T}{v}^{2}\\left(t\\right)dt.$\n\nWith $i\\left(t\\right)={I}_{0}\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}\\left(\\omega t-\\varphi \\right)\\phantom{\\rule{0.2em}{0ex}}\\text{and}\\phantom{\\rule{0.2em}{0ex}}v\\left(t\\right)={V}_{0}\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}\\omega t,$ we obtain\n\n${I}_{\\text{rms}}=\\frac{1}{\\sqrt{2}}{I}_{0}\\phantom{\\rule{0.2em}{0ex}}\\text{and}\\phantom{\\rule{0.2em}{0ex}}{V}_{\\text{rms}}=\\frac{1}{\\sqrt{2}}{V}_{0}.$\n\nWe may then write for the average power dissipated by a resistor,\n\n${P}_{\\text{ave}}=\\frac{1}{2}{I}_{0}{V}_{0}={I}_{\\text{rms}}{V}_{\\text{rms}}={I}_{\\text{rms}}^{2}R.$\n\nThis equation further emphasizes why the rms value is chosen in discussion rather than peak values. Both equations for average power are correct for Equation 15.13, but the rms values in the formula give a cleaner representation, so the extra factor of 1/2 is not necessary.\n\nAlternating voltages and currents are usually described in terms of their rms values. For example, the 110 V from a household outlet is an rms value. The amplitude of this source is $110\\sqrt{2}\\phantom{\\rule{0.2em}{0ex}}\\text{V}=\\text{156 V}\\text{.}$ Because most ac meters are calibrated in terms of rms values, a typical ac voltmeter placed across a household outlet will read 110 V.\n\nFor a capacitor and an inductor, $\\varphi =\\pi \\text{/}2\\phantom{\\rule{0.2em}{0ex}}\\text{and}\\phantom{\\rule{0.2em}{0ex}}-\\pi \\text{/}2\\phantom{\\rule{0.2em}{0ex}}\\text{rad,}$ respectively. Since $\\text{cos}\\pi \\text{/}2=\\text{cos}\\left(\\text{−}\\pi \\text{/}2\\right)=0,$ we find from Equation 15.12 that the average power dissipated by either of these elements is ${P}_{\\text{ave}}=0.$ Capacitors and inductors absorb energy from the circuit during one half-cycle and then discharge it back to the circuit during the other half-cycle. This behavior is illustrated in the plots of Figure 15.16, (b) and (c), which show p(t) oscillating sinusoidally about zero.\n\nThe phase angle for an ac generator may have any value. If $\\text{cos}\\phantom{\\rule{0.2em}{0ex}}\\varphi >0,$ the generator produces power; if $\\text{cos}\\phantom{\\rule{0.2em}{0ex}}\\varphi <0,$ it absorbs power. 
In terms of rms values, the average power of an ac generator is written as</p>\n\n${P}_{\\text{ave}}={I}_{\\text{rms}}{V}_{\\text{rms}}\\phantom{\\rule{0.2em}{0ex}}\\text{cos}\\phantom{\\rule{0.2em}{0ex}}\\varphi .$\n\nFor the generator in an RLC circuit,\n\n$\\text{tan}\\phantom{\\rule{0.2em}{0ex}}\\varphi =\\frac{{X}_{L}-{X}_{C}}{R}$\n\nand\n\n$\\text{cos}\\phantom{\\rule{0.2em}{0ex}}\\varphi =\\frac{R}{\\sqrt{{R}^{2}+{\\left({X}_{L}-{X}_{C}\\right)}^{2}}}=\\frac{R}{Z}.$\n\nHence the average power of the generator is\n\n${P}_{\\text{ave}}={I}_{\\text{rms}}{V}_{\\text{rms}}\\phantom{\\rule{0.2em}{0ex}}\\text{cos}\\phantom{\\rule{0.2em}{0ex}}\\varphi =\\frac{{V}_{\\text{rms}}}{Z}{V}_{\\text{rms}}\\frac{R}{Z}=\\frac{{V}_{\\text{rms}}^{2}R}{{Z}^{2}}.$\n\nThis can also be written as\n\n${P}_{\\text{ave}}={I}_{\\text{rms}}^{2}R,$\n\nwhich designates that the power produced by the generator is dissipated in the resistor. As we can see, Ohm’s law for the rms ac is found by dividing the rms voltage by the impedance.\n\n### Example\n\n#### Power Output of a Generator\n\nAn ac generator whose emf is given by\n\n$v\\left(t\\right)=\\left(4.00\\phantom{\\rule{0.2em}{0ex}}\\text{V}\\right)\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}\\left[\\left(1.00\\phantom{\\rule{0.2em}{0ex}}×\\phantom{\\rule{0.2em}{0ex}}{10}^{4}\\text{rad}\\text{/}\\text{s}\\right)t\\right]$\n\nis connected to an RLC circuit for which $L=2.00\\phantom{\\rule{0.2em}{0ex}}×\\phantom{\\rule{0.2em}{0ex}}{10}^{-3}\\phantom{\\rule{0.2em}{0ex}}\\text{H}$, $C=4.00\\phantom{\\rule{0.2em}{0ex}}×\\phantom{\\rule{0.2em}{0ex}}{10}^{-6}\\phantom{\\rule{0.2em}{0ex}}\\text{F}$, and $R=5.00\\phantom{\\rule{0.2em}{0ex}}\\text{Ω}$. (a) What is the rms voltage across the generator? (b) What is the impedance of the circuit? (c) What is the average power output of the generator?\n\n#### Strategy\n\nThe rms voltage is the amplitude of the voltage times $1\\text{/}\\sqrt{2}$. The impedance of the circuit involves the resistance and the reactances of the capacitor and the inductor. The average power is calculated by Equation 15.14, or more specifically, the last part of the equation, because we have the impedance of the circuit Z, the rms voltage ${V}_{\\text{rms}}$, and the resistance R.\n\n#### Solution\n\n1. Since ${V}_{0}=4.00\\phantom{\\rule{0.2em}{0ex}}\\text{V,}$ the rms voltage across the generator is\n\n${V}_{\\text{rms}}=\\frac{1}{\\sqrt{2}}\\left(4.00\\phantom{\\rule{0.2em}{0ex}}\\text{V}\\right)=2.83\\phantom{\\rule{0.2em}{0ex}}\\text{V}\\text{.}$\n2. The impedance of the circuit is\n\n$\\begin{array}{cc}\\hfill Z& =\\sqrt{{R}^{2}+{\\left({X}_{L}-{X}_{C}\\right)}^{2}}\\hfill \\\\ & ={\\left\\{{\\left(5.00\\phantom{\\rule{0.2em}{0ex}}\\text{Ω}\\right)}^{2}+{\\left[\\left(1.00\\phantom{\\rule{0.2em}{0ex}}×\\phantom{\\rule{0.2em}{0ex}}{10}^{4}\\phantom{\\rule{0.2em}{0ex}}\\text{rad/s}\\right)\\left(2.00\\phantom{\\rule{0.2em}{0ex}}×\\phantom{\\rule{0.2em}{0ex}}{10}^{-3}\\phantom{\\rule{0.2em}{0ex}}\\text{H}\\right)-\\frac{1}{\\left(1.00\\phantom{\\rule{0.2em}{0ex}}×\\phantom{\\rule{0.2em}{0ex}}{10}^{4}\\phantom{\\rule{0.2em}{0ex}}\\text{rad/s}\\right)\\left(4.00\\phantom{\\rule{0.2em}{0ex}}×\\phantom{\\rule{0.2em}{0ex}}{10}^{-6}\\phantom{\\rule{0.2em}{0ex}}\\text{F}\\right)}\\right]}^{2}\\right\\}}^{1\\text{/}2}\\hfill \\\\ & =7.07\\phantom{\\rule{0.2em}{0ex}}\\text{Ω}\\text{.}\\hfill \\end{array}$\n3. 
From Equation 15.14, the average power transferred to the circuit is\n\n${P}_{\\text{ave}}=\\frac{{V}_{\\text{rms}}^{2}R}{{Z}^{2}}=\\frac{{\\left(2.83\\phantom{\\rule{0.2em}{0ex}}\\text{V}\\right)}^{2}\\left(5.00\\phantom{\\rule{0.2em}{0ex}}\\text{Ω}\\right)}{{\\left(7.07\\phantom{\\rule{0.2em}{0ex}}\\text{Ω}\\right)}^{2}}=0.801\\phantom{\\rule{0.2em}{0ex}}\\text{W}\\text{.}$\n\n#### Significance\n\nIf the resistance is much larger than the reactance of the capacitor or inductor, the average power is a dc circuit equation of $P={V}^{2}\\text{/}R,$ where V replaces the rms voltage.\n\n### Check Your Understanding\n\nAn ac voltmeter attached across the terminals of a 45-Hz ac generator reads 7.07 V. Write an expression for the emf of the generator.\n\nShow Solution\n\n$v\\left(t\\right)=\\left(10.0\\phantom{\\rule{0.2em}{0ex}}\\text{V}\\right)\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}90\\pi t$\n\n### Check Your Understanding\n\nShow that the rms voltages across a resistor, a capacitor, and an inductor in an ac circuit where the rms current is ${I}_{\\text{rms}}$ are given by ${I}_{\\text{rms}}R,{I}_{\\text{rms}}{X}_{C},\\phantom{\\rule{0.2em}{0ex}}\\text{and}\\phantom{\\rule{0.2em}{0ex}}{I}_{\\text{rms}}{X}_{L},$ respectively. Determine these values for the components of the RLC circuit of Equation 15.12.\n\nShow Solution\n\n2.00 V; 10.01 V; 8.01 V\n\n### Summary\n\n• The average ac power is found by multiplying the rms values of current and voltage.\n• Ohm’s law for the rms ac is found by dividing the rms voltage by the impedance.\n• In an ac circuit, there is a phase angle between the source voltage and the current, which can be found by dividing the resistance by the impedance.\n• The average power delivered to an RLC circuit is affected by the phase angle.\n• The power factor ranges from –1 to 1.\n\n### Conceptual Questions\n\nFor what value of the phase angle $\\varphi$ between the voltage output of an ac source and the current is the average power output of the source a maximum?\n\nDiscuss the differences between average power and instantaneous power.\n\nShow Solution\n\nThe instantaneous power is the power at a given instant. The average power is the power averaged over a cycle or number of cycles.\n\nThe average ac current delivered to a circuit is zero. Despite this, power is dissipated in the circuit. Explain.\n\nCan the instantaneous power output of an ac source ever be negative? Can the average power output be negative?\n\nShow Solution\n\nThe instantaneous power can be negative, but the power output can’t be negative.\n\nThe power rating of a resistor used in ac circuits refers to the maximum average power dissipated in the resistor. 
How does this compare with the maximum instantaneous power dissipated in the resistor?\n\n### Problems\n\nThe emf of an ac source is given by $v\\left(t\\right)={V}_{0}\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}\\omega t,$ where ${V}_{0}=100\\phantom{\\rule{0.2em}{0ex}}\\text{V}$ and $\\omega =200\\pi \\phantom{\\rule{0.2em}{0ex}}\\text{rad/s}\\text{.}$ Calculate the average power output of the source if it is connected across (a) a $20\\text{-}\\mu \\text{F}$ capacitor, (b) a 20-mH inductor, and (c) a $50\\text{-}\\text{Ω}$ resistor.\n\nCalculate the rms currents for an ac source is given by $v\\left(t\\right)={V}_{0}\\phantom{\\rule{0.2em}{0ex}}\\text{sin}\\phantom{\\rule{0.2em}{0ex}}\\omega t,$ where ${V}_{0}=100\\phantom{\\rule{0.2em}{0ex}}\\text{V}$ and $\\omega =200\\pi \\phantom{\\rule{0.2em}{0ex}}\\text{rad/s}$ when connected across (a) a $20\\text{-}\\mu \\text{F}$ capacitor, (b) a 20-mH inductor, and (c) a $50\\text{-}\\text{Ω}$ resistor.\n\nShow Solution\n\na. 0.89 A; b. 5.6A; c. 1.4 A\n\nA 40-mH inductor is connected to a 60-Hz AC source whose voltage amplitude is 50 V. If an AC voltmeter is placed across the inductor, what does it read?\n\nFor an RLC series circuit, the voltage amplitude and frequency of the source are 100 V and 500 Hz, respectively; $R=500\\phantom{\\rule{0.2em}{0ex}}\\text{Ω}$; and $L=0.20\\phantom{\\rule{0.2em}{0ex}}\\text{H}$. Find the average power dissipated in the resistor for the following values for the capacitance: (a) $C=2.0\\mu \\text{F}$ and (b) $C=0.20\\phantom{\\rule{0.2em}{0ex}}\\mu \\text{F}\\text{.}$\n\nShow Solution\n\na. 7.3 W; b. 6.3 W\n\nAn ac source of voltage amplitude 10 V delivers electric energy at a rate of 0.80 W when its current output is 2.5 A. What is the phase angle $\\varphi$ between the emf and the current?\n\nAn RLC series circuit has an impedance of $60\\phantom{\\rule{0.2em}{0ex}}\\text{Ω}$ and a power factor of 0.50, with the voltage lagging the current. (a) Should a capacitor or an inductor be placed in series with the elements to raise the power factor of the circuit? (b) What is the value of the reactance across the inductor that will raise the power factor to unity?\n\nShow Solution\n\na. inductor; b. ${X}_{L}=52\\phantom{\\rule{0.2em}{0ex}}\\text{Ω}$\n\n### Glossary\n\naverage power\ntime average of the instantaneous power over one cycle\npower factor\namount by which the power delivered in the circuit is less than the theoretical maximum of the circuit due to voltage and current being out of phase",
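The numbers in the worked example above (Power Output of a Generator) can be reproduced in a few lines of plain Python; only the values quoted in the example are used.

```python
import math

V0 = 4.00                # peak emf, V
omega = 1.00e4           # rad/s
R, L, C = 5.00, 2.00e-3, 4.00e-6   # ohm, henry, farad

V_rms = V0 / math.sqrt(2)                  # (a) ≈ 2.83 V
X_L = omega * L                            # 20.0 ohm
X_C = 1 / (omega * C)                      # 25.0 ohm
Z = math.sqrt(R**2 + (X_L - X_C)**2)       # (b) ≈ 7.07 ohm
P_ave = V_rms**2 * R / Z**2                # (c) 0.80 W (0.801 W if V_rms is rounded to 2.83 V first)

print(f"V_rms = {V_rms:.2f} V, Z = {Z:.2f} ohm, P_ave = {P_ave:.3f} W")
```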
null,
""
] | [
null,
"https://pressbooks.online.ucf.edu/app/uploads/sites/372/2021/04/CNX_UPhysics_32_04_acpower.jpg",
null,
"https://pressbooks.online.ucf.edu/app/themes/pressbooks-book/packages/buckram/assets/images/cc-by.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.75210834,"math_prob":0.9993486,"size":13230,"snap":"2022-40-2023-06","text_gpt3_token_len":4246,"char_repetition_ratio":0.271057,"word_repetition_ratio":0.09492848,"special_character_ratio":0.30400604,"punctuation_ratio":0.103924416,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99980026,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-01T02:24:23Z\",\"WARC-Record-ID\":\"<urn:uuid:eec206e9-4d99-4d55-9ba7-993a7d2a60d2>\",\"Content-Length\":\"115523\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ef65eca-351a-432f-99ca-f783e098361a>\",\"WARC-Concurrent-To\":\"<urn:uuid:47023927-f49d-4b58-bdc8-82d59976d935>\",\"WARC-IP-Address\":\"108.138.64.32\",\"WARC-Target-URI\":\"https://pressbooks.online.ucf.edu/osuniversityphysics2/chapter/power-in-an-ac-circuit/\",\"WARC-Payload-Digest\":\"sha1:Y5EWFXDI4VXLWIEQ447MRJAGJGCBQ5MF\",\"WARC-Block-Digest\":\"sha1:IPOP6KQJHXJ6TXLSKYMGB6NQV2XZSJBI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335514.65_warc_CC-MAIN-20221001003954-20221001033954-00432.warc.gz\"}"} |
https://www.dadsworksheets.com/worksheets/subtraction-sequences.html | [
"# Subtraction Sequences Worksheets\n\nThese subtraction worksheets allow students to practice simple sequences of subtraction to arrive at a final answer to a problem. The worksheets start with very small differences and progress through multi-digit subtraction. There are also worksheets that mix addition and subtraction steps in the sequences for practicing both of these arithmetic skills together. They are a fun alternative to simple subtraction problems that keep the subtraction skills moving along!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9014894,"math_prob":0.77418786,"size":711,"snap":"2021-31-2021-39","text_gpt3_token_len":114,"char_repetition_ratio":0.2446959,"word_repetition_ratio":0.0,"special_character_ratio":0.14627285,"punctuation_ratio":0.06666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99414515,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-05T16:41:40Z\",\"WARC-Record-ID\":\"<urn:uuid:e6c11893-3f44-40ae-8220-57cc58af8f08>\",\"Content-Length\":\"292961\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:820b3f5b-9ca6-4b82-8fa6-1c9b087f28b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:f23af3e8-ffb8-41e4-89a7-9f0d10f71944>\",\"WARC-IP-Address\":\"172.67.75.70\",\"WARC-Target-URI\":\"https://www.dadsworksheets.com/worksheets/subtraction-sequences.html\",\"WARC-Payload-Digest\":\"sha1:O2N3Z2RLDI4QWA3I5EQIAHLFP2F5F7DK\",\"WARC-Block-Digest\":\"sha1:QLPCKG3AXPGNFKHSLGLAA2GKEVYOJ7KJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046156141.29_warc_CC-MAIN-20210805161906-20210805191906-00598.warc.gz\"}"} |
https://www.lmfdb.org/EllipticCurve/Q/7260/l/ | [
"# Properties\n\n Label 7260.l Number of curves $4$ Conductor $7260$ CM no Rank $0$ Graph",
null,
"# Related objects\n\nShow commands for: SageMath\nsage: E = EllipticCurve(\"l1\")\n\nsage: E.isogeny_class()\n\n## Elliptic curves in class 7260.l\n\nsage: E.isogeny_class().curves\n\nLMFDB label Cremona label Weierstrass coefficients j-invariant Discriminant Torsion structure Modular degree Faltings height Optimality\n7260.l1 7260n4 $$[0, 1, 0, -30249556, 64026275300]$$ $$6749703004355978704/5671875$$ $$2572306572000000$$ $$$$ $$207360$$ $$2.6925$$\n7260.l2 7260n3 $$[0, 1, 0, -1890181, 1000400300]$$ $$-26348629355659264/24169921875$$ $$-685095855468750000$$ $$$$ $$103680$$ $$2.3459$$\n7260.l3 7260n2 $$[0, 1, 0, -381916, 83523284]$$ $$13584145739344/1195803675$$ $$542320423497388800$$ $$$$ $$69120$$ $$2.1432$$\n7260.l4 7260n1 $$[0, 1, 0, 26459, 6095384]$$ $$72268906496/606436875$$ $$-17189438667390000$$ $$$$ $$34560$$ $$1.7966$$ $$\\Gamma_0(N)$$-optimal\n\n## Rank\n\nsage: E.rank()\n\nThe elliptic curves in class 7260.l have rank $$0$$.\n\n## Complex multiplication\n\nThe elliptic curves in class 7260.l do not have complex multiplication.\n\n## Modular form7260.2.a.l\n\nsage: E.q_eigenform(10)\n\n$$q + q^{3} - q^{5} - 2q^{7} + q^{9} - 2q^{13} - q^{15} - 2q^{19} + O(q^{20})$$",
null,
"## Isogeny matrix\n\nsage: E.isogeny_class().matrix()\n\nThe $$i,j$$ entry is the smallest degree of a cyclic isogeny between the $$i$$-th and $$j$$-th curve in the isogeny class, in the LMFDB numbering.\n\n$$\\left(\\begin{array}{rrrr} 1 & 2 & 3 & 6 \\\\ 2 & 1 & 6 & 3 \\\\ 3 & 6 & 1 & 2 \\\\ 6 & 3 & 2 & 1 \\end{array}\\right)$$\n\n## Isogeny graph\n\nsage: E.isogeny_graph().plot(edge_labels=True)\n\nThe vertices are labelled with LMFDB labels.",
null,
""
] | [
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAoAAAAHgCAYAAAA10dzkAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/d3fzzAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAga0lEQVR4nO3df5TV9X3n8df4Y9ghIgQGEqLZRONq/LFNYECTbbtHiIkkNfxI25AItmlio4sRNK010DaApzAmaZpKUwI21nYDNNqTCIZjsFFMUrMLCwOa%2BovGhiRVSJiZBKMyMgLf/YNoa/wBMvfORT6PxzkeDuM9n3l7Dr553u%2B99ztNVVVVAQCgGEc1egAAAPqXAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKMwxjR6AI0dVVens7MyuXbvS29ub5ubmDBw4MMOHD09TU1OjxwMohn3MgQhADllXV1fWrl2bjo6ObNy4IZs2bcrOnY8973FDhgzO6NGjM2bM2LS1tWX8%2BPFpbW1twMQARyb7mJerqaqqqtFD8MpRVVXWrVuXxYsX5%2Babb05vb29OfM2IjDntlIx%2B8yk586Q3ZNDAljQfe2x6n346j%2B/qyf1bf5hNDz2cjVseziM/2ZHm5uZMnTo1M2bMyDnnnOPZKMAhsI/pCwHIQVu1alXmzZ2be%2B69NyedMDKXTn5Ppk8Yn5Gtww76jO1d3Vm2Zm2WrLwtWx/dnlFvfWvmzZ%2BfiRMn1nFygCOLfUxfCUAOqLu7OzNnzsyKFSty3thRufKD78v557TlqKMO/TNE%2B/bty%2B3rO/K5f/hq7tiwORdeeGEWLVqUYcMOfnkBlMY%2BplYEIC9p5cqVufSSS9L7VE%2Buu/LSTDt/fE1fIqiqKsvW3JlZn1uaAS0tWbJ0aSZPnlyz8wGOFPYxteQ2MLygqqqycOHCTJkyJWefenLuW74k0ye8o%2BbvD2lqaspF7z4v969YkrNPPTlTpkxJe3t7PC8B2M8%2Bph5cAeR5qqrKnDlzcu2112buxdPzyQ9P65c3BldVlfk3LMs1NyzP7Nmzs2DBAm9IBopmH1MvbgPD87S3t%2Bfaa6/NZy7//fzBhb/Zb9%2B3qakp8y6%2BKIMGDsxV7e0ZNGhQZs%2Be3W/fH%2BBwYx9TL64A8hwrV67MlClTMvfi6Zn7kekNm2PeF7%2BUa25YnpUrV2bSpEkNmwOgUexj6kkA8qzu7u6cecYZOfvUk7Py03Mberm/qqpMumpeNnxva%2B5/4AGfRgOKYh9Tbz4EwrNmzpyZ3qd6suTqyxv%2BXo%2BmpqYsuXpmdvf0ZNasWQ2dBaC/2cfUmwAkyf6biq5YsSLXXXnpy7qRaD29bviwXHflJVm%2BfHluvfXWRo8D0C/sY/qDl4BJVVUZPWpUWpuPyu3XLWz4s83/rKqqnD9rTrqfrtKxadNhNRtArdnH9BdXAMm6detyz7335soPvu%2Bw%2Bx%2B6qakpV3xgSjbfc0/Wr1/f6HEA6so%2Bpr8IQLJ48eKcdMLInH9OW6NHeUHnn9OWk04YmcWLFzd6FIC6so/pLwKwcF1dXbn55ptz6eT3HPLPkhw346pc8bklNZ7sPxx99NG5ZPJ7ctNNN6Wrq6tu3wegkexj%2BpMALNzatWvT29ub6RPG9/msp/fsydV/fUN%2BZdqlOW7cpJzw3gvzu/M/k22d3X0%2B%2B6IJ49Pb25u77rqrz2fBS2lvT8aOTQYNSkaMSCZPTrZsafRUlKCW%2BzhJvvrNuzPhijkZPuH9OertE3LPv/5bTc61j48MArBwHR0dOfE1I2rySbNdT%2B3O5i0P509%2B78J0/N3n85X2P82//vujmfRH8/p89sjWYTlhxPB0dHT0%2BSx4Kd/6VnLZZcm6dck3vpHs2ZO8613Jk082ejKOdLXcx0nyZM9T%2BR///cy0z/i9mpz3DPv4yOBHwRVu44YNGXPaKTU5a/Bxr8o/LWp/ztcWffx/5ZyPzMqPfrwj//W1I/p0/pg3n5KNGzf06Qw4kDVrnvv7G2/cfyWwoyP5n/%2BzMTNRhlru4yS56N3nJUl%2BsP3HNTvzGfbxK58rgAWrqiqbNm/K6DfXbuH8sseeeDJNTU0ZMuhVfT5r9GmnZFPHprhzEf3pscf2/zp0aGPn4MjWH/u4luzjVz4BWLDOzs7s3PlYzjzpDXU5/6ndvZn9hRtz4bvOzfGv6nsAnnXyG/OznTvT2dlZg%2BngwKoq%2BfjHk1/7teSssxo9DUeyeu/jWrOPX/m8BFywXbt2JUkGDWyp%2BdlP79mTD36yPfv27ctfX/Wxmpx53C/m3LBhQ0aOHFmTMylPS0tLTj/99IN67Mc%2Blnz3u8nddx/4sQ8%2B%2BGB6enr6OB2l2rZtW5L67ON6eGYf%2BzP/yiUAC9bb25skaT722Jqe%2B/SePZn6xwuzdduPc%2BfnP1WTq39J0nzM/j%2BuF1xwQU3Oo0yjRo3Kpk2bDvi4yy9Pbr01%2Bfa3kxNPPPC506ZNy%2BbNm2swISWr9T6ul2f28e7duxs8CYdKABasubk5SdL79NM1O/OZ%2BPveI49m7ec/lWGDj6/Z2b179iRJVq9e7Qogh6yl5aWvsFTV/vi75Zbkm99MTjrp4M5dvny5qyEcsm3btuW9731vTfdxPT2zjwcMGNDgSThUArBgAwcOTJI8vqs2f2nt2bM3vz3nz7Jpy8P52p9fk7379uXH3T9Nkgw9flCfn9k%2B8Ys5x44dmxEj%2BvaJYngxl12WrFiRrFq1/16AP/7FBygHD05eqh0P9mVleCEn/uIyc632cZL89LHH86Of7Mi2rv33Yt3yo0eSJK8d9uq8dljfPtX0zD4%2B0BMqDl8CsGDDhw/PkCGDc//WH2bKub/a5/Me6ezMrf%2B8Lkky6ndmPOffrf3rT%2BXc0W/p0/n3ff8HefWQIRk%2BfHifzoGX8oUv7P/13HOf%2B/Ubb0w%2B9KH%2BnoZS1HofJ8mtd//ffPjP/uLZ33/wT/ffpuuTH5mWeRdf1Kez7eNXvqbKZ7iL9o7x43P83t356qc%2B2ehRDmjK1fPzxLEtueOOOxs9CkDN2cf0J7eBKdyYsWOzccvDjR7joGx86OGMGTO20WMA1IV9TH8SgIVra2vLIz/Zke1dff95vfW0vas7j%2B7oTFtbW6NHAagL%2B5j%2BJAALN378%2BDQ3N2fZmrWNHuUlfWnN2jQ3N2fcuHGNHgWg
Luxj%2BpMALFxra2ve//73Z8nK27Jv375Gj/OC9u7dm6Urb8vUqVPT2tra6HEA6sI%2Bpj8JQDJjxoxsfXR7bl/f0ehRXtDt6zuy9dHtmTFjxoEfDPAKZh/TX3wKmFRVlbbRozPs2Kbcft3CNDU1NXqkZ1VVlfNnzclP9yQbOzoOq9kAas0%2Bpr%2B4Akiampoyb/783LFhc5bffni992TZmjtzx4bNmTtvnmUDHPHsY/qLK4A8a9q0afn66q/lvuVLMrJ1WKPHybbO7pw17dL8xsSJWbZsWaPHAeg39jH1JgB5Vnd3d84844ycferJWfnpuQ19hldVVSZdNS8bvrc19z/wQIYNa/wCBOgv9jH15iVgnjVs2LAsWbo0X7t7Xa752%2BUNnWX%2BDcuy%2Bjvrs/T66y0boDj2MfUmAHmOyZMnZ8GCBZn/xWX57IqvNGSGz674Sq65YXkWLlyYSZMmNWQGgEazj6mnYxo9AIef2bNn54knnshV7e15fNeuzP3I9H55%2BaGqqsy/YVmuuWF5Zs%2BenU984hN1/54AhzP7mHoRgDxPU1NTFixYkEGDBmXOnDnZvOXfsuTqy%2Bv6RuRtnd259FOLsvo769Pe3m7ZAMQ%2Bpn58CISXtGrVqlzy0Y9md09Prrvykkyf8I6aPvusqirL1tyZWZ9bmgEtLVl6/fVeZgB4AfYxtSQAOaDu7u7MnDkzK1asyHljR%2BWKD0zJhLeNyVFHHfpbSPfu3Zvb13fkL798S%2B7YsDnTpk3LokWLMnTo0BpODnBksY%2BpFQHIQVu1alXmz5uXzffck5NOGJlLJr8nF00Y/7Jeitje1Z0vrVmbpStvy9ZHt2fUW9%2BaefPnZ%2BLEiXWcHODIYh/TVwKQl6Wqqqxfvz6LFy/OTTfdlN7e3pz4mhFpO%2B1NGX3aKTnr5DfmuIEtaT7mmPTu2ZMndvXkvu//IJu2PJyNDz2cR3d0ZsCAAZk6dWpmzJiRs88%2B2x3lAQ6BfUxfCEAOWVdXV%2B66665s3LgxHR0b09HRkZ07H3ve44YMGZy2traMGTM2bW1tGTduXFpbWxswMcCRyT7m5RKA1ExVVens7MyGDRtywQUXZPXq1Rk7dmyGDx/uWSVAP7KPORC3gaFmmpqaMmLEiIwcOTJJMnLkyIwYMaLBUwGUxz7mQPwkEACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAAIDCCEAAgMIIQACAwghAaq6lpSWjRo1KS0tLo0cBKJp9zItpqqqqavQQAAD0H1cAAQAKIwABAAojAAEACiMAAQAKIwABAAojAKmZL3wh%2BZVfSY4/fv8/b3978vWvN3oqgLLYxRwMt4GhZr72teToo5NTTtn/%2B7//%2B%2BQzn0k2b07OPLOxswGUwi7mYAhA6mro0P2L5yMfafQkAOWyi/llxzR6AI5Me/cm//iPyZNP7n/5AYD%2BZxfzYgQgNfUv/7J/yTz1VHLccckttyRnnNHoqQDKYhdzIF4CpqZ6e5Mf/SjZuTP5yleSL34x%2Bda3LB6A/mQXcyACkLo677zkTW9Kli5t9CQA5bKL%2BWVuA0NdVVWye3ejpwAom13ML/MeQGpmzpzk3e9OXv/65PHHky9/OfnmN5M1axo9GUA57GIOhgCkZn7yk%2BSii5Lt25PBg/ffiHTNmuSd72z0ZADlsIs5GN4DCABQGO8BBAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwApOYefPDBjB49Og8%2B%2BGCjRwEomn3MixGA1FxPT082b96cnp6eRo8CUDT7mBcjAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAA
QAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAqZmqqrJjx45s27YtSbJt27bs2LEjVVU1eDKAstjHHEhT5U8Dh6irqytr165NR0dHNm7ckE2bNmXnzsee97ghQwZn9OjRGTNmbNra2jJ%2B/Pi0trY2YGKAI5N9zMslAHlZqqrKunXrsnjx4tx8883p7e3Nia8ZkTGnnZLRbz4lZ570hgwa2JLmY49N79NP5/FdPbl/6w%2Bz6aGHs3HLw3nkJzvS3NycqVOnZsaMGTnnnHPS1NTU6P8sgFcc%2B5i%2BEIActFWrVmXe3Lm55957c9IJI3Pp5Pdk%2BoTxGdk67KDP2N7VnWVr1mbJytuy9dHtGfXWt2be/PmZOHFiHScHOLLYx/SVAOSAuru7M3PmzKxYsSLnjR2VKz/4vpx/TluOOurQ30K6b9%2B%2B3L6%2BI5/7h6/mjg2bc%2BGFF2bRokUZNuzglxdAaexjakUA8pJWrlyZSy%2B5JL1P9eS6Ky/NtPPH1/QlgqqqsmzNnZn1uaUZ0NKSJUuXZvLkyTU7H%2BBIYR9TSz4FzAuqqioLFy7MlClTcvapJ%2Be%2B5UsyfcI7av7%2BkKamplz07vNy/4olOfvUkzNlypS0t7f7pBrAL9jH1IMrgDxPVVWZM2dOrr322sy9eHo%2B%2BeFp/fLG4KqqMv%2BGZbnmhuWZPXt2FixY4A3JQNHsY%2BrlmEYPwOGnvb091157bT5z%2Be/nDy78zX77vk1NTZl38UUZNHBgrmpvz6BBgzJ79ux%2B%2B/4Ahxv7mHpxBZDnWLlyZaZMmZK5F0/P3I9Mb9gc8774pVxzw/KsXLkykyZNatgcAI1iH1NPApBndXd358wzzsjZp56clZ%2Be29DL/VVVZdJV87Lhe1tz/wMP%2BDQaUBT7mHrzIRCeNXPmzPQ%2B1ZMlV1/e8Pd6NDU1ZcnVM7O7pyezZs1q6CwA/c0%2Bpt4EIEn231R0xYoVue7KS1/WjUTr6XXDh%2BW6Ky/J8uXLc%2ButtzZ6HIB%2BYR/TH7wETKqqyuhRo9LafFRuv25hw59t/mdVVeX8WXPS/XSVjk2bDqvZAGrNPqa/uAJI1q1bl3vuvTdXfvB9h93/0E1NTbniA1Oy%2BZ57sn79%2BkaPA1BX9jH9RQCSxYsX56QTRub8c9oaPcoLOv%2Bctpx0wsgsXry40aMA1JV9TH8RgIXr6urKzTffnEsnv%2BeQf5bkuBlX5YrPLanxZP/h6KOPziWT35ObbropXV1ddfs%2BAI1kH9OfBGDh1q5dm97e3kyfML7mZ19y7XU56u0T8pdfvqXPZ100YXx6e3tz11131WAyeHHt7cnYscmgQcmIEcnkycmWLY2eihLUeh/P%2B%2BKXcvrUi3PcuEkZ%2Bq7fyjsv/0TW3/9Qn8%2B1j48MArBwHR0dOfE1I2r%2BSbOV3/o/%2BX8PbMnranTuyNZhOWHE8HR0dNTkPHgx3/pWctllybp1yTe%2BkezZk7zrXcmTTzZ6Mo50td7Hp77%2BxPzVH8zId5ctyT8v%2BfO8YeRrcv6sOen82c4%2BnWsfHxkEYOE2btiQMaedUtMzH93Rlcs/uzjL5v1Rjj3m6JqdO%2BbNp2Tjxg01Ow9eyJo1yYc%2BlJx5ZvKWtyQ33pj86EeJv%2Buot1rv4wvPH5fzzh6dk08YmTNPfmP%2BYtZH8/Mnd%2BW7D2/t89n28SufACxYVVXZtHlTRr%2B5dgtn3759%2BZ1rPpM/nPZbOfPkN9bs3CQZfdop2dSxKe5cRH967LH9vw4d2tg5OLLVYx//Z71PP53rV349g497Vd7y307u83n28SvfMY0egMbp7OzMzp2P5cyT3lCzMz/1pZtzzNFHZ%2Bb7a//zIs86%2BY352c6d6ezszIgRI2p%2BPvyyqko%2B/vHk134tOeusRk/Dkawe%2BzhJVt%2B9Ph/8ZHt2PbU7I4cNzT9dtzCtQwb3%2BVz7%2BJVPABZs165dSZJBA1tqcl7HQ9/LoptXpePvPl%2BX%2B1cd94s5N2zYkJEjR9b8fMrQ0tKS008//aAe%2B7GPJd/9bnL33Qd%2B7IMPPpienp4%2BTkeptm3blqR2%2B/gZ49reks1/vzhdjz2Wv1n19Uz9k4VZ98XrMmLokD6d%2B8w%2B9mf%2BlUsAFqy3tzdJ0nzssTU575/vuS87frYzb5hy0bNf27t3X/7wr/4m1910S7be8r/7dH7zMfv/uF5wwQV9OoeyjRo1Kps2bTrg4y6/PLn11uTb305OPPHA506bNi2bN2%2BuwYSUrFb7%2BBmvavkvOeX1r8spr39d3nbW6Tn1tz%2BcG762JrN/9wN9OveZfbx79%2B5ajEkDCMCCNTc3J9n/3pBauOjd78h5Y0c952sTrvjjTH/3O/J7v/HOPp/fu2dPkmT16tWuAHLIWlpe%2BgpLVe2Pv1tuSb75zeSkkw7u3OXLl7sawiHbtm1b3vve99ZsH7%2BYqqqyuwbf45l9PGDAgD6fRWMIwIINHDgwSfL4rtr8pTVs8PEZNvj453zt2GOOzmuHvjqnveH1fT7/iV/MOXbsWO85oW4uuyxZsSJZtWr/vQB//OP9Xx88OHmpdjzYl5XhhZz4i8vMtdrHT/Y8lQV/9w%2BZ%2BOtvy8hhQ9P9859n8VdW55HOrvz2%2BF/v8/nP7OMDPaHi8CUACzZ8%2BPAMGTI492/9Yaac%2B6uNHueA7vv%2BD/LqIUMyfPjwRo/CEewLX9j/67nnPvfrN964//YwUA%2B13sdHH3VUtvzw3/Nbt92Rrsd%2BnmGDB2Xs6afm21/485rcocE%2BfuVrqnyGu2jvGD8%2Bx%2B/dna9%2B6pONHuWAplw9P08c25I77riz0aMA1Jx9TH9yH8DCjRk7Nhu3PNzoMQ7KxocezpgxYxs9BkBd2Mf0JwFYuLa2tjzykx3Z3tXd6FFe0vau7jy6ozNtbW2NHgWgLuxj%2BpMALNz48ePT3NycZWvWNnqUl/SlNWvT3NyccePGNXoUgLqwj%2BlPArBwra2tef/7358lK2/Lvn37Gj3OC9q7d2%2BWrrwtU6dOTWtra6PHAagL%2B5j%2BJADJjBkzsvXR7bl9/eH50%2B5vX9%2BRrY9uz4wZMxo9CkBd2cf0F58CJlVVpW306Aw7tim3X7ewLj/G7VBVVZXzZ83JT/ckGzs6DqvZAGrNPqa/uAJImpqaMm/%2B/NyxYXOW3354vfdk2Zo7c8eGzZk7b55lAxzx7GP6iyuAPGvatGn5%2Buqv5b7lSzKydVijx8m2zu6cNe3S/MbEiVm2bFmjxwHoN/Yx9SYAeVZ3d3fOPOOMnH3qyVn56bkNfYZXVVUmXTUvG763Nfc/8ECGDWv8AgToL/Yx9eYlYJ41bNiwLFm6NF%2B7e12u%2Bdvl
DZ1l/g3Lsvo767P0%2BustG6A49jH1JgB5jsmTJ2fBggWZ/8Vl%2BeyKrzRkhs%2Bu%2BEquuWF5Fi5cmEmTJjVkBoBGs4%2Bpp2MaPQCHn9mzZ%2BeJJ57IVe3teXzXrsz9yPR%2BefmhqqrMv2FZrrlheWbPnp1PfOITdf%2BeAIcz%2B5h6EYA8T1NTUxYsWJBBgwZlzpw52bzl37Lk6svr%2BkbkbZ3dufRTi7L6O%2BvT3t5u2QDEPqZ%2BfAiEl7Rq1apc8tGPZndPT6678pJMn/COmj77rKoqy9bcmVmfW5oBLS1Zev31XmYAeAH2MbUkADmg7u7uzJw5MytWrMh5Y0flig9MyYS3jclRRx36W0j37t2b29d35C%2B/fEvu2LA506ZNy6JFizJ06NAaTg5wZLGPqRUByEFbtWpV5s%2Bbl8333JOTThiZSya/JxdNGP%2ByXorY3tWdL61Zm6Urb8vWR7dn1Fvfmnnz52fixIl1nBzgyGIf01cCkJelqqqsX78%2Bixcvzk033ZTe3t6c%2BJoRaTvtTRl92ik56%2BQ35riBLWk%2B5pj07tmTJ3b15L7v/yCbtjycjQ89nEd3dGbAgAGZOnVqZsyYkbPPPtsd5QEOgX1MXwhADllXV1fuuuuubNy4MR0dG9PR0ZGdOx973uOGDBmctra2jBkzNm1tbRk3blxaW1sbMDHAkck%2B5uUSgNRMVVXp7OxMT09Pdu/enQEDBqSlpSXDhw/3rBKgH9nHHIgABAAojJ8EAgBQGAEIAFAYAQgAUBgBCABQGAEIAFAYAQgAUBgBCABQGAEIAFAYAQgAUBgBCABQGAEIAFAYAQgAUBgBCABQGAEIAFAYAQgAUBgBCABQGAEIAFAYAQgAUBgBCABQGAEIAFAYAQgAUBgBCABQGAEIAFAYAQgAUBgBCABQGAEIAFAYAQgAUBgBCABQGAEIAFAYAQgAUBgBCABQGAEIAFAYAQgAUBgBCABQGAEIAFCY/w8pjNcGaiB0XgAAAABJRU5ErkJggg%3D%3D",
null,
"https://www.lmfdb.org/static/images/t2r.png",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAoAAAAHgCAYAAAA10dzkAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/d3fzzAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAga0lEQVR4nO3df5TV9X3n8df4Y9ghIgQGEqLZRONq/LFNYECTbbtHiIkkNfxI25AItmlio4sRNK010DaApzAmaZpKUwI21nYDNNqTCIZjsFFMUrMLCwOa%2BovGhiRVSJiZBKMyMgLf/YNoa/wBMvfORT6PxzkeDuM9n3l7Dr553u%2B99ztNVVVVAQCgGEc1egAAAPqXAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKIwABAAojAAEACiMAAQAKMwxjR6AI0dVVens7MyuXbvS29ub5ubmDBw4MMOHD09TU1OjxwMohn3MgQhADllXV1fWrl2bjo6ObNy4IZs2bcrOnY8973FDhgzO6NGjM2bM2LS1tWX8%2BPFpbW1twMQARyb7mJerqaqqqtFD8MpRVVXWrVuXxYsX5%2Babb05vb29OfM2IjDntlIx%2B8yk586Q3ZNDAljQfe2x6n346j%2B/qyf1bf5hNDz2cjVseziM/2ZHm5uZMnTo1M2bMyDnnnOPZKMAhsI/pCwHIQVu1alXmzZ2be%2B69NyedMDKXTn5Ppk8Yn5Gtww76jO1d3Vm2Zm2WrLwtWx/dnlFvfWvmzZ%2BfiRMn1nFygCOLfUxfCUAOqLu7OzNnzsyKFSty3thRufKD78v557TlqKMO/TNE%2B/bty%2B3rO/K5f/hq7tiwORdeeGEWLVqUYcMOfnkBlMY%2BplYEIC9p5cqVufSSS9L7VE%2Buu/LSTDt/fE1fIqiqKsvW3JlZn1uaAS0tWbJ0aSZPnlyz8wGOFPYxteQ2MLygqqqycOHCTJkyJWefenLuW74k0ye8o%2BbvD2lqaspF7z4v969YkrNPPTlTpkxJe3t7PC8B2M8%2Bph5cAeR5qqrKnDlzcu2112buxdPzyQ9P65c3BldVlfk3LMs1NyzP7Nmzs2DBAm9IBopmH1MvbgPD87S3t%2Bfaa6/NZy7//fzBhb/Zb9%2B3qakp8y6%2BKIMGDsxV7e0ZNGhQZs%2Be3W/fH%2BBwYx9TL64A8hwrV67MlClTMvfi6Zn7kekNm2PeF7%2BUa25YnpUrV2bSpEkNmwOgUexj6kkA8qzu7u6cecYZOfvUk7Py03Mberm/qqpMumpeNnxva%2B5/4AGfRgOKYh9Tbz4EwrNmzpyZ3qd6suTqyxv%2BXo%2BmpqYsuXpmdvf0ZNasWQ2dBaC/2cfUmwAkyf6biq5YsSLXXXnpy7qRaD29bviwXHflJVm%2BfHluvfXWRo8D0C/sY/qDl4BJVVUZPWpUWpuPyu3XLWz4s83/rKqqnD9rTrqfrtKxadNhNRtArdnH9BdXAMm6detyz7335soPvu%2Bw%2Bx%2B6qakpV3xgSjbfc0/Wr1/f6HEA6so%2Bpr8IQLJ48eKcdMLInH9OW6NHeUHnn9OWk04YmcWLFzd6FIC6so/pLwKwcF1dXbn55ptz6eT3HPLPkhw346pc8bklNZ7sPxx99NG5ZPJ7ctNNN6Wrq6tu3wegkexj%2BpMALNzatWvT29ub6RPG9/msp/fsydV/fUN%2BZdqlOW7cpJzw3gvzu/M/k22d3X0%2B%2B6IJ49Pb25u77rqrz2fBS2lvT8aOTQYNSkaMSCZPTrZsafRUlKCW%2BzhJvvrNuzPhijkZPuH9OertE3LPv/5bTc61j48MArBwHR0dOfE1I2rySbNdT%2B3O5i0P509%2B78J0/N3n85X2P82//vujmfRH8/p89sjWYTlhxPB0dHT0%2BSx4Kd/6VnLZZcm6dck3vpHs2ZO8613Jk082ejKOdLXcx0nyZM9T%2BR///cy0z/i9mpz3DPv4yOBHwRVu44YNGXPaKTU5a/Bxr8o/LWp/ztcWffx/5ZyPzMqPfrwj//W1I/p0/pg3n5KNGzf06Qw4kDVrnvv7G2/cfyWwoyP5n/%2BzMTNRhlru4yS56N3nJUl%2BsP3HNTvzGfbxK58rgAWrqiqbNm/K6DfXbuH8sseeeDJNTU0ZMuhVfT5r9GmnZFPHprhzEf3pscf2/zp0aGPn4MjWH/u4luzjVz4BWLDOzs7s3PlYzjzpDXU5/6ndvZn9hRtz4bvOzfGv6nsAnnXyG/OznTvT2dlZg%2BngwKoq%2BfjHk1/7teSssxo9DUeyeu/jWrOPX/m8BFywXbt2JUkGDWyp%2BdlP79mTD36yPfv27ctfX/Wxmpx53C/m3LBhQ0aOHFmTMylPS0tLTj/99IN67Mc%2Blnz3u8nddx/4sQ8%2B%2BGB6enr6OB2l2rZtW5L67ON6eGYf%2BzP/yiUAC9bb25skaT722Jqe%2B/SePZn6xwuzdduPc%2BfnP1WTq39J0nzM/j%2BuF1xwQU3Oo0yjRo3Kpk2bDvi4yy9Pbr01%2Bfa3kxNPPPC506ZNy%2BbNm2swISWr9T6ul2f28e7duxs8CYdKABasubk5SdL79NM1O/OZ%2BPveI49m7ec/lWGDj6/Z2b179iRJVq9e7Qogh6yl5aWvsFTV/vi75Zbkm99MTjrp4M5dvny5qyEcsm3btuW9731vTfdxPT2zjwcMGNDgSThUArBgAwcOTJI8vqs2f2nt2bM3vz3nz7Jpy8P52p9fk7379uXH3T9Nkgw9flCfn9k%2B8Ys5x44dmxEj%2BvaJYngxl12WrFiRrFq1/16AP/7FBygHD05eqh0P9mVleCEn/uIyc632cZL89LHH86Of7Mi2rv33Yt3yo0eSJK8d9uq8dljfPtX0zD4%2B0BMqDl8CsGDDhw/PkCGDc//WH2bKub/a5/Me6ezMrf%2B8Lkky6ndmPOffrf3rT%2BXc0W/p0/n3ff8HefWQIRk%2BfHifzoGX8oUv7P/13HOf%2B/Ubb0w%2B9KH%2BnoZS1HofJ8mtd//ffPjP/uLZ33/wT/ffpuuTH5mWeRdf1Kez7eNXvqbKZ7iL9o7x43P83t356qc%2B2ehRDmjK1fPzxLEtueOOOxs9CkDN2cf0J7eBKdyYsWOzccvDjR7joGx86OGMGTO20WMA1IV9TH8SgIVra2vLIz/Zke1dff95vfW0vas7j%2B7oTFtbW6NHAagL%2B5j%2BJAALN378%2BDQ3N2fZmrWNHuUlfWnN2jQ3N2fcuHGNHgWg
http://www.yaldex.com/game-programming/0131020099_ch18lev1sec1.html
"Free JavaScript Editor Ajax Editor \n\nMain Page\n\n### Types of Textures\n\nBefore we delve into specific texture mapping techniques, we must start by classifying textures and their uses. Textures is a very broad topic, and just using the term alone can result in misunderstandings.\n\nTextures can be classified as explicit or procedural. An explicit texture map consists of a regular bitmap, which we can create with a paint program, scan from a picture, and so on. It is very easy to apply and has almost no CPU cost. We just have to provide the texture coordinates, and the graphics subsystem will take care of the texturing process. On the other hand, a procedural texture is the output of a computer program that computes the texture map. Marble, fire, and many other materials can be decomposed in some primitive equations and functions, which can be implemented as a program. Procedural textures have several benefits over explicit textures. They are resolution independent because we can zoom into the texture and compute the details. Because they are defined by mathematical functions that include some randomness, they are usually not as repetitive as explicit maps. On the other hand, they have a significant performance hit. Whether they are executed on the CPU or on a dedicated shader platform, procedural maps will take up some system resources. This chapter will focus almost exclusively on explicit textures, whereas procedural maps are covered in Chapter 21, \"Procedural Techniques.\"\n\nAnother interesting classification can be established between static and dynamic texture maps. The difference between them is that dynamic maps are recomputed in real time, whereas static maps are just created at boot time. Both explicit and procedural textures can be static or dynamic. The most common type of texture map is both static and explicit. Games like Quake used a sequence of looping fire bitmaps to create torches, which is a good example of a dynamic and explicit technique. Marble and procedural fire would be adequate examples of static and dynamic procedural maps, respectively.\n\nDynamic, explicit textures are frequently encoded by storing all the frames in a single, large texture map, and somehow computing the texture coordinates automatically from the current timer value. So, if the texture holds NxN frames, and the speed (in frames per second) at which we want to animate the map is S, the following pseudocode computes texturing coordinates:\n\n```current_frame=(S/current_time) % (N*N)\n\nrow=current_frame/N\ncolumn=current_frame % N\nu1=(1/N)*row\nu2=(1/N)*(row+1)\nv1=(1/N)*column\nv2=(1/N)*(column+1)\n```\n\nIn the preceding code, we are cycling through the texture map, using subimages as the actual texture map. Notice that I assume each image occupies a square zone.\n\n#### Texture Mapping\n\nWhichever the type, textures can be 1D, 2D, and 3D data sets. 2D maps are the most widely used because they can be represented with bitmaps. 3D textures are also called volume textures. They appear in fields like surgery to display 3D medical images and have slowly appeared on game-oriented hardware. They take up huge amounts of memory, and thus their use must be very limited. A 256x256x256 3D image with each texel representing a 256-color value (such as a grayscale image) can take up as much as 16MB. 1D maps have become increasingly popular over time as a way to implement a color translation table. 
The texture coordinate (in the range 0..1) is used as an index to a 1D table, where different colors are stored. A good example of this technique is cel shading, which produces quite convincing cartoonlike renderings. Cel shading is discussed in Chapter 17, "Shading."

Now, let's go back to our classic 2D texture maps. Because objects are inherently three dimensional, we need a way to precisely specify how the texture map should stretch and wrap the object, so we can place it. This process is called texture mapping.

In the 2D case, texture mapping is a function that goes from X,Y,Z to U,V, which are the mapping coordinates. This way it defines the correspondence between vertices on the geometry and texels in the map. For example, the following function is a proper texture mapping function:

```
U = X + Z
V = Y
```

However, things are rarely this simple, and more involved functions must be used. Following is a survey of classic mapping functions.

#### XYZ Mapping

This function is used for 3D textures, especially procedural textures. It directly maps space to texture coordinates, possibly with translations, rotations, and scalings added on top. The general equation is as follows:

```
U = X
V = Y
W = Z
```

A scaled function would look something like this:

```
U = X*sx
V = Y*sy
W = Z*sz
```

#### Cylindrical Mapping

Another classic texture mapping function involves using a cylinder as the projector. Imagine a cylinder along the Y axis, wrapping the object with the material. That cylinder would be defined by the following parametric equations:

```
X = r cos(v * 2 pi)
Y = u * h
Z = r sin(v * 2 pi)
```

where r and h determine the cylinder's radius and height, respectively. Now, we can invert the preceding equations to solve for U and V, which are the mapping coordinates. In that case, the resulting equations are:

```
V = atan2(Z, X) / (2 pi)
U = Y / h
```

Notice that this mapping is computed with the cylinder along the Y axis. Euclidean transforms could be applied to ensure that the cylinder lies in an arbitrary axis.

#### Spherical Mapping

Mapping onto a sphere is somewhat similar to using a cylinder. All we have to do is use the parametric equations of the sphere to construct inverse equations that represent the texture mapping. Assume a sphere defined by the following:

```
x = r sin(v pi) cos(u 2 pi)
y = r sin(v pi) sin(u 2 pi)
z = r cos(v pi)
```

Notice how we have used U and V (in the texture mapping sense) as the parameters for the sphere; both are in the range 0..1. Then, reversing the equations after half a page of algebra you get:

```
u = ( arccos(x/(r sin(v pi))) ) / (2 pi)
v = arccos(z/r) / pi
```

The preceding formula for u assumes that the angle can be recovered over the full range 0..2Pi, but software implementations of arccos return values in the range 0..Pi, so the code must use the sign of y to select the correct half of the sphere. Here is the source code for a spherical texture mapping function:

```
#include <math.h>

#define PI    3.141592654
#define TWOPI 6.283185308

/* Map a point (x, y, z) on a sphere of the given radius to (u, v) in [0, 1]. */
void SphereMap(double x, double y, double z, double radius, double *u, double *v)
{
    *v = acos(z / radius) / PI;
    if (y >= 0)
        *u = acos(x / (radius * sin(PI * (*v)))) / TWOPI;
    else
        /* y < 0 means the angle lies in (pi, 2 pi), so mirror it. */
        *u = (TWOPI - acos(x / (radius * sin(PI * (*v))))) / TWOPI;
}
```

Notice how spherical mapping causes distortion to appear near the two poles of the sphere. To prevent this phenomenon, called pinching, we need to preprocess the texture, either by using a commercial image processing software package or by implementing the process ourselves.
Simply put, we need to convert the texture from polar to rectangular coordinates. Tools like Photoshop and Paint Shop Pro do this type of processing.

#### Texture Mapping a Triangle

The methods discussed in the previous sections are used for full objects that more or less conform to the mapping shapes, such as a planet or a banana. Most objects, however, have complex shapes that cannot be easily assigned to a sphere or cylinder. In these cases, we need a general method that offers triangle-level control over the texturing process. We need to assign texture coordinates manually, so the maps wrap the object precisely and accurately.

Given a triangle p1, p2, p3 with texture coordinates (u1,v1), (u2,v2), and (u3,v3), deciding the texture coordinates for a point p (in world coordinates) inside the triangle is just a bilinear interpolation. We start by building two edge vectors, from p1 to p2 and from p1 to p3, and express the vector from p1 to p as a linear combination of them. The two coefficients of that combination are the blending weights that, once applied to the per-vertex texture coordinates, give us the U,V coordinates for the point p. See the example in Figure 18.1.

##### Figure 18.1. Texture mapping triangles.
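To make that interpolation concrete, here is a small Python sketch. It is my own illustration, not code from the book: the function name, the tuple-based point representation, and the normal-equation solve are all assumptions of the example. It expresses the vector from p1 to p as a combination of the two edge vectors and blends the per-vertex texture coordinates with the same weights.

```python
# Sketch: recover (u, v) for a point p inside triangle (p1, p2, p3),
# given per-vertex texture coordinates uv1, uv2, uv3. Points are (x, y, z) tuples.
def interpolate_uv(p1, p2, p3, uv1, uv2, uv3, p):
    e1 = [b - a for a, b in zip(p1, p2)]   # edge p1 -> p2
    e2 = [b - a for a, b in zip(p1, p3)]   # edge p1 -> p3
    d  = [b - a for a, b in zip(p1, p)]    # p1 -> query point

    dot = lambda a, b: sum(x * y for x, y in zip(a, b))

    # Solve d = s*e1 + t*e2 via the 2x2 normal equations; for a point lying
    # in the triangle's plane this is exact.
    a11, a12, a22 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
    b1, b2 = dot(d, e1), dot(d, e2)
    det = a11 * a22 - a12 * a12
    s = (b1 * a22 - b2 * a12) / det
    t = (b2 * a11 - b1 * a12) / det

    # The same weights blend the per-vertex texture coordinates.
    u = uv1[0] + s * (uv2[0] - uv1[0]) + t * (uv3[0] - uv1[0])
    v = uv1[1] + s * (uv2[1] - uv1[1]) + t * (uv3[1] - uv1[1])
    return u, v
```

At a vertex the weights reduce to 0 or 1, so the function returns that vertex's own texture coordinates, as expected.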
"On the other hand, rasterizers usually texture pixels in 2D after projection, but not full 3D points. Thus, a different mapping approach is followed. We start with the triangle defined not by 3D points, but by 2D, projected coordinates. Then, the triangle is painted one scanline at a time, drawing the horizontal line of pixels and texturing them along the way. Clearly, we need some kind of depth information to distinguish, from a texturing standpoint, two triangles that look the same onscreen but are not the same in 3D (see Figure 18.2). Thus, the 2D triangle coordinates are augmented with additional data that is then used to perform perspective-correct texture mapping, taking depth into consideration. Because these computations are performed on the hardware, they are beyond the duties of a game programmer these days.\n\n##### Figure 18.2. Two distinct triangles that share the same projection.",
https://cvgmt.sns.it/paper/5151/
"## On minimizers of the maximal distance functional for a planar convex closed smooth curve\n\ncreated by teplitskaya1 on 06 Jun 2021\n\n[BibTeX]\n\npreprint\n\nInserted: 6 jun 2021\nLast Updated: 6 jun 2021\n\nYear: 2020\n\nArXiv: 2011.10463 PDF\n\nAbstract:\n\nFix a compact $M \\subset \\mathbb{R}^2$ and $r>0$. A minimizer of the maximal distance functional is a connected set $\\Sigma$ of the minimal length, such that $max_{y \\in M} dist(y,\\Sigma) \\leq r.$ The problem of finding maximal distance minimizers is connected to the Steiner tree problem. In this paper we consider the case of a convex closed curve $M$, with the minimal radius of curvature greater than $r$ (it implies that $M$ is smooth). The first part is devoted to statements on structure of $\\Sigma$: we show that the closure of an arbitrary connected component of $B_r(M) \\cap \\Sigma$ is a local Steiner tree which connects no more than five vertices. In the second part we \"derive in the picture\". Assume that the left and right neighborhoods of $y \\in M$ are contained in $r$-neighborhoods of different points $x_1$, $x_2 \\in \\Sigma$. We write conditions on the behavior of $\\Sigma$ in the neighborhoods of $x_1$ and $x_2$ under the assumption by moving $y$ along $M$."
https://www.decisionmodels.com/optspeede.htm
"# Optimising Lookups",
"Lookup Options Speeding Up Lookups Missing Values Multiple Cells Two-Dimensional Lookup Three-Dimensional Lookup\n\n## Lookup Options\n\nLookups are often the most significant factor in calculation times, and fortunately there are many ways of improving lookup calculation time.\n\nWith FastExcel Version 2 you can now use the AVLOOKUP function, which is faster than VLOOKUP and INDEX/MATCH in many circumstances.\n\n#### Make sure you have understood the options in MATCH, VLOOKUP and HLOOKUP.\n\nMATCH(lookup_value, lookup_array, match_type)\n\n• Match_type=1 returns the largest match less than or equal to lookup value if lookup array is sorted ascending. This is the default\n• Match_type=0 requests an exact match\n• Match_type=-1 returns the smallest match greater than or equal to lookup value if lookup array is sorted descending\n\nVLOOKUP(lookup_value, table_array, colnum, range_lookup)\n\n• Range_lookup=TRUE returns the largest match less than or equal to lookup value. This is the default option. Table array MUST be sorted ascending.\n• Range_lookup=FALSE requests an exact match. Table array does not need to be sorted.\n Avoid using exact match lookup if possible. If you are doing lookup’s using the exact match option the calculation time for the function is proportional to the number of cells scanned before a match is found. For lookups over large ranges this time can be very significant. Lookup time using the approximate match options of VLOOKUP, HLOOKUP, MATCH on sorted data is fast and not significantly increased by the length of the range you are looking up. (Characteristics are the same as binary search).\n\n#### VLOOKUP & MATCH with multiple matches.\n\nIf the table you are looking up contains more than one row with a value which matches your lookup value, which row gets found?\n\n• If the table is not sorted the first matching row found is returned.\n• If the table is sorted ascending then the last matching row is returned.\n\n#### VLOOKUP versus INDEX and MATCH or OFFSET.\n\n I recommend using INDEX and MATCH. VLOOKUP is slightly faster (approx. 5%), simpler and uses less memory than a combination of MATCH and INDEX or OFFSET. However the additional flexibility offered by MATCH and INDEX often allows you to make significant timesaving compared to VLOOKUP. INDEX is very fast and from Excel 97 onwards is a non-volatile function (speeds up recalculation). OFFSET is also very fast, but it’s a volatile function.\n\n#### Converting VLOOKUP to INDEX and MATCH.\n\nThese statements return the same answer:\n\nVLOOKUP(A1, Data!\\$A\\$2:\\$F\\$1000,3,False)\nINDEX(Data!\\$A\\$2:\\$F\\$1000,MATCH(A1,\\$A\\$1:\\$A\\$1000,0),3)\n\n### Speeding up Lookup’s\n\nBecause exact match lookups are so slow it’s worth looking for ways of speeding things up:\n\n#### Use FastExcel's AVLOOKUP function.\n\nAVLOOKUP is significantly faster than VLOOKUP in many circumstances.\n\n#### Use One Worksheet.\n\n If speed is critical keep Lookups and Data on the same sheet. Keep exact match lookups on the same worksheet as the data they are looking up: It’s significantly faster.\n\n#### Use Excel 2000 or later:\n\n Upgrade to Excel 2000. Excel 2000 is significantly faster than Excel 97 for exact matches. But make sure you have installed the SR1 Service Release which fixes a problem with Lookups into closed workbooks (see MSKB Q248173).\n\n#### SORT the Data Whenever Possible.\n\n SORT your data and use approximate Match. 
Whenever possible, SORT the data first (SORT is very fast) and use approximate match.

#### Minimise the Range of Cells you are Looking Up.

The smaller the range, the better. When doing exact match lookups, restrict the range of cells to be scanned to a minimum. Use dynamic range names rather than referring to a very large number of rows or columns. Sometimes you can pre-calculate a lower and upper range limit for the lookup.

#### Sorted Data with Missing Values.

Two approximate lookups are usually faster than one exact lookup. If you can sort your data but still cannot use approximate match because you can't be sure that the value you are looking up exists in the lookup range, then try this:

IF(lookup_val=INDEX(lookup_array,MATCH(lookup_val,lookup_list),1), INDEX(lookup_array,MATCH(lookup_val,lookup_list),colnum), "notexist")

This does an approximate lookup on the lookup list, and if the lookup value equals the answer in the lookup column, you have found an exact match, so redo the approximate lookup on the column you want; otherwise it's a missing value. Note that this assumes you never look up a value smaller than the smallest value in the list, so you may need to add a dummy, very small entry into the list. Two approximate matches are significantly faster than one exact match for a lookup over a large number of rows (the breakeven point is about 10-20 rows). FastExcel's AVLOOKUP function allows you to handle the missing value problem very efficiently.

#### Unsorted Data with Missing Values.

Store the result of an exact MATCH and reuse it. If you have to use exact match lookup on unsorted data and you can't be sure that the lookup value exists, you often have to handle the #N/A that gets returned if no match is found. The simplest and slowest way is to use an IF function containing two lookups:

IF(ISNA(VLOOKUP(lookupval,table,2,FALSE)), 0, VLOOKUP(lookupval,table,2,FALSE))

You can avoid the double exact lookup if you use exact MATCH once, store the result in a cell, and then test the result before doing an INDEX:

In A1: =MATCH(lookupval,lookuparray,0)
In B1: =IF(ISNA(A1), 0, INDEX(table,A1,colnum))

If you cannot use two cells, then use COUNTIF; on average it is faster than an exact match lookup:

IF(COUNTIF(lookuparray,lookupval)=0, 0, VLOOKUP(lookupval, table, 2, FALSE))

#### Exact Match Lookups returning values from Multiple Columns.

You can often reuse a stored exact MATCH many times. If you are doing exact lookups on multiple columns, you can save a lot of time using one MATCH and many INDEX statements rather than many VLOOKUPs. Add an extra column for the MATCH to store the result (stored_row). For each column use:

INDEX(Lookup_Range,stored_row,column_number)

Alternatively you can use VLOOKUP in an array formula: this example returns the values from the 4th and 2nd columns in the lookup range.

{VLOOKUP(lookupval,Lookup_Range,{4,2},FALSE)}

#### Looking Up a Set of Contiguous Rows or Columns.

You can also return many cells from one lookup operation. If you want to look up a number of contiguous columns, you can use INDEX in an array formula to return multiple columns at once (use 0 as the column number). You can also use INDEX to return multiple rows at once.
{INDEX($A$1:$J$1000,stored_row,0)}

This returns columns A to J in the stored row created by a previous MATCH.

#### Looking Up a Rectangular Block of Cells.

You can use MATCH and OFFSET to return a rectangular block of cells as a range.

#### Two-Dimensional Lookup

Multi-dimensional lookup can also be done efficiently. Two-dimensional table lookup using separate lookups on the rows and columns of a table can be efficiently done using an INDEX with two embedded MATCH functions. This example assumes a table in A1:Z1000 with column A containing the row identifier and row 1 containing the column identifier. Both the row and column identifiers are sorted ascending.

INDEX($B$2:$Z$1000,MATCH(RowLookup_Value,$A$2:$A$1000),MATCH(colLookup_value,$B$1:$Z$1))

#### Multiple-Index Lookup

In large spreadsheets you often need to look up using multiple indexes, such as looking up product volumes in a country.

The simple way to do this is to concatenate the indexes and look up using concatenated lookup values. This is inefficient when the data is sorted, for two reasons:

• Concatenating strings is a calculation-intensive operation.
• The lookup will cover a large range.

It is often more efficient to calculate a subset range for the lookup: for example, by using COUNTIF to count the number of rows for each country, calculating the first and last row for each country from the counts, and then looking up the product within that range. See the SUMIF Example or the FastExcel sample problem for an example of using this technique.

The FastExcel Version 2 AVLOOKUP function has built-in methods to easily and efficiently handle multiple-index lookup.

#### Three-Dimensional Lookup

If you need to look up the table to use as well as the row and the column, here are some techniques you can use, focussing on how to make Excel look up/choose the table.

If each table you want to look up (the third dimension) is stored as a set of range names, or as a table of text strings that represent ranges, then you may be able to use INDIRECT or CHOOSE. Using CHOOSE and range names can be a very efficient method, and it is not volatile, but it is best suited to only a small number of tables:

INDEX(CHOOSE(TableLookup_Value,TableName1,TableName2,TableName3,TableName4),MATCH(RowLookup_Value,$A$2:$A$1000),MATCH(colLookup_value,$B$1:$Z$1))

The example above dynamically uses TableLookup_Value to choose which range name (TableName1, TableName2, ...) to use for the lookup table.

INDEX(INDIRECT("Sheet" & TableLookup_Value & "!$B$2:$Z$1000"),MATCH(RowLookup_Value,$A$2:$A$1000),MATCH(colLookup_value,$B$1:$Z$1))

This example uses INDIRECT and TableLookup_Value to dynamically create the sheet name to use for the lookup table. This method has the advantage of being simple and can handle a large number of tables, but because INDIRECT is a volatile function the lookup will be calculated at every calculation, even if none of the data has changed. You could also use VLOOKUP to find the name of the sheet or the text string to use for the table, and then use INDIRECT to convert the resulting text into a range:

INDEX(INDIRECT(VLOOKUP(TableLookup_Value,TableOfTables,1)),MATCH(RowLookup_Value,$A$2:$A$1000),MATCH(colLookup_value,$B$1:$Z$1))

Another technique is to aggregate all your tables into one giant table, but with an additional column which identifies the individual tables.
You can then use the techniques for multiple-index lookup above.

#### Wildcard Lookup

AVLOOKUP, AMATCH, MATCH, VLOOKUP, and HLOOKUP allow you to use the wildcard characters ? (any single character) and * (no character or any number of characters) on alphabetic exact matches. Sometimes this can avoid multiple matches.
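The performance claims above are easy to reproduce outside Excel. The sketch below is my own Python illustration — it is not Decision Models code and not how Excel is implemented internally — showing that approximate MATCH on sorted data behaves like a binary search, and how the "two approximate lookups" pattern handles keys that may be missing from a sorted lookup column.

```python
import bisect

def approx_match(sorted_keys, value):
    """Largest entry <= value, like MATCH(value, range, 1) on ascending data."""
    i = bisect.bisect_right(sorted_keys, value) - 1   # O(log n) probes
    return i if i >= 0 else None

def lookup_with_missing(sorted_keys, rows, value, default=0):
    """Mimics IF(lookup_val = INDEX(...), INDEX(...), "notexist") on sorted data."""
    i = approx_match(sorted_keys, value)
    if i is None or sorted_keys[i] != value:   # approximate hit is not an exact match
        return default
    return rows[i]                             # second, cheap lookup into the row
```

An exact-match scan, by contrast, has to touch every cell up to the first hit, which is why the article recommends sorting whenever possible.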
https://engineering.calendar.utoronto.ca/course/rob313h1
"# ROB313H1: Introduction to Learning from Data\n\n0.50\n38.4L/25.6T\n\nThis course will introduce students to the topic of machine learning, which is key to the design of intelligent systems and gaining actionable insights from datasets that arise in computational science and engineering. The course will cover the theoretical foundations of this topic as well as computational aspects of algorithms for unsupervised and supervised learning. The topics to be covered include: The learning problem, clustering and k-means, principal component analysis, linear regression and classification, generalized linear models, bias-variance tradeoff, regularization methods, maximum likelihood estimation, kernel methods, the representer theorem, radial basis functions, support vector machines for regression and classification, an introduction to the theory of generalization, feedforward neural networks, stochastic gradient descent, ensemble learning, model selection and validation.\n\n47.2 (Fall), 51.2 (Winter), 98.4 (Full Year)"
https://www.jobilize.com/online/course/3-2-independent-and-mutually-exclusive-events-by-openstax?qcr=www.quizover.com
"# 3.2 Independent and mutually exclusive events\n\n Page 1 / 4\nProbability: Independent and Mutually Exclusive Events is part of the collection col10555 written by Barbara Illowsky and Susan Dean and explains the concept of independent events, where the probability of event A does not have any effect on the probability of event B, and mutually exclusive events, where events A and B cannot occur at the same time. The module has contributions from Roberta Bloom.\n\nIndependent and mutually exclusive do not mean the same thing.\n\n## Independent events\n\nTwo events are independent if the following are true:\n\n• $\\mathrm{P\\left(A|B\\right) = P\\left(A\\right)}$\n• $\\mathrm{P\\left(B|A\\right) = P\\left(B\\right)}$\n\nTwo events $A$ and $B$ are independent if the knowledge that one occurred does not affect the chance the other occurs. For example, the outcomes of two roles of a fair die are independent events. The outcome of the first roll does not change the probability for the outcome of the secondroll. To show two events are independent, you must show only one of the above conditions. If two events are NOT independent, then we say that they are dependent .\n\nSampling may be done with replacement or without replacement .\n\n• With replacement : If each member of a population is replaced after it is picked, then that member has the possibility of being chosen more than once. When sampling is done with replacement, then events are considered to be independent, meaning the result of the first pick will not change the probabilities for the second pick.\n• Without replacement: : When sampling is done without replacement, then each member of a population may be chosen only once. In this case, the probabilities for the second pick are affected by the result of the first pick. The events are considered to be dependent or not independent.\n\nIf it is not known whether $A$ and $B$ are independent or dependent, assume they are dependent until you can show otherwise .\n\n## Mutually exclusive events\n\n$A$ and $B$ are mutually exclusive events if they cannot occur at the same time. This means that $A$ and $B$ do not share any outcomes and $\\text{P(A AND B)}=0$ .\n\nFor example, suppose the sample space $\\mathrm{S = \\left\\{1, 2, 3, 4, 5, 6, 7, 8, 9, 10\\right\\}}$ . Let $\\mathrm{A = \\left\\{1, 2, 3, 4, 5\\right\\}, B = \\left\\{4, 5, 6, 7, 8\\right\\}}$ , and $\\mathrm{C = \\left\\{7, 9\\right\\}}$ . $\\text{A AND B}=\\left\\{4,5\\right\\}$ . $\\text{P(A AND B) =}$ $\\frac{2}{10}$ and is not equal to zero. Therefore, $A$ and $B$ are not mutually exclusive. $A$ and $C$ do not have any numbers in common so $\\text{P(A AND C) = 0}$ . Therefore, $A$ and $C$ are mutually exclusive.\n\nIf it is not known whether A and B are mutually exclusive, assume they are not until you can show otherwise .\n\nThe following examples illustrate these definitions and terms.\n\nFlip two fair coins. (This is an experiment.)\n\nThe sample space is $\\mathrm{\\left\\{HH, HT, TH, TT\\right\\}}$ where $T$ = tails and $H$ = heads. The outcomes are $\\mathrm{HH}$ , $\\mathrm{HT}$ , $\\mathrm{TH}$ , and $\\mathrm{TT}$ . The outcomes $\\mathrm{HT}$ and $\\mathrm{TH}$ are different. The $\\mathrm{HT}$ means that the first coin showed heads and the second coin showed tails. The $\\mathrm{TH}$ means that the first coin showed tails and the second coin showed heads.\n\n• Let $A$ = the event of getting at most one tail . (At most one tail means 0 or 1 tail.) Then $A$ can be written as $\\mathrm{\\left\\{HH, HT, TH\\right\\}}$ . 
The outcome $HH$ shows 0 tails; $HT$ and $TH$ each show 1 tail.
• Let $B$ = the event of getting all tails. $B$ can be written as $\{TT\}$. $B$ is the complement of $A$. So, $B = A'$. Also, $P(A) + P(B) = P(A) + P(A') = 1$.
• The probabilities for $A$ and for $B$ are $P(A) = \frac{3}{4}$ and $P(B) = \frac{1}{4}$.
• Let $C$ = the event of getting all heads. $C = \{HH\}$. Since $B = \{TT\}$, $P(B \text{ AND } C) = 0$. $B$ and $C$ are mutually exclusive. ($B$ and $C$ have no members in common because you cannot have all tails and all heads at the same time.)
• Let $D$ = the event of getting more than one tail. $D = \{TT\}$. $P(D) = \frac{1}{4}$.
• Let $E$ = the event of getting a head on the first flip. (This implies you can get either a head or tail on the second flip.) $E = \{HT, HH\}$. $P(E) = \frac{2}{4}$.
• Find the probability of getting at least one (1 or 2) tail in two flips. Let $F$ = the event of getting at least one tail in two flips. $F = \{HT, TH, TT\}$. $P(F) = \frac{3}{4}$.
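The coin-flip example can be checked by brute force. The short Python sketch below is my own addition (it is not part of the OpenStax module); the event definitions mirror the ones above, and `Fraction` keeps the probabilities exact.

```python
from fractions import Fraction
from itertools import product

space = [a + b for a, b in product("HT", repeat=2)]   # ['HH', 'HT', 'TH', 'TT']

def prob(event):
    return Fraction(sum(1 for s in space if event(s)), len(space))

A = lambda s: s.count("T") <= 1   # at most one tail
B = lambda s: s == "TT"           # all tails
C = lambda s: s == "HH"           # all heads
E = lambda s: s[0] == "H"         # head on the first flip

print(prob(A), prob(B), prob(E))          # 3/4, 1/4, 1/2
print(prob(lambda s: B(s) and C(s)))      # 0, so B and C are mutually exclusive

# Independence check: "head on the first flip" vs "tail on the second flip".
F2 = lambda s: s[1] == "T"
print(prob(lambda s: E(s) and F2(s)) == prob(E) * prob(F2))   # True, so independent
```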
https://www.flyingcoloursmaths.co.uk/blog/page/21/
"# The Flying Colours Maths Blog: Latest posts\n\n## The Dictionary of Mathematical Eponymy: Ackermann’s function\n\nFor 2019, I'm trying an experiment: every couple of weeks, writing a post about a mathematical object that a) I don't know much about and b) is named after somebody. These posts are a trial run - let me know how you find them! The chief use of the Ackermann\n\n## Ask Uncle Colin: A Strange Simultaneous Equation\n\nDear Uncle Colin, I have the simultaneous equations $3x^2 - 3y = 0$ and $3y^2 - 3x = 0$. I've worked out that $x^2 = y$ and $y^2 = x$, but then I'm stuck! - My Expertise Relatedto1 Simultaneous Equations? Not Nearly Enough! Hi, MERSENNE, and thanks for your message!\n\n## Sticks and Stones\n\nBecause I'm insufferably vain, I have a search running in my Twitter client for the words \"The Maths Behind\", in case someone mentions my book (which is, of course, available wherever good books are sold). On the minus side, it rarely is; on the plus side, the search occasionally throws\n\n## Ask Uncle Colin: A Factorial Sum\n\nDear Uncle Colin, I have been given the series $\\frac{1}{2} + \\frac{1}{3} + \\frac{1}{8} + \\frac{1}{30} + \\frac{1}{144} + ...$, which appears to have a general term of $\\frac{1}{k! + (k+1)!}$ - but I can't see how to sum that! Any ideas? - Series Underpin Maths! Hi, SUM, and thanks\n\n## A Christmas Decagon\n\nSince it's Christmas (more or less), let's treat ourselves to a colourful @solvemymaths puzzle: Have a go, if you'd like to! Below the line will be spoilers. Consistency The first and most obvious thing to ask is, is Ed's claim reasonable? At a glance, yes, it makes sense: there's a\n\n## Ask Uncle Colin: Some Ugly Trigonometry\n\nDear Uncle Colin, How do I verify the identity $\\frac{\\cos(\\theta)}{1 - \\sin(\\theta)} \\equiv \\tan(\\theta) + \\sec(\\theta)$ for $\\cos(\\theta) \\ne 0$? - Struggles Expressing Cosines As Nice Tangents Hi, SECANT, and thanks for your message! The key questions for just about any trigonometry proof are \"what's ugly?\" and \"how can I\n\n## Review: Festival of the Spoken Nerd, You Can’t Polish A Nerd\n\nMidway through the second half of You Can’t Polish A Nerd, Steve Mould neatly encapsulates the show in one line: “It creates images on your oscilloscope. It’s so cool!” Because of course you have an oscilloscope. And of course you would use it - or failing that, a balloon and\n\n## Ask Uncle Colin: A factorising trick\n\nDear Uncle Colin, How would you factorise $63x^2 + 32x - 63$? I tried the method where you multiply $a$ and $c$ (it gives you -3969) - but I'm not sure how to find factors of that that sum to 32! Factors Are Troublesomely Oversized, Urgh Hi, FATOU, and thanks\n\n## Wrong, But Useful: Episode 62\n\nIn this month’s episode of Wrong, But Useful, we’re joined by @ch_nira, who is Dr Nira Chamberlain in real life - and the World’s Most Interesting Mathematician. Nira is a professional mathematical modeller, president-designate of the IMA, and a visiting fellow at Loughborough university. We discuss Nira’s entry in the\n\n## Bone Marrow Odds\n\nI don't remember doing it - although I'd meant to for some time - but apparently I signed up for the British Bone Marrow Registry. (If you're between 17 and 40, you can sign up the next time you give blood; the more people on the register, the more likely"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91723037,"math_prob":0.82537293,"size":2930,"snap":"2020-34-2020-40","text_gpt3_token_len":806,"char_repetition_ratio":0.101845525,"word_repetition_ratio":0.023210831,"special_character_ratio":0.27576792,"punctuation_ratio":0.122073576,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9563573,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T13:34:47Z\",\"WARC-Record-ID\":\"<urn:uuid:8789beba-4ffc-4f2a-b45c-14ee039728e5>\",\"Content-Length\":\"60161\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7113dc80-e714-4089-a57c-95afd4b0d4df>\",\"WARC-Concurrent-To\":\"<urn:uuid:93f1db6f-391e-4eb1-a66d-ae215f513080>\",\"WARC-IP-Address\":\"185.20.49.4\",\"WARC-Target-URI\":\"https://www.flyingcoloursmaths.co.uk/blog/page/21/\",\"WARC-Payload-Digest\":\"sha1:ITPFXLGSOQ62FYBR5NH34YLVXWY43WCG\",\"WARC-Block-Digest\":\"sha1:T5XOF4ZSNHD4636QT2YZCYMNZTRFN3UQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400279782.77_warc_CC-MAIN-20200927121105-20200927151105-00786.warc.gz\"}"} |
https://answers.everydaycalculation.com/percent-is/20-120 | [
"Solutions by everydaycalculation.com\n\n## 20 is what percent of 120?\n\n20 of 120 is 16.67%\n\n#### Steps to solve \"what percent is 20 of 120?\"\n\n1. 20 of 120 can be written as:\n20/120\n2. To find percentage, we need to find an equivalent fraction with denominator 100. Multiply both numerator & denominator by 100\n\n20/120 × 100/100\n3. = (20 × 100/120) × 1/100 = 16.67/100\n4. Therefore, the answer is 16.67%\n\nIf you are using a calculator, simply enter 20÷120×100 which will give you 16.67 as the answer.\n\nMathStep (Works offline)",
null,
"Download our mobile app and learn how to work with percentages in your own time:"
] | [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86593837,"math_prob":0.9708145,"size":461,"snap":"2022-27-2022-33","text_gpt3_token_len":139,"char_repetition_ratio":0.18161926,"word_repetition_ratio":0.0,"special_character_ratio":0.37744033,"punctuation_ratio":0.08080808,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99291164,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-11T17:35:52Z\",\"WARC-Record-ID\":\"<urn:uuid:9ae2bc74-a903-49ae-b9aa-f21930e5a4e4>\",\"Content-Length\":\"6388\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1b73a1b4-da59-421a-99a5-149a2a7208de>\",\"WARC-Concurrent-To\":\"<urn:uuid:8e7e6065-1628-45f3-95f9-95ad2d55c29c>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/percent-is/20-120\",\"WARC-Payload-Digest\":\"sha1:I7L5O2MIPVYESWJCPIBB6OCNUDWHXFET\",\"WARC-Block-Digest\":\"sha1:AZL4DU5L444H4XUPFWS6ZAQOJPUT7U3P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571483.70_warc_CC-MAIN-20220811164257-20220811194257-00032.warc.gz\"}"} |
http://slideflix.net/doc/1895416/inverse-function-theorems | [
"",
null,
"# Inverse function theorems\n\nby user\n\non\n1\n\nviews\n\nReport\n\n#### Transcript\n\nInverse function theorems\n```(November 20, 2014)\nInverse function theorems\nPaul Garrett [email protected]\nhttp://www.math.umn.edu/egarrett/\n[This document is\nhttp://www.math.umn.edu/˜garrett/m/complex/notes 2014-15/05c inverse function.pdf]\n1.\n2.\n3.\n4.\nFixed-point lemma\nSmooth inverse function theorem\nHolomorphic inverse function theorem\nPerturbations f (z) + h · g(z)\n1. Fixed-point lemma\n[1.0.1] Lemma: Let X be a complete metric space with distance function d. Let f : X → X be a continuous\nmap uniformly contractive in the sense that there is 0 < c < 1 so that d(f x, f y) ≤ c · d(x, y) for all x, y ∈ X.\nThen f has a unique fixed point: there is a unique x ∈ X with f (x) = x. Further, limn→∞ f n y = x for any\ny ∈ X.\nProof: First, for any y ∈ X, by repeated application of the triangle inequality,\nd(y, f n (y)) ≤ d(y, f (y)) + d(f (y), f 2 (y)) + d(f 2 (y), f 3 (y)) . . . + d(f n−1 (y), f n (y))\n≤ (1 + c + c2 + . . . + cn−1 ) · d(y, f (y)) <\nd(y, f (y))\n1−c\nNext, claim that for any y ∈ X the sequence y, f (y), f 2 (y), . . . is Cauchy. Indeed, for no ≤ m ≤ n, using the\nprevious inequality,\nd(f m (y), f n (y)) ≤ cno · d(f m−no (y), f n−no (y)) ≤ cno · cm−no · d(y, f n−m (y)) ≤ cno ·\nd(y, f (y))\n1−c\nThis goes to 0 as no → +∞, so the sequence is Cauchy.\nSimilarly, for any y, z in X, with m ≤ n\nd(z, f (z)) d(f m (y), f n (z)) ≤ cm · d(y, f n−m (z)) ≤ cm · d(y, z) + d(z, f n−m (z)) ≤ cm · d(y, z) +\n1−c\nwhich goes to 0 as m → +∞. The limit is the same for y unchanged but z arbitrary. Thus, z, f (z), f 2 (z), . . .\nhas limit x for all z ∈ X. Further, taking z = x, f n (x) → x. Given ε > 0, take no large enough so that\nd(f n (x), x) < ε for n ≥ no . For n ≥ no ,\nd(f (x), x) ≤ d(f (x), f n+1 (x)) + d(f n+1 (x), x) < c · d(x, f n (x)) + ε < (c + 1) · ε\nThis holds for all ε > 0, so f (x) = x.\n///\n1\nPaul Garrett: Inverse function theorems (November 20, 2014)\n2. Smooth inverse function theorem\nThe derivative γ 0 of a smooth function γ : [a, b] → U ⊂ Rn is the usual\nγ 0 (t) = lim\nh→0\nγ(t + h) − γ(t)\nh\nFor this section, the derivative f 0 of an Rn -valued function on an open U ⊂ Rn is the n-by-n-matrix-valued\nfunction so that for every smooth path γ : [a, b] → U\n(f ◦ γ)0 (t) = f 0 (γ(t)) · γ 0 (t)\n(matrix multiplication)\nEquivalently, for small real h, xo ∈ U , and v ∈ Rn , as h → 0, using Landau’s little-oh notation, \nf (xo + h · v) = f (x) + h · f 0 (xo ) · v + o(h)\n(matrix multiplication)\n[2.0.1] Theorem: Let U be an open subset of Rn and f : U → Rn a continuously differentiable function.\nFor x0 ∈ U such that f 0 (x0 ) : Rn → Rn is a linear isomorphism, there is a neighborhood V ⊂ U of x0 so\nthat f |V has a continuously differentiable inverse on f (V ).\nProof: Let x → |x| be the usual norm on Rn , and |T | the operator norm on n-by-n real matrices. Without\nloss of generality, xo = 0, f (xo ) = 0, and f 0 (xo ) = f 0 (0) = 1n . Let F (x) = x − f (x), so that F 0 (0) = 0. By\ncontinuity, there is δ > 0 so that |F 0 (x)| < 12 for |x| < δ.\nWith g(t) = F (tx) for t ∈ [0, 1], the Mean Value Theorem in one variable gives\nF (x) = g(1) = g(0) + g 0 (t)(1 − 0) = F (0) + F 0 (tx)(x) = F 0 (tx)(x)\n(for some 0 ≤ t ≤ 1)\nso\nδ\n2\nThus, F maps the closed ball Bδ of radius δ to the closed ball Bδ/2 .\n|F (x)| ≤ |F 0 (tx)| · |x| ≤\n1\n2\n· |x| ≤\n(for |x| < δ)\nWe claim that f (Bδ ) ⊃ Bδ/2 , and that f is injective on f −1 (Bδ/2 . 
To this end, take y ∈ Bδ/2 , and\nlet Φy (x) = y + F (x) = y + x − f (x). For |y| ≤ δ/2 and |x| ≤ δ, |Φy (x)| ≤ δ, so Φy is a continuous\nmap of the complete metric space Bδ to itself. A similar estimate shows that Φy is contractive: letting\ng(t) = F ((1 − t)x1 + tx2 ),\n|Φy (x2 ) − Φy (x1 )| = |F (x2 ) − F (x1 )| = |g(1) − g(0)| = |g 0 (t)| · |1 − 0|\n= |F 0 ((1 − t)x1 + tx2 )| · |x2 − x1 | ≤\n1\n2\n· |x1 − x2 |\n(for given x1 , x2 ∈ Bδ , for some 0 ≤ t ≤ 1)\nBy the fixed-point lemma, Φy has a unique fixed point xo , that is,\nxo = Φy (xo ) = y + xo − f (xo )\nso xo is the unique solution in Bδ to the equation f (xo ) = y. This proves f (Bδ ) ⊃ Bδ/2 as well as the\ninjectivity on f −1 (Bδ/2 ).\n When f (x)/g(x) → 0 as x → x , write f (x) = o(g(x)).\no\n The usual operator norm is |T | = sup\n|x|≤1 |T x|.\n2\nPaul Garrett: Inverse function theorems (November 20, 2014)\nTo prove differentiability of the inverse map ϕ = f −1 , take x1 , x2 ∈ Bδ . Continuity of ϕ follows from\n|x1 − x2 | ≤ |f (x1 ) − f (x2 )| + x1 − f (x1 ) − x2 − f (x2 ) ≤ |f (x1 ) − f (x2 )| + |F (x1 ) − F (x2 )| ≤ |f (x1 ) − f (x2 )| + 21 |x1 − x2 |\nby the inequality |F (x1 ) − F (x2 )| < 12 |x1 − x2 | from above. Subtracting 21 |x1 − x2 | from both sides,\n1\n2 |x1\n− x2 | ≤ |f (x1 ) − f (x2 )|\ngiving continuity of the inverse.\nFor differentiability, let y1 = f (x1 ) and y2 = f (x2 ) with y1 , y2 in the interior of Bδ/2 . Then\nϕ(y1 ) − ϕ(y2 ) − f 0 (x2 )−1 (y1 − y2 ) = x1 − x2 − f 0 (x2 )−1 (f (x1 ) − f (x2 ))\n= x1 − x2 − f 0 (x2 )−1 f 0 (x2 )(x1 − x2 ) + o(x1 − x2 )\n= x1 − x2 − (x1 − x2 ) + o(x1 − x2 ) = o(x1 − x2 )\n(as x1 → x2 )\n(as x1 → x2 )\nBy the already-established continuity, this is o(y1 − y2 ). Thus, the inverse ϕ is differentiable at y2 = f (x2 ),\nand its derivative is ϕ0 (y2 ) = f 0 (x2 )−1 , for |y| < δ/2.\n///\n[2.0.2] Remark: An elaboration of this discussion proves higher-order continuous differentiability in the\nreal-variables sense, but we do not need this for application to the holomorphic inverse function theorem\nbelow.\n3. Holomorphic inverse function theorem\n[3.0.1] Theorem: For f holomorphic on a neighborhood U of zo and f 0 (zo ) 6= 0, there is a holomorphic\ninverse function g on a neighborhood of f (zo ), that is, such that (g ◦ f )(z) = z and (f ◦ g)(z) = z.\nProof: The idea is to consider f as a real-differentiable map f : R2 → R2 , obtain a real-differentiable inverse\ng and then observe that complex differentiability of f implies that of g.\nThe complex differentiability of f can be expressed as\nf (zo + hw) = f (xo ) + hf 0 (zo ) · w + o(h)\n(small real h, complex w)\nwhere f 0 (zo ) · w denotes multiplication in C. Separate real and imaginary parts: let f 0 (zo ) = a + bi with\na, b ∈ R, and w = u + iv with u, v ∈ R, giving\nf (zo + hw) = f (xo ) + h(a + bi) · (u + iv) + o(h) = f (xo ) + h (au − bv) + i(av + bu) ) + o(h)\nThe multiplication in C is achieved by matrix multiplication of real and imaginary parts:\nau − bv\na −b\nu\n=\nav + bu\nb\na\nv\nso the real-variable derivative of f at zo is\nRe f 0 (zo ) −Im f 0 (zo )\nIm f 0 (zo )\nRef 0 (zo )\n3\nPaul Garrett: Inverse function theorems (November 20, 2014)\nThe real-variable derivative has determinant |f 0 (zo )|2 , so is invertible for f 0 (zo ) 6= 0. Let α = f 0 (zo ). 
Thus,\nthere exists a real-differentiable inverse g, with real-variable derivative at f (zo ) given by\nRe α\nIm α\n−Im α\nRe α\n−1\n1\n=\n|α|2\nReα\n−Im α\nIm α\nRe α\n=\nRe α−1 Im α−1\n−Im α−1 Re α−1\nThat is, with wo = f (zo ),\ng(wo + h(u + iv)) = g(wo ) + h(1 i)\nRe α−1 Im α−1\n−Im α−1 u\n+ o(h) = g(wo ) + hα−1 (u + iv) + o(h)\n−1\nv\nRe α\nThis holds for all real u, v, so g is complex-differentiable at f (zo ), with complex derivative 1/f 0 (zo ).\n///\n4. Perturbations f (z) + h · g(z)\n[4.0.1] Corollary: For f, g holomorphic near zo , with zo a simple zero of f (zo ), for all ε > 0 there is δ > 0\nsuch that f − h · g has a zero zh with |zo − zh | < ε, and zh is a holomorphic function of h.\nProof: In the anomalous case that g(zo ) = 0, then zh = zo suffices.\nFor g(zo ) 6= 0, solve f (z) + h · g(z) = 0 for h:\nh =\nand then\nh0 =\nand\nh0 (zo ) =\n−f (z)\ng(z)\n−f 0 (z) f (z) · g 0 (z)\n−\ng(z)\ng(z)2\n−f 0 (zo ) 0 · g 0 (z)\n−f 0 (zo )\n−\n=\n6= 0\n2\ng(zo )\ng(zo )\ng(zo )\nApply the holomorphic inverse function theorem to obtain the holomorphic inverse F (h) = z such that\nF (0) = zo .\n///\n4\n```\nFly UP"
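The fixed-point lemma is constructive: iterating the contraction converges to the fixed point, which is exactly how the proof of the smooth inverse function theorem produces $f^{-1}(y)$. As an illustration only (the example function is our own choice, not part of Garrett's notes), here is a small Python sketch that inverts $f(x) = x + 0.3\sin x$ by iterating $\Phi_y(x) = y + x - f(x)$, a contraction with constant $c = 0.3$:

```python
import math

def f(x):
    # f(x) = x + 0.3*sin(x), so F(x) = x - f(x) = -0.3*sin(x) is a contraction with c = 0.3 < 1/2
    return x + 0.3 * math.sin(x)

def invert(y, tol=1e-12, max_iter=200):
    """Solve f(x) = y by iterating the contraction Phi_y(x) = y + x - f(x)."""
    x = y  # any starting point works; the lemma guarantees convergence
    for _ in range(max_iter):
        x_next = y + x - f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

x = invert(0.5)
print(x, f(x))  # f(x) reproduces 0.5 to within the tolerance
```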
] | [
null,
"http://placehold.it/65x65",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7939279,"math_prob":0.9995975,"size":7736,"snap":"2019-13-2019-22","text_gpt3_token_len":3041,"char_repetition_ratio":0.12972064,"word_repetition_ratio":0.07647983,"special_character_ratio":0.43924507,"punctuation_ratio":0.118336886,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99994886,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-22T13:19:32Z\",\"WARC-Record-ID\":\"<urn:uuid:2e4bb441-963a-4af7-92b5-0f7792de2b08>\",\"Content-Length\":\"31549\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d30527d0-7f5c-4324-802f-3659676b4c28>\",\"WARC-Concurrent-To\":\"<urn:uuid:062ea984-6d62-4f14-a266-8331a78eb0d0>\",\"WARC-IP-Address\":\"104.18.59.110\",\"WARC-Target-URI\":\"http://slideflix.net/doc/1895416/inverse-function-theorems\",\"WARC-Payload-Digest\":\"sha1:PN4PE6GLRKXICO7HATSBCNRY3VYKZBYB\",\"WARC-Block-Digest\":\"sha1:BTKBV7GYNPOKYSCJLJFTFSD7OBQS4SYF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202658.65_warc_CC-MAIN-20190322115048-20190322141048-00532.warc.gz\"}"} |
https://cerfacs.fr/coop/coop-cse-test | [
"",
null,
"One important mission of COOP falls under the broad term Computer Science and Engineering, or CSE. To get a feel for what we do, we ask for candidates to play with a sample of Python code that represents many of the topics that are important to us. Specifically, the task is to edit the code so that:\n\n1. It can be called from an other module\n2. It is “clean”, i.e. follows coding standards\n3. It gains some flexibility and functionality\n\nIf you’ve been asked to perform this task and ended up here, please read the instructions below, and send your results to the person you’ve been in contact with. Note that if you are comfortable with Python, the full test should take less than an hour. If not, tentative ETAs are given for each task.\n\n##### The initial code\n\nWe start from a python script solving the equations of a SIR epidemiological model, found in this blog article.\n\n```import numpy as np\nfrom scipy.integrate import odeint\nimport matplotlib.pyplot as plt\n\n# Total population, N.\nN = 1000\n# Initial number of infected and recovered individuals, I0 and R0.\nI0, R0 = 1, 0\n# Everyone else, S0, is susceptible to infection initially.\nS0 = N - I0 - R0\n# Contact rate, beta, and mean recovery rate, gamma, (in 1/days).\nbeta, gamma = 0.2, 1./10\n# A grid of time points (in days)\nt = np.linspace(0, 160, 160)\n\n# The SIR model differential equations.\ndef deriv(y, t, N, beta, gamma):\nS, I, R = y\ndSdt = -beta * S * I / N\ndIdt = beta * S * I / N - gamma * I\ndRdt = gamma * I\nreturn dSdt, dIdt, dRdt\n\n# Initial conditions vector\ny0 = S0, I0, R0\n# Integrate the SIR equations over the time grid, t.\nret = odeint(deriv, y0, t, args=(N, beta, gamma))\nS, I, R = ret.T\n\n# Plot the data on three separate curves for S(t), I(t) and R(t)\nfig = plt.figure(facecolor='w')\nax.plot(t, S/1000, 'b', alpha=0.5, lw=2, label='Susceptible')\nax.plot(t, I/1000, 'r', alpha=0.5, lw=2, label='Infected')\nax.plot(t, R/1000, 'g', alpha=0.5, lw=2, label='Recovered with immunity')\nax.set_xlabel('Time /days')\nax.set_ylabel('Number (1000s)')\nax.set_ylim(0,1.2)\nax.yaxis.set_tick_params(length=0)\nax.xaxis.set_tick_params(length=0)\nax.grid(b=True, which='major', c='w', lw=2, ls='-')\nlegend = ax.legend()\nlegend.get_frame().set_alpha(0.5)\nfor spine in ('top', 'right', 'bottom', 'left'):\nax.spines[spine].set_visible(False)\nplt.show()\n```\n\nThis outputs the following graph:",
null,
"##### 1. Make it callable from another module\n\nETA: 10 - 20 min\n\nPaste the code in a file `sir_model.py` and create a caller code `test.py` in the same folder with the following lines:\n\n```from sir_model import sir, epidemic_plot\n\nresult = sir(days=160, population=1000, infected=1, contact_rate=0.2, infection_rate=0.1)\nepidemic_plot(result)\n```\n\nEdit `sir_model.py` to make `test.py` work. Just use the fact that these files are in the same folder, no `\\$PYTHON_PATH` or `setup.py` work needed here.\n\n##### 2. Clean the code\n\nETA: 30 min - 1h\n\nReadability counts for good code mainteance! Please tidy this code up so that it becomes easy for maintainers to read in the future, but keep the exact same functionalities.\n\nWe like to stick to Python standards for code quality, not for each dev to decide what “clean” should look like. The bible of Python style is PEP8, buy you can get some help from Pylint, if you wish. If you don’t know about Pylint, you can check it out online, or start with our blog post on the\n\nETA: 1 - 2 hours\n\nUpgrade the code to add an optional “lockdown mode”, with a choice of lockdown length in days. It should, for example, allow to set a lockdown period of 40 days each time the relative proportion of infected people reaches 5%. During the lockdown, the contact rate is decreased tenfold.\n\nThe API must keep the possibility to NOT include a quarantine mode in the simulation. The result should look like this:",
null,
"#### And that’s a wrap!\n\nAntoine Dauptain is a research scientist focused on computer science and engineering topics for HPC.\nCorentin Lapeyre is a research scientist focused on AI for physical modeling."
] | [
null,
"https://uwaterloo.ca/student-success/sites/ca.student-success/files/styles/wide-body-750px-wide/public/uploads/images/tests-and-exams.jpg",
null,
"https://cerfacs.fr/coop/images/python_course_one/sir.png",
null,
"https://cerfacs.fr/coop/images/python_course_one/quarantine.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7881892,"math_prob":0.9093639,"size":3878,"snap":"2021-43-2021-49","text_gpt3_token_len":1088,"char_repetition_ratio":0.079504386,"word_repetition_ratio":0.0061919503,"special_character_ratio":0.2849407,"punctuation_ratio":0.18287037,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9868433,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T21:26:18Z\",\"WARC-Record-ID\":\"<urn:uuid:f469d623-61be-47de-8d1b-e982bc2fe270>\",\"Content-Length\":\"27069\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:83d9e103-e89d-45a1-abb4-35361f457f24>\",\"WARC-Concurrent-To\":\"<urn:uuid:e4c4e18b-ac8f-4de0-a074-2a776518592f>\",\"WARC-IP-Address\":\"138.63.8.11\",\"WARC-Target-URI\":\"https://cerfacs.fr/coop/coop-cse-test\",\"WARC-Payload-Digest\":\"sha1:BOSW34I2KZXVKO6W5A7SOSXO4FWBS5GP\",\"WARC-Block-Digest\":\"sha1:A7JXHIAVYUXN7ZKA46X5G62ULXLMDOEQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585441.99_warc_CC-MAIN-20211021195527-20211021225527-00152.warc.gz\"}"} |
http://ddsog.com/991.html | [
"# 1、模型架构\n\ntransformer的模型架构如下:",
null,
"## 1.1 seq2seq结构\n\nthis is a lovely dog.",
null,
"## 1.2 注意力机制",
null,
"e\n\ni\n\nj\n\n=\n\nf\n\n(\n\ns\n\ni\n\n1\n\n,\n\nh\n\nj\n\n)\n\ne_{ij}=f(s_{i-1}, hj)\n\neij=f(si1,hj)\n\nα\n\ni\n\nj\n\n=\n\ne\n\nx\n\np\n\n(\n\ne\n\ni\n\nj\n\n)\n\n/\n\nk\n\n=\n\n1\n\nT\n\nx\n\ne\n\nx\n\np\n\n(\n\ne\n\ni\n\nk\n\n)\n\n\\alpha_{ij}={exp(e_{ij})}/({\\sum^{T_x}_{k=1}{exp(e_{ik})}})\n\nαij=exp(eij)/k=1Txexp(eik)\n\nc\n\ni\n\n=\n\nj\n\n=\n\n1\n\nT\n\nx\n\nα\n\ni\n\nj\n\nh\n\nj\n\nc_i=\\sum^{T_x}_{j=1}{\\alpha_{ij}h_j}\n\nci=j=1Txαijhj\n\nf\n\nf\n\nf表示的是一个前向神经网络。",
null,
"• self-attention与word2vec\nword2vec是静态的表示一个token的特征,而self-attention则是动态的表示一个token的特征。比如这个苹果真好吃苹果手机真好用这两句中的苹果,在word2vec中都是只有一个相同的向量表示,而在self-attention中则会根据上下文的不同给出不同的向量表示。\n• self-attention与attention\nattention是获取一个句子与另一个句子之间token的权重关系,而self-attention则是获取句子自身token之间的权重关系。\n\n# 2、transformer各部分介绍\n\n• encoder\n• encoder部分的输入\n• layer normalize\n• feed forward\n• scaled dot-product attention\n• decoder\n• decoder部分的输入\n• encoder与decoder的连接\n\n## 2.1 encoder\n\n### 2.1.1 encoder的输入\n\nP\n\nE\n\n(\n\np\n\no\n\ns\n\n,\n\n2\n\ni\n\n)\n\n=\n\ns\n\ni\n\nn\n\n(\n\np\n\no\n\ns\n\n/\n\n1000\n\n0\n\n2\n\ni\n\n/\n\nd\n\nm\n\no\n\nd\n\ne\n\nl\n\n)\n\nPE_{(pos, 2i)}=sin(pos/10000^{2i/d_{model}})\n\nPE(pos,2i)=sin(pos/100002i/dmodel)\n\nP\n\nE\n\n(\n\np\n\no\n\ns\n\n,\n\n2\n\ni\n\n+\n\n1\n\n)\n\n=\n\nc\n\no\n\ns\n\n(\n\np\n\no\n\ns\n\n/\n\n1000\n\n0\n\n2\n\ni\n\n/\n\nd\n\nm\n\no\n\nd\n\ne\n\nl\n\n)\n\nPE_{(pos, 2i+1)}=cos(pos/10000^{2i/d_{model}})\n\nPE(pos,2i+1)=cos(pos/100002i/dmodel)\n\n2\n\ni\n\n2i\n\n2i或者\n\n2\n\ni\n\n+\n\n1\n\n2i+1\n\n2i+1表示在该向量中的位置。",
null,
"### 2.1.3 layer normalization\n\n• Batch normalization是对每个批次中的多个样本之间的相同位置的特征进行归一化;\n• layer normalization是对同一个样本中的不同特征进行归一化;\n• 对于一些结构化特征场景下,layer normalization看起来不太合理,但是对于nlp场景下layer normalization就比较合理。因为在nlp场景下,对每一个样本的多个token 向量进行layer normalization,实际上就是对token向量进行归一化。\n\n### 2.1.5 scaled dot-product attention\n\nAttention函数的本质可以被描述为一个查询(query)到一系列(键key-值value)对的映射。\n\n• 第一步是将query和每个key进行相似度计算得到权重,常用的相似度函数有点积,拼接,感知机等;\n• 第二步一般是使用一个softmax函数对这些权重进行归一化;\n• 最后将权重和相应的键值value进行加权求和得到最后的attention。\n\nsacled dot-product attention也是按照上述所说的流程进行计算的。首先我们来看一下multi-head attention和scaled dot-product attention的结构。",
null,
"W\n\nQ\n\n,\n\nW\n\nK\n\n,\n\nW\n\nV\n\nW^Q, W^K, W^V\n\nWQ,WK,WV相乘得到的。\n\nq\n\n1\n\n=\n\nX\n\n1\n\nW\n\nQ\n\nq1 = X_1W^Q\n\nq1=X1WQ\n\nk\n\n1\n\n=\n\nX\n\n1\n\nW\n\nK\n\nk1 = X_1W^K\n\nk1=X1WK\n\nv\n\n1\n\n=\n\nX\n\n1\n\nW\n\nV\n\nv1 = X_1W^V\n\nv1=X1WV\n\nq\n\n2\n\n=\n\nX\n\n2\n\nW\n\nQ\n\nq2 = X_2W^Q\n\nq2=X2WQ\n\nk\n\n2\n\n=\n\nX\n\n2\n\nW\n\nK\n\nk2 = X_2W^K\n\nk2=X2WK\n\nv\n\n2\n\n=\n\nX\n\n2\n\nW\n\nV\n\nv2 = X_2W^V\n\nv2=X2WV\n\nW\n\nQ\n\n,\n\nW\n\nK\n\n,\n\nW\n\nV\n\nW^Q, W^K, W^V\n\nWQ,WK,WV都是需要进行训练得到的。",
null,
"d\n\nk\n\n\\sqrt{d_k}\n\ndk\n\nd\n\nk\n\nd_k\n\ndk表示词向量的维度,之所以除以这个数的作用是避免将softmax函数push到梯度很小的区域,具体可以参考《transformer中的attention为什么scaled?》),然后进行softmax,得到softmax向量之后再与各value相乘求和,得到Thinking的进行scaled dot-product attention之后的向量。Machine的计算也是类似。",
null,
"",
null,
"",
null,
"A\n\nt\n\nt\n\ne\n\nn\n\nt\n\ni\n\no\n\nn\n\n(\n\nQ\n\n,\n\nK\n\n,\n\nV\n\n)\n\n=\n\ns\n\no\n\nf\n\nt\n\nm\n\na\n\nx\n\n(\n\nQ\n\nK\n\nT\n\nd\n\nk\n\n)\n\nV\n\nAttention(Q, K, V) = softmax(\\frac{QK^T}{\\sqrt{d_k}})V\n\nAttention(Q,K,V)=softmax(dk\nQKT\n)V\n\n## 2.2 decoder\n\n### 2.2.3 encoder与decoder的连接",
null,
"### 2.2.4 decoder的输出\n\nTHE END",
null,
"",
null,
""
] | [
null,
"https://img-blog.csdnimg.cn/20201111215549581.png#pic_center",
null,
"https://img-blog.csdnimg.cn/20201111215817888.png#pic_center",
null,
"https://img-blog.csdnimg.cn/20201111220244810.png#pic_center",
null,
"https://img-blog.csdnimg.cn/20201111222953201.png#pic_center",
null,
"https://img-blog.csdnimg.cn/2020111300391080.png#pic_center",
null,
"https://img-blog.csdnimg.cn/20201113190653794.png#pic_center",
null,
"https://img-blog.csdnimg.cn/20201113191839473.png#pic_center",
null,
"https://img-blog.csdnimg.cn/20201113192711446.png#pic_center",
null,
"https://img-blog.csdnimg.cn/20201113200536221.png#pic_center",
null,
"https://img-blog.csdnimg.cn/20201113200548252.png#pic_center",
null,
"https://img-blog.csdnimg.cn/20201114115654503.png#pic_center",
null,
"http://ddsog.com/991.html",
null,
"http://ddsog.com/991.html",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.8915577,"math_prob":0.988821,"size":8401,"snap":"2023-14-2023-23","text_gpt3_token_len":5524,"char_repetition_ratio":0.1766107,"word_repetition_ratio":0.13015874,"special_character_ratio":0.20735627,"punctuation_ratio":0.11013216,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9948401,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-25T12:06:27Z\",\"WARC-Record-ID\":\"<urn:uuid:c1a282f2-de39-441f-bdf7-7c5aff43f195>\",\"Content-Length\":\"103478\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:24baa377-d83e-422e-ac6d-bb988444f456>\",\"WARC-Concurrent-To\":\"<urn:uuid:5e427b66-942a-410c-b03a-44a1b54f3bec>\",\"WARC-IP-Address\":\"43.129.238.132\",\"WARC-Target-URI\":\"http://ddsog.com/991.html\",\"WARC-Payload-Digest\":\"sha1:5DEEF4PO4JTCGUINJQP2EMVJHVOCUV5V\",\"WARC-Block-Digest\":\"sha1:A7TOMTG3BJIDPZGUIVED36K4SXEFHRR7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945323.37_warc_CC-MAIN-20230325095252-20230325125252-00394.warc.gz\"}"} |
https://artofproblemsolving.com/wiki/index.php?title=2006_UNCO_Math_Contest_II_Problems/Problem_5&diff=prev&oldid=81835 | [
"# Difference between revisions of \"2006 UNCO Math Contest II Problems/Problem 5\"\n\n## Problem\n\nIn the figure",
null,
"$BD$ is parallel to",
null,
"$AE$ and also",
null,
"$BF$ is parallel to",
null,
"$DE$. The area of the larger triangle",
null,
"$ACE$ is",
null,
"$128$. The area of the trapezoid",
null,
"$BDEA$ is",
null,
"$78$. Determine the area of triangle",
null,
"$ABF$.",
null,
"$[asy] draw((0,0)--(1,2)--(4,0)--cycle,black); draw((1/2,1)--(2.5,1)--(2,0),black); MP(\"A\",(4,0),SE);MP(\"C\",(1,2),N);MP(\"E\",(0,0),SW); MP(\"D\",(.5,1),W);MP(\"B\",(2.5,1),NE);MP(\"F\",(2,0),S); [/asy]$\n\n## Solution\n\nThis problem needs a solution. If you have a solution for it, please help us out by adding it.\n\n## See Also\n\n 2006 UNCO Math Contest II (Problems • Answer Key • Resources) Preceded byProblem 4 Followed byProblem 6 1 • 2 • 3 • 4 • 5 • 6 • 7 • 8 • 9 • 10 All UNCO Math Contest Problems and Solutions\nInvalid username\nLogin to AoPS"
] | [
null,
"https://latex.artofproblemsolving.com/8/9/f/89fff82bb65d0215e49c8c91cb7c553da52205e2.png ",
null,
"https://latex.artofproblemsolving.com/9/a/6/9a65c05bd85d2609bea5db2b109ed5f556463511.png ",
null,
"https://latex.artofproblemsolving.com/6/5/e/65e4f2d5cc59e5ffbed3734130e8c45b643adc91.png ",
null,
"https://latex.artofproblemsolving.com/6/d/5/6d5c9afa92ae3180f02754c177e530d9d0f8b4d4.png ",
null,
"https://latex.artofproblemsolving.com/f/3/4/f348b5890100fa57eaa4a3c548badc83b75af585.png ",
null,
"https://latex.artofproblemsolving.com/0/a/e/0aed56990dedba5a88a65e21245ccfd5d5e43ae2.png ",
null,
"https://latex.artofproblemsolving.com/7/c/9/7c9d11f822493c6fa024726ef0750b2917998e2c.png ",
null,
"https://latex.artofproblemsolving.com/7/6/d/76d15d808807bfa561e72dc8111649ed0a204ebe.png ",
null,
"https://latex.artofproblemsolving.com/c/e/e/cee88eb0e94e7831e4d242c8930836c02c7d5128.png ",
null,
"https://latex.artofproblemsolving.com/8/b/c/8bc13e1d9d8740bc4f6f48868cdc693e5c1b390c.png ",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8271617,"math_prob":0.99606985,"size":924,"snap":"2021-21-2021-25","text_gpt3_token_len":274,"char_repetition_ratio":0.14891304,"word_repetition_ratio":0.037037037,"special_character_ratio":0.32683983,"punctuation_ratio":0.0882353,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99558616,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-16T23:15:22Z\",\"WARC-Record-ID\":\"<urn:uuid:17496adb-ad08-4ce6-b52a-69385e67496e>\",\"Content-Length\":\"42003\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:660b8270-639a-45ca-a7fd-0885e7beebf1>\",\"WARC-Concurrent-To\":\"<urn:uuid:26638762-4914-47ca-be8a-313c3fd41b0b>\",\"WARC-IP-Address\":\"172.67.69.208\",\"WARC-Target-URI\":\"https://artofproblemsolving.com/wiki/index.php?title=2006_UNCO_Math_Contest_II_Problems/Problem_5&diff=prev&oldid=81835\",\"WARC-Payload-Digest\":\"sha1:XHASMD6Q5A5JLRP4KOGMRJDOMOGSP2IN\",\"WARC-Block-Digest\":\"sha1:P7BVZL5AYOSZV3TLFY3MKPKV6K2IS645\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487626122.27_warc_CC-MAIN-20210616220531-20210617010531-00557.warc.gz\"}"} |
http://oeis.org/A083710 | [
"This site is supported by donations to The OEIS Foundation.",
null,
"Hints (Greetings from The On-Line Encyclopedia of Integer Sequences!)\n A083710 Number of partitions of n each of which has a summand which divides every summand in the partition. 9\n 1, 1, 2, 3, 5, 6, 11, 12, 20, 25, 37, 43, 70, 78, 114, 143, 196, 232, 330, 386, 530, 641, 836, 1003, 1340, 1581, 2037, 2461, 3127, 3719, 4746, 5605, 7038, 8394, 10376, 12327, 15272, 17978, 22024, 26095, 31730, 37339, 45333, 53175, 64100, 75340, 90138 (list; graph; refs; listen; history; text; internal format)\n OFFSET 0,3 COMMENTS Since the summand (part) which divides all the other summands is necessarily the smallest, an equivalent definition is: \"Number of partitions of n such that smallest part divides every part.\" - Joerg Arndt, Jun 08 2009 The first few partitions that fail the criterion are 5=3+2, 7=5+2=4+3=3+2+2. So a(5) = A000041(5) - 1 = 6, a(7) = A000041(7) - 3 = 12. - Vladeta Jovovic, Jun 17 2003 Starting with offset 1 = inverse Mobius transform (A051731) of the partition numbers, A000041. - Gary W. Adamson, Jun 08 2009 REFERENCES L. M. Chawla, M. O. Levan and J. E. Maxfield, On a restricted partition function and its tables, J. Natur. Sci. and Math., 12 (1972), 95-101. LINKS FORMULA Equals left border of triangle A137587 starting (1, 2, 3, 5, 6, 11,...). - Gary W. Adamson, Jan 27 2008 Comment from Joerg Arndt, Jun 08 2009: Sequence has g.f. 1 + Sum_{n>=1} x^n/eta(x^n). The g.f. for partitions into parts that are a multiple of n is x^n/eta(x^n), now sum over n. Gary W. Adamson's comment is equivalent to the formula a(n) = Sum_{d|n} p(d-1) where p(i) = number of partitions of i (A000041(i)). Hence A083710 has g.f. Sum_{d>=1} p(d-1)*x^d/(1-x^d), - N. J. A. Sloane, Jun 08 2009 MAPLE with(combinat): with(numtheory): a := proc(n) c := 0: l := sort(convert(divisors(n), list)): for i from 1 to nops(l)-0 do c := c+numbpart(l[i]-1) od: RETURN(c): end: for j from 0 to 60 do printf(`%d, `, a(j)) od: # Zerinvary Lajos, Apr 14 2007 CROSSREFS Cf. A083711, A018783, A137587. Cf. A000041, A051731. - Gary W. Adamson, Jun 08 2009 Sequence in context: A033159 A199366 A318689 * A127524 A117086 A081026 Adjacent sequences: A083707 A083708 A083709 * A083711 A083712 A083713 KEYWORD nonn,easy AUTHOR N. J. A. Sloane, Jun 16 2003 EXTENSIONS More terms from Vladeta Jovovic, Jun 17 2003 STATUS approved\n\nLookup | Welcome | Wiki | Register | Music | Plot 2 | Demos | Index | Browse | More | WebCam\nContribute new seq. or comment | Format | Style Sheet | Transforms | Superseeker | Recent\nThe OEIS Community | Maintained by The OEIS Foundation Inc.\n\nLast modified June 20 01:36 EDT 2019. Contains 324223 sequences. (Running on oeis4.)"
] | [
null,
"http://oeis.org/banner2021.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.61623883,"math_prob":0.9516506,"size":2353,"snap":"2019-26-2019-30","text_gpt3_token_len":944,"char_repetition_ratio":0.10557684,"word_repetition_ratio":0.027842227,"special_character_ratio":0.5053124,"punctuation_ratio":0.25996533,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9916036,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-20T05:52:59Z\",\"WARC-Record-ID\":\"<urn:uuid:781738a4-eda3-4e11-b945-61223d12fe38>\",\"Content-Length\":\"18215\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d8771a3e-f58a-4c72-b1bc-b973d7db995b>\",\"WARC-Concurrent-To\":\"<urn:uuid:75f3a4f4-995d-462c-a83c-2f6ac4ef1a54>\",\"WARC-IP-Address\":\"104.239.138.29\",\"WARC-Target-URI\":\"http://oeis.org/A083710\",\"WARC-Payload-Digest\":\"sha1:SFVL3GEO7ASRMSXRHJKNLGMEL2ZYTRJC\",\"WARC-Block-Digest\":\"sha1:XDFWRRDYF4AU2DLOL2366VEZM3NOWJE6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999141.54_warc_CC-MAIN-20190620044948-20190620070948-00004.warc.gz\"}"} |
https://www.numberempire.com/516636 | [
"Home | Menu | Get Involved | Contact webmaster",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"# Number 516636\n\nfive hundred sixteen thousand six hundred thirty six\n\n### Properties of the number 516636\n\n Factorization 2 * 2 * 3 * 3 * 113 * 127 Divisors 1, 2, 3, 4, 6, 9, 12, 18, 36, 113, 127, 226, 254, 339, 381, 452, 508, 678, 762, 1017, 1143, 1356, 1524, 2034, 2286, 4068, 4572, 14351, 28702, 43053, 57404, 86106, 129159, 172212, 258318, 516636 Count of divisors 36 Sum of divisors 1327872 Previous integer 516635 Next integer 516637 Is prime? NO Previous prime 516623 Next prime 516643 516636th prime 7632467 Is a Fibonacci number? NO Is a Bell number? NO Is a Catalan number? NO Is a factorial? NO Is a regular number? NO Is a perfect number? NO Polygonal number (s < 11)? triangular(1016) Binary 1111110001000011100 Octal 1761034 Duodecimal 20ab90 Hexadecimal 7e21c Square 266912756496 Square root 718.77395612251 Natural logarithm 13.155093843627 Decimal logarithm 5.7131846652067 Sine 0.88575455673642 Cosine 0.46415392406041 Tangent 1.9083207333202\nNumber 516636 is pronounced five hundred sixteen thousand six hundred thirty six. Number 516636 is a composite number. Factors of 516636 are 2 * 2 * 3 * 3 * 113 * 127. Number 516636 has 36 divisors: 1, 2, 3, 4, 6, 9, 12, 18, 36, 113, 127, 226, 254, 339, 381, 452, 508, 678, 762, 1017, 1143, 1356, 1524, 2034, 2286, 4068, 4572, 14351, 28702, 43053, 57404, 86106, 129159, 172212, 258318, 516636. Sum of the divisors is 1327872. Number 516636 is not a Fibonacci number. It is not a Bell number. Number 516636 is not a Catalan number. Number 516636 is not a regular number (Hamming number). It is a not factorial of any number. Number 516636 is an abundant number and therefore is not a perfect number. Number 516636 is a triangular number with n=1016. Binary numeral for number 516636 is 1111110001000011100. Octal numeral is 1761034. Duodecimal value is 20ab90. Hexadecimal representation is 7e21c. Square of the number 516636 is 266912756496. Square root of the number 516636 is 718.77395612251. Natural logarithm of 516636 is 13.155093843627 Decimal logarithm of the number 516636 is 5.7131846652067 Sine of 516636 is 0.88575455673642. Cosine of the number 516636 is 0.46415392406041. Tangent of the number 516636 is 1.9083207333202"
] | [
null,
"https://www.numberempire.com/images/graystar.png",
null,
"https://www.numberempire.com/images/graystar.png",
null,
"https://www.numberempire.com/images/graystar.png",
null,
"https://www.numberempire.com/images/graystar.png",
null,
"https://www.numberempire.com/images/graystar.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.55378747,"math_prob":0.995714,"size":2644,"snap":"2020-34-2020-40","text_gpt3_token_len":965,"char_repetition_ratio":0.16704546,"word_repetition_ratio":0.21609196,"special_character_ratio":0.5139939,"punctuation_ratio":0.22095238,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9861366,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-09T15:17:33Z\",\"WARC-Record-ID\":\"<urn:uuid:c806faf5-51cd-4f64-9f9e-2e5ba3e64182>\",\"Content-Length\":\"26735\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e6260c4-abac-4035-8b24-192a379f5548>\",\"WARC-Concurrent-To\":\"<urn:uuid:096d4a43-7e11-4178-a595-9294a245ef02>\",\"WARC-IP-Address\":\"172.67.208.6\",\"WARC-Target-URI\":\"https://www.numberempire.com/516636\",\"WARC-Payload-Digest\":\"sha1:HJZVFZX4IHHYTOIKV4EBNNZMABSLGLL6\",\"WARC-Block-Digest\":\"sha1:52W3FS54QDIDC3ULWOPBKARLC4W2VWRG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738555.33_warc_CC-MAIN-20200809132747-20200809162747-00095.warc.gz\"}"} |
https://scip.zib.de/doc-6.0.2/html/presol__symmetry_8c.php | [
"",
null,
"presol_symmetry.c File Reference\n\nDetailed Description\n\npresolver for storing symmetry information about current problem\n\nThis presolver computes symmetries of the problem and stores this information in adequate form. It does not perform additional actions. The symmetry information can be accessed through external functions. However, the user has to declare the type of symmetry that is needed before execution, see SYMsetSpecRequirement().\n\nNote\nWe treat implict integer variables as if they were continuous/real variables. The reason is that there is currently no distinction between implicit integer and implicit binary. Moreover, currently implicit integer variables hurt our code more than continuous/real variables (we basically do not handle integral variables at all).\nWe do not copy symmetry information, since it is not clear how this information transfers. Moreover, copying symmetry might inhibit heuristics. But note that solving the a sub-SCIP might then happen without symmetry information!\n\nDefinition in file presol_symmetry.c.\n\n#include <scip/cons_linear.h>\n#include <scip/cons_knapsack.h>\n#include <scip/cons_varbound.h>\n#include <scip/cons_setppc.h>\n#include <scip/cons_and.h>\n#include <scip/cons_logicor.h>\n#include <scip/cons_or.h>\n#include <scip/cons_xor.h>\n#include <scip/cons_bounddisjunction.h>\n#include <scip/presol_symmetry.h>\n#include <symmetry/compute_symmetry.h>\n#include <string.h>\n\nGo to the source code of this file.\n\nData Structures\n\nstruct SYM_Sortrhstype\n\nMacros\n\n#define PRESOL_NAME \"symmetry\"\n\n#define PRESOL_DESC \"presolver for computing and storing symmetry information about current problem\"\n\n#define PRESOL_PRIORITY 0\n\n#define PRESOL_MAXROUNDS -1\n\n#define PRESOL_TIMING SCIP_PRESOLTIMING_EXHAUSTIVE /* timing of the presolver (fast, medium, or exhaustive) */\n\n#define DEFAULT_MAXGENERATORS 1500\n\n#define DEFAULT_CHECKSYMMETRIES FALSE\n\n#define DEFAULT_DISPLAYNORBITVARS FALSE\n\n#define MAXGENNUMERATOR 64000000\n\n#define SCIP_SPECIALVAL 1.12345678912345e+19\n\nTypedefs\n\ntypedef struct SYM_Sortrhstype SYM_SORTRHSTYPE\n\nFunctions\n\nstatic SCIP_DECL_HASHGETKEY (SYMhashGetKeyVartype)\n\nstatic SCIP_DECL_HASHKEYEQ (SYMhashKeyEQVartype)\n\nstatic SCIP_DECL_HASHKEYVAL (SYMhashKeyValVartype)\n\nstatic SCIP_DECL_SORTINDCOMP (SYMsortRhsTypes)\n\nstatic SCIP_DECL_SORTINDCOMP (SYMsortMatCoef)\n\nstatic SCIP_Bool SymmetryFixVar (SYM_SPEC fixedtype, SCIP_VAR *var)\n\nstatic SCIP_RETCODE getActiveVariables (SCIP *scip, SCIP_VAR ***vars, SCIP_Real **scalars, int *nvars, SCIP_Real *constant, SCIP_Bool transformed)\n\nstatic SCIP_RETCODE collectCoefficients (SCIP *scip, SCIP_VAR **linvars, SCIP_Real *linvals, int nlinvars, SCIP_Real lhs, SCIP_Real rhs, SCIP_Bool istransformed, SYM_RHSSENSE rhssense, SYM_MATRIXDATA *matrixdata)\n\nstatic SCIP_RETCODE checkSymmetriesAreSymmetries (SCIP *scip, SYM_SPEC fixedtype, SYM_MATRIXDATA *matrixdata, int nperms, int **perms)\n\nstatic int getNSymhandableConss (SCIP *scip)\n\nstatic SCIP_RETCODE computeSymmetryGroup (SCIP *scip, int maxgenerators, SYM_SPEC fixedtype, SCIP_Bool local, SCIP_Bool checksymmetries, int *npermvars, SCIP_VAR ***permvars, SCIP_Real **permvarsobj, int *nperms, int *nmaxperms, int ***perms, SCIP_Real *log10groupsize, SCIP_Bool *success)\n\nstatic SCIP_RETCODE computeNOrbitVars (SCIP *scip, SCIP_PRESOLDATA *presoldata, SCIP_Bool completestatistic)\n\nstatic SCIP_RETCODE determineSymmetry (SCIP *scip, SCIP_PRESOLDATA *presoldata, SYM_SPEC symspecrequire, SYM_SPEC 
symspecrequirefixed)\n\nstatic SCIP_DECL_PRESOLINIT (presolInitSymmetry)\n\nstatic SCIP_DECL_PRESOLEXIT (presolExitSymmetry)\n\nstatic SCIP_DECL_PRESOLFREE (presolFreeSymmetry)\n\nstatic SCIP_DECL_PRESOLEXEC (presolExecSymmetry)\n\nSCIP_RETCODE SCIPincludePresolSymmetry (SCIP *scip)\n\nSCIP_RETCODE SCIPgetGeneratorsSymmetry (SCIP *scip, SYM_SPEC symspecrequire, SYM_SPEC symspecrequirefixed, SCIP_Bool recompute, int *npermvars, SCIP_VAR ***permvars, int *nperms, int ***perms, SCIP_Real *log10groupsize, SCIP_Bool *binvaraffected)\n\nSCIP_RETCODE SCIPgetPermvarsObjSymmetry (SCIP *scip, SCIP_Real **permvarsobj)\n\n◆ PRESOL_NAME\n\n #define PRESOL_NAME \"symmetry\"\n\nDefinition at line 53 of file presol_symmetry.c.\n\n◆ PRESOL_DESC\n\n #define PRESOL_DESC \"presolver for computing and storing symmetry information about current problem\"\n\nDefinition at line 54 of file presol_symmetry.c.\n\nReferenced by SCIPincludePresolSymmetry().\n\n◆ PRESOL_PRIORITY\n\n #define PRESOL_PRIORITY 0\n\npriority of the presolver (>= 0: before, < 0: after constraint handlers)\n\nDefinition at line 55 of file presol_symmetry.c.\n\nReferenced by SCIPincludePresolSymmetry().\n\n◆ PRESOL_MAXROUNDS\n\n #define PRESOL_MAXROUNDS -1\n\nmaximal number of presolving rounds the presolver participates in (-1: no limit)\n\nDefinition at line 56 of file presol_symmetry.c.\n\nReferenced by SCIPincludePresolSymmetry().\n\n◆ PRESOL_TIMING\n\n #define PRESOL_TIMING SCIP_PRESOLTIMING_EXHAUSTIVE /* timing of the presolver (fast, medium, or exhaustive) */\n\nDefinition at line 57 of file presol_symmetry.c.\n\nReferenced by SCIPincludePresolSymmetry().\n\n◆ DEFAULT_MAXGENERATORS\n\n #define DEFAULT_MAXGENERATORS 1500\n\nlimit on the number of generators that should be produced within symmetry detection (0 = no limit)\n\nDefinition at line 60 of file presol_symmetry.c.\n\nReferenced by SCIPincludePresolSymmetry().\n\n◆ DEFAULT_CHECKSYMMETRIES\n\n #define DEFAULT_CHECKSYMMETRIES FALSE\n\nShould all symmetries be checked after computation?\n\nDefinition at line 61 of file presol_symmetry.c.\n\nReferenced by SCIPincludePresolSymmetry().\n\n◆ DEFAULT_DISPLAYNORBITVARS\n\n #define DEFAULT_DISPLAYNORBITVARS FALSE\n\nShould the number of variables affected by some symmetry be displayed?\n\nDefinition at line 62 of file presol_symmetry.c.\n\nReferenced by SCIPincludePresolSymmetry().\n\n◆ MAXGENNUMERATOR\n\n #define MAXGENNUMERATOR 64000000\n\ndetermine maximal number of generators by dividing this number by the number of variables\n\nDefinition at line 65 of file presol_symmetry.c.\n\nReferenced by determineSymmetry().\n\n◆ SCIP_SPECIALVAL\n\n #define SCIP_SPECIALVAL 1.12345678912345e+19\n\nspecial floating point value for handling zeros in bound disjunctions\n\nDefinition at line 66 of file presol_symmetry.c.\n\nReferenced by computeSymmetryGroup().\n\n◆ SYM_SORTRHSTYPE\n\n typedef struct SYM_Sortrhstype SYM_SORTRHSTYPE\n\nDefinition at line 159 of file presol_symmetry.c.\n\n◆ SCIP_DECL_HASHGETKEY()\n\n static SCIP_DECL_HASHGETKEY ( SYMhashGetKeyVartype )\nstatic\n\ngets the key of the given element\n\nDefinition at line 99 of file presol_symmetry.c.\n\n◆ SCIP_DECL_HASHKEYEQ()\n\n static SCIP_DECL_HASHKEYEQ ( SYMhashKeyEQVartype )\nstatic\n\nreturns TRUE iff both keys are equal\n\nCompare the types of two variables according to objective, lower and upper bound, and variable type.\n\nDefinition at line 109 of file presol_symmetry.c.\n\nReferences FALSE, SYM_Vartype::lb, SYM_Vartype::obj, SCIPisEQ(), TRUE, SYM_Vartype::type, and 
SYM_Vartype::ub.\n\n◆ SCIP_DECL_HASHKEYVAL()\n\n static SCIP_DECL_HASHKEYVAL ( SYMhashKeyValVartype )\nstatic\n\nreturns the hash value of the key\n\nDefinition at line 140 of file presol_symmetry.c.\n\n◆ SCIP_DECL_SORTINDCOMP() [1/2]\n\n static SCIP_DECL_SORTINDCOMP ( SYMsortRhsTypes )\nstatic\n\nsort rhs types - first by sense, then by value\n\nDue to numerical issues, we first sort by sense, then by value.\n\nresult: < 0: ind1 comes before (is better than) ind2 = 0: both indices have the same value\n\n0: ind2 comes after (is worse than) ind2\n\nDefinition at line 171 of file presol_symmetry.c.\n\n◆ SCIP_DECL_SORTINDCOMP() [2/2]\n\n static SCIP_DECL_SORTINDCOMP ( SYMsortMatCoef )\nstatic\n\nsort matrix coefficients\n\nresult: < 0: ind1 comes before (is better than) ind2 = 0: both indices have the same value\n\n0: ind2 comes after (is worse than) ind2\n\nDefinition at line 205 of file presol_symmetry.c.\n\nReferences SCIP_Real, and SYM_Sortrhstype::vals.\n\n◆ SymmetryFixVar()\n\n static SCIP_Bool SymmetryFixVar ( SYM_SPEC fixedtype, SCIP_VAR * var )\nstatic\n\ndetermines whether variable should be fixed by permutations\n\nParameters\n fixedtype bitset of variable types that should be fixed var variable to be considered\n\nDefinition at line 228 of file presol_symmetry.c.\n\nReferenced by checkSymmetriesAreSymmetries(), and computeSymmetryGroup().\n\n◆ getActiveVariables()\n\n static SCIP_RETCODE getActiveVariables ( SCIP * scip, SCIP_VAR *** vars, SCIP_Real ** scalars, int * nvars, SCIP_Real * constant, SCIP_Bool transformed )\nstatic\n\nTransforms given variables, scalars, and constant to the corresponding active variables, scalars, and constant.\n\nNote\nconstant needs to be initialized!\nParameters\n scip SCIP data structure vars pointer to vars array to get active variables for scalars pointer to scalars a_1, ..., a_n in linear sum a_1*x_1 + ... + a_n*x_n + c nvars pointer to number of variables and values in vars and vals array constant pointer to constant c in linear sum a_1*x_1 + ... + a_n*x_n + c transformed transformed constraint?\n\nDefinition at line 249 of file presol_symmetry.c.\n\nReferenced by collectCoefficients().\n\n◆ collectCoefficients()\n\n static SCIP_RETCODE collectCoefficients ( SCIP * scip, SCIP_VAR ** linvars, SCIP_Real * linvals, int nlinvars, SCIP_Real lhs, SCIP_Real rhs, SCIP_Bool istransformed, SYM_RHSSENSE rhssense, SYM_MATRIXDATA * matrixdata )\nstatic\n\nfill in matrix elements into coefficient arrays\n\nParameters\n scip SCIP data structure linvars array of linear variables linvals array of linear coefficients values (or NULL if all linear coefficient values are 1) nlinvars number of linear variables lhs left hand side rhs right hand side istransformed whether the constraint is transformed rhssense identifier of constraint type matrixdata matrix data to be filled in\n\nDefinition at line 295 of file presol_symmetry.c.\n\nReferenced by computeSymmetryGroup().\n\n◆ checkSymmetriesAreSymmetries()\n\n static SCIP_RETCODE checkSymmetriesAreSymmetries ( SCIP * scip, SYM_SPEC fixedtype, SYM_MATRIXDATA * matrixdata, int nperms, int ** perms )\nstatic\n\nchecks whether given permutations form a symmetry of a MIP\n\nWe need the matrix and rhs in the original order in order to speed up the comparison process. The matrix is needed in the right order to easily check rows. 
The rhs is used because of cache effects.\n\nParameters\n scip SCIP data structure fixedtype variable types that must be fixed by symmetries matrixdata matrix data nperms number of permutations perms permutations\n\nDefinition at line 487 of file presol_symmetry.c.\n\nReferenced by computeSymmetryGroup().\n\n◆ getNSymhandableConss()\n\n static int getNSymhandableConss ( SCIP * scip )\nstatic\n\nreturns the number of active constraints that can be handled by symmetry\n\nParameters\n scip SCIP instance\n\nDefinition at line 633 of file presol_symmetry.c.\n\nReferences NULL, SCIPconshdlrGetNActiveConss(), and SCIPfindConshdlr().\n\nReferenced by computeSymmetryGroup(), and determineSymmetry().\n\n◆ computeSymmetryGroup()\n\n static SCIP_RETCODE computeSymmetryGroup ( SCIP * scip, int maxgenerators, SYM_SPEC fixedtype, SCIP_Bool local, SCIP_Bool checksymmetries, int * npermvars, SCIP_VAR *** permvars, SCIP_Real ** permvarsobj, int * nperms, int * nmaxperms, int *** perms, SCIP_Real * log10groupsize, SCIP_Bool * success )\nstatic\n\ncompute symmetry group of MIP\n\nParameters\n scip SCIP pointer maxgenerators maximal number of generators constructed (= 0 if unlimited) fixedtype variable types that must be fixed by symmetries local Use local variable bounds? checksymmetries Should all symmetries be checked after computation? npermvars pointer to store number of variables for permutations permvars pointer to store variables on which permutations act permvarsobj objective values of permuted variables nperms pointer to store number of permutations nmaxperms pointer to store maximal number of permutations (needed for freeing storage) perms pointer to store permutation generators as (nperms x npermvars) matrix log10groupsize pointer to store log10 of size of group success pointer to store whether symmetry computation was successful\n\nDefinition at line 669 of file presol_symmetry.c.\n\nReferenced by determineSymmetry().\n\n◆ computeNOrbitVars()\n\n static SCIP_RETCODE computeNOrbitVars ( SCIP * scip, SCIP_PRESOLDATA * presoldata, SCIP_Bool completestatistic )\nstatic\nParameters\n scip SCIP instance presoldata presolver data completestatistic whether a complete statistic on affected vars should be computed\n\nDefinition at line 1308 of file presol_symmetry.c.\n\nReferenced by determineSymmetry().\n\n◆ determineSymmetry()\n\n static SCIP_RETCODE determineSymmetry ( SCIP * scip, SCIP_PRESOLDATA * presoldata, SYM_SPEC symspecrequire, SYM_SPEC symspecrequirefixed )\nstatic\n\ndetermine symmetry\n\nParameters\n scip SCIP instance presoldata presolver data symspecrequire symmetry specification for which we need to compute symmetries symspecrequirefixed symmetry specification of variables which must be fixed by symmetries\n\nDefinition at line 1369 of file presol_symmetry.c.\n\n◆ SCIP_DECL_PRESOLINIT()\n\n static SCIP_DECL_PRESOLINIT ( presolInitSymmetry )\nstatic\n\ninitialization method of presolver (called after problem was transformed)\n\nDefinition at line 1550 of file presol_symmetry.c.\n\n◆ SCIP_DECL_PRESOLEXIT()\n\n static SCIP_DECL_PRESOLEXIT ( presolExitSymmetry )\nstatic\n\ndeinitialization method of presolver (called before transformed problem is freed)\n\nDefinition at line 1569 of file presol_symmetry.c.\n\n◆ SCIP_DECL_PRESOLFREE()\n\n static SCIP_DECL_PRESOLFREE ( presolFreeSymmetry )\nstatic\n\ndestructor of presolver to free user data (called when SCIP is exiting)\n\nDefinition at line 1614 of file presol_symmetry.c.\n\n◆ SCIP_DECL_PRESOLEXEC()\n\n static SCIP_DECL_PRESOLEXEC ( 
presolExecSymmetry )\nstatic\n\nexecution method of presolver\n\nDefinition at line 1635 of file presol_symmetry.c.\n\nReferences NULL, PRESOL_NAME, SCIP_DIDNOTRUN, SCIP_OKAY, and SCIPpresolGetName().\n\n◆ SCIPincludePresolSymmetry()\n\n SCIP_RETCODE SCIPincludePresolSymmetry ( SCIP * scip )\n\ninclude symmetry constraint handler\n\nParameters\n scip SCIP data structure\n\nDefinition at line 1654 of file presol_symmetry.c.\n\nReferenced by SCIPincludeDefaultPlugins().\n\n SCIP_RETCODE SCIPgetGeneratorsSymmetry ( SCIP * scip, SYM_SPEC symspecrequire, SYM_SPEC symspecrequirefixed, SCIP_Bool recompute, int * npermvars, SCIP_VAR *** permvars, int * nperms, int *** perms, SCIP_Real * log10groupsize, SCIP_Bool * binvaraffected )\n\nreturn symmetry group generators\n\nParameters\n scip SCIP data structure symspecrequire symmetry specification for which we need to compute symmetries symspecrequirefixed symmetry specification of variables which must be fixed by symmetries recompute Have symmetries already been computed? npermvars pointer to store number of variables for permutations permvars pointer to store variables on which permutations act nperms pointer to store number of permutations perms pointer to store permutation generators as (nperms x npermvars) matrix log10groupsize pointer to store log10 of group size (or NULL) binvaraffected pointer to store whether binary variables are affected\n\nDefinition at line 1712 of file presol_symmetry.c.\n\nReferenced by getSymmetries(), and tryAddSymmetryHandlingConss().\n\n◆ SCIPgetPermvarsObjSymmetry()\n\n SCIP_RETCODE SCIPgetPermvarsObjSymmetry ( SCIP * scip, SCIP_Real ** permvarsobj )\n\nreturn objective coefficients of permuted variables at time of symmetry computation\n\nParameters\n scip SCIP data structure permvarsobj pointer to store objective coefficients of permuted variables (NULL if not available)\n\nDefinition at line 1799 of file presol_symmetry.c.\n\nReferenced by propagateOrbitalFixing()."
] | [
null,
"https://scip.zib.de/doc-6.0.2/html/search/mag_sel.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5458328,"math_prob":0.79404,"size":20824,"snap":"2019-43-2019-47","text_gpt3_token_len":5546,"char_repetition_ratio":0.195293,"word_repetition_ratio":0.13314176,"special_character_ratio":0.20231463,"punctuation_ratio":0.24795155,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9680594,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T03:33:52Z\",\"WARC-Record-ID\":\"<urn:uuid:554929a1-05a9-40bd-9edb-2475a18af3ea>\",\"Content-Length\":\"110708\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:12d74b43-35c7-4b4f-9861-e6bb31f39614>\",\"WARC-Concurrent-To\":\"<urn:uuid:eb473e3a-5143-412d-b9de-2f34bfc0062d>\",\"WARC-IP-Address\":\"130.73.108.67\",\"WARC-Target-URI\":\"https://scip.zib.de/doc-6.0.2/html/presol__symmetry_8c.php\",\"WARC-Payload-Digest\":\"sha1:44ETHQCABICG4OMQBBCZQTXSBUS65BFA\",\"WARC-Block-Digest\":\"sha1:3XK64XCCRKV3QG2HWG2ZLQNKZELFLEMR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987751039.81_warc_CC-MAIN-20191021020335-20191021043835-00235.warc.gz\"}"} |
https://www.encyclopediaofmath.org/index.php/Golubev%E2%80%93Privalov_theorem | [
"# Golubev-Privalov theorem\n\n(Redirected from Golubev–Privalov theorem)\n\nIf",
null,
"is a complex summable function on a closed rectifiable Jordan curve",
null,
"in the complex",
null,
"-plane, then a necessary and sufficient condition for the existence of a function",
null,
", regular in the interior of the domain",
null,
"bounded by",
null,
"and whose angular boundary values coincide with",
null,
"almost-everywhere on",
null,
", is",
null,
"(1)\n\nThese conditions are known as the Golubev–Privalov conditions. That they are sufficient has been shown by V.V. Golubev ; that they are necessary has been shown by I.I. Privalov . In other words, conditions (1) are necessary and sufficient for the integral of Cauchy–Lebesgue type (cf. Cauchy integral)",
null,
"constructed for the function",
null,
"and the curve",
null,
":",
null,
"to be a Cauchy–Lebesgue integral.\n\nIn a more general formulation, let",
null,
"be a complex Borel measure on",
null,
". Then the integral of Cauchy–Stieltjes type (cf. Cauchy integral),",
null,
"is a Cauchy–Stieltjes integral if and only if the generalized Golubev–Privalov conditions",
null,
"(2)\n\nare satisfied.\n\nIn other words, conditions (2) are necessary and sufficient for the existence of a regular function",
null,
"in",
null,
"such that its angular boundary values coincide almost-everywhere (with respect to Lebesgue measure) on",
null,
"with",
null,
"where",
null,
"is the angle between the positive direction of the abscissa axis and the tangent to",
null,
"at the point",
null,
"and",
null,
"is the derivative of",
null,
"with respect to Lebesgue measure (arc length) on",
null,
".\n\nThe Golubev–Privalov theorem is of importance in the theory of boundary properties of analytic functions."
] | [
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g0445801.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g0445802.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g0445803.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g0445804.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g0445805.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g0445806.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g0445807.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g0445808.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g0445809.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458010.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458011.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458012.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458013.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458014.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458015.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458016.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458017.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458018.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458019.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458020.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458021.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458022.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458023.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458024.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458025.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458026.png",
null,
"https://www.encyclopediaofmath.org/legacyimages/g/g044/g044580/g04458027.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8440676,"math_prob":0.9692902,"size":2235,"snap":"2019-13-2019-22","text_gpt3_token_len":598,"char_repetition_ratio":0.12684895,"word_repetition_ratio":0.018867925,"special_character_ratio":0.23221476,"punctuation_ratio":0.15931372,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9659986,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-25T18:41:17Z\",\"WARC-Record-ID\":\"<urn:uuid:18750544-233a-4510-bbe9-c301628f433e>\",\"Content-Length\":\"19827\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4d7dca10-2488-495d-b46a-b1e9aa0d3280>\",\"WARC-Concurrent-To\":\"<urn:uuid:0bf4a67d-bb6c-43a8-b350-2e31b702ef22>\",\"WARC-IP-Address\":\"80.242.138.72\",\"WARC-Target-URI\":\"https://www.encyclopediaofmath.org/index.php/Golubev%E2%80%93Privalov_theorem\",\"WARC-Payload-Digest\":\"sha1:BGJQ3CQRCPGZSITNGMLA5WEQO5I62UHJ\",\"WARC-Block-Digest\":\"sha1:URMQEVPGLHKE32NZD63AC6GURDHIZIR4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912204086.87_warc_CC-MAIN-20190325174034-20190325200034-00422.warc.gz\"}"} |
http://clay6.com/qa/74825/two-amplifiers-are-connected-one-after-the-other-in-series-cascaded-the-fir | [
"",
null,
"# Two amplifiers are connected one after the other in series (cascaded). The first amplifier has a voltage gain of $10$ and the second has a voltage gain of 20. If the input signal is $0.01\\;V$ calculate the output a.c signal\n$\\begin{array}{1 1} 2\\;V \\\\ 4\\;V \\\\ 6\\;V \\\\ 1\\;V \\end{array}$"
] | [
null,
"http://clay6.com/images/down_arrow_square.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.62305945,"math_prob":0.99916685,"size":819,"snap":"2020-34-2020-40","text_gpt3_token_len":244,"char_repetition_ratio":0.10306749,"word_repetition_ratio":0.044444446,"special_character_ratio":0.25274727,"punctuation_ratio":0.11464968,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9959471,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T00:28:46Z\",\"WARC-Record-ID\":\"<urn:uuid:79efab15-b20e-4113-9447-dbffc95408ab>\",\"Content-Length\":\"18320\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:19810b58-5bce-46c8-8308-312a8299506c>\",\"WARC-Concurrent-To\":\"<urn:uuid:09c82fa3-b797-4bf2-a9b4-0844832400cc>\",\"WARC-IP-Address\":\"139.162.17.55\",\"WARC-Target-URI\":\"http://clay6.com/qa/74825/two-amplifiers-are-connected-one-after-the-other-in-series-cascaded-the-fir\",\"WARC-Payload-Digest\":\"sha1:YTN2E27XBJPZ2AHEZSXILH6SHVAHVOIS\",\"WARC-Block-Digest\":\"sha1:K25WTMHNGT5TZOWYFP3JANX5QKNENSML\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400249545.55_warc_CC-MAIN-20200926231818-20200927021818-00708.warc.gz\"}"} |
https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20160111/006565.html | [
"# [swift-evolution] Epic: Typesafe calculations\n\nMatt Whiteside mwhiteside.dev at gmail.com\nWed Jan 13 22:27:05 CST 2016\n\n```> On Jan 13, 2016, at 11:16, Stephen Canon <scanon at apple.com> wrote:\n>\n> … in exact arithmetic, not in floating-point, which is what people tend to actually use for matrices. For that matter, the entire notion of “invertible” as a binary attribute is problematic when dealing with approximate arithmetic; a matrix may be formally invertible but so ill-conditioned that the result is effectively meaningless.\n>\n> Not really trying to shoot you down, just observing that this is an extremely subtle example with lots of hidden sharp edges.\n>\n> – Steve\n\nThat’s a good point. So then I wonder if exact arithmetic wouldn’t be good enough for a lot of entry level math & science applications, especially the ones where this type of approach would come to mind in the first place.\n\nSomething like,\n\nstruct Rational:CustomDebugStringConvertible{\nlet numerator, denominator: Int\nvar debugDescription:String{\nget{\nreturn \"\\(Double(numerator)/Double(denominator))\"\n}\n}\n}\n\nfunc *(lhs:Rational,rhs:Rational) -> Rational{\nreturn Rational(numerator: lhs.numerator*rhs.numerator,\ndenominator: lhs.denominator * rhs.denominator)\n}\n\nfunc *(lhs:Int,rhs:Rational) -> Rational{\nreturn Rational(numerator: lhs*rhs.numerator,\ndenominator: rhs.denominator)\n}\n\nlet π = Rational(numerator: 355, denominator: 113)\nlet two_π = 2 * π\n\nmight get you pretty far with 64 bits.\n\nMatt\n\n-------------- next part --------------\nAn HTML attachment was scrubbed...\nURL: <https://lists.swift.org/pipermail/swift-evolution/attachments/20160113/7700bdfe/attachment.html>\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8146632,"math_prob":0.71792537,"size":1697,"snap":"2021-43-2021-49","text_gpt3_token_len":410,"char_repetition_ratio":0.13526285,"word_repetition_ratio":0.0,"special_character_ratio":0.25574544,"punctuation_ratio":0.18566775,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9687591,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T21:02:33Z\",\"WARC-Record-ID\":\"<urn:uuid:8577ad21-420b-48f4-bc82-97b3a18cfaae>\",\"Content-Length\":\"4896\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:054389fe-6709-4918-baa2-d272d4a59d3b>\",\"WARC-Concurrent-To\":\"<urn:uuid:17749a1f-709b-488c-87d6-8a5c8792f4fe>\",\"WARC-IP-Address\":\"17.188.22.4\",\"WARC-Target-URI\":\"https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20160111/006565.html\",\"WARC-Payload-Digest\":\"sha1:G2EH74LCLUYSAMHKOL4UERJIU6ZXGGOR\",\"WARC-Block-Digest\":\"sha1:H3FI2FHHOL3XQ37UJ24UUTUEYBQDM2M2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585768.3_warc_CC-MAIN-20211023193319-20211023223319-00607.warc.gz\"}"} |
https://calcforme.com/percentage-calculator/103-is-203-percent-of-what | [
"# 103 is 203 Percent of what?\n\n## 103 is 203 Percent of 50.74\n\n%\n\n103 is 203% of 50.74\n\nCalculation steps:\n\n103 ÷ ( 203 ÷ 100 ) = 50.74\n\n### Calculate 103 is 203 Percent of what?\n\n• F\n\nFormula\n\n103 ÷ ( 203 ÷ 100 )\n\n• 1\n\nPercent to decimal\n\n203 ÷ 100 = 2.03\n\n• 2\n\n103 ÷ 2.03 = 50.74 So 103 is 203% of 50.74\n\nExample"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7099776,"math_prob":0.98709553,"size":182,"snap":"2023-40-2023-50","text_gpt3_token_len":87,"char_repetition_ratio":0.21910113,"word_repetition_ratio":0.2,"special_character_ratio":0.6923077,"punctuation_ratio":0.1590909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99883074,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T18:30:08Z\",\"WARC-Record-ID\":\"<urn:uuid:b9982eaa-90b7-4b36-965e-dc08b6013fd0>\",\"Content-Length\":\"15520\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:13931d09-c6e0-41b5-abfe-acb48ea9daef>\",\"WARC-Concurrent-To\":\"<urn:uuid:cd011694-6ec4-43c9-acb3-903ea58f2b9f>\",\"WARC-IP-Address\":\"76.76.21.164\",\"WARC-Target-URI\":\"https://calcforme.com/percentage-calculator/103-is-203-percent-of-what\",\"WARC-Payload-Digest\":\"sha1:XN252RQLHSNCXU4KBK4CL3V45NXS7YAC\",\"WARC-Block-Digest\":\"sha1:JLLC6CMKFWKPN4PFIMYHOJUJLKBQE6Y3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100769.54_warc_CC-MAIN-20231208180539-20231208210539-00142.warc.gz\"}"} |
https://superior-papers.com/indicate-the-effect-of-each-of-these-errors-on-working-capital-current-ratio-assume-that-the/ | [
"# Indicate the effect of each of these errors on working capital, current ratio (assume that the…\n\n(Inventory Errors—Periodic) Thomason Company makes the following errors during the current year. (In all cases, assume ending inventory in the following year is correctly stated.)\n\n1. Both ending inventory and purchases and related accounts payable are understated. (Assume this purchase was recorded and paid for in the following year.)\n\n2. Ending inventory is overstated, but purchases and related accounts payable are recorded correctly.\n\n3. Ending inventory is correct, but a purchase on account was not recorded. (Assume this purchase was recorded and paid for in the following year.)\n\nInstructions\n\nIndicate the effect of each of these errors on working capital, current ratio (assume that the current ratio is greater than 1), retained earnings, and net income for the current year and the subsequent year.\n\n0 replies"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.96386,"math_prob":0.68482566,"size":971,"snap":"2020-45-2020-50","text_gpt3_token_len":193,"char_repetition_ratio":0.12512927,"word_repetition_ratio":0.2857143,"special_character_ratio":0.19258496,"punctuation_ratio":0.115606934,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96241933,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-23T17:04:12Z\",\"WARC-Record-ID\":\"<urn:uuid:ed2bfad6-9b7c-43ef-ae56-2b3a906ec12f>\",\"Content-Length\":\"66923\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:66586b21-d896-4318-a7da-8d2b9bad653f>\",\"WARC-Concurrent-To\":\"<urn:uuid:14d96cd7-b19e-446a-a8b5-66c2612e4fa2>\",\"WARC-IP-Address\":\"199.188.200.186\",\"WARC-Target-URI\":\"https://superior-papers.com/indicate-the-effect-of-each-of-these-errors-on-working-capital-current-ratio-assume-that-the/\",\"WARC-Payload-Digest\":\"sha1:F3KC7RJKUWOASXVTE3IYGUP4S6QMNWPO\",\"WARC-Block-Digest\":\"sha1:YXZ4KC675A75MF6DVL3O2UOAZ3GPWPNG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141163411.0_warc_CC-MAIN-20201123153826-20201123183826-00523.warc.gz\"}"} |
https://www.idealhometuition.com/2014/04/mechanical-properties-of-fluids-ncert_8136.html | [
"## Pages\n\n### Mechanical Properties of Fluids NCERT Solutions Class 11 Physics - Solved Exercise Question 10.26\n\nQuestion 10.26:\n(a) What is the largest average velocity of blood flow in an artery of radius 2 × 10–3 m if the flow must remain laminar? (b) What is the corresponding flow rate? (Take viscosity of blood to be 2.084 × 10–3 Pa s).\nSolution:\n(a)Radius of the artery, r = 2 × 10–3 m\nDiameter of the artery, d = 2 × 2 × 10–3 m = 4 × 10– 3 m\nViscosity of blood, η = 2.084 X 10-3 Pa s\nDensity of blood, ρ = 1.06 × 103 kg/m3\nReynolds’ number for laminar flow, NR = 2000\nThe largest average velocity of blood is given by the relation:\nVarg = NRη / ρd\n= 2000 X 2.084 X 10-3 / (1.06 X 103 X 4 X 10-3)\n= 0.983 m/s\nTherefore, the largest average velocity of blood is 0.983 m/s.\n\n(b) Flow rate is given by the relation:\nR = π r2Vavg\n= 3.14 X (2 X 10-3)2 X 0.983\n= 1.235 X 10-5 m3s-1\nTherefore, the corresponding flow rate is 1.235 X 10-5 m3s-1"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7938021,"math_prob":0.99651986,"size":1406,"snap":"2021-04-2021-17","text_gpt3_token_len":576,"char_repetition_ratio":0.13908702,"word_repetition_ratio":0.78807944,"special_character_ratio":0.45234707,"punctuation_ratio":0.12806539,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99453646,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-26T14:15:21Z\",\"WARC-Record-ID\":\"<urn:uuid:73496dc8-e2ee-4524-8329-7413adffc292>\",\"Content-Length\":\"47525\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:946b367d-9e77-42ba-9390-8ef10fd254a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:9460e060-8b0e-4ba0-aaf2-5969e1cf1493>\",\"WARC-IP-Address\":\"172.217.2.115\",\"WARC-Target-URI\":\"https://www.idealhometuition.com/2014/04/mechanical-properties-of-fluids-ncert_8136.html\",\"WARC-Payload-Digest\":\"sha1:YPNGDIVPWFWJ6XZOX3PCJOIP4R3R56DZ\",\"WARC-Block-Digest\":\"sha1:JYIAPEVGBM7G4KGAE5VQIM6LRGGEME2J\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704800238.80_warc_CC-MAIN-20210126135838-20210126165838-00124.warc.gz\"}"} |
https://elearning.reb.rw/course/view.php?id=296§ion=9 | [
"•",
null,
"### General\n\n•",
null,
"### Unit :9 Mathematics\n\nUnit 9 Mathematics",
null,
"Exercise 1: Listen to numbers and write\nthem down\n1. Take turns to say the numbers from one\nto ten out loud.\n2. Write the numbers out as words.\n3. Complete the sentences.\na. Five comes after ___ and before ___.\nb. Three comes after ___.\nc. The number before eight is ___.\n\nExercise 2: Names of numbers\nSay the names of all the numbers in this\n\nnumber square.",
null,
"",
null,
"Learning box\nWhen we add ten hundreds together we\nget one thousand.\nTen hundreds equal one thousand.\n\nWe write a thousand as 1,000.",
null,
"When we do addition, we\ntogether to make a total.\nWe use the plus sign (+) to\n\n1. Write the calculations in words.\na. 4 + 7 = 11 b. 3 + 5 = 8\n2. Write the words below in the correct\norder to make a calculation.\nnine, four, equals, plus, five\n3. Complete the sentences.\nIn our class there are ____ boys and\n\n____ girls. ____ plus ____ equals ____.\n\nWhen we do subtraction, we take one amount\naway from another amount. It is the opposite\nof addition. We use the minus sign (–) to show\nthat we are taking something away.\n\nExercise 5: Practise subtraction\n1. There are eight bananas. The monkey\neats five. How many bananas are left?\nWrite it like this: Eight minus five\nequals ____.\n2. Write the calculations in words.\na. 7 – 4 = 3 b. 20 – 8 = 12\n3. Write the calculations in numbers.\na. Ten minus two equals eight.\n\nb. Eleven minus five equals six.",
null,
"",
null,
"When we multiply, we take\n\ntogether a number of times\n\nExample\n5 multiplied by 4 = 5 + 5 + 5 + 5 = 20\nWe took the number 5 and added it\ntogether 4 times. This is why\n\nmultiplication is sometimes called “times”.\n\nExercise 6: Practise multiplication",
null,
"1. A girl has three bags\nof apples, with six\napples in each bag.\na. How many apples\ndoes she have?\nb. Write the\ncalculation in\nnumbers.\n2. Write these calculations in words.\na. 3 × 3 = 9\n\nb. 2 × 7 = 14",
null,
"Division is breaking up or\ndividing a number into an\n\nequal number of parts.\n\nExample\n20 divided by 4 = ?\nIf you take 20 bananas and put them into\nfour equal-sized groups, there will be 5\n\nbananas in each group. The answer is 5.\n\nExercise 7: Practise division\n1. In this bag there are 24 sweets. There\nare 6 boys who want to\nshare them.",
null,
"24 divided by 6 = ____.\nEach boy will get ____\nsweets.\n2. What is ten divided by five? Write the\ncalculation in numbers.\n3. Write the calculations in words.\n\na. 10 ÷ 2 = 5 b. 12 ÷ 4 = 3\n\nExercise 8: Match the words and numbers\nMatch the sentences in column 1 with the\n\ncalculations in column 2.",
null,
"9.6 Review\n1. Write the numbers as words.",
null,
"a. 9 b. 3 c. 5\nd. 11 e. 20 f. 100\n2. Match the signs in the\ntable with the correct\nword.\n3. Count backwards from 20\nto 1. Write the numbers in\n4. Match the calculations\nwith the correct numbers\n\nin words.",
null,
"5. Do the following calculations in groups.\n\na. 19 – 4 =\nb. 30 ÷ 5 =\nc. 21 + 3 =\n\nd. 6 x 5 =\n\nUnit8: Animals, birds and insectsUnit 10 Talk about events in the past and future"
] | [
null,
"https://elearning.reb.rw/theme/image.php/mb2mcl/core/1673317101/spacer",
null,
"https://elearning.reb.rw/theme/image.php/mb2mcl/core/1673317101/spacer",
null,
"https://elearning.reb.rw/pluginfile.php/2572/course/section/3284/41%20%282%29.PNG",
null,
"https://elearning.reb.rw/pluginfile.php/2572/course/section/3284/42%20%282%29.PNG",
null,
"https://elearning.reb.rw/pluginfile.php/2572/course/section/3284/43%20%281%29.PNG",
null,
"https://elearning.reb.rw/pluginfile.php/2572/course/section/3284/44%20%281%29.PNG",
null,
"https://elearning.reb.rw/pluginfile.php/2572/course/section/3284/46%20%281%29.PNG",
null,
"https://elearning.reb.rw/pluginfile.php/2572/course/section/3284/47%20%281%29.PNG",
null,
"https://elearning.reb.rw/pluginfile.php/2572/course/section/3284/48%20%281%29.PNG",
null,
"https://elearning.reb.rw/pluginfile.php/2572/course/section/3284/50%20%281%29.PNG",
null,
"https://elearning.reb.rw/pluginfile.php/2572/course/section/3284/51%20%283%29.PNG",
null,
"https://elearning.reb.rw/pluginfile.php/2572/course/section/3284/52%20%282%29.PNG",
null,
"https://elearning.reb.rw/pluginfile.php/2572/course/section/3284/53%20%281%29.PNG",
null,
"https://elearning.reb.rw/pluginfile.php/2572/course/section/3284/54%20%281%29.PNG",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8653651,"math_prob":0.99765986,"size":2987,"snap":"2022-40-2023-06","text_gpt3_token_len":877,"char_repetition_ratio":0.15119007,"word_repetition_ratio":0.023931624,"special_character_ratio":0.30532306,"punctuation_ratio":0.17316018,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995476,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,4,null,4,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T15:37:31Z\",\"WARC-Record-ID\":\"<urn:uuid:13aa3b0e-e488-4e55-ae93-358935e5feab>\",\"Content-Length\":\"50249\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4099bc4e-d9e4-408b-91e1-c0a5cbe85678>\",\"WARC-Concurrent-To\":\"<urn:uuid:aae05b84-cb7d-4fca-b6a1-0a06a657be87>\",\"WARC-IP-Address\":\"197.243.23.249\",\"WARC-Target-URI\":\"https://elearning.reb.rw/course/view.php?id=296§ion=9\",\"WARC-Payload-Digest\":\"sha1:CVYBTKW4E3Y6ZYYRWCLEQQZ5JXRBNIO7\",\"WARC-Block-Digest\":\"sha1:LJYYWMTE34HCVGQQMBK6CELULUWBMTWG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764494986.94_warc_CC-MAIN-20230127132641-20230127162641-00228.warc.gz\"}"} |
https://ris-ai.com/stock-price-prediction-with-machine-learning | [
"• +91-9872993883\n• +91-8283824812\n• info@ris-ai.com\n\n# Stock Price Prediction with Machine Learning ¶\n\nThe Stock Market is known as a place where people can make a fortune if they can successfully predict stock prices as Stock Price Prediction is very important in finance and business for decision making. In this article , we will create a Linear Regression model and a Decision Tree Regression Model to Predict Apple’s Stock Price using Machine Learning and Python . Stock Dataset is also provided , you can down load it.\n\nA stock price may depend on several factors operating in the current world and stock market. We will try to take into account a combination of mainly two factors:\n\n• 1. How the increase and decrease of stock prices of the other companies affect the stock price of a given target company.\n• 2. The past performances of the target company.\n\nImport pandas to import a CSV file:\n\nIn :\nimport pandas as pd\n\n Date Open High Low Close Adj Close \\\n0 2014-09-29 100.589996 100.690002 98.040001 99.620003 93.514290\n1 2014-10-06 99.949997 102.379997 98.309998 100.730003 94.556244\n2 2014-10-13 101.330002 101.779999 95.180000 97.669998 91.683792\n3 2014-10-20 98.320000 105.489998 98.220001 105.220001 98.771042\n4 2014-10-27 104.849998 108.040001 104.699997 108.000000 101.380676\n\nVolume\n0 142718700\n1 280258200\n2 358539800\n3 358532900\n4 220230600\n\n\nTo get the number of training days:\n\nIn :\nprint(\"training days =\",apple.shape)\n\ntraining days = (184, 7)\n\n\nTo Visualize the close price Data:\n\nIn :\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set()\nplt.figure(figsize=(10, 4))\nplt.title(\"Apple's Stock Price\")\nplt.xlabel(\"Days\")\nplt.ylabel(\"Close Price USD ($)\") plt.plot(apple[\"Close\"]) plt.show()",
null,
"To get the close price: In : apple = apple[[\"Close\"]] print(apple.head()) Close 0 99.620003 1 100.730003 2 97.669998 3 105.220001 4 108.000000 Creating a variable to predict ‘X’ days in the future: In : futureDays = 25 Create a new target column shifted ‘X’ units/days up: In : apple[\"Prediction\"] = apple[[\"Close\"]].shift(-futureDays) print(apple.head()) print(apple.tail()) Close Prediction 0 99.620003 123.250000 1 100.730003 125.320000 2 97.669998 127.099998 3 105.220001 124.750000 4 108.000000 130.279999 Close Prediction 179 179.979996 NaN 180 178.020004 NaN 181 164.940002 NaN 182 167.779999 NaN 183 167.779999 NaN To create a feature dataset (x) and convert into a numpy array and remove last ‘x’ rows/days: In : import numpy as np x = np.array(apple.drop([\"Prediction\"], 1))[:-futureDays] # print(x) To create a target dataset (y) and convert it to a numpy array and get all of the target values except the last ‘x’ rows days: In : y = np.array(apple[\"Prediction\"])[:-futureDays] print(y) [123.25 125.32 127.099998 124.75 130.279999 128.949997 127.620003 128.770004 132.539993 130.279999 128.649994 127.169998 126.599998 126.75 126.440002 123.279999 129.619995 124.5 121.300003 115.519997 115.959999 105.760002 113.290001 109.269997 114.209999 113.449997 114.709999 110.379997 112.120003 111.040001 119.080002 119.5 121.059998 112.339996 119.300003 117.809998 119.029999 113.18 106.029999 108.029999 105.260002 96.959999 97.129997 101.419998 97.339996 94.019997 93.989998 96.040001 96.910004 103.010002 102.260002 105.919998 105.669998 109.989998 108.660004 109.849998 105.68 93.739998 92.720001 90.519997 95.220001 100.349998 97.919998 98.830002 95.330002 93.400002 95.889999 96.68 98.779999 98.660004 104.209999 107.480003 108.18 109.360001 106.940002 107.730003 103.129997 114.919998 112.709999 113.050003 114.059998 117.629997 116.599998 113.720001 108.839996 108.43 110.059998 111.790001 109.900002 113.949997 115.970001 116.519997 115.82 117.910004 119.040001 120. 121.949997 129.080002 132.119995 135.720001 136.660004 139.779999 139.139999 139.990005 140.639999 143.660004 143.339996 141.050003 142.270004 143.649994 148.960007 156.100006 153.059998 153.610001 155.449997 148.979996 142.270004 146.279999 144.020004 144.179993 149.039993 150.270004 149.5 156.389999 157.479996 157.5 159.860001 164.050003 158.630005 159.880005 151.889999 154.119995 155.300003 156.990005 156.25 163.050003 172.5 174.669998 170.149994 174.970001 171.050003 169.369995 173.970001 175.009995 169.229996 175. 
177.089996 178.460007 171.509995 160.5 156.410004 172.429993 175.5 176.210007 179.979996 178.020004 164.940002 167.779999 167.779999] ### Split the data into 75% training and 25% testing ¶ In : from sklearn.model_selection import train_test_split xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size=0.25) ### Creating Models¶ In : # Creating the decision tree regressor model from sklearn.tree import DecisionTreeRegressor tree = DecisionTreeRegressor().fit(xtrain, ytrain) # creating the Linear Regression model from sklearn.linear_model import LinearRegression linear = LinearRegression().fit(xtrain, ytrain) To get the last ‘x’ rows/days of the feature dataset: In : xfuture = apple.drop([\"Prediction\"], 1)[:-futureDays] xfuture = xfuture.tail(futureDays) xfuture = np.array(xfuture) print(xfuture) [[143.649994] [148.960007] [156.100006] [153.059998] [153.610001] [155.449997] [148.979996] [142.270004] [146.279999] [144.020004] [144.179993] [149.039993] [150.270004] [149.5 ] [156.389999] [157.479996] [157.5 ] [159.860001] [164.050003] [158.630005] [159.880005] [151.889999] [154.119995] [155.300003] [156.990005]] #### To see the model tree prediction¶ In : treePrediction = tree.predict(xfuture) print(\"Decision Tree prediction =\",treePrediction) Decision Tree prediction = [154.119995 163.050003 171.509995 174.669998 174.669998 174.970001 163.050003 163.18 173.970001 175.009995 169.229996 175. 177.089996 178.460007 171.509995 160.5 156.410004 176.210007 175.5 176.210007 176.210007 178.020004 174.669998 167.779999 167.779999] #### To see the model linear regression prediction ¶ In : linearPrediction = linear.predict(xfuture) print(\"Linear regression Prediction =\",linearPrediction) Linear regression Prediction = [152.36331808 157.55282578 164.53078983 161.55977169 162.09729294 163.89553212 157.57236115 151.01464554 154.93363781 152.72493108 152.8812893 157.63099658 158.83309374 158.08056419 164.81420173 165.87946237 165.8990124 168.2054556 172.3003698 167.0033731 168.22500563 160.41632462 162.59571315 163.74894208 165.40059121] ## Visualize decision tree predictions¶ In : predictions = treePrediction valid = apple[x.shape:] valid[\"Predictions\"] = predictions plt.figure(figsize=(10, 6)) plt.title(\"Apple's Stock Price Prediction Model(Decision Tree Regressor Model)\") plt.xlabel(\"Days\") plt.ylabel(\"Close Price USD ($)\")\nplt.plot(apple[\"Close\"])\nplt.plot(valid[[\"Close\", \"Predictions\"]])\nplt.legend([\"Original\", \"Valid\", \"Predictions\"])\nplt.show()\n\n/.local/lib/python3.5/site-packages/ipykernel_launcher.py:3: SettingWithCopyWarning:\nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\nThis is separate from the ipykernel package so we can avoid doing imports until",
null,
"## Visualize the linear model predictions¶\n\nIn :\npredictions = linearPrediction\nvalid = apple[x.shape:]\nvalid[\"Predictions\"] = predictions\nplt.figure(figsize=(10, 6))\nplt.title(\"Apple's Stock Price Prediction Model(Linear Regression Model)\")\nplt.xlabel(\"Days\")\nplt.ylabel(\"Close Price USD (\\$)\")\nplt.plot(apple[\"Close\"])\nplt.plot(valid[[\"Close\", \"Predictions\"]])\nplt.legend([\"Original\", \"Valid\", \"Predictions\"])\nplt.show()\n\n/.local/lib/python3.5/site-packages/ipykernel_launcher.py:3: SettingWithCopyWarning:\nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\nThis is separate from the ipykernel package so we can avoid doing imports until",
null,
""
] | [
null,
"https://ris-ai.com/static/images/models/apple-stock-price.jpg",
null,
"https://ris-ai.com/static/images/models/apple-stock-price-prediction-decision-tree.jpg",
null,
"https://ris-ai.com/static/images/models/apple-stock-price-prediction-linear-regression.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5449079,"math_prob":0.7580628,"size":7442,"snap":"2023-40-2023-50","text_gpt3_token_len":2509,"char_repetition_ratio":0.12557139,"word_repetition_ratio":0.16484849,"special_character_ratio":0.5640957,"punctuation_ratio":0.24297924,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96232045,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-04T00:58:10Z\",\"WARC-Record-ID\":\"<urn:uuid:981accf4-b328-417c-a7c9-6b70f08d70f0>\",\"Content-Length\":\"109930\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f3647447-f159-45d7-ade1-455e6c7c32c8>\",\"WARC-Concurrent-To\":\"<urn:uuid:697eb6ca-a226-4461-8fe1-1fd311fb377e>\",\"WARC-IP-Address\":\"68.66.226.100\",\"WARC-Target-URI\":\"https://ris-ai.com/stock-price-prediction-with-machine-learning\",\"WARC-Payload-Digest\":\"sha1:XYZDI5SNQSBMVLEHDULNJKUJIED2QOQF\",\"WARC-Block-Digest\":\"sha1:OMHUPE6QGFQVKGNYWBV6Y62EIDGHHBPT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100518.73_warc_CC-MAIN-20231203225036-20231204015036-00775.warc.gz\"}"} |
https://www.cuemath.com/ncert-solutions/q-1-exercise-2-1-linear-equations-in-one-variable-class-8-maths/ | [
"In the verge of coronavirus pandemic, we are providing FREE access to our entire Online Curriculum to ensure Learning Doesn't STOP!\n\n# EX.2.1 Q1 Linear Equations in One Variable Solutions - NCERT Maths Class 8\n\nGo back to 'Ex.2.1'\n\n## Question\n\nSolve the equation:\n\n$x- 2= 7$\n\nVideo Solution\nLinear Equations\nEx 2.1 | Question 1\n\n## Text Solution\n\nWhat is known?\n\nEquations\n\nWhat is unknown?\n\nValue of the variable\n\nReasoning:\n\nIn an equation values of left-hand side (LHS) and right-hand side (RHS) are equal. The two sides of the equation are balanced. We perform mathematical operations so that the balance is not disturbed.\n\nSteps:\n\n$x- 2= 7$\n\nTransposing ($$-2$$) to RHS we get:\n\n\\begin{align}x &= {\\rm{7}} + {\\rm{ 2 }}\\\\\\text{ }\\!\\!~\\!\\!\\text{ }x&=9\\end{align}\n\nLearn from the best math teachers and top your exams\n\n• Live one on one classroom and doubt clearing\n• Practice worksheets in and after class for conceptual clarity\n• Personalized curriculum to keep up with school"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6960701,"math_prob":0.9881968,"size":613,"snap":"2020-10-2020-16","text_gpt3_token_len":197,"char_repetition_ratio":0.124794744,"word_repetition_ratio":0.0,"special_character_ratio":0.31973898,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9905456,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-06T21:50:27Z\",\"WARC-Record-ID\":\"<urn:uuid:410d9005-5c73-48b8-a958-6738c08dfe0d>\",\"Content-Length\":\"104947\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2ffe72d6-2af1-4c30-bd5f-c19cc2cc14cd>\",\"WARC-Concurrent-To\":\"<urn:uuid:6b4c62e6-42ed-4ddb-9eba-c25655b09a67>\",\"WARC-IP-Address\":\"13.251.150.116\",\"WARC-Target-URI\":\"https://www.cuemath.com/ncert-solutions/q-1-exercise-2-1-linear-equations-in-one-variable-class-8-maths/\",\"WARC-Payload-Digest\":\"sha1:CWDSPXD66ASQHTZC7YQYWTFSFEKCMCJ6\",\"WARC-Block-Digest\":\"sha1:ZZPHSMDZMFYKNO747UNUC2C5C5MUUQYE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371660550.75_warc_CC-MAIN-20200406200320-20200406230820-00066.warc.gz\"}"} |
https://factorization.info/factors/factor-calculator.html | [
"Factor Calculator",
null,
"We have all the information you will ever need about the factors of any number. We will provide you with the definition of factors of your number, show you how to find the factors of your number, and give you all the factors of your number.\n\nNot only that, but we will also count the factors of your number and show you the factor pairs of your number to prove that our answer is solved correctly.\n\nTo get the factors for any number, please submit it below.\n\nHere are some examples what we can explain and calculate for you:"
] | [
null,
"https://factorization.info/images/factors-of.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9165718,"math_prob":0.9776453,"size":589,"snap":"2023-40-2023-50","text_gpt3_token_len":119,"char_repetition_ratio":0.21025641,"word_repetition_ratio":0.0,"special_character_ratio":0.20373514,"punctuation_ratio":0.07826087,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9590441,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T17:01:42Z\",\"WARC-Record-ID\":\"<urn:uuid:27cc842e-b790-4c99-a753-049599701eda>\",\"Content-Length\":\"8074\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a781f45c-cbf8-4d23-8cfa-cb997e251e1c>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f5bf6d1-3d9b-4a5b-a647-f8c9b4d7b7fc>\",\"WARC-IP-Address\":\"18.67.65.63\",\"WARC-Target-URI\":\"https://factorization.info/factors/factor-calculator.html\",\"WARC-Payload-Digest\":\"sha1:F7ZWR46WY5VWAJCSRNYX2BHCYIC4U4XZ\",\"WARC-Block-Digest\":\"sha1:TQOMQGL5I4MKLDGGDWRNDIJP3YJMI6H5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233509023.57_warc_CC-MAIN-20230925151539-20230925181539-00320.warc.gz\"}"} |
https://m.montessoriteacherscollective.com/lessons/the-stamp-game/ | [
"# The Stamp Game\n\nTHE STAMP GAME\n\nNote:\nThe process is still important, however if answers are frequently incorrect re-present.\nThis material is used by the children for individual work with the Decimal System, following the group exercises done with the golden bead material.\n\nPurpose:\nTo give the child the opportunity of carrying out the four operations as individual exercises.\n\nMaterials:\nSmall colored ’tiles’ or ‘stamps’:\ngreen with ‘1’ written on them to represent units;\nblue with ’10’ to represent tens;\nred with ‘100’ to represent hundreds;\ngreen, again, marked ‘1000’ to represent thousands.\nSkittles: 1 large green, 9 of each red, blue and green representing the decimal system categories.\nSome red, blue and green plastic discs to represent decimal system categories.\nSquared paper, pencil and ruler.\n\nControl:\nThe teacher.\n\nFor each process the checking can be taught e.g., in addition one of the addends can be subtracted from the sum to find the other addend.\nIn subtraction the subtrahend can be added to the difference to find the minuend.\nIn multiplication the product can be divided by the multiplier to find the multiplicand.\nIn division the quotient can be multiplied by the divisor to find the dividend.\n\nAge:\n5 – 5.5 years\n\nPresentation:\nIndividual exercise.\nPlace the stamp game, writing materials and presentation tray (golden beads) at a table.\nThe teacher removes one stamp from each category and asks the child to identify the numeral.\nAsk the child to align the stamps with their corresponding golden beads.\nExplain that the stamps may be used, individually, for the same exercises as the golden beads.\nThe child returns the presentation tray to the shelf and the stamps to the box.\nThe number cards will no longer be necessary and instead we will write our numbers.\nThe teacher writes a four digit number beginning with the highest category.\nThe child reads and makes the quantity with the stamps.\nRepeat for a few examples.\nIntroduce some numbers with zero.\n\nNote: Use correct terminology with each operation.\nMultiplication – multiplicand, multiplier, product.\nSubtraction – minuend, subtrahend, difference.\nDivision – dividend, divisor, quotient.\nIntroduce the signs used to symbolise, e.g., + for addition; – for subtraction; x for multiplication and ¸ for division.\n\nEXERCISE 1:\nWith the child’s input, write two addends which will not require carrying.\nDraw a line under the addends and include a plus sign.\nPoint out the use of a plus sign denotes this is addition.\nRead the problem with the child. 
The child lays out the appropriate stamps for the first addend.\nEncourage the child to check by reading the quantity made with the stamps.\nPlace a ruler under the first addend and have the child lay out the second addend.\nCheck.\nRemove the ruler.\nRemind the child of the necessary process to find the answer – combine categories and count beginning with the units.\nTo combine the categories push the stamps up towards the top of the table until they form a double column per category.\nCount stamps using the category name.\nAs each stamp is counted move it slightly toward you.\nThe child records the answer in the units place – below the equal line.\nHave the child repeat the process for the other categories: tens, hundreds, thousands respectively.\nReview the problem with the child.\n\nDynamic Addition Follows the same procedure except when counting, exchange the categories as necessary by removing one stamp of the next higher category from the box and replacing the ten stamps, which have been counted, into their appropriate place in the box.\n\nMultiplication\nPresentation & EXERCISE 2:\nWith the child’s input, write a multiplicand.\nThe child chooses a multiplier of 2 or 3, which is written in the units column below the multiplicand.\nIntroduce the multiplication sign.\nRead the equation with the child.\nThe child lays out the multiplicand the appropriate number of times combines the categories and counts exchanging as necessary and records the answer for each category as he counts.\nAfter some experience use ‘0’ in the multiplicand.\n\nStatic Subtraction\nPresentation & EXERCISE 3:\nWith the child’s input, write a minuend.\nWrite a subtrahend which does not necessitate exchanging.\nIntroduce the subtraction sign. Read the problem with the child.\nThe child lays out the appropriate stamps for the minuend.\nBeginning with the units the child takes away the necessary number of stamps and replaces them into the box.\nThe child counts the remaining number of units and records the answer.\nHe repeats this process for the remaining categories in their respective order.\nReview the problem with the child.\n\nDynamic Subtraction\nFollows the same procedure except when subtracting exchange categories as necessary by replacing one stamp of the next higher category into the box and removing ten stamps of the needed category (using a ruler for spacing purposes).\n\nStatic Short Division\nPresentation & EXERCISE 4:\nWith the child’s input write a dividend as a statement and as in a process.\nIntroduce the division sign and read the problem with the child.\nRemind the child that the skittles represent the divisor.\nSet out the appropriate number of skittles horizontally.\nStack the appropriate stamps for the dividend to the left of the skittles in hierarchical order.\nReview the procedure for division: we start with the highest category and we give an equal number of stamps to each skittle.\nShare out the stamps underneath the skittles.\nThe child counts the stamps under one skittle and records the quotient above the dividend.\nRead the problem with the child.\n\nDynamic Short Division (No remainder)\nFollows the same procedure except to exchange categories as necessary.\n\nDynamic Short Division (With remainder)\nFollows the same procedure as dynamic short division, except to introduce ‘remainder’.\nWrite the remainder to the right of the quotient with a small case ‘r’ before it.\nExplain that the ‘r’ is an abbreviation of remainder."
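For readers who want the exchanging rule written out explicitly, here is a small sketch (added here; it is not part of the Montessori text) of dynamic addition by categories, trading ten stamps of one category for one stamp of the next:

```python
# Add two numbers the way the stamp game does: category by category,
# exchanging ten stamps of one category for one stamp of the next higher one.
def stamp_game_add(a, b):
    categories = ["units", "tens", "hundreds", "thousands"]  # the four stamp colours
    carry, result = 0, {}
    for i, name in enumerate(categories):
        digit_a = (a // 10**i) % 10        # stamps of this category laid out for a
        digit_b = (b // 10**i) % 10        # stamps of this category laid out for b
        total = digit_a + digit_b + carry  # combine the columns and count
        result[name] = total % 10          # stamps that stay in this category
        carry = total // 10                # ten stamps exchanged for one of the next
    return result

print(stamp_game_add(1234, 876))
# {'units': 0, 'tens': 1, 'hundreds': 1, 'thousands': 2}  -> 2110
```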
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8770655,"math_prob":0.8776383,"size":6068,"snap":"2021-43-2021-49","text_gpt3_token_len":1247,"char_repetition_ratio":0.16655673,"word_repetition_ratio":0.054933876,"special_character_ratio":0.19561635,"punctuation_ratio":0.119282514,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97690636,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-06T02:10:16Z\",\"WARC-Record-ID\":\"<urn:uuid:060b5f32-bbc9-43a2-8ee9-3b197b151dbc>\",\"Content-Length\":\"36859\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a7c26ee-a348-4baf-aee9-3c9cbab5914c>\",\"WARC-Concurrent-To\":\"<urn:uuid:1fb228d4-cedf-4f86-a6d4-a9a8ec36151f>\",\"WARC-IP-Address\":\"173.236.136.222\",\"WARC-Target-URI\":\"https://m.montessoriteacherscollective.com/lessons/the-stamp-game/\",\"WARC-Payload-Digest\":\"sha1:SUYQ7PRRPMK7JSOLLIQ735E3PJBXAQNP\",\"WARC-Block-Digest\":\"sha1:TPC7TQTDSOG2RAEHHSWYTUF7NNWMDUB5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363229.84_warc_CC-MAIN-20211206012231-20211206042231-00096.warc.gz\"}"} |
https://converthere.com/numbers/5.78e-5-written-out | [
"# 5.78e-5 Written Out in Numbers\n\n5.78e-5 is a scientific notation used in mathematics, physics, chemistry, astronomy and other sciences to handle either very large or very small numbers. With scientific notation adding, subtracting, multiplying and dividing numbers becomes much simpler.\n\nThis tool will help you convert any scientific notation into decimal, real numbers. Below you can find out how to convert 5.78e-5 to number.\n\n## How to Convert 5.78e-5 to Real Number?\n\n5.78e-5 = 5.78 x 10-5 = 0.0000578\n\nUse the following form, to convert any other scientific notation into decimal notation:\n\nScientific notation:"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88002366,"math_prob":0.8524407,"size":1291,"snap":"2023-40-2023-50","text_gpt3_token_len":411,"char_repetition_ratio":0.37296036,"word_repetition_ratio":0.0900474,"special_character_ratio":0.3261038,"punctuation_ratio":0.13986014,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99643403,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T17:48:44Z\",\"WARC-Record-ID\":\"<urn:uuid:0770a1da-4216-4617-98f3-a900f7018845>\",\"Content-Length\":\"30767\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:23a57b63-9a90-4b46-8244-db350b38b6df>\",\"WARC-Concurrent-To\":\"<urn:uuid:f018e844-45fa-437d-996d-0b6f19614bbc>\",\"WARC-IP-Address\":\"50.16.223.119\",\"WARC-Target-URI\":\"https://converthere.com/numbers/5.78e-5-written-out\",\"WARC-Payload-Digest\":\"sha1:E4XZ7TFH2RVT2QSTF6ASR2D3PEVVJUZG\",\"WARC-Block-Digest\":\"sha1:PXYXMTT5W37IFJM75BKER6KDJ6JP6TA5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510319.87_warc_CC-MAIN-20230927171156-20230927201156-00292.warc.gz\"}"} |
https://benidormclubdeportivo.org/does-time-go-on-the-x-or-y-axis/ | [
"Graphing is likely the most crucial skill girlfriend will find out in every one of your science classes. V the process of graphing and also by interpreting graphs we are able to gain brand-new insights into trouble we thought we extended in depth already. Graphing likewise makes it simpler to watch connections in between different variables. A snapshot is worth an ext than a thousand words. (Variables are quantities that can change from one difficulty to the next or sometimes likewise within a problem.)\n\nSo much we have actually tried to recognize the connection between mass and also volume and also we have functioned a good number of troubles with this variables. Graphing mass and volume will aid us deepen our understanding of mass, volume, density and also the relationship between these variables.\n\nGraphing in physics is the same as graphing in math, but just together we have uncovered out prior to in physics us are always dealing through quantities, numbers, that stand because that a distance, or a mass, or a time. We are not simply graphing numbers for their very own sake. Sine we are taking care of real physics phenomena we have actually (most the the time) no need for three of the 4 quadrants of the name: coordinates system. We will just use the first quadrant, which is bordered through the x- and the y-axis. The allude where the two lines satisfy is dubbed the beginning (0,0). These two axis stand for two variables and each allude on the graph has as such two “coordinates”. Imagine you desire to accomplish somebody in brand-new York City. You would also have to give your friend 2 “coordinates” – “5th and 82nd Street” for instance. One coordinate is for the East-West axis (horizontal) and for the North-South axis (vertical).",
null,
"The x-axis (horizontal) constantly shows the independent variable, that is the variable over which you have no control. This is most noticeable when graphing distance and time. Time will always go ~ above the x-axis, since it is independent of anything else. When we graph mass and volume we will certainly pout the volume ~ above the x-axis, for reasons that room not quite noticeable yet. Once picking your axis the is income to recognize the worths you have to graph. If the largest mass you will have to graph is 290 kg, the y-axis needs to go at least to 290. That does, however, not make lot sense to have actually it walk to a 1000. It would certainly be most judicious to pick 300 together the highest possible vertical number and divide the y-axis right into 6 equal parts: 50, 100, 150, 200, 250 and 300. The exact same is true because that the x-axis. The most necessary reason why us learn about graphing is the we are able come gain new insights into the relationships between variables. The easiest connection to recognize is as soon as the points the you graphed lied in a straight line, or space close to developing a right line. Clues cannot make a line, only if you to be to affix them would you obtain a line. However, us NEVER affix the point out that we plot – never. Rather what we perform is find a straight line that ideal fits every the points. The heat does not have to cross v all the point out (sometimes that does not also go through any kind of points), however is needs to be a “reasonable fit” of this points. View the number below. Drawing this right line shows that we think that the 2 variables have actually a unique relationship. Castle are claimed to it is in “proportional”. The graph is dubbed “linear graph”.\n\nOnce we have found that the graph is direct we can then find how steep (or shallow) the line is. In mathematics this is referred to as “the slope”. In her workbook friend have worked on several various graphs and have discovered the slope of these graphs. The slope is specifically what is suggested by the name, the slope of a meadow is how much that goes increase or down. The steep of a roof is referred to as pitch, the of a road is called grade.\n\n Back to our graph. You discover the slope by illustration a best-fit line through your data and then picking any two point out on this line. The points execute not need to be data points, castle only need to be two points top top the line. You require two point out to discover the slope. If the heat goes through the beginning it is beneficial to pick the origin as among your points. You then need to find the rise and the run in between these 2 points. Watch the power suggest for further information. The steep in the snapshot to the appropriate would it is in rise/run = 3/1 = 3. Finding the slope almost always tells united state something about the variables. If the slope has actually a meaning (in our course it always will) we will certainly then know that there is a relationship in between the two variables and also by calculating the steep we are able to write this connection in form of one equation. In math class this is done utilizing the equation: y = mx +b. The slope is given the change “m”. The change “b” is the y-intercept, the point where the line you drew intercepts the vertical (the y-) axis. In a the majority of our examples this will be zero. Try to affix what you learned around slope in math to what we room doing in physics.",
null,
"When graphing mass and also volume we perform so to obtain deeper insight into the relationship in between these two variables. Top top a mass and also volume graph because that one type of material you will soon see the all clues lie top top one right line. Think around what this means. Not only are both variables proportional but the relationship between them is a addressed one. The graph is linear. Because that every unit that you boost the mass, the volume rises by an additional fixed unit. In the graph at appropriate three different materials space plooted amss versus volume. You have the right to see the for none of them the massive increses by the exact same unit together the volume does, although the red heat (which represents ice) is close. Pay close fist to the axis, they space not the same. For the blue line, if the volume increases by one unit the mass increases by 8 units. That means that because that every cubic cenitmeter that you increase the volume th amss goes up by 8g. What does this number represent? If friend think about this for a moment and remember what us talked about when we discussed proportional reasoning in between mass and also volume, it will certainly be noticeable that this is the density. The thickness can therefore be uncovered by recognize the slope on mass matches volume graph. If you upper and lower reversal the axi about you willl get the number the tels you by exactly how much the volume boosts if you increase the fixed by one gram. This, return it has a meaning, go not have actually a name, as we debated in class. If you upper and lower reversal the axis you will likewise note that now the red line will be the steepest.",
null,
"Here is a problem type the workbook that you can want to job-related through:\n\nSuppose we have a bump of clay stood for by point P ~ above the mass matches volume graph below. Enough mystery metal is mixed with this lump of clay so that the resulting round of clay and metal is stood for by allude Q.\n\nYou are watching: Does time go on the x or y axis\n\nA. What is the density of pure clay (without the metal)? explain your reasoning.\n\nB. What is the thickness of the secret metal? explain your reasoning.\n\nI am sure you will have no difficulty in findg the density of the clay. Look at at allude P, divide the massive (15g) through the volume (10 ccm). This will lead a thickness of 1.5 g/ccm. When this is the right answer you acquired it most most likely right by accident just as part B will certainly proof. Her answer to component B will probably be something around 2.8 g/ccm - divding 38 g by 14 ccm, together you would certainly gte by looking at point Q.\n\nSee more: What Does In Consideration Of Mean Ing, In Consideration Of\n\nBut when solving this problem, try to photo the clay and also the metal and what you space diong through them. An initial you have actually the clay. Measering that is mass and volume will provide you allude P. Then you ADD the metal and that leads to suggest Q. What does point Q as such represent? Think around it because that a moment: the mass and also the volume that both the clay and also the steel combined. So, as soon as you divide 38 by 14 girlfriend will obtain the density of both products combined. By dividing those two numbers you actually discovered the slope of the line leading from 0 to allude Q.\n\nRemember that thickness can only be discovered on a graph by detect the slope. When you fixed for component A, the thickness of the clay you found the slop the the heat from 0 to point P, which by accident is the very same as splitting the 2 numbers for suggest P. In order to find the thickness of the steel alone you need to look at wherein the metal shows up on the graph alone - that does so, as soon as you add it come the clay. Thereofre you can find its thickness by recognize the slop the the line type P to Q."
] | [
null,
"https://benidormclubdeportivo.org/does-time-go-on-the-x-or-y-axis/imager_1_6827_700.jpg",
null,
"https://benidormclubdeportivo.org/does-time-go-on-the-x-or-y-axis/imager_2_6827_700.jpg",
null,
"https://benidormclubdeportivo.org/does-time-go-on-the-x-or-y-axis/imager_3_6827_700.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9614704,"math_prob":0.9617014,"size":10418,"snap":"2022-05-2022-21","text_gpt3_token_len":2272,"char_repetition_ratio":0.13030536,"word_repetition_ratio":0.005192108,"special_character_ratio":0.21702822,"punctuation_ratio":0.079009995,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9853934,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T19:02:56Z\",\"WARC-Record-ID\":\"<urn:uuid:a526fef8-3039-4134-8db9-6f1cbb56db02>\",\"Content-Length\":\"20389\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0c909323-955d-4a8d-9a26-537ee2c408ce>\",\"WARC-Concurrent-To\":\"<urn:uuid:e64904a8-14b3-4938-b047-aae44ee89b82>\",\"WARC-IP-Address\":\"172.67.177.16\",\"WARC-Target-URI\":\"https://benidormclubdeportivo.org/does-time-go-on-the-x-or-y-axis/\",\"WARC-Payload-Digest\":\"sha1:2CXOTPTBZ5FXNH35S444YXIOXTAOLJBZ\",\"WARC-Block-Digest\":\"sha1:IL2RD2VCSXQK5WNK3E2ARYLWN4PKJTEV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662512229.26_warc_CC-MAIN-20220516172745-20220516202745-00185.warc.gz\"}"} |
https://math.stackexchange.com/questions/3207296/maximal-smooth-atlas | [
"# Maximal smooth atlas\n\nFor each nonnegative integer $$n$$, the Euclidean space $$\\mathbb{R}^n$$ is a smooth $$n$$-manifold with the smooth structure determined by the atlas $$\\mathcal{A}=(\\mathbb{R}^n,\\mathbb{1}_{\\mathbb{R}^n})$$.\n\nAny chart $$(U,\\varphi)$$ contained in the given maximal smooth atlas is called smooth chart or smooth coordinate chart.\n\nSince $$\\mathcal{A}$$ is a smooth atlas, is contained in a unique maixmal smooth atlas, called smooth structure determined by $$\\mathcal{A}.$$\n\nWho are the smooth coordinate chart $$(U,\\varphi)$$ for $$\\mathbb{R}^n$$ respect to this smooth smooth structure?\n\nIt has to happen that $$\\varphi\\circ\\mathbb{1}_{\\mathbb{R}^n}^{-1}\\colon\\mathbb{1}_{\\mathbb{R}^n}(\\mathbb{R}^n\\cap U)\\to \\varphi(\\mathbb{R}^n\\cap U)$$ is $$C^{\\infty}$$, that is $$\\varphi\\colon U\\to \\hat{U}:=\\varphi(U)$$ is $$C^{\\infty}$$, same reasoning for $$\\mathbb{1}_{\\mathbb{R}^n}\\circ\\varphi^{-1}$$. Therefore, with respect to this smooth structure, the smooth coordinate charts for $$\\mathbb{R}^n$$ are exactly those charts $$(U,\\varphi)$$ such that $$\\varphi$$ is a diffeomorphism.\n\nSo, in the case of $$\\mathbb{R}^n$$, then we can describe the maximal atlas, right?\n\nQuestion. Are there other cases of manifolds in which we can fully describe some maximal atlas?\n\nThanks!\n\n• Your answer to the first question is correct, diffeomorphisms describe the maximal smooth structure. As to your second question, consider the same ambient space $\\mathbb R$ with the smooth structure determined by the homeomorphism $x^3$; this does not have a type of classification of the maximal smooth structure as in the previous case – user555729 May 6 at 9:32\n\nBut suppose that you have a topological $$n$$-manifold $$M$$ equipped with a smooth atlas $${\\mathcal A}$$. Then the maximal smooth atlas $${\\mathcal A}'$$ on $$M$$ containing $${\\mathcal A}$$ consists of pairs $$(U,\\phi)$$ where $$U$$ is an open subset of $$M$$ and $$\\phi: U\\to \\hat{U}\\subset R^n$$ is a homeomorphism which is a diffeomorphism with respect to the atlas $${\\mathcal A}$$. In other words, for every $$x\\in M$$, $$\\phi$$ is required to satisfy the condition that $$\\phi\\circ \\psi^{-1}$$ is a diffeomorphism between suitable open subsets in $$R^n$$. Here $$(V,\\psi)$$ is a chart in $${\\mathcal A}$$ such that $$x\\in U\\cap V$$.\nThis is pretty much the same \"description\" as you gave in the case when $$M=R^n$$."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7174463,"math_prob":1.0000042,"size":1228,"snap":"2019-26-2019-30","text_gpt3_token_len":389,"char_repetition_ratio":0.20588236,"word_repetition_ratio":0.0,"special_character_ratio":0.28338763,"punctuation_ratio":0.104166664,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000086,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-21T06:20:46Z\",\"WARC-Record-ID\":\"<urn:uuid:9b19b984-b945-46b7-ac8e-04a63005d7bd>\",\"Content-Length\":\"142656\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:09a03600-4472-4e80-8b02-7d342bc2e6a2>\",\"WARC-Concurrent-To\":\"<urn:uuid:7503fd09-8dbb-48a7-938e-8366fc468a94>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3207296/maximal-smooth-atlas\",\"WARC-Payload-Digest\":\"sha1:5SFSSZSW64RILGMOBQUXCUYRP7RKDYC3\",\"WARC-Block-Digest\":\"sha1:5P3NRXIPR4QX22VU67SBCDQJ3WUP7WBQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526931.25_warc_CC-MAIN-20190721061720-20190721083720-00389.warc.gz\"}"} |
Source: https://chemhomeworkhelp.com/homework-questions/chemical-reactions-lab-worksheet/
"Chemical Reactions Lab Worksheet\n\n# INTRODUCTION\n\nAtoms gain or lose electrons to yield ions, or combine with other atoms to form molecules. The symbols of atoms are combined to represent the chemical compounds that are formed. A chemical reaction is when one or more substances undergo a chemical change to form new substance(s). There are several observations that are evidence of a chemical reaction. Some of these possible observations are the production of a gas, the formation of a precipitate, a color change, or even an energy change in the form of heat. One chemical reaction that you’ve performed several times already is the combustion of methane (CH4). The combustion of methane results in the following observations: light is emitted, heat is emitted, and water vapor is released.\n\nIn chemistry, it is important that we can represent each chemical reaction in terms of the atoms involved. A chemical equation represents the chemical or physical change that take place including the identities and the relative quantities of the chemical substances. For example, when one methane molecule (CH4) and two diatomic oxygen molecules (O2) react, one carbon dioxide molecule (CO2) and two water molecules (H2O) are formed. This reaction is represented in Figure 1 below using a chemical equation and a space-filling model.",
"Figure 1: Chemical Equation and Space-Filling Model for the Formation of Methane\n\nThere are four fundamental aspects to any chemical equation:\n\n1. The substances that are on the left side of the equation are known as reactants, and they are the substances undergoing reaction.\n2. The substances on the right side of the equation are known as products, and they are the substances generated by the reaction.\n3. Plus signs (+) separate individual reactants and products, while an arrow (→) separates the left side (reactants) and right side (products) of a reaction.\n4. The relative number of products and reactants for a particular reaction are represented using coefficients, the numbers to the left of each chemical formula. When the coefficient is 1, the number is typically omitted. These coefficients are reduced to the lowest whole-number ratio in a chemical equation. Because matter is not created or destroyed by a chemical reaction, the numbers of atoms must be the same on either side of the reaction. The atoms are ‘balanced’ by adjusting the chemical coefficients of the reactants and products.\n\nIn addition to these basic features of a chemical equation, various symbols are used to describe a chemical reaction.\n\n1. Each reactant and product must have an indicated state of matter. Solids or precipitates are represented by (s), liquids by (l), gases by (g), and aqueous solutions as (aq).\n2. If a reaction does not take place, NR (which stands for ‘No Reaction’) is written after the → in a reaction.\n3. If heat is used in a reaction, the symbol ∆ is written above the →\n\nThere are five basic types of chemical reactions: combination, combustion, single displacement, double displacement, and decomposition reactions. Neutralization, acid-base, and gas-forming reactions are reactions that can also be classified as double-displacement reactions.\n\nSingle-displacement reactions occur when one element displaces another in a compound. All single displacement reactions have the general form:\n\nA + BC B + AC\n\nExample of a single displacement reaction:\n\nZn(s) + 2 HCl(aq) → ZnCl2(aq) + H2(g)\n\nThere are multiple types of double-displacement reactions, reactions with the form:\n\nAB + CD AD + CB\n\nThe first type, a precipitation reaction, occurs when the cations and anions of aqueous ionic compounds switch partners, forming the two new ionic compounds, AD and CB. One of the new ionic compounds formed is an insoluble solid compound, known as a precipitate. To determine whether the product of a precipitation reaction will be soluble or insoluble, Solubility Rules (shown below), are consulted.",
[Solubility Rules table: image not reproduced here. Credit: Chemistry by OpenStax, licensed under Creative Commons Attribution License v4.0]
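Because the solubility-rules table itself is an image that did not survive extraction, here is a rough stand-in: a simplified Python encoding of commonly taught general rules, written for illustration only and not taken from the missing OpenStax table. It predicts whether a salt formed by partner-swapping is likely to stay dissolved or precipitate.

```python
# Simplified solubility guidelines (illustrative only; consult the full
# OpenStax table cited above for the authoritative rules and exceptions).
ALWAYS_SOLUBLE_CATIONS = {"Li+", "Na+", "K+", "NH4+"}      # group 1 and ammonium salts
SOLUBLE_ANIONS = {"NO3-", "C2H3O2-", "ClO4-"}              # nitrate, acetate, perchlorate

# anion -> cations that form an exception, i.e. an insoluble salt
MOSTLY_SOLUBLE = {
    "Cl-":    {"Ag+", "Pb2+", "Hg2 2+"},
    "Br-":    {"Ag+", "Pb2+", "Hg2 2+"},
    "I-":     {"Ag+", "Pb2+", "Hg2 2+"},
    "SO4 2-": {"Ba2+", "Sr2+", "Ca2+", "Pb2+"},
}
# anion -> cations that form an exception, i.e. remain soluble
MOSTLY_INSOLUBLE = {
    "OH-":    {"Ba2+", "Sr2+", "Ca2+"},
    "CO3 2-": set(),
    "PO4 3-": set(),
    "S 2-":   {"Ba2+", "Sr2+", "Ca2+"},
}

def is_soluble(cation: str, anion: str) -> bool:
    """Predict (approximately) whether the salt of `cation` and `anion` dissolves in water."""
    if cation in ALWAYS_SOLUBLE_CATIONS or anion in SOLUBLE_ANIONS:
        return True
    if anion in MOSTLY_SOLUBLE:
        return cation not in MOSTLY_SOLUBLE[anion]
    if anion in MOSTLY_INSOLUBLE:
        return cation in MOSTLY_INSOLUBLE[anion]
    raise ValueError(f"no rule stored for {cation} / {anion}")

# Example: mixing BaCl2(aq) and Na2SO4(aq) swaps partners to BaSO4 and NaCl.
print(is_soluble("Ba2+", "SO4 2-"))   # False -> BaSO4 precipitates
print(is_soluble("Na+", "Cl-"))       # True  -> NaCl stays dissolved
```

For instance, the lookup flags BaSO4 as insoluble, which is why mixing barium chloride and sodium sulfate in the data table below produces a white precipitate.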
"Example of a double displacement reaction:\n\n3AgNO3(aq) + Na3PO4(aq) → Ag3PO4(s) + 3NaNO3(aq)\n\nThe second type, a neutralization reaction, occurs when an acid and base react to form an ionic compound (salt) and liquid water. Acid-base reactions are exothermic, so energy is released as heat upon reaction.\n\nExample of a neutralization reaction:\n\nH2SO3(aq) + 2 NaOH(aq) → Na2SO3(aq) + 2 H2O(l)\n\nThe third type, a gas-forming reaction, has an unstable product, which will decompose into at least one gas as a product.\n\nExample of a gas-forming reaction:\n\n2 HNO3(aq) + Na2SO3(aq) → 2 NaNO3(aq) + H2O(l) + SO2(g)\n\nIn this laboratory, we are going to perform a series of chemical reactions, and record evidence of a reaction such as the formation of gas, the formation of a precipitate, a change in color, or a change in temperature. These reactions will then be classified as single displacement, double-displacement, neutralization, or gasforming. You will then the write and balance chemical equations and practice predicting the products of the reactions introduced in this laboratory.\n\nA lot of reactions take place in aqueous media (water), and subsequently involve ions. The balanced equation is called the molecular equation, and is written in the usual way as the reactions above. The reaction below will be used as an example.\n\nBaBr2(aq) + Pb(NO3)2(aq) Ba(NO3)2(aq) + PbBr2(s)\n\nWhen ionic compounds are dissolved in water, they dissociate into their parent ions, and this equation is referred to as the complete ionic equation. Be careful to remember that only aqueous ionic compounds dissociate into ions – solids, liquids, and gases cannot be dissociated as they are not in solution.\n\nBa2+(aq) + 2Br_(aq) + Pb2+(aq) + 2NO3(aq) Ba2+(aq) + 2NO3(aq) + PbBr2(s)\n\nWhen two identical chemical species are on each side of the equation they are known as spectator ions, and can be eliminated from the equation. The equation that remains after the spectator ions have been removed and is referred to as the net ionic equation.\n\nBa2+(aq) + 2Br_(aq) + Pb2+(aq) + 2NO3(aq)Ba2+(aq) + 2NO3(aq) + PbBr2(s)\n\n2Br_(aq) + Pb2+(aq) PbBr2(s)\n\nPROCEDURE:",
"Unless otherwise noted, content of Santa Monica College is licensed under CC BY 4.0\n\n1. You may want to refer to your prelab for the formulas of each reactant.\n2. For the reactions involving solid reactants, use large test tubes and place in a beaker to observe the reaction. For reactions involving solutions, use small (10 mL) test tubes and the red test tube racks from the “151 Cart.”. Use test tubes that are clean and rinsed with distilled water. It is not necessary for the test tubes to be dry.\n3. When using solutions, use approximately 1.5 mL of each solution. When mixing two solutions, there should be a total of 3 mL, which is indicated by red tape on a sample test tube by the fume hood.\n4. When using metals, use 1 piece of the copper and magnesium metal and 2 pieces of zinc, putting the metal in the test tube, then put the solution in the test tube. The metal should be fully immersed in solution.\n5. Perform the reactions below, record your observations in the table below, specify the type of reaction that was completed, and write the balanced molecular equation.\n\nDATA:\n\n Reactants Observation(s) Type of Reaction Balanced Molecular Equation aqueous bariumchloride + aqueous sodium sulfate A white solid forms after the solutions are mixed together. Precipitation Reaction BaCl2(aq) + Na2SO4(aq) → BaSO4(s) + 2 NaCl(aq) zinc metal + hydrochloric acid Bubbles are observed after the solutions are mixed together. Thezinc appears to be smaller in size. aqueous sodiumphosphate + aqueous copper(II) sulfate A solid forms after the solutions are mixed together. copper metal + aqueous silver nitrate After the solution is added to the copper, the copper turns black and pieces of it are following off.\n Solid sodium bicarbonate + aqueous acetic acid When acetic acid is poured on the solid, bubbles are formed, and the solution looks frothy. aqueous nickel(II) nitrate + aqueous sodium hydroxide A solid forms after the solutions are mixed together. aqueous potassium chloride + aqueous silver nitrate A solid forms after the solutions are mixed together. aqueous hydrochloric acid + aqueous sodium hydroxide A clear solution is observed after the solutions are mixed together. aqueous sodiumcarbonate + aqueous cobalt(II) nitrate A solid forms after the solutions are mixed together. zinc metal + aqueous lead(II) nitrate The zinc metal looks as if it is corroding when the solution is added. magnesium metal +aqueous acetic acid When the solution is added, bubbles form around/on the metal. aqueous iron(III)chloride + aqueous ammonium hydroxide A yellow solid forms after the solutions are mixed together.\n\n# POST-LABORATORY QUESTIONS\n\n1. For the following reactions, write the balanced molecular equation, complete ionic equation, and net ionic equation.\n\n1. barium chloride(aq) + sodium sulfate(aq) →\n\nBalanced Molecular Equation:\n\nComplete Ionic Equation:\n\nNet Ionic Equation:\n\n1. nitric acid(aq) + sodium hydroxide(aq) →\n\nBalanced Molecular Equation:\n\nComplete Ionic Equation:\n\nNet Ionic Equation:\n\n# PRE-LABORATORY ASSIGNMENT\n\n1. What are the three observations that a chemical reaction has occurred?\n\n1. What are the types of reactions we are performing in this laboratory? Describe them in detail.\n\n1. Write the general reactions for single and double displacement reactions.\n\n1. You will be using the reactants below in this lab. 
   Write the chemical formula for each name.

   | Name | Chemical Formula | Name | Chemical Formula |
   |---|---|---|---|
   | Barium chloride | BaCl2 | Potassium chloride | |
   | Sodium sulfate | | Cobalt(II) nitrate | |
   | Hydrochloric acid | | Sodium carbonate | |
   | Sodium phosphate | | Lead(II) nitrate | |
   | Copper(II) sulfate | | Sodium chloride | |
   | Silver nitrate | | Potassium nitrate | |
   | Acetic acid | | Iron(III) chloride | |
   | Nickel(II) nitrate | | Ammonium hydroxide | |
   | Sodium hydroxide | | Sodium bicarbonate | |
   | Zinc nitrate | | | |

5. Translate the following word equations into balanced chemical equations.

   a. copper(s) + silver nitrate(aq) → copper(II) nitrate + silver(s)
   b. calcium(s) + water(l) → calcium hydroxide(s) + hydrogen(g)
   c. aluminum nitrate(aq) + potassium carbonate(aq) → aluminum carbonate(s) + potassium nitrate(aq)
   d. sulfuric acid(aq) + potassium hydroxide(aq) → potassium sulfate(aq) + water(l)
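Since the pre-lab asks for balanced equations, a quick self-check is to count every element on each side. The sketch below is added for illustration only: formula strings and coefficients are supplied by hand, and only one level of parentheses, as in Pb(NO3)2, is handled.

```python
import re
from collections import Counter

def parse_formula(formula: str, multiplier: int = 1) -> Counter:
    """Count atoms in a formula with at most one level of (), e.g. 'Ba(NO3)2'."""
    counts = Counter()

    def expand(match):
        inner, n = match.group(1), int(match.group(2) or 1)
        return inner * n                      # (NO3)2 -> NO3NO3

    flat = re.sub(r"\(([^()]*)\)(\d*)", expand, formula)
    for elem, n in re.findall(r"([A-Z][a-z]?)(\d*)", flat):
        counts[elem] += int(n or 1) * multiplier
    return counts

def is_balanced(reactants, products) -> bool:
    """Each side is a list of (coefficient, formula) pairs."""
    left, right = Counter(), Counter()
    for coeff, f in reactants:
        left += parse_formula(f, coeff)
    for coeff, f in products:
        right += parse_formula(f, coeff)
    return left == right

# Worked example from the data table:
# BaCl2(aq) + Na2SO4(aq) -> BaSO4(s) + 2 NaCl(aq)
print(is_balanced([(1, "BaCl2"), (1, "Na2SO4")],
                  [(1, "BaSO4"), (2, "NaCl")]))      # True

# An unbalanced attempt is caught:
# Zn + HCl -> ZnCl2 + H2 (needs 2 HCl)
print(is_balanced([(1, "Zn"), (1, "HCl")],
                  [(1, "ZnCl2"), (1, "H2")]))        # False
```

The same check applies directly to the word equations above once they are translated into formulas.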
Source: https://advancesindifferenceequations.springeropen.com/articles/10.1186/s13662-018-1854-x
"Theory and Modern Applications\n\n# Nonlinear sequential Riemann–Liouville and Caputo fractional differential equations with generalized fractional integral conditions\n\n## Abstract\n\nIn this paper, we discuss the existence and uniqueness of solutions for two new classes of sequential fractional differential equations of Riemann–Liouville and Caputo types with generalized fractional integral boundary conditions, by using standard fixed point theorems. In addition, we also demonstrate the application of the obtained results with the aid of examples.\n\n## Introduction\n\nIn this paper, we investigate the following two nonlinear sequential fractional differential equations of Riemann–Liouville and Caputo fractional derivatives subject to the generalized fractional integral boundary conditions of the forms\n\n\\begin{aligned} &{^{\\mathrm{RL}}}D^{q} \\bigl({^{C}}D^{r}x \\bigr) (t)=f\\bigl(t,x(t)\\bigr),\\quad t\\in(0,T), \\end{aligned}\n(1)\n\\begin{aligned} &x(0)=\\sum_{i=1}^{m}\\gamma_{i} {^{\\bar{\\rho_{i}} }}I^{\\bar{\\alpha_{i}},\\bar {\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar{\\kappa_{i}}}x(\\xi_{i}),\\qquad x(T)= \\sum_{j=1}^{n}\\sigma_{j} {^{{\\rho_{j}} }}I^{\\alpha_{j},\\beta_{j}}_{\\eta_{j},\\kappa _{j}}x(\\delta_{j}), \\end{aligned}\n(2)\n\nand\n\n\\begin{aligned} &{^{C}}D^{q} \\bigl({^{\\mathrm{RL}}}D^{r}x \\bigr) (t)=f\\bigl(t,x(t)\\bigr), \\quad t\\in(0,T), \\end{aligned}\n(3)\n\\begin{aligned} &x(0)=0,\\qquad x(T)=\\sum_{j=1}^{n} \\sigma_{j} {^{{\\rho_{j}} }}I^{\\alpha_{j},\\beta _{j}}_{\\eta_{j},\\kappa_{j}}x( \\delta_{j}), \\end{aligned}\n(4)\n\nwhere $${^{\\mathrm{RL}}}D^{q}$$ and $${^{C}}D^{r}$$ denote the Riemann–Liouville and Caputo fractional derivatives of order $$0< q,r\\leq1$$, respectively, with $$1< q+r\\leq2$$, $${}^{\\tilde{\\rho}}I^{\\tilde{\\alpha},\\tilde{\\beta}}_{\\tilde {\\eta},\\tilde{\\kappa}}$$ denote the generalized fractional integral of order $$\\tilde{\\alpha}>0$$, $$f:[0,T]\\times\\mathbb{R}\\rightarrow\\mathbb {R}$$ is a continuous function, $$\\xi_{i}, \\delta_{j}\\in(0,T)$$, $$\\tilde{\\alpha}\\in\\{\\bar{\\alpha_{i}},\\alpha_{j}\\}>0$$, $$\\tilde{\\rho}\\in\\{ \\bar{\\rho_{i}},\\rho_{j}\\}$$, $$\\tilde{\\beta}\\in\\{\\bar{\\beta_{i}},\\beta_{j}\\}$$, $$\\tilde{\\eta}\\in\\{\\bar{\\eta_{i}},\\eta_{j}\\}$$, $$\\tilde{\\kappa}\\in\\{\\bar {\\kappa_{i}},\\kappa_{j}\\}\\in\\mathbb{R}$$, $$\\gamma_{i}, \\sigma_{j} \\in {\\mathbb {R}}$$, for all $$i=1,2,\\ldots,m$$, $$j=1,2,\\ldots,n$$. Observe that interchanging the sequence of fractional derivatives in (1) and (3) has an effect on the boundary conditions which are seen in (2) and (4), namely the Caputo fractional derivative of a constant is zero while the Riemann–Liouville derivative is not.\n\nThe subject of fractional differential equations has emerged as an interesting and popular field of research in view of its extensive applications in applied and technical sciences. One can easily observe the role and importance of fractional calculus in several diverse disciplines such as physics, chemical processes, population dynamics, biotechnology, economics, etc. For examples and recent development on the topic, see and the references cited therein. The significance of fractional derivatives owes to the fact that they serve as an excellent tool for the description of memory and hereditary properties of various materials and processes. One can notice that fractional derivatives are defined via fractional integrals. 
Among several types of fractional integral found in the literature, Riemann–Liouville and Hadamard fractional integrals are the most extensively studied. A new fractional integral, called generalized Riemann–Liouville fractional integral, which generalizes the Riemann–Liouville and Hadamard integrals into a single form, was introduced in (see Definition 5). For more details of this integral and similar ones, we refer the reader to and .\n\nSeveral new existence and uniqueness results for problems (1)–(2) and (3)–(4) are proved by using a variety of fixed point theorems (such as Banach contraction principle, Krasnoselskii’s fixed point theorem, Leray–Schauder nonlinear alternative). The rest of the paper is organized as follows: in Sect. 2 we recall some preliminary facts that we need in the sequel. In Sect. 3 we present our existence and uniqueness results. Examples illustrating the obtained results are presented in Sect. 4.\n\n## Preliminaries\n\nIn this section, we recall some basic concepts of fractional calculus [1, 2] and present known results needed in our forthcoming analysis.\n\n### Definition 1\n\nThe Riemann–Liouville fractional derivative of order q for a function $$f: (0,\\infty)\\rightarrow{\\mathbb {R}}$$ is defined by\n\n$${^{\\mathrm{RL}}}D^{q}f(t)=\\frac{1}{\\Gamma(n-q)} \\biggl( \\frac{d}{dt} \\biggr)^{n} \\int _{0+}^{t} (t-s )^{n-q-1}f(s)\\,ds,\\quad q>0, n=[q]+1,$$\n\nwhere $$[q]$$ denotes the integer part of the real number q, provided the right-hand side is pointwise defined on $$(0,\\infty)$$.\n\n### Definition 2\n\nThe Riemann–Liouville fractional integral of order q for a function $$f: (0,\\infty)\\rightarrow{\\mathbb {R}}$$ is defined by\n\n$${^{\\mathrm{RL}}}I^{q}f(t)=\\frac{1}{\\Gamma(q)} \\int_{0+}^{t} (t-s )^{q-1}f(s)\\,ds, \\quad q>0,$$\n\nprovided the right-hand side is pointwise defined on $$(0,\\infty)$$.\n\n### Definition 3\n\nThe Caputo derivative of fractional order q for an n-times differentiable function $$f: (0,\\infty)\\rightarrow{\\mathbb {R}}$$ is defined as\n\n$${^{C}}D^{q}f(t)=\\frac{1}{\\Gamma(n-q)} \\int_{0+}^{t} (t-s )^{n-q-1} \\biggl( \\frac{d}{ds} \\biggr)^{n}f(s) \\,ds,\\quad q>0, n=[q]+1.$$\n\n### Definition 4\n\nThe Hadamard fractional integral of order q for a function $$f: (0,\\infty)\\to{\\mathbb {R}}$$ is defined by\n\n$${^{H}}J^{q}f(t)=\\frac{1}{\\Gamma(q)} \\int_{0+}^{t} \\biggl(\\log\\frac{t}{s} \\biggr)^{q-1}\\frac{f(s)}{s}\\,ds, \\quad q>0,$$\n\nprovided the integral exists, where $$\\log(\\cdot)=\\log_{e}(\\cdot)$$.\n\n### Definition 5\n\nThe Katugampola fractional integral of order $$q>0$$ and $$\\rho>0$$ of a function $$f(t)$$ for all $$0< t<\\infty$$ is defined by\n\n$${^{\\rho}}\\tilde{J}^{q}f(t)=\\frac{\\rho^{1-q}}{\\Gamma(q)} \\int_{0+}^{t}\\frac {s^{\\rho-1}f(s)}{(t^{\\rho} - s^{\\rho})^{1-q}}\\,ds,$$\n\nprovided the integral exists.\n\n### Remark 1\n\nFor $$\\rho=1$$ in the above definition, we arrive at the standard Riemann–Liouville fractional integral, which is used to define both the Riemann–Liouville and Caputo fractional derivatives, while in the limit $$\\rho\\to0^{+}$$ we have\n\n$$\\lim_{\\rho\\to0^{+}}{^{\\rho}}\\tilde{J}^{q}f(t)= \\frac{1}{\\Gamma(q)} \\int _{0^{+}}^{t} \\biggl(\\log\\frac{t}{s} \\biggr)^{q-1}\\frac{f(s)}{s}\\,ds,$$\n\nwhich is the famous Hadamard fractional integral; see .\n\n### Definition 6\n\nThe Erdélyi–Kober fractional integral of order $$\\delta>0$$ with $$\\eta>0$$ and $$\\gamma\\in\\mathbb{R}$$ of a function $$f: (0,\\infty)\\to {\\mathbb {R}}$$ is defined 
by\n\n$$\\hat{J}^{\\gamma,\\delta}_{\\eta}f(t)=\\frac{\\eta t^{-\\eta(\\delta+\\gamma )}}{\\Gamma(\\delta)} \\int_{0+}^{t}\\frac{s^{\\eta\\gamma+\\eta-1}f(s)}{(t^{\\eta}- s^{\\eta})^{1-\\delta}}\\,ds,$$\n\nprovided the integral exists.\n\nLet $$X_{c}^{p}(a,b), c\\in{\\mathbb {R}}, 1\\le p\\le\\infty$$ be the space of all complex-valued Lebesgue measurable functions ϕ on $$(a,b)$$ for which $$\\|\\phi\\|_{X_{c}^{p}}<\\infty$$, with\n\n$$\\Vert \\phi \\Vert _{X_{c}^{p}}= \\biggl( \\int_{a}^{b} \\bigl\\vert x^{c}\\phi(x) \\bigr\\vert ^{p}\\frac{dx}{x} \\biggr)^{1/p}, \\quad 1\\le p< \\infty.$$\n\n### Definition 7\n\n()\n\nLet $$f\\in X_{c}^{p}(a,b)$$ with $$a=0^{+}$$. The generalized fractional integral of order $$\\alpha>0$$ and constants $$\\beta,\\rho,\\eta,\\kappa\\in\\mathbb {R}$$ for a function $$f: (0,\\infty)\\to{\\mathbb {R}}$$ is defined by\n\n$$\\bigl( {^{\\rho}I^{\\alpha,\\beta}_{\\eta,\\kappa}f} \\bigr) (t) = \\frac{\\rho ^{1-\\beta}t^{\\kappa}}{\\Gamma(\\alpha)} \\int_{0+}^{t}\\frac{\\tau^{\\rho(\\eta +1)-1}}{(t^{\\rho}-\\tau^{\\rho})^{1-\\alpha}}f(\\tau)\\,d\\tau,$$\n(5)\n\nprovided the integral exists.\n\n### Remark 2\n\nThe fractional integral (5) contains six well-known fractional integrals as its particular cases (see also ).\n\nFrom (5) we have the following special cases:\n\n1. (i)\n\nIf $$\\beta=\\alpha$$, $$\\kappa=0$$, $$\\eta=0$$, then (2) can be reduced to\n\n$$x(0)=\\sum_{i=1}^{m} \\gamma_{i} {^{\\bar{\\rho_{i}}}}\\tilde{J}^{\\bar{\\alpha _{i}}}x( \\xi_{i}),\\qquad x(T)=\\sum_{j=1}^{n} \\sigma_{j} {^{{\\rho_{j}} }}\\tilde {J}^{\\alpha_{j}}x( \\delta_{j}),$$\n(6)\n\nwhich are the Katugampola fractional integral boundary conditions;\n\n2. (ii)\n\nIf $$\\rho=1$$, $$\\beta=\\alpha$$, $$\\kappa=0$$, $$\\eta=0$$, then (2) can be reduced to\n\n$$x(0)=\\sum_{i=1}^{m} \\gamma_{i} ^{\\mathrm{RL}}I^{\\bar{\\alpha_{i}}}x(\\xi_{i}),\\qquad x(T)=\\sum_{j=1}^{n}\\sigma_{j} ^{\\mathrm{RL}}I^{\\alpha_{j}}x(\\delta_{j}),$$\n(7)\n\nwhich are the Riemann–Liouville fractional integral boundary conditions;\n\n3. (iii)\n\nIf $$\\rho\\rightarrow0$$, $$\\beta=\\alpha$$, $$\\kappa=0$$, $$\\eta =0$$, then (2) can be reduced\n\n$$x(0)=\\sum_{i=1}^{m} \\gamma_{i} ^{H}J^{\\bar{\\alpha_{i}}}x(\\xi_{i}),\\qquad x(T)=\\sum_{j=1}^{n}\\sigma_{j} ^{H}J^{\\alpha_{j}}x(\\delta_{j}),$$\n(8)\n\nwhich are the Hadamard fractional integral boundary conditions;\n\n4. (iv)\n\nIf $$\\beta=0$$, $$\\kappa=-\\rho(\\alpha+\\eta)$$, then (2) can be reduced to\n\n$$x(0)=\\sum_{i=1}^{m} \\gamma_{i} \\hat{J}^{\\bar{\\alpha_{i}},\\bar{\\rho _{i}}}_{\\bar{\\eta_{i}}}x( \\xi_{i}),\\qquad x(T)=\\sum_{j=1}^{n} \\sigma_{j} \\hat {J}^{\\alpha_{j},\\rho_{j}}_{\\eta_{j}}x( \\delta_{j}),$$\n(9)\n\nwhich are the Erdélyi–Kober fractional integral boundary conditions.\n\n### Lemma 1\n\n()\n\nLet $$q>0$$. Then for $$y\\in C(0, T)\\cap L(0,T)$$ it holds\n\n$${^{\\mathrm{RL}}I^{q}} \\bigl({^{\\mathrm{RL}}D^{q}}y \\bigr) (t)=y(t)+c_{1}t^{q-1}+c_{2}t^{q-2}+ \\cdots +c_{n}t^{q-n},$$\n\nwhere $$c_{i}\\in\\mathbb{R}$$, $$i=1,2,\\ldots,n$$ and $$n-1< q< n$$.\n\n### Lemma 2\n\n()\n\nLet $$q>0$$. Then for $$y\\in C(0, T)\\cap L(0,T)$$ it holds\n\n$${^{\\mathrm{RL}}I^{q}} \\bigl({^{C}D^{q}}y \\bigr) (t)=y(t)+c_{0}+c_{1} t+c_{2}t^{2}+ \\cdots +c_{n-1}t^{n-1},$$\n\nwhere $$c_{i}\\in\\mathbb{R}$$, $$i=0,1,2,\\ldots,n-1$$ and $$n=[q]+1$$.\n\n### Lemma 3\n\nLet α, $$\\rho>0$$ and $$\\beta,\\rho,\\eta,\\kappa\\in\\mathbb{R}$$, $$m>0$$ and $$\\rho(\\alpha+\\eta)+m+\\kappa\\geq0$$. 
Then we have\n\n$${^{{\\rho} }}I^{\\alpha,\\beta}_{\\eta,\\kappa}{t^{m}}= \\rho^{-\\beta}\\frac {\\Gamma(\\frac{\\rho\\eta+\\rho+m}{\\rho})}{\\Gamma(\\frac{\\rho\\eta+\\rho\\alpha +\\rho+m}{\\rho})}t^{\\rho(\\alpha+\\eta) +m+\\kappa}.$$\n(10)\n\n### Proof\n\nNow we state the definition of the beta function and its property, which for $$x,y >0$$ read\n\n$$B(x,y)= \\int_{0}^{1}u^{x-1}(1-u)^{y-1} \\,du \\quad\\text{and}\\quad B(x,y)=\\frac {\\Gamma(x)\\Gamma(y)}{\\Gamma(x+y)}.$$\n\nFrom Definition 7 and by changing the variable of integration, we can compute the following formula:\n\n\\begin{aligned} {^{\\rho}I^{\\alpha,\\beta}_{\\eta,\\kappa}t^{m}} &= \\frac{\\rho^{1-\\beta }t^{\\kappa}}{\\Gamma(\\alpha)} \\int_{0}^{t}\\frac{\\tau^{\\rho(\\eta +1)-1}}{(t^{\\rho}-\\tau^{\\rho})^{1-\\alpha}}\\tau^{m} \\,d\\tau \\\\ &= \\frac{\\rho^{-\\beta}t^{\\rho(\\alpha+\\eta)+m+\\kappa}}{\\Gamma({\\alpha })} \\int_{0}^{1}u^{ (\\frac{\\rho\\eta+\\rho+m}{\\rho} )-1}(1-u)^{\\alpha-1} \\,du \\\\ &= \\frac{\\rho^{-\\beta}t^{\\rho(\\alpha+\\eta)+m+\\kappa}}{\\Gamma({\\alpha })}B \\biggl(\\frac{\\rho\\eta+\\rho+m}{\\rho},\\alpha \\biggr) \\\\ &= \\rho^{-\\beta}\\frac{\\Gamma(\\frac{\\rho\\eta+\\rho+m}{\\rho})}{\\Gamma (\\frac{\\rho\\eta+\\rho\\alpha+\\rho+m}{\\rho})}t^{\\rho(\\alpha+\\eta)+m+\\kappa}. \\end{aligned}\n\nThe proof is completed. □\n\nBefore going to prove the next lemma, for convenience, we set constants\n\n\\begin{aligned} &\\Omega_{1}= \\frac{\\Gamma(q)}{\\Gamma(q+r)}\\sum_{i=1}^{m} \\gamma_{i} \\pi ^{\\bar{\\rho_{i}},\\bar{\\alpha_{i}},\\bar{\\beta _{i}},\\bar{\\eta_{i}},\\bar{\\kappa_{i}}}_{q+r-1}(\\xi_{i}),\\\\ &\\Omega_{2}=\\sum_{i=1}^{m} \\gamma_{i} \\pi^{\\bar{\\rho_{i}},\\bar{\\alpha _{i}},\\bar{\\beta _{i}},\\bar{\\eta_{i}},\\bar{\\kappa_{i}}}_{0}(\\xi_{i})-1, \\\\ &\\Omega_{3}= \\frac{\\Gamma(q)}{\\Gamma(q+r)} \\Biggl(\\sum _{j=1}^{n}\\sigma _{j} \\pi^{\\rho_{j},\\alpha_{j},\\beta_{j},\\eta_{j},\\kappa_{j}}_{q+r-1}(\\delta_{j}) - T^{q+r-1} \\Biggr), \\\\ &\\Omega_{4}=\\sum_{j=1}^{n} \\sigma_{j}\\pi^{\\rho_{j},\\alpha_{j},\\beta_{j},\\eta _{j},\\kappa_{j}}_{0}(\\delta_{j})-1 \\end{aligned}\n\nand\n\n$$\\Omega=\\Omega_{1}\\Omega_{4}-\\Omega_{2} \\Omega_{3}\\neq0,$$\n(11)\n\nwhere\n\n$$\\pi^{\\rho,\\alpha,\\beta,\\eta,\\kappa}_{m}(t)={\\rho^{-\\beta}}\\frac{\\Gamma (\\frac{\\rho\\eta+\\rho+m}{\\rho})}{\\Gamma(\\frac{\\rho\\eta+\\rho\\alpha+\\rho +m}{\\rho})}t^{\\rho(\\alpha+\\eta)+\\kappa+m}.$$\n(12)\n\n### Lemma 4\n\nLet $$0< q,r\\leq1$$ with $$1< q+r\\leq2$$, $$\\bar{\\rho_{i}}$$, $$\\rho_{j}$$, q, r, $$\\bar{\\alpha_{i}}$$, $$\\alpha_{j} >0$$, $$\\xi_{i}$$, $$\\delta_{j}\\in(0,T)$$, $$\\bar {\\beta_{i}}$$, $$\\bar{\\eta_{i}}$$, $$\\bar{\\kappa_{i}}$$, $$\\beta_{j}$$, $$\\eta_{j}$$, $$\\kappa_{j}\\in\\mathbb{R}$$ for $$i=1,2,\\ldots,m$$, $$j=1,2,\\ldots,n$$, $$\\Omega \\ne0$$ and $$y\\in C([0,T], \\mathbb{R})$$. 
The unique solution of the following linear sequential Riemann–Liouville and Caputo fractional differential equation\n\n\\begin{aligned} {^{\\mathrm{RL}}}D^{q} \\bigl({^{C}}D^{r}x \\bigr) (t)=y(t),\\quad t\\in(0,T), \\end{aligned}\n(13)\n\nsubject to the generalized fractional integral boundary conditions\n\n\\begin{aligned} x(0)=\\sum_{i=1}^{m} \\gamma_{i} {^{\\bar{\\rho_{i}} }}I^{\\bar{\\alpha_{i}},\\bar {\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar{\\kappa_{i}}}x( \\xi_{i}),\\qquad x(T)=\\sum_{j=1}^{n} \\sigma_{j} {^{{\\rho_{j}} }}I^{\\alpha_{j},\\beta_{j}}_{\\eta_{j},\\kappa _{j}}x( \\delta_{j}), \\end{aligned}\n(14)\n\nis given by the integral equation\n\n\\begin{aligned} x(t)={}&\\frac{1}{\\Omega} \\Biggl( \\biggl( \\Omega_{1}- \\Omega_{2}\\frac{\\Gamma (q)}{\\Gamma(q+r)}t^{q+r-1} \\biggr) \\bigl({^{\\mathrm{RL}}I^{q+r}}y \\bigr) (T) \\\\ &{}+ \\biggl( \\Omega_{2}\\frac{\\Gamma(q)}{\\Gamma(q+r)}t^{q+r-1}-\\Omega _{1} \\biggr)\\sum_{j=1}^{n} \\sigma_{j} \\bigl[{^{\\rho_{j}}I^{\\alpha _{j},\\beta_{j}}_{\\eta_{j},\\kappa{j}}} \\bigl({^{\\mathrm{RL}}}I^{q+r}y \\bigr) \\bigr](\\delta_{j}) \\\\ &{} + \\biggl( \\Omega_{3}-\\Omega_{4}\\frac{\\Gamma(q)}{\\Gamma (q+r)}t^{q+r-1} \\biggr)\\sum_{i=1}^{m}\\gamma_{i} \\bigl[{^{\\bar{\\rho_{i}} }}I^{\\bar{\\alpha_{i}},\\bar{\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar{\\kappa_{i}}} \\bigl({^{\\mathrm{RL}}}I^{q+r}y \\bigr) \\bigr](\\xi_{i}) \\Biggr) + \\bigl({^{\\mathrm{RL}}}I^{q+r}y \\bigr) (t). \\end{aligned}\n(15)\n\n### Proof\n\nApplying the Riemann–Liouville fractional integral of orders q and r, respectively, to both sides of (13) and using Lemmas 1 and 2, we have\n\n\\begin{aligned} x(t)= \\bigl({^{\\mathrm{RL}}}I^{q+r} y \\bigr) (t)+c_{1}\\frac{\\Gamma(q)}{\\Gamma (q+r)}t^{q+r-1}+c_{2}, \\end{aligned}\n(16)\n\nwhere constants $$c_{1}$$, $$c_{2}\\in\\mathbb{R}$$.\n\nUsing the nonlocal boundary condition (14) to the above equation with Lemma 3 and the above-set constants, we obtain the following linear system of constants $$c_{1}$$ and $$c_{2}$$:\n\n\\begin{aligned} &\\Omega_{1}c_{1}+\\Omega_{2}c_{2} = -\\sum_{i=1}^{m}\\gamma_{i} \\bigl[ {^{\\bar{\\rho_{i}} }}I^{\\bar{\\alpha_{i}},\\bar{\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar {\\kappa_{i}}} \\bigl({}^{\\mathrm{RL}}I^{q+r}y \\bigr) \\bigr](\\xi_{i}), \\\\ &\\Omega_{3}c_{1}+\\Omega_{4}c_{2} = {^{\\mathrm{RL}}I^{q+r}y(T)}-\\sum_{j=1}^{n} \\sigma_{j} \\bigl[{^{{\\rho_{j}} }}I^{\\alpha_{j},\\beta_{j}}_{\\eta _{j},\\kappa_{j}} \\bigl({}^{\\mathrm{RL}}I^{q+r}y \\bigr) \\bigr](\\delta_{j}). 
\\end{aligned}\n\nSolving the above system of linear equations for the constants $$c_{1}$$, $$c_{2}$$, we have\n\n\\begin{aligned} c_{1}={}&\\frac{1}{\\Omega} \\Biggl[ \\Omega_{2}\\sum_{j=1}^{n} \\sigma_{j} \\bigl[ {^{{\\rho_{j}} }}I^{\\alpha_{j},\\beta_{j}}_{\\eta_{j},\\kappa_{j}} \\bigl({}^{\\mathrm{RL}}I^{q+r}y \\bigr) \\bigr](\\delta_{j})- \\Omega_{2} \\bigl({^{\\mathrm{RL}}}I^{q+r}y \\bigr) (T) \\\\ &{}- \\Omega_{4} \\sum_{i=1}^{m} \\gamma_{i} \\bigl[ {^{\\bar{\\rho_{i}} }}I^{\\bar {\\alpha_{i}},\\bar{\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar{\\kappa_{i}}} \\bigl({}^{\\mathrm{RL}}I^{q+r}y \\bigr) \\bigr](\\xi_{i}) \\Biggr], \\\\ c_{2}={}&\\frac{1}{\\Omega} \\Biggl[ \\Omega_{1} \\bigl({^{\\mathrm{RL}}}I^{q+r}y \\bigr) (T)-\\Omega_{1}\\sum _{j=1}^{n}\\sigma_{j} \\bigl[{^{{\\rho_{j}} }}I^{\\alpha _{j},\\beta_{j}}_{\\eta_{j},\\kappa_{j}} \\bigl({}^{\\mathrm{RL}}I^{q+r}y \\bigr) \\bigr](\\delta _{j}) \\\\ &{} + \\Omega_{3}\\sum_{i=1}^{m} \\gamma_{i} \\bigl[{^{\\bar{\\rho_{i}} }}I^{\\bar {\\alpha_{i}},\\bar{\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar{\\kappa_{i}}} \\bigl({}^{\\mathrm{RL}}I^{q+r}y \\bigr) \\bigr](\\xi_{i}) \\Biggr]. \\end{aligned}\n\nSubstituting constants $$c_{1}$$ and $$c_{2}$$ into (16), we obtain integral equation (15). The converse follows by direct computation. The proof is completed. □\n\n### Remark 3\n\nSince $$q+r>1$$, equation (16) is well defined when $$t=0$$.\n\n### Lemma 5\n\nThe linear sequential Caputo and Riemann–Liouville fractional differential equation\n\n$${^{C}}D^{q} \\bigl({^{\\mathrm{RL}}}D^{r}x \\bigr) (t)=y(t),\\quad t\\in(0,T),$$\n(17)\n\nassuming (4), can be written as an integral equation\n\n\\begin{aligned} x(t)={}&\\frac{t^{q}}{{\\Gamma(q+1)}\\Omega^{*}} \\Biggl(\\sum _{j=1}^{n}\\sigma _{j} \\bigl[{^{{\\rho_{j}} }}I^{\\alpha_{j},\\beta_{j}}_{\\eta_{j},\\kappa_{j}} \\bigl({}^{\\mathrm{RL}}I^{q+r}y \\bigr) \\bigr](\\delta_{j})- \\bigl({^{\\mathrm{RL}}}I^{q+r}y \\bigr) (T) \\Biggr) \\\\ &{} + {^{\\mathrm{RL}}}I^{q+r}y(t), \\end{aligned}\n(18)\n\nwhere the constant $$\\Omega^{*}\\neq0$$ is defined by\n\n\\begin{aligned} \\Omega^{*}=T^{q}-\\sum_{j=1}^{n} \\sigma_{j} \\pi^{\\rho_{j},\\alpha_{j},\\beta_{j},\\eta _{j},\\kappa_{j}}_{q}(\\delta_{j}). \\end{aligned}\n\n### Proof\n\nBy taking the Riemann–Liouville fractional derivative of orders q and r, respectively, of (17), we obtain\n\n\\begin{aligned} x(t)= \\bigl({^{\\mathrm{RL}}}I^{q+r} y \\bigr) (t)+c_{1}\\frac{t^{q}}{\\Gamma(q+1)}+c_{2}t^{q-1}. \\end{aligned}\n(19)\n\nCondition $$x(0)=0$$ implies $$c_{2}=0$$. Applying the boundary condition (4) and using the same method as in Lemma 4 for finding a constant $$c_{1}$$, we obtain (18) as desired. This completes the proof. □\n\n### Remark 4\n\nIf $$c_{2}\\neq0$$, then (19) is singular in the case $$t=0$$ and $$q\\in(0,1)$$.\n\nThe following fixed point theorems are fundamental in the proofs of our main results.\n\n### Lemma 6\n\n(Krasnoselskii’s fixed point theorem, )\n\nLet M be a closed, bounded, convex and nonempty subset of a Banach space X. Let A, B be the operators such that (a) $$Ax+By\\in M$$ whenever $$x,y\\in M$$; (b) A is compact and continuous; (c) B is a contraction mapping. Then there exists $$z\\in M$$ such that z=Az+Bz.\n\n### Lemma 7\n\n(Nonlinear alternative for single-valued maps, )\n\nLet E be a Banach space, C be a closed, convex subset of E, U be an open subset of C and $$0\\in U$$. Suppose that $$F:\\overline{U}\\rightarrow C$$ is a continuous, compact (that is, $$F(\\overline{U})$$ is a relatively compact subset of C) map. Then either\n\n1. 
(i)\n\nF has a fixed point in , or\n\n2. (ii)\n\nthere is a $$u\\in\\partial U$$ (the boundary of U in C) and $$\\lambda\\in(0,1)$$ with $$u=\\lambda F(u)$$.\n\n## Main results\n\nIn this section, we will use fixed point theorems to prove the existence and uniqueness of solutions for problems (1)–(2) and (3)–(4). Throughout this paper, for convenience, we use the abbreviate notations\n\n\\begin{aligned} \\bigl({^{\\mathrm{RL}}}I^{q+r}{f_{x}} \\bigr) (z)= \\frac{1}{\\Gamma(q+r)} \\int _{0+}^{z} (z-s )^{q+r-1}f\\bigl(s,x(s) \\bigr)\\,ds \\quad\\text{for } z\\in[0,T] \\end{aligned}\n\nand\n\n\\begin{aligned} \\bigl[^{\\tilde{\\rho}}I^{\\tilde{\\alpha},\\tilde{\\beta}}_{\\tilde{\\eta },\\tilde{\\kappa}} \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) \\bigr](v)=\\frac{\\tilde {\\rho}v^{\\tilde{\\kappa}}}{\\Gamma(\\tilde{\\alpha})\\Gamma(q+r)} \\int_{0+}^{v} \\int_{0+}^{t}\\frac{v^{\\tilde{\\rho}(\\tilde{\\eta }+1)-1}}{(t^{\\tilde{\\rho}}-v^{\\tilde{\\rho}})^{1-\\tilde{\\alpha }}}(t-s)^{q+r-1}f \\bigl(s,x(s)\\bigr)\\,ds\\,dt, \\end{aligned}\n\nfor $$v\\in[0,T]$$, where $$z\\in\\{t,T\\}$$, $$v\\in\\{\\xi_{i},\\delta_{j}\\}$$, $$\\tilde{\\rho}\\in\\{\\bar{\\rho_{i}},\\rho_{j}\\}$$, $$\\tilde{\\alpha}\\in\\{\\bar {\\alpha_{i}},\\alpha_{j}\\}$$, $$\\tilde{\\beta}\\in\\{\\bar{\\beta_{i}},\\beta_{j}\\}$$, $$\\tilde{\\eta}\\in\\{\\bar{\\eta_{i}},\\eta_{j}\\}$$, $$\\tilde{\\kappa}\\in\\{\\bar {\\kappa_{i}},\\kappa_{j}\\}$$, $$i=1,2,\\ldots,m$$, $$j=1,2,\\ldots,n$$.\n\nLet $$\\mathcal{C}=C([0, T],\\mathbb{R})$$ denote the Banach space of all continuous functions from $$[0, T]$$ to $$\\mathbb{R}$$ endowed with the norm defined by $$\\|x\\|=\\sup_{t\\in[0, T]}|x(t)|$$. By Lemma 4, we define an operator $$\\mathcal{F}:\\mathcal{C}\\rightarrow\\mathcal{C}$$ by\n\n\\begin{aligned} &(\\mathcal{F}x) (t) \\\\ &\\quad=\\frac{1}{\\Omega} \\Biggl( \\biggl( \\Omega_{1}-\\Omega _{2}\\frac{\\Gamma(q)}{\\Gamma(q+r)}t^{q+r-1} \\biggr) \\bigl({^{\\mathrm{RL}}I^{q+r}}f_{x} \\bigr) (T) \\\\ &\\qquad{} + \\biggl( \\Omega_{2}\\frac{\\Gamma(q)}{\\Gamma(q+r)}t^{q+r-1}-\\Omega _{1} \\biggr)\\sum_{j=1}^{n} \\sigma_{j} \\bigl[{^{\\rho_{j}} I^{\\alpha_{j},\\beta_{j}}_{\\eta_{j},\\kappa_{j}}} \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) \\bigr]( \\delta_{j}) \\\\ &\\qquad{} + \\biggl( \\Omega_{3}-\\Omega_{4}\\frac{\\Gamma(q)}{\\Gamma (q+r)}t^{q+r-1} \\biggr)\\sum_{i=1}^{m}\\gamma_{i} \\bigl[ {^{\\bar{\\rho_{i}} }}I^{\\bar{\\alpha_{i}},\\bar{\\beta_{i}}}_{\\bar{\\eta_{j}},\\bar{\\kappa_{i}}} \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) \\bigr](\\xi_{i}) \\Biggr) \\\\ &\\qquad{} + \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) (t), \\end{aligned}\n(20)\n\nwith $$\\Omega\\neq0$$. It should be noticed that problem (1)–(2) has solutions if and only if the operator $$\\mathcal{F}$$ has fixed points. 
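The constants introduced below are assembled from the function $$\pi^{\rho,\alpha,\beta,\eta,\kappa}_{m}$$ of (12), i.e. from the closed form of Lemma 3. As a quick numerical sanity check (an illustration added to this copy, not part of the paper; the parameter values are arbitrary and SciPy is assumed to be available), one can compare that closed form against direct quadrature of Definition 7 applied to $$t^{m}$$:

```python
# Illustrative check of Lemma 3: the generalized fractional integral of s^m,
# evaluated by quadrature, against the closed form
#   rho^{-beta} * Gamma((rho*eta+rho+m)/rho) / Gamma((rho*eta+rho*alpha+rho+m)/rho)
#   * t^{rho*(alpha+eta)+m+kappa}.
from math import gamma
from scipy.integrate import quad

def gen_frac_integral_tm(t, m, alpha, beta, rho, eta, kappa):
    """Numerically evaluate (rho I^{alpha,beta}_{eta,kappa} s^m)(t) from Definition 7."""
    integrand = lambda s: s ** (rho * (eta + 1) - 1) * (t ** rho - s ** rho) ** (alpha - 1) * s ** m
    val, _ = quad(integrand, 0.0, t)
    return rho ** (1 - beta) * t ** kappa / gamma(alpha) * val

def closed_form(t, m, alpha, beta, rho, eta, kappa):
    """Right-hand side of (10) in Lemma 3, i.e. pi^{rho,alpha,beta,eta,kappa}_m(t)."""
    return (rho ** (-beta)
            * gamma((rho * eta + rho + m) / rho)
            / gamma((rho * eta + rho * alpha + rho + m) / rho)
            * t ** (rho * (alpha + eta) + m + kappa))

# Arbitrary sample values (alpha > 1 keeps the integrand free of endpoint singularities).
t, m = 2.5, 0.25
alpha, beta, rho, eta, kappa = 1.5, 0.5, 1.5, 0.5, 1.0 / 3.0
print(gen_frac_integral_tm(t, m, alpha, beta, rho, eta, kappa))
print(closed_form(t, m, alpha, beta, rho, eta, kappa))   # the two values should agree closely
```

The two printed values should agree to quadrature accuracy, which is a useful check before assembling the constant Φ of (21) for concrete data such as the examples in the last section.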
For the sake of convenience, we put a constant\n\n\\begin{aligned} \\Phi={}& \\frac{1}{\\Gamma(q+r+1) \\vert \\Omega \\vert } \\Biggl( \\vert \\Omega _{1} \\vert T^{q+r}+ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)}T^{2q+2r-1} \\\\ &{} + \\biggl[ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)}T^{q+r-1}+ \\vert \\Omega _{1} \\vert \\biggr]\\sum_{j=1}^{n} \\vert \\sigma_{j} \\vert \\pi^{\\rho_{j},\\alpha_{j},\\beta_{j},\\eta _{j},\\kappa_{j}}_{q+r}( \\delta_{j}) \\\\ &{} + \\biggl[ \\vert \\Omega_{3} \\vert + \\vert \\Omega_{4} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)}T^{q+r-1} \\biggr]\\sum _{i=1}^{m} \\vert \\gamma_{i} \\vert \\pi^{\\bar{\\rho _{i}},\\bar{\\alpha_{i}},\\bar{\\beta _{i}},\\bar{\\eta_{i}},\\bar{\\kappa_{i}}}_{q+r}(\\xi_{i}) \\Biggr) \\\\ &{} +\\frac{T^{q+r}}{\\Gamma(q+r+1)}. \\end{aligned}\n(21)\n\nTo prove the existence theorems for problem (3)–(4), by Lemma 5, we define an operator $$\\mathcal{H}:\\mathcal{C}\\rightarrow\\mathcal{C}$$ by\n\n\\begin{aligned} (\\mathcal{H}x) (t)={}&\\frac{t^{q}}{{\\Gamma(q+1)}\\Omega^{*}} \\Biggl(\\sum _{j=1}^{n}\\sigma_{j} \\bigl[ {^{{\\rho_{j}} }}I^{\\alpha_{j},\\beta_{j}}_{\\eta _{j},\\kappa_{j}} \\bigl({}^{\\mathrm{RL}}I^{q+r}f_{x} \\bigr) \\bigr](\\delta_{j})- \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) (T) \\Biggr) \\\\ &{} + \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) (t),\\quad \\Omega^{*}\\neq0. \\end{aligned}\n(22)\n\nThe first existence and uniqueness result is based on the Banach contraction mapping principle.\n\n### Theorem 1\n\nLet $$f: [0,T]\\times{\\mathbb {R}}\\to{\\mathbb {R}}$$ be a continuous function satisfying the following assumption:\n\n$$(H_{1})$$ :\n\nThere exists a constant $$L>0$$ such that $$|f(t,x)-f(t,y)|\\leq L|x-y|$$, for each $$t\\in[0, T]$$ and $$x, y\\in \\mathbb{R}$$.\n\nIf\n\n\\begin{aligned} L\\Phi< 1, \\end{aligned}\n(23)\n\nwhere a constant Φ is given by (21), then the boundary value problem (1)(2) has a unique solution on $$[0,T]$$.\n\n### Proof\n\nProblem (1)–(2) can be transformed into a fixed point problem, $$x=\\mathcal{F}x$$, where the operator $$\\mathcal {F}$$ is defined by (20). By using the Banach’s contraction mapping principle, we shall show that $$\\mathcal{F}$$ has a fixed point which is the unique solution of problem (1)–(2).\n\nLet us set $$\\sup_{t \\in[0,T]}|f(t,0)|=M < \\infty$$ and choose\n\n\\begin{aligned} r \\ge\\frac{M\\Phi}{1-L\\Phi}, \\end{aligned}\n\nas a radius of the ball $$B_{r}$$, where $$B_{r}=\\{x \\in \\mathcal{C}: \\|x\\|\\le r \\}$$. From inequality (23), a constant r is well defined. Now, we show that $$\\mathcal{F} B_{r} \\subset B_{r}$$. 
For any $$x \\in B_{r}$$, and taking into account Lemma 3, we obtain\n\n\\begin{aligned} \\Vert \\mathcal{F}x \\Vert ={}&\\sup_{t\\in[0,T]} \\Biggl\\vert \\frac{1}{\\Omega} \\Biggl( \\biggl( \\Omega _{1}-\\Omega_{2} \\frac{\\Gamma(q)}{\\Gamma(q+r)}t^{q+r-1} \\biggr) \\bigl({^{\\mathrm{RL}}I^{q+r}}f_{x} \\bigr) (T) \\\\ &{}+ \\biggl( \\Omega_{2}\\frac{\\Gamma(q)}{\\Gamma(q+r)}t^{q+r-1}- \\Omega _{1} \\biggr) \\sum_{j=1}^{n} \\sigma_{j} \\bigl[{^{\\rho_{j}}I^{\\alpha_{j},\\beta _{j}}_{\\eta_{j},\\kappa_{j}}} \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) \\bigr](\\delta _{j}) \\\\ &{}+ \\biggl( \\Omega_{3}-\\Omega_{4} \\frac{\\Gamma(q)}{\\Gamma (q+r)}t^{q+r-1} \\biggr)\\sum_{i=1}^{m} \\gamma_{i} \\bigl[ {^{\\bar{\\rho_{i}} }}I^{\\bar{\\alpha_{i}},\\bar{\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar{\\kappa_{i}}} \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) \\bigr]( \\xi_{i}) \\Biggr) + \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) (t) \\Biggr\\vert \\\\ \\le{}& \\frac{1}{ \\vert \\Omega \\vert } \\Biggl( \\biggl[ \\vert \\Omega_{1} \\vert + \\vert \\Omega_{2} \\vert \\frac {\\Gamma(q)}{\\Gamma(q+r)}T^{q+r-1} \\biggr] \\bigl[ \\bigl({^{\\mathrm{RL}}}I^{q+r}\\bigl( \\vert f_{x}-f_{0} \\vert + \\vert f_{0} \\vert \\bigr) \\bigr) (T) \\bigr]\\\\ &{}+ \\biggl[ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)}T^{q+r-1}+ \\vert \\Omega _{1} \\vert \\biggr] \\\\ &{}\\times \\sum_{j=1}^{n} \\vert \\sigma_{j} \\vert \\bigl[{^{\\rho_{j}}I^{\\alpha _{j},\\beta_{j}}_{\\eta_{j},\\kappa_{j}}} \\bigl({^{\\mathrm{RL}}}I^{q+r} \\bigl( \\vert f_{x}-f_{0} \\vert + \\vert f_{0} \\vert \\bigr) \\bigr) \\bigr] ( \\delta_{j})\\\\ &{} + \\biggl[ \\vert \\Omega_{3} \\vert + \\vert \\Omega_{4} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)}T^{q+r-1} \\biggr] \\\\ &{}\\times \\sum_{i=1}^{m} \\vert \\gamma_{i} \\vert \\bigl[ {^{\\bar{\\rho_{i}} }}I^{\\bar {\\alpha_{i}},\\bar{\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar{\\kappa_{i}}} \\bigl({^{\\mathrm{RL}}}I^{q+r} \\bigl( \\vert f_{x}-f_{0} \\vert + \\vert f_{0} \\vert \\bigr) \\bigr) \\bigr]( \\xi_{i}) \\Biggr) + \\bigl({^{\\mathrm{RL}}}I^{q+r} \\bigl( \\vert f_{x}-f_{0} \\vert + \\vert f_{0} \\vert \\bigr) \\bigr) (t) \\\\ \\le{}& (Lr+M) \\Biggl\\{ \\frac{1}{\\Gamma(q+r+1) \\vert \\Omega \\vert } \\Biggl( \\vert \\Omega _{1} \\vert T^{q+r}+ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)}T^{2q+2r-1}\\\\ &{}+ \\biggl[ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)}T^{q+r-1}+ \\vert \\Omega _{1} \\vert \\biggr] \\sum_{j=1}^{n} \\vert \\sigma_{j} \\vert \\pi^{\\rho_{j},\\alpha_{j},\\beta _{j},\\eta_{j},\\kappa_{j}}_{q+r}( \\delta_{j}) \\\\ &{}+ \\biggl[ \\vert \\Omega_{3} \\vert + \\vert \\Omega_{4} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)}T^{q+r-1} \\biggr]\\sum _{i=1}^{m} \\vert \\gamma_{i} \\vert \\pi^{\\bar{\\rho _{i}},\\bar{\\alpha_{i}},\\bar{\\beta_{i}},\\bar{\\eta_{i}},\\bar{\\kappa _{i}}}_{q+r}(\\xi_{i}) \\Biggr) + \\frac{T^{q+r}}{\\Gamma(q+r+1)} \\Biggr\\} \\\\ ={}& (Lr+M)\\Phi\\leq r, \\end{aligned}\n\nwhich gives $$\\mathcal{F}B_{r}\\subset B_{r}$$.\n\nFor $$x,y\\in\\mathcal{C}$$ and for each $$t\\in[0, T]$$, we obtain\n\n\\begin{aligned} & \\bigl\\vert \\mathcal{F}x(t)-\\mathcal{F}y(t) \\bigr\\vert \\\\ &\\quad\\le \\frac{1}{ \\vert \\Omega \\vert } \\Biggl( \\biggl[ \\vert \\Omega_{1} \\vert + \\vert \\Omega_{2} \\vert \\frac {\\Gamma(q)}{\\Gamma(q+r)}T^{q+r-1} \\biggr] \\bigl({^{\\mathrm{RL}}}I^{q+r} \\bigl( \\vert f_{x}-f_{y} \\vert \\bigr) \\bigr) (T) \\\\ &\\qquad{}+ \\biggl[ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)}T^{q+r-1}+ \\vert 
\\Omega _{1} \\vert \\biggr] \\\\ &\\qquad{} \\times\\sum_{j=1}^{n} \\vert \\sigma_{j} \\vert \\bigl[{^{\\rho_{j}}I^{\\alpha _{j},\\beta_{j}}_{\\eta_{j},\\kappa_{j}}} \\bigl({^{\\mathrm{RL}}I}^{q+r} \\bigl( \\vert f_{x}-f_{y} \\vert \\bigr) \\bigr) \\bigr](\\delta_{j}) \\\\ &\\qquad{} + \\biggl[ \\vert \\Omega_{3} \\vert + \\vert \\Omega_{4} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)}T^{q+r-1} \\biggr] \\\\ &\\qquad{} \\times\\sum_{i=1}^{m} \\vert \\gamma_{i} \\vert \\bigl[ {^{\\bar{\\rho_{i}} }}I^{\\bar {\\alpha_{i}},\\bar{\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar{\\kappa_{i}}} \\bigl({^{\\mathrm{RL}}}I^{q+r} \\bigl( \\vert f_{x}-f_{y} \\vert \\bigr) \\bigr) \\bigr](\\xi_{i}) \\Biggr) \\\\ &\\qquad{} + \\bigl({^{\\mathrm{RL}}}I^{q+r} \\bigl( \\vert f_{x}-f_{y} \\vert \\bigr) \\bigr) (t) \\\\ &\\quad\\le L \\Vert x-y \\Vert \\Biggl\\{ \\frac{1}{\\Gamma(q+r+1) \\vert \\Omega \\vert } \\Biggl( \\vert \\Omega _{1} \\vert T^{q+r}+ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)}T^{2q+2r-1} \\\\ &\\qquad{} + \\biggl[ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)}T^{q+r-1}+ \\vert \\Omega _{1} \\vert \\biggr]\\sum_{j=1}^{n} \\vert \\sigma_{j} \\vert \\pi^{\\rho_{j},\\alpha_{j},\\beta _{j},\\eta_{j},\\kappa_{j}}_{q+r}( \\delta_{j}) \\\\ &\\qquad{} + \\biggl[ \\vert \\Omega_{3} \\vert + \\vert \\Omega_{4} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)}T^{q+r-1} \\biggr]\\sum _{i=1}^{m} \\vert \\gamma_{i} \\vert \\pi^{\\bar{\\rho _{i}},\\bar{\\alpha_{i}},\\bar{\\beta_{i}},\\bar{\\eta_{i}},\\bar{\\kappa _{i}}}_{q+r}(\\xi_{i}) \\Biggr) + \\frac{T^{q+r}}{\\Gamma(q+r+1)} \\Biggr\\} \\\\ &\\quad = L\\Phi \\Vert x-y \\Vert . \\end{aligned}\n\nThe above result leads to $$\\|\\mathcal{F}x-\\mathcal{F}y\\|\\leq L\\Phi\\| x-y\\|$$. As $$L\\Phi<1$$, therefore the operator $$\\mathcal{F}$$ is a contraction. Hence, by the Banach contraction mapping principle, we deduce that $$\\mathcal{F}$$ has a fixed point which is the unique solution of the problem (1)–(2). The proof is completed. □\n\n### Corollary 1\n\nLet condition $$(H_{1})$$ in Theorem (1) hold. If $$L\\Phi^{*}<1$$, where $$\\Phi^{*}$$ is defined by\n\n\\begin{aligned} \\Phi^{*}=\\frac{T^{q}}{ \\vert \\Omega^{*} \\vert \\Gamma(q+1)\\Gamma(q+r+1)} \\Biggl(\\sum _{j=1}^{n} \\vert \\sigma_{j} \\vert \\pi^{\\rho_{j},\\alpha_{j},\\beta_{j},\\eta_{j},\\kappa _{j}}_{q+r}(\\delta_{j})+T^{q+r} \\Biggr)+\\frac{T^{q+r}}{\\Gamma(q+r+1)}, \\end{aligned}\n\nthen the boundary value problem (3)(4) has a unique solution on $$[0,T]$$.\n\nNext, we give the second existence theorem by using Krasnoselskii’s fixed point theorem.\n\nSetting a constant\n\n$$\\Phi_{1}=\\frac{T^{q+r}}{\\Gamma(q+r+1)}+\\frac{1}{ \\vert \\Omega \\vert \\Gamma (q+r+1)} \\biggl( \\vert \\Omega_{1} \\vert T^{q+r}+ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)}T^{2q+2r+1} \\biggr).$$\n\nNote that $$\\Phi_{1}\\leq\\Phi$$. Now, we state and prove the second result.\n\n### Theorem 2\n\nAssume that $$f:[0,T]\\times\\mathbb{R}\\rightarrow\\mathbb{R}$$ is continuous and satisfies assumption $$(H_{1})$$ of Theorem 1. 
In addition we suppose that:\n\n$$(H_{2})$$ :\n\n$$|f(t,x)|\\leq\\phi(t)$$, $$\\forall(t,x)\\in[0, T]\\times \\mathbb{R}$$ and $$\\phi\\in C([0,T],\\mathbb{R}^{+})$$.\n\nIf the inequality\n\n\\begin{aligned} \\Phi_{1}L< 1 \\end{aligned}\n(24)\n\nholds, then the boundary value problem (1)(2) has at least one solution on $$[0, T]$$.\n\n### Proof\n\nLet us define a suitable ball $$B_{\\overline{r}}=\\{x\\in \\mathcal{C}:\\|x\\|\\leq\\overline{r}\\}$$, where the radius is defined by\n\n$$\\overline{r}\\geq \\Vert \\phi \\Vert \\Phi,$$\n\nwith $$\\sup_{t\\in[0,T]}|\\phi(t)|=\\|\\phi\\|$$ and Φ defined by (21). Furthermore, we define two operators $$\\mathcal{P}$$ and $$\\mathcal{Q}$$ on $$B_{\\overline{r}}$$ as\n\n\\begin{aligned} (\\mathcal{P} x) (t)={}&\\frac{1}{\\Omega} \\Biggl( \\biggl( \\Omega_{2} \\frac {\\Gamma(q)}{\\Gamma(q+r)}t^{q+r-1}-\\Omega_{1} \\biggr)\\sum _{j=1}^{n}\\sigma _{j} \\bigl[{^{\\rho_{j}}I^{\\alpha_{j},\\beta_{j}}_{\\eta_{j},\\phi_{j}}} \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) \\bigr](\\delta_{j}) \\\\ &{} + \\biggl( \\Omega_{3}-\\Omega_{4}\\frac{\\Gamma(q)}{\\Gamma (q+r)}t^{q+r-1} \\biggr)\\sum_{i=1}^{m}\\gamma_{i} \\bigl[ {^{\\bar{\\rho_{i}} }}I^{\\bar{\\alpha_{i}},\\bar{\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar{\\kappa_{i}}} \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) \\bigr](\\xi_{j}) \\Biggr), \\\\ (\\mathcal{Q} x) (t)={}&\\frac{1}{\\Omega} \\biggl( \\biggl( \\Omega_{1}- \\Omega _{2}\\frac{\\Gamma(q)}{\\Gamma(q+r)}t^{q+r-1} \\biggr) \\bigl( {^{\\mathrm{RL}}I^{q+r}}f_{x} \\bigr) (T) \\biggr) \\\\ &{} + \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) (t),\\quad t \\in[0,T]. \\end{aligned}\n\nObserve that $$\\mathcal{F}x=\\mathcal{P}x+\\mathcal{Q}x$$. For $$x,y \\in B_{\\overline{r}}$$, we have\n\n\\begin{aligned} & \\Vert \\mathcal{P} x+\\mathcal{Q} y \\Vert \\\\ &\\quad\\leq \\Vert \\phi \\Vert \\Biggl\\{ \\frac{1}{\\Gamma(q+r+1) \\vert \\Omega \\vert } \\Biggl( \\vert \\Omega _{1} \\vert T^{q+r}+ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)}T^{2q+2r-1} \\\\ &\\qquad{} + \\biggl[ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)}T^{q+r-1}+ \\vert \\Omega _{1} \\vert \\biggr]\\sum_{j=1}^{n} \\vert \\sigma_{j} \\vert \\pi^{\\rho_{j},\\alpha_{j},\\beta _{j},\\eta_{j},\\kappa_{j}}_{q+r}( \\delta_{j}) \\\\ &\\qquad{} + \\biggl[ \\vert \\Omega_{3} \\vert + \\vert \\Omega_{4} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)}T^{q+r-1} \\biggr]\\sum _{i=1}^{m} \\vert \\gamma_{i} \\vert \\pi^{\\bar{\\rho _{i}},\\bar{\\alpha_{i}},\\bar{\\beta_{i}},\\bar{\\eta_{i}},\\bar{\\kappa _{i}}}_{q+r}(\\xi_{i}) \\Biggr) + \\frac{T^{q+r}}{\\Gamma(q+r+1)} \\Biggr\\} \\\\ &\\quad = \\| \\phi \\| \\Phi \\\\ &\\quad \\leq \\overline{r}. \\end{aligned}\n\nThis shows that $$\\mathcal{P} x+ \\mathcal{Q} y\\in B_{\\overline{r}}$$. Therefore, condition (a) of Lemma 6 holds. Using assumption $$(H_{1})$$ with the inequality in (24), we deduce that operator $$\\mathcal{Q}$$ is a contraction mapping which satisfies condition (c) of Lemma 6.\n\nNow, we will show that operator $$\\mathcal{P}$$ satisfies condition (b) of Lemma 6. Since f is a continuous function, we have that operator $$\\mathcal{P}$$ is continuous. Next, we prove compactness of operator $$\\mathcal{P}$$. 
It is easy to verify that\n\n\\begin{aligned} \\Vert \\mathcal{P} x \\Vert \\leq{}& \\Vert \\phi \\Vert \\Biggl\\{ \\frac{1}{\\Gamma(q+r+1) \\vert \\Omega \\vert } \\Biggl( \\biggl[ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)}T^{q+r-1}+ \\vert \\Omega_{1} \\vert \\biggr] \\sum_{j=1}^{n} \\vert \\sigma_{j} \\vert \\pi^{\\rho _{j},\\alpha_{j},\\beta_{j},\\eta_{j},\\kappa_{j}}_{q+r}( \\delta_{j}) \\\\ &{} + \\biggl[ \\vert \\Omega_{3} \\vert + \\vert \\Omega_{4} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)}T^{q+r-1} \\biggr]\\sum _{i=1}^{m} \\vert \\gamma_{i} \\vert \\pi^{\\bar{\\rho _{i}},\\bar{\\alpha_{i}},\\bar{\\beta_{i}},\\bar{\\eta_{i}},\\bar{\\kappa _{i}}}_{q+r}(\\xi_{i}) \\Biggr) \\Biggr\\} . \\end{aligned}\n\nHence, $$\\mathcal{P}(B_{\\overline{r}})$$ is a uniformly bounded set. Let us put $$\\sup_{(t,x)\\in[0,T]\\times B_{\\overline{r}}}|f(t,x)| =\\overline{f}<\\infty$$. Consequently, we get\n\n\\begin{aligned} & \\bigl\\vert (\\mathcal{P} x) (t_{1})-(\\mathcal{P} x) (t_{2}) \\bigr\\vert \\\\ &\\quad = \\Biggl\\vert \\frac{1}{\\Omega} \\Biggl\\{ \\biggl( \\Omega_{2} \\frac{\\Gamma (q)}{\\Gamma(q+r)}t_{1}^{q+r-1}-\\Omega_{1} \\biggr) \\sum_{j=1}^{n}\\sigma _{j} \\bigl[{^{\\rho_{j}}I^{\\alpha_{j},\\beta_{j}}_{\\eta_{j}, \\kappa_{j}}} \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) \\bigr](\\delta_{j}) \\\\ &\\qquad{}+ \\biggl( \\Omega_{3}- \\Omega_{4}\\frac{\\Gamma(q)}{\\Gamma (q+r)}t_{1}^{q+r-1} \\biggr) \\sum_{i=1}^{m}\\gamma_{i} \\bigl[{^{\\bar{\\rho_{i}} }}I^{\\bar{\\alpha_{i}},\\bar {\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar{\\kappa_{i}}} \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) \\bigr](\\xi_{i}) \\Biggr\\} \\\\ &\\qquad{}- \\frac{1}{\\Omega} \\Biggl\\{ \\biggl( \\Omega_{2}\\frac{\\Gamma(q)}{\\Gamma (q+r)}t_{2}^{q+r-1}- \\Omega_{1} \\biggr)\\sum_{j=1}^{n} \\sigma_{j} \\bigl[{^{\\rho_{j}} I^{\\alpha_{j},\\beta_{j}}_{\\eta_{j},\\kappa_{j}}} \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) \\bigr]( \\delta_{j})\\\\ &\\qquad{} + \\biggl( \\Omega_{3}-\\Omega_{4} \\frac{\\Gamma(q)}{\\Gamma (q+r)}t_{2}^{q+r-1} \\biggr)\\sum _{i=1}^{m}\\gamma_{i} \\bigl[ {^{\\bar{\\rho _{i}} }}I^{\\bar{\\alpha_{i}},\\bar{\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar{\\kappa _{i}}} \\bigl({^{\\mathrm{RL}}}I^{q+r}f_{x} \\bigr) \\bigr](\\xi_{i}) \\Biggr\\} \\Biggr\\vert \\\\ &\\quad \\leq\\frac{1}{ \\vert \\Omega \\vert } \\Biggl[ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)} \\bigl\\vert t_{1}^{q+r-1}-t_{2}^{q+r-1} \\bigr\\vert \\bar{f}\\sum_{j=1}^{n} \\vert \\sigma_{j} \\vert \\pi ^{\\rho_{j},\\alpha_{j},\\beta_{j},\\eta_{j}, \\kappa_{j}}_{q+r}( \\delta_{j}) \\\\ &\\qquad{}+ \\vert \\Omega_{4} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)} \\bigl\\vert t_{2}^{q+r-1}-t_{1}^{q+r-1} \\bigr\\vert \\bar{f}\\sum_{i=1}^{m} \\vert \\gamma_{i} \\vert \\pi ^{\\bar{\\rho_{i}},\\bar{\\alpha_{i}},\\bar{\\beta_{i}},\\bar{\\eta_{i}},\\bar {\\kappa_{i}}}_{q+r}( \\xi_{i}) \\Biggr], \\end{aligned}\n\nwhich is independent of x and tends to zero as $$t_{2}\\rightarrow t_{1}$$. Hence, the set $$\\mathcal{P}(B_{\\overline{r}})$$ is equicontinuous. Hence, by the Arzelá–Ascoli theorem, the set $$\\mathcal {P}(B_{\\overline{r}})$$ is relatively compact. Therefore, the operator $$\\mathcal{P}$$ is compact on $$B_{\\overline{r}}$$. Thus all the assumptions of Lemma 6 are satisfied. Then the boundary value problem (1)–(2) has at least one solution on $$[0, T]$$. The proof is completed. 
□\n\n### Remark 5\n\nIn the above theorem, we can interchange the roles of operators $${\\mathcal {P}}$$ and $${\\mathcal {Q}}$$ to obtain a second result, replacing (24) by the following condition:\n\n\\begin{aligned} &\\frac{L}{ \\vert \\Omega \\vert \\Gamma(q+r+1)} \\Biggl( \\biggl[ \\vert \\Omega_{2} \\vert \\frac{\\Gamma (q)}{\\Gamma(q+r)}T^{q+r-1}+ \\vert \\Omega_{1} \\vert \\biggr]\\sum_{j=1}^{n} \\vert \\sigma _{j} \\vert \\pi^{\\rho_{j},\\alpha_{j},\\beta_{j},\\eta_{j},\\kappa _{j}}_{q+r}(\\delta_{j}) \\\\ &\\quad{} + \\biggl[ \\vert \\Omega_{3} \\vert + \\vert \\Omega_{4} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)}T^{q+r-1} \\biggr]\\sum _{i=1}^{m} \\vert \\gamma_{i} \\vert \\pi^{\\bar{\\rho _{i}},\\bar{\\alpha_{i}},\\bar{\\beta_{i}},\\bar{\\eta_{i}},\\bar{\\kappa _{i}}}_{q+r}(\\xi_{i}) \\Biggr) < 1. \\end{aligned}\n\n### Corollary 2\n\nAssume that $$(H_{1})$$ and $$(H_{2})$$ are fulfilled. If either\n\n$$\\frac{L{T^{q+r}}}{\\Gamma(q+r+1)}< 1,$$\n\nor\n\n$$\\frac{LT^{q}}{ \\vert \\Omega^{*} \\vert \\Gamma(q+1)\\Gamma(q+r+1)} \\Biggl(\\sum_{j=1}^{n} \\vert \\sigma_{j} \\vert \\pi^{\\rho_{j},\\alpha_{j},\\beta_{j},\\eta_{j},\\kappa _{j}}_{q+r}( \\delta_{j})+T^{q+r} \\Biggr) < 1,$$\n\nholds, then the boundary value problem (3)(4) has at least one solution on $$[0,T]$$.\n\nNow, our third existence theorem will be proved by using the Leray–Schauder’s Nonlinear Alternative.\n\n### Theorem 3\n\nAssume that $$f:[0, T]\\times\\mathbb{R}\\rightarrow\\mathbb{R}$$ is a continuous function. In addition, we suppose that:\n\n$$(H_{3})$$ :\n\nThere exist a continuous nondecreasing function $$\\psi :[0,\\infty)\\rightarrow(0,\\infty)$$ and a function $$p \\in C([0, T],\\mathbb {R}^{+})$$ such that\n\n$$\\bigl\\vert f(t,x) \\bigr\\vert \\leq p(t)\\psi\\bigl( \\Vert x \\Vert \\bigr)\\quad \\textit{for each }(t,x)\\in[0,T]\\times\\mathbb{R};$$\n$$(H_{4})$$ :\n\nThere exists a constant $$N>0$$ such that\n\n\\begin{aligned} \\frac{N}{ \\Vert p \\Vert \\psi(N)\\Phi}> 1, \\end{aligned}\n\nwhere Φ is defined by (21).\n\nThen the boundary value problem (1)(2) has at least one solution on $$[0, T]$$.\n\n### Proof\n\nTo apply Lemma 7, we define a bounded ball in $$\\mathcal{C}$$ by $$B_{R}=\\{x\\in\\mathcal{C}:\\|x\\|\\leq R\\}$$, $$R>0$$. Now, we shall show that the operator $$\\mathcal{F}$$ defined by (20), maps bounded sets $$B_{R}$$ into bounded sets in $$\\mathcal {C}$$. 
For $$t\\in[0, T]$$ we have\n\n\\begin{aligned} & \\bigl\\vert \\mathcal{F}x(t) \\bigr\\vert \\\\ &\\quad\\le \\frac{1}{ \\vert \\Omega \\vert } \\Biggl( \\biggl[ \\vert \\Omega_{1} \\vert + \\vert \\Omega_{2} \\vert \\frac {\\Gamma(q)}{\\Gamma(q+r)}T^{q+r-1} \\biggr] \\bigl({^{\\mathrm{RL}}I^{q+r}} \\bigl( \\vert f_{x} \\vert \\bigr) \\bigr) (T) \\\\ &\\qquad{} + \\biggl[ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)}T^{q+r-1}+ \\vert \\Omega _{1} \\vert \\biggr]\\sum_{j=1}^{n} \\vert \\sigma_{j} \\vert \\bigl[{^{\\rho_{j}}I^{\\alpha _{j},\\beta_{j}}_{\\eta_{j},\\kappa_{j}}} \\bigl({^{\\mathrm{RL}}}I^{q+r} \\bigl( \\vert f_{x} \\vert \\bigr) \\bigr) \\bigr] (\\delta_{j}) \\\\ &\\qquad{}+ \\biggl[ \\vert \\Omega_{3} \\vert + \\vert \\Omega_{4} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)}T^{q+r-1} \\biggr] \\sum _{i=1}^{m} \\vert \\gamma_{i} \\vert \\bigl[ {^{\\bar{\\rho_{i}} }}I^{\\bar{\\alpha _{i}},\\bar{\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar{\\kappa_{i}}} \\bigl({^{\\mathrm{RL}}}I^{q+r} \\bigl( \\vert f_{x} \\vert \\bigr) \\bigr) \\bigr](\\xi_{i}) \\Biggr) \\\\ &\\qquad{} + \\bigl( {^{\\mathrm{RL}}}I^{q+r} \\bigl( \\vert f_{x} \\vert \\bigr) \\bigr) (t) \\\\ &\\quad \\le \\Vert p \\Vert \\psi\\bigl( \\Vert x \\Vert \\bigr) \\Biggl\\{ \\frac{1}{\\Gamma(q+r+1) \\vert \\Omega \\vert } \\Biggl( \\vert \\Omega _{1} \\vert T^{q+r}+ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)}T^{2q+2r-1} \\\\ &\\qquad{} + \\biggl[ \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)}T^{q+r-1}+ \\vert \\Omega _{1} \\vert \\biggr]\\sum_{j=1}^{n} \\vert \\sigma_{j} \\vert \\pi^{\\rho_{j},\\alpha_{j},\\beta _{j},\\eta_{j},\\kappa_{j}}_{q+r}( \\delta_{j}) \\\\ &\\qquad{} + \\biggl[ \\vert \\Omega_{3} \\vert + \\vert \\Omega_{4} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)}T^{q+r-1} \\biggr]\\sum _{i=1}^{m} \\vert \\gamma_{i} \\vert \\pi^{\\bar{\\rho _{i}},\\bar{\\alpha_{i}},\\bar{\\beta_{i}},\\bar{\\eta_{i}},\\bar{\\kappa _{i}}}_{q+r}(\\xi_{i}) \\Biggr) + \\frac{T^{q+r}}{\\Gamma(q+r+1)} \\Biggr\\} \\\\ &\\quad \\le \\Vert p \\Vert \\psi(R)\\Phi. \\end{aligned}\n\nTherefore, we conclude that $$\\|\\mathcal{F}x\\|\\leq\\|p\\|\\psi(R)\\Phi$$, which implies that the set $$\\mathcal{F}(B_{R})$$ is uniformly bounded.\n\nNext, we will show that $$\\mathcal{F}$$ maps a bounded set $$B_{R}$$ into an equicontinuous set in $$\\mathcal{C}$$. Let $$\\nu_{1}, \\nu_{2}\\in[0, T]$$ with $$\\nu_{1}<\\nu_{2}$$ and for any $$x\\in B_{R}$$. 
Then we have\n\n\\begin{aligned} & \\bigl\\vert (\\mathcal{F} x) (\\nu_{2})-(\\mathcal{F} x) ( \\nu_{1}) \\bigr\\vert \\\\ &\\quad\\le \\frac{1}{ \\vert \\Omega \\vert } \\Biggl( \\biggl( \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma (q+r)} \\bigl\\vert {\\nu_{1}}^{q+r-1}-{ \\nu_{2}}^{q+r-1} \\bigr\\vert \\biggr) \\bigl({^{\\mathrm{RL}}I^{q+r}} \\vert f_{x} \\vert \\bigr) (T) \\\\ &\\qquad{} + \\biggl( \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)} \\bigl\\vert {\\nu _{2}}^{q+r-1}-{\\nu_{1}}^{q+r-1} \\bigr\\vert \\biggr) \\sum_{j=1}^{n} \\sigma_{j} \\bigl[{^{\\rho_{j}}I^{\\alpha_{j},\\beta _{j}}_{\\eta_{j},\\kappa{j}}} \\bigl({^{\\mathrm{RL}}}I^{q+r} \\vert f_{x} \\vert \\bigr) \\bigr](\\delta_{j}) \\\\ &\\qquad{} + \\biggl( \\vert \\Omega_{4}\\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)}\\bigl\\vert { \\nu_{1}}^{q+r-1}-{\\nu _{2}}^{q+r-1} \\bigr\\vert \\biggr) \\sum_{i=1}^{m} \\gamma_{i} \\bigl[ {^{\\bar{\\rho_{i}} }}I^{\\bar{\\alpha_{i}},\\bar {\\beta_{i}}}_{\\bar{\\eta_{i}},\\bar{\\kappa_{i}}} \\bigl({^{\\mathrm{RL}}}I^{q+r} \\vert f_{x} \\vert \\bigr) \\bigr](\\xi_{i}) \\Biggr) \\\\ &\\qquad{} + \\frac{ \\Vert p \\Vert \\psi(R)}{\\Gamma(q+r)} \\biggl\\vert \\int_{0}^{\\nu_{1}} \\bigl[(\\nu _{2}-s)^{q+r-1}-( \\nu_{1}-s)^{q+r-1} \\bigr]\\,ds \\biggr\\vert \\\\ &\\qquad{} + \\frac{ \\Vert p \\Vert \\psi(R)}{\\Gamma(q+r)} \\biggl\\vert \\int_{\\nu_{1}}^{\\nu_{2}}(\\nu _{2}-s)^{q+r-1} \\,ds \\biggr\\vert \\\\ &\\quad\\leq \\Vert p \\Vert \\psi(R) \\Biggl\\{ \\frac{1}{ \\vert \\Omega \\vert } \\Biggl( \\biggl( \\vert \\Omega_{2} \\vert \\frac {\\Gamma(q)}{\\Gamma(q+r)} \\bigl\\vert { \\nu_{1}}^{q+r-1}-{\\nu_{2}}^{q+r-1} \\bigr\\vert \\biggr)\\frac{T^{q+r}}{\\Gamma(q+r+1)} \\\\ &\\qquad{} + \\biggl( \\vert \\Omega_{2} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)} \\bigl\\vert {\\nu _{2}}^{q+r-1}-{\\nu_{1}}^{q+r-1} \\bigr\\vert \\biggr)\\sum_{j=1}^{n} \\vert \\sigma_{j} \\vert \\pi ^{\\rho_{j},\\alpha_{j},\\beta_{j},\\eta_{j},\\kappa_{j}}_{q+r}( \\delta_{j}) \\\\ &\\qquad{} + \\biggl( \\vert \\Omega_{4} \\vert \\frac{\\Gamma(q)}{\\Gamma(q+r)} \\bigl\\vert {\\nu _{1}}^{q+r-1}-{\\nu_{2}}^{q+r-1} \\bigr\\vert \\biggr)\\sum_{i=1}^{m} \\vert \\gamma_{i} \\vert \\pi ^{\\bar{\\rho_{i}},\\bar{\\alpha_{i}},\\bar{\\beta_{i}},\\bar{\\eta_{i}},\\bar {\\kappa_{i}}}_{q+r}( \\xi_{i}) \\Biggr) \\\\ &\\qquad{} + \\frac{1}{\\Gamma(q+r+1)} \\bigl[ \\bigl\\vert \\nu^{q+r}_{2}- \\nu^{q+r}_{1} \\bigr\\vert +2(\\nu_{2}- \\nu_{1})^{q+r} \\bigr] \\Biggr\\} . \\end{aligned}\n\nObviously, the right-hand side of the above inequality tends to zero independently of $$x\\in B_{R}$$ as $$\\nu_{2}\\rightarrow\\nu_{1}$$. Thus $$\\mathcal{F}(B_{R})$$ is an equicontinuous set. Therefore, it follows by the Arzelá–Ascoli theorem that $$\\mathcal{F}:\\mathcal{C}\\rightarrow \\mathcal{C}$$ is completely continuous.\n\nLet x be a solution of boundary value problem (1)–(2). Hence, for $$t\\in[0, T]$$, and using the above method, we have\n\n\\begin{aligned} \\Vert x \\Vert \\leq \\Vert p \\Vert \\psi\\bigl( \\Vert x \\Vert \\bigr) \\Phi, \\end{aligned}\n\nwhich can be written as\n\n\\begin{aligned} \\frac{ \\Vert x \\Vert }{ \\Vert p \\Vert \\psi( \\Vert x \\Vert )\\Phi}\\leq1. \\end{aligned}\n\nIn view of $$(H_{4})$$, there exists an N such that $$\\|x\\|\\neq N$$. Now we define a set\n\n$$U=\\bigl\\{ x\\in B_{R}: \\Vert x \\Vert < N\\bigr\\} .$$\n(25)\n\nNote that the operator $$\\mathcal{F}:\\overline{U}\\rightarrow\\mathcal {C}$$ is continuous and compact. 
From the choice of U, there is no $$x\\in\\partial U$$ such that $$x=\\theta\\mathcal{F}x$$ for some $$\\theta\\in (0,1)$$. Consequently, by the nonlinear alternative of Leray–Schauder type (Lemma 7) we get that $${\\mathcal {F}}$$ has a fixed point in , which is a solution of the boundary value problem (1)–(2). This completes the proof. □\n\n### Corollary 3\n\nLet condition $$(H_{3})$$ in Theorem 3 hold. If there exists a constant $$N>0$$ such that\n\n\\begin{aligned} \\frac{N}{ \\Vert p \\Vert \\psi(N)\\Phi^{*}}> 1, \\end{aligned}\n\nthen the boundary value problem (3)(4) has at least one solution on $$[0, T]$$.\n\nThe following corollary is obtained by substituting $$p(t)\\equiv1$$ and $$\\psi(|x|)=M|x|+K$$. Then we can use the following assumption.\n\n$$(H_{5})$$ :\n\nThere exist constants $$M>0$$ and $$K\\geq0$$ such that\n\n$$\\bigl\\vert f(t,x) \\bigr\\vert \\leq M \\vert x \\vert +K \\quad\\textit{for each } (t,x)\\in[0, T] \\times\\mathbb{R}.$$\n\n### Corollary 4\n\nAssume that a continuous function $$f:[0,T]\\times\\mathbb{R}\\rightarrow \\mathbb{R}$$ satisfies condition $$(H_{5})$$.\n\n1. (i)\n\nIf $$M\\Phi<1$$, then boundary value problem (1)(2) has at least one solution on $$[0, T]$$.\n\n2. (ii)\n\nIf $$M\\Phi^{*}<1$$, then the boundary value problem (3)(4) has at least one solution on $$[0, T]$$.\n\n## Examples\n\n### Example 1\n\nConsider the following nonlinear sequential Riemann–Liouville and Caputo fractional differential equation with generalized fractional integral conditions:\n\n$$\\textstyle\\begin{cases} {^{\\mathrm{RL}}}D^{\\frac{1}{2}} ({^{C}}D^{\\frac{3}{4}}x )(t)= \\frac{\\cos ^{2}(2\\pi t)}{{(t^{2}+5)}^{2}+35}\\cdot ( \\frac {x^{2}(t)+2 \\vert x(t) \\vert }{ \\vert x(t) \\vert +1} ) +e^{t},\\quad 0< t< 5,\\\\ x(0)= \\frac{1}{3} {^{\\frac{1}{2}}}I^{\\frac{1}{2},\\frac{3}{4}}_{\\frac {1}{2},\\frac{3}{4}}x (\\frac{3}{2} )+\\frac{1}{2} {^{\\frac {5}{2}}}I^{\\frac{3}{2},\\frac{1}{2}}_{\\frac{1}{2},\\frac{1}{3}}x (\\frac{5}{2} ),\\\\ x(5)= \\frac{2}{3} {^{\\frac{1}{3}}}I^{\\frac{1}{4},\\frac{1}{4}}_{\\frac {1}{2},\\frac{1}{3}}x (\\frac{1}{2} ) +\\frac{5}{7} {^{\\frac{3}{2}}}I^{\\frac{1}{3},\\frac{1}{4}}_{\\frac {1}{2},\\frac{3}{2}}x (\\frac{7}{2} ) +\\frac{11}{16} {^{\\frac{5}{2}}}I^{\\frac{5}{2},\\frac{1}{2}}_{\\frac {3}{2},\\frac{1}{2}}x (\\frac{9}{2} ). \\end{cases}$$\n(26)\n\nHere $$q=1/2$$, $$r=3/4$$, $$m=2$$, $$n=3$$, $$T=5$$, $$\\gamma_{1}=1/3$$, $$\\gamma _{2}=1/2$$, $$\\bar{\\rho_{1}}=1/2$$, $$\\bar{\\rho_{2}}=3/2$$, $$\\bar{\\alpha_{1}}=1/2$$, $$\\bar{\\alpha_{2}}=3/2$$, $$\\bar{\\beta_{1}}=3/4$$, $$\\bar{\\beta_{2}}=1/2$$, $$\\bar {\\eta_{1}}=3/2$$, $$\\bar{\\eta_{2}}=1/2$$, $$\\bar{\\kappa_{1}}=1/2$$, $$\\bar{\\kappa _{2}}=1/3$$, $$\\xi_{1}=3/2$$, $$\\xi_{2}=5/2$$, $${\\sigma_{1}=2/3}$$, $${\\sigma_{2}=5/7}$$, $${\\sigma_{3}=11/16}$$, $$\\rho_{1}=1/3$$, $$\\rho_{2}=3/2$$, $$\\rho_{3}=5/2$$, $$\\alpha_{1}=1/4$$, $$\\alpha_{2}=1/3$$, $$\\alpha _{3}=5/2$$, $$\\beta_{1}=1/4$$, $$\\beta_{2}=1/4$$, $$\\beta_{3}=1/2$$, $$\\eta_{1}=1/2$$, $$\\eta_{2}=1/2$$, $$\\eta_{3}=3/2$$, $$\\kappa_{1}=1/3$$, $$\\kappa_{2}=3/2$$, $$\\kappa _{3}=1/2$$, $$\\delta_{1}=1/2$$, $$\\delta_{2}=7/2$$, $$\\delta_{3}=9/2$$ and $$f(t,x)=((\\cos^{2}(2\\pi t))/({(t^{2}+5)}^{2}+35))\\cdot ( (x^{2}+2|x|)/(|x|+1) ) +e^{t}$$. From the given information, we find that $$\\Phi=26.98694773$$. Since\n\n$$\\bigl\\vert f(t,x)-f(t,y) \\bigr\\vert \\leq\\frac{1}{30} \\vert x-y \\vert ,$$\n\ncondition $$(H_{1})$$ is satisfied with $$L=1/30$$. Thus\n\n\\begin{aligned} L\\Phi=0.8995649243< 1. 
\\end{aligned}\n\nHence, by Theorem 1, the boundary value problem (26) has a unique solution on $$[0,5]$$.\n\n### Example 2\n\nConsider the following nonlinear sequential Riemann–Liouville and Caputo fractional differential equation with generalized fractional integral conditions:\n\n$$\\textstyle\\begin{cases} {^{\\mathrm{RL}}}D^{\\frac{4}{5}} ({^{C}}D^{\\frac{1}{2}}x )(t)= \\frac {e^{-t^{2}}}{(t+2)^{2}+2}\\cdot\\frac{ \\vert x(t) \\vert }{ \\vert x(t) \\vert +1}+\\frac{t}{t+1},\\quad 0< t< 3,\\\\ x(0)=\\frac{1}{2} {^{\\frac{1}{2}}}I^{\\frac{1}{2},\\frac{1}{4}}_{\\frac {1}{2},\\frac{1}{4}}x (\\frac{1}{2} )+\\frac{1}{3} {^{\\frac {1}{3}}}I^{\\frac{1}{3},\\frac{1}{2}}_{\\frac{1}{2},\\frac{1}{2}}x (\\frac{2}{3} ) +\\frac{5}{6} {^{\\frac{1}{4}}}I^{\\frac{1}{2},\\frac{1}{2}}_{\\frac {1}{4},\\frac{1}{4}}x (\\frac{3}{2} ),\\\\ x(3)=\\frac{1}{4} {^{\\frac{1}{4}}}I^{\\frac{1}{3},\\frac{1}{2}}_{\\frac {1}{4},\\frac{1}{4}}x (\\frac{4}{3} ) +\\frac{1}{3} {^{\\frac{1}{3}}}I^{\\frac{1}{3},\\frac{1}{3}}_{\\frac {1}{2},\\frac{1}{2}}x (\\frac{7}{3} ) +\\frac{5}{6} {^{\\frac{1}{4}}}I^{\\frac{1}{4},\\frac{1}{2}}_{\\frac {1}{2},\\frac{1}{4}}x (\\frac{5}{2} ). \\end{cases}$$\n(27)\n\nHere $$q=4/5$$, $$r=1/2$$, $$m=3$$, $$n=3$$, $$T=3$$, $$\\gamma_{1}=1/2$$, $$\\gamma _{2}=1/3$$, $$\\gamma_{3}=5/6$$, $$\\bar{\\rho_{1}}=1/2$$, $$\\bar{\\rho_{2}}=1/3$$, $$\\bar {\\rho_{3}}=1/4$$, $$\\bar{\\alpha_{1}}=1/2$$, $$\\bar{\\alpha_{2}}=1/3$$, $$\\bar{\\alpha _{3}}=1/2$$ $$\\bar{\\beta_{1}}=1/4$$, $$\\bar{\\beta_{2}}=1/2$$, $$\\bar{\\beta_{3}}=1/2$$ $$\\bar{\\eta_{1}}=1/2$$, $$\\bar{\\eta_{2}}=1/2$$, $$\\bar{\\eta_{3}}=1/4$$, $$\\bar{\\kappa _{1}}=1/4$$, $$\\bar{\\kappa_{2}}=1/2$$, $$\\bar{\\kappa_{3}}=1/4$$, $$\\xi_{1}=1/2$$, $$\\xi _{2}=2/3$$, $$\\xi_{3}=3/2$$, $${\\sigma_{1}=1/4}$$, $${\\sigma_{2}=1/3}$$, $${\\sigma_{3}=5/6}$$, $$\\rho_{1}=1/4$$, $$\\rho_{2}=1/3$$, $$\\rho_{3}=1/4$$, $$\\alpha_{1}=1/3$$, $$\\alpha_{2}=1/3$$, $$\\alpha _{3}=1/4$$, $$\\beta_{1}=1/2$$, $$\\beta_{2}=1/3$$, $$\\beta_{3}=1/2$$, $$\\eta_{1}=1/4$$, $$\\eta_{2}=1/2$$, $$\\eta_{3}=1/2$$, $$\\kappa_{1}=1/4$$, $$\\kappa_{2}=1/2$$, $$\\kappa _{3}=1/4$$, $$\\delta_{1}=4/3$$, $$\\delta_{2}=7/3$$, $$\\delta_{3}=5/2$$ and $$f(t,x)=((e^{-t^{2}})/((t+2)^{2}+2))\\cdot((|x|)/(|x|+1))+(t/(t+1))$$. From the above information, we can find that $$\\Phi=11.03750380$$ and $$\\Phi _{1}=5.898666195$$. From $$|f(t,x)-f(t,y)|\\leq(1/6)|x-y|$$, we set $$L=1/6$$, which is a constant satisfying $$(H_{1})$$. Since $$L\\Phi=1.839583967>1$$, Theorem 1 cannot be used in this example. However, we can check that\n\n$$L\\Phi_{1}= 0.9831110325< 1$$\n\nand\n\n$$\\bigl\\vert f(t,x) \\bigr\\vert = \\biggl\\vert \\frac{e^{-t^{2}}}{(t+2)^{2}}\\cdot \\frac { \\vert x(t) \\vert }{ \\vert x(t)+1 \\vert }+\\frac{t}{t+1} \\biggr\\vert \\leq\\frac{e^{-t^{2}}}{6}+ \\frac{t}{t+1},$$\n\nwhich is needed in condition $$(H_{2})$$ in Theorem 2. Hence, by Theorem 2, the boundary value problem (27) has at least one solution on $$[0,3]$$.\n\n## References\n\n1. Podlubny, I.: Fractional Differential Equations. Academic Press, San Diego (1999)\n\n2. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. North-Holland Mathematics Studies, vol. 204. Elsevier, Amsterdam (2006)\n\n3. Klafter, J., Lim, S.C., Metzler, R. (eds.): Fractional Dynamics in Physics. World Scientific, Singapore (2011)\n\n4. Zhou, Y.: Basic Theory of Fractional Differential Equations. World Scientific, Hackensack (2014)\n\n5. 
Ahmad, B., Nieto, J.J.: Riemann–Liouville fractional integro-differential equations with fractional nonlocal integral boundary conditions. Bound. Value Probl. 2011, 36 (2011)\n\n6. Bai, Z.B., Sun, W.: Existence and multiplicity of positive solutions for singular fractional boundary value problems. Comput. Math. Appl. 63, 1369–1381 (2012)\n\n7. O’Regan, D., Stanek, S.: Fractional boundary value problems with singularities in space variables. Nonlinear Dyn. 71, 641–652 (2013)\n\n8. Ahmad, B., Ntouyas, S.K., Alsaedi, A.: A study of nonlinear fractional differential equations of arbitrary order with Riemann–Liouville type multistrip boundary conditions. Math. Probl. Eng. 2013, Article ID 320415 (2013)\n\n9. Zhang, L., Ahmad, B., Wang, G., Agarwal, R.P.: Nonlinear fractional integro-differential equations on unbounded domains in a Banach space. J. Comput. Appl. Math. 249, 51–56 (2013)\n\n10. Liu, X., Jia, M., Ge, W.: Multiple solutions of a p-Laplacian model involving a fractional derivative. Adv. Differ. Equ. 2013, 126 (2013)\n\n11. Wang, J.R., Zhou, Y., Feckan, M.: On the nonlocal Cauchy problem for semilinear fractional order evolution equations. Cent. Eur. J. Math. 12, 911–922 (2014)\n\n12. Henderson, J., Luca, R., Tudorache, A.: On a system of fractional differential equations with coupled integral boundary conditions. Fract. Calc. Appl. Anal. 18, 361–386 (2015)\n\n13. Ntouyas, S.K., Etemad, S.: On the existence of solutions for fractional differential inclusions with sum and integral boundary conditions. Appl. Math. Comput. 266, 235–243 (2015)\n\n14. Ntouyas, S.K., Etemad, S., Tariboon, J.: Existence of solutions for fractional differential inclusions with integral boundary conditions. Bound. Value Probl. 2015, 92 (2015)\n\n15. Ntouyas, S.K., Etemad, S., Tariboon, J.: Existence results for multi-term fractional differential inclusions. Adv. Differ. Equ. 2015, 140 (2015)\n\n16. Qarout, D., Ahmad, B., Alsaedi, A.: Existence theorems for semi-linear Caputo fractional differential equations with nonlocal discrete and integral boundary conditions. Fract. Calc. Appl. Anal. 19, 463–479 (2016)\n\n17. Zhou, Y., Shangerganesh, L., Manimaran, J., Debbouche, A.: A class of time—fractional reaction–diffusion equation with nonlocal boundary condition. Math. Methods Appl. Sci. 41, 2987–2999 (2018)\n\n18. Zhou, Y.: Attractivity for fractional evolution equations with almost sectorial operators. Fract. Calc. Appl. Anal. 21, 786–800 (2018)\n\n19. Ahmad, B., Ntouyas, S.K.: Some fractional-order one-dimensional semi-linear problems under nonlocal integral boundary conditions. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 110, 159–172 (2016)\n\n20. Jia, M., Zhang, H., Chen, Q.: Existence of positive solutions for fractional differential equation with integral boundary conditions on the half-line. Bound. Value Probl. 2016, 104 (2016)\n\n21. Ahmad, B., Ntouyas, S.K., Tariboon, J.: A study of mixed Hadamard and Riemann–Liouville fractional integro-differential inclusions via endpoint theory. Appl. Math. Lett. 52, 9–14 (2016)\n\n22. Katugampola, U.N.: New approach to a generalized fractional integral. Appl. Math. Comput. 218, 860–865 (2015)\n\n23. Malinowska, A.B., Odzijewicz, T., Torres, D.F.M.: Advanced Methods in the Fractional Calculus of Variations. Springer, Berlin (2015)\n\n24. Butkovskii, A.G., Postnov, S.S., Postnova, E.A.: Fractional integro-differential calculus and its control-theoretical applications in mathematical fundamentals and the problem of interpretation. Autom. 
Remote Control 74, 543–574 (2013)\n\n25. Pooseh, S., Almeida, R., Torres, D.F.M.: Expansion formulas in terms of integer-order derivatives for the Hadamard fractional integral and derivative. Numer. Funct. Anal. Optim. 33, 301–319 (2012)\n\n26. Gambo, Y.Y., Ameen, R., Jarad, F., Abdeljawad, T.: Existence and uniqueness of solutions to fractional differential equations in the frame of generalized Caputo fractional derivatives. Adv. Differ. Equ. 2018, 134 (2018)\n\n27. Mahmudov, N.I., Emin, S.: Fractional-order boundary value problems with Katugampola fractional integral conditions. Adv. Differ. Equ. 2018, 81 (2018)\n\n28. Katugampola, U.N.: New fractional integral unifying six existing fractional integrals (2016) arXiv:1612.08596v1 [math.CA]\n\n29. Krasnoselskii, M.A.: Two remarks on the method of successive approximations. Usp. Mat. Nauk 10, 123–127 (1955)\n\n30. Granas, A., Dugundji, J.: Fixed Point Theory. Springer, New York (2003)\n\n## Funding\n\nThis research was funded by King Mongkut’s University of Technology North Bangkok. Contract no. KMUTNB-60-ART-072.\n\n## Author information\n\nAuthors\n\n### Contributions\n\nThe authors have contributed equally to this manuscript. They read and approved the final manuscript.\n\n### Corresponding author\n\nCorrespondence to Jessada Tariboon.\n\n## Ethics declarations\n\n### Competing interests\n\nThe authors declare that they have no competing interests.",
https://socratic.org/questions/how-do-you-simplify-4sqrt-20-sqrt-45-7sqrt5
"# How do you simplify 4sqrt(20)-sqrt(45)+7sqrt5?\n\nFirst, you need to properly factor all of your roots. This starts with the $4 \\sqrt{20}$ where you must factor out a 2, leaving $8 \\sqrt{5}$. Then on the $\\sqrt{45}$ factor out a 3, leaving $3 \\sqrt{5}$. Now you are left with $8 \\sqrt{5} - 3 \\sqrt{5} + 7 \\sqrt{5}$. By combining like terms you have $12 \\sqrt{5}$"
https://dba.stackexchange.com/questions/82522/get-a-specific-row-from-a-tsql-common-table-expression
"# Get a specific row from a TSQL common table expression\n\nI have this tsql code to recursively get the whole workgroup hierarchy for a specific workgroup (WgID 283 here):\n\n``````WITH GroupList(WgID,WgParentWgID) AS (\nSELECT WgID,WgParentWgID\nFROM tblplWorkGroup WITH (NOLOCK) WHERE WgID = 283 AND WgActive=1\n\nUNION ALL\n\nSELECT B.WgID,B.WgParentWgID\nFROM GroupList A\nINNER JOIN tblplWorkGroup B WITH (NOLOCK) ON A.WgParentWgID = B.WgID\nWHERE B.WgActive=1\n) SELECT * FROM GroupList\n``````\n\nGives this:\n\n``````WgID WgParentWgID\n283 315\n315 272\n272 4\n4 0\n``````\n\nI need to get just the second to last row. I can do it by changing the last select statement.\n\n`````` SELECT *\nFROM GroupList\nWHERE WgParentWgID = (SELECT TOP (1) WgID\nFROM GroupList\nWHERE WgParentWgID = 0)\n``````\n\nGives:\n\n``````WgID WgParentWgID\n272 4\n``````\n\nHowever, if I want to get a nth row, it starts to get cumbersome. Is there a way to get a nth row from a WITH table expression? Or output row information in the output table too?\n\nI tried converting the WITH expression to use a temporary table (where looks like I can get the nth row) but could not make the recursive functionality work. Any help is much appreciated.\n\n• What about converting it to a function that returns a table? Then you could use a parm and pass it in. – Chris Aldrich Nov 12 '14 at 21:33\n• @ChrisAldrich I have it in a function and I'll be passing the row as a param. I just couldn't figure out how to get the nth row from the table here. – halfwise7 Nov 12 '14 at 22:04\n\nThere are a few ways to do this. If there is a 1-2-1 relationship between the parents and widgets ( ie a parent only has one widget and vice versa ) you can just add a column in the CTE to count how many rows have passed through. See the xlevel example below.\n\nIf that doesn't work for you ( and I would probably expect a parent to have multiple widgets ) then you could just throw all the results into a temp table using the IDENTITY function, then grab the third row. Of course you need some kind of meaningful order for this to work, but see the #tmp table version below.\n\nYou could also use ROW_NUMBER(), available in SQL Server 2008 to add an arbitrary row number to the resultset. Again you need some kind of order to give it meaning. 
See the ROW_NUMBER example below:\n\n``````;WITH GroupList(xlevel, WgID,WgParentWgID) AS (\nSELECT 1 xlevel, WgID,WgParentWgID\nFROM tblplWorkGroup WITH (NOLOCK) WHERE WgID = 283 AND WgActive=1\n\nUNION ALL\n\nSELECT xlevel + 1, B.WgID,B.WgParentWgID\nFROM GroupList A\nINNER JOIN tblplWorkGroup B WITH (NOLOCK) ON A.WgParentWgID = B.WgID\nWHERE B.WgActive=1\n)\nSELECT *\nFROM GroupList\nWHERE xlevel = 3\nGO\n\nIF OBJECT_ID('tempdb..#tmp') IS NOT NULL DROP TABLE #tmp\n\n;WITH GroupList(WgID,WgParentWgID) AS (\nSELECT WgID,WgParentWgID\nFROM tblplWorkGroup WITH (NOLOCK) WHERE WgID = 283 AND WgActive=1\n\nUNION ALL\n\nSELECT B.WgID,B.WgParentWgID\nFROM GroupList A\nINNER JOIN tblplWorkGroup B WITH (NOLOCK) ON A.WgParentWgID = B.WgID\nWHERE B.WgActive=1\n)\nSELECT IDENTITY( INT, 1, 1 ) rowId, *\nINTO #tmp\nFROM GroupList\nORDER BY WgParentWgID, WgID\nGO\n\nSELECT *\nFROM #tmp\nWHERE rowId = 3\nGO\n\n;WITH GroupList(WgID,WgParentWgID) AS (\nSELECT WgID,WgParentWgID\nFROM tblplWorkGroup WITH (NOLOCK) WHERE WgID = 283 AND WgActive=1\n\nUNION ALL\n\nSELECT B.WgID,B.WgParentWgID\nFROM GroupList A\nINNER JOIN tblplWorkGroup B WITH (NOLOCK) ON A.WgParentWgID = B.WgID\nWHERE B.WgActive=1\n)\nSELECT *\nFROM\n(\nSELECT ROW_NUMBER() OVER( ORDER BY WgParentWgID, WgID ) rowId, *\nFROM GroupList\n) x\nWHERE rowId = 3\nORDER BY rowId\nGO\n``````\n\nHTH\n\n• Thanks so much for the detailed response! I ended up using the second option but since I was using it in a function I changed the temp table into a table variable. Also needed to count the no of rows in the table and determine the requested row based on that so that worked out well. – halfwise7 Nov 14 '14 at 15:18\n• Glad that worked! – wBob Nov 14 '14 at 15:24"
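Outside of SQL, the same idea the answer uses, walk the parent chain first and then index into the result, can be sketched in a few lines of Python. The parent map below is hypothetical and simply mirrors the sample rows from the question.

```python
# Hypothetical parent map mirroring the sample data: WgID -> WgParentWgID
parents = {283: 315, 315: 272, 272: 4, 4: 0}

def ancestor_chain(wg_id):
    """Walk from wg_id up to the root, like the recursive CTE does."""
    chain = []
    while wg_id in parents:
        chain.append((wg_id, parents[wg_id]))
        wg_id = parents[wg_id]
    return chain

chain = ancestor_chain(283)
print(chain)       # [(283, 315), (315, 272), (272, 4), (4, 0)]
print(chain[-2])   # second-to-last row: (272, 4)
print(chain[2])    # an arbitrary nth row (0-based index), also (272, 4) here
```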
https://www.teachoo.com/2564/621/Example-15---Find-sum-of-7--77--777--7777--...-to-n-terms/category/Examples/
"Examples\n\nChapter 9 Class 11 Sequences and Series\nSerial order wise",
"This video is only available for Teachoo black users\n\nSolve all your doubts with Teachoo Black (new monthly pack available now!)\n\n### Transcript\n\nExample 15 Find the sum of the sequence 7, 77, 777, 7777, ... to n terms. 7, 77, 777, 7777, ... n terms Here, 77/7 = 11 & 777/77 = 10.09 Thus, ( )/( ) ( )/( ) i.e. common ratio is not same This is not a GP We need to find sum Sum = 7 + 77 + 777 + 7777 + ...upto n terms = 7(1 + 11 + 111 + . upto n terms) = 7(1 + 11 + 111 + . upto n terms) Multiplying & dividing by 9 = 7/9 [9(1 + 11 + 111 + upto n term) = 7/9 [9 + 99 + 999 + 9999 + upto n terms] = 7/9 [(10 1) + (100 1) + (1000 1) + upto n terms] = 7/9 [(10 + 100 + 1000 + .n terms) 1 1 1 upto n terms] = 7/9 [(10 + 100 + 1000 + .n terms) (1 + 1 + 1 + upto n terms)] = 7/9 [(10 + 100 + 1000 + .n terms) n 1] = 7/9 [(10 + 100 + 1000 + .n terms) n] Now, a = 10, r = 10 For, r > 1 i.e. Sn = (a( ^ 1))/( 1) Putting value of a = 10 & r = 10 Sn = (10( 10 ^ 1))/(10 1) Sn = (10( 10 ^ 1))/9 Now substituting this value in (1) Sum = 7/9 [(10 + 102 + 103 + upto n terms) n] Sum = 7/9 [(10( 10 ^ 1))/9 \" n\" ] Thus, 7, 77, 777, 7777, ...upto n terms = 7/9 [(10( 10 ^ 1))/9 \" n\" ]",
https://math.answers.com/Q/One_mm_equals_how_many_micrometers
"",
null,
"",
null,
"",
null,
"",
null,
"0\n\n# One mm equals how many micrometers?\n\nUpdated: 12/24/2022",
null,
"Wiki User\n\n13y ago\n\n1 millimeter=1,000 micrometers",
"Related questions\n\n### How many mm is equals to 500 micrometers?\n\n500 micrometers = 0.5 millimeters\n\n### 15 millimeters equals how many micrometers?\n\nBy unit of length and distance and conversion ,we can say that 1 mm=1000 millimeter 15 mm=15000 mm\n\n### 1000 of these metric units equals one millimeter?\n\nMicrometers Algebraic Steps / Dimensional Analysis 1,000 micrometers*1 mm 1000 micrometers=1 mm (milli meter)\n\n### How many micrometers are in 2.14 mm?\n\nsince there are 1000um(micrometers)/mm then 2.14 x 1000 = 2,140um\n\n### How many 2 micrometers fit into a mm?\n\n2 micrometers=0.002 millimeters\n\n### How many mm are in 2025 micrometer?\n\n2025 micrometers = 2.025 mm\n\n### How many meters equals a mm?\n\n1000 mm equals one meter\n\n### How many cubic micrometers in a cubic millimeter?\n\nCubic micrometers x 1e-9 = cubic mm\n\n0.6 mm\n\n### How many micrometers equal 1 millimeter?\n\n1 micrometer = .001 millimeter 1000 micrometers in one millimeter (mm) = 10^-3 (um) = 10^-6\n\n### What is 7 micrometers in mm?\n\n7 micrometers=0.007 millimeters\n\n### How many dust bites per mm?\n\nthere is 10 dustbites per mm A dust mite is around 420 micrometers long and from 250 to 320 micrometers wide."
https://tex.stackexchange.com/questions/134728/how-to-have-center-title-in-listings-when-you-have-subcaption-workaournd-confli
"# How to have center title in listings when you have subcaption (workaournd conflict between listing caption and subcaption class)?\n\nI want to use listings package (\\usepackage{listings}) to show a matlab code in a paper (using elsarticle class) but the title is not aligned in center below the code. This happened after I added xleftmargin=3.5cm and xrightmargin=3.5cm to add margin from left and right. If I remove these two options, title appears at the center of page! How can I have both margin and center title with listings?\n\nI used \\centering and \\begin{center}...\\end{center} but they didn't work.\n\nUPDATE: I found out that \\usepackage{subcaption} is the problem. without it title will be centered. But I need subcaption!\n\nmy code:\n\n\\documentclass[3p,times]{elsarticle}\n\n\\usepackage{ecrc}\n\\volume{00}\n\\firstpage{1}\n\\journalname{Best Journal Ever}\n\\runauth{S. Gerami}\n\\jid{}\n\\jnltitlelogo{Best Journal Ever}\n\\usepackage{amssymb}\n\\usepackage[figuresright]{rotating}\n\\usepackage[ruled,vlined]{algorithm2e}\n\n%%FOR MATH\n\\usepackage{amsthm}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\n%%I NEED THESE PACKAGED FOR FIGURES AND SUBCAPTIONs\n\\usepackage{graphicx}\n\\usepackage{caption}\n\\usepackage{subcaption}\n\n%%FOR CODES\n\\usepackage{listings}\n\\usepackage{color}\n\n%%FOR TABLES\n\\usepackage{array}\n\\newcolumntype{L}{>{\\raggedright\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}m{#1}}\n\\newcolumntype{C}{>{\\centering\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}m{#1}}\n\\newcolumntype{R}{>{\\raggedleft\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}m{#1}}\n\n\\usepackage{lipsum}\n\\begin{document}\n\n\\begin{frontmatter}\n\\title{title of my paper}\n\n\\address[bahonar]{Faculty of Mathematics and Computer Science, Best University, Somewhere, The One Country}\n\n\\begin{abstract}\nabstract text....\n\\end{abstract}\n\n\\end{frontmatter}\n\n\\section{First section}\n\\lipsum{}\n\\definecolor{mygreen}{rgb}{0,0.6,0}\n\\definecolor{mygray}{rgb}{0.5,0.5,0.5}\n\\definecolor{mymauve}{rgb}{0.58,0,0.82}\n\n\\lstset{ %\nbackgroundcolor=\\color{white}, % choose the background color; you must add \\usepackage{color} or \\usepackage{xcolor}\nbasicstyle=\\footnotesize, % the size of the fonts that are used for the code\nbreakatwhitespace=false, % sets if automatic breaks should only happen at whitespace\nbreaklines=true, % sets automatic line breaking\ncaptionpos=b, % sets the caption-position to bottom\ndeletekeywords={...}, % if you want to delete keywords from the given language\nextendedchars=true, % lets you use non-ASCII characters; for 8-bits encodings only, does not work with UTF-8\nframe=single, % adds a frame around the code\nkeepspaces=true, % keeps spaces in text, useful for keeping indentation of code (possibly needs columns=flexible)\nkeywordstyle=\\color{blue}, % keyword style\nlanguage=Octave, % the language of the code\nmorekeywords={*,...}, % if you want to add more keywords to the set\nnumbers=left, % where to put the line-numbers; possible values are (none, left, right)\nnumbersep=5pt, % how far the line-numbers are from the code\nnumberstyle=\\tiny\\color{mygray}, % the style that is used for the line-numbers\nrulecolor=\\color{black}, % if not set, the frame-color may be changed on line-breaks within not-black text (e.g. 
comments (green here))\nshowspaces=false, % show spaces everywhere adding particular underscores; it overrides 'showstringspaces'\nshowstringspaces=false, % underline spaces within strings only\nshowtabs=false, % show tabs within strings adding particular underscores\nstepnumber=1, % the step between two line-numbers. If it's 1, each line will be numbered\nstringstyle=\\color{mymauve}, % string literal style\ntabsize=2, % sets default tabsize to 2 spaces\n% title=Script 1: A program in Matlab to estimate the total area under sin(x), % show the filename of files included with \\lstinputlisting; also try caption instead of title\nxleftmargin=3.5cm,\nxrightmargin=3.5cm\n}\n\n\\begin{lstlisting}[title={Script 1: A program in Matlab to estimate the total area under $\\sin(x)$}]\na=0;\nb=pi/2;\nh=max(feval(@func,linspace(a,b,n)));\nx=a+(b-a)*rand(n,1);\ny=h*rand(n,1);\nN=0;\nf=sin(x);\nfor j=1:n\nif y(j)<=f(j)\nN=N+1;\nend\nend\nans=N/n*h*(b-a)\n\\end{lstlisting}\n%*****************\n\\lipsum{}\n\n\\end{document}\n\n\noutput:",
"UPDATE2: I tested method proposed in following link and not worked: Center caption in listing\n\nbut then I saw this answer and edited to this:\n\n\\begin{figure}[thp]\n\\begin{center}\n\\begin{lstlisting}[numbers=left]\nMy Code\nAnother line\n\\end{lstlisting}\n\\end{center}\n\\caption{my caption or title}\n\\end{figure}\n\n\nresult:",
"It's good but the gap between box and caption is too much. I need to reduce it for good result.\n\nUPDATE3: This link has the best answer to removing gap for just one figure.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{image}\n\\vspace{-1em}\n\\caption{\\textcolor{gray}{\\footnotesize \\textit{ImageCaption}}}\n\\vspace{-1.5em}\n\\end{figure}\n\n• I get it centered. Anyway, you'd probably better use \\begin{lstlisting}[title={Script 1: A program in Matlab to estimate the total area under $\\sin(x)$}] instead of putting it inside the \\lstset. – karlkoeller Sep 23 '13 at 19:35\n• @karlkoeller I did what you said and got the same result! How you centered it? did you have margin and center title? – sajjadG Sep 23 '13 at 19:39\n• I've just used your code. Please post a complete compilable example that reproduces your issue. – karlkoeller Sep 23 '13 at 19:53\n• To reduce the gap, move the \\caption command inside the center environment and add \\vspace*{-\\baselineskip} just before \\caption. – Andrew Swann Sep 24 '13 at 11:15\n• Did you read section 6.7 of the caption manual? – Martin Schröder Sep 25 '13 at 19:19\n\nEffectively, there is some incopatibility between caption and listings when you use xleftmargin and xrightmargin...\n\nWithout changing any commands from one or the other package, one thing that comes to mind is to insert the listing inside a centered minipage.\n\nSo, remove the lines\n\n xleftmargin=3.5cm,\nxrightmargin=3.5cm\n\n\nfrom inside the \\lstset\n\n\\begin{center}\n\\begin{minipage}{.5\\textwidth}\n\\begin{lstlisting}[title={Script 1: A program in Matlab to estimate the total area under $\\sin(x)$}]\na=0;\nb=pi/2;\nh=max(feval(@func,linspace(a,b,n)));\nx=a+(b-a)*rand(n,1);\ny=h*rand(n,1);\nN=0;\nf=sin(x);\nfor j=1:n\nif y(j)<=f(j)\nN=N+1;\nend\nend\nans=N/n*h*(b-a)\n\\end{lstlisting}\n\\end{minipage}\n\\end{center}\n\n\nThus, the following MWE\n\n\\documentclass[3p,times]{elsarticle}\n\n%%I NEED THESE PACKAGED FOR FIGURES AND SUBCAPTIONs\n\\usepackage{graphicx}\n%\\usepackage{caption} % no need because it is loaded by subcaption.\n\\usepackage{subcaption}\n\n%%FOR CODES\n\\usepackage{listings}\n\\usepackage{color}\n\n\\usepackage{lipsum}\n\\begin{document}\n\n\\begin{frontmatter}\n\\title{title of my paper}\n\n\\begin{abstract}\nabstract text....\n\\end{abstract}\n\n\\end{frontmatter}\n\n\\section{First section}\n\\lipsum{}\n\\definecolor{mygreen}{rgb}{0,0.6,0}\n\\definecolor{mygray}{rgb}{0.5,0.5,0.5}\n\\definecolor{mymauve}{rgb}{0.58,0,0.82}\n\n\\lstset{ %\nbackgroundcolor=\\color{white}, % choose the background color; you must add \\usepackage{color} or \\usepackage{xcolor}\nbasicstyle=\\footnotesize, % the size of the fonts that are used for the code\nbreakatwhitespace=false, % sets if automatic breaks should only happen at whitespace\nbreaklines=true, % sets automatic line breaking\ncaptionpos=b, % sets the caption-position to bottom\ndeletekeywords={...}, % if you want to delete keywords from the given language\nextendedchars=true, % lets you use non-ASCII characters; for 8-bits encodings only, does not work with UTF-8\nframe=single, % adds a frame around the code\nkeepspaces=true, % keeps spaces in text, useful for keeping indentation of code (possibly needs columns=flexible)\nkeywordstyle=\\color{blue}, % keyword style\nlanguage=Octave, % the language of the code\nmorekeywords={*,...}, % if you want to add more keywords to the set\nnumbers=left, % where to put the line-numbers; possible values are (none, left, 
right)\nnumbersep=5pt, % how far the line-numbers are from the code\nnumberstyle=\\tiny\\color{mygray}, % the style that is used for the line-numbers\nrulecolor=\\color{black}, % if not set, the frame-color may be changed on line-breaks within not-black text (e.g. comments (green here))\nshowspaces=false, % show spaces everywhere adding particular underscores; it overrides 'showstringspaces'\nshowstringspaces=false, % underline spaces within strings only\nshowtabs=false, % show tabs within strings adding particular underscores\nstepnumber=1, % the step between two line-numbers. If it's 1, each line will be numbered\nstringstyle=\\color{mymauve}, % string literal style\ntabsize=2, % sets default tabsize to 2 spaces\n}\n\\begin{center}\n\\begin{minipage}{.5\\textwidth}\n\\begin{lstlisting}[title={Script 1: A program in Matlab to estimate the total area under $\\sin(x)$}]\na=0;\nb=pi/2;\nh=max(feval(@func,linspace(a,b,n)));\nx=a+(b-a)*rand(n,1);\ny=h*rand(n,1);\nN=0;\nf=sin(x);\nfor j=1:n\nif y(j)<=f(j)\nN=N+1;\nend\nend\nans=N/n*h*(b-a)\n\\end{lstlisting}\n\\end{minipage}\n\\end{center}\n\\lipsum{}\n\n\\end{document}\n\n\nyields",
"Just some remarks:\n\n1. I've removed some code from your MWE since I didn't know where to get the package ecrc.\n2. the package caption doesn't need to be loaded since it is already loaded by the package subcaption.\n3. I don't know if you are using title instead of caption for a specific reason. If the reason is to get rid of the word \"Listing\" inside the caption you can do the following:\n\na. put the following line in the preamble:\n\n\\renewcommand{\\lstlistingname}{Script}\n\n\nb. replace\n\n\\begin{lstlisting}[title={Script 1: A program in Matlab to estimate the total area under $\\sin(x)$}]\n\n\nwith\n\n\\begin{lstlisting}[caption={A program in Matlab to estimate the total area under $\\sin(x)$}]\n\n• The ecrc package must be called to make the CRC functionality available. It's in the Elsevier template. You used \\documentclass[3p,times]{elsarticle} and ecrc is in it! Your answer is very good. As you can see I have a workaround with figure for this issue. Your remarks is so much teaching. Thank you. – sajjadG Sep 24 '13 at 13:52\n• @sajjadG Remember that inserting the listing inside a figure causes the listing to float. Also, you will have it listed in the List of Figures. If you want this behavior, then use your workaround, otherwise, I'd suggest you to use the code in my answer. – karlkoeller Sep 24 '13 at 14:01"
https://hackonmath.com/category/math/solving-equations/
"## Mean Median Mode and Range\n\nThe mean median mode and range all are values that help characterize a data set. Mean is technically called the arithmetic mean. The mean median mode and range each represent…\n\n## The Associative Property\n\nThe associative property in math says that if you change the order of terms while doing mathematical operations, the result remains the same. The Associative Properties: 1. The Associative Property…\n\n## Commutative Property\n\nCommutative Property, Associative Property, and Distributive Property are three basic properties in math that will help you to increase your speed of computation. Commutative Property: If you solve any expression…\n\n## Multiplying and dividing decimals\n\nMultiplying and dividing decimals means doing multiplication and division of decimal numbers. This can be achieved by doing normal multiplication and then placing decimal points. Multiplying and dividing decimals: Multiplying…\n\n## Multiplication equations: one step equation\n\nMultiplication equations one step equation means finding the value of an unknown variable by eliminating extra constants by multiplication. To solve an algebraic equation, the goal is to always isolate…\n\n## Elimination using addition and subtraction\n\nElimination using addition and subtraction means solving an algebraic equation by finding the value of the unknown. The goal is to always isolate the variable—this means to get the variable…"
https://www.girlsbedroom.org/pink-dresses-for-girls-wallpaper-decor-for-girls-bedroom/
"# pink dresses for girls wallpaper decor for girls bedroom\n\nThese colourful, original and beautifully illustrated large wall stickers come with all your favourite characters and images on one sheet. These stickers can also be used on furniture, windows, mirror…\n\nSchooled in matters of taste by her mother, when designer Emma Burns inherited the former stables that her parents had converted as a weekend retreat, she put into practice the principles that now guide her professional work for Sibyl Colefax & John Fowler. This cosy bedroom has beamed ceilings, and is furnished mainly with antiques.\n\nYour first impulse might be to think of the sloped ceiling and cramped space of an attic bedroom as a minus, but instead, embrace it as a plus. When the bed is piled high with quilts, pillows, and shams, and surrounded by simple, yet lovely furnishings as in the adorable country bedroom shown here, the sloped ceiling and limited square footage simply add to the cozy vibe.\n\nColourful crate style storage can act as a fun bookcase or toy store in a kids’ room, just ensure you affix them securely to the wall. Oh, and while we’re on the wall, the clever handpainted brick paint is pretty cool too.\n\n(function(){var b=this;var f=String.prototype.trim?function(a){return a.trim()}:function(a){return a.replace(/^[\\s\\xa0]+|[\\s\\xa0]+\\$/g,)},g=function(a,c){return ac?1:0};var k;a:{var l=b.navigator;if(l){var m=l.userAgent;if(m){k=m;break a}}k=};var ba=function(a,c){var h=aa;return Object.prototype.hasOwnProperty.call(h,a)?h[a]:h[a]=c(a)};var ca=-1!=k.indexOf(Opera),n=-1!=k.indexOf(Trident)||-1!=k.indexOf(MSIE),da=-1!=k.indexOf(Edge),p=-1!=k.indexOf(Gecko)&&!(-1!=k.toLowerCase().indexOf(webkit)&&-1==k.indexOf(Edge))&&!(-1!=k.indexOf(Trident)||-1!=k.indexOf(MSIE))&&-1==k.indexOf(Edge),q=-1!=k.toLowerCase().indexOf(webkit)&&-1==k.indexOf(Edge),t=function(){var a=b.document;return a?a.documentMode:void 0},u;a:{var v=,w=function(){var a=k;if(p)return/rv\\:([^\\);]+)(\\)|;)/.exec(a);if(da)return/Edge\\/([\\d\\.]+)/.exec(a);if(n)return/\\b(?:MSIE|rv)[: ]([^\\);]+)(\\)|;)/.exec(a);if(q)return/WebKit\\/(\\S+)/.exec(a);if(ca)return/(?:Version)[ \\/]?(\\S+)/.exec(a)}();w&&(v=w?w:);if(n){var x=t();if(null!=x&&x>parseFloat(v)){u=String(x);break a}}u=v}var y=u,aa={},z=function(a){return ba(a,function(){for(var c=0,h=f(String(y)).split(.),S=f(String(a)).split(.),fa=Math.max(h.length,S.length),r=0;0==c&&rH.length)throw Error(Cannot parse serialized data);var I={b:F,content:H.substr(0,G),a:H.substr(G)},J;var K=window.location.hash;if(K){var L=K.match(/[&#]eid=([\\d,]+)/);J=L?L.split(,):[]}else J=[];var M=J;M.indexOf&&0<=M.indexOf(21060230)||!E||(window.name=);var N=JSON.parse(I.a);N.goog_safeframe_hlt&&(b.goog_safeframe_hlt=N.goog_safeframe_hlt);N._context&&(b.AMP_CONTEXT_DATA=N._context);b.sf_={v:I.b,exp:M,cfg:N};var O=I.content,P=window;if(E){var Q=P.document;Q.open(text/html,replace);Q.write(O);Q.close()}else{var R=O,T;if(T=n&&z(7)&&!z(10)){var U=navigator.userAgent.match(/Trident\\/([0-9]+.[0-9]+)/);T=6>(U?parseFloat(U):0)}var V;if(V=T)a:{for(var W=0;W"
https://discourse.mc-stan.org/t/unbiased-variance-of-fixed-effect-parameters/32290
"# Unbiased variance of fixed effect parameters\n\nHi,\n\nI have a very general question that is more about Bayesian statistics than STAN per se, but this the best forum I can think of with skilled people to answer it.\n\nImagine that we have a situation where we want to estimate the variance (in the response) that arising from fixed effect parameters (i.e. not the “explained variance”, that would be the variance arising from the estimators).\n\nTo me more precise, this is in a biological settings where we are analysing a phenotypic trait \\mathbf{y} using linear modelling (technically, we are using a random-slope model, but my question stands for this simpler case), such that:\n\n\\mathbf{y} = \\mathbf{X}\\mathbf{\\beta} + \\mathbf{e},\n\nwhere \\mathbf{X} is the design matrix, \\mathbf{\\beta} is the vector of parameters of the model and \\mathbf{e} are the residuals. So, nothing out of the ordinary.\n\nAs I mentioned, we are interested in the variance arising from fixed effect parameters, i.e. \\text{V}(\\mathbf{X}\\mathbf{\\beta}). Of course, we do not have access to the real \\mathbf{\\beta}, only estimates of it, so \\hat{\\mathbf{\\beta}}. So, since:\n\n\\hat{\\mathbf{\\beta}} = \\mathbf{\\beta} + \\tilde{\\mathbf{\\beta}},\n\nwhere \\tilde{\\mathbf{\\beta}} is the error in estimating \\mathbf{\\beta}, we know that \\text{V}(\\mathbf{X}\\hat{\\mathbf{\\beta}}) is a biased measure of \\text{V}(\\mathbf{X}\\mathbf{\\beta}). This is an ancient problem in, e.g. quantitative genetics, when relying on ANOVA model to estimate a between-group variance component.\n\nIn a frequentist settings, there is a fix to that, using \\mathbf{\\Sigma_{X}}, the variance-covariance matrix of \\mathbf{X} and \\mathbf{S}_{\\beta}, the variance-covariance matrix of error (i.e. the matrix that contains the squared standard errors, and covariances, associated with \\hat{\\mathbf{\\beta}} not sure it has a proper name?), an unbiased estimator of \\text{V}(\\mathbf{X}\\mathbf{\\beta}) is:\n\n\\text{V}(\\mathbf{X}\\hat{\\mathbf{\\beta}}) - \\text{tr}(\\mathbf{\\Sigma_{X}} \\mathbf{S}_{\\beta}).\n\nIn a Bayesian settings, the problem is that the link between the “standard error” (i.e. posterior standard deviation) and the sampling variance is lost (or rather weakened), due to the influence of the prior. The above “fix” should, at least approximately, hold for non-informative, flat, priors, but is expected over-correct when the prior becomes more and more informative. I have checked this using simulated data and a simple linear model in brms.\n\nSo, my question is: is it possible to obtain an unbiased estimate of \\text{V}(\\mathbf{X}\\mathbf{\\beta}) in a Bayesian context, accounting for the impact of the prior distribution? If that estimate could provide a posterior distribution for \\text{V}(\\mathbf{X}\\mathbf{\\beta}), that would ideal!\n\nOne more thing: I do realise that \\text{V}(\\mathbf{X}\\hat{\\mathbf{\\beta}}) is a convergent estimator for \\text{V}(\\mathbf{X}\\mathbf{\\beta}) and that we should expect the level of bias to be relatively low for common sample sizes. However, this is for a general methodological paper, and I have no control on the sample size of people out there, so I would like to provide as robust an estimator as possible. 
Also, since the “fix” above exists in a frequentist settings, it would be a shame to not provide something as robust in a Bayesian settings.\n\nAnyhow, sorry for the long post, happy to hear any thoughts on the subject!\n\nWe’re pretty busy just answering Stan-related questions.\n\nIn an MCMC context, the standard error is the posterior standard deviation divided by the square root of the effective sample size—in practice we need to estimate both terms. Standard error goes to zero as the MCMC sample size increases.\n\nLike regularized estimators in frequentist stats (e.g., using empirical Bayes or even just simple shrinkage), Bayesians are often OK trading bias for reduced variance. We’re usually focused on calibrated probabilistic prediction, specifically posterior predictive inference."
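To make the frequentist correction quoted in the question concrete, here is a small NumPy simulation (my sketch, not from the thread). Ordinary least squares stands in for the flat-prior case, the true beta and noise scale are arbitrary choices of mine, and the variable names mirror the question's notation.

```python
import numpy as np

# Simulates the correction quoted in the question:
#   V(X beta_hat) - tr(Sigma_X S_beta)
rng = np.random.default_rng(1)
n, p = 50, 3
beta = np.array([0.5, -1.0, 2.0])   # arbitrary "true" fixed effects

naive, corrected, truth = [], [], []
for _ in range(2000):
    X = rng.normal(size=(n, p))
    y = X @ beta + rng.normal(scale=2.0, size=n)

    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y              # OLS estimate of beta
    resid = y - X @ beta_hat
    sigma2 = resid @ resid / (n - p)
    S_beta = sigma2 * XtX_inv                 # squared SEs and covariances of beta_hat
    Sigma_X = np.cov(X, rowvar=False)         # variance-covariance matrix of X

    v_hat = np.var(X @ beta_hat, ddof=1)      # naive V(X beta_hat)
    naive.append(v_hat)
    corrected.append(v_hat - np.trace(Sigma_X @ S_beta))
    truth.append(np.var(X @ beta, ddof=1))    # V(X beta) with the true beta

print(np.mean(truth), np.mean(naive), np.mean(corrected))
# The corrected average should sit noticeably closer to the true value than the naive one.
```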
http://www.oalib.com/relative/3588179
"Home OALib Journal OALib PrePrints Submit Ranking News My Lib FAQ About Us Follow Us+",
null,
"",
null,
"Title Keywords Abstract Author All\nSearch Results: 1 - 10 of 100 matches for \" \"\n Page 1 /100 Display every page 5 10 20 Item\n Konstadinos Sfetsos Physics , 1997, DOI: 10.1016/S0550-3213(97)00823-7 Abstract: We prove that a transformation, conjectured in our previous work, between phase-space variables in $\\s$-models related by Poisson-Lie T-duality is indeed a canonical one. We do so by explicitly demonstrating the invariance of the classical Poisson brackets. This is the first example of a class of $\\s$-models with no isometries related by canonical transformations. In addition we discuss generating functionals of canonical transformations in generally non-isometric, bosonic and supersymmetric $\\s$-models and derive the complete set of conditions that determine them. We apply this general formalism to find the generating functional for Poisson-Lie T-duality. We also comment on the relevance of this work to D-brane physics and to quantum aspects of T-duality.\n Ladislav Hlavaty Physics , 2005, DOI: 10.1016/j.physletb.2005.08.089 Abstract: We have solved a sigma-model in curved background using the fact that the Poisson-Lie T-duality can transform the curved background into the flat one. For finding solution of the flat model we have used transformation of coordinates that makes the metric constant. The T-duality transform was then explicitly performed.\n Physics , 1998, DOI: 10.1016/S0550-3213(98)00548-3 Abstract: The problem of quantum equivalence between non-linear sigma models related by Abelian or non-Abelian T-duality is studied in perturbation theory. Using the anomalous Ward identity for Weyl symmetry we derive a relation between the Weyl anomaly coefficients of the original and dual theories. The analysis is not restricted to conformally invariant backgrounds. The formalism is applied to the study of two examples. The first is a model based on SU(2) non-Abelian T duality. The second represents a simple realization of Poisson-Lie T duality involving the Drinfeld double based on SU(2). In both cases quantum T duality is established at the 1-loop level.\n Physics , 2001, DOI: 10.1088/1126-6708/2001/04/027 Abstract: We extend the path-integral formulation of Poisson-Lie duality found by Tyurin and von Unge to N=1 supersymmetric sigma-models. Using an explicit representation of the generators of the Drinfel'd double corresponding to GxU(1)^dimG we discuss an application to non-abelian duality. The paper also contains the relevant background and some comments on Poisson-Lie duality.\n Mathematics , 2013, Abstract: We study the notion of formal duality introduced by Cohn, Kumar, and Sch\\\"urmann in their computational study of energy-minimizing particle configurations in Euclidean space. In particular, using the Poisson summation formula we reformulate formal duality as a combinatorial phenomenon in finite abelian groups. We give new examples related to Gauss sums and make some progress towards classifying formally dual configurations.\n S. E. Parkhomenko Physics , 1998, DOI: 10.1134/1.558947 Abstract: Poisson-Lie T-duality in quantum N=2 superconformal WZNW models is considered. The Poisson-Lie T-duality transformation rules of the super-Kac-Moody algebra currents are found from the conjecture that, as in the classical case, the quantum Poisson-Lie T-duality is given by an automorphism which interchanges the isotropic subalgebras of the underlying Manin triple of the model. 
It is shown that quantum Poisson-Lie T-duality acts on the generators of the N=2 super-Virasoro algebra of the quantum models as a mirror symmetry acts: in one of the chirality sectors it is trivial transformation while in another chirality sector it changes the sign of the U(1) current and interchanges the spin-3/2 currents. A generalization of Poisson-Lie T-duality for the Kazama-Suzuki models is proposed. It is shown that quantum Poisson-Lie T-duality acts in these models as a mirror symmetry also.\n Mathematics , 2000, Abstract: The description of the two sets of (4,0) supersymmetric models that are related by non-abelian duality transformations is given. The (4,0) supersymmetric WZNW is constructed and the formulation of the (4,0) supersymmetric sigma model dual to (4,0) supersymmetric WZNW model in the sense of Poisson-Lie T-duality is described.\n Mathematics , 2010, Abstract: We extend the bar-cobar adjunction to operads and properads, not necessarily augmented. Due to the default of augmentation, the objects of the dual category are endowed with a curvature. We handle the lack of augmentation by extending the category of coproperads to include objects endowed with a curvature. As usual, the bar-cobar construction gives a (large) cofibrant resolution for any properad, such as the properad encoding unital and counital Frobenius algebras, a notion which appears in 2d-TQFT. We also define a curved Koszul duality theory for operads or properads presented with quadratic, linear and constant relations, which provides the possibility for smaller relations. We apply this new theory to study the homotopy theory and the cohomology theory of unital associative algebras.\n S. E. Parkhomenko Physics , 1997, DOI: 10.1142/S021773239800111X Abstract: The transformation properties of the N=2 Virasoro superalgebra generators under Poisson-Lie T-duality in (2,2)-superconformal WZNW and Kazama-Suzuki models is considered. It is shown that Poisson-Lie T-duality acts on the N=2 super-Virasoro algebra generators as a mirror symmetry does: it unchanges the generators from one of the chirality sectors while in another chirality sector it changes the sign of U(1) current and interchanges spin-3/2 currents. We discuss Kazama-Suzuki models generalization of this transformation and show that Poisson-Lie T-duality acts as a mirror symmetry also.\n Physics , 1998, Abstract: We show that supersymmetric and $\\kappa$-symmetric Dp-brane actions in type II supergravity background have the same duality transformation properties as those in a flat Minkowskian background. Specially, it is shown that the super D-string transforms in a covariant way while the super D3-brane is self-dual under the $SL(2,Z)$ duality. Also, the D2-brane and the D4-brane transform in ways expected from the relation between type IIA superstring theory and M-theory. The present study proves that various duality symmetries, which were originally found in the flat background field, are precisely valid even in the curved background geometry.\n Page 1 /100 Display every page 5 10 20 Item"
https://math.stackexchange.com/questions/3229355/clarification-for-equivalence-class
"# Clarification for equivalence class\n\nIn my book it's given, if there is a relation $$R$$ on set $$\\mathbb Z$$ of integers, $$R=\\{(a, b) : a, b \\in \\mathbb Z, a-b\\text{ is divisible by }2\\},$$ then this relation is an equivalence relation.\n\nBut now We can see that the relation has divided the set $$\\mathbb Z$$ of integers into $$2$$ disjoint sets of $$\\mathbb E$$ and $$\\mathbb O$$ (even and odd). In $$\\mathbb E$$ every element is related to $$0$$ and each other but not with the elements of $$\\mathbb O$$ and in $$\\mathbb O$$ every element is related to $$1$$ and each other but not with the elements of $$\\mathbb E$$. Therefore $$\\mathbb E$$ and $$\\mathbb O$$ are forming an equivalence class and $$\\mathbb E$$ can be written as $$$$ and $$\\mathbb O$$ can be written as $$$$.\n\nNow my question is since in both the sets each element of the set is related to other element of the set that means that every element in $$\\mathbb E$$ and $$\\mathbb O$$ is experiencing what $$0$$ in $$\\mathbb E$$ and $$1$$ in $$O$$ is experiencing.\n\nSo why are we writing $$0$$ and $$1$$ in the brackets for representing the equivalence class? Is it because they are the smallest in value?\n\nWe write $$$$ to signal that we're talking about the equivalence class of $$0$$, not the element $$0\\in\\mathbb Z$$.\nAnd that $$0$$ and $$1$$ are used as the elements we use to denote the equivalence classes is just a choice the author has made. He could use a different even number every time he wanted to mention the equivalence class of even numbers, but it would probably be confusing to read, and definitely harder to write.\n$$E$$ can also written as $$$$ or as $$$$ or as $$[-2].......$$.\n$$O$$ can also written as $$$$ or as $$$$ or as $$[-1].......$$.\nYou may read the symbol $$[\\cdot]$$ as $$\\text{the equivalence class of},$$ so that $$,$$ for example would read the equivalence class of $$0.$$ The element $$0$$ is called a representative of the class. As others have pointed out, any element in the class may as well serve as its representative. But the main point is we're no longer thinking about individual elements, but the whole class as a single entity."
https://cstheory.stackexchange.com/questions/11194/resolution-vs-extended-resolution
"# Resolution vs Extended Resolution\n\nLet $R(f)$ and $ER(f)$ be the minimum-size for unsat proofs of $f$ in Resolution and Extended Resolution respectively. What's the best bound we have on $D=\\min_f (R(f)-ER(f))$ where $f$ belongs to a worst-case family of boolean formulae? Since no superpolynomial lower bounds are known on Extended Resolution, we don't have a good upper bound on $D$, so I'm interested in lower bounds on $D$.\n\nEdit: Clarified question as per Kaveh's answer below\n\n• What exactly do you mean by a “worst-case family of boolean formulae”? – Emil Jeřábek supports Monica Apr 26 '12 at 11:32\n\nYour definition of $D$ is not clear, if it is $D = \\max_f (R(f) - ER(f))$, then it is exponential.\nUnsatisfiability of $f$ is the same as $\\lnot f$ being a tautology. Let $\\lnot f$ be as PHP. Then $R(f)$ is exponential where as $ER(f)$ is polynomial, so the difference is exponential. Also it cannot be more than exponential.\nIf $D = \\min_f (R(f) - ER(f))$ then it is 0. Take $\\lnot f=\\top$ and it has one line proof in both. (R is included in ER so it cannot be less than 0.)\n• Hi Kaveh - I meant $\\min_f (R(f)-ER(f))$. I've updated the question to reflect that. – DPLL Apr 26 '12 at 2:06"
https://number.academy/2642
"# Number 2642\n\nNumber 2,642 spell 🔊, write in words: two thousand, six hundred and forty-two . Ordinal number 2642th is said 🔊 and write: two thousand, six hundred and forty-second. The meaning of number 2642 in Maths: Is Prime? Factorization and prime factors tree. The square root and cube root of 2642. What is 2642 in computer science, numerology, codes and images, writing and naming in other languages. Other interesting facts related to 2642.\n\n## What is 2,642 in other units\n\nThe decimal (Arabic) number 2642 converted to a Roman number is MMDCXLII. Roman and decimal number conversions.\n The number 2642 converted to a Mayan number is",
"Decimal and Mayan number conversions.\n\n#### Weight conversion\n\n2642 kilograms (kg) = 5824.6 pounds (lbs)\n2642 pounds (lbs) = 1198.4 kilograms (kg)\n\n#### Length conversion\n\n2642 kilometers (km) equals to 1642 miles (mi).\n2642 miles (mi) equals to 4252 kilometers (km).\n2642 meters (m) equals to 8668 feet (ft).\n2642 feet (ft) equals 805.291 meters (m).\n2642 centimeters (cm) equals to 1040.2 inches (in).\n2642 inches (in) equals to 6710.7 centimeters (cm).\n\n#### Temperature conversion\n\n2642° Fahrenheit (°F) equals to 1450° Celsius (°C)\n2642° Celsius (°C) equals to 4787.6° Fahrenheit (°F)\n\n#### Power conversion\n\n2642 Horsepower (hp) equals to 1942.92 kilowatts (kW)\n2642 kilowatts (kW) equals to 3592.61 horsepower (hp)\n\n#### Time conversion\n\n(hours, minutes, seconds, days, weeks)\n2642 seconds equals to 44 minutes, 2 seconds\n2642 minutes equals to 1 day, 20 hours, 2 minutes\n\n### Codes and images of the number 2642\n\nNumber 2642 morse code: ..--- -.... ....- ..---\nSign language for number 2642:",
"Number 2642 in braille:",
"Images of the number\nImage (1) of the numberImage (2) of the number",
"More images, other sizes, codes and colors ...\n\n#### Number 2642 infographic",
"### Gregorian, Hebrew, Islamic, Persian and Buddhist year (calendar)\n\nGregorian year 2642 is Buddhist year 3185.\nBuddhist year 2642 is Gregorian year 2099 .\nGregorian year 2642 is Islamic year 2082 or 2083.\nIslamic year 2642 is Gregorian year 3184 or 3185.\nGregorian year 2642 is Persian year 2020 or 2021.\nPersian year 2642 is Gregorian 3263 or 3264.\nGregorian year 2642 is Hebrew year 6402 or 6403.\nHebrew year 2642 is Gregorian year 1118 a. C.\nThe Buddhist calendar is used in Sri Lanka, Cambodia, Laos, Thailand, and Burma. The Persian calendar is official in Iran and Afghanistan.\n\n## Share in social networks",
"## Mathematics of no. 2642\n\n### Multiplications\n\n#### Multiplication table of 2642\n\n2642 multiplied by two equals 5284 (2642 x 2 = 5284).\n2642 multiplied by three equals 7926 (2642 x 3 = 7926).\n2642 multiplied by four equals 10568 (2642 x 4 = 10568).\n2642 multiplied by five equals 13210 (2642 x 5 = 13210).\n2642 multiplied by six equals 15852 (2642 x 6 = 15852).\n2642 multiplied by seven equals 18494 (2642 x 7 = 18494).\n2642 multiplied by eight equals 21136 (2642 x 8 = 21136).\n2642 multiplied by nine equals 23778 (2642 x 9 = 23778).\nshow multiplications by 6, 7, 8, 9 ...\n\n### Fractions: decimal fraction and common fraction\n\n#### Fraction table of 2642\n\nHalf of 2642 is 1321 (2642 / 2 = 1321).\nOne third of 2642 is 880,6667 (2642 / 3 = 880,6667 = 880 2/3).\nOne quarter of 2642 is 660,5 (2642 / 4 = 660,5 = 660 1/2).\nOne fifth of 2642 is 528,4 (2642 / 5 = 528,4 = 528 2/5).\nOne sixth of 2642 is 440,3333 (2642 / 6 = 440,3333 = 440 1/3).\nOne seventh of 2642 is 377,4286 (2642 / 7 = 377,4286 = 377 3/7).\nOne eighth of 2642 is 330,25 (2642 / 8 = 330,25 = 330 1/4).\nOne ninth of 2642 is 293,5556 (2642 / 9 = 293,5556 = 293 5/9).\nshow fractions by 6, 7, 8, 9 ...\n\n### Calculator\n\n 2642\n\n#### Is Prime?\n\nThe number 2642 is not a prime number. The closest prime numbers are 2633, 2647.\n2642th prime number in order is 23753.\n\n#### Factorization and factors (dividers)\n\nThe prime factors of 2642 are 2 * 1321\nThe factors of 2642 are 1 , 2 , 1321 , 2642\nTotal factors 4.\nSum of factors 3966 (1324).\n\n#### Powers\n\nThe second power of 26422 is 6.980.164.\nThe third power of 26423 is 18.441.593.288.\n\n#### Roots\n\nThe square root √2642 is 51,400389.\nThe cube root of 32642 is 13,824336.\n\n#### Logarithms\n\nThe natural logarithm of No. ln 2642 = loge 2642 = 7,879291.\nThe logarithm to base 10 of No. log10 2642 = 3,421933.\nThe Napierian logarithm of No. log1/e 2642 = -7,879291.\n\n### Trigonometric functions\n\nThe cosine of 2642 is -0,996848.\nThe sine of 2642 is 0,079338.\nThe tangent of 2642 is -0,079589.\n\n### Properties of the number 2642\n\nMore math properties ...\n\n## Number 2642 in Computer Science\n\nCode typeCode value\nPIN 2642 It's recommendable to use 2642 as a password or PIN.\n2642 Number of bytes2.6KB\nUnix timeUnix time 2642 is equal to Thursday Jan. 1, 1970, 12:44:02 a.m. GMT\nIPv4, IPv6Number 2642 internet address in dotted format v4 0.0.10.82, v6 ::a52\n2642 Decimal = 101001010010 Binary\n2642 Decimal = 10121212 Ternary\n2642 Decimal = 5122 Octal\n2642 Decimal = A52 Hexadecimal (0xa52 hex)\n2642 BASE64MjY0Mg==\n2642 MD5f12f2b34a0c3174269c19e21c07dee68\n2642 SHA15eb6f63039d452e50d36d6bbd07a3fd5aec0d58c\n2642 SHA224266a909a01aefacc3121717072f2cea18c3e670339d422be79246a30\n2642 SHA384dd413c7de321b05da84df5cabe76c984689dbd6e4a8dcb5c502af0bbdcd96614f1eda4a88b6393c7848291405c038ac2\nMore SHA codes related to the number 2642 ...\n\nIf you know something interesting about the 2642 number that you did not find on this page, do not hesitate to write us here.\n\n## Numerology 2642\n\n### The meaning of the number 2 (two), numerology 2\n\nCharacter frequency 2: 2\n\nThe number two (2) needs above all to feel and to be. It represents the couple, duality, family, private and social life. He/she really enjoys home life and family gatherings. The number 2 denotes a sociable, hospitable, friendly, caring and affectionate person. 
It is the sign of empathy, cooperation, adaptability, consideration for others, super-sensitivity towards the needs of others.\n\nThe number 2 (two) is also the symbol of balance, togetherness and receptivity. He/she is a good partner, colleague or companion; he/she also plays a wonderful role as a referee or mediator. Number 2 person is modest, sincere, spiritually influenced and a good diplomat. It represents intuition and vulnerability.\nMore about the meaning of the number 2 (two), numerology 2 ...\n\n### The meaning of the number 6 (six), numerology 6\n\nCharacter frequency 6: 1\n\nThe number 6 (six) denotes emotional responsibility, love, understanding and harmonic balance. The person with the personal number 6 must incorporate vision and acceptance in the world. Beauty, tenderness, stable, responsible and understanding exchange, the sense of protection and availability also define the meaning of the number 6 (six).\nMore about the meaning of the number 6 (six), numerology 6 ...\n\n### The meaning of the number 4 (four), numerology 4\n\nCharacter frequency 4: 1\n\nThe number four (4) came to establish stability and to follow the process in the world. It needs to apply a clear purpose to develop internal stability. It evokes a sense of duty and discipline. Number 4 personality speaks of solid construction. It teaches us to evolve in the tangible and material world, to develop reason and logic and our capacity for effort, accomplishment and work.\nMore about the meaning of the number 4 (four), numerology 4 ...\n\n## Interesting facts about the number 2642\n\n### Asteroids\n\n• (2642) Vésale is asteroid number 2642. It was discovered by S. J. V. Arend from Uccle on 9/14/1961.\n\n### Distances between cities\n\n• There is a 2,642 miles (4,251 km) direct distance between Ahmedabad (India) and Kowloon (Hong Kong).\n• There is a 1,642 miles (2,642 km) direct distance between Al Başrah (Iraq) and Jalandhar (India).\n• There is a 2,642 miles (4,251 km) direct distance between Al Jīzah (Egypt) and Vadodara (India).\n• There is a 1,642 miles (2,642 km) direct distance between Amsterdam (Netherlands) and Antalya (Turkey).\n• There is a 2,642 miles (4,251 km) direct distance between Belgrade (Serbia) and Kabul (Afganistan).\n• There is a 2,642 miles (4,251 km) direct distance between Ciudad Guayana (Venezuela) and Toronto (Alberta).\n• There is a 2,642 miles (4,251 km) direct distance between Da Nang (Viet Nam) and Hyderabad (Pakistan).\n• There is a 1,642 miles (2,642 km) direct distance between Davao (Philippines) and Hangzhou (China).\n• There is a 1,642 miles (2,642 km) direct distance between Dongguan (China) and Singapore (Singapore).\n• There is a 2,642 miles (4,251 km) direct distance between Fuzhou (China) and Ludhiāna (India).\n• There is a 2,642 miles (4,251 km) direct distance between Ghāziābād (India) and Ningbo (China).\n• There is a 2,642 miles (4,251 km) direct distance between Gorakhpur (India) and Puyang (China).\n• There is a 2,642 miles (4,251 km) direct distance between Hangzhou (China) and Srinagar (India).\n• There is a 1,642 miles (2,642 km) direct distance between Changsha (China) and Kawasaki (Japan).\n• There is a 1,642 miles (2,642 km) direct distance between Changsha (China) and Yono (Japan).\n• There is a 1,642 miles (2,642 km) direct distance between Chongqing (China) and Kota Bharu (Malaysia).\n• There is a 2,642 miles (4,251 km) direct distance between Jeddah (Saudi Arabia) and Lucknow (India).\n• There is a 2,642 miles (4,251 km) direct distance between Kano 
(Nigeria) and Riyadh (Saudi Arabia).\n• There is a 2,642 miles (4,251 km) direct distance between Klang (Malaysia) and Lahore (Pakistan).\n• There is a 1,642 miles (2,642 km) direct distance between Krasnoyarsk (Russia) and Qinhuangdao (China).\n• There is a 1,642 miles (2,642 km) direct distance between Lanzhou (China) and Novosibirsk (Russia).\n• There is a 2,642 miles (4,251 km) direct distance between Lomé (Togo) and Natal (Brazil).\n• There is a 2,642 miles (4,251 km) direct distance between Nairobi (Kenya) and Qom (Iran).\n• There is a 1,642 miles (2,642 km) direct distance between Ordos (China) and Yono (Japan).\n• There is a 2,642 miles (4,251 km) direct distance between Pimpri (India) and Volgograd (Russia).\n• There is a 2,642 miles (4,251 km) direct distance between Shiraz (Iran) and Turin (Italy).\n\n### Mathematics\n\n• 2642 = 52 + 63 + 74.\n\n## Number 2,642 in other languages\n\nHow to say or write the number two thousand, six hundred and forty-two in Spanish, German, French and other languages. The character used as the thousands separator.\n Spanish: 🔊 (número 2.642) dos mil seiscientos cuarenta y dos German: 🔊 (Anzahl 2.642) zweitausendsechshundertzweiundvierzig French: 🔊 (nombre 2 642) deux mille six cent quarante-deux Portuguese: 🔊 (número 2 642) dois mil, seiscentos e quarenta e dois Chinese: 🔊 (数 2 642) 二千六百四十二 Arabian: 🔊 (عدد 2,642) ألفان و ستمائة و اثنان و أربعون Czech: 🔊 (číslo 2 642) dva tisíce šestset čtyřicet dva Korean: 🔊 (번호 2,642) 이천육백사십이 Danish: 🔊 (nummer 2 642) totusinde og sekshundrede og toogfyrre Hebrew: (מספר 2,642) שנים אלף שש מאות ארבעים ושנים Dutch: 🔊 (nummer 2 642) tweeduizendzeshonderdtweeënveertig Japanese: 🔊 (数 2,642) 二千六百四十二 Indonesian: 🔊 (jumlah 2.642) dua ribu enam ratus empat puluh dua Italian: 🔊 (numero 2 642) duemilaseicentoquarantadue Norwegian: 🔊 (nummer 2 642) to tusen, seks hundre og førti-to Polish: 🔊 (liczba 2 642) dwa tysiące sześćset czterdzieści dwa Russian: 🔊 (номер 2 642) две тысячи шестьсот сорок два Turkish: 🔊 (numara 2,642) ikibinaltıyüzkırkiki Thai: 🔊 (จำนวน 2 642) สองพันหกร้อยสี่สิบสอง Ukrainian: 🔊 (номер 2 642) двi тисячi шiстсот сорок двi Vietnamese: 🔊 (con số 2.642) hai nghìn sáu trăm bốn mươi hai Other languages ...\n\n## News to email\n\nPrivacy Policy.\n\n## Comment\n\nIf you know something interesting about the number 2642 or any natural number (positive integer) please write us here or on facebook."
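Several of the arithmetic claims listed above (the factorization, the powers, the base conversions in the computer science table, and the curiosity 2642 = 5² + 6³ + 7⁴) can be double-checked with a few lines of Python. The following sketch is an added verification, not part of the original page:

```python
n = 2642

# Prime factorization by trial division (2642 = 2 * 1321, with 1321 prime).
def factorize(m):
    factors, d = [], 2
    while d * d <= m:
        while m % d == 0:
            factors.append(d)
            m //= d
        d += 1
    if m > 1:
        factors.append(m)
    return factors

assert factorize(n) == [2, 1321]

# Divisors and their sum (1 + 2 + 1321 + 2642 = 3966).
divisors = [d for d in range(1, n + 1) if n % d == 0]
assert divisors == [1, 2, 1321, 2642] and sum(divisors) == 3966

# Powers listed on the page.
assert n**2 == 6_980_164 and n**3 == 18_441_593_288

# Base conversions: binary 101001010010, octal 5122, hexadecimal A52.
assert format(n, "b") == "101001010010"
assert format(n, "o") == "5122"
assert format(n, "x").upper() == "A52"

# The curiosity from the "Mathematics" section: 2642 = 5^2 + 6^3 + 7^4.
assert 5**2 + 6**3 + 7**4 == n
print("all checks passed")
```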
https://xplaind.com/251841/present-value-vs-future-value
"# Present Value vs Future Value\n\nPresent value is the equivalent value today of some amount to be received or paid in future and future value is the accumulated value in future of an amount received or paid today. The equivalency arises because a cash flow that occur at time 0 can accumulate interest.\n\nIf interest rate is 5%, the dollar received at t=0 can earn interest of \\$0.05 (\\$1 × 5%) per year. After one year, dollar received at t=0 is worth \\$1.05 (\\$1 plus accumulated interest of \\$0.05) which is \\$0.05 more than the dollar received at t=1.\n\nThis interplay of money today and some future date is called the time value of money. It is one of the core concepts in finance. Net present value, internal rate of return, and valuation of a share of common stock or bond, etc. are all applications of time value of money.\n\n## Future Value\n\nThe value of any amount today i.e. at t=0 is called the present value, the value of any sum at some time in future is called the future value and these two values are connected by the interest rate and time. In the above example, \\$1 received today is the present value and \\$1.05 that it is worth after 1 year given a 5% interest rate is the future value.\n\nIf FV stands for future value, PV stands for present value, I stands for total interest expense and i stands for the interest rate, the relationship can be represented algebraically as follows:\n\nFV\n= PV + I\n= PV + PV × i\n= PV × (1 + i)\n\nThe above expression is for one year. Let’s assume we need to compare a dollar received at t=0 and one received after two years (i.e. t=2). Before delving into the actual calculations, we need to determine one thing i.e. whether the interest earned in the first year will earn interest too. If yes, it is called the compound interest and the value of \\$1 received at t=0 after first year is \\$1.05 as illustrated above, after the second year, the value is \\$1.1025 (t=1 value of \\$1.05 plus interest earned in second year of \\$1.05 × 5%). However, if the interest doesn't accumulate further interest, it is called simple interest and the future value is \\$1.1 (i.e. \\$1.05 + interest for one more year of \\$0.05).\n\nIn case of compound interest, value of a dollar n years in future given an interest rate i can be worked out using the following equation:\n\nFV = PV × (1 + i)n\n\nThe factor (1+i)n is called the future value factor.\n\nIn case of simple interest, value of a dollar after n periods is given by the following equation:\n\nFV = PV × (1 + i × n)\n\n## Present Value\n\nThe above comparison can also be made by finding the present value of \\$1 received at t=1 today i.e. at t=0. We just need to make an algebraic adjustment to the above equation to get:\n\n Present Value = Future Value (1 + i)n\n\nCrunching the numbers shows that \\$1 received after 1 year (i.e. at t=1) is worth \\$0.9524 today:\n\n Present Value = \\$1 = \\$0.9524 (1 + 5%)1"
https://es.scribd.com/document/258006220/Training-SAP2000NonLinearAnalysis
"Está en la página 1de 11\n\n# Non-Linear Analysis/SAP 2000\n\n## Non-Linear Static Analysis using SAP 2000\n\nMitesh Surana\nResearch Scholar, Department of Earthquake Engineering, IIT Roorkee\nFor Queries please mail to: mitesh.bana@gmail.com\n\n## Step-1 Linear Analysis & Unlock the Model\n\nDesign the building as per linear analysis using expected strength and modified\npartial safety factors.\nAfter completing the design of the building Unlock the model as shown in Figure 1.\n\n## Step-2 Define New Load Case of Gravity Loads\n\nGo to Define > Load Case> Add New Load Case consisting of Gravity loads (i.e.\ndead load and % of live load). This load case consists of force controlled loads as\nshown in Figure 2 since load application type is full load.\n\n## Non-Linear Analysis/SAP 2000\n\nIn loads applied edit box include all dead loads and % of live loads.\nSelect Load Case Type > Static, Analysis Type> Nonlinear and Geometric Nonlinearity Parameters as P-Delta.\nNOTE-The non-linear parameters (other parameters as shown in Figure 2) should not\nbe changed in gravity load case. It will start from zero initial stiffness.\n\n## Step-3 Assignment of Hinges to Frame Elements\n\nSelect all the beams in the model. Go to Assign > Frame > Hinges. The hinge form\nwill appear as shown in Figure 3.\n\n## FIGURE-3 Hinge Assignment Form\n\nAdd hinges to the selected beams the hinge type form will appear as shown in Figure\n4.\n\n## FIGURE-4 Hinge Properties for Beams Form\n\nThe hinges should be assigned at both the ends which means at the relative distance of\n0 and 1 and form as shown in Figure 5 will appear.\n\n## In similar manner assign hinges to all columns by repeating steps as previously\n\ncarried out for beams the only difference is that column should be assigned P-M2-M3\nhinges instead of M3 hinges as shown in Figure 6.\n\n## NOTE- The SAP 2000 non-linear pre-defined non-linear hinge properties\n\ncorresponding to Caltrans hinge model and FEMA 356 hinge model. So depending\nupon the requirement the adequate auto hinge type should be used. Apart from this\nSAP 2000 also provides user definition of hinges. In the Figure 5 in select FEMA 356\ntable depending upon the member to hinge is to be assigned appropriate table should\nbe selected. The beams should be assigned M3 hinges while columns should be\nassigned P-M2-M3 hinges since column consists of interaction between axial load and\nbending moment. The V value in the hinge properties for beams form previously\ndefined gravity load case should be used as shown in Figure 5. The transverse\nreinforcement is confirming box should be active if frame is designed as SMRF\n(Special Moment Resisting Frame) else it should not be active. The hinge properties\nare dependent on member capacity; hence reinforcing ratio should be used from\ncurrent design.\n\n## Non-Linear Analysis/SAP 2000\n\nThe hinges should be assigned at both the ends which means at the relative distance of\n0 and 1 and form as shown in Figure 7 will appear.\n\n## Step-4 Define PUSHOVER load Case\n\nGo to Define > Load Case> Add New Load Case >PUSH consisting of load in\nproportion to the fundamental mode. 
This load case is deformation controlled load\ncase as shown in Figure 8.\n\n## FIGURE-8 Pushover Load Case Form\n\nSelect Load Case Type> Static, Analysis Type> Nonlinear and Geometric Nonlinearity Parameters as P-Delta.\nThis load case should be started from a previous load case Gravity since gravity load\nwill always be acting on the structure.\nSelect Loads Applied proportional to Mode in the considered direction of the\nanalysis. The scale factor for this load case should be kept equal to 1.\nIn pushover load case for other parameters, to modify the displacement upto which\nthe force-deformation curve needs to be monitored click Modify and the load\napplication control for non-linear static analysis form will appear as shown in Figure\n9. In the non-linear load application control parameters Load Application should be\nDisplacement control with Monitored Displacement. Generally, the monitored\n4\n\n## Non-Linear Analysis/SAP 2000\n\ndisplacement is kept equal to 2% of the height of the building. This displacement\nshould be monitored in the considered direction of analysis (i.e. either U1 or U2\ndegree of freedom). The joint at which the force deformation curve is monitored is\ngenerally taken as at the Centre of mass of the building.\n\n## FIGURE-10 Results Saved for Non-Linear Static Load Case Form\n\nIn pushover load case for other parameters, to modify the steps at which results needs\nto be saved click Modify the results saved for non-linear static load case form will\nappear as shown in Figure 10. In this form Multiple Steps should be selected in order\nto save the results at the intermediate steps. In this form for each stage Minimum and\nMaximum number of saved steps should be kept 1000 and 5000 in order to avoid\nsolution converge.\nIn pushover load case for other parameters, the non-linear parameters should be by\nDefault. If convergence problem occurs than Number of Null Steps and Number of\nConstant Stiffness Iteration should be increased.\nFor unloading the hinge, Unload Entire Structure method should be used as shown\nnon-linear parameter form in Figure 11. The hinge unloading method controls the\ntime requirements for completing the analysis. The Local Redistribution method is\nthe Fastest while the Unloading Entire Structure method is the Slowest.\nNOTE- It is important to note that the non-linear parameters affect the solution\ncontrol; hence the results obtained using different solution control parameters and\n5\n\n## Non-Linear Analysis/SAP 2000\n\nhinge unloading method may deviate slightly, hence no unique solution is obtained\nfor some problems.\n\n## Step-4 RUN Analysis\n\nWhile running the analysis it is important to Run the Modal and Gravity analysis\nwith pushover load case since Pushover load case takes stiffness from gravity load\ncase and mode shape from Modal load case.\nWhile the SAP 2000 is analyzing the structure the analysis monitor form as shown in\nFigure 12 will appear.\nAfter completing the analysis the analysis complete form as shown in Figure 13 will\nappear.\nThe pushover analysis is a non-linear static analysis so depending upon system\nconfiguration it takes time to complete the analysis.\n\n## Step-5 Graphically Review the Pushover Analysis Results\n\nTo see the pushover curve Go to > Display > Show Static Pushover Curve, the\ndisplay static pushover curve form as shown in Figure 14.\nThe Static Pushover Curve will appear as shown in Figure 15. 
Go to > File > Display\nTables it will show the results of Pushover analysis in Tabular Form.\n\n## Non-Linear Analysis/SAP 2000\n\nTo see the deformed shape under pushover load case and hinges at any step, Go to\nDisplay> Show Deformed Shape> Select load case> PUSH. The deformed shape\nform will appear as shown in Figure 16.\nIn deformed shape form the step at which the result is to be seen should be given. For\nthe result at last step it should be 1000 and deformed shape of structure at that step is as\nshown in Figure 17.\n\n## FIGURE-17 Deformed Shape at Last Step\n\nTo see the hinge results Right Click on Hinge the hinge results will appear as shown in\n\nFigure 18.\n\n## Non-Linear Dynamic Analysis using SAP 2000\n\nIn order to perform the time history analysis the step 1 to step 3 should be repeated as\ndiscussed above.\n\n## Step-6 Define Time History\n\nGo to Define > Functions> Time History the following form will appear as shown in\nFigure 19. In this form choose function type to add from file.\n\n## FIGURE-19 Define Time History Function Form\n\nGo to Add New Function and add the time history file from system in text format\nand convert it to user defined than time history file form will appear as shown in\nFigure 20.\n\n## Step-7 Define Time History Load Case\n\nGo to Define > Load Case> Add New Load Case >TH consisting of time history\nload case and the time history load case form will appear as shown in Figure 21.\n\n## FIGURE-21Time History Load Case Form\n\nSelect Load Case Type> Time History, Analysis Type> Nonlinear, Time History\nType> Direct Integration and Geometric Non-linearity Parameters as P-Delta.\nThis load case should be started from a previous load case Gravity since gravity load\nwill always be acting on the structure.\nSelect Loads Applied > Acceleration in the considered direction (i.e. either U1 or\nU2 degree of freedom) of the analysis. The scale factor for this load case should be\nkept equal to 1or 9.81 depending upon the input time history file whether provided in\nunits of g or m/s2, respectively.\n\n## FIGURE-22 Mass and Stiffness Proportional Damping Form\n\nIn Time History Load Case for time step data the Number of Output Time Steps\nand Output Time Step Size should be same as provided in Input time history file.\nThe number of output time steps may be less or more than the steps in time history\ninput file but lesser number of saved output steps may reduce the accuracy while\nthe more number of output steps may increase the space requirements.\nIn Time History Load Case for other parameters, to modify the damping which\nneeds to be considered click Modify and mass and stiffness proportional damping\nform will appear as shown in Figure 22. It is important to note that when damping is\nspecified it is specified by period and period corresponding to 1st and 3rd mode of\nvibration in the considered direction of analysis should be used. The damping used\nshould be equal to 2%, 5% and 10% for Steel, RC and Masonry buildings,\nrespectively.\n10\n\n## Non-Linear Analysis/SAP 2000\n\nIn Time History load case for other parameters, to modify the time integration\ntechnique to be considered click Modify and time integration parameter form will\nappear as shown in Figure 23. 
The Hilber-Hughes-Taylor method of time integration should be used with alpha equal to zero; if convergence does not occur, then alpha equal to minus 0.33 should be used.

## FIGURE-23 Time Integration Parameters Form

In the Time History load case, for other parameters, the non-linear parameters should be set to default.
After defining the Time History load case, Run the analysis.

NOTE- The non-linear time history analysis is a time-consuming analysis and, depending upon the size of the problem, it takes a large amount of time as well as space to complete.

## Step-8 Graphically Review the Time History Analysis Results

The deformed shape and hinge pattern at any instant can be viewed in a similar manner to the pushover analysis.
The peak displacement during the time history analysis can be viewed through a plot function. Go to > Display > Show Plot Functions and define the plot function, i.e. the joint whose displacement is required.
Figure 24 shows the plot function for the joint displacement for the time history load case.
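As a supplement to the mass- and stiffness-proportional damping form of Figure 22, which asks for damping defined at two periods, the Rayleigh coefficients that such a form represents can be computed from the standard relations below. This is an illustrative Python sketch added here, not SAP 2000 input or API code; the 5% damping ratio follows the RC-building guidance above, while the two anchor periods (1.2 s and 0.3 s for the 1st and 3rd modes) are assumed example values:

```python
import math

def rayleigh_coefficients(t_i: float, t_j: float, zeta: float):
    """Mass- and stiffness-proportional coefficients a0, a1 such that
    C = a0*M + a1*K gives the damping ratio `zeta` at both periods t_i and t_j."""
    w_i, w_j = 2 * math.pi / t_i, 2 * math.pi / t_j   # circular frequencies (rad/s)
    a0 = 2 * zeta * w_i * w_j / (w_i + w_j)           # mass-proportional term
    a1 = 2 * zeta / (w_i + w_j)                       # stiffness-proportional term
    return a0, a1

# Example: 5% damping (RC building) anchored at assumed 1st and 3rd mode periods.
a0, a1 = rayleigh_coefficients(t_i=1.2, t_j=0.3, zeta=0.05)
print(f"mass coefficient a0 = {a0:.4f} 1/s, stiffness coefficient a1 = {a1:.5f} s")

# Resulting damping ratio at any other period T: zeta(T) = a0*T/(4*pi) + a1*pi/T
for T in (1.2, 0.6, 0.3):
    w = 2 * math.pi / T
    print(f"T = {T:.2f} s -> damping ratio = {a0 / (2 * w) + a1 * w / 2:.4f}")
```

Between the two anchor periods the effective damping dips slightly below the target (4.0% at T = 0.6 s in this example), which is the usual behaviour of Rayleigh damping and the reason the form asks for two periods bracketing the modes of interest.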
https://www.colorhexa.com/007ec8
"# #007ec8 Color Information\n\nIn a RGB color space, hex #007ec8 is composed of 0% red, 49.4% green and 78.4% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 37% magenta, 0% yellow and 21.6% black. It has a hue angle of 202.2 degrees, a saturation of 100% and a lightness of 39.2%. #007ec8 color hex could be obtained by blending #00fcff with #000091. Closest websafe color is: #0066cc.\n\n• R 0\n• G 49\n• B 78\nRGB color chart\n• C 100\n• M 37\n• Y 0\n• K 22\nCMYK color chart\n\n#007ec8 color description : Strong blue.\n\n# #007ec8 Color Conversion\n\nThe hexadecimal color #007ec8 has RGB values of R:0, G:126, B:200 and CMYK values of C:1, M:0.37, Y:0, K:0.22. Its decimal value is 32456.\n\nHex triplet RGB Decimal 007ec8 `#007ec8` 0, 126, 200 `rgb(0,126,200)` 0, 49.4, 78.4 `rgb(0%,49.4%,78.4%)` 100, 37, 0, 22 202.2°, 100, 39.2 `hsl(202.2,100%,39.2%)` 202.2°, 100, 78.4 0066cc `#0066cc`\nCIE-LAB 50.792, -1.387, -46.389 17.884, 19.09, 57.383 0.19, 0.202, 19.09 50.792, 46.41, 268.288 50.792, -31.482, -71.101 43.692, -3.4, -47.283 00000000, 01111110, 11001000\n\n# Color Schemes with #007ec8\n\n• #007ec8\n``#007ec8` `rgb(0,126,200)``\n• #c84a00\n``#c84a00` `rgb(200,74,0)``\nComplementary Color\n• #00c8ae\n``#00c8ae` `rgb(0,200,174)``\n• #007ec8\n``#007ec8` `rgb(0,126,200)``\n• #001ac8\n``#001ac8` `rgb(0,26,200)``\nAnalogous Color\n• #c8ae00\n``#c8ae00` `rgb(200,174,0)``\n• #007ec8\n``#007ec8` `rgb(0,126,200)``\n• #c8001a\n``#c8001a` `rgb(200,0,26)``\nSplit Complementary Color\n• #7ec800\n``#7ec800` `rgb(126,200,0)``\n• #007ec8\n``#007ec8` `rgb(0,126,200)``\n• #c8007e\n``#c8007e` `rgb(200,0,126)``\n• #00c84a\n``#00c84a` `rgb(0,200,74)``\n• #007ec8\n``#007ec8` `rgb(0,126,200)``\n• #c8007e\n``#c8007e` `rgb(200,0,126)``\n• #c84a00\n``#c84a00` `rgb(200,74,0)``\n• #004e7c\n``#004e7c` `rgb(0,78,124)``\n• #005e95\n``#005e95` `rgb(0,94,149)``\n• #006eaf\n``#006eaf` `rgb(0,110,175)``\n• #007ec8\n``#007ec8` `rgb(0,126,200)``\n• #008ee2\n``#008ee2` `rgb(0,142,226)``\n• #009efb\n``#009efb` `rgb(0,158,251)``\n• #16a9ff\n``#16a9ff` `rgb(22,169,255)``\nMonochromatic Color\n\n# Alternatives to #007ec8\n\nBelow, you can see some colors close to #007ec8. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #00b0c8\n``#00b0c8` `rgb(0,176,200)``\n• #009fc8\n``#009fc8` `rgb(0,159,200)``\n• #008fc8\n``#008fc8` `rgb(0,143,200)``\n• #007ec8\n``#007ec8` `rgb(0,126,200)``\n• #006dc8\n``#006dc8` `rgb(0,109,200)``\n• #005dc8\n``#005dc8` `rgb(0,93,200)``\n• #004cc8\n``#004cc8` `rgb(0,76,200)``\nSimilar Colors\n\n# #007ec8 Preview\n\nThis text has a font color of #007ec8.\n\n``<span style=\"color:#007ec8;\">Text here</span>``\n#007ec8 background color\n\nThis paragraph has a background color of #007ec8.\n\n``<p style=\"background-color:#007ec8;\">Content here</p>``\n#007ec8 border color\n\nThis element has a border color of #007ec8.\n\n``<div style=\"border:1px solid #007ec8;\">Content here</div>``\nCSS codes\n``.text {color:#007ec8;}``\n``.background {background-color:#007ec8;}``\n``.border {border:1px solid #007ec8;}``\n\n# Shades and Tints of #007ec8\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000204 is the darkest color, while #eff9ff is the lightest one.\n\n• #000204\n``#000204` `rgb(0,2,4)``\n• #000f17\n``#000f17` `rgb(0,15,23)``\n• #001b2b\n``#001b2b` `rgb(0,27,43)``\n• #00273f\n``#00273f` `rgb(0,39,63)``\n• #003452\n``#003452` `rgb(0,52,82)``\n• #004066\n``#004066` `rgb(0,64,102)``\n• #004d7a\n``#004d7a` `rgb(0,77,122)``\n• #00598d\n``#00598d` `rgb(0,89,141)``\n• #0065a1\n``#0065a1` `rgb(0,101,161)``\n• #0072b4\n``#0072b4` `rgb(0,114,180)``\n• #007ec8\n``#007ec8` `rgb(0,126,200)``\n``#008adc` `rgb(0,138,220)``\n• #0097ef\n``#0097ef` `rgb(0,151,239)``\n• #04a2ff\n``#04a2ff` `rgb(4,162,255)``\n• #17a9ff\n``#17a9ff` `rgb(23,169,255)``\n• #2bb1ff\n``#2bb1ff` `rgb(43,177,255)``\n• #3fb8ff\n``#3fb8ff` `rgb(63,184,255)``\n• #52bfff\n``#52bfff` `rgb(82,191,255)``\n• #66c6ff\n``#66c6ff` `rgb(102,198,255)``\n• #7aceff\n``#7aceff` `rgb(122,206,255)``\n• #8dd5ff\n``#8dd5ff` `rgb(141,213,255)``\n• #a1dcff\n``#a1dcff` `rgb(161,220,255)``\n• #b4e3ff\n``#b4e3ff` `rgb(180,227,255)``\n• #c8ebff\n``#c8ebff` `rgb(200,235,255)``\n• #dcf2ff\n``#dcf2ff` `rgb(220,242,255)``\n• #eff9ff\n``#eff9ff` `rgb(239,249,255)``\nTint Color Variation\n\n# Tones of #007ec8\n\nA tone is produced by adding gray to any pure hue. In this case, #5c666c is the less saturated color, while #007ec8 is the most saturated one.\n\n• #5c666c\n``#5c666c` `rgb(92,102,108)``\n• #556873\n``#556873` `rgb(85,104,115)``\n• #4d6a7b\n``#4d6a7b` `rgb(77,106,123)``\n• #456c83\n``#456c83` `rgb(69,108,131)``\n• #3e6e8a\n``#3e6e8a` `rgb(62,110,138)``\n• #367092\n``#367092` `rgb(54,112,146)``\n• #2e729a\n``#2e729a` `rgb(46,114,154)``\n• #2674a2\n``#2674a2` `rgb(38,116,162)``\n• #1f76a9\n``#1f76a9` `rgb(31,118,169)``\n• #1778b1\n``#1778b1` `rgb(23,120,177)``\n• #0f7ab9\n``#0f7ab9` `rgb(15,122,185)``\n• #087cc0\n``#087cc0` `rgb(8,124,192)``\n• #007ec8\n``#007ec8` `rgb(0,126,200)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #007ec8 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
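The RGB, HSL and CMYK figures quoted at the top of this page can be re-derived with a short script. The following Python sketch is an added illustration (not part of the original page); it uses only the standard library, with the CMYK conversion written out by hand because colorsys does not provide one:

```python
import colorsys

hex_code = "007ec8"
r, g, b = (int(hex_code[i:i + 2], 16) / 255 for i in (0, 2, 4))

# Hue/saturation/lightness via the standard library (scaled to degrees / percent).
h, l, s = colorsys.rgb_to_hls(r, g, b)
print(f"RGB: {r:.1%} {g:.1%} {b:.1%}")              # 0.0% 49.4% 78.4%
print(f"HSL: {h * 360:.1f} deg, {s:.0%}, {l:.1%}")  # 202.2 deg, 100%, 39.2%

# CMYK from RGB (no standard-library helper, so written out manually).
k = 1 - max(r, g, b)
c, m, y = ((1 - ch - k) / (1 - k) for ch in (r, g, b))
print(f"CMYK: {c:.0%} {m:.0%} {y:.0%} {k:.1%}")     # 100% 37% 0% 21.6%
```

Note that colorsys returns hue, lightness and saturation in that order (HLS rather than HSL), each as a fraction of 1, which is why the values are rescaled before printing.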
http://dailycalculators.com/index.php/depreciation-calculator
"# Depreciation calculator\n\nStraight line depreciation\nSum-of-years-digits depreciation\n\n## How to use the depreciation calculator\n\n• Depreciation method - Choose which depreciation method to use. There are two options here: straight line depreciation and sum-of-years-digits depreciation.\n• Cost - Original cost of the asset.\n• Salvage - The salvage value after the asset has been fully depreciated.\n• Life - The number of periods (year here) that depreciates the asset.\n\n## What is straight line depreciation\n\nStraight line is the simplest method, it allocates depreciation value evenly on a number of periods. The calculation formula is:\n\nAnnual depreciation amount = (Original cost of asset - Salvage value)/number of periods\nexample: an asset has original cost of \\$1000, and salvage value of \\$100 after 3 years.\nAnnual depreciation amount is (1000-100)/3 = \\$300.\n\n## What is sum-of-years-digits depreciation\n\nThe sum-of-years-digits method has a more accerlerated write-off than straight line method.\n\nExample: an asset will be depreciated in 8 years.\nSum of years digits: 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 = 36,\nDepreciation rate:\nYear 1: 8/36\nYear 2: 7/36\nYear 3: 6/36\nYear 4: 5/36\nYear 5: 4/36\nYear 6: 3/36\nYear 7: 2/36\nYear 8: 1/36"
https://www.mdpi.com/2073-431X/9/1/6
"Next Article in Journal\nImproving Internet of Things (IoT) Security with Software-Defined Networking (SDN)\nPrevious Article in Journal\nAcknowledgement to Reviewers of Computers in 2019\n\nFont Type:\nArial Georgia Verdana\nFont Size:\nAa Aa Aa\nLine Spacing:\nColumn Width:\nBackground:\nArticle\n\n# A Computer Vision System for the Automatic Classification of Five Varieties of Tree Leaf Images\n\n1\nDepartment of Biosystems Engineering, College of Agriculture and University of Mohaghegh Ardabili, Ardabil 56199-11367, Iran\n2\nDepartment of Teoría de la Señal y Comunicaciones e Ingeniería Telemática, University of Valladolid, 47011 Valladolid, Spain\n3\nCastilla-León Neuroscience Institute (INCYL), University of Salamanca, 37007 Salamanca, Spain\n*\nAuthor to whom correspondence should be addressed.\nComputers 2020, 9(1), 6; https://doi.org/10.3390/computers9010006\nReceived: 19 December 2019 / Revised: 24 January 2020 / Accepted: 24 January 2020 / Published: 28 January 2020\n\n## Abstract\n\n:\nA computer vision system for automatic recognition and classification of five varieties of plant leaves under controlled laboratory imaging conditions, comprising: 1–Cydonia oblonga (quince), 2–Eucalyptus camaldulensis dehn (river red gum), 3–Malus pumila (apple), 4–Pistacia atlantica (mt. Atlas mastic tree) and 5–Prunus armeniaca (apricot), is proposed. 516 tree leaves images were taken and 285 features computed from each object including shape features, color features, texture features based on the gray level co-occurrence matrix, texture descriptors based on histogram and moment invariants. Seven discriminant features were selected and input for classification purposes using three classifiers: hybrid artificial neural network–ant bee colony (ANN–ABC), hybrid artificial neural network–biogeography based optimization (ANN–BBO) and Fisher linear discriminant analysis (LDA). Mean correct classification rates (CCR), resulted in 94.04%, 89.23%, and 93.99%, for hybrid ANN–ABC; hybrid ANN–BBO; and LDA classifiers, respectively. Best classifier mean area under curve (AUC), mean sensitivity, and mean specificity, were computed for the five tree varieties under study, resulting in: 1–Cydonia oblonga (quince) 0.991 (ANN–ABC), 95.89% (ANN–ABC), 95.91% (ANN–ABC); 2–Eucalyptus camaldulensis dehn (river red gum) 1.00 (LDA), 100% (LDA), 100% (LDA); 3–Malus pumila (apple) 0.996 (LDA), 96.63% (LDA), 94.99% (LDA); 4–Pistacia atlantica (mt. Atlas mastic tree) 0.979 (LDA), 91.71% (LDA), 82.57% (LDA); and 5–Prunus armeniaca (apricot) 0.994 (LDA), 88.67% (LDA), 94.65% (LDA), respectively.\n\n## 1. Introduction\n\nIn the past decades, the use of herbicides, pesticides, and other chemical substances has continuously been increased. Unfortunately, overuse of these materials cause surface water pollution, environmental pollution, and animal and human toxicity (Liu and O’Connell, 2002). For this reason, scientists proposed the use of precision agriculture. One of the advantages of precision agriculture is the use of chemical substances only over the area of interest, called site-specific spray. The first step in site-specific spray operations is the proper recognition of the area of interest. Image processing is applied in this field to automatically identify plant species. In the past, there have been various researchers working on the recognition of different plants and trees.\nSingh and Bhamrah stated that the recognition of different plants, especially medicine plants is challenging. 
Eight species of different plants including Amrood, Jaman, Kathal, Nimbu, Palak, Pippal, Sfaida and Toot, were used in this study. From each specimen, ten images were taken. The proposed algorithm had five main steps: image histogram equalization, pre-processing, segmentation, feature extraction, and classification. Seven features were extracted from each image to be used in the classifier. Extracted features were: solidity, length of the major axis, length of the minor axis, aspect ratio, area, perimeter, and eccentricity. The classifier used was an artificial neural network. Results showed that the accuracy of this classifier was 98.8%.\nKadir proposed a system to identify some leaves in different plants, based on the combination of Fourier descriptors and some shape features. Extracted features were: translation, scaling, moving the starting point (Fourier descriptors), convexity, irregularity factor, aspect factor, solidity, and roundness factor (shape features). To test the proposed algorithm, a total of 100 images were used. A Bayesian classifier was used. Results showed a classifier accuracy of 88%.\nEhsanirad classified several plant leaves based on texture features such as autocorrelation, entropy, contrast, correlation, and homogeneity. Two methods, gray level co-occurrence matrix (GLCM) and principal component analysis (PCA) were used to feature selection. Following image processing, a total of 65 images from different plants were used to validate and test the system. Results showed a classification accuracy with PCA and GLCM of 98% and 78%, respectively.\nMursalin et al. , performed a study over the classification of five plant species, including Capsicum, Burcucumber, Cogongrass, Marsh herb, and Pigweed. To train and test the proposed system 400 images (80 images from each plant type) were used. Different features were extracted for classification purposes from each image. Extracted features were: solidity, elongatedness, convexity, form factor, area, thickness, perimeter, convex area, convex perimeter, and some color features. Three classifiers were used: support vector machine (SVM), C4.5 and naive Bayes. Results showed the best case of naive Bayes, SVM and C4.5 classifiers with accuracies of 99.3%, 98.24%, and 97.86%, respectively.\nIn Ahmed et al. , authors classified six plants species including Capsicum frutescens L., Amaranthus viridis L., Enhydra fluctuans lour., Chenopodium album L., Imperata cylindrica (L.) P. beauv. and Sicyos angulatus L., based on the SVM classifier. In this study, a total of 224 images were taken from previously listed plants. In the next step, 14 features were extracted from each image in fields of color features, moment invariants, and size-independent shape features. Among previous features, the combination of only 9 features had the best accuracy results. Final nine features used were solidity, elongatedness, mean value component ‘r’ in normalized RGB color space, mean value ‘b’ in RGB color space, standard deviation ‘r’ component, standard deviation of ‘b’ component in RGB color space, the , the , and the , of area them all. Results showed that the SVM classifier had an accuracy of 97.3%.\nRumpf et al. , used three sequential SVM to classify 10 different plant types including Agropyron repens, Alopecurus Myosuroides, C. Arvense, Galium aparine, Lamium sp., Matricaria inodora, Sinapis arvensis, Stellaria media, Veronica persica, and Hordeum vulgare. 
From each image, 10 features were extracted: number of pixels of an object, mean distance of the border to the center of gravity, maximum distance of the border to the center of gravity, vertical distance of the border to the main axis of the object, eccentricity, first two-moment invariants, skelmean, and skelsize. Results showed that total accuracy classification for the first SVM classifier was 97.7%, and for the second and third 80%.\nPereira et al. , classified three different plant types based on shape features. These plants were: E. crassipes, P. stratiotes, and S. auriculata. In order to train and test the proposed system, a total of 51 images of E. crassipes, 46 images of P. stratiotes and 63 images of S. auriculata were taken. Several features including beam angle statistics, FD, moment invariants, multiscale fractal dimension, and tensor scale descriptor, were extracted. Results showed that a neural network classified these three plant types with an accuracy above 80%.\nIn addition, Azlah et al. , have recently presented a thorough review paper about plant leaf recognition and classification techniques, concluding that the current image processing techniques should be robust under diverse intensity lighting conditions, which could, in turn, be developed by tweaking the detection technique leading to detection of specific plant diseases.\nThe aim of this study is to design an intelligent computer vision system to classify five species tree leaves, including 1–Cydonia oblonga (quince), 2–Eucalyptus camaldulensis dehn (river red gum), 3–Malus pumila (apple), 4–Pistacia atlantica (mt. Atlas mastic tree) and 5–Prunus armeniaca (apricot). Figure 1 depicts a flowchart of the computer vision system for plant tree leaves automatic classification, here proposed.\n\n## 2. Materials and Methods\n\nAs it can be seen from Figure 1, the proposed computer vision system consists of five steps: the first step includes imaging, the second step includes segmentation and pre-processing, the third step includes feature extraction (including texture descriptors based on the histogram, texture features based on the gray level co-occurrence matrix, shape features, moment invariants, and color features), fourth step includes discriminant feature selection (based on two methods: hybrid artificial neural network–differential evolution (ANN–DE) and hybrid artificial neural network–genetic algorithm (ANN–GA)), and fifth step comprises classification based on three different classifiers, including hybrid artificial neural network–ant bee colony (ANN–ABC), hybrid artificial neural network–biogeography based optimization (ANN–BBO) and Fisher’s linear discriminant analysis (LDA). Each of the five steps in the here proposed computer vision system, depicted in Figure 1 flowchart, will be further explained with more detail in the next paper sections.\n\n#### 2.1. Database Used in Computer Vision\n\nAs already mentioned, in this study five different types of leaves were investigated, including 1—Cydonia oblonga (quince), see Appendix Figures A1 and A2Eucalyptus camaldulensis dehn (river red gum), see Appendix Figures A2 and A3Malus pumila (apple), see Appendix Figures A3 and A4Pistacia atlantica (mt. Atlas mastic tree), see Appendix Figures A4 and A5Prunus armeniaca (apricot), see Appendix Figure A5. Table 1 shows the class number, English common tree name, scientific tree name and the number of samples in each class. Figure 2 shows one sample of each leaf type. 
All images were taken in Kermanshah, Iran (longitude: 7.03°E; latitude: 4.22°N). High-quality images were taken with a color GigE industrial camera (ImagingSource, model DFK-23GM021, 1/3 inch Aptina CMOS MT9M021 sensor, 1.2 MP 1280 × 960 spatial resolution, Germany), mounting an appropriate Computar CBC Group lens (model H0514-MP2, f = 5 mm, F1.4, for 1/2 inch type megapixel cameras, Japan). The camera was fixed at 10 cm above ground level, and all images were taken under white light at 327 lux lighting conditions.

#### 2.2. Image Pre-Processing and Segmentation

The segmentation stage is an important one in image processing, since in the case of a wrong segmentation either the background is considered as the object (in this study, a tree leaf) or the object is considered as background. To reach a proper segmentation, six standard color spaces (RGB, HSV, HSI, YIQ, CMY, and YCbCr) were taken into account to find the best color space for segmentation purposes. Results showed that the best color space for image segmentation was the YCbCr color space, and that the best channels for image thresholding were the luminance Y and chrominance Cr channels. Equation (1) was used to set the value of the threshold used to segment objects from their background in tree leaf images:
$BW(x,y) = \begin{cases} 0 \ (\text{background}), & \text{if } Y(x,y) \le 100 \ \text{or} \ Cr(x,y) \ge 15 \\ 1 \ (\text{leaf, foreground}), & \text{otherwise} \end{cases}$ (1)
The implication of this equation is as follows: each pixel whose Y component in YCbCr color space is smaller than or equal to 100, or whose Cr component is larger than or equal to 15, is considered background; otherwise, the pixel is considered foreground (tree leaf object). In order to extract shape features from segmented images, binary images are needed. To extract shape features with high accuracy, some morphological operations are also needed, since some "noisy" pixels usually remain in segmented images. For this reason, the MATLAB imclose function was used (Gonzalez et al.). Figure 3 summarizes the pre-processing operations, shown over three sample tree leaf images.

#### 2.3. Feature Extraction

It is well known that, in order to properly classify different tree leaf types, feature extraction is needed. The feature types used here include texture descriptors based on the histogram, texture features based on the gray level co-occurrence matrix, shape features, moment invariants and color features (others exist that are not used here). The above-mentioned feature types were extracted from each leaf. In total, 285 features were extracted from each leaf image, and the effective (highly discriminant) features were then selected among them by the hybrid ANN–DE and hybrid ANN–GA optimization approaches.

#### 2.3.1. Texture Descriptors Based on the Histogram

Texture features extracted based on the histogram include homogeneity, entropy, smoothness, third moment, average and standard deviation.

#### 2.3.2. Texture Features Based on the Gray Level Co-Occurrence Matrix (GLCM)

It is well known that the neighbor angle has a large impact on the estimated value of each GLCM-based feature. Table 2 shows the texture features extracted based on the gray level co-occurrence matrix. These features were extracted for four different angles: 0°, 45°, 90°, and 135°. Thus, in total, 27 × 4 = 108 texture features were extracted from each object.

#### 2.3.3. Shape Features

In this study, 29 shape features were extracted. Table 3 lists the shape features that were used here.
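For illustration, the following short Python sketch (not the authors' implementation, which relied on MATLAB) shows how the YCbCr thresholding rule of Eq. (1) in Section 2.2 and a handful of the GLCM and shape features just described could be computed with scikit-image. The function names, the structuring-element size, the chroma centering and the use of scikit-image itself are our own assumptions; the thresholds Y ≤ 100 and Cr ≥ 15 are taken from the text and may need adjustment, since channel scaling differs between software packages.

```python
# Hedged sketch of Eq. (1) segmentation plus a few GLCM and shape features.
import numpy as np
from skimage import color, feature, measure, morphology, util

def segment_leaf(rgb, y_max=100, cr_min=15):
    ycbcr = color.rgb2ycbcr(rgb)          # channels ordered Y, Cb, Cr
    y = ycbcr[..., 0]
    cr = ycbcr[..., 2] - 128.0            # assumption: centre the chroma channel
    background = (y <= y_max) | (cr >= cr_min)
    mask = ~background
    # morphological closing to suppress noisy pixels, analogous to MATLAB's imclose
    return morphology.binary_closing(mask, morphology.disk(3))

def example_features(rgb, mask, angles_deg=(0, 45, 90, 135)):
    gray = util.img_as_ubyte(color.rgb2gray(rgb))
    gray = np.where(mask, gray, 0).astype(np.uint8)   # simplification: zero out background
    glcm = feature.graycomatrix(gray, distances=[1],
                                angles=np.deg2rad(angles_deg),
                                levels=256, symmetric=True, normed=True)
    texture = {p: feature.graycoprops(glcm, p).ravel()   # one value per angle
               for p in ("contrast", "homogeneity", "correlation", "energy")}
    region = max(measure.regionprops(measure.label(mask)), key=lambda r: r.area)
    shape = {"area": region.area, "perimeter": region.perimeter,
             "solidity": region.solidity, "eccentricity": region.eccentricity,
             "major_axis": region.major_axis_length,
             "minor_axis": region.minor_axis_length}
    return texture, shape
```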
#### 2.3.4. Moment Invariant Features

Moment invariant features have the advantage of being insensitive to translation, reflection, dilation, and rotation. In this study, 10 moment-invariant features were extracted: the first-order moment invariant, second-order moment invariant, third-order moment invariant, fourth-order moment invariant, fifth-order moment invariant, sixth-order moment invariant, seventh-order moment invariant, the difference of the first- and seventh-order moment invariants, the difference of the second- and sixth-order moment invariants, and the difference of the third- and fifth-order moment invariants.

#### 2.3.5. Color Features

Color features take different values in different color spaces. For this reason, color features were extracted in the RGB, YCbCr, YIQ, CMY, HSV and HSI color spaces. The extracted color features are divided into two groups:
• ‘Statistical’ color features
• Vegetation index color features.

#### ‘Statistical’ Color Features

These features comprise the mean and standard deviation of the first color component, the second component, the third component, and the mean of the first, second and third components, for each of the RGB, YCbCr, YIQ, CMY, HSV, and HSI color spaces. Thus, the total number of ‘statistical’ color features was $2 × 4 × 6 = 48$.

#### Vegetation Index Color Features

Table 4 lists 14 vegetation index features for the RGB color space, including their mathematical definitions. These features were also computed for the other five color spaces (YCbCr, YIQ, CMY, HSV, and HSI), for a total of $14 × 6 = 84$ vegetation index features.

#### 2.4. Discriminant Feature Selection

As mentioned before, 285 features were extracted from each leaf image object. The use of all features as input to the classifiers is not wise, given the problems of overfitting and poor generalization to the test set, and the selection of a discriminant group of features is well-known good practice. To do so, two methods based on artificial intelligence were used to select effective features (results to be shown later on; a rough illustrative sketch of this wrapper idea is given just before Section 3):
• ANN–DE
• ANN–GA.

#### 2.5. Optimal Neural Network Classifier Parameters

Classification is the final step in designing a computer or machine vision system, and a high-performance classifier is essential for the accuracy of the computer vision system. In this study, three classifiers were used: hybrid ANN–ABC, hybrid ANN–BBO and Fisher's linear discriminant analysis (LDA). In order to obtain statistically valid results, 100 uniform random simulations were run. In each of the 100 averaged simulations, all data were divided into two disjoint groups, training and validation data (60% of input samples) and test data (40% of input samples), following a uniform random assignment with probabilities 0.6 and 0.4 of belonging to the train/validation and test sets, respectively. It is worth mentioning that a multilayer perceptron (MLP) ANN was used. An MLP neural network has five adjustable parameters: the number of neural network layers, the number of neurons in each layer, the nonlinear transfer function, the back-propagation network training function, and the back-propagation weight/bias learning function. The ABC and BBO algorithms were used to select optimal MLP parameter values. Table 5 shows the optimum values of the MLP parameters, as determined by the ABC and BBO algorithms.
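The paper does not detail the internals of the hybrid ANN–DE and ANN–GA selectors. As a rough, hedged illustration of the general wrapper idea behind Section 2.4 (an evolutionary search that scores candidate feature subsets by the cross-validated accuracy of a small neural network), the following Python sketch uses SciPy's differential evolution and scikit-learn's MLP; all function names, population sizes, iteration counts and network settings are illustrative assumptions rather than the authors' actual configuration.

```python
# Hedged sketch of a DE-based wrapper feature selector (illustrative only).
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def select_features_de(X, y, n_keep=7, seed=0):
    """Return indices of `n_keep` features chosen by a DE wrapper search."""
    n_features = X.shape[1]

    def fitness(w):
        # Interpret the continuous DE vector as feature scores; keep the top n_keep.
        idx = np.argsort(w)[-n_keep:]
        clf = MLPClassifier(hidden_layer_sizes=(25,), max_iter=500, random_state=seed)
        acc = cross_val_score(clf, X[:, idx], y, cv=3).mean()
        return -acc  # DE minimizes, so negate accuracy

    result = differential_evolution(fitness, bounds=[(0.0, 1.0)] * n_features,
                                    maxiter=20, popsize=10, tol=1e-3, seed=seed)
    return np.argsort(result.x)[-n_keep:]

# Example usage (X: samples x 285 feature matrix, y: class labels):
# selected = select_features_de(X, y)   # e.g., indices of 7 retained features
```

Wrapper searches of this kind are computationally heavy, which is why the iteration and population settings above are kept deliberately small.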
## 3. Results and Discussion

To evaluate the performance of the three different classifiers (hybrid ANN–ABC, hybrid ANN–BBO and LDA), various criteria are computed, including the confusion matrix, sensitivity, specificity and accuracy, the receiver operating characteristic (ROC) and the area under the ROC curve (AUC). Before evaluating the classifiers' performance, the features selected by the ANN–DE and ANN–GA approaches are presented.

#### 3.1. Feature Selection Based on ANN–DE and ANN–GA Approaches

As mentioned in previous sections, the ANN–DE and ANN–GA approaches were used to select effective (discriminant) features. Table 6 shows the seven features selected by ANN–DE and by ANN–GA. As one can see, the ANN–DE method selected all of its features from among the shape features and the GLCM-based texture features, while the ANN–GA method selected discriminant features from among the shape, GLCM-based texture, and color features. Figure 4 and Figure 5 show boxplot graphs for the value of each feature selected by the ANN–DE and ANN–GA methods, respectively. Figure 4 (ANN–DE) shows that the quince, red gum, mastic tree, and apricot classes can be separated based on only three features, the inverse difference moment normalized related to 0 degrees (IDMN_0), the homogeneity related to 0 degrees (homogeneity_0) and the mean related to 45 degrees (mean_45), since these features have the least overlapping boxplots among classes. The same can be said, for obvious reasons, about Figure 5c (ANN–GA) regarding the homogeneity related to 0 degrees (homogeneity_0) feature. Figure 6 shows the validation step, where the mean square error (MSE) is plotted as a function of iteration number during learning for the hybrid ANN–ABC, hybrid ANN–BBO and LDA classifiers, based on the selected features obtained with the ANN–DE and ANN–GA methods listed above. Figure 6a–c shows that the MSE values for the features selected by ANN–DE are lower than those for the features selected by ANN–GA. Consequently, the features selected by ANN–DE were finally used as input to obtain the classification results.

#### 3.2. Classification Based on Hybrid ANN–ABC, Hybrid ANN–BBO and LDA

In this study, three classifiers were used: hybrid ANN–ABC, hybrid ANN–BBO and LDA. To achieve statistically valid results, we averaged 100 simulations with uniform random train and test sets; the probability of an input sample belonging to the train set was 0.6, and 0.4 for the test set.

#### 3.2.1. Classification Based on the Hybrid ANN–ABC

Table 7 shows the confusion matrix classification results, using the ANN–ABC classifier for 100 iterations, over the test set. The lowest classifier error corresponds to the red gum class, with only 17 misclassified samples among 4703 total test samples, resulting in an accuracy above 99.6% for this class. The highest classifier error occurs in the mastic tree class, where 453 out of 3542 samples were misclassified, resulting in a misclassification rate of 12.79%. As a possible explanation, Figure 2 shows that the leaves of 2–Eucalyptus camaldulensis dehn (river red gum) are quite different in shape from those of the other four tree species, so samples of this class are classified with high accuracy, whereas the other four species have quite similar shapes among themselves, which is where most misclassification errors occur.
In summary, the ANN–ABC classifier classified the 20,700 test samples with an accuracy above 94%, which has to be judged as very good for our tree leaf image database.

#### 3.2.2. Classification Based on the Hybrid ANN–BBO

Table 8 shows the confusion matrix classification results using the hybrid ANN–BBO classifier, averaged over 100 iterations, on the test set. As can be seen, 2230 samples out of 20,700 were misclassified by this classifier (10.77%). Again, the lowest ANN–BBO classifier accuracy is for the mastic tree (82.20%) and the highest is again for the river red gum tree (94.60%), similar to the hybrid ANN–ABC classifier; these are considered promising results, although on average the ANN–BBO classifier performance lags behind that of the other two classifiers.

#### 3.2.3. Classification Based on LDA

Table 9 shows the classification results, presented as a confusion matrix, for the LDA classifier averaged over 100 uniform random train and test set iterations, on the test set. Interestingly, the LDA classifier classified all samples of the river red gum class correctly, so it can be concluded that for easily separable class data the LDA classifier reaches very high accuracy. On the other hand, the LDA classifier has lower accuracy than the hybrid ANN–ABC classifier in the quince and apricot classes. Finally, the LDA classifier classified all test data with an overall accuracy of 93.99%, a high accuracy very close to the 94.04% overall accuracy of the ANN–ABC classifier, and both are clearly above the 89.23% overall accuracy of the ANN–BBO classifier, which performed the poorest of the three.

#### 3.3. Classifiers Performance: Evaluation

Two common methods to evaluate classifier performance over the test set are:
• Based on the sensitivity, specificity, and accuracy values (confusion matrix).
• Based on the receiver operating characteristic (ROC) and the area under the ROC curve (AUC).
We evaluate the classifiers using both methods next.

#### 3.3.1. Classification Performance Based on the Sensitivity, Specificity, and Accuracy Values (Confusion Matrix)

With the sensitivity, specificity and accuracy criteria, one can evaluate a classifier system based on correctly and wrongly classified samples, detailing between which classes the errors take place if needed. Sensitivity, specificity, and accuracy are defined in Equations (2)–(4):
$\text{Sensitivity} = \frac{TP}{TP + FN}$ (2)
$\text{Specificity} = \frac{TN}{FP + TN}$ (3)
$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$ (4)
where the true-positive fraction (TP) is equal to the number of samples of each class that are correctly classified; the true-negative fraction (TN) is equal to the number of samples on the main diagonal of the confusion matrix (a square matrix whose dimension equals the number of classes, cf. Table 7, Table 8 and Table 9) minus the number of samples that are correctly classified in the intended class; the false-negative fraction (FN) is defined as the sum of the samples in the row of the investigated class minus the number of samples that are correctly classified in the intended class; and the false-positive fraction (FP) is defined as the sum of the samples in the column of the investigated class minus the number of samples that are correctly classified in the intended class (Wisaeng).
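As a quick illustration (not code from the paper), the per-class criteria of Eqs. (2)–(4) can be computed directly from a confusion matrix as follows. The TP, TN, FP and FN quantities are taken exactly as defined in the text above, and the function name is our own.

```python
import numpy as np

def per_class_metrics(C):
    """Per-class sensitivity, specificity and accuracy from confusion matrix C
    (rows = true class, columns = estimated class), following Eqs. (2)-(4)."""
    C = np.asarray(C, dtype=float)
    tp = np.diag(C)
    fn = C.sum(axis=1) - tp      # row sum minus diagonal element
    fp = C.sum(axis=0) - tp      # column sum minus diagonal element
    tn = np.trace(C) - tp        # TN definition as stated in the text above
    sensitivity = tp / (tp + fn)
    specificity = tn / (fp + tn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Example usage with a small 3-class confusion matrix:
# sens, spec, acc = per_class_metrics([[50, 2, 1], [3, 45, 2], [0, 4, 43]])
```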
In short, a sensitivity value of 100% in a certain class means that all samples of this class were classified correctly; a specificity value of 100% in a certain class means that no samples from other classes were misclassified into that class; and an accuracy value of 100% means that no samples from other classes were misclassified into that class and no samples from that class were misclassified into other classes.
Table 10 shows the classification performance criteria, including specificity, sensitivity and accuracy results for 100 uniform random iterations, using the hybrid ANN–ABC, hybrid ANN–BBO and LDA classifiers. Looking carefully at Table 10, one realizes that the LDA classifier is in general terms superior to both ANN–ABC (slightly) and ANN–BBO (clearly), that the river red gum tree class is consistently the easiest to classify for all three classifiers, and that the opposite happens with the mt. Atlas mastic tree class. In precise terms, the best mean sensitivity and mean specificity (together with the classifier achieving them) were computed for the five trees under study, resulting in: 1–Cydonia oblonga (quince) 95.89% (ANN–ABC), 95.91% (ANN–ABC); 2–Eucalyptus camaldulensis dehn (river red gum) 100% (LDA), 100% (LDA); 3–Malus pumila (apple) 96.63% (LDA), 94.99% (LDA); 4–Pistacia atlantica (mt. Atlas mastic tree) 91.71% (LDA), 82.57% (LDA); and 5–Prunus armeniaca (apricot) 88.67% (LDA), 94.65% (LDA), respectively, all over the test set.

#### 3.3.2. Classification Performance Based on ROC and AUC

Figure 7 shows the ROC graphs for the different classes and classifiers considered. In short, when plotting ROC curves we are plotting the false positive rate (FPR) versus the true positive rate (TPR), i.e., working on the specificity–sensitivity plane. The higher the AUC value, the higher the classifier accuracy, while an $AUC = 1/2$ value is totally useless, being equivalent to a random coin toss. As one can see, the ROC graphs related to the hybrid ANN–ABC and LDA classifiers lie close to the optimal corner of the ROC plane, which implies that these two classifiers have good accuracy. At the same time, Table 11 shows the mean and standard deviation of the AUC values for the five different classes and the three classifiers under consideration, over 100 repeated uniform random iterations. Figure 8 shows AUC boxplot graphs for 100 iterations based on the ANN–ABC, ANN–BBO and LDA classifiers. As one can see, these three classifiers classified river red gum class samples with extremely high accuracy, since the boxplot graphs are very compact. Also, LDA is often ahead of the other two classifiers, closely followed by ANN–ABC, in terms of AUC values (note that the vertical-axis AUC ranges differ between panels). In precise terms, the best mean AUC values computed for the five trees under study were: 1–Cydonia oblonga (quince) 0.991 (ANN–ABC); 2–Eucalyptus camaldulensis dehn (river red gum) 1.00 (LDA); 3–Malus pumila (apple) 0.996 (LDA); 4–Pistacia atlantica (mt. Atlas mastic tree) 0.979 (LDA); and 5–Prunus armeniaca (apricot) 0.994 (LDA), respectively, all over the test set.
Finally, to compare the correct classification rate (CCR) of this study with others, the works of Singh and Bhamrah, Kadir, Mursalin et al., and Ehsanirad were used. It should be noted that, since these studies do not share the same leaf image database, no direct comparison is possible. Singh and Bhamrah classified 8 species of different plants with an artificial neural network classifier, and Kadir used a Bayesian classifier to classify the leaves of several plants.
Mursalin et al. used shape and color features for plant classification purposes, and Ehsanirad used only texture features for the classification of different plants. Table 12 shows the accuracy or CCR of the different classification methods for comparison purposes. As one can see, the proposed method has both a higher number of samples and a higher accuracy than the other methods it is compared with.
One possible reason for the superiority of the proposed method might be the use of a wider range of feature types. As can be seen in Section 2.3 (feature extraction), in this study many features comprising five different feature types were extracted. This helps in finding effective discriminative features that can properly separate the leaves of different plant types, whereas other researchers often used only a couple of feature types, such as color and texture features. A second possible reason for the higher accuracy might be the structure of the neural network, since in the study proposed here the optimal structure of the ANN is adjusted by both the ABC and BBO algorithms.

## 4. Conclusions

This study aimed to design an imaging computer vision system to classify five types of tree leaf images. The relevant results can be summarized as follows:
• Tree leaf images had the lowest level of noise (noisy pixels) after segmentation in the YCbCr color space, in comparison with the RGB, YIQ, CMY, HSV and HSI color spaces.
• Two channels of the YCbCr color space were needed for image thresholding (one luminance and one chrominance channel).
• Among the seven discriminant features selected, four were texture features based on the gray level co-occurrence matrix and three were shape features.
• We believe that, since samples of 2–Eucalyptus camaldulensis dehn (river red gum, class 2) have clearly different shapes, shape features were selected as effective features, leading to almost no classification errors within the 2–Eucalyptus camaldulensis dehn (river red gum) class.
• Whenever the features of different classes do not overlap, the statistical LDA method usually shows superior performance.

## Author Contributions

Conceptualization, S.S., R.P. and J.I.A.; methodology, S.S. and J.I.A.; software, S.S.; validation, S.S. and J.I.A.; formal analysis, S.S. and J.I.A.; investigation, S.S. and J.I.A.; resources, S.S.; data curation, J.I.A.; writing—original draft preparation, J.I.A.; writing—review and editing, S.S., R.P. and J.I.A.; visualization, S.S. and R.P.; supervision, J.I.A.; project administration, J.I.A. All authors have read and agreed to the published version of the manuscript.

## Funding

This research was funded in part by the European Union (EU) under the Erasmus+ project entitled “Fostering Internationalization in Agricultural Engineering in Iran and Russia” [FARmER] with grant number 585596-EPP-1-2017-1-DE-EPPKA2-CBHE-JP.

## Conflicts of Interest

The authors declare no conflict of interest.

## Appendix A

We depict next the complete database of 516 segmented tree leaf images of all five types, shown in Figure A1, Figure A2, Figure A3, Figure A4 and Figure A5, corresponding to 1—Cydonia oblonga (quince), 2—Eucalyptus camaldulensis dehn (river red gum), 3—Malus pumila (apple), 4—Pistacia atlantica (mt. Atlas mastic tree) and 5—Prunus armeniaca (apricot), respectively.
Figure A1. Cydonia oblonga (quince) tree leaves image database: 111 samples.
Figure A2. 
Eucalyptus camaldulensis dehn (river red gum) tree leaves image database: 117 samples.\nFigure A2. Eucalyptus camaldulensis dehn (river red gum) tree leaves image database: 117 samples.\nFigure A3. Malus pumila (apple) tree leaves image database: 101 samples.\nFigure A3. Malus pumila (apple) tree leaves image database: 101 samples.\nFigure A4. Pistacia atlantica (mt. Atlas mastic tree) tree leaves image database: 89 samples.\nFigure A4. Pistacia atlantica (mt. Atlas mastic tree) tree leaves image database: 89 samples.\nFigure A5. Prunus armeniaca (apricot) tree leaves image database: 98 samples.\nFigure A5. Prunus armeniaca (apricot) tree leaves image database: 98 samples.\n\n## References\n\n1. Liu, F.H.; O’Connell, N.V. Off-site movement of surface-applied simazine from a citrus orchard as affected by irrigation incorporation. Weed Sci. 2002, 50, 672–676. [Google Scholar] [CrossRef]\n2. Singh, S.; Bhamrah, M.S. Leaf identification using feature extraction and neural network. IOSR J. Electron. Commun. Eng. 2015, 10, 134–140. [Google Scholar]\n3. Kadir, A. Leaf Identification Using Fourier Descriptors and Other Shape Features. Gate Comput. Vis. Pattern Recognit. 2015, 1, 3–7. [Google Scholar] [CrossRef]\n4. Ehsanirad, A. Plant classification based on leaf recognition. Int. J. Comput. Sci. Inform. Sec. 2010, 8, 78–81. [Google Scholar]\n5. Mursalin, M.; Hossain, M.; Noman, K.; Azam, S. Performance Analysis among Different Classifier Including Naive Bayes, Support Vector Machine and C4.5 for Automatic Weeds Classification. Glob. J. Comput. Sci. Technol. Graph. Vis. 2013, 13, 11–16. [Google Scholar]\n6. Ahmed, F.; Al-Mamun, H.A.; Bari, A.S.M.H.; Hossain, E.; Kwan, P. Classification of crops and weeds from digital images: A support vector machine approach. Crop Prot. 2012, 40, 98–104. [Google Scholar] [CrossRef]\n7. Rumpf, T.; Römer, C.; Weis, M.; Sökefeld, M.; Gerhards, R.; Plümer, L. Sequential support vector machine classification for small-grain weed species discrimination with special regard to Cirsium arvense and Galium aparine. Comput. Electron. Agric. 2012, 80, 89–96. [Google Scholar] [CrossRef]\n8. Pereira, L.A.M.; Nakamura, R.Y.M.; Souza, G.F.S.d.; Martins, D.; Papa, J.P. Aquatic weed automatic classification using machine learning techniques. Comput. Electron. Agric. 2012, 87, 56–63. [Google Scholar] [CrossRef]\n9. Azlah, M.A.F.; Chua, L.S.; Rahmad, F.R.; Abdullah, F.I.; Wan Alwi, S.R. Review on Techniques for Plant Leaf Classification and Recognition. Computers 2019, 8, 77. [Google Scholar] [CrossRef][Green Version]\n10. Gonzalez, R.C.; Woods, R.E.; Eddins, S.L. Digital Image Processing Using MATLAB; Prentice Hall: Upper Saddle River, NY, USA, 2004. [Google Scholar]\n11. Woebbecke, D.; Meyer, G.E.; Bargen, K.V.; Mortensen, D.A. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 3, 259–269. [Google Scholar] [CrossRef]\n12. Meyer, G.E.; Mehta, T.; Kocher, M.F.; Mortensen, D.A.; Samal, A. Textural imaging and discriminant analysis for distinguishing weeds for spot spraying. Trans. ASAE 1998, 41, 1189–1197. [Google Scholar] [CrossRef]\n13. Kataoka, T.; Kaneko, T.; Okamoto, H.; Hata, S. Crop growth estimation system using machine vision. In Proceedings of the IEEE/ASME International Conference Advances Intelligence Mechatronics (AIM 2003), Kobe, Japan, 20 July–24 July 24 2003; pp. 1079–1083. [Google Scholar]\n14. Meyer, G.E.; Neto, J.A.C. Verification of color vegetation indices for automated crop imaging applications. Comput. 
Electron. Agric. 2008, 63, 282–293. [Google Scholar] [CrossRef]\n15. Woebbecke, D.M.; Meyer, G.E.; Bargen, K.V.; Mortensen, D.A. Plant species identification, size, and enumeration using machine vision techniques on near-binary images. Opt. Agric. For. 1992, 1836, 208–219. [Google Scholar]\n16. Golzarian, M.R.; Frick, R.A. Classification of images of wheat, ryegrass and brome grass species at early growth stages using principal component analysis. Plant Meth. 2011, 7, 7–28. [Google Scholar] [CrossRef] [PubMed][Green Version]\n17. Wisaeng, K. A Comparison of Decision Tree Algorithms For UCI Repository Classification. Int. J. Eng. Trends Technol. 2013, 4, 3393–3397. [Google Scholar]\nFigure 1. Flowchart of the proposed computer vision system applied in tree leaves automatic classification.\nFigure 1. Flowchart of the proposed computer vision system applied in tree leaves automatic classification.\nFigure 2. An image sample of each tree leaf type: (a) 1–Cydonia oblonga (quince), (b) 2–Eucalyptus camaldulensis dehn (river red gum), (c) 3–Malus pumila (apple), (d) 4–Pistacia atlantica (mt. Atlas mastic tree), and (e) 5–Prunus armeniaca (apricot).\nFigure 2. An image sample of each tree leaf type: (a) 1–Cydonia oblonga (quince), (b) 2–Eucalyptus camaldulensis dehn (river red gum), (c) 3–Malus pumila (apple), (d) 4–Pistacia atlantica (mt. Atlas mastic tree), and (e) 5–Prunus armeniaca (apricot).\nFigure 3. Three sample tree leaf images used to show image pre-processing and segmentation operations: (a) original tree leaf image, (b) segmented tree leaf image, (c) binary image, and (d) improved binary image.\nFigure 3. Three sample tree leaf images used to show image pre-processing and segmentation operations: (a) original tree leaf image, (b) segmented tree leaf image, (c) binary image, and (d) improved binary image.\nFigure 4. Boxplot graphs for the value of each of the seven selected features by artificial neural network–differential evolution (ANN–DE) algorithm over the five tree varieties under study: (a) IDMN_0, (b) LWR, (c) length, (d) convexity, (e) VAR_0, (f) homogeneity_0, and (g) mean_45 (cf. Table 6).\nFigure 4. Boxplot graphs for the value of each of the seven selected features by artificial neural network–differential evolution (ANN–DE) algorithm over the five tree varieties under study: (a) IDMN_0, (b) LWR, (c) length, (d) convexity, (e) VAR_0, (f) homogeneity_0, and (g) mean_45 (cf. Table 6).\nFigure 5. Boxplot graphs for the value of each of the seven selected features by the artificial neural network–genetic algorithm (ANN–GA) algorithm over the five tree varieties under study: (a) LWR, (b) EXS_HSV, (c) homogeneity_0, (d) STDY_CMY, (e) percentage Cr-YCbCr, (f), correlation_135, and (g) STDH_HSI (cf. Table 6).\nFigure 5. Boxplot graphs for the value of each of the seven selected features by the artificial neural network–genetic algorithm (ANN–GA) algorithm over the five tree varieties under study: (a) LWR, (b) EXS_HSV, (c) homogeneity_0, (d) STDY_CMY, (e) percentage Cr-YCbCr, (f), correlation_135, and (g) STDH_HSI (cf. Table 6).\nFigure 6. Mean square learning (train) error as a function of iteration number using ANN–DE and ANN–GA feature selection algorithms: (a) ANN–ABC classifier, (b) artificial neural network–biogeography based optimization (ANN–BBO) classifier, (c) LDA classifier, and (d) best performance among all three classifiers (ANN-DE).\nFigure 6. 
Mean square learning (train) error as a function of iteration number using ANN–DE and ANN–GA feature selection algorithms: (a) ANN–ABC classifier, (b) artificial neural network–biogeography based optimization (ANN–BBO) classifier, (c) LDA classifier, and (d) best performance among all three classifiers (ANN-DE).\nFigure 7. Average receiver operating characteristic (ROC) plots: (a) ANN–ABC, (b) ANN–BBO, and (c) LDA, classifiers. 1–Cydonia oblonga (quince), 2–Eucalyptus camaldulensis dehn (river red gum), 3–Malus pumila (apple), 4–Pistacia atlantica (mt. Atlas mastic tree) and 5–Prunus armeniaca (apricot). 100 random train and test set samples simulations averaged, computed over the test set.\nFigure 7. Average receiver operating characteristic (ROC) plots: (a) ANN–ABC, (b) ANN–BBO, and (c) LDA, classifiers. 1–Cydonia oblonga (quince), 2–Eucalyptus camaldulensis dehn (river red gum), 3–Malus pumila (apple), 4–Pistacia atlantica (mt. Atlas mastic tree) and 5–Prunus armeniaca (apricot). 100 random train and test set samples simulations averaged, computed over the test set.\nFigure 8. AUC boxplot graphs: (a) ANN–ABC, (b) ANN–BBO, and (c) LDA, classifiers. 1–Cydonia oblonga (quince), 2–Eucalyptus camaldulensis dehn (river red gum), 3–Malus pumila (apple), 4–Pistacia atlantica (mt. Atlas mastic tree) and 5–Prunus armeniaca (apricot). 100 random train and test set samples simulations averaged, computed over the test set.\nFigure 8. AUC boxplot graphs: (a) ANN–ABC, (b) ANN–BBO, and (c) LDA, classifiers. 1–Cydonia oblonga (quince), 2–Eucalyptus camaldulensis dehn (river red gum), 3–Malus pumila (apple), 4–Pistacia atlantica (mt. Atlas mastic tree) and 5–Prunus armeniaca (apricot). 100 random train and test set samples simulations averaged, computed over the test set.\nTable 1. Class number, tree common English and scientific names and number of tree leaves images: 516 tree leaves in total.\nTable 1. Class number, tree common English and scientific names and number of tree leaves images: 516 tree leaves in total.\nClass #English NameScientific NameNumber of Samples\n1quinceCydonia oblonga111 (Figure A1)\n2river red gumEucalyptus camaldulensis dehn117 (Figure A2)\n3appleMalus pumila101 (Figure A3)\n4mt. Atlas mastic treePistacia atlantica89 (Figure A4)\n5apricotPrunus armeniaca98 (Figure A5)\nTable 2. Texture features $27 × 4 = 108$ extracted based on the gray level co-occurrence matrix (GLCM): feature number and name.\nTable 2. Texture features $27 × 4 = 108$ extracted based on the gray level co-occurrence matrix (GLCM): feature number and name.\nNumberFeatureNumberFeature\n1contrast15inverse difference normalized (INN)\n2sum of squares16inverse difference moment normalized\n3second diagonal moment17diagonal moment\n4mean18sum average\n5sum entropy19variance\n6difference variance20sum variance\n7difference entropy21standard deviation\n8information measure of correlation122coefficient of variation\n9information measure of correlation223maximum probability\n10inverse difference (INV) in homogeneity24Correlation\n11autocorrelation25cluster prominence\n13energy27entropy\n14homogeneity\nTable 3. Shape features $29$: feature number and name.\nTable 3. Shape features $29$: feature number and name.\nNumberFeatureNumberFeature\n1length16orientation\n2Euler number17filled area\n3area18equiv. 
diameter\n4logarithm of the ratio of length to width19width\n5ratio of perimeter to broadness of object20eccentricity\n6ratio of width to length21elongation\n7ratio of area to length22compactness\n8solidity23extent\n9perimeter24aspect ratio\n10convex area25ratio of length to perimeter\n11minor axis length26perimeter (old way)\n12major axis length27compactness\n13form factor28convexity\n14ratio of subtraction and sum of minor axis length and major axis length29centroid\n15convex perimeter\nTable 4. Vegetation index features $14 × 6 = 84$: feature name and mathematical formulae definitions.\nTable 4. Vegetation index features $14 × 6 = 84$: feature name and mathematical formulae definitions.\nFeature NameMathematical Equation Definition\nnormalized first component of RGB color space$R n = R / ( R + G + B )$\nnormalized second component of RGB color space$G n = G / R + G + B$\nnormalized third component of RGB color space$B n = B / R + G + B$\ngray channel$g r a y = 0.2898 × R n + 0.5870 × G n + 0.1140 × B n$\nadditional green (Woebbecke et al. )$E X G = 2 × G n − R n − B n$\nadditional red (Meyer et al. )$E X R = 1.4 × R n − G n$\ncolor index for vegetation cover (Kataoka et al. )$C I V E = 0.441 × R n − 0.811 × G n + 0.385 × B n + 18.78$\nsubtraction between additional green and additional red (Meyer and Neto )$E X G R = E X G − E X R$\nnormalized difference index (Woebbecke et al. )$N D I = G n − B n / G n + B n$\ngreen minus blue index (Woebbecke et al. )$G B = G n − B n$\nred-blue contrast (Golzarian and Frick )$R B I = G n − B n / G n + B n$\ngreen-red index (Golzarian and Frick )$E R I = R n − G n × R n − B n$\nadditional green index (Golzarian and Frick )$E G I = G n − R n × G n − B n$\nadditional blue index (Golzarian and Frick )$E B I = B n − G n × B n − R n$\nTable 5. Optimal neural network parameters under MatLab, as determined by ant bee colony (ABC) and BBO algorithms. satlin: saturating linear transfer function; tribas: triangular basis transfer function; tansig: hyperbolic tangent sigmoid transfer function; purelin: linear transfer function; logsig: log-sigmoid transfer function; trainlm: Levenberg–Marquardt backpropagation learning rule; learnwh: Widrow–Hoff weight/bias learning function; learncon: conscience bias learning function.\nTable 5. Optimal neural network parameters under MatLab, as determined by ant bee colony (ABC) and BBO algorithms. satlin: saturating linear transfer function; tribas: triangular basis transfer function; tansig: hyperbolic tangent sigmoid transfer function; purelin: linear transfer function; logsig: log-sigmoid transfer function; trainlm: Levenberg–Marquardt backpropagation learning rule; learnwh: Widrow–Hoff weight/bias learning function; learncon: conscience bias learning function.\nAlgorithmNumber of LayersNumber of NeuronsNonlinear Transfer\nFunctions\nBackpropagation Network Training FunctionBackpropagation Weight/Bias Learning Function\nABC3First layer: 25, second layer: 25, third layer: 25First layer: satlin, second layer: tribas, third layer: tansigtrainlmlearnwh\nBBO3First layer: 17, second layer: 16, third layer: 15First layer: satlin, second layer: purelin, third layer: logsigtrainlmlearncon\nTable 6. Discriminant feature selection based on the ANN–DE and ANN–GA methodologies: seven final discriminant features selected in each method.\nTable 6. 
Discriminant feature selection based on the ANN–DE and ANN–GA methodologies: seven final discriminant features selected in each method.\nMethodSelected Features\nANN–DE\n• inverse difference moment normalized related to 0 degree (IDMN_0)\n• ratio of length to width (LWR)\n• length\n• convexity\n• variance related to 0 degree (VAR_0)\n• homogeneity of 0 degree (Homogeneity_0)\n• mean of 45 degrees (Mean_45)\nANN–GA\n• ratio of length to width (LWR)\n• additional second component in HSV color space index (EXS_HSV)\n• homogeneity of 0 degree (Homogeneity_0)\n• standard deviation of the third component of CMY color space (STDY_CMY)\n• percentage of pixels between levels 100 and 120 of third component in YCbCr color space (Percentage Cr-YCbCr)\n• correlation of 135 degrees (Correlation_135)\n• standard deviation of first component of HSI color space (STDH_HSI)\nTable 7. Confusion matrix classification results for the hybrid ANN–ABC classifier: 100 random train and test set samples, results computed over the test set. T: true class, E: estimated class.\nTable 7. Confusion matrix classification results for the hybrid ANN–ABC classifier: 100 random train and test set samples, results computed over the test set. T: true class, E: estimated class.\nT\\EQuinceRed GumAppleMasticApricotAll DataPartial Incorrect Classification Rate (%)Correct Classification Rate (%)\nquince4273091581644564.1194.04\nred gum14686214047030.362\napple332337921178640516.39\nmastic11881143089213354212.79\napricot301093190362539488.18\nTable 8. Confusion matrix classification results for the hybrid ANN–BBO classifier: 100 random train and test set samples, results computed over the test set. T: true class, E: estimated class.\nTable 8. Confusion matrix classification results for the hybrid ANN–BBO classifier: 100 random train and test set samples, results computed over the test set. T: true class, E: estimated class.\nT\\EQuinceRed GumAppleMasticApricotAll DataPartial Incorrect Classification Rate (%)Correct Classification Rate (%)\nquince413255661477544757.6689.23\nred gum106432652622745735.40\napple140583573145162407812.38\nmastic196551312913249354417.80\napricot87171682323526403012.51\nTable 9. Confusion matrix classification results for LDA classifier: 100 random train and test set samples, results computed over the test set. T: true class, E: estimated class.\nTable 9. Confusion matrix classification results for LDA classifier: 100 random train and test set samples, results computed over the test set. T: true class, E: estimated class.\nT\\eQuinceRed gumAppleMasticApricotAll DataPartial Incorrect Classification Rate (%)Correct Classification Rate (%)\nquince413000366044968.1493.99\nred gum0465800046580\napple4203927851040643.37\nmastic22085325118735458.29\napricot8901222353491393711.33\nTable 10. Performance classification criteria results for ANN–ABC, ANN–BBO and LDA classifiers: sensitivity, accuracy and specificity, evaluated on 100 random train and test set samples, computed over the test set.\nTable 10. 
Performance classification criteria results for ANN–ABC, ANN–BBO and LDA classifiers: sensitivity (sen), accuracy (acc) and specificity (spe), evaluated on 100 random train and test set samples, computed over the test set.

| Class | ANN–ABC sen (%) | ANN–ABC acc (%) | ANN–ABC spe (%) | ANN–BBO sen (%) | ANN–BBO acc (%) | ANN–BBO spe (%) | LDA sen (%) | LDA acc (%) | LDA spe (%) |
|---|---|---|---|---|---|---|---|---|---|
| 1–Cydonia oblonga | 95.89 | 98.16 | 95.91 | 92.33 | 95.49 | 88.65 | 91.86 | 97.40 | 96.43 |
| 2–Eucalyptus camaldulensis d. | 99.64 | 99.70 | 99.13 | 94.60 | 97.71 | 95.90 | 100 | 100 | 100 |
| 3–Malus pumila | 93.61 | 97.61 | 94.56 | 87.62 | 95.24 | 85.55 | 96.63 | 98.26 | 94.99 |
| 4–Pistacia atlantica | 87.21 | 95.43 | 86.57 | 82.19 | 93.82 | 83.25 | 91.71 | 95.20 | 82.57 |
| 5–Prunus armeniaca | 91.82 | 96.83 | 92.00 | 87.49 | 94.78 | 87.30 | 88.67 | 96.80 | 94.65 |

Sensitivity (sen), accuracy (acc), specificity (spe); 1–Cydonia oblonga (quince), 2–Eucalyptus camaldulensis dehn (river red gum), 3–Malus pumila (apple), 4–Pistacia atlantica (mt. Atlas mastic tree) and 5–Prunus armeniaca (apricot).
Table 11. Mean ± standard deviation area under the ROC curve (AUC) values for ANN–ABC, ANN–BBO and LDA classifiers: 100 random train and test set samples, computed over the test set.

| Classifier | Quince | Red Gum | Apple | Mastic | Apricot |
|---|---|---|---|---|---|
| ANN–ABC | 0.991 ± 0.010 | 0.999 ± 0.000 | 0.985 ± 0.017 | 0.963 ± 0.025 | 0.986 ± 0.015 |
| ANN–BBO | 0.961 ± 0.136 | 0.968 ± 0.139 | 0.945 ± 0.162 | 0.945 ± 0.114 | 0.947 ± 0.179 |
| LDA | 0.988 ± 0.005 | 1.000 ± 0.000 | 0.996 ± 0.002 | 0.979 ± 0.009 | 0.994 ± 0.003 |

Table 12. Comparison of accuracy or the correct classification rate (CCR) of the different classification systems.

| Method | Number of Samples | Correct Classification Rate (%) |
|---|---|---|
| Best result of the here proposed model | 516 | 99.52 (test set) |
| Singh and Bhamrah | 80 | 98.8 |
| Mursalin et al. | 400 | 98.24 |

Sabzi, S.; Pourdarbani, R.; Arribas, J.I. A Computer Vision System for the Automatic Classification of Five Varieties of Tree Leaf Images. Computers 2020, 9, 6. https://doi.org/10.3390/computers9010006
https://stoplearn.com/volume-of-sphere-and-hemisphere/

# Volume of Sphere and Hemisphere

### Volume of a Sphere

A sphere is a solid in which all the points on the round surface are equidistant from a fixed point, known as the center of the sphere. The distance from the center to the surface is the radius.

Volume of sphere = (4/3)πr³, where r is the radius.

How to find the volume of a sphere? What is the volume of air in the ball?

Volume of a hemisphere

A hemisphere is half a sphere, with one flat circular face and one bowl-shaped face.

Volume of hemisphere = (2/3)πr³, where r is the radius.

Spheres

What is a sphere?
A sphere is a solid with all its points the same distance from the center. The distance is known as the radius of the sphere. The maximum straight distance through the center of a sphere is known as the diameter of the sphere. The diameter is twice the radius.

How to find the volume of a sphere?

The volume of a sphere is equal to four-thirds of the product of pi and the cube of the radius.

The volume and surface area of a sphere are given by the formulas:

Volume = (4/3)πr³ and Surface area = 4πr²,

where r is the radius of the sphere.

Example:

Calculate the volume of a sphere with radius 4 cm.

Solution:

Volume of sphere = (4/3)π(4 cm)³ ≈ 268.08 cm³

We can also change the subject of the formula to obtain the radius given the volume.

Example:

The volume of a spherical ball is 5,000 cm³. What is the radius of the ball?

Solution:

r = (3V/(4π))^(1/3) = (3 × 5,000/(4π))^(1/3) cm ≈ 10.6 cm

Example: Find the volume of a sphere with a diameter of 14 cm.

Solution: The radius is 14/2 = 7 cm, so the volume is (4/3)π(7 cm)³ ≈ 1436.76 cm³.
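The worked examples above can be checked with a small Python script; this is an illustration added here, not part of the original lesson, and the function names are our own.

```python
# Check of the sphere/hemisphere volume formulas and the worked examples above.
import math

def sphere_volume(r):
    return 4.0 / 3.0 * math.pi * r ** 3

def hemisphere_volume(r):
    return 0.5 * sphere_volume(r)

def radius_from_volume(v):
    # invert V = (4/3)*pi*r**3  ->  r = (3V / (4*pi))**(1/3)
    return (3.0 * v / (4.0 * math.pi)) ** (1.0 / 3.0)

print(round(sphere_volume(4), 2))           # ~268.08 cm^3 for r = 4 cm
print(round(radius_from_volume(5000), 1))   # ~10.6 cm for V = 5000 cm^3
print(round(sphere_volume(14 / 2), 2))      # ~1436.76 cm^3 for d = 14 cm
```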
https://journals.ametsoc.org:443/view/journals/bams/97/12/bams-d-15-00267.1.xml

## References

• Ambaum, M. H. P., 2010: Significance tests in climate science. J. Climate, 23, 5927–5932, doi:10.1175/2010JCLI3746.1.
• Benjamini, Y., and Y. Hochberg, 1995: Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. Roy. Stat. Soc., 57B, 289–300.
• Hartmann, D. L., and Coauthors, 2013: Observations: Atmosphere and surface. Climate Change 2013: The Physical Science Basis, T. F. Stocker et al., Eds., Cambridge University Press, 159–254.
• Jolliffe, I. T., 2004: P stands for … Weather, 59, 77–79, doi:10.1256/wea.132.03.
• Katz, R. W., 1982: Statistical evaluation of climate experiments with general circulation models: A parametric time series modeling approach. J. Atmos. Sci., 39, 1446–1455, doi:10.1175/1520-0469(1982)039<1446:SEOCEW>2.0.CO;2.
• Katz, R. W., 2002: Sir Gilbert Walker and a connection between El Niño and statistics. Stat. Sci., 17, 97–112, doi:10.1214/ss/1023799000.
• Katz, R. W., and B. G. Brown, 1991: The problem of multiplicity in research on teleconnections. Int. J. Climatol., 11, 505–513, doi:10.1002/joc.3370110504.
• Livezey, R. E., and W. Y. Chen, 1983: Statistical field significance and its determination by Monte Carlo techniques. Mon. Wea. Rev., 111, 46–59, doi:10.1175/1520-0493(1983)111<0046:SFSAID>2.0.CO;2.
• Mielke, P. W., K. J. Berry, and G. W. Brier, 1981: Application of multi-response permutation procedures for examining seasonal changes in monthly mean sea level pressure patterns. Mon. Wea. Rev., 109, 120–126, doi:10.1175/1520-0493(1981)109<0120:AOMRPP>2.0.CO;2.
• Polyak, I., 1996: Computational Statistics in Climatology. Oxford University Press, 358 pp.
• Storey, J. D., and R. Tibshirani, 2003: Statistical significance for genomewide studies. Proc. Natl. Acad. Sci. USA, 100, 9440–9445, doi:10.1073/pnas.1530509100.
• Ventura, V., C. J. Paciorek, and J. S. Risbey, 2004: Controlling the proportion of falsely rejected hypotheses when conducting multiple tests with climatological data. J. Climate, 17, 4343–4356, doi:10.1175/3199.1.
• von Storch, H., 1982: A remark on Chervin-Schneider’s algorithm to test significance of climate experiments with GCM’s. J. Atmos. Sci., 39, 187–189, doi:10.1175/1520-0469(1982)039<0187:AROCSA>2.0.CO;2.
• von Storch, H., and F. W. Zwiers, 1999: Statistical Analysis in Climate Research. Cambridge University Press, 484 pp.
• Vose, R. S., R. L. Schmoyer, P. M. Steurer, T. C. Peterson, R. Heim, T. R. Karl, and J. Eischeid, 1992: The Global Historical Climatology Network: Long-term monthly temperature, precipitation, sea level pressure, and station pressure data. Environmental Sciences Division Publ. 3912, ORNL/CDIAC-53, NDP-041, 325 pp., doi:10.3334/CDIAC/cli.ndp041.
• Walker, G. T., 1914: Correlation in seasonal variations of weather. III. On the criterion for the reality of relationships or periodicities. Mem. Indian Meteor. Dept., 21(9), 13–15.
• Wilks, D. S., 1997: Resampling hypothesis tests for autocorrelated fields. J. Climate, 10, 65–83, doi:10.1175/1520-0442(1997)010<0065:RHTFAF>2.0.CO;2.
• Wilks, D. S., 2006: On “field significance” and the false discovery rate. J. Appl. Meteor. Climatol., 45, 1181–1189, doi:10.1175/JAM2404.1.
• Wilks, D. S., 2011: Statistical Methods in the Atmospheric Sciences. 3rd ed. Academic Press, 676 pp.
• Zwiers, F. W., 1987: Statistical considerations for climate experiments. Part II: Multivariate tests. J. Climate Appl. Meteor., 26, 477–487, doi:10.1175/1520-0450(1987)026<0477:SCFCEP>2.0.CO;2.
• Zwiers, F. W., and H. J. Thiébaux, 1987: Statistical considerations for climate experiments. Part I: Scalar tests. J. Climate Appl. Meteor., 26, 465–476, doi:10.1175/1520-0450(1987)026<0464:SCFCEP>2.0.CO;2.
• Fig. 1. Hypothetical 3720-gridpoint domain, representing the Northern Hemisphere from 20° to 80°N. Concentric thick outlines indicate regions where local null hypotheses are not true.
• Fig. 2. Eight spatial autocorrelation functions of the form in Eq. (4). Stars indicate correlations for Northern Hemisphere 500-hPa heights from Polyak (1996).
• Fig. 3. Illustration of the FDR criterion using αFDR = 0.10 (dashed diagonal line), αFDR = 0.20 (dotted diagonal line), and the naïve stippling approach of rejecting any local test with p value smaller than α0 = 0.05 (dashed horizontal line). Plotted points are the smallest 350 sorted p values of 3720 local tests. Points below the diagonal lines represent significant results according to the two FDR control levels. Crosses represent six tests with true null hypotheses that were falsely rejected, and circles represent false local null hypotheses that were not rejected, when αFDR = 0.10. Inset shows closer view of points within the red box. The 345 tests with p values smaller than α0 = 0.05 would be declared significant under the naïve stippling procedure, even though a majority of these null hypotheses are true.
• Fig. 4. Achieved global test levels (probabilities of rejecting true global null hypotheses) when using the FDR procedure, as a function of spatial correlation strength. For moderate and strong spatial correlation, approximately correct results can be achieved by choosing αFDR = 2αglobal.
• Fig. 5. False discovery rates for the FDR method with αFDR = 0.10 (red), the Livezey–Chen counting approach with α0 = αglobal = 0.05 (black), and the naïve stippling approach with α0 = 0.05 (brown), as functions of numbers of false local null hypotheses and alternative-hypothesis magnitudes ∆µ, using the e-folding distance 1.54 × 10³ km.
• Fig. 6. Maps of local test decisions made by (a) the FDR procedure with αFDR = 0.10 and (b) the naïve stippling approach using α0 = 0.05. Correct local null hypothesis rejections are indicated by plus signs, failures to reject false local null hypotheses are indicated by circles, and erroneous rejections of true null hypotheses are indicated by crosses. Results correspond to the ranked p values shown in Fig. 3, with NA = 156, ∆µ = 0.7, and e-folding distance 1.54 × 10³ km.
• Fig. 7. Linear trends in annual precipitation during 1951–2010, based on data from the Global Historical Climatology Network (Vose et al. 1992). Grid elements with linear trends exhibiting local statistical significance at the α = 0.10 level are indicated by plus signs, and those with p values small enough to satisfy the FDR criterion with αFDR = 0.10 [Eq. (3)] are indicated by red circles. The figure has been modified from Hartmann et al. (2013, p. 203).
• Fig. 8. Correlations for pairs of colored grid elements in Fig. 7 separated by no more than 2 × 10³ km. The fitted exponential decay function indicates an approximate e-folding decorrelation distance of 0.62 × 10³ km, or approximately 150% of the typical grid spacing.

# “The Stippling Shows Statistically Significant Grid Points”: How Research Results are Routinely Overstated and Overinterpreted, and What to Do about It

Department of Earth and Atmospheric Sciences, Cornell University, Ithaca, New York

## Abstract

Special care must be exercised in the interpretation of multiple statistical hypothesis tests—for example, when each of many tests corresponds to a different location. Correctly interpreting results of multiple simultaneous tests requires a higher standard of evidence than is the case when evaluating results of a single test, and this has been known in the atmospheric sciences literature for more than a century. Even so, the issue continues to be widely ignored, leading routinely to overstatement and overinterpretation of scientific results, to the detriment of the discipline. This paper reviews the history of the multiple-testing issue within the atmospheric sciences literature and illustrates a statistically principled and computationally easy approach to dealing with it—namely, control of the false discovery rate.

CORRESPONDING AUTHOR: Dan Wilks, Department of Earth and Atmospheric Sciences, 1104 Bradfield Hall, Cornell University, Ithaca, NY 14853, E-mail: dsw5@cornell.edu

Controlling the false discovery rate provides a computationally straightforward approach to interpretation of multiple hypothesis tests.

“A neglected aspect of statistical testing in a large number of geophysical studies has been the evaluation of the collective significance of a finite set of individual significance tests. This neglect has stemmed…from a lack of understanding of the combined effects of number and interdependence of set numbers” (Livezey and Chen 1983).

More than 30 years have passed since the seminal paper by Livezey and Chen (1983) pointed out that collections of multiple statistical tests, often in the setting of individual tests at many spatial grid points, are very often interpreted incorrectly and in a way that leads to research results being overstated.
That paper also proposed an approach to dealing with and protecting against that problem, which they called assessment of “field significance.” The idea was to construct a “metatest” using as input the results of the many individual tests to address the “global” null hypothesis that all individual “local” (e.g., grid point) null hypotheses are true. If the global null hypothesis cannot be rejected, one cannot conclude with adequate confidence that any of the individual local tests show meaningful violations of their respective null hypotheses. Thus, failure to achieve field significance protects the analyst to a degree from being misled into believing results from the many erroneous rejections of true local gridpoint null hypotheses that will invariably occur.\n\nUnfortunately, very little has changed during the intervening decades with respect to the overinterpretation of multiple hypothesis tests in the atmospheric sciences literature. For example, of the 281 papers published in the Journal of Climate during the first half of 2014, 97 (34.5%) included maps described in part by some variant of the quotation in the title of this paper. These studies implicitly but wrongly represented that any individual gridpoint test exhibiting nominal statistical significance was indicative of a physically meaningful result. By contrast, only 3 of the 281 papers (1.1%) considered the effects of multiple hypothesis testing on their scientific conclusions. (The remaining 64.4% of these papers either had no maps or did not attempt statistical inference on any of the mapped quantities.) These are disturbing but unfortunately quite representative statistics. Consequences of the widespread and continued failure to address the issue of multiple hypothesis testing are overstatement and overinterpretation of the scientific results, to the detriment of the discipline.\n\nThe purposes of this paper are to highlight problems relating to interpretation of multiple statistical tests, to provide some of the history related to this issue, and to describe and illustrate a straightforward and statistically principled approach—control of the false discovery rate (FDR)—to protecting against overstatement and overinterpretation of multiple-testing results.\n\n## EXPOSITION OF THE MULTIPLE-TESTING PROBLEM.\n\nComputation of a single hypothesis test involves defining a null hypothesis H0, which will be rejected in favor of an alternative hypothesis HA if a sufficiently extreme value of the test statistic is observed (e.g., Wilks 2011). Rejection of H0 at a test level α occurs if the test statistic is sufficiently extreme that the probability (called the p value) of observing it or any other outcome even less favorable to H0, if that null hypothesis is true, is no larger than α. If H0 is rejected with α = 0.05 (the most common, although an arbitrary choice), the result is said to be significant at the 5% level.1\n\nAlthough perhaps intuitively attractive, it is quite incorrect to interpret a p value as the probability that the null hypothesis is true, given the evidence expressed in the observed test statistic (e.g., Ambaum 2010). The correct interpretation is opposite: a p value is a probability related to the magnitude of a test statistic, assuming the truth of H0. The implication is that any true null hypothesis will be rejected with probability α (if the test has been formulated correctly), so that collections of N0 hypothesis tests whose null hypotheses are all true will exhibit, on average, αN0 erroneous rejections. 
However, any particular collection of N0 hypothesis tests whose null hypotheses are all true will rarely exhibit exactly αN0 erroneous rejections, but rather the number of erroneous rejections will be a random quantity. That is, the number of erroneous rejections will be different for different (possibly hypothetical) batches of the same kind of data, and for any particular batch this number will behave as if it had been drawn from a probability distribution whose mean is αN0.\n\nIf the results of these N0 hypothesis tests are statistically independent, then the probability distribution for the number of erroneously rejected null hypotheses will be binomial, yielding the probabilities for the possible numbers of erroneously rejected tests x,",
$$\Pr\{X = x\} = \binom{N_0}{x}\,\alpha^{x}\,(1-\alpha)^{N_0 - x}, \qquad x = 0, 1, \ldots, N_0. \tag{1}$$
"One implication of this equation is that, unless N0 is relatively small, erroneously rejecting at least one of the true null hypotheses is nearly certain: for example, if α = 0.05 and N0 = 100 this probability is 0.994. Thus, some true null hypotheses will almost always be erroneously rejected in any realistic multiple-testing situation involving gridded data. Even though this number will be αN0 on average, Eq. (1) specifies nonnegligible probabilities for numbers of erroneous rejections that may be substantially larger than αN0. When the members of the collection of hypothesis tests are not independent, which is the usual situation for gridded data, Eq. (1) is no longer valid and the probabilities for numbers of erroneous rejections much larger than αN0 are even higher.\n\nThe problem of interpreting the results of N multiple simultaneous hypothesis tests is further complicated by the fact that the fraction of true null hypotheses N0/N is unknown, and also that some of the NA = NN0 false null hypotheses may not be rejected. How, then, can a spatial field of hypothesis test results be interpreted in a statistically principled and scientifically meaningful way?\n\n## HISTORICAL DEVELOPMENT OF MULTIPLE TESTING IN THE ATMOSPHERIC SCIENCES.\n\n### Walker’s test.\n\nThe question just posed has been confronted in the atmospheric sciences for more than a century, apparently having been addressed first by Walker (1914). Katz and Brown (1991), and Katz (2002) provide a modern perspective on Walker’s thinking on this subject.\n\nWalker realized that an extreme value of a sample statistic (e.g., a small p value) is progressively more likely to be observed as more realizations of the statistic (e.g., more hypothesis tests) are examined, so that a progressively stricter standard for statistical significance must be imposed as the number of tests increases. To limit the probability of erroneously rejecting one or more of N0 true null hypotheses to an overall level α0, Walker’s criterion is that only individual tests with p values no larger than αWalker should be regarded as significant, where (e.g., Wilks 2006)",
$$\alpha_{\mathrm{Walker}} = 1 - (1 - \alpha_0)^{1/N_0}. \tag{2}$$
"Of course αWalker = α0 for a single (N0 = 1) test. To limit the probability of erroneously rejecting any of N0 = 100 true null hypotheses to the level α0 = 0.05, only those tests having p values smaller than αWalker = 0.000513 would be regarded as significant according to this criterion. In contrast, as noted above, naïvely evaluating each of N0 = 100 independent tests having true null hypotheses at the α0 = 0.05 level (i.e., ignoring the multiple-testing problem) results in a 0.994 probability that at least one true null hypothesis is erroneously rejected.\n\nEquation (2) was derived under the (often unrealistic) assumption that the results of the individual tests are statistically independent, but in practice it is robust to (only modestly affected by) deviations from this assumption (Katz and Brown 1991; Wilks 2006). On the other hand, although Eq. (2) will yield relatively few rejections of true null hypotheses, the Walker criterion is quite strict since αWalkerα0/N, which compromises the sensitivity of the procedure for detecting false null hypotheses.\n\n### The field significance approach.\n\nVon Storch (1982) and Livezey and Chen (1983) cast the problem of evaluating multiple hypothesis tests as a metatest, or a global hypothesis test whose input data are the results of N local hypothesis tests. Because the individual local tests often pertain to a grid or other geographic array, they can be thought of as composing a “field” of test results. Accordingly this approach to multiple testing is generally referred to as assessment of field significance (Livezey and Chen 1983). It has become the dominant paradigm for multiple testing in the atmospheric sciences, especially when the individual hypothesis tests pertain to a network of geographic locations.\n\nThe global null hypothesis is that all of the local null hypotheses are true, so that failure to reject the global null hypothesis implies that significant results have not been detected anywhere in the field of individual local tests. In the idealized case that the local null hypotheses are statistically independent, the binomial distribution [Eq. (1)] allows calculation of the minimum number of locally significant tests required to reject a global null hypothesis—that is, to achieve field significance. For example, again if N = 100 independent tests and α0 = 0.05, the global null hypothesis implies N0 = N = 100 so that on average (over many hypothetical realizations of the single testing situation for which we have data) 5 of the 100 local null hypotheses are expected to be rejected. But in order to reject the global null hypothesis, an unusually large number of local test rejections must be observed. Equation (1) specifies that 10 or more such rejections are required in order to have smaller than αglobal = α0 = 0.05 probability of observing this or a more extreme result if the global null hypothesis is true. If fewer of these independent local tests have p values smaller than α0 = 0.05, then none of them are regarded as significant according to this criterion.\n\nAssuming statistical independence among the local test results is a best-case situation. The usual condition of spatial correlation among the local gridpoint tests implies that even more local test rejections than implied by Eq. (1) are required in order to achieve field significance. 
However, exactly how many local test rejections are required depends on the nature of the underlying spatial correlation, and this threshold may be difficult to determine in a particular multiple-testing setting. One approach is to try to estimate an “effective number of independent tests” Neff < N and to use this value in Eq. (1), although often Neff cannot be estimated rigorously (von Storch and Zwiers 1999). Livezey and Chen (1983) also suggest estimating the frequency distribution for numbers of locally significant tests using Monte Carlo methods (i.e., randomly resampling the available data in a manner consistent with the global null hypothesis; e.g., Mielke et al. 1981; Zwiers 1987). This approach can require elaborate and computationally expensive calculations, especially if the data exhibit both temporal and spatial correlations (Wilks 1997), and in some test settings an appropriate Monte Carlo algorithm may not be available. Ignoring the effect of spatial correlation leads to highly inaccurate test results when using this method, with global null hypotheses being rejected much more frequently than specified by the nominal αglobal (von Storch 1982; Livezey and Chen 1983; Wilks 2006).\n\nThe Livezey–Chen procedure has other drawbacks beyond its sensitivity to spatial correlation. The most important of these are as follows:\n\n1. The global test statistic involves only the numbers of locally significant tests but not their p values, so that vanishingly small local p values can provide no more evidence against the global null hypothesis than do local tests for which pα0. Test sensitivity is consequently less than optimal because not all the available information is used (Zwiers 1987; Wilks 2006). This problem is particularly acute when the fraction of false null hypotheses is small.\n\n2. Having declared field significance, many of the local tests exhibiting p < α0 will have resulted from random and irreproducible fluctuations rather than physically real effects (Ventura et al. 2004; Wilks 2006). This problem is compounded in the presence of spatial correlation because these spurious “features” will tend to exhibit geographic coherence, potentially leading the analyst to overinterpret the data in an attempt to explain them.\n\n## A PRINCIPLED AND STRAIGHTFORWARD SOLUTION—CONTROLLING THE FALSE DISCOVERY RATE.\n\nThe problems just noted can be addressed by controlling FDR when analyzing the results of multiple hypothesis tests. The FDR is the statistically expected (i.e., average over analyses of hypothetically many similar testing situations) fraction of local null hypothesis test rejections (“discoveries”) for which the respective null hypotheses are actually true. An upper limit for this fraction can be controlled exactly for independent local tests (and approximately for correlated local tests), regardless of the unknown proportion N0/N of local tests having true null hypotheses. Benjamini and Hochberg (1995) first described this method, with a primary focus on medical statistics. It has become the dominant, mainstream approach to evaluation of multiple hypothesis test results, both in the statistics literature and in the scientific literature more broadly, with Google Scholar listing more than 34,000 citations of the original (Benjamini and Hochberg 1995) paper. Ventura et al. 
(2004) introduced its use for multiple hypothesis tests pertaining to gridded atmospheric data, and Wilks (2006) demonstrated its relationship to the traditional field significance framework.\n\nAlthough it is still not well known within the atmospheric sciences, the FDR method is the best available approach to analysis of multiple hypothesis test results, even when those results are mutually correlated. Its criterion of limiting the fraction of erroneously rejected null hypotheses is more relevant to scientific interpretation than is the traditional approach of limiting the probability that any given local test yields an erroneous rejection (Storey and Tibshirani 2003; Ventura et al. 2004). In particular, FDR control addresses (and, in a sense, puts a ceiling on) the probability that a rejected local null hypothesis is in fact true, whereas a p value quantifies the probability of results at least as inconsistent with the null hypothesis as the observed test statistic, under the assumption that the null hypothesis is true. The former notion is more closely aligned with common intuition, and indeed p values are commonly misinterpreted in this way (e.g., Storey and Tibshirani 2003; Jolliffe 2004; Ambaum 2010), presumably because investigators often prefer this framing of scientific answers.\n\nThe FDR procedure is similar in spirit to Walker’s approach in that it requires a higher standard (i.e., smaller p values) in order to reject local null hypotheses. The algorithm operates on the collection of p values from N local hypothesis tests pi, with i = 1, …, N, which are first sorted in ascending order. Using a standard statistical notation, these sorted p values are denoted using parenthetical subscripts, so that p(1)p(2) ≤ … ≤ p(N). Local null hypotheses are rejected if their respective p values are no larger than a threshold level",
$p^{*}_{\mathrm{FDR}}$
"that depends on the distribution of the sorted p values:",
$$p^{*}_{\mathrm{FDR}} = \max_{i=1,\ldots,N}\left[\, p_{(i)} : p_{(i)} \le (i/N)\,\alpha_{\mathrm{FDR}} \right], \tag{3}$$
"where αFDR is the chosen control level for the FDR. That is, the threshold",
$p^{*}_{\mathrm{FDR}}$
"for rejecting local null hypotheses is the largest p(i) that is no larger than the fraction of αFDR specified by i/N.\n\nThe Walker criterion [Eq. (2)] is very nearly the same as Eq. (3) if i = 1, so that the FDR procedure will be more sensitive to detecting false null hypotheses to the extent that Eq. (3) is satisfied by a p(i) with i > 1, even as the expected fraction of false detections is maintained below αFDR. In addition, the FDR procedure can be interpreted as an approach to field significance. If none of the sorted p values satisfy the inequality in Eq. (3), then none of the respective null hypotheses can be rejected, implying also nonrejection of the global null hypothesis that they compose. Furthermore the size of that global hypothesis test (i.e., the probability of rejecting a global null hypothesis if it is true), is αglobal = αFDR (Wilks 2006).\n\nEven though Eq. (3) assumes statistical independence among the local test results, the FDR procedure is (as will be illustrated in the following section) approximately valid (while being somewhat conservative) even when those results are strongly correlated, unlike the use of Eq. (1) to evaluate numbers of locally significant tests. This property greatly simplifies statistically principled evaluation of multiple hypothesis test results, since there is no need for elaborate Monte Carlo simulations. Indeed, having obtained the N local p values, the most complicated computation required is merely their sorting into ascending order so that Eq. (3) can be evaluated.\n\n## SYNTHETIC-DATA EXAMPLES.\n\n### Structure of the synthetic examples.\n\nIt is instructive to compare the multiple-testing procedures in an artificial yet relatively realistic setting, so that their properties can be evaluated in the context of a completely known data-generating process. In this section, synthetic data will be defined on the N = 3720-point grid indicated in Fig. 1. The ordinate represents the 31 latitudes from 20° to 80°N at increments of 2° and the horizontal dimension represents 360° of longitude at 3° increments, with a cyclic boundary. The four concentric thick outlines indicate regions, ranging in extent from 0.9% to 19.2% of the total number of grid points, where the local null hypotheses are not true.\n\nThe effects on the multiple-testing results of eight levels of spatial correlation of the underlying synthetic data will be investigated. Figure 2 shows the spatial autocorrelation functions for these eight levels, of the form",
$$\rho(d) = \exp\!\left(-c\,d^{2}\right), \tag{4}$$
"where d is the great-circle distance between two grid points. These eight spatial autocorrelation functions range in e-folding distance (i.e., average distance at which the data correlations drop below 1/e = 0.3679) from 0.1 × 103 km (nearly spatially independent) to 10 × 103 km (very strongly dependent). The star symbols in Fig. 2 indicate data for spatial autocorrelation of the Northern Hemisphere 500-hPa height field taken from Polyak (1996), which are closely approximated by the heavy c = 0.42 (e-folding distance = 1.54 × 103 km) curve.\n\nOne of the strengths of the FDR method is that it is applicable to collections of p values from hypothesis tests of any form. In this section the FDR method is illustrated using p values from one-sample t tests, with one local t test being computed for each of the 3720 grid points shown in Fig. 1. The underlying synthetic data are random Gaussian fields with spatial correlations governed by Eq. (4), generated using methods described in Wilks (2011, p. 499). That is, the statistical distribution of the synthetic values at each grid point is standard Gaussian—that is, having zero mean and unit variance. For each realization of 3720 local hypothesis tests, 25 of these fields were generated, yielding 3720 sample means and 3720 sample standard deviations (which are not assumed to be equal across the domain). From these quantities, the test statistics for 3720 local one-sample t tests with H0: {µ = 0} having 24 degrees of freedom at each grid point were computed. The alternative hypothesis in each case is two sided: that is, HA: {µ ≠ 0}. In experiments where some of the local null hypotheses are false, all gridpoint sample means within one of the outlines shown in Fig. 1 were increased above zero by amounts ∆µ ranging from 0.05 to 1.00. Using c = 0.42 yields spatial correlation corresponding to the magnitude typically exhibited by Northern Hemisphere 500-hPa height fields. Although the correlation function in Eq. (4) does not represent the characteristic wave structures in these fields, these are not important for the purpose of illustrating the effect of spatial correlation on the multiple testing.\n\n### Global test properties.\n\nFigure 3 illustrates the operation of the FDR procedure (diagonal lines), in contrast to the “naïve stippling” approach of accepting alternative hypotheses at any gridpoint for which a locally significant result occurs (dashed horizontal line). This figure corresponds to a particular realization that will be examined later in more detail. The simulated data were generated with c = 0.42 (realistic spatial autocorrelation), NA = 156 (4.2% of total grid points with false null hypotheses), and using the relatively large alternative-hypothesis mean ∆µ = 0.7. The figure shows the smallest 350 of the 3720 sorted p values p(i) as a function of their rank i. The dashed diagonal line indicates the threshold criterion defined by Eq. (3) using αFDR = 0.10, according to which",
$p^{*}_{\mathrm{FDR}}$
"= 0.003998 = p(150). That is, in this particular realization the local tests having the 150 smallest p values are declared to exhibit statistically significant results. Of these, 144 are correct rejections, indicated by the dots below the dashed diagonal line. The twelve circles above the dashed diagonal line represent false null hypotheses that were not rejected. The six crosses below the dashed diagonal represent true null hypotheses that were erroneously rejected, yielding an achieved FDR = 6/150 = 0.04. The inset shows a closer view of the points within the red box.\n\nThe dotted diagonal line shows the threshold from Eq. (3) when αFDR = 0.20, in which case",
$p^{*}_{\mathrm{FDR}}$
"= 0.009502 = p(183). In this case all NA = 156 false null hypotheses are detected, but at the expense of erroneously rejecting 27 true null hypotheses, yielding an achieved FDR = 27/183 = 0.15. In contrast, the naïve stippling approach of rejecting any local null hypothesis for which the p value is less than α0 = 0.05 (dashed horizontal line) detects all 156 false null hypotheses, but at the expense of erroneously rejecting 189 true null hypotheses (crosses and dots above the dashed diagonal), yielding an unacceptably large achieved FDR = 189/345 = 0.55: a majority of the nominally significant results are spurious.\n\nFigure 4 illustrates the performance of the FDR procedure in terms of achieved global test levels as a function of the degree of spatial correlation. That is, in the situation of all local null hypotheses being true, the achieved level is the probability that the global null hypothesis will be rejected [i.e., that at least one of the sorted p values will satisfy the condition in Eq. (3)], which ideally will equal αglobal = αFDR. These probabilities are approximated in Fig. 4 as the corresponding relative frequencies over 105 simulated global tests. As expected, these achieved levels are approximately correct for small spatial correlations but then decline fairly quickly and stabilize at about half the nominal levels. Thus, the FDR procedure is robust to the effects of spatial correlation, yielding a somewhat conservative global test when the spatial correlation is moderate or strong. That is, when the spatial correlation is high, the achieved FDR will be smaller (more strict) than the nominal FDR. This is consistent with prior results (Ventura et al. 2004; Wilks 2006). Figure 4 suggests that, for data grids exhibiting moderate to strong spatial correlation, approximately correct global test levels can be produced using the FDR procedure by choosing αFDR = 2αglobal.\n\nIn sharp contrast, the achieved test levels for the Livezey–Chen counting procedure, also with no adjustment for spatial correlation, are very strongly permissive. For example, using Eq. (1) and assuming spatial independence yields a requirement for at least 208 locally significant tests (5.6% of local null hypotheses rejected) for field significance with α0 = αglobal = 0.05. This criterion produces achieved global test levels of 0.0907 and 0.3517 when the e-folding distances are 0.2 and 1.54 × 103 km, respectively (results not shown in the figure). The naïve stippling interpretation that any significant local test implies field significance is even worse, as it produces an achieved global test level of unity: at least one of the 3720 local tests is virtually certain to exhibit a spurious null hypothesis rejection, regardless of the strength of the spatial correlation within the range considered in Fig. 4.\n\n### Local test interpretations.\n\nOften the primary interest will be interpretation of the locations and spatial patterns of the locally significant test results, which might be interpreted as “signal.” Reliability of these interpretations will of course be enhanced to the extent that they are minimally contaminated with erroneous rejections of true local null hypotheses (“noise”). 
Figure 5 shows false discovery rates for the FDR method with αFDR = 0.10 (red), the Livezey–Chen counting approach with α0 = αglobal = 0.05 (black), and the naïve stippling approach of rejecting any local null hypothesis whose p value is no larger than the nominal α0 = 0.05 (brown), as functions of numbers of false local null hypotheses and alternative-hypothesis magnitudes ∆µ, for the realistic e-folding distance 1.54 × 103 km. The plotted values are averages over 103 realizations, so that, for example, the quantities contributed to the averages from the particular realization shown in Fig. 3 are 6/150 = 0.04 for the FDR procedure, 189/345 = 0.55 for the naïve stippling procedure, and zero for the Livezey–Chen counting procedure because fewer than the required 365 local tests2 were significant at the 5% level (the global null hypothesis could not be rejected). As expected, the FDR procedure controls the false discovery rates very tightly. The Livezey–Chen procedure also exhibits small false discovery rates for the smallest number of false local null hypotheses, but primarily because very few global null hypotheses can be rejected regardless of the magnitude of ∆µ. For larger numbers of false local null hypotheses, the Livezey–Chen procedure yields much larger false discovery rates. Worst performance of all is exhibited by the naïve stippling procedure, for which nearly all local test rejections are incorrect when ∆µ is small, and which converges to the Livezey–Chen result for large ∆µ and NA since in these cases the Livezey–Chen procedure declares field significance in nearly all realizations.\n\nTo help visualize the foregoing more concretely, Fig. 6 shows maps for a particular realization, interpreted according to the FDR procedure with αFDR = 0.10 (Fig. 6a) and the naïve stippling approach using α0 = 0.05 (Fig. 6b). Correct local null hypothesis rejections are indicated by plus signs, failures to reject false local null hypotheses are indicated by circles, and erroneous rejections of true null hypotheses are indicated by crosses. These maps correspond to the ranked p values shown in Fig. 3, with NA = 156, ∆µ = 0.7, and e-folding distance 1.54 × 103 km. In Fig. 6a the FDR procedure fails to reject 12 of the 156 false null hypotheses (circles) but erroneously rejects only 6 true null hypotheses (crosses). The result is that the FDR procedure locates the true signal very effectively while introducing very little noise. By contrast, in Fig. 6b the naïve stippling procedure locates all 156 false null hypotheses but also erroneously indicates another 189 nominally significant grid points. The very large additional noise level in Fig. 6b would make physical interpretation of this map difficult, possibly leading an analyst to stretch his or her imagination to rationalize the many spurious local test rejections, which may appear to be physically coherent structures because of the strong spatial autocorrelation in the underlying data. Again, because the number of false null hypotheses is relatively small, the Livezey–Chen procedure would fail to reject the global null hypothesis, leading an analyst to doubt the reality of any of the local test rejections shown in Fig. 6b, even though some of the p values are extremely small.\n\n## A REAL-DATA EXAMPLE.\n\nFigure 7 shows an analysis of linear trends in annual precipitation for the period 1951–2010, modified from an original figure in Hartmann et al. (2013, p. 203). 
The underlying data are monthly precipitation values interpolated to a 5° × 5° grid from the Global Historical Climatology Network (Vose et al. 1992). The colored patches locate the 408 grid elements having at least 42 (70%) complete calendar years and at least two complete years during 1951–56 and 2005–10 (Hartmann et al. 2013). The 128 grid elements with linear trends exhibiting regression slopes that are large enough in absolute value to achieve local statistical significance at the α = 0.10 level, without considering the multiple-testing problem, have been indicated by the plus signs.\n\nThe red circles in Fig. 7 locate the 51 grid elements exhibiting linear precipitation trends that are meaningfully different from zero, assessed according to the FDR method with αFDR = 0.10. Here αFDR = α = 0.10 (the same test level as the original naïve stippling results) has been used because of the relatively weak spatial correlation of the underlying annual precipitation totals. Figure 8 shows these correlations for the pairs of colored grid elements in Fig. 7 separated by no more than 2 × 103 km and indicates an approximate e-folding decorrelation distance of 0.62 × 103 km, or about 150% of the typical grid element separation of approximately 400 km. Comparing Fig. 3, which was calculated on the basis of a 2° × 3° grid system, 150% of the typical grid separation of 250 km translates to an e-folding correlation distance of approximately 0.38 × 103 km, for which choosing αFDR = α produces only very slight test conservatism.\n\nUsing αFDR = 0.10,",
$p^{*}_{\mathrm{FDR}}$
"= p(51) = 0.01136 [Eq. (3)], so that the 51 grid elements whose local tests reject null hypotheses of zero linear trend with p values no larger than 0.01136 can be regarded as meaningful. No more than five of these are expected to be erroneous rejections of true local null hypotheses.\n\n## SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS.\n\nThe problem of simultaneously evaluating results of multiple hypothesis tests, often at a large network of grid points or other geographic locations, is widespread in meteorology and climatology. Unfortunately, the dominant approach to this problem in the literature is to naïvely examine each gridpoint test in isolation and then to report as “significant” any result for which a local null hypothesis is rejected, with no adjustment for the effects of test multiplicity on the overall result. As a consequence, language similar to the hypothetical quotation in the title of this paper is distressingly common, immediately flagging the results portrayed as almost certainly overstated. This statistically unprincipled practice should be unacceptable to reviewers and editors of scientific papers.\n\nThe necessity of correcting for the effects of simultaneous multiple test results has been known in the atmospheric sciences literature for more than a century, dating at least from Walker (1914). More recently, this problem has been cast as a metatest on the collective results of many individual test results and known as the assessment of field significance (Livezey and Chen 1983). Although the field significance approach was a very substantial advance over the naïve stippling procedure that ignores the effects of multiple testing, it suffers from several drawbacks.\n\nControlling the FDR (Benjamini and Hochberg 1995; Ventura et al. 2004; Wilks 2006) has many favorable attributes, including only modest sensitivity to spatial autocorrelation in the underlying data, intuitive interpretation, and only weak sensitivity to alternative-hypothesis magnitudes and the number of false null hypotheses.\n\nThe examples employed here were constructed without temporal autocorrelation in order to simplify the exposition. However, because the FDR method is robust to spatial autocorrelation, effects of temporal autocorrelation can be addressed with appropriate testing procedures (e.g., Katz 1982; Zwiers and Thiébaux 1987; Wilks 2011) in the individual gridpoint calculations, so that complex resampling procedures addressing both types of autocorrelation simultaneously (e.g., Wilks 1997) are unnecessary. The examples presented here were based on local t tests pertaining to sample means and tests for nonzero regression slopes. However, the method is applicable to collections of multiple hypothesis test results, regardless of the mathematical forms of those tests, so long as the individual tests operate correctly (i.e., with proportion of true null hypotheses rejected close to the nominal test level α0).\n\nPerhaps the greatest advantage of the FDR approach is that, by design, a control limit is placed on the fraction of significant gridpoint test results that are spurious, which greatly enhances the scientific interpretability of the spatial patterns of significant results. Because the FDR approach is not only effective, but is also easy and computationally fast, it should be adopted whenever the results of simultaneous multiple hypothesis tests are reported or interpreted. Its main computational demand is only that the individual gridpoint p values be sorted and examined in light of Eq. 
(3). The usual strong spatial correlation encountered in gridded atmospheric data can be accommodated by choosing αFDR = 2αglobal, as illustrated in Fig. 4. The consequence of employing this statistically principled procedure—in stark contrast to the all-too-common naïve stippling approach—is that there is much reduced scope for overstatement and overinterpretation of the results. In particular, the analyst is not tempted to construct possibly fanciful rationalizations for the many spurious local test rejections, which may appear to be physically coherent structures because of the strong spatial autocorrelation.\n\n## ACKNOWLEDGMENTS\n\nI thank the anonymous reviewers, whose comments lead to an improved presentation. This work was supported by the National Science Foundation under Grant AGS-1112200.\n\n## REFERENCES\n\n• Ambaum, M. H. P., 2010: Significance tests in climate science. J. Climate, 23, 59275932, doi:10.1175/2010JCLI3746.1.\n\n• Benjamini, Y., and Y. Hochberg, 1995: Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. Roy. Stat. Soc., 57B, 289300.\n\n• Export Citation\n• Hartmann, D. L., and Coauthors, 2013: Observations: Atmosphere and surface. Climate Change 2013: The Physical Science Basis, T. F. Stocker et al., Eds., Cambridge University Press, 159254.\n\n• Export Citation\n• Jolliffe, I. T., 2004: P stands forWeather, 59, 7779, doi:10.1256/wea.132.03.\n\n• Katz, R. W., 1982: Statistical evaluation of climate experiments with general circulation models: A parametric time series modeling approach. J. Atmos. Sci., 39, 14461455, doi:10.1175/1520-0469(1982)039<1446:SEOCEW>2.0.CO;2.\n\n• Crossref\n• Export Citation\n• Katz, R. W., 2002: Sir Gilbert Walker and a connection between El Niño and statistics. Stat. Sci., 17, 97112, doi:10.1214/ss/1023799000.\n\n• Katz, R. W., and B. G. Brown, 1991: The problem of multiplicity in research on teleconnections. Int. J. Climatol., 11, 505513, doi:10.1002/joc.3370110504.\n\n• Crossref\n• Export Citation\n• Livezey, R. E., and W. Y. Chen, 1983: Statistical field significance and its determination by Monte Carlo techniques. Mon. Wea. Rev., 111, 4659, doi:10.1175/1520-0493(1983)111<0046:SFSAID>2.0.CO;2.\n\n• Crossref\n• Export Citation\n• Mielke, P. W., K. J. Berry, and G. W. Brier, 1981: Application of multi-response permutation procedures for examining seasonal changes in monthly mean sea level pressure patterns. Mon. Wea. Rev., 109, 120126, doi:10.1175/1520-0493(1981)109<0120:AOMRPP>2.0.CO;2.\n\n• Crossref\n• Export Citation\n• Polyak, I., 1996: Computational Statistics in Climatology Oxford University Press, 358 pp.\n\n• Storey, J. D., and R. Tibshirani, 2003: Statistical significance for genomewide studies. Proc. Natl. Acad. Sci. USA, 100, 94409445, doi:10.1073/pnas.1530509100.\n\n• Crossref\n• Export Citation\n• Ventura, V., C. J. Paciorek, and J. S. Risbey, 2004: Controlling the proportion of falsely rejected hypotheses when conducting multiple tests with climatological data. J. Climate, 17, 43434356, doi:10.1175/3199.1.\n\n• Crossref\n• Export Citation\n• von Storch, H., 1982: A remark on Chervin-Schneider’s algorithm to test significance of climate experiments with GCM’s. J. Atmos. Sci., 39, 187189, doi:10.1175/1520-0469(1982)039<0187:AROCSA>2.0.CO;2.\n\n• Crossref\n• Export Citation\n• von Storch, H., and F. W. Zwiers, 1999: Statistical Analysis in Climate Research.Cambridge University Press, 484 pp.\n\n• Vose, R. S., R. L. Schmoyer, P. M. Steurer, T. C. Peterson, R. Heim, T. R. Karl, and J. 
Eischeid, 1992: The Global Historical Climatology Network: Long-term monthly temperature, precipitation, sea level pressure, and station pressure data. Environmental Sciences Division Publ. 3912, ORNL/CDIAC-53, NDP-041, 325 pp., doi:10.3334/CDIAC/cli.ndp041.\n\n• Export Citation\n• Walker, G. T., 1914: Correlation in seasonal variations of weather. III. On the criterion for the reality of relationships or periodicities. Mem. Indian Meteor. Dept., 21(9), 1315.\n\n• Export Citation\n• Wilks, D. S., 1997: Resampling hypothesis tests for autocorrelated fields. J. Climate, 10, 6583, doi:10.1175/1520-0442(1997)010<0065:RHTFAF>2.0.CO;2.\n\n• Crossref\n• Export Citation\n• Wilks, D. S., 2006: On “field significance” and the false discovery rate. J. Appl. Meteor. Climatol., 45, 11811189, doi:10.1175/JAM2404.1.\n\n• Wilks, D. S., 2011: Statistical Methods in the Atmospheric Sciences. 3rd ed. Academic Press, 676 pp.\n\n• Zwiers, F. W., 1987: Statistical considerations for climate experiments. Part II: Multivariate tests. J. Climate Appl. Meteor., 26, 477487, doi:10.1175/1520-0450(1987)026<0477:SCFCEP>2.0.CO;2.\n\n• Crossref\n• Export Citation\n• Zwiers, F. W., and H. J. Thiébaux, 1987: Statistical considerations for climate experiments. Part I: Scalar tests. J. Climate Appl. Meteor., 26, 465476, doi:10.1175/1520-0450(1987)026<0464:SCFCEP>2.0.CO;2."
http://discourse.iapct.org/t/systems-theory-and-pct/10956
"# Systems theory and PCT\n\n[From Rick Marken (940223.1030)]\n\nIt seems that I did have a close encounter of the \"systems theory\"\nkind once before. I just noticed a file in my e-mail folder\nlabelled \"systems theory\". It turns out that it is a reply to a\npost from my sister- in- law who wrote to me in about May, 1983\nas follows:\n\n(Fritjof Capra) and he mentions that it has roots in cyberntics, as in both\nvon Neuman (like systems are i/o hierarchies) and Norbert Wiener (like\nsystems are self-organizing and self-maintaining). Where does control\ntheory fit in here?\n\nit's worth, here is my reply to her. I would appreciate hearing\nwhat Cliff Joslyn has to say about it (I am intersted in Klir's\n_Facets_, by the way Cliff, but I'd appreciate a summary if possible).\n\n···\n\n-------------\nHi sis-a-la;\n\nI think of a \"system\" as a collection of components. The components\nare typically functions that transform input variables into output var-\niables. The variables can be scalers or vectors; in the systems I care\nabout, the input variables are typically scalars or vectors and the\noutput variables are always scalars. So, in the simplest case, a\nsystem component is a function -- y = f(x) which is a function that\ntransforms an input variable (x) into the output variable (y) (by the way,\nin the discussion below all variables are assumed to vary over time).\n\nWhat makes a system \"interesting\" is how the components are\nhooked together; certain hook-ups result in system behavior\nthat is quite different than the behavior of the components. This is\nwhat happens in a control system. The components of a control\nsystem are \"causal\" functions -- the variations in the value of the\noutput variable (y) depend deterministically on variations in the\nvalue of the input (x). But the behavior of a control system is NOT\ncausal; it is \"purposeful\". The output does NOT depend on the input;\nrather, the input is \"controlled\"; it is kept at a value specified by\nthe system itself and maintained at this value in the face of\ndisturbances.\n\nThe purposeful behavior of a control system results from the fact that\nthe components of the system are hooked up in a negative feedback\nrelationship. A negative feedback control system is most easily\ndemonstarted (by me -- because I am a math dunce) using linear\n(rather than arbitrary) functions. Two functional components define\na negative feedback control system:\n\n(1) o = k1 (p*-p)\n(2) p = k2 o + k3 d\n\nThis is a \"system\" of equations; but the equations could represent\nreal \"physical\" components in real physical systems (as they do in\nreal control systems -- like thermostats and people). The first\nequation represents an \"output device\" like a variable intensity\nheater. o is the output variable -- the actual time variations in heat\nfrom the heater; p is an electrical variable called the \"perceptual signal\"\nand p* is a reference signal (the signal that -- it turns out -- specifies\nthe desired value of p; it is the signal that results when you \"set\" the\nthermostat). Equation 1 says that the output of the output device is\nproportional (by a factor k1) to a signal that represents the difference\nbetween the reference and perceptual signals. Equation 1 is just an\ninput/output function -- input (p*-p) is converted into output\n(o). 
The conversion factor is k1.\n\nThe second equation also represents a physical device -- in this case\na \"sensor device\" like the thermocouple that converts heat energy into\nan electrical signal. The input to this device is the heat near the sensor;\nthis heat depends on the output of the heater (o) AND external dist-\nurbances (d), such as outdoor air temperature, people in the room, etc.\nSo the input to the sensor is net heat near the sensor which is the sum\nof heater and disturbance generated heat. The coefficients (k2 and k3)\nrepresent physical factors that determine the extent to which output\nand disturbance variables contribute to heat near the sensor. The sensor\nconverts the heat input (k2o + k3d) into an electrical signal -- the\nperceptual signal, p, which can be considered a continuous measure of\nthe heat near the sensor. So equation 2 is also an input/output function --\ninput (k2o + k3d) is converted into output (p).\n\nNow we can solve equations 1 & 2 simultaneously to learn something\nabout the behavior of the system as a whole. First, let's solve for the\noutput variable, o. The result is:\n\no = k1/(1+k1k2) p* - k1k3/(1+k1k2) d\n\nThis can be simplified by letting k2 and k3 (the physical constants) be\nlarge realtive to k1 and about the same size (this is generally true\nin real systems) . Then we get\n\no = k1p* -d\n\nSo the first thing we learn about a control system is that the output (o)\nof the system does NOT depend on the input (p) -- the output depends\non the reference signal, p* and disturbances THAT ARE NOT EVEN SENSED.\nIf the reference signal,p*, is a constant, then variations in the output\nof the system depend completely on unsensed disturbances. Note that\nthe disturbances, d, are mixed with the system's own outputs, o, to\ndetermine the actual input (eq. 2). So surprising finding number one\nabout a control system is that its output depends PRECISELY on a\nvariable (d) that the system does not even sense. Simple math --\nheavy result. Weiner and the cyberneticists, even with all their fancy\nmath, never picked up on this enormous fact. Psychologists don't WANT\nto pick up on it because doing so would mean that they would realize\nthat they have been studying an illusion. The basis of experimental\nmethodology in psychology is the assumption that what people do depends\non what happens to them; that is, it is assumed that\n\n(3) o = k1 p\n\nA basic analysis of a control system shows that, if people are control\nsystems, then equation (3) does not hold. This is why control theory\n(the real thing) is not real popular in scientific psychology circles.\n\nNow let's solve for the other variable that the system can influence;\nthe perception. If we go through the same exercise (and make the\nsame assumptions) we will find:\n\np = p*\n\nSo, in a control system, the perceptual input variable is determined by\nthe reference signal, p*, NOT by distal stimuli in the environment. This is\nsurprising because psychologists think of perception (input) as an INDE-\nPENDENT VARIABLE; something that causes behavioral outputs (see\nequation 3) . But in a negative feedback control system, perception is\nthe DEPENDENT VARIABLE; the system always acts in order to keep its\ninput perceptual variable matching the reference signal SET BY THE\nSYSTEM ITSELF. 
Of course, in artificial control systems like the thermostat,\nthe reference signal is actually set \"from outside\" -- but once that signal\nis set the the system operates autonomously; that is, it does whatever\nis necessary (what is necessary depends on disturbances) to keep its\ninput matching the reference signal. In living control systems, the\ndials that set the references are not accessible from \"outside\"; the\nreference signals for the system's inputs are \"set\" by other \"systems\"\nin the organism itself. So living organisms are VERY autonomous, meaning\nthat they cannot be \"controlled\" from outside at all. They cannot be\ncontrolled from outside because no one but the system itself has access\nto the reference signals (p*) that are the ultimate determiner of what the\nsystem \"does\" -- meaning, what inputs it controls and at what level they\nare controlled.\n\nSo a simple analysis (to do it right you would have to include dynamics\nbut the algebraic results are correct under the assumption that the\nsystem is dynamically stable) shows that a system composed of cause-\neffect components arranged in a negative feedback organization is\nNOT a cause-effect SYSTEM; it is a purposeful system. (By the way,\nthe \"negative\" in \"negative feedback\" refers to the net sign of the\nmultipliers to the variables that travel around the loop from input to\noutput; the \"feedback\" refers to that fact that the output of the system\nis one of the influences on the variable that causes that output). The\npurpose of the system is p* -- the desired input perecption. The system\nachieves its purpose (continuously) in the face of undetectable and\nunpredictable disturbances through the operation of the negative\nfeedback loop.\n\nThis was a VERY elementary introduction to perceptual control system\ntheory. The next step is to expand the system concept to form a\nsystem of control systems -- such an organization makes it possible\nfor control systems to be the source of reference signals for other\ncontrol systems. Again, the behavior of this \"system of systems\" is\n\"more than the sum of its parts\". But the behavior is still purposeful.\nOnce you start to look at this system of systems, however, you can\nstart to understand the main problem confronted by any control system\nin a system of control systems -- CONFLICT.\n\nBut we can deal with conflict in our next episode -- if you're still awake.\n\nBy the way, I'm not sure how this all fits into Fritjof Capra's stuff.\nMy completely uninformed opinion is that he is like some of the\npsychologists I know -- jumping right into very abstract, complex stuff\nin a search of some deep revelations. My intuition is that depth comes\nout of simplicity. If psychologists had been willing to look at the simple\nfacts of how people are connected to their perceptions, they would have\ndiscovered years ago that\n\no = - k d and p = p*\n\nand psychology could have become what it should be -- the study\nof autonomous, purposeful system (APSs) -- like you and that cutest of\nall APSs -- Lauren!\n\nI love ya!"
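A quick numerical check of the closed-loop algebra above, written as a short Python sketch. The gain values and the reference level are arbitrary illustrative choices (not from the original post); the two lines inside the loop are simply Eqs. (1) and (2) solved simultaneously for the steady state.

``````
# Steady-state solution of the letter's two equations:
#   (1) o = k1 * (pstar - p)
#   (2) p = k2 * o + k3 * d
k1, k2, k3 = 100.0, 1.0, 1.0      # high loop gain: k1*k2 >> 1
pstar = 5.0                       # reference signal (the "purpose")

for d in (-2.0, 0.0, 3.0):        # unsensed disturbances
    p = (k1 * k2 * pstar + k3 * d) / (1.0 + k1 * k2)
    o = k1 * (pstar - p)
    print(f"d={d:+.1f}   p={p:.3f} (tracks pstar)   o={o:.3f} (mirrors -d)")
``````

With the loop gain large, p stays essentially at p* while o varies with the unsensed disturbance, which matches the letter's closing summary (o = -kd and p = p*).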
https://stats.stackexchange.com/questions/320407/bimodal-maxent-distributions

# Bimodal MaxEnt distributions?

What kinds of constraints give rise to bimodal distributions in the Maximum Entropy formalism? Are there any known results on this topic?

This might be related to the support of the constraints. With the following notation: let X be a discrete random variable on states, let P be a probability distribution with P(X=x_i)=p_i for i=1,...,n where n=length(states), and let H(X)=-sum(p_i*log(p_i) for i=1,...,n) be the entropy of X.

Define constraints by:

A_j: R^n -> R

as a column vector of length n, j=1,...,m, and demand that the conditions

A_j %*% X == bvec[j] for j=1,...,m

are satisfied.

Now if the constraints are supported on different subsets of states, a bimodal distribution may arise. For example:

``````
n <- 40
p <- 0.5
mu <- n*p
A_1 <- c(1:n, rep(0,n))
A_2 <- c((1:n-mu)^2, rep(0,n))
A_3 <- c(rep(0,n), 1:n)
A_4 <- c(rep(0,n), (1:n-mu)^2)
bvec <- c(mu, n*p*(1-p), mu, n*p*(1-p))/2
``````

The first two conditions yield an approximately binomial-shaped distribution on 1:40 and the second two conditions yield one on 41:80. The maximum entropy distribution on states=1:80 then has two modes:
[Figure: the resulting maximum entropy distribution on states 1:80, showing two well-separated modes.]
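For readers who want to reproduce this without the original R setup, here is a rough Python sketch of the same construction. The solver choice (SLSQP), the variable names, and the clipping used to avoid log(0) are my own assumptions, not part of the question; it maximizes entropy over the 80 states subject to the four linear constraints plus normalization.

``````
import numpy as np
from scipy.optimize import minimize

n, p = 40, 0.5
mu, var = n * p, n * p * (1 - p)
idx = np.arange(1, n + 1)

# Constraint rows mirror A_1..A_4 above: mean/variance conditions applied
# separately to the first 40 and the last 40 states.
A = np.zeros((4, 2 * n))
A[0, :n] = idx
A[1, :n] = (idx - mu) ** 2
A[2, n:] = idx
A[3, n:] = (idx - mu) ** 2
b = np.array([mu, var, mu, var]) / 2.0

def neg_entropy(q):
    q = np.clip(q, 1e-12, None)       # avoid log(0)
    return float(np.sum(q * np.log(q)))

cons = [{"type": "eq", "fun": lambda q: q.sum() - 1.0}]
cons += [{"type": "eq", "fun": lambda q, a=a, bb=bb: a @ q - bb} for a, bb in zip(A, b)]

res = minimize(neg_entropy, np.full(2 * n, 1.0 / (2 * n)),
               method="SLSQP", bounds=[(0.0, 1.0)] * (2 * n),
               constraints=cons, options={"maxiter": 500})
q_maxent = res.x   # one mode within states 1..40 and another within 41..80
``````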
"I believe this generalizes to continuous random variables."
https://blog.tri-lib.com/2012/02/molas-isolation-framework/

## Moles: a .NET Dependency Isolation Framework
Moles is a .NET dependency isolation framework developed by Microsoft Research. It lets you replace methods of existing classes with custom delegate methods, so that dependencies can be detached and unit testing becomes easier. In features and usage Moles is similar to the open-source mocking framework Moq, but Moles can do some things that Moq cannot, such as replacing static methods, erasing static constructors, and bypassing access restrictions. Moles is especially useful for unit testing websites built with ASP.NET WebForms and programs that depend on third-party class libraries.

## Usage example

``````namespace MoleDomain
{
    public static class Y2KChecker
    {
        public static void Check()
        {
            if (DateTime.Now == new DateTime(2000, 1, 1))
                throw new ApplicationException("y2kbug!");
        }
    }
}
``````

``````[TestMethod]
[ExpectedException(typeof(ApplicationException))]
public void Test()
{
    Y2KChecker.Check();
}
``````

``````[TestMethod]
[ExpectedException(typeof(ApplicationException))]
[HostType("Moles")]
public void Test()
{
    // replace the original return value with a delegate
    MDateTime.NowGet = () => new DateTime(2000, 1, 1);

    Y2KChecker.Check();
}
``````

``````Test method MoleDomain.Test.Y2KCheckerTest.Test threw exception Microsoft.Moles.Framework.Moles.MoleNotInstrumentedException, but exception System.ApplicationException was expected. Exception message: Microsoft.Moles.Framework.Moles.MoleNotInstrumentedException: The System.DateTime System.DateTime.get_Now() was not instrumented
To resolve this issue, add the following attribute in the test project:

using Microsoft.Moles.Framework;
[assembly: MoledType(typeof(System.DateTime))]
``````

``````using Microsoft.Moles.Framework;

[assembly: MoledType(typeof(System.DateTime))]
namespace MoleDomain.Test
{
.....
}
``````

# Mole basics

• Static methods are represented as static properties of the mole type

• Instance methods of a class are represented as static properties of the nested AllInstances type

• Class constructors are represented as static properties of the mole type named Constructor

``````public static class MyClass {
    public static int MyMethod() {
        ...
    }
}
``````

``````MMyClass.MyMethod = () => 5;
``````

``````public class MMyClass {
    public static Func<int> MyMethod {
        set {
            ...
        }
    }
}
``````

``````public class MyClass {
    public int MyMethod() {
        ...
    }
}
``````

Mole a method so that every instance returns 5:

``````MMyClass.AllInstances.MyMethod = _ => 5;
``````

``````public class MMyClass : MoleBase {
    public static class AllInstances {
        public static Func<MyClass, int> MyMethod {
            set {
                ...
            }
        }
    }
}
``````

``````public class MyClass {
    public int MyMethod() {
        ...
    }
}
``````

``````var myClass1 = new MMyClass()
{
    MyMethod = () => 5
};
var myClass2 = new MMyClass() { MyMethod = () => 10 };
``````

``````public class MMyClass : MoleBase {
    public Func<int> MyMethod {
        set {
            ...
        }
    }
    public MyClass Instance {
        get {
            ...
        }
    }
}
``````

``````var mole = new MMyClass();
var instance = mole.Instance;
``````

A mole instance can also be implicitly converted to the original type, so you can assign it directly to a variable of the original type, as follows:

``````var mole = new MMyClass();
MyClass instance = mole;
``````

``````public class MyClass {
    public MyClass(int value) {
        this.Value = value;
    }
    ...
}
``````

``````MMyClass.ConstructorInt32 = (@this, value) => {
    var mole = new MMyClass(@this) {
        ValueGet = () => -5
    };
};
``````

``````MMyClass.ConstructorInt32 = (@this, value) => {
    ...
    MMyClass.ConstructorInt32 = null;
};
``````

``````public MMyClass() { }
public MMyClass(MyClass instance) : base(instance) { }
``````

``````public class MMyClass : MoleBase
{
    public static Action<MyClass, int> ConstructorInt32 {
        set {
            ...
        }
    }
    public MMyClass() { }
    public MMyClass(MyClass instance) : base(instance) { }
    ...
}
``````

``````public abstract class Base {
    public int MyMethod() {
        ...
    }
}
public class Child : Base {
}
``````

``````var child = new MChild();
new MBase(child) { MyMethod = () => 5 };
``````

``````public class MChild : MoleBase {
    public MChild() { }
    public MChild(Child child)
        : base(child) { }
}

public class MBase : MoleBase {
    public MBase(Base target) { }
    public Func<int> MyMethod
    { set { ... } }
}
``````

``````[assembly: MolesEraseStaticConstructor(typeof(MyStatic))]
class MyStatic {
    static MyStatic() {
        throw new Exception(); // needs moling…
    }
}
``````

``````[assembly: MolesEraseFinalizer(typeof(MyFinalizer))]
class MyFinalizer {
    ~MyFinalizer() {
        throw new Exception(); // needs moling…
    }
}
``````

``````public class MyClass : IEnumerable {
    public IEnumerator GetEnumerator() {
        ...
    }
    ...
}
``````

``````var myClass = new MMyClass();
myClass.Bind(new int[] { 1, 2, 3 });
``````

``````public class MMyClass : MoleBase {
    public MMyClass Bind(IEnumerable target) {
        ...
    }
}
``````

## Drawbacks of Moles

The drawbacks of Moles are that tests run rather slowly, and that the test code only passes on a machine set up for it: if a teammate pulls the code and wants to run the unit tests, they must also install the Moles environment.

## References

http://research.microsoft.com/en-us/projects/pex/molesmanual.pdf

http://research.microsoft.com/en-us/projects/pex/documentation.aspx
http://www.hackingtheuniverse.com/science/history-of-science-and-technology/era-second-industrial/1859-riemann-hypothesis | [
"## 1859 – Riemann hypothesis\n\nThe Riemann hypothesis is named after Bernhard Riemann, who worked on a technique to predict the distribution of prime numbers. Mersenne and Fermat worked on formulas that can predict *some* prime numbers but not all of them. In 1737 Euler followed up on their work, eventually showing that the summation series of the reciprocals of prime numbers is divergent and therefore the number of primes is infinite. The key function he used in this process is now known as the “zeta function”.\n\nLegendre and Gauss worked on the same problem using a log function and produced the “Prime Number Theorem” which says that for large numbers the density of prime numbers nearby is 1/log(n). Riemann then expanded the zeta function to include complex numbers (numbers with both a real and imaginary part) and made a conjecture about the distribution of zeros. This is known as the Riemann Hypothesis.\n\n1826 – Riemann\n1859 – “On the Number of Primes Less Than a Given Magnitude” – Riemann"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93955714,"math_prob":0.9577739,"size":1033,"snap":"2019-51-2020-05","text_gpt3_token_len":237,"char_repetition_ratio":0.15451895,"word_repetition_ratio":0.0,"special_character_ratio":0.21103582,"punctuation_ratio":0.05464481,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99784094,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T12:25:21Z\",\"WARC-Record-ID\":\"<urn:uuid:8ba8a6d4-ad8a-4c1d-9b83-378b28ae5e8e>\",\"Content-Length\":\"46610\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dda6e7bd-cc44-4a10-be8d-233ec2abd0b9>\",\"WARC-Concurrent-To\":\"<urn:uuid:73f13815-88e7-4896-b90d-28353c556a02>\",\"WARC-IP-Address\":\"72.47.224.97\",\"WARC-Target-URI\":\"http://www.hackingtheuniverse.com/science/history-of-science-and-technology/era-second-industrial/1859-riemann-hypothesis\",\"WARC-Payload-Digest\":\"sha1:7MTWOMEXRZSPTUCEBFPINKYV7TERW2SD\",\"WARC-Block-Digest\":\"sha1:6W4DA53WCXZOJLLM25Y43VOJIFSFANWZ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250603761.28_warc_CC-MAIN-20200121103642-20200121132642-00450.warc.gz\"}"} |
https://sorabatake.jp/en/12531/ | [
"Notebooks\n\n# Tellus Takes on the Challenge of Calculating Land Coverage Using L2.1 Processing Data from PALSAR-2\n\nTellus Takes on the Challenge of Calculating Land Coverage Using L2.1 Processing Data from PALSAR-2\n\nTellus has released a function to distribute original data from the satellite data provider in addition to the tiled data that can be easily checked with Tellus OS (on the browser).\n\nOriginal data is a difficult thing to handle without knowledge of satellite data, but if you know how to use it, the range of what you can do will expand dramatically.\nThis article will explain how PALSAR-2 L2.1 processing data available on Tellus is gathered, and how the land coverage is calculated using multifaceted polarized data. The second half will evaluate the results of machine learning.\n\n## What is PALSAR-2's Process Level L2.1?\n\n### L2.1is an enhanced image that has been processed with resolution and ortho-correction\n\nThere are several processing levels for SAR images. L2.1 is the high level base product received after performing resolution and ortho-correction to an image. While the information is limited to enhanced information received by electromagnetic waves, the image produced is similar to an optic image, meaning this processing level is capable of being used for various applications such as land coverage detection, vegetation classification, and object detection.\n\n### The Key is in the Polarized Data\n\nThere are a maximum of 4 types of data determined by the type of electro-magnetic wave correspondence called polarized waves (HH, HV, VH, VV). They indicate whether it is a vertical polarization (V) or a horizontal polarization (H) with respect to transmission (1st letter) and reception (2nd letter) respectively. This not only enhances the reception, but makes it possible to analyze images according to their polarization traits.\n\n## Let's Gather PALSAR-2 L2.1 Processing Data\n\nSee Tellusの開発環境でAPI引っ張ってみた (Use Tellus API from the development environment) for how to use the development environment (JupyterLab) on Tellus.\n\nIn order to download PALSAR-2’s L2.1 products, you need to enter the product ID. All you need to do is look up the product you like and enter its product ID. We will try looking up a product that has multifaceted polarized data recorded on it.\n\n``````import requests\n\nTOKEN = \"自分のAPIトークンを入れてください\"\n\ndef search_palsar2_l21(params={}, next_url=''):\nif len(next_url) > 0:\nurl = next_url\nelse:\nurl = 'https://file.tellusxdp.com/api/v1/origin/search/palsar2-l21'\n\"Authorization\": \"Bearer \" + TOKEN\n}\n\nif not r.status_code == requests.codes.ok:\nr.raise_for_status()\nreturn r.json()\n\ndef main():\nfor k in range(100):\nif k == 0:\nret = search_palsar2_l21({'after':'2017-1-01T15:00:00'})\nelse:\nret = search_palsar2_l21(next_url = ret['next'])\nfor i in ret['items']:\nif len(i['polarisations']) > 1:\nprint(i['dataset_id'],i['polarisations'])\nif 'next' not in ret:\nbreak\n\nif __name__==\"__main__\":\nmain()\n``````\n\nThrough using this program, products from after 2017 with more than two types of polarized data can be produced. 
Choose one of those product IDs and download the data set included in the product with the code below.\n\n``````import os, requests\n\nTOKEN = \"自分のAPIトークンを入れてください\"\n\ndef publish_palsar2_l21(dataset_id):\nurl = 'https://file.tellusxdp.com/api/v1/origin/publish/palsar2-l21/{}'.format(dataset_id)\n\"Authorization\": \"Bearer \" + TOKEN\n}\nif not r.status_code == requests.codes.ok:\nr.raise_for_status()\nreturn r.json()\n\n\"Authorization\": \"Bearer \" + TOKEN\n}\nif not r.status_code == requests.codes.ok:\nr.raise_for_status()\nif os.path.exists(dir) == False:\nos.makedirs(dir)\nwith open(os.path.join(dir,name), \"wb\") as f:\nfor chunk in r.iter_content(chunk_size=1024):\nf.write(chunk)\n\ndef main():\ndataset_id = 'ALOS2236422890-181009' #選んだプロダクトIDを入力\nprint(dataset_id)\npublished = publish_palsar2_l21(dataset_id)\nfor file in published['files']:\nfile_name = file['file_name']\nfile_url = file['url']\nprint(file_name)\nprint('done')\n\nif __name__==\"__main__\":\nmain()\n``````\n\nThat is all you need to do to gather the data. A folder with the same name as the product ID will be made in the Current Directory containing the data inside of it.\n\n## Looking at Land Coverage Data with Polarized Data\n\nLet’s make the polarized data easy to see. This time we use data with HH and HV polarizations, so let’s show the true colors of the data using a combination of the two and their differences. By doing this, we can look up if there is a correlation with different kinds of land coverage for each of the polarized traits\n\n``````from osgeo import gdal, gdalconst, gdal_array\nimport os\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef normalization(arr):\nmin = np.amin(arr)\nmax = np.amax(arr)\nret = (arr-min)/(max-min)\nreturn ret\n\ndataset_id = 'ALOS2236422890-181009'\noffx = 14000\noffy = 16000\ncols = 5000\nrows = 5000\n\nHH = tif.GetRasterBand(1).ReadAsArray(offx, offy, cols, rows)\nHV = tif.GetRasterBand(1).ReadAsArray(offx, offy, cols, rows)\nHH = 10*np.log(HH) - 83.0\nHV = 10*np.log(HV) - 83.0\nHH = normalization(HH).reshape(rows*cols,1)\nHV = normalization(HV).reshape(rows*cols,1)\nHHHV = normalization(HH-HV).reshape(rows*cols,1)\nimg = np.hstack((HH,HV,HHHV)).reshape(rows,cols,3)\n\nplt.figure()\nplt.imshow(img, vmin=0, vmax=1)\nplt.imsave('sample.jpg', img)\n``````\n\nThis is what it looks like if you compare the image we get to one taken by AVNIR-2 on Tellus.\n\nWe have chosen an image of the area around Suwako Lake as an example. HH is red and HV is green, and you can see that the city area has a stronger HH reading, and the forest area has much more green. Furthermore, Suwa Lake has weaker reflection levels, making it darker. You can see how this can be used to categorize land coverage to an extent even with only two types of polarized waves.\n\n## Predicting Land Coverage with Machine Learning\n\nWe used machine learning to show how this can be used. 
Using the results of the land-coverage classification performed by JAXA, we built a prediction model that uses machine learning to classify land coverage.\n\nThis is what the image used for the label data looks like.\n\nThere are actually ten different classes in this label data, but since the resolution of the label data used here is on the lower end at 30 m, and there are only three data points (HH, HV, HH-HV), we reduced it to just four categories: "water area", "city", "rice paddy field" and "other."\n\n## Data set/How to use CNN\n\nThe satellite image and the label image are both formatted as GeoTIFF, which makes it possible to get the longitude and latitude of each pixel in both images. Because the resolutions differ, however, it is necessary to over-sample the label data, which has the lower resolution. The steps to do that are laid out below (a short sketch of this resampling step is given at the end of this article).\n\n(1) Prepare an empty two-dimensional array with the same number of pixels as the satellite image.\n(2) For each pixel in the satellite image, search for the pixel of the label image that contains its longitude and latitude.\n(3) Save the label value of that cell into the empty array prepared in (1).\n\nThat is how you create label data with the same resolution as the satellite image. Scanning all of the pixels with a for-loop does not take long, so anyone can use the steps above to create the data.\nThe learning model is a CNN constructed with Keras and TensorFlow, as referred to in another Sorabatake article. We created a data set of 621 images of 200 x 200 pixels, cut out of a 5400 x 4600 pixel image of Suwa Lake, with the 30 m resolution labels over-sampled to match the resolution of the satellite image. The data set is split into training, validation, and test samples at a ratio of 7:2:1, respectively.\nWe saved this data in a Jupyter Notebook and performed the machine learning there, so the files are not something we can hand to a third party at the start of the training phase. Specifically, the arrays below were saved as NumPy arrays for training. The numbers in parentheses are the shapes of the NumPy arrays. The ratio and the pixel size can easily be changed.\n\n### Training Samples:\n\n``````X_train(435, 200, 200, 3)\n\nY_train(435, 200, 200, 4)``````\n\n### Validation Samples:\n\n``````X_train(124, 200, 200, 3)\n\nY_train(124, 200, 200, 4)``````\n\n### Test Samples:\n\n``````X_train(63, 200, 200, 3)\n\nY_train(63, 200, 200, 4)``````\n\nThe constructed CNN is as follows. 
After creating the data set, try performing it.\n\n``````import tensorflow as tf\nimport keras\nfrom keras import backend as K\nfrom keras.layers import Input, Dropout, Dense, Dropout, Flatten\nfrom keras.layers import Conv2D, MaxPooling2D, UpSampling2D, Activation\nfrom keras.layers.normalization import BatchNormalization\nfrom keras.models import Model\nfrom keras.activations import softmax\nfrom keras.engine.topology import Layer\n\nclass MySoftmax(Layer):\ndef __init__(self, **kwargs):\nsuper(MySoftmax, self).__init__(**kwargs)\ndef call(self, x):\nreturn(softmax(x, axis=3))\n\n# parameter\nimg_size = 200\nnpol = 3\nnclass = 4\n\n# create model\ninput_shape = (bsize, bsize, npol)\ninput = Input(shape=input_shape)\nact = 'relu'\nnet = Conv2D(filters=64, kernel_size=(3,3), strides=(1,1), padding=\"same\", kernel_initializer=\"he_normal\")(input)\nnet = BatchNormalization()(net)\nnet = Activation(act)(net)\nnet = Conv2D(filters=64, kernel_size=(3,3), strides=(2,2), padding=\"same\", kernel_initializer=\"he_normal\")(net)\nnet = BatchNormalization()(net)\nnet = Activation(act)(net)\nnet = UpSampling2D((2,2))(net)\nnet = Conv2D(filters=64, kernel_size=(3,3), strides=(1,1), padding=\"same\", kernel_initializer=\"he_normal\")(net)\nnet = BatchNormalization()(net)\nnet = Activation(act)(net)\nnet = Conv2D(filters=64, kernel_size=(3,3), strides=(1,1), padding=\"same\", kernel_initializer=\"he_normal\")(net)\nnet = BatchNormalization()(net)\nnet = Activation(act)(net)\noutput = Conv2D(filters=nclass, kernel_size=(3,3), strides=(1,1), padding=\"same\", kernel_initializer=\"he_normal\")(net)# class\noutput = MySoftmax()(output)\nmodel = Model(input, output)\n\n## optimizer\nmodel.compile(loss=\"sparse_categorical_crossentropy\", optimizer=opt)\nprint(model.summary())\n\n##実際のデータを用いた学習(実行するには、事前にデータの作成が必要です)\nmodel.fit(X_train, Y_train, batch_size=16, epochs=24, validation_data=(X_val, Y_val))\n``````"
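As promised above, here is a rough sketch of the label over-sampling in steps (1)-(3). This block is an illustrative addition, not code from the original article: the file names, the use of the GDAL geotransforms (assuming north-up rasters with no rotation), and the nearest-neighbour lookup are assumptions.

``````# Illustrative sketch of steps (1)-(3): resample a coarse label raster onto the
# satellite image grid by nearest-neighbour lookup via the GeoTIFF geotransforms.
# File names and band layout are assumptions, not taken from the article.
import numpy as np
from osgeo import gdal

sat = gdal.Open('satellite.tif')      # fine-resolution image (assumed name)
lab = gdal.Open('labels.tif')         # coarse 30 m label raster (assumed name)

sat_gt, lab_gt = sat.GetGeoTransform(), lab.GetGeoTransform()
labels = lab.GetRasterBand(1).ReadAsArray()

rows, cols = sat.RasterYSize, sat.RasterXSize
resampled = np.zeros((rows, cols), dtype=labels.dtype)    # step (1)

for r in range(rows):                  # step (2): locate each satellite pixel
    for c in range(cols):
        # pixel centre -> map coordinates (rotation terms gt[2], gt[4] ignored)
        x = sat_gt[0] + (c + 0.5) * sat_gt[1]
        y = sat_gt[3] + (r + 0.5) * sat_gt[5]
        # map coordinates -> label raster indices
        lc = int((x - lab_gt[0]) / lab_gt[1])
        lr = int((y - lab_gt[3]) / lab_gt[5])
        if 0 <= lr < labels.shape[0] and 0 <= lc < labels.shape[1]:
            resampled[r, c] = labels[lr, lc]              # step (3)
``````

In practice one would vectorize this rather than loop, but the loop mirrors the three steps in the text.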
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7921151,"math_prob":0.9360664,"size":12381,"snap":"2020-45-2020-50","text_gpt3_token_len":3180,"char_repetition_ratio":0.10131696,"word_repetition_ratio":0.058126412,"special_character_ratio":0.25329134,"punctuation_ratio":0.14394905,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98088753,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-25T20:05:00Z\",\"WARC-Record-ID\":\"<urn:uuid:f7746df4-4cb6-4438-88d7-3f775e90ecf4>\",\"Content-Length\":\"54732\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c116ba1-54ce-465d-b388-5f374e372c01>\",\"WARC-Concurrent-To\":\"<urn:uuid:fac339ad-84a7-45d9-8633-10e967c3f4cd>\",\"WARC-IP-Address\":\"133.242.215.167\",\"WARC-Target-URI\":\"https://sorabatake.jp/en/12531/\",\"WARC-Payload-Digest\":\"sha1:ABDIPBZWQNGLVN2E6KDDECVKVIOVHSTX\",\"WARC-Block-Digest\":\"sha1:N7EKYPG5UYHYEEPYFTNJK3IOVJZ22JMG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141184123.9_warc_CC-MAIN-20201125183823-20201125213823-00215.warc.gz\"}"} |
https://fr.mathworks.com/matlabcentral/cody/solutions/1972738 | [
"Cody\n\n# Problem 44221. Check availability of a number in an array\n\nSolution 1972738\n\nSubmitted on 12 Oct 2019 by Nikolaos Nikolaou\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1 Pass\nA = [1 2 3 4 5 7 8 9]; n = 6; y_correct = 1; assert(isequal(MemberOfArray(A,n),y_correct))\n\nm = 0 y = 1\n\n2 Pass\nA = [1 2 3 4 5]; n = 5; y_correct = 0; assert(isequal(MemberOfArray(A,n),y_correct))\n\nm = 1 y = 0\n\n3 Pass\nA = [1 2 7 8 9]; n = 1; y_correct = 0; assert(isequal(MemberOfArray(A,n),y_correct))\n\nm = 1 y = 0"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.54335386,"math_prob":0.99869007,"size":613,"snap":"2019-43-2019-47","text_gpt3_token_len":235,"char_repetition_ratio":0.13300492,"word_repetition_ratio":0.19491525,"special_character_ratio":0.41272432,"punctuation_ratio":0.14184397,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9930288,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-17T02:24:08Z\",\"WARC-Record-ID\":\"<urn:uuid:cbf248d2-9981-4222-8a63-5d2d050ac4d0>\",\"Content-Length\":\"72815\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:60717653-02be-49ff-a6e0-ad126caf340e>\",\"WARC-Concurrent-To\":\"<urn:uuid:38ce099d-beed-40e7-ad11-e970f86b50ff>\",\"WARC-IP-Address\":\"184.25.198.13\",\"WARC-Target-URI\":\"https://fr.mathworks.com/matlabcentral/cody/solutions/1972738\",\"WARC-Payload-Digest\":\"sha1:27447P2Q3YY2BMQ656XE3O3GBDHPJ4VU\",\"WARC-Block-Digest\":\"sha1:4YPRA4YOCE77ZPLZ6WGS4AS24ZDNSTRQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668782.15_warc_CC-MAIN-20191117014405-20191117042405-00481.warc.gz\"}"} |
https://dspace.kaist.ac.kr/handle/10203/42206 | [
"#### (A) new mixed finite element space over skew hexahedral grids = 3차원 사각격자에서 새로운 혼합요소\n\nCited 0 time in",
null,
"Cited 0 time in",
null,
"• Hit : 389\nThe mixed finite element method has an advantage of finding the approximate pressure and velocity simultaneously. In order to use the mixed finite element method, we construct finite dimensional space of H(div;Ω) and $L^2$ (Ω), respectively. In general, we use triangular grids and rectangular grids for constructing the finite dimensional space in two dimension and use tetrahedral grids and hexahedral grids in three dimension. But we consider a new grid in three dimension that is called a skew hexahedral grid. In this paper, we introduce a new finite element and the finite dimensional space $V_h \\times W_h$ of H(div;Ω) $\\times L^2$(Ω).\nKawk, Do-Youngresearcher곽도영researcher\nDescription\n한국과학기술원 : 수리과학과,\nPublisher\n한국과학기술원\nIssue Date\n2009\nIdentifier\n308739/325007 / 020073272\nLanguage\neng\nDescription\n\n학위논문(석사) - 한국과학기술원 : 수리과학과, 2009.2, [ v, 20 p. ]\n\nKeywords\n\nskew; hexahedral grids; mixed finite element space; 비스듬한 사각격자; 혼합법; skew; hexahedral grids; mixed finite element space; 비스듬한 사각격자; 혼합법\n\nURI\nhttp://hdl.handle.net/10203/42206",
null,
""
] | [
null,
"https://dspace.kaist.ac.kr/image/common/webofscience.png",
null,
"https://dspace.kaist.ac.kr/image/common/scopus.png",
null,
"https://api.qrserver.com/v1/create-qr-code/",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.50332546,"math_prob":0.86208636,"size":281,"snap":"2021-43-2021-49","text_gpt3_token_len":126,"char_repetition_ratio":0.10108303,"word_repetition_ratio":0.24390244,"special_character_ratio":0.26334518,"punctuation_ratio":0.29508197,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9764365,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-04T19:13:00Z\",\"WARC-Record-ID\":\"<urn:uuid:63c97c09-18ba-47e0-b58f-6ed7c48b9517>\",\"Content-Length\":\"24771\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ef1c17d7-f72c-4055-bbf2-9a849083a874>\",\"WARC-Concurrent-To\":\"<urn:uuid:52b9eebd-48d2-4be8-ba61-4e8fb877f6cf>\",\"WARC-IP-Address\":\"143.248.118.80\",\"WARC-Target-URI\":\"https://dspace.kaist.ac.kr/handle/10203/42206\",\"WARC-Payload-Digest\":\"sha1:LG6SNM7OM6T3RVZAYP253FXWBQWPZPVZ\",\"WARC-Block-Digest\":\"sha1:EKFZAIIV5YF233B5QHRGVTFDQY5GQIM6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363006.60_warc_CC-MAIN-20211204185021-20211204215021-00396.warc.gz\"}"} |
https://www.convert-measurement-units.com/convert+Megawatt+hour+to+British+thermal+unit.php | [
" Convert MWh to Btu (Megawatt hour to British thermal unit)\n\n## Megawatt hour into British thermal unit\n\nMeasurement Categorie:\n\n Original value: Original unit: Attojoule [aJ]British thermal unit [Btu]Calories [cal]Electronvolt [eV]ErgExajoule [EJ]Femtojoule [fJ]Foot-poundal [ftpdl]Foot-pound force [ftlbf]Gigaelectronvolt [GeV]Gigajoule [GJ]Gigawatt hour [GWh]Hartree [Ha]Inch-pound force [inlbf]Joule [J]Kilocalories [kcal]Kiloelectronvolt [keV]Kilojoule [kJ]Kilowatt hour [kWh]Kilowatt second [kWs]Litre-atmosphere [lATM]Megaelectronvolt [MeV]Megajoule [MJ]Megawatt hour [MWh]Microjoule [µJ]Millijoule [mJ]Million BTU [MMBtu]Nanojoule [nJ]N·mPetajoule [PJ]Picojoule [pJ]QuadRydberg [Ry]Terajoule [TJ]Terawatt hour [TWh]Therm [thm]Thousand BTU [MBtu]Watt hour [Wh]Watt second [Ws] Target unit: Attojoule [aJ]British thermal unit [Btu]Calories [cal]Electronvolt [eV]ErgExajoule [EJ]Femtojoule [fJ]Foot-poundal [ftpdl]Foot-pound force [ftlbf]Gigaelectronvolt [GeV]Gigajoule [GJ]Gigawatt hour [GWh]Hartree [Ha]Inch-pound force [inlbf]Joule [J]Kilocalories [kcal]Kiloelectronvolt [keV]Kilojoule [kJ]Kilowatt hour [kWh]Kilowatt second [kWs]Litre-atmosphere [lATM]Megaelectronvolt [MeV]Megajoule [MJ]Megawatt hour [MWh]Microjoule [µJ]Millijoule [mJ]Million BTU [MMBtu]Nanojoule [nJ]N·mPetajoule [PJ]Picojoule [pJ]QuadRydberg [Ry]Terajoule [TJ]Terawatt hour [TWh]Therm [thm]Thousand BTU [MBtu]Watt hour [Wh]Watt second [Ws] numbers in scientific notation\n\nhttps://www.convert-measurement-units.com/convert+Megawatt+hour+to+British+thermal+unit.php\n\n# Convert Megawatt hour to British thermal unit (MWh to Btu):\n\n1. Choose the right category from the selection list, in this case 'Energy'.\n2. Next enter the value you want to convert. The basic operations of arithmetic: addition (+), subtraction (-), multiplication (*, x), division (/, :, ÷), exponent (^), brackets and π (pi) are all permitted at this point.\n3. From the selection list, choose the unit that corresponds to the value you want to convert, in this case 'Megawatt hour [MWh]'.\n4. Finally choose the unit you want the value to be converted to, in this case 'British thermal unit [Btu]'.\n5. Then, when the result appears, there is still the possibility of rounding it to a specific number of decimal places, whenever it makes sense to do so.\n\nWith this calculator, it is possible to enter the value to be converted together with the original measurement unit; for example, '354 Megawatt hour'. In so doing, either the full name of the unit or its abbreviation can be usedas an example, either 'Megawatt hour' or 'MWh'. Then, the calculator determines the category of the measurement unit of measure that is to be converted, in this case 'Energy'. After that, it converts the entered value into all of the appropriate units known to it. In the resulting list, you will be sure also to find the conversion you originally sought. Alternatively, the value to be converted can be entered as follows: '90 MWh to Btu' or '40 MWh into Btu' or '5 Megawatt hour -> British thermal unit' or '23 MWh = Btu' or '29 Megawatt hour to Btu' or '97 MWh to British thermal unit' or '63 Megawatt hour into British thermal unit'. For this alternative, the calculator also figures out immediately into which unit the original value is specifically to be converted. Regardless which of these possibilities one uses, it saves one the cumbersome search for the appropriate listing in long selection lists with myriad categories and countless supported units. 
All of that is taken over for us by the calculator and it gets the job done in a fraction of a second.\n\nFurthermore, the calculator makes it possible to use mathematical expressions. As a result, not only can numbers be reckoned with one another, such as, for example, '(26 * 92) MWh'. But different units of measurement can also be coupled with one another directly in the conversion. That could, for example, look like this: '354 Megawatt hour + 1062 British thermal unit' or '27mm x 5cm x 18dm = ? cm^3'. The units of measure combined in this way naturally have to fit together and make sense in the combination in question.\n\nIf a check mark has been placed next to 'Numbers in scientific notation', the answer will appear as an exponential. For example, 5.056 790 077 44×1022. For this form of presentation, the number will be segmented into an exponent, here 22, and the actual number, here 5.056 790 077 44. For devices on which the possibilities for displaying numbers are limited, such as for example, pocket calculators, one also finds the way of writing numbers as 5.056 790 077 44E+22. In particular, this makes very large and very small numbers easier to read. If a check mark has not been placed at this spot, then the result is given in the customary way of writing numbers. For the above example, it would then look like this: 50 567 900 774 400 000 000 000. Independent of the presentation of the results, the maximum precision of this calculator is 14 places. That should be precise enough for most applications.\n\n## How many British thermal unit make 1 Megawatt hour?\n\n1 Megawatt hour [MWh] = 3 412 141.633 127 9 British thermal unit [Btu] - Measurement calculator that can be used to convert Megawatt hour to British thermal unit, among others."
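As a quick sanity check of the quoted factor (an added note, not from the original page; it assumes the site uses the international table Btu of exactly 1055.05585262 J):

``````# Reproduce the quoted MWh -> Btu factor from first principles.
BTU_IT_IN_JOULES = 1055.05585262   # 1 Btu (international table), exact by definition
MWH_IN_JOULES = 3.6e9              # 1 MWh = 3.6 GJ

btu_per_mwh = MWH_IN_JOULES / BTU_IT_IN_JOULES
print(f"1 MWh = {btu_per_mwh:,.7f} Btu")   # ~3,412,141.6331279 Btu
``````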
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8510008,"math_prob":0.89545697,"size":3694,"snap":"2020-34-2020-40","text_gpt3_token_len":861,"char_repetition_ratio":0.13279133,"word_repetition_ratio":0.0140625,"special_character_ratio":0.25284246,"punctuation_ratio":0.12413793,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9866651,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-10T01:54:40Z\",\"WARC-Record-ID\":\"<urn:uuid:1345a617-c15e-4334-9411-46f0f7f216b1>\",\"Content-Length\":\"60592\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bae719de-ca40-4bef-a8cc-d71b2d7c1251>\",\"WARC-Concurrent-To\":\"<urn:uuid:c27f7c7f-e220-4d98-9c00-be91a675e1a6>\",\"WARC-IP-Address\":\"46.4.106.116\",\"WARC-Target-URI\":\"https://www.convert-measurement-units.com/convert+Megawatt+hour+to+British+thermal+unit.php\",\"WARC-Payload-Digest\":\"sha1:CHLF3PUT4BEXH7H6E4PURB6NQB4Y4KPQ\",\"WARC-Block-Digest\":\"sha1:HGP3RAEQXEXOYUQTQSQQOTHZAUQUBAHX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738603.37_warc_CC-MAIN-20200810012015-20200810042015-00595.warc.gz\"}"} |
https://www.csdn.net/tags/NtjaMg0sNjU5OC1ibG9n.html | [
"• 2021-04-20 02:35:24\n\n如何使用matlab绘制简单的正方形连续目标(离散点)\n\n%利用linspace绘制对应长宽\n\n%x_dim_parameter参数为取值范围\n\n%points_num为取值点数\n\n%为绘制成连续目标需利用meshgrid函数将正方形填满\n\n%此时形成的是二维矩阵,需要利用reshape转为一维,\n\n%x_dimension对应点云的x坐标,其大小为points_num * points_num2 * 2\n\n%y_dimension对应点云的y坐标,其大小为points_num * points_num2 * 2\n\n%z_dimension对应点云的y坐标,其大小为points_num * points_num2 * 2\n\n%Ptar(:,4)对应点云的强度值,此处均设为1,\n\n%Ptar为最终点云,其中Ptar(:,1)为x_dimension,其中Ptar(:,2)为y_dimension\n\n%其中Ptar(:,3)为z_dimension,Ptar(:,4)对应点云的强度值。\n\nx_dim_parameter2 = 0.005;\n\nx_dim_parameter1 = -0.005;\n\npoints_num = 10;\n\nx_dimension_mid_1 = linspace(x_dim_parameter1,x_dim_parameter2,points_num);\n\ny_dimension_mid_1 = linspace(x_dim_parameter1,x_dim_parameter2,points_num);\n\n[x_dimension_1,y_dimension_1] = meshgrid(x_dimension_mid_1,y_dimension_mid_1);\n\nz_dimension_1 = 0.5*ones(points_num);\n\nintensity_1 = ones(points_num);\n\nx_dim2_parameter_2 = 0.025;\n\nx_dim2_parameter_1 = -0.025;\n\npoints_num2 = 10;\n\nx_dimension_mid_2 = linspace(x_dim2_parameter_1,x_dim2_parameter_2,points_num2);\n\ny_dimension_mid_2 = linspace(x_dim2_parameter_1,x_dim2_parameter_2,points_num2);\n\n[x_dimension_2,y_dimension_2] = meshgrid(x_dimension_mid_2,y_dimension_mid_2);\n\nz_dimension_2 = 0.5*ones(points_num2);\n\nintensity_2 = ones(points_num2);\n\nx_dimension = cat(2,x_dimension_1,x_dimension_2);\n\nz_dimension = cat(2,z_dimension_1,z_dimension_2);\n\ny_dimension = cat(2,y_dimension_1,y_dimension_2);\n\nintensity = cat(2,intensity_1,intensity_2);\n\nPtar(:,1) = reshape(x_dimension,[],1);\n\nPtar(:,2) = reshape(y_dimension,[],1);\n\nPtar(:,3) = reshape(z_dimension,[],1);\n\nPtar(:,4) = reshape(intensity,[],1);\n\n更多相关内容\n• %出内正方体框(检验质量块); %顶点矩阵; vertex_matrix=[-0.5 -0.5 -0.5;0.5 -0.5 -0.5;0.5 0.5 -0.5;-0.5 0.5 -0.5; -0.5 -0.5 0.5;0.5 -0.5 0.5;0.5 0.5 0.5;-0.5 0.5 0.5]; %连接矩阵:连接关系矩阵每一行...\nx_inner=40; %length\ny_inner=40; %width\nz_inner=10; %height\n\ndistance_x=5;\ndistance_y=5;\ndistance_z=5;\n\n%定义各平面与轴的交点位置;\nx_outer=x_inner+2*distance_x;\ny_outer=y_inner+2*distance_y;\nz_outer=z_inner+2*distance_z;\n\nx_inner1=x_inner/2;\nx_inner2=-x_inner/2;\ny_inner1=y_inner/2;\ny_inner2=-y_inner/2;\nz_inner1=z_inner/2;\nz_inner2=-z_inner/2;\n\nx_outer1=x_outer/2;\nx_outer2=-x_outer/2;\ny_outer1=y_outer/2;\ny_outer2=-y_outer/2;\nz_outer1=z_outer/2;\nz_outer2=-z_outer/2;\n\n%画出内正方体框(检验质量块);\n%顶点矩阵;\nvertex_matrix=[x_inner2 y_inner2 z_inner2;x_inner1 y_inner2 z_inner2;x_inner1 y_inner1 z_inner2;x_inner2 y_inner1 z_inner2;\nx_inner2 y_inner2 z_inner1;x_inner1 y_inner2 z_inner1;x_inner1 y_inner1 z_inner1;x_inner2 y_inner1 z_inner1];\n\n%连接矩阵:连接关系矩阵每一行中的数值分别表示顶点矩阵的行标;\nface_matrix=[1 2 6 5;2 3 7 6;3 4 8 7;\n4 1 5 8;1 2 3 4;5 6 7 8];\npatch('Vertices',vertex_matrix,'Faces',face_matrix,'FaceVertexCData',hsv(8),'FaceColor','none')\n\n%画出外正方体框(电极笼);\n%顶点矩阵;\nvertex_matrix=[x_outer2 y_outer2 z_outer2;x_outer1 y_outer2 z_outer2;x_outer1 y_outer1 z_outer2;x_outer2 y_outer1 z_outer2;\nx_outer2 y_outer2 z_outer1;x_outer1 y_outer2 z_outer1;x_outer1 y_outer1 z_outer1;x_outer2 y_outer1 z_outer1];\n\n%连接矩阵:连接关系矩阵每一行中的数值分别表示顶点矩阵的行标;\nface_matrix=[1 2 6 5;2 3 7 6;3 4 8 7;\n4 1 5 8;1 2 3 4;5 6 7 8];\npatch('Vertices',vertex_matrix,'Faces',face_matrix,'FaceVertexCData',hsv(8),'FaceColor','none')\n\nview(3);\n\naxis square",
null,
"展开全文",
null,
"• ## MATLAB画正方体\n\n千次阅读 2020-08-31 22:22:42\nMATLAB画正方体 X=zeros(8,3); X([5:8,11,12,15,16,18,20,22,24])=1; d=[1 2 4 3 1 5 6 8 7 5 6 2 4 8 7 3]; plot3(X(d,1),X(d,2),X(d,3)); view(3); rotate3d;\n\nMATLAB画正方体\n\nX=zeros(8,3);\nX([5:8,11,12,15,16,18,20,22,24])=1;\nd=[1 2 4 3 1 5 6 8 7 5 6 2 4 8 7 3];\nplot3(X(d,1),X(d,2),X(d,3));\nview(3); rotate3d;\n\n展开全文",
null,
"• 给定一个中心和一个半径,这个程序生成一个正方形的坐标,无论是空心还是实心。 该程序对于在圆形周围绘制正方形特别有用。 结果是一组适合用于在矩阵上生成正方形的坐标。\n• ## MATLAB画矩形和圆\n\n千次阅读 2020-09-28 15:20:49\n以前的博文中已经讲解过如何立方体以及画圆的方法。这里将如何矩形和圆的方法写成函数形式,直接提供出来,方便大家使用。 矩形代码 % Example % cx = 1; % cy = 1; % radius = 1; % alpha = 0.8; % color = ...\n\n## 简介\n\n以前的博文中已经讲解过如何画立方体以及画圆的方法。这里将如何画矩形和圆的方法写成函数形式,直接提供出来,方便大家使用。\n\n## 画矩形代码\n\n% Example\n% cx = 1;\n% cy = 1;\n% alpha = 0.8;\n% face_color = rand(1,3);\n% edge_color = rand(1,3);\n% rectangle_patch(cx, cy, radius, face_color, edge_color, alpha)\n\nfunction rectangle_patch(x, y, r, face_color, edge_color, alpha)\nxs = [x-r, x-r, x+r, x+r];\nys = [y+r, y-r, y-r, y+r];\npatch(xs, ys, face_color, 'facealpha', alpha, 'edgecolor', edge_color);\nend\n\n\n\n## 画圆代码\n\n% Example\n% cx = 1;\n% cy = 1;\n% alpha = 0.8;\n% face_color = rand(1,3);\n% edge_color = rand(1,3);\n% circle_patch(cx, cy, radius, face_color, edge_color, alpha)\n\nfunction circle_patch(cx, cy, radius, face_color, edge_color, alpha)\nt = 0:.1:2*pi;\nx = radius * cos(t) + cx;\ny = radius * sin(t) + cy;\npatch(x, y, face_color, 'facealpha', alpha, 'edgecolor', edge_color);\nend\n\n\n\n## 效果",
null,
"展开全文",
null,
"",
null,
"MATLAB画圆\n• plot([1,1,0,0,1]',[0 1 1 0 0]')怎样用MATLAB画正方形,急求!!!条件没看太明白,“通过点的斜率什么意思?画正方形可以使用rectangle函数,指定左下角的顶标和边长即可。示例:% 正方形一个顶点坐标Vertex = [2 ...\n• ## matlab:画正方体\n\n千次阅读 2020-03-13 00:29:38\n%Cube clear ,clc; figure; hold on; view(3); %Create four vertices V = [0 0 0;0 1 0;1 1 0;1 0 0; 0 0 1;0 1 1;1 1 1;1 0 1]; F = { [1 2 3 4] [5 6 7 8] [1 2 6 ...\n• ^ 抱歉之前赶着吃饭没看清题目 clc clear all close all x1=0:0.1:2;...%轨迹如果不需要删掉即可 for i = 1: length(traj_x) hold on plot(x(i,:),y(i,:)); axis([-11,15,-0.5,110]); pause(0.1); end\n• 给图像添加网格:grid on 调整坐标轴:axis axis square——>正方形图像 axis equal——>间距相同 axis auto ——> 自动选择 线条风格 z 实线 '-' z 虚线 '--' z 虚点线 '-.' z 点线 ':' 颜色说明符 白色 w ...",
null,
"开发语言\n• 在此之前先介绍一个矩形的函数:rectangle('Position',[0.59,0.35,3.75,1.37],'Curvature',[0,0],'LineWidth',2,'LineStyle','--')rectangle('Position',[x坐标,y坐标,宽,高],'Curvature',[曲率纵向,曲率横向],'...\n• %本程序出在正方体内嵌套1/4圆的组合图形 %程序虽小,但是蕴含着用结构体设置图形属性和图形对象及坐标轴属性底层设置 clc; x=0:0.01:1; %x的范围标定 y=sqrt(1-x.^2); %函数y的表达式 figure; h=plot...\n• 此代码仅用于测试 OWI ROBOT ARM 在添加编码器并使用电机驱动器通过 PWM 改变电机速度时是否确实可以绘制正方形\n• 当自己做出图后,发现只有将作者原图截图下来,放在某些软件中(如excel,origin)再添加上自己的图,并且要不断的调整取值范围等。图片较少,也就罢了;图片很多,或者多次算出结果,再去调整这些细节,很费时。本人...\n• 任意正六边形随机撒点是通信仿真里经常用到的,无奈网上资料很少,参照...废话不多说了,先来看看怎么正六边形。theta =linspace(0,2*pi,7);plot(cos(theta),sin(theta),'g-');两句代码就可以出一个正六边形了,...\n• ## matlab画三角形,多边形\n\n万次阅读 多人点赞 2015-03-03 19:57:04\n没有找到直接多边形的函数,只能是将各个点的坐标保存在数组中,将一个点与其相邻的点相连,并将最后一个点与第一个点连接。 clear; clc; a =[ 0 2 4 6 8 10 12 14 ; 0 2 1 4 6 ...\n• ## matlab画温度分布图\n\n千次阅读 2021-04-18 12:09:17\n传热学MATLAB温度分布大作业完整版_工学_高等教育_教育专区。东南大学能源与环境学院 课程作业报告作业名称:传热学大作业——利用 matlab 程序解决热传导问题 院系:......三维火柴杆图。 >> x=[1:0.5:20]; >...\n• 今天在用MATLAB编程的时候,用到了已知圆心和半径,画圆的程序,上网搜了一下,主要有下面两种,在这里总结一下:(这里我都是放在函数中做的,想多个圆的话可以加个for循环调用一下函数,或者直接用向量做都是可以...\n• 这个简单的函数使用 Matlab 的 Mapping Toolbox 生成具有矩形边界的北极和南极的极地立体图。 通常,Matlab 中的极地立体图以圆形边界显示,但这通常不是显示有关南大洋或北极系统信息的最节省空间的方式。 Ncpolarm...\n• 标记形式有'+','o','*',和'x',填入's'代表正方形,'d'代表菱形,'A'为上三角形,'v'为下三角形,’>’为右三角形,'<’为左三角形,'p’为五角星形,’h'为六角星形,none为不用标记。 ...\n• 线型 说明 y 黄色 m 品红色 c 青蓝色 r 红色 g 绿色 b 蓝色 w 白色 k 黑色 相关属性 MATLAB图形允许控制下列直观特征: LineWidth - 指定线条的宽度(以磅为单位)。 MarkerEdgeColor - 指定标记颜色或填充标记...\n• 在turtlebot端:[turtlebot] 启动Turtlebotroslaunch turtlebot_bringup minimal.launch[turtlebot] 启动雷达roslaunch rplidar_ros rplidar.launch在Matlab端:运行avoiding_obstacles.m文件运...\n• ## matlab怎么画圆柱\n\n千次阅读 2021-04-18 02:57:21\nMatlab 函数图像一、螺旋线 1.静态螺旋线 a=0:0.1:20*pi...例 在区间[0,2π]sin(x),并分别标注“sin(x)””cos(x)”. 解 x=linspace(0,2*pi,30); y=sin(x); MATLAB liti3 z=cos(x); plot(x,y,x,......最终效果...\n• 请问怎样用matlab画一个透明的球体,球体圆心为(x,y,z)坐标的原点[x,y,z]=sphere(30);%30是画出来的球面的经纬分面数...30的话就是30个经度,30个纬度x=4+7*x;&matlab 如何画三维空间中的离散点颜色举个例子:z=0:...\n• ## matlab画三角形网格\n\n千次阅读 2021-04-18 05:25:44\n而应力解的 结束......matlab画网格并填充用于相关系数图示_数学_自然科学_专业资料。画网格并填充...对于平面问题,结构离散常用的网格形状有三角形、矩形...本章将详细介绍如何利用 Matlab 语言 编制三角形常...(前...\n• ## matlab画实心圆点\n\n千次阅读 2021-04-28 08:14:13\nhist 累计图 rose 极座标累计图 stairs 阶梯图 stem 针状图 fill 实心图 feather 羽毛图 compass 罗盘图 quiver 向量场图 Matlab 如何出一个二维数组的曲线? ...一、Matlab 绘图中用到的直线属性包括: (1)...\n• ## Matlab计算轮廓内切圆\n\n千次阅读 多人点赞 2019-06-22 20:21:32\n当然由于全是个人的理解肯定有不妥当的地方,权作抛砖引玉),我最终用Matlab(2018a)造了个自认为效果还可以的程序,于是有了这篇文章(偷懒的我直接从参考网站上截取了图片,权作问题的直观描述)。 在设计最终的...",
null,
"多边形\n• ## matlab画极坐标\n\n千次阅读 2021-04-20 02:42:12\n4.3 其它图形函数除plot等基本绘图命令外,Matlab系统提供了 许多其它特殊绘图函数,这里举一些代表性 例子......三维曲线的绘制 3.3 三维网线图和曲面图的绘制 3.4 Matlab图形处理的基本技术 3.5 动画技术 3.6 图形...\n• 请问各位用MATLAB画出的图形(分形天线,编程画出来的),如何导入到仿真软件中(如HFSS)呢?保存的时候要选择哪种格式啊?同问。没作过,不知道行不行。好像是不行。同问。先行谢谢咯,那位哥么能帮忙解决一下啊……无法...\n• MATLAB 中提供了一系列的函数用于绘制不同形式... contour(z):......(x,y,z) 有等位线投影的实心网格图 [x,y,z]=cylinder(r,n) 三维旋转曲面,其中n为母线的条数 Matlab软件例1:在区间[0,10π]出参数曲线x=sint, y=...",
null,
"",
null,
"...\n\n# matlab画正方形",
null,
"matlab 订阅"
] | [
null,
"https://img-blog.csdnimg.cn/9de3d2daf9a64971b828153c4dc65de2.png",
null,
"https://csdnimg.cn/release/aggregate/img/arrow-down@2x.png",
null,
"https://csdnimg.cn/release/aggregate/img/arrow-down@2x.png",
null,
"https://img-blog.csdnimg.cn/20200928152011389.png#pic_center",
null,
"https://csdnimg.cn/release/aggregate/img/arrow-down@2x.png",
null,
"https://csdnimg.cn/release/aggregate/img/tags.png",
null,
"https://csdnimg.cn/release/aggregate/img/tags.png",
null,
"https://csdnimg.cn/release/aggregate/img/tags.png",
null,
"https://csdnimg.cn/release/blogv2/dist/pc/img/monkeyWhite.png",
null,
"https://csdnimg.cn/release/blogv2/dist/pc/img/monkeyWhite.png",
null,
"https://img-search.csdnimg.cn/bkimg/8b13632762d0f703918f605e5cb3463d269759ee6f55/aggregate_page",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.52440804,"math_prob":0.9971747,"size":8768,"snap":"2022-27-2022-33","text_gpt3_token_len":5361,"char_repetition_ratio":0.14240073,"word_repetition_ratio":0.19277108,"special_character_ratio":0.39005473,"punctuation_ratio":0.3189386,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9862474,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,1,null,null,null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T11:18:41Z\",\"WARC-Record-ID\":\"<urn:uuid:09b57109-fe02-44e9-82f9-fbe5051f91d9>\",\"Content-Length\":\"194453\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:00428fc2-80f9-4bbf-8241-c2fe7ce5a933>\",\"WARC-Concurrent-To\":\"<urn:uuid:1cc8b2e2-8c25-4f57-ba9a-0bb61e69db69>\",\"WARC-IP-Address\":\"39.106.226.142\",\"WARC-Target-URI\":\"https://www.csdn.net/tags/NtjaMg0sNjU5OC1ibG9n.html\",\"WARC-Payload-Digest\":\"sha1:MLBGVANROJHD5PR6WKFCVA6LL76T5C7L\",\"WARC-Block-Digest\":\"sha1:SC2KKYAHTRNJGCNNW7WGZH3TZ6EZZQYT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104669950.91_warc_CC-MAIN-20220706090857-20220706120857-00006.warc.gz\"}"} |
https://www.nagwa.com/en/videos/786175656432/ | [
"# Question Video: Finding the Equation of a Plane Mathematics\n\nFind the equation of the plane whose 𝑥-, 𝑦-, and 𝑧-intercepts are −7, 3, and −4, respectively.\n\n01:24\n\n### Video Transcript\n\nFind the equation of the plane whose 𝑥-, 𝑦-, and 𝑧-intercepts are negative seven, three, and negative four, respectively.\n\nIn this question, we’re asked to find the equation of a plane. And to do this, we’re given that it has an 𝑥-intercept at negative seven, a 𝑦-intercept at three, and its 𝑧-intercept is at negative four. And we can recall we know how to find the equation of a plane from its intercepts by using the intercept form of the equation of a plane. This tells us if the 𝑥-, 𝑦-, and 𝑧-intercepts of a plane are 𝑎, 𝑏, and 𝑐 for nonzero values, then 𝑥 over 𝑎 plus 𝑦 over 𝑏 plus 𝑐 over 𝑧 is equal to one is the equation of this plane. And since the three intercepts we’re given are nonzero, we can just substitute these values into our equation for the plane\n\nWe substitute 𝑎 is equal to negative seven, 𝑏 is equal to three, and 𝑐 is equal to negative four into our intercept equation of the plane. We get 𝑥 over negative seven plus 𝑦 over three plus 𝑧 over negative four is equal to one. We can then simplify this slightly. In the first and third term on the left-hand side of our equation, we can bring the factor of negative one into the expression. This gives us the following equation. And it’s worth noting we could write the coefficients of 𝑥, 𝑦, and 𝑧 as integers by multiplying the equation through by 84. However, we can just leave our answer in this form. This gives us the answer of negative 𝑥 over seven plus 𝑦 over three minus 𝑧 over four is equal to one."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9372387,"math_prob":0.9969949,"size":1494,"snap":"2021-43-2021-49","text_gpt3_token_len":397,"char_repetition_ratio":0.17986578,"word_repetition_ratio":0.035460994,"special_character_ratio":0.22423026,"punctuation_ratio":0.1030303,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99986684,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T23:11:45Z\",\"WARC-Record-ID\":\"<urn:uuid:97d260a1-40b4-42bf-b4dd-acb3b1aafc61>\",\"Content-Length\":\"35857\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fa5ab29b-4e11-4dfc-b1c0-5e1abae14cbb>\",\"WARC-Concurrent-To\":\"<urn:uuid:72397f98-1530-4241-a418-58104b8c6827>\",\"WARC-IP-Address\":\"76.223.114.27\",\"WARC-Target-URI\":\"https://www.nagwa.com/en/videos/786175656432/\",\"WARC-Payload-Digest\":\"sha1:A3XBCQYNBHC7PBNMG2EKC2SE4FCWAF5F\",\"WARC-Block-Digest\":\"sha1:GBAX35M2KYG3XZKTYEESRT3BB5NYONRD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358323.91_warc_CC-MAIN-20211127223710-20211128013710-00515.warc.gz\"}"} |
https://www.aeroengineering.co.id/2020/03/validation-and-verification-in-computational-fluid-dynamics-cfd/ | [
"# VALIDATION AND VERIFICATION IN COMPUTATIONAL FLUID DYNAMICS (CFD)\n\nOne method commonly used in design and research in fluid mechanics and heat transfer besides analytical and experimental is using a numerical method known as Computational Fluid Dynamics (CFD).\n\nThis method has been used for a long time to solve any engineering problems (fluid mechanics related) in many industries, from aerospace, maritime, automotive, manufacture, energy and renewable energy up to biomedical engineering.\n\nBecause this method is computer-based (no physical prototype needed), the total processes can be done quickly, flexible, low cost, deeper and more importantly no safety issues if the test is related to human interaction.\n\nNevertheless, some engineers and scientists are still skeptical about the accuracy of the CFD result because of the lack of operational CFD knowledge. (no matter how sophisticated your calculator is, if you hit the wrong input the output will be wrong right?). In this article, we will discuss the verification and validation of CFD method.\n\nFirst, before we discuss verification and validation, we must understand some terminologies, these are (1) code, (2) simulation, and (3) Model:\n\n(1) CODE: Is a bunch of computer instructions to gives input and definitions. This code has a strong relation to what software we used. Different software will have difference code characteristics.\n\n(2) SIMULATION: Is the use of the model, in CFD case this is to obtain the results such as flow, pressure, velocity, etc. based on the input to the model.\n\n(3) MODEL: Model is a representation of the physical system (in CFD case is the fluid flow or heat transfer) to predict the characteristics or output of the system. For example the geometrical size, inlet velocity, temperature in the wall, pressure at the outlet, etc. based on the physical system we want to mimic.\n\nCredibility of a code, model and CFD simulation are obtained based on its uncertainty and error level. The value of uncertainty and error itself determines whether the program and computational method used are fitted with at least intuitively and mathematically or not. Then, validation determines whether the simulation is fitted with physical phenomena or not. Generally, validation used experimental methods if possible.\n\nThere are some disagreements among professionals about the standard procedure of verification and validation of CFD simulation. Although CFD is widely used, this method is relatively new. CFD is a complex method that involves non-linear differential equations to solve the theoretical equations or experimental equations in a discrete domain, in complex geometry. Hence, the error assessment for CFD is based on these tree root (1) theory, (2) Experiment, and (3) Computation.\n\nUSING THE CFD RESULTS\nThe accuracy level of CFD analysis depends on the use of the result itself. The conceptual design process doesn’t need an accurate simulation result, on the other hand, on the detail design process, we need accurate CFD results. Every quantity in CFD needs a different accuracy level, for example, we don’t need accurate temperature value in the design process of low-speed aircraft, but we need accurate temperature calculation when we are dealing with supersonic aircraft or rocket. 
In general, there are three categories of CFD simulation based on its accuracy demand: (1) Simulation for qualitative information, (2) Simulation to obtain incremental value, and (3) simulation to obtain the absolute value of a quantity.\n\n(1) Simulation to obtain qualitative information\nIn this case, generally, experimental information data are hard or maybe too costly to obtain, so there’s no comparison data, and what engineers or scientists need is the “how it works” information, and how to optimize a flow without needing the exact value of each parameter. For example, a valve manufacturer wants to develop a novel design idea, and they want to prove the theory and see whether or not the flow is streamlined or chaotic in nature, they don’t need exact value of pressure drop, velocity, etc. in this conceptual design step: At least until they want to compare this design to an existing design (refer to category 2) and want to design the minimum thickness of this part before it is ready to manufacture (refer to category 3)\n\n(2) Simulation to obtain incremental value\nThis scenario compares the incremental value with respect to some design or flows alteration with the same basic characteristics. For example, a company wants to modify an existing impeller blade in case of its blades number or its inlet angle (illustrates in the picture below). From this simulation, we could determine which impeller has the highest pressure difference regardless of its absolute pressure in the entire system. This type of simulation demands more accuracy than category 1.\n\n(3) Simulation to obtain absolute quantity\nThis is the most accuracy-demanding simulation scenario and sometimes this simulation results are compared with the experimental result to validate the method, and the other results are used in the next design process such as calculating the L/D of an aircraft wing illustrates bellow:\n\nFLOW CHARACTERISTICS\nTo conduct a model validation, we must understand the flow characteristic to get intuition whether the flow acts as expected physical phenomena or not. For example, if we simulate a projectile with the speed exceed the speed of sound, the shock wave phenomena should occur; or if we simulate flow in pipe in low Reynold number, the flow should be laminar, otherwise, it must be turbulent, and so on. This knowledge is important because CFD is only a “calculator” if we hit the wrong input, the output will be wrong, in fact, the settings in CFD software, in general, are varied and cause a headache if we don’t have this knowledge.\n\nPHYSICAL MODEL\nPhysical model not only refers to the geometrical model, but these are also the following models to be considered in CFD simulation:\n\n(1) Spacial dimension\nOr the geometry (1D, 2D or 3D) of the object we want to model, sometimes this model is simplified with symmetry or reduces 3D into 2D to reduce the computational effort as long as it still represents the essence of the flow we want to analyze.\n\n(2) Temporal dimension\nThis is a time dimension of the simulation we want to conduct. This is very important in transient simulation, but not significant if we want to simulate a steady simulation. For example, if we want to simulates an object that rotates 1 rotation/second, and we input the delta time 0,1 second, we will accommodate the 10 incremental motions in our simulation. 
But, if we input delta time 2 second, the computation will error because we can’t accommodate the “motion” of the object.\n\n(3) Navier-Stokes Equation\nThis is the fundamental equation of fluid mechanics that models the flow velocity, pressure, gravity, viscosity and even rotational force in the flow.\n\n(4) Turbulent Model\nThis model is specially designed to model turbulent flow without calculating the whole (complex and computationally high effort) Navier-Stokes equation. The difference turbulent model we use will generate different results in our CFD simulation.\n\n(5) Energy equation\nUnlike classical solid mechanics, in fluid dynamics, energy generally refers to heat transfer and temperature change.\n\n(6) Flow boundary condition\nThis is a mandatory input in our simulation. Boundary conditions input what flow characteristic we already have, for example, pressure in the inlet of a pipe (from a pump) or the velocity an aircraft during flight, etc.\n\nCLOSURE\nEven the setting in CFD simulation looks messy for CFD beginner, but a lot of scientists and engineers around the globe are publishing papers and journal continuously to share their setups and its accuracy compared to experimental as well as analytical results, hence CFD verification and validation becomes easier with these abundance references.\n\nBy Caesar Wiratama\n\naeroengineering.co.id is an online platform that provides engineering consulting with various solutions, from CAD drafting, animation, CFD, or FEA simulation which is the primary brand of CV. Markom.",
null,
"#### Author: Caesar Wiratama\n\ncaesar@aeroengineering.co.id +62 821-3868-4162\n0 replies"
] | [
null,
"https://secure.gravatar.com/avatar/cc8cc285e31cdb0563c9624be95ff89f",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9053603,"math_prob":0.9525577,"size":8233,"snap":"2022-05-2022-21","text_gpt3_token_len":1645,"char_repetition_ratio":0.12176449,"word_repetition_ratio":0.0030698387,"special_character_ratio":0.19227499,"punctuation_ratio":0.105405405,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9679247,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-21T15:51:11Z\",\"WARC-Record-ID\":\"<urn:uuid:a14f9b3b-b659-4d94-b4d1-bafe17d1a803>\",\"Content-Length\":\"52568\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bdfd0aa2-1dbf-493a-992f-d49c69128530>\",\"WARC-Concurrent-To\":\"<urn:uuid:f66c1dcf-f411-4f02-b9d5-f7a6370bd093>\",\"WARC-IP-Address\":\"206.72.200.10\",\"WARC-Target-URI\":\"https://www.aeroengineering.co.id/2020/03/validation-and-verification-in-computational-fluid-dynamics-cfd/\",\"WARC-Payload-Digest\":\"sha1:U2GVUSFG2UO7ID4HCU7SJBJU52PZP64P\",\"WARC-Block-Digest\":\"sha1:M5MEFNEJKCRDSOWQGEVLZYO2FMBEFMNX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662539131.21_warc_CC-MAIN-20220521143241-20220521173241-00471.warc.gz\"}"} |
https://math.stackexchange.com/questions/2455094/determining-parameters-of-ax3bx2-given-inflection-point | [
"# Determining parameters of ax^3+bx^2 given inflection point\n\nI stumbled upon this exercise in which the function f(x)=ax^3 +bx^2 is given and it is asked to determine a and b for when the function crosses (-1,1) and has an inflection point at x=0.5\n\nAs far as I understood I have the second derivative which is f”(0.5)= 6*0.5*a+b= 0 and have to create some sort of system to get either a or b but I can’t figure out how to do this exactly.\n\nP.s. sorry for the sloppy formulas but I am typing this from my phone.\n\nIf $f(x)=ax^3+bx^2$ passes through $(-1,1)$ then you have one equation: $-a+b = 1$\nand if you take the second derivative you get $f''(x)=6ax+2b$ you have another equation that you can form given the information you have, do you know how to continue now?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91908956,"math_prob":0.99214,"size":1756,"snap":"2023-40-2023-50","text_gpt3_token_len":454,"char_repetition_ratio":0.1033105,"word_repetition_ratio":0.0,"special_character_ratio":0.25455582,"punctuation_ratio":0.084745765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998316,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T04:05:37Z\",\"WARC-Record-ID\":\"<urn:uuid:2f480d7b-2395-4ece-b8d4-0ce7ad8bc777>\",\"Content-Length\":\"138639\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:40735bce-47d6-4df3-b3f9-018bee28a044>\",\"WARC-Concurrent-To\":\"<urn:uuid:0fe7dbbc-3368-4061-a578-1b7f46259f29>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2455094/determining-parameters-of-ax3bx2-given-inflection-point\",\"WARC-Payload-Digest\":\"sha1:YR4FCM4VWSBEBPMAYCT4FCA44S72V7WQ\",\"WARC-Block-Digest\":\"sha1:67LV4IJAU74NSNLWH2LZTRKNCYBJCS6Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100710.22_warc_CC-MAIN-20231208013411-20231208043411-00570.warc.gz\"}"} |
https://docs.oracle.com/database/121/SQLRF/functions082.htm | [
"# HEXTORAW\n\nSyntax",
null,
"Description of the illustration ''hextoraw.gif''\n\nPurpose\n\n`HEXTORAW` converts `char` containing hexadecimal digits in the `CHAR`, `VARCHAR2`, `NCHAR`, or `NVARCHAR2` data type to a raw value.\n\nThis function does not support `CLOB` data directly. However, `CLOB`s can be passed in as arguments through implicit data conversion.\n\nExamples\n\nThe following example creates a simple table with a raw column, and inserts a hexadecimal value that has been converted to `RAW`:\n\n```CREATE TABLE test (raw_col RAW(10));\n\nINSERT INTO test VALUES (HEXTORAW('7D'));\n```\n\nThe following example converts hexadecimal digits to a raw value and casts the raw value to `VARCHAR2`:\n\n```SELECT UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('4041424344'))\nFROM DUAL;\n\nUTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('4041424344'))\n------------------------------------------------\n@ABCD\n```"
] | [
null,
"https://docs.oracle.com/database/121/SQLRF/img/hextoraw.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5502098,"math_prob":0.8492455,"size":843,"snap":"2022-27-2022-33","text_gpt3_token_len":205,"char_repetition_ratio":0.15733016,"word_repetition_ratio":0.0,"special_character_ratio":0.28351128,"punctuation_ratio":0.1294964,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9743514,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T02:44:53Z\",\"WARC-Record-ID\":\"<urn:uuid:0ff2fa09-ded6-4824-a174-838815bd06ff>\",\"Content-Length\":\"8960\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eb5a7079-ffd4-433d-af67-fdb831c4482e>\",\"WARC-Concurrent-To\":\"<urn:uuid:55f97c93-35d2-417b-ac9d-01555824799a>\",\"WARC-IP-Address\":\"104.104.97.12\",\"WARC-Target-URI\":\"https://docs.oracle.com/database/121/SQLRF/functions082.htm\",\"WARC-Payload-Digest\":\"sha1:CZWU4ZJYQVJMEHJ2CHDXPDW4CQ6BIOAS\",\"WARC-Block-Digest\":\"sha1:DV7EXVMBKKATSAF3XZBKARELRM2BLK37\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103347800.25_warc_CC-MAIN-20220628020322-20220628050322-00679.warc.gz\"}"} |
https://sagnikb.github.io/blogposts/leemetric/ | [
"Volumes of spheres in discrete metrics\n\nRecently I co-authored a paper with Prof Adrish Banerjee, which we submitted to ISIT 2019, and I thought I would go over the key ideas in the paper as a blog post. Our main concern was to find out the volume of spheres in discrete metrics, and applications of the method to the Lee metric.\n\nBackground\n\nWhy would it be an interesting problem to talk about the Lee metric and bounds of the kind that we were looking for? It turns out that there are quite a few applications for results of this sort.\n\nA bound on the volume of a sphere in a discrete metric is used to find bounds on the limits of reliable communication over a channel (more on that later). We actually see this in the paper itself - the moment we obtain an upper bound on the volume of a sphere in the Lee metric, we could use known results to immediately find analogues of the binary Hamming, Elias-Bassalygo and Gilbert-Varshamov bounds in the case of the Lee metric (more on all of these later). The basic utility of bounds on the limits of reliable communication over a channel should be clear - it gives people who design communication protocols something to aim at, and if such a design hits the upper bound, we know that one cannot do better, at least from the rate point of view (one could still improve things like encoding-decoding efficiency etc.). There are other examples too - consider the result of another paper that I was a co-author on that was also accepted to ISIT, where we showed that if the capacity under vanishing average error of a channel does not equal the zero-error capacity (both of these terms will also be talked about later in slightly more detail) of the channel, we need at least $\\log(n)$ bits of common randomness, where $n$ is the blocklength of the code, to communicate reliably at capacity over the channel. Common randomness being a valuable resource (it is hard to generate and distribute such randomness), such a bound is good to know. How to show such a gap exists? We need an upper bound on the capacity of the kind that our results help us find.\n\nThis is the thing that brought our attention to this problem in the first place. We already knew the common randomness result, and we knew that the gap exists for the Hamming metric, but we thought it would be interesting to show that the gap exists for a more general channel - enter the Lee metric.\n\nWhy is a metric important?\n\nBefore proceeding further, let me quickly talk about metrics and why they are useful in information and coding theory. So a major subfield of information theory involves finding the limits of reliable communication over a particular channel. Now reliable can be defined in various ways - two of which are\n\n• we want the average error to go down to zero when we use the channel multiple times\n• we want the error to be exactly zero for all channel uses\n\nIn most cases, the maximum rate at which we can transmit information in the first case will be higher than the maximum rate for the second case. Now, any transmission is done using an encoder-transmitter pair and a receiver-decoder pair, with a (possibly noisy) channel in between. In recovering the transmitted data from the noisy data at the receiver, it makes intuitive sense that the decoder should choose the most probable transmitted codeword given the model of the channel noise and the received data, an idea called Maximum Likelihood (ML) decoding. This is where the metric comes in. 
If the channel noise model is sufficiently ‘nice’ we can replace the ML decoding at the decoder by using a metric (which is just a measure of distance) such that, by choosing the nearest neighbour to the received data in the set of all possible transmitted codewords, we get the same result as what we would get if we were to use ML decoding. When this happens, we say that the metric is matched to the channel under consideration. Note that the metric that will be used depends crucially on the encoder, the noise model of the channel, and the decoder. This idea will become clearer in the next section.\n\nThe Hamming and the Lee metrics\n\nTwo metrics that have been studied relatively deeply in the literature are the Hamming metric and the Lee metric. The Hamming metric is a very simple metric - given two $n$-length words $a$ and $b$, it counts the number of positions in which they differ. It is useful in cases where, given any two distinct symbols (not words) $a$ and $b$, the probability that the channel corrupts $a$ to $b$ is the same, and this probability remains the same if the pair $(a, b)$ is replaced by any other distinct pair $(a',b')$. Even though this metric is really simple, it can be used to describe a large class of encoder, decoder and channel combinations, and thus has been one of the most well-studied metrics in information and coding theory. The Lee metric is useful whenever symbols closer together are more prone to corruption than symbols further apart, and the ‘closeness’ of two symbols is decided by arranging the symbols in a circle (for example, the first symbol and the last symbol are considered adjacent, which wouldn’t be the case if they were on a line).1\n\nAnother interesting concept is that of a rate-distance trade-off. It again makes intuitive sense to say that if two codewords are far apart in a suitable metric, then the probability that the channel will confuse the two is low. So, to protect against as many errors as possible, we want our codewords to be as far away from each other as possible. However, in a finite space, we can only pack so many codewords that are all pair-wise more than a given distance away. So, given the number of errors a code can handle, we get a bound on the number of codewords we can use. If we use an $n$-length code, the number of such codewords grows exponentially in $n$, and therefore it makes sense to define the rate as $\log \mathcal{M}(n)/n$, where $\mathcal{M}(n)$ is the number of possible $n$-length messages. In the case of the error being exactly zero, we want there to be no other codewords in a sphere around a given codeword. For average error going to zero, we can allow small intersections between the spheres. Note that this means that finding how many $n$-letter words there are in a neighbourhood of a given radius around a point is going to be important, because we have to ensure that none of these is part of the code that we are going to use.\n\nIn the rest of this post, we will be concerned only with zero-error information theory, the second error criterion used above. In this case, we know a lot for the Hamming metric - given a particular distance, we know that certain rates are achievable (that is, we can construct codes that achieve those rates), and we know that some rates are not achievable. The achievability result is via the Gilbert-Varshamov bound, while there are several upper bounds known (like the Hamming, Singleton and Elias-Bassalygo bounds). 
Unfortunately, these do not match, and a gap exists 2 which has been an open problem for quite some time now.",
null,
"What the bounds look like for the Hamming metric. The Gilbert-Varshamov bound is the achievability result. The rest are upper bounds. Note the gap between them.\n\nIn the case of the Lee metric, far less is known. We saw above why it is important to bound the number of words that are within a given distance away from a point, a quantity that we will call the size of a ball in the metric. The main difficulty in the case of the Lee metric is that it is much harder to find the size of a ball in the Lee metric than it is in the Hamming metric. Assuming we know the size of the ball in the Lee metric, a formula for each of the rate-distance bounds mentioned above appears in a wonderful book - Algebraic Coding Theory by Elwyn Berlekamp.\n\nSo now, the issue is to figure out what the size of a ball in the Lee metric is. This is the main content of our paper. Once we had this result, we were able to use Berlekamp’s work to immediately obtain the above-mentioned bounds for the Lee metric.\n\nOur approach\n\nFirst steps\n\nThe first step was essentially a rediscovery of results that were known when Berlekamp wrote his book in the 60s. The idea is that one can think of this problem as estimating a particular coefficient in the expansion of a particular generating polynomial for the metric. To gain some intuition for this, consider the simple case of the binary Hamming metric, and suppose we need to find all the $n$-letter words that are a Hamming distance $d$ away from the word $(0, \\ldots, 0)$3. The Hamming metric just says that if there are $d$ locations where two equal length words differ, the Hamming distance is $d$. The required number of words is given by the number of choices of $d$ positions out of $n$ that will have $1$’s, which is simply $\\binom{n}{d}$. This is also the coefficient of $x^d$ in the polynomial $(1+x)^d$. One simple way to understand the equivalence is that this coefficient also involves the choice of $d$ $1$’s from $n$ choices.\n\nThis idea generalises to the Lee metric. Say the alphabet size in the Lee metric is $5$. Then the generating polynomial is given by $(1 + 2x + 2x^2)$, and the problem of finding the size of the Lee ball of radius $d$ reduces to finding the coefficient of $x^d$ in $(1 + 2x + 2x^2)^n$. Now, this is not an especially easy problem, because no such simple idea as the one in the case of the Hamming metric works in this case. There exist formulae for this (also known from the 60s), but they are a) difficult to use and b) do not give much insight to the problem. I was at this time doing an online course on analytic combinatorics on Coursera, and there they talked about problems of this form and how to find analytic solutions for the same. 
I learned a lot of interesting new approaches to solving problems like this (in fact the first time I thought about the generating polynomial method of solving the problem was while doing the course), but ultimately it turned out that the techniques in the course weren’t well suited for this particular application (in particular because here we need to fix $n$ to be a particular finite value and there one could take the limit as $n \rightarrow \infty$, and the forms of the polynomials that we were working with could not be handled very easily via the techniques proposed in the course).\n\nA question\n\nWhile spending a few weeks trying to use techniques from analytic combinatorics and ruling them out one by one, I noticed that even the Hamming metric calculations are done assuming that we are finding the coefficient of some $x^d$ where $d$ is then the location of the largest coefficient. I figured it would be a start if we could at least figure out where the maximum coefficient would be - then we would at least know the parameter range where we would be working. After a few days wrestling with this problem, I seemed to have guessed a solution to it but was not able to prove it myself. So I posted the following question on the Mathematics StackExchange - Largest coefficient in the power of a polynomial. A couple of answers could be used to get a proof of the maximum coefficient, and I finally hit upon some path of progress - I could use the central limit theorem to find the required coefficients for large enough $n$. Using Python’s numpy.polynomial, I was able to get some graphs that showed me how the coefficients were behaving and how the normal approximation looked. I tried to use the results to obtain the bounds that we want, but the results were rather bad - they didn’t even match the known results for the Hamming case, let alone the Lee metric. When I went to see what the problem was, it was quite apparent - I had been using the log scale to look at the graphs, and the normal approximation was really good close to the peak but was really bad away from it. The log scale hid the fact that a couple of standard deviations away from the peak, the approximation was giving a value many orders of magnitude more than the actual value (say $10^{10}$ times higher).\n\nLarge Deviations and Sanov’s theorem\n\nThis was a problem that reeks of large deviation theory - but it took me some time to recognise that because I had not used it before. Large deviation theory, as the name says, gives good bounds when one is working more than a few standard deviations away from the central peak.\n\nOne of the basic results in large deviation theory is something called Sanov’s theorem. It gives a powerful bound on the size of a set of type classes drawn from a probability distribution. The idea is as follows - suppose we have a discrete probability distribution, like the one that governs what face of a die comes up when it is thrown. Now, if we sample the probability distribution $n$ times (throw the die $n$ times), the outcomes will probably not be distributed exactly like the underlying distribution, but would be something called the empirical distribution - in the case of a die, this would consist of the number of times each number came up when the die was thrown $n$ times. Empirical distributions are also called types, a powerful concept in information theory, with several very nice properties. A type class is the set of all outcomes which have the same type 4. 
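As an aside, returning to the remark above about the normal approximation being good near the peak but very poor in the tails, here is a small numerical illustration (a sketch of my own, again using the q = 5 Lee polynomial; the value n = 60 is arbitrary). It compares the exact, normalised coefficients with a Gaussian of matching mean and variance at various distances below the mean.

```python
import numpy as np

n = 60
p = np.array([1.0, 2.0, 2.0]) / 5.0            # one-letter Lee-weight distribution for q = 5
coeffs = np.array([1.0])
for _ in range(n):                              # exact distribution of the total Lee weight
    coeffs = np.convolve(coeffs, p)

w = np.arange(3)
mu = np.dot(w, p)                               # per-letter mean weight (1.2)
var = np.dot(w ** 2, p) - mu ** 2               # per-letter variance (0.56)
d = np.arange(len(coeffs))
gauss = np.exp(-(d - n * mu) ** 2 / (2 * n * var)) / np.sqrt(2 * np.pi * n * var)

for sd in (0, 2, 4, 6, 8):                      # standard deviations below the mean
    k = int(round(n * mu - sd * np.sqrt(n * var)))
    print(f"{sd} sd below the mean: exact {coeffs[k]:.3e}, Gaussian {gauss[k]:.3e}")
```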
Now, given a set of types, Sanov’s theorem allows us to bound the size of that set of types - the number of sequences such that their type is in the set. In our case, the class is just the set of all types such that the expected value with that probability distribution is less than $d/n$. This could be calculated using the normal approximation, but as mentioned earlier, the bounds obtained from it are pretty poor away from the peak.\n\nConvex Optimisation\n\nSanov’s theorem requires us to find the type in the set that has the minimum relative entropy with the underlying probability distribution. This is a convex optimisation problem, and one could take its dual. The dual problem was just a complicated single variable optimisation that became something that I could handle without any of the sophisticated methods used to solve convex optimisation problems, using some Python code that ran reasonably quickly. This was part of the problem solved - numerically finding this maximising parameter gave some upper bound on the required coefficient. I wanted something better.\n\nA weird function\n\nBy the duality properties of convex optimisation it was clear that any positive value of the single parameter that the dual was over would work, so what remained was to choose what the positive value would be as a function of the expected value that was used to define the type class. I plotted out what the optimal value looked like, and it seemed an odd kind of curve. Now the first thing that entered my head was that the negative of the curve looked like a $q$-th root function, appropriately scaled and shifted. Now, using curve_fit from scipy.optimize in Python, I was able to fit it to other functions, like polynomials or exponentials, but either they were very sensitive to parameters or were not very good fits - something that can be quantified using the output of the curve_fit function. Anyway, it turned out that the best fit among all the functions I tested was indeed given by that weird function, and that’s what found its way into the paper. We also found that perturbing the optimal values for the constants in this fit did not affect the final result much - so this fit was robust in some sense. If you look at the curve and some other functional form comes to your mind, I would love to know it!\n\nFinal thoughts\n\nOnce we obtained the formula for the size of the Lee ball, all that remained to do was to substitute the result in the formulae from Berlekamp’s book and plot the results. The results for $q = 6$ are shown below. We also verified that $q = 2$ with our method gives the same results as the binary Hamming case pointed out above - one way in which we were able to be somewhat sure that the method works.",
null,
"What the bounds look like for the Lee metric with alphabet size of $$6$$. Note a few similarities with the Hamming case. The GV and EB bounds agree on where the rate is zero ($$\\delta = 1.5$$). There is again a gap between the upper and lower bounds, and the EB is again the tightest upper bound.",
null,
"Our technique (markers) compared with the known results for the Hamming metric case (lines).\n\nIf you are wondering what we used to make the graphs, there’s a very nice way of using Latex in matplotlib in Python, and matplotlib can also be made to output the graphs in SVG, for the rather good looking graphs that you see both in this post and the paper.\n\nIf you are interested in looking at out code, it is all reasonably well documented and uploaded on Github. If you have any thoughts about this work, use the comment section below or get in touch via email. Provided things work out, I might be presenting this at ISIT 2019, so this might not be the last time that this paper is featured on this blog.\n\n1. Brief side note here. One might wonder about the metric that describes the situation where ‘closeness’ is defined by arranging on a line and not a circle. In this case, it can be shown that there are no channels that are matched to this metric, and so, even though the metric is mathematically interesting. it doesn’t concern us that much here (it’s not just mathematically interesting, but a situation where it would be of interest to an information/communication theorist would be too big a detour for this blog post.)\n\n2. In the Hamming case the gap exists for alphabet sizes less than $49$, above which there is a construction based on algebraic geometry that closes the gap.\n\n3. Point to note - for ‘nice’ metrics, one can always translate in this way to calculate the size of the neighbourhood around zero.\n\n4. One of the cool properties of types is that all members of the type class have the same probability.\n\nTags:\n\nCategories:\n\nUpdated:"
] | [
null,
"https://sagnikb.github.io/assets/images/bounds.svg",
null,
"https://sagnikb.github.io/assets/images/Leebounds.svg",
null,
"https://sagnikb.github.io/assets/images/comparisonbounds.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.96349996,"math_prob":0.97985333,"size":17033,"snap":"2019-26-2019-30","text_gpt3_token_len":3543,"char_repetition_ratio":0.14299136,"word_repetition_ratio":0.018867925,"special_character_ratio":0.20225444,"punctuation_ratio":0.07114744,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99698234,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-25T13:29:38Z\",\"WARC-Record-ID\":\"<urn:uuid:cd40f340-8882-4944-8e51-f75cf84b4b1d>\",\"Content-Length\":\"38728\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:24127457-a686-4c46-8806-009329e6bbba>\",\"WARC-Concurrent-To\":\"<urn:uuid:cffbd639-99b0-4a31-97ef-d89b85423006>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://sagnikb.github.io/blogposts/leemetric/\",\"WARC-Payload-Digest\":\"sha1:WFVVJHAWI6BGSR2ENTBEENFY25Y6DLO4\",\"WARC-Block-Digest\":\"sha1:7WAL7AMVKHE2LNW7OJPJOZQAXY2KOXRV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999838.27_warc_CC-MAIN-20190625132645-20190625154645-00370.warc.gz\"}"} |
https://dsp.stackexchange.com/questions/14699/long-term-average-spectrum-of-large-batch-of-audio-files/35515 | [
"# Long Term Average Spectrum of large batch of audio files\n\nI'm trying to obtain the overall LTAS of a large batch of audio (wav) files. To do this, I should obtain the LTAS of each file individually, then take the average value across all LTASs for each frequency. I also need the standard deviation for each frequency. I'm doing this in Matlab, but am rather a noob. Can anyone offer any pointers to help me write a Matlab script to do this? Thanks!\n\n2. use \"pwelch()\" to calculated the power spectrum of the songs\n3. you do energy averagin over multiple songs.\n\nSomething like this (just a code sketch)\n\nnx = 8192; % that's about 5 Hz @44.1 kHz\ndd = dir('*.wav'); % all files in the current directory\nnFiles = length(dd);\nLTAS = zeros(nx/2+1,nFiles)\nfor i = 1:nFiles\nLTAS(:,i) = pwelch(x,hanning(nx),nx/2);\nend\nLTASOverAllSongs = mean(LTAS,2);\n\n• Thanks for your reply, Hilmar. I can't figure out what to do with the wavread - x = wavread(dd(i).name; - line, though. That's not how I've used wavread previously. Any suggestions? Thanks! – Nobbynomates Mar 3 '14 at 14:35\n• It is simply reading the wav file listed previously by dd=dir('*.wav'); (all in current folder). Please mind that there is a missing bracket in this line, should be: x = wavread(dd(i).name). – jojek Mar 29 '14 at 20:12\n\nI know this question is a few years old but here is a solution for the record:\n\nLTASS.m function for MATLAB (Github)\n\n(This is a similar solution to Hilmar's)(Note: Will require getAllFiles() function to work)\n\n• Hi, can you explain further how to make \"getAllfiles()_function\" to work? I am very new to MATLAB and unsure how to solve this. Thanks! – rjh3m8 Jan 30 '20 at 21:10"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8879323,"math_prob":0.6918982,"size":1749,"snap":"2021-31-2021-39","text_gpt3_token_len":498,"char_repetition_ratio":0.09226361,"word_repetition_ratio":0.0,"special_character_ratio":0.28816468,"punctuation_ratio":0.14325069,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9715038,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-04T00:22:28Z\",\"WARC-Record-ID\":\"<urn:uuid:537c2b77-458f-4574-afff-8d5ac745650a>\",\"Content-Length\":\"176500\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3621efd5-c91e-45ab-a0e6-4c10ce90d661>\",\"WARC-Concurrent-To\":\"<urn:uuid:612e5a09-af19-4d8b-a64e-11518c47a06b>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://dsp.stackexchange.com/questions/14699/long-term-average-spectrum-of-large-batch-of-audio-files/35515\",\"WARC-Payload-Digest\":\"sha1:XG6OEM72C5QZUU6KSBVYJC2QOZC2P5UT\",\"WARC-Block-Digest\":\"sha1:X44U2WCE6G65LXE4T3LRYPCWOKHKJVG5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154486.47_warc_CC-MAIN-20210803222541-20210804012541-00429.warc.gz\"}"} |
https://gamedev.stackexchange.com/questions/53667/throw-object-towards-predefined-place | [
"# Throw object towards predefined place? [duplicate]\n\nI have a 2D side-view game with object attributes such as x, y, speedX, etc. Now I want to throw a given object from any place on the screen onto a defined goal place, in such a way that its trajectory looks somewhat normal (with gravity, it making a curve and such). Imagine an enemy is destroyed and what he stole flies from him back towards the stolen object's origin. (I do have physics via Box2D in this game, but for this particular need, it seems I require a manual approach; for one thing, the object is allowed to pass other bridges and such for the time it flies back to its main bridge.)\n\nHow would I go about this? I've tried moving manually by adjusting towards a target speed and such, but the trajectory and timing looks very unnatural. Thanks!",
null,
"Wiki has article about projectile trajectory here.\n\nMaybe also this will help you (it helped me). It's brief explanation of two approaches to your problem - variable strength and angle of projectile being shoot. At the end of this post, there's short \"Try it yourself!\" section, where small JavaScript implementation exists (which may, again, help you).\n\nGood luck!\n\nOne approach would be to use a parabolic trajectory, which is the shape that normal projectiles take.\n\nWe can use three points to define the parabola:\n\n• Initial object position\n• Final object position\n• Turning point\n\nThe turning point can be a constant horizontal and vertical distance away from the target, it just needs to flip slides to suit the initial position.",
null,
"The parabola equation we are using is",
null,
"Solve the following simultaneous equations to find a and b.",
null,
"Now we might be interested in finding the launch angle and a velocity. First we manipulate the projectile motion equations to obtain the following",
null,
"The above equation takes the parabolic form we outlined above. To find the launch angle theta we can simply use",
null,
"If we know the animation time t we can find the launch velocity v_0 with",
null,
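The formula images above may not be visible here, so the following is one possible concrete reading of this answer, written as a hedged Python sketch under my own assumptions: the turning point is a fixed offset from the target, flipped to face the start, and the parabola is fitted through the three points with polyfit; plan_throw, apex_offset and the example numbers are made up for illustration.

```python
import math
import numpy as np

def plan_throw(start, target, apex_offset=(2.0, 3.0), t=1.0):
    x0, y0 = start
    x2, y2 = target
    # Turning point: fixed offset from the target, flipped to face the start.
    x1 = x2 - apex_offset[0] if x0 < x2 else x2 + apex_offset[0]
    y1 = y2 + apex_offset[1]
    # Parabola y = a*x^2 + b*x + c through the three points.
    a, b, c = np.polyfit([x0, x1, x2], [y0, y1, y2], 2)
    theta = math.atan2(2 * a * x0 + b, 1.0)   # slope at the start point -> launch angle
    vx = (x2 - x0) / t                        # constant horizontal speed for flight time t
    v0 = vx / math.cos(theta)                 # launch speed along the launch direction
    g = -2 * a * vx * vx                      # gravity consistent with this parabola
    return theta, v0, g

theta, v0, g = plan_throw(start=(0.0, 0.0), target=(10.0, 0.0))
print(math.degrees(theta), v0, g)
```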
"• I waited because I wasn't sure which of the two replies to mark as answers. Yours is extensive and great but I should have clarified that I was looking more for code or pseudo code. Apr 12, 2013 at 7:35"
] | [
null,
"https://i.stack.imgur.com/EcaEC.png",
null,
"https://i.stack.imgur.com/WxSiL.png",
null,
"https://i.stack.imgur.com/LdUbP.png",
null,
"https://i.stack.imgur.com/kPJ6C.png",
null,
"https://i.stack.imgur.com/ARugY.png",
null,
"https://i.stack.imgur.com/curBf.png",
null,
"https://i.stack.imgur.com/Mltui.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9716525,"math_prob":0.80530775,"size":758,"snap":"2022-40-2023-06","text_gpt3_token_len":168,"char_repetition_ratio":0.09549072,"word_repetition_ratio":0.0,"special_character_ratio":0.21899736,"punctuation_ratio":0.10691824,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9642308,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-30T00:09:14Z\",\"WARC-Record-ID\":\"<urn:uuid:8d6fbe51-26ac-416a-b605-1d64173b5cf2>\",\"Content-Length\":\"221151\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c62ada32-986e-4224-828b-0f093a2af841>\",\"WARC-Concurrent-To\":\"<urn:uuid:0bed582b-8738-4c94-a5cc-bd2da642d8ea>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://gamedev.stackexchange.com/questions/53667/throw-object-towards-predefined-place\",\"WARC-Payload-Digest\":\"sha1:FHF26SCRMVWDD7JRF2SDH5ATOTX32ECO\",\"WARC-Block-Digest\":\"sha1:J6V7VNHT6U27V7WEFRMQV6LHSHRRNUEH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335396.92_warc_CC-MAIN-20220929225326-20220930015326-00564.warc.gz\"}"} |
https://de.maplesoft.com/support/help/maple/view.aspx?path=Student%2FLinearAlgebra%2FRandomMatrix | [
"Student[LinearAlgebra] - Maple Programming Help\n\nHome : Support : Online Help : Education : Student Packages : Linear Algebra : Computation : Constructors : Student/LinearAlgebra/RandomMatrix\n\nStudent[LinearAlgebra]\n\n RandomMatrix\n construct a random Matrix\n RandomVector\n construct a random Vector\n\n Calling Sequence RandomMatrix(r, c, options) RandomVector[o](d, options)\n\nParameters\n\n r - non-negative integer; row dimension of the resulting Matrix c - (optional) non-negative integer; column dimension of the resulting Matrix [o] - (optional) use either [row] or [column] to specify the orientation of the resulting Vector d - (optional) non-negative integer; dimension of the resulting Vector options - (optional) parameters; for a complete list, see LinearAlgebra[RandomMatrix]\n\nDescription\n\n • The RandomMatrix(r, c) command returns an $rxc$ Matrix in which all entries have integer values in the range $-99..99$.\n The RandomMatrix(r) command returns an $rxr$ Matrix in which all entries have integer values in the range $-99..99$.\n • The RandomVector(d) command returns a d-dimensional Vector in which all entries have integer entries in the range $-99..99$.\n The RandomVector[row](d) command behaves like RandomVector(d), except that a row Vector is returned. If the orientation option is omitted or if RandomVector[column](d) is used, a column Vector is returned.\n If the dimension is not provided, it defaults to 0.\n\nExamples\n\n > $\\mathrm{with}\\left({\\mathrm{Student}}_{\\mathrm{LinearAlgebra}}\\right):$\n > $\\mathrm{RandomVector}\\left(2\\right)$\n $\\left[\\begin{array}{c}{-31}\\\\ {67}\\end{array}\\right]$ (1)\n > ${\\mathrm{RandomVector}}_{\\mathrm{row}}\\left(6\\right)$\n $\\left[\\begin{array}{cccccc}{8}& {69}& {99}& {29}& {44}& {92}\\end{array}\\right]$ (2)\n > $\\mathrm{RandomMatrix}\\left(3,4\\right)$\n $\\left[\\begin{array}{cccc}{-98}& {27}& {-72}& {-74}\\\\ {-77}& {-93}& {-2}& {-4}\\\\ {57}& {-76}& {-32}& {27}\\end{array}\\right]$ (3)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.51564324,"math_prob":0.99434197,"size":1450,"snap":"2019-43-2019-47","text_gpt3_token_len":365,"char_repetition_ratio":0.16597511,"word_repetition_ratio":0.13368984,"special_character_ratio":0.22965518,"punctuation_ratio":0.105882354,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99928766,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-18T09:13:52Z\",\"WARC-Record-ID\":\"<urn:uuid:739d6874-d1b9-47b2-b7c0-0a7fcd3918bf>\",\"Content-Length\":\"165990\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:232a1dd5-1fc8-4f43-a02b-79a40e66ddb2>\",\"WARC-Concurrent-To\":\"<urn:uuid:1fd99803-3653-4049-8537-6a032a185475>\",\"WARC-IP-Address\":\"199.71.183.28\",\"WARC-Target-URI\":\"https://de.maplesoft.com/support/help/maple/view.aspx?path=Student%2FLinearAlgebra%2FRandomMatrix\",\"WARC-Payload-Digest\":\"sha1:53TB3GKTPD6PZLO7FKFGKGYI5HQCL4QK\",\"WARC-Block-Digest\":\"sha1:BPJPEHCWWLVMF7AQHHGTC3PCLTALTFVJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986679439.48_warc_CC-MAIN-20191018081630-20191018105130-00006.warc.gz\"}"} |
https://physics.stackexchange.com/questions/418423/measuring-quantum-mechanics-changes-the-answer-and-energy | [
"# Measuring quantum mechanics changes the answer and energy?\n\nA thought occured to me thinking about the quantum slit experiment. Upon firing an electron or particle it would impart an opposing force on the machine shooting it. The direction if this force should give you the direction your electron should be going and allow you to predict if it went through a slit or not. When you do, does it count as a measurement and stop the interference pattern? Or does the particle need to be measured?\n\nImagine for a moment it does still create an interference pattern because you didnt measure the particle itself. If you then measure the electron if and when it goes through a slit AFTER you measured the force in the opposite direction, would the energy be at odds compared to a particle going the exact same direction but it wasn't measured (because it would go through both slits and none at the same time and thus have multiple potential directions)?\n\n• In quantum mechanical dimensions there is the Heisenberg uncertainty principle which encapsulates the probabilistic nature of quantum mechanical predictions. Itis the probabilities that have wave behavior, not the particle and the slits themselves. It means that the trajectory of the electron in this single electron at a time hitting dounle slits en.wikipedia.org/wiki/… cannot be defined as a classical trajectory, but as a probable classical trajectory, dependent on boundary conditions in interacting with the two slits. – anna v Jul 19 '18 at 13:35\n• I'm guessing you wouldn't be able to measure this within the uncertainties. Ion propulsion and laser propulsion are illustrations that there is scope for some measured changes. – CriglCragl Jul 19 '18 at 13:48\n• @Anna v I understand that (kind of). The trajectory of the particle cannot really be defined classically, but the recoil/opposing force the particle had upon being fired at the slits would define it's classical trajectory without changing the particles path through measurement (I assume). If you then measure the particle it changes it's unclassical undefined trajectory to a classical defined trajectory. So would that change the measurement of recoil direction at the machine firing the particle? – Demigan Jul 19 '18 at 15:34\n• The firing machine which you hypothesize, is also ruled by quantum mechanics as long as it is firing quantum mechanical entities. For a given single event, there will be the probabilistic uncertainty of \"firing\" , no classical trajectory can be defined in this sense. everything is probabilistic, although energy and momentum are conserved for each event, but there is the heisenberg uncertainty on position and momentu – anna v Jul 19 '18 at 17:47\n• – Stéphane Rollandin Jul 19 '18 at 20:51\n\n## 5 Answers\n\nEven if you know the initial direction of the particle, this does not tell you which slit the particle will go through. The wavefunction (probability wave) of the particle will still follow the Schrodinger equation. For example, the initial condition in solving the Shrodinger equation could be a Gaussian wave packet in space centered at the measured starting position with some standard deviation determined by the device measuring the initial position. The Gaussian will also be determined by the measured momentum (with some magnitude and direction, each with their own associated uncertainty). 
This wavefunction will then evolve according to the Schrodinger equation.\n\nGetting to the heart of your question: just because we know the initial position and direction of the particle does not mean it follows a well defined trajectory afterwards. This is just not the case according to basic interpretations of quantum mechanics.\n\n• Alright, so the answer is \"no, measuring the particle will have no effect on the measurement of the opposing force on the machine\"? – Demigan Jul 19 '18 at 15:37\n• @Demigan The particle goes through both slits, if it can. See the delayed choice experiments for more on that in action – CriglCragl Jul 19 '18 at 16:11\n• @Demigan What do you mean by \"measuring the particle\"? Do you mean measure the position? If you are measuring the force then don't you have some information about where the particle was located upon force measurement? – Aaron Stevens Jul 19 '18 at 21:21\n• @Aaron Stevens I mean the measurement normally associated with changing its outcome by determining which slit it goes through. Also wouldn't you be able to extrapolate its position upon force measurement? The velocity of the particle can be found based on the amount of opposing force on the machine, and the direction of the opposing force gives you the direction of the particle (give or take the precision of the measurement). Use the time it took for the measurement and you at least narrow the amount of potential positions it could be in. – Demigan Jul 19 '18 at 21:29\n• @Demigan You are on the right track by saying \"at least narrow the amount of potential positions it could be in.\" You can calculate the spreading out of the wave function based on the initial conditions. But this is based on the Schrodinger equation, not classical trajectories. – Aaron Stevens Jul 19 '18 at 22:46\n\nThere is only interference if the setup is such that it is not possible to determine through which slit the electron went.\n\nWhenever you're analyzing such conceptual problems, it's very useful to try and make a concrete model of the system you're considering. Maybe even multiple models, to better see the common properties.\n\nSuppose you have an apparatus which fires an electron in one direction, and recoil (or whatever other mechanism) results in a second electron being ejected with the opposite momentum. To be able to use the momentum being opposite to predict where the first electron will be, you need the initial positions to also be accurately defined: otherwise even classically you'll get uncorrelated results.\n\nNow, the two-electron system, being quantum mechanical, obeys the Heisenberg uncertainty principle. Since both particles are in a precisely defined location, their total momentum is quite uncertain. This then means that, by measuring the momentum of the second electron, you'll obtain the momentum of the first electron... ± that additional uncertainty. The final result will be that you'll still get garbage results if you try to answer the which-way question for the first electron using this measurement of the second electron's momentum.\n\nIf you are able to determine which slit the particle went through, by any means, there is no interference pattern.\n\nIn practice, I think it would be difficult to set up the experiment as described. 
But in principle, yes, if you measured the recoil accurately enough to determine which slit the particle went through, the interference pattern would disappear.\n\n(Although as Aaron's answer points out, knowing the momentum does not necessarily determine which slit the particle goes through. I'm assuming that the momentum is high enough, and the slits far enough apart, that the probability of a particle going through the \"wrong\" slit is minimal. The probability is never going to be zero, so there will always be some interference, but it could in principle be minimized - and that interference pattern would look different to the original one anyway.)\n\nThis (your first paragraph) is the same question/famous objection Einstein posed in the Bohr-Einstein debates: https://en.wikipedia.org/wiki/Bohr%E2%80%93Einstein_debates#Post-revolution:_First_stage\n\nSo basically Bohr's reply was: You also need to apply QM to the double slit. To measure the momentum transfer, you have to have a sharp momentum (of the measurement apparatus) first. But then your position is not determined, leading to a washing out of the interference pattern."
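As a small numerical companion to the first answer's remark that the spreading of the wave function can be calculated from the initial conditions, here is a minimal sketch (my own, not from any answer above) using the standard free-particle result that a Gaussian packet's position spread grows as sigma(t) = sigma0*sqrt(1 + (hbar*t/(2*m*sigma0^2))^2); the 1 nm initial spread is an arbitrary illustrative choice, not tied to any particular slit geometry.

```python
import numpy as np

hbar = 1.054571817e-34      # reduced Planck constant, J*s
m = 9.1093837015e-31        # electron mass, kg
sigma0 = 1e-9               # initial position spread: 1 nm (illustrative)

for t in (0.0, 1e-12, 1e-10, 1e-8):     # times in seconds
    sigma = sigma0 * np.sqrt(1 + (hbar * t / (2 * m * sigma0**2))**2)
    print(f"t = {t:.0e} s  ->  sigma = {sigma:.3e} m")
```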
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9533112,"math_prob":0.818336,"size":887,"snap":"2019-35-2019-39","text_gpt3_token_len":169,"char_repetition_ratio":0.14043035,"word_repetition_ratio":0.0,"special_character_ratio":0.18940248,"punctuation_ratio":0.05421687,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97852325,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-24T00:33:25Z\",\"WARC-Record-ID\":\"<urn:uuid:d5a39960-04a1-44ab-bfa5-e6cc212093a9>\",\"Content-Length\":\"160251\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4f8c5b3a-74ec-472c-b704-2a242f64a7ec>\",\"WARC-Concurrent-To\":\"<urn:uuid:cd399d1d-6005-47ea-9269-b5434f520f3d>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/418423/measuring-quantum-mechanics-changes-the-answer-and-energy\",\"WARC-Payload-Digest\":\"sha1:6EUUSPYNIGZ335CODPXBHJ34YKVPYXAN\",\"WARC-Block-Digest\":\"sha1:OVJLZPNYAAAO3YXYMOLCV4EBJ73Y7YAZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027319155.91_warc_CC-MAIN-20190823235136-20190824021136-00330.warc.gz\"}"} |
http://physicsbuzz.physicscentral.com/2014/01/does-1234-112.html?m=1 | [
"### Does 1+2+3+4+ . . . =-1/12?\n\n(Spoiler alert: Yes it does . . . probably . . . um, maybe . . .I mean, it has to.)\n\nAn interwebs firestorm has been raging recently about a Numberphile video that makes the astounding claim that if you add up all the positive whole numbers from one to infinity, the result will be -1/12. To write it out more concisely\n\n1+2+3+4+ . . . = -1/12 , (where the three dots indicate all the rest of the positive numbers up to infinity)\n\nIf you haven't seen the video, take a look - it's short.\n\nFascinating, isn't it?\n\nRenowned science writer and astronomer Phil Plait (Bad Astronomy) blogged about the video recently, calling it \"simply the most astonishing math you'll ever see.\" The post led to a Twitter and comment storm, fueled both by people bowled over by the calculation and a much larger number of people convinced it was nothing short of mathematical fraud.The passionate response he got to his post led Plait to write a follow up piece, partly in self defense, and partly as penance for his various mathematical sins as pointed out by his readers.\n\nClearly, only a fool would consider defending this absurd calculation after the reception Plait got.\n\nSo here I go . . .\n\nI'm not going to follow Plait's example of trying to explain the math that goes into the calculation. But I will point out that many of the problems that commenters and Twitterers latched onto are irrelevant if you look at the more elaborate discussion in the Numberphile's extra footage. In the much longer second video, they come to the same reviled result as in the first video, except that they use an approach first written down by Leonhard Euler.\n\nIf you dislike the initial video, you really should watch this one to see if it sways you at all.\n\nThat's much better, isn't it? I'm sure it's not perfect, but the flaws are beyond my mathematical abilities to recognize.\n\nIn any case I'm willing to believe 1+2+3+4+ . . . = -1/12 is a mathematically legitimate thing to write down for the following three reasons.\n\n1. Euler, who was one of the greatest mathematicians of all time, proved the equation for real numbers.\n\n2. Another great mathematician, Bernhard Riemann, generalized Euler's approach to include complex numbers, and came up with the same equation.\n\n3. My favorite mathematician, the self-taught genius Srinivasa Ramanujan, rediscovered the equation and stood by it, even though he realized that he might be thought be mad for making the claim, writing in a letter to mathematician G.H. Hardy, \"I told him that the sum of an infinite number of terms of the series: 1 + 2 + 3 + 4 + · · · = −1/12 under my theory. If I tell you this you will at once point out to me the lunatic asylum as my goal.\"\n\nSo, counting the Numberphiles' somewhat dubious derivation, there are at least four ways to prove that the sum of all the positive integers equals -1/12. And as far as I know, there's no way to prove that it doesn't equal -1/12.*\n\nIf you don't believe any of these people, then there's nothing I can do, mathematically speaking, to change your mind. I mean these guys are among the greatest. What could I add that would improve on their proofs?\n\nBut, Obviously 1+2+3+4+ . . . = -1/12 Doesn't Really Mean Anything - Right?\n\nOf the mathematicians and physicists I've talked to about it, several of them are willing to accept that it's possible to derive the equation, but insist that it's meaningless. 
They tell me, if I understand them correctly, that it's some sort of numeric fluke that can't possibly have any consequences in the real world. There's just no way to add positive numbers together get a negative result in reality, especially when the numbers you're adding are getting larger and larger. In effect, it's nothing more than an artifact that results from a method that makes sense when applied to complex variables or other series, but not for the sum of positive integers. To think otherwise would be nuts, right?\n\nThe problem is, they're wrong (or so a number of physicists have told me). The equation 1+2+3+4+ . . . = -1/12 is vital for describing the real world.\n\nAs the Numberphile people point out, the dreaded equation pops up in many places in physics. They specifically note it's appearance in a string theory textbook (see page 22 in this Google book). But that's only one example and, depending on how you feel about string theory, among the least convincing ones. What's much more compelling is the fact that this sort of equation is integral to Quantum Electrodynamics (QED).\n\nQED is the theory that explains the interaction between charged particles like electrons and protons. Along with neutrons, electrons and protons make up atoms, which in turn make up molecules and everything built of them. In other words, QED essentially describes much of the physical world we live in. And it does it extremely well. QED calculations for the spin of the electron have been confirmed to better than one part in ten trillion - making QED just about the most precise and successful theory of all time.\n\nIf QED is correct (and it appears to be the most correct theory yet developed, if experimental confirmation is a reasonable way to judge correctness), then I would argue that the things that go into QED calculations must be just as correct. Doing QED calculations requires using 1+2+3+4+ . . . = -1/12, so the equation is at least as correct as QED theory itself.\n\nIn fact, the Wikipedia page on a QED phenomenon known as the Casimir Effect shows a derivation of the effect that includes an even more audacious equation involving the sum of the cubes of the natural numbers up to infinity. Specifically, calculating the effect involves using the equation\n\n1^3+2^3+3^3 +4^3+ . . . = 1/120, (where the notation 2^3 means 2x2x2)\n\n(In the Wikipedia article, they have an equation that looks like this",
null,
"$\\zeta(-3)=1/120$, but the stuff on the left hand side is just another way of writing 1^3+2^3+3^3 +4^3+ . . .)\n\nThe number on the right is positive this time, but it's ten times smaller than 1/12, even though each of the terms in the sum is much bigger than the corresponding terms in the equation 1 + 2 + 3 + 4 + . . . = −1/12 (except for the first term, of course, since 1^3 = 1). Both equations come from the same sort of derivation, so it's not surprising that they are both seemingly incredible and ridiculous. But if you believe in QED and the Casimir Effect, how can you not believe the pieces that go into them?\n\nMaybe It's Just a Trick\n\nOne response I've gotten after querying my more mathematically savvy friends is that the equations are nifty tricks, and nothing more, to get rid of infinities in QED and produce the correct finite answers. I guess that's possible, but you would have to be one heck of a mathamagician to come up with a trick resulting in accuracy of a part in ten trillion.\n\nIt's even more impressive when you consider that the QED predictions came before the experiments that measured things like the electron spin to fourteen decimal places. It's one thing to design a trick to rationalize a number you already know. It's a whole other matter to come up with a trick that gives you the answers in advance of the experiment. In that case, it's not a trick, it's simply a very good theory.\n\nMaybe It's Not Necessary, Just Handy\n\nOne final possibility that I can think of is that the equations are not really necessary for doing QED calculations, and that instead there is a correct and intelligible approach that gives answers without using nonsense like 1 + 2 + 3 + 4 + . . . = −1/12 or 1^3+2^3+3^3 +4^3+ . . . = 1/120.\n\nI can't imagine why physicists would rather rely on trickery than doing things correctly, so I tend to dismiss the idea that some sort of mathematical conspiracy is behind it all. If it turns out that it's possible to have physical theories that describe the real world as well as QED does without relying these equations, then we might as well use those theories and forget the whole controversy.\n\nSo What's Really Wrong?\n\nIf you accept that Euler, Riemann, and Ramanujan did things properly when they found 1 + 2 + 3 + 4 + . . . = −1/12, and if you accept that it and related equations are necessary to describe the real world, then how can you not accept that the equation is true? And yet, many people still claim that there's something wrong. It doesn't make sense. It's so counter intuitive that the phrase \"counter intuitive\" seems far too weak a description. It's an alien, freakish, mind f----.\n\nBut that's OK. Some things are true without being conceivable. This is just the most recent example I can think of. Pythagoras and and his followers apparently committed human sacrifice because they couldn't handle the idea of irrational numbers. For centuries, ancient mathematicians struggled with unsolvable problems because they didn't know that pi is a transcendental number. And today, there are still things about quantum mechanics that defy intuitive understanding - the whole point of Schrodinger's Cat is to illustrate the absurdity of quantum superposition. 
But just because people didn't intuitively grasp those things, it didn't change the fact the the square root of 2 is irrational, that pi's transcendental nature means it's impossible to square the circle, and that particles can become quantum mechanically entangled just like Schrodinger's Cat.\n\nYes, there's a problem with 1 + 2 + 3 + 4 + . . . = −1/12. But I suspect the problem is with us and our failure to understand infinity. Why shouldn't an infinite sum of numbers going to infinity add up to a finite (and negative!) number? I don't really know what infinity means anyway, so I can't think of any way to object to a statement that includes not one but TWO infinities in it.\n\nYou might as well ask me why a bandersnatch of numbers going to bandersnatch add up to -1/12. But if you're able to mathematically sum a bandersnatch of bandersnatches, and then use that sum to describe the real world and predict the outcomes of real world experiments I have no choice but it seems unreasonable not to believe your bandersnatch math.\n\nSo, Does 1 + 2 + 3 + 4 + . . . = −1/12 or not?\n\nYou bet your bandersnatch it does! (I think, anyway)\n\n1.",
null,
"But analytic continuation *is* a trick. It's specifically a trick to extend the values of certain types of functions outside the domain where they are valid. A summation of the series of natural numbers isn't valid -- it diverges.\n\nSumming any number of natural numbers can never produce a nonpositive number or a fraction. Infinity just sums an arbitrarily large quantity of these numbers, so axiomatically 1+2+3+...+n can't ever equal anything but an arbitrarily large positive number.\n\nCritically, analytic continuation extends a function beyond its definition, so the summation of the natural numbers is no longer strictly speaking what we're talking about, just like tangent doesn't actually have an inverse function, which doesn't make arctan less useful.\n\nSo the question that boggles our ape brains is what analytic continuation is really doing when it extends functions into the unknown realms where they are not defined, other than producing useful answers for equations in which they're involved.\n\nDoes it have any meaning in the real world? Who knows? What does that question even mean? Does infinity even have any meaning in the real world, or is it just a handy trick for calculating the area under a curve?\n\n1.",
null,
"I don't understand what you mean by calling analytic continuation a trick. According to Wikipedia, it's a mathematical technique. What defines a mathematical trick?\n\n2.",
null,
"It is a trick for a simple reason - zeta function is only defined for s>1, whereas to perform the trick of \"summing up\" the natural numbers if set s=-1. This may be a useful trick, but it's still a trick.\n\n3.",
null,
"What if the analytic continuation is real, and we simple-minded apes initially failed to notice anything but the banana-shaped set of natural numbers sticking out?\n\n2.",
null,
"I was unsure why the numbers of S2 were shifted to the right in order to get the desired result?\n\n1.",
null,
"I assume you're talking about the first video. One of the problems with it is the inappropriate manipulation of series. It's wise of you to question that step. But rather than try to rationalize it, I would recommend you watch the second video and just forget about the first one.\n\n3.",
null,
"You cite the Wikipedia page of the Casimir effect. In fact, this Wikipedia derivation (http://en.wikipedia.org/wiki/Casimir_effect, part \"Derivation of Casimir effect assuming zeta-regularization\", version from 13 January 2014, 20:07) is equivalent to the problem discussed here and clearly shows the most important point: Regularization! The Wikipedia article does not merely sum 1^3 + 2^3 + 3^3 + 4^3 + ... = 1/120. Instead, it sums 1^(3-s) + 2^(3-s) + 3^(3-s) + 4^(3-s) + ..., where s is a (complex) regularization parameter. In the end, s=0 is needed, but already at an early point of the calculation, this parameter s is introduced in order to make the calculational steps mathematically well-defined. The sum which appears only converges for s>4 (or, rather, the real part of s must satisfy Re(s)>4). Under this condition, the sum yields the Riemann zeta function, 1^(3-s) + 2^(3-s) + 3^(3-s) + 4^(3-s) + ... = zeta(s-3), for Re(s)>4. So we get the result for the Casimir energy as a function of the (complex) regulator s. This function is analytic for Re(s)>4, so it can be analytically continued to other values of s, including s=0 which is the desired value for s where the regularization disappears. This function value, zeta(-3) = 1/120, is known, it can be derived by techniques from complex analysis. But zeta(-3) is *not* given by the divergent sum 1^3 + 2^3 + 3^3 + 4^3 + ... which has no well-defined value. Only when specifying a proper regularization scheme with some parameter s, we can perform the calculation, evaluate the infinite sum (as a function of s), analytically continue its result to s=0 and obtain zeta(-3) = 1/120.\n\n1.",
null,
"Can you explain this regularization?\n\n4.",
null,
". . . \"and obtain zeta(-3) = 1/120\" where zeta(-3) = 1^3+2^3+3^3+ . . .\nso 1^3+2^3+3^3+ . . . =1/120, as Buzz said.\n\n1.",
null,
"No! We have zeta(-3) = 1/120, but zeta(-3) is NOT equal to 1^3 + 2^3 + 3^3 + ...! For argument -3, the zeta function cannot be defined through this sum which is one of its possible definitions for arguments larger than 1.\n\nHave a look at possible definitions of zeta(z) e.g. in this book: I.S. Gradshteyn and I.M. Ryzhik, \"Table of Integrals, Series, and Products\" (I found a PDF online there: http://atsol.fis.ucv.cl/dariop/sites/atsol.fis.ucv.cl.dariop/files/Table_of_Integrals_Series_and_Products_Tablicy_Integralov_Summ_Rjadov_I_Proizvedennij_Engl._2.pdf).\nSection 9.513 (page 1036) shows several integral definitions for zeta(z), some of them also valid for negative z, from which one may evaluate zeta(-3). However, the well-known series representation for zeta(z) in section 9.522 1. (page 1037) is clearly indicated to be only valid for Re(z)>1, so in particular z>1 on the real axis. So this sum cannot be used as a definition for zeta(-3).\n\nIn simple terms: Look at f(x) = 1/(1-x). This function is perfectly well-defined for x = 2, then f(2) = -1. On the other hand, for |x| < 1 we can write f(x) = 1 + x + x^2 + x^3 + ... But we cannot define f(x) through this infinite sum for x = 2, because 1 + 2 + 4 + 8 + ... is divergent and does not have a well-defined value. Only within the \"radius of convergence\" |x| < 1, we can define f(x) through 1 + x + x^2 + x^3 + ... There, however, we may evaluate the convergent sum and obtain f(x) = 1/(1-x). And by defining f(x) through 1/(1-x) we may extend its definition from |x| < 1 to all values of x except 1. Still, the sum 1 + 2 + 4 + 8 + ... is divergent and does not have a finite value, only f(2) does, using the definition f(x) = 1/(1-x).\n\n2.",
null,
"Thanks for bringing some clarity into this horrible mess!\n\n5.",
null,
"1+2+4+... is a great example, because it does converge 2-adically, where |2|=0.5 < 1, and it converges to -1 = f(2).\n\nA shadow of this fact is that the binary representation of -1 is 11...11 - calculating mod 2^n is a quotient of calculating 2-adically, and under this quotient the image of the sum is the finite sum of powers of 2 that gives 11...11, whereas the image of -1 is -1 of course, so they're equal.\n\nNow this doesn't have a much to do with zeta functions afaik, but it is a good example of why a \"crazy\" thing to do with a sum can be made quite precise mathematically.\n\n6.",
null,
"You may claim that 1+2+4+8+... (or other sums) converge p-adically, but that is a completely different notion of absolute values and convergence which - as far as I have seen - nobody was using so far, neither in this blog nor in the Numberphile videos.\n\nStill, it is an interesting phenomenon that 1+2+4+8+... converges 2-adically to the same value as the corresponding analytic continuation of the geometric series.\n\nWhat I claim is that, under the \"standard\" definition of absolute values and numbers (which should be assumed unless specified otherwise) the above-mentioned infinite sums do not converge and are not equal to any finite values (-1, -1/12, 1/120 etc.).\n\n7.",
null,
"Nobody said anything about the series converging. They don't converge. They are simply equal to -1/12 and 1/120 when you include infinite terms. But the corresponding series diverge. It's a paradox, and yet true\n\n8.",
null,
"The value of a convergent series is naturally and uniquely given by the value it converges to. But it is not obvious how to define a unique value for a divergent series!\n\nLook at the people you cite above for providing 1+2+3+4+... = -1/12:\nThe Numberphiles' derivation (1st video) you call yourself \"somewhat dubious\", and there are so many flaws in it that I would not even dare to consider it a derivation.\nEuler's proof (as cited in Baez's exercise sheet you linked above) seems to me roughly the same what Numberphile presents in their second, longer video. And Baez himself admits two things: 1) He used \"dirty tricks invented by Euler\". 2) He needs to define some divergent series by their \"Abel sums\". Because of 2) Euler's derivation as presented by Baez is dependent on a special definition (see below). And because of 1), especially the moment when the regularization parameter s disappears, this is again a step where expressions are used which are not rigorously defined and depend on a specific choice of regularization (here using this parameter s). In the end, Baez mentions the analytic continuation of the Riemann zeta function, but he never claims that zeta(-1) is equal to the sum of natural numbers (it is just the analytic continuation of a series with generic powers (-s)).\nAlso Ramanujan only claims this identity to be true only \"under his theory\".\n\nConclusion: In order to define a value for a divergent series, you first have to specify your particular theory or regularization scheme which makes your definition well-defined and consistent. And, because this is then dependent on your definition, how do you know that your value is unique? There are always several ways of defining things.\n\nI might be tempted to define the value of any divergent series by summing up exactly the first 100 of its terms and stopping there. This definition would also be fine. For (fast) convergent series it already provides a decent approximation to their true value. And for divergent series the size of this value is some kind of measure for the degree of divergence. And surely, for all series mentioned here, this so-defined value differs from the one claimed by you (-1/12, 1/120 etc.).\n\nThe regularization via the Riemann zeta function or via the Abel sum mentioned in the Baez / Euler approach have in common the following scheme:\n\nGiven a divergent series F = f_1 + f_2 + f_3 + ..., you introduce a regulator s which changes the series into F(s) = f_1(s) + f_2(s) + f_3(s) + ... such that f_n(0) = f_n, i.e. your original series is recovered for s=0. The new series F(s) should converge within some region of s. Within this region, the series is summed up and its value F(s) as a function of the parameter s is obtained. This function F(s) (not the original divergent series) is then (analytically or by taking the limit) continued to s=0. Finally one defines the value of the original series F to be F(0).\n\nFor F = 1 + 2 + 3 + 4 + ..., we have f_n = n.\nUsing f_n(s) = n^(1-s), we obtain F(s) = zeta(s-1) and therefore define the original series' value to be F(0) = zeta(-1) = -1/12.\nWe could also have used the Abel sum here by f_n(s) = n*(1-s)^n. Then F(s) = (1-s)/s^2. Here the limit s->0 does not exist, so the Abel-sum definition fails or provides at best the \"value\" infinity, which is different from -1/12.\n\n9.",
null,
"I want to stress the following: Especially if want to calculate something meaningful in physics, you should not start by some ill-defined expression, perform non-rigorous transformations on it and be happy to arrive at some finite result. This can go awfully wrong! Always start with an expression which is well-defined by introducing a decent regularization scheme from the beginning. Then perform well-defined transformations and only remove the regulator in the end, when the resulting expression is well-defined and finite without regulator.\nThis is the way which my former colleagues from Quantum Field Theory (QED etc.) and I are using when we are dealing with regularization and renormalization schemes for evaluating loop integrals of Feynman diagrams etc.\n\nDon't ge me wrong: I love \"1+2+3+4+... = -1/12\"! It's a funny equation, we were having it on the clock of my last physics institute's seminar room (in the form \"-1/sum_{n=0}^\\infty n\" at the place of the \"12\"). But this equation is not a rigorous mathematical identity, it rather is a mathematical joke with a large portion of reality background.\n\n10.",
null,
"While I agree that it's preferable to have a solid foundation before attempting to calculate something meaningful in physics, as a rule, I feel like it's too late to apply that standard to QED, among other things, when it comes to 1+2+3+4+... = -1/12.\n\nIt's interesting to delve into the mechanics and details leading to Euler's, Reimann's and Ramanujan's derivations of the equation, but physicists have been using it to make extraordinary theories that describe the world very well. If the equation were different, or didn't exist at all, then the theories (so I'm told) wouldn't work.\n\nSo even though mathematical rigor seems to make the truth of the equation debatable, to mathematicians anyway, the very existence of a world that obeys the theories that depend on it, to a precision of 13 significant digits or more in some cases, is a pragmatic argument for the fundamental truth of 1+2+3+4+... = -1/12.\n\nI hope mathematicians will eventually be able to discover why this is the case. For all I know, it may turn out to be just a fluke. But any joke that can tell me about the real world to a part in ten trillion seems like a very handy joke. Perhaps, as comedians like to say, \"It's funny because it's true.\"\n\n11.",
null,
"QED is a great theory which makes powerful and precise predictions. I know QED because as a theoretical particle physicist I have been working with QED and other quantum field theories for more than twelve years.\n\nBut QED does NOT rely on \"identities\" like 1+2+3+4+... = -1/12. Of course, in QED we encounter infinities almost everywhere. But they are treated by employing regularization and renormalization schemes. In a calculation, if we arrive at a divergent series like 1+2+3+4+..., we know that we have to go back some steps and look for a way to properly regularize our expressions in order to work with well-defined quantities.\n\nAgain I refer to the Wikipedia page for the Casimir Effect: Instead of proceeding with a divergent expression, it introduces the regulator s which makes both the integrals and the series well-defined. Instead of 1^3 + 2^3 + 3^3 + ... it sums 1^(3-s) + 2^(3-s) + 3^(3-s) + ... = zeta(s-3), and in the zeta function (not in the series!) the regulator s can be removed because of analytic continuation.\n\nOf course, physicists are pragmatic, and we cannot always prove mathematically that what we are doing is correct. Fortunately, physicists and mathematicians before us have already proven the correctness of many regularization procedures we are using today. Even weird schemes like dimensional regularization (where the number of space-time dimensions is changed from 3+1=4 to a generic complex number) have been proven, and people have elaborated which calculational transformations are valid there. This makes it relatively easy for us today to simply use such a scheme without having to bother each time about its correctness.\n\nLet me show you a case where physicists have tried to perform a quantum-field-theory calculation without using regularization:\nhttp://arxiv.org/abs/1108.5872\nThis calculation had been done before using dimensional regularization. The authors of the new paper did it without regularizing and got a wrong result! (Notice that their paper never got published in a journal.) The authors themselves pointed out the crucial integral, written down in equation (4.4). While the dimensionally regularized version of this integral yields a finite (and correct) result, they claimed this integral to be zero because of \"symmetric integration\" (i.e., rotational invariance). But their argument is not valid because the integral is not absolutely convergent and depends on the order in which the four components of the loop momentum are integrated. So even renowned physicists can easily make mistakes when they try to solve problems without properly regularizing their expressions.\n\nTo conclude: Using analytic continuation in combination with regularization is crucial for QED (and other fields of physics). But no physicist who wants to obtain correct results naively uses \"identities\" like 1+2+3+4+... = -1/12 without at least knowing what is the correct way of obtaining this result by employing a proper regularization. This \"identity\" as it stands here is simply wrong. But it can be useful for a calculation if you properly regularize the divergent series and apply analytic continuation.\n\n12.",
null,
"I confess, I don't understand what you mean by \"This \"identity\" as it stands here is simply wrong. But it can be useful for a calculation if you properly regularize the divergent series and apply analytic continuation.\" How can it be simply wrong, and yet useful?\n\nAlso, as far as I can tell, the the Wikipedia piece is using the zeta function as defined here http://en.wikipedia.org/wiki/Riemann_zeta_function. So zeta (-3)=1^3+2^3+3^3+ . . .\n\nIn an earlier line, they write sum[abs(n)^(3-s)], then take the limit as s goes to zero, leaving sum[abs(n)^(3)] which looks like the usual zeta function with the argument -3.\n\nSo I still believe I'm right in saying that the Wikipedia includes the equation 1^3+2^3+3^3 +4^3+ . . . = 1/120. It may or may not be a correct equation, but it's what the page says.\n\n13.",
null,
"Concerning the Riemann zeta function, I already answered this in my reply to Anonymous from January 24 at 3:02 AM.\n\nIn short: The Riemann zeta function zeta(s) is an analytic function defined for s in the whole complex except s=1 where zeta(s) has a pole. In my reply from January 24 at 3:02 AM above, I linked the PDF file of a book where you can find various definitions of zeta(s) which are also valid for negative s. However, the series representation zeta(s) = 1^(-s) + 2^(-s) + 3^(-s) + ... is only valid if the real part of s is larger than 1 (so for s>1 on the real axis). That means the series can only be used to define zeta(s) for Re(s)>1, but other definitions are available for Re(s)<1 which may be derived from the series representation via analytic continuation.\n\nThe Wikipedia page on the Riemann zeta function does not state zeta(-3) = 1^3 + 2^3 + 3^3 + ... Instead, it says in the introduction: the \"Riemann zeta function [...] is a function of a complex variable s that analytically continues the sum of the infinite series [...], which converges when the real part of s is greater than 1.\" That is exactly what I explained above. And also the \"Definition\" section of the Wikipedia article restricts the given series definition to Re(s)>1, stating that the \"infinite series defines zeta(s) in this case\". Nowhere in the Wikipedia page do you find \"zeta(-3) = 1^3 + 2^3 + 3^3 + ...\" or \"zeta(-1) = 1 + 2 + 3 + ...\", which would be wrong.\n\nThe other Wikipedia page on the Casimir effect uses the Riemann zeta function in order to sum 1^(3-s) + 2^(3-s) + 3^(3-s) + ... = zeta(s-3), which is valid for real parts of s larger than 4. This zeta(s-3) is then identified with the analytic continuation of the zeta function which is valid also for negative values of s (in contrast to the series!). So, while intermediary steps of the calculation where valid only for Re(s)>4, the limit for s->0 of the final result is taken in the end. Such a procedure is often applied when using regularization, and it is justified by knowing that the calculated quantity is an analytic function of s.\n\nIf I find more time later, I will comment on what I meant by an identity being \"simply wrong and yet useful\".\n\n14.",
null,
"So, what did I mean by saying before \"This 'identity' as it stands here is simply wrong. But it can be useful for a calculation if you properly regularize the divergent series and apply analytic continuation.\"?\n\nThe \"identity\" 1 + 2 + 3 + ... = -1/12 is wrong, because the summation of a divergent series does not have a value and cannot be equal to a finite number.\nHowever, the following statement is correct:\n\n\"The Riemann-zeta-regularized summation of 1 + 2 + 3 + ... yields -1/12.\"\n\nIf you mean a specific regularization for your divergent series, then you have to name it. Simply writing \"1 + 2 + 3 + ...\" implies standard summation by adding consecutively more and more terms, and this does not yield a finite value here.\n\nWhat \"Riemann-zeta-regularized\" means is that you replace the original divergent series by the new series 1^(-s) + 2^(-s) + 3^(-s) + ... which converges for s>1. (I am now omitting \"real part\" for simplicity.) This new series is summed, it yields zeta(s). Then this function zeta(s) is analytically continued to negative values of s, where the series itself has no meaningful value any more. The result of these steps is zeta(-1) = -1/12. Mark: zeta(-1) is not equal to the original divergent series, but it follows from it through the regularization steps described here.\n\nFor a physics calculation, it is only safe to apply such Riemann-zeta regularization if the regulator s is introduced at a point in the calculation where everything is still mathematically well-defined, i.e. before arriving at a divergent series. Otherwise there are caveats such that the calculation might lead to right or wrong results, you never know in general, and manipulating divergent expressions is a dangerous thing to do.\n\nBy the way, look at the Wikipedia page for \"1 + 2 + 3 + 4 + ⋯\" (http://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF). This page says that the series \"can be manipulated to yield a number of mathematically interesting results\", that it is \"'summed' by zeta function regularization\" ('summed' even in quotation marks), or that its \"Ramanujan summation [...] is also -1/12\". These formulations are different from stating plainly \"1 + 2 + 3 + ... = -1/12\", which is clearly avoided in the Wikipedia article.\n\n15.",
null,
"The twelfth root of 2 is how we divide the musical scale, each note is 1/12th of the octave, wonder if there is a connection. Its often assumed that all music strictly cultural but the evidence does not support it, for example tone/tempo of voice carries emotional information you can understand even if you don't know the language and have no other clues, play a film in a foreign language while blindfolded and you can still tell what the characters are feeling by tone/tempo of voice alone. The variances in tempo and pitch that convey this information are indistinguishable from musical variances, we speak with music only some of which carries abstract associative meaning but most of which carries universally understood emotional state information. This suggests that the emotional content of music is conveyed by adherence to universal rules that are just as solid as the rules of physics with the 12th root of two the simplest generator of the frequencies for both tempo and tone.\n\n16.",
null,
"Davide CastelvecchiJanuary 28, 2014 at 12:10 PM\n\nAnalytical continuation is not a way to make divergent series converge. It just isn't.\n\nAnalytical functions are by definition those that admit a power\nexpansion at every point. That means that *locally* you can write them\nas the sum of their Taylor expansion. (And no, not all functions are\nanalytic, even if they admit infinitely many derivatives: sometimes\nyou can write the Taylor expansion but that does not equal the\nfunction you started with. But I digress.)\n\nBeing analytic does not mean that the function is the same as its\nTaylor expansion at a point. The Taylor expansion, in particular, may\nstart diverging at points where the function is perfectly well\ndefined. Taylor expansions have a well-defined convergence radius in\nthe complex plane, and that radius may be infinite (as it is for the\ncase of e^z, say) but it is often finite.\n\nSo, if you start with a given Taylor series, there can be a function\nthat the Taylor series was the expansion of (at a given point), and\nthat function may be well-defined on a larger domain than the Taylor\nseries was. But that doesn't mean that you are making the original\nTaylor series converge. It still diverges: it's just that it only\nrepresents the function _within_ its radius of convergence.\n\nIf you want to play with an example, take f(z)=1/(z-1). f(z) is\nperfectly well defined at any complex number z, as long as it's not 1.\nNow take its Taylor expansion at z=0, meaning\n\nf(0) + f'(0)z + 1/2f\"(0)z^2 + 1/6 f'''(0)z^3 + ...\n\nIts radius of convergence is 1, which means that the Taylor series\n*diverges* for any |z|>1. The fact that f(z) is well defined for *all*\n|z|>1 doesn't magically make the series converge there. It's the\n*function* that is well defined there, not the Taylor series. If you\nhad started with the Taylor series, and then realized that it was the\nTaylor series of 1/(z-1), then you could say that f(z) was an\nanalytical continuation of the Taylor series. But again, doing that\ndoesn't make the series any more convergent than it was before. This\nis stuff that has been figured out once and for all 200 years ago.\n\nAbout 1+2+3... Evelyn Lamb has already said everything that there is to say: \"There is a meaningful way to associate the number -1/12 to the series 1+2+3+4…, but in my opinion, it is misleading to call it the sum of the series.\"\nEnd of story.\nhttp://blogs.scientificamerican.com/roots-of-unity/2014/01/20/is-the-sum-of-positive-integers-negative/\n\n17.",
null,
"I think the point is that no one is saying they converge. It's the same as for the series 1-1+1-1+1-1+...\n\nIt doesn't converge, but if you include all the terms to infinity (not the limit as you go to infinity, but the value when infinite terms are included), it equals 1/2.\n\nIf you approach it by thinking in terms of convergence, as you point out, there's nothing you can do with zeta(-1), zeta (-3), or 1-1+1-1+1 . . . And yet, using the values that Euler, Reimann, and Ramanujan calculated for those things give you meaningful answers to real, testable theories.\n\nI can't see a way to rectify those two things - either the equations and the theories are true, in some sense anyway, or they're each wrong in ways that precisely (to 12 or more decimal places) correct the errors the other one produces. The second option seems much harder to believe than the first.\n\n18.",
null,
"Davide CastelvecchiJanuary 28, 2014 at 1:09 PM\n\nI am sorry Physics Buzz, but you are talking nonsense. The statement \"include all the terms to infinity\" is devoid of meaning. And as Evelyn Lamb explained, you are misrepresenting what Ramanujan & co. did. I don't think there is anything more I can say at this point other than suggesting that you read the sentence \"There is a meaningful way to associate the number -1/12 to the series 1+2+3+4…, but in my opinion, it is misleading to call it the sum of the series\" again and again. Perhaps it would have been advisable to do so before blogging about this on a website that's supposed to be educational.\n\n19.",
null,
"Ramanujan (and Euler) wrote\n\n1+2+3+4…= -1/12\n\nRamanujan's exact words are \". . . the sum of an infinite number of terms of the series\n1+2+3+4…= -1/12 . . .\"\n\nI am only repeating what he said.\n\nAre you saying he is wrong or that \"=\" means something other than equals?\n\n20.",
null,
"Keep in mind, I'm saying\n\nIf QED is true\nand it relies on 1+2+3+4…= -1/12\nthen 1+2+3+4…= -1/12 must be true.\n\nI have no reason to want 1+2+3+4…= -1/12 to be true. I am happy with it being infinite. But then what's going on with QED? I don't accept that luck can lead to such precise predictions.\n\n21.",
null,
"Thanks, Davide, for your comments. I really like Evelyn Lamb's statement which you cited: You may assign -1/12 to the divergent series, but it is misleading to call this the sum of the series.\n\nAnd, Buzz (are you the same as \"Buzz Skyline\" or a different person?), it is sad when the only remaining argument for a mathematical equation like 1+2+3+4+… = -1/12, is the correctness of theories in physics (like QED). As a physicist I tell you, there is no meaningful theory in physics which relies on \"identities\" like 1+2+3+4+… = -1/12. Whenever infinities and divergences are encountered in physics, they are properly regularized (and eventually renormalized, especially in QED). And that is also what Euler, Ramanujan & co. did: They attached a meaning to divergent series by choosing a certain regularization.\n\n1.",
null,
"By saying that the theories rely on the equation, I'm saying that, as I understand it, the theories would make different predictions if the \"sum of the infinite series\" (as Ramanujan says) equaled something other than -1/12. If you can do without them, they why to physicists who do QED calculations tell me they need them?\n\n2.",
null,
"I have also done a lot of QED calculations, and I have never needed to sum divergent series without a proper regularization. Probably the physicists you are speaking about just wrote down such \"identities\" with divergent series as a very simple picture of what they did, omitting all notion of regularization for sake of simplicity? Maybe they even wanted to impress you by showing you what kind of \"crazy things\" they are doing? (Things which aren't crazy at all, once you properly define your regularization procedure.)\n\nAs I told you before, physicists tend to be pragmatic. So they might even work with \"identities\" like 1+2+3+4+... = -1/12 because they know that, in principle, it can also be done properly and mathematically. Anyway, theories in physics do not depend on such \"identities\" being true, they just depend on being regularizable and renormalizable such that their predictions are finite in the end.\n\n3.",
null,
"Well, finite, and *correct* I assume, is what your after. I'm sure there are many ways to deal with infinities. I imagine the ways that give you precise answers are fewer.\n\nSo your proposal is that the answer is more along the lines of a conspiracy of physicists who are telling us this stuff to impress. It's possible, I suppose, but hard to believe.\n\n4.",
null,
"No conspiracy. But some people tend to oversimplify things when they present them to non-experts. Then they are presenting them in a way which is actually not correct, but behind the wrong picture is a true meaning which though would have been more complicated to explain in a correct way. That's what the Numberphile people have done in my opinion. I always try to avoid this, but sometimes it is hard.\n\n22.",
null,
"This is proof of my Theory of Dark Numbers. This theory postulates that Dark Numbers exist (but yet unobserved) to tame the large expansion of currently observable whole numbers 1+2+3+4+... so that the infinite series is equal to -1/12 instead of infinity. Tesla alluded to using Dark Numbers when he built his transport ray machine. According to my theory, the number line contains 4.9% ordinary numbers, 26.8% dark Robinson hyper-reals and 68.3% dark numbers.\n\n23.",
null,
"Buzz, let me show you one way to better understand, at least in a qualitative way, how such divergent sums of positive numbers can be related to finite and sometimes even negative numbers.\n\nLook at the physics case of the Casimir effect (http://en.wikipedia.org/wiki/Casimir_force#Derivation_of_Casimir_effect_assuming_zeta-regularization): Even when properly regularized by using the parameter s, how can a sum (and integral) of energy contributions, which are each individually positive, end up in a negative total Casimir energy at the end?\n\nIn order to explain this, let me first approximate the series representation of the zeta function by an integral which is easier to handle. Using the Euler-Maclaurin-Formula (http://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula) in its 0th-order approximation, we can approximate\n\n1^(-s) + 2^(-s) + 3^(-s) + 4^(-s) + ...\n= 1/2 + integral from 1 to infinity of dx * x^(-s) + small remainder,\n\nwhich is valid, as the series on the left-hand side, for s>1.\n\nNow, as the integrand x^(-s) is positive all over the integration domain from 1 to infinity, one would expect this integral to be positive as well. And in the convergent case s>1, this is as expected. Using the primitive of the integrand, -1/(s-1) * x^(1-s), one gets:\n\nintegral = 1/(s-1) * ( 1 - infinity^(1-s) )\n\nFor s>1, the contribution from the upper boundary is infinity raised to a negative power, i.e. zero, and we and up with:\n\nintegral = 1/(s-1) > 0 for s>1.\n\nBut what happens in the divergent case, when s<1? Then the contribution from the upper boundary is divergent: infinity raised to a positive power. So the integral should be positive and infinite, as one would also expect from summing all natural numbers (the above series with s = -1) or all cubes of the natural numbers (series with s = -3). However, we are using a kind of regularization here which is based on analytic regularization. As the result 1/(s-1) for our integral is valid all over the complex half-plane where the real part of s is larger than 1, we analytically continue this result to all values of s (except s=1). Practically this means that the contribution from the upper boundary of the integral, 1/(s-1) * infinity^(1-s), is omitted although it diverges.\n\nSo our analytically regularized integral is still given by minus the primitive of the integrand at the lower boundery, which is now a negative contribution:\n\nintegral = 1/(s-1) = -1/(1-s) < 0 for s<1.\n\nSummarizing, the negative and finite result is obtained by dropping the positive and infinite contribution from the upper boundary of the integral. This prescription to drop such an infinite contribution arises from the analytic continuation in the parameter s.\n\nIn physics (example of the Casimir effect) such a prescription makes sense because constant energy shifts (even if they are infinite) are irrelevant for the Casimir force.\n\n24.",
null,
"Bernd, my boss happened to stop by and say basically the same thing (he's a physicist who has a lot of experience with quantum field theory) and illustrate it using the Casimir Effect. I have to admit, I'm coming around to see what you're saying. It's easier to understand with a white board handy. In any case, I'll have to think about it for a while before I have anything much to say (or ask).\n\nAlong the way, I happened to find a relatively simple way to evaluate zeta functions that I'd never seen before, for odd, negative values of n. I guess it doesn't matter, really, but it happens to make calculating some Bernoulli numbers (B_n, for n even) relatively quick and easy. I have no idea whether that's useful, interesting, or new. It entertained me though.\n\n25.",
null,
"Wondrful exchange guys. Thanks for the knowledge you've been sharing with us, and for the effort that takes.\n\n26.",
null,
"Definition error. The sum of n to infinity diverges, goes really big. The \"regularization\" converges to -1/12. The technique is proven, duh, it would be odd if they were using the regularized sum without a proof. Its like saying 2+2=10. Very misleading unless you realize that the equation has been manipulated by being turned into 2, base ten, +2, base ten, equals 10 base 2. Although the regularization is legitimate, you can't say the sum adds to -1/12. You have to say the sum can be regularized through the Riemann zeta function and through a variety of other methods to equal -1/12. Although for all intents and purposes no ones checking the addend up to infinity to make sure that it equals what we say it does so feel free to say it equals -1/12, but make sure add in that regularized part if your talking with any self respecting mathematician or physicist.\n\n27.",
null,
"For me, not a mathematician, this is a perfect example for that you can prove anything with infinity mathematics.\n\nIn my view, infinity does not exist (in the real world), nor does 0.\n\nInfinity = anything / 0\n\nso\n\n0 = anything / infinity.\n\nPhysical argument: the smallest thing (measure) is the planck dimension (planck lenght, planck time etc.) so there cannot be an infinite number of since the beginning of time (the big bang).\n\n28.",
null,
"I'm afraid you're greatly confused.\n\n1. Just because Euler derived something doesn't make it true. Euler was known for \"breaking the rules\" all too often - that was surely part of his genial nature - but it doesn't mean that all of his results and proofs are correct, in fact his handling of infinite diverging sums is not strictly mathematically correct.\n\nAlso, n appeal to authority is a fallacy.\n\n2. It's true that 1+2+3... results in the so-called Ramanujan sum of -1/12. But a Ramanujan sum is not at all the same as *a* sum in a traditional sense. Look it up.\n\n3. You're engaging in trickery by juggling different definition of sums.\nIf we're talking about traditional sums, then no: divergent infinite series cannot be summed up and -1/12 is certainly not a sum of any such series.\n\nNow, we can extend the definition of \"summation\" to cover the assignment of finite numbers to infinite divergent series. But in such a case it *must* be clear that we're talking about something entirely different than traditional summation. Because 1+2+3...=-1/12 is only mindblowing when we're thinking of traditional sums (in that case it also happens to be incorrect).\n\n4. Now, certainly it must be admitted that the persistence with which -1/12 (and other numbers) pops up when all those different kinds of operations are performed (legally or not), as well as the fact that it is applicable to the real world does mean that it's not merely a fluke and that there is indeed a deep connection between these things on a fundamental level.\n\nBut this doesn't justify the bad math, sorry.\n\n1.",
null,
"2.",
null,
"I totally agree. And I refer to my previous comments above, where I have made similar statements.\n\nhttp://physicsbuzz.physicscentral.com/2014/01/redux-does-1234-112-absolutely-not.html\n[titled \"Correction: Does 1+2+3+4+ . . . =-1/12? Absolutely Not! (I think)\"]\n\nand\n\nhttp://physicsbuzz.physicscentral.com/2014/02/so-what-does-1234-equal-we-give-you_11.html\n[titled \"If Not -1/12, What Does 1+2+3+4+... Equal? We Give You the Answer \"]\n\nYou may see from these blog posts (and their comments) that the author changed his mind several times about this topic. Unfortunately, he did not link his follow-up posts from this first one.\n\n3.",
null,
"Thanks, Dr. Jantzen!\n\n29.",
null,
"Problems of summation divergent series is solved! https://m4t3m4t1k4.wordpress.com/2015/02/14/general-method-for-summing-divergent-series-determination-of-limits-of-divergent-sequences-and-functions-in-singular-points-v2/\n\n30.",
null,
"https://m4t3m4t1k4.wordpress.com/2015/11/25/general-method-for-summing-divergent-series-using-mathematica-and-a-comparison-to-other-summation-methods/\n\n31.",
null,
"Please please can someone point out the flaw in my reasoning. This theory is completely WRONG.\n\nIf I take the series S = 1 + 2 + 3 + 4 + ....\nand the series 4S = 4 + 8 + 12 + 16 + ...\n\nAnd subtract the 2, I cannot just simply shift and then pretend everything works out ok. Why shift the bottom row with intervals that line up numerically with the top row? Why not just shift it all an infinite amount and end up with:\nS - 4S = (1 + 2 + 3 + 4 + 5 + 6) - 4?\n\nThe amount of shift is completely arbitrary and is just done for convenience sake. It completely disregards the rate of numerical increase.\n\nWe all know that a function f(x) = 2x [x limit to infinity] is larger than the function f(x) = x [x limit to infinity]. It's absolutely important that the numbers in each position are subtracted with the corresponding position.\n\nPlease tell me why I am wrong, or tell me how some string theory is actually based off of this??\n\n### How 4,000 Physicists Gave a Vegas Casino its Worst Week Ever\n\nWhat happens when several thousand distinguished physicists, researchers, and students descend on the nation’s gambling capital for a conference? The answer is \"a bad week for the casino\"—but you'd never guess why.\n\n### Ask a Physicist: Phone Flash Sharpie Shock!\n\nLexie and Xavier, from Orlando, FL want to know: \"What's going on in this video ? Our science teacher claims that the pain comes from a small electrical shock, but we believe that this is due to the absorption of light. Please help us resolve this dispute!\"\n\n### The Science of Ice Cream: Part One\n\nEven though it's been a warm couple of months already, it's officially summer. A delicious, science-filled way to beat the heat? Making homemade ice cream. (We've since updated this article to include the science behind vegan ice cream. To learn more about ice cream science, check out The Science of Ice Cream, Redux ) Image Credit: St0rmz via Flickr Over at Physics@Home there's an easy recipe for homemade ice cream. But what kind of milk should you use to make ice cream? And do you really need to chill the ice cream base before making it? Why do ice cream recipes always call for salt on ice?"
] | [
null,
"http://upload.wikimedia.org/math/8/3/f/83f92d3d67e96dc7b3bd91fac119cd0a.png ",
null,
"http://3.bp.blogspot.com/-4kdYc2mkqx0/T3mIGJL38II/AAAAAAAAAFI/_ZtWU3RNd9A/s35/BobJediBear.png",
null,
"http://4.bp.blogspot.com/_3tEJQrezVXM/SXqJfQbG6cI/AAAAAAAABc4/0hhqLkvBtts/S45-s35/DarkNetCover16.jpg",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://4.bp.blogspot.com/-FQizhxhZZxM/YOF8gmDWAiI/AAAAAAAABdM/CitSZNGggHQD5FPMam-IqPQdG7M2ref8ACK4BGAYYCw/s35/6B003B26-BEC4-46AA-A598-6A3B79A13238.jpeg",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://4.bp.blogspot.com/_3tEJQrezVXM/SXqJfQbG6cI/AAAAAAAABc4/0hhqLkvBtts/S45-s35/DarkNetCover16.jpg",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://2.bp.blogspot.com/__YfsCrLM4cA/ScfG2i-SSlI/AAAAAAAABzQ/H8OKROys9V8/S45-s35/gnl2.jpeg",
null,
"http://3.bp.blogspot.com/_p7VP-xdD2GQ/SZcqgBBxnDI/AAAAAAAAAP8/oPRQ197W84I/S45-s35/robbie.jpg",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://4.bp.blogspot.com/_3tEJQrezVXM/SXqJfQbG6cI/AAAAAAAABc4/0hhqLkvBtts/S45-s35/DarkNetCover16.jpg",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://4.bp.blogspot.com/_3tEJQrezVXM/SXqJfQbG6cI/AAAAAAAABc4/0hhqLkvBtts/S45-s35/DarkNetCover16.jpg",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://4.bp.blogspot.com/_3tEJQrezVXM/SXqJfQbG6cI/AAAAAAAABc4/0hhqLkvBtts/S45-s35/DarkNetCover16.jpg",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://4.bp.blogspot.com/_3tEJQrezVXM/SXqJfQbG6cI/AAAAAAAABc4/0hhqLkvBtts/S45-s35/DarkNetCover16.jpg",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://4.bp.blogspot.com/_3tEJQrezVXM/SXqJfQbG6cI/AAAAAAAABc4/0hhqLkvBtts/S45-s35/DarkNetCover16.jpg",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://2.bp.blogspot.com/-WOaj2H_if4U/Uugc0YGqYWI/AAAAAAAAEYI/VrnfxHhRlS8/s35/*",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://4.bp.blogspot.com/_3tEJQrezVXM/SXqJfQbG6cI/AAAAAAAABc4/0hhqLkvBtts/S45-s35/DarkNetCover16.jpg",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://resources.blogblog.com/img/blank.gif",
null,
"http://www.blogger.com/img/blogger_logo_round_35.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9359465,"math_prob":0.9647223,"size":49469,"snap":"2021-43-2021-49","text_gpt3_token_len":12150,"char_repetition_ratio":0.14050339,"word_repetition_ratio":0.04470814,"special_character_ratio":0.24700317,"punctuation_ratio":0.11599351,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9888242,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92],"im_url_duplicate_count":[null,9,null,9,null,null,null,null,null,4,null,null,null,null,null,null,null,null,null,null,null,null,null,9,null,9,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,7,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T05:05:27Z\",\"WARC-Record-ID\":\"<urn:uuid:be43b800-9bf0-4837-aaf1-9cc84c3ef1cf>\",\"Content-Length\":\"439503\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2948ef2d-37e9-4a53-8036-83520473bd23>\",\"WARC-Concurrent-To\":\"<urn:uuid:c61e3ebc-b6b9-43b8-aee1-4ca7916ba413>\",\"WARC-IP-Address\":\"172.217.13.243\",\"WARC-Target-URI\":\"http://physicsbuzz.physicscentral.com/2014/01/does-1234-112.html?m=1\",\"WARC-Payload-Digest\":\"sha1:4MIZZ7ZNSCUI2RW56C6FVMOOCK6WVLDE\",\"WARC-Block-Digest\":\"sha1:FJT2LYXNNBZIWE5MOEIV254NLW2MU5YA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585911.17_warc_CC-MAIN-20211024050128-20211024080128-00585.warc.gz\"}"} |
https://unapologetic.wordpress.com/2007/07/17/ | [
"# The Unapologetic Mathematician\n\n## The Unit and Counit of an Adjunction\n\nLet’s say we have an adjunction",
null,
"$F\\dashv G:\\mathcal{C}\\rightarrow\\mathcal{D}$. That is, functors",
null,
"$F:\\mathcal{C}\\rightarrow\\mathcal{D}$ and",
null,
"$G:\\mathcal{D}\\rightarrow\\mathcal{C}$ and a natural isomorphism",
null,
"$\\Phi_{C,D}:\\hom_\\mathcal{D}(F(C),D)\\rightarrow\\hom_\\mathcal{C}(C,G(D))$.\n\nLast time I drew an analogy between equivalences and adjunctions. In the case of an equivalence, we have natural isomorphisms",
null,
"$\\eta:1_\\mathcal{C}\\rightarrow G\\circ F$ and",
null,
"$\\epsilon:F\\circ G\\rightarrow1_\\mathcal{D}$. This presentation seems oddly asymmetric, and now we’ll see why by moving these structures to the case of an adjunction.\n\nSo let’s set",
null,
"$D=F(C')$ like we did to show that an equivalence is an adjunction. The natural isomorphism is now",
null,
"$\\Phi_{C,F(C')}:\\hom_\\mathcal{D}(F(C),F(C'))\\rightarrow\\hom_\\mathcal{C}(C,G(F(C'))$. Now usually this doesn’t give us much, but there’s one of these hom-sets that we know has a morphism in it: if",
null,
"$C'=C$ then",
null,
"$1_{F(C)}\\in\\hom_\\mathcal{D}(F(C),F(C))$. Then",
null,
"$\\Phi_{C,F(C)}(1_{F(C)})$ is an arrow in",
null,
"$\\mathcal{C}$ from",
null,
"$C$ to",
null,
"$\\left[G\\circ F\\right](C)$.\n\nWe’ll call this arrow",
null,
"$\\eta_C$. Doing this for every object",
null,
"$C\\in\\mathcal{C}$ gives us all the components of a natural transformation",
null,
"$\\eta:1_\\mathcal{C}\\rightarrow G\\circ F$. For this, we need to show the naturality condition",
null,
"$G(F(f))\\circ\\eta_C=\\eta_{C'}\\circ f$ for each arrow",
null,
"$f:C\\rightarrow C'$. This is a straightforward calculation:",
null,
"$G(F(f))\\circ\\eta_C=G(F(f))\\circ\\Phi_{C,F(C)}(1_{F(C)})=\\Phi_{C,F(C')}(F(f)\\circ1_{F(C)})=$",
null,
"$\\Phi_{C,F(C')}(1_{F(C')}\\circ F(f))=\\Phi_{C',F(C')}(1_{F(C')})\\circ f=\\eta_{C'}\\circ f$\nusing the definition of",
null,
"$\\eta_C$ and the naturality of",
null,
"$\\Phi$.\n\nThis natural isomorphism",
null,
"$\\eta$ is called the “unit” of the adjunction",
null,
"$F\\dashv G$. Dually we can set",
null,
"$C=G(D)$ and extract an arrow",
null,
"$\\eta_D=\\Phi_{G(D),D}^{-1}(1_{G(D)})$ for each object",
null,
"$D\\in\\mathcal{D}$ and assemble them into a natural transformation",
null,
"$\\eta:F\\circ G\\rightarrow1_\\mathcal{D}$ called the “counit”. If both of these natural transformations are natural isomorphisms, then we have an equivalence.\n\nFor a particular example, let’s look at this in the case of the free-monoid functor",
null,
"$M$ as the left adjoint to the underlying-set monoid",
null,
"$U$. The unit will give an arrow",
null,
"$\\eta_S:S\\rightarrow U(M(S))$, which here is just the inclusion of the generators (elements of",
null,
"$S$) as elements of the underlying set of the free monoid. The counit, on the other hand, will give an arrow",
null,
"$\\epsilon_N:M(U(N))\\rightarrow N$. That is, we take all elements of the monoid",
null,
"$N$ and use them as generators of a new free monoid — write out “words” where each “letter” is a whole element of",
null,
"$N$. Then to take such a word and send it to an element of",
null,
"$N$, we just take all the letters and multiply them together as elements of",
null,
"$N$. Since we gave a description of",
null,
"$\\Phi$ last time for this case, it’s instructive to sit down and work through the definitions of",
null,
"$\\eta_S=\\Phi_{S,M(S)}$ and",
null,
"$\\epsilon=\\Phi_{U(N),N}^{-1}$ to show that they do indeed give these arrows.\n\nJuly 17, 2007 Posted by | Category theory | 11 Comments"
] | [
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9192094,"math_prob":0.999882,"size":1970,"snap":"2023-14-2023-23","text_gpt3_token_len":443,"char_repetition_ratio":0.13428281,"word_repetition_ratio":0.0,"special_character_ratio":0.21522842,"punctuation_ratio":0.09367089,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999654,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-04T00:21:12Z\",\"WARC-Record-ID\":\"<urn:uuid:f125680e-d967-474a-adf6-d95569c743a5>\",\"Content-Length\":\"82241\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e0eb077-6f80-47cd-aa36-84038f80bf1a>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1b6725b-c664-4d7b-9056-c111676676a8>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://unapologetic.wordpress.com/2007/07/17/\",\"WARC-Payload-Digest\":\"sha1:N4XBX76UAL6EEE3INPVHDUO5S7CDFMFR\",\"WARC-Block-Digest\":\"sha1:RZ4YSFMDPLCG4OXH7PFSPQW7DQ7KONIJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649348.41_warc_CC-MAIN-20230603233121-20230604023121-00656.warc.gz\"}"} |
https://www.knowledgeadda.com/2014/07/transmission-lines-and-waveguides.html | [
"# Transmission Lines and Waveguides EC2305 MJ2014 5th Semester Question Paper\n\nAnna University Question Paper\nB.E./B.Tech. DEGREE EXAMINATION, May /June 2014.\nElectronics and Communication Engineering\nEC2305-Transmission Lines and Waveguides\nFifth Semester\n\nProgramming Questions | Travel Tips | Mobile Review | Placement Papers\n\nPART-A\n\n1.Write any two advantages of m-derived filters over constant K-filters.\n2.Determine the value of L required by a constant K T-sction high pass filter with a cutoff frequency of 1.5 KHz and design impedance of 500ohm.\n3.A transmission line has a characteristic impedance of 600 om .Determine magnitude of reflection of coefficient if the receving and impedance is (650-j 475)ohm.\n4.Define distortion loss.\n5.Give the equations for the characteristic impedance and propagation constant of a dissipationless line.\n6.Mention the disadvantages of single stub matching\n7.A wave is propagated in the dominant mode in a parallel plane waveguide.The frequency is 6 GHz and the plane separation is 4 Cm .Calculate the cutoff wavelength and the wavelength in the waveguide.\n8.Give the equations for the propagation constant and wavelength for TEM waves between parallel planes.\n9.What are the advantages and application of cylindrical aveguides?\n10.Mention the different tyes of guide termination.\n\nPART-B\n\n11.a.1)Draw and explain the design and operation of m-derived T-section bandpass filter with necessary equations and diagrams. (8)\n2)Design constant K bandstop filters (both T and 𝜋 sections )for the cutoff frequencies of 2 KHz and 6KHz .The design impedance is 500 Hz. (8)\nOr\nb.1)Explain the principle and operation of crystal filters with neat diagrams. (10)\n2)Design an m derived low pass filter with a cutoff frequency of 2 KHz .Design impedance is 500 ohm and m=0.4.Consider a 𝜋 section for your calculation. (6)\n\n12.a.1)What are the types of waveform distortion introduced by a transmission line ? Derive the conditions for the distortion less operation of a transmission line. (10)\n2)The constants of a transmission line are R=6 ohm/Km.L=2.2mH/Km.C=0.005μF/Km and G=0.25*10^-3 mhos/Km.Calculate the attenuation constant and phase constant at 1000Hz. (6)\nOr\nb)1)Derive the transmission line equations and obtain expressions for the voltage and current on a transmission line. (10)\n2)A transmission line has a characteristic impedance of (683-j138).The propagation constant is ((0.0074+j0.0356)per Km.Determine the values of R and L of this line if the frequency is 1000Hz. (6)\n\n13.a)1)Drive an expression for the input impedance of a dissipationless line. Extend your results for open and short circuited lines also. (10)\n2)Write a brief note on impedance measurement on transmission lines. (6)\nOr\nb)1)Discuss the principle of double stub matching with neat diagram and expressions.(8)\n2)A single stub is to match a 300 ohm line to a load of (180+j120) ohm.The wavelength is 2 meters. Determine the shortest distance from the load to the stub\nlocation and proper length of a short circuited stub using relevant formula. (8)\n\n14.a)Discuss the transmission of TM waves between parallel perfectly conducting planes with necessary expressions for the field components . Discuss the\ncharacteristics of TE and TM waves between the parallel planes. (16)\nOr\nb)1.Disuss briefly he attenuation of TE and TM waves between parallel planes. (8)\n2)Describe the manner of wave travel between parallel planes with necessary expressions for their velocities. 
(8)\n\n15.a)1.Describe the propagation of TE waves in a rectangular waveguide with a necessary expressions for the field components. (10)\n2)An air filled rectangular waveguide of dimensions a=6cm and b=4cm operates in the TM11 mode .Find the cutoff frequency guide wavelength and phase velocity at a\nfrequency of 3 GHz. (6)\nOr\nb)1)Describe the principle and operation of rectangular cavity resonators with relevant expressions. (10)\n2)Give a brief note on excitation of modes in rectangular waveguides. (6)"
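For two of the short-answer questions above (Q2 and Q7), a quick numerical check using the standard constant-K high-pass and parallel-plane waveguide formulas; treat this as a sketch of the arithmetic, not as a model answer:

```python
import math

# Q2: constant-K T-section high-pass filter, fc = 1.5 kHz, design impedance R0 = 500 ohm.
# For the constant-K high-pass prototype, L = R0 / (4*pi*fc) and C = 1 / (4*pi*fc*R0).
fc, R0 = 1.5e3, 500.0
L = R0 / (4 * math.pi * fc)
print(f"Q2: L = {L * 1e3:.1f} mH")          # about 26.5 mH

# Q7: parallel-plane guide, dominant mode, f = 6 GHz, plate separation a = 4 cm.
# Cutoff wavelength of the dominant mode is 2a; the guide wavelength follows from
# lambda_g = lambda0 / sqrt(1 - (lambda0/lambda_c)^2).
c = 3e8
f, a = 6e9, 0.04
lam0 = c / f
lam_c = 2 * a
lam_g = lam0 / math.sqrt(1 - (lam0 / lam_c) ** 2)
print(f"Q7: lambda_c = {lam_c * 100:.0f} cm, lambda_g = {lam_g * 100:.2f} cm")   # 8 cm, ~6.41 cm
```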
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7802342,"math_prob":0.96189314,"size":4256,"snap":"2021-43-2021-49","text_gpt3_token_len":1021,"char_repetition_ratio":0.130762,"word_repetition_ratio":0.034428794,"special_character_ratio":0.23331767,"punctuation_ratio":0.09987516,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9922265,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-20T13:16:44Z\",\"WARC-Record-ID\":\"<urn:uuid:ed9aa013-d8d6-4b08-a449-d4c68c86930b>\",\"Content-Length\":\"231669\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:33ae97b5-688b-4e70-8efa-e8ae8673aba7>\",\"WARC-Concurrent-To\":\"<urn:uuid:bfb79b12-b2ac-4030-98fc-5a45e897b3a0>\",\"WARC-IP-Address\":\"142.250.65.83\",\"WARC-Target-URI\":\"https://www.knowledgeadda.com/2014/07/transmission-lines-and-waveguides.html\",\"WARC-Payload-Digest\":\"sha1:BU3LZ4MFERBJXOVFP3S4UDFULDSVYN32\",\"WARC-Block-Digest\":\"sha1:Y3FFWNEBGIN2IEKJLZ3UOKBDYIHMTIED\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585321.65_warc_CC-MAIN-20211020121220-20211020151220-00683.warc.gz\"}"} |
https://www.gkduniya.com/physics-quiz8 | [
"# Physics GK Quiz\n\nQ106. The wave of theory of light is based on\n(a) the laws of reflection\n(b) the laws of refraction\n(c) the inverse square law\n(d) Huyghens' principle\n\nQ107. The unit of magnetic permeability is\n(a) Henry/meter\n(b) Henry\n(c) Weber\n(d) Weber/meter\n\nQ108. The domain theory of magnetism applies to\n(a) aluminium\n(b) copper\n(c) silver\n(d) cobalt\n\nQ109. The paramagnetic theory of magnetism applies to\n(a) nickel\n(b) mercury\n(c) iron\n(d) platinum\n\nQ110. Due to temperature variation along a conductor, potential variation occurs along it. This phenomenon is known as\n(a) Thomson effect\n(b) Joule effect\n(c) Seebeck effect\n(d) Peltier effect\n\nQ111. What is the law in which under the same conditions of temperature and pressure, equal volumes of all gas contain equal number of molecules\n(a) Colioli effect\n(b) Grahams law\n(c) Pascal's law\n\nQ112. The impulse of a torque about an axis is equal to\n(a) angular velocity about that axis\n(b) angular momentum about that axis\n(c) the force applied on it\n(d) the workdone\n\nQ113. Impulse is the product of _________ and ________\n(a) mass, velocity\n(b) mass, acceleration\n(c) force, time\n(d) torque, time\n\nQ114. The perfectly elastic bodies have Co-efficient of restitution\n(a) 0\n(b) 1\n(c) -1\n(d) Infinity\n\nQ115. How soap water remove the dust and strain from the cloth\n(a) It increase the surface tension of the water\n(b) It decrease the surface tension of the water\n(c) It produce heat on the cloth\n(d) No change in surface tension\n\nQ116. Which of the following physièal quantity does not have the unit of N/m2\n(a) pressure\n(b) stress\n(c) viscosity\n(d) modulus of elasticity\n\nQ117. The Stefan's law reduces to Newton's law of cooling when\n(a) the temperature difference between body and surrounding is large\n(b) the temperature difference is small\n(c) conduction is negligible compared to radiation\n(d) at all temperature differences\n\nQ118. When the displacement of a simple harmonic oscillator is zero, its speed is\n(a) zero\n(b) maximum\n(c) infinity\n(d) minimum\n\nQ119. Velocity of sound in a solid is proportional to\n(a) adiabatic elasticity of the medium\n(b) isothermal elasticity of the medium\n(c) square root of isothermal elasticity of medium\n(d) square root of adiabatic elasticity of medium\n\nQ120. At constant temperature, as pressure increases, the velocity of sound in gas\n(a) increases\n(b) decreases\n(c) remains the same\n(d) becomes zero"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7186113,"math_prob":0.98833793,"size":2889,"snap":"2021-43-2021-49","text_gpt3_token_len":782,"char_repetition_ratio":0.13587521,"word_repetition_ratio":0.06708595,"special_character_ratio":0.25995153,"punctuation_ratio":0.07779886,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9943153,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-25T08:13:33Z\",\"WARC-Record-ID\":\"<urn:uuid:6b1368c1-639a-406b-b3ff-d36563ad0324>\",\"Content-Length\":\"37667\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb77050d-6ef6-4797-be9a-193ddbddbf07>\",\"WARC-Concurrent-To\":\"<urn:uuid:39d77997-a187-4466-abfb-c1a2240e2b57>\",\"WARC-IP-Address\":\"172.67.162.154\",\"WARC-Target-URI\":\"https://www.gkduniya.com/physics-quiz8\",\"WARC-Payload-Digest\":\"sha1:ADGMNG4WWGCOG4NXREGCXEZURNV4H6JD\",\"WARC-Block-Digest\":\"sha1:CGLW72MMACNTGMPBKOPYCFJT3XTW42QP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587655.10_warc_CC-MAIN-20211025061300-20211025091300-00128.warc.gz\"}"} |
https://link.springer.com/article/10.1007/s40306-014-0054-3 | [
"Acta Mathematica Vietnamica\n\n, Volume 39, Issue 2, pp 121–131\n\nGlobal Existence and Decay Rates of Solutions of Generalized Benjamin–Bona–Mahony Equations in Multiple Dimensions\n\nOpen Access\nArticle\n\nAbstract\n\nWe study the global existence and decay rates of the Cauchy problem for the generalized Benjamin–Bona–Mahony equations in multi-dimensional spaces. By using Fourier analysis, frequency decomposition, pseudo-differential operators and the energy method, we obtain global existence and optimal L 2 convergence rates of the solution.\n\nKeywords\n\nCauchy problem Generalized Benjamin–Bona–Mahony equation Multiple dimensions Global existence Optimal L2 decay estimate\n\n35L05\n\n1 Introduction\n\nIn this paper, we study the global existence and decay rates of the smooth solution u(x,t) to the scalar multi-dimensional generalized Benjamin–Bona–Mahony (GBBM) equations of the form\n$$\\begin{cases} \\partial_t u-\\Delta\\partial_t u-\\eta\\Delta u+(\\beta\\cdot\\nabla)u+\\operatorname {div}f(u)=0,\\\\ u(0,x)=u_0(x). \\end{cases}$$\n(1.1)\nHere η is a positive constant, β is a real constant vector, f(u)=(u 2,u 2,…,u 2), $$\\Delta=\\sum_{j=1}^{n}\\partial_{x_{j}}^{2}$$ is the Laplacian, ∇ is a gradient operator, n≥2 is the spatial dimension.\nThe well known Benjamin–Bona–Mahony (BBM) equation\n$$u_t-u_{xxt}+u_x+uu_x=0, \\quad - \\infty<x<\\infty,\\ t>0$$\nand its counterpart, the Korteweg–de Vries (KdV) equation\n$$u_t-u_{xxx}+u_x+uu_x=0, \\qquad - \\infty<x<\\infty,\\ t>0,$$\nwere both suggested as model equations for long waves in nonlinear dispersive media. The BBM equation was advocated by Benjamin, Bona and Mahony in 1972. Since then, the periodic boundary value problems, the initial value problems and the initial boundary value problems, for various generalized BBM equations have been studied. The existence and uniqueness of solutions for GBBM have been proved by many authors [2, 3, 7, 8]. The decays of solutions were also studied in [1, 4, 5, 6, 10]. In [1, 4, 5, 10], the authors studied the equation in low spatial dimension. In , the equation in high spatial dimension was studied, but the authors just got a global solution u(x,t)∈H 1. In this paper, the goal is to get a global smoother solution and give its L 2 decay rates in high spatial dimension.\nThroughout this paper, we denote the generic constants by C and write $$D^{\\alpha}f=\\partial_{x_{1}}^{\\alpha_{1}}\\partial_{x_{2}}^{\\alpha_{2}}\\cdots \\partial_{x_{n}}^{\\alpha_{n}}$$ for a multi-index α=(α 1,α 2,…,α n ). Let $$W^{s,p}(\\mathbb{R}^{n})$$, sZ +, p∈[1,∞], be the usual Sobolev space with the norm\n$$\\|f\\|_{W^{s,p}}:=\\sum_{|\\alpha|=0}^{s} \\bigl\\| D^{\\alpha}f\\bigr\\| _{L_p}.$$\nIn particular, W s,2=H s . 
The Fourier transformation with respect to the variable $$x\\in{\\mathbb{R}}^{n}$$ is\n$$\\hat{f}(\\xi,t)=\\int_{{\\mathbb{R}}^n}f(x,t)e^{-\\sqrt{-1}x\\cdot\\xi}\\,dx$$\nand the inverse Fourier transformation with respect to the variable ξ is\n$$f(x,t)=(2\\pi)^{-n}\\int_{{\\mathbb{R}}^n}\\hat{f}( \\xi,t)e^{\\sqrt {-1}x\\cdot\\xi}\\,d\\xi.$$\nIn this paper, all convolutions are only with respect to the spatial variable x.\n\nOur main result is the following:\n\nTheorem 1.1\n\nIf $$E=(\\|u_{0}\\|_{H^{l}}+\\|\\bigtriangledown u_{0}\\|_{H^{l}})$$ is small enough, $$l\\geq1+[\\frac{n}{2}]$$, then there exists a global solution u(x,t) of (1.1) such that\n$$u\\in L_{\\infty}\\bigl(0,\\infty;H^l\\bigl({\\mathbb{R}}^n\\bigr)\\bigr).$$\nMoreover, we have $$\\|D_{x}^{\\alpha}u\\|_{L_{2}}\\leq C(1+t)^{-\\frac{n}{4}-\\frac{|\\alpha|}{2}}$$ for |α|≤l.\n\nRemark\n\nThe decay rate is the same as that of the heat equation, so our estimate is optimal.\n\nThe rest of the paper is arranged as follows. In Sect. 2, we get the local existence of the solution directly by constructing a Cauchy sequence and using energy estimation. In Sect. 3, by means of the Green function, we obtain a bound of the solution, then we extend the local solution to the global one. In Sect. 4, we get an L 2 decay estimate of the solution.\n\n2 Local existence\n\nIn this section, we will construct a convergent sequence {u (m)(x,t)} to get the local solution, where u (m)(x,t) satisfy the following linear problem\n$$\\begin{cases} \\partial_t u^{(m)}-\\Delta\\partial_t u^{(m)}-\\eta\\Delta u^{(m)}+(\\beta \\cdot\\nabla)u^{(m)}=-\\operatorname {div}f(u^{(m-1)}),\\\\ u^{(m)}(0,x)=u_0(x),\\quad u^{(0)}(t,x)=0. \\end{cases}$$\n(2.1)\nWe will construct a Banach space and prove that the sequence is convergent in this space, so the limit is the solution of (1.1).\nFirst, we define a function space\n$$\\textbf{X}_{T_0}= \\Bigl\\{ u(x,t)|\\|u\\|_{\\textbf{X}_{T_0}}=\\sup _{0\\leq t\\leq T_0}\\|u\\|_{H^l}\\leq C_0E \\Bigr\\} .$$\nHere $$E=(\\|u_{0}\\|_{H^{l}}+\\|\\nabla u_{0}\\|_{H^{l}})$$ is small enough, $$C_{0}>\\sqrt{l},l\\geq1+[\\frac{n}{2}]$$ and n≥2. The metric in $$\\textbf{X}_{T_{0}}$$ is induced by the norm $$\\|u\\|_{\\textbf{X}_{T_{0}}}$$:\n$$\\rho(u,v)=\\|u-v\\|_{\\textbf{X}_{T_0}}\\quad\\forall u,v\\in\\textbf{X}_{T_0}.$$\nIt is obvious that $$\\textbf{X}_{T_{0}}$$ is a nonempty complete space.\n\nLemma 2.1\n\nThere exists some constant T 1 such that {u (m)(x,t)} belongs to $$\\textbf{X}_{T_{1}}$$.\n\nProof\n\nWe will prove this lemma by induction on m. When m=1, we have\n$$\\begin{cases} u_t^{(1)}-\\Delta u_t^{(1)}-\\eta\\Delta u^{(1)}+(\\beta\\cdot\\nabla )u^{(1)}=0,\\\\ u^{(1)}(0,x)=u_0(x). \\end{cases}$$\n(2.2)\nMultiplying the first equation by u (1) and integrating with respect to the variable x in $$\\mathbb{R}^{n}$$, we get\n$$\\frac{1}{2}\\frac{\\partial}{\\partial t}\\bigl\\| u^{(1)}\\bigr\\| _{L_2}^2+ \\frac{1}{2}\\frac{\\partial}{\\partial t}\\bigl\\| \\nabla u^{(1)}\\bigr\\| _{L_2}^2+ \\eta\\bigl\\| \\nabla u^{(1)}\\bigr\\| _{L_2}^2=0,$$\nhere we use the fact that ∫(β⋅∇u (1))⋅u (1) dx=0. 
Then\n$$\\bigl\\| u^{(1)}\\bigr\\| _{L_2}^2+\\bigl\\| \\nabla u^{(1)} \\bigr\\| _{L_2}^2\\leq \\|u_0\\|_{L_2}^2+ \\|\\nabla u_0\\|_{L_2}^2.$$\nFrom (2.2) we have\n$$D_x^{\\alpha}u_t^{(1)}-D_x^{\\alpha} \\Delta u_t^{(1)}-\\eta D_x^{\\alpha} \\Delta u^{(1)}+D_x^{\\alpha}(\\beta\\cdot\\nabla)u^{(1)}=-D_x^{\\alpha} \\operatorname {div}f\\bigl(u^{(0)}\\bigr)=0.$$\n(2.3)\nMultiplying the two sides in (2.3) by $$D_{x}^{\\alpha}u^{(1)}$$ and integrating with respect to the variable x in $$\\mathbb{R}^{n}$$, we also get\n$$\\frac{1}{2}\\frac{\\partial}{\\partial t}\\bigl\\| D_x^{\\alpha}u^{(1)} \\bigr\\| _{L_2}^2+\\frac{1}{2}\\frac{\\partial}{\\partial t} \\bigl\\| D_x^{\\alpha}\\nabla u^{(1)}\\bigr\\| _{L_2}^2 +\\eta\\bigl\\| \\nabla D_x^{\\alpha}u^{(1)} \\bigr\\| _{L_2}^2=0 .$$\n(2.4)\nFrom (2.4) we have\n$$\\bigl\\| D_x^{\\alpha}u^{(1)}\\bigr\\| _{L_2}^2(t)+ \\bigl\\| D_x^{\\alpha}\\nabla u^{(1)}\\bigr\\| _{L_2}^2(t) \\leq \\bigl\\| D_x^{\\alpha}u_0\\bigr\\| _{L_2}^2+ \\bigl\\| D_x^{\\alpha}\\nabla u_0\\bigr\\| _{L_2}^2.$$\nThus, for every T 1, we have $$u^{(1)}\\in\\textbf{X}_{T_{1}}$$. Supposing that $$u^{(m)}(x,t)\\in\\textbf{X}_{T_{1}}$$, we next prove that $$u^{(m+1)}(x,t)\\in\\textbf{X}_{T_{1}}$$. By definition, u (m+1)(x,t) satisfies the following equation:\n$$\\begin{cases} \\partial_t u^{(m+1)}-\\Delta\\partial_t u^{(m+1)}-\\eta\\Delta u^{(m+1)}+(\\beta\\cdot\\nabla)u^{(m+1)}=-\\operatorname {div}f(u^{(m)}),\\\\ u^{(m+1)}(0,x)=u_0(x). \\end{cases}$$\nThen\n$$D_x^{\\alpha}\\partial_t u^{(m+1)}-D_x^{\\alpha} \\Delta\\partial_t u^{(m+1)}-\\eta D_x^{\\alpha} \\Delta u^{(m+1)}+D_x^{\\alpha}(\\beta\\cdot \\nabla)u^{(m+1)}=-D_x^{\\alpha} \\operatorname {div}f \\bigl(u^{(m)}\\bigr),$$\nso, we get\n\\begin{aligned} \\begin{aligned} &\\frac{1}{2}\\frac{\\partial}{\\partial t} \\bigl\\| D_x^{\\alpha}u^{(m+1)}\\bigr\\| _{L_2}^2+ \\frac{1}{2}\\frac{\\partial}{\\partial t}\\bigl\\| D_x^{\\alpha}\\nabla u^{(m+1)}\\bigr\\| _{L_2}^2 +\\eta\\bigl\\| \\nabla D_x^{\\alpha}u^{(m+1)}\\bigr\\| _{L_2}^2 \\\\ &\\quad =-\\int D_x^{\\alpha}\\text{div}f\\bigl(u^{(m)} \\bigr)D_x^{\\alpha}u^{(m+1)}\\,dx \\\\ &\\quad =\\int D_x^{\\alpha}f\\bigl(u^{(m)}\\bigr)\\nabla D_x^{\\alpha}u^{(m+1)}\\,dx \\\\ &\\quad \\leq\\frac{\\eta}{2}\\bigl\\| \\nabla D_x^{\\alpha}u^{(m+1)} \\bigr\\| _{L_2}^2+C\\bigl\\| D_x^{\\alpha}f \\bigl(u^{(m)}\\bigr)\\bigr\\| _{L_2}^2. \\end{aligned} \\end{aligned}\n(2.5)\nTo proceed, we need the following inequality:\n$$\\bigl\\| D^s(F G)\\bigr\\| _{L^r}\\leq C\\bigl( \\bigl\\| D^sF\\bigr\\| _{L^p}\\|G\\|_{L^q}+\\|F\\|_{L^p} \\bigl\\| D^sG\\bigr\\| _{L^q}\\bigr),$$\n(2.6)\nwhere 1≤r,p,q≤∞ and $$\\frac{1}{r}=\\frac{1}{p}+\\frac{1}{q}$$.\nNoticing that f(u)=(u 2,u 2,…,u 2), we have\n$$\\bigl\\| D_x^{\\alpha}f\\bigl(u^{(m)}\\bigr) \\bigr\\| _{L_2}\\leq C\\bigl\\| u^{(m)}\\bigr\\| _{L_{\\infty}}\\bigl\\| D_x^{\\alpha}u^{(m)} \\bigr\\| _{L_2}.$$\n(2.7)\nBy the Sobolev embedding inequality, we have $$\\|u\\|_{L_{\\infty}}\\leq C\\|u\\|_{H^{l}}$$ if $$l\\geq1+[\\frac{n}{2}]$$. From (2.5), (2.7), we get\n\\begin{aligned} &\\frac{1}{2}\\frac{\\partial}{\\partial t}\\bigl\\| D_x^{\\alpha}u^{(m+1)} \\bigr\\| _{L^2}^2+\\frac{1}{2}\\frac{\\partial}{\\partial t} \\bigl\\| D_x^{\\alpha}\\nabla u^{(m+1)}\\bigr\\| _{L^2}^2 +\\frac{\\eta}{2}\\bigl\\| \\nabla D_x^{\\alpha}u^{(m+1)} \\bigr\\| _{L^2}^2 \\\\ &\\quad \\leq C\\bigl\\| u^{(m)}\\bigr\\| _{H^l}^2\\bigl\\| D_x^{\\alpha}u^{(m)} \\bigr\\| _{L^2}^2. 
\\end{aligned}\nThen\n\\begin{aligned} \\bigl\\| D_x^{\\alpha}u^{(m+1)} \\bigr\\| _{L^2}^2+\\bigl\\| D_x^{\\alpha}\\nabla u^{(m+1)}\\bigr\\| _{L^2}^2 \\leq&\\bigl\\| D_x^{\\alpha}u_0 \\bigr\\| _{L^2}^2+\\bigl\\| D_x^{\\alpha}\\nabla u_0\\bigr\\| _{L^2}^2+C\\int_0^tE^2E^2\\,ds \\\\ \\leq& E^2+CE^4t. \\end{aligned}\n(2.8)\n\nWhen $$T_{1}\\leq(C_{0}^{2}-1)\\frac{1}{CE^{2}}$$, from (2.8) we have $$\\{u^{(m+1)}(x,t)\\}\\in\\textbf{X}_{T_{1}}$$. By the induction method, our lemma is proved. □\n\nLemma 2.2\n\nThere exists a positive constant T 0 such that {u (m)(x,t)} constructed by (2.1) is a Cauchy sequence in $$\\textbf{X}_{T_{0}}$$.\n\nProof\n\nWe only need to prove that\n$$\\bigl\\| u^{(m+1)}-u^{(m)}\\bigr\\| _{\\textbf{X}_{T_0}}\\leq\\lambda\\bigl\\| u^{(m)}-u^{(m-1)}\\bigr\\| _{\\textbf{X}_{T_0}}$$\n(2.9)\nfor some 0<λ<1. By (2.1), for every m, u (m+1)u (m) satisfies the following equation:\n$$\\begin{cases} \\partial_t (u^{(m+1)}-u^{(m)})-\\Delta\\partial _t(u^{(m+1)}-u^{(m)})-\\eta\\Delta(u^{(m+1)}-u^{(m)})\\\\ \\quad{} +(\\beta\\cdot\\nabla)(u^{(m+1)}-u^{(m)}) =-\\operatorname {div}f(u^{(m)})+\\operatorname {div}f(u^{(m-1)}),\\\\ (u^{(m+1)}-u^{(m)})(0,x)=0. \\end{cases}$$\n(2.10)\nThus we have\n\\begin{aligned} &D_x^{\\alpha}\\partial_t \\bigl(u^{(m+1)}-u^{(m)}\\bigr)-D_x^{\\alpha} \\Delta\\partial _t\\bigl(u^{(m+1)}-u^{(m)}\\bigr)-\\eta D_x^{\\alpha}\\Delta\\bigl(u^{(m+1)}-u^{(m)} \\bigr) \\\\ &\\quad {} +D_x^{\\alpha}(\\beta\\cdot\\nabla) \\bigl(u^{(m+1)}-u^{(m)} \\bigr)=-D_x^{\\alpha} \\operatorname {div}\\bigl(f\\bigl(u^{(m)}\\bigr)-f \\bigl(u^{(m-1)}\\bigr)\\bigr). \\end{aligned}\n(2.11)\nMultiplying the two sides of (2.11) by $$D_{x}^{\\alpha}(u^{(m+1)}-u^{(m)})$$ and integrating with respect to the variable x in $$\\mathbb{R}^{n}$$, we get\n\\begin{aligned} &\\frac{1}{2}\\frac{\\partial}{\\partial t}\\bigl\\| D_x^{\\alpha} \\bigl(u^{(m+1)}-u^{(m)}\\bigr)\\bigr\\| _{L_2}^2+ \\frac{1}{2}\\frac{\\partial}{\\partial t}\\bigl\\| D_x^{\\alpha}\\nabla \\bigl(u^{(m+1)}-u^{(m)}\\bigr)\\bigr\\| _{L_2}^2 +\\eta\\bigl\\| \\nabla D_x^{\\alpha}\\bigl(u^{(m+1)}-u^{(m)} \\bigr)\\bigr\\| _{L_2}^2 \\\\ &\\quad =-\\int D_x^{\\alpha}{\\rm div}\\bigl(f\\bigl(u^{(m)} \\bigr)-f\\bigl(u^{(m-1)}\\bigr)\\bigr) D_x^{\\alpha} \\bigl(u^{(m+1)}-u^{(m)}\\bigr)\\,dx\\\\ &\\quad =\\int D_x^{\\alpha}\\bigl(f\\bigl(u^{(m)}\\bigr)-f \\bigl(u^{(m-1)}\\bigr)\\bigr)\\nabla D_x^{\\alpha } \\bigl(u^{(m+1)}-u^{(m)}\\bigr)\\,dx \\\\ &\\quad \\leq\\frac{\\eta}{2}\\bigl\\| \\nabla D_x^{\\alpha} \\bigl(u^{(m+1)}-u^{(m)}\\bigr)\\bigr\\| _{L_2}^2 +C\\bigl(\\bigl\\| u^{(m)}\\big\\| _{H^l}^2+\\bigl\\| u^{(m-1)} \\bigr\\| _{H^l}^2\\bigr)\\bigl\\| D_x^{\\alpha} \\bigl(u^{(m)}-u^{(m-1)}\\bigr)\\bigr\\| _{L_2}^2. 
\\end{aligned}\nThus\n$$\\frac{\\partial}{\\partial t}\\bigl(\\bigl\\| D_x^{\\alpha}\\bigl(u^{(m+1)}-u^{(m)} \\bigr)\\bigr\\| _{L_2}^2 \\leq C\\bigl(\\bigl\\| u^{(m)} \\bigr\\| _{H^l}^2+\\bigl\\| u^{(m-1)}\\bigr\\| _{H^l}^2 \\bigr) \\bigl(\\bigl\\| D_x^{\\alpha }\\bigl(u^{(m)}-u^{(m-1)} \\bigr)\\bigr\\| _{L_2}^2\\bigr)\\bigr).$$\nNoticing that (u (m+1)u (m))(0,x)=0, and choosing $$T_{0}=\\min(T_{1}, \\frac{1}{4CE^{2}})$$, we have\n\\begin{aligned} \\bigl\\| \\bigl(u^{(m+1)}-u^{(m)}\\bigr)\\bigr\\| _{H^l}^2 \\leq& C\\int_0^t\\bigl(\\bigl\\| u^{(m)} \\bigr\\| _{H^l}^2+\\bigl\\| u^{(m-1)}\\bigr\\| _{H^l}^2 \\bigr)\\bigl\\| u^{(m)}-u^{(m-1)}\\bigr\\| _{H^l}^2\\,ds \\\\ \\leq&2CE^2 T_0 \\bigl\\| u^{(m)}-u^{(m-1)} \\bigr\\| _{\\textbf{X}_{T_0} }^2 \\\\ \\leq&\\frac{1}{2}\\bigl\\| u^{(m)}-u^{(m-1)}\\bigr\\| _{\\textbf{X}_{T_0} }^2. \\end{aligned}\n\nThus we can get (2.9). □\n\nSince $$\\textbf{X}_{T_{0}}$$ is a complete metric space, by Lemmas 2.1 and 2.2, there exists a $$u(x,t)\\in\\textbf {X}_{T_{0}}$$ such that\n$$\\begin{cases} \\partial_t u-\\Delta\\partial_t u-\\eta\\Delta u+(\\beta\\cdot\\nabla)u+\\operatorname {div}f(u)=0,\\\\ u(0,x)=u_0(x). \\end{cases}$$\n\nThus we have proved the local existence of (1.1). Next we will prove the global existence.\n\n3 Bounded estimates and global existence\n\nIn this section, we want to get $$\\|u\\|\\ _{H^{l}}\\leq CE$$, then extend the local solution to a global one.\n\nLemma 3.1\n\nIf u 0H 1, we have uL (0,+∞;H 1).\n\nProof\n\nMultiplying the first equation by u and integrating with respect to the variable x in $$\\mathbb{R}^{n}$$, we get\n$$\\partial_tu-\\Delta\\partial_tu-\\eta\\Delta u+(\\beta\\cdot \\nabla)u=-\\operatorname {div}f(u),$$\nthen we have\n$$\\frac{1}{2}\\frac{\\partial}{\\partial t}\\|u\\|_{L_2}^2+ \\frac{1}{2}\\frac{\\partial}{\\partial t}\\|\\nabla u\\|_{L_2}^2+\\eta\\| \\nabla u\\|_{L_2}^2 =-\\int u\\; \\operatorname {div}f(u)\\,dx=\\int f(u)\\cdot \\nabla u\\,dx=0.$$\nThus\n$$\\|u\\|_{L_2}^2+\\|\\nabla u\\|_{L_2}^2+ \\int_0^t\\|\\nabla u\\|_{L_2}^2 ds\\leq \\|u_0\\|_{L_2}^2+\\|\\nabla u_0\\|_{L_2}^2,$$\n(3.1)\nand our lemma is proved. □\nIn order to get a bound of u, first we want to get an explicit expression of u through the Green function. The linearized system of (1.1) is of the form\n$$\\partial_t u-\\Delta\\partial_t u-\\eta\\Delta u+(\\beta\\cdot\\nabla)u=0.$$\n(3.2)\nThe Green function of (3.2) is defined as\n$$\\begin{cases} \\partial_t G-\\Delta\\partial_t G-\\eta\\Delta G+(\\beta\\cdot\\nabla)G=0,\\\\ G|_{t=0}=\\delta(x). \\end{cases}$$\nDirect calculation shows that the Fourier transformation of G is\n$$\\hat{G}=e^{-\\frac{\\eta|\\xi|^2}{1+|\\xi|^2}t-\\frac{i\\beta\\cdot\\xi t}{1+|\\xi|^2}}.$$\n(3.3)\nSet\n$$\\hat{H}=\\frac{1}{1+|\\xi|^2}\\hat{G}.$$\n(3.4)\nDue to Duhamel’s principle, we know that the solution of (1.1) can be expressed by\n$$u(x,t)=G\\ast u_0-\\int_0^t H(t-s)\\ast \\operatorname {div}f(u) (s)\\,ds.$$\n(3.5)\nNext we want to analyze u through estimates of G,H. For this part, we must estimate decay rates of $$\\hat{G}, \\hat{H}$$ separately for low and high frequencies. Thus we set\n$$\\chi_1(\\xi)= \\begin{cases} 1, &|\\xi|\\leq R,\\\\ 0, &|\\xi|>R+1, \\end{cases} \\qquad \\chi_2(\\xi)= \\begin{cases} 1, &|\\xi|\\geq R+1,\\\\ 0, &|\\xi|<R, \\end{cases}$$\nwhere χ 1,χ 2 are smooth cut-off functions and χ 1(ξ)+χ 2(ξ)=1.\n\nSet $$\\hat{G}_{i}=\\chi_{i}\\hat{G}, \\hat{H}_{i}=\\chi_{i}\\hat{H}$$. 
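For orientation, note the standard scaling bound behind the decay rates in Lemma 3.2 below (this remark is a routine computation, not part of the original argument): for $$t\\geq1$$, $$k\\geq0$$ and $$\\lambda>0$$, the substitution $$\\zeta=\\sqrt{t}\\xi$$ gives
$$\\int_{{\\mathbb{R}}^n}|\\xi|^{k}e^{-\\lambda|\\xi|^2t}\\,d\\xi=t^{-\\frac{n+k}{2}}\\int_{{\\mathbb{R}}^n}|\\zeta|^{k}e^{-\\lambda|\\zeta|^2}\\,d\\zeta\\leq C(n,k,\\lambda)\\,t^{-\\frac{n+k}{2}},$$
where the last integral converges because the Gaussian factor absorbs the polynomial weight. Applied with $$k=|\\alpha|q$$ and $$\\lambda=\\frac{\\eta q}{1+(R+1)^2}$$, this yields the rate $$t^{-\\frac{n}{2}-\\frac{|\\alpha|q}{2}}$$ used in the proof below; for $$0<t\\leq1$$ the corresponding integrals are simply bounded by a constant.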
For low frequencies, we have the following lemma.\n\nLemma 3.2\n\nFor 2≤p≤∞, we have $$\\|D_{x}^{\\alpha}G_{1}\\|_{L_{p}}\\leq Ct^{-\\frac{|\\alpha|}{2}-\\frac{n}{2}(1-\\frac{1}{p})}, \\|D_{x}^{\\alpha}H_{1}\\|_{L_{p}}\\leq Ct^{-\\frac{|\\alpha|}{2}-\\frac{n}{2}(1-\\frac{1}{p})}$$.\n\nProof\n\nBy the Hausdorff–Young inequality, if the integers p≥2 and q satisfy $$\\frac{1}{p}+\\frac{1}{q}=1$$, then we have\n\\begin{aligned} \\bigl\\| D_x^{\\alpha}G_1\\bigr\\| _{L_p}\\leq C\\bigl\\| \\xi^{\\alpha}\\hat{G}_1\\bigr\\| _{L_q}&\\leq C \\biggl(\\int _{|\\xi|\\leq R+1}|\\xi|^{|\\alpha|q}e^{-\\frac{\\eta q |\\xi |^2}{1+|\\xi|^2}t}\\,d\\xi \\biggr)^{\\frac{1}{q}} \\\\ &\\leq C \\biggl(\\int_{|\\xi|\\leq R+1}|\\xi|^{|\\alpha|q}e^{-\\frac{\\eta q |\\xi |^2}{1+(R+1)^2}t}\\,d\\xi \\biggr)^{\\frac{1}{q}} \\\\ &\\leq C \\bigl(t^{-\\frac{n}{2}-\\frac{|\\alpha|q}{2}} \\bigr)^{\\frac{1}{q}} \\\\ &=Ct^{-\\frac{|\\alpha|}{2}-\\frac{n}{2}(1-\\frac{1}{p})} \\end{aligned}\nand\n$$\\bigl\\| D_x^{\\alpha}H_1\\bigr\\| _{L_p}\\leq C\\biggl\\| \\xi^{\\alpha}\\frac{1}{1+|\\xi|^2}\\hat{G}_1\\biggr\\| _{L_q}\\leq C \\bigl\\| \\xi^{\\alpha}\\hat{G}_1\\bigr\\| _{L_q}\\leq Ct^{-\\frac{|\\alpha|}{2}-\\frac{n}{2}(1-\\frac{1}{p})}.$$\n□\n\nNext we analyze the constructions of G,H for high frequencies.\n\nLemma 3.3\n\nThere exist a positive constant b and functions f 1(x),f 2(x),f 3(x) such that G 2Ce bt (δ(x)+f 1(x)),H 2Ce bt (δ(x)+f 2(x)), ∇H 2Ce bt (δ(x)+f 3(x)). Here $$\\|f_{1}\\|_{L_{1}}<C,\\|f_{2}\\|_{L_{1}}<C,\\|f_{3}\\|_{L_{1}}<C$$.\n\nProof\n\nWe just prove the first inequality; the proofs of the others are similar. By (3.3), if |β|≥1 then we have\n$$\\bigl|D_{\\xi}^{\\beta}\\hat{G}_2\\bigr|\\leq C| \\xi|^{-|\\beta|-1}e^{-\\frac{\\eta |\\xi|^2}{1+|\\xi|^2}t}\\leq C|\\xi|^{-|\\beta|-1}e^{-bt}.$$\nFrom this and Lemma 3.2 in , we get our result. □\n\nLemma 3.4\n\n$$\\|u\\|_{L_{\\infty}}\\leq CE$$.\n\nProof\n\nBy Young’s inequality, from (3.5) we have\n\\begin{aligned} \\|u\\|_{L_{\\infty}} \\leq&\\|G_1\\ast u_0 \\|_{L_{\\infty}}+\\|G_2\\ast u_0\\|_{L_{\\infty}} + \\int _0^t\\bigl\\| H_1(t-s)\\bigr\\| _{L_{\\infty}}\\| \\operatorname {div}f(u)\\|_{L_1}\\,ds \\\\ &{}+\\int_0^t\\bigl\\| H_2(t-s)\\ast \\operatorname {div}f(u) \\bigr\\| _{L_{\\infty}}\\,ds. \\end{aligned}\n(3.6)\nBy Lemmas 3.2, 3.3 and the Sobolev embedding inequality, we have\n\\begin{aligned} \\|G_1\\ast u_0\\|_{L_{\\infty}}+ \\|G_2\\ast u_0\\|_{L_{\\infty}} &\\leq\\|G_1\\ast u_0\\|_{H^l}+Ce^{-bt} \\bigl(\\|u_0 \\|_{L_{\\infty}}+\\|f_1\\| _{L_1}\\|u_0 \\|_{L_{\\infty}} \\bigr) \\\\ &\\leq C\\|G_1\\|_{L_{\\infty}}\\|u_0\\|_{H^l}+Ce^{-bt} \\|u_0\\|_{H^l} \\\\ &\\leq CE . \\end{aligned}\n(3.7)\nSetting $$M(t)=\\sup_{0\\leq s\\leq t, |\\alpha|\\leq 1}\\|D_{x}^{\\alpha}u\\|_{L_{\\infty}}$$, we have\n$$\\bigl\\| D_x^{\\alpha}u\\bigr\\| _{L_{\\infty}}\\leq M, \\quad |\\alpha|\\leq1.$$\n(3.8)\nNext we will prove that MCE. 
From (3.1) we have\n$$\\|u\\|_{L_2}\\leq CE, \\qquad \\int_0^t \\|\\nabla u\\|_{L_2}^2ds\\leq CE^2.$$\nBy Lemma 3.2, (3.1) and Hölder inequality, we have\n\\begin{aligned} \\int_0^t\\bigl\\| H_1(t-s) \\bigr\\| _{L_{\\infty}} \\bigl\\| \\operatorname {div}f(u)\\bigr\\| _{L_1}\\,ds &\\leq C\\int_0^t(1+t-s)^{-\\frac{n}{2}} \\|\\nabla u\\|_{L_2}\\| u\\|_{L_2}\\,ds \\\\ \\leq&C \\biggl(\\int_0^t(1+t-s)^{-n} \\| u\\|_{L_2}^2ds \\biggr)^{\\frac {1}{2}} \\biggl(\\int _0^t \\|\\nabla u\\|_{L_2}^2\\,ds \\biggr)^{\\frac{1}{2}} \\\\ &\\leq CE \\end{aligned}\n(3.9)\nBy Lemma 3.3 and (3.8),\n\\begin{aligned} \\int_0^t\\bigl\\| H_2(t-s) \\ast \\operatorname {div}f(u)\\bigr\\| _{L_{\\infty}}ds& \\leq\\int_0^te^{-b(t-s)} \\bigl(\\bigl\\| \\operatorname {div}f(u)\\bigr\\| _{L_{\\infty}}+\\|f_2\\|_{L_1}\\bigl\\| \\operatorname {div}f(u) \\bigr\\| _{L_{\\infty}}\\bigr) \\\\ &\\leq\\int_0^te^{-b(t-s)}\\|u \\|_{L_{\\infty}}\\|\\nabla u\\|_{L_{\\infty }}\\,ds \\\\ &\\leq C\\int_0^te^{-b(t-s)}M^2\\,ds \\\\ &\\leq CM^2. \\end{aligned}\n(3.10)\nBy (3.6), (3.7), (3.9), and (3.10), we have\n$$\\|u\\|_{L_{\\infty}}\\leq CE+CE+CM^2.$$\nThus M(t)≤CE+CM 2(t). Because $$M(0)=\\sup_{|\\alpha|\\leq1}\\|D_{x}^{\\alpha}u_{0}\\|_{L_{\\infty}}\\leq \\|u_{0}\\|_{H^{l}}+\\|\\nabla u_{0}\\|_{H^{l}}\\leq CE$$, this implies that M(t)≤CE; thus our result is proved. □\n\nLemma 3.5\n\n$$\\|u\\|_{H^{l}}\\leq CE$$. $$\\|\\nabla u\\|_{H^{l}}\\leq CE$$.\n\nProof\n\nBy (2.6) and the formula f(u)=(u 2,u 2,…,u 2), we can assert that\n$$\\bigl\\| D_x^{\\alpha}f(u)\\bigr\\| _{L_2}\\leq C\\|u \\|_{L_{\\infty}}\\bigl\\| D_x^{\\alpha}u\\bigr\\| _{L_2}.$$\n(3.11)\nFrom (1.1) we have\n$$D_x^{\\alpha}\\partial_tu-D_x^{\\alpha} \\Delta\\partial_tu-\\eta D_x^{\\alpha}\\Delta u+D_x^{\\alpha}(\\beta\\cdot\\nabla)u=-D_x^{\\alpha} \\operatorname {div}f(u).$$\n(3.12)\nBy (3.12) and Hölder inequality, we have\n\\begin{aligned} &\\frac{1}{2}\\frac{\\partial}{\\partial t}\\bigl\\| D_x^{\\alpha}u \\bigr\\| _{L_2}^2+\\frac {1}{2}\\frac{\\partial}{\\partial t} \\bigl\\| D_x^{\\alpha}\\nabla u\\bigr\\| _{L_2}^2+\\eta \\bigl\\| D_x^{\\alpha}\\nabla u\\bigr\\| _{L_2}^2 \\\\ &\\quad =-\\int D_x^{\\alpha}u D_x^{\\alpha} \\operatorname {div}f(u)dx \\\\ &\\quad =\\int D_x^{\\alpha}f(u)\\cdot D_x^{\\alpha} \\nabla u\\,dx \\\\ &\\quad \\leq\\frac{\\eta}{2}\\bigl\\| D_x^{\\alpha}\\nabla u \\bigr\\| _{L_2}^2+C\\bigl\\| D_x^{\\alpha}f(u) \\bigr\\| _{L_2}^2. \\end{aligned}\n(3.13)\nFrom (3.11), (3.13) and Lemma 3.4, we have\n\\begin{aligned} \\frac{1}{2}\\frac{\\partial}{\\partial t}\\bigl\\| D_x^{\\alpha}u \\bigr\\| _{L_2}^2+\\frac {1}{2}\\frac{\\partial}{\\partial t} \\bigl\\| D_x^{\\alpha}\\nabla u\\bigr\\| _{L_2}^2+ \\frac{\\eta}{2}\\bigl\\| D_x^{\\alpha} \\nabla u\\|_{L_2}^2 \\leq& C\\bigl\\| D_x^{\\alpha}f(u)\\bigr\\| _{L_2}^2 \\\\ \\leq& C\\|u\\|_{L_{\\infty}}^2\\bigl\\| D_x^{\\alpha}u \\bigr\\| _{L_2}^2 \\\\ \\leq& CE^2\\bigl\\| D_x^{\\alpha}u\\bigr\\| _{L_2}^2. \\end{aligned}\nThen\n\\begin{aligned} &\\bigl\\| D_x^{\\alpha}u\\bigr\\| _{L_2}^2+ \\bigl\\| D_x^{\\alpha}\\nabla u\\bigr\\| _{L_2}^2+ \\frac{\\eta }{2}\\int_0^t\\bigl\\| D_x^{\\alpha} \\nabla u\\bigr\\| _{L_2}^2 \\\\ &\\quad \\leq CE^2\\int_0^t \\bigl\\| D_x^{\\alpha}u\\bigr\\| _{L_2}^2ds+ \\bigl\\| D_x^{\\alpha}u_0\\bigr\\| _{L_2}^2+ \\bigl\\| D_x^{\\alpha}\\nabla u_0\\bigr\\| _{L_2}^2. 
\\end{aligned}\nBy (3.1), if |α|=1 then\n$$\\bigl\\| D_x^{\\alpha}u\\bigr\\| _{L_2}\\leq CE,\\qquad \\int _0^t\\bigl\\| D_x^{\\alpha}\\nabla u \\bigr\\| _{L_2}^2ds\\leq CE^2.$$\nUsing induction, we have\n$$\\bigl\\| D_x^{\\alpha}u\\bigr\\| _{L_2}\\leq CE,\\qquad \\bigl\\| D_x^{\\alpha}\\nabla u\\bigr\\| _{L_2}\\leq CE,\\quad|\\alpha| \\leq l.$$\n(3.14)\nThus our lemma is proved. □\n\nAccording to Lemma 3.5 and using the local solution, by the usual method, we can derive a global solution for (1.1) such that uL (0,∞,H l (R n )).\n\n4 Decay estimation\n\nSetting $$M(t)=\\sup_{|\\alpha|\\leq l, 0\\leq s\\leq t}\\|D_{x}^{\\alpha}u\\|_{L_{2}}(1+s)^{\\frac{n}{4}+\\frac{|\\alpha|}{2}}$$, we have\n$$\\bigl\\| D_x^{\\alpha}u\\bigr\\| _{L_2}\\leq M(t) (1+s)^{-\\frac{n}{4}-\\frac{|\\alpha|}{2}},\\quad|\\alpha|\\leq l.$$\n(4.1)\nBy (4.1),\n$$\\bigl\\| D_x^{\\alpha}f(u) (s)\\bigr\\| _{L_1}\\leq\\sum _{|\\alpha_1|+|\\alpha_2|=|\\alpha|}\\bigl\\| D_x^{\\alpha_1}u \\bigr\\| _{L_2}\\bigl\\| D_x^{\\alpha _1}u\\bigr\\| _{L_2} \\leq CM^2(t) (1+s)^{-\\frac{n}{2}-\\frac{|\\alpha|}{2}}.$$\n(4.2)\nFrom (3.11), Lemma 3.4 and (4.1), it follows that\n$$\\bigl\\| D_x^{\\alpha}f(u)\\bigr\\| _{L_2}\\leq C\\|u \\|_{L_\\infty}\\bigl\\| D_x^{\\alpha}u\\bigr\\| _{L_2}\\leq CEM(t) (1+s)^{-\\frac{n}{4}-\\frac{|\\alpha|}{2}},\\quad |\\alpha|\\leq l.$$\n(4.3)\nBy (3.4) and Young’s inequality,\n\\begin{aligned} \\bigl\\| D_x^{\\alpha}u\\bigr\\| _{L_2} \\leq& \\bigl\\| D_x^{\\alpha}G_1\\bigr\\| _{L_2} \\|u_0\\|_{L_1}+\\bigl\\| G_2\\ast D_x^{\\alpha}u_0 \\bigr\\| _{L_2} \\\\ &{}+ \\int_0^t\\bigl\\| D_x^{\\alpha}H_1(t-s) \\ast \\operatorname {div}f(u)\\bigr\\| _{L_2}ds+\\int_0^t\\bigl\\| \\nabla H_2(t-s)\\ast D_x^{\\alpha}f(u) \\bigr\\| _{L_2}\\,ds. \\end{aligned}\n(4.4)\nFrom Lemmas 3.2 and 3.3, we have\n$$\\bigl\\| D_x^{\\alpha}G_1\\bigr\\| _{L_2} \\|u_0\\|_{L_1}+\\bigl\\| G_2\\ast D_x^{\\alpha}u_0 \\bigr\\| _{L_2}\\leq CE(1+t)^{-\\frac{n}{4}-\\frac{|\\alpha|}{2}},\\quad|\\alpha|\\leq l.$$\n(4.5)\nWhen |α|≤l, from Lemma 3.2, (4.2) and (3.1), we get\n\\begin{aligned} &\\int_0^t\\bigl\\| D_x^{\\alpha}H_1(t-s) \\ast \\operatorname {div}f(u)\\bigr\\| _{L_2}\\,ds \\\\ &\\quad \\leq\\int_0^{\\frac{t}{2}} \\bigl\\| D_x^{\\alpha}\\nabla H_1(t-s)\\bigr\\| _{L_2} \\bigl\\| f(u)\\bigr\\| _{L_1}\\,ds \\\\ &\\qquad{}+\\int_{\\frac{t}{2}}^t \\bigl\\| \\nabla H_1(t-s) \\bigr\\| _{L_2}\\bigl\\| D_x^{\\alpha} f(u)\\bigr\\| _{L_1}\\,ds \\\\ &\\quad \\leq\\int_0^{\\frac{t}{2}}(1+t-s)^{-\\frac{n}{4}-\\frac{|\\alpha|}{2}-\\frac {1}{2}}M^2(1+s)^{-\\frac{n}{2}}\\,ds \\\\ &\\qquad{}+ \\int_{\\frac{t}{2}}^t(1+t-s)^{-\\frac{n}{4}-\\frac{1}{2}}M^2(t) (1+s)^{-\\frac{n}{2}-\\frac{|\\alpha|}{2}}\\,ds \\\\ &\\quad \\leq CM^2(1+t)^{-\\frac{n}{4}-\\frac{|\\alpha|}{2}}+C(1+t)^{-\\frac{n}{4}-\\frac {|\\alpha|}{2}}M^2(t). \\end{aligned}\n(4.6)\nFrom Lemma 3.3 and (4.3), with |α|≤l, we have\n\\begin{aligned} &\\int_0^t\\bigl\\| \\nabla H_2(t-s)\\ast D_x^{\\alpha} f(u)\\bigr\\| _{L_2}\\,ds \\end{aligned}\n(4.7)\n\\begin{aligned} &\\quad \\leq \\int_0^te^{-b(t-s)}\\bigl( \\bigl\\| D_x^{\\alpha} f(u)\\bigr\\| _{L_2} +\\|f_3 \\|_{L_1}\\bigl\\| D_x^{\\alpha} f(u)\\bigr\\| _{L_2}\\bigr)\\,ds \\\\ &\\quad \\leq \\int_0^te^{-b(t-s)}CEM(t) (1+s)^{-\\frac{n}{4}-\\frac{|\\alpha |}{2}}\\,ds \\\\ &\\quad \\leq CEM(t) (1+t)^{-\\frac{n}{4}-\\frac{|\\alpha|}{2}}. 
\\end{aligned}\n(4.8)\nFrom (4.4)–(4.7) we obtain\n$$\\bigl\\| D_x^{\\alpha}u \\bigr\\| _{L_2}\\leq CE(1+t)^{-\\frac{n}{4}-\\frac{|\\alpha|}{2}}+CM^2(1+t)^{-\\frac{n}{4}-\\frac {|\\alpha|}{2}}+ CEM(t) (1+t)^{-\\frac{n}{4}-\\frac{|\\alpha|}{2}},$$\nthus M(t)≤CE+CM 2(t)+CEM(t). In addition, since M(0)≤CE, we have M(t)≤CE. Therefore,\n$$\\bigl\\| D_x^{\\alpha}u\\bigr\\| _{L_2}\\leq CE(1+t)^{-\\frac{n}{4}-\\frac{|\\alpha|}{2}}.$$\n(4.9)\nThus we get Theorem 1.1, our main result in this paper.\n\nReferences\n\n1. 1.\nAlbert, J.: On the decay of solutions of the generalized Benjamin–Bona–Mahony equations. J. Math. Anal. Appl. 141, 527–537 (1989)\n2. 2.\nAvrin, J., Goldstein, J.A.: Global existence for the Benjamin–Bona–Mahony equation in arbitrary dimensions. Nonlinear Anal. 9, 861–865 (1985)\n3. 3.\nBenjamin, T.B., Bona, J.L., Mahony, J.J.: Model equations for long waves in nonlinear dispersive systems. Philos. Trans. R. Soc. Lond. Ser. A, Math. Phys. Sci. 272, 47–78 (1972)\n4. 4.\nBiler, P.: Long time behavior of solutions of the generalized Benjamin–Bona–Mahony equation in two space dimensions. Differ. Integral Equ. 5, 891–901 (1992)\n5. 5.\nFang, S.M., Guo, B.L.: Long time behavior for solution of initial-boundary value problem for one class of system with multidimensional inhomogeneous GBBM equations. Appl. Math. Mech. 26(6), 659–664 (2005)\n6. 6.\nFang, S., Guo, B.: The decay rates of solutions of generalized Benjamin–Bona–Mahony equations in multi-dimensions. Nonlinear Anal. 69, 2230–2235 (2008)\n7. 7.\nGoldstein, J.A., Wichnoski, B.J.: On the Benjamin–Bona–Mahony equation in higher dimensions. Nonlinear Anal. 4, 665–675 (1980)\n8. 8.\nGuo, B.L.: Initial boundary value problem for one class of system of multidimensional inhomogeneous GBBM equations. Chin. Ann. Math., Ser. B 8(2), 226–238 (1987)\n9. 9.\nWang, W., Yang, T. : The pointwise estimate of solutions of Euler equations with damping in multi-dimensions. J. Differ. Equ. 173, 410–450 (2001)\n10. 10.\nZhang, L.H.: Decay of solutions of generalized Benjamin–Bona–Mahony–Burgers equations in n-space dimensions. Nonlinear Anal. 25, 1343–1369 (1995)",
null,
""
] | [
null,
"https://link.springer.com/track/open/article/granted/10.1007/s40306-014-0054-3",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67340374,"math_prob":0.99999964,"size":5111,"snap":"2019-26-2019-30","text_gpt3_token_len":1628,"char_repetition_ratio":0.123751715,"word_repetition_ratio":0.01724138,"special_character_ratio":0.31324595,"punctuation_ratio":0.18653321,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000036,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-20T23:07:09Z\",\"WARC-Record-ID\":\"<urn:uuid:884070d7-3693-43a5-a9ee-b8650430dbb4>\",\"Content-Length\":\"113696\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b80f93b4-b13d-4bad-9d02-fd56d8477576>\",\"WARC-Concurrent-To\":\"<urn:uuid:7c3a87af-86bb-4464-a221-a452b4516b97>\",\"WARC-IP-Address\":\"151.101.248.95\",\"WARC-Target-URI\":\"https://link.springer.com/article/10.1007/s40306-014-0054-3\",\"WARC-Payload-Digest\":\"sha1:E4Y7A4YT7CP6M5TD44ZCWT4DU56GUM6Y\",\"WARC-Block-Digest\":\"sha1:6MPX4VSVGREX3GPCN7QOWFZMY3JKHZOY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999291.1_warc_CC-MAIN-20190620230326-20190621012326-00049.warc.gz\"}"} |
https://corvalliscustom.com/image-ratio-printing | [
"# Image Ratio for Printing\n\n## What is Aspect ratio?\n\nAspect ratio refers to the relationship between the width and height of an image. The number before the colon represents the width of the image and the number after is the height. Both numbers represent a relationship, not a specific measurement.\n\n## How does Aspect Ratio affect printing?\n\nThe aspect ratio will never affect the quality of a print, only how it is cropped. For example, if an image is uploaded with an aspect ratio of 3:2 but dimensions of 5 inches x 5 inches are selected then the image will be cropped to square, or a white boarder will be left on the top and bottom. This occurs because the file ratio of 3:2 was not equal to the selected ratio of 1:1.",
null,
"",
null,
"## How does aspect ratio affect the size of a print?\n\nBecause aspect ratio represents the width to height relationship, it doesn’t limit or restrict the overall size of a print.\nTypically we like to ask customers how wide they would like their print. By selecting a width we can determine height using the image aspect ratio of the original file.\nFor example: If you have an image with an aspect ratio of 3:2 and you want a print that is 12 inches wide, the height will be 66% (or 2/3) the size of the width. 12 *.66 = ~8 inches.\n\n##",
null,
"",
null,
"You need to choose options for your item\n\nx"
] | [
null,
"https://corvalliscustom.com/media/wysiwyg/1to1ratio.jpg",
null,
"https://corvalliscustom.com/media/wysiwyg/1to1withWHITEratio.jpg",
null,
"https://corvalliscustom.com/media/wysiwyg/2to1ratio.jpg",
null,
"https://corvalliscustom.com/media/wysiwyg/4to1ratio.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.931642,"math_prob":0.9664352,"size":1240,"snap":"2020-45-2020-50","text_gpt3_token_len":281,"char_repetition_ratio":0.15453075,"word_repetition_ratio":0.017621145,"special_character_ratio":0.22983871,"punctuation_ratio":0.099236645,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95687133,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,4,null,4,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-27T23:48:40Z\",\"WARC-Record-ID\":\"<urn:uuid:02751df1-6c2a-40ba-8deb-7939cb97aadd>\",\"Content-Length\":\"79792\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4479c0f6-d341-4948-9116-181aee282413>\",\"WARC-Concurrent-To\":\"<urn:uuid:244c5a6c-6eac-4680-b185-3cafb7625e6c>\",\"WARC-IP-Address\":\"104.27.132.202\",\"WARC-Target-URI\":\"https://corvalliscustom.com/image-ratio-printing\",\"WARC-Payload-Digest\":\"sha1:PDKWNGSE7STR6HQZH6OBK2NDEMPDUPQL\",\"WARC-Block-Digest\":\"sha1:FX5X6PXOAG464KESCIOYE3OGZLALRE3Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107894890.32_warc_CC-MAIN-20201027225224-20201028015224-00313.warc.gz\"}"} |
https://onlinestatbook.com/2/sampling_distributions/SampDist_v2.html | [
"Sampling Size Demonstration\n\nLearning Objectives\n1. Develop a basic understanding of the properties of a sampling distribution based on the properties of the population.\n2. Compare the sampling distributions of different sample sizes.\n\nInstructions\nThis simulation demonstrates the effect of sample size on the sampling distribution.\n\nDepicted on the top graph is the population distribution. By default it is a uniform distribution (all values are equally likely). The sampling distributions for two different sample sizes are shown in the lower two graphs. The starting values are 2 and 10. By default, the statistic to be computed is the mean, although you can also specify to compute the median. For both the population distribution and the sampling distribution, the mean and the standard deviation are depicted graphically on the frequency distribution itself. The blue-colored vertical bar below the X-axis indicates the mean value. The red line starts from this mean value and extends one standard deviation in length in both directions. The values of both the mean and the standard deviation are also given to the left of the graph. Notice that the numeric form of a property matches its graphical form in color. In this simulation, you specify two sample sizes (the defaults are set at N = 2 and N = 10), and then sample a sufficiently large number of samples until the sampling distributions stabilize. Compare the mean and standard deviaiton of the two sampling distributions. Repeat the process a couple times and watch the results. Do you observe a general rule regarding the effect of sample size on the mean and the standard deviation of the sampling distribution? You may also test the effect of sample size with a normal population or with a different sample statistic (the median). When you have discovered the rule, go back and answer the questions again.\n\nIllustrated Instructions\nVideo Demo\nThe video below changes the population distribution from uniform to normal and then draws 30,000 samples with N = 2 and 30,000 samples with N = 10 by clicking the \"10,000 Samples\" 3 times. Notice the differences in the means and standard deviations of the two sample distributions. How do these compare to the population?\n\nThe vertical bar to the right of each sampling distribution can be dragged along the x-axis and once the mouse is released the area of the curve to the left of the line is displayed above the chart.\n\nFor some browsers you will not see the bars move as you move them. They will move when you release the mouse button.\n\nVideo Demo"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90574086,"math_prob":0.9897753,"size":1065,"snap":"2023-14-2023-23","text_gpt3_token_len":218,"char_repetition_ratio":0.14985862,"word_repetition_ratio":0.23655914,"special_character_ratio":0.21220657,"punctuation_ratio":0.06341463,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9913085,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T13:36:26Z\",\"WARC-Record-ID\":\"<urn:uuid:16aea92f-84a5-4baf-a936-081f99e79e03>\",\"Content-Length\":\"8259\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a823a51d-bc41-4463-8fd1-76aba09fef20>\",\"WARC-Concurrent-To\":\"<urn:uuid:37c35cd7-7074-4602-ac4e-22e3fd3c161b>\",\"WARC-IP-Address\":\"208.113.186.203\",\"WARC-Target-URI\":\"https://onlinestatbook.com/2/sampling_distributions/SampDist_v2.html\",\"WARC-Payload-Digest\":\"sha1:YN53YYMFTXLSRIAGO7BQHHQIKXVV4CSD\",\"WARC-Block-Digest\":\"sha1:EH6PE2J5EUNDN5ZEV7RPT4NVGFGW5X5J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224652569.73_warc_CC-MAIN-20230606114156-20230606144156-00788.warc.gz\"}"} |
http://wiki.nars2000.org/index.php?title=Language_Features&action=history | [
"# Language Features\n\nAt the moment, the following sections describe only those language features that are New or Enhanced relative to the Extended APL Standard, or that deserve special comment.\n\n## Syntax\n\n• Strand Assignment: (A1 A2 ... An)←R\n• Modify Assignment: Af←R\n• Modify Strand Assignment: (A1 A2 ... An)f←R\n• Hyperators: ∇ Z←L (LO (LH dhdo RH) RO) R\n• Function/Operator/Hyperator Assignment: A←f, A←op1, A←op2\n• Binding Strength: How Variables, Functions, Operators, Hyperators and other syntactic elements combine\n• Sink: ←R\n• Point Notation:\n• Base16b10FFFF is a shorthand for 16⊥1 0 15 15 15 15 and 10b45v is a shorthand for 10⊥4 5 31.\n• Euler2x3 is a shorthand for 2e3 or 2×(*1)*3 where e is Euler's Number (2.718281828459045...).\n• Pi2p3 is a shorthand for 2π3 or 2×(○1)*3 where π is Archimedes' constant (3.141592653589793...).\n• Gamma2g3 is a shorthand for 2γ3 where γ is Euler-Mascheroni's Constant (0.5772156649015329...).\n• Hypercomplex2i3, 2J3 (both equal to 2+3×√¯1), 2ad3 (Angle in Degrees), 2ar3 (Angle in Radians), or 2au3 (Angle in Unit Normalized) for a Complex number, 1i2j3k4 for a Quaternion number, and 1i2j3k4l5ij6jk7kl8 for an Octonion number.\n• Rational2r3 is a shorthand for 2÷3 as a Multiple-Precision Integer/Rational number.\n• Variable-Precision Floating2.3v is a shorthand for 2.3 as a Multiple-Precision Floating Point number.\n• Ball Arithmetic2.3±1¯17 is a shorthand for a Ball whose Midpoint and Radius are 2.3 and 1¯17, respectively.\n• Trains: e.g., avg←(+⌿ ÷ ≢) applies the functions to its argument(s) in a particular way (in this case, to compute the average of a numeric scalar or vector).\n• Anonymous Functions/Operators/Hyperators: multi-line grouping of one or more statements all enclosed in braces such as {(+⌿⍵)÷≢⍵}.\n\n## Primitive functions\n\nwhere L is the Left argument, R is the Right argument, and [X] is an optional Axis value.\n\n## Primitive operators\n\n Name Symbol/Syntax Derived Function Valence Axis {L} f[X] R, {L} (f op1[X]) R, {L} (f op2[X] g) R ambivalent Combinatorial a‼ R monadic Commute L f⍨ R ←→ R f L dyadic Composition {L} f⍥g R ambivalent Compose {L} f∘g R, (f∘b) R, a∘g R ambivalent/monadic Convolution L f⍡g R dyadic Determinant f.g R monadic Duplicate f⍨ R ←→ R f R monadic Matrix {L} f⌻ R, ∘⌻ R ambivalent/monadic Multisets {L} f⍦ R ambivalent Null {L} f⊙ R ambivalent Power {L} f⍣g R, {L} (f⍣b) R ambivalent Rank {L} (f⍤[X] b) R ambivalent Variant {L} (f⍠b) R ambivalent\n\nwhere {L} is an optional Left argument, R is the Right argument, f and g represent Functions, a and b represent Variables, and [X] is an optional Axis value.\n\n## System Commands\n\nSystem Commands provide features to the user of the APL system, separate from actual workspaces, variables or APL operators. These provide such features as accessing files, saving a workspace, and exiting the APL interpreter. The commands are not case sensitive, so )IN and )in do the same thing.\n\nNARS2000 currently has the following system commands:\n\n )BOX Turn ON/OFF box around output )CLEAR Start a new session manager window with an empty workspace )CLOSE )COPY )DROP )EDIT Create a new function in the workspace or edit an existing function )ERASE Delete a variable or function in the current workspace )EXIT Close down the interpreter and exit the program. Same as )OFF )FNS )FOP Display functions, operators, and hyperators. 
Same as )FOPS )FOPS )HYP Display hyperators only )IN )INASCII )LIB )LAOD Alternative spelling of )LOAD )LOAD Load an existing workspace )NEWTAB\n )NMS )OFF Close down the interpreter and exit the program. Same as )EXIT )OPEN )OPS Display operators only )OUT )RESET )SAVE Save the current workspace )SVAE Alternative spelling for )SAVE )SI workspace State Indicator )SIC State Indicator Clear )SINL )SYMB )ULIB )VARS )XLOAD )WSID\n\n## System Variables and Functions\n\nSystem Variables (A value may be assigned to these except for ⎕DM)\nALX CT DM DT ELX FC FEATURE FPC IC IO\nLR LX PP PR PW RL SA WSID\nNiladic System Functions (a value cannot be assigned to these)\nA AV EM ET LC NNAMES NNUMS SI SYSID SYSVER\nT TC TCBEL TCBS TCESC TCFF TCHT TCLF TCNL TCNUL\nTS WA\nMonadic or dyadic system functions (a value cannot be assigned to these)\nAT CR DC DL DR EA EC ERROR ES EX\nFMT FX MF NAPPEND NC NCREATE NERASE NINFO NL NLOCK\nNREAD NRENAME NREPLACE NRESIZE NSIZE NTIE NUNTIE STOP TF TRACE\nUCS VR\nNote that quad functions and variables (except for the ⎕A family of functions) are case insensitive"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.72371185,"math_prob":0.9380083,"size":3684,"snap":"2019-26-2019-30","text_gpt3_token_len":1235,"char_repetition_ratio":0.12744565,"word_repetition_ratio":0.021806853,"special_character_ratio":0.31188926,"punctuation_ratio":0.13544668,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9673045,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-20T03:52:48Z\",\"WARC-Record-ID\":\"<urn:uuid:9fe1db50-394d-429f-9c55-b7c8a7c4a04b>\",\"Content-Length\":\"61837\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c1b57f3-c60e-4b41-ad67-54232613a1e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:670d35f7-6127-464a-9aa3-a02aaf351265>\",\"WARC-IP-Address\":\"69.36.190.213\",\"WARC-Target-URI\":\"http://wiki.nars2000.org/index.php?title=Language_Features&action=history\",\"WARC-Payload-Digest\":\"sha1:AH5KO5XRVCYL4YENLETTHBMIQXI2KS25\",\"WARC-Block-Digest\":\"sha1:XZTU6KUC2VIDJSB6FEYLWOL2GK2PTV7Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526408.59_warc_CC-MAIN-20190720024812-20190720050812-00440.warc.gz\"}"} |
https://high-tech-guide.com/article/what-are-the-velocity-and-acceleration-equations-in-polar-coordinates | [
"# Blog\n\n## What are the velocity and acceleration equations in polar coordinates?",
null,
"## What are plane polar coordinates in physics?\n\nA polar coordinate system is a two-dimensional coordinate system in which each point on a plane is determined by a distance from a reference point and an angle from a reference direction.\n\n## What are unit vectors in polar coordinates?\n\nThere are three mutually orthogonal unit vectors associated with the coordinates r, θ, φ, defined as follows: er = cos φ sin θ i+sin φ sin θj+cos θ k, eθ = cos φ cos θ i+sin φ cos θj−sin θ k, eφ = − sin φ i+cos φ j.\n\n## What is Cartesian velocity?\n\nThe velocity of the object with respect to the object's central body, as observed from the requested coordinate system, expressed in Cartesian components of that system, as a function of time.\n\n## How do you find the magnitude of acceleration in polar coordinates?\n\nWhen we express acceleration in plane polar coordinates, we can find that →a=(¨r−r˙θ2)ˆr+(r¨θ−2˙r˙θ)ˆθ.Dec 2, 2017",
null,
"",
null,
"### What does R mean in polar coordinates?\n\nThe coordinate r is the length of the line segment from the point (x,y) to the origin and the coordinate θ is the angle between the line segment and the positive x-axis.\n\n### How do you convert XY coordinates to polar coordinates?\n\nTo convert from Cartesian coordinates to polar coordinates: r=√x2+y2 . Since tanθ=yx, θ=tan−1(yx) . So, the Cartesian ordered pair (x,y) converts to the Polar ordered pair (r,θ)=(√x2+y2,tan−1(yx)) .\n\n### How do you know if polar coordinates are the same?\n\nSolution: One big difference between polar and rectangular coordinates is that polar coordinates can have multiple coordinates representing the same point by adjusting the angle θ or the sign of r and the angle θ.\n\n### How do you find the polar coordinates?\n\n• Locate the angle on the polar coordinate plane. Refer to the figure to find the angle: Determine where the radius intersects the angle. Because the radius is 2 (r = 2), you start at the pole and move out 2 spots in the direction of the angle. Plot the given point.\n\n### What is a polar coordinate graph?\n\n• Polar coordinates provide a method of rendering graphs and indicating the positions of points on a two-dimensional (2D) surface. The polar coordinate system is employed in mathematics, physics, engineering, navigation, robotics, and other sciences. The polar plane consists of a reference axis, or ray, that emanates from a point called the origin.\n\n### What is the definition of polar coordinate system?\n\n• In mathematics, the polar coordinate system is a two-dimensional coordinate system in which each point on a plane is determined by a distance from a reference point and an angle from a reference direction.\n\n### What is a polar angle?\n\n• There are two different definitions of the polar angle. In the plane, the polar angle is the counterclockwise angle from the x-axis at which a point in the -plane lies. In spherical coordinates, the polar angle is the angle measured from the -axis, denoted in this work, and also variously known as the zenith angle andcolatitude."
] | [
null,
"https://img.youtube.com/vi/MlNmlY_yoZ0/0.jpg",
null,
"https://img.youtube.com/vi/3z15i3hjNzo/0.jpg",
null,
"https://img.youtube.com/vi/AxlSzLPKaAU/0.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8444274,"math_prob":0.9760933,"size":3076,"snap":"2022-27-2022-33","text_gpt3_token_len":707,"char_repetition_ratio":0.20930989,"word_repetition_ratio":0.116504855,"special_character_ratio":0.21423927,"punctuation_ratio":0.10708402,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989246,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-08T09:26:02Z\",\"WARC-Record-ID\":\"<urn:uuid:54113711-0486-49c2-bfce-200688e43397>\",\"Content-Length\":\"53308\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bd791304-a6a3-4df3-82ab-7b6dce03e9ae>\",\"WARC-Concurrent-To\":\"<urn:uuid:8dcdd16f-cca1-4be9-a72e-8228cd2119d8>\",\"WARC-IP-Address\":\"172.67.178.1\",\"WARC-Target-URI\":\"https://high-tech-guide.com/article/what-are-the-velocity-and-acceleration-equations-in-polar-coordinates\",\"WARC-Payload-Digest\":\"sha1:5NETTPGNB34N7RZNOXKX3GZVOEEL5BBQ\",\"WARC-Block-Digest\":\"sha1:IMWCMFOCP7OSHADYCP322GO2D47MYOEG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570793.14_warc_CC-MAIN-20220808092125-20220808122125-00698.warc.gz\"}"} |
https://www.colorhexa.com/0201ce | [
"# #0201ce Color Information\n\nIn a RGB color space, hex #0201ce is composed of 0.8% red, 0.4% green and 80.8% blue. Whereas in a CMYK color space, it is composed of 99% cyan, 99.5% magenta, 0% yellow and 19.2% black. It has a hue angle of 240.3 degrees, a saturation of 99% and a lightness of 40.6%. #0201ce color hex could be obtained by blending #0402ff with #00009d. Closest websafe color is: #0000cc.\n\n• R 1\n• G 0\n• B 81\nRGB color chart\n• C 99\n• M 100\n• Y 0\n• K 19\nCMYK color chart\n\n#0201ce color description : Strong blue.\n\n# #0201ce Color Conversion\n\nThe hexadecimal color #0201ce has RGB values of R:2, G:1, B:206 and CMYK values of C:0.99, M:1, Y:0, K:0.19. Its decimal value is 131534.\n\nHex triplet RGB Decimal 0201ce `#0201ce` 2, 1, 206 `rgb(2,1,206)` 0.8, 0.4, 80.8 `rgb(0.8%,0.4%,80.8%)` 99, 100, 0, 19 240.3°, 99, 40.6 `hsl(240.3,99%,40.6%)` 240.3°, 99.5, 80.8 0000cc `#0000cc`\nCIE-LAB 25.229, 67.23, -91.66 11.174, 4.49, 58.667 0.15, 0.06, 4.49 25.229, 113.672, 306.259 25.229, -7.291, -101.533 21.19, 57.05, -149.322 00000010, 00000001, 11001110\n\n# Color Schemes with #0201ce\n\n• #0201ce\n``#0201ce` `rgb(2,1,206)``\n• #cdce01\n``#cdce01` `rgb(205,206,1)``\nComplementary Color\n• #0167ce\n``#0167ce` `rgb(1,103,206)``\n• #0201ce\n``#0201ce` `rgb(2,1,206)``\n• #6901ce\n``#6901ce` `rgb(105,1,206)``\nAnalogous Color\n• #67ce01\n``#67ce01` `rgb(103,206,1)``\n• #0201ce\n``#0201ce` `rgb(2,1,206)``\n• #ce6901\n``#ce6901` `rgb(206,105,1)``\nSplit Complementary Color\n• #01ce02\n``#01ce02` `rgb(1,206,2)``\n• #0201ce\n``#0201ce` `rgb(2,1,206)``\n• #ce0201\n``#ce0201` `rgb(206,2,1)``\n• #01cdce\n``#01cdce` `rgb(1,205,206)``\n• #0201ce\n``#0201ce` `rgb(2,1,206)``\n• #ce0201\n``#ce0201` `rgb(206,2,1)``\n• #cdce01\n``#cdce01` `rgb(205,206,1)``\n• #010182\n``#010182` `rgb(1,1,130)``\n• #02019b\n``#02019b` `rgb(2,1,155)``\n• #0201b5\n``#0201b5` `rgb(2,1,181)``\n• #0201ce\n``#0201ce` `rgb(2,1,206)``\n• #0201e7\n``#0201e7` `rgb(2,1,231)``\n• #0504fe\n``#0504fe` `rgb(5,4,254)``\n• #1f1efe\n``#1f1efe` `rgb(31,30,254)``\nMonochromatic Color\n\n# Alternatives to #0201ce\n\nBelow, you can see some colors close to #0201ce. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #0133ce\n``#0133ce` `rgb(1,51,206)``\n• #0122ce\n``#0122ce` `rgb(1,34,206)``\n• #0111ce\n``#0111ce` `rgb(1,17,206)``\n• #0201ce\n``#0201ce` `rgb(2,1,206)``\n• #1301ce\n``#1301ce` `rgb(19,1,206)``\n• #2401ce\n``#2401ce` `rgb(36,1,206)``\n• #3501ce\n``#3501ce` `rgb(53,1,206)``\nSimilar Colors\n\n# #0201ce Preview\n\nThis text has a font color of #0201ce.\n\n``<span style=\"color:#0201ce;\">Text here</span>``\n#0201ce background color\n\nThis paragraph has a background color of #0201ce.\n\n``<p style=\"background-color:#0201ce;\">Content here</p>``\n#0201ce border color\n\nThis element has a border color of #0201ce.\n\n``<div style=\"border:1px solid #0201ce;\">Content here</div>``\nCSS codes\n``.text {color:#0201ce;}``\n``.background {background-color:#0201ce;}``\n``.border {border:1px solid #0201ce;}``\n\n# Shades and Tints of #0201ce\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #00000b is the darkest color, while #f6f6ff is the lightest one.\n\n• #00000b\n``#00000b` `rgb(0,0,11)``\n• #00001e\n``#00001e` `rgb(0,0,30)``\n• #000032\n``#000032` `rgb(0,0,50)``\n• #010045\n``#010045` `rgb(1,0,69)``\n• #010059\n``#010059` `rgb(1,0,89)``\n• #01016c\n``#01016c` `rgb(1,1,108)``\n• #010180\n``#010180` `rgb(1,1,128)``\n• #010193\n``#010193` `rgb(1,1,147)``\n• #0201a7\n``#0201a7` `rgb(2,1,167)``\n• #0201ba\n``#0201ba` `rgb(2,1,186)``\n• #0201ce\n``#0201ce` `rgb(2,1,206)``\n• #0201e2\n``#0201e2` `rgb(2,1,226)``\n• #0201f5\n``#0201f5` `rgb(2,1,245)``\n• #0d0cfe\n``#0d0cfe` `rgb(13,12,254)``\n• #2120fe\n``#2120fe` `rgb(33,32,254)``\n• #3433fe\n``#3433fe` `rgb(52,51,254)``\n• #4747fe\n``#4747fe` `rgb(71,71,254)``\n• #5b5afe\n``#5b5afe` `rgb(91,90,254)``\n• #6e6efe\n``#6e6efe` `rgb(110,110,254)``\n• #8281fe\n``#8281fe` `rgb(130,129,254)``\n• #9595fe\n``#9595fe` `rgb(149,149,254)``\n• #a9a8ff\n``#a9a8ff` `rgb(169,168,255)``\n• #bcbcff\n``#bcbcff` `rgb(188,188,255)``\n• #cfcfff\n``#cfcfff` `rgb(207,207,255)``\n• #e3e3ff\n``#e3e3ff` `rgb(227,227,255)``\n• #f6f6ff\n``#f6f6ff` `rgb(246,246,255)``\nTint Color Variation\n\n# Tones of #0201ce\n\nA tone is produced by adding gray to any pure hue. In this case, #61616e is the less saturated color, while #0201ce is the most saturated one.\n\n• #61616e\n``#61616e` `rgb(97,97,110)``\n• #595976\n``#595976` `rgb(89,89,118)``\n• #51517e\n``#51517e` `rgb(81,81,126)``\n• #494986\n``#494986` `rgb(73,73,134)``\n• #41418e\n``#41418e` `rgb(65,65,142)``\n• #393996\n``#393996` `rgb(57,57,150)``\n• #31319e\n``#31319e` `rgb(49,49,158)``\n• #2929a6\n``#2929a6` `rgb(41,41,166)``\n• #2221ae\n``#2221ae` `rgb(34,33,174)``\n• #1a19b6\n``#1a19b6` `rgb(26,25,182)``\n• #1211be\n``#1211be` `rgb(18,17,190)``\n• #0a09c6\n``#0a09c6` `rgb(10,9,198)``\n• #0201ce\n``#0201ce` `rgb(2,1,206)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0201ce is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
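The RGB, CMYK and blending figures quoted at the top of this page can be reproduced with a few lines of code. A rough sketch (rounding conventions may differ slightly from the values shown above, and pure black, where 1 − K = 0, is not handled):

```python
def hex_to_rgb(h):
    h = h.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_cmyk(r, g, b):
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)
    c, m, y = ((1 - v - k) / (1 - k) for v in (r, g, b))
    return c, m, y, k

print(hex_to_rgb('#0201ce'))                          # (2, 1, 206)
print([round(v, 2) for v in rgb_to_cmyk(2, 1, 206)])  # about [0.99, 1.0, 0.0, 0.19]

# "#0201ce could be obtained by blending #0402ff with #00009d" = channel-wise midpoint
blend = tuple((a + b) // 2 for a, b in zip(hex_to_rgb('#0402ff'), hex_to_rgb('#00009d')))
print('#%02x%02x%02x' % blend)                        # #0201ce
```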
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5294362,"math_prob":0.54199934,"size":3632,"snap":"2020-24-2020-29","text_gpt3_token_len":1603,"char_repetition_ratio":0.15049614,"word_repetition_ratio":0.011111111,"special_character_ratio":0.55754405,"punctuation_ratio":0.23516238,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9915463,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-14T16:48:06Z\",\"WARC-Record-ID\":\"<urn:uuid:861082d2-31c3-45c3-b620-5eecfba098a3>\",\"Content-Length\":\"36172\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8a743a80-46fb-48e7-8baf-33f9da634ea1>\",\"WARC-Concurrent-To\":\"<urn:uuid:e3d3632f-61d8-4783-a0db-1854e4e2c96c>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/0201ce\",\"WARC-Payload-Digest\":\"sha1:5OUKU4W2VC3W66ZP2EDQ2VBQCW56335H\",\"WARC-Block-Digest\":\"sha1:6AP7ZCY5H5Y3F4V33ELYJX64L4QZYWKS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655897168.4_warc_CC-MAIN-20200714145953-20200714175953-00359.warc.gz\"}"} |
https://www.mdpi.com/2076-3417/8/1/44 | [
"Next Article in Journal\nStudy on the Structure of an Efficient Receiver for Covert Underwater Communication Using Direct Sequence Spread Spectrum\nNext Article in Special Issue\nLocomotion Efficiency Optimization of Biologically Inspired Snake Robots\nPrevious Article in Journal\nVirtual Reality-Wireless Local Area Network: Wireless Connection-Oriented Virtual Reality Architecture for Next-Generation Virtual Reality Devices\nPrevious Article in Special Issue\nA Bio-Inspired Control Strategy for Locomotion of a Quadruped Robot\n\nFont Type:\nArial Georgia Verdana\nFont Size:\nAa Aa Aa\nLine Spacing:\nColumn Width:\nBackground:\nArticle\n\n# Upper-Body Control and Mechanism of Humanoids to Compensate for Angular Momentum in the Yaw Direction Based on Human Running\n\n1\nDepartment of Modern Mechanical Engineering, Waseda University, 2-2 Wakamatsu-cho, Shinjuku-ku, Tokyo 162-8480, Japan\n2\nWaseda Institute for Advanced Study, Waseda University, No. 41-304, 17 Kikui-cho, Shinjuku-ku, Tokyo 162-0044, Japan\n3\nHumanoid Robotics Institute (HRI), Waseda University, 2-2 Wakamatsu-cho, Shinjuku-ku, Tokyo 162-8480, Japan\n4\nGraduate School of Science and Engineering, Waseda University, No. 41-304, 17 Kikui-cho, Shinjuku-ku, Tokyo 162-0044, Japan\n5\nASICS Corporation, Institute of Sport Science, 6-2-1 Takatsukadai, Nishi-ku, Kobe, 651-2271, Japan\n6\nFaculty of Sport Sciences, Waseda University, 2-579-15 Mikajima, Tokorozawa-shi, Tokyo 359-1192, Japan\n7\nFaculty of Engineering, Kanagawa University, 3-27-1 Rokkakubashi, Kanagawa-ku, Yokohama 221-8686, Japan\n*\nAuthor to whom correspondence should be addressed.\nThese authors contributed equally to this work.\nAppl. Sci. 2018, 8(1), 44; https://doi.org/10.3390/app8010044\nOriginal submission received: 23 October 2017 / Revised: 25 December 2017 / Accepted: 27 December 2017 / Published: 3 January 2018\n\n## Abstract\n\n:\nMany extant studies proposed various stabilizing control methods for humanoids during the stance phase while hopping and running. Although these methods contribute to stability during hopping and running, humanoid robots do not swing their legs rapidly during the flight phase to prevent rotation in the yaw direction. Humans utilize their torsos and arms when running to compensate for the angular momentum in the yaw direction generated by leg movement during the flight phase. In this study, we developed an angular momentum control method based on human motion for a humanoid upper body. The method involves calculation of the angular momentum generated by the movement of the humanoid legs and calculation of the torso and arm motions required to compensate for the angular momentum of the legs in the yaw direction. We also developed a humanoid upper-body mechanism having human link length and mass properties, using carbon-fiber-reinforced plastic and a symmetric structure for generating large angular momentum. The humanoid robot developed in this study could generate almost the same angular momentum as that of a human. Furthermore, when suspended in midair, the humanoid robot achieved angular momentum compensation in the yaw direction.\n\n## 1. Introduction\n\nHumanoid robots are expected to be useful in various environments where people live. The reason for this is that humanoid robots, which are close to humans in behavior and functionality, are easy to adapt to the living environment designed for human beings. 
In addition, while other robots have only a few controllable joints, humanoid robots have more than 20 joints and can negotiate various scenarios. Locomotion is necessary to work in various situations, and so far, many studies have been performed on stable walking motion generation techniques. In recent years, running, which is a movement mode including the flight phase with a faster moving speed compared to walking, has also attracted attention, and research is being advanced on running motion generation methods to improve the movement ability of a humanoid robot. Raibert et al. developed a running robot with a single linear leg . The bipedal robot ATRIAS has a four-bar leg mechanism that includes a series of elastic springs [2,3]. Hyon et al. developed a biologically inspired robot based on a dog-leg model . However, these robots do not have a human-like structure. Some studies have shown that bipedal humanoid robots can run [5,6,7,8,9]. For example, the Advanced Step in Innovative MObility (ASIMO) humanoid robot, which was designed and developed by Honda, can run at a speed of 2.5 m/s . Toyota’s bipedal humanoid robot can run using a zero-moment-point (ZMP)-based running control system . The athlete robot developed by Niiyama et al. has a human-like musculoskeletal system built to execute dynamic motions, such as running . The bipedal robot MABEL, developed by researchers at the University of Michigan, has leg elasticity that originates from a leaf spring. It is the fastest-running of all currently available bipedal robots, having achieved a speed of 3 m/s with axial constraints on the y-axis .\nHowever, present humanoid robots cannot run as fast and stably as human beings, who can run at speeds ranging between 2 m/s and 13 m/s. The reason that ordinary humanoid robots cannot run as fast and stably as humans is that they require both a large power output for kicking the ground and various stabilization control methods. In general, to increase the power output, actuators having large power output capacity are required. However, high-power actuators are heavy and their use renders the robots heavy. Moreover, the heavier robots then require even more actuator power. Therefore, it is difficult to design humanoid robots that can achieve high power output and are light enough to jump. Ordinary humanoid robots can attain a power output of approximately 3.5 W/kg for the joints in the leg; however, humans generate around 16.7 W/kg in leg joints while running . In addition, various stabilization control methods that consider the center of mass position, landing point, ground reaction force, and linear and angular momentum are needed for stable running. However, present running stabilization methods do not consider motion during the flight phase. For example, several studies on running control have used the spring-loaded inverted pendulum (SLIP) model [15,16,17]. This simple model does not consider the mass of the legs, which generates angular momentum by leg swinging . However, a humanoid that can perform human-like fast lower-leg swinging generates large angular momentum during the flight phase, especially in the yaw direction. Some studies considered the angular momentum of the entire body; however, they focused mainly on stance-phase movement [19,20]. Thus, in these studies, humanoids could run slowly without leg swing during the flight phase to decrease the angular momentum generated by leg motion.
Therefore, the humanoid requires a method to compensate for angular momentum during the flight phase for high speed running.\nFor fast and stable running, various characteristics of human running have been identified in studies on human sciences and sport sciences, including the following:\n• The stance leg acts like a linear spring. A human leg can be modeled as a SLIP model.\n• The knee and ankle joints in the stance leg serve as torsion springs that provide leg elasticity and the ability to kick the ground strongly [21,22].\n• The leg and joint stiffness change depending on the running speed .\n• Rapid knee bending occurs in the swing phase to avoid contact of the foot with the ground .\n• The pelvis rotates in the frontal plane to increase the jumping force .\n• Moment compensation is accomplished using torso and arm swinging in yaw direction [26,27].\nWe are working to develop a robot that can run like a human by mimicking the above characteristics. Previously, we developed a lower-body robot that mimics human characteristics, such as leg joint stiffness and pelvic rotations in the frontal plane and can hop with a large joint output of approximately 1000 W by not dissipating the energy at landing but storing the energy in its elastic parts [25,28,29]. In addition, the lower-body robot has the human link length and mass properties, such as the mass, center-of-mass (COM) position in the link, and inertial matrix of each link. The reason for this is that link length and mass properties would result in angular momentum. The currently available humanoid robots do not have human like link lengths or mass properties. These humanoid robots are presumed to have been developed with a focus on walking, which is a movement slower than running. When the angular velocities of joints are not very large, the angular momentum generated is not large and has little influence on the whole-body motion. Some researchers are interested in methods of generating stable motion during the stance phase. Sugihara et al. are developing stable control based on the momentum as a norm during walking exercises . Hyon et al. investigated a back-handspring motion with a multi-link robot and proposed motion planning considering global physical quantities such as the center of mass or angular momentum during the stance phase . On the other hand, during the flight phase, the angular momentum of the whole body of the robot cannot be modified. The upper body and legs are connected to the waist, where the movement of legs and an upper body are generated. It means that when the robot swings only its legs without active moving of the upper body joints during the flight phase, the waist and upper body rotate in yaw direction due to the angular momentum generated by the legs movement. As a result, when the direction of the waist deviates from the traveling direction, the robot cannot perform straight running. To solve it, humans move their upper body actively including arms for generating the angular momentum in opposite direction from that generated by the legs movement [26,27].\nIn this study, our focus is on the flight phase that occurs during the hopping or running motion of a robot, and we aimed to prevent rotation of the waist by aggressively generating angular momentum with the upper body equivalent to that generated by the legs during the flight phase. To this end, we propose an angular momentum compensation control method that uses the arms and torso inspired by the mechanisms of human running. 
Moreover, to realize the proposed method with a real humanoid, we developed an upper-body mechanism to mimic human link lengths and mass properties. We performed experiments with the robot that we developed to evaluate the proposed methods. We confirmed that the humanoid robot could compensate for the angular momentum in the yaw direction that is generated by lower-body movement in midair.\nThe remainder of this paper is organized as follows. In Section 2, we describe the proposed method for compensating for angular momentum in the yaw direction and the design of the upper-body mechanism that mimics the human mass properties. In Section 3, we present the experimental results. In Section 4, we present a discussion. Finally, in Section 5, we present our conclusions.\n\n## 2. Angular Momentum Compensation\n\n#### 2.1. Requirements for Angular Momentum Compensation\n\nWe identified the characteristics of upper-body movement of human running on the basis of previous research in human and sport sciences, and we based our determination of the requirements for a control method and an upper-body mechanism on these characteristics. Table 1 summarizes the characteristics we identified. Humans utilize the torso and arms to compensate for angular momentum in the yaw direction generated by the movement of the lower body during the flight phase, during which the human is not in contact with the ground . Here, the trunk refers to the parts higher than the lumbar vertebrae, and the trunk joints refer to the virtual joint between the trunk and the lumbar vertebrae, which consolidates the complex movements of the spine. Rapid leg swinging during the flight phase is responsible for the large angular momentum produced. To compensate for the large angular momentum, both the torso yaw joint and the shoulder pitch joints swing. The shoulder pitch joints swing widely and rapidly because of the inertial moment of the arms in the yaw direction. Most of that upper-body movement is in the yaw direction. In addition, the shoulder roll and yaw joints can change arm postures. A human extends his/her elbow joints according to his/her running speed . The elbow joints can be used to adjust the inertial moment of the arms when large angular momentum is generated by quick lower-body movements. Therefore, we determined that the requisite joint motions of the upper-body mechanism are the three torso joint motions (pitch, roll, and yaw), the three shoulder joint motions (pitch, roll and yaw), and one elbow joint motion inspired by the human configuration.\nFurthermore, we determined the requirements for the link lengths and mass properties of the upper-body mechanism based on human data [32,33] (see Table 2 and Table 3; note that in the tables, S.D. means the standard deviation). In Table 3, the moment of inertia is with respect to the center of mass position of each link. These parameters influence the angular momentum; however, ordinary humanoid robots have very light arms so as to decrease the required output power of the leg joint, and thereby cannot generate the large angular momentum to compensate that generated by the leg. Therefore, we assumed that employing human-like parameters would be useful to utilize the human-inspired angular momentum compensation method. The requirements of the total mass and total height are 60 kg and 1600 mm based on human data . In addition, we determined the requirements of the joints. The requirements for the movable angles of the joints were determined based on those of a human . 
The angular velocity was determined based on human running data obtained in our previous research . The torque was calculated from the angular velocity and the mass properties of each link. These requirements are listed in Table 4.

#### 2.2. Upper-Body Control Method Based on Angular Momentum

We developed a human-inspired angular momentum compensation method. In this method, the upper body, including the torso and arms, is controlled to compensate for the angular momentum generated by the movement of the legs during the flight phase. By using both the torso and the arms, the upper body can generate large angular momentum. In this paper, all vectors of position, angular velocity, linear momentum, and angular momentum, as well as all rotation matrices, are expressed in the Cartesian frame fixed on the ground. The inertia matrix of each link is taken with respect to the center of mass position of that link and is likewise expressed in the ground-fixed Cartesian frame. The proposed method consists of four steps:

(1) Selection of an angular momentum reference for the waist of the robot;
(2) Calculation of the angular momentum generated by the leg movement;
(3) Calculation of the angular momentum that needs to be generated by the movement of the torso and arms;
(4) Generation of the movement of each upper-body joint.

First, the angular momentum reference of the waist is determined according to whether rotation in the yaw direction is needed. The waist reference $L_{waist\_target}$ should be 0 kg·m²/s to perform a straight running motion. Second, the angular momentum generated by the leg movement is determined. The legs are controlled by the running controllers; in our previous studies, the controller decided a landing position and swung the legs to achieve the desired landing. When the leg movement differs, the angular momentum generated by the legs also changes. The angular momentum of the legs $L_{legs}$ is obtained by summing the angular momentum of each leg link:

$L_{legs} = \sum_{i=1}^{Legs\_link} L_i$   (1)

where $L_i$ is the angular momentum of the $i$th link, which is generally calculated as shown in Equation (2) :

$L_i = c_i \times P_i + R_i^{T} \bar{I}_i R_i \omega_i$   (2)

where $c_i$ is the COM position of the $i$th link, $P_i$ is the linear momentum of the $i$th link, $R_i$ is the rotation matrix of the $i$th link, $\bar{I}_i$ is the inertia matrix of the $i$th link with respect to the COM position of the link, and $\omega_i$ is the angular velocity of the $i$th link. The inertia matrices are known design parameters, including the moments of inertia and the products of inertia. The rotation matrix and the angular velocity are measured by the joint angle sensors implemented in the robot, and the linear momentum is calculated from the mass and the COM position.
In the third step, the angular momentum reference of the upper body $L_{upperbody\_target}$ is calculated from the angular momentum of the whole body at take-off, $L_{all}$, and the angular momentum $L_{legs}$ generated by the leg movement. The whole-body angular momentum $L_{all}$ is calculated at take-off by the same method used for $L_{legs}$; it does not change during the flight phase.

$L_{upperbody\_target} = L_{all} - L_{legs} - L_{waist\_target}$   (3)

The angular momentum reference of the upper body is then divided into torso and arm components for compensation in the yaw direction.
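As an illustration of steps (1)–(3), the sketch below evaluates the per-link angular momentum of Equation (2), sums it over the leg links as in Equation (1), and forms the upper-body reference of Equation (3). This is not the authors' implementation: the data structure and function names are assumptions, and the link states are taken to be already expressed in the ground-fixed frame as described above.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class LinkState:
    """Hypothetical per-link state, expressed in the ground-fixed Cartesian frame."""
    c: np.ndarray      # COM position of the link [m]
    P: np.ndarray      # linear momentum of the link [kg*m/s]
    R: np.ndarray      # rotation matrix of the link (3x3)
    I_bar: np.ndarray  # inertia matrix about the link COM (3x3) [kg*m^2]
    omega: np.ndarray  # angular velocity of the link [rad/s]

def link_angular_momentum(link):
    """Equation (2): L_i = c_i x P_i + R_i^T I_bar_i R_i omega_i."""
    return np.cross(link.c, link.P) + link.R.T @ link.I_bar @ link.R @ link.omega

def legs_angular_momentum(leg_links):
    """Equation (1): sum the angular momenta of all leg links."""
    return np.sum([link_angular_momentum(l) for l in leg_links], axis=0)

def upper_body_reference(L_all, L_legs, L_waist_target=None):
    """Equation (3): angular momentum the torso and arms must generate.

    L_all is the whole-body angular momentum evaluated once at take-off
    (it is constant during the flight phase); L_waist_target is zero for
    straight running.
    """
    if L_waist_target is None:
        L_waist_target = np.zeros(3)
    return L_all - L_legs - L_waist_target
```

With $L_{waist\_target} = 0$, the remaining task is to split this upper-body reference between the torso and the arms and to turn it into joint velocity commands, which the following equations describe.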
$L_{trunk\_target}$ and $L_{arms\_target}$ are the references of the angular momenta to be generated by the movements of the torso and arms, respectively. The contribution of the arm movement is represented by the gain $K$, as shown in Equations (4) and (5):

$L_{arms\_target} = K \cdot L_{upperbody\_target}$   (4)

$L_{trunk\_target} = (E - K) \cdot L_{upperbody\_target}$   (5)

The gain should be determined from the desired motion and from the capacity of each part of the robot to generate angular momentum. For stable running, both the trunk and the arms should be used to generate large angular momentum. When the robot has very light arms and cannot generate angular momentum with them, the gain should be lower.
Finally, the movements of the joints are determined. To generate $L_{trunk\_target}$ and $L_{arms\_target}$, the robot controls its trunk and shoulder joints. The joint angular velocity reference of each joint is calculated using Equations (6)–(8):

$\omega_{right\_shoulder\_ref} = R_{right\_arm}^{T} \bar{I}_{right\_arm}^{-1} R_{right\_arm} \left( \frac{L_{arms\_target}}{2} - c_{right\_arm} \times P_{right\_arm} \right)$   (6)

$\omega_{left\_shoulder\_ref} = -\omega_{right\_shoulder\_ref}$   (7)

$\omega_{trunk\_ref} = R_{upperbody}^{T} \bar{I}_{upperbody}^{-1} R_{upperbody} \left( L_{trunk\_target} - c_{upperbody} \times P_{upperbody} \right)$   (8)

where $c_{right\_arm}$ and $c_{upperbody}$ are the COM positions of the right arm and the upper body, respectively; $P_{right\_arm}$ and $P_{upperbody}$ are the linear momenta of the right arm and the upper body, respectively; $R_{right\_arm}$ and $R_{upperbody}$ are the rotation matrices of the right arm and the upper body, respectively; and $\bar{I}_{right\_arm}$ and $\bar{I}_{upperbody}$ are the inertia matrices of the right arm and the upper body with respect to the COM position of each part, respectively. The left and right shoulder joint motions are symmetrical with respect to the center.
Using this method, the joints of the torso and arms are controlled so that the angular momentum of the waist in the yaw direction is kept close to zero. The method can be applied to various humanoid robots by changing the arm movement gain according to the capacity of each part of the robot to generate angular momentum.

#### 2.3. Design of the Upper-Body Mechanism

To fulfill the above requirements for an upper-body robot mechanism, we developed an upper-body mechanism that has human-like mass properties and can perform the motions observed during human running (see Figure 1). Some studies have been conducted using humanoid robots with upper-body mechanisms [7,10,11,35], but it is difficult to achieve human-like size, mass properties, and motion. The reason is that human-like motion requires a high power output, but high-power actuators are too heavy to allow human mass properties to be mimicked. For example, the mass of the upper arm of a human may be 1600 g, and three actuators are required in the upper arm for the shoulder and elbow joints; however, the mass of each actuator, including the gear, is approximately 400 g. We considered the use of brushless direct-current (DC) motors as actuators, because they are small and light and can output high power. Based on the upper-body motion during human running shown in Table 4, the maximum joint power of the shoulder pitch and trunk joints is around 150 W, and that of the other joints is even smaller.
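The figure of roughly 150 W quoted above can be reproduced approximately from the Table 4 requirements by taking joint power as torque times angular velocity. The short check below is only a back-of-the-envelope estimate and assumes, conservatively, that the mean peak torque and mean peak angular velocity occur simultaneously.

```python
import math

# Mean requirements from Table 4: (angular velocity in rpm, torque in Nm).
joints = {
    "shoulder pitch": (50, 21),
    "trunk pitch":    (18, 77),
    "trunk roll":     (14, 64),
    "trunk yaw":      (36, 40),
}

def joint_power(rpm, torque_nm):
    """Rough upper bound P = tau * omega, with omega converted from rpm to rad/s."""
    return torque_nm * rpm * 2.0 * math.pi / 60.0

for name, (rpm, tau) in joints.items():
    print(f"{name}: ~{joint_power(rpm, tau):.0f} W")
# shoulder pitch: ~110 W, trunk pitch: ~145 W, trunk roll: ~94 W, trunk yaw: ~151 W
```

These estimates are consistent with the approximately 150 W maximum stated above for the shoulder pitch and trunk joints.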
Implementation of such actuators requires that the mass of the other parts, such as the frames connecting each joint and the electric parts, be less than 400 g. We must save the weight of frames; however, the frames must bear the load derived from large angular momentum for preventing yaw rotation.\nTherefore, we considered the material properties required for the structural members of the humanoid robot to best mimic the size and mass properties of a human. Aluminum is typically used for the structural members of a robot. The structure of the upper body of most humanoid robots are exoskeleton structures, wherein the frame shapes the body, and every component, such as motors and electrical parts, is stored in the frame. It is difficult to save the weight of the upper body because the amount of aluminum parts for shaping the body are increased. For weight saving in the links, we used carbon-fiber-reinforced plastic (CFRP), which is extremely strong yet also light. The density of CFRP (1.5 g/cm3) is much lower than that of aluminum (2.7 g/cm3). CFRP pipes were used in the center of each link, similar to the inner skeleton structure based on the human structure, with actuators, motor controllers and cables placed around the pipes (see Figure 2 and Figure 3). In the torso link, a large twisting moment is loaded for generating angular momentum in yaw direction. To stand the twisting moment, two pipes were implemented instead of one big pipe (Figure 3). Consequently, the size of each link was close to that of a human’s, and the mass of each link was approximately 11% less than if it were made of aluminum.\nAchieving a human-like COM position and inertial moment requires axial symmetry of the structure of the torso. Some humanoids have a torso mechanism actuated by a motor and a gear implemented on only one side (see Figure 4a). In this type of mechanism, the COM position of the torso is inclined from the center in the frontal plane and is thus different from that of a human. It therefore produces an angular momentum that is different from that of a human. A humanoid robot developed using this type of mechanism cannot mimic human-like motion by generating the same angular momentum. A counterweight can be used to shift the COM position, but the total mass of the mechanism may still be too large compared with that of a human. To solve this problem, we implemented a symmetrical joint mechanism with two of the same actuators and gears (see Figure 4b). The COM position was thus shifted to the center of the torso, as in a human. With two actuators in parallel, each actuator and gear required less power when only one actuator was used, which made it possible to decrease the size and weight of the actuator. As a result, the mass of the mechanism was decreased by approximately 550 g compared to that associated with using a counterweight. The weight of each part of the upper-body mechanism could therefore be decreased. Thus, the weight could be freely distributed, and the COM position could be regulated more easily.\nWe combined the upper body developed as described above with the lower body to produce a whole-body humanoid (see Figure 5). The humanoid has 22 degrees-of-freedom (DOF) in total. The brushless DC motors which can output 200 W are implemented in the joints which needs large output such as the shoulder pitch, trunk, hip pitch, and knee joints. On the other hand, the smaller motors which can output 100 W are implemented in other joints. 
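To make the effect of the material choice concrete, the sketch below compares the mass of a single structural pipe in CFRP and in aluminum using the densities quoted above (1.5 g/cm³ versus 2.7 g/cm³). The pipe dimensions are hypothetical and chosen only for illustration; the roughly 11% saving reported above applies to complete links, whose mass is dominated by actuators, controllers, and cables that do not change with the frame material.

```python
import math

def pipe_mass_g(outer_d_mm, inner_d_mm, length_mm, density_g_cm3):
    """Mass of a hollow cylindrical pipe in grams."""
    outer_r_cm = outer_d_mm / 20.0  # diameter in mm -> radius in cm
    inner_r_cm = inner_d_mm / 20.0
    length_cm = length_mm / 10.0
    volume_cm3 = math.pi * (outer_r_cm ** 2 - inner_r_cm ** 2) * length_cm
    return volume_cm3 * density_g_cm3

# Hypothetical pipe at trunk-link scale: 30 mm outer / 26 mm inner diameter, 270 mm long.
for material, rho in [("CFRP", 1.5), ("aluminum", 2.7)]:
    print(f"{material}: {pipe_mass_g(30, 26, 270, rho):.0f} g")
# CFRP: ~71 g, aluminum: ~128 g; the pipe itself is about 44% lighter in CFRP,
# while the complete link saves less because the actuators and electronics dominate.
```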
The specifications of the upper body with respect to link length and mass properties are listed in Table 5 and Table 6. The parameter values of the humanoid mimic those of a human in almost every respect. The mass of the humanoid is 60 kg, and its height is 1500 mm. In a human, the COM of the whole body is located near the pelvis, at a height of approximately 56% of the standing body height . The robot mimics the whole-body COM position by mimicking the link length, mass, and COM position of each link.

## 3. Experiments and Results

We performed two experiments to verify the capacity of the upper body to generate angular momentum and the effectiveness of the developed control method.

#### 3.1. Verification of Active Generation of Angular Momentum as Large as that of a Human

We conducted an experiment using the humanoid upper body developed in this study to evaluate its capability to generate large angular momentum with human-like motion. We mounted the upper body on a six-axis force sensor to measure the moment that the upper body applies to the lower part of the robot, i.e., the waist. We calculated the angular momentum generated by the upper body of the robot by integrating the moment data measured by the six-axis force sensor. The motion of the upper body was the same as the human running motion obtained in our previous research . The motion of each human joint can be expressed as an approximate sine wave, as in Equation (9):

$\theta_{JOINT} = \theta_{JOINT\_INITIAL} - A_{JOINT} \sin(\omega t)$   (9)

where $\theta_{JOINT\_INITIAL}$ is the initial joint angle, $A_{JOINT}$ is the amplitude of the joint motion, $\omega$ is the natural frequency of the joint motion, and $t$ is the elapsed time from the start of the experiment. We therefore commanded the motion of the upper body of the robot according to Equation (9). The experimental parameters and their values for each joint are listed in Table 7.
Moreover, to evaluate the influence of a difference in link weight on the generated angular momentum, we calculated the angular momentum generated by the human upper body and by a simulation model with a 300 g lighter forearm performing the same motion. We then compared the angular momentum generated by the upper body of the developed robot with that of a human. The angular momentum of the human upper body was calculated from the human motion data and the body parameters in Table 2 and Table 3.
Table 8 summarizes the angular momentum generated in the experiment, in simulation, and in human running. The results confirm that the upper-body mechanism developed in this study can generate angular momentum as large as that in human motion. In addition, we calculated the angular momentum in the yaw direction with the upper-body model that has a forearm 300 g lighter than the robot forearm. The maximum angular momentum generated with the nominal upper-body model was 1.7 kg·m²/s, whereas the lighter-forearm model generated a maximum of 1.5 kg·m²/s. The total weight of the lighter-forearm model decreased by only 2.3%; however, the angular momentum in the yaw direction decreased by 12%. We confirmed that the developed upper body can be used for active angular momentum control, as a human does.

#### 3.2. Angular Momentum Compensation in the Yaw Direction

We conducted an experiment to evaluate how effectively the humanoid could compensate for the angular momentum generated by the leg motion during the flight phase.
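Both experiments drive the joints with the sine reference of Equations (9) and (10). For reference, the sketch below generates such a reference and integrates the moment measured by the six-axis force sensor to recover the angular momentum applied by the upper body, as done in the first experiment; the sampling rate and the relation between ω and the movement period are assumptions.

```python
import numpy as np

def joint_reference(t, theta_initial, amplitude, omega):
    """Equations (9)/(10): theta(t) = theta_initial - A * sin(omega * t)."""
    return theta_initial - amplitude * np.sin(omega * t)

def angular_momentum_from_moment(moment_samples, dt):
    """Cumulative time integral of the measured moment M(t), shape (N, 3).

    Since dL/dt = M, integrating the moment that the upper body applies at its
    base gives the angular momentum it transfers to the rest of the robot.
    """
    return np.cumsum(moment_samples, axis=0) * dt

# Example: trunk yaw reference with the Table 7 values (amplitude 15.5 deg,
# initial angle 0 deg, movement period 0.6 s), sampled at a hypothetical 1 kHz
# and assuming omega = 2*pi / movement period.
dt = 1.0 / 1000.0
t = np.arange(0.0, 0.6, dt)
omega = 2.0 * np.pi / 0.6
trunk_yaw_ref = joint_reference(t, 0.0, np.deg2rad(15.5), omega)
```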
In this experiment, the humanoid was suspended in midair to perform the running motion of the flight phase without any constraint in the yaw direction. The lower-body joints were controlled according to joint angle references based on the human motion data . We examined how the turning angle of the humanoid in the yaw direction, which depends on the angular momentum in the yaw direction, changed depending on whether the compensation control was applied. We measured the turning angle of the waist in the yaw direction using a motion capture system with infrared cameras that determined three-dimensional marker positions at 120 Hz. Spherical retro-reflective markers were attached to the surface of the waist parts of the humanoid near the COM of the whole body, and the motion capture system calculated the segment attitude from the position data of these markers. When angular momentum compensation was not employed, each joint of the upper body maintained a neutral position. When angular momentum compensation was employed, the angular momentum generated by the leg motion was calculated, and the upper body moved actively to compensate for it. The contribution ratios of the arms and the trunk were set to 0.8 and 0.2, respectively, based on the human running data, in which the angular momentum generated by the arms was 1.2 kg·m²/s and that generated by the trunk was 0.3 kg·m²/s. The lower-body motion of each joint was determined to follow an approximate sine wave, as in Equation (10), based on the human running data :

$\theta_{HIP} = \theta_{HIP\_INITIAL} - A_{HIP} \sin(\omega t)$   (10)

where $\theta_{HIP\_INITIAL}$ is the initial joint angle, $A_{HIP}$ is the amplitude of the joint motion, $\omega$ is the natural frequency of the joint motion, and $t$ is the elapsed time from the start of the experiment. The experimental parameters and the values of the joint movements are listed in Table 9.
Figure 6 illustrates the rotation angle of the humanoid waist in the yaw direction, and Figure 7 presents photographs of the experiment. When the angular momentum compensation control was not employed, the humanoid rotated approximately 15 deg in the yaw direction; when the control was used, the rotation decreased to approximately 8 deg. The results confirm the effectiveness of the proposed angular momentum compensation control for stabilization in the yaw direction. The friction of the hanger in the yaw direction was small but not zero, and it is assumed to have slightly influenced the measured angular momentum.

## 4. Discussion

In experiment 2, we could not perform a running experiment with the humanoid developed in this study because more work is required to develop methods for stabilization in the pitch and roll directions. During running, the vertical component of the ground reaction force can be up to 1800 N, which is much larger than the horizontal component of approximately 180 N . Therefore, the ground reaction force influences stabilization in the pitch and roll directions but not in the yaw direction. When the ground reaction force does not act in alignment with the COM of the humanoid, angular momentum is generated and the robot falls. To prevent this situation, some researchers have focused on stabilization in the pitch and roll directions by controlling the ground reaction force [38,39].
In contrast, in this study, we focused on yaw stabilization using the upper body, because the ground reaction force does not have a large influence on stabilization in the yaw direction.\nIn this study, we designed an upper-body mechanism that has human-like link length and mass properties to achieve angular momentum compensation in the yaw direction during running. In experiment 1, we confirmed that the upper body could generate almost the same angular momentum as a human’s upper body during running. In addition, we calculated the angular momentum in the yaw direction with the 300 g lighter forearm. These results indicate that mass parameters have a large influence on the angular momentum generated during running. To achieve angular momentum compensation during running, it is important to incorporate the capacity of generating angular momentum in designing a humanoid robot. The upper body developed in this study was found to be able to perform fast movement as well as a human. This upper body will be useful in future research on effective sports movements such as ball throwing.\nWe propose in this paper an angular momentum compensation method using a humanoid upper body. This method can be used with other humanoid robots that do not have human-like mass properties. The reason for this is that the robot can calculate the angular momentum required using its mass property data and the control method developed. In addition, the control method can be applied to active turning by changing the angular momentum reference of the waist. In experiment 2, which involved running on a straight course, the angular momentum reference of the waist was set to zero.\nIn addition, we assume that the proposed method for stabilization in the yaw direction can be applied to stabilization in the pitch and roll directions with a change in the ground reaction force. In general, the upper body has a large mass and moment of inertia, and it can thus generate large angular momentum in the pitch, roll and yaw directions. The angular momentum generated by the ground reaction force can be compensated for by the upper-body motion. When the humanoid cannot effectively use its upper body, the humanoid should generate motion within a range that maintains the stabilization with only its legs and torso. By integrating these methods, the humanoid can perform faster and wider-ranging motions. In future work, we intend to apply the proposed method for stabilization to the pitch and roll directions. In addition, we will utilize the robot in the place of human subjects to confirm various running characteristics.\nIn summary, we found that the angular momentum compensation in the yaw direction using the upper body during the flight phase can improve the capability of humanoid hopping and running performance. Moreover, the upper body design should focus on the capability of generating angular momentum for rapid movement such as hopping and running. For stable running without constraints, the proposed method should be integrated with other control methods; however, humanoid robots will perform stable and faster motions by using the proposed methods, and the developed upper body will be useful in research about finding effective sport movements such as ball throwing.\n\n## 5. Conclusions\n\nIn this paper, we propose an angular momentum compensation method to achieve angular momentum compensation in the yaw direction during the flight phase of running. The method is based on the human-running mechanism. 
To compensate for the angular momentum generated by lower-body movement during the flight phase, the angular momentum compensation method calculates the angular momentum and generates upper-body motion that activates the torso and arms, as in humans. The humanoid robot can thereby change its upper-body motion according to changes that occur in the lower-body motion, such as shifts in running speed. We also developed an upper-body mechanism that has the link and mass parameters similar to that of a human, and that can generate large angular momentum. We evaluated the developed upper body and noted that it could generate large angular momentum similar to that of humans. Furthermore, the minor differences in the link and mass parameters can significantly influence the capacity of generating angular momentum. Moreover, we confirmed that the humanoid robot could compensate for the angular momentum generated by the lower-body movement when the robot was suspended in midair.\nThe developed control method can contribute to improving the stability of humanoids that can perform dynamic movements including the flight phase, such as jumping, hopping and running. Because most other control methods for stabilization of the dynamic motion of humanoids focus on the stance phase, the developed method will be integrated without interference. However, using the developed control method requires the generation of large angular momentum by the upper body. To do that, the upper body design, which was not focused on in relevant studies, will also be changed to consider the mass, mass position, inertia matrix and link length for generating large angular momentum. Thanks to improved stability, humanoids will be able to advance into human living spaces and work stably.\n\n## Acknowledgments\n\nThis study was conducted with the support of the Research Institute for Science and Engineering, Waseda University; the Institute of Advanced Active Aging Research, Waseda University; the Future Robotics Organization, Waseda University, and as part of the humanoid project at the Humanoid Robotics Institute, Waseda University. It was also financially supported in part by JSPS KAKENHI Grant Nos. 25220005, 25709019, 17H00767; a Waseda University Grant for Special Research Projects (Project number 2017K-215); SolidWorks Japan K.K.; the DYDEN Corporation; and Cybernet Systems Co., Ltd. We thank all of these for the financial and technical support provided. We would like to thank Editage (www.editage.jp) for the English language editing.\n\n## Author Contributions\n\nTakuya Otani, Kenji Hashimoto, Hiroki Ueta, Akira Natsuhara, Hum-Ok Lim, and Atsuo Takanishi developed the upper-body control; Takuya Otani, Kenji Hashimoto, and Shunsuke Miyamae performed the experiments; Masanori Sakaguchi and Yasuo Kawakami analyzed the human motion data; Hum-Ok Lim and Atsuo Takanishi helped to draft the manuscript, Takuya Otani wrote the paper, and all of the authors read and approved the final manuscript.\n\n## Conflicts of Interest\n\nThe authors declare no conflict of interest.\n\n## References\n\n1. Raibert, M.H. Legged Robots that Balance; MIT Press: Cambridge, MA, USA, 1986. [Google Scholar]\n2. Grimes, J.A.; Hurst, J.W. The Design of ATRIAS 1.0 a Unique Monopod, Hopping Robot. In Proceedings of the International Conference on Climbing and Walking Robots (CLAWAR), Baltimore, MD, USA, 23–26 July 2012; pp. 548–554. [Google Scholar]\n3. Martin, W.C.; Wu, A.; Geyer, H. Experimental Evaluation of Deadbeat Running on the ATRIAS Biped. IEEE Robot. 
Autom. Lett. 2017, 2, 1085–1092. [Google Scholar] [CrossRef]\n4. Hyon, S.; Emura, T.; Mita, T. Dynamics-based control of one-legged hopping robot. J. Syst. Control Eng. Proc. Inst. Mech. Eng. Part I 2003, 217, 83–98. [Google Scholar] [CrossRef]\n5. Nagasaka, K.; Kuroki, Y.; Suzuki, S.; Itoh, Y.; Yamaguchi, J. Integrated motion control for walking, jumping and running on a small bipedal entertainment robot. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004; pp. 3189–3194. [Google Scholar]\n6. Kajita, S.; Nagasaki, T.; Kaneko, K.; Yokoi, K.; Tanie, K. A Running Controller of Humanoid Biped HRP-2LR. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 618–624. [Google Scholar]\n7. Cho, B.K.; Park, S.S.; Oh, J.H. Controllers for running in the humanoid robot, HUBO. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots 2009, Paris, France, 7–10 December 2009; pp. 385–390. [Google Scholar]\n8. Wensing, M.P.; Orin, E.D. High-speed humanoid running through control with a 3D-SLIP model. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 5134–5140. [Google Scholar]\n9. Tamada, T.; Ikarashi, W.; Yoneyama, D.; Tanaka, K.; Yamakawa, Y.; Senoo, T.; Ishikawa, M. High Speed Bipedal Robot Running Using High Speed Visual Feedback. In Proceedings of the 14th IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, 18–20 November 2014; pp. 140–145. [Google Scholar]\n10. Takenaka, T.; Matsumoto, T.; Yoshiike, T. Real-time Dynamics Compensation with Considering Ground Reaction Force and Moment Limit for Biped Robot. J. Robot. Soc. Jpn. 2014, 32, 295–306. [Google Scholar] [CrossRef]\n11. Tajima, R.; Honda, D.; Suga, K. Fast Running Experiments Involving a Humanoid Robot. In Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 1571–1576. [Google Scholar]\n12. Niiyama, R.; Nishikawa, S.; Kuniyoshi, Y. Biomechanical Approach to Open-loop Bipedal Running with a Musculoskeletal Athlete Robot. Adv. Robot. 2012, 26, 383–398. [Google Scholar] [CrossRef]\n13. Grizzle, J.W.; Hurst, J.; Morris, B.; Park, H.W.; Sreenath, K. MABEL, a new robotic bipedal walker and runner. In Proceedings of the American Control Conference, St. Louis, MO, USA, 10–12 June 2009; pp. 2030–2036. [Google Scholar]\n14. Endo, T.; Miyashita, K.; Ogata, M. Kinetics factors of the lower limb joints decelerating running velocity in the last phase of 100 m race. Res. Phys. Educ. 2008, 53, 477–490. [Google Scholar] [CrossRef]\n15. Cavagna, G.A.; Franzetti, P.; Heglund, N.C.; Willems, P. The determinants of the step frequency in running, trotting and hoppin in man and other vertebrates. J. Physiol. 1988, 29, 81–92. [Google Scholar] [CrossRef]\n16. Blickhan, R. The Spring-mass Model for Running and Hopping. J. Biomech. 1989, 22, 1217–1227. [Google Scholar] [CrossRef]\n17. McMahon, T.; Cheng, G. The Mechanics of Running: How does Stiffness Couple with Speed? J. Biomech. 1990, 23, 65–78. [Google Scholar] [CrossRef]\n18. Ounpuu, S. The biomechanics of walking and running. Clin. Sports Med. 1994, 13, 843–863. [Google Scholar] [PubMed]\n19. Kajita, S.; Kanehiro, F.; Kaneko, K.; Fujiwara, K.; Harada, K.; Yokoi, K.; Hirukawa, H. Resolved Momentum Control: Humanoid Motion Planning based on the Linear and Angular Momentum. 
In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 27–31 October 2003; pp. 1644–1650. [Google Scholar]\n20. Orin, D.E.; Goswami, A.; Lee, S.H. Centroidal dynamics of a humanoid robot. Auton. Robots 2013, 35, 161–176. [Google Scholar] [CrossRef]\n21. Gunther, M.; Blickhan, R. Joint stiffness of the ankle and the knee in running. J. Biomech. 2002, 35, 1459–1474. [Google Scholar] [CrossRef]\n22. Kuitunen, S.; Komi, P.V.; Kyrolainen, H. Knee and ankle joint stiffness in sprint running. Med. Sci. Sports Exerc. 2002, 34, 166–173. [Google Scholar] [CrossRef] [PubMed]\n23. Ferber, R.; Davis, I.M.; Williams, D.S., III. Gender Differences in Lower Extremity Mechanics during Running. Clin. Biomech. 2003, 18, 350–357. [Google Scholar] [CrossRef]\n24. Chapman, A.E.; Caldwell, G.E. Factors determining changes in lower limb energy during swing in treadmill running. J. Biomech. 1983, 16, 69–77. [Google Scholar] [CrossRef]\n25. Otani, T.; Hashimoto, K.; Yahara, M.; Miyamae, S.; Isomichi, T.; Hanawa, S.; Sakaguchi, M.; Kawakami, Y.; Lim, H.; Takanishi, A. Utilization of Human-Like Pelvic Rotation for Running Robot. Front. Robot. 2015. [Google Scholar] [CrossRef]\n26. Hinrichs, N.R. Upper Extremity Function in Running. II: Angular Momentum Considerations. Int. J. Sport Biomech. 1987, 3, 242–263. [Google Scholar] [CrossRef]\n27. Hinrichs, N.R. Whole Body Movement: Coordination of Arms and Legs in Walking and Running. In Multiple Muscle Systems; Springer: New York, NY, USA, 1990; pp. 694–705. [Google Scholar]\n28. Otani, T.; Hashimoto, K.; Isomichi, T.; Sakaguchi, M.; Kawakami, M.; Lim, H.O.; Takanishi, A. Joint Mechanism That Mimics Elastic Characteristics in Human Running. Machines 2016, 4, 5. [Google Scholar] [CrossRef]\n29. Otani, T.; Hashimoto, K.; Yahara, M.; Miyamae, S.; Isomichi, T.; Sakaguchi, M.; Kawakami, Y.; Lim, H.O.; Takanishi, A. Running with Lower-Body Robot That Mimics Joint Stiffness of Humans. In Proceedings of the of the IEEE International Conference on Intelligent Robots and Systems 2015, Hamburg, Germany, 28 September–2 October 2015; pp. 3969–3974. [Google Scholar]\n30. Sugihara, T.; Nakamura, Y.; Inoue, H. Realtime Humanoid Motion Generation through ZMP Manipulation based on Inverted Pendulum Control. In Proceedings of the IEEE International Conference on Robotics and Automation, Washington, DC, USA, 11–15 May 2002; pp. 1404–1409. [Google Scholar]\n31. Hyon, S.H.; Yokoyama, N.; Emura, T. Back handspring of a multi-link gymnastic robot—Reference model approach. Adv. Robot. 2006, 20, 93–113. [Google Scholar] [CrossRef]\n32. Ae, M.; Tang, H.; Yokoi, T. Estimation of inertia properties of the body segments in Japanese athletes. Soc. Biomech. 1992, 11, 23–33. [Google Scholar] [CrossRef]\n33. Kouchi, M.; Mochimaru, M. Human Dimension Database; AIST Digital Human Research Center: Tokyo, Japan, 2005. [Google Scholar]\n34. Nakamura, R.; Saito, H. Fundamental Kinesiology, 4th ed.; Kendall Hunt Publishing Company: Dubuque, IA, USA, 1992. [Google Scholar]\n35. Nagasaki, T.; Kajita, S.; Kaneko, K.; Yokoi, K.; Tanie, K. A Running Experiment of Humanoid Biped. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, 28 September–2 October 2004; pp. 136–141. [Google Scholar]\n36. Shima, K. Effect of Masticatory Movement on Head, Trunk and Body sways during Standing Position. Ph.D. Thesis, Hokkaido University, Sapporo, Japan, 2015. [Google Scholar]\n37. Farley, C.T.; Ferris, D.P. 
Biomechanics of walking and running: Center of mass movements to muscle action. Exerc. Sport Sci. Rev. 1998, 26, 253–285. [Google Scholar] [CrossRef] [PubMed]\n38. Sreenath, K.; Park, H.W.; Poulakakis, I.; Grizzle, J.W. Embedding active force control within the compliant hybrid zero dynamics to achieve stable, fast running on MABEL. Int. J. Robot. Res. 2013, 32, 324–345. [Google Scholar] [CrossRef]\n39. Kajita, S.; Kaneko, K.; Morisawa, M.; Nakaoka, S.; Hirukawa, H. ZMP-based Biped Running Enhanced by Toe Springs. In Proceedings of the IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 3963–3969. [Google Scholar]\nFigure 1. Computer-aided design (CAD) of the developed humanoid.\nFigure 1. Computer-aided design (CAD) of the developed humanoid.\nFigure 2. CAD of the humanoid robot arm. The green line is the rotational axis. CFRP: carbon-fiber-reinforced plastic.\nFigure 2. CAD of the humanoid robot arm. The green line is the rotational axis. CFRP: carbon-fiber-reinforced plastic.\nFigure 3. CAD of the humanoid robot trunk. Two CFRP pipes are implemented in the center. Bottom parts are connected to trunk joints.\nFigure 3. CAD of the humanoid robot trunk. Two CFRP pipes are implemented in the center. Bottom parts are connected to trunk joints.\nFigure 4. CAD of the trunk joint mechanism of the humanoid robot. The green line is the rotational axis. (a) Ordinary mechanism; (b) Mechanism developed in this study.\nFigure 4. CAD of the trunk joint mechanism of the humanoid robot. The green line is the rotational axis. (a) Ordinary mechanism; (b) Mechanism developed in this study.\nFigure 5. Whole-body running robot. The blue joint is the roll axis, the red joint is the pitch axis, and the yellow joint is the yaw axis. In the knee and ankle joints, a variable joint stiffness mechanism is implemented for storing jumping energy. (a) Humanoid robot; (b) DOF (degrees-of-freedom) configuration.\nFigure 5. Whole-body running robot. The blue joint is the roll axis, the red joint is the pitch axis, and the yellow joint is the yaw axis. In the knee and ankle joints, a variable joint stiffness mechanism is implemented for storing jumping energy. (a) Humanoid robot; (b) DOF (degrees-of-freedom) configuration.\nFigure 6. Experimental results for angular momentum compensation.\nFigure 6. Experimental results for angular momentum compensation.\nFigure 7. Experiment with a real robot. Upper row shows the experiment without angular momentum control. Lower row shows the experiment with angular momentum control. The humanoid was suspended with orange cables connected to a hanger, which could rotate passively in the yaw direction, and did not contact with the ground.\nFigure 7. Experiment with a real robot. Upper row shows the experiment without angular momentum control. Lower row shows the experiment with angular momentum control. The humanoid was suspended with orange cables connected to a hanger, which could rotate passively in the yaw direction, and did not contact with the ground.\nTable 1. Characteristics of the upper body during human running.\nTable 1. 
Characteristics of the upper body during human running.\nJointsCharacteristics\nTrunk RollBending for angular momentum compensation\nTrunk PitchBending for angular momentum compensation\nTrunk YawSwing for generating angular momentum\nShoulder RollFor changing the position of the center of mass\nShoulder PitchWide and rapid swing for generating large angular momentum\nShoulder YawFor changing the position of the center of mass\nElbow PitchChanging the position of the center of mass and moment of inertia of the arm\nTable 2. Mass, COM (center-of-mass) position, and link length requirements for an upper-body mechanism.\nTable 2. Mass, COM (center-of-mass) position, and link length requirements for an upper-body mechanism.\nTrunk16120 (10) a270 (20)\nUpper arm1.6 (0.10)150 (10) b270 (14)\nForearm0.90 (0.06)90 (5.0) c220 (11)\na from the waist; b from the shoulder; c from the elbow.\nTable 3. Moment of inertia requirements for an upper-body mechanism.\nTable 3. Moment of inertia requirements for an upper-body mechanism.\nLinksIxx (Roll) kgm2Iyy (Pitch) kgm2Izz (Yaw) kgm2\nTrunk1.5 × 10−11.4 × 10−11.0 × 10−1\nUpper arm6.5 × 10−37.0 × 10−31.7 × 10−3\nForearm4.1 × 10−34.2 × 10−37.0 × 10−4\nTable 4. Requirements for upper-body joints.\nTable 4. Requirements for upper-body joints.\nJointsMovable Range DegAngular Velocity (S.D.) rpmTorque (S.D.) Nm\nShoulder Pitch−180–5050 (15)21 (8.8)\nShoulder Roll0–18025 (5.9)12 (6.8)\nShoulder Yaw−80–6029 (1.9)4.8 (2.2)\nElbow Pitch−145–569 (12)17 (1.9)\nTrunk Pitch−45–3018 (9.5)77 (27)\nTrunk Roll−50–5014 (4.7)64 (25)\nTrunk Yaw−40–4036 (9.2)40 (15)\nTable 5. Upper-body specifications of the humanoid.\nTable 5. Upper-body specifications of the humanoid.\nTrunk16130270\nUpper arm1.6140260\nForearm0.9090210\nTable 6. Moment of Inertia of the upper body of the humanoid.\nTable 6. Moment of Inertia of the upper body of the humanoid.\nLinksIxx (Roll) kgm2Iyy (Pitch) kgm2Izz (Yaw) kgm2\nTrunk1.4 × 10−11.3 × 10−10.60 × 10−1\nUpper arm7.3 × 10−37.4 × 10−31.1 × 10−3\nForearm4.5 × 10−34.4 × 10−34.4 × 10−4\nTable 7. Experimental conditions.\nTable 7. Experimental conditions.\nParametersValue\nTrunk pitch amplitudedeg4.7\nTrunk pitch initial angledeg12\nTrunk roll amplitudedeg13.8\nTrunk roll initial angledeg0\nTrunk yaw amplitudedeg15.5\nTrunk yaw initial angledeg0\nShoulder pitch amplitudedeg20.5\nShoulder pitch initial angledeg37.8\nMovement periods0.6\nTable 8. Max. angular momentum generated by upper-body of human and the humanoid.\nTable 8. Max. angular momentum generated by upper-body of human and the humanoid.\nObjectsMax. Generated Angular Momentum kgm2/s\nRollPitchYaw\nHumanoid upper body5.52.31.7\nSimulation with light forearm5.32.31.5\nHuman upper body5.52.51.5\nTable 9. Experimental conditions.\nTable 9. Experimental conditions.\nParametersValue\nHip pitch amplitude$A H I P$deg15\nHip pitch initial angle$θ H I P _ I N I T I A L$deg10\nKnee pitch amplitudedeg20\nKnee pitch initial angledeg150\nAnkle pitch amplitudedeg0\nAnkle pitch initial angledeg90\nMovement periods1.0\nArm gain of angular momentum controlK0.8\nTorso gain of angular momentum control0.2\n\n## Share and Cite\n\nMDPI and ACS Style\n\nOtani, T.; Hashimoto, K.; Miyamae, S.; Ueta, H.; Natsuhara, A.; Sakaguchi, M.; Kawakami, Y.; Lim, H.-O.; Takanishi, A. Upper-Body Control and Mechanism of Humanoids to Compensate for Angular Momentum in the Yaw Direction Based on Human Running. Appl. Sci. 2018, 8, 44. 
https://doi.org/10.3390/app8010044\n\nAMA Style\n\nOtani T, Hashimoto K, Miyamae S, Ueta H, Natsuhara A, Sakaguchi M, Kawakami Y, Lim H-O, Takanishi A. Upper-Body Control and Mechanism of Humanoids to Compensate for Angular Momentum in the Yaw Direction Based on Human Running. Applied Sciences. 2018; 8(1):44. https://doi.org/10.3390/app8010044\n\nChicago/Turabian Style\n\nOtani, Takuya, Kenji Hashimoto, Shunsuke Miyamae, Hiroki Ueta, Akira Natsuhara, Masanori Sakaguchi, Yasuo Kawakami, Hum-Ok Lim, and Atsuo Takanishi. 2018. \"Upper-Body Control and Mechanism of Humanoids to Compensate for Angular Momentum in the Yaw Direction Based on Human Running\" Applied Sciences 8, no. 1: 44. https://doi.org/10.3390/app8010044\n\nNote that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here."
https://math.stackexchange.com/questions/1404213/maximal-ideals-and-the-projective-nullstellensatz
"Maximal ideals and the projective Nullstellensatz\n\nThis is a simple question, but it's one of those things that I've been thinking about so much that I've just kind of lost where I am and need some explicit reference.\n\nOne of the main corollaries of the affine Nullstellensatz (i.e. that $\\mathbb{I}(\\mathbb{V}(I)) = \\sqrt{I}$) is that maximal ideals in $R=k[x_0,\\ldots,x_n]$ are in bijective correspondence with points in $\\mathbb{A}^{n+1}$. In all (but one -- see below) of the literature I've read that talks of the projective Nullstellensatz, there is no mention of what maximal ideals of $R$ correspond to, just what prime ideals in general correspond to. I assumed that, if there were a similar fact, then I would have found it somewhere.\n\nBut after some thinking we see that points in projective space correspond to lines through the origin in affine space, which are irreducible affine varieties, but not affine points. Thus points in projective space correspond to (prime) ideals in $R$ coming from lines through the origin.\n\nIn the opposite direction, maximal ideals in $R$ correspond to points in affine space, and so can't correspond directly to points in projective space but each point in affine space uniquely determines a point in projective space, since it uniquely determines a line through the origin and that point. So it seems like there should be some maximal ideal/points in projective space correspondence, and one way kind of seems almost clear to me, but the other doesn't so much.\n\nIn essence could anybody please explain the correspondence between maximal ideals properly contained in $k[x_0,\\ldots,x_n]$ and points of projective space. We know that all maximal ideals are of the form $(x_0-a_0,\\ldots,x_n-a_n)$, and these correspond to the point $(a_0,\\ldots,a_n)$ in affine space. Is there an equally nice formulation for the projective case?\n\nHere is the one reference that I could find:",
[Image from the question: https://i.stack.imgur.com/056Z4.png]
"• We always have a morphism $\\mathbb{A}^{n+1}-\\{0\\}\\to\\mathbb{P}^n$. Thus, any maximal ideal other than $(x_0,\\ldots,x_n)$ gives you a point in $\\mathbb{P}^n$, the correspondence is not one-to-one, but still nice. Notice that the multiplicative group acts naturally by multiplying all the $x_i$'s by the same number on $R$ and all maximal ideals in a single orbit goes to the same point in projective space and vice versa. – Mohan Aug 20 '15 at 19:16\n• I was just curious where the excerpt above comes from. Thanks! – mwmjp Mar 30 '16 at 14:33\n• – Tim Mar 31 '16 at 21:16\n\nYou're thinking about this the wrong way. There is a map from $\\mathbb{A}^{n+1} - \\{0\\} \\to \\mathbb{P}^n$ for any $n$, but this is not the way we realize the co-ordinate ring, and the construction one would get if one tried to blow up points into lines in this quotient would not be the affine variety corresponding to some subset of $\\mathbb{P}^n$ but the pullback of the tautological bundle onto this variety.\n\nThe real maps to consider are the co-ordinate maps: $\\mathbb{A}^n \\to \\mathbb{P}^n$ that make $\\mathbb{P}^n$ a variety. These induce isomorphisms between $k[x_1, \\dots, x_n]$ and $\\mathcal{O}(\\mathbb{P}^n - \\{(x_1:\\dots : x_{n+1}) | x_i = 0\\})$, and this is how you prove the correspondence between maximal ideals that do not contain the trivial homogeneous ideal $R(X)_+$ (which obviously cannot be included) and points in $\\mathbb{P}^n$, you use these homeomorphisms.\n\n• So given some maximal ideal $m$ not containing $R(X)_+$, this corresponds to some maximal ideal $\\hat{m}$ of $k[x_0,\\ldots,x_n]$ not containing $(x_0,\\ldots,x_n)$. So we know that there exists some $x_i$ such that $\\hat{m}$ doesn't contain $(x_i)$... and then where do we go? – Tim Aug 21 '15 at 17:43\n• why can we not use a proof that goes something like this: math.stackexchange.com/questions/1405195/… ? – Tim Aug 21 '15 at 18:15\n• Then we take the map $\\phi: \\mathbb{P}^n - \\{x_i = 0\\} \\to \\mathbb{A}^n$ that sends $(a_1: \\dots : a_{n+1}) \\to (a_1/a_i, \\dots, a_{n+1}/a_i)$ which induces an isomorphism of coordinate rings on some open set homeomorphic to $mathbb{A}^n$... And from here it should be very obvious to you. This is exactly how they want you to prove it in that link. – Sempliner Aug 22 '15 at 3:54\n• How do we deal with patching the affine patches together though? That is, if some $x$ has $x_i,x_j\\neq0$, then how do we know that the maximal ideal given by looking at $\\mathbb{P}^n\\setminus\\{x_i=0\\}$ is the same as that given by looking at $\\mathbb{P}^n\\setminus\\{x_j=0\\}$? – Tim Aug 22 '15 at 11:39\n• Further, the Nullstellensatz tells us that all maximal ideals of the polynomial ring $k[x_0,\\ldots,x_n]$ are of the form $(x_0-a_0,\\ldots,x_n-a_n)$, but this is not homogeneous. So how can there exist maximal homogeneous ideals? – Tim Aug 22 '15 at 11:44\n\nhttps://math.stackexchange.com/a/1643498/346324\n\nThis answer is for the analogous problem in Vakil but is more explicit than what has been given already and looks similar to the affine situation.\n\n• I don't think this can be count as an answer. It's just a comment. – user26857 Oct 3 '16 at 16:01\n• @user26857 I agree. It's a very useful comment, but still a comment... – Tim Oct 3 '16 at 22:01\n• I'm trying to say that question above has already been asked and answered in the link that I gave... – Tanner Strunk Oct 4 '16 at 1:25"
https://www.jiskha.com/questions/321551/write-and-balance-equations-for-the-following-reactions-a-iron-metal-and-chlorine-gas
"# Chemistry\n\nWrite and balance equations for the following reactions.\nA)Iron metal and Chlorine gas react to form solid iron(III) chloride.\n\nB)Solid aluminum carbonate decomposes to form solid aluminum oxide and carbon dioxide gas.\n\nC)Solid magnesium reacts with aqueous silver nitrate to form solid silver and aqueous magnesium nitrate.\n\n1. 👍\n2. 👎\n3. 👁\n1. )Iron metal and Chlorine gas react to form solid iron(III) chloride.\nThese are word equations. You are to turn them into symbolic equations. You are not asked to predict the products; those are given to you. Here is how you do the first one.\nFe + Cl2 ==> FeCl3\nNow balance it.\n2Fe + 3Cl2 ==> 2FeCl3\n\n1. 👍\n2. 👎\n2. Dr. Bob is right! The\n\n1. 👍\n2. 👎\n\n## Similar Questions\n\n1. ### chemistry\n\nMnO4–(aq) + Cl–(aq) Mn2+ + Cl2(g) (unbalanced) i. Write the reduction and oxidation half-reactions (without electrons). (.5 point) ii. Balance the equations for atoms (except O and H). (.5 point) iii. Balance the equations for\n\n2. ### Chemestry\n\nIron metal reacts with oxygen gas to form rust, iron (III) oxide. Which of the following correctly shows the balanced equation for this reaction?\n\n3. ### chemistry\n\nUsing the activity series, write a balanced chemical equation for the following reactions: 1. iron metal is added to a solution of copper (II) nitrate 2. zinc metal is added to a solution of magnesium sulfate 3. hydrobromic acid\n\n4. ### chemistry\n\nWhat are the skeleton equations for Dicarbon hexahydride gas + oxygen gas --> carbon dioxide gas + gaseous water And Solid barium and chlorine gas react to produce solid barium chloride\n\n1. ### general chem\n\nUsing the activity series (Table 4.5), write balanced chemical equations for the following reactions. If no reaction occurs, simply write NR. (a) Nickel metal is added to a solution of copper (II) nitrate; (b) a solution of zinc\n\n2. ### Chemistry 101\n\nComplete and balance each of the following equations for gas-evolution reactions. Identify all of the phases in your answer. HCl(aq)+K2SO3(aq)-->\n\n3. ### Analytical Chem\n\nDetermine [Zn2 ], [CN–], and [HCN] in a saturated solution of Zn(CN)2 with a fixed pH of 2.390. The Ksp for Zn(CN)2 is 3.0 × 10–16. The Ka for HCN is 6.2 × 10–10. I know the following is what needs to be done, but im\n\n4. ### Chemistry\n\nWrite the balanced net ionic equation for the following? a) Copper metal and zinc ion b) zinc and cupric ion c) Copper metal and hydrogen ion d) zinc metal and hydrogen ion e) iron metal and hydrogen ion f) Chlorine (Cl2) and\n\n1. ### chemistry\n\niron(III) oxide reacts with carbon monoxide gas to form solid iron metal and carbon dioxide gas: Fe2O3 + 3 CO --> 2 Fe + 3 CO2 If you begin the reaction with 84.34 g of iron(III) oxide and 68.87 g of CO, which reactant will be in\n\n2. ### chemistry\n\nIron reacts with chlorine to form iron(iii)chloride. A)How many chlorines react with 1.86g of iron? B)What mass of iron(iii)chloride is produced?\n\n3. ### Chemistry 1300\n\n1) Aluminum metal reacts with chlorine gas to form solid aluminum trichloride, AlCl3. What mass of chlorine gas (Cl2) is needed to react completely with 163 g of aluminum? 2) Magnesium (used in the manufacture of light alloys)\n\n4. ### chemistry\n\nIron and oxygen gas can react to form the magnetic oxide of iron Fe3O4. How many grams of Fe3O4 can be made from 13.2 g of O2?"
https://quant.stackexchange.com/questions/27452/how-to-simulate-3-correlated-stock-processes-following-a-gbm
"# How to simulate 3 correlated stock processes following a GBM?\n\nSuppose we have 3 stocks following GBMs.\n\nWe are given the distribution of the daily log returns which is multivariate normal.\n\nSuppose I want to sample the stock price tomorrow ($\\Delta t = 1$ day), could I just sample a return vector from this distribution and then say that the stock price tomorrow is $S_0 \\cdot \\exp(r_\\text{sample}\\Delta t)$?\n\nI've been arguing with my friend about this and he claims I should multiply by $\\sqrt{\\Delta t}$? I don't understand his argument.\n\nIs there anything wrong with what I am doing here?\n\n• For this question to be answered, you need to explain how you sampled $r_\\text{sample}$. – SRKX Jul 6 '16 at 9:17\n\nThe log-return of a stock over a period $\\Delta t$ starting at $t=0$ is defined as: $$r_{\\Delta t} = \\ln \\left( \\frac{S_{\\Delta t}}{S_0} \\right)$$ Thus you should compute $S_{\\Delta t}$ as $$S_{\\Delta t} = S_0 \\exp ( r_{\\Delta t} )$$ when you are given the $\\Delta t$-period log-return i.e. the one which you sample as you propose above. Thus no multiplication by $\\Delta t$ or its square root whatsoever.\nMaybe your confusion arises from the fact that in the BS equation we traditionally use continuously compounded rates: $$\\exp ( r_{\\Delta t} ) = \\exp \\left( \\int_0^{\\Delta t} r (t) dt \\right) = \\exp ( r \\Delta t )$$ where the last equality holds when $r (t) = r$ a constant, and in which case you should use $\\Delta t \\approx 1/252$ to compute daily returns if you're using annualised quantities (which is usually the case)\nif you sample from Brownian motion, then $$B_{t+\\Delta t} - B_t$$ is normally distributed with variance $\\Delta t$. Thus if you sample a standard normal $Z$ (with variance 1) then you can use $$\\sqrt{\\Delta t} Z$$ as sample for $B_{t+\\Delta t} - B_t$ in order to get the correct variance. Recall that constant factors enter variance with the squared value.\nIn your question: how do you sample $r$? And if $\\Delta t=1$ then it does not matter at all whether you use $\\Delta t$ or $\\sqrt{\\Delta t}$.\n• Right .. but as the OP asks about 1 day returns and $\\Delta t = 1$ ... this is just trivial ... – Ric Jul 6 '16 at 12:35"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9584183,"math_prob":0.99965334,"size":527,"snap":"2021-31-2021-39","text_gpt3_token_len":125,"char_repetition_ratio":0.09942639,"word_repetition_ratio":0.0,"special_character_ratio":0.23719165,"punctuation_ratio":0.06930693,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999988,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-06T03:41:03Z\",\"WARC-Record-ID\":\"<urn:uuid:1116a925-27af-43e8-b52f-349a42daa245>\",\"Content-Length\":\"172395\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:88009f22-0c91-4edb-b606-5aa3f36d140f>\",\"WARC-Concurrent-To\":\"<urn:uuid:50a473dc-4dd5-41a8-a08c-62c1891bc35d>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://quant.stackexchange.com/questions/27452/how-to-simulate-3-correlated-stock-processes-following-a-gbm\",\"WARC-Payload-Digest\":\"sha1:QMQ5K34ARYYSZQKR5VWOTNWIN5OADAUJ\",\"WARC-Block-Digest\":\"sha1:ORZZTIDAPGVCQWSIVBHCLUANKV22WWR6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152112.54_warc_CC-MAIN-20210806020121-20210806050121-00266.warc.gz\"}"} |
https://www.tutoreye.com/matrix-a-has-rows-as-follows-row-a-b-c-row-d-e-f-row-g-h-i-matrix-b-has-rows-as-follows-row-a-b-c-row-qa | [
"Search matrix-a-has-rows-as-follows-row-a-b-c-row-d-e-f-row-g-h-i-matrix-b-has-rows-as-follows-row-a-b-c-row\n\n# Matrix a has rows as follows row a b c row d e f row g h i matrix b has rows as follows row a b c row\n\n## Top Questions\n\nas follows Row 1: [2a,6b,2c] Row 2: [−g,−3h,−i] Row 3: [d,3e,f] If det(A)=4, then det(B)=\nView More\n\n1.AU MAT 120 Systems of Linear Equations and Inequalities Discussion\n\nmathematicsalgebra Physics"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86597466,"math_prob":0.8519031,"size":269,"snap":"2022-40-2023-06","text_gpt3_token_len":91,"char_repetition_ratio":0.12830189,"word_repetition_ratio":0.09090909,"special_character_ratio":0.32342008,"punctuation_ratio":0.25,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9946662,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T07:14:46Z\",\"WARC-Record-ID\":\"<urn:uuid:3de5f2b3-b893-4705-b509-9a0da7231f0e>\",\"Content-Length\":\"215326\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8ad41821-6151-4d1f-8ab0-a896f94f43da>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a52745e-cbea-4cce-892f-c5a428c7aef4>\",\"WARC-IP-Address\":\"3.133.248.148\",\"WARC-Target-URI\":\"https://www.tutoreye.com/matrix-a-has-rows-as-follows-row-a-b-c-row-d-e-f-row-g-h-i-matrix-b-has-rows-as-follows-row-a-b-c-row-qa\",\"WARC-Payload-Digest\":\"sha1:FPK6YANUNYYAUQ47KLCCDKNQUWDFL4KB\",\"WARC-Block-Digest\":\"sha1:NGZYDATUVBK5AV6FLEQ5LYBTAEPQQKYE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335326.48_warc_CC-MAIN-20220929065206-20220929095206-00591.warc.gz\"}"} |
https://www.yourelectricalguide.com/2021/05/dc-generator-mcq-questions.html | [
"# DC Generator MCQ Questions\n\n### 1. In a DC generator, the ripples in the generated DC are reduced by employing\n\n(a) commutator with large number of segments.\n(b) equalizer rings.\n(c) carbon brushes.\n(d) graphite brushes.\n\n### 2. In order to ascertain whether a DC machine is lap wound or wave wound on the basis of visual observations of the armature, one should observe the\n\n(a) connections to the brushes\n(b) connections to the commutator.\n(c) connections to the field winding\n(d) direction of end connections.\n\n### 3. The coil span\n\n(a) must be exactly equal to pole pitch.\n(b) can never be equal to pole pitch.\n(c) may or may not be equal to pole pitch.\n(d) none of the above.\n\n### 4. The commutator pitch for simplex lap winding is\n\n(a) +1\n(b) -1\n(c) +1 or -1\n(d) average pitch\n\n### 5. The commutator pitch for simplex wave winding is\n\n(a) +1 or -1\n(b) twice average pitch.\n(c) average pitch.\n(d) half average pitch.\n\n(a) coils.\n(b) coil sides.\n(c) conductors.\n(d) turns.\n\n### 7. In a drum type DC armature winding the back pitch and front pitch in terms of the coil-sides must be\n\n(a) even and odd respectively.\n(b) odd and even respectively.\n(c) both even.\n(d) both odd.\n\n### 8. In a DC machine, fractional pitch is employed to\n\n(a) increase generated voltage.\n(b) reduce sparking.\n(c) save copper.\n(d) improve cooling.\n(e) both (b) and (c).\n\n### 9. In a DC machine, the armature winding is made up of number of coils distributed in large number of armature slots instead of placing all the coils into two slots to\n\n(a) have maximum EMF at the output terminals.\n(b) have maximum EMF generated in the armature.\n(c) have minimum heat dissipation from the armature.\n(d) make the armature dynamically balanced.\n\n### 10. Aluminum is not used as winding wire in DC machine armature as it\n\n(a) is costlier.\n(b) is of low resistivity.\n(c) is of lower thermal conductivity.\n(d) requires large winding space.\n\n### 11. Short circuiting is caused in the armature winding due to\n\n(a) failure of insulation between two turns of a coil.\n(b) failure of insulation between two commutator bars.\n(c) grounding of two or more turns of the same coil.\n(d) any of the above.\n\n### 12. In a lap wound generator, the equalizer rings are provided to\n\n(a) neutralize the armature reaction effect.\n(b) avoid short circuiting.\n(c) avoid unequal distribution of currents at brushes.\n(d) avoid harmonics generated in the EMF.\n\n(a) Z/P\n(b) 2Z/P\n(c) Z/2P\n(d) Z/4P\n\n### 14. A frog-leg winding is used in a DC machine to eliminate\n\n(a) the use of equalizers in lap winding.\n(b) the use of compensating winding.\n(c) the use of compensating winding.\n(d) reactance voltage produced due to commutation.\n\n### 15. A 250 V DC generator is run at rated speed with no excitation. The open-circuit voltage will be\n\n(a) zero\n(b) very small, say about 2 or 3 V.\n(d) 250 V.\n\n### 16. At a certain speed and flux, the voltage generated by a DC generator is 230 volts. If the speed in increased by 20% and the flux is simultaneously reduced by 10%, the voltage will be\n\n(a) increased by 10%.\n(b) reduced by 20%.\n(c) increased by 8%.\n(d) decreased by 8%.\n\n(a) sinusoidal.\n(b) triangular.\n(c) flat topped.\n(d) pulsating.\n\n(a) pulsating.\n(b) rectangular.\n(c) triangular.\n(d) sinusoidal.\n\n### 19. 
The armature MMF of a DC machine has\n\n(a) triangular space distribution and rotates at the speed of armature.\n(b) trapezoidal space distribution and is stationary in space.\n(c) stepped space distribution and rotates at the speed of armature.\n(d) triangular space distribution and is stationary in space.\n\n### 20. In a DC machine, the actual flux distribution depends upon\n\n(a) size of air gap.\n(b) shape of pole shoes.\n(c) clearance between tips of the adjacent pole shoes.\n(d) all of the above.\n\n### 21. In a DC machine, the space wave form of the air-gap flux distribution affects\n\n(a) torque only.\n(b) voltage only.\n(c) both the torque and voltage.\n(d) neither the torque nor the voltage.\n\n### 22. In a DC machine, the armature MMF is\n\n(a) stationary with respect to the poles but rotating with respect to the armature.\n(b) rotating with respect to field poles as well as armature.\n(c) rotating with respect to field poles.\n(d) stationary with respect to armature.\n\n### 23. Armature reaction in a DC shunt generator, running at full load with the brushes not shifted from the geometrical neutral plane and saturation neglected is\n\n(a) absent.\n(b) cross-magnetizing.\n(c) demagnetizing.\n(d) magnetizing.\n\n### 24. In a DC machine without any brush shift, the shift of magnetic neutral axis owing to armature reaction is\n\n(a) in the direction of rotation for both the generator and the motor.\n(b) against the direction of rotation for both the generator and the motor.\n(c) in the direction of rotation for the generator and against the direction of rotation for the motor.\n(d) against the direction of rotation for the generator and in the direction of rotation for the motor.\n\n### 25. In a DC machine, on no load the magnetic neutral axis\n\n(a) moves from geometric neutral axis in the direction of rotation.\n(b) moves from geometric neutral axis in the opposite direction of rotation.\n(c) coincides with the geometrical neutral axis.\n(d) none of the above.\n\n### 26. Due to magnetic saturation, the flux per pole in a DC machine without brush shift\n\n(a) increases in both the generators and the motors with load.\n(b) decreases in both the generators and the motors with load.\n(c)[/expander_maker] increases in generators but decreases in motors with load.\n(d)[/expander_maker] decreases in generators but increases in motors with load.\n\n### 27. If the brushes are given a small amount of forward shift in case of DC generator, the armature reaction effect will be\n\n(a) totally demagnetizing.\n(b) totally cross-magnetizing.\n(c) partly demagnetizing and partly cross-magnetizing.\n(d) totally magnetizing.\n\n(a) 4\n(b) 2\n(c) 8\n(d) 1\n\n### 29. Commutation in a DC machine may be explained as a process\n\n(a) in which magnetic field is set up.\n(b) by virtue of which EMF is induced in the armature conductors.\n(c) by which current in a coil is reversed during the period it is short circuited by the brush.\n(d) how armature flux affects the main field flux.\n\n### 30. In a DC shunt generator working on load, the brushes are moved forward in the direction of rotation, as a result of this, commutation will\n\n(a) improve but terminal voltage will fall.\n(b) worsen and terminal voltage will fall.\n(c) improve and terminal voltage will rise.\n(d) worsen and terminal voltage will rise.\n\n### 31. 
In a DC machine, the sparking between brushes and commutator surface may be due to\n\n(a) under commutation.\n(b) over commutation.\n(c) too rapid reversal of current.\n(d) any of the above.\n\n### 32. The self induced EMF in the coil undergoing commutation is called the\n\n(a) reactance voltage.\n(b) statically induced voltage.\n(c) dynamically induced voltage.\n(d) none of the above.\n\n### 33. The sparking at the brushes, in a DC machine, is due to\n\n(a) armature reaction.\n(b) reactance voltage.\n(c) presence of commutator.\n(d) high resistance of carbon brushes.\n\n### 34. The reactance voltage is\n\n(a) directly proportional to armature current and commutation period.\n(b) directly proportional to commutation period and inversely proportional to armature current.\n(c) directly proportional to armature current and inversely proportional to commutation period.\n(d) none of the above.\n\n### 35. Sparkless commutation can be achieved by employing\n\n(a) interpoles.\n(b) compensating winding.\n(c) high resistance carbon brushes.\n(d) any one or more of the above methods.\n\n### 36. To have sparkless commutation, the armature reaction effect in a DC a machine is neutralized by\n\n(a) compensating winding and commutating poles.\n(b) shifting the brush axis from geometrical neutral axis to magnetic neutral axis.\n(c) fixing the brush axis in line with the pole axis.\n(d) increasing the field excitation.\n\n## DC Generator MCQ Questions\n\n© www.yourelectricalguide.com/ dc generator mcq questions."
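As a quick worked illustration of the arithmetic behind question 16 (the only fully numerical item above): for a given machine, the standard DC generator EMF equation, which the quiz assumes but does not restate, makes the generated voltage proportional to flux times speed.

$$E=\frac{P\phi Z N}{60A}\;\Rightarrow\;E\propto \phi N,\qquad \frac{E_2}{E_1}=\frac{(0.9\,\phi)(1.2\,N)}{\phi N}=1.08,$$

so the new voltage is $1.08 \times 230\ \text{V} \approx 248\ \text{V}$, an increase of 8%, which corresponds to option (c).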
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8543793,"math_prob":0.9529867,"size":8752,"snap":"2022-40-2023-06","text_gpt3_token_len":2118,"char_repetition_ratio":0.1622085,"word_repetition_ratio":0.08045213,"special_character_ratio":0.24497257,"punctuation_ratio":0.11502347,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.979554,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-05T05:07:58Z\",\"WARC-Record-ID\":\"<urn:uuid:b8f2749d-dbae-4d9d-980e-8d5cc859810b>\",\"Content-Length\":\"213470\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a65fee03-d7b5-4160-abf6-35cb9a12f97b>\",\"WARC-Concurrent-To\":\"<urn:uuid:afad5a94-5b42-4358-aafb-dd3f8bd46647>\",\"WARC-IP-Address\":\"104.21.22.163\",\"WARC-Target-URI\":\"https://www.yourelectricalguide.com/2021/05/dc-generator-mcq-questions.html\",\"WARC-Payload-Digest\":\"sha1:SM6X2V77NVJPRCOSWZGVBI7Z6SUDMAAZ\",\"WARC-Block-Digest\":\"sha1:PQNAFMQQICBSMB3QJ2GFG26FRIA6BCGF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500215.91_warc_CC-MAIN-20230205032040-20230205062040-00636.warc.gz\"}"} |
https://www.easyelimu.com/qa/4226/wooden-horizontal-surface-horizontal-exerted-accelerate | [
"# A wooden block of mass 2kg is placed on a horizontal surface. A horizontal force of 14N is exerted on it which makes it to accelerate at 5ms-1.\n\n499 views\n\nA wooden block of mass 2kg is placed on a horizontal surface. A horizontal force of 14N is exerted on it which makes it to accelerate at 5ms-1.\nFind the frictional force acting between the surfaces.\n\nF=ma\nF= 2x5 = 10N\nFr= Force applied - resultant force - Formula can be implied substitution\n= 14-10\n=4N"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9029954,"math_prob":0.9651065,"size":246,"snap":"2023-40-2023-50","text_gpt3_token_len":65,"char_repetition_ratio":0.11570248,"word_repetition_ratio":0.0,"special_character_ratio":0.24796748,"punctuation_ratio":0.09615385,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98668313,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T03:28:50Z\",\"WARC-Record-ID\":\"<urn:uuid:74326874-e677-453d-87a2-3a7de2e26a6b>\",\"Content-Length\":\"34742\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b9b27bd-dee7-4e08-93ca-2461e5608f9a>\",\"WARC-Concurrent-To\":\"<urn:uuid:802fa378-df0f-490f-aa27-492b70e1a27e>\",\"WARC-IP-Address\":\"75.119.156.104\",\"WARC-Target-URI\":\"https://www.easyelimu.com/qa/4226/wooden-horizontal-surface-horizontal-exerted-accelerate\",\"WARC-Payload-Digest\":\"sha1:AOW5FFVT27KRADOZYIQBFE2QNKZVAHF7\",\"WARC-Block-Digest\":\"sha1:I5IVDKA2OXGWMMVYSSDTY2OLT57TNC3B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510358.68_warc_CC-MAIN-20230928031105-20230928061105-00701.warc.gz\"}"} |