https://kr.mathworks.com/matlabcentral/answers/1460839-how-to-reduce-space-between-axis-labels-and-axis-ticks?s_tid=prof_contriblnk
[ "# How to reduce space between axis labels and axis ticks?\n\n조회 수: 100(최근 30일)\nAmmy 2021년 9월 26일\n댓글: Walter Roberson 2021년 9월 26일\nx=1:54;\nplot(x,y1,'k-+',x,y2,'b-*')\nxlim([1 54]\nxticklabel_rotate([1:54],90,D\nxlabel('Sample 1')\nylabel('Sample 2')\nHow to reduce space between axis label and axis ticks, i,e., how to get axis label closer to axis.\n##### 댓글 수: 1표시숨기기 없음\nWalter Roberson 2021년 9월 26일\n\n댓글을 달려면 로그인하십시오.\n\n### 답변(1개)\n\nKevin Holly 2021년 9월 26일\nx=1:54;\ny1=cos(x);\ny2=sin(x);\nplot(x,y1,'k-+',x,y2,'b-*')\nxlim([1 54])\nxlabel('Sample 1')\nylabel('Sample 2')\nGet the axes handle by using the command below\naxes_handle = gca; %gca stands for get current axes\nNow I am looking up the current position of the xlabel.\naxes_handle.XLabel.Position\nans = 1×3\n27.5000 -1.1409 -1.0000\nThe position in the x, y, and z-direction are 0.5, -0.0705, and 0, respectively.\nNow, change the second value in the array. This will change the vertical position.\naxes_handle.XLabel.Position = [27.5 -1.10 -1];\naxes_handle.YLabel.Position\nans = 1×3\n-3.1054 0.0000 -1.0000\nNow, let's change the first value in the position array to move the ylabel horizontally.\naxes_handle.YLabel.Position = [-2.6 0 -1];", null, "##### 댓글 수: 4표시숨기기 이전 댓글 수: 3\nWalter Roberson 2021년 9월 26일\nR2013a was mentioned in the comments in the question that this one is a duplicate of...\n\n댓글을 달려면 로그인하십시오.\n\n### Community Treasure Hunt\n\nFind the treasures in MATLAB Central and discover how the community can help you!\n\nStart Hunting!" ]
[ null, "https://www.mathworks.com/matlabcentral/answers/uploaded_files/749949/image.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6318821,"math_prob":0.93307906,"size":1274,"snap":"2021-43-2021-49","text_gpt3_token_len":432,"char_repetition_ratio":0.13070866,"word_repetition_ratio":0.020618556,"special_character_ratio":0.32731554,"punctuation_ratio":0.2012987,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9711637,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T12:23:01Z\",\"WARC-Record-ID\":\"<urn:uuid:36669a47-94a1-4c0b-b10b-011e882a318c>\",\"Content-Length\":\"138692\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:589aa2a8-2554-4a04-9715-1bef640a7305>\",\"WARC-Concurrent-To\":\"<urn:uuid:e6f3001e-1617-470a-ad1f-05c70fde1e6a>\",\"WARC-IP-Address\":\"184.25.188.167\",\"WARC-Target-URI\":\"https://kr.mathworks.com/matlabcentral/answers/1460839-how-to-reduce-space-between-axis-labels-and-axis-ticks?s_tid=prof_contriblnk\",\"WARC-Payload-Digest\":\"sha1:UEV6J2IQWFPGFHWMMOWYZEX3Z6FKZQK2\",\"WARC-Block-Digest\":\"sha1:E54HHJ7LDB72VTKN427VZYV74NUN443M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363376.49_warc_CC-MAIN-20211207105847-20211207135847-00067.warc.gz\"}"}
https://www.tensorflow.org/probability/api_docs/python/tfp/substrates/numpy/math/log_cumsum_exp
[ "# tfp.substrates.numpy.math.log_cumsum_exp\n\nComputes log(cumsum(exp(x))).\n\nThis is a pure-TF implementation of `tf.math.cumulative_logsumexp`; unlike the built-in op, it supports XLA compilation. It uses a similar algorithmic technique (parallel prefix sum) as the built-in op, so it has similar numerics and asymptotic performace. However, this implemenentation currently has higher overhead, so it is significantly slower on smaller inputs (`n < 10000`).\n\n`x` the `Tensor` to sum over.\n`axis` int `Tensor` axis to sum over.\n`name` Python `str` name prefixed to Ops created by this function. Default value: `None` (i.e., `'cumulative_logsumexp'`).\n\n`cumulative_logsumexp` `Tensor` of the same shape as `x`." ]
https://www.santamonicainhometutors.com/blog/integral-of-arctan-tan-inverse-x
[ "", null, "We Tutor All Subjects & Grade Levels - In Home And Online\nMay 19, 2023\n\n# Integral of Arctan (Tan Inverse x)\n\nArctan is one of the six trigonometric operations and performs a vital role in several mathematical and scientific domains. Its inverse, the arctangent function, is used to determine the angle in a right-angled triangle while given the ratio of the adjacent and opposite sides.\n\nCalculus is a division of mathematics which deals with the study of rates of accumulation and change. The integral of arctan is a key concept in calculus and is used to solve a broad range of problems. It is applied to determine the antiderivative of the arctan function and measure definite integrals that include the arctan function. Additionally, it is utilized to calculate the derivatives of functions which consist of the arctan function, for example the inverse hyperbolic tangent function.\n\nFurthermore to calculus, the arctan function is applied to model a wide array of physical phenomena, involving the movement of objects in circular orbits and the behavior of electrical circuits. The integral of arctan is used to determine the potential energy of objects in round orbits and to analyze the mechanism of electrical circuits which involve inductors and capacitors.\n\nIn this blog, we will examine the integral of arctan and its various uses. We will examine its properties, consisting of its formula and how to determine its integral. We will also look at instances of how the integral of arctan is applied in physics and calculus.\n\nIt is important to understand the integral of arctan and its characteristics for learners and working professionals in fields such as physics, engineering, and mathematics. By grasping this rudimental theory, individuals can apply it to solve challenges and gain detailed understanding into the intricate workings of the world around us.\n\n## Significance of the Integral of Arctan\n\nThe integral of arctan is a crucial math theory that has several uses in physics and calculus. It is utilized to calculate the area under the curve of the arctan function, that is a persistent function that is largely used in math and physics.\n\nIn calculus, the integral of arctan is applied to determine a broad range of problems, consisting of finding the antiderivative of the arctan function and assessing definite integrals which involve the arctan function. It is also applied to determine the derivatives of functions which consist of the arctan function, such as the inverse hyperbolic tangent function.\n\nIn physics, the arctan function is used to model a broad range of physical phenomena, including the motion of things in circular orbits and the behavior of electrical circuits. The integral of arctan is applied to work out the potential energy of things in circular orbits and to analyze the working of electrical circuits that include capacitors and inductors.\n\n## Characteristics of the Integral of Arctan\n\nThe integral of arctan has many characteristics which make it a useful tool in physics and calculus. 
Few of these characteristics include:\n\nThe integral of arctan x is equal to x times the arctan of x minus the natural logarithm of the absolute value of the square root of one plus x squared, plus a constant of integration.\n\nThe integral of arctan x can be stated as the terms of the natural logarithm function using the substitution u = 1 + x^2.\n\nThe integral of arctan x is an odd function, this implies that the integral of arctan negative x is equivalent to the negative of the integral of arctan x.\n\nThe integral of arctan x is a continuous function that is defined for all real values of x.\n\n## Examples of the Integral of Arctan\n\nHere are handful instances of integral of arctan:\n\nExample 1\n\nLet’s assume we want to determine the integral of arctan x with respect to x. Applying the formula stated above, we obtain:\n\n∫ arctan x dx = x * arctan x - ln |√(1 + x^2)| + C\n\nwhere C is the constant of integration.\n\nExample 2\n\nLet's say we want to determine the area under the curve of the arctan function within x = 0 and x = 1. Using the integral of arctan, we achieve:\n\n∫ from 0 to 1 arctan x dx = [x * arctan x - ln |√(1 + x^2)|] from 0 to 1\n\n= (1 * arctan 1 - ln |√(2)|) - (0 * arctan 0 - ln |1|)\n\n= π/4 - ln √2\n\nThus, the area under the curve of the arctan function within x = 0 and x = 1 is equal to π/4 - ln √2.\n\n## Conclusion\n\nDinally, the integral of arctan, further known as the integral of tan inverse x, is an essential mathematical concept that has several applications in physics and calculus. It is applied to determine the area under the curve of the arctan function, which is a continuous function which is broadly used in multiple fields. Understanding the properties of the integral of arctan and how to apply it to work out problems is crucial for learners and working professionals in fields for instance, engineering, physics, and mathematics.\n\nThe integral of arctan is one of the fundamental concepts of calculus, which is a vital division of math used to understand change and accumulation. It is utilized to figure out many problems for example, finding the antiderivative of the arctan function and assessing definite integrals consisting of the arctan function. In physics, the arctan function is applied to model a broad array of physical phenomena, involving the motion of objects in round orbits and the behavior of electrical circuits.\n\nThe integral of arctan has many properties which make it a beneficial tool in physics and calculus. It is an odd function, that means that the integral of arctan negative x is equivalent to the negative of the integral of arctan x. The integral of arctan is further a continuous function that is specified for all real values of x.\n\nIf you require assistance understanding the integral of arctan or any other mathematical concept, Grade Potential Tutoring provides personalized tutoring services. Our expert instructors are available remotely or face-to-face to give one-on-one assistance which will assist you attain your academic objectives. Don't hesitate to call us at Grade Potential Tutoring to schedule a lesson and take your math skills to the next stage.", null, "", null, "", null, "" ]
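Both the closed form and the value in Example 2 can be double-checked symbolically; here is a minimal sketch using SymPy (not part of the original post):

```python
import sympy as sp

x = sp.symbols('x')

# Antiderivative: x*atan(x) - log(1 + x**2)/2, i.e. x*arctan(x) - ln(sqrt(1 + x**2)).
F = sp.integrate(sp.atan(x), x)

# Definite integral over [0, 1]: pi/4 - log(2)/2, i.e. pi/4 - ln(sqrt(2)).
area = sp.integrate(sp.atan(x), (x, 0, 1))

print(F)     # x*atan(x) - log(x**2 + 1)/2
print(area)  # -log(2)/2 + pi/4
```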
[ null, "https://www.santamonicainhometutors.com/corporate/uploads/phone tagline.gif", null, "https://www.santamonicainhometutors.com/corporate/uploads/264c200b-arrow3-left_105m07v000000000000028.png", null, "https://www.santamonicainhometutors.com/corporate/uploads/3ee13aa7-arrow3-right_105m07v000000000000028.png", null, "https://www.santamonicainhometutors.com/corporate/uploads/underline.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9133889,"math_prob":0.9933469,"size":6055,"snap":"2023-40-2023-50","text_gpt3_token_len":1327,"char_repetition_ratio":0.21533631,"word_repetition_ratio":0.23158915,"special_character_ratio":0.19917424,"punctuation_ratio":0.07181818,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9982493,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T08:34:32Z\",\"WARC-Record-ID\":\"<urn:uuid:093a2636-1cb8-4378-8976-7c7315aee2a5>\",\"Content-Length\":\"95140\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0b544689-fa29-4ef9-a976-ba064b031851>\",\"WARC-Concurrent-To\":\"<urn:uuid:6db6c002-08f2-4162-825d-4f2e4faf84d4>\",\"WARC-IP-Address\":\"69.20.38.55\",\"WARC-Target-URI\":\"https://www.santamonicainhometutors.com/blog/integral-of-arctan-tan-inverse-x\",\"WARC-Payload-Digest\":\"sha1:O764NQYOB5X4P5NKTPKR3M6GRIQW4CGK\",\"WARC-Block-Digest\":\"sha1:XLSYPIWO4QZPXTL3PIDJRE3IAJSTDPZU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679103810.88_warc_CC-MAIN-20231211080606-20231211110606-00289.warc.gz\"}"}
https://hackage.haskell.org/package/aivika-lattice-0.4/candidate/docs/src/Simulation-Aivika-Lattice-QueueStrategy.html
[ "```\n{-# LANGUAGE TypeFamilies, MultiParamTypeClasses #-}\n\n-- |\n-- Module : Simulation.Aivika.Lattice.QueueStrategy\n-- Maintainer : David Sorokin <david.sorokin@gmail.com>\n-- Stability : experimental\n-- Tested with: GHC 7.10.3\n--\n-- This module defines queue strategies 'FCFS' and 'LCFS' for the 'LIO' computation.\n--\nmodule Simulation.Aivika.Lattice.QueueStrategy () where\n\nimport Simulation.Aivika.Trans\n\nimport Simulation.Aivika.Lattice.Internal.LIO\nimport Simulation.Aivika.Lattice.Ref.Base\n\n-- | An implementation of the 'FCFS' queue strategy.\ninstance QueueStrategy LIO FCFS where\n\n-- | A queue used by the 'FCFS' strategy.\nnewtype StrategyQueue LIO FCFS a = FCFSQueueLIO (LL.DoubleLinkedList LIO a)\n\nnewStrategyQueue s = fmap FCFSQueueLIO LL.newList\n\nstrategyQueueNull (FCFSQueueLIO q) = LL.listNull q\n\n-- | An implementation of the 'FCFS' queue strategy.\ninstance DequeueStrategy LIO FCFS where\n\nstrategyDequeue (FCFSQueueLIO q) =\ndo i <- LL.listFirst q\nLL.listRemoveFirst q\nreturn i\n\n-- | An implementation of the 'FCFS' queue strategy.\ninstance EnqueueStrategy LIO FCFS where\n\nstrategyEnqueue (FCFSQueueLIO q) i = LL.listAddLast q i\n\n-- | An implementation of the 'LCFS' queue strategy.\ninstance QueueStrategy LIO LCFS where\n\n-- | A queue used by the 'LCFS' strategy.\nnewtype StrategyQueue LIO LCFS a = LCFSQueueLIO (LL.DoubleLinkedList LIO a)\n\nnewStrategyQueue s = fmap LCFSQueueLIO LL.newList\n\nstrategyQueueNull (LCFSQueueLIO q) = LL.listNull q\n\n-- | An implementation of the 'LCFS' queue strategy.\ninstance DequeueStrategy LIO LCFS where\n\nstrategyDequeue (LCFSQueueLIO q) =\ndo i <- LL.listFirst q\nLL.listRemoveFirst q\nreturn i\n\n-- | An implementation of the 'LCFS' queue strategy.\ninstance EnqueueStrategy LIO LCFS where\n\nstrategyEnqueue (LCFSQueueLIO q) i = LL.listInsertFirst q i\n```" ]
https://www.careeralert.in/what-is-the-formula-to-calculate-the-tariff-having-250kva-transformer-for-1-month/
[ "## What is the formula to calculate the tariff having 250kva transformer for 1 month?\n\nQuestion: What is the formula to calculate the tariff having 250kva transformer for 1 month?" ]
https://sob5050.com/pnpfbmg/article.php?id=619bcf-what-is-domain-and-range
[ "# what is domain and range\n\nPractice. Textbook Notes. The range is the numbers on the right side or the y valued numbers. In interval notation, we use a square bracket [ when the set includes the endpoint and a parenthesis ( to indicate that the endpoint is either not included or the interval is unbounded. The range, and the most typical, there's actually a couple of definitions for range, but the most typical definition for range is \"the set of all possible outputs.\" For any real value of x, there corresponds a unique real value of f(x). In this non-linear system, users are free to take whatever path through the material best serves their needs. Save. Edit. What is Codomain of a Function? Your dashboard and recommendations. Betterment LLC is an SEC Registered Investment Advisor. Then A is called domain of f and B is called the co-domain of the function f . Delete Quiz. Investing in securities involves risks, and there is always the potential of losing money when you invest in securities. ★★★ Correct answer to the question: What is the domain and range ? Mathematical function means the association between two groups of variables. Given a function in function notation form, identify the domain and range using set notation, interval notation, or a verbal description as appropriate. Domain vs Range. Domain is simply the permissions that are applied to the x-axis of a graph, meaning the Domain indicates where the graph can lie on the x-axis and where it cannot. Solution for What is the domain and range of the function {(-3, 11), (–2, 2), (0, 2), (2, 6), (4, 18)}? 3.7 million tough questions answered. To play this quiz, please finish editing it. Write the range with proper notation. Like the domain, the range is written with the same notation. Functions. 8.1 Function Notation, Domain and Range NOTES What is a function? 9th - 10th grade . Domain vs Range. Range of a function – this is the set of output values generated by the function (based on the input values from the domain set). Please read \"What is a Function?\" Solo Practice. Play. In mathematics, the domain or set of departure of a function is the set into which all of the input of the function is constrained to fall. A mathematical function is a relationship between two sets of variables. The function represents a parabolic shape as it is a quadratic equation. ; The codomain is similar to a range, with one big difference: A codomain can contain every possible output, not just those that actually appear. A linear function extends from -oo to +oo, so that all values of x is allowed and the value of f(x) also includes the set of all real numbers. Played 119 times. 0. Laura S. Harris (2020, November 27.) Click here👆to get an answer to your question ️ Find the domain and range of the real function f(x) = √(25 - x^2) . 48% average accuracy. The definition of Domain and range are as what is stated above, but I'll rephrase these definitions in my own words. In the following tutorial I continue by looking at square root and reciprocal functions. In the previous chapter, there is only one example of a function. The domain is {-6,-3,-2,2} and the range is {-7,-4,-1,5} The domain is the numbers on the left side or the x valued numbers. Virtual Nerd's patent-pending tutorial system provides in-context information, hints, and links to supporting tutorials, synchronized with videos, each 3 to 7 minutes long. Domain and Range. But in fact they are very important in defining a function. 
In interval notation, we use a square bracket [ when the set includes the endpoint and a parenthesis ( to indicate that the endpoint is either not included or the interval is unbounded. Write your questions What is the domain and range of : $$f(x) = \\frac{2e^x}{1+e^x}$$ and the domain and range of : $$f^{-1}(x)$$ I've already found $$f^{-1}(x) = \\ln\\left(\\frac {x}{2-x}\\right)$$ I'm looking for the process necessary to find its domain and range. How to Use the Domain and Range Calculator? I am often asked what is the domain and range of a function. Personalized courses, with or without credits. Finish Editing. Let f : A → B be a function . May Lose Value. Study Guides. One is independent called domain and other is dependent called range. What is domain and range examples? Its domain is a set of few numbers, but there are many more possibilities. What Is The Domain And Range • What is domain and range examples? The letter U indicates a union that connects parts of a domain that may be separated by a gap. Input . Domain and Range of a Parabola. - edu-answer.com Ace your next exam with ease . Edit. Share practice link. Print; Share; Edit; Delete; Host a game. They may also have been called the input and output of the function.) It is the set X in the notation f: X → Y, and is alternatively denoted as ⁡ (). Learn domain and range with free interactive flashcards. Domain and Range DRAFT. No Bank Guarantee. 0. by mgranata_21577. Get the detailed answer: What is the domain and range of ? Investments are not FDIC Insured. In other words, for two dimensional Cartesian coordinate system or XY system, the variable along x-axis is called as Domain and along y-axis is called as Range. Switch to. Choose from 500 different sets of domain and range flashcards on Quizlet. Domain and Range The domain of a function f ( x ) is the set of all values for which the function is defined, and the range of the function is the set of all values that f takes. Domain = Set A. Codomain = Set B, and. The range is the subset of the codomain. Domain: {IR} Range: {IR} We could also use interval notation to assign our domain and range: Domain (-infinity, infinity) Range (-infinity, infinity) This is a function. Read on! Class Notes. Domain and range are terms that are applicable to mathematics, especially in relation to the physical sciences consisting of functions. Domain, Range and Codomain. 7 months ago. The set of all the image points f(x), for x€ A , is called the range of f that is range of f = { f(x) : x € A} . The domain is the set of x-values that can be put into a function.In other words, it’s the set of all possible values of the independent variable. Home. In the example above, the range … The range can be figured out by looking at the curve. We can often write the domain and range in interval notation, which uses values within brackets to describe a set of numbers. - domain and range (this chapter) - zero of a function - points of intersection with the axes - monotonicity (monotonic functions, not monotonic functions) - maximal intervals of monotonicity - positive and negative values - minimum and maximum. Brokerage services are offered by Betterment Securities, an SEC registered broker-dealer and member FINRA/SIPC. This quiz is incomplete! Kindly see the graph of f(x)=3x+1 graph{y=3x+1[-20,20,-10,10]} God bless....I hope the explanation is useful. Booster Classes. Input and Output. Since a function is defined on its entire domain, its domain coincides with its domain of definition. 
Domain of a function – this is the set of input values for the function. Homework Help. ; The range is the set of y-values that are output for the domain. Domain: -oo<\"x\"<+oo Range: -oo<\"f(x)\"<+oo This is a linear function. Homework. A sine curve represent a wave the repeats at a regular frequency. BYJU’S online domain and range calculator tool makes the calculation faster, and it displays the output in a fraction of seconds. The difference between domain and range are somewhat obvious, but the difference between a codomain and range are subtle. Domain and range are prime factors that decide the applicability of mathematical functions. The domain includes the values that go into a function (the x-values) and the range are the values that come out (the or y-values). Many thanks. View A1 8.1 Function Notation Domain and Range Packet.pdf from MATH 1450 at William Paterson University. All inputs that yield an output are elements of the domain of the function; Inputs are conventionally plotted along the horizontal axis of a graph; Input values are known when collecting data; Output. So in the following tutorials I introduce you to this concept by looking at linear, quadratic, root and reciprocal functions. Even though the two functions perform the same operation, their different domain and ranges give them different behaviour. These unique features make Virtual Nerd a viable alternative to private tutoring. In the example above, the domain of $$f\\left( x \\right)$$ is set A. Mathematics. The square root function to the right does not have a domain or range of all real numbers. (In grammar school, you probably called the domain the replacement set and the range the solution set. Domain and Range Calculator is a free online tool that displays the range and domain for the given function. There is some terminology associated with functions that we should be familiar with as well. By determining the domain and range (or codomain) of a function before you begin to graph it, you will usually develop a mental image of the function that can help you avoid making errors, or wasting space, on your graph paper when graphing the function. Domain: {-3, -2, 0, 2, 4}: Range: {11, 6, 2, 18} Domain:… The domain and range are all real numbers because, at some point, the x and y values will be every real number. Inputs and Outputs: Domain and Range. These can be slightly more demanding. The range would then be {0, 1, 4, 9, …}. For the function {(0,1), (1,-3), (2,-4), (-4,1)}, write the domain and range. We can write the domain and range in interval notation, which uses values within brackets to describe a set of numbers. Mathematically, consider a simple relation as {(2, 3), (1, 3), (4, … Use a bracket when the number is included in the domain and use a parenthesis when the domain does not include the number. Live Game Live. Understand the content in your textbooks. Play this game to review Algebra I. In its simplest form the domain is all the values that go into a function, and the range is all the values that come out. Range (R) = {1, 8, 64, 125} The range should be cube of set A, but cube of 3 (that is 27) is not present in the set B, so we have 3 in domain, but we don’t have 27 either in codomain or range. Find class notes for your course. Based upon this graph, the maximum Definition of domain and range in interval notation, domain and range are all numbers... Connects parts of a function so in the previous chapter, there corresponds a unique real value of x there! 
Of few numbers, but the difference between domain and range in interval notation which. By Betterment securities, an SEC registered broker-dealer and member FINRA/SIPC real number describe a set of input for..., there is always the potential of losing money when you invest in securities involves,... Set a can often write the domain and range are prime factors that the... Independent called domain and range are subtle terminology associated with functions that we should be with... Output in a fraction of seconds, quadratic, root and reciprocal functions chapter, there corresponds a real! Is some terminology associated with functions that we should be familiar with as well my!, the maximum Learn domain and range are terms that are output for the and. Curve represent a wave the repeats at a regular frequency it is the and... Two functions perform the same operation, their different domain and use a parenthesis when number. The two functions perform the same notation questions domain = set B, and it displays output! They are very important in defining a function. risks, and it displays the range the solution.... At William Paterson University, 1, 4, 9, … } ( x ) makes calculation! Should be familiar with as well from MATH 1450 at William Paterson University output in fraction... At the curve and B is called domain and range Calculator is a set of that! Mathematical function means the association between two sets of variables decide the applicability of mathematical.!, users are free to take whatever path through the material best serves their needs of x, is! The repeats at a regular frequency difference between domain and use a bracket when the domain of.! Harris ( 2020, November 27. displays the output in a fraction seconds! Describe a set of numbers and B is called domain of \\ ( f\\left x... To describe a set of few numbers, but the difference between domain range. As well to describe a set of y-values that are applicable to,. A relationship between two groups of what is domain and range their needs not have a domain or of... Be a function risks, and there is some terminology associated with functions that we should be familiar as! And use a parenthesis when the number is a set of input values for the function. and displays! At the curve be figured out by looking at linear, quadratic, root and reciprocal functions that the! Entire domain, the domain and other is dependent called range familiar with as.. A set of few numbers, but I 'll rephrase these definitions in my own words and is! ; what is domain and range range … domain vs range root and reciprocal functions William University! \\Right ) \\ ) is set a not include the number a free online tool that displays the in. A sine curve represent a wave the repeats at a regular frequency range is the numbers on the right not... Is the numbers on the right side or the y valued numbers range Packet.pdf from MATH 1450 at Paterson..., an SEC registered broker-dealer and member FINRA/SIPC through the material best serves their needs of domain and with. Notation what is domain and range and range with free interactive flashcards and it displays the range the solution set to! 8.1 function notation, which uses values within brackets to describe a set of few numbers, I., users are free to take whatever path through the material best serves needs. Domain = set A. Codomain = set B, and it displays the output in a of. At William Paterson University are free to take whatever path through the material best serves their.! 
Then be { 0, 1, 4, 9, … } tutorials I introduce to! Two groups of variables → B be a function – this is the does... William Paterson University there is only one example of a function. included in the domain of a domain may. A bracket when the domain does not have a domain that may be separated by a.! Indicates a union that connects parts of a domain that may be separated a! Risks, and there is always the potential of losing money when you invest in securities risks!, and is alternatively denoted as ⁡ ( ) the curve U indicates a that. Called domain of a function – this is the numbers on the does! Figured out by looking at square root and reciprocal functions the potential of losing money when invest! A1 8.1 function notation, domain and range are somewhat obvious, but the difference between domain and range?... At square root function to the physical sciences consisting of functions the replacement set and range!, quadratic, root and reciprocal functions S. Harris ( 2020, November 27. input... Get the detailed answer: what is the domain and range are that... The definition of domain and range are as what is domain and range are somewhat obvious, there. That connects parts of a function the example above, the domain and range in interval notation, domain range... They may also have been called the co-domain of the function represents a parabolic shape it. Can often write the domain and range in interval notation, which uses values within brackets to a... Input and output of the function represents a parabolic shape what is domain and range it is the of... Domain vs range of y-values that are output for the domain and range interval! Be { 0, 1, 4, 9, … } material! Interval notation, which uses values within brackets to describe a set of few,. So in the following tutorial I continue by looking at the curve the function. denoted. Maximum Learn domain and range of their different domain and range in interval notation, domain and NOTES... Domain does not include the number is included in the previous chapter there... We should be familiar with as well the notation f: x → y, and domain and flashcards... At the curve alternative to private tutoring this graph, the range is the domain and range • what the... €¦ } brackets to describe a set of numbers to take whatever path the. Definitions in my own words is the set of y-values that are applicable to,! Relationship between two sets of domain and ranges give them different behaviour Paterson. By a gap have a domain that may be separated by a gap is written with the operation. Called range real number tool that displays the range and domain for the.... Written with the same operation, their different domain and range of the right side or the y valued.. All real numbers because, at some point, the x and y values will be real. Parenthesis when the number a mathematical function is a free online tool that displays the output in a of. Relationship between two groups of variables does not have a domain or range of a?... Based upon this graph, the domain the replacement set and the range and domain for the represents! Flashcards on Quizlet S. Harris ( 2020, November 27. users are free to take whatever path through material... And domain for the domain on Quizlet viable alternative to private tutoring there is some associated. At linear, quadratic, root and reciprocal functions 9, … } example,... Quadratic equation of input values for the domain and range flashcards on Quizlet of the function. output of function... 
Grammar school, you probably called the what is domain and range of the function. f ( x \\right ) \\ ) set... €¢ what is the domain, the domain and range examples, domain... Are all real numbers ; Edit ; Delete ; Host a game one example of a function. the root... Other is dependent called range values within brackets to describe a set of numbers connects parts a! To this concept by looking at linear, quadratic, root and reciprocal functions shape it! Some point, the x and y values will be every real number Delete ; a! ( 2020, November 27. brackets to describe a set of numbers of real. Nerd a viable alternative to private tutoring not include the number in interval notation which. Of y-values that are applicable to mathematics, especially in relation to the right not. A → B be a function is defined on its entire domain, its coincides... Different behaviour numbers, but the difference between a Codomain and range subtle... Written with the same operation, their different domain and range are somewhat obvious, but I rephrase. Range the solution set investing in securities or the y valued numbers of values. Y, and it displays the range is written with the same operation, their different domain and use bracket... Following tutorial I what is domain and range by looking at square root function to the physical consisting... For the given function. coincides with its domain of definition Share ; Edit ; Delete ; a! As what is the numbers on the right side or the y valued numbers be separated by a.. F\\Left ( x \\right ) \\ ) is set a parabolic shape as it is the domain and are... Codomain and range are somewhat obvious, but I 'll rephrase these definitions in my own words domain... Between domain and range Packet.pdf from MATH 1450 at William Paterson University set a of y-values that are to!" ]
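For finite relations like the ones above, the domain and range can be read off programmatically; a small illustrative sketch (not from the original page):

```python
pairs = [(-3, 11), (-2, 2), (0, 2), (2, 6), (4, 18)]

domain = sorted({x for x, _ in pairs})   # [-3, -2, 0, 2, 4]
range_ = sorted({y for _, y in pairs})   # [2, 6, 11, 18]

print(domain, range_)
```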
https://flutterq.com/solved-laravel-eloquent-where-date-is-equal-or-greater-than-datetime/
[ "# [Solved] Laravel eloquent where date is equal or greater than DateTime\n\nHello Guys, How are you all? Hope You all Are Fine. Today I get the following error Laravel eloquent where date is equal or greater than DateTime in php. So Here I am Explain to you all the possible solutions here.\n\n## How Laravel eloquent where date is equal or greater than DateTime Error Occurs?\n\nToday I get the following error Laravel eloquent where date is equal or greater than DateTime in php.\n\n## How To Solve Laravel eloquent where date is equal or greater than DateTime Error ?\n\n1. How To Solve Laravel eloquent where date is equal or greater than DateTime Error ?\n\nTo Solve Laravel eloquent where date is equal or greater than DateTime Error Here you can use this:\n`->where('date', '>=', date('Y-m-d'))`\n\n2. To Solve Laravel eloquent where date is equal or greater than DateTime Error\n\nTo Solve Laravel eloquent where date is equal or greater than DateTime Error Here you can use this:\n`->where('date', '>=', date('Y-m-d'))`\n\n## Solution 1\n\nSounds like you need to use `>=`, for example:\n\n`->whereDate('date', '>=', Carbon::now('Europe/Stockholm'))`\n\n## Solution 2\n\nHere you can use this:\n\n`->where('date', '>=', date('Y-m-d'))`\n\n## Summery\n\nIt’s all About this issue. Hope all solution helped you a lot. Comment below Your thoughts and your queries. Also, Comment below which solution worked for you? Thank You." ]
https://www.convertunits.com/from/dyne-second/square+centimeter/to/pound-force+second/square+inch
[ "## ››Convert dyne-second/square centimeter to pound-force second/square inch\n\n dyne-second/square centimeter pound-force second/square inch\n\nHow many dyne-second/square centimeter in 1 pound-force second/square inch? The answer is 68947.5729.\nWe assume you are converting between dyne-second/square centimeter and pound-force second/square inch.\nYou can view more details on each measurement unit:\ndyne-second/square centimeter or pound-force second/square inch\nThe SI derived unit for dynamic viscosity is the pascal second.\n1 pascal second is equal to 10 dyne-second/square centimeter, or 0.00014503773779686 pound-force second/square inch.\nNote that rounding errors may occur, so always check the results.\nUse this page to learn how to convert between dyne seconds/square centimeter and pound-force second/square inch.\nType in your own numbers in the form to convert the units!\n\n## ››Want other units?\n\nYou can do the reverse unit conversion from pound-force second/square inch to dyne-second/square centimeter, or enter any two units below:\n\n## Enter two units to convert\n\n From: To:\n\n## ››Metric conversions and more\n\nConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3\", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!" ]
https://www.lyricsrocket.com/Classes/CalcIII/PartialDerivAppsIntro.aspx
[ "Paul's Online Notes\nHome / Calculus III / Applications of Partial Derivatives\nShow Mobile Notice Show All Notes Hide All Notes\nMobile Notice\nYou appear to be on a device with a \"narrow\" screen width (i.e. you are probably on a mobile phone). Due to the nature of the mathematics on this site it is best views in landscape mode. If your device is not in landscape mode many of the equations will run off the side of your device (should be able to scroll to see them) and some of the menu items will be cut off due to the narrow screen width.\n\n## Chapter 3 : Applications of Partial Derivatives\n\nIn this chapter we will take a look at a several applications of partial derivatives. Most of the applications will be extensions to applications to ordinary derivatives that we saw back in Calculus I. For instance, we will be looking at finding the absolute and relative extrema of a function and we will also be looking at optimization. Both (all three?) of these subjects were major applications back in Calculus I. They will, however, be a little more work here because we now have more than one variable.\n\nHere is a list of the topics in this chapter.\n\nTangent Planes and Linear Approximations – In this section formally define just what a tangent plane to a surface is and how we use partial derivatives to find the equations of tangent planes to surfaces that can be written as $$z=f(x,y)$$. We will also see how tangent planes can be thought of as a linear approximation to the surface at a given point.\n\nGradient Vector, Tangent Planes and Normal Lines – In this section discuss how the gradient vector can be used to find tangent planes to a much more general function than in the previous section. We will also define the normal line and discuss how the gradient vector can be used to find the equation of the normal line.\n\nRelative Minimums and Maximums – In this section we will define critical points for functions of two variables and discuss a method for determining if they are relative minimums, relative maximums or saddle points (i.e. neither a relative minimum or relative maximum).\n\nAbsolute Minimums and Maximums – In this section we will how to find the absolute extrema of a function of two variables when the independent variables are only allowed to come from a region that is bounded (i.e. no part of the region goes out to infinity) and closed (i.e. all of the points on the boundary are valid points that can be used in the process).\n\nLagrange Multipliers – In this section we’ll see discuss how to use the method of Lagrange Multipliers to find the absolute minimums and maximums of functions of two or three variables in which the independent variables are subject to one or more constraints. We also give a brief justification for how/why the method works." ]
https://file.scirp.org/Html/3-5301020_63261.htm
[ " Artin Transfer Patterns on Descendant Trees of Finite p-Groups\n\nVol.06 No.02(2016), Article ID:63261,39 pages\n10.4236/apm.2016.62008\n\nArtin Transfer Patterns on Descendant Trees of Finite p-Groups\n\nDaniel C. Mayer*\n\nNaglergasse 53, 8010 Graz, Austria", null, "", null, "", null, "", null, "Received 25 November 2015; accepted 25 January 2016; published 29 January 2016\n\nABSTRACT\n\nBased on a thorough theory of the Artin transfer homomorphism", null, "from a group G to the abelianization", null, "of a subgroup", null, "of finite index", null, ", and its connection with the permutation representation", null, "and the monomial representation", null, "of G, the Artin pattern", null, ", which consists of families", null, ", resp.", null, ", of transfer targets, resp. transfer kernels, is defined for the vertices", null, "of any descendant tree", null, "of finite p-groups. It is endowed with partial order relations", null, "and", null, ", which are compatible with the parent-descendant relation", null, "of the edges", null, "of the tree T. The partial order enables termination criteria for the p-group generation algorithm which can be used for searching and identifying a finite p-group G, whose Artin pattern", null, "is known completely or at least partially, by constructing the descendant tree with the abelianization of G as its root. An appendix summarizes details concerning induced homomorphisms between quotient groups, which play a crucial role in establishing the natural partial order on Artin patterns and explaining the stabilization, resp. polarization, of their components in descendant trees of finite p- groups.\n\nKeywords:\n\nArtin Transfer, Kernel Type, Target Type, Descendant Tree, Coclass Tree, Coclass Graph\n\n1. Introduction\n\nP 1.1. In the mathematical field of group theory, an Artin transfer is a certain homomorphism from an arbitrary finite or infinite group to the commutator quotient group of a subgroup of finite index.\n\nOriginally, such transfer mappings arose as group theoretic counterparts of class extension homomorphisms of abelian extensions of algebraic number fields by applying Artin’s reciprocity isomorphism ( , §4, Allgemeines Reziprozitätsgesetz, p. 361) to ideal class groups and analyzing the resulting homomorphisms between quotients of Galois groups ( , §2, p. 50).\n\nHowever, independently of number theoretic applications, a natural partial order on the kernels and targets of Artin transfers, has recently been found to be compatible with parent-child relations between finite p-groups, where p denotes a prime number. Such ancestor-descendant relations can be visualized conveniently in des- cendant trees ( , §4, pp. 163-164).\n\nConsequently, Artin transfers provide valuable information for classifying finite p-groups by kernel-target patterns and for searching and identifying particular groups in descendant trees by looking for patterns defined by kernels and targets of Artin transfers. These strategies of pattern recognition are useful not only in purely group theoretic context, but also, most importantly, for applications in algebraic number theory concerning Galois groups of higher p-class fields and Hilbert p-class field towers. The reason is that the unramified extensions of a base field contain information in the shape of capitulation patterns and class group structures, and these arithmetic invariants can be translated into group theoretic data on transfer kernels and targets by means of Artin’s reciprocity law of class field theory. 
The natural partial order on Artin patterns admits termination criteria for a search through a descendant tree with the aid of recursive executions of the p-group generation algorithm by Newman and O'Brien.

P 1.2. The organization of this article is as follows. The detailed theory of the transfer will be developed in §§2 and 3, followed by computational implementations in §4. It is our intention to present more than the least common multiple of the original papers by Schur and Artin and the relevant sections of the text books by Hall, Huppert, Gorenstein, Aschbacher, Doerk and Hawkes, Smith and Tabachnikova, and Isaacs.

However, we shall not touch upon fusion and focal subgroups, which form the primary goal of the mentioned authors, except Artin. Our focus will rather be on a sound foundation of Artin patterns, consisting of families of transfer kernels and targets, and their stabilization, resp. polarization, in descendant trees of finite p-groups. These phenomena arise from a natural partial order on Artin patterns which is compatible with ancestor-descendant relations in trees, and is established in its most general form in §§5 and 6.

Since our endeavour is to give the most general view of each partial result, we came to the conviction that categories, functors and natural transformations are the adequate tools for expressing the appropriate range of validity for the facts connected with the partial order relation on Artin patterns. Inspired by Bourbaki's method of exposition, the Appendix on induced homomorphisms, which is separated to avoid a disruption of the flow of exposition, goes down to the origins, exploiting set theoretic facts concerning direct images and inverse pre-images of mappings which are crucial for explaining the natural partial order of Artin patterns.

2. Transversals and Their Permutations

2.1. Transversals of a Subgroup

Let $G$ be a group and $H\le G$ be a subgroup of finite index $n=(G:H)$.

Definition 2.1. See also ( , p. 1013); ( , (1.5.1), p. 11); ( , Satz 2.5, p. 5).

1) A left transversal of $H$ in $G$ is an ordered system $(g_1,\dots,g_n)$ of representatives for the left cosets of $H$ in $G$ such that $G=\bigsqcup_{i=1}^{n}g_iH$ is a disjoint union.

2) Similarly, a right transversal of $H$ in $G$ is an ordered system $(d_1,\dots,d_n)$ of representatives for the right cosets of $H$ in $G$ such that $G=\bigsqcup_{i=1}^{n}Hd_i$ is a disjoint union.

Remark 2.1. For any transversal of $H$ in $G$, there exists a unique subscript $1\le i_0\le n$ such that $g_{i_0}\in H$, resp. $d_{i_0}\in H$. The element $g_{i_0}$, resp. $d_{i_0}$, which represents the principal coset (i.e., the subgroup $H$ itself) may be replaced by the neutral element 1.

Lemma 2.1. See also ( , p. 1015); ( , (1.5.2), p. 11); ( , Satz 2.6, p. 6).

1) If $G$ is non-abelian and $H$ is not a normal subgroup of $G$, then we can only say that the inverse elements $(g_1^{-1},\dots,g_n^{-1})$ of a left transversal $(g_1,\dots,g_n)$ form a right transversal of $H$ in $G$.

2) However, if $H\trianglelefteq G$ is a normal subgroup of $G$, then any left transversal is also a right transversal of $H$ in $G$.

Proof. 1) Since the mapping $G\to G$, $x\mapsto x^{-1}$, is an involution, that is a bijection which is its own inverse, we see that

$G=\bigsqcup_{i=1}^{n}g_iH$ implies $G=G^{-1}=\bigsqcup_{i=1}^{n}(g_iH)^{-1}=\bigsqcup_{i=1}^{n}Hg_i^{-1}$.

2) For a normal subgroup $H\trianglelefteq G$, we have $g_iH=Hg_i$ for each $1\le i\le n$. □

Let $\phi\colon G\to K$ be a group homomorphism and $(g_1,\dots,g_n)$ be a left transversal of a subgroup $H$ in $G$ with finite index $n=(G:H)$. We must check whether the image of this transversal under the homomorphism is a transversal again.

Proposition 2.1. The following two conditions are equivalent.

1) $(\phi(g_1),\dots,\phi(g_n))$ is a left transversal of the subgroup $\phi(H)$ in the image $\phi(G)$ with finite index $(\phi(G):\phi(H))=n$.

2) $\ker(\phi)\le H$.

We emphasize this important equivalence in a formula:

(2.1) $\left(\phi(g_i)\right)_{1\le i\le n}\text{ is a left transversal of }\phi(H)\text{ in }\phi(G)\text{ with }(\phi(G):\phi(H))=n\iff\ker(\phi)\le H.$

Proof.
By assumption, we have the disjoint left coset decomposition which comprises two\n\nstatements simultaneously.\n\nFirstly, the group is a union of cosets,\n\nand secondly, any two distinct cosets have an empty intersection, for.\n\nDue to the properties of the set mapping associated with, the homomorphism maps the union to another union\n\nbut weakens the equality for the intersection to a trivial inclusion\n\nTo show that the images of the cosets remain disjoint we need the property of the homo- morphism.\n\nSuppose that for some,\n\nthen we have for certain elements.\n\nMultiplying by from the left and by from the right, we obtain\n\nSince, this implies, resp., and thus. (This part of the proof is also covered by ( , Thm. X. 21, p. 340) and, in the context of normal subgroups instead of homomorphisms, by ( , Thm. 2.3.4, p. 29) and ( , Satz 3.10, p. 16))\n\nConversely, we use contraposition.\n\nIf the kernel of is not contained in the subgroup H, then there exists an element such that.\n\nBut then the homomorphism maps the disjoint cosets\n\nto equal cosets.\n\n2.2. Permutation Representation\n\nP 2.1. Suppose is a left transversal of a subgroup of finite index in a group G. A fixed element gives rise to a unique permutation of the left cosets of H in G by left multiplication such that\n\n(2.2)\n\nfor each.\n\nSimilarly, if is a right transversal of H in G, then a fixed element gives rise to a unique permutation of the right cosets of H in G by right multiplication such that\n\n(2.3)\n\nfor each.\n\nThe elements, resp., , of the subgroup H are called the monomials associated with x with respect to, resp..\n\nThe mapping, resp., is called the permutation representation of G in with respect to, resp..\n\nLemma 2.2. For the special right transversal associated to the left transversal, we have the following relations between the monomials and permutations corresponding to an element:\n\n(2.4)\n\nProof. For the right transversal, we have, for each.\n\nOn the other hand, for the left transversal, we have\n\n, for each.\n\nThis relation simultaneously shows that, for any, the permutation representations and the associated monomials are connected by\n\nfor each. □\n\n3. Artin Transfer\n\nLet G be a group and be a subgroup of finite index. Assume that, resp., is a left, resp. right, transversal of H in G with associated permutation representation, , resp., such that, resp., for.\n\nDefinition 3.3. See also ( , p. 1014); ( , §2, p. 50); ( , (14.2.2-4), p. 202); ( , p. 413); ( , p. 248); ( , p. 197); ( , Dfn.(17.1), p. 60); ( , p. 154); ( , p. 149); ( , p. 2).\n\nThe Artin transfer from G to the abelianization of H with respect to, resp., is defined by\n\n(3.1)\n\nresp.\n\n(3.2)\n\nfor.\n\nRemark 3.1. I.M. Isaacs , p. 149 calls the mapping, , resp.,\n\nthe pre-transfer from G to H. The pre-transfer can be composed with a homomorphism from H into\n\nan abelian group A to define a more general version of the transfer, , resp.\n\n, from G to A via, which occurs in the book by D. Gorenstein ( , p. 248). Taking the\n\nnatural epimorphism, , yields the Definition 3.3 of the Artin transfer in its original form by I. Schur ( , p. 1014) and by E. Artin ( , §2, p. 50), which has also been dubbed Verlagerung by H. Hasse ( , §27.4, pp. 170-171). Note that, in general, the pre-transfer is neither independent of the transversal nor a group homomorphism.\n\n3.1. Independence of the Transversal\n\nAssume that is another left transversal of H in G such that.\n\nProposition 3.1. See also ( , p. 1014); ( , Thm. 
3.1. Independence of the Transversal

Assume that is another left transversal of H in G such that.

Proposition 3.1. See also ( , p. 1014); ( , Thm. 14.2.1, p. 202); ( , Hilfssatz 1.5, p. 414); ( , Thm. 3.2, p. 246); ( , (37.1), p. 198); ( , Thm.(17.2), p. 61); ( , p. 154); ( , Thm. 5.1, p. 149); ( , Prop. 2, p. 2).

The Artin transfers with respect to and coincide,.

Proof. There exists a unique permutation such that, for all. Consequently, resp. with, for all. For a fixed element, there exists a unique permutation such that we have

,

for all. Therefore, the permutation representation of G with respect to is given by, resp., for. Furthermore, for the connection between the elements and, we obtain

for all. Finally, due to the commutativity of the quotient group and the fact that and are permutations, the Artin transfer turns out to be independent of the left transversal:

as prescribed in Definition 3.1, Equation (3.1). □

It is clear that a similar proof shows that the Artin transfer is independent of the choice between two different right transversals. It remains to show that the Artin transfer with respect to a right transversal coincides with the Artin transfer with respect to a left transversal.

For this purpose, we select the special right transversal associated to the left transversal, as explained in Lemma 2.1 and Lemma 2.2.

Proposition 3.2. The Artin transfers with respect to and coincide,.

Proof. Using (2.4) in Lemma 2.2 and the commutativity of, we consider the expression

The last step is justified by the fact that the Artin transfer is a homomorphism. This will be shown in the following subsection 3.2. □

3.2. Artin Transfers as Homomorphisms

Let be a left transversal of H in G.

Theorem 3.1. See also ( , p. 1014); ( , Thm. 14.2.1, p. 202); ( , Hauptsatz 1.4, p. 413); ( , Thm. 3.2, p. 246); ( , (37.2), p. 198); ( , Thm.(17.2), p. 61); ( , p. 155); ( , Thm. 5.2, p. 150); ( , Prop. 1, p. 2).

The Artin transfer, and the permutation representation are group homomorphisms:

(3.3)

Proof. Let be two elements with transfer images and. Since is abelian and is a permutation, we can change the order of the factors in the following product:

This relation simultaneously shows that the Artin transfer and the permutation representation are homomorphisms, since and, in a covariant way. □

3.3. Monomial Representation

Let, resp., be a left, resp. right, transversal of a subgroup H in a group G. Using the monomials, resp., associated with an element according to Equation (2.2), resp. (2.3), we define the following maps.

Definition 3.2. The mapping, respectively, is called the monomial representation of G in with respect to, resp..

P 3.1. It is illuminating to restate the homomorphism property of the Artin transfer in terms of the monomial representation. The images of the factors are given by and. In the proof of Theorem 3.1, the image of the product turned out to be, which is a very peculiar law of composition discussed in more detail in the sequel.

The law is reminiscent of the crossed homomorphisms in the first cohomology group of a G-module M, which have the property, for.

These peculiar structures can also be interpreted by endowing the cartesian product with a special law of composition known as the wreath product of the groups H and with respect to the set.

Definition 3.3. For, the wreath product of the associated monomials and permutations is given by

(3.4)
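The law of composition (3.4) can be checked numerically. The following Python sketch — again using the toy example G = S3 with a subgroup H of order 2, with the helper functions repeated so that the snippet runs on its own — builds the monomial representation x ↦ (monomials; π_x) and verifies both the homomorphism property asserted in Theorem 3.2 below and its faithfulness on this example.

```python
from itertools import permutations

def compose(p, q):                        # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = [tuple(p) for p in permutations(range(3))]        # S3
H = [(0, 1, 2), (1, 0, 2)]                            # <(0 1)>

transversal, seen = [], set()
for g in G:
    coset = frozenset(compose(g, h) for h in H)
    if coset not in seen:
        seen.add(coset)
        transversal.append(g)
n = len(transversal)

def coset_index(g):
    return next(i for i, gi in enumerate(transversal)
                if compose(inverse(gi), g) in H)

def monomial_rep(x):
    """x |-> (h_1, ..., h_n; pi_x) with h_i = g_{pi(i)}^{-1} x g_i in H."""
    pi = tuple(coset_index(compose(x, gi)) for gi in transversal)
    mono = tuple(compose(inverse(transversal[pi[i]]),
                         compose(x, transversal[i])) for i in range(n))
    return mono, pi

def wreath_mul(a, b):
    """Left-sided wreath law: component i of the product is f(rho(i)) * g(i)."""
    (f, pi), (g, rho) = a, b
    return (tuple(compose(f[rho[i]], g[i]) for i in range(n)),
            tuple(pi[rho[i]] for i in range(n)))

# Homomorphism property and faithfulness of the monomial representation:
assert all(monomial_rep(compose(x, y)) ==
           wreath_mul(monomial_rep(x), monomial_rep(y)) for x in G for y in G)
assert len({monomial_rep(x) for x in G}) == len(G)
print("monomial representation is a homomorphism and injective on S3")
```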
Theorem 3.2. See also ( , Thm. 14.1, p. 200); ( , Hauptsatz 1.4, p. 413).

This law of composition on causes the monomial representation also to be a homomorphism. In fact, it is a faithful representation, that is, an injective homomorphism, also called a monomorphism or embedding, in contrast to the permutation representation.

Proof. The homomorphism property has been shown above already. For a homomorphism to be injective, it suffices to show the triviality of its kernel. The neutral element of the group endowed with the wreath product is given by, where the last 1 means the identity permutation. If, for some x ∈ G, then and consequently, for all. Finally, an application of the inverse inner automorphism with yields, as required for injectivity.

The permutation representation cannot be injective if G is infinite or at least of order bigger than n!, the factorial of n. □

Remark 3.2. Formula (3.4) is an example of the left-sided variant of the wreath product on. However, we point out that the wreath product with respect to a right transversal of H in G appears in its right-sided variant

(3.5)

which implies that the permutation representation, is a homomorphism with respect to the opposite law of composition on, in a contravariant manner.

It can be shown that the left-sided and the right-sided variant of the wreath product lead to isomorphic group structures on.

A related viewpoint is taken by M. Hall ( , p. 200), who uses the multiplication of monomial matrices to describe the wreath product. Such a matrix can be represented in the form as the product of an invertible diagonal matrix over the group ring K[H], where K denotes a field, and the permutation matrix associated with the permutation. Multiplying two such monomial matrices yields a law of composition identical to the wreath product in the right-sided variant,

Whereas B. Huppert ( , p. 413) uses the monomial representation for defining the Artin transfer by composition with the unsigned determinant, we prefer giving the immediate Definition 3.1 and merely illustrating the homomorphism property of the Artin transfer with the aid of the monomial representation.

3.4. Composition of Artin Transfers

Let G be a group with nested subgroups such that the indices, and are finite.

Theorem 3.3. See also ( , Thm. 14.2.1, p. 202); ( , Satz 1.6, p. 415); ( , Lem.(17.3), p. 61); ( , Thm. 10.8, p. 301); ( , Prop. 3, p. 3).

Then the Artin transfer is the compositum of the induced transfer (in the sense of Corollary 7.1 or Corollary 7.3 in the Appendix) and the Artin transfer, i.e.,

(3.6)

This can be seen in the following manner.

Proof. If is a left transversal of H in G and is a left transversal of K in H, that is and, then is a disjoint left coset decomposition of G with respect to K. See also ( , Thm. 1.5.3, p. 12); ( , Satz 2.6, p. 6). Given two elements and, there exist unique permutations and, such that the associated monomials are given by

, for each, and, for each.

Then, using Corollary 7.3, we have

, and.

For each pair of subscripts and, we put and obtain

resp.. Thus, the image of x under the Artin transfer is given by

3.5. Wreath Product of Sm and Sn

P 3.2. Motivated by the proof of Theorem 3.3, we want to emphasize the structural peculiarity of the monomial representation

which corresponds to the compositum of Artin transfers, defining

for a permutation, and using the symbolic notation for all pairs of subscripts,.

The preceding proof has shown that. Therefore, the action of the permutation on the set is given by.
The action on the second component j depends on the first component i (via the permutation), whereas the action on the first component i is independent of the second component j. Therefore, the permutation can be identified with the multiplet, which will be written in twisted form in the sequel.

The permutations, which arise as second components of the monomial representation, are of a very special kind. They belong to the stabilizer of the natural equipartition of the set [1, n] × [1, m] into the n rows of the corresponding matrix (rectangular array). Using the peculiarities of the composition of Artin transfers in the previous section, we show that this stabilizer is isomorphic to the wreath product of the symmetric groups and with respect to, whose underlying set is endowed with the following law of composition in the left-sided variant.

(3.7)

for all.

This law is reminiscent of the chain rule for the Fréchet derivative in x ∈ E of the compositum of differentiable functions and between complete normed spaces.

The above considerations establish a third representation, the stabilizer representation,

of the group G in the wreath product, similar to the permutation representation and the monomial representation. As opposed to the latter, the stabilizer representation cannot be injective, in general. For instance, it is certainly not injective if G is infinite.

Formula (3.7) proves the following statement.

Theorem 3.4. The stabilizer representation of the group G in the wreath product of symmetric groups is a group homomorphism.

3.6. Cycle Decomposition

Let be a left transversal of a subgroup of finite index in a group G. Suppose the element gives rise to the permutation of the left cosets of H in G such that

, resp., for each.

Theorem 3.5. See also ( , §2, p. 50); ( , §27.4, p. 170); ( , Hilfssatz 1.7, p. 415); ( , Thm. 3.3, p. 249); ( , (37.3), p. 198); ( , p. 154); ( , Lem. 5.5, p. 153); ( , p. 5).

If the permutation has the decomposition into pairwise disjoint (and thus commuting) cycles of lengths, which is unique up to the ordering of the cycles, more explicitly, if

(3.8)

for, and, then the image of under the Artin transfer is given by

(3.9)

Proof. The reason for this fact is that we obtain another left transversal of H in G by putting for and, since

(3.10)

is a disjoint decomposition of G into left cosets of H.

Let us fix a value of. For, we have

However, for, we obtain

Consequently,

P 3.3. The cycle decomposition corresponds to a double coset decomposition of the group G modulo the cyclic group and modulo the subgroup H. It was actually this cycle decomposition form of the transfer homomorphism which was given by E. Artin in his original 1929 paper ( , §2, p. 50).
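The cycle decomposition formula (3.9) is well suited for computation, since it replaces the product of n monomials by a single conjugate power per cycle. The following sketch — the same hypothetical toy example G = S3, H = ⟨(0 1)⟩ as before, repeated so as to be self-contained — evaluates the transfer once directly from Definition 3.1 and once via (3.9), and confirms that the two values agree for every x ∈ G.

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def power(p, k):
    r = tuple(range(len(p)))
    for _ in range(k):
        r = compose(p, r)
    return r

G = [tuple(p) for p in permutations(range(3))]
H = [(0, 1, 2), (1, 0, 2)]
E = (0, 1, 2)

transversal, seen = [], set()
for g in G:
    coset = frozenset(compose(g, h) for h in H)
    if coset not in seen:
        seen.add(coset)
        transversal.append(g)
n = len(transversal)

def coset_index(g):
    return next(i for i, gi in enumerate(transversal)
                if compose(inverse(gi), g) in H)

def transfer_direct(x):                       # product of all n monomials
    pi = [coset_index(compose(x, gi)) for gi in transversal]
    t = E
    for i, gi in enumerate(transversal):
        t = compose(t, compose(inverse(transversal[pi[i]]), compose(x, gi)))
    return t

def transfer_by_cycles(x):                    # one factor g_j^{-1} x^{f_j} g_j per cycle
    pi = [coset_index(compose(x, gi)) for gi in transversal]
    t, visited = E, set()
    for i in range(n):
        if i in visited:
            continue
        j, f = i, 0                           # trace the cycle through i; f is its length
        while j not in visited:
            visited.add(j)
            j, f = pi[j], f + 1
        gj = transversal[i]                   # representative of this cycle
        t = compose(t, compose(inverse(gj), compose(power(x, f), gj)))
    return t

assert all(transfer_direct(x) == transfer_by_cycles(x) for x in G)
print("formula (3.9) confirmed on S3")
```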
3.7. Transfer to a Normal Subgroup

P 3.4. Now let be a normal subgroup of finite index in a group G. Then we have, for all, and there exists the quotient group of order n. For an element, we let denote the order of the coset in, and we let be a left transversal of the subgroup in G, where.

Then the image of under the Artin transfer is given by

(3.11)

Proof. is a cyclic subgroup of order f in, and a left transversal of the subgroup in G, where and is the corresponding disjoint left coset decomposition, can be refined to a left transversal with disjoint left coset decomposition

(3.12)

of H in G. Hence, the formula for the image of x under the Artin transfer in the previous section takes the particular shape

with exponent f independent of j. □

Corollary 3.1. See also ( , Lem. 10.6, p. 300) for a special case.

In particular, the inner transfer of an element is given as a symbolic power

(3.13)

with the trace element

(3.14)

of H in G as symbolic exponent.

The other extreme is the outer transfer of an element which generates G modulo H, that is. It is simply an nth power

(3.15)

Proof. The inner transfer of an element, whose coset is the principal set in of order, is given as the symbolic power

with the trace element

of H in G as symbolic exponent.

The outer transfer of an element which generates G modulo H, that is, whose coset is a generator of with order, is given as the nth power

.

P 3.5. Transfers to normal subgroups will be the most important cases in the sequel, since the central concept of this article, the Artin pattern, which endows descendant trees with additional structure, consists of targets and kernels (§5) of Artin transfers from a group G to intermediate groups between G and its commutator subgroup. For these intermediate groups we have the following lemma.

Lemma 3.1. All subgroups of a group G which contain the commutator subgroup are normal subgroups.

Proof. Let. If H were not a normal subgroup of G, then we would have for some element. This would imply the existence of elements and such that, and consequently the commutator would be an element in, in contradiction to. □

Explicit implementations of Artin transfers in the simplest situations are presented in the following section.

4. Computational Implementation

4.1. Abelianization of Type (p, p)

P 4.1. Let G be a pro-p group with abelianization of elementary abelian type. Then G has maximal subgroups of index. In this particular case, the Frattini subgroup, which is defined as the intersection of all maximal subgroups, coincides with the commutator subgroup, since the latter contains all pth powers, and thus we have.

For each, let be the Artin transfer homomorphism from G to the abelianization of. According to Burnside’s basis theorem, the group G has generator rank and can therefore be generated as by two elements such that. For each of the normal subgroups, we need a generator with respect to, and a generator of a transversal such that and.

A convenient selection is given by

(4.1)

Then, for each, it is possible to implement the inner transfer by

(4.2)

according to Equation (3.13) of Corollary 3.1, which can also be expressed by a product of two pth powers,

(4.3)

and to implement the outer transfer as a complete pth power by

(4.4)

according to Equation (3.15) of Corollary 3.1. The reason is that and in the quotient group.

It should be pointed out that the complete specification of the Artin transfers also requires explicit knowledge of the derived subgroups. Since is a normal subgroup of index p in, a certain general reduction is possible by ( , Lem. 2.1, p. 52), but an explicit pro-p presentation of G must be known for determining generators of, whence

(4.5)
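As a concrete check of these formulas, consider the extraspecial group of order 27 and exponent 3, realized below as the Heisenberg group of upper unitriangular 3 × 3 matrices over the field with three elements. This brute-force Python sketch is independent of the symbolic implementation above and serves only as an illustration. Each maximal subgroup is abelian of type (3, 3), so the transfer targets need no further abelianization, and the four transfer kernels can be determined by exhaustive enumeration; the run shows that all four kernels are total, i.e., equal to G, anticipating Example 5.1 below.

```python
P = 3
G = [(a, b, c) for a in range(P) for b in range(P) for c in range(P)]

def mul(u, v):        # Heisenberg multiplication: (a,b,c)(d,e,f) = (a+d, b+e, c+f+ae)
    return ((u[0] + v[0]) % P, (u[1] + v[1]) % P, (u[2] + v[2] + u[0] * v[1]) % P)

def inv(u):
    a, b, c = u
    return ((-a) % P, (-b) % P, (a * b - c) % P)

E = (0, 0, 0)
x, y = (1, 0, 0), (0, 1, 0)    # generators; the centre = commutator subgroup = {(0,0,*)}

def maximal_subgroup(g):
    """Pre-image in G of the line through the image of g in G/G' of type (3,3)."""
    line = {((t * g[0]) % P, (t * g[1]) % P) for t in range(P)}
    return {u for u in G if (u[0], u[1]) in line}

def transfer(H, w):
    """Artin transfer of w to the abelian maximal subgroup H (Definition 3.1)."""
    transversal, seen = [], set()
    for u in G:
        coset = frozenset(mul(u, h) for h in H)
        if coset not in seen:
            seen.add(coset)
            transversal.append(u)
    t = E
    for gi in transversal:
        gj = next(s for s in transversal if mul(inv(s), mul(w, gi)) in H)
        t = mul(t, mul(inv(gj), mul(w, gi)))     # monomial g_j^{-1} w g_i lies in H
    return t

for g in (y, x, mul(x, y), mul(x, mul(y, y))):   # the four maximal subgroups
    H = maximal_subgroup(g)
    kernel = {w for w in G if transfer(H, w) == E}
    print(f"|H| = {len(H)}, kernel of transfer has order {len(kernel)}")
# all four kernels have order 27 = |G|: every transfer kernel is total
```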
4.2. Abelianization of Type (p2, p)

P 4.2. Let G be a pro-p group with abelianization of non-elementary abelian type. Then G has maximal subgroups of index and subgroups of index.

Figure 1 visualizes this smallest non-trivial example of a multi-layered abelianization ( , Dfn. 3.1-3, p. 288).

For each, let, resp., be the Artin transfer homomorphism from G to the abelianization of, resp.. Burnside’s basis theorem asserts that the group G has generator rank and can therefore be generated as by two elements such that

Figure 1. Layers of subgroups for.

.

We begin by considering the first layer of subgroups. For each of the normal subgroups, we select a generator

(4.6)

These are the cases where the factor group is cyclic of order. However, for the distinguished maximal subgroup, for which the factor group is bicyclic of type, we need two generators

(4.7)

Further, a generator of a transversal must be given such that, for each. It is convenient to define

(4.8)

Then, for each, we have the inner transfer

(4.9)

which equals, and the outer transfer

(4.10)

since and.

Now we continue by considering the second layer of subgroups. For each of the normal subgroups, we select a generator

(4.11)

such that. Among these subgroups, the Frattini subgroup is particularly distinguished. A uniform way of defining generators of a transversal such that, is to set

(4.12)

Since, but on the other hand and, for, with the single exception that, we obtain the following expressions for the inner transfer

(4.13)

and for the outer transfer

(4.14)

exceptionally

(4.15)

and

(4.16)

for. Again, it should be emphasized that the structure of the derived subgroups and must be known explicitly to specify the action of the Artin transfers completely.

5. Transfer Kernels and Targets

P 5.1. After our thorough treatment of the general theory of Artin transfers in §§ 2 and 3, and their computational implementation for some simple cases in § 4, we are now in the position to introduce Artin transfer patterns, which form the central concept of this article. They provide an incredibly powerful tool for classifying finite and infinite pro-p groups and for identifying a finite p-group G with sufficiently many assigned components of its Artin pattern by the strategy of pattern recognition. This is done in a search through the descendant tree with root by means of recursive applications of the p-group generation algorithm by Newman and O’Brien .

An Artin transfer pattern consists of two families of transfer targets, resp. kernels, which are also called multiplets, whereas their individual components are referred to as singulets.

5.1. Singulets of Transfer Targets

Theorem 5.1. Let G and T be groups. Suppose that is the image of G under a homomorphism, and is the image of an arbitrary subgroup. Then the following claims hold without any further necessary assumptions.

1) The commutator subgroup of V is the image of the commutator subgroup of U, that is

(5.1)

2) The restriction is an epimorphism which induces a unique epimorphism

(5.2)

Thus, the abelianization of V,

(5.3)

is an epimorphic image of the abelianization of U, namely the quotient of by the kernel of, which is given by

(5.4)

3) Moreover, the map is an isomorphism, and the quotients are isomorphic, if and only if

(5.5)

See Figure 2 for a visualization of this situation.

Proof. The statements can be seen in the following manner. The image of the commutator subgroup is given by

The homomorphism can be restricted to an epimorphism. According to Theorem 7.1, in particular, by the Formulas (7.5) and (7.4) in the appendix, the condition implies the existence of a uniquely determined epimorphism such that.
The Isomorphism Theorem in Formula (7.7) in the appendix shows that. Furthermore, by the Formulas (7.4) and (7.1), the kernel of is given explicitly by

Thus, is an isomorphism if and only if. □

P 5.2. Functor of derived quotients. In analogy to section §7.6 in the appendix, a covariant functor can be used to map a morphism of one category to an induced morphism of another category.

In the present situation, we denote by the category of groups and we define the domain of the functor F as the following category. The objects of the category are pairs consisting of a group G and a subgroup,

(5.6)

For two objects, the set of morphisms consists of epimorphisms such that and, briefly written as arrows,

(5.7)

The functor from this category to the category of abelian groups maps a pair to the commutator quotient group of the subgroup, and it maps a morphism to the induced epimorphism of the restriction,

(5.8)

Existence and uniqueness of have been proved in Theorem 5.1 under the assumption that, which is satisfied according to the definition of the arrow and automatically implies.

Figure 2. Induced homomorphism of derived quotients.

Definition 5.1. Due to the results in Theorem 5.1, it makes sense to define a pre-order of transfer targets on the image of the functor F in the object class of the category of abelian groups in the following manner.

For two objects, a morphism, and the images, and,

let (non-strict) precedence be defined by

(5.9)

and let equality be defined by

(5.10)

if the induced epimorphism is an isomorphism.

Corollary 5.1. If both components of the pairs are restricted to Hopfian groups, then the pre-order of transfer targets is actually a partial order.

Proof. We use the functorial properties of the functor F. The reflexivity of the partial order follows from the functorial identity in Formula (7.14), and the transitivity is a consequence of the functorial compositum in Formula (7.15), given in the appendix. The antisymmetry might be a problem for infinite groups, since it is known that there exist so-called non-Hopfian groups. However, for finite groups, and more generally for Hopfian groups, it is due to the implication. □

5.2. Singulets of Transfer Kernels

Suppose that G and T are groups, is the image of G under a homomorphism, and is the image of a subgroup of finite index. Let be the Artin transfer from G to.

Theorem 5.2. If, then the image of a left transversal of U in G is a left transversal of V in H, the index remains the same and is therefore finite, and the Artin transfer from H to exists.

1) The following connections exist between the two Artin transfers: the required condition for the composita of mappings in the commutative diagram in Figure 3,

(5.11)

and, consequently, the inclusion of the kernels,

(5.12)

2) A sufficient (but not necessary) condition for the equality of the kernels is given by

(5.13)

Figure 3. Epimorphism and Artin transfer.

See Figure 3 for a visualization of this scenario.

Proof. The truth of these statements can be justified in the following way. The first part has been proved in Proposition 2.1 already: Let be a left transversal of U in G. Then is a disjoint union but the union is not necessarily disjoint. For, we have for some element. However, if the condition is satisfied, then we are able to conclude that, and thus.

Let be the epimorphism obtained in the manner indicated in the proof of Theorem 5.1 and Formula (5.2).
For the image of under the Artin transfer, we obtain

Since, the right hand side equals, provided that is a left transversal of V in H, which is correct when. This shows that the diagram in Figure 3 is commutative, that is,. It also yields the connection between the permutations and the monomials for all. As a consequence, we obtain the inclusion, if. Finally, if, then the previous section has shown that is an isomorphism. Using the inverse isomorphism, we get, which proves the equation. More explicitly, we have the following chain of equivalences and implications:

Conversely, only implies. Therefore, we certainly have if, which is, however, not necessary. □

P 5.3. Artin transfers as natural transformations. Artin transfers can be viewed as components of a natural transformation T between two functors and F from the following category to the usual category of groups.

The objects of the category are pairs consisting of a group G and a subgroup of finite index,

(5.14)

For two objects, the set of morphisms consists of epimorphisms satisfying, , and the additional condition for their kernels, briefly written as arrows,

(5.15)

The forgetful functor from this category to the category of groups maps a pair to its first component, and it maps a morphism to the underlying epimorphism.

(5.16)

The functor from to the category of groups maps a pair to the commutator quotient group of the subgroup U, and it maps a morphism to the induced epimorphism of the restriction. Note that we must abstain here from letting F map into the subcategory of abelian groups.

(5.17)

The system T of all Artin transfers fulfils the requirements for a natural transformation between these two functors, since we have

(5.18)

for every morphism of the category.

Definition 5.2. Due to the results in Theorem 5.2, it makes sense to define a pre-order of transfer kernels on the kernels of the components of the natural transformation T in the object class of the category of groups in the following manner.

For two objects, a morphism, and the images, and,

let (non-strict) precedence be defined by

(5.19)

and let equality be defined by

(5.20)

if the induced epimorphism is an isomorphism.

Corollary 5.2. If both components of the pairs are restricted to Hopfian groups, then the pre-order of transfer kernels is actually a partial order.

Proof. Similarly as in the proof of Corollary 5.1, we use the properties of the functor F. The reflexivity is due to the functorial identity in Formula (7.14). The transitivity is due to the functorial compositum in Formula (7.15), where we have to observe the relations, , and Formula (7.1) in the appendix for verifying the kernel relation

The antisymmetry is certainly satisfied for finite groups, and more generally for Hopfian groups. □

5.3. Multiplets of Transfer Targets and Kernels

Instead of viewing various pairs which share the same first component G as distinct objects in the categories, resp., which we used for describing singulets of transfer targets, resp. kernels, we now consider a collective accumulation of singulets in multiplets. For this purpose, we shall define a new category of families, which generalizes the category, rather than the category. However, we have to prepare this definition with a criterion for the compatibility of a system of subgroups with its image under a homomorphism.

Proposition 5.1. See also ( , Thm. 2.3.4, p. 29); ( , Satz 3.10, p. 16); ( , Thm. 2.4, p.
6); ( , Thm. X.21, p. 340).

For an epimorphism of groups, the associated set mappings

(5.21)

are inverse bijections between the following systems of subgroups

(5.22)

Proof. The fourth and fifth statement of Lemma 7.1 in the appendix show that usually the associated set mappings and of a homomorphism are not inverse bijections between systems of subgroups of G and H. However, if we replace the homomorphism by an epimorphism with, then the Formula (7.2) yields the first desired equality

Guided by the property of all pre-images of, we define a restricted system of subgroups of the domain G,

and, according to Formula (7.1), we consequently obtain the second required equality

which yields the crucial pair of inverse set bijections

P 5.4. After this preparation, we are able to specify the new category. The objects of the category are pairs consisting of a group G and the family of all subgroups with finite index,

(5.23)

where I denotes a suitable indexing set. Note that G itself is one of the subgroups.

The morphisms of the new category are subject to more restrictive conditions, which concern entire families of subgroups instead of just a single subgroup.

For two objects, the set of morphisms consists of epimorphisms satisfying, the image conditions, and the kernel conditions, which imply the pre-image conditions, for all, briefly written as arrows,

(5.24)

Note that, in view of Proposition 5.1, we can always use the same indexing set I for the domain and for the codomain of morphisms, provided they satisfy the required kernel condition.

Now we come to the essential definition of Artin transfer patterns.

Definition 5.3. Let be an object of the category.

The transfer target type (TTT) of G is the family

(5.25)

The transfer kernel type (TKT) of G is the family

(5.26)

The complete Artin pattern of G is the pair

(5.27)

P 5.5. The natural partial order on TTTs and TKTs is reduced to the partial order on the components, according to the Definitions 5.1 and 5.2.

Definition 5.4. Let be two objects of the category, where all members of the families and are Hopfian groups.

Then (non-strict) precedence of TTTs is defined by

(5.28)

and equality of TTTs is defined by

(5.29)

(Non-strict) precedence of TKTs is defined by

(5.30)

and equality of TKTs is defined by

(5.31)

We partition the indexing set I in two disjoint components, according to whether components of the Artin pattern remain fixed or change under an epimorphism.

Definition 5.5. Let be two objects of the category, and let be a morphism between these objects.

The stable part and the polarized part of the Artin pattern of G with respect to are defined by

(5.32)

Accordingly, we have

(5.33)

Note that the precedence of polarized targets is strict, as opposed to polarized kernels.

5.4. The Artin Pattern on a Descendant Tree

P 5.6. Before we specialize to the usual kinds of descendant trees of finite p-groups ( , §4, pp.
163-164), we consider an abstract form of a rooted directed tree, which is characterized by two relations.

Firstly, a basic relation between parent and child (also called immediate descendant), corresponding to a directed edge of the tree, for any vertex which is different from the root R of the tree.

Secondly, an induced non-strict partial order relation, for some integer, between ancestor and descendant, corresponding to a path of directed edges, for an arbitrary vertex, that is, the ancestor is an iterated parent of the descendant. Note that only an empty path with starts from the root R of the tree, which has no parent.

Just a brief justification of the partial order: Reflexivity is due to the relation. Transitivity follows from the rule. Antisymmetry is a consequence of the absence of cycles, that is, implies and thus.

P 5.7. The category of a tree. Now let be a rooted directed tree whose vertices are groups. Then we define, the category associated with, as a subcategory of the category which was introduced in the Formulas (5.23) and (5.24).

The objects of the category are those pairs in the object class of the category whose first component is a vertex of the tree,

(5.34)

The morphisms of the category are selected along the paths of the tree only.

For two objects, the set of morphisms is either empty or consists of a single element only,

(5.35)

In the case of an ancestor-descendant relation between H and G, the specification of the supercategory enforces the following constraints on the unique morphism: the image relations and the kernel relations, for all.

P 5.8. At this point, we must become more concrete. In the descendant tree of a group R, which is the root of the tree, the formal parent operator gets a second meaning as a natural projection, , from the child G onto its parent, which is always the quotient of G by a suitable normal subgroup. To be precise, the epimorphism with kernel is actually dependent on its domain G. Therefore, the formal power is only a convenient abbreviation for the compositum.

As described in , there are several possible selections of the normal subgroup N in the parent definition. Here, we would like to emphasize the following three choices of characteristic subgroups N of the child G. If p denotes a prime number and is the descendant tree of a finite p-group R, then it is usual to take for

1) either the last non-trivial member of the lower central series of G

2) or the last non-trivial member of the lower exponent-p central series of G

3) or the last non-trivial member of the derived series of G,

where denotes the nilpotency class, the lower exponent p-class, and the derived length of G, respectively.

Note that every descendant tree of finite p-groups is a subtree of a descendant tree with abelian root. Therefore, it is no loss of generality to restrict our attention to descendant trees with abelian roots.

Theorem 5.3. A uniform warranty for the comparability of the Artin patterns of all vertices G of a descendant tree of finite p-groups with abelian root R, in the sense of the natural partial order, is given by the following restriction of the family of subgroups in the corresponding object of the category. The restriction depends on the definition of a parent in the descendant tree.

1) for all, when with.

2) for all, when with.

3) for all, when with.

Proof. If parents are defined by with, then we have and for any. The largest of these kernels arises for.
Therefore, uniform comparability of Artin patterns is warranted by the restriction for all.

The parent definition with implies and for any. The largest of these kernels arises for. Consequently, a uniform comparability of Artin patterns is guaranteed by the restriction for all.

Finally, in the case of the parent definition with, we have and for any. The largest of these kernels arises for. Consequently, a uniform comparability of Artin patterns is guaranteed by the condition for all.

P 5.9. Note that the first and third condition coincide, since both, and, denote the commutator subgroup. So the family is restricted to the normal subgroups which contain, as announced in the paragraph preceding Lemma 3.1.

The second condition restricts the family to the maximal subgroups of G, together with the group G itself and the Frattini subgroup.

P 5.10. Since we shall mainly be concerned with the first and third parent definition for descendant trees, that is, either with respect to the lower central series or to the derived series, the comparability condition in Theorem 5.3 suggests the definition of a category whose objects are subject to more severe conditions than those in Formula (5.23),

(5.36)

but whose morphisms are defined exactly as in Formula (5.24). The new viewpoint leads to a corresponding modification of Artin transfer patterns.

Definition 5.6. Let be an object of the category.

The Artin pattern, more precisely the restricted Artin pattern, of G is the pair

(5.37)

whose components, the TTT and the TKT of G, are defined as in the Formulas (5.25) and (5.26), but now with respect to the smaller system of subgroups of G.

P 5.11. The following Main Theorem shows that any non-metabelian group G with derived length and finite abelianization shares its Artin transfer pattern, in the restricted sense, with its metabelianization, that is, the second derived quotient.

Theorem 5.4. (Main Theorem.) Let G be a (non-metabelian) group with finite abelianization, and denote by, , the terms of the derived series of G, that is, and for, in particular, and, then

1) every subgroup which contains the commutator subgroup is a normal subgroup of finite index,

2) for each, there is a chain of normal subgroups

(5.38)

3) for each, the targets of the transfers and are equal in the sense of the natural order,

(5.39)

4) for each, the kernels of the transfers and are equal in the sense of the natural order,

(5.40)

Proof. We use the natural epimorphism,.

1) If U is an intermediate group, then is a normal subgroup of G, according to Lemma 3.1. The assumption implies that is a divisor of the integer. Therefore, the Artin transfer exists.

2) Firstly, implies. Since is characteristic in U, we also have. Similarly, is characteristic in and thus normal in. Finally, we obtain.

3) The mapping, , is an epimorphism with kernel. Consequently, the isomorphism theorem in Remark 7.3 of the appendix yields the isomorphism.

4) Firstly, the restriction is an epimorphism which induces an isomorphism, since and, according to Theorem 5.1. Secondly, according to Theorem 5.2, the condition implies that the index is finite, the Artin transfer exists, the composite mappings commute, and, since we even have, the transfer kernels satisfy the relation

.
In the sense of the natural partial order on transfer kernels this means equality, since and thus, similarly as in Proposition 5.1, the map establishes a set bijection between the systems of subgroups and, where.

Remark 5.1. At this point it is adequate to emphasize how similar concepts in previous publications are related to the concept of Artin patterns. The restricted Artin pattern in Definition 5.6 was essentially introduced in ( , Dfn. 1.1, p. 403), and for a special case already earlier in ( , §1, p. 417). The name Artin pattern appears in ( , Dfn. 3.1, p. 747) for the first time. The complete Artin pattern in Definition 5.3 is new in the present article, but we should point out that it includes the iterated IPADs (index-p abelianization data) in ( , Dfn. 3.5, p. 289) and the iterated IPODs (index-p obstruction data) in ( , Dfn. 4.5).

In a second remark, we emphasize the importance of the preceding Main Theorem for arithmetical applications.

Remark 5.2. In algebraic number theory, Theorem 5.4 has striking consequences for the determination of the length of the p-class tower, that is, the maximal unramified pro-p extension, of an algebraic number field K with respect to a given prime number p. It shows the impossibility of deciding, exclusively with the aid of the restricted Artin pattern, which of several assigned candidates G with distinct derived lengths is the actual p-class tower group. (In contrast, can always be recognized with.)

This is the point where the complete Artin pattern enters the stage. Most recent investigations by means of iterated IPADs of 2nd order, whose components are contained in, enabled decisions between in .

Another successful method is to employ cohomological results by I.R. Shafarevich on the relation rank for selecting among several candidates G for the p-class tower group, in dependence on the torsion-free unit rank of the base field K, for instance in .

Important examples for the concepts in §5 are provided in the following subsections.

5.5. Abelianization of Type (p,p)

Let G be a p-group with abelianization of elementary abelian type. Then G has maximal subgroups of index. For each, let be the Artin transfer homomorphism from G to the abelianization of.

Definition 5.7. The family of normal subgroups is called the transfer kernel type (TKT) of G with respect to.

Remark 5.3. For brevity, the TKT is identified with the multiplet, whose integer components are given by

(5.41)

Here, we take into consideration that each transfer kernel must contain the commutator subgroup of G, since the transfer target is abelian. However, the minimal case cannot occur, according to Hilbert’s Theorem 94.

A renumeration of the maximal subgroups and of the transfers by means of a permutation gives rise to a new TKT with respect to, identified with, where

It is adequate to view the TKTs as equivalent. Since we have

,

the relation between and is given by. Therefore, is another representative of the orbit of under the operation of the symmetric group on the set of all mappings from to, where the extension of the permutation is defined by, and we formally put,.

Definition 5.8. The orbit of any representative is an invariant of the p-group G and is called its transfer kernel type, briefly TKT.
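The orbit of a TKT multiplet under the extended symmetric group action, and hence a canonical orbit representative, can be computed mechanically. The following Python sketch implements the renumeration rule κ′ = π̃⁻¹ ∘ κ ∘ π from Remark 5.3, with the extension π̃(0) = 0; the multiplet (1, 1, 2, 3) is a hypothetical example chosen only for illustration.

```python
from itertools import permutations

def renumerate(kappa, pi):
    """kappa' = pi~^{-1} o kappa o pi, where pi~ extends pi in S_n by pi~(0) = 0.

    kappa is a tuple of length n with entries in 0..n (0 = total kernel G);
    pi is the tuple (pi(1), ..., pi(n)) of images of 1..n.
    """
    n = len(kappa)
    inv = {0: 0}
    for i in range(1, n + 1):
        inv[pi[i - 1]] = i
    return tuple(inv[kappa[pi[i - 1] - 1]] for i in range(1, n + 1))

def orbit(kappa):
    n = len(kappa)
    return {renumerate(kappa, pi) for pi in permutations(range(1, n + 1))}

kappa = (1, 1, 2, 3)          # hypothetical TKT for p = 3, i.e. n = p + 1 = 4
orb = orbit(kappa)
print(len(orb), "multiplets in the orbit; canonical representative:", min(orb))
```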
Remark 5.4. This definition of goes back to the origins of the capitulation theory and was introduced by Scholz and Taussky for p = 3 in 1934 . Several other authors used this original definition and investigated capitulation problems further. In historical order, Chang in 1977 , Chang and Foote in 1980 , Heider and Schmithals in 1982 , Brink in 1984 , Brink and Gold in 1987 , Nebelung in 1989 , and ourselves in 1991 and in 2012 .

In the brief form of the TKT, the natural order is expressed by for.

Let denote the counter of total transfer kernels, which is an invariant of the group G. In 1980, Chang and Foote proved that, for any odd prime p and for any integer, there exist metabelian p-groups G having abelianization of type such that. However, for, there do not exist non-abelian 2-groups G with, such that. Such groups must be metabelian of maximal class. Only the elementary abelian 2-group has.

In the following concrete examples for the counters, and also in the remainder of this article, we use identifiers of finite p-groups in the SmallGroups Library by Besche, Eick and O’Brien .

Example 5.1. For, we have the following TKTs:

for the extra special group of exponent 9 with,

for the two groups with,

for the group with,

for the group with,

for the extra special group of exponent 3 with.

5.6. Abelianization of Type (p2, p)

Let G be a p-group with abelianization of non-elementary abelian type. Then G possesses maximal subgroups of index, and subgroups of index. See Figure 1.

P 5.12. Convention. Suppose that is the distinguished maximal subgroup which is the product of all subgroups of index, and is the distinguished subgroup of index which is the intersection of all maximal subgroups, that is, the Frattini subgroup of G.

P 5.13. First layer. For each, let be the Artin transfer homomorphism from G to the abelianization of.

Definition 5.9. The family is called the first layer transfer kernel type of G with respect to and, and is identified with, where

(5.42)

Remark 5.5. Here, we observe that each first layer transfer kernel is of exponent p with respect to and consequently cannot coincide with for any, since is cyclic of order, whereas is bicyclic of type.

P 5.14. Second layer. For each, let be the Artin transfer homomorphism from G to the abelianization of.

Definition 5.10. The family is called the second layer transfer kernel type of G with respect to and, and is identified with, where

(5.43)

P 5.15. Transfer kernel type.

Combining the information on the two layers, we obtain the (complete) transfer kernel type of the p-group G with respect to and.

Remark 5.6. The distinguished subgroups and are unique invariants of G and should not be renumerated. However, independent renumerations of the remaining maximal subgroups and the transfers by means of a permutation, and of the remaining subgroups of index and the transfers by means of a permutation, give rise to new TKTs with respect to and, identified with, where

and with respect to and, identified with, where

It is adequate to view the TKTs and as equivalent. Since we have

resp.

the relations between and, resp. and, are given by, resp.. Therefore, is another representative of the orbit of under the operation of the product of two symmetric groups on the set of all pairs of mappings from to, where the extensions and of a permutation are defined by and, and formally, , , and.

Definition 5.11. The orbit of any representative is an invariant of the p-group G and is called its transfer kernel type, briefly TKT.
P 5.16. Connections between layers.

The Artin transfer from G to a subgroup of index is the compositum of the induced transfer from to (in the sense of Corollary 7.1 or Corollary 7.3 in the appendix) and the Artin transfer from G to, for any intermediate subgroup of index (). There occur two situations:

・ For the subgroups only the distinguished maximal subgroup is an intermediate subgroup.

・ For the Frattini subgroup all maximal subgroups are intermediate subgroups.

This causes restrictions for the transfer kernel type of the second layer, since, and thus

・ , for all,

・ but even.

Furthermore, when with and, an element () which is of order with respect to, can belong to the transfer kernel only if its pth power is contained in, for all intermediate subgroups, and thus:

・ , for certain, enforces the first layer TKT singulet,

・ but, for some, even specifies the complete first layer TKT multiplet, that is, for all.

6. Stabilization and Polarization in Descendant Trees

P 6.1. Theorem 5.4 has proved that it suffices to get an overview of the restricted Artin patterns of metabelian groups G with, since groups G of derived length will certainly reveal exactly the same patterns as their metabelianizations.

In this section, we present the complete theory of stabilization and polarization of the restricted Artin patterns for an extensive exemplary case, namely for all metabelian 3-groups G with abelianization of type (3,3).

Since the bottom layer, resp. the top layer, of the restricted Artin pattern will be considered in Theorem 6.4 on the commutator subgroup, resp. Theorem 6.5 on the entire group G, we first focus on the intermediate layer of the maximal subgroups.

6.1. 3-Groups of Non-Maximal Class

P 6.2. We begin with groups G of non-maximal class. Denoting by m the index of nilpotency of G, we let with be the centralizers of two-step factor groups of the lower central series, that is, the biggest subgroups of G with the property. They form an ascending chain of characteristic subgroups of G, , which contain the commutator subgroup. coincides with G if and only if. We characterize the smallest two-step centralizer different from the commutator group by an isomorphism invariant . According to Nebelung , we can assume that G has order, class, and coclass, where. Let generators of be selected such that, , if, and. Suppose that a fixed ordering of the four maximal subgroups of G is defined by with, , , and. Let the main commutator of G be declared by and higher commutators recursively by, for. Starting with the powers, , let, for, and put,.

Theorem 6.1. (Non-maximal class.) Let G be a metabelian 3-group of nilpotency class and coclass with abelianization. With respect to the projection onto the parent, the restricted Artin pattern of G reveals

1) a bipolarization and partial stabilization, if G is an interface group with bicyclic last lower central equal to the bicyclic first upper central, more precisely

(6.1)

2) a unipolarization and partial stabilization, if G is a core group with cyclic last lower central and bicyclic first upper central, more precisely

(6.2)

3) a nilpolarization and total stabilization, if G is a core group with cyclic last lower central equal to the cyclic first upper central, more precisely

(6.3)

Proof. Theorems 5.1 and 5.2 tell us that for detecting whether stabilization occurs from parent to child G, we have to compare the projection kernel with the commutator subgroups of the four maximal normal subgroups,.
According to ( , Cor.3.2, p. 480) these derived subgroups are given by\n\n(6.4)\n\nprovided the generators of G are selected as indicated above. On the other hand, the projection kernel is given by\n\n(6.5)\n\nCombining this information with, we obtain the following results.\n\nfor if, independently of.\n\nfor if, which implies.\n\nbut if, , which also implies.\n\nfor if, , which implies .\n\nTaken together, these results justify all claims. □\n\nExample 6.1. Generally, the parent of an interface group G ( , Dfn.3.3, p. 430) with bicyclic last non-trivial lower central is a vertex of a different coclass graph with lower coclass. In the case of a bipolarization ( , Dfn.3.2, p. 430), which is now also characterized via the Artin pattern by Formula (6.1) for, we can express the membership in coclass graphs by the implication: If with, then. A typical example is the group of coclass 3 with parent of coclass 2 (again with identifiers in the SmallGroups database ), where\n\nand\n\nIn contrast, a core group G ( , Dfn.3.3, p. 430) with cyclic last non-trivial lower central and its parent are vertices of the same coclass graph. In dependence on the p-rank of its centre, the Artin pattern either shows a unipolarization as in Formula (6.2), if the centre is bicyclic, or a total stabilization as in Formula (6.3), if the centre is cyclic. Typical examples are the group with parent, both of coclass 2, where the Artin pattern shows a unipolarization\n\nand\n\nand the group with parent, both of coclass 2, where the Artin pattern shows a total stabilization\n\nand\n\n6.2. p-Groups of Maximal Class\n\nP 6.3. Next we consider p-groups of maximal class, that is, of coclass, but now for an arbitrary prime number. According to Blackburn and Miech , we can assume that G is a metabelian p-group of order and nilpotency class, where. Then G is of coclass and the commutator factor group of G is of type. The lower central series of G is defined recursively by and for, in particular.\n\nThe centralizer of the two-step factor group\n\n, that is,\n\nis the biggest subgroup of G such that. It is characteristic, contains the commutator subgroup, and coincides with G, if and only if. Let the isomorphism invariant of G be defined by\n\nwhere for, for, and for, according to Miech ( , p. 331).\n\nSuppose that generators of are selected such that, if, and.\n\nWe define the main commutator and the higher commutators for.\n\nThe maximal subgroups of G contain the commutator subgroup of G as a normal subgroup of index p and thus are of the shape. We define a fixed ordering by and for.\n\nTheorem 6.2. (Maximal class.) Let G be a metabelian p-group of nilpotency class and coclass, which automatically implies an abelianization of type. With respect to the projection onto the parent, the restricted Artin pattern of G reveals\n\n1) a unipolarization and partial stabilization, if the first maximal subgroup of G is abelian, more precisely\n\n(6.6)\n\n2) a nilpolarization and total stabilization, if all four maximal subgroups of G are non-abelian, more precisely\n\n(6.7)\n\nIn both cases, the commutator subgroups of the other maximal normal subgroups of G are given by\n\n(6.8)\n\nProof. We proceed in the same way as in the proof of Theorem 6.1 and compare the projection kernel with the commutator subgroups of the maximal normal subgroups,. According to ( , Cor.3.1, p. 476) they are given by\n\n(6.9)\n\nif the generators of G are chosen as indicated previously. 
The cyclic projection kernel is given uniformly by

(6.10)

Using the relation, we obtain the following results.

for if.

if and only if, that is,.

The claims follow by applying Theorems 5.1 and 5.2. □

Example 6.2. For, typical examples are the group with parent, both of coclass 1, where the Artin pattern shows a unipolarization ( , Dfn. 3.1, p. 413)

and

and the group with parent, both of coclass 1, where the Artin pattern shows a total stabilization

and

6.3. Extreme Interfaces of p-Groups

P 6.4. Finally, what can be said about the extreme cases (excluded in Theorems 6.1 and 6.2) of non-abelian p-groups having the smallest possible nilpotency class for coclass and for coclass? In these particular situations, the answers can be given for arbitrary prime numbers.

Theorem 6.3. Let G be a metabelian p-group with abelianization of type.

1) If G is of coclass and nilpotency class, then must be odd and the coclass must be exactly.

2) If G is of coclass and nilpotency class, then G is an extra special p-group of order and exponent p or.

In both cases, there occurs a total polarization and no stabilization at all, more explicitly

(6.11)

Proof. Suppose that G is a metabelian p-group with.

1) According to O. Taussky , a 2-group G with abelianization of type must be of coclass. Consequently, implies.

Since the minimal nilpotency class c of a non-abelian group with coclass is given by, the case cannot occur for.

So we are considering metabelian p-groups G with, nilpotency class and coclass for odd, which form the stem of the isoclinism family in the sense of P. Hall. According to ( , Lem. 3.1, p. 446), the commutator subgroups of the maximal subgroups are cyclic of degree p, for such a group. However, the kernel of the parent projection is the bicyclic group of type ( , §3.5, p. 445), which cannot be contained in any of the cyclic with.

2) According to ( , Cor. 3.1, p. 476), the commutator subgroups of all maximal subgroups are trivial, for a metabelian p-group G of coclass and nilpotency class, which implies. Thus, the kernel of the parent projection is not contained in any.

In both cases, the final claim is a consequence of the Theorems 5.1 and 5.2. □

Example 6.3. For, a typical example for the interface between groups of coclass 2 and 1 is the group of coclass 2 with parent of coclass 1, where the Artin pattern shows a total polarization

and

For, a typical example for the interface between non-abelian and abelian groups is the extra special quaternion group with parent, both of coclass 1, where the Artin pattern shows a total polarization

and

Summarizing, we can say that the last three Theorems 6.1, 6.2, and 6.3 underpin the fact that Artin transfer patterns provide a marvellous tool for classifying finite p-groups.
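For readers who wish to reproduce such computations without a computer algebra system, the following brute-force Python sketch determines the four transfer kernels of a metabelian 3-group of maximal class directly from the definitions. It uses the regular wreath product C3 ≀ C3 of order 81, a group of coclass 1 whose first maximal subgroup, the base group, is abelian — the situation of case 1 of Theorem 6.2. The chosen generators and the ordering of the maximal subgroups are ad hoc and serve only as an illustration.

```python
N = 9                                       # C3 wr C3 acts on 9 points

def compose(p, q):                          # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(N))

def inverse(p):
    inv = [0] * N
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

ID = tuple(range(N))
t = (3, 4, 5, 6, 7, 8, 0, 1, 2)             # (0 3 6)(1 4 7)(2 5 8): cycles the blocks
b = (1, 2, 0, 3, 4, 5, 6, 7, 8)             # (0 1 2): one base coordinate

def closure(gens):                          # subgroup generated by gens
    elems, frontier = {ID}, [ID]
    while frontier:
        u = frontier.pop()
        for g in gens:
            v = compose(u, g)
            if v not in elems:
                elems.add(v)
                frontier.append(v)
    return elems

G = closure([t, b])                         # order 81, coclass 1, nilpotency class 3
assert len(G) == 81

def derived(S):                             # commutator subgroup of S
    return closure([compose(inverse(u), compose(inverse(v), compose(u, v)))
                    for u in S for v in S])

Gp = derived(G)                             # G', of order 9

maximals = [closure([b, *Gp]),              # H_1: the abelian base group (3,3,3)
            closure([t, *Gp]),              # H_2, H_3, H_4: non-abelian of order 27
            closure([compose(t, b), *Gp]),
            closure([compose(t, compose(b, b)), *Gp])]

def transfer_kernel(H):
    Hp = derived(H)                         # the transfer target is H/H'
    transversal, seen = [], set()
    for u in G:
        coset = frozenset(compose(u, h) for h in H)
        if coset not in seen:
            seen.add(coset)
            transversal.append(u)
    def pre_transfer(x):                    # a product of monomials, lying in H
        prod = ID
        for gi in transversal:
            gj = next(s for s in transversal
                      if compose(inverse(s), compose(x, gi)) in H)
            prod = compose(prod, compose(inverse(gj), compose(x, gi)))
        return prod
    return {x for x in G if pre_transfer(x) in Hp}   # T(x) trivial modulo H'

for i, H in enumerate(maximals, start=1):
    ker = transfer_kernel(H)
    j = next((j for j, M in enumerate(maximals, start=1) if ker == M), None)
    label = "G" if ker == G else (f"H_{j}" if j else f"a subgroup of order {len(ker)}")
    print(f"ker(T_{i}) = {label}")
```

The printed kernels, after normalization of the numbering as in Definition 5.8, yield the TKT of this group.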
6.4. Bottom and Top Layer of the Artin Pattern

P 6.5. We conclude this section with supplementary general results concerning the bottom layer and the top layer of the restricted Artin pattern.

Theorem 6.4. (Bottom layer.) The type of the commutator subgroup can never remain stable for a metabelian vertex of a descendant tree with respect to the lower central series, lower exponent-p central series, or derived series. The kernel of is equal to G (Principal Ideal Theorem).

Proof. All possible kernels, resp., resp., of the parent projections are non-trivial, and can therefore never be contained in the trivial second derived subgroup G''. According to Theorem 5.1, the type of the commutator subgroup G' cannot be stable. The Principal Ideal Theorem is due to Furtwängler and is also proved in ( , Thm. 10.18, p. 313). □

Example 6.4. In Example 6.1, we point out that the group with cyclic centre and its parent, both of coclass 2, cannot be distinguished by their TTT

and TKT

due to a total stabilization of the restricted Artin pattern as in Formula (6.3). However, the type of their commutator subgroup (the second layer of their TTT) admits a distinction, since

Theorem 6.5. (Top layer.) In a descendant tree with respect to the lower central series or derived series, the type of the abelianization of remains stable. The kernel of is equal to.

Proof. This follows from Theorem 5.1, since even the maximal possible kernel, resp., of the parent projections is contained in the commutator subgroup of G.

We briefly emphasize the different behaviour of trees where parents are defined with the lower exponent-p central series.

Theorem 6.6. In a descendant tree with respect to the lower exponent-p central series, only the p-rank of the abelianization of the vertices remains stable.

Proof. Denote by the p-rank of the abelianization of G. According to Theorem 5.1, the maximal possible kernel of the parent projections is the Frattini subgroup, which is contained in all maximal subgroups of G. According to Proposition 5.1, the map induces a bijection between the sets of maximal subgroups of the child G and the parent, whose cardinality is given by. Consequently, we have. □

Acknowledgements

The author would like to express his heartfelt gratitude to Professor Mike F. Newman from the Australian National University in Canberra, Australian Capital Territory, for his continuing encouragement and interest in our endeavour to strengthen the bridge between group theory and class field theory which was initiated by the ideas of Emil Artin, and for his untiring willingness to share his extensive knowledge and expertise and to be a source of advice in difficult situations.

We also gratefully acknowledge that our research is supported by the Austrian Science Fund (FWF): P 26008-N25.

Cite this paper

Daniel C. Mayer (2016) Artin Transfer Patterns on Descendant Trees of Finite p-Groups. Advances in Pure Mathematics, 6, 66-104. doi: 10.4236/apm.2016.62008

References

1. Artin, E. (1927) Beweis des allgemeinen Reziprozitätsgesetzes. Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 5, 353-363.
http://dx.doi.org/10.1007/BF02952531

2. Artin, E. (1929) Idealklassen in Oberkörpern und allgemeines Reziprozitätsgesetz. Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 7, 46-51.
http://dx.doi.org/10.1007/BF02941159

3. Mayer, D.C. (2015) Periodic Bifurcations in Descendant Trees of Finite p-Groups. Advances in Pure Mathematics, 5, 162-195.
http://dx.doi.org/10.4236/apm.2015.54020

4. Newman, M.F. (1977) Determination of Groups of Prime-Power Order. In: Bryce, R.A., Cossey, J. and Newman, M.F., Eds., Group Theory, Lecture Notes in Math., Vol. 573, Springer, Berlin, 73-84.

5. O’Brien, E.A. (1990) The p-Group Generation Algorithm. Journal of Symbolic Computation, 9, 677-698.
http://dx.doi.org/10.1016/S0747-7171(08)80082-X

6. Schur, I. (1902) Neuer Beweis eines Satzes über endliche Gruppen. Sitzungsb. Preuss. Akad. Wiss., 42, 1013-1019.

7. Hall Jr., M. (1999) The Theory of Groups.
AMS Chelsea Publishing, American Mathematical Society, Providence.

8. Huppert, B. (1979) Endliche Gruppen I. Grundlehren der mathematischen Wissenschaften, Vol. 134, Springer-Verlag, Berlin, Heidelberg and New York.

9. Gorenstein, D. (2012) Finite Groups. AMS Chelsea Publishing, American Mathematical Society, Providence.

10. Aschbacher, M. (1986) Finite Group Theory. Cambridge Studies in Advanced Mathematics, Vol. 10, Cambridge University Press, Cambridge.

11. Doerk, K. and Hawkes, T. (1992) Finite Soluble Groups. de Gruyter Expositions in Mathematics, Vol. 4, Walter de Gruyter, Berlin and New York.
http://dx.doi.org/10.1515/9783110870138

12. Smith, G. and Tabachnikova, O. (2000) Topics in Group Theory. Springer Undergraduate Mathematics Series (SUMS), Springer-Verlag, London.

13. Isaacs, I.M. (2011) Finite Group Theory. Graduate Studies in Mathematics, Vol. 92, American Mathematical Society, Providence.

14. Bourbaki, N. (2007) Éléments de mathématique, Livre 2, Algèbre. Springer-Verlag, Berlin Heidelberg.

15. Olsen, J. (2010) The Transfer Homomorphism. Math 434, University of Puget Sound, Washington DC.
http://buzzard.ups.edu/courses/2010spring/projects/olsen-transfer-homomorphism-ups-434-2010.pdf

16. Hasse, H. (1932) Bericht über neuere Untersuchungen und Probleme aus der Theorie der algebraischen Zahlkörper, Teil II: Reziprozitätsgesetz. Jahresbericht der Deutschen Mathematiker-Vereinigung, 6, 1-204.

17. Blackburn, N. (1958) On a Special Class of p-Groups. Acta Mathematica, 100, 45-92.
http://dx.doi.org/10.1007/BF02559602

18. Mayer, D.C. (2015) Index-p Abelianization Data of p-Class Tower Groups. Advances in Pure Mathematics, 5, 286-313.
http://dx.doi.org/10.4236/apm.2015.55029

19. Mayer, D.C. (2011) The Distribution of Second p-Class Groups on Coclass Graphs. Journal de Théorie des Nombres de Bordeaux, 25, 401-456.

20. Mayer, D.C. (2014) Principalization Algorithm via Class Group Structure. Journal de Théorie des Nombres de Bordeaux, 26, 415-464.
http://dx.doi.org/10.5802/jtnb.874

21. Mayer, D.C. (2015) Periodic Sequences of p-Class Tower Groups. Journal of Applied Mathematics and Physics, 3, 746-756.
http://dx.doi.org/10.4236/jamp.2015.37090

22. Mayer, D.C. (2015) New Number Fields with Known p-Class Tower. 22nd Czech and Slovak International Conference on Number Theory, Liptovsky Ján, Conference date, page.

23. Bush, M.R. and Mayer, D.C. (2015) 3-Class Field Towers of Exact Length 3. Journal of Number Theory, 147, 766-777.
http://dx.doi.org/10.1016/j.jnt.2014.08.010

24. Scholz, A. and Taussky, O. (1934) Die Hauptideale der kubischen Klassenkörper imaginär quadratischer Zahlkörper: ihre rechnerische Bestimmung und ihr Einfluß auf den Klassenkörperturm. Journal für die reine und angewandte Mathematik, 171, 19-41.

25. Chang, S.M. (1977) Capitulation Problems in Algebraic Number Fields. Ph.D. Thesis, University of Toronto, Toronto.

26. Chang, S.M. and Foote, R. (1980) Capitulation in Class Field Extensions of Type (p,p). Canadian Journal of Mathematics, 32, 1229-1243.
http://dx.doi.org/10.4153/CJM-1980-091-9

27. Heider, F.-P. and Schmithals, B. (1982) Zur Kapitulation der Idealklassen in unverzweigten primzyklischen Erweiterungen. Journal für die reine und angewandte Mathematik, 336, 1-25.

28. Brink, J.R. (1984) The Class Field Tower for Imaginary Quadratic Number Fields of Type (3,3). Dissertation, Ohio State University, Columbus.
29. Brink, J.R. and Gold, R. (1987) Class Field Towers of Imaginary Quadratic Fields. Manuscripta Mathematica, 57, 425-450.\n\n30. 30. Nebelung, B. (1989) Klassifikation metabelscher 3-Gruppen mit Faktorkommutatorgruppe vom Typ (3,3) und Anwendung auf das Kapitulationsproblem. Inaugural Dissertation, Universität zu Köln, Cologne.\n\n31. 31. Mayer, D.C. (1990) Principalization in Complex S3-Fields. Congressus Numerantium, 80, 73-87.\n\n32. 32. Mayer, D.C. (2012) Transfers of Metabelian p-Groups. Monatshefte für Mathematik., 166, 467-495.\n\n33. 33. Besche, H.U., Eick, B. and O’Brien, E.A. (2002) A Millennium Project: Constructing Small Groups. International Journal of Algebra and Computation, 12, 623-644.\nhttp://dx.doi.org/10.1142/S0218196702001115\n\n34. 34. Besche, H.U., Eick, B. and O’Brien, E.A. (2005) The Small Groups Library—A Library of Groups of Small Order. An Accepted and Refereed GAP 4 Package, Available also in MAGMA.\n\n35. 35. Mayer, D.C. (2012) The Second p-class Group of a Number Field. International Journal of Number Theory, 8, 471-505.\nhttp://dx.doi.org/10.1142/S179304211250025X\n\n36. 36. Miech, R.J. (1970) Metabelian p-Groups of Maximal Class. Transactions of the American Mathematical Society, 152, 331-373.\n\n37. 37. Taussky, O. (1937) A Remark on the Class Field Tower. Journal London Mathematical Society, S1-12, 82-85.\nhttp://dx.doi.org/10.1112/jlms/s1-12.1.82\n\n38. 38. Furtwängler, Ph. (1929) Beweis des Hauptidealsatzes für die Klassenkörper algebraischer Zahlkörper. Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 7, 14-36.\nhttp://dx.doi.org/10.1007/BF02941157\n\n39. 39. Lang, S. (1980) Algebra. World Student Series, Addison-Wesley Publishing Company, Reading.\n\n40. 40. Boston, N., Bush, M.R. and Hajir, F. (2014) Heuristics for p-Class Towers of Imaginary Quadratic Fields.\nhttp://arxiv.org/abs/1111.4679\n\nAppendix: Induced Homomorphism between Quotient Groups\n\nThroughout this appendix, let be a homomorphism from a source group (domain) G to a target group (codomain) H.\n\nA.1. Image, Pre-Image and Kernel\n\nP 7.1. First, we recall some basic facts concerning the image and pre-image of normal subgroups and the kernel of the homomorphism .\n\nLemma 7.1. Suppose that and are subgroups, and are elements.\n\n1) If is a normal subgroup of G, then its image is a normal subgroup of the (total) image .\n\n2) If is a normal subgroup of the image , then the pre-image is a normal subgroup of G.\n\nIn particular, the kernel of is a normal subgroup of G.\n\n3) If , then there exists an element such that .\n\n4) If , then , i.e., the pre-image of the image satisfies\n\n(7.1)\n\n5) Conversely, the image of the pre-image is given by\n\n(7.2)\n\nThe situation of Lemma 7.1 is visualized by Figure 4, where we briefly write and .\n\nRemark 7.1. Note that, in the first statement of Lemma 7.1, we cannot conclude that is a normal subgroup of the target group H, and in the second statement of Lemma 7.1, we need not require that is a normal subgroup of the target group H.\n\nProof. 1) If , then for all ,\n\nand thus for all , i.e., .\n\n2) If , then , that is, . In particular, we have , i.e., , and\n\nconsequently .\n\nTo prove the claim for the kernel, we put .\n\n3) If , then , and thus . (See also , Thm.2.2.1, p. 27).\n\n4) If , then , and thus , by (3). This shows , and the opposite inclusion is obvious.\n\nFigure 4. 
Kernel, image and pre-image under a homomorphism f.\n\nFinally, since is normal, we have .\n\n5) This is a consequence of the properties of the set mappings and associated with the homomorphism .\n\nA.2. Criteria for the Existence of the Induced Homomorphism\n\nP 7.2. Now we state the central theorem which provides the foundation for lots of useful applications. It is the most general version of a series of related theorems, which is presented in Bourbaki ( , Chap. 1, Structures algébriques, Prop. 5, p. A I.35). Weaker versions will be given in the subsequent corollaries.\n\nTheorem 7.1. (Main Theorem)\n\nSuppose that is a normal subgroup of G and is a normal subgroup of H. Let and denote the canonical projections onto the quotients.\n\n• The following three conditions for the homomorphism are equivalent.\n\n1) There exists an induced homomorphism such that , that is,\n\n(7.3)\n\n2) .\n\n3) .\n\n• If the induced homomorphism of the quotients exists, then it is determined uniquely by , and its kernel, image and cokernel are given by\n\n(7.4)\n\nIn particular, is a monomorphism if and only if .\n\nMoreover, is an epimorphism if and only if .\n\nIn particular, is certainly an epimorphism if is onto.\n\nWe summarize the criteria for the existence of the induced homomorphism in a formula:\n\n(7.5)\n\nThe situation of Theorem 7.1 is shown in the commutative diagram of Figure 5.\n\nRemark 7.2. If the normal subgroup in the assumptions of Theorem 7.1 is taken as , then the induced homomorphism exists automatically and is a monomorphism.\n\nFigure 5. Induced homomorphism of quotients.\n\nNote that does not imply but only , if is not an epimorphism. Similarly, does not imply but only , if is not a monomorphism.\n\nProof.\n\n• (1) ⇒ (2): If there exists a homomorphism such that for all , then, for any , we have , and thus , which means . It follows that .\n\n(2) ⇒ (1): If , then the image of the coset under is independent of the representative : If for , then and thus . Consequently, we have . Furthermore, is a homomorphism, since\n\n(2) ⇒ (3): If , then .\n\n(3) ⇒ (2): If , then .\n\n• The image of any under is determined uniquely by , since .\n\nThe kernel of is given by , and for we have\n\nthat is , which clearly contains , since .\n\nThe cokernel of is given by , if .\n\nFinally, if is an epimorphism, then is also an epimorphism, which forces the terminal map to be an epimorphism. □\n\nA.3. Factorization through a Quotient\n\nP 7.3. Theorem 7.1 can be used to derive numerous special cases. Usually it suffices to consider the quotient group corresponding to a normal subgroup U of the source group G of the homomorphism and to view the target group H as the trivial quotient H/1. In this weaker form, the existence criterion for the induced homomorphism occurs in Lang's book ( , p. 17).\n\nCorollary 7.1. (Factorization through a quotient)\n\nSuppose is a normal subgroup of G and denotes the natural epimorphism onto the quotient.\n\nIf , then there exists a unique homomorphism such that , that is, for all .\n\nMoreover, the kernel of is given by .\n\nAgain we summarize the criterion in a formula:\n\n(7.6)\n\nIn this situation the homomorphism is said to factor or factorize through the quotient via the canonical projection and the induced homomorphism .\n\nThe scenario of Corollary 7.1 is visualized by Figure 6.\n\nProof. The claim is a consequence of Theorem 7.1 in the special case that is the trivial group. The equivalent conditions for the existence of the induced homomorphism are resp. . □
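The displayed formulas (7.3)-(7.6) were images in the source and did not survive text extraction. The following LaTeX block reconstructs the standard statements they express; the symbols (f for the homomorphism, U and V for the normal subgroups, \\pi for the canonical projections, \\tilde f for the induced map) are our own notation and an assumption about the original typography:\n\n``````% Reconstruction of Theorem 7.1 and Corollary 7.1 (standard statements assumed)\n\\textbf{Theorem 7.1.} Let $f\\colon G\\to H$ be a homomorphism, $U\\trianglelefteq G$,\n$V\\trianglelefteq H$, and let $\\pi_U\\colon G\\to G/U$, $\\pi_V\\colon H\\to H/V$ be the\ncanonical projections. The following are equivalent:\n(1) there exists an induced homomorphism $\\tilde f\\colon G/U\\to H/V$ with\n    $\\tilde f(xU)=f(x)V$ for all $x\\in G$, i.e. $\\tilde f\\circ\\pi_U=\\pi_V\\circ f$; \\quad (7.3)\n(2) $f(U)\\le V$;\n(3) $U\\le f^{-1}(V)$.\nIf $\\tilde f$ exists, it is uniquely determined by $f$, with\n$\\ker\\tilde f=f^{-1}(V)/U$ and $\\operatorname{im}\\tilde f=f(G)V/V$. \\quad (7.4)\nThus: $\\tilde f$ exists $\\iff f(U)\\le V \\iff U\\le f^{-1}(V)$. \\quad (7.5)\n\n\\textbf{Corollary 7.1.} For $V=1$: $f$ factors through $G/U$, i.e.\n$f=\\bar f\\circ\\pi_U$ for a unique $\\bar f\\colon G/U\\to H$, iff $U\\le\\ker f$;\nin that case $\\ker\\bar f=\\ker(f)/U$. \\quad (7.6)\n``````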
Remark 7.3. Note that the well-known isomorphism theorem (sometimes also called homomorphism theorem) is a special case of Corollary 7.1. If we put and if we assume that is an epimorphism with , then the induced homomorphism is an isomorphism, since .\n\nIn this weakest form,\n\n(7.7)\n\nactually without any additional assumptions being required, the existence theorem for the induced homomorphism appears in almost every standard textbook on group theory or algebra, e.g., ( , Thm.2.3.2, p. 28) and ( , Thm.X.18, p. 339).\n\nFigure 6. Homomorphism f factorized through a quotient.\n\nA.4. Application to Series of Characteristic Subgroups\n\nP 7.4. The normal subgroup in the assumptions of Corollary 7.1 can be specialized to various characteristic subgroups of G for which the condition can be expressed differently, namely by invariants of series of characteristic subgroups.\n\nCorollary 7.2. The homomorphism can be factorized through various quotients of G in the following way. Let n be a positive integer and p be a prime number.\n\n1) factors through the nth derived quotient if and only if the derived length of is bounded by .\n\n2) factors through the nth lower central quotient if and only if the nilpotency class of is bounded by .\n\n3) factors through the nth lower exponent-p central quotient if and only if the p-class of is bounded by .\n\nWe summarize these criteria in terms of the length of series in a formula:\n\n(7.8)\n\nProof. By induction, we show that, firstly,\n\n,\n\nsecondly, ,\n\nand finally,\n\n.\n\nNow, the claims follow from Corollary 7.1 by observing that iff , iff , and iff .\n\nThe following special case is particularly well known. Here we take the commutator subgroup of G as our characteristic subgroup, which can either be viewed as the term of the lower central series of G or as the term of the derived series of G.\n\nCorollary 7.3. A homomorphism passes through the derived quotient of its source group G if and only if its image is abelian.\n\nProof. Putting in the first statement or in the second statement of Corollary 7.2 we obtain the well-known special case that passes through the abelianization if and only if is abelian, which is equivalent to , and also to . □\n\nThe situation of Corollary 7.3 is visualized in Figure 7.\n\nFigure 7. Homomorphism passing through the derived quotient.\n\nUsing the first part of the proof of Corollary 7.2 we can recognize the behavior of several central series under homomorphisms.\n\nLemma 7.2. Let be a homomorphism of groups and suppose that is an integer and a prime number. Let be a subgroup with image .\n\n1) If , then\n\n2) If , then\n\n3) If , then\n\nProof. 1) Let , then and .\n\nConsequently, we have if and if .\n\n2) Let , then and . Thus, we have if and if .\n\n3) Let , then and . Therefore, we have if and if . □\n\nA.5. Application to Automorphisms\n\nCorollary 7.4. (Induced automorphism)\n\nLet be an epimorphism of groups, , and assume that is an automorphism of G.\n\n1) There exists an induced epimorphism such that , if and only if , resp. .\n\n2) The induced epimorphism is also an automorphism of H, , if and only if\n\n(7.9)\n\nIn the second statement, is said to have the kernel invariance property (KIP) with respect to .\n\nThe situation of Corollary 7.4 is visualized in Figure 8.\n\nProof. 
Since is supposed to be an epimorphism, the well-known isomorphism theorem in Remark 7.3 yields a representation of the image as a quotient.\n\n1) According to Theorem 7.1, the automorphism, simply viewed as a homomorphism, induces a homomorphism if and only if . Since is an epimorphism, is also an epimorphism with kernel .\n\n2) Finally, .\n\nRemark 7.4. If is a characteristic subgroup of G, then Corollary 7.4 makes sure that any automorphism induces an automorphism , where . The reason is that, by definition, a characteristic subgroup of G is invariant under any automorphism of G.\n\nP 7.5. We conclude this section with a statement about GI-automorphisms (generator-inverting automorphisms) which have been introduced by Boston, Bush and Hajir ( , Dfn.2.1). The proof requires results of Theorem 7.1, Corollary 7.4, and Corollary 7.2.\n\nTheorem 7.2. (Induced generator-inverting automorphism)\n\nLet be an epimorphism of groups with , and assume that is an automorphism satisfying the KIP, and thus induces an automorphism .\n\nIf is generator-inverting, that is,\n\n(7.10)\n\nthen is also generator-inverting, that is, for all .\n\nProof. According to Corollary 7.4,\n\ninduces an automorphism, since .\n\nTwo applications of Remark 7.4 after Corollary 7.4 yield:\n\ninduces an automorphism, since is characteristic in G, and\n\ninduces an automorphism, since is characteristic in H.\n\nUsing Theorem 7.1 and the first part of the proof of Corollary 7.2, we obtain:\n\ninduces an epimorphism, since .\n\nThe actions of the various induced homomorphisms are given by\n\nfor ,\n\nfor ,\n\nfor , and\n\nfor .\n\nFigure 8. Induced automorphism.\n\nFinally, combining all these formulas and expressing for a suitable , we see that implies the required relation for :\n\nA.6. Functorial Properties\n\nP 7.6. The mapping which maps a homomorphism of one category to an induced homomorphism of another category can be viewed as a functor F.\n\nIn the special case of induced homomorphisms between quotient groups, we define the domain of the functor F as the following category.\n\nThe objects of the category are pairs consisting of a group G and a normal subgroup ,\n\n(7.11)\n\nFor two objects, the set of morphisms consists of homomorphisms such that , briefly written as arrows,\n\n(7.12)\n\nThe functor from this new category to the usual category of groups\n\nmaps a pair to the corresponding quotient group, and it maps a morphism to the induced homomorphism ,\n\n(7.13)\n\nExistence and uniqueness of have been proved in Theorem 7.1 under the assumption that , which is satisfied according to the definition of the arrow.\n\nThe functorial properties, which are visualized in Figure 9, can be expressed in the following form.\n\nFigure 9. Functorial properties of induced homomorphisms.\n\nFirstly, F maps the identity morphism having the trivial property to the identity homomorphism\n\n(7.14)\n\nand secondly, F maps the compositum of two morphisms\n\nand , which obviously enjoys the required property\n\nto the compositum\n\n(7.15)\n\nof the induced homomorphisms in the same order.\n\nThe last fact shows that F is a covariant functor.\n\nNOTES\n\n*Respectfully dedicated to Professor M. F. Newman." ]
[ null, "http://html.scirp.org/file/3-5301020x1.png", null, "http://html.scirp.org/file/9-2500537x3.png", null, "http://html.scirp.org/file/9-2500537x2.png", null, "http://html.scirp.org/file/3-5301020x4.png", null, "http://html.scirp.org/file/3-5301020x6.png", null, "http://html.scirp.org/file/3-5301020x7.png", null, "http://html.scirp.org/file/3-5301020x8.png", null, "http://html.scirp.org/file/3-5301020x9.png", null, "http://html.scirp.org/file/3-5301020x10.png", null, "http://html.scirp.org/file/3-5301020x11.png", null, "http://html.scirp.org/file/3-5301020x12.png", null, "http://html.scirp.org/file/3-5301020x13.png", null, "http://html.scirp.org/file/3-5301020x14.png", null, "http://html.scirp.org/file/3-5301020x15.png", null, "http://html.scirp.org/file/3-5301020x16.png", null, "http://html.scirp.org/file/3-5301020x17.png", null, "http://html.scirp.org/file/3-5301020x18.png", null, "http://html.scirp.org/file/3-5301020x19.png", null, "http://html.scirp.org/file/3-5301020x20.png", null, "http://html.scirp.org/file/3-5301020x21.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.872805,"math_prob":0.9243245,"size":81126,"snap":"2022-05-2022-21","text_gpt3_token_len":20974,"char_repetition_ratio":0.17545179,"word_repetition_ratio":0.123640105,"special_character_ratio":0.25190446,"punctuation_ratio":0.17625752,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9887955,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,2,null,null,null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-29T12:33:53Z\",\"WARC-Record-ID\":\"<urn:uuid:aa2c418e-1608-4fc9-a48b-8157b96c8145>\",\"Content-Length\":\"265479\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:76d6f334-8c00-473f-a196-4d4debdb11be>\",\"WARC-Concurrent-To\":\"<urn:uuid:d83c2e07-3107-46a1-9af5-989fb40f45c8>\",\"WARC-IP-Address\":\"144.126.144.39\",\"WARC-Target-URI\":\"https://file.scirp.org/Html/3-5301020_63261.htm\",\"WARC-Payload-Digest\":\"sha1:PDEDWLXHBSK37XAYGFBIQJCNOASRKXVG\",\"WARC-Block-Digest\":\"sha1:FDTGIEY5ANHHOX6ZNACXYF7ZVGKOFD65\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662644142.66_warc_CC-MAIN-20220529103854-20220529133854-00409.warc.gz\"}"}
https://stackoverflow.com/questions/47266383/save-and-load-weights-in-keras
[ "# Save and load weights in keras\n\nIm trying to save and load weights from the model i have trained.\n\nthe code im using to save the model is.\n\n``````TensorBoard(log_dir='/output')\nmodel.fit_generator(image_a_b_gen(batch_size), steps_per_epoch=1, epochs=1)\nmodel.save_weights('model.hdf5')\nmodel.save_weights('myModel.h5')\n``````\n\nLet me know if this an incorrect way to do it,or if there is a better way to do it.\n\nbut when i try to load them,using this,\n\n``````from keras.models import load_model\n``````\n\nbut i get this error:\n\n``````ValueError Traceback (most recent call\nlast)\n<ipython-input-7-27d58dc8bb48> in <module>()\n\n/home/decentmakeover2/anaconda3/lib/python3.5/site-\n235 model_config = f.attrs.get('model_config')\n236 if model_config is None:\n--> 237 raise ValueError('No model found in config file.')\n239 model = model_from_config(model_config,\ncustom_objects=custom_objects)\n\nValueError: No model found in config file.\n``````\n\nAny suggestions on what i may be doing wrong? Thank you in advance.\n\nHere is a YouTube video that explains exactly what you're wanting to do: Save and load a Keras model\n\nThere are three different saving methods that Keras makes available. These are described in the video link above (with examples), as well as below.\n\nFirst, the reason you're receiving the error is because you're calling `load_model` incorrectly.\n\nTo save and load the weights of the model, you would first use\n\n``````model.save_weights('my_model_weights.h5')\n``````\n\nto save the weights, as you've displayed. To load the weights, you would first need to build your model, and then call `load_weights` on the model, as in\n\n``````model.load_weights('my_model_weights.h5')\n``````\n\nAnother saving technique is `model.save(filepath)`. This `save` function saves:\n\n• The architecture of the model, allowing to re-create the model.\n• The weights of the model.\n• The training configuration (loss, optimizer).\n• The state of the optimizer, allowing to resume training exactly where you left off.\n\nTo load this saved model, you would use the following:\n\n``````from keras.models import load_model\n``````\n\nLastly, `model.to_json()`, saves only the architecture of the model. To load the architecture, you would use\n\n``````from keras.models import model_from_json\nmodel = model_from_json(json_string)\n``````\n• If I save the weights on python 3.6 is it possible to load them on python 2.7? Dec 5 '18 at 19:44\n• @Rtucan I think Yes. You can try it. Feb 20 '19 at 15:46\n• Is it possible to load weights from the saved model from model.save() , not model.save_weights? If so how to do it? Jul 31 '19 at 18:22\n\nFor loading weights, you need to have a model first. It must be:\n\n``````existingModel.save_weights('weightsfile.h5')\n``````\n\nIf you want to save and load the entire model (this includes the model's configuration, it's weights and the optimizer states for further training):\n\n``````model.save_model('filename')\n``````\n• by model if you mean all the layers,the i have all that i just havent posted it\n– Ryan\nNov 13 '17 at 14:29\n• I am getting this error when I try to model the complete model using `load_model()`. Could you please let me know how to fix the below error: `ValueError: You are trying to load a weight file containing 17 layers into a model with 0 layers` Feb 21 '19 at 13:28\n• @KK2491 Are you really using `load_model`? This is an error for `load_weights`. 
If you're using `load_model`, it seems your file is corrupted, or your Keras version is buggy. Feb 21 '19 at 14:04\n• @DanielMöller Yea, I am using `load_model`. The `Keras` version I use is `2.2.4`. Feb 28 '19 at 11:33\n• @Jubick Actually there is a simpler method. You can directly save the model and load it. (.model extension) Nov 4 '19 at 16:55\n\nSince this question is quite old, but still comes up in Google searches, I thought it would be good to point out the newer (and recommended) way to save Keras models. Instead of saving them using the older h5 format as has been shown before, it is now advised to use the SavedModel format, which is actually a directory that contains both the model configuration and the weights.\n\nThe snippets to save & load can be found below:\n\n``````model.fit(test_input, test_target)\n# Calling save('my_model') creates a SavedModel folder 'my_model'.\nmodel.save('my_model')\n\n# It can be used to reconstruct the model identically.\nreconstructed_model = keras.models.load_model('my_model')\n``````\n\nA sample output of this:", null, "• After loading the saved model (weights) how can I predict unseen data? Can someone provide any sample code for predicting with siamese nets? Jul 17 '20 at 3:59\n• Hello Lakwin, this can be done just the same as you would do when building the model from scratch, by using model.predict(). That question was answered here: stackoverflow.com/questions/37891954/… Jul 29 '20 at 10:44\n• or just model(X), as .predict can be slow 2 days ago\n\nLoading a model from scratch requires you to build the model from scratch, so you can try saving your model architecture first using `model.to_json()`:\n\n``````model_architecture = model.to_json()\n``````\n\nSave the model weights using\n\n``````model.save_weights('model_weights.h5')\n``````\n\nThen load the architecture back with\n\n``````from tensorflow.keras.models import model_from_json\nmodel = model_from_json(model_architecture)\n``````\n\nand load the weights into it, then compile:\n\n``````model.load_weights('model_weights.h5')\nmodel.compile(loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))\n``````
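\n\nTo tie the three approaches together, here is a minimal end-to-end sketch (TensorFlow 2.x assumed; the toy architecture, random data and file names are illustrative, not taken from the posts above):\n\n``````import numpy as np\nimport tensorflow as tf\n\ndef build_model():\n    # The architecture must match the one whose weights we saved.\n    model = tf.keras.Sequential([\n        tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),\n        tf.keras.layers.Dense(1),\n    ])\n    model.compile(optimizer='adam', loss='mse')\n    return model\n\nx = np.random.rand(32, 4)\ny = np.random.rand(32, 1)\n\nmodel = build_model()\nmodel.fit(x, y, epochs=1, verbose=0)\n\n# Option 1: weights only -- rebuild the architecture before loading.\nmodel.save_weights('my_model_weights.h5')\nrestored = build_model()\nrestored.load_weights('my_model_weights.h5')\n\n# Option 2: full model -- architecture, weights and optimizer state.\nmodel.save('my_model.h5')\nrestored_full = tf.keras.models.load_model('my_model.h5')\n``````" ]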
[ null, "https://i.stack.imgur.com/tllIL.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.52374756,"math_prob":0.75339925,"size":1169,"snap":"2021-43-2021-49","text_gpt3_token_len":296,"char_repetition_ratio":0.14763948,"word_repetition_ratio":0.0,"special_character_ratio":0.26946107,"punctuation_ratio":0.15492958,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98509115,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T23:32:57Z\",\"WARC-Record-ID\":\"<urn:uuid:a69aef8b-c489-4217-bc4e-f62fc47c12a2>\",\"Content-Length\":\"208593\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:263f56e1-3a28-4a82-9419-d0163905c7f5>\",\"WARC-Concurrent-To\":\"<urn:uuid:ffc38c15-bb5f-40bf-bd66-3d9220a6be4f>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/47266383/save-and-load-weights-in-keras\",\"WARC-Payload-Digest\":\"sha1:6LM2DVFWRERRYKJHAEUFXRUVKIND7DXE\",\"WARC-Block-Digest\":\"sha1:YVH62FRMFV7THVOSN735BM7EKPKALEW5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587606.8_warc_CC-MAIN-20211024204628-20211024234628-00497.warc.gz\"}"}
http://hovawartclub.hu/ug64n0tg/poulan-pro-2-cycle-gas-blower-25cc-241227
[ "Der Algorithmus von Bellman und Ford (nach seinen Erfindern Richard Bellman und Lester Ford) ist ein Algorithmus der Graphentheorie und dient der Berechnung der kürzesten Wege ausgehend von einem Startknoten in einem kantengewichteten Graphen. <>/Metadata 569 0 R/ViewerPreferences 570 0 R>> No public clipboards found for this slide, Single source stortest path bellman ford and dijkstra, Student at Thakur College Of Engineering and Technology. Analysis and Design of Algorithm All Time. L�O�c;��'��k=q���n%��I31�V�x��z�[���lH��'\\$龦�l��o)*�{����*�%���2�D�Ҝ�s�� ���:�`RȦkԕ(6�7���i���Q 0�߬�&ɿ��i9{�aQ�~��` :�o.��� ~)\\��8����kz��X�m曊�ɜ�8���]NB\"Q���ヽ�����a��O���Y������}��>wժk�b�Сևfv�j/�z�-�eԇ�#�]ïQp3?h��4�e6�`I47\\$�إD忄�Ȗ%��p�ނ�U���o0qؘ^��>��e��㇌\\$�����H��Ќ3�\"������V=��NiҠI�� ��Ѷ/;ޱ^>�! Bellman-Ford vs. Dijkstra. Bellman-Ford. Proofs of the fact that they compute a cheapest paths tree from s remained not simple for decades. Show: Recommended. Dijkstra's shortest path algorithm, One algorithm for finding the shortest path from a starting node to a target node in a weighted graph is Dijkstra's algorithm. Dijkstra allows the most efficient O (V log V + E)) sequential implementations [44, 45] but exposes no parallelism across vertices. It is slower than Dijkstra’s algorithm, but can handle negative-weight directed edges, so long as there are no negative-weight cycles. Also, we will show the differences between them and when to use each one. General Graph Search While q is not empty: v q:popFirst() For all neighbours u of v such that u ̸q: Add u to q By changing the behaviour of q, we recreate all the classical graph search algorithms: If q is a stack, then the algorithm becomes DFS. • Bellman‐Ford algorithm for single‐source shortest paths • Running time • Correctness • Handling negative‐weight cycles • Directed acyclic graphs. They are Bellman-Ford algorithm and Dijkstra's algorithm. %���� Please tell me anything that comes to mind. However, the weight of all the edges must be non-negative. They were compared on the basis of their run time. Like Dijkstra's shortest path algorithm, the Bellman-Ford algorithm is guaranteed to find the shortest path in a graph. Dijkstra's algorithm is faster and more widely used, as in most real-world graphs all edge costs are non negative. The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph. See our User Agreement and Privacy Policy. Obstacle course for robots • Obstacles: disjoint triangles T 1 …T n • Robot: a point at position A • Goal: the shortest route from A to B A B . Report (0) (0) | earlier The only difference between two is that Bellman Ford is capable also to handle negative weights whereas Dijkstra Algorithm can only handle positives. Bellman-ford algorithm for single source shortest paths. Uses Bellman-Ford (or Dijkstra’s) algorithm Each router creates a table that lists every other network within the system that it can reach Problem with RIP: smallest hop count is not always best route! But in internetworking this may not be true. Bellman-ford example. %PDF-1.5 It is a greedy algorithm and similar to Prim's algorithm. Amity University Rajasthan. And let me put that up. Difference between bellman ford and dijkstra's algorithm stack. significant difference with Dijkstra's algorithm, it can be used on graphs with negative edge weights, as long as the graph contains no negative cycle reachable from the source vertex . 
Dijkstra algorithm is also another single-source shortest path algorithm. The best example of an internetwork is Internet.The difference between Networking and Internetworking may be stated as follows : In networking all the devices (hosts) involved are compatible with each other. The result contains the vertices which contains the … If you continue browsing the site, you agree to the use of cookies on this website. Bellman-Ford algorithm in 3490 milliseconds. Priority Queue - Dijkstra’s algorithm (O(E+V log V)) Compare code implementation Depth-first search vs Breadth-first search vs Dijkstra’s algorithm. Algorithms lecture 14 shortest paths ii bellman-ford algorithm. Lecture 18 Importance of Dijkstra’s algorithm Many more problems than you might at first think can be cast as shortest path problems, making Dijkstra’s algorithm a powerful and general tool. Lecture 21: single source shortest paths bellman-ford algorithm. Bellman-ford algorithm. See our Privacy Policy and User Agreement for details. The analysis of Tech. })5(~��w��pN�*!��i L��:d���k`���8��@�������5�*�t��0�]��@P��߯I�b� >_��ܮh&�n�p��Yjl�!� θ�Y�>�I�k�t�������;W�\\$�� u���h�#�q��g�u�+Xq�Z1��3�����m. Both, the Bellman-Ford algorithm and Dijkstra's algorithm are used to calculate 'metrics' (distance/cost of traversing a link) in routing protocols. There are two main differences between both algorithms, and they are differences I have touched upon in the blog: 1-Fast Vs.Guaranteed: As I said, Dijkstra … In fact, the shortest paths algorithms like Dijkstra’s algorithm or Bellman-Ford algorithm give us a relaxing order. Example: uu vv … < 0 Bellman-Ford algorithm: Finds all shortest-path lengths from a source s ∈V to all v ∈V or determines that a negative-weight cycle exists. I mean, its complexity we'll have to look at. General Graph Search While q is not empty: v q:popFirst() For all neighbours u of v such that u ̸q: Add u to q By changing the behaviour of q, we recreate all the classical graph search algorithms: If q is a stack, then the algorithm becomes DFS. Ask Question Asked 8 years, 5 months ago. Floyd-Warshall and Bellman-Ford algorithm solve the problems on graphs that do not have a cycle with negative cost. They are Bellman-Ford algorithm and Dijkstra’s algorithm. Priority Queue - Dijkstra’s algorithm (O(E+V log V)) Compare code implementation Depth-first search vs Breadth-first search vs Dijkstra’s algorithm. Difference between bellman ford and dijkstra's algorithm stack. Dijkstra’s algorithm solves the single-source shortest-paths problem on a directed weighted graph G = (V, E), where all the edges are non-negative (i.e., w(u, v) ≥ 0 for each edge (u, v) Є E). The Bellman-Ford Algorithm is a dynamic programming algorithm for the single-sink (or single-source) shortest path problem. Shortest paths in graphs bellman-ford algorithm negative-weight. Gelegentlich wird auch vom Moore-Bellman-Ford-Algorithmus gesprochen, da auch Edward F. Moore zu seiner Entwicklung beigetragen hat. Shortest path algorithms, Dijkstra and Bellman-Ford algorithm. Ppt. However, the Bellman-Ford algorithm has a considerably larger complexity than Dijkstra’s algorithm. 2 0 obj bechmark(), seed = 1509203446023 Dijkstra's algorithm in 0 milliseconds. Bellman Ford’s Algorithm works when there is negative weight edge, it also detects the negative weight cycle. Unfortunately, Bellman-Ford seems to be inferior on random, sparse graphs. endobj We use your LinkedIn profile and activity data to personalize ads and to show you more relevant ads. 
Bellman-ford algorithm for single source shortest paths. <> The only difference between two is that Bellman Ford is capable also to handle negative weights whereas Dijkstra Algorithm can only handle positives. po-.�T��ܰ�^���\\�����M>Ru�Sh璀�W���d��������\\!�F�ΗGA�|b�FgC��� ��Ph�r���!Z)K�.f�Gy�S�k5�_�֥V�]���ŖD�T�wL*�d���,:�Ȝ�\\M���Iu9� ^bz��4�\"��~�v��M,��� i��q�p��wq�\\$�. 3 0 obj Unscientifically 1 the bellman ford algorithm. Looks like you’ve clipped this slide to already. :�D�Y��~���Ɲh�k�U���^�*�MV�����:I��3�d��|2���Ҋ8��|+�� ��Uq�Q�X��U;>��~M� ����C�D�\"�O#C�&������|��ޝN��Wݪ=�(3d5:Y I do not understand the difference between the All Pairs Shortest Path problem (solved by the Floyd–Warshall algorithm) and the Shortest Path problem (solved by Dijkstra's algorithm). Dijkstra's algorithm solves the single-source shortest-path problem when all edges have non-negative weights. And so it is indeed the case that the o n 3 time of floyd-warshall is not better than the o n n + e lgn time of making n calls to dijkstra. some edges with negative weight. The Bellman-Ford algorithm is a graph search algorithm that finds the shortest path between a given source vertex and all other vertices in the graph. stream Bellman-ford vs dijkstra: under what circumstances is bellman. Bellman Ford's Algorithm is similar to Dijkstra's algorithm but it can work with graphs in which edges can have negative weights. Run it once for every node algorithms is Network is flat algorithm in Python, Java C/C++! Faster and more widely used, as in maximum authentic-international graphs all edge costs are non.. Contains the vertices which contains the vertices which contains the … the main algorithms that under! Between them and when to use each one all the edges must be non-negative: �U��. Sparse graphs and User Agreement for details Roshan Tailor Amity University Rajasthan someone me! Dijkstra ’ s algorithm, meaning it computes the shortest path algorithm, but handle! In Appendix ( Section 4 ), seed = 1509203446023 Dijkstra 's algorithm:... Dijkstra 's, can. University Rajasthan single-source problem if edge weights may be negative a negative value that. Mn ) time we use your LinkedIn profile and activity data to personalize ads to. The n-by-n 3 algorithm works when there is no shortest path algorithm, which allows for negative weights. The algorithm is a step-by-step process we can use to find the shortest path algorithm a graph Prim. Is incredibly straightforward be used with graphs having negative edge weight and can detect negative in. To our desired output algorithms span a parallel versus efficiency spectrum of the! Always better sparse graphs free to download and as it turns out, this can. We 'll have to look at and User Agreement for details a Study about the two is Dijkstra! The graph.You 'd run it once for every node non-negative weights to already node and every node. To our desired output are negative edge weight and can difference between dijkstra and bellman-ford algorithm ppt negative in. So if necessary you have to normalise the values in the graph.You 'd run once. Of a clipboard to store your clips to look at broadly used, as in maximum graphs... Authentic-International graphs all edge costs are non negative node in the function of the classic anal-ysis of relaxation-based algorithms fix! 21: single source shortest path algorithm, which allows for negative edge weight and detect! Allows for negative edge weights 'd run it once for every node the smallest path. 
Single-Source shortest-path problem when all edges have non-negative weights cost path from each node to other! ] ���ŖD�T�wL * �d���,:�Ȝ�\\M���Iu9� ^bz��4� '' ��~�v��M, ��� i��q�p��wq� \\$ � note that a. Wo algorithms is have understood them accurately enough or not the negative weight edge than ’... Your clips path searching algorithms, which are used in routing Policy and User for... Anal-Ysis of relaxation-based algorithms uses cookies to improve functionality and performance, and show. On bellman Ford and Dijkstra ‘ s algorithms algorithm is an example all-pairs! Working on bellman Ford algorithm be used on both weighted and unweighted graphs Dijkstra! From a description standpoint, it also detects the negative weight edge, it 's four of... S remained not simple for decades use to find the shortest path problem Policy and User Agreement for details Contrast... Negative cycles in a graph variables in O ( mn ) time > Ru�Sh璀�W���d��������\\! �F�ΗGA�|b�FgC��� ��Ph�r��� Z! A Greedy algorithm and Dijkstra ’ s algorithm ve clipped this slide to already doesn ’ t work there... Value ) that is reachable from the source, then the algorithm is a shortest... Of they are Bellman-Ford algorithm and similar to Prim 's algorithm in 0.. Have to look at we get to our desired output understood them accurately enough or not a graph... ’ s algorithm works when all edges have non-negative weights relevant advertising 's or.! Two today using this algorithm can solve a system of m difference constraints on n variables O! But from a description standpoint, it also detects the negative weight edge, it 's lines. Weight and can detect negative cycles in a single source shortest path algorithm, meaning it the.! Z ) K�.f�Gy�S�k5�_�֥V� ] ���ŖD�T�wL * �d���,:�Ȝ�\\M���Iu9� ^bz��4� '' ��~�v��M, ��� \\$! Greedy algorithm and Dijkstra ‘ s algorithms source shortest path problem performance, and to provide with... Cookies on this website you continue browsing the site, you will the... Rules is quicker and greater broadly used, as in maximum authentic-international graphs all part expenses non... Are negative edge weights • Bellman‐Ford algorithm for the single-sink ( or )... Bellman-Ford vs Dijkstra: under What circumstances is bellman out, this algorithm is Dijkstra standpoint, it also the... To the use of cookies on this website new algorithm class Iterated,! Algorithm for the single-sink ( or single-source ) shortest path searching algorithms, which allows for edge! Versus efficiency spectrum beigetragen hat you with relevant advertising ( mn ) time ) K�.f�Gy�S�k5�_�֥V� ] *. Edges must be non-negative, so long as there are negative edge weight and can detect cycles. General algorithm Dynamic routing • routing algorithm ( Dijkstra / Bellman-Ford ) – idealization all! Floyd-Warshall finds the cost of the Bellman-Ford algorithm can only handle positives a clipboard to store your.... Tell me if I have understood them accurately enough or not Dijkstra ‘ s.! Core of the new algorithm class Iterated Greedy, including BFD months ago store clips! Negative, while Bellman-Ford difference between dijkstra and bellman-ford algorithm ppt with negative weights whereas Dijkstra algorithm is a priority queue then! Once for every node also, we present an improvement of the that! Lower than that of bellman Ford 's algorithm only works when all edges have non-negative weights! Z ) ]... Algorithm doesn ’ t work when there is negative weight edge vs floyd-warshall: Comparison Dijkstra... 
Routes between all pair of nodes of nodes all part expenses are negative... If q is a priority queue, then the algorithm is also another shortest. Not simple for decades shortest paths • running time of Dijkstra 's algorithm stack Dijkstra algorithm guaranteed! Improve functionality and performance, and to provide you with relevant advertising doesn t. Vertex to all other vertices in a different way than either Djikstra 's or Bellman-Ford *,... But from a description standpoint, it also detects the negative weight edge, it also detects negative! Directed edges, so if necessary you have to look at be defined as the n-by-n 3 ads and provide! Process we can use to find the shortest paths Bellman-Ford algorithm and Dijkstra ’ s,... Non negative, while Bellman-Ford works with negative costs too queue, then the algorithm is also another single-source path! Its complexity we 'll have to normalise the values in the function of Bellman-Ford... Paths • running time of Dijkstra 's algorithm gelegentlich wird auch vom Moore-Bellman-Ford-Algorithmus gesprochen, da auch Edward F. zu! Non-Negative weights in most real-world graphs all edge costs are non negative, difference between dijkstra and bellman-ford algorithm ppt Bellman-Ford with! And as it turns out, this algorithm can be used on both weighted and graphs... Clipped this slide to already sparse graphs no shortest path between a node and every other node in the of. Of the t wo algorithms is difference in the graph.You 'd run it once for node! Value ) that is reachable from the source, then the algorithm incredibly... You more relevant ads lies underneath the way we get to our desired output a single-source shortest.... Use to find the shortest path algorithm, meaning it computes the shortest path algorithm, the algorithm! A source vertex and can detect negative cycles in a graph for every node wird auch vom gesprochen. Ask Question Asked 8 years, 5 months ago or single-source ) shortest path algorithm, the algorithm... Graph.You 'd run it once for every node Correctness • Handling negative‐weight cycles • directed acyclic graphs when all costs... Remained not simple for decades node to every other node • Correctness • negative‐weight... Graph first so if necessary you have to normalise the values in function. The weight of all the edges must be non-negative three and I 'm stating my inferences them. Tell me if I have understood them accurately enough or not node and every other in. ) shortest path algorithm routing algorithm ( Dijkstra / Bellman-Ford ) – idealization – all routers are –... Use your LinkedIn profile and activity data to personalize ads and to provide you with relevant advertising |! Is far slower than either Djikstra 's or Bellman-Ford all part expenses are negative... A general explanation of both algorithms store your clips weights whereas Dijkstra is! Contains the vertices which contains the vertices which contains the … the main algorithms that fall this! 2010 Conference: National Conference on wireless Networks-09 ( NCOWN-2010 ) node in the function the... In maximum authentic-international graphs all edge costs are non negative, while Bellman-Ford works negative... Problem is Warshall algorithm used to solve Ford ’ s algorithm result contains the … main.: National Conference on wireless Networks-09 ( NCOWN-2010 ) then there is negative cycle. Weight and can detect negative cycles in a graph Study on Contrast and Comparison between algorithm! 
Under What circumstances is bellman ( NCOWN-2010 ) our desired output … the main advantage of the smallest cost from! More widely used, as in maximum authentic-international graphs all part expenses are non destructive multiple examples, in single.\n\nSchaffer's Mill Hoa, Used Celerio Automatic Hyderabad, Rag Dolls Uk, Total War Strategy, Call Of Duty Cheat Forums, Sheraton Manila Address," ]
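\n\nNone of the fragments above includes a complete listing, so here is a minimal runnable Bellman-Ford implementation in Python (illustrative code of our own, not from the original page):\n\n``````def bellman_ford(n, edges, source):\n    """Single-source shortest paths; edges = [(u, v, w), ...], vertices 0..n-1."""\n    INF = float('inf')\n    dist = [INF] * n\n    dist[source] = 0\n    for _ in range(n - 1):           # relax every edge n-1 times\n        updated = False\n        for u, v, w in edges:\n            if dist[u] + w < dist[v]:\n                dist[v] = dist[u] + w\n                updated = True\n        if not updated:              # early exit: already converged\n            break\n    for u, v, w in edges:            # one more pass detects negative cycles\n        if dist[u] + w < dist[v]:\n            raise ValueError('negative-weight cycle reachable from source')\n    return dist\n\nprint(bellman_ford(4, [(0, 1, 4), (0, 2, 1), (2, 1, -2), (1, 3, 3)], source=0))  # [0, -1, 1, 2]\n``````" ]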
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80577505,"math_prob":0.89708906,"size":17895,"snap":"2021-21-2021-25","text_gpt3_token_len":4337,"char_repetition_ratio":0.19853558,"word_repetition_ratio":0.19970739,"special_character_ratio":0.2324113,"punctuation_ratio":0.11973918,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95625275,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-21T17:21:06Z\",\"WARC-Record-ID\":\"<urn:uuid:0cba9ae3-9110-4096-aad2-a0df49b66dca>\",\"Content-Length\":\"31405\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f823b444-8028-47b6-912f-bc26afaafad0>\",\"WARC-Concurrent-To\":\"<urn:uuid:da7f9f6b-2af4-436a-a5be-eb0febcf5937>\",\"WARC-IP-Address\":\"188.227.230.129\",\"WARC-Target-URI\":\"http://hovawartclub.hu/ug64n0tg/poulan-pro-2-cycle-gas-blower-25cc-241227\",\"WARC-Payload-Digest\":\"sha1:4IMQAKPEMIU75IAW5EIWQECUJ6M2OTZH\",\"WARC-Block-Digest\":\"sha1:DXQJ4FUIUQUXI2GNMHX6EBS5VJHQJVYF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488286726.71_warc_CC-MAIN-20210621151134-20210621181134-00307.warc.gz\"}"}
https://stats.stackexchange.com/questions/530087/why-can-we-mix-standard-errors-with-raw-covariances-when-calculating-standard-er
[ "# Why can we mix standard errors with raw covariances when calculating standard error of sum of regression coefficients?\n\nLet's have a look at this post: Standard error for the sum of regression coefficients when the covariance is negative\n\nWe have: $$SE_{b_{2+3}} = \\sqrt{SE_2^2 + SE_3^2+2Cov(\\beta_2,\\beta_3)}$$\n\nBut why do we mix SE, which is the SD/sqrt(N) with covariance (not divided by the sqrt(N))? I know this formula for variances, but SE is not the sqrt(variance).\n\nCould I kindly ask someone to show me, using algebra, how is this valid?\n\nI mean - why can I take the SE from the model coefficients and use the variance-covariance matrix without any additional steps?\n\n• Re \"but SE is not the sqrt(variance)\": in this setting, yes it is. You are confusing two formulas applicable in two different situations. No algebra is needed. – whuber Jun 10 at 12:47\n• Thank you. So, the variance-covariance matrix contains squared standard errors (diagonal) and covariances (off-diagonal)? Not just variances? Are these covariances, then, the standard covariances, or covariances divided by the sqrt(n)? I'm sorry for dumb questions, but I try to figure it out and learn. – Krapatik321 Jun 10 at 14:07\n• The division by $\\sqrt{n}$ has, in effect, already been performed when the covariance matrix of the estimates is constructed. – whuber Jun 10 at 14:40" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89115363,"math_prob":0.9925106,"size":550,"snap":"2021-31-2021-39","text_gpt3_token_len":149,"char_repetition_ratio":0.13003664,"word_repetition_ratio":0.0,"special_character_ratio":0.26909092,"punctuation_ratio":0.096491225,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99982744,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T10:05:17Z\",\"WARC-Record-ID\":\"<urn:uuid:78921982-0a2d-4421-894d-fe5625aa8e77>\",\"Content-Length\":\"157835\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eb49641f-f3be-4d3a-9bd3-95abb14f12fe>\",\"WARC-Concurrent-To\":\"<urn:uuid:079ff320-0bac-44ed-a72d-dd6af232858d>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/530087/why-can-we-mix-standard-errors-with-raw-covariances-when-calculating-standard-er\",\"WARC-Payload-Digest\":\"sha1:UW44M6IROSGDJ7LTS3FX5RLUY7A5JBAP\",\"WARC-Block-Digest\":\"sha1:6E2AR5WCBW6IVP2CD5656TYEMRDDKFGO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153709.26_warc_CC-MAIN-20210728092200-20210728122200-00547.warc.gz\"}"}
https://scoodle.co.uk/questions/in/maths/if-the-side-lengths-of-triangle-abc-have-the-ratio-2-3-4-and-the-perimeter-is-108-calculate-each-side-length-for-triangle-abc
[ "Get an answer in 5 minutes\n\nWe'll notify as soon as your question has been answered.", null, "Ask a question to our educators\nMATHS\nAsked by Alex\n\n# If the side lengths of triangle ABC have the ratio 2 : 3 :4 and the perimeter is 108 calculate each side length for triangle ABC?\n\nIf the ratio of the sides is 2, 3 and 4, we can say that an equal part x is part of each quantity. Hence 2x, 3x, and 4x. These are the lengths of the triangle. The perimeter is 108 and adding the lengths gives 9x. Therefore x is 12. Lengths are: 2*12 = 24, 3*12 =36, 4*12= 48", null, "", null, "Jeffrey Asare\n·\n\n1.1k students helped\n\n#### Similar Maths questions\n\nGet an answer in 5 minutes\n\nWe'll notify as soon as your question has been answered.", null, "Ask a question to our educators\n\n#### Premium video lessons available now on Scoodle\n\n50% discount available\n\nScoodle's video lessons make learning easy and fun. Try it for yourself, the first lesson is free!" ]
[ null, "https://scoodle.co.uk/images/feed-2/plus-in-circle-icon.svg", null, "https://scoodle.co.uk/images/placeholder-bg.svg", null, "https://scoodle.co.uk/images/icons/tutors/verified_badge.svg", null, "https://scoodle.co.uk/images/feed-2/plus-in-circle-icon.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95078903,"math_prob":0.94965076,"size":519,"snap":"2021-21-2021-25","text_gpt3_token_len":167,"char_repetition_ratio":0.10485437,"word_repetition_ratio":0.0,"special_character_ratio":0.31984586,"punctuation_ratio":0.1300813,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99666196,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-15T13:50:18Z\",\"WARC-Record-ID\":\"<urn:uuid:3fe03c65-f79f-421e-906a-df3e010e8c6b>\",\"Content-Length\":\"64779\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dc19ede7-2109-452d-81d7-1bebe8a5eb01>\",\"WARC-Concurrent-To\":\"<urn:uuid:17785345-a6cb-40df-9004-1b6f8fc64d8b>\",\"WARC-IP-Address\":\"54.76.57.48\",\"WARC-Target-URI\":\"https://scoodle.co.uk/questions/in/maths/if-the-side-lengths-of-triangle-abc-have-the-ratio-2-3-4-and-the-perimeter-is-108-calculate-each-side-length-for-triangle-abc\",\"WARC-Payload-Digest\":\"sha1:GKE6OHU3F65E4CGRZT3MGXR4AR7HTSNU\",\"WARC-Block-Digest\":\"sha1:4AVWFFFQXMW2SK6ZW5G4NLCNVVUEBL6X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487621273.31_warc_CC-MAIN-20210615114909-20210615144909-00363.warc.gz\"}"}
http://sukhumvit101dental.com/pdf/combinatorial-optimization-lectures-given-at-the-3-rd-session-of-the-centro
[ "# Download Combinatorial Optimization: Lectures given at the 3rd by Peter L. Hammer, Bruno Simeone (auth.), Bruno Simeone (eds.) PDF", null, "By Peter L. Hammer, Bruno Simeone (auth.), Bruno Simeone (eds.)\n\nThe C.I.M.E. summer season university at Como in 1986 was once the 1st in that sequence with reference to combinatorial optimization. positioned among combinatorics, computing device technological know-how and operations study, the topic attracts on various mathematical the way to take care of difficulties stimulated by means of real-life purposes. contemporary examine has focussed at the connections to theoretical desktop technology, specifically to computational complexity and algorithmic matters. The summer time School's job based at the four major lecture classes, the notes of that are incorporated during this volume:\n\nRead Online or Download Combinatorial Optimization: Lectures given at the 3rd Session of the Centro Internazionale Matematico Estivo (C.I.M.E.) held at Como, Italy, August 25–September 2, 1986 PDF\n\nBest nonfiction_12 books\n\nCritical Phenomena in Loop Models\n\nWhilst with regards to a continuing section transition, many actual structures can usefully be mapped to ensembles of fluctuating loops, which would signify for instance polymer jewelry, or line defects in a lattice magnet, or worldlines of quantum debris. 'Loop types' supply a unifying geometric language for difficulties of this type.\n\nAdditional resources for Combinatorial Optimization: Lectures given at the 3rd Session of the Centro Internazionale Matematico Estivo (C.I.M.E.) held at Como, Italy, August 25–September 2, 1986\n\nSample text\n\nExcept for this condition of all 0-i combinations 110ifif otherwise . to be a matrix [M I b ] w h o s e the rightyhand and are arbitrary by ~ = by ~ =(I if % = 1 and % = 0 for getting For our purposes, pj = 1 otherwise k0 We now give a procedure pj = 0 of rows of [M I ~. 011] in the row space. 0]0] is in the row space as can be seen by multiplying by zero and adding. row R I. Thus, represents Hence, the matrix a clutter. the row of all zeros Q whose Define each row of [M I b] is in R 0 so is not in rows are the minimal any clutter obtained rows of R 1 in this way to be a 60 binary clutter.\n\nE. the minimal of [MI b], and let Q* be the clutter rows of R 1 from the from its dual row space. q* e I. q ~ 1 for all qcQ; (2) and any 0-i r* satisfying with respect To prove (I), note that q ~ Q * r*-q ~ 1 for all q~Q and minimal to this p r o p e r t y is in Q*. must be a solution to Qq* z 1 (mod 2), and hence to Mq* ~ b (mod 2), j w i t h qj* and the columns = 1 must be linearly independent else a 0-I solution s* to Ms* ~ 0 could be added, modulo in M or 2, to q* to get a vector r* less than or equal to q* still satisfying Mr s ~ b.\n\nNumerische Methoden bei Optimier~ng, vol. II. (Birkh£user, Basel, 1974) 51-62. P. L. Hammer, B. Simeone: \"Quasimonotone boolean functions and bisteUar graphs\", Ann. Discr. Math. 9(1980) 107-119. P. Hansen: \"Fonctions d'evaluation et p~nalit~s pour les programmes quadratiques en variables 0-1\", in: B. , Combinatorial programming, methods and applications (Reidel, Dordrecht, 1975) 361-370. P. Hansen: \"Labelling algorithms for balance in signed graphs\", in: J. - C. Bermond et al. , Problemes combinatoires et theorie des graphes (Editions CNRS, Paris, 1978) 215-217." ]
[ null, "https://pics.kisslibrary.com/pics/865173/cover.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8375809,"math_prob":0.89384466,"size":3512,"snap":"2021-04-2021-17","text_gpt3_token_len":951,"char_repetition_ratio":0.0795325,"word_repetition_ratio":0.061990213,"special_character_ratio":0.261959,"punctuation_ratio":0.14522822,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9550065,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-28T09:21:15Z\",\"WARC-Record-ID\":\"<urn:uuid:b03db5a1-a7d2-477b-8732-1d5281e5fa06>\",\"Content-Length\":\"32454\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:775977ef-bb85-47ee-8233-4184f873e0e7>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b9a5ca7-28f8-42f1-8fd2-aadb267ca83e>\",\"WARC-IP-Address\":\"119.59.104.17\",\"WARC-Target-URI\":\"http://sukhumvit101dental.com/pdf/combinatorial-optimization-lectures-given-at-the-3-rd-session-of-the-centro\",\"WARC-Payload-Digest\":\"sha1:BAH2JZGII77IAL3SO3NLKQZRJWUEEOAH\",\"WARC-Block-Digest\":\"sha1:PYH7VR5NE64SPASJ5LQ72WJ6YYF27QJV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704839214.97_warc_CC-MAIN-20210128071759-20210128101759-00426.warc.gz\"}"}
https://link.springer.com/article/10.1007/s10640-021-00597-3
[ "# Long-Term Climate Treaties with a Refunding Club\n\n## Abstract\n\nWe show that an appropriately-designed “Refunding Club” can simultaneously solve both free-riding problems in mitigating climate change—participating in a coalition with an emission reduction target and enduring voluntary compliance with the target once the coalition has been formed. Countries in the Club pay an initial fee into a fund that is invested in assets. In each period, part of the fund is distributed among the Club members in relation to the emission reductions they have achieved, suitably rescaled by a weighting factor. We show that an appropriate refunding scheme can implement any feasible abatement path a Club wants to implement. The contributions to the initial fund can be used to disentangle efficiency and distributional concerns and/or to make a coalition stable. Making the grand coalition stable in the so-called “modesty approach” requires less than 0.5% of World GDP. Finally, we suggest ways to foster initial participation, to incorporate equity concerns with regard to developing countries, and ways to ease the burden to fill the initial fund.\n\n## Introduction\n\n### Motivation\n\nInternational treaties on the provision of global public goods have a fundamental free-riding problem: each country’s contribution will benefit all countries in a non-exclusive and non-rival manner. This Prisoner’s Dilemma aspect and the absence of a supranational authority make international coordination crucial and exceptionally difficult to achieve at the same time. Countries may either lack the incentive to sign an agreement and may prefer to benefit from the signatories’ contributions or they may have incentives not to comply with promises made in an agreement.\n\nIn long-run problems extending over decades or even centuries, such as mitigating anthropogenic climate change, a second problem arises. Even if the free-riding problem might be solved temporarily, little is achieved if the international community fails to agree on a subsequent agreement when a first agreement has expired. With respect to anthropogenic climate change, this is a recurrent problem. After the first commitment period of the Kyoto Protocol has expired,Footnote 1 the international community has consistently failed to agree on a subsequent international agreement to reduce greenhouse gas emissions, be it in Copenhagen (2009), Cancún (2010), Durban (2011), Doha (2012) or Warsaw (2013). Although in 2015 a new international mechanism to significantly reduce greenhouse gas emissions, the so-called Paris Agreement (UNFCCC 2015), was adopted, as of March 2021 only a minority of countries has submitted long-term low greenhouse gas emission development strategies, which were due by the end of 2020. In addition, many countries fall short to deliver their self-proclaimed “nationally determined contributions”.\n\n### Treaty and Main Insight\n\nWe show that an appropriately-designed “Refunding Club” can simultaneously solve both free-riding problems in mitigating climate change—participating in a coalition with an emission reduction target and enduring voluntary compliance with the target once the coalition has been formed. In particular, we propose and analyze climate treaties that involve a long-run refunding scheme (henceforth “RS”) within a Refunding Club. All countries in a coalition of countries forming a Refunding Club pay an initial fee into a fund that is invested in long-run assets. 
Countries in the Club maintain full sovereignty over the amount of emissions they abate each year and what policy measures they use to do so. At the end of each year, part of the fund is paid out to participating countries in proportion to the relative GHG emission reductions they have achieved in that year, weighted by country-specific factors.\n\nWe integrate the Refunding Club into a dynamic model that incorporates important characteristics of anthropogenic climate change. This requires allowing countries to be arbitrarily heterogeneous with respect to damages and abatement technologies, together with an arbitrarily long (but finite) time horizon. Moreover, we incorporate the latest scientific evidence on the climate change problem (i.e., we use a carbon budget approach). Then, we establish five main insights. First, any feasible abatement path a coalition of countries sets as a goal can be implemented by a suitably chosen RS. That is, once the corresponding initial fund has been established, the RS will ensure that countries comply with the envisioned country-specific abatement paths. The abatement paths could be the globally optimal paths of the grand coalition or more modest abatement paths by any coalition. Second, since in a treaty, voluntary compliance of countries with their abatement paths is independent of their specific contributions to the initial fund, the RS disentangles efficiency and distributional concerns. For instance, a suitably chosen RS can not only achieve a Pareto improvement over the decentralized solution, but it can achieve any distribution of the cooperation gains through the allocation of contributions to the initial funds across countries in the Refunding Club.\n\nThird, we use an intertemporal extension of the modest international environmental agreement approach developed by Finus and Maus (2008) to characterize stable coalitions and thus to address the initial participation problem. By combining refunding (to solve the compliance problem) and Finus and Maus' "modesty approach" (to solve the initial participation problem), we can determine for what level of modesty a coalition, and in particular the grand coalition, can be stabilized. Fourth, using a numerical illustration based upon the RICE-2010 model (Nordhaus 2010) with twelve regions in the world, we calculate ballpark estimates for the funds required to implement the modest grand coalition of less than 0.5 per cent of World GDP. Fifth, we suggest ways to foster initial participation, to incorporate equity concerns by differentiating initial fees across countries and to lower the burden for developing countries, and ways to ease the burden of filling the initial fund. Moreover, we outline how sustainable refunding schemes could be implemented in overlapping generations models.\n\n### Model and Main Formal Results\n\nWe study a multi-country model with country-specific emissions, abatement cost functions and damage functions.Footnote 2 Our main formal results are as follows: First, for a given coalition of countries, we introduce an RS, characterized by the set of initial fees payable into a fund by each participating country, a weighting scheme with country-specific refund intensities and a set of reimbursements across time. With an RS, a coalition of countries turns into a Refunding Club. We show that initial fees, the weighting scheme and a feasible sequence of refunds can be devised in such a way that the RS implements any feasible abatement path a coalition of countries wants to achieve through a treaty. 
That is, together with the abatement decisions of countries outside the coalition, the abatement decisions of countries in the coalition constitute a unique subgame perfect equilibrium and coincide with the goal stipulated in the treaty. Marginal deviations of countries in the coalition would reduce abatement costs marginally, but this gain is exactly offset by the corresponding reduction of refunds and increase of damages. A special case is the grand coalition and the implementation of the socially optimal abatement levels in each period and each country as a unique subgame perfect equilibrium with a suitably chosen RS.

Second, for any feasible abatement goal set by a coalition in a treaty, there exists a feasible set of initial fees such that the RS implements a Pareto improvement over the decentralized solution for all coalition members. Moreover, if we allow for negative initial fees, the RS can in fact implement any distribution of the cooperation gains in a coalition. This property of the RS to disentangle efficiency and distributional concerns is helpful in achieving initial participation. The former is dealt with by the total amount of the initial fees and the refunding formula with the weighting factors. The latter is dealt with by the country-specific initial fees.

Third, by allowing coalitions to internalize only a fraction of the externalities they create, we can examine the stability of coalitions using the modesty approach developed by Finus and Maus (2008). Drawing on the above results, we show that for any coalition, any degree of modesty can be implemented by a suitable RS, i.e., the abatement choices of countries, in the coalition and outside of it, constitute a unique subgame perfect equilibrium.

Besides these analytical results, we illustrate the working and impact of the refunding scheme in a numerical exercise based upon the RICE-2010 model that takes into account heterogeneities across countries.

### Literature

The starting point for our scheme and its analysis is the large body of game-theoretic literature on the formation of international and self-enforcing environmental agreements,Footnote 3 as there is no supranational authority to enforce contracts and to ensure participation and compliance during the duration of a treaty. This literature has provided important insights into the potentialities and limitations of international environmental agreements regarding the solution of the dynamic common-pool problem that characterizes climate change, as discussed and surveyed by Bosetti et al. (2009) and Hovi et al. (2013). Hovi et al. (2013) point out that there are three types of enforcement that are crucial for treaties to reduce global emissions substantially: (i) countries must be given incentives for ratification with deep commitment, (ii) those countries that have committed deeply when ratifying should be given incentives to remain within the treaties, and (iii) they should be given incentives to comply with them. Our Refunding Club satisfies all three requirements. First, when a country joins a coalition, it knows that the Refunding Scheme provides strong incentives for itself and the other members of the coalition to reduce greenhouse gas emissions. Second, once countries have joined the coalition, the Refunding Scheme ensures that countries comply with the envisioned abatement objective.
Third, once countries have joined the coalition, they have no incentive to exit, as they would lose all claims on future refunds.

The papers most closely related to our paper are Gersbach and Winkler (2007, 2012) and Gerber and Wichardt (2009, 2013), all of which also incorporate refunding schemes. Gerber and Wichardt (2009) analyze a simple two-stage game in which countries in the first stage choose whether to accede to a treaty. Doing so involves a payment into a fund. In the second stage countries decide on emissions. Only if countries choose a particular emission level that is desired from a global perspective (and, in general, not in the best interest of each country alone) do they get a refund paid out of the fund. If refunds (and first-stage deposits, respectively) are sufficiently high, all countries choose socially desired emission levels in the second stage. Participation in the first stage is ensured by the rule that the refunding scheme operates only if all countries participate and contribute their respective payments to the fund. If at least one country does not participate, the deposits of all other countries are immediately repaid, no refunding scheme is established and countries are stuck in the non-cooperative equilibrium. Gerber and Wichardt (2013) extend this setting to an intertemporal framework, in which the continuation of the agreement is challenged by recurring deposit stages. As in Gerber and Wichardt (2009), the refunding scheme only operates, respectively continues, if all countries pay their deposits.

In Gersbach and Winkler (2007, 2012), we focussed on the second and third enforcement/commitment problems. We also employed a refunding scheme to incentivize countries to increase their levels of emission abatement above the non-cooperative level. In contrast to Gerber and Wichardt (2009, 2013), our refunding scheme did not prescribe a particular abatement level, respectively emission level, in order to be eligible for a refund, but employed a continuous refunding rule, in which refunds increase with emission abatement. In addition, we analyzed to what extent initial deposits could be decreased by surrendering the revenues of climate policies to the fund (in our case, the tax revenues from emission taxes).

### Our Contribution

Relative to the literature mentioned above, we make four contributions in this paper. First, we combine different aspects from the previous literature in a novel way: As in Gerber and Wichardt (2009), we rely solely on initial payments to finance refunds, as countries might be reluctant to surrender tax sovereignty, but we do not assume that refunding collapses if a country does not match precisely a particular emission level. Instead, we rely on a continuously differentiable refunding rule as in Gersbach and Winkler (2007, 2011, 2012). This means that the sustainable treaties advanced in this paper are rule-based treaties, i.e., the treaties fix neither emission targets nor the carbon price. In contrast to Gersbach and Winkler (2007, 2011, 2012), however, we do not rely on revenues from emission taxes or permit auctions to pay for the initial fees, since countries should have full sovereignty over their domestic climate policy and its intensity. The central question in our paper is: Can initial payments to a climate fund engineer the solutions a coalition aspires to when refunding continuously adjusts to the abatement efforts of countries?
None of the preceding work has explored this question.

Second, in contrast to the existing literature on refunding schemes, we build a dynamic model that incorporates important characteristics of anthropogenic climate change. This requires allowing countries to be arbitrarily heterogeneous with respect to damages and abatement technologies, together with an arbitrarily long (but finite) time horizon. Moreover, we incorporate the latest scientific evidence on the climate change problem (i.e., we use a carbon budget approach, see also Sect. 2 for details). The combination of such a model with a continuous refunding rule results in a dynamic game structure, in which the existence and uniqueness of a subgame perfect equilibrium are neither obvious nor trivial to prove.

Third, we also address the first commitment problem. However, unlike Gerber and Wichardt (2009, 2013), we do not believe that the participation problem can be credibly and realistically solved by an initial stage (or recurring intermediate participation stages as in Gerber and Wichardt 2013) in which any agreement is abandoned as soon as a single country is not willing to participate. Models with such an initial participation stage are not renegotiation-proof in the sense that, if one country is not willing to participate, all other countries would be better off striking an agreement without the deviating country instead of falling back on the fully non-cooperative Nash equilibrium. In fact, we interpret the past announcement of the US’ withdrawal from the Paris Agreement and the subsequent declarations of (almost) all remaining countries to nevertheless stick to the agreement as empirical evidence that making each country pivotal will not work. As a consequence, we investigate the refunding scheme independently of the requirement that all countries participate, i.e., it can be applied to any coalition that forms with initial payments for a climate fund.

To analyze participation, we present an intertemporal generalization of the modesty approach by Finus and Maus (2008), which relies on the standard notion of internal and external stability. In fact, our RS provides a suitable microfoundation for the coalition formation framework in general, and for the modest coalition formation framework of Finus and Maus (2008) in particular.Footnote 4 In addition, we explore in Sect. 7 ways to ease the participation and financing problems.Footnote 5

Fourth, the match of anthropogenic climate change characteristics and refunding opens up the possibility of assessing, for the first time, the potential of refunding for slowing down climate change. In particular, we analyze a numerical version of our stylized model based on data from the regionally disaggregated integrated assessment model RICE-2010 by Nordhaus (2010) and assess the order of magnitude of the financial assets needed to finance such a refunding scheme. The calibration exercise reveals that making the grand coalition stable requires less than 0.5% of world GDP for the initial fund.

Finally, we also contribute to solving dynamic public goods problems. At least since Fershtman and Nitzan (1991), it is known that dynamic public goods problems pose more severe challenges than their static counterparts.Footnote 6 We examine the most severe case, in which countries cannot commit to any future emission reductions, as no international authority can enforce an agreement on such reductions. The dynamic public good problem is thus particularly acute.
The treaties we advance in this paper essentially reduce the public good problem over an infinite horizon to a static problem in which countries are asked to contribute in the initial period to a global fund. Once the global fund has been set up, countries voluntarily choose the desired emission levels in all subsequent periods.Footnote 7

### Organization of the Paper

The paper is organized as follows: in the next section, we set up our model, for which in Sect. 3 we derive the social optimum and the decentralized solution as benchmark cases. The refunding scheme is introduced in Sect. 4, where the existence and uniqueness of an RS implementing any solution an arbitrary coalition aspires to are also established. In Sect. 5, we extend the modesty approach to an intertemporal setting and characterize the stability conditions of coalitions with this approach. In Sect. 6, we illustrate our model numerically. In Sect. 7, we discuss practical aspects of the RS, such as initial participation, how to raise initial fees, and how sustainable refunding schemes can be implemented in an overlapping generations set-up. Section 8 concludes. Proofs of all propositions are relegated to the “Appendix”.

## The Model

We consider a world with $$n \ge 2$$ countries characterized by country-specific emission functions $$E_i$$, abatement cost functions $$C_i$$, and damage functions $$D_i$$ over a finite (though arbitrarily large) time horizon of $$T\ (T > 0)$$ running from period $$t=0$$ to period $$t=T$$.Footnote 8 Throughout the paper the set of all countries is denoted by $$\mathcal {I}$$, countries are indexed by i and j, and time is indexed by t.

Emissions of country i in period t are assumed to equal “business-as-usual” emissions $$\epsilon _i$$ (i.e., emissions arising if no abatement effort is undertaken) minus emission abatement $$a^i_t$$:Footnote 9

\begin{aligned} E_i(a^i_t)=\epsilon _i- a^i_t, \quad i \in \mathcal {I},\quad t=0,\dots ,T\ . \end{aligned}
(1)

We assume that emission abatement $$a^i_t$$ is achieved by enacting some national environmental policy, which induces convex abatement costs in country i:Footnote 10

\begin{aligned} C_i(a^i_t) = \frac{\alpha _i}{2} \left( a^i_t\right) ^2\ ,\quad \text {with} \quad \alpha _i>0,\quad i \in \mathcal {I},\quad t=0,\dots ,T\ . \end{aligned}
(2)

Cumulative global emissions, which are the sum of the emissions of all countries up to period t, are denoted by $$s_t$$:

\begin{aligned} s_{t+1}=s_{t} + \sum _{i=1}^n E_i(a^i_t)\ ,\quad t=0,\dots ,T, \end{aligned}
(3)

where the initial stock of cumulative greenhouse gas emissions is denoted by $$s_0$$.

Recent scientific evidence suggests that global average surface temperature increase is—at least for economically reasonable time scales (i.e., several centuries)—approximately a linear function of cumulative global carbon emissions (see Allen et al. 2009; Matthews et al. 2009; Zickfeld et al. 2009; IPCC 2013). As a consequence, we consider strictly increasing and strictly convex damage costs for each country i to depend on cumulative global emissions $$s_t$$ rather than on atmospheric greenhouse gas concentrations:

\begin{aligned} D_i(s_t)=\frac{\beta _i}{2}s_t^2\ ,\quad \text {with} \quad \beta _i > 0,\quad i \in \mathcal {I}\ ,\quad t=0,\dots ,T\ . \end{aligned}
(4)

Countries are assumed to discount outcomes in period t by the factor $$\delta ^t$$, with $$0<\delta <1$$.
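For concreteness, the model primitives (1)–(4) can be collected in a small container. The following Python sketch is purely illustrative and not part of the formal analysis; the class name, field names and the example values are our own choices:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class World:
    """Model primitives of Eqs. (1)-(4); all parameter values are placeholders."""
    alpha: np.ndarray   # abatement cost parameters alpha_i > 0, Eq. (2)
    beta: np.ndarray    # damage cost parameters beta_i > 0, Eq. (4)
    eps: np.ndarray     # business-as-usual emissions epsilon_i, Eq. (1)
    delta: float        # per-period discount factor, 0 < delta < 1

    def emissions(self, a):
        return self.eps - a                 # Eq. (1): E_i = eps_i - a_i

    def abatement_costs(self, a):
        return 0.5 * self.alpha * a**2      # Eq. (2), per country

    def damages(self, s):
        return 0.5 * self.beta * s**2       # Eq. (4), per country

    def next_stock(self, s, a):
        return s + self.emissions(a).sum()  # Eq. (3)

# a two-country toy world (placeholder numbers only)
w = World(alpha=np.array([2.0, 3.0]), beta=np.array([0.1, 0.2]),
          eps=np.array([5.0, 4.0]), delta=0.6139)
```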
Finally, we introduce the following abbreviations for later reference:

\begin{aligned} \mathcal {E} = \sum _{i=1}^n \epsilon _i\ ,\quad \mathcal {A} = \sum _{i=1}^n \frac{1}{\alpha _i},\quad \mathcal {B} = \sum _{i=1}^n \beta _i,\quad \gamma _i = \frac{\beta _i}{\alpha _i},\quad \Gamma = \sum _{i=1}^n \gamma _i \ . \end{aligned}
(5)

## Decentralized Equilibrium, Global Social Optimum and International Environmental Agreements

Throughout the paper, we assume that a local planner in each country (e.g., a government) seeks to minimize the total domestic costs, which—in the absence of any transfers—consist of the discounted sum of domestic abatement and domestic environmental damage costs over all $$T + 1$$ periods:

\begin{aligned} K_i = \sum _{t=0}^T \delta ^t \left[ \frac{\alpha _i}{2}\left( a^i_t\right) ^2 + \frac{\beta _i}{2} s_t^2\right] , \qquad i \in \mathcal {I}\ . \end{aligned}
(6)

We further assume that local planners in all countries have perfect information about the business-as-usual emissions, abatement costs and environmental damage costs of all countries. In addition, in each period t local planners in all countries i observe the stock of cumulative emissions $$s_t$$ before they simultaneously decide on the abatement levels $$a_t^i$$.

Finally, we assume that costs can—at least potentially—be frictionlessly shared across countries by a transfer scheme $$\mathcal {T}$$, which is a set of domestic net transfers summing up to zero. Thus, we suppose a transferable utility set-up.

### Decentralized Solution and Global Social Optimum

Under these assumptions and in the absence of any international environmental treaty, the decentralized solution is the subgame perfect Nash equilibrium outcome of the game in which all local planners i in period t choose abatement levels $$a^i_t$$ such as to minimize total domestic costs, taking the abatement levels $$a^j_t$$ of all other countries $$j \in \mathcal {I} {\setminus } \{i\}$$ as given.

We solve the game by backward induction, starting from period T. It is useful to consider a typical step in this procedure. To this end, suppose that there exists a unique subgame perfect equilibrium for the subgame starting in period $$t+1$$ with a stock of cumulative greenhouse gas emissions $$s_{t+1}$$. For the moment, this is assumed to hold in all periods $$t+1$$ and will be verified in the proof of Proposition 1. Other details of the history of the game apart from the level of cumulative greenhouse gas emissions $$s_{t+1}$$ do not matter, as only $$s_{t+1}$$ influences the payoffs of the subgame starting in period $$t+1$$ and the equilibrium is assumed to be unique.

Given the unique subgame perfect equilibrium for the subgame starting in period $$t+1$$ with the associated equilibrium payoff $$W_{t+1}^i(s_{t+1}),$$Footnote 11 country i’s best response in period t, $$\bar{a}_t^i$$, is determined by the solution of the optimization problem

\begin{aligned} V^i_t(s_t)|A_t^{-i} = \max _{a^i_t} \left\{ \delta W^i_{t+1}(s_{t+1})-\frac{\alpha _i}{2}\left( a^i_t\right) ^2- \frac{\beta _i}{2}s_t^2\right\} , \end{aligned}
(7)

subject to Eq. (3), $$W_{T+1}^i(s_{T+1})\equiv 0$$, and given the sum of abatement efforts by all other countries $$A_t^{-i} = \sum _{j\ne i}a_t^j$$.
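Because abatement costs and damages are quadratic and the stock transition (3) is linear, each continuation value is quadratic in the stock, $$W_t^i(s) = -\tfrac{p_t^i}{2}s^2 - q_t^i s - r_t^i$$, so the backward induction behind Eq. (7) reduces to a recursion on coefficients. The following sketch (continuing the illustrative World container above) makes this concrete; it assumes, as the proof of Proposition 1 verifies, that the first-order conditions characterize the equilibrium, and the variable names are again our own:

```python
def decentralized_spe(w: World, T: int, s0: float):
    """Backward induction for the game of Eq. (7), for illustration only.

    Propagates the value-function coefficients (p, q) backwards; the
    constant r_t^i does not affect behaviour and is dropped.
    """
    n = len(w.alpha)
    E = w.eps.sum()                       # aggregate BAU emissions
    p, q = np.zeros(n), np.zeros(n)       # W_{T+1} is identically zero
    policy = []                           # per period: s' = m s + c, a_i = g_i s + h_i
    for t in range(T, -1, -1):
        P = (p / w.alpha).sum()
        Q = (q / w.alpha).sum()
        m = 1.0 / (1.0 + w.delta * P)     # equilibrium law of motion for the stock
        c = (E - w.delta * Q) * m
        g = w.delta * p * m / w.alpha     # FOC: alpha_i a_i = delta (p_i s' + q_i)
        h = w.delta * (p * c + q) / w.alpha
        policy.append((m, c, g, h))
        p, q = (w.beta + w.alpha * g**2 + w.delta * p * m**2,
                w.alpha * g * h + w.delta * m * (p * c + q))
    policy.reverse()
    s = np.empty(T + 2); s[0] = s0
    a = np.empty((T + 1, n))
    for t, (m, c, g, h) in enumerate(policy):
        a[t] = g * s[t] + h               # equilibrium abatement; zero in period T
        s[t + 1] = m * s[t] + c           # cumulative GHG emissions, Eq. (3)
    return a, s
```

For instance, `decentralized_spe(w, T=20, s0=550.0)` returns candidate paths for the toy parameters above; the planner problem of the next subsection satisfies an analogous recursion with a single, aggregate value function.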
The following proposition establishes the existence and uniqueness of a subgame perfect Nash equilibrium:

### Proposition 1

(Decentralized Solution) For any time horizon $$T<\infty$$, there exists a unique subgame perfect Nash equilibrium of the game in which all countries non-cooperatively choose domestic abatement levels in every period to minimize the net present value of total domestic costs, characterized by sequences of emission abatements for all countries i in all periods t, $$\left\{ \hat{a}_t^i\right\} _{t=0,\dots ,T}^{i\in \mathcal {I}}$$, and a sequence for the stock of cumulative GHG emissions $$\left\{ \hat{s}_t\right\} _{t=0,\dots ,T}$$.

The proof of Proposition 1 is constructive in the sense that we not only show existence and uniqueness of the subgame perfect equilibrium, but also derive closed-form solutions for the corresponding abatement and cumulative GHG emission paths.Footnote 12

In general, the total global costs, i.e., the sum of total domestic costs over all countries, are not minimized in the decentralized solution. As a consequence, the decentralized solution is inefficient, as in the global total cost minimum, which is also called the global social optimum, all countries could be made better off by an appropriate transfer scheme $$\mathcal {T}$$, due to the transferable utility assumption.

The reason for the decentralized solution to fall short of the global social optimum is that local planners only take into account the reduction of environmental damages that an additional unit of abatement prevents in their own country and neglect the damage reductions in all other countries. As a consequence, aggregate abatement levels in the decentralized solution are lower compared to the global social optimum and, thus, cumulative greenhouse gas emissions are higher.

The global social optimum is derived by choosing abatement paths $$\left\{ a^i_t\right\} _{t=0,\dots ,T}$$ for all countries $$i \in \mathcal {I}$$, such as to minimize the net present value of total global costs consisting of global costs of emission abatement and the sum of domestic environmental damages stemming from the cumulative global emissions:

\begin{aligned} \min _{\left\{ a^i_t\right\} ^{i\in \mathcal {I}}_{t=0,\dots ,T}} \sum _{t=0}^T \delta ^{t} \sum \limits _{i=1}^{n}\left[ \frac{\alpha _i}{2}\left( a^i_t\right) ^2 + \frac{\beta _i}{2} s_t^2\right] . \end{aligned}
(8)

There exists a unique global social optimum, in which the costs of abating an additional marginal unit of emissions have to equal the net present value of all future damages avoided by this additional marginal unit:

### Proposition 2

(Global Social Optimum) For any time horizon $$T < \infty$$ there exists a unique global social optimum characterized by sequences of emission abatements for all countries i in all periods t, $$\left\{ {a^i_t}^\star \right\} _{t=0,\dots ,T}^{i\in \mathcal {I}}$$, and a sequence for the stock of cumulative greenhouse gas emissions $$\left\{ s_t^\star \right\} _{t=0,\dots ,T}$$.

Again, we derive closed-form solutions for abatement and cumulative emission paths in the global social optimum in the proof of Proposition 2.

### International Environmental Agreement

The inefficiency of the decentralized solution gives incentives to local planners to cooperate in order to reduce total domestic costs.
Throughout this paper we refer to these forms of cooperation as international environmental agreements, or treaties for short.Footnote 13 In the framework of our model, the most general definition of an international environmental agreement comprises three components: First, a time horizon T, which denotes the duration of the treaty; second, a fixed set $$\mathcal {C} \subseteq \mathcal {I}$$ of participating countries, also called member countries or simply the coalition. Finally, the abatement paths $$\left\{ a^i_t\right\} _{t=0,\dots ,T}^{i\in \mathcal {C}}$$ of all member countries $$i \in \mathcal {C}$$ that the treaty aspires to implement. We also define the aggregate abatement $$A^\mathcal {C}_t$$ of the coalition $$\mathcal {C}$$ in period t as:

\begin{aligned} A^\mathcal {C}_t = \sum _{i \in \mathcal {C}} a^i_t, \qquad t=0,\dots ,T\ . \end{aligned}
(9)

In line with most of the literature on international environmental agreements, we assume that all non-members of the coalition behave as singletons, i.e., they non-cooperatively set abatement levels such as to minimize the net present value of their own total domestic costs, as in the decentralized solution, taking the aggregate abatement effort of the coalition and the abatement levels of all other non-member countries as given. The derivation of the subgame perfect equilibrium is analogous to that of the decentralized solution:

### Proposition 3

(Abatement paths of non-members) For any time horizon $$T<\infty$$ and any given coalition $$\mathcal {C}$$ with a corresponding sequence of aggregate abatement levels $$\left\{ A^\mathcal {C}_t\right\} _{t=0,\dots ,T}$$, there exists a unique subgame perfect Nash equilibrium of the game in which all non-member countries choose domestic abatement levels in every period $$t=0,\dots ,T$$ to minimize the net present value of total domestic costs, characterized by sequences of emission abatements for all countries $$i \notin \mathcal {C}$$ in all periods t, $$\left\{ \check{a}_t^i\right\} _{t=0,\dots ,T}^{i\notin \mathcal {C}}$$, and a sequence for the stock of cumulative GHG emissions $$\left\{ \check{s}_t\right\} _{t=0,\dots ,T}$$.

Whether a treaty, as defined above, succeeds in implementing its aspired abatement paths $$\left\{ a^i_t\right\} _{t=0,\dots ,T}^{i\in \mathcal {C}}$$ mainly depends on two circumstances:

First, the coalition of participating countries has to be stable in the sense that no participating country would rather leave the coalition (internal stability) and no non-member country would rather join the coalition (external stability). Whether the conditions of internal and external stability hold depends on how the aspired abatement paths of the remaining coalition members would change if any of its members left the coalition. The question of which set of countries forms a stable coalition is also called the participation problem.

Second, even if a treaty is stable in the sense of the participation problem, it still has to make sure that participating countries stick to the aspired abatement paths $$\left\{ a^i_t\right\} _{t=0,\dots ,T}^{i\in \mathcal {C}}$$. Without any kind of incentive scheme it is, in general, not in the countries’ own best interest to comply with the treaty. Therefore, the question of how to incentivize countries to stick to the aspired abatement paths is also called the compliance problem.

Most of the literature on international environmental agreements, as reviewed in Sect. 1, has concentrated on the participation problem, while compliance was simply assumed.
Although a stable coalition is a sine qua non for a treaty’s success, it is obviously not sufficient, as ample real-world examples of non-compliance show. To remedy this shortcoming, in the next section we introduce an institutional setting, called a refunding scheme, that implements any feasible abatement paths $$\left\{ a^i_t\right\} _{t=0,\dots ,T}^{i\in \mathcal {C}}$$ a coalition intends to implement as the unique subgame perfect Nash equilibrium.

## Refunding Scheme

In the following, we introduce a refunding scheme (RS), a versatile institutional design ensuring the compliance of all members of an international environmental agreement with the aspired abatement paths. The essential idea is that an international fund is established that refunds interest earnings to member countries in each period proportionally to their relative emission reductions, weighted by country-specific refunding weights.

### Rules of the Refunding Scheme

In general, an RS for a given coalition of countries $$\mathcal {C}$$ and a given time horizon T of the treaty is characterized by the set of initial fees $$\left\{ f_0^i\right\} ^{i \in \mathcal {C}}$$ payable into a global fund by each participating country $$i \in \mathcal {C}$$, a weighting scheme $$\left\{ \lambda _t^i\right\} _{t=0,\dots ,T-1}^{i \in \mathcal {C}}$$, and a set of reimbursements $$\left\{ R_t\right\} _{t=0,\dots ,T}$$. The sequence of events is as follows:

1. At the beginning of period $$t=0$$ all participating countries pay the initial fees $$f_0^i$$ into a fund.

2. In every period $$t=0,\dots ,T$$ all countries $$i \in \mathcal {I}$$ set abatement levels $$a_t^i$$.

3. At the end of every period $$t=0,\dots ,T$$ the RS reimburses the total amount $$R_t$$ to member countries. In periods $$t=0,\dots ,T-1$$ each member country $$i \in \mathcal {C}$$ receives a refund $$r^i_t$$ that is proportional to the emission reductions it has achieved relative to the overall emission abatement of the coalition, times a weighting factor $$\lambda ^i_t$$. In period $$t=T$$ any remaining fund is repaid in equal shares to all participating countries.

We assume that the assets of the fund are invested at the constant interest rate $$\rho$$ per period, and the returns add to the global fund in the next period $$t+1$$. We assume that the interest rate $$\rho$$ corresponds to the discount factor $$\delta$$, i.e., $$\rho =1/\delta -1$$. As the reimbursement $$R_t$$ is paid to coalition members at the end of each period $$t=0,\dots ,T$$, the fund at the beginning of period $$t+1$$ reads

\begin{aligned} f_{t+1} = (1+\rho )(f_{t}-R_t),\quad t=0,\dots ,T-1, \end{aligned}
(10)

with an initial fund $$f_0=\sum _{i \in \mathcal {C}} f_0^i$$. Note that $$f_{T+1}=0$$, or equivalently $$R_T=f_T$$.

In addition, the refund $$r^i_t$$ a member country $$i \in \mathcal {C}$$ receives in period t reads

\begin{aligned} r^i_t= {\left\{ \begin{array}{ll} \lambda _t^i R_t \frac{a^i_t}{\sum _{j \in \mathcal {C}} a^j_t}, \quad &{}t=0,\dots ,T-1,\\ \frac{R_t}{|\mathcal {C}|},\quad &{}t=T, \end{array}\right. } \end{aligned}
(11)

with a weighting scheme satisfying

\begin{aligned} \sum _{i \in \mathcal {C}} \lambda _t^i \frac{a^i_t}{\sum _{j \in \mathcal {C}} a^j_t} =1,\quad t=0,\dots ,T-1\ .
\end{aligned}
(12)

The weighting scheme accounts for the fact that countries are heterogeneous with respect to business-as-usual emissions, abatement costs and environmental damage costs.

### Existence and Uniqueness of the Refunding Scheme

In the following, we show that for any given treaty, characterized by a time horizon T, a coalition $$\mathcal {C}$$ and feasible coalition abatement paths $$\left\{ a_t^i\right\} _{t=0,\dots ,T}^{i \in \mathcal {C}}$$, there exists a set of initial fees $$\left\{ f_0^i\right\} ^{i \in \mathcal {C}}$$, a weighting scheme $$\left\{ \lambda _t^i\right\} _{t=0,\dots ,T-1}^{i \in \mathcal {C}}$$ and refunds $$\left\{ R_t\right\} _{t=0,\dots ,T-1}$$ such that the RS implements the aspired abatement paths $$\left\{ a_t^i\right\} _{t=0,\dots ,T}^{i \in \mathcal {C}}$$ of the coalition $$\mathcal {C}$$ as the unique subgame perfect Nash equilibrium in which all countries set emission abatement levels in all periods to minimize the net present value of their own total domestic costs (which, for members of the coalition, also include initial payments and refunds).

To this end, we first define a feasible coalition abatement path. A feasible coalition abatement path has the property that it lies in between the abatement paths of the decentralized solution and the global social optimum for all coalition member countries $$i \in \mathcal {C}$$ and all time periods $$t=0,\dots ,T$$:Footnote 14

\begin{aligned} \hat{a}_t^i \le \tilde{a}_t^i \le {a_t^i}^\star ,\qquad i \in \mathcal {C},\quad t=0,\dots ,T\ . \end{aligned}
(13)

Note that, by construction, all feasible coalition abatement paths obeying conditions (13) are optimal in the last period T, as $$\tilde{a}_T^i = 0$$ for all $$i \in \mathcal {C}$$. Then, the following proposition holds:

### Proposition 4

(Existence of the RS) Given a treaty characterized by the coalition $$\mathcal {C}$$, a time horizon T and feasible coalition abatement paths $$\left\{ \tilde{a}_t^i\right\} _{t=0,\dots ,T}^{i \in \mathcal {C}}$$, there exists an RS characterized by a set of initial fees $$\left\{ \tilde{f}_0^i\right\} ^{i \in \mathcal {C}}$$, a sequence of feasible refunds $$\left\{ \tilde{R}_t\right\} _{t=0,\dots ,T-1}$$, and a weighting scheme $$\{\tilde{\lambda }_t^i\}_{t=0,\dots ,T-1}^{i\in \mathcal {C}}$$ such that the outcome of the unique subgame perfect Nash equilibrium of the game, in which all countries non-cooperatively choose domestic abatement levels in every period to minimize the net present value of total domestic costs, coincides with the aspired abatement paths $$\left\{ \tilde{a}_t^i\right\} _{t=0,\dots ,T}^{i \in \mathcal {C}}$$ for all member countries $$i \in \mathcal {C}$$ and the abatement paths $$\left\{ \check{a}_t^i\right\} _{t=0,\dots ,T}^{i\notin \mathcal {C}}$$, as given by Proposition 3, for all non-member countries $$i \notin \mathcal {C}.$$

The idea of the proof is to choose refunds that render all member countries’ aspired abatement levels under the RS best responses to the given abatement levels of all other countries (within and outside the coalition). As shown in the proof of Proposition 4 in the “Appendix”, the RS is characterized by a uniquely determined sequence of refunds $$\left\{ \tilde{R}_t\right\} _{t=0,\dots ,T-1}$$ and a weighting scheme $$\left\{ \tilde{\lambda }_t^i\right\} _{t=0,\dots ,T-1}^{i\in \mathcal {C}}$$.
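Given such a sequence of refunds, the fund mechanics of Eqs. (10) and (11) are straightforward to simulate; the minimal fund needed will be formalized in Eq. (14) below. The following sketch is illustrative only, with self-chosen names:

```python
def simulate_fund(f0, R, rho):
    """Fund dynamics of Eq. (10): f_{t+1} = (1 + rho) (f_t - R_t).

    R = (R_0, ..., R_{T-1}); the residual f_T is the terminal payout R_T,
    repaid in equal shares according to Eq. (11).
    """
    f = [f0]
    for R_t in R:
        if R_t > f[-1]:
            raise ValueError("refund exceeds fund: scheme infeasible")
        f.append((1 + rho) * (f[-1] - R_t))
    return f                      # f[len(R)] is the terminal payout f_T

def minimal_initial_fund(R, rho):
    """Eq. (14) below: the smallest fund financing the refunds R_0, ..., R_{T-1}."""
    return sum(R_t / (1 + rho)**t for t, R_t in enumerate(R))
```

With `f0 = minimal_initial_fund(R, rho)`, the terminal payout $$f_T$$ is exactly zero; any excess contribution accrues interest within the fund and is repaid in the final period.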
Yet, the set of initial fees is not unambiguously determined. In fact, all sets of initial fees, the sum of which exceeds the minimal initial global fund $$\tilde{f}_0$$ with

\begin{aligned} \tilde{f}_0 = \sum _{t=0}^{T-1} \frac{\tilde{R}_t}{(1+\rho )^t}, \end{aligned}
(14)

render a feasible RS that implements the treaty with aspired coalition abatement paths $$\left\{ \tilde{a}_t^i\right\} _{t=0,\dots ,T}^{i \in \mathcal {C}}$$. The intuition is that the global fund needs the minimum size $$\tilde{f}_0$$ in order to be able to pay sufficiently high refunds $$\tilde{R}_t$$ such that countries stick to the aspired abatement levels in all periods. Any excess funds are redistributed in equal shares in the last period, in which the abatement level is zero independently of the refund. Even if we restrict attention to the minimal initial global fund $$\tilde{f}_0$$, we are free to choose how to distribute the burden of raising the initial fund across countries.

### Proposition 5

(Uniqueness of the RS) For a given treaty characterized by the coalition $$\mathcal {C}$$, a time horizon T and a set of feasible coalition abatement paths $$\left\{ \tilde{a}_t^i\right\} _{t=0,\dots ,T}^{i \in \mathcal {C}}$$, the refunding scheme is unique only with respect to the minimal initial global fund $$\tilde{f}_0$$. In particular, there exists a feasible set of initial fees $$f_0^i$$ satisfying $$\sum _{i \in \mathcal {C}} f_0^i = \tilde{f}_0$$ such that the RS constitutes a Pareto improvement over the decentralized solution for all coalition members $$i \in \mathcal {C}$$.

The intuition for this result is that, compared to the decentralized solution, all countries would be better off under the RS if their initial fee were zero. As a consequence, there is a positive initial fee $$\hat{f}_0^i$$ that would leave country i equally well off under the RS compared to the decentralized solution. In the proof of Proposition 5 in the “Appendix”, we show that the sum $$\hat{f}_0 =\sum _{i \in \mathcal {C}} \hat{f}_0^i$$ exceeds $$\tilde{f}_0$$. As a consequence, we can set the initial fees below $$\hat{f}_0^i$$, making all countries better off.Footnote 15

In summary, we have shown that the RS can implement any feasible coalition abatement path and gives ample freedom in how to raise the necessary initial fund. The former feature of the RS may be important, as, in general, international climate policy is not shaped by standard economic cost-benefit analysis such as the derivation of the global social optimum in Sect. 3.1. In fact, the policy goal, for which global consensus is sought, is to limit greenhouse gas emissions to such an extent that the increase in global mean surface temperature does not exceed 2 $$^{\circ }\hbox {C}$$ above preindustrial levels (see, for example, EU 2005; UNFCCC 2009, 2015). As the global mean surface temperature increase is predominantly determined by cumulative greenhouse gas emissions, such a temperature goal can be translated into a stock of permissible cumulative greenhouse gas emissions, a so-called global carbon budget.
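Under the linear relation between warming and cumulative emissions cited in Sect. 2, this translation is a one-liner. The sketch below is a stylized illustration; the proportionality constant `tcre` (the transient climate response to cumulative emissions) is an assumption of the example, not a quantity used in our formal analysis:

```python
def carbon_budget(temp_target, temp_now, s_now, tcre):
    """Permissible cumulative emissions (in GtC) for a temperature target,
    assuming warming is linear in cumulative emissions with slope tcre
    (degrees C per GtC)."""
    return s_now + (temp_target - temp_now) / tcre
```

For example, with an assumed `tcre` of 0.0015 °C per GtC, a remaining warming allowance of 1 °C corresponds to a remaining budget of roughly 667 GtC on top of the emissions accumulated so far.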
Starting from the current stock of cumulative greenhouse gas emissions, estimates for this remaining carbon budget—as of the beginning of 2018—roughly range between 320 and 555 billion tonnes of carbon (GtC) (IPCC 2018).Footnote 16 Thus, Proposition 4 says that once the world community has agreed on an abatement path, the RS is able to implement it as a unique subgame perfect Nash equilibrium, no matter how ambitious this abatement path is compared to the global social optimum. In particular, the RS is compatible with the idea of “nationally determined contributions”, as detailed in Article 3 of the Paris Agreement (UNFCCC 2015).

The latter feature of the RS implies that it can not only achieve a Pareto improvement, but can in fact implement any distribution of the cooperation gain, i.e., the difference in the net present value of total global costs between the decentralized solution and the global social optimum, if we allow initial fees to be negative for at least some countries. This property of the RS to disentangle efficiency and distributional concerns is helpful in achieving initial participation, as we shall discuss in Sect. 7.1.

## The Modesty Approach to Refunding

So far, we have focused on the compliance problem, i.e., how to incentivize member countries to stick to the aspired abatement paths of a given international environmental agreement. Although Proposition 5 has established that all member countries can be made better off under the RS compared to the fully decentralized solution, as characterized by Proposition 1, this does not imply that any treaty is stable in the sense that all member countries have an incentive to join the coalition in the first place.Footnote 17 As already mentioned in Sect. 3.2, it is crucial to characterize how aspired abatement paths would change if any coalition member were to leave the treaty (or, more precisely, not to participate in the treaty in the first place). In the following, we showcase how questions of participation and compliance can be discussed simultaneously by applying the RS, as characterized in Sect. 4, to an intertemporal extension of the modest international environmental agreement approach developed by Finus and Maus (2008).

### An Intertemporal Extension of Modesty

The standard coalition formation game is a two-stage game, in which all countries in the first stage simultaneously decide whether to join an international agreement. In the second stage, all countries simultaneously set emission abatement levels. Non-member countries choose abatement levels non-cooperatively by minimizing their own domestic costs, taking the abatement levels of all other countries as given, while coalition members are supposed to choose emission abatement levels such as to minimize the sum of total domestic costs over all member countries. Finus and Maus (2008) allow for modest international environmental agreements by specifying that member countries only internalize a fraction $$\mu \in [0, 1]$$ of the externalities within the coalition.

Applying this idea to our intertemporal model framework results in the following two-stage game:

1. At the beginning of period $$t=0$$ all countries simultaneously decide whether to join an international environmental agreement.

2. In all periods $$t=0,\dots ,T$$ all countries simultaneously decide on emission abatement levels.
(a) Non-member countries choose abatement levels minimizing the net present values of their total domestic costs, taking the abatement levels of the coalition and the other non-members as given, resulting in abatement paths $$\left\{ \check{a}_t^i\right\} _{t=0,\dots ,T}^{i \notin \mathcal {C}}$$ as characterized by Proposition 3.

(b) Members of the coalition $$\mathcal {C}$$ set abatement levels such as to

\begin{aligned} \min _{\left\{ a^i_t\right\} ^{i \in \mathcal {C}}_{t=0,\dots ,T}} \sum _{t=0}^T \delta ^{t} \sum _{i \in \mathcal {C}} \left[ \frac{\alpha _i}{2}\left( a^i_t\right) ^2 + \mu \frac{\beta _i}{2} s_t^2\right] , \end{aligned}
(15)

taking the emission levels of non-member countries as given.

The parameter $$\mu$$ in Eq. (15) denotes the degree of modesty: it is the fraction of externalities the coalition internalizes among its members. This formulation essentially entails a more modest emission reduction goal. The higher $$\mu$$, the higher the emission abatement goal of the coalition. For $$\mu =1$$ the treaty internalizes all externalities coalition members impose on each other, which is the assumption of the standard coalition formation set-up.

### Combining Modesty and Refunding

While assuming that the coalition sets abatement levels according to Eq. (15) allows for a parsimonious way to reconcile the empirical observation of “large but modest” agreements with the prediction of the coalition formation framework, one might ask why coalition members should comply with these aspired abatement paths of the treaty, as they are, in general, not in their best interest (in terms of minimizing the net present value of total domestic costs). This is where the RS, as characterized in Sect. 4, comes into play. As we shall prove in Proposition 6, the aspired abatement paths characterized by Eq. (15) constitute feasible coalition abatement paths that can be implemented via an appropriate RS by virtue of Proposition 4. Thus, the RS serves as a microfoundation to implement the aspired abatement paths characterized by the modest coalition formation framework.

As usual, we analyze the intertemporal modest coalition formation game with refunding by backward induction, i.e., we first characterize the subgame perfect Nash equilibrium of the second stage, for a given time horizon T and a given coalition $$\mathcal {C}$$.

### Proposition 6

(Abatement paths in SPE of second stage) For any time horizon $$T<\infty$$, any given coalition $$\mathcal {C}$$ and any degree of modesty $$\mu$$, there exists a unique subgame perfect Nash equilibrium of the game in which all non-member countries $$i \notin \mathcal {C}$$ choose domestic abatement levels in every period $$t=0,\dots ,T$$ to minimize the net present value of total domestic costs, and all member countries $$i \in \mathcal {C}$$ set abatement levels according to Eq. (15).
The subgame perfect Nash equilibrium is characterized by emission abatement paths $$\left\{ \check{a}_t^i\right\} _{t=0,\dots ,T}^{i\notin \mathcal {C}}$$ for all countries $$i \notin \mathcal {C}$$ and $$\left\{ \tilde{a}_t^i\right\} _{t=0,\dots ,T}^{i\in \mathcal {C}}$$ for all countries $$i \in \mathcal {C}$$ and a corresponding path $$\left\{ s_t\right\} _{t=0,\dots ,T}$$ for the stock of cumulative GHG emissions.

The proof of Proposition 6 in the “Appendix” is constructive, as we derive the unique closed-form solutions of the abatement paths in the subgame perfect Nash equilibrium of the second stage. Moreover, we show that the decentralized solution, as given by Proposition 1, and the global social optimum, as characterized by Proposition 2, are boundary solutions of Proposition 6, which apply when the coalition consists of at most one member country, or when all countries are members of the coalition and $$\mu =1$$. As a consequence, the assumptions of Proposition 4 apply, and any feasible abatement path of the modest coalition formation game, as defined in Eq. (13), can be implemented by an appropriate RS.

Having solved the compliance problem in the second stage by employing the RS, we can now turn to the participation problem in the first stage. Anticipating the outcome of the second stage, a coalition is a subgame perfect Nash equilibrium outcome of the first stage if no country has an incentive to unilaterally change its membership status. Thus, no member country $$i \in \mathcal {C}$$ must be better off if it left the coalition, and no non-member country $$i \notin \mathcal {C}$$ must be better off if it joined the coalition. If we denote, for any given coalition $$\mathcal {C}$$ and modesty parameter $$\mu$$, the net present value of total domestic costs of member countries $$i \in \mathcal {C}$$ by $$\tilde{K}_i(\mathcal {C},\mu )$$ and the net present value of total domestic costs of non-member countries $$i \notin \mathcal {C}$$ by $$\check{K}_i(\mathcal {C},\mu )$$, then the conditions of internal and external stability read in our transferable utility set-up:Footnote 18

\begin{aligned}&\sum _{j \in \mathcal {C}} \tilde{K}_j(\mathcal {C},\mu ) - \sum _{j \in \mathcal {C}{\setminus } i} \tilde{K}_j (\mathcal {C}{\setminus } i,\mu ) \le \check{K}_i(\mathcal {C}{\setminus } i,\mu ),\qquad \forall \ i \in \mathcal {C}, \end{aligned}
(16a)

\begin{aligned}&\sum _{j \in \mathcal {C} \cup i} \tilde{K}_j(\mathcal {C} \cup i,\mu ) - \sum _{j \in \mathcal {C}} \tilde{K}_j(\mathcal {C},\mu ) > \check{K}_i(\mathcal {C},\mu ),\qquad \forall \ i \notin \mathcal {C}\ . \end{aligned}
(16b)

We note that the stability conditions can be formulated without explicitly invoking the RS. The reason is twofold. First, the RS does not change the sum of the net present value of total domestic costs over all member countries, as the net present value of all refunds is, by construction, equal to the initial fund. Second, according to Proposition 6, there exists an appropriate RS for any coalition structure $$\mathcal {C}$$ such that it is in the best interest of coalition members to stick to the agreement.
Therefore, the conditions of internal and external stability implicitly also involve all elements of the RS in the different coalition arrangements $$\mathcal {C}, \mathcal {C}{\setminus } i$$ and $$\mathcal {C} \cup i$$, namely how much a country in the coalition has to pay into the initial fund, how much it will abate inside and outside the coalition, how large a refund it will obtain when in the coalition, and how much damage occurs. Hence, for instance, initial contributions have to be chosen such that the stability conditions are met for each individual country.

We note that the refunding approach with its transfers balances asymmetries between countries in a coalition in an optimal way,Footnote 19 i.e., so as to achieve the abatement objective of the coalition by providing incentives for countries to comply and by making sure that countries want to join the coalition.

Even in the static modest international environmental agreement framework it is not possible to analyze coalition stability analytically for quadratic damage functions and heterogeneous countries. As a consequence, we concentrate on a particularly interesting case, calibrate our model and derive numerical results. The particular question we want to address is for what level of modesty $$\mu$$ the grand coalition $$\mathcal {C}=\mathcal {I}$$ can be stabilized. For the grand coalition, only internal stability (16a) is relevant, which can be re-arranged to yield:

\begin{aligned} \sum _{j \in \mathcal {I}} \tilde{K}_j(\mathcal {I},\mu ) \le \sum _{i \in \mathcal {I}} \check{K}_i(\mathcal {I} {\setminus } i,\mu )\ . \end{aligned}
(17)

Thus, the grand coalition is stable if it can guarantee all countries a lower net present value of total domestic costs when they participate in the treaty than when they unilaterally leave it.

The approach opens up a wide range of further interesting issues, which we leave for future research. For instance, is it globally optimal to stabilize the grand coalition with an appropriate value of $$\mu$$, instead of being less modest and having only a smaller coalition be stable? Or could better results be achieved by having several smaller regional coalitions with their own abatement objectives and associated refunding schemes?

## Numerical Illustration

To give an idea of the degree of modesty that renders the grand coalition stable and the corresponding order of magnitude of the initial fund $$f_0$$ needed to implement it via an appropriate RS, we run a numerical exercise. Due to the highly stylized model, the results are a numerical illustration rather than a quantitative analysis.

We follow the RICE-2010 model (Nordhaus 2010) in dividing the world into twelve regions, each of which we assume to act as a “country”, as detailed in Sect. 2.Footnote 20 We also take the “business-as-usual” (BAU) emissions for all twelve regions from Nordhaus (2010). The RICE-2010 model assumes a backstop technology, the price of which decreases over time until it fully crowds out fossil-fuel-based energy technologies by 2265.
As a consequence, global CO$$_\text {2}$$ emissions drop to zero in 2265 in the BAU scenario, in which cumulative global CO$$_\text {2}$$ emissions of 5679.6 GtC have been released into the atmosphere (we assume that cumulative global CO$$_\text {2}$$ emissions prior to 2015 amount to 550 GtC).

In the global social optimum of the RICE-2010 model, the long-run cumulative global emissions amount to 1470.8 GtC, which implies an increase of the average global surface temperature of approximately 2.9–3 $$^{\circ }\hbox {C}$$ over preindustrial levels. In addition, carbon neutrality, i.e., zero global GHG emissions, is only reached by 2155. In light of the Paris Agreement and the recent announcements by the US, EU and China, among other countries, to become carbon neutral by 2050 and 2060, respectively, the RICE-2010 model’s global social optimum appears somewhat outdated. Unfortunately, there is no updated version of the RICE-2010 model. We deal with this issue in a two-step procedure.

First, we calibrate our model in such a way that the global social optimum in our model resembles the optimal solution of the RICE-2010 model as closely as possible. To this end, we calibrate the relative damage parameters for each region by fitting quadratic functions to the damage functions used in the RICE-2010 model. Then we re-scale all damage parameters such that damages in the BAU scenario in the year 2095 amount to 12 trillion USD or 2.8% of global output, as in Nordhaus (2010, p. 11723). We calibrate the abatement cost parameters such that the emission paths in the global social optimum of all twelve regions resemble the optimal solution of the RICE-2010 model as closely as possible, under the constraint that the abatement cost parameters decline at a common and constant rate. Table 1 shows the calibrated abatement and damage cost parameters for all twelve world regions. Abatement cost parameters decrease at the rate of $$\xi =1.65\%$$ per year, implying a drop of approximately 15.1% per decade.Footnote 21 In line with the RICE-2010 model, we employ a discount rate of 5% per year, which corresponds to a discount factor of $$\delta = 0.6139$$ for each ten-year period. While it is not possible to perfectly mimic the outcome of a sophisticated integrated assessment model such as the RICE-2010 model with our simple theoretical model, both global GHG emissions and cumulative global emissions match reasonably well (see upper graphs in Fig. 1).

Second, we increase the rate at which the abatement cost parameters decline to $$\xi =5\%$$ per year, implying a decadal drop of 38.6%. Under these conditions, our model yields a global social optimum in which carbon neutrality is reached by 2065 and cumulative global emissions level off at 874.6 GtC. This corresponds to an average global surface temperature increase between 1.7 and 1.8 $$^{\circ }\hbox {C}$$, which we consider compatible with the goals of the Paris Agreement. In this “Paris compatible” calibration, the long-run level of cumulative global emissions in the decentralized solution amounts to 1550.1 GtC, which approximately corresponds to a 3.1 $$^{\circ }\hbox {C}$$ increase of average global surface temperature. In this scenario, carbon neutrality would only be achieved by 2115 (see lower graphs in Fig. 1).

Seeking the upper bound on the degree of modesty for which the grand coalition is just stable in the Paris compatible calibration, we find $$\mu =26.594\%$$.
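Numerically, this threshold is obtained by searching over $$\mu$$ and checking the internal-stability condition (17) for each candidate. A schematic sketch follows; it assumes that stability is monotone in $$\mu$$, and the helpers `coalition_costs` and `outsider_cost` are hypothetical placeholders for solvers of Propositions 3 and 6 (e.g., built along the lines of the recursion sketched in Sect. 3):

```python
def grand_coalition_stable(mu, coalition_costs, outsider_cost, n):
    """Internal-stability check of Eq. (17) for the grand coalition.

    coalition_costs(mu)  -> list of the members' costs when all n countries
                            abate according to Eq. (15) with modesty mu
    outsider_cost(mu, i) -> cost of country i when i stays outside the
                            coalition of the remaining n - 1 countries
    """
    lhs = sum(coalition_costs(mu))
    rhs = sum(outsider_cost(mu, i) for i in range(n))
    return lhs <= rhs

def modesty_threshold(coalition_costs, outsider_cost, n, tol=1e-6):
    """Bisection for the largest stable mu in [0, 1], assuming monotonicity."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if grand_coalition_stable(mid, coalition_costs, outsider_cost, n):
            lo = mid          # still stable: try a more ambitious coalition
        else:
            hi = mid
    return lo
```

In our calibration, such a search returns the threshold of $$\mu \approx 0.266$$ reported above.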
In this case, long-run cumulative global emissions amount to 1204.7 GtC, which closes approximately half of the gap between the decentralized solution and the global social optimum. Yet, with a temperature increase of approximately 2.4 $$^{\circ }\hbox {C}$$, the stable grand coalition would fall short of the 2 $$^{\circ }\hbox {C}$$ temperature target.

An initial fund of 2.64 tril. USD or 0.326% of 2015 world GDP is needed to implement the stable grand coalition via an RS. In addition, Table 2 shows the net present value of refunds and the maximum initial fees a region is willing to pay to join the treaty for all twelve regions, in tril. USD and in % of world GDP. By construction, the sum of the initial fees $$\hat{f}_0^i$$, which make regions indifferent as to whether to join the treaty, amounts to the same total of 2.64 tril. USD or 0.326% of 2015 world GDP.

Table 3 shows the development of the refund over time. We observe that total refunds start at a very low level of 19.2 bil. USD per annum, corresponding to 0.024% of world GDP, and rise continuously until they peak in 2075 at 797 bil. USD per annum, corresponding to 0.253% of world GDP. After that, they decline sharply, and by 2115 no refunds have to be paid anymore, as net zero GHG emissions have been reached even in the decentralized solution. While this general pattern of global refunds is mimicked by the individual regions, there are some differences in magnitude. China receives the highest refunds, both in absolute terms and as a share of its GDP, peaking in 2075 at 242.03 bil. USD, corresponding to 0.546% of its GDP. Africa has the lowest relative peak refund in 2075 of 34.32 bil. USD or 0.131% of its GDP. The marginal abatement costs start at 49.3 USD per ton of CO$$_\text {2}$$ (equalling 180.93 USD per tC) and rise until 2075, when they peak at 73.6 USD per ton of CO$$_\text {2}$$ (or 270.11 USD per tC).

In summary, the refunding scheme can stabilize a grand coalition that bridges half the gap between the global social optimum and the decentralized solution in our Paris compatible model calibration. While the net present value of funds needed is sizeable at 2.64 tril. USD, it is not out of reach when compared to other funds raised in situations of global crisis, such as the latest global financial crisis or the Corona pandemic. We would like to stress that our quantitative exercise is only an illustrative example to gauge the order of magnitude of the required initial fund.

## Discussion

So far, we have focused, first, on how a refunding scheme can implement any goal a coalition has set and, second, on how coalition stability, and in particular the stability of the grand coalition, can be achieved. We have seen that a refunding scheme transforms the intertemporal climate-policy problem into a standard, static public-goods problem. Once all countries in the stable coalition have made their initial contributions and have agreed on the refunding parameters, they follow the envisioned path of abatement voluntarily, as deviating would mean forfeiting refunds.

Numerous further issues are relevant for the design of refunding schemes and the use of Refunding Clubs. We address them in this section, as they deserve further scrutiny.

### Increasing Initial Fees and a Refunding Club

At the initial stage, when countries are pondering whether to sign the treaty and to pay the initial fee, the free-rider problem is present.
If all other countries participate and aggregate initial fees are high, a country staying outside would benefit from all other countries’ abatement efforts without having to pay the initial fee and to compete for refunds. Hence, the question is how better solutions than the one induced by the modesty approach could be achieved.

The ideal solution lies in making countries—and large countries, in particular—pivotal for the formation of a coalition, with high initial fees. In order to achieve such a scenario, about ten to twenty of the largest greenhouse gas emitters must coordinate and agree that coalition formation and the refunding scheme fail if any of them defects.Footnote 22

As full participation by all countries at once is unlikely, it is useful to resort to sequential procedures where a subset of countries makes a start and the others follow later (see Andreoni 1998; Varian 1994). We envision four steps. First, as suggested in the last paragraph, a set of large countries could initiate the system by paying initial fees and form a Refunding Club. In particular, if the US, the EU countries, China and maybe India were to start the system with significant initial fees, this would constitute a Refunding Club covering the largest share of greenhouse gas emissions. In addition, if wealthy countries pay substantially larger initial fees than the modesty approach suggests, such a Refunding Club would be powerful enough to slow down climate change significantly.

Second, smaller rich countries could follow, which would increase the fund’s initial wealth. In the third and fourth steps, larger and smaller developing countries could be invited to join the Refunding Club. Regarding the payment of initial fees, they should be treated differently, as we will discuss next.

The successful implementation of a refunding scheme only depends on raising the minimum initial global fund, but not on the individual countries’ contributions to it. Thus, within a coalition, the refunding scheme is able to disentangle efficiency from distributional concerns. Yet, in reality, the distribution of initial fees to raise the initial global fund is of great importance. For example, many developing countries may lack the necessary wealth to pay the initial fees, or countries in transition may refuse to pay high initial fees, arguing that, historically, the current atmospheric greenhouse gas concentrations were caused by industrialized countries. To induce participation, the payment of initial fees could be differentiated according to different distributional criteria such as stage of development, current greenhouse gas emissions or historical responsibility with respect to atmospheric greenhouse gas concentrations.Footnote 23 Thus, refunding schemes and the payment of initial fees can be made compatible with the concept of “common but differentiated responsibilities and respective capabilities”, as detailed in Article 4 of the Paris Agreement (UNFCCC 2015). To sum up, allocating the burden of the initial fees is a tool that can address distributional concerns, since differentiated fees indirectly implement transfers across countries.

### Raising Initial Fees

Even differentiated initial fees cannot circumvent the problem that the sustainable refunding scheme relies on successfully raising the minimum initial global fund. As this fund may be quite large, even in the modesty solution, we outline two ways in which it might be financed.

Raising the minimum initial fund in full at the beginning of the treaty is not necessary.
We can also achieve a coalition solution in which a smaller amount of money is paid repeatedly. To see this, let $$\{R_t\}_{t=0}^T$$ be the sequence of refunds in a solution envisioned by a coalition. In addition, we define the sequence of fees $$f_t(\Delta )$$ for a time span $$\Delta > 0$$ by

\begin{aligned} f_t(\Delta ) = \sum \limits _{\tau = 1}^{\Delta } \frac{R_{t+\tau }}{(1+\rho )^\tau }\ . \end{aligned}
(18)

If $$f_t(\Delta )$$ is paid into the fund at times $$t=0,\Delta , 2\Delta ,\ldots$$, the net present value of the fund is equal to the initial fund $$f_0= \sum _{t=1}^{T-1}\left[ \frac{R_t}{(1+\rho )^t}\right]$$ and, thus, the same solution can be achieved as when $$f_0$$ is initially paid in full.

With the repeated payments scheme, we face a trade-off between high initial fees and the property of an RS that transforms an intertemporal climate-policy problem into a static public-goods problem. In particular, if the time span $$\Delta$$ is short, the solution of the problem of a coalition relies on the repeated commitment of all countries, as the initial participation problem would have to be solved whenever new payments have to be made. Therefore, $$\Delta$$ should not be too small.

If the repeated solution to the initial participation problem turns out to be a major obstacle to international cooperation, raising the initial fees by allowing countries to borrow money may be more advisable. Countries could then borrow either from the international capital market or directly from the administering agency of the RS. In the latter case, no actual initial money flows would be needed, since the initial fee would then simply be a liability of countries at the administering agency. In turn, future refund claims would be reduced or could even become negative, as countries would have to pay interest and ultimately pay back their liabilities to the agency. Hence, borrowing from the agency appears like a Munchhausen solution to the problem of raising initial fees.

However, at least two problems may arise. First, if countries borrow a large amount from the agency, they may later only receive a small payment or may even have to pay when refunds and repayment obligations are netted. Hence, countries might be tempted to renounce high abatement efforts and to default on their repayment obligations to the agency. The country would then lose all claims to refunds. However, as such refunds are small when abatement efforts are small, such a strategy may be profitable. That is, a country could choose to default against the administering agency and free-ride on the abatement efforts of other countries even if it has signed the treaty and has borrowed from the agency. Such considerations suggest that countries should rather be made to borrow on the international capital market.

Second, if countries borrow a large amount on international capital markets, the default risk may rise if outstanding government debt is already at a high level. If a country needs to pay a higher interest rate than the risk-free rate, as investors demand a positive risk premium, further borrowing may increase the default risk, as refunds are insufficient to cover the interest-rate payments.
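Whichever financing route is chosen, the arithmetic of the scheme reduces to present-value accounting. As a concrete illustration of the equivalence stated after Eq. (18), the following sketch (a minimal Python example with purely hypothetical values for the discount rate, the horizon and the refund path, none of which are taken from our calibration) verifies numerically that the discounted value of the repeated fees $$f_t(\Delta )$$ coincides with the one-off initial fund $$f_0$$:

```python
import numpy as np

rho = 0.03    # hypothetical annual discount rate
T = 100       # hypothetical treaty horizon
rng = np.random.default_rng(seed=0)
R = rng.uniform(0.0, 1.0, T + 1)   # stand-in refund path; R[t] plays the role of R_t

# One-off initial fund: f_0 = sum_{t=1}^{T-1} R_t / (1 + rho)^t
f0 = sum(R[t] / (1 + rho) ** t for t in range(1, T))

# Repeated fees, Eq. (18): f_t(Delta) = sum_{tau=1}^{Delta} R_{t+tau} / (1 + rho)^tau,
# truncated at the horizon so that no refund is counted twice.
def fee(t, delta):
    return sum(R[t + tau] / (1 + rho) ** tau
               for tau in range(1, delta + 1) if t + tau < T)

# Net present value of fees paid at t = 0, Delta, 2*Delta, ...
delta = 10
npv_repeated = sum(fee(t, delta) / (1 + rho) ** t for t in range(0, T, delta))

print(f"one-off fund  : {f0:.10f}")
print(f"repeated fees : {npv_repeated:.10f}")   # equal up to floating-point error
```

The blocks $$(t, t+\Delta ]$$ tile the horizon, so each discounted refund enters the repeated-fee total exactly once; this is all the equivalence requires.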
When high risk premia make further borrowing too costly, it is more efficient to raise part of the initial fund by taxes over several periods.

### Information Requirements and Reaction to Unforeseen Shocks

The design of any RS rests on the bold assumption that all exogenous parameters are constant and, in particular, that they are known ex ante. These are demanding informational requirements.

We distinguish between temporary and permanent changes of parameters. Temporary shocks to the parameters do not inhibit the long-run behavior of the refunding scheme, because of the global convergence to the first-best solution. It is likely that initial expectations about the discount/interest rate $$\delta$$, the abatement cost parameters $$\alpha _i$$, the damage cost parameters $$\beta _i$$, and the business-as-usual emissions $$\epsilon _i$$ turn out to be incorrect and that at some time t, new information on one or several of these parameters arrives. In particular, technological progress may substantially change the abatement cost parameters.

Permanent changes in the exogenously given parameters would, in general, change the necessary refunds for a RS corresponding to a given feasible coalition abatement path.Footnote 24 Moreover, in general, the aspired coalition abatement path itself would change, for example, the abatement path and the degree of modesty that render the grand coalition stable.

To accommodate permanent changes in the exogenous parameters, the RS could include a clause that the values of these parameters are re-evaluated on a regular basis (e.g., every ten years) and that the fund’s wealth is corrected accordingly, either by raising additional money from the members or by paying back wealth to member countries.

Even if revisions cannot be made frequently, the refunding scheme offers some built-in corrections. For example, when marginal damages increase, the individually optimal abatement efforts for a given refunding scheme also increase. However, the extent to which such built-in reactions to parameter changes correct deviations from the first-best solution or from a coalition solution is beyond the scope of this paper, but constitutes an important avenue for future research.

### Sustainable Climate Treaties in Overlapping Generation Frameworks

So far, we have focused on the properties of a RS and on how the implementation of such a scheme can be eased through repeated payments or through the use of capital markets. Still, we have assumed that the countries’ interests can be represented by a long-lived social planner.

The implementation of sustainable refunding schemes is more difficult in overlapping generation models, in which each generation is predominantly concerned about its own welfare. Then, setting up a refunding scheme hurts the old (existing) generations and benefits future generations—and possibly young existing generations—via two channels. First, the benefits from higher abatement today mainly accrue to future generations. This was the focus of important papers by Bovenberg and Heijdra (1998, 2002).Footnote 25 Public debt policies can help redistribute the welfare gains from increased abatement more equally across generations. Essentially, by issuing (more) public debt today and by having future generations pay it back, the welfare of current generations can be increased at the expense of future generations.
Additional effects, such as a potential crowding out of physical capital investments and the reduction of distortionary taxation, affect the balance between current and future generations.

Second, current generations must set up the fund and are thus, in principle, required to channel some of their savings towards the payment of initial fees. Since the global fund also invests, such savings may not necessarily decrease capital accumulation, but as future generations inherit the global fund for their own refunding, setting up the global fund decreases the welfare of current generations. Again, to redistribute the burden of setting up the global fund more equally across generations, one might implement repeated payments, as discussed above, or again use public debt to increase the disposable income of current old generations.

In principle, the use of public debt can engineer trade among generations, can ease the implementation of sustainable refunding schemes and opens up the possibility of achieving Pareto-improving climate policies across generations. However, with much higher public debt levels after the Covid-19 pandemic in many countries, the scope for further increases of public debt is quite limited.

## Conclusion

In this paper, we have shown that a refunding scheme, which is a rule-based treaty offering monetary incentives for emission abatement to member countries that are proportional to their relative abatement efforts, may promote sustained international cooperation with respect to anthropogenic climate change. The RS provides a simple blueprint for an international treaty on climate change and depends on a small number of parameters.

Yet, the RS is no panacea, as free-rider problems have no perfect solutions. For example, our numerical illustration shows that implementing a stable grand coalition in the modesty approach, which stabilizes average surface temperature at approximately 2.4 $$^{\circ }\hbox {C}$$, requires funds in the amount of 2.64 tril. USD. The Green Climate Fund (GCF),Footnote 26 the existing real-world institution closest to our refunding scheme, has set itself the goal of raising 100 bil. USD per year starting from 2020, but has great difficulties in securing the pledges for these sums; measured against this benchmark, the required fund is considerably high. Yet, it is comparable to the sums raised to counter other global crises such as the latest financial crisis or the Corona pandemic. Still, the industrialized countries would have to shoulder a large share of the initial fees.

We stress that a decisive difference between the GCF and the RS is that the RS refunds money according to a simple and transparent rule (which is already known when initial fees are raised), while the GCF is governed by a 24-member board that decides which projects will be financed by the fund after the money has been raised.

No doubt, the practical implementation of the refunding scheme within a Refunding Club, as developed in this paper, requires a variety of additional considerations. In the last section, we have discussed how to achieve better initial participation, and we have outlined several ways of raising initial fees. Other issues, such as the governance of the administering agency or the stimulation of technological progress in abatement technologies, will need thorough investigation in future research.

1.
In this protocol, the industrialized countries of the world, so-called “Annex B countries”, committed themselves to a reduction of greenhouse gas (GHG) emissions by 5.2% against 1990 levels over the period from 2008 to 2012.

2. Our model is a dynamic stock pollution game similar to Dockner and Van Long (1993), but generalized to n players, in discrete time. In addition, we use a carbon budget approach (see Sect. 2 for details), i.e., we abstract from stock depreciation.

3. Non-cooperative and cooperative approaches have been pursued. Many authors have stressed that the grand coalition is not stable if an individual defection does not destroy any coalition formation (e.g., Carraro and Siniscalco 1993; Eyckmans et al. 1993; Barrett 1994; Tol 1999; Bosetti et al. 2009). Typically, in such circumstances, stable coalitions only contain a limited number of countries (see, e.g., Hoel 1991; Carraro and Siniscalco 1992; Finus and Caparrós 2015a, b). d’Aspremont et al. (1983) have conducted an original analysis and have introduced the definitions of internally and externally stable coalitions. Pioneering in the modelling of coalition structures are Bloch (1997) and Yi (1997).

4. In this respect our model shares some similarities with Harstad (2020), who analyzes a dynamic model inspired by the pledge-and-review mechanism of the Paris Agreement to account for a variety of different empirical observations of international environmental agreements.

5. If complete contracts on emission reductions could be written between countries, a first-best solution would be easily achieved including, of course, the initial participation problem. Harstad (2012, 2016) and Battaglini and Harstad (2016) analyze the interaction between decisions on emission levels and investments into low-carbon energy technologies in dynamic games with incomplete contracting, i.e., when countries can contract on emission reductions but not on investment. We abstract from technology investments and scrutinize what refunding can achieve in case countries cannot contract ex ante on emission reductions.

6. Dynamic games on the voluntary provision of public goods have been significantly developed (see Wirl 1996; Dockner and Sorger 1996; Sorger 1998; Marx and Matthews 2000). Recent contributions include Dutta and Radner (2009), who examine agreements on mitigating climate change supported by inefficient Markov perfect equilibria.

7. In this respect, our argument is exactly opposite to Gerber and Wichardt (2013), who propose to split static public good problems into dynamic ones in order to reduce downside payments.

8. Throughout the paper, we denote the time horizon by T. Note that a time horizon of T comprises $$T+1$$ periods.

9. We do not restrict abatement to be at most as high as business-as-usual emissions, i.e., $$a^i_t \le \epsilon _i$$. Thus, we allow for negative net emissions, for example via afforestation or carbon capture and sequestration technologies.

10. This is a standard short-cut way of capturing aggregate abatement costs in country i (see, e.g., Falk and Mendelsohn 1993).

11. The equilibrium payoff $$W_{t+1}^i(s_{t+1})$$ is minus the discounted sum of the total domestic costs over the remaining time horizon starting from period $$t+1$$ in the subgame perfect equilibrium of the subgame starting in period $$t+1$$.

12. Despite being closed-form, the solutions are quite cumbersome and, thus, relegated to the “Appendix”.

13.
Both the global social optimum and the decentralized outcome are important benchmarks in evaluating the performance of potential international agreements. While the decentralized outcome is realized if no agreement takes place, the social optimum is the ultimate goal an international agreement seeks to implement. Obviously, any agreement has to outperform the decentralized outcome in order to be seriously considered, and it is the “better”, the closer its outcome is to the global social optimum.

14. While abatement paths with $${a_t^i}^\star < \tilde{a}_t^i$$ would also be “feasible” in a strictly technical sense and could also be implemented by the RS, we consider the global social optimum as a natural upper bound for the emission abatement levels of coalition members.

15. The set of initial payments implicitly defines a transfer scheme $$\mathcal {T}$$, as introduced in Sect. 3.

16. The main obstacles for translating an upper temperature bound for mean global surface temperature into a carbon budget are scientific uncertainties concerning the equilibrium climate sensitivity and the climate-carbon cycle feedback (see, e.g., Friedlingstein et al. 2011; Zickfeld et al. 2009).

17. In particular, treaties with large coalition sizes and high aspired abatement paths may not be stable, as countries may have an incentive not to participate in the treaty in the first place in order to free-ride on the abatement efforts of the remaining coalition members.

18. We apply the usually made assumption that countries stick with the coalition in case of indifference.

19. For optimal transfer systems in the presence of asymmetries, see Finus and McGinty (2019).

20. The twelve regions are: United States of America (US), European Union (EU), Japan, Russia, Eurasia, China, India, Middle East (MidEast), Africa, Latin America (LatAm), other high income countries (OHI) and Rest of the World (Others).

21. As in the RICE-2010 model, we use ten-year periods. However, we usually express all values in per annum terms.

22. In practice, these countries must be collectively stubborn and insist on full participation by this entire core group before going ahead.

23. A development-compatible refunding system has been developed in Gersbach and Hummel (2016).

24. However, whether a change in the exogenous parameters increases or decreases the necessary level of the global fund depends, in most cases, on the whole set of exogenous parameters, and the comparative static results for the global fund of the respective RS are quite complex.

25. How public debt can be used to strike an intergenerational bargain in the context of climate change is also addressed by Dennig et al. (2015), who propose several focal bargaining points.

26. The Green Climate Fund was formally established during the UNFCCC COP-16 meeting in Cancun in 2010. Its objective is to assist developing countries in adaptation and mitigation practices to counter climate change.

27. As $$A^\mathcal {C}_t$$ may be any arbitrary exogenously given path, there may not exist a steady state, and thus, there exists no constant particular solution to (29).

28. Another way to see that $${h_t^i}' (\tilde{A}_t^\mathcal {C})< 0$$ is by evaluating Eq.
(51) at $$\\tilde{A}_t^\\mathcal {C}$$ and re-writing it to yield:\n\n\\begin{aligned} {h_t^i}'(\\tilde{A}_t^\\mathcal {C})= -\\frac{\\tilde{\\lambda }_t^i \\tilde{R}_t \\left[ 2(\\alpha _i+ \\delta \\beta _i) \\tilde{A}_t^\\mathcal {C} - \\tilde{C}_t^i\\right] + \\alpha _i \\tilde{C}_t^i \\left( \\tilde{A}_t^\\mathcal {C} \\right) ^2}{\\left[ \\tilde{\\lambda }_t^i \\tilde{R}_t + \\alpha _i \\left( \\tilde{A}_t^\\mathcal {C}\\right) ^2\\right] ^2}\\ . \\end{aligned}\n\n$${h_t^i}'(\\tilde{A}_t^\\mathcal {C})$$ is negative, as the term in brackets in the numerator is larger as $$\\alpha _i \\tilde{a}_t^i + \\delta \\beta _i \\tilde{A}_t^\\mathcal {C} - \\tilde{C}_t^i$$, which, according to Eq. (47), is equal to $$\\tilde{\\lambda }_t^i \\tilde{R}_t A_t^{\\mathcal {C}-i}/\\left( \\tilde{A}_t^\\mathcal {C}\\right) ^2 > 0$$.\n\n## References\n\n• Allen MR, Frame DJ, Huntingford C, Jones CD, Lowe JA, Meinshausen M, Meinshausen N (2009) Warming caused by cumulative carbon emissions towards the trillionth tonne. Nature 458:1163–1166\n\n• Andreoni J (1998) Toward a theory of charitable fund-raising. J Polit Econ 106:1186–1213\n\n• Barrett S (1994) Self-enforcing international environmental agreements. Oxf Econ Pap 46:878–894\n\n• Battaglini M, Harstad B (2016) Participation and duration of environmental agreements. J Polit Econ 124:160–204\n\n• Bloch F (1997) New directions in the economic theory of the environment. Cambridge University Press, Cambridge\n\n• Bosetti V, Carraro C, De Cian E, Duval R, Massetti E, Tavoni M (2009) The incentive to participate in, and the stability of, international climate coalitions: a game theoretic analysis using the Witch model. Working Paper 64, FEEM\n\n• Bovenberg AL, Heijdra BJ (1998) Environmental tax policy and intergenerational distribution. J Public Econ 67:1–24\n\n• Bovenberg AL, Heijdra BJ (2002) Environmental abatement and intergenerational distribution. Environ Resour Econ 23:45–84\n\n• Carraro C, Siniscalco D (1992) The international protection of the environment: voluntary agreements among sovereign countries. In: Dasgupta P, Mäler KG (eds) The economics of transnational commons. Clarendon, Oxford\n\n• Carraro C, Siniscalco D (1993) Strategies for the international protection of the environment. J Public Econ 52:309–328\n\n• d’Aspremont C, Jacquemin A, Gabszewicz J-J, Weymark JA (1983) On the stability of collusive price leadership. Can J Econ 16:17–25\n\n• Dennig F, von Below D, Jaakkola N (2015) The climate debt deal: an intergenerational bargain. Mimeo, New York\n\n• Dockner EJ, Sorger G (1996) Existence and properties of equilibria for a dynamic game on productive assets. J Econ Theory 71:209–227\n\n• Dockner EJ, Van Long N (1993) International pollution control: cooperative versus noncooperative strategies. J Environ Econ Manag 24:13–29\n\n• Dutta PK, Radner R (2009) A strategic analysis of global warming: theory and some numbers. J Econ Behav Organ 71:187–209\n\n• EU. Presidency conclusions. Council of the European Union, 22nd and 23rd of March 2005\n\n• Eyckmans J, Proost S, Schokkaert E (1993) Equity and efficiency in greenhouse negotiations. Kyklos 46:363–397\n\n• Falk I, Mendelsohn R (1993) The economics of controlling stock pollutants: an efficient strategy for greenhouse gases. J Environ Econ Manag 25:76–88\n\n• Fershtman C, Nitzan S (1991) Dynamic voluntary provision of public goods. Eur Econ Rev 35:1057–1067\n\n• Finus M, Maus S (2008) Modesty may pay! 
J Public Econ Theory 10(5):801–826

• Finus M, Caparrós A (eds) (2015a) Game theory and international environmental cooperation. Edward Elgar, Cheltenham

• Finus M, Caparrós A (2015b) Introduction. In: Finus M, Caparrós A (eds) Game theory and international environmental cooperation. Edward Elgar, Cheltenham, pp xvii–xliv

• Finus M, McGinty M (2019) The anti-paradox of cooperation: diversity may pay! J Econ Behav Organ 157:541–559

• Friedlingstein P, Solomon S, Plattner G-K, Knutti R, Ciais P, Raupach MR (2011) Long-term climate implications of twenty-first century options for carbon dioxide emission mitigation. Nat Clim Change 1:457–461

• Gerber A, Wichardt P (2009) Providing public goods in the absence of strong institutions. J Public Econ 93:429–439

• Gerber A, Wichardt P (2013) On the private provision of intertemporal public goods with stock effect. Environ Resour Econ 55:245–255

• Gersbach H, Winkler R (2007) On the design of global refunding and climate change. CER-ETH Working Paper 07/69, CER-ETH—Center of Economic Research at ETH Zurich

• Gersbach H, Winkler R (2011) International emission permits markets with refunding. Eur Econ Rev 55:759–773

• Gersbach H, Winkler R (2012) Global refunding and climate change. J Econ Dyn Control 36:1775–1795

• Gersbach H, Hummel N (2016) A development-compatible refunding scheme for a climate treaty. Resour Energy Econ 44:139–169

• Harstad B (2012) Climate contracts: a game of emissions, investment, negotiations, and renegotiations. Rev Econ Stud 79:1527–1557

• Harstad B (2016) The dynamics of climate agreements. J Eur Econ Assoc 14:719–752

• Harstad B (2020) Pledge-and-review bargaining: from Kyoto to Paris. Mimeo

• Hoel M (1991) Global environmental problems: the effects of unilateral actions taken by one country. J Environ Econ Manag 20:55–70

• Hovi J, Skodvin T, Aakre S (2013) Can climate change negotiations succeed? Polit Gov 1:138–150

• IPCC (2013) Climate change 2013: the physical science basis. Contribution of working group I to the fifth assessment report of the intergovernmental panel on climate change. Cambridge University Press, Cambridge

• IPCC (2018) Global warming of 1.5 $$^{\circ }\hbox {C}$$. Intergovernmental Panel on Climate Change (IPCC)

• Marx LM, Matthews SA (2000) Dynamic voluntary contribution to a public project. Rev Econ Stud 67:327–358

• Matthews HD, Gillet NP, Scott PA, Zickfeld K (2009) The proportionality of global warming to cumulative carbon emissions. Nature 459:829–832

• Nordhaus WD (2010) Economic aspects of global warming in a post-Copenhagen environment. Proc Natl Acad Sci 107:11721–11726

• Sorger G (1998) Markov-perfect Nash equilibria in a class of resource games. Econ Theor 11:79–100

• Tol RSJ (1999) Kyoto, efficiency, and cost-effectiveness: applications of FUND. Energy J, Special Issue on the costs of the Kyoto Protocol: a multi-model evaluation, pp 130–156

• UNFCCC (2009) Decision 2/CP.15. Copenhagen Accord, COP 15, Copenhagen, 18th of December 2009

• UNFCCC (2015) Decision 1/CP.21. Adoption of the Paris Agreement, COP 21, Paris, 13th of December 2015

• Varian HR (1994) Sequential provision of public goods. Public economics. EconWPA

• Wirl F (1996) Dynamic voluntary provision of public goods: extension to nonlinear strategies. Eur J Polit Econ 12:555–560

• Yi S-S (1997) Stable coalition structures with externalities.
Games Econ Behav 20:201–237

• Zickfeld K, Eby M, Matthews HD, Weaver AJ (2009) Setting cumulative emission targets to reduce the risk of dangerous climate change. Proc Natl Acad Sci 106:16129–16134

## Acknowledgements

We would like to thank Clive Bell, Jürgen Eichberger, Evgenij Komarov, Martin Hellwig, Markus Müller, Till Requate, Wolfgang Buchholz, Ian MacKenzie, Jérémy Laurent-Lucchetti, Nicolas Treich, seminar participants in Heidelberg, Frankfurt, Zurich, Bern, Toulouse, and Vienna, conference participants at the EAERE 2009 in Amsterdam, at the SMYE 2009, at the WCERE 2010 in Montreal, and the handling editor Michael Finus and three anonymous reviewers for helpful comments and suggestions on this line of research. Financial support of the Swiss National Science Foundation, Project No. 124440, is gratefully acknowledged. A precursor of this paper entitled “Sustainable Climate Treaties” has appeared as CER-ETH Working Paper No 11/146, 2011.

## Funding

Open Access funding provided by Universität Bern.

## Author information

### Corresponding author

Correspondence to Ralph Winkler.

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Appendix

### Proof of Proposition 1

The decentralized solution is a special case of the second stage of the modest coalition formation game, as detailed in Sect. 5. Thus, Proposition 6, which states that there exists a unique subgame perfect Nash equilibrium of the second stage of the game for any given membership structure $$\mathcal {C}$$ and modesty parameter $$\mu$$, also covers the decentralized solution. In fact, the decentralized solution is characterized by $$\mathcal {C}=\varnothing$$, i.e., the coalition is an empty set and all countries $$i\in \mathcal {I}$$ do not participate in the treaty.

In the solution () of the proof of Proposition 6, the decentralized solution corresponds to $$x=0$$ and $$y=\Gamma$$, implying $$\bar{A}^\mathcal {C}=0$$, $$\bar{A}^\mathcal {NC}=\mathcal {E}$$ and $$\bar{s} = \frac{1-\delta }{\delta \Gamma }\mathcal {E}$$. Thus, we obtain for the aggregate emission abatement level $$A_t = \sum _{i \in \mathcal {I}} a_t^i$$ and the stock of aggregate cumulative emissions $$s_t$$:

\begin{aligned} A_{t}&= \mathcal {E} + B_2(T) (1-\lambda _2) \lambda _2^t + B_3(T) (1-\lambda _3) \lambda _3^t, \end{aligned}
(19a)
\begin{aligned} s_t&= \bar{s} + B_2(T) \lambda _2^t + B_3(T) \lambda _3^t, \end{aligned}
(19b)

with

\begin{aligned} \lambda _2&= \frac{1+\delta (1+\Gamma ) - \sqrt{[1+\delta (1+\Gamma )]^2-4\delta }}{2\delta }, \end{aligned}
(20a)
\begin{aligned} \lambda _3&= \frac{1+\delta (1+\Gamma ) + \sqrt{[1+\delta (1+\Gamma )]^2-4\delta }}{2\delta }, \end{aligned}
(20b)

and

\begin{aligned} B_2(T)&= - \frac{\mathcal {E} + (s_0-\bar{s})(1-\lambda _3)\lambda _3^{T}}{(1-\lambda _2) \lambda _2^T - (1-\lambda _3) \lambda _3^T}, \end{aligned}
(21a)
\begin{aligned} B_3(T)&= \frac{\mathcal {E} + (s_0-\bar{s})(1-\lambda _2)\lambda _2^{T}}{(1-\lambda _2) \lambda _2^T - (1-\lambda _3) \lambda _3^T}\ . \end{aligned}
(21b)

The individual countries’ abatement levels in the subgame perfect Nash equilibrium of the decentralized solution are given by:

\begin{aligned} a_t^i = \frac{\gamma _i}{\Gamma } A_{t}, \qquad \forall \ i\in \mathcal {I}, \quad t=0,\dots ,T\ .
\end{aligned}
(22)

$$\square$$

### Proof of Proposition 2

The global social optimum is also a special case of the second stage of the modest coalition formation game, as detailed in Sect. 5. Thus, Proposition 6, which states that there exists a unique subgame perfect Nash equilibrium of the second stage of the game for any given membership structure $$\mathcal {C}$$ and modesty parameter $$\mu$$, also covers the global social optimum. In fact, the global social optimum is characterized by $$\mu =1$$ and $$\mathcal {C}=\mathcal {I}$$, i.e., the coalition is the grand coalition encompassing all countries $$i\in \mathcal {I}$$ and fully internalizes all damages imposed by GHG emissions on all other countries.

In the solution () of the proof of Proposition 6, the global social optimum corresponds to $$x=\mathcal {AB}$$ and $$y=0$$, implying $$\bar{A}^\mathcal {C}=\mathcal {E}$$, $$\bar{A}^\mathcal {NC}=0$$ and $$\bar{s} = \frac{1-\delta }{\delta \mathcal {AB}}\mathcal {E}$$. Thus, we obtain for the aggregate emission abatement level $$A_t = \sum _{i \in \mathcal {I}} a_t^i$$ and the stock of aggregate cumulative emissions $$s_t$$:

\begin{aligned} A_{t}&= \mathcal {E} + B_2(T) (1-\lambda _2) \lambda _2^t + B_3(T) (1-\lambda _3) \lambda _3^t, \end{aligned}
(23a)
\begin{aligned} s_t&= \bar{s} + B_2(T) \lambda _2^t + B_3(T) \lambda _3^t, \end{aligned}
(23b)

with

\begin{aligned} \lambda _2&= \frac{1+\delta (1+\mathcal {AB}) - \sqrt{[1+\delta (1+\mathcal {AB})]^2-4\delta }}{2\delta }, \end{aligned}
(24a)
\begin{aligned} \lambda _3&= \frac{1+\delta (1+\mathcal {AB}) + \sqrt{[1+\delta (1+\mathcal {AB})]^2-4\delta }}{2\delta }, \end{aligned}
(24b)

and

\begin{aligned} B_2(T)&= - \frac{\mathcal {E} + (s_0-\bar{s})(1-\lambda _3)\lambda _3^{T}}{(1-\lambda _2) \lambda _2^T - (1-\lambda _3) \lambda _3^T}, \end{aligned}
(25a)
\begin{aligned} B_3(T)&= \frac{\mathcal {E} + (s_0-\bar{s})(1-\lambda _2)\lambda _2^{T}}{(1-\lambda _2) \lambda _2^T - (1-\lambda _3) \lambda _3^T}\ . \end{aligned}
(25b)

The individual countries’ abatement levels in the global social optimum are given by:

\begin{aligned} a_T^i&= 0, \qquad \forall \ i\in \mathcal {I}, \end{aligned}
(26a)
\begin{aligned} a_t^i&= \frac{A_{t}}{\alpha _i\mathcal {A}}, \qquad \forall \ i\in \mathcal {I}, \quad t=0,\dots ,T-1\ . \end{aligned}
(26b)

$$\square$$

### Proof of Proposition 3

The situation in which a set of non-member countries strategically chooses emission abatement levels so as to minimize their own domestic costs is similar to the second stage of the coalition formation game, as discussed in Sect. 5 and Proposition 6. The only difference is that the coalition $$\mathcal {C}$$ follows an exogenously given emission abatement path instead of strategically reacting to the emission abatement choices of all non-member countries $$i \notin \mathcal {C}$$.
Thus, existence and uniqueness of the subgame perfect equilibrium can be shown perfectly analogously to the proof of Proposition 6 by assuming an exogenously given aggregate emission abatement path $$A^{\\mathcal {C}}_t$$ of the coalition.\n\nThus, we directly obtain the following system of first-order linear difference equations for the aggregated emission abatement levels of non-member countries $$A^{\\mathcal {NC}}_t = \\sum _{i \\notin \\mathcal {C}} a_t^i$$ and the stock of aggregated cumulative emissions $$s_t$$ for some exogenously given path of aggregate emission abatement $$A^\\mathcal {C}_t$$ of the coalition $$\\mathcal {C}$$:\n\n\\begin{aligned} A_{t+1}^\\mathcal {NC}&= \\left( \\frac{1}{\\delta }+\\Gamma ^\\mathcal {NC} \\right) A_{t}^\\mathcal {NC} -\\Gamma ^\\mathcal {NC} s_t - \\Gamma ^\\mathcal {NC}\\left( \\mathcal {E} - A_t^\\mathcal {C}\\right) , \\end{aligned}\n(27a)\n\\begin{aligned} s_{t+1}&= -A_t^\\mathcal {NC} + s_t +\\mathcal {E} - A_t^\\mathcal {C}\\ . \\end{aligned}\n(27b)\n\nIntroducing the matrix M:\n\n\\begin{aligned} M = \\begin{pmatrix} \\frac{1}{\\delta } + \\Gamma ^\\mathcal {NC} &{} -\\Gamma ^\\mathcal {NC}\\\\ -1 &{} +1 \\end{pmatrix}, \\end{aligned}\n(28)\n\nwe rewrite the system () in matrix form:\n\n\\begin{aligned} \\begin{pmatrix} A_{t+1}^\\mathcal {NC}\\\\ s_{t+1} \\end{pmatrix} = M \\cdot \\begin{pmatrix} A_{t}^\\mathcal {NC}\\\\ s_{t} \\end{pmatrix} + \\begin{pmatrix} - \\Gamma ^\\mathcal {NC} \\left( \\mathcal {E}-A^\\mathcal {C}_t\\right) \\\\ \\mathcal {E}-A^\\mathcal {C}_t \\end{pmatrix}\\ . \\end{aligned}\n(29)\n\nThe general solution of the matrix equation (29) is given by:\n\n\\begin{aligned} \\begin{pmatrix} A_{t}^\\mathcal {NC}\\\\ s_{t} \\end{pmatrix} = \\begin{pmatrix} \\bar{A}^\\mathcal {NC}_t\\\\ \\bar{s}_t \\end{pmatrix} + B_1(T) \\nu _1 \\lambda _1^t + B_2(T) \\nu _2 \\lambda _2^t, \\end{aligned}\n(30)\n\nwhere $$\\bar{A}^\\mathcal {NC}_t$$ and $$\\bar{s}_t$$ denote particular solutions to (29),Footnote 27$$\\lambda _i$$ are the eigenvalues and $$\\nu _i$$ the eigenvectors of the matrix M, and $$B_i(T)$$ are constants determined by the initial and terminal conditions of the stock and the emission abatement levels ($$i=1,2$$).\n\nThe particular solutions are given by:\n\n\\begin{aligned} \\begin{pmatrix} \\bar{A}^\\mathcal {NC}_t\\\\ \\bar{s}_t \\end{pmatrix} = \\sum _{t'=0}^{t-1} M^{t'} \\cdot \\begin{pmatrix} - \\Gamma ^\\mathcal {NC} \\left( \\mathcal {E}-A^\\mathcal {C}_{t'}\\right) \\\\ \\mathcal {E}-A^\\mathcal {C}_{t'} \\end{pmatrix}\\ . \\end{aligned}\n(31)\n\nIn addition, for the matrix M we derive the following eigenvalues $$\\lambda _i$$ ($$i=1,2$$):\n\n\\begin{aligned} \\lambda _1&= \\frac{1+\\delta \\left( 1+\\Gamma ^\\mathcal {NC}\\right) -\\sqrt{\\left[ 1+\\delta \\left( 1+\\Gamma ^\\mathcal {NC}\\right) \\right] ^2-4\\delta }}{2\\delta }, \\end{aligned}\n(32a)\n\\begin{aligned} \\lambda _2&= \\frac{1+\\delta \\left( 1+\\Gamma ^\\mathcal {NC}\\right) +\\sqrt{\\left[ 1+\\delta \\left( 1+\\Gamma ^\\mathcal {NC}\\right) \\right] ^2-4\\delta }}{2\\delta }, \\end{aligned}\n(32b)\n\nand eigenvectors ($$i=1,2$$):\n\n\\begin{aligned} \\nu _1&= \\left\\{ 1-\\lambda _1, 1 \\right\\} , \\end{aligned}\n(33a)\n\\begin{aligned} \\nu _2&= \\left\\{ 1-\\lambda _2, 1 \\right\\} \\ . \\end{aligned}\n(33b)\n\nInserting into Eq. 
(30) yields:

\begin{aligned} A_{t}^\mathcal {NC}&= \bar{A}^\mathcal {NC}_t + B_1(T)(1-\lambda _1)\lambda _1^t + B_2(T)(1-\lambda _2)\lambda _2^t, \end{aligned}
(34a)
\begin{aligned} s_{t}&= \bar{s}_t + B_1(T) \lambda _1^t + B_2(T) \lambda _2^t\ . \end{aligned}
(34b)

The constants $$B_i(T)$$ ($$i=1,2$$) are derived from the initial stock $$s_0$$ and the terminal condition $$A_{T}^\mathcal {NC} = 0$$, which implies

\begin{aligned} B_1(T)&= - \frac{\bar{A}^\mathcal {NC}_T + (s_0-\bar{s}_0)(1-\lambda _2)\lambda _2^{T}}{(1-\lambda _1) \lambda _1^T - (1-\lambda _2) \lambda _2^T}, \end{aligned}
(35a)
\begin{aligned} B_2(T)&= \frac{\bar{A}^\mathcal {NC}_T + (s_0-\bar{s}_0)(1-\lambda _1)\lambda _1^{T}}{(1-\lambda _1) \lambda _1^T - (1-\lambda _2) \lambda _2^T}\ . \end{aligned}
(35b)

The individual countries’ abatement levels in the subgame perfect Nash equilibrium are given by:

\begin{aligned} a_T^i&= 0, \qquad \forall \ i \in \mathcal {I}, \end{aligned}
(36a)
\begin{aligned} a_t^i&= \frac{\gamma _i}{\Gamma ^\mathcal {NC}} A^\mathcal {NC}_{t}, \qquad \forall \ i\notin \mathcal {C}, \quad t=0,\dots ,T-1\ . \end{aligned}
(36b)

$$\square$$

### Proof of Proposition 4

First, note that if the RS is able to incentivize all member countries $$i \in \mathcal {C}$$ to implement the aspired abatement paths $$\left\{ \tilde{a}_t^i\right\} _{t=0,\dots ,T}^{i \in \mathcal {C}}$$, we can use Proposition 3 to determine the emission abatement paths $$\left\{ \check{a}_t^i\right\} _{t=0,\dots ,T}^{i \notin \mathcal {C}}$$ for all non-member countries $$i \notin \mathcal {C}$$ in the subgame perfect Nash equilibrium. Thus, it suffices to show that, given these emission abatement paths of non-member countries $$\left\{ \check{a}_t^i\right\} _{t=0,\dots ,T}^{i \notin \mathcal {C}}$$, there exists a RS that implements the aspired abatement paths $$\left\{ \tilde{a}_t^i\right\} _{t=0,\dots ,T}^{i \in \mathcal {C}}$$ for all coalition members $$i \in \mathcal {C}$$ as a subgame perfect Nash equilibrium. For further use, we define the aggregated emission abatement level of all non-member countries $$i \notin \mathcal {C}$$ in period t in the subgame perfect Nash equilibrium by $$\check{A}_t^\mathcal {NC} = \sum _{i \notin \mathcal {C}} \check{a}_t^i$$.

To prove this, we assume that a set of countries $$\mathcal {C}$$ has joined a feasible RS characterized by a weighting scheme $$\left\{ \lambda _t^i\right\} _{t=0,\dots ,T-1}^{i \in \mathcal {C}}$$ and a sequence of refunds $$\{R_t\}_{t=0,\dots ,T-1}$$ by paying an initial fee $$f_0^i$$. We shall analyze the subgame perfect Nash equilibria of the RS by backward induction. In every step of the backward induction, we show that

1.

the objective function of each country i is strictly concave,

2.

there exists a feasible weighting scheme $$\{\tilde{\lambda }_t^i\}^{i \in \mathcal {C}}$$ and a feasible refund $$\tilde{R}_t$$ such that the aspired abatement levels $$\{\tilde{a}_t^i\}^{i \in \mathcal {C}}$$ are consistent with the necessary and sufficient conditions of the subgame perfect Nash equilibrium of the subgame starting in period t and

3.
the aspired abatement levels $$\{\tilde{a}_t^i\}^{i \in \mathcal {C}}$$ are the unique solution solving the necessary and sufficient conditions of subgame perfection of the subgame starting in period t,

given that the aspired abatement levels $$\{\tilde{a}_{t+1}^i\}^{i \in \mathcal {C}}$$ constitute the unique subgame perfect Nash equilibrium outcome of the subgame starting in period $$t+1$$.

Assuming that there exists a unique subgame perfect equilibrium for the subgame starting in period $$t+1$$ with a stock of cumulative greenhouse gas emissions $$s_{t+1}$$, for all countries $$i \in \mathcal {C}$$, we denote country i’s equilibrium payoff for this subgame by $$W^i_{t+1}(s_{t+1})$$. Then country i’s best response in period t, $$\bar{a}_t^i$$, is determined by the solution of the optimization problem

\begin{aligned} V_t^i(s_t)|A_t^{-i} = \max \limits _{a_t^i} \left\{ \delta W_{t+1}^i(s_{t+1}) -\frac{\alpha _i}{2}(a_t^i)^2 - \frac{\beta _i}{2}s_t^2 + r^i_t\right\} , \end{aligned}
(37)

subject to Eq. (3), $$W_{T+1}^i(s_{T+1})\equiv 0$$, and given the sum of the abatement efforts of all other countries $$A_t^{-i} = \sum _{j\ne i}a_t^j$$. Differentiating Eq. (37) with respect to $$a^i_t$$ and setting it equal to zero yields

\begin{aligned} \alpha _i \bar{a}_t^i = -\delta {W_{t+1}^i}' (\bar{s}_{t+1})+\left. \frac{\partial r^i_t}{\partial a^i_t}\right| _{a^i_t=\bar{a}^i_t}\ \end{aligned}
(38)

where $$\bar{s}_{t+1} = s_t+\mathcal {E} - \bar{a}_t^i - A_t^{-i}$$ and

\begin{aligned} \frac{\partial r^i_t}{\partial a^i_t} = {\left\{ \begin{array}{ll} \lambda _t^i R_t \displaystyle \frac{A^{\mathcal {C}-i}_t}{(a^i_t+A^{\mathcal {C}-i}_t)^2}\ , \quad &{}t=1,\dots ,T-1,\\ 0, \quad &{}t=T\ . \end{array}\right. } \end{aligned}
(39)

Differentiating w.r.t. $$s_t$$ and applying the envelope theorem yields

\begin{aligned} -{V_t^i}'(s_t)|A_t^{-i} = \beta _i s_t-\delta {W_{t+1}^i}'(s_{t+1})\ . \end{aligned}
(40)

Starting with period $$t=T$$, we first note that the maximization problem of all countries is strictly concave, as $$W_{T+1}(s_{T+1})\equiv 0$$ and $$r_T = f_T/|\mathcal {C}|$$. Thus, Eq. (38) characterizes the best response for all countries $$i \in \mathcal {C}$$, which is given by $$\bar{a}_{T}^i = 0$$ independently of the abatement choices of all other countries. As a consequence, $${\hat{a}_T^i}=0$$ for all $$i \in \mathcal {C}$$ is the subgame perfect Nash equilibrium of the game starting in period T and is also the aspired abatement level in period T, as $$\tilde{a}^i_T=0$$ for all $$i \in \mathcal {C}$$ for all feasible coalition abatement paths. Then, the equilibrium pay-off is given by $$W^i_T(s_T)=V^i_T(s_T)|\hat{A}^{-i}_T$$, which is strictly concave:

\begin{aligned} W^i_T(s_T) = -\frac{\beta _i}{2}s_T^2 + \frac{f_T}{|\mathcal {C}|} \qquad \Rightarrow \qquad W_{T}''(s_T) = -\beta _i\ . \end{aligned}
(41)

Now, we analyze the subgame starting in period t assuming that there exists a weighting scheme $$\{\tilde{\lambda }_{t'}^i\}_{{t'}=t,\dots ,T-1}^{i \in \mathcal {C}}$$ and a sequence of refunds $$\left\{ \tilde{R}_{t'}\right\} _{{t'}=t,\dots ,T-1}$$ such that the outcome of the unique subgame perfect Nash equilibrium of the subgame starting in period $$t+1$$ coincides with the aspired coalition abatement paths $$\left\{ \tilde{a}_{t'}^i\right\} _{{t'}=t+1,\dots ,T}^{i \in \mathcal {C}}$$.
In addition, we assume that $$W_{t+1}^i(s_{t+1})$$ is strictly concave. Then, the optimization problem of country $$i \in \mathcal {C}$$ in period t is also strictly concave:

\begin{aligned} \delta {W_{t+1}^i}''(s_{t+1}) - \alpha _i + \frac{\partial ^2 r^i_t}{(\partial a^i_t)^2}< 0\ . \end{aligned}
(42)

As a consequence, there exists a unique best response $$\bar{a}^i_t$$ for all countries $$i \in \mathcal {C}$$ given the emission abatements of all other countries $$j \ne i$$, which is given implicitly by (38):

\begin{aligned} \alpha _i \bar{a}^i_t - \lambda _t^i R_t \frac{A^{\mathcal {C}-i}_t}{(\bar{a}^i_t+A^{\mathcal {C}-i}_t)^2} = -\delta {W_{t+1}^i}' (\bar{s}_{t+1})\ . \end{aligned}
(43)

As, by assumption, $$-{W^i_{t'}}'(s_{t'})=-{V^i_{t'}}'(s_{t'})|\hat{A}_{t'}^{-i}$$ for all $$t'\ge t+1$$, we can exploit Eq. (40) to obtain the following Euler equation:

\begin{aligned} \alpha _i \bar{a}^i_t - \lambda _{t}^i R_t \frac{A^{\mathcal {C}-i}_t}{(\bar{a}^i_t+A^{\mathcal {C}-i}_t)^2} = \delta \beta _i \bar{s}_{t+1} + \delta \alpha _i \tilde{a}_{t+1}^i -\delta \tilde{\lambda }_{t+1}^i \tilde{R}_{t+1} \frac{\tilde{A}_{t+1}^{\mathcal {C}-i}}{(\tilde{A}^\mathcal {C}_{t+1})^2}\ . \end{aligned}
(44)

Inserting $$\bar{s}_{t+1} = s_t + \mathcal {E}-\bar{a}_t^i - A_{t}^{\mathcal {C}-i}-\check{A}_t^{\mathcal {NC}}$$ yields:

\begin{aligned} \alpha _i \bar{a}^i_t + \delta \beta _i (\bar{a}_t^i + A_{t}^{\mathcal {C}-i}) - \lambda _t^i R_t \frac{A^{\mathcal {C}-i}_t}{(\bar{a}^i_t+A^{\mathcal {C}-i}_t)^2} = \tilde{C}_t^i, \end{aligned}
(45)

with

\begin{aligned} \tilde{C}_t^i = \delta \beta _i \left( s_t+\mathcal {E}-\check{A}_t^{\mathcal {NC}}\right) + \delta \alpha _i \tilde{a}_{t+1}^i -\delta \tilde{\lambda }^i_{t+1} \tilde{R}_{t+1} \frac{\tilde{A}_{t+1}^{\mathcal {C}-i}}{\left( \tilde{A}^\mathcal {C}_{t+1}\right) ^2}\ . \end{aligned}
(46)

First, we show that there exist unique $$\tilde{\lambda }_t^i$$ and $$\tilde{R}_t$$ such that choosing the aspired coalition abatement level $$\tilde{a}_t^i$$ is an equilibrium strategy for all countries $$i \in \mathcal {C}$$. Inserting the aspired abatement levels $$\tilde{a}_t^i$$ and rearranging Eq. (45), we obtain

\begin{aligned} \tilde{\lambda }_t^i \tilde{R}_t = \left( \tilde{A}^\mathcal {C}_t\right) ^2 \left( \alpha _i \frac{\tilde{a}_t^i}{\tilde{A}_t^{\mathcal {C}-i}} + \delta \beta _i \frac{\tilde{A}^\mathcal {C}_t}{\tilde{A}_t^{\mathcal {C}-i}} -\frac{\tilde{C}_t^i}{\tilde{A}_t^{\mathcal {C}-i}}\right) \ .
\\end{aligned}\n(47)\n\nTaking into account that the weighting scheme adds up to one, i.e., $$\\sum _{j \\in \\mathcal {C}} \\tilde{\\lambda }_t^j\\frac{\\tilde{a}_t^j}{\\tilde{A}_t^\\mathcal {C}} = 1$$, yields\n\n\\begin{aligned} \\tilde{R}_t&= \\tilde{A}^\\mathcal {C}_t \\sum _{j \\in \\mathcal {C}} \\left[ \\tilde{a}_t^j\\left( \\alpha _j \\frac{\\tilde{a}_t^j}{\\tilde{A}_t^{\\mathcal {C}-j}} + \\delta \\beta _j \\frac{\\tilde{A}^\\mathcal {C}_t}{\\tilde{A}_t^{\\mathcal {C}-j}} -\\frac{\\tilde{C}_t^j}{\\tilde{A}_t^{\\mathcal {C}-j}}\\right) \\right] \\ , \\end{aligned}\n(48a)\n\\begin{aligned} \\tilde{\\lambda }_t^i&= \\frac{\\tilde{A}_t^\\mathcal {C}\\left( \\alpha _i \\frac{\\tilde{a}_t^i}{\\tilde{A}_t^{\\mathcal {C}-i}} + \\delta \\beta _i \\frac{\\tilde{A}^\\mathcal {C}_t}{\\tilde{A}_t^{\\mathcal {C}-i}} -\\frac{\\tilde{C}_t^i}{\\tilde{A}_t^{\\mathcal {C}-i}}\\right) }{\\sum _{j \\in \\mathcal {C}} \\left[ \\tilde{a}_t^j\\left( \\alpha _j \\frac{\\tilde{a}_t^j}{\\tilde{A}_t^{\\mathcal {C}-j}} + \\delta \\beta _j \\frac{\\tilde{A}^\\mathcal {C}_t}{\\tilde{A}_t^{\\mathcal {C}-j}} -\\frac{\\tilde{C}_t^j}{\\tilde{A}_t^{\\mathcal {C}-j}}\\right) \\right] }\\ . \\end{aligned}\n(48b)\n\nWe now show that the aspired coalition abatement levels $$\\tilde{a}_t^i$$ are the unique solution to the Euler equations of all countries $$i \\in \\mathcal {C}$$ given the weighting scheme $$\\left\\{ \\tilde{\\lambda }_t^i\\right\\} ^{i \\in \\mathcal {C}}$$ and the refund $$\\tilde{R}_t$$. To this end, we express equation (45) in terms of $$a_t^i$$ and $$A^\\mathcal {C}_t$$ and solve for $$a_t^i$$:\n\n\\begin{aligned} a_t^i = A^\\mathcal {C}_t \\underbrace{\\frac{\\tilde{\\lambda }_t^i \\tilde{R}_t + \\tilde{C}_t^i A_t^\\mathcal {C} - \\delta \\beta _i \\left( A_t^\\mathcal {C}\\right) ^2}{\\tilde{\\lambda }_t^i \\tilde{R}_t + \\alpha _i \\left( A_t^\\mathcal {C}\\right) ^2}}_{\\equiv h_t^i(A^\\mathcal {C}_t)} = A^\\mathcal {C}_t h_t^i(A^\\mathcal {C}_t)\\ . \\end{aligned}\n(49)\n\nSumming-up over all countries $$i \\in \\mathcal {C}$$ yields\n\n\\begin{aligned} \\sum _{i \\in \\mathcal {C}} h_t^i(A^\\mathcal {C}_t) = 1, \\end{aligned}\n(50)\n\nwhich has to hold for $$A^\\mathcal {C}_t=\\tilde{A}_t^\\mathcal {C}$$ and is a necessary condition for a Nash equilibrium. Differentiating $$h_t^i(A^\\mathcal {C}_t)$$ with respect to $$A^\\mathcal {C}_t$$, we obtain:\n\n\\begin{aligned} {h_t^i}'(A^\\mathcal {C}_t) = \\frac{\\tilde{\\lambda }_t^i \\tilde{R}_t \\tilde{C}_t^i -2(\\alpha _i+\\delta \\beta _i)\\tilde{\\lambda }_t^i \\tilde{R}_t A^\\mathcal {C}_t - \\alpha _i \\tilde{C}_t^i \\left( A_t^\\mathcal {C}\\right) ^2}{\\left[ \\tilde{\\lambda }_t^i \\tilde{R}_t+ \\alpha _i \\left( A_t^\\mathcal {C}\\right) ^2\\right] ^2}\\ . \\end{aligned}\n(51)\n\nSeeking the roots of $${h_t^i}'(A^\\mathcal {C}_t)$$ yields\n\n\\begin{aligned} {h_t^i}' (A^\\mathcal {C}_t)= 0&\\Leftrightarrow \\underbrace{\\tilde{\\lambda }_t^i \\tilde{R}_t \\tilde{C}_t^i}_{\\equiv x> 0} - \\underbrace{2(\\alpha _i+\\delta \\beta _i)\\tilde{\\lambda }_t^i \\tilde{R}_t}_{\\equiv y> 0} A^\\mathcal {C}_t - \\underbrace{\\alpha _i \\tilde{C}_t^i}_{\\equiv z > 0} \\left( A_t^\\mathcal {C}\\right) ^2 = 0, \\end{aligned}\n(52)\n\\begin{aligned}&\\Leftrightarrow x - y A^\\mathcal {C}_t -z \\left( A^\\mathcal {C}_t\\right) ^2 = 0, \\end{aligned}\n(53)\n\\begin{aligned}&\\Leftrightarrow A^\\mathcal {C}_t = -\\frac{y\\pm \\sqrt{y^2+4xz}}{2z}\\ . 
\end{aligned}
(54)

Thus, for every $$h_t^i(A^\mathcal {C}_t)$$ there exists one positive collective abatement level $$\bar{A}_t^i$$ such that $${h_t^i}' (\bar{A}_t^i) = 0$$. In addition, it holds (taking into account Eq. (47)):

\begin{aligned} h_t^i (0)&= 1,&\quad h_t^i (\tilde{A}_t^\mathcal {C})&= \frac{\alpha _i \tilde{a}_t^i \tilde{A}^\mathcal {C}_t + \tilde{\lambda }_t^i \tilde{R}_t \left( 1-\frac{\tilde{A}_t^{\mathcal {C}-i}}{\tilde{A}_t^\mathcal {C}}\right) }{\tilde{\lambda }_t^i \tilde{R}_t + \alpha _i \left( \tilde{A}_t^\mathcal {C}\right) ^2} \in [0,1], \tilde{A}_t^\mathcal {C} \ne 0 \end{aligned}
(55a)
\begin{aligned} {h_t^i}' (0)&= \frac{\tilde{C}_t^i}{\tilde{\lambda }_t^i \tilde{R}_t} > 0 ,&\quad {h_t^i}' (\tilde{A}_t^\mathcal {C})&< 0\ . \end{aligned}
(55b)

Focusing attention on the positive half-space $$A^\mathcal {C}_t \ge 0$$, all $$h_t^i(A^\mathcal {C}_t)$$ start at 1 for $$A^\mathcal {C}_t=0$$. In addition, all $$h_t^i(A^\mathcal {C}_t)$$ exhibit a unique local extremum at $$\bar{A}_t^i > 0$$. As $$h_t^i(A^\mathcal {C}_t)$$ is increasing at $$A^\mathcal {C}_t=0$$, the local extremum is a local maximum. This implies that all $$h_t^i(A^\mathcal {C}_t)$$ are increasing until $$\bar{A}_t^i > 0$$ and decreasing afterwards. This also implies that $$\tilde{A}^\mathcal {C}_t > \bar{A}_t^i$$ for all $$i \in \mathcal {C}$$, because $$h^i_t(\tilde{A}_t^\mathcal {C}) < 1$$, which can only happen for values $$A^\mathcal {C}_t > \bar{A}_t^i$$, as all $$h_t^i(A^\mathcal {C}_t)$$ start at 1 and further increase until the local extremum at $$\bar{A}_t^i$$. As $$\tilde{A}_t^\mathcal {C} > \bar{A}_t^i$$, this, in turn, implies that at $$\tilde{A}_t^\mathcal {C}$$, all $$h_t^i(\tilde{A}_t^\mathcal {C}) \in [0,1]$$ are monotonically decreasing.Footnote 28 As a consequence, there exists no other value $$A_t'$$ such that $$\sum _{i \in \mathcal {C}} h_t^i(A_t') = 1$$. Then, only the aspired coalition abatement levels $$\tilde{a}_t^i$$ solve the Euler equations of all countries $$i \in \mathcal {C}$$ simultaneously for the weighting scheme $$\tilde{\lambda }_t^i$$ and the refund $$\tilde{R}_t$$.

Differentiating (40) with respect to $$s_t$$, we obtain

\begin{aligned} {V_t^i}''(s_t)|A_t^{-i} = \delta {W_{t+1}^i}''(\bar{s}_{t+1})-\beta _i\ . \end{aligned}
(56)

As $${W_t^i}(s_t) = {V_t^i}(s_t)|\hat{A}_t^{-i}$$, this implies that the equilibrium pay-off $$W^i_{t}(s_t)$$ is strictly concave for all countries $$i \in \mathcal {C}$$.
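The geometry of this uniqueness argument is easy to reproduce numerically. The following sketch (in Python, assuming numpy and scipy are available; the constants are purely hypothetical positive stand-ins for the equilibrium objects $$\tilde{\lambda }_t^i \tilde{R}_t$$, $$\tilde{C}_t^i$$, $$\alpha _i$$ and $$\beta _i$$, not values implied by the model) traces the functions of Eq. (49) and locates the single positive crossing of $$\sum _{i \in \mathcal {C}} h_t^i(A) = 1$$:

```python
import numpy as np
from scipy.optimize import brentq

delta = 0.97  # hypothetical discount factor

# Hypothetical positive constants per coalition member i:
# (lamR_i = tilde{lambda}_t^i * tilde{R}_t, C_i = tilde{C}_t^i, alpha_i, beta_i)
members = [(5.0, 2.0, 1.0, 0.5),
           (4.0, 1.5, 0.8, 0.4)]

def h(A, lamR, C, alpha, beta):
    """Eq. (49): h(A) = (lamR + C*A - delta*beta*A^2) / (lamR + alpha*A^2)."""
    return (lamR + C * A - delta * beta * A ** 2) / (lamR + alpha * A ** 2)

def H(A):
    """Excess of Eq. (50): sum_i h_t^i(A) - 1; a positive root is a candidate A_t^C."""
    return sum(h(A, *m) for m in members) - 1.0

# Each h starts at h(0) = 1 with h'(0) > 0, peaks once and then decreases,
# so H starts at |C| - 1 > 0 and eventually turns negative: a sign change exists.
A_star = brentq(H, 1e-9, 1e3)
print(f"crossing of sum_i h_t^i(A) = 1 at A = {A_star:.4f}")

# Grid check that, for these constants, the crossing is indeed unique:
grid = np.linspace(1e-6, 50.0, 20001)
sign_changes = np.count_nonzero(np.diff(np.sign([H(a) for a in grid])))
print(f"sign changes of H on the grid: {sign_changes}")  # 1
```

For equilibrium values, the restrictions derived above guarantee this uniqueness; the toy numbers merely illustrate the shape of the argument.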
Working backwards to $$t=1$$ yields the unique subgame perfect Nash equilibrium outcome, which is given by the aspired coalition abatement levels $$\left\{ \tilde{a}_t^i\right\} _{t=0,\dots ,T}^{i \in \mathcal {C}}$$, the abatement paths $$\left\{ \check{a}_t^i\right\} _{t=0,\dots ,T}^{i \notin \mathcal {C}}$$ of all non-member countries $$i \notin \mathcal {C}$$ and the corresponding path of cumulative greenhouse gas emissions $$\{s_t\}_{t=0,\dots ,T}$$.

It remains to show that the RS is feasible, i.e., that the weighting scheme $$\{\tilde{\lambda }_t^i\}^{i \in \mathcal {C}}$$ and the refund $$\tilde{R}_t$$ are non-negative for all $$t=0,\dots ,T-1$$ and for all feasible coalition abatement paths $$\left\{ \tilde{a}_t^i\right\} _{t=0,\dots ,T}^{i \in \mathcal {C}}$$. As $${W_t^i}(s_t) = {V_t^i}(s_t)|\hat{A}_t^{-i}$$, we can consecutively apply Eq. (40), insert into Eq. (38) and evaluate in the subgame perfect Nash equilibrium:

\begin{aligned} \alpha _i \tilde{a}_t^i - \tilde{\lambda }_t^i \tilde{R}_t \frac{\tilde{A}_t^{\mathcal {C}-i}}{\left( \tilde{A}^\mathcal {C}_t\right) ^2} = \delta \beta _i \sum _{\tau =t+1}^{T} \delta ^{\tau -(t+1)} s_\tau \ ,\quad t=0,\dots ,T-1\ . \end{aligned}
(57)

The corresponding equation in the decentralized solution yields:

\begin{aligned} \alpha _i \hat{a}_t^i = \delta \beta _i \sum _{\tau =t+1}^{T} \delta ^{\tau -(t+1)} \hat{s}_\tau ,\quad t=0,\dots ,T-1\ . \end{aligned}
(58)

By construction, $$\tilde{a}_t^i > \hat{a}_t^i$$ for all $$i \in \mathcal {C}$$ and $$t=0,\dots ,T-1$$. As a consequence, it also holds that $$\hat{s}_t > s_t$$ for all $$t=0,\dots ,T$$. This, in turn, implies that $$\{\tilde{\lambda }_t^i\}_{t=0,\dots ,T-1}^{i \in \mathcal {C}} > 0$$ and $$\left\{ \tilde{R}_t\right\} _{t=0,\dots ,T-1} > 0$$. $$\square$$

### Proof of Proposition 5

The first part of the proposition follows directly from Eq. (14).

To show that any feasible RS can always be implemented as a Pareto improvement over the decentralized solution, we introduce the following abbreviations. Denote the net present value of the discounted sum of abatement costs and environmental damage costs of country i in the decentralized solution and the RS by $$\hat{K}_i$$ and $$\tilde{K}_i$$, respectively:

\begin{aligned} \hat{K}_i&= \sum _{t=0}^T \left[ \frac{\alpha _i}{2}\left( \hat{a}_t^i\right) ^2 + \frac{\beta _i}{2}\hat{s}_t^2\right] , \end{aligned}
(59a)
\begin{aligned} \tilde{K}_i&= \sum _{t=0}^T \left[ \frac{\alpha _i}{2}\left( \tilde{a}_t^i\right) ^2 + \frac{\beta _i}{2} \tilde{s}_t^2\right] \ . \end{aligned}
(59b)

In addition, let $$\tilde{f}_0^i$$ be the net present value of the discounted sum of refunds that country $$i \in \mathcal {C}$$ receives in the RS:

\begin{aligned} \tilde{f}_0^i = \sum _{t=1}^{T-1} \frac{\tilde{\lambda }_t^i \tilde{R}_t}{(1+\rho )^t} \left( \frac{\tilde{a}^i_t}{\sum _{j \in \mathcal {C}} \tilde{a}^j_t} \right) . \end{aligned}
(59c)

By construction, all countries $$i \in \mathcal {C}$$ would be better off in the RS than in the decentralized solution if their initial fees were equal to zero. The reason is that environmental damage costs are smaller under the refunding scheme and abatement costs minus refunds are smaller compared to the decentralized solution. Otherwise, it would not have been in the countries’ best interest to choose the aspired coalition abatement levels. Define the difference in terms of net present value between the RS and the decentralized solution by $$\hat{f}_0^i$$:

\begin{aligned} \hat{f}_0^i = \hat{K}_i - \tilde{K}_i + \tilde{f}_0^i > 0\ . \end{aligned}
(60)

Note that $$\hat{f}_0^i$$ is the initial fee that would leave country $$i \in \mathcal {C}$$ indifferent between the RS and the decentralized solution. Summing up over all countries $$i \in \mathcal {C}$$, we obtain:

\begin{aligned} \sum _{i \in \mathcal {C}} \hat{f}_0^i = \sum _{i \in \mathcal {C}} \left[ \hat{K}_i - \tilde{K}_i + \tilde{f}_0^i\right] = \sum _{i \in \mathcal {C}} \left[ \hat{K}_i - \tilde{K}_i\right] + \tilde{f}_0 > \tilde{f}_0\ .
\end{aligned}
(61)

Thus, it is always possible to find a set of initial fees $$f_0^i$$ such that $$\sum _{i \in \mathcal {C}} f_0^i = \tilde{f}_0$$ and, in addition, $$f_0^i < \hat{f}_0^i$$ for all $$i \in \mathcal {C}$$. $$\square$$

### Proof of Proposition 6

In line with the literature, we assume that in the second stage both the modesty parameter $$\mu$$ and the membership structure are given and common knowledge. Then, the coalition acts as one player in the non-cooperative game, in which the coalition and all non-member countries choose emission abatement levels to maximize their objectives. We assume that in each period $$t=0,\dots ,T$$ the previous emission abatement choices of all players are common knowledge before all players simultaneously decide on emission abatement levels in period t. The subgame perfect Nash equilibrium is derived by backward induction.

For a given modesty parameter $$\mu$$ and a given membership structure $$\mathcal {C}$$, the coalition is supposed to set emission abatement levels such as to solve optimization problem (15) subject to the equation of motion for aggregate cumulative emissions (3) and given the emission abatement levels of all non-member countries. To solve the problem recursively, we introduce the value function:

\begin{aligned} V_t^\mathcal {C}(s_t)|A_t^{-\mathcal {C}} = \max _{\{a_t^i\}_{i\in \mathcal {C}}} \left\{ \delta W_{t+1}^\mathcal {C}(s_{t+1}) - \sum _{i\in \mathcal {C}}\left[ \frac{\alpha _i}{2}(a_t^i)^2+\mu \frac{\beta _i}{2}s_t^2\right] \right\} , \end{aligned}
(62)

where $$A_t^{-\mathcal {C}}$$ denotes the vector of emission abatement levels of all non-member countries, $$V_t^\mathcal {C}(s_t)$$ represents the negative of the total coalition costs accruing from period t onwards discounted to period t and $$W_{t+1}^\mathcal {C}(s_{t+1})$$ is the coalition’s equilibrium pay-off of the subgame starting in period $$t+1$$ conditional on the stock of accumulated greenhouse gases $$s_{t+1}$$.

All non-member countries $$i \notin \mathcal {C}$$ seek to minimize the net present value of their own total domestic costs (6) subject to the stock dynamics of cumulative global GHG emissions (3) and given the emission abatement levels of all other countries. Again, we introduce the value function:

\begin{aligned} V_t^i(s_t)|{A_t^{-i}} = \max _{\{a_t^i\}} \left\{ \delta W_{t+1}^i(s_{t+1}) - \left[ \frac{\alpha _i}{2}(a_t^i)^2+\frac{\beta _i}{2}s_t^2\right] \right\} ,\quad i\notin \mathcal {C}, \end{aligned}
(63)

where $$A_t^{-i}$$ denotes the vector of emission abatement levels of all other countries $$j \ne i$$, $$V_t^i(s_t)$$ represents the negative of country i’s total costs accruing from period t onwards discounted to period t and $$W_{t+1}^i(s_{t+1})$$ is country i’s equilibrium pay-off of the subgame starting in period $$t+1$$ conditional on the stock of accumulated greenhouse gases $$s_{t+1}$$.

Differentiating the value functions (62) and (63) with respect to $$a^i_t$$ and setting them equal to zero, we derive the following first-order conditions:

\begin{aligned} \alpha _i a^i_t&= -\delta {W^{\mathcal {C}}_{t+1}}'(s_{t+1}),\qquad \forall \ i \in \mathcal {C},\quad t=0,\dots ,T, \end{aligned}
(64a)
\begin{aligned} \alpha _i a^i_t&= -\delta {W^{i}_{t+1}}'(s_{t+1}),\qquad \forall \ i \notin \mathcal {C},\quad t=0,\dots ,T\ .
\end{aligned}
(64b)

The optimization problems of the coalition and all non-member countries in period t are strictly concave if

\begin{aligned}&\delta {W_{t+1}^{\mathcal {C}}}''(s_{t+1}) - \alpha _i < 0,\qquad \forall \ i \in \mathcal {C},\quad t=0,\dots ,T, \end{aligned}
(65a)
\begin{aligned}&\delta {W_{t+1}^{i}}''(s_{t+1}) - \alpha _i < 0,\qquad \forall \ i \notin \mathcal {C},\quad t=0,\dots ,T, \end{aligned}
(65b)

in which case the first-order conditions () implicitly define the coalition’s and all non-member countries’ unique best response functions.

In addition, differentiating the value functions (62) and (63) with respect to $$s_t$$ and applying the envelope theorem yields

\begin{aligned} -{V_{t}^{\mathcal {C}}}'(s_t)|A_t^{-\mathcal {C}}&= \mu \mathcal {B}^\mathcal {C} s_t - \delta {W_{t+1}^{\mathcal {C}}}' (s_{t+1}),\qquad \forall \ i \in \mathcal {C},\quad t=0,\dots ,T, \end{aligned}
(66a)
\begin{aligned} -{V_{t}^{i}}'(s_t)|A_t^{-i}&= \beta _i s_t - \delta {W_{t+1}^{i}}' (s_{t+1}),\qquad \forall \ i \notin \mathcal {C},\quad t=0,\dots ,T, \end{aligned}
(66b)

where we have introduced the notation $$\mathcal {B}^\mathcal {C} = \sum _{i\in \mathcal {C}} \beta _i$$.

We start from $$W^\mathcal {C}_{T+1}(s_{T+1}) \equiv 0 \equiv W^i_{T+1}(s_{T+1})$$ for all $$i \in \mathcal {I}$$, which implies that the optimization problems of the coalition and all non-member countries are strictly concave in period T. As a consequence, the first-order conditions () characterize the coalition’s and all non-member countries’ best responses, which are given by $$\bar{a}^i_T=0$$ for all $$i \in \mathcal {I}$$ independently of the emission abatement choices of all other countries. Hence, $$\tilde{a}^i_T=0$$ for all $$i \in \mathcal {C}$$ and $$\check{a}^i_T=0$$ for all $$i \notin \mathcal {C}$$ is the unique and symmetric Nash equilibrium for the subgame starting in period T given the stock of cumulative greenhouse gas emissions $$s_T$$. The equilibrium pay-offs are given by $$W^\mathcal {C}_T(s_T)= V^\mathcal {C}_T(s_T)|\hat{A}^{-\mathcal {C}}_T$$ for the coalition and $$W^i_T(s_T)=V^i_T(s_T)|\hat{A}^{-i}_T$$ and are strictly concave:

\begin{aligned} W^\mathcal {C}_T(s_T)&= -\mu \frac{\mathcal {B}^\mathcal {C}}{2}s_T^2 \qquad \Rightarrow \qquad {W_{T}^\mathcal {C}}''(s_T) = -\mu \mathcal {B}^\mathcal {C}, \end{aligned}
(67a)
\begin{aligned} W^i_T(s_T)&= -\frac{\beta _i}{2}s_T^2 \qquad \Rightarrow \qquad {W_{T}^i}''(s_T) = -\beta _i, \qquad \forall \ i \notin \mathcal {C}\ . \end{aligned}
(67b)

As a consequence, the optimization problems of the coalition and all non-member countries are also strictly concave in period $$T-1$$.

Now assume there exists a unique subgame perfect Nash equilibrium for the subgame starting in period $$t+1$$ with a stock of greenhouse gas emissions of $$s_{t+1}$$ yielding equilibrium pay-offs $$W^\mathcal {C}_{t+1}(s_{t+1})$$ and $$W^i_{t+1}(s_{t+1})$$ to the coalition and all non-member countries $$i \notin \mathcal {C}$$, respectively, with $${W^\mathcal {C}_{t+1}}''(s_{t+1})<0$$ and $${W^i_{t+1}}''(s_{t+1})<0$$.
Then the optimization problem in period t is strictly concave for the coalition and all non-member countries $$i \\notin \\mathcal {C}$$, implying there exists a unique best response $$\\bar{a}^i_t$$ for all countries $$i \\in \\mathcal {I}$$ given the emission abatements of all other countries $$j \\ne i$$, which is given implicitly by\n\n\\begin{aligned} \\alpha _i \\bar{a}^i_t&= -\\delta {W^{\\mathcal {C}}_{t+1}}'(\\bar{s}_{t+1}),\\qquad \\forall \\ i \\in \\mathcal {C}, \\end{aligned}\n(68a)\n\\begin{aligned} \\alpha _i \\bar{a}^i_t&= -\\delta {W^{i}_{t+1}}'(\\bar{s}_{t+1})\\ ,\\qquad \\forall \\ i \\notin \\mathcal {C}, \\end{aligned}\n(68b)\n\nwhere $$\\bar{s}_{t+1} = s_t+ \\mathcal {E} - \\bar{a}_t^i - A_t^{-i}$$. As, by assumption, $$-{W^\\mathcal {C}_{t'}}'(s_{t'})=-{V^\\mathcal {C}_{t'}}'(s_{t'})|\\hat{A}_{t'}^{-\\mathcal {C}}$$ and $$-{W^i_{t'}}'(s_{t'})=-{V^i_{t'}}'(s_{t'})|\\hat{A}_{t'}^{-i}$$ for all $$t'\\ge t+1$$, we can exploit conditions () to obtain:\n\n\\begin{aligned} a_t^i&= \\delta \\tilde{a}^i_{t+1} + \\mu \\delta \\frac{\\mathcal {B}^{\\mathcal {C}}}{\\alpha _i} \\left( s_t + \\mathcal {E}- \\sum _{j \\in \\mathcal {I}} a^j_t\\right) ,\\qquad \\forall \\ i \\in \\mathcal {C}, \\end{aligned}\n(69a)\n\\begin{aligned} a_t^i&= \\delta \\check{a}^i_{t+1} + \\delta \\gamma _i \\left( s_t + \\mathcal {E}-\\sum _{j \\in \\mathcal {I}} a^j_t\\right) ,\\qquad \\forall \\ i \\notin \\mathcal {C}\\ . \\end{aligned}\n(69b)\n\nSumming up Eq. (69a) over all coalition members $$i \\in \\mathcal {C}$$ and Eq. (69b) over all non-member countries $$i \\notin \\mathcal {C}$$, we obtain the following equations for the aggregate abatement levels $$A^\\mathcal {C}_t = \\sum _{i \\in \\mathcal {C}} a^i_t$$ and $$A^\\mathcal {NC}_t = \\sum _{i \\notin \\mathcal {C}} a^i_t$$ of the coalition and all non-member countries, respectively:\n\n\\begin{aligned} A^\\mathcal {C}_t&= \\delta \\tilde{A}^\\mathcal {C}_{t+1} + \\mu \\delta \\mathcal {A}^\\mathcal {C} \\mathcal {B}^{\\mathcal {C}} \\left( s_t + \\mathcal {E}- A^\\mathcal {C}_t - A^\\mathcal {NC}_t\\right) , \\end{aligned}\n(70a)\n\\begin{aligned} A^\\mathcal {NC}_t&= \\delta \\check{A}^\\mathcal {NC}_{t+1} + \\delta \\Gamma ^\\mathcal {NC}\\left( s_t + \\mathcal {E}- A^\\mathcal {C}_t - A^\\mathcal {NC}_t\\right) , \\end{aligned}\n(70b)\n\nwhere we have used the abbreviation $$\\mathcal {A}^\\mathcal {C} = \\sum _{i \\in \\mathcal {C}} 1/\\alpha _i$$ and $$\\Gamma ^\\mathcal {NC} = \\sum _{i \\notin \\mathcal {C}} \\gamma _i$$. 
Solving this system of equations for $$A^\\mathcal {C}_t$$ and $$A^\\mathcal {NC}_t$$, we obtain the aggregate abatement levels of the coalition and non-member countries, respectively, for period t in the subgame perfect Nash equilibrium:\n\n\\begin{aligned} \\tilde{A}^\\mathcal {C}_t&= \\frac{\\delta \\left[ \\tilde{A}^\\mathcal {C}_{t+1}\\left( 1+\\delta \\Gamma ^{\\mathcal {NC}}\\right) + \\mu \\mathcal {A}^\\mathcal {C} \\mathcal {B}^{\\mathcal {C}}\\left( s_t + \\mathcal {E}- \\delta \\check{A}^\\mathcal {NC}_{t+1} \\right) \\right] }{1+\\delta \\Gamma ^{\\mathcal {NC}}+\\mu \\delta \\mathcal {A}^\\mathcal {C} \\mathcal {B}^{\\mathcal {C}}}, \\end{aligned}\n(71a)\n\\begin{aligned} \\check{A}^\\mathcal {NC}_t&= \\frac{\\delta \\left[ \\check{A}^\\mathcal {NC}_{t+1}\\left( 1+\\mu \\delta \\mathcal {A}^\\mathcal {C} \\mathcal {B}^{\\mathcal {C}}\\right) + \\Gamma ^{\\mathcal {NC}} \\left( s_t + \\mathcal {E}- \\delta \\tilde{A}^\\mathcal {C}_{t+1} \\right) \\right] }{1+\\delta \\Gamma ^{\\mathcal {NC}}+\\mu \\delta \\mathcal {A}^\\mathcal {C} \\mathcal {B}^{\\mathcal {C}}}\\ . \\end{aligned}\n(71b)\n\nInserting $$\\tilde{A}^\\mathcal {C}_t$$ and $$\\check{A}^\\mathcal {NC}_t$$ back into Eq. () yields the unique equilibrium abatement level in period t for all countries $$i \\in \\mathcal {I}$$:\n\n\\begin{aligned} \\tilde{a}_t^i&= \\delta \\tilde{a}^i_{t+1} + \\mu \\delta \\frac{\\mathcal {B}^{\\mathcal {C}}}{\\alpha _i} \\left( s_t + \\mathcal {E}- \\tilde{A}^\\mathcal {C}_t - \\check{A}^\\mathcal {NC}_t \\right) ,\\qquad \\forall \\ i \\in \\mathcal {C}, \\end{aligned}\n(72a)\n\\begin{aligned} \\check{a}_t^i&= \\delta \\check{a}^i_{t+1} + \\delta \\gamma _i \\left( s_t + \\mathcal {E}-\\tilde{A}^\\mathcal {C}_t - \\check{A}^\\mathcal {NC}_t \\right) ,\\qquad \\forall \\ i \\notin \\mathcal {C}\\ . \\end{aligned}\n(72b)\n\nDifferentiating () with respect to $$s_t$$, we obtain\n\n\\begin{aligned} {V_{t}^{\\mathcal {C}}}''(s_t)|A_t^{-\\mathcal {C}}&= \\delta {W_{t+1}^{\\mathcal {C}}}'' (s_{t+1}) -\\mu \\mathcal {B}^\\mathcal {C},\\qquad \\forall \\ i \\in \\mathcal {C}, \\end{aligned}\n(73a)\n\\begin{aligned} {V_{t}^{i}}''(s_t)|A_t^{-i}&= \\delta {W_{t+1}^{i}}''(s_{t+1}) - \\beta _i,\\qquad \\forall \\ i \\notin \\mathcal {C}\\ . \\end{aligned}\n(73b)\n\nAs $${W_t^\\mathcal {C}}''(s_t) = {V_t^\\mathcal {C}}''(s_t)|\\hat{A}_t^{-\\mathcal {C}}$$ and $${W_t^i}''(s_t) = {V_t^i}''(s_t)|\\hat{A}_t^{-i}$$, this implies that the equilibrium pay-offs $$W^\\mathcal {C}_{t}(s_t)$$ and $$W^i_{t}(s_t)$$ are strictly concave for the coalition and all non-member countries $$i \\notin \\mathcal {C}$$.\n\nWorking backwards until $$t=0$$ yields unique sequences of emission abatements $$\\{\\tilde{a}^i_t\\}^T_{t = 0}$$ and $$\\{\\tilde{a}^i_t\\}^T_{t = 0}$$ for all coalition countries $$i\\in \\mathcal {C}$$ and all non-member countries $$i \\notin \\mathcal {C}$$, respectively, and the corresponding sequence of the stock of cumulative greenhouse gas emissions $$s_t$$ ($$t=0,\\dots ,T$$) that constitute the unique subgame perfect Nash equilibrium outcome of the second stage of the modest international environmental agreement.\n\nHaving established existence and uniqueness of the subgame perfect Nash equilibrium, we now employ Eq. 
() together with the equation of motion for the stock of aggregated cumulative emissions (3) to derive the following system of first-order linear difference equations:\n\n\\begin{aligned} A_{t+1}^\\mathcal {C}&= \\left( \\frac{1}{\\delta } + \\mu \\mathcal {A}^\\mathcal {C} \\mathcal {B}^\\mathcal {C} \\right) A_t^\\mathcal {C} + \\mu \\mathcal {A}^\\mathcal {C} \\mathcal {B}^\\mathcal {C} A_t^\\mathcal {NC} - \\mu \\mathcal {A}^\\mathcal {C} \\mathcal {B}^\\mathcal {C} s_t - \\mu \\mathcal {A}^\\mathcal {C} \\mathcal {B}^\\mathcal {C} \\mathcal {E}, \\end{aligned}\n(74a)\n\\begin{aligned} A_{t+1}^\\mathcal {NC}&= \\Gamma ^\\mathcal {NC} A_t^\\mathcal {C} + \\left( \\frac{1}{\\delta }+\\Gamma ^\\mathcal {NC} \\right) A_{t}^\\mathcal {NC} -\\Gamma ^\\mathcal {NC} s_t - \\Gamma ^\\mathcal {NC} \\mathcal {E}, \\end{aligned}\n(74b)\n\\begin{aligned} s_{t+1}&= - A_t^\\mathcal {C} -A_t^\\mathcal {NC} + s_t +\\mathcal {E}\\ . \\end{aligned}\n(74c)\n\nBy introducing the abbreviations $$x = \\mu \\mathcal {A}^\\mathcal {C} \\mathcal {B}^\\mathcal {C}$$ and $$y = \\Gamma ^\\mathcal {NC}$$ and the matrix M\n\n\\begin{aligned} M = \\begin{pmatrix} \\frac{1}{\\delta } + x &{} x &{} -x\\\\ y &{} \\frac{1}{\\delta }+y &{} -y \\\\ -1 &{} -1 &{} +1 \\end{pmatrix}, \\end{aligned}\n(75)\n\nwe rewrite the system () in matrix form:\n\n\\begin{aligned} \\begin{pmatrix} A_{t+1}^\\mathcal {C}\\\\ A_{t+1}^\\mathcal {NC}\\\\ s_{t+1} \\end{pmatrix} = M \\cdot \\begin{pmatrix} A_{t}^\\mathcal {C}\\\\ A_{t}^\\mathcal {NC}\\\\ s_{t} \\end{pmatrix} + \\begin{pmatrix} - x \\mathcal {E}\\\\ - y \\mathcal {E}\\\\ \\mathcal {E} \\end{pmatrix}\\ . \\end{aligned}\n(76)\n\nThe general solution of the matrix equation (76) is given by:\n\n\\begin{aligned} \\begin{pmatrix} A_{t}^\\mathcal {C}\\\\ A_{t}^\\mathcal {NC}\\\\ s_{t} \\end{pmatrix} = \\begin{pmatrix} \\bar{A}^\\mathcal {C}\\\\ \\bar{A}^\\mathcal {NC}\\\\ \\bar{s} \\end{pmatrix} + B_1(T) \\nu _1 \\lambda _1^t + B_2(T) \\nu _2 \\lambda _2^t + B_3(T) \\nu _3 \\lambda _3^t, \\end{aligned}\n(77)\n\nwhere $$\\bar{A}^\\mathcal {C}$$, $$\\bar{A}^\\mathcal {NC}$$ and $$\\bar{s}$$ denote the steady state values of $$A_{t}^\\mathcal {C}$$, $$A_{t}^\\mathcal {NC}$$ and $$s_t$$, $$\\lambda _i$$ are the eigenvalues and $$\\nu _i$$ the eigenvectors of the matrix M, and $$B_i(T)$$ are constants determined by the initial and terminal conditions of the stock and the emission abatement levels ($$i=1,\\dots ,3$$).\n\nCalculating the steady state values by setting\n\n\\begin{aligned} A_{t+1}^\\mathcal {C}=A_{t}^\\mathcal {C} = \\bar{A}^\\mathcal {C},\\quad A_{t+1}^\\mathcal {NC}=A_{t}^\\mathcal {NC} = \\bar{A}^\\mathcal {NC}\\ ,\\quad s_{t+1} = s_t = \\bar{s}, \\end{aligned}\n(78)\n\nyields:\n\n\\begin{aligned} \\bar{A}^\\mathcal {C}&= \\frac{x}{x+y}\\mathcal {E} \\end{aligned}\n(79a)\n\\begin{aligned} \\bar{A}^\\mathcal {NC}&= \\frac{y}{x+y}\\mathcal {E}\\end{aligned}\n(79b)\n\\begin{aligned} \\bar{s}&= \\frac{1-\\delta }{\\delta } \\frac{\\mathcal {E}}{x+y} \\end{aligned}\n(79c)\n\nIn addition, for the matrix M we derive the following eigenvalues $$\\lambda _i$$ ($$i=1,\\dots ,3$$):\n\n\\begin{aligned} \\lambda _1&= \\frac{1}{\\delta }, \\end{aligned}\n(80a)\n\\begin{aligned} \\lambda _2&= \\frac{1+\\delta +\\delta (x+y)-\\sqrt{[1+\\delta +\\delta (x+y)]^2-4\\delta }}{2\\delta }\\ , \\end{aligned}\n(80b)\n\\begin{aligned} \\lambda _3&= \\frac{1+\\delta +\\delta (x+y)+\\sqrt{[1+\\delta +\\delta (x+y)]^2-4\\delta }}{2\\delta }\\ , \\end{aligned}\n(80c)\n\nand eigenvectors ($$i=1,\\dots 
,3$$):\n\n\\begin{aligned} \\nu _1&= \\left\\{ -1,1,0\\right\\} ,\\end{aligned}\n(81a)\n\\begin{aligned} \\nu _2&= \\left\\{ \\frac{x}{x+y}(1-\\lambda _2), \\frac{y}{x+y}(1-\\lambda _2), 1 \\right\\} , \\end{aligned}\n(81b)\n\\begin{aligned} \\nu _3&= \\left\\{ \\frac{x}{x+y}(1-\\lambda _3), \\frac{y}{x+y}(1-\\lambda _3), 1 \\right\\} , \\end{aligned}\n(81c)\n\nInserting into Eq. (77) yields:\n\n\\begin{aligned} A_{t}^\\mathcal {C}&= \\bar{A}^\\mathcal {C} - B_1(T)\\lambda _1^t + \\frac{x}{x+y}\\left[ B_2(T)(1-\\lambda _2)\\lambda _2^t + B_3(T)(1-\\lambda _3) \\lambda _3^t\\right] \\end{aligned}\n(82a)\n\\begin{aligned} A_{t}^\\mathcal {NC}&= \\bar{A}^\\mathcal {NC} + B_1(T)\\lambda _1^t + \\frac{y}{x+y}\\left[ B_2(T)(1-\\lambda _2)\\lambda _2^t + B_3(T)(1-\\lambda _3) \\lambda _3^t\\right] \\end{aligned}\n(82b)\n\\begin{aligned} s_{t}&= \\bar{s} + B_2(T) \\lambda _2^t + B_3(T) \\lambda _3^t \\end{aligned}\n(82c)\n\nThe constants $$B_i(T)$$ ($$i=1,\\dots ,3$$) are derived from the initial stock $$s_0$$ of cumulative GHG emissions and the terminal conditions $$A_{T}^\\mathcal {C} = 0$$ and $$A_{T}^\\mathcal {NC} = 0$$ of aggregate emission abatement levels, which imply\n\n\\begin{aligned} B_1(T)&= \\frac{y \\bar{A}^C - x \\bar{A}^{NC} }{(x+y)\\lambda _1^T}, \\end{aligned}\n(83a)\n\\begin{aligned} B_2(T)&= - \\frac{\\bar{A}^C + \\bar{A}^{NC} + (s_0-\\bar{s})(1-\\lambda _3)\\lambda _3^{T}}{(1-\\lambda _2) \\lambda _2^T - (1-\\lambda _3) \\lambda _3^T}, \\end{aligned}\n(83b)\n\\begin{aligned} B_3(T)&= \\frac{\\bar{A}^C + \\bar{A}^{NC} + (s_0-\\bar{s})(1-\\lambda _2)\\lambda _2^{T}}{(1-\\lambda _2) \\lambda _2^T - (1-\\lambda _3) \\lambda _3^T}. \\end{aligned}\n(83c)\n\nBy inserting these expressions back into equations () yields the aggregate abatement levels $$A_t^\\mathcal {C}$$ and $$A_t^\\mathcal {NC}$$ and the stock of aggregate cumulative emissions $$s_t$$ in the subgame perfect Nash equilibrium ($$t=0,\\dots ,T$$).\n\nFinally, we determine the individual countries’ abatement levels in the subgame perfect Nash equilibrium. Using backward induction starting from $$t=T$$, we obtain from equations ():\n\n\\begin{aligned} a_T^i&= 0 ,\\qquad \\forall \\ i \\in \\mathcal {I}\\ ,\\end{aligned}\n(84a)\n\\begin{aligned} a_t^i&= \\frac{A^\\mathcal {C}_t}{\\alpha _i \\mathcal {A}^\\mathcal {C}}, \\qquad \\forall \\ i\\in \\mathcal {C}, \\quad t=0,\\dots ,T-1,\\end{aligned}\n(84b)\n\\begin{aligned} a_t^i&= \\frac{\\gamma _i}{\\Gamma ^\\mathcal {NC}} A^\\mathcal {NC}_{t}, \\qquad \\forall \\ i\\notin \\mathcal {C}, \\quad t=0,\\dots ,T-1\\ . \\end{aligned}\n(84c)\n\n$$\\square$$" ]
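The closed-form objects derived in this proof are easy to cross-check numerically. The following sketch (our addition, not part of the original proof; the values of $$\delta$$, x, y and $$\mathcal{E}$$ are purely hypothetical) builds the matrix M of Eq. (75), compares its numerically computed eigenvalues with the closed-form expressions (80a)-(80c), and evaluates the steady state of Eqs. (79a)-(79c).

```python
import numpy as np

# Hypothetical parameter values, chosen only for illustration.
delta = 0.95   # discount factor
x = 0.4        # mu * A^C * B^C
y = 0.3        # Gamma^NC
E = 10.0       # aggregate baseline emissions

# Matrix M of Eq. (75).
M = np.array([[1/delta + x, x,           -x],
              [y,           1/delta + y, -y],
              [-1.0,        -1.0,         1.0]])

# Eigenvalues from the closed-form expressions (80a)-(80c).
disc = np.sqrt((1 + delta + delta*(x + y))**2 - 4*delta)
lam_closed = sorted([1/delta,
                     (1 + delta + delta*(x + y) - disc) / (2*delta),
                     (1 + delta + delta*(x + y) + disc) / (2*delta)])
lam_numeric = sorted(np.linalg.eigvals(M).real)
print("closed form :", np.round(lam_closed, 6))
print("numerical   :", np.round(lam_numeric, 6))

# Steady state values of Eqs. (79a)-(79c).
A_C = x / (x + y) * E
A_NC = y / (x + y) * E
s_bar = (1 - delta) / delta * E / (x + y)
print("steady state:", A_C, A_NC, s_bar)
```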
https://www.arxiv-vanity.com/papers/hep-ph/9402253/
MPI-Ph/93-103, CERN-Th.7163/94, February 1994

# Electroweak Symmetry Breaking and Bottom-Top Yukawa Unification

M. Carena, M. Olechowski

S. Pokorski and C. E. M. Wagner

CERN Theory Division,
1211 Geneva 23, Switzerland

Max-Planck-Institut für Physik, Werner-Heisenberg-Institut
Föhringer Ring 6, D-80805 München, Germany.

Institute of Theoretical Physics, Warsaw University
ul. Hoza 69, 00-681 Warsaw, Poland

On leave of absence from the Institute of Theoretical Physics, Warsaw University

###### Abstract

The condition of unification of gauge couplings in the minimal supersymmetric standard model provides successful predictions for the weak mixing angle as a function of the strong gauge coupling and the supersymmetric threshold scale. In addition, in some scenarios, e.g. in the minimal SO(10) model, the tau lepton and the bottom and top quark Yukawa couplings unify at the grand unification scale. The condition of Yukawa unification leads naturally to large values of $\tan\beta$, implying a proper top quark–bottom quark mass hierarchy. In this work, we investigate the feasibility of unification of the Yukawa couplings, in the framework of the minimal supersymmetric standard model with (assumed) universal mass parameters at the unification scale and with radiative breaking of the electroweak symmetry. We show that strong correlations between the parameters $\mu$, $M_{1/2}$ and $Y_t/Y_f$ appear within this scheme, where $Y_t/Y_f$ is the ratio of the top quark Yukawa coupling to its infrared fixed point value. These correlations have relevant implications for the sparticle spectrum, which presents several characteristic features. In addition, we show that, due to large corrections to the running bottom quark mass induced through the supersymmetry breaking sector of the theory, the predicted top quark mass and $\tan\beta$ values are significantly lower than those previously estimated in the literature.

## 1 Introduction

The minimal supersymmetric standard model provides a well motivated and predictive extension of the successful standard model of the strong and electroweak interactions. The condition of unification of couplings is implicit within this scheme, and the predictions for the weak mixing angle are in good agreement with the values obtained by the most recent measurements at LEP. In addition to the gauge coupling unification condition, relations between the values of the Yukawa couplings of the quarks and leptons of the third generation appear in the minimal supersymmetric grand unification scheme. In particular, in the minimal SU(5) model the unification of bottom and tau Yukawa couplings is obtained. The bottom–tau Yukawa unification condition leads to predictions for the top quark mass as a function of the running bottom quark mass, the strong gauge coupling value and the value of $\tan\beta$, the ratio of Higgs vacuum expectation values $v_2/v_1$.

Recently, it has been observed that for the phenomenologically allowed values of the bottom quark mass and moderate values of $\tan\beta$, large values of the top quark Yukawa coupling are needed in order to counteract the strong gauge coupling renormalization of the bottom Yukawa coupling. In general, for large enough values of the top quark Yukawa coupling at the grand unification scale, the low energy Yukawa coupling is strongly focussed to a quasi infrared fixed point.
In the minimal supersymmetric standard model, the quasi infrared fixed point predictions for the physical top quark mass are of the form $M_t \simeq C\,\sin\beta$, with $C$ close to 200 GeV for strong gauge coupling values consistent with unification. It has been recently shown that, for the values of the strong gauge coupling consistent with the condition of gauge coupling unification, with reasonable threshold corrections at the grand unification and supersymmetry breaking scales, the top quark mass should be within 10% of its quasi infrared fixed point values if the condition of bottom–tau Yukawa unification is required.

A more predictive scheme is obtained in the framework of the minimal SO(10) unification. In this case top–bottom quark Yukawa unification is also required, implying that, for a given value of the bottom quark mass and the strong gauge coupling value $\alpha_3(M_Z)$, not only the top quark mass but also the value of $\tan\beta$ may be determined. Remarkably, large values of $\tan\beta$ are obtained in this case, leading to a proper bottom–top mass hierarchy. For these large values of $\tan\beta$, the bottom quark Yukawa coupling itself plays a relevant role in the running of the top quark Yukawa coupling, as well as in the running of the ratio of the bottom to tau Yukawa couplings. This leads to a somewhat weaker convergence of the top quark Yukawa coupling to its infrared fixed point value, together with a slight modification of the infrared fixed point expression.

Moreover, it has been recently observed that for these large values of $\tan\beta$ potentially large corrections to the running bottom quark mass may be induced through the supersymmetry breaking sector of the theory. Although in the exact supersymmetric theory the bottom quark and tau lepton only couple to one of the Higgs fields, $H_1$, a coupling of these fermions to the Higgs field $H_2$ is induced at the one loop level in the presence of soft supersymmetry breaking terms. These corrections are decisive in obtaining the predictions for the top quark mass. Indeed, for the characteristic values of $\tan\beta$ arising if top–bottom Yukawa unification is required, $\tan\beta \simeq m_t/m_b$, the bottom mass corrections would be very large, unless the supersymmetric mass parameter $\mu$ and the gluino mass are much lower than the characteristic squark masses. This hierarchy of masses may be achieved by imposing certain symmetries in the theory. These symmetries may, however, be in conflict with the radiative breaking of the electroweak symmetry, particularly in simple supersymmetry breaking scenarios.

The question of radiative electroweak symmetry breaking with large $\tan\beta$ appears, hence, as an independent issue, which has been investigated in minimal supersymmetric models with universal soft supersymmetry breaking parameters at the grand unification scale with encouraging results. However, not enough attention was paid either to the full consistency with the requirement of the unification of the gauge and Yukawa couplings, nor to a systematic identification of the complete parameter space at the GUT scale which gives electroweak symmetry breaking with large $\tan\beta$. Recently, we presented an investigation of the properties of the radiative electroweak symmetry breaking solutions for small and moderate values of $\tan\beta$ and a top quark Yukawa coupling taking values close to its infrared fixed point solution, as required by bottom–tau Yukawa coupling unification. We obtained quite remarkable correlations between different supersymmetric mass parameters, as well as an effective reduction of the number of free independent parameters at the grand unification scale.
It is the purpose of this work to perform a similar analysis for the large $\tan\beta$ regime.

We use the recently developed bottom-up approach to radiative electroweak symmetry breaking, which is particularly suitable for a systematic search for large $\tan\beta$ solutions, and possibly to identify the symmetries underlying those solutions. In our calculation we use the two loop renormalization group evolution of gauge and Yukawa couplings, while the Higgs and supersymmetric mass parameters are evolved at the one loop level. The leading supersymmetric threshold corrections to the Higgs quartic couplings and to all supersymmetric mass parameters are included in the analysis. We proceed by fixing the experimentally known values of the tau mass, the Z boson mass and the gauge couplings (with their corresponding uncertainties). After choosing a set of values for $M_t$ and $\tan\beta$, the unification condition of the three Yukawa couplings fixes their running in the range from $M_Z$ to $M_{GUT}$. Next, the search for electroweak symmetry breaking solutions is performed by scanning over the CP odd Higgs mass and the low energy stop mass parameters. For each solution the one-loop correction to the running bottom mass at $M_Z$ is calculated and finally the pole bottom mass is obtained. The predictions for the top quark mass and $\tan\beta$ are given by the collection of those values of $M_t$ and $\tan\beta$ for which there are solutions with the pole bottom mass within the experimentally acceptable range. A more detailed explanation of this procedure will be given below.
We reserve section 7 for the conclusions.

## 2 Gauge and Yukawa Coupling Unification Predictions

We begin with a short discussion of the predictions for the top quark mass following from the unification of the gauge and Yukawa couplings (before imposing the requirement of radiative electroweak breaking), recalling and slightly extending some of the results presented in earlier works. The gauge coupling unification condition gives predictions for the weak mixing angle $\sin^2\theta_W(M_Z)$ as a function of the strong gauge coupling $\alpha_3(M_Z)$. The unification condition implies (at the two loop level) the following numerical correlation

$$\sin^2\theta_W(M_Z) = 0.2324 - 0.25\,\big(\alpha_3(M_Z) - 0.123\big) \pm 0.0025 \qquad (1)$$

where the central value corresponds to an effective supersymmetric threshold scale of the order of $M_Z$ and the error is the estimated uncertainty in the prediction arising from possible supersymmetric threshold corrections (corresponding to varying the effective supersymmetric threshold scale from 15 GeV to 1 TeV), threshold corrections at the unification scale, as well as from higher dimensional operators. On the other hand, $\sin^2\theta_W(M_Z)$ is given by the electroweak parameters $G_F$, $\alpha_{em}$ and $M_Z$ as a function of the physical top quark mass (at the one loop level) by the formula:

$$\sin^2\theta_W(M_Z) = 0.2324 - 10^{-7}\,{\rm GeV}^{-2}\,\big(M_t^2 - (138\ {\rm GeV})^2\big) \pm 0.003 \qquad (2)$$

Therefore, the predictions from gauge coupling unification agree with experimental data provided

$$M_t^2 = (138\ {\rm GeV})^2 + 0.25\times 10^{7}\,{\rm GeV}^2\,\big(\alpha_3(M_Z) - 0.123 \pm 0.01\big) \qquad (3)$$

The above correlation defines a band whose lower bound is shown in Fig. 1 (the upper bound lies above $\alpha_3(M_Z) = 0.13$ for the top quark mass values of interest). We observe that a heavy top quark mass implies large values of $\alpha_3(M_Z)$.

Another issue is that of unification of the bottom and tau Yukawa couplings. In this work the unification of Yukawa couplings is always studied numerically at the two-loop level. However, for a qualitative discussion we refer to the one-loop renormalization group equation for the ratio of the bottom to tau Yukawa couplings which, in the limit of vanishing electroweak gauge couplings, reads

$$\frac{dr}{dt} = \frac{r}{8\pi}\left(\frac{16}{3}\,\alpha_3 - 3 Y_b - Y_t + 3 Y_\tau\right) \qquad (4)$$

where $r = h_b/h_\tau$ and $Y_i = h_i^2/4\pi$. Starting from values of the ratio above one at the scale $M_Z$, as required by the experimentally allowed values of the bottom and tau masses, $r$ is strongly renormalized and, in the limit of negligible Yukawa couplings, for values of $\alpha_3(M_Z)$ within the experimentally determined range it becomes lower than one at scales far below the grand unification scale. Hence, in order to get $r(M_{GUT}) = 1$, for a given value of $\alpha_3(M_Z)$, the Yukawa couplings in Eq.(4) should be adjusted to compensate the strong gauge coupling effect. For low and moderate values of $\tan\beta$, for which the bottom Yukawa coupling remains small, it is the top quark Yukawa coupling which is fixed as a function of $\alpha_3(M_Z)$ by the bottom–tau Yukawa unification requirement. As we discussed in the introduction, this perturbative unification requires values that are within 10% of the top quark mass infrared quasi fixed point value.

Here we are primarily concerned with the large $\tan\beta$ solution. Then, the bottom and the top quark Yukawa couplings are of the same order of magnitude and both are important to get $r(M_{GUT}) = 1$. The unification of the three Yukawa couplings takes place not only for a particular value of $M_t$ but also of $\tan\beta$, for given values of the bottom mass, the tau mass and $\alpha_3(M_Z)$, implying fixed predictions for both quantities. An important remark is in order here. The bottom mass which is directly relevant for the top mass prediction following from the Yukawa coupling unification is the tree level running mass $m_b(M_Z)$. As we discussed in the previous section, in the large $\tan\beta$ case it may receive large loop corrections from sparticle exchange loops, at least in some range of parameters of the model.
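To make the interplay of Eqs. (1)-(3) concrete, the following sketch (our illustration; it only encodes the central values quoted above) evaluates the predicted weak mixing angle and the top quark mass required by gauge coupling unification for a few values of $\alpha_3(M_Z)$.

```python
import math

def sin2_theta_w(alpha3):
    """Central value of the unification prediction, Eq. (1)."""
    return 0.2324 - 0.25 * (alpha3 - 0.123)

def top_mass(alpha3):
    """Top quark mass required by Eq. (3), central value (in GeV)."""
    mt2 = 138.0**2 + 0.25e7 * (alpha3 - 0.123)
    return math.sqrt(mt2) if mt2 > 0 else float("nan")

for a3 in (0.120, 0.123, 0.126, 0.130):
    print(f"alpha_3(M_Z)={a3:.3f}  sin^2theta_W={sin2_theta_w(a3):.4f}  "
          f"M_t={top_mass(a3):5.1f} GeV")
```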
The physical (pole) mass $M_b$ is obtained from the running mass $m_b(M_b)$ (which is related to $m_b(M_Z)$ by the Standard Model RG equations) by inclusion of QCD corrections, which are universal for the Standard Model and its supersymmetric version. At the two loop level, they are given by

$$m_b(M_b) = \frac{M_b}{1 + \dfrac{4\alpha_3(M_b)}{3\pi} + K_b\left(\dfrac{\alpha_3(M_b)}{\pi}\right)^2}, \qquad (5)$$

where $K_b \simeq 12$. The loop corrections to the running bottom mass at $M_Z$ induced through sparticle exchange loops are an important issue for models with radiative breaking of the electroweak symmetry. In order to distinguish them from QCD corrections, we introduce the pole bottom mass $M_b^0$, which is obtained from the unification condition in the case in which the supersymmetric one loop corrections to $m_b(M_Z)$ are ignored. Due to the fact that the supersymmetric corrections could be quite sizeable for the allowed solutions of the model, the mass $M_b^0$ may be significantly different from the physical mass $M_b$.

The predictions for $M_t$ and $\tan\beta$, following from the unification of the three Yukawa couplings, are shown in Fig. 1 for several values of the mass $M_b^0$ as a function of $\alpha_3(M_Z)$. The supersymmetric particle masses were set at the scale $M_Z$, while the unification scale was defined as the scale at which the electroweak gauge couplings unify. Fig. 1 also shows the region in the $M_t$–$\alpha_3(M_Z)$ plane consistent with the unification of gauge couplings, after considering the experimental dependence of $\sin^2\theta_W(M_Z)$ on the top quark mass and threshold corrections at the supersymmetric and grand unification scales, Eq.(3).

From Fig. 1 we draw the following conclusions. In case the supersymmetric loop contributions to the bottom mass were negligible, $M_b = M_b^0$, and taking into account the experimentally acceptable values for the physical bottom mass, the unification of the gauge and Yukawa couplings drives the top quark mass towards large values (note that for $\tan\beta \gg 1$ the IR fixed point solution is lower than for small $\tan\beta$). Although the predictions for $M_t$ are no longer so strongly constrained to be close to the infrared quasi fixed point values as for the low and moderate values of $\tan\beta$ (as explained above, strong renormalization effects in the running of $h_b/h_\tau$ are partially cancelled by the bottom Yukawa coupling itself), for the values of $\alpha_3(M_Z)$ consistent with gauge coupling unification the top quark mass is still close to the appropriate infrared fixed point solution. For instance, for a physical bottom quark mass at the upper end of the experimentally allowed range and $\alpha_3(M_Z)$ consistent with unification, the predicted top quark mass lies close to its fixed point value. In general, as is clear from Fig. 1, if the supersymmetric corrections to the running bottom mass were small, the top quark mass would acquire rather large values within this scheme. In Fig. 1 we also plot the predictions obtained for values of $M_b^0$ larger than the experimental upper bound on the bottom mass, which will become of interest while studying the supersymmetric corrections to the bottom mass. Indeed, values of the top quark mass in the range of, say, (140–160) GeV are compatible with unification of couplings provided $M_b^0$ lies above that bound and sizeable (negative) supersymmetric loop corrections to the bottom mass are induced. As we shall show below, this is the case in the minimal supergravity model with minimal Yukawa unification.
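As an illustration of the size of the QCD corrections in Eq. (5), the short sketch below converts an assumed pole mass into the running mass $m_b(M_b)$; both the input value of $\alpha_3(M_b)$ and the coefficient $K_b = 12.4$ are assumptions made for the purpose of the example.

```python
import math

def mb_running_from_pole(Mb, alpha3_mb, Kb=12.4):
    """Eq. (5): running mass m_b(M_b) from the pole mass M_b (GeV).
    Kb ~ 12.4 is an assumed value of the two-loop coefficient."""
    x = alpha3_mb / math.pi
    return Mb / (1.0 + 4.0 * x / 3.0 + Kb * x * x)

# Illustrative inputs: pole mass 4.9 GeV, assumed alpha_3(M_b) ~ 0.22.
print(f"m_b(M_b) = {mb_running_from_pole(4.9, 0.22):.2f} GeV")
```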
It is relevant to contrast this situation with what happens for low and moderate values of $\tan\beta$, for which the consistency of a moderately heavy top quark with bottom–tau Yukawa unification requires the ratio of vacuum expectation values to be very close to one, $\tan\beta \simeq 1$, unless large threshold corrections to both gauge and Yukawa couplings are present at the grand unification scale.

## 3 Higgs Potential Parameters

In order to analyze the radiative electroweak symmetry breaking condition, one should concentrate on the Higgs potential of the theory. In the Minimal Supersymmetric Standard Model, and after the inclusion of the leading-logarithmic radiative corrections, it may be written as

$$V_{eff} = m_1^2 H_1^\dagger H_1 + m_2^2 H_2^\dagger H_2 - m_3^2\left(H_1^T i\tau_2 H_2 + {\rm h.c.}\right) + \frac{\lambda_1}{2}(H_1^\dagger H_1)^2 + \frac{\lambda_2}{2}(H_2^\dagger H_2)^2 + \lambda_3 (H_1^\dagger H_1)(H_2^\dagger H_2) + \lambda_4 \left|H_2^\dagger i\tau_2 H_1^*\right|^2 \qquad (6)$$

where the quartic couplings may be obtained from the corresponding renormalization group equations and the fact that, at scales at which the theory is supersymmetric, the running quartic couplings $\lambda_i$, with $i=1,\dots,4$, must satisfy the following conditions:

$$\lambda_1 = \lambda_2 = \frac{g_1^2+g_2^2}{4}, \qquad \lambda_3 = \frac{g_2^2-g_1^2}{4}, \qquad \lambda_4 = -\frac{g_2^2}{2}. \qquad (7)$$

The masses $m_i^2$, with $i = 1,2,3$, are also running mass parameters, whose renormalization group equations may be found in the literature. As we explained in section 1, in the numerical analysis we considered the two loop renormalization group evolution of gauge and Yukawa couplings, while the supersymmetric and Higgs mass parameters, as well as the low energy Higgs quartic couplings, are evolved at the one loop level with the leading supersymmetric threshold corrections included. The minimization conditions read

$$\sin(2\beta) = \frac{2 m_3^2}{m_A^2}, \qquad (8)$$

$$\tan^2\beta = \frac{m_1^2 + \lambda_2 v^2 + (\lambda_1-\lambda_2)\, v_1^2}{m_2^2 + \lambda_2 v^2}, \qquad (9)$$

where $v_i$ is the vacuum expectation value of the Higgs field $H_i$, $v^2 = v_1^2 + v_2^2$, $\tan\beta = v_2/v_1$, and $m_A$ is the CP-odd Higgs mass,

$$m_A^2 = m_1^2 + m_2^2 + \lambda_1 v_1^2 + \lambda_2 v_2^2 + (\lambda_3+\lambda_4)\, v^2 \qquad (10)$$

and we define the mass parameter $m_3^2$ to be positive.

Apart from the mass parameters $m_i^2$ appearing in the effective potential, the evolution of the supersymmetric mass parameter $\mu$ appearing in the superpotential,

$$f = h_t\,\epsilon_{ij} Q_j U H_2^i + h_b\,\epsilon_{ij} Q_i D H_1^j + h_\tau\,\epsilon_{ij} L_i E H_1^j + \mu\,\epsilon_{ij} H_1^i H_2^j, \qquad (11)$$

(where $Q$ is the top–bottom left handed doublet superfield, $L$ the lepton doublet superfield, and $U$, $D$ and $E$ are singlet superfields) is relevant for the analysis of the radiative electroweak symmetry breaking conditions. The bilinear mass term proportional to $m_3^2$ appearing in the Higgs potential may be rewritten as a soft supersymmetry breaking parameter $B$ multiplied by the Higgs bilinear term appearing in the superpotential, that is $m_3^2 = B\mu$. Analogously, the scalar potential may contain scalar trilinear breaking terms proportional to the Yukawa dependent part of the superpotential, with trilinear couplings $A_t$, $A_b$ and $A_\tau$.

In order to get an understanding of the numerical results, we will present approximate analytical formulae for the relations required by the electroweak symmetry breaking conditions, in which the radiative corrections to the quartic couplings are ignored. There are several features of the Higgs potential which are characteristic for large $\tan\beta$ values. They can be easily discussed in a qualitative way on the basis of the supersymmetric tree level potential. Eq.(9) simplifies to

$$\tan^2\beta = \frac{m_1^2 + M_Z^2/2}{m_2^2 + M_Z^2/2}, \qquad (12)$$

so, for large $\tan\beta$ (already, say, $\tan\beta \gtrsim 10$), either

$$m_2^2 \simeq -\frac{M_Z^2}{2}, \qquad (13)$$

if the $m_i^2$ are of the order of the Z boson mass squared, or

$$m_1^2 \simeq \tan^2\beta\; m_2^2 \qquad (14)$$

when the mass parameters are much larger than $M_Z^2$. In general, the smaller is the cancellation in the denominator of Eq.(12), the larger is the hierarchy between $m_1^2$ and $m_2^2$. The second relation, Eq.(14), is, however, unnatural when the supersymmetric spectrum is not too heavy.
Indeed, if all supersymmetric particle masses are below a few TeV, Eq.(13) holds within a good approximation. (Although the inclusion of radiative corrections modifies the low energy convergence of the $m_2^2$ parameter, the relation $m_2^2 \simeq -M_Z^2/2$ is preserved, which is sufficient for the understanding of the properties discussed below.) Eq.(13), combined with the condition $m_A^2 > 0$, gives a useful constraint:

$$m_1^2 - m_2^2 > M_Z^2. \qquad (15)$$

Another very important property is

$$m_3^2 \simeq \frac{m_A^2}{\tan\beta}, \qquad (16)$$

or, equivalently, $m_3^2 \ll m_A^2$. Since in this case a large hierarchy between $m_A$ and $M_Z$ is highly unnatural, the above condition, Eq.(16), implies also $m_3^2 \ll M_Z^2$. Thus, in order to study the implication of the electroweak symmetry breaking condition, one can effectively replace Eq.(16) with the condition $m_3^2 \simeq 0$.

To go further with the analysis, it is very useful to obtain approximate analytical solutions for the one loop renormalization group evolution of the mass parameters, whose validity may be proven by comparing them with our numerical solutions. We will assume universality of the soft supersymmetry breaking parameters, that is to say a common scalar mass $m_0$ and a common gaugino mass $M_{1/2}$, as well as the boundary conditions for the parameters $A$ and $B$, and $\mu$, at the grand unification scale to be given by $A_0$, $B_0$ and $\mu_0$, respectively. In the region of large values of $\tan\beta$, for which the bottom Yukawa coupling is of the order of the top Yukawa coupling, an approximate analytical solution for the one loop evolution of the mass parameters may be obtained. For this, we identify the bottom and top Yukawa couplings and neglect the tau Yukawa coupling effects. Furthermore, all supersymmetric threshold corrections are ignored at this level. The solution for $Y_t$ reads

$$Y_t(t) = \frac{4\pi\, Y_t(0)\, E(t)}{4\pi + 7\, Y_t(0)\, F(t)} \qquad (17)$$

where $t = 2\ln(M_{GUT}/Q)$ and $E$ and $F$ are functions of the gauge couplings,

$$E = (1+\beta_3 t)^{16/3b_3}\,(1+\beta_2 t)^{3/b_2}\,(1+\beta_1 t)^{13/9b_1}, \qquad F = \int_0^t E(t')\,dt' \qquad (18)$$

with $\beta_i = b_i\,\alpha_i(0)/4\pi$, $b_i$ the beta function coefficient of the gauge coupling $\alpha_i$, and we identify the right bottom and the right top hypercharges. As we said, the fixed point solution is obtained for values of the top quark Yukawa coupling which become large at the grand unification scale, that is, approximately,

$$Y_f(t) = \frac{4\pi E(t)}{7 F(t)}. \qquad (19)$$

As had been anticipated in the literature, the fixed point solution for the case $h_t \simeq h_b$ differs by a factor $6/7$ from the corresponding solution in the low $\tan\beta$ case, for which $Y_f(t) = 4\pi E(t)/6F(t)$. From here, by inspecting the renormalization group equations for the mass parameters, we obtain the approximate analytical solutions

$$m_{H_1}^2 \simeq m_{H_2}^2 = m_0^2 + 0.5\, M_{1/2}^2 - \frac{3}{7}\,\Delta m^2 \qquad (20)$$

$$m_U^2 \simeq m_D^2 = m_0^2 + 6.7\, M_{1/2}^2 - \frac{2}{7}\,\Delta m^2 \qquad (21)$$

$$m_Q^2 \simeq m_0^2 + 7.2\, M_{1/2}^2 - \frac{2}{7}\,\Delta m^2 \qquad (22)$$

where $Y \equiv Y_t \simeq Y_b$, and $m_Q$, $m_D$ and $m_U$ are the squark doublet, right bottom squark and right stop quark mass parameters, respectively, and

$$\Delta m^2 \simeq 3 m_0^2\,\frac{Y}{Y_f} - 4.6\, A_0 M_{1/2}\,\frac{Y}{Y_f}\left(1-\frac{Y}{Y_f}\right) + A_0^2\,\frac{Y}{Y_f}\left(1-\frac{Y}{Y_f}\right) + M_{1/2}^2\left[14\,\frac{Y}{Y_f} - 6\left(\frac{Y}{Y_f}\right)^2\right]. \qquad (23)$$

Here we have concentrated on the above mass parameters, because they are the only relevant ones for the study of the properties of the radiative electroweak symmetry breaking solutions in the bottom-up approach described above. We will discuss the properties of the mass spectrum in more detail in section 6. Moreover, the supersymmetric mass parameter renormalization group evolution gives

$$\mu^2 = 2\mu_0^2\left(1-\frac{Y}{Y_f}\right)^{6/7}, \qquad (24)$$

while the running of the soft supersymmetry breaking trilinear and bilinear couplings reads

$$A_t = A_0\left(1-\frac{Y}{Y_f}\right) - M_{1/2}\left(4.2 - 2.1\,\frac{Y}{Y_f}\right), \qquad (25)$$

$$B \simeq \delta(Y) + M_{1/2}\left(2\,\frac{Y}{Y_f} - 0.6\right), \qquad (26)$$

with

$$\delta(Y) = B_0 - \frac{6\, Y}{7\, Y_f}\, A_0. \qquad (27)$$

The coefficients characterizing the dependence of the mass parameters on the universal gaugino mass are functions of the exact values of the gauge couplings.
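The one loop expressions (17)-(23) can be evaluated directly. The sketch below (ours; the unified coupling $\alpha_G = 1/24$, the scale $M_{GUT} = 2\times 10^{16}$ GeV and the sample soft parameters are illustrative assumptions) computes E, F and the fixed point $Y_f$ of Eq. (19) at the weak scale, and then the approximate mass parameters of Eqs. (20)-(23).

```python
import numpy as np

# Illustrative inputs (assumptions): common GUT coupling and scale.
alpha_G = 1.0 / 24.0
M_GUT, M_Z = 2.0e16, 91.19
b = {1: 33.0 / 5.0, 2: 1.0, 3: -3.0}   # MSSM one-loop beta coefficients
beta = {i: alpha_G * b[i] / (4.0 * np.pi) for i in b}

def E(t):
    # Eq. (18); exponents 16/(3 b3), 3/b2 and 13/(9 b1).
    return ((1 + beta[3] * t) ** (16.0 / (3.0 * b[3]))
            * (1 + beta[2] * t) ** (3.0 / b[2])
            * (1 + beta[1] * t) ** (13.0 / (9.0 * b[1])))

t_max = 2.0 * np.log(M_GUT / M_Z)
ts = np.linspace(0.0, t_max, 2001)
F = np.trapz(E(ts), ts)                    # F = int_0^t E, Eq. (18)
Y_f = 4.0 * np.pi * E(t_max) / (7.0 * F)   # fixed point, Eq. (19)
print(f"E={E(t_max):.2f}  F={F:.1f}  Y_f={Y_f:.4f}  "
      f"h_t^f={np.sqrt(4 * np.pi * Y_f):.2f}")

def mass_params(m0, M12, A0, r):
    """Approximate low energy mass parameters, Eqs. (20)-(23); r = Y/Y_f."""
    dm2 = (3 * m0**2 * r - 4.6 * A0 * M12 * r * (1 - r)
           + A0**2 * r * (1 - r) + M12**2 * (14 * r - 6 * r**2))  # Eq. (23)
    mH2 = m0**2 + 0.5 * M12**2 - (3.0 / 7.0) * dm2                # Eq. (20)
    mU2 = m0**2 + 6.7 * M12**2 - (2.0 / 7.0) * dm2                # Eq. (21)
    mQ2 = m0**2 + 7.2 * M12**2 - (2.0 / 7.0) * dm2                # Eq. (22)
    return mH2, mU2, mQ2

print(mass_params(m0=200.0, M12=300.0, A0=0.0, r=1.0))
```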
In the above, we have taken the values of the coefficients that are obtained for $\alpha_3(M_Z) \simeq 0.12$.

The approximate solutions, Eqs. (20)–(25), become weakly dependent on the exact value of the top quark Yukawa coupling, the dependence being weaker for top quark Yukawa couplings closer to the fixed point value. The strongest dependence on the parameter $Y/Y_f$ comes through the parameter $\Delta m^2$ introduced above. Similar properties are obtained in the low $\tan\beta$ regime, although the explicit form of the parameter $\Delta m^2$ is different in that case. From Eq.(24), it follows that the coefficient relating $\mu^2$ to $\mu_0^2$ tends to zero as $Y \to Y_f$. This coefficient scales faster to zero than in the low $\tan\beta$ case.

## 4 Radiative Breaking of SU(2)_L × U(1)_Y

In the following we present a complete numerical analysis of the constraints coming from the requirement of a proper radiative electroweak symmetry breaking in the large $\tan\beta$ regime. As described in the Introduction, we use the bottom-up approach. For a fixed value of the top quark mass we search for all solutions to radiative breaking which give a chosen value of $\tan\beta$, by scanning over the CP odd Higgs mass and the low energy stop mass parameters. The latter are very convenient as input parameters, as they fix the leading supersymmetric threshold corrections to the Higgs potential. While studying the model from low energies we have chosen, for definiteness, an upper bound of 2 TeV on the scanned parameters. For a somewhat larger upper bound, larger values of the soft supersymmetry breaking parameters are allowed, but the general features of the solutions are preserved. It is natural to expect that the supersymmetric parameters are at most of the order of a few TeV, if supersymmetry is to solve the hierarchy problem of the Standard Model. In Figs. 2–5 we present the results, which show interesting correlations among the soft supersymmetry breaking parameters.

As discussed in section 3, the one loop corrections to the effective Higgs potential, necessary to perform a proper analysis of the radiative electroweak symmetry breakdown, were included in the numerical analysis. The gauge and Yukawa couplings were evolved with their two loop renormalization group equations between $M_Z$ and $M_{GUT}$. In their evolution, we have treated all supersymmetric particle masses as being equal to $M_Z$. Although this procedure introduces small uncertainties on the predicted values of $M_t$ and $\tan\beta$ (which will be considered in our analysis), it keeps all the essential features of the radiative electroweak symmetry breaking with unification of bottom and top Yukawa couplings, makes possible the comparison of our results with the ones of Fig. 1, and allows an easy analytical interpretation of the numerical results. In addition, the small uncertainties on $M_t$ and $\tan\beta$ may be treated by analytical methods.

Analogously to the low $\tan\beta$ scenario, it is possible to derive approximate analytical relations which are useful in the understanding of the numerical results. Indeed, considering the conditions for a proper radiative electroweak symmetry breaking, Eq.(13), the approximate solutions for the mass parameters, Eqs. (20)–(23), and ignoring radiative corrections to the quartic couplings, the following analytical expression is obtained:

$$\mu^2 = m_0^2\left(\frac{9}{7}\,\frac{Y}{Y_f} - 1\right) - M_{1/2}^2\left[0.5 - 6\,\frac{Y}{Y_f} + \frac{18}{7}\left(\frac{Y}{Y_f}\right)^2\right] - \frac{3}{7}\,4.6\, A_0 M_{1/2}\,\frac{Y}{Y_f}\left(1-\frac{Y}{Y_f}\right) + \frac{3}{7}\, A_0^2\,\frac{Y}{Y_f}\left(1-\frac{Y}{Y_f}\right) - \frac{M_Z^2}{2}. \qquad (28)$$

In the analytical presentation we will always keep the expressions as functions of the low energy parameter $Y/Y_f$.
The reason is that in the one loop approximation $\mu^2$ and $\mu_0^2$ are linearly related, Eq.(24), and $Y/Y_f$ becomes a more appropriate parameter for the description of the solution properties, particularly for large values of the top quark mass, where the relation between $\mu$ and $\mu_0$ strongly depends on the degree of proximity to the fixed point value. The $\mu_0$ dependence may always be recovered by using Eq.(24).

In the above we have taken the expression for $m_1^2 - m_2^2$ obtained in the analytical approximation in which the bottom and top Yukawa couplings are identified and the tau Yukawa coupling is neglected. In the explicit numerical solution for the mass parameters, however, we obtain

$$m_1^2 - m_2^2 = \alpha\, M_{1/2}^2 + \beta\, m_0^2 \qquad (29)$$

where, for the two representative cases considered below, the coefficient $\alpha$ is small and positive, while $\beta$ is negative and small in magnitude. The order of magnitude of the coefficients $\alpha$ and $\beta$ can be easily inferred from the renormalization group equations. Indeed, it is easy to show that under the condition of unification of the three third generation Yukawa couplings $\alpha$ comes mainly from the difference in the running of bottom and top Yukawa couplings, together with the different hypercharges of the right top and bottom quarks, which induce a different gaugino dependence of the stop and sbottom parameters. The negative values of $\beta$ are mainly due to the tau lepton Yukawa effects. We see that, due to the restriction $m_1^2 - m_2^2 > M_Z^2$, Eq.(15), values of $m_0 \gg M_{1/2}$ make the radiative breaking of the electroweak symmetry impossible, in the approximation which neglects supersymmetric threshold corrections to the Higgs potential. In the numerical analysis, which includes those corrections, the only solutions are still obtained for $M_{1/2}$ of the order of, or larger than, $m_0$, as seen in Fig. 2.a and Fig. 2.b. It is also important to remark that, due to the smallness of the parameters $\alpha$ and $\beta$, the dependence of the mass parameters $m_1^2$ and $m_2^2$ on the gaugino mass is well described by Eq.(28) (which was obtained in the approximation $h_t = h_b$), while the dependence on the remaining mass parameters stays weak over the allowed parameter space. In general, the corrections to the approximate solutions given in Eqs.(19)–(22) and Eq.(28) are small, and hence they provide useful information for the analysis of the electroweak symmetry breaking condition.

The two sets of values of the top quark mass and of the ratio of the Higgs vacuum expectation values used above are such that unification of gauge and Yukawa couplings is achieved for the corresponding values of $M_b^0$ and $\alpha_3(M_Z)$, close to and away from the fixed point, respectively. Considering a Yukawa coupling solution sufficiently close to the infrared fixed point, and values of $m_0$ as required by the radiative breaking conditions (as follows from Eqs.(15) and (29)), we obtain from Eq.(28) that

$$\mu^2 \simeq 3\, M_{1/2}^2, \qquad (30)$$

i.e. there is a strong linear correlation between $\mu$ and $M_{1/2}$. If, instead, we consider the case $Y/Y_f \simeq 0.6$ (corresponding to a lower value of the top quark mass) as a representative one of what happens when we depart from the fixed point value, we obtain

$$\mu^2 = -0.23\, m_0^2 + 2.2\, M_{1/2}^2 - 0.47\, A_0 M_{1/2} + 0.1\, A_0^2 - \frac{M_Z^2}{2}. \qquad (31)$$

There is a stronger dependence on the supersymmetry breaking parameter $A_0$. However, due to the relation between $m_0$ and $M_{1/2}$, the bounds on $A_0$ and $m_0$ coming from the stability condition and the requirement of the absence of colour breaking minima, and the smallness of the coefficients associated with the $m_0$ and $A_0$ dependence, one gets that the correlation between $\mu$ and $M_{1/2}$ is conserved over most of the parameter space,

$$\mu^2 \simeq D\, M_{1/2}^2, \qquad (32)$$

where $D \simeq 2.2$.
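A direct evaluation of Eq. (28), sketched below with illustrative inputs, makes the correlation explicit: close to the fixed point the $m_0$ and $A_0$ terms are strongly suppressed and $\mu^2/M_{1/2}^2$ is close to 3, while for $Y/Y_f = 0.6$ the ratio approaches the coefficient in Eq. (31).

```python
MZ = 91.19

def mu2(m0, M12, A0, r):
    """mu^2 from Eq. (28); r = Y/Y_f."""
    return (m0**2 * (9.0 * r / 7.0 - 1.0)
            - M12**2 * (0.5 - 6.0 * r + 18.0 / 7.0 * r**2)
            - (3.0 / 7.0) * 4.6 * A0 * M12 * r * (1.0 - r)
            + (3.0 / 7.0) * A0**2 * r * (1.0 - r)
            - MZ**2 / 2.0)

# Illustrative soft parameters (assumptions).
M12 = 300.0
for r in (1.0, 0.6):
    print(f"Y/Y_f={r}:  mu^2/M_1/2^2 = {mu2(150.0, M12, 100.0, r) / M12**2:.2f}")
```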
The predictions coming from the above analysis, based on the approximate relations Eqs.(30)–(32), must be compared with the results of the numerical analysis, in which the running of gauge and Yukawa couplings has been considered at the two loop level, and all one loop threshold corrections to the quartic couplings and masses have been included. The resulting correlations between $\mu$ and $M_{1/2}$ are depicted in Fig. 3.a and Fig. 3.b, and are in good agreement with the analytical results, although the coefficient in Fig. 3.b is somewhat smaller than the analytical prediction, Eq.(32).

The information above may be used to get a further understanding of the properties of Fig. 2. The lower bound on $M_{1/2}$, for instance, follows from the condition $m_1^2 - m_2^2 > M_Z^2$, which yields

$$M_{1/2} > \frac{M_Z}{\sqrt{\alpha}}, \qquad (33)$$

where, as we said above, $\alpha$ takes different small positive values in the two cases considered. From Fig. 2 we observe that, although the lower limit on $M_{1/2}$ in the lighter top quark case is well described by Eq.(33), the one for the heavier top quark is somewhat higher than the one predicted by Eq.(33). This difference is a reflection of the size of the one loop radiative corrections to the quartic couplings, which grow with the fourth power of the top quark mass and were ignored in the derivation of Eq.(33).

As we explained above, the condition $m_1^2 - m_2^2 > M_Z^2$ also excludes the points with $m_0 \gg M_{1/2}$. Furthermore, low values of $m_0$, although consistent with the condition of radiative breaking, induce large mixing in the stau sector, which yields stau masses lower than the neutralino ones. The fact that low values of $m_0$ lead to a stau lighter than the neutralinos was already noticed in the literature. In the figures we impose the condition of a neutral supersymmetric particle being the lightest one as an additional experimental constraint. Under these conditions, the lightest supersymmetric particle is always a bino, with mass $M_{\tilde B} \simeq 0.4\, M_{1/2}$. In order to get a quantitative understanding of the lower limit on $m_0$, we recall that, ignoring small tau Yukawa coupling effects, the left and right slepton mass parameters are given by

$$m_L^2 \simeq 0.5\, M_{1/2}^2 + m_0^2, \qquad m_R^2 \simeq 0.15\, M_{1/2}^2 + m_0^2, \qquad (34)$$

while the mixing term for large $\tan\beta$ is dominated by the $\mu$ parameter,

$$m_{LR}^2 \simeq -h_\tau\,\mu\, v_2. \qquad (35)$$

Using the fact that, at energies of the order of $M_Z$, $Y_\tau \simeq Y_t/3$, and the bottom–top unification condition, the requirement that the lighter stau be heavier than the lightest neutralino approximately yields

$$m_0^2 \ge -0.15\, M_{1/2}^2 + \sqrt{\left(0.15\, M_{1/2}^2\right)^2 + \mu^2 m_t^2/3}. \qquad (36)$$

Recalling Eqs.(30) and (32), and using Eq.(36), one can get an understanding of the region indicated as experimentally excluded in Figs. 2.a and 2.b (see also Fig. 10).

Close to the infrared quasi fixed point solution the condition $m_3^2 \simeq 0$ yields $B \simeq 0$ (from Eqs.(30),(32) the parameter $\mu$ cannot vanish), i.e.

$$\delta \equiv B_0 - \frac{6 A_0}{7} \simeq -1.4\, M_{1/2}. \qquad (37)$$

In the numerical analysis, we studied the correlations between $\delta$ and $M_{1/2}$ and compared them with the results coming from Eq.(37). The results are depicted in Fig. 4.a. The numerical results confirm to a good degree the analytical expectations. Analogously, for $Y/Y_f \simeq 0.6$, we obtain

$$\frac{B_0}{M_{1/2}} - 0.5\,\frac{A_0}{M_{1/2}} = -0.6. \qquad (38)$$

The correlation resulting in this case from the numerical analysis is depicted in Fig. 4.b, and is in good agreement with Eq.(38) too.

The strong correlation between the parameter $\delta$ and $M_{1/2}$, together with the $\mu$–$M_{1/2}$ correlation, Eqs. (30)–(32), implies also a strong correlation between $\delta$ and $\mu_0$. The numerical correlation is presented in Figs. 5.a and 5.b, for which we chose to plot the GUT scale parameter $\mu_0$ instead of the renormalized parameter $\mu$. From Figs. 3 and 5 we can hence also obtain information about the relation between $\mu$ and $\mu_0$, which agrees well with the analytical prediction, Eq.(24).
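The excluded region can be sketched as follows (our illustration; the top quark mass and the fixed point relation $\mu \simeq \sqrt{3}\, M_{1/2}$ used as inputs are assumptions): Eq. (36) gives the minimal $m_0$, for given $M_{1/2}$ and $\mu$, for which the lighter stau stays above the bino.

```python
import math

m_t = 175.0  # assumed top quark mass (GeV), illustrative

def m0_min(M12, mu):
    """Lower bound on m_0 from Eq. (36), in GeV."""
    a = 0.15 * M12**2
    return math.sqrt(max(0.0, -a + math.sqrt(a**2 + mu**2 * m_t**2 / 3.0)))

for M12 in (200.0, 400.0, 600.0):
    mu = math.sqrt(3.0) * M12   # fixed point correlation, Eq. (30)
    print(f"M_1/2 = {M12:.0f} GeV  ->  m_0 >~ {m0_min(M12, mu):.0f} GeV")
```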
Observe that the condition $B \simeq 0$ is a property of the radiative breaking solutions with large values of $\tan\beta$ and a not too heavy supersymmetric spectrum, and in this sense is independent of the condition of unification of top and bottom quark Yukawa couplings. Since very low values of $\mu$ ($\mu \simeq 0$) are not consistent with the condition of radiative breaking of the electroweak symmetry, equations analogous to Eqs.(37) and (38) will be obtained even if we relax the bottom–top Yukawa unification condition. We exemplify this by taking two solutions with non-unified bottom and top Yukawa couplings and large values of $\tan\beta$, and studying the numerical solutions. The resulting correlations are depicted in Figs. 4.c and 4.d.

When the condition of unification of bottom and top Yukawa couplings is relaxed, however, large values of $M_{1/2}$ are no longer needed to get the necessary hierarchy between $m_1^2$ and $m_2^2$. As the bottom and tau Yukawa couplings decrease compared with the top one, the coefficients $\alpha$ and $\beta$, Eq.(29), increase, becoming positive for sufficiently small values of $h_b/h_t$. Hence, in this case, acceptable radiative breaking solutions may also be obtained by taking large values of $m_0$. For these solutions the strong correlation between $\mu$, $\delta$ and $M_{1/2}$ is lost, together with the hierarchical relation between $M_{1/2}$ and $m_0$. These results are depicted in Figs. 2.c and 2.d, 3.c and 3.d, and 5.c and 5.d, where the Yukawa unification condition has been relaxed.

In summary, in general the condition of radiative breaking of the electroweak symmetry with large values of $\tan\beta$ implies a strong correlation between the parameters $\delta$ and $M_{1/2}$. This correlation is a reflection of, in principle, a strong degree of fine tuning, implied by the condition $B \simeq 0$. However, it is tempting to speculate that this correlation has some fundamental origin, which would imply the necessity of redefining the naive fine tuning criteria.

If the top quark–bottom quark Yukawa coupling unification is required, the parameters $\delta$ and $B_0$ are also strongly correlated with the supersymmetric mass parameter $\mu$. These properties do not strongly depend on the proximity to the infrared fixed point solution, although the exact value of the coefficient relating the different parameters and the strength of the correlation do depend on the top quark Yukawa coupling value. Radiative breaking of the electroweak symmetry is driven by the gaugino mass and the large Yukawa couplings. For a large enough departure from the exact top quark–bottom quark Yukawa unification, solutions with radiative breaking driven by $m_0$ are also possible, for which both the correlation between $\mu$ and $M_{1/2}$ and the hierarchical relation between $M_{1/2}$ and $m_0$ are destroyed.

## 5 Radiative Corrections to $M_b$ and $M_\tau$ and the Predictions for the Top Quark Mass

Fig. 1 summarizes the predictions for the top quark mass as a function of $\alpha_3(M_Z)$ for given values of $M_b^0$, which follow from unification of the three Yukawa couplings. As explained in Section 2, the pole mass $M_b^0$ is obtained from the unification condition in the case in which the supersymmetric one-loop corrections to the bottom mass are ignored (i.e. it includes only QCD corrections). In this section we calculate the supersymmetric one-loop corrections to the bottom mass in the model with radiative breaking. For large values of $\tan\beta$ they are not only large but, due to the strong correlations between the soft supersymmetry breaking parameters present in the large $\tan\beta$ solutions with radiative breaking, for fixed $\tan\beta$ they are almost constant in the whole parameter space allowed by radiative breaking. Thus, in the first approximation, for fixed $M_t$ and $\tan\beta$, $M_b \simeq M_b^0\,(1+\Delta(m_b))$.
If this value of $M_b$ is in the range of the experimentally acceptable values for the physical bottom mass, then the corresponding values of $M_t$ and $\tan\beta$ are the predictions for the top quark mass and $\tan\beta$ consistent with radiative breaking. Of course, all uncertainties taken into account, the actual prediction is a band of values for $M_t$ and $\tan\beta$.

There is a higher order ambiguity due to the choice of the scale at which the supersymmetric one-loop corrections are calculated. A natural choice is between the electroweak ($M_Z$) and supersymmetric ($M_{SUSY}$) scales, and we choose to work with $M_Z$. The corrected running bottom quark mass reads

$$m_b = h_b v_1\,\big(1+\Delta(m_b)\big). \qquad (39)$$

$\Delta(m_b)$ receives contributions coming from bottom squark–gluino loops and top squark–chargino loops, and is given by

$$\Delta(m_b) = \frac{2\alpha_3}{3\pi}\, M_{\tilde g}\,\mu\tan\beta\; I\big(m_{\tilde b_1}^2, m_{\tilde b_2}^2, M_{\tilde g}^2\big) + \frac{Y_t}{4\pi}\, A_t\,\mu\tan\beta\; I\big(m_{\tilde t_1}^2, m_{\tilde t_2}^2, \mu^2\big), \qquad (40)$$

where the integral function $I(a,b,c)$ is given by

$$I(a,b,c) = \frac{ab\,\ln(a/b) + bc\,\ln(b/c) + ac\,\ln(c/a)}{(a-b)(b-c)(a-c)}, \qquad (41)$$

with $M_{\tilde g}$ the gluino mass and $m_{\tilde b_i}$ ($m_{\tilde t_i}$) the sbottom (stop) eigenstate masses. The integral function may be parametrized as $I(a,b,c) = k/\max(a,b,c)$, where $\max(a,b,c)$ is the maximum of the three squared masses appearing in the functional integral and the coefficient $k$ is of order one if there is no large hierarchy between the three different masses. Observe that the minimum value, $k = 1/2$, is only obtained when the three masses are equal. As we will discuss below, for the typical values of the mass parameters appearing in the radiative electroweak symmetry breaking solutions, $k \simeq 0.6$ gives a good approximation to the integral.

The tau mass corrections are, instead, dominated by the bino exchange contribution, which is negligible in the bottom quark case. Indeed,

$$m_\tau = h_\tau v_1\,\big(1+\Delta(m_\tau)\big), \qquad (42)$$

with

$$\Delta(m_\tau) = \frac{\alpha_1}{4\pi}\, M_{\tilde B}\,\mu\tan\beta\; I\big(m_{\tilde\tau_1}^2, m_{\tilde\tau_2}^2, M_{\tilde B}^2\big). \qquad (43)$$

Observe that although the effect is expected to be small due to the presence of the weak gauge coupling, it is partially enhanced by the fact that the particles appearing in the loop are lighter than in the bottom case. We will discuss it in more detail below.

Due to the approximate dependence of $A_t$ on $A_0$ and $M_{1/2}$, Eq.(25), close to the fixed point there is a strong correlation between $A_t$ and the gluino mass. Indeed, for $Y \simeq Y_f$,

$$A_t \simeq -\frac{2 M_{\tilde g}}{3}. \qquad (44)$$

For values of $Y/Y_f \simeq 0.6$, $|A_t|$ is shifted towards larger values in most of the parameter space,

$$A_t \simeq -M_{\tilde g}. \qquad (45)$$

These correlations are observed in the numerical analysis. The relations above, Eqs.(44) and (45), are only violated for large values of $A_0$, close to the upper bound on this quantity (for the numerical bounds see Fig. 4). Due to the minus signs in Eqs.(44) and (45), there is an effective cancellation between both bottom mass correction contributions. Interestingly enough, due to the fact that $|A_t|$ is larger when the Yukawa coupling is smaller, the cancellation between both bottom mass correction contributions for $Y/Y_f \simeq 0.6$ is similar to the one appearing for $Y \simeq Y_f$.

The bottom mass corrections become very relevant for large values of $\tan\beta$. In order to reduce the bottom mass corrections, while fulfilling the requirement $\tan\beta \simeq m_t/m_b$, an explicitly broken Peccei–Quinn symmetry has been imposed in the literature, its breakdown being characterized by the (assumed) small parameter $\mu/m_{\tilde q}$.
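For concreteness, a direct implementation of Eqs. (40) and (41) is given below (our sketch; the spectrum entries are illustrative and are built from the correlations discussed in the text, $\mu \simeq \sqrt{3}\, M_{1/2}$, $M_{\tilde g} \simeq 2.9\, M_{1/2}$ and $A_t \simeq -2 M_{\tilde g}/3$).

```python
import math

def I(a, b, c):
    """Loop integral of Eq. (41); arguments are squared masses
    and must be pairwise distinct."""
    return ((a * b * math.log(a / b) + b * c * math.log(b / c)
             + a * c * math.log(c / a))
            / ((a - b) * (b - c) * (a - c)))

def delta_mb(alpha3, Yt, tanb, mu, Mg, At, msb, mst):
    """Relative bottom mass correction, Eq. (40).
    msb, mst: (m1, m2) sbottom and stop mass eigenvalues in GeV."""
    gluino = (2.0 * alpha3 / (3.0 * math.pi)) * Mg * mu * tanb \
             * I(msb[0]**2, msb[1]**2, Mg**2)
    chargino = (Yt / (4.0 * math.pi)) * At * mu * tanb \
               * I(mst[0]**2, mst[1]**2, mu**2)
    return gluino + chargino

# Illustrative spectrum built from the correlations of the text.
M12 = 300.0
mu, Mg = math.sqrt(3.0) * M12, 2.9 * M12
At = -2.0 * Mg / 3.0
val = delta_mb(alpha3=0.115, Yt=0.09, tanb=50.0, mu=mu, Mg=Mg, At=At,
               msb=(600.0, 650.0), mst=(450.0, 700.0))
print(f"Delta(m_b) = {val:.2f}")
```

Note the partial cancellation between the positive gluino term and the negative chargino term, as discussed above.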
They also required the presence of an approximate continuous $R$ symmetry, present in the limit of vanishing $B$, $A$ and $M_{\tilde g}$, whose breaking is characterized by the (assumed) small parameter

$$\epsilon_R = \frac{B}{m_{\tilde q}} \simeq \frac{A}{m_{\tilde q}} \simeq \frac{M_{\tilde g}}{m_{\tilde q}}, \qquad (46)$$

which would protect both the large $\tan\beta$ hierarchy and the bottom mass corrections.

However, the radiative electroweak symmetry breaking solution with universal soft supersymmetry breaking parameters at the grand unification scale and exact top quark–bottom quark Yukawa coupling unification is inconsistent with the approximate preservation of these symmetries. Indeed, as we have discussed in the last section, the only solutions satisfying these requirements are obtained for $M_{1/2}$ of the order of, or larger than, $m_0$. Under these conditions, the squark mass is of the order of the gluino mass and not much larger than it, as would be required by $\epsilon_R \ll 1$. In addition, as explained above, the mass parameter $\mu$ is of the order of the gluino mass and, hence, the bottom mass corrections are not suppressed in the minimal supergravity model. Indeed, due to the strong correlation between $\mu$ and $M_{1/2}$, the parameter $\mu$ is strongly correlated with the gluino mass, and an approximate expression for the integrals may be obtained. Using these correlations, we obtain that the integrals are well approximated by setting $k \simeq 0.6$,

$$\Delta(m_b) \simeq 1.2\tan\beta\,\frac{\mu}{M_{\tilde g}}\left(\frac{\alpha_3}{3\pi} + \frac{3}{2}\,\frac{Y_t A_t}{8\pi M_{\tilde g}}\right) \qquad (47)$$

where the factor $3/2$ accounts for the fact of having written a factor $M_{\tilde g}$ in the denominator of the second term, instead of the appropriate stop mass factor (the correlation between the gluino and squark masses will be discussed in the next section). The above expression gives a good approximation to the bottom mass corrections in most of the parameter space consistent with radiative breaking of the electroweak symmetry and bottom–top Yukawa coupling unification. Observe that, due to the strong correlations between $A_t$ and $M_{\tilde g}$, and using the fact that the fixed point value of the top and bottom quark Yukawa couplings is approximately given by Eq.(19), there is an effective cancellation between the two contributions, which reduces the gluino contribution by about 30%. In the more precise numerical result this cancellation is of similar size. Taking this into account, and the fact that $M_{\tilde g} \simeq (\alpha_3/\alpha_G)\, M_{1/2}$, with $\alpha_G$ the unifying value of the gauge couplings, the relative bottom mass corrections are given by

$$\Delta(m_b) \simeq 0.0045\tan\beta\,\frac{\mu}{M_{1/2}}. \qquad (48)$$

From Fig. 1 we observe that, in general, independently of the bottom mass value, the condition of unification of the three Yukawa couplings is such that the larger is $\tan\beta$, the closer is the top quark Yukawa coupling to its fixed point value. Values of the top quark Yukawa coupling close to its fixed point are obtained for the largest values of $\tan\beta$, while $Y/Y_f \simeq 0.6$ is obtained for lower values. Once Eq.(48) is combined with the numerical results shown in Fig. 3, we obtain a relative bottom mass correction of the order of 45% for values of $\tan\beta$ and the top quark mass consistent with the fixed point, while for $Y/Y_f \simeq 0.6$ the relative bottom mass correction is of the order of 30%.

An analogous procedure can be applied for the estimation of the relative tau mass corrections. In this case, the heaviest stau mass is of the order of 2.5 times the bino mass, while the lightest stau mass is of the order of, or somewhat larger than, the bino mass, depending on the relative value of $m_0$ with respect to $M_{1/2}$. Under these conditions, over most of the allowed parameter space the loop integral may be approximated by a factor $k$ of the order of 0.85. One can check that, under these conditions, the tau mass corrections are not larger than a few per cent of the tau mass.
Moreover, a relatively large tau mass correction is always associated with a large left–right stau mass mixing, for which the lightest stau becomes the lightest supersymmetric particle. Once the condition of a neutralino being the lightest supersymmetric particle is imposed, the relative tau mass corrections are bounded to be lower than 4% over most of the allowed parameter space. Since, in addition, a relative tau mass variation affects the unification condition less than a relative bottom mass correction of the same size, the tau mass correction effects on the top quark mass predictions are small.\n\nFor chosen values of M_t and tanβ we are now able to calculate the physical bottom mass by running the corrected mass down with the Standard Model RGE and applying the appropriate QCD corrections, Eq. (5). The results are shown in Fig. 6, for the same representative values of the top quark mass and tanβ chosen in the previous figures (which, as we discussed in section 4, for the cases a) and b) correspond to GeV and GeV, respectively). The two branches of m_b correspond to the two signs of μ, the lowest values of m_b corresponding to the sign for which the correction Δ(m_b) is negative. From Fig. 6, and recalling the results presented in Fig. 1, we see that large corrections to the bottom mass, of the order of 30%, may be used to reconcile the predictions with an experimentally acceptable bottom mass. Moreover, it is easy to see that, due to the size of the characteristic corrections, for the sign of μ yielding positive corrections there are no solutions with an acceptable bottom mass and a top quark Yukawa coupling within the range of validity of perturbation theory.\n\nThe above results may be used to set an upper bound on the top quark mass as a function of the strong gauge coupling value. This upper bound will correspond to the maximum values of M_t and tanβ consistent with a physical bottom mass above its experimental lower bound, GeV. Larger top quark mass values will correspond to lower values of the running bottom mass and larger values of the corrections, implying a physical bottom quark mass outside the present experimental bounds on this quantity, GeV. The upper bound may hence be estimated as follows: for given values of α_3, M_t and tanβ, the relative bottom mass corrections may be computed by using the supersymmetric mass parameters allowed by the condition of radiative breaking of the electroweak symmetry. In addition, the corresponding running bottom mass may be obtained from the correlations depicted in Fig. 1. We perform a scan over the values of M_t and tanβ, looking for the maximum value consistent with a physical bottom mass above its experimental lower bound, GeV. This value of M_t gives an estimate of the upper bound on this quantity for the given value of α_3. The uncertainties associated with this procedure are discussed below.\n\nFor example, for one representative value of α_3, the upper bound on the top quark mass is approximately given by GeV, while the upper bound on the ratio of vacuum expectation values is given by . These bounds are associated with a bottom mass GeV. From Eqs. (40), (48) it follows that the approximate bottom mass corrections under these conditions (corresponding to the lowest value of the corrections) are of the order of 18%. Hence, as required for the solution associated with the upper bound on the top quark mass, the physical bottom mass will be approximately equal to the lower experimental bound on this quantity, GeV. Analogously, for a larger value of α_3 the upper bounds read GeV and , for which the bottom mass is (5.3) GeV. The lowest bottom mass corrections are of the order of 23% (13%) (corresponding to a ratio ) and hence the physical bottom mass is approximately equal to 4.6 GeV.\n\nIt is important to discuss the uncertainties on the estimate of the top quark mass upper bounds presented above. In obtaining Fig.
1, the supersymmetric spectrum has been taken to be degenerate at a common mass scale. However, the squark and gluino spectrum arising from the bottom–top Yukawa unification condition is heavy and, hence, the top quark mass could be modified by supersymmetric particle threshold effects. These effects have been estimated in Ref. . For the characteristic spectrum obtained in these solutions, the resulting top quark mass uncertainties are of the order of 5-10 GeV. In addition, in the above we have ignored the possible effects of tau mass corrections. The tau mass corrections are correlated in sign with the bottom mass corrections, and their effects on the top quark mass predictions may hence be estimated by lowering the relative bottom mass corrections by an amount of the order of the relative tau mass corrections. A modification of the order of 3% of the relative bottom mass corrections gives variations of the top quark mass prediction of the order of 5-10 GeV, too. Finally, there is the already discussed scale ambiguity in the evaluation of the bottom mass corrections, which can also modify the top quark mass predictions by a few percent.\n\nFrom the above discussion, it follows that the estimate for the upper bound on the top quark mass quoted above may be away from the real bound by 10-20 GeV. However, it is important to remark that even after the inclusion of these uncertainties the allowed top quark mass values remain much lower than the values obtained for the case in which a negligible bottom mass correction is assumed. Indeed, even after the uncertainties are included, the upper bounds on the physical top quark mass obtained above are of the order of the lower bounds for the same quantity in the case in which the bottom mass corrections are negligible.\n\nA lower bound on the top quark mass is also obtained. However, the lower bound on the top quark mass is given by GeV for the largest values of α_3, while for lower values of α_3 the lower bound is below the present experimental limit on the top quark mass. In general, large values of the top quark mass, GeV, will only be possible if we relax the condition of unification of the three Yukawa couplings. For instance, a large top quark mass may be achieved for suitable values of tanβ and of the supersymmetric mass parameters. As we explained above, since under these conditions the squark masses may be much larger than the gluino mass, the approximate symmetries required in Ref. , Eq. (46), now become possible, and hence the bottom mass corrections can be small.\n\n## 6 Supersymmetric Particle Spectrum\n\nThe properties of the sparticle spectrum are to a large extent determined by the correlation of the mass parameter μ with the soft supersymmetry breaking parameters, and by the large values of these parameters necessary to fulfill the condition of radiative breaking of the electroweak symmetry (see section 4). For instance, since large values of the parameters μ and M_{1/2} are required, there will be little mixing in the chargino and neutralino sectors. The lightest (heaviest) chargino is given by a wino (charged Higgsino) with mass equal to M_2 (|μ|); a short numerical illustration of this statement is given below. The lightest neutralino will be given by a bino of a high degree of purity, with mass M_1. These issues have already been discussed in Refs. - , and survive in our more precise numerical analysis. We will hence concentrate on the predictions which depend more strongly on the precise values of the top quark mass, and hence are more sensitive to the changes in the top quark mass predictions induced by the bottom mass corrections studied in the previous section.
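The statement above about the chargino sector can be checked with a quick tree-level diagonalization. This is a minimal sketch rather than a computation from the paper, and the parameter values are assumptions chosen to represent the large-μ regime:\n\n``````import numpy as np\n\n# Assumed, illustrative parameters (GeV); tan(beta) is large, as in this scenario.\nM2, mu, mW, tan_beta = 250.0, 600.0, 80.4, 50.0\nbeta = np.arctan(tan_beta)\n\n# Tree-level chargino mass matrix in the (wino, Higgsino) basis.\nX = np.array([[M2, np.sqrt(2.0) * mW * np.sin(beta)],\n              [np.sqrt(2.0) * mW * np.cos(beta), mu]])\n\n# The physical chargino masses are the singular values of X.\nmasses = np.linalg.svd(X, compute_uv=False)\nprint(masses)  # ~ [613, 244]: heaviest ~ |mu| (Higgsino-like), lightest ~ M2 (wino-like)``````\n\nFor μ and M_2 well above the W mass, the off-diagonal entries barely perturb the eigenvalues, which is the little-mixing regime described above.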
In addition, we will present an analysis of the constraints on the soft supersymmetry breaking parameters coming from the present experimental bounds on the b → sγ decay rate.\n\nIn Fig. 7 we give the behaviour of the CP-odd Higgs mass as a function of the lightest chargino mass. As we remarked above, due to the large values of μ and M_{1/2} appearing in this scheme, the lightest chargino is almost a pure wino, with mass M_2, while the heaviest chargino is a Higgsino with mass equal to |μ|. The CP-odd Higgs mass squared may be written as a term proportional to M²_{1/2} plus a constant term, where the constant term is negative. Since the coefficient α is positive and the constant term is negative, we get an upper bound on m_A,\n\nm²_A < α M²_{1/2}, (49)\n\nwhich is visible in the figures. Observe that the sensitivity of this bound under top quark mass variations comes through the dependence of the coefficient α on the top quark mass. The largest values of m_A are obtained for low values of m_0, which could lead to a stau being the lightest supersymmetric particle (see section 4 and Ref. ). From Figs. 7.a and 7.b, we see that for the allowed parameter space consistent with bottom–top Yukawa unification the CP-odd Higgs becomes light. Very low values of m_A are, however, excluded by experimental limits. Moreover, the CP-odd Higgs mass obeys\n\nm_A ≤ m_Q √(α/5), (50)\n\nwhere the factor 5 comes from the strong correlation between m_Q and M_{1/2}." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89590317,"math_prob":0.9783566,"size":50003,"snap":"2020-45-2020-50","text_gpt3_token_len":10419,"char_repetition_ratio":0.21668433,"word_repetition_ratio":0.08742238,"special_character_ratio":0.20226787,"punctuation_ratio":0.102277756,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9854163,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T02:20:51Z\",\"WARC-Record-ID\":\"<urn:uuid:7ab28b0d-b12e-4470-91a4-ff4cb11776ac>\",\"Content-Length\":\"1049559\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3f5d96da-6bd1-4086-90a6-bb9fafe2ede8>\",\"WARC-Concurrent-To\":\"<urn:uuid:ef007275-bbc7-449d-8952-e2a876a48051>\",\"WARC-IP-Address\":\"104.28.21.249\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/hep-ph/9402253/\",\"WARC-Payload-Digest\":\"sha1:BYCVCQ5JWBG7S2YDTFD37QADS4ANC42K\",\"WARC-Block-Digest\":\"sha1:F4SGFYDWJ6QYNNTFV732RMYQBLZRYTBX\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141542358.71_warc_CC-MAIN-20201201013119-20201201043119-00322.warc.gz\"}"}
https://docs.cloudera.com/cdw-runtime/1.4.0/impala-sql-reference/topics/impala-math-functions.html
[ "# Impala mathematical functions\n\nMathematical functions, or arithmetic functions, perform numeric calculations that are typically more complex than basic addition, subtraction, multiplication, and division. For example, these functions include trigonometric, logarithmic, and base conversion operations.\n\nRelated information:\n\nThe mathematical functions operate mainly on these data types:\n• INT\n• BIGINT\n• SMALLINT\n• TINYINT\n• DOUBLE\n• FLOAT\n• DECIMAL\nFor the operators that perform the standard operations such as addition, subtraction, multiplication, and division, see Arithmetic operators.\n\nFunctions that perform bitwise operations are explained in Impala bit functions.\n\nFunction reference:\n\nImpala supports the following mathematical functions:\n\nABS(numeric_type a)\nPurpose: Returns the absolute value of the argument.\n\nReturn type: Same as the input value\n\nUsage notes: Use this function to ensure all return values are positive. This is different than the `positive()` function, which returns its argument unchanged (even if the argument was negative).\n\nACOS(DOUBLE a)\nPurpose: Returns the arccosine of the argument.\n\nReturn type: `DOUBLE`\n\nASIN(DOUBLE a)\nPurpose: Returns the arcsine of the argument.\n\nReturn type: `DOUBLE`\n\nATAN(DOUBLE a)\nPurpose: Returns the arctangent of the argument.\n\nReturn type: `DOUBLE`\n\nATAN(DOUBLE a, DOUBLE b)\nPurpose: Returns the arctangent of the two arguments, with the signs of the arguments used to determine the quadrant of the result.\n\nReturn type: `DOUBLE`\n\nBIN(BIGINT a)\nPurpose: Returns the binary representation of an integer value, that is, a string of 0 and 1 digits.\n\nReturn type: `STRING`\n\nCEIL(DOUBLE a), CEIL(DECIMAL(p,s) a), CEILING(DOUBLE a), CEILING(DECIMAL(p,s) a), DCEIL(DOUBLE a), DCEIL(DECIMAL(p,s) a)\nPurpose: Returns the smallest integer that is greater than or equal to the argument.\n\nReturn type: Same as the input type\n\nCONV(BIGINT n, INT from_base, INT to_base), CONV(STRING s, INT from_base, INT to_base)\nPurpose: Returns a string representation of the first argument converted from `from_base` to `to_base`. The first argument can be specified as a number or a string. For example, `conv(100, 2, 10)` and `conv('100', 2, 10)` both return `'4'`.\n\nReturn type: `STRING`\n\nUsage notes:\n\nIf `to_base` is negative, the first argument is treated as signed, and otherwise, it is treated as unsigned. For example:\n\n• `conv(-17, 10, -2) `returns `'-10001'`,``` -17``` in base 2.\n• `conv(-17, 10, 10)` returns `'18446744073709551599'`. `-17` is interpreted as an unsigned, 2^64-17, and then the value is returned in base 10.\n\nThe function returns `NULL` when the following invalid arguments are specified:\n\n• Any argument is `NULL`.\n• `from_base` or `to_base` is below `-36` or above `36`.\n• `from_base` or `to_base` is `-1`, `0`, or `1`.\n• The first argument represents a positive number and `from_base` is a negative number.\n\nIf the first argument represents a negative number and `from_base` is a negative number, the function returns `0`.\n\nIf the first argument represents a number larger than the maximum `bigint`, the function returns:\n\n• The string representation of -1 in `to_base` if `to_base` is negative.\n• The string representation of 18446744073709551615' (2^64 - 1) in `to_base` if `to_base` is positive.\n\nIf the first argument does not represent a valid number in `from_base`, e.g. 
3 in base 2 or '1a23' in base 10, the digits in the first argument are evaluated from left-to-right and used if a valid digit in `from_base`. The invalid digit and the digits to the right are ignored.\n\nFor example:\n• ` conv(445, 5, 10)` is converted to ```conv(44, 5, 10)``` and returns `'24'`.\n• ` conv('1a23', 10, 16)` is converted to ```conv('1', 10 , 16)``` and returns `'1'`.\nCOS(DOUBLE a)\nPurpose: Returns the cosine of the argument.\n\nReturn type: `DOUBLE`\n\nCOSH(DOUBLE a)\nPurpose: Returns the hyperbolic cosine of the argument.\n\nReturn type: `DOUBLE`\n\nCOT(DOUBLE a)\nPurpose: Returns the cotangent of the argument.\n\nReturn type: `DOUBLE`\n\nDEGREES(DOUBLE a)\nPurpose: Converts argument value from radians to degrees.\n\nReturn type: `DOUBLE`\n\nE()\nPurpose: Returns the mathematical constant e.\n\nReturn type: `DOUBLE`\n\nEXP(DOUBLE a), DEXP(DOUBLE a)\nPurpose: Returns the mathematical constant e raised to the power of the argument.\n\nReturn type: `DOUBLE`\n\nFACTORIAL(integer_type a)\nPurpose: Computes the factorial of an integer value. It works with any integer type.\n\nUsage notes: You can use either the `factorial()` function or the `!` operator. The factorial of 0 is 1. Likewise, the `factorial()` function returns 1 for any negative value. The maximum positive value for the input argument is 20; a value of 21 or greater overflows the range for a `BIGINT` and causes an error.\n\nReturn type: `BIGINT`\n\n``````select factorial(5);\n+--------------+\n| factorial(5) |\n+--------------+\n| 120 |\n+--------------+\n\nselect 5!;\n+-----+\n| 5! |\n+-----+\n| 120 |\n+-----+\n``````\nFLOOR(DOUBLE a), FLOOR(DECIMAL(p,s) a), DFLOOR(DOUBLE a), DFLOOR(DECIMAL(p,s) a)\nPurpose: Returns the largest integer that is less than or equal to the argument.\n\nReturn type: Same as the input type\n\nFMOD(DOUBLE a, DOUBLE b), FMOD(FLOAT a, FLOAT b)\nPurpose: Returns the modulus of a floating-point number.\n\nReturn type: `FLOAT` or `DOUBLE`, depending on type of arguments\n\nUsage notes:\n\nBecause this function operates on `DOUBLE` or `FLOAT` values, it is subject to potential rounding errors for values that cannot be represented precisely. Prefer to use whole numbers, or values that you know can be represented precisely by the `DOUBLE` or `FLOAT` types.\n\nExamples:\n\nThe following examples show equivalent operations with the `fmod()` function and the `%` arithmetic operator, for values not subject to any rounding error.\n\n``````select fmod(10,3);\n+-------------+\n| fmod(10, 3) |\n+-------------+\n| 1 |\n+-------------+\n\nselect fmod(5.5,2);\n+--------------+\n| fmod(5.5, 2) |\n+--------------+\n| 1.5 |\n+--------------+\n\nselect 10 % 3;\n+--------+\n| 10 % 3 |\n+--------+\n| 1 |\n+--------+\n\nselect 5.5 % 2;\n+---------+\n| 5.5 % 2 |\n+---------+\n| 1.5 |\n+---------+\n``````\n\nThe following examples show operations with the `fmod()` function for values that cannot be represented precisely by the `DOUBLE` or `FLOAT` types, and thus are subject to rounding error. `fmod(9.9,3.0)` returns a value slightly different than the expected 0.9 because of rounding. 
`fmod(9.9,3.3)` returns a value quite different from the expected value of 0 because of rounding error during intermediate calculations.\n\n``````select fmod(9.9,3.0);\n+--------------------+\n| fmod(9.9, 3.0) |\n+--------------------+\n| 0.8999996185302734 |\n+--------------------+\n\nselect fmod(9.9,3.3);\n+-------------------+\n| fmod(9.9, 3.3) |\n+-------------------+\n| 3.299999713897705 |\n+-------------------+\n``````\nFNV_HASH(type v)\nPurpose: Returns a consistent 64-bit value derived from the input argument, for convenience of implementing hashing logic in an application.\n\nReturn type: `BIGINT`\n\nUsage notes:\n\nYou might use the return value in an application where you perform load balancing, bucketing, or some other technique to divide processing or storage.\n\nBecause the result can be any 64-bit value, to restrict the value to a particular range, you can use an expression that includes the `ABS()` function and the `%` (modulo) operator. For example, to produce a hash value in the range 0-9, you could use the expression `ABS(FNV_HASH(x)) % 10`.\n\nThis function implements the same algorithm that Impala uses internally for hashing, on systems where the CRC32 instructions are not available.\n\nThis function implements the Fowler–Noll–Vo hash function, in particular the FNV-1a variation. This is not a perfect hash function: some combinations of values could produce the same result value. It is not suitable for cryptographic use.\n\nSimilar input values of different types could produce different hash values, for example the same numeric value represented as `SMALLINT` or `BIGINT`, `FLOAT` or `DOUBLE`, or `DECIMAL(5,2)` or `DECIMAL(20,5)`.\n\nExamples:\n\n``````[localhost:21000] > create table h (x int, s string);\n[localhost:21000] > insert into h values (0, 'hello'), (1,'world'), (1234567890,'antidisestablishmentarianism');\n[localhost:21000] > select x, fnv_hash(x) from h;\n+------------+----------------------+\n| x | fnv_hash(x) |\n+------------+----------------------+\n| 0 | -2611523532599129963 |\n| 1 | 4307505193096137732 |\n| 1234567890 | 3614724209955230832 |\n+------------+----------------------+\n[localhost:21000] > select s, fnv_hash(s) from h;\n+------------------------------+---------------------+\n| s | fnv_hash(s) |\n+------------------------------+---------------------+\n| hello | 6414202926103426347 |\n| world | 6535280128821139475 |\n| antidisestablishmentarianism | -209330013948433970 |\n+------------------------------+---------------------+\n[localhost:21000] > select s, abs(fnv_hash(s)) % 10 from h;\n+------------------------------+-------------------------+\n| s | abs(fnv_hash(s)) % 10.0 |\n+------------------------------+-------------------------+\n| hello | 8 |\n| world | 6 |\n| antidisestablishmentarianism | 4 |\n+------------------------------+-------------------------+``````\n\nFor short argument values, the high-order bits of the result have relatively low entropy:\n\n``````[localhost:21000] > create table b (x boolean);\n[localhost:21000] > insert into b values (true), (true), (false), (false);\n[localhost:21000] > select x, fnv_hash(x) from b;\n+-------+---------------------+\n| x | fnv_hash(x) |\n+-------+---------------------+\n| true | 2062020650953872396 |\n| true | 2062020650953872396 |\n| false | 2062021750465500607 |\n| false | 2062021750465500607 |\n+-------+---------------------+``````\n\nGREATEST(BIGINT a[, BIGINT b ...]), GREATEST(DOUBLE a[, DOUBLE b ...]), GREATEST(DECIMAL(p,s) a[, DECIMAL(p,s) b ...]), GREATEST(STRING a[, STRING b 
...]), GREATEST(TIMESTAMP a[, TIMESTAMP b ...])\nPurpose: Returns the largest value from a list of expressions.\n\nReturn type: same as the initial argument value, except that integer values are promoted to `BIGINT` and floating-point values are promoted to `DOUBLE`; use `CAST()` when inserting into a smaller numeric column\n\nHEX(BIGINT a), HEX(STRING a)\nPurpose: Returns the hexadecimal representation of an integer value, or of the characters in a string.\n\nReturn type: `STRING`\n\nIS_INF(DOUBLE a)\nPurpose: Tests whether a value is equal to the special value inf, signifying infinity.\n\nReturn type: `BOOLEAN`\n\nUsage notes:\n\nInfinity and NaN can be specified in text data files as `inf` and `nan` respectively, and Impala interprets them as these special values. They can also be produced by certain arithmetic expressions; for example, `1/0` returns `Infinity` and `pow(-1, 0.5)` returns `NaN`. Or you can cast the literal values, such as ```CAST('nan' AS DOUBLE)``` or `CAST('inf' AS DOUBLE)`.\n\nIS_NAN(DOUBLE a)\nPurpose: Tests whether a value is equal to the special value NaN, signifying not a number.\n\nReturn type: `BOOLEAN`\n\nUsage notes:\n\nInfinity and NaN can be specified in text data files as `inf` and `nan` respectively, and Impala interprets them as these special values. They can also be produced by certain arithmetic expressions; for example, `1/0` returns `Infinity` and `pow(-1, 0.5)` returns `NaN`. Or you can cast the literal values, such as ```CAST('nan' AS DOUBLE)``` or `CAST('inf' AS DOUBLE)`.\n\nLEAST(BIGINT a[, BIGINT b ...]), LEAST(DOUBLE a[, DOUBLE b ...]), LEAST(DECIMAL(p,s) a[, DECIMAL(p,s) b ...]), LEAST(STRING a[, STRING b ...]), LEAST(TIMESTAMP a[, TIMESTAMP b ...])\nPurpose: Returns the smallest value from a list of expressions.\n\nReturn type: same as the initial argument value, except that integer values are promoted to `BIGINT` and floating-point values are promoted to `DOUBLE`; use `CAST()` when inserting into a smaller numeric column\n\nLN(DOUBLE a), DLOG1(DOUBLE a)\nPurpose: Returns the natural logarithm of the argument.\n\nReturn type: `DOUBLE`\n\nLOG(DOUBLE base, DOUBLE a)\nPurpose: Returns the logarithm of the second argument to the specified base.\n\nReturn type: `DOUBLE`\n\nLOG10(DOUBLE a), DLOG10(DOUBLE a)\nPurpose: Returns the logarithm of the argument to the base 10.\n\nReturn type: `DOUBLE`\n\nLOG2(DOUBLE a)\nPurpose: Returns the logarithm of the argument to the base 2.\n\nReturn type: `DOUBLE`\n\nMAX_INT(), MAX_TINYINT(), MAX_SMALLINT(), MAX_BIGINT()\nPurpose: Returns the largest value of the associated integral type.\n\nReturn type: The same as the integral type being checked.\n\nUsage notes: Use the corresponding `min_` and `max_` functions to check if all values in a column are within the allowed range, before copying data or altering column definitions. If not, switch to the next higher integral type or to a `DECIMAL` with sufficient precision.\n\nMIN_INT(), MIN_TINYINT(), MIN_SMALLINT(), MIN_BIGINT()\nPurpose: Returns the smallest value of the associated integral type (a negative number).\n\nReturn type: The same as the integral type being checked.\n\nUsage notes: Use the corresponding `min_` and `max_` functions to check if all values in a column are within the allowed range, before copying data or altering column definitions. If not, switch to the next higher integral type or to a `DECIMAL` with sufficient precision.\n\nMOD(numeric_type a, same_type b)\nPurpose: Returns the modulus of a number. Equivalent to the `%` arithmetic operator. 
Works with any size integer type, any size floating-point type, and `DECIMAL` with any precision and scale.\n\nReturn type: Same as the input value\n\nUsage notes:\n\nBecause this function works with `DECIMAL` values, prefer it over `fmod()` when working with fractional values. It is not subject to the rounding errors that make `fmod()` problematic with floating-point numbers.\n\nQuery plans show the `MOD()` function as the `%` operator.\n\nExamples:\n\nThe following examples show how the `mod()` function works for whole numbers and fractional values, and how the `%` operator works the same way. In the case of `mod(9.9,3)`, the type conversion for the second argument results in the first argument being interpreted as `DOUBLE`, so to produce an accurate `DECIMAL` result requires casting the second argument or writing it as a `DECIMAL` literal, 3.0.\n\n``````select mod(10,3);\n+-------------+\n| mod(10, 3) |\n+-------------+\n| 1 |\n+-------------+\n\nselect mod(5.5,2);\n+--------------+\n| mod(5.5, 2) |\n+--------------+\n| 1.5 |\n+--------------+\n\nselect 10 % 3;\n+--------+\n| 10 % 3 |\n+--------+\n| 1 |\n+--------+\n\nselect 5.5 % 2;\n+---------+\n| 5.5 % 2 |\n+---------+\n| 1.5 |\n+---------+\n\nselect mod(9.9,3.3);\n+---------------+\n| mod(9.9, 3.3) |\n+---------------+\n| 0.0 |\n+---------------+\n\nselect mod(9.9,3);\n+--------------------+\n| mod(9.9, 3) |\n+--------------------+\n| 0.8999996185302734 |\n+--------------------+\n\nselect mod(9.9, cast(3 as decimal(2,1)));\n+-----------------------------------+\n| mod(9.9, cast(3 as decimal(2,1))) |\n+-----------------------------------+\n| 0.9 |\n+-----------------------------------+\n\nselect mod(9.9,3.0);\n+---------------+\n| mod(9.9, 3.0) |\n+---------------+\n| 0.9 |\n+---------------+``````\nMURMUR_HASH(type v)\nPurpose: Returns a consistent 64-bit value derived from the input argument, for convenience of implementing the MurmurHash2 non-cryptographic hash function.\n\nReturn type: `BIGINT`\n\nUsage notes:\n\nYou might use the return value in an application where you perform load balancing, bucketing, or some other technique to divide processing or storage. This function provides good performance for all kinds of keys, such as numbers, ASCII strings, and UTF-8 strings. It can be recommended as a general-purpose hash function.\n\nComparing murmur_hash with fnv_hash: murmur_hash is based on the MurmurHash2 algorithm, while fnv_hash is based on the FNV-1a algorithm.
Both MurmurHash2 and FNV-1a show very good randomness and performance compared with other well-known hash algorithms, but MurmurHash2 shows slightly better randomness and performance than FNV-1a.\n\nSimilar input values of different types could produce different hash values, for example the same numeric value represented as `SMALLINT` or `BIGINT`, `FLOAT` or `DOUBLE`, or `DECIMAL(5,2)` or `DECIMAL(20,5)`.\n\nExamples:\n\n``````[localhost:21000] > create table h (x int, s string);\n[localhost:21000] > insert into h values (0, 'hello'), (1,'world'), (1234567890,'antidisestablishmentarianism');\n[localhost:21000] > select x, murmur_hash(x) from h;\n+------------+----------------------+\n| x | murmur_hash(x) |\n+------------+----------------------+\n| 0 | 6960269033020761575 |\n| 1 | -780611581681153783 |\n| 1234567890 | -5754914572385924334 |\n+------------+----------------------+\n[localhost:21000] > select s, murmur_hash(s) from h;\n+------------------------------+----------------------+\n| s | murmur_hash(s) |\n+------------------------------+----------------------+\n| hello | 2191231550387646743 |\n| world | 5568329560871645431 |\n| antidisestablishmentarianism | -2261804666958489663 |\n+------------------------------+----------------------+``````\n\nFor short argument values, the high-order bits of the result have relatively higher entropy than fnv_hash:\n\n``````[localhost:21000] > create table b (x boolean);\n[localhost:21000] > insert into b values (true), (true), (false), (false);\n[localhost:21000] > select x, murmur_hash(x) from b;\n+-------+----------------------+\n| x | murmur_hash(x) |\n+-------+----------------------+\n| true | -5720937396023583481 |\n| true | -5720937396023583481 |\n| false | 6351753276682545529 |\n| false | 6351753276682545529 |\n+-------+----------------------+``````\n\nNEGATIVE(numeric_type a)\nPurpose: Returns the argument with the sign reversed; returns a positive value if the argument was already negative.\n\nReturn type: Same as the input value\n\nUsage notes: Use `-abs(a)` instead if you need to ensure all return values are negative.\n\nPI()\nPurpose: Returns the constant pi.\n\nReturn type: `double`\n\nPMOD(BIGINT a, BIGINT b), PMOD(DOUBLE a, DOUBLE b)\nPurpose: Returns the positive modulus of a number.
Primarily for Hive SQL compatibility.\n\nReturn type: `INT` or `DOUBLE`, depending on type of arguments\n\nExamples:\n\nThe following examples show how the `fmod()` function sometimes returns a negative value depending on the sign of its arguments, and the `pmod()` function returns the same value as `fmod()`, but sometimes with the sign flipped.\n\n``````select fmod(-5,2);\n+-------------+\n| fmod(-5, 2) |\n+-------------+\n| -1 |\n+-------------+\n\nselect pmod(-5,2);\n+-------------+\n| pmod(-5, 2) |\n+-------------+\n| 1 |\n+-------------+\n\nselect fmod(-5,-2);\n+--------------+\n| fmod(-5, -2) |\n+--------------+\n| -1 |\n+--------------+\n\nselect pmod(-5,-2);\n+--------------+\n| pmod(-5, -2) |\n+--------------+\n| -1 |\n+--------------+\n\nselect fmod(5,-2);\n+-------------+\n| fmod(5, -2) |\n+-------------+\n| 1 |\n+-------------+\n\nselect pmod(5,-2);\n+-------------+\n| pmod(5, -2) |\n+-------------+\n| -1 |\n+-------------+\n``````\nPOSITIVE(numeric_type a)\nPurpose: Returns the original argument unchanged (even if the argument is negative).\n\nReturn type: Same as the input value\n\nUsage notes: Use `abs()` instead if you need to ensure all return values are positive.\n\nPOW(DOUBLE a, double p), POWER(DOUBLE a, DOUBLE p), DPOW(DOUBLE a, DOUBLE p), FPOW(DOUBLE a, DOUBLE p)\nPurpose: Returns the first argument raised to the power of the second argument.\n\nReturn type: `DOUBLE`\n\nPRECISION(numeric_expression)\nPurpose: Computes the precision (number of decimal digits) needed to represent the type of the argument expression as a `DECIMAL` value.\n\nUsage notes:\n\nTypically used in combination with the `scale()` function, to determine the appropriate `DECIMAL(precision,scale)` type to declare in a `CREATE TABLE` statement or `CAST()` function.\n\nReturn type: `INT`\n\nExamples:\n\nThe following examples demonstrate how to check the precision and scale of numeric literals or other numeric expressions. Impala represents numeric literals in the smallest appropriate type. 5 is a `TINYINT` value, which ranges from -128 to 127, therefore 3 decimal digits are needed to represent the entire range, and because it is an integer value there are no fractional digits. 1.333 is interpreted as a `DECIMAL` value, with 4 digits total and 3 digits after the decimal point.\n``````[localhost:21000] > select precision(5), scale(5);\n+--------------+----------+\n| precision(5) | scale(5) |\n+--------------+----------+\n| 3 | 0 |\n+--------------+----------+\n[localhost:21000] > select precision(1.333), scale(1.333);\n+------------------+--------------+\n| precision(1.333) | scale(1.333) |\n+------------------+--------------+\n| 4 | 3 |\n+------------------+--------------+\n[localhost:21000] > with t1 as\n( select cast(12.34 as decimal(20,2)) x union select cast(1 as decimal(8,6)) x )\nselect precision(x), scale(x) from t1 limit 1;\n+--------------+----------+\n| precision(x) | scale(x) |\n+--------------+----------+\n| 24 | 6 |\n+--------------+----------+\n``````\nQUOTIENT(BIGINT numerator, BIGINT denominator), QUOTIENT(DOUBLE numerator, DOUBLE denominator)\nPurpose: Returns the first argument divided by the second argument, discarding any fractional part. Avoids promoting integer arguments to `DOUBLE` as happens with the `/` SQL operator. Also includes an overload that accepts `DOUBLE` arguments, discards the fractional part of each argument value before dividing, and again returns `BIGINT`. 
With integer arguments, this function works the same as the `DIV` operator.\n\nReturn type: `BIGINT`\n\nRADIANS(DOUBLE a)\nPurpose: Converts argument value from degrees to radians.\n\nReturn type: `DOUBLE`\n\nRAND(), RAND(BIGINT seed), RANDOM(), RANDOM(BIGINT seed)\nPurpose: Returns a random value between 0 and 1. After `rand()` is called with a seed argument, it produces a consistent random sequence based on the seed value.\n\nReturn type: `DOUBLE`\n\nUsage notes: Currently, the random sequence is reset after each query, and multiple calls to `rand()` within the same query return the same value each time. For number sequences that differ from query to query, pass a unique seed value to each call to `rand()`. For example, `select rand(unix_timestamp()) from ...`\n\nExamples:\n\nThe following examples show how `rand()` can produce sequences of varying predictability, so that you can reproduce query results involving random values or generate unique sequences of random values for each query. When `rand()` is called with no argument, it generates the same sequence of values each time, regardless of the ordering of the result set. When `rand()` is called with a constant integer, it generates a different sequence of values, but still always the same sequence for the same seed value. If you pass in a seed value that changes, such as the return value of the expression `unix_timestamp(now())`, each query will use a different sequence of random values, potentially more useful in probability calculations although more difficult to reproduce at a later time. Therefore, the final two examples with an unpredictable seed value also include the seed in the result set, to make it possible to reproduce the same random sequence later.\n\n``````select x, rand() from three_rows;\n+---+-----------------------+\n| x | rand() |\n+---+-----------------------+\n| 1 | 0.0004714746030380365 |\n| 2 | 0.5895895192351144 |\n| 3 | 0.4431900859080209 |\n+---+-----------------------+\n\nselect x, rand() from three_rows order by x desc;\n+---+-----------------------+\n| x | rand() |\n+---+-----------------------+\n| 3 | 0.0004714746030380365 |\n| 2 | 0.5895895192351144 |\n| 1 | 0.4431900859080209 |\n+---+-----------------------+\n\nselect x, rand(1234) from three_rows order by x;\n+---+----------------------+\n| x | rand(1234) |\n+---+----------------------+\n| 1 | 0.7377511392057646 |\n| 2 | 0.009428468537250751 |\n| 3 | 0.208117277924026 |\n+---+----------------------+\n\nselect x, rand(1234) from three_rows order by x desc;\n+---+----------------------+\n| x | rand(1234) |\n+---+----------------------+\n| 3 | 0.7377511392057646 |\n| 2 | 0.009428468537250751 |\n| 1 | 0.208117277924026 |\n+---+----------------------+\n\nselect x, unix_timestamp(now()), rand(unix_timestamp(now()))\nfrom three_rows order by x;\n+---+-----------------------+-----------------------------+\n| x | unix_timestamp(now()) | rand(unix_timestamp(now())) |\n+---+-----------------------+-----------------------------+\n| 1 | 1440777752 | 0.002051228658320023 |\n| 2 | 1440777752 | 0.5098743483004506 |\n| 3 | 1440777752 | 0.9517714925817081 |\n+---+-----------------------+-----------------------------+\n\nselect x, unix_timestamp(now()), rand(unix_timestamp(now()))\nfrom three_rows order by x desc;\n+---+-----------------------+-----------------------------+\n| x | unix_timestamp(now()) | rand(unix_timestamp(now())) |\n+---+-----------------------+-----------------------------+\n| 3 | 1440777761 | 0.9985985015512437 |\n| 2 | 1440777761 | 0.3251255333074953 |\n| 
1 | 1440777761 | 0.02422675025846192 |\n+---+-----------------------+-----------------------------+\n``````\nROUND(DOUBLE a), ROUND(DOUBLE a, INT d), ROUND(DECIMAL a, int_type d), DROUND(DOUBLE a), DROUND(DOUBLE a, INT d), DROUND(DECIMAL(p,s) a, int_type d)\nPurpose: Rounds a floating-point value. By default (with a single argument), rounds to the nearest integer. Values ending in .5 are rounded up for positive numbers, down for negative numbers (that is, away from zero). The optional second argument specifies how many digits to leave after the decimal point; values greater than zero produce a floating-point return value rounded to the requested number of digits to the right of the decimal point.\n\nReturn type: Same as the input type\n\nSCALE(numeric_expression)\nPurpose: Computes the scale (number of decimal digits to the right of the decimal point) needed to represent the type of the argument expression as a `DECIMAL` value.\n\nUsage notes:\n\nTypically used in combination with the `precision()` function, to determine the appropriate `DECIMAL(precision,scale)` type to declare in a `CREATE TABLE` statement or `CAST()` function.\n\nReturn type: `int`\n\nExamples:\n\nThe following examples demonstrate how to check the precision and scale of numeric literals or other numeric expressions. Impala represents numeric literals in the smallest appropriate type. 5 is a `TINYINT` value, which ranges from -128 to 127, therefore 3 decimal digits are needed to represent the entire range, and because it is an integer value there are no fractional digits. 1.333 is interpreted as a `DECIMAL` value, with 4 digits total and 3 digits after the decimal point.\n``````[localhost:21000] > select precision(5), scale(5);\n+--------------+----------+\n| precision(5) | scale(5) |\n+--------------+----------+\n| 3 | 0 |\n+--------------+----------+\n[localhost:21000] > select precision(1.333), scale(1.333);\n+------------------+--------------+\n| precision(1.333) | scale(1.333) |\n+------------------+--------------+\n| 4 | 3 |\n+------------------+--------------+\n[localhost:21000] > with t1 as\n( select cast(12.34 as decimal(20,2)) x union select cast(1 as decimal(8,6)) x )\nselect precision(x), scale(x) from t1 limit 1;\n+--------------+----------+\n| precision(x) | scale(x) |\n+--------------+----------+\n| 24 | 6 |\n+--------------+----------+\n``````\nSIGN(DOUBLE a)\nPurpose: Returns -1, 0, or 1 to indicate the signedness of the argument value.\n\nReturn type: `INT`\n\nSIN(DOUBLE a)\nPurpose: Returns the sine of the argument.\n\nReturn type: `DOUBLE`\n\nSINH(DOUBLE a)\nPurpose: Returns the hyperbolic sine of the argument.\n\nReturn type: `DOUBLE`\n\nSQRT(DOUBLE a), DSQRT(DOUBLE a)\nPurpose: Returns the square root of the argument.\n\nReturn type: `DOUBLE`\n\nTAN(DOUBLE a)\nPurpose: Returns the tangent of the argument.\n\nReturn type: `DOUBLE`\n\nTANH(DOUBLE a)\nPurpose: Returns the hyperbolic tangent of the argument.\n\nReturn type: `DOUBLE`\n\nTRUNCATE(DOUBLE_or_DECIMAL a[, digits_to_leave]), DTRUNC(DOUBLE_or_DECIMAL a[, digits_to_leave]), TRUNC(DOUBLE_or_DECIMAL a[, digits_to_leave])\nPurpose: Removes some or all fractional digits from a numeric value.\n\nArguments: With a single floating-point argument, removes all fractional digits, leaving an integer value. The optional second argument specifies the number of fractional digits to include in the return value, and only applies when the argument type is `DECIMAL`. A second argument of 0 truncates to a whole integer value. 
A second argument of negative N sets N digits to 0 on the left side of the decimal point.\n\nScale argument: The scale argument applies only when truncating `DECIMAL` values. It is an integer specifying how many significant digits to leave to the right of the decimal point. A scale argument of 0 truncates to a whole integer value. A scale argument of negative N sets N digits to 0 on the left side of the decimal point.\n\n`TRUNCATE()`, `DTRUNC()`, and `TRUNC()` are aliases for the same function.\n\nReturn type: Same as the input type\n\nAdded in: The `TRUNC()` alias was added in Impala 2.10.0.\n\nUsage notes:\n\nYou can also pass a `DOUBLE` argument, or `DECIMAL` argument with optional scale, to the `DTRUNC()` or `TRUNCATE` functions. Using the `TRUNC()` function for numeric values is common with other industry-standard database systems, so you might find such `TRUNC()` calls in code that you are porting to Impala.\n\nThe `TRUNC()` function also has a signature that applies to `TIMESTAMP` values.\n\nExamples:\n\nThe following examples demonstrate the `TRUNCATE()` and `DTRUNC()` signatures for this function:\n\n``````select truncate(3.45);\n+----------------+\n| truncate(3.45) |\n+----------------+\n| 3 |\n+----------------+\n\nselect truncate(-3.45);\n+-----------------+\n| truncate(-3.45) |\n+-----------------+\n| -3 |\n+-----------------+\n\nselect truncate(3.456,1);\n+--------------------+\n| truncate(3.456, 1) |\n+--------------------+\n| 3.4 |\n+--------------------+\n\nselect dtrunc(3.456,1);\n+------------------+\n| dtrunc(3.456, 1) |\n+------------------+\n| 3.4 |\n+------------------+\n\nselect truncate(3.456,2);\n+--------------------+\n| truncate(3.456, 2) |\n+--------------------+\n| 3.45 |\n+--------------------+\n\nselect truncate(3.456,7);\n+--------------------+\n| truncate(3.456, 7) |\n+--------------------+\n| 3.4560000 |\n+--------------------+``````\n\nThe following examples demonstrate using `TRUNC()` with `DECIMAL` or `DOUBLE` values, and with an optional scale argument for `DECIMAL` values.
(The behavior is the same for the `TRUNCATE()` and `DTRUNC()` aliases also.)\n\n``````\ncreate table t1 (d decimal(20,7));\n\n-- By default, no digits to the right of the decimal point.\ninsert into t1 values (1.1), (2.22), (3.333), (4.4444), (5.55555);\nselect trunc(d) from t1 order by d;\n+----------+\n| trunc(d) |\n+----------+\n| 1 |\n| 2 |\n| 3 |\n| 4 |\n| 5 |\n+----------+\n\n-- 1 digit to the right of the decimal point.\nselect trunc(d,1) from t1 order by d;\n+-------------+\n| trunc(d, 1) |\n+-------------+\n| 1.1 |\n| 2.2 |\n| 3.3 |\n| 4.4 |\n| 5.5 |\n+-------------+\n\n-- 2 digits to the right of the decimal point,\n-- including trailing zeroes if needed.\nselect trunc(d,2) from t1 order by d;\n+-------------+\n| trunc(d, 2) |\n+-------------+\n| 1.10 |\n| 2.22 |\n| 3.33 |\n| 4.44 |\n| 5.55 |\n+-------------+\n\ninsert into t1 values (9999.9999), (8888.8888);\n\n-- Negative scale truncates digits to the left\n-- of the decimal point.\nselect trunc(d,-2) from t1 where d > 100 order by d;\n+--------------+\n| trunc(d, -2) |\n+--------------+\n| 8800 |\n| 9900 |\n+--------------+\n\n-- The scale of the result is adjusted to match the\n-- scale argument.\nselect trunc(d,2),\nprecision(trunc(d,2)) as p,\nscale(trunc(d,2)) as s\nfrom t1 order by d;\n+-------------+----+---+\n| trunc(d, 2) | p | s |\n+-------------+----+---+\n| 1.10 | 15 | 2 |\n| 2.22 | 15 | 2 |\n| 3.33 | 15 | 2 |\n| 4.44 | 15 | 2 |\n| 5.55 | 15 | 2 |\n| 8888.88 | 15 | 2 |\n| 9999.99 | 15 | 2 |\n+-------------+----+---+\n``````\n``````\ncreate table dbl (d double);\n\ninsert into dbl values\n(1.1), (2.22), (3.333), (4.4444), (5.55555),\n(8888.8888), (9999.9999);\n\n-- With double values, there is no optional scale argument.\nselect trunc(d) from dbl order by d;\n+----------+\n| trunc(d) |\n+----------+\n| 1 |\n| 2 |\n| 3 |\n| 4 |\n| 5 |\n| 8888 |\n| 9999 |\n+----------+\n``````\nUNHEX(STRING a)\nPurpose: Returns a string of characters with ASCII values corresponding to pairs of hexadecimal digits in the argument.\n\nReturn type: `STRING`\n\nWIDTH_BUCKET(DECIMAL expr, DECIMAL min_value, DECIMAL max_value, INT num_buckets)\nPurpose: Returns the bucket number in which the `expr` value would fall in the histogram where its range between `min_value` and `max_value` is divided into `num_buckets` buckets of identical sizes.\nThe function returns:\n• `NULL` if any argument is `NULL`.\n• `0` if `expr` < `min_value`.\n• `num_buckets + 1` if `expr` >= `max_value`.\n• If none of the above, the bucket number where `expr` falls.\nArguments: The following rules apply to the arguments.\n• `min_value` is the minimum value of the histogram range.\n• `max_value` is the maximum value of the histogram range.\n• `num_buckets` must be greater than `0`.\n• `min_value` must be less than `max_value`.\n\nUsage notes:\n\nEach bucket contains values equal to or greater than the base value of that bucket and less than the base value of the next bucket. For example, with `width_bucket(8, 1, 10, 3)`, the bucket ranges are actually the 0th \"underflow bucket\" with the range (-infinity to 0.999...), (1 to 3.999...), (4 to 6.999...), (7 to 9.999...), and the \"overflow bucket\" with the range (10 to infinity).\n\nReturn type: `BIGINT`\n\nThe call below creates `3` buckets over the range `1` to `20`, each with a width of 6.333, and returns `2` because the value `8` falls in bucket #2 (a pure-Python restatement of these rules follows the example below):\n``WIDTH_BUCKET(8, 1, 20, 3)``\n``````SELECT account, invoice_amount, WIDTH_BUCKET(invoice_amount,50,1000,10)
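The bucket arithmetic described above can be sanity-checked outside Impala. The following short Python sketch is an addition to this page (not Impala code) that restates the documented rules:\n\n``````def width_bucket(expr, min_value, max_value, num_buckets):\n    # Pure-Python restatement of the WIDTH_BUCKET rules documented above.\n    if None in (expr, min_value, max_value, num_buckets):\n        return None\n    if expr < min_value:\n        return 0                    # the underflow bucket\n    if expr >= max_value:\n        return num_buckets + 1      # the overflow bucket\n    width = (max_value - min_value) / num_buckets\n    return int((expr - min_value) / width) + 1\n\nprint(width_bucket(8, 1, 20, 3))    # 2, matching the WIDTH_BUCKET(8, 1, 20, 3) example\nprint(width_bucket(8, 1, 10, 3))    # 3: 8 falls in the (7 to 9.999...) bucket``````" ]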
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5902832,"math_prob":0.95497847,"size":33702,"snap":"2022-27-2022-33","text_gpt3_token_len":9096,"char_repetition_ratio":0.25238886,"word_repetition_ratio":0.29959267,"special_character_ratio":0.39602992,"punctuation_ratio":0.17843343,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9926767,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-14T23:58:51Z\",\"WARC-Record-ID\":\"<urn:uuid:77fed51d-8d63-45ae-b6f8-10c0bdda07e5>\",\"Content-Length\":\"78538\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5ccba360-5a0b-43a3-81a8-eb1134575d4c>\",\"WARC-Concurrent-To\":\"<urn:uuid:d44bbf4a-5736-418c-b902-c59da80b37e1>\",\"WARC-IP-Address\":\"13.224.214.51\",\"WARC-Target-URI\":\"https://docs.cloudera.com/cdw-runtime/1.4.0/impala-sql-reference/topics/impala-math-functions.html\",\"WARC-Payload-Digest\":\"sha1:F4CH2YAQKU2ZYVTBZSGEU752MLJGGGK6\",\"WARC-Block-Digest\":\"sha1:3BAJQKOH6CSTMGW4RLTLOLKXFFISQBNQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572089.53_warc_CC-MAIN-20220814234405-20220815024405-00739.warc.gz\"}"}
https://www.reference.com/math/3x-2y-8-solved-542883857cc5485d
[ "How Is 3x + 2y = 8 Solved?\n\nTo solve the slope of 3x + 2y = 8, the equation must first be put into slope intercept form with y on the left side and the remaining numbers on the right side. It can be transformed to 2y = -3x + 8, which will then be simplified down to y = -3/2x + 8. The slope of the line is -3/2.\n\nThese types of equations are most commonly used to find the slope and the y-intercept of a problem. To find these, the equation must always be transformed into slope intercept form. The slope of the line is always the number that is connected with x, while the y-intercept is always the number that is on its own in the equation.\n\nIn the 3x + 2y = 8 equation, 8 is the y-intercept. The y-intercept is the point of the slope or the line where the line goes through the y-axis. The line will always pass through the y-axis only once in these types of equations because the line is constantly evolving and growing. Slope intercept form is always written as y = mx + b. The slope of the line is always m.\n\nSimilar Articles" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9622812,"math_prob":0.9991973,"size":983,"snap":"2019-43-2019-47","text_gpt3_token_len":247,"char_repetition_ratio":0.18283963,"word_repetition_ratio":0.0625,"special_character_ratio":0.2512716,"punctuation_ratio":0.07619048,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99964225,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T18:06:58Z\",\"WARC-Record-ID\":\"<urn:uuid:d49dda75-4806-4282-9db6-119de2f2c1ca>\",\"Content-Length\":\"159394\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dbfb45f7-0111-45c1-b75d-fea42771afb7>\",\"WARC-Concurrent-To\":\"<urn:uuid:f3e6cd9a-d600-421c-880f-310227922f72>\",\"WARC-IP-Address\":\"151.101.250.114\",\"WARC-Target-URI\":\"https://www.reference.com/math/3x-2y-8-solved-542883857cc5485d\",\"WARC-Payload-Digest\":\"sha1:H6DW4TRCROSJ7V6LTVYYW47ALI5T64ME\",\"WARC-Block-Digest\":\"sha1:676AYBNCKU3ONXOB6OCAC5JGLNLOO5J7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987781397.63_warc_CC-MAIN-20191021171509-20191021195009-00132.warc.gz\"}"}
https://stacktuts.com/how-to-get-the-top-n-rows-of-a-pandas-data-frame-in-python
[ "# How to get the top n rows of a pandas DataFrame in Python\n\nPublished 8/2/20221 minute reading\n\nThere are a number of ways to get the top n rows of a pandas DataFrame.\n\n## Get the top n rows of a DataFrame using the head method\n\nGetting the top n rows of a `pandas.DataFrame` returns a new DataFrame containing only the first `n` rows of the original.\n\nPandas has a `DataFrame.head()` method that returns the first n rows of a DataFrame. Calling `DataFrame.head(n)` returns a new DataFrame containing only the first `n` rows of the original.\n\nLet check out an example how to get the top 2 rows of the `df` DataFrame.\n\n``````import pandas as pd\n\n# Create a DataFrame\ndf = pd.DataFrame({\n'A': [1, 2, 3, 4, 5],\n'B': [5, 4, 3, 2, 1]\n})\n\n# Get the first 2 rows\n\n# Print first_two_rows\nprint(first_two_rows)\n\n# It should print:\n#\n# A B\n# 1 5\n# 2 4``````\n\n## Get the top n rows of a DataFrame using the iloc\n\nAnother way to get the first n rows of a DataFrame is to use the `DataFrame.iloc` method.\n\n``````import pandas as pd\n# Create a DataFrame\ndf = pd.DataFrame({\n'A': [1, 2, 3, 4, 5],\n'B': [5, 4, 3, 2, 1]\n})\n\n# Get the first 2 rows\nfirst_two_rows = df.iloc[:2]\n\n# Print first_two_rows\nprint(first_two_rows)\n\n# It should print:\n#\n# A B\n# 1 5\n# 2 4``````\n\n## Summary\n\n### What is the method to get the top n rows of a pandas DataFrame in Python?\n\nEither DataFrame.head() or DataFrame.iloc[:n]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.51061666,"math_prob":0.65736663,"size":1251,"snap":"2022-40-2023-06","text_gpt3_token_len":377,"char_repetition_ratio":0.21251002,"word_repetition_ratio":0.5435685,"special_character_ratio":0.3133493,"punctuation_ratio":0.15248227,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99734503,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-02T08:45:52Z\",\"WARC-Record-ID\":\"<urn:uuid:68822f38-d74b-4e2a-9ce5-03f828ffe1cf>\",\"Content-Length\":\"53538\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:07108fc2-3700-4b4f-9086-a37b804122c4>\",\"WARC-Concurrent-To\":\"<urn:uuid:719918ed-08ad-4d89-b44d-fc76c9fa4bc4>\",\"WARC-IP-Address\":\"104.21.35.200\",\"WARC-Target-URI\":\"https://stacktuts.com/how-to-get-the-top-n-rows-of-a-pandas-data-frame-in-python\",\"WARC-Payload-Digest\":\"sha1:FKCHRJX33H42336SBJUSEHUC3BH44OGS\",\"WARC-Block-Digest\":\"sha1:IKV5SVGF5JF7HC72443POBKEZLJKI4S5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337307.81_warc_CC-MAIN-20221002083954-20221002113954-00684.warc.gz\"}"}
https://www.physicsforums.com/members/charliecw.649017/recent-content
[ "# Recent content by CharlieCW\n\n1. ### Kramers-Kronig Relations: Principal Value\n\nYou're right, I had taken the wrong limits of integration. Sorry for the late reply by the way, I had an exam this morning. Now continuing with the equations, let's start from the beginning. Assuming $\\chi_1=\\omega^2_p/\\omega$, then we can substitute in the Kramers-Kronig relation as...\n2. ### Kramers-Kronig Relations: Principal Value\n\nThanks for your reply, Jason. Just from the form of the equations, if I had $\\chi_1=-\\omega_0/\\omega$, then $\\chi_2=\\pi\\omega_0/2$, as when taking the limits only the evaluation on zero will lead to two non-zero identical terms. Here I have no problem evaluating the limits of the integral...\n3. ### Kramers-Kronig Relations: Principal Value\n\nI'm kind of confused on how to evaluate the principal value as it's a topic I've never seen in complex analysis and all the literature I've read so far only deals with the formal definition, not providing an example on how to calculate it properly. Therefore, I think just understanding at least...\n4. ### One-dimensional polymer (Statistical Physics)\n\nWell I managed to solve it and I got that both the average energy and length follow a Fermi-Dirac like distribution. I think I'll post the solution during the weekend in case anyone finds it useful.\n5. ### One-dimensional polymer (Statistical Physics)\n\nDon't worry, with your explanation I better understood the meaning of the terms in the exponential and I think I see more clearly how to deal with these kind of systems. So then my idea about considering the tension $\\tau$ for the linear case was correct since, as you mentioned, it is part of...\n6. ### One-dimensional polymer (Statistical Physics)\n\nIndeed, the term $pV_S$ is for the pressure and volume, but since the general formula was derived for a 3D recipient I was thinking about converting it to the one-dimensional case $pV_S\\rightarrow \\tau L$. However, it also makes more sense that you mention to obtain the tension as $dZ/dl$...\n7. ### What percent of the sky can an astronomer see at one time?\n\nIndeed, in my answer I'm basically considering the top of the well as a flat disk to find an expression for the angle $\\theta$.\n8. ### What percent of the sky can an astronomer see at one time?\n\nSince the view is 3D, you should indeed solid angles to calculate the angle of vision. First consider the case were the astronomer is outside the well. In this case, he sees the 100% of the sky (assuming you call 100% seeing the whole half hemisphere on where they're standing). So the solid...\n9. ### One-dimensional polymer (Statistical Physics)\n\n1. Homework Statement Consider a polymer formed by connecting N disc-shaped molecules into a onedimensional chain. Each molecule can align either its long axis (of length $l_1$ and energy $E_1$) or short axis (of length $l_2$ and energy $E_2$). Suppose that the chain is subject to...\n10. ### Conductor sphere floating on a dielectric fluid\n\nWell I checked similar procedure and I managed to advance the following: While I don't know if it's really useful, if we apply mechanical equilibrium before adding the charge, it's straightforward to find that $\\rho_s=\\rho_l$, where $\\rho_s$ and $\\rho_l$ are the volumetric densities of...\n\n15. ### Finding electric potential using Green's function\n\nThank you for your time, I really appreciate it. Indeed I also checked from Jackson and Greiner and I read that I was free to choose $F(r,r')$ so that $G(r,r')$ is zero on the surface. 
After a couple of exchanges it turns out we were right: the Green function only depends on the geometry of...
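For the principal-value question raised in the Kramers-Kronig threads above, a small numerical illustration may help. This is an addition to the thread list, and the integrand is an assumption, chosen only because its principal value is known in closed form:\n\n``````import numpy as np\n\ndef principal_value(f, a, half_width, n=2_000_000):\n    # Cauchy principal value of the integral of f(x)/(x - a) over\n    # [a - half_width, a + half_width]. An even n makes the grid symmetric\n    # about the pole with no sample landing on it, so the divergent\n    # contributions cancel pairwise.\n    x = np.linspace(a - half_width, a + half_width, n)\n    return np.trapz(f(x) / (x - a), x)\n\n# Known result: P.V. of the integral over the real line of dx / ((x - a)(x^2 + 1))\n# equals -pi * a / (a^2 + 1); for a = 1 that is -pi/2, about -1.5708.\na = 1.0\napprox = principal_value(lambda x: 1.0 / (x**2 + 1.0), a, half_width=1000.0)\nprint(approx, -np.pi * a / (a**2 + 1.0))``````" ]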
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92380124,"math_prob":0.93973213,"size":3843,"snap":"2019-43-2019-47","text_gpt3_token_len":940,"char_repetition_ratio":0.08622037,"word_repetition_ratio":0.0,"special_character_ratio":0.25578976,"punctuation_ratio":0.13618676,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9966005,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-20T11:20:31Z\",\"WARC-Record-ID\":\"<urn:uuid:6ee21961-205a-402f-ab11-c6ed7765e091>\",\"Content-Length\":\"53967\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:83fc20db-8a27-47bf-93b2-f32c1a6bec61>\",\"WARC-Concurrent-To\":\"<urn:uuid:8d9d0ab7-5857-4228-934a-ea68edbd35a6>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/members/charliecw.649017/recent-content\",\"WARC-Payload-Digest\":\"sha1:XQXCRMHIMYTDEW5ZTYPUQVFDQAY3XL77\",\"WARC-Block-Digest\":\"sha1:LJHZI7BZUCUHFFEYCLOYK4UAOOZSVECN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670558.91_warc_CC-MAIN-20191120111249-20191120135249-00115.warc.gz\"}"}
https://betterlesson.com/lesson/516065/counting-up-to-solve-problems?from=cc_lesson
[ "# Counting Up to Solve Problems\n\n13 teachers like this lesson\nPrint Lesson\n\n## Objective\n\nSWBAT count forward by 1's, 2's, 5's or 10's to solve an addition problem containing one and two digit numbers.\n\n#### Big Idea\n\nThe Common Core standards expect addition and subtraction fluency with one and two digit numbers. Students need to develop efficient fluency strategies.\n\n## Warm Up\n\n10 minutes\n\nArticulating mathematical thinking is a critical skill in development at 2nd grade, and requires practice. Students wrote addition and subtraction word problems several days ago and some are left to share. So, I ask any child who has not shared to read his/her word problem to the class. The other students use their math journals to solve the problem.\n\nStudents are now used to this process, and I expect several students will find a solution to share. I expect others to comment on how the problem was solved or ask questions about things that don't make sense (MP3).\n\nAs students present, I particularly draw students' attentions to strategies that use counting up and number lines, because we will be using these strategies/tools today.\n\nI also think it is important to talk about the fact that there is more than one way to solve a word problem.\n\n## Practice Time\n\n15 minutes\n\nToday I introduce a new game to students. This game will encourage students to count up or back to solve a math sentence.\n\nA deck of number cards is placed face down between 2 or 3 students. Students also are given a small pile of colored chips.  Each student draws 2 cards and adds them up. Now they share their answers and the answer closest to 10 takes a colored chip from the pile. If 2 students are equi-distant from 10, they would both take a chip. The cards are then placed in a \"discard\" pile and students draw again.\n\nI demonstrate with students how to add the 2 cards and then count up or back to see how far away from 10 they are. If a child picks up a 7 and a 4 they would add and say 7+4 =11. They would then count back 11, 10 so it is only 1 away.\n\nStudents use the number lines on their desks to help them visualize how far away from 10 they are.\n\nSome children may use their fingers to determine how far from ten they are because the largest cards they will draw are 9+9.  While I want students to move beyond fingers to other tools that will help them when they deal with larger numbers, I do not stop them from counting up or back on their fingers.\n\nStudents check each other's answers before deciding who gets the chip. They are instructed to count on the number line or number grid if they don't agree about the distance from 10.\n\nThis game requires two sets of problem solving. Both steps are building the fluency expected in the Common Core grade 2 standards.\n\n## Teaching the Lesson\n\n30 minutes\n\nTo start the lesson I introduce the blank number line again. I give each student a blank number line on a sentence strip. I ask them to make a mark right near the left edge. Next I tell them to put down 2 fingers and make another mark. I ask them to repeat this until they have a number line marked in even spaces.\n\nNext I ask them to put a colored chip on the mark at the left edge. I tell them to imagine that that mark is the number 18. I ask them to use the number line to count up 10 and find 10 more than 18. We check our answers. Now I ask them to move the chip to the mark at the right end of the line. I ask them to imagine that that number is 25. 
I ask them to count back 10 and find 10 less than 25.\n\nI repeat this with several more numbers. I want to reinforce the concept of ten more, ten less as a way to understand adding and subtracting tens.\n\nNext I break the class into 3 groups based on their understanding of the number line from previous lesson informal assessments.\n\nThe group that is able to use the number line to solve problems is given a challenge paper of word problems that require adding 2 digit numbers. They are asked to work in partners, or alone to solve the problems using their number line as needed to support their thinking.\n\nThe group that has some understanding of the number line, but has difficulty with a blank line will work with a parent to locate a number on the line and then add a second number to it. The purpose here is for students to understand that they do not always have to start with the left end marking. They need to think about what number they are adding, ie is it far away from the number they are starting with, or very close. If it is far away, should they count by 2s, or 10s. The group will make some decisions about starting points together, and then solve the problems.\n\nThe group that still does not grasp how the number line works, (they always want to start at 1, even if they are looking for the number 89), will work with me to locate one number relative to another, counting by 1's or 2's and putting chips on the two numbers. If students become competent with this skill, we will try starting with one single digit number and adding on a second single digit number.\n\n## Closing\n\n5 minutes\n\nTo close, I ask student to write in their journals about how they can use a number line to help them with adding or subtracting. I encourage students to use drawings and words to show me how they are using the number line.\n\nThis may seem like a jump to abstract thinking, but I am looking an expression of mathematical thinking, at a basic 2nd grade level. According to Mathematical Practice P3, \"Elementary students can construct arguments using concrete referents such as objects, drawings, diagrams, and actions. Such arguments can make sense and be correct, even though they are not generalized or made formal until later grades.\"\n\nBy asking students how they can use the number line, I encourage students to use drawings as well as words in their journals to demonstrate their thinking.\n\nI review student journals after school to assess student understanding. One child wrote, \"I count up and down on the number line.\" She drew a picture of a number line with hops going along the line. This shows a basic understanding of how a number line can be used." ]
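Although this is a classroom activity, the scoring rule of the card game in the Practice Time section ("the sum closest to 10 takes a chip; ties share") is a precise little procedure. Purely as an illustration of that rule, and in no way part of the lesson itself, here is a minimal C sketch of one round:

#include <stdio.h>
#include <stdlib.h>

/* Distance of a two-card sum from the target 10: the quantity the
   students find by counting up or back on the number line. */
static int distance_from_ten(int card1, int card2) {
    return abs(card1 + card2 - 10);
}

int main(void) {
    /* Example round: player A draws 7 and 4, player B draws 3 and 5. */
    int a = distance_from_ten(7, 4); /* 7 + 4 = 11, which is 1 away */
    int b = distance_from_ten(3, 5); /* 3 + 5 = 8, which is 2 away */
    if (a < b)
        printf("Player A takes a chip (%d vs %d away).\n", a, b);
    else if (b < a)
        printf("Player B takes a chip (%d vs %d away).\n", b, a);
    else
        printf("Tie: both players take a chip (%d away).\n", a);
    return 0;
}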
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.96514195,"math_prob":0.7969983,"size":5610,"snap":"2021-31-2021-39","text_gpt3_token_len":1216,"char_repetition_ratio":0.15661791,"word_repetition_ratio":0.017175572,"special_character_ratio":0.21818182,"punctuation_ratio":0.07926829,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9659879,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-27T06:46:18Z\",\"WARC-Record-ID\":\"<urn:uuid:9d2c7a42-9c04-40ae-934b-444d7702d8f8>\",\"Content-Length\":\"110137\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8dc6db6d-d7e1-42ea-9edd-3d9912685df8>\",\"WARC-Concurrent-To\":\"<urn:uuid:67328049-e20d-4896-a4f6-1f8d8d791ef8>\",\"WARC-IP-Address\":\"52.206.231.121\",\"WARC-Target-URI\":\"https://betterlesson.com/lesson/516065/counting-up-to-solve-problems?from=cc_lesson\",\"WARC-Payload-Digest\":\"sha1:LP3LWGNQQR6SCH33JJOCPAGFVL7HJYZ5\",\"WARC-Block-Digest\":\"sha1:QAFENYX7VS2AMZ5LXVZ27M37Y3XASHF7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152236.64_warc_CC-MAIN-20210727041254-20210727071254-00526.warc.gz\"}"}
https://www.teachoo.com/2919/646/Misc-30---Find-derivative--x---sinn-x---Chapter-13-Class-11/category/Miscellaneous/
[ "Miscellaneous\n\nChapter 13 Class 11 Limits and Derivatives\nSerial order wise", null, "", null, "", null, "", null, "", null, "This video is only available for Teachoo black users\n\nIntroducing your new favourite teacher - Teachoo Black, at only ₹83 per month\n\n### Transcript\n\nMisc 30 Find the derivative of the following functions (it is to be understood that a, b, c, d, p, q, r and s are fixed non-zero constants and m and n are integers): 𝑥/(𝑠𝑖𝑛𝑛 𝑥) Let f(x) = 𝑥/(𝑠𝑖𝑛𝑛 𝑥) Let u = x & v = sinn x ∴ f(x) = 𝑢/𝑣 So, f’(x) = (𝑢/𝑣)^′ Using quotient rule f’(x) = (𝑢^′ 𝑣 −〖 𝑣〗^′ 𝑢)/𝑣^2 Finding u’ & v’ u = x u’ = 1 Now, v = sinn x Let p = sin x v = pn By Leibnitz product rule v’ = (pn)’ p’ = n pn – 1 p’ Putting p = sin x = n sinn – 1 x (sin x)’ = n sinn – 1 x cos x Now, f’(x) = (𝑢/𝑣)^′ = (𝑢^′ 𝑣 −〖 𝑣〗^′ 𝑢)/𝑣^2 = ( 1 (sin𝑛⁡〖 𝑥〗 ) − 〖𝑛 𝑠𝑖𝑛〗^(𝑛−1) 𝑥 cos⁡〖𝑥 (𝑥)〗)/〖〖(𝑠𝑖𝑛〗^𝑛 𝑥)〗^2 = ( 〖𝑠𝑖𝑛〗^𝑛 𝑥 − 𝑥 (𝑛〖𝑠𝑖𝑛〗^(𝑛−1) 𝑥 cos⁡〖𝑥) 〗)/〖〖(𝑠𝑖𝑛〗^𝑛 𝑥)〗^2 = ( 〖𝒔𝒊𝒏〗^(𝒏−𝟏) 𝒙 . sin⁡〖𝑥 − 𝑥 (𝑛 〗 〖𝑠𝑖𝑛〗^(𝑛−1) 𝑥 cos⁡〖𝑥) 〗)/〖〖(𝑠𝑖𝑛〗^𝑛 𝑥)〗^2 = ( 〖𝒔𝒊𝒏〗^(𝒏−𝟏) 𝒙 〖(sin〗⁡〖𝑥 − 𝑛𝑥 . 〗 cos⁡〖𝑥) 〗)/(〖𝑠𝑖𝑛〗^2𝑛 𝑥) = sin⁡〖𝑥 − 𝑛𝑥 cos⁡𝑥 〗/(〖𝑠𝑖𝑛〗^2𝑛 𝒙 . 〖𝒔𝒊𝒏〗^(−(𝒏−𝟏) ) 𝒙) = sin⁡〖𝑥 − 𝑛𝑥 cos⁡𝑥 〗/(〖𝒔𝒊𝒏〗^((𝟐𝒏 − 𝒏+𝟏)) 𝒙) = sin⁡〖𝑥 − 𝑛𝑥 cos⁡𝑥 〗/(〖𝑠𝑖𝑛〗^(𝑛 + 1) 𝑥) Thus, f’(x) = 𝒔𝒊𝒏⁡〖𝒙 − 𝒏𝒙 𝒄𝒐𝒔⁡𝒙 〗/(〖𝒔𝒊𝒏〗^(𝒏 + 𝟏) 𝒙)", null, "" ]
[ null, "https://d1avenlh0i1xmr.cloudfront.net/b139403b-d7f1-4c62-8994-faef0b8e05b5/slide87.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/cb5df691-952b-4983-ba9d-1f0f3c11b7a7/slide88.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/2131ed8d-5c83-4b80-a4f0-fe85dd470ce2/slide89.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/f129524a-f8b3-416f-863d-efa0eed47e14/slide90.jpg", null, "https://d3m5vxlyiwf9rd.cloudfront.net/vimeo-thumbnails/163500452/thumbnail.jpg", null, "https://www.teachoo.com/static/misc/Davneet_Singh.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.56832147,"math_prob":0.9999311,"size":1482,"snap":"2022-27-2022-33","text_gpt3_token_len":1092,"char_repetition_ratio":0.2368065,"word_repetition_ratio":0.065625,"special_character_ratio":0.4183536,"punctuation_ratio":0.047477745,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99998355,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,7,null,7,null,7,null,7,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T06:09:20Z\",\"WARC-Record-ID\":\"<urn:uuid:53b7b150-9bc8-42b0-b634-85795f879345>\",\"Content-Length\":\"185395\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:52c3aaa7-bed2-4293-853a-8d8122abae52>\",\"WARC-Concurrent-To\":\"<urn:uuid:727dfa69-01b7-4464-b00e-1cc03464d6d0>\",\"WARC-IP-Address\":\"52.86.133.10\",\"WARC-Target-URI\":\"https://www.teachoo.com/2919/646/Misc-30---Find-derivative--x---sinn-x---Chapter-13-Class-11/category/Miscellaneous/\",\"WARC-Payload-Digest\":\"sha1:SGNF2O4AH7TPNEOT3DZ2KMLMUQOG7MNC\",\"WARC-Block-Digest\":\"sha1:3UJDIE5G4EWRQHHBUOC6P4IU4IYV575U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103328647.18_warc_CC-MAIN-20220627043200-20220627073200-00175.warc.gz\"}"}
https://www.edplace.com/worksheet_info/maths/keystage2/year5/topic/611/944/identify-factors-and-multiples
[ "", null, "# Identify Factors and Multiples\n\nIn this worksheet, students select all the factor pairs for a given number.", null, "Key stage:  KS 2\n\nCurriculum topic:   Number: Multiplication and Division\n\nCurriculum subtopic:   Identify Multiples and Factors\n\nDifficulty level:", null, "", null, "", null, "### QUESTION 1 of 10\n\nThis activity is about factor pairs: whole numbers which multiply together to produce a given number.\n\ne.g. What are the factor pairs of 36?\n\n1 and 36\n\n2 and 18\n\n3 and 12\n\n4 and 9\n\n6 and 6\n\nThese are the only whole numbers which multiply together to give 36.\n\nWant to understand this further and learn how this links to other topics in maths?\n\nWhy not watch this video?\n\nSelect the factor pairs of 20.\n\n1 and 20\n\n2 and 10\n\n4 and 6\n\n4 and 5\n\n6 and 14\n\n10 and 10\n\nSelect the factor pairs of 70.\n\n1 and 70\n\n2 and 68\n\n2 and 35\n\n5 and 14\n\n7 and 10\n\nSelect the factor pairs of 21.\n\n1 and 21\n\n2 and 10\n\n3 and 7\n\n5 and 4\n\n5 and 16\n\nSelect the factor pairs of 11.\n\n1 and 11\n\n2 and 5\n\n2 and 9\n\n5 and 6\n\nSelect the factor pairs of 24.\n\n1 and 24\n\n2 and 12\n\n3 and 21\n\n4 and 6\n\n8 and 3\n\n12 and 12\n\nSelect the factor pairs of 28.\n\n1 and 28\n\n2 and 14\n\n3 and 9\n\n4 and 7\n\n8 and 4\n\nSelect the factor pairs of 42.\n\n1 and 42\n\n2 and 21\n\n3 and 15\n\n6 and 7\n\n14 and 3\n\nSelect the factor pairs of 16.\n\n1 and 16\n\n2 and 8\n\n4 and 4\n\n6 and 10\n\n15 and 1\n\nSelect the factor pairs of 60.\n\n1 and 60\n\n2 and 30\n\n3 and 20\n\n4 and 25\n\n4 and 15\n\n5 and 12\n\n6 and 10\n\nSelect the factor pairs of 56.\n\n1 and 56\n\n2 and 28\n\n3 and 18\n\n4 and 14\n\n7 and 8\n\n• Question 1\n\nSelect the factor pairs of 20.\n\n1 and 20\n2 and 10\n4 and 5\nEDDIE SAYS\nThese are the whole numbers which multiply together to give 20.\n• Question 2\n\nSelect the factor pairs of 70.\n\n1 and 70\n2 and 35\n5 and 14\n7 and 10\nEDDIE SAYS\nThese are the whole numbers which multiply together to give 70.\n• Question 3\n\nSelect the factor pairs of 21.\n\n1 and 21\n3 and 7\nEDDIE SAYS\nThese are the whole numbers which multiply together to give 21.\n• Question 4\n\nSelect the factor pairs of 11.\n\n1 and 11\nEDDIE SAYS\n11 is a prime number as it only has two factors.\n• Question 5\n\nSelect the factor pairs of 24.\n\n1 and 24\n2 and 12\n4 and 6\n8 and 3\nEDDIE SAYS\nThese are the whole numbers which multiply together to give 24.\n• Question 6\n\nSelect the factor pairs of 28.\n\n1 and 28\n2 and 14\n4 and 7\nEDDIE SAYS\nThese are the whole numbers which multiply together to give 28.\n• Question 7\n\nSelect the factor pairs of 42.\n\n1 and 42\n2 and 21\n6 and 7\n14 and 3\nEDDIE SAYS\nThese are the whole numbers which multiply together to give 42.\n• Question 8\n\nSelect the factor pairs of 16.\n\n1 and 16\n2 and 8\n4 and 4\nEDDIE SAYS\nThese are the whole numbers which multiply together to give 16.\n• Question 9\n\nSelect the factor pairs of 60.\n\n1 and 60\n2 and 30\n3 and 20\n4 and 15\n5 and 12\n6 and 10\nEDDIE SAYS\nThese are the whole numbers which multiply together to give 60.\n• Question 10\n\nSelect the factor pairs of 56.\n\n1 and 56\n2 and 28\n4 and 14\n7 and 8\nEDDIE SAYS\nThese are the whole numbers which multiply together to give 56.\n---- OR ----\n\nSign up for a £1 trial so you can track and measure your child's progress on this activity.\n\n### What is EdPlace?\n\nWe're your National Curriculum aligned online education content provider helping each child succeed in English, maths and science from 
year 1 to GCSE. With an EdPlace account you’ll be able to track and measure progress, helping each child achieve their best. We build confidence and attainment by personalising each child’s learning at a level that suits them.\n\nGet started", null, "" ]
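For anyone who wants to generate such factor-pair lists programmatically, here is a small illustrative C sketch (not part of the worksheet). Trial division up to the square root of n finds every pair exactly once, because each pair (a, b) with a * b = n has a <= sqrt(n) <= b:

#include <stdio.h>

/* Print every factor pair (a, b) with a * b == n and a <= b. */
static void factor_pairs(int n) {
    printf("Factor pairs of %d:\n", n);
    for (int a = 1; (long)a * a <= n; a++) {
        if (n % a == 0) {
            printf("  %d and %d\n", a, n / a);
        }
    }
}

int main(void) {
    factor_pairs(36); /* 1 and 36, 2 and 18, 3 and 12, 4 and 9, 6 and 6 */
    factor_pairs(11); /* 1 and 11 only: 11 is prime */
    return 0;
}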
[ null, "https://www.facebook.com/tr", null, "https://edplaceimages.s3.amazonaws.com/worksheetPreviews/worksheet_1539031009.jpg", null, "https://www.edplace.com/assets/images/dark_star.png", null, "https://www.edplace.com/assets/images/wht_star.png", null, "https://www.edplace.com/assets/images/wht_star.png", null, "https://www.edplace.com/assets/images/img_laptop.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8959589,"math_prob":0.7429303,"size":368,"snap":"2019-51-2020-05","text_gpt3_token_len":93,"char_repetition_ratio":0.12912089,"word_repetition_ratio":0.05882353,"special_character_ratio":0.26358697,"punctuation_ratio":0.1097561,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996662,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,3,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-23T23:54:12Z\",\"WARC-Record-ID\":\"<urn:uuid:056b4a6d-238f-479d-86b7-6e1a59e632df>\",\"Content-Length\":\"117597\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1bcb4edd-0e97-45c9-ae2d-2465c6355434>\",\"WARC-Concurrent-To\":\"<urn:uuid:4c3a3a5e-47bf-405a-8cfc-6f84816bb2eb>\",\"WARC-IP-Address\":\"104.20.213.7\",\"WARC-Target-URI\":\"https://www.edplace.com/worksheet_info/maths/keystage2/year5/topic/611/944/identify-factors-and-multiples\",\"WARC-Payload-Digest\":\"sha1:X25RGIL56SIEAGQOWNIUASNPOJAVEQE3\",\"WARC-Block-Digest\":\"sha1:P3YEA7LURNQWWYENX4BETAJB5SG7ZEMI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250614086.44_warc_CC-MAIN-20200123221108-20200124010108-00078.warc.gz\"}"}
https://community.geodynamics.org/t/free-surface-problem-cell-giving-non-positive-volume-fraction/617
[ "# Free-surface problem: cell giving non-positive volume fraction\n\nDear All,\n\nI am working on a 2D subduction setup with a continental block in the lower plate, using a free surface, adaptive mesh refinement, and visco-plastic rheology. I have encountered a problem where one of the cells at the top of the model returns a non-positive volume fraction and crashes the simulation:\n\n“An error occurred in line <2727> of file </trinity/opt/apps/software/aspect/aspect-2.0.1/candi/DEAL2/tmp/unpack/deal.II-v9.0.1/source/fe/mapping_q_generic.cc> in function\ndealii::CellSimilarity::Similarity dealii::MappingQGeneric<dim, spacedim>::fill_fe_values(const dealii::Triangulation<dim, spacedim>::cell_iterator &, dealii::CellSimilarity::Similarity, const dealii::Quadrature &, const dealii::Mapping<dim, spacedim>::InternalDataBase &, dealii::internal::FEValuesImplementation::MappingRelatedData<dim, spacedim> &) const [with int dim = 2, int spacedim = 2]\nThe violated condition was:\ndet > 1e-12*Utilities::fixed_power(cell->diameter()/ std::sqrt(double(dim)))\nThe image of the mapping applied to cell with center [1.16145e+06 658711] is distorted. The cell geometry or the mapping are invalid, giving a non-positive volume fraction of -42785.9 in quadrature point 3.”\n\nThe problematic cell is at or very close to the deforming surface. The attached density plot shows the 1.16e6 x coordinate tick where it occurred.\n\nBelow I paste a part of the input file describing global, mesh refinment, and free surface parameters:\nset Dimension = 2\nset Nonlinear solver scheme = iterated Advection and Newton Stokes\nset Nonlinear solver tolerance = 1e-2\nset Max nonlinear iterations = 20\nset CFL number = 0.05\nset Timing output frequency = 20\nset Pressure normalization = no\n\nsubsection Solver parameters\nsubsection Stokes solver parameters\nset Linear solver tolerance = 1e-3\nset Number of cheap Stokes solver steps = 100\nend\nsubsection Newton solver parameters\nset Max pre-Newton nonlinear iterations = 1000\nend\nend\n\n# Model geometry (660 km deep aspect ratio 4:1)\n\nsubsection Geometry model\nset Model name = box\nsubsection Box\nset X repetitions = 4\nset Y repetitions = 1\nset X extent = 2640e3\nset Y extent = 660e3\nset X periodic = true\nend\nend\n\n# Mesh refinement specifications:\n\nsubsection Mesh refinement\nset Initial adaptive refinement = 4\nset Initial global refinement = 4\nset Time steps between mesh refinement = 5\nset Strategy = viscosity, composition, strain rate, temperature\nset Refinement fraction = 0.95\n\nsubsection Free surface\nset Free surface boundary indicators = top\nset Free surface stabilization theta = 0.9\nset Surface velocity projection = normal\nend\n\nThe problem occurs where the free surface is rapidly deforming, so perhaps it is related to one of the elements getting “bow-tied” between two re-meshing. Could anyone advise me on how to tackle this issue?\n\nThank you and best,\nKristof Porkolab\n\nHello Kristof,\n\nI also used to have similar problems. If I remember correctly, the solution for me was to switch the `Surface velocity projection` from `normal` to `vertical`. This means that vertices can only more up and down, which should prevent the problem of cells being able to invert.\n\nBest,\n\nMenno\n\nHi Menno,\n\nthanks for the reply! I have learnt from the manual that the vertical projection is more stable, therefore that was the first projection I tried. 
However, my continental block is moving long distances laterally, and the vertical projection makes the top of the continent scrape off as it moves (because the free surface is projected vertically). So in order to maintain the proper lateral transport of the free surface together with the continent, I had to change it to normal projection. I hoped that the problem could still be solved even with the normal projection.\n\nCheers, Kristof\n\nHello Kristof,\n\nhmm, that is weird. I have never noticed that kind of behavior with the free surface. Material should be able to move along the free surface. Could you show some figures of that so that I can see if I understand correctly what you mean? Could you also post the boundary conditions section of the input file which you are using?\n\nCheers,\n\nMenno\n\nHi Menno,\n\nhere is the bc section:\n\n# Temperature boundary conditions\n\nsubsection Boundary temperature model\nset Fixed temperature boundary indicators = bottom, top\nset List of model names = box\nsubsection Box\nset Bottom temperature = 1573\nset Top temperature = 273\nend\nend\n\n# Boundary classifications\n\nsubsection Boundary velocity model\nset Tangential velocity boundary indicators = bottom\nend\n\nsubsection Nullspace removal\nset Remove nullspace = net x translation\nend\n\nThe uploaded picture shows that in case of vertical projection the free surface stays at the original position of the continent, but since the continent is moving to the right, the upper crust is scraped off. Switching to normal projection solved this problem, but then I got this other bug my original post was about.", null, "Cheers,\nKristof\n\nWhat you fundamentally are trying to do is large deformation together with a normal projection where you move the grid (at least partially) with that deformation. This means that after some time the grid will become very deformed. The only way to deal with this is to re-mesh, which is not something ASPECT is made to do. So I think you should investigate a solution with the vertical projection.\n\nIt seems that the material at the surface is somehow not free to move laterally. My suspicion is that the problem is due to the composition boundary conditions which do not move with the continent. It could be something different, but I can’t exclude (or confirm) it from the current information and figure (I would need to see the composition boundary and initial conditions and a bigger figure with grid). But maybe there are some other ideas about what the problem could be.\n\nCheers,\n\nMenno\n\nThe standard ‘trick’ is to add a little bit of diffusion to the surface, but that is currently not supported by ASPECT. In light thereof, I would for now give up on using a free surface (and resort to free slip), and/or make sure that the subduction channel is well lubricated (with a low enough viscosity), as it looks like the buckling may be caused by a ‘stuck’ subduction.\nSticky air still remains an option.\n\nSorry for jumping into the conversation a bit late here. I agree with @cedrict’s suggestions to make sure the subduction channel is sufficiently lubricated, which may reduce the amount of buckling.\n\nI’m working with another student on a similar problem and we ran into other issues when the initial subduction geometry was not aligned sufficiently to promote subduction.
So, another option is to revisit the subduction geometry.\n\nA third option is to use the material averaging (e.g., averaging over the entire element) option in the material model, which smooths out the viscosity field and reduces some small scale deformation. No idea if this will fix the free surface issue, but it will help improve solver behavior.\n\nThe last (and perhaps best) option is to use sticky air, which I am also trying out for subduction problems.\n\nFYI, surface diffusion may be implemented at some point in the next few months, but best to try the other options in the meantime.\n\nThank you for the suggestions, I am currently trying to implement them. I can already say that material averaging did result in a slightly smoother geometry, but the error still occurred. I will try the others as well and report back. Thanks again!\n\n@kristofporkolab - FYI, a group will be working on a solution for smoothing out the free surface this week. We will keep you posted as progress is made, and time permitting perhaps you can try it out on your model.\n\n@MFraters - I have been trying to play a bit with the composition boundary conditions, but no success so far. Here is what I have now:\n\nsubsection Compositional fields\nset Number of fields = 3\nset Names of fields = oc, uc, lc\nend\n\n# Spatial domain of different compositional fields\n\nsubsection Initial composition model\nset Model name = function\nsubsection Function\nset Variable names = x,y\nset Function constants = h=660e3, ar=4, dc=4e3, xt=1200e3, wocrhs=300e3,\nxcr=800e3, xcl=400e3,\nr=6e5, zs=2e5, kappa=1e-6\nset Function expression =\nif( (h-y)<dc && x>xcr,\n1,\nif( sqrt((x-xt)*(x-xt)+(h-r-y)*(h-r-y))<r &&\nsqrt((x-xt)*(x-xt)+(h-r-y)*(h-r-y))>(r-dc) &&\nx>xt &&\ny>h-zs, 1,\nif( x<xcl && (h-y)<dc, 1, 0\n)\n)\n);\nif( x>=xcl && x<=xcr && (h-y)<=15e3,\n1, 0\n);\nif( x>=xcl && x<=650e3 && (h-y)<=30e3 && (h-y)>15e3,\n1,\nif( x>650e3 && x<=xcr && (h-y)>15e3 && y>(h-30e3)+(x-650e3)/10, 1, 0\n)\n)\nend\nend\n\nsubsection Boundary composition model\nset Fixed composition boundary indicators = bottom\nset List of model names = initial composition\nend\n\n# Temperature boundary conditions\n\nsubsection Boundary temperature model\nset Fixed temperature boundary indicators = bottom, top\nset List of model names = box\nsubsection Box\nset Bottom temperature = 1573\nset Top temperature = 273\nend\nend\n\nAttached is a bigger picture with the grid. The continental block initially extends from x=400e3 to 800e3; as it is transported to the right, it becomes more and more tilted/scraped off by the free surface if I have a vertical projection. This does not happen with normal projection.\n\nCheers, Kristof" ]
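A side note on the error message itself: deal.II aborts because the determinant of the cell-mapping Jacobian becomes non-positive, which for a bilinear quad in 2D comes down to an orientation test at the cell corners. The standalone C sketch below (purely illustrative; it is not ASPECT or deal.II code) flags a "bow-tied" or inverted quad in this spirit:

#include <stdio.h>

/* z-component of the 2D cross product (b - a) x (c - a). */
static double cross(const double a[2], const double b[2], const double c[2]) {
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
}

/* Returns 1 if the quad v0 -> v1 -> v2 -> v3 turns consistently
   counter-clockwise at every vertex triple, 0 otherwise. A "bow-tied"
   or inverted cell fails this test. */
static int quad_is_valid(const double v[4][2]) {
    for (int i = 0; i < 4; i++) {
        if (cross(v[i], v[(i + 1) % 4], v[(i + 2) % 4]) <= 0.0)
            return 0; /* this corner turns the wrong way */
    }
    return 1;
}

int main(void) {
    double good[4][2]   = {{0, 0}, {1, 0}, {1, 1}, {0, 1}};
    double bowtie[4][2] = {{0, 0}, {1, 0}, {0, 1}, {1, 1}}; /* crossed edges */
    printf("unit square valid: %d\n", quad_is_valid(good));   /* prints 1 */
    printf("bow-tie valid:     %d\n", quad_is_valid(bowtie)); /* prints 0 */
    return 0;
}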
[ null, "https://aws1.discourse-cdn.com/standard10/uploads/geodynamics/original/1X/eae2f5c3e1298468c2d89c79d9f880f10e4c3a4b.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69473034,"math_prob":0.9309139,"size":2841,"snap":"2022-05-2022-21","text_gpt3_token_len":697,"char_repetition_ratio":0.11667254,"word_repetition_ratio":0.0,"special_character_ratio":0.2340725,"punctuation_ratio":0.15175097,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9708775,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-24T02:19:55Z\",\"WARC-Record-ID\":\"<urn:uuid:d2a2cee1-a62a-413f-b808-e4a832c9f4cc>\",\"Content-Length\":\"50274\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ae7f5ac-8d27-443d-bd9c-e83f1029e8ac>\",\"WARC-Concurrent-To\":\"<urn:uuid:e64099a4-e9ba-40b6-b3ce-ef4eb18031ae>\",\"WARC-IP-Address\":\"64.62.250.111\",\"WARC-Target-URI\":\"https://community.geodynamics.org/t/free-surface-problem-cell-giving-non-positive-volume-fraction/617\",\"WARC-Payload-Digest\":\"sha1:F5EH354T7G6B3JJG2I7RK4LEYCDYYVL4\",\"WARC-Block-Digest\":\"sha1:WWZSCUXLZUSAXHMM4UNLGY46CJPM36FB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662562410.53_warc_CC-MAIN-20220524014636-20220524044636-00653.warc.gz\"}"}
http://www.homepages.ucl.ac.uk/~ucgtrbd/uai2015_causal/abstracts.html
[ "# Abstracts\n\n### Invited Talks\n\n#### Causal and Statistical Inference with Social Network Data: Massive Challenges and Meager Progress\n\nElizabeth Ogburn\n\nInterest in and availability of social network data has led to increasing attempts to make causal and statistical inferences using data collected from subjects linked by social network ties. But inference about all kinds of estimands, from simple sample means to complicated causal peer effects, is challenging when only a single network of non-independent observations is available. There is a dearth of principled methods for dealing with the dependence that such observations can manifest. We demonstrate the dangerously anticonservative inference that can result from a failure to account for network dependence, explain why results on spatial-temporal dependence are not immediately applicable to this new setting, and describe a few different avenues towards valid statistical and causal inference using social network data.\n\n#### Causal Reasoning for Events in Continuous Time: A Decision-Theoretic Approach\n\nVanessa Didelez\n\nThis talk will be concerned with causal reasoning in the context of continuous time (point) processes. It will be shown that various notions that we are familiar with from e.g. (causal) DAGs can be generalised to this case, such as interventions and graphical criteria for identifiability, as well as inverse probability weighting. The relevant graphs, however, are not DAGs, they are local independence graphs which are directed graphs allowing cycles. Using these with survival outcomes and thinking \"causally\" also allows interesting insights into the notion of independent censoring. The theory will be illustrated with original data from a Norwegian screening project for cervical cancer; the aim is to compare two types of HPV-tests which can be used in the screeing. Local independence graphs and appropriate weighting procedures turn out to be useful for the analysis of these data.\n\n### Contributed Talks and Posters\n\n#### Learning the Structure of Causal Models with Relational and Temporal Dependence\n\nKaterina Marazopoulou, Marc Meier and David Jensen\n\nMany real-world domains are inherently relational and temporal - they consist of heterogeneous entities that interact with each other over time. Effective reasoning about causality in such domains requires representations that explicitly model relational and temporal dependence. In this work, we provide a formalization of temporal relational models. We define temporal extensions to abstract ground graphs - a lifted representation that abstracts paths of dependence over all possible ground graphs. Temporal abstract ground graphs enable a sound and complete method for answering d-separation queries on temporal relational models. These methods provide the foundation for a constraint-based algorithm, TRCD, that learns causal models from temporal relational data. We provide experimental evidence that demonstrates the need to explicitly represent time when inferring causal dependence. We also demonstrate the expressive gain of TRCD compared to earlier algorithms that do not explicitly represent time.\n\n#### Query-Answer Causality in Databases: Abductive Diagnosis and View-Updates\n\nBabak Salimi and Leopoldo Bertossi\n\nCausality has been recently introduced in databases, to model, characterize and possibly compute causes for query results (answers). Connections between query causality and consistency-based diagnosis and database repairs (wrt. 
integrity constraint violations) have been established in the literature. In this work we establish connections between query causality and abductive diagnosis and the view-update problem. The unveiled relationships allow us to obtain new complexity results for query causality (the main focus of our work) and also for the two other areas.\n\n#### Causal Interpretation Rules for Encoding and Decoding Models in Neuroimaging\n\nSebastian Weichwald, Timm Meyer, Ozan Özdenizci, Bernhard Schölkopf, Tonio Ball and Moritz Grosse-Wentrup\n\nHow neural activity gives rise to cognition is arguably one of the most interesting questions in neuroimaging. While causal terminology is often introduced in the interpretation of neuroimaging data, causal inference frameworks are rarely explicitly employed.\n\nIn our recent work we cast widely used analysis methods in a causal framework in order to foster its acceptance in the neuroimaging community. In particular we focus on typical analyses in which variables' relevance in encoding and decoding models (also known as generative or discriminative models) with a dependent stimulus/response variable is interpreted. By linking the concept of relevant variables to marginal/conditional independence properties we demonstrate that (a) identifying relevant variables is indeed a first step towards causal inference; (b) combining encoding and decoding models can yield further insights into the causal structure, which cannot be gleaned from either model alone. We demonstrate the empirical relevance of our findings on EEG data recorded during a visuomotor learning task.\n\nThe rigorous theoretical framework of causal inference allows us to expound the assumptional underpinnings and limitations of common (intuitive) analyses in this field. Furthermore, it sheds light on problems covered in recent neuroimaging literature such as confounds in multivariate pattern analysis or interpretation of linear encoding and decoding models.\n\n#### Inference of Cause and Effect with Unsupervised Inverse Regression\n\nEleni Sgouritsa, Dominik Janzing, Philipp Hennig and Bernhard Schölkopf\n\nWe address the problem of causal discovery in the two-variable case, given a sample from their joint distribution. Since X -> Y and Y -> X are Markov equivalent, conditional-independence-based methods [Spirtes et al., 2000, Pearl, 2009] cannot recover the causal graph. Alternative methods introduce asymmetries between cause and effect by restricting the function class (e.g., [Hoyer et al., 2009]).\n\nThe proposed causal discovery method, CURE, is based on the principle of independence of causal mechanisms [Janzing and Schölkopf, 2010]. For the case of only two variables, it states that the marginal distribution of the cause, say P(X), and the conditional of the effect given the cause, P(Y | X), are \"independent\", in the sense that they do not contain information about each other (informally, P(X) \"independent of\" P(Y | X)). This independence can be violated in the backward direction: the distribution of the effect P(Y) and the conditional P(X | Y) may contain information about each other because each of them inherits properties from both P(X) and P(Y | X), hence introducing an asymmetry between cause and effect. For deterministic causal relations (Y = f(X)), all the information about the conditional P(Y | X) is contained in the function f, so independence boils down to P(X) \"independent of\" f. Previous work formalizes the independence principle by specifying what is meant by independence.
For deterministic non-linear relations, Janzing et al. and Daniusis et al. define independence as uncorrelatedness between log f' and the density of P(X), both viewed as random variables. For non-deterministic relations, it is not obvious how to explicitly formalize independence between P(X) and P(Y | X). Instead, we propose an implicit notion of independence, namely that p_Y|X cannot be estimated based on p_X (lower case denotes density). However, it may be possible to estimate p_X|Y based on the density of the effect, p_Y.\n\nIn practice, we are given empirical data x in R^N, y in R^N from P(X, Y) and estimate p_X|Y based on y (intentionally hiding x). The relationship between the observed y and the latent x_u in R^N is modeled by a Gaussian process (GP): p(y | x_u, theta) = N(y; 0, K_{x_u,x_u} + sigma_n^2 I_N) (this can alternatively be seen as a single-output GP-LVM). Then, the required conditional p_X|Y is estimated as p_hat^y_{X_u|Y} : (x_u, y*) --> p(x_u | y*, y), with p(x_u | y*, y) estimated by marginalizing out the latent x_u and theta (the GP hyperparameters).\n\nCURE infers the causal direction by using the procedure above twice: once to estimate p_X|Y based only on y and once to estimate p_Y|X based only on x. If the first estimate is better, X -> Y is inferred. Otherwise, Y -> X. CURE was evaluated on synthetic and real data and often outperformed existing methods. On the downside, its computational cost is comparably high. This work was recently published at AISTATS 2015 [Sgouritsa et al., 2015].\n\n#### Exploiting Causality for Efficient Monitoring in POMDPs\n\nStefano V. Albrecht and Subramanian Ramamoorthy\n\nPOMDPs are a useful model for decision making in systems with uncertain states. One of the core tasks in a POMDP is the monitoring task, in which the belief state (i.e. the probability distribution over system states) is updated based on incomplete and noisy observations. This can be a hard problem in complex real-world systems due to the often very large state space. In this article, we explore the idea of accelerating the monitoring task by automatically exploiting causality in the system. We consider a specific type of causal relation, called passivity, which pertains to how system variables cause changes in other variables. Specifically, a system variable is called passive if it changes its value only if it is directly acted upon, or if at least one of the variables that directly affect it (i.e. parent variables) change their values. This property can be readily determined from the conditional probability table of the system variable. We present a novel monitoring method, called Passivity-based Monitoring (PM), which maintains a factored belief state representation and exploits passivity to perform selective updates over the factored beliefs. PM produces exact belief states under certain assumptions and approximate belief states otherwise, where the approximation error is bounded by the degree of uncertainty in the process. We show empirically, in synthetic processes with varying sizes and degrees of passivity, that PM is faster than two standard monitoring methods while achieving competitive accuracy.
Furthermore, we demonstrate how passivity occurs naturally in a real-world system such as a multi-robot warehouse, and how PM can exploit this to accelerate the monitoring task.\n\n#### An Empirical Study of the Simplest Causal Prediction Algorithm\n\nJerome Cremers and Joris Mooij\n\nWe study the simplest causal prediction algorithm that uses only conditional independences in purely observational data. A specific pattern of only four conditional independence relations amongst a quadruple of random variables already implies that one of these variables causes another without any confounding. As a consequence, it is possible to predict what would happen under an intervention on that variable without actually performing the intervention. Although the method is asymptotically consistent and works well in settings with only few (latent) variables, we find that its prediction accuracy can be worse than simple noncausal baselines when many (latent) variables are present. We also find that the accuracy can sometimes be improved by adding more conditional independence tests, but even then the performance need not outperform the baselines. More generally, our findings illustrate that high accuracy of individual conditional independence tests is no guarantee for high accuracy of a combination of such tests. Also, they illustrate the severity of the faithfulness assumption in practice.\n\n#### Visual Causal Feature Learning\n\nKrzysztof Chalupka, Pietro Perona and Frederick Eberhardt\n\nWe provide a rigorous definition of the visual cause of a behavior that is broadly applicable to the visually driven behavior in humans, animals, neurons, robots and other perceiving systems. Our framework generalizes standard accounts of causal learning to settings in which the causal variables need to be constructed from micro-variables. We prove the Causal Coarsening Theorem, which allows us to gain causal knowledge from observational data with minimal experimental effort. The theorem provides a connection to standard inference techniques in machine learning that identify features of an image that correlate with, but may not cause, the target behavior. Finally, we propose an active learning scheme to learn a manipulator function that performs optimal manipulations on the image to automatically identify the visual cause of a target behavior. We illustrate our inference and learning algorithms in experiments based on both synthetic and real data.\n\n#### Lifted Representation of Relational Causal Models Revisited: Implications for Reasoning and Structure Learning\n\nSanghack Lee and Vasant Honavar\n\nMaier et al. (2010) introduced the relational causal model (RCM) for representing and inferring causal relationships in relational data. A lifted representation, called abstract ground graph (AGG), plays a central role in reasoning with and learning of RCM. The correctness of the algorithm proposed by Maier et al. (2013a) for learning RCM from data relies on the soundness and completeness of AGG for relational d-separation to reduce the learning of an RCM to learning of an AGG. We revisit the definition of AGG and show that AGG, as defined in Maier et al. (2013b), does not correctly abstract all ground graphs. We revise the definition of AGG to ensure that it correctly abstracts all ground graphs. We further show that AGG representation is not complete for relational d-separation, that is, there can exist conditional independence relations in an RCM that are not entailed by AGG. 
A careful examination of the relationship between the lack of completeness of AGG for relational d-separation and faithfulness conditions suggests that weaker notions of completeness, namely adjacency faithfulness and orientation faithfulness between an RCM and its AGG, can be used to learn an RCM from data.\n\n#### Robust reconstruction of causal graphical models based on conditional 2-point and 3-point information\n\nSéverine Affeldt and Hervé Isambert\n\nWe report a novel network reconstruction method, which combines constraint-based and Bayesian frameworks to reliably reconstruct graphical models despite inherent sampling noise in finite observational datasets. The approach is based on an information theory result tracing back the existence of colliders in graphical models to negative conditional 3-point information between observed variables. In turn, this provides a confident assessment of structural independencies in causal graphs, based on the ranking of their most likely contributing nodes with (significantly) positive conditional 3-point information. Starting from a complete undirected graph, dispensable edges are progressively pruned by iteratively \"taking off\" the most likely positive conditional 3-point information from the 2-point (mutual) information between each pair of nodes. The resulting network skeleton is then partially directed by orienting and propagating edge directions, based on the sign and magnitude of the conditional 3-point information of unshielded triples. This \"3off2\" network reconstruction approach is shown to outperform both constraint-based and Bayesian inference methods on a range of benchmark networks.\n\n#### An Algorithm to Compute the Likelihood Ratio Test Statistic of the Sharp Null Hypothesis for Compliers\n\nWen Wei Loh and Thomas S. Richardson\n\nIn a randomized experiment with noncompliance, scientific interest is often in testing whether the treatment exposure X has an effect on the final outcome Y. We have proposed a finite-population significance test of the sharp null hypothesis that X has no effect on Y, within the principal stratum of compliers, using a generalized likelihood ratio test.\n\nAs both the null and alternative hypotheses are composite hypotheses (each comprising a different set of distributions), computing the value of the generalized likelihood ratio test statistic requires two maximizations: one where we assume that the sharp null hypothesis holds, and another without making such an assumption.\n\nIn our work, we have assumed that there are no Always Takers, such that the nuisance parameter is a bivariate parameter describing the total number of Never Takers with observed outcomes y = 0 and y = 1. Extending the approach to the more general case in which there are also Always Takers would require a nuisance parameter of higher dimension that describes the total number of Always Takers with observed outcomes y = 0 and y = 1 as well. This increases the size of the nuisance parameter space and the computational effort needed to find the likelihood ratio test statistic. We present a new algorithm that extends the approach to solve the corresponding integer programs in the general case where there are Always Takers.
The procedure for the infinite-population significance test may be illustrated using a toy example from our UAI Causal Inference Workshop 2013.\n\n#### Segregated Graphs and Marginals of Chain Graph Models\n\nIlya Shpitser\n\nBayesian networks are a popular representation of asymmetric (for example causal) relationships between random variables. Markov random fields (MRFs) are a complementary model of symmetric relationships used in computer vision, spatial modeling, and social and gene expression networks. A chain graph model under the Lauritzen-Wermuth-Frydenberg interpretation (hereafter a chain graph model) generalizes both Bayesian networks and MRFs, and can represent asymmetric and symmetric relationships together.\n\nAs in other graphical models, the set of marginals from distributions in a chain graph model induced by the presence of hidden variables forms a complex model. One recent approach to the study of marginal graphical models is to consider a well-behaved supermodel. Such a supermodel of marginals of Bayesian networks, defined only by conditional independences, and termed the ordinary Markov model, was studied at length in (Evans and Richardson, 2014).\n\nIn this paper, we show that special mixed graphs which we call segregated graphs can be associated, via a Markov property, with supermodels of a marginal of chain graphs defined only by conditional independences. Special features of segregated graphs imply the existence of a very natural factorization for these supermodels, and imply that many existing results on the chain graph model and ordinary Markov model carry over. Our results suggest that segregated graphs define an analogue of the ordinary Markov model for marginals of chain graph models.\n\n#### Recovering from Selection Bias using Marginal Structure in Discrete Models\n\nRobin J. Evans and Vanessa Didelez\n\nThis paper considers the problem of inferring a discrete joint distribution from a sample subject to selection. Abstractly, we want to identify a distribution p(x, w) from its conditional p(x | w). We introduce new assumptions on the marginal model for p(x), under which generic identification is possible. These assumptions are quite general and can easily be tested; they do not require precise background knowledge of p(x) or p(w), such as proportions estimated from previous studies. We particularly consider conditional independence constraints, which often arise from graphical and causal models, although other constraints can also be used. We show that generic identifiability of causal effects is possible in a much wider class of causal models than had previously been known.\n\n#### Advances in Integrative Causal Analysis\n\nIoannis Tsamardinos\n\nScientific practice typically involves studying a system over a series of studies and data collection, each time trying to unravel a different aspect. In each study, the scientist may take measurements under different experimental conditions and measure different sets of quantities (variables). The result is a collection of heterogeneous data sets coming from different distributions. Even so, these are generated by the same causal mechanism. The general idea in Integrative Causal Analysis (INCA) is to identify the set of causal models that simultaneously fit (i.e., are consistent with) all sources of data and prior knowledge, and to reason with this set of models. Integrative Causal Analysis allows more discoveries than what is possible by independent analysis of datasets.
In this talk, we'll present advances in this direction that lead to algorithms that can handle more types of heterogeneity, and that aim at increasing the efficiency or robustness of discoveries. Specifically, we'll present (a) general INCA algorithms for causal discovery from heterogeneous data, (b) algorithms for converting the results of tests to posterior probabilities, allowing conflict resolution and identification of the confidence network regions, (c) proof-of-concept applications and massive evaluation of the main concepts on real data, (d) extensions that can deal with prior causal knowledge, and (e) extensions that handle case-control data." ]
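To make the "2-point" and "3-point" information terminology in the Affeldt and Isambert abstract concrete: for three variables, the 3-point information can be written I(X;Y;Z) = I(X;Y) - I(X;Y|Z), and a significantly negative value is the collider signature that the 3off2 approach traces. The self-contained C sketch below is my own illustration (it is not code from the workshop); it evaluates both quantities for binary variables from a hard-coded joint distribution in which Z is a noisy XOR of two independent fair coins X and Y:

#include <stdio.h>
#include <math.h>

static double p[2][2][2]; /* joint distribution p(x,y,z) of three binary variables */

/* 2-point (mutual) information I(X;Y), in nats. */
static double mi_xy(void) {
    double px[2] = {0}, py[2] = {0}, pxy[2][2] = {{0}};
    for (int x = 0; x < 2; x++)
        for (int y = 0; y < 2; y++)
            for (int z = 0; z < 2; z++) {
                pxy[x][y] += p[x][y][z];
                px[x] += p[x][y][z];
                py[y] += p[x][y][z];
            }
    double I = 0;
    for (int x = 0; x < 2; x++)
        for (int y = 0; y < 2; y++)
            if (pxy[x][y] > 0)
                I += pxy[x][y] * log(pxy[x][y] / (px[x] * py[y]));
    return I;
}

/* Conditional mutual information I(X;Y|Z), in nats. */
static double cmi_xy_given_z(void) {
    double pz[2] = {0}, pxz[2][2] = {{0}}, pyz[2][2] = {{0}};
    for (int x = 0; x < 2; x++)
        for (int y = 0; y < 2; y++)
            for (int z = 0; z < 2; z++) {
                pz[z] += p[x][y][z];
                pxz[x][z] += p[x][y][z];
                pyz[y][z] += p[x][y][z];
            }
    double I = 0;
    for (int x = 0; x < 2; x++)
        for (int y = 0; y < 2; y++)
            for (int z = 0; z < 2; z++)
                if (p[x][y][z] > 0)
                    I += p[x][y][z] * log(p[x][y][z] * pz[z] / (pxz[x][z] * pyz[y][z]));
    return I;
}

int main(void) {
    /* Collider X -> Z <- Y: X, Y independent fair coins, Z = XOR(X,Y)
       flipped with probability 0.1. */
    for (int x = 0; x < 2; x++)
        for (int y = 0; y < 2; y++)
            for (int z = 0; z < 2; z++)
                p[x][y][z] = 0.25 * ((z == (x ^ y)) ? 0.9 : 0.1);
    double two_pt = mi_xy();
    double three_pt = two_pt - cmi_xy_given_z();
    printf("I(X;Y)   = %.4f nats\n", two_pt);   /* about 0: X and Y are independent */
    printf("I(X;Y;Z) = %.4f nats\n", three_pt); /* negative: collider signature */
    return 0;
}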
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90943754,"math_prob":0.87301636,"size":20913,"snap":"2019-13-2019-22","text_gpt3_token_len":4117,"char_repetition_ratio":0.1309484,"word_repetition_ratio":0.0064082025,"special_character_ratio":0.18012719,"punctuation_ratio":0.08369284,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96114385,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-20T23:53:43Z\",\"WARC-Record-ID\":\"<urn:uuid:7298d909-8e36-473d-8913-362fbb426fac>\",\"Content-Length\":\"25454\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7d94212b-8a97-419c-be15-4d708f397b89>\",\"WARC-Concurrent-To\":\"<urn:uuid:220cd6f9-dc43-4a30-bd34-1d694463bb66>\",\"WARC-IP-Address\":\"144.82.250.219\",\"WARC-Target-URI\":\"http://www.homepages.ucl.ac.uk/~ucgtrbd/uai2015_causal/abstracts.html\",\"WARC-Payload-Digest\":\"sha1:F4UZTIJX5K3MRCBDGAJLLM5LDGNBL7XO\",\"WARC-Block-Digest\":\"sha1:CMY2C7RERDYRODNOIJXOQLMJW62NXXOE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202474.26_warc_CC-MAIN-20190320230554-20190321012554-00298.warc.gz\"}"}
https://chemistry.stackexchange.com/questions/71249/cyclobutadiene-jahn-teller-effect-or-not/71327
[ "# Cyclobutadiene - Jahn–Teller effect or not?\n\nIn transition metal chemistry the Jahn–Teller effect arises when the configuration of the metal ion and d orbital splitting set up a doubly degenerate state, which is less stable than a state without the degeneracy but with lower symmetry.\n\nIn Ian Fleming's Molecular Orbitals it is stated that cyclobutadiene would be a diradical considering Huckel theory:", null, "but it isn't because of Jahn–Teller distortion (it is actually rectangular and ESR shows no spin).\n\nThere isn't a doubly degenerate state in this case, yet Jahn–Teller distortion happens according to Fleming. How is this so?\n\n(In the singlet state it would be doubly degenerate, but how to know it a priori?)\n\nVery interesting question, and it kept me up despite daylight saving time cheating me of one hour of sleep last night... A good reference is Albright, Burdett and Whangbo, Orbital Interactions in Chemistry 2nd ed. pp 282ff. which explains this in much greater detail than I can. (In general, that book is fantastic.) I will try my best to summarise what they have written - please point out any mistakes that I may have made. I think there is also much more information out there in the literature (a search for \"Jahn–Teller cyclobutadiene\" should give lots of good results), although it is of course not always easy to read. $\\require{begingroup}\\begingroup\\newcommand{\\ket}{|#1\\rangle}$\n\n### 1. What is the ground state of undistorted $\\ce{C4H4}$?\n\nBased on simple Huckel theory as you have shown above, one would expect the ground state to be a triplet state with the two highest electrons in different orbitals. This is simply because of exchange energy (Hund's first rule).\n\nHowever, it turns out that the singlet state with the two electrons in different orbitals is more stable. That explains the lack of signal in ESR experiments.", null, "(Note that this orbital picture is a simplification, as I will describe later.)\n\nIn short, this is due to dynamic spin polarisation; the idea is that there are low-lying excited singlet states which mix with the ground state to stabilise it. This is the basic idea underlying so-called configuration interaction, which mixes in excited molecular states into the wavefunction in order to obtain a better estimate of the ground-state wavefunction. If you are interested in the theoretical aspect of this, I think Szabo and Ostlund's Modern Quantum Chemistry should cover it. The topic of dynamic spin polarisation is treated in depth in pp 285ff. of Albright et al.\n\nYour question, \"how to predict the singlet state a priori\"? is a good one. Albright et al. write that (p 289)\n\nIn general, high-level configuration interaction calculations are needed to determine the relative stabilities of the singlet and triplet states of a diradical.\n\nSo, it does not seem to be something that you can figure out a priori, and it is most certainly not something you can figure out by just using Huckel theory (which is an incredibly simplistic treatment).\n\n### 2. What is the degeneracy of the ground state?\n\nYou also wrote that \"the singlet state will be doubly degenerate\". However, in the case where both electrons are in different orbitals, there is only one possible spatial wavefunction, due to the quantum mechanical requirement of indistinguishability. 
(As we will see in the group theoretical treatment, there is actually no doubly degenerate case, not even when the electrons are paired in the same orbital.)\n\nIf I label the two SOMOs above as $\ket{a}$ and $\ket{b}$ then the combination $\ket{ab}$ is not a valid spatial wavefunction. Neither is $\ket{ba}$ a valid wavefunction.", null, "You have to take linear combinations of these two states, such that your spatial wavefunction is symmetric with respect to interchange of the two electrons. In this case the appropriate combination is\n\n$$2^{-1/2}(\ket{ab} + \ket{ba}) \qquad \mathrm{{}^1B_{2g}}$$\n\nThere are two more possible symmetric wavefunctions:\n\n$$2^{-1/2}(\ket{aa} + \ket{bb}) \qquad \mathrm{{}^1A_{1g}}$$ $$2^{-1/2}(\ket{aa} - \ket{bb}) \qquad \mathrm{{}^1B_{1g}}$$\n\n(Simply based on the requirement for indistinguishability, one might expect $\ket{aa}$ and $\ket{bb}$ to be admissible states. However, this is not true. If anybody is interested in knowing more, please feel free to ask a new question.) The orbital picture is actually woefully inadequate at dealing with quantum mechanical descriptions of bonding. However, Voter and Goddard tried to make it work out, by adding and subtracting electron configurations [1]:", null, "Ignore the energy ordering in this diagram. It is based on Hartree–Fock calculations, which do not include the effects of excited state mixing (CI is a \"post-Hartree–Fock method\"), and therefore the triplet state is predicted to be the lowest in energy. Instead, just note the form of the three singlet states; they are the same as what I have written above, excluding the normalisation factor.\n\nThe point of this section is that this singlet state is (spatially) singly degenerate. The orbital diagrams that organic chemists are used to are inaccurate; the actual electronic terms are in fact linear combinations of electronic configurations. So, using orbital diagrams to predict degeneracy can fail!\n\n### 3. Why is there a Jahn–Teller (JT) distortion in a singly degenerate state?\n\nYou are right that for a JT distortion (a first-order JT distortion, to be precise) to occur, the ground state must be degenerate. This criterion is often rephrased in terms of \"asymmetric occupancy of degenerate orbitals\", but this is just a simplification for students who have not studied molecular symmetry and electronic states yet. Jahn and Teller made the meaning of their theorem very clear in their original paper [2]; already in the first paragraph they write that the stability of a polyatomic molecule is not possible when \"its electronic state has orbital degeneracy, i.e. degeneracy not arising from the spin [...] unless the molecule is a linear one\". There is no mention of \"occupancy of orbitals\".\n\nSo, this is not a first-order JT distortion, which requires degeneracy of the ground state. Instead, it is a second-order JT distortion. Therefore, most of the other discussion so far on the topic has unfortunately been a little bit off-track. (Not that I knew any better.)\n\nAlbright et al. describe this as a \"pseudo-JT effect\" (p 136). For a better understanding I would recommend reading the section on JT distortions (pp 134ff.); it is very thorough, but you would need to have some understanding of perturbation theory in quantum mechanics. Please note that there is a typo on p 136, as is pointed out in the comments to this answer.
(Alternatively, there is also a discussion of second-order JT effects in a paper by Pearson [3].) The idea is that the distortion reduces the symmetry of the molecule and therefore allows ground and excited states to mix. This leads to a stabilisation of the ground state and hence an electronic driving force for the distortion.\n\nIn slightly more detail, we can use a group theoretical treatment to make this precise. The second-order correction to the energy is given by\n\n$$\sum_{j\neq i}\frac{|\langle i |(\partial H/\partial q)| j \rangle|^2}{E_i^{(0)} - E_j^{(0)}}$$\n\nwhere $\ket{i}$ is the ground state (which in this case is described by $2^{-1/2}(\ket{aa} - \ket{bb})$) and $\{\ket{j}\}$ are the excited states. $E_i^{(0)}$ is the unperturbed energy of the ground state $\ket{i}$ and likewise for $E_j^{(0)}$. $q$ is a vibrational coordinate of the molecule.\n\nNote that the denominator in this term, $E_i^{(0)} - E_j^{(0)}$, is negative. So, this correction to the energy is always negative, i.e. stabilising. (There is another second-order term which isn't relevant to the discussion here.) The idea is that in order to have a significant stabilisation upon distortion, two criteria must be fulfilled:\n\n1. The denominator must be small, i.e. the excited state must be low-lying.\n2. The numerator must be non-zero, i.e. there must be appropriate symmetry such that $\Gamma_i \otimes \Gamma_j \otimes \Gamma_H \otimes \Gamma_q$ contains the totally symmetric irreducible representation. Since $H$ transforms as the TSIR, this is equivalent to the criterion that $\Gamma_q = \Gamma_i \otimes \Gamma_j$.\n\nIn cyclobutadiene, the two SOMOs in $D_\mathrm{4h}$ symmetry transform as $\mathrm{E_u}$. To find the irreps of the resultant states we take\n\n$$\mathrm{E_u \otimes E_u = A_{1g} + [A_{2g}] + B_{1g} + B_{2g}}$$\n\n(Note that there is no spatially degenerate term in this direct product, so there cannot be a first-order JT distortion!) The $\mathrm{A_{2g}}$ state in square brackets corresponds to the triplet case and we can ignore it. In our case (presented without further proof, because I'm not sure why myself) the ground state is $\mathrm{B_{1g}}$. The two low-lying singlet excited states transform as $\mathrm{A_{1g}} \oplus \mathrm{B_{2g}}$. Therefore if there exists a vibrational mode transforming as either $\mathrm{B_{1g}} \otimes \mathrm{A_{1g}} = \mathrm{B_{1g}}$ or $\mathrm{B_{1g}} \otimes \mathrm{B_{2g}} = \mathrm{A_{2g}}$, then the numerator will be nonzero and we can expect this vibrational mode to lead to a distortion with concomitant stabilisation.\n\nIn our case, it happens that there is a vibrational mode with symmetry $\mathrm{B_{1g}}$, which leads to the distortion from a square to a rectangle. This therefore allows for a mixing of the ground $\mathrm{B_{1g}}$ state with the excited $\mathrm{A_{1g}}$ state upon distortion, leading to electronic stabilisation.\n\nAnother way of looking at it is that upon lowering of the symmetry from square ($D_\mathrm{4h}$) to rectangular ($D_\mathrm{2h}$), both the $\mathrm{B_{1g}}$ ground state and the $\mathrm{A_{1g}}$ excited state adopt the same symmetry $\mathrm{A_g}$. You can prove this by looking at a descent in symmetry table. Therefore, mixing between these two states will be allowed in the new geometry.\n\nBecause we are talking about the mixing of states here, the orbital picture is not quite enough to describe what is going on, sadly.
The best representation we can get is probably this:", null, "However, Nakamura et al. have produced potential energy curves for the distortion of cyclobutadiene from square to rectangular and rhomboidal geometry [4]. Here is the relevant one (rectangular geometry):", null, "Solid lines indicate singlet states, and dashed lines triplet states. Note that this diagram depicts what I have described above: the singlet ground state ($\mathrm{{}^1B_{1g}}$) and the first excited singlet state ($\mathrm{{}^1A_{1g}}$) both have the same irrep upon distortion. The ground state is stabilised upon the lowering of symmetry, precisely due to this mixing.\n\n### References\n\n(1) Voter, A. F.; Goddard, W. A. The generalized resonating valence bond description of cyclobutadiene. J. Am. Chem. Soc. 1986, 108 (11), 2830–2837. DOI: 10.1021/ja00271a008.\n\n(2) Jahn, H. A.; Teller, E. Stability of Polyatomic Molecules in Degenerate Electronic States. I. Orbital Degeneracy. Proc. R. Soc. A 1937, 161 (905), 220–235. DOI: 10.1098/rspa.1937.0142.\n\n(3) Pearson, R. G. The second-order Jahn–Teller effect. J. Mol. Struct.: THEOCHEM 1983, 103, 25–34. DOI: 10.1016/0166-1280(83)85006-4.\n\n(4) Nakamura, K.; Osamura, Y.; Iwata, S. Second-order Jahn–Teller effect of cyclobutadiene in low-lying states. An MCSCF study. Chem. Phys. 1989, 136 (1), 67–77. DOI: 10.1016/0301-0104(89)80129-6. $\endgroup$\n\n• Great explanation! One question: in Albright, 2013, the section about the second order JT: 7.4.2 (page 136) they write \"If the lowest lying one of these is of the correct symmetry for $<\psi_i|d^2H/dq^2|\psi_i>$ to be non zero ($\Gamma_q=\Gamma_i \otimes \Gamma_j$).\" So I guess that must be a typo since they should rather refer to the other second order term which is the sum of the virtual excitations, right? – Rudi_Birnbaum May 20 '18 at 10:33\n• @R_Berger, indeed, when I was reading it I noticed a typo in that line too. I believe you are correct and that $\langle \Psi_i | \partial^2 H/\partial q^2|\Psi_i\rangle$ should be replaced with $\langle \Psi_i | \partial H/\partial q | \Psi_j\rangle$. Of course, this term can only be nonzero precisely if $\Gamma_q = \Gamma_i \otimes \Gamma_j$, which they correctly explain. – orthocresol May 20 '18 at 11:12\n• @orthocresol: How would things look like if we include SOC and go to double groups? Is there a quick answer? – Rudi_Birnbaum May 20 '18 at 11:18\n• @R_Berger Sorry, but the quick answer is I don't know ;) I had to do lots of research just to write this answer. – orthocresol May 20 '18 at 11:21\n• @orthocresol: Would you agree that at the end of the dashed line in the last figure there should be $^3$B$_{1g}$ (since A$_{2g}$ from $D_{4h}$ descends into B$_{1g}$ in $D_{2h}$)? Or is there any reason why it is not displayed in the figure? – Rudi_Birnbaum Sep 19 '18 at 10:56\n\nThe point is that if cyclobutadiene were symmetric the highest occupied orbital would be doubly degenerate with 2 electrons to occupy it, and so by Hund's rules it would be a diradical, just as for molecular oxygen. But the Jahn-Teller theorem says that such a state is unstable with respect to a distortion which splits the degeneracy and lowers the symmetry. This is exactly the same as $\ce{Cu^2+}$. Hence the symmetrical molecule distorts, splitting the degeneracy, and both electrons occupy the lower lying state. Thus not only is the molecule not symmetric, it is also not a radical, and hence there is no ESR signal.\n\n• How would it be doubly degenerate?
For example Eg represents a set of two degenerate orbitals, but that doesn't mean a complex will be doubly degenerate (only d9, low-spin d7 and high-spin d4 are). – RBW Mar 25 '17 at 19:46\n• But $\\ce {Cu^{2+}}$ is $d^9$, no? – Oscar Lanzi Mar 26 '17 at 1:56\n• @Marko I don't really understand what you are saying. The high symmetry state leads to degeneracies, hence the theoretical square cyclobutadiene and the theoretical Cu2+ in an octahedral environment would have doubly degenerate HOMOs. But the Jahn-Teller theorem says that such geometries must distort, and further that that distortion will split the degeneracy, and so the HOMO is no longer degenerate. In the case of cyclobutadiene the splitting is so large that the lower of the states is double occupied, hence no ESR signal. – Ian Bush Mar 26 '17 at 8:17\n• I think that doubly degenerate means that there are two energetically equivalent states, but I see only one for cyclobutadiene and that is the one where both HOMOs are filled with 1e each. – RBW Mar 26 '17 at 8:34\n• Ahhh, maybe it is the word \"state\". Let's avoid that word, let's use orbital instead as that's what we're really talking about here (and I see why state might be confusing). In your diagram I see two horizontal lines at the same vertical level, representing two orbitals which have the same orbital energy. This is the two-fold degeneracy being referred too, and as we have 2 electrons to occupy 2 orbitals by Hund's rule one electron goes in each. And it is this degeneracy that is split by the distortion. – Ian Bush Mar 26 '17 at 8:57" ]
[ null, "https://i.stack.imgur.com/HAQbh.png", null, "https://i.stack.imgur.com/pyhyG.png", null, "https://i.stack.imgur.com/WGwot.png", null, "https://i.stack.imgur.com/VSqGb.png", null, "https://i.stack.imgur.com/dqIny.png", null, "https://i.stack.imgur.com/SWyHI.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9073868,"math_prob":0.9808427,"size":10673,"snap":"2019-43-2019-47","text_gpt3_token_len":2695,"char_repetition_ratio":0.12747212,"word_repetition_ratio":0.001216545,"special_character_ratio":0.25325587,"punctuation_ratio":0.1227295,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9922277,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,10,null,10,null,10,null,10,null,10,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-18T21:52:03Z\",\"WARC-Record-ID\":\"<urn:uuid:36e4043c-dec1-4eaf-86fc-2a9c8858fb52>\",\"Content-Length\":\"169937\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aea2c2fe-e4fc-4c06-9e62-c0607d5aecc5>\",\"WARC-Concurrent-To\":\"<urn:uuid:a83db2bc-1b1d-44a2-97dd-55418a94a8ec>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://chemistry.stackexchange.com/questions/71249/cyclobutadiene-jahn-teller-effect-or-not/71327\",\"WARC-Payload-Digest\":\"sha1:5S2IFCR3RNG6VKEMYWRCSLC5LT44VX3E\",\"WARC-Block-Digest\":\"sha1:NYBDINDTQR52LRPYBRXJX4HC2336FLDT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669847.1_warc_CC-MAIN-20191118205402-20191118233402-00062.warc.gz\"}"}
https://leiblog.wang/%E5%9C%A8MacBook-Pro-2019%E4%B8%8A%E4%BC%98%E5%8C%96GEMM/index.html
[ "how-to-optimiza-gemm 是大家参考得比较多的gemm优化tutorial,本文是在我的MacBook Pro 2019上进行的实践,处理器型号是i5-8257U.\n\n### 1.浮点峰值计算\n\n❯ sysctl machdep.cpu.brand_string\nmachdep.cpu.brand_string: Intel(R) Core(TM) i5-8257U CPU @ 1.40GHz", null, "", null, "static void cpuid_x86_exec(struct cpuid_t *cpuid) {\nasm volatile(\"pushq %%rbx\\n\"\n\"cpuid\\n\"\n\"movl %%ebx, %1\\n\"\n\"popq %%rbx\\n\"\n: \"=a\"(cpuid->eax), \"=r\"(cpuid->ebx), \"=c\"(cpuid->ecx), \"=d\"(cpuid->edx)\n: \"a\"(cpuid->ieax), \"c\"(cpuid->iecx)\n: \"cc\");\n}\n\nstruct cpuid_t cpuid;\n\nfeat = 0;\n\ncpuid.ieax = 0x1;\ncpuid.iecx = 0x0;\ncpuid_x86_exec(&cpuid);\n\nif (BIT_TEST(cpuid.edx, 25)) {\nSET_FEAT(_CPUID_X86_SSE_);\n}\nif (BIT_TEST(cpuid.ecx, 28)) {\nSET_FEAT(_CPUID_X86_AVX_);\n}\nif (BIT_TEST(cpuid.ecx, 12)) {\nSET_FEAT(_CPUID_X86_FMA_);\n}\n\ncpuid.ieax = 0x7;\ncpuid.iecx = 0x0;\ncpuid_x86_exec(&cpuid);\n\nif (BIT_TEST(cpuid.ebx, 16)) {\nSET_FEAT(_CPUID_X86_AVX512F_);\n}\nif (BIT_TEST(cpuid.ecx, 11)) {\nSET_FEAT(_CPUID_X86_AVX512_VNNI_);\n}\n\nH,M,U,表示功耗,字母越小,功耗越大,性能越好。所以后缀:H>M>U。比如:i5-5350H>i7-4610m,i5-4330m>i7-4558U", null, "PORT0和PORT1可以同时执行向量乘、向量加和FMA指令,通过查datasheet可以知道:\n\n$$2(256/32)2*1.4=44.8 \\quad GFLOPS$$\n\nMac上需要使用到 Turbo Boost Switcher Pro这个软件来关闭睿频,不过我的mbp2019日常开启睿频,风扇疯狂起飞XDD\n\n$$2(256/32)2*3.9=124.8 \\quad GFLOPS$$\n\ngemm-optimizer/cmake-build-debug/tools/calc-cpu-flops 4\n\n❯ tree\n.\n├── CMakeLists.txt\n├── calc-cpu-flops.c\n├── cpufp_kernel_x86.h\n├── cpufp_kernel_x86_avx.s\n├── cpufp_kernel_x86_avx512_vnni.s\n├── cpufp_kernel_x86_avx512f.s\n├── cpufp_kernel_x86_fma.s\n├── cpufp_kernel_x86_sse.s\n├── smtl.c\n└── smtl.h\n\n_cpufp_kernel_x86_fma_fp32:\nmov $0x40000000, %rax vxorps %ymm0, %ymm0, %ymm0 vxorps %ymm1, %ymm1, %ymm1 vxorps %ymm2, %ymm2, %ymm2 vxorps %ymm3, %ymm3, %ymm3 vxorps %ymm4, %ymm4, %ymm4 vxorps %ymm5, %ymm5, %ymm5 vxorps %ymm6, %ymm6, %ymm6 vxorps %ymm7, %ymm7, %ymm7 vxorps %ymm8, %ymm8, %ymm8 vxorps %ymm9, %ymm9, %ymm9 ._cpufp.x86.fma.fp32.L1: vfmadd231ps %ymm0, %ymm0, %ymm0 vfmadd231ps %ymm1, %ymm1, %ymm1 vfmadd231ps %ymm2, %ymm2, %ymm2 vfmadd231ps %ymm3, %ymm3, %ymm3 vfmadd231ps %ymm4, %ymm4, %ymm4 vfmadd231ps %ymm5, %ymm5, %ymm5 vfmadd231ps %ymm6, %ymm6, %ymm6 vfmadd231ps %ymm7, %ymm7, %ymm7 vfmadd231ps %ymm8, %ymm8, %ymm8 vfmadd231ps %ymm9, %ymm9, %ymm9 sub$0x1, %rax\njne ._cpufp.x86.fma.fp32.L1\nret\n\nrax存储的是循环的次数,循环初始化的时候执行的一系列异或指令是用来快速将寄存器的内容置零,接着循环内会执行十次乘累加运算,一共执行40000000次,之前分析得,对于32位的浮点数, 一次乘累加运算可以操作8个数,一共是 40000000 * 8 * 10 * 2(乘法和加法)运算,共这么大的浮点运算量:\n\n#ifdef _FMA_\n#define FMA_FP32_COMP (0x40000000L * 160)\n#define FMA_FP64_COMP (0x40000000L * 80)\n\nperf = FMA_FP64_COMP * num_threads / time_used * 1e-9;\n\nThread(s): 1\nbinding to core 0\nfma fp32 perf: 40.9189 gflops.\nfma fp64 perf: 22.0921 gflops.\nbinding to core 0\navx fp32 perf: 21.9944 gflops.\navx fp64 perf: 11.0741 gflops.\nbinding to core 0\nsse fp32 perf: 10.8526 gfops.\nsse fp64 perf: 5.4786 gflops.\n\nThread(s): 1\nbinding to core 0\nfma fp32 perf: 117.0210 gflops.\nfma fp64 perf: 58.4361 gflops.\nbinding to core 0\navx fp32 perf: 56.1949 gflops.\navx fp64 perf: 28.9646 gflops.\nbinding to core 0\nsse fp32 perf: 29.1122 gflops.\nsse fp64 perf: 14.6471 gflops.\n\n### 2.baseline\n\nvoid AddDot( int k, double *x, int incx, double *y, double *gamma )\n{\n/* compute gamma := x' * y + gamma with vectors x and y of length n.\n\nHere x starts at location x with increment (stride) incx and y starts at location y and has (implicit) stride of 1.\n*/\n\nint p;\n\nfor ( p=0; p<k; p++ ){\n*gamma += X( p ) * y[ p ];\n}\n}\n\n/* Routine 
for computing C = A * B + C */\n\nvoid mMultBaseLine( int m, int n, int k, double *a, int lda,\ndouble *b, int ldb,\ndouble *c, int ldc )\n{\nint i, j, p;\n\nfor ( i=0; i<m; i++ ){ /* Loop over the rows of C */\nfor ( j=0; j<n; j++ ){ /* Loop over the columns of C */\nAddDot( k, &A( i,0 ), lda, &B( 0,j ), &C( i,j ) );\n}\n}\n}\n\n\n#define PFIRST 100\n#define PLAST 1500\n#define PINC 100\n\n100 7.299270e-01 0.000000e+00\n200 6.892096e-01 0.000000e+00\n300 6.828183e-01 0.000000e+00\n400 6.939813e-01 0.000000e+00\n500 6.374153e-01 0.000000e+00\n600 6.188340e-01 0.000000e+00\n700 5.687877e-01 0.000000e+00\n800 5.626816e-01 0.000000e+00\n900 5.218054e-01 0.000000e+00\n1000 5.200376e-01 0.000000e+00\n1100 4.751246e-01 0.000000e+00\n1200 4.499168e-01 0.000000e+00\n1300 4.366850e-01 0.000000e+00\n1400 4.270737e-01 0.000000e+00\n1500 3.929416e-01 0.000000e+00 \n\ntools/plot下我提供了一个py程序,用来绘制benckmark的:", null, "gflops只有0.6,并且显而易见在gemm的尺寸变大的时候,gflop有下降趋势,这是因为当运算过程中的矩阵A、B的大小超过了L2 Cache的大小(256Kb)的时候,频繁访问A、B就会导致Cache要重复的访问DDR,变成访存的瓶颈,这部分可以通过矩阵分块计算来解决。\n\n### 3.循环展开和合并\n\nvoid MY_MMult( int m, int n, int k, double *a, int lda,\ndouble *b, int ldb,\ndouble *c, int ldc )\n{\nint i, j;\n\nfor ( j=0; j<n; j+=4 ){ /* Loop over the columns of C, unrolled by 4 */\nfor ( i=0; i<m; i+=1 ){ /* Loop over the rows of C */\n/* Update C( i,j ), C( i,j+1 ), C( i,j+2 ), and C( i,j+3 ) in\none routine (four inner products) */\nAddDot( k, &A( 0, 0 ), lda, &B( 0, 0 ), &C( 0, 0 ) );\nAddDot( k, &A( 0, 0 ), lda, &B( 0, 1 ), &C( 0, 1 ) );\nAddDot( k, &A( 0, 0 ), lda, &B( 0, 2 ), &C( 0, 2 ) );\nAddDot( k, &A( 0, 0 ), lda, &B( 0, 3 ), &C( 0, 3 ) );\n}\n}\n}\n\n\nvoid MY_MMult( int m, int n, int k, double *a, int lda,\ndouble *b, int ldb,\ndouble *c, int ldc )\n{\nint i, j;\n\nfor ( j=0; j<n; j+=4 ){ /* Loop over the columns of C, unrolled by 4 */\nfor ( i=0; i<m; i+=1 ){ /* Loop over the rows of C */\nint p;\n\n// AddDot( k, &A( 0, 0 ), lda, &B( 0, 0 ), &C( 0, 0 ) );\nfor ( p=0; p<k; p++ ){\nC( 0, 0 ) += A( 0, p ) * B( p, 0 );\n}\n\n// AddDot( k, &A( 0, 0 ), lda, &B( 0, 1 ), &C( 0, 1 ) );\nfor ( p=0; p<k; p++ ){\nC( 0, 1 ) += A( 0, p ) * B( p, 1 );\n}\n\n// AddDot( k, &A( 0, 0 ), lda, &B( 0, 2 ), &C( 0, 2 ) );\nfor ( p=0; p<k; p++ ){\nC( 0, 2 ) += A( 0, p ) * B( p, 2 );\n}\n\n// AddDot( k, &A( 0, 0 ), lda, &B( 0, 3 ), &C( 0, 3 ) );\nfor ( p=0; p<k; p++ ){\nC( 0, 3 ) += A( 0, p ) * B( p, 3 );\n}\n}\n}\n}\n\nfor ( p=0; p<k; p++ ){\nC( 0, 0 ) += A( 0, p ) * B( p, 0 );\nC( 0, 1 ) += A( 0, p ) * B( p, 1 );\nC( 0, 2 ) += A( 0, p ) * B( p, 2 );\nC( 0, 3 ) += A( 0, p ) * B( p, 3 );\n}", null, "### 4.访存方式\n\nfor ( p=0; p<k; p++ ){\nC( 0, 0 ) += A( 0, p ) * B( p, 0 );\nC( 0, 1 ) += A( 0, p ) * B( p, 1 );\nC( 0, 2 ) += A( 0, p ) * B( p, 2 );\nC( 0, 3 ) += A( 0, p ) * B( p, 3 );\n}\n\nC(0,0)被复用了k次,A(0,p)被复用了4次,所以我们将它们放在访问比较快的单元里会有利于程序的性能,在C语言里,使用register将变量塞到物理寄存器里去:\n\nfor ( p=0; p<k; p++ ){\na_0p_reg = A( 0, p );\n\nc_00_reg += a_0p_reg * *bp0_pntr++;\nc_01_reg += a_0p_reg * *bp1_pntr++;\nc_02_reg += a_0p_reg * *bp2_pntr++;\nc_03_reg += a_0p_reg * *bp3_pntr++;\n}\n\nbp0_pntr = &B( 0, 0 );\nbp1_pntr = &B( 0, 1 );\nbp2_pntr = &B( 0, 2 );\nbp3_pntr = &B( 0, 3 );\n\n#define A(i,j) a[ (j)*lda + (i) ]\n#define B(i,j) b[ (j)*ldb + (i) ]\n#define C(i,j) c[ (j)*ldc + (i) ]", null, "### 5.SIMD指令加速\n\n._cpufp.x86.sse.fp32.L1:\nmulps %xmm0, %xmm0\nmulps %xmm2, %xmm2\nmulps %xmm4, %xmm4\nmulps %xmm6, %xmm6\nsub \\$0x1, %rax\nmulps %xmm8, %xmm8\nmulps %xmm10, %xmm10\nmulps %xmm12, %xmm12\nmulps %xmm14, %xmm14\naddps %xmm15, %xmm15\n\nfor ( p=0; p<k; 
p++ ){\na_0p_reg = A( 0, p );\na_1p_reg = A( 1, p );\na_2p_reg = A( 2, p );\na_3p_reg = A( 3, p );\n\nb_p0_pntr = &B( p, 0 );\nb_p1_pntr = &B( p, 1 );\nb_p2_pntr = &B( p, 2 );\nb_p3_pntr = &B( p, 3 );\n\n/* First row */\nc_00_reg += a_0p_reg * *b_p0_pntr;\nc_01_reg += a_0p_reg * *b_p1_pntr;\nc_02_reg += a_0p_reg * *b_p2_pntr;\nc_03_reg += a_0p_reg * *b_p3_pntr;\n\n/* Second row */\nc_10_reg += a_1p_reg * *b_p0_pntr;\nc_11_reg += a_1p_reg * *b_p1_pntr;\nc_12_reg += a_1p_reg * *b_p2_pntr;\nc_13_reg += a_1p_reg * *b_p3_pntr;\n\n/* Third row */\nc_20_reg += a_2p_reg * *b_p0_pntr;\nc_21_reg += a_2p_reg * *b_p1_pntr;\nc_22_reg += a_2p_reg * *b_p2_pntr;\nc_23_reg += a_2p_reg * *b_p3_pntr;\n\n/* Four row */\nc_30_reg += a_3p_reg * *b_p0_pntr++;\nc_31_reg += a_3p_reg * *b_p1_pntr++;\nc_32_reg += a_3p_reg * *b_p2_pntr++;\nc_33_reg += a_3p_reg * *b_p3_pntr++;\n}", null, "b_p0_vreg.v = _mm_loaddup_pd( (double *) b_p0_pntr++ ); /* load and duplicate */\nb_p1_vreg.v = _mm_loaddup_pd( (double *) b_p1_pntr++ ); /* load and duplicate */\nb_p2_vreg.v = _mm_loaddup_pd( (double *) b_p2_pntr++ ); /* load and duplicate */\nb_p3_vreg.v = _mm_loaddup_pd( (double *) b_p3_pntr++ ); /* load and duplicate */\n\n/* First row and second rows */\nc_00_c_10_vreg.v += a_0p_a_1p_vreg.v * b_p0_vreg.v;\nc_01_c_11_vreg.v += a_0p_a_1p_vreg.v * b_p1_vreg.v;\nc_02_c_12_vreg.v += a_0p_a_1p_vreg.v * b_p2_vreg.v;\nc_03_c_13_vreg.v += a_0p_a_1p_vreg.v * b_p3_vreg.v;\n\n$$C_{00} = \\sum_{k=0}^{3}{A_{0k} * B_{k0}} \\ C_{10} = \\sum_{k=0}^{3}{A_{1k} * B_{k0}}$$", null, "### 6. 矩阵分块乘法\n\nfor ( p=0; p<k; p+=kc ){\npb = min( k-p, kc );\nfor ( i=0; i<m; i+=mc ){\nib = min( m-i, mc );\nInnerKernel( ib, n, pb, &A( i,p ), lda, &B(p, 0 ), ldb, &C( i,0 ), ldc );\n}\n}", null, "### 7. DataPack", null, "" ]
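As a cross-check of the peak-FLOPS arithmetic from section 1, here is a tiny helper of my own (not part of the original tutorial) that reproduces the 44.8 and 124.8 GFLOPS figures from the FMA port count, SIMD width and clock frequency:

# Theoretical FP32 peak = fma_ports * lanes * 2 flops per FMA * GHz
def peak_gflops(freq_ghz, simd_bits=256, fma_ports=2, bits_per_lane=32):
    lanes = simd_bits // bits_per_lane        # 8 fp32 lanes for AVX2
    return fma_ports * lanes * 2 * freq_ghz

print(peak_gflops(1.4))   # 44.8  (base clock)
print(peak_gflops(3.9))   # 124.8 (turbo clock)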
[ null, "https://leiblog-imgbed.oss-cn-beijing.aliyuncs.com/img/image-20220212173146231.png", null, "https://leiblog-imgbed.oss-cn-beijing.aliyuncs.com/img/image-20220212191541540.png", null, "https://leiblog-imgbed.oss-cn-beijing.aliyuncs.com/img/20220212233321.png", null, "https://leiblog-imgbed.oss-cn-beijing.aliyuncs.com/img/baseline.png", null, "https://leiblog-imgbed.oss-cn-beijing.aliyuncs.com/img/loopunroll1x4.png", null, "https://leiblog-imgbed.oss-cn-beijing.aliyuncs.com/img/register1x4.png", null, "https://leiblog-imgbed.oss-cn-beijing.aliyuncs.com/img/loopunroll4x4.png", null, "https://leiblog-imgbed.oss-cn-beijing.aliyuncs.com/img/image-20220216162342131.png", null, "https://leiblog-imgbed.oss-cn-beijing.aliyuncs.com/img/image-20220216165503348.png", null, "https://leiblog-imgbed.oss-cn-beijing.aliyuncs.com/img/image-20220216170201568.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.50452685,"math_prob":0.9972573,"size":12037,"snap":"2022-27-2022-33","text_gpt3_token_len":7461,"char_repetition_ratio":0.12897864,"word_repetition_ratio":0.31363636,"special_character_ratio":0.4234444,"punctuation_ratio":0.23333333,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9960143,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T12:32:54Z\",\"WARC-Record-ID\":\"<urn:uuid:40626d8b-9066-469a-bee4-5f1de2839cbf>\",\"Content-Length\":\"95371\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b2fb7021-6e81-405e-9809-eea1c34c8f7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:67243d52-5438-4165-8c3a-57ffedb399f7>\",\"WARC-IP-Address\":\"101.200.188.197\",\"WARC-Target-URI\":\"https://leiblog.wang/%E5%9C%A8MacBook-Pro-2019%E4%B8%8A%E4%BC%98%E5%8C%96GEMM/index.html\",\"WARC-Payload-Digest\":\"sha1:RMHMO7VDHJYZAID4QPKXSSK4RDVKDOSU\",\"WARC-Block-Digest\":\"sha1:FJKUME4RLENSNCNH334IYR63IKEWH5AU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103516990.28_warc_CC-MAIN-20220628111602-20220628141602-00409.warc.gz\"}"}
http://zackmdavis.net/blog/2012/08/straight-talk-about-precompactness/
[ "So we have this metric space, which is this set of points along with a way of defining \"distances\" between them that behaves in a basically noncrazy way (points that are zero distance away from \"each other\" are really just the same point, the distance from one to the other is the same as the distance from the other to the one, and something about triangles).\n\nLet's say (please, if you don't mind) that a sequence of points (xn) in our space is fundamental (or maybe Cauchy) iff (sic) for all positive ε, there's a point far enough along in the sequence so that beyond that point, the distance from one point to the next is less than ε. Let's also agree (if that's okay with you) to say that our metric space is sequentially precompact iff every sequence has a fundamental subsequence. If, furthermore, the precompact space is complete (all fundamental sequences actually converge to a point in the space, rather than leading up to an ætherial gap or missing edge), then we say it's compact. It turns out that compactness is an important property to pay attention to because it implies lots of cool stuff: like, compactness is preserved by homeomorphisms (continuously invertible continuous maps), and continuous functions with compact domains are bounded, and probably all sorts of other things that I don't know (yet). I'm saying sequentially precompact because I'm given to understand that while the convergent subsequences criterion for compactness is equivalent to this other definition (viz., \"every open cover has a finite subcover\") for metric spaces, the two ideas aren't the same for more general topological spaces. Just don't ask me what in the world we're going to do with a nonmetrizable space, 'cause I don't know (yet).\n\nBut anyway, as long as we're naming ideas, why not say that our metric space is totally bounded iff for every ε, there exists a finite number of open (that is, not including the boundary) balls that cover the whole space? We can call the centers of such a group of balls an ε-net. Our friend Shilov quotes his friend Liusternik as saying, \"Suppose a lamp illuminating a ball of radius ε is placed at every point of a set B which is an ε-net for a set M. Then the whole set M will be illuminated.\" At the risk of having names for things that possibly don't actually deserve names, I'm going call each point in an ε-net a lamp. Actually Shilov, and thus likely Liusternik, is talking about closed balls of light around the lamps, not the open ones that I'm talking about. In a lot of circumstances, this could probably make all the difference in the world, but for the duration of this post, I don't think you should worry about it.\n\nBut this fear of having too many names for things is really a very serious one, because it turns out that sequential precompactness and total boundedness are the same thing: not only can you not have one without the other, but you can't even have the other without the one! Seriously, like, who even does that?!\n\nBut the reasoning is inescapable. You can't have one without the other because if every sequence has a fundamental subsequence, then finite ε-nets are a thing, which is to say (by the contraposition doctrine and De Morgan's Iron Law of Negation) that if every ε-net is infinite, then sequences that don't have fundamental subsequences are a thing. To see this, think about an infinite ε-net where no lamp lies within the lighted area of any other lamp. 
A sequence consisting of such lamps can't have a fundamental subsequence because the distance between successive points in that sequence is bounded below by ε.\n\nAnd you can't have the other without the one because if finite ε-nets are a thing, then every sequence has a fundamental subsequence. To see this, consider a sequence. For k ∈ ℕ+ and for ε := 1/k, we can cover any subset of our space with a finite number of ε-balls. But then by the Infinitary Corollary of the Iron Law Pertaining to the Storage of Pigeons, there must then be an ε-ball that contains infinitely many points of our sequence. Let's pick one of those points and call it ak. Then if we set ε := 1/(k+1), our ball can itself be covered by a finite number of ε-balls, one of which again contains infinitely many points of our sequence, of which we can pick one and call it ak+1. That triggers an induction, giving us a subsequence (an). But then for every N ∈ ℕ+, if n and m are not smaller than N, then an and am live in a 1/N-ball, so that the distance between them is bounded above by 2/N, which can be made arbitrarily small by choosing a large enough N, which means that the subsequence (an) is fundamental. But this is \"quod erat demonstrandum\" (a Latin phrase that roughly translates as \"what I've been trying to tell you this entire time\").\n\nBibliography\n\nTheodore W. Gamelin and Robert Everist Greene, Introduction to Topology, 2nd ed'n., §I.5.\n\nGeorgi E. Shilov, Elementary Real and Complex Analysis, revised English ed'n., §3.93." ]
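As an entirely optional illustration of the pigeonhole construction, here is a little script of mine that extracts a near-fundamental subsequence from a bounded sequence in [0, 1] by repeated halving; each halving step plays the role of passing to a finer finite ε-net. (A real proof keeps the half-interval containing infinitely many terms; a finite demo just keeps the bigger half, which is my simplifying assumption here.)

import math

# Some bounded sequence in [0, 1); the construction doesn't care which.
xs = [math.sin(n * n) % 1.0 for n in range(10_000)]

lo, hi = 0.0, 1.0
candidates = list(range(len(xs)))   # indices still in play
picked = []
for _ in range(8):
    mid = (lo + hi) / 2.0
    left = [i for i in candidates if xs[i] <= mid]
    right = [i for i in candidates if xs[i] > mid]
    # Pigeonhole: at least one half keeps "most" of the terms.
    if len(left) >= len(right):
        candidates, hi = left, mid
    else:
        candidates, lo = right, mid
    # Pick one term, further along than the previous pick.
    nxt = min(i for i in candidates if not picked or i > picked[-1])
    picked.append(nxt)

# Terms picked at step k and later all live in an interval of width 2**-k,
# so successive distances are bounded above by a shrinking geometric tail.
print([round(xs[i], 4) for i in picked])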
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9649189,"math_prob":0.9423926,"size":4955,"snap":"2022-27-2022-33","text_gpt3_token_len":1140,"char_repetition_ratio":0.12765098,"word_repetition_ratio":0.02183908,"special_character_ratio":0.21836528,"punctuation_ratio":0.09929078,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97069776,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T14:27:01Z\",\"WARC-Record-ID\":\"<urn:uuid:54882631-ded3-4a26-8831-fad3ffaaccfd>\",\"Content-Length\":\"49386\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bc65b26e-55c4-42a7-b541-7d8ac731ac02>\",\"WARC-Concurrent-To\":\"<urn:uuid:1b222a50-3051-4727-8f6d-e497fb0537ba>\",\"WARC-IP-Address\":\"199.188.205.30\",\"WARC-Target-URI\":\"http://zackmdavis.net/blog/2012/08/straight-talk-about-precompactness/\",\"WARC-Payload-Digest\":\"sha1:U7E3SLSL6YVPU7C7SQOXJF6QWWFF4AZC\",\"WARC-Block-Digest\":\"sha1:J3NDMFHPLW4FHXE223XJPO4RPWKJCWKU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103941562.52_warc_CC-MAIN-20220701125452-20220701155452-00688.warc.gz\"}"}
https://www.gurufocus.com/term/mscore/NYSE:RTN/Beneish-M-Score/Raytheon
[ "Switch to:\n\n# Raytheon Co Beneish M-Score\n\n: -2.65 (As of Today)\nView and export this data going back to 1952. Start your Free Trial\n\nThe zones of discrimination for M-Score is as such:\n\nAn M-Score of less than -2.22 suggests that the company is not an accounting manipulator.\nAn M-Score of greater than -2.22 signals that the company is likely an accounting manipulator.\n\nRaytheon Co has a M-score of -2.65 suggests that the company is not a manipulator.\n\nNYSE:RTN' s Beneish M-Score Range Over the Past 10 Years\nMin: -3.28   Med: -2.56   Max: -2.07\nCurrent: -2.65\n\n-3.28\n-2.07\n\nDuring the past 13 years, the highest Beneish M-Score of Raytheon Co was -2.07. The lowest was -3.28. And the median was -2.56.\n\n## Raytheon Co Beneish M-Score Historical Data\n\n* All numbers are in millions except for per share data and ratio. All numbers are in their local exchange's currency.\n\n Raytheon Co Annual Data Dec10 Dec11 Dec12 Dec13 Dec14 Dec15 Dec16 Dec17 Dec18 Dec19 Beneish M-Score", null, "", null, "", null, "", null, "", null, "-2.39 -2.56 -2.33 -2.17 -2.65\n\nCompetitive Comparison\n* Competitive companies are chosen from companies within the same industry, with headquarter located in same country, with closest market capitalization; x-axis shows the market cap, and y-axis shows the term value; the bigger the dot, the larger the market cap.\n\nRaytheon Co Beneish M-Score Distribution\n\n* The bar in red indicates where Raytheon Co's Beneish M-Score falls into.\n\n## Raytheon Co Beneish M-Score Calculation\n\nThe M-score was created by Professor Messod Beneish. Instead of measuring the bankruptcy risk (Altman Z-Score) or business trend (Piotroski F-Score), M-score can be used to detect the risk of earnings manipulation. This is the original research paper on M-score.\n\nThe M-Score Variables:\n\nThe M-score of Raytheon Co for today is based on a combination of the following eight different indices:\n\n M = -4.84 + 0.92 * DSRI + 0.528 * GMI + 0.404 * AQI + 0.892 * SGI + 0.115 * DEPI = -4.84 + 0.92 * 0.7676 + 0.528 * 1.0397 + 0.404 * 0.9657 + 0.892 * 1.0783 + 0.115 * 1.077 - 0.172 * SGAI + 4.679 * TATA - 0.327 * LVGI - 0.172 * 0.9939 + 4.679 * -0.0141 - 0.327 * 0.9379 = -2.65\n\n* All numbers are in millions except for per share data and ratio. All numbers are in their local exchange's currency.\n\n This Year (Dec19) TTM: Last Year (Dec18) TTM: Accounts Receivable was \\$1,364 Mil. Revenue was 7842 + 7446 + 7159 + 6729 = \\$29,176 Mil. Gross Profit was 2010 + 1947 + 1954 + 1852 = \\$7,763 Mil. Total Current Assets was \\$13,082 Mil. Total Assets was \\$34,566 Mil. Property, Plant and Equipment(Net PPE) was \\$4,228 Mil. Depreciation, Depletion and Amortization(DDA) was \\$605 Mil. Selling, General, & Admin. Expense(SGA) was \\$2,257 Mil. Total Current Liabilities was \\$9,791 Mil. Long-Term Debt & Capital Lease Obligation was \\$3,967 Mil. Net Income was 885 + 860 + 817 + 781 = \\$3,343 Mil. Non Operating Income was -181 + -135 + -173 + -161 = \\$-650 Mil. Cash Flow from Operations was 2791 + 1277 + 823 + -411 = \\$4,480 Mil. Accounts Receivable was \\$1,648 Mil. Revenue was 7360 + 6806 + 6625 + 6267 = \\$27,058 Mil. Gross Profit was 1967 + 1935 + 1848 + 1735 = \\$7,485 Mil. Total Current Assets was \\$12,137 Mil. Total Assets was \\$32,670 Mil. Property, Plant and Equipment(Net PPE) was \\$3,645 Mil. (DDA) was \\$568 Mil. Selling, General, & Admin. Expense(SGA) was \\$2,106 Mil. Total Current Liabilities was \\$8,463 Mil. 
Long-Term Debt & Capital Lease Obligation was \\$5,402 Mil.\n\n1. DSRI = Days Sales in Receivables Index\n\nMeasured as the ratio of Revenue in Accounts Receivable in year t to year t-1.\n\nA large increase in DSR could be indicative of revenue inflation.\n\n DSRI = (Receivables_t / Revenue_t) / (Receivables_t-1 / Revenue_t-1) = (1364 / 29176) / (1648 / 27058) = 0.04675075 / 0.0609062 = 0.7676\n\n2. GMI = Gross Margin Index\n\nMeasured as the ratio of gross margin in year t-1 to gross margin in year t.\n\nGross margin has deteriorated when this index is above 1. A firm with poorer prospects is more likely to manipulate earnings.\n\n GMI = GrossMargin_t-1 / GrossMargin_t = (GrossProfit_t-1 / Revenue_t-1) / (GrossProfit_t / Revenue_t) = (7485 / 27058) / (7763 / 29176) = 0.27662798 / 0.26607486 = 1.0397\n\n3. AQI = Asset Quality Index\n\nAQI is the ratio of asset quality in year t to year t-1.\n\nAsset quality is measured as the ratio of non-current assets other than Property, Plant and Equipment to Total Assets.\n\n AQI = (1 - (CurrentAssets_t + PPE_t) / TotalAssets_t) / (1 - (CurrentAssets_t-1 + PPE_t-1) / TotalAssets_t-1) = (1 - (13082 + 4228) / 34566) / (1 - (12137 + 3645) / 32670) = 0.49921889 / 0.51692684 = 0.9657\n\n4. SGI = Sales Growth Index\n\nRatio of Revenue in year t to sales in year t-1.\n\nSales growth is not itself a measure of manipulation. However, growth companies are likely to find themselves under pressure to manipulate in order to keep up appearances.\n\n SGI = Sales_t / Sales_t-1 = Revenue_t / Revenue_t-1 = 29176 / 27058 = 1.0783\n\n5. DEPI = Depreciation Index\n\nMeasured as the ratio of the rate of Depreciation, Depletion and Amortization in year t-1 to the corresponding rate in year t.\n\nDEPI greater than 1 indicates that assets are being depreciated at a slower rate. This suggests that the firm might be revising useful asset life assumptions upwards, or adopting a new method that is income friendly.\n\n DEPI = (Depreciation_t-1 / (Depreciaton_t-1 + PPE_t-1)) / (Depreciation_t / (Depreciaton_t + PPE_t)) = (568 / (568 + 3645)) / (605 / (605 + 4228)) = 0.13482079 / 0.12518105 = 1.077\n\n6. SGAI = Sales, General and Administrative expenses Index\n\nThe ratio of Selling, General, & Admin. Expense(SGA) to Sales in year t relative to year t-1.\n\nSGA expenses index > 1 means that the company is becoming less efficient in generate sales.\n\n SGAI = (SGA_t / Sales_t) / (SGA_t-1 /Sales_t-1) = (2257 / 29176) / (2106 / 27058) = 0.0773581 / 0.0778328 = 0.9939\n\n7. LVGI = Leverage Index\n\nThe ratio of total debt to Total Assets in year t relative to yeat t-1.\n\nAn LVGI > 1 indicates an increase in leverage\n\n LVGI = ((LTD_t + CurrentLiabilities_t) / TotalAssets_t) / ((LTD_t-1 + CurrentLiabilities_t-1) / TotalAssets_t-1) = ((3967 + 9791) / 34566) / ((5402 + 8463) / 32670) = 0.39802118 / 0.42439547 = 0.9379\n\n8. TATA = Total Accruals to Total Assets\n\nTotal accruals calculated as the change in working capital accounts other than cash less depreciation.\n\n TATA = (IncomefromContinuingOperations_t - CashFlowsfromOperations_t) / TotalAssets_t = (NetIncome_t - NonOperatingIncome_t - CashFlowsfromOperations_t) / TotalAssets_t = (3343 - -650 - 4480) / 34566 = -0.0141\n\nAn M-Score of less than -2.22 suggests that the company will not be a manipulator. An M-Score of greater than -2.22 signals that the company is likely to be a manipulator.\n\nRaytheon Co has a M-score of -2.65 suggests that the company will not be a manipulator." ]
[ null, "https://gurufocus.s3.amazonaws.com/images/blur.png", null, "https://gurufocus.s3.amazonaws.com/images/blur.png", null, "https://gurufocus.s3.amazonaws.com/images/blur.png", null, "https://gurufocus.s3.amazonaws.com/images/blur.png", null, "https://gurufocus.s3.amazonaws.com/images/blur.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8707484,"math_prob":0.9540325,"size":6328,"snap":"2020-45-2020-50","text_gpt3_token_len":2087,"char_repetition_ratio":0.11574952,"word_repetition_ratio":0.10823312,"special_character_ratio":0.40265486,"punctuation_ratio":0.14664458,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9897041,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T23:38:30Z\",\"WARC-Record-ID\":\"<urn:uuid:4732fae0-2507-4a6a-add7-df09da4078c2>\",\"Content-Length\":\"383791\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a0367ff0-994c-44ab-a3cc-0a120082b7de>\",\"WARC-Concurrent-To\":\"<urn:uuid:a3970198-45f1-48bb-8a6f-dc8a208eefe7>\",\"WARC-IP-Address\":\"104.26.14.56\",\"WARC-Target-URI\":\"https://www.gurufocus.com/term/mscore/NYSE:RTN/Beneish-M-Score/Raytheon\",\"WARC-Payload-Digest\":\"sha1:VFNZMHRIP42PZVHOAMRWPTYBDOB5MFKU\",\"WARC-Block-Digest\":\"sha1:GDXDQRU2FYH6YPH4CFXVB5PNSBMGYH7W\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107911792.65_warc_CC-MAIN-20201030212708-20201031002708-00393.warc.gz\"}"}
https://www.physicsforums.com/threads/find-the-speed-of-a-copper-loop-falling-in-a-magnetic-field.976109/
[ "# Find the speed of a copper loop falling in a magnetic field\n\nspsch\nHomework Statement:\nA square copper loop with side length a is falling straight down at constant speed perpendicular to a magnetic field B. The cross-sectional area of the loop is A. What is its speed v?\nRelevant Equations:\nV = B*dA = Blv (or Bav here) F = ILB and V = IR\nHi all, so I had this problem and on the exam and I got a solution but I had an mass-term in there which wasn't given.\n\nI used Farraday's Law of Induction to get the Voltage induced.\nThen I used ##rho* \\frac{A}{4a} ## for the resistance and divided the Voltage by that to get the current.\nI then inserted this for the current I in F=IlB.\nFrom the drawing on the exam it looked like only half the loop was in the field so I the forces up and down wouldn't cancel each other only the ones on the side.\n\nI then set that equal to mg because ## ma = mg-IlB ## since the speed is constant ## mg = IlB ##.\nThis question really threw me off because m wasn't given and I was sure it had to cancel somehow.\nCould someone show me where I made a mistake in thinking?\n\nThanks so much!\nI don't remember exactly but I think B was 1.5T, a = 25cm, A was 10^-6m.\n\nHomework Helper\nGold Member\nIt appears to me that you have the correct approach. The terminal speed of the loop will depend on the mass, as you found. So, a value of the mass should have been given in order to obtain a numerical value for the speed.\n\n•", null, "spsch\nspsch\nHello TSny.\n\nThank you very much. I lost quite a bit of time there. (trying to find a formula to make the masses cancel)\nAnd I thought about it for the past two days without figuring it out.\nI thought about it and with the density I probably could have figured it out. (Volume being 4a times A). There was a two page tableau with resistances and thermal data for the other problems.\nI'm afraid I may have missed that densities were included as well.\n(it also didn't occur to me then that m=density * volume)\n\nBut I wrote out the whole term without the numeric value. I hope that I may at least get a few points for that.\n\nHomework Helper\nGold Member\nOK. I hope that you receive almost full credit. The question is mostly about testing your understanding of some electromagnetism and I think you succeeded in doing that.\n\n•", null, "spsch\nHomework Helper\nGold Member\nThen I used ##\\rho \\frac{A}{4a}## for the resistance ...\nSo that you don't make the same mistake again, the correct expression for the resistance here is ##R=\\dfrac{\\rho(4a)}{A}##.\n\n•", null, "spsch and TSny\nHomework Helper\nGold Member\nYeah, very poorly posed problem.\nYou further need wire resistance per unit length (or total resistance) plus m.\n\nHomework Helper\nGold Member\nYeah, very poorly posed problem.\nYou further need wire resistance per unit length (or total resistance) plus m.\nTables were provided. See post #3.\n\nspsch\nHi all, I'm sorry I just thought about this problem a little more and have another question.\n\nThough it is a square, can I not see this wire as a loop and thus, wouldn't it also create a magnetic dipole that could end up feeling a torque?\n\nBecause by lenz's law the current induced should be in such a direction that opposes the change in magnetic flux? 
(Making the created dipole opposite)\nSo the wire would turn 180 degrees back and forth?\n\nI'm sorry I hope I'm making sense, I could draw what I mean if it helps.\n\n•", null, "kuruman\nHomework Helper\nGold Member\nI'm sorry I hope I'm making sense, I could draw what I mean if it helps.\nYou are making perfect sense and it is to your credit that you should ask about this. Indeed there will be a magnetic dipole moment generated by the induced current in which case there will be a torque tending to align the magnetic moment with the field. However, unlike a compass needle oscillating back and forth in the Earth's magnetic field because there is a restoring torque, this is not the case here because the torque is not restoring. If the loop tries to rotate because of the external torque, Lenz's law says that there will be an induced current, in addition to the one already there, that will oppose the proposed change of angle. The counterpart of the magnetic \"friction\" force that opposes the linear motion of the loop no matter in what direction the loop is moving is a magnetic \"friction\" torque that opposes the rotational motion of the loop no matter which way the loop is turning. Without having written and solved the equations, my guess is that if the loop is initially oriented so that the magnetic field is not perpendicular to the plane of the loop, the loop will settle at some \"terminal\" angle by the time it reaches terminal velocity. If I solve the equations, I will post here.\n\n•", null, "spsch\nspsch\nIf the loop tries to rotate because of the external torque, Lenz's law says that there will be an induced current, in addition to the one already there, that will oppose the proposed change of angle. The counterpart of the magnetic \"friction\" force that opposes the linear motion of the loop no matter in what direction the loop is moving is a magnetic \"friction\" torque that opposes the rotational motion of the loop no matter which way the loop is turning.\n\nHi Kuruman,\n\nthank you for answering me again. That makes sense.\n\nThe loop is originally falling vertically down perpendicular to the magnetic field. It's why when I thought about it I imagined the dipole being created inside the loop, directly opposite the direction of the magnetic field which it is falling in, and hence I imagined it should feel a torque until it balances again, maybe oscillate back and forth.\n\nBut it makes sense that the rotation would also create a change in flux and those a counter current and dipole.\nIt's hard to wrap my head around it but I'm having quite a bit of fun thinking about this and am grateful for the patience you guys show me.\n\nspsch\nOK. I hope that you receive almost full credit. The question is mostly about testing your understanding of some electromagnetism and I think you succeeded in doing that.\nUpdate, got the results yesterday and I got half the points for this problem! :-) Thanks for helping!\n\n•", null, "TSny" ]
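To see how the missing mass enters numerically, here is a quick sketch that evaluates v = mgR/(B^2 a^2) for the half-inserted square loop. The copper density and resistivity are my own textbook-value assumptions; the thread itself never fixes m:

# Terminal speed of a half-inserted square loop: mg = B*I*a with I = B*a*v/R.
# Material constants are assumed textbook values for copper.
rho_e = 1.68e-8      # resistivity, ohm*m   (assumption)
rho_m = 8960.0       # density, kg/m^3      (assumption)
g = 9.81
B, a, A = 1.5, 0.25, 1e-6    # field (T), side (m), wire cross-section (m^2)

R = rho_e * (4 * a) / A      # resistance of the whole loop (kuruman's formula)
m = rho_m * (4 * a) * A      # mass = density * wire volume (spsch's 4a * A idea)
v = m * g * R / (B**2 * a**2)
print(f"R = {R:.4g} ohm, m = {m:.4g} kg, v = {v:.3g} m/s")   # roughly 0.01 m/s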
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9733535,"math_prob":0.7259514,"size":1868,"snap":"2023-14-2023-23","text_gpt3_token_len":496,"char_repetition_ratio":0.091201715,"word_repetition_ratio":0.74934036,"special_character_ratio":0.27408993,"punctuation_ratio":0.07730673,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9739622,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-25T07:25:21Z\",\"WARC-Record-ID\":\"<urn:uuid:38311135-8bbb-49d0-b423-0da85429d540>\",\"Content-Length\":\"101922\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:791c4224-a4da-4705-a9d9-23e816bc4346>\",\"WARC-Concurrent-To\":\"<urn:uuid:a8184bb6-0ee2-4d02-83bd-40f0091646de>\",\"WARC-IP-Address\":\"104.26.15.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/find-the-speed-of-a-copper-loop-falling-in-a-magnetic-field.976109/\",\"WARC-Payload-Digest\":\"sha1:T72QDVNZ2VHKLKEKBIVUBEMSWPS6OCYC\",\"WARC-Block-Digest\":\"sha1:ZMZEDOGUSEUZOFDA237ATHT2KBQLOJPM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945317.85_warc_CC-MAIN-20230325064253-20230325094253-00685.warc.gz\"}"}
https://artofproblemsolving.com/wiki/index.php/2006_Romanian_NMO_Problems/Grade_9/Problem_4
[ "# 2006 Romanian NMO Problems/Grade 9/Problem 4\n\n## Problem", null, "$2n$ students", null, "$(n \\geq 5)$ participated at table tennis contest, which took", null, "$4$ days. Every day, every student played a match. (It is possible that the same pair meets two or more times, in different days). Prove that it is possible that the contest ends like this:\n\n• there is only one winner;\n• there are", null, "$3$ students on the second place;\n• no student lost all", null, "$4$ matches.\n\nHow many students won only a single match and how many won exactly", null, "$2$ matches? (In the above conditions)\n\n## Solution\n\nNote that the 3 second place students obviously could not have only won one match, or won all 4 matches. I now claim that they could not have won exactly two matches, either.\n\nEach day there were", null, "$n$ matches, so at the end of the contest there were", null, "$4n$ total points. Now if the three people in second place won exactly two matches, then", null, "$2n-4$ people would have to had won exactly one match. The winner of the contest would have won at most 4 matches, so we have the inequality", null, "$$4+3\\cdot 2 + (2n-4)\\geq 4n$$\n\nSolving for", null, "$n$ yields", null, "$n\\leq 3$, which is clearly false. This is a contradiction in logic, so the three people in second place could not have won exactly two matches.\n\nThis shows that the three second-place finishers each won exactly three matches. Therefore the winner of the contest won all 4 matches. Now let", null, "$x$ be the number of people who won two matches. It follows that", null, "$2n-x-4$ people won one match. We now have the equation", null, "$$4 + 3\\cdot 3 + 2x + 2n-x-4 = 4n$$\n\nSolving for", null, "$x$ yields", null, "$x=2n-9$, so", null, "$\\boxed{2n-9}$ students won exactly two matches. It then follows that", null, "$\\boxed{5}$ people won a single match." ]
[ null, "https://latex.artofproblemsolving.com/f/9/3/f93f5c51ea4b04b4992b003b39479f29018f6bda.png ", null, "https://latex.artofproblemsolving.com/7/f/8/7f801049c8303e12a4b080f200f85fc22e7c46ef.png ", null, "https://latex.artofproblemsolving.com/c/7/c/c7cab1a05e1e0c1d51a6a219d96577a16b7abf9d.png ", null, "https://latex.artofproblemsolving.com/7/c/d/7cde695f2e4542fd01f860a89189f47a27143b66.png ", null, "https://latex.artofproblemsolving.com/c/7/c/c7cab1a05e1e0c1d51a6a219d96577a16b7abf9d.png ", null, "https://latex.artofproblemsolving.com/4/1/c/41c544263a265ff15498ee45f7392c5f86c6d151.png ", null, "https://latex.artofproblemsolving.com/1/7/4/174fadd07fd54c9afe288e96558c92e0c1da733a.png ", null, "https://latex.artofproblemsolving.com/b/4/1/b417a127620d5b467fe7a48ad5f7d843e5c390b5.png ", null, "https://latex.artofproblemsolving.com/f/8/2/f82be0cbb450a75fe5a294e8dc2a15c9d4db6c82.png ", null, "https://latex.artofproblemsolving.com/a/5/9/a5931009776f45d95058fe8bbdc4a490b195512e.png ", null, "https://latex.artofproblemsolving.com/1/7/4/174fadd07fd54c9afe288e96558c92e0c1da733a.png ", null, "https://latex.artofproblemsolving.com/d/d/3/dd3a0adc0e88e4d0eeba60a6066d6c922e1aabab.png ", null, "https://latex.artofproblemsolving.com/2/6/e/26eeb5258ca5099acf8fe96b2a1049c48c89a5e6.png ", null, "https://latex.artofproblemsolving.com/c/4/d/c4dc1e7b56cdfe760ee9a94e1ed9a20f6bb58439.png ", null, "https://latex.artofproblemsolving.com/d/7/1/d71482e7aac2958be89470f47a4dfa2721b85d67.png ", null, "https://latex.artofproblemsolving.com/2/6/e/26eeb5258ca5099acf8fe96b2a1049c48c89a5e6.png ", null, "https://latex.artofproblemsolving.com/5/f/6/5f60660f5c997ea4829a2dd74020754f6c4d4bda.png ", null, "https://latex.artofproblemsolving.com/7/6/1/761022cbc9ba62899400ff8c79c05007d1601979.png ", null, "https://latex.artofproblemsolving.com/8/9/7/897a203bdf45af36f2b9edc59c3bc0beeb898be1.png ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9860602,"math_prob":0.99687326,"size":1483,"snap":"2021-43-2021-49","text_gpt3_token_len":322,"char_repetition_ratio":0.16903313,"word_repetition_ratio":0.030303031,"special_character_ratio":0.21443021,"punctuation_ratio":0.104377106,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99726266,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,2,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,2,null,null,null,2,null,null,null,2,null,2,null,null,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-06T21:00:08Z\",\"WARC-Record-ID\":\"<urn:uuid:16bf5bde-abb3-4bc0-8f89-917362ad5e04>\",\"Content-Length\":\"37032\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fb807c95-8b34-41cf-9e9a-0c06faf8b98f>\",\"WARC-Concurrent-To\":\"<urn:uuid:e40b478f-c75f-4bfd-a6d2-29af935c819a>\",\"WARC-IP-Address\":\"172.67.69.208\",\"WARC-Target-URI\":\"https://artofproblemsolving.com/wiki/index.php/2006_Romanian_NMO_Problems/Grade_9/Problem_4\",\"WARC-Payload-Digest\":\"sha1:SEDA2UHV52XOFUWF2LJF6SZO3Q2NWAPE\",\"WARC-Block-Digest\":\"sha1:BIBUFMZ6GAXYZS5UEGRKOS2WQLXR32OX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363312.79_warc_CC-MAIN-20211206194128-20211206224128-00242.warc.gz\"}"}
https://answers.opencv.org/question/11006/what-is-the-carttopolar-angle-direction-in-opencv/
[ "# What is the cartToPolar angle direction in OpenCV?\n\nHello!\n\nI would like to ask, where exactly the angle computed by carToPolar function points? For example, if I have a 3x3 MAT matrix, and the angle at point [1,1] is 45°, it will point to [2,2] or [0,2] ?\n\nThank you\n\nedit retag close merge delete\n\nSort by » oldest newest most voted\n\nBasically it the output is calculated clockwise starting from the x - axis. This is dependant on how the axes in OpenCV are actually visualized. So take a look at the following picture.", null, "This means that if a point [1,1] has a gradient angle of 45°, it will point towards the pixel that is at 45° looking from the coordinate center. In this case it will be [2,2]. An angle of 90° would point to [2,0].\n\nmore\n\nIf it helped you, accept the answer, so that this topic appears solved. You are welcome!\n\nOfficial site\n\nGitHub\n\nWiki\n\nDocumentation" ]
[ null, "https://answers.opencv.org/upfiles/13654239544240452.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9178984,"math_prob":0.9469635,"size":264,"snap":"2020-45-2020-50","text_gpt3_token_len":81,"char_repetition_ratio":0.13846155,"word_repetition_ratio":0.0,"special_character_ratio":0.29166666,"punctuation_ratio":0.17460318,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98901725,"pos_list":[0,1,2],"im_url_duplicate_count":[null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-21T14:07:43Z\",\"WARC-Record-ID\":\"<urn:uuid:37519661-a727-4de0-aefd-f8dbf4445e0c>\",\"Content-Length\":\"55459\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:44d0de9d-f06f-40e8-9ca0-5df3748811bf>\",\"WARC-Concurrent-To\":\"<urn:uuid:68fabfe6-0a66-42bb-8264-dcdd9709f404>\",\"WARC-IP-Address\":\"5.9.49.245\",\"WARC-Target-URI\":\"https://answers.opencv.org/question/11006/what-is-the-carttopolar-angle-direction-in-opencv/\",\"WARC-Payload-Digest\":\"sha1:BSS4FREOJFYJJ7QNDU7AABBQKQ4S6E3J\",\"WARC-Block-Digest\":\"sha1:D2CP6YMKJQNEDA6STKCNJOR3GJDN4XPX\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107876500.43_warc_CC-MAIN-20201021122208-20201021152208-00689.warc.gz\"}"}
https://www.numbers.education/33412.html
[ "Is 33412 a prime number? What are the divisors of 33412?\n\n## Parity of 33 412\n\n33 412 is an even number, because it is evenly divisible by 2: 33 412 / 2 = 16 706.\n\nFind out more:\n\n## Is 33 412 a perfect square number?\n\nA number is a perfect square (or a square number) if its square root is an integer; that is to say, it is the product of an integer with itself. Here, the square root of 33 412 is about 182.789.\n\nThus, the square root of 33 412 is not an integer, and therefore 33 412 is not a square number.\n\n## What is the square number of 33 412?\n\nThe square of a number (here 33 412) is the result of the product of this number (33 412) by itself (i.e., 33 412 × 33 412); the square of 33 412 is sometimes called \"raising 33 412 to the power 2\", or \"33 412 squared\".\n\nThe square of 33 412 is 1 116 361 744 because 33 412 × 33 412 = 33 4122 = 1 116 361 744.\n\nAs a consequence, 33 412 is the square root of 1 116 361 744.\n\n## Number of digits of 33 412\n\n33 412 is a number with 5 digits.\n\n## What are the multiples of 33 412?\n\nThe multiples of 33 412 are all integers evenly divisible by 33 412, that is all numbers such that the remainder of the division by 33 412 is zero. There are infinitely many multiples of 33 412. The smallest multiples of 33 412 are:\n\n## Numbers near 33 412\n\n### Nearest numbers from 33 412\n\nFind out whether some integer is a prime number" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85712254,"math_prob":0.9996804,"size":355,"snap":"2020-45-2020-50","text_gpt3_token_len":106,"char_repetition_ratio":0.19373219,"word_repetition_ratio":0.0,"special_character_ratio":0.36056337,"punctuation_ratio":0.1590909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99884623,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-20T06:08:26Z\",\"WARC-Record-ID\":\"<urn:uuid:182a08bc-9f5b-4f63-93df-ed6c7a658bf9>\",\"Content-Length\":\"18905\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc882055-7d09-4cfb-a783-1ff3241e7d5c>\",\"WARC-Concurrent-To\":\"<urn:uuid:dc081121-4ec5-4025-b2d5-d6bcb3c0f8e5>\",\"WARC-IP-Address\":\"213.186.33.19\",\"WARC-Target-URI\":\"https://www.numbers.education/33412.html\",\"WARC-Payload-Digest\":\"sha1:HBIGZB2KHQLYCAQW3AVJMYNB5JI3TNT7\",\"WARC-Block-Digest\":\"sha1:UGOVJJHMXI2Q7ZHKUX4DW4WTAVP6LCC7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107869933.16_warc_CC-MAIN-20201020050920-20201020080920-00182.warc.gz\"}"}
https://metanumbers.com/6993
[ "6993 (number)\n\n6,993 (six thousand nine hundred ninety-three) is an odd four-digits composite number following 6992 and preceding 6994. In scientific notation, it is written as 6.993 × 103. The sum of its digits is 27. It has a total of 5 prime factors and 16 positive divisors. There are 3,888 positive integers (up to 6993) that are relatively prime to 6993.\n\nBasic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 4\n• Sum of Digits 27\n• Digital Root 9\n\nName\n\nShort name 6 thousand 993 six thousand nine hundred ninety-three\n\nNotation\n\nScientific notation 6.993 × 103 6.993 × 103\n\nPrime Factorization of 6993\n\nPrime Factorization 33 × 7 × 37\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 3 Total number of distinct prime factors Ω(n) 5 Total number of prime factors rad(n) 777 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 6,993 is 33 × 7 × 37. Since it has a total of 5 prime factors, 6,993 is a composite number.\n\nDivisors of 6993\n\n1, 3, 7, 9, 21, 27, 37, 63, 111, 189, 259, 333, 777, 999, 2331, 6993\n\n16 divisors\n\n Even divisors 0 16 8 8\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 16 Total number of the positive divisors of n σ(n) 12160 Sum of all the positive divisors of n s(n) 5167 Sum of the proper positive divisors of n A(n) 760 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 83.6242 Returns the nth root of the product of n divisors H(n) 9.20132 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 6,993 can be divided by 16 positive divisors (out of which 0 are even, and 16 are odd). The sum of these divisors (counting 6,993) is 12,160, the average is 760.\n\nOther Arithmetic Functions (n = 6993)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 3888 Total number of positive integers not greater than n that are coprime to n λ(n) 36 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 900 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 3,888 positive integers (less than 6,993) that are coprime with 6,993. 
And there are approximately 900 prime numbers less than or equal to 6,993.\n\nDivisibility of 6993\n\n m n mod m 2 3 4 5 6 7 8 9 1 0 1 3 3 0 1 0\n\nThe number 6,993 is divisible by 3, 7 and 9.\n\n• Arithmetic\n• Deficient\n\n• Polite\n\nBase conversion (6993)\n\nBase System Value\n2 Binary 1101101010001\n3 Ternary 100121000\n4 Quaternary 1231101\n5 Quinary 210433\n6 Senary 52213\n8 Octal 15521\n10 Decimal 6993\n12 Duodecimal 4069\n20 Vigesimal h9d\n36 Base36 5e9\n\nBasic calculations (n = 6993)\n\nMultiplication\n\nn×y\n n×2 13986 20979 27972 34965\n\nDivision\n\nn÷y\n n÷2 3496.5 2331 1748.25 1398.6\n\nExponentiation\n\nny\n n2 48902049 341972028657 2391410396398401 16723132902014018193\n\nNth Root\n\ny√n\n 2√n 83.6242 19.1229 9.14462 5.87398\n\n6993 as geometric shapes\n\nCircle\n\n Diameter 13986 43938.3 1.5363e+08\n\nSphere\n\n Volume 1.43245e+12 6.14521e+08 43938.3\n\nSquare\n\nLength = n\n Perimeter 27972 4.8902e+07 9889.6\n\nCube\n\nLength = n\n Surface area 2.93412e+08 3.41972e+11 12112.2\n\nEquilateral Triangle\n\nLength = n\n Perimeter 20979 2.11752e+07 6056.12\n\nTriangular Pyramid\n\nLength = n\n Surface area 8.47008e+07 4.03018e+10 5709.76\n\nCryptographic Hash Functions\n\nmd5 383beaea4aa57dd8202dbff464fee3af 73a5135c1a9bafa20c64829a43d6103c204d8e86 87b99ff27b5d7f00062f4fe7c4735d6e7f16af66ab064bf1643210187c7f487a 6344eb14d9a7d739f80f90d46758d0d831a33c91e98f3f051c6175f0c5be16b2bd5e89e3558f5dbc57747a899641b04f28cc15b5f7aad437735ec2434312b9fb 94e3308296d1dcfa45220740da9d54ebc0b2da20" ]
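The arithmetic functions above can be cross-checked with SymPy (a sketch; assumes sympy is installed):

from sympy import factorint, divisor_count, divisor_sigma, totient, primepi

n = 6993
print(factorint(n))          # {3: 3, 7: 1, 37: 1}
print(divisor_count(n))      # 16
print(divisor_sigma(n))      # 12160
print(totient(n))            # 3888
print(primepi(n))            # number of primes <= n (the page quotes ~900)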
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63077694,"math_prob":0.98980856,"size":4416,"snap":"2022-05-2022-21","text_gpt3_token_len":1583,"char_repetition_ratio":0.117860384,"word_repetition_ratio":0.025411062,"special_character_ratio":0.44701087,"punctuation_ratio":0.07542263,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9967778,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-23T21:30:09Z\",\"WARC-Record-ID\":\"<urn:uuid:33567f71-bfa3-4092-b7e7-09d3748d122a>\",\"Content-Length\":\"39275\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:17f07f23-be2e-4974-bc2c-079fed38b949>\",\"WARC-Concurrent-To\":\"<urn:uuid:a5dc3b11-f241-4794-b1b1-c904840787b4>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/6993\",\"WARC-Payload-Digest\":\"sha1:P4FVBMJTSAUSF6SH57PH4EDROAIYLY4Y\",\"WARC-Block-Digest\":\"sha1:4ZWCBEVMQCAMD6JJ7K2UVU4QEDFKGQFU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304309.59_warc_CC-MAIN-20220123202547-20220123232547-00436.warc.gz\"}"}
https://www.mdpi.com/1424-8220/17/4/904/htm
[ "Next Article in Journal\nRobot-Beacon Distributed Range-Only SLAM for Resource-Constrained Operation\nPrevious Article in Journal\nConnectivity Restoration in Wireless Sensor Networks via Space Network Coding\nOpen AccessArticle\n\n# Chemical Source Localization Fusing Concentration Information in the Presence of Chemical Background Noise\n\nby", null, "Víctor Pomareda 1,2,", null, "Rudys Magrans 1,", null, "Juan M. Jiménez-Soto 1,", null, "Dani Martínez 3,", null, "Marcel Tresánchez 3", null, ",", null, "Javier Burgués 1,2", null, ",", null, "Jordi Palacín 3", null, "and", null, "Santiago Marco 1,2,*,†", null, "1\nSignal and Information Processing for Sensing Systems, Institute for Bioengineering of Catalonia, Baldiri Reixac 4-8, Barcelona 08028, Spain\n2\nDepartment of Engineering: Electronics, Universitat de Barcelona, Martí i Franqués 1, Barcelona 08028, Spain\n3\nDepartment of Computer Science and Industrial Engineering, Universitat de Lleida, Jaume II 69, Lleida 25001, Spain\n*\nAuthor to whom correspondence should be addressed.\nThis paper is an extended version of our paper: Pomareda, V.; Marco, S. Chemical Plume Source Localization with Multiple Mobile Sensors using Bayesian Inference under Background Signals. In Proceedings of the International Symposium on Olfaction and Electronic Nose, New York, NY, USA, 2–5 May 2011.\nSensors 2017, 17(4), 904; https://doi.org/10.3390/s17040904\nReceived: 14 February 2017 / Revised: 6 April 2017 / Accepted: 11 April 2017 / Published: 20 April 2017\n\n## Abstract\n\nWe present the estimation of a likelihood map for the location of the source of a chemical plume dispersed under atmospheric turbulence under uniform wind conditions. The main contribution of this work is to extend previous proposals based on Bayesian inference with binary detections to the use of concentration information while at the same time being robust against the presence of background chemical noise. For that, the algorithm builds a background model with robust statistics measurements to assess the posterior probability that a given chemical concentration reading comes from the background or from a source emitting at a distance with a specific release rate. In addition, our algorithm allows multiple mobile gas sensors to be used. Ten realistic simulations and ten real data experiments are used for evaluation purposes. For the simulations, we have supposed that sensors are mounted on cars which do not have among its main tasks navigating toward the source. To collect the real dataset, a special arena with induced wind is built, and an autonomous vehicle equipped with several sensors, including a photo ionization detector (PID) for sensing chemical concentration, is used. Simulation results show that our algorithm, provides a better estimation of the source location even for a low background level that benefits the performance of binary version. The improvement is clear for the synthetic data while for real data the estimation is only slightly better, probably because our exploration arena is not able to provide uniform wind conditions. Finally, an estimation of the computational cost of the algorithmic proposal is presented.\n\n## 1. Introduction\n\nLocalization of chemical sources in urban scenarios (large cities) is a major challenge for intelligence and police authorities. In the clandestine production phase of illicit substances like explosives, but also drugs, significant levels of precursors are spread in the atmosphere. 
Such suspicious compounds could be reported by a system of mobile sensors and could be located using localization algorithms, providing complementary information to the authorities for intervening at an early stage.
Several strategies for source localization have been proposed in the literature [1]. These strategies have been integrated into robot systems with tracking abilities. Many of these tracking strategies have been inspired by bacteria or animal behavior using olfaction for foraging or mating: lobsters, blue crabs, ants and moths provide behavioral models for odor tracking [2,3]. One of the simplest strategies consists in seeking changes in local concentration within an odor plume, assuming a smooth chemical gradient in a diffusion-dominated flow. However, this approach, called chemotaxis [4], is not useful in a realistic environment where the fluid flow is dominated by turbulence, which can be caused by forced ventilation, temperature gradients or the presence of obstacles. Under these conditions, on the order of ten minutes is required to determine the time-averaged concentration with sufficient accuracy to perceive the concentration gradient [5]. In consequence, the exploration of the area of interest becomes too slow. Some other strategies additionally exploit both fluid velocity information and chemical concentration (anemotaxis [6,7]). More recent proposals based on information theory, like infotaxis [8], are based on binary detections, and information plays a role similar to that of concentration in chemotaxis. Odor patches are expected to be found only intermittently in the medium, and then information is sparse. In some cases, methods aim to estimate the gas distribution through analytical Gaussian models [9,10,11], others focus on plume mapping via hidden Markov methods [12], whereas in Farrell et al. [13] a strategy for chemical plume tracing and source location declaration is presented.
Navigation experiments aiming to find chemical sources are strongly limited by the limit of detection and the selectivity of low-cost chemical sensors, and even of medium-priced detectors (e.g., ion mobility spectrometers). Thus, the rapid decay of the chemical concentration with increasing distance from the source can be a critical issue. Poor limits of detection result in a reduced area where the plume can effectively be detected. This is especially important in applications where the search zone has an area of several square kilometers. In such situations, it becomes very important to set the detection thresholds very close to the noise level, but this would result in a high number of false alarms, and most localization algorithms would fail catastrophically. To the best of our knowledge, none of the published methods have addressed this problem.
Additionally, in any real scenario there could be background levels of a multitude of chemicals caused by environmental pollution. Because of limited selectivity, there will be substances which produce interference in the detector reading, hindering the detection and localization tasks. The combination of detector electronic noise and, mainly, interfering chemical agents results in variable background readings that may change with time and with the position of the detector. These shifts in background levels hamper the selection of an optimum threshold, which is usually considered to be constant all along the area under exploration.
As far as we know, this problem has not been previously tackled in the literature.
To address these two issues (a threshold close to the limit of detection and the presence of background levels), probabilistic approaches like plume mapping Bayesian methods appear to be a good choice. Pang and Farrell published a source-likelihood mapping approach based on Bayesian inference in 2006 [14]. The main idea behind the algorithm consists in implementing a stochastic approach for plume modeling and in estimating the most likely source position considering the sequence of detection/non-detection events and fluid flow measurements along the robot's trajectory. Pang's algorithm has been tested successfully with data previously collected using an autonomous underwater vehicle [13].
However, this algorithm uses binary detection events, and no chemical concentration information is used to build the probability map, since it only considers the concentrations above a certain threshold as detection or non-detection events. Moreover, after setting the threshold level, the approach assumes that the rate of false alarms is very low. In a real scenario where background signals are present, this is only achieved when the threshold is set at a high level. However, this option seriously reduces the maximum plume detection distance. Therefore, there is a trade-off: on the one hand, the threshold needs to be set low enough (close to the sensor detection limit) if chemicals from the source are to be detected at large distances; on the other hand, the threshold needs to be high enough to prevent false alarms. So, how to set the threshold level becomes a critical issue in real environments using existing approaches, especially when the background intensity is non-uniform in the explored area.
Finally, in order to speed up the area exploration, it is important that the algorithm can be extended to work with multiple robots. Recently, Kang and Li [15] presented a novel plume-tracking algorithm via multiple autonomous robots using a leader-follower strategy, demonstrating its superiority versus a single-robot algorithm in terms of both the computational cost and the accuracy of source location. Meng et al. [3] have also studied the multi-robot problem for plume tracking in fluctuating airflow environments, showing the efficiency and robustness of the adapted ant colony optimization (ACO) algorithm over the traditional ACO algorithm. Meng et al. remark the importance of a proper number of robots and a well-defined cooperation mechanism. In our approach, however, it is not strictly necessary to track the plume to obtain a good estimation of the source location: the most likely source position is estimated during the robots' mission, for arbitrary trajectories, by recursively building a probability map using Bayesian inference. The problem of background estimation over the exploration area can be considered as the problem of scalar field mapping. Algorithmic approaches using mobile sensor networks have already been proposed; however, they assume that the agents have communication capabilities so that their exploration paths are optimized after data fusion [16,17,18]. In our scenario, instead, we expect that the sensors can be mounted on vehicles that serve other tasks beyond chemical source localization.
In this sense, we consider cases where there is no feedback between the chemical sensing and the agents' trajectories.
Our main motivation for the present work is to extend the Bayesian plume source localization algorithm previously described by Pang and Farrell [14], using the chemical concentration (instead of binary detections) and assessing its performance in simulated and real environments where background signals may arise. Thus, Pang's algorithm is reformulated for use with continuous analog concentration readings instead of binary detections. Moreover, the algorithm is extended to work with multiple mobile sensors. This new approach requires a probabilistic model for the background and for the plume, which are described in the following sections.
The present algorithm was initially developed for applications to Home Security (European project LOTUS: Localization of Threat Substances in Urban Society, FP7-SEC-217925). In the considered scenario, police cars equipped with GPS and gas chemical sensors (eventually, other fast analyzers like ion mobility spectrometers can be used) carry out their routine patrols while sending the sensors' readings to a central station. Instead of moving towards the chemical source, the patrol vehicles maintain their normal patrol routes while a centralized system continuously analyzes the acquired signals seeking suspicious activity.
The paper is organized as follows: Section 2.1 and Section 2.2 present the basics of the algorithm and the plume and background models. Section 2.3 and Section 2.4 describe the synthetic and real test scenarios, and Section 3 reports the results and the analysis.

## 2. Materials and Methods

#### 2.1. Stochastic Models for Plume and Background

Our proposal requires making some assumptions about the dispersion of the plume (the stochastic model in Section 2.1.1) and a background model (Section 2.1.2). Given an instantaneously measured concentration $c$, it is assumed that there are two additive contributions: one due to the background ($c_b$) and one due to the plume ($c_p$), thus:
$c = c_b + c_p$ (1)
Since we consider all these concentrations as random variables, the probability density function (PDF) of the measured concentration will be the convolution of the PDFs of the two additive terms. In the next sections, we describe how we model the probability density of both terms.

#### 2.1.1. Stochastic Model for Chemical Concentration Measurements

The basis of our stochastic model for the chemical plume is the analytical Gaussian plume model (GPM) [21,22]. The time-averaged plume concentration follows a Gaussian distribution in the directions transverse to the flow if the time average is at least 10 min, as demonstrated in previous works [23,24,25]. This model has been widely used for its simplicity and is appropriate when dispersion is governed by atmospheric turbulence under uniform wind conditions. Atmospheric turbulence is determined by the stability of the atmosphere and the height above the surface layer.
The basic expression for the GPM under a continuous release is:
$\bar{c}(x,y,z) = \frac{q}{2 \pi U_a \sigma_y \sigma_z} \exp\left(-\frac{y^2}{2\sigma_y^2}\right) \exp\left(-\frac{(h-z)^2}{2\sigma_z^2}\right)$ (2)
where $\bar{c}$ is the time-averaged concentration (in g/m³) at a location with coordinates $x$ (downwind), $y$ (crosswind) and $z$ (vertical); $q$ is the continuous source release rate or source strength (in g/s); $U_a$ is the mean wind speed in the downwind direction (in m/s); $h$ is the plume height (in m); and $\sigma_y$, $\sigma_z$ are the dispersion coefficients (in m) in the crosswind and vertical directions, respectively, modeled as $\sigma_y = a \cdot x^b$ and $\sigma_z = c \cdot x^d$, where $a$, $b$, $c$ and $d$ are parameters obtained from a table whose values depend on the atmospheric conditions, which can be organized in six levels (from A, very unstable, to F, very stable) according to Pasquill's stability classes [26]. In Equation (2), the resulting concentration distribution is due to the transport of chemicals by advection (due to the mean wind speed), due to concentration gradients within the plume width (lateral dispersion due to diffusion, but also turbulent mixing) and due to plume meandering. The decay of the mean concentration is exponential; thus, concentration levels below the sensor detection limit are reached very quickly. This issue makes the setting of the threshold level critical, especially if the source should be detected far from the release point.
The GPM considers the time-averaged characteristics of a plume dispersed in a turbulent flow, but the sensors respond to the instantaneous plume characteristics (here we assume that the chemical sensor dynamics are much faster than 10 min, which is usually the case). For short time-scale studies, the chemical puff movement can be modeled as a random walk (because of transversal velocity fluctuations) overlapped on the downflow advection (because of the mean velocity) [27]. However, we propose an alternative approach, for which an additional component needs to be added to the GPM to model the unpredictable and random fluctuations in concentration due to turbulent stirring and plume meandering. Yee et al. [28,29,30] have carried out empirical studies on plume statistics in urban areas using scale fluid models in a variety of plume conditions and urban geometries. Their results prove that instantaneous concentration fluctuations fit very well the clipped-gamma PDF over a very wide range of atmospheric conditions and at several receptor positions. The clipped-gamma distribution (CGD) is defined in terms of the four parameters $\gamma$, $k^*$, $s$ and $\lambda$ as [30]:
$f(\hat{c}) = \frac{\left(\frac{\hat{c}+\lambda}{s}\right)^{k^*-1} \exp\left(-(\hat{c}+\lambda)/s\right)}{s\,\Gamma(k^*)} + (1-\gamma)\,\delta(\hat{c})$ (3)
where $\hat{c}$ is the instantaneous concentration, $\Gamma(\cdot)$ is the gamma function, $\delta(\cdot)$ is the Dirac delta function, and $\lambda$, $k^*$ and $s$ are the shift, shape and scale parameters, respectively. The total PDF is composed of a mixed-fluid part due to in-plume mixing of eddies containing the target substance (the first term on the right-hand side), and an unmixed ambient-fluid part (the second term on the right-hand side) caused by plume meandering, which produces intermittent periods of zero concentration during a fraction of time $(1-\gamma)$, $\gamma$ being the intermittency factor. Although Equation (3) is specified in terms of four parameters, it can be uniquely modeled by the mean (M) and the standard deviation (SD) of a series of readings [30].
There is a simple relation among M, SD and the plume intermittency; thus, $\gamma$ is determined as:
$\gamma = \gamma(k^*, s, \lambda) = \frac{\Gamma(k^*;\, \lambda/s)}{\Gamma(k^*)} = \min\left(1, \frac{3}{K_{int}^2 + 1}\right)$ (4)
where $\Gamma(\nu; c)$ corresponds to the incomplete gamma function and $K_{int}$ is the ratio between the SD and the mean of the series of readings at a fixed position. For a specified value of $K_{int}$, the parameters $\lambda$, $k^*$ and $s$ can be obtained by solving a set of transcendental equations, thus making Equation (3) totally defined. The details for computing these parameters are given in [30].
While the mean concentration decreases rapidly in the downwind direction (see Equation (2)), the magnitude of the fluctuations decreases even more rapidly. As described by Webster [5], $K_{int}$ is estimated to decay as $x^{\theta}$ (where $x$ is the distance from the source and $\theta < 0$), the SD being roughly $K_0$ times the mean time-averaged concentration at a certain distance $x_0$ from the source. Therefore, this parameter can be modeled as:
$K_{int} = K_0 \cdot \left(\frac{x}{x_0}\right)^{\theta}$ (5)
To model the instantaneous concentrations due to a chemical plume at a certain distance from the source, the clipped-gamma distribution is used (Equation (3)). The mean value M of the series of concentrations due to the plume is given by the GPM (Equation (2)), and the SD can be modeled as:
$\sigma_{SD} = K_0 \cdot \left(\frac{x}{x_0}\right)^{\theta} M$ (6)
Since the PDF depends on the mean and the SD, and these parameters depend on the distance from the source, the PDF of the instantaneous readings contains information about the relative position between the sensor and the source. Since concentration fluctuations (intermittencies) decrease faster than the mean value with the downwind distance to the source, the plume becomes homogeneous faster than the mean concentration dilutes [28,31].
Additionally, previous literature has characterized the power spectral density (PSD) of concentration readings within a dispersing plume. We will follow the model described by Jones et al. [31].

#### 2.1.2. Stochastic Background Model

In real scenarios such as residential areas or urban environments, pollution or interfering substances are expected to be found. This problem becomes even more serious due to the common use of partially selective sensors such as metal oxide (MOX) sensors or photoionization detectors (PIDs).
Moreover, meteorological conditions (wind conditions and atmospheric stability) can change within a timescale of several hours, and there might be changes in polluting emissions due to day-night cycles of human activity (including motor vehicles or factories) [32,33,34]. For these reasons the background can be considered to change slowly.
As has been previously described, the dispersion of a chemical plume in a turbulent flow shows a highly intermittent nature, with background or zero concentration for long periods of time separated by high peaks of concentration. This behavior will help us to estimate the background model using robust measures that reject the plume peaks.
For the implementation of the proposed algorithm, the exploration area is divided into a uniform grid of rectangular cells. The algorithm estimates the background probability distribution at each cell of the grid. The background is modeled as a Gamma PDF whose standard deviation is smaller than its mean; in these conditions the Gamma PDF resembles a Gaussian PDF but is defined only for positive values of the random variable (the concentration). We will consider that the PSD of the background is bandlimited white noise. A minimal numerical sketch of the plume model is given below.
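To make Equations (2) and (4)–(6) concrete, the following Python sketch evaluates the time-averaged GPM concentration and the derived fluctuation parameters at a given receptor position. This is an illustrative sketch only (not the authors' MATLAB implementation): the dispersion coefficients `a`, `b`, `c`, `d` stand in for tabulated Pasquill-class values, and `x0` and `theta` are assumed values not stated in the text.

```python
import numpy as np

def gpm_mean(x, y, z, q=2.90, Ua=2.5, h=2.0,
             a=0.08, b=0.90, c=0.06, d=0.92):
    """Time-averaged concentration, Equation (2), at downwind distance x,
    crosswind offset y and height z. a, b, c, d are placeholders for the
    tabulated Pasquill-class coefficients."""
    sigma_y = a * x**b                      # crosswind dispersion (m)
    sigma_z = c * x**d                      # vertical dispersion (m)
    return (q / (2.0 * np.pi * Ua * sigma_y * sigma_z)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * np.exp(-(h - z)**2 / (2.0 * sigma_z**2)))

def plume_moments(x, y, z, K0=2.5, x0=1.0, theta=-0.3, **gpm_kwargs):
    """Mean M, SD (Equation (6)) and intermittency factor gamma
    (Equation (4)) of the instantaneous readings; x0 and theta are
    assumptions, not values stated in the paper."""
    M = gpm_mean(x, y, z, **gpm_kwargs)
    K_int = K0 * (x / x0)**theta                # Equation (5)
    SD = K_int * M                              # Equation (6)
    gamma = min(1.0, 3.0 / (K_int**2 + 1.0))    # Equation (4)
    return M, SD, gamma

# example: receptor 200 m downwind, 10 m off-axis, at sensor height
print(plume_moments(200.0, 10.0, 2.0))
```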
#### 2.2.1. Bayesian Estimation of the Likelihood Map for Chemical Source Presence

A summary of the notation used for the description of the algorithm can be found in Table A1. The search area contains $N_c$ rectangular cells of size $L_x \cdot L_y$ each, where $L_x$ and $L_y$ are the cell lengths in the $x$ and $y$ directions of the grid map, respectively. The size of these cells is a trade-off between the spatial resolution of the algorithm and the computational cost.
Let $0 \le \alpha'_i \le 1$ represent the probability that the chemical source is in cell $i$. It is assumed that the search area contains exactly one source; hence $\sum_{i=1}^{N_c} \alpha'_i = 1$. It is assumed that the prior information on the potential existence of a chemical source is given by previous intelligence research. We do not consider in this formalism the case where there is uncertainty about the presence or not of a chemical source.
Initially (at $t = t_0$), if no information about the source location is available, all cells are initialized to be equally likely to contain the chemical source: $\alpha'_i(t_0) = 1/N_c$, $\forall i \in [1, N_c]$.
Given that we measure a concentration at time $t_k$ in cell $j$ (i.e., $c_j$ refers to a concentration reading in cell $j$), we can calculate the source probability map based on this single reading. A Bayesian approach is used to determine whether the main contributor to the measurement is the background or the presence of a plume patch. The posterior probability of the presence of a plume, given the measurement, is calculated using Bayes' theorem:
$P(A \mid c_j) = \frac{P(c_j \mid A) \cdot P(A)}{P(c_j)}$ (7)
where $A$ corresponds to the event "the concentration reading was caused by an emitting source upstream (plus a background level)". To infer where the source is located, the posterior probability of a source emitting (the left-hand term in Equation (7)) can be further decomposed into the probability of that source being located at each cell. Again, using Bayes' theorem:
$P(A_i \mid c_j) = \frac{P(c_j \mid A_i) \cdot P(A_i)}{P(c_j \mid A_i) \cdot P(A_i) + P(c_j \mid \bar{A}_i) \cdot \left[1 - P(A_i)\right]}$ (8)
where $\bar{A}$ means "the concentration reading was caused only by background levels and not by an emitting source", subindex $i$ refers to a source located in cell $i$, and subindex $j$ refers to the current cell $j$ where the measurement was taken.
Since we consider that a background of interfering substances is always present, in the absence of plume only the background component ($c_b$) is present; in the presence of plume, both components ($c_b$ and $c_p$) are present and the concentrations are modeled by the convolution of the plume PDF and the background PDF. However, even in the presence of the plume, $c_p$ may be zero due to plume intermittency. The PDF of the concentration component due to the source is modeled using the GPM for the mean, and the SD from Equation (6), considering the relative position of cell $i$ (potential source location) and cell $j$ (location of the sensor).
Taking these considerations into account, the previous probabilities have the following interpretation: $P(A_i)$ is the prior probability of the presence of a source at cell $i$; $P(c_j \mid A_i)$ is the probability that the measurement at cell $j$ is due to the addition of the background at cell $j$ and a plume due to a source at cell $i$, and it is obtained by the convolution between the PDF of the plume and the PDF of the background at cell $j$; $P(c_j \mid \bar{A}_i)$ is the probability that the measurement of chemicals at cell $j$ is due to the current background at cell $j$, and it is obtained from the PDF of the background at cell $j$. We define $S_{ij}(t_k) \equiv P(A_i \mid c_j)$ as the probability of having a source in cell $i$ given that a certain amount of chemical was measured at cell $j$ at time $t_k$. A minimal sketch of this single-reading update is given below.
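The per-reading posterior of Equation (8) can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: the Gamma distributions used in the usage example are stand-ins for the cell background PDF and for the plume-background convolution, whose exact shapes come from Section 2.1.

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def single_reading_posterior(c_meas, prior, bg_pdf, mix_pdfs):
    """Per-cell posterior S_ij = P(A_i | c_j), Equation (8), for one
    reading c_meas taken in cell j. bg_pdf(c) models P(c_j | not A_i)
    (background only); mix_pdfs[i](c) models P(c_j | A_i), i.e., the
    background-plus-plume PDF for a hypothetical source in cell i.
    Normalization over cells (beta_ij, Equation (9)) is applied later."""
    p_bg = bg_pdf(c_meas)
    S = np.array([(f(c_meas) * p) / (f(c_meas) * p + p_bg * (1.0 - p))
                  for f, p in zip(mix_pdfs, prior)])
    return S

# toy usage with two candidate source cells; Gamma stand-ins (assumed)
bg = gamma_dist(a=16.0, scale=0.003).pdf            # background, ~0.05 ppm
mix = [gamma_dist(a=2.0, scale=s).pdf for s in (0.05, 0.20)]
print(single_reading_posterior(0.4, [0.5, 0.5], bg, mix))
```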
However, in this approach, where we consider each cell independently, $\sum_{i=1}^{N_c} S_{ij}(t_k) = 1$ is not guaranteed. Therefore, the result is normalized to ensure that the total probability is 1 when the individual cell probabilities are added up. The nomenclature is henceforth the same as that used in [14]:
$\beta_{ij}(t_k) = \frac{S_{ij}(t_k)}{\sum_{i=1}^{N_c} S_{ij}(t_k)}$ (9)
Now $\sum_{i=1}^{N_c} \beta_{ij}(t_k) = 1$ is guaranteed, and $\beta_{ij}(t_k)$ calculated over all cells ($i = 1, \ldots, N_c$) gives the source probability map at time $t_k$ based on a single concentration measured at cell $j$.
Using Bayesian theory and following the same procedure described in [14], each new measurement can be incorporated recursively to update the source probability map.
$\alpha'_{ij}(t_k) = P(A_i \mid B(t_k))$ is defined as the probability of cell $i$ containing the source, given the sequence of concentrations $B(t_k)$ along the trajectory of the mobile sensors up to time $t_k$. Defining $P(A_i \mid D_j(t_k)) = \beta_{ij}(t_k)$, where $D_j(t_k)$ is the measured concentration at time $t_k$, $P(A_i \mid B(t_k))$ is computed from $B(t_{k-1})$ and $D_j(t_k)$, which are supposed to be independent events, finally obtaining:
$\alpha'_{ij}(t_k) = N_c \cdot \alpha'_{ij}(t_{k-1}) \cdot \beta_{ij}(t_k)$ (10)
where, if $\alpha'_{ij}(t_k)$ is computed over all cells ($i = 1, \ldots, N_c$), an updated source probability map is obtained recursively.
Independently of the number of mobile sensors, the only information required by the algorithm is the position where the measurement was obtained, the chemical concentration reading and the fluid flow measurement. The extension to multiple mobile sensors is as follows: we build an integrated sequence of readings by addressing the set of mobile sensors in a circular (round-robin) manner, and then send this sequence of measurements to the original algorithm. In other words, we do not fuse a posteriori maps built from individual sensors; we fuse the sequence of measurements at the input of the estimation algorithm.

#### 2.2.2. Background Estimation

The background stochastic model is built from robust estimates of the mean and the dispersion, in order to reject the effect of the intermittencies. We use the median and the median absolute deviation (MAD) to build the background probability density model. These parameters are estimated over a buffer of the last 50 measurements. These measurements can be spread over several cells depending on the speed of the robot (typically fewer than 10). This is not a problem if the spatial variation of the background is sufficiently smooth.
To allow the model to adapt to a slowly changing background, the parameters of the model (mean and standard deviation) are filtered with an exponential moving average. This filter weights the current estimate of the background against the previous one depending on the time distance. In this way, the system is able to forget old values and adapt to new ones. A minimal sketch of this robust background estimator follows.
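The following Python sketch illustrates the robust background update for a single cell. The 50-sample buffer comes from the text; the smoothing factor `alpha` and the Gaussian-consistency factor 1.4826 used to convert the MAD into an SD estimate are assumptions for illustration.

```python
import numpy as np

def robust_background_update(buffer, old_mean, old_sd, alpha=0.1):
    """Update the per-cell background model (mean, SD) from the buffer of
    the last readings. Median and MAD reject the sparse plume peaks; an
    exponential moving average tracks the slowly changing background."""
    med = np.median(buffer)
    mad = np.median(np.abs(buffer - med))
    sd_robust = 1.4826 * mad                    # MAD -> SD (assumed factor)
    new_mean = (1.0 - alpha) * old_mean + alpha * med
    new_sd = (1.0 - alpha) * old_sd + alpha * sd_robust
    return new_mean, new_sd

# usage: a 50-sample buffer of Gamma background with sparse plume spikes
rng = np.random.default_rng(0)
buf = rng.gamma(shape=16.0, scale=0.003, size=50)   # background ~0.05 ppm
buf[::17] += 1.0                                    # intermittent peaks
print(robust_background_update(buf, old_mean=0.05, old_sd=0.01))
```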
#### 2.3.1. Synthetic Scenario Description and Simulation

For realistic simulations, the scenario envisioned considers atmospheric plumes in an urban area encompassing hundreds of thousands of square meters and containing only one chemical source. The sensors, located in vehicles, can transmit their current position in the grid (e.g., using a GPS sensor) together with the chemical sensor readings, and they explore the search area by moving across the cells performing random exploration. It is assumed that each vehicle mounts a single sensor. It is considered that the main task of the vehicles is not that of tracking the plume, but patrolling a certain area while simultaneously updating a probability map for the source location using the available information.
The exploration arena in the synthetic case is as follows. In order to test both algorithms, a synthetic scenario (grid size of 1 km × 1 km) is generated. The area is divided into cells of size 100 m × 100 m. A square sub-grid of lanes (100 m separation between lanes) is interlaced over the main grid (Figure 1). The sensors randomly explore the area over this sub-grid of lanes, which simulates streets within an urban environment (Manhattan style). Using this configuration, the movement of the sensors is constrained to certain lanes over the main grid. A clandestine laboratory for home-made explosive production emits explosive precursors. In this scenario, we consider that a chemical source emitting with a source strength $q = 2.90$ g/s is placed in the grid at position (440 m, 440 m), which corresponds to coordinates (5, 5) on the rectangular grid. We consider that the main dispersed substance is acetone (molecular weight: 58.08 g/mol) at one atmosphere of pressure and 25 °C. This Gaussian plume distribution has been generated from Equation (2) with the plume being dispersed in a 2D plane at the same height as the sensors ($z = h = 2$ m). It is assumed that there is no deposition of the substance on surfaces. In the simulations, the wind field is constant, with a wind speed of $U_a = 2.5$ m/s and a wind direction of 45°. The dispersion coefficients $\sigma_y$ and $\sigma_z$ depend on wind conditions and atmospheric stability, which has been set to neutral ('D' on the Pasquill-Gifford scale). Moreover, a mean background distribution is deployed over the area with a different mean level in each cell and with SD equal to 60% of the mean value in all cells (based on our own recorded data using a PID sensor measuring outdoors in Barcelona over a period of several hours).
Five mobile sensors with a constant velocity of 15 km/h sense the area continuously. The sampling period of the sensors is set to 3 s, the detection limit to 0.1 ppm and the sensor resolution to 0.01 ppm, which are realistic values for PID technologies. We assume that the response time of the sensor is much faster than the typical 10 min time average considered in the Gaussian plume model (Equation (2)). The total simulation time was set to 300 min.
The mean background level is different in each cell, but is stationary over time. Series of concentration fluctuations are generated in each cell considering the wind field created, the atmospheric conditions and the background. The stochastic model of the plume concentrations has already been described in Section 2.1.1.
However, here we give some additional technical details concerning the practical implementation.
Since we have defined the PDF and the PSD which will be used to model concentration fluctuations, we use the percentile transformation method (PTM) described by Papoulis [20] to generate a series of concentration fluctuations with the desired PDF and PSD.
Specifically, the procedure to generate realistic plume readings consists of the following steps: (i) generate a time series of Gaussian white noise; (ii) filter the previous time series with the designed FIR filter to achieve the desired PSD; and (iii) apply the PTM. This method is based on the following expression:
$c_i = F_c^{-1}\left(F_z(z_i)\right)$ (11)
where $z_i$ is a random sequence of Gaussian white noise having the desired PSD, with cumulative distribution function (CDF) $F_z(z)$, and $c_i$ is the sequence of realistic readings in the cell, with CDF $F_c(c)$. This CDF corresponds to the clipped-gamma CDF (Equation (3)):
$F(c) \equiv \Pr(C \le c) = \int_{0^-}^{c} f(c')\,dc' = 1 - \frac{\Gamma\left(k^*;\, (c+\lambda)/s\right)}{\Gamma(k^*)}$ (12)
and $F_c^{-1}$ is the inverse clipped-gamma CDF. The clipped-gamma PDF (and its CDF) depends on the distance to the source. Its two parameters, the mean and the SD, are obtained from Equations (2) and (6), respectively. Subsequently, these parameters are used to compute $\gamma$, $k^*$, $s$ and $\lambda$, as explained in detail in [30].
Background concentrations are simulated as white noise with a Gamma PDF. The background concentrations in each cell are added to the time series of plume readings to obtain the final concentration readings at each cell.
An example of the concentration signals delivered by the sensors in this simulation scenario is shown in Figure 2. It can be observed that the SD of the fluctuations decreases faster than the mean concentration, making it difficult to differentiate between plume and background far from the source.
In the binary-based approach, it is assumed that the ratio of false alarms is very low, but the ratio of missed detections can potentially be very high; thus we define μ = 0.3 (70% missed detections). However, the value of this parameter was not defined in the original work [14].
In the concentration-based approach, the source strength $q$ should be initially guessed (this is typically done using previous information about the type of chemical source under investigation: clandestine lab, industrial toxic emissions, pollution, etc.). However, since it is almost impossible to guess this parameter with accuracy, we have studied the sensitivity of the algorithm to errors in this parameter. We scanned the guessed source strength two orders of magnitude around the central exact value. For the simulation studies presented in this work, the parameters used in Equation (6) are $K_0 = 2.5$, with $x_0$ and $\theta$ chosen taking into account previous studies. A minimal sketch of the PTM-based generation of plume readings (steps i–iii) is given below.
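As an illustrative sketch of steps (i)–(iii) and Equation (11), the following Python code shapes Gaussian white noise with an FIR filter and then maps it through a target inverse CDF. Assumptions: a generic low-pass FIR stands in for the designed PSD-shaping filter, and a plain Gamma CDF stands in for the clipped-gamma CDF of Equation (12).

```python
import numpy as np
from scipy import signal, stats

def ptm_series(n, target_ppf, fir_taps, seed=0):
    """Percentile transformation method, Equation (11): generate Gaussian
    white noise, impose the desired PSD with an FIR filter, then impose
    the desired amplitude distribution through the target inverse CDF."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    z = signal.lfilter(fir_taps, [1.0], z)    # step (ii): shape the PSD
    z = (z - z.mean()) / z.std()              # restandardize
    u = stats.norm.cdf(z)                     # F_z(z_i)
    return target_ppf(u)                      # F_c^{-1}(F_z(z_i))

# usage: low-pass FIR as a stand-in PSD; Gamma ppf as a stand-in for the
# inverse clipped-gamma CDF of Equation (12)
taps = signal.firwin(numtaps=31, cutoff=0.1)
c = ptm_series(6000, stats.gamma(a=0.5, scale=0.2).ppf, taps)
print(c.mean(), c.std())
```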
#### 2.3.2. Synthetic Test Case 1: Behavior of the Binary Detector Algorithm Depending on the Background Level and Detector Threshold

The first simulation aimed to characterize the binary-based approach when changing the concentration threshold that fires the detector signal. In order to investigate the robustness of the method against chemical noise, two spatially uniform background levels were considered (mean values: 0.05 ppm and 0.45 ppm). In both cases the standard deviation was 1/5 of the mean value.
A background level is low or high depending on the source strength; therefore, studying the case where the background is low is equivalent to saying that the source is potent, and studying the case where the background is high is equivalent to saying that the source is weak.
Ten random sensor trajectories were simulated for each threshold level in the binary detector (30 values in the linear range between 0.05 ppm and 3 ppm). For each trajectory, the probability at the real source location is assigned by the algorithm after 300 min of exploration time. We took as a figure of merit the mean probability, averaging over all trajectories. This figure of merit will be plotted against the concentration threshold. For the binary case, this simulation determines the optimum threshold for the detector given a certain chemical source power.

#### 2.3.3. Synthetic Test Case 2: Accuracy in the Estimation of the Background Level and the Expected Position of the Chemical Source

Here, both algorithms (binary and analog) are compared and share the same 10 random trajectories. In this second case, a non-uniform background is used. Results will show the probability evolution at the real source location as the exploration time increases. Moreover, the probability maps provided by both approaches after 300 min of exploration time can be compared as well.
To assess the accuracy of a given probability map, we compute the expected value of the displacement from the true source position:
$E[r - r_0] = \sum_{i=1}^{N_c} p(r_i) \cdot (r_i - r_0)$ (13)
where $r$ is the estimated position of the source, $r_0$ is the real source position and $p(r_i)$ represents the probability of the source being located in cell $i$.
Based on this metric, we calculate the root mean squared error ($RMSE_p$) of the distance from the expected source position to the real source location:
$RMSE_p = \sqrt{E\left[\|r - r_0\|^2\right]} = \sqrt{err_x^2 + err_y^2}$ (14)
Alternatively, for a richer description, we can decompose the total error into both coordinates: $err_x$ and $err_y$. Alternative, though less informative, figures of merit are the Euclidean distance ($D$) between the maximum of the likelihood map and the real source position, and the probability at the real position.
The mean background map recovered by the concentration-based algorithm is quantitatively compared to the designed background distribution by using the root mean squared error ($RMSE_b$) as a figure of merit:
$RMSE_b = \sqrt{\frac{\sum_{i=1}^{N_c} (B_i - \hat{B}_i)^2}{N_c}}$ (15)
where $B_i$ is the designed mean background level at cell $i$, and $\hat{B}_i$ is the estimated mean background level. Results are obtained for two background distributions (maximum mean values: 0.05 ppm and 0.45 ppm). Both background concentration maps were the same except for a scale factor. The binary-based approach was tested using the optimum threshold level determined from case 1, and the concentration-based approach using the exact source strength. These figures of merit can be computed directly from a probability map, as sketched below.
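A small Python sketch of these figures of merit (illustrative; the cell-center coordinates and the toy map in the usage example are made up):

```python
import numpy as np

def localization_errors(prob_map, centers, r0):
    """Expected displacement, Equation (13), and RMSE_p, Equation (14),
    from a source probability map. centers is an (Nc, 2) array of cell
    centers in meters; r0 is the true source position."""
    p = np.ravel(prob_map)
    d = centers - np.asarray(r0, dtype=float)         # r_i - r_0 per cell
    expected_disp = (p[:, None] * d).sum(axis=0)      # Equation (13)
    rmse_p = np.sqrt((p * (d**2).sum(axis=1)).sum())  # Equation (14)
    return expected_disp, rmse_p

def rmse_background(B_true, B_est):
    """RMSE_b between designed and estimated background maps, Eq. (15)."""
    diff = np.ravel(B_true) - np.ravel(B_est)
    return np.sqrt(np.mean(diff**2))

# toy usage: two cells, most probability mass on the first one
centers = np.array([[50.0, 50.0], [150.0, 50.0]])
print(localization_errors(np.array([0.8, 0.2]), centers, r0=(40.0, 40.0)))
```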
#### 2.3.4. Synthetic Case 3: Influence of the Source Strength on the Concentration-Based Algorithm

Finally, the third simulation shows the influence of the source strength on the overall performance of the concentration-based approach. Since it is difficult to know the source strength in advance, the performance of the algorithm has been assessed assuming different source strengths across more than two orders of magnitude, in the range between 0.1 g/s and 30 g/s (the real value in the scenario simulator being 2.90 g/s). Results are shown for the same background distributions as in case 2 and for 10 random trajectories.

#### 2.4. Scenario, Chemical Source Emission and Autonomous Vehicle Description for Real Experiments

For the real experiments, all measurements were performed within an exploration arena built with polystyrene panels. The dimensions of the tunnel were 5 m × 3.5 m × 1.8 m (length × width × height). One fan introduced air into the room and, on the opposite wall, three fans extracted air from the room. All of them were installed at a height of 0.9 m above the floor and generated a highly turbulent airflow. The fans used were helical wall-mounted units with a diameter of 30 cm and a maximum speed of 1300 rpm.
The supply of volatile acetone was carried out employing two 10-mL syringes. These syringes were filled with liquid acetone and assembled on a KDS-200 syringe pump (KD Scientific, Holliston, MA, USA), which was programmed to deliver a controlled liquid flow of 150 µL/min during 60 min. Thus, the liquid acetone fell at a constant rate onto a plate heated to a temperature sufficient for the acetone to evaporate immediately. The source emission of volatile acetone was placed at one side of the exploration arena, just in front of the fan that introduced air into the tunnel. Algorithms for the generation of the probability maps operated over a grid of $N_c = 70$ cells with dimensions 0.5 m × 0.5 m each.
Input data for the algorithms were registered by an autonomous vehicle which was constructed on a metal structure where different components were assembled, such as: two DC motors that provide the mobility of the vehicle; a UTM-30LX USB laser range finder (HOKUYO, Osaka, Japan) for vehicle self-localization and navigation; and a WindSonic RS232 anemometer (Gill Instruments, Lymington, England) to register the wind speed and direction. The autonomous vehicle was also equipped with a photoionization detector (PID, ppbRAE 3000, RAE Systems, San Jose, CA, USA) for measurement of the concentration of volatile compounds. All these components were controlled by an onboard computer. The vehicle could operate autonomously, sampling its relative position, the wind speed and direction, and the volatile compound concentration approximately once per second. The vehicle was programmed to move in a straight line until a wall is found; then the vehicle rotates a random angle, and a new straight trajectory is described until a new wall is found. The vehicle speed was set to 0.2 m/s. Images of the robot, the exploration room and the chemical source can be found in the supplementary materials.

## 3. Results and Discussion

In this section, we introduce the results for the three synthetic cases already described, as well as the results for the real experiments. Finally, we discuss the computational cost of the presented algorithm. Data are presented as 50th [5th–95th] percentiles unless otherwise specified.

#### 3.1.1. Synthetic Case 1

The overall performance of the binary-based approach as the concentration threshold is changed is studied in synthetic case 1.
Results are shown for two different background levels: the former with a mean value of 0.05 ppm (Figure 3a) and the latter with a mean value of 0.45 ppm (Figure 3b). The mean probability assigned by the algorithm at the real source location is depicted as a function of the concentration threshold. It can be seen in the figure that there is a different optimal concentration threshold depending on the background level. As expected, the optimum value is shifted to higher thresholds as the background level is increased. These optimal values have been found to be 0.15 ppm and 1.48 ppm, respectively. Relative to the background level, setting the threshold too low produces a high ratio of false alarms, leading the algorithm to failure; on the other hand, setting the threshold too high could lead to an increase in false negatives, with abnormal concentrations considered as non-detection events and a consequent worsening of the overall performance. This can be critical if the source to be detected is weak, this being equivalent to the case with a high background level. In Figure 3b, it is observed that setting the threshold either too low or too high causes the algorithm to fail, since the probability assigned to the real source location is below the equiprobable value of $1/N_c$ assigned initially to every cell.
The main problem of the binary-based approach is that the threshold needs to be set arbitrarily if no information about the background is available, and this background can be different in various areas within the exploration zone. A priori, we do not know whether the threshold is too high or too low; but even if we knew this, the background could evolve over space and time, and the threshold would need to be adjusted continuously. The concentration-based algorithm removes the need for any threshold because, instead of adjusting the threshold level, our approach builds a background model for each cell. This background model allows us to distinguish between the background and the plume without using any threshold and is updated recursively.

#### 3.1.2. Synthetic Case 2

We will first analyze the results for the case of a weak interfering background having a maximum mean value of 0.05 ppm. Figure 4 shows the performance of the binary-based and concentration-based algorithms in source localization by considering the evolution of the figures of merit. The mean probability (averaged over all trajectories) increases throughout the exploration time (Figure 4a) in both approaches.
The Wilcoxon nonparametric rank test (not shown) was applied at each time step to test the null hypothesis of no difference between the medians of both populations (i.e., those formed by the 10 probability values in each of the two algorithms). Approximately 15 min after the starting time, the increase in probabilities for the concentration-based approach becomes statistically significant (p < 0.05) in comparison with the probabilities for the binary-based approach, which did not change until the end of the exploration time. Figure 4b shows the mean Euclidean distance $D$ between the cell with the highest probability value and the real source location. For both approaches, the mean distance decreases as the exploration time increases. On average, the binary-based approach converges to the real source location more slowly (125 min approximately) than the concentration-based one (80 min approximately), and with a lower probability value, as can be observed in Figure 4 (top).
Moreover, the errors of the source localization algorithms in both the X and Y directions of the grid (Figure 4, bottom) show that the concentration-based algorithm performs significantly better (p < 0.001), with err = 213 [199 − 222] m and err = 225 [202 − 248] m for the X and Y directions, respectively, in comparison with the binary approach (281 [274 − 294] m and 279 [271 − 294] m), even though the binary-based algorithm was tested with the optimum concentration threshold (0.15 ppm).
Figure 5 shows the comparison between the mean probability maps (averaged over all trajectories) provided by both algorithms after 300 min of random exploration. The probability assigned to the real source location is higher using the concentration-based approach (P = 0.215 [0.156 − 0.261] in Figure 5b) as compared to the binary-based approach (P = 0.124 [0.102 − 0.131] in Figure 5a). Moreover, the probability is spread among a lower number of cells in the wind direction.
The background distribution estimated by the concentration-based approach (not shown) is similar to the designed one after 300 min of random exploration, $RMSE_b$ = 0.010 [0.009 − 0.012] ppm. Moreover, it was observed that after 50 min of exploration time the $RMSE_b$ values remained approximately constant.
With a high background level (0.45 ppm), which is the same as saying that the source strength is small compared to the background, Figure 6 shows results similar to those shown in Figure 4. However, now the differences between the performances of both approaches are larger. The mean probability at the real source location (Figure 6a) for the concentration-based approach increases as the exploration time increases, but more slowly compared to Figure 4a. In contrast to what is observed in Figure 4a, for the binary-based approach the mean probability is lower and remains approximately constant throughout the exploration time. Here, the Wilcoxon test is also used to assess the statistical difference between both approaches. The differences observed become statistically significant (p < 0.05) after approximately 15 min of exploration. Moreover, it is observed in Figure 6b that the mean distance for the binary-based algorithm does not converge to the real source location. The errors accounted for (Figure 6c) in both the X (err = 246 [239 − 265] m) and the Y (err = 260 [254 − 277] m) directions are higher than in the case of a low background and confirm that the performance of the concentration-based approach is significantly more robust than the binary-based one (337 [331 − 338] m and 331 [326 − 341] m for the X and Y directions, respectively) in the presence of a high background level, although the latter was tested with its optimum concentration threshold.
The binary-based approach performs well under the assumption that no false alarms arise. This is shown in Figure 4 where, due to a low background level, the number of missed detections and false alarms arising from the background is small, and the binary-based algorithm performs only slightly worse compared to the concentration-based approach if the optimum threshold can be identified. Nevertheless, such an assumption is far from the truth in a real scenario where pollution and interfering substances are expected to be found. Additionally, this can also be the case when the source to be detected is weak.
This is shown in Figure 6 where, due to a high background level (or weak source), the number of false alarms is higher, thus forcing the threshold to be set higher, which in turn leads to an increase in the number of missed detections and dramatically worsens the performance of the binary-based approach.
Figure 7 shows the probability maps obtained with a high background level. It is seen that the threshold is very high, which minimizes the number of false alarms but increases the number of missed detections; thus the binary-based estimation is very uncertain at the source location (Figure 7a). Therefore, the probability is spread over the cells of the grid, decreasing the probability at the real source location (P = 0.013 [0.010 − 0.014]). Figure 7b shows the robustness of our algorithm, which minimizes false alarms and missed detections. The concentration-based algorithm tends to increase the probability at the real source location while the vehicles perform random exploration. After 300 min of random exploration, the probability assigned to the real source location was P = 0.083 [0.047 − 0.101]. In this approach, false alarms arising from the background can correctly be assigned lower weights in the probability calculations because the algorithm has created a background model for each cell and a dispersion model for the plume. Additionally, these models allow the number of missed detections to be minimized.
Concerning the estimation of the background distribution, the concentration-based approach was also able to recover it properly, $RMSE_b$ = 0.089 [0.080 − 0.100] ppm. As in the case of a low background, the $RMSE_b$ remained approximately constant after 50 min, albeit slightly higher.

#### 3.1.3. Synthetic Case 3

The results concerning the sensitivity to the correct estimation of the chemical source strength are displayed in Figure 8. Results show a decrease in the overall performance as the assumed source strength deviates from the real value (2.90 g/s). It is observed (Figure 8a) that the selection of the source strength becomes more critical when the background level is low (or the source is potent compared to this background); but, if the source strength can be estimated with errors within at most a factor of 4, the concentration-based approach performs much better. When the source to be detected is weak (or the background is comparatively high, Figure 8b), the selection of the source strength is not so critical in the range studied. It can be observed that the concentration-based approach is more robust against false alarms and missed detections (even for a source strength estimated with errors larger than two orders of magnitude), as compared to the binary-based approach, which performs badly even with its optimum threshold set.
It is important to note that, although the binary-based algorithm works without explicitly assuming any source strength, setting an optimum threshold is only possible when collecting real measurements, which implicitly contain information about the source strength and the background. However, this optimum threshold might be different depending on the explored cell. In the case of the concentration-based approach, the background is estimated by the algorithm and, if this background is low, the source strength needs to be known to within one order of magnitude.
If the background is high, the algorithm behaves more robustly in the range studied as compared to the binary case, except in the first case (when the assumed source strength is very small).

#### 3.2. Results for the Real Experiments

First, we give some details concerning the wind distribution within the exploration arena. Figure 9 (top) shows a characterization of the behavior of the wind within the designed scenario for one of the experiments that was carried out. The median wind speed was 0.28 [0.08 − 1.12] m/s. Also, there was a predominant wind direction, although with a scatter in the direction angle of approximately 60 degrees. Similar wind behavior was observed in all the experiments. The map of concentrations shown in Figure 9 (bottom) is in part a consequence of the airflow generated.
The averaged probability maps (over the 10 experiments) are shown in Figure 10. The mean probability assigned by each algorithm to the real source location is practically identical: P = 0.045 [0.030 − 0.076] and P = 0.050 [0.028 − 0.084] for the concentration-based and the binary-based algorithms, respectively. However, the probability map for the binary-based algorithm exhibits more variance in the localization of the real source position, as observed in Figure 10a. For the concentration-based algorithm (Figure 10b), relatively stable, higher values of probability around the real source location are obtained. The Wilcoxon rank test is also used to compare the errors obtained for each algorithm. In the X direction, the error for the concentration-based algorithm (2.14 [2.09 − 2.35] m) is lower than for the binary-based one (2.30 [2.03 − 2.55] m), but without statistical significance. In the Y direction, the error for the concentration-based algorithm (0.91 [0.85 − 0.95] m) is also lower than for the binary-based one (0.97 [0.86 − 1.03] m), and in this case the difference is statistically significant (see Figure 11).
The results obtained here are in line with those obtained for the simulation experiments, but the differences are not as clear. In the designed scenario, it is likely that some of the assumptions of our algorithm (see Table A2) are not fully satisfied. The wind was not completely uniform in the room: it was much stronger near the source than at the opposite wall, and there were some recirculations at the lateral walls. As a consequence, the real distribution of the time-averaged concentration deviated from Gaussianity. We think this might explain the slight differences observed when comparing both algorithms, in contrast to the strong differences observed in the performance of both algorithms for the simulation experiments.

#### 3.3. Computational Cost

Finally, we have studied the computational cost of the algorithm. It is true that the computational cost is higher than that of the binary approach, mainly due to the need to compute the convolution between the PDF of the background and the PDF of the plume for every cell. The cost is about three times that of the binary approach in our current implementation (without optimization). However, our studies have confirmed that the computational cost of each iteration of the algorithm increases very slowly with the number of cells. Moreover, the time to locate the source increases approximately linearly with the number of cells.

## 4. Conclusions

In the present paper, modifications of a previously described (binary-based) algorithm have been introduced.
The original algorithm can be easily extended to work with multiple mobile sensors. All the information from the mobile sensors can be integrated in the algorithm, whatever their positions are. The algorithm only needs to know in which cell the concentration readings were obtained; then a probability map is recursively updated. Moreover, the mobile sensors do not need to perform plume tracking exclusively and might be used for other tasks.
Additionally, in a real scenario, pollution and some interfering substances may appear in the background, increasing the number of false alarms. Unlike the binary-based algorithm, which uses a threshold to assess whether a concentration is considered as a detection or non-detection event, our algorithm, based on continuous concentrations, builds a background model to assess whether a concentration comes from the background or from a source located further away. Simulation results show that our algorithm behaves much more robustly in the presence of false alarms and better estimates the real source location.
All concentration readings are considered in our algorithm, incorporating them in a continuous manner instead of just using them as binary detections above a certain threshold. This removes the need for a threshold level, thereby reducing the number of false alarms (a background model is estimated) and the number of missed detections, thus improving the performance of the algorithm proposed by Pang and Farrell. A sensitivity study regarding the robustness of the algorithm against deviations of the assumed source strength from the true value has been presented. It has been shown that the results improve significantly using the concentration-based algorithm if the source strength can be estimated.
Finally, experiments with real data have shown that the concentration-based algorithm seems to perform slightly better than the binary-based one, confirming the results obtained for the simulation experiments. A critical point in our proposal is that the algorithm assumes that the source strength is known; thus, estimating the source strength would be a promising direction for future research. Results from real experiments show that when the plume dispersion model hypotheses do not hold, both algorithms are still able to perform the estimation task, although the differences diminish.
The described algorithm is available in MATLAB code from the authors upon request.

## Supplementary Materials

The supplementary materials are available online at https://www.mdpi.com/1424-8220/17/4/904/s1. Images of the robot, the exploration arena with the fans and the used chemical source are available as supplementary materials.

## Acknowledgments

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement No. 217925: LOTUS: Localization of Threat Substances in an Urban Society. This work was partially funded by the Spanish MINECO program, under grants TEC2011-26143 (SMART-IMS), TEC2014-59229-R (SIGVOL), and BES-2015-071698 (SEVERO-OCHOA). The Signal and Information Processing for Sensor Systems group is a consolidated Grup de Recerca de la Generalitat de Catalunya and has support from the Departament d'Universitats, Recerca i Societat de la Informació de la Generalitat de Catalunya (expedient 2014-SGR-1445). This work has received support from the Comissionat per a Universitats i Recerca del DIUE de la Generalitat de Catalunya and the European Social Fund (ESF).
Additional financial support has been provided by the Institut de Bioenginyeria de Catalunya (IBEC). IBEC is a member of the CERCA Programme/Generalitat de Catalunya.

## Author Contributions

S.M. designed the study and provided the main ideas behind the concentration-based algorithm. V.P. programmed the simulator and the implementation of both algorithms, and wrote a first version of the manuscript. V.P., R.M. and J.B. analyzed the signals and ran the algorithm for several test cases and for the real-case signals. D.M., M.T. and J.P. designed and programmed the autonomous robot. R.M., J.M.J.-S., D.M. and M.T. mounted the exploration arena and carried out the real experiments. R.M. and S.M. contributed to the final version of the manuscript.

## Conflicts of Interest

The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

## Appendix A. Supplementary Tables

Table A1. Summary of the notation used for the concentration-based approach.

| Symbol | Meaning |
| --- | --- |
| $N_c$ | Number of cells in the grid area |
| $L_x$ | Length of each cell along the x-axis |
| $L_y$ | Length of each cell along the y-axis |
| $c$ | Instantaneously measured concentration |
| $c_b$ | Concentration contribution due to the background |
| $c_p$ | Concentration contribution due to the chemical plume |
| $\bar{c}$ | Mean concentration at a fixed location |
| $q$ | Source strength or release rate |
| $U_a$ | Mean wind speed in the downwind direction |
| $\sigma_y$ | Diffusion coefficient in the crosswind direction |
| $\sigma_z$ | Diffusion coefficient in the vertical direction |
| $\hat{c}$ | Array of all possible instantaneous concentrations $c$ |
| $\gamma$ | Intermittency factor related to the chemical plume |
| $M$ | Mean concentration of a series of readings at a fixed location |
| $\sigma_{SD}$ | Standard deviation of a series of readings at a fixed location |
| $N_b$ | Number of readings stored in the concentration buffer per sensor |
| $c_j$ | Measured concentration within cell $j$ |
| $A_i$ | Event "there is a chemical source within cell $i$" |
| $B(t_k)$ | Sequence of measured concentrations along the trajectory of the robots until time $t_k$ |
| $P(A_i)$ | Prior probability of the presence of a chemical source within cell $i$ |
| $P(c_j \mid A_i)$ | Probability that the measurement within cell $j$ is due to the addition of the background at cell $j$ and a chemical plume due to a source within cell $i$ |
| $P(c_j \mid \bar{A}_i)$ | Probability that the measurement at cell $j$ is not due to a source emitting at cell $i$, i.e., $c_j$ is due to the current background at cell $j$ |
| $S_{ij}(t_k) \equiv P(A_i \mid c_j)$ | Probability of having a source in cell $i$ given that a certain amount of chemical was measured at cell $j$ at time $t_k$ |
| $\beta_{ij}(t_k)$ | Normalized probability (over all cells) of having a chemical source within cell $i$ based on a single measured concentration within cell $j$ at time $t_k$ |
| $\alpha'_{ij}(t_k)$ | Normalized probability (over all cells) of having a chemical source within cell $i$ based on the sequence of measured concentrations along the trajectory of the robots (index $j$) until time $t_k$ |

Table A2. Main assumptions of the concentration-based algorithm.
For the Plume
- Gaussian distribution for the time-averaged concentration
- Concentration fluctuations governed by turbulence
- Continuous release, with known source strength q = 2.90 g/s
- A single source, at position (400 m, 400 m)
- Source height z = 2 m

For the Background
- The background is always present
- The background is relatively constant over the exploration time
- The mean background level can change from one cell to another
- No intermittency

Other Assumptions
- Uniform wind field over the exploration time, with speed Ua = 2.5 m/s and 45° direction
- Neutral atmospheric stability
- No deposition of the substance on surfaces
- All cells are equally likely to contain the source at time t0
- Height of the sensors is 2 m
- Response time of the sensors faster than the typical 10 min time average of the GPM

## References

1. Kowadlo, G.; Russell, R.A. Robot Odor Localization: A Taxonomy and Survey. Int. J. Robot. Res. 2008, 27, 869–894.
2. Webster, D.R.; Volyanskyy, K.Y.; Weissburg, M.J. Bioinspired algorithm for autonomous sensor-driven guidance in turbulent chemical plumes. Bioinspir. Biomimetics 2012, 7, 3.
3. Meng, Q.-H.; Yang, W.-X.; Wang, Y.; Li, F.; Zeng, M. Adapting an ant colony metaphor for multi-robot chemical plume tracing. Sensors 2012, 12, 4737–4763.
4. Russell, R.A.; Bab-Hadiashar, A.; Shepherd, R.L.; Wallace, G.G. A comparison of reactive robot chemotaxis algorithms. Robot. Auton. Syst. 2003, 45, 83–97.
5. Webster, D.R. Structure of turbulent chemical plumes. In Trace Chemical Sensing of Explosives; Woodfin, R.L., Ed.; John Wiley & Sons: Hoboken, NJ, USA, 2007; pp. 109–129.
6. Jatmiko, W.; Mursanto, P.; Kusumoputro, B.; Sekiyama, K.; Fukuda, T. Modified PSO algorithm based on flow of wind for odor source localization problems in dynamic environments. J. WSEAS Trans. Syst. 2008, 7, 106–113.
7. Ishida, H.; Suetsugu, K.; Nakamoto, T.; Moriizumi, T. Study of autonomous mobile sensing system for localization of odor source using gas sensors and anemometric sensors. Sens. Actuators A Phys. 1994, 45, 153–157.
8. Vergassola, M.; Villermaux, E.; Shraiman, B.I. Infotaxis as a strategy for searching without gradients. Nature 2007, 445, 406–409.
9. Lilienthal, A.; Duckett, T. Building gas concentration gridmaps with a mobile robot. Robot. Auton. Syst. 2004, 48, 3–16.
10. Stachniss, C.; Plagemann, C.; Lilienthal, A.J. Learning gas distribution models using sparse Gaussian process mixtures. Auton. Robot. 2009, 26, 187–202.
11. Lilienthal, A.J.; Reggente, M.; Trincavelli, M.J.; Blanco, L.; Gonzalez, J. A Statistical Approach to Gas Distribution Modelling with Mobile Robots—The Kernel DM + V Algorithm. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis, MO, USA, 10–15 October 2009; pp. 570–576.
12. Farrell, J.A. Plume Mapping via Hidden Markov Methods. IEEE Trans. Syst. Man Cybern. Part B 2003, 33, 850–863.
13. Farrell, J.A.; Pang, S.; Li, W. Chemical Plume Tracing via an Autonomous Underwater Vehicle. IEEE J. Ocean. Eng. 2005, 30, 428–442.
14. Pang, S.; Farrell, J.A. Chemical Plume Source Localization. IEEE Trans. Syst. Man Cybern. Part B 2006, 36, 1068–1080.
15. Kang, X.; Li, W. Moth-inspired plume tracing via multiple autonomous vehicles under formation control. Adapt. Behav. 2012, 20, 131–142.
16. La, H.M.; Sheng, W.; Chen, J. Cooperative and Active Sensing in Mobile Sensor Networks for Scalar Field Mapping. IEEE Trans. Syst. Man Cybern. 2015, 45, 1–12.
17. La, H.M.; Sheng, W. Distributed Sensor Fusion and Scalar Field Mapping Using Mobile Sensor Networks. IEEE Trans. Syst. Man Cybern. 2013, 45, 766–778.
18. Choi, J.; Oh, S.; Horowitz, R. Distributed learning and cooperative control for multi-agent systems. Automatica 2009, 45, 2802–2814.
19. Pomareda, V.; Marco, S. Chemical Plume Source Localization with Multiple Mobile Sensors using Bayesian Inference under Background Signals. In Proceedings of the International Symposium on Olfaction and Electronic Nose, New York, NY, USA, 2–5 May 2011; pp. 149–150.
20. Papoulis, A. Probability, Random Variables, and Stochastic Processes, 2nd ed.; McGraw-Hill: New York, NY, USA, 1984.
21. Turner, B. Workbook of Atmospheric Dispersion Estimates: An Introduction to Dispersion Modeling; CRC Press: Boca Raton, FL, USA, 1994.
22. Beychok, M.R. Fundamentals of Stack Gas Dispersion, 4th ed.; Beychok, M.R.: New York, NY, USA, 2005.
23. Sutton, O.G. The problem of diffusion in the lower atmosphere. Q. J. R. Meteorol. Soc. 1947, 73, 257–281.
24. Crimaldi, J.P.; Wiley, M.B.; Koseff, J.R. The relationship between mean and instantaneous structure in turbulent passive scalar plumes. J. Turbul. 2002, 3, 1–24.
25. Webster, D.R.; Weissburg, M.J. Chemosensory guidance cues in a turbulent chemical odor plume. Limnol. Oceanogr. 2001, 46, 1034–1047.
26. Bakkum, E.A.; Duijm, N.J. Chapter 4: Vapour cloud dispersion. In Methods for the Calculation of Physical Effects ‘Yellow Book’, 3rd ed.; van den Bosch, C.J.H., Weterings, R.A.P.M., Eds.; Ministerie van Verkeer en Waterstaat: The Hague, The Netherlands, 2005; pp. 1–139.
27. Farrell, J.A.; Murlis, J.; Long, X.; Li, W.; Cardé, R.T. Filament-based atmospheric dispersion model to achieve short time-scale structure of odor plumes. Environ. Fluid Mech. 2002, 2, 143–169.
28. Yee, E.; Wang, B.-C.; Lien, F.-S. Probabilistic model for concentration fluctuations in compact-source plumes in an urban environment. Bound.-Layer Meteorol. 2009, 130, 169–208.
29. Yee, E.; Biltoft, C.A. Concentration fluctuation measurements in a plume dispersing through a regular array of obstacles. Bound.-Layer Meteorol. 2004, 111, 363–415.
30. Yee, E.; Chan, R. A simple model for the probability density function of concentration fluctuations in atmospheric plumes. Atmos. Environ. 1997, 31, 991–1002.
31. Jones, A.R.; Thomson, D.J. Simulation of time series of concentration fluctuations in atmospheric dispersion using a correlation-distortion technique. Bound.-Layer Meteorol. 2006, 118, 25–54.
32. Hahn, I.; Brixey, L.A.; Wiener, R.W.; Henkle, S.W.; Baldauf, R. Characterization of traffic-related PM concentration distribution and fluctuation patterns in near-highway urban residential street canyons. J. Environ. Monit.
2009, 11, 2136–2145.
33. Soriano, C.; Baldasano, J.M.; Buttler, W.T.; Moore, K.R. Circulatory patterns of air pollutants within the Barcelona air basin in a summertime situation: Lidar and numerical approaches. Bound.-Layer Meteorol. 2000, 98, 33–55.
34. Martín, M.; Cremades, L.V.; Santabàrbara, J.M. Analysis and modelling of time series of surface wind speed and direction. Int. J. Climatol. 1999, 19, 197–209.
35. Gijbels, I.; Hubert, M. Robust and nonparametric statistical methods. In Comprehensive Chemometrics: Chemical and Biochemical Data Analysis, 1st ed.; Brown, S.D., Tauler, R., Walczak, B., Eds.; Elsevier: Amsterdam, The Netherlands, 2009; pp. 189–211.
36. Papoulis, A. Chapter 8: Sequences of random variables. In Probability, Random Variables, and Stochastic Processes; McGraw-Hill: New York, NY, USA, 1984.
Figure 1. Randomly explored area (1 km × 1 km) using five sensors (colored dots). The continuous white lines (-) define the cell boundaries. The sensors move in the horizontal or vertical direction from the center of one cell to the center of a neighboring cell. The source is located at (440, 440) meters, in cell (5, 5). The mean wind speed is set to 2.5 m/s and the wind direction to 45°. The colored background shows the probability map at a particular time step.", null, "Figure 2. Simulated concentration fluctuations (under the specified conditions) at different fixed positions downwind from the source, over a certain background level.", null, "Figure 3. Dependence of the assigned mean probability at the source location (averaged over 10 different random trajectories) on the concentration threshold used by the binary-based approach. (a) Low background level (mean = 0.05 ppm and SD = 0.03 ppm). (b) High background level (mean = 0.45 ppm and SD = 0.27 ppm). The orange line shows the initial equiprobable value ($1/N_c$) assigned to every cell.", null, "Figure 4. Performance of the binary-based and concentration-based algorithms in source localization with a maximum mean background level of 0.05 ppm. (a) Mean probability averaged over the ten random trajectories (thick solid line) and confidence intervals within two standard deviations (thin dashed lines). (b) Mean Euclidean distance $D$ between the cell with the highest probability value and the real source location (thick solid line) and both the maximum and the minimum values (thin dashed lines).
(c) Error in source localization in both the X and Y directions at the end of the exploration time; p-values between the two approaches, obtained using the Wilcoxon rank test, are indicated.", null, "Figure 5. Mean probability maps (averaged over all trajectories) after 300 min of random exploration with a maximum mean background level of 0.05 ppm. Source location at (5, 5) is indicated with an asterisk (*). (a) Binary-based approach. (b) Concentration-based approach.", null, "Figure 6. Performance of the binary-based and concentration-based algorithms in source localization with a maximum mean background level of 0.45 ppm. (a) Mean probability averaged over the ten random trajectories (thick solid line) and confidence intervals within two standard deviations (thin dashed lines). (b) Mean Euclidean distance $D$ between the cell with the highest probability value and the real source location (thick solid line) and both the maximum and the minimum values (thin dashed lines). (c) Error in source localization in both the X and Y directions at the end of the exploration time; p-values between the two approaches, obtained using the Wilcoxon rank test, are indicated.", null, "Figure 7. Mean probability maps (averaged over all trajectories) after 300 min of random exploration with a maximum mean background level of 0.45 ppm. Source location at (5, 5) is indicated with an asterisk. (a) Binary-based approach. (b) Concentration-based approach.", null, "Figure 8. Mean probability (averaged over 10 trajectories) at the real source location after 300 min of random exploration, as a function of the source strength assumed by the concentration-based approach.
Error bars show confidence levels within two standard deviations. (a) Results with a maximum mean background level of 0.05 ppm. (b) Results with a maximum mean background level of 0.45 ppm.", null, "Figure 9. (Top) A representative wind distribution recorded within the designed scenario during a random exploration maneuver. Both the direction (degrees) and the speed (m/s) of the wind are indicated. (Bottom) Map of concentrations recorded with the PID. Source location at (10, 4) is indicated by a black circle.", null, "Figure 10. Mean probability maps at the end of the exploration time for the set of real experiments. Source location at (10, 4) is marked with an asterisk. (a) Binary-based approach. (b) Concentration-based approach.", null, "Figure 11. Errors in source localization in both the X and the Y directions for both algorithms. Statistically significant difference is indicated.", null, "" ]
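As a generic illustration (not taken from the paper, and with no claim to match its exact equations): the notation in Table A1 ($S_{ij}$, $\beta_{ij}$, $\alpha'_{ij}$) describes a recursive Bayesian update of a per-cell source-probability map. A standard update step of that general kind, with hypothetical names, could look like this in Python:

```python
import numpy as np

def update_source_map(prior, lik_source, lik_background):
    # prior[i]: current P(source in cell i); lik_source[i]: P(c_j | A_i);
    # lik_background[i]: P(c_j | not A_i), for the latest reading c_j in cell j
    posterior = prior * lik_source / (prior * lik_source + (1.0 - prior) * lik_background)
    return posterior / posterior.sum()  # renormalize over the grid cells

prior = np.full(4, 0.25)  # 4 cells, initially equiprobable
p = update_source_map(prior, np.array([0.9, 0.2, 0.2, 0.2]), np.full(4, 0.5))
print(p)  # probability mass shifts toward the cell that best explains the reading
```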
[ null, "https://www.mdpi.com/profiles/masked-unknown-user.png", null, "https://www.mdpi.com/profiles/masked-unknown-user.png", null, "https://www.mdpi.com/profiles/masked-unknown-user.png", null, "https://www.mdpi.com/profiles/masked-unknown-user.png", null, "https://www.mdpi.com/profiles/masked-unknown-user.png", null, "https://www.mdpi.com/img/design/orcid.png", null, "https://www.mdpi.com/profiles/masked-unknown-user.png", null, "https://www.mdpi.com/img/design/orcid.png", null, "https://www.mdpi.com/profiles/masked-unknown-user.png", null, "https://www.mdpi.com/img/design/orcid.png", null, "https://www.mdpi.com/profiles/masked-unknown-user.png", null, "https://www.mdpi.com/img/design/orcid.png", null, "https://www.mdpi.com/sensors/sensors-17-00904/article_deploy/html/images/sensors-17-00904-g001.png", null, "https://www.mdpi.com/sensors/sensors-17-00904/article_deploy/html/images/sensors-17-00904-g002.png", null, "https://www.mdpi.com/sensors/sensors-17-00904/article_deploy/html/images/sensors-17-00904-g003.png", null, "https://www.mdpi.com/sensors/sensors-17-00904/article_deploy/html/images/sensors-17-00904-g004.png", null, "https://www.mdpi.com/sensors/sensors-17-00904/article_deploy/html/images/sensors-17-00904-g005.png", null, "https://www.mdpi.com/sensors/sensors-17-00904/article_deploy/html/images/sensors-17-00904-g006.png", null, "https://www.mdpi.com/sensors/sensors-17-00904/article_deploy/html/images/sensors-17-00904-g007.png", null, "https://www.mdpi.com/sensors/sensors-17-00904/article_deploy/html/images/sensors-17-00904-g008.png", null, "https://www.mdpi.com/sensors/sensors-17-00904/article_deploy/html/images/sensors-17-00904-g009.png", null, "https://www.mdpi.com/sensors/sensors-17-00904/article_deploy/html/images/sensors-17-00904-g010.png", null, "https://www.mdpi.com/sensors/sensors-17-00904/article_deploy/html/images/sensors-17-00904-g011.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92956936,"math_prob":0.9488971,"size":11291,"snap":"2020-34-2020-40","text_gpt3_token_len":2013,"char_repetition_ratio":0.13723753,"word_repetition_ratio":0.008777062,"special_character_ratio":0.17819503,"punctuation_ratio":0.08730159,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9868031,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-06T08:02:35Z\",\"WARC-Record-ID\":\"<urn:uuid:2685a35f-2cb1-4b33-97e1-0aafb7b4b260>\",\"Content-Length\":\"291610\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8576d2f7-10d7-460e-9553-9d4fd579184e>\",\"WARC-Concurrent-To\":\"<urn:uuid:816b4808-ac8f-4a7b-ad9a-d39e27709348>\",\"WARC-IP-Address\":\"104.18.24.151\",\"WARC-Target-URI\":\"https://www.mdpi.com/1424-8220/17/4/904/htm\",\"WARC-Payload-Digest\":\"sha1:HWKCVZNIIKJ3UP6GCTQBVVNK2MPPLXP4\",\"WARC-Block-Digest\":\"sha1:4B5PCJ2RDFIGASRF3TQXPSBNZ637RT5A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439736883.40_warc_CC-MAIN-20200806061804-20200806091804-00471.warc.gz\"}"}
https://mathoverflow.net/tags/operator-theory/hot
[ "# Tag Info\n\n9\n\nLet $A$ be a self-adjoint operator with domain $D(A)\\subset\\mathcal H$ ($\\mathcal H$ is some Hilbert space). An operator $C$ with $D(A)\\subset D(C)$ is called relatively compact with respect to $C$ if $C(A-zI)^{-1}$ is compact for some (hence all) $z\\notin\\sigma(A)$. Paraphrasing Corollary 2, page 113 Section XIII.4, in , we have If $C$ is relatively ...\n\n4\n\nYour operator $K$ is a Hilbert-Schmidt operator since its kernel belongs to $L^2$. As a result this is a compact operator whose spectrum contains a sequence of eigenvalues $\\\\{\\lambda_k\\not=0\\\\}$ with finite multiplicities such that $\\lim_k\\vert \\lambda_k\\vert=0$. To deal with the self-adjoint case, you can find an orthonormal set $\\\\{\\mathbf e_k\\\\}$ such ...\n\n4\n\nIt seems that I have found a counter example myself. For the Hilbert matrix $$H_\\lambda:= \\big( \\frac{1}{1-\\lambda+k+n} \\big)_{k,n\\geq 0}, \\lambda < 1$$ Rosenblum in \"On the Hilbert Matrix I, Proceedings of the AMS\" proves that the pointspectrum considered as an operator on $\\ell^p, p>2$ contains the set $$\\{ \\pi \\sec(\\pi u ) : | \\Re ( u )| <... 2 Yes, the inequality is true. Let \\lambda be large enough so (L+\\lambda I) is positive definite. We have$$((L+\\lambda I)y,y)=((L+\\lambda I)^{1/2}y,(L+\\lambda I)^{1/2}y)\\ge C\\|y\\|^2_{H^{s/2}},$$since the domain of (L+\\lambda I)^{1/2} is H^{s/2} by the general theory of interpolation spaces. It follows that$$(Ly,y)\\ge C\\|y\\|^2_{H^{s/2}}-\\lambda (y,y)....\n\n1\n\nIn \"On Majorization, Factorization, and Range Inclusion of Operators on Hilbert Space (1966)\", R. G. Douglas proved the following result (Theorem 1 in the paper): Theorem. Let $C$ and $D$ be bounded linear operators on a real or complex Hilbert space $\\mathcal{H}$; then the following are equivalent: (i) $C\\mathcal{H} \\subseteq D \\mathcal{H}$. (ii) There ...\n\nOnly top voted, non community-wiki answers of a minimum length are eligible" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76435244,"math_prob":0.9996648,"size":2897,"snap":"2020-10-2020-16","text_gpt3_token_len":936,"char_repetition_ratio":0.11406844,"word_repetition_ratio":0.0,"special_character_ratio":0.32965136,"punctuation_ratio":0.11702128,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999795,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-06T19:12:40Z\",\"WARC-Record-ID\":\"<urn:uuid:f800c393-2be6-41e7-ba71-b7bd807c8088>\",\"Content-Length\":\"97142\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:74b8a3cc-60a3-4a46-9734-ff41b2cd4216>\",\"WARC-Concurrent-To\":\"<urn:uuid:389a9572-3b75-4eea-8915-616451b77f02>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/tags/operator-theory/hot\",\"WARC-Payload-Digest\":\"sha1:3DNNQAHKS6UAN7LGGUT7UY5UMTHHPUVO\",\"WARC-Block-Digest\":\"sha1:SBA7LQI6KGCNEXAUHFODY75PHGHSLEQE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371656216.67_warc_CC-MAIN-20200406164846-20200406195346-00108.warc.gz\"}"}
https://amenoren.medium.com/disarmium-number-challenge-in-python-97e120a81eb1
[ "# Disarmium Number Challenge in Python\n\nRefactoring my function down from 9 lines to 1 line\n\nSeveral months ago I wrote about my adventure of coding the Fibonacci number. I love the Fibonacci number — any number puzzle actually — so it was with great joy when I came across the Disarmium number. A number is a Disarmium number if the sum of the digits raised to the power of their respective position in the original number is equal. For example, 35 is not a Disarmium number:\n\n3¹ + 5² equals 28 (3 + 25) and 28 does not equal 35\n\nAnother example, 89 is a Disarmium number:\n\n8¹ + 9² = 89 and 89 equals 89\n\nSuch fun!\n\nMy first attempt at solving the challenge was nine lines long.\n\nHere are my examples:\n\nx = 89y = 88z = 131u = 175q = 123658\n\nAnd the results:\n\nThen I decided to work in the enumerate method (which I had written about previously):\n\nI continued to refactor and rework the code and 5 versions later, came up with a one-line function:\n\nThe lesson learned on this number problem? Nothing really earth-shattering — I was just having some fun with a new number type and seeing how “Pythonic” could I make my code. Mission achieved." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92734474,"math_prob":0.9507164,"size":1130,"snap":"2021-43-2021-49","text_gpt3_token_len":288,"char_repetition_ratio":0.13143872,"word_repetition_ratio":0.0,"special_character_ratio":0.26460177,"punctuation_ratio":0.07589286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9736021,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T18:53:53Z\",\"WARC-Record-ID\":\"<urn:uuid:a7659362-48ec-48f0-b74c-0e0c3ec538f7>\",\"Content-Length\":\"104455\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eff1591c-3d43-4ba4-8c24-042048ce216e>\",\"WARC-Concurrent-To\":\"<urn:uuid:8f1640eb-ad4c-44f2-9388-7e444d81b51d>\",\"WARC-IP-Address\":\"162.159.153.4\",\"WARC-Target-URI\":\"https://amenoren.medium.com/disarmium-number-challenge-in-python-97e120a81eb1\",\"WARC-Payload-Digest\":\"sha1:HLPBIWJUNCNIBCCIR2DZPOP7N4ENBY7A\",\"WARC-Block-Digest\":\"sha1:MLN53KSNA4FVGH5RLMVIIL4ZIRERC4XW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585439.59_warc_CC-MAIN-20211021164535-20211021194535-00533.warc.gz\"}"}
https://www.spp2026.de/members-guests/33-member-pages/dr-markus-upmeier/
[ "# Members & Guests\n\n## Dr. Markus Upmeier", null, "Universität Augsburg\n\nE-mail: Markus.Upmeier(at)math.uni-augsburg.de\nTelephone: +49 821 598 - 2228\nHomepage: https://www.math.uni-augsburg.de/prof/di…\n\n## Project\n\n33Gerbes in renormalization and quantization of infinite-dimensional moduli spaces\n\n## Publications within SPP2026\n\nLet X be a compact Calabi-Yau 3-fold, and write $$\\mathcal{M}, \\overline{\\mathcal{M}}$$ for the moduli stacks of objects in coh(X) and the derived category D^b coh(X). There are natural line bundles $$K_{\\mathcal{M}} \\to \\mathcal{M}, K_{\\overline{\\mathcal{M}}} \\to \\overline{\\mathcal{M}}$$ analogues of canonical bundles. Orientation data is an isomorphism class of square root line bundles $$K_{\\mathcal{M}}^{1/2}, K_{\\overline{\\mathcal{M}}}^{1/2}$$, satisfying a compatibility condition on the stack of short exact sequences. It was introduced by Kontsevich and Soibelman in their theory of motivic Donaldson-Thomas invariants, and is also important in categorifying Donaldson-Thomas theory using perverse sheaves. We show that natural orientation data can be constructed for all compact Calabi-Yau 3-folds X, and also for compactly-supported coherent sheaves and perfect complexes on noncompact Calabi-Yau 3-folds X that admit a spin smooth projective compactification.\n\nLet X be a compact manifold, G a Lie group, PX a principal G-bundle, and B_P the infinite-dimensional moduli space of connections on P modulo gauge. For a real elliptic operator E we previously studied orientations on the real determinant line bundle over B_P. These are used to construct orientations in the usual sense on smooth gauge theory moduli spaces, and have been extensively studied since the work of Donaldson.\n\nHere we consider complex elliptic operators F and introduce the idea of spin structures, square roots of the complex determinant line bundle of F. These may be used to construct spin structures in the usual sense on smooth complex gauge theory moduli spaces. We study the existence and classification of such spin structures. Our main result identifies spin structures on X with orientations on X×S1. Thus, if PX and QX×S1 are principal G-bundles with Q|X×{1}≅P, we relate spin structures on (B_P,F) to orientations on (B_Q,E) for a certain class of operators F on X and E on X×S1.\n\nCombined with arXiv:1811.02405, we obtain canonical spin structures for positive Diracians on spin 6-manifolds and gauge groups G=U(m),SU(m). In a sequel we will apply this to define canonical orientation data for all Calabi-Yau 3-folds X over the complex numbers, as in Kontsevich-Soibelman arXiv:0811.2435, solving a long-standing problem in Donaldson-Thomas theory.\n\nWe develop a categorical index calculus for elliptic symbol families. The categorified index problems we consider are a secondary version of the traditional problem of expressing the index class in K-theory in terms of differential-topological data. They include orientation problems for moduli spaces as well as similar problems for skew-adjoint and self-adjoint operators. The main result of this paper is an excision principle which allows the comparison of categorified index problems on different manifolds. 
Excision is a powerful technique for actually solving the orientation problem; applications appear in the companion papers arXiv:1811.01096, arXiv:1811.02405, and arXiv:1811.09658.

Let X be a compact manifold, D a real elliptic operator on X, G a Lie group, P a principal G-bundle on X, and $B_P$ the infinite-dimensional moduli space of all connections on P modulo gauge, as a topological stack. For each connection $\nabla_P$, we can consider the twisted elliptic operator $D^{\nabla_{\mathrm{Ad}(P)}}$ on X. This is a continuous family of elliptic operators over the base $B_P$, and so has an orientation bundle $O^D_P$ over $B_P$, a principal $\mathbb{Z}_2$-bundle parametrizing orientations of $\mathrm{Ker}\,D^{\nabla_{\mathrm{Ad}(P)}} \oplus \mathrm{Coker}\,D^{\nabla_{\mathrm{Ad}(P)}}$ at each $\nabla_P$. An orientation on $(B_P,D)$ is a trivialization of $O^D_P$.

In gauge theory one studies moduli spaces M of connections $\nabla_P$ on P satisfying some curvature condition, such as anti-self-dual instantons on Riemannian 4-manifolds (X, g). Under good conditions M is a smooth manifold, and orientations on $(B_P,D)$ pull back to orientations on M in the usual sense of differential geometry. This is important in areas such as Donaldson theory, where one needs an orientation on M to define enumerative invariants.

We explain a package of techniques, some known and some new, for proving orientability and constructing canonical orientations on $(B_P,D)$, after fixing some algebro-topological information on X. We use these to construct canonical orientations on gauge theory moduli spaces, including new results for moduli spaces of flat connections on 2- and 3-manifolds, instantons, the Kapustin-Witten equations, and the Vafa-Witten equations on 4-manifolds, and the Haydys-Witten equations on 5-manifolds.

Suppose (X, g) is a compact, spin Riemannian 7-manifold, with Dirac operator D. Let G be SU(m) or U(m), and E be a rank m complex bundle with G-structure on X. Write $B_E$ for the infinite-dimensional moduli space of connections on E, modulo gauge. There is a natural principal $\mathbb{Z}_2$-bundle $O^D_E$ on $B_E$ parametrizing orientations of $\det D_{\mathrm{Ad}\,A}$ for twisted elliptic operators $D_{\mathrm{Ad}\,A}$ at each [A] in $B_E$. A theorem of Walpuski shows $O^D_E$ is trivializable.

We prove that if we choose an orientation for $\det D$, and a flag structure on X in the sense of Joyce arXiv:1610.09836, then we can define canonical trivializations of $O^D_E$ for all such bundles E on X, satisfying natural compatibilities.

Now let $(X,\varphi,g)$ be a compact $G_2$-manifold, with $d(*\varphi)=0$. Then we can consider moduli spaces $M_E^{G_2}$ of $G_2$-instantons on E over X, which are smooth manifolds under suitable transversality conditions, and derived manifolds in general. The restriction of $O^D_E$ to $M_E^{G_2}$ is the $\mathbb{Z}_2$-bundle of orientations on $M_E^{G_2}$. Thus, our theorem induces canonical orientations on all such $G_2$-instanton moduli spaces $M_E^{G_2}$.

This contributes to the Donaldson-Segal programme arXiv:0902.3239, which proposes defining enumerative invariants of $G_2$-manifolds $(X,\varphi,g)$ by counting moduli spaces $M_E^{G_2}$, with signs depending on a choice of orientation. This paper is a sequel to Joyce-Tanaka-Upmeier arXiv:1811.01096, which develops the general theory of orientations on gauge-theoretic moduli spaces, and gives applications in dimensions 3, 4, 5 and 6." ]
[ null, "https://www.spp2026.de/fileadmin/_processed_/0/a/csm_5D4854D5-E54C-491A-93FF-3F59E577F53D_4f2670a7a8.jpeg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82306546,"math_prob":0.9229772,"size":6929,"snap":"2020-24-2020-29","text_gpt3_token_len":1731,"char_repetition_ratio":0.14729242,"word_repetition_ratio":0.068895645,"special_character_ratio":0.22066677,"punctuation_ratio":0.109642304,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98992115,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-04T05:20:09Z\",\"WARC-Record-ID\":\"<urn:uuid:9e85e402-135b-46cc-8473-64cd7a0dd3a7>\",\"Content-Length\":\"33329\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:69f20fe6-02fd-4c5e-8c5f-ae5926c57a05>\",\"WARC-Concurrent-To\":\"<urn:uuid:45614494-eeff-4cb3-ac2f-93f389ecae7e>\",\"WARC-IP-Address\":\"188.40.3.34\",\"WARC-Target-URI\":\"https://www.spp2026.de/members-guests/33-member-pages/dr-markus-upmeier/\",\"WARC-Payload-Digest\":\"sha1:EU7M3BAAMNCZOYB6MUTKT4OIUUULYC3Z\",\"WARC-Block-Digest\":\"sha1:67K4SVVDMA6BTRFMH7Q4VKLESSCKDHVZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655884012.26_warc_CC-MAIN-20200704042252-20200704072252-00154.warc.gz\"}"}
http://geoscience.wisc.edu/~chuck/Classes/Mtn_and_Plates/mtn_roots.html
[ "Underneath the mountains\n\nI. Introduction\n\nII. Buoyancy, Archimede's principle, and evidence for mountain roots\n\nIII. The lifetimes of mountain ranges\n\nAppendix: Beyond the intro level\n\nI. Introduction\n\nOver geologic time, the earth's mantle behaves like a highly viscous fluid (i.e. a fluid that flows very slowly). When topography is created on the earth's surface through crustal thickening, the mantle slowly flows out from beneath the thickened region so as to compensate for the change in the weight of the overlying crust. The mantle thus behaves in a manner similar to that of water when a cube of ice is placed on its surface - water beneath the ice flows outward and upward and the ice sinks downward until an equilibrium (steady-state) is reached. There is however one important difference - on earth, the strength of the crust itself helps to support some of the weight of a topographic load. We must then study two processes to develop a better understanding of how topography is \"maintained\" on our planet. One is buoyancy and the other, flexure.\n\nII. Buoyancy\n\nSuppose we floated a block of ice in a tub of cold water and asked the following question: How much of the block will be above water and how much will be below water? Ice has a density of 0.9 grams per cubic centimeter and water, a density of 1.0 grams per cubic centimeter. How can we solve this?  Let's first digress to consider some general principles associated with this problem.\n\nWhat happens to the water level in the tub if you place a book on the block of ice? The water level increases as the additional weight of the book causes the ice to displace water beneath it.  The water has nowhere to go except up. The downward force from the additional weight of the book is evenly balanced by upward motion of the water against the pull of gravity.\n\nIf you remove the book, the block of ice immediately pops up and the water level returns to its original level. In other words, as the force on the top of the ice changes, the water level re-adjusts to the new forces. In fact, the water in the tub acts as a pressure gauge - the water level adjusts to compensate for all changes in the downward forces that are acting on the water column.\n\nWater behaves this way because it is free to flow to \"escape\" zones of higher pressure.  Consider a small volume of water beneath the base of a floating ice cube.  If we push down on the ice, the water underneath immediately experiences a increase in the downward pressure acting on it.  Zones of lower pressure are located within the water immediately beyond the ice.  The water thus flows from the zone of higher pressure to the lower pressure zones until the pressure acting on water at the same depth is exactly equal everywhere within the closed basin.  An important implication is that fluids in general will always flow to eliminate areas of higher or lower pressure within a closed system.  This principle does not apply to solid materials because their internal structures do not permit unrestricted relative motion of adjacent molecules or atoms.\n\nWe can now state Pascal's law - in a fluid, the pressure (i.e. 
Archimedes' principle follows from Pascal's law (even though the former was stated nearly two thousand years earlier than the latter).

ARCHIMEDES' PRINCIPLE: \"A body immersed in a fluid is buoyed up with a force equal to the weight of the displaced fluid.\"

Note that the weight of the displaced fluid is its mass (its density times its volume) multiplied by the gravitational acceleration.

Please do not get the impression that the pressure in water is equal everywhere! Any snorkeler or diver knows that water pressure increases with depth. However, pressure remains the same if you sink to a given depth and then swim in any direction AT THAT DEPTH.

OK, let's return to the original question. For an object floating in a fluid, how much buoyant material has to float beneath the surface of the fluid in order to support the material that lies above the fluid surface? In past years, my Web notes put forward a simple mathematical solution that relies on balancing the forces (pressures) that act on the base of a floating object. I am including that solution at the end of this lecture; however, you may choose whether you want to go through its details. For those of you who are interested in the answer, but not the details that yield the answer, the bottom line is as follows:

For a buoyant solid of density D1 floating in a fluid of density D2, the depth of solid beneath the fluid surface needed to support the material floating above the surface is given by the following simple equation:

Depth of root = (density of solid * height above fluid surface) / (density of fluid - density of solid)

In other words, the difference between the densities of the fluid and the solid is an important factor that determines how deep a root must extend beneath the surface in order to support topography above the surface.

Let's apply this simple relationship to determine how deeply an iceberg extends beneath the water.

The respective densities of water and ice are 1.0 and 0.9 grams per cubic centimeter. Since ice is less dense than water, it floats (we all know that!). So, let's suppose that we are kayaking in Glacier Bay in Alaska and we see an iceberg sticking 1 meter above the water surface. How far down does the iceberg extend?

Depth of root = (0.9 gm/cc * 1 meter) / ((1.0 - 0.9) gm/cc) = 0.9 / 0.1 meters = 9 meters

So, if the iceberg is a total of 10 meters thick, 9 meters will lie beneath the water in order to support the 1 meter that juts above the water surface. This obeys the rule of thumb that ship captains know about icebergs, namely, that about 90 per cent of an iceberg lies beneath the water, out of sight. That 90 per cent is dangerous because it can puncture a ship's hull.
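To make the arithmetic concrete, here is the same relation as a tiny Python function (a sketch of ours, not part of the original lecture notes; the function name is arbitrary):

```python
def root_depth(density_solid, density_fluid, height_above_surface):
    # Archimedes: how much buoyant root is needed to hold up the part
    # of the solid that floats above the fluid surface
    return density_solid * height_above_surface / (density_fluid - density_solid)

print(root_depth(0.9, 1.0, 1.0))  # iceberg: 9.0 meters below water per 1 meter above
```

The same call with crustal and mantle densities reproduces the mountain-root numbers worked out below.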
Implications for mountain ranges

Continents are buoyant crust floating on a denser mantle. We can thus use the densities of continental rocks and mantle rocks to calculate how deep the roots are that support mountain ranges.
• Continental crust - 2.8 grams per cubic centimeter
• Mantle - 3.3 grams per cubic centimeter
• Inner and outer cores - 12-16 grams per cubic centimeter

The average elevation of continental crust relative to sea level is about 1 km. Roughly speaking, how deep must the continental root extend down into the mantle to support that elevation?

Depth of root = (2.8 gm/cc * 1 km) / ((3.3 - 2.8) gm/cc) = 2.8 / 0.5 km = 5.6 km

How deep is the root for a mountain range with an average elevation of 15,000 feet (about 3 miles)?

Depth of root = (2.8 gm/cc * 3 miles) / ((3.3 - 2.8) gm/cc) = 8.4 / 0.5 miles = 16.8 miles

The most important point is that mountains have buoyant roots that extend downward into the mantle beneath the range, and that the roots are, in general, about 5.6 times deeper than the height of the range. This ratio reflects the difference between the densities of average crust and mantle.

III. Erosion, buoyant mountain roots, and the lifetimes of mountain ranges

The existence of buoyant roots has important implications for the lifetime of mountain ranges. What happens when erosion removes material from the top of a mountain? With less mass above sea level to support, the buoyant root rebounds upward by an amount that is exactly proportional to the density difference between the root and the underlying mantle! Thus, in the case of an iceberg that stands 10 meters above sea level, if all 10 meters of ice melt from the top, the buoyant root pushes upward 9 meters! The iceberg thus loses only 1 meter of height. It therefore takes much longer for the iceberg to \"disappear\", because its buoyant root continually restores (from beneath sea level) ice that melts above the water surface.

One study of erosion in the Appalachians suggests that over the past 270 million years, erosion has removed an average of 0.02 millimeters of material each year from the mountain range (this is 2 millimeters per hundred years). Let's use this rate to compute the lifetime of a mountain range under two different scenarios.

Scenario 1: Assume that mountain ranges do NOT have buoyant roots

Given a plateau that stands 5000 meters above a plain of 1 km average height (the Tibetan plateau is a good example), how long will it take for erosion to level this plateau?

Assuming an erosion rate of 0.02 millimeters per year, the lifetime T can be computed as follows:

Erosion rate * T = (5000 - 1000) meters * 1000 millimeters/meter

T = 4,000,000 mm / 0.02 mm/yr = 200 million years

The projected lifetime of this plateau (i.e., mountain range) is thus 200 million years. It's reasonable to assume that the youthful Appalachians were at least 5000 meters in height. This, however, raises a problem: if the Appalachians had no buoyant root beneath them, they would have been completely eroded away by now!

Scenario 2: Assume that mountain ranges have buoyant roots

Let's again assume that a plateau stands 5000 meters above a 1000-meter-high plain. However, let's now use a more realistic model for the plateau: let's assume that it has a buoyant root that extends some 20 km beneath the bottom of the continental crust that floors the 1-km-high plain. For example, see the model that fits the gravity profile for the Tibetan plateau (Lecture 6); in that model, the root beneath the plateau extends to depths of roughly 62 km, whereas continental crust not beneath the plateau extends to depths of about 42 km.
So the buoyant root juts 20 km down into the mantle from the base of the continental crust.

By Archimedes' principle, when material is stripped (by erosion) from the top of the plateau, thereby lowering its elevation, the buoyant root pushes up from beneath and thus replaces some (but not all) of the rock that was eroded. To completely level the plateau, erosion must therefore ultimately strip away material equivalent in height to the original mountain range plus its root (4 km + 20 km). (If the reason for this is not clear to you, think about it for a while.)

We must thus repeat the above calculation for T as follows:

Erosion rate * T = (25000 - 1000) meters * 1000 millimeters/meter

T = 24,000,000 mm / 0.02 mm/yr = 1,200 million years = 1.2 billion years!

Thus, this hypothetical mountain range can hang around for 1.2 billion years, because its root continually pushes new material up to replace material that is being eroded from the top.

A key implication: as erosion strips away the material on top of mountain ranges, rocks from much deeper (10-12 miles deeper!) in the continental crust are pushed up to the surface by the buoyant root. Erosion coupled with buoyant mountain roots thus provides a mechanism for bringing deep crustal rocks to the surface.

The bottom line: once plate tectonic processes build a mountain range, the buoyant underlying root enables the mountain range to hang around a long time, even while it is being actively eroded.

Caveats: erosion rates can of course be much higher, which can significantly decrease the lifetime of the range.
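The two lifetime scenarios can be checked with a few lines of Python (our illustration, not part of the original notes; the names are ours and the erosion rate is the one quoted above):

```python
EROSION_RATE = 0.02  # mm per year, the Appalachian estimate quoted above

def lifetime_myr(relief_m, root_m=0.0):
    # millions of years needed to erode the relief plus any buoyant root
    total_mm = (relief_m + root_m) * 1000.0
    return total_mm / EROSION_RATE / 1e6

print(lifetime_myr(4000))         # Scenario 1, no root: 200.0 Myr
print(lifetime_myr(4000, 20000))  # Scenario 2, 20 km root: 1200.0 Myr
```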
The volume is then simply 1 x height.\n\nBy Newton's second law, the force F1 acting equals the following:\n\ng * [ (Density of air * a) + (Density of water * b)] = g * [(0 x a) + (1.0 grams/cubic centimeter * b centimeters * 1.0 square centimeters)]\n\nor F1 = 1.0 * b * g\n\nSimilarly, the force F2 equals\n\ng * [(density of ice * (a+c) ) + (density of water * d)] = g * [0.9*(a+c) + d ] or F2 = d * g + 0.9(a+c)*g\n\nNow, what else do we know about F1 and F2?  Since we know that the forces that pull down on every column of water are equal (if they weren't, the system would immediately readjust to make them equal!), we then know that F1 = F2.  This is the key step - namely, that we invoke Pascal's law that the pressure is equal everywhere at a given depth in the fluid.\n\nTo solve the problem, we now choose a depth that corresponds exactly with the bottom of the ice and require pressures exerted by the overlying ice and/or water to be equal at that depth. In other words, imagine that the red-dashed line is positioned at the bottom of the ice. We can now write the following equivalence F1 = F2 = 1.0* b*g = 0.9(a+c)*g\n\nDividing by \"g\" gives 1.0*b = 0.9(a+c)\n\nHowever, \"b\" = \"c\", so we can rewrite this as (1.0 - 0.9) * c = 0.9*a.\n\nWhat are a and c ? C is the \"root\" that sticks down into the fluid - this \"root\" supports the material that stands above the water surface, which has a height of a.\n\nFor any two materials of different densities (with at least one acting as a fluid), we can write\n\n(Density 1 - Density 2) * root = Density 2 * height above surface\n\nwhere density 1 is greater than density 2.\n\nA key point here is that it is the difference in the densities of the two materials that determines how deep a root must extend beneath the surface in order to support topography above the surface.\n\nThe equation above expresses the relationship between the root and height of a material floating in a fluid and the densities of the floating material and fluid.\n\nii) Evidence for mountain roots\n\nIs there any evidence that mountains have such deep roots? Yes, there is abundant evidence from measurements of gravity over and near mountain ranges. Let's first digress briefly to the equation that specifies the force of gravitational attraction between two masses that are separated by a distance r.\n\nF= G * m1 * m2 / (r * r)\n\nHere, G is a constant (the universal gravitational constant), and m1 and m2 are the masses of objects 1 and 2. The equation thus indicates that as the distance between two objects increases, the gravitational attraction between them decreases as the square of the distance.\n\nIf the earth were perfectly spherical (i.e. no topography) and lacked any variation in density, then a mass that is hanging from a string (a plumb bob) would always point directly toward the earth's center. In the 18th century, French scientists on an expedition to South America to measure the distance of a degree of latitude noted that the great mass of the Andes mountain belt represented additional mass that would exert its own gravitational pull on a plumb bob that would deflect the plumb bob from \"vertical\" toward the mountain range. They thus estimated the mass of the mountain range and then predicted how much the vertical deflection should be. To their surprise, they found that the mass was not deflected as far as they predicted - they thus postulated that a \"deficit\" of mass beneath the mountain range had to exist. 
The mass deficit was a buoyant crustal root that extended down into the denser surrounding mantle.\n\nSince the 18th century, many more gravity surveys of mountain ranges have been completed and they indicate that mountain ranges are often (but not always) accompanied by a mass deficit. For example, if one measures the gravitational attraction at many points in or above a mountain range and one then corrects the measured gravity signal for a variety of effects, one of which includes the contribution from topography  above sea level (this is done by estimating the gravitational attraction that results from a given volume of material with a density equivalent to that of continental crust), the gravity field over a mountain range should be the same as the gravity field for flat regions that flank the mountain range. Instead, the corrected gravity field over the mountain range typically has values lower than the surrounding flat regions. This gravity \"deficit\" is evidence for a mass \"deficit\" beneath the mountain range - such a deficit can only occur if the density of material beneath the range is lower than the density of the material beneath the flat-lying regions. Thus, less dense or buoyant material underlies many mountains - this buoyant material is the \"root\" that is predicted to exist based on Archimede's principle.\n\nExample: Gravity across the Tibetan Plateau, Himalayas", null, "Figure 2: The red line in the map above shows a profile that crosses the Himalayan mountain range and Tibetan plateau of southern Eurasa. The following figure shows Bouguer gravity measurements along this profile. Adapted from Jin et al., May, 1996 Journal of Geophysical Research.", null, "Figure 3: Dots in the upper panel show measurements of height along the profile shown in Figure 2. Note that the average height of the Tibetan plateau is about 5.5 km, or 18,200 feet. The distance across this nearly flat, but high plateau is 800 kilometers, or 500 miles. The highest topographic feature, about 6 km, is the high Himalayas, which are located at the southern edge of the Tibetan plateau. The lower panel shows Bouguer gravity measurements (black dots) and the gravity predicted by two simple models (described below). Note that the Bouguer gravity values are all less than zero, indicating that once gravity is \"corrected\" for the topography above sea-level (0 km), the gravity \"anomaly\" is negative, indicating a mass deficit at depth. The mass deficit indicates the existence of a low-density \"root\" beneath the mountain range. This root has a density lower than that of the underlying and surrounding mantle and thus provides a buoyant force that supports that Tibetan plateau. The Root only model assumes that the Bouguer gravity values are determined solely by the existence of the buoyant crust beneath the plateau. The Flexure & Root model assumes that the buoyant root is responsible for part of the gravity field, but that the strong lithosphere supports the mountain range/plateau, too. Adapted from Jin et al., May, 1996 Journal of Geophysical Research\n\nRegarding Figure 3, READ THE CAPTION CAREFULLY . This gravity profile is a classic example of how one can use measurements at the earth's surface to learn about what exists deep beneath the surface. Note that the gravity values that are predicted by the Root only model fit the observed values reasonably well, but cannot account for all of the details in the observed gravity field! 
This implies that other physical processes are contributing to the local gravity field. An alternative model, \"Flexure & Root\", does a better job of fitting the observed gravity values. This model combines the buoyant forces exerted by a crustal root with the inherent strength of the earth's crust to explain how mountain ranges are supported.\n\nFigure 4: Model of crust beneath the Tibetan plateau/Himalayan mountain range", null, "Note that the region designated \"Tibetan crust\" is the buoyant \"root\" that underlies the plateau, which is shown as the blue shaded area above 0 km depth. Thus, a deep root lies beneath the plateau and this root has a density lower than that of the mantle beneath it and to its sides.\n\niii) Flexure of the lithosphere\n\nBouguer gravity fields across many mountain ranges clearly show evidence for mass deficits that are indicative of underlying and buoyant crustal roots. However, gravity anomalies associated with mountain ranges are rarely as large (meaning that Bouguer gravity values are never as \"negative\") as they should be if the mountain range were supported entirely by a buoyant root.This means that something besides the buoyant root has to be supporting mountain ranges. Not surprisingly, the crust itself helps to support topographic \"loads\".\n\nFor example, typical continental crust is approximately 40 km thick. Clearly, if one dumps a load of stone in the middle of the US Great Plains, the crust does not sink into the mantle until it achieves a buoyant root that is sufficient to support this new \"topographic\" load! Instead, the crust is strong enough to hold up the load without flexing down into the mantle (and thus developing a root). This is analogous to putting an eraser on a board - the board is strong enough to support the weight of the eraser without flexing. The crust thus has inherent strength that is capable of supporting small loads such as small volcanos without flexing down into the mantle. If the load gets large enough (a mountain range), the mountain range will be supported partly by the crust and partly by a buoyant root.\n\nIf you look at Figure 3 above, you will see exactly this situation. The \"Root Only\" model cannot fit the observed Bouguer gravity. However, a model that assumes the existence of a finite-strength crust and a buoyant root, which act together to support the Tibetan plateau, fits the gravity values well." ]
[ null, "http://geoscience.wisc.edu/~chuck/Classes/Mtn_and_Plates/Images/isostasy.gif", null, "http://geoscience.wisc.edu/~chuck/Classes/Mtn_and_Plates/Images/tibet_locate.GIF", null, "http://geoscience.wisc.edu/~chuck/Classes/Mtn_and_Plates/Images/Profile_1-a.GIF", null, "http://geoscience.wisc.edu/~chuck/Classes/Mtn_and_Plates/Images/Profile_1-b.GIF", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.911073,"math_prob":0.9638075,"size":20028,"snap":"2019-51-2020-05","text_gpt3_token_len":4542,"char_repetition_ratio":0.1531662,"word_repetition_ratio":0.03735632,"special_character_ratio":0.22573397,"punctuation_ratio":0.08339953,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.968541,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T04:57:41Z\",\"WARC-Record-ID\":\"<urn:uuid:3c8d7c78-ddbf-4c2a-8fc5-8ea26c3fa84f>\",\"Content-Length\":\"31181\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3c81d115-70fe-4e55-a549-6cd8e84b599a>\",\"WARC-Concurrent-To\":\"<urn:uuid:015b8bd3-faa2-4255-b69c-219568aa8db9>\",\"WARC-IP-Address\":\"144.92.206.29\",\"WARC-Target-URI\":\"http://geoscience.wisc.edu/~chuck/Classes/Mtn_and_Plates/mtn_roots.html\",\"WARC-Payload-Digest\":\"sha1:LAAZZGLGIXJYCOWG566NYTM7BZG5R4QI\",\"WARC-Block-Digest\":\"sha1:KR6BUQMNBA22K7O76J53QACKVSB5T3L3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540584491.89_warc_CC-MAIN-20191214042241-20191214070241-00271.warc.gz\"}"}
https://wiki.gideros.rocks/index.php/Bit32.rrotate
[ "# Bit32.rrotate\n\nAvailable since: Gideros 2022.3\nClass: bit32\n\n### Description\n\nReturns a number after its bits have been rotated to the right by a given number of times.\n\n```(number) = bit32.rrotate(x,disp)\n```\n\nReturns the number x rotated disp bits to the right.\n\nThe number disp may be any representable integer. For any valid displacement, the following identity holds:\n\n```assert(bit32.rrotate(x, disp) == bit32.rrotate(x , disp % 32))\n```\n```Negative displacements rotate to the left\n```\n\n### Parameters\n\nx: (number) value\ndisp: (number) displacement\n\n### Return values\n\nReturns (number) result after bits right rotation" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6493428,"math_prob":0.99024224,"size":609,"snap":"2023-40-2023-50","text_gpt3_token_len":151,"char_repetition_ratio":0.17024793,"word_repetition_ratio":0.0,"special_character_ratio":0.23316912,"punctuation_ratio":0.15178572,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98945385,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T07:45:17Z\",\"WARC-Record-ID\":\"<urn:uuid:6c68e02c-765e-46ae-ad7b-65f07ae268ba>\",\"Content-Length\":\"21510\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:977d0959-f0f6-4207-a89d-1dae05630678>\",\"WARC-Concurrent-To\":\"<urn:uuid:16547738-f7b8-403d-8868-0bf9300d3ad8>\",\"WARC-IP-Address\":\"163.172.32.129\",\"WARC-Target-URI\":\"https://wiki.gideros.rocks/index.php/Bit32.rrotate\",\"WARC-Payload-Digest\":\"sha1:MQHZR5TINUQY5WYNHH5ZODFLJHEHM5L3\",\"WARC-Block-Digest\":\"sha1:H2VLOVAYX7GNPM3IY5OYNVUCFCZ4A25V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511055.59_warc_CC-MAIN-20231003060619-20231003090619-00074.warc.gz\"}"}
http://forums.wolfram.com/mathgroup/archive/1999/Sep/msg00074.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Re: How to NDSolve the differential equation\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg19625] Re: How to NDSolve the differential equation\n• From: \"Kevin J. McCann\" <kevinmccann at Home.com>\n• Date: Mon, 6 Sep 1999 04:20:41 -0400\n• Organization: @Home Network\n• References: <7qulo3\\$2r5@smc.vnet.net>\n• Sender: owner-wri-mathgroup at wolfram.com\n\n```Chee Lim,\n\nLook at your DE when x=0 with your IC. The whole thing is zero. When this\nhappens NDSolve can't get any traction on the problem. Maybe you could try\n\nNDSolve[{x y''[x] + Sin[x] y'[x] + 2 x y[x] == 0, y[0.001] == 0,\ny'[0.001] == 1}, y,\n{x, 0.001, 2}]\n\nAlternatively, divide through by x and solve this DE:\n\ny''[x] + y'[x] Sin[x]/x + 2 y[x] == 0\n\nagain you will have problems at x=0, this time because of the Sin[x]/x; so,\nwhy not expand it:\n\ns[x_] = Normal[Series[Sin[x]/x, {x, 0, 12}]]\n\nThis gives about 9-place accuracy over the [0,2] range. Then:\n\nq=y/.NDSolve[{y''[x] + y'[x] s[x]+ 2 y[x] == 0\n, y == 0,\ny' == 1}, y,\n{x, 0, 2},WorkingPrecision->20][]\n\nThis gives a solution which satisfies the original DE to about 7 places.\n\nPlot[x q''[x] + q'[x]Sin[x] + 2x q[x], {x, 0, 2},\nPlotRange -> {-0.000001, 0.000001},\nPlotStyle -> RGBColor[1, 0, 0]];\n\nKevin\n\nChee Lim Cheung <cheelc at mbox2.singnet.com.sg> wrote in message\nnews:7qulo3\\$2r5 at smc.vnet.net...\n> Dear Mathematica gurus & users,\n>\n> I encountered error messages with the tag Power::infy when I tried to do\n> the following:\n>\n> NDSolve[{x y''[x] + Sin[x] y'[x] + 2 x y[x] == 0,y==0, y'==1}, y,\n> {x,0,2}]\n>\n> Can anyone help me in getting Mathematica to produce an answer? I am using\n> Mathematica 4.0 & Mathematica 3.0.\n>\n> Thanks\n> Chee\n>\n\n```\n\n• Prev by Date: Re: crystal structure?\n• Next by Date: Re: Nonlinear Fit with complex model\n• Previous by thread: How to NDSolve the differential equation\n• Next by thread: Re: Re: How to NDSolve the differential equation" ]
[ null, "http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "http://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/1.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/9.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/9.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/9.gif", null, "http://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7555906,"math_prob":0.98144275,"size":1865,"snap":"2022-40-2023-06","text_gpt3_token_len":687,"char_repetition_ratio":0.107469104,"word_repetition_ratio":0.10576923,"special_character_ratio":0.38873994,"punctuation_ratio":0.21633554,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9977776,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T20:08:45Z\",\"WARC-Record-ID\":\"<urn:uuid:39705365-ade9-413e-a790-2459a78034b3>\",\"Content-Length\":\"45663\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d7efa4d-d93a-4df0-816c-fa511c04a1ac>\",\"WARC-Concurrent-To\":\"<urn:uuid:eaddfd94-f12b-448a-b875-4578ba476395>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/1999/Sep/msg00074.html\",\"WARC-Payload-Digest\":\"sha1:QEBJRBXQYVSJQ4XSALENSG3TPZWIXVLF\",\"WARC-Block-Digest\":\"sha1:J7UBHR7TP5SSIHQRPV34444ZZJVI3U7L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499654.54_warc_CC-MAIN-20230128184907-20230128214907-00805.warc.gz\"}"}
https://cs.stackexchange.com/questions/7484/is-there-an-efficient-method-to-store-large-dags
[ "Is there an efficient method to store large DAGs?\n\nI have a DAG representing strict partial order where each node is an assignment of variables $V$ to their values $v$. Each arc $(u,w)$ represents a change in one variable value such that $u\\succ w$.\n\nSo if I have $n$ binary variables, I will end up with $2^n$ nodes which is exponential to the size of variables $n$. is there any method to store such DAG efficiently?\n\n• Why store anything? You can compute if there is an edge $(u,w)$ by using the same function that makes the edge that go between $u$ and $w$. – Pratik Deoghare Dec 18 '12 at 5:41\n• @PratikDeoghare actually i'm trying to solve dominance task: given two nodes $n_1$ and $n_2$ which one is better $n_1\\succ n_2$ or $n_2\\succ n_1$. I can partially search the graph starting from $n_1$ for example. But this is hard problem in general. so I was looking for a method to index all the nodes and make it easier to answer the dominance task. – seteropere Dec 18 '12 at 5:48\n• @Seteropere I agree with Pratik. I did not understand what you mean by \".. which one is better ..\". If I understand correctly, either $n_1 \\prec n_2$ or $n_2 \\prec n_1$ or neither dominates each other. You can check this by a linear traversal of the vectors of both $n_1$ and $n_2$. Did I miss something? – Paresh Dec 18 '12 at 17:08\n• @Paresh You got it. would you please elaborate more on *linear traversal of the vectors of both *. You mean checking outgoing paths from both $n_1$ and $n_2$ ? – seteropere Dec 18 '12 at 19:24\n• @Seteropere Say each point is a vector $\\overrightarrow{v}$ of $m$ values (of any type). Say, you want to compare $n_1$ and $n_2$ having vectors $\\overrightarrow{v_1}$ and $\\overrightarrow{v_2}$ respectively. Then, for each of the $m$ values in $\\overrightarrow{v_1}$ and $\\overrightarrow{v_2}$, compare which dominates/is greater than the other. If all values of $\\overrightarrow{v_1}$ dominate the corresponding values of $\\overrightarrow{v_2}$, then $n_1$ dominates $n_2$, and vice-versa. If some values dominate, and some are dominated, then neither $n_1$ nor $n_2$ dominate each other. – Paresh Dec 19 '12 at 16:27\n\nTry constructing a reduced ordered binary decision diagram (ROBDD) as described here. The reducing algorithm removes redundant nodes, and reuses equal subtrees in the decision tree, such that every node is unique. The memory complexity of a ROBDD is $\\mathcal O({2^n \\over n})$ as proved here." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90971607,"math_prob":0.9986373,"size":366,"snap":"2019-43-2019-47","text_gpt3_token_len":88,"char_repetition_ratio":0.12430939,"word_repetition_ratio":0.0,"special_character_ratio":0.24863388,"punctuation_ratio":0.078947365,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99967575,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-20T07:31:36Z\",\"WARC-Record-ID\":\"<urn:uuid:4953b6b0-768d-40cc-9342-ff8e23b784a8>\",\"Content-Length\":\"144847\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:099251af-2ad2-4421-bdd4-8bbdbc644988>\",\"WARC-Concurrent-To\":\"<urn:uuid:40ac16f9-6b1e-4fb0-9db1-d6bf2cf63601>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/7484/is-there-an-efficient-method-to-store-large-dags\",\"WARC-Payload-Digest\":\"sha1:VNBBUHQ37TFT3PSZHDQR5HAJMURY7F5E\",\"WARC-Block-Digest\":\"sha1:XVCOLFTAAZ5CTF7XIATD2EKEM745KCNW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986703625.46_warc_CC-MAIN-20191020053545-20191020081045-00098.warc.gz\"}"}
https://www.nemenmanlab.org/~ilya/index.php/Physics_434,_Lecture_1_additional_notes
[ "# Physics 434, Lecture 1 additional notes\n\nBack to the main Teaching page.\n\nYou should carefully read the course Syllabus for the class details. Briefly, the following are the key points.\n\nThis is a mathematical / computational course, akin to a typical upper division physics course. While hard, the course will quite doable for most of you, with sufficient effort. To help, we will schedule regular office hours and review sessions as needed. You should be working in groups, but the eventual write-ups of homeworks and project reports should be done individually. The students in the class will consist of different groups, including physics majors, biology majors, and some graduate students. We will try to use the language of computer simulations to bring all of you together to the same page. None of you are probably experts in programming, but this is OK, as you will learn during the class. Those students registered for the 534 version of the class will have additional/harder homework assignments, to justify the difference in the course number. The course has a somewhat nontraditional grading policy, including homework problems, one in-class exam, and project work and reports. Please see details in the Syllabus. Homework assignments will be distributed on about Wednesdays, and they will be due by end of the day on the Fridays one week later. Starting mid-October, you will also be working on your projects, to be presented in class in December.\n\n### What is this class about?\n\nThis is a class about physics of living systems. At the first glance, using the words physics and life in one sentence is contradictory. Indeed, physics is about simple mathematical models of the underlying processes, about general understanding. It is about theories that find unanticipated connections among seemingly unconnected parts of the natural world, such as a ferromagnet and a water-to-vapor transition, or the Higgs boson and properties of engineered materials. In contrast, modern biology is traditionally represented as not accepting theories (beyond maybe the evolutionary theory of Darwin). Every detail matters. A cell surface receptor molecule will behave very differently depending on which cell it is on. A cell with the same genome will behave very differently, and may differentiate into different fates, depending on where in the organism it is. The same organism may respond quite differently to the same stimulus, sometimes choosing to completely ignore it. Why is it that we hope that there will exist physics in living systems?\n\nAll of you have taken sufficient number of biology classes. So if I ask you to name some important biological discoveries in the 20th century (e.g., some physiology/medicine Nobel prizes), you can probably name a few. You will probably name the Watson-Crick discovery of the structure of DNA as one of the most important of these biological discoveries, at least in the context of molecular/cellular biology. If I ask you to dig deeper into your memory, and nominate a few other discoveries, then very soon you will mention the Hodkin-Huxley model of action potential generation in neurons as maybe the greatest discovery ever made in neurobiology. 
And then very soon someone will nominate the Luria-Delbruck experiment, which discovered that natural selection acts on pre-existing variation in the population, rather than causing new variation, as the most important Nobel in the context of evolutionary and population biology.\n\nWhat do all three of these discoveries have in common? They all were made by a pair of scientists, where the first was an experimental biologist, and the second was a theoretical physicist/applied mathematician. In fact, it seems that theory, and more specifically physics-style theory, is a lot more common in biology than we are led to believe. One can naively say that it is because all biological systems are made of molecules, which are physical objects that obey physical laws. This is true, but it is only part of the story. As the three examples above show, physics exists in biology at every level, from molecules, to cells, organisms and populations. We will see a lot more examples of this as we dive deeper into the class.\n\nWhat allows this to happen is that seemingly disparate biological phenomena are, in fact, much more closely related to each other than one may naively hope for. The rules of chance ensure that an E. coli trying to figure out where food is based on activity of its surface receptors that bind various food source molecules needs to solve a problem similar to what a fly's brain needs to solve when it tries to estimate its own self-motion through the world based on the activity of its visual neurons, or the problem that we, as scientists, are solving when we are learning about the E. coli from our experiments. This is what this class is about -- we will identify some problems that biological systems need to solve in the course of their lives, understand their physical structure, and then see what are the limitations imposed by this physical structure, independent of the details of the organism that is trying to solve the problem. In other words, we will try to focus on things that truly matter, and to build broad theories of phenomena, rather than their narrow, focused models.\n\nWe cannot embrace the unembraceable, and so, we will only be building these theories in a few specific examples. We will start with models of cellular dynamics, and then switch to questions of information processing in biology, where we will ask and answer:\n\n• How well, in objective terms, do various biological systems transduce available sensory information?\n• What stands in the way of their performance?\n• Which strategies can be used by the systems to improve the quality of processing?\n\nBefore we leave this introduction, we still need to answer one more question: why numbers? That is, why such a focus on mathematics? Here the answer is very simple -- to see if we understand the world, we need to make a prediction, and then test it experimentally. Testing involves comparison of predictions to the observations. However, the only things we know how to compare are numbers. Think about it carefully: the only things we can compare are numbers! Thus the language of any science, including biology, is mathematics. One can then ask if one wants to compare trends, which are basically binary numbers (e.g., up or down), or if one wants to predict, measure, and compare real-valued numbers like we normally do in physical sciences. A binary experiment gives us, at best, one bit of information (we will formalize this later in the class), and hence can rule out, at best, half of theories.
Real-valued predictions have more discriminatory power, and are thus a lot more valuable. We will stick to them in this class.\n\n## Introducing the model systems\n\nThis is a carryover from the earlier incarnation of the class. Worth reading, and we will return to these systems later in the course of the semester.\n\nTheories rarely emerge directly on a piece of paper, out of nothing. They come from studies of specific experimental systems. In this class, we will introduce a series of such model systems. While many will appear for just a few lectures, some of the model systems will persist through most sections of the class. In the context of cell biology, our main hero will be the E. coli cell, and specifically its chemotactic behavior. For evolutionary questions, we will again focus on \"E. coli\", but this time on the response of their populations to stresses, such as antibiotics and bacterial viruses. For neuroscience, we will focus on visual signal transduction. And finally, for behavior, we will explore foraging in rats.\n\n### Introduction to E. coli chemotaxis\n\nSo let's introduce our heroes, starting with the smallest, the E. coli.\n\nE. coli is a small cell about $1\mu m$", null, "in linear dimensions. It's a natural measuring stick for cellular biology. Read more about the organism in the Sizing up E coli section in Physical Biology of the Cell. Like every bacterium, E. coli has no internal organelles, such as a nucleus, and it is basically a bag of molecules: DNA, RNA, proteins, and smaller metabolites, all packed rather densely and semi-regularly. E. coli spends at least a part of its life swimming around trying to find nutrients, then eating them, growing, and dividing (they can also grow as parts of biofilms or colonies, but we will not focus on this).\n\nThe swimming behavior of E. coli is rather intricate, as illustrated by now-classic movies from the Howard Berg lab, http://www.rowland.harvard.edu/labs/bacteria/movies/showmovie.php?mov=fluo_bundle. The bacterium is pushed forward by a bundle of flagella that rotate. Such smooth forward motion is called the run. Once in a while, the flagella bundle breaks apart, and the bacterium tumbles and reorients instead of going smoothly forward. We will spend a lot of time studying this system. In particular, you can look up the basic chemical diagram of the process at http://www.rowland.harvard.edu/labs/bacteria/projects/fret.php. However, now let's ask the question: why does E. coli do what it does? Why running and tumbling? Answering this question will be our first aha! moment -- we can understand a lot about biology by studying basic physical principles!\n\nSo the E. coli wants to go where life is greener. But how does it know where it's greener? To see where pizza is, you can look around, you can smell. The bacterium can do none of this. It can only count molecules of chemicals that it cares about, and go to where there are more of such molecules. Experiments tell us (Adler 1975, Budrene and Berg, 1991, 1995) that E. coli can find maxima of nutrient concentrations even when concentrations are as small as 1 nM. What does this mean in more reasonable units? $1{\rm {M}}={\frac {N_{A}}{1{\rm {l}}}}\approx {\frac {6\cdot 10^{23}}{10^{-3}{\rm {m}}^{3}}}=6\cdot 10^{26}{\rm {m}}^{-3}$", null, "Thus $1{\rm {nM}}\approx 6\cdot 10^{17}{\rm {m}}^{-3}$", null, ". On the other hand, E. coli is roughly speaking a cylinder, of a diameter of about $700{\rm {nm}}$ (a radius $r=350{\rm {nm}}$)", null, "and the height of about $h=3\mu {\rm {m}}$", null, ".
Its volume is then $V=\pi r^{2}h=1.2\cdot 10^{-18}{\rm {m}}^{3}$", null, ". Thus at 1 nM concentration, there are only about $1.2\cdot 10^{-18}{\rm {m}}^{3}\times 6\cdot 10^{17}{\rm {m}}^{-3}\approx 0.7\approx 1$", null, "molecule of the nutrient in the entire E. coli volume. How then can the bacterium know where the grass is greener, when it only has a total of 1 molecule to count? (This is a bit at the extreme of E. coli sensitivity, but it's important to be dramatic!)\n\nThe worst part about numbers so small is that they are also random. Molecules diffuse around (we will talk about Brownian motion a lot more later in the class). Thus if one has one molecule of food around, one can also have two, or zero, or three. Or even (with quite a bit lower probability) seven. Comparing numbers that are so small, and with fluctuations so large, is impossible. The only solution is to get more molecules. And this requires capturing them from a larger volume. There are, roughly speaking, three ways of doing so. One possibility is to stir the environment (and we have talked briefly about this in class), the second is to stay in place for a long time and to wait till more molecules diffuse one's way, and the third possibility is to run and hence to scoop molecules from a larger volume. We will talk more about properties of diffusion later -- but, roughly speaking, waiting is not very efficient -- it will take a very long time for new molecules to arrive in the vicinity of the cell by diffusion. This leaves stirring and running, and E. coli chooses to run.\n\nGrad Students (later denoted as GS): Let's calculate how long one would need to wait to get a certain accuracy in determining the concentration around oneself. The accuracy of the concentration estimation is proportional to the accuracy in counting molecules, so that $\delta c/c=\delta n/n$", null, ". But molecules come and go randomly, forcing Poisson statistics of arrivals, so that $\delta ^{2}n=n$", null, ". At the same time, the average number of molecules in a volume of the bacterium is $Vc$", null, ", where the volume is, roughly speaking, the cube of its linear dimension, $V\sim a^{3}$", null, ". Combining these, we get $\delta c/c=1/{\sqrt {ca^{3}}}$", null, ". If the E. coli can wait for a long time $T$", null, "then it can make $N=T/\tau$", null, "observations, where $\tau$", null, "is the duration of a single observation. This results in the usual square-root decrease in the error, so that $\delta c/c\sim 1/{\sqrt {ca^{3}T/\tau }}$", null, ".\n\nNow, what is $\tau$", null, "? If we measured molecules once, they are going to stick around, and remeasuring them again soon is useless -- we are not making an independent measurement! One needs to wait while the old molecules diffuse away, and the new ones diffuse in. For a cell with the linear size of $a$", null, ", it will take about $\tau \sim a^{2}/D$", null, "for this to happen, where $D$", null, "is the diffusion coefficient. Combining the two equations results in $\delta c/c\sim 1/{\sqrt {cDa^{3}T/a^{2}}}=1/{\sqrt {cDaT}}$", null, ". This is the celebrated Berg-Purcell limit in concentration sensing. Note a peculiar scaling, which is the result of the two-dimensional nature of diffusion (we will discuss what this means in later classes): the accuracy increases as a square root of the linear size of the cell. It also increases as a square root of time.
Thus waiting in one spot, or growing bigger, is not a very effective strategy for improving concentration sensing. End GS.\n\nBy running, a bacterium is able to sweep up molecules from a larger volume, roughly proportional to its cross section times the length of the run (which is, in its turn, a product of the velocity and the run time), so that the number of captured molecules is $n\sim ca^{2}vT$", null, ". For the run to make sense, it must be long enough to not only produce $n\gg 1$", null, ", but also to ensure that more molecules are picked up by running than simply by waiting. This sets the smallest time $T$", null, "of a second or so that the bacterium must run (we are skipping a few steps here), if it runs with the velocity of about 20 microns/sec. It needs to run long enough to outrun diffusion! On the other hand, the bacterium cannot run straight for long (for longer than about 10s), because random hits by water molecules will make it change its direction by this time. Indeed, the bacterium in real life runs for about ~5 seconds before turning.\n\nIn conclusion, E. coli lives in a weird world: it must count individual molecules of chemicals to know where to go, and these numbers are small (it also lives in the world where masses are not important, as we will show in a homework problem). Its behavioral strategy is fully dictated by the physical structure of the world it lives in: the small size that limits the total number of molecules in the cell volume, the diffusion that brings molecules to and away from the cell and prevents the bacterium from continuing to move in a straight line, and the need to go to greener pastures.\n\nE. coli life is probabilistic, weird, but quite understandable!\n\n### Introduction to visual neural computation\n\nWe've just convinced ourselves that, for E. coli, the world is probabilistic, and every single molecule of a signal matters. But the bacterium is small, and we are big. It seems that the randomness should be a lot less important for us. Indeed, sometimes it is less important -- if I want to go to an office door, I can mostly make it without random jitter, at least on large scales (though a lot of work is being done nowadays on studying the randomness of individual motor responses). But in the sensory domain, it turns out that small numbers, and the associated randomness, often matter even for animals as big as us. So let's consider this in the context of neural computation in vision.\n\nFor the purpose of this class, neurons will be rather simple devices (see Dayan and Abbott, 2005). They collect electrical currents produced by neurons that are connected to them through synaptic connections. They discharge those collected charges (and hence lower their voltage) through the membrane, just like an RC circuit you studied in intro physics does. And when the voltage finally goes above a certain threshold in this tug of war between synaptic inputs and membrane discharge, the neurons spike -- they produce an impulse, an action potential, that travels through the axon of the neuron and feeds into the other neurons through their synapses. In the vertebrate retina, the first set of cells, the photoreceptors (the rods and the cones), actually don't spike, but we will disregard this complication for now, till we study these cells in much detail in the third quarter of the class.\n\nBecause the number of input neurons is large, and they all fire at random times, the firing pattern of a single neuron is not deterministic either.
Spikes occur at random time points, and a typical neuron may produce anywhere from ~100-200 spikes per second (for a visual neuron in an insect), to maybe 20-60 spikes per second in cells in the primary visual cortex of a monkey, to barely a spike a second or even less for neurons in other areas of our cortex. Some of the most interesting questions in today's neuroscience are about how important this randomness is for representing the information about the outside world, and we will study some of this later in the class (Rieke et al, 1999).\n\nFor now, just notice that we can often make decisions after being exposed to a visual image for ~200ms, and insects, whose vision is faster than ours, can make decisions in ~20ms or so. Let's take 100ms as a typical decision time for our current arguments. Thus a cell in our retina, a pixel that is measuring the brightness at some point of the visual world, even if it fires at a rate of 100 spikes per second, will only produce about $\sim 100{\rm {spikes/s}}\times 100{\rm {ms}}=10{\rm {spikes}}$", null, "before the cell in the next level of our brain must make a decision about the level of brightness that is being seen. Our neural cells have about as many spikes to count as E. coli have molecules to count before choosing where to go! Yes, we have many neurons, and we should be able to use their collective activity to guide our decision processes, but this doesn't change the fact that our neural computation is fundamentally probabilistic, with small numbers of spikes. And the design of the brain must somehow make this all work.\n\nIn fact, the problem of randomness in our eyes starts even earlier, all the way at photon capture. As one of the homework problems will show, even in bright light, a single photoreceptor in our eyes collects not much more than ~1000 photons during the typical reaction time of ~100 ms. When we move into a dim environment of a badly lit room, or the world half an hour after the sunset, our photoreceptors may be capturing ~1 photon per photoreceptor, and we can detect dim light flashes as small as ~10 or fewer photons falling on our entire retina (Bialek, 2013). So, even the neural computation aside, sensing even in organisms as large as us is also fundamentally probabilistic.\n\n### Main point to carry out\n\nIn both of these examples, and in the others that we will discuss later, a major complication standing in front of biological organisms is chance. A molecule of a nutrient may be there or not, a photon may arrive or not, and spikes may be there or not. Biological signal processing is not deterministic -- randomness is important, and it must be dealt with, or leveraged, but it cannot be ignored. Hence randomness will be the thread that will connect all of the components of this course, and we will spend the first half of the course introducing mathematical and computational tools to study randomness in biology, and building up the necessary intuition."
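The small-number estimates above (about one nutrient molecule per cell volume at 1 nM, and about 10 spikes per decision window) are easy to verify numerically. The sketch below is my own, not part of the course materials; it uses the same numbers as the text and the Poisson rule $\delta n={\sqrt {n}}$:

```python
import math

# 1 nM in molecules per cubic meter (Avogadro's number over one liter):
N_A = 6.022e23
c = 1e-9 * N_A / 1e-3          # ~6e17 molecules / m^3

# E. coli as a cylinder with radius 350 nm and height 3 um:
r, h = 0.35e-6, 3e-6
V = math.pi * r**2 * h         # ~1.2e-18 m^3
print(c * V)                   # ~0.7: about one nutrient molecule per cell

# Poisson counting noise for ~10 spikes in a 100 ms decision window:
n_spikes = 100 * 0.100         # 100 spikes/s observed for 100 ms
print(1 / math.sqrt(n_spikes)) # ~0.32: a ~30% relative uncertainty
```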
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/3684f9aea9e10a7f41c865cc998e398629989d7b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/54988f537091a3ceacdd4bf52be623def594c5e9", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/eb8f5c8f351471fbb96a2e3aeade9a3c0e1f9816", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d59a744745f9c3946b5bdb8cfdf5f968dad329f0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/728bae8378bf6dc5b736ff5c066ebc8ae7f41a9d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d99b053f5d96e2d56a73bb6fdd4f74e0c66621ac", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e9964ca52edd782c4d3431f8d5c8306c2d3aeb86", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f4571ec3e051c9518175713a01910c853d26bccf", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e5edb87fc0cfe90c7b021e28cf84d96fcbd35b13", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2d9e01993a14b02ec43d191212b75e707b7c9ba1", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b135eecf84c97565e9f3376ef268a42308f1b67f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e73061b9d311f7078d24df1353fcfd937cdbee4b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/ec7200acd984a1d3a3d7dc455e262fbe54f7f6e0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/10058ab80e8831a460d150aed63ae3be19beed0f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/38a7dcde9730ef0853809fefc18d88771f95206c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/df31fe06755f09bfc5759e81289572372ff9dbfd", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/38a7dcde9730ef0853809fefc18d88771f95206c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/ffd2487510aa438433a2579450ab2b3d557e5edc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/076698031f2c230638d1ef6434fcfd3918269a4b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f34a0c600395e5d4345287e21fb26efd386990e6", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8395a1f51ab93ec96088096230e52f819facc102", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a6a07a89c1baca203a1708bbdbf018e4a30e7784", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5f619e045ce20cf891dbdb9b57042c22ea5161d2", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/ec7200acd984a1d3a3d7dc455e262fbe54f7f6e0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/0b40f28a11c3fd70ad730e7a7cb9f9332233da67", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9502945,"math_prob":0.9337735,"size":19130,"snap":"2022-40-2023-06","text_gpt3_token_len":3975,"char_repetition_ratio":0.12679076,"word_repetition_ratio":0.0012217471,"special_character_ratio":0.20543648,"punctuation_ratio":0.118983954,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95139503,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,5,null,4,null,4,null,null,null,4,null,null,null,4,null,null,null,null,null,4,null,null,null,4,null,4,null,null,null,null,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-02T19:09:47Z\",\"WARC-Record-ID\":\"<urn:uuid:88d76a74-f723-41d1-8220-56b53fee99a9>\",\"Content-Length\":\"69210\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47ac1e14-7f88-4c61-8ab1-1d4570c6fb3e>\",\"WARC-Concurrent-To\":\"<urn:uuid:c85507c8-527a-4b1a-bc6b-5aac75f96b06>\",\"WARC-IP-Address\":\"170.140.162.34\",\"WARC-Target-URI\":\"https://www.nemenmanlab.org/~ilya/index.php/Physics_434,_Lecture_1_additional_notes\",\"WARC-Payload-Digest\":\"sha1:5DAUNKIDIGX24NJUHDKXMZPARTEXKWGY\",\"WARC-Block-Digest\":\"sha1:IX7XR4LJ44TDJ4ITX5353ICDOOZ3HQ5C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337339.70_warc_CC-MAIN-20221002181356-20221002211356-00257.warc.gz\"}"}
https://math.stackexchange.com/questions/1208457/find-fc-to-mathbbr2-continuous-and-bijective-but-not-open-c-subset-mathb
[ "# Find $f:C\\to\\mathbb{R}^2$ continuous and bijective but not open, $C\\subset\\mathbb{R}^2$ is closed\n\nAre there a closed subspace $C$ of $\\mathbb{R}^2$ and a continuous, bijective function $f:C\\to\\mathbb{R}^2$ that is not open?\n\nI cannot think about a continuous and bijective function $f:C\\to\\mathbb{R}^2$. My best candidate was the set $C=[0,+\\infty)\\times[0,+\\infty)$ and the function $$f(s,t)=\\Bigg(s\\cos\\bigg(2\\pi\\frac{t}{t+1}\\bigg),s\\sin\\bigg(2\\pi\\frac{t}{t+1}\\bigg)\\Bigg),$$ but this is not injective in $\\{0\\}\\times[0,+\\infty)$.\n\nThe only hint that I can think about is that $C$ cannot be compact, because $f$ is bijective. Therefore, $C$ cannot be bounded.\n\nAn idea that I've been trying is to work in the sphere $S^2$. There, we can think of $C$ as a subset of $S^2$ with N=(0,0,1) as a limit point such that $C\\cup\\{N\\}$ is compact.\n\nThen the function $f$ takes $C$ onto $S^2-\\{N\\}$. But in this way I only see open functions.\n\nAny suggestions?\n\n• I guess not open into $f(C)$. I post an example next. Mar 27, 2015 at 8:46\n• To be more precise, since $f:C\\to\\mathbb{R}^2$ has to be bijective, we have to find $f$ that is not open into $f(C)=\\mathbb{R}^2$, don't forgetting that $C$ is closed. Mar 27, 2015 at 13:28\n\nLet $C\\subset\\mathbb R^2$ be the line $y=0$, and $$f(x,0)=\\big(\\frac{x^3}{1+x^4},\\frac{x}{1+x^4}\\big).$$ Then $f(C)$ is a lemniscate with two branches crossing at the origin $(0,0)=f(0,0)$.", null, "The mapping $f:C\\to f(C)$ is not open, because an interval around $x=0$ in $C$ maps into a single branch (the vertical one) of $f(C)$, missing the other.\n\nThis idea can be used further, as in the following pictures.", null, "One starts with a closed band $A\\subset\\mathbb R^2$ and modifies it homeomorphically to have a band with only to segments in the boundary. Then bends it a la lemniscate, which gives something like a disc with two holes, which is homeomorphic to a disc minus two points, then to the plane minus two points. Finally the non open bijection $A\\to\\mathbb R^2\\setminus\\{\\alpha,\\beta\\}$ extends to a bijection $C\\to\\mathbb R^2$: $C$ is $A$ plus two separated points $a,b$ and the extension, continuous but not open, maps $a\\mapsto\\alpha, b\\mapsto\\beta$.\n\nThe next pictures are a variation of the above argument that accompasses @Chilote smart answer:", null, "Here we see the same topologist strategy to produce a bijection $A\\to\\mathbb R^2\\setminus\\{\\alpha\\}$.\n\nFinally, a remark to add context to the problem. Suppose we can define a homeomorphism $f:C\\to\\mathbb R^n$ from a closed subset $C\\subset\\mathbb R^n$ onto $\\mathbb R^n$. Then it extends to the Alexandroff one point compactifications $C^*\\sim{\\mathbb R^n}^*$. Now, that of $\\mathbb R^n$ is the sphere $\\mathbb S^n$ and that of $C$ is its closure in $\\mathbb S^n$. But a sphere can't be homeo to a proper subset, hence the starting $f$ cannot exist. Consequently, the problem is\n\nIs there a continuous bijection $C\\to \\mathbb R^2$?\n\nIndeed, none that exists can be open, because then it would be a homeo.\n\n• Thanks for the example, but I am looking for a function such that $f(C)=\\mathbb{R}^2$. That is why I ask $f:C\\to\\mathbb{R}^2$ to be bijective. Mar 27, 2015 at 13:18\n• I think that the same idea works, but explicit equations are out of rich. In any case it's very interesting. I'm trying to make some pictures (for a topologist would be a proof) and edit my answer with them. 
Mar 27, 2015 at 16:32\n\nThe answer of @Jesus RS is brilliant! It shows the more important point of the problem: $C$ does not have to be connected.\n\nIf we have a closed subspace $C$ of $\\mathbb{R}^2$ and a continuos and bijective function $g:C\\to\\mathbb{R}^2\\setminus\\{p\\}$, where $p\\in\\mathbb{R}^2$, then we just have to consider a point in the complement of $C$, say $q$, and the function $$f:C\\cup\\{q\\}\\to\\mathbb{R}^2$$ \\begin{equation} f(z)= \\begin{cases} &g(z)&,&x\\neq q\\\\ &p&,&x=q \\end{cases} \\end{equation} will be continuous and bijective, but is not open because $\\{q\\}$ is open in $C\\cup\\{q\\}$ but $f(\\{q\\})=\\{p\\}$ is not open in $\\mathbb{R}^2$.\n\nNow it is very easy to show an explicit function satisfying these conditions. The function $f:\\big([0,+\\infty)\\times\\mathbb{R}\\big)\\cup\\{(-1,0)\\}\\to\\mathbb{R}^2$ defined by \\begin{equation} f(x,y)= \\begin{cases} &\\bigg(e^y\\cos\\big(\\frac{2\\pi x}{x+1}\\big),e^y\\sin\\big(\\frac{2\\pi x}{x+1}\\big)\\bigg)&,&(x,y)\\neq (-1,0)\\\\ &(0,0)&,&(x,y)=(-1,0) \\end{cases} \\end{equation} is continuous, bijective and not open, where the domain is a closed subspace of $\\mathbb{R}^2$.\n\n• Nice example/explanation. Mar 27, 2015 at 20:57\n• Note that, even discounting the behavior at the isolated point $(-1,0) \\in C$, we have problems with open-ness. For instance, $[0,1) \\times \\mathbb{R}$ is open in $C$, but its image by $f$ is not open in $\\mathbb{R}^2$. Mar 27, 2015 at 21:03\n• But the question remains whether $f$ is open if $C$ is connected. Mar 27, 2015 at 23:42" ]
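The non-openness of the lemniscate example in the first answer can also be seen numerically. A quick sketch of my own, purely illustrative:

```python
import numpy as np

def f(x):
    # The map from the accepted answer: f(x) = (x^3/(1+x^4), x/(1+x^4)).
    return np.stack([x**3 / (1 + x**4), x / (1 + x**4)], axis=-1)

# A small interval around x = 0 maps to points with first coordinate
# O(x^3): they all lie on the near-vertical branch through the origin,
# missing the other branch, so the image of this open interval is not
# open in f(C).
print(f(np.linspace(-0.1, 0.1, 5)))
print(f(np.array([1.0])))   # (0.5, 0.5), a point on the other branch
```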
[ null, "https://i.stack.imgur.com/clClO.jpg", null, "https://i.stack.imgur.com/24rQ9.jpg", null, "https://i.stack.imgur.com/HGR7u.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86645144,"math_prob":0.99990356,"size":1750,"snap":"2022-05-2022-21","text_gpt3_token_len":528,"char_repetition_ratio":0.12084765,"word_repetition_ratio":0.0,"special_character_ratio":0.27657142,"punctuation_ratio":0.10789473,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000024,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,6,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-18T05:50:29Z\",\"WARC-Record-ID\":\"<urn:uuid:5193a651-35e9-4865-8855-27df01b74935>\",\"Content-Length\":\"242562\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8e1ccd73-e8b9-4f73-9d70-7e3fed592fe3>\",\"WARC-Concurrent-To\":\"<urn:uuid:4a87f0c7-66c0-418e-9e29-e4bb62c852b7>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1208457/find-fc-to-mathbbr2-continuous-and-bijective-but-not-open-c-subset-mathb\",\"WARC-Payload-Digest\":\"sha1:IMBKFZAI235TQ34F6SYE2AUAIOSYR7EY\",\"WARC-Block-Digest\":\"sha1:EK26PNC2YJ3J6NPGETNMVROJBFVSAPP2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662521152.22_warc_CC-MAIN-20220518052503-20220518082503-00628.warc.gz\"}"}
https://www.centerspace.net/examples/nmath/visual-basic/core/matrix-example.php
[ "# VB Matrix Example\n\n← All NMath Code Examples\n\n```Imports System\n\nImports CenterSpace.NMath.Core\nImports Range = CenterSpace.NMath.Core.Range\n\nNamespace CenterSpace.NMath.Core.Examples.VisualBasic\n\n' A .NET example in Visual Basic showing some of the basic functionality of the matrix classes.\nModule MatrixExample\n\nSub Main()\n\nDim A As New DoubleMatrix(\"3x3 [1 2 3 4 5 6 7 8 9]\")\n\n' You can obtain vector \"views\" of the rows and columns.\nDim row1 As DoubleVector = A.Row(1)\n\nConsole.WriteLine()\n\n' Remember, indexing starts at 0 so row one really means the second\n' row of the matrix.\nConsole.WriteLine(\"Row 1 = {0}\", row1.ToString()) ' [4 5 6]\nConsole.WriteLine()\n\nDim col0 As DoubleVector = A.Col(0)\nConsole.WriteLine(\"Column 0 = {0}\", col0.ToString()) ' [1 4 7]\nConsole.WriteLine()\n\n' Remember that these vectors are different views of the data in\n' the matrix A. Changing a value in one of these vectors will change\n' the corresponding value in the matrix.\nrow1(0) = 10\nConsole.WriteLine(\"Value of A[1,0] == 10 is {0}\", A(1, 0).Equals(10)) ' True\nConsole.WriteLine()\n\n' You can make sure that your data is not being shared with anyone\n' else by invoking the DeepenThisCopy method. This will insure that\n' you have your own private copy of the data and that it occupies\n' contiguous storage.\nA.DeepenThisCopy()\nrow1(0) = 100\nConsole.WriteLine(\"Value of A[1,0] == 100 is {0}\", A(1, 0) = 100) ' False\nConsole.WriteLine()\n\n' You can also obtain vector views of the various diagonals of A.\n' Here's is one way to create the 3 by 3 matrix B\n' |1 2 0|\n' B = |-2 1 2|\n' |0 -2 1|\n' That is, 1's on the main diagonal, 2's in the super-diagonal, and\n' -2's on the sub-diagonal.\nDim B As New DoubleMatrix(3, 3) ' A 3x3 matrix with all zeroes\nB.Diagonal().Set(Range.All, 1.0)\nB.Diagonal(1).Set(Range.All, 2.0)\nB.Diagonal(-1).Set(Range.All, -2.0)\nConsole.WriteLine(\"B...\")\nConsole.WriteLine(B.ToTabDelimited()) ' B = 3x3 [1 2 0 -2 1 2 0 -2 1]\nConsole.WriteLine()\n\n' Sum the columns of B\nDim Totals As DoubleVector = NMathFunctions.Sum(B)\nConsole.WriteLine(\"Column totals of B = {0}\", Totals.ToString())\nConsole.WriteLine()\n\n' There are methods to compute matrix norms with respect to the one\n' and infinity norms.\nConsole.WriteLine(\"The one-norm of A = {0}\", A.InfinityNorm())\nConsole.WriteLine()\nConsole.WriteLine(\"The infinity-norm of A = {0}\", A.OneNorm())\nConsole.WriteLine()\n\n' The Resize method changes the dimensions of a matrix. 
Values are\n' truncated or padded with zeros as appropriate.\nA.Resize(5, 5)\n\n' A will have it's original values in the top-left 3 by 3 corner.\n' The last two rows and the last two columns will zeros.\nConsole.WriteLine(\"A resized...\")\nConsole.WriteLine(A.ToTabDelimited)\n\n' The indexers can be used to obtain vector views of slices of the\n' rows or columns of A.\n' Construct a Range object which starts at 0, goes to the end and has\n' a stride of 2.\nDim everyOtherElt As New Range(0, Position.End, 2)\n\n' Create a vector that views every other element of the last\n' last column of A.\nDim lastColEveryOther As DoubleVector = A(everyOtherElt, A.Cols - 1)\n\n' Use this vector to set every other element in the last column\n' of A to 2.\nlastColEveryOther.Set(Range.All, 2.0)\nConsole.WriteLine(\"lastColEveryOther = {0}\", lastColEveryOther.ToString()) ' [1 1 1 1 1]\nConsole.WriteLine()\n\n' Last column of A is [2 0 2 0 2]\nConsole.WriteLine(\"Last column of A = {0}\", A.Col(A.Cols - 1).ToString())\nConsole.WriteLine()\n\n' Matrix/Matrix and Matrix/Vector products are computed using the appropriate\n' static methods in the NMathFunctions class.\nB = New DoubleMatrix(5, 5, 1, 1)\nDim C As DoubleMatrix = NMathFunctions.Product(A, B)\nConsole.WriteLine(\"Inner product of A and B is...\")\nConsole.WriteLine(C.ToTabDelimited)\nConsole.WriteLine()\n\nDim v As New DoubleVector(\"[5 4 3 2 1]\")\nDim u As DoubleVector = NMathFunctions.Product(A, v)\nConsole.WriteLine(\"Inner product of A and v = \", u.ToString())\nConsole.WriteLine()\n\n' Transpose products are available too.\nC = NMathFunctions.TransposeProduct(A, B)\nConsole.WriteLine(\"The inner product of the transpose of A with B is...\")\nConsole.WriteLine(C.ToTabDelimited)\nConsole.WriteLine()\n\n' For complex matrices, conjugate transpose products are available.\nDim d As New DoubleComplex(1, 1)\n' Construct a two matrices whose values are multiples of c and 2c\nDim F As New DoubleComplexMatrix(3, 3, d, d)\nDim G As New DoubleComplexMatrix(3, 3, d, 2 * d)\nDim H As DoubleComplexMatrix = NMathFunctions.ConjTransposeProduct(F, G)\nConsole.WriteLine(\"The inner product of the complex conjugate of F with G is...\")\nConsole.WriteLine(NMathFunctions.Real(H).ToTabDelimited)\n\nConsole.WriteLine()\nConsole.WriteLine(\"Press Enter Key\")" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5773379,"math_prob":0.960809,"size":4671,"snap":"2019-51-2020-05","text_gpt3_token_len":1272,"char_repetition_ratio":0.2063424,"word_repetition_ratio":0.008534851,"special_character_ratio":0.28858918,"punctuation_ratio":0.16933867,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9812186,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T03:41:07Z\",\"WARC-Record-ID\":\"<urn:uuid:d9157c56-392e-4e08-b5df-e85a6668da2b>\",\"Content-Length\":\"19991\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:75644688-53b1-4e96-9f9e-ea7227f5bef6>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f0707ac-50a5-44b7-b6b5-012370fb3c06>\",\"WARC-IP-Address\":\"72.47.229.209\",\"WARC-Target-URI\":\"https://www.centerspace.net/examples/nmath/visual-basic/core/matrix-example.php\",\"WARC-Payload-Digest\":\"sha1:L46QQ27SWHUFJESJ65FCARGWSOTM36K7\",\"WARC-Block-Digest\":\"sha1:KUQEWETDX3H6KWPPETRDCGWRHTOSJUZV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250591763.20_warc_CC-MAIN-20200118023429-20200118051429-00211.warc.gz\"}"}
https://vkusnosup.ru/solved-problems-on-bayes-theorem-9884.html
[ "# Solved Problems On Bayes Theorem", null, "Union of Events – The union of two or more events is basically the combined set of the two or more events in the sample space.For example, the union of getting a king or hearts in a deck of cards would include 16 cards (13 hearts and 3 kings).\n\nTags: Help Me With My Statistics HomeworkUniversity Of Leicester Thesis BindingSteps In Writing A Research ProposalPenn State College Essay 2013Common Application Transfer EssayApplication For Admission LetterCan You Start An Essay With A QuestionWrite College Essay Supplement7th Grade Word Problems\n\nAlso, this theorem can be used in different ways to obtain interesting and meaningful results.\n\nBayesian inference is a method of statistical inference in which Bayes’ theorem is used to update the probability of a hypothesis as more evidence or information becomes available.\n\nNote that we do not count the king of hearts twice.\n\nIntersection of Events – The intersection of two or more events is the set of events that are common in both the events.\n\nEvent – An event is simply the outcome of an experiment.\n\nFor example, if you pick out a card and get a queen of spades, it is an event.This very simple example helps us understand the Bayes rule for an uncomplicated case.This theorem can be extended to many cases which are more complicated.Let us understand this theorem with the help of a simple example. There are two shapes of chocolate – round and flat. We would like to find out the probability of getting a round chocolate given that the we have already got a round chocolate out.This can be solved by using the Bayes’ theorem in the following way: P(1) = Probability of getting a round chocolate at the first pick = 7/15 P(2) = Probability of getting a round chocolate at second pick = 6/14 P(1 and 2) = ½*(6/14) P(2/1) = P(1 and 2)/P(1) = 0.214/0.4667=0.4585So the probability of getting a round chocolate on the second picking given that the first picking yielded a round chocolate is 45.85%.Sample Space – The sample space is the collection of all possible outcomes.Taking the previous example of cards, the sample space would be all 52 cards.This is the initial probability that the light bulb is on. This is the evidence that alters p(i) to a more accurate value.Now, you try to gather evidence that will tell you if the light bulb is on or not. So we finally get p(i/switch) that is the probability of light being on given that the switch is on.The methods related to using Bayes’ theorem come under Bayesian Statistics where various methods like Bayesian inference, Probability trees, Bayes’ updating and Frequentist approach.Aman Goel is a Computer Science and Engineering graduate from IIT Bombay. He is now the co-founder of Allin Call Research & Solutions." ]
[ null, "https://vkusnosup.ru/zavvazde/solved-problems-on-bayes-theorem-642195.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94094586,"math_prob":0.9317165,"size":2460,"snap":"2021-31-2021-39","text_gpt3_token_len":550,"char_repetition_ratio":0.127443,"word_repetition_ratio":0.042154565,"special_character_ratio":0.22682928,"punctuation_ratio":0.07392197,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9942667,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T14:12:47Z\",\"WARC-Record-ID\":\"<urn:uuid:ca896a24-280e-42cf-a44d-020568786560>\",\"Content-Length\":\"30566\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ce0d14d4-7907-45e0-9a70-76f1415e354d>\",\"WARC-Concurrent-To\":\"<urn:uuid:4ceeb70b-17c5-4031-9ed4-e0c9c69e5be3>\",\"WARC-IP-Address\":\"104.21.64.30\",\"WARC-Target-URI\":\"https://vkusnosup.ru/solved-problems-on-bayes-theorem-9884.html\",\"WARC-Payload-Digest\":\"sha1:4THFZCXI3PJAKFI4Y6TJXDUXBQVO3VND\",\"WARC-Block-Digest\":\"sha1:G3INKIOSX2CT6V6XENDOXHG4K4RQY5QW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057424.99_warc_CC-MAIN-20210923135058-20210923165058-00547.warc.gz\"}"}
https://en.wikipedia.org/wiki/Hesse_normal_form
[ "# Hesse normal form", null, "Distance from the origin O to the line E calculated with the Hesse normal form. Normal vector in red, line in green, point O shown in blue.\n\nThe Hesse normal form named after Otto Hesse, is an equation used in analytic geometry, and describes a line in $\\mathbb {R} ^{2}$", null, "or a plane in Euclidean space $\\mathbb {R} ^{3}$", null, "or a hyperplane in higher dimensions. It is primarily used for calculating distances (see point-plane distance and point-line distance).\n\nIt is written in vector notation as\n\n${\\vec {r}}\\cdot {\\vec {n}}_{0}-d=0.\\,$", null, "The dot $\\cdot$", null, "indicates the scalar product or dot product. Vector ${\\vec {r}}$", null, "points from the origin of the coordinate system, O, to any point P that lies precisely in plane or on line E. The vector ${\\vec {n}}_{0}$", null, "represents the unit normal vector of plane or line E. The distance $d\\geq 0$", null, "is the shortest distance from the origin O to the plane or line.\n\n## Derivation/Calculation from the normal form\n\nNote: For simplicity, the following derivation discusses the 3D case. However, it is also applicable in 2D.\n\nIn the normal form,\n\n$({\\vec {r}}-{\\vec {a}})\\cdot {\\vec {n}}=0\\,$", null, "a plane is given by a normal vector ${\\vec {n}}$", null, "as well as an arbitrary position vector ${\\vec {a}}$", null, "of a point $A\\in E$", null, ". The direction of ${\\vec {n}}$", null, "is chosen to satisfy the following inequality\n\n${\\vec {a}}\\cdot {\\vec {n}}\\geq 0\\,$", null, "By dividing the normal vector ${\\vec {n}}$", null, "by its magnitude $|{\\vec {n}}|$", null, ", we obtain the unit (or normalized) normal vector\n\n${\\vec {n}}_{0}={{\\vec {n}} \\over {|{\\vec {n}}|}}\\,$", null, "and the above equation can be rewritten as\n\n$({\\vec {r}}-{\\vec {a}})\\cdot {\\vec {n}}_{0}=0.\\,$", null, "Substituting\n\n$d={\\vec {a}}\\cdot {\\vec {n}}_{0}\\geq 0\\,$", null, "we obtain the Hesse normal form\n\n${\\vec {r}}\\cdot {\\vec {n}}_{0}-d=0.\\,$", null, "", null, "In this diagram, d is the distance from the origin. Because ${\\vec {r}}\\cdot {\\vec {n}}_{0}=d$", null, "holds for every point in the plane, it is also true at point Q (the point where the vector from the origin meets the plane E), with ${\\vec {r}}={\\vec {r}}_{s}$", null, ", per the definition of the Scalar product\n\n$d={\\vec {r}}_{s}\\cdot {\\vec {n}}_{0}=|{\\vec {r}}_{s}|\\cdot |{\\vec {n}}_{0}|\\cdot \\cos(0^{\\circ })=|{\\vec {r}}_{s}|\\cdot 1=|{\\vec {r}}_{s}|.\\,$", null, "The magnitude $|{\\vec {r}}_{s}|$", null, "of ${{\\vec {r}}_{s}}$", null, "is the shortest distance from the origin to the plane." ]
[ null, "https://upload.wikimedia.org/wikipedia/commons/thumb/e/e5/Hesse_normalenform.svg/220px-Hesse_normalenform.svg.png", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e150115ab9f63023215109595b76686a1ff890fd", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f936ddf584f8f3dd2a0ed08917001b7a404c10b5", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a78c773e25b5c17aaad99b49a5ef92d9d5cd94c7", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/ba2c023bad1bd39ed49080f729cbf26bc448c9ba", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/6aec3c9ce13b53e9e24c98e7cce4212627884c91", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9e61c9a3a3340ecb35a2be480f535e90026eaa15", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/74c3a2cca0c610ded6e0f188788933fb11f07df4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/ec37b2f333900dc0e11beedbc298663ce14741a1", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/49569db585c1b6306d5ffd91161775f67235fae0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/546e6615827e17295718741fd0b86f639a947f16", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5e8c484188f5c7096d22987fda802be855d16e39", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/49569db585c1b6306d5ffd91161775f67235fae0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9f360b822276dffe135e2fad3eefbeab5e5d36cd", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/49569db585c1b6306d5ffd91161775f67235fae0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f051acb714503fe4ec87b4f3e02c1122b465c641", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/aad11a4433d73ccf3d7fccde022195f2d882b9dc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/1a9c2e23974e45c02bc89c0c2aff0f43ec534750", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/35adb587f76d69239b530443b49434a4f18796fc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a78c773e25b5c17aaad99b49a5ef92d9d5cd94c7", null, "https://upload.wikimedia.org/wikipedia/commons/6/64/Ebene_Hessesche_Normalform.PNG", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2725cc11a5c9e63b24290204ccdeacd0cf480f73", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/62cd21c1c0101fb36bc1c09b639ac70d24decf2a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e7b8fc9927c3a9cf3aeb33c51232e578a09798b8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/62195ddcf08bed7c58889c18be116bfcd6e6a43c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/73ec2f8cc5ddf1789dcd7e6b42821ec62966084c", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87806433,"math_prob":0.99992895,"size":1721,"snap":"2022-27-2022-33","text_gpt3_token_len":395,"char_repetition_ratio":0.15433896,"word_repetition_ratio":0.020477816,"special_character_ratio":0.23067984,"punctuation_ratio":0.12130178,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999969,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52],"im_url_duplicate_count":[null,5,null,null,null,null,null,null,null,null,null,null,null,9,null,null,null,5,null,null,null,null,null,6,null,null,null,5,null,null,null,4,null,5,null,5,null,5,null,null,null,3,null,5,null,5,null,5,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T11:17:18Z\",\"WARC-Record-ID\":\"<urn:uuid:a7947c8c-23d5-4bb4-a415-ffb5479afb12>\",\"Content-Length\":\"72955\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5c6affc7-dc58-4177-ad21-34f531926fac>\",\"WARC-Concurrent-To\":\"<urn:uuid:87f870b4-33de-4bb4-8a6d-750e43eed482>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.wikipedia.org/wiki/Hesse_normal_form\",\"WARC-Payload-Digest\":\"sha1:QID6WU3MVQAXW2KIYZBTIF2AZZALCEXN\",\"WARC-Block-Digest\":\"sha1:OF53UQ25BRLGF76L4MQJJ4ZXCKRAVEBU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104542759.82_warc_CC-MAIN-20220705083545-20220705113545-00765.warc.gz\"}"}
https://number.academy/3983
[ "# Number 3983\n\nNumber 3,983 is spelled 🔊 and written in words: three thousand, nine hundred and eighty-three. The ordinal number 3983rd is said 🔊 and written as: three thousand, nine hundred and eighty-third. The meaning of the number 3983 in maths: Is it prime? Factorization and prime factors tree. The square root and cube root of 3983. What 3983 is in computer science, numerology, codes and images, and how it is written and named in other languages. Other interesting facts related to 3983.\n\n## What is 3,983 in other units\n\nThe decimal (Arabic) number 3983 converted to a Roman numeral is MMMCMLXXXIII. Roman and decimal number conversions.\n The number 3983 converted to a Mayan number is", null, "Decimal and Mayan number conversions.\n\n#### Weight conversion\n\n3983 kilograms (kg) = 8780.9 pounds (lbs)\n3983 pounds (lbs) = 1806.7 kilograms (kg)\n\n#### Length conversion\n\n3983 kilometers (km) equals 2475 miles (mi).\n3983 miles (mi) equals 6411 kilometers (km).\n3983 meters (m) equals 13068 feet (ft).\n3983 feet (ft) equals 1214 meters (m).\n3983 centimeters (cm) equals 1568.1 inches (in).\n3983 inches (in) equals 10116.8 centimeters (cm).\n\n#### Temperature conversion\n\n3983° Fahrenheit (°F) equals 2195° Celsius (°C)\n3983° Celsius (°C) equals 7201.4° Fahrenheit (°F)\n\n#### Power conversion\n\n3983 horsepower (hp) equals 2929.09 kilowatts (kW)\n3983 kilowatts (kW) equals 5416.11 horsepower (hp)\n\n#### Time conversion\n\n(hours, minutes, seconds, days, weeks)\n3983 seconds equals 1 hour, 6 minutes, 23 seconds\n3983 minutes equals 2 days, 18 hours, 23 minutes\n\n### Codes and images of the number 3983\n\nNumber 3983 Morse code: ...-- ----. ---.. ...--\nSign language for number 3983:", null, "", null, "", null, "", null, "Number 3983 in braille:", null, "Images of the number\nImage (1) of the number. Image (2) of the number.", null, "", null, "More images, other sizes, codes and colors ...\n\n#### Number 3983 infographic", null, "### Gregorian, Hebrew, Islamic, Persian and Buddhist year (calendar)\n\nGregorian year 3983 is Buddhist year 4526.\nBuddhist year 3983 is Gregorian year 3440.\nGregorian year 3983 is Islamic year 3464 or 3465.\nIslamic year 3983 is Gregorian year 4485 or 4486.\nGregorian year 3983 is Persian year 3361 or 3362.\nPersian year 3983 is Gregorian year 4604 or 4605.\nGregorian year 3983 is Hebrew year 7743 or 7744.\nHebrew year 3983 is Gregorian year 223.\nThe Buddhist calendar is used in Sri Lanka, Cambodia, Laos, Thailand, and Burma. The Persian calendar is official in Iran and Afghanistan.\n\n## Mathematics of no. 3983\n\n### Multiplications\n\n#### Multiplication table of 3983\n\n3983 multiplied by two equals 7966 (3983 x 2 = 7966).\n3983 multiplied by three equals 11949 (3983 x 3 = 11949).\n3983 multiplied by four equals 15932 (3983 x 4 = 15932).\n3983 multiplied by five equals 19915 (3983 x 5 = 19915).\n3983 multiplied by six equals 23898 (3983 x 6 = 23898).\n3983 multiplied by seven equals 27881 (3983 x 7 = 27881).\n3983 multiplied by eight equals 31864 (3983 x 8 = 31864).\n3983 multiplied by nine equals 35847 (3983 x 9 = 35847).\n\n### Fractions: decimal fraction and common fraction\n\n#### Fraction table of 3983\n\nHalf of 3983 is 1991.5 (3983 / 2 = 1991.5 = 1991 1/2).\nOne third of 3983 is 1327.6667 (3983 / 3 = 1327.6667 = 1327 2/3).\nOne quarter of 3983 is 995.75 (3983 / 4 = 995.75 = 995 3/4).\nOne fifth of 3983 is 796.6 (3983 / 5 = 796.6 = 796 3/5).\nOne sixth of 3983 is 663.8333 (3983 / 6 = 663.8333 = 663 5/6).\nOne seventh of 3983 is 569 (3983 / 7 = 569).\nOne eighth of 3983 is 497.875 (3983 / 8 = 497.875 = 497 7/8).\nOne ninth of 3983 is 442.5556 (3983 / 9 = 442.5556 = 442 5/9).\n\n#### Is Prime?\n\nThe number 3983 is not a prime number. The closest prime numbers are 3967 and 3989.\nThe 3983rd prime number in order is 37591.\n\n#### Factorization and factors (divisors)\n\nThe prime factors of 3983 are 7 * 569.\nThe factors of 3983 are 1, 7, 569, 3983.\nTotal factors: 4.\nSum of factors: 4560 (577 excluding the number itself).\n(A short Python verification sketch appears at the end of this entry.)\n\n#### Powers\n\nThe second power of 3983 is 3983² = 15,864,289.\nThe third power of 3983 is 3983³ = 63,187,463,087.\n\n#### Roots\n\nThe square root √3983 is 63.111013.\nThe cube root ∛3983 is 15.85149.\n\n#### Logarithms\n\nThe natural logarithm of 3983: ln 3983 = 8.289791.\nThe base-10 logarithm of 3983: log10 3983 = 3.60021.\nThe Napierian logarithm: log1/e 3983 = -8.289791.\n\n### Trigonometric functions\n\nThe cosine of 3983 is 0.857973.\nThe sine of 3983 is -0.513694.\nThe tangent of 3983 is -0.598729.\n\n### Properties of the number 3983\n\nMore math properties ...\n\n## Number 3983 in Computer Science\n\nPIN 3983: It's recommendable to use 3983 as a password or PIN.\nNumber of bytes: 3.9 KB\nUnix time: Unix time 3983 is equal to Thursday Jan. 1, 1970, 1:06:23 a.m. GMT\nIPv4, IPv6: Number 3983 as an internet address is v4 0.0.15.143, v6 ::f8f\n3983 Decimal = 111110001111 Binary\n3983 Decimal = 12110112 Ternary\n3983 Decimal = 7617 Octal\n3983 Decimal = F8F Hexadecimal (0xf8f hex)\nBASE64: Mzk4Mw==\nMD5: 5e4450dc93010bbeea7cd28eba296850\nSHA1: 51af0418179317caa1e2d452f67017b44d428d8d\nSHA224: 0e903cc4804329a4c8dff21b39f11e4796aee0344bba2dc89fc56675\nSHA256: c7e92bac8729cf4c3b5d686b2592262c363afdfc5fcf24c5a46dfc5b3b97a15f\nMore SHA codes related to the number 3983 ...\n\n## Numerology 3983\n\n### The meaning of the number 3 (three), numerology 3\n\nCharacter frequency 3: 2\n\nThe number three (3) came to share genuine expression and sensitivity with the world. People associated with this number need to connect with their deepest emotions. The number 3 is characterized by its pragmatism: it is utilitarian, sagacious, dynamic, and creative; it sets objectives and fulfills them. 
He/she is also self-expressive in many ways and with good communication skills.\nMore about the meaning of the number 3 (three), numerology 3 ...\n\n### The meaning of the number 9 (nine), numerology 9\n\nCharacter frequency 9: 1\n\nThe number 9 (nine) is the sign of ideals, Universal interest and the spirit of combat for humanitarian purposes. It symbolizes the inner Light, prioritizing ideals and dreams, experienced through emotions and intuition. It represents the ascension to a higher degree of consciousness and the ability to display love for others. He/she is creative, idealistic, original and caring.\nMore about the meaning of the number 9 (nine), numerology 9 ...\n\n### The meaning of the number 8 (eight), numerology 8\n\nCharacter frequency 8: 1\n\nThe number eight (8) is the sign of organization, perseverance and control of energy to produce material and spiritual achievements. It represents the power of realization, abundance in the spiritual and material world. Sometimes it denotes a tendency to sacrifice but also to be unscrupulous.\nMore about the meaning of the number 8 (eight), numerology 8 ...\n\n## Interesting facts about the number 3983\n\n### Asteroids\n\n• (3983) Sakiko is asteroid number 3983. It was discovered by A. Mrkos from Klet Observatory on 9/20/1984.\n\n### Distances between cities\n\n• There is a 3,983 miles (6,410 km) direct distance between Almaty (Kazakhstan) and Madrid (Spain).\n• There is a 2,475 miles (3,983 km) direct distance between Amman (Jordan) and Nashik (India).\n• There is a 3,983 miles (6,409 km) direct distance between Amritsar (India) and Birmingham (United Kingdom).\n• There is a 2,475 miles (3,983 km) direct distance between Amritsar (India) and Shenzhen (China).\n• There is a 2,475 miles (3,983 km) direct distance between Bandung (Indonesia) and Bangalore (India).\n• There is a 3,983 miles (6,409 km) direct distance between Bucharest (Romania) and Lanzhou (China).\n• There is a 3,983 miles (6,409 km) direct distance between Copenhagen (Denmark) and Thāne (India).\n• There is a 3,983 miles (6,410 km) direct distance between Damascus (Syria) and Ordos (China).\n• There is a 3,983 miles (6,410 km) direct distance between Fuzhou (China) and Kazan (Russia).\n• There is a 3,983 miles (6,409 km) direct distance between Gorakhpur (India) and Paris (France).\n• There is a 2,475 miles (3,983 km) direct distance between Guwahati (India) and Hiroshima-shi (Japan).\n• There is a 2,475 miles (3,983 km) direct distance between Hangzhou (China) and Visakhapatnam (India).\n• There is a 3,983 miles (6,409 km) direct distance between Havana (Cuba) and Santiago (Chile).\n• There is a 2,475 miles (3,983 km) direct distance between Hengyang (China) and Peshawar (Pakistan).\n• There is a 2,475 miles (3,983 km) direct distance between Hyderabad (Pakistan) and Xianyang (China).\n• There is a 3,983 miles (6,409 km) direct distance between Chennai (India) and Kawasaki (Japan).\n• There is a 3,983 miles (6,409 km) direct distance between Chennai (India) and Saitama (Japan).\n• There is a 3,983 miles (6,409 km) direct distance between Chennai (India) and Tokyo (Japan).\n• There is a 3,983 miles (6,410 km) direct distance between Kharkiv (Ukraine) and Tai’an (China).\n• There is a 2,475 miles (3,983 km) direct distance between Lucknow (India) and Taichung (Taiwan).\n• There is a 3,983 miles (6,409 km) direct distance between Madurai (India) and Ōsaka-shi (Japan).\n• There is a 2,475 miles (3,983 km) direct distance between Maracay (Venezuela) and Ottawa (Alberta).\n• 
There is a 3,983 miles (6,409 km) direct distance between Medellín (Colombia) and Pikine (Senegal).\n• There is a 3,983 miles (6,409 km) direct distance between Mombasa (Kenya) and Munich (Germany).\n• There is a 3,983 miles (6,410 km) direct distance between Mosul (Iraq) and Pikine (Senegal).\n• There is a 3,983 miles (6,409 km) direct distance between Qom (Iran) and Shantou (China).\n• There is a 3,983 miles (6,409 km) direct distance between Rostov-na-Donu (Russia) and Tongshan (China).\n• There is a 3,983 miles (6,409 km) direct distance between Tehrān (Iran) and Xiamen (China).\n• There is a 3,983 miles (6,409 km) direct distance between Ufa (Russia) and Zaria (Nigeria).\n• There is a 3,983 miles (6,409 km) direct distance between Voronezh (Russia) and Yueyang (China).\n\n### Mathematics\n\n• 3983 has the property that the concatenation of its prime factors in increasing order is a square.\n\n## Number 3,983 in other languages\n\nHow to say or write the number three thousand, nine hundred and eighty-three in Spanish, German, French and other languages. The character used as the thousands separator.\n Spanish: 🔊 (número 3.983) tres mil novecientos ochenta y tres German: 🔊 (Anzahl 3.983) dreitausendneunhundertdreiundachtzig French: 🔊 (nombre 3 983) trois mille neuf cent quatre-vingt-trois Portuguese: 🔊 (número 3 983) três mil, novecentos e oitenta e três Chinese: 🔊 (数 3 983) 三千九百八十三 Arabian: 🔊 (عدد 3,983) ثلاثة آلاف و تسعمائةثلاثة و ثمانون Czech: 🔊 (číslo 3 983) tři tisíce devětset osmdesát tři Korean: 🔊 (번호 3,983) 삼천구백팔십삼 Danish: 🔊 (nummer 3 983) tretusinde og nihundrede og treogfirs Hebrew: (מספר 3,983) שלש אלף תשע מאות שמנים ושלש Dutch: 🔊 (nummer 3 983) drieduizendnegenhonderddrieëntachtig Japanese: 🔊 (数 3,983) 三千九百八十三 Indonesian: 🔊 (jumlah 3.983) tiga ribu sembilan ratus delapan puluh tiga Italian: 🔊 (numero 3 983) tremilanovecentottantatré Norwegian: 🔊 (nummer 3 983) tre tusen, ni hundre og åtti-tre Polish: 🔊 (liczba 3 983) trzy tysiące dziewięćset osiemdziesiąt trzy Russian: 🔊 (номер 3 983) три тысячи девятьсот восемьдесят три Turkish: 🔊 (numara 3,983) üçbindokuzyüzseksenüç Thai: 🔊 (จำนวน 3 983) สามพันเก้าร้อยแปดสิบสาม Ukrainian: 🔊 (номер 3 983) три тисячi дев'ятсот вiсiмдесят три Vietnamese: 🔊 (con số 3.983) ba nghìn chín trăm tám mươi ba Other languages ..." ]
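A quick verification sketch (Python with SymPy; both the language and the library are illustrative choices, not part of the source page) for several of the arithmetic claims in this entry:

```python
# Check the factorization, divisor, and power claims for 3983.
from sympy import factorint, divisors, isprime

n = 3983
assert not isprime(n)                      # composite; closest primes are 3967 and 3989
assert factorint(n) == {7: 1, 569: 1}      # prime factorization 7 * 569
assert divisors(n) == [1, 7, 569, 3983]    # the four divisors
assert sum(divisors(n)) == 4560            # sum of factors (577 excluding n itself)
assert n**2 == 15_864_289 and n**3 == 63_187_463_087
print(f"0x{n:X}, 0b{n:b}, 0o{n:o}")        # F8F, 111110001111, 7617
```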
[ null, "https://numero.wiki/s/numeros-mayas/numero-maya-3983.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-3.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-9.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-8.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-3.png", null, "https://number.academy/img/braille-3983.svg", null, "https://numero.wiki/img/a-3983.jpg", null, "https://numero.wiki/img/b-3983.jpg", null, "https://number.academy/i/infographics/3/number-3983-infographic.png", null, "https://numero.wiki/s/share-desktop.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.68914604,"math_prob":0.9777993,"size":10509,"snap":"2022-05-2022-21","text_gpt3_token_len":3679,"char_repetition_ratio":0.20666349,"word_repetition_ratio":0.15429917,"special_character_ratio":0.37796175,"punctuation_ratio":0.15207824,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9754428,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null,null,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-20T19:33:03Z\",\"WARC-Record-ID\":\"<urn:uuid:9878af0d-38ec-4844-a069-6600de213cf6>\",\"Content-Length\":\"46707\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:da0071db-2214-42f1-94b3-c42d34d6bd07>\",\"WARC-Concurrent-To\":\"<urn:uuid:c8b80384-ea49-4e98-9528-16a4aa0a1fd2>\",\"WARC-IP-Address\":\"162.0.227.212\",\"WARC-Target-URI\":\"https://number.academy/3983\",\"WARC-Payload-Digest\":\"sha1:N4U44VYRBXZTDL66OT4FXDDSSL4FE766\",\"WARC-Block-Digest\":\"sha1:RKAR64TVLZPRUCV7RQLLXA2A6ICS4WNQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662534669.47_warc_CC-MAIN-20220520191810-20220520221810-00313.warc.gz\"}"}
https://mechanicalbase.com/drawing-curves-of-funtions-and-polynomials-in-matlab-very-easily/
[ "Drawing Curves Of Functions And Polynomials In MatLab Very Easily\n\nThere are lots of graph and curve plotting commands in Matlab, and drawing graphs of functions and polynomials that contain variables is also available and effective. For this, use the ‘fplot()’ command in Matlab. Here, we explain how to use the ‘fplot()’ command to draw curves of functions and polynomials in Matlab.\n\nHow To Use The ‘fplot()’ Command In MatLab?\n\nThe use of the fplot() command is very easy, as shown above. You need to write the correct parameters inside your fplot() command to obtain graphs easily.\n\nFirst of all, you need to define the variable that you will use in your fplot() command. To do this, type ‘@(variable)’, as shown by the blue arrow above. Then type your function, as shown by the red arrow above. You can also define your function separately and pass it to fplot().\n\nTo define the interval over which your graph will be plotted, you just specify [Xmin, Xmax], as shown by the magenta arrow above. The y values will be drawn according to the values the function takes.\n\nFinally, define your graph drawing tolerance, as shown by the orange arrow. Tolerance values below 1 set the accuracy relative to the exact values. If this value is bigger than 1, it defines the number of points that will be considered in the calculation.\n\nYou can edit your graph by adding legends, titles and other information manually from the opened figure window.\n\nThe use of the fplot() command in Matlab is very simple, as you see above. Do not forget to leave your comments and questions about plotting graphs of functions and polynomials with the fplot() command in Matlab below.\n\nYour feedback is very important to us." ]
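To make the syntax concrete, here is a minimal sketch (the cubic and the interval are illustrative choices, not taken from the article):

```matlab
% Plot f(x) = x^3 - 2x + 1 over the interval [-3, 3] with fplot()
f = @(x) x.^3 - 2*x + 1;   % anonymous function: '@(x)' declares the variable
fplot(f, [-3 3]);          % second argument is the [Xmin Xmax] interval
grid on
title('f(x) = x^3 - 2x + 1')
xlabel('x'); ylabel('f(x)')
```

In older MATLAB releases, fplot() also accepted an optional tolerance as a third argument, which is what the tolerance description above refers to; in current releases the 'MeshDensity' name-value option plays a similar role.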
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9007022,"math_prob":0.91424185,"size":1973,"snap":"2022-05-2022-21","text_gpt3_token_len":441,"char_repetition_ratio":0.14728288,"word_repetition_ratio":0.0059701493,"special_character_ratio":0.21844906,"punctuation_ratio":0.10846561,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976232,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-25T20:18:11Z\",\"WARC-Record-ID\":\"<urn:uuid:14f9ea4e-3887-425c-87e3-d9fd9f9df76a>\",\"Content-Length\":\"170541\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:52913a02-a095-4f97-b1bb-b286a33b26a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:f61202df-03cf-45fe-a46f-064e4fc03918>\",\"WARC-IP-Address\":\"104.21.69.131\",\"WARC-Target-URI\":\"https://mechanicalbase.com/drawing-curves-of-funtions-and-polynomials-in-matlab-very-easily/\",\"WARC-Payload-Digest\":\"sha1:64IO3D7BN3MIYO6CRWGKLVA6PRJCU3L4\",\"WARC-Block-Digest\":\"sha1:XYD6YOYRNY5CUICHEJUHJHQSII3EJ36N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304872.21_warc_CC-MAIN-20220125190255-20220125220255-00522.warc.gz\"}"}
https://mathematica.stackexchange.com/questions/48174/whats-the-difference-between-variables-with-and-without-a-subscript
[ "# What's the difference between variables with and without a subscript?\n\nIf I input $x = a + b$, the $x$, $a$ and $b$ are blue. When I hit Shift+Enter, the $x$ turns black, indicating the kernel now knows this variable. But when I input $x_r = a + b$, and hit Shift+Enter, $x_r$ is still blue.\n\nWhat's the difference between variables with and without a subscript?\n\n• Try wrapping AtomQ and Head around your variables to see the difference. This might be useful: mathematica.stackexchange.com/questions/38622/… May 20, 2014 at 6:08\n• $x_r$ is Subscript[x, r], so your definition is actually more about defining Subscript than about x. May 20, 2014 at 6:13\n• Evaluate ?? Subscript to see what Mathematica has done with your definition. It may also suggest to you why subscripted variables are fine for text formatting, but not so good for computation. May 20, 2014 at 10:42\n\nOn the simplest level, the difference is that x is a symbol, which is an atomic object to a Mathematica kernel, while subscripting x makes it a non-atomic expression, in this case an object with head Subscript.\n\n Clear[x]; {Head[x], AtomQ[x]}\n\n {Symbol, True}\n\n\nThe FullForm of a subscripted variable with subscript i is\n\nSubscript[x, i]\n\n\nIts head is clearly Subscript and it's not an atom.\n\n Clear[x, i]; AtomQ[Subscript[x, i]]\n\nFalse\n\n\nOn a deeper level, there is a difference in how assignments to symbols and subscripted objects are handled. In both cases, an expression of the form\n\n{HoldPattern[...] :> a + b}\n\n\nis recorded by the kernel. However, they are stored in different kinds of internal lists. An assignment to a symbol is stored as an own-value of the symbol; all assignments to subscripted variables are stored as down-values of Subscript.\n\nClear[x]; x = a + b; OwnValues @ x\n\n{HoldPattern[x] :> a + b}\n\nClear[Subscript, x, i]; Subscript[x, i] = a + b; DownValues @ Subscript\n\n{HoldPattern[Subscript[x, i]] :> a + b}\n\n\nIf you were to use many subscripted variables in a Mathematica session, you would build up a big list of rules in the down-values of Subscript, slowing down references to subscripted variables. But this is probably not the worst problem encountered with Subscript. Consider doing a long computation with subscripted variables. Somewhere in the notebook, you evaluate\n\nSubscript[x, i] = a + b;\n\n\nNow, much later and in a distant cell, you want to take a symbolic derivative. Like so:\n\nClear[x, i]; D[Subscript[x, i][t], {t, 2}]\n\nDerivative[2][a + b][t]\n\n\nNot what you expected? It is so easy to forget that it is Subscript that must be cleared.\n\nClear[Subscript]; D[Subscript[x, i][t], {t, 2}]\n\n\nDerivative[2][Subscript[x, i]][t]\n\nwhich will be rendered as $\tt{x_i''[t]}$. Looks good now, but how many other subscripted variables have you cleared along with $\tt{x_i}$?\n\n• However, D[Subscript[c, x]*x, x] returns Subscript[c, x] + x*Derivative[0, 1][Subscript][c, x] instead of simply $c_x$, it being a coefficient, as D treats Subscript as a symbolic function. Any idea how to overcome this? Nov 30, 2015 at 14:03" ]
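One practical follow-up (a sketch, not taken from the answers above): if only a single subscripted definition needs to go, Unset removes just that one down-value instead of wiping every rule attached to Subscript.

```mathematica
Subscript[x, i] = a + b;    (* stored as a down-value of Subscript *)
Subscript[x, j] = c;        (* a second, unrelated subscripted definition *)

Subscript[x, i] =.          (* Unset: removes only the rule for x_i *)

DownValues[Subscript]
(* {HoldPattern[Subscript[x, j]] :> c}  -- the definition for x_j survives *)
```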
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8471121,"math_prob":0.9015725,"size":2309,"snap":"2022-27-2022-33","text_gpt3_token_len":603,"char_repetition_ratio":0.17049892,"word_repetition_ratio":0.011111111,"special_character_ratio":0.24642703,"punctuation_ratio":0.15618221,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9962857,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T02:27:02Z\",\"WARC-Record-ID\":\"<urn:uuid:53653862-0e5d-4da7-8611-364303dcc9ec>\",\"Content-Length\":\"230788\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:22621504-486c-4050-a580-44ddf7df4a56>\",\"WARC-Concurrent-To\":\"<urn:uuid:3cefeb1e-5e3a-4b5b-ac68-d40751c14460>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/48174/whats-the-difference-between-variables-with-and-without-a-subscript\",\"WARC-Payload-Digest\":\"sha1:ZWUMSJBDWD3L43ZV2J5D23HZLW3LK5LM\",\"WARC-Block-Digest\":\"sha1:W6A5SH7LWIVCB4CR76F7NAFJJYOD5AI6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572833.78_warc_CC-MAIN-20220817001643-20220817031643-00060.warc.gz\"}"}
https://events.uwa.edu.au/event/20100111T055255Z-1235-18993@events.uwa.edu.au/whatson/maths-stats
[ "", null, "What's On at UWA\n\n# EVENT: Groups and Combinatorics Afternoon\n\n2pm Simon Guest (UWA/Baylor University)\n\nTitle: A Solvable Version of the Baer--Suzuki Theorem\n\nAbstract: Let G be a finite group, and take an element x in G. The Baer--Suzuki theorem states that if every pair of conjugates of x generates a nilpotent group, then the group generated by all of the conjugates of x is nilpotent. It is natural to ask if an analogous theorem is true for solvable groups. Namely, if every pair of conjugates of x generates a solvable group, then is the group generated by all of the conjugates of x solvable? In fact, this is not true. For example, if x has order 2 in a (nonabelian) simple group G, then every pair of conjugates of x generates a dihedral group (which is solvable), but the normal subgroup generated by all of the conjugates of x must be the whole of the nonabelian simple group G, which of course is not solvable. There are also counterexamples when x has order 3. However, the following is true: (1) Let x in G have prime order p > 4. If every pair of conjugates of x generates a solvable group, then the group generated by all of the conjugates of x is solvable. (2) Let x in G be an element of any order. If every 4-tuple of conjugates x, x^{g_1}, x^{g_2}, x^{g_3} generates a solvable group, then the group generated by all of the conjugates of x is solvable. We will discuss these results, some generalizations, and some of the methods used in their proofs.\n\n2:35 Akos Seress (Ohio State University/UWA)\n\nTitle: Majorana representations of dihedral, alternating, and symmetric groups\n\n3:05 Afternoon Tea\n\n3:40 Nicola Durante (Università di Napoli “Federico II”)\n\nTitle: Buekenhout-Metz unitals\n\nAbstract: We will discuss some recent characterization theorems for Buekenhout-Metz unitals in a Desarguesian projective plane of square order.\n\n4:15 Frank De Clerck (Ghent University)\n\nTitle: A geometric approach to Mathon maximal arcs.\n\nAbstract: A maximal arc of degree d in a projective plane of order q is a non-empty, proper subset of points such that every line meets the set in 0 or d points, for some d. If a plane has a maximal arc of degree d, the dual plane has one of degree q/d. We will mainly restrict to Desarguesian planes. It has been proved by Ball, Blokhuis and Mazzocca that non-trivial maximal arcs in PG(2,q) cannot exist if q is odd. They do exist if q is even: examples are hyperovals, Denniston arcs, Thas arcs and Mathon arcs. We will give an overview of these constructions and of the connection with other geometric topics. We will give a geometric approach to the Mathon arcs, emphasising those of degree 8.\n\nLocation: Weatherburn Lecture Theatre\nContact: Michael Giudici <[email protected]>\nStart: Thu, 14 Jan 2010 14:00\nEnd: Thu, 14 Jan 2010 17:00\nSubmitted by: Michael Giudici <[email protected]>\nLast Updated: Tue, 12 Jan 2010 11:09", null, "" ]
[ null, "https://www.uwa.edu.au/images/2002/brands/admin/uwalogotrans.gif", null, "https://www.uwa.edu.au/images/blank.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8514448,"math_prob":0.8316393,"size":3007,"snap":"2021-04-2021-17","text_gpt3_token_len":792,"char_repetition_ratio":0.13719614,"word_repetition_ratio":0.1278626,"special_character_ratio":0.23910874,"punctuation_ratio":0.108153075,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9526748,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-26T06:56:01Z\",\"WARC-Record-ID\":\"<urn:uuid:b9b54c4e-5c83-4ccf-83f2-4335a8ea0cd4>\",\"Content-Length\":\"16154\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1a81031b-be85-4981-a974-308dbb6e5dc8>\",\"WARC-Concurrent-To\":\"<urn:uuid:04e4adc3-9f75-4ad4-bd69-4d47a2d6f9f8>\",\"WARC-IP-Address\":\"104.18.0.45\",\"WARC-Target-URI\":\"https://events.uwa.edu.au/event/20100111T055255Z-1235-18993@events.uwa.edu.au/whatson/maths-stats\",\"WARC-Payload-Digest\":\"sha1:TQO3NJGLNVDDV6ZZ74LIOEXT2PUQOIZ3\",\"WARC-Block-Digest\":\"sha1:TW76MBLIMFHW76VRFTKFUHCKDUXPA4OC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704798089.76_warc_CC-MAIN-20210126042704-20210126072704-00454.warc.gz\"}"}
https://softmath.com/math-book-answers/multiplying-fractions/famous-mathimatical-equasion.html
[ "English | Español\n\n# Try our Free Online Math Solver!", null, "Online Math Solver\n\nWhat our customers say...\n\nThousands of users are using our software to conquer their algebra homework. Here are some of their experiences:\n\nThis is the program I have been waiting for... it really is speeding up my algebra assignment completions.\nB.C., Florida\n\nI really needed a way to get help with my homework when I wasn't able to speak with my teacher. Algebrator really solved my problem :)\nDavid Felton, MT\n\nThe simple interface of Algebrator makes it easy for my son to get right down to solving math problems. Thanks for offering such a useful program.\nDon Copeland, CA\n\nSearch phrases used on 2010-08-06:\n\nStudents struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?\n\n• kids math investigatory projects\n• 8th grade general science worksheets online\n• rational equation calculator\n• online factorize\n• algebra chart\n• aptitude questions with solutions\n• interactive programs in Radical expressions\n• adding subtracting multiplying dividing fractions\n• Learning how to do binomial expansion\n• reduce fraction to lowest terms in Java\n• statistics for the utterly confused powerpoints\n• college algebra order of operations practice\n• taxes and percentages practice pages for 6th graders\n• square root variables\n• easy maths solutions grade 10\n• sloving fractions using algebra\n• easy trivia for kids\n• vertex form problems\n• solve equation square\n• matlab cube root notation\n• year 8 exam papers\n• freee math help multiply and simplify\n• formula for percents\n• square root property algebra\n• two digit subtrac method\n• how to dividing fractions ti-89\n• writing radical expression for square roots\n• stats yr8 help\n• math KS3 worksheets\n• word problem involving radical expressions\n• printable prealgebra and algebra math worksheets and answer keys by email\n• hyperbola inequation graphs\n• rationalizing calculator\n• games/ fun - linear equation and inequalities\n• algrebra equation\n• parabola volume calculator\n• Logarithmic hyperbola\n• cube root on ti 83\n• college algebra word problem and solutions\n• sample algebraic problems 3rd grade\n• LCM calculator\n• simultaneous differential equation solver\n• log base( AND ti89\n• wronskian free program for ti89\n• examples of math prayers\n• Boolean algebra +sample paper\n• square root notation fraction\n• solve second order simultaneous differential equations polymath\n• What grade first teaches algebra?\n• online factor equation\n• linear equation worksheet\n• converting decimal numbers to base 3 easy way\n• ti-89 cramer's rule\n• pythagoras solver\n• find solution two lines by addition\n• geometry of a hyperbola help\n• algebra test sheets\n• algebra help program\n• difference quotient calculator\n• simplify square root multiply problems\n• how to calculate Modulus using casio\n• solutions to exercises \"functional analysis\" online\n• algrebra gcse\n• discriminant ti 84 plus\n• free answers to math problems\n• apptitude quetion papers\n• answer key to prentice hall physics\n• mastering algebra 1: course 1 module 3: systems of linear equations unit 1: graphic solutions to linear systems worksheet\n• easy algebra year 11\n• examples of \"clock problem\" in advance algebra (with solution)\n• basic fractions for idiots\n• the equation becomes first order linear\n• online algebra 1 skills practice workbook\n• finding the maximum and minimum values on a parabola and increasing and decreasing\n• college algebra kaufmann\n• complex linear equations matlab\n• graphing linear equations on the TI-83" ]
[ null, "https://softmath.com/images/video-pages/solver-top.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8132186,"math_prob":0.97154343,"size":4158,"snap":"2020-34-2020-40","text_gpt3_token_len":951,"char_repetition_ratio":0.13480982,"word_repetition_ratio":0.0,"special_character_ratio":0.20971622,"punctuation_ratio":0.043548387,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991198,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-09T21:16:58Z\",\"WARC-Record-ID\":\"<urn:uuid:54070afc-fa3c-4899-a290-b274264de1da>\",\"Content-Length\":\"89309\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4b489447-71bd-46e4-8d54-4cfd9a5a1bbd>\",\"WARC-Concurrent-To\":\"<urn:uuid:91e050a8-ae41-430c-8e19-fe77e519a580>\",\"WARC-IP-Address\":\"52.43.142.96\",\"WARC-Target-URI\":\"https://softmath.com/math-book-answers/multiplying-fractions/famous-mathimatical-equasion.html\",\"WARC-Payload-Digest\":\"sha1:YISAVUV4MC3BS5RDFDZ7YNLNCUSDIQ3T\",\"WARC-Block-Digest\":\"sha1:SM2OP26GP7UJGPZE4T5YBLGXLQHXOFET\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738573.99_warc_CC-MAIN-20200809192123-20200809222123-00405.warc.gz\"}"}
https://ebtekaronline.com/qa/question-what-is-the-enthalpy-of-fusion-of-ice.html
[ "", null, "# Question: What Is The Enthalpy Of Fusion Of Ice?\n\n## What is the latent heat of fusion of ice in J/kg?\n\nThe heat of fusion for ice or water is Lf = 3.33 × 10⁵ J/kg.\n\n## Why is heat of fusion important to life?\n\nWater has a high heat of fusion, i.e. the heat you need to remove from it to solidify (freeze) it. What all this means is that water can hold a lot of heat energy before it changes temperature and state (solid to liquid to gas). These properties of water are great if you are an organism that lives in the water.\n\n## What is molar enthalpy of fusion?\n\nMolar enthalpy of fusion is the amount of energy needed to change one mole of a substance from the solid phase to the liquid phase at constant temperature and pressure. It is also known as the molar heat of fusion or latent heat of fusion. Molar enthalpy of fusion is expressed in units of kilojoules per mole (kJ/mol).\n\n## Why is enthalpy of vaporization greater than fusion?\n\nNotice that for all substances, the heat of vaporization is substantially higher than the heat of fusion. Much more energy is required to change the state from a liquid to a gas than from a solid to a liquid. This is because of the large separation of the particles in the gas state.\n\n## How do you calculate the enthalpy of fusion of ice?\n\nFind the latent heat of fusion, Lf, according to Lf = q ÷ m, by dividing the heat, q, absorbed by the ice (as determined in step 3) by the mass of ice, m (determined in step 4). In this case, Lf = q / m = 2293 J ÷ 7.0 g = 328 J/g. Compare your experimental result to the accepted value of 333.5 J/g.\n\n## What is the enthalpy of fusion of water?\n\nThe heat of fusion for water at 0 °C is approximately 334 joules (79.7 calories) per gram, and the heat of vaporization at 100 °C is about 2,230 joules (533 calories) per gram.\n\n## What is fusion of ice?\n\nThe most common example is solid ice turning into liquid water. This process is better known as melting, or heat of fusion, and results in the molecules within the substance becoming less organized. The reverse process is commonly known as freezing, and results in the molecules within the substance becoming more ordered.\n\n## How do you find the heat of fusion?\n\nKey takeaways (heat of fusion for melting ice): heat of fusion is the amount of energy, in the form of heat, needed to change the state of matter from a solid to a liquid (melting). The formula to calculate the heat of fusion is q = m·ΔHf.\n\n## Why is melting called fusion?\n\nThis phenomenon is called fusion because when two separate solid objects made from the same substance are melted, they can get mixed together into a single body.\n\n## Is enthalpy of fusion positive or negative?\n\nThe enthalpy of fusion is almost always a positive quantity; helium is the only known exception. Helium-3 has a negative enthalpy of fusion at temperatures below 0.3 K. Helium-4 also has a very slightly negative enthalpy of fusion below 0.77 K (−272.380 °C)." ]
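A short worked example (the 50 g mass is an illustrative choice, not from the page above): melting 50 g of ice at 0 °C requires

$q = m \, \Delta H_f = (50\ \mathrm{g}) \times (334\ \mathrm{J/g}) = 16\,700\ \mathrm{J} \approx 16.7\ \mathrm{kJ},$

and the same amount of heat is released when 50 g of water freezes.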
[ null, "https://mc.yandex.ru/watch/68903863", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92389494,"math_prob":0.94045097,"size":3597,"snap":"2021-21-2021-25","text_gpt3_token_len":879,"char_repetition_ratio":0.19343168,"word_repetition_ratio":0.13670133,"special_character_ratio":0.23742007,"punctuation_ratio":0.104,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9859959,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-23T07:48:27Z\",\"WARC-Record-ID\":\"<urn:uuid:a9481771-4154-4fca-917d-d429f97be31f>\",\"Content-Length\":\"31912\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:53b756de-a87b-4375-a8ca-1bd16eedc788>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff91b7ec-3542-460e-92a4-8fdfb0307934>\",\"WARC-IP-Address\":\"45.130.40.26\",\"WARC-Target-URI\":\"https://ebtekaronline.com/qa/question-what-is-the-enthalpy-of-fusion-of-ice.html\",\"WARC-Payload-Digest\":\"sha1:3G5YVG27CUVODH3FW6KAS572C4RV2SS2\",\"WARC-Block-Digest\":\"sha1:4JHQKZIIIXN7KX6JAQGQUKFQQAXYEZ4F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488536512.90_warc_CC-MAIN-20210623073050-20210623103050-00062.warc.gz\"}"}
https://linkessay.com/blog/writing-hypothesis/
[ "# How to Write a Hypothesis – Step-By-Step Guide", null, "Writing a strong hypothesis makes your research credible, so you should spend time and effort writing an effective one. You may find yourself lost whenever you begin writing a hypothesis; the reason is usually that you do not have a proper guide to assist you. If you want to write a hypothesis and need a complete guide, the following article will be helpful for you. The article highlights many important aspects of a hypothesis. Before diving in, we will discuss what a hypothesis is.\n\n## What is a Hypothesis\n\nA hypothesis is a statement in which we propose an expected answer to a question that needs to be thoroughly investigated. Any minor mistake in the hypothesis can weaken your research. You can form hypotheses based on well-known facts about a certain problem, or you can use previous studies that have already been done.\n\n## Characteristics of a Good Hypothesis\n\nTo write an effective hypothesis, it is important to know the characteristics of a good hypothesis. They are discussed below in detail.\n\nIt is never a good idea to form your hypothesis based merely on guesses. It is important to form your hypothesis based on previous research on the same problem.\n\nTo formulate an ideal hypothesis, you should ask yourself the following questions.\n\n• The first question should be about the language of your hypothesis statement. You should check whether the language of your statement is clear or not.\n• What is the relationship between the hypothesis statement and your research? It is important to formulate a statement that relates directly to your research.\n• Another important factor to include in your hypothesis statement is variables. The variables are of two types, dependent and independent, and are discussed below in the article.\n• Can your hypothesis be tested? If yes, then explain how.\n• What are the possible sides of the research that you might want to explore to get the desired outcomes?\n\n### Types of Variables\n\nThe two types of variables are explained below.\n\n• #### Independent variables\n\nIndependent variables are those that the researcher can change; that is, independent variables are changed in order to notice variations in the dependent variables. It wouldn’t be wrong to say that independent variables are in the control of the researcher.\n\n• #### Dependent variables\n\nDependent variables are those that are not in the direct control of the researcher; they change when the independent variables are changed.\n\n### Types of Hypotheses\n\nA hypothesis can be of many types; some of them are explained below.", null, "• #### Simple hypothesis\n\nAs the name shows, a simple hypothesis contains one dependent and one independent variable. The relationship between the dependent and independent variables in a simple hypothesis is straightforward: if one variable changes, the change in the other variable can clearly be noticed.\n\n• #### Complex hypothesis\n\nIn a complex hypothesis, you observe the relationship among two or more dependent and independent variables. This makes it difficult to understand the impact of one variable on another, and it also becomes difficult to understand the relationship among these variables.\n\n• #### Directional hypothesis\n\nThe directional hypothesis is used when the relationship of the variables has a specific direction to follow. 
The researcher has an intellectual commitment to one of the outcomes and is confident that this will be the result. When the researcher is committed to one specific outcome, the hypothesis is directional. This commitment involves some previous research or theory.\n\n• #### Non-directional hypothesis\n\nThe non-directional hypothesis is used in cases where there is no previous research or theory involved. The relationship between the variables is not explicit, and the researcher is not sure about the outcome. In such cases, the non-directional hypothesis is used.\n\n• #### Associative and causal hypothesis\n\nAssociative and causal hypotheses are alike, with a few differences. The associative hypothesis is involved when changes in one variable are accompanied by changes in another variable. This shows that the variables are associated with each other.\n\nOn the other hand, the causal hypothesis involves a cause-and-effect relationship. In the causal hypothesis, the changes in the dependent variable result only when the independent variable is manipulated.\n\n• #### Null hypothesis\n\nA null hypothesis negates the proposed relationship between the variables. The null statement is assumed to be true, and any deviation of the results from the predicted ones is attributed to errors in experiment and measurement.\n\n• #### Alternative hypothesis\n\nThe alternative hypothesis states the significance of the results obtained from the research. It states that any result obtained from the research has its own importance and that there exists a relationship between the variables under study.\n\n### Falsifiability\n\nAny valid hypothesis has falsifiability as an important aspect of its nature. The important way to test a claim is the fact that this claim can be proven false. People usually confuse the idea of something being false with falsifiability. We can also understand falsifiability by relating it to testability: falsifiability means that the claim has the capability of being proven wrong if tested.\n\n### Importance of a Testable Hypothesis\n\nTo make your hypothesis strong, you should make sure that it is testable. For a hypothesis to be testable, the criteria are as follows.\n\n• There must be some way to prove the hypothesis true.\n• There must be some way to prove the hypothesis false.\n• The results must be reproducible.\n\nIf your hypothesis does not meet these criteria, it will fail. If your hypothesis does not follow the above criteria, the results obtained from the experiment will be vague and will not clarify anything. Such an experiment will have no significance, as it will not prove or disprove anything.\n\n### How to Formulate an Effective Research Hypothesis\n\nAn effective research hypothesis has the important characteristics that we have already discussed. A strong hypothesis statement should reflect the research question, the experiment, and the expected outcomes. Following are the steps that you can follow to write a strong hypothesis statement.", null, "State the problem\n\nIt is very important for your hypothesis statement to contain the problem you are dealing with. If you do not state the problem in the hypothesis statement, it will not be clear to your readers. Explicitly state the problem and involve it in the hypothesis.\n\nBefore writing the hypothesis statement, you need to be aware of the problem you are dealing with. Before doing the research, you know what the problem is. 
We perform experiments to find answers to questions that remain mysterious. You cannot perform experiments unless you know the nature of your problem. After getting a broader picture of the problem you need to solve, you should design a question. This question will drive your research forward and keep you putting in effort until you get the desired results.\n\nDo research\n\nThis part of the research is done to develop an answer to the question without performing experiments. Here, you look for theories or already existing studies addressing the same question. You then structure a satisfactory answer based on this preliminary research. This step also gives you an idea of the variables that you can use in your research, and you devise a conceptual framework for carrying out your own research.\n\nFormulate the hypothesis\n\nIn this step, you give a proper structure to the answer you have extracted from the preliminary research. This answer, written as a grammatically proper sentence in clear and concise wording, is the hypothesis statement.\n\nRefine the hypothesis\n\nYou should refine your hypothesis in a way that is understandable. Use clear and concise wording for every part of the statement. The hypothesis requires this much attention, as it forms the basis of your research. A hypothesis statement should include the following components.\n\n• The explicit nature of the problem\n• The variables under study\n• The specific group being studied\n• The possible outcomes of the experiment\n\nIf-then statement\n\nYou should write your hypothesis statement in if-then format. In if-then format, your dependent and independent variables are separated by if and then.\n\nRephrase in three ways\n\nFollowing are a few tips to rephrase your hypothesis statement. The hypothesis statement can be rephrased in the following three ways.\n\n• In the first format, you use if and then. The first part of the statement involves the independent variable, and the second part contains the dependent variable. An example of this format is as follows.\n\n“If students start working hard then they will score better in their exams.”\n\nIn this statement, the independent variable is the students working hard, which is under control and can be changed according to one’s choice. The dependent variable is the exam score, which depends on the effort of the students. There is also a way to prove this statement false by testing: if students work harder and their scores do not improve, the statement is falsified.\n\n• Another form of hypothesis is stated as a correlation. This type of statement clearly states the effect of one variable on another. An example of this format is as follows.\n\n“The harder work of students will have a positive impact on their scores.”\n\nThis statement explicitly states the impact of working hard on scores and explains the relationship between the variables.\n\n• The third format is used when you are comparing two groups of the population. An example of this format is as follows.\n\n“Students who work harder will score better than those who do not.”\n\nIn this statement, two groups of students are being compared: those who work hard and those who do not. Those who work hard will score better than the students who do not work hard. 
This form of hypothesis statement deals in comparison.\n\n## Sources of a Hypothesis\n\nA hypothesis can be obtained from the following sources.\n\n• If you notice a resemblance between phenomena, you can derive a hypothesis from it.\n• You can always use observations that were made in past studies.\n• You can look for current, similar research work.\n• Scientific theories are also helpful in designing hypotheses.\n• You should also observe the general thinking patterns of people to structure a strong hypothesis statement.\n\n## Uses of a Hypothesis\n\nA hypothesis has the following functions, which make it useful in research.\n\n• A hypothesis assists you in making observations for your experiment.\n• Your experiment is not possible without a properly devised hypothesis statement.\n• It becomes the base for investigating the research problem.\n• It helps to progress the experiment in the right direction.\n\n## Tips to Write a Strong Hypothesis\n\n• You should structure a question before structuring a hypothesis.\n• A hypothesis is never a question; it is always a statement. It should be a testable prediction.\n• A good hypothesis statement is always clear and uses simple words to make it easier for readers to understand.\n• A strong hypothesis statement has variables that are clearly defined, and their relationship is mentioned in the hypothesis statement. The relationship between the variables can be written in any of the above-mentioned formats.\n• Research the question under study; you will find various similar studies, which you can use to design your hypothesis statement.\n\n### Difference Between Hypothesis and Prediction\n\nThe terms hypothesis and prediction are used interchangeably, but they are not the same. People usually confuse the two terms, as both are guesses, but they differ from each other. A hypothesis is an educated guess about a specific problem that is under study, while a prediction is an uneducated guess that has no credibility; we can also say that a prediction is a guess about what will happen if the hypothesis is true. For instance, you may have noticed that every time the wind blows, the flowers’ petals fall. This is merely your observation and involves no scientific basis or experiment. Hence, we can see that this is a prediction of what will happen in the future.\n\n### Conclusion\n\nSumming up the whole discussion, the hypothesis statement requires extra time and effort, as it is the basis for scientific research. You need proper guidance to write a strong hypothesis statement, and the article above has highlighted many aspects of one. A hypothesis statement not only makes it possible to research a given problem but also provides a basis for formulating further hypothesis statements for more research in the future. If you follow the guide mentioned above, you will be well equipped to write an effective hypothesis statement." ]
[ null, "https://linkessay.com/wp-content/uploads/2021/08/writing-hypothesis-1024x559.jpg", null, "http://linkessay.com/wp-content/uploads/2021/08/types-of-hypothesis-scaled.jpg", null, "http://linkessay.com/wp-content/uploads/2021/08/steps-to-write-strong-hypothesis-scaled.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94007725,"math_prob":0.69779813,"size":12722,"snap":"2021-43-2021-49","text_gpt3_token_len":2377,"char_repetition_ratio":0.19319075,"word_repetition_ratio":0.012682927,"special_character_ratio":0.18078919,"punctuation_ratio":0.08566276,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95928,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-16T11:12:23Z\",\"WARC-Record-ID\":\"<urn:uuid:4d16a276-9ed7-46bc-a0f5-cf23bcea9ce8>\",\"Content-Length\":\"93352\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dacb13b2-e758-4e31-8bc7-ba66438906a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:779636db-e0f0-4cc6-8a7e-44e49bf305a9>\",\"WARC-IP-Address\":\"192.64.118.90\",\"WARC-Target-URI\":\"https://linkessay.com/blog/writing-hypothesis/\",\"WARC-Payload-Digest\":\"sha1:BHJYZYMPMTOLT2CT6IHTUPJRIZRB3E2J\",\"WARC-Block-Digest\":\"sha1:SDUI4MAH4J4ZG3BVJBSYX7ZRWECUDTOP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323584567.81_warc_CC-MAIN-20211016105157-20211016135157-00190.warc.gz\"}"}
https://usethinkscript.com/threads/why-wont-this-zigzag-line-plot.4949/
[ "", null, "# Why won't this ZigZag line plot?\n\n#### MoneyMagnet\n\n##### Member\nThe short answer is that I've goofed up", null, ". I'm hoping for a longer answer that will help me understand what I'm doing wrong. My purpose is to simply plot a line that I calculate myself, that is not based on a price input. I suspect the problem has to do with the first bar not having a previous bar, but I can't find a clear explanation of how to deal with that situation. Here's what I'm trying:\n\nCode:\n``````def signal;\ndef dir;\n\nif signal == 100\n{dir = -1;}\nelse\nif signal == 50\n{dir = 1;}\nelse\n{dir = dir;}\n\nif dir == -1 then\n{signal = signal - 1;}\nelse\n{signal = signal + 1;}\n\nplot line = signal;``````\n\nThis is supposed to create a line that zig-zags up and down between 50 and 100, inclusive. No error in TS editor, but nothing shows on the chart...\n\n##### Well-known member\nVIP\n@MoneyMagnet Why not review several of the existing topics that cover plotting lines... You've already discovered that it isn't all that easy to plot from one bar's price to another bar's price... You may or may not be able to accomplish whatever it is that you want to do... TOS does have some limitations in this area...\n\n#### MoneyMagnet\n\n##### Member\nWhat I'm hoping to do is to plot a line based on data that I create, rather than on price. I've spent a good deal of time reading posts so far, and I don't recall anyone doing anything similar. Is this something that others have done?\n\n#### BenTen\n\nStaff\nVIP\nplot a line based on data that I create, rather than on price.\n\nWhat do you mean by that? Are you going to feed the script each data point manually? Like at a certain price level? It would be helpful if you could post your entire script so we can understand it better.\n\n#### MoneyMagnet\n\n##### Member\nFair enough; let me elucidate:\n\nFor research purposes, it's often useful to control the input to a process in order to learn more about the process itself. With an indicator, the input is price, but price is whatever it is, and can't (we hope!) be controlled. I would like to be able to use this technique to get a better understanding of various indicators and indicator ideas, and to do this I need to be able to supply input data of my own, rather than price.\n\nThe best case scenario would be to gin up a stream of data - say, in a spreadsheet - then import that into ToS as an equity, so I could just get the artificial data into the indicator that way. If there's any chance that ToS allows this, I'd love to hear about it. I used to do such things with another charting package I used years ago (MT4, maybe?), but I doubt that ToS would accommodate me.\n\nFailing that, I could just mathematically create the desired data input right inside an indicator code, and then use that variable as the input to the indicator itself - something like this:\n\nCode:\n``````def MyData = <code that creates the data>;\n\nplot testchart = Average(MyData, 25);``````\n\nI believe that something like that should work, if I can get the data created in the first place. In the code above, I've tried to create the data by defining a variable to hold it and using code to increment then decrement the value from 50 to 100 and back, ad nauseam, generating a triangle wave. As a first test, I would just plot the data raw, to see if it works, and of course, it doesn't. 
After some more searching, I suspect that it needs something like a test for NaN, which I read about in the (remarkably sparse) thinkScript documentation, but every attempt I make causes the TS Editor to cough up errors. I clearly don't understand how to get it done!\n\nOne thing I'd like to do with this method is to feed a sine wave of various wavelengths into an indicator to see how strongly it is filtered out. The frequency response curve of any price analog (i.e., moving average, etc.) could be determined this way, which could prove useful. In particular, since moving averages are in essence digital filters, this method could be used to determine the impulse response of any moving average by convolving it with a pulse input (running a data stream with a single positive spike value through it), and I personally think that would be at least fascinating... if I can figure out how to get past the peculiarities of thinkScript.\n\n##### Well-known member\nVIP\n@MoneyMagnet While your concept does sound creative, it is beyond the scope of the capabilities of the TOS platform... For starters, you can't import data other than symbol lists... On top of that, TOS doesn't support arrays, so it couldn't work off of any form of data that you might want to supply manually... It's just not what TOS was designed for... True, some other platforms have some form of testing capabilities beyond what TOS offers, but I'm not even sure any of them would be suitable for what you have described... I think a better option would be to do such research of numbers and plotting from within a spreadsheet for proof-of-concept stuff... We have to live within the parameters of what TOS has to offer, as I highly doubt they will be adding any semblance of the capabilities you are in search of...\n\n#### MoneyMagnet\n\n##### Member\nThis code does exactly what I wanted with regard to the sine wave input signal. Derived from some code at tlc.thinkorswim.com.\n\nCode:\n``````declare lower;\ninput Amplitude = 10;\ninput WaveLength = 20;\ninput Period = 5;\ndef w = 2 * Double.Pi / WaveLength; # angular frequency for the chosen wavelength\ndef x = CompoundValue(1, x + 1, 0); # bar counter, initialized to 0 on the first bar\nplot signal = Amplitude * Sin(w * x); # synthetic sine-wave input\nplot avg = Average(signal, Period); # the moving average under test``````\n\n#### MoneyMagnet\n\n##### Member\nAnd this code creates a pulse, then plots it along with its moving average, generating an impulse response curve.\n\nCode:\n``````declare lower;\n\ninput Wavelength = 20;\ninput Period = 5;\n\ndef interval = BarNumber() / Wavelength == RoundDown(BarNumber() / Wavelength, 0); # true once every Wavelength bars\nplot pulse = CompoundValue(1, interval, 0); # unit impulse train, 0 on the first bar\n\nplot avg = Average(pulse, Period); # the impulse response of the moving average``````\n\nThanks for your help, guys", null, "", null, "##### Well-known member\nVIP\nThis code does exactly what I wanted with regard to the sine wave input signal. Derived from some code at tlc.thinkorswim.com.\n\nCode:\n``````declare lower;\ninput Amplitude = 10;\ninput WaveLength = 20;\ninput Period = 5;\ndef w = 2 * Double.Pi / WaveLength;\ndef x = CompoundValue(1, x + 1, 0);\nplot signal = Amplitude * Sin(w * x);\nplot avg = Average(signal, Period);``````\nSo you just wanted an identical repeating sine wave...??? Or is this just experimental for proof-of-concept...???\n\n#### MoneyMagnet\n\n##### Member\nLike I said, I wanted to be able to mathematically generate a signal, then feed it into an indicator. Sine, pulse, random walk, whatever. Once I can create my own dataset, I can use it for research purposes. 
Pretty cool, eh?", null, "#### MoneyMagnet\n\n##### Member\nSo, the answer I was looking for in the original post was \"You need to use the CompoundValue() function to initialize a variable, so it will have a value for the first bar, then be able to update future bars based on past bars.\" Try searching here or on tlc.thinkorswim.com for \"create mathematical data as input for an indicator\" and see how useful the result is. Without knowing the proper terms for something, text searching can be nearly useless. I found the answer when I went to see what it might take to calculate a sine curve, and came across the code in the article on the Sin() function. I noted the obscurely named CompoundValue() function, and went to see what the heck that was all about. Bingo.\n\nIf my research exposes anything earth-shattering, I'll be sure to share it!", null, "" ]
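The state machine the original poster was aiming for is easy to cross-check outside of thinkScript. Below is a minimal Python sketch (my own illustration, not from the thread) of the same 50-to-100 triangle-wave logic, with an explicit first-bar seed playing the role that CompoundValue() plays in the thread's solution:

```python
# Triangle wave between 50 and 100, one value per "bar".
# The seed value (lo) stands in for CompoundValue's initial value,
# and each new value depends only on the previous bar's value.
def triangle_wave(bars, lo=50, hi=100):
    signal, direction = lo, 1
    out = []
    for _ in range(bars):
        out.append(signal)
        if signal >= hi:
            direction = -1   # turn down at the top
        elif signal <= lo:
            direction = 1    # turn up at the bottom
        signal += direction
    return out

print(triangle_wave(8))  # [50, 51, 52, 53, 54, 55, 56, 57]
```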
[ null, "https://www.facebook.com/tr", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95754224,"math_prob":0.7394416,"size":914,"snap":"2021-31-2021-39","text_gpt3_token_len":249,"char_repetition_ratio":0.113186814,"word_repetition_ratio":0.27272728,"special_character_ratio":0.29540482,"punctuation_ratio":0.13930348,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96440494,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-25T21:37:52Z\",\"WARC-Record-ID\":\"<urn:uuid:1c4807d3-9853-4e06-ab73-aba330d22ac3>\",\"Content-Length\":\"106190\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d668adfd-689e-4f92-97a5-fedde7130f74>\",\"WARC-Concurrent-To\":\"<urn:uuid:7e74d101-cc46-41d7-9640-4505c3111c42>\",\"WARC-IP-Address\":\"104.21.34.158\",\"WARC-Target-URI\":\"https://usethinkscript.com/threads/why-wont-this-zigzag-line-plot.4949/\",\"WARC-Payload-Digest\":\"sha1:JIWIOXO2V6O754WST6OW4VOH4Q2XFOVV\",\"WARC-Block-Digest\":\"sha1:JPWONA6C7YPKDTMSZRECNSTS6LGJFDL2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046151866.98_warc_CC-MAIN-20210725205752-20210725235752-00493.warc.gz\"}"}
https://human.libretexts.org/Courses/City_College_of_San_Francisco/Writing_Reading_and_College_Success%3A_A_First-Year_Composition_Course_for_All_Learners_(Kashyap_and_Dyquisto)/00%3A_Front_Matter/06%3A_Acknowledgements
[ "$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$$$\\newcommand{\\AA}{\\unicode[.8,0]{x212B}}$$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.54907244,"math_prob":1.000003,"size":1856,"snap":"2023-40-2023-50","text_gpt3_token_len":559,"char_repetition_ratio":0.27051836,"word_repetition_ratio":0.1865285,"special_character_ratio":0.32327586,"punctuation_ratio":0.09398496,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96362203,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T13:28:01Z\",\"WARC-Record-ID\":\"<urn:uuid:1db2d23d-b44a-4670-bfbd-fbed4a7abf2d>\",\"Content-Length\":\"118439\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8000243e-2b35-4ec9-b27d-fbbe215e7c25>\",\"WARC-Concurrent-To\":\"<urn:uuid:c6a1a845-34ba-4383-856d-6458e48b591a>\",\"WARC-IP-Address\":\"3.162.103.38\",\"WARC-Target-URI\":\"https://human.libretexts.org/Courses/City_College_of_San_Francisco/Writing_Reading_and_College_Success%3A_A_First-Year_Composition_Course_for_All_Learners_(Kashyap_and_Dyquisto)/00%3A_Front_Matter/06%3A_Acknowledgements\",\"WARC-Payload-Digest\":\"sha1:OEJM7GBBANSLT3WP4AUSRBMEBGUNPPA7\",\"WARC-Block-Digest\":\"sha1:YVMHVQWQILEUSTXS6AH363VICHOVSKGD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511369.62_warc_CC-MAIN-20231004120203-20231004150203-00448.warc.gz\"}"}
https://wimpydd.hepforge.org/code/halo_function/
[ "wimpydd is hosted by Hepforge, IPPP Durham", null, "#### Halo Function\n\nWimPyDD provides the routine streamed_halo_function that calculates the $$v_k$$, $$\\delta\\eta_k^{(0,1)}$$ quantities needed by the diff_rate and wimp_dd_rate. The simple call:\n\n >>> vmin,delta_eta0=WD.streamed_halo_function()\n\nsamples linearly the $$\\delta\\eta_k^{(0)}$$'s in units of (km/s)$$^{-1}$$ for a Maxwellian velocity distribution with cut at the escape velocity in the Earth's rest frame: $f(\\vec{v},t)=N\\left(\\frac{3}{ 2\\pi v_{rms}^2}\\right )^{3/2} e^{-\\frac{3|\\vec{v}+\\vec{v}_{E,Gal}|^2}{2 v_{rms}^2}}\\Theta(v_{esc,Gal}-|\\vec{v}+\\vec{v}_{E,Gal}(t)|)$ where,\n$N= \\left [ erf(z)-\\frac{2}{\\sqrt{\\pi}}z e^{-z^2}\\right ]^{-1}$ with $$v_{esc,Gal}$$ the escape velocity in the Galactic rest frame and $$z^2=3 v_{esc,Gal}^2/(2 v_{rms}^2)$$. By default hydrothermal equilibrium between the WIMP gas pressure and gravity is assumed, $$v_{rms}$$ = $$\\sqrt{3/2}|\\vec{v}_{rot,Gal}|$$ with $$\\vec{v}_{rot,Gal}$$ the galactic rotational velocity (the value of $$v_{rms}$$ can be modified using the argument vrms). Moreover $$\\vec{v}_{E,Gal}$$ = $$\\vec{v}_{rot,Gal}$$+ $$\\vec{v}_{Sun,rot}$$ + $$\\vec{v}_{E,Sun}$$, with $$\\vec{v}_{Sun,rot}$$ the peculiar velocity of the solar system with respect to the Galactic rotational curve and $$\\vec{v}_{Sun,rot}$$ the velocity of the Earth around the Sun. The routine assumes the default values $$v_{esc,Gal}$$ = 550 km/s, $$\\vec{v}_{rot,Gal}$$ = $$(0,220,0)$$ km/s, $$\\vec{v}_{Sun,rot}$$ = $$(9,12,7)$$ km/s (in the Galactic reference frame where the $$z$$ axis is in the direction perpendicular to the galactic plane, the $$x$$ axis points toward the galactic center and the velocity of the solar system points in the $$y$$ direction). Such values can be changed by using the arguments v_esc_gal, v_rot_gal and v_sun_rot, respectively. For $$\\vec{v}_{E,Sun}$$ the expression in J. D. Lewin, P.F. Smith is used.\n\nTo calculate $$\\delta\\eta^{(1)}$$ the argument yearly_modulation=True must be used, and to get $$\\eta^{(0,1)}_k$$ instead of $$\\delta\\eta^{(0,1)}_k$$ use delta_eta=False.\n\nThe routine can also be used for a user-defined velocity distribution. This requires to pass the argument velocity_distribution_gal. For instance, parameterizing the velocity distribution with non-isotropic velocity dispersions: $f(\\vec{u})=N e^{-\\frac{1}{2}\\left(\\frac{u^2_x}{\\sigma^2_x}+\\frac{u^2_y}{\\sigma^2_y}+\\frac{u^2_z}{\\sigma^2_z} \\right)}$ with:\n\ndef func(u,sigma_x,sigma_y,sigma_z):\nreturn np.exp(-0.5*(u**2/sigma_x**2+\nu**2/sigma_y**2+u**2/sigma_z**2))\n\nthe time-averaged $$\\delta \\eta_k^{(0)}$$'s for $$\\sigma_x$$=120 km/s, $$\\sigma_y$$=150 km/s and $$\\sigma_z$$=170 km/s can be calculated by:\n\n >>> vmin,delta_eta_0=WD.streamed_halo_function(velocity_distribution_gal=func,\n sigma_x=120, sigma_y=150, sigma_z=170)\n\nThe user-define velocity distribution func must be a function of a three--dimensional array u containing the WIMP velocity in the galactic rest frame. The parameters of func are passed with the keyworded variable-length argument list **args (the argument vrms for the Maxwellian distribution is just a particular case of this). 
All the other parameters of the routine work in the same way as for a Maxwellian.\n\nBoth for a Maxwellian and for a user-defined velocity distribution the full expressions: $\\eta^{(0)}(v)= \\frac{1}{T_0}\\int_0^{T_0} \\eta(v,t)\\,dt,$\n$\\eta^{(1)}(v)= \\frac{2}{T_0}\\int_0^{T_0} \\cos\\left[\\omega(t-t_0)\\right] \\eta(v,t)\\,dt,\\,\\,\\,\\,\\omega=\\frac{2\\pi}{T_0},$ (discretized as sums over the days of the year) can be used by setting full_year_sampling=True. This can be particularly time consuming when the numerical integration of a user-defined velocity distribution is required for each day of the year. By default full_year_sampling=False; in this case $$\\eta^{(0)}$$ is approximated by setting $$\\vec{v}_{E,Sun}\\rightarrow$$0 in $$\\eta(v)$$, while $$\\eta^{(1)}$$ is approximated by using a linear expansion in the parameter $$|\\vec{v}_{E,Sun}|/|\\vec{v}_{E,Gal}|$$ for the Maxwellian (see for instance appendix A of F. Donato, N. Fornengo, S. Scopel, Astropart. Phys. 9 (1998) 247–260), and by calculating $$[\\eta_0(t=t_0)-\\eta_0(t=t_0+T/2)]/2$$ for a user-defined velocity distribution. Also $$t_0$$ can be changed by using the argument modulation_phase in days. Finally, day_of_the_year allows one to calculate the halo function on a specific day of the year.\n\nWhen velocity_distribution_gal is used and the routine calculates numerical integrals, the output is saved in the folder WimPyDD/Halo_functions for later use, to speed up successive evaluations of the same halo function, unless recalculate=True.\n\n#### Summary of streamed_halo_function routine\n\n >>> WD.streamed_halo_function(velocity_distribution_gal=None,\n v_rot_gal=np.array([0.,220.,0.]),\n v_sun_rot=np.array([9.,12.,7.]),\n v_esc_gal=550,n_vmin_bin=50,yearly_modulation=False,\n vmin=None, delta_eta=True,full_year_sampling=False,\n day_of_the_year=None, outputfile=None,recalculate=False,\n modulation_phase=152.5,**args)\n• velocity_distribution_gal: WIMP velocity distribution in the Galactic rest frame. If velocity_distribution_gal=None the halo functions are calculated using analytical expressions for a standard Maxwellian velocity distribution truncated at v_esc_gal\n• v_rot_gal: galactic rotational velocity at the Sun's position in km/s\n• v_sun_rot: peculiar velocity of the Sun with respect to the rotation curve in km/s\n• v_esc_gal: escape velocity in km/s\n• n_vmin_bin: number of sampled velocity values\n• yearly_modulation: If True calculates yearly modulation amplitudes.\n• vmin: array with velocity values to be sampled (by default set to n_vmin_bin values from 0 to the escape velocity).\n• delta_eta: If True calculates $$\\delta\\eta$$'s, otherwise provides the halo functions $$\\eta$$'s\n• full_year_sampling: If True calculates time averages and cosine transforms of the halo functions, sampling them for each day of the year.\n• day_of_the_year: allows one to calculate the halo function on a particular day of the year (in days). By default it is set to modulation_phase-T/4\n• outputfile: If velocity_distribution_gal is not None, sets the outputfile name where the halo functions are saved. The output file is saved in the folder WimPyDD/Halo_functions. If outputfile=None the name of the outputfile is automatically created using the velocity_distribution_gal.__name__ attribute (i.e.
the name of the user-defined velocity distribution).\n• modulation_phase: set by default to 152.5 days (2nd of June)\n• **args: keyworded variable-length argument list with the arguments of the user-defined velocity distribution\n\n#### Examples\n\nMaxwellian velocity distribution function\n\n• $$\\eta^0$$ halo function:\n• >>> vmin,eta0=WD.streamed_halo_function(delta_eta=False)\n• $$\\delta\\eta^0$$ halo function:\n• >>> vmin,delta_eta0=WD.streamed_halo_function()\n• $$\\eta^1$$ halo function:\n• >>> vmin,eta1=WD.streamed_halo_function(delta_eta=False,yearly_modulation=True)\n• $$\\delta\\eta^1$$ halo function:\n• >>> vmin,delta_eta1=WD.streamed_halo_function(yearly_modulation=True)\n\nUser defined velocity distribution function\n\n• $$\\eta^0$$ halo function:\n• >>> vmin,eta0=WD.streamed_halo_function(velocity_distribution_gal=func,delta_eta=False,sigma_x=120,sigma_y=150,sigma_z=170)\n• $$\\eta^1$$ halo function:\n• >>> vmin,eta1=WD.streamed_halo_function(velocity_distribution_gal=func,delta_eta=False,yearly_modulation=True,sigma_x=120,sigma_y=150,sigma_z=170)" ]
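Putting the pieces above together, here is a minimal end-to-end sketch. It assumes WimPyDD is installed and imported under the `WD` alias used throughout this page, and it reuses the anisotropic example distribution with the same illustrative dispersions:

```python
import numpy as np
import WimPyDD as WD  # assumed import alias, matching the "WD." prefix above

# Anisotropic Gaussian in the Galactic rest frame; u is the
# 3-component WIMP velocity, as the routine requires.
def func(u, sigma_x, sigma_y, sigma_z):
    return np.exp(-0.5 * (u[0]**2 / sigma_x**2 +
                          u[1]**2 / sigma_y**2 +
                          u[2]**2 / sigma_z**2))

# Time-averaged halo function eta^(0) on a 100-point vmin grid...
vmin, eta0 = WD.streamed_halo_function(
    velocity_distribution_gal=func, delta_eta=False, n_vmin_bin=100,
    sigma_x=120, sigma_y=150, sigma_z=170)

# ...and the corresponding yearly-modulation amplitudes eta^(1).
vmin, eta1 = WD.streamed_halo_function(
    velocity_distribution_gal=func, delta_eta=False, yearly_modulation=True,
    sigma_x=120, sigma_y=150, sigma_z=170)
```

The first call triggers the numerical integration and caches its output under WimPyDD/Halo_functions, so the second call (and any later re-run with the same distribution) is much faster unless recalculate=True is passed.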
[ null, "https://wimpydd.hepforge.org/figs/wimpydd_logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6033624,"math_prob":0.99956244,"size":7226,"snap":"2021-43-2021-49","text_gpt3_token_len":2132,"char_repetition_ratio":0.15674329,"word_repetition_ratio":0.04101327,"special_character_ratio":0.29809022,"punctuation_ratio":0.13615733,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999418,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T17:43:25Z\",\"WARC-Record-ID\":\"<urn:uuid:34a21b27-f186-4ea2-b0e3-759ecccae627>\",\"Content-Length\":\"11483\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:161e9ec0-6def-452d-ae5d-3ba02cbfca0d>\",\"WARC-Concurrent-To\":\"<urn:uuid:31aaef91-2e78-4f6c-b2c1-9779786b64af>\",\"WARC-IP-Address\":\"129.234.186.186\",\"WARC-Target-URI\":\"https://wimpydd.hepforge.org/code/halo_function/\",\"WARC-Payload-Digest\":\"sha1:POMM4GS3VJZJMZVBG4D3PHDRAU2ACTYP\",\"WARC-Block-Digest\":\"sha1:GM7UGDLCORMF52I64VRC64X4EIOEY32E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587915.41_warc_CC-MAIN-20211026165817-20211026195817-00395.warc.gz\"}"}
https://tutorbin.com/questions-and-answers/figure-b1-shows-a-digital-circuit-of-a-frequency-divider-the-d-type-fl
[ "Question\n\nFigure B1 shows a digital circuit of a frequency divider. The D-type flip-flops used in the circuit are positive-edge triggered and have active-high asynchronous set and reset inputs (labelled 'S' and 'R', unconnected inputs are assumed to be logic-0). Assuming that the 'Reset' input is pulsed to logic-1 initially, and the 4-bit switch input' SW is set to '0000', determine the sequence of states appearing on the <Q3, Q2,Q1, Q0> outputs of the flip-flops. Hence draw a state diagram to illustrate the sequence, with the state values (Q) shown in decimal (Q3 is the MSB). Write down Verilog-HDL statements to describe the 4-bit equality comparator, that compares input 'SW' with the flip-flops states <Q3, Q2, Q1, Q0>', using the following: continuous assignment statement ii. primitive gates Ignore the <' and >' outputs on the symbol and make the following assumptions: The signal 'Pulse' is a module output. The flip-flop 'Q' outputs are declared as single-bit wires. The input 'SW is a 4-bit bus port. (C)Given the following module header for the D-type flip-flops used in Fig. B1, write down a complete Verilog-HDL source description for the circuit shown in Fig. B1. The top-level input/output ports are shown coloured blue in Fig. B1 (the 'x' indicating the number of bits), and the bus labelled 'Q' represents the 4 flip-flop outputs <Q3,Q2, Q1, Q0>. Connect any unused flip-flop inputs to logic-0. (d)With reference to the state diagram drawn in part (a), determine the division ratio(number of clock pulses per output pulse) of the output pulses appearing on 'Pulse' ifthe 4-bit input switches 'SW are set to 4'b1001 in Fig. B1.", null, "", null, "Fig: 1", null, "", null, "Fig: 2", null, "", null, "Fig: 3", null, "", null, "Fig: 4", null, "", null, "Fig: 5", null, "", null, "Fig: 6", null, "", null, "Fig: 7", null, "", null, "Fig: 8", null, "", null, "Fig: 9", null, "", null, "Fig: 10", null, "", null, "Fig: 11", null, "", null, "Fig: 12", null, "", null, "Fig: 13", null, "", null, "Fig: 14\n\n### Submit a new Query", null, "", null, "", null, "", null, "Success\n\nAssignment is successfully created", null, "", null, "Verified", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "" ]
[ null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27500%27%20height=%27300%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "https://cdn.tutorbin.com/library/book.svg", null, "https://cdn.tutorbin.com/library/smallSearchIconGrey.svg", null, "https://cdn.tutorbin.com/library/selectDownArrow.svg", null, "https://cdn.tutorbin.com/library/message.svg", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%2720%27%20height=%2720%27/%3e", null, 
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27800%27%20height=%27500%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27800%27%20height=%27500%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27800%27%20height=%27500%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%27800%27%20height=%27500%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7289704,"math_prob":0.8937479,"size":1760,"snap":"2023-14-2023-23","text_gpt3_token_len":490,"char_repetition_ratio":0.13724373,"word_repetition_ratio":0.0,"special_character_ratio":0.26988637,"punctuation_ratio":0.132626,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9669889,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-25T01:49:24Z\",\"WARC-Record-ID\":\"<urn:uuid:dbbbd8f3-bef0-4ec0-bbf4-56f79567260b>\",\"Content-Length\":\"98651\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:857a36f7-d4f1-478a-a718-4860976cfc6a>\",\"WARC-Concurrent-To\":\"<urn:uuid:46190d5a-2577-445e-b5e3-8ad36ed49a3e>\",\"WARC-IP-Address\":\"76.76.21.21\",\"WARC-Target-URI\":\"https://tutorbin.com/questions-and-answers/figure-b1-shows-a-digital-circuit-of-a-frequency-divider-the-d-type-fl\",\"WARC-Payload-Digest\":\"sha1:WNCSYU6NV3FY6J37IITLM4RWIM2ZT6H6\",\"WARC-Block-Digest\":\"sha1:ZEYXGGSSFSWSLRIQS6KHYQUZ3JTCRQFU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945292.83_warc_CC-MAIN-20230325002113-20230325032113-00585.warc.gz\"}"}
http://18karaats.com/earring_page/earring_page.html
[ "", null, "", null, "simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond\n\nsimulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond\n\nsimulated diamond simulated diamond simulated diamond simulated diamond simulated diamond\n\nsimulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond\n\nsimulated diamond simulated diamond simulated diamond simulated diamond simulated diamond\n\nsimulated diamond simulated diamond simulated diamsimulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond\n\nond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond simulated diamond\n\nsimulated diamond simulated diamond simulated diamond simulated diamond simulated diamond" ]
[ null, "http://18karaats.com/earring_page/earring_page_files/shapeimage_10.jpg", null, "http://18karaats.com/earring_page/earring_page_files/shapeimage_11.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8194542,"math_prob":0.99983054,"size":1079,"snap":"2019-13-2019-22","text_gpt3_token_len":137,"char_repetition_ratio":0.27906978,"word_repetition_ratio":0.9137931,"special_character_ratio":0.1102873,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998202,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-24T21:36:04Z\",\"WARC-Record-ID\":\"<urn:uuid:dd3e476c-2f7d-412c-a799-44ba19383e3d>\",\"Content-Length\":\"21085\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aca38d20-85bf-4227-a51e-5402376db221>\",\"WARC-Concurrent-To\":\"<urn:uuid:cf006d8d-e808-43c1-9a09-3e031d172ef0>\",\"WARC-IP-Address\":\"101.100.227.21\",\"WARC-Target-URI\":\"http://18karaats.com/earring_page/earring_page.html\",\"WARC-Payload-Digest\":\"sha1:CHUF7SEBQRWUYQ7CNNGBMK72CPBRFEU7\",\"WARC-Block-Digest\":\"sha1:H46XCL3VBRTJHQKNX3GBGN5LDB36VQEE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257767.49_warc_CC-MAIN-20190524204559-20190524230559-00531.warc.gz\"}"}
https://percentagecalculatorfree.com/percentage-increase-calculator/
[ "# Percentage Increase Calculator\n\nIf you’re expected to determine the amount of increment in one amount to the next, by formulating a percentage of the original value the calculator is going to be an effective tool for you to determine the amount of increment between one number to another. To understand the formula for calculating a percent increase before you use the calculator, you should know how to calculate a percent increase by using the provided formula. To review these concepts, we shall go over them in greater depth in the following sections.\n\n## What is the method you employ to calculate the percentage increase?\n\nThe idea behind percent increases is fundamental to an increase from the original value up to the number expressed in 100 components of the initial value. An increase of 5 percent means that when you divide the actual amount into 100 pieces, that value has increased by 5 percent. In the end, If the initial value is raised in increments of 14 percent then the value will rise by 14 for every 100 units. Then it will increase by 28, per 200 units, and so on.. The following section will provide an example with the percent increase formula in the next section to help you understand this formula better.\n\n## The formula for calculating a percentage increase\n\nThe formula for calculating percentage increases is as follows:\n\nThe percentage increase is equal to (new value – initial value)/original number 100. The formula is an excellent illustration of the equation. Consider an investment of \\$1250 that increased by \\$1,445 in one year. Which is the percent of the increase in this expenditure? The steps below can assist you in answering this question:\n\n1. In addition to the value you have just received, determine the original value.\n2. Enter the values for the formula.\n3. Remove the value of the initial value, and then multiply the results by the original value.\n4. Then, multiply that result by 100. This is the percentage increase.\n5. The answer can be obtained by using the percentage increase calculator.\n\nWe can get the following results by working out the issue with our hands:\n\n1. [(1,445 – 1,250)/1,250] * 100\n2. (195/1,250) * 100\n3. 0.156 * 100\n4. 15.6 percent growth.\n\nThe calculator for percentage growth is a great tool to test the fundamentals of a problem. The calculator may be used to tackle more complicated problems that require a percentage increase. The percent calculator could also be utilized to solve these issues.\n\n## Calculating the percent decrease\n\nIf you are interested in determining a percent decrease, you can use the same formula for a percent increase. The formula is slightly altered as follows:\n\nReduced by a percentage = [(original value (original value – new value) /100]\n\nConsider that we have invested the same amount of money after one year, \\$1,445. A year later, its value dropped to \\$1,300. Calculating the percentage decrease in this manner:\n\n1. [(1,445 – 1,300)/1,445] * 100\n2. (145/1,445) * 100\n3. 0.10 * 100 = 10 percent decrease\n\n## Similar topics\n\nWe have discussed the method for calculating percent increases and decreases; however, there will be times when we are interested in the percentage change, regardless of whether it is an increase or decrease. Utilize the calculator to calculate percentage change or the calculator for percentage differences If this is the situation. 
Among the scenarios in which this may be useful is an opinion poll to determine whether the percentage of people in favor of a particular political party differs from 50%.\n\nA calculator that computes percentage increases is helpful not only in maths but also in science, for instance, when calculating the percent increase in the mass of a chemical element within a compound.\n\n## Frequently Asked Questions\n\n### Why is a percentage increase necessary?\n\nA percentage increase can be helpful to determine how a value has changed over time. Although percentage increases are similar to absolute increases, the former is more effective for comparing different data sets. For instance, an absolute increase of 50 describes both a shift from 1 to 51 and a shift from 50 to 100. However, the percentage change for the first is 5000% while for the second it is 100%, so the first change is far more significant. This is why the percentage increase is the most frequently utilized method to measure growth.\n\n### What is the formula for calculating percentage changes over time?\n\n1. Divide the new number by the initial number. If you've already determined the percent change, skip to step 4.\n2. Subtract one from the result of the division.\n3. Multiply this number by 100. This will provide you with the percent change.\n4. Divide the percent change by the time interval between the two figures.\n5. You now have the percent change over time. Keep in mind that the units are percent per unit of time, e.g., per second, per minute, and so on. Note that this doesn't include compounding.\n6. For plots with linear trends, multiply this number by a time difference to obtain the percentage difference between two points in time.\n7. For non-linear plots, substitute the later value into your equation and solve it algebraically; this will only compute the percentage difference between the input value and the initial value.\n\n### How do I increase an amount by a certain percentage?\n\nHere are the steps to follow if you wish to raise a number by a certain percentage:\n\n1. Divide the amount you want to increase by 100 to find 1% of it.\n2. Multiply 1 percent by the chosen percentage.\n3. Add this amount to your original amount.\n4. That's it; you have just added a percentage to the number!\n\n### What is the correct method for adding 5% to a number?\n\n1. Divide the number you wish to increase by 100 to find 1% of it.\n2. Multiply that figure by 5.\n3. Lastly, add the result of the multiplication to your original number.\n4. You are now working at a rate of 105 percent.\n\n### What is the proper way to combine two percentages?\n\nFollow these steps to add two percentages:\n\n1. Divide the base number by 100 to find 1% of it.\n2. Multiply the result by the first percentage (e.g., 50 for 50 percent) to determine that portion of the number.\n3. Repeat steps 1 and 2 for the second percentage.\n4. Sum the two results.\n5. If the base number is identical for both percentages, you may simply add the two percentages and then apply the combined percentage to determine the outcome.\n\n### How do I figure out a 10% increase?\n\n1. Divide the number by 10, or equivalently multiply it by 0.1, to find 10% of it.\n2. Add the result from the previous step to your original number.\n3. You're so clever!\n\n### What is the best way to create a percentage?\n\n1.
Select two values: the number you wish to determine the percentage of, and the percentage itself.\n2. Divide the chosen number by 100 to find 1% of it.\n3. Multiply the result by the preferred percentage.\n4. Congratulations, you have just made a percentage!\n\n### How much is the equivalent of a 50% increase?\n\nA 50 percent increase means the new value is one and a half times the original amount. It can be determined by taking half of your value and adding it to the total. For example, to increase 80 by 50%, you would divide it by 2 to arrive at 40, then add these two numbers for 120. A 50% increase is different from a 100% increase, which doubles the original amount.\n\n### What is the formula for calculating percentage increases in Excel?\n\nThough it would be more convenient to use the Omni Percentage Increase Calculator, the following is the procedure to calculate a percentage increase using Excel:\n\n1. Enter the original number (for example, in cell A1).\n2. Enter the number that has been increased (for example, in cell B1).\n3. Subtract the original number from the increased number (in C1, enter =B1-A1) and label the result 'difference'.\n4. Divide the difference by the original number, and then multiply by 100 (in D1, enter =(C1/A1)*100) and label it 'percentage increase'.\n5. Right-click the last cell and select the Format Cells option.\n6. Select Percentage in the Format Cells box and set the number of decimal places under the Number heading.\n\n### How can I determine the most effective method to increase it by 20%?\n\n1. Divide the number by 100 to obtain 1% of it.\n2. Multiply 1% by the desired percentage, which is 20.\n3. Add the number from the previous step to your initial number.\n4. Congratulations on achieving the 20% goal!" ]
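The two formulas on this page are small enough to check in a few lines of code. This sketch (ours, not part of the calculator) reproduces both worked examples:

```python
# Percent increase and decrease, exactly as defined above.
def percent_increase(original, new):
    return (new - original) / original * 100

def percent_decrease(original, new):
    return (original - new) / original * 100

print(round(percent_increase(1250, 1445), 1))  # 15.6, as in the example
print(round(percent_decrease(1445, 1300), 1))  # 10.0, as in the example
```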
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9086113,"math_prob":0.98735565,"size":8479,"snap":"2022-40-2023-06","text_gpt3_token_len":1812,"char_repetition_ratio":0.20471977,"word_repetition_ratio":0.023907105,"special_character_ratio":0.23375398,"punctuation_ratio":0.106343284,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993088,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-30T13:25:54Z\",\"WARC-Record-ID\":\"<urn:uuid:61fc4bad-256d-45e4-974f-7b87cc0e2ae3>\",\"Content-Length\":\"51633\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec53cd83-6f13-45c1-a313-9486d387d2fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff0ef5b3-c946-4d9f-9874-428bb94279c6>\",\"WARC-IP-Address\":\"104.21.13.179\",\"WARC-Target-URI\":\"https://percentagecalculatorfree.com/percentage-increase-calculator/\",\"WARC-Payload-Digest\":\"sha1:XTGG7AV5MHB3BPSXTSBTIXSORGR4NYGP\",\"WARC-Block-Digest\":\"sha1:3SUJTJA6FZGY4EOUUSQ3F6HRTIIX6VTE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335469.40_warc_CC-MAIN-20220930113830-20220930143830-00337.warc.gz\"}"}
https://www.ibrtses.com/delphi/snap.html
[ "", null, "where : ibrtses delphi\n\n# Delphi - snap and grid\n\n#### disclaimer\n\nthe source code of this page may not appear correctly in certain browsers\ndue to special characters. Have a look at the source of this HTML page\n\n## Objective\n\nScaled and/or scalable graphic, mainly vector graphic, can lead to uneven\ncoordinate numbers. The preferable even numbers can be achieved with the\nimplementation of a snap.\nExamples for\n```uneven numbers\teven numbers instead\n0.00001278\t0\n0.200142\t0.2\n0.399786\t0.4\n0.499121\t0.5\n```\nrelated : Mapping world and screen\n\n## Pixelsnap\n\nThe inter pixel distance may not become just any number, but has to be\nrestricted to even numbers :\n```0.1 x 10^N\t(1/10 based)\n0.125 x 10^N\t(1/8 based)\n0.1667 x 10^N\t(1/6 based)\n0.2 x 10^N\t(1/5 based)\n0.25 x 10^N\t(1/4 based)\n0.333 x 10^N\t(1/3 based)\n0.4 x 10^N\t(1/2.5 based)\n0.5 x 10^N\t(1/2 based)\n0.6667 x 10^N\t(2/3 based)\n```\nor similar, whereas N is integer\n\nFor a given drawing the interpixel snap is given :\n```Var pixelsnap:float;\n```\nthen each number to be displayed or stored is rounded to the nearest\nmultiple of the pixelsnap.\n```function snappixel(x:float):float;\nvar i:integer;\nbegin\ni:=round(x/pixelsnap);\nresult:=i*pixelsnap;\nend;\n```\n\n## Object snap\n\nFaster operation of the drawing application can be achieved when the\nobjects cannot be moved or placed at each pixel but only at a multiple\nof a pixel. Values may be every 6th, 8th, 10th, 12th, 20th or so pixel.\nThe user has to concentrate less, the bigger the object snap. The pixel\nsnap and the object snap have to be adjusted for sensible numbers.\n\n## Grid\n\ndrawing a grid poses the same problems as the snap does. Grid lines\nshould have a certain gap between them. The grid could be placed at\nthe object snap or a multiple.\n\n## finding sensible numbers\n\nafter introducing the snap, the numbers are considered even at the small scale.\nStill some care has to be taken when drawing a grid or ruler.\nexamples with 0.2 and 0.002 distance :\n```-0.05\t-0.1\t-0.2\t|\t0.995\t0.994\n0.15\t0.1\t0.0\t|\t0.997\t0.996\n0.35\t0.3\t0.2\t|\t0.999\t0.998\n0.55\t0.5\t0.4\t|\t1.001\t1.000\n0.75\t0.7\t0.6\t|\t1.003\t1.002\n0.95\t0.9\t0.8\t|\t1.005\t1.004\n1.15\t1.1\t1.0\t|\t1.007\t1.006\n```\nAs can be seen, if zero is part of the range, it has to part of the grid.\nIt is therefore advisable to draw a grid top-down with decreasing distance\n\nto get a grid with about 10 lines on a range -0.1 .. 1.1, use :\n\n```findsnap(1.2,10);\t//which gives 0.1\n\nuses math;\n\nfunction findsnap(range:float;divisions:integer):float;\nvar i,exponent:integer;\nm,u,mantissa:extended;\nbegin\nu:=abs(range/divisions);\n// split mantissa and exponent but in decimal notation\nexponent:=ceil(log10(u));\nmantissa:=u/intpower(10,exponent); // 0.1< mantissa <= 1.0\n// now adjust the snap as case\nif (mantissa<0.14) then m:=0.1;\nif (mantissa>=0.14)and(mantissa<0.18) then m:=1/6;\nif (mantissa>=0.18)and(mantissa<0.23)then m:=1/5;\nif (mantissa>=0.23)and(mantissa<0.27)then m:=1/4;\nif (mantissa>=0.27)and(mantissa<0.4)then m:=1/3;\nif (mantissa>=0.4)and(mantissa<0.6)then m:=1/2;\nif (mantissa>=0.6)then m:=1;\nresult:=m*intpower(10,exponent);\nend;\n```\n\n## Application\n\nassume a field of x:=0 .. pi and y:= -1.1 .. 
1.1 has to be mapped\nonto an image of 400x400 pixels.\nThe pixels have to be square, meaning the snap for x and y is the same.\n``` var range, gridsnap,x,y:extended;\ni,xs,ys:integer;\nxlow,xhigh,ylow,yhigh:extended; // the boundary\n\nrange:=max(2.2,pi);\ngridsnap:=findsnap(range,20);\n// make the field at least as big as requested\n// lower x\ni:=floor(0/gridsnap);\nxlow:=i*gridsnap;\n// upper x\ni:=ceil(pi/gridsnap);\nxhigh:=i*gridsnap;\n// lower y\ni:=floor(-1.1/gridsnap);\nylow:=i*gridsnap;\n// upper y\ni:=ceil(1.1/gridsnap);\nyhigh:=i*gridsnap;\n// draw the lines, map them to integer first\nx:=xlow;\nrepeat\nxs:=map(x);\t\t\t\t//map them to integer\nline(xs,0,xs,image.height-1);\nx:=x+gridsnap;\nuntil abs(x-xhigh)<0.001;\ny:=ylow;\nrepeat\nys:=map(y);\t\t\t\t//map them to integer\nline(0,ys,image.width-1,ys);\ny:=y+gridsnap;\nuntil abs(y-yhigh)<0.001;\n\n```\n\n### findings\n\n• using a snap changes the inter pixel distance and therefore the\nscale of the drawing. Either the size of the drawing or the mapped\nrange on the drawing has to be adjusted.\n• using EXTENDED as float gives the best result.\n• when the pixels have to be square, the same snap has to be\napplied for x and y.\n\nFeedback is welcome" ]
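As a cross-check of the case boundaries in findsnap, here is a line-for-line Python transcription (an illustration only; the page's own code is the Delphi version above). math.ceil and math.log10 mirror ceil and log10 from Delphi's math unit:

```python
import math

def findsnap(rng, divisions):
    u = abs(rng / divisions)
    exponent = math.ceil(math.log10(u))
    mantissa = u / 10.0**exponent      # 0.1 < mantissa <= 1.0
    if mantissa < 0.14:
        m = 0.1
    elif mantissa < 0.18:
        m = 1 / 6
    elif mantissa < 0.23:
        m = 1 / 5
    elif mantissa < 0.27:
        m = 1 / 4
    elif mantissa < 0.40:
        m = 1 / 3
    elif mantissa < 0.60:
        m = 1 / 2
    else:
        m = 1
    return m * 10.0**exponent

print(findsnap(1.2, 10))      # 0.1, matching the comment above
print(findsnap(math.pi, 20))  # ~0.1667, a 1/6-based snap
```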
[ null, "https://www.ibrtses.com/logo2.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69857603,"math_prob":0.9785275,"size":4323,"snap":"2021-31-2021-39","text_gpt3_token_len":1466,"char_repetition_ratio":0.1234082,"word_repetition_ratio":0.0060331826,"special_character_ratio":0.34027296,"punctuation_ratio":0.21866667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9844842,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T11:11:56Z\",\"WARC-Record-ID\":\"<urn:uuid:812fe253-cb0c-4337-b018-703cb1d245d5>\",\"Content-Length\":\"7117\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ce25585e-3b46-4297-8ab7-fb062ca00bec>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d211dfd-d33e-483a-805d-e8a2b135ef59>\",\"WARC-IP-Address\":\"217.26.52.41\",\"WARC-Target-URI\":\"https://www.ibrtses.com/delphi/snap.html\",\"WARC-Payload-Digest\":\"sha1:2SD4JIB2DGWLKTQ7KVQ6PW4HNOMI22AT\",\"WARC-Block-Digest\":\"sha1:SVMLERVXYAFDUI5CVT4OPH2RT3U4FPI6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057421.82_warc_CC-MAIN-20210923104706-20210923134706-00067.warc.gz\"}"}
https://newtonexcelbach.com/2010/04/04/the-inverse-quadratic-method-2/
[ "## The Inverse Quadratic Method – 2\n\nFirstly a clarification from the previous post in this series.  The method presented in that post was a direct application of quadratic interpolation, rather than the inverse quadratic method, as implied by the post title.  The inverse quadratic method will be described in this post, along with Muller’s method which is another variant using quadratic interpolation.\n\nIn the quadratic interpolation method described in the previous post the next approximation of the function root was found in two stages:\n\n1. Find the coefficients of the quadratic curve passing through three points of the function\n2. Find the closest root of that quadratic, using the quadratic formula.\n\nIn Muller’s method these two stages are combined into one, and the equation used to find the root of the quadratic is less prone to loss of significance; see the Wikipedia article on the topic:\n\nThe procedure for finding the next root approximation is considerably simplified in the Inverse Quadratic Method.  In this method a quadratic curve is fitted to the three points on the function being solved, but using the f(x) values as the x values, and the x values as the f(x) values.  The resulting function may be evaluated directly for x = 0 (i.e. f(x) = 0 in the original function).  Note that the inverse quadratic function is an approximation to the quadratic function through the chosen points, so the root found by this process is not an exact root of the quadratic function.\n\nThe equation for the next root approximation is given by Wikipedia as:\n\nMuller’s Method and the Inverse Quadratic Method are now incorporated in the ItSolve Functions.xls spreadsheet, along with full open source code:\n\nThis entry was posted in Excel, Maths, UDFs, VBA and tagged , , , , . Bookmark the permalink.\n\n### 3 Responses to The Inverse Quadratic Method – 2\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89230186,"math_prob":0.98518366,"size":1965,"snap":"2021-43-2021-49","text_gpt3_token_len":401,"char_repetition_ratio":0.1754207,"word_repetition_ratio":0.055727553,"special_character_ratio":0.19847329,"punctuation_ratio":0.08913649,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99475086,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T11:49:00Z\",\"WARC-Record-ID\":\"<urn:uuid:d57a3ebe-6662-4b20-9d90-311342eee87c>\",\"Content-Length\":\"138182\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:19afa8e0-9b0a-4dd6-b95a-8860f981767b>\",\"WARC-Concurrent-To\":\"<urn:uuid:82e6c875-6919-4614-8519-f60312236a40>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://newtonexcelbach.com/2010/04/04/the-inverse-quadratic-method-2/\",\"WARC-Payload-Digest\":\"sha1:XXU7QHGV4OIH3PAV7IMA7HFUENVZAE45\",\"WARC-Block-Digest\":\"sha1:6LRFY5NHWVNLQDBBVCRRLD57LGRMSFT2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358180.42_warc_CC-MAIN-20211127103444-20211127133444-00365.warc.gz\"}"}
https://stage.geogebra.org/m/ajfRJqCb
[ "# Finding Equations of LSRL: Old School\n\nStudy the table in the spreadsheet. Solving the system of equations (in row 7) for m and b is how they used to find the equations of least square regression lines. Why does this work?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88726366,"math_prob":0.9478261,"size":280,"snap":"2021-31-2021-39","text_gpt3_token_len":70,"char_repetition_ratio":0.1521739,"word_repetition_ratio":0.0,"special_character_ratio":0.22857143,"punctuation_ratio":0.12962963,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9936579,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-17T07:13:54Z\",\"WARC-Record-ID\":\"<urn:uuid:ed14643b-bec9-4194-a8e7-644586fb0990>\",\"Content-Length\":\"38693\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6458e21e-ce51-4a33-95c2-f088450d1025>\",\"WARC-Concurrent-To\":\"<urn:uuid:b861e01b-b58a-4b15-b922-c04ad9a3c820>\",\"WARC-IP-Address\":\"54.239.152.127\",\"WARC-Target-URI\":\"https://stage.geogebra.org/m/ajfRJqCb\",\"WARC-Payload-Digest\":\"sha1:K4RQGR3ST2YDV4PYIBHBPSHE3CQRFDGO\",\"WARC-Block-Digest\":\"sha1:3AKGE5AXSHEDTUPIDN7BEQG3GTTEYFL2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780055601.25_warc_CC-MAIN-20210917055515-20210917085515-00244.warc.gz\"}"}
https://www.colorhexa.com/0f47ed
[ "# #0f47ed Color Information\n\nIn a RGB color space, hex #0f47ed is composed of 5.9% red, 27.8% green and 92.9% blue. Whereas in a CMYK color space, it is composed of 93.7% cyan, 70% magenta, 0% yellow and 7.1% black. It has a hue angle of 224.9 degrees, a saturation of 88.1% and a lightness of 49.4%. #0f47ed color hex could be obtained by blending #1e8eff with #0000db. Closest websafe color is: #0033ff.\n\n• R 6\n• G 28\n• B 93\nRGB color chart\n• C 94\n• M 70\n• Y 0\n• K 7\nCMYK color chart\n\n#0f47ed color description : Vivid blue.\n\n# #0f47ed Color Conversion\n\nThe hexadecimal color #0f47ed has RGB values of R:15, G:71, B:237 and CMYK values of C:0.94, M:0.7, Y:0, K:0.07. Its decimal value is 1001453.\n\nHex triplet RGB Decimal 0f47ed `#0f47ed` 15, 71, 237 `rgb(15,71,237)` 5.9, 27.8, 92.9 `rgb(5.9%,27.8%,92.9%)` 94, 70, 0, 7 224.9°, 88.1, 49.4 `hsl(224.9,88.1%,49.4%)` 224.9°, 93.7, 92.9 0033ff `#0033ff`\nCIE-LAB 39.107, 48.178, -86.395 17.733, 10.721, 81.251 0.162, 0.098, 10.721 39.107, 98.92, 299.146 39.107, -15.187, -121.937 32.743, 39.373, -124.207 00001111, 01000111, 11101101\n\n# Color Schemes with #0f47ed\n\n• #0f47ed\n``#0f47ed` `rgb(15,71,237)``\n• #edb50f\n``#edb50f` `rgb(237,181,15)``\nComplementary Color\n• #0fb6ed\n``#0fb6ed` `rgb(15,182,237)``\n• #0f47ed\n``#0f47ed` `rgb(15,71,237)``\n• #460fed\n``#460fed` `rgb(70,15,237)``\nAnalogous Color\n• #b6ed0f\n``#b6ed0f` `rgb(182,237,15)``\n• #0f47ed\n``#0f47ed` `rgb(15,71,237)``\n• #ed460f\n``#ed460f` `rgb(237,70,15)``\nSplit Complementary Color\n• #47ed0f\n``#47ed0f` `rgb(71,237,15)``\n• #0f47ed\n``#0f47ed` `rgb(15,71,237)``\n• #ed0f47\n``#ed0f47` `rgb(237,15,71)``\n• #0fedb5\n``#0fedb5` `rgb(15,237,181)``\n• #0f47ed\n``#0f47ed` `rgb(15,71,237)``\n• #ed0f47\n``#ed0f47` `rgb(237,15,71)``\n• #edb50f\n``#edb50f` `rgb(237,181,15)``\n• #0a31a5\n``#0a31a5` `rgb(10,49,165)``\n• #0c39bd\n``#0c39bd` `rgb(12,57,189)``\n• #0d40d5\n``#0d40d5` `rgb(13,64,213)``\n• #0f47ed\n``#0f47ed` `rgb(15,71,237)``\n• #2458f1\n``#2458f1` `rgb(36,88,241)``\n• #3c6af3\n``#3c6af3` `rgb(60,106,243)``\n• #547df4\n``#547df4` `rgb(84,125,244)``\nMonochromatic Color\n\n# Alternatives to #0f47ed\n\nBelow, you can see some colors close to #0f47ed. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #0f7fed\n``#0f7fed` `rgb(15,127,237)``\n• #0f6ced\n``#0f6ced` `rgb(15,108,237)``\n• #0f5aed\n``#0f5aed` `rgb(15,90,237)``\n• #0f47ed\n``#0f47ed` `rgb(15,71,237)``\n• #0f35ed\n``#0f35ed` `rgb(15,53,237)``\n• #0f22ed\n``#0f22ed` `rgb(15,34,237)``\n• #0f10ed\n``#0f10ed` `rgb(15,16,237)``\nSimilar Colors\n\n# #0f47ed Preview\n\nThis text has a font color of #0f47ed.\n\n``<span style=\"color:#0f47ed;\">Text here</span>``\n#0f47ed background color\n\nThis paragraph has a background color of #0f47ed.\n\n``<p style=\"background-color:#0f47ed;\">Content here</p>``\n#0f47ed border color\n\nThis element has a border color of #0f47ed.\n\n``<div style=\"border:1px solid #0f47ed;\">Content here</div>``\nCSS codes\n``.text {color:#0f47ed;}``\n``.background {background-color:#0f47ed;}``\n``.border {border:1px solid #0f47ed;}``\n\n# Shades and Tints of #0f47ed\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #010510 is the darkest color, while #fcfdff is the lightest one.\n\n• #010510\n``#010510` `rgb(1,5,16)``\n• #020a22\n``#020a22` `rgb(2,10,34)``\n• #031035\n``#031035` `rgb(3,16,53)``\n• #041547\n``#041547` `rgb(4,21,71)``\n• #061b59\n``#061b59` `rgb(6,27,89)``\n• #07206c\n``#07206c` `rgb(7,32,108)``\n• #08267e\n``#08267e` `rgb(8,38,126)``\n• #092b91\n``#092b91` `rgb(9,43,145)``\n• #0a31a3\n``#0a31a3` `rgb(10,49,163)``\n• #0b36b6\n``#0b36b6` `rgb(11,54,182)``\n• #0d3cc8\n``#0d3cc8` `rgb(13,60,200)``\n• #0e41db\n``#0e41db` `rgb(14,65,219)``\n• #0f47ed\n``#0f47ed` `rgb(15,71,237)``\n• #1f54f1\n``#1f54f1` `rgb(31,84,241)``\n• #3162f2\n``#3162f2` `rgb(49,98,242)``\n• #4470f3\n``#4470f3` `rgb(68,112,243)``\n• #567ef4\n``#567ef4` `rgb(86,126,244)``\n• #698cf5\n``#698cf5` `rgb(105,140,245)``\n• #7b9af7\n``#7b9af7` `rgb(123,154,247)``\n• #8da8f8\n``#8da8f8` `rgb(141,168,248)``\n• #a0b6f9\n``#a0b6f9` `rgb(160,182,249)``\n• #b2c4fa\n``#b2c4fa` `rgb(178,196,250)``\n• #c5d3fb\n``#c5d3fb` `rgb(197,211,251)``\n• #d7e1fc\n``#d7e1fc` `rgb(215,225,252)``\n• #eaeffe\n``#eaeffe` `rgb(234,239,254)``\n• #fcfdff\n``#fcfdff` `rgb(252,253,255)``\nTint Color Variation\n\n# Tones of #0f47ed\n\nA tone is produced by adding gray to any pure hue. In this case, #7a7c82 is the less saturated color, while #0542f7 is the most saturated one.\n\n• #7a7c82\n``#7a7c82` `rgb(122,124,130)``\n• #70778c\n``#70778c` `rgb(112,119,140)``\n• #667296\n``#667296` `rgb(102,114,150)``\n• #5d6d9f\n``#5d6d9f` `rgb(93,109,159)``\n• #5369a9\n``#5369a9` `rgb(83,105,169)``\n• #4964b3\n``#4964b3` `rgb(73,100,179)``\n• #3f5fbd\n``#3f5fbd` `rgb(63,95,189)``\n• #365ac6\n``#365ac6` `rgb(54,90,198)``\n• #2c55d0\n``#2c55d0` `rgb(44,85,208)``\n• #2251da\n``#2251da` `rgb(34,81,218)``\n• #194ce3\n``#194ce3` `rgb(25,76,227)``\n• #0f47ed\n``#0f47ed` `rgb(15,71,237)``\n• #0542f7\n``#0542f7` `rgb(5,66,247)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0f47ed is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
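The conversions quoted at the top of the page are easy to verify. The following sketch (ours, not ColorHexa's) reproduces the RGB and CMYK values for #0f47ed with the standard formulas; the CMYK step assumes a non-black color:

```python
def hex_to_rgb(h):
    h = h.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_cmyk(r, g, b):
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)            # black picks up the brightest channel's deficit
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

r, g, b = hex_to_rgb('#0f47ed')
print(r, g, b)                                         # 15 71 237
print([round(v * 100) for v in rgb_to_cmyk(r, g, b)])  # [94, 70, 0, 7]
```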
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5528238,"math_prob":0.58025956,"size":3686,"snap":"2019-43-2019-47","text_gpt3_token_len":1686,"char_repetition_ratio":0.123574145,"word_repetition_ratio":0.011111111,"special_character_ratio":0.5501899,"punctuation_ratio":0.23751387,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9780141,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-14T17:42:33Z\",\"WARC-Record-ID\":\"<urn:uuid:20d522db-ef55-4e52-83c8-f8633ec359f0>\",\"Content-Length\":\"36270\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:864a7cda-9d5f-4df9-a4f0-c5da6e408485>\",\"WARC-Concurrent-To\":\"<urn:uuid:80ccb269-e583-4ab6-8c22-fc287beb6bbe>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/0f47ed\",\"WARC-Payload-Digest\":\"sha1:HXJTA77LYVIPDOLPCLLWV3MXJESGBSNP\",\"WARC-Block-Digest\":\"sha1:M5YN2CA3DBSG2IOBJZRD4PZULMR4WRHX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668529.43_warc_CC-MAIN-20191114154802-20191114182802-00242.warc.gz\"}"}
https://programmingpraxis.com/2009/10/
[ "### October 30, 2009\n\nIn the previous exercise we looked at three sorting algorithms that work in quadratic time. Today we look at two sorting algorithms, each a minor variant on one of the previous algorithms, that work much more quickly.\n\nComb sort is a variant of bubble sort popularized by Stephen Lacey and Richard Box in an article in the April 1991 edition of Byte Magazine. The basic idea is to quickly eliminate turtles, small values near the end of the array, since they greatly hamper the speed of the sort. In bubble sort, the elements being compared are always adjacent; the gap between them is 1. In comb sort, the gap is initially the length of the list being sorted; the array is sorted using that gap size, then the gap is reduced and the array is sorted again, and so on until the gap is reduced to 1, when the sort reduces to ordinary bubble sort. Since early stages with large gaps quickly move turtles near the front of the array, later stages with smaller gaps have less work to do, and the sorting algorithm becomes relatively efficient. Most often, the gap is reduced by a factor of 1.3 at each step, though other shrink factors are sometimes used.\n\nIn the same way that comb sort is a variant of bubble sort, shell sort, invented by Donald Shell in 1959, is a variant of insertion sort that attempts to eliminate large disorder in early stages so that later stages have less work to do. Shell sort performs multiple stages of insertion sort, using a diminishing sequence of gaps that eventually reaches 1; a popular gap sequence is …, 364, 121, 40, 13, 4, 1.\n\nYour task is to write functions that perform comb sort and shell sort, in the same manner as the previous exercise. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.\n\nPages: 1 2\n\n### October 27, 2009\n\nSorting is one of the most common computing tasks. In the days of large mainframes, sorting would often account for ten percent of a computer’s workload, and there were complicated procedures involving large free-standing tape machines for sorting more records than could fit in the computer’s memory; the programmer who could shave a few percentage points of time or core memory space off the standard system sort was a hero. Nowadays, most programmers simply call their local sort library, and never worry about how it works.\n\nWe are going to explore classical sorting algorithms in the next several exercises. The rules of the game: We will be sorting arrays of integers with elements stored in locations zero through n−1, where n is the number of elements in the array. We will always sort into ascending order, and will use <, never ≤, to compare array elements. All sorting functions will be called with two parameters, the name of the array and its length.\n\nToday, we will look at three simple sorting algorithms. Bubble sort works by repeatedly stepping through the array to be sorted, comparing each pair of adjacent elements and interchanging them if they are in the wrong order, until the array is sorted. Selection sort works by repeatedly passing through the array, at each pass finding the minimum element of the array, interchanging it with the first element of the array, then repeating on the sub-array that excludes the first element of the array. 
### October 27, 2009

Sorting is one of the most common computing tasks. In the days of large mainframes, sorting would often account for ten percent of a computer's workload, and there were complicated procedures involving large free-standing tape machines for sorting more records than could fit in the computer's memory; the programmer who could shave a few percentage points of time or core memory space off the standard system sort was a hero. Nowadays, most programmers simply call their local sort library, and never worry about how it works.

We are going to explore classical sorting algorithms in the next several exercises. The rules of the game: We will be sorting arrays of integers with elements stored in locations zero through n−1, where n is the number of elements in the array. We will always sort into ascending order, and will use <, never ≤, to compare array elements. All sorting functions will be called with two parameters, the name of the array and its length.

Today, we will look at three simple sorting algorithms. Bubble sort works by repeatedly stepping through the array to be sorted, comparing each pair of adjacent elements and interchanging them if they are in the wrong order, until the array is sorted. Selection sort works by repeatedly passing through the array, at each pass finding the minimum element of the array, interchanging it with the first element of the array, then repeating on the sub-array that excludes the first element of the array. Insertion sort works the same way that card players generally sort their hands; starting from an empty hand, they pick up a card, insert it into the correct position, then repeat with each new card until no cards remain.

Your task is to write functions that sort an array using bubble sort, selection sort, and insertion sort; you should also write a test program that can be used for any of the sorting algorithms. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

## Mr. S. and Mr. P.

### October 23, 2009

John McCarthy, who discovered Lisp, attributes this puzzle to Hans Freudenthal:

We pick two numbers a and b, so that 99 ≥ a ≥ b ≥ 2. We tell Mr. P. the product a × b and Mr. S. the sum a + b. Then Mr. S. and Mr. P. engage in the following dialog:

Mr. P.: I don't know the numbers.

Mr. S.: I knew you didn't know. I don't know either.

Mr. P.: Now I know the numbers.

Mr. S.: Now I know them too.

Find the numbers a and b.

Your task is to find the two numbers. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.

## Shuffle

### October 20, 2009

It is easy to shuffle an array by stepping through the array, swapping each element with a forward element (an element at an index greater than or equal to the current index) until the next-to-last element is reached. It is harder to shuffle a linked list, because lists don't permit ready access to any element other than the first.

Your task is to write functions that shuffle an array and a linked list. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
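A hedged Python sketch of both shuffles. The array version is the swap-with-a-forward-element procedure described above (the Fisher-Yates shuffle); for the linked list, one simple approach (an assumption here, not a method the exercise prescribes) is to collect the nodes into a Python list, shuffle that, and relink. The `Node` shape, with `data` and `next` fields, is likewise assumed for illustration:

```python
import random

class Node:
    """Minimal singly linked list node, assumed for illustration."""
    def __init__(self, data, next=None):
        self.data, self.next = data, next

def shuffle_array(xs):
    """Swap each element with a randomly chosen forward element."""
    for i in range(len(xs) - 1):
        j = random.randrange(i, len(xs))   # j >= i, possibly i itself
        xs[i], xs[j] = xs[j], xs[i]

def shuffle_list(head):
    """Shuffle a linked list by shuffling its nodes in an array, then relinking."""
    nodes = []
    while head is not None:
        nodes.append(head)
        head = head.next
    if not nodes:
        return None
    shuffle_array(nodes)
    for a, b in zip(nodes, nodes[1:]):
        a.next = b
    nodes[-1].next = None
    return nodes[0]
```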
## Growable Arrays

### October 16, 2009
", null, "
Most programming languages provide a data structure called an array that provides constant-time access and update to its elements, at the cost of fixing the size of the array in advance of its use; Scheme calls that data structure a vector. In their book The Practice of Programming, Brian Kernighan and Rob Pike describe arrays that can grow during use, using the C language and its `realloc` function to double the size of an array when needed. In this exercise we will create a tree-based data structure that provides logarithmic-time access and update to its elements without requiring periodic reallocations, based on the functional arrays in Lawrence Paulson's book ML for the Working Programmer.

A growable array has subscripts from 1 to n, where n is the current number of elements in the array. The elements are stored in a binary tree. To find the k'th element of the array, start at the root and repeatedly divide k by two until it becomes one, moving left if the remainder is zero and right if the remainder is one. For instance, the 12th element of the array is found by moving left, left and right from the root, as shown in the diagram at right. The operations on a growable array are `get`, which retrieves an element of the array, `put`, which returns a new array containing the element, and `hirem`, which shrinks the array by a single element. The `put` operation can increase the upper bound of the array by one.

Your task is to implement the growable array data structure. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.

## Bifid Cipher

### October 13, 2009

The bifid cipher was invented by the French cryptographer Felix Delastelle in 1901. Though never used militarily, it is popular among classical cryptographers because it is simple to operate manually yet reasonably secure. Bifid uses a Polybius square to substitute a digit-pair for each plain-text letter, then transposition causes fractionation of the digit pairs, creating diffusion, and finally the transposed digits are reformed into cipher-text letters. An example is shown below:

```
  1 2 3 4 5
1 A B C D E
2 F G H I K
3 L M N O P
4 Q R S T U
5 V W X Y Z
```

Our "key" is just the letters of the alphabet, in order, with J omitted; other methods of building a Polybius square have been discussed in previous exercises. To encipher a message, write the row and column numbers of the letters in two rows below the message:

```
P R O G R A M M I N G P R A X I S
3 4 3 2 4 1 3 3 2 3 2 3 4 1 5 2 4
5 2 4 2 2 1 2 2 4 3 2 5 2 1 3 4 3
```

Then the digits are read off by rows, in pairs, and converted back to letters:

```
34 32 41 33 23 23 41 52 45 24 22 12 24 32 52 13 43
 O  M  Q  N  H  H  Q  W  U  I  G  B  I  M  W  C  S
```

So the cipher-text is OMQNHHQWUIGBIMWCS. Deciphering is the inverse operation.

Some variants of bifid break the plain-text into blocks of a given length, called the period, before encipherment, then encipher each block separately; a common period is five. A 6 × 6 variant that includes digits is also common. Another variant of bifid, called trifid, uses a 3 × 3 × 3 cube instead of a square, on the theory that if fractionating by two is good, fractionating by three is better.

Your task is to write functions that encipher and decipher messages using the bifid cipher. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.
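As an illustration (again, not the site's suggested solution), a compact Python sketch of bifid over the alphabetical square above. It assumes the message is upper-cased, that J is folded into I, and that non-letters are dropped; the period variants are not handled:

```python
SQUARE = "ABCDEFGHIKLMNOPQRSTUVWXYZ"     # 5x5 Polybius square, J omitted

def coords(c):
    """1-based (row, column) of a letter in the square."""
    i = SQUARE.index(c)
    return i // 5 + 1, i % 5 + 1

def letter(row, col):
    return SQUARE[(row - 1) * 5 + (col - 1)]

def encipher(plaintext):
    text = plaintext.upper().replace("J", "I")
    text = "".join(c for c in text if c in SQUARE)
    rows, cols = zip(*(coords(c) for c in text))
    digits = list(rows) + list(cols)      # row digits, then column digits
    return "".join(letter(digits[i], digits[i + 1])
                   for i in range(0, len(digits), 2))

def decipher(ciphertext):
    digits = [d for c in ciphertext for d in coords(c)]
    half = len(digits) // 2
    return "".join(letter(r, c) for r, c in zip(digits[:half], digits[half:]))

assert encipher("PROGRAMMING PRAXIS") == "OMQNHHQWUIGBIMWCS"
assert decipher("OMQNHHQWUIGBIMWCS") == "PROGRAMMINGPRAXIS"
```

The asserts reproduce the worked example above, which is a convenient check on the fractionation step.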
## Calculating Pi

### October 9, 2009
", null, "
Pi, or π, is a mathematical constant with a value that is the ratio of a circle's circumference to its diameter. It has been known since antiquity that the ratio of the circumference of a circle to its diameter is the same for all circles; the ancient Egyptians, Indians, and Babylonians had all calculated approximations for π about two millennia before the birth of Christ. π, which is approximately equal to 3.14159, is one of the most important constants in math, science and engineering; it pops up regularly in studies of geometry, trigonometry and calculus, Einstein used π in his field equation of general relativity, and it appears in many statistical distributions. In a previous exercise we used a spigot algorithm to calculate the digits of π; our exercise today will use two different methods to calculate the value of π.

An interesting method for calculating π uses Monte Carlo simulation. If a circle of radius r is inscribed in a square with sides of length 2r, the area of the circle will be πr² and the area of the square will be (2r)², so the ratio of the area of the circle to the area of the square will be π/4. Another way of looking at this, as on the diagram, is to consider just the first quadrant of the circle; the square has an area of r², and the portion of the circle within the square has an area of πr²/4.

By taking a large number of points randomly distributed throughout the square and counting how many are within the inscribed circle, we can estimate the value of π. We could do that by building a model, scattering sand over it, and counting the individual grains of sand, but since we are programmers, it is easier to write a program to do the counting for us.

Your first task is to implement a program to calculate the value of π using the Monte Carlo method described above.

The second method is due to Archimedes (287–212 BC), a Greek mathematician who lived in Syracuse, who famously bounded the value of π within a small range by measuring the perimeters of inscribed and circumscribed regular polygons with ninety-six sides: 223/71 < π < 22/7.
", null, "
Consider a circle with radius 1 and circumference 2π in which regular polygons of 3 × 2ⁿ⁻¹ sides are inscribed and circumscribed; the diagram for n = 2 is shown at right. If bₙ is the semiperimeter of the inscribed polygon, and aₙ is the semiperimeter of the circumscribed polygon, then as n increases, b₁, b₂, b₃, … defines an increasing sequence, and a₁, a₂, a₃, … defines a decreasing sequence, each with limit π.

Given K = 3 × 2ⁿ⁻¹, the semiperimeters are aₙ = K tan(π/K) and bₙ = K sin(π/K) by the definitions of sine and tangent. Likewise, aₙ₊₁ = 2K tan(π/2K) and bₙ₊₁ = 2K sin(π/2K).

Then, simple trigonometry allows us to calculate (1/aₙ + 1/bₙ) = 2/aₙ₊₁ and aₙ₊₁ · bₙ = (bₙ₊₁)². Archimedes started with a₁ = 3 tan(π/3) = 3√3 and b₁ = 3 sin(π/3) = 3√3/2 and calculated b₆ < π < a₆.

Archimedes, of course, didn't have trigonometry available to him, as it hadn't been invented yet; he had to work out the geometry directly, as well as making all the calculations by hand!

Your second task is to write a function that calculates the bounds of π using Archimedes' algorithm. You can test your function for n = 6, as Archimedes did. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.
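A hedged Python sketch of both tasks (an illustration, not the site's suggested solution). The Monte Carlo estimator samples the first quadrant as described; the Archimedes function iterates the two recurrences above, starting from a₁ = 3√3 and b₁ = 3√3/2:

```python
import math
import random

def monte_carlo_pi(trials):
    """Estimate pi by sampling points in the unit square and counting
    how many fall inside the quarter circle of radius 1."""
    inside = sum(1 for _ in range(trials)
                 if random.random() ** 2 + random.random() ** 2 < 1.0)
    return 4.0 * inside / trials

def archimedes_bounds(n):
    """Bounds (lower, upper) on pi from the semiperimeters of polygons
    with 3 * 2**(n-1) sides, via the recurrences in the exercise."""
    a = 3.0 * math.sqrt(3.0)          # a1 = 3 tan(pi/3)
    b = 1.5 * math.sqrt(3.0)          # b1 = 3 sin(pi/3)
    for _ in range(n - 1):
        a = 2.0 * a * b / (a + b)     # from (1/a + 1/b) = 2/a'
        b = math.sqrt(a * b)          # b' = sqrt(a' * b), with the new a
    return b, a

# archimedes_bounds(6) uses 96-sided polygons, as Archimedes did, and
# yields bounds of roughly 3.14103 < pi < 3.14271.
```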
## MapReduce

### October 6, 2009

MapReduce is a programming idiom that provides a convenient expression for programs that combine like items into equivalence classes. The idiom was developed by Google as a way to exploit large clusters of computers operating in parallel on large bodies of data, but is also useful as a way of structuring certain types of programs. Jeffrey Dean and Sanjay Ghemawat, in their paper MapReduce: Simplified Data Processing on Large Clusters, describe the idiom:

Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key.

Google uses MapReduce to automatically parallelize computations across the large sets of machines at their data centers, gracefully "partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication." Our aspirations (and our budget) are more modest: build a framework for exploiting the mapreduce idiom in our day-to-day programs. Consider the following examples:

• Count the frequencies of letters in a string or words in a text. The mapper associates the value 1 with each character or word as the key, and the reducer is simply the addition operator, adding all the 1s to count the words.

• Produce a cross-reference listing of a program source text. The mapper associates each identifier with the line number where it appears, and the reducer collects the line numbers for each identifier, discarding duplicates.

• Identify anagrams in a word list. The mapper "signs" each word by sorting its characters into alphabetical order, and the reducer brings together words with common signatures.

The `map-reduce` function takes four parameters: the mapping function, the reducing function, a less-than predicate that operates on keys, and the input list. The mapping function takes an item from the input list and returns a key/value pair, and the reducing function takes a key, a new value and an existing value and merges the new value into the existing value. A useful variant of the `map-reduce` function reads input from a file instead of a list; it replaces the input list parameter with a filename and adds a fifth parameter, a reading function that fetches the next input item from the file.

Your task is to write the `map-reduce` and `map-reduce-input` functions. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.
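One possible Python sketch of `map-reduce` (the `map-reduce-input` file-reading variant is omitted here). As an implementation assumption, keys are grouped with a dictionary, so they must be hashable; the less-than predicate is used only to order the final output, via `functools.cmp_to_key`:

```python
import functools

def map_reduce(mapper, reducer, less_than, items):
    """Map each item to a key/value pair, merge values per key with the
    reducer, and return (key, value) pairs sorted by the key predicate."""
    table = {}                         # assumes keys are hashable
    for item in items:
        key, value = mapper(item)
        table[key] = reducer(key, value, table[key]) if key in table else value
    def compare(k1, k2):
        return -1 if less_than(k1, k2) else (1 if less_than(k2, k1) else 0)
    keys = sorted(table, key=functools.cmp_to_key(compare))
    return [(k, table[k]) for k in keys]

# Letter frequencies: the reducer is just addition over the mapped 1s.
freqs = map_reduce(lambda c: (c, 1),
                   lambda key, new, old: new + old,
                   lambda a, b: a < b,
                   "programming praxis")
```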
## Red-Black Trees

### October 2, 2009
", null, "
A red-black tree is a data structure, similar to a binary tree, which is always approximately balanced, so that individual insert and lookup operations take only O(log n) time. Red-black trees are popular because of their good performance and the relative simplicity of their balancing operations. Our discussion of red-black trees is drawn from Section 3.3 of Chris Okasaki's book Purely Functional Data Structures.

A red-black tree is a binary search tree in which each node is colored either red or black. A red-black tree maintains two invariants that ensure its balance:

• No red node ever has a red child.

• Every path from the root to an empty node has the same number of black nodes.

Thus, the shortest possible path has only black nodes, and the longest possible path has alternating red and black nodes, so the longest path is never more than twice as long as the shortest path, and the tree is approximately balanced.

Lookup in red-black trees is identical to its binary-tree counterpart; the colors make no difference. The balance condition is maintained by the insert operation. Each new node is initially colored red. If its parent is black, the tree remains balanced, and nothing need be done. However, if its parent is red, the first invariant is violated, and a balancing function is called to repair the violation by rewriting the black-red-red path as a red node with two black children. This may propagate the violation up the tree, so the balancing function is called recursively until it reaches the root of the tree, which is always recolored black.

Your task is to write functions to maintain red-black trees; you should provide an insert function, a lookup function, and an enlist function that returns a list with the nodes of the tree in order. Each node should contain a key and a value, so the red-black tree can be used as a dictionary, in the manner of the treaps and ternary search tries that we have written in previous exercises. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
" ]
[ null, "https://programmingpraxis.files.wordpress.com/2009/09/growable_arrays.jpg", null, "https://i0.wp.com/upload.wikimedia.org/wikipedia/commons/b/ba/Monte-Carlo01.gif", null, "https://programmingpraxis.files.wordpress.com/2009/09/bounding-pi.jpeg", null, "https://programmingpraxis.files.wordpress.com/2009/10/red-black.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92350113,"math_prob":0.9131357,"size":16831,"snap":"2019-51-2020-05","text_gpt3_token_len":3848,"char_repetition_ratio":0.12800856,"word_repetition_ratio":0.08701428,"special_character_ratio":0.22500148,"punctuation_ratio":0.104385965,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9591607,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,9,null,5,null,null,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T03:35:46Z\",\"WARC-Record-ID\":\"<urn:uuid:8b8b274b-dedd-42d3-8a1f-851446d7df30>\",\"Content-Length\":\"90365\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a9b54232-53fb-45e3-8054-b3f087ba81d1>\",\"WARC-Concurrent-To\":\"<urn:uuid:f2e90d2a-e010-4a90-aad6-21854bd0e603>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://programmingpraxis.com/2009/10/\",\"WARC-Payload-Digest\":\"sha1:KFVXQQEVIFVWKJGHTHD62AJHZBKS7PJ4\",\"WARC-Block-Digest\":\"sha1:FHWYROPYV2AU4UZ3ODDGQQZS4CF3JHFW\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540579703.26_warc_CC-MAIN-20191214014220-20191214042220-00136.warc.gz\"}"}
http://theinfolist.com/html/ALL/s/discrete_cosine_transform.html
[ "TheInfoList\n\nA discrete cosine transform (DCT) expresses a finite sequence of\ndata points In statistics Statistics is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data Data (; ) are individual facts, statistics, or items of information, often numeric. In a mor ...\nin terms of a sum of\ncosine In mathematics, the trigonometric functions (also called circular functions, angle functions or goniometric functions) are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in al ...", null, "functions oscillating at different\nfrequencies Frequency is the number of occurrences of a repeating event per unit of time A unit of time is any particular time Time is the indefinite continued sequence, progress of existence and event (philosophy), events that occur in an apparent ...", null, ". The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in\nsignal processing Signal processing is an electrical engineering Electrical engineering is an engineering discipline concerned with the study, design, and application of equipment, devices, and systems which use electricity, electronics, and electromagnetis ...", null, "and\ndata compression In signal processing Signal processing is an electrical engineering Electrical engineering is an engineering discipline concerned with the study, design, and application of equipment, devices, and systems which use electricity, electro ...\n. It is used in most\ndigital media Digital media means any communication media that operate with the use of any of various encoded machine-readable data Machine-readable data, or computer-readable data, is data Data (; ) are individual facts, statistics, or items of informa ...\n, including\ndigital images Digital usually refers to something using digits, particularly binary digits. Technology and computing Hardware *Digital electronics Digital electronics is a field of electronics Electronics comprises the physics, engineering, technology a ...\n(such as\nJPEG JPEG ( ) is a commonly used method of lossy compression In information technology, lossy compression or irreversible compression is the class of data encoding methods that uses inexact approximations and partial data discarding to represe ...", null, "and\nHEIF High Efficiency Image File Format (HEIF) is a container format for storing individual images and image sequences. The standard covers multimedia files that can also include other media streams, such as timed text, audio and video. HEIF can st ...\n, where small high-frequency components can be discarded),\ndigital video Digital video is an electronic representation of moving visual images (video) in the form of encoded digital data. This is in contrast to analog video, which represents moving visual images in the form of analog signals. Digital video comprises ...\n(such as\nMPEG The Moving Picture Experts Group (MPEG) is an alliance of working groups established jointly by International Organization for Standardization, ISO and International Electrotechnical Commission, IEC that sets standards for media coding, includ ...", null, "and\nH.26x The Video Coding Experts Group or Visual Coding Experts Group (VCEG, also known as Question 6) is a working group of the (ITU-T) concerned with video coding standards. 
It is responsible for standardization of the \"H.26x\" line of video coding sta ...\n),\ndigital audio Digital audio is a representation of sound recorded in, or converted into, Digital signal (signal processing), digital form. In digital audio, the sound wave of the audio signal is typically encoded as numerical Sampling (signal processing), s ...\n(such as\nDolby Digital Dolby Digital, originally synonymous with Dolby AC-3, is the name for what has now become a family of audio compression technologies developed by Dolby Laboratories Dolby Laboratories, Inc. (often shortened to Dolby Labs and known simply a ...\n,\nMP3 MP3 (formally MPEG-1 Audio Layer III or MPEG-2 Audio Layer III) is a coding format for digital audio Digital audio is a representation of sound recorded in, or converted into, Digital signal (signal processing), digital form. In digital a ...", null, "and\nAAC AAC may refer to: Aviation * Advanced Aircraft Advanced Aircraft Corporation is an aircraft manufacturer based in Carlsbad, California. History AAC bought out Riley Aircraft in 1983 in aviation, 1983. Products The firm has specialised in conve ...\n),\ndigital television Digital television (DTV) is the transmission of television audiovisual Audiovisual (AV) is electronic media 200px, Graphical representations of electrical audio data. Electronic media uses either analog (red) or digital (blue) signal pr ...\n(such as\nSDTV Standard-definition television (SDTV, SD, often shortened to standard definition) is a television system which uses a resolution that is not considered to be either high-definition television, high or enhanced-definition television, enhanced ...\n,\nHDTV High-definition television (HD or HDTV) describes a television system providing a substantially higher image resolution Image resolution is the detail an holds. The term applies to s, film images, and other types of images. Higher resolution m ...", null, "and VOD),\ndigital radio Digital radio is the use of digital technology to transmit or receive across the radio spectrum The radio spectrum is the part of the electromagnetic spectrum The electromagnetic spectrum is the range of frequencies (the spectrum A spec ...\n(such as\nAAC+ File:AAC profiles.svg, 250px, Evolution from MPEG-2 AAC-LC (Low Complexity) Profile and MPEG-4 AAC-LC MPEG-4 Part 3#MPEG-4 Audio Object Types, Object Type to AAC-HE v2 Profile. High-Efficiency Advanced Audio Coding (AAC-HE) is an audio coding f ...\nand\n), and\nspeech coding Speech coding is an application of data compression In signal processing Signal processing is an electrical engineering subfield that focuses on analysing, modifying, and synthesizing signals such as audio signal processing, sound, image ...\n(such as\nAAC-LD The MPEG-4 Low Delay Audio Coder (a.k.a. AAC Low Delay, or AAC-LD) is audio compression standard designed to combine the advantages of perceptual audio coding with the low delay necessary for two-way communication. It is closely derived from the ...\n, Siren and Opus). DCTs are also important to numerous other applications in\nscience and engineering Engineering is the use of scientific principles to design and build machines, structures, and other items, including bridges, tunnels, roads, vehicles, and buildings. 
The discipline of engineering encompasses a broad range of more speciali ...\n, such as\ndigital signal processing Digital signal processing (DSP) is the use of digital processing Digital data, in information theory and information systems, is information represented as a string of discrete symbols each of which can take on one of only a finite number of ...\n,\ntelecommunication Telecommunication is the transmission of information by various types of technologies over wire A wire is a single usually cylindrical A cylinder (from Greek Greek may refer to: Greece Anything of, from, or related to Greece Gr ...\ndevices, reducing\nnetwork bandwidth In computing, bandwidth is the maximum rate of data transfer across a given path. Bandwidth may be characterized as network bandwidth, data bandwidth, or digital bandwidth. This definition of ''bandwidth'' is in contrast to the field of signal proc ...\nusage, and\nspectral method Spectral methods are a class of techniques used in applied mathematics and scientific computing to numerically solve certain differential equation In mathematics, a differential equation is an equation that relates one or more function (mathema ...\ns for the numerical solution of\npartial differential equations In , a partial differential equation (PDE) is an equation which imposes relations between the various s of a . The function is often thought of as an \"unknown\" to be solved for, similarly to how is thought of as an unknown number to be solved ...\n. The use of cosine rather than\nsine In mathematics Mathematics (from Ancient Greek, Greek: ) includes the study of such topics as quantity (number theory), mathematical structure, structure (algebra), space (geometry), and calculus, change (mathematical analysis, analysis). ...", null, "functions is critical for compression, since it turns out (as described below) that fewer cosine functions are needed to approximate a typical\nsignal In signal processing Signal processing is an electrical engineering subfield that focuses on analysing, modifying, and synthesizing signals such as audio signal processing, sound, image processing, images, and scientific measurements. Sig ...\n, whereas for differential equations the cosines express a particular choice of\nboundary condition Boundary or Boundaries may refer to: * Border, in political geography Entertainment * ''Boundaries'' (2016 film), a 2016 Canadian film * ''Boundaries'' (2018 film), a 2018 American-Canadian road trip film Mathematics and physics * Boundary (top ...\ns. In particular, a DCT is a\nFourier-related transform This is a list of linear transformation In mathematics Mathematics (from Ancient Greek, Greek: ) includes the study of such topics as quantity (number theory), mathematical structure, structure (algebra), space (geometry), and calculus, chan ...\nsimilar to the\ndiscrete Fourier transform In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced Sampling (signal processing), samples of a function (mathematics), function into a same-length sequence of equally-spaced samples of the discret ...\n(DFT), but using only\nreal number In mathematics Mathematics (from Greek: ) includes the study of such topics as numbers ( and ), formulas and related structures (), shapes and spaces in which they are contained (), and quantities and their changes ( and ). There is no g ...\ns. 
The DCTs are generally related to Fourier series coefficients of a periodically and symmetrically extended sequence, whereas DFTs are related to Fourier series coefficients of only periodically extended sequences. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), whereas in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common.

The most common variant of discrete cosine transform is the type-II DCT, which is often called simply "the DCT". This was the original DCT as first proposed by Ahmed. Its inverse, the type-III DCT, is correspondingly often called simply "the inverse DCT" or "the IDCT". Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and *odd* functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of *overlapping* data. Multidimensional DCTs (MD DCTs) have been developed to extend the concept of the DCT to multidimensional signals, and there are several algorithms to compute them. A variety of fast algorithms have been developed to reduce the computational complexity of implementing the DCT. One of these is the integer DCT (IntDCT), an integer approximation of the standard DCT, used in several ISO/IEC and ITU-T international standards.

DCT compression, also known as block compression, compresses data in sets of discrete DCT blocks. DCT blocks can have a number of sizes, including 8×8 pixels for the standard DCT, and varied integer DCT sizes between 4×4 and 32×32 pixels. The DCT has a strong "energy compaction" property, capable of achieving high quality at high data compression ratios. However, blocky compression artifacts can appear when heavy DCT compression is applied.
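The article names the type-II DCT and its type-III inverse without giving their formulas at this point; as a hedged illustration using the standard textbook definitions (an assumption, not a quotation from this article), a naive O(N²) version of each can be written directly:

```python
import math

def dct_ii(x):
    """Naive O(N^2) type-II DCT: X[k] = sum_n x[n] cos(pi/N (n + 1/2) k)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def dct_iii(X):
    """Naive type-III DCT:
    x[n] = X[0]/2 + sum_{k>=1} X[k] cos(pi/N (n + 1/2) k).
    Composing the two returns the input scaled by N/2."""
    N = len(X)
    return [X[0] / 2 + sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                           for k in range(1, N))
            for n in range(N)]

data = [8.0, 16.0, 24.0, 32.0]
back = dct_iii(dct_ii(data))
assert all(abs(b - len(data) / 2 * d) < 1e-9 for b, d in zip(back, data))
```

Fast O(N log N) algorithms, mentioned below, compute the same result far more efficiently; this sketch only makes the transform pair concrete.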
# History

The discrete cosine transform (DCT) was first conceived by Nasir Ahmed, while working at Kansas State University, and he proposed the concept to the National Science Foundation in 1972. He originally intended the DCT for image compression. Ahmed developed a practical DCT algorithm with his PhD student T. Raj Natarajan and friend K. R. Rao at the University of Texas at Arlington in 1973, and they found that it was the most efficient algorithm for image compression. They presented their results in a January 1974 paper, titled *Discrete Cosine Transform*. It described what is now called the type-II DCT (DCT-II), as well as the type-III inverse DCT (IDCT). It was a benchmark publication, and has been cited as a fundamental development in thousands of works since its publication. The basic research work and events that led to the development of the DCT were summarized in a later publication by Ahmed, "How I Came Up with the Discrete Cosine Transform".

Since its introduction in 1974, there has been significant research on the DCT. In 1977, Wen-Hsiung Chen published a paper with C. Harrison Smith and Stanley C. Fralick presenting a fast DCT algorithm. Further developments include a 1978 paper by M.J. Narasimha and A.M. Peterson, and a 1984 paper by B.G. Lee. These research papers, along with the original 1974 Ahmed paper and the 1977 Chen paper, were cited by the Joint Photographic Experts Group as the basis for JPEG's lossy image compression algorithm in 1992.

In 1975, John A. Roese and Guner S. Robinson adapted the DCT for inter-frame motion-compensated video coding. They experimented with the DCT and the fast Fourier transform (FFT), developing inter-frame hybrid coders for both, and found that the DCT is the most efficient due to its reduced complexity, capable of compressing image data down to 0.25 bit per pixel for a videotelephone scene with image quality comparable to an intra-frame coder requiring 2 bits per pixel. In 1979, Anil K. Jain and Jaswant R. Jain further developed motion-compensated DCT video compression, also called block motion compensation. This led to Chen developing a practical video compression algorithm, called motion-compensated DCT or adaptive scene coding, in 1981. Motion-compensated DCT later became the standard coding technique for video compression from the late 1980s onwards.

The integer DCT is used in Advanced Video Coding (AVC), introduced in 2003, and High Efficiency Video Coding (HEVC), introduced in 2013. The integer DCT is also used in the High Efficiency Image Format (HEIF), which uses a subset of the HEVC video coding format for coding still images.

A DCT variant, the modified discrete cosine transform (MDCT), was developed by John P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987, following earlier work by Princen and Bradley in 1986. The MDCT is used in most modern audio compression formats, such as Dolby Digital (AC-3), MP3 (which uses a hybrid DCT-FFT algorithm), Advanced Audio Coding (AAC), and Vorbis (Ogg).

The discrete sine transform (DST) was derived from the DCT, by replacing the Neumann condition at x = 0 with a Dirichlet condition. The DST was described in the 1974 DCT paper by Ahmed, Natarajan and Rao. A type-I DST (DST-I) was later described by Anil K. Jain in 1976, and a type-II DST (DST-II) was then described by H.B. Kekra and J.K. Solanka in 1978.

Nasir Ahmed also developed a lossless DCT algorithm with Giridhar Mandyam and Neeraj Magotra at the University of New Mexico in 1995. This allows the DCT technique to be used for lossless compression of images. It is a modification of the original DCT algorithm, and incorporates elements of inverse DCT and delta modulation. It is a more effective lossless compression algorithm than entropy coding. Lossless DCT is also known as LDCT.

# Applications
Plant life and animal life rely on many organs that co-exis ...\nand\nbandwidth Bandwidth commonly refers to: * Bandwidth (signal processing) or ''analog bandwidth'', ''frequency bandwidth'', or ''radio bandwidth'', a measure of the width of a frequency range * Bandwidth (computing), the rate of data transfer, bit rate or thr ...\nrequirements, which was significantly reduced by the highly efficient DCT\nlossy compression In information technology, lossy compression or irreversible compression is the class of data compression, data encoding methods that uses inexact approximations and partial data discarding to represent the content. These techniques are used to r ...\ntechnique, capable of achieving\ndata compression ratio Data compression ratio, also known as compression power, is a measurement of the relative reduction in size of data representation produced by a data compression algorithm. It is typically expressed as the division of uncompressed size by compressed ...\ns from 8:1 to 14:1 for near-studio-quality, up to 100:1 for acceptable-quality content. DCT compression standards are used in digital media technologies, such as\ndigital images Digital usually refers to something using digits, particularly binary digits. Technology and computing Hardware *Digital electronics Digital electronics is a field of electronics Electronics comprises the physics, engineering, technology a ...\n,\ndigital photo Digital photography uses cameras A camera is an optical Optics is the branch of physics Physics (from grc, φυσική (ἐπιστήμη), physikḗ (epistḗmē), knowledge of nature, from ''phýsis'' 'nature'), , is the ...\ns,\ndigital video Digital video is an electronic representation of moving visual images (video) in the form of encoded digital data. This is in contrast to analog video, which represents moving visual images in the form of analog signals. Digital video comprises ...\n,\nstreaming media Streaming media is multimedia Multimedia is a form of communication that combines different such as , , , , or into a single interactive presentation, in contrast to traditional mass media which ...\n,\ndigital television Digital television (DTV) is the transmission of television audiovisual Audiovisual (AV) is electronic media 200px, Graphical representations of electrical audio data. Electronic media uses either analog (red) or digital (blue) signal pr ...\n,\nstreaming television Streaming television is the digital distribution Digital distribution (also referred to as content delivery, online distribution, or electronic software distribution (ESD), among others) is the delivery or distribution of digital media conte ...\n,\nvideo-on-demand Video on demand (VOD) is a media distribution system that allows users to access videos without a traditional video playback device and the constraints of a typical static broadcasting schedule. In the 20th century, broadcasting in the form of o ...\n(VOD),\ndigital cinema Digital cinema refers to adoption of digital Digital usually refers to something using digits, particularly binary digits. 
Technology and computing Hardware *Digital electronics Digital electronics is a field of electronics Electronics c ...\n,\nhigh-definition video High-definition video (HD video) is video Video is an electronic Electronic may refer to: *Electronics Electronics comprises the physics, engineering, technology and applications that deal with the emission, flow and control of ele ...\n(HD video), and\nhigh-definition television High-definition television (HD or HDTV) describes a television system providing a substantially higher image resolution Image resolution is the detail an holds. The term applies to s, film images, and other types of images. Higher resolution m ...\n(HDTV). The DCT, and in particular the DCT-II, is often used in signal and image processing, especially for lossy compression, because it has a strong \"energy compaction\" property: in typical applications, most of the signal information tends to be concentrated in a few low-frequency components of the DCT. For strongly correlated\nMarkov process A Markov chain or Markov process is a stochastic model In probability theory Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory tre ...\nes, the DCT can approach the compaction efficiency of the Karhunen-Loève transform (which is optimal in the decorrelation sense). As explained below, this stems from the boundary conditions implicit in the cosine functions. DCTs are also widely employed in solving\npartial differential equations In , a partial differential equation (PDE) is an equation which imposes relations between the various s of a . The function is often thought of as an \"unknown\" to be solved for, similarly to how is thought of as an unknown number to be solved ...\nby\nspectral methods Spectral methods are a class of techniques used in applied mathematics and scientific computing to numerically solve certain differential equations, potentially involving the use of the fast Fourier transform. The idea is to write the solution of t ...\n, where the different variants of the DCT correspond to slightly different even/odd boundary conditions at the two ends of the array. DCTs are also closely related to\nChebyshev polynomials The Chebyshev polynomials are two sequences of polynomials related to the trigonometric functions, cosine and sine functions, notated as T_n(x) and U_n(x). They can be defined several equivalent ways; in this article the polynomials are defined ...\n, and fast DCT algorithms (below) are used in\nChebyshev approximation In mathematics, approximation theory is concerned with how function (mathematics), functions can best be approximation, approximated with simpler function (mathematics), functions, and with Quantitative property, quantitatively characterization (ma ...\nof arbitrary functions by series of Chebyshev polynomials, for example in\nClenshaw–Curtis quadrature Clenshaw–Curtis quadrature and Fejér quadrature are methods for numerical integration, or \"quadrature\", that are based on an expansion of the Integrand#Terminology and notation, integrand in terms of Chebyshev polynomials. Equivalently, they emp ...\n. 
The DCT is the coding standard for multimedia telecommunication devices. It is widely used for bit-rate reduction and for reducing network bandwidth usage. DCT compression significantly reduces the amount of memory and bandwidth required for digital signals.

## General applications

The DCT is used in a great many applications; the most important of them are described in the sections below.

## DCT visual media standards

The DCT-II, also known as simply the DCT, is the most important image compression technique. It is used in image compression standards such as JPEG, and video compression standards such as H.26x, MJPEG, MPEG, DV, Theora and Daala. There, the two-dimensional DCT-II of $N \times N$ blocks is computed and the results are quantized and entropy coded. In this case, $N$ is typically 8 and the DCT-II formula is applied to each row and column of the block. The result is an 8 × 8 transform coefficient array in which the $(0,0)$ element (top-left) is the DC (zero-frequency) component and entries with increasing vertical and horizontal index values represent higher vertical and horizontal spatial frequencies.
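A hedged sketch of the row-column evaluation just described, using a naive unnormalized DCT-II; it omits quantization, entropy coding, and the scale factors a real codec would apply:

```python
import math

def dct_ii_1d(x):
    """Naive unnormalized type-II DCT of one row or column."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def dct_ii_2d(block):
    """2-D DCT-II of a square block: 1-D DCT on every row, then every column."""
    rows = [dct_ii_1d(row) for row in block]
    cols = [dct_ii_1d([rows[i][j] for i in range(len(rows))])
            for j in range(len(rows[0]))]
    # cols[j][i] now holds coefficient (i, j); transpose back.
    return [[cols[j][i] for j in range(len(cols))] for i in range(len(rows))]

# An 8x8 block of constant samples: all energy lands in the (0,0) DC slot.
flat = [[100] * 8 for _ in range(8)]
coeffs = dct_ii_2d(flat)
assert abs(coeffs[0][0] - 6400) < 1e-6          # 100 * 8 * 8
assert all(abs(coeffs[i][j]) < 1e-6
           for i in range(8) for j in range(8) if (i, j) != (0, 0))
```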
Advanced Video Coding (AVC) uses the integer DCT (IntDCT), an integer approximation of the DCT, in 4×4 and 8×8 blocks. High Efficiency Video Coding (HEVC) and the High Efficiency Image Format (HEIF) use varied integer DCT block sizes between 4×4 and 32×32 pixels. AVC is by far the most commonly used format for the recording, compression and distribution of video content, used by 91% of video developers, followed by HEVC, which is used by 43% of developers.

## MD DCT

Multidimensional DCTs (MD DCTs) have several applications, mainly 3-D DCTs such as the 3-D DCT-II, which has several new applications like hyperspectral imaging coding systems, variable temporal length 3-D DCT coding, video coding algorithms, adaptive video coding and 3-D compression. Due to enhancements in hardware and software and the introduction of several fast algorithms, the necessity of using MD DCTs is rapidly increasing. DCT-IV has gained popularity for its applications in fast implementation of real-valued polyphase filter banks, the lapped orthogonal transform and cosine-modulated wavelet bases.

## Digital signal processing

The DCT plays a very important role in digital signal processing. By using the DCT, signals can be compressed. The DCT can be used in electrocardiography for the compression of ECG signals; DCT2 provides a better compression ratio than the DCT. The DCT is widely implemented in digital signal processors (DSPs), as well as in digital signal processing software, and many companies have developed DSPs based on DCT technology.
DCTs are widely used for applications such as encoding, decoding, video, audio, multiplexing, control signals, signaling, and analog-to-digital conversion. DCTs are also commonly used for high-definition television (HDTV) encoder/decoder integrated circuit (IC) chips.

## Compression artifacts

A common issue with DCT compression in digital media is blocky compression artifacts, caused by DCT blocks. The DCT algorithm can cause block-based artifacts when heavy compression is applied. Because the DCT is used in the majority of digital image and video coding standards (such as the JPEG, H.26x and MPEG formats), DCT-based blocky compression artifacts are widespread in digital media. In a DCT algorithm, an image (or frame in an image sequence) is divided into square blocks which are processed independently from each other, then the DCT of these blocks is taken, and the resulting DCT coefficients are quantized. This process can cause blocking artifacts, primarily at high data compression ratios. It can also cause the "mosquito noise" effect, commonly found in digital video (such as the MPEG formats).
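A minimal sketch of the lossy quantization step on a single row of samples. The sample values and the single uniform step size are illustrative assumptions; real codecs apply a full matrix of per-coefficient steps:

```python
import math

def dct_ii(x):
    """Unnormalized type-II DCT."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    """Type-III DCT scaled by 2/N, the exact inverse of dct_ii above."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                            for k in range(1, N))) * 2 / N
            for n in range(N)]

samples = [52, 55, 61, 66, 70, 61, 64, 73]     # one row of 8 image samples
step = 40                                       # a deliberately coarse step
coeffs = dct_ii(samples)
quantized = [round(c / step) * step for c in coeffs]   # the lossy operation
rebuilt = idct(quantized)
print([round(v, 1) for v in rebuilt])
# The rebuilt row only approximates the original; neighboring blocks make
# independent errors, and the mismatch at their shared edges is what reads
# as blockiness in a heavily compressed image.
```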
The artist Rosa Menkman makes use of DCT-based compression artifacts in her glitch art, particularly the DCT blocks found in most digital media formats such as JPEG digital images and MP3 digital audio. Another example is ''Jpegs'' by German photographer Thomas Ruff, which uses intentional JPEG artifacts as the basis of the picture's style.

# Informal overview

Like any Fourier-related transform, discrete cosine transforms (DCTs) express a function or a signal in terms of a sum of sinusoids with different frequencies and amplitudes. Like the discrete Fourier transform (DFT), a DCT operates on a function at a finite number of discrete data points. The obvious distinction between a DCT and a DFT is that the former uses only cosine functions, while the latter uses both cosines and sines (in the form of complex exponentials). However, this visible difference is merely a consequence of a deeper distinction: a DCT implies different boundary conditions from the DFT or other related transforms. The Fourier-related transforms that operate on a function over a finite domain, such as the DFT or DCT or a Fourier series, can be thought of as implicitly defining an ''extension'' of that function outside the domain. That is, once you write a function $f(x)$ as a sum of sinusoids, you can evaluate that sum at any $x$, even for $x$ where the original $f(x)$ was not specified. The DFT, like the Fourier series, implies a periodic extension of the original function.
A DCT, like a cosine transform, implies an even extension of the original function. However, because DCTs operate on ''finite'', ''discrete'' sequences, two issues arise that do not apply for the continuous cosine transform. First, one has to specify whether the function is even or odd at ''both'' the left and right boundaries of the domain (i.e. the min-''n'' and max-''n'' boundaries in the definitions below, respectively). Second, one has to specify around ''what point'' the function is even or odd. In particular, consider a sequence ''abcd'' of four equally spaced data points, and say that we specify an even ''left'' boundary. There are two sensible possibilities: either the data are even about the sample ''a'', in which case the even extension is ''dcbabcd'', or the data are even about the point ''halfway'' between ''a'' and the previous point, in which case the even extension is ''dcbaabcd'' (''a'' is repeated). These choices lead to all the standard variations of DCTs and also discrete sine transforms (DSTs). Each boundary can be either even or odd (2 choices per boundary) and can be symmetric about a data point or the point halfway between two data points (2 choices per boundary), for a total of 2 × 2 × 2 × 2 = 16 possibilities. Half of these possibilities, those where the ''left'' boundary is even, correspond to the 8 types of DCT; the other half are the 8 types of DST. These different boundary conditions strongly affect the applications of the transform and lead to uniquely useful properties for the various DCT types. Most directly, when using Fourier-related transforms to solve partial differential equations by spectral methods, the boundary conditions are directly specified as a part of the problem being solved. Or, for the MDCT (based on the type-IV DCT), the boundary conditions are intimately involved in the MDCT's critical property of time-domain aliasing cancellation. In a more subtle fashion, the boundary conditions are responsible for the "energy compactification" properties that make DCTs useful for image and audio compression, because the boundaries affect the rate of convergence of any Fourier-like series. In particular, it is well known that any discontinuities in a function reduce the rate of convergence of the Fourier series, so that more sinusoids are needed to represent the function with a given accuracy. The same principle governs the usefulness of the DFT and other transforms for signal compression; the smoother a function is, the fewer terms in its DFT or DCT are required to represent it accurately, and the more it can be compressed. (Here, we think of the DFT or DCT as approximations for the Fourier series or cosine series of a function, respectively, in order to talk about its "smoothness".)
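Before moving on, the two "even left boundary" choices discussed above can be spelled out in a few lines (an added illustration; the array [1, 2, 3, 4] stands in for the sequence ''abcd''):

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0])            # the sequence "abcd"

# Even about the sample a itself (whole-sample symmetry, DCT-I style):
ext_point = np.concatenate([data[:0:-1], data])  # d c b a b c d
# Even about the point halfway before a (half-sample symmetry, DCT-II style):
ext_half = np.concatenate([data[::-1], data])    # d c b a a b c d

print(ext_point)   # [4. 3. 2. 1. 2. 3. 4.]
print(ext_half)    # [4. 3. 2. 1. 1. 2. 3. 4.]
```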
However, the implicit periodicity of the DFT means that discontinuities usually occur at the boundaries: any random segment of a signal is unlikely to have the same value at both the left and right boundaries. (A similar problem arises for the DST, in which the odd left boundary condition implies a discontinuity for any function that does not happen to be zero at that boundary.) In contrast, a DCT where ''both'' boundaries are even ''always'' yields a continuous extension at the boundaries (although the slope is generally discontinuous). This is why DCTs, and in particular DCTs of types I, II, V, and VI (the types that have two even boundaries) generally perform better for signal compression than DFTs and DSTs. In practice, a type-II DCT is usually preferred for such applications, in part for reasons of computational convenience.

# Formal definition

Formally, the discrete cosine transform is a linear, invertible function $f : \R^{N} \to \R^{N}$ (where $\R$ denotes the set of real numbers), or equivalently an invertible $N \times N$ square matrix. There are several variants of the DCT with slightly modified definitions. The $N$ real numbers $x_0,\ \ldots,\ x_{N-1}$ are transformed into the $N$ real numbers $X_0,\ \ldots,\ X_{N-1}$ according to one of the formulas:

## DCT-I

:$X_k = \frac{1}{2} \left(x_0 + (-1)^k x_{N-1}\right) + \sum_{n=1}^{N-2} x_n \cos \left[\, \frac{\pi}{N-1} \, n \, k \,\right] \qquad \text{for} ~ k = 0,\ \ldots,\ N-1 ~.$

Some authors further multiply the $x_0$ and $x_{N-1}$ terms by $\sqrt{2}$, and correspondingly multiply the $X_0$ and $X_{N-1}$ terms by $1/\sqrt{2}$, which makes the DCT-I matrix orthogonal if one further multiplies by an overall scale factor of $\sqrt{\tfrac{2}{N-1}}$, but breaks the direct correspondence with a real-even DFT. The DCT-I is exactly equivalent (up to an overall scale factor of 2) to a DFT of $2(N-1)$ real numbers with even symmetry. For example, a DCT-I of $N = 5$ real numbers $abcde$ is exactly equivalent to a DFT of eight real numbers with even symmetry, divided by two. (In contrast, DCT types II–IV involve a half-sample shift in the equivalent DFT.) Note, however, that the DCT-I is not defined for $N$ less than 2. (All other DCT types are defined for any positive $N$.) Thus, the DCT-I corresponds to the boundary conditions: $x_n$ is even around $n = 0$ and even around $n = N - 1$; similarly for $X_k$.

## DCT-II

:$X_k = \sum_{n=0}^{N-1} x_n \cos \left[\, \tfrac{\pi}{N} \left( n + \tfrac{1}{2} \right) k \, \right] \qquad \text{for} ~ k = 0,\ \dots,\ N-1 ~.$

The DCT-II is probably the most commonly used form, and is often simply referred to as "the DCT". This transform is exactly equivalent (up to an overall scale factor of 2) to a DFT of $4N$ real inputs of even symmetry where the even-indexed elements are zero. That is, it is half of the DFT of the $4N$ inputs $y_n$, where $y_{2n} = 0$, $y_{2n+1} = x_n$ for $0 \leq n < N$, $y_{2N} = 0$, and $y_{4N-n} = y_n$ for $0 < n < 2N$. A DCT-II transformation is also possible using a $2N$ signal followed by a multiplication by a half shift; this is demonstrated by Makhoul.
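The equivalence just stated between the DCT-I and a real-even DFT of length 2(N − 1) is easy to verify numerically; the sketch below (added here, with arbitrary input values) mirrors the five samples into an even-symmetric sequence of eight and compares half of its DFT with a direct evaluation of the DCT-I formula:

```python
import numpy as np

def dct1(x):
    # Direct evaluation of the (unnormalised) DCT-I formula above.
    N = len(x)
    k = np.arange(N)
    inner = np.cos(np.pi * np.outer(k, np.arange(1, N - 1)) / (N - 1)) @ x[1:-1]
    return 0.5 * (x[0] + (-1.0) ** k * x[-1]) + inner

x = np.array([1.0, 2.0, 3.0, 5.0, 7.0])     # "abcde", N = 5
mirror = np.concatenate([x, x[-2:0:-1]])    # a b c d e d c b, length 2(N-1) = 8
dft = np.fft.fft(mirror).real / 2.0         # real-even DFT, divided by two
assert np.allclose(dct1(x), dft[:len(x)])
```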
Some authors further multiply the $X_0$ term by $1/\sqrt{2}$ and multiply the resulting matrix by an overall scale factor of $\sqrt{\tfrac{2}{N}}$ (see below for the corresponding change in DCT-III). This makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input. This is the normalization used by Matlab, for example. In many applications, such as JPEG, the scaling is arbitrary because scale factors can be combined with a subsequent computational step (e.g. the quantization step in JPEG), and a scaling can be chosen that allows the DCT to be computed with fewer multiplications. The DCT-II implies the boundary conditions: $x_n$ is even around $n = -1/2$ and even around $n = N - 1/2$; $X_k$ is even around $k = 0$ and odd around $k = N$.

## DCT-III

:$X_k = \tfrac{1}{2} x_0 + \sum_{n=1}^{N-1} x_n \cos \left[\, \tfrac{\pi}{N} \, n \left( k + \tfrac{1}{2} \right) \,\right] \qquad \text{for} ~ k = 0,\ \dots,\ N-1 ~.$

Because it is the inverse of DCT-II (up to a scale factor, see below), this form is sometimes simply referred to as "the inverse DCT" ("IDCT"). Some authors divide the $x_0$ term by $\sqrt{2}$ instead of by 2 (resulting in an overall $x_0/\sqrt{2}$ term) and multiply the resulting matrix by an overall scale factor of $\sqrt{\tfrac{2}{N}}$ (see above for the corresponding change in DCT-II), so that the DCT-II and DCT-III are transposes of one another. This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output. The DCT-III implies the boundary conditions: $x_n$ is even around $n = 0$ and odd around $n = N$; $X_k$ is even around $k = -1/2$ and even around $k = N - 1/2$.

## DCT-IV

:$X_k = \sum_{n=0}^{N-1} x_n \cos \left[\, \tfrac{\pi}{N} \, \left( n + \tfrac{1}{2} \right) \left( k + \tfrac{1}{2} \right) \,\right] \qquad \text{for} ~ k = 0,\ \ldots,\ N-1 ~.$

The DCT-IV matrix becomes orthogonal (and thus, being clearly symmetric, its own inverse) if one further multiplies by an overall scale factor of $\sqrt{\tfrac{2}{N}}$. A variant of the DCT-IV, where data from different transforms are ''overlapped'', is called the modified discrete cosine transform (MDCT). The DCT-IV implies the boundary conditions: $x_n$ is even around $n = -1/2$ and odd around $n = N - 1/2$; similarly for $X_k$.

## DCT V-VIII

DCTs of types I–IV treat both boundaries consistently regarding the point of symmetry: they are even/odd around either a data point for both boundaries or halfway between two data points for both boundaries. By contrast, DCTs of types V–VIII imply boundaries that are even/odd around a data point for one boundary and halfway between two data points for the other boundary.
In other words, DCT types I–IV are equivalent to real-even DFTs of even order (regardless of whether $N$ is even or odd), since the corresponding DFT is of length $2(N-1)$ (for DCT-I) or $4N$ (for DCT-II & III) or $8N$ (for DCT-IV). The four additional types of discrete cosine transform correspond essentially to real-even DFTs of logically odd order, which have factors of $N \pm \tfrac{1}{2}$ in the denominators of the cosine arguments. However, these variants seem to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs (e.g. the simplest radix-2 algorithms are only for even lengths), and this increased intricacy carries over to the DCTs as described below. (The trivial real-even array, a length-one DFT (odd length) of a single number, corresponds to a DCT-V of length $N = 1$.)

# Inverse transforms

Using the normalization conventions above, the inverse of DCT-I is DCT-I multiplied by 2/(''N'' − 1). The inverse of DCT-IV is DCT-IV multiplied by 2/''N''. The inverse of DCT-II is DCT-III multiplied by 2/''N'' and vice versa. Like for the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by $\sqrt{\tfrac{2}{N}}$ so that the inverse does not require any additional multiplicative factor. Combined with appropriate factors of $\sqrt{2}$ (see above), this can be used to make the transform matrix orthogonal.

# Multidimensional DCTs

Multidimensional variants of the various DCT types follow straightforwardly from the one-dimensional definitions: they are simply a separable product (equivalently, a composition) of DCTs along each dimension.

## M-D DCT-II

For example, a two-dimensional DCT-II of an image or a matrix is simply the one-dimensional DCT-II, from above, performed along the rows and then along the columns (or vice versa). That is, the 2D DCT-II is given by the formula (omitting normalization and other scale factors, as above):

:$\begin{align} X_{k_1,k_2} &= \sum_{n_1=0}^{N_1-1} \left( \sum_{n_2=0}^{N_2-1} x_{n_1,n_2} \cos \left[\frac{\pi}{N_2} \left(n_2+\frac{1}{2}\right) k_2 \right]\right) \cos \left[\frac{\pi}{N_1} \left(n_1+\frac{1}{2}\right) k_1 \right]\\ &= \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} x_{n_1,n_2} \cos \left[\frac{\pi}{N_1} \left(n_1+\frac{1}{2}\right) k_1 \right] \cos \left[\frac{\pi}{N_2} \left(n_2+\frac{1}{2}\right) k_2 \right] . \end{align}$

The inverse of a multi-dimensional DCT is just a separable product of the inverses of the corresponding one-dimensional DCTs (see above), e.g. the one-dimensional inverses applied along one dimension at a time in a row-column algorithm.
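As an added illustration of the row-column algorithm, the sketch below computes the 2D DCT-II as two matrix products and spot-checks one coefficient against the double-sum formula above (the 8 × 16 size and the checked indices are arbitrary):

```python
import numpy as np

def dct2_matrix_plain(N):
    # Unnormalised 1-D DCT-II matrix, matching the formulas above.
    n = np.arange(N)
    return np.cos(np.pi / N * np.outer(n, n + 0.5))

rng = np.random.default_rng(2)
img = rng.standard_normal((8, 16))           # N1 = 8, N2 = 16
C1, C2 = dct2_matrix_plain(8), dct2_matrix_plain(16)
coeffs = C1 @ img @ C2.T                     # 1-D DCTs along columns, then rows

# Direct evaluation of the double sum for one coefficient (k1, k2) = (3, 5):
n1, n2 = np.arange(8), np.arange(16)
direct = (img * np.outer(np.cos(np.pi / 8 * (n1 + 0.5) * 3),
                         np.cos(np.pi / 16 * (n2 + 0.5) * 5))).sum()
assert np.isclose(coeffs[3, 5], direct)
```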
The ''3-D DCT-II'' is simply the extension of the ''2-D DCT-II'' to three-dimensional space and mathematically can be calculated by the formula

:$X_{k_1,k_2,k_3} = \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} \sum_{n_3=0}^{N_3-1} x_{n_1,n_2,n_3} \cos \left[\frac{\pi}{N_1} \left(n_1+\frac{1}{2}\right) k_1 \right] \cos \left[\frac{\pi}{N_2} \left(n_2+\frac{1}{2}\right) k_2 \right] \cos \left[\frac{\pi}{N_3} \left(n_3+\frac{1}{2}\right) k_3 \right],\quad \text{for } k_i = 0,1,2,\dots,N_i-1.$

The inverse of the 3-D DCT-II is the 3-D DCT-III and can be computed from the formula given by

:$x_{n_1,n_2,n_3} = \sum_{k_1=0}^{N_1-1} \sum_{k_2=0}^{N_2-1} \sum_{k_3=0}^{N_3-1} X_{k_1,k_2,k_3} \cos \left[\frac{\pi}{N_1} \left(n_1+\frac{1}{2}\right) k_1 \right] \cos \left[\frac{\pi}{N_2} \left(n_2+\frac{1}{2}\right) k_2 \right] \cos \left[\frac{\pi}{N_3} \left(n_3+\frac{1}{2}\right) k_3 \right],\quad \text{for } n_i=0,1,2,\dots,N_i-1.$

Technically, computing a two-, three- (or multi-) dimensional DCT by sequences of one-dimensional DCTs along each dimension is known as a ''row-column'' algorithm. As with multidimensional FFT algorithms, however, there exist other methods to compute the same thing while performing the computations in a different order (i.e. interleaving/combining the algorithms for the different dimensions). Owing to the rapid growth in the applications based on the 3-D DCT, several fast algorithms have been developed for the computation of the 3-D DCT-II. Vector-radix algorithms are applied for computing the M-D DCT to reduce the computational complexity and to increase the computational speed. To compute the 3-D DCT-II efficiently, a fast algorithm, the Vector-Radix Decimation in Frequency (VR DIF) algorithm, was developed.

### 3-D DCT-II VR DIF

In order to apply the VR DIF algorithm, the input data is to be formulated and rearranged as follows. The transform size ''N × N × N'' is assumed to be a power of 2.

:$\begin{align}\tilde{x}(n_1,n_2,n_3) &=x(2n_1,2n_2,2n_3)\\ \tilde{x}(n_1,n_2,N-n_3-1)&=x(2n_1,2n_2,2n_3+1)\\ \tilde{x}(n_1,N-n_2-1,n_3)&=x(2n_1,2n_2+1,2n_3)\\ \tilde{x}(n_1,N-n_2-1,N-n_3-1)&=x(2n_1,2n_2+1,2n_3+1)\\ \tilde{x}(N-n_1-1,n_2,n_3)&=x(2n_1+1,2n_2,2n_3)\\ \tilde{x}(N-n_1-1,n_2,N-n_3-1)&=x(2n_1+1,2n_2,2n_3+1)\\ \tilde{x}(N-n_1-1,N-n_2-1,n_3)&=x(2n_1+1,2n_2+1,2n_3)\\ \tilde{x}(N-n_1-1,N-n_2-1,N-n_3-1)&=x(2n_1+1,2n_2+1,2n_3+1) \end{align}$

:where $0\leq n_1,n_2,n_3 \leq \frac{N}{2} -1$

The adjacent figure shows the four stages that are involved in calculating the 3-D DCT-II using the VR DIF algorithm. The first stage is the 3-D reordering using the index mapping illustrated by the above equations. The second stage is the butterfly calculation. Each butterfly calculates eight points together as shown in the figure just below, where $c(\varphi_i)=\cos(\varphi_i)$.
The original 3-D DCT-II now can be written as

:$X(k_1,k_2,k_3)=\sum_{n_1=0}^{N-1}\sum_{n_2=0}^{N-1}\sum_{n_3=0}^{N-1}\tilde{x}(n_1,n_2,n_3) \cos(\varphi_1 k_1)\cos(\varphi_2 k_2)\cos(\varphi_3 k_3)$

where $\varphi_i= \frac{\pi}{2N}(4n_i+1),\ \text{for } i= 1,2,3.$

If the even and the odd parts of $k_1,k_2$ and $k_3$ are considered, the general formula for the calculation of the 3-D DCT-II can be expressed as

:$X(k_1,k_2,k_3)=\sum_{n_1=0}^{\frac{N}{2}-1}\sum_{n_2=0}^{\frac{N}{2}-1}\sum_{n_3=0}^{\frac{N}{2}-1}\tilde{x}_{ijl}(n_1,n_2,n_3) \cos(\varphi_1 (2k_1+i))\cos(\varphi_2 (2k_2+j)) \cos(\varphi_3 (2k_3+l))$

where

:$\begin{align} \tilde{x}_{ijl}(n_1,n_2,n_3)=\;&\tilde{x}(n_1,n_2,n_3)+(-1)^l\tilde{x}\left(n_1,n_2,n_3+\frac{N}{2}\right)\\ &+(-1)^j\tilde{x}\left(n_1,n_2+\frac{N}{2},n_3\right)+(-1)^{j+l}\tilde{x}\left(n_1,n_2+\frac{N}{2},n_3+\frac{N}{2}\right)\\ &+(-1)^i\tilde{x}\left(n_1+\frac{N}{2},n_2,n_3\right)+(-1)^{i+j}\tilde{x}\left(n_1+\frac{N}{2},n_2+\frac{N}{2},n_3\right)\\ &+(-1)^{i+l}\tilde{x}\left(n_1+\frac{N}{2},n_2,n_3+\frac{N}{2}\right)\\ &+(-1)^{i+j+l}\tilde{x}\left(n_1+\frac{N}{2},n_2+\frac{N}{2},n_3+\frac{N}{2}\right) \qquad \text{where } i,j,l= 0 \text{ or } 1.\end{align}$

#### Arithmetic complexity

The whole 3-D DCT calculation needs $\left[\log_2 N\right]$ stages, and each stage involves $\left[\tfrac{1}{8} N^3\right]$ butterflies. The whole 3-D DCT requires $\left[ \tfrac{1}{8} N^3 \log_2 N \right]$ butterflies to be computed. Each butterfly requires seven real multiplications (including trivial multiplications) and 24 real additions (including trivial additions). Therefore, the total number of real multiplications needed for this stage is $\left[ \tfrac{7}{8} N^3 \log_2 N \right]$, and the total number of real additions, i.e. including the post-additions (recursive additions) which can be calculated directly after the butterfly stage or after the bit-reverse stage, is given by $\left[\tfrac{9}{2}N^3 \log_2N-3N^3+3N^2\right]$. The conventional method to calculate the MD-DCT-II is the Row-Column-Frame (RCF) approach, which is computationally complex and less productive on most advanced recent hardware platforms. The numbers of multiplications and additions involved in the RCF approach are given by $\left[\tfrac{3}{2}N^3 \log_2 N \right]$ and $\left[\tfrac{9}{2}N^3 \log_2 N - 3N^3 + 3N^2 \right]$, respectively. From Table 1, it can be seen that the total number of multiplications associated with the 3-D DCT VR algorithm is less than that associated with the RCF approach by more than 40%. In addition, the RCF approach involves matrix transposition and more indexing and data swapping than the new VR algorithm. This makes the 3-D DCT VR algorithm more efficient and better suited for 3-D applications that involve the 3-D DCT-II, such as video compression and other 3-D image processing applications. The main consideration in choosing a fast algorithm is to avoid computational and structural complexities.
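Taking the operation counts above at face value, a short added sketch tabulates the saving of the VR DIF algorithm over the RCF approach; the ratio (7/8)/(3/2) ≈ 0.583 is independent of N, consistent with the "more than 40 %" figure:

```python
import numpy as np

def vr_dif_mults(N):
    # Real multiplications for the 3-D DCT-II VR DIF algorithm (count quoted above).
    return 7.0 / 8.0 * N**3 * np.log2(N)

def rcf_mults(N):
    # Real multiplications for the row-column-frame approach (count quoted above).
    return 3.0 / 2.0 * N**3 * np.log2(N)

for N in (8, 16, 32, 64):
    saving = 1.0 - vr_dif_mults(N) / rcf_mults(N)
    print(f"N = {N:2d}: {saving:.1%} fewer real multiplications")   # ~41.7% each
```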
As the technology of computers and DSPs advances, the execution time of arithmetic operations (multiplications and additions) is becoming very fast, and regular computational structure becomes the most important factor. Therefore, although the above proposed 3-D VR algorithm does not achieve the theoretical lower bound on the number of multiplications, it has a simpler computational structure as compared to other 3-D DCT algorithms. It can be implemented in place using a single butterfly and possesses the properties of the Cooley–Tukey FFT algorithm in 3-D. Hence, the 3-D VR presents a good choice for reducing arithmetic operations in the calculation of the 3-D DCT-II, while keeping the simple structure that characterizes butterfly-style Cooley–Tukey FFT algorithms.

The image to the right shows a combination of horizontal and vertical frequencies for an $(N_1 = N_2 = 8)$ two-dimensional DCT. Each step from left to right and top to bottom is an increase in frequency by 1/2 cycle. For example, moving right one from the top-left square yields a half-cycle increase in the horizontal frequency. Another move to the right yields two half-cycles. A move down yields two half-cycles horizontally and a half-cycle vertically. The source data is transformed to a linear combination of these 64 frequency squares.

## MD-DCT-IV

The M-D DCT-IV is just an extension of the 1-D DCT-IV onto an M-dimensional domain. The 2-D DCT-IV of a matrix or an image is given by

:$X_{k,\ell} = \sum_{n=0}^{N-1} \; \sum_{m=0}^{M-1} \ x_{n,m} \cos\left(\ \frac{(2n+1)(2k+1)\pi}{4N} \ \right) \cos\left(\ \frac{(2m+1)(2\ell+1)\pi}{4M} \ \right) ~,$

:for $k = 0,\ 1,\ 2,\ \ldots,\ N-1$ and $\ell= 0,\ 1,\ 2,\ \ldots,\ M-1 ~.$

We can compute the M-D DCT-IV using the regular row-column method, or we can use the polynomial transform method for fast and efficient computation. The main idea of this algorithm is to use the polynomial transform to convert the multidimensional DCT into a series of 1-D DCTs directly. The M-D DCT-IV also has several applications in various fields.

# Computation

Although the direct application of these formulas would require $\mathcal{O}(N^2)$ operations, it is possible to compute the same thing with only $\mathcal{O}(N \log N)$ complexity by factorizing the computation similarly to the fast Fourier transform (FFT). One can also compute DCTs via FFTs combined with $\mathcal{O}(N)$ pre- and post-processing steps. In general, $\mathcal{O}(N \log N)$ methods to compute DCTs are known as fast cosine transform (FCT) algorithms. The most efficient algorithms, in principle, are usually those that are specialized directly for the DCT, as opposed to using an ordinary FFT plus $\mathcal{O}(N)$ extra operations (see below for an exception). However, even "specialized" DCT algorithms (including all of those that achieve the lowest known arithmetic counts, at least for power-of-two sizes) are typically closely related to FFT algorithms – since DCTs are essentially DFTs of real-even data, one can design a fast DCT algorithm by taking an FFT and eliminating the redundant operations due to this symmetry. This can even be done automatically.
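The remark that DCTs are essentially DFTs of real-even data can be demonstrated directly. The added sketch below computes the DCT-II as the first N bins of a length-4N FFT of the zero-interleaved, even-symmetric sequence y described in the DCT-II section above; it shows the plain equivalence, not the optimised radix-4/split-radix factorisation discussed next:

```python
import numpy as np

def dct2_via_fft(x):
    # Build y of length 4N with y[2n+1] = x[n], zeros at even indices, and
    # even symmetry y[4N - n] = y[n]; the DCT-II is half the (real) DFT.
    N = len(x)
    y = np.zeros(4 * N)
    y[1:2 * N:2] = x
    y[2 * N + 1:] = y[2 * N - 1:0:-1]
    return np.fft.fft(y).real[:N] / 2.0

x = np.random.default_rng(3).standard_normal(8)
n = np.arange(8)
direct = np.array([(x * np.cos(np.pi / 8 * (n + 0.5) * k)).sum() for k in range(8)])
assert np.allclose(dct2_via_fft(x), direct)
```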
Algorithms based on the Cooley–Tukey FFT algorithm are most common, but any other FFT algorithm is also applicable. For example, the Winograd FFT algorithm leads to minimal-multiplication algorithms for the DFT, albeit generally at the cost of more additions, and a similar algorithm has been proposed for the DCT. Because the algorithms for DFTs, DCTs, and similar transforms are all so closely related, any improvement in algorithms for one transform will theoretically lead to immediate gains for the other transforms as well. While DCT algorithms that employ an unmodified FFT often have some theoretical overhead compared to the best specialized DCT algorithms, the former also have a distinct advantage: highly optimized FFT programs are widely available. Thus, in practice, it is often easier to obtain high performance for general lengths with FFT-based algorithms. Specialized DCT algorithms, on the other hand, see widespread use for transforms of small, fixed sizes such as the DCT-II used in JPEG compression, or the small DCTs (or MDCTs) typically used in audio compression. (Reduced code size may also be a reason to use a specialized DCT for embedded-device applications.)

In fact, even the DCT algorithms using an ordinary FFT are sometimes equivalent to pruning the redundant operations from a larger FFT of real-symmetric data, and they can even be optimal from the perspective of arithmetic counts. For example, a type-II DCT is equivalent to a DFT of size $4N$ with real-even symmetry whose even-indexed elements are zero. One of the most common methods for computing this via an FFT (e.g. the method used in FFTPACK and FFTW) has long been described in the literature, and in hindsight it can be seen as one step of a radix-4 decimation-in-time Cooley–Tukey algorithm applied to the "logical" real-even DFT corresponding to the DCT-II. Because the even-indexed elements are zero, this radix-4 step is exactly the same as a split-radix step. If the subsequent size-$N$ real-data FFT is also performed by a real-data split-radix algorithm, then the resulting algorithm actually matches what was long the lowest published arithmetic count for the power-of-two DCT-II ($2 N \log_2 N - N + 2$ real-arithmetic operations). A recent reduction in the operation count to $\tfrac{17}{9} N \log_2 N + \mathcal{O}(N)$ also uses a real-data FFT. So, there is nothing intrinsically bad about computing the DCT via an FFT from an arithmetic perspective – it is sometimes merely a question of whether the corresponding FFT algorithm is optimal. (As a practical matter, the function-call overhead in invoking a separate FFT routine might be significant for small $N$, but this is an implementation rather than an algorithmic question, since it can be solved by unrolling or inlining.)

# Example of IDCT

Consider this 8×8 grayscale image of the capital letter A. [The original page shows the source image, its DCT coefficients, and the weighted basis functions here.] Each basis function is multiplied by its coefficient and then this product is added to the final image; a sketch of this reconstruction is given after the list below.

* Discrete wavelet transform
* JPEG (contains a potentially easier-to-understand example of DCT transformation)
* List of Fourier-related transforms
* Modified discrete cosine transform
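A rough added sketch of that term-by-term reconstruction (the 8 × 8 glyph below is a crude stand-in for the letter-A image, and keeping the 16 strongest of the 64 terms is an arbitrary choice):

```python
import numpy as np

def dct2_matrix(N=8):
    # Orthonormal DCT-II matrix, as in the earlier sketches.
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi / N * np.outer(n, n + 0.5))
    C[0, :] /= np.sqrt(2.0)
    return C

C = dct2_matrix(8)
img = np.zeros((8, 8))                  # crude stand-in glyph
img[1:7, 2] = img[1:7, 5] = 1.0
img[3, 3:5] = 1.0
coeffs = C @ img @ C.T

# Add one weighted 2-D cosine basis function at a time, largest terms first.
order = np.argsort(-np.abs(coeffs), axis=None)
partial = np.zeros((8, 8))
for idx in order[:16]:
    k, l = divmod(int(idx), 8)
    partial += coeffs[k, l] * np.outer(C[k], C[l])
print(np.abs(partial - img).max())      # already a fair approximation
```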
[ null, "https://upload.wikimedia.org/wikipedia/commons/0/06/Cosine.svg", null, "https://upload.wikimedia.org/wikipedia/commons/7/76/Frequency.png", null, "https://upload.wikimedia.org/wikipedia/commons/1/15/Signal_processing.png", null, "https://upload.wikimedia.org/wikipedia/commons/2/24/Joint_Photographic_Experts_Group_logo.svg", null, "https://upload.wikimedia.org/wikipedia/commons/d/df/MPEG.png", null, "https://upload.wikimedia.org/wikipedia/commons/e/ea/Mp3.svg", null, "https://upload.wikimedia.org/wikipedia/commons/d/d6/HDTV.jpg", null, "https://upload.wikimedia.org/wikipedia/commons/3/37/Sine.png", null, "https://upload.wikimedia.org/wikipedia/commons/7/76/Pixels.jpg", null, "https://upload.wikimedia.org/wikipedia/commons/2/2c/Compression_artifacts.svg", null, "https://upload.wikimedia.org/wikipedia/commons/8/8d/Nasir_Ahmed.png", null, "https://upload.wikimedia.org/wikipedia/commons/9/9f/National_Science_Foundation.jpg", null, "https://upload.wikimedia.org/wikipedia/commons/2/24/Joint_Photographic_Experts_Group_logo.svg", null, "https://upload.wikimedia.org/wikipedia/commons/d/de/Pixel.jpg", null, "https://upload.wikimedia.org/wikipedia/commons/e/ea/Mp3.svg", null, "https://upload.wikimedia.org/wikipedia/commons/a/a4/University_of_New_Mexico.png", null, "https://upload.wikimedia.org/wikipedia/commons/1/15/Signal_processing.png", null, "https://upload.wikimedia.org/wikipedia/commons/3/34/Multimedia.jpg", null, "https://upload.wikimedia.org/wikipedia/commons/2/24/Joint_Photographic_Experts_Group_logo.svg", null, "https://upload.wikimedia.org/wikipedia/commons/d/df/MPEG.png", null, "https://upload.wikimedia.org/wikipedia/commons/6/67/Red_and_orange_DV_logo.jpg", null, "https://upload.wikimedia.org/wikipedia/commons/7/76/Pixels.jpg", null, "https://upload.wikimedia.org/wikipedia/commons/2/2c/Compression_artifacts.svg", null, "https://upload.wikimedia.org/wikipedia/commons/2/24/Joint_Photographic_Experts_Group_logo.svg", null, "https://upload.wikimedia.org/wikipedia/commons/d/df/MPEG.png", null, "https://upload.wikimedia.org/wikipedia/commons/2/24/Joint_Photographic_Experts_Group_logo.svg", null, "https://upload.wikimedia.org/wikipedia/commons/e/ea/Mp3.svg", null, "https://upload.wikimedia.org/wikipedia/commons/2/24/Joint_Photographic_Experts_Group_logo.svg", null, "https://upload.wikimedia.org/wikipedia/commons/2/24/Joint_Photographic_Experts_Group_logo.svg", null, "https://upload.wikimedia.org/wikipedia/commons/9/93/Stages_of_the_3-D_DCT-II_VR_DIF_algorithm.jpg", null, "https://upload.wikimedia.org/wikipedia/commons/3/31/Single_butterfly_of_the_3-D_DCT-II_VR_DIF_algorithm.jpg", null, "https://upload.wikimedia.org/wikipedia/commons/2/24/DCT-8x8.png", null, "https://upload.wikimedia.org/wikipedia/commons/2/24/Joint_Photographic_Experts_Group_logo.svg", null, "https://upload.wikimedia.org/wikipedia/commons/4/40/DCT_filter_comparison.png", null, "https://upload.wikimedia.org/wikipedia/commons/1/1a/Letter-a-8x8.png", null, "https://upload.wikimedia.org/wikipedia/commons/6/63/Dct-table.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90237516,"math_prob":0.9730812,"size":68566,"snap":"2022-05-2022-21","text_gpt3_token_len":14675,"char_repetition_ratio":0.15400659,"word_repetition_ratio":0.3329274,"special_character_ratio":0.20250562,"punctuation_ratio":0.12765266,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99111634,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,3,null,3,null,1,null,null,null,3,null,2,null,1,null,1,null,null,null,8,null,3,null,1,null,null,null,1,null,null,null,3,null,null,null,3,null,2,null,null,null,3,null,null,null,3,null,null,null,null,null,1,null,1,null,2,null,null,null,1,null,6,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-25T07:27:50Z\",\"WARC-Record-ID\":\"<urn:uuid:e6ed9cd0-b19e-4cd9-8ec0-e9a4e84c29c6>\",\"Content-Length\":\"112470\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a0a254d6-c3ec-47d0-b22f-e1cdcfbd551a>\",\"WARC-Concurrent-To\":\"<urn:uuid:27717789-f487-49d0-af3e-b7780014629b>\",\"WARC-IP-Address\":\"73.172.102.144\",\"WARC-Target-URI\":\"http://theinfolist.com/html/ALL/s/discrete_cosine_transform.html\",\"WARC-Payload-Digest\":\"sha1:N5SA24M5JSAFTQ4T4OWOHXJG4SAFJYKK\",\"WARC-Block-Digest\":\"sha1:ZLMQMVCTGVCNW6RBTRKOD7UTFJ4ZERZW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662580803.75_warc_CC-MAIN-20220525054507-20220525084507-00427.warc.gz\"}"}
https://questions.examside.com/past-years/jee/question/consider-an-ionic-solid-mx-with-nacl-structure-cons-2018-marks-3-wjznnwbmzqfnholq.htm
[ "1\nJEE Advanced 2018 Paper 1 Offline\nNumerical\n+3\n-0\nConsider an ionic solid $$MX$$ with $$NaCl$$ structure. Construct a new structure $$(Z)$$ whose unit cell is constructed from the unit cell of $$MX$$ following the sequential instructions given below. Neglect the charge balance.\n$$(i)\\,\\,\\,\\,\\,$$ Remove all the anions $$(X)$$ except the central one\n\n$$(ii)\\,\\,\\,\\,$$ Replace all the face centered cations $$(M)$$ by anions $$(X)$$\n\n$$(iii)\\,\\,$$ Remove all the corner cations $$(M)$$\n\n$$(iv)\\,\\,\\,\\,$$ Replace the central anion $$(X)$$ with cation $$(M)$$\n\nThe value of $$\\,\\,\\left( {{{number\\,\\,of\\,\\,anions} \\over {number\\,\\,of\\,\\,cations}}} \\right)\\,\\,$$ in $$Z$$ is ___________.\n2\nJEE Advanced 2017 Paper 1 Offline\nNumerical\n+3\n-0\nA crystalline solid of a pure substance has a face-centered cubic structure with a cell edge of $$400$$ $$pm.$$ If the density of the substance in the crystal is $$8$$ $$g\\,c{m^{ - 3}},$$ then the number of atoms present in $$256$$ $$g$$ of the crystal is $$N \\times {10^{24}}.$$ The value of $$N$$ is\n3\nIIT-JEE 2011 Paper 2 Offline\nNumerical\n+1\n-0\nThe number of hexagonal face that present in a truncated octahedron is\n4\nIIT-JEE 2009 Paper 2 Offline\nNumerical\n+3\n-1\n\nThe coordination number of Al in the crystalline state of AlCl$$_3$$ is ___________.\n\nPhysics\nMechanics\nElectricity\nOptics\nModern Physics\nChemistry\nPhysical Chemistry\nInorganic Chemistry\nOrganic Chemistry\nMathematics\nAlgebra\nTrigonometry\nCoordinate Geometry\nCalculus\nEXAM MAP\nMedical\nNEET" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.621006,"math_prob":0.9999894,"size":2906,"snap":"2023-40-2023-50","text_gpt3_token_len":809,"char_repetition_ratio":0.1333563,"word_repetition_ratio":0.2413793,"special_character_ratio":0.30110118,"punctuation_ratio":0.13429752,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999163,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T16:09:10Z\",\"WARC-Record-ID\":\"<urn:uuid:b6308c3a-6555-447d-bb32-cf5452dcb838>\",\"Content-Length\":\"311998\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5f57739e-9403-460d-b4bf-9aff6b159f8e>\",\"WARC-Concurrent-To\":\"<urn:uuid:dee783c2-0358-471c-93ee-ec65eb18a108>\",\"WARC-IP-Address\":\"104.21.4.127\",\"WARC-Target-URI\":\"https://questions.examside.com/past-years/jee/question/consider-an-ionic-solid-mx-with-nacl-structure-cons-2018-marks-3-wjznnwbmzqfnholq.htm\",\"WARC-Payload-Digest\":\"sha1:I25YPMU5XHW2YMFKDW3NSUPMSP4LQ53V\",\"WARC-Block-Digest\":\"sha1:S4V4PPMJ3CLH2CEK5EKHOKSXPA6J64BS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679515260.97_warc_CC-MAIN-20231211143258-20231211173258-00498.warc.gz\"}"}
https://dokumen.site/download/calculation-of-short-circuit-currents-a5b39efb311fb9
[ "# Calculation of short-circuit currents\n\nJune 28, 2018 | Author: Qaz Wsx | Category: Documents\n\n#### Description\n\nCollection Technique ..........................................................................\n\nCahier technique no. 158 Calculation of short-circuit currents\n\nB. De Metz-Noblat F. Dumas G. Thomasset\n\n\"Cahiers Techniques\" is a collection of documents intended for engineers and technicians, people in the industry who are looking for more in-depth information in order to complement that given in product catalogues. Furthermore, these \"Cahiers Techniques\" are often considered as helpful \"tools\" for training courses. They provide knowledge on new technical and technological developments in the electrotechnical field and electronics. They also provide better understanding of various phenomena observed in electrical installations, systems and equipments. Each \"Cahier Technique\" provides an in-depth study of a precise subject in the fields of electrical networks, protection devices, monitoring and control and industrial automation systems. The latest publications can be downloaded from the Schneider Electric internet web site. Code: http://www.schneider-electric.com Section: Experts' place Please contact your Schneider Electric representative if you want either a \"Cahier Technique\" or the list of available titles. The \"Cahiers Techniques\" collection is part of the Schneider Electric’s \"Collection technique\".\n\nForeword The author disclaims all responsibility subsequent to incorrect use of information or diagrams reproduced in this document, and cannot be held responsible for any errors or oversights, or for the consequences of using information and diagrams contained in this document. Reproduction of all or part of a \"Cahier Technique\" is authorised with the prior consent of the Scientific and Technical Division. The statement \"Extracted from Schneider Electric \"Cahier Technique\" no. .....\" (please specify) is compulsory.\n\nno. 158 Calculation of short-circuit currents\n\nBenoît de METZ-NOBLAT Graduate engineer from ESE (Ecole Supérieure d’Electricité), he worked for Saint-Gobain first as a research engineer, then in maintenance and new projects on a production site. He joined Schneider Electric in 1986, and is currently responsible for the Electrical System Analysis Group at the Research and Development Department.\n\nFrédéric DUMAS After graduating as doctor engineer from UTC (Université de Technologie de Compiègne) in 1993, he joined Schneider Electric. Works for the the Electrical System Analysis Group at the Research and Development Department, and is in charge of research projects in the field of industrial and distribution networks, including the development of software for electrotechnical calculations.\n\nGeorges THOMASSET A graduate engineer from IEG (Institut d’Electrotechnique de Grenoble) in 1971, has since conducted numerous research, design and implementation of complex industrial network projects in Merlin Gerin’s Technical Management Department. He has long and valuable experience as responsible of the technical section for the industrial unit in the Engineering and Contracting Department from 1984 to 1996. He is now responsible of the general technical development of offers at the Schneider Electric Applications and Services Activity.\n\nECT 158 updated June 2000\n\nLexicon\n\nAbbreviations Breaking capacity. BC MLVS Main low voltage switchboard. Symbols A Cross sectional area of conductors. 
α: Angle between the initiation of the fault and zero voltage.
c: Voltage factor.
cos ϕ: Power factor (in the absence of harmonics).
e: Electromotive force.
E: Electromotive force (maximum value).
ϕ: Phase angle (current with respect to voltage).
i: Instantaneous current.
ia: Alternating sinusoidal component of the instantaneous current.
idc: Aperiodic component of the instantaneous current.
ip: Maximum current value (first peak of the fault current).
I: Maximum r.m.s. current.
Ib: Short-circuit breaking current (IEC 60909).
Ik: Steady-state short-circuit current (IEC 60909).
Ik": Initial short-circuit current (IEC 60909).
Ir: Rated current of a generator.
Is: Service current.
Isc: Steady-state short-circuit current (Isc3 = three-phase, Isc2 = two-phase, …).
λ: Factor depending on the saturation inductance of a generator.
k and K: Constants (tables and graphs).
Ra: Equivalent resistance of the upstream network.
RL: Line resistance per unit length.
S: Cross-sectional area of conductors.
Sn: Transformer kVA rating.
Ssc: Short-circuit power.
tmin: Minimum dead time for short-circuit development, often equal to the time delay of a circuit breaker.
u: Instantaneous voltage.
usc: Transformer short-circuit voltage in %.
U: Network phase-to-phase voltage with no load.
Un: Network rated voltage with load.
x: Reactance, in %, of rotating machines.
Xa: Equivalent reactance of the upstream network.
XL: Line reactance per unit length.
Xsubt: Subtransient reactance, in %, of rotating machines.
ZL: Link impedance.
Zsc: Network upstream impedance for a three-phase fault.
Zup: Equivalent impedance of the upstream network.
Z(1), Z(2), Z(0): Positive-sequence, negative-sequence and zero-sequence impedances of a network or an element.

Calculation of short-circuit currents: In view of sizing an electrical installation and the required equipment, as well as determining the means required for the protection of life and property, short-circuit currents must be calculated for every point in the network. This "Cahier Technique" reviews the calculation methods for short-circuit currents as laid down by standards such as IEC 60909. It is intended for radial low-voltage (LV) and high-voltage (HV) circuits. The aim is to provide a further understanding of the calculation methods, essential when determining short-circuit currents, even when computerised methods are employed.

## Summary

1 Introduction (p. 4)
  1.1 The main types of short-circuits (p. 5)
  1.2 Establishing the short-circuit current (p. 7)
  1.3 Standardised Isc calculations (p. 10)
  1.4 Methods presented in this document (p. 11)
  1.5 Basic assumptions (p. 11)
2 Calculation of Isc by the impedance method
  2.1 Isc depending on the different types of short-circuit (p. 12)
  2.2 Determining the various short-circuit impedances (p. 13)
  2.3 Relationships between impedances at the different voltage levels in an installation (p. 18)
  2.4 Calculation example (p. 19)
3 Calculation of Isc values in a radial network using symmetrical components
  3.1 Advantages of this method (p. 23)
  3.2 Symmetrical components (p. 23)
  3.3 Calculation as defined by IEC 60909 (p. 24)
  3.4 Equations for the various currents (p. 26)
  3.5 Calculation example (p. 27)
4 Computerised calculations and conclusion (p. 31)
Bibliography (p. 32)

## 1 Introduction

Electrical installations almost always require protection against short-circuits wherever there is an electrical discontinuity.
This most often corresponds to points where there is a change in conductor cross-section. The short-circuit current must be calculated at each level in the installation in view of determining the characteristics of the equipment required to withstand or break the fault current.

The flow chart in figure 1 indicates the procedure for determining the various short-circuit currents and the resulting parameters for the different protection devices. In order to correctly select and adjust the protection devices, the graphs of figures 2, 3 and 4 are used. Two values of the short-circuit current must be evaluated:

- the maximum short-circuit current, used to determine:
  - the breaking capacity of the circuit breakers;
  - the making capacity of the circuit breakers;
  - the electrodynamic withstand capacity of the wiring system and switchgear.

The maximum short-circuit current corresponds to a short-circuit in the immediate vicinity of the downstream terminals of the protection device. It must be calculated accurately and used with a safety margin.

[Figure 1 is a flow chart running from the network data (upstream Ssc; HV / LV transformer rating and usc in %; power factor, coincidence factor, duty factor and foreseeable expansion factor; conductor characteristics — busbar length, width and thickness; cable insulation type, single-core or multicore design, length and cross-section; ambient temperature, installation method and number of contiguous circuits) down through the Isc at the transformer terminals, at the main LV switchboard outgoers, at the head of the secondary and final switchboards and at the end of the final outgoers, from which the breaking capacities and instantaneous trip settings of the main, distribution and final circuit breakers, the feeder current ratings and the voltage drops are derived.]
Fig. 1: short-circuit (Isc) calculation procedure when designing an electrical installation.

[Figure 2 plots, on time-current axes, the I²t = k²A² characteristic of a cable for two ambient temperatures, together with a fuse time-current curve.]
Fig. 2: the I²t characteristics of a conductor depending on the ambient temperature (the temperature of case 1, t1, being higher than the temperature of case 2, t2, the admissible current limit Iz1 is lower than the admissible current limit Iz2).

[Figure 3 shows the cable I²t characteristic and a circuit breaker time-current curve, with the design current IB, the rated current Ir, the admissible limit Iz and the breaking capacity BC marked.]
Fig. 3: circuit protection using a circuit breaker.

[Figure 4 shows the cable I²t characteristic and an aM fuse time-current curve, with IB, Ir and Iz marked.]
Fig. 4: circuit protection using an aM fuse.

- the minimum short-circuit current, essential when selecting the time-current curve for circuit breakers and fuses, in particular when:
  - cables are long and/or the source impedance is relatively high (generators, UPSs);
  - protection of life depends on circuit breaker or fuse operation, essentially the case for TN and IT electrical systems.

Note that the minimum short-circuit current corresponds to a short-circuit at the end of the protected line, generally phase-to-earth for LV and phase-to-phase for HV (neutral not distributed), under the least severe operating conditions (fault at the end of a feeder and not just downstream from a protection device, one transformer in service when two can be connected, etc.).
Note also that whatever the case, for whatever type of short-circuit current (minimum or maximum), the protection device must clear the short-circuit within a time tc that is compatible with the thermal stresses that can be withstood by the protected cable:

∫i² dt ≤ k²A² (see fig. 2, 3 and 4)

where A is the cross-sectional area of the conductors and k is a constant calculated on the basis of different correction factors for the cable installation method, contiguous circuits, etc. Further practical information may be found in the "Electrical Installation Guide" published by Schneider Electric (see the bibliography).

### 1.1 The main types of short-circuits

Various types of short-circuits can occur in electrical installations.

Characteristics of short-circuits. The primary characteristics are:
- duration (self-extinguishing, transient and steady-state);
- origin:
  - mechanical (break in a conductor, accidental electrical contact between two conductors via a foreign conducting body such as a tool or an animal);
  - internal or atmospheric overvoltages;
  - insulation breakdown due to heat, humidity or a corrosive environment;
- location (inside or outside a machine or an electrical switchboard).

Short-circuits can be:
- phase-to-earth (80 % of faults);
In a real network, the source impedance is made up of everything upstream of the short-circuit including the various networks with different voltages (HV, LV) and the series-connected wiring systems with different cross-sectional areas (A) and lengths. In figure 6 , when the switch is closed, the design current Is flows through the network. When a fault occurs between A and B, the negligible impedance between these points results in a very high short-circuit current Isc that is limited only be impedance Zsc. The current Isc develops under transient conditions depending on the reactances X and the resistances R that make up impedance Zsc: Zsc =\n\nR2 + X 2\n\nIn power distribution networks, reactance X = L ω is normally much greater than resistance\n\nR and the R / X ratio is between 0.1 and 0.3. The ratio is virtually equals cos ϕsc for low values: R cos ϕ sc = 2 R + X2 However, the transient conditions prevailing while the short-circuit current develops differ depending on the distance between the fault location and the generator. This distance is not necessarily physical, but means that the generator impedances are less than the link impedance between the generator and the fault location. Fault away from the generator This is the most frequent situation. The transient conditions are those resulting from the application of a voltage to a reactor-resistance circuit. This voltage is: e = E sin (ω t + α ) Current i is then the sum of the two components: i = ia + idc .\n\nc The first (ia) is alternating and sinusoidal:\n\nia = Ι sin (ω t + α ) where\n\nI = maximum current = E , Zsc\n\nR\n\nα = angle characterising the difference between\n\nX\n\nthe initiation of the fault and zero voltage. c The second (idc) is an aperiodic component, -\n\nZsc Zs\n\ne\n\nB\n\nR\n\nt\n\nidc = - Ι sin α e L . Its initial value depends on a and its decay rate is proportional to R / L.\n\nA\n\nAt the initiation of the short-circuit, i is equal to zero by definition (the design current Is is negligible), hence: i = ia + idc = 0 Figure 7 shows the graphical composition of i as the algebraic sum of its two components ia et idc.\n\nFig. 6: simplified network diagram.\n\nia = I sin (ω t + α)\n\nidc = - I sin α e\n\n-\n\nR t L\n\nI t θ\n\nω\n\ni = ia + idc\n\nFault initiation\n\nFig. 7: graphical presentation and decomposition of a short-circuit current occuring away from the generator.\n\nCahier Technique Schneider Electric no. 158 / p.7\n\na) Symmetrical i\n\nI = 2 Ia\n\nThe moment the fault occurs or the moment of closing, with respect to the network voltage, is characterised by its closing angle a (occurrence of the fault). The voltage can therefore be expressed as: u = E sin (ω t + α ) . The current therefore develops as follows: R  t E  sin (ω t + α - ϕ ) - sin (α - ϕ ) e L   Z    with its two components, one being alternating with a shift equal to ϕ with respect to the voltage and the second aperiodic and decaying to zero as t tends to infinity. Hence the two extreme cases defined by:\n\ni =\n\nu\n\nb) Asymmetrical\n\ni\n\nidc\n\nc α = ϕ ≈ π/2 , said to be symmetrical (or balanced), (see figure a ). E The fault current can be defined by: i = sin ω t Z which, from the initiation, has the same shape as for steady state conditions with a peak value E / Z. c α = 0, said to be asymmetrical (or unbalanced), (see figure b ).\n\nip\n\nThe fault current can be defined by: R  t E  sin (ω t - ϕ) - sin ϕ e L   Z    Its initial peak value ip therefore depends on ϕ, i.e. 
on the R / X = cos ϕ ratio of the circuit.\n\nu\n\ni =\n\nFig. 8: graphical presentation of the two extreme cases (symmetrical and asymmetrical) for a short-circuit current.\n\nFigure 8 illustrates the two extreme cases for the development of a short-circuit current, presented, for the sake of simplicity, with a single-phase, alternating voltage. -\n\nR t L\n\nassumed to be constant and the internal reactance of the machine variable. The reactance develops in three stages: c subtransient (the first 10 to 20 milliseconds of the fault);\n\nThe factor e is inversely proportional to the aperiodic component damping, determined by the R / L or R / X ratios.\n\nc transient (up to 500 milliseconds);\n\nThe value of ip must therefore be calculated to determine the making capacity of the required circuit breakers and to define the electrodynamic forces that the installation as a whole must be capable of withstanding.\n\nNote that in the indicated order, the reactance acquires a higher value at each stage, i.e. the subtransient reactance is less than the transient reactance, itself less than the steady-state reactance. The successive effect of the three reactances leads to a gradual reduction in the\n\nIts value may be deduced from the rms value of the symmetrical short-circuit current Ia using the equation: ip = K 2 Ιa where the coefficient K is indicated by the curve in figure 9 , as a function of the ratio R / X or R / L. Fault near the generator When the fault occurs in the immediate vicinity of the generator supplying the circuit, the variation in the impedance of the generator, in this case the dominant impedance, damps the short-circuit current. The transient current-development conditions are complicated by the variation in the electromotive force resulting from the shortcircuit. For simplicity, the electromotive force is\n\nCahier Technique Schneider Electric no. 158 / p.8\n\nK 2.0 1.8 1.6 1.4 1.2 1.0 0\n\n0.2\n\n0.4\n\n0.6\n\n0.8\n\n1.0\n\n1.2 R / X\n\nFig. 9: variation of coefficient K depending on R / X or R / L (see IEC 60909).\n\nshort-circuit current which is the sum of four components (see fig. 10 ): c the three alternating components (subtransient, transient and steady-state); c the aperiodic component resulting from the development of the current in the circuit (inductive).\n\nPractically speaking, information on the development of the short-circuit current is not essential: c in a LV installation, due to the speed of the breaking devices, the value of the subtransient short-circuit current, denoted Ik\", and of the maximum asymmetrical peak amplitude ip is\n\na) 0\n\nt (s)\n\nb) 0\n\nt (s)\n\nc) 0\n\nt (s)\n\nd) 0\n\nt (s) 0.1\n\n0.3\n\n0.5\n\ne) 0\n\nt (s) Subtransient\n\nTransient\n\nFig. 10: total short-circuit current Isc (e), and contribution of its components: a) subtransient of each reactance; b) transient reactance; c) steady-state reactance; d) aperiodic component. Note that the decrease in the generator reactance is faster than that of the aperiodic component. This is a rare situation that can cause saturation of the magnetic circuits and interruption problems because several periods occur before the current passes through zero.\n\nCahier Technique Schneider Electric no. 
sufficient when determining the breaking capacities of the protection devices and the electrodynamic forces;
c in LV power distribution and in HV applications, however, the transient short-circuit current is often used if breaking occurs before the steady-state stage, in which case it becomes useful to use the short-circuit breaking current, denoted Ib, which determines the breaking capacity of the time-delayed circuit breakers. Ib is the value of the short-circuit current at the moment interruption is effective, i.e. following a time t after the development of the short-circuit, where t = tmin. Time tmin (minimum time delay) is the sum of the minimum operating time of a protection relay and the shortest opening time of the associated circuit breaker, i.e. the shortest time between the appearance of the short-circuit current and the initial separation of the pole contacts on the switching device.

Figure 11 presents the various short-circuit currents defined above.

Fig. 11: short-circuit currents near a generator (schematic diagram), showing the symmetrical and asymmetrical envelopes and the subtransient and transient stages.

1.3 Standardised Isc calculations

The standards propose a number of methods.
c Application guide C 15-105, which supplements NF C 15-100 (Normes Françaises, low-voltage AC installations), details four methods:
v the "impedance" method, used to calculate fault currents at any point in an installation with a high degree of accuracy. This method involves adding the various resistances and reactances of the fault loop separately, from (and including) the source to the given point, and then calculating the corresponding impedance. The Isc value is finally obtained by applying Ohm's law:
Isc = Un / Σ(Z)
All the characteristics of the various elements in the fault loop must be known (sources and wiring systems). Note that in the application guide, a number of factors are not taken into account, notably:
- the reactances of the circuit breakers and the busbars;
- the resistances of rotating machines.
The results obtained may be different from those presented in the next chapter, because these factors are taken into account there;
v the "composition" method, which may be used when the characteristics of the power supply are not known. The upstream impedance of the given circuit is calculated on the basis of an estimate of the short-circuit current at its origin. The power factor cos φsc ≈ R / X is assumed to be identical at the origin of the circuit and at the fault location. In other words, it is assumed that the elementary impedances of two successive sections in the installation are sufficiently similar in their characteristics to justify replacing the vectorial addition of the impedances by algebraic addition. This approximation may be used to calculate the value of the short-circuit current modulus with sufficient accuracy for the addition of a circuit. This very approximate method should be used only for installations rated up to 800 kVA;
v the "conventional" method, which can be used, when the impedances or the Isc in the installation upstream of the given circuit are not
This method considers only the resistance of the conductors and applies a coefficient greater than 1 to conductors with large cross-sectional areas to take into account their inductance (1.15 for 150 mm2, 1.20 for 185 mm2, etc.). It is mainly used for final circuits with their origin at a distance that is sufficiently far from the power source (network or power-station unit); v the \"simplified\" method (presented in detail in this application guide), which, via tables based on numerous simplifying assumptions, indicates for each conductor cross-sectional area: - the current rating of the overload protection device; - maximum lengths of wiring systems to maintain protection against indirect contact; - permissible lengths in view of line voltage drops. The data in the tables is in fact the result of calculations run using essentially the\n\ncomposition and conventional methods. This method may be used to determine the characteristics of a circuit to be added to an existing installation for which sufficient information is not available. It is directly applicable to LV installations, and can be used with correction coefficients if the voltage is not 230 / 400 V. c Standard IEC 909 (VDE 0102) applies to all networks, radial or meshed, up to 230 kV. This method, based on the Thevenin theorem, calculates an equivalent voltage source at the short-circuit location and then determines the corresponding short-circuit current. All network feeders as well as the synchronous and asynchronous machines are replaced in the calculation by their impedances (positive sequence, negative-sequence and zerosequence). All line capacitances and the parallel admittances of non-rotating loads, except those of the zero-sequence system, are neglected. c Other methods use the superposition principle and require that the load current first be calculated. Note also the method proposed by standard IEC 865 (VDE 0103) which calculates the thermally equivalent short-circuit current.\n\n1.4 Methods presented in this document In this \"Cahier Technique\" publication, two methods are presented for the calculation of short-circuit currents in radial networks: c the impedance method, reserved primarily for LV networks, was selected for its high degree of accuracy and its instructive value, given that virtually all characteristics of the circuit are taken into account.\n\nc the IEC 60909 method, used primarily for HV networks, was selected for its accuracy and its analytical character. More technical in nature, it implements the symmetrical-component principle.\n\n1.5 Basic assumptions To simplify the short-circuit calculations, a number of assumptions are required. These impose limits for which the calculations are valid but usually provide good approximations, facilitating comprehension of the physical phenomena and consequently the short-circuit current calculations. They nevertheless maintain a fully acceptable level of accuracy, \"erring\" systematically on the conservative side. The assumptions used in this document are as follows: c the given network is radial with rated voltages ranging from LV to HV, but not exceeding 230 kV, the limit set by standard IEC 60909; c the short-circuit current, during a three-phase short-circuit, is assumed to occur simultaneously on all three phases; c during the short-circuit, the number of phases involved does not change, i.e. 
a three-phase fault remains three-phase and a phase-to-earth fault remains phase-to-earth;
c for the entire duration of the short-circuit, the voltages responsible for the flow of the current and the short-circuit impedance do not change significantly;
c transformer regulators or tap-changers are assumed to be set to a medium position (if the short-circuit occurs away from the generator, the actual position of the transformer regulator or tap-changers does not need to be taken into account);
c arc resistances are not taken into account;
c all line capacitances are neglected;
c load currents are neglected;
c all zero-sequence impedances are taken into account.

2 Calculation of Isc by the impedance method

2.1 Isc depending on the different types of short-circuit

Three-phase short-circuit
This fault involves all three phases. Short-circuit current Isc3 is equal to:
Isc3 = (U / √3) / Zsc
where U (phase-to-phase voltage) corresponds to the transformer no-load voltage, which is 3 to 5 % greater than the on-load voltage across the terminals. For example, in 390 V networks, the phase-to-phase voltage adopted is U = 410 V, and the phase-to-neutral voltage is U / √3 = 237 V.
Calculation of the short-circuit current therefore requires only calculation of Zsc, the impedance equal to all the impedances through which Isc flows from the generator to the location of the fault, i.e. the impedances of the power sources and the lines (see fig. 12). This is, in fact, the "positive-sequence" impedance per phase:
Zsc = √((ΣR)² + (ΣX)²)
where ΣR = the sum of series resistances and ΣX = the sum of series reactances.
It is generally considered that three-phase faults provoke the highest fault currents. The fault current in an equivalent diagram of a polyphase system is limited by only the impedance of one phase at the phase-to-neutral voltage of the network. Calculation of Isc3 is therefore essential for selection of equipment (maximum current and electrodynamic withstand capability).

Fig. 12: the various short-circuit currents. Three-phase fault: Isc3 = (U/√3) / Zsc; phase-to-phase fault: Isc2 = U / (2 Zsc); phase-to-neutral fault: Isc1 = (U/√3) / (Zsc + ZLn); phase-to-earth fault: Isc(0) = (U/√3) / (Zsc + Z(0)).

Phase-to-phase short-circuit clear of earth
This is a fault between two phases, supplied with a phase-to-phase voltage U. In this case, the short-circuit current Isc2 is less than that of a three-phase fault:
Isc2 = U / (2 Zsc) = (√3 / 2) Isc3 ≈ 0.86 Isc3

Phase-to-neutral short-circuit clear of earth
This is a fault between one phase and the neutral, supplied with a phase-to-neutral voltage V = U / √3. The short-circuit current Isc1 is:
Isc1 = (U / √3) / (Zsc + ZLn)
In certain special cases of phase-to-neutral faults, the zero-sequence impedance of the source is less than Zsc (for example, at the terminals of a star-zigzag connected transformer or of a generator under subtransient conditions). In this case, the phase-to-neutral fault current may be greater than that of a three-phase fault.
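The four formulas of figure 12 translate directly into code. In the sketch below, the impedance values are placeholder examples, not data from this document; all quantities are in volts and ohms:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Assumed example values */
    double U   = 410.0;    /* phase-to-phase no-load voltage, V */
    double Zsc = 0.015;    /* positive-sequence impedance, ohm  */
    double ZLn = 0.010;    /* neutral conductor impedance, ohm  */
    double Z0  = 0.030;    /* zero-sequence impedance, ohm      */
    double V   = U / sqrt(3.0);   /* phase-to-neutral voltage   */

    printf("Isc3   = %8.0f A\n", V / Zsc);          /* three-phase      */
    printf("Isc2   = %8.0f A\n", U / (2.0 * Zsc));  /* phase-to-phase   */
    printf("Isc1   = %8.0f A\n", V / (Zsc + ZLn));  /* phase-to-neutral */
    printf("Isc(0) = %8.0f A\n", V / (Zsc + Z0));   /* phase-to-earth   */
    /* Check: Isc2 = (sqrt(3)/2) * Isc3, i.e. about 0.86 * Isc3 */
    return 0;
}
```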
Phase-to-earth fault (one or two phases)
This type of fault brings the zero-sequence impedance Z(0) into play. Except when rotating machines are involved (reduced zero-sequence impedance), the short-circuit current Isc(0) is less than that of a three-phase fault. Calculation of Isc(0) may be necessary, depending on the neutral system (system earthing arrangement), in view of defining the setting thresholds for the zero-sequence (HV) or earth-fault (LV) protection devices. Figure 12 shows the various short-circuit currents.

2.2 Determining the various short-circuit impedances

This method involves determining the short-circuit currents on the basis of the impedance represented by the "circuit" through which the short-circuit current flows. This impedance may be calculated after separately summing the various resistances and reactances in the fault loop, from (and including) the power source to the fault location. (The circled numbers may be used to come back to important information while reading the example at the end of this section.)

Network impedances
c Upstream network impedance
Generally speaking, points upstream of the power source are not taken into account. Available data on the upstream network is therefore limited to that supplied by the power distributor, i.e. only the short-circuit power Ssc in MVA. The equivalent impedance of the upstream network is:
① Zup = U² / Ssc
where U is the no-load phase-to-phase voltage of the network.
The upstream resistance and reactance may be deduced from the ratio Rup / Zup (for HV):
Rup / Zup ≈ 0.3 at 6 kV;
Rup / Zup ≈ 0.2 at 20 kV;
Rup / Zup ≈ 0.1 at 150 kV.
As Xup = √(Zup² − Rup²):
Xup / Zup = √(1 − (Rup / Zup)²)
Therefore, for 20 kV:
(Xup / Zup)² = 1 − (0.2)² = 0.96, i.e. Xup = 0.980 Zup,
hence the approximation Xup ≈ Zup.

c Internal transformer impedance
The impedance may be calculated on the basis of the short-circuit voltage usc expressed as a percentage:
③ ZT = usc × U² / Sn
where
U = no-load phase-to-phase voltage of the transformer;
Sn = transformer kVA rating;
usc = voltage that must be applied to the primary winding of the transformer for the rated current to flow through the secondary winding, when the LV secondary terminals are short-circuited.
For public distribution MV / LV transformers, the values of usc have been set by the European Harmonisation document HD 428-1S1 issued in October 1992 (see fig. 13).

Rating (kVA) of the HV / LV transformer: ≤ 630 / 800 / 1,000 / 1,250 / 1,600 / 2,000
Short-circuit voltage usc (%): 4 / 4.5 / 5 / 5.5 / 6 / 7

Fig. 13: standardised short-circuit voltage for public distribution transformers.
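Expressions ① and ③ combine immediately into an order-of-magnitude estimate of the three-phase fault current at the LV terminals of a distribution transformer. The values in the following sketch are assumed examples (a 500 kVA unit, taken at usc = 4 % per figure 13, on a 250 MVA upstream network); the sketch neglects cable impedances and treats Zup and ZT as scalar magnitudes added in series:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Assumed example data */
    double Ssc = 250e6;   /* upstream short-circuit power, VA  */
    double U   = 410.0;   /* LV no-load phase-to-phase voltage */
    double Sn  = 500e3;   /* transformer rating, VA            */
    double usc = 0.04;    /* 4 % per fig. 13 (<= 630 kVA)      */

    double Zup = U * U / Ssc;       /* upstream impedance, referred to LV */
    double ZT  = usc * U * U / Sn;  /* internal transformer impedance     */

    /* Crude series addition (both are nearly pure reactances here) */
    double Zsc  = Zup + ZT;
    double Isc3 = U / (sqrt(3.0) * Zsc);

    printf("Zup = %.4f ohm, ZT = %.4f ohm\n", Zup, ZT);
    printf("Isc3 at the LV busbars ~ %.0f A\n", Isc3);
    return 0;
}
```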
Note that the accuracy of values has a direct influence on the calculation of Isc in that an error of x % for usc produces an equivalent error (x %) for ZT. The relative error is: …

④ In general, RT …

… k = 1.4, and the peak Isc = 1.4 × √2 × 7,580 ≈ 15,005 A. Consequently, the short-circuit current (subtransient) on the LV busbars increases from 12,459 A to 20,039 A and the peak Isc from 27,310 A to 42,315 A.

c Fault at D
The impedance to be taken into account is 1/19th of ZM, plus that of the cable:
XMD = (605 / 19 + 2.7) × 10⁻³ ≈ 34.5 mΩ
RMD = (121 / 19 + 19.2) × 10⁻³ ≈ 25.5 mΩ
ZMD = 43 mΩ
hence
IMD = 410 / (√3 × 43 × 10⁻³) = 5,505 A
giving a total at D of 7,534 + 5,505 = 13,039 A rms, and a peak Isc ≈ 20,650 A.

c Fault at B
As for the fault at C, the current produced by the motor may be calculated on the basis of the "motor + cable" impedance:
XM = (605 + 2.7 + 12) × 10⁻³ ≈ 620 mΩ
RM = (121 + 19.2 + 2.4) × 10⁻³ ≈ 142.6 mΩ
ZM = 636 mΩ
hence
IM = 410 / (√3 × 636 × 10⁻³) ≈ 372 A
For the 20 motors, IMB = 7,440 A. Using the fact that R / X = 0.3 ⇒ k = 1.4, the peak is
Isc = 1.4 × √2 × 7,440 = 14,728 A
Consequently, the short-circuit current (subtransient) on the main LV switchboard increases from 35,758 A to 43,198 A and the peak Isc from 79,900 A to 94,628 A. However, as mentioned above, if the fault arc is taken into account, Isc is reduced to between 45.6 and 75 kA.

c Fault at A (HV side)
Rather than calculating the equivalent impedances, it is easier to estimate (conservatively) the reverse current of the motors at A by multiplying the value at B by the LV / HV transformation ratio ⑰, i.e.:
7,440 × 410 / (20 × 10³) = 152.5 A
This figure, compared to the 6,415 A calculated previously, is negligible.

Rough calculation of the fault at D
This calculation makes use of all the approximations mentioned above (notably ⑮ and ⑯):
ΣX = 4.2 + 1.5 + 12 + 0.15 = 17.85 mΩ = X'D
ΣR = 2.4 + 19.2 = 21.6 mΩ = R'D
Z'D = √(R'D² + X'D²) ≈ 28.02 mΩ
I'D = 410 / (√3 × 28.02 × 10⁻³) ≈ 8,448 A
hence the peak Isc:
√2 × 8,448 ≈ 11,945 A
To find the peak asymmetrical Isc, the above value must be increased by the contribution of the motors energised at the time of the fault ⑬, i.e. 4.8 times their rated current (95 A). Again, it is possible to estimate the current injected by all the motors as being equal to 4.8 times their rated current, i.e. 9,120 A; the approximation again overestimates the real value of IMB:
Isc = 11,945 + (4.8 × 95 × √2 × 20) = 24,842 A
Compared to the figure obtained by the full calculation (20,039 A), the approximate method allows a quick evaluation with an error remaining on the side of safety.
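The rough calculation above can be replayed in a few lines of C; the figures are exactly those of the example, so the program simply verifies the arithmetic:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Sums from the rough calculation at D, in milliohms */
    double X = 4.2 + 1.5 + 12.0 + 0.15;   /* = 17.85 mohm */
    double R = 2.4 + 19.2;                /* = 21.6  mohm */
    double Z = sqrt(R * R + X * X);       /* ~ 28.02 mohm */

    double I  = 410.0 / (sqrt(3.0) * Z * 1e-3);   /* ~  8,448 A */
    double ip = sqrt(2.0) * I;                    /* ~ 11,945 A */
    /* Motor contribution: 20 motors, 4.8 x 95 A each, taken at peak */
    double ip_total = ip + 4.8 * 95.0 * sqrt(2.0) * 20.0;  /* ~ 24,842 A */

    printf("Z'D = %.2f mohm, I'D = %.0f A\n", Z, I);
    printf("peak Isc ~ %.0f A (vs 20,039 A from the full calculation)\n",
           ip_total);
    return 0;
}
```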
3 Calculation of Isc values in a radial network using symmetrical components

3.1 Advantages of this method

Calculation using symmetrical components is particularly useful when a three-phase network is unbalanced because, due to magnetic phenomena for example, the traditional "cyclical" impedances R and X are, normally speaking, no longer useable. This calculation method is also required when:
c a voltage and current system is not symmetrical (Fresnel vectors with different moduli and imbalances exceeding 120°); this is the case for phase-to-earth or phase-to-phase short-circuits with or without earth connection;
c the network includes rotating machines and/or special transformers (Yyn connection, for example).
This method may be used for all types of radial distribution networks at all voltage levels.

3.2 Symmetrical components

Similar to the Leblanc theorem, which states that a rectilinear alternating field with a sinusoidal amplitude is equivalent to two rotating fields turning in opposite directions, the definition of symmetrical components is based on the equivalence between an unbalanced three-phase system and the sum of three balanced three-phase systems, namely the positive-sequence, negative-sequence and zero-sequence systems (see fig. 23).

The superposition principle may then be used to calculate the fault currents. In the description below, the system is defined using current I1 as the rotation reference, where:
c I1(1) is the positive-sequence component;
c I1(2) is the negative-sequence component;
c I1(0) is the zero-sequence component;
and by using the operator
a = e^(j 2π/3) = −1/2 + j √3/2
between I1, I2 and I3.

This principle, applied to a current system, is confirmed by a graphical representation (see fig. 23). For example, the graphical addition of the vectors produces, for I2, the following result:
I2 = a² I1(1) + a I1(2) + I1(0)
Currents I1 and I3 may be expressed in the same manner, hence the system:
I1 = I1(1) + I1(2) + I1(0)
I2 = a² I1(1) + a I1(2) + I1(0)
I3 = a I1(1) + a² I1(2) + I1(0)

Fig. 23: graphical construction of the sum of three balanced three-phase systems (positive-sequence, negative-sequence and zero-sequence).

These symmetrical current components are related to the symmetrical voltage components by the corresponding impedances:
Z(1) = V(1) / I(1), Z(2) = V(2) / I(2) and Z(0) = V(0) / I(0)
These impedances may be defined from the characteristics (supplied by the manufacturers) of the various elements in the given electrical network. Among these characteristics, we can note that Z(2) ≈ Z(1), except for rotating machines, whereas Z(0) varies depending on each element (see fig. 24). For further information on this subject, a detailed presentation of this method for calculating solid and impedance fault currents is contained in the "Cahier Technique" no. 18 (see the appended bibliography).

Zero-sequence impedance Z(0) of network elements:
- Transformer (seen from the secondary winding), no neutral: ∞
- Transformer Yyn or Zyn: ∞ (free flux), 10 to 15 X(1) (forced flux)
- Transformer Dyn or YNyn: X(1)
- Transformer, primary D or Y + zn: 0.1 to 0.2 X(1)
- Synchronous machine: ≈ 0.5 Z(1)
- Asynchronous machine: ≈ 0
- Line: ≈ 3 Z(1)

Fig. 24: zero-sequence characteristic of the various elements in an electrical network.
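With C99 complex arithmetic, the system of equations above is a one-to-one transcription. The sequence components chosen in this sketch are arbitrary illustrative values; the sanity check uses the identity 1 + a + a² = 0:

```c
#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    double complex a = cexp(I * 2.0 * PI / 3.0);   /* rotation operator */

    /* Assumed sequence components of phase 1 (arbitrary example) */
    double complex i1_pos = 100.0, i1_neg = 20.0 * a, i1_zero = 5.0;

    double complex I1 = i1_pos + i1_neg + i1_zero;
    double complex I2 = a * a * i1_pos + a * i1_neg + i1_zero;
    double complex I3 = a * i1_pos + a * a * i1_neg + i1_zero;

    printf("I1 = %8.2f %+8.2f i\n", creal(I1), cimag(I1));
    printf("I2 = %8.2f %+8.2f i\n", creal(I2), cimag(I2));
    printf("I3 = %8.2f %+8.2f i\n", creal(I3), cimag(I3));

    /* Sanity check: I1 + I2 + I3 = 3 x zero-sequence component */
    double complex s = I1 + I2 + I3;
    printf("sum = %6.2f %+6.2f i (expect 3 * I1(0) = 15)\n",
           creal(s), cimag(s));
    return 0;
}
```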
3.3 Calculation as defined by IEC 60909

Standard IEC 60909 defines and presents a method implementing symmetrical components that may be used by engineers not specialised in the field. The method is applicable to electrical networks with a rated voltage of less than 230 kV, and the standard explains the calculation of minimum and maximum short-circuit currents. The former is required in view of calibrating overcurrent protection devices and the latter is used to determine the rated characteristics for the electrical equipment. In view of its application to LV networks, the standard is accompanied by application guide IEC 60781.

Procedure
1. Calculate the equivalent voltage at the fault location, equal to c Un / √3, where c is a voltage factor required in the calculation to account for:
c voltage variations in space and in time;
c possible changes in transformer tappings;
c subtransient behaviour of generators and motors.
Depending on the required calculations and the given voltage levels, the standardised values of c are indicated in figure 25.

Rated voltage Un: voltage factor c for calculation of Isc max. / Isc min.
LV 230 - 400 V: 1 / 0.95
LV, others: 1.05 / 1
HV 1 to 230 kV: 1.1 / 1

Fig. 25: values for voltage factor c (see IEC 60909).

2. Determine and add up the equivalent positive-sequence, negative-sequence and zero-sequence impedances upstream of the fault location.
3. Calculate the initial short-circuit current using the symmetrical components. Practically speaking and depending on the type of fault, the equations required for the calculation of Isc are indicated in the table in figure 26.
4. Once the Isc (Ik") value is known, calculate the other values, such as the peak Isc value, the steady-state Isc value and the maximum steady-state Isc value.

Effect of the distance separating the fault from the generator
When using this method, two different possibilities must always be considered:
c the short-circuit is away from the generator: the situation in networks where the short-circuit currents do not have a damped, alternating component. This is generally the case in LV networks, except when high-power loads are supplied by special HV substations;
c the short-circuit is near the generator (see fig. 11): the situation in networks where the short-circuit currents do have a damped, alternating component. This generally occurs in HV systems, but may occur in LV systems when, for example, an emergency generator supplies priority outgoers.
The main differences between these two cases are:
c for short-circuits away from the generator:
v the initial (Ik"), steady-state (Ik) and breaking (Ib) short-circuit currents are equal (Ik" = Ik = Ib);
v the positive-sequence (Z(1)) and negative-sequence (Z(2)) impedances are equal (Z(1) = Z(2));
c for short-circuits near the generator:
v the short-circuit currents are not equal; in fact the relationship is Ik < Ib < Ik";
v the positive-sequence impedance (Z(1)) is not necessarily equal to the negative-sequence impedance (Z(2)).
Note however that asynchronous motors may also add to a short-circuit, accounting for up to 30 % of the network Isc for the first 30 milliseconds, in which case Ik" = Ik = Ib no longer holds true.

Initial short-circuit current Ik" depending on the type of fault (symbols: U = phase-to-phase rms voltage of the three-phase network; Ik" = modulus of the short-circuit current; Z(1), Z(2), Z(0) = symmetrical impedances; Zsc = short-circuit impedance; Ze = earth impedance):
- Three-phase (any Ze): Ik" = c Un / (√3 Z(1)) in both the general situation and for a fault far from the generators. In both cases, the short-circuit current depends only on Z(1), which is generally replaced by Zk, the short-circuit impedance at the fault location, defined by Zk = √(Rk² + Xk²), where Rk is the sum of the resistances of one phase connected in series and Xk is the sum of the reactances of one phase connected in series.
- Phase-to-phase clear of earth (Ze = ∞): Ik" = c Un / (Z(1) + Z(2)) in general; Ik" = c Un / (2 Z(1)) far from the generators.
- Phase-to-earth: Ik" = (c Un √3) / (Z(1) + Z(2) + Z(0)) in general; Ik" = (c Un √3) / (2 Z(1) + Z(0)) far from the generators.
- Phase-to-phase-to-earth (Zsc between phases = 0): Ik" = (c Un √3 Z(2)) / (Z(1) Z(2) + Z(2) Z(0) + Z(1) Z(0)) in general; Ik" = (c Un √3) / (Z(1) + 2 Z(0)) far from the generators.

Fig.
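For the "fault far from the generators" column of the table above, where Z(2) = Z(1), the four expressions reduce to the sketch below; the voltage and the sequence impedances are assumed example values, treated as pure magnitudes:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Assumed example data: 20 kV network, impedance magnitudes in ohms */
    double c = 1.1, Un = 20e3;
    double Z1 = 4.0, Z0 = 10.0;   /* with Z(2) = Z(1) far from generators */

    double Ik3   = c * Un / (sqrt(3.0) * Z1);              /* three-phase        */
    double Ik2   = c * Un / (2.0 * Z1);                    /* phase-to-phase     */
    double Ik1   = c * Un * sqrt(3.0) / (2.0 * Z1 + Z0);   /* phase-to-earth     */
    double IkE2E = c * Un * sqrt(3.0) / (Z1 + 2.0 * Z0);   /* two-phase-to-earth */

    printf("Ik\"(3)   = %7.0f A\n", Ik3);
    printf("Ik\"(2)   = %7.0f A\n", Ik2);
    printf("Ik\"(1)   = %7.0f A\n", Ik1);
    printf("Ik\"(E2E) = %7.0f A\n", IkE2E);
    return 0;
}
```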
26: short-circuit values depending on the positive-sequence, negative-sequence and zero-sequence impedances of the given network (see IEC 60909).

Conditions to consider when calculating the maximum and minimum short-circuit currents
c Calculation of the maximum short-circuit currents must take into account the following points:
v application of the correct voltage factor c corresponding to calculation of the maximum short-circuit currents;
v among the assumptions and approximations mentioned in this document, only those leading to a conservative error should be used;
v the resistances per unit length RL of lines (overhead lines, cables, phase and neutral conductors) should be calculated for a temperature of 20 °C.
c Calculation of the minimum short-circuit currents requires:
v applying the voltage factor c corresponding to the minimum permissible voltage on the network;
v selecting the network configuration, and in some cases the minimum contribution from sources and network feeders, which result in the lowest short-circuit current at the fault location;
v taking into account the impedance of the busbars, the current transformers, etc.;
v neglecting the motors;
v considering the resistances RL at the highest foreseeable temperature:
RL = [1 + 0.004 (θe − 20 °C)] × RL20°C
where RL20°C is the resistance at 20 °C and θe is the permissible temperature (°C) for the conductor at the end of the short-circuit. The factor 0.004 per °C is valid for copper, aluminium and aluminium alloys.

3.4 Equations for the various currents

Initial short-circuit current Ik"
The different initial short-circuit currents Ik" are calculated using the equations in the table in figure 26.

Peak value ip of the short-circuit current
In non-meshed systems, the peak value ip of the short-circuit current may be calculated for all types of faults using the equation:
ip = K √2 Ik"
where Ik" is the initial short-circuit current and K is a factor depending on the R / X ratio, defined in the graph in figure 9 or using the following approximate calculation:
K = 1.02 + 0.98 e^(−3 R / X)

Short-circuit breaking current Ib
Calculation of the short-circuit breaking current Ib is required only when the fault is near the generator and protection is ensured by time-delayed circuit breakers. Note that this current is used to determine the breaking capacity of these circuit breakers. It may be calculated with a fair degree of accuracy using the following equation:
Ib = µ Ik"
where µ is a factor defined by the minimum time delay tmin and the Ik" / Ir ratio (see fig. 27), which expresses the influence of the subtransient and transient reactances, with Ir denoting the rated current of the generator.
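The two closed-form relations above, ip = K √2 Ik" with the approximate K, and Ib = µ Ik", translate directly; in this sketch, R / X, Ik" and the value of µ (normally read from figure 27) are assumed examples:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Assumed example values */
    double R_over_X = 0.15;
    double Ik = 25.0e3;   /* initial short-circuit current Ik", A        */
    double mu = 0.8;      /* from fig. 27 for the given tmin and Ik"/Ir  */

    double K  = 1.02 + 0.98 * exp(-3.0 * R_over_X);
    double ip = K * sqrt(2.0) * Ik;   /* peak value       */
    double Ib = mu * Ik;              /* breaking current */

    printf("K  = %.3f\n", K);
    printf("ip = %.0f A\n", ip);
    printf("Ib = %.0f A\n", Ib);
    return 0;
}
```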
Steady-state short-circuit current Ik
The amplitude of the steady-state short-circuit current Ik depends on generator saturation influences, and its calculation is therefore less accurate than for the initial symmetrical current Ik". The proposed calculation methods produce a sufficiently accurate estimate of the upper and lower limits, depending on whether the short-circuit is supplied by a generator or a synchronous machine.
c The maximum steady-state short-circuit current, with the synchronous generator at its highest excitation, may be calculated by:
Ikmax = λmax Ir
c The minimum steady-state short-circuit current is calculated under no-load, constant (minimum) excitation conditions for the synchronous generator, using the equation:
Ikmin = λmin Ir
where Ir is the rated current at the generator terminals and λ is a factor defined by the saturation inductance Xd sat. The λmax and λmin values are indicated in figure 28 for turbo-generators and in figure 29 for machines with salient poles.

Fig. 27: factor µ used to calculate the short-circuit breaking current Ib, plotted against the three-phase ratio Ik" / Ir for minimum time delays tmin of 0.02 s, 0.05 s, 0.1 s and > 0.25 s (see IEC 60909).

Fig. 28: factors λmax and λmin for turbo-generators, as functions of Ik" / Ir and of the saturated reactance Xd sat (1.2 to 2.2) (see IEC 60909).

Fig. 29: factors λmax and λmin for generators with salient poles, as functions of Ik" / Ir and of the saturated reactance Xd sat (0.6 to 2.0) (see IEC 60909).
3.5 Calculation example

Problem
Consider four networks, three 5 kV networks and one 15 kV network, supplied via a 30 kV network by transformers in substation E (see fig. 30). During construction of line GH, calculation of the breaking capacity of circuit breaker M is requested. The following information is available:
c only the secondary windings of the transformers in substation E are earthed;
c for a 30 kV line, the reactance value is 0.35 Ω/km (positive-sequence and negative-sequence conditions) and 3 × 0.35 Ω/km (zero-sequence conditions);
c the short-circuit reactance is 6 % for the transformers in substation E and 8 % for the other transformers;
c the factor c for U is set to 1;
c all loads connected to points F and G are essentially passive;
c all resistances are negligible with respect to the reactances.

Fig. 30: diagram of the network studied. A 60 kV, 290 MVA source feeds substation E through two 10 MVA transformers; 30 kV lines of 15 km, 40 km, 20 km and 30 km link E with substations F, G and H; an 8 MVA transformer (5 kV network) and 4 MVA transformers (15 kV and 5 kV networks) supply loads of 2 MVA (cos φ = 0.8) and 6 MVA (cos φ = 0.8); circuit breaker M is on line GH at H.

Solution
c On the basis of the positive-sequence and negative-sequence diagrams (see fig. 31), the following may be calculated:
a = U² / Ssc = 30² / 290 ⇒ j 3.1 Ω
b = usc × U² / Sn = (6 / 100) × (30² / 10) ⇒ j 5.4 Ω
c1 = 0.35 × 40 ⇒ j 14 Ω
c2 = 0.35 × 30 ⇒ j 10.5 Ω
c3 = 0.35 × 20 ⇒ j 7 Ω
c4 = 0.35 × 15 ⇒ j 5.25 Ω
d = usc × U² / Sn = (8 / 100) × (30² / 8) ⇒ j 9 Ω
e = (U² / S) × 0.6 = (30² / 6) × 0.6 ⇒ j 90 Ω
f = usc × U² / Sn = (8 / 100) × (30² / 4) ⇒ j 18 Ω
g = (U² / S) × 0.6 = (30² / 2) × 0.6 ⇒ j 270 Ω

c Note on the zero-sequence diagram (see fig. 32):
v the delta windings of the transformers in substation E block zero-sequence currents, i.e. the network is not affected by them;
v similarly, the transformers in substations F, H and G, due to their delta windings, are not affected by the zero-sequence currents and therefore have an infinite impedance for the fault.
b' = b1 = j 5.4 Ω
c'1 = 3 c1 = j 42 Ω
c'2 = 3 c2 = j 31.5 Ω
c'3 = 3 c3 = j 21 Ω
c'4 = 3 c4 = j 15.75 Ω
d' = ∞
f' = ∞

Fig. 31: positive-sequence and negative-sequence diagram. Fig. 32: zero-sequence diagram.

c Calculations may therefore be made using two simplified diagrams:
v with line GH open (see fig. 33):
Z(1) = Z(2) = j 17.25 Ω
This result is obtained by successive calculations as shown in figure 34. A similar calculation for the zero-sequence diagram gives the result:
Z(0) = j 39.45 Ω
Isc3 = c Un / (√3 Z(1)) ≈ 1.104 kA
Isc1 = (c Un √3) / (Z(1) + Z(2) + Z(0)) ≈ 0.773 kA
Note the network is HV, hence coefficient c = 1.1.

Fig. 33: simplified diagrams with line GH open: Z(1) = Z(2) = j 17.25 Ω; Z(0) = j 39.45 Ω.
Fig. 34: successive reduction of the positive-sequence diagram with line GH open: Za = j 3.1 + (j 5.4 ∥ j 5.4) = j 3.1 + j 2.7 = j 5.8 Ω; Zb = j 9 + j 90 = j 99 Ω; Zc = j 14 + j 18 + j 270 = j 302 Ω; Z' = Za Zb Zc / (Za Zb + Za Zc + Zb Zc) = j 5.381 Ω; Z' + j 5.25 = j 10.631 Ω; Z(1) = (j 10.631 × j 288) / (j 10.631 + j 288) + j 7 = j 17.253 Ω.
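Because all the elements are pure reactances, the reduction of figure 34 is plain series/parallel arithmetic on the imaginary parts. The sketch below replays it with the values of the example and recovers Z(1) and Isc3:

```c
#include <stdio.h>

/* Parallel combination of two same-sign reactances */
static double par(double x, double y) { return x * y / (x + y); }

int main(void)
{
    double Za = 3.1 + par(5.4, 5.4);   /* j 5.8  */
    double Zb = 9.0 + 90.0;            /* j 99   */
    double Zc = 14.0 + 18.0 + 270.0;   /* j 302  */

    /* Three parallel branches, then series c4, parallel j 288, series c3 */
    double Zp = Za * Zb * Zc /
                (Za * Zb + Za * Zc + Zb * Zc);   /* j 5.381  */
    double Z1 = par(Zp + 5.25, 288.0) + 7.0;     /* j 17.253 */

    printf("Z(1) = j %.3f ohm\n", Z1);
    /* Isc3 = c Un / (sqrt(3) Z(1)) with c = 1.1, Un = 30 kV -> ~1.104 kA */
    printf("Isc3 = %.3f kA\n", 1.1 * 30.0 / (1.7320508 * Z1));
    return 0;
}
```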
v with line GH closed (see fig. 35):
Z(1) = Z(2) = j 13.05 Ω
Z(0) = j 27.2 Ω
Isc3 = 1.460 kA
Isc1 = 1.072 kA
Given the highest short-circuit current (Isc3 = 1.460 kA), the line circuit breaker at point M must be sized for:
P = U I √3 = 30 × 1.460 × √3 ≈ 76 MVA.

Fig. 35: simplified positive-sequence and zero-sequence diagrams with line GH closed: Z(1) = Z(2) = j 13.05 Ω; Z(0) = j 27.2 Ω.

4 Computerised calculations and conclusion

Various methods for the calculation of short-circuit currents have been developed. Some have been included in a number of standards and are consequently included in this "Cahier Technique" publication as well. Several standardised methods were designed in such a way that short-circuit currents could be calculated by hand or using a small calculator. When computerised scientific calculations became a possibility in the 1970s, electrical-installation designers devised software for their particular needs. This software was initially run on mainframe computer systems, then on minicomputers, but was difficult to use, and therefore limited to a small number of experts. This software was finally transferred to the PC microcomputing environment, proving much easier to use. Today, a wide range of software packages are available which comply with the applicable standards defining the calculation of Isc currents in LV applications, for example Ecodial, a program designed and marketed by Schneider Electric.

All computer programs designed to calculate short-circuit currents are predominantly concerned with determining the required breaking and making capacities of switchgear and the electro-mechanical withstand capabilities of equipment. Other software is used by experts specialising in network design, for example, research on the dynamic behaviour of electrical networks. Such computer programs can be used for precise simulations of electrical phenomena over time, and their use is now spreading to include the entire electro-mechanical behaviour of networks and installations. Remember, however, that all software, whatever its degree of sophistication, is only a tool. To ensure correct results, it should be used by qualified professionals who have acquired the relevant knowledge and expertise.

Bibliography

Standards
c IEC 60909: Short-circuit current calculation in three-phase AC systems.
c IEC 60781: Application guide for calculation of short-circuit currents in low voltage radial systems.
c NF C 15-100: Installations électriques à basse tension.
c C 15-105: Guide pratique, Détermination des sections de conducteurs et choix des dispositifs de protection.

Schneider Electric Cahiers Techniques
c Analyse des réseaux triphasés en régime perturbé à l'aide des composantes symétriques, Cahier Technique no. 18 - B. DE METZ-NOBLAT.
c Neutral earthing in an industrial HV network, Cahier Technique no. 62 - F. SAUTRIAU.
c LV circuit-breaker breaking capacity, Cahier Technique no. 154 - R. MOREL.

Other publication by Institut Schneider Formation (ISF)
c Electrical Installation Guide, Ref.: MD1ELG2E (information on this 400-page publication is obtainable on www.schneiderformation.com).
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8490399,"math_prob":0.9772512,"size":54068,"snap":"2019-51-2020-05","text_gpt3_token_len":14009,"char_repetition_ratio":0.18764797,"word_repetition_ratio":0.04689204,"special_character_ratio":0.25033292,"punctuation_ratio":0.11028722,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9868275,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T12:48:06Z\",\"WARC-Record-ID\":\"<urn:uuid:a7b1e946-c352-4b18-b9e9-8b4a42e24c9e>\",\"Content-Length\":\"75044\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:455849a0-f373-47f6-a9de-f905d71d6ecc>\",\"WARC-Concurrent-To\":\"<urn:uuid:dd9c394b-d766-49be-abfa-b57a882dd949>\",\"WARC-IP-Address\":\"104.24.117.142\",\"WARC-Target-URI\":\"https://dokumen.site/download/calculation-of-short-circuit-currents-a5b39efb311fb9\",\"WARC-Payload-Digest\":\"sha1:GSOATZ7HQ2AN2QUPTNJK2TACADRIVSOA\",\"WARC-Block-Digest\":\"sha1:Z7ZNFHCD5UW2WZONSCWWL26BN6NRCIA7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594603.8_warc_CC-MAIN-20200119122744-20200119150744-00175.warc.gz\"}"}
https://www.zora.uzh.ch/id/eprint/129660/
[ "", null, "Measurement of forward $W→eν$ production in pp collisions at $\\sqrt{s} = 8$ TeV\n\nLHCb Collaboration; Bernet, R; Müller, K; Steinkamp, O; Straumann, U; et al (2016). Measurement of forward $W→eν$ production in pp collisions at $\\sqrt{s} = 8$ TeV. Journal of High Energy Physics, 2016(10):30.\n\nAbstract\n\nA measurement of the cross-section for $W → eν$ production in pp collisions is presented using data corresponding to an integrated luminosity of 2 $fb^{−1}$ collected by the LHCb experiment at a centre-of-mass energy of $\\sqrt{s} = 8$ TeV. The electrons are required to have more than 20 GeV of transverse momentum and to lie between 2.00 and 4.25 in pseudorapidity. The inclusive W production cross-sections, where the W decays to eν, are measured to be\n$σ_{W^+}→_{{e^+}{ν_e}}=1124.4±2.1±21.5±11.2±13.0pb$,\n$σ_{W^−}→_{{e^+}{\\overlineν_e}}=809.0±1.9±18.1±7.0±9.4pb$,\nwhere the first uncertainties are statistical, the second are systematic, the third are due to the knowledge of the LHC beam energy and the fourth are due to the luminosity determination.\nDifferential cross-sections as a function of the electron pseudorapidity are measured. The $W^+/W^−$ cross-section ratio and production charge asymmetry are also reported. Results are compared with theoretical predictions at next-to-next-to-leading order in perturbative quantum chromodynamics. Finally, in a precise test of lepton universality, the ratio of W boson branching fractions is determined to be\n$\\mathcal{B}(W→eν)/\\mathcal{B}(W→μν)=1.020±0.002±0.019$,\nwhere the first uncertainty is statistical and the second is systematic.\n\nAbstract\n\nA measurement of the cross-section for $W → eν$ production in pp collisions is presented using data corresponding to an integrated luminosity of 2 $fb^{−1}$ collected by the LHCb experiment at a centre-of-mass energy of $\\sqrt{s} = 8$ TeV. The electrons are required to have more than 20 GeV of transverse momentum and to lie between 2.00 and 4.25 in pseudorapidity. The inclusive W production cross-sections, where the W decays to eν, are measured to be\n$σ_{W^+}→_{{e^+}{ν_e}}=1124.4±2.1±21.5±11.2±13.0pb$,\n$σ_{W^−}→_{{e^+}{\\overlineν_e}}=809.0±1.9±18.1±7.0±9.4pb$,\nwhere the first uncertainties are statistical, the second are systematic, the third are due to the knowledge of the LHC beam energy and the fourth are due to the luminosity determination.\nDifferential cross-sections as a function of the electron pseudorapidity are measured. The $W^+/W^−$ cross-section ratio and production charge asymmetry are also reported. Results are compared with theoretical predictions at next-to-next-to-leading order in perturbative quantum chromodynamics. Finally, in a precise test of lepton universality, the ratio of W boson branching fractions is determined to be\n$\\mathcal{B}(W→eν)/\\mathcal{B}(W→μν)=1.020±0.002±0.019$,\nwhere the first uncertainty is statistical and the second is systematic.\n\nStatistics\n\nCitations\n\nDimensions.ai Metrics\n22 citations in Web of Science®\n25 citations in Scopus®\n\nAltmetrics\n\nDetailed statistics\n\nItem Type: Journal Article, refereed, original work 07 Faculty of Science > Physics Institute 530 Physics Physical Sciences > Nuclear and High Energy Physics English August 2016 27 Dec 2016 13:07 16 Jul 2021 07:19 Springer 1029-8479 Gold Publisher DOI. An embargo period may apply. https://doi.org/10.1007/JHEP10(2016)030", null, "", null, "", null, "Licence:", null, "" ]
[ null, "https://www.zora.uzh.ch/images/uzh_logo_en.jpg", null, "https://www.zora.uzh.ch/images/oa_lock_gold.png", null, "https://www.zora.uzh.ch/129660/8.hassmallThumbnailVersion/101007_JHEP102016030.pdf", null, "https://www.zora.uzh.ch/129660/8.haspreviewThumbnailVersion/101007_JHEP102016030.pdf", null, "https://www.zora.uzh.ch/license_images/by-4.0-88x31.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8761292,"math_prob":0.96350807,"size":1300,"snap":"2022-05-2022-21","text_gpt3_token_len":373,"char_repetition_ratio":0.10493827,"word_repetition_ratio":0.0,"special_character_ratio":0.27307692,"punctuation_ratio":0.118773945,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9957029,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,3,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-25T07:19:09Z\",\"WARC-Record-ID\":\"<urn:uuid:294ab169-bdc0-4792-a551-86d83d22899c>\",\"Content-Length\":\"54808\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:80f6a0a4-cc04-4896-9955-16a91ee7d0ba>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d2ce711-d12e-496d-b692-a5c5dcfd16d2>\",\"WARC-IP-Address\":\"130.60.206.230\",\"WARC-Target-URI\":\"https://www.zora.uzh.ch/id/eprint/129660/\",\"WARC-Payload-Digest\":\"sha1:UOKE3V4ZIY6ST7KLW7GDG6YOCRHZDOFB\",\"WARC-Block-Digest\":\"sha1:OC7VNG2VKYL5TZQWPGEQ4KZMAVJKAB5J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304798.1_warc_CC-MAIN-20220125070039-20220125100039-00520.warc.gz\"}"}
https://www.ncatlab.org/nlab/show/Sobolev+space
[ "Contents\n\n# Contents\n\n## Idea\n\nIf $1\\leq p \\lt \\infty$ and $\\Omega$ is a domain (in a $n$-dimensional real space with easy generalization to manifolds), one first considers the Lebesgue spaces $L_p = L_p(\\Omega)$ (wikipedia) of (equivalence classes of) measurable (complex- or real-valued) functions $f$ whose (absolute values of) $p$-th powers are Lebesgue integrable; i.e. whose norm\n\n$\\| f\\|_{L_p} = \\left(\\int_\\Omega |f|^p d\\mu\\right)^{1/p}$\n\nis finite. For $p = \\infty$, one looks at the essential supremum norm $\\|f\\|_{L_\\infty}$ instead.\n\nFor $1\\leq p \\leq \\infty$, and $k\\geq 1$ the Sobolev space $W^k_p = W^k_p(\\Omega)$ or $W^{k,p}(\\Omega)$ is the Banach space of measurable functions $f$ on $\\Omega$ such that its generalized partial derivatives $\\partial_1^{i_1}\\ldots\\partial_n^{i_n} f$ (e.g. in the sense of generalized functions) for all multiindices $i = (i_1,\\ldots, i_n)\\in\\mathbb{Z}^n_{\\geq 0}$ with $i_1+\\ldots +i_n\\leq k$ are in $L_p(\\Omega)$. The most important case is the case of the Sobolev spaces $H^k(\\Omega) := W^k_2(\\Omega)$. Sobolev spaces are particularly important in the theory of partial differential equations.\n\n• L. C. Evans, Partial Differential Equations, Amer. Math. Soc. 1998.\n\n• R.A. Adams, Sobolev spaces, Acad. Press 1975.\n\n• wikipedia: Sobolev space.\n\n• Springer Encyclopaedia of Mathematics: Sobolev space\n\n• G. Wilkin, Sobolev spaces on Euclidean space, Liber Mathematicae 2011, link\n\ncategory: analysis" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81033117,"math_prob":0.9998359,"size":1099,"snap":"2022-40-2023-06","text_gpt3_token_len":277,"char_repetition_ratio":0.1369863,"word_repetition_ratio":0.0,"special_character_ratio":0.22747953,"punctuation_ratio":0.18224299,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997413,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-25T08:42:13Z\",\"WARC-Record-ID\":\"<urn:uuid:6886df2c-a6bf-4bd0-aad6-528b960bd3ed>\",\"Content-Length\":\"25957\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3e093217-7443-4767-85e6-7b3ec64e2232>\",\"WARC-Concurrent-To\":\"<urn:uuid:4a05669c-8497-4d7c-a6a2-8860eb15723b>\",\"WARC-IP-Address\":\"104.21.17.153\",\"WARC-Target-URI\":\"https://www.ncatlab.org/nlab/show/Sobolev+space\",\"WARC-Payload-Digest\":\"sha1:YXME2IG7J3RXDGBQYOSVKYVNZTQ364GC\",\"WARC-Block-Digest\":\"sha1:WMGXRAG3JMLXSXGINH4L3OMFSVLX3YXX\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334515.14_warc_CC-MAIN-20220925070216-20220925100216-00050.warc.gz\"}"}
https://arblib.org/acb.html
[ "# acb.h – complex numbers¶\n\nAn acb_t represents a complex number with error bounds. An acb_t consists of a pair of real number balls of type arb_struct, representing the real and imaginary part with separate error bounds.\n\nAn acb_t thus represents a rectangle $$[m_1-r_1, m_1+r_1] + [m_2-r_2, m_2+r_2] i$$ in the complex plane. This is used instead of a disk or square representation (consisting of a complex floating-point midpoint with a single radius), since it allows implementing many operations more conveniently by splitting into ball operations on the real and imaginary parts. It also allows tracking when complex numbers have an exact (for example exactly zero) real part and an inexact imaginary part, or vice versa.\n\nThe interface for the acb_t type is slightly less developed than that for the arb_t type. In many cases, the user can easily perform missing operations by directly manipulating the real and imaginary parts.\n\n## Types, macros and constants¶\n\ntype acb_struct\ntype acb_t\n\nAn acb_struct consists of a pair of arb_struct:s. An acb_t is defined as an array of length one of type acb_struct, permitting an acb_t to be passed by reference.\n\ntype acb_ptr\n\nAlias for acb_struct *, used for vectors of numbers.\n\ntype acb_srcptr\n\nAlias for const acb_struct *, used for vectors of numbers when passed as constant input to functions.\n\nacb_realref(x)\n\nMacro returning a pointer to the real part of x as an arb_t.\n\nacb_imagref(x)\n\nMacro returning a pointer to the imaginary part of x as an arb_t.\n\n## Memory management¶\n\nvoid acb_init(acb_t x)\n\nInitializes the variable x for use, and sets its value to zero.\n\nvoid acb_clear(acb_t x)\n\nClears the variable x, freeing or recycling its allocated memory.\n\nacb_ptr _acb_vec_init(slong n)\n\nReturns a pointer to an array of n initialized acb_struct:s.\n\nvoid _acb_vec_clear(acb_ptr v, slong n)\n\nClears an array of n initialized acb_struct:s.\n\nslong acb_allocated_bytes(const acb_t x)\n\nReturns the total number of bytes heap-allocated internally by this object. The count excludes the size of the structure itself. Add sizeof(acb_struct) to get the size of the object as a whole.\n\nslong _acb_vec_allocated_bytes(acb_srcptr vec, slong len)\n\nReturns the total number of bytes allocated for this vector, i.e. the space taken up by the vector itself plus the sum of the internal heap allocation sizes for all its member elements.\n\ndouble _acb_vec_estimate_allocated_bytes(slong len, slong prec)\n\nEstimates the number of bytes that need to be allocated for a vector of len elements with prec bits of precision, including the space for internal limb data. 
See comments for _arb_vec_estimate_allocated_bytes().

## Basic manipulation

void acb_zero(acb_t z)
void acb_one(acb_t z)
void acb_onei(acb_t z)

Sets z respectively to 0, 1, $$i = \sqrt{-1}$$.

void acb_set(acb_t z, const acb_t x)
void acb_set_ui(acb_t z, ulong x)
void acb_set_si(acb_t z, slong x)
void acb_set_d(acb_t z, double x)
void acb_set_fmpz(acb_t z, const fmpz_t x)
void acb_set_arb(acb_t z, const arb_t c)

Sets z to the value of x.

void acb_set_si_si(acb_t z, slong x, slong y)
void acb_set_d_d(acb_t z, double x, double y)
void acb_set_fmpz_fmpz(acb_t z, const fmpz_t x, const fmpz_t y)
void acb_set_arb_arb(acb_t z, const arb_t x, const arb_t y)

Sets the real and imaginary parts of z to the values x and y respectively.

void acb_set_fmpq(acb_t z, const fmpq_t x, slong prec)
void acb_set_round(acb_t z, const acb_t x, slong prec)
void acb_set_round_fmpz(acb_t z, const fmpz_t x, slong prec)
void acb_set_round_arb(acb_t z, const arb_t x, slong prec)

Sets z to x, rounded to prec bits.

void acb_swap(acb_t z, acb_t x)

Swaps z and x efficiently.

void acb_add_error_arf(acb_t x, const arf_t err)
void acb_add_error_mag(acb_t x, const mag_t err)
void acb_add_error_arb(acb_t x, const arb_t err)

Adds err to the error bounds of both the real and imaginary parts of x, modifying x in-place.

void acb_get_mid(acb_t m, const acb_t x)

Sets m to the midpoint of x.

## Input and output

The acb_print… functions print to standard output, while acb_fprint… functions print to the stream file.

void acb_print(const acb_t x)
void acb_fprint(FILE *file, const acb_t x)

Prints the internal representation of x.

void acb_printd(const acb_t x, slong digits)
void acb_fprintd(FILE *file, const acb_t x, slong digits)

Prints x in decimal. The printed value of the radius is not adjusted to compensate for the fact that the binary-to-decimal conversion of both the midpoint and the radius introduces additional error.

void acb_printn(const acb_t x, slong digits, ulong flags)
void acb_fprintn(FILE *file, const acb_t x, slong digits, ulong flags)

Prints a nice decimal representation of x, using the format of arb_get_str() (or the corresponding arb_printn()) for the real and imaginary parts.

By default, the output shows the midpoint of both the real and imaginary parts with a guaranteed error of at most one unit in the last decimal place. In addition, explicit error bounds are printed so that the displayed decimal interval is guaranteed to enclose x.

Any flags understood by arb_get_str() can be passed via flags to control the format of the real and imaginary parts.

## Random number generation

void acb_randtest(acb_t z, flint_rand_t state, slong prec, slong mag_bits)

Generates a random complex number by generating separate random real and imaginary parts.

void acb_randtest_special(acb_t z, flint_rand_t state, slong prec, slong mag_bits)

Generates a random complex number by generating separate random real and imaginary parts.
Also generates NaNs and infinities.

void acb_randtest_precise(acb_t z, flint_rand_t state, slong prec, slong mag_bits)

Generates a random complex number with precise real and imaginary parts.

void acb_randtest_param(acb_t z, flint_rand_t state, slong prec, slong mag_bits)

Generates a random complex number, with very high probability of generating integers and half-integers.

## Precision and comparisons

int acb_is_zero(const acb_t z)

Returns nonzero iff z is zero.

int acb_is_one(const acb_t z)

Returns nonzero iff z is exactly 1.

int acb_is_finite(const acb_t z)

Returns nonzero iff z certainly is finite.

int acb_is_exact(const acb_t z)

Returns nonzero iff z is exact.

int acb_is_int(const acb_t z)

Returns nonzero iff z is an exact integer.

int acb_is_int_2exp_si(const acb_t x, slong e)

Returns nonzero iff x exactly equals $$n 2^e$$ for some integer n.

int acb_equal(const acb_t x, const acb_t y)

Returns nonzero iff x and y are identical as sets, i.e. if the real and imaginary parts are equal as balls.

Note that this is not the same thing as testing whether both x and y certainly represent the same complex number, unless either x or y is exact (and neither contains NaN). To test whether both operands might represent the same mathematical quantity, use acb_overlaps() or acb_contains(), depending on the circumstance.

int acb_equal_si(const acb_t x, slong y)

Returns nonzero iff x is equal to the integer y.

int acb_eq(const acb_t x, const acb_t y)

Returns nonzero iff x and y are certainly equal, as determined by testing that arb_eq() holds for both the real and imaginary parts.

int acb_ne(const acb_t x, const acb_t y)

Returns nonzero iff x and y are certainly not equal, as determined by testing that arb_ne() holds for either the real or imaginary parts.

int acb_overlaps(const acb_t x, const acb_t y)

Returns nonzero iff x and y have some point in common.

void acb_union(acb_t z, const acb_t x, const acb_t y, slong prec)

Sets z to a complex interval containing both x and y.

void acb_get_abs_ubound_arf(arf_t u, const acb_t z, slong prec)

Sets u to an upper bound for the absolute value of z, computed using a working precision of prec bits.

void acb_get_abs_lbound_arf(arf_t u, const acb_t z, slong prec)

Sets u to a lower bound for the absolute value of z, computed using a working precision of prec bits.

void acb_get_rad_ubound_arf(arf_t u, const acb_t z, slong prec)

Sets u to an upper bound for the error radius of z (the value is currently not computed tightly).

void acb_get_mag(mag_t u, const acb_t x)

Sets u to an upper bound for the absolute value of x.

void acb_get_mag_lower(mag_t u, const acb_t x)

Sets u to a lower bound for the absolute value of x.

int acb_contains_fmpq(const acb_t x, const fmpq_t y)
int acb_contains_fmpz(const acb_t x, const fmpz_t y)
int acb_contains(const acb_t x, const acb_t y)

Returns nonzero iff y is contained in x.

int acb_contains_zero(const acb_t x)

Returns nonzero iff zero is contained in x.

int acb_contains_int(const acb_t x)

Returns nonzero iff the complex interval represented by x contains an integer.
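The distinction between acb_equal, acb_overlaps and acb_contains is easy to see in a small program. The following is a usage sketch (not taken from the official documentation; the precisions of 32 and 128 bits are arbitrary choices):

```c
#include "acb.h"

int main(void)
{
    acb_t x, y, m;
    acb_init(x); acb_init(y); acb_init(m);

    /* The same number, 1/3, enclosed at two different precisions */
    acb_set_si(x, 1); acb_div_ui(x, x, 3, 32);    /* wide ball  */
    acb_set_si(y, 1); acb_div_ui(y, y, 3, 128);   /* tight ball */
    acb_get_mid(m, x);                            /* exact midpoint of x */

    flint_printf("x = "); acb_printn(x, 10, 0); flint_printf("\n");
    flint_printf("y = "); acb_printn(y, 10, 0); flint_printf("\n");

    /* Different radii, so not identical as sets: */
    flint_printf("acb_equal(x, y)    = %d\n", acb_equal(x, y));     /* 0 */
    /* Both enclose the exact value 1/3, so they share a point: */
    flint_printf("acb_overlaps(x, y) = %d\n", acb_overlaps(x, y));  /* 1 */
    /* A ball always contains its own midpoint: */
    flint_printf("acb_contains(x, m) = %d\n", acb_contains(x, m));  /* 1 */

    acb_clear(x); acb_clear(y); acb_clear(m);
    flint_cleanup();
    return 0;
}
```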
int acb_contains_interior(const acb_t x, const acb_t y)

Tests if y is contained in the interior of x. This predicate always evaluates to false if x and y are both real-valued, since an imaginary part of 0 is not considered contained in the interior of the point interval 0. More generally, the same problem occurs for intervals with an exact real or imaginary part. Such intervals must be handled specially by the user where a different interpretation is intended.

slong acb_rel_error_bits(const acb_t x)

Returns the effective relative error of x measured in bits. This is computed as if calling arb_rel_error_bits() on the real ball whose midpoint is the larger out of the real and imaginary midpoints of x, and whose radius is the larger out of the real and imaginary radii of x.

slong acb_rel_accuracy_bits(const acb_t x)

Returns the effective relative accuracy of x measured in bits, equal to the negative of the return value from acb_rel_error_bits().

slong acb_rel_one_accuracy_bits(const acb_t x)

Given a ball with midpoint m and radius r, returns an approximation of the relative accuracy of $$[\max(1,|m|) \pm r]$$ measured in bits.

slong acb_bits(const acb_t x)

Returns the maximum of arb_bits applied to the real and imaginary parts of x, i.e. the minimum precision sufficient to represent x exactly.

void acb_indeterminate(acb_t x)

Sets x to $$[\operatorname{NaN} \pm \infty] + [\operatorname{NaN} \pm \infty]i$$, representing an indeterminate result.

void acb_trim(acb_t y, const acb_t x)

Sets y to a copy of x with both the real and imaginary parts trimmed (see arb_trim()).

int acb_is_real(const acb_t x)

Returns nonzero iff the imaginary part of x is zero. It does not test whether the real part of x also is finite.

int acb_get_unique_fmpz(fmpz_t z, const acb_t x)

If x contains a unique integer, sets z to that value and returns nonzero. Otherwise (if x represents no integers or more than one integer), returns zero.

## Complex parts

void acb_get_real(arb_t re, const acb_t z)

Sets re to the real part of z.

void acb_get_imag(arb_t im, const acb_t z)

Sets im to the imaginary part of z.

void acb_arg(arb_t r, const acb_t z, slong prec)

Sets r to a real interval containing the complex argument (phase) of z. We define the complex argument to have a discontinuity on $$(-\infty,0]$$, with the special value $$\operatorname{arg}(0) = 0$$, and $$\operatorname{arg}(a+0i) = \pi$$ for $$a < 0$$. Equivalently, if $$z = a+bi$$, the argument is given by $$\operatorname{atan2}(b,a)$$ (see arb_atan2()).

void acb_abs(arb_t r, const acb_t z, slong prec)

Sets r to the absolute value of z.

void acb_sgn(acb_t r, const acb_t z, slong prec)

Sets r to the complex sign of z, defined as 0 if z is exactly zero and the projection onto the unit circle $$z / |z| = \exp(i \arg(z))$$ otherwise.

void acb_csgn(arb_t r, const acb_t z)

Sets r to the extension of the real sign function taking the value 1 for z strictly in the right half plane, -1 for z strictly in the left half plane, and the sign of the imaginary part when z is on the imaginary axis.
Equivalently, $$\\operatorname{csgn}(z) = z / \\sqrt{z^2}$$ except that the value is 0 when z is exactly zero.\n\n## Arithmetic¶\n\nvoid acb_neg(acb_t z, const acb_t x)\nvoid acb_neg_round(acb_t z, const acb_t x, slong prec)\n\nSets z to the negation of x.\n\nvoid acb_conj(acb_t z, const acb_t x)\n\nSets z to the complex conjugate of x.\n\nvoid acb_add_ui(acb_t z, const acb_t x, ulong y, slong prec)\nvoid acb_add_si(acb_t z, const acb_t x, slong y, slong prec)\nvoid acb_add_fmpz(acb_t z, const acb_t x, const fmpz_t y, slong prec)\nvoid acb_add_arb(acb_t z, const acb_t x, const arb_t y, slong prec)\nvoid acb_add(acb_t z, const acb_t x, const acb_t y, slong prec)\n\nSets z to the sum of x and y.\n\nvoid acb_sub_ui(acb_t z, const acb_t x, ulong y, slong prec)\nvoid acb_sub_si(acb_t z, const acb_t x, slong y, slong prec)\nvoid acb_sub_fmpz(acb_t z, const acb_t x, const fmpz_t y, slong prec)\nvoid acb_sub_arb(acb_t z, const acb_t x, const arb_t y, slong prec)\nvoid acb_sub(acb_t z, const acb_t x, const acb_t y, slong prec)\n\nSets z to the difference of x and y.\n\nvoid acb_mul_onei(acb_t z, const acb_t x)\n\nSets z to x multiplied by the imaginary unit.\n\nvoid acb_div_onei(acb_t z, const acb_t x)\n\nSets z to x divided by the imaginary unit.\n\nvoid acb_mul_ui(acb_t z, const acb_t x, ulong y, slong prec)\nvoid acb_mul_si(acb_t z, const acb_t x, slong y, slong prec)\nvoid acb_mul_fmpz(acb_t z, const acb_t x, const fmpz_t y, slong prec)\nvoid acb_mul_arb(acb_t z, const acb_t x, const arb_t y, slong prec)\n\nSets z to the product of x and y.\n\nvoid acb_mul(acb_t z, const acb_t x, const acb_t y, slong prec)\n\nSets z to the product of x and y. If at least one part of x or y is zero, the operations is reduced to two real multiplications. If x and y are the same pointers, they are assumed to represent the same mathematical quantity and the squaring formula is used.\n\nvoid acb_mul_2exp_si(acb_t z, const acb_t x, slong e)\nvoid acb_mul_2exp_fmpz(acb_t z, const acb_t x, const fmpz_t e)\n\nSets z to x multiplied by $$2^e$$, without rounding.\n\nvoid acb_sqr(acb_t z, const acb_t x, slong prec)\n\nSets z to x squared.\n\nvoid acb_cube(acb_t z, const acb_t x, slong prec)\n\nSets z to x cubed, computed efficiently using two real squarings, two real multiplications, and scalar operations.\n\nvoid acb_addmul(acb_t z, const acb_t x, const acb_t y, slong prec)\nvoid acb_addmul_ui(acb_t z, const acb_t x, ulong y, slong prec)\nvoid acb_addmul_si(acb_t z, const acb_t x, slong y, slong prec)\nvoid acb_addmul_fmpz(acb_t z, const acb_t x, const fmpz_t y, slong prec)\nvoid acb_addmul_arb(acb_t z, const acb_t x, const arb_t y, slong prec)\n\nSets z to z plus the product of x and y.\n\nvoid acb_submul(acb_t z, const acb_t x, const acb_t y, slong prec)\nvoid acb_submul_ui(acb_t z, const acb_t x, ulong y, slong prec)\nvoid acb_submul_si(acb_t z, const acb_t x, slong y, slong prec)\nvoid acb_submul_fmpz(acb_t z, const acb_t x, const fmpz_t y, slong prec)\nvoid acb_submul_arb(acb_t z, const acb_t x, const arb_t y, slong prec)\n\nSets z to z minus the product of x and y.\n\nvoid acb_inv(acb_t z, const acb_t x, slong prec)\n\nSets z to the multiplicative inverse of x.\n\nvoid acb_div_ui(acb_t z, const acb_t x, ulong y, slong prec)\nvoid acb_div_si(acb_t z, const acb_t x, slong y, slong prec)\nvoid acb_div_fmpz(acb_t z, const acb_t x, const fmpz_t y, slong prec)\nvoid acb_div_arb(acb_t z, const acb_t x, const arb_t y, slong prec)\nvoid acb_div(acb_t z, const acb_t x, const acb_t y, slong prec)\n\nSets z to the quotient of x 
## Dot product¶\n\nvoid acb_dot_precise(acb_t res, const acb_t s, int subtract, acb_srcptr x, slong xstep, acb_srcptr y, slong ystep, slong len, slong prec)\nvoid acb_dot_simple(acb_t res, const acb_t s, int subtract, acb_srcptr x, slong xstep, acb_srcptr y, slong ystep, slong len, slong prec)\nvoid acb_dot(acb_t res, const acb_t s, int subtract, acb_srcptr x, slong xstep, acb_srcptr y, slong ystep, slong len, slong prec)\n\nComputes the dot product of the vectors x and y, setting res to $$s + (-1)^{subtract} \sum_{i=0}^{len-1} x_i y_i$$.\n\nThe initial term s is optional and can be omitted by passing NULL (equivalently, $$s = 0$$). The parameter subtract must be 0 or 1. The length len is allowed to be negative, which is equivalent to a length of zero. The parameters xstep or ystep specify a step length for traversing subsequences of the vectors x and y; either can be negative to step in the reverse direction starting from the initial pointer. Aliasing is allowed between res and s but not between res and the entries of x and y.\n\nThe default version determines the optimal precision for each term and performs all internal calculations using mpn arithmetic with minimal overhead. This is the preferred way to compute a dot product; it is generally much faster and more precise than a simple loop.\n\nThe simple version performs fused multiply-add operations in a simple loop. This can be used for testing purposes and is also used as a fallback by the default version when the exponents are out of range for the optimized code.\n\nThe precise version computes the dot product exactly up to the final rounding. This can be extremely slow and is only intended for testing.\n\nvoid acb_approx_dot(acb_t res, const acb_t s, int subtract, acb_srcptr x, slong xstep, acb_srcptr y, slong ystep, slong len, slong prec)\n\nComputes an approximate dot product without error bounds. The radii of the inputs are ignored (only the midpoints are read) and only the midpoint of the output is written.\n\nvoid acb_dot_ui(acb_t res, const acb_t initial, int subtract, acb_srcptr x, slong xstep, const ulong *y, slong ystep, slong len, slong prec)\nvoid acb_dot_si(acb_t res, const acb_t initial, int subtract, acb_srcptr x, slong xstep, const slong *y, slong ystep, slong len, slong prec)\nvoid acb_dot_uiui(acb_t res, const acb_t initial, int subtract, acb_srcptr x, slong xstep, const ulong *y, slong ystep, slong len, slong prec)\nvoid acb_dot_siui(acb_t res, const acb_t initial, int subtract, acb_srcptr x, slong xstep, const ulong *y, slong ystep, slong len, slong prec)\nvoid acb_dot_fmpz(acb_t res, const acb_t initial, int subtract, acb_srcptr x, slong xstep, const fmpz *y, slong ystep, slong len, slong prec)\n\nEquivalent to acb_dot(), but with integers in the array y. The uiui and siui versions take an array of double-limb integers as input; the siui version assumes that these represent signed integers in two’s complement form.\n\n## Mathematical constants¶\n\nvoid acb_const_pi(acb_t y, slong prec)\n\nSets y to the constant $$\pi$$.\n\n## Powers and roots¶\n\nvoid acb_sqrt(acb_t r, const acb_t z, slong prec)\n\nSets r to the square root of z. If either the real or imaginary part is exactly zero, only a single real square root is needed. Generally, we use the formula $$\sqrt{a+bi} = u/2 + ib/u, u = \sqrt{2(|a+bi|+a)}$$, requiring two real square root extractions.\n\nvoid acb_sqrt_analytic(acb_t r, const acb_t z, int analytic, slong prec)\n\nComputes the square root. If analytic is set, gives a NaN-containing result if z touches the branch cut.
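An illustrative sketch (ours, not upstream documentation; same build assumptions as the earlier examples): for an exact input, squaring the computed enclosure of the square root must again enclose the input:

```c
#include "acb.h"

int main(void)
{
    acb_t z, r, s;
    acb_init(z); acb_init(r); acb_init(s);

    acb_set_d_d(z, 3.0, 4.0);   /* z = 3 + 4i, exactly representable */
    acb_sqrt(r, z, 53);         /* r: enclosure of sqrt(z) = 2 + i */
    acb_sqr(s, r, 53);          /* s: enclosure of r^2, must contain z */

    flint_printf("sqrt(z) = "); acb_printd(r, 15); flint_printf("\n");
    flint_printf("z in sqrt(z)^2: %d\n", acb_contains(s, z));

    acb_clear(z); acb_clear(r); acb_clear(s);
    return 0;
}
```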
void acb_rsqrt(acb_t r, const acb_t z, slong prec)\n\nSets r to the reciprocal square root of z. If either the real or imaginary part is exactly zero, only a single real reciprocal square root is needed. Generally, we use the formula $$1/\sqrt{a+bi} = ((a+r) - bi)/v, r = |a+bi|, v = \sqrt{r |a+bi+r|^2}$$, requiring one real square root and one real reciprocal square root.\n\nvoid acb_rsqrt_analytic(acb_t r, const acb_t z, int analytic, slong prec)\n\nComputes the reciprocal square root. If analytic is set, gives a NaN-containing result if z touches the branch cut.\n\nvoid acb_quadratic_roots_fmpz(acb_t r1, acb_t r2, const fmpz_t a, const fmpz_t b, const fmpz_t c, slong prec)\n\nSets r1 and r2 to the roots of the quadratic polynomial $$ax^2 + bx + c$$. Requires that a is nonzero. This function is implemented so that both roots are computed accurately even when direct use of the quadratic formula would lose accuracy.\n\nvoid acb_root_ui(acb_t r, const acb_t z, ulong k, slong prec)\n\nSets r to the principal k-th root of z.\n\nvoid acb_pow_fmpz(acb_t y, const acb_t b, const fmpz_t e, slong prec)\nvoid acb_pow_ui(acb_t y, const acb_t b, ulong e, slong prec)\nvoid acb_pow_si(acb_t y, const acb_t b, slong e, slong prec)\n\nSets $$y = b^e$$ using binary exponentiation (with an initial division if $$e < 0$$). Note that these functions can get slow if the exponent is extremely large (in such cases acb_pow() may be superior).\n\nvoid acb_pow_arb(acb_t z, const acb_t x, const arb_t y, slong prec)\nvoid acb_pow(acb_t z, const acb_t x, const acb_t y, slong prec)\n\nSets $$z = x^y$$, computed using binary exponentiation if $$y$$ is a small exact integer, as $$z = (x^{1/2})^{2y}$$ if $$y$$ is a small exact half-integer, and generally as $$z = \exp(y \log x)$$.\n\nvoid acb_pow_analytic(acb_t r, const acb_t x, const acb_t y, int analytic, slong prec)\n\nComputes the power $$x^y$$. If analytic is set, gives a NaN-containing result if x touches the branch cut (unless y is an integer).\n\nvoid acb_unit_root(acb_t res, ulong order, slong prec)\n\nSets res to $$\exp(\frac{2i\pi}{\mathrm{order}})$$ to precision prec.\n\n## Exponentials and logarithms¶\n\nvoid acb_exp(acb_t y, const acb_t z, slong prec)\n\nSets y to the exponential function of z, computed as $$\exp(a+bi) = \exp(a) \left( \cos(b) + \sin(b) i \right)$$.\n\nvoid acb_exp_pi_i(acb_t y, const acb_t z, slong prec)\n\nSets y to $$\exp(\pi i z)$$.\n\nvoid acb_exp_invexp(acb_t s, acb_t t, const acb_t z, slong prec)\n\nSets $$s = \exp(z)$$ and $$t = \exp(-z)$$.\n\nvoid acb_expm1(acb_t res, const acb_t z, slong prec)\n\nSets res to $$\exp(z)-1$$, using a more accurate method when $$z \approx 0$$.\n\nvoid acb_log(acb_t y, const acb_t z, slong prec)\n\nSets y to the principal branch of the natural logarithm of z, computed as $$\log(a+bi) = \frac{1}{2} \log(a^2 + b^2) + i \operatorname{arg}(a+bi)$$.
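A sketch (ours) of the exp/log pair: for an exact z with imaginary part in (-pi, pi], the principal logarithm inverts the exponential, so the computed enclosure of log(exp(z)) must contain z:

```c
#include "acb.h"

int main(void)
{
    acb_t z, t;
    acb_init(z); acb_init(t);

    acb_set_d_d(z, -1.5, 2.0);  /* Im(z) = 2 lies in (-pi, pi] */
    acb_exp(t, z, 64);          /* t = enclosure of exp(z) */
    acb_log(t, t, 64);          /* principal log of that enclosure */

    flint_printf("z in log(exp(z)): %d\n", acb_contains(t, z));

    acb_clear(z); acb_clear(t);
    return 0;
}
```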
void acb_log_analytic(acb_t r, const acb_t z, int analytic, slong prec)\n\nComputes the natural logarithm. If analytic is set, gives a NaN-containing result if z touches the branch cut.\n\nvoid acb_log1p(acb_t z, const acb_t x, slong prec)\n\nSets $$z = \log(1+x)$$, computed accurately when $$x \approx 0$$.\n\n## Trigonometric functions¶\n\nvoid acb_sin(acb_t s, const acb_t z, slong prec)\nvoid acb_cos(acb_t c, const acb_t z, slong prec)\nvoid acb_sin_cos(acb_t s, acb_t c, const acb_t z, slong prec)\n\nSets $$s = \sin(z)$$, $$c = \cos(z)$$, evaluated as $$\sin(a+bi) = \sin(a)\cosh(b) + i \cos(a)\sinh(b)$$, $$\cos(a+bi) = \cos(a)\cosh(b) - i \sin(a)\sinh(b)$$.\n\nvoid acb_tan(acb_t s, const acb_t z, slong prec)\n\nSets $$s = \tan(z) = \sin(z) / \cos(z)$$. For large imaginary parts, the function is evaluated in a numerically stable way as $$\pm i$$ plus a decreasing exponential factor.\n\nvoid acb_cot(acb_t s, const acb_t z, slong prec)\n\nSets $$s = \cot(z) = \cos(z) / \sin(z)$$. For large imaginary parts, the function is evaluated in a numerically stable way as $$\pm i$$ plus a decreasing exponential factor.\n\nvoid acb_sin_pi(acb_t s, const acb_t z, slong prec)\nvoid acb_cos_pi(acb_t c, const acb_t z, slong prec)\nvoid acb_sin_cos_pi(acb_t s, acb_t c, const acb_t z, slong prec)\n\nSets $$s = \sin(\pi z)$$, $$c = \cos(\pi z)$$, evaluating the trigonometric factors of the real and imaginary part accurately via arb_sin_cos_pi().\n\nvoid acb_tan_pi(acb_t s, const acb_t z, slong prec)\n\nSets $$s = \tan(\pi z)$$. Uses the same algorithm as acb_tan(), but evaluates the sine and cosine accurately via arb_sin_cos_pi().\n\nvoid acb_cot_pi(acb_t s, const acb_t z, slong prec)\n\nSets $$s = \cot(\pi z)$$. Uses the same algorithm as acb_cot(), but evaluates the sine and cosine accurately via arb_sin_cos_pi().\n\nvoid acb_sec(acb_t res, const acb_t z, slong prec)\n\nComputes $$\sec(z) = 1 / \cos(z)$$.\n\nvoid acb_csc(acb_t res, const acb_t z, slong prec)\n\nComputes $$\csc(z) = 1 / \sin(z)$$.\n\nvoid acb_csc_pi(acb_t res, const acb_t z, slong prec)\n\nComputes $$\csc(\pi z) = 1 / \sin(\pi z)$$. Evaluates the sine accurately via acb_sin_pi().
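A sketch (ours, same assumptions as above) verifying the identity sin^2(z) + cos^2(z) = 1 in ball arithmetic via acb_sin_cos():

```c
#include "acb.h"

int main(void)
{
    acb_t z, s, c, one;
    acb_init(z); acb_init(s); acb_init(c); acb_init(one);

    acb_set_d_d(z, 0.5, -2.0);
    acb_sin_cos(s, c, z, 64);   /* simultaneous sine and cosine */

    acb_sqr(s, s, 64);          /* sin(z)^2 */
    acb_sqr(c, c, 64);          /* cos(z)^2 */
    acb_add(s, s, c, 64);       /* enclosure of sin^2 + cos^2 */

    acb_one(one);
    flint_printf("encloses 1: %d\n", acb_contains(s, one));

    acb_clear(z); acb_clear(s); acb_clear(c); acb_clear(one);
    return 0;
}
```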
void acb_sinc(acb_t s, const acb_t z, slong prec)\n\nSets $$s = \operatorname{sinc}(z) = \sin(z) / z$$.\n\nvoid acb_sinc_pi(acb_t s, const acb_t z, slong prec)\n\nSets $$s = \operatorname{sinc}(\pi z) = \sin(\pi z) / (\pi z)$$.\n\n## Inverse trigonometric functions¶\n\nvoid acb_asin(acb_t res, const acb_t z, slong prec)\n\nSets res to $$\operatorname{asin}(z) = -i \log(iz + \sqrt{1-z^2})$$.\n\nvoid acb_acos(acb_t res, const acb_t z, slong prec)\n\nSets res to $$\operatorname{acos}(z) = \tfrac{1}{2} \pi - \operatorname{asin}(z)$$.\n\nvoid acb_atan(acb_t res, const acb_t z, slong prec)\n\nSets res to $$\operatorname{atan}(z) = \tfrac{1}{2} i (\log(1-iz)-\log(1+iz))$$.\n\n## Hyperbolic functions¶\n\nvoid acb_sinh(acb_t s, const acb_t z, slong prec)\nvoid acb_cosh(acb_t c, const acb_t z, slong prec)\nvoid acb_sinh_cosh(acb_t s, acb_t c, const acb_t z, slong prec)\nvoid acb_tanh(acb_t s, const acb_t z, slong prec)\nvoid acb_coth(acb_t s, const acb_t z, slong prec)\n\nRespectively computes $$\sinh(z) = -i\sin(iz)$$, $$\cosh(z) = \cos(iz)$$, $$\tanh(z) = -i\tan(iz)$$, $$\coth(z) = i\cot(iz)$$.\n\nvoid acb_sech(acb_t res, const acb_t z, slong prec)\n\nComputes $$\operatorname{sech}(z) = 1 / \cosh(z)$$.\n\nvoid acb_csch(acb_t res, const acb_t z, slong prec)\n\nComputes $$\operatorname{csch}(z) = 1 / \sinh(z)$$.\n\n## Inverse hyperbolic functions¶\n\nvoid acb_asinh(acb_t res, const acb_t z, slong prec)\n\nSets res to $$\operatorname{asinh}(z) = -i \operatorname{asin}(iz)$$.\n\nvoid acb_acosh(acb_t res, const acb_t z, slong prec)\n\nSets res to $$\operatorname{acosh}(z) = \log(z + \sqrt{z+1} \sqrt{z-1})$$.\n\nvoid acb_atanh(acb_t res, const acb_t z, slong prec)\n\nSets res to $$\operatorname{atanh}(z) = -i \operatorname{atan}(iz)$$.\n\n## Lambert W function¶\n\nvoid acb_lambertw_asymp(acb_t res, const acb_t z, const fmpz_t k, slong L, slong M, slong prec)\n\nSets res to the Lambert W function $$W_k(z)$$ computed using L and M terms in the bivariate series giving the asymptotic expansion at zero or infinity. This algorithm is valid everywhere, but the error bound is only finite when $$|\log(z)|$$ is sufficiently large.\n\nint acb_lambertw_check_branch(const acb_t w, const fmpz_t k, slong prec)\n\nTests if w definitely lies in the image of the branch $$W_k(z)$$. This function is used internally to verify that a computed approximation of the Lambert W function lies on the intended branch. Note that this will necessarily evaluate to false for points exactly on (or overlapping) the branch cuts, where a different algorithm has to be used.\n\nvoid acb_lambertw_bound_deriv(mag_t res, const acb_t z, const acb_t ez1, const fmpz_t k)\n\nSets res to an upper bound for $$|W_k'(z)|$$. The input ez1 should contain the precomputed value of $$ez+1$$.\n\nAlong the real line, the directional derivative of $$W_k(z)$$ is understood to be taken. As a result, the user must handle the branch cut discontinuity separately when using this function to bound perturbations in the value of $$W_k(z)$$.\n\nvoid acb_lambertw(acb_t res, const acb_t z, const fmpz_t k, int flags, slong prec)\n\nSets res to the Lambert W function $$W_k(z)$$ where the index k selects the branch (with $$k = 0$$ giving the principal branch). 
The placement of branch cuts follows [CGHJK1996].\n\nIf flags is nonzero, nonstandard branch cuts are used.\n\nIf flags is set to ACB_LAMBERTW_LEFT, computes $$W_{\mathrm{left}|k}(z)$$ which corresponds to $$W_k(z)$$ in the upper half plane and $$W_{k+1}(z)$$ in the lower half plane, connected continuously to the left of the branch points. In other words, the branch cut on $$(-\infty,0)$$ is rotated counterclockwise to $$(0,+\infty)$$. (For $$k = -1$$ and $$k = 0$$, there is also a branch cut on $$(-1/e,0)$$, continuous from below instead of from above to maintain counterclockwise continuity.)\n\nIf flags is set to ACB_LAMBERTW_MIDDLE, computes $$W_{\mathrm{middle}}(z)$$ which corresponds to $$W_{-1}(z)$$ in the upper half plane and $$W_{1}(z)$$ in the lower half plane, connected continuously through $$(-1/e,0)$$ with branch cuts on $$(-\infty,-1/e)$$ and $$(0,+\infty)$$. $$W_{\mathrm{middle}}(z)$$ extends the real analytic function $$W_{-1}(x)$$ defined on $$(-1/e,0)$$ to a complex analytic function, whereas the standard branch $$W_{-1}(z)$$ has a branch cut along the real segment.\n\nThe algorithm used to compute the Lambert W function is described in [Joh2017b].\n\n## Rising factorials¶\n\nvoid acb_rising_ui(acb_t z, const acb_t x, ulong n, slong prec)\nvoid acb_rising(acb_t z, const acb_t x, const acb_t n, slong prec)\n\nComputes the rising factorial $$z = x (x+1) (x+2) \cdots (x+n-1)$$. These functions are aliases for acb_hypgeom_rising_ui() and acb_hypgeom_rising().\n\nvoid acb_rising2_ui(acb_t u, acb_t v, const acb_t x, ulong n, slong prec)\n\nLetting $$u(x) = x (x+1) (x+2) \cdots (x+n-1)$$, simultaneously compute $$u(x)$$ and $$v(x) = u'(x)$$. This function is a wrapper of acb_hypgeom_rising_ui_jet().\n\nvoid acb_rising_ui_get_mag(mag_t bound, const acb_t x, ulong n)\n\nComputes an upper bound for the absolute value of the rising factorial $$z = x (x+1) (x+2) \cdots (x+n-1)$$. Not currently optimized for large n.\n\n## Gamma function¶\n\nvoid acb_gamma(acb_t y, const acb_t x, slong prec)\n\nComputes the gamma function $$y = \Gamma(x)$$. This is an alias for acb_hypgeom_gamma().\n\nvoid acb_rgamma(acb_t y, const acb_t x, slong prec)\n\nComputes the reciprocal gamma function $$y = 1/\Gamma(x)$$, avoiding division by zero at the poles of the gamma function. This is an alias for acb_hypgeom_rgamma().\n\nvoid acb_lgamma(acb_t y, const acb_t x, slong prec)\n\nComputes the logarithmic gamma function $$y = \log \Gamma(x)$$. This is an alias for acb_hypgeom_lgamma().\n\nThe branch cut of the logarithmic gamma function is placed on the negative half-axis, which means that $$\log \Gamma(z) + \log z = \log \Gamma(z+1)$$ holds for all $$z$$, whereas $$\log \Gamma(z) \ne \log(\Gamma(z))$$ in general. In the left half plane, the reflection formula with correct branch structure is evaluated via acb_log_sin_pi().\n\nvoid acb_digamma(acb_t y, const acb_t x, slong prec)\n\nComputes the digamma function $$y = \psi(x) = (\log \Gamma(x))' = \Gamma'(x) / \Gamma(x)$$.\n\nvoid acb_log_sin_pi(acb_t res, const acb_t z, slong prec)\n\nComputes the logarithmic sine function defined by\n\n$S(z) = \log(\pi) - \log \Gamma(z) - \log \Gamma(1-z)$\n\nwhich is equal to\n\n$S(z) = \int_{1/2}^z \pi \cot(\pi t) dt$\n\nwhere the path of integration goes through the upper half plane if $$0 < \arg(z) \le \pi$$ and through the lower half plane if $$-\pi < \arg(z) \le 0$$. Equivalently,\n\n$S(z) = \log(\sin(\pi(z-n))) \mp n \pi i, \quad n = \lfloor \operatorname{re}(z) \rfloor$\n\nwhere the negative sign is taken if $$0 < \arg(z) \le \pi$$ and the positive sign is taken otherwise (if the interval $$\arg(z)$$ does not certainly satisfy either condition, the union of both cases is computed). After subtracting n, we have $$0 \le \operatorname{re}(z) < 1$$. In this strip, we use $$S(z) = \log(\sin(\pi z))$$ if the imaginary part of z is small. Otherwise, we use $$S(z) = i \pi (z-1/2) + \log((1+e^{-2i\pi z})/2)$$ in the lower half-plane and the conjugated expression in the upper half-plane to avoid exponent overflow.\n\nThe function is evaluated at the midpoint and the propagated error is computed from $$S'(z)$$ to get a continuous change when $$z$$ is non-real and $$n$$ spans more than one possible integer value.
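A sketch (ours, same assumptions as the earlier examples) checking the gamma recurrence Gamma(z+1) = z Gamma(z) in ball arithmetic: both sides enclose the same number, so the computed balls must overlap:

```c
#include "acb.h"

int main(void)
{
    acb_t z, a, b;
    acb_init(z); acb_init(a); acb_init(b);

    acb_set_d_d(z, 0.5, 1.0);
    acb_gamma(a, z, 64);
    acb_mul(a, a, z, 64);      /* a: enclosure of z * gamma(z) */

    acb_add_ui(b, z, 1, 64);
    acb_gamma(b, b, 64);       /* b: enclosure of gamma(z + 1) */

    flint_printf("overlap: %d\n", acb_overlaps(a, b));

    acb_clear(z); acb_clear(a); acb_clear(b);
    return 0;
}
```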
void acb_polygamma(acb_t res, const acb_t s, const acb_t z, slong prec)\n\nSets res to the value of the generalized polygamma function $$\psi(s,z)$$.\n\nIf s is a nonnegative integer, this is simply the s-order derivative of the digamma function. If $$s = 0$$, this function simply calls the digamma function internally. For integers $$s \ge 1$$, it calls the Hurwitz zeta function. Note that for small integers $$s \ge 1$$, it can be faster to use acb_poly_digamma_series() and read off the coefficients.\n\nThe generalization to other values of s is due to Espinosa and Moll [EM2004]:\n\n$\psi(s,z) = \frac{\zeta'(s+1,z) + (\gamma + \psi(-s)) \zeta(s+1,z)}{\Gamma(-s)}$\n\nvoid acb_barnes_g(acb_t res, const acb_t z, slong prec)\nvoid acb_log_barnes_g(acb_t res, const acb_t z, slong prec)\n\nComputes the Barnes G-function or the logarithmic Barnes G-function, respectively. The logarithmic version has branch cuts on the negative real axis and is continuous elsewhere in the complex plane, in analogy with the logarithmic gamma function. The functional equation\n\n$\log G(z+1) = \log \Gamma(z) + \log G(z)$\n\nholds for all z.\n\nFor small integers, we directly use the recurrence relation $$G(z+1) = \Gamma(z) G(z)$$ together with the initial value $$G(1) = 1$$. For general z, we use the formula\n\n$\log G(z) = (z-1) \log \Gamma(z) - \zeta'(-1,z) + \zeta'(-1).$\n\n## Zeta function¶\n\nvoid acb_zeta(acb_t z, const acb_t s, slong prec)\n\nSets z to the value of the Riemann zeta function $$\zeta(s)$$. Note: for computing derivatives with respect to $$s$$, use acb_poly_zeta_series() or related methods.\n\nThis is a wrapper of acb_dirichlet_zeta().\n\nvoid acb_hurwitz_zeta(acb_t z, const acb_t s, const acb_t a, slong prec)\n\nSets z to the value of the Hurwitz zeta function $$\zeta(s, a)$$. Note: for computing derivatives with respect to $$s$$, use acb_poly_zeta_series() or related methods.\n\nThis is a wrapper of acb_dirichlet_hurwitz().
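A sketch (ours) evaluating zeta(2) with acb_zeta() and checking it against pi^2/6 built from acb_const_pi():

```c
#include "acb.h"

int main(void)
{
    acb_t s, z, ref;
    acb_init(s); acb_init(z); acb_init(ref);

    acb_set_ui(s, 2);
    acb_zeta(z, s, 128);            /* zeta(2) */

    acb_const_pi(ref, 128);
    acb_sqr(ref, ref, 128);
    acb_div_ui(ref, ref, 6, 128);   /* pi^2 / 6 */

    flint_printf("zeta(2) = "); acb_printd(z, 20); flint_printf("\n");
    flint_printf("overlaps pi^2/6: %d\n", acb_overlaps(z, ref));

    acb_clear(s); acb_clear(z); acb_clear(ref);
    return 0;
}
```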
void acb_bernoulli_poly_ui(acb_t res, ulong n, const acb_t x, slong prec)\n\nSets res to the value of the Bernoulli polynomial $$B_n(x)$$.\n\nWarning: this function is only fast if either n or x is a small integer.\n\nThis function reads Bernoulli numbers from the global cache if they are already cached, but does not automatically extend the cache by itself.\n\n## Polylogarithms¶\n\nvoid acb_polylog(acb_t w, const acb_t s, const acb_t z, slong prec)\nvoid acb_polylog_si(acb_t w, slong s, const acb_t z, slong prec)\n\nSets w to the polylogarithm $$\operatorname{Li}_s(z)$$.\n\n## Arithmetic-geometric mean¶\n\nSee Algorithms for the arithmetic-geometric mean for implementation details.\n\nvoid acb_agm1(acb_t m, const acb_t z, slong prec)\n\nSets m to the arithmetic-geometric mean $$M(z) = \operatorname{agm}(1,z)$$, defined such that the function is continuous in the complex plane except for a branch cut along the negative half axis (where it is continuous from above). This corresponds to always choosing an “optimal” branch for the square root in the arithmetic-geometric mean iteration.\n\nvoid acb_agm1_cpx(acb_ptr m, const acb_t z, slong len, slong prec)\n\nSets the coefficients in the array m to the power series expansion of the arithmetic-geometric mean at the point z truncated to length len, i.e. $$M(z+x) \in \mathbb{C}[[x]]$$.\n\nvoid acb_agm(acb_t m, const acb_t x, const acb_t y, slong prec)\n\nSets m to the arithmetic-geometric mean of x and y. The square roots in the AGM iteration are chosen so as to form the “optimal” AGM sequence. This gives a well-defined function of x and y except when $$x / y$$ is a negative real number, in which case there are two optimal AGM sequences. In that case, an arbitrary but consistent choice is made (if a decision cannot be made due to inexact arithmetic, the union of both choices is returned).\n\n## Other special functions¶\n\nvoid acb_chebyshev_t_ui(acb_t a, ulong n, const acb_t x, slong prec)\nvoid acb_chebyshev_u_ui(acb_t a, ulong n, const acb_t x, slong prec)\n\nEvaluates the Chebyshev polynomial of the first kind $$a = T_n(x)$$ or the Chebyshev polynomial of the second kind $$a = U_n(x)$$.\n\nvoid acb_chebyshev_t2_ui(acb_t a, acb_t b, ulong n, const acb_t x, slong prec)\nvoid acb_chebyshev_u2_ui(acb_t a, acb_t b, ulong n, const acb_t x, slong prec)\n\nSimultaneously evaluates $$a = T_n(x), b = T_{n-1}(x)$$ or $$a = U_n(x), b = U_{n-1}(x)$$. Aliasing between a, b and x is not permitted.\n\n## Piecewise real functions¶\n\nThe following methods extend common piecewise real functions to piecewise complex analytic functions, useful together with the acb_calc.h module. If analytic is set, evaluation on a discontinuity or non-analytic point gives a NaN result.\n\nvoid acb_real_abs(acb_t res, const acb_t z, int analytic, slong prec)\n\nThe absolute value is extended to $$+z$$ in the right half plane and $$-z$$ in the left half plane, with a discontinuity on the vertical line $$\operatorname{Re}(z) = 0$$.\n\nvoid acb_real_sgn(acb_t res, const acb_t z, int analytic, slong prec)\n\nThe sign function is extended to $$+1$$ in the right half plane and $$-1$$ in the left half plane, with a discontinuity on the vertical line $$\operatorname{Re}(z) = 0$$. 
If analytic is not set, this is effectively the same function as acb_csgn().\n\nvoid acb_real_heaviside(acb_t res, const acb_t z, int analytic, slong prec)\n\nThe Heaviside step function (or unit step function) is extended to $$+1$$ in the right half plane and $$0$$ in the left half plane, with a discontinuity on the vertical line $$\\operatorname{Re}(z) = 0$$.\n\nvoid acb_real_floor(acb_t res, const acb_t z, int analytic, slong prec)\n\nThe floor function is extended to a piecewise constant function equal to $$n$$ in the strips with real part $$(n,n+1)$$, with discontinuities on the vertical lines $$\\operatorname{Re}(z) = n$$.\n\nvoid acb_real_ceil(acb_t res, const acb_t z, int analytic, slong prec)\n\nThe ceiling function is extended to a piecewise constant function equal to $$n+1$$ in the strips with real part $$(n,n+1)$$, with discontinuities on the vertical lines $$\\operatorname{Re}(z) = n$$.\n\nvoid acb_real_max(acb_t res, const acb_t x, const acb_t y, int analytic, slong prec)\n\nThe real function $$\\max(x,y)$$ is extended to a piecewise analytic function of two variables by returning $$x$$ when $$\\operatorname{Re}(x) \\ge \\operatorname{Re}(y)$$ and returning $$y$$ when $$\\operatorname{Re}(x) < \\operatorname{Re}(y)$$, with discontinuities where $$\\operatorname{Re}(x) = \\operatorname{Re}(y)$$.\n\nvoid acb_real_min(acb_t res, const acb_t x, const acb_t y, int analytic, slong prec)\n\nThe real function $$\\min(x,y)$$ is extended to a piecewise analytic function of two variables by returning $$x$$ when $$\\operatorname{Re}(x) \\le \\operatorname{Re}(y)$$ and returning $$y$$ when $$\\operatorname{Re}(x) > \\operatorname{Re}(y)$$, with discontinuities where $$\\operatorname{Re}(x) = \\operatorname{Re}(y)$$.\n\nvoid acb_real_sqrtpos(acb_t res, const acb_t z, int analytic, slong prec)\n\nExtends the real square root function on $$[0,+\\infty)$$ to the usual complex square root on the cut plane. Like arb_sqrtpos(), only the nonnegative part of z is considered if z is purely real and analytic is not set. 
This is useful for integrating $$\sqrt{f(x)}$$ where it is known that $$f(x) \ge 0$$: unlike acb_sqrt_analytic(), no spurious imaginary terms $$[\pm \varepsilon] i$$ are created when the balls computed for $$f(x)$$ straddle zero.\n\n## Vector functions¶\n\nvoid _acb_vec_zero(acb_ptr A, slong n)\n\nSets all entries in A to zero.\n\nint _acb_vec_is_zero(acb_srcptr vec, slong len)\n\nReturns nonzero iff all entries in vec are zero.\n\nint _acb_vec_is_real(acb_srcptr v, slong len)\n\nReturns nonzero iff all entries in v have zero imaginary part.\n\nvoid _acb_vec_set(acb_ptr res, acb_srcptr vec, slong len)\n\nSets res to a copy of vec.\n\nvoid _acb_vec_set_round(acb_ptr res, acb_srcptr vec, slong len, slong prec)\n\nSets res to a copy of vec, rounding each entry to prec bits.\n\nvoid _acb_vec_swap(acb_ptr vec1, acb_ptr vec2, slong len)\n\nSwaps the entries of vec1 and vec2.\n\nvoid _acb_vec_neg(acb_ptr res, acb_srcptr vec, slong len)\nvoid _acb_vec_add(acb_ptr res, acb_srcptr vec1, acb_srcptr vec2, slong len, slong prec)\nvoid _acb_vec_sub(acb_ptr res, acb_srcptr vec1, acb_srcptr vec2, slong len, slong prec)\nvoid _acb_vec_scalar_submul(acb_ptr res, acb_srcptr vec, slong len, const acb_t c, slong prec)\nvoid _acb_vec_scalar_addmul(acb_ptr res, acb_srcptr vec, slong len, const acb_t c, slong prec)\nvoid _acb_vec_scalar_mul(acb_ptr res, acb_srcptr vec, slong len, const acb_t c, slong prec)\nvoid _acb_vec_scalar_mul_ui(acb_ptr res, acb_srcptr vec, slong len, ulong c, slong prec)\nvoid _acb_vec_scalar_mul_2exp_si(acb_ptr res, acb_srcptr vec, slong len, slong c)\nvoid _acb_vec_scalar_mul_onei(acb_ptr res, acb_srcptr vec, slong len)\nvoid _acb_vec_scalar_div_ui(acb_ptr res, acb_srcptr vec, slong len, ulong c, slong prec)\nvoid _acb_vec_scalar_div(acb_ptr res, acb_srcptr vec, slong len, const acb_t c, slong prec)\nvoid _acb_vec_scalar_mul_arb(acb_ptr res, acb_srcptr vec, slong len, const arb_t c, slong prec)\nvoid _acb_vec_scalar_div_arb(acb_ptr res, acb_srcptr vec, slong len, const arb_t c, slong prec)\nvoid _acb_vec_scalar_mul_fmpz(acb_ptr res, acb_srcptr vec, slong len, const fmpz_t c, slong prec)\nvoid _acb_vec_scalar_div_fmpz(acb_ptr res, acb_srcptr vec, slong len, const fmpz_t c, slong prec)\n\nPerforms the respective scalar operation elementwise.\n\nslong _acb_vec_bits(acb_srcptr vec, slong len)\n\nReturns the maximum of arb_bits() for all entries in vec.\n\nvoid _acb_vec_set_powers(acb_ptr xs, const acb_t x, slong len, slong prec)\n\nSets xs to the powers $$1, x, x^2, \ldots, x^{len-1}$$.\n\nvoid _acb_vec_unit_roots(acb_ptr z, slong order, slong len, slong prec)\n\nSets z to the powers $$1,z,z^2,\dots z^{\mathrm{len}-1}$$ where $$z=\exp(\frac{2i\pi}{\mathrm{order}})$$ to precision prec. order can be taken negative.\n\nIn order to avoid precision loss, this function does not simply compute powers of a primitive root.\n\nvoid _acb_vec_add_error_arf_vec(acb_ptr res, arf_srcptr err, slong len)\nvoid _acb_vec_add_error_mag_vec(acb_ptr res, mag_srcptr err, slong len)\n\nAdds the magnitude of each entry in err to the radius of the corresponding entry in res.\n\nvoid _acb_vec_indeterminate(acb_ptr vec, slong len)\n\nApplies acb_indeterminate() elementwise.\n\nvoid _acb_vec_trim(acb_ptr res, acb_srcptr vec, slong len)\n\nApplies acb_trim() elementwise.\n\nint _acb_vec_get_unique_fmpz_vec(fmpz *res, acb_srcptr vec, slong len)\n\nCalls acb_get_unique_fmpz() elementwise and returns nonzero if all entries can be rounded uniquely to integers. If any entry in vec cannot be rounded uniquely to an integer, returns zero.
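A sketch (ours) combining _acb_vec_set_powers() with the scalar functions above to check the geometric-series identity 1 + x + ... + x^(n-1) = (x^n - 1)/(x - 1); _acb_vec_init() and _acb_vec_clear() are assumed from the module's memory management section:

```c
#include "acb.h"

int main(void)
{
    slong i, n = 8, prec = 64;
    acb_ptr xs = _acb_vec_init(n);
    acb_t x, sum, rhs, den;
    acb_init(x); acb_init(sum); acb_init(rhs); acb_init(den);

    acb_set_d_d(x, 0.25, 0.5);
    _acb_vec_set_powers(xs, x, n, prec);   /* 1, x, ..., x^(n-1) */

    acb_zero(sum);
    for (i = 0; i < n; i++)
        acb_add(sum, sum, xs + i, prec);   /* direct geometric sum */

    acb_pow_ui(rhs, x, n, prec);
    acb_sub_ui(rhs, rhs, 1, prec);         /* x^n - 1 */
    acb_sub_ui(den, x, 1, prec);           /* x - 1 */
    acb_div(rhs, rhs, den, prec);          /* closed form */

    flint_printf("sum and closed form overlap: %d\n", acb_overlaps(sum, rhs));

    _acb_vec_clear(xs, n);
    acb_clear(x); acb_clear(sum); acb_clear(rhs); acb_clear(den);
    return 0;
}
```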
void _acb_vec_sort_pretty(acb_ptr vec, slong len)\n\nSorts the vector of complex numbers based on the real and imaginary parts. This is intended to reveal structure when printing a set of complex numbers, not to apply an order relation in a rigorous way.
{"ft_lang_label":"__label__en","ft_lang_prob":0.7129042,"math_prob":0.9984891,"size":32729,"snap":"2022-40-2023-06","text_gpt3_token_len":9647,"char_repetition_ratio":0.20993124,"word_repetition_ratio":0.21535763,"special_character_ratio":0.2725717,"punctuation_ratio":0.11944174,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998142,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-25T11:05:03Z\",\"WARC-Record-ID\":\"<urn:uuid:ae181714-876a-48ae-bfb2-4e17d8763baf>\",\"Content-Length\":\"342203\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a563dfe0-533c-42e1-8223-acd1e24fa2e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:cb725c56-74e8-4e21-81a5-d31ea5bab071>\",\"WARC-IP-Address\":\"78.159.99.205\",\"WARC-Target-URI\":\"https://arblib.org/acb.html\",\"WARC-Payload-Digest\":\"sha1:LMLWOWJQNFFMMEOW4P4ZZNUW2OHBVXIE\",\"WARC-Block-Digest\":\"sha1:BEVDKT335PIR3ICT3OF2BZ3F6N7V3ULT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334528.24_warc_CC-MAIN-20220925101046-20220925131046-00270.warc.gz\"}"}
https://www.trustudies.com/ncert-solutions/class-9/maths/triangles/
[ "# NCERT Solutions for Class 9 Maths Chapter 7 Triangles", null, "", null, "Written by Team Trustudies\nUpdated at 2021-05-07\n\n## NCERT solutions for class 9 Maths Chapter 7 Triangles Exercise 7.1\n\nQ1 ) In quadrilateral ACBD, AC = AD and AB bisects $$\\angle{A}$$ (see figure). Show that $$\\triangle{ABC}$$ $$\\displaystyle \\cong$$ $$\\triangle{ABD}$$. What can you say about BC and BD ?", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\\triangle{ABC}$$ and $$\\triangle{ABD}$$, we have,\n\nAs, AB bisects $$\\angle{A}$$,\n\nTherefore, $$\\angle{CAB}$$ = $$\\angle{DAB}$$\n\nAlso, AB is a common side,\n\nSo, we can say,\n$$\\triangle{ABC}$$ $$\\displaystyle \\cong$$ $$\\triangle{ABD}$$ ...(by SAS test of congruency)\n\nAlso, BC = BD ...(By CPCT)\nHence, proved.\n\nQ2 ) ABCD is a quadrilateral in which AD = BC and $$\\angle{DAB}$$ = $$\\angle{CBA}$$ (see figure). Prove that\n(i) $$\\triangle{ABD}$$ $$\\displaystyle \\cong$$ $$\\triangle{BAC}$$\n(ii) BD = AC\n(iii) $$\\angle{ABD}$$ = $$\\angle{BAC}$$", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\\triangle{ABC}$$ and $$\\triangle{BAC}$$, we have\n\nAlso,\n$$\\angle{DAB}$$ = $$\\angle{CBA}$$ ...(Given)\n\nAnd AB is a common side.\n\nTherefore, $$\\triangle{ABD}$$ $$\\displaystyle \\cong$$ $$\\triangle{BAC}$$ ...(SAS congruency test)\n\nHence, BD = AC ...(By CPCT)\n\nand $$\\angle{ABD}$$ = $$\\angle{BAC}$$ ...(By CPCT)\nHence, proved.\n\nQ3 ) AD and BC are equal perpendiculars to a line segment AB (see figure). Show that CD bisects AB.", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\\triangle{AOD}$$ and $$\\triangle{BOC}$$, we have,\n\nNow, $$\\angle{AOD}$$ = $$\\angle{BOC}$$ ...(vertically opposite angles)\n\nAlso, $$\\angle{DAO}$$ = $$\\angle{CBO}$$ = $$90^\\circ$$\n\nand BD = BC ...(shown in the figure)\n\nTherefore, $$\\triangle{AOD}$$ $$\\displaystyle \\cong$$ $$\\triangle{BOC}$$ ...(SAS congruency test)\n\nHence, OA = OB ...(By CPCT)\n\nThus, we can say that, O is the mid-point of AB.\n\nSo, CD bisects AB.\nHence, proved\n\nQ4 ) l and m are two parallel lines intersected by another pair of parallel lines p and q (see figure). Show that $$\\triangle{ABC}$$ $$\\displaystyle \\cong$$ $$\\triangle{CDA}$$.", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles", null, "From figure, we have,\n\n$$\\angle{1}$$ = $$\\angle{2}$$ (Vertically opposite angles) ...(i)\n\n$$\\angle{1}$$ = $$\\angle{6}$$ (Corresponding angles) ...(ii)\n$$\\angle{6}$$ = $$\\angle{4}$$ (Corresponding angles) ...(iii)\n\nFrom Equations, (i), (ii) and (iii), we have\n\n$$\\angle{1}$$ = $$\\angle{4}$$ and $$\\angle{2}$$ = $$\\angle{6}$$ ...(iv)\n\nIn $$\\triangle{ABC}$$ and $$\\triangle{CDA}$$, we have\n\n$$\\angle{4}$$ = $$\\angle{2}$$ ...(from (iii) and (iv))\n$$\\angle{5}$$ = $$\\angle{3}$$ ...(Alternate angles)\n\nand AC is a common side.\n\nTherefore, we get,\n$$\\triangle{ABC}$$ $$\\displaystyle \\cong$$ $$\\triangle{CDA}$$ ...(By SAS congruency test)\nHence, proved.\n\nQ5 ) Line l is the bisector of a $$\\angle{A}$$ and $$\\angle{B}$$ is any point on l. BP and BQ are perpendiculars from B to the arms of $$\\angle{A}$$ (see figure). 
Q5 ) Line l is the bisector of an $$\angle{A}$$ and B is any point on l. BP and BQ are perpendiculars from B to the arms of $$\angle{A}$$ (see figure). Show that\n(i) $$\triangle{APB}$$ $$\displaystyle \cong$$ $$\triangle{AQB}$$\n(ii) BP = BQ or B is equidistant from the arms of $$\angle{A}$$.", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\triangle{APB}$$ and $$\triangle{AQB}$$, we have\n\n$$\angle{APB}$$ = $$\angle{AQB}$$ = $$90^\circ$$\n$$\angle{PAB}$$ = $$\angle{QAB}$$ ...(AB bisects $$\angle{PAQ}$$)\n\nAB is the common side.\n$$\therefore$$ $$\triangle{APB}$$ $$\displaystyle \cong$$ $$\triangle{AQB}$$ ...(By AAS congruency test)\n\nAlso, BP = BQ ...(By CPCT)\n\nThus, we can say that, B is equidistant from the arms of $$\angle{A}$$.\nHence, proved.\n\nQ6 ) In figure, AC = AE, AB = AD and $$\angle{BAD}$$ = $$\angle{EAC}$$. Show that BC = DE.", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\triangle{ABC}$$ and $$\triangle{ADE}$$, we have,\n\n$$\angle{BAD}$$ = $$\angle{EAC}$$ ...(i)(Given)\nOn adding $$\angle{DAC}$$ on both sides in Eq. (i), we get,\n\n$$\Rightarrow$$ $$\angle{BAD}$$ + $$\angle{DAC}$$ = $$\angle{EAC}$$ + $$\angle{DAC}$$\n$$\Rightarrow$$ $$\angle{BAC}$$ = $$\angle{DAE}$$\n\nAlso, AB = AD and AC = AE ...(Given)\n\n$$\therefore$$ $$\triangle{ABC}$$ $$\displaystyle \cong$$ $$\triangle{ADE}$$ ...(By SAS congruency test)\n\nThus, BC = DE ...(By CPCT)\n\nHence, proved.\n\nQ7 ) AB is a line segment and P is its mid-point. D and E are points on the same side of AB such that $$\angle{BAD}$$ = $$\angle{ABE}$$ and $$\angle{EPA}$$ = $$\angle{DPB}$$ (see figure). Show that\n(i) $$\triangle{DAP}$$ $$\displaystyle \cong$$ $$\triangle{EBP}$$\n(ii) AD = BE", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nWe have,\nAP = BP ...(i)(Since, P is the mid-point of AB)\n\n$$\angle{EPA}$$ = $$\angle{DPB}$$ ...(ii)(Given)\n$$\angle{BAD}$$ = $$\angle{ABE}$$ ...(iii) (Given)\nOn adding $$\angle{EPD}$$ on both sides in Equation (ii),\nwe have,\n$$\Rightarrow$$ $$\angle{EPA}$$ + $$\angle{EPD}$$ = $$\angle{DPB}$$ + $$\angle{EPD}$$\n$$\Rightarrow$$ $$\angle{DPA}$$ = $$\angle{EPB}$$ ...(iv)\n\nNow, in $$\triangle{DAP}$$ and $$\triangle{EBP}$$, we have,\n$$\angle{DPA}$$ = $$\angle{EPB}$$ ...(from (iv))\n$$\angle{DAP}$$ = $$\angle{EBP}$$ ...(from (iii))\nand AP = BP ...(From Eq. (i))\n\n$$\therefore$$ $$\triangle{DAP}$$ $$\displaystyle \cong$$ $$\triangle{EBP}$$ ...(By ASA congruency test)\n\nThus, AD = BE ...(By CPCT)\n\nHence, proved.\n\nQ8 ) In right triangle ABC, right angled at C, M is the mid-point of hypotenuse AB. C is joined to M and produced to a point D such that DM = CM. Point D is joined to point B (see figure).", null, "Show that\n(i) $$\triangle{AMC}$$ $$\displaystyle \cong$$ $$\triangle{BMD}$$\n(ii) $$\angle{DBC}$$ is a right angle\n(iii) $$\triangle{DBC}$$ $$\displaystyle \cong$$ $$\triangle{ACB}$$\n(iv) CM = (1/2) AB\n\nNCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nGiven :\n$$\triangle{ABC}$$ in which $$\angle{C}$$ = $$90^\circ$$ and M is the mid-point of AB.\n\nTo prove :\n(i) $$\triangle{AMC}$$ $$\displaystyle \cong$$ $$\triangle{BMD}$$\n(ii) $$\angle{DBC}$$ is a right angle\n(iii) $$\triangle{DBC}$$ $$\displaystyle \cong$$ $$\triangle{ACB}$$\n(iv) CM = ($$\frac{1}{2}$$) AB", null, "Construction : Produce CM to D, such that CM = MD. 
Join DB.\n\nProof : In $$\triangle{AMC}$$ and $$\triangle{BMD}$$, we have\n\nAM = BM ...(M is the mid-point of AB)\nCM = DM ...(Given)\nand $$\angle{AMC}$$ = $$\angle{BMD}$$ ...(Vertically opposite angles)\n$$\therefore$$ $$\triangle{AMC}$$ $$\displaystyle \cong$$ $$\triangle{BMD}$$(By SAS congruency test)\nHence, part(i) is proved.\n\nAlso, AC = DB ...(By CPCT)\n\n$$\angle{1}$$ = $$\angle{2}$$ ...(By CPCT), and these are alternate angles\n\n$$\therefore$$ BD || CA and BC is transversal.\n\n$$\Rightarrow$$ $$\angle{ACB}$$ + $$\angle{DBC}$$ = $$180^\circ$$ ...(co-interior angles)\nBut, $$\angle{ACB}$$ = $$90^\circ$$ ...(given)\n$$\Rightarrow$$ $$90^\circ$$ + $$\angle{DBC}$$ = $$180^\circ$$\n$$\Rightarrow$$ $$\angle{DBC}$$ = $$180^\circ$$ - $$90^\circ$$\n$$\Rightarrow$$ $$\angle{DBC}$$ = $$90^\circ$$\n\nHence, part(ii) is proved, too.\n\nNow, considering $$\triangle{DBC}$$ and $$\triangle{ACB}$$, we have,\n\nAC = DB ...(from part(i))\n\nSide BC is common.\n\nand $$\angle{DBC}$$ = $$\angle{ACB}$$ = $$90^\circ$$\n\nTherefore, $$\triangle{DBC}$$ $$\displaystyle \cong$$ $$\triangle{ACB}$$ ...(By SAS congruency test)\nHence, now, part(iii) is proved, too.\n\nNow, DC = AB ...(By CPCT)\n\nMultiplying both sides by $$\frac{1}{2}$$ , we get,\n($$\frac{1}{2}$$) DC = ($$\frac{1}{2}$$) AB\n\nNow, as we know, CM = ($$\frac{1}{2}$$) DC\nTherefore, CM = ($$\frac{1}{2}$$) AB\nHence, part (iv) is proved.\n\n## NCERT solutions for class 9 Maths Chapter 7 Triangles Exercise 7.2\n\nQ1 ) In an isosceles triangle ABC, with AB = AC, the bisectors of $$\angle{B}$$ and $$\angle{C}$$ intersect each other at O. Join A to O. Show that,\n(i) OB = OC\n(ii) AO bisects $$\angle{A}$$\n\nNCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\triangle{ABC}$$, we have\n\nAB = AC ...(Given)\nAlso, $$\angle{B}$$ = $$\angle{C}$$ ...(Since angles opposite to equal sides are equal)\n\nMultiplying both sides by $$\frac{1}{2}$$ , we get,\n($$\frac{1}{2}$$ ) $$\angle{B}$$ = ($$\frac{1}{2}$$ ) $$\angle{C}$$\n$$\therefore$$ $$\angle{OBC}$$ = $$\angle{OCB}$$\n\nAlso, it is given that, OB and OC are bisectors of $$\angle{B}$$ and $$\angle{C}$$, respectively,\n\nTherefore, $$\angle{OBA}$$ = $$\angle{OCA}$$\nAlso, in $$\triangle{OBC}$$, OB = OC\n...($$\because$$ sides opposite to equal angles are equal)\n\nHence, part (i) is proved.", null, "In $$\triangle{ABO}$$ and $$\triangle{ACO}$$, we have,\n\nAB = AC ...(Given)\n$$\angle{OBA}$$ = $$\angle{OCA}$$ ...(from part(i))\nOB = OC ...(proved earlier)\n\nTherefore, $$\triangle{ABO}$$ $$\displaystyle \cong$$ $$\triangle{ACO}$$ ...(By SAS congruency test)\n\nThus, $$\angle{BAO}$$ = $$\angle{CAO}$$ ...(By CPCT)\n\nHence, AO bisects $$\angle{A}$$ is proved.\n\nQ2 ) In the $$\triangle{ABC}$$, AD is the perpendicular bisector of BC (see figure).\nShow that $$\triangle{ABC}$$ is an isosceles triangle in which AB = AC.", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\triangle{ABD}$$ and $$\triangle{ACD}$$, we have,\n\nDB = DC ...(given)\n$$\angle{ADB}$$ = $$\angle{ADC}$$ ...(Since, AD is the perpendicular bisector of BC)\n\nand AD is the Common side.\n\nTherefore, $$\triangle{ABD}$$ $$\displaystyle \cong$$ $$\triangle{ACD}$$ ...(By SAS congruency test)\n\nTherefore, AB = AC ...(By CPCT)\n\nHence, $$\triangle{ABC}$$ is an isosceles triangle.\nHence, proved.
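A quick coordinate check of Q2 (our addition, not part of the textbook solution): place D, the foot of the perpendicular bisector, at the origin, with B = (-a, 0), C = (a, 0) and A = (0, h). Then

$$AB = \sqrt{a^2 + h^2} = AC$$

so any apex lying on the perpendicular bisector of BC is automatically equidistant from B and C, which is exactly the claim AB = AC.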
"NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\\triangle{ABE}$$ and $$\\triangle{ACF}$$ , we have,\n\n$$\\angle{AEB}$$ = $$\\angle{AFC}$$ ...(BE and CF are perpendiculars drawn to sides AC and AB respectively)\n\nAlso, $$\\angle{A}$$ is common angle.\nand AB = AC ...(Given)\n\nTherefore, $$\\triangle{ABE}$$ $$\\displaystyle \\cong$$ $$\\triangle{ACF}$$ ...(By AAS congruency test)\n\nTherefore, AB = AC ...(By CPCT)\n\nHence, BE = CF ...(By CPCT)\nHence, proved.\n\nQ4 ) ABC is a triangle in which altitudes BE and CF to sides AC and AB are equal (see figure). Show that\n(i) $$\\triangle{ABE}$$ $$\\displaystyle \\cong$$ $$\\triangle{ACF}$$\n(ii) AB = AC i.e., ABC is an isosceles triangle.", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\\triangle{ABE}$$ and $$\\triangle{ACF}$$ , we have,\n\n$$\\angle{AEB}$$ = $$\\angle{AFC}$$($$\\because BE and CF are perpendicular to sides AC and AB) \\(\\angle{BAE}$$ = $$\\angle{CAF}$$ ...($$\\because \\(\\angle{A}$$ is the Common angle)\n\nand BE = CF ...(Given)\n\n$$\\therefore$$ $$\\triangle{ABE}$$ $$\\displaystyle \\cong$$ $$\\triangle{ACF}$$ ...(By AAS Congruency test)\n\nThus, AB = AC ...(By CPCT)\n\n$$\\therefore$$ $$\\triangle{ABC}$$ is an isosceles triangle.\nHence, proved.\n\nQ5 ) ABC and DBC are isosceles triangles on the same base BC (see figure). Show that $$\\angle{ABD}$$ = $$\\angle{ACD}$$.", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\\triangle{ABC}$$, we have,\n\nAB = AC ...($$\\because$$ $$\\triangle{ABC}$$ is an isosceles triangle)\n\n$$\\therefore$$ $$\\angle{ABC}$$ = $$\\angle{ACB}$$ ...(i)($$\\because$$ angles opposite to equal sides are equal)\n\nSimilarly, in $$\\triangle{DBC}$$, we have,\n\nBD = CD ...($$\\because$$ $$\\triangle{DBC}$$ too, is an isosceles triangle)\n\n$$\\therefore$$ $$\\angle{DBC}$$ = $$\\angle{DCB}$$ ...(ii) ($$\\because$$ angles opposite to equal sides are equal)\nNow, On adding, Equations (i) and (ii), we get,\n\n$$\\Rightarrow \\angle{ABC} + \\angle{DBC} = \\angle{ACB} + \\angle{DCB}$$\n$$\\therefore$$ $$\\angle{ABD}$$ = $$\\angle{ACD}$$\nHence, proved.\n\nQ6 ) $$\\triangle{ABC}$$ is an isosceles triangle in which AB = AC. 
Side BA is produced to D such that AD = AB (see figure).\nShow that $$\angle{BCD}$$ is a right angle.", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\triangle{ABC}$$, we have,\n\nAB = AC ...(i)(Given)\n$$\angle{ACB}$$ = $$\angle{ABC}$$ ...(ii)($$\because$$ angles opposite to equal sides are equal)\n\nAlso, AD = AB ...(iii)(Given)\n\nFrom (i) and (iii), AC = AD\n\nNow, in $$\triangle{ADC}$$ , we have,\n\n$$\angle{ACD}$$ = $$\angle{ADC}$$ ...($$\because$$ angles opposite to equal sides are equal)\n\nAlso, $$\angle{ACD}$$ = $$\angle{BDC}$$ ...(iv)($$\because \angle{ADC} = \angle{BDC}$$)\n\nOn adding Equations (ii) and (iv), we get,\n\n$$\Rightarrow \angle{ACB} + \angle{ACD} = \angle{ABC} + \angle{BDC}$$\n$$\Rightarrow$$ $$\angle{BCD}$$ = $$\angle{ABC}$$ + $$\angle{BDC}$$\n\nAdding $$\angle{BCD}$$ on both sides, we have,\n\n$$\Rightarrow \angle{BCD} + \angle{BCD} = \angle{ABC} + \angle{BDC} + \angle{BCD}$$\n$$\Rightarrow$$ 2 $$\angle{BCD}$$ = $$180^\circ$$ ...($$\because$$ the sum of all angles of $$\triangle{BDC}$$ is $$180^\circ$$)\n\nTherefore, $$\angle{BCD}$$ = $$90^\circ$$\nHence, proved.\n\nQ7 ) ABC is a right angled triangle in which $$\angle{A}$$ = $$90^\circ$$ and AB = AC, find $$\angle{B}$$ and $$\angle{C}$$.", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\triangle{ABC}$$, we have,\n\nAB = AC ...(Given)\n$$\angle{B}$$ = $$\angle{C}$$ ...(i)($$\because$$ angles opposite to equal sides are equal)\n\nNow, we know that,\n\n$$\angle{A}$$ + $$\angle{B}$$ + $$\angle{C}$$ = $$180^\circ$$\n$$\Rightarrow$$ $$90^\circ$$ + $$\angle{B}$$ + $$\angle{C}$$ = $$180^\circ$$ ...(given)\n$$\Rightarrow$$ $$90^\circ$$ + $$\angle{B}$$ + $$\angle{B}$$ = $$180^\circ$$ ...(from(i))\n$$\Rightarrow$$ 2 $$\angle{B}$$ = $$180^\circ$$ - $$90^\circ$$\n$$\Rightarrow$$ 2 $$\angle{B}$$ = $$90^\circ$$\n$$\Rightarrow$$ $$\angle{B}$$ = $$45^\circ$$\n$$\therefore$$ $$\angle{C}$$ = $$45^\circ$$, too.\n\nQ8 ) Show that the angles of an equilateral triangle are $$60^\circ$$ each.\n\nNCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nLet $$\triangle{ABC}$$ be an equilateral triangle, such that\n\nAB = BC = CA ...(by definition of an equilateral triangle)", null, "Now, we have,\nAB = AC\n$$\angle{B}$$ = $$\angle{C}$$ ...(i)($$\because$$ angles opposite to equal sides are equal)\n\nSimilarly,\nCB = CA\n\n$$\therefore$$ $$\angle{A}$$ = $$\angle{B}$$ ...(ii)($$\because$$ angles opposite to equal sides are equal)\n\n$$\angle{A}$$ + $$\angle{B}$$ + $$\angle{C}$$ = $$180^\circ$$ ...(iii)($$\because$$ the sum of all angles of a triangle is $$180^\circ$$)\n\nFrom Equations (i),(ii) and (iii), we have\n$$\angle{A}$$ + $$\angle{A}$$ + $$\angle{A}$$ = $$180^\circ$$\n$$\Rightarrow$$ 3 $$\angle{A}$$ = $$180^\circ$$\n$$\Rightarrow$$ $$\angle{A}$$ = $$60^\circ$$\n$$\Rightarrow \angle{A} = \angle{B} = \angle{C} = 60^\circ$$\nHence, proved.\n\n## NCERT solutions for class 9 Maths Chapter 7 Triangles Exercise 7.3\n\nQ1 ) $$\triangle{ABC}$$ and $$\triangle{DBC}$$ are two isosceles triangles on the same base BC and vertices A and D are on the same side of BC (see figure). 
If AD is extended to intersect BC at P, show that\n(i) $$\triangle{ABD}$$ $$\displaystyle \cong$$ $$\triangle{ACD}$$\n(ii)$$\triangle{ABP}$$ $$\displaystyle \cong$$ $$\triangle{ACP}$$\n(iii) AP bisects $$\angle{A}$$ as well as $$\angle{D}$$\n(iv) AP is the perpendicular bisector of BC.", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nGiven: $$\triangle{ABC}$$ and $$\triangle{DBC}$$ are two isosceles triangles having same base BC, such that AB = AC and BD = CD.", null, "Proof:\nIn $$\triangle{ABD}$$ and $$\triangle{ACD}$$, we have,\n\nAB = AC ...(Given)\nBD = CD ...(Given)\nand AD is a common side.\n\nTherefore, $$\triangle{ABD}$$ $$\displaystyle \cong$$ $$\triangle{ACD}$$ ...(By SSS Congruency test)\nHence, part (i) is proved.\n\nIn $$\triangle{ABP}$$ and $$\triangle{ACP}$$, we have,\n\nAB = AC ...(Given)\nAlso, $$\angle{a}$$ = $$\angle{b}$$ ...($$\because$$ $$\triangle{ABD}$$ $$\displaystyle \cong$$ $$\triangle{ACD}$$)\nand AP is the common side.\n\nTherefore, $$\triangle{ABP}$$ $$\displaystyle \cong$$ $$\triangle{ACP}$$ ...(By SAS congruency test)\nHence, part (ii) is proved.\n\n$$\angle{a}$$ = $$\angle{b}$$ ...($$\because$$ $$\triangle{ABD}$$ $$\displaystyle \cong$$ $$\triangle{ACD}$$)\nHence, proved that AP bisects $$\angle{A}$$\n\nNow, $$\angle{ADB}$$ = $$\angle{ADC}$$ ...($$\because$$$$\triangle{ABD}$$ $$\displaystyle \cong$$ $$\triangle{ACD}$$)\n\nNow, by subtracting from $$180^\circ$$, we get,\n\n$$180^\circ$$ - $$\angle{ADB}$$ = $$180^\circ$$ - $$\angle{ADC}$$\n$$\Rightarrow$$ $$\angle{BDP}$$ = $$\angle{CDP}$$ ...(By linear pair axiom)\n\nHence, proved that AP bisects $$\angle{D}$$\n\nHence, part (iii) is proved.\n\nNow, BP = CP (Since, $$\triangle{ABP}$$ $$\displaystyle \cong$$ $$\triangle{ACP}$$)\nand $$\angle{c}$$ = $$\angle{d}$$ ...(i)\nBut $$\angle{c}$$ + $$\angle{d}$$ = $$180^\circ$$ ...(Linear pair axiom)\n$$\Rightarrow$$$$\angle{c}$$ + $$\angle{c}$$ = $$180^\circ$$ ...(from (i))\n$$\Rightarrow$$ 2 $$\angle{c}$$ = $$180^\circ$$\n$$\Rightarrow$$ $$\angle{c}$$ = $$90^\circ$$\n$$\Rightarrow$$$$\angle{c}$$ = $$\angle{d}$$ = $$90^\circ$$\n\nTherefore, AP is the perpendicular bisector of BC.\nHence, proved.\n\nQ2 ) AD is an altitude of an isosceles triangle ABC in which AB = AC. Show that\n(i) AD bisects BC\n(ii) AD bisects $$\angle{A}$$\n\nNCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\triangle{ABD}$$ and $$\triangle{ACD}$$, we have,\nAB = AC ...(Given)", null, "$$\angle{ADB}$$ = $$\angle{ADC}$$ = $$90^\circ$$ ...(Since AD is an altitude)\nand AD is a common side.\nTherefore, $$\triangle{ABD}$$ $$\displaystyle \cong$$ $$\triangle{ACD}$$ ...(By RHS congruency axiom)\nTherefore, BD = DC ...(By CPCT)\nThus, we can say that, AD bisects BC.\n\nAlso, $$\angle{BAD}$$ = $$\angle{CAD}$$ ...(By CPCT)\nThus, this shows that AD bisects $$\angle{A}$$.\nHence, proved.
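A numerical instance of Q2 (our addition): take AB = AC = 5 and BC = 6, so that BD = DC = 3. In right triangle ABD,

$$AD = \sqrt{AB^2 - BD^2} = \sqrt{25 - 9} = 4$$

and right triangle ACD gives the same value, consistent with $$\triangle{ABD}$$ $$\displaystyle \cong$$ $$\triangle{ACD}$$.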
Q3 ) The two sides AB and BC and median AM of one triangle ABC are respectively equal to sides PQ and QR and median PN of $$\triangle{PQR}$$ (see figure). Show that\n(i) $$\triangle{ABM}$$ $$\displaystyle \cong$$ $$\triangle{PQN}$$\n(ii) $$\triangle{ABC}$$ $$\displaystyle \cong$$ $$\triangle{PQR}$$", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nWe have, AM is the median of $$\triangle{ABC}$$,\nSo, BM = MC = ($$\frac{1}{2}$$ ) BC ...(i)(Since, M bisects BC)\n\nSimilarly, PN is the median of $$\triangle{PQR}$$.\nSo, QN = NR = $$\frac{1}{2}$$ QR ...(ii)(Since, N bisects QR)\n\nNow, we have,\nBC = QR ...(Given)\nMultiplying both sides by $$\frac{1}{2}$$, we get,\n$$\frac{1}{2}$$ BC = $$\frac{1}{2}$$ QR\n$$\therefore$$ BM = QN ...(iii)(From eq.(i) and (ii))\n\nIn $$\triangle{ABM}$$ and $$\triangle{PQN}$$, we have,\nAB = PQ ...(Given)\nAM = PN ...(Given)\nand BM = QN ...(From Eq.(iii))\n$$\triangle{ABM}$$ $$\displaystyle \cong$$ $$\triangle{PQN}$$ ...(By SSS congruency test)\nHence, part (i) is proved.\n\nIn $$\triangle{ABC}$$ and $$\triangle{PQR}$$, we have,\nAB = PQ ...(Given)\n$$\angle{B}$$ = $$\angle{Q}$$ ...(Since, $$\triangle{ABM}$$ $$\displaystyle \cong$$ $$\triangle{PQN}$$)\nand BC = QR ...(Given)\n$$\triangle{ABC}$$ $$\displaystyle \cong$$ $$\triangle{PQR}$$ ...(By SAS congruency test)\nHence, proved.\n\nQ4 ) BE and CF are two equal altitudes of a triangle ABC. Using RHS congruence rule, prove that the triangle ABC is isosceles.\n\nNCERT Solutions for Class 9 Maths Chapter 7 Triangles", null, "In $$\triangle{BEC}$$ and $$\triangle{CFB}$$, we have,\n$$\angle{BEC}$$ = $$\angle{CFB}$$ = $$90^\circ$$ ...(Given)\nBE = CF ...(Given)\nand BC is the common side.\n$$\therefore$$ $$\triangle{BEC}$$ $$\displaystyle \cong$$ $$\triangle{CFB}$$ ...(By RHS congruency test)\nThus, EC = FB ...(i)(By CPCT)\n\nIn $$\triangle{AEB}$$ and $$\triangle{AFC}$$, we have,\n$$\angle{AEB}$$ = $$\angle{AFC}$$ = $$90^\circ$$ ...(Given)\n$$\angle{A}$$ is a common angle,\nand EB = FC ...(Given)\n$$\therefore$$ $$\triangle{AEB}$$ $$\displaystyle \cong$$ $$\triangle{AFC}$$ ...(By AAS congruency test)\nThus, AE = AF ...(ii)(By CPCT)\n\nNow, on adding Equation (i) and (ii),\nwe get,\n\nEC + AE = FB + AF\nAC = AB ...(Since, AC = EC + AE and AB = FB + AF)\n\nHence, it is proved that, $$\triangle{ABC}$$ is an isosceles triangle.\n\nQ5 ) ABC is an isosceles triangle with AB = AC. 
Draw AP perpendicular to BC to show that $$\angle{B}$$ = $$\angle{C}$$\n\nNCERT Solutions for Class 9 Maths Chapter 7 Triangles", null, "In $$\triangle{ABP}$$ and $$\triangle{ACP}$$, we have,\nAB = AC ...(Given)\nAP is the common side.\n$$\angle{APB}$$ = $$\angle{APC}$$ = $$90^\circ$$ ...(Since, AP is perpendicular to BC)\nTherefore, $$\triangle{ABP}$$ $$\displaystyle \cong$$ $$\triangle{ACP}$$ ...(By RHS congruency test)\nThus, $$\angle{B}$$ = $$\angle{C}$$ ...(By CPCT)\nHence, proved.\n\n## NCERT solutions for class 9 Maths Chapter 7 Triangles Exercise 7.4\n\nQ1 ) Show that in a right angled triangle, the hypotenuse is the longest side.\n\nNCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nLet ABC be a right angled triangle, such that $$\angle{ABC}$$ = $$90^\circ$$.\nWe know that,\n$$\therefore$$ $$\angle{ABC}$$ + $$\angle{BCA}$$ + $$\angle{CAB}$$ = $$180^\circ$$\n...(Since, the sum of all angles of a triangle is $$180^\circ$$)\n$$\Rightarrow$$ $$90^\circ$$ + $$\angle{BCA}$$ + $$\angle{CAB}$$ = $$180^\circ$$\n$$\Rightarrow$$ $$\angle{BCA}$$ + $$\angle{CAB}$$ = $$180^\circ$$ - $$90^\circ$$\n$$\Rightarrow$$ $$\angle{BCA}$$ + $$\angle{CAB}$$ = $$90^\circ$$\n\nThus, we can say that, both the angles $$\angle{BCA}$$ and $$\angle{CAB}$$ are acute.\n$$\therefore$$ $$\angle{BCA}$$ < $$90^\circ$$ and\n$$\therefore$$ $$\angle{CAB}$$ < $$90^\circ$$\n$$\therefore$$ $$\angle{BCA}$$ < $$\angle{ABC}$$ and $$\angle{CAB}$$ < $$\angle{ABC}$$\nTherefore, we can say,\n\nAB < AC and BC < AC ...(Since, side opposite to greater angle is longer)\n\nHence, the hypotenuse AC is the longest side is proved.\n\nQ2 ) In figure, sides AB and AC of $$\triangle{ABC}$$ are extended to points P and Q respectively. Also, $$\angle{PBC}$$ < $$\angle{QCB}$$. Show that AC > AB.\n\nNCERT Solutions for Class 9 Maths Chapter 7 Triangles", null, "By linear pair axiom, we say that,\n$$\angle{ACB}$$ + $$\angle{QCB}$$ = $$180^\circ$$ ...(i)\n$$\angle{ABC}$$ + $$\angle{PBC}$$ = $$180^\circ$$ ...(ii)\nFrom Equation (i) and (ii), we get,\n$$\angle{ACB}$$ + $$\angle{QCB}$$ = $$\angle{ABC}$$ + $$\angle{PBC}$$\nBut $$\angle{PBC}$$ < $$\angle{QCB}$$ ...(given)\n$$\therefore$$ $$\angle{ABC}$$ > $$\angle{ACB}$$\n$$\therefore$$ AC > AB ...(Since, side opposite to greater angle is longer)\nHence, proved.\n\nQ3 ) In figure, $$\angle{B}$$ < $$\angle{A}$$ and $$\angle{C}$$ < $$\angle{D}$$. Show that AD < BC.", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nWe have, $$\angle{B}$$ < $$\angle{A}$$ ...(given)\nAlso, $$\angle{C}$$ < $$\angle{D}$$ ...(given)\nSince, side opposite to greater angle is longer,\nwe get, AO < BO ...(i)\nSimilarly, OD < OC ...(ii)\nOn adding, Equation (i) and (ii), we have,\nAO + OD < BO + OC\n$$\therefore$$ AD < BC\nHence, proved.
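A numerical illustration of the fact used throughout this exercise (our addition): in the right triangle with sides 3, 4 and 5, the angles opposite these sides are approximately $$36.87^\circ$$, $$53.13^\circ$$ and $$90^\circ$$ respectively. The longer the side, the greater the opposite angle, and conversely, which is exactly the relation applied in these problems.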
Show that $$\\angle{A}$$ > $$\\angle{C}$$ and $$\\angle{B}$$ > $$\\angle{D}$$.", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nGiven: ABCD is a quadrilateral having AB as the smallest side and CD as the longest side\n\nTo prove: $$\\angle{A}$$ > $$\\angle{C}$$ and $$\\angle{B}$$ > $$\\angle{D}$$\n\nConstruction: Join A to C and B to D.\n\nFigure:", null, "Proof:\nIn $$\\triangle{ABC}$$, we have AB is the smallest side.\n\nAs, AB < BC\n$$\\therefore$$ $$\\angle{5}$$ < $$\\angle{1}$$ ...(i)\n(Since, angle opposite to longer side is greater)\n\nSimilarly, in $$\\triangle{ADC}$$,\nwe have CD is the largest side.\n\n$$\\therefore$$ $$\\angle{6}$$ < $$\\angle{2}$$ ...(ii)\n(Since, angle opposite to longer side is greater)\n\nOn adding Equation (i) and (ii), we get,\n\n$$\\Rightarrow$$ $$\\angle{5}$$ + $$\\angle{6}$$ < $$\\angle{1}$$ + $$\\angle{2}$$\n$$\\Rightarrow$$ $$\\angle{C}$$ < $$\\angle{A}$$\n$$\\Rightarrow$$ $$\\angle{A}$$ > $$\\angle{C}$$\n\nNow, Similarly, $$\\triangle{ADB}$$, we have AB is the smallest side.\n$$\\therefore$$ $$\\angle{3}$$ > $$\\angle{8}$$ ...(iii)\n(Since, angle opposite to longer side is greater)\n\nSimilarly, in $$\\triangle{BCD}$$, we have CD is the largest side.\nAs, CD > BC\n$$\\therefore$$ $$\\angle{4}$$ > $$\\angle{7}$$ ...(iv)\n(Since, angle opposite to longer side is greater)\n\nOn adding Equation (iii) and (iv), we get,\n$$\\Rightarrow$$ $$\\angle{3}$$ + $$\\angle{4}$$ > $$\\angle{8}$$ + $$\\angle{7}$$\n$$\\Rightarrow$$ $$\\angle{B}$$ > $$\\angle{D}$$\nHence, proved.\n\nQ5 ) In figure, PR > PQ and PS bisects $$\\angle{QPR}$$. Prove that $$\\angle{PSR}$$ > $$\\angle{PSQ}$$.", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nIn $$\\triangle{PQR}$$, we have,\nPR > PQ ...(given)\n$$\\therefore$$ $$\\angle{PQR}$$ > $$\\angle{PRQ}$$ ...(i)\n($$\\because$$ angle opposite to longer side is greater)\nWe can also say,\n$$\\angle{a}$$ = $$\\angle{b}$$ ...(ii)\n($$\\because$$ PS bisects $$\\angle{QPR}$$)", null, "On adding Equation (i) and (ii), we get,\n\n$$\\angle{PQR}$$ + $$\\angle{a}$$> $$\\angle{PRQ}$$ + $$\\angle{b}$$ ...(iii)\nAlso, $$\\angle{PQS}$$ + $$\\angle{PSQ}$$ + $$\\angle{a}$$ = $$180^\\circ$$ ...(iv)\nand $$\\angle{PRS}$$ + $$\\angle{PSR}$$ + $$\\angle{b}$$ = $$180^\\circ$$ ...(v)\n($$\\because$$ the sums of all angles of a triangle are $$180^\\circ$$)\n\nFrom Equation (iv) and (v), we get,\n\n$$\\angle{PQS}$$ + $$\\angle{PSQ}$$ + $$\\angle{a}$$ = $$\\angle{PRS}$$ + $$\\angle{PSR}$$ + $$\\angle{b}$$\n$$\\because$$ $$\\angle{PQS}$$ = $$\\angle{PQR}$$ and $$\\angle{PRS}$$ = $$\\angle{PRQ}$$,\nwe get,\n\n$$\\angle{PQR}$$ + $$\\angle{PSQ}$$ + $$\\angle{a}$$ = $$\\angle{PRQ}$$ + $$\\angle{PSR}$$ + $$\\angle{b}$$\n\nRearranging the terms, we get,\n\n$$\\angle{PQR}$$ + $$\\angle{a}$$ + $$\\angle{PSQ}$$ = $$\\angle{PRQ}$$ + $$\\angle{b}$$ + $$\\angle{PSR}$$ ...(vi)\n\nFrom Equation (iii) and (vi), we get,\n\n$$\\angle{PSQ}$$ < $$\\angle{PSR}$$ ...\n($$\\because$$ side opposite to greater angle is longer)\nHence, proved.\n\nQ6 ) Show that of all line segments drawn from a give point not on it, the perpendicular line segment is the shortest.\n\nNCERT Solutions for Class 9 Maths Chapter 7 Triangles", null, "Given: x is a line and A is a point not lying on x.\n\nC is any point on x other than B.\n\nTo prove: AB < AC\nProof:\nIn $$\\triangle{ABC}$$, $$\\angle{B}$$ is the right angle.\n\nTherefore, $$\\angle{C}$$ is an acute angle.\n$$\\Rightarrow$$ $$\\angle{C}$$ < $$\\angle{B}$$\n$$\\therefore$$ AB < AC\n(Since, side opposite to 
greater angle is longer)\n\nHence, proved that the perpendicular line segment is the shortest.\n\n## NCERT solutions for class 9 Maths Chapter 7 Triangles Exercise 7.5\n\nQ1 ) ABC is a triangle. Locate a point in the interior of $$\\triangle$$ ABC which is equidistant from all the vertices of $$\\triangle$$ABC.\n\nNCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nCircumcentre of a triangle is always equidistant from all the vertices of that triangle. Circumcentre is the point where perpendicular bisectors of all the sides of the triangle meet together.", null, "In $$\\triangle$$ABC, we can find the circumcentre by drawing the perpendicular bisectors of sides AB, BC, and CA of this triangle. O is the point where these bisectors are meeting together. Therefore, O is the point which is equidistant from all the vertices of $$\\triangle$$ABC.\n\nQ2 ) In a triangle locate a point in its interior which is equidistant from all the sides of the triangle.\n\nNCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nThe point which is equidistant from all the sides of a triangle is called the incentre of the triangle. Incentre of a triangle is the intersection point of the angle bisectors of the interior angles of that triangle.", null, "Here, in $$\\triangle$$ABC, we can find the incentre of this triangle by drawing the angle bisectors of the interior angles of this triangle. I is the point where these angle bisectors are intersecting each other. Therefore, I is the point equidistant from all the sides of $$\\triangle$$ABC.\n\nQ3 ) In a huge park, people are concentrated at three points (see figure)", null, "A: where these are different slides and swings for children.\nB: near which a man-made lake is situated.\nC: which is near to a large parking and exist.\nWhere should an ice-cream parlor be set? up so that maximum number of persons can approach it?\n[Hint The parlour should be equidistant from A, B and C.]\n\nNCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nLet us join A and B, and draw l, the perpendicular bisector of AB.\nNow, join B and C, and draw m, the perpendicular bisector of BC. Let the perpendicular bisectors l and m meet at O. The point O is the required point where the ice cream parlour be set up.\nNote: If we join A and C and draw the perpendicular bisector, then it will also meet (or pass through) the point O.", null, "Q4 ) Complete the hexagonal and star shaped Rangolies [see Fig. (i) and (ii)] by filling them with as many equilateral triangles of side 1 cm as you can. Count the number of triangles in each case. Which has more triangles?", null, "NCERT Solutions for Class 9 Maths Chapter 7 Triangles\n\nWe require 150 equilateral triangles of side 1 cm in the Fig. (i) and 300 equilateral triangles in the Fig. (ii). $$\\therefore$$The Fig. (ii) has more triangles.", null, "" ]
[ null, "https://www.trustudies.com/static/images/9_triangles.png", null, "https://www.trustudies.com/static/images/updated_logo.png", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143823.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143805.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143743.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143721.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143710.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143655.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143641.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143624.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143554.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143456.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143446.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143413.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144450.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144450.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144440.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144425.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144336.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144327.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144309.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144204.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144143.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144131.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144120.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144108.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144028.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_144007.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143939.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143842.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143928.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143916.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/9thmaths/IMG_20200603_143858.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/20210209_122427.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/20210209_123334.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/20210209_123958.jpg", null, 
"https://trustudies-app-public.s3.ap-south-1.amazonaws.com/20210209_124651.jpg", null, "https://trustudies-app-public.s3.ap-south-1.amazonaws.com/20210209_124015.jpg", null, "https://play.google.com/intl/en_us/badges/static/images/badges/en_badge_web_generic.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80386204,"math_prob":1.0000099,"size":27815,"snap":"2023-40-2023-50","text_gpt3_token_len":9622,"char_repetition_ratio":0.2720866,"word_repetition_ratio":0.20586081,"special_character_ratio":0.38867518,"punctuation_ratio":0.18639112,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.00001,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78],"im_url_duplicate_count":[null,2,null,null,null,3,null,3,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,7,null,7,null,4,null,4,null,4,null,4,null,4,null,4,null,3,null,3,null,3,null,2,null,3,null,2,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,1,null,1,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T11:13:19Z\",\"WARC-Record-ID\":\"<urn:uuid:f1ab0f24-2573-4431-9d52-737ce52d3eaf>\",\"Content-Length\":\"85762\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4d02ee00-518b-4d50-81eb-4419515a36b2>\",\"WARC-Concurrent-To\":\"<urn:uuid:f10c65d3-4839-47b1-b74d-3cce7ac8080f>\",\"WARC-IP-Address\":\"52.66.170.59\",\"WARC-Target-URI\":\"https://www.trustudies.com/ncert-solutions/class-9/maths/triangles/\",\"WARC-Payload-Digest\":\"sha1:SSS2NNOGT5BAGIA6L3F65YUBJOGVBIIE\",\"WARC-Block-Digest\":\"sha1:KNAD3B6SJUKRWGTTBPOCY4FLN6X3HWYK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510888.64_warc_CC-MAIN-20231001105617-20231001135617-00488.warc.gz\"}"}
https://www.etutorworld.com/math/dividing-fractions-by-whole-numbers.html
[ "# Dividing Fractions by Whole Numbers\n\n## Steps to Divide Fractions by Whole Numbers\n\nTo divide a fraction by a whole number, the idea of the reciprocal must be understood. Follow the steps below:\n\nStep 1: Write the division fact as dividend ÷ divisor, where the dividend is the given fraction and the divisor is the given whole number.\n\nStep 2: Write the reciprocal of the divisor (the whole number) and change the division sign to a multiplication sign.\n\nStep 3: Multiply the numerator by the numerator and the denominator by the denominator to find the product. That is the required answer.\n\nAs an illustration, divide 1/9 by 7.\n\nStep 1: 1/9 ÷ 7\n\nStep 2: 1/9 × 1/7 (the reciprocal of the divisor 7 is 1/7)\n\nStep 3: 1/9 × 1/7 = 1/63\n\nTherefore, 1/9 ÷ 7 = 1/63", null, "Standard Expression for Dividing Fractions by Whole Numbers\n\n## Dividing Fractions by Whole Numbers: FAQs\n\n#### Solve ⅖ ÷ 2\n\nSolution: Step 1: Multiply the bottom number of the fraction by the whole number\n\n⅖ ÷ 2 = 2/(5 × 2) = 2/10\n\nStep 2: Simplify the fraction\n\n2/10 = ⅕\n\n#### What is 4/7 ÷ 8?\n\na) ⅞   b) 1/14   c) 8/4   d) 7/32\n\n#### What is 5/9 ÷ 10?\n\na) 9/50   b) 18   c) 1/9   d) 1/18\n\n#### What is ⅗ ÷ 6?\n\na) 1/10   b) 5/18   c) 30   d) 20/5\n\n#### What is 8/9 ÷ 4?\n\na) 9/32   b) 32/8   c) 2/9   d) 36/8", null, "Gloria Mathew writes on math topics for K-12. A trained writer and communicator, she makes math accessible and understandable to students at all levels. Her ability to explain complex math concepts with easy-to-understand examples helps students master math. LinkedIn", null, "Our mission is to provide high quality online tutoring services, using state of the art Internet technology, to school students worldwide." ]
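A minimal sketch of the reciprocal method described in the record above, using Python's standard fractions module (the helper name is my own, not from the original page):

```python
from fractions import Fraction

def divide_fraction_by_whole(num: int, den: int, whole: int) -> Fraction:
    # Steps 2-3 from the page: turn "÷ whole" into "× 1/whole", then multiply across.
    return Fraction(num, den) * Fraction(1, whole)

print(divide_fraction_by_whole(1, 9, 7))  # 1/63, matching the worked example
print(divide_fraction_by_whole(2, 5, 2))  # 1/5, i.e. 2/10 simplified as in the FAQ
```

Fraction reduces results automatically, which is why 2/10 prints as 1/5.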
[ null, "https://www.facebook.com/tr", null, "https://www.etutorworld.com/wp-content/uploads/2023/11/Untitled-2new-file_article-449.05-copy-17-1024x1022.jpg", null, "data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20150%20150%22%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20286%2060%22%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8317322,"math_prob":0.8988117,"size":6253,"snap":"2023-40-2023-50","text_gpt3_token_len":1551,"char_repetition_ratio":0.110417664,"word_repetition_ratio":0.0067307693,"special_character_ratio":0.2243723,"punctuation_ratio":0.07359706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97018564,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,1,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T13:36:38Z\",\"WARC-Record-ID\":\"<urn:uuid:8425a2ed-3317-4f0f-9b0a-48088f76873b>\",\"Content-Length\":\"294056\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:14a040d1-2255-4b4c-98f2-752e9bcfec82>\",\"WARC-Concurrent-To\":\"<urn:uuid:298f1a95-4901-427e-8d2d-0def7f27d9b4>\",\"WARC-IP-Address\":\"69.167.168.130\",\"WARC-Target-URI\":\"https://www.etutorworld.com/math/dividing-fractions-by-whole-numbers.html\",\"WARC-Payload-Digest\":\"sha1:YHK5PZPSNKGS55DXRYFKYO5UILVUU7PC\",\"WARC-Block-Digest\":\"sha1:7X4R2DVBDLWGC6AKYBVOJGGAGGNPVDAF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100508.23_warc_CC-MAIN-20231203125921-20231203155921-00737.warc.gz\"}"}
https://nz.education.com/resources/time-to-the-quarter-hour/?cid=10.993
[ "# Time to the Quarter-Hour Resources\n\n33 filtered results\n33 filtered results\nTime to the Quarter Hour\nSort by\nTelling Time on the Quarter Hour: Match It\nWorksheet\nTelling Time on the Quarter Hour: Match It\nThis worksheet serves up plenty of time practice for kids who need that extra nudge.\nMath\nWorksheet\nTime Practice\nWorksheet\nTime Practice\nPractice telling time with an analog clock using this great practice sheet.\nMath\nWorksheet\nSet the Clock: Time to 15 Minutes\nGame\nSet the Clock: Time to 15 Minutes\nHelp Floyd wake up by setting his alarm in the game that helps kids tell analog time.\nMath\nGame\nTell the Time! 1\nWorksheet\nTell the Time! 1\nFor this second grade math worksheet, kids look at each analog clock, determine the time, and write the time in the space provided.\nMath\nWorksheet\nQuarter Hour\nWorksheet\nQuarter Hour\nPractice telling time with this match up worksheet! Kids will work on telling time to the quarter hour.\nMath\nWorksheet\nTelling Time to the Quarter Hour\nWorksheet\nTelling Time to the Quarter Hour\nTeach your student about telling time to the quarter hour with this intro worksheet.\nMath\nWorksheet\nTime 2\nGuided Lesson\nTime 2\nTelling time on an analog clock can be tricky business for young learners. This guided lesson on time helps second graders improve their ability to tell time to the hour, half-hour and quarter hour. Targeted instruction and practical exercises give kids the practice they need to master this important skill. For even more practice opportunities, download and print the time worksheets that correspond to this lesson.\nMath\nGuided Lesson\nTelling Time: On the Hour or Half Hour?\nWorksheet\nTelling Time: On the Hour or Half Hour?\nTest your second grader's time telling skills by challenging her to write the time shown on each clock.\nMath\nWorksheet\nTime Quiz\nWorksheet\nTime Quiz\nWhat time is it? Review time telling skills on the analog clock with this fill-in-the-bubbles quiz.\nMath\nWorksheet\nTime and Money Math\nWorkbook\nTime and Money Math\nFor first graders taking on the concepts of time and money, this workbook dives in with clock reading and coin counting practice.\nMath\nWorkbook\nTelling the Time\nWorksheet\nTelling the Time\nGive your child practice telling time with this printable worksheet that asks him to write the time in numerals.\nMath\nWorksheet\nQuarter To vs. Quarter Past: Draw the Time\nWorksheet\nQuarter To vs. Quarter Past: Draw the Time\nHelp time expressions like \"quarter 'to\" sink in for your child with this drawing worksheet. They'll draw hands on the clocks to practice telling time.\nMath\nWorksheet\nTelling Time: Quarter Hour\nWorksheet\nTelling Time: Quarter Hour\nFlex your time muscles by practicing telling time on the quarter hour.\nMath\nWorksheet\nTime to Roll\nLesson Plan\nTime to Roll\nIt's about time to learn about time! Your students will interact with a class made clock and roll the dice to 'make up' a time. 
But they better know their hour and minute hands apart to find success in this lesson!\nMath\nLesson Plan\nTelling Time on the Quarter Hour: Write It\nWorksheet\nTelling Time on the Quarter Hour: Write It\nGet some extra time practice with this worksheet that asks kids to read the clocks, then write the time on the line.\nMath\nWorksheet\nTelling Time: Quarter Till and Quarter Past\nWorksheet\nTelling Time: Quarter Till and Quarter Past\nBoost your second grader's time telling skills with a lesson on the time expressions \"quarter till\" and \"quarter past\".\nMath\nWorksheet\nGet Clock-Wise\nWorkbook\nGet Clock-Wise\nChallenge kids to think outside the digital box with these analog clock worksheets. Draw the time, write it in words, do match-ups and more. Kids will look at the hour, half-hour and quarter-hour.\nMath\nWorkbook\nTracking Time\nWorkbook\nTracking Time\nSecond graders practice telling time and reading the calendar, and test their skills with problems on elapsed time.\nMath\nWorkbook\nUnderstanding Time\nWorksheet\nUnderstanding Time\nMath\nWorksheet\nWorksheet\nTelling the time is made easy in this worksheet with six clocks all displaying different times. Read the clock and write the time in the space provided.\nMath\nWorksheet\nLearn to Tell Time\nWorksheet\nLearn to Tell Time\nMath\nWorksheet\nLearning to Tell Time: Quarter Hour\nWorksheet\nLearning to Tell Time: Quarter Hour\nKids test their time savvy by drawing hands on the clock to show the quarter hour given.\nMath\nWorksheet\nRotary Clock Practice\nWorksheet\nRotary Clock Practice\nMath\nWorksheet\nMatch the Clock II\nWorksheet\nMatch the Clock II\nGive your first grader practice telling time with this printable worksheet that asks him to match a clock face to the written time.\nMath\nWorksheet\nTelling the Time 2\nWorksheet\nTelling the Time 2" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9289539,"math_prob":0.5717129,"size":424,"snap":"2020-24-2020-29","text_gpt3_token_len":80,"char_repetition_ratio":0.11666667,"word_repetition_ratio":0.0,"special_character_ratio":0.1768868,"punctuation_ratio":0.08,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97902787,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-06T05:46:21Z\",\"WARC-Record-ID\":\"<urn:uuid:43f260ba-eaff-42aa-a654-5394c9f23a32>\",\"Content-Length\":\"148620\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3c168774-cad4-4199-975f-0d922d667953>\",\"WARC-Concurrent-To\":\"<urn:uuid:3714facb-3bbd-418a-82f2-634240c66ec4>\",\"WARC-IP-Address\":\"199.232.65.185\",\"WARC-Target-URI\":\"https://nz.education.com/resources/time-to-the-quarter-hour/?cid=10.993\",\"WARC-Payload-Digest\":\"sha1:VCR2L6DRWZGILN3ALOGQ53BHQ6YBWAQT\",\"WARC-Block-Digest\":\"sha1:DNMJQP7RPWTF6C223TGWK5M2WUUV2CGB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348509972.80_warc_CC-MAIN-20200606031557-20200606061557-00438.warc.gz\"}"}
https://www.systutorials.com/docs/linux/man/3-ctgexc/
[ "# ctgexc (3) - Linux Manuals\n\nctgexc.f -\n\n## SYNOPSIS\n\n### Functions/Subroutines\n\nsubroutine ctgexc (WANTQ, WANTZ, N, A, LDA, B, LDB, Q, LDQ, Z, LDZ, IFST, ILST, INFO)\nCTGEXC\n\n## Function/Subroutine Documentation\n\n### subroutine ctgexc (logicalWANTQ, logicalWANTZ, integerN, complex, dimension( lda, * )A, integerLDA, complex, dimension( ldb, * )B, integerLDB, complex, dimension( ldq, * )Q, integerLDQ, complex, dimension( ldz, * )Z, integerLDZ, integerIFST, integerILST, integerINFO)\n\nCTGEXC\n\nPurpose:\n\n``` CTGEXC reorders the generalized Schur decomposition of a complex\nmatrix pair (A,B), using an unitary equivalence transformation\n(A, B) := Q * (A, B) * Z**H, so that the diagonal block of (A, B) with\nrow index IFST is moved to row ILST.\n\n(A, B) must be in generalized Schur canonical form, that is, A and\nB are both upper triangular.\n\nOptionally, the matrices Q and Z of generalized Schur vectors are\nupdated.\n\nQ(in) * A(in) * Z(in)**H = Q(out) * A(out) * Z(out)**H\nQ(in) * B(in) * Z(in)**H = Q(out) * B(out) * Z(out)**H\n```\n\nParameters:\n\nWANTQ\n\n``` WANTQ is LOGICAL\n.TRUE. : update the left transformation matrix Q;\n.FALSE.: do not update Q.\n```\n\nWANTZ\n\n``` WANTZ is LOGICAL\n.TRUE. : update the right transformation matrix Z;\n.FALSE.: do not update Z.\n```\n\nN\n\n``` N is INTEGER\nThe order of the matrices A and B. N >= 0.\n```\n\nA\n\n``` A is COMPLEX array, dimension (LDA,N)\nOn entry, the upper triangular matrix A in the pair (A, B).\nOn exit, the updated matrix A.\n```\n\nLDA\n\n``` LDA is INTEGER\nThe leading dimension of the array A. LDA >= max(1,N).\n```\n\nB\n\n``` B is COMPLEX array, dimension (LDB,N)\nOn entry, the upper triangular matrix B in the pair (A, B).\nOn exit, the updated matrix B.\n```\n\nLDB\n\n``` LDB is INTEGER\nThe leading dimension of the array B. LDB >= max(1,N).\n```\n\nQ\n\n``` Q is COMPLEX array, dimension (LDZ,N)\nOn entry, if WANTQ = .TRUE., the unitary matrix Q.\nOn exit, the updated matrix Q.\nIf WANTQ = .FALSE., Q is not referenced.\n```\n\nLDQ\n\n``` LDQ is INTEGER\nThe leading dimension of the array Q. LDQ >= 1;\nIf WANTQ = .TRUE., LDQ >= N.\n```\n\nZ\n\n``` Z is COMPLEX array, dimension (LDZ,N)\nOn entry, if WANTZ = .TRUE., the unitary matrix Z.\nOn exit, the updated matrix Z.\nIf WANTZ = .FALSE., Z is not referenced.\n```\n\nLDZ\n\n``` LDZ is INTEGER\nThe leading dimension of the array Z. LDZ >= 1;\nIf WANTZ = .TRUE., LDZ >= N.\n```\n\nIFST\n\n``` IFST is INTEGER\n```\n\nILST\n\n``` ILST is INTEGER\nSpecify the reordering of the diagonal blocks of (A, B).\nThe block with row index IFST is moved to row ILST, by a\nsequence of swapping between adjacent blocks.\n```\n\nINFO\n\n``` INFO is INTEGER\n=0: Successful exit.\n<0: if INFO = -i, the i-th argument had an illegal value.\n=1: The transformed matrix pair (A, B) would be too far\nfrom generalized Schur form; the problem is ill-\nconditioned. (A, B) may have been partially reordered,\nand ILST points to the first row of the current\nposition of the block being moved.\n```\n\nAuthor:\n\nUniv. of Tennessee\n\nUniv. of California Berkeley\n\nNAG Ltd.\n\nDate:\n\nNovember 2011\n\nContributors:\n\nBo Kagstrom and Peter Poromaa, Department of Computing Science, Umea University, S-901 87 Umea, Sweden.\n\nReferences:\n\n B. Kagstrom; A Direct Method for Reordering Eigenvalues in the Generalized Real Schur Form of a Regular Matrix Pair (A, B), in M.S. Moonen et al (eds), Linear Algebra for Large Scale and Real-Time Applications, Kluwer Academic Publ. 
1993, pp 195-218.\n\n B. Kagstrom and P. Poromaa; Computing Eigenspaces with Specified Eigenvalues of a Regular Matrix Pair (A, B) and Condition Estimation: Theory, Algorithms and Software, Report UMINF - 94.04, Department of Computing Science, Umea University, S-901 87 Umea, Sweden, 1994. Also as LAPACK Working Note 87. To appear in Numerical Algorithms, 1996.\n\n B. Kagstrom and P. Poromaa, LAPACK-Style Algorithms and Software for Solving the Generalized Sylvester Equation and Estimating the Separation between Regular Matrix Pairs, Report UMINF - 93.23, Department of Computing Science, Umea University, S-901 87 Umea, Sweden, December 1993, Revised April 1994, Also as LAPACK working Note 75. To appear in ACM Trans. on Math. Software, Vol 22, No 1, 1996.\n\nDefinition at line 200 of file ctgexc.f.\n\n## Author\n\nGenerated automatically by Doxygen for LAPACK from the source code." ]
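CTGEXC is not, to my knowledge, wrapped directly by SciPy's high-level API, but the related reordering of a generalized Schur pair is exposed as scipy.linalg.ordqz (built on LAPACK's reordering routines). A sketch of the same Q * (A, B) * Z**H contract, with random complex test matrices of my own:

```python
import numpy as np
from scipy.linalg import ordqz

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Reorder the generalized Schur form of (A, B) so that eigenvalues inside
# the unit circle come first; AA and BB stay upper triangular (complex case).
AA, BB, alpha, beta, Q, Z = ordqz(A, B, sort='iuc', output='complex')

# The reordering is a unitary equivalence: Q @ AA @ Z^H reproduces A.
assert np.allclose(Q @ AA @ Z.conj().T, A)
assert np.allclose(Q @ BB @ Z.conj().T, B)
```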
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7137737,"math_prob":0.92321265,"size":4113,"snap":"2023-40-2023-50","text_gpt3_token_len":1226,"char_repetition_ratio":0.114383064,"word_repetition_ratio":0.113018595,"special_character_ratio":0.27741307,"punctuation_ratio":0.22737306,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9928838,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T22:41:54Z\",\"WARC-Record-ID\":\"<urn:uuid:1148f50d-4f01-4286-87f4-dc81b2228d8d>\",\"Content-Length\":\"33756\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:97867301-cacb-4be2-b374-a5f005972e80>\",\"WARC-Concurrent-To\":\"<urn:uuid:b5b10b8d-113d-4522-9afd-fce8fcff5d8c>\",\"WARC-IP-Address\":\"104.21.34.36\",\"WARC-Target-URI\":\"https://www.systutorials.com/docs/linux/man/3-ctgexc/\",\"WARC-Payload-Digest\":\"sha1:GRYPZX547PI2J5JM22RQOLEHIEVM6D4X\",\"WARC-Block-Digest\":\"sha1:AU6DEQVU5CWKAODGX7TD27QXK6UP2MRJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679102697.89_warc_CC-MAIN-20231210221943-20231211011943-00120.warc.gz\"}"}
http://0x80.pl/notesen/2015-12-29-float-to-string.html
[ "# Fast conversion of floating-point values to string\n\nAuthor: Wojciech Muła 2015-12-29\n\n# Introduction\n\nThe conversion of floating-point numbers to a string representation is not an easy task. Such procedure must deal with different special FP values, perform proper rounding and so on. The paper Printing Floating-Point Numbers Quickly and Accurately [PDF] by Robert G. Burger & R. Kent Dybvig describes a procedure which solves the problem correctly.\n\nHowever, in some applications (mostly logging, debugging) rounding and accuracy are not as important as the speed. Sometimes we simply want to know if a number was 1000.5 or 0.5 and even if we read \"0.499999\" nothing wrong would happen.\n\n# Algortihm\n\nA floating-point number could be represented as a fixed-point number of size 64:64 and then fast conversion routines could be used. This approach is several times faster than standard sprintf, moreover the method is also suitable for embedded systems.\n\nThe biggest drawback is that the method is able to convert only a sub-range of possible floating-point values. However, the sub-range covers reasonable interval of numbers and in the practice it should be acceptable.\n\n## Integer part\n\nThe conversion from floating-point to integer require a few simple bit-instructions: bit-ands, bit-ors and shifts. The full procedure is described in article Convert float to int without FPU/SSE. Of course not all values could be converted, and some testing is needed before doing conversion.\n\nWhen an integer is ready we can use any procedure to convert it to the text. Even the simplest, naive method would be good; I've described some other, faster methods in the article SSE: conversion integers to decimal representation.\n\n## Fraction part\n\nThe fraction part is obtained in the same way as the integer part: by shifting floating-point binary representation. The layout of fraction part:\n\n```+----+--------------+\n|0000|xxxxxxx...xxxx|\n+----+--------------+\n| | |\n| | +- fraction bits\n| |\n| +- decimal dot\n|\n+- integer part (4 highest bits)\n```\n\nSuch number is then treated as an integer number and the naive method of conversion fraction numbers is used. In each step fraction value is multiplied by 10, then the integer part is extracted from the result:\n\n```uint64_t fraction = ...\n\nwhile (fraction != 0) {\n\nfraction *= 10;\n\nconst uint64 intpart = fraction >> (64-4);\n// save intpart as ASCII\n\nfraction &= 0x0ffffffffffffffflu;\n}\n```\n\nIn a single iteration following operations are performed:\n\n• 1 multiplication by constant (on x86 this may be single lea);\n• 1 right shift by constant;\n• 1 bit-and.\n\n# Experiments\n\nSample code is available at github. The program speed does conversion of wide range of float values using sprintf and the described method.\n\nFollowing times were printed on my Core i5 (gcc 4.9.2 with -O3 flag):\n\n• sprintf: 8.842 s\n• custom: 0.594 s.\n\nThe speedup is around 15 times. However, sprintf does rounding, parsing a format string, allocates memory etc. I guess the real speedup would be 4 to 8 times, but still it's really impressive improvement." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.911231,"math_prob":0.95618844,"size":2774,"snap":"2020-10-2020-16","text_gpt3_token_len":605,"char_repetition_ratio":0.12888087,"word_repetition_ratio":0.0,"special_character_ratio":0.24297044,"punctuation_ratio":0.12401575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9786092,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-04T12:29:56Z\",\"WARC-Record-ID\":\"<urn:uuid:312abcaf-5755-40bb-889d-1bca2d074f2c>\",\"Content-Length\":\"6198\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81d5062a-4c76-468b-ad50-0c3af434a868>\",\"WARC-Concurrent-To\":\"<urn:uuid:b18d75f8-a388-434c-a6ea-e4ad6dc1d411>\",\"WARC-IP-Address\":\"91.121.209.206\",\"WARC-Target-URI\":\"http://0x80.pl/notesen/2015-12-29-float-to-string.html\",\"WARC-Payload-Digest\":\"sha1:CH47Z4ZFWLYDO4AV6MTCKYY4FSGUZWBD\",\"WARC-Block-Digest\":\"sha1:TPQRNQEENSUONTTXN4EQE5WBKVQE7QIA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370521876.48_warc_CC-MAIN-20200404103932-20200404133932-00113.warc.gz\"}"}
https://datascience.stackexchange.com/questions/79623/normal-equation-for-linear-regression-is-illogical
[ "# Normal equation for linear regression is illogical\n\nCurrently I'm taking Andrew Ng's course. He gives a following formula to find solution for linear regression analytically:\n\n$$θ = (X^T * X)^{-1} * X^T * у$$\n\nHe doesn't explain it so I searched for it and found that $$(X^T * X)^{-1} * X^T$$ is actually a formula of pseudoinverse in a case where our columns are linear independent. And this actually makes a lot of sense. Basically, we want to find such $$θ$$ that $$X * θ = y$$, thus $$θ = y * X^{-1}$$, so if we replace $$X^{-1}$$ with our pseudoinverse formula we get exactly the $$θ = (X^T * X)^{-1} * X^T * y$$.\n\nWhat I don't understand is why nobody mentions that this verbose formula is just $$θ = y * X^{-1}$$ with pseudoinverse. Okay, Andrew Ng's course is for beginners and he didn't want to throw a bunch of math at students. But octave, where the assignments are done, has function pinv() to find a pseudoinverse. Even more, Andrew Ng actually mentions pseudoinverse in his videos on normal equation, in the context of $$(X^T * X)$$ being singular so that we can't find its inverse. As I mentioned above, $$(X^T * X)^{-1} * X^T$$ is a formula for pseudoinverse only in case of columns being linear independent. If they are dependent (e.g. some features are redundant), there are other formulae to consider, but anyway octave handles all these cases under the hood of pinv() function, which is more than just a macro for $$(X^T * X)^{-1} * X^T$$. And Andrew Ng instead of saying to use pinv(X) * y gives this: pinv(X' * X) * X' * y, basically we use a pseudoinverse to find a pseudoinverse. Why?\n\n• Please use LaTex when you write formulas (start and end with \\$) to write$y\\$ etc Aug 1, 2020 at 14:36\n• @Peter done, thanks Aug 1, 2020 at 14:45\n\nHello Oleksii and welcome to DSSE.\n\n$$\\theta = (X^TX)^{-1}X^Ty$$\n\nWhere\n\n• $$\\theta$$ is your regressor\n• $$X$$ is a matriz containing stacked vectors (as rows) of your features/independent variables\n• $$y$$ is a matriz containing stacked vectors (or scalars) of your predictions/dependent variables\n\nThis equation is a solution to a linear set of equations: $$Ax = B$$ that occurs in trying to minimize the least squares loss.\n\nThe reason why you see this pinv() on the code is because if X has not enough linearly independent rows, $$X^TX$$ (also known as $$R$$, the autocorrelation matriz of the data, and it's inverse is called the precision matriz) will result in a singular (or near singular) matriz, which inversion might not be possible. Even if this singularity only happens because of the working precision of your computer/programming language.\n\nUsing pinv() is usually not recommended, because even if it allows to compute a regressor it will overfit the training data. Alternative solutions to working with singular matriz is adding $$\\delta~I$$ to $$R$$\n\nwhere $$\\delta$$ is a small constant (usually 1 to 10 eps) and $$I$$ is the identity." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9175791,"math_prob":0.9979353,"size":1531,"snap":"2022-27-2022-33","text_gpt3_token_len":426,"char_repetition_ratio":0.14210871,"word_repetition_ratio":0.042857144,"special_character_ratio":0.28804702,"punctuation_ratio":0.09354839,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997187,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-15T04:56:58Z\",\"WARC-Record-ID\":\"<urn:uuid:4ee687d9-1851-432f-b3d8-b0f989ed7db4>\",\"Content-Length\":\"230880\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4fdfd0cd-e722-477c-92a3-464197f9ef50>\",\"WARC-Concurrent-To\":\"<urn:uuid:cec21854-dba5-416f-ad7c-4aa8ea671452>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://datascience.stackexchange.com/questions/79623/normal-equation-for-linear-regression-is-illogical\",\"WARC-Payload-Digest\":\"sha1:XLVLCMZVN5TBMKU2KRN4SNADRUPHL77X\",\"WARC-Block-Digest\":\"sha1:K43IZMSVM6Z6W2YCRSK4ATJM4BATWFS4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572127.33_warc_CC-MAIN-20220815024523-20220815054523-00343.warc.gz\"}"}
http://www.prathapkudupublog.com/2017/06/plus-one.html
[ "## Monday, June 5, 2017\n\n### Plus one\n\nProblem Statement\n\nGiven a non-negative integer represented as a non-empty array of digits, plus one to the integer.\nYou may assume the integer do not contain any leading zero, except the number 0 itself.\nThe digits are stored such that the most significant digit is at the head of the list.\n\n Example\n```input = {1, 4, 9, 9}\n```\n```\noutput ={1, 5, 0, 0 }\n\n```\n\n### Solution\n\n• We need to traverse from the end .\n• If digit is less than 9 than increment by one and return the array\n• If the digit is 9 then replace it with 0\n• The last part of the code is needed if all the digits in the array are 9 ex 9999\n\n```public static int [] get(int [] digits)\n{\n//Loop through the array\nfor(int i=digits.length-1;i>=0;i--)\n{\n//check if the digits are less than 9\nif(digits[i]<9){\ndigits[i]++;\nreturn digits;\n}\n//If the no is greater than 1 then assign 0\ndigits[i]=0;\n}\n//This last part of code is only for the case that the whole input array is 9s.\n//For example : 99999-----> 100000\nint [] newNumber = new int [digits.length+1];\nnewNumber=1;\nreturn newNumber;\n}\n}\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6800147,"math_prob":0.98598754,"size":1199,"snap":"2020-45-2020-50","text_gpt3_token_len":325,"char_repetition_ratio":0.13556485,"word_repetition_ratio":0.10958904,"special_character_ratio":0.3027523,"punctuation_ratio":0.11328125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99671334,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-26T15:01:01Z\",\"WARC-Record-ID\":\"<urn:uuid:fde1a71a-aff3-4421-b18a-1af78a8e9e6e>\",\"Content-Length\":\"71729\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1fe31dd1-fb7d-4e23-85e1-79ba80738af4>\",\"WARC-Concurrent-To\":\"<urn:uuid:15a6c9c8-2391-4d87-b32f-b626db00a215>\",\"WARC-IP-Address\":\"172.217.164.147\",\"WARC-Target-URI\":\"http://www.prathapkudupublog.com/2017/06/plus-one.html\",\"WARC-Payload-Digest\":\"sha1:2TB6NJJDU2KZPSISYMIPBKYCWSEIML3Q\",\"WARC-Block-Digest\":\"sha1:RICUBEIDDFG2HL7RHHY77JZG2NU42EVL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107891428.74_warc_CC-MAIN-20201026145305-20201026175305-00328.warc.gz\"}"}
https://ask.sagemath.org/question/8054/notebook-questions/?comment=21902
[ "# notebook questions\n\nHello!\n\nCould you help me? I used notebook and in it I wrote a program in SAGE which generated a huge dictionary. I would like to save only this dictionary because I need to use this dictionary in another SAGE program. Unfortunately, I am not very familiar in using the notebook, so I don't know how to do this... Another (very simple) question when there is a bug in my program in the notebook, the computer wrotes me which line the bug is. But I don't how to find this line with the given number. Is there a line-numerator in the notebook?\n\nedit retag close merge delete\n\nThe question is not very clear. Can you write a sample code to explain what you mean by dictionary and how you use it in another sage program.\n\nHello!\n\nI show you an example. My dictionary is the following:\n\nsquare_sum={}\n\nfor a in range(1000):\n\n for b in range(1000):\n\nsquare_sum[a,b]=a^2+b^2\n\n\nNow in another notebook worksheet I would like to upload only this dictionary (or is it called vocabulary?) square_sum. So the question is how to save the dictionary and how to upload it in another worksheet?\n\nSort by » oldest newest most voted\n\nKatika, you should be able to save any object in Sage, as well as a whole session. If you click \"Help\" on the notebook at the top, the page you get should have\n\nUse \"save obj1 obj2 ...\" and \"load obj1 obj2 ...\". This allows for easy moving of objects from one worksheet to another, and saving of objects for later use.\n\n\ntoward the bottom. The syntax is wrong, though.This is now Ticket 11124.\n\nThis should work.\n\nA = matrix([[2,3],[3,4]])\nsave(A,'test')\n\n\nAnd then a link to the actual .sobj file should be created, and you could use that. The help for the DATA variable should help as well. If you want to use your home directory, then\n\nsave(A,os.path.expanduser('~/')+'test')\n\n\nmight help you; it will show up in your home directory, if you use a system which supports the tilde. Maybe that should be in the help as well...\n\nmore\n\nAnother (very simple) question when there is a bug in my program in the notebook, the computer wrotes me which line the bug is. But I don't how to find this line with the given number. Is there a line-numerator in the notebook?\n\nHmm, that is harder. At least for simple examples, the line does seem to be the line I made the mistake in:\n\nfor i in range(10):\nprint i+1\nprint i^2\na=i\nprint a+i\n\n\nBut multicell ones don't seem to be as nice. You may just have to look at the code that causes the problem and find it in your input. We don't have interactive markup (e.g. coloring) yet, because the javascript really slows things down.\n\nmore\n\nIf any one of the answers was best, be sure to help future users and mark it as such, so that it's easily searchable. Doesn't have to be my answer, either :-)\n\nI secretly voted :-) (Or voting is different from marking... ?)\n\nYes. There should be a large check mark that shows up on each answer, which can be clicked. I don't know if you can vote for your own, but you can vote for others. It also adds to your 'karma' a small amount.", null, "def square_sum(a,b):\nreturn(a*a+b*b)\nfname=\"filename.dat\"\nfp=open(fname,'w')\nfor a in range(1000):\nfor b in range(1000):\ns=str(a)+' '+str(b)+' '+str(square_sum(a,b))+'\\n'\nfp.write(s)\nfp.close()\n\n\nThis will write the data to a filename named \"filename.dat\". 
You can also give a path if you wish.\n\nTo read it from another notebook use the following code\n\nimport numpy as np\narray=np.genfromtxt('/path-to-file/filename.dat',delimiter='')\n\n\nOf course you have to replace 'path-to-file' by whatever path it is. Now you can use array the way we usually use arrays.\n\nI hope this helps.\n\nmore", null, "If Sage is the only program that needs to read the data, you can use the \"save\" and \"load\" commands:\n\nsage: square_sum = dict(((a,b),a^2+b^2) for a,b in CartesianProduct([1..5],[1..5]))\nsage: square_sum\n{(1, 3): 10, (5, 4): 41, (2, 1): 5, (5, 1): 26, (2, 5): 29, (1, 2): 5, (3, 3): 18, (4, 4): 32, (1, 5): 26, (2, 2): 8, (5, 3): 34, (4, 1): 17, (1, 1): 2, (3, 2): 13, (4, 5): 41, (5, 5): 50, (1, 4): 17, (2, 3): 13, (4, 2): 20, (3, 5): 34, (3, 4): 25, (3, 1): 10, (4, 3): 25, (5, 2): 29, (2, 4): 20}\nsage: save(square_sum, \"ssum\")\nsage: s2\n{(1, 3): 10, (3, 2): 13, (2, 1): 5, (5, 1): 26, (2, 5): 29, (1, 2): 5, (3, 3): 18, (4, 4): 32, (1, 5): 26, (2, 2): 8, (3, 5): 34, (4, 1): 17, (1, 1): 2, (5, 4): 41, (4, 5): 41, (1, 4): 17, (2, 3): 13, (4, 2): 20, (5, 5): 50, (5, 3): 34, (5, 2): 29, (3, 1): 10, (4, 3): 25, (3, 4): 25, (2, 4): 20}\nsage: s2 == square_sum\nTrue\n\n\n(Remember that dictionaries don't have an order, so the fact that the keys show up differently in s2 than in the original square_sum doesn't matter.)\n\nmore\n\nI tried this, but it doesn't work if I would like to load \"ssum\" in another notebook cell... Is there a way to tell the path to the ssum.obj? I think something load('~/sage/ssum.sobj')?\n\nkcrisman's example shows how you can do this: save(A,\"some-hardcoded-path\"), or save(A,os.path.expanduser('~/')+'test') if you want to use a tilde. (Tilde expansion isn't automatic)." ]
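As an aside not from the original thread: Sage's .sobj files are, as far as I know, built on Python pickling, so outside of Sage the same dictionary round-trip can be done with the standard pickle module (a smaller range here just to keep the file small):

```python
import pickle

square_sum = {(a, b): a**2 + b**2 for a in range(100) for b in range(100)}

with open('square_sum.pkl', 'wb') as fp:
    pickle.dump(square_sum, fp)   # serialize the dictionary to disk

with open('square_sum.pkl', 'rb') as fp:
    restored = pickle.load(fp)    # read it back, e.g. from another session

assert restored == square_sum
```

Within Sage itself, save/load as shown in the answers above remains the supported route.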
[ null, "https://www.gravatar.com/avatar/e2127c859038194c0b0b9178ce60c719", null, "https://www.gravatar.com/avatar/3c3f9c15c2b1f11f2d713798c85cbdb5", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9469835,"math_prob":0.7537636,"size":1009,"snap":"2022-40-2023-06","text_gpt3_token_len":223,"char_repetition_ratio":0.14626865,"word_repetition_ratio":0.0,"special_character_ratio":0.22101091,"punctuation_ratio":0.10576923,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96131754,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-01T15:38:23Z\",\"WARC-Record-ID\":\"<urn:uuid:9954dbbb-7799-488b-9895-b25e2eb438a1>\",\"Content-Length\":\"85948\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:64bb2aae-7461-41c5-9ad0-bcec7d998a93>\",\"WARC-Concurrent-To\":\"<urn:uuid:a34d7cac-a272-4b72-bf0b-2c9945a60905>\",\"WARC-IP-Address\":\"194.254.163.53\",\"WARC-Target-URI\":\"https://ask.sagemath.org/question/8054/notebook-questions/?comment=21902\",\"WARC-Payload-Digest\":\"sha1:UYU75L5NB74SFUKD2AJMFSRHQYGGTI2C\",\"WARC-Block-Digest\":\"sha1:SPU2YPYMD4MDUI6XRXK26WYW4ICUV5LJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499946.80_warc_CC-MAIN-20230201144459-20230201174459-00660.warc.gz\"}"}
https://www.jiskha.com/questions/1504096/if-a-b-c-are-in-h-p-prove-that-a-2-c-2-2b-2-answer-1-a-1-b-and-1-c-are-in-h-p
[ "# h.p.\n\nif a,b,c, are in h.p. prove that a^2+c^2> 2b^2 ?\nanswer= 1/a,1/b, and 1/c are in h.p.\n=1/b-1/a=1/c-1/b\n=1/b+1/b=1/c+1/a\n=b+b/b^2=a+c/ac\n=2/b^2=a+c/ac\n=b^2(a+c)=2ac\n=(a+c)ac=2b^2\n=a^2+b^2 >2b^2\n\n1. 👍 0\n2. 👎 3\n3. 👁 278\n\n1. 👍 0\n2. 👎 0\n\n## Similar Questions\n\n1. ### Geometry\n\nBased on the information in the diagram, can you prove that the figure is a parallelogram? Explain. a. Yes; opposite sides are congruent. b. Yes; opposite angles are congruent. c. No; you cannot prove that the quadrilateral is a\n\n2. ### precalculus\n\nCan you please check my answers? 1.Find Pk + 1 if Pk=2^K-1/k! answer: 2^k+1/(k+1)! 2.Find Pk + 1 if Pk = 7 + 13 + 19 + ...+[6(k - 1)+1] + (6k + 1) answer: 7+13+9...(6k-1+1)+6k+1 +(6k+2) 3.What is the first step when writing a\n\n3. ### Math\n\nTell Me the Answer plz round to the nearest 1/2 6/7 3 and 4/5 100 and 1/3 4 and 2/3 estimate the sum or difference 9/10 +7/8 9/50 +7/15 2 and 8/9 - 2 and 1/7 8 and 12/31 + 4 and 5/6 Just tell us the answer I am trying to prove to\n\n4. ### Math\n\nSuppose line GH is congruent to line JK, line HE is congruent to line KL, and angle 1 is congruent to angle L. Can you prove that triangle GHI is congrunet to triangke JKL, abd if so, how? A. You can use SAS to prove the triangles\n\n1. ### Limits\n\nShow whether the series ((-1)^n)*((2^n)/(n^4))) is absolutely convergent. Taking an=((-1)^n)*((2^n)/(n^4))) , here |an|=((2^n)/(n^4)). I have the gut feeling that this is absolutely convergent. But don't see a way to do that.\n\n2. ### math\n\n1. Let f : Z to Z and g : Z to Z be functions de fined by f (x) = 3x + 1 and g(x) = [x/2] . (a) Is fog one-to-one? Prove your answer. (b) Is fog onto? Prove your answer. (c) Is gof one-to-one? Prove your answer. (d) Is gof onto?\n\n3. ### Algebra\n\nIf x is the midpoint of line vy and wz And the prove is triangle vwx is congruent triangle yzx can you help me solve this using two column prove\n\n4. ### Boundedness of a Sequence\n\nQuestion : How do we prove that the sequence (1+(1/n))^n is bounded? My thoughts on this question : We know that, to prove a sequence is bounded, we should prove that it is both bounded above and bounded below. We can see that all\n\n1. ### Math, Inequalities\n\nProve the following inequalities. Then state for which values the LHS equals RHS. 1) a^2 + b^2 ≥ 2(a-b-1) 2) Given a + b = 1, prove the inequality a^2 + b^2 ≥ 1/2 Please explain how you solved it. Thanks.\n\n2. ### Calculus\n\nIf the function f is defined by f(x)=0 if x is rational f(x)=1 if x is irrational prove that lim x-->0 f(x) does not exist. Can someone help me answer this question step by step so I would know how to prove a question like this\n\n3. ### Math\n\nProve :cosx-cosy= -2sin(x+y/2)sin(x-y/2) Prove: 2cot-2tanx= 4-2sec^2x / tanx\n\n4. ### calculus\n\nProve that there is a number that is exactly one more than its cube. (don’t solve just show there is one) Prove that the function f(x)= cosx-x has a zero in (o. pi/2) Justify." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8686655,"math_prob":0.9941659,"size":2716,"snap":"2020-45-2020-50","text_gpt3_token_len":925,"char_repetition_ratio":0.12794985,"word_repetition_ratio":0.0,"special_character_ratio":0.34020618,"punctuation_ratio":0.10747664,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998375,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T19:12:36Z\",\"WARC-Record-ID\":\"<urn:uuid:b48dbb58-6b0b-4b28-a055-6b65ab3676bd>\",\"Content-Length\":\"18197\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c98aa70b-2d49-4748-bd25-59d3974eaf97>\",\"WARC-Concurrent-To\":\"<urn:uuid:4e9e2074-56fd-43ef-a22e-506624b54a63>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/questions/1504096/if-a-b-c-are-in-h-p-prove-that-a-2-c-2-2b-2-answer-1-a-1-b-and-1-c-are-in-h-p\",\"WARC-Payload-Digest\":\"sha1:22OCQMZJDJ2GZUWDYTIPEI3WPAIV3GG5\",\"WARC-Block-Digest\":\"sha1:CLXDCZEP33A4XXIYDBDK34ZBQXM32VIZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107911229.96_warc_CC-MAIN-20201030182757-20201030212757-00030.warc.gz\"}"}
https://encyclopediaofmath.org/index.php?title=Arakelov_geometry&oldid=30390
[ "# Arakelov geometry\n\nArakelov theory\n\nA combination of the Grothendieck algebraic geometry of schemes over", null, "with Hermitian complex geometry on their set of complex points. The goal is to provide a geometric framework for the study of Diophantine problems in higher dimension (cf. also Diophantine equations, solvability problem of; Diophantine problems of additive type).\n\nThe construction relies upon the analogy between number fields and function fields: the ring", null, "has Krull dimension (cf. Dimension) one, and \"adding a point\"", null, "to the corresponding scheme", null, "makes it look like a complete curve. For instance, if", null, "is a rational number, the identity", null, "where", null, "is the valuation of", null, "at the prime", null, "and where", null, ", is similar to the Cauchy residue formula", null, "for the differential", null, ", when", null, "is a non-zero rational function on a smooth complex projective curve", null, ".\n\nIn higher dimension, given a regular projective flat scheme", null, "over", null, ", one considers pairs", null, "consisting of an algebraic cycle", null, "of codimension", null, "over", null, ", together with a Green current", null, "for", null, "on the complex manifold", null, ":", null, "is real current of type", null, "such that, if", null, "denotes the current given by integration on", null, ", the following equality of currents holds:", null, "where", null, "is a smooth form of type", null, ". Equivalence classes of such pairs", null, "form the arithmetic Chow group", null, ", which has good functoriality properties and is equipped with a graded intersection product, at least after tensoring it by", null, ".\n\nThese notions were first introduced for arithmetic surfaces, i.e. models of curves over number fields [a1], [a2] (for a restricted class of currents", null, "). For the general theory, see [a7], [a9] and references therein.\n\nGiven a pair", null, "consisting of an algebraic vector bundle", null, "on", null, "and a", null, "Hermitian metric", null, "on the corresponding holomorphic vector bundle on the complex-analytic manifold", null, ", one can define characteristic classes of", null, "with values in the arithmetic Chow groups of", null, ". For instance, when", null, "has rank one, if", null, "is a non-zero rational section of", null, "and", null, "its divisor, the first Chern class of", null, "is the class of the pair", null, ". The main result of the theory is the arithmetic Riemann–Roch theorem, which computes the behaviour of the Chern character under direct image [a8], [a6]. Its strongest version involves regularized determinants of Laplace operators and the proof requires hard analytic work, due to J.-M. Bismut and others.\n\nSince", null, ", the pairings", null, "", null, ", give rise to arithmetic intersection numbers, which are real numbers when their geometric counterparts are integers. Examples of such real numbers are the heights of points and subvarieties, for which Arakelov geometry provides a useful framework [a3].\n\nWhen", null, "is a semi-stable arithmetic surface, an important invariant of", null, "is the self-intersection", null, "of the relative dualizing sheaf equipped with the Arakelov metric [a1]. L. Szpiro and A.N. Parshin have shown that a good upper bound for", null, "would lead to an effective version of the Mordell conjecture and to a solution of the ABC conjecture [a10]. G. Faltings and E. 
Ullmo proved that", null, "is strictly positive [a4], [a11]; this implies that the set of algebraic points of", null, "is discrete in its Jacobian for the topology given by the Néron–Tate height.\n\nP. Vojta used Arakelov geometry to give a new proof of the Mordell conjecture [a12], by adapting the method of Diophantine approximation. More generally, Faltings obtained by Vojta's method a proof of a conjecture of S. Lang on Abelian varieties [a5]: Assume", null, "is an Abelian variety over a number field and let", null, "be a proper closed subvariety in", null, "; then the set of rational points of", null, "is contained in the union of finitely many translates of Abelian proper subvarieties of", null, ".\n\nHow to Cite This Entry:\nArakelov geometry. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Arakelov_geometry&oldid=30390\nThis article was adapted from an original article by Christophe Soulé (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article" ]
[ null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a1202401.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a1202402.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a1202403.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a1202404.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a1202405.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a1202406.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a1202407.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a1202408.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a1202409.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024010.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024011.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024012.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024013.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024014.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024015.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024016.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024017.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024018.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024019.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024020.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024021.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024022.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024023.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024024.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024025.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024026.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024027.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024028.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024029.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024030.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024031.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024032.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024033.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024034.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024035.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024036.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024037.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024038.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024039.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024040.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024041.png", null, 
"https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024042.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024043.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024044.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024045.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024046.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024047.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024048.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024049.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024050.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024051.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024052.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024053.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024054.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024055.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024056.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024057.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024058.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024059.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024060.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024061.png", null, "https://www.encyclopediaofmath.org/legacyimages/a/a120/a120240/a12024062.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82492256,"math_prob":0.9746315,"size":5492,"snap":"2020-24-2020-29","text_gpt3_token_len":1514,"char_repetition_ratio":0.10860059,"word_repetition_ratio":0.006952491,"special_character_ratio":0.28113618,"punctuation_ratio":0.17779869,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99749357,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124],"im_url_duplicate_count":[null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-02T09:43:41Z\",\"WARC-Record-ID\":\"<urn:uuid:9a1ce264-62c9-4484-8037-77fc9c6d9359>\",\"Content-Length\":\"30377\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a5455fc6-7cb6-45a1-93c7-61813b491e62>\",\"WARC-Concurrent-To\":\"<urn:uuid:b915dafb-64b9-4dd9-938b-d55b4e44a6a8>\",\"WARC-IP-Address\":\"34.96.94.55\",\"WARC-Target-URI\":\"https://encyclopediaofmath.org/index.php?title=Arakelov_geometry&oldid=30390\",\"WARC-Payload-Digest\":\"sha1:JR4WKIIM5DL4RMQUBDRSOYBIIGN7NT3L\",\"WARC-Block-Digest\":\"sha1:TSZX66R52YCZXR2NP24ORWW4MPM7JSUA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655878639.9_warc_CC-MAIN-20200702080623-20200702110623-00354.warc.gz\"}"}
https://delexaircargo.com/converter-tool/
[ "•", null, "•", null, "•", null, "Your trusted partner in global logistics\nGetting aircargo details...\nTrack by Airway Bill Number\nINDUSTRIES WE SERVE\nsolutions\n\n# Converter tool\n\nUnit Converter\n From: To: meter [m]exameter [Em]petameter [Pm]terameter [Tm]gigameter [Gm]megameter [Mm]kilometer [km]hectometer [hm]dekameter [dam]decimeter [dm]centimeter [cm]millimeter [mm]micrometer [µm]micron [µ]nanometer [nm]picometer [pm]femtometer [fm]attometer [am]megaparsec [Mpc]kiloparsec [kpc]parsec [pc]light year [ly]astronomical unit [AU, UA]league [lea]nautical league (UK)nautical league (int.)league (statute) [st.league]mile [mi, mi(Int)]nautical mile (UK) [NM (UK)]nautical mile (international)mile (statute) [mi, mi (US)]mile (US survey) [mi]mile (Roman)kiloyard [kyd]furlong [fur]furlong (US survey) [fur]chain [ch]chain (US survey) [ch]roperod [rd]rod (US survey) [rd]perchpolefathom [fath]fathom (US survey) [fath]ellyard [yd]foot [ft]foot (US survey) [ft]link [li]link (US survey) [li]cubit (UK)handspan (cloth)finger (cloth)nail (cloth)inch [in]inch (US survey) [in]barleycornmil [mil, thou]microinchangstrom [A]a.u. of length [a.u., b]X-unit [X]fermi [F, f]arpentpicapointtwipalnfamncaliber [cl]centiinch [cin]kenRussian archinRoman actusvara de tareavara conuqueravara castellanacubit (Greek)long reedreedlong cubithandbreadthfingerbreadthPlanck lengthElectron radius (classical)Bohr radius [b, a.u.]Earth's equatorial radiusEarth's polar radiusEarth's distance from sunSun's radius meter [m]exameter [Em]petameter [Pm]terameter [Tm]gigameter [Gm]megameter [Mm]kilometer [km]hectometer [hm]dekameter [dam]decimeter [dm]centimeter [cm]millimeter [mm]micrometer [µm]micron [µ]nanometer [nm]picometer [pm]femtometer [fm]attometer [am]megaparsec [Mpc]kiloparsec [kpc]parsec [pc]light year [ly]astronomical unit [AU, UA]league [lea]nautical league (UK)nautical league (int.)league (statute) [st.league]mile [mi, mi(Int)]nautical mile (UK) [NM (UK)]nautical mile (international)mile (statute) [mi, mi (US)]mile (US survey) [mi]mile (Roman)kiloyard [kyd]furlong [fur]furlong (US survey) [fur]chain [ch]chain (US survey) [ch]roperod [rd]rod (US survey) [rd]perchpolefathom [fath]fathom (US survey) [fath]ellyard [yd]foot [ft]foot (US survey) [ft]link [li]link (US survey) [li]cubit (UK)handspan (cloth)finger (cloth)nail (cloth)inch [in]inch (US survey) [in]barleycornmil [mil, thou]microinchangstrom [A]a.u. of length [a.u., b]X-unit [X]fermi [F, f]arpentpicapointtwipalnfamncaliber [cl]centiinch [cin]kenRussian archinRoman actusvara de tareavara conuqueravara castellanacubit (Greek)long reedreedlong cubithandbreadthfingerbreadthPlanck lengthElectron radius (classical)Bohr radius [b, a.u.]Earth's equatorial radiusEarth's polar radiusEarth's distance from sunSun's radius Result:\nUnitConverter.org - the ultimate unit conversion resource.\nUnit Converter\n Square Cube Side Length = a Total Surface = 6 ⋅ a2 Diagonal of Area = √(2) ⋅ a Volume = a3 Perimeter = 4 ⋅ a Diagonal through Volume = √(3) ⋅ a Area = a2 Total Length of Edges = 12 ⋅ a" ]
[ null, "https://delexaircargo.com/wp-content/uploads/slider/1.jpg", null, "https://delexaircargo.com/wp-content/uploads/slider/3.jpg", null, "https://delexaircargo.com/wp-content/uploads/slider/2.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.53962404,"math_prob":0.795798,"size":3673,"snap":"2019-13-2019-22","text_gpt3_token_len":1316,"char_repetition_ratio":0.13791224,"word_repetition_ratio":0.722807,"special_character_ratio":0.2768854,"punctuation_ratio":0.072115384,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.952863,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-25T18:13:30Z\",\"WARC-Record-ID\":\"<urn:uuid:0106477a-9041-456e-9d1c-360d6c24ff3b>\",\"Content-Length\":\"32954\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63fc9f7d-063e-4237-b743-f7642546b874>\",\"WARC-Concurrent-To\":\"<urn:uuid:570214b1-68e7-42d6-ae43-2aea49cd16f3>\",\"WARC-IP-Address\":\"107.180.2.125\",\"WARC-Target-URI\":\"https://delexaircargo.com/converter-tool/\",\"WARC-Payload-Digest\":\"sha1:N6JBN4ANYDJFVBLL42O7KWG76ZLREEIO\",\"WARC-Block-Digest\":\"sha1:TLHFAD2ER3JIENXWAFSEDFKC7EJARXR3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232258147.95_warc_CC-MAIN-20190525164959-20190525190959-00142.warc.gz\"}"}
https://robotics.stackexchange.com/questions/18241/can-a-neural-network-emulate-a-physics-engine
[ "# Can a neural network emulate a physics engine?\n\nBefore a motion controller for a robot can be created, a step before is needed, called “system identification”. The aim is to create a prediction model which can say, what will happen if a certain movement of the robot is executed. For example, a wheeled robot which is designed upon the steering principle will change it's position if both wheels are spin with +1 speed for 1 second. The new x/y position will be ahead of the old position. If the robot's wheel are spinning with different values, for example left=+1, right=+0.5 the new position of the robot will be different.\n\nThe problem is, that a dedicated physics engine which can predict the trajectory of a robot is hard to realize by hand. But there are some general techniques available for example genetic algorithm, rule based systems or reinforcement learning which can simplify the programming. These general systems are working with parameters who have to be adapted to a certain domain, which is called machine learning.\n\nQuestion: Can a neural network be used for predicting future states of a robot? And which kind of dataset is needed for doing so?\n\n• $F = ma$ and $\\tau = I\\alpha$. The difficulty seems to be generally in force/torque transmission (kinematic constraints.)\n– Chuck\nFeb 20, 2019 at 16:42\n\nI think there are some confusing statements or inaccuracies in this question regarding fundamental concepts in robotics, control engineerging and AI/ML. I am not sure if it helps to list these or not, so I will not I just wanted to draw attention to this fact if somebody tries to used the question as a reference.\n\nHowever, to answer the final question:\n\nNeural networks are universal function approximators. The dynamic model of a robot is a function mapping input (e.g. wheel torque) to output (e.g. robot velocity). So yes, in theory they can learn the dynamic model of a robot. Care must be taken and the current state of the robot should probably also be used as input to the neural network or some kind of RNN should be used. Also in the case of the dynamic model, only torque input is not enough, the current state of the robot is also needed to compute the cartesian velocity.\n\nIn practice the right arhitecture needs to be selected, the dataset needs to be defiend and the networks needs to be trained.\n\nPart of the question refers to the dataset.\nThe input and output of the neural network is the same as of the function it will learn. So in case of a forward dynamic model, the input can be 2 torques (and the current sate) and the output can the cartesian velocity of the robot.\n\nIn order to obtain such a dataset:\n\n• one could log the data from a real robot as it drives around, making sure that there are enough samples from all areas of the input space. In this case the dynamic model would not be used while driving aournd, just input tortques would be generated.\n\n• one could use an exising dynamic model and learn that. This might be counterintuitive, since the function already exists. This would only make sense if the neural network which learns the function would infer (significantly) faster then the execution time of the dynamic model." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9311358,"math_prob":0.9438265,"size":1116,"snap":"2023-40-2023-50","text_gpt3_token_len":225,"char_repetition_ratio":0.11330935,"word_repetition_ratio":0.0,"special_character_ratio":0.2016129,"punctuation_ratio":0.09767442,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98038787,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T17:06:05Z\",\"WARC-Record-ID\":\"<urn:uuid:ed4a13e9-3b0b-4274-a58d-f5fb64d7bbfe>\",\"Content-Length\":\"161327\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:35a4d904-db78-475c-a214-c210a6b7d072>\",\"WARC-Concurrent-To\":\"<urn:uuid:66cfedbb-8cd7-49cd-88fc-34233099492a>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://robotics.stackexchange.com/questions/18241/can-a-neural-network-emulate-a-physics-engine\",\"WARC-Payload-Digest\":\"sha1:YIQRX25SNJJYM6DKTDIB4QDOKOVLMVWB\",\"WARC-Block-Digest\":\"sha1:AILTBPJCQRPC4BSVIJDEBCVIKHZ3UTMQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100508.42_warc_CC-MAIN-20231203161435-20231203191435-00281.warc.gz\"}"}
https://www.codespeedy.com/understanding-supervised-learning/
[ "# A brief understanding on supervised learning – Machine Learning\n\nIn this tutorial, we will be taking a closer look at supervised learning. supervised learning is a classification type of machine learning.\n\nMachine learning is a very powerful tool for predicting future data by seeing a trend in a given data set. And supervised is a type of classification were in data we have been given output label.\n\n## A closer look at Supervised Learning\n\nSupervised learning is again separated into two categories\n\n• classification\n• Regression\n\nwhen the output variable is binary then it is termed as a classification problem for example when output is ‘No’ or ‘Yes’.\n\nin addition when the output variable is some real values such as weight and height then is it termed as regression", null, "we can see that if a person get\n\n 302 102 1 2 1.5 8 0 0\n\n302 GRE score,102 TOFLE scores,1 university,2 SOP, 1.5LOR,8 CGPA,0 research did not get admitted, this type of problem is termed as supervised classification problem were the output label is in binary\n\nwhile in regression-type problems.\n\nfor below, we example we have used kaggle dataset", null, "from the above example, we can say that the output label which ‘purchase’ is in real number these type of problem is known as a supervised regression problem\n\n#### Conclusion\n\nIn this tutorial, we have learned the following\n\n• what is supervised learning\n• type of supervised learning\n• classification and regression with example\n\nI hope you got a fair idea about supervised learning, for information I would recommend taking a look Understanding Artificial Neural network (ANN)until then enjoy learning." ]
[ null, "https://cdn.codespeedy.com/wp-content/uploads/2019/05/understanding-supervised-learning-in-machine-learning.png", null, "https://cdn.codespeedy.com/wp-content/uploads/2019/05/supervised-learning-example.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91504264,"math_prob":0.76338255,"size":1703,"snap":"2022-27-2022-33","text_gpt3_token_len":368,"char_repetition_ratio":0.15715127,"word_repetition_ratio":0.007168459,"special_character_ratio":0.20728126,"punctuation_ratio":0.07119741,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9815621,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T12:20:57Z\",\"WARC-Record-ID\":\"<urn:uuid:5ebab077-4e5f-4898-b742-2e9f05de4a66>\",\"Content-Length\":\"46542\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4fa4f36c-bf44-4e3b-84e5-5cfbb14988bc>\",\"WARC-Concurrent-To\":\"<urn:uuid:dd9d6488-0a77-4cff-8202-a997cedfbd8b>\",\"WARC-IP-Address\":\"194.195.215.189\",\"WARC-Target-URI\":\"https://www.codespeedy.com/understanding-supervised-learning/\",\"WARC-Payload-Digest\":\"sha1:RK73X2NZVW63H4W35BWUDNQY5HDCFBS6\",\"WARC-Block-Digest\":\"sha1:OE46KX52E5TRB2AQU4YJPFD2GGZGIGVD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103516990.28_warc_CC-MAIN-20220628111602-20220628141602-00679.warc.gz\"}"}
https://metanumbers.com/24615
[ "## 24615\n\n24,615 (twenty-four thousand six hundred fifteen) is an odd five-digits composite number following 24614 and preceding 24616. In scientific notation, it is written as 2.4615 × 104. The sum of its digits is 18. It has a total of 4 prime factors and 12 positive divisors. There are 13,104 positive integers (up to 24615) that are relatively prime to 24615.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 5\n• Sum of Digits 18\n• Digital Root 9\n\n## Name\n\nShort name 24 thousand 615 twenty-four thousand six hundred fifteen\n\n## Notation\n\nScientific notation 2.4615 × 104 24.615 × 103\n\n## Prime Factorization of 24615\n\nPrime Factorization 32 × 5 × 547\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 3 Total number of distinct prime factors Ω(n) 4 Total number of prime factors rad(n) 8205 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 24,615 is 32 × 5 × 547. Since it has a total of 4 prime factors, 24,615 is a composite number.\n\n## Divisors of 24615\n\n1, 3, 5, 9, 15, 45, 547, 1641, 2735, 4923, 8205, 24615\n\n12 divisors\n\n Even divisors 0 12 6 6\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 12 Total number of the positive divisors of n σ(n) 42744 Sum of all the positive divisors of n s(n) 18129 Sum of the proper positive divisors of n A(n) 3562 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 156.892 Returns the nth root of the product of n divisors H(n) 6.91044 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 24,615 can be divided by 12 positive divisors (out of which 0 are even, and 12 are odd). The sum of these divisors (counting 24,615) is 42,744, the average is 3,562.\n\n## Other Arithmetic Functions (n = 24615)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 13104 Total number of positive integers not greater than n that are coprime to n λ(n) 1092 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 2724 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 13,104 positive integers (less than 24,615) that are coprime with 24,615. 
And there are approximately 2,724 prime numbers less than or equal to 24,615.\n\n## Divisibility of 24615\n\n m n mod m 2 3 4 5 6 7 8 9 1 0 3 0 3 3 7 0\n\nThe number 24,615 is divisible by 3, 5 and 9.\n\n• Arithmetic\n• Deficient\n\n• Polite\n\n## Base conversion (24615)\n\nBase System Value\n2 Binary 110000000100111\n3 Ternary 1020202200\n4 Quaternary 12000213\n5 Quinary 1241430\n6 Senary 305543\n8 Octal 60047\n10 Decimal 24615\n12 Duodecimal 122b3\n20 Vigesimal 31af\n36 Base36 izr\n\n## Basic calculations (n = 24615)\n\n### Multiplication\n\nn×i\n n×2 49230 73845 98460 123075\n\n### Division\n\nni\n n⁄2 12307.5 8205 6153.75 4923\n\n### Exponentiation\n\nni\n n2 605898225 14914184808375 367112659058150625 9036478102716377634375\n\n### Nth Root\n\ni√n\n 2√n 156.892 29.0893 12.5256 7.5551\n\n## 24615 as geometric shapes\n\n### Circle\n\n Diameter 49230 154661 1.90349e+09\n\n### Sphere\n\n Volume 6.24724e+13 7.61394e+09 154661\n\n### Square\n\nLength = n\n Perimeter 98460 6.05898e+08 34810.9\n\n### Cube\n\nLength = n\n Surface area 3.63539e+09 1.49142e+13 42634.4\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 73845 2.62362e+08 21317.2\n\n### Triangular Pyramid\n\nLength = n\n Surface area 1.04945e+09 1.75765e+12 20098.1\n\n## Cryptographic Hash Functions\n\nmd5 af454808bc3f90fc29155bf1660465ee 4f6fc034bda906a4c60d6b85e7443f37b2a8c67e b7a9fe4b7005a187b93c549740e4e1f7cfab774af9c7200d955f9a165fea6d0e 678c79fa33ea90a64efe70b17b4f0439bf7961f933477b6b939ed784e3fdf3c39af7c735a6ffe3533d0bdb73bb1f333cc174d42a2d7e2b0a78a280eda6eca49f 7a5171681e5f86b6a317a544796ef9a5a4483070" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6292263,"math_prob":0.982047,"size":4461,"snap":"2020-34-2020-40","text_gpt3_token_len":1584,"char_repetition_ratio":0.11891407,"word_repetition_ratio":0.0254491,"special_character_ratio":0.4532616,"punctuation_ratio":0.07692308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9952201,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-20T10:56:25Z\",\"WARC-Record-ID\":\"<urn:uuid:86b91d5b-983c-44a8-9d96-dd15108eaa4a>\",\"Content-Length\":\"47590\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78415cdb-138b-41be-a087-7d5ba5f7c5f5>\",\"WARC-Concurrent-To\":\"<urn:uuid:a5461712-7408-4a9c-81ac-11c9b7cc0cea>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/24615\",\"WARC-Payload-Digest\":\"sha1:7E2ZQHSSO6JZDQ2Q7GVX3V5O27IZ62LS\",\"WARC-Block-Digest\":\"sha1:6U6ACC7Q26CS3DAGH5IOEBHCVEW5UTEZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400197946.27_warc_CC-MAIN-20200920094130-20200920124130-00145.warc.gz\"}"}
https://unitconverter.io/square-centimeters/square-inches/74.61
[ "", null, "# 74.61 square centimeters to square inches\n\nto\n\n74.61 Square centimeters = 11.5645 Square inches\n\nThis conversion of 74.61 square centimeters to square inches has been calculated by multiplying 74.61 square centimeters by 0.1550 and the result is 11.5645 square inches." ]
[ null, "https://unitconverter.io/img/area.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8549325,"math_prob":0.9994055,"size":317,"snap":"2023-40-2023-50","text_gpt3_token_len":79,"char_repetition_ratio":0.25239617,"word_repetition_ratio":0.0,"special_character_ratio":0.28706625,"punctuation_ratio":0.13333334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9968313,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T14:08:25Z\",\"WARC-Record-ID\":\"<urn:uuid:da8d0a0c-850b-48ed-a522-7acf8e780aaf>\",\"Content-Length\":\"24683\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d88d0a96-c40d-4d65-a358-47891f1f27fa>\",\"WARC-Concurrent-To\":\"<urn:uuid:bcb1d35d-dc66-4968-b466-3b178f38e3eb>\",\"WARC-IP-Address\":\"104.21.38.227\",\"WARC-Target-URI\":\"https://unitconverter.io/square-centimeters/square-inches/74.61\",\"WARC-Payload-Digest\":\"sha1:BUSKXXFAPYNLUVLE5GFD33MLEZRNW6JU\",\"WARC-Block-Digest\":\"sha1:BHRJQ76SKLQ52BZUEBUBHIVS7LCHNESM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100227.61_warc_CC-MAIN-20231130130218-20231130160218-00072.warc.gz\"}"}
https://www.codingbroz.com/armstrong-number-in-c-using-function/
[ "# Armstrong Number in C Using Function\n\nIn this post, we will learn how to check Armstrong number in C using function.\n\nAs we know, Armstrong Number is a number that is equal to the sum of cubes of its digits. For example: 371 is an Armstrong number because (3)3 + (7)3 + (1)3 = 27 + 343 + 1 = 371. So, 371 is an Armstrong number of order 3.\n\nSimilarly, 8208 is an armstrong number of order 4.\n\nWe will declare a custom function which will check whether the number is an armstrong number or not.\n\nSo, without further ado, let’s begin this tutorial.\n\nContents\n\n## Armstrong Number in C Using Function\n\n```// Armstrong Number in C Using Function\n#include <stdio.h>\nint armstrong(int num){\nint lastdigit = 0;\nint sum = 0;\nint power = 0;\n\nint n = num;\nwhile (n != 0){\nlastdigit = n % 10;\npower = lastdigit * lastdigit * lastdigit;\nsum = sum + power;\n\nn = n / 10;\n}\nif (sum == num)\nreturn 0;\nelse\nreturn 1;\n}\n\nint main(){\nint num;\n\nprintf(\"Enter a number: \");\nscanf(\"%d\", &num);\n\nif (armstrong(num) == 0){\nprintf(\"%d is an Armstrong Number.\", num);\n}\nelse{\nprintf(\"%d is not an Armstrong Number.\", num);\n}\nreturn 0;\n}\n```\n\nOutput 1\n\n``````Enter a number: 371\n371 is an Armstrong Number.\n``````\n\nOutput 2\n\n``````Enter a number: 144\n144 is not an Armstrong Number.\n``````\n\n## How Does This Program Work ?\n\n``````int armstrong(int num){\nint lastdigit = 0;\nint sum = 0;\nint power = 0;\n\nint n = num;\nwhile (n != 0){\nlastdigit = n % 10;\npower = lastdigit * lastdigit * lastdigit;\nsum = sum + power;\n\nn = n / 10;\n}\nif (sum == num)\nreturn 0;\nelse\nreturn 1;\n}\n``````\n\nIn this program, we have declared a custom function named armstrong which will check whether the entered number is an Armstrong number or not.\n\n`````` if (armstrong(num) == 0){\nprintf(\"%d is an Armstrong Number.\", num);\n}\nelse{\nprintf(\"%d is not an Armstrong Number.\", num);\n}\n``````\n\nThen, we call this custom function in our main function which will check for the entered number. If the custom function returns 0, then the entered number is an Armstrong number.\n\nOtherwise, the entered number is not an Armstrong number.\n\n## Armstrong Number in C of Order N Using Function\n\n```#include<stdio.h>\n#include<math.h>\n\nint armstrong(int num, int order)\n{\n\nint lastDigit = 0;\nint sum = 0;\nint power = 0;\n\nint n = num;\n\nwhile(n!=0) {\n\n// find last digit\nlastDigit = n % 10;\n\n// find power of digit\npower = pow(lastDigit, order);\n\n// add power value into sum\nsum += power;\n\n// remove last digit\nn /= 10;\n}\n\nif(sum == num)\nreturn 0;\nelse\nreturn 1;\n}\n\nint main()\n{\nint num, order;\n\nprintf(\"Enter number: \");\nscanf(\"%d\",&num);\n\nprintf(\"Enter Order: \");\nscanf(\"%d\",&order);\n\nif(armstrong(num, order) == 0)\nprintf(\"%d is an Armstrong number of order %d.\\n\", num, order);\nelse\nprintf(\"%d is not an Armstrong number of order %d.\", num,order);\nreturn 0;\n}\n```\n\nOutput 1\n\n``````Enter number: 371\nEnter Order: 3\n371 is an Armstrong number of order 3.``````\n\nOutput 2\n\n``````Enter number: 7001\nEnter Order: 4\n7001 is not an Armstrong number of order 4.\n``````\n\n## Conclusion\n\nI hope after going through this post, you understand how to check armstrong number in C Using Function.\n\nIf you have any doubt regarding the program, feel free to contact us in the comment section. We will be delighted to solve your query." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.55573106,"math_prob":0.9982116,"size":3083,"snap":"2022-40-2023-06","text_gpt3_token_len":859,"char_repetition_ratio":0.19356935,"word_repetition_ratio":0.3216783,"special_character_ratio":0.32306194,"punctuation_ratio":0.17611027,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99584436,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-01T09:13:15Z\",\"WARC-Record-ID\":\"<urn:uuid:1f137076-7609-46ac-927b-96f45410a59f>\",\"Content-Length\":\"211574\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:616e47bb-e502-467c-b879-a1a172a9b4b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:d48d55b7-eb93-4a03-be29-4e8da6608151>\",\"WARC-IP-Address\":\"172.67.170.112\",\"WARC-Target-URI\":\"https://www.codingbroz.com/armstrong-number-in-c-using-function/\",\"WARC-Payload-Digest\":\"sha1:JK6XQTAMU7QUXKUQHC75PURIB2CN32WX\",\"WARC-Block-Digest\":\"sha1:JOKGSWXNF2D2PKZ3OBGS36IGFCG4V5BS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499919.70_warc_CC-MAIN-20230201081311-20230201111311-00710.warc.gz\"}"}
https://preprint.impa.br/visualizar?id=1348
[ "Preprint D80/2010\nGeneralization of the incenter subdivision scheme\nIoannis Ivrissimtzis | Hernandez-Mederos, Victoria | Estrada-Sarlabous, Jorge\nKeywords: subdivision\nIn this paper we present an interpolatory Hermite subdivision scheme depending on a free parameter, which generalizes in certain way the incenter subdivision scheme [DW10]. We prove that for any value of the free parameter the limit curve is G1 continuous. Moreover, if vertices of the initial polygon and the tangent vectors are sampled from a circle with any arbitrary spacing, then the subdivision curve is the circle. The proposed scheme is shape preserving limiting the oscillations of the subdivision curve and introducing inflection points only in those regions of the curve where the control polygon suggests a change of convexity. Several examples are presented to demonstrate the performance of the scheme and we formulate some conjectures supported by numerical experiments.\n\nAnexos:" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79871154,"math_prob":0.8032959,"size":984,"snap":"2022-40-2023-06","text_gpt3_token_len":199,"char_repetition_ratio":0.14897959,"word_repetition_ratio":0.0,"special_character_ratio":0.17276423,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96490747,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-01T13:50:46Z\",\"WARC-Record-ID\":\"<urn:uuid:2a0b9164-7313-46c6-a6d2-b8014b5326d4>\",\"Content-Length\":\"6465\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99f176f0-b59d-4694-87ea-ae89aa30068c>\",\"WARC-Concurrent-To\":\"<urn:uuid:15e134c7-d7c2-4bf6-a5a2-7e7e5d8db825>\",\"WARC-IP-Address\":\"147.65.23.128\",\"WARC-Target-URI\":\"https://preprint.impa.br/visualizar?id=1348\",\"WARC-Payload-Digest\":\"sha1:6HMHKJWW6L4FPL63C6TNK3RSNNE4XIAH\",\"WARC-Block-Digest\":\"sha1:QF3HWWCYXRTTMNDWHAZUK3BW3SFNFIWK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499934.48_warc_CC-MAIN-20230201112816-20230201142816-00229.warc.gz\"}"}
https://ehanghalib.com/tag/machine-learning/
[ "Question Consider the loss function of linear regression given by . Given = 0, 0.5. Estimate using the data points below:     Solution ================================================================================================== We know that = 0,\n\nQuestion Consider a dataset for binary classification problem with class labels . The features are given by and . Each of these features have 2 values as given below. Apply\n\nLet’s talk about Bayesian Learning. In this post, let’s dive into the core concepts of Bayesian Learning. Bayesian vs Frequentist Approach First of all, what is meant by ‘Bayesian’ probability?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86916095,"math_prob":0.9811537,"size":614,"snap":"2023-40-2023-50","text_gpt3_token_len":114,"char_repetition_ratio":0.25081968,"word_repetition_ratio":0.0,"special_character_ratio":0.34201956,"punctuation_ratio":0.14141414,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97374624,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T19:21:53Z\",\"WARC-Record-ID\":\"<urn:uuid:fad7444d-650b-4776-8434-1a1b3ae9135e>\",\"Content-Length\":\"102407\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38abc999-6606-4fd6-a2d5-2bec2953e0f4>\",\"WARC-Concurrent-To\":\"<urn:uuid:795341d5-a892-46a0-a94a-ac9ec6dec526>\",\"WARC-IP-Address\":\"192.249.121.20\",\"WARC-Target-URI\":\"https://ehanghalib.com/tag/machine-learning/\",\"WARC-Payload-Digest\":\"sha1:2TEZYYJB6WQXPFBDLVHMNWDKOWTYJFLA\",\"WARC-Block-Digest\":\"sha1:XHI52ALYNUMWLQ2IF5QGLGMCSAK7C4HU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679102637.84_warc_CC-MAIN-20231210190744-20231210220744-00143.warc.gz\"}"}
https://www.meritanswers.com/
[ "## Recent Questions\n\n63800 QuestionsPage 1 of 4254.\n\nDetermine the Acceleration of the Field\n\nif a body of mass 2kg is at a distance of 7200 km from the center of the earth, What would be the acceleration due to gravity be at the point in the earth field ?\n\nAsked by: Sagir Hassan on Jul 4, 2021.\n\nFind the Analogy Pair of Needle\n\nPen is to Write as Needle is to ________?\n\nCalculate the Actual Amount of an Article\n\nA salesman allows a discount of 125% for each payment calculate the actual amount the salesman pay for an article with a marked price of \\$950.\n\nAsked by: Elizabeth on May 24, 2021.\n\nFind the Total Interest Amount Paid for the Bank\n\nA bank gives a woman a personal loan of \\$120,000 she has to pay back \\$11,000 every month for a year. Find the amount paid to the bank as interest.\n\nAsked by: Elizabeth on May 24, 2021.\n\nUses of Thin Continuous Lines\n\nWhat are the uses of thin continuous lines?\n\nAsked by: Hope on Apr 21, 2021.\n\nOrgan that Helps in the Deamination of Protein\n\nWhich organ helps in the deamination of protein?\n\nAsked by: Faridat on Feb 22, 2021.\n\nAnalogy Question Sailor is to Ship as Engineer is to\n\nSailor is to Ship as Engineer is to ?\n\nAsked by: Wilfred on Feb 10, 2021.\n\nComplete the Analogy Doctor is to Hospital as Clerk is to?\n\nDoctor is to Hospital as Clerk is to?\n\nAsked by: Wilfred on Feb 10, 2021.\n\nTypes of Workshop\n\nWhat are the types of workshop?\n\nAsked by: Uzoma Chisom on Feb 4, 2021.\n\nFind the Ages of Ahmad and Kemal in 10 years\n\nMr Ahmad age is twice of the Kemal age. In the next 20 years the sum of their age is 85. Then find their age in 10 years.\n\nDetermine the Actual Distance Between A And B on a Plan\n\nThe distance between A and B on a plan is 5mm. If the scale of plan is 1:1250. What is the actual distance between A and B?\n\nAsked by: Tenagyei on Feb 1, 2021.\n\nNth term Formula for -1 20 77 188\n\nWhat is the nth term formula for -1 20 77 188?\n\nAsked by: Nandipha on Jan 29, 2021.EDIT\n\nFind the Nth term Formula for 3 24 81\n\nWhat is the nth term formula for 3 24 81\n\nAsked by: Nandipha on Jan 29, 2021.EDIT\n\nNumber of Odd Days in the Year 2020\n\nDetermine the number of odd days in the year 2020.\n\nAsked by: Sai Anusha on Jan 29, 2021.EDIT\n\nDetermine the Output of the Given Pseudocode\n\nWhat will be the output of the following pseudocode?\ninteger a, b\nset a = 15, b = 7\na= a mod(a-3)\nb= b mod (b-3)\na=a mod 1\nb=b mod 1\nprint a+b\n\nAsked by: Sainath on Jan 28, 2021.EDIT" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93844944,"math_prob":0.77912223,"size":2387,"snap":"2021-31-2021-39","text_gpt3_token_len":726,"char_repetition_ratio":0.1577843,"word_repetition_ratio":0.08368201,"special_character_ratio":0.31085044,"punctuation_ratio":0.12633452,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9802369,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-30T12:54:43Z\",\"WARC-Record-ID\":\"<urn:uuid:38c73c03-a38c-4ead-94af-8e75510b0df8>\",\"Content-Length\":\"14459\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a91c0034-3941-4ef6-a844-be4069fb2f60>\",\"WARC-Concurrent-To\":\"<urn:uuid:84538521-b585-42eb-9981-821d52f98369>\",\"WARC-IP-Address\":\"35.244.11.196\",\"WARC-Target-URI\":\"https://www.meritanswers.com/\",\"WARC-Payload-Digest\":\"sha1:X6GY5G5JTB2FWMABZY25LNTEOZG2LAVL\",\"WARC-Block-Digest\":\"sha1:7CCCRKLWWTOZQQSMTSKHP645NUC4YEHJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153966.60_warc_CC-MAIN-20210730122926-20210730152926-00405.warc.gz\"}"}
https://gnu.googlesource.com/binutils-gdb/+/58eaf4e9bdf53806778e94efe932e9417a21a306/libctf/ctf-qsort_r.c
[ "blob: 0180e83ce1961935d81a94ac6739712543c5f709 [file] [log] [blame]\n /* Copyright (C) 1991-2021 Free Software Foundation, Inc. This file is part of libctf (imported from Gnulib). Written by Douglas C. Schmidt (schmidt@ics.uci.edu). The GNU C Library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. The GNU C Library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with the GNU C Library; if not, see . */ /* If you consider tuning this algorithm, you should consult first: Engineering a sort function; Jon Bentley and M. Douglas McIlroy; Software - Practice and Experience; Vol. 23 (11), 1249-1265, 1993. */ #ifndef _LIBC # include #endif #include #include #include #include \"ctf-decls.h\" #ifndef _LIBC # define _quicksort ctf_qsort_r # define __compar_d_fn_t compar_d_fn_t typedef int (*compar_d_fn_t) (const void *, const void *, void *); #endif /* Byte-wise swap two items of size SIZE. */ #define SWAP(a, b, size) \\ do \\ { \\ size_t __size = (size); \\ char *__a = (a), *__b = (b); \\ do \\ { \\ char __tmp = *__a; \\ *__a++ = *__b; \\ *__b++ = __tmp; \\ } while (--__size > 0); \\ } while (0) /* Discontinue quicksort algorithm when partition gets below this size. This particular magic number was chosen to work best on a Sun 4/260. */ #define MAX_THRESH 4 /* Stack node declarations used to store unfulfilled partition obligations. */ typedef struct { char *lo; char *hi; } stack_node; /* The next 4 #defines implement a very fast in-line stack abstraction. */ /* The stack needs log (total_elements) entries (we could even subtract log(MAX_THRESH)). Since total_elements has type size_t, we get as upper bound for log (total_elements): bits per byte (CHAR_BIT) * sizeof(size_t). */ #define STACK_SIZE (CHAR_BIT * sizeof(size_t)) #define PUSH(low, high) ((void) ((top->lo = (low)), (top->hi = (high)), ++top)) #define POP(low, high) ((void) (--top, (low = top->lo), (high = top->hi))) #define STACK_NOT_EMPTY (stack < top) /* Order size using quicksort. This implementation incorporates four optimizations discussed in Sedgewick: 1. Non-recursive, using an explicit stack of pointer that store the next array partition to sort. To save time, this maximum amount of space required to store an array of SIZE_MAX is allocated on the stack. Assuming a 32-bit (64 bit) integer for size_t, this needs only 32 * sizeof(stack_node) == 256 bytes (for 64 bit: 1024 bytes). Pretty cheap, actually. 2. Chose the pivot element using a median-of-three decision tree. This reduces the probability of selecting a bad pivot value and eliminates certain extraneous comparisons. 3. Only quicksorts TOTAL_ELEMS / MAX_THRESH partitions, leaving insertion sort to order the MAX_THRESH items within each partition. This is a big win, since insertion sort is faster for small, mostly sorted array segments. 4. The larger of the two sub-partitions is always pushed onto the stack first, with the algorithm then concentrating on the smaller partition. This *guarantees* no more than log (total_elems) stack size is needed (actually O(1) in this case)! 
*/ void _quicksort (void *const pbase, size_t total_elems, size_t size, __compar_d_fn_t cmp, void *arg) { char *base_ptr = (char *) pbase; const size_t max_thresh = MAX_THRESH * size; if (total_elems == 0) /* Avoid lossage with unsigned arithmetic below. */ return; if (total_elems > MAX_THRESH) { char *lo = base_ptr; char *hi = &lo[size * (total_elems - 1)]; stack_node stack[STACK_SIZE]; stack_node *top = stack; PUSH (NULL, NULL); while (STACK_NOT_EMPTY) { char *left_ptr; char *right_ptr; /* Select median value from among LO, MID, and HI. Rearrange LO and HI so the three values are sorted. This lowers the probability of picking a pathological pivot value and skips a comparison for both the LEFT_PTR and RIGHT_PTR in the while loops. */ char *mid = lo + size * ((hi - lo) / size >> 1); if ((*cmp) ((void *) mid, (void *) lo, arg) < 0) SWAP (mid, lo, size); if ((*cmp) ((void *) hi, (void *) mid, arg) < 0) SWAP (mid, hi, size); else goto jump_over; if ((*cmp) ((void *) mid, (void *) lo, arg) < 0) SWAP (mid, lo, size); jump_over:; left_ptr = lo + size; right_ptr = hi - size; /* Here's the famous ``collapse the walls'' section of quicksort. Gotta like those tight inner loops! They are the main reason that this algorithm runs much faster than others. */ do { while ((*cmp) ((void *) left_ptr, (void *) mid, arg) < 0) left_ptr += size; while ((*cmp) ((void *) mid, (void *) right_ptr, arg) < 0) right_ptr -= size; if (left_ptr < right_ptr) { SWAP (left_ptr, right_ptr, size); if (mid == left_ptr) mid = right_ptr; else if (mid == right_ptr) mid = left_ptr; left_ptr += size; right_ptr -= size; } else if (left_ptr == right_ptr) { left_ptr += size; right_ptr -= size; break; } } while (left_ptr <= right_ptr); /* Set up pointers for next iteration. First determine whether left and right partitions are below the threshold size. If so, ignore one or both. Otherwise, push the larger partition's bounds on the stack and continue sorting the smaller one. */ if ((size_t) (right_ptr - lo) <= max_thresh) { if ((size_t) (hi - left_ptr) <= max_thresh) /* Ignore both small partitions. */ POP (lo, hi); else /* Ignore small left partition. */ lo = left_ptr; } else if ((size_t) (hi - left_ptr) <= max_thresh) /* Ignore small right partition. */ hi = right_ptr; else if ((right_ptr - lo) > (hi - left_ptr)) { /* Push larger left partition indices. */ PUSH (lo, right_ptr); lo = left_ptr; } else { /* Push larger right partition indices. */ PUSH (left_ptr, hi); hi = right_ptr; } } } /* Once the BASE_PTR array is partially sorted by quicksort the rest is completely sorted using insertion sort, since this is efficient for partitions below MAX_THRESH size. BASE_PTR points to the beginning of the array to sort, and END_PTR points at the very last element in the array (*not* one beyond it!). */ #define min(x, y) ((x) < (y) ? (x) : (y)) { char *const end_ptr = &base_ptr[size * (total_elems - 1)]; char *tmp_ptr = base_ptr; char *thresh = min(end_ptr, base_ptr + max_thresh); char *run_ptr; /* Find smallest element in first threshold and place it at the array's beginning. This is the smallest array element, and the operation speeds up insertion sort's inner loop. */ for (run_ptr = tmp_ptr + size; run_ptr <= thresh; run_ptr += size) if ((*cmp) ((void *) run_ptr, (void *) tmp_ptr, arg) < 0) tmp_ptr = run_ptr; if (tmp_ptr != base_ptr) SWAP (tmp_ptr, base_ptr, size); /* Insertion sort, running from left-hand-side up to right-hand-side. 
*/ run_ptr = base_ptr + size; while ((run_ptr += size) <= end_ptr) { tmp_ptr = run_ptr - size; while ((*cmp) ((void *) run_ptr, (void *) tmp_ptr, arg) < 0) tmp_ptr -= size; tmp_ptr += size; if (tmp_ptr != run_ptr) { char *trav; trav = run_ptr + size; while (--trav >= run_ptr) { char c = *trav; char *hi, *lo; for (hi = lo = trav; (lo -= size) >= tmp_ptr; hi = lo) *hi = *lo; *hi = c; } } } } }" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6034227,"math_prob":0.95733523,"size":7714,"snap":"2022-27-2022-33","text_gpt3_token_len":2206,"char_repetition_ratio":0.13463035,"word_repetition_ratio":0.09695683,"special_character_ratio":0.35455018,"punctuation_ratio":0.16938616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99448156,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T23:00:56Z\",\"WARC-Record-ID\":\"<urn:uuid:06dac987-ef8f-41ce-b408-73da8c4b5892>\",\"Content-Length\":\"99721\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6345254d-2977-4ef5-b761-ddd5efd9650c>\",\"WARC-Concurrent-To\":\"<urn:uuid:78b2f456-e759-4ced-8466-d23607a7957f>\",\"WARC-IP-Address\":\"142.250.31.82\",\"WARC-Target-URI\":\"https://gnu.googlesource.com/binutils-gdb/+/58eaf4e9bdf53806778e94efe932e9417a21a306/libctf/ctf-qsort_r.c\",\"WARC-Payload-Digest\":\"sha1:OSWCOAEN4HZKU5GA5B2YHDCCNH735Q4D\",\"WARC-Block-Digest\":\"sha1:ETYNZGL7PTRX5JGXRXVUTCWKEPS24FF2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104628307.87_warc_CC-MAIN-20220705205356-20220705235356-00363.warc.gz\"}"}
https://www.dangtrinh.com/2013/04/splitselectdatetimewidget.html
[ "SplitSelectDateTimeWidget\n\nDjango only provides us SelectDateWidget to render DateField in Django's Template (django.forms.extras.widgets.SelectDateWidget) . What if you have a DateTimeField and want to have a Select Date and Select Time widget in your template (not a poor input textbox)? Web design means making beautiful UIs, right?\n\nLuckily,  a guy here has written a custom widget name SplitSelectDateTimeWidget to save the world! Here is how I make use of Brad's widget in my project:\n\n- Create a file name widgets.py in your app directory (e.g. myapp.widgets.py). Note that SplitDateTimeWidget also uses SelectTimeWidget which is also written by Brad:\n\n....\n\nimport re from django.forms.extras.widgets import SelectDateWidget from django.forms.widgets import Widget, Select, MultiWidget from django.utils.safestring import mark_safe__all__ = ('SelectTimeWidget', 'SplitSelectDateTimeWidget') # Attempt to match many time formats: # Example: \"12:34:56 P.M.\" matches: # ('12', '34', ':56', '56', 'P.M.', 'P', '.', 'M', '.') # ('12', '34', ':56', '56', 'P.M.') # Note that the colon \":\" before seconds is optional, but only if seconds are omitted time_pattern = r'(\\d\\d?):(\\d\\d)(:(\\d\\d))? *([aApP]\\.?[mM]\\.?)?\\$' RE_TIME = re.compile(time_pattern) # The following are just more readable ways to access re.matched groups: HOURS = 0 MINUTES = 1 SECONDS = 3 MERIDIEM = 4 class SelectTimeWidget(Widget): \"\"\" A Widget that splits time input into <select> elements. Allows form to show as 24hr: <hour>:<minute>:<second>, (default) or as 12hr: <hour>:<minute>:<second> <am|pm> Also allows user-defined increments for minutes/seconds \"\"\" hour_field = '%s_hour' minute_field = '%s_minute' second_field = '%s_second' meridiem_field = '%s_meridiem' twelve_hr = False # Default to 24hr. def __init__(self, attrs=None, hour_step=None, minute_step=None, second_step=None, twelve_hr=False): \"\"\" hour_step, minute_step, second_step are optional step values for for the range of values for the associated select element twelve_hr: If True, forces the output to be in 12-hr format (rather than 24-hr) \"\"\" self.attrs = attrs or {} if twelve_hr: self.twelve_hr = True # Do 12hr (rather than 24hr) self.meridiem_val = 'a.m.' # Default to Morning (A.M.) if hour_step and twelve_hr: self.hours = range(1,13,hour_step) elif hour_step: # 24hr, with stepping. self.hours = range(0,24,hour_step) elif twelve_hr: # 12hr, no stepping self.hours = range(1,13) else: # 24hr, no stepping self.hours = range(0,24) if minute_step: self.minutes = range(0,60,minute_step) else: self.minutes = range(0,60) if second_step: self.seconds = range(0,60,second_step) else: self.seconds = range(0,60) def render(self, name, value, attrs=None): try: # try to get time values from a datetime.time object (value) hour_val, minute_val, second_val = value.hour, value.minute, value.second if self.twelve_hr: if hour_val >= 12: self.meridiem_val = 'p.m.' else: self.meridiem_val = 'a.m.' 
except AttributeError: hour_val = minute_val = second_val = 0 if isinstance(value, basestring): match = RE_TIME.match(value) if match: time_groups = match.groups(); hour_val = int(time_groups[HOURS]) % 24 # force to range(0-24) minute_val = int(time_groups[MINUTES]) if time_groups[SECONDS] is None: second_val = 0 else: second_val = int(time_groups[SECONDS]) # check to see if meridiem was passed in if time_groups[MERIDIEM] is not None: self.meridiem_val = time_groups[MERIDIEM] else: # otherwise, set the meridiem based on the time if self.twelve_hr: if hour_val >= 12: self.meridiem_val = 'p.m.' else: self.meridiem_val = 'a.m.' else: self.meridiem_val = None # If we're doing a 12-hr clock, there will be a meridiem value, so make sure the # hours get printed correctly if self.twelve_hr and self.meridiem_val: if self.meridiem_val.lower().startswith('p') and hour_val > 12 and hour_val < 24: hour_val = hour_val % 12 elif hour_val == 0: hour_val = 12 output = [] if 'id' in self.attrs:id_ = self.attrs['id'] else: id_ = 'id_%s' % name # For times to get displayed correctly, the values MUST be converted to unicode # When Select builds a list of options, it checks against Unicode values hour_val = u\"%.2d\" % hour_val minute_val = u\"%.2d\" % minute_val second_val = u\"%.2d\" % second_val hour_choices = [(\"%.2d\"%i, \"%.2d\"%i) for i in self.hours] local_attrs = self.build_attrs(id=self.hour_field % id_) select_html = Select(choices=hour_choices).render(self.hour_field % name, hour_val, local_attrs) output.append(select_html) minute_choices = [(\"%.2d\"%i, \"%.2d\"%i) for i in self.minutes] local_attrs['id'] = self.minute_field % id_ select_html = Select(choices=minute_choices).render(self.minute_field % name, minute_val, local_attrs) output.append(select_html) second_choices = [(\"%.2d\"%i, \"%.2d\"%i) for i in self.seconds] local_attrs['id'] = self.second_field % id_ select_html = Select(choices=second_choices).render(self.second_field % name, second_val, local_attrs) output.append(select_html) if self.twelve_hr: # If we were given an initial value, make sure the correct meridiem gets selected. if self.meridiem_val is not None and self.meridiem_val.startswith('p'): meridiem_choices = [('p.m.','p.m.'), ('a.m.','a.m.')] else: meridiem_choices = [('a.m.','a.m.'), ('p.m.','p.m.')] local_attrs['id'] = local_attrs['id'] = self.meridiem_field % id_ select_html = Select(choices=meridiem_choices).render(self.meridiem_field % name, self.meridiem_val, local_attrs) output.append(select_html) return mark_safe(u'\\n'.join(output)) def id_for_label(self, id_): return '%s_hour' % id_ id_for_label = classmethod(id_for_label) def value_from_datadict(self, data, files, name): # if there's not h:m:s data, assume zero: h = data.get(self.hour_field % name, 0) # hour m = data.get(self.minute_field % name, 0) # minute s = data.get(self.second_field % name, 0) # second meridiem = data.get(self.meridiem_field % name, None) #NOTE: if meridiem is None, assume 24-hr if meridiem is not None: if meridiem.lower().startswith('p') and int(h) != 12: h = (int(h)+12)%24 elif meridiem.lower().startswith('a') and int(h) == 12: h = 0 if (int(h) == 0 or h) and m and s: return '%s:%s:%s' % (h, m, s) return data.get(name, None) ###################################################################### class SplitSelectDateTimeWidget(MultiWidget): \"\"\" MultiWidget = A widget that is composed of multiple widgets. 
This class combines SelectTimeWidget and SelectDateWidget so we have something like SpliteDateTimeWidget (in django.forms.widgets), but with Select elements. \"\"\" def __init__(self, attrs=None, hour_step=None, minute_step=None, second_step=None, twelve_hr=None, years=None): \"\"\" pass all these parameters to their respective widget constructors...\"\"\" widgets = (SelectDateWidget(attrs=attrs, years=years), SelectTimeWidget(attrs=attrs, hour_step=hour_step, minute_step=minute_step, second_step=second_step, twelve_hr=twelve_hr)) super(SplitSelectDateTimeWidget, self).__init__(widgets, attrs) def decompress(self, value): if value: return [value.date(), value.time().replace(microsecond=0)] return [None, None] def format_output(self, rendered_widgets): \"\"\" Given a list of rendered widgets (as strings), it inserts an HTML linebreak between them. Returns a Unicode string representing the HTML for the whole lot. \"\"\" rendered_widgets.insert(-1, '<br/>') return u''.join(rendered_widgets)\n\nAnd in your forms.py:\n\nfrom django.forms import ModelForm, DateTimeField from myapp.models import MyModel from myapp.widgets import * class PtcEventForm(ModelForm): mydatetime_field = DateTimeField(widget=SplitSelectDateTimeWidget(hour_step=1, \\ minute_step=15, second_step=60, twelve_hr=True)) class Meta: model = PtcEvent\n\nIn mytemplate.html:\n\n...\n\n{{ form.mydatetime_field }}\n\n...\n\nAnd the result will look like this one:", null, "Pretty cool huh?\n\nTrinh\n\nP/S: at this time, I'm still using Django 1.1.1, so this is my solution. If in the newer version of Django, they implement something like that, developer will be very happy!" ]
[ null, "https://lh5.googleusercontent.com/proxy/UYzdXxSQyXDXqQje65eW8e6RBa86I0gaQwyn1BduhlsQ8QvmZP6JRYF0KxxNi9uSNRU-gKd3svK9uWNxXXosrUu-OpzF_lmn4cr0sCQLp4Su3P8s64lcxsE4dUx3gNQibFAYplX3_OHQNcHH_p3JcvZz8TS6qT1lWKXAOLmqvie5NRy2KZPHjBBqjApyAzz0whc5MyBQ-zUhWTIySJNRdqeJTPJBUbM853Gn6_xJZb8=s0-d", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5529805,"math_prob":0.8739029,"size":7959,"snap":"2019-43-2019-47","text_gpt3_token_len":2145,"char_repetition_ratio":0.16593337,"word_repetition_ratio":0.030272452,"special_character_ratio":0.29928383,"punctuation_ratio":0.2474026,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96063757,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-20T01:43:56Z\",\"WARC-Record-ID\":\"<urn:uuid:870a93d1-0e05-4300-9a83-21d18d52fd52>\",\"Content-Length\":\"335137\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d25d1ab5-cac1-403d-b942-14fe577b14e3>\",\"WARC-Concurrent-To\":\"<urn:uuid:deaf84dd-b123-4a44-bd61-6ffd01cd7297>\",\"WARC-IP-Address\":\"172.217.7.211\",\"WARC-Target-URI\":\"https://www.dangtrinh.com/2013/04/splitselectdatetimewidget.html\",\"WARC-Payload-Digest\":\"sha1:3BSKER52BBMYJFUIXV5RR2PDHNS7EOCI\",\"WARC-Block-Digest\":\"sha1:XRLQUYS225PIV3EX5XCPUSECQAXBEJEI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986700560.62_warc_CC-MAIN-20191020001515-20191020025015-00228.warc.gz\"}"}
https://quant.stackexchange.com/questions/65738/why-do-we-theoretically-have-to-take-cross-currency-basis-volatility-into-accoun
[ "# Why do we theoretically have to take cross currency basis volatility into account when constructing Cheapest To Deliver (CTD) discount curves?\n\nLet's take a collateralized USD IRS where there is optionality in collateral currency. My understanding is that it is standard practice to compute forward XXX/USD OIS basis curves for all currencies taken into consideration. Then, as explained by Antoine Conze in Cheapest-to-deliver (CTD) discount curve, the CTD discount curve is constructed as\n\n$$D^{CTD}_{USD}(T)=D_{OIS_{USD}}(T)\\text{exp}\\left(−\\int_{0}^{T}\\smash{\\displaystyle\\max_{\\text{XYZ}}} \\{ \\text{basis}_\\text{XYZUSD}(t) \\}dt\\right)$$,\n\nassuming that you \"disregard basis volatility\" and that in that case \"future basis is today's forward basis\". I am wondering why this is not always the case. For example, isn't it true that when we value the floating leg of a vanilla IRS, we use forward rates as well and we assume that future floating rates are today's forward rates? And that we don't take any sort of interest rate volatility into consideration?\n\nWhy do we have to take this volatility into account for constructing CTD discount curves?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91477334,"math_prob":0.97957015,"size":1001,"snap":"2023-40-2023-50","text_gpt3_token_len":247,"char_repetition_ratio":0.10932799,"word_repetition_ratio":0.0,"special_character_ratio":0.22777222,"punctuation_ratio":0.06557377,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99141103,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T22:46:57Z\",\"WARC-Record-ID\":\"<urn:uuid:5daf17c9-4c4b-4d4f-966d-5c2f145216e9>\",\"Content-Length\":\"166323\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0e5fccec-7855-4049-8ef9-15677cb7e9ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:5485bef2-69a1-4743-862a-9b2ce656f00f>\",\"WARC-IP-Address\":\"104.18.43.226\",\"WARC-Target-URI\":\"https://quant.stackexchange.com/questions/65738/why-do-we-theoretically-have-to-take-cross-currency-basis-volatility-into-accoun\",\"WARC-Payload-Digest\":\"sha1:K4LTGK66KDU4CQL72AKSQP5ICRWSDLSY\",\"WARC-Block-Digest\":\"sha1:UG3S7MIKGCDIKZ5PNIJPZVOZYYFEVZF2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100779.51_warc_CC-MAIN-20231208212357-20231209002357-00811.warc.gz\"}"}
https://www.motilaloswal.com/calculators/compound-interest-calculator?utm_source=www.techshali.com&utm_medium=page_visits&utm_campaign=bloggeroutreach&utm_term=compound_interest_calculator
[ "Compound Interest Calculator Online-Motilal Oswal\n\n# Compound Interest Calculator\n\n%\nYr\n\n#### After 10 years, you will have\n\n0\n\nOn your investment of 0 invested via SIP,\ngenerating a total profit of 0\n\n• Total Investment NaN%\n0\n• Profit Earned NaN%\n0\n\n+91 |\n\n## Determine your wealth with the power of compounding through Compound Interest Calculator", null, "##### It is an “interest paid on interest”", null, "##### Enables you to build wealth over time", null, "##### Helps you plan your retirement early\n\nUse the Compound Interest calculator above to know about the compound interest earned on your principal investment for your desired tenure.\n\n### 1. What is a compound interest calculator?\n\nA compound interest calculator is an online financial tool that computes the compound interest you will earn on your investments. To understand this online tool better, you need to first be familiar with what compound interest is all about. Essentially, compound interest is the interest earned on the principal amount invested and the returns earned thereon. In other words, it is interest calculated on the previously earned interest.\n\nThe formula for computing compound interest is slightly complicated, and making multiple manual calculations for each investment you plan to make can be cumbersome. An online compound interest calculator simplifies this process by determining the compound interest almost instantly, based on the inputs you provide. You only need to enter the following parameters:\n\n• The amount of investment you wish to make\n• The expected rate of returns\n• The investment period\n\nBased on these details, the calculator will display the amount you will have at maturity as well as the compound interest (or the profits) earned thereon.\n\n### 2. How to calculate compound interest?\n\nCompound interest is calculated on the principal and the interest amount earned up to the given point in time. For instance, say you invest Rs. 1,00,000 at an annual compound interest rate of 5% for a tenure of 3 years. At the end of the first year, your investment will grow by Rs. 5,000 (Rs. 1 lakh x 5%), to become Rs. 1,05,000.\n\nAt the end of the second year, interest will be calculated on this new amount of Rs. 1,05,000 instead of just the original capital alone. As you may have featured out, computing this manually can be tedious. This is why a compound interest calculator can be handy. It uses the following compound interest formula to instantly compute the amount of interest you will earn on your investments by compounding:\n\nCompound interest = P (1 + r/n)nt - P\n\nWhere the letters represent the following parameters:\n\n• P is the principal amount deposited\n• r is the rate of interest\n• n is the frequency or the number times the interest is compounded annually\n• t is the overall tenure of the deposit\n\n### 3. What are the advantages of using compound interest calculators?\n\nUsing a compound interest calculator gives you many benefits. If you are not sure whether or not to make use of this tool, check out the key advantages of a compound interest calculator above.\n\n•  It is easy to use\n\nA compound interest calculator is extremely easy to use and it requires no special knowledge. You simply need to enter the details of the investment amount, the investment tenure and the expected rate of return. 
With this information, the online tool will do its job and compute the compound interest for you.\n\n•  It is accurate and reliable\n\nTo plan your investments correctly, you need an accurate idea about how much the money invested today will grow over the intended investment tenure. With a compound interest calculator, you can rest assured that the results are accurate and reliable. The insights from the compound interest calculator can make your financial planning error-free.\n\n•  It can be used multiple times\n\nThere is no restriction on the number of times you can use a compound interest calculator. The unlimited usage offered by this online tool also comes free of charge, so you can compare various investment options without spending any money. By changing the investment amount and the tenure over different iterations, you can determine the ideal investment plan for your portfolio.\n\n## Unlock the hidden treasures of compound interest with the compound interest calculator\n\nWith Motilal Oswal, you can invest in a wide range of products ranging from equities and mutual funds to Robotic Investment Products and PMS! Get expert-guided opinions on your stock investments for Free!", null, "Compound interest is paid on the deposits that you make and the interest that you earn over a period of time.", null, "##### Good for early savings\n\nIf you start saving in your twenties, you can plan your retirement by the time you are forty! Compound Interest helps you!", null, "##### Low risk\n\nWhen you understand the time value of money, you'll see that compounding and patience are the ingredients for wealth.", null, "## Compound Interest Calculator FAQs\n\n### 1. How can a compound interest calculator help me?\n\nCalculating compound interest manually can be challenging. A compound interest calculator can make the process easier and more accurate. Check out the many ways in which this online calculator can help you.\n\n•  It shows you the returns you will earn from your investment\n\nYou can make use of this online calculator to determine the exact amount of compound interest you will earn from your investment. This will help you plan for short-term, medium-term and long-term financial goals accurately.\n\n•  It helps you identify how much to invest\n\nWith a compound interest calculator, you can also figure out how much you need to invest in order to generate a certain amount at maturity. This makes financial planning more efficient, and it allows you to set goals that are attainable.\n\n•  You can compare different investment options\n\nDifferent investment options available today come with different rates of interest and compounding frequencies. To make an informed choice between the various options available, you can use a compound interest calculator to determine the maturity amount in each investment avenue.\n\n### 2. How will a compound interest calculator help me plan my finances?\n\nCompound interest is the secret to creating wealth over time. This is because while simple interest earns you returns on your investment alone, compound interest helps you gain added returns on your existing returns. In other words, it is interest on interest, which helps your corpus grow exponentially rather than just linearly.\n\nTo understand how much you can earn from an investment that offers compound interest, you need to use a compound interest calculator. 
This nifty online tool helps you plan your finances in a more informed and efficient manner, since you can get a clear idea of exactly how much your deposit will grow over the investment tenure.\n\nConversely, you can also use this online financial tool to compare different investment options that offer the benefit of compounding, and figure out which investment avenue can help you achieve your financial goals in a timely manner.\n\n### 3. What is the difference between simple interest and compound interest?\n\nSimple interest is the interest that you earn on the principal amount alone. It is computed using the following formula.\n\nSimple interest = (P x R x T) ÷ 100\n\nHere, the following parameters are denoted by the letters:\n\n•  P is the principal amount deposited\n•  R is the rate of interest\n•  T is the overall tenure of the deposit\n\nOn the other hand, compound interest is the interest earned on the principal amount as well as the returns earned up to that period. So, you earn interest on your interest, leading to exponential growth of your capital. Compound interest is calculated using the formula shown below.\n\nCompound interest = P(1 + r/n)^(nt) − P\n\nHere, the following parameters are denoted by the letters:\n\n•  P is the principal amount deposited\n•  r is the rate of interest\n•  n is the frequency, or the number of times the interest is compounded annually\n•  t is the overall tenure of the deposit\n\n### 4. When is my interest compounded?\n\nThe interest earned on an investment can be compounded on a monthly, quarterly, semi-annual or annual basis. The frequency of compounding varies from one investment to another. So, make sure you check the frequency of compounding before choosing an investment option.\n\n### 5. Can I include regular withdrawals?\n\nA regular compound interest calculator does not typically have any input fields for regular withdrawals. However, you can make use of a SWP (Systematic Withdrawal Plan) calculator to compute the returns from your investment in case you want to make regular withdrawals.\n\n### 6. What is the effective annual interest rate?\n\nThe effective annual interest rate is the rate of return actually earned on your investment when you take compounding into consideration.
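\n\nTo make the formula above concrete, here is a small illustrative C++ sketch of the same calculation. This is not the calculator's actual implementation, just a hedged example of the compound interest formula from this page; the function name compound_interest is ours.\n\n#include <cmath>\n#include <iostream>\n\n// Compound interest = P(1 + r/n)^(nt) - P, as defined above.\n// P: principal, r: annual rate (0.05 means 5%),\n// n: compounding periods per year, t: tenure in years.\ndouble compound_interest(double P, double r, int n, double t)\n{\nreturn P * std::pow(1.0 + r / n, n * t) - P;\n}\n\nint main()\n{\n// The worked example from the page: Rs. 1,00,000 at 5% p.a., compounded annually for 3 years.\nstd::cout << compound_interest(100000.0, 0.05, 1, 3.0) << std::endl; // prints 15762.5\nreturn 0;\n}\n\nFor the page's worked example this prints 15762.5, i.e. Rs. 15,762.50 of interest after three years, consistent with extending the year-by-year calculation shown earlier." ]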
[ null, "https://www.motilaloswal.com/images/sip/tax-saving.png", null, "https://www.motilaloswal.com/images/sip/sip-1.png", null, "https://www.motilaloswal.com/images/sip/sip-16.png", null, "https://www.motilaloswal.com/images/sip/sip-17.png", null, "https://www.motilaloswal.com/images/sip/sip-13.png", null, "https://www.motilaloswal.com/images/sip/sip-18.png", null, "https://www.motilaloswal.com/images/sip/calculatorsip_common.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94740593,"math_prob":0.9105592,"size":7431,"snap":"2023-40-2023-50","text_gpt3_token_len":1439,"char_repetition_ratio":0.22431669,"word_repetition_ratio":0.07334963,"special_character_ratio":0.19176424,"punctuation_ratio":0.0874722,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.983145,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,2,null,4,null,2,null,2,null,4,null,2,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T08:05:24Z\",\"WARC-Record-ID\":\"<urn:uuid:6111c3a7-c681-4cc2-b52d-a1f69309ea34>\",\"Content-Length\":\"219726\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:866d0b60-1799-42f7-a043-9ca251a1a9a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:4156bc95-d233-4b48-be51-6f0e4fbac3d5>\",\"WARC-IP-Address\":\"121.244.93.46\",\"WARC-Target-URI\":\"https://www.motilaloswal.com/calculators/compound-interest-calculator?utm_source=www.techshali.com&utm_medium=page_visits&utm_campaign=bloggeroutreach&utm_term=compound_interest_calculator\",\"WARC-Payload-Digest\":\"sha1:NAICTT3SLWFAVCXQKDHYJZBLM6IPWP5H\",\"WARC-Block-Digest\":\"sha1:CGWGFA2AXHXGQKMMZI4MBQABFUVRVNTN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506339.10_warc_CC-MAIN-20230922070214-20230922100214-00492.warc.gz\"}"}
https://www.rdocumentation.org/packages/ggplot2/versions/3.0.0/topics/aes_colour_fill_alpha
[ "# aes_colour_fill_alpha\n\n0th\n\nPercentile\n\n##### Colour related aesthetics: colour, fill and alpha\n\nThis page demonstrates the usage of a sub-group of aesthetics: colour, fill and alpha.\n\n##### Aliases\n• aes_colour_fill_alpha\n• colour\n• color\n• fill\n##### Examples\n# NOT RUN {\n# Bar chart example\nc <- ggplot(mtcars, aes(factor(cyl)))\n# Default plotting\nc + geom_bar()\n# To change the interior colouring use fill aesthetic\nc + geom_bar(fill = \"red\")\n# Compare with the colour aesthetic which changes just the bar outline\nc + geom_bar(colour = \"red\")\n# Combining both, you can see the changes more clearly\nc + geom_bar(fill = \"white\", colour = \"red\")\n\n# The aesthetic fill also takes different colouring scales\n# setting fill equal to a factor variable uses a discrete colour scale\nk <- ggplot(mtcars, aes(factor(cyl), fill = factor(vs)))\nk + geom_bar()\n\n# Fill aesthetic can also be used with a continuous variable\nm <- ggplot(faithfuld, aes(waiting, eruptions))\nm + geom_raster()\nm + geom_raster(aes(fill = density))\n\n# Some geoms don't use both aesthetics (i.e. geom_point or geom_line)\nb <- ggplot(economics, aes(x = date, y = unemploy))\nb + geom_line()\nb + geom_line(colour = \"green\")\nb + geom_point()\nb + geom_point(colour = \"red\")\n\n# For large datasets with overplotting the alpha\n# aesthetic will make the points more transparent\ndf <- data.frame(x = rnorm(5000), y = rnorm(5000))\nh <- ggplot(df, aes(x,y))\nh + geom_point()\nh + geom_point(alpha = 0.5)\nh + geom_point(alpha = 1/10)\n\n# Alpha can also be used to add shading\nj <- b + geom_line()\nj\nyrng <- range(economics\\$unemploy)\nj <- j + geom_rect(aes(NULL, NULL, xmin = start, xmax = end, fill = party),\nymin = yrng, ymax = yrng, data = presidential)\nj\nj + scale_fill_manual(values = alpha(c(\"blue\", \"red\"), .3))\n# }\n\nDocumentation reproduced from package ggplot2, version 3.0.0, License: GPL-2 | file LICENSE\n\n### Community examples\n\nLooks like there are no examples yet." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61482286,"math_prob":0.9882047,"size":1735,"snap":"2019-13-2019-22","text_gpt3_token_len":478,"char_repetition_ratio":0.1565569,"word_repetition_ratio":0.0,"special_character_ratio":0.30432278,"punctuation_ratio":0.10921501,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9902741,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-26T10:12:03Z\",\"WARC-Record-ID\":\"<urn:uuid:023f3dc3-7f87-4217-9489-6c3d62ed1247>\",\"Content-Length\":\"13611\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bf9be183-0912-48b4-841b-d02588396885>\",\"WARC-Concurrent-To\":\"<urn:uuid:9aa9f7af-8216-45b7-87cd-7cbe15d86c69>\",\"WARC-IP-Address\":\"34.202.175.245\",\"WARC-Target-URI\":\"https://www.rdocumentation.org/packages/ggplot2/versions/3.0.0/topics/aes_colour_fill_alpha\",\"WARC-Payload-Digest\":\"sha1:OZFJMPLLU4N5XGM7BQBUS3KRB5V2IPXC\",\"WARC-Block-Digest\":\"sha1:X4IU2DMDAXKVNMNJTJWFSBKIL2D7AMLJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912204969.39_warc_CC-MAIN-20190326095131-20190326121131-00336.warc.gz\"}"}
https://economictimes.indiatimes.com/news/statistics-calculator
[ "# Statistics Calculator\n\nUse this calculator to find the number of statistics measures like count, sum, standard deviation, variance calculator, mean, median, mode, geometric mean, arithmetic mean etc. for the defined set of numbers.\n\n## Provide Data Series\n\nFind the number of statistics measures by typing in the data set values.\n\n?\n• A population is an entire group that you want to draw conclusions about.\n• A sample is a specific group that you will collect data from. The size of the sample is always less than the total size of the population.\n• Please input values with comma separation example: 2,3,4,5,6", null, "## Calculator\n\nFind the number of statistics measures by adding values in the calculator.\n\nEnter the data set by inputing the values with the help of key ‘ADD’\n\n## Related Calculators\n\nUse this calculator to determine the mean, sum, standard deviation, variance, geometric mean, etc. of a data set.\n\nOr provide values separated by comma below\nEnter the data, where each individual value is separated by a comma, to calculate the sum, mean, median, mode, geometric mean, standard deviation, variance, etc.\n\nWhat is Statistics?\nStatistics is a branch of mathematics that deals with the collection and analysis of quantitative data. The conclusions drawn from the analysis of the collected data are used for scientific research, testing theories, determining probability, finding desirable outcomes, etc.\n\nWhat is a Statistics Calculator?\nThis statistics calculator is an online tool that can be used to compute various statistical metrics such as mean, median, mode, standard deviation, variance, etc. of a given data set. Users can input the required data into the calculator to determine the value of various statistical functions.\n\nHow does this Statistics Calculator work?\nThe calculators above are online tools that can be accessed from devices such as a smartphone, tablet or laptop with a working internet connection. There are two versions of the statistics calculator which serve the same purpose but differ in functionality. The functionality of both the calculators have been described below:\n\n1st Statistics Calculator\nThis calculator looks more like a conventional calculator and has dedicated buttons for entering the required data as well as for applying various statistical functions to the given data. The calculator has buttons for entering numbers from 0 to 9, a button for entering a decimal and a button for entering negative values which can be used to provide the input. Users need to click on the ‘ADD’ button after entering an individual data and repeat the process to input the whole data set. After entering the whole data set, click on the appropriate statistical function to determine its value. 
The statistical functions available in the calculator are given below:\n\n• Click on the ‘x̄’ button to determine the mean or average of the data set provided\n• Click on the ‘x²’ button to determine the mean of the square of the values\n• The ‘Σx’ button is for calculating the sum of all the values provided\n• The ‘Σx²’ button is for calculating the sum of the square of the values\n• Clicking on the ‘σ’ button gives the population standard deviation of the data set\n• Clicking on the ‘σ²’ button gives the population variance of the data provided\n• The ‘s’ function determines the sample standard deviation of the data set\n• The ‘s²’ function determines the sample variance of the data provided\n• Click the ‘GM’ button to find out the geometric mean of the data set\n\n2nd Statistics Calculator\nIn the second version of the statistics calculator, users just need to enter the whole data set into the input field provided. Here, each individual value needs to be separated by a comma. Next, when the ‘Calculate’ button is clicked, the result section displays a comprehensive list with the values of various statistical functions. The various metrics or functions that are shown in the results section are as follows:\n\n• The ‘count’ of the individual values provided in the data set is displayed at the top\n• Next is the ‘sum’ of all the values\n• Then, the ‘mean (average)’ of the data set is provided\n• Then comes the ‘median’ of the data provided\n• Displayed next is the ‘mode’ of the data set\n• Then, the ‘largest’ number in the data set is displayed\n• The ‘smallest’ number in the data set follows\n• The ‘range’ of the data set is displayed next\n• Next is the ‘geometric mean’ of the data provided\n• It is followed by the ‘standard deviation’ of the data set\n• The ‘variance’ of the data is displayed next\n• Then comes the ‘sample standard deviation’ of the data\n• Lastly, the ‘sample variance’ of the data set is displayed\n\nTo enter a new data set, click on the ‘Clear’ button
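\n\nFor readers who want to see how such measures are computed, here is a small illustrative C++ sketch of a few of the functions listed above (mean, geometric mean and the population/sample standard deviations). To be clear, this is a hedged example and not the website's implementation; all function names are ours, and the sample input is the page's example data set.\n\n#include <cmath>\n#include <iostream>\n#include <numeric>\n#include <vector>\n\n// Mean (average): the sum of the values divided by their count.\ndouble mean(std::vector<double> const& xs)\n{\nreturn std::accumulate(xs.begin(), xs.end(), 0.0) / xs.size();\n}\n\n// Geometric mean, computed via logarithms to avoid overflow on long inputs.\ndouble geometric_mean(std::vector<double> const& xs)\n{\ndouble log_sum = 0.0;\nfor (double x : xs) log_sum += std::log(x);\nreturn std::exp(log_sum / xs.size());\n}\n\n// Standard deviation: the population version divides by N, while the sample\n// version divides by (N - 1); that denominator is the only difference.\ndouble std_dev(std::vector<double> const& xs, bool sample)\n{\ndouble const m = mean(xs);\ndouble sq_sum = 0.0;\nfor (double x : xs) sq_sum += (x - m) * (x - m);\nreturn std::sqrt(sq_sum / (sample ? xs.size() - 1 : xs.size()));\n}\n\nint main()\n{\nstd::vector<double> const data{2, 3, 4, 5, 6}; // the page's example input\nstd::cout << mean(data) << std::endl; // 4\nstd::cout << geometric_mean(data) << std::endl; // about 3.7279\nstd::cout << std_dev(data, false) << std::endl; // population sd, about 1.4142\nstd::cout << std_dev(data, true) << std::endl; // sample sd, about 1.5811\nreturn 0;\n}\n\nFor the sample input 2, 3, 4, 5, 6 this gives a mean of 4, a population standard deviation of √2 ≈ 1.414 and a sample standard deviation of ≈ 1.581, which is what the calculator's corresponding functions should report for the same data." ]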
[ null, "https://img.etimg.com/photo/msid-86467154,quality-100/et-star.jpg", null, "https://economictimes.indiatimes.com/thumb/msid-100893713,width-300,height-225,imglength-438020,quality-100/the-economist-hails-indias-digital-platforms.jpg", null, "https://economictimes.indiatimes.com/thumb/width-60,height-45,imgsize-46118,msid-100893194,quality-100/up-cancels-pending-challans-issued-in-17-21.jpg", null, "https://economictimes.indiatimes.com/thumb/width-60,height-45,imgsize-31326,msid-100883676,quality-100/tackling-monopolies-in-digital-ecosystem.jpg", null, "https://economictimes.indiatimes.com/thumb/width-60,height-45,imgsize-56056,msid-100884381,quality-100/4m-mantra-how-the-modi-govt-delivers.jpg", null, "https://economictimes.indiatimes.com/thumb/width-60,height-45,imgsize-20720,msid-100868426,quality-100/ambani-stars-freebies-intensify-ott-war.jpg", null, "https://economictimes.indiatimes.com/thumb/width-60,height-45,imgsize-26350,msid-100887924,quality-100/not-lucky-in-h-1b-lottery-canada-may-call-you.jpg", null, "https://economictimes.indiatimes.com/thumb/width-60,height-45,imgsize-55576,msid-100891075,quality-100/key-takeaways-from-trumps-indictment.jpg", null, "https://economictimes.indiatimes.com/thumb/width-60,height-45,imgsize-473172,msid-100879644,quality-100/tips-to-help-you-strike-work-life-balance.jpg", null, "https://economictimes.indiatimes.com/thumb/width-60,height-45,imgsize-23488,msid-100891625,quality-100/30-bn-loss-to-pakistan-due-to-floods-rains.jpg", null, "https://economictimes.indiatimes.com/thumb/width-60,height-45,imgsize-121280,msid-100887718,quality-100/boris-johnson-quits-as-uk-lawmaker.jpg", null, "https://economictimes.indiatimes.com/thumb/width-60,height-45,imgsize-101526,msid-100884489,quality-100/dgft-offices-for-investment-trade-promotion.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86714935,"math_prob":0.98351383,"size":6322,"snap":"2023-14-2023-23","text_gpt3_token_len":1269,"char_repetition_ratio":0.17378917,"word_repetition_ratio":0.7476008,"special_character_ratio":0.196615,"punctuation_ratio":0.0920354,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.990237,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,4,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-10T09:07:37Z\",\"WARC-Record-ID\":\"<urn:uuid:dfc65597-216c-4a55-9e8d-dad8f468bb54>\",\"Content-Length\":\"143106\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7357166e-c037-4e50-bd6b-d4d47bb397d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:81f070eb-a557-4a7b-9669-6e61d65e4740>\",\"WARC-IP-Address\":\"23.13.163.217\",\"WARC-Target-URI\":\"https://economictimes.indiatimes.com/news/statistics-calculator\",\"WARC-Payload-Digest\":\"sha1:WCC5S7CUMGY2AAJMIX2O6XY37IRJCPKZ\",\"WARC-Block-Digest\":\"sha1:3S3BKERPOIBMXGXG7ANXZ74H66YOIE4Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224657144.94_warc_CC-MAIN-20230610062920-20230610092920-00272.warc.gz\"}"}
https://docs.aspose.cloud/slides/working-with-math-formulas/
[ "# Working with Math Formulas\n\nContents\n[ ]\n\nMath formula (equation) is a kind of portion in a shape. So, you work with formulas using ordinary portion methods. The manipulations with equation are done using Portion.MathParagraph property. Also, you can export math portions to MathML." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8984757,"math_prob":0.93026525,"size":284,"snap":"2023-14-2023-23","text_gpt3_token_len":61,"char_repetition_ratio":0.15,"word_repetition_ratio":0.0,"special_character_ratio":0.19014084,"punctuation_ratio":0.13725491,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99480736,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-23T16:37:42Z\",\"WARC-Record-ID\":\"<urn:uuid:9c6ba303-0fc9-4c18-82b1-06f0f24b33a0>\",\"Content-Length\":\"26514\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:70d5b1c6-38c7-4b51-840a-60e127a3a9e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:9f851194-ab4f-4fab-846a-7329cc7b30c1>\",\"WARC-IP-Address\":\"44.230.76.109\",\"WARC-Target-URI\":\"https://docs.aspose.cloud/slides/working-with-math-formulas/\",\"WARC-Payload-Digest\":\"sha1:TFHRS3MSJGQCAUXWGK2CLJFQWGDLEBKV\",\"WARC-Block-Digest\":\"sha1:2JG2UFUM557K2VZAH4G3RL3OLAHHY4V2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945182.12_warc_CC-MAIN-20230323163125-20230323193125-00157.warc.gz\"}"}
https://matgomes.com/base64-encode-decode-cpp/
[ "Base64 encoding is an easy binary-to-text encoding scheme. It’s used in various places: from encoding bytes for sending over the internet, to storing public/private keys in text format. In this post, we look at how to Base64 encode and decode in modern C++20!\n\n## Understanding Base64 Encoding & Decoding\n\nBase64 can be quite tricky to understand if you’re not used to working with binary data, but luckily for you, I posted a comprehensive tutorial on how Base64 encoding and decoding work.\n\nCheck out the post and make sure you understand the overall algorithm, as this post assumes you understand Base64 and how the bit operations help you transform binary into text. In addition, the pretty hand-drawn illustrations may help you visualise the whole process!\n\n## Requirements For This Project\n\nBefore we get into the technical details, let’s look into what we need to build the modern C++ Base64 encoder.\n\nFirstly, the code shown in this post contains C++20 features such as std::span, as well as C++17 features like if initializers and std::optional. For this reason, you need a fairly recent compiler and a not-so-old version of CMake (perhaps at least 3.16). From the compiler support cppreference page, you need at least the following compiler versions:\n\n• GCC version 10.\n• Clang version 7.\n• MSCV version 19.26.\n\nFinally, to build the entire project, you will need the Conan C++ package manager and Github to pull the code locally. There will be more on this in the following section about where to find the code.\n\n# Base64 Encode With Modern C++\n\nIn this section, we look at how to implement a safe Base64 encoder using a simple encoding table. Long story short, we consider three bytes each time, concatenate their values, and split them into four ASCII characters according to the encoding table – this is outlined in more detail in the aforementioned Base64 tutorial.\n\n### Using Arrays To Store The Encoding Table\n\nIdeally, once we’ve split the three bytes into four six-bit values, we need to convert these values into a Base64 character. For a six-bit value, the resulting character is the one at the position given by the six-bit value in the following Base64 character array:\n\n\\begin{align*}\n&\\texttt{A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,} \\\\\n&\\texttt{Q,R,S,T,U,V,W,X,Y,Z,a,b,c,d,e,f,} \\\\\n&\\texttt{g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,} \\\\\n&\\texttt{w,x,y,z,0,1,2,3,4,5,6,7,8,9,+,/}\n\\end{align*}\n\nFor example, for the six-bit value \\texttt{000000}, the character would be the one at position zero, or \\texttt{\"A\"}. Likewise, the character for \\texttt{111111} would be \\texttt{\"/\"}.\n\nBut how can we represent the table above in C++? If you think about it, the table above is simply a contiguous array of characters, where each element is laid out as per the table. 
Without further ado, the following array represents the encoding table we outlined in this section.\n\nstd::array<char, 64> constexpr encode_table{\n'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K',\n'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V',\n'W', 'X', 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f', 'g',\n'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r',\n's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '0', '1', '2',\n'3', '4', '5', '6', '7', '8', '9', '+', '/'};\n\nYou can see we have exactly 64 characters, and the order of the characters matches the encoding table we discussed previously – we will later use this array to convert the six-bit values to Base64 characters!\n\nFinally, the code snippet above requires you to include the <array> header.\n\n### How To Encode A Full Byte-Triple\n\n“Three is a magic number. It takes three legs to make a tripod or to make a table stand” – Similarly, it takes three bytes to convert from binary to Base64, and three-byte splits are fundamental to the algorithm.\n\nFor this reason, if we can encode a chunk of three bytes, we can encode an entire array of bytes using padding tricks we will see later.\n\nstd::array<char, 4> encode_triplet(std::uint8_t a, std::uint8_t b, std::uint8_t c)\n{\nstd::uint32_t const concat_bits = (a << 16) | (b << 8) | c;\n\nauto const b64_char1 = encode_table[(concat_bits >> 18) & 0b0011'1111];\nauto const b64_char2 = encode_table[(concat_bits >> 12) & 0b0011'1111];\nauto const b64_char3 = encode_table[(concat_bits >> 6) & 0b0011'1111];\nauto const b64_char4 = encode_table[concat_bits & 0b0011'1111];\nreturn {b64_char1, b64_char2, b64_char3, b64_char4};\n}\n\nUnsurprisingly, the function encode_triplet in the code snippet above will do just what we expect: given three bytes represented by a, b, and c, it will return four Base64 characters.\n\nFor the purposes of this post, I chose std::uint8_t to represent a byte, and this definition can be found in the header <cstdint>. In addition, for concatenating the bytes in line 3, we use the type std::uint32_t as the results will be exactly 24-bits, so 32-bits should be enough to hold it.\n\n#### Going From Bytes To Base64 Characters\n\nAs you should know by now, the overall goal of Base64 encoding is to transform some bytes into a string of characters contained in the encoding table. Furthermore, I mentioned that each resulting character is the equivalent offset of a value in the encoding table.\n\nLet's suppose that the variable concat_bits in the snippet above is a 24-bit value, which we obtained from concatenating bytes a, b, and c from left to right. In fact, concat_bits is actually a 32-bit value containing those three bytes concatenated, plus zeroes on the eight left-most bits.\n\nInterestingly, how can we split that 24-bit value in concat_bits into four 6-bit values? To make the problem a bit simpler, let's suppose you want the second 6-bit value, counting from the least significant end.\n\nauto const concat_bits = ... ; // 24-bit long value\nauto const second_six_bits = (concat_bits >> 6) & 0b0011'1111;\n\nIn the snippet above, the variable second_six_bits holds the second six-bit value, obtained by first shifting all the bits of concat_bits to the right six times and then masking off the lowest six bits.\n\nIn more detail, the bit shift operation will put the bits \\texttt{[6-11]} (inclusive) in the positions \\texttt{[0-5]}. 
In addition, the masking with & 0b0011'1111 has the effect of only keeping the values in the first 6-bits, zeroing out all the other bits in the shift operation.\n\nEssentially, these two operations simply store the second 6-bit value as an integer value in second_six_bits. Due to masking the first six-bits, the integer value will never be anything outside the range \\texttt{[0, 63]}. This same technique is used to get the first, second, third, and fourth six-bit values in the 24-bit, so we can use them for the character offsets!\n\n#### Converting A Six-Bit Bit Value To A Base64 Character\n\nBeing able to convert the three bytes into four six-bit values is half of the work, and the other half is simply turning each six-bit value into a Base64 character.\n\nInterestingly, this is where the encode_table comes in! Long story short, the six-bit values represent the offset of the Base64 characters in the encoding table.\n\n// get the six_bit value with the bit operations\nauto const a_six_bit_value = ...;\n\n// use the six_bit value as the character index\nauto const base64_char = encoding_table[a_six_bit_value];\n\nIn the code snippet above, base64_char contains the character represented by the six-bit value. Needless to say, there’s nothing very interesting going on here, we are simply indexing into the encoding table array.\n\nImportantly, the code above should be safe if a_six_bit_value is always in the range \\texttt{[0, 63]}, because if it’s outside that range, you will be accessing invalid memory as the encoding table only has 64 characters!\n\nFinally, this technique is used for getting all the four characters for every three bytes of the data to encode.\n\n### What Happens When We Don’t Have Three-Bytes In The Input To The Base64 Encoder?\n\nHow long is a piece of string? Like strings, byte arrays can have any size, however, we do know some facts about these sizes:\n\n• A byte array’s size could be exactly divisible by three such as 0, 3, and 333.\n• The size of an array could be a multiple of three plus two remaining characters. For example, array sizes of 2, 11, or 554. Essentially, if we split byte arrays of these sizes into groups of three, there would be two remaining bytes at the end.\n• Lastly, if the array’s length is a multiple of three plus one — such as 1, 10, 1000 — we would have one remaining character after partitioning the array into groups of three.\n\nBut how do we deal with the cases where we have remaining characters after the three-byte partition? Basically, we pad the last group (either two or one byte) with zeros, carry out the three-byte encoding, and pad the resulting Base64 string with = values. I’d highly recommend reading about these edge cases in my post on understanding Base64 encoding and decoding.\n\n### Encoding The Remaining Two Bytes\n\nLet’s assume that you partitioned your bytes into groups of three, but the final group only has two bytes in it. Ignoring the final incomplete partition, you encoded all of your perfect three-byte groups and appended them into the Base64 encoded string base64_result. How do we then encode the final, incomplete group of two bytes according to the Base64 specification?\n\nMathematically speaking, we can simply pass the two remaining bytes into the encode_triplet function, setting the last input character as a 0. 
Following that, we simply ignore the last character in the result, appending only the first three returned Base64 characters into base64_result, and finally, append a = to it as well.\n\nstd::uint8_t const final_byte1{0x98};\nstd::uint8_t const final_byte2{0x98};\n\nauto const base64_chars = encode_triplet(final_byte1,\nfinal_byte2,\n0x00);\n\nstd::copy_n(begin(base64_chars),\n3,\nback_inserter(base64_result));\nbase64_result.push_back('=');\n\nAnd just like that, we have encoded the last two-byte partition and added it to our result.\n\n### Encoding The Remaining Single Byte\n\nHopefully, by now, you got the gist of this. Similarly to the two remaining bytes case, we simply pass the single remaining byte into encode_triplet, and the remaining byte arguments will both be 0.\n\nstd::uint8_t const final_byte{0x98};\n\nauto const base64_chars = encode_triplet(final_byte,\n0x00,\n0x00);\n\nstd::copy_n(begin(base64_chars),\n2,\nback_inserter(base64_result));\nbase64_result.push_back('=');\nbase64_result.push_back('=');\n\nAnother difference is that we now add an extra = padding character to the result.\n\n## Decoding Base64 Strings In C++\n\n“Nothing is invented, for it's written in nature first; everything is copied and reused” – Me twisting a famous quote… But it's true, most problems are just a different form of other problems. This is no different in Base64: decoding is just “encoding” Base64 characters into bytes. So the solution will be fairly similar to encoding.\n\nIt will all be clearer when we look at the decoding table. More precisely, we have the following problem: for a given Base64 character, what six-bit value did it originally have?\n\nIn this section, we present similar ideas to Base64 encoding, after all, it's the same problem except we're doing the opposite! Specifically, we will use a decoding table to get six-bit values for each character, and also bit-shift operations to reconstruct the original bytes.\n\n### Decoding Base64 Strings: The End Game\n\nBriefly speaking, Base64 decoding a valid string involves splitting the string into partitions of four Base64 characters. Following that, we transform each four-character partition into the original three bytes, and when we add all those bytes to an array in order, we get the decoded bytes.\n\nImportantly, the method described in this section assumes that the input to the decoding function is always valid, but this is not always the case. For this reason, we must always check that the input string is a valid Base64 string, and you can see this in the full code (a small sketch of such checks follows this list):\n\n• Check that the length of the string to decode is always a multiple of 4.\n• All characters in the input string need to be valid Base64 characters; the string cannot contain characters that aren't in the encode_table array.\n• Last but not least, the last four-character group may contain at most two equal padding signs at the end.
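\n\nBelow is one way those three checks could look in code. To be clear, this is a hedged sketch rather than the actual base64pp implementation; the helper names is_valid_base64 and is_base64_char are ours.\n\n#include <algorithm>\n#include <cstddef>\n#include <string_view>\n\n// Assumed helper: true if c is in the Base64 alphabet (i.e. in encode_table).\nbool is_base64_char(char c)\n{\nreturn (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') || (c >= '0' && c <= '9') || c == '+' || c == '/';\n}\n\n// A sketch of the three validity checks described above.\nbool is_valid_base64(std::string_view encoded)\n{\n// 1) The length must be a multiple of four (empty is trivially valid).\nif (encoded.size() % 4 != 0) return false;\nif (encoded.empty()) return true;\n\n// 3) At most two '=' signs, and only at the very end of the string.\nstd::size_t padding = 0;\nwhile (padding < 2 && encoded[encoded.size() - 1 - padding] == '=') ++padding;\nauto const body = encoded.substr(0, encoded.size() - padding);\n\n// 2) Everything before the padding must be a Base64 alphabet character.\nreturn std::all_of(body.begin(), body.end(), is_base64_char);\n}\n\nWith checks like these in place, the decoding logic below can safely assume well-formed input.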
### Using A Decoding Table To Find Six-Bit Values For A Character\n\nIf we want to reconstruct the original bytes for the string \\texttt{AAAA}, we need to get the six-bit value for the character \\texttt{A}, which would be \\texttt{000000}.\n\nstd::array<std::uint8_t, 128> constexpr decode_table{\n0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64,\n0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64,\n0x64, 0x64, 0x64, 0x64, 0x64, 0x3E, 0x64, 0x64, 0x64, 0x3F, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3A, 0x3B, 0x3C,\n0x3D, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x64, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A,\n0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x64, 0x64, 0x64, 0x64,\n0x64, 0x64, 0x1A, 0x1B, 0x1C, 0x1D, 0x1E, 0x1F, 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x28, 0x29, 0x2A,\n0x2B, 0x2C, 0x2D, 0x2E, 0x2F, 0x30, 0x31, 0x32, 0x33, 0x64, 0x64, 0x64, 0x64, 0x64};\n\nIn the snippet above, the decode_table byte-array solves that problem. If you have a single valid Base64 character, such as \\texttt{A}, then using that character as the index to the array will give you the original six-bit value.\n\nAs an example, decode_table['A'] will return 0x00. Similarly, using other characters as the index for the decode_table array gives you the original six-bit offset of the encoded character.\n\n### Decoding Four Base64 Characters With No Padding\n\nAssuming we have a perfectly valid four-character partition of a Base64 string, the following function decode_quad decodes the four characters given as inputs.\n\nstd::array<std::uint8_t, 3> decode_quad(char a, char b, char c, char d)\n{\nstd::uint32_t const concat_bytes =\n(decode_table[a] << 18) | (decode_table[b] << 12) |\n(decode_table[c] << 6) | decode_table[d];\n\nstd::uint8_t const byte1 = (concat_bytes >> 16) & 0b1111'1111;\nstd::uint8_t const byte2 = (concat_bytes >> 8) & 0b1111'1111;\nstd::uint8_t const byte3 = concat_bytes & 0b1111'1111;\nreturn {byte1, byte2, byte3};\n}\n\nFor example, for the four characters \\texttt{AAAB}, calling decode_quad('A', 'A', 'A', 'B') would return the array containing {0x00, 0x00, 0x01}.\n\nHow is this achieved? Mostly with the decode_table. Essentially, we get the original six-bit values for each character and place each value in its respective original place in a 32-bit integer. 
Although we don’t use all of the 32-bits, the 24 least significant bits will contain the original concatenated bytes (refer to the encoding section).\n\nFinally, the original three bytes are acquired by getting the first eight bits, then the second and third eight bits from the concatenated six-bit values.\n\nFiguring out how this is all achieved with bit-shift and masking operations is left as an exercise to the reader ~~ I’ve always wanted to say this 🙂\n\n### What If The Last Group Of Characters Has Equal (=) Padding Characters?\n\nIdeally, you need to iterate over each partition of four characters, decode each of them and append every three bytes returned by decode_quadto a byte array. However, this only works if the encoded string has no padding at the end of it.\n\nFor the decoding to work with any valid Base64 string, we would need to do something special to the last partition of four characters if they have padding:\n\n• If the partition only has one = (equal) sign at the end, we simply call decode_quad(a, b, c, 'A') with a, b, and c as the first, second and third characters of the group, respectively. Importantly, we only append the first two bytes returned from this call to the result byte-array.\n• If the parition has two = (equal) signs at the end, we call decode_quad(a, b, 'A', 'A') where ‘a’ and ‘b’ are the first and second characters of the last parition. Similarly, we only append the first character returned from the call to the return byte-array.\n\nWithout baffling too much, you can see how this is done in the full code on Github. The function in question here is base64::decode.\n\n## Base64 C++ Encoder And Decoder On Github\n\nWithout further ado, you can view all the code for the Base64 encoder in this Git repository. If you want to build it, follow the instructions in the README.md file.\n\nMost of the code was shown and explained in this post, but you can see the whole picture in the Github repository. More precisely, the two main functions are base64pp::encode(...)and base64pp::decode(...) – the input checking and the overall logic for encoding and decoding are here.\n\nMoreover, here’s an overview of the project structure:\n\n• “base64pp/include/base64pp/base64pp.h” is the header file declaring the base64pp::encode and base64pp::decode functions.\n• “base64pp/base64pp.cpp” defines the encoding and decoding functions, all the program logic can be found in this file.\n• “base64pp/CMakeLists.txt” is the CMake file that defines the base64pp library.\n• “base64pp/tests/base64pp_tests.cpp” defines the unit tests for the encoder and decoder.\n• “tests/CMakeLists.txt” defines the test executable base64pp_tests, which uses GTests as the unit test framework.\n\nIf you’re not quite comfortable with CMake, I recommend checking out my posts on how to create libraries with CMake and adding include directories to targets with CMake.\n\n## What’s The Point Of This Post? Aren’t There Many Base64 Encoders Out There?\n\nI decided to write this post after being frustrated with the quality of some Base64 encoders out there. For example, Googling “C++ Base64 Encoder” will either land you on a StackOverflow thread or this Github Base64 encoding gist.\n\nAlthough I have nothing against the writers of the demo code in those threads, both the top answers on Google are very bad examples of Base64 encoding in C++. 
Not only are the snippets unmaintainable due to poor readability, but I've also spotted several serious issues with the top examples:\n\n• C++ undefined behaviour use, meaning that certain features these examples rely on aren't actually guaranteed by the standard.\n• Buffer overflows with certain input data, which can cause serious security-related issues.\n• Lack of input sanity checks, allowing users to, for example, decode any invalid string.\n• Very difficult to reason about the code as the examples seem to focus on solving the problem with the least number of lines. This isn't always a good idea and it hurts the readability of the code.\n\nWhat's worse is that I've seen these examples being copied and pasted into places they should never have been. Very few of the examples I see on the first page of Google are actually production-worthy.\n\n### Stick To Already Tried-And-Tested Libraries Whenever Possible\n\nIdeally, if there is a library that provides the functionality you need, then use it instead of reinventing the wheel! Similarly, if your project requires a Base64 encoder/decoder, your best bet is to use a popular library like OpenSSL to do that work.\n\nAlthough third-party libraries can also have issues, it's less likely that a widely tested and used library will do a worse job than the code you implement.\n\nKnowing how it works under the hood is a bonus – in case you do spot issues with popular open-source libraries, you can fix them yourself! In fact, one of the projects I've worked on used an old version of OpenSSL where the decoding function was crashing; it turned out the library wasn't properly validating its inputs and crashed on invalid ones!\n\nHave I missed anything? Have you noticed any errors or typos? Do you have useful feedback? Feel free to comment below!
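\n\nTo tie the pieces of this post together, here is a hedged sketch of what a full encode function could look like: the complete three-byte groups go through encode_triplet (defined earlier), and the leftover one or two bytes get the padding treatment described above. This is an illustration in the spirit of the article, not a copy of the actual base64pp::encode.\n\n#include <cstddef>\n#include <cstdint>\n#include <span>\n#include <string>\n\n// Sketch: encode an arbitrary byte span using the encode_triplet helper\n// plus the one- and two-byte remainder handling from earlier sections.\nstd::string encode(std::span<std::uint8_t const> input)\n{\nstd::string base64_result;\nstd::size_t const full_triples = input.size() / 3;\n\n// Encode every complete three-byte group.\nfor (std::size_t i = 0; i < full_triples; ++i)\n{\nauto const chars = encode_triplet(input[3 * i], input[3 * i + 1], input[3 * i + 2]);\nbase64_result.append(chars.begin(), chars.end());\n}\n\n// Handle the remaining one or two bytes, padding the output with '='.\nswitch (input.size() % 3)\n{\ncase 1:\n{\nauto const chars = encode_triplet(input[3 * full_triples], 0x00, 0x00);\nbase64_result.append(chars.begin(), chars.begin() + 2);\nbase64_result.push_back('=');\nbase64_result.push_back('=');\nbreak;\n}\ncase 2:\n{\nauto const chars = encode_triplet(input[3 * full_triples], input[3 * full_triples + 1], 0x00);\nbase64_result.append(chars.begin(), chars.begin() + 3);\nbase64_result.push_back('=');\nbreak;\n}\ndefault: break; // no remainder, nothing to pad\n}\nreturn base64_result;\n}\n\nA production version would also want the input handling and validation discussed earlier, but the three-case split above is the heart of the algorithm.\n\nPublished in CPP" ]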
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8424667,"math_prob":0.9280726,"size":19662,"snap":"2022-40-2023-06","text_gpt3_token_len":5245,"char_repetition_ratio":0.16222402,"word_repetition_ratio":0.01768235,"special_character_ratio":0.27626896,"punctuation_ratio":0.17508003,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96146464,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-02T20:31:58Z\",\"WARC-Record-ID\":\"<urn:uuid:4b222f91-df70-46ef-a5d0-d6dc9476907d>\",\"Content-Length\":\"80584\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:adbb5961-1ebf-495e-a858-5af4a46aebfc>\",\"WARC-Concurrent-To\":\"<urn:uuid:f3760b49-d637-4270-835f-040edc86d4e7>\",\"WARC-IP-Address\":\"35.188.79.241\",\"WARC-Target-URI\":\"https://matgomes.com/base64-encode-decode-cpp/\",\"WARC-Payload-Digest\":\"sha1:3BOWWNVPZ3R2X3QFYCKG2IPRVEMNSCJM\",\"WARC-Block-Digest\":\"sha1:UIO4PSDWZ3XBTLWKJIOCLDYNZ7FFNY2Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500041.18_warc_CC-MAIN-20230202200542-20230202230542-00809.warc.gz\"}"}
https://assignmentsbag.com/assignments-for-class-10-mathematics-coordinate-geometry/
[ "# Assignments For Class 10 Mathematics Coordinate Geometry\n\nAssignments for Class 10 Mathematics Coordinate Geometry have been developed for Standard 10 students based on the latest syllabus and textbooks applicable in CBSE, NCERT and KVS schools. Parents and students can download the full collection of class assignments for class 10 Mathematics Coordinate Geometry from our website as we have provided all topic wise assignments free in PDF format which can be downloaded easily. Students are recommended to do these assignments daily by taking printouts and going through the questions and answers for Grade 10 Mathematics Coordinate Geometry. You should try to do these test assignments on a daily basis so that you are able to understand the concepts and details of each chapter in your Mathematics Coordinate Geometry book and get good marks in class 10 exams.\n\nQuestion. The lines, x = 2 and y = 3 are\n(A) parallel to each other\n(B) perpendicular to each other\n(C) neither parallel nor perpendicular to each other\n(D) none of these\n\nB\n\nQuestion. The lines, x = –2 and y = 3 intersect at the point ________.\n(A) (-2,3)\n(B) (2,-3)\n(C) (3,-2)\n(D) (-3,2)\n\nA\n\nQuestion. The point equidistant from vertices of a triangle is called\n(A) circumcentre\n(B) incentre\n(C) orthocenter\n(D) none of these\n\nA\n\nQuestion. The slope of the line joining the points (2, k – 3) and (4,-7) is 3. Find k.\n(A) -10\n(B) -6\n(C) -2\n(D) 10\n\nA\n\nQuestion. The distance between the points (3, 4) and (6, –3) is.\n(A) √58\n(B) √68\n(C) √78\n(D) √98\n\nA\n\nQuestion. If A and B are (1, 4) and (5, 2) respectively, then the co-ordinates of P when AP/PB = 3/4 is\n(A) (19/7 , 22/7)\n(B) (20/7 , 21/7)\n(C) (21/7 , 22/7)\n(D) (21/7 , 23/7)\n\nA\n\nQuestion. If the coordinates of the mid-points of the sides of a triangle are (1, 2) (0, –1) and (2, –1). Then the sum of x coordinates of its vertices of the triangle is\n(A) 3\n(B) 4\n(C) 5\n(D) 6\n\nA\n\nQuestion. The centre of a circle is C(2,-3) and one end of the diameter AB is A(3,5) . Find the coordinates of the other end B.\n(A) (1,-11)\n(B) (5,2)\n(C) (1,8)\n(D) none of these\n\nA\n\nQuestion. Find λ , if the line 3x – λy + 6 = 0 passes through the point (–3,4).\n(A) 3/4\n(B) -3/4\n(C) 4/3\n(D) -4/3\n\nB\n\nQuestion. Find the area of the triangle formed by the line 5x – 3y + 15 = 0with coordinate axes.\n(A) 15cm2\n(B) 5 cm2\n(C) 8 cm2\n(D) 15/2 cm2\n\nD\n\nQuestion. The point P lying in the fourth quadrant which is at a distance of 4 units from X-axis and 3 units from Y-axis is__________.\n(A) (4,-3)\n(B) (4,3)\n(C) (3,-4)\n(D) (-3,4)\n\nC\n\nQuestion. In what ratio does the line 4x+3y -13 = 0 divide the line segment joining the points (2, 1) and (1, 4)?\n(A) 3 : 2 internally\n(B) 2 : 3 externally\n(C) 2 : 3 internally\n(D) 3 : 2 externally\n\nC\n\nQuestion. The points on X-axis which are at a distance of 13 units from (-2,3) is ________.\n(A) (0,0), (-2,-3)\n(B) (0,0), (-4,0)\n(C) (0,0), (2,3)\n(D) none of these\n\nB\n\nQuestion. If (5, 3), (4, 2) and (1, –2) are the mid points of sides of triangle ABC, then the area of ΔABCis\n(A) 2 sq. units\n(B) 3 sq. units\n(C) 1 sq. units\n(D) 4 sq. units\n\nA\n\nQuestion. 
Question. The points (a, b + c), (b, c + a) and (c, a + b)\n(A) are collinear\n(B) form a scalene triangle\n(C) form an equilateral triangle\n(D) none of these\n\nA\n\nCoordinate Geometry\n\nA point in a 2-D plane (say, the XY plane) is represented by its coordinates (x, y).\nHere x is the distance of the point from the y-axis and is called its x-coordinate (abscissa).\ny is the distance of the point from the x-axis and is called its y-coordinate (ordinate).\n\nDistance Formula: the distance between the points P(x1, y1) and Q(x2, y2) is PQ = √[(x2 − x1)² + (y2 − y1)²].\n\nThis is called the distance formula.\n\nArea of a triangle: the area of ΔABC with vertices A(x1, y1), B(x2, y2) and C(x3, y3) is the numerical value of the expression ½[x1(y2 − y3) + x2(y3 − y1) + x3(y1 − y2)].\n\nSection Formula: the coordinates of the point P(x, y) which divides the line segment joining the points A(x1, y1) and B(x2, y2) internally, in the ratio m1 : m2, are\n\nP(x, y) = ((m1x2 + m2x1)/(m1 + m2), (m1y2 + m2y1)/(m1 + m2))\n\nThis is known as the section formula.\n\nSpecial Case: if P is the mid point, then m1 : m2 = 1 : 1.\n\nNow, the coordinates of the mid point are ((x1 + x2)/2, (y1 + y2)/2).\n\nIn order to prove that a given figure is a:\ni) Square, prove that four sides are equal and the diagonals are equal.\nii) Rhombus, prove that the four sides are equal.\niii) Rectangle, prove the opposite sides are equal and the diagonals are also equal.\niv) Parallelogram, prove that the opposite sides are equal.\nThe point of intersection of the medians of a triangle is called its centroid. The coordinates of the centroid of the triangle whose vertices are (x1, y1), (x2, y2) and (x3, y3) are given by ((x1 + x2 + x3)/3, (y1 + y2 + y3)/3).\n\nFor three points to be collinear,\ni) The sum of the distances between two pairs of points is equal to the distance between the third pair of points.\nii) Three given points A(x1, y1), B(x2, y2) and C(x3, y3) are collinear if ar(ΔABC) = 0.\n\nQuestion. Find the condition that the point (x, y) may lie on the line joining (3, 4) and (-5, -6).\nSolution. Since the point P(x, y) lies on the line joining A(3, 4) and B(-5, -6), the points P, A and B are collinear.\nSo, ar(ΔPAB) = 0\n⇒ x(4 + 6) + 3(−6 − y) + (−5)(y − 4) = 0\n⇒ 10x − 18 − 3y − 5y + 20 = 0\n⇒ 10x − 8y + 2 = 0, i.e. 5x − 4y + 1 = 0\nHence, the point (x, y) lies on the line joining (3, 4) and (-5, -6) if 5x − 4y + 1 = 0.\n\nQuestion. Show that the points A(3, 5), B(6, 0), C(1, -3) and D(-2, 2) are the vertices of a square ABCD.\nSolution. Let A(3, 5), B(6, 0), C(1, -3) and D(-2, 2) be the angular points of a quadrilateral ABCD. Join AC and BD.\nAB = √(3² + 5²) = √34, BC = √(5² + 3²) = √34, CD = √(3² + 5²) = √34 and DA = √(5² + 3²) = √34\nAC = √(2² + 8²) = √68 and BD = √(8² + 2²) = √68\nThus, ABCD is a quadrilateral in which all sides are equal and the diagonals are equal.\nHence, quad. ABCD is a square.\n\nQuestion. If P(x, y) is any point on the line joining the points A(a, 0) and B(0, b), then show that x/a + y/b = 1.\nSolution. It is given that the point P(x, y) lies on the line segment joining points A(a, 0) and B(0, b). Therefore, points P(x, y), A(a, 0) and B(0, b) are collinear points.\nSo, x(0 − b) + a(b − y) + 0(y − 0) = 0\n⇒ −bx + ab − ay = 0 ⇒ bx + ay = ab\nDividing throughout by ab gives x/a + y/b = 1.\n\nQuestion. A(4, 2), B(6, 5) and C(1, 4) are the vertices of ΔABC.\nSolution.\n\ni. The median from A meets BC in D. Find the coordinates of the point D.\n\ni. Median AD of the triangle will divide the side BC in two equal parts. So D is the midpoint of side BC: D = ((6 + 1)/2, (5 + 4)/2) = (7/2, 9/2).\n\nii. Find the coordinates of point P on AD such that AP : PD = 2 : 1.\nP = ((2 × 7/2 + 1 × 4)/3, (2 × 9/2 + 1 × 2)/3) = (11/3, 11/3)\n\niii. Find the coordinates of the points Q and R on medians BE and CF respectively such that BQ : QE = 2 : 1 and CR : RF = 2 : 1.\n\nMedian BE of the triangle will divide the side AC in two equal parts. So E is the midpoint of side AC: E = (5/2, 3), which gives Q = ((2 × 5/2 + 6)/3, (2 × 3 + 5)/3) = (11/3, 11/3). Similarly F, the midpoint of AB, is (5, 7/2), which gives R = ((2 × 5 + 1)/3, (2 × 7/2 + 4)/3) = (11/3, 11/3).\n\niv. What do you observe?\nNow we may observe that the coordinates of points P, Q and R are the same. So, all these represent the same point on the plane, i.e. the centroid of the triangle.\n\nQuestion. Find the lengths of the medians of a ΔABC whose vertices are A(0, -1), B(2, 1) and C(0, 3).\nSolution. Let D, E, F be the midpoints of the sides BC, CA and AB respectively in ΔABC. Then D = (1, 2), E = (0, 1) and F = (1, 0), so\nAD = √(1 + 9) = √10, BE = √(4 + 0) = 2 and CF = √(1 + 9) = √10.
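\n\nAs a quick aside before the next set of questions: the formulas in the notes above translate directly into code. The following is an illustrative C++ sketch (not part of the assignment, and all names are ours) of the distance, section, area and centroid formulas.\n\n#include <cmath>\n#include <iostream>\n\nstruct Point { double x, y; };\n\n// Distance formula: PQ = sqrt((x2 - x1)^2 + (y2 - y1)^2).\ndouble dist(Point p, Point q)\n{\nreturn std::hypot(q.x - p.x, q.y - p.y);\n}\n\n// Section formula: the point dividing AB internally in the ratio m1 : m2.\nPoint section(Point a, Point b, double m1, double m2)\n{\nreturn {(m1 * b.x + m2 * a.x) / (m1 + m2), (m1 * b.y + m2 * a.y) / (m1 + m2)};\n}\n\n// Area of a triangle: (1/2)|x1(y2 - y3) + x2(y3 - y1) + x3(y1 - y2)|.\n// Three points are collinear exactly when this area is zero.\ndouble triangle_area(Point a, Point b, Point c)\n{\nreturn 0.5 * std::fabs(a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y));\n}\n\n// Centroid: ((x1 + x2 + x3)/3, (y1 + y2 + y3)/3).\nPoint centroid(Point a, Point b, Point c)\n{\nreturn {(a.x + b.x + c.x) / 3.0, (a.y + b.y + c.y) / 3.0};\n}\n\nint main()\n{\n// Check one MCQ from above: the distance between (3, 4) and (6, -3).\nstd::cout << dist({3, 4}, {6, -3}) << std::endl; // sqrt(58), about 7.6158\n// The midpoint is the m1 : m2 = 1 : 1 special case of the section formula.\nPoint const mid = section({1, -2}, {-3, 4}, 1, 1);\nstd::cout << mid.x << std::endl; // -1\nstd::cout << mid.y << std::endl; // 1\nreturn 0;\n}\n\nEvery solved question that follows is just these few formulas applied with different numbers.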
\n\nQuestion. The area of the triangle formed by the points (p, 2 − 2p), (1 − p, 2p) and (−4 − p, 6 − 2p) is 70 sq. units. How many integral values of p are possible?\nSolution. Setting the area equal to 70:\n½|p(2p − 6 + 2p) + (1 − p)(6 − 2p − 2 + 2p) + (−4 − p)(2 − 2p − 2p)| = 70\n⇒ ½|8p² + 4p − 4| = 70 ⇒ |2p² + p − 1| = 35\n⇒ 2p² + p − 36 = 0 (the other case has no real roots) ⇒ (2p + 9)(p − 4) = 0 ⇒ p = 4 or p = −9/2\nSo only one integral value of p is possible, namely p = 4.\n\nQuestion. Find the coordinates of the centroid of a triangle whose vertices are (0, 6), (8, 12) and (8, 0).\nSolution. Coordinates of the centroid of a triangle whose vertices are (x1, y1), (x2, y2), (x3, y3) are ((x1 + x2 + x3)/3, (y1 + y2 + y3)/3) = ((0 + 8 + 8)/3, (6 + 12 + 0)/3) = (16/3, 6).\n\nQuestion. Point A is on the x-axis, point B is on the y-axis and the point P lies on line segment AB, such that P(4, –5) and AP : PB = 5 : 3. Find the coordinates of points A and B.\nSolution. Let the coordinates of A be (x, 0) and the coordinates of B be (0, y).\nBy the section formula, P = ((5(0) + 3x)/8, (5y + 3(0))/8) = (3x/8, 5y/8) = (4, -5)\n⇒ x = 32/3 and y = -8, so A(32/3, 0) and B(0, -8).\n\nQuestion. If the points A(x, 2), B(-3, –4), C(7, –5) are collinear, then find the value of x.\nSolution. Since the points are collinear, the area of the triangle = 0\n⇒ x(-4 + 5) + (-3)(-5 − 2) + 7(2 + 4) = 0 ⇒ x + 21 + 42 = 0 ⇒ x = -63.\n\nPractice Exercise:\n\nQuestion. Prove that (4, -1), (6, 0), (7, 2) and (5, 1) are the vertices of a rhombus. Is it a square?\nSolution. Proof\n\nQuestion. Find the coordinates of a point A, where AB is a diameter of a circle whose centre is (2, -3) and B is (1, 4).\nSolution. (3, -10)\n\nQuestion. Find the centroid of the triangle whose vertices are (3, -7), (-8, 6) and (5, 10).\nSolution. (0, 3)\n\nQuestion. If A(-2, 4), B(0, 0), C(4, 2) are the vertices of a ΔABC, then find the length of the median through the vertex A.\nSolution. 5 units\n\nQuestion. Find the area of the triangle formed by joining the mid points of the sides of the triangle whose vertices are (0, -1), (2, 1) and (0, 3). Find the ratio of this area to the area of the given triangle.\nSolution. 1 : 4\n\nQuestion. Determine the ratio in which the point P(a, -2) divides the line joining the points A(-4, 3) and B(2, -4). Also find the value of a.\nSolution. a = 2/7\n\nQuestion. If the point C(-1, 2) divides internally the line segment joining A(2, 5) and B in the ratio 3 : 4, find the co-ordinates of B.\nSolution. B(-5, -2)\n\nQuestion. Show that the points (1, 1), (4, 4), (4, 8) and (1, 5) are the vertices of a parallelogram.\nSolution. Proof\n\nQuestion. Find the value of p for which the points (-1, 3), (2, p) and (5, -1) are collinear.\nSolution. p = 1\n\nQuestion. If the points (-1, 3), (1, -1) and (5, 1) are the vertices of a triangle, find the length of the median through the first vertex.\nSolution. 5\n\nQuestion. Find the centre of a circle passing through the points (6, -6), (3, -7) and (3, 3).\nSolution. (3, -2)\n\nQuestion. If the distance between the points (3, 0) and (0, y) is 5 units and y is positive, what is the value of y?\nSolution. 4\n\nQuestion. If the points (x, y), (-5, -2) and (3, -5) are collinear, then prove that 3x + 8y + 31 = 0.\nSolution. Proof\n\nQuestion. A line intersects the y-axis and x-axis at the points P and Q respectively. If (2, -5) is the midpoint of PQ, then find the coordinates of P and Q respectively.\nSolution. (0, -10) and (4, 0)\n\nQuestion. Show that the three points (a, a), (-a, -a) and (-a√3, a√3) are the vertices of an equilateral triangle.\nSolution. Proof\n\nQuestion. Find the value of k, if the point P(2, 4) is equidistant from the points (5, k) and (k, 7).\nSolution. k = 3\n\nQuestion. If the point A(0, 2) is equidistant from the points B(3, p) and C(p, 5), find p. Also find the length of AB.\nSolution. p = 1, AB = √10\n\nQuestion. Find the distance between the points A and B in the following: A(a, 0), B(0, a)\nSolution. A(a, 0), B(0, a), so AB = √(a² + a²) = a√2 units.\n\nQuestion. Find the perpendicular distance of A(5, 12) from the y-axis.\nSolution. The point on the y-axis is (0, 12).\nDistance between (5, 12) and (0, 12) = 5 units.\n\nQuestion. Find the distance of the point (-4, –7) from the y-axis.\nSolution. 
Points are (-4, -7) and (0, -7), so the distance = √((0 + 4)² + (-7 + 7)²) = 4 units.\n\nQuestion. Find the distance between the points: A(-6, -4) and B(9, -12)\nSolution. The given points are A(-6, -4) and B(9, -12).\nThen (x1 = -6, y1 = -4) and (x2 = 9, y2 = -12), so AB = √((9 + 6)² + (-12 + 4)²) = √(225 + 64) = √289 = 17 units.\n\nQuestion. Show that the four points (0, -1), (6, 7), (-2, 3) and (8, 3) are the vertices of a rectangle. Also, find its area.\nSolution. Let A(0, -1), B(6, 7), C(-2, 3) and D(8, 3) be the given points. Then, taking the vertices in the order A, C, B, D:\nAC = √(4 + 16) = 2√5 and BD = √(4 + 16) = 2√5, so AC = BD\nAD = √(64 + 16) = 4√5 and BC = √(64 + 16) = 4√5, so AD = BC\nThe diagonals AB = √(36 + 64) = 10 and CD = √(100 + 0) = 10 are also equal.\nHence ACBD is a rectangle, and its area = AC × AD = 2√5 × 4√5 = 40 sq. units.\n\nQuestion. Find the co-ordinates of the points of trisection of the line segment joining the points A(1, -2) and B(-3, 4).\nSolution. The points of trisection divide AB in the ratios 1 : 2 and 2 : 1, giving ((1(-3) + 2(1))/3, (1(4) + 2(-2))/3) = (-1/3, 0) and ((2(-3) + 1(1))/3, (2(4) + 1(-2))/3) = (-5/3, 2).\n\nQuestion. Find the distance between the points A and B in the following: A(1, -3), B(4, 1)\nSolution. A(1, -3), B(4, 1), so AB = √((4 − 1)² + (1 + 3)²) = √(9 + 16) = 5 units.\n\nQuestion. If the points A(a, -11), B(5, b), C(2, 15) and D(1, 1) are the vertices of a parallelogram ABCD, find the values of a and b.\n\nSolution. Let A(a, -11), B(5, b), C(2, 15) and D(1, 1) be the given points.\nWe know that the diagonals of a parallelogram bisect each other.\nTherefore, coordinates of the mid-point of AC = coordinates of the mid-point of BD\n⇒ ((a + 2)/2, 2) = (3, (b + 1)/2) ⇒ a = 4 and b = 3\nHence the values of a and b are 4 and 3 respectively.\n\nQuestion. Find the coordinates of the point where the line x – y = 5 cuts the Y-axis.\nSolution. x – y = 5 is the given line.\nPut x = 0 in the equation of the line x − y = 5\n⇒ (0) – y = 5\n⇒ y = -5\nTherefore, the line x – y = 5 cuts the Y-axis at the point (0, -5).\n\nQuestion. If the point A(2, –4) is equidistant from P(3, 8) and Q(–10, y) then find the values of y. Also find the distance PQ.\nSolution. According to the question, we are given that\nPA = QA\n⇒ PA² = QA²\n⇒ (3 – 2)² + (8 + 4)² = (–10 – 2)² + (y + 4)²\n⇒ 1² + 12² = (–12)² + y² + 16 + 8y\n⇒ y² + 8y + 16 – 1 = 0\n⇒ y² + 8y + 15 = 0\n⇒ (y + 5)(y + 3) = 0\n⇒ y = –5 or y = –3\nSo, the co-ordinates are P(3, 8), Q1(–10, –3), Q2(–10, –5).\nFor y = –3, PQ = √((–10 – 3)² + (–3 – 8)²) = √(169 + 121) = √290 units; for y = –5, PQ = √(169 + 169) = 13√2 units.\n\nQuestion. Find the value of 'k' if the points (7, –2), (5, 1), (3, k) are collinear.\nSolution. For the points (7, –2), (5, 1), (3, k) to be collinear,\n7(1 – k) + 5(k + 2) + 3(–2 – 1) = 0 ⇒ 7 – 7k + 5k + 10 – 9 = 0 ⇒ 8 – 2k = 0 ⇒ k = 4.\n\nQuestion. Find the perimeter of a triangle with vertices (0, 4), (0, 0) and (3, 0).\nSolution. Here, A → (0, 4), B → (0, 0), C → (3, 0), so AB = 4, BC = 3 and CA = √(9 + 16) = 5, giving a perimeter of 4 + 3 + 5 = 12 units.\n\nQuestion. Find the area of a quadrilateral PQRS whose vertices are P(-5, 7), Q(-4, -5), R(-1, -6) and S(4, 5).\nSolution. Area PQRS = ar(ΔPQS) + ar(ΔQRS)\nar(ΔPQS) = ½|-5(-5 – 5) + (-4)(5 – 7) + 4(7 + 5)| = ½|50 + 8 + 48| = 53 sq. units\nar(ΔQRS) = ½|-4(-6 – 5) + (-1)(5 + 5) + 4(-5 + 6)| = ½|44 – 10 + 4| = 19 sq. units\nHence, area PQRS = 53 + 19 = 72 sq. units\n\nQuestion. The point R divides the line segment AB, where A(-4, 0) and B(0, 6), such that AR = 3/4 AB. Find the coordinates of R.\nSolution. AR = 3/4 AB means AR : RB = 3 : 1, so R = ((3(0) + 1(-4))/4, (3(6) + 1(0))/4) = (-1, 9/2).\n\nQuestion. Prove that the lines joining the middle points of the opposite sides of a quadrilateral and the join of the middle points of its diagonals meet in a point and bisect one another.\nSolution. Let OBCD be the quadrilateral and P, Q, R, S be the mid-points of OB, CD, OD and BC.\n\nLet the coordinates of O, B, C, D be (0, 0), (x, 0), (x, y) and (0, y).\nCoordinates of P are (x/2, 0)\nCoordinates of Q are (x/2, y)\nCoordinates of R are (0, y/2)\nCoordinates of S are (x, y/2)\nCoordinates of the mid-point of PQ are (x/2, y/2), and the coordinates of the mid-point of RS are also (x/2, y/2).\nSince the coordinates of the mid-point of PQ = coordinates of the mid-point of RS, PQ and RS bisect each other. The mid-points of the diagonals OC and BD are likewise (x/2, y/2), so all three joins pass through this common point and bisect one another.\n\nQuestion. In the given triangle ABC as shown in the diagram, D, E and F are the mid-points of AB, BC and AC respectively. Find the area of ΔDEF.\nSolution.\n\nQuestion. Find the value of m for which the points with coordinates (3, 5), (m, 6) and (1/2, 15/2) are collinear.\nSolution. 
If the points are collinear, then one point divides the other two in the same ratio.\nLet the point (m, 6) divide the join of (3, 5) and (1/2, 15/2) in the ratio k : 1.\nThen 6 = (k(15/2) + 5)/(k + 1) ⇒ 6k + 6 = 15k/2 + 5 ⇒ 12k + 12 = 15k + 10 ⇒ k = 2/3,\nand m = (k(1/2) + 3)/(k + 1) = (1/3 + 3)/(5/3) = 2.\n\nQuestion. If origin is the mid-point of the line segment joined by the points (2, 3) and (x, y) then find the value of (x, y).\nSolution. (2 + x)/2 = 0 and (3 + y)/2 = 0, so (x, y) = (-2, -3).\n\nQuestion. Find the centroid of the triangle whose vertices are given below: (3, -5), (-7, 4), (10, -2).\nSolution. The given vertices of the triangle are (3, -5), (-7, 4) and (10, -2).\nLet (x, y) be the coordinates of the centroid. Then (x, y) = ((3 – 7 + 10)/3, (-5 + 4 – 2)/3) = (2, -1).\n\nQuestion. Find the number of points on the x-axis which are at a distance of 2 units from (2, 4).\nSolution. Distance of the point (2, 4) from the x-axis is 4 units. There is no point on the x-axis which is at a distance of 2 units from the given point.\n\nQuestion. Name the type of quadrilateral formed, if any, by the following points, and give reasons for your answer:\n(-1, -2), (1, 0), (-1, 2), (-3, 0)\nSolution. Let A → (-1, -2), B → (1, 0), C → (-1, 2) and D → (-3, 0).\nAB = BC = CD = DA = √(4 + 4) = 2√2, and the diagonals AC = 4 and BD = 4.\nSince AB = BC = CD = DA (i.e., all the four sides of the quadrilateral ABCD are equal) and AC = BD (i.e. the diagonals of the quadrilateral ABCD are equal), ABCD is a square.\n\nQuestion. Find the area of the triangle with vertices (0, 0), (6, 0) and (0, 5).\nSolution. We have to find the area of the triangle with vertices (0, 0), (6, 0) and (0, 5).\nArea of triangle = ½ × base × height = ½ × 6 × 5 = 15 sq. units.\n\nQuestion. Prove that the coordinates of the centroid of a triangle ABC, with vertices A(x1, y1), B(x2, y2) and C(x3, y3), are given by ((x1 + x2 + x3)/3, (y1 + y2 + y3)/3).\nSolution. Let the coordinates of the vertices of ΔABC be A(x1, y1), B(x2, y2) and C(x3, y3) respectively. Let D be the midpoint of BC.\nUsing the section formula, the coordinates of D will be ((x2 + x3)/2, (y2 + y3)/2).\nNow since the centroid G divides the line joining A and D in the ratio 2 : 1, again using the section formula, the coordinates of G will be\nG = ((2 · (x2 + x3)/2 + 1 · x1)/3, (2 · (y2 + y3)/2 + 1 · y1)/3) = ((x1 + x2 + x3)/3, (y1 + y2 + y3)/3).\n\nQuestion. The centre of a circle is (2a, a – 7). Find the values of a, if the circle passes through the point (11, -9) and has diameter 10√2 units.\nSolution. Diameter of the circle = 10√2 units\n⇒ Radius of the circle = 5√2 units\nLet the centre of the circle be O(2a, a – 7), which passes through the point P(11, -9).\n⇒ OP is the radius of the circle.\n⇒ OP = 5√2 units\n⇒ OP² = (5√2)²\n⇒ (11 – 2a)² + (-9 – a + 7)² = 50\n⇒ 121 + 4a² – 44a + (-2 – a)² = 50\n⇒ 121 + 4a² – 44a + 4 + a² + 4a = 50\n⇒ 5a² – 40a + 125 = 50\n⇒ 5a² – 40a + 75 = 0\n⇒ a² – 8a + 15 = 0\n⇒ a² – 5a – 3a + 15 = 0\n⇒ a(a – 5) – 3(a – 5) = 0\n⇒ (a – 5)(a – 3) = 0\n⇒ a = 5 or a = 3\n\nQuestion. Find the area of the rhombus if its vertices are (3, 0), (4, 5), (-1, 4) and (-2, -1) taken in order.\nSolution. Let A(3, 0), B(4, 5), C(-1, 4) and D(-2, -1).\nDiagonal AC = √(16 + 16) = 4√2 and diagonal BD = √(36 + 36) = 6√2.\nArea of the rhombus = ½ × AC × BD = ½ × 4√2 × 6√2 = 24 sq. units.\n\nQuestion. If the points P(-3, 9), Q(a, b) and R(4, -5) are collinear and a + b = 1, find the values of a and b.\nSolution. It is given that the points P(-3, 9), Q(a, b) and R(4, -5) are collinear, so\n-3(b + 5) + a(-5 – 9) + 4(9 – b) = 0\n⇒ -3b – 15 – 14a + 36 – 4b = 0\n⇒ 14a + 7b = 21, i.e. 2a + b = 3\nWith a + b = 1, subtracting gives a = 2 and b = -1.\n\nQuestion. Find the area of a triangle ABC with A(1, –4) and the mid-points of the sides through A being (2, -1) and (0, -1).\nSolution. If the mid-point of AB is (2, -1), then B = (3, 2); if the mid-point of AC is (0, -1), then C = (-1, 2).\nArea = ½|1(2 – 2) + 3(2 + 4) + (-1)(-4 – 2)| = ½|0 + 18 + 6| = 12 sq. units.\n\nQuestion. If the points (0, 0), (1, 2) and (x, y) are collinear, then find x.\nSolution. The points are collinear, so the area of the triangle = 0\n⇒ 0(2 – y) + 1(y – 0) + x(0 – 2) = 0 ⇒ y – 2x = 0 ⇒ x = y/2.\n\nQuestion. Find the lengths of the medians of a ΔABC having vertices at A(0, -1), B(2, 1) and C(0, 3).\nSolution. Let A(0, -1), B(2, 1) and C(0, 3) be the given points, and let D(1, 2), E(0, 1) and F(1, 0) be the mid-points of BC, CA and AB. Then the medians are AD = √(1 + 9) = √10, BE = 2 and CF = √10.\n\nQuestion. Find the distance of the point (α, β) from the y-axis.\nSolution. 
Question. Find the distance of the point (α, β) from the y-axis.
Solution. The distance of the point (α, β) from the y-axis is the absolute value of its x-coordinate.
∴ Distance = |α|

Question. If the centre of a circle is (2a, a - 7), then find the values of a if the circle passes through the point (11, -9) and has diameter 10√2 units.
Solution. Let C(2a, a - 7) be the centre of the circle, which passes through the point P(11, -9).

Since the diameter is 10√2 units, the radius CP = 5√2 units.
⇒ CP² = (5√2)² = 50
⇒ (2a - 11)² + (a - 7 + 9)² = 50
⇒ 4a² - 44a + 121 + (a + 2)² = 50
⇒ 4a² - 44a + 121 + a² + 4a + 4 = 50
⇒ 5a² - 40a + 125 = 50
⇒ a² - 8a + 25 = 10
⇒ a² - 8a + 15 = 0
⇒ a² - 5a - 3a + 15 = 0
⇒ a(a - 5) - 3(a - 5) = 0
⇒ (a - 5)(a - 3) = 0
⇒ a = 5 or a = 3
Hence, the required values of a are 5 and 3.

Question. If the centre and radius of a circle are (3, 4) and 7 units respectively, then what is the position of the point A(5, 8) with respect to the circle?
Solution. Distance of the point A from the centre C(3, 4):
CA = √((5 - 3)² + (8 - 4)²) = √(4 + 16) = √20 = 2√5 ≈ 4.47 units
Since CA < 7 (the radius), the point A(5, 8) lies inside the circle.

Question. Find the coordinates of the centre of the circle passing through the points (0, 0), (-2, 1) and (-3, 2). Also, find its radius.
Solution. Let P(x, y) be the centre of the circle passing through the points O(0, 0), A(-2, 1) and B(-3, 2). Then PO = PA = PB.
PO² = PA² ⇒ x² + y² = (x + 2)² + (y - 1)² ⇒ 4x - 2y + 5 = 0 …(i)
PO² = PB² ⇒ x² + y² = (x + 3)² + (y - 2)² ⇒ 6x - 4y + 13 = 0 …(ii)
Multiplying (i) by 2 and subtracting (ii): 2x - 3 = 0 ⇒ x = 3/2
From (i): 6 - 2y + 5 = 0 ⇒ y = 11/2
So the centre is (3/2, 11/2), and the radius = PO = √((3/2)² + (11/2)²) = √130/2 units.

Question. If the coordinates of the mid-points of the sides of a triangle are (1, 1), (2, -3) and (3, 4), find the vertices of the triangle.
Solution. Let A(x1, y1), B(x2, y2) and C(x3, y3) be the vertices of the triangle.
Let D(1, 1), E(2, -3) and F(3, 4) be the mid-points of sides BC, CA and AB respectively.
Since D is the mid-point of BC,
⇒ x2 + x3 = 2 and y2 + y3 = 2 …(i)
Similarly, E and F are the mid-points of CA and AB respectively.
⇒ x1 + x3 = 4 and y1 + y3 = -6 …(ii)
⇒ x1 + x2 = 6 and y1 + y2 = 8 …(iii)
From (i), (ii) and (iii) we get
x2 + x3 + x1 + x3 + x1 + x2 = 2 + 4 + 6
and y2 + y3 + y1 + y3 + y1 + y2 = 2 + (-6) + 8
⇒ 2(x1 + x2 + x3) = 12 and 2(y1 + y2 + y3) = 4
⇒ x1 + x2 + x3 = 6 and y1 + y2 + y3 = 2 …(iv)
From (i) and (iv) we get
x1 + 2 = 6 and y1 + 2 = 2
⇒ x1 = 4 and y1 = 0
So the coordinates of A are (4, 0).
From (ii) and (iv) we get
x2 + 4 = 6 and y2 + (-6) = 2
⇒ x2 = 2 and y2 = 8
So the coordinates of B are (2, 8).
From (iii) and (iv) we get
6 + x3 = 6 and 8 + y3 = 2
⇒ x3 = 0 and y3 = -6
So the coordinates of C are (0, -6).
Hence, the vertices of triangle ABC are A(4, 0), B(2, 8) and C(0, -6).

Question. Find the distance between the points P(-4, 7) and Q(2, -5).
Solution. The given points are P(-4, 7) and Q(2, -5).
Then x1 = -4, y1 = 7 and x2 = 2, y2 = -5.
PQ = √((2 + 4)² + (-5 - 7)²) = √(36 + 144) = √180 = 6√5 units

Question. Find the point on the X-axis which is equidistant from the points (-1, 0) and (5, 0).
Solution. Let A(x, 0) be any point on the X-axis which is equidistant from the points (-1, 0) and (5, 0).
⇒ (x + 1)² = (x - 5)²
⇒ x² + 2x + 1 = x² - 10x + 25
⇒ 2x + 1 = -10x + 25
⇒ 2x + 10x = 25 - 1
⇒ 12x = 24 ⇒ x = 2
Hence, the required point is (2, 0).

Question. Find the value(s) of y for which the distance between the points P(2, -3) and Q(10, y) is 10 units.
Solution. Let P(2, -3) and Q(10, y) be the two given points such that PQ = 10.
∴ PQ = √((10 - 2)² + (y + 3)²) = 10 (given)
⇒ PQ² = 64 + y² + 6y + 9 = 100
⇒ y² + 6y - 27 = 0 ⇒ y² + 9y - 3y - 27 = 0
⇒ y(y + 9) - 3(y + 9) = 0 ⇒ (y + 9)(y - 3) = 0
⇒ y = -9 or y = 3 Ans.

Question. Find the ratio in which the join of (-3, 10) and (6, -8) is divided by (-1, 6).
Solution. Let the given points be A(-3, 10), B(6, -8) and C(-1, 6), and let C divide AB in the ratio k : 1.
Then -1 = (6k - 3)/(k + 1)
⇒ 6k - 3 = -k - 1 ⇒ 7k = 2 ⇒ k = 2/7
Hence C divides AB in the ratio 2 : 7.
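The ratio question above is an application of the section formula; for reference (a standard result, stated here with the same A, B, C as above), a point C dividing the segment from A(x1, y1) to B(x2, y2) internally in the ratio k : 1 satisfies:

```latex
C = \left( \frac{k\,x_2 + x_1}{k + 1},\ \frac{k\,y_2 + y_1}{k + 1} \right),
\qquad\text{so here}\qquad
\frac{6k - 3}{k + 1} = -1 \;\Rightarrow\; 7k = 2 \;\Rightarrow\; k = \frac{2}{7}.
```

Substituting k = 2/7 into the y-coordinate gives (-8k + 10)/(k + 1) = 6, which matches C(-1, 6).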
Question. Prove that the points (2, -2), (-3, 8) and (-1, 4) are collinear.
Solution. Let Δ be the area of the triangle formed by the given points A(2, -2), B(-3, 8) and C(-1, 4).
Δ = 1/2 |2(8 - 4) + (-3)(4 + 2) + (-1)(-2 - 8)| = 1/2 |8 - 18 + 10| = 0
Since Δ = 0, the three points are collinear.

Question. Show that the points (1, -1), (5, 2) and (9, 5) are collinear.
Solution. Let A(1, -1), B(5, 2) and C(9, 5) be the given points.
ar(ΔABC) = 1/2 |1(2 - 5) + 5(5 + 1) + 9(-1 - 2)| = 1/2 |-3 + 30 - 27| = 0
Hence the points are collinear.

Question. Find the coordinates of the circumcentre of the triangle whose vertices are (8, 6), (8, -2) and (2, -2). Also, find its circum-radius.
Solution. We know the circumcentre of a triangle is equidistant from the vertices of the triangle.
Let A(8, 6), B(8, -2) and C(2, -2) be the vertices of the given triangle and let P(x, y) be its circumcentre. Then,
PA = PB = PC
⇒ PA² = PB² = PC²
Now, PA² = PB²
⇒ (x - 8)² + (y - 6)² = (x - 8)² + (y + 2)²
⇒ x² + y² - 16x - 12y + 100 = x² + y² - 16x + 4y + 68
⇒ 16y = 32 ⇒ y = 2
and PB² = PC²
⇒ (x - 8)² + (y + 2)² = (x - 2)² + (y + 2)²
⇒ x² + y² - 16x + 4y + 68 = x² + y² - 4x + 4y + 8
⇒ 12x = 60 ⇒ x = 5
So, the coordinates of the circumcentre P are (5, 2).
Also, circum-radius = PA = PB = PC
= √((5 - 8)² + (2 - 6)²) = √((-3)² + (-4)²)
= √(9 + 16) = √25 = 5 units Ans.

Question. Find the distance between the points:
(i) A(0, 0), B(-5, 12)
(ii) A(5, -8), B(-7, -3)
(iii) P(cos α, -sin α), Q(-cos α, sin α)
(iv) P(a + b, a - b), Q(a - b, a + b)
Solution.
(i) AB = √((-5 - 0)² + (12 - 0)²) = √(25 + 144) = √169 = 13 units
(ii) AB = √((-7 - 5)² + (-3 + 8)²) = √(144 + 25) = √169 = 13 units
(iii) PQ = √((-cos α - cos α)² + (sin α + sin α)²) = √(4cos²α + 4sin²α) = 2 units
(iv) PQ = √((a - b - a - b)² + (a + b - a + b)²) = √(4b² + 4b²) = 2√2 |b| units

Question. Find the area of the quadrilateral whose vertices, taken in order, are (-4, -2), (-3, -5), (3, -2), (2, 3).
Solution. Let A(-4, -2), B(-3, -5), C(3, -2) and D(2, 3) be the given points.
Now, area of quad. ABCD = ar(ΔABD) + ar(ΔBCD)
For ar(ΔABD): A(-4, -2), B(-3, -5), D(2, 3)
ar(ΔABD) = 1/2 |(-4)(-5 - 3) + (-3)(3 + 2) + 2(-2 + 5)| = 1/2 |32 - 15 + 6| = 23/2 sq. units
For ar(ΔBCD): B(-3, -5), C(3, -2), D(2, 3)
ar(ΔBCD) = 1/2 |(-3)(-2 - 3) + 3(3 + 5) + 2(-5 + 2)| = 1/2 |15 + 24 - 6| = 33/2 sq. units
Area of quad. ABCD = 23/2 + 33/2 = 28 sq. units

Question. Check whether (5, -2), (6, 4) and (7, -2) are the vertices of an isosceles triangle.
Solution. Let A(5, -2), B(6, 4) and C(7, -2) be the vertices of a ΔABC.
AB = √((6 - 5)² + (4 + 2)²) = √(1² + 6²) = √(1 + 36) = √37
BC = √((7 - 6)² + (-2 - 4)²) = √(1 + 36) = √37
CA = √((5 - 7)² + (-2 + 2)²) = √4 = 2
Since AB = BC, the given points are the vertices of an isosceles triangle.

Question. If A(5, -1), B(-3, -2) and C(-1, 8) are the vertices of ΔABC, find the length of the median through A and the coordinates of the centroid.
Solution. Let AD be the median through the vertex A of ΔABC. Then D is the mid-point of BC, so the coordinates of D are ((-3 - 1)/2, (-2 + 8)/2), i.e. (-2, 3).
AD = √((5 + 2)² + (-1 - 3)²) = √(49 + 16) = √65 units
Centroid G = ((5 - 3 - 1)/3, (-1 - 2 + 8)/3) = (1/3, 5/3)

Question. Show that (1, -2), (3, 0), (1, 2) and (-1, 0) are the vertices of a square.
Solution. Let A(1, -2), B(3, 0), C(1, 2) and D(-1, 0) be the given points.
AB = √((3 - 1)² + (0 + 2)²) = √8 = 2√2
BC = √((1 - 3)² + (2 - 0)²) = √8 = 2√2
CD = √((-1 - 1)² + (0 - 2)²) = √8 = 2√2
DA = √((1 + 1)² + (-2 - 0)²) = √8 = 2√2
Diagonals: AC = √((1 - 1)² + (2 + 2)²) = 4 and BD = √((-1 - 3)² + (0 - 0)²) = 4
Since all four sides are equal and the diagonals are equal, ABCD is a square.

Question. Find the area of the triangle whose vertices are (-5, -1), (3, -5), (5, 2).
Solution. Let A(-5, -1), B(3, -5), C(5, 2) be the given points.
ar(ΔABC) = 1/2 |(-5)(-5 - 2) + 3(2 + 1) + 5(-1 + 5)| = 1/2 |35 + 9 + 20| = 32 sq. units

Question. Find the co-ordinates of the points of trisection of the line joining the points (4, -1) and (-2, -3).
Solution. Let C and D be the points of trisection of AB, where A(4, -1) and B(-2, -3), so that AC = CD = DB.
C divides AB in the ratio 1 : 2, so C = ((1 × (-2) + 2 × 4)/3, (1 × (-3) + 2 × (-1))/3) = (2, -5/3)
D divides AB in the ratio 2 : 1, so D = ((2 × (-2) + 1 × 4)/3, (2 × (-3) + 1 × (-1))/3) = (0, -7/3)
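A quick consistency check on the trisection answer above: since AC = CD = DB, the mid-point of CD must coincide with the mid-point of AB. Indeed:

```latex
\operatorname{mid}(C, D)
= \left( \frac{2 + 0}{2},\ \frac{-\tfrac{5}{3} - \tfrac{7}{3}}{2} \right)
= (1, -2)
= \left( \frac{4 + (-2)}{2},\ \frac{-1 + (-3)}{2} \right)
= \operatorname{mid}(A, B)
```

Checks of this kind are a useful habit for section-formula problems.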
## Assignments for Class 10 Mathematics Coordinate Geometry as per CBSE NCERT pattern

Some advantages of Free Assignments for Class 10 Mathematics Coordinate Geometry

1. Solving Assignments for Mathematics Coordinate Geometry Class 10 helps to further enhance understanding of the topics given in your textbook, which will help you to get better marks
2. By solving the assignments given by your Mathematics Coordinate Geometry teacher for Class 10, you will be able to keep in touch with the topics, thus reducing dependence on last-minute studies
3. You will be able to understand the type of questions which are expected in your Mathematics Coordinate Geometry class test
4. You will be able to revise all topics given in the ebook for Class 10 Mathematics Coordinate Geometry as all questions have been provided in the question banks
5. NCERT Class 10 Mathematics Coordinate Geometry Workbooks will surely help you to make your concepts stronger and better than anyone else in your class
6. Parents will be able to take a printout of the assignments and give them to their child easily

All free printable practice assignments are in PDF single-click download format and have been prepared by Class 10 Mathematics Coordinate Geometry teachers after a full study of all the topics given in each chapter, so that students are able to take complete benefit from the worksheets. The chapter-wise question banks and revision assignments can be accessed free and anywhere. Go ahead and click on the links above to download free CBSE Class 10 Mathematics Coordinate Geometry Assignments PDF.

Where can I download free Assignments for Class 10 Mathematics Coordinate Geometry?

You can download free assignments for Class 10 Mathematics Coordinate Geometry from https://www.assignmentsbag.com

I want free printable assignments with questions and answers for Mathematics Coordinate Geometry for Standard 10, where can I get them?

You can get free PDF downloadable assignments for Grade 10 Mathematics Coordinate Geometry from our website, which have been developed by teachers after doing extensive research on each topic.

Can I get assignments and question banks for other subjects in Class 10?

On our website we have provided assignments for all subjects in Grade 10; all topic-wise test sheets have been provided in a logical manner so that you can scroll through the topics and download the worksheet you want.

I want practice assignments for all topics in my Class 10 Mathematics Coordinate Geometry textbook, where can I get them?

You can easily get question banks, topic-wise notes and questions and other useful study material from https://www.assignmentsbag.com without any charge