https://cs.stackexchange.com/questions/81844/amortized-analysis-of-resizing-array-implementation-of-a-stack/81847
# Amortized analysis of resizing array implementation of a stack

Here is an excerpt from the book *Algorithms, 4th edition* by R. Sedgewick and K. Wayne:

> **Proposition E.** In the resizing array implementation of Stack (Algorithm 1.1), the average number of array accesses for any sequence of operations starting from an empty data structure is constant in the worst case.
>
> **Proof sketch:** For each push() that causes the array to grow (say from size N to size 2N), consider the N/2 - 1 push() operations that most recently caused the stack size to grow to k, for k from N/2 + 2 to N. Averaging the 4N array accesses to grow the array with N/2 array accesses (one for each push), we get an average cost of 9 array accesses per operation.

It is unclear to me why 4N array accesses are required to grow the array. From my understanding, the proof describes the case where the array grows from size N to 2N, and that should require only 2N array accesses (copying the first N elements, at 2 array accesses per element).

• I wouldn't fuss about constants. – Yuval Filmus Sep 28 '17 at 20:30

Q. Does `int[] a = new int[N]` count as N array accesses (to initialize entries to 0)?
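The accounting can be checked empirically. Below is a minimal sketch (not from the book) of a resizing-array stack that counts accesses under the assumption the question arrives at: each read or write of an array slot is one access, and allocating a zero-initialized array of size N counts as N writes. Under that model, growing from N to 2N costs 2N (initialization) + 2N (copy) = 4N, matching the proof sketch.

```python
class CountingStack:
    """Resizing-array stack that counts array accesses.

    Cost model (an assumption for illustration): one access per read or
    write of an array slot; allocating a new array of size `cap` counts
    as `cap` initialization writes.
    """
    def __init__(self):
        self.a = [None]
        self.n = 0
        self.accesses = 0

    def _resize(self, cap):
        new = [None] * cap
        self.accesses += cap          # initialization writes: cap = 2N
        for i in range(self.n):
            new[i] = self.a[i]        # one read + one write per element
            self.accesses += 2
        self.a = new

    def push(self, x):
        if self.n == len(self.a):
            self._resize(2 * self.n)  # costs 4N under this model
        self.a[self.n] = x            # one write
        self.accesses += 1
        self.n += 1

s = CountingStack()
for i in range(10_000):
    s.push(i)
print(s.accesses / s.n)   # average accesses per push stays bounded (< 9)
```

Without the initialization writes, the same run averages fewer accesses per push, which is exactly the 2N-versus-4N discrepancy the question asks about.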
http://www.statuedude.com/rnxto/rz7agj.php?page=the-naoh-must-be-standardized-to-accurately-determine-its-6fa3fb
A buret filled with the titrant (NaOH solution) is used to measure the volume of NaOH solution added to a known amount of acid in a flask. Unless the bottle is tightly sealed, the molar concentration of the OH⁻ solution will change on a daily basis, because solutions of sodium hydroxide react with the carbon dioxide absorbed from the air:

CO₂(g) + 2 NaOH(aq) → Na₂CO₃ + H₂O

This decreases the concentration of the OH⁻ ions in the solution. Solid NaOH also absorbs moisture from the air, which makes it difficult to accurately determine its mass on a balance: depending on the storage conditions, the mass percent of water can vary significantly, and a given mass of the solid contains less NaOH than expected. This is why the concentration of the solution is not known accurately and must be standardized.

To standardize means to determine the exact concentration of a NaOH solution by measuring accurately how many milliliters of it are required to exactly neutralize a known amount of acid. The NaOH solution is first prepared with an approximate concentration; a more accurate molar concentration of the NaOH (titrant) is then determined using dry KHC₈H₄O₄ (potassium hydrogen phthalate, KHP) as the primary standard, with the base slowly added to the acid. The NaOH solution, now a secondary standard, can then be used to determine the unknown concentration of HCl. Potassium hydrogen phthalate makes for a good primary standard because it is stable, high in purity, non-hygroscopic, highly soluble, non-toxic, high in molecular weight, cheap, and available. In general, a standard solution must be pure to a measurable degree and must react in a reasonable amount of time in a quantitative reaction.

Objective: this lab focuses on the detection of ions using titration as an analysis tool. You will standardize NaOH and HCl solutions so that you know their exact concentrations, and then prepare samples of common household items for analysis. After you have prepared 0.1 M NaOH, determine its exact concentration, or standardize it, using the acid-base titration method: titration against a primary standard, followed by a test of the accuracy of your determined concentration by comparison with a known standard. It is important to standardize your solution carefully, as it will be used in later experiments; you should be able to determine your NaOH concentration to ±0.5%.

Procedure: to standardize NaOH, start by pipetting 10.0 mL of 0.1 N hydrochloric acid (HCl) into a flask. Add approximately 50 mL of water (remember, not tap water) and three drops of methyl red indicator. Fill a 25 mL buret with the 0.1 N sodium hydroxide solution and record the initial volume.

Conclusion: from the above experiment it was evident that sodium hydroxide can be effectively standardized by using oxalic acid. At the equivalence point,

N₁ = N₂V₂ / V₁

where N₁ is the normality of NaOH, V₁ is the volume of NaOH, N₂ is the normality of the oxalic acid, and V₂ is the volume of the oxalic acid.
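The equivalence-point arithmetic above (N₁V₁ = N₂V₂) is simple enough to script. A minimal sketch; the function name and the sample volumes are illustrative, not from the lab:

```python
def standardized_normality(n_acid, v_acid_ml, v_naoh_ml):
    """Normality of NaOH from N1*V1 = N2*V2 at the equivalence point.

    n_acid:    normality of the standard acid (e.g. oxalic acid)
    v_acid_ml: volume of acid titrated, in mL
    v_naoh_ml: volume of NaOH delivered from the buret, in mL
    """
    return n_acid * v_acid_ml / v_naoh_ml

# Hypothetical run: 10.0 mL of 0.1000 N acid neutralized by 9.85 mL of NaOH
n_naoh = standardized_normality(0.1000, 10.0, 9.85)
print(f"{n_naoh:.4f} N")  # prints 0.1015 N
```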
https://quantumcomputing.stackexchange.com/questions/14776/can-the-following-bell-states-have-probability-amplitudes-other-than-1-2-and-sti
# Can the following Bell states have probability amplitudes other than 1/2 and still be entangled?

From my understanding, a qubit is entangled when the state of one qubit depends on the other, and vice versa. Can the following Bell states have probability amplitudes other than $1/\sqrt{2}$ and still be entangled?

$$|\Phi^\pm\rangle = \frac{1}{\sqrt{2}} (|0\rangle_A \otimes |0\rangle_B \pm |1\rangle_A \otimes |1\rangle_B)$$

$$|\Psi^{\pm}\rangle = \frac{1}{\sqrt{2}} (|0\rangle_A \otimes |1\rangle_B \pm |1\rangle_A \otimes |0\rangle_B)$$

For example, is it possible to have Bell pairs with probability amplitudes that are not $\frac{1}{\sqrt{2}}$, but rather something like this:
$$|\Psi\rangle = \frac{\sqrt{3}}{2} \left(|00\rangle + \frac{1}{2} |11\rangle\right)$$

---

Absolutely.

Given an arbitrary two-qubit state $|\psi\rangle$, it is NOT entangled if we can write $|\psi\rangle$ as
$$|\psi\rangle = |a\rangle \otimes |b\rangle \qquad \textrm{where} \ \ |a\rangle, |b\rangle \in \mathbb{C}^2.$$

Thus, there are many entangled states! Essentially, if you pick a random two-qubit pure state, it is most likely to be an entangled state. Now, not all entangled states are equal: some are more entangled than others. For example, the Bell states are maximally entangled states, but a state like $|\psi\rangle = \frac{\sqrt{3}}{2}|00\rangle + \frac{1}{2}|11\rangle$ is less entangled than the Bell states.

In fact, we can quantify this using the concept of "concurrence", which is directly related to the concept of "entanglement of formation" (here is the original paper on these concepts).

For the Bell state $|\psi_{Bell}\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}}$, the concurrence is $1$, which is also the value of the entanglement of formation.

For another state, say the one you are interested in, $|\phi\rangle = \frac{\sqrt{3}}{2}|00\rangle + \frac{1}{2}|11\rangle$, the concurrence works out to $2 \cdot \frac{\sqrt{3}}{2} \cdot \frac{1}{2} = \frac{\sqrt{3}}{2}$ (for a pure state $a|00\rangle + b|11\rangle$ the concurrence is $2|ab|$), which is less than $1$. Thus, one would say that this state is less entangled than the Bell state $|\psi_{Bell}\rangle$.

• Why are people so obsessed with concurrence? Not to mention that for pure bipartite states, there is no point using it. Just use the entanglement entropy. Note that saying "the concurrence is smaller" is not the real reason why to say it is less entangled. – Jan 27 at 22:17

---

They won't be Bell pairs, though (by definition), because Bell pairs are maximally entangled: if you take a partial trace over one of the subsystems, the resulting state must have maximum entropy, which is not possible with unequal amplitudes.

But they can be entangled. Assuming that you wanted a state like $|\Psi\rangle = \frac{\sqrt{3}}{2} |00\rangle + \frac{1}{2} |11\rangle$, it is indeed entangled. You can look up the Peres-Horodecki criterion to detect the presence of entanglement in smaller dimensions. The amplitudes do not have to be equal; the only thing that matters is that there is no way to write the state as a product of two states.

The state that you have mentioned at the end of your question, $|\Psi\rangle$, is not a valid quantum state as written: the squared amplitudes must sum to 1.

• There is no point in using the Peres-Horodecki criterion for pure states. – Jan 27 at 22:17
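Both quantities mentioned in the answers are easy to compute for a two-qubit pure state via the Schmidt (singular value) decomposition. A small numpy sketch (function names are mine, not from the thread); for a pure state the concurrence equals twice the product of the Schmidt coefficients, and the entanglement entropy is the Shannon entropy of their squares:

```python
import numpy as np

def schmidt_coefficients(psi):
    """Schmidt coefficients of a two-qubit pure state given as a 4-vector."""
    return np.linalg.svd(psi.reshape(2, 2), compute_uv=False)

def concurrence(psi):
    # For a pure two-qubit state, C = 2 * |det of the 2x2 amplitude matrix|
    # = 2 * product of the Schmidt coefficients.
    s = schmidt_coefficients(psi)
    return 2 * s[0] * s[1]

def entanglement_entropy(psi):
    # Von Neumann entropy of either reduced state, in bits.
    p = schmidt_coefficients(psi) ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
phi  = np.array([np.sqrt(3) / 2, 0, 0, 1 / 2])   # sqrt(3)/2 |00> + 1/2 |11>

print(concurrence(bell))          # 1.0 (maximally entangled)
print(concurrence(phi))           # sqrt(3)/2 ~ 0.866 < 1
print(entanglement_entropy(phi))  # ~ 0.811 bits < 1 bit
```

A product state such as `np.array([1, 0, 0, 0])` (that is, |00⟩) gives concurrence 0 and entropy 0, consistent with the separability condition in the answer.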
https://notebook.community/Alexoner/skynet/notebooks/softmax
# Softmax exercise

Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.

This exercise is analogous to the SVM exercise. You will:

- implement a fully-vectorized loss function for the Softmax classifier
- implement the fully-vectorized expression for its analytic gradient
- use a validation set to tune the learning rate and regularization strength
- optimize the loss function with SGD
- visualize the final learned weights

``````
In [ ]:

import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
``````
``````
In [ ]:

def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the linear classifier. These are the same steps as we used for the
    SVM, but condensed to a single function.
    """
    # Load the raw CIFAR-10 data.
    # (load_CIFAR10 is the course's data-loading helper; its import was
    # dropped from this export.)
    cifar10_dir = '../skynet/datasets/cifar-10-batches-py'
    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

    # Subsample the data
    mask = list(range(num_training, num_training + num_validation))
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = list(range(num_training))
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = list(range(num_test))
    X_test = X_test[mask]
    y_test = y_test[mask]
    mask = np.random.choice(num_training, num_dev, replace=False)
    X_dev = X_train[mask]
    y_dev = y_train[mask]

    # Preprocessing: reshape the image data into rows
    X_train = np.reshape(X_train, (X_train.shape[0], -1))
    X_val = np.reshape(X_val, (X_val.shape[0], -1))
    X_test = np.reshape(X_test, (X_test.shape[0], -1))
    X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))

    # Normalize the data: subtract the mean image
    mean_image = np.mean(X_train, axis=0)
    X_train -= mean_image
    X_val -= mean_image
    X_test -= mean_image
    X_dev -= mean_image

    # Add bias dimension and transform into columns
    X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
    X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
    X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
    X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])

    return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev

# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('dev data shape: ', X_dev.shape)
print('dev labels shape: ', y_dev.shape)
``````
``````
Train data shape:  (49000, 3073)
Train labels shape:  (49000,)
Validation data shape:  (1000, 3073)
Validation labels shape:  (1000,)
Test data shape:  (1000, 3073)
Test labels shape:  (1000,)
dev data shape:  (500, 3073)
dev labels shape:  (500,)
``````

## Softmax Classifier

Your code for this section will all be written inside linear/classifiers/softmax.py.
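For reference, the loss the file is expected to implement can be sketched in a few vectorized numpy lines. This is my sketch of the standard softmax loss and gradient, not the course's actual `softmax_loss_vectorized`; in particular, the regularization convention (`reg * sum(W*W)` with gradient `2 * reg * W`) is an assumption that may differ from the graded code:

```python
import numpy as np

def softmax_loss_vectorized_sketch(W, X, y, reg):
    """Softmax loss and gradient, fully vectorized.

    W: (D, C) weights; X: (N, D) data; y: (N,) integer labels; reg: L2 strength.
    Returns (loss, dW) with dW the same shape as W.
    """
    N = X.shape[0]
    scores = X.dot(W)                              # (N, C) class scores
    scores -= scores.max(axis=1, keepdims=True)    # shift for numeric stability
    exp_scores = np.exp(scores)
    probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)  # (N, C)

    # Average cross-entropy of the correct classes, plus L2 penalty.
    loss = -np.log(probs[np.arange(N), y]).mean() + reg * np.sum(W * W)

    # Gradient: softmax probabilities minus the one-hot targets.
    dscores = probs.copy()
    dscores[np.arange(N), y] -= 1
    dW = X.T.dot(dscores) / N + 2 * reg * W
    return loss, dW
```

With tiny random weights and C classes, every class gets probability about 1/C, so the loss comes out near log(C), which is exactly the -log(0.1) sanity check used below for C = 10.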
:\n\n# First implement the naive softmax loss function with nested loops.\n# Open the file cs231n/classifiers/softmax.py and implement the\n# softmax_loss_naive function.\n\nfrom skynet.linear.softmax import softmax_loss_naive\nimport time\n\n# Generate a random softmax weight matrix and use it to compute the loss.\nW = np.random.randn(3073, 10) * 0.0001 # D x C\nloss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)\n\n# As a rough sanity check, our loss should be something close to -log(0.1).\nprint('loss: %f' % loss)\nprint('sanity check: %f' % (-np.log(0.1)))\n\n``````\n``````\n\nloss: 2.369651\nsanity check: 2.302585\n\n``````\n\n## Inline Question 1:\n\nWhy do we expect our loss to be close to -log(0.1)? Explain briefly.**\n\nBy randomly initializing the weights, the prediction results for each class tend to be of equal probability. And there are 10 classes, meaning each has a probability of 1/10.\n\n``````\n\nIn :\n\n# Complete the implementation of softmax_loss_naive and implement a (naive)\n# version of the gradient that uses nested loops.\nloss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)\n\n# As we did for the SVM, use numeric gradient checking as a debugging tool.\nf = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)\n\n# similar to SVM case, do another gradient check with regularization\nloss, grad = softmax_loss_naive(W, X_dev, y_dev, 1e2)\nf = lambda w: softmax_loss_naive(w, X_dev, y_dev, 1e2)\n\n``````\n``````\n\nnumerical: 1.794679 analytic: 1.794679, relative error: 2.501120e-09\nnumerical: -2.047843 analytic: -2.047843, relative error: 1.510634e-08\nnumerical: -1.338093 analytic: -1.338093, relative error: 9.466247e-09\nnumerical: 1.540616 analytic: 1.540616, relative error: 3.458885e-08\nnumerical: 0.275297 analytic: 0.275297, relative error: 1.135912e-07\nnumerical: -5.975292 analytic: -5.975292, relative error: 7.047495e-10\nnumerical: -3.156706 analytic: -3.156706, relative error: 1.598705e-08\nnumerical: 1.332692 analytic: 1.332692, 
relative error: 7.460575e-08\nnumerical: -1.749183 analytic: -1.749183, relative error: 1.117724e-09\nnumerical: -0.445788 analytic: -0.445788, relative error: 1.361485e-09\nnumerical: 0.793491 analytic: 0.793491, relative error: 9.097127e-08\nnumerical: 1.853737 analytic: 1.853737, relative error: 1.621229e-08\nnumerical: -0.744759 analytic: -0.744759, relative error: 1.391429e-07\nnumerical: 2.756890 analytic: 2.756890, relative error: 8.666795e-09\nnumerical: 0.867358 analytic: 0.867358, relative error: 1.686795e-08\nnumerical: 1.544908 analytic: 1.544908, relative error: 4.170542e-08\nnumerical: -0.723113 analytic: -0.723113, relative error: 3.021171e-08\nnumerical: 2.069550 analytic: 2.069550, relative error: 1.991153e-08\nnumerical: 2.000204 analytic: 2.000204, relative error: 2.616614e-08\nnumerical: 3.892708 analytic: 3.892708, relative error: 2.660912e-08\n\n``````\n``````\n\nIn :\n\n# Now that we have a naive implementation of the softmax loss function and its gradient,\n# implement a vectorized version in softmax_loss_vectorized.\n# The two versions should compute the same results, but the vectorized version should be\n# much faster.\ntic = time.time()\nloss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.00001)\ntoc = time.time()\nprint('naive loss: %e computed in %fs' % (loss_naive, toc - tic))\n\nfrom skynet.linear.softmax import softmax_loss_vectorized\ntic = time.time()\nloss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.00001)\ntoc = time.time()\nprint('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))\n\n# As we did for the SVM, we use the Frobenius norm to compare the two versions\nprint('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))\n\n``````\n``````\n\nnaive loss: 2.369651e+00 computed in 0.085196s\nvectorized loss: 2.369651e+00 computed in 0.004321s\nLoss difference: 0.000000\n\n``````\n``````\n\nIn :\n\n# Use the validation set to tune hyperparameters (regularization 
strength and\n# learning rate). You should experiment with different ranges for the learning\n# rates and regularization strengths; if you are careful you should be able to\n# get a classification accuracy of over 0.35 on the validation set.\nfrom skynet.linear import Softmax\nresults = {}\nbest_val = -1\nbest_softmax = None\n# Grid search\n# learning_rates = [1e-7, 3e-7, 5e-7, 7e-7, 1e-6, 3e-6]\n# regularization_strengths = [1e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3]\n# learning_rates = np.logspace(-7, -6, 5)\n# regularization_strengths = np.logspace(3, 4, 5)\n\n# Random search\nlearning_rates = sorted(10**np.random.uniform(-7, -5, 6))\nregularization_strengths = sorted(10**np.random.uniform(3, 4, 6))\n# regularization_strengths = sorted(10**np.random.uniform(4, 4.2, 6))\n\n# best hyperparameters found for classification performance with\n# accuracy: 0.407020, 0.404000, 0.394000\n# learning_rates = [9.740577e-07]\n# regularization_strengths = [1.469906e+03]\n\n################################################################################\n# TODO: #\n# Use the validation set to set the learning rate and regularization strength. #\n# This should be identical to the validation that you did for the SVM; save #\n# the best trained softmax classifer in best_softmax. 
#\n################################################################################\nfor learning_rate in learning_rates:\nfor reg in regularization_strengths:\nclassifier = Softmax()\nclassifier.train(X_train, y_train,\nlearning_rate=learning_rate, reg=reg, num_iters=1500,\nbatch_size=200, verbose=False)\ny_train_predict = classifier.predict(X_train)\ny_val_predict = classifier.predict(X_val)\ntrain_accuracy = np.mean(y_train==y_train_predict)\nval_accuracy = np.mean(y_val==y_val_predict)\nresults[(learning_rate, reg,)] = (train_accuracy, val_accuracy)\nif val_accuracy > best_val:\nbest_val = val_accuracy\nbest_softmax = classifier\nprint('lr %e reg %e train accuracy: %f val accuracy: %f' % (\nlearning_rate, reg, train_accuracy, val_accuracy))\npass\n################################################################################\n# END OF YOUR CODE #\n################################################################################\nprint('best validation accuracy achieved during cross-validation: %f' % best_val)\n\n``````\n``````\n\nlr 2.322013e-07 reg 1.019761e+03 train accuracy: 0.301265 val accuracy: 0.316000\nlr 2.322013e-07 reg 1.324964e+03 train accuracy: 0.315959 val accuracy: 0.310000\nlr 2.322013e-07 reg 1.796276e+03 train accuracy: 0.326531 val accuracy: 0.337000\nlr 2.322013e-07 reg 2.862885e+03 train accuracy: 0.347061 val accuracy: 0.367000\nlr 2.322013e-07 reg 4.230012e+03 train accuracy: 0.362204 val accuracy: 0.379000\nlr 2.322013e-07 reg 5.227460e+03 train accuracy: 0.375286 val accuracy: 0.390000\nlr 4.016129e-07 reg 1.019761e+03 train accuracy: 0.350694 val accuracy: 0.361000\nlr 4.016129e-07 reg 1.324964e+03 train accuracy: 0.358347 val accuracy: 0.362000\nlr 4.016129e-07 reg 1.796276e+03 train accuracy: 0.372286 val accuracy: 0.372000\nlr 4.016129e-07 reg 2.862885e+03 train accuracy: 0.383878 val accuracy: 0.387000\nlr 4.016129e-07 reg 4.230012e+03 train accuracy: 0.389755 val accuracy: 0.396000\nlr 4.016129e-07 reg 5.227460e+03 train 
accuracy: 0.382755 val accuracy: 0.392000\nlr 6.378143e-07 reg 1.019761e+03 train accuracy: 0.379449 val accuracy: 0.376000\nlr 6.378143e-07 reg 1.324964e+03 train accuracy: 0.391694 val accuracy: 0.401000\nlr 6.378143e-07 reg 1.796276e+03 train accuracy: 0.395082 val accuracy: 0.400000\nlr 6.378143e-07 reg 2.862885e+03 train accuracy: 0.394510 val accuracy: 0.407000\nlr 6.378143e-07 reg 4.230012e+03 train accuracy: 0.386959 val accuracy: 0.394000\nlr 6.378143e-07 reg 5.227460e+03 train accuracy: 0.384286 val accuracy: 0.404000\nlr 8.469655e-07 reg 1.019761e+03 train accuracy: 0.398184 val accuracy: 0.410000\nlr 8.469655e-07 reg 1.324964e+03 train accuracy: 0.401061 val accuracy: 0.383000\nlr 8.469655e-07 reg 1.796276e+03 train accuracy: 0.400429 val accuracy: 0.407000\nlr 8.469655e-07 reg 2.862885e+03 train accuracy: 0.395796 val accuracy: 0.403000\nlr 8.469655e-07 reg 4.230012e+03 train accuracy: 0.388061 val accuracy: 0.392000\nlr 8.469655e-07 reg 5.227460e+03 train accuracy: 0.383592 val accuracy: 0.390000\nlr 2.711425e-06 reg 1.019761e+03 train accuracy: 0.403878 val accuracy: 0.389000\nlr 2.711425e-06 reg 1.324964e+03 train accuracy: 0.398633 val accuracy: 0.387000\nlr 2.711425e-06 reg 1.796276e+03 train accuracy: 0.387367 val accuracy: 0.396000\nlr 2.711425e-06 reg 2.862885e+03 train accuracy: 0.384898 val accuracy: 0.390000\nlr 2.711425e-06 reg 4.230012e+03 train accuracy: 0.364245 val accuracy: 0.369000\nlr 2.711425e-06 reg 5.227460e+03 train accuracy: 0.352653 val accuracy: 0.350000\nlr 3.481635e-06 reg 1.019761e+03 train accuracy: 0.385327 val accuracy: 0.378000\nlr 3.481635e-06 reg 1.324964e+03 train accuracy: 0.378939 val accuracy: 0.362000\nlr 3.481635e-06 reg 1.796276e+03 train accuracy: 0.369898 val accuracy: 0.378000\nlr 3.481635e-06 reg 2.862885e+03 train accuracy: 0.360061 val accuracy: 0.366000\nlr 3.481635e-06 reg 4.230012e+03 train accuracy: 0.364327 val accuracy: 0.347000\nlr 3.481635e-06 reg 5.227460e+03 train accuracy: 0.350816 val 
accuracy: 0.355000\nbest validation accuracy achieved during cross-validation: 0.410000\n\n``````\n``````\n\nIn :\n\n# Evaluate the best softmax on the test set\ny_test_pred = best_softmax.predict(X_test)\ntest_accuracy = np.mean(y_test == y_test_pred)\nprint('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))\n\n``````\n``````\n\nsoftmax on raw pixels final test set accuracy: 0.370000\n\n``````\n``````\n\nIn :\n\ndef visualize_weights(weights):\n    \"\"\"\n    Visualize the learned weights for each class\n    \"\"\"\n    # w = weights[:-1,:] # strip out the bias\n    w = weights\n    w = w.reshape(32, 32, 3, 10)\n\n    w_min, w_max = np.min(w), np.max(w)\n\n    classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\n    for i in range(10):\n        plt.subplot(2, 5, i + 1)\n\n        # Rescale the weights to be between 0 and 255\n        wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)\n        plt.imshow(wimg.astype('uint8'))\n        plt.axis('off')\n        plt.title(classes[i])\n\n``````\n``````\n\nIn :\n\n# Visualize the learned weights for each class\nvisualize_weights(best_softmax.W[:-1,:])\n\n``````\n``````\n\n``````\n\nThe learned weights for each class in the softmax model resemble the shape of each class's object. 
The larger the regularization strength, the more the weights are shrunk, which reduces noise in the visualized image and makes it clearer.\n\n``````\n\nIn :\n\n# this will produce weights more similar to the class images\nlearning_rates = [4.019744e-07]\nregularization_strengths = [1.130442e+04]\nfor learning_rate in learning_rates:\n    for reg in regularization_strengths:\n        classifier = Softmax()\n        classifier.train(X_train, y_train,\n                         learning_rate=learning_rate, reg=reg, num_iters=1500,\n                         batch_size=200, verbose=False)\n        y_train_predict = classifier.predict(X_train)\n        y_val_predict = classifier.predict(X_val)\n        train_accuracy = np.mean(y_train == y_train_predict)\n        val_accuracy = np.mean(y_val == y_val_predict)\n        results[(learning_rate, reg)] = (train_accuracy, val_accuracy)\n        if val_accuracy > best_val:\n            best_val = val_accuracy\n            best_softmax = classifier\n        print('lr %e reg %e train accuracy: %f val accuracy: %f' % (\n            learning_rate, reg, train_accuracy, val_accuracy))\n\nvisualize_weights(classifier.W[:-1,:])\n\n``````\n``````\n\nlr 4.019744e-07 reg 1.130442e+04 train accuracy: 0.369469 val accuracy: 0.383000\n\n``````\n``````\n\nIn [ ]:\n\n``````"
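The `Softmax` class used throughout this notebook comes from the CS231n assignment code and its loss function is never shown here. For readers without that code, below is a minimal, self-contained sketch of the vectorized softmax loss with L2 regularization that such a classifier typically minimizes — the function name and signature are my own, not the assignment's:

```python
import numpy as np

def softmax_loss(W, X, y, reg):
    """Average cross-entropy loss of a linear softmax classifier.

    W: (D, C) weights, X: (N, D) data, y: (N,) integer labels in [0, C).
    Adds an L2 penalty reg * sum(W**2).
    """
    scores = X.dot(W)                              # (N, C) class scores
    scores -= scores.max(axis=1, keepdims=True)    # shift for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)      # softmax probabilities
    # negative log-probability of the correct class, averaged over the batch
    loss = -np.log(probs[np.arange(X.shape[0]), y]).mean()
    return loss + reg * np.sum(W * W)

# Sanity check: with zero weights every class is equally likely,
# so the unregularized loss equals log(C).
print(softmax_loss(np.zeros((5, 3)), np.random.randn(4, 5),
                   np.array([0, 1, 2, 1]), reg=0.0))  # ≈ log(3) ≈ 1.0986
```

The log(C) check at the end is a common way to verify a softmax loss implementation before any training starts.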
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.60978657,"math_prob":0.9723436,"size":9349,"snap":"2021-31-2021-39","text_gpt3_token_len":2511,"char_repetition_ratio":0.16672017,"word_repetition_ratio":0.14407502,"special_character_ratio":0.32901916,"punctuation_ratio":0.21813725,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961239,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-30T23:41:12Z\",\"WARC-Record-ID\":\"<urn:uuid:1a4dba31-1a1f-4c44-beef-1450936d8c42>\",\"Content-Length\":\"194075\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:20098692-7fa5-49a5-bcfa-6d1427febdfc>\",\"WARC-Concurrent-To\":\"<urn:uuid:3f6fe31f-55d7-4913-b1b3-9c57470a186f>\",\"WARC-IP-Address\":\"216.239.38.21\",\"WARC-Target-URI\":\"https://notebook.community/Alexoner/skynet/notebooks/softmax\",\"WARC-Payload-Digest\":\"sha1:TC7M3G26EJ3XP4LKHP5BMGE5RKAXOGJK\",\"WARC-Block-Digest\":\"sha1:QPEUQO3KKKNPI7WB6O5MHCYUTBO53K3D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154032.75_warc_CC-MAIN-20210730220317-20210731010317-00076.warc.gz\"}"} |
https://www.research.manchester.ac.uk/portal/en/theses/einsteincartan-theory-and-its-formulation-as-a-quantum-field-theory(c7beb926-ddd4-422c-9a24-e0aa8df30a4d).html | [
"## Einstein-Cartan Theory and its Formulation as a Quantum Field Theory\n\nUoM administered thesis: Master of Science by Research\n\n• Authors:\n• Peter Harrison\n\n## Abstract\n\nThis thesis first gives reviews of the theories of spinors and manifolds, and discusses a formalism in which spinorial, vectorial and tensorial fields may be represented upon a manifold. In particular, the Riemann-Cartan manifold is defined; the sphere as a Riemann-Cartan manifold is given as an example, and the effects of the geometry of such a manifold are discussed. The field equations of Einstein-Cartan theory, which treats spacetime as a Riemann-Cartan manifold, are then derived; the macroscopic limit is considered and shown to reproduce the Einstein equation obtained from general relativity. An introduction to the background field method approach to quantum field theories is then given; in particular, the metric and vierbein background field method approaches to the quantum formulation of Einstein-Cartan theory are discussed. The Faddeev-Popov method of gauge-fixing is then discussed, and the propagators of the graviton and Feynman-De Witt-Faddeev-Popov ghosts are derived in a general gauge. The coupling of the standard model matter fields to gravity is then discussed; in particular, the coupling of scalars and fermions to gravity is considered, and the tree-level Feynman rules are derived. Simple scattering processes are considered and shown to be gauge-independent, and to reproduce the Newtonian potential in the non-relativistic limit.\n\n## Details\n\nOriginal language: English\nThe University of Manchester\nApostolos Pilaftsis (Supervisor)\n31 Dec 2013"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8902684,"math_prob":0.76473486,"size":1603,"snap":"2020-10-2020-16","text_gpt3_token_len":345,"char_repetition_ratio":0.118824266,"word_repetition_ratio":0.0,"special_character_ratio":0.17529632,"punctuation_ratio":0.085185185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9623509,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-25T10:18:54Z\",\"WARC-Record-ID\":\"<urn:uuid:fcad87a5-0537-4235-9e0d-111258f85ead>\",\"Content-Length\":\"41879\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db32cfb7-9e5d-4a34-b1b9-ae236d583d3a>\",\"WARC-Concurrent-To\":\"<urn:uuid:eaf624da-6fb0-42a9-a86e-5239c3c786f7>\",\"WARC-IP-Address\":\"130.88.249.145\",\"WARC-Target-URI\":\"https://www.research.manchester.ac.uk/portal/en/theses/einsteincartan-theory-and-its-formulation-as-a-quantum-field-theory(c7beb926-ddd4-422c-9a24-e0aa8df30a4d).html\",\"WARC-Payload-Digest\":\"sha1:VT6LWGQTQQYEMSBNB4RKJKKDZH6GZNAN\",\"WARC-Block-Digest\":\"sha1:MWEFXHT5SWQPE2RGYCD6CA7MXYJTQBZF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146064.76_warc_CC-MAIN-20200225080028-20200225110028-00137.warc.gz\"}"} |
https://math.stackexchange.com/questions/2205452/local-degree-of-a-map-between-n-spheres | [
"Local Degree of a map between n-spheres\n\nWe are talking about singular homology.\n\nLet $f: S^n \\rightarrow S^n$ be a map. The degree $\\deg(f)$ is the unique integer such that, under the identification $H_n(S^n) \\cong \\mathbb{Z}$, the map $f_*$ is given by multiplication by $\\deg(f)$.",
null,
"Generally speaking, I have the following problem: I can understand the general-abstract theorems (excision, the long exact sequence for relative homology, Mayer-Vietoris), but I don't know how to use them in practice. In many proofs that involve these theorems, I can't see why this generator $1$ goes to $1$ via the map $\\partial$ of MV, or why this element goes to the other via the map of excision.\n\nHaving said that, specifically, here, I have a problem understanding the following proof, and consequently with computing the local degree in practice. I know that the upper-right iso is derived from the LES for relative homology, the lower-right iso from the excision theorem, and the lower-left iso from the formula that relates the homology of a space with the homologies of its path-components (in the version for relative homology). I can also see why the generator $1$ of the upper-left $H_d(S^d)$ goes to $\\deg(f)$ at the upper-right $H_d(S^d)$. But I can't see how we derive the rest of the relations.\n\nCould someone please explain to me how we pass from the general theorem to the specific maps, in this setting?\n\nI have spent many hours thinking about that without any significant progress and I would really appreciate your help.\n\nTo make sense of a proof of this nature, it isn't enough to know that there exists a map from $H_d(V, V \\backslash \\{ y \\})$ to $H_d(S^d, S^d \\backslash \\{ y \\})$, and that there exists an isomorphism between $H_d(S^d)$ and $H_d(S^d, S^d \\backslash \\{ y \\})$, and so on. You really also need to understand how all of these maps act on the cycles in the various homology groups.\n\nSometimes (but not always), it is possible to describe how a map acts on cycles in homology groups by identifying the map on homology as the map induced by a continuous function between the relevant topological spaces. 
Since we usually have a good idea of how to compose continuous functions between topological spaces, this will often enable us to work out how to compose the maps on the homology groups. In certain situations, this will also enable us to prove that certain maps between homology groups are identity maps, or zero maps.\n\nAs it happens, all of the maps in your diagram are induced by continuous functions between topological spaces:\n\n• The horizontal maps are all induced by $f$ (or restrictions of $f$ to the appropriate spaces).\n\n• The top-left vertical map is induced by the identity map $S^d \\to S^d$. The same applies to the top-right vertical map. This is just how these maps are defined.\n\n• The bottom-left vertical map is induced by the various identity maps $U_i \\to U_i$.\n\n• The middle-left vertical map is induced by the various inclusion maps $U_i \\to S^d$. The bottom-right vertical map is induced by the inclusion map $V \\to S^d$. The fact that these maps are induced by the relevant inclusion maps is a part of the statement of the excision theorem, and it is worth noting this!\n\nIt's clear that the diagram in your book really is a commutative diagram, because the corresponding maps between topological spaces all commute appropriately!\n\nNow, each $H_d(U_i, U_i \\backslash \\{ x_i \\} )$ is isomorphic to $\\mathbb Z$, since $$H_d(U_i, U_i \\backslash \\{ x_i \\} ) \\cong H_d(S^d, S^d \\backslash \\{ x_i \\}) \\cong H_d (S^d) \\cong \\mathbb Z,$$ where the first isomorphism is by excision and the second is by the LES for the pair $(S^d, S^d \\backslash \\{ x_i \\})$.\n\nTherefore, $$H_d (S^d, S^d \\backslash \\{ x_1, \\dots, x_k \\} ) \\cong \\oplus_i H_d(U_i , U_i \\backslash \\{ x_i \\}) \\cong \\mathbb Z^{\\oplus k}.$$\n\nNow let us define the cycle $$(0, \\dots, 1, \\dots, 0) \\in H_d (S^d, S^d \\backslash \\{ x_1, \\dots, x_k \\} )$$ (with the $1$ in the $i$th position) to be the generator coming from the generator $1 \\in H_d(U_i, U_i 
\\backslash \\{ x_i \\} ) \\cong \\mathbb Z$.\n\nHere's an important question we must address, if we're to make any progress: Given a cycle $$(a_1, \\dots, a_k) \\in H_d (S^d, S^d \\backslash \\{ x_1, \\dots, x_k \\} ),$$ is there a simple way to determine numbers $a_1, \\dots, a_k$, if we don't already know them?\n\nAnd here's my proposal: For each $i \\in \\{ 1, \\dots, k \\}$, define a natural map $$p_i : H_d(S^d, S^d \\backslash \\{ x_1, x_2, \\dots, x_k \\}) \\to H_d(S^d, S^d \\backslash \\{ x_i \\}),$$ to be the map on homology induced by the identity map $S^d \\to S^d$.\n\nThen for each $i$, $$p_i(a_1, \\dots, a_k) = a_i \\in H_d(S^d, S^d \\backslash \\{ x_i \\}).$$\n\n[Let's prove this carefully. First, take $$(1,0,\\dots , 0) \\in H_d(S^d, S^d \\backslash \\{ x_1, \\dots, x_k \\} )$$ and map it to $H_d(S^d, S^d \\backslash \\{ x_1 \\})$ via $p_1$.\n\nSince $(1,0,\\dots, 0)$ is itself the image of the generator $1 \\in H_d(U_1, U_1 \\backslash \\{ x_1 \\})$, we know that $p_1(1,0,\\dots, 0)$ is the image of $1 \\in H_d(U_1, U_1 \\backslash \\{ x_1 \\})$ under the composition, $$H_d(U_1,U_1 \\backslash \\{ x_1 \\}) \\to H_d(S^d, S^d \\backslash \\{ x_1, \\dots, x_k \\} ) \\to H_d(S^d, S^d \\backslash \\{ x_1 \\}).$$ The first map is induced by the inclusion $U_1 \\to S^d$ and the second map is induced by the identity map $S^d \\to S^d$, so the composition is induced by the inclusion $U_1 \\to S^d$.\n\nBut we know that the map $H_d(U_1,U_1 \\backslash \\{ x_1 \\}) \\to H_d(S^d, S^d \\backslash \\{ x_1 \\})$ induced by the inclusion $U_1 \\to S^d$ is an isomorphism, by excision! 
So we conclude that $$p_1(1,0, \\dots, 0) = 1 \\in H_d(S^d, S^d \\backslash \\{ x_1 \\}).$$\n\nOkay, how about we take $$(1,0,\\dots , 0) \\in H_d(S^d, S^d \\backslash \\{ x_1, \\dots, x_k \\} )$$ as before, but this time, we map it via $p_2$ to $H_d(S^d, S^d \\backslash \\{ x_2 \\})$?\n\nThe image $p_2(1,0, \\dots, 0)$ is the same as the image of $1 \\in H_d(U_1, U_1 \\backslash \\{ x_1 \\})$ under the composition, $$H_d(U_1,U_1 \\backslash \\{ x_1 \\}) \\to H_d(S^d, S^d \\backslash \\{ x_1, \\dots, x_k \\} ) \\to H_d(S^d, S^d \\backslash \\{ x_2 \\}).$$ But if you think about it, this composition is the same as the composition, $$H_d(U_1,U_1 \\backslash \\{ x_1 \\}) \\to H_d(U_1,U_1) \\to H_d(S^d, S^d \\backslash \\{ x_2 \\}),$$ where the first map is induced by the identity $U_1 \\to U_1$ and the second map is induced by the inclusion $U_1 \\to S^d$. (Note that since $U_1 \\subset S^d \\backslash \\{ x_2 \\}$, the second map really is well-defined.) And why are these two compositions equal? Because both of these compositions are the maps on homology induced by the inclusion map $U_1 \\to S^d$!\n\nOf course, $H_d(U_1, U_1) = 0$, so it is clear that $$p_2(1,0,\\dots, 0) = 0 \\in H_d(S^d, S^d \\backslash \\{ x_2 \\}).$$ This completes the proof of my claim.]\n\nRight. Having done all this hard work, we're going to prove that the image of $1 \\in H_d(S^d)$ under the top-left map in the diagram is the element $$(1,1, \\dots, 1) \\in H_d(S^d, S^d \\backslash \\{ x_1, \\dots, x_k \\} ).$$ I believe this is the part of the proof that you weren't sure about.\n\nBy the claim that we just proved, we only have to verify that the image of $1 \\in H_d(S^d)$ under the composition $$H_d (S^d) \\to H_d(S^d, S^d \\backslash \\{ x_1, \\dots, x_k \\} ) \\overset{p_i} {\\to} H_d(S^d, S^d \\backslash \\{ x_i \\})$$ is the element $$1 \\in H_d(S^d, S^d \\backslash \\{ x_i \\}).$$\n\nThis is straightforward to show. 
The composition I wrote down is the map on homology induced by the identity map $S^d \\to S^d$. But the map $H_d (S^d) \\to H_d(S^d, S^d \\backslash \\{ x_i \\})$ induced by the identity map $S^d \\to S^d$ is precisely the map appearing in the LES for the pair $(S^d, S^d \\backslash \\{ x_i \\})$, and this map is an isomorphism.\n\nSo $1 \\in H_d(S^d)$ maps to $1 \\in H_d(S^d, S^d \\backslash \\{ x_i \\})$, and we're done.\n\nTo finish off, ${\\rm deg} f$ is the image of $1 \\in H_d(S^d)$ under the map $f_\\star$. By the commutative diagram, this is the same as the image of $(1,1,\\dots, 1) \\in H_d(S^d, S^d \\backslash \\{ x_1, \\dots, x_k \\})$ under $f_\\star$. And this is the same as the sum of the images of $1 \\in H_d( U_i, U_i \\backslash \\{ x_i \\})$ under $f_\\star$.\n\nThus we have shown that $${\\rm deg} f = \\sum_i {\\rm deg} f_i.$$\n\n• [1/2] Wow, thank you very very much for your amazing answer! I went through it, but of course I have to do it again in order to digest it in depth. I have two questions: 1) At some points of this illustration you use the fact that an isomorphism from $\\mathbb{Z} \\rightarrow \\mathbb{Z}$ sends $1$ to $1$. Why can't it be the case that it sends $1$ to $-1$? – perlman Mar 27 '17 at 21:52\n• [2/2] 2) Your arguments are very solid and clear, and I think I can follow them step by step. However, I couldn't quite grasp the bigger picture, especially because we introduce these $p_i$'s. Do we have to argue in this way in similar cases in order to make a solid argument? Additionally, is there a way I could have come up with these $p_i$'s myself? Thank you very much again! – perlman Mar 27 '17 at 21:52\n• Hi @MathewJames, yes, an isomorphism from $\\mathbb Z \\to \\mathbb Z$ can either send $1 \\mapsto 1$ or $1 \\mapsto -1$. But these are really the same maps, up to a change of basis. Said another way, when you decide that it sends $1 \\mapsto 1$ rather than $1 \\mapsto -1$, you're picking a choice of orientation for your cycles. 
– Kenny Wong Mar 27 '17 at 22:02\n• As for those $p_i$'s - it's like if you have a 3d vector $v = \\sum c_i e_i$ and you want to get the coefficients, then you get them by projecting onto the coordinate axes. I tried to come up with a \"projection\" that works, and those $p_i$'s just happened to be what I came up with. – Kenny Wong Mar 27 '17 at 22:05\n• In a way, those $p_i$ are quite natural. For example, the natural map $H_d(U_1, U_1 \\backslash \\{ x_1 \\}) \\to H_d(S^d, S^d \\backslash \\{ x_1 \\} )$ is clearly an isomorphism by excision, and the natural map $H_d(U_1, U_1 \\backslash \\{ x_1 \\}) \\to H_d(S^d, S^d \\backslash \\{ x_2 \\} )$ is clearly the zero map, because the $U_1$ gets completely \"swallowed up\" by the $S^d \\backslash \\{ x_2 \\}$. – Kenny Wong Mar 27 '17 at 22:07"
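As a purely numerical aside (not part of the homological argument above): for maps $S^1 \to S^1$ the degree coincides with the winding number, so simple examples such as $z \mapsto z^k$ can be sanity-checked in a few lines of code. This sketch is my own illustration and assumes nothing beyond NumPy:

```python
import numpy as np

def winding_number(f, n=20000):
    """Approximate the degree of f: S^1 -> S^1 as a winding number,
    by summing the wrapped increments of arg(f(e^{it})) around the circle."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    w = np.angle(f(np.exp(1j * t)))
    d = np.diff(np.concatenate([w, w[:1]]))   # increments, closing the loop
    d = (d + np.pi) % (2.0 * np.pi) - np.pi   # wrap each increment into (-pi, pi]
    return round(d.sum() / (2.0 * np.pi))

print(winding_number(lambda z: z**3))        # degree 3
print(winding_number(lambda z: np.conj(z)))  # degree -1 (reflection)
```

For $z \mapsto z^k$ the total angle swept is $2\pi k$, matching the algebraic definition of degree on $H_1(S^1) \cong \mathbb Z$.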
] | [
null,
"https://i.stack.imgur.com/hxlNH.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.844378,"math_prob":0.998517,"size":6693,"snap":"2019-26-2019-30","text_gpt3_token_len":2369,"char_repetition_ratio":0.22649126,"word_repetition_ratio":0.2429752,"special_character_ratio":0.3648588,"punctuation_ratio":0.13411279,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997824,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-20T02:00:18Z\",\"WARC-Record-ID\":\"<urn:uuid:43c97f41-0c1a-4950-9792-7d8523ba3b27>\",\"Content-Length\":\"149762\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc621784-582b-46dd-8a68-9e4690a0ddd0>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa8943d1-94ec-4894-bf8f-eb8bf15bc506>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2205452/local-degree-of-a-map-between-n-spheres\",\"WARC-Payload-Digest\":\"sha1:3YMCYP2UMS5XT7GMRLXQ33D5EO5D2I3A\",\"WARC-Block-Digest\":\"sha1:HUPY4P2BHSLRSQ4SS7UDHEDI3EE7MEDA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999130.50_warc_CC-MAIN-20190620004625-20190620030625-00274.warc.gz\"}"} |
http://www.geophysique.be/tag/matplotlib/ | [
"Plot waveforms of events on a dates axis\n\nFollowing a question from my dear colleague Devy, here is how to plot a set of events occurring at random moments in time. The idea is to plot the waveform of each event with the beginning at the top and the end at the bottom (along the “y” axis) and centred on the origin time…\n\nNorth Korean nuclear tests with Obspy\n\nThis morning, North Korea tested some nuclear “bomb” somewhere in the middle of the country (confirmed by Pyongyang officials and the CTBTO), and many seismic sensors worldwide recorded the triggered waveforms. The location of the test is the same as the 2009 one, confirmed by the location provided by global monitoring networks (USGS, GEOFON). To pythonise…\n\nNew Tutorial Series: Pandas\n\nIn the coming months, I'll prepare some tutorials on an excellent data analysis package called pandas! To show you the power of pandas, just take a look at this old tutorial, where I exploited the power of itertools to group sparse data into 5-second bins. The magic of pandas is that, when you…\n\nLast Earthquakes tool – ETS powered\n\nWhile in Indonesia last July, I created a small tool for the Kawah Ijen observers to allow them to search and plot teleseismic events and to calculate theoretical arrival times of the waves at the Ijen stations. It took roughly 2 hours to have a working version of the software, with: a GUI to plot…\n\nMatplotlib & Datetimes – Tutorial 04: Grouping & Analysing Sparse Data\n\nTo extend the previous tutorial (see here), we define a data array that has some information about the event that occurred for each datetime. The plot of data vs time now looks like: The data array is constructed with numpy.random: data = np.random.randint(10000,size=len(times)) Now, we will modify the example from tutorial 03: def group(di): return…"
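The "5 second bins" grouping teased in the pandas post above can be done today as a one-liner with `resample`; here is a hedged sketch with made-up timestamps (the original tutorial used `itertools`, not this exact code):

```python
import numpy as np
import pandas as pd

# Fake sparse event times within one minute, one random value per event
rng = np.random.default_rng(0)
times = pd.to_datetime("2016-01-01") + pd.to_timedelta(
    np.sort(rng.uniform(0, 60, size=40)), unit="s")
data = pd.Series(rng.integers(0, 10000, size=40), index=times)

# Group the sparse samples into 5-second bins and count events per bin
counts = data.resample("5s").count()
print(counts.head())
```

Swapping `.count()` for `.mean()` or `.max()` gives the other per-bin statistics one usually wants for sparse time series.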
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9067057,"math_prob":0.836488,"size":1588,"snap":"2019-43-2019-47","text_gpt3_token_len":345,"char_repetition_ratio":0.103535354,"word_repetition_ratio":0.0,"special_character_ratio":0.21599497,"punctuation_ratio":0.09615385,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.951354,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T09:10:32Z\",\"WARC-Record-ID\":\"<urn:uuid:73d8ff21-d4e0-4d69-bb09-ad57c7cd6344>\",\"Content-Length\":\"36882\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4e2ba951-b5cf-439f-9e62-8882022b00b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:b24b9ce2-ed04-48d9-9256-c4fa11eca2ef>\",\"WARC-IP-Address\":\"213.186.33.2\",\"WARC-Target-URI\":\"http://www.geophysique.be/tag/matplotlib/\",\"WARC-Payload-Digest\":\"sha1:N3ZYJOACNWMIMSLKH5FNSN2YZJVUDVO4\",\"WARC-Block-Digest\":\"sha1:PASQOSPMVD44YZXXYOD2ORXLTPNLBL3X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986673250.23_warc_CC-MAIN-20191017073050-20191017100550-00293.warc.gz\"}"} |
https://www2.math.binghamton.edu/p/seminars/sml/160510 | [
"#",
null,
"Statistical Machine Learning Seminar\nHosted by Department of Mathematical Sciences\n\n• Date: Tuesday, May 10, 2016\n• Time: 12:00-2:00p\n• Room: WH-100E\n• Speaker: Qinggang Diao (Mathematical Sciences)\n• Title: Cox proportional hazards model with time-dependent covariates\n\nAbstract\n\nThis PhD dissertation is divided into four chapters, where right-censored (RC) data and interval-censored (IC) data under several different types of time-dependent covariate assumptions will be discussed.\n\nIn Chapter 0, we will introduce some basic concepts and notations about survival analysis.\n\nChapter 1 reproduces the paper of Yu et al. (2015). In this chapter piecewise Cox models with right-censored data will be discussed. Piecewise Cox models are regression models that follow different Cox models when restricted to different time intervals. We study a general class of piecewise Cox models that involve a single cut point so that there are two separate Cox models corresponding to the two time intervals created. We discuss the computation of the semi-parametric maximum likelihood estimates (SMLE) of the parameters, with right-censored data, and a simplified algorithm for the maximum partial likelihood estimates (MPLE). Simulation studies suggest that the MPLE compares favorably with its SMLE counterpart, even though the SMLE is more efficient. To assess the appropriateness of the model assumption, we propose a simple diagnostic plotting method. This method will enable us to determine an appropriate cut point. We show that the results for the case of a single cut point can be extended to the case of more than one cut point. Finally, we apply the methodology we have developed for piecewise Cox models to the survival analysis of a long-term breast cancer follow-up study on the prognostic significance of bone marrow micrometastasis. 
Our diagnostic plots suggest that it is appropriate to apply the piecewise Cox model to our data.\n\nIn Chapter 2, we consider the time-dependent covariates proportional hazards (TDCPH) model with interval-censored (IC) relapse times under the distribution-free set-up. The partial likelihood approach is not applicable for IC data; thus we use the full likelihood approach. It turns out that under the TDCPH model with IC data, the semi-parametric MLE (SMLE) of the covariate effect under the standard generalized likelihood is not unique and is not consistent. In fact, the parameter under the TDCPH model with IC data is not identifiable unless some stronger assumptions are imposed. We propose a modification to the likelihood function so that its SMLE is unique. We show that the parameter is identifiable under certain regularity conditions. Under the regularity assumptions, our simulation studies suggest that such an SMLE is consistent, and we also give a rigorous proof of the consistency. We apply the method to our cancer relapse time data and conclude that bone marrow micrometastasis is not a significant prognostic factor.\n\nIn Chapter 3, we consider the semi-parametric estimation problem under the proportional hazards (PH) model with continuous time-dependent covariates and interval-censored data. We show that, unlike the PH model with time-independent covariates, if the observable random vector takes on finitely many values, then the parameters in the model are not identifiable and there exist no consistent estimators of the parameters. We establish the identifiability condition for this issue. It provides a guideline for carrying out simulation studies and for the proof of consistency of certain semi-parametric estimators. Moreover, the naive extension of the generalized likelihood function does not lead to a consistent estimator. We propose two proper modifications of the generalized likelihood function, and they both yield consistent estimators. 
We also carry out simulation studies for these estimators. The covariate z(t) = u1(t > c)(t − c) will be discussed.",
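For readers less familiar with the machinery the abstract builds on: in the simplest right-censored case with time-independent covariates, the Cox negative log partial likelihood has a very short numerical form. The sketch below is my own illustration (ties are ignored), not code from the dissertation:

```python
import numpy as np

def cox_neg_log_partial_likelihood(beta, times, events, X):
    """Cox negative log partial likelihood for right-censored data (no ties).

    times: (n,) observed times; events: (n,) 1 = event, 0 = censored;
    X: (n, p) covariates. The risk set at t_i is {j : t_j >= t_i}.
    """
    eta = X @ beta
    nll = 0.0
    for i in range(len(times)):
        if events[i]:
            at_risk = times >= times[i]
            nll -= eta[i] - np.log(np.sum(np.exp(eta[at_risk])))
    return nll

times = np.array([1.0, 2.0, 3.0, 4.0])
events = np.array([1, 1, 0, 1])       # the third subject is censored
X = np.array([[0.5], [-0.2], [0.1], [0.3]])
# With beta = 0 each event contributes log(|risk set|): log 4 + log 3 + log 1
print(cox_neg_log_partial_likelihood(np.zeros(1), times, events, X))
```

Minimizing this function over beta (e.g. with `scipy.optimize.minimize`) gives the maximum partial likelihood estimate mentioned in Chapter 1; the interval-censored settings of Chapters 2-3 require the full likelihood instead.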
null,
""
] | [
null,
"https://www2.math.binghamton.edu/lib/exe/fetch.php/logo.png",
null,
"https://www2.math.binghamton.edu/lib/exe/indexer.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8644491,"math_prob":0.94050884,"size":3874,"snap":"2019-26-2019-30","text_gpt3_token_len":778,"char_repetition_ratio":0.11782946,"word_repetition_ratio":0.00681431,"special_character_ratio":0.18043366,"punctuation_ratio":0.07317073,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9656174,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-24T03:22:44Z\",\"WARC-Record-ID\":\"<urn:uuid:79f1aea6-375a-4911-98bf-42b02d8e19da>\",\"Content-Length\":\"20186\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d60101a-5700-462c-8168-e399ff24046d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5de2970a-2098-4f3f-8ffa-bdaf51e5f65f>\",\"WARC-IP-Address\":\"128.226.2.18\",\"WARC-Target-URI\":\"https://www2.math.binghamton.edu/p/seminars/sml/160510\",\"WARC-Payload-Digest\":\"sha1:3R4GCDNIQPHKC5LXHG4EJP44SZ2YFTT2\",\"WARC-Block-Digest\":\"sha1:DVWVGBK6Z2XZMQWHKQ73THT23GQKQOI2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195530250.98_warc_CC-MAIN-20190724020454-20190724042454-00463.warc.gz\"}"} |
https://homework.cpm.org/category/MN/textbook/cc2mn/chapter/9/lesson/9.2.1/problem/9-81 | [
"",
null,
"",
null,
"9-81.\n\nAges of golfers participating in a golf tournament were $44$, $48$, $40$, $25$, $28$, $37$, $29$, $34$, $45$, $51$, $43$, $35$, $38$, $57$, $50$, $35$, $47$, $30$, $61$, $43$, $44$, $60$, $46$, $43$, $33$, $45$, $42$, $34$, $32$, and $74$.\n\n1. Create a stem-and-leaf plot for this data.\n\nSort the data from low to high.\n\nThe “stem” part of the graph represents all of the digits in a number except the last one. The “leaf” part of the graph represents the last digit of each of the numbers.\n\nStem | Leaves\n$2$ | $5$ $8$ $9$\n$3$ | $0$ $2$ $3$ $4$ $4$ $5$ $5$ $7$ $8$\n$4$ | $0$ $2$ $3$ $3$ $3$ $4$ $4$ $5$ $5$ $6$ $7$ $8$\n$5$ | $0$ $1$ $7$\n$6$ | $0$ $1$\n$7$ | $4$\n2. Use the stem-and-leaf plot to create a histogram.\n\nIn histograms, the intervals for the data are shown on the horizontal axis and the frequency (number of pieces of data in each interval) is represented by the height of a bar above the interval.",
null,
"3. Describe the shape and spread of the data. Are there any apparent outliers?\n\nIs the graph symmetric? How many peaks does it have?\nAre any of the data points significantly large or small compared to the rest of the data?\n\n4. Use the appropriate measure of center to describe the “typical” age of golfers at the tournament.\n\nWhat is the value of the median? What is the value of the mean? Considering the outlier, which value do you think is better suited to describe the typical age of the golfers at the tournament?"
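The measures of center asked about in part 4 are easy to check with a few lines of Python using exactly the ages listed in the problem (this snippet is only a check, not part of the lesson):

```python
from statistics import mean, median

ages = [44, 48, 40, 25, 28, 37, 29, 34, 45, 51, 43, 35, 38, 57, 50,
        35, 47, 30, 61, 43, 44, 60, 46, 43, 33, 45, 42, 34, 32, 74]

print(sorted(ages))
print("median:", median(ages))        # 43.0
print("mean:", round(mean(ages), 2))  # 42.43, pulled upward by the outlier 74
```

With 30 values, the median is the average of the 15th and 16th sorted ages (both 43); comparing it with the mean makes the effect of the outlier concrete.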
] | [
null,
"https://homework.cpm.org/dist/7d633b3a30200de4995665c02bdda1b8.png",
null,
34sy+Ub/BbfiCnT8/jemjYy40PxHrTQQ+iqoFtoNK2PI9kQ7BtDtLDkf+6QiA806D8q4X7PsdFMDED5X83GaIFEa7uPpxxPUsAwv9O9cgZ+xgZ/R/4iNuA2ktN0yc++57pZz2BjEfIQuKMFisUjWCI7xcmDK+PZ+LrXQgO8k5Nmd8fC/j6f3ffQxE3qkw4QKkj8Jv7+kff6MJXDHzLNZVSQfNgpi4VKneuheJjPY8t5MvfPoQJkn/dwrx52eN/Dt0jYq1incc4H+X6XkbAv9JTmDsfrcEGJ5eBiJz4b0OwoE6FvN84zVgz2/UKp2I1ltAOf78tU9A/y6rDN77leHd6dym09CXGYo1TdSDKczfLYieV3GdOc79WhfRwyv5RpbZ14gG3M9Z4HzObrvJh81Xn58pXJcY6XZq8i3w6I+rSYNJ93PAgdou52xQAQ+kBgKt1icV6GIbRKFhS5DhqDtwcg/2igPsftMyVa/jXDjxgW5ZU8dnbAbbmazzWPv3B7TqIS00wLxMeOtH58wHrbtBf5X+TkwZW5bMh90niNx+fTMsJ8BLMc5aAv+CS9Bkv4PHNYlktIpo+wrp8ZOHcij83l/0nOsTbut+X8hkN+9nlej7G0xCGkE7l9Cb0IHSyTu0ggQqKPc69+m5ZoOTiGHoV5zO+kfqzLackHvM7n9g2S78I4WnpOKLXUq8OoEyfxnYEcd2G63aiItbKePM93i/7w7xm5m+lOdK5tn/XPVBiX8ZyX6alq4/UPCTwL7v8vL1+TuB+KcqhLwN77Nf6eUEKZTQ54C1EPz1JaUgw0oW/oRUlg2V5cJE2t89HH4T5q300DUPZoHBpp3TweOD6dpPftwHtKxlhLL3M7zl39TU8Bgqvwq45VWA7K6a6B5VoT2P9bx5rsSx3awfG2LA0cn0Kiv9Xb30yLKMuyWUhLb8uY+6Sc56ktMW9Qlmx/+gOB4w+R3DeR9fvdq0g8C3jfH5dxT6Q71lEGXqVC8MF+qstx5fG04wWqLaH+LCVxAkMdi1eoWL0WOOde/m7r7NveO+biLXrAzohRxEL5Wu7UK1/p2oyKwTpes4WK+ogSPJH+PBoHSnwMgULRL4Qeck03SnhseiXRzgbxMDZSxQjIRr+jEX8wcBxW0jkFnqm/Yee1XynhaG7sn0Fr3Y+E7o7xSNh+8IXesQdo2XzMs0pgOW1HC/8fZea/EjETbzl5b+jDdWwjG+dpQUAUgsf+GmhA4SlBlwC6CeBih2v1iAq+5yaSWafk+9r9et1CIqnzvrMsLbZVtCi/U+I94fL9AOsBvAD3U2Hqr9EdWQlH2u/rELVfx0PR+weQjLO08oHhzjUk5juxdci2aU1F6sPdVJifCRwL5etAyceCvOwd+yy/ZVjyCGJDtwCi8A8t0Hb+kt/w1x3FxSrcwEyJjw1SKCpiZbkNUKjRapJ8UE9fAGviSoeQYXku4wf+ai8UljQVgNmelfgTiSJJB7rsu6T8/stNaNW6VuC32OgsCxAXgv4w8c+1THc3G3jr3kMU9GllNN7AFWwwk16D9b2YhlJilCrrceiLhZ4sUDcLwbpGf+80pCdy/3SpzOp5SckPLQzFBXQ7+xMBJe0JiVzXeEfnUvF4usg9j3eIK81fBGIhIvxyqVwAq1uXMT/FWueZP8P8WgLzyxJW7OZMm6FX5EQqP4gHedF7t+uKKJZJpwxD9WFXfjdZJ13I6j/Cy9dYenf8fPllfadThw5mHZoRk2d8n2OoKEyi9wWWOUZ9wN3/fxLFZWj/uaLfCT2k9Q7nR+AT+v5s4NNO5QSp3sCPI4TFrNCVBAgGQTBnOhbs1AEue7dhKddDcDLFByL7vyw9o5mHsnFBfy2Gtu1GBeyjtDhmUukpB3EL8/y0DEJ3yyJbobIsFWioD2KjbUdVII5hCZ9tl148R2/ec7H3D+/Xj0jGu7Px372AEjhC8gFwv+bvoxL1Ce9A6/3+CtdlfP+PxRybwW/Px3HSc8hZG7/9s5xyK
/ZuE166uHNQhhO8c690lA6LYwKeDHjIEIB7tqeYjGd5tku+L38W0+9PBXtujBJyNQkdVvr/UuGCAYKA1/kyMF5DxSAk9BcC+6C9fs2z8rDvssBHBFxVwPqp7qdnRV6OYkOOhV2WD3DZ9+WDfZtKSZKNACwjuPxulsi1HipTuG2voyJzjuOt+G82pMky84358Z+UvFswUaB+FPKgDFRZHk6yhJvddjesIrmfxkb9mQrlLdGH57CW4mkkzY+TBBbFXOMztEThfXrEsW7RdQOX/cR+IPRuWq7dfKcZEtmdjlLhA11hiB9AVx2i4D9EMjy1l+82UeQcxGu8QuPCkm1XgXwlWc7IF0ZOTAmktYGHs0jCwJtMj2NHSj641QW6l+5gvUM3GQJz0RXWQkLfSqlJsaEI/a8kR/+jQXAV+o7gEkRf4BdjyBxE9KCEg6T6E8v4cR0vPYOjBgJtzsddI4XXhk94FsgvJN//Xw5gZaCf7mj+XyDR+OjeAIQxu49lYPu+OyTvUrWKRZzClw4oA+scS7FURcK6SuGh2JPfQkbyoyKg/F1c5L2Ugg5aZPUSjhOwM9+JxA/Vs+WNbo6LJBri9ouYdLYb4SXvuawCcBjLaWUF6/JKWqpryzgHwai3OSQICxf90RjG+ZyTrt3xMoUwxClnW286vPplFVeLmwsQ+h+db+JNtmeH0ZvldtHVOJb8K3z+JOuntcqhPP1Qes7SZ2daRJ5ukXyA73S2Ux9QalL0Br2xkBBA9ZeYY0fzY/lpDJkDP6FLKjUAz3ujQ2YDjVX8qEfHNFZoQOACnik9I2t7a9kulfUnl7mOjXBvrldXgTKw0elLnEbYTuoyJuacTZ3ycz0WwLiYc6ZQibya/3eSfDQxJtV5lMdhrf+A+xE1vW8FnnEFSQllHJo2eRRJqU16Dvfzgbw9zXNs95Gr6CHP+3H7C95zXeeU38H94G0q1zho8Ej0CSo2/ph7G/W+eUybMc6rD1lHWdk65t7betcOKQhW6XhM8rP8uXBHDZxHb8iD/D2f+6Gc7FqgDOyshlYpvVYpSbGhCd0O8elNANzj1EIH0ipevJGU/Rx6K+okP3TMfS/Q2g8gma8ONKC9xfW0gEAMN/XhOi1lpE1Lz0AsDEeyE7Xc5+x/mL8TAoQKIjuJ2+5qfU84SpAfXTyWFu2+TkNvXaVv0Br7jSP4/6pDin3FUsfiDAUens73PUcKj2e3jf43aFmGukg+T6JEEOTtged6vsBztffxOftSJ9P0PgBwU3/CMyDWkZxPCNSHL3h1QBzP0XHSc6w3vAC7sx17rEi+YO3b2QWP8IwU6+GZS0+DW9b4P9/zBMV5by6nV+g6Cfe3KxQlo7f91a+wgt9awCoKWfbHSt9dmO8VrGUjdj01fFikGGJUS9I6hA3Kd6Uy0dYWi9lgurOR9QYns4FLBOoUvAovelb1+ZJ3PW5FTwkaW7g1f+aR80zWL/R7wmWJvkaMrf86FYGF9LZYPMWG9Bg2pldTYRlH5RPW3WtsNF1X6eUSng4XZT+Lv2OkbxMPZfme9yPBQIGzUd/HOXkBcZQy2uFJWuoXBAh1IrevlfA0txNIdgfwHSxwjkHhCc15kKLy9Eg/fw/38N1/gs/2WYcwf05FBvVkRyp9GP+Ncd8Y5vaW5GeNBG6gVwZu9XtZHkizN89JUZl9roR8WSt9Ar/FQ6lkH+5Y578LnIeI/RlUsnBea8z1URf+UKaCrFBUlNCFHzg+kMvYKMW5YGHJ3yzR0JvVXgPUHEhf7rKmdpUjH0PLuEbcilH93c8PMkFUMmaz+hLFAtbk2bJ+P7V1B5Y6ZrsupkxDQ4CaS3hmt6xPLZBuCQndXmszkqePZ+ideMuziibz3EMCxPQyFZ63A+ckaeH5i6y8SOsObtmjqBRkJD9TnY+H+Qyb0AK8xiub5hiLtNqpey4xoovqFF7ncIcMrKcDBHaHsy/pvOOQJY5vDv26OzvvAwqDndp2ZsxzQcnBzHbbsq5d6
NxnP8m7631MjyF06wIfVoa3z9az2oCVPo1K7aFU6OxznMO6jzI8V9aPTH+ZyqXr3XiLRHozy+hG716/ooLgoqlIvv7A+ngg68WmrE9xAYb30usxjnVyRoF7rIkp16GiY9EVG4jQhZYSgt8QbIbpRnciQWXo9kODfZ/0nOjEupum8eNIO/mZ1wt33Q9oSaWdRnCJlD4U6kESjjseGNd4dgO8g8tpBdg5vrtpOaCBn+OlvZ3l83AZStc0elSKWZFX0QouZLV08nqjC3gNkpJ3f2Jq3qmyflBQgiSGYw9IeEz0clpoIL6DmS8ohugT/rX07IKwjeJRJDpEem9BpegR75x2PkMhFze8J6eTIBd75DGNhNEZ4/24hPfw83gTlbOJJJkEy+D2wPtZRpJHw7405tuBBXi8971cwW8t7n2jfqPvfU/nPFiIr0p+oZQQad8Xc715VC7WluF5g7W8jazvIreAgnUWyTLlKaCnsqxQJ7Zk+T7EfS0xyuIEltFeJMc3SMx/jsnXdgXydSYV03rWtWl8f3HBhVA4v0KPwhpHMYIy9XiRMprH72ZlActeoehpcWWz5Q3/3WrX0wZ7kUmiKjjC62w25NdrtVIoFJXG/KemayEo+tVCH3x0noiN/XlaCg87UigUCoVi47HQFQqFQqFQbHzQgAuFQqFQKJTQFQqFQqFQKKErFAqFQqGoCP4jwADQNvw20jA5ogAAAABJRU5ErkJggg==",
null,
"https://s3-us-west-2.amazonaws.com/c3po-media-dev/files/b4111f80-30ac-11e9-89dc-7f4675586158/cc27-75_original.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8947448,"math_prob":0.99999607,"size":1110,"snap":"2022-27-2022-33","text_gpt3_token_len":253,"char_repetition_ratio":0.14195298,"word_repetition_ratio":0.030150754,"special_character_ratio":0.22162162,"punctuation_ratio":0.08444444,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999838,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T23:26:36Z\",\"WARC-Record-ID\":\"<urn:uuid:1e7b701d-b657-4968-b437-f43699d09488>\",\"Content-Length\":\"62366\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1628bc86-7f8d-41cc-87ce-7e8148c25d7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:cbf837c4-aa01-4661-b559-b56eb9abf66d>\",\"WARC-IP-Address\":\"104.26.7.16\",\"WARC-Target-URI\":\"https://homework.cpm.org/category/MN/textbook/cc2mn/chapter/9/lesson/9.2.1/problem/9-81\",\"WARC-Payload-Digest\":\"sha1:KPMX5AZTFJ7EFOWODMEESTEF4DKFLNKW\",\"WARC-Block-Digest\":\"sha1:53ZH72Z2BDXMUTLMVYWY6PZHTRPON6FG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103344783.24_warc_CC-MAIN-20220627225823-20220628015823-00122.warc.gz\"}"} |
https://whatisconvert.com/166-fluid-ounces-in-teaspoons | [
"# What is 166 Fluid Ounces in Teaspoons?\n\n## Convert 166 Fluid Ounces to Teaspoons\n\nTo calculate 166 Fluid Ounces to the corresponding value in Teaspoons, multiply the quantity in Fluid Ounces by 6 (conversion factor); one US fluid ounce is exactly 6 teaspoons. In this case we should multiply 166 Fluid Ounces by 6 to get the equivalent result in Teaspoons:\n\n166 Fluid Ounces x 6 = 996 Teaspoons\n\n166 Fluid Ounces is equivalent to 996 Teaspoons.\n\n## How to convert from Fluid Ounces to Teaspoons\n\nThe conversion factor from Fluid Ounces to Teaspoons is 6. To find out how many Teaspoons there are in a given number of Fluid Ounces, multiply by the conversion factor or use the Volume converter above. One hundred sixty-six Fluid Ounces is equivalent to nine hundred ninety-six Teaspoons.\n\n## Definition of Fluid Ounce\n\nA fluid ounce (abbreviated fl oz, fl. oz. or oz. fl.) is a unit of volume. It is equal to about 28.41 ml in the imperial system or about 29.57 ml in the US system. The fluid ounce is sometimes referred to simply as an \"ounce\" in applications where its use is implicit.\n\n## Definition of Teaspoon\n\nA teaspoon (occasionally \"teaspoonful\") is a unit of volume, especially widely used in cooking recipes and pharmaceutic prescriptions. It is abbreviated as tsp. or, less often, as t., ts., or tspn. In the United States one teaspoon as a unit of culinary measure is 1⁄3 tablespoon, that is, 4.92892159375 ml; it is exactly 1 1⁄3 US fluid drams, 1⁄6 US fl oz, 1⁄48 US cup, and 1⁄768 US liquid gallon and 77⁄256 or 0.30078125 cubic inches. 
For nutritional labeling on food packages in the US, the teaspoon is defined as precisely 5 ml.\n\n## Using the Fluid Ounces to Teaspoons converter you can get answers to questions like the following:\n\n• How many Teaspoons are in 166 Fluid Ounces?\n• 166 Fluid Ounces is equal to how many Teaspoons?\n• How to convert 166 Fluid Ounces to Teaspoons?\n• How many is 166 Fluid Ounces in Teaspoons?\n• What is 166 Fluid Ounces in Teaspoons?\n• How much is 166 Fluid Ounces in Teaspoons?\n• How many tsp are in 166 fl oz?\n• 166 fl oz is equal to how many tsp?\n• How to convert 166 fl oz to tsp?\n• How many is 166 fl oz in tsp?\n• What is 166 fl oz in tsp?\n• How much is 166 fl oz in tsp?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85784066,"math_prob":0.97379696,"size":2183,"snap":"2021-04-2021-17","text_gpt3_token_len":625,"char_repetition_ratio":0.22441487,"word_repetition_ratio":0.08290155,"special_character_ratio":0.3032524,"punctuation_ratio":0.12803532,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98356444,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-20T21:58:12Z\",\"WARC-Record-ID\":\"<urn:uuid:89215d62-8a91-4e66-a224-0d28982af34d>\",\"Content-Length\":\"31684\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:beccd46e-a860-4edf-86a6-a2040c0e910e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7635efdb-2fd6-4f6d-b9f9-886728bb96b8>\",\"WARC-IP-Address\":\"172.67.211.83\",\"WARC-Target-URI\":\"https://whatisconvert.com/166-fluid-ounces-in-teaspoons\",\"WARC-Payload-Digest\":\"sha1:NYYT4JZ3IABPWH5SAALXR5CYNEG4F5DC\",\"WARC-Block-Digest\":\"sha1:37PUWIYDAPKFAQQQV5P46WQWGE4N5PH6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703522133.33_warc_CC-MAIN-20210120213234-20210121003234-00776.warc.gz\"}"} |
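The arithmetic above is a single exact multiplication, so it is easy to check in code. A minimal Python sketch (the function name is illustrative, not from the converter site):

```python
# 1 US fluid ounce = 2 tablespoons = 6 teaspoons, exactly.
TSP_PER_FL_OZ = 6

def fl_oz_to_tsp(fl_oz):
    """Convert US fluid ounces to US teaspoons."""
    return fl_oz * TSP_PER_FL_OZ

print(fl_oz_to_tsp(166))  # 996
```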
http://imathworksheets.com/geometry-worksheets-2complementary-angles-worksheets/area-worksheets/areas-of-parallelograms-worksheets/ | [
"# Areas of Parallelograms Worksheets\n\nHere at imathworksheets.com, we provide students and teachers with plenty of free area worksheets that can be incorporated both inside and outside of the classroom. This particular set of area worksheets focuses on calculating the area of a parallelogram.  In this series, you or your students will use a formula to calculate the area of a parallelogram by utilizing its vertical height and the length of its base.\n\nEach of our worksheets comes with an accurate, easy-to-use answer key so that either teachers or students can check the assignment. Each set of problems is also easily customized so that you can change the difficulty level of the problems by adding decimals or fractions.\n\nBy the time your class has completed this extensive series, they will undoubtedly be experts at finding the area of a parallelogram.\n\nAreas of Parallelograms Worksheet 1 – Here is a nine problem worksheet that will allow your students to practice calculating the area of a parallelogram. Each exercise provides a drawing of the parallelogram as well as the height and the width of the base. These problems feature simple single digits so that your students can focus on finding the correct area instead of struggling with the multiplication.\nAreas of Parallelograms Worksheet 1 RTF\nAreas of Parallelograms Worksheet 1 PDF\n\nAreas of Parallelograms Worksheet 2 – Here is another nine problem worksheet that will allow your students to practice calculating the area of a parallelogram. Each exercise provides a drawing of the parallelogram as well as the height and the width of the base. 
These problems feature simple single digits so that your students can focus on finding the correct area instead of struggling with the multiplication.\nAreas of Parallelograms Worksheet 2 RTF\nAreas of Parallelograms Worksheet 2 PDF\nPreview Areas of Parallelograms Worksheet 2 in Your Browser\n\nAreas of Parallelograms Worksheet 3 – Here is a nine problem worksheet that will allow your students to practice calculating the area of a parallelogram. Each exercise provides a drawing of the parallelogram as well as the height and the width of the base. These problems introduce some two-digit numbers into the mix.\nAreas of Parallelograms Worksheet 3 RTF\nAreas of Parallelograms Worksheet 3 PDF\nPreview Areas of Parallelograms Worksheet 3 in Your Browser\n\nAreas of Parallelograms Worksheet 4 – Here is a nine problem worksheet that will allow your students to practice calculating the area of a parallelogram. Each exercise provides a drawing of the parallelogram as well as the height and the width of the base. These problems introduce some larger two-digit numbers into the mix.\nAreas of Parallelograms Worksheet 4 RTF\nAreas of Parallelograms Worksheet 4 PDF\nPreview Areas of Parallelograms Worksheet 4 in Your Browser\n\nAreas of Parallelograms Worksheet 5 – Here is a nine problem worksheet that will allow your students to practice calculating the area of a parallelogram. Each exercise provides a drawing of the parallelogram as well as the height and the width of the base. These problems introduce some two-digit numbers into the mix.\nAreas of Parallelograms Worksheet 5 RTF\nAreas of Parallelograms Worksheet 5 PDF\nPreview Areas of Parallelograms Worksheet 5 in Your Browser\n\n1.",
null,
"#### rty\n\n/ April 4, 2013\n\nLove it\n\n•",
null,
"/ November 5, 2013\n\nThanks.\n\n2.",
null,
"#### buggy\n\n/ May 28, 2013\n\nhi hey this is a awesome webstie\n\n•",
null,
"/ November 5, 2013\n\nThank you. I’ll be adding more worksheets frequently. Stay tuned!"
] | [
null,
"http://1.gravatar.com/avatar/b1ea4edd4a00ed55ce92ca5ad4139ef6",
null,
"http://1.gravatar.com/avatar/3c95be8988c90fec882a35f146416aaf",
null,
"http://1.gravatar.com/avatar/183664d2e31c6cb658349895eeff39fa",
null,
"http://1.gravatar.com/avatar/3c95be8988c90fec882a35f146416aaf",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8666914,"math_prob":0.77996784,"size":3568,"snap":"2019-43-2019-47","text_gpt3_token_len":780,"char_repetition_ratio":0.23512907,"word_repetition_ratio":0.5486111,"special_character_ratio":0.18806054,"punctuation_ratio":0.055374593,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9908038,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-16T01:41:27Z\",\"WARC-Record-ID\":\"<urn:uuid:87acbf7b-8818-42ce-b3d4-bfc04c626fa5>\",\"Content-Length\":\"65034\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eba83fc7-8f23-454a-9a7a-ce40f4b3fae2>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1faab16-4619-4037-88a1-ab1867abe86c>\",\"WARC-IP-Address\":\"192.195.77.149\",\"WARC-Target-URI\":\"http://imathworksheets.com/geometry-worksheets-2complementary-angles-worksheets/area-worksheets/areas-of-parallelograms-worksheets/\",\"WARC-Payload-Digest\":\"sha1:5453G2AFNKE76ZTV656UJUMN6LPPHWRE\",\"WARC-Block-Digest\":\"sha1:GXJLLIK5ZFPLMJPBPFKWJBNGNTUU5CKQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668716.69_warc_CC-MAIN-20191116005339-20191116033339-00528.warc.gz\"}"} |
https://codegolf.stackexchange.com/questions/49033/who-wins-a-spades-trick | [
"# Who wins a Spades trick\n\nWrite code to determine who wins a four-card trick in a game of Spades. Fewest bytes wins.\n\nThe input is a string that lists the four cards played in sequence like TH QC JH 2H (Ten of Hearts, Queen of Clubs, Jack of Hearts, Two of Hearts). A card is given by two characters: a suit from CDHS and a value from 23456789TJQKA. You are guaranteed that the input is valid and the cards are distinct.\n\nYou should output a number 1, 2, 3, or 4 for the winner of the trick. In the example TH QC JH 2H, the jack of hearts wins the trick, so you should output 3.\n\nYour input and output must be exactly as described, except trailing newlines are optional.\n\nHere are the Spades rules for winning a trick. The winning card is the highest card of the four, with some caveats. Spades is the trump suit, so any spade outranks any non-spade. The suit of the first card played is the lead suit, and only cards of that suit or spades are eligible to win the trick. Cards of the same suit are compared by their values, which are given in increasing order as 23456789TJQKA.\n\nTest cases:\n\nTH QC JH 2H\n3\nKC 5S QS 9C\n3\nQD 2D TD 5D\n1\n9S 5D AD QS\n4\n3D 4C 3H JH\n1\n9S 4S TS JS\n4\n5H 9H 2C AD\n2\n5S 4C 3H QD\n1\n2H 2S KH AH\n2\n\n\n# Pyth, 28 27 25 bytes\n\nJ"KTAZ"hxcz)eo_XN+@z1JJcz\n\n\nTry it online: Demonstration or Test Suite (first 4 chars are the test suite construct)\n\nThanks to @isaacg for a trick, which saved 2 chars.\n\nThe main idea is to modify the chars of each hand in such a way that the winning hand has the maximal value.\n\nThe value order 23456789TJQKA is already nearly sorted. I just have to replace T with A, K with T and A with Z, resulting in 23456789AJQTZ.\n\nThe order of the suits CDHS is for the most part not really important. S, the most powerful suit, already has the maximal value. What is important is to give the suit of the first hand the second most powerful value. 
So I translate this suit into K.\n\nAll hands also have to be read in reverse, since the suit is more powerful than the value.\n\n implicit: z = input string\nJ"KTAZ" J = "KTAZ"\no cz orders the hands N of split(z) by:\n_ the reversed of\nXN+@z1JJ N, where the values z+J are replaced by J\ne last element (winning hand)\nxcz) index of the winning hand in split(z)\nh + 1\n\n• I give up, well played :P – orlp Apr 20 '15 at 11:14\n• I don't think the .e stuff is worth it - using o is 1 character shorter as I figure it. – isaacg Apr 20 '15 at 11:17\n• @isaacg Your right. Funny thing is, I had the 27 solution before the .e 28 solution. But the 27 solution ended with a ) and therefore also had 28 bytes. :oops: – Jakube Apr 20 '15 at 11:22\n• I thought of a way to save another 2 characters: Translate from +@z1"KTA" to "KTAZ", but instead of using the strings directly use J"KTAZ" at the beginning and +@z1J to J. – isaacg Apr 20 '15 at 12:35\n• @isaacg Very clever. Thanks. Btw. I thinking quite a while about making the 3rd argument of X optional (Only if a and b are strings). But I'm not really sure, if Xab) should evaluate to Xab_b (inverted b, would be nice for stuff like Xa"</\>") or Xab+tbhb (b shifted). What your preference? – Jakube Apr 20 '15 at 12:53\n\n# CJam, 34 33 bytes\n\nlS/_{)2$0='S+\#\"TQKA"_$er+}$W>#)\n\nAlgorithm\n\nThe logic is simple. I have a custom sort going on, in which I first give priority to the second character representing the suit. In this, Spades gets the highest priority and then the first thrown suit. Rest all are -1. Then I sort on the first character with swapping of T with A and Q with K to have lexical sorting.\n\nCode explanation\n\nFirst off, let's see what is the lexical order of the face values of the cards: "23456789TJQKA"$\n\n\n23456789AJKQT\n\nSo, all numbers are at correct position. J is also at correct position. 
We need to swap K and Q and T and A to get lexical order.\n\nlS/_{)2$0='S+\#\"TQKA"_$er+}$W>#) lS/ "Read an input line and split on spaces"; _{ }$ "Copy the array and sort it using this custom logic";\n) "Take off the last character of each hand.";\n2$0= "Get the suit of the first hand"; 'S+ "Add Spades suit to it"; \# "Get the index of the current hand suit. 1 for Spades, 0 for first hand suit, -1 otherwise"; \ "Put face value of this hand on top of stack"; "TQKA" "Put string TQKA on stack"; _$ "Copy and sort. This essentially reverses the string\nTQKA to form AKQT. This is 1 byte shorter than _W%";\ner+ "Swap T with A and K with Q and add to the\nsuit index calculated previously";\n"After the above tuple, sorting will automatically\nconvert the tuple to string and sort lexically";\nW> "Get the array containing only the last element";\n#) "Get the index of this hand in original set and\nincrement to convert it to 1 based";\n\n\nTry it online here\n\n# JavaScript (ES6), 112\n\nScan the list and return the position of the highest value found.\n\nRun snippet to test (in Firefox)\n\nF=t=>t.split(' ').map((c,i)=>(n='23456789TJQKA'.search(c[0])+(c[1]>'H'?40:c[1]==t[1]&&20))>m&&(m=n,r=i+1),m=0)|r\n\nC.innerHTML=['TH QC JH 2H','KC 5S QS 9C','QD 2D TD 5D','9S 5D AD QS','3D 4C 3H JH','9S 4S TS JS','5H 9H 2C AD','5S 4C 3H QD'].map(h=>h+' -> '+F(h)).join('\\n')\n<pre id=C></pre>\n\n# Perl, 73 bytes\n\n#!perl -pl\n/\\B./;s/$&/P/g;y/TKA/IRT/;$_=reverse;@a=sort split;/$a[-1]/;$_=4-"@-"/3\n\n\nTry me.\n\nConverts the card names so the game value order follows the alphabetical order, then picks the highest by sorting and looks for it in the original string for position.\n\n# Ruby, 59+2=61\n\nWith command-line flags na, run\n\np (1..4).max_by{|i|$F[i-1].tr($_+'SJQKA','a-z').reverse}\n\n\n# J, 47 bytes\n\n1+[:(i.>./)_3+/\\16!@-]i.~'SXAKQJT9876543'1}~1&{\n\n\nUsage:\n\n (1+[:(i.>./)_3+/\\16!@-]i.~'SXAKQJT9876543'1}~1&{) 'TH QC 9S 8S'\n3\n\n\nMethod:\n\n• For 
every input char we assign a value based in its position in the 'S[second char of input]AKQJT9876543' string. Non-found chars get the value last position + 1 implicitly. Further characters have much less value (value=(16-position)!).\n• Compute the sum for the 3 input-char triplet and one duplet (e.g. TH_ QC_ 9S_and 8S).\n• Choose the 1-based index of the maximal value.\n\n(J unfortunately can't compare chars or strings directly. It can only check for their equality which ruled out some other approaches for this challenge.)\n\nTry it online here.\n\nC#, 237\n\nusing System;namespace S{class P{static void Main(string[] a){var V=\"23456789TJQKA\";int x=0;int y=0;var z=a;for(int i=0;i<4;i++){int q=V.IndexOf(a[i])+2;var w=a[i];q*=w==z?1:w=='S'?9:0;if(q>y){x=i;y=q;}}Console.Write(x+1);}}}\n\n\nHow it works: Iterate each hand to calculate the \"value\" of the card.. store the highest valued index. A cards value is determined as the rank of the card multiplied by 0 if it is not a spade or the opening suit, 1 if it is the opening suit and 9 if it is a spade but not the opening suit. (9 choosen b/c 2*9=18> A=14 & 9 is a single char)\n\n# Pyth, 36 33 bytes\n\nKczdhxKeo,x,ehK\\SeNXhN\"TKA\"\"AYZ\"K\n\n\nFairly straightforward approach, uses sorting with a custom key function, then finds the index of the highest value.\n\n• Did you try avoiding the sort and just find the highest value? In JavaScript that turned out to be shorter – edc65 Apr 20 '15 at 10:09\n• @edc65 In Pyth there is no operation to find the highest value, just to sort. But with one character (e) you can get the last element, so finding the highest value is just sorting followed by getting the last element. – orlp Apr 20 '15 at 11:00\n• Downvoter, care to explain? – orlp Apr 21 '15 at 13:58\n\n# Pyth, 31 bytes\n\nhxczdeo}\\SNo}@z1Zox\"TJQKA\"hNScz\n\n\nTry it here.\n\nHow it works:\n\nThe proper way to read this procedure is back to front. 
The procedure sorts the desired card to the end of the list, then pulls it out and finds its index in the original list.\n\n• cz: This generates the list of card strings. c, chop, is normally a binary function (arity 2), but when called on only one input, serves as the .split() string method.\n\n• S: This applies the normal sorting behavior, which sorts lower numbered cards before higher ones.\n\n• ox\"TJQKA\"hN: This orders the cards by the index (x) in the string \"TJQKA\" of the first letter of the card (hN). For cards with numbers, the first letter is not found, giving the result -1. Since Pyth's sorting function is stable, the numbered cards' order is not affected.\n\n• o}@z1Z: Next we order by whether the suit of the first card played (@z1) is in the card in question. Since True sorts behind False, this sends cards of the lead suit to the back.\n\n• o}\\SN: This is the same as the sort before, but it sorts on whether the letter S is in the card, sending spades to the back.\n\n• hxczde: This extracts the last card sorted this way (e), finds its index in the list of cards (xczd) and increments by 1 (h), giving the desired player location."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92998815,"math_prob":0.87923044,"size":1189,"snap":"2020-34-2020-40","text_gpt3_token_len":375,"char_repetition_ratio":0.11223628,"word_repetition_ratio":0.0,"special_character_ratio":0.28090832,"punctuation_ratio":0.10071942,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.954444,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T17:09:38Z\",\"WARC-Record-ID\":\"<urn:uuid:3936c78c-e834-4c21-b591-8db6724031a4>\",\"Content-Length\":\"223012\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:11c8f660-54f0-49cc-8115-287c94a89dbd>\",\"WARC-Concurrent-To\":\"<urn:uuid:c252503f-c247-4d2d-90de-a451c78ea784>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://codegolf.stackexchange.com/questions/49033/who-wins-a-spades-trick\",\"WARC-Payload-Digest\":\"sha1:L6TJ7YKJXI5LSU3ZFDHMEADIRGP6H3AC\",\"WARC-Block-Digest\":\"sha1:U3UBZCIERSCZJ5MK2CDSO4YYQ45GE6PS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740929.65_warc_CC-MAIN-20200815154632-20200815184632-00145.warc.gz\"}"} |
https://statisticsglobe.com/remove-values-lesser-greater-than-5th-95th-percentiles-r | [
"# Remove Values Lesser & Greater than 5th & 95th Percentiles in R (2 Examples)\n\nOn this page, I’ll show how to drop values lesser and greater than the 5th and 95th percentiles in R programming.\n\nImportant note: Removing certain values (i.e. outliers) in data sets is a very controversial topic. Make sure that the removal of any observations is theoretically justified. You can find more info on outlier detection and removal here.\n\nSo now the part you have been waiting for – the exemplifying R syntax…\n\n## Example 1: Remove Values Below & Above 5th & 95th Percentiles\n\nThis example shows how to delete values above and below a certain percentile in a numeric vector object.\n\nFor this, we first have to create an example vector:\n\n```x <- c(1, 3, 7, 100, 5, 5, - 987, 6) # Create example vector\nx # Print example vector\n# [1] 1 3 7 100 5 5 -987 6```\n\nNext, we have to calculate the 5th and 95th percentiles of this vector using the quantile function:\n\n```x_quantiles <- quantile(x, c(0.05, 0.95)) # Calculate 5th & 95th percentiles\nx_quantiles # Print 5th & 95th percentiles\n# 5% 95%\n# -641.20 67.45```\n\nIn the next step, we can use those percentile thresholds to subset our vector object:\n\n```x_subset <- x[x > x_quantiles[1] & # Drop values below/above percentiles\nx < x_quantiles[2]]\nx_subset # Print subset of values\n# [1] 1 3 7 5 5 6```\n\nThe previous R code has created a new vector object called x_subset, where we have retained only values greater than the 5th percentile and lesser than the 95th percentile.\n\n## Example 2: Remove Data Frame Rows Below & Above 5th & 95th Percentiles\n\nIn this example, I’ll show how to remove rows of a data frame where the value in a certain column is below or above the 5th & 95th percentile.\n\nFirst, let’s create some example data:\n\n```data <- data.frame(x1 = c(999, 1:4, - 777), # Create example data frame\nx2 = LETTERS[1:6])\ndata # Print example data frame```",
null,
"Table 1 shows the output of the previous R programming code – a data frame containing two columns.\n\nLet’s assume that we want to remove the rows with the largest and smallest values in the column x1. Then, we first have to identify the 5th and 95th percentile of this variable:\n\n```data_x1_quantiles <- quantile(data$x1, c(0.05, 0.95)) # Calculate 5th & 95th percentiles\ndata_x1_quantiles # Print 5th & 95th percentiles\n# 5% 95%\n# -582.50 750.25```\n\nIn the next step, we can remove all rows where the value in the column x1 is too small or too large:\n\n```data_subset <- data[data$x1 > data_x1_quantiles[1] & # Drop rows below/above percentiles\ndata$x1 < data_x1_quantiles[2], ]\ndata_subset # Print subset of values```",
null,
"After executing the previously shown R programming code, the data frame subset without outliers shown in Table 2 has been created.\n\n## Video & Further Resources\n\nI have recently released a video on my YouTube channel, which shows the R syntax of this article. You can find the video below:",
null,
"Furthermore, you may want to read some of the other articles on https://www.statisticsglobe.com/. A selection of tutorials about topics such as graphics in R, missing data, and vectors can be found below:\n\nAt this point you should know how to remove values lesser and greater than the 5th and 95th percentiles in R. If you have any additional questions, kindly let me know in the comments."
] | [
null,
"https://statisticsglobe.com/wp-content/uploads/2022/04/table-1-data-frame-remove-values-lesser-greater-than-5th-95th-percentiles-r.png",
null,
"https://statisticsglobe.com/wp-content/uploads/2022/04/table-2-data-frame-remove-values-lesser-greater-than-5th-95th-percentiles-r.png",
null,
"https://statisticsglobe.com/wp-content/uploads/2020/09/YouTube-Tutorial-Preload-Thumbnail-R-Programming-Video.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8066923,"math_prob":0.85675,"size":3704,"snap":"2023-40-2023-50","text_gpt3_token_len":956,"char_repetition_ratio":0.15432432,"word_repetition_ratio":0.08116386,"special_character_ratio":0.27051836,"punctuation_ratio":0.10599721,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99124694,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T19:55:59Z\",\"WARC-Record-ID\":\"<urn:uuid:14b083ce-9907-40ae-b48b-ca08c2138d9e>\",\"Content-Length\":\"210706\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c2f518ad-7c48-439b-92e2-441e0817a478>\",\"WARC-Concurrent-To\":\"<urn:uuid:59962811-7cd0-4ced-b8a0-b1988925866e>\",\"WARC-IP-Address\":\"217.160.0.159\",\"WARC-Target-URI\":\"https://statisticsglobe.com/remove-values-lesser-greater-than-5th-95th-percentiles-r\",\"WARC-Payload-Digest\":\"sha1:YGOA6BHIRHEEYXRYD5RWY6CBSFYUP4UE\",\"WARC-Block-Digest\":\"sha1:3AE6DVFOBFS7DYXPQ5MKSPXN544MFGLE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510219.5_warc_CC-MAIN-20230926175325-20230926205325-00348.warc.gz\"}"} |
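The same trimming logic can be sketched outside of R. The helpers below (hypothetical names, not from the article) reproduce R's default quantile method — type 7 linear interpolation — so they yield the same cutoffs as the article's examples:

```python
def percentile(values, p):
    """p-th quantile via linear interpolation on sorted data (R's default, type 7)."""
    s = sorted(values)
    h = (len(s) - 1) * p          # fractional rank, 0-based
    lo = int(h)                   # floor of the rank
    frac = h - lo
    if lo + 1 < len(s):
        return s[lo] + frac * (s[lo + 1] - s[lo])
    return s[lo]

def trim_to_percentiles(values, low=0.05, high=0.95):
    """Keep only values strictly between the low and high percentiles."""
    cut_lo, cut_hi = percentile(values, low), percentile(values, high)
    return [v for v in values if cut_lo < v < cut_hi]

x = [1, 3, 7, 100, 5, 5, -987, 6]
print(percentile(x, 0.05), percentile(x, 0.95))  # approx. -641.2 and 67.45, as in R
print(trim_to_percentiles(x))                    # [1, 3, 7, 5, 5, 6]

# Row-wise filtering, mirroring Example 2 of the article:
rows = [{"x1": v, "x2": s} for v, s in zip([999, 1, 2, 3, 4, -777], "ABCDEF")]
col = [r["x1"] for r in rows]
lo, hi = percentile(col, 0.05), percentile(col, 0.95)
kept = [r for r in rows if lo < r["x1"] < hi]    # rows B, C, D, E survive
```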
https://ask-public.com/4200/ | [
"# What is Mason's gain formula?\n\nMason's Gain Formula",
null,
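The formula itself appears only as an image on the original page; for reference (standard textbook statement, not quoted from the page), Mason's gain formula for the overall transfer function of a signal-flow graph is:

```latex
T = \frac{\sum_{k} P_k \,\Delta_k}{\Delta},
\qquad
\Delta = 1 - \sum_i L_i + \sum_{\substack{i,j \\ \text{non-touching}}} L_i L_j - \cdots
```

where \(P_k\) is the gain of the \(k\)-th forward path, the \(L_i\) are the individual loop gains, \(\Delta\) is the graph determinant, and \(\Delta_k\) is \(\Delta\) evaluated for the part of the graph that does not touch path \(k\).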
"## Related Questions\n\nWhat will happen to the gain margin if the gain of the open loop system is doubled? A) Doubled B) Becomes half C) Is not affected D) Becomes one-fourth\nLast Answer : What will happen to the gain margin if the gain of the open loop system is doubled? A) Doubled B) Becomes half C) Is not affected D) Becomes one-fourth",
null,
"A transfer function has a second order denominator and constant gain in the numerator\nLast Answer : A transfer function has a second order denominator and constant gain in the numerator the system has two zeros at infinity",
null,
"If the gain of the open loop system is doubled, the gain margin (1) Is not affected (2) Gets doubled (3) Becomes half (4) Becomes one-fourth\nLast Answer : If the gain of the open loop system is doubled, the gain margin Becomes half",
null,
"The gain of an ideal amplifier must be\nLast Answer : The gain of an ideal amplifier must be infinity",
null,
"How to Make a Weight Gain Diet Plan\nLast Answer : Before you can mold your body into a defined and well-chiseled frame, you have to gain some mass, which acts like a canvas to a painter or a slab of clay to a sculptor. Gaining weight ... a nutritionist. He or she will examine your physique and determine the best components of the ideal food ratio.",
null,
"How to Gain Muscle\nLast Answer : How to Gain Muscle Different dietary supplements, pills and drinks may promise you to grow muscles in a fast rate, but don't be fooled: These aren't the answer! Gaining muscle is not ... optimize the stress on your muscles, burns more fat and most importantly, builds your muscle even further.",
null,
"How to Gain Weight ?\nLast Answer : How to Gain Weight It's quite strange that people who are very slim can be just as embarrassed about taking their tops off in public as people who are overweight. There's just something so appealing ... . This is your number one key to bulking up and getting the muscles you've always dreamed of!",
null,
"How to Make a Weight Gain Diet Plan ?\nLast Answer : Before you can mold your body into a defined and well-chiseled frame, you have to gain some mass, which acts like a canvas to a painter or a slab of clay to a sculptor. Gaining weight ... a nutritionist. He or she will examine your physique and determine the best components of the ideal food ratio.",
null,
"Define gain and bandwidth of an amplifier.\nLast Answer : Bandwidth: The range of frequencies over which the voltage gain is equal to or greater than 70.7% of its maximum value. Gain: The ratio of the output parameter (voltage/current/power) to the input parameter (voltage/current/power) of an amplifier is known as gain. It is denoted by the letter A",
null,
"A differential amplifier has a differential gain of 28000 and CMRR is 60 dB. What will be the value of common mode gain? A) Ac = 0.125 B) Ac = 0.33 C) Ac = 3 D) Ac = 28\nLast Answer : A differential amplifier has a differential gain of 28000 and CMRR is 60 dB. What will be the value of common mode gain? A) Ac = 0.125 B) Ac = 0.33 C) Ac = 3 D) Ac = 28",
null,
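The arithmetic behind this answer can be checked directly; a small sketch using the numbers from the question (CMRR in dB is 20·log10 of the ratio Ad/Ac):

```python
A_d = 28_000              # differential gain, from the question
cmrr_db = 60              # CMRR in dB, from the question

cmrr = 10 ** (cmrr_db / 20)   # dB -> plain ratio: 10^(60/20) = 1000
A_c = A_d / cmrr              # common-mode gain Ac = Ad / CMRR
print(A_c)                    # -> 28.0, i.e. option D
```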
"If the base current of a BJT is 250 µA and emitter current is 15 mA, then the common base current gain will be A) 0.98 B) 0.41 C) 59 D) 55\nLast Answer : If the base current of a BJT is 250 µA and emitter current is 15 mA, then the common base current gain will be 0.98",
null,
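The 0.98 quoted above follows from α = IC/IE with IC = IE − IB; a quick numeric check (it also shows where the distractor option 59 — the common-emitter gain β — comes from):

```python
I_B = 250e-6    # base current: 250 uA
I_E = 15e-3     # emitter current: 15 mA

I_C = I_E - I_B           # collector current = 14.75 mA
alpha = I_C / I_E         # common-base current gain
beta = I_C / I_B          # common-emitter current gain (distractor option C)

print(round(alpha, 2), round(beta))  # -> 0.98 59
```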
"The unity gain bandwidth of an inverting amplifier is 10 MHz What would be the bandwidth if the gain is increased to 10 V/V? A) 100 MHz B) 1 MHz C) 10 MHz D) 1 kHz\nLast Answer : The unity gain bandwidth of an inverting amplifier is 10 MHz What would be the bandwidth if the gain is increased to 10 V/V? A) 100 MHz B) 1 MHz C) 10 MHz D) 1 kHz",
null,
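This follows from the constant gain–bandwidth product of a single-pole op-amp model (the assumption the question implies):

```python
gbw = 10e6    # unity-gain bandwidth: 10 MHz
gain = 10     # closed-loop gain: 10 V/V

bw = gbw / gain   # gain x bandwidth stays constant
print(bw)         # -> 1000000.0, i.e. 1 MHz (option B)
```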
"A differential amplifier has a differential gain of 20,000, CMRR : 80 dB. The common mode gain is given by\nLast Answer : A differential amplifier has a differential gain of 20,000, CMRR : 80 dB. The common mode gain is given by 2",
null,
"An amplifier has a voltage gain of 120. To reduce distortion, 10% negative feedback is employed. The gain of the amplifier with feedback is (1) 141 (2) 92.3 (3) 9.23 (4) 1.41\nLast Answer : An amplifier has a voltage gain of 120. To reduce distortion, 10% negative feedback is employed. The gain of the amplifier with feedback is 9.23",
null,
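The 9.23 above is the standard closed-loop gain Af = A / (1 + βA); a quick check with the question's numbers:

```python
A = 120        # open-loop voltage gain
beta = 0.10    # feedback fraction (10% negative feedback)

A_f = A / (1 + beta * A)   # 120 / 13
print(round(A_f, 2))       # -> 9.23
```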
"The voltage gain of a common -source JFET amplifier depends up on its (1) Input impedance (2) Amplification factor (3) Dynamic drain resistance (4) Drain load resistance",
null,
"The effect of a finite gain of an operational amplifier used in an integrator is that (A) it would not integrate (B) the slope of the output will vary with time (C) the final value of the output voltage will reduce (D) there will be instability in the circuit\nLast Answer : The effect of a finite gain of an operational amplifier used in an integrator is that there will be instability in the circuit",
null,
"Last Answer : The voltage gain of triode depends on plate voltage.",
null,
"What is the gain of an op amp?",
null,
"What is the current gain?\nLast Answer : Current gain β = Ic/Ib",
null,
"Which configuration has the lowest current gain ?\nLast Answer : Common base configuration has the lowest current gain.",
null,
"If they sold baby formula that actually looked like breast milk, would it be successful?\nLast Answer : answer:I don't know whether it will be successful anywhere, but I doubt it will sell in my country. Many of us don't care whether it's good or not, just the sight of sexual ... sometimes my perceptions are still influential, and I still buy some good-according-to-the-package-only products :P",
null,
"Can someone show me, and explain, the math formula that was responsible for the subprime mortgage scandal?",
null,
"Formula for evolution?\nLast Answer : answer:There is a formula: Genetic Variation + Random Mating + Mutation + Natural Selection = Evolution",
null,
"Spreadsheet help with creating a formula?",
null,
"Formula to calculate the effort needed to pull a weight on two wheels up an incline?\nLast Answer : isn't the formula the same as escape velocity?",
null,
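The quoted answer points at escape velocity, which is unrelated. For pulling a wheeled load up an incline at steady speed, the usual first estimate (neglecting rolling resistance; all numbers below are hypothetical, not from the question) is F ≈ m·g·sin θ:

```python
import math

m = 50.0                    # load mass in kg (hypothetical)
theta = math.radians(10.0)  # incline angle (hypothetical)
g = 9.81                    # gravitational acceleration, m/s^2

F = m * g * math.sin(theta)   # force along the slope for steady motion
print(round(F, 1))            # -> 85.2 (newtons)
```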
"Acceleration formula",
null,
"Time constant for RC circuit formula τ = RC Where, τ = time constant\nLast Answer : Time constant for RC circuit formula τ = RC Where, τ = time constant",
null,
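As a numeric illustration of τ = RC (component values are hypothetical, not from the page):

```python
R = 1_000      # resistance in ohms (hypothetical)
C = 100e-6     # capacitance in farads (100 uF, hypothetical)

tau = R * C    # time constant in seconds: about 0.1 s
print(tau)
```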
"Wye and delta conversion formula\nLast Answer : Wye and delta conversion formula",
null,
"Voltage formula",
null,
"Self inductance formula\nLast Answer : Self inductance formula",
null,
"RMS value formula\nLast Answer : RMS value formula",
null,
"Resistance formula\nLast Answer : The formula for resistance is, Where, R = resistance in Ohms ρ = resistivity in ohm - metre l = length A = cross sectional area",
null,
"Permeability formula",
null,
"Ohm's law formula\nLast Answer : Ohm's law formula",
null,
"Magnetomotive force formula\nLast Answer : Magnetomotive force formula",
null,
"Flux density formula\nLast Answer : Flux density formula",
null,
"Field intensity formula\nLast Answer : Field intensity formula",
null,
"Energy stored in electrostatic field of capacitance formula\nLast Answer : Energy stored in electrostatic field of capacitance formula",
null,
"Electric power formula\nLast Answer : Electric power formula",
null,
"Current formula",
null,
"Conductivity formula",
null,
"Conductance formula",
null,
"Capacitive reactance\nLast Answer : Capacitive reactance formula",
null,
"Capacitive current\nLast Answer : Capacitive current formula",
null,
"Define magnetic flux density and state its formula.\nLast Answer : The magnetic flux per unit area is called as magnetic flux density. Magnetic flux density is denoted by B. The formula for magnetic flux density is Weber per metre square or Tesla. Mathematically ... the unit of magnetic flux density is Weber per metre square which is also called as Tesla.",
null,
"The following formula is correct: (A) Load factor=Average load for a period/ Peak load for the same period (B) Load factor=Average load for a period x Peak load for the same period (C) Load factor=Average load for a period + Peak load for the same period (D) All of the above",
null,
"What is the formula of torque?",
null,
"Last Answer : R = V ÷ I, where R = resistance value (ohm), V = potential difference between two points (volt), I = current (ampere)",
null,
"What is the formula for resistivity?\nLast Answer : Resistivity is a property of a material. It is denoted by ρ. R = ρL/A is the formula, where resistance is directly proportional to length and inversely proportional to the cross-sectional area of ... the resistance offered by a conductor to the flow of current is called specific resistance or resistivity.",
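The proportionalities described in this answer can be illustrated numerically (copper's textbook resistivity; the wire dimensions are hypothetical):

```python
rho = 1.68e-8   # resistivity of copper, ohm-metre (textbook value)
L = 10.0        # wire length in metres (hypothetical)
A = 1.0e-6      # cross-sectional area in m^2 (= 1 mm^2, hypothetical)

R = rho * L / A                       # R = rho * L / A
R_double_length = rho * (2 * L) / A   # doubling L doubles R

print(R, R_double_length)
```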
null
] | [
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86798334,"math_prob":0.9624093,"size":11679,"snap":"2022-05-2022-21","text_gpt3_token_len":3236,"char_repetition_ratio":0.2482227,"word_repetition_ratio":0.42717996,"special_character_ratio":0.26534805,"punctuation_ratio":0.09361702,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9957104,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102],"im_url_duplicate_count":[null,1,null,6,null,2,null,2,null,3,null,3,null,3,null,1,null,3,null,4,null,3,null,3,null,2,null,2,null,4,null,4,null,4,null,2,null,3,null,2,null,1,null,3,null,3,null,4,null,3,null,3,null,1,null,5,null,1,null,2,null,2,null,5,null,2,null,2,null,1,null,1,null,7,null,1,null,7,null,2,null,1,null,4,null,6,null,4,null,3,null,3,null,3,null,1,null,2,null,2,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-22T20:49:34Z\",\"WARC-Record-ID\":\"<urn:uuid:b9255fd2-c2c9-4039-ac85-caf5bcd14a27>\",\"Content-Length\":\"120115\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:42443afc-7436-4683-9718-6538694498dc>\",\"WARC-Concurrent-To\":\"<urn:uuid:0ef31b24-366e-4909-b97b-782ab74cab6b>\",\"WARC-IP-Address\":\"217.21.77.132\",\"WARC-Target-URI\":\"https://ask-public.com/4200/\",\"WARC-Payload-Digest\":\"sha1:PQA45WY2I44FMBZSZ746WC2SMMPRQYYR\",\"WARC-Block-Digest\":\"sha1:JLC24VF45EODIEQC7FER34M6FF4RBI72\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662546071.13_warc_CC-MAIN-20220522190453-20220522220453-00450.warc.gz\"}"} |
https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Galilean_transformation | [
"# Galilean transformation\n\nIn physics, a Galilean transformation is used to transform between the coordinates of two reference frames which differ only by constant relative motion within the constructs of Newtonian physics. These transformations together with spatial rotations and translations in space and time form the inhomogeneous Galilean group (assumed throughout below). Without the translations in space and time the group is the homogeneous Galilean group. The Galilean group is the group of motions of Galilean relativity acting on the four dimensions of space and time, forming the Galilean geometry. This is the passive transformation point of view. The equations below, although apparently obvious, are valid only at speeds much less than the speed of light. In special relativity the Galilean transformations are replaced by Poincaré transformations; conversely, the group contraction in the classical limit c → ∞ of Poincaré transformations yields Galilean transformations.\n\nGalileo formulated these concepts in his description of uniform motion. The topic was motivated by his description of the motion of a ball rolling down a ramp, by which he measured the numerical value for the acceleration of gravity near the surface of the Earth.\n\n## Translation\n\nThough the transformations are named for Galileo, it is absolute time and space as conceived by Isaac Newton that provides their domain of definition. In essence, the Galilean transformations embody the intuitive notion of addition and subtraction of velocities as vectors.\n\nThe notation below describes the relationship under the Galilean transformation between the coordinates (x, y, z, t) and (x′, y′, z′, t′) of a single arbitrary event, as measured in two coordinate systems S and S', in uniform relative motion (velocity v) in their common x and x′ directions, with their spatial origins coinciding at time t = t′ = 0:\n\n$x'=x-vt$",
null,
"$y'=y$",
null,
"$z'=z$",
null,
"$t'=t.$",
null,
"Note that the last equation expresses the assumption of a universal time independent of the relative motion of different observers.\n\nIn the language of linear algebra, this transformation is considered a shear mapping, and is described with a matrix acting on a vector. With motion parallel to the x-axis, the transformation acts on only two components:\n\n${\\begin{pmatrix}x'\\\\t'\\end{pmatrix}}={\\begin{pmatrix}1&-v\\\\0&1\\end{pmatrix}}{\\begin{pmatrix}x\\\\t\\end{pmatrix}}$",
null,
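The shear-matrix form can be checked numerically with a tiny sketch (plain Python, arbitrary sample values):

```python
def boost(x, t, v):
    """Apply the shear matrix [[1, -v], [0, 1]] to the column vector (x, t)."""
    return (1.0 * x + (-v) * t, 0.0 * x + 1.0 * t)

x, t, v = 10.0, 2.0, 3.0       # arbitrary event and frame velocity
xp, tp = boost(x, t, v)
print(xp, tp)                  # -> 4.0 2.0, i.e. x' = x - v*t and t' = t
```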
"Though matrix representations are not strictly necessary for Galilean transformation, they provide the means for direct comparison to transformation methods in special relativity.\n\n## Galilean transformations\n\nThe Galilean symmetries can be uniquely written as the composition of a rotation, a translation and a uniform motion of spacetime. Let x represent a point in three-dimensional space, and t a point in one-dimensional time. A general point in spacetime is given by an ordered pair (x, t).\n\nA uniform motion, with velocity v, is given by\n\n$(\\mathbf {x} ,t)\\mapsto (\\mathbf {x} +t\\mathbf {v} ,t),$",
null,
"where v ∈ ℝ3. A translation is given by\n\n$(\\mathbf {x} ,t)\\mapsto (\\mathbf {x} +\\mathbf {a} ,t+s),$",
null,
"where a ∈ ℝ3 and s ∈ ℝ. A rotation is given by\n\n$(\\mathbf {x} ,t)\\mapsto (G\\mathbf {x} ,t),$",
null,
"where G : ℝ3 → ℝ3 is an orthogonal transformation.\n\nAs a Lie group, the group of Galilean transformations has dimension 10.\n\n## Galilean group\n\nTwo Galilean transformations G(R, v, a, s) compose to form a third Galilean transformation, G(R', v', a', s') G(R, v, a, s) = G(R'R, R'v + v', R'a + a' + v's, s' + s). The set of all Galilean transformations Gal(3) on space forms a group with composition as the group operation.\n\nThe group is sometimes represented as a matrix group with spacetime events (x, t, 1) as vectors where t is real and x ∈ ℝ3 is a position in space. The action is given by\n\n${\begin{pmatrix}R&v&a\\0&1&s\\0&0&1\end{pmatrix}}{\begin{pmatrix}x\\t\\1\end{pmatrix}}={\begin{pmatrix}Rx+vt+a\\t+s\\1\end{pmatrix}},$",
null,
"where s is real and v, x, a ∈ ℝ3 and R is a rotation matrix.\n\nThe composition of transformations is then accomplished through matrix multiplication. Gal(3) has named subgroups. The identity component is denoted SGal(3).\n\nLet m represent the transformation matrix with parameters v, R, s, a:\n\n$G_{1}=\\{m:s=0,a=0\\},$",
null,
"uniformly special transformations.\n$G_{2}=\\{m:v=0,R=I_{3}\\}\\cong (\\mathbb {R} ^{4},+),$",
null,
"shifts of origin.\n$G_{3}=\\{m:s=0,a=0,v=0\\}\\cong \\mathrm {SO} (3),$",
null,
"rotations of reference frame (see SO(3)).\n$G_{4}=\\{m:s=0,a=0,R=I_{3}\\}\\cong (\\mathbb {R} ^{3},+),$",
null,
"uniform frame motions.\n\nThe parameters s, v, R, a span ten dimensions. Since the transformations depend continuously on s, v, R, a, Gal(3) is a continuous group, also called a topological group.\n\nThe structure of Gal(3) can be understood by reconstruction from subgroups. The semidirect product combination ($A\\rtimes B$",
null,
") of groups is required.\n\n1. $G_{2}\\triangleleft \\mathrm {SGal} (3)$",
null,
"(G2 is a normal subgroup)\n2. $\\mathrm {SGal} (3)\\cong G_{2}\\rtimes G_{1}$",
null,
"3. $G_{4}\\trianglelefteq G_{1}$",
null,
"4. $G_{1}\\cong G_{4}\\rtimes G_{3}$",
null,
"5. $\\mathrm {SGal} (3)\\cong \\mathbb {R} ^{4}\\rtimes (\\mathbb {R} ^{3}\\rtimes \\mathrm {SO} (3)).$",
null,
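The composition rule stated in the Galilean group section above — G(R', v', a', s') G(R, v, a, s) = G(R'R, R'v + v', R'a + a' + v's, s' + s) — can be verified directly on the 5×5 matrix representation; a plain-Python sketch with arbitrary sample parameters:

```python
import math

def gal_matrix(R, v, a, s):
    """5x5 block matrix [[R, v, a], [0, 1, s], [0, 0, 1]] acting on (x, t, 1)."""
    M = [R[i] + [v[i], a[i]] for i in range(3)]
    M.append([0.0, 0.0, 0.0, 1.0, s])
    M.append([0.0, 0.0, 0.0, 0.0, 1.0])
    return M

def matmul(A, B, n):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rot3(R, x):  # apply a 3x3 rotation to a 3-vector
    return [sum(R[i][j] * x[j] for j in range(3)) for i in range(3)]

c, s_ = math.cos(0.5), math.sin(0.5)
R1 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
R2 = [[c, -s_, 0.0], [s_, c, 0.0], [0.0, 0.0, 1.0]]   # rotation about z
v1, a1, s1 = [1.0, 0.0, 0.0], [0.0, 2.0, 0.0], 3.0
v2, a2, s2 = [0.0, 1.0, 0.0], [1.0, 0.0, 0.0], 1.0

# Matrix product of the two transformations
G = matmul(gal_matrix(R2, v2, a2, s2), gal_matrix(R1, v1, a1, s1), 5)

# Parameters claimed by the composition law
Rc = matmul(R2, R1, 3)
vc = [rot3(R2, v1)[i] + v2[i] for i in range(3)]
ac = [rot3(R2, a1)[i] + a2[i] + v2[i] * s1 for i in range(3)]
expected = gal_matrix(Rc, vc, ac, s2 + s1)

err = max(abs(G[i][j] - expected[i][j]) for i in range(5) for j in range(5))
print(err < 1e-12)   # -> True
```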
"## Origin in group contraction\n\nHere, we only look at the Lie algebra of the Galilean group; it is then easy to extend the results to the Lie group.\n\nThe relevant Lie algebra is spanned by H, Pi, Ci and Lij (an antisymmetric tensor), subject to commutation relations, where\n\n$[H,P_{i}]=0$",
null,
"$[P_{i},P_{j}]=0$",
null,
"$[L_{ij},H]=0$",
null,
"$[C_{i},C_{j}]=0$",
null,
"$[L_{ij},L_{kl}]=i[\\delta _{ik}L_{jl}-\\delta _{il}L_{jk}-\\delta _{jk}L_{il}+\\delta _{jl}L_{ik}]$",
null,
"$[L_{ij},P_{k}]=i[\\delta _{ik}P_{j}-\\delta _{jk}P_{i}]$",
null,
"$[L_{ij},C_{k}]=i[\\delta _{ik}C_{j}-\\delta _{jk}C_{i}]$",
null,
"$[C_{i},H]=iP_{i}\\,\\!$",
null,
"$[C_{i},P_{j}]=0~.$",
null,
"H is the generator of time translations (Hamiltonian), Pi is the generator of translations (momentum operator), Ci is the generator of rotationless Galilean transformations (Galilean boosts), and Lij stands for a generator of rotations (angular momentum operator).\n\nThis Lie algebra is seen to be a special classical limit of the algebra of the Poincaré group, in the limit c → ∞. Technically, the Galilean group is a celebrated group contraction of the Poincaré group (which, in turn, is a group contraction of the de Sitter group SO(1,4)).\n\nRenaming the generators of the latter as ϵ_imn J_i ↦ L_mn ; P_i ↦ P_i ; P_0 ↦ H/c ; K_i ↦ cC_i, where c is the speed of light, or any function thereof diverging as c → ∞, the commutation relations (structure constants) of the latter limit to that of the former.\n\nNote the group invariants L_mn L^mn, P_i P^i.\n\nIn matrix form, for d=3, one may consider the regular representation (embedded in GL(5;ℝ), from which it could be derived by a single group contraction, bypassing the Poincaré group),\n\n$iH=\\left({\\begin{array}{ccccc}0&0&0&0&0\\\\0&0&0&0&0\\\\0&0&0&0&0\\\\0&0&0&0&1\\\\0&0&0&0&0\\\\\\end{array}}\\right),\\qquad$",
null,
"$i{\\vec {a}}\\cdot {\\vec {P}}=\\left({\\begin{array}{ccccc}0&0&0&0&a_{1}\\\\0&0&0&0&a_{2}\\\\0&0&0&0&a_{3}\\\\0&0&0&0&0\\\\0&0&0&0&0\\\\\\end{array}}\\right),\\qquad$",
null,
"$i{\\vec {v}}\\cdot {\\vec {C}}=\\left({\\begin{array}{ccccc}0&0&0&v_{1}&0\\\\0&0&0&v_{2}&0\\\\0&0&0&v_{3}&0\\\\0&0&0&0&0\\\\0&0&0&0&0\\\\\\end{array}}\\right),\\qquad$",
null,
"$i\\theta _{i}\\epsilon ^{ijk}L_{jk}=\\left({\\begin{array}{ccccc}0&\\theta _{3}&-\\theta _{2}&0&0\\\\-\\theta _{3}&0&\\theta _{1}&0&0\\\\\\theta _{2}&-\\theta _{1}&0&0&0\\\\0&0&0&0&0\\\\0&0&0&0&0\\\\\\end{array}}\\right)~.$",
null,
"The infinitesimal group element is then\n\n$G(R,{\\vec {v}},{\\vec {a}},s)=1\\!\\!1_{5}+\\left({\\begin{array}{ccccc}0&\\theta _{3}&-\\theta _{2}&v_{1}&a_{1}\\\\-\\theta _{3}&0&\\theta _{1}&v_{2}&a_{2}\\\\\\theta _{2}&-\\theta _{1}&0&v_{3}&a_{3}\\\\0&0&0&0&s\\\\0&0&0&0&0\\\\\\end{array}}\\right)+...~.$",
null,
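The bracket [Ci, H] = iPi can be checked directly on these 5×5 matrices; a plain-Python sketch (the matrix commutator of i v·C with iH reproduces i v·P — the overall sign in the abstract relations depends on the convention chosen for this realization):

```python
def zeros5():
    return [[0.0] * 5 for _ in range(5)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(5)) for j in range(5)]
            for i in range(5)]

v = [1.0, 2.0, 3.0]                  # arbitrary boost parameters

iH = zeros5(); iH[3][4] = 1.0        # time-translation generator iH
ivC = zeros5()                       # boost generator i v.C
ivP = zeros5()                       # spatial-translation generator i v.P
for k in range(3):
    ivC[k][3] = v[k]
    ivP[k][4] = v[k]

# Matrix commutator [i v.C, i H]
AB = matmul(ivC, iH)
BA = matmul(iH, ivC)
comm = [[AB[i][j] - BA[i][j] for j in range(5)] for i in range(5)]
print(comm == ivP)   # -> True
```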
"## Central extension of the Galilean group\n\nOne could, instead, augment the Galilean group by a central extension of the Lie algebra spanned by H′, Pi, Ci, Lij, M, such that M commutes with everything (i.e. lies in the center), and\n\n$[H',P'_{i}]=0\\,\\!$",
null,
"$[P'_{i},P'_{j}]=0\\,\\!$",
null,
"$[L'_{ij},H']=0\\,\\!$",
null,
"$[C'_{i},C'_{j}]=0\\,\\!$",
null,
"$[L'_{ij},L'_{kl}]=i[\\delta _{ik}L'_{jl}-\\delta _{il}L'_{jk}-\\delta _{jk}L'_{il}+\\delta _{jl}L'_{ik}]\\,\\!$",
null,
"$[L'_{ij},P'_{k}]=i[\\delta _{ik}P'_{j}-\\delta _{jk}P'_{i}]\\,\\!$",
null,
"$[L'_{ij},C'_{k}]=i[\\delta _{ik}C'_{j}-\\delta _{jk}C'_{i}]\\,\\!$",
null,
"$[C'_{i},H']=iP'_{i}\\,\\!$",
null,
"$[C'_{i},P'_{j}]=iM\\delta _{ij}~.$",
null,
"This algebra is often referred to as the Bargmann algebra."
] | [
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7997588,"math_prob":0.99941754,"size":8203,"snap":"2022-05-2022-21","text_gpt3_token_len":2170,"char_repetition_ratio":0.15538481,"word_repetition_ratio":0.01065449,"special_character_ratio":0.26575643,"punctuation_ratio":0.17621689,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996841,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84],"im_url_duplicate_count":[null,3,null,5,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T17:56:52Z\",\"WARC-Record-ID\":\"<urn:uuid:a2996ebf-c26a-4de3-a05f-fe938da9d69e>\",\"Content-Length\":\"131967\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:078c7619-db3a-4f99-bdd1-6f523a4ae5af>\",\"WARC-Concurrent-To\":\"<urn:uuid:820a531e-0e24-4880-8a7e-7879d25c66f3>\",\"WARC-IP-Address\":\"41.66.34.68\",\"WARC-Target-URI\":\"https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Galilean_transformation\",\"WARC-Payload-Digest\":\"sha1:XVILBZP4CKG2TVYEQ6GR45JVJKQOWA3X\",\"WARC-Block-Digest\":\"sha1:MP3L2RJH5IBBGTPEKLPD4JCBHLUNATOJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662512229.26_warc_CC-MAIN-20220516172745-20220516202745-00676.warc.gz\"}"} |
https://forum.ansys.com/forums/topic/fluent-theoretical-guide/ | [
"",
null,
"## Fluids\n\nTopics relate to Fluent, CFX, Turbogrid and more",
null,
"Manuel Pacherres\nSubscriber\n\nHello everybody\nI was reading a little bit of the FLUENT theoretical guide.\nI saw that FLUENT has two types of algorithms to solve the simulation within the solver I am using, which would be the pressure-based solver.\nAccording to the theoretical guide, that solver can use a segregated algorithm and a coupled algorithm to solve the conservation equations.\nMy question is: how can I know if I am using a segregated algorithm or a coupled algorithm for my solution using this solver? As seen in the image below, which is part of the screen in the configuration stage, FLUENT only allows me to distinguish between the pressure-based solver and the density-based solver, but not between the segregated and coupled algorithms, which are part of the pressure-based solver.\nMy other question is: if I were to choose the coupled algorithm (if I had the possibility to choose, according to the answer to the previous question), how do I know if, for the linearization of the conservation equations, I am using an implicit or explicit formulation? As the FLUENT theoretical guide states, for a coupled algorithm the conservation equations can be linearized in one way or the other.",
null,
"",
null,
"Essence\nAnsys Employee\n\nHello,\nIn Pressure-based solver, Fluent has options to select the Pressure-Velocity Coupling in the \"Methods\" section.",
null,
""
] | [
null,
"https://forum.ansys.com/wp-content/uploads/2022/01/fluids-1.svg",
null,
"https://secure.gravatar.com/avatar/ae7b6f3479a8f6e4ae075e3d810a61c4",
null,
"https://forum.ansys.com/wp-content/uploads/2023/08/05-08-2023-1691264576-mceclip0.png",
null,
"https://secure.gravatar.com/avatar/b17328efc256f933923db00beca4d6a7",
null,
"https://forum.ansys.com/wp-content/themes/ansysbbpress/assets/images/loading.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93025714,"math_prob":0.88859856,"size":3020,"snap":"2023-40-2023-50","text_gpt3_token_len":657,"char_repetition_ratio":0.11372679,"word_repetition_ratio":0.0,"special_character_ratio":0.2102649,"punctuation_ratio":0.08608059,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96590984,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,1,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T16:31:44Z\",\"WARC-Record-ID\":\"<urn:uuid:23ac81ce-8b6f-40e9-935f-b3714ffeaefb>\",\"Content-Length\":\"148833\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:64df8c68-28e6-45ca-8bec-980d9a241245>\",\"WARC-Concurrent-To\":\"<urn:uuid:55aadde4-37bd-456d-a093-578321380aa0>\",\"WARC-IP-Address\":\"23.205.106.88\",\"WARC-Target-URI\":\"https://forum.ansys.com/forums/topic/fluent-theoretical-guide/\",\"WARC-Payload-Digest\":\"sha1:T63VFCPBUNEORRGD57PJP6ILTR4SUZ42\",\"WARC-Block-Digest\":\"sha1:UEV2BXYFPE2POKTQIKJCWKX7HH2K3XSI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510697.51_warc_CC-MAIN-20230930145921-20230930175921-00353.warc.gz\"}"} |
https://www.scribd.com/doc/294316571/8-3-8-4-Completed-Notes | [
"You are on page 1 of 2\nMCR3U - Unit 8: Discrete Functions: Financial Applications Date: 8.3 Compounded Interest: Present Value 8.4 Annuities: Future Value Homework: Thursday: 8.3 Pages 498-499: #4, 7, 9, 10, 13, 14 Friday: 8.4 Pages 511-512: #3, 6, 8, 10, 12 At the end of this lesson I will be able to: * Calculate present value (PV) and interest rates (i) for compounded interest * Draw timelines and calculate future value (FV) for simple ordinary annuities Compounded Interest: Present Value (8.3) versus Future Value (8.2) Future value is the total amount, A, of an investment after a certain period and can be determined using the formula for compounded interest (8.2). Present value is the principal that would have to be invested now to get a specific value in a certain amount of time. We can find this value by rearranging the formula in 8.2 to find P and using the variable PV instead. So if we take A = P(1 + i)^n and rearrange for P we get PV = A / (1 + i)^n. Example: i = 0.05, n = 10, FV = \\$15 000, PV = ? 1. Determine the present value of Alex's investment if it must be worth \\$15 000 ten years from now assuming an annual compounding period with a rate of 5%/a. Note: These calculations don't include the time value of money. Basically \\$10 today is worth more than the same \\$10 twenty years in the future for a variety of reasons including: * Inflation - the rise in the general level of price of goods and services in an economy * Opportunity cost - what you miss out on when you choose to do one thing over another. For example, if you spend \\$10 on dinner, you miss out on the benefit of being able to use that \\$10 for anything else (depositing it into a savings account and making interest). If you are interested in courses in finance or economics, you will examine these factors in more detail. 
8.4 Annuities: Future Value What happens if you decide to deposit \\$150 into a savings account that earns 9.6%/a compounded monthly, at the end of each month starting July 31 until December 31? How much is it worth on January 1? We can draw a timeline to illustrate what is happening: with i = 0.096/12 = 0.008 per month, the six deposits are worth 150, 150(1 + 0.008), 150(1 + 0.008)^2, 150(1 + 0.008)^3, 150(1 + 0.008)^4 and 150(1 + 0.008)^5 on January 1. An annuity is a series of payments or investments made at regular intervals (typically monthly, quarterly, semi-annually or annually). Annuities may be equal deposits, equal withdrawals, equal payments, or equal receipts - the key is equal cash flow in or out at regular intervals. There are several different types of annuities based on: whether the payment intervals and interest conversion periods (or compounding periods) coincide, called simple annuities (for example, payments are made monthly AND the interest is compounded monthly), or do not coincide, called general annuities (for example, payments are made monthly BUT the interest is compounded semi-annually); whether the payments are made at the end of each compounding period, called ordinary annuities, or at the beginning, called annuity-dues; and whether we want the future value, which is the sum of all regular payments and interest earned (examples include RRSPs and RESPs), or the present value, which is the value of the annuity at the beginning of the term and is the sum of all present values of the payments (examples include mortgages, credit card debt with even payments and lines of credit). We will only look at the future value (FV) (8.4) and present value (PV) (8.5) of simple ordinary annuities in this course. Future value: The amount that accumulates is called the future value and is the sum of a geometric series. 
In general, for n payments of R made at the end of each compounding period at interest rate i per period, the payment made at the end of period k is worth R(1 + i)^(n - k) at the end of the term. The sum of the geometric series: Sn = R + R(1 + i) + R(1 + i)^2 + ... + R(1 + i)^(n - 1). Using the formula for the sum of a geometric sequence, Sn = R[r^n - 1]/(r - 1) with first term R and common ratio r = 1 + i, this gives FV = R[(1 + i)^n - 1]/i, where R = payment per compounding period. Jeremy is in Grade 8. He has a paper route and wants to save for his college education. He determines that he can deposit \\$100 a month into an account that pays 3%/a, compounded monthly. How much will he have at the end of five years (when he graduates high school) for his education? a. Draw a timeline showing the amount paid each month, and the value of each payment at the end of 5 years. b. Show that the total amount of the annuity after 5 years forms a geometric series. c. Calculate the sum of the series. Ms. Choo wants to save \\$40 000 for the down payment on a new home. She would like to deposit an equal amount of money at the end of every 3-month period for the next 5 years into an account that pays 2.75%/a compounded quarterly. Use a timeline to represent this process and calculate what her deposit should be to reach her goal."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90807116,"math_prob":0.9579734,"size":4750,"snap":"2019-35-2019-39","text_gpt3_token_len":1321,"char_repetition_ratio":0.119469024,"word_repetition_ratio":0.0022935779,"special_character_ratio":0.27052632,"punctuation_ratio":0.10558376,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95900095,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-18T16:41:57Z\",\"WARC-Record-ID\":\"<urn:uuid:a9fe459a-c56b-44e0-ad06-7dfff16d1655>\",\"Content-Length\":\"196959\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:952f82bf-64dd-4738-b4fa-925a07a47c2f>\",\"WARC-Concurrent-To\":\"<urn:uuid:d63aad3e-e232-4cf3-a1b1-23479e2eca8b>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://www.scribd.com/doc/294316571/8-3-8-4-Completed-Notes\",\"WARC-Payload-Digest\":\"sha1:O45GSHBAWA6EMU7K6SKJNK73ZY2NOY73\",\"WARC-Block-Digest\":\"sha1:F5IU43QK6BAKVSLVMQHOUPGJWRBAM6LT\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027313936.42_warc_CC-MAIN-20190818145013-20190818171013-00164.warc.gz\"}"} |
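The two worked calculations in the notes above, the present value of Alex's $15 000 target (PV = A / (1 + i)^n) and the future value of Jeremy's monthly deposits (FV = R[(1 + i)^n - 1]/i), can be checked numerically. A minimal Python sketch; the function names are illustrative and not taken from the notes:

```python
# Present value: PV = A / (1 + i)^n  (the compound-interest formula rearranged for P)
def present_value(amount, rate, periods):
    return amount / (1 + rate) ** periods

# Future value of a simple ordinary annuity: FV = R * ((1 + i)^n - 1) / i
def annuity_fv(payment, rate, periods):
    return payment * ((1 + rate) ** periods - 1) / rate

# Alex: needs $15 000 in 10 years at 5%/a compounded annually
alex_pv = present_value(15000, 0.05, 10)    # about $9208.70

# Jeremy: deposits $100/month for 5 years at 3%/a compounded monthly
jeremy_fv = annuity_fv(100, 0.03 / 12, 60)  # about $6464.67

print(round(alex_pv, 2), round(jeremy_fv, 2))
```

The same `annuity_fv` expression, solved for the payment R, answers the Ms. Choo problem as well.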
https://www.haskell.org/definition/aboutHaskell98.html | [
"• The language was still changing too quickly, throwing text books out of date before they are even completed, and making it hard for serious users to keep up.\n• The language has become more complex, making it hard for beginners to master.\n• It contains traps for the unwary, simple programs which fail in strange and unexpected ways, with error messages referring to concepts far beyond the beginner's grasp.\nHaskell 98 is an attempt to address these concerns. It is intended to be a minor revision of Haskell 1.4, cleaning up traps but not adding major new functionality.\n\nHaskell 98 will by no means be the last revision of Haskell. On the contrary, we design it knowing that new language extensions (multi-parameter type classes, universal and existential quantification, pattern guards, etc, etc) are well on the way. However, Haskell 98 will have a special status: the intention is that Haskell compilers will continue to support Haskell 98 (given an appropriate flag) even after later versions of the language have been defined, and so the name `Haskell 98' will refer to a fixed, stable language.\n\nThis document exhaustively lists all the differences between Haskell 1.4 and Haskell 98. Only a very short summary is given here, together with references to the report text. All section numbers refer to the Haskell 98 language and library reports.\n\n# Chapter 2: lexical structure\n\n• Section 2.3. Maximal munch rule for '--' comments.\n• Section 2.4. Treat underscore, _, as a lower-case letter. The lexeme \"_\" is a reserved identifer. Compilers that offer warnings for unused variables are encouraged to suppress such warnings for unused variables that begin with an underscore.\n• Section 2.4. Clarify that : is reserved solely for Haskell list construction.\n• Section 2.7. A more precise specification of the layout rule.\n\n# Chapter 3: expressions\n\n• Section 3.5. Specify the syntactic rules for sections more clearly and precisely. 
The resulting rule differs in minor respects from Haskell 1.4.\n\n• Section 3.7. Clarify that : is reserved solely for Haskell list construction. It cannot be hidden or redefined, any more than [] or (,,) can.\n\n• Section 3.11. Comprehensions are list comprehensions, not monad comprehensions.\n\n• Section 3.11. Permit empty qualifiers in a comprehension.\n\n• Section 3.13. Permit empty alternatives in a case.\n\n• Section 3.14, 6.3.6. Typing of do expressions, and the MonadZero class.\n\n• The Monad class is extended, thus:\n``` class Monad m where\nreturn :: a -> m a\n(>>=) :: m a -> (a -> m b) -> m b\n(>>) :: m a -> m b -> m b\nfail :: String -> m a\n\nm1 >> m2 = m1 >>= \\ _ -> m2\nfail s = error s\n```\n• All do expressions have a type in class Monad (never MonadZero as in Haskell 1.4). Pattern match failure invokes fail passing a string suitable for printing out in an error message.\n• The Monad instances for list, Maybe, and IO define fail to return [], return Nothing, and call error respectively.\n``` class Monad m => MonadPlus m where\nmzero :: m a\nmplus :: m a -> m a -> m a\n```\nTo convert existing Haskell programs that use do notation: add fail s = mzero to any instances for Monad that are already instances of MonadZero; combine existing instances of MonadPlus and MonadZero, and rename the methods to mzero and mplus.\n\n• Section 3.14. Permit empty statements in a do expression.\n\n• Section 3.15. Remove ``punning'' for records. Quick reminder: punning allows one to abbreviate record pattern matches, construction, and updates:\n```\tdata Point = MkPoint { x,y :: Float }\n\nf1 :: Point -> Point\nf1 (MkPoint {x=xval, y=yval}) = MkPoint {x=xval+10, y=yval}\n\nf2 :: Point -> Point\nf2 (MkPoint {x, y}) = MkPoint {x=x+10, y}\n```\nIn f2 the names of the values held in the fields 'punned' with the field labels themselves, both in the pattern match (which binds x,y and in the record construction on the right hand side. 
This feature is the one that has been removed.\n\n• Section 3.17.1. The production for fpat is:\n``` fpat -> qvar = pat\n```\nwhere we have qvar rather than var. Reason: the field name might only be in scope in qualified form.\n\n# Chapter 4: declarations\n\n• Section 4. Empty declarations are permitted.\n\n• Section 4.1.2. An empty context is permitted in a type.\n\n• Section 4.1.2. The simple-context restriction. Generalise type contexts to permit constraints of the form: (C (a t1 ... tn)), where a is a type variable and t1, ..., tn are types.\n\n• Section 4.2.1. A record may have no fields.\n\n• Section 4.2.3. Permit newtype declarations to use field-naming syntax.\n\n• Section 4.3.1. Class declarations can contain intermingled type signatures, fixity declarations, and value declarations for default class methods.\n\n• Section 4.3.2. A qualified name is permitted as the method name in the method bindings of an instance declaration.\n\n• Setion 4.3.1, 4.3.2. Pattern bindings are prohibited in the methods of a class or instance declaration.\n• Section 4.3.4. Change the 'default default' to (Integer, Double).\n\n• Section 4.4.2. Allow infix declarations anywhere type signatures are allowed. Fixity attaches to an entity, not to its name.\n\n• Section 4.4.3. Add a production for funlhs:\n``` funlhs\t-> ( funlhs ) {apat}\n```\nThis permits definitions like:\n``` (a &&& b) x = a x && b x\n```\n\n• Section 4.5.5. No change to design, but the section is hopefully more clearly written.\n\n# Chapter 5: modules\n\n• Section 5. Generalise the type of main to IO t for any type t.\n• Section 5.3. An empty import declaration is permitted. This permits extra semicolons:\n```\tmodule M where { import A; ; import B; f x = x }\n```\n• Section 5.3.2. Two or more imported modules can share the same local alias, via an as clause.\n• Section 5.3.2. A module imported without a qualified clause can still have a local alias.\n• Section 5.5.1. 
The top-level declarations of a module bring into scope a qualified name as well as an unqualified one.\n• Section 5.5.2. Name clashes only cause an error if the offending name is actually mentioned.\n\n# Chapter 6: basic types\n\n• Remove the Void type.\n\n• Remove class Eval (but still describe seq).\n\n• Section 6.2. Rename strict to \\$!.\n\n• Section 6.3.3, and Fig 5, and module PreludeText. Remove functions and IO from class Show.\n\n• Section 6.3.4. Ord is not a superclass of Enum.\n\n• Section 6.3.4. Make it clear that the Enum class methods enumFrom... should obey the semantics described in Chapter 3 (Arithmetic sequences).\n\n• Section 6.3.4. Move succ, pred into class Enum. The current implementation (using toEnum,fromEnum) can't be made efficient for data types like\n```\tdata Nat = Zero | Succ Nat\n```\nFurthermore, the current implementation doesn't even work for Integer.\n\n• Section 6.4.5. Move atan2 into class RealFloat, and improve its default declaration. (Particular thanks to Kent Karlsson and Jerzy Karczmarczuk.)\n\n# Appendix A: standard prelude\n\nSome of these changes have a minor equivalent change in Section 6.\n• Don't export the type Rational from the Prelude. It isn't useful unless you import the library Ratio. (We still export Rational on the grounds that it keeps the property that you can write a type signature for anything imported from the Prelude without importing anything else.)\n\n• Un-overload map, filter, (++), concat. The new signatures are:\n```\tmap :: (a -> b) -> [a] -> [b]\nfilter :: (a -> Bool) -> [a] -> [a]\n(++) :: [a] -> [a] -> [a]\nconcat :: [[a]] -> [a]\n```\nRename the operation of class Functor as fmap\n```\tclass Functor f where\nfmap :: (a -> b) -> f a -> f b\n-- was map\n```\n\n• For each class declaration, say which methods constitute a \"minimum complete definition\". 
For example:\n``` class (Real a, Enum a) => Integral a where\nquot, rem :: a -> a -> a\ndiv, mod :: a -> a -> a\nquotRem, divMod :: a -> a -> (a,a)\ntoInteger :: a -> Integer\n\n-- Minimal complete definition:\n--\tquotRem, toInteger\nn `quot` d = ...\nn `rem` d = ...\nn `div` d = ...\nn `mod` d = ...\ndivMod n d = ...\n```\n\n• Rename sequence to sequence_; and rename accumulate to sequence.\n\n• In module PreludeIO, rename fail to ioError.\n\n• Add filterM; replace old concat by msum; delete old filter; move filterM, msum, guard to library Monad.\n```\tfilterM :: (a -> m Bool) -> [a] -> m [a]\n-- didn't exist before\n\nmsum :: MonadPlus m => [m a] -> m a\n-- was concat\n\nguard :: MonadPlus m => Bool -> m ()\n```\nNotice that guard remains in MonadPlus and uses mzero not fail. The latter is reserved exclusively for pattern-match failure.\n\n• Replace applyM with:\n```\t(=<<) :: Monad m => (a -> m b) -> m a -> m b\nf =<< x = x >>= f\n```\n\n• Add default declaration for (/) in class Fractional\n```\t x / y\t = x * recip y\n```\n\n• Add default declaration for negate in class Num\n```\t negate x\t = 0 - x\n```\n\n• Fix typo in unionBy\n```unionBy :: (a -> a -> Bool) -> [a] -> [a] -> [a]\nunionBy eq xs ys = xs ++ foldl (flip (deleteBy eq)) (nubBy eq ys) xs\n```\n\n• Generalise the type of fromRealFrac and rename it as realToFrac.\n```\trealToFrac :: (Real a, Fractional b) => a -> b\nrealToFrac\t= fromRational . toRational\n```\n\n• Fix bug in definition of span. 
The new definition is:\n```\tspan p [] = ([],[])\nspan p xs@(x:xs')\n| p x = (x:ys,zs)\n| otherwise = ([],xs)\nwhere (ys,zs) = span p xs'\n```\n\n• Provide a default declaration for == in class Eq.\n\n• The definition of enumFromThen is missing a map toEnum:\n``` enumFromThen c c' = map toEnum [fromEnum c,\nwhere lastChar :: Char\nlastChar | c' < c = minBound\n| otherwise = maxBound\n```\n\n• Improve definition of numericEnumFromTo and numericEnumFromThenTo:\n```\tnumericEnumFromTo n m = takeWhile (<= m+1/2) (numericEnumFrom n)\nnumericEnumFromThenTo n n' m = takeWhile p (numericEnumFromThen n n')\nwhere\np | n' > n = (<= m + (n'-n)/2)\n| otherwise = (>= m + (n'-n)/2)\n```\nThe extra 1/2 gives better behaviour at the edges of the range.\n\n• Don't import isHexDigit into PreludeList.\n\n• Typo in instance Show Float and instance Show Double: needs parameter p.\n\n• Bug in instance Read Char. Should be:\n``` readsPrec p = readParen False\n(\\r -> [(c,t) | ('\\'':s,t)<- lex r,\n```\n\n• Define cycle to give an error on the empty list.\n```\tcycle :: [a] -> [a]\ncycle [] = error \"Prelude.cycle: empty list\"\ncycle xs = xs' where xs' = xs ++ xs'\n```\n\n• Add show to class Show, together with the default declarations:\n``` show :: a -> String\n\nshowsPrec _ x s = show x ++ s\nshow x \t = showsPrec 0 x \"\"\n```\n\n• Remove redundant spaces in showList\n```\tshowList [] = showString \"[]\"\nshowList (x:xs) = showChar '[' . shows x . showl xs\nwhere\nshowl [] = showChar ']'\nshowl (x:xs) = showChar ',' . shows x .\nshowl xs\n```\n\n• Replace the definition of isSym in lex with\n``` isSym c = c `elem` \"!@#\\$%&*+./<=>?\\\\^|:-~\"\n```\n(The Haskell 1.4 version treated space as a symbol character.)\n\n• In error messages, refer to Prelude.foo rather than (say) PreludeList.foo. 
The sub-modules in the Prelude do not form part of its specification.\n\n# Appendix B: syntax\n\nSyntactic changes mentioned above are not repeated here.\n• Specify that for layout purposes a Unicode character is treated as having the width of an ASCII character.\n• Delete fbinds from the syntax. It is not used.\n\n# Appendix D: derived instances\n\n• Specify that derived instances of Show add no unnecessary spaces.\n\n• Make each module export all the relevant types and functions that the Prelude defines, so that a programmer who does not import the Prelude can get access to all the (say) list types and functions by saying import List. This involves extra exports from modules List, Monad, IO, Maybe, Char.\n\n## Module Array\n\n• Section 6.1. If, in any dimension, the lower bound is greater than the upper bound, then the array is legal, but empty. Indexing an empty array always gives an array-bounds error, but bounds still yields the bounds with which the array was constructed.\n\n## Module Ix\n\n• Remove Show from superclasses of Ix.\n• Fix defn of rangeSize. See the adjacent comments in the code for details.\n\n## Module Char\n\n• Rename isAlphanum as isAlphaNum. (This entails fixing importers of Char as well.)\n• Remove parameter c from isLower. (Typo.)\n• Fix typo in showLitChar (was sshowLitChar).\n• In readLitChar use ordinary pairs instead of the obsolete pairing constructor ":=".\n• Add definition of missing helper function, match:\n``` match :: (Eq a) => [a] -> [a] -> ([a],[a])\nmatch (x:xs) (y:ys) | x == y = match xs ys\nmatch xs ys = (xs,ys)\n```\n\n## Module List\n\n• Add to genericTake the clause:\n``` genericTake 0 _ = []\n```\n\n• Add the following definition of unfoldr.
(It differs slightly from that previously found in library Maybe.)\n```\tunfoldr :: (b -> Maybe (a,b)) -> b -> [a]\nunfoldr f b = case (f b) of\n                Nothing -> []\n                Just (a,b) -> a : unfoldr f b\n```\n\n``` insert :: Ord a => a -> [a] -> [a]\n```\nThis is consistent with the other `By' functions, which all have a non-By version.\n\n• Make transpose lazy in its second argument, so as to allow transpositions of 2d list matrices with infinite no. of rows as well as columns,\n```\t-- transpose is lazy in both rows and columns,\n-- \t and works for non-rectangular 'matrices'\n-- For example, transpose [[1,2],[3,4,5],[]] = [[1,3],[2,4],[5]]\ntranspose :: [[a]] -> [[a]]\ntranspose []\t\t = []\ntranspose ([] : xss) = transpose xss\ntranspose ((x:xs) : xss) = (x : [h | (h:t) <- xss]) : transpose (xs : [t | (h:t) <- xss])\n```\n\n• Fix isPrefixOf bug, defining it instead as:\n``` isPrefixOf :: (Eq a) => [a] -> [a] -> Bool\nisPrefixOf [] _ = True\nisPrefixOf _ [] = False\nisPrefixOf (x:xs) (y:ys) = x == y && isPrefixOf xs ys\n```\n\n## Module Complex\n\n• Make (phase (0::Complex)) well defined,\n``` phase :: RealFloat a => Complex a -> a\nphase (0 :+ 0) = 0\nphase (x :+ y) = atan2 y x\n```\n\n## Module Maybe\n\n• Remove the definition of unfoldr (reappears in List).\n• Add isNothing :: Maybe a -> Bool\n\n## Module System\n\n``` exitFailure :: IO a\n```\n\n## Module Directory\n\n• Clarify the specification of doesFileExist, doesDirectoryExist, as follows. The operation doesDirectoryExist returns True if the argument file exists and is a directory, and False otherwise.
The operation doesFileExist returns True if the argument file exists and is not a directory, and False otherwise.\n\n• Make type Permissions abstract.\n\n## Module Time\n\n• Export the fields of CalendarTime and TimeDiff.\n\n## Module Random\n\nThis library has been completely re-specified.\n\n## Reversed decisions\n\nI finally decided not to implement two proposed decisions:\n• Remove the concept of \"special identifiers\". Doing so consumes two new keywords (qualified, and hiding) without making programs any clearer. I judged the clean-up factor not worth the cost.\n\n• Put type constructors and classes in separate name spaces. This involved decorating imports and exports with class. It also involves a decision about whether type constructors in export lists should be decorated with type or data or nothing. Finally, type constructors and classes are more easily confused than type constructors/classes and data constructors. So I decided to let it lie."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.77085227,"math_prob":0.9022739,"size":11955,"snap":"2020-10-2020-16","text_gpt3_token_len":3176,"char_repetition_ratio":0.101497784,"word_repetition_ratio":0.025641026,"special_character_ratio":0.28138855,"punctuation_ratio":0.16623376,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95899,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-03T22:13:35Z\",\"WARC-Record-ID\":\"<urn:uuid:5d1c44a9-ac98-4b8b-9b62-f999b154da68>\",\"Content-Length\":\"19823\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e1347f5b-722d-476e-8a80-6e6469b86429>\",\"WARC-Concurrent-To\":\"<urn:uuid:ce26bfce-38a2-46db-a537-6a8e406bd54f>\",\"WARC-IP-Address\":\"147.75.67.13\",\"WARC-Target-URI\":\"https://www.haskell.org/definition/aboutHaskell98.html\",\"WARC-Payload-Digest\":\"sha1:MUZLS4NPF2YCNMKNUBM5TKRYZBK2YBV3\",\"WARC-Block-Digest\":\"sha1:BHS766YDF5KMOTA7CDMBMTKGPS47OWS4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370518767.60_warc_CC-MAIN-20200403220847-20200404010847-00515.warc.gz\"}"} |
https://web2.0calc.com/questions/fractions-problem_1 | [
"# Fractions Problem\n\nFor a blueberry recipe, you need 3/4 cups of sugar, 1/8 cup of butter and 3/8 cup of blueberry. How many cups of sugar AND butter are needed if a single cup of blueberries is used in this mix?\n\nMay 23, 2017\n\n#1\n\n2 cups of sugar\n\n1/3 cup of butter\n\nSet up proportions to solve for both the sugar and butter. In a recipe, all ingredients must be proportional because otherwise, the meal will not be exactly as the recipe calls for:\n\nLet x = the cups of sugar\n\nLet y = the cups of butter\n\n$$\\frac{\\frac{3}{4}}{x}=\\frac{\\frac{3}{8}}{1}=\\frac{\\frac{1}{8}}{y}$$\n\nWe should solve each proportion individually. I'll solve for the cups of sugar first:\n\n $$\\frac{\\frac{3}{4}}{x}=\\frac{\\frac{3}{8}}{1}$$ Solve by cross multiplying $$\\frac{3}{4}=\\frac{3}{8}x$$ Multiply by 8/3 on both sides to isolate x $$x=\\frac{3}{4}*\\frac{8}{3}$$ Simplify the right hand side $$x=\\frac{24}{12}=2$$ cups of sugar! I multiplied the fraction and then reduced it to simplest terms.\n\nLet's utilize the exact same process for solving for y, the amount of cups of butter:\n\n $$\\frac{\\frac{3}{8}}{1}=\\frac{\\frac{1}{8}}{y}$$ Cross multiply and solve for y $$\\frac{3}{8}y=\\frac{1}{8}$$ Multiply by 8/3 on both sides $$y=\\frac{1}{8}*\\frac{8}{3}$$ Simplify the right hand side $$y=\\frac{8}{24}=\\frac{1}{3}$$ cup of butter! I multiplied the fraction and reduced it to simplest terms.\nMay 23, 2017\nedited by TheXSquaredFactor May 23, 2017"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.80902004,"math_prob":0.9995869,"size":1334,"snap":"2019-13-2019-22","text_gpt3_token_len":434,"char_repetition_ratio":0.18646617,"word_repetition_ratio":0.04232804,"special_character_ratio":0.3485757,"punctuation_ratio":0.057142857,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99991167,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-26T03:21:22Z\",\"WARC-Record-ID\":\"<urn:uuid:788fd05d-8683-4324-a47a-e121eb9baa64>\",\"Content-Length\":\"22540\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:54bbfdef-ace8-42ea-8673-edf06e6ecdfe>\",\"WARC-Concurrent-To\":\"<urn:uuid:70b40750-91d4-4cf9-b73e-f074a1ac5251>\",\"WARC-IP-Address\":\"209.126.117.101\",\"WARC-Target-URI\":\"https://web2.0calc.com/questions/fractions-problem_1\",\"WARC-Payload-Digest\":\"sha1:DDHYUB6NMD5WXZUJUQMJPA4F3LE2SGCA\",\"WARC-Block-Digest\":\"sha1:GIRVFGBRXUXEM4QERTHVN6VVP65TOGJM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912204768.52_warc_CC-MAIN-20190326014605-20190326040605-00124.warc.gz\"}"} |
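The cross-multiplication steps in the answer above amount to scaling every ingredient by the same factor (1 divided by the blueberry amount). A short Python check using exact rational arithmetic; the variable names are illustrative:

```python
from fractions import Fraction

# Recipe amounts from the problem statement
sugar, butter, blueberry = Fraction(3, 4), Fraction(1, 8), Fraction(3, 8)

# Scale the recipe so that exactly 1 cup of blueberries is used
scale = Fraction(1) / blueberry   # 8/3
sugar_needed = sugar * scale      # 2 cups
butter_needed = butter * scale    # 1/3 cup

print(sugar_needed, butter_needed)  # 2 1/3
```

Using `Fraction` keeps the arithmetic exact, so the result matches the hand-worked 2 and 1/3 with no floating-point rounding.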
https://discuss.dizzycoding.com/signing-and-verifying-data-using-pycrypto-rsa/ | [
"# Signing and verifying data using pycrypto (RSA)\n\nPosted on\n\n### Question :\n\nSigning and verifying data using pycrypto (RSA)\n\nI am trying to familiarize myself with the pycrypto module, but the lack of clear documentation makes things difficult.\n\nTo start with, I would like to understand signing and verifying data. Could someone please provide an example for how this would be written?\n\nThis is a fleshed-out version of the example in the old PyCrypto documentation:\n\nEnsure you are using `pycryptodome` and not `pycrypto` (which is unmaintained!)\n\npycryptodome can be installed with `pip install pycryptodome`\n\n``````import Crypto.Hash.MD5 as MD5\nimport Crypto.PublicKey.RSA as RSA\nimport Crypto.PublicKey.DSA as DSA\nimport Crypto.PublicKey.ElGamal as ElGamal\nimport Crypto.Util.number as CUN\nimport os\n\nplaintext = 'The rain in Spain falls mainly on the Plain'\n\n# Here is a hash of the message\nhash = MD5.new(plaintext).digest()\nprint(repr(hash))\n# '\\xb1./J\\xa883\\x974\\xa4\\xac\\x1e\\x1b!\\xc8\\x11'\n\nfor alg in (RSA, DSA, ElGamal):\n    # Generates a fresh public/private key pair\n    key = alg.generate(384, os.urandom)\n\n    if alg == DSA:\n        K = CUN.getRandomNumber(128, os.urandom)\n    elif alg == ElGamal:\n        K = CUN.getPrime(128, os.urandom)\n        while CUN.GCD(K, key.p - 1) != 1:\n            print('K not relatively prime with {n}'.format(n=key.p - 1))\n            K = CUN.getPrime(128, os.urandom)\n        # print('GCD({K},{n})=1'.format(K=K,n=key.p-1))\n    else:\n        K = ''\n\n    # You sign the hash\n    signature = key.sign(hash, K)\n    print(len(signature), alg.__name__)\n    # (1, 'Crypto.PublicKey.RSA')\n    # (2, 'Crypto.PublicKey.DSA')\n    # (2, 'Crypto.PublicKey.ElGamal')\n\n    # You share pubkey with Friend\n    pubkey = key.publickey()\n\n    # You send message (plaintext) and signature to Friend.\n    # Friend knows how to compute hash.\n    # Friend verifies the message came from you this way:\n    assert pubkey.verify(hash, signature)\n\n    # A different hash should not pass the test.\n    assert not pubkey.verify(hash[:-1], signature)\n``````\n\nBelow is the helper class I created to perform all necessary RSA functions (encryption, decryption, signing, verifying signature & generating new keys)\n\nrsa.py\n\n``````from Crypto.PublicKey import RSA\nfrom Crypto.Cipher import PKCS1_OAEP\nfrom Crypto.Signature import PKCS1_v1_5\nfrom Crypto.Hash import SHA512, SHA384, SHA256, SHA, MD5\nfrom Crypto import Random\nfrom base64 import b64encode, b64decode\n\nhash = "SHA-256"\n\ndef newkeys(keysize):\n    random_generator = Random.new().read\n    key = RSA.generate(keysize, random_generator)\n    private, public = key, key.publickey()\n    return public, private\n\ndef importKey(externKey):\n    return RSA.importKey(externKey)\n\ndef getpublickey(priv_key):\n    return priv_key.publickey()\n\ndef encrypt(message, pub_key):\n    # RSA encryption protocol according to PKCS#1 OAEP\n    cipher = PKCS1_OAEP.new(pub_key)\n    return cipher.encrypt(message)\n\ndef decrypt(ciphertext, priv_key):\n    # RSA encryption protocol according to PKCS#1 OAEP\n    cipher = PKCS1_OAEP.new(priv_key)\n    return cipher.decrypt(ciphertext)\n\ndef sign(message, priv_key, hashAlg="SHA-256"):\n    global hash\n    hash = hashAlg\n    signer = PKCS1_v1_5.new(priv_key)\n    if (hash == "SHA-512"):\n        digest = SHA512.new()\n    elif (hash == "SHA-384"):\n        digest = SHA384.new()\n    elif (hash == "SHA-256"):\n        digest = SHA256.new()\n    elif (hash == "SHA-1"):\n        digest = SHA.new()\n    else:\n        digest = MD5.new()\n    digest.update(message)\n    return signer.sign(digest)\n\ndef verify(message, signature, pub_key):\n    signer = PKCS1_v1_5.new(pub_key)\n    if (hash == "SHA-512"):\n        digest = SHA512.new()\n    elif (hash == "SHA-384"):\n        digest = SHA384.new()\n    elif (hash == "SHA-256"):\n        digest = SHA256.new()\n    elif (hash == "SHA-1"):\n        digest = SHA.new()\n    else:\n        digest = MD5.new()\n    digest.update(message)\n    return signer.verify(digest, signature)\n``````\n\nSample Usage\n\n``````import rsa\nfrom base64 import b64encode, b64decode\n\nmsg1 = "Hello Tony, I am Jarvis!"\nmsg2 = "Hello Toni, I am Jarvis!"\nkeysize = 2048\n(public, private) = rsa.newkeys(keysize)\nencrypted = b64encode(rsa.encrypt(msg1, public))\ndecrypted = rsa.decrypt(b64decode(encrypted), private)\nsignature = b64encode(rsa.sign(msg1, private, "SHA-512"))\nverify = rsa.verify(msg1, b64decode(signature), public)\n\nprint(private.exportKey('PEM'))\nprint(public.exportKey('PEM'))\nprint("Encrypted: " + encrypted)\nprint("Decrypted: '%s'" % decrypted)\nprint("Signature: " + signature)\nprint("Verify: %s" % verify)\nrsa.verify(msg2, b64decode(signature), public)\n``````"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.61133826,"math_prob":0.81758314,"size":4899,"snap":"2022-40-2023-06","text_gpt3_token_len":1313,"char_repetition_ratio":0.13237998,"word_repetition_ratio":0.117263846,"special_character_ratio":0.2761788,"punctuation_ratio":0.21297297,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98865056,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-01T09:20:58Z\",\"WARC-Record-ID\":\"<urn:uuid:16eefb77-b2ce-42fa-9e65-2477d518b4ef>\",\"Content-Length\":\"56815\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:09111146-f8e7-4ab5-9305-370acd4e180f>\",\"WARC-Concurrent-To\":\"<urn:uuid:22280792-d67c-47de-afa7-48d2cc008b94>\",\"WARC-IP-Address\":\"198.54.116.44\",\"WARC-Target-URI\":\"https://discuss.dizzycoding.com/signing-and-verifying-data-using-pycrypto-rsa/\",\"WARC-Payload-Digest\":\"sha1:YFGYBCPOAKB42MKKJAYNUETXEHAOAHNL\",\"WARC-Block-Digest\":\"sha1:JBRS44P22NWICMC7EIE6VUA7CAOD3MAA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335573.50_warc_CC-MAIN-20221001070422-20221001100422-00644.warc.gz\"}"} |
https://pages.mtu.edu/~suits/SpeedofSound.html | [
"",
null,
"# Speed of Sound in Air\n\nThe ideal gas law is based on a simple picture of a gas as a large number of molecules that move independantly of one another, except for occasional collisions with each other or with the walls of their container. When they do collide, the collision occurs with no net loss of energy -- that is, it is an elastic collision.\n\nThe ideal gas model predicts that the speed of sound in a pure gas will be",
null,
"where γ is the adiabatic constant (also referred to as the adiabatic exponent, the specific heat ratio, or the isentropic exponent) for the gas, which at room temperature depends mostly on the shape of the molecule and will have a value just a bit larger than 1, P is the absolute pressure of the gas, and ρ is the density of the gas. Using the ideal gas law, PV = nRT (with n constant, that is the number of gas molecules is constant), the equation above can be rewritten as",
null,
"where T is the temperature on an absolute scale (e.g. Kelvin), M is the mass of one gas molecule, and kB is Boltzmann's constant which converts absolute temperature units to energy units. Note that if the ideal gas model is a good model for a real gas, then you can expect, for any specific gas, that there will be no pressure dependence for the speed of sound. This is because as you change the pressure of the gas, you will also change its density by the same factor. The speed of sound will have a very significant dependence on temperature and on the mass of the molecules which make up the gas.\n\nFor comparison the \"root mean square\" (or \"rms\") velocity of the molecules in an ideal gas, an appropriate average for the speed of molecules in the gas, is given by",
null,
"and since γ is typically between 1.2 and 1.7, you can see that the average speed of the molecules is closely related to the speed of sound and will be only slightly larger. For typical air at room conditions, the average molecule is moving at about 500 m/s (close to 1000 miles per hour). Note that the speed of sound is largely determined by how fast the molecules move between collisions, and not on how often they make collisions. This is because no energy is lost during the collisions. The collisions do not \"slow things down\" but simply randomize the motion -- which was already quite random. At higher temperatures the molecules have more energy and are moving faster than at lower temperatures, hence the speed of sound at higher temperatures is faster than at lower temperatures.\n\nFor air, which is a mixture of molecules, you will need to use average values for the adiabatic constant and molecular mass. Air is mostly N2 and O2, which are both simple diatomic molecules with almost the same masses. The adiabatic constant will be very close to 1.4 for both molecules for a wide range of temperatures near room temperature (graph). Hence the adiabatic constant will also be close to 1.4 for air. The average molecular mass will depend on the air composition which changes slightly, for example due to day to day variations in relative humidity. For 100% relative humidity under normal room conditions, about 2% of the molecules of air are water molecules. Since the mass of a water molecule is almost half that of an oxygen or nitrogen molecule, the larger the humidity the lower the density of the air for the same pressure and temperature. At or near room temperature the fraction of air that is water is small, and so the effect will not be large. Some very small variations can be expected for other variations in air content, such as in CO2 content. CO2 molecules are about 50% heavier than O2 and N2 molecules and so will increase the density. 
The fraction of air which is CO2 is so small (0.04%) that the effects due to CO2 are very small as well. For air expelled from human lungs, however, the CO2 concentration is typically 4 to 5%, with a corresponding decrease in O2 concentration, and that can cause effects comparable to changes in humidity.\n\nThe temperature dependence and the change in density due to changes in composition, the latter almost entirely due to changes in humidity, are by far the two largest causes for variations in the speed of sound in air. Note, however, that humidity is normally expressed as a percentage of the maximum concentration for the air. That maximum may change with conditions. What matters for the speed of sound is the fraction of the air molecules that are water (i.e., the \"molar fraction\"). The molar fraction corresponding to 100% humidity will depend on temperature and pressure (see graph). Hence there may be an apparent dependence on pressure when the water content is expressed as a percent relative humidity rather than a molar fraction. For example, if you take 20 oC air at 1 atm and 100% humidity and remove half of the molecules, you end up with air at 0.5 atm and about 50% relative humidity, not 100% humidity. Hence to look at the changes due only to changes in pressure, and not molecular composition, you would need to compare air at 1 atm and 100% humidity with air at 0.5 atm and 50% humidity.\n\nThere are some additional small effects related to the details of the exchange of energy between the molecules. These effects give rise to non-ideal gas behavior. In particular, they can cause disipation of sound energy (that is, the sound energy is turned into heat energy). For normal atmospheric conditions, the effects on the speed of sound are very small, but can also give rise to very small variations in the speed of sound with frequency. 
For a comprehensive discussion of these and other effects, see the reference below.\n\nHere are some graphs illustrating how the speed of sound in real air depends on temperature, pressure, humidity and frequency. Data for these graphs is from tables contained in the reference below. Note that a pressure of 0.5 atm corresponds to an altitude of just under 6,000 m (20,000 ft) above sea level and 20 oC is \"room temperature\" (20.00 oC = 293.15 K). Day to day changes in atmospheric pressure due to weather are about plus or minus 5% (e.g. from about 0.95 to 1.05 atmospheres at sea level).",
null,
"",
null,
"",
null,
"For more information on the ideal gas law, the specific heat ratio, etc., use your favorite internet search engine (or, if you really want to know, take some physics classes). Here is a separate table for the speed of sound in other gases.\n\nReference: \"Handbook of the Speed of Sound in Real Gases,\" by A. J. Zuckerwar (Academic Press, 2002)."
] | [
null,
"https://pages.mtu.edu/~suits/PHYSMUSH.gif",
null,
"https://pages.mtu.edu/~suits/SoundSpeedEq1.gif",
null,
"https://pages.mtu.edu/~suits/SoundSpeedEq2.gif",
null,
"https://pages.mtu.edu/~suits/SoundSpeedEq3.gif",
null,
"https://pages.mtu.edu/~suits/SoundSpeedTemp.gif",
null,
"https://pages.mtu.edu/~suits/SoundSpeedPress.gif",
null,
"https://pages.mtu.edu/~suits/SoundSpeedFreq.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93340015,"math_prob":0.9798521,"size":4743,"snap":"2022-27-2022-33","text_gpt3_token_len":1050,"char_repetition_ratio":0.13209538,"word_repetition_ratio":0.024009604,"special_character_ratio":0.22538477,"punctuation_ratio":0.10471204,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96724695,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-12T08:16:21Z\",\"WARC-Record-ID\":\"<urn:uuid:2f2f503a-639a-4f1e-957d-70802c3ceadb>\",\"Content-Length\":\"9016\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:31e2b077-6ca6-4a1d-88e7-31423c0eb4d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:a6a17727-4fb1-425f-b36e-ed92b58ce1df>\",\"WARC-IP-Address\":\"141.219.70.232\",\"WARC-Target-URI\":\"https://pages.mtu.edu/~suits/SpeedofSound.html\",\"WARC-Payload-Digest\":\"sha1:IIOZEMFEJTGQAGVSAQV5VR6SSCQP46BY\",\"WARC-Block-Digest\":\"sha1:XHZSZIUB4P62WPTU7L6JSN4QKELCGORV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571597.73_warc_CC-MAIN-20220812075544-20220812105544-00468.warc.gz\"}"} |
https://getyourquotenow.net/jis-z-0602-14/ | [
"# JIS Z 0602 PDF\n\nJIS Z Test methods for flat pallets (FOREIGN STANDARD). Available for Subscriptions. Content Provider Japanese Industrial Standards [JIS]. Standard (JIS Z ) was performed. From the obtained results, the possibility of practical application was exam- ined. Moreover, the manufacturing process of. JIS Z – JIS Z Test methods for flat pallets. Publication date: ; Original language: English. Please select.",
null,
"Author: Tygojin Akinom Country: Montenegro Language: English (Spanish) Genre: Technology Published (Last): 22 June 2015 Pages: 283 PDF File Size: 14.89 Mb ePub File Size: 4.72 Mb ISBN: 569-9-13300-210-6 Downloads: 98170 Price: Free* [*Free Regsitration Required] Uploader: Malale",
null,
"Such a probability is a typical example of the binomial probability distribution. This is explained as follows.\n\nNumber of pieces rejected. The possible test results are limited to either 1 “failure or defect”or 2 “no failure or acceptance,” with no possibility of such results as “pending decision” or “exception acceptance” allowed.\n\nThis occurs in the case of a surge pulse withstand or mechanical impact test. Such a system is known as a series system of redundancy nis 0 figure B. Number of pieces accepted. Unless otherwise specified, use Inspection Standard II.",
null,
"The reliability function R t for this system is given by the Poisson partial sum. Wear-outfailures which almost suddenly outbreak after a certain period approximate the normal distribution. This approximates AQL values.\n\n## A.1 AQL Sampling Table\n\nIf the sampled device is not returned non-replacementwe will have a hypergeometric distribution. This zz because the denominator of the defining expression equation IX component hours does not refer to any particular device. It is important to note that N no longer denotes the number of tests of Bernoulli trial due to the approximation of the binomial probability distribution to the Poisson distribution. For the system in figure B.\n\nHEROCLIX MAPS PDF\n\nThis is based on equation IX Introduction to Computing Systems 1 st Exam. A typical example of this is the movement of molecules of a classical, ideal gas. The Poisson distribution approximates the binomial probability distribution if the population is large and the phenomenon occurs with a low probability.\n\nJjis lot is acccepted when failures are grater than Ac but less then Re. This is known as the multinomial distribution.",
null,
"That is, it is a non-maintainable component. As the device is subjected to more and more cycles of intermittent operation nas shown in figure B.\n\nIn general, once a semiconductor device has failed, it cannot be repaired and used again. To serve as an example, consider the life L in equation IX Use first sampling plan above arrow.",
null,
"This probability is described by the binomial probability distribution fBin x, n, p. Therefore, the MTBF for a truncated portion of the life of the equipment up to the time T0 is estimated 0620 the following expression: Using a fixed value, in equation IX, for the number of damages k received before failure, consider the failure distribution function F t1, k as a function of time t1.\n\nEL DESTINO DE UN HOMBRE MIJAIL SHOLOJOV PDF\n\n### A.1 AQL Sampling Table\n\nEach test result is independent from one another. Expected characteristics of the function fNorm x are these: This is referred to as upper 20 percent point.\n\nThe level of degradation of the device can be expressed as a function of the leakage current i. Chapter 5 – Basic Concepts of Probability.\n\n## JIS Z 0602:1988\n\nMath Statistics And Probability A. These functions are shown in figure B.\n\nWe deal with this issue in B. As time t passes, the failure rate of these semiconductor devices changes. If the amount of change in the characteristic value is found to be accelerated by thermal stress, in many cases the Arrhenius chemical reaction kinetics model can be applied to this phenomenon."
] | [
null,
"https://www.ipros.jp/c/public/catalog/image_generated/01/32c/17343/17343_IPROS3919859836657718859_1_156x220.jpg",
null,
"https://getyourquotenow.net/download_pdf.png",
null,
"https://www.ipros.jp/img/public/product/image/ce5/311997018/IPROS1378001223270623693.jpg",
null,
"http://www.kingpallet.com.tw/customer/C000231/CNS 測試說明.jpg",
null,
"https://cdn.shopify.com/s/files/1/1492/0208/products/8218978057_large.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8889516,"math_prob":0.7719224,"size":3823,"snap":"2022-05-2022-21","text_gpt3_token_len":843,"char_repetition_ratio":0.118093744,"word_repetition_ratio":0.0065681445,"special_character_ratio":0.21292178,"punctuation_ratio":0.11898017,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9811799,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,4,null,3,null,4,null,7,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-22T05:04:31Z\",\"WARC-Record-ID\":\"<urn:uuid:4c5556fd-6572-4a58-bcac-7631cb30eba6>\",\"Content-Length\":\"35363\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5dcc9dc9-c4ad-425d-bf7f-45d7aaeebc87>\",\"WARC-Concurrent-To\":\"<urn:uuid:7dab30fc-83bb-47c2-9661-ea71fc7b4078>\",\"WARC-IP-Address\":\"104.21.67.129\",\"WARC-Target-URI\":\"https://getyourquotenow.net/jis-z-0602-14/\",\"WARC-Payload-Digest\":\"sha1:Z7NSKR2ZKUG5SNKWOS5VE2QEUGCYGMPG\",\"WARC-Block-Digest\":\"sha1:7NI3CXHF65YHE7HWWNLKBC3QMRPBSGCW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662543797.61_warc_CC-MAIN-20220522032543-20220522062543-00768.warc.gz\"}"} |
http://www.pinakinathc.me/theoretical-gurantees-of-learning/ | [
"this is a work in progress…please view later.\n\n## Can we get some theoretical guarantees of how successful learning can be achieved in a relatively simplified setting?\n\nIn this blog, I shall answer the aforementioned question. While third-party materials like this blog might help to introduce some concept, I encourage readers to also go through the sources from which I learned it. This section is primarily taken from a single source – “Understanding Machine Learning: From Theory to Algorithms” by Shai Shalev-Shwartz and Shai Ben-David. More specifically, this blog essentially trims and embarrasingly simplifies Chapter 2.\n\nSo let’s get cracking…\n\n### Introduction, Objective, and Motivation\n\nWe need to first focus on what do we want to learn and then define what do we mean by the word learning or more specifically a successful learning – i.e., what does it means to have a successful learning.\n\nFirst let me quote Vladimir N. Vapnik from “The Nature of Statistical Learning Theory”: »we consider the learning problem as a problem of finding a desired dependence using a **limited** number of observations.«\n\nTo get a better perspective of what do we want to learn consider this quote (again from Vapnik): »What must one know a priori about an unknown functional dependency in order to estimate it on the basis of observations?«\n\nWhile I shall provide a more mathematical definition of “learning”, maybe we should spend a bit more time on the philosophy of what exactly we mean by learning.\n\nNow I shall introduce the example that Shalev-Shwartz and Ben-David used in their book – “learn how to predict whether a papaya you see in the market is tasty or not”.\n\nBefore diving into, let us first define the learner’s input i.e. the basic statistical learning setting that the learner has access to:\n\n• Domain set: This is a set of objects that we wish to label. 
Each domain point is referred to as an instance in the instance space represented by $$\\mathbf{\\mathcal{X}}$$.\n\n• Label set: Consider that we have only 2 labels i.e. {0,1} or {-1, +1}. We shall denote this set of possible labels with $$\\mathbf{\\mathcal{Y}}$$.\n\n• Training data: As mentioned, our Domain set consists of instances we wish to label. But it is not possible to label the entire Domain set. So consider that we label a finite $$\\mathbf{m}$$ instances. Hence, $$S = ((x_{1}, y_{1}), (x_{2}, y_{2}), \\dots, (x_{m}, y_{m}))$$ is a finite sequence of pairs in $$\\mathcal{X}\\times\\mathcal{Y}$$. It is important to keep in mind that despite the “set” notation of $$S$$, it is a sequence, i.e. the same example may appear twice in $$S$$. This finite set or the training data are often called training examples.\n\n• The learner’s output: The objective of the learner is to output a mapping function or a prediction rule $$h: \\mathbf{\\mathcal{X}} \\rightarrow \\mathbf{\\mathcal{Y}}$$. This function is also called a {predictor, hypothesis, classifier}.\n\n• Data generation Model: We consider that the probability distribution over $$\\mathbf{\\mathcal{X}}$$ be denoted by $$D$$. It is important to note that we do not assume that the learner knows anything about this distribution.\n\n• Assumption: We assume that there is some “correct” labelling function, $$f:\\mathbf{\\mathcal{X}} \\rightarrow \\mathbf{\\mathcal{Y}}$$. We shall relax this assumption in our next blog.\n\n• Measure of success: The error of our prediction rule $$h: \\mathbf{\\mathcal{X}} \\rightarrow \\mathbf{\\mathcal{Y}}$$ is defined to be (the $$\\stackrel{\\text{def}}{=}$$ symbol means “is defined as”):\n\n$L_{D, f}(h) \\stackrel{\\text{def}}{=} \\mathbb{P}_{x \\sim D} [h(x) \\neq f(x)] \\stackrel{\\text{def}}{=} D(\\{x \\sim D: h(x) \\neq f(x)\\})$\n\nThe error of $$h$$ is the probability of randomly choosing an example $$x \\sim D$$ for which $$h(x) \\neq f(x)$$. 
The subscript $$(D, f)$$ indicates that the error is measured with respect to the probability distribution $$D$$ and the correct labelling function $$f$$. $$L_{D, f}$$ is known as the {generalization error, the risk, true error} of $$h$$.\n\n- Note: Since the learner is blind to the underlying distribution $$D$$ and the correct labelling function $$f$$, it cannot calculate the true error. To overcome this problem, the learner uses what is known as the Empirical Risk for guidance."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88781583,"math_prob":0.996297,"size":3677,"snap":"2021-43-2021-49","text_gpt3_token_len":898,"char_repetition_ratio":0.11298666,"word_repetition_ratio":0.010362694,"special_character_ratio":0.24911612,"punctuation_ratio":0.11079545,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997986,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-02T21:57:31Z\",\"WARC-Record-ID\":\"<urn:uuid:6dbf87f4-8098-4dc6-819a-6a45e4544439>\",\"Content-Length\":\"8226\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5dfea72c-0ac6-4c5b-884c-596a14e04275>\",\"WARC-Concurrent-To\":\"<urn:uuid:27e2f8a7-366c-4142-ba93-f08ec6519d1f>\",\"WARC-IP-Address\":\"192.30.252.153\",\"WARC-Target-URI\":\"http://www.pinakinathc.me/theoretical-gurantees-of-learning/\",\"WARC-Payload-Digest\":\"sha1:K7IZHUXUSKRQN4IIUTHVV3WKNXTK43C4\",\"WARC-Block-Digest\":\"sha1:B7YS2CCU7Y4FNCHK2PNFH2G7NDSWKCTE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362297.22_warc_CC-MAIN-20211202205828-20211202235828-00496.warc.gz\"}"} |
https://answers.everydaycalculation.com/compare-fractions/10-35-and-3-18 | [
"Solutions by everydaycalculation.com\n\n## Compare 10/35 and 3/18\n\n10/35 is greater than 3/18\n\n#### Steps for comparing fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 35 and 18 is 630\n2. For the 1st fraction, since 35 × 18 = 630,\n10/35 = 10 × 18/35 × 18 = 180/630\n3. Likewise, for the 2nd fraction, since 18 × 35 = 630,\n3/18 = 3 × 35/18 × 35 = 105/630\n4. Since the denominators are now the same, the fraction with the bigger numerator is the greater fraction\n5. 180/630 > 105/630 or 10/35 > 3/18\n\nMathStep (Works offline)",
null,
"Download our mobile app and learn to work with fractions in your own time:"
] | [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8705115,"math_prob":0.99592406,"size":403,"snap":"2020-24-2020-29","text_gpt3_token_len":159,"char_repetition_ratio":0.32330826,"word_repetition_ratio":0.0,"special_character_ratio":0.4764268,"punctuation_ratio":0.041237112,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9927235,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-02T15:41:41Z\",\"WARC-Record-ID\":\"<urn:uuid:5e09feb9-1b48-474e-9f07-3f27b4a088db>\",\"Content-Length\":\"7993\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9a591f1a-e9e8-4252-8d79-14a205358dcf>\",\"WARC-Concurrent-To\":\"<urn:uuid:28ae6dd0-eabf-449e-b03f-8fdba2b37847>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/compare-fractions/10-35-and-3-18\",\"WARC-Payload-Digest\":\"sha1:Q7MGTEPTGS57UZ7WONHQ722JTVGXA7PU\",\"WARC-Block-Digest\":\"sha1:3O3XMEF6D3XQ4NTH7MBQK6L6QAADLNKB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655879532.0_warc_CC-MAIN-20200702142549-20200702172549-00463.warc.gz\"}"} |
https://www.jiskha.com/questions/21526/What-would-the-equation-be-of-a-straight-line-that-passes-through-the-points-1-1 | [
"maths\n\nWhat would the equation be of a straight line that passes through the points (1,1) and (3,3) ??\n\nwrite the equation for the line:\n\ny= mx+ b\nput the first set of points in..\n1=1m + b then the second equation\n3=3m + b subtract the equations...\n2=2m so you know m.\nPut that into either equation and solve for b.\n\ni don't quite understand what you mean .\nwill m = 1 ??\n\nso that would mean b = 0 ?\n\nyes, slope is 1\n\nso would the equation would be (1,0) ?\n\nsorry i mean y = 1 x 1 + 0 ??\n\n1. 👍 0\n2. 👎 0\n3. 👁 100\n\nSimilar Questions\n\n1. Math\n\nThis question concerns the straight line that passes through the points (−1, 3) and (2, −6). Choose the three true statements from the following. Options A The gradient of the line is 3. B The gradient of the line is −3. C\n\nasked by S Rey on June 5, 2010\n2. Mathamatics\n\nThis question concerns the straight line that passes through the points (−1, 3) and (2, −6). Choose the three true statements from the following. Options A) The gradient of the line is 3. B) The gradient of the line is −3.\n\nasked by S Rey on June 6, 2010\n3. Maths\n\nHi, need a bit of help on this one. A straight line passes through the points (4,3) and (10,0), I need to write down the equation of the line in the form y=mx+c, I have tried but I cant seem to get the second coordinates to\n\nasked by Del on July 9, 2010\n4. algebra\n\nhow would i find an equation of a line that goes through points(1,6) and (3,10)?? thanks A straight line is y=mx+b Substitute the points to make two equations. 6=m(1)+b 10=m(3)+b Two equations; two unknowns, m and b. Solve for m\n\nasked by jasmine on December 22, 2006\n5. Algebra\n\nHey, can someone help me out with these questions? 1. Find the slope of the line that passes through (-2,-3) and (1,1). A) 1/3 B) 1 C) 2 D) 4/3 2. A line with slope = 1/2 passes through the points (2,5) and (x,3). What is the\n\nasked by Izzybelle on November 18, 2018\n6. 
mat 101\n\nEquation for a line that passes through (6,26) and has a slope of 3 Equation for a line that passes through the points (5,5) and (10,20) Equation for a line that passes through (9,25) and has a slope of -3. Equation for a line\n\nasked by keith on June 26, 2010\n7. Math 101\n\nFind the equation for each of the following items below: a. A line that passes through (6,26) and has a slope of 3. b. A line that passes though the points (5,5) and (10,20) c. A line that passes through (9,25) and has a slope of\n\nasked by SEMA , Need Help on March 23, 2010\n8. Maths\n\nFind the equation of a straight line whuch passes through the points (2,3) and (-1,4)\n\nasked by Bright ulubi on March 21, 2016\n9. Analytic geometry\n\nWrite in general form, the equation of the straight line that passes through the points A(-6, -1) and B(2,2)\n\nasked by Harry on February 10, 2012\n10. help!\n\nCan you please give me one of them so I cansolve for the other one?PLease! The general formula for a straight line is y = mx + b. Just plug the points into the equation to arrive at two equations. 6=2m+b 10=4m+b you have two\n\nasked by margie on November 1, 2006\n\nMore Similar Questions"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.946871,"math_prob":0.9969916,"size":2507,"snap":"2019-43-2019-47","text_gpt3_token_len":776,"char_repetition_ratio":0.18857372,"word_repetition_ratio":0.20781893,"special_character_ratio":0.32070205,"punctuation_ratio":0.12095401,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99990857,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-15T12:33:38Z\",\"WARC-Record-ID\":\"<urn:uuid:888236fc-5ff1-4e6c-87d9-d55ede010197>\",\"Content-Length\":\"18877\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4bf59ffa-31ab-41e9-987b-8ba14e7dc649>\",\"WARC-Concurrent-To\":\"<urn:uuid:6801aea5-9442-422f-8db5-2c3c7218d246>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/questions/21526/What-would-the-equation-be-of-a-straight-line-that-passes-through-the-points-1-1\",\"WARC-Payload-Digest\":\"sha1:ZQICXBJLCOD5JMY4ZOGWQ7OWGLDLYOUA\",\"WARC-Block-Digest\":\"sha1:BXZWYYAQORRS62AISWCIW6PEGRDFCTO4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986658566.9_warc_CC-MAIN-20191015104838-20191015132338-00501.warc.gz\"}"} |
https://www.tutussfunny.com/employee-gross-wage-calculation-using-java/ | [
"Home Java Java Console Application Projects Employee Gross wage Calculation using Java\n\n# Employee Gross wage Calculation using Java\n\n0\n20\n\ninput employee number,hourly rate calculate and display the gross payment using following condition.\n\nif the employee work more than 40 hours then he will be paid the hours work over 40 at the rate of two times than the normal rate others get the normal payment.",
null,
"Load More Related Articles\n• ### Angular Spring boot API Bootstrap Table load\n\nThis Angular and Spring boot tutorial will teach you how to view the records using Restful…\n• ### How to add two numbers in ReactJS\n\nIn this tutorials will teach How to add two numbers in ReactJS. …\n• ### How to do the student mark calculation in Angular\n\nIn this tutorials will teach How to do the student mark calculation in Angular. First you …\n• ### max,min,total,average calculation using java\n\ncreate a 3 element integer to calculate and following min numbers max numbers total averag…\n• ### Array counting the 0 in Java\n\nCreate a 10 element integer array and store some integer values display as the counting of…\n• ### DataInputStream using Java\n\nin this tutorials you will be learning DataInputStream using Java.input the integer numbe…\nLoad More In Java Console Application Projects\n\n## Angular Spring boot API Bootstrap Table load\n\nThis Angular and Spring boot tutorial will teach you how to view the records using Restful…"
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.704244,"math_prob":0.72329295,"size":591,"snap":"2022-40-2023-06","text_gpt3_token_len":187,"char_repetition_ratio":0.12776831,"word_repetition_ratio":0.0,"special_character_ratio":0.35871404,"punctuation_ratio":0.10344828,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.969365,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T14:43:59Z\",\"WARC-Record-ID\":\"<urn:uuid:2033a21c-5d85-4ebb-8466-6316578ec566>\",\"Content-Length\":\"119919\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c25424c6-7e5d-436e-bd88-d3591bbd3d97>\",\"WARC-Concurrent-To\":\"<urn:uuid:873b7f2e-8f32-4e49-8589-5f22480c232d>\",\"WARC-IP-Address\":\"3.234.104.255\",\"WARC-Target-URI\":\"https://www.tutussfunny.com/employee-gross-wage-calculation-using-java/\",\"WARC-Payload-Digest\":\"sha1:ZLNZMOLHSVRIQMHNZ2XB6EXAZVUEM4RA\",\"WARC-Block-Digest\":\"sha1:QMKPZ6ZLNK5OWTJEZV34ZMPX2BQLLEOU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337836.93_warc_CC-MAIN-20221006124156-20221006154156-00072.warc.gz\"}"} |
https://1000projects.org/data-structures-project-students.html | [
"# Data Structures Project for Students\n\nData Structures Project for Students Introduction:\n\nData structures play a very important role in programming. They provide the mechanism of storing the data in different ways. This optimizes searching and memory usage.\n\nData structures play a central role in modern computer science. They enable an efficient storage of data for an easy access. They enable us to represent the inherent relationship of the data in the real world. It enables an efficient processing of data. They help in data protection and management.",
null,
"Hence it is very important for students to learn about these data structures in order to learn the programming concepts. But through theoretical knowledge, one cannot get a complete idea of these concepts. So in order to help them, we have developed a system\n\nThat visualizes various programming concepts using java applets. Those applets visualize all the inherent mechanism of all those concepts.\n\nAlthough there exists many forms of visualizations, we have chosen java applets because of various benefits they offer.\n\nThe features of all the java applets developed are:\n\n1. STACK: The operations like push, pop, top of the stack, is empty, is full are performed on the stack.\n2. QUEUE: The operations like insertion, deletion, rear of the queue, front of the queue, is empty, is full are performed on the queue.\n3. CIRCULAR QUEUE: The operations like insertion, deletion, rear of the queue, front of the queue, is empty, is full are performed on the circular queue.\n4. SINGLY LINKED LIST: The operations like insertion, deletion and search operations are performed on the singly linked list.\n5. DOUBLY LINKED LIST: The operations like insertion, deletion and search operations are performed on the doubly linked list.\n6. BUBBLE SORT: Various numbers are sorted in ascending order according to the bubble sort technique.\n7. INSERTION SORT: Various numbers are sorted in ascending order according to the insertion sort technique.\n8. SELECTION SORT: Various numbers are sorted in ascending order according to the selection sort technique.\n9. LINEAR SEARCH: A list of elements are created and the required value is searched in the list based on the linear search strategy.\n10. BINARY SEARCH: A list of elements are created and the required value is searched in the list based on the binary search strategy.\n11. BINARY SEARCH TREE: Operations insertion, deletion are performed on the binary search tree. Also the search operation is performed on the tree.\n12. 
TREE TRAVERSAL: A tree is created and the inorder (infix), preorder (prefix) and postorder (postfix) traversals are performed on the tree.\n13. GRAPH TRAVERSAL: A graph is created and the BFS and DFS traversals are performed on the graph.\n14. DIJKSTRA’S ALGORITHM: A graph is created and the shortest path from the start node to each node in the graph is determined.\n15. PRIM’S ALGORITHM: A graph is created and the minimum spanning tree is determined according to Prim’s algorithm.\n16. KRUSKAL’S ALGORITHM: A graph is created and the minimum spanning tree is determined according to Kruskal’s algorithm.\n17. AVL TREE: The insertion and deletion operations are performed on the AVL tree. The search operation is also performed on the tree.\n18. N-QUEENS PROBLEM: A board of n*n dimension is created and the solution to the N-Queens problem is displayed. The solution is also animated at each step.\n19. DFA: The required graph is created, and we check whether the given input string is accepted or rejected by the DFA.\n20. NFA: The required graph is created, and we check whether the given input string is accepted or rejected by the NFA.\n21. BINARY ADDITION: Two decimal numbers are accepted as input, automatically converted into binary numbers and added. At each step, the results are displayed.\n22. PROCESS SCHEDULING: Various processes and their execution times are taken as input, and they are scheduled according to the selected strategy (FCFS, SJF, Round-Robin).\n23. TRAVELLING SALESPERSON PROBLEM: The number of cities to be traversed is taken as input, and the path through which the traveller needs to travel is determined both all at once and step by step.\n24. LINEAR HASHING: The hash table is created for the required number of elements, and insertion, deletion and search operations are performed on it.\nDownload the visual data structures project source code, project report, documentation and PPT presentation.",
] | [
null,
"https://1000projects.org/wp-content/uploads/2013/12/data-structures-Project-for-students-300x160.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92269033,"math_prob":0.8312503,"size":4407,"snap":"2023-14-2023-23","text_gpt3_token_len":921,"char_repetition_ratio":0.13945037,"word_repetition_ratio":0.27513966,"special_character_ratio":0.19627865,"punctuation_ratio":0.12682927,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9538269,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T18:58:36Z\",\"WARC-Record-ID\":\"<urn:uuid:33633591-803c-4f94-8c4f-3971fd1dfab9>\",\"Content-Length\":\"163234\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b436a64-b9b8-457f-ae09-7eda9529fea5>\",\"WARC-Concurrent-To\":\"<urn:uuid:485ca7cc-bb99-4887-9070-a332f5f80c9e>\",\"WARC-IP-Address\":\"160.153.0.62\",\"WARC-Target-URI\":\"https://1000projects.org/data-structures-project-students.html\",\"WARC-Payload-Digest\":\"sha1:W6BFNEM44P6OCAERZNTZDXAREP3AVLQP\",\"WARC-Block-Digest\":\"sha1:VYRYED2YXO2Y5M5RZA4YZOU3B3C7NV5J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224654012.67_warc_CC-MAIN-20230607175304-20230607205304-00211.warc.gz\"}"} |
https://support.microsoft.com/en-us/office/insert-and-calculate-simple-math-equations-in-onenote-6d8346d3-2c1e-490b-bcbb-f739d9323e1b?redirectSourcePath=%252fen-us%252farticle%252fCalculate-mathematical-equations-in-notes-38144d67-8deb-4401-a7bf-ddcd342cf37f | [
"You don’t need a calculator to find the answers to simple math problems. You can jot down math equations during a meeting, conference or class, and OneNote can instantly calculate the results for you.\n\n1. Type the equation you want to calculate. For example, type 95+83+416 to calculate the sum of the numbers 95, 83, and 416, or SQRT(15) to calculate the square root of 15.\n\n2. After the equation, without typing a space, type an equal sign (=), and then press Spacebar. The answer will appear after the equal sign.\n\nTips:\n\n• Don’t use spaces in the equation. Type the numbers, operators, and functions as one single, continuous string of text.\n\n• Function codes are not case-sensitive. For example, SQRT(3)=, sqrt(3)= or Sqrt(3)= will calculate the same answer.\n\n• To create a new line after the answer, press Enter (instead of Spacebar) after the equal sign.\n\nIf you want only the answer in your notes, after it’s calculated, you can delete the equation that precedes it. The answer will stay in your notes.\n\n## Examples of simple calculations\n\nBelow are a few examples of mathematical expressions that OneNote can calculate.\n\n• The average monthly sales of a product. For example, if the total revenue per year is \\$215,000, type \\$215,000/12= and then press Spacebar.\n\n• Total cost of monthly payments. For example, type 48*\\$129.99= and then press Spacebar to calculate the cost of 48 monthly payments at \\$129.99 a payment.\n\n• The sine of a 30-degree angle. For example, type sin(30)= and then press Spacebar.\n\n• More complete math equations. 
For example, type (6+7)/(4*sqrt(3))= and then press Spacebar to calculate the answer to (6+7) divided by (4 times the square root of 3).\n\n## Supported arithmetic operators\n\nYou can use the following operators in your equations.\n\n| Operator | Meaning | Example |\n| --- | --- | --- |\n| + (plus sign) | Addition | 3+3 |\n| - (minus sign) | Subtraction; negation | 3-1; -1 |\n| * (asterisk) | Multiplication | 3*3 |\n| X (upper- or lowercase) | Multiplication | 3x3 |\n| / (forward slash) | Division | 3/3 |\n| % (percent sign) | Percent | 20% |\n| ^ (caret) | Exponentiation | 3^2 |\n| ! (exclamation) | Factorial computation | 5! |\n\n## Supported math and trigonometry functions\n\nYou can use the math and trigonometry functions in the following table for your equations.\n\nNote: To calculate a function, type its code (for example, SQRT for square root), and immediately follow it with the number, angle, or variables in parentheses, as shown in the Syntax column.\n\n| Function | Description | Syntax |\n| --- | --- | --- |\n| ABS | Returns the absolute value of a number | ABS(number) |\n| ACOS | Returns the arccosine of a number | ACOS(number) |\n| ASIN | Returns the arcsine of a number | ASIN(number) |\n| ATAN | Returns the arctangent of a number | ATAN(number) |\n| COS | Returns the cosine of a number | COS(number) |\n| DEG | Converts an angle (in radians) to degrees | DEG(angle) |\n| LN | Returns the natural logarithm of a number | LN(number) |\n| LOG | Returns the natural logarithm of a number | LOG(number) |\n| LOG2 | Returns the base-2 logarithm of a number | LOG2(number) |\n| LOG10 | Returns the base-10 logarithm of a number | LOG10(number) |\n| MOD | Returns the remainder of a division operation | (number)MOD(number) |\n| PI | Returns the value of π as a constant | PI |\n| PHI | Returns the value of Φ (the golden ratio) | PHI |\n| PMT | Calculates a loan payment based on a constant interest rate, a constant number of payments, and the present value of the total amount | PMT(rate;nper;pv) |\n| RAD | Converts an angle (in degrees) to radians | RAD(angle) |\n| SIN | Returns the sine of the given angle | SIN(angle) |\n| SQRT | Returns a positive square root | SQRT(number) |\n| TAN | Returns the tangent of a number | TAN(number) |"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.763999,"math_prob":0.9978795,"size":3636,"snap":"2022-40-2023-06","text_gpt3_token_len":982,"char_repetition_ratio":0.16519824,"word_repetition_ratio":0.037037037,"special_character_ratio":0.279978,"punctuation_ratio":0.09818731,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999049,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-25T12:06:02Z\",\"WARC-Record-ID\":\"<urn:uuid:e0122158-972a-4717-ab98-94dea66a5f33>\",\"Content-Length\":\"129952\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6b9f5ae7-6681-4392-99cf-08b4074363ce>\",\"WARC-Concurrent-To\":\"<urn:uuid:7949ca7d-cd85-4324-a0f6-263e824d9951>\",\"WARC-IP-Address\":\"104.72.156.112\",\"WARC-Target-URI\":\"https://support.microsoft.com/en-us/office/insert-and-calculate-simple-math-equations-in-onenote-6d8346d3-2c1e-490b-bcbb-f739d9323e1b?redirectSourcePath=%252fen-us%252farticle%252fCalculate-mathematical-equations-in-notes-38144d67-8deb-4401-a7bf-ddcd342cf37f\",\"WARC-Payload-Digest\":\"sha1:LMTWTQ7PA2W5FNKJVDHAY3HM5FLFXPGQ\",\"WARC-Block-Digest\":\"sha1:ZZM6M6ZI3N5BCJHZWVTIGTGSIALDPHEJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334528.24_warc_CC-MAIN-20220925101046-20220925131046-00274.warc.gz\"}"} |
http://jackpurcellbooks.us/Discrete-Mathematics-With-Graph-Theory-3rd-Edition.doc | [
"# Discrete Mathematics With Graph Theory 3rd Edition - jackpurcellbooks.us\n\ntree graph theory wikipedia - in graph theory a tree is an undirected graph in which any two vertices are connected by exactly one path or equivalently a connected acyclic graph a forest is an undirected graph in which any two vertices are connected by at most one path or equivalently an acyclic graph or equivalently a disjoint union of trees a polytree or directed tree or oriented tree or singly connected network, spectral graph theory wikipedia - in mathematics spectral graph theory is the study of the properties of a graph in relationship to the characteristic polynomial eigenvalues and eigenvectors of matrices associated with the graph such as its adjacency matrix or laplacian matrix the adjacency matrix of a simple graph is a real symmetric matrix and is therefore orthogonally diagonalizable its eigenvalues are real algebraic, usf department of mathematics statistics - mgf 1107 mathematics for liberal arts cama 3 this terminal course is intended to present topics which demonstrate the beauty and utility of mathematics to the general student population, mathematics from crc press page 1 - mathematics mathematics is the study and application of arithmetic algebra geometry and analysis mathematical methods and tools such as matlab and mathematica are used to model analyze and solve diverse problems in a range of fields including biology computer science engineering finance medicine physics and the social sciences important subareas of mathematics include, what we are reading today essential discrete mathematics - discrete mathematics is the basis of much of computer science from algorithms and automata theory to combinatorics and graph theory this textbook covers the discrete mathematics that every, isotopes principles and applications 3rd edition - gunter faure is professor emeritus in the department of geological sciences at the ohio state university teresa m mensing is associate 
professor in the department of geological sciences at the ohio state university at marion 1 nuclear systematics 2 decay modes of radionuclides 3 radioactive, algorithms and data structures free computer - a collection of free algorithms and data structures books algorithms for reinforcement learning csaba szepesvari this book focuses on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming, new listings number theory web - 14th april 2019 summer program on number theory and related topics at harbin institute of technology hit may to august 2019 the 26th meeting of the lpnts will take place at king s college london, all the math books you ll ever need math blog - countless math books are published each year however only a tiny percentage of these titles are destined to become the kind of classics that are loved the world over by students and mathematicians within this page you ll find an extensive list of math books that have sincerely earned the reputation that precedes them for many of the most important branches of mathematics we ve, curve from wolfram mathworld - curve there are no fewer than three distinct notions of curve throughout mathematics in topology a curve is a one dimensional continuum charatonik and prajs 2001 in algebraic geometry an algebraic curve over a field is the zero locus of some polynomial of two variables which has its coefficients in in analytic geometry a curve is continuous map from a one dimensional space to an, computer networks and communications free computer - internet daemons digital communications possessed this book weaves together history theory and policy to give a full account of where daemons come from and how they influence our lives including their role in hot button issues like network neutrality, number theory conferences new and old - number theory conferences new and old 2020 2019 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2004 2003, exam test 
banks and solution manuals - exam test banks and solution manuals all test banks and solution manuals available if we don t have it send us a request, sharp interface approaches and deep learning techniques - where t is the time u u v w is the velocity field p is the pressure and f includes the external forces such as gravity we consider here the case of a fluid with uniform viscosity and uniform density in each subdomain and appropriate boundary conditions are imposed on and on the boundary of the computational domain"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9251541,"math_prob":0.89234424,"size":4437,"snap":"2019-13-2019-22","text_gpt3_token_len":848,"char_repetition_ratio":0.11865554,"word_repetition_ratio":0.057182707,"special_character_ratio":0.190444,"punctuation_ratio":0.018055556,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9876487,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-23T00:43:31Z\",\"WARC-Record-ID\":\"<urn:uuid:9dfa410e-95a2-4313-abb5-68b50eeb89e1>\",\"Content-Length\":\"15228\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d065ac17-4fe1-4ed7-976f-785c999506bc>\",\"WARC-Concurrent-To\":\"<urn:uuid:7142e19e-7e95-4eda-89a3-2f237fced3bf>\",\"WARC-IP-Address\":\"104.31.88.242\",\"WARC-Target-URI\":\"http://jackpurcellbooks.us/Discrete-Mathematics-With-Graph-Theory-3rd-Edition.doc\",\"WARC-Payload-Digest\":\"sha1:VCUMFCICJA7SIXU5OVEQIYLZFZH3STZS\",\"WARC-Block-Digest\":\"sha1:BQ3CR67EZLOWFMUVJZKX6GHCHAJMSH5Q\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256997.79_warc_CC-MAIN-20190523003453-20190523025453-00065.warc.gz\"}"} |
https://www.futurelearn.com/info/courses/precalculus/0/steps/32393 | [
"10\nWe’re now going to discuss inequalities featuring a radical. We’re going to start with some general remarks on strategy, which turn out to be rather important. Because we can miss out on solutions, we can get a wrong answer, if we work insensitively. I’ll explain what I mean. Just like for equations, although we love radicals, we try and get rid of them. That is, we, at some point, try and raise the inequality to the same power on both sides in order to eliminate the radicals to be able, then, to find a solution set.\n51\nThe only really viable approach here is to surgically dissect the domain as you go, to take account of the domain and the equivalence as you proceed to perform these operations, and at the end arrive at the correct solution set. We’ve seen an example early on in this week that neglecting the domain and operating in the cowboy procedure definitely won’t work in general, it could give you a false answer. So let’s look at inequalities, with the first general type being the following. N-th root of f, less than g. The inequality here, again, can be strict, or it can be non-strict. It doesn’t make a big difference in how you solve them.\n95.7\nThe inequality will have a domain and, naturally, the domain will be contained in the set of x’s for which f is positive. Why? Because the n-th root of f is only defined if f of x is greater or equal 0. Now, we’re going to study what happens when you take the n-th power of each side of the inequality, and whether we get equivalence and in what sense. For x in the domain, here’s the equivalence. Our original inequality is equivalent to the one in which we’ve taken the n-th power of each side, so we get f less than g to the n. However, you have to also take account of the restriction that g of x is positive. Why?\n143.3\nBecause if the n-th root is going to be less than g, then the n-th root being positive means g has to be positive. Now, this equivalence is easy to prove. You could try it as an exercise. 
Just show that if x is in the left hand side– that is, satisfies the inequality on the left– then it must satisfy the two conditions on the right, and vice versa. If x satisfies the two conditions on the right, it satisfies the inequality on the left. So we have an equivalent inequality. Now, in the second case, you have an inequality of the form n-th root of f, greater than g, rather than less than.\n184.4\nYou might think it’s rather similar, but in fact, it’s pretty different, because the equivalence now is the following. When you take the n-th power of each side, you get a new inequality, f greater than g to the n, and anything solving that will solve the original. But also, anything for which g of x is negative, any x for which g of x is negative, also solves the original inequality. That’s fairly evident. So, in the first case, the g of x being positive was a restriction that will make your set of solutions smaller. You have to cut away those points that fail to satisfy that.\n223.6\nIn the second case, the word “or” indicates that it will give you more solutions that you mustn’t lose track of. Now, all of this will be much clearer when we do an example of each type. Let’s do that now, starting with the first type, n-th root of f less than g. So you can see this is of that form. It’s the square root of x plus 3, which must be less than or equal x plus 1. We’re sensitive to the domains in our approach, so we immediately identify the natural domain, which is the interval minus 3 infinity for evident reasons of the square root being defined.\n264.5\nNow, what about the equivalence when we square both sides to get rid of the radicals sign? Well, we will have equivalence, so the new inequality will be x plus 3, less than x plus 1 squared, except that we mustn’t lose track of the extra restriction that x plus 1 has to be positive. So we don’t lose sight of that fact, although we temporarily forget it in solving the squared inequality. 
The squared inequality, when the dust settles, is a quadratic inequality whose solution set is easily seen to be the union of two intervals, the points up to minus 2 and the x’s beyond 1. So there is the solution set of our squared inequality.\n311.8\nBut now we have to remember our restriction. We have to see whether the x’s we’ve found, or which x’s we’ve found, are compatible. That is, compatible with being in the domain, D, and also compatible with the restriction we identified above, that x has to be greater than minus 1. When we take only those points, we find the interval 1, infinity. And that is, in fact, the solution set of our inequality. Now, should we check this solution set? Well, if we’re not androids, yes, we should. But notice that the word “check” here has a really different meaning from when we were doing equations.\n352.7\nFor equations, you had two or three values, you go back and see if they work, you reject the ones that don’t work. But here, it’s really rather different. You can’t do that, and checking means checking your work carefully over again, which is always a good idea, of course. Next on the agenda is an inequality of the second general type that we were discussing. You know, the n-th root of f greater than g, rather than less. So this example is the square root of x minus 2, strictly greater than x minus 4. Natural domain, for reasons of the square root being defined, the interval 2, infinity.\n392.3\nWhen we square the inequality on both sides to get rid of the radical, what will be the equivalence statement? It’ll be this. That the new points we’re looking for are those which satisfy the squared inequality, or which satisfy x minus 4 being negative. We mustn’t forget those points, otherwise they will be missing in our final solution. So we look at the squared inequality, we analyze it, it turns out to be a quadratic inequality. Easy to analyze, and its solutions consist of the open interval 3, 6. 
Now we have to talk about compatibility and also the “or” part above. We want to keep the solutions that we’ve just found, the interval 3, 6, but only those that are in the domain.\n443.6\nWell, they happen to all be in the domain, so we keep them. But now we have to add the “or” part. That is, we have to take account of the points x less than 4, but only those that are in the domain. So that gives us the interval 2, 4. And now the union of these two intervals that we’ve identified will be our overall solution set. The union turns out to be, as you can see, the interval 2, 6, half open. Let me remind you of the complementarity principle. We’ve just solved this inequality relative to the natural domain, and we’ve found the interval of solutions to be 2, 6.\n488.4\nSuppose someone now asks us to solve the opposite inequality, where the greater than is replaced by less than or equal. We don’t have to redo the work, we know the answer. It’s the complement of the original solution set relative to the natural domain, which turns out to be the closed interval 6, infinity. Our next topic will be to consider inequalities in which there are two radicals.\nTwo main types of inequalities, two sorts of equivalence\n\n#### Precalculus: the Mathematics of Numbers, Functions and Equations",
null
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.94998014,"math_prob":0.9942883,"size":7065,"snap":"2020-45-2020-50","text_gpt3_token_len":1646,"char_repetition_ratio":0.14190625,"word_repetition_ratio":0.040151514,"special_character_ratio":0.23411182,"punctuation_ratio":0.12744479,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.987045,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-25T16:51:15Z\",\"WARC-Record-ID\":\"<urn:uuid:5a153ada-6785-4f71-af33-60d32ee49a80>\",\"Content-Length\":\"72384\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8a77edf4-abd6-4a4b-80df-7ce5ae02b2a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:d5a86345-58c5-465f-ba78-2d533cea9e80>\",\"WARC-IP-Address\":\"104.18.15.106\",\"WARC-Target-URI\":\"https://www.futurelearn.com/info/courses/precalculus/0/steps/32393\",\"WARC-Payload-Digest\":\"sha1:5JRVZE4535R6NINOPBCWQL4O2Q65IWSH\",\"WARC-Block-Digest\":\"sha1:O2PZ5HH6AGUNTGUHWYMDA3YYDXU7VOUB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141183514.25_warc_CC-MAIN-20201125154647-20201125184647-00335.warc.gz\"}"} |
https://mathematica.stackexchange.com/questions/94663/implicit-function-theorem-to-higher-order | [
"# Implicit function theorem to higher order\n\nConsider\n\nf[x_, y_] := Csc[0.482 y] Sin[0.963 x - 0.482 y] +\n3.247 Csc[0.333 y] Sin[0.667 x - 0.333 y] +\n5.049 Csc[0.119 y] Sin[0.238 x - 0.119 y]\n\n\nI'd like an expression of an approximation of the root curve of $f$ (in the neighbourhood of a chosen known root). This is closely related to the implicit function theorem. The theorem gives an easy way to plot a linear approximation around a known root (point), corresponding to the red dot in the $(x,y)$ graph:",
null,
"Question How can I get an better approximation (to a higher order) of the roots (blue curves)? Of course I am not looking for an interpolation of the blue curves, my goal is precisely to avoid solving many equations (each of the blue dot is the result of a NSolve). Instead, I'd like a parametrisation $y=\\phi(x)$ where $\\phi(x)$ could be a Taylor expansion at order $n$, and such that $f(x,\\phi(x))=o(n)$.\n\nFull code for figure:\n\nf[x_, y_] := Csc[0.482 y] Sin[0.963 x - 0.482 y] +\n3.247 Csc[0.333 y] Sin[0.667 x - 0.333 y] +\n5.049 Csc[0.119 y] Sin[0.238 x - 0.119 y]\nsols = Table[\nMap[{#, y} &,\nNSolve[f[x, y] == 0 && 0 < x <= y/2, x][[All, 1, 2]]], {y, 0,\n20, 0.05}] // Flatten // Partition[#, 2] &; // Quiet\npoint = sols[];\nplot = Show[ListPlot[sols],\nGraphics[{Red, PointSize[0.03], Point[point]}]]\n\n{x0, y0} = point;\nphi[x_] = y0 - (x - x0)*(D[f[x, y], x] /. x -> x0 /.\ny -> y0)/(D[f[x, y], y] /. x -> x0 /. y -> y0) // Simplify\nShow[plot, Plot[phi[y], {y, -10, 25}, PlotStyle -> Red]]\n\n• It is not clear what you want for a result. A parametric InterpolatingFunction object? A function of the form y=... with the ... a series in x? – Daniel Lichtblau Sep 15 '15 at 0:59\n• @DanielLichtblau The second one ($y(x)$). I edited. – anderstood Sep 15 '15 at 1:43\n\nA first-year calculus approach to finding Taylor series:\n\norder = 10; (* derivative order *)\nstep[x0_][{eqn_, coeffs_}] :=\n{#, {coeffs, Solve[# /. x -> x0 /. Flatten@coeffs]}} &[D[eqn, x]];\nderivatives =\nFlatten@ Last@ Nest[step[x0], {f[x, y[x]] == 0, y[x0] -> y0}, order];\ny1 = Normal@Series[y[x], {x, x0, Length@derivatives - 1}] /. derivatives\n(*\n11.4 + 1.12299 (-2.26109 + x) + 0.190117 (-2.26109 + x)^2 +\n...\n0.0108529 (-2.26109 + x)^9 + 0.00907805 (-2.26109 + x)^10\n*)\n\nShow[\nplot,\nPlot[y1, {x, 0, 3.3}, PlotStyle -> {Red, Opacity[0.5]}]\n]",
null,
"Notes: The function Nest applies step iteratively to the equation f[x, y[x]] == 0 to calculate the value of the next derivative of the point {x0, y0} in question. The function step differentiates the equation one more time at each step and solves for the next derivative. The derivative will be the only unknown in the equation, so we do not have to specify which unknown to solve for. Solve will figure it out for us. The function step returns the differentiated equation and the solutions for all the derivatives up to the current order; the derivative values have the form of a nested (linked) list of rules.\n\nThe divergence of the red curve on the left occurs outside the radius of convergence of the Taylor series.\n\n• Oh, well, just noticed Daniel beat me to it. – Michael E2 Sep 15 '15 at 16:40\n• Yeah, but with messier code. – Daniel Lichtblau Sep 15 '15 at 16:44\n\nWrite y as an explicit function of x. Then one can solve for successive derivatives to set up a Taylor approximation. Below is some slightly messy code for this.\n\ntaylor[func_, x_, y_, pt_, n_] := Module[\n{f = func[x, y] /. y -> y[x], deriv, var = y[x], sol, newsol},\nderiv = f;\nsol = {y[x] -> pt[]};\npt[] + Sum[\nderiv = D[deriv, x];\nvar = D[var, x];\nnewsol =\nSolve[(deriv /. sol /. x -> pt[]) == 0,\nvar /. x -> pt[]] /. sol;\nsol = Join[sol, newsol[]];\n1/j!*(x - pt[])^j*(var /. x -> pt[]) /. sol\n, {j, n}]\n]\n\n\nIllustrated on the example in question:\n\nplot = Show[ListPlot[sols],\nGraphics[{Red, PointSize[0.03], Point[point]}],\nPlot[Evaluate[taylor[f, x, y, point, 4]], {x, point[] - 2,\npoint[] + 2}, ColorFunction -> (Green &)]]",
null,
"• I accepted Michael's answer because of the explanations. But your answer is (also) what I was looking for, thank you very much. – anderstood Sep 15 '15 at 16:58\n\nHere I'll use a Chebyshev series instead of a Taylor series (see About multi-root search in Mathematica for transcendental equations).\n\nFirst we approximate the curve of interest. The interpolation will be used to seed FindRoot below to get more precise values of y for a given x.\n\nyIF = NDSolveValue[{f[x, y[x]] == 0, u'[x] == 1, u[x0] == x0,\ny[x0] == y0}, y, {x, 0, 10}];\n\n\nNDSolveValue::ndsz: At x == 3.275164228822655, step size is effectively zero; singularity or stiff system suspected. >>\n\nThe Chebyshev proxy approximates an analytic function very well; that means that to take advantage of its strengths, we need to stay a little away from the singularity (at the vertical tangent where the curve loops back).\n\nModule[{x1, x2},\n{x1, x2} = First[yIF[\"Domain\"]];\ndomain = {x1, x1 + 0.999 (x2 - x1)}];\n\nr = 10;\nyFR = y /. FindRoot[f[#, y] == 0, {y, yIF[#]}] &;\nn = 32; (* use n = 64 for greater accuracy *)\ncnodes = Rescale[N[Cos[Pi Range[0, n]/n], 30], {-1, 1}, domain];\ncc = Sqrt[2/n] FourierDCT[yFR /@ cnodes, 1];\ncc[[{1, -1}]] /= 2;\n\n\nTo get a numerically stable result, we need to evaluate the Chebyshev polynomials individually before summing with the coefficients. (The expanded polynomial has alternating coefficients.) Note that the result yT is a polynomial.\n\nyT[x_?NumericQ] :=\ncc.Table[ChebyshevT[n - 1, Rescale[x, domain, {-1, 1}]], {n, Length@cc}];\n\nShow[\nplot,\nPlot[yT[x], Evaluate@Flatten[{x, domain}],\nPlotStyle -> {Red, Opacity[0.5]}]\n]",
null,
"• Why do you approximate the function of interest? Is it just to give good initial conditions to FindRoot? – anderstood Sep 15 '15 at 17:12\n• @anderstood Yes it is just to give good initial starting points to FindRoot. Since there are many possible roots, it's important to start close to the desired value. -- Just noticed there was a typo that obscured the reason. – Michael E2 Sep 15 '15 at 17:19\n• The accuracy of the fit (considering the short calculation time) and the length of the domain of \"validity\" are impressive. However, I will stick with the Taylor series because it still stands if I replace the values by parameters name, so it gives a closed form approximation in a more general frame. Thank you very much for taking the time to share this informative technique. – anderstood Sep 15 '15 at 18:30\n\nThe function is easier to work with, if singularities are eliminated.\n\nf[x_, y_] == (Csc[0.482 y] Sin[0.963 x - 0.482 y] +\n3.247 Csc[0.333 y] Sin[0.667 x - 0.333 y] +\n5.049 Csc[0.119 y] Sin[0.238 x - 0.119 y])\nSin[0.482 y] Sin[0.333 y] Sin[0.119 y] // Simplify\n\n\nNext, ContourPlot quickly finds all the zero-curves.\n\nplt = ContourPlot[f == 0, {x, 0, 10}, {y, 0, 20}, PlotPoints -> 100]",
null,
"Points on a segment of any of the curves shown in this plot or that in the Question can be extracted in a variety of ways. Then, Fit or a related Mathematica function can be used to fit these points to an analytical function y[x].\n\nA simpler approximation can be obtained based on my answer to a related question. From it, y can be approximated locally by\n\ny0 - (f[x, y0]/Derivative[0, 1][f][x, y0]);\n\n\nWith y0 = 11.4, as specified in the question, this becomes\n\n% /. y0 -> 11.4 // FullSimplify\n(* 33.4981 + (-64.2294 Cos[5.4948 - 0.963 x] -\n167.835 Cos[3.7962 - 0.667 x] + 58.1109 Cos[1.3566 - 0.238 x] -\n67.2963 Sin[5.4948 - 0.963 x] +\n205.631 Sin[3.7962 - 0.667 x])/(2.90656 Cos[5.4948 - 0.963 x] +\n7.595 Cos[3.7962 - 0.667 x] - 2.62968 Cos[1.3566 - 0.238 x] +\n2.77246 Sin[5.4948 - 0.963 x] - 10.3375 Sin[3.7962 - 0.667 x] +\n1. Sin[1.3566 - 0.238 x]) *)\n\n\nwhich is a reasonable local approximation to the curve.\n\nShow[plt, Plot[%, {x, 1.5, 3.5}, PlotStyle -> Red]]",
null,
"• My question might be not clear enough, but the problem with such a solution is that I need to calculate the approximation if I change the values of the numbers in $f$. I am looking for something more in the spirit of Taylor expansion which would give a formula independently of these values. – anderstood Sep 15 '15 at 4:32\n• @anderstood Is the addendum to my answer closer to what you had in mind? Converting the expression for the curve to a Taylor series is straightforward but slightly less accurate. Also, choosing y0 = 11 would give a wider range of validity for the approximation. Incidentally, expanding both the numerator and the denominator of the expression would give a result similar to a Pade approximation. – bbgodfrey Sep 15 '15 at 15:18\n• That is exactly in the spirit of what I am looking for, indeed. Using Series[f[x, y], {y, y0, 3}] // Normal; ysol = y /. Solve[% == 0, {y}], I managed to extend it to the third order by expanding $f(x,y)$ with Taylor to the third order in $y$, and then solve the third order polynomial. But it is still limited as findings roots of polynomials, for which there is in general no analytical solutions if $n>5$. – anderstood Sep 15 '15 at 16:22\n• @anderstood At some point, analytical solutions simply do not exist, and I think you have reached that point. An alternative is to determine the solution numerically and fit it to an analytical solution. – bbgodfrey Sep 16 '15 at 0:01\n• I indeed reached that point, but for some reasons it is important to me to get something analytical, even if it is an approximation. This allows me to give an analytical expression of a surface which would be very costly to build numerically. I'm trying to do the Padé approximation you mentioned (hadn't heard about it before), but I don't think it is as easy as Taylor (see Michael's answer for example) because it does not make sense to identify successive derivatives of $x\\mapsto y(x)$ in $x_0$. – anderstood Sep 16 '15 at 0:33"
] | [
null,
"https://i.stack.imgur.com/8Q37b.png",
null,
"https://i.stack.imgur.com/Q8K62.png",
null,
"https://i.stack.imgur.com/CPW20.png",
null,
"https://i.stack.imgur.com/Dt8Or.png",
null,
"https://i.stack.imgur.com/Plr3s.png",
null,
"https://i.stack.imgur.com/R91fB.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.73607045,"math_prob":0.9935865,"size":1606,"snap":"2020-45-2020-50","text_gpt3_token_len":583,"char_repetition_ratio":0.11485643,"word_repetition_ratio":0.1875,"special_character_ratio":0.4414695,"punctuation_ratio":0.20351759,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990349,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-26T04:52:15Z\",\"WARC-Record-ID\":\"<urn:uuid:be945fac-d1cc-41b4-acee-2ac271b8fcac>\",\"Content-Length\":\"197860\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a35e2f3-e3da-42d9-a1e6-1b06a7f70b46>\",\"WARC-Concurrent-To\":\"<urn:uuid:2027167b-89bf-4069-9b79-c9444748f25a>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/94663/implicit-function-theorem-to-higher-order\",\"WARC-Payload-Digest\":\"sha1:CZP7U7NUOQQSUTX6SIN3K5XQOJ5QT57P\",\"WARC-Block-Digest\":\"sha1:3MF5DE4WA5XYFSHVQV5OCQEAIJZH7ROM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141186414.7_warc_CC-MAIN-20201126030729-20201126060729-00272.warc.gz\"}"} |
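A minimal numeric sketch of the local approximation described in the Mathematica answer above: near a guess y0, the implicit curve f(x, y) = 0 is replaced by one Newton step in y, y(x) ≈ y0 - f(x, y0)/(∂f/∂y)(x, y0). The circle used below is a stand-in test function, not the answer's trigonometric f.

```python
def implicit_approx(f, dfdy, y0):
    """Return x -> local approximation of y(x) on the curve f(x, y) = 0."""
    return lambda x: y0 - f(x, y0) / dfdy(x, y0)

# Stand-in curve: circle x^2 + y^2 = 25, so the exact solution is y(3) = 4.
f = lambda x, y: x**2 + y**2 - 25.0
dfdy = lambda x, y: 2.0 * y

y_approx = implicit_approx(f, dfdy, y0=3.8)
print(y_approx(3.0))  # close to the exact value 4
```

As the comment thread notes, this is only accurate near y0; widening the range needs a better y0 or higher-order (Taylor/Padé) terms.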
https://citizenmaths.com/angle/1136.6-sextants-to-turns | [
"1136.6 Sextants to Turns\n\nSextant\n• Angular Mil\n• Binary degree\n• Centesimal minute of arc\n• Centesimal second of arc\n• Centiturn\n• Degree\n• Diameter Part\n• Hour Angle\n• Milliturn\n• Minute of arc\n• Minute of time\n• Octant\n• Point\n• Quarter Point\n• Second of arc\n• Second of time\n• Sextant\n• Sign\n• Turn\n=\nTurn\nFormula 1,136.6 sxt = 1136.6 / 6 tr = 189.43 tr\n1,136.6 sxt = 189.43 tr\n\nExplanation:\n• 1 sxt is equal to 0.16667 tr, therefore 1136.6 sxt is equivalent to 189.43 tr.\n• 1 Sextant = 1 / 6 = 0.16667 Turns\n• 1,136.6 Sextants = 1136.6 / 6 = 189.43 Turns\n\n1136.6 Sextants to Turns Conversion Table\n\nSextant (sxt) Turn (tr)\n1,136.7 sxt 189.45 tr\n1,136.8 sxt 189.47 tr\n1,136.9 sxt 189.48 tr\n1,137 sxt 189.5 tr\n1,137.1 sxt 189.52 tr\n1,137.2 sxt 189.53 tr\n1,137.3 sxt 189.55 tr\n1,137.4 sxt 189.57 tr\n1,137.5 sxt 189.58 tr\n\nConvert 1136.6 sxt to other units\n\nUnit Unit of Angle\nSecond of time 16,367,040.0 s ot\nQuarter Point 24,247.47 qtr point\nPoint 6,061.87 point\nMinute of time 272,784.0 min ot\nMilliturn 189,433.33 mltr\nHour Angle 4,546.4 HA\nDiameter Part 71,414.69 Ø dia- part\nCentiturn 18,943.33 centiturn\nCentesimal second of arc 757,733,333.33 c SOA\nCentesimal minute of arc 7,577,333.33 c MOA\nAngular Mil 1,212,373.33 µ\nSign 2,273.2 sign\nOctant 1,515.47 octa\nTurn 189.43 tr"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.54225546,"math_prob":0.98231816,"size":3017,"snap":"2022-05-2022-21","text_gpt3_token_len":1143,"char_repetition_ratio":0.22236973,"word_repetition_ratio":0.23595506,"special_character_ratio":0.4302287,"punctuation_ratio":0.19856115,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96179944,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-24T11:20:04Z\",\"WARC-Record-ID\":\"<urn:uuid:fa72a860-e824-487c-8d24-42f082bc42cc>\",\"Content-Length\":\"58274\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ba9ac1b-788b-47d4-b355-3c86732304b8>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a51c03c-8ece-4184-9cd1-1aec5b818d0f>\",\"WARC-IP-Address\":\"34.123.66.60\",\"WARC-Target-URI\":\"https://citizenmaths.com/angle/1136.6-sextants-to-turns\",\"WARC-Payload-Digest\":\"sha1:3DCSQ5DWHLCO2DUS6CVNHG7BBUKORRXI\",\"WARC-Block-Digest\":\"sha1:5DURTQBHFLQ5WLJPMGWYZLBPQVI5GKGS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304528.78_warc_CC-MAIN-20220124094120-20220124124120-00506.warc.gz\"}"} |
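The conversion in the record above reduces to a one-line rule (1 sextant = 60° = 1/6 turn); a minimal sketch:

```python
def sextants_to_turns(sxt):
    # 1 sextant is 1/6 of a full turn, so turns = sextants / 6
    return sxt / 6.0

print(round(sextants_to_turns(1136.6), 2))  # 189.43, matching the table
```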
https://percentage-calculator.net/what-percent-of-x-is-y/what-percent-of-260-is-52.php | [
"# Question: What percent of 260 is 52\n\n## Step by step method for calculating what percent of 260 is 52\n\nWe already have our first value 260 and the second value 52. Let's assume the unknown value is Y, which we will find.\n\nAs we have all the required values we need, we can now put them in a simple mathematical formula as below:\n\nSTEP 1 Y = 52/260\n\nBy multiplying both numerator and denominator by 100 we will get:\n\nSTEP 2 Y = 52/260 × 100/100 = 20/100\n\nSTEP 3 Y = 20\n\nFinally, we have found the value of Y, which is 20, and that is our answer.\n\nYou can use a calculator to find what percent of 260 is 52: just enter 52 ÷ 260 × 100 and you will get your answer, which is 20.\n\nYou may also be interested in:\n\nHere is a calculator to solve percentage calculations such as what percent of 260 is 52. You can solve this type of calculation with your values by entering them into the calculator's fields, and clicking 'Calculate' to get the result and explanation.\n\nWhat percent of\nis\n\n## Have time and want to learn the details?\n\nLet's solve the equation for Y by first rewriting it as: 100% / 260 = Y% / 52\n\nDrop the percentage marks to simplify your calculations: 100 / 260 = Y / 52\n\nMultiply both sides by 52 to isolate Y on the right side of the equation: 52 ( 100 / 260 ) = Y\n\nComputing the left side, we get: 20 = Y\n\nThis leaves us with our final answer: 20 percent of 260 is 52"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9335174,"math_prob":0.9977835,"size":1197,"snap":"2021-31-2021-39","text_gpt3_token_len":320,"char_repetition_ratio":0.12070411,"word_repetition_ratio":0.008230452,"special_character_ratio":0.31746033,"punctuation_ratio":0.072874494,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997261,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-06T03:55:43Z\",\"WARC-Record-ID\":\"<urn:uuid:7c1c7681-43c1-44a0-9023-23876c7e91ec>\",\"Content-Length\":\"51901\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dbd78ae6-fa1b-429a-8750-c4023ac9c703>\",\"WARC-Concurrent-To\":\"<urn:uuid:ce3fd6af-3ba3-4550-9663-9be3409e8562>\",\"WARC-IP-Address\":\"172.67.159.133\",\"WARC-Target-URI\":\"https://percentage-calculator.net/what-percent-of-x-is-y/what-percent-of-260-is-52.php\",\"WARC-Payload-Digest\":\"sha1:FFGE3PMGCGAK6F5VBCLXCRJOMELUCGBD\",\"WARC-Block-Digest\":\"sha1:BLPVPCUA36U222WFHPHDDAVGOORYCEFH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152112.54_warc_CC-MAIN-20210806020121-20210806050121-00576.warc.gz\"}"} |
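The three steps in the record above collapse to a single expression; a minimal sketch:

```python
def percent_of(part, whole):
    # STEP 1 to 3 from the page: Y = part/whole, scaled to a percentage
    return part * 100.0 / whole

print(percent_of(52, 260))  # 20.0: 52 is 20 percent of 260
```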
http://mizar.org/version/current/html/proofs/polynom6/12 | [
"let n be Ordinal; :: thesis: for L being non empty right_complementable Abelian add-associative right_zeroed associative distributive doubleLoopStr\nfor p, q, r being Series of n,L holds (p + q) *' r = (p *' r) + (q *' r)\n\nlet L be non empty right_complementable Abelian add-associative right_zeroed associative distributive doubleLoopStr ; :: thesis: for p, q, r being Series of n,L holds (p + q) *' r = (p *' r) + (q *' r)\nlet p, q, r be Series of n,L; :: thesis: (p + q) *' r = (p *' r) + (q *' r)\nset cL = the carrier of L;\nnow :: thesis: for b being Element of Bags n holds ((p + q) *' r) . b = ((p *' r) + (q *' r)) . b\nlet b be Element of Bags n; :: thesis: ((p + q) *' r) . b = ((p *' r) + (q *' r)) . b\nconsider s being FinSequence of the carrier of L such that\nA1: ((p + q) *' r) . b = Sum s and\nA2: len s = len () and\nA3: for k being Element of NAT st k in dom s holds\nex b1, b2 being bag of n st\n( () /. k = <*b1,b2*> & s /. k = ((p + q) . b1) * (r . b2) ) by POLYNOM1:def 10;\nconsider u being FinSequence of the carrier of L such that\nA4: (q *' r) . b = Sum u and\nA5: len u = len () and\nA6: for k being Element of NAT st k in dom u holds\nex b1, b2 being bag of n st\n( () /. k = <*b1,b2*> & u /. k = (q . b1) * (r . b2) ) by POLYNOM1:def 10;\nconsider t being FinSequence of the carrier of L such that\nA7: (p *' r) . b = Sum t and\nA8: len t = len () and\nA9: for k being Element of NAT st k in dom t holds\nex b1, b2 being bag of n st\n( () /. k = <*b1,b2*> & t /. k = (p . b1) * (r . b2) ) by POLYNOM1:def 10;\nreconsider t = t, u = u as Element of (len s) -tuples_on the carrier of L by ;\nA10: dom u = dom s by ;\nA11: dom t = dom s by ;\nthen A12: dom (t + u) = dom s by ;\nA13: now :: thesis: for i being Nat st i in dom s holds\ns . i = (t + u) . i\nlet i be Nat; :: thesis: ( i in dom s implies s . i = (t + u) . i )\nassume A14: i in dom s ; :: thesis: s . i = (t + u) . i\nthen consider sb1, sb2 being bag of n such that\nA15: (decomp b) /. 
i = <*sb1,sb2*> and\nA16: s /. i = ((p + q) . sb1) * (r . sb2) by A3;\nA17: ( t /. i = t . i & u /. i = u . i ) by ;\nconsider ub1, ub2 being bag of n such that\nA18: (decomp b) /. i = <*ub1,ub2*> and\nA19: u /. i = (q . ub1) * (r . ub2) by A6, A10, A14;\nA20: ( sb1 = ub1 & sb2 = ub2 ) by ;\nconsider tb1, tb2 being bag of n such that\nA21: (decomp b) /. i = <*tb1,tb2*> and\nA22: t /. i = (p . tb1) * (r . tb2) by A9, A11, A14;\nA23: ( sb1 = tb1 & sb2 = tb2 ) by ;\ns /. i = s . i by ;\nhence s . i = ((p . sb1) + (q . sb1)) * (r . sb2) by\n.= ((p . sb1) * (r . sb2)) + ((q . sb1) * (r . sb2)) by VECTSP_1:def 7\n.= (t + u) . i by ;\n:: thesis: verum\nend;\nlen (t + u) = len s by ;\nthen s = t + u by ;\nhence ((p + q) *' r) . b = (Sum t) + (Sum u) by\n.= ((p *' r) + (q *' r)) . b by ;\n:: thesis: verum\nend;\nhence (p + q) *' r = (p *' r) + (q *' r) by FUNCT_2:63; :: thesis: verum"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5326313,"math_prob":0.9997763,"size":3061,"snap":"2022-27-2022-33","text_gpt3_token_len":1419,"char_repetition_ratio":0.13182859,"word_repetition_ratio":0.310388,"special_character_ratio":0.50245017,"punctuation_ratio":0.22168088,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99998164,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-10T14:05:14Z\",\"WARC-Record-ID\":\"<urn:uuid:0af52e0f-e15f-4a6c-9869-84b419fc05ce>\",\"Content-Length\":\"44888\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dc32f3f3-ad71-4b3c-96ba-6ee4dff7ba7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0e8c820a-b011-49ba-904c-d1afc92e260a>\",\"WARC-IP-Address\":\"193.219.28.149\",\"WARC-Target-URI\":\"http://mizar.org/version/current/html/proofs/polynom6/12\",\"WARC-Payload-Digest\":\"sha1:EN7RNA3UJWYTCT7CI7D3IQOXQOFYPQPC\",\"WARC-Block-Digest\":\"sha1:G3YGPQGPI6PNSXDWYCJDLRAXWM7NB3KZ\",\"WARC-Identified-Payload-Type\":\"application/xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571190.0_warc_CC-MAIN-20220810131127-20220810161127-00470.warc.gz\"}"} |
https://xiith.com/python-program-to-find-the-smallest-element-in-a-list/ | [
"# Python Program to find the smallest element in a list\n\nIn this program, you will learn how to find the smallest element in a list in Python.\n\n``data = [22, 3, 55, 8]``\n\n## Example: How to find the smallest element in a list in Python\n\n``````k = int(input(\"Enter how many number:\"))\n\ndata = []\nprint(\"Enter numbers:\", end=\"\")\n\nfor i in range(0, k):\n    num = int(input())\n    data.append(num)\n\nsm = data[0]\nfor i in range(0, len(data)):\n    if sm > data[i]:\n        sm = data[i]\n\nprint(\"Smallest element:\", sm)``````\n\n#### Output:\n\n``````Enter how many number:4\nEnter numbers:20\n30\n10\n40\nSmallest element: 10``````"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.58886033,"math_prob":0.9899841,"size":497,"snap":"2020-45-2020-50","text_gpt3_token_len":148,"char_repetition_ratio":0.14604463,"word_repetition_ratio":0.12195122,"special_character_ratio":0.35613683,"punctuation_ratio":0.18965517,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.997457,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-28T11:36:24Z\",\"WARC-Record-ID\":\"<urn:uuid:f2c45443-f8af-4023-b523-7b6bf09ac78f>\",\"Content-Length\":\"32129\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:953c5fe2-8eac-403f-af78-3cc98ab15a13>\",\"WARC-Concurrent-To\":\"<urn:uuid:225349b8-c85e-44b6-8d96-28a2dbbbd4e3>\",\"WARC-IP-Address\":\"172.67.144.5\",\"WARC-Target-URI\":\"https://xiith.com/python-program-to-find-the-smallest-element-in-a-list/\",\"WARC-Payload-Digest\":\"sha1:25FYATQ7GVIZFXFCAUGVI4BMNL5RN44P\",\"WARC-Block-Digest\":\"sha1:ZO24SM4AGAWWNHCYJCIUF7UQYAG6NCS3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141195417.37_warc_CC-MAIN-20201128095617-20201128125617-00324.warc.gz\"}"} |
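The scan in the record above can also be written as a plain function (Python's built-in min does the same job):

```python
def smallest(data):
    sm = data[0]        # start from the first element
    for x in data[1:]:  # compare against the rest
        if x < sm:
            sm = x
    return sm

data = [20, 30, 10, 40]
print("Smallest element:", smallest(data))  # Smallest element: 10
```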
https://math.stackexchange.com/questions/2476854/sections-of-coherent-sheaf-on-a-projective-variety | [
"# Sections of coherent sheaf on a projective variety\n\nWhy are the global sections of a coherent sheaf on a projective variety finite dimensional?\n\nI have read the following:\n\nSuppose $F$ is a vector bundle on $X$. Then $X$ is an affine scheme, then a quasi-coherent sheaf is a vector bundle iff its global sections are a projective variety $\\Gamma(O_x)$-module [of finite dimension].\n\nI have no idea how this answers my question? $\\Bbb{P}^1=\\text{Proj}(\\Bbb{C}[x,y])$ is not an affine scheme, and so surely I can't just take my projective variety to be an affine scheme? Does the highlighted text answer my question? If not, how does one show this?\n\n(Edit: This is precisely how it was written, although it seems like broken english perhaps)\n\n## 1 Answer\n\nHere is a correct version of my previous answer, which was totally wrong as @Johann noticed.\n\nFirst it is enough to prove it for $X = \\Bbb P^n$. Indeed, we have $H^0(X,F) = H^0(\\Bbb P^n, i_*F)$ where $i : X \\to \\Bbb P^n$ is a closed immersion (and $i_*F$ is coherent in this case). We need to know the \"Serre computation\" written in FAC which computes $H^i(\\Bbb P^n, \\mathcal O(m))$ for any $i,n,m$. It is always finitely generated.\n\nNow one can prove that $H^i(\\Bbb P^n,F)$ is finitely generated by descending induction on $i$. For $i = n$ it's clear because the short exact sequence $0 \\to K \\to \\bigoplus_{i} \\mathcal O(d_i) \\to F \\to 0$ gives a surjection $H^n(\\Bbb P^n, H) \\to H^n(\\Bbb P^n,F)$ where $H := \\bigoplus_i \\mathcal O(d_i)$.\n\nNow $H^i(\\Bbb P^n,F)$ is between $H^i(\\Bbb P^n, H)$ and $H^{i+1}(\\Bbb P^n, K)$ in the long exact sequence and they are both finitely generated, the first one because of Serre's computation and the second one by induction hypothesis.
It follows that $H^i(\\Bbb P^n, F)$ is finitely generated for all $i \\in \\Bbb Z$, in particular for $i = 0$.\n\n• I don't think it is that clear that this surjection on global section exists - you get a surjection of sheaves by definition, but the first cohomology of the kernel might not vanish. Indeed, iirc the claim in the question is usually proved by descending induction for all cohomology groups. – Johann Haas Oct 18 '17 at 8:20\n• @Johann : of course you are right. I'll delete my answer. – Nicolas Hemelsoet Oct 18 '17 at 9:03\n• @Johann : I edited my answer and did realize that even the fact that there is a surjection $O_X^m \\to F$ is wrong, for example taking $F = \\mathcal O(-1)$ on $\\Bbb P^1$. Everything should be correct now. – Nicolas Hemelsoet Oct 18 '17 at 22:23"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.925871,"math_prob":0.9925915,"size":683,"snap":"2019-35-2019-39","text_gpt3_token_len":170,"char_repetition_ratio":0.101620026,"word_repetition_ratio":0.0,"special_character_ratio":0.238653,"punctuation_ratio":0.10144927,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999894,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-26T10:15:21Z\",\"WARC-Record-ID\":\"<urn:uuid:11a7d908-7d1b-4712-8509-def093cf9539>\",\"Content-Length\":\"138472\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:341a53fd-7e0b-4bc7-bfe5-c730cf9d3650>\",\"WARC-Concurrent-To\":\"<urn:uuid:c709266b-3be9-4338-8424-6640c733d5ff>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2476854/sections-of-coherent-sheaf-on-a-projective-variety\",\"WARC-Payload-Digest\":\"sha1:IXQVFSCJYVIAF773YJ3DC3XI5TTTVB57\",\"WARC-Block-Digest\":\"sha1:EAFM3N72UXFUNGDJATVLA77SIIO63P7S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027331485.43_warc_CC-MAIN-20190826085356-20190826111356-00506.warc.gz\"}"} |
https://rdrr.io/cran/immcp/src/R/score_network.R | [
"# R/score_network.R In immcp: Poly-Pharmacology Toolkit for Traditional Chinese Medicine Research\n\n#### Documented in diff_network_char, score_network\n\n```#' Calculating differences in disease network characteristics before and after removal of drug targets\n#'\n#'\n#' @title score_network\n#' @param BasicData A BasicData object.\n#' @param n Number vector, the number of times random permutation sampling, default to 1000.\n#' @return A list.\n#' @importFrom pbapply pblapply\n#' @importFrom igraph neighbors\n#' @importFrom igraph induced_subgraph\n#' @importFrom igraph delete.vertices\n#' @importFrom igraph vcount\n#' @importFrom dplyr %>%\n#' @importFrom dplyr filter\n#' @importFrom rlang .data\n#' @export\n#' @author Yuanlong Hu\n#' @examples\n#' data(drugdemo)\n#' drug_herb <- PrepareData(drugdemo\\$drug_herb, from = \"drug\", to=\"herb\")\n#' herb_compound <- PrepareData(drugdemo\\$herb_compound, from = \"herb\", to=\"compound\")\n#' compound_target <- PrepareData(drugdemo\\$compound_target, from = \"compound\", to=\"target\")\n#' disease <- PrepareData(drugdemo\\$disease, diseaseID = \"disease\",from = \"target\", to=\"target\")\n#' BasicData <- CreateBasicData(drug_herb, herb_compound, compound_target, diseasenet = disease)\n#' res <- score_network(BasicData, n = 100)\n\nscore_network <- function(BasicData, n = 1000){\n\ndrug_list <- BasicData@vertices %>% filter(.data\\$type==\"drug\")\ndrug_list <- drug_list\\$name\ng_drug_list <- lapply(as.list(drug_list), function(x){\nBasicData <- subset_network(BasicData = BasicData, from = x)\ndrugtarget <- BasicData@vertices %>% filter(.data\\$type == \"target\")\ndrugtarget\\$name\n})\nnames(g_drug_list) <- drug_list\n\ndis_list <- V(BasicData@diseasenet)\\$name[V(BasicData@diseasenet)\\$type==\"disease\"]\ng_dis_list <- lapply(as.list(dis_list), function(x){\nv_dis <- neighbors(BasicData@diseasenet, v = x, mode = \"all\")\\$name\ninduced_subgraph(BasicData@diseasenet, v_dis)\n})\nnames(g_dis_list) <-
dis_list\n\nmessage(\">>>>> Calculating Network Characters and Tests <<<<<\")\nres_list <- lapply(g_dis_list, function(x){\nres <- pblapply(g_drug_list, function(y){\ndel_v <- intersect(V(x)\\$name, y)\nx2 <- delete.vertices(x, v = del_v)\nif(vcount(x2) == 0) stop(\"The number of vertices of this graph is 0!\")\nres1 <- network_node_ks(graph1 = x, graph2 = x2, replicate = n)\nres2 <- diff_network_char(graph1 = x, graph2 = x2, output_all = TRUE)\nres <- c(res1, res2)\nreturn(res)\n})\nreturn(res)\n})\n\nmessage(\">>>>> Summarizing all results <<<<<\")\n\nres_list <- lapply(res_list, function(x){\nres <- Reduce(rbind, x)\nrownames(res) <- drug_list\nreturn(res)\n})\nmessage(\">>>>> Done <<<<<\")\nreturn(res_list)\n}\n\n#' Calculate the difference of network characters in two network\n#'\n#'\n#' @title diff_network_char\n#' @param graph1 A igraph object.\n#' @param graph2 A igraph object.\n#' @param output_all FALSE\n#' @return A number vector.\n#' @importFrom igraph graph.data.frame\n#' @importFrom igraph delete.vertices\n#' @export\n#' @author Yuanlong Hu\n\ndiff_network_char <- function(graph1, graph2, output_all = FALSE){\n\nnetchar_g1 <- network_char(graph1, T)\nnetchar_g2 <- network_char(graph2, T)\nchange <- (netchar_g2 - netchar_g1)/netchar_g1\n\n# Summary\nnames(netchar_g1) <- paste0(\"G1_\", names(netchar_g1))\nnames(netchar_g2) <- paste0(\"G2_\", names(netchar_g2))\nnames(change) <- paste0(\"Ratio_\", names(change))\nchange[is.na(change)] <- 0\n\nif (output_all) {\nres_network <- c(netchar_g1, netchar_g2, change)\n}else{\nres_network <- change\n}\nreturn(res_network)\n}\n```\n\n## Try the immcp package in your browser\n\nAny scripts or data that you put into this service are public.\n\nimmcp documentation built on May 12, 2022, 9:05 a.m."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5273631,"math_prob":0.9291832,"size":3445,"snap":"2023-40-2023-50","text_gpt3_token_len":970,"char_repetition_ratio":0.14705028,"word_repetition_ratio":0.024096385,"special_character_ratio":0.30740204,"punctuation_ratio":0.12749004,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99149656,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T06:56:09Z\",\"WARC-Record-ID\":\"<urn:uuid:ad8d7ea3-0287-47d7-a062-e510fb4928a0>\",\"Content-Length\":\"47254\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eacf8004-c6d4-42a6-870c-4352ac3328d6>\",\"WARC-Concurrent-To\":\"<urn:uuid:d4352698-b344-4407-a41c-ea3ed4b80973>\",\"WARC-IP-Address\":\"51.81.83.12\",\"WARC-Target-URI\":\"https://rdrr.io/cran/immcp/src/R/score_network.R\",\"WARC-Payload-Digest\":\"sha1:OT7JFIXED2SIE2UYKFG5HI4BSKK5XFU3\",\"WARC-Block-Digest\":\"sha1:6T7DABX5JH2YB7DHSZ2JPMOI6RRSIB7S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510149.21_warc_CC-MAIN-20230926043538-20230926073538-00010.warc.gz\"}"} |
http://newwavechurch.info/solving-algebraic-equations-worksheets/solving-algebraic-equations-worksheets-8th-grade-solving-algebraic-equations-worksheets/ | [
"",
null,
"solving algebraic equations worksheets 8th grade solving algebraic equations worksheets.\n\nsolving algebraic equations worksheets 8th grade 1 step algebra 8 worksheet pdf,solving algebraic equations worksheets tes fraction systems of algebraically worksheet pdf for grade,solving algebraic equations worksheet grade 9 rational systems of algebraically pdf algebra worksheets for x math 1,algebra 1 multi step equations worksheet solving algebraic grade 9 fraction worksheets rational,solving algebraic equations worksheets 8th grade pdf simple worksheet expressions 7 x 3 math algebra systems of algebraically,algebraic equations worksheets for grade solving tes 7th 6th,one step algebra equations worksheet solving algebraic fraction worksheets rational 6th grade,two step algebra equations worksheet solving algebraic worksheets 8th grade rational 8,steps to solving equations math 2 step worksheets algebraic worksheet 5th grade 7th 8 pdf,solving systems of equations algebraically worksheet pdf algebraic worksheets fraction with answers."
] | [
null,
"http://newwavechurch.info/wp-content/uploads/2019/11/solving-algebraic-equations-worksheets-8th-grade-solving-algebraic-equations-worksheets.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8488391,"math_prob":0.99943733,"size":1040,"snap":"2019-43-2019-47","text_gpt3_token_len":203,"char_repetition_ratio":0.30984557,"word_repetition_ratio":0.06923077,"special_character_ratio":0.15769231,"punctuation_ratio":0.071428575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999895,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-22T05:34:02Z\",\"WARC-Record-ID\":\"<urn:uuid:2e787557-a60f-4d0b-9309-3f3053668425>\",\"Content-Length\":\"29977\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:84968cf3-d66d-413f-904e-a2e44cb18527>\",\"WARC-Concurrent-To\":\"<urn:uuid:b6165930-bd6d-4127-90f5-3b934b7e6386>\",\"WARC-IP-Address\":\"104.31.89.170\",\"WARC-Target-URI\":\"http://newwavechurch.info/solving-algebraic-equations-worksheets/solving-algebraic-equations-worksheets-8th-grade-solving-algebraic-equations-worksheets/\",\"WARC-Payload-Digest\":\"sha1:GNKIEYUOHKPJJWZE2LM4FLFWIWMBEMQD\",\"WARC-Block-Digest\":\"sha1:Z43NEKWT2RBZIB7IEXYUIM3XOVPWS2E7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496671239.99_warc_CC-MAIN-20191122042047-20191122070047-00049.warc.gz\"}"} |
https://allinonehighschool.com/day-147/ | [
"# Day 147\n\n## Velocity of a Wave on a String\n\nThe velocity of a wave on a string fixed at both ends is related to several factors, as we have seen earlier. The equation that relates all the variables that affect the velocity is: v = (F/μ)^(1/2) where F = tension in the string, in N, and μ = the linear mass density of the string.\n\n• In other words, μ is the mass/unit length, in units of kg/m.\n\n• According to the above formula, as the tension increases the velocity increases and as the mass increases the velocity decreases.\n\n• We can substitute fλ for v and we now have fλ = (F/μ)^(1/2).\n\n## Fundamental Frequency and Harmonics\n\nWhen a string vibrates in its lowest frequency of vibration, it is said to be at its fundamental (or first harmonic). For a string clamped at both ends, this means that only half a wavelength fits on the length of the string. Therefore, λ = 2L.\n\n• All other natural vibrations are multiples of this fundamental frequency, or harmonics.\n\n• By increasing the frequency of the vibrating string, the string will now vibrate in the second harmonic and one complete wavelength will fit on the string. Therefore, λ = L.\n\n• Increasing the frequency again, the string will vibrate in the third harmonic and 1.5 wavelengths will fit on the string of length L. Therefore, λ = 2L/3.\n\n• If the fundamental frequency had been 50 Hz, the second harmonic would be 100 Hz, and the third harmonic would be 150 Hz, etc.\n\n• Since this standing wave has points of no movement, called nodes, the number of nodes is equal to one greater than the harmonic. For the fundamental, there are two nodes, for the second harmonic there are three nodes, etc.\n\n## Resonance\n\nResonance is the maximum amplitude reached when the frequency of the driving force equals the natural frequency of the system. Forced Vibration is the frequency set up from the driving force. Open and closed tubes resonate at the natural frequency of the tube and whole number multiples (harmonics).
Musical instruments that are made of an air column, such as the flute, depend on the physics of resonance for different lengths of the column.\n\n## Open Tubes\n\nAn open tube is one in which the tube is open at BOTH ends. An antinode exists at each end, so the node(s) must exist somewhere in the length of the air column.\n\n• The fundamental (first harmonic) has one node in the middle and antinodes at each end. Therefore, 1/2 of a wavelength fits in the tube and L = λ/2 and λ1 = 2L.\n\n• The 2nd harmonic has antinodes at both ends and two nodes equally spaced. Therefore, one wavelength fits in the tube and L = λ2.\n\n• The 3rd harmonic has antinodes at both ends and three nodes equally spaced. Therefore, 1.5 wavelengths fit in the tube and L = (3/2)λ and λ3 = 2L/3.\n\n• The expression that relates the frequency, harmonic number, velocity, and length of the air column is:\n\nfn = nv/(2L)\n\nWhere n = harmonic #, whole numbers, no units\n\nv = velocity of sound wave in air in the tube, in m/s\n\nL = length of the air column, in m\n\n## Closed Tubes\n\nA closed tube is one in which the tube is open at only ONE end and closed at the other. The closed side has a node and the open side has an antinode.\n\n• The fundamental (1st harmonic) has only the node at one end and the antinode at the other end. Therefore, only 1/4 of a wavelength fits in the tube and L = λ/4 and λ1 = 4L.\n\n• Closed tubes can only accommodate odd numbered harmonics; an even harmonic cannot have a node at the closed end and an antinode at the open end.\n\n• The 3rd harmonic has a node at the closed end, an antinode at the open end, and another node and antinode inside the tube. Therefore, 3/4 of a wavelength fits in the tube and L = (3/4)λ and λ3 = 4L/3.\n\n• The expression that relates frequency, harmonic number, velocity and length of the air column is:\n\nfn = nv/(4L)\n\nWhere n = harmonic #, only odd numbers, no units\n\nv = velocity of sound wave in air in the tube, in m/s\n\nL = length of the air column, in m\n\n(source)
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9339716,"math_prob":0.98472863,"size":3851,"snap":"2020-45-2020-50","text_gpt3_token_len":946,"char_repetition_ratio":0.16194437,"word_repetition_ratio":0.12983425,"special_character_ratio":0.24305375,"punctuation_ratio":0.10427136,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99664253,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-03T22:24:57Z\",\"WARC-Record-ID\":\"<urn:uuid:c2b9d8b2-10d0-4c6f-bd12-38ba78e3bd06>\",\"Content-Length\":\"54846\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ee2002dc-2105-4662-8248-3a90d29eca7a>\",\"WARC-Concurrent-To\":\"<urn:uuid:b727ba08-c971-4388-856d-90b9abd7e0c6>\",\"WARC-IP-Address\":\"192.0.78.241\",\"WARC-Target-URI\":\"https://allinonehighschool.com/day-147/\",\"WARC-Payload-Digest\":\"sha1:RR33CZURDMDFXRVSCM6LWKW54G24OWUT\",\"WARC-Block-Digest\":\"sha1:VZDA6ZRW7YOS4JQHT6P56Q255H24RS7N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141732835.81_warc_CC-MAIN-20201203220448-20201204010448-00690.warc.gz\"}"} |
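The two resonance formulas from the lesson above (fn = nv/(2L) for an open tube, fn = nv/(4L) for a closed tube with odd n only) as a checkable sketch; v = 343 m/s is an assumed room-temperature speed of sound, not a value from the lesson:

```python
def open_tube_freq(n, v, L):
    # open at both ends: all whole-number harmonics allowed
    return n * v / (2 * L)

def closed_tube_freq(n, v, L):
    # closed at one end: odd harmonics only
    if n % 2 == 0:
        raise ValueError("closed tubes support only odd harmonics")
    return n * v / (4 * L)

v, L = 343.0, 0.5
print(open_tube_freq(1, v, L))    # fundamental of a 0.5 m open tube
print(closed_tube_freq(3, v, L))  # 3rd harmonic of a 0.5 m closed tube
```

Note that the closed-tube fundamental is an octave below the open-tube fundamental for the same length, as the λ1 = 4L versus λ1 = 2L relations imply.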
http://venables.asu.edu/quant/proj/compton.html | [
Jennifer Neakrase, Jennifer Neal and John Venables, Dept of Physics and Astronomy, Arizona State University, Tempe, Arizona\n\n### Photoelectrons, Compton and Inverse Compton Scattering\n\n The Photoelectric and Compton effects are closely related. The Compton effect is introduced in Gasiorowicz (in section 1.3, pages 7-9 in the 3rd edition, or pages 11-13 in the 2nd edition). The text describes the experimental discovery of the effect by Arthur H. Compton - radiation of a given wavelength (in X-rays) sent through a foil was scattered in a manner inconsistent with classical radiation theory. If one is dealing with elastic scattering, the system can be understood quantitatively as Thomson scattering. However, the Compton effect can be understood as photons scattering inelastically off individual electrons. In Compton scattering the incoming photon scatters off an electron that is initially at rest. The electron gains energy and the scattered photon has a frequency less than that of the incoming photon. This process is illustrated in the following figure.
null,
Einstein's photoelectric discussion in 1905, and his other work including special relativity, led physicists to the notion of photons. Arthur Compton and Debye both provided in 1922 a very simple mathematical framework for these photons. Energy is conserved in a collision between a photon and an electron. In the original photoelectric effect, the energy of the photon is of the same order as the energy binding an electron to a nucleus, a few eV. Thus, when the photon strikes the electron it imparts only enough energy to eject that electron. However, if the energy of the photon is large compared to the binding energy of the electron, one can make the approximation that the electron is free. For example, x-ray photons have an energy of several keV. So both conservation of momentum and conservation of energy can be applied to the photon-electron collision. To show this, Compton scattered x-ray radiation off a graphite block and measured the wavelength of the x-rays before and after they were scattered as a function of the scattering angle. He discovered that the scattered x-rays had a longer wavelength than that of the incident radiation.
null,
"The original figure from Compton (1923a, figure 4) is shown above. Compton was able to account for and derive the correct expression for the shift in wavelength. Therefore, he empirically proved that light could be regarded as a particle in these experiments. The original references are: A. H. Compton, Phys. Rev. 21, 483 (1923a); 22, 409 (1923b). A more recent historical survey is A. H. Compton, Am. J. Phys. 29, 817 (1961). A web page giving mathematical details, and details about Compton's life and 1927 Nobel Prize, is on the Wolfram site. An outline of the maths is given below. The energy (E) of a particle is related to its mass (m) and momentum (P) via the relativistic formula E² = (Pc)² + m²c⁴, where c is the speed of light. Since the mass of a photon is zero, its energy is E = Pc. The energy may also be defined as E = hν, where h is Planck's constant and ν is frequency. Using these relations, the momentum of a photon is related to its wavelength λ by P = h/λ. Compton argued that the shift in wavelength is a result of a single photon imparting momentum to a single electron; thus the theory is derived from the laws of conservation of energy and momentum. Consider a photon with energy E0 and momentum P0, and a stationary electron with rest energy mc². When the photon collides with the electron, the electron recoils with energy Ee and momentum Pe. The scattered photon will have an energy E and momentum P. By conservation of energy and momentum: Ee + E = mc² + E0 and Pe + P = P0. Combining energy and momentum conservation in 2 dimensions (see text books) using these equations yields: λ - λ0 = (h/mc)(1 - cos θ).\n\n The shift in wavelength is related only to the mass of the electron and the scattering angle θ. The shift has no relation to the energy of the incident photon. The Compton effect can also be expressed as a shift in energy between the incident and scattered photon: E0 - E = (E0E/(mc²))(1 - cos θ). For a 180° backscattered photon, the shift in energy between E0 and E is E0 - E = 2E0²/(2E0 + mc²).\n\n In astrophysics inverse Compton scattering is actually more important than Compton scattering. Inverse Compton scattering, illustrated in the figure below, takes place when the electron is moving, and has sufficient kinetic energy compared to the photon. In this case net energy may be transferred from the electron to the photon.",
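A small numerical check of the Compton formulas above. The constants below are approximate SI values for Planck's constant, the speed of light, and the electron rest mass; the functions are a sketch, not part of the original page.

```python
# Numerical check of the Compton formulas above, in SI units.
# h, c and m_e are approximate CODATA values.
import math

h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
m_e = 9.1093837015e-31  # electron rest mass, kg

def wavelength_shift(theta):
    """Compton shift: lambda - lambda0 = (h / (m c)) * (1 - cos(theta))."""
    return (h / (m_e * c)) * (1.0 - math.cos(theta))

def scattered_energy(E0, theta):
    """Scattered photon energy, from 1/E - 1/E0 = (1 - cos(theta)) / (m c^2)."""
    mc2 = m_e * c * c
    return E0 / (1.0 + (E0 / mc2) * (1.0 - math.cos(theta)))
```

At θ = 180° the wavelength shift is twice the Compton wavelength h/(m_e c) of about 2.43 pm, and the energy loss agrees with the 180° formula quoted in the text.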
null,
"The inverse Compton effect is seen in astrophysics when a low energy photon (e.g. of the cosmic microwave background) bounces off a high energy (relativistic) electron. Such electrons are produced in supernovae and active galactic nuclei. A good reference for Compton scattering and inverse Compton scattering in the astrophysical regime is in \"Radiative Processes in Astrophysics\" by George B. Rybicki and Alan P. Lightman. In their Chapter 7, Rybicki and Lightman derive the equations for Compton scattering and then move into a treatment of inverse Compton scattering. This topic will not be studied further in the present Quantum Physics course, but is typically treated in the Astronomy course \"The Interstellar Medium\"."
] | [
null,
"http://venables.asu.edu/quant/proj/Compton.gif",
null,
"http://venables.asu.edu/quant/proj/Comptonexpt1.gif",
null,
"http://venables.asu.edu/quant/proj/Inverse.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92776954,"math_prob":0.9460283,"size":5071,"snap":"2020-45-2020-50","text_gpt3_token_len":1115,"char_repetition_ratio":0.17485692,"word_repetition_ratio":0.002386635,"special_character_ratio":0.20903175,"punctuation_ratio":0.10782241,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9839562,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-25T05:14:49Z\",\"WARC-Record-ID\":\"<urn:uuid:4b4aeb51-2c68-4ca4-8c55-4682d1b40e9c>\",\"Content-Length\":\"7688\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8d78dca7-ea39-4d6d-b566-3560122dd2d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:a1a58e9d-9796-4f10-99c0-515c0cc31e68>\",\"WARC-IP-Address\":\"149.169.25.236\",\"WARC-Target-URI\":\"http://venables.asu.edu/quant/proj/compton.html\",\"WARC-Payload-Digest\":\"sha1:6CJCS2CFXNN5OZYUATLK53HVQDOQ6625\",\"WARC-Block-Digest\":\"sha1:C3J66SGUYLF7LUNG3XE47IRG7LATPIVS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141181179.12_warc_CC-MAIN-20201125041943-20201125071943-00436.warc.gz\"}"} |
https://www.kdnuggets.com/2019/10/time-series-analysis-simple-example-knime-spark.html | [
"KDnuggets Home » News » 2019 » Oct » Tutorials, Overviews » Time Series Analysis: A Simple Example with KNIME and Spark ( 19:n41 )\n\n# Time Series Analysis: A Simple Example with KNIME and Spark\n\nThe task: train and evaluate a simple time series model using a random forest of regression trees and the NYC Yellow taxi dataset.\n\nBy Andisa Dewi & Rosaria Silipo, KNIME\n\nI think we all agree that knowing what lies ahead in the future makes life much easier. This is true for life events as well as for prices of washing machines and refrigerators, or the demand for electrical energy in an entire city. Knowing how many bottles of olive oil customers will want tomorrow or next week allows for better restocking plans in the retail store. Knowing the likely increase in the price of gas or diesel allows a trucking company to better plan its finances. There are countless examples where this kind of knowledge can be of help.\n\nDemand prediction is a big branch of data science. Its goal is to make estimations about future demand using historical data and possibly other external information. Demand prediction can refer to any kind of numbers: visitors to a restaurant, generated kW/h, school new registrations, beer bottles required on the store shelves, appliance prices, and so on.\n\nPredicting taxi demand in NYC\n\nAs an example of demand prediction, we want to tackle the problem of predicting taxi demand in New York City. In megacities such as New York, more than 13,500 yellow taxis roam the streets every day (per the 2018 Taxi and Limousine Commission Factbook).
This makes understanding and anticipating taxi demand a crucial task for taxi companies or even city planners, to increase the efficiency of the taxi fleets and minimize waiting times between trips.\n\nFor this case study, we used the NYC taxi dataset, which can be downloaded at the NYC Taxi and Limousine Commission (TLC) website. This dataset spans 10 years of taxi trips in New York City with a wide range of information about each trip, such as pick-up and drop-off date/times, locations, fares, tips, distances, and passenger counts. Since we are just using this case study for demonstration purposes, we used only the yellow taxi subset for the year 2017. For a more general application, it would be useful to include data from a few additional years in the dataset, at least to be able to estimate the yearly seasonality.\n\nLet’s set the goal of this tutorial to predict the number of taxi trips required in NYC for the next hour.\n\nTime series analysis: the process\n\nThe demand prediction problem is a classic time series analysis problem. We have a time series of numerical values (prices, number of visitors, kW/h, etc.) and we want to predict the next value given the past N values. 
In our case, we have a time series of numbers of taxi trips per hour (Figure 1), and we want to predict the number of taxi requests in the next hour given the number of taxi trips in the last N hours.\n\nFor this case study, we implemented a time series analysis process through the following steps (Figure 1):\n\n• Data transformation: aggregations, time alignment, missing value imputation, and other required transformations - depending on the data domain and the business case\n• Time series visualization\n• Removal of non-stationarity/seasonality, if any\n• Data partitioning to build a training set (past) and test set (future)\n• Construction of vector of N past values\n• Training of a machine learning model (or models) allowing for numerical outputs\n• Calculation of prediction error\n• Model deployment, if prediction error is acceptable\nFigure 1. Classic steps in time series analysis\n\nNote that precise prediction of a single numerical value can be a complex task. In some cases, a precise numerical prediction is not even needed and the same problem can be satisfactorily and easily solved after transforming it into a classification problem. And to transform a numerical prediction problem into a classification problem, you just need to create classes out of the target variable.\n\nFor example, predicting the price of a washing machine in two weeks might be difficult, but predicting whether this price will increase, decrease, or remain the same in two weeks is a much easier problem. In this case, we have transformed the numerical problem of price prediction into a classification problem with three classes (price increase, price decrease, price unchanged).\n\nData cleaning and other transformations\n\nThe first step is to move from the original data rows sparse in time (in this case taxi trips, but it could be contracts with customers or Fast Fourier Transform amplitudes just the same) to a time series of values uniformly sampled in time. 
This usually requires two things:\n\n• An aggregation operation on a predefined time scale: seconds, minutes, hours, days, weeks, or months depending on the data and the business problem. The granularity (time scale) used for the aggregation is important to visualize different seasonality effects or to catch different dynamics in the signal.\n• A realignment operation to make sure that time sampling is uniform in the considered time window. Often, time series are presented in a single sequence of the captured times. If any time sample is missing, we do not notice. A realignment procedure inserts missing values at the skipped sampling times.\n\nAnother classical preprocessing step consists of imputing missing values. Here a number of time series dedicated techniques are available, like using the previous value, the average value between previous and next value, or the linear interpolation between previous and next value.\n\nThe goal here is to predict the taxi demand (equals the number of taxi trips required) for the next hour. Therefore, as we need an hourly time scale for the time series, the total number of taxi trips in New York City was calculated for each hour of every single day in the data set. This required grouping the data by hour and date (year, month, day of the month, hour) and then counting the number of rows (i.e., the number of taxi trips) in each group.\n\nTime series visualization\n\nBefore proceeding with the data preparation, model training, and model evaluation, it is always useful to get an idea of the problem we are dealing with via visual data exploration. We decided to visualize the data on multiple time scales. Each visualization offers different insight on the time evolution of the data.\n\nIn the previous step, we already aggregated the number of taxi trips by the hour. This produces the time series x(t) (Figure 2a). 
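The three preprocessing steps just described (hourly aggregation, realignment of skipped hours, and previous-value imputation) can be sketched in plain Python. The trip timestamps below are hypothetical stand-ins for the TLC pick-up datetimes, not the actual schema.

```python
# Sketch of the preprocessing above in plain Python.
from collections import Counter
from datetime import datetime, timedelta

def hourly_counts(pickup_times):
    """Aggregation: group trips by calendar hour and count the rows."""
    return Counter(t.replace(minute=0, second=0, microsecond=0)
                   for t in pickup_times)

def realign(counts, start, end):
    """Realignment: emit one sample per hour; skipped hours become None."""
    series, t = [], start
    while t <= end:
        series.append(counts.get(t))   # None marks a missing sample
        t += timedelta(hours=1)
    return series

def impute_previous(series):
    """Missing-value imputation using the previous observed value."""
    out, last = [], None
    for v in series:
        last = v if v is not None else last
        out.append(last)
    return out
```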
After that, in order to observe the time series evolution on a different time scale, we also visualized it after aggregating by day (Figure 2b) and by month (Figure 2c).\n\nFrom the plot of the hourly time series, you can clearly see a 24-hour pattern: high numbers of taxi trips during the day and lower numbers during the night.\n\nIf we switch to the daily scale, the weekly seasonality pattern becomes evident, with more trips during business days and fewer trips over the weekends. The non-stationarity of this time series can be easily spotted on this time scale, through the varying average value.\n\nFinally, the plot of the monthly time series does not have enough data points to show any kind of seasonality pattern. It’s likely that extending the data set to include more years would produce more points in the plot and possibly a winter/summer seasonality pattern could be observed.\n\nFigure 2a. Plot of the number of taxi trips in New York City by the hour, zoomed in on the first two weeks of June 2017, from the NYC Taxi data set. The 24-hour seasonality here is quite easy to see\n\nFigure 2b. Plot of the number of taxi trips, by day, in New York City, zoomed in on the time window between May 2017 and September 2017, from the NYC Taxi dataset. The weekly seasonality here is quite easy to spot. The three deep valleys correspond to Memorial Day, Fourth of July, and Labor Day.\n\nFigure 2c. Plot of the number of taxi trips, by month, in New York City for the entire year 2017, from the NYC Taxi dataset. You can see the difference between winter (more taxi trips) and summer (fewer taxi trips).\n\nNon-stationarity, seasonality, and autocorrelation function\n\nA frequent requirement for many time series analysis techniques is that the data be stationary.\n\nA stationary process has the property that the mean, variance, and autocorrelation structure do not change over time. 
Stationarity can be defined in precise mathematical terms, but for our purpose, we mean a flat looking time series, without trend, with constant average and variance over time and a constant autocorrelation structure over time. For practical purposes, stationarity is usually determined from a run sequence plot or the linear autocorrelation function (ACF).\n\nIf the time series is non-stationary, we can often transform it to stationary by replacing it with its first order differences. That is, given the series x(t), we create the new series y(t) = x(t) - x(t-1). You can difference the data more than once, but the first order difference is usually sufficient.\n\nSeasonality violates stationarity, and seasonality is also often established from the linear autocorrelation coefficients of the time series. These are calculated as the Pearson correlation coefficients between the value of time series x(t) at time t and its past values at times t-1,…, t-n. In general, values between -0.5 and 0.5 would be considered to be low correlation, while coefficients outside of this range (positive or negative) would indicate a high correlation.\n\nIn practice, we use the ACF plot to determine the index of the dominant seasonality or non-stationarity. The ACF plot reports on the y-axis the autocorrelation coefficients calculated for x(t) and its past x(t-i) values vs. the lags i on the x-axis. The first local maximum in the ACF plot defines the lag of the seasonality pattern (lag=S) or the need for a correction of non-stationarity (lag=1). In order not to consider irrelevant local maxima, a cut-off threshold is usually introduced, often from a predefined confidence interval (95%). Again, changing the time scale (i.e., the granularity of the aggregation) or extending the time window allows us to discover different seasonality patterns.\n\nIf we found the seasonality lag to be S, then we could apply a number of different techniques to remove seasonality. 
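As a rough sketch (not the KNIME implementation), the differencing operation and the sample autocorrelation coefficients described above can be computed directly, with no external libraries:

```python
# First-order/seasonal differencing and a plain autocorrelation function.
def difference(x, lag=1):
    """y(t) = x(t) - x(t - lag); lag=1 gives first-order differences,
    lag=S removes a seasonality of period S."""
    return [x[t] - x[t - lag] for t in range(lag, len(x))]

def acf(x, max_lag):
    """Pearson autocorrelation of x(t) with x(t - i) for i = 0..max_lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    return [sum((x[t] - mean) * (x[t - i] - mean) for t in range(i, n)) / var
            for i in range(max_lag + 1)]
```

On a purely periodic signal, the ACF shows a strong positive peak at the period and a strong negative value at half the period, which is exactly the structure used above to pick the seasonality lag.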
We could remove the first S-samples from all subsequent S-sample windows; we could calculate the average S-sample pattern on a portion of the data set and then remove that from all following S-sample windows; we could train a machine learning model to reproduce the seasonality pattern to be removed; or more simply, we could subtract the previous value x(t-S) from the current value x(t) and then deal with the residuals y(t) = x(t) - x(t-S). We chose this last technique for this tutorial, just to keep it simple.\n\nFigure 3 shows the ACF plot for the time series of hourly number of taxi trips. On the y-axis are the autocorrelation coefficients calculated for x(t) and its previous values at lagged hour 1, … 50. On the x-axis are the lagged hours. This chart shows peaks at lag=1 and lag=24, i.e., a daily seasonality, as was to be expected in the taxi business. The highest positive correlation coefficients are between x(t) and x(t-1) (0.91), x(t) and x(t-24) (0.83), and then x(t) and x(t-48) (0.68).\n\nIf we use the daily aggregation of the time series and calculate the autocorrelation coefficients on a lagged interval n > 7, we would also observe a peak at day 7, i.e., a weekly seasonality. On a larger scale, we might observe a winter-summer seasonality, with people taking taxis more often in winter than in summer. However, since we are considering the data over only one year, we will not inspect this kind of seasonality.\n\nFigure 3. Autocorrelation plot (Pearson coefficients) over 50 hours. The strongest correlation of x(t) is with x(t-1), x(t-24), and x(t-48), indicating a 24-hr (daily) seasonality.\n\nData partitioning to build the training set and test set\n\nAt this point, the dataset has to be partitioned into the training set (the past) and test set (the future). Notice that the split between the two sets has to be a split in time. Do not use a random partitioning but a sequential split in time!
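A minimal illustration of the sequential (time-ordered) split described above. In the article, January through November 2017 serve as training data and December 2017 as the test set; that corresponds to taking the last samples as the "future":

```python
# Sequential train/test split: the last `test_size` samples play the
# role of the future, instead of a random partition.
def time_split(series, test_size):
    if not 0 < test_size < len(series):
        raise ValueError("test_size must be between 1 and len(series) - 1")
    return series[:-test_size], series[-test_size:]
```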
This avoids data leakage from the training set (the past) to the test set (the future).\n\nWe reserved the data from January 2017 to November 2017 for the training set and the data of December 2017 for the test set.\n\nLagging: vector of past N values\n\nThe goal of this use case is to predict the taxi trip demand in New York City for the next hour. In order to run this prediction, we need the demands of taxi trips in the previous N hours. For each value x(t) of the time series, we want to build the vector x(t-N), …, x(t-2), x(t-1), x(t). We will use the past values x(t-N), …, x(t-2), x(t-1) as input to the model and the current value x(t) as the target column to train the model. For this example, we experimented with two values: N=24 and N=50.\n\nRemember to build the vector of past N values after partitioning the dataset into a training set and a test set in order to avoid data leakage from neighboring values. Also remember to remove the rows with missing values introduced by the lagging operation.\n\nTraining the machine learning model\n\nWe've now reached the model training phase. We will use the past part of the vector x(t-N), …, x(t-2), x(t-1) as input to the model and the current value of the time series x(t) as target variable. In a second training experiment, we added the hour of the day (0-23) and the day of the week (1-7) to the input vector of past values.\n\nNow, which model should we use? First of all, x(t) is a numerical value, so we need to use a machine learning algorithm that can predict numbers. The easiest model to use here would be a linear regression, a regression tree, or a random regression tree forest. If we use a linear regression on the past values to predict the current value, we are talking about an auto-regressive model.\n\nWe chose a random forest of five regression trees with maximal depth of 10 splits running on a Spark cluster. 
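The lagged-vector construction, together with the auto-regressive (linear regression) option mentioned above, can be sketched with NumPy. This is a simplified stand-in, not the Spark random-forest workflow, but a regression forest would consume exactly the same X / y layout.

```python
# Build rows [x(t-N), ..., x(t-1)] with target x(t), then fit a linear
# auto-regressive model by ordinary least squares.
import numpy as np

def make_lagged(series, n_lags):
    """Rows are the past values [x(t-N), ..., x(t-1)]; the target is x(t)."""
    x = np.asarray(series, dtype=float)
    X = np.stack([x[t - n_lags:t] for t in range(n_lags, len(x))])
    y = x[n_lags:]
    return X, y

def fit_ar(X, y):
    """Least-squares fit of x(t) on its past values, with an intercept."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_ar(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef
```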
After training, we observed that all five trees used the past value of the time series at time t-1 for the first split. x(t-1) was also the value with the highest correlation coefficient with x(t) in the autocorrelation plot (Figure 3).\n\nWe can now apply the model to the data in the test set. The predicted time series (as in-sample predictions) by a regression tree forest trained on N=24 past values, with no seasonality removal and no first-order difference, is shown in Figure 4 for the whole test set. The predicted time series is plotted in yellow, while the original time series is shown in light blue. Indeed, the model seems to fit the original time series quite well. For example, it is able to predict a sharp decrease in taxi demand leading up to Christmas. However, a more precise evaluation could be obtained via some dedicated error metrics.\n\nFigure 4. Line plot of the predicted vs. actual values of the number of taxi trips in the test set.\n\nPrediction error\n\nThe final error on the test set can be measured as some kind of distance between the numerical values in the original time series and the numerical values in the predicted time series. We considered five numeric distances:\n\n• R2\n• Mean Absolute Error\n• Mean Squared Error\n• Root Mean Squared Error\n• Mean Signed Difference\n\nNote that R2 is not commonly used for the evaluation of model performance in time series prediction. Indeed, R2 tends to produce higher values for a higher number of input features, favoring models using longer input past vectors.
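The five distances listed above can be computed in a few lines; this is a sketch, and library implementations would normally be preferred in production:

```python
# The five error measures listed above, computed directly.
def mae(y, p):
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def mse(y, p):
    return sum((a - b) ** 2 for a, b in zip(y, p)) / len(y)

def rmse(y, p):
    return mse(y, p) ** 0.5

def mean_signed_difference(y, p):
    return sum(a - b for a, b in zip(y, p)) / len(y)

def r2(y, p):
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, p))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot
```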
Even when using a corrected version of R2, the non-stationarity of many time series and their consequent high variance pushes the R2 values quickly close to 1, making it hard to glean the differences in model performance.\n\nThe table in Figure 5 reports the two errors (R2 and MAE) when using 24 and 50 past samples as input vector (and no additional external input features), and after removing daily seasonality, weekly seasonality, both daily and weekly seasonality, or no seasonality, or applying the first order difference.\n\nFinally, using the vector of values from the past 24 hours yields comparable results to using a vector of past 50 values. If we had to choose, using N=24 and first order differences would seem to be the best choice.\n\nFigure 5. R2 and MAE measures calculated on the test set for models trained on differently preprocessed time series. Input features include only the past values of the time series.\n\nFigure 6. R2 and MAE measures calculated on the test set for models trained on differently preprocessed time series. Here input features include the past values of the time series (on the left) and the same past values plus the hour of day and day of the week (on the right).\n\nSometimes it is useful to introduce additional information, for example, the hour of day (which can identify the rush hour traffic) or the day of the week (to distinguish between business days and weekends). We added these two external features (hour and day of week) to the input vector of past values used to train the models in the previous experiment.\n\nResults for the same preprocessing steps (removing daily, weekly, daily and weekly, or no seasonality, or first order differences) are reported on the right and compared to the results of the previous experiment on the left in Figure 6. Again, the first order differences seem to be the best preprocessing approach in terms of final performance. 
The addition of the external two features has reduced the final error a bit, though not considerably.\n\nThe full training workflow is shown in Figure 7 and is available on the KNIME Hub here.\n\nFigure 7. The complete training workflow shown. Here a random forest of regression trees is trained on the number of taxi trips by the hour for the first 11 months of 2017 to predict taxi demand hour by hour in December 2017, using different preprocessing techniques.\n\nModel deployment\n\nWe have reached the end of the process. If the prediction error is acceptable, we can proceed with the deployment of the model to deal with the current time series in a production application. Here there is not much to do. Just read the previously trained model, acquire current data, apply the model to the data, and produce the forecasted value for the next hour.\n\nIf you want to run the predictions for multiple hours after the next one, you will need to loop around the model by feeding the current prediction back into the vector of past input samples.\n\nTime series analysis: summing up\n\nWe have trained and evaluated a simple time series model using a random forest of regression trees on the 2017 data from the NYC Yellow taxi data set to predict the demand for taxi trips for the next hour based on the numbers in the past N hours. The entire model training and testing was implemented to run on a big data Spark framework.\n\nWe have used this chance to go through the classic process for time series analysis step by step, including non-stationarity and seasonality removal, creation of the vector of past values, partitioning on a time split, etc. 
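The deployment loop mentioned above, feeding each prediction back into the vector of past inputs to forecast several hours ahead, can be sketched as follows; `model` here is any callable mapping the last N values to the next one, a hypothetical stand-in for the trained regression forest:

```python
# Recursive multi-hour forecasting: feed each prediction back into the
# window of past values.
def forecast(model, history, n_lags, horizon):
    window = list(history[-n_lags:])
    predictions = []
    for _ in range(horizon):
        nxt = model(window)
        predictions.append(nxt)
        window = window[1:] + [nxt]   # drop the oldest value, append the prediction
    return predictions
```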
We have then experimented with different parameters (size of past value vector) and options (non-stationarity and seasonality removal).\n\nResults have shown that the taxi demand prediction is a relatively easy problem to solve, at least when using a highly parametric algorithm like a random forest of decision trees.\n\nThe MAE metric on the predictions produced by a model trained on unprocessed data is actually lower than after removing the seasonality. However, the first order differences seem to help the model to learn better.\n\nFinally, we found that a past size N=50 is redundant. N=24 produces equally acceptable performance. Of course, adding additional inputs such as temperature, weather conditions, holiday calendar, and so on might benefit the final results.\n\nAn additional challenge might be to predict taxi demand not only for the next hour, which seems to be an easy task, but maybe for the next day at the same hour.\n\nAs first published in InfoWorld.\n\nOriginal. Reposted with permission.",
null,
"Get KDnuggets, a leading newsletter on AI, Data Science, and Machine Learning"
] | [
null,
"https://www.kdnuggets.com/wp-content/uploads/envelope.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91450834,"math_prob":0.9457673,"size":18684,"snap":"2021-31-2021-39","text_gpt3_token_len":3964,"char_repetition_ratio":0.1493576,"word_repetition_ratio":0.028508076,"special_character_ratio":0.20825306,"punctuation_ratio":0.09610028,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9849105,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T05:20:38Z\",\"WARC-Record-ID\":\"<urn:uuid:d7ef6cd5-451b-4643-96d8-a6bb4b643fc9>\",\"Content-Length\":\"56303\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:06b77fde-9865-4364-b6af-aecdeddf9004>\",\"WARC-Concurrent-To\":\"<urn:uuid:53842828-5c6b-4a8f-8988-d5b61d740d44>\",\"WARC-IP-Address\":\"162.144.210.228\",\"WARC-Target-URI\":\"https://www.kdnuggets.com/2019/10/time-series-analysis-simple-example-knime-spark.html\",\"WARC-Payload-Digest\":\"sha1:A66V337NZ4EQ6ZLYEWMVNT7PKLC7FEZW\",\"WARC-Block-Digest\":\"sha1:RLO2QPGPU4BBLL2TBTSPZ6ERJGH6YOJC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780060201.9_warc_CC-MAIN-20210928032425-20210928062425-00484.warc.gz\"}"} |
https://socratic.org/questions/factored-completely-the-expression-6x-x-3-x-2-is-equivalent-to | [
"# Factored completely, the expression 6x-x^3-x^2 is equivalent to?\n\n$- x \\left(x + 3\\right) \\left(x - 2\\right)$\nFirst of all, this can be factored as $- x \\left({x}^{2} + x - 6\\right)$.\nNext, we can factor as $- x \\left(x + 3\\right) \\left(x - 2\\right)$."
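A quick numeric sanity check of this factorization, evaluating both forms at several integer points:

```python
# 6x - x^3 - x^2 should agree with -x(x + 3)(x - 2) everywhere.
def original(x):
    return 6 * x - x**3 - x**2

def factored(x):
    return -x * (x + 3) * (x - 2)
```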
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6840206,"math_prob":1.0000097,"size":312,"snap":"2019-51-2020-05","text_gpt3_token_len":81,"char_repetition_ratio":0.12337662,"word_repetition_ratio":0.0,"special_character_ratio":0.24358974,"punctuation_ratio":0.114754096,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986146,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-20T23:18:03Z\",\"WARC-Record-ID\":\"<urn:uuid:0b1c162c-decb-4602-8eb3-6e3fa2e7dab3>\",\"Content-Length\":\"32555\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5626b222-4a87-4186-b02e-c5b0673d0950>\",\"WARC-Concurrent-To\":\"<urn:uuid:8c1d14a7-628f-4019-8883-a1c9b34b3ca6>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/factored-completely-the-expression-6x-x-3-x-2-is-equivalent-to\",\"WARC-Payload-Digest\":\"sha1:SLBBZL6MI77P6GW4IBWYTOBBZTAAVTUW\",\"WARC-Block-Digest\":\"sha1:NGSTXWL2QSN2ADPVGRXTTJ2S6QFPA4AK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250601040.47_warc_CC-MAIN-20200120224950-20200121013950-00455.warc.gz\"}"} |
https://www.numbersaplenty.com/3375 | [
"Base representations of 3375: binary 110100101111, base 3 11122000, base 4 310233, base 5 102000, base 6 23343, base 7 12561, octal 6457, base 9 4560, base 10 3375, base 11 2599, base 12 1b53, base 13 16c8, base 14 1331, base 15 1000, hexadecimal d2f.\n\n3375 has 16 divisors, whose sum is σ = 6240. Its totient is φ = 1800.\n\nThe previous prime is 3373. The next prime is 3389. The reversal of 3375 is 5733.\n\nThe cubic root of 3375 is 15.\n\nIt is a perfect power (a cube), and thus also a powerful number.\n\n3375 is nontrivially palindromic in base 14.\n\nIt is not a de Polignac number, because 3375 - 2^1 = 3373 is a prime.\n\nIt is a super-2 number, since 2×3375^2 = 22781250, which contains 22 as substring.\n\nIt is a plaindrome in base 11.\n\nIt is a nialpdrome in base 15.\n\nIt is a zygodrome in base 3.\n\nIt is a congruent number.\n\nIt is not an unprimeable number, because it can be changed into a prime (3371) by changing a digit.\n\nIt is a polite number, since it can be written in 15 ways as a sum of consecutive naturals, for example, 673 + ... + 677.\n\nIt is an arithmetic number, because the mean of its divisors is an integer number (390).\n\n3375 is a Friedman number, since it can be written as (7+5+3)^3, using all its digits and the basic arithmetic operations.\n\n2^3375 is an apocalyptic number.\n\n3375 is a deficient number, since it is larger than the sum of its proper divisors (2865).\n\n3375 is an equidigital number, since it uses as many digits as its factorization.\n\n3375 is an evil number, because the sum of its binary digits is even.\n\nThe sum of its prime factors is 24 (or 8 counting only the distinct ones).\n\nThe product of its digits is 315, while the sum is 18.\n\nThe square root of 3375 is about 58.0947501931.\n\nIt can be divided in two parts, 3 and 375, that added together give a triangular number (378 = T27).\n\nThe spelling of 3375 in words is \"three thousand, three hundred seventy-five\"."
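Several of the stated properties (16 divisors summing to σ = 6240, totient φ = 1800, the perfect cube 15^3, and the super-2 product) can be re-derived by brute force:

```python
# Brute-force re-derivation of a few properties of n = 3375.
import math

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def totient(n):
    """Euler's phi: count of 1 <= k < n with gcd(k, n) = 1."""
    return sum(1 for k in range(1, n) if math.gcd(k, n) == 1)
```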
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9368864,"math_prob":0.99729514,"size":1736,"snap":"2023-14-2023-23","text_gpt3_token_len":500,"char_repetition_ratio":0.15877599,"word_repetition_ratio":0.012121212,"special_character_ratio":0.3375576,"punctuation_ratio":0.140625,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976645,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-31T00:58:31Z\",\"WARC-Record-ID\":\"<urn:uuid:061a864e-8586-4b5b-93d0-e74d2f549a26>\",\"Content-Length\":\"9088\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fc1b54eb-8786-4d29-acb6-3c33128b8340>\",\"WARC-Concurrent-To\":\"<urn:uuid:108f4d9c-a8d3-438e-83be-396937dbc8fe>\",\"WARC-IP-Address\":\"89.46.108.74\",\"WARC-Target-URI\":\"https://www.numbersaplenty.com/3375\",\"WARC-Payload-Digest\":\"sha1:WHLNM26UEOJGJ4TEOBBMYNHTV45YTFGX\",\"WARC-Block-Digest\":\"sha1:WHEPI6LPNBHHIWRAD7CRROKZQHHOVPKM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224646181.29_warc_CC-MAIN-20230530230622-20230531020622-00786.warc.gz\"}"} |
https://hellothinkster.com/math-questions/percentages/what-is-89-92-as-a-percent | [
"# What is 89/92 as a percent?\n\nWhat is 89/92 as a percentage? In this short guide, we will show you a few easy steps to convert the fraction 89/92 into percentage form. Let’s dive in!\n\n## Solution: 89/92 as a percent is 96.739%\n\nMethods\n\n### Method 1 – Converting 89/92 Into a Percentage:\n\nIn a fraction, we can see how many “pieces” of a number are present (in the numerator) compared to how many pieces would make up the whole (the denominator). “Percent” means “per hundred”, which is like asking the question “how many pieces would there be if there were 100 pieces possible?”\n\nFor example, if we look at the percentage 50%, that means we have 50 pieces of the possible 100. Re-writing this in fraction form, we see 50/100.\n\nWe can start the process of converting a fraction into a percent, by figuring out how to adjust the fraction so that the denominator will be 100. First, divide 100 by the denominator:\n\n$\frac{100}{92} = 1.087$\n\nThen we can multiply both the numerator and denominator by this number:\n\n$\frac{89*1.087}{92*1.087} = \frac{96.739}{100}$\n\nThis works because multiplying both the numerator and the denominator by the same number is like multiplying it by 1. (1.087 / 1.087 = 1)\n\nRe-writing the result as a percentage, we can see that 89/92 as a percentage is 96.739%.\n\n### Method 2 – Converting 89/92 Into a Percentage Using Decimals:\n\nAnother common way to convert a fraction into a percentage is to first convert the fraction into a decimal. To convert 89/92 into a percentage, you would first convert 89/92 into a decimal by dividing the numerator by the denominator:\n\n$\frac{89}{92} = 0.96739$\n\nOnce you have converted the fraction into a decimal, you can simply multiply by 100 to get the percentage:\n\n0.96739 x 100 = 96.739\n\nAnd there you go! Now we can see that 89/92 as a percentage is 96.739%, the same way we did with the first method.\n\nNow you know of two ways you can convert 89/92 into a percentage.
The best way to master these methods is to practice! Grab a pencil and paper, and come up with some of your own fractions, and become a master at converting them into percentages!\n\n### Practice more percentage conversion problems\n\nWith a just a few more problems, you could become a pro at converting fractions to percentages. You can try some more right now!\n\nWhat is 42/61 as a percent?\n\nWhat is 40/39 as a percent?\n\nWhat is 62/47 as a percent?\n\nWhat is 60/71 as a percent?\n\nWhat is 9/6 as a percent?",
null,
null,
null,
null
] | [
null,
"https://hellothinkster.com/images/download-img-1.jpg",
null,
"https://hellothinkster.com/images/download-img-2.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9584549,"math_prob":0.95728135,"size":919,"snap":"2023-14-2023-23","text_gpt3_token_len":217,"char_repetition_ratio":0.13442624,"word_repetition_ratio":0.0,"special_character_ratio":0.2665941,"punctuation_ratio":0.105820104,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9960176,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-28T15:29:35Z\",\"WARC-Record-ID\":\"<urn:uuid:c9c7129b-1a9e-4e86-aaef-0a65853d249f>\",\"Content-Length\":\"112446\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:422536bd-28f0-4b77-b06f-4edcf874571c>\",\"WARC-Concurrent-To\":\"<urn:uuid:85a12765-3bbc-4073-8356-8562e75287a8>\",\"WARC-IP-Address\":\"151.101.130.132\",\"WARC-Target-URI\":\"https://hellothinkster.com/math-questions/percentages/what-is-89-92-as-a-percent\",\"WARC-Payload-Digest\":\"sha1:DNIATXOMBRGZRUQWRNQVYIS4MNF47PCN\",\"WARC-Block-Digest\":\"sha1:4XUAM4NFONZCSMMQXLD4GK24NP7B74AG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224644309.7_warc_CC-MAIN-20230528150639-20230528180639-00607.warc.gz\"}"} |
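The two conversion methods walked through in the record above reduce to one line of arithmetic. A minimal Python sketch (the function name `fraction_to_percent` and the rounding to three places are my own choices, mirroring the page's examples):

```python
def fraction_to_percent(numerator: float, denominator: float, places: int = 3) -> float:
    """Method 2 from the page: fraction -> decimal -> scale by 100."""
    return round(numerator / denominator * 100, places)

print(fraction_to_percent(89, 92))  # 96.739, matching the page's solution
print(fraction_to_percent(1, 2))    # 50.0
```

The same function answers the page's practice problems (42/61, 40/39, and so on) with a single call each.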
https://custompaperwriting.org/buy-essay-online-33318/ | [
"accounting and finance assignment\n\nQuestions:\n\n1. If you invest \\$1,500 in a bank account, which pays simple interest at a rate of 3.5% per annum, for 18 months, how much interest will you receive when you withdraw the funds at the end of the term of investment?\n\n2. If you invest \\$1,500 in an account that pays 5% pa, simple interest, what will be the value of the account at the end of 40 months?\n\n3. You invested \\$2,500 into a term deposit account for 90 days, at the end of which there was \\$2,530 in the account. What simple rate of interest was paid on the account?\n\n4. If you invest \\$1,500 in a bank account, which pays 3.5% per annum interest, compounding annually, what will be the value of the account at the end of 18 months?\n\n5. If you invest \\$1,500 in a bank account, which pays 3.5% per annum interest, compounding monthly, what will be the value of the account at the end of 18 months?\n\n6. If you want to save \\$7,000 at the end of 5 years, in an account that pays 4% pa, compounded annually, how much will you need to invest into the account at the beginning of the 5 year period?\n\n7. Interest is paid monthly at a monthly rate equal to 0.35%. What is the nominal rate per annum on the account?\n\n8. If you are offered a nominal rate equal to 8% pa, compounding half yearly, what is the effective interest rate per annum?\n\n9. ABC Ltd issued a 3-year bond with a face value of \\$100, paying a half yearly coupon of \\$2.50. However, no sooner had the bond been issued than interest rates on similarly rated debt rose to 8% per annum. What would be the value of the bond after the interest rate rise?\n\n10. 
ABC Ltd issued a 3-year bond with a face value of \\$100, paying a half yearly coupon of \\$2.50. However, no sooner had the bond been issued than interest rates on similarly rated debt fell to 4% per annum. What would be the value of the bond after the interest rate fall?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.94887686,"math_prob":0.9512019,"size":2774,"snap":"2022-05-2022-21","text_gpt3_token_len":662,"char_repetition_ratio":0.13501805,"word_repetition_ratio":0.31755424,"special_character_ratio":0.25054073,"punctuation_ratio":0.11073254,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9741781,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-27T18:17:23Z\",\"WARC-Record-ID\":\"<urn:uuid:62f985c2-e8e3-4573-8eab-2452acbd4e5a>\",\"Content-Length\":\"59661\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3327a34a-7d75-4c6a-b57c-557ce960cf2e>\",\"WARC-Concurrent-To\":\"<urn:uuid:2612550a-2fe6-4a68-ad0c-eb190bd8294a>\",\"WARC-IP-Address\":\"162.0.208.14\",\"WARC-Target-URI\":\"https://custompaperwriting.org/buy-essay-online-33318/\",\"WARC-Payload-Digest\":\"sha1:3QKFTZXIRERLGVQUEAYY5B352QBVUCZB\",\"WARC-Block-Digest\":\"sha1:ZD5AEJHRAQJB4YQI2CWCXMRHYGMKKCU3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305277.88_warc_CC-MAIN-20220127163150-20220127193150-00592.warc.gz\"}"} |
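Most of the questions in the record above are direct applications of the simple-interest and compound-interest formulas. A hedged Python sketch of both (the function names are mine, and the sample calls work the 18-month simple-interest question and the monthly-compounding question under a plain reading of their terms):

```python
def simple_interest(principal: float, annual_rate: float, years: float) -> float:
    """Interest earned under simple interest: I = P * r * t."""
    return principal * annual_rate * years

def compound_value(principal: float, annual_rate: float, years: float,
                   periods_per_year: int = 1) -> float:
    """Future value with periodic compounding: FV = P * (1 + r/m) ** (m * t)."""
    m = periods_per_year
    return principal * (1 + annual_rate / m) ** (m * years)

# $1,500 at 3.5% p.a. simple interest for 18 months (1.5 years)
print(round(simple_interest(1500, 0.035, 1.5), 2))        # 78.75
# $1,500 at 3.5% p.a., compounding monthly, for 18 months
print(round(compound_value(1500, 0.035, 1.5, 12), 2))
```

The bond-valuation questions need a coupon-discounting routine on top of this, which is left out here.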
https://www.kingexcel.info/2019/12/rounding-numbers-by-fractions-or.html | [
"Rounding Numbers By Fractions or Decimals - KING OF EXCEL\n\n## Saturday, December 21, 2019\n\nRounding Numbers By Fractions or Decimals\nWhen rounding numbers to a particular decimal factor, you can express that rounding factor in your formula as either a fraction or as its decimal equivalent. In the pictures, the fraction one-eighth can be stated in a formula as 1/8 or by its decimal equivalent of .125.\nRounding up\nExpressing a fraction, the formula in cell B4 copied to cell B8 is `=CEILING(A4,1/8)`\nExpressing a decimal, the formula in cell D4 copied to cell D8 is `=CEILING(A4,0.125)`\nNotice the results in range B4:B8 are the same as the results in range D4:D8.",
null,
"Rounding down\nExpressing a fraction, the formula in cell C4 copied to cell C8 is `=FLOOR(A4,1/8)`\nExpressing a decimal, the formula in cell E4 copied to cell E8 is `=FLOOR(A4,0.125)`\nNotice the results in range C4:C8 are the same as the results in range E4:E8.",
null,
null
] | [
null,
"https://lh6.googleusercontent.com/proxy/M1aUT8xtb0jQQlyD8aiICinBlCUNFExrO4pvG2D3MQDzSZgFUR40nFwVTBlo4PscJ0gUeD7HuGbvxN2D8gcMG3smeZfnsVEPzZzhHih-3zN3NcapLmuAblhdC72Ly-adBaY=s0-d",
null,
"https://lh3.googleusercontent.com/proxy/amyr0aGBCAwhZpjznLAzxzr2WSJRHMAmBSst5SmsFe1iqrNmkfYnL5xgHE-IU7vDMQ4aJ39nPlA2Q1OzAZB9HG-bj8b9yBZ2Lm1--fT6qcyIHxQwn9VaTfFaZAqmSOvcRz8XdQ=s0-d",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8203034,"math_prob":0.9488879,"size":1436,"snap":"2021-31-2021-39","text_gpt3_token_len":397,"char_repetition_ratio":0.17108938,"word_repetition_ratio":0.49583334,"special_character_ratio":0.2534819,"punctuation_ratio":0.12264151,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9940148,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-31T08:47:42Z\",\"WARC-Record-ID\":\"<urn:uuid:6395d944-5994-492e-b5e6-10a70de3b582>\",\"Content-Length\":\"477465\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d4ba49da-d094-4f6e-99dc-c0f890df1316>\",\"WARC-Concurrent-To\":\"<urn:uuid:732e2e53-d7b4-42e4-a414-853ca06e30b4>\",\"WARC-IP-Address\":\"142.250.188.211\",\"WARC-Target-URI\":\"https://www.kingexcel.info/2019/12/rounding-numbers-by-fractions-or.html\",\"WARC-Payload-Digest\":\"sha1:YNWX2F26Q3IQEIV6JTLN3UOY6AE7SFGT\",\"WARC-Block-Digest\":\"sha1:UOAG27K3G6DVUGQG7W3PZ7JXAOV3EO3Z\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154085.58_warc_CC-MAIN-20210731074335-20210731104335-00543.warc.gz\"}"} |
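Excel's `CEILING(number, significance)` and `FLOOR(number, significance)` used in the record above round to a multiple of the given factor. The same behaviour can be sketched in Python with `math.ceil`/`math.floor` (this mirrors the positive-number case shown on the page; the function names are mine):

```python
import math

def ceiling(number: float, factor: float) -> float:
    """Round up to the nearest multiple of factor, like Excel CEILING."""
    return math.ceil(number / factor) * factor

def floor(number: float, factor: float) -> float:
    """Round down to the nearest multiple of factor, like Excel FLOOR."""
    return math.floor(number / factor) * factor

print(ceiling(3.14, 1/8))    # 3.25
print(floor(3.14, 0.125))    # 3.125
```

As on the page, the factor can be written as the fraction `1/8` or its decimal equivalent `0.125`; both are the same binary-exact value, so the results agree.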
http://www.mrajobseekers.com/2015/09/hp-placement-papers-interview-questions.html | [
"## Thursday, September 17, 2015\n\n### HP Placement Papers - Interview Questions:\n\n1. Select a suitable figure from the Answer Figures that would replace the question mark (?). (Problem figures A-D and answer figures 1-5 are not shown.)\n\nA.1 B.2 C.3 D.4 E.5\n\nExplanation:\nThe upper element is converted to an element similar to the lower elements and each one of the lower elements is converted to an element similar\nto the upper element\n\n2. Select a suitable figure from the Answer Figures that would replace the question mark (?). (Problem figures A-D and answer figures 1-5 are not shown.)\n\nA.1 B.2 C.3 D.4 E.5\n\nExplanation:\nEach one of the upper elements is replaced by an element similar to the lower element(s) and each one of the lower elements is replaced by an\nelement similar to the upper element(s).\n\n3. Each sentence below consists of a word or a phrase which is bold. It is followed by four words or phrases. Select the word or phrase which is closest to the\nOPPOSITE in meaning of the bold word or phrase.\nAdversity teaches man to be humble and self-reliant.\n\nA.sincerity B.animosity C.curiosity D.prosperity\n\n4. Synonym of the bold word:\nWhen I look back over those wartime years I cannot help feeling that time is an inadequate and even capricious measure of their duration; at one moment\nthey seem so long, at another so short.\n\n5. Kings are generally audacious. What does "audacious" mean?\n\nA.Concerned about people B.Fearless C.Suspicious D.Pious\n\n6. Linked lists are best suited:\n\nA.for relatively permanent collections of data\nB.for situations where the size of the structure and the data in the structure are constantly changing\nC.for both of the above situations\nD.for none of the above situations\n\n7. 
A binary tree can easily be converted into a 2-tree\n\nA.by replacing each empty sub tree by a new internal node\nB.by inserting an internal node for each non-empty node\nC.by inserting an external node for each non-empty node\nD.by replacing each empty sub tree by a new external node\n\n8. The complexity of the binary search algorithm is\nA.O(n) B.O(log n) C.O(n^2) D.O(n log n)\n\n9. The worst case occurs in the linear search algorithm when\n\nA.Item is somewhere in the middle of the array\nB.Item is not in the array at all\nC.Item is the last element in the array\nD.Item is the last element in the array or is not there at all\n\n10. The time factor when determining the efficiency of an algorithm is measured by\n\nA.Counting microseconds\nB.Counting the number of key operations\nC.Counting the number of statements\nD.Counting the kilobytes of the algorithm\n\nYou can also see: L&T Placement Papers\n\n11. In the following diagram the rectangle represents men, the triangle represents educated, the circle represents urban and the square represents government employees. Which one of the following represents the educated men but not urban? (Diagram not shown.)\nA.9 B.4 C.5 D.11\n\n12. Which one of the following represents a woman who is urban as well as a government employee?\n\nA.7 B.3 C.10 D.6\n\nExplanation:\nWhy are women not mentioned in the diagram? Ans: The rectangle represents men, therefore the area outside the rectangle should be WOMEN.\nHow to find the answer for this question? Ans = [Woman & Urban & Government_Employee] Conditions: Outside the rectangle(women), inside the circle(urban)\n& the square(Govt employee). From the given diagram we can find that the value 10 satisfies the above conditions. Therefore, the answer is 10.\n\n13. Statements: Some actors are singers. All the singers are dancers.\nConclusions:\n1. Some actors are dancers.\n2. No singer is actor.\n\nA.Only (1) conclusion follows B.Only (2) conclusion follows C.Either (1) or (2) follows D.Neither (1) nor (2) follows\n\n14. 
Statements: All the harmoniums are instruments. All the instruments are flutes.\nConclusions:\n1. All the flutes are instruments.\n2. All the harmoniums are flutes.\n\nA.Only (1) conclusion follows B.Only (2) conclusion follows C.Either (1) or (2) follows D.Neither (1) nor (2) follows\n\n15. Statements:\nI. The meteorological Department has issued a statement mentioning deficient rainfall during monsoon in many parts of the country.\nII. The Government has lowered the revised estimated GDP growth from the level of earlier estimates.\n\nA.Statement I is the cause and statement II is its effect.\nB.Statement II is the cause and statement I is its effect.\nC.Both the statements I and II are independent causes.\nD.Both the statements I and II are effects of independent causes.\n\nExplanation:\nBoth the statements I and II are effects of independent causes.\n\n16. The strength of the electronics industry in Japan is the Japanese ability to organise production and marketing rather than their achievements in original research. The British are generally recognised as a far more inventive collection of individuals, but never seem able to exploit what they invent. There are many examples, from the TSR Z hovercraft, high speed train and Sinclair scooter to the Triumph, BSA and Norton motorcycles, which all prove this sad rule. The Japanese were able to exploit their strengths in marketing and development many years ago, and their success was at first either not understood in the West or was dismissed as something which could have been produced only at their low price. They were sold because they were cheap copies of other people's ideas churned out of a workhouse which was dedicated to hard grind above all else. It is evident from the passage that the strength of a country's industry depends upon\n\nA.original research B.international cooperation C.dedicated workforce D.electronic development\n\n17. 
The sad rule mentioned in this passage refers to\n\nA.the inability of the Japanese to be inventive like the British\nB.the inability of the British to be industrious like the Japanese\nC.the lack of variety in Japanese inventions\nD.the poorer marketing ability of the British\n\n18. The TSR Z hovercraft, high speed train, Sinclair scooter etc. are the symbols of\n\nA.Japanese success B.British failure C.British success D.Japanese failure\n\n19. According to the passage, prosperity in industry depends upon\n\nA.productivity B.inventiveness C.marketing ability D.official patronage\n\n20. The main theme of this passage is\n\nA.electronic industry in Japan\nB.industrial comparison between Japan and Britain\nC.the role of marketing efficiency in industrial prosperity\nD.the importance of original research in industry\n\nYou can also see: Zensar Placement Papers\n\n21. At a game of billiards, A can give B 15 points in 60 and A can give C 20 points in 60. How many points can B give C in a game of 90?\n\nA.8 points B.10 points C.12 points D.none of these\n\nExplanation:\nA:B = 60:45 and A:C = 60:40, so B/C = B/A * A/C = 45/60 * 60/40 = 90/80. B can give C 10 points in a game of 90.\n\n22. A number when divided by 342 gives a remainder 47. When the same number is divided by 19 what would be the remainder?\n\nA.19 B.9 C.2 D.none of these\n\nExplanation:\nNumber = 342K + 47 = 19 * 18K + 19 * 2 + 9 = 19(18K + 2) + 9. The given number when divided by 19 gives 18K + 2 as quotient and 9 as remainder.\n\n23. At 15:15 railway time, what will be the angle between the minute and hour hands?\n\nA.0 degrees B.180 degrees C.360 degrees D.Both a and c\n\nExplanation:\nThe intended answer treats both hands as pointing at the 3 mark at 15:15, giving 0 degrees; since 360 degrees describes the same position, the expected choice is D. Strictly, though, the hour hand advances 0.5 degrees per minute, so by 15:15 it has moved 7.5 degrees past the 3 mark and the true angle is 7.5 degrees, which none of the options lists.\n\n24. Let Raj be 3 years older than Ravi and Hema be two years younger than Ravi. 
Raj is three times as old as Rahul who is Hema's brother. The ratio of the ages of Hema and her brother is 3:2. Find by what percentage Raj's age is more than Hema's when Raj will be 20 years old?\n\nA.33.33 B.40 C.60 D.none of these\n\nExplanation:\nWhatever the additional facts, the difference between the ages of Raj and Hema is always 5 years. Then Hema's age will be 15. Hence Raj's age will be\nmore than Hema's by (5 / Hema's age) * 100 = 5/15 * 100 = 33.33%\n\n25. A hat vendor bought hats at Rs 5 per 3 hats. He sold them at Rs 10 per 4 hats. What was his profit percentage?\n\nA.25 B.50 C.30 D.none of these\n\nExplanation:\nCost price per hat = 5/3. Selling price per hat = 10/4. Profit percentage = ((S.P - C.P) / C.P) x 100. Substituting the above SP and CP we get 50%.\n\n26. Find by what percentage the average of the first 10 odd numbers (starting from 1) differs from the last term?\n\nA.900/19 % B.800/66 % C.Cannot be determined D.none of these\n\nExplanation:\nThe first ten odd numbers form an arithmetic progression of the form 1,3,5,7,9... Here a = first term = 1 and d = common difference = 2. The average of the\nfirst n terms = (2a + (n - 1)d)/2 and the nth term of the AP = a + (n - 1)d. Substituting a = 1, d = 2 and n = 10 in the above formulas, the average of the\nfirst 10 numbers = 10 and the 10th term of the AP = 19. Therefore the last term is 19 - 10 = 9 greater than the average. Hence the average\ndiffers from the last term by 9/19 x 100 = 900/19 %\n\n27. Four equal length ropes were laid on the ground in different fashions. The first rope took the shape of a circle, the second a square, the third a rectangle and the fourth a polygon.\nWhich of the four ropes would cover maximum ground?\n\nA.First Rope B.Second Rope C.Third Rope D.Fourth Rope\n\nExplanation:\nFirst Rope (Circle). Reason: For a given perimeter the circle occupies the maximum area.\n\n28. Which of the following statements drawn from the given statements are correct? 
Given: All watches sold in that shop are of high standard.\nSome of the HMT watches are sold in that shop. a) All watches of high standard were manufactured by HMT. b) Some of the HMT watches are of high standard.\nc) None of the HMT watches is of high standard. d) Some of the HMT watches of high standard are sold in that shop.\n\nA.B & C B.C & A C.B & D D.A & D\n\n29. Divide 45 into four parts such that when 2 is added to the first part, 2 is subtracted from the second part, the third part is multiplied by 2 and the fourth part is divided by two, all result in the same number.\n\nA.8, 12, 5, 20 B.4, 10, 5, 15 C.5, 10, 6, 29 D.none of these\n\nExplanation:\na + b + c + d = 45; a+2 = b-2 = 2c = d/2; so a = b-4, c = (b-2)/2, d = 2(b-2). Then b-4 + b + (b-2)/2 + 2(b-2) = 45, i.e. (9/2)b - 9 = 45, so b = 12, giving a = 8, c = 5, d = 20 (option A).\n\n30. What is the sum of all numbers between 100 and 1000 which are divisible by 14?\n\nA.35392 B.39352 C.33259 D.none of these\n\nExplanation:\nThe multiples of 14 between 100 and 1000 run from 112 (14 x 8) to 994 (14 x 71), 64 terms in all; their sum is 64 x (112 + 994)/2 = 35392 (option A).
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91268617,"math_prob":0.9059564,"size":10824,"snap":"2019-43-2019-47","text_gpt3_token_len":2860,"char_repetition_ratio":0.11950093,"word_repetition_ratio":0.08758372,"special_character_ratio":0.27134147,"punctuation_ratio":0.1513158,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.96613455,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-12T01:02:09Z\",\"WARC-Record-ID\":\"<urn:uuid:ff04c01d-ac29-4f06-a1f6-7ef3d63788a6>\",\"Content-Length\":\"84348\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d62a1248-7ce4-462e-8ccf-577a03b79a87>\",\"WARC-Concurrent-To\":\"<urn:uuid:cfa63784-9483-4fa5-b74b-8e9335693d81>\",\"WARC-IP-Address\":\"172.217.7.179\",\"WARC-Target-URI\":\"http://www.mrajobseekers.com/2015/09/hp-placement-papers-interview-questions.html\",\"WARC-Payload-Digest\":\"sha1:HVYCHNO3CYP2WZYL7CNVZBBQA7WNIZVX\",\"WARC-Block-Digest\":\"sha1:QOCY3SENF5CURPLUUMLEE2DUZASQODH6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496664469.42_warc_CC-MAIN-20191112001515-20191112025515-00113.warc.gz\"}"} |
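The arithmetic in several of the worked answers above can be checked mechanically. A short Python sketch verifying questions 26, 29 and 30 (the brute-force search is my own approach, not the source's):

```python
# Q26: average of the first 10 odd numbers vs the last term
odds = list(range(1, 20, 2))                   # 1, 3, ..., 19
avg, last = sum(odds) / len(odds), odds[-1]
print(avg, last, (last - avg) / last * 100)    # average 10, last 19, 900/19 ~ 47.37 %

# Q29: split 45 into a, b, c, d with a+2 = b-2 = 2c = d/2
for b in range(1, 46):
    a, c, d = b - 4, (b - 2) / 2, 2 * (b - 2)
    if a + b + c + d == 45:
        print(a, b, c, d)                      # 8 12 5.0 20 -> option A

# Q30: sum of numbers between 100 and 1000 divisible by 14
print(sum(n for n in range(101, 1000) if n % 14 == 0))  # 35392 -> option A
```

The same pattern (enumerate, filter, sum) settles most of the numeric multiple-choice items here without any algebra.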
http://www.leastcommonmultiple.net/lcm-of-556/ | [
"# Calculate the Least Common Multiple or LCM of 556\n\nThe instructions to find the LCM of 556 are as follows:\n\n## 1. Decompose all numbers into prime factors\n\n556 | 2\n278 | 2\n139 | 139\n1 |\n\n## 2. Write all numbers as the product of their prime factors\n\nPrime factors of 556 = 2^2 . 139\n\n## 3. Choose the common and uncommon prime factors with the greatest exponent\n\nCommon prime factors: 2, 139\n\nCommon prime factors with the greatest exponent: 2^2, 139^1\n\nUncommon prime factors: None\n\nUncommon prime factors with the greatest exponent: None\n\n## 4. Calculate the Least Common Multiple or LCM\n\nRemember, to find the LCM of several numbers you must multiply the common and uncommon prime factors with the greatest exponent of those numbers.\n\nLCM = 2^2 . 139^1 = 556"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87139,"math_prob":0.94707143,"size":759,"snap":"2023-40-2023-50","text_gpt3_token_len":199,"char_repetition_ratio":0.20794702,"word_repetition_ratio":0.18461539,"special_character_ratio":0.28194994,"punctuation_ratio":0.11034483,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9982284,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T23:47:04Z\",\"WARC-Record-ID\":\"<urn:uuid:a73d390b-7795-4e2e-986b-2c696794a293>\",\"Content-Length\":\"35444\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:850ef463-93c4-484f-b93d-3fe047293b5b>\",\"WARC-Concurrent-To\":\"<urn:uuid:50f59421-b771-4311-b714-e0a649a65ee4>\",\"WARC-IP-Address\":\"107.170.60.201\",\"WARC-Target-URI\":\"http://www.leastcommonmultiple.net/lcm-of-556/\",\"WARC-Payload-Digest\":\"sha1:B4LLH3JOWFWJRAHO3K3SQVXOOT526KR7\",\"WARC-Block-Digest\":\"sha1:IQ65QAFEXIUFK27FFKLKEY4JTAWTC3SI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511284.37_warc_CC-MAIN-20231003224357-20231004014357-00404.warc.gz\"}"} |
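The trial-division decomposition shown above takes only a few lines of Python. The LCM of the single number 556 is trivially 556 itself, but the same factorisation (or, more simply, the gcd identity) drives the multi-number case (both helper names are mine):

```python
from math import gcd

def prime_factors(n: int) -> dict:
    """Trial division, as in step 1 of the page: return {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def lcm(*numbers: int) -> int:
    """Fold the pairwise identity lcm(a, b) = a * b // gcd(a, b)."""
    result = 1
    for n in numbers:
        result = result * n // gcd(result, n)
    return result

print(prime_factors(556))  # {2: 2, 139: 1}, i.e. 2^2 . 139
print(lcm(556))            # 556
```

Using `gcd` avoids tracking "common and uncommon prime factors with the greatest exponent" explicitly, but the two methods agree.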
https://www.dataunitconverter.com/megabit-per-minute-to-gigabyte-per-hour | [
"# Mbit/Min to GB/Hr → CONVERT Megabits per Minute to Gigabytes per Hour\n\n1 Mbit/Min is equal to 0.0075 GB/Hr\n\n## Megabits per Minute (Mbit/Min) Versus Gigabytes per Hour (GB/Hr) - Comparison\n\nMegabits per Minute and Gigabytes per Hour are units of digital information used to measure storage capacity and data transfer rate.\n\nBoth Megabits per Minute and Gigabytes per Hour are \"decimal\" units. One Megabit is equal to 1000^2 bits. One Gigabyte is equal to 1000^3 bytes. There are 8,000 Megabits in one Gigabyte. Find more details in the table below.\n\nMegabits per Minute (Mbit/Min) Gigabytes per Hour (GB/Hr)\nMegabits per Minute (Mbit/Min) is a unit of measurement for data transfer bandwidth. It measures the number of Megabits that can be transferred in one Minute. Gigabytes per Hour (GB/Hr) is a unit of measurement for data transfer bandwidth. It measures the number of Gigabytes that can be transferred in one Hour.\n\n## Megabits per Minute (Mbit/Min) to Gigabytes per Hour (GB/Hr) Conversion Formula and Steps
null,
"Mbit/Min to GB/Hr Calculator Tool allows you to easily convert from Megabits per Minute (Mbit/Min) to Gigabytes per Hour (GB/Hr). This converter uses the formula and steps below to perform the conversion.\n\nThe formula for converting Megabits per Minute (Mbit/Min) to Gigabytes per Hour (GB/Hr) is represented as follows:\n\nGB/Hr = Mbit/Min ÷ (8x1000) x 60\nSource Data Unit Target Data Unit\nMegabit\nEqual to 1000^2 bits\n(Decimal Unit)\nGigabyte\nEqual to 1000^3 bytes\n(Decimal Unit)\n\nThe conversion diagram below will help you to visualize the steps involved in calculating Megabit to Gigabyte in a simplified manner.\n\nMegabit ÷ 8 → Megabyte ÷ 1000 → Gigabyte (and in reverse: Gigabyte x 1000 → Megabyte, Megabyte x 8 → Megabit)\n\nNow let us apply the above formula and see how to manually convert Megabits per Minute (Mbit/Min) to Gigabytes per Hour (GB/Hr). We can further simplify the formula to ease the calculation.\n\nFORMULA\n\nGigabytes per Hour = Megabits per Minute ÷ (8x1000) x 60\n\nSTEP 1\n\nGigabytes per Hour = Megabits per Minute ÷ 8000 x 60\n\nSTEP 2\n\nGigabytes per Hour = Megabits per Minute x (1 ÷ 8000) x 60\n\nSTEP 3\n\nGigabytes per Hour = Megabits per Minute x 0.000125 x 60\n\nSTEP 4\n\nGigabytes per Hour = Megabits per Minute x 0.0075\n\nExample : If we apply the above Formula and steps, conversion from 1 Megabits per Minute (Mbit/Min) to Gigabytes per Hour (GB/Hr) will be processed as below.\n\n1. = 1 ÷ (8x1000) x 60\n2. = 1 ÷ 8000 x 60\n3. = 1 x (1 ÷ 8000) x 60\n4. = 1 x 0.000125 x 60\n5. = 1 x 0.0075\n6. = 0.0075\n7. i.e. 
1 Mbit/Min is equal to 0.0075 GB/Hr.\n\nNote : Result rounded off to 40 decimal positions.\n\nYou can use the above formula and steps to convert Megabits per Minute to Gigabytes per Hour using any programming language such as Java, Python or Powershell.\n\n### Unit Definitions\n\n#### Megabit\n\nA Megabit (Mb or Mbit) is a decimal unit of digital information that is equal to 1,000,000 bits and it is commonly used to express data transfer speeds, such as the speed of an internet connection and to measure the size of a file. In the context of data storage and memory, the binary-based unit of mebibit (Mibit) is used instead.\n\n#### Gigabyte\n\nA Gigabyte (GB) is a decimal unit of digital information that is equal to 1,000,000,000 bytes (or 8,000,000,000 bits) and commonly used to measure the storage capacity of computer hard drives, flash drives, and other digital storage devices. It is also used to express data transfer speeds and in the context of data storage and memory, the binary-based unit of Gibibyte (GiB) is used instead.\n\n## Excel Formula to convert from Megabits per Minute (Mbit/Min) to Gigabytes per Hour (GB/Hr)\n\nApply the formula as shown below to convert from 1 Megabits per Minute (Mbit/Min) to Gigabytes per Hour (GB/Hr).\n\nA B C\n1 Megabits per Minute (Mbit/Min) Gigabytes per Hour (GB/Hr)\n2 1 =A2 * 0.000125 * 60\n3\n\nIf you want to perform bulk conversion locally in your system, then download and make use of the above Excel template.\n\n## Python Code for Megabits per Minute (Mbit/Min) to Gigabytes per Hour (GB/Hr) Conversion\n\nYou can use the code below to convert any value in Megabits per Minute (Mbit/Min) to Gigabytes per Hour (GB/Hr) in Python.\n\nmegabitsperMinute = int(input(\"Enter Megabits per Minute: \"))\ngigabytesperHour = megabitsperMinute / 8000 * 60\nprint(\"{} Megabits per Minute = {} Gigabytes per Hour\".format(megabitsperMinute,gigabytesperHour))\n\nThe first line of code will prompt the user to enter the 
Megabits per Minute (Mbit/Min) as an input. The value of Gigabytes per Hour (GB/Hr) is calculated on the next line, and the third line will display the result.\n\n## Frequently Asked Questions - FAQs\n\n#### How many Gigabytes(GB) are there in a Megabit(Mbit)?\n\nThere are 0.000125 Gigabytes in a Megabit.\n\n#### What is the formula to convert Megabit(Mbit) to Gigabyte(GB)?\n\nUse the formula GB = Mbit / (8x1000) to convert Megabit to Gigabyte.\n\n#### How many Megabits(Mbit) are there in a Gigabyte(GB)?\n\nThere are 8000 Megabits in a Gigabyte.\n\n#### What is the formula to convert Gigabyte(GB) to Megabit(Mbit)?\n\nUse the formula Mbit = GB x (8x1000) to convert Gigabyte to Megabit.\n\n#### Which is bigger, Gigabyte(GB) or Megabit(Mbit)?\n\nGigabyte is bigger than Megabit. One Gigabyte contains 8000 Megabits.\n\n## Similar Conversions & Calculators\n\nAll the conversions below refer to the same calculation."
] | [
null,
"https://www.dataunitconverter.com/includes/shared/showimage.php/megabit_to_gigabyte_Dataunitconverter.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7452522,"math_prob":0.98765635,"size":6439,"snap":"2023-40-2023-50","text_gpt3_token_len":1993,"char_repetition_ratio":0.28282827,"word_repetition_ratio":0.16498625,"special_character_ratio":0.26494798,"punctuation_ratio":0.06365834,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9931718,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T18:55:30Z\",\"WARC-Record-ID\":\"<urn:uuid:6f1ed033-064f-48ac-a60a-4bb0fb925f7b>\",\"Content-Length\":\"78709\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1dbb51c7-1c58-4bb5-8306-529de3b999a8>\",\"WARC-Concurrent-To\":\"<urn:uuid:480845fb-6ef3-402d-87b5-77f9e79ee713>\",\"WARC-IP-Address\":\"64.227.22.174\",\"WARC-Target-URI\":\"https://www.dataunitconverter.com/megabit-per-minute-to-gigabyte-per-hour\",\"WARC-Payload-Digest\":\"sha1:IDPCYOBM57ZANGJLYHVXTKGKQQ65HMYF\",\"WARC-Block-Digest\":\"sha1:VFKH5JAIWTP2VJB3RCK7MS5QHBRVWSFG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100448.65_warc_CC-MAIN-20231202172159-20231202202159-00777.warc.gz\"}"} |
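The page's own Python snippet reads from `input()`; refactoring it into a pure function, as sketched below, makes the same conversion easier to test (the function name is mine):

```python
def mbit_per_min_to_gb_per_hr(mbit_per_min: float) -> float:
    """Mbit/Min -> GB/Hr: divide by 8*1000 (bits->bytes, mega->giga), times 60 min/hr."""
    return mbit_per_min / 8000 * 60

print(mbit_per_min_to_gb_per_hr(1))     # 0.0075, matching the worked example
print(mbit_per_min_to_gb_per_hr(8000))  # 60.0
```

The Excel formula on the page, `=A2 * 0.000125 * 60`, is the same computation with the division pre-folded into the constant 0.000125.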
https://libguides.unomaha.edu/c.php?g=946374&p=6822928 | [
"# SPSS Research Guide\n\nA guide intended for new users of SPSS\n\n## Chi-Square Test for Goodness of Fit\n\nThe chi-square “goodness of fit” test evaluates the hypothesis that the proportions in the obtained sample are a “good fit” to the proportions known to exist in the population, i.e., the expected proportions. It can also test whether proportions are occurring randomly.\n\n## Test of Correlation\n\nCorrelation analysis is used in the case of two continuous variables. Let's say we want to explore the relationship between height and weight. We will use correlation to measure this relationship.\n\n## Independent Sample T-Test\n\nOne way of measuring the difference between two samples is to compare two unrelated samples. In such experimental designs, each of the two groups is measured once. With independent samples, you test whether the means of the samples are not just different, but statistically significantly different.\n\n## One-Way ANOVA\n\nANOVA is used for comparing 3+ groups or 3+ variables in 1 group. For example:\n\nOne-way ANOVA can be used to compare whether children from school A, B, and C have equal mean IQ scores.\n\nRepeated measures ANOVA can be used to compare whether the mean rating for beer A, B, and C is equal for all people."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.94332755,"math_prob":0.8833292,"size":1044,"snap":"2021-04-2021-17","text_gpt3_token_len":214,"char_repetition_ratio":0.12211538,"word_repetition_ratio":0.035714287,"special_character_ratio":0.19827586,"punctuation_ratio":0.12060302,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97406477,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-13T04:33:27Z\",\"WARC-Record-ID\":\"<urn:uuid:50ecd266-4fcc-4f2b-8c45-45d4f84a2402>\",\"Content-Length\":\"29436\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f1de1864-9d06-4536-8e05-94b59e02dc32>\",\"WARC-Concurrent-To\":\"<urn:uuid:7abc7f00-6ede-4133-979d-6ad60e27e8b7>\",\"WARC-IP-Address\":\"34.194.39.199\",\"WARC-Target-URI\":\"https://libguides.unomaha.edu/c.php?g=946374&p=6822928\",\"WARC-Payload-Digest\":\"sha1:WMZH5AZCKIQIWJ7HWVRI7GZMFHSIMF76\",\"WARC-Block-Digest\":\"sha1:YPF7RXWXNJEZC4DBQDFMR5GWURGMSWEO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038072082.26_warc_CC-MAIN-20210413031741-20210413061741-00525.warc.gz\"}"} |
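The guide above describes the tests conceptually; in practice one would run them in SPSS or with `scipy.stats`. The core statistics behind two of them can nonetheless be sketched with the standard library alone (the formulas below are textbook definitions, not taken from the guide, and the example data are invented):

```python
import math

def chi_square_gof(observed, expected):
    """Chi-square goodness-of-fit statistic: sum((O - E)**2 / E)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fair-die check: 60 rolls, 10 expected per face
print(chi_square_gof([8, 9, 12, 11, 10, 10], [10] * 6))   # 1.0
# Height/weight-style data that happens to be perfectly linear
print(pearson_r([150, 160, 170, 180], [50, 60, 70, 80]))  # ~1.0
```

Obtaining a p-value from these statistics requires the chi-square and t reference distributions, which is where a package such as `scipy.stats` (or SPSS itself) takes over.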
https://www.atpmedicine.com/archives/2140.html | [
"Python: The Magic of Function Parameters\n\n2016-12-01\n\nEven built-in functions use flexible parameters:\n\n>>> int('123')\n123\n>>> int('123', 8)\n83\n\nThe second parameter of int() is the conversion base; if it is not passed, the default base 10 (base=10) is used, otherwise the passed value is used.\n\n1. Positional parameters\n\n>>> def myfun(x, y):\n...     return x + y\n\nCalling with too few arguments fails:\n\n>>> s = myfun(10)\nTypeError: myfun() missing 1 required positional argument: 'y'\n\n2. Default parameters\n\n>>> def myfun(x, y, z=1, h=0):\n...     return (x + y)*z + h\n\n1) Benefits\n\nDefault parameters simplify calls, and they let us extend a function's features without breaking existing callers.\n\n2) Points to note\n\n• Required parameters come first, default parameters after them.\n• When adding a default parameter, consider that it must not change the result of old calls; this is how we can safely extend a function with new features.\n• With several default parameters, arguments can be passed in order, e.g. myfun(10, 20, 3, 8), or out of order by naming them, e.g. myfun(10, 20, h=8); in the latter call z is not passed and keeps its default value 1.\n• Remember one thing: a default parameter must point to an immutable object.\n\nThe last rule matters because a default value is evaluated only once, when the function is defined:\n\n>>> def add_to_list(L=[]):\n...     L.append('END')\n...     return L\n\n>>> add_to_list([1, 2, 3])        # no problem\n[1, 2, 3, 'END']\n>>> add_to_list(['a', 'b', 'c'])  # no problem\n['a', 'b', 'c', 'END']\n>>> add_to_list()\n['END']\n>>> add_to_list()\n['END', 'END']\n>>> add_to_list()                 # not good: three 'END's\n['END', 'END', 'END']\n\nWith L=[] we would expect L to hold a single element after each call, so why does L keep growing when the function is called repeatedly? Every call without an argument appends to the same list object that was created at definition time. Use an immutable sentinel such as None instead.\n\n3. Variable parameters\n\n1) Definition\n\nWithout variable parameters we would have to pass a list:\n\n>>> def myfun(number):\n...     sum = 0\n...     for i in number:\n...         sum = sum + i*i\n...     return sum\n\nHere number is a list:\n\n>>> myfun([1, 2, 3])\n14\n\n2) Usage\n\nPrefixing the parameter with * makes it a variable parameter; the arguments arrive inside the function as a tuple:\n\n>>> def add(*numbers):\n...     sum = 0\n...     for i in numbers:\n...         sum += i\n...     print 'numbers:', numbers\n...     return sum\n\n>>> add()          # 0 arguments passed\nnumbers: ()\n0\n>>> add(1)         # 1 argument passed\nnumbers: (1,)\n1\n>>> add(1, 2)      # 2 arguments passed\nnumbers: (1, 2)\n3\n>>> add(1, 2, 3)   # 3 arguments passed\nnumbers: (1, 2, 3)\n6\n\nAn existing list or tuple can be unpacked into a variable parameter with *:\n\n>>> def add(x, y, z):   # three required parameters\n...     return x + y + z\n\n>>> a = [1, 2, 3]\n>>> add(a[0], a[1], a[2])   # passing arguments like this is clumsy\n6\n>>> add(*a)                 # same as above\n6\n>>> b = (4, 5, 6)\n>>> add(*b)                 # works for tuples, too\n15\n\n*a passes every element of the list a as a separate argument. This notation is very useful and very common.\n\n4. Keyword parameters\n\n>>> def myfun(name, email, **kw):\n...     print('name:', name, 'email:', email, 'other:', kw)\n\n1) Usage\n\nA keyword parameter is declared by adding **kw as the last parameter. Inside the function, the keyword arguments are collected into a dict; callers pass them as k=v pairs:\n\n>>> myfun('shuang', '49206@qq.com', age=18, sex='女')\nname: shuang email: 49206@qq.com other: {'sex': '女', 'age': 18}\n\nJust like with variable parameters, we can prepare a dict outside the function first:\n\n>>> extra = {'age': '18', 'sex': '女'}\n>>> myfun('shuang', '49206@qq.com', **extra)\nname: shuang email: 49206@qq.com other: {'age': '18', 'sex': '女'}\n\n**extra passes all key-value pairs of the dict extra into the function's **kw parameter, and kw receives a dict. Note: kw gets a copy of extra, so changes to kw inside the function do not affect extra outside of it.\n\n5. Named keyword parameters\n\n1) Recap of keyword parameters\n\nA plain **kw accepts any names, so the function has to check for the keys it cares about:\n\n>>> def myfun(name, email, **kw):\n...     if 'age' in kw:\n...         # an age argument was passed\n...         pass\n...     if 'sex' in kw:\n...         # a sex argument was passed\n...         pass\n\n2) Restricting the accepted names\n\nTo accept only specific keyword names, use named keyword parameters after a * separator:\n\n>>> def myfun(name, email, *, age, sex):\n...     print(name, email, age, sex)\n\n3) Usage\n\n>>> myfun('Jack', 'xx@qq.com', age=18, sex='女')\nJack xx@qq.com 18 女\n\n4) Points to note\n\n• If the definition already contains a variable parameter, the named keyword parameters that follow it no longer need the special separator *:\n\n>>> def myfun(name, email, *args, age, sex):\n...     print(name, email, args, age, sex)\n\n• Named keyword parameters must be passed by name; this differs from positional parameters. Without the names the call fails:\n\n>>> myfun('Jack', 'xx@qq.com', '18', '女')\nTypeError: myfun() takes 2 positional arguments but 4 were given\n\n• Default values are allowed:\n\n>>> def myfun(name, email, *, age='18', sex):\n...     print(name, email, age, sex)\n\nSince the named keyword parameter age has a default value, it can be omitted in the call:\n\n>>> myfun('Jack', 'xxx@qq.com', sex='女')\nJack xxx@qq.com 18 女\n\n• Without a variable parameter, the separator * is mandatory; if it is missing, the Python interpreter cannot distinguish positional parameters from named keyword parameters:\n\n>>> def myfun(name, email, age, sex):   # missing *: age and sex are positional here\n\nAlso note that a required parameter may not follow a default one:\n\n>>> def add(x=1, y, z):\n...     return x + y + z\nSyntaxError: non-default argument follows default argument\n\n6. Combining parameter types\n\nRequired, default, variable, keyword and named keyword parameters can be combined, in exactly that order:\n\n>>> def f1(a, b, c=0, *args, **kw):\n...     print('a =', a, 'b =', b, 'c =', c, 'args =', args, 'kw =', kw)\n>>> def f2(a, b, c=0, *, d, **kw):\n...     print('a =', a, 'b =', b, 'c =', c, 'd =', d, 'kw =', kw)\n\n>>> f1(1, 2)\na = 1 b = 2 c = 0 args = () kw = {}\n>>> f1(1, 2, c=3)\na = 1 b = 2 c = 3 args = () kw = {}\n>>> f1(1, 2, 3, 'a', 'b')\na = 1 b = 2 c = 3 args = ('a', 'b') kw = {}\n>>> f1(1, 2, 3, 'a', 'b', x=99)\na = 1 b = 2 c = 3 args = ('a', 'b') kw = {'x': 99}\n>>> f2(1, 2, d=99, ext=None)\na = 1 b = 2 c = 0 d = 99 kw = {'ext': None}\n\nMost remarkably, a tuple and a dict are enough to call any such function:\n\n>>> args = (1, 2, 3, 4)\n>>> kw = {'d': 99, 'x': '#'}\n>>> f1(*args, **kw)\na = 1 b = 2 c = 3 args = (4,) kw = {'d': 99, 'x': '#'}\n>>> args = (1, 2, 3)\n>>> kw = {'d': 88, 'x': '#'}\n>>> f2(*args, **kw)\na = 1 b = 2 c = 3 d = 88 kw = {'x': '#'}\n\nSo any function, however its parameters are defined, can be called in the form func(*args, **kw). (This flexibility can also be confusing; it takes practice to appreciate its advantages.)\n\n*args denotes variable parameters and **kwargs denotes keyword parameters; args and kwargs are the idiomatic Python names for them.\n\nSee also: *args and **kwargs · Advanced Python"
] | [
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.5602515,"math_prob":0.9877365,"size":10050,"snap":"2019-43-2019-47","text_gpt3_token_len":6513,"char_repetition_ratio":0.13169421,"word_repetition_ratio":0.2446877,"special_character_ratio":0.43701494,"punctuation_ratio":0.21991505,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9888172,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-20T17:24:58Z\",\"WARC-Record-ID\":\"<urn:uuid:25e6150d-9cd7-4390-b5c7-98aacfc4230e>\",\"Content-Length\":\"31137\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:01513df5-ea1e-4ce1-b027-3c57af71e759>\",\"WARC-Concurrent-To\":\"<urn:uuid:dd6134ca-9d77-48e7-9e61-fa6e305c483d>\",\"WARC-IP-Address\":\"170.130.211.29\",\"WARC-Target-URI\":\"https://www.atpmedicine.com/archives/2140.html\",\"WARC-Payload-Digest\":\"sha1:W5LEK4NJDWHTDKG6UEAMZWK24YNRXS76\",\"WARC-Block-Digest\":\"sha1:TFSQGIRO4KPJTI7XA42WCP2JDWEDAJBR\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986717235.56_warc_CC-MAIN-20191020160500-20191020184000-00012.warc.gz\"}"} |
https://dsp.stackexchange.com/questions/50688/what-is-a-weighted-local-histogram | [
"# What Is a Weighted Local Histogram?\n\nI'm reading through a couple of academic papers, and this term often comes up: \"local weighted histogram\". An example quote is the following:\n\nFirst, cumulative histograms are built for every pixel from its neighborhood, using Gaussian-neighborhood weighting\n\nI would be able to construct a local histogram, but I'm not sure what this weighting is about.\n\nSay the central pixel is at $p_0 = (y_0,x_0)$, and we have two pixels of the same gray-scale value $r$ at coordinates $p_1 = (y_1,x_1)$ and $p_2 = (y_2,x_2)$, but such that $d(p_0,p_1) < d(p_0,p_2)$. A normal histogram would count that value $r$ twice. With a Gaussian weighting, for example, how would we count such a pixel value in the histogram construction?\n\nA weighted local histogram would mean filtering the image $$I(x,y)$$ with a localised filter $$H(x,y)$$ (Gaussian in this example). The resultant image is the 2D convolution $$Y(x,y) = I(x,y)*H(x,y)$$. The normal histogram of $$Y(x,y)$$ is the localised histogram of $$I(x,y)$$. It is used to view certain specific characteristics of an image depending on the filter used."
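To make the weighting in the question concrete, here is a small sketch (plain NumPy, my own helper rather than code from the cited papers): every neighbourhood pixel votes into the bin of its gray value, but the vote is weighted by a Gaussian of its distance to the centre, so the nearer pixel $p_1$ contributes more to bin $r$ than the farther pixel $p_2$:

```python
import numpy as np

def weighted_local_histogram(img, y0, x0, radius=3, sigma=1.5, bins=8):
    """Gaussian-weighted local histogram around pixel (y0, x0):
    each neighbourhood pixel votes into the bin of its gray value,
    weighted by exp(-d^2 / (2 sigma^2)) instead of a plain +1."""
    hist = np.zeros(bins)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                weight = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma ** 2))
                b = int(img[y, x]) * bins // 256   # bin of an 8-bit gray value
                hist[b] += weight                  # weighted vote, not "+1"
    return hist / hist.sum()                       # normalise to sum 1
```

With sigma large, every weight tends to 1 and the ordinary local histogram is recovered; the cited papers additionally accumulate these into cumulative histograms.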
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93352133,"math_prob":0.99905723,"size":705,"snap":"2021-21-2021-25","text_gpt3_token_len":184,"char_repetition_ratio":0.122681886,"word_repetition_ratio":0.0,"special_character_ratio":0.2680851,"punctuation_ratio":0.124087594,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99990225,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-16T09:06:15Z\",\"WARC-Record-ID\":\"<urn:uuid:f3d36dee-e0a5-4f93-9434-96e76cd8002d>\",\"Content-Length\":\"177714\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b59f15e3-781a-4109-b216-7e5ce8753916>\",\"WARC-Concurrent-To\":\"<urn:uuid:848ae967-4d73-46ca-bac4-45b623f6d68e>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://dsp.stackexchange.com/questions/50688/what-is-a-weighted-local-histogram\",\"WARC-Payload-Digest\":\"sha1:5ZRONAUHN6K3JRBUGWYGKC7OHGQNLB6S\",\"WARC-Block-Digest\":\"sha1:K4XMAK3JLD6ORK6PUYIDJ64WJF6LPC6R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243992516.56_warc_CC-MAIN-20210516075201-20210516105201-00534.warc.gz\"}"} |
https://www.kwiksher.com/doc/kwik_tutorial/interactions/keyboard/ | [
"# Keyboard\n\nThe demo file in this tutorial was created by Leonardo Amora, owner of http://www.amoraleite.com. You can download the source files here. Updated for Kwik4.\n\nThe user clicks button 1, 2 or 3 to match the number shown at Tip (e.g. 33133211); when the input matches, You Won!! is displayed",
null,
"The PSD is sized for 1024x768, so let's create a Kwik project with the iPad Mini preset.\n\n## Dynamic Texts & Variables\n\nThese features enable a text layer to display the value of a variable.\n\n• The Texto layer is associated with the variable LCD; this is the number appearing in the LCD area when the user clicks a button.\n• The txtodica layer is associated with the variable numDica; this is the number shown at Tip.\n\n### Create variables",
null,
"• LCD",
null,
"• Name:LCD\n• Value: \"\"\n• Local\n• Content:Formula/Boolean\n• Before layers are rendered\n\nThe reason for Formula/Boolean is that we will later assign a formula such as LCD..“1” to the LCD variable.\n\n• numDica",
null,
"• Name:numDica\n• Value: \"\"\n• Local\n• Content:Formula/Boolean\n• Before layers are rendered\n\n### Dynamic Text Replacement\n\nLet’s assign the variables to the text layers.",
null,
"• Textto is associated with LCD",
null,
"• textdica is associated with numDica",
null,
"## External Code - createDica.lua\n\nIt initializes numDica. The value is a random 8-digit string, each digit between 1 and 3.\n\n``````UI.numDica = math.random(1,3)..math.random(1,3)..math.random(1,3)..math.random(1,3)..math.random(1,3)..math.random(1,3)..math.random(1,3)..math.random(1,3)\n``````",
null,
"• After layers\n\n## Buttons\n\nThe OK button checks whether the LCD variable equals the numDica variable. The Clear button clears the input. Each of the 1, 2, 3 buttons appends its number to LCD.",
null,
"### OK Button\n\nthe logic is like this\n\n``````if LCD == numDica then\nShow Win\nend\n``````",
null,
"• IF\n\n• Variable: LCD\n• Operator:EQUAL\n• Comparison: UI.numDica",
null,
"Kwik4 internally stores local variables in the UI table, so the comparison value has to be entered as UI.numDica in the input field.\n\n• Show/Hide: Win layer to show\n\n• End IF\n\n### Clear Button\n\nIt resets the LCD variable to an empty string.",
null,
"• Edit Variable\n• LCD\n• Value: \"\"",
null,
"### Number 1,2,3 Button\n\nThe logic is like this:\n\n``````if LCD length <= 7 then\nLCD = LCD..\"1\"\nend\n``````\n\nIt concatenates the LCD string with the input number, but only while LCD holds at most 7 characters, so the display never exceeds 8 digits",
null,
"• IF\n• LCD\n• LESS OR EQUAL THAN\n• Comparison:7",
null,
"• Edit Variable\n• LCD\n• Value:UI.LCD..“1”\n\nFor button number 2, use UI.LCD..“2”.",
null,
"For button number 3, use UI.LCD..“3”.",
null,
"## Hide Win layer\n\nThe Win layer should be transparent when the page starts",
null,
"Hide layer: Win",
null,
"that’s all."
] | [
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200386.jpg",
null,
"https://www.kwiksher.com/doc/kwik/toolset/project_and_pages/img/snapshot 0188.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200369.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200371.jpg",
null,
"https://www.kwiksher.com/doc/kwik/toolset/layers_and_replacements/img/snapshot 0211.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200372.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200373.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200387.jpg",
null,
"https://www.kwiksher.com/doc/kwik/toolset/interactions/img/snapshot 0278.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200374.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200375.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200388.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200378.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200379.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200380.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200381.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200382.jpg",
null,
"https://www.kwiksher.com/doc/kwik/toolset/project_and_pages/img/snapshot 0187.jpg",
null,
"https://www.kwiksher.com/doc/kwik_tutorial/interactions/img/snapshot%200384.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6329341,"math_prob":0.84194124,"size":2269,"snap":"2022-40-2023-06","text_gpt3_token_len":641,"char_repetition_ratio":0.14746137,"word_repetition_ratio":0.078947365,"special_character_ratio":0.26928163,"punctuation_ratio":0.18989898,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9781967,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,2,null,3,null,2,null,2,null,2,null,2,null,2,null,2,null,3,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T00:34:42Z\",\"WARC-Record-ID\":\"<urn:uuid:44d68707-aebb-452e-9ae7-335197f5ac50>\",\"Content-Length\":\"96711\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7081731d-f115-4a64-8b80-4a5892e64c0a>\",\"WARC-Concurrent-To\":\"<urn:uuid:9e0e1f5e-f934-401e-8f2c-72e10fbe4df5>\",\"WARC-IP-Address\":\"35.209.74.233\",\"WARC-Target-URI\":\"https://www.kwiksher.com/doc/kwik_tutorial/interactions/keyboard/\",\"WARC-Payload-Digest\":\"sha1:DZRJBNTTFK5NSNF3RUPEOEYMZEEFRYGL\",\"WARC-Block-Digest\":\"sha1:4CMJZVAMBUY33MZB2KWFV7NI6MOQ5H2G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764494852.95_warc_CC-MAIN-20230127001911-20230127031911-00422.warc.gz\"}"} |
https://math.codidact.com/posts/278430 | [
"# Existence of a set of all sets\n\n+0\n−1\n\nSuppose that we have an axiomatic set theory having the following axiom:\n\nThe Axiom Schema of Comprehension: Let $\mathbf{P}(x)$ be a property of $x$. For any set $A$, there is a set $B$ such that $x\in B$ if and only if $x\in A$ and $\mathbf{P}(x)$.\n\nCan a set of all sets exist within such an axiomatic system?\n\n## 2 answers\n\n+4\n−0\n\nSure. You can simply add $\exists V.\forall x.x \in V$ as an axiom to ZF(C), and you will have such an axiomatic system. Such an addition to ZF(C) would make it inconsistent, but it would still prove the existence of a \"set of all sets\" (along with everything else).\n\nAs Peter Taylor points out in a comment on another answer, there are multiple ways of accomplishing this consistently, though most of them require fiddling with the logic in which the axiomatic system is formulated.\n\nPeter Taylor suggests dropping the Law of Excluded Middle (LEM) leading to a constructive/intuitionistic logic. This does not work as the proof of the contradiction does not use LEM even implicitly. However, dropping the Law of Non-Contradiction leading to a paraconsistent logic immediately resolves the issue. You can still use Russell's proof to show $x\in x$ and $x \notin x$, but this does not render a paraconsistent logic trivial.\n\nPeter Taylor also suggests that $x \in x$ can fail to be a well-formed formula. One way this could happen is by using a linear logic or some variant (e.g. an affine logic). Specifically, we could drop the structural rule of contraction which allows variables to be used multiple times in a formula. $x \in x$ is not a linear/affine formula as $x$ occurs twice in it. This use of contraction is unavoidable.
Masaru Shirahata's Linear Set Theory (PDF) is an example of a set theory based on a linear logic which contains an \"unrestricted\" comprehension axiom.\n\nThere are various other ways the formula $x \in x$ could be disallowed. In modern type theories and structural set theories, membership, i.e. $\in$, is usually identified with (flipped) function application and has a type like ${\in_X} : X \times \Omega^X \to \Omega$ where $\Omega$ is a type of propositions, e.g. $\mathbf 2$ in the classical case. In this situation, $x \in_X x$ would be a type error (and thus not a \"well-formed formula\") as $x$ can't simultaneously have the type $X$ and the type $\Omega^X$. Indeed, Russell's initial attempts to avoid Russell's paradox were what led to modern type theory.\n\nSee also the nLab's page on Russell's paradox for a list of options for avoiding Russell's paradox, though it also includes modifications to comprehension.\n\n+0\n−1\n\nLet $V$ be a set of all sets. According to the Axiom Schema of Comprehension, we can have the following set:\n\n$$U= \{ x\in V \mid x \not \in x \}.$$\n\nNow, there are two cases:\n\n• $U \in U$, which, according to the definition of the set $U$, implies that $U \not \in U$, which is a contradiction;\n\n• $U \not \in U$, which, according to the definition of the set $U$, implies that $U \in U$, which is a contradiction.\n\nThus, a set of all sets cannot exist in such an axiomatic system.\n\n#### 1 comment\n\nOr the property $P(x) = x \not\in x$ cannot exist in such an axiomatic system, or such an axiomatic system can contain a set of all sets but at the cost of consistency, or possibly such an axiomatic system can contain a set of all sets as long as it doesn't have the law of the excluded middle.
Peter Taylor 6 months ago",
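As an aside, the diagonal argument above is fully constructive; it never appeals to the Law of Excluded Middle, matching the first answer's remark, and this can be checked mechanically. A Lean 4 sketch (it abstracts membership to an arbitrary binary relation, so it illustrates the diagonal step rather than formalizing the axiom schema itself):

```lean
-- Russell's diagonal argument, constructively: no element U can satisfy
-- "x ∈ U ↔ x ∉ x" for all x. `mem` stands in for the membership relation.
theorem russell {α : Type} (mem : α → α → Prop)
    (U : α) (hU : ∀ x, mem x U ↔ ¬ mem x x) : False :=
  have h : mem U U ↔ ¬ mem U U := hU U
  have hn : ¬ mem U U := fun hm => (h.mp hm) hm
  hn (h.mpr hn)
```

Since the proof only uses implication and the two directions of the biconditional, dropping LEM indeed does not block it.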
null,
"This community is part of the Codidact network. We have other communities too — take a look!\n\nYou can also join us in chat!\n\nWant to advertise this community? Use our templates!"
] | [
null,
"https://math.codidact.com/assets/codidact.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92720324,"math_prob":0.9762705,"size":3422,"snap":"2021-04-2021-17","text_gpt3_token_len":865,"char_repetition_ratio":0.112053834,"word_repetition_ratio":0.07154742,"special_character_ratio":0.24430157,"punctuation_ratio":0.09855072,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99747133,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-10T22:20:52Z\",\"WARC-Record-ID\":\"<urn:uuid:904267d2-5d44-4dd8-a447-b3452f03318d>\",\"Content-Length\":\"52199\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e2cd2de-8d95-4dd7-9230-c318c55a5c9f>\",\"WARC-Concurrent-To\":\"<urn:uuid:fbc9a27f-796e-4c6f-8f10-009af669e84b>\",\"WARC-IP-Address\":\"104.26.1.18\",\"WARC-Target-URI\":\"https://math.codidact.com/posts/278430\",\"WARC-Payload-Digest\":\"sha1:JZT6O3VI5YDP7QP6RFFI2MRSPLOXPHLT\",\"WARC-Block-Digest\":\"sha1:BJDWS6MC7J5VBXSIP4QZJZKIAFH5A3AQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038059348.9_warc_CC-MAIN-20210410210053-20210411000053-00001.warc.gz\"}"} |
http://mielke-bio.info/falk/camera_calibration | [
"# Stereo Camera Calibration\n\n## 2D images in a 3D world.\n\nPosted by Falk Mielke on 3rd Aug 2020\n\n# Stereo Camera Calibration\n\nFalk Mielke, FunMorph, Universiteit Antwerpen\n\nFor a student project, I have been playing around with stereo camera calibration. We have a set of cameras. The challenge is to obtain 3D kinematics by combining data from multiple views (i.e. simultaneously captured images from different perspectives).\n\nBelow, I document my attempts to relate the image space of the cameras (i.e. what is seen on each frame) to object space (i.e. the true 3D position of something).\n\nHere is the usual extensive set of libraries I use below.\n\nimport numpy as NP # numeric operations\nimport pandas as PD # data organization\n\nimport cv2 as CV # open computer vision, i.e. image manipulation library\n\nimport sympy as SYM # symbolic python\nimport sympy.physics.mechanics as MECH # symbolic mechanics\n\nimport matplotlib as MP # plotting\nimport matplotlib.pyplot as MPP # plotting\nfrom mpl_toolkits import mplot3d # 3D plots\n\n# equal aspect for matplotlib 3D plots\n# from https://stackoverflow.com/a/19248731\ndef AxisEqual3D(ax):\nextents = NP.array([getattr(ax, f'get_{dim}lim')() for dim in 'xyz'])\nsz = extents[:,1] - extents[:,0]\ncenters = NP.mean(extents, axis=1)\nmaxsize = max(abs(sz))\nr = maxsize/2\nfor ctr, dim in zip(centers, 'xyz'):\ngetattr(ax, f'set_{dim}lim')(ctr - r, ctr + r)\n\n# Ingredients\n\nWe have a set of consumer level GoPro \"Hero 5\" in the lab, three of which were attached to a rack of metal tubes. All were set to narrow field of view (to minimize fisheye lens distortion). Videos of resolution $$1280\\times 720$$ @ 240 fps were triggered with a remote control.\n\nThe same video settings were used to film a regular grid for undistortion (procedure as described here), which was applied to all the material used below.\n\nThe remote control sync was not accurate, which I found out only after calibration. 
I will use an extra audio cue in the future.\n\nThe setup is lightweight, mobile, and thus suited for field work.\n\n# Calibration Data\n\nAs a test calibration object, I grabbed a monitor box that was lying around in the office.",
null,
"To get true 3D points, I prepared a 3D model of the box by photogrammetry. Then, I used the \"PickPoints\" function in meshlab to get relative positions of the corner points and two extra landmarks on the box.\n\nTaking simultaneous photos from the fixed cameras, I could also retrieve image points. Prior to calibration, those images were contrast enhanced and undistorted. All relevant images have to be prepared with the same procedure.\n\ndata = PD.read_csv('cam_calibration_data.csv', sep = ';').set_index('pt', inplace = False)\ndata\nx y z u_1 v_1 u_2 v_2 u_4 v_4\npt\n0 -0.235521 0.308646 0.101321 321.426491 111.782226 786.011187 252.220410 664.551136 65.475581\n1 -0.000698 0.311595 0.094717 180.229182 143.665489 710.098655 287.899300 537.777207 91.285842\n2 -0.004713 -0.328370 0.089341 365.455760 315.986937 1073.719682 405.563724 814.857949 221.855397\n3 -0.236395 -0.329500 0.092263 522.594701 272.716794 1129.894956 363.811832 946.186629 186.935632\n4 0.005713 0.315304 -0.349343 203.762067 394.935970 685.047519 527.023775 535.499831 356.979704\n5 0.001885 -0.331354 -0.345600 379.879141 596.104179 1025.894787 662.148082 792.084189 524.746399\n6 -0.229327 -0.330225 -0.348522 524.872077 546.001908 1082.829186 606.731934 915.062491 479.198880\n7 0.000000 0.000000 0.000000 266.010343 279.548921 868.755846 397.213346 656.200757 211.227643\n8 -0.132960 0.025250 0.097379 338.886374 191.490384 899.120859 313.709561 720.726409 125.446482\n\nNote that object points are in meters, and should have a meaningful origin.\n\ncams = [1,2,4]\nimages = {cam: CV.imread(f'images/cam{cam}.png', 1) for cam in cams}\n\n# drawing its lines on left image\nfig = MPP.figure(dpi = 300)\n\nfor nr, (cam, img) in enumerate(images.items()):\nax = fig.add_subplot(1, len(images), nr+1, aspect = 'equal')\nax.imshow(img[:, :, ::-1], origin = 'upper')\n\npts = data.loc[:, [f'u_{cam}', f'v_{cam}']].values\n\nax.scatter(pts[:, 0], pts[:, 1], s = 20 \\\n, marker = '+', facecolor = 'r', edgecolor = '0' 
\\\n, linewidth = 1, alpha = 0.6 \\\n)\n\nax.get_xaxis().set_visible(False)\nax.set_yticks([])\nax.set_xlabel(nr)\n\nMPP.tight_layout()\nMPP.show()",
null,
"Of course, the points must always be labeled in the same order. It would be better to use point labels as pandas data frame indices to make sure only matching points are associated.\n\nIn addition to the points on the images, the true 3D relation of the points is known:\n\ndata.loc[:, ['x', 'y', 'z']].T\npt 0 1 2 3 4 5 6 7 8\nx -0.235521 -0.000698 -0.004713 -0.236395 0.005713 0.001885 -0.229327 0.0 -0.132960\ny 0.308646 0.311595 -0.328370 -0.329500 0.315304 -0.331354 -0.330225 0.0 0.025250\nz 0.101321 0.094717 0.089341 0.092263 -0.349343 -0.345600 -0.348522 0.0 0.097379\n\nThis can be used for calibration.\n\n# The Successful Attempt: DLT Calibration à la Argus/Kwon\n\nMy initial temptation was to go to openCV. For several reasons, these attempts failed (see below).\n\nOn the search for alternatives, I stumbled upon the argus toolbox from Ty Hedrick's lab (Jackson et al., 2016). The toolbox had previously been recommended to me by a colleague, François Druelle.\n\nI went through the procedures referenced in the argus \"theory\" section. The authors implement algorithms collected in the Motion Analysis library of Young-Hoo Kwon. The Kwon3D website (Kwon, 2000) is a phenomenal resource for my research. I have already referenced the website in my force plate blog posts. I have now realized that my recent quest of solving inverse dynamics could have been facilitated by theory assembled on Kwon's archives. And now, I also use the section on camera calibration via DLT for my purpose.\n\n## Theory\n\nI start here with the naked maths. If that is of less interest to you, feel free to skip to the \"numeric application\" section below.\n\nThe website by Kwon excellently explains the DLT theory.
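Before the symbolic derivation: numerically, the DLT reduces to a linear least-squares problem in eleven parameters per camera, with two equations per object/image point pair. A minimal self-contained sketch (synthetic points and helper names of my own, not code from argus):

```python
import numpy as np

def dlt_calibrate(xyz, uv):
    """Estimate the 11 DLT parameters from >= 6 object/image point pairs
    by ordinary least squares (two linear equations per point)."""
    A, b = [], []
    for (x, y, z), (u, v) in zip(xyz, uv):
        A.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z]); b.append(u)
        A.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z]); b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L

def dlt_project(L, xyz):
    """Map object points to image coordinates with the 11 DLT parameters."""
    x, y, z = np.asarray(xyz, float).T
    den = L[8] * x + L[9] * y + L[10] * z + 1.0
    u = (L[0] * x + L[1] * y + L[2] * z + L[3]) / den
    v = (L[4] * x + L[5] * y + L[6] * z + L[7]) / den
    return np.stack([u, v], axis=1)
```

With exact, noise-free correspondences of non-coplanar points the reprojection reproduces the image points; with real, noisy data the least-squares residual becomes the calibration error.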
I will repeat it here in a sympy framework, using the same symbols.\n\n# object space coordinates\nx, y, z = SYM.symbols(f'x:z', real = True)\nx0, y0, z0 = SYM.symbols(f'x_0, y_0, z_0', real = True)\n\n# image space coordinates\nu, v = SYM.symbols(f'u, v', real = True)\nu0, v0, d = SYM.symbols(f'u_0, v_0, d', real = True)\n\n# reference frame DCM\nr = SYM.symbols('r_{1:4}{1:4}', real = True)\nT_io = SYM.Matrix([r[:3],r[3:6],r[6:]]).T\n\n# colinearity\nc = SYM.symbols(f'c', real = True)\n\n\nWe require two reference frames. One is the object space, the other is the image plane (extended to 3D towards the focal point).\n\n# object space\nobject_space = MECH.ReferenceFrame('O')\n\n# image plane\nimage_plane = MECH.ReferenceFrame('I')\nimage_plane.orient(object_space, 'DCM', T_io)\n\n# static\nimage_plane.set_ang_vel(object_space, 0)\nimage_plane.set_ang_acc(object_space, 0)\n\n# transform between spaces\nobject_to_image = SYM.simplify(image_plane.dcm(object_space)) # frame_to.dcm(frame_from)\nimage_to_object = SYM.simplify(object_space.dcm(image_plane))\n\nSYM.pprint(object_to_image)\n⎡r_{1}{1} r_{1}{2} r_{1}{3}⎤\n⎢ ⎥\n⎢r_{2}{1} r_{2}{2} r_{2}{3}⎥\n⎢ ⎥\n⎣r_{3}{1} r_{3}{2} r_{3}{3}⎦\n\nThere are several points that can be located in either of the two reference frames. Most notably, the \"projection center\" (i.e. focal point) of the image can be located in both frames and connects the two.\n\n### define the origins\n# ... 
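As a brief numeric aside (toy numbers of my own, not the calibration data): the defining property of such a direction cosine matrix is orthonormality, so its inverse is simply its transpose, and a point can be shuttled back and forth between the two frames:

```python
import numpy as np

theta = np.deg2rad(30)                        # arbitrary camera tilt about z
T_io = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                 [-np.sin(theta), np.cos(theta), 0.0],
                 [ 0.0,           0.0,           1.0]])  # object -> image DCM

p_object = np.array([0.2, -0.1, 1.5])         # a point in object-space coordinates
p_image = T_io @ p_object                     # same point, image-frame axes
back = T_io.T @ p_image                       # DCM inverse is its transpose
```

This round-trip property is exactly what sympy's `dcm()` pair of transforms encodes symbolically.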
of the object space\norigin = MECH.Point('Ω')\n\n# of the image plane\norigin_image = MECH.Point('ρ')\n\n### points in the world\n# object point in the world\nobj_point = MECH.Point('ο')\nobj_point.set_pos(origin, x * object_space.x + y * object_space.y + z * object_space.z)\n\n### points in both worlds\n# projection center: where all object point/projections meet\nproj_center = MECH.Point('ν')\nproj_center.set_pos(origin_image, u0 * image_plane.x + v0 * image_plane.y + d * image_plane.z)\nproj_center.set_pos(origin, x0 * object_space.x + y0 * object_space.y + z0 * object_space.z)\n\n### points in the image\n# principal point (center of image plane; perpendicular axis through image plane and projection center)\nprincipal_point = MECH.Point('π')\nprincipal_point.set_pos(origin_image, u0 * image_plane.x + v0 * image_plane.y + 0 * image_plane.z)\n\n# projection of the object point to the image plane\nimg_point = MECH.Point('ι')\nimg_point.set_pos(origin_image, u * image_plane.x + v * image_plane.y + 0 * image_plane.z)\n\n### static: no movement of no point\nfor point in [origin, origin_image, obj_point, proj_center, principal_point, img_point]:\npoint.set_vel(object_space, 0)\npoint.set_acc(object_space, 0)\npoint.set_vel(image_plane, 0)\npoint.set_acc(image_plane, 0)\n\n\nTake any vector which goes from the projection center to a true 3D point. Any such vector intersects the image plane at an image point. The vectors to the true point and to the plane are colinear, i.e. related by scalar multiplication.\n\nThis is the collinearity condition. 
It gives linear equations that relate the two spaces.\n\n### vectors\nA_o = obj_point.pos_from(proj_center).express(object_space)\nB_i = img_point.pos_from(proj_center).express(image_plane)\n\nA_i = A_o.express(image_plane)\n\nequations = { coord: c*A_i.dot(coord) - B_i.dot(coord) \\\nfor coord in [image_plane.x, image_plane.y, image_plane.z] \\\n}\n\nequations\n{I.x: c*(r_{1}{1}*(x - x_0) + r_{1}{2}*(y - y_0) + r_{1}{3}*(z - z_0)) - u + u_0,\nI.y: c*(r_{2}{1}*(x - x_0) + r_{2}{2}*(y - y_0) + r_{2}{3}*(z - z_0)) - v + v_0,\nI.z: c*(r_{3}{1}*(x - x_0) + r_{3}{2}*(y - y_0) + r_{3}{3}*(z - z_0)) + d}\n\nSome values need to be substituted in, (i) to reduce dimensionality, (ii) to correct for an arbitrary scaling factor.\n\nc_substitute = {c: SYM.solve(equations[image_plane.z], c)[0]}\n\nλ_u, λ_v = SYM.symbols(f'λ_u, λ_v', real = True)\nuv_substitutes = { \\\n(u-u0): λ_u * (u-u0) \\\n, (v-v0): λ_v * (v-v0) \\\n}\n\nequations = { coord: equations[coord].subs(c_substitute).subs(uv_substitutes) \\\nfor coord in [image_plane.x, image_plane.y] \\\n}\nequations\n{I.x: -d*(r_{1}{1}*(x - x_0) + r_{1}{2}*(y - y_0) + r_{1}{3}*(z - z_0))/(r_{3}{1}*x - r_{3}{1}*x_0 + r_{3}{2}*y - r_{3}{2}*y_0 + r_{3}{3}*z - r_{3}{3}*z_0) - λ_u*(u - u_0),\nI.y: -d*(r_{2}{1}*(x - x_0) + r_{2}{2}*(y - y_0) + r_{2}{3}*(z - z_0))/(r_{3}{1}*x - r_{3}{1}*x_0 + r_{3}{2}*y - r_{3}{2}*y_0 + r_{3}{3}*z - r_{3}{3}*z_0) - λ_v*(v - v_0)}\n\nFinally, the resulting equations can be solved to get formulas for u and v (image coordinates) as a function of x, y and z (object coordinates).\n\nu_eqn = SYM.Eq( u, SYM.simplify(SYM.solve(equations[image_plane.x], u)[0]).factor(x, y, z) )\nv_eqn = SYM.Eq( v, SYM.simplify(SYM.solve(equations[image_plane.y], v)[0]).factor(x, y, z) )\nsolutions = [u_eqn, v_eqn]\nfor sol in solutions:\nSYM.pprint(sol)\n -(-d⋅r_{1}{1}⋅x₀ - d⋅r_{1}{2}⋅y₀ - d⋅r_{1}{3}⋅z₀ + r_{3}{1}⋅u₀⋅x₀⋅λᵤ + r_{\nu = ──────────────────────────────────────────────────────────────────────────\nλᵤ⋅(r_{3}{\n\n3}{2}⋅u₀⋅y₀⋅λᵤ + 
r_{3}{3}⋅u₀⋅z₀⋅λᵤ + x⋅(d⋅r_{1}{1} - r_{3}{1}⋅u₀⋅λᵤ) + y⋅(d⋅r_\n──────────────────────────────────────────────────────────────────────────────\n1}⋅x - r_{3}{1}⋅x₀ + r_{3}{2}⋅y - r_{3}{2}⋅y₀ + r_{3}{3}⋅z - r_{3}{3}⋅z₀)\n\n{1}{2} - r_{3}{2}⋅u₀⋅λᵤ) + z⋅(d⋅r_{1}{3} - r_{3}{3}⋅u₀⋅λᵤ))\n────────────────────────────────────────────────────────────\n\n-(-d⋅r_{2}{1}⋅x₀ - d⋅r_{2}{2}⋅y₀ - d⋅r_{2}{3}⋅z₀ + r_{3}{1}⋅v₀⋅x₀⋅λᵥ + r_{\nv = ──────────────────────────────────────────────────────────────────────────\nλᵥ⋅(r_{3}{\n\n3}{2}⋅v₀⋅y₀⋅λᵥ + r_{3}{3}⋅v₀⋅z₀⋅λᵥ + x⋅(d⋅r_{2}{1} - r_{3}{1}⋅v₀⋅λᵥ) + y⋅(d⋅r_\n──────────────────────────────────────────────────────────────────────────────\n1}⋅x - r_{3}{1}⋅x₀ + r_{3}{2}⋅y - r_{3}{2}⋅y₀ + r_{3}{3}⋅z - r_{3}{3}⋅z₀)\n\n{2}{2} - r_{3}{2}⋅v₀⋅λᵥ) + z⋅(d⋅r_{2}{3} - r_{3}{3}⋅v₀⋅λᵥ))\n────────────────────────────────────────────────────────────\n\nYou can convince yourself that this result is identical to the common DLT expression.\n\nSympy can perform what is called common subexpression elimination:\n\ncse_replacements, dlt_terms = SYM.cse(solutions, symbols = SYM.utilities.iterables.numbered_symbols('L'))\nSYM.pprint(dlt_terms)\n⎡ -L₃⋅(L₀⋅L₇ + L₁⋅L₇ + L₂⋅L₇ - L₄⋅x₀ - L₅⋅y₀ - L₆⋅z₀ + x⋅(L₄ - L₇⋅r_{3}{1})\n⎢u = ─────────────────────────────────────────────────────────────────────────\n⎣ λᵤ\n\n+ y⋅(L₅ - L₇⋅r_{3}{2}) + z⋅(L₆ - L₇⋅r_{3}{3})) -L₃⋅(L₀⋅L₁₁ + L₁⋅L₁₁ - L\n────────────────────────────────────────────────, v = ────────────────────────\n\n₁₀⋅z₀ + L₁₁⋅L₂ - L₈⋅x₀ - L₉⋅y₀ + x⋅(-L₁₁⋅r_{3}{1} + L₈) + y⋅(-L₁₁⋅r_{3}{2} + L\n──────────────────────────────────────────────────────────────────────────────\nλᵥ\n\n₉) + z⋅(L₁₀ - L₁₁⋅r_{3}{3})) ⎤\n─────────────────────────────⎥\n⎦\n\nHowever, this yields different parameters than the \"guided\" DLT parameters presented by Kwon.\n\nThe significant achievement of such DLT parameters is that one can build a system of linear equations to solve for the $$L_i$$'s.\n\n## Numeric Application\n\nMultiplying by the 
RHS denominator, moving mixed terms to the right side and thereby isolating u, v results in such a system of linear equations. A system of linear equations can be solved by linear algebra. And if thinking of linear algebra raises your neck hair, rest assured that python will do the work for you.\n\nArgus (Jackson et al., 2016) uses numpy.linalg.lstsq for this. The function requires two matrices $$A$$ and $$B$$, for which we get $$L$$ (the DLT parameters) from $$A L = B$$. To solve the systems, $$N$$ points have to be known in both image and object space.\n\n$$A$$ is the coefficient matrix, shape $$(2N \\times 11)$$. The number $$11$$ comes from $$11$$ $$L$$ coefficients in the problem.\n\n$$A = \\begin{array}{c | ccccccccccc} i & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\\\ \\hline 2k & x_k & y_k & z_k & 1 & 0 & 0 & 0 & 0 & -u_k x_k & -u_k y_k & -u_k z_k \\\\ 2k+1 & 0 & 0 & 0 & 0 & x_k & y_k & z_k & 1 & -v_k x_k & -v_k y_k & -v_k z_k \\end{array} \\quad\\quad \\forall k \\in \\{0, N-1\\}$$\n\nAnd $$B$$ is the dependent matrix:\n\n$$B = \\begin{array}{c | c} & \\\\ \\hline 2k & u_k \\\\ 2k+1 & v_k \\end{array} \\quad\\quad \\forall k \\in \\{0, N-1\\}$$\n\nThen, computationally, $$\\mid\\mid B - A L \\mid\\mid$$ is minimized by the least squares method.\n\nSounds fun? 
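For reference, here is the algebra behind those rows (a sketch in my notation; the $$L_i$$ are the eleven DLT parameters). Start from the projection in its standard DLT form and multiply by the denominator:

```latex
u = \frac{L_1 x + L_2 y + L_3 z + L_4}{L_9 x + L_{10} y + L_{11} z + 1}
\;\;\Rightarrow\;\;
x L_1 + y L_2 + z L_3 + L_4 - u x L_9 - u y L_{10} - u z L_{11} = u
```

The $$v$$ equation works the same way with $$L_5 \dots L_8$$, so each known point contributes exactly the two rows of $$A$$ and the two entries of $$B$$ given above, linear in the $$L_i$$.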
Here is the function, adapted from the argus code.\n\ndef SolveDLT(data, cam):\n# solve DLT for a camera\n# from N known points\n# adapted from https://github.com/kilmoretrout/argus_gui/blob/master/argus_gui/tools.py solve_dlt\n# input: a data frame with [x, y, z, u_i, v_i] columns where i is the camera index\n\n# coefficient matrix\nA = NP.zeros((data.shape[0] * 2, 11))\n\n# dependent variable\nB = NP.zeros((data.shape[0] * 2, 1))\n\n# fill the matrices\nfor k, row in data.iterrows():\nA[2 * k, :3] = row[['x', 'y', 'z']].values\nA[2 * k, 3] = 1\nA[2 * k, 8:] = row[['x', 'y', 'z']].values * -row[f'u_{cam}']\nA[2 * k + 1, 4:7] = row[['x', 'y', 'z']].values\nA[2 * k + 1, 7] = 1\nA[2 * k + 1, 8:] = row[['x', 'y', 'z']].values * -row[f'v_{cam}']\n\nB[2 * k] = row[f'u_{cam}']\nB[2 * k + 1] = row[f'v_{cam}']\n\n# solve system of linear equations\nL = NP.linalg.lstsq(A, B, rcond=None)[0]\nreturn L.ravel()\n\nTaking our data from above, the function can yield DLT parameters for the three cameras:\n\nL = {cam: SolveDLT(data, cam = cam) for cam in cams}\nL\n{1: array([-7.03913782e+02, -2.04857500e+02, -1.03511822e+02, 2.65781236e+02,\n1.10530429e+02, -1.96650553e+02, -6.71888993e+02, 2.79123077e+02,\n-1.88877266e-01, 2.75455293e-01, -1.71605937e-01]),\n2: array([-5.55001628e+02, -3.98060092e+02, -5.05956178e+01, 8.68303335e+02,\n6.55700125e+01, -1.17488996e+02, -6.36343639e+02, 3.96662992e+02,\n-2.90184236e-01, 1.79093742e-01, -1.44750418e-01]),\n4: array([-7.29359405e+02, -2.20829579e+02, -9.97698632e+01, 6.56184624e+02,\n8.31303936e+01, -1.49535713e+02, -7.05594977e+02, 2.11502025e+02,\n-2.37017457e-01, 2.95065936e-01, -1.74821150e-01])}\n\nTo confirm the accuracy, we can project the 3D points to the image plane and check whether points match:\n\nfor cam in cams:\n\nfor coord in ['u', 'v']:\ndata[f'{coord}_{cam}_reco'] = 0\n\nfor k, row in data.iterrows():\ndata.loc[k, f'u_{cam}_reco'] = (NP.dot(L[cam][:3].T, row[['x', 'y', 'z']].values) + L[cam][3]) \\\n/ (NP.dot(L[cam][-3:].T, 
row[['x', 'y', 'z']].values) + 1.)\ndata.loc[k, f'v_{cam}_reco'] = (NP.dot(L[cam][4:7].T, row[['x', 'y', 'z']].values) + L[cam][7]) \\\n/ (NP.dot(L[cam][-3:].T, row[['x', 'y', 'z']].values) + 1.)\n\ndata.loc[:, ['x', 'y', 'z', 'u_1', 'v_1', 'u_1_reco', 'v_1_reco']]\nx y z u_1 v_1 u_1_reco v_1_reco\npt\n0 -0.235521 0.308646 0.101321 321.426491 111.782226 321.775150 111.785891\n1 -0.000698 0.311595 0.094717 180.229182 143.665489 180.082729 144.087333\n2 -0.004713 -0.328370 0.089341 365.455760 315.986937 365.453553 316.329381\n3 -0.236395 -0.329500 0.092263 522.594701 272.716794 522.499640 272.692276\n4 0.005713 0.315304 -0.349343 203.762067 394.935970 203.651747 394.920945\n5 0.001885 -0.331354 -0.345600 379.879141 596.104179 380.403750 595.960378\n6 -0.229327 -0.330225 -0.348522 524.872077 546.001908 524.553708 546.239586\n7 0.000000 0.000000 0.000000 266.010343 279.548921 265.781236 279.123077\n8 -0.132960 0.025250 0.097379 338.886374 191.490384 338.916072 191.099186\nfig = MPP.figure(dpi = 300)\n\nfor nr, cam in enumerate(cams):\nax = fig.add_subplot(3, 1, nr+1, aspect = 'equal')\n\nax.imshow(NP.mean(images[cam][:, :], axis = 2), cmap = 'gray', origin = 'upper')\n\nax.scatter( data[f'u_{cam}'].values \\\n, data[f'v_{cam}'].values \\\n, s = 40 \\\n, marker = 'x' \\\n, color = (0.2, 0.6, 0.2) \\\n, alpha = 0.8 \\\n, label = 'measured' \\\n)\nax.scatter( data[f'u_{cam}_reco'].values \\\n, data[f'v_{cam}_reco'].values \\\n, s = 40 \\\n, marker = '+' \\\n, color = (0.6, 0.2, 0.2) \\\n, alpha = 0.8 \\\n, label = 'reconstructed' \\\n)\n\nax.legend(loc = 'best', fontsize = 4)\n\nax.set_xticks([])\nax.set_yticks([])\nax.set_ylabel(cam)\n\nfig.tight_layout()\nMPP.show();",
null,
"This check is trivial, but it looks like a match.\n\nWe'll store the DLT parameters as a matrix.\n\n# dlt: (number of cameras)x11 array of DLT coefficients\ndlt = NP.stack([L[cam] for cam in cams], axis = 0)\ndlt\narray([[-7.03913782e+02, -2.04857500e+02, -1.03511822e+02,\n2.65781236e+02, 1.10530429e+02, -1.96650553e+02,\n-6.71888993e+02, 2.79123077e+02, -1.88877266e-01,\n2.75455293e-01, -1.71605937e-01],\n[-5.55001628e+02, -3.98060092e+02, -5.05956178e+01,\n8.68303335e+02, 6.55700125e+01, -1.17488996e+02,\n-6.36343639e+02, 3.96662992e+02, -2.90184236e-01,\n1.79093742e-01, -1.44750418e-01],\n[-7.29359405e+02, -2.20829579e+02, -9.97698632e+01,\n6.56184624e+02, 8.31303936e+01, -1.49535713e+02,\n-7.05594977e+02, 2.11502025e+02, -2.37017457e-01,\n2.95065936e-01, -1.74821150e-01]])\n\n## 3D Point Reconstruction\n\nWhen filming a calibration object, we know the true positions of points. Afterwards, when filming a scene, the goal is to take the observed image points and get their relative position in 3D, i.e. 
in \"world coordinates\".\n\nThe argus code (Jackson et al., 2016) also contains a function for that.\n\n\ndef UVtoXYZ(pts, dlt):\n# retrieve Object Points (3D) from multiple perspective Image Points\n# adapted from https://github.com/kilmoretrout/argus_gui/blob/master/argus_gui/tools.py uv_to_xyz\n# pts: (N x 2K) array of N 2D points for K cameras\n# can be a single point over time, or multiple points in one scene\n# dlt: (11 x K) array of DLT parameters\n# Adjusted because data points herein are undistorted prior to calculation\n\n# initialiye empty data array\nxyzs = NP.empty((len(pts), 3))\nxyzs[:] = NP.nan\n\n# for each point\nfor i in range(len(pts)):\nuvs = list()\n# for each uv pair\nfor j in range(len(pts[i]) // 2):\n# do we have a NaN pair?\nif not True in NP.isnan(pts[i, 2 * j:2 * (j + 1)]):\n# if not append the undistorted point and its camera number to the list\nuvs.append([pts[i, 2 * j:2 * (j + 1)], j])\n\nif len(uvs) > 1:\n# if we have at least 2 uv coordinates, setup the linear system\nA = NP.zeros((2 * len(uvs), 3))\n\n# assemble coefficient matrix of the linear system\nfor k in range(len(uvs)):\nA[k] = NP.asarray([uvs[k] * dlt[uvs[k]] - dlt[uvs[k]],\nuvs[k] * dlt[uvs[k]] - dlt[uvs[k]],\nuvs[k] * dlt[uvs[k]] - dlt[uvs[k]]])\nA[k + 1] = NP.asarray([uvs[k] * dlt[uvs[k]] - dlt[uvs[k]],\nuvs[k] * dlt[uvs[k]] - dlt[uvs[k]],\nuvs[k] * dlt[uvs[k]] - dlt[uvs[k]]])\n\n# the dependent variables\nB = NP.zeros((2 * len(uvs), 1))\nfor k in range(len(uvs)):\nB[k] = dlt[uvs[k]] - uvs[k]\nB[k + 1] = dlt[uvs[k]] - uvs[k]\n\n# solve it\nxyz = NP.linalg.lstsq(A, B, rcond=None)\n# place in the proper frame\nxyzs[i] = xyz[:, 0]\n\nreturn xyzs\n\nHere is how data can be assembled for this modified version:\n\nreco = UVtoXYZ(data.loc[:, [f'{coord}_{cam}' for cam in cams for coord in ['u', 'v'] ]].values, dlt)\nprint (reco - data.loc[:, ['x', 'y', 'z']])\n x y z\npt\n0 0.000953 -0.001926 0.001119\n1 0.000111 -0.000483 -0.000105\n2 0.000696 -0.002089 0.000875\n3 -0.000479 
0.000710 -0.000603\n4 -0.000237 0.000237 -0.000447\n5 -0.000047 0.001261 -0.000382\n6 0.000397 -0.001256 0.000676\n7 0.000072 -0.000616 0.000609\n8 -0.001467 0.004175 -0.001740\n\nAnd because those are just the calibration points, we can check whether the positions match.\n\npts = data.loc[:, ['x', 'y', 'z']].values\nfig = MPP.figure(dpi = 300)\nax = fig.add_subplot(1,1,1,projection='3d', aspect = 'auto')\n\nedge_kwargs = dict( \\\nlw = 0.5 \\\n, alpha = 0.8 \\\n, zorder = 0 \\\n, label = None \\\n)\n\nfor edge in [[0,1], [1,2], [2,3], [3,0], [4,5], [5,6], [4,1], [5,2], [6,3]]:\nax.plot( reco[edge, 0] \\\n, reco[edge, 1] \\\n, reco[edge, 2] \\\n, color = 'k' \\\n, ls = '-' \\\n, **edge_kwargs\n)\n\nax.plot( pts[edge, 0] \\\n, pts[edge, 1] \\\n, pts[edge, 2] \\\n, color = '0.9' \\\n, ls = ':' \\\n, **edge_kwargs\n)\n\nax.scatter( pts[:, 0] \\\n, pts[:, 1] \\\n, pts[:, 2] \\\n, s = 16 \\\n, marker = 'o' \\\n, color = (0.2, 0.6, 0.2) \\\n, alpha = 0.8 \\\n, zorder = 10 \\\n, label = 'true points' \\\n)\n\nax.scatter( reco[:, 0] \\\n, reco[:, 1] \\\n, reco[:, 2] \\\n, s = 32 \\\n, marker = '+' \\\n, linewidth = 0.5 \\\n, color = (0.6, 0.2, 0.6) \\\n, alpha = 0.8 \\\n, zorder = 20 \\\n, label = 'reconstructed points' \\\n)\n\nax.legend(loc = 1, fontsize = 4)\n\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('z')\n\nax.view_init(30, -20)\n\nAxisEqual3D(ax)\n\nMPP.show();",
null,
"... and they do, it is a box!\n\nIt remains to be seen whether a space far from the calibration object can be well reconstructed.\n\n## Applied Example: Dog Video\n\nWhen testing our setup, two kind shepherds with their dogs passed by. (Our university keeps sheep on the green areas of our campus.)\n\nThey allowed us to follow them to the meadow and get a few shots of the animals.\n\nHere is one, in real time, from multiple perspectives:",
null,
"I tracked one stride cycle of the accelerating dog.\n\nBecause I have three cameras, I can reconstruct 3D data with two of them, and project that into the third camera (using a modified dlt_inverse from argus).\n\nPurple are the tracked points, red are the reprojections (slowed down 10x):",
null,
"I will have to find out where the offset comes from; one reason might be that the calibration videos were not in sync.\n\nApart from that, the data looks reasonable. But...\n\n...let's look at the 3D data from all cameras:",
null,
"The lateral perspective seems okay. However, the depth is not well recovered. This is only in part due to the lack of contrast on some landmarks (e.g. the knee), as can be seen from the well-defined snout point. The other problem is the way I arranged my cameras (flat, next to each other).\n\nI'll have to improve on that. One way is more accurate calibration. The other is optimization of the camera distances and angles. Notably, the problem is general: if the view directions of cameras pointing at the same scene are at a low relative angle, i.e. if the focal planes are close to parallel, then depth is recovered less accurately than the other spatial dimensions. This inaccuracy is not captured by calculating the reprojection error.\n\nStill, getting accurate 3D data this way would be appealing: the units are meters in 3D space, so accurate angles and speeds can be determined. There is certainly room for improvement.\n\n# Probabilistic DLT\n\nDoes the depth inaccuracy come from inaccurate calibration, or from inaccurate tracking?\n\nTo tackle this question, one can use probabilistic modeling of the DLT parameters.\n\nLuckily, the formula above can easily be brought to a probabilistic context. 
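In model notation (my summary, not from the original post), the PyMC3 sampler below fits the following: normal priors on the DLT parameters centered on the least-squares estimate $$\hat{L}$$, weakly informative priors on the residual scale and the degrees of freedom, and a StudentT observation model for the dependent variable:

```latex
% priors (as coded in the model below)
\lambda \sim \mathcal{N}\!\left(\hat{L},\; 2\,\mathrm{std}(\hat{L})\right), \qquad
\varepsilon \sim \mathrm{HalfCauchy}(10), \qquad
\eta \sim \mathrm{HalfNormal}(10)

% observation model: heavy tails make the fit robust to outliers
B \sim \mathrm{StudentT}\!\left(\nu = \eta,\; \mu = A\,\lambda,\; \sigma = \varepsilon\right)
```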
Let's bring in some additional libraries.\n\nimport pymc3 as PM\n\nimport theano as TH\nimport theano.tensor as TT\n\nPrepare the matrices as before:\n\ndef PrepareDLTData(cam_data):\n# prepare linear equations in matrix form from data of one camera\n# adapted from https://github.com/kilmoretrout/argus_gui/blob/master/argus_gui/tools.py solve_dlt\n# input: a data frame with [x, y, z, u, v] columns\n\n# coefficient matrix\nA = NP.zeros((cam_data.shape[0] * 2, 11))\n\n# dependent variable\nB = NP.zeros((cam_data.shape[0] * 2, 1))\n\n# fill the matrices\nfor k, row in cam_data.iterrows():\nA[2 * k, :3] = row[['x', 'y', 'z']].values\nA[2 * k, 3] = 1\nA[2 * k, 8:] = row[['x', 'y', 'z']].values * -row['u']\nA[2 * k + 1, 4:7] = row[['x', 'y', 'z']].values\nA[2 * k + 1, 7] = 1\nA[2 * k + 1, 8:] = row[['x', 'y', 'z']].values * -row['v']\n\nB[2 * k] = row['u']\nB[2 * k + 1] = row['v']\n\nreturn A, B\n\nHere comes the actual data:\n\ncams = [1, 2, 4]\nn_cams = len(cams)\nA = {}\nB = {}\nΛ = {}\n\nfor cam in cams:\nA[cam], B[cam] = PrepareDLTData(data.copy() \\\n.loc[:, ['x', 'y', 'z', f'u_{cam}', f'v_{cam}']] \\\n.rename(columns = {f'u_{cam}': 'u', f'v_{cam}': 'v'} \\\n, inplace = False) \\\n)\n\nΛ[cam] = NP.linalg.lstsq(A[cam], B[cam], rcond=None)[0] # to get accurate priors\n\nprint (A[1].shape, Λ[1].shape)\nprint (NP.sum(NP.abs((A[1] @ Λ[1]).ravel() - B[1].T.ravel())))\n(A[1] @ Λ[1]).T\n(18, 11) (11, 1)\n3.855946132496541\n\narray([[321.8142397 , 111.78630141, 180.07251953, 144.11673908,\n365.45378411, 316.2934613 , 522.50552844, 272.69379468,\n203.63567058, 394.91875521, 380.38679315, 595.96502565,\n524.54983622, 546.24247655, 265.78123573, 279.12307693,\n338.91652818, 191.09317816]])\n\nThe model is equivalent to what was fed to lstsq above. However, we have the flexibility to use robust regression by making the posterior distribution StudentT. 
Hence, the result will be more robust to outliers, without the need for a RANSAC (Random sample consensus).\n\ndef ProbabilisticDLTModel(A, B, L):\n\nwith PM.Model() as model:\n\n# the DLT parameters L\nλ = PM.Normal('λ', mu = L.reshape([-1,1]), sd = 2*NP.std(L), shape = (11,1))\n\n# matrix multiplication\n# A must be a theano tensor\nestimator = TH.shared(A) @ λ\n\n# model residual\nresidual = PM.HalfCauchy('ε', 10.)\n\n# Students T degrees of freedom\ndof = PM.HalfNormal('η', 10.)\n\n# posterior distribution/\"likelihood\"\nposterior = PM.StudentT( 'post' \\\n, nu = dof \\\n, mu = estimator \\\n, sd = residual \\\n, observed = B \\\n)\n# sampling:\ntrace = PM.sample(draws = 2**10, chains = 2**3, cores = 2**2, target_accept = 0.906)\n\nreturn model, trace\n\nOnce the model is prepared, the distributions can be fit to the data.\n\nmodels = {}\ntraces = {}\nfor cam in cams:\nprint ('#'*4, f' Creating DLT model for cam {cam} ', '#'*4)\nmodels[cam], traces[cam] = ProbabilisticDLTModel(A[cam], B[cam], Λ[cam])\n\nprint()\n#### Creating DLT model for cam 1 ####\n\nAuto-assigning NUTS sampler...\nMultiprocess sampling (8 chains in 4 jobs)\nNUTS: [η, ε, λ]\nSampling 8 chains, 0 divergences: 100%|██████████| 12192/12192 [00:26<00:00, 468.71draws/s]\nThe number of effective samples is smaller than 10% for some parameters.\n\n#### Creating DLT model for cam 2 ####\n\nAuto-assigning NUTS sampler...\nMultiprocess sampling (8 chains in 4 jobs)\nNUTS: [η, ε, λ]\nSampling 8 chains, 0 divergences: 100%|██████████| 12192/12192 [00:37<00:00, 327.34draws/s]\nThe acceptance probability does not match the target. It is 0.8449641766015696, but should be close to 0.906. Try to increase the number of tuning steps.\nThe acceptance probability does not match the target. It is 0.7517996349827041, but should be close to 0.906. 
Try to increase the number of tuning steps.\nThe number of effective samples is smaller than 25% for some parameters.\n\n#### Creating DLT model for cam 4 ####\n\nAuto-assigning NUTS sampler...\nMultiprocess sampling (8 chains in 4 jobs)\nNUTS: [η, ε, λ]\nSampling 8 chains, 0 divergences: 100%|██████████| 12192/12192 [00:37<00:00, 326.80draws/s]\nThe number of effective samples is smaller than 25% for some parameters.\n\nAnd here are the results, example model 1:\n\nPM.traceplot(traces[1], combined = True, var_names = ['ε', 'η']);",
null,
"Hmmm... that Student does not look very normal.\n\nWe can get all the $$L_i$$s for each camera ($$\\lambda$$ here):\n\nsummary = PM.summary(traces)\nsummary\nmean sd hpd_3% hpd_97% mcse_mean mcse_sd ess_mean ess_sd ess_bulk ess_tail r_hat\nλ[0,0] -703.478 5.148 -713.327 -693.761 0.109 0.077 2241.0 2241.0 2191.0 1496.0 1.00\nλ[1,0] -204.950 1.743 -208.108 -201.676 0.030 0.021 3456.0 3449.0 3603.0 3559.0 1.00\nλ[2,0] -103.390 2.881 -108.717 -97.805 0.051 0.036 3138.0 3138.0 3135.0 3348.0 1.00\nλ[3,0] 265.808 0.282 265.249 266.306 0.004 0.003 4498.0 4498.0 4432.0 4508.0 1.00\nλ[4,0] 110.696 3.743 103.891 118.065 0.073 0.052 2599.0 2599.0 2548.0 2141.0 1.00\nλ[5,0] -196.683 1.656 -199.680 -193.472 0.028 0.020 3612.0 3602.0 3793.0 3767.0 1.00\nλ[6,0] -671.693 3.635 -678.613 -665.013 0.067 0.047 2930.0 2930.0 2887.0 3407.0 1.00\nλ[7,0] 279.143 0.299 278.571 279.699 0.004 0.003 5299.0 5298.0 5166.0 4335.0 1.00\nλ[8,0] -0.188 0.011 -0.206 -0.165 0.000 0.000 2267.0 2267.0 2222.0 1241.0 1.00\nλ[9,0] 0.275 0.005 0.265 0.285 0.000 0.000 2812.0 2812.0 2913.0 3178.0 1.00\nλ[10,0] -0.171 0.008 -0.186 -0.157 0.000 0.000 2994.0 2994.0 3037.0 3299.0 1.00\nε 0.473 0.183 0.200 0.852 0.006 0.004 1080.0 1080.0 778.0 358.0 1.01\nη 9.496 5.839 0.842 19.973 0.100 0.071 3411.0 3411.0 1344.0 364.0 1.01\n\nResults match what we found with least squares:\n\nprint ('\\n'.join(map(str, list(zip(map(lambda x: round(x, 3), Λ[:]), summary['mean'].values[:11])))))\n(-703.914, -703.478)\n(-204.858, -204.95)\n(-103.512, -103.39)\n(265.781, 265.808)\n(110.53, 110.696)\n(-196.651, -196.683)\n(-671.889, -671.693)\n(279.123, 279.143)\n(-0.189, -0.188)\n(0.275, 0.275)\n(-0.172, -0.171)\n\nFinally, and most importantly, instead of using only one fixed value for each $$L_i$$, we can sample the posterior distribution to get some values and thereby find out how our point calculations vary.\n\nn_samples = 1000\npost_samples = {cam: NP.squeeze(PM.sample_posterior_predictive( \\\ntraces[cam] \\\n, var_names = 
['λ'] \\\n, samples = n_samples \\\n, model = models[cam] \\\n, progressbar = True \\\n)['λ']) \\\nfor cam in cams }\npost_samples[1].shape\n/usr/lib/python3.8/site-packages/pymc3/sampling.py:1246: UserWarning: samples parameter is smaller than nchains times ndraws, some draws and/or chains may not be represented in the returned posterior predictive sample\nwarnings.warn(\n100%|██████████| 1000/1000 [00:00<00:00, 29193.81it/s]\n100%|██████████| 1000/1000 [00:00<00:00, 27780.16it/s]\n100%|██████████| 1000/1000 [00:00<00:00, 31130.95it/s]\n\n(1000, 11)\n\nThere we have many many samples for $$L$$-values of each camera. Let's take a single measurement of $$(u_1, v_1, u_2, v_2, u_4, v_4)$$ and see how the variation in $$L$$ affects the $$(x, y, z)$$ reconstruction.\n\nWe'll take a regular grid of points in object space.\n\ngrid_pts = [NP.linspace(-1.0, 1.0, 5, endpoint = True).ravel()]*3\nxyz = NP.stack(list(map(NP.ravel, NP.meshgrid(*grid_pts))), axis = 1)\nxyz.shape\n(125, 3)\n\nWe'll need another function from the argus_gui code:\n\ndef DLTInverse(L, xyz):\n\nuv = NP.zeros((len(xyz), 2))\nfor k in range(uv.shape[0]):\nu = (NP.dot(L[:3].T, xyz[k]) + L[3]) / (NP.dot(L[-3:].T, xyz[k]) + 1.)\nv = (NP.dot(L[4:7].T, xyz[k]) + L[7]) / (NP.dot(L[-3:].T, xyz[k]) + 1.)\nuv[k] = [u, v]\nreturn uv\n\nuv = NP.concatenate([DLTInverse(L[cam], xyz) for cam in cams], axis = 1)\nuv.shape\n(125, 6)\n\nAnd then we can convert the uvs back to xyzs with variable Ls, thereby measuring the inaccuracy that results from our DLT parameters.\n\nreco = NP.stack( \\\n[ \\\nUVtoXYZ(uv, NP.stack([post_samples[cam][i, :] for cam in cams], axis = 0)) \\\nfor i in range(n_samples) \\\n] \\\n, axis = 2) \\\n\nreco_centered = NP.concatenate([reco[:, :, r] - xyz for r in range(reco.shape[2])], axis = 0)\nreco_stacked = NP.concatenate([reco[:, :, r] for r in range(reco.shape[2])], axis = 0)\n\nspan = NP.array([NP.diff(NP.percentile(reco[:, i], [5, 95])) for i in range(reco.shape[1])]).ravel()\nprint ('space spanned by the 
test points:', span)\nprint (' standard deviation about each point:', reco_centered.std(axis = 0))\nspace spanned by the test points: [2.0214302 2.0318401 2.02179291]\nstandard deviation about each point: [0.01706024 0.03142076 0.01727276]\n\nThe inaccuracy from calibration in a $$2\ m^3$$ volume is on the order of $$0.05\ m$$. It drastically increases when moving away from the calibration object ($$0.5\ m$$, i.e. a factor of $$10$$, for a $$5\ m^3$$ cube). Hence, at least part of the inaccuracy problem in our case stems from the fact that the calibration object did not fill the calibration space. Most notably, it was at a different depth than the animal recording.\n\nI conclude that the target area (especially its distance) has to be taken into account, and calibration should happen at approximately the same position from the cameras as the recording, filling the field of view as well as possible.\n\nThe utility of probabilistic modeling and posterior sampling cannot be emphasized enough. It is well known in statistics, but it is equally useful for quantifying the uncertainty in experimental measurements. Do not hesitate to try it yourself!\n\nAs a start, you might want to check this recent youtube tutorial. Or, of course, follow the book and lecture by Richard McElreath which I've repeatedly linked on my blog.\n\n# OpenCV Attempts: Epipolar Geometry\n\nThe DLT reconstruction exemplified above just scratches the surface of stereo vision theory, or epipolar geometry. 
There are a lot of resources that I found relevant for diving deeper:\n\n• Lecture by Cyrill Stachniss https://www.youtube.com/watch?v=cLeF-KNHgwU\n\n• OpenCV documentation https://docs.opencv.org/master/d9/d0c/group__calib3d.html\n\n• Argus documentation http://argus.web.unc.edu/formal-methods-and-definitions\n\nThis might help you to build a basic set of vocabulary, containing for example terms as\n\n• camera matrix ← CV.calibrateCamera\n• distortion coefficients ← CV.calibrateCamera\n• fundamental matrix ← CV.findFundamentalMat\n• essential matrix ← CV.findEssentialMat\n• epilines = epipolar lines ← CV.computeCorrespondEpilines\n\nThe openCV functions next to it are some of many functions related to that topic.\n\nI experienced some trouble with openCV because my workflow deviates from the supposed calibration. The trouble arose because (i) my images were undistorted prior to processing, and (ii) I used a box instead of multiple views of a chessboard.\n\nBut I also noted that the openCV error messages are not very instructive. For example, when trying to calibrate from a single scene, one might encounter the error src\\calibration.cpp:3110: error: (-210) objectPoints should contain vector of vectors of points of type Point3f in function cv::collectCalibrationData. In my case, this could be solved by adding a fake dimension to the object_points by wrapping it in a list [object_points]. Another example are data types, which should always be NP.float32 to avoid openCV chaos.\n\nNevertheless, openCV is probably the most comprehensive image processing library out there, and when you understand the logic behind it and are willing to scan through the raw code, it will doubtlessly be useful.\n\n## Epiline Calculation\n\nHere is one example of how to use openCV to get epilines in corresponding images of different perspective. One can use the fundamental matrix to relate the images, without even knowing the absolute positions of the calibration object. 
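As a quick numerical illustration of what the fundamental matrix encodes (a synthetic sketch of my own, not part of the pipeline above): corresponding image points must satisfy the epipolar constraint $$x_2^\top F x_1 = 0$$. With identity intrinsics, the fundamental matrix reduces to the essential matrix $$[t]_\times R$$, which lets us construct a valid $$F$$ directly from an assumed relative camera pose:

```python
import numpy as NP

def skew(t):
    # cross-product matrix [t]_x, such that skew(t) @ v == NP.cross(t, v)
    return NP.array([[0., -t[2], t[1]],
                     [t[2], 0., -t[0]],
                     [-t[1], t[0], 0.]])

# assumed pose of camera 2 relative to camera 1:
# small rotation about the y axis, translation along x
theta = 0.1
R = NP.array([[NP.cos(theta), 0., NP.sin(theta)],
              [0., 1., 0.],
              [-NP.sin(theta), 0., NP.cos(theta)]])
t = NP.array([1., 0., 0.])

# with identity intrinsics: F equals the essential matrix [t]_x R
F = skew(t) @ R

# project one 3D point into both cameras (normalized coordinates)
X = NP.array([0.3, -0.2, 4.0])
x1 = NP.append(X[:2] / X[2], 1.)        # camera 1: P1 = [I | 0]
X2 = R @ X + t
x2 = NP.append(X2[:2] / X2[2], 1.)      # camera 2: P2 = [R | t]

# epipolar constraint: zero up to floating point noise
residual = float(x2 @ F @ x1)
print(residual)
```

The same check applied to the fundamental matrices estimated below would quantify how well the digitized calibration points actually satisfy the epipolar geometry.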
When displayed during tracking, epilines can result in more accurate results on low-contrast landmarks.\n\nThe steps to get epilines are as follows:\n\n• digitize points on calibration images of both cameras\n• calculate the fundamental matrix with CV.findFundamentalMat\n• compute \"epilines\" for arbitrary points of interest with CV.computeCorrespondEpilines\n# get two arrays of calibration point coordinates\npoints = {cam: data.loc[:, [f'{coord}_{cam}' for coord in ['u', 'v'] ]].values \\\nfor cam in cams }\n\n# calculate fundamental matrix\nfundamental_matrix = {(cam1, cam2): CV.findFundamentalMat(points[cam1], points[cam2], CV.FM_LMEDS) \\\nfor cam1 in cams for cam2 in cams \\\n}\n\nThe epilines can be displayed on the images:\n\ncolors =[tuple(NP.random.randint(150,255,3).tolist()) for _ in range(points[1].shape[0])]\n\ncross_images = { cam: images[cam].copy()//2 for cam in cams } # darkened images\n\n# loop cameras\nfor cam in cams:\npts = points[cam]\nimg = cross_images[cam]\n\nif len(img.shape) == 3:\nr,c,ch = img.shape\nimg = img // 2\nelse:\nr,c = img.shape\nimg = CV.cvtColor(img,CV.COLOR_GRAY2BGR) // 2\n\n# loop the other cameras\nfor crosscam in cams:\nif cam == crosscam:\ncontinue\n\n# findFundamentalMat returns (F, mask); take the matrix\nfunmat = fundamental_matrix[(cam, crosscam)][0]\n\n# add a line for each other image\nlines = CV.computeCorrespondEpilines(points[crosscam].reshape(-1,1,2), 2, funmat).reshape(-1,3)\n#print (lines)\n\nfor count, r in enumerate(lines):\nx0,y0 = [0, -r[2]/r[1] ]\nx1,y1 = [c, -(r[2]+r[0]*c)/r[1] ]\n\n# superimpose line on the image\nimg = CV.line(img, (int(x0),int(y0)), (int(x1),int(y1)), colors[count],1)\n\n# add a point for the current image\n\nfor count, pt1 in enumerate(pts):\nimg = CV.circle(img,tuple(NP.array(pt1, dtype = int)),5,colors[count],-1)\n\ncross_images[cam] = img\n\nHere is the result:\n\n# drawing its lines on left image\nfig = MPP.figure(dpi = 300)\n\nfor nr, cam in enumerate(cams):\nax = fig.add_subplot(3, 1, nr+1)\nax.imshow(cross_images[cam], cmap = 
'gray')\n\nax.get_xaxis().set_visible(False)\nax.set_yticks([])\nax.set_ylabel(cam)\n\nMPP.tight_layout()\nMPP.show()",
null,
"Looking closely, one can see that some lines are off - again the calibration settings might not be ideal.\n\n## OpenCV Camera Calibration\n\nAs mentioned above, openCV is versatile, but also bloated, which is the typical fate of complex code.\n\nI ultimately failed to establish my desired workflow in that toolbox due to trouble with CV.calibrateCamera.\n\nI used it as follows:\n\nGetCMat = lambda foc, cx, cy: NP.array([[foc, 0, cx], [0, foc, cy], [0, 0, 1]], dtype = NP.float32)\nGetDistorts = lambda k_vec: NP.array(k_vec, dtype = NP.float32)\n\nCV.calibrateCamera(object_points \\\n, image_points \\\n, (1280, 720) # resolution \\\n, cameraMatrix = GetCMat(*start_values[:3]) \\\n, distCoeffs = GetDistorts(start_values[3:]) \\\n, flags = CV.CALIB_USE_INTRINSIC_GUESS \\\n)\n\nThe presence of start_values therein indicates that I used a manual layer of optimization (scipy.optimize) to get the correct camera matrix and distortion parameters. This is highly confusing to me: in my opinion, these parameters should be deterministic. However, in openCV, they depend on the start values I give, and how they depend on them could not even be reproducibly inferred with the Nelder-Mead algorithm.\n\nAt that point, I researched more and turned to DLT as documented by Argus and Kwon.\n\n## Digression: Photogrammetry\n\nFundamental matrix extraction and epiline generation worked with a fundamental matrix that was calculated from known, manually digitized points.\n\nIn contrast, openCV also offers functions that find keypoints automatically, turn them into descriptors, and match them across images. 
Some code to give you a direction:\n\n# Initiate ORB detector\norb = CV.ORB_create(32)\n\n# find the keypoints and descriptors with ORB\nkp1, des1 = orb.detectAndCompute(img1,None)\nkp2, des2 = orb.detectAndCompute(img2,None)\n\n# brute force matching\nbf = CV.BFMatcher(CV.NORM_HAMMING, crossCheck = True)\nmatches = bf.match(des1, des2)\n\n# Alternative: FLANN/K nearest neighbors (knn)\nindex_params = dict(algorithm = 1, trees = 5)\nsearch_params = dict(checks=50) # or pass empty dictionary\nflann = CV.FlannBasedMatcher(index_params, search_params)\n\n# match:\nmatches = flann.knnMatch(NP.float32(des1),NP.float32(des2),k=2)\n\n\nThis is the heart of Photogrammetry, so feel free to create your own photogrammetry workflow with it.\n\n# Summary\n\nAbove, I demonstrated how to calibrate an array of cameras in fixed relative position to get 3D data. Accuracy was limited with the simple calibration object I used. But I demonstrated how probabilistic programming and posterior sampling can give you a feel for the accuracy of your personal calibration procedure. Besides that, temporal sync and camera arrangement left room for improvement.\n\nI also pointed out some directions for interested readers to instead use the much more potent openCV library, with all the pitfalls it contains.\n\nI like the Argus slogan, 3D for the people, and hope that I could contribute my part with this little tutorial. It turns out that it is not that trivial to capture our 3D world in 2D images. But thanks to web documentation and great open source projects, there is hope!\n\n# References\n\n• Jackson, Brandon E and Evangelista, Dennis J and Ray, Dylan D and Hedrick, Tyson L (2016). 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software. Biology Open 5: 1334-1342; https://doi.org/10.1242/bio.018713. http://argus.web.unc.edu, accessed 2020/08/01\n\n• Kwon, Young-Hoo (2000). Direct Linear Transformation (DLT). Kwon3D Theoretical Foundation. 
http://kwon3d.com/theories.html, accessed 2020/08/01"
http://jollymaths.com/blog/magic-squares-what-are-they/
"# Magic Squares – What are they?\n\nMagic squares are a set of numbers filled in a 3 x 3, 4 x 4,… squares and after filling it, all the vertical, Horizontal and both diagonal totals are equal.\n\nProperties of the Magic Square:\n\n1. Magic squares are made in 3 x 3, 4 x 4, 5 x 5….etc.\n2. Generally a 3 x 3 magic square is filled with numbers 1 to 9 and 4 x 4 magic square is filled with 1 to 16,…… In general a “n x n” magic square will be filled with 1 to n square numbers.\n3. In general in a “n x n” magic square will have “n” rows and “n” columns and n square Small squares.\n4. After filling all squares ALL VERTICAL, ALL HORIZONTAL AND BOTH DIAGONAL THE TOTALS ARE EQUAL.\n5. A magic square done from 1 to 9, 1 to 16,… are called as Base Magic square.\n6. The total of a row/column of a Base Magic Square will be “First number plus the Last number added together, divided by two, and then multiplied by the Number of rows”. i.e. {( 1 + n square ) / 2} ……. {( 1 + n square ) divided 2}\n7. Generally natural numbers 1,2,3,… are used for forming the magic squares and we can use fractions, zero, negative numbers and squares, cubes etc etc…\n8. We can make 3 x 3, 4 x 4, 5 x 5… upto infinite. Magic squares for a particular number, particular size and for a given space interval is at our choice.\n9. Also we can make magic squares for a given number, given year or given date of birth etc."
https://answers.everydaycalculation.com/add-fractions/3-10-plus-7-14
"# Answers\n\nSolutions by everydaycalculation.com\n\n## Add 3/10 and 7/14\n\n3/10 + 7/14 is 4/5.\n\n#### Steps for adding fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 10 and 14 is 70\n2. For the 1st fraction, since 10 × 7 = 70,\n3/10 = 3 × 7/10 × 7 = 21/70\n3. Likewise, for the 2nd fraction, since 14 × 5 = 70,\n7/14 = 7 × 5/14 × 5 = 35/70\n4. Add the two fractions:\n21/70 + 35/70 = 21 + 35/70 = 56/70\n5. After reducing the fraction, the answer is 4/5\n\n#### Add Fractions Calculator\n\n+\n\nUse fraction calculator with our all-in-one calculator app: Download for Android, Download for iOS\n\n© everydaycalculation.com"
https://stackoverflow.com/questions/11134898/pattern-matching-on-abstract-forms
"# Pattern matching on abstract forms\n\nDisclaimer: I kept this because some things may be useful to others, however, it does not solve what I had initially tried to do.\n\nRight now, I'm trying to solve the following:\n\nGiven something like {a, B, {c, D}} I want to scan through Erlang forms given to parse_transform/2 and find each use of the send operator (!). Then I want to check the message being sent and determine whether it would fit the pattern {a, B, {c, D}}.\n\nTherefore, consider finding the following form:\n\n``````{op,17,'!',\n{var,17,'Pid'},\n{tuple,17,[{atom,17,a},{integer,17,5},{var,17,'SomeVar'}]}}]}]}\n``````\n\nSince the message being sent is:\n\n``````{tuple,17,[{atom,17,a},{integer,17,5},{var,17,'SomeVar'}]}\n``````\n\nwhich is an encoding of {a, 5, SomeVar}, this would match the original pattern of {a, B, {c, D}}.\n\nI'm not exactly sure how I'm going to go about this but do you know of any API functions which could help?\n\nTurning the given {a, B, {c, D}} into a form is possible by first substituting the variables with something, e.g. strings (and taking a note of this), else they'll be unbound, and then using:\n\n``````> erl_syntax:revert(erl_syntax:abstract({a, \"B\", {c, \"D\"}})).\n{tuple,0,\n[{atom,0,a},\n{string,0,\"B\"},\n{tuple,0,[{atom,0,c},{string,0,\"D\"}]}]}\n``````\n\nI was thinking that after getting them in the same format like this, I could analyze them together:\n\n``````> erl_syntax:type({tuple,0,[{atom,0,a},{string,0,\"B\"},{tuple,0,[{atom,0,c},string,0,\"D\"}]}]}).\ntuple\n%% check whether send argument is also a tuple.\n%% then, since it's a tuple, use erl_syntax:tuple_elements/1 and keep comparing in this way, matching anything when you come across a string which was a variable...\n``````\n\nI think I'll end up missing something out (and for example recognizing some things but not others ... even though they should have matched). Are there any API functions which I could use to ease this task? 
And as for a pattern match test operator or something along those lines, that does not exist, right? (It was only suggested here: http://erlang.org/pipermail/erlang-questions/2007-December/031449.html.)

Edit (explaining things from the beginning this time):

Using erl_types as Daniel suggests below is probably doable if you play around with the erl_type() returned by t_from_term/1; i.e. t_from_term/1 takes a term with no free variables, so you'd have to keep changing something like `{a, B, {c, D}}` into `{a, '_', {c, '_'}}` (i.e. fill the variables), use t_from_term/1, and then go through the returned data structure and change the '_' atoms to variables using the module's t_var/1 or something.

Before explaining how I ended up going about it, let me state the problem a bit better.

Problem

I'm working on a pet project (an ErlAOP extension) which I'll be hosting on SourceForge when ready. Basically, another project already exists (ErlAOP) through which one can inject code before/after/around/etc. function calls (see the doc if interested).

I wanted to extend this to support injection of code at the send/receive level (because of another project). I've already done this, but before hosting the project I'd like to make some improvements.

Currently, my implementation simply finds each use of the send operator or receive expression and injects a function before/after/around (receive expressions have a little gotcha because of tail recursion). Let's call this function dmfun (dynamic match function).

The user will be specifying that when a message of the form e.g. {a, B, {c, D}} is being sent, then the function do_something/1 should be evaluated before the sending takes place. Therefore, the current implementation injects dmfun before each use of the send op in the source code.
Dmfun would then have something like:

```erlang
case Arg of
    {a, B, {c, D}} -> do_something(Arg);
    _ -> continue
end
```

where Arg can simply be passed to dmfun/1, because you have access to the forms generated from the source code.

So the problem is that any send operator will have dmfun/1 injected before it (and the send op's message passed as a parameter). But when sending messages like 50, {a, b}, [6, 4, 3], etc., these messages will certainly not match {a, B, {c, D}}, so injecting dmfun/1 at sends with these messages is a waste.

I want to be able to pick out plausible send operations like e.g. Pid ! {a, 5, SomeVar} or Pid ! {a, X, SomeVar}. In both of these cases it makes sense to inject dmfun/1, because if at runtime SomeVar = {c, 50}, then the user-supplied do_something/1 should be evaluated (but if SomeVar = 50, then it should not, because we're interested in {a, B, {c, D}} and 50 does not match {c, D}).

I wrote the following prematurely. It doesn't solve the problem I had. I ended up not including this feature. I left the explanation anyway, but if it were up to me, I'd delete this post entirely... I was still experimenting and I don't think what there is here will be of any use to anyone.

Before the explanation, let:

msg_format = the user-supplied message format which will determine which messages being sent/received are interesting (e.g. {a, B, {c, D}}).

msg = the actual message being sent in the source code (e.g. Pid ! {a, X, Y}).

I gave the explanation below in a previous edit, but later found out that it wouldn't match some things it should. E.g. when msg_format = {a, B, {c, D}}, msg = {a, 5, SomeVar} wouldn't match when it should (by "match" I mean that dmfun/1 should be injected).

Let's call the "algorithm" outlined below Alg. The approach I took was to execute Alg(msg_format, msg) and Alg(msg, msg_format). The explanation below only goes through one of these.
By repeating the same thing, only getting a different matching function (`matching_fun(msg_format)` instead of `matching_fun(msg)`), and injecting dmfun/1 only if at least one of Alg(msg_format, msg) or Alg(msg, msg_format) returns true, the result should be the injection of dmfun/1 where the desired message can actually be generated at runtime.

1. Take the message form you find in the [Forms] given to parse_transform/2. E.g. let's say you find `{op,24,'!',{var,24,'Pid'},{tuple,24,[{atom,24,a},{var,24,'B'},{var,24,'C'}]}}`. So you would take `{tuple,24,[{atom,24,a},{var,24,'B'},{var,24,'C'}]}`, which is the message being sent (bind to Msg).

2. Do fill_vars(Msg), where:

   ```erlang
   -define(VARIABLE_FILLER, "_").

   -spec fill_vars(erl_parse:abstract_form()) -> erl_parse:abstract_form().
   %% @doc This function takes an abstract_form() and replaces all {var, LineNum, Variable}
   %% forms with {string, LineNum, ?VARIABLE_FILLER}.
   fill_vars(Form) ->
       erl_syntax:revert(
         erl_syntax_lib:map(
           fun(DeltaTree) ->
                   case erl_syntax:type(DeltaTree) of
                       variable ->
                           erl_syntax:string(?VARIABLE_FILLER);
                       _ ->
                           DeltaTree
                   end
           end,
           Form)).
   ```

3. Do form_to_term/1 on 2's output, where:

   ```erlang
   form_to_term(Form) -> element(2, erl_eval:exprs([Form], [])).
   ```

4. Do term_to_str/1 on 3's output, where:

   ```erlang
   -define(inject_str(FormatStr, TermList), lists:flatten(io_lib:format(FormatStr, TermList))).

   term_to_str(Term) -> ?inject_str("~p", [Term]).
   ```

5. Do `gsub(v(4), "\"_\"", "_")`, where v(4) is 4's output and gsub is (taken from here):

   ```erlang
   gsub(Str, Old, New) ->
       RegExp = "\\Q" ++ Old ++ "\\E",
       re:replace(Str, RegExp, New, [global, multiline, {return, list}]).
   ```

6. Bind a variable (e.g. M) to matching_fun(v(5)), where:

   ```erlang
   matching_fun(StrPattern) ->
       form_to_term(
         str_to_form(
           ?inject_str(
              "fun(MsgFormat) ->
                       case MsgFormat of
                           ~s ->
                               true;
                           _ ->
                               false
                       end
               end.", [StrPattern]))).

   str_to_form(MsgFStr) ->
       {_, Tokens, _} = erl_scan:string(end_with_period(MsgFStr)),
       {_, Exprs} = erl_parse:parse_exprs(Tokens),
       hd(Exprs).

   end_with_period(String) ->
       case lists:last(String) of
           $. -> String;
           _  -> String ++ "."
       end.
   ```

7. Finally, take the user-supplied message format (which is given as a string), e.g. MsgFormat = "{a, B, {c, D}}", and do: MsgFormatTerm = form_to_term(fill_vars(str_to_form(MsgFormat))). Then you can call M(MsgFormatTerm).

E.g. with user-supplied message format = {a, B, {c, D}} and Pid ! {a, B, C} found in code:

```erlang
2> weaver_ext:fill_vars({tuple,24,[{atom,24,a},{var,24,'B'},{var,24,'C'}]}).
{tuple,24,[{atom,24,a},{string,0,"_"},{string,0,"_"}]}
3> weaver_ext:form_to_term(v(2)).
{a,"_","_"}
4> weaver_ext:term_to_str(v(3)).
"{a,\"_\",\"_\"}"
5> weaver_ext:gsub(v(4), "\"_\"", "_").
"{a,_,_}"
6> M = weaver_ext:matching_fun(v(5)).
#Fun<erl_eval.6.13229925>
7> MsgFormatTerm = weaver_ext:form_to_term(weaver_ext:fill_vars(weaver_ext:str_to_form("{a, B, {c, D}}"))).
{a,"_",{c,"_"}}
8> M(MsgFormatTerm).
true
9> M({a, 10, 20}).
true
10> M({b, "_", 20}).
false
```

• How is `{tuple,17,[{atom,17,a},{integer,17,5},{var,17,'SomeVar'}]}` on the form `{a, B, {c, D}}`? In particular, why is `'SomeVar'` on the form `{c, D}`? Jun 21, 2012 at 12:08
• Hi Emil. I'm not sure what you mean by "on the form". Basically, {a,5,SomeVar} is important to me if it's found in the src code because, at runtime, SomeVar could end up being {c,9} for example, and so {a,5,{c,9}} would match the original {a,B,{c,D}}.
Therefore, from an input of {a,B,{c,D}}, my program needs to recognize that a form like {tuple,17,[{atom,17,a},{integer,17,5},{var,17,'SomeVar'}]} could possibly match at runtime (the aim is to inject code at these points). Jun 21, 2012 at 14:26

There is functionality for this in `erl_types` (HiPE).

I'm not sure you have the data in the right form for using this module, though. I seem to remember that it takes Erlang terms as input. If you figure out the form issue, you should be able to do most of what you need with `erl_types:t_from_term/1` and `erl_types:t_is_subtype/2`.

It was a long time ago that I last used these, and I only ever did my testing at runtime, as opposed to compile time. If you want to take a peek at a usage pattern from my old code (not working any more), you can find it available at github.

• Hi Daniel. Thanks for the info :) Sorry for taking so long to answer. Work got in the way of my pet project and I completely forgot I had posted this question ^^; I will take a look at what you suggested before replying or accepting. Cheers. Jun 27, 2012 at 14:48
• Look at the dates; you answered in less than 24 hours. It was my answer that was delayed. Good luck finding a solution and let me know if I can help. Jun 28, 2012 at 18:16
• You're right, I rushed into that one ^^; Yes, erl_types seems to be able to do the trick (so I accepted this answer)... but that is not what I ended up using. Variables ended up being a problem when converting to Erlang terms. Simply changing them to atoms was giving me false with t_is_subtype/2 on inputs which should have given me true. I edited the question to show the approach I took. Cheers Daniel :) Jun 29, 2012 at 8:42

I don't think this is possible at compile time in the general case. Consider:

```erlang
send_msg(Pid, Msg) ->
    Pid ! Msg.
```

Msg will look like a var, which is a completely opaque type.
You can't tell if it is a tuple or a list or an atom, since anyone could call this function with anything supplied for `Msg`.

This would be much easier to do at run time instead. Every time you use the `!` operator, you'll need to call a wrapper function instead, which tries to match the message you are trying to send and executes additional processing if the pattern is matched.

• Hi kjw0188. Yes, that is exactly the approach I'm currently taking. What I want to do now is avoid having to inject this function, which pattern matches on Msg, where it is completely unnecessary. In the case of Pid ! Msg it would have to be injected; however, if it's Pid ! {something, B} then it definitely would not, since I'm looking for something of the form {a, B, {c, D}} and {something, B} can never match. Cheers. Jun 27, 2012 at 14:43
https://en.m.wikipedia.org/wiki/TWIRL
"# TWIRL\n\nIn cryptography and number theory, TWIRL (The Weizmann Institute Relation Locator) is a hypothetical hardware device designed to speed up the sieving step of the general number field sieve integer factorization algorithm. During the sieving step, the algorithm searches for numbers with a certain mathematical relationship. In distributed factoring projects, this is the step that is parallelized to a large number of processors.\n\nTWIRL is still a hypothetical device — no implementation has been publicly reported. However, its designers, Adi Shamir and Eran Tromer, estimate that if TWIRL were built, it would be able to factor 1024-bit numbers in one year at the cost of \"a few dozen million US dollars\". TWIRL could therefore have enormous repercussions in cryptography and computer security — many high-security systems still use 1024-bit RSA keys, which TWIRL would be able to break in a reasonable amount of time and for reasonable costs.\n\nThe security of some important cryptographic algorithms, notably RSA and the Blum Blum Shub pseudorandom number generator, rests in the difficulty of factorizing large integers. If factorizing large integers becomes easier, users of these algorithms will have to resort to using larger keys (computationally more expensive) or to using different algorithms, whose security rests on some other computationally hard problem (like the discrete logarithm problem)."
https://typeset.io/journals/journal-of-hydraulic-engineering-1t0zs3q8/1980
"# Showing papers in \"Journal of Hydraulic Engineering in 1980\"\n\nJournal ArticleDOI\nTL;DR: In this article, a theory based on the Monin-obukhov length was introduced for open channel flow with suspended sediments, and a velocity distribution equation based on this theory showed a good agreement with the measured velocity distribution.\nAbstract: Characteristics of open channel flow with suspended sediments were theoretically investigated introducing a theory based on the Monin-Obukhov length First, a velocity distribution equation based on the theory showed a good agreement with the measured velocity distribution Second, the hydraulic resistance and the distribution of suspended sediment concentration observed in experiments were explained theoretically Third, a transport rate formula for suspended sediments, in which classical equations were included as a particular case, was obtained; and this formula, in cooperation with a reference concentration derived in the present study, showed agreement with experimental results Finally, the critical condition for deposition of suspended particles and the collapse of turbulence were derived and verified by experiments\n\n221 citations\n\nJournal ArticleDOI\nTL;DR: In this paper, a method to determine and apply the biomechanical properties of vegetative channel linings is presented, which is used to evaluate the stiffness of vegetation commonly used to protect earth channels from erosion.\nAbstract: A method to determine and apply the biomechanical properties of vegetative channel linings is presented. The method is used to evaluate the stiffness of vegetation commonly used to protect earth channels from erosion. Using the vegetation stiffness and stem length, the deflected height of the channel lining is predicted to yield a relative roughness under given flow conditions. The relative roughness is used to calculate a value of the Darcy-Weisbach friction factor or Manning's n. 
A method to design a vegetated channel is included.

202 citations

Journal Article (DOI)
TL;DR: In this article, the longitudinal profile of the lobate snout is satisfactorily analyzed by applying the theory of one-dimensional translation waves on an open channel, where the resistance coefficient is a function of concentration, depth, and grain diameter.
Abstract: Reviews of the yield strength and viscosity of the interstitial clay slurry in debris flow prove that ordinary debris flow may be modeled as a dilatant fluid in which the intergranular forces dominate. Theoretical velocity distributions in dilatant fluid compare well with the experimental results when the value of a numerical constant is appropriately selected. The longitudinal profile of the lobate snout is satisfactorily analyzed by applying the theory of one-dimensional translation waves on an open channel, where the resistance coefficient is a function of concentration, depth, and grain diameter. A remarkable segregation of particles, in which the larger ones move upwards, occurs in debris flow due to the effects of collisions of particles. The accumulation of boulders in the front part of debris flow is a result of the faster transportation of the larger particles in the upper layer of the flow than that of the smaller ones in the lower layer.

189 citations

Journal Article (DOI)
TL;DR: In this paper, the authors proposed a non-dimensional triangular hyetograph based on the statistical mean of the first time moment of recorded rainstorms and a triangular representation of the hyetographs, which are nondimensionalized using the rainfall duration and depth.
Abstract: Improved methods for design and pollution control of storm drainage facilities require not only the average rainfall intensity but also the time distribution of the rainfall.
Since the temporal pattern of rainfall varies with storms and the patterns of future storms cannot be predicted exactly, the design hyetograph can only be determined probabilistically using statistical values of past events. The nondimensional triangular hyetograph method is based on the statistical mean of the first time moment of recorded rainstorms and a triangular representation of the hyetographs, which are nondimensionalized using the rainfall duration and depth. For a given season the expected nondimensional triangular hyetographs for heavy rainstorms are nearly identical, being insignificantly affected by the duration of rainfall. The established nondimensional triangular hyetograph can then be used to produce the design hyetographs by simple procedures. Results of an analysis of 7,484 rainstorms at three locations indicate that such a method is feasible.

156 citations

Journal Article (DOI)
TL;DR: In this article, analytical formulations of critical droughts were derived by making use of the theories of a random number of random variables and runs; the results obtained are valid for independent and lag-one Markov processes only.
Abstract: Analytical formulations of critical droughts, either as longest drought duration or maximum deficit sum, have been derived by making use of the theories of a random number of random variables and runs. The results obtained are valid for independent and lag-one Markov processes only. The numerical solution of the derived analytical expressions yields any order of statistical moments; the most important one is the expected value of critical droughts.
It has been quantitatively observed that the drought duration and severity are dependent on the correlation structure, sample size, and truncation level.

139 citations

Journal Article (DOI)
TL;DR: In this article, the role and constitution of the lag have been clarified by inspecting the instability analysis compared with the theory of the bed load transport process, and an instability analysis has been conducted to investigate the possibility of instability of the bed surface against hydraulic conditions.
Abstract: Kennedy clarified that the lag of sediment transport for bed form plays an important role in sand bed instability, but this lag could not be evaluated quantitatively to date. In this study, the role and constitution of the lag have been clarified by inspecting the instability analysis compared with the theory of the bed load transport process. In particular, based on the writers' research on the bed load transport process by stochastic approaches, the lag distance of the bed load transport rate for bed shear stress has been reasonably clarified. Furthermore, improving the flow model based on potential flow theory by introducing the effect of flow acceleration or deceleration, which can be expressed by local variation of flow depth, the phase lag between bed shear stress and bed form has been estimated. After clarification of these lags, an instability analysis has been conducted and the possibility of instability of the bed surface has been investigated against hydraulic conditions.

133 citations

Journal Article (DOI)
TL;DR: In this article, the authors quantified bar resistance empirically using data from rivers in Alberta, Canada, and the device of slope division of reach-averaged bed stress.
Abstract: Gravel-bed streams with bankfull channels of low sinuosity are considered. At flood stages, most of the resistance is grain resistance. Thus, depth-discharge relations can be predicted with existing methods, although the inherent scatter is large.
At lower stages, however, form resistance due to the presence of bars becomes increasingly important. Bar resistance is quantified empirically using data from rivers in Alberta, Canada, and the device of slope division of reach-averaged bed stress. Further reduction leads to predictive design graphs for depth-discharge relations. A numerical example is provided.

127 citations

Journal Article (DOI)
TL;DR: An experimental investigation of the deleterious effect of microbial slime layers on the hydraulic performance of water conduits is presented in this paper, where the underlying mechanisms that lead to an increase of frictional losses in the conduit are explored and their relative importance is discussed.
Abstract: An experimental investigation of the deleterious effect of microbial slime layers on the hydraulic performance of water conduits is presented. The underlying mechanisms that lead to an increase of frictional losses in the conduit are explored and their relative importance is discussed. It is shown that although the slime layer is viscoelastic and filamentous, its effect on frictional resistance can be adequately represented through an increase in the rigid equivalent sand roughness of the conduit wall.

123 citations

Journal Article (DOI)
TL;DR: In this paper, a rational method has been developed to predict the regime geometry of straight active gravel streams, based upon a resistance equation, a bed load equation, and the condition of minimum stream power for gravel streams in equilibrium.
Abstract: A rational method has been developed to predict the regime geometry of straight active gravel streams. The analytical model is based upon a resistance equation, a bed load equation, and the condition of minimum stream power for gravel streams in equilibrium. The analytical channel geometry so obtained is in general agreement with previously established relations and observations. The analytical channel width is proportional to Q^0.47, in which Q is the bankfull discharge. Except for steep slopes, the width is essentially only a function of the discharge. The analytical depth increases with the discharge but decreases with the slope. On steep slopes, the width increases rapidly with the slope. This rapid increase in width, accompanied by a decrease in depth, indicates a braiding tendency for steep gravel streams. As the bed load approaches zero at the lower boundary, this model for active streams reduces to the threshold theory.

120 citations

Journal Article (DOI)
TL;DR: In this paper, the land cover requirements of the Soil Conservation Service (SCS) model used to develop volume of runoff for hydrograph synthesis in suburban areas were modified to be compatible with Landsat digital data.
Abstract: The land cover requirements of the Soil Conservation Service (SCS) model used to develop volume of runoff for hydrograph synthesis in suburban areas were modified to be compatible with Landsat digital data. Curve numbers obtained with these alternate land cover categories compared well with those obtained in published example problems using conventional categories. Synthetic flood frequency relationships computed for a test watershed showed that the conventional approach based on aerial photos agreed well with the Landsat-based approach to land cover determination.

119 citations

Journal Article (DOI)
TL;DR: In this paper, the curve number runoff equation of the U.S. Department of Agriculture, Soil Conservation Service, is treated as a transformation from rainfall distribution to runoff frequency distribution.
Abstract: The curve number runoff equation of the U.S. Department of Agriculture, Soil Conservation Service, is treated as a transformation from rainfall distribution to runoff frequency distribution.
Reasonable agreement between observed and hypothesized results was obtained on four of the five watersheds shown. The hypothesis failed for the watershed tested in the arid southwest.\n\nJournal ArticleDOI\nTL;DR: In this paper, the problem of aggradation due to an increase in the rate of sediment supply in excess of what the stream can carry has been investigated, where the supply of sediment is assumed to be continuous and at a constant rate.\nAbstract: The problem of aggradation due to an increase in the rate of sediment supply in excess of what the stream can carry has been investigated. The supply of sediment is assumed to be continuous and at a constant rate. A relationship for the depth of aggradation at any time and at any distance from the section of sediment addition has been developed. Since the mathematical model used was based on many simplifying assumptions, it needed verification against a known set of data. Experiments were, therefore, performed in the laboratory and these have enabled suitable modification of the analytical results.\n\nJournal ArticleDOI\nTL;DR: In this paper, a one-dimensional integral energy model (mixed-layer model) is used to simulate the seasonal temperature cycle of three, morphometrically different, temperate lakes.\nAbstract: A one-dimensional integral energy model (mixed-layer model) is used to simulate the seasonal temperature cycle of three, morphometrically different, temperate lakes. In the model, turbulent kinetic energy supplied by wind shear is used to entrain denser water into the upper mixed layer by working against gravity. The model is calibrated with data from one lake for 1 yr and verified against data from two other lakes and also against data from other years.
Predictions of the onset of stratification, surface and hypolimnetic temperatures, mixed layer depths, and periods of turnover are all in agreement with data.\n\nJournal ArticleDOI\nChristopher Koutitas\nTL;DR: In this article, a three-dimensional numerical model was developed to study hydrodynamic circulations produced in coastal zones due to tide and wind action, which consists of a mixed finite-difference/finite element solution of the simplified fluid momentum and continuity equations.\nAbstract: A three-dimensional numerical model has been developed to study hydrodynamic circulations produced in coastal zones due to tide and wind action. The model consists of a mixed finite-difference/finite element solution of the simplified fluid momentum and continuity equations. A numerical splitting technique is used to reduce the size of model solution matrices while the finite element approach is used over the flow depth to enable irregular sea beds to be tackled easily. Model errors arising from the numerical method are minimized by the use of a Galerkin weighted-residual procedure. The problems associated with modeling the turbulence closure of the basic momentum equations are also investigated with a simplified form of the model and the need for high levels of closure is demonstrated. The potential use of the three-dimensional model is illustrated by prediction of wind-induced flows in Thessaloniki Bay in the Aegean Sea.\n\nJournal ArticleDOI\nTL;DR: In this paper, the minimum stream power for stable alluvial channels has been derived and a method incorporating this condition with a flow-resistance formula and a sediment-discharge formula has been developed to compute the width, depth, and slope of stable channels for a given set of water and sediment discharges.\nAbstract: The hypothesis of minimum stream power for stable alluvial channels has been used to derive a condition for alluvial channels in equilibrium. 
A method incorporating this condition with a flow-resistance formula and a sediment-discharge formula has been developed to compute the width, depth, and slope of stable alluvial channels for a given set of water and sediment discharges. Applying this method yields a design chart that provides the stable width and depth of alluvial canals with trapezoidal shape for a given set of water discharge, channel slope, sediment size, and side slope. Comparison of data from some regime canals and small experimental canals has shown good agreement between the observed data and analytical predictions.\n\nJournal ArticleDOI\nTL;DR: In this paper, the authors attempted to determine if an exponent of 2 would give the best results for estimating thunderstorm rainfall in the Southwest and if there were definable relationships among gages, distance, and exponents in the reciprocal-distance method.\nAbstract: In this paper, the writers attempted to determine if an exponent of 2 would give the best results for estimating thunderstorm rainfall in the Southwest and if there were definable relationships among gages, distance, and exponents in the reciprocal-distance method.\n\nJournal Article\nTL;DR: A Task Committee (TC) of the Surface Water Hydrology Committee was organized in 1976 to determine the types of low-flow information needed, to describe available methods of characterizing low flows, and to identify needed analyses and data collection as mentioned in this paper.\nAbstract: A Task Committee (TC) of the Surface Water Hydrology Committee was organized in 1976 to determine the types of low-flow information needed, to describe available methods of characterizing low flows, and to identify needed analyses and data collection. Responses to a TC survey of users of low-flow information indicated the need for better estimates of low-flow characteristics at ungaged sites.
Available analytical methods are described and evaluated, sources of low-flow information are given, and some recently proposed techniques are reported. Low-flow data on many additional streams are needed.\n\nJournal ArticleDOI\nTL;DR: In this paper, a model based on the streamflow partial duration series (PDS) is proposed to estimate the time-dependent random variables of a 38-yr series of floods on the Melezza River at Camedo, Ticino, Switzerland.\nAbstract: The proposed model is based on the streamflow partial duration series (PDS). Both the occurrence time and the magnitude of the floods (instantaneous hydrograph peaks) are time-dependent random variables. An estimation method, based on the maximum likelihood concept, is also proposed. The derivation of the law of extremes from the basic model shows the strong influence of the time-dependence assumptions on the extreme values. Application of the model to a 38-yr series of floods on the Melezza River at Camedo, Ticino (Southern Switzerland) shows good agreement between theoretical and observed values.\n\nJournal ArticleDOI\nTL;DR: In this paper, the steady-state solution of the flow distribution in a pipe network is presented, which relies on an iterative linearization procedure to solve for junction heads, and is highly suited for use on small computers.\nAbstract: A previously unpublished numerical method for the steady-state solution of the flow distribution in a pipe network is presented. It relies on an iterative linearization procedure to solve for junction heads, and is highly suited for use on small computers. Its relationship to other solution methods is explicitly developed. 
This new method is extremely simple to formulate, and requires minimal data preorganization for computation.\n\nJournal ArticleDOI\nSoumya Chatterjee\nTL;DR: In this article, the results of an experimental investigation of the flow characteristics of a submerged horizontal jet flowing over a partly rigid and erodible bed are presented; the diffusion characteristics of the jet, growth of boundary layer thickness, velocity distribution within the boundary layer, critical shear stress at equilibrium stage, and the time variation of shear stress at the location of maximum scour have been investigated.\nAbstract: The results of an experimental investigation of the flow characteristics of a submerged horizontal jet flowing over a partly rigid and erodible bed are presented. The diffusion characteristics of the jet, growth of boundary layer thickness, velocity distribution within the boundary layer, critical shear stress at equilibrium stage, and the time variation of shear stress at the location of maximum scour have been investigated. The expression for critical shear stress has been obtained from the solution of Von Karman's integral equation. A functional relationship for the time variation of shear stress has been found by correlating the dynamic pressure drop recorded by a Preston tube with the velocity distribution law developed from experimental data.\n\nJournal ArticleDOI\nTL;DR: In this paper, it is proposed that deviations from Gaussianity of observed profiles of the concentration of a solute in a cloud as it passes the measuring station in a river, estuary or similar flow should be explicitly measured by recording the non-dimensional skewness and kurtosis.\nAbstract: It is proposed that deviations from Gaussianity of observed profiles of the concentration of a solute in a cloud as it passes the measuring station in a river, estuary or similar flow should be explicitly measured by recording the nondimensional skewness and kurtosis.
Examples of applications of the proposal are examined. It is shown that observed profiles can be fitted well by Edgeworth series provided the skewness and kurtosis are not too large. A review of the way in which different causes of deviations from Gaussianity can be classified in terms of the evolution of the skewness and kurtosis with downstream position is given, and it is argued that this is the most important point of the proposal.\n\nJournal ArticleDOI\nTL;DR: In this article, the results of laboratory experiments on scour around circular piers in cohesionless bed material at high flow velocities are presented and a special technique is used to measure scour depths.\nAbstract: The results of laboratory experiments on scour around circular piers in cohesionless bed material at high flow velocities are presented. A special technique is used to measure scour depths. The scour depth in the sediment transport regime first decreases and then increases with increasing velocity. A formula to predict scour depth for flows with high Froude numbers is developed.\n\nJournal ArticleDOI\nYeou-Koung Tung\nTL;DR: In this article, a scientific approach to systematically account for the uncertainties and their interactions in the selection of safety factors and return periods for various risk levels in hydraulic design is presented, which can be used to develop risk-safety relationships for various return periods and expected service life.\nAbstract: Hydraulic structures are designed with reference to some natural events which could be imposed on the structure during its expected service life. Conventional return-period design methods fail to systematically account for the many uncertainties in design. By systematically analyzing the component uncertainties and their interactions using the concepts of reliability theory and first-order analysis of uncertainties, a composite risk and reliability can be defined.
This paper presents static and dynamic risk and reliability models that can be used to develop risk-safety relationships for various return periods and expected service lives that can be used in design. The static models consider a single loading application and the dynamic models consider repeated application of random loadings to define a composite risk. The models are applied as examples of the methodology to develop risk-safety curves for culvert design. This work presents a scientific approach to systematically account for the uncertainties and their interactions in the selection of safety factors and return periods for various risk levels in hydraulic design.\n\nJournal ArticleDOI\nTL;DR: In this paper, a linear programming model is developed for the optimal determination of unit hydrographs by minimizing the sum of deviations between observed and calculated runoff hydrographs.\nAbstract: A linear programming model is developed for the optimal determination of unit hydrographs. The model determines the best unit hydrograph by minimizing the sum of deviations between observed and calculated runoff hydrographs. The principles of linearity and superposition of unit hydrograph theory are the basis of the model. Several multiperiod rainfall events can be considered simultaneously for the optimal unit hydrograph determination. The model is tested using both hypothetical and actual rainfall events.\n\nJournal ArticleDOI\nTL;DR: In this article, the transverse mixing coefficients of three river reaches, under both open water and ice-covered conditions, are analyzed and the results show that the coefficients range between 0.01 m²/s and 0.09 m²/s.\nAbstract: Field tests to determine the transverse mixing coefficients of three river reaches, under both open water and ice-covered conditions, are described. To interpret the test results, the streamtube method for mixing calculations, developed earlier by others, is used.
For evaluating transverse mixing coefficients, the widely used moments method is modified so as to be consistent with the streamtube approach. Application of the modified method to test data gave consistent results, thus reinforcing the streamtube approach. For the tests described here, the transverse mixing coefficient ranges between 0.01 m²/s and 0.09 m²/s; when this coefficient is nondimensionalized with the hydraulic radius and shear velocity, it lies between 0.4 and 2.5. Open water values of the dimensionless transverse mixing coefficient are less than the corresponding values under an ice cover by a factor of as much as two and one-half.\n\nJournal ArticleDOI\nTL;DR: In this article, it was shown that the skewness coefficient (γ) of LP can be estimated from the mean of logarithmic data (Y¯), and that the skewness coefficient (γ) of the Pearson type 3 distribution (P) can be estimated from the mean of real data (X¯).\nAbstract: Two new methods of moments (MO) which do not use the sample skewness coefficient (CS) are evolved for fitting the log Pearson type 3 distribution (LP). The two current MO methods which use CS are: (Method 1) real data are fit to LP; or (Method 2) logarithmic data are fit to P (Pearson type 3 distribution). By virtue of the combined properties of P and LP, it is shown that: (Method 3) the skewness coefficient (γ) of LP can be estimated from the mean of logarithmic data (Y¯); or (Method 4) the skewness coefficient of P (γ) can be estimated from the mean of real data (X¯). The values (Y¯) and (X¯) are unbiased estimates while CS is generally a biased estimate.
Monte Carlo experiments have indicated that, as sample size becomes small, the estimates for quantiles of LP are: (1) systematically biased for Method 1; (2) highly positively biased when (γ) is negative, and unbiased to somewhat positively biased when (γ) is positive for Method 2; (3) unbiased to least biased of the four methods when (γ) is negative, and somewhat negatively biased when (γ) is positive for Method 3; and (4) negatively biased for Method 4.\n\nJournal ArticleDOI\nTL;DR: In this paper, a method of calculating bed-load transport is presented, which takes advantage of the rapid bathymetric surveys that can be made in large rivers, and is based on sediment continuity and an assumption for the base level of zero transport.\nAbstract: A method of calculating bed-load transport is presented. This method takes advantage of the rapid bathymetric surveys that can be made in large rivers. The proposed equation is based on sediment continuity and an assumption for the base level of zero transport. Experimental data were obtained in a series of flume tests and the measured transport rates are compared with the calculated transport rates using the proposed equation. Comparisons are also made with the Ackers-White equation.\n\nJournal ArticleDOI\nTL;DR: In this paper, the issue of numerical precision as affected by the use of different routing schemes was investigated, based on the observation that in an application of the Kalinin-Miljukov method, accuracy improved when a more refined difference scheme was used in place of the conventional one.\nAbstract: This note will center on the issue of numerical precision as affected by the use of different routing schemes.
The investigation is prompted by the observation that in an application of the Kalinin-Miljukov method, accuracy improved when a more refined difference scheme was used in place of the conventional one.\n\nJournal ArticleDOI\nTL;DR: In this article, the authors used the Soil Conservation Service (SCS) curve-number procedure to determine rainfall excess; its parameters (soil type, land use, and precipitation) are each relatively easy to estimate.\nAbstract: The Soil Conservation Service (SCS) curve-number procedure is commonly used to determine rainfall excess. The advantage of this method is that the parameters in the relation are the soil type, land use, and precipitation, each of which is relatively easy to estimate. The method does have some shortcomings. The procedure for analysis of intermittent rainfall and for portions of storms where rainfall intensity is less than infiltration capacity is not well defined. In addition, the SCS method predicts that the infiltration rate will approach zero during storms of long duration instead of approaching a constant terminal infiltration rate.\n\nJournal ArticleDOI\nTL;DR: In this article, a series of experiments were conducted to study the effect of unsteady flow on turbulence characteristics, including mean velocity profiles, temporal and spatial Reynolds stress profiles, and energy spectra.\nAbstract: Field measurements of tidal flow have shown that there is an increase in turbulent parameters (turbulent intensity, Reynolds stress, coefficient of turbulent diffusivity, and the rate of sediment transport) when the flow is decelerating as compared with when it is accelerating. A series of experiments were conducted to study the effect of unsteady flow on turbulence characteristics. The flow in a long flume was accelerated and then decelerated for the same periods of time. Instantaneous velocity components in three directions, water surface slope, and shear stress at the bed were measured.
From the measured data, mean velocity profiles, temporal and spatial Reynolds stress profiles, and energy spectra were determined. The results were compared, wherever possible, with those obtained from field measurements."
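Three of the abstracts above (the Landsat land-cover study, the runoff frequency transformation, and the rainfall-excess note) all build on the SCS curve-number runoff relation, Q = (P - Ia)^2 / (P - Ia + S) with S = 1000/CN - 10 and the conventional Ia = 0.2S assumption (English units, rainfall and runoff in inches). As a reference sketch only, not the formulation of any particular paper here:

```python
def scs_runoff(p_inches: float, curve_number: float) -> float:
    """Direct runoff Q (inches) from storm rainfall P (inches) via the
    standard SCS curve-number equation: Q = (P - Ia)^2 / (P - Ia + S)."""
    s = 1000.0 / curve_number - 10.0   # potential maximum retention, inches
    ia = 0.2 * s                       # initial abstraction (conventional 0.2*S)
    if p_inches <= ia:                 # all rainfall abstracted: no runoff
        return 0.0
    return (p_inches - ia) ** 2 / (p_inches - ia + s)
```

For example, with CN = 80 (S = 2.5 in), a 4 in storm yields Q = 3.5² / 6 ≈ 2.04 in, while a 0.3 in storm produces no runoff at all.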
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9368387,"math_prob":0.81884474,"size":29098,"snap":"2023-40-2023-50","text_gpt3_token_len":5515,"char_repetition_ratio":0.13167663,"word_repetition_ratio":0.33273178,"special_character_ratio":0.17839713,"punctuation_ratio":0.09167842,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95227987,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T05:40:01Z\",\"WARC-Record-ID\":\"<urn:uuid:d4fac806-ca7f-4355-8295-2ae47c9b1422>\",\"Content-Length\":\"585676\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e62e097c-89cb-4d29-97ca-08fad3393988>\",\"WARC-Concurrent-To\":\"<urn:uuid:bc0c894c-d1af-4e89-9434-8d0babc35b7d>\",\"WARC-IP-Address\":\"3.162.125.13\",\"WARC-Target-URI\":\"https://typeset.io/journals/journal-of-hydraulic-engineering-1t0zs3q8/1980\",\"WARC-Payload-Digest\":\"sha1:257PHIRB6AVDC7YWEXAXXKW4RVNDWJLV\",\"WARC-Block-Digest\":\"sha1:KQBF5MTLL46LO4UIUINHTXLZ7J7ELNCE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100545.7_warc_CC-MAIN-20231205041842-20231205071842-00123.warc.gz\"}"} |
https://www.softmath.com/tutorials-3/algebra-formulas/intermediate-algebra-topics.html | [
"English | Español\n\n# Try our Free Online Math Solver!",
null,
"Online Math Solver\n\n Depdendent Variable\n\n Number of equations to solve: 23456789\n Equ. #1:\n Equ. #2:\n\n Equ. #3:\n\n Equ. #4:\n\n Equ. #5:\n\n Equ. #6:\n\n Equ. #7:\n\n Equ. #8:\n\n Equ. #9:\n\n Solve for:\n\n Dependent Variable\n\n Number of inequalities to solve: 23456789\n Ineq. #1:\n Ineq. #2:\n\n Ineq. #3:\n\n Ineq. #4:\n\n Ineq. #5:\n\n Ineq. #6:\n\n Ineq. #7:\n\n Ineq. #8:\n\n Ineq. #9:\n\n Solve for:\n\n Please use this form if you would like to have this math solver on your website, free of charge. Name: Email: Your Website: Msg:\n\n# intermediate algebra topics rbxzjiqsyj\n\nMATH 4\nIntermediate Algebra (3) Algebraic expressions; linear, absolute value equations and inequalities; lines;systems of linear equations; integral exponents; polynomials; factoring. This course may not be used to satisfy the basic minimum requirements for graduation in any baccalaureate degree program.\n\nPrerequisite: basic arithmetic skills or satisfactory performance on the mathematics proficiency\nexamination\n\nTOPICS\nEvaluation of algebraic expressions involving integers and rational numbers.\nSimplification of algebraic expressions.\nSolution of linear equations .\nSolution of literal equations .\nWord problems which reduce to a single linear equation.\nGraphs of linear equations.\nSlopes and equations of lines .\nSolution of absolute value linear equations.\nSolution of linear inequalities .\nSolution of absolute value inequalities.\nSolution of systems of linear equations.\nWord problems which reduce to two simultaneous linear equations.\nProperties of positive integral exponents.\nAddition and subtraction of polynomials .\nMultiplication and division of polynomials .\nFactoring (special products, trinomials , grouping ).\nSolution of equations using factoring.\n\n Prev Next"
] | [
null,
"https://www.softmath.com/images/video-pages/solver-top.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.78730416,"math_prob":0.9902417,"size":1254,"snap":"2020-24-2020-29","text_gpt3_token_len":245,"char_repetition_ratio":0.1992,"word_repetition_ratio":0.024390243,"special_character_ratio":0.16427432,"punctuation_ratio":0.15789473,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990293,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-10T22:47:21Z\",\"WARC-Record-ID\":\"<urn:uuid:8758c281-f626-40e8-b93f-9abb9e38df99>\",\"Content-Length\":\"87158\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bb88f8cb-62de-4ff6-95c0-a3bbbb540217>\",\"WARC-Concurrent-To\":\"<urn:uuid:97b05c67-b690-4265-acc3-e8ed257a7717>\",\"WARC-IP-Address\":\"52.43.142.96\",\"WARC-Target-URI\":\"https://www.softmath.com/tutorials-3/algebra-formulas/intermediate-algebra-topics.html\",\"WARC-Payload-Digest\":\"sha1:V7WFBNW7WRFY6LX7GPYJHRF2MR26PUAD\",\"WARC-Block-Digest\":\"sha1:ZR76ANMSOTIH426AIEEIUYZ7546TCWWB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655912255.54_warc_CC-MAIN-20200710210528-20200711000528-00226.warc.gz\"}"} |
https://www.esaral.com/q/indian-style-of-cooling-drinking-water-is-to-keep-it-in-a-pitcher-having-porous-walls-56304 | [
"# Indian style of cooling drinking water is to keep it in a pitcher having porous walls.\n\nQuestion:\n\nIndian style of cooling drinking water is to keep it in a pitcher having porous walls. Water comes to the outer surface very slowly and evaporates. Most of the energy needed for evaporation is taken from the water itself and the water is cooled down. Assume that a pitcher is contains $10 \\mathrm{~kg}$ of water and $0.2 \\mathrm{~g}$ of water comes out per second. Assuming no backward heat transfer from the atmosphere to the water, calculate the time in which the temperature decreases by $5^{\\circ} \\mathrm{C}$. Specific heat capacity of water=4200 $\\mathrm{J} / \\mathrm{kg}^{\\circ} \\mathrm{C}$ and latent heat of vaporization of water $=2.27 \\times 10^{6} \\mathrm{~J} / \\mathrm{kg}$.\n\nSolution:",
null,
""
] | [
null,
"https://www.esaral.com/media/uploads/2022/03/24/image51564.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7558296,"math_prob":0.99728966,"size":3812,"snap":"2022-40-2023-06","text_gpt3_token_len":967,"char_repetition_ratio":0.1454832,"word_repetition_ratio":0.8019802,"special_character_ratio":0.2224554,"punctuation_ratio":0.049115915,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9516227,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-02T11:59:03Z\",\"WARC-Record-ID\":\"<urn:uuid:46b4bc62-8b57-4699-a25a-1deb63769647>\",\"Content-Length\":\"467922\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a1170519-72b9-4114-907b-8cf1dfd76497>\",\"WARC-Concurrent-To\":\"<urn:uuid:d188c747-bee3-46f5-9ad2-ca49a68af35e>\",\"WARC-IP-Address\":\"172.67.213.11\",\"WARC-Target-URI\":\"https://www.esaral.com/q/indian-style-of-cooling-drinking-water-is-to-keep-it-in-a-pitcher-having-porous-walls-56304\",\"WARC-Payload-Digest\":\"sha1:FQSGMONYET3PJBWWPOBC6V7552536TEN\",\"WARC-Block-Digest\":\"sha1:523RGBVA5TJUWGWVB2D2FYWB6N4G6BGJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500017.27_warc_CC-MAIN-20230202101933-20230202131933-00548.warc.gz\"}"} |
https://dev.widemeadows.de/2016/12/12/getting-an-image-into-and-out-of-tensorflow/ | [
"• ## Getting an image into and out of TensorFlow\n\nLet’s assume you already have an image in numpy’s ndarray format, e.g. because you loaded it with OpenCV’s imread() function, and you want to convert it to TensorFlow’s Tensor format and later back to ndarray.\n\nThat’s essentially three calls to TensorFlow:\n\nimport cv2\nimport tensorflow as tf\nimport numpy as np\n\n# normalize the pixel values to 0..1 range and convert them\n# to a single-precision tensor\nt = tf.convert_to_tensor(image_in, dtype=tf.float32)\nassert isinstance(t, tf.Tensor)\n\n# in order to convert the tensor back to an array, we need\n# to evaluate it; for this, we need a session\nwith tf.Session() as sess:\nimage_out = sess.run(fetches=t)\nassert isinstance(image_out, np.ndarray)\n\n# for imshow to work, the image needs to be in 0..1 range\n# whenever it is a float; that's why we normalized it.\ncv2.imshow('Image', image_out)\n\nNote that instead of using sess.run(t) we could also have used\nwith tf.Session() as sess:\n\nwhich essentially performs the same action. A benefit of using sess.run() directly is that we can fetch more than one tensor in the same pass through the (sub-)graph (say, tuple = sess.run(fetches=[t1, t2, t3])), whereas calling tensor.eval() always results in one separate pass per call."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8083948,"math_prob":0.91139454,"size":1295,"snap":"2022-27-2022-33","text_gpt3_token_len":323,"char_repetition_ratio":0.1045701,"word_repetition_ratio":0.02020202,"special_character_ratio":0.25791505,"punctuation_ratio":0.16423358,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98149914,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-07T04:25:37Z\",\"WARC-Record-ID\":\"<urn:uuid:843be0ed-a73c-4c84-8a85-0185d476a926>\",\"Content-Length\":\"57230\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:acfa0149-2cd1-47a4-8794-66414bb607c3>\",\"WARC-Concurrent-To\":\"<urn:uuid:31700a0b-7098-49b2-9fce-7e575a7a224b>\",\"WARC-IP-Address\":\"5.189.173.15\",\"WARC-Target-URI\":\"https://dev.widemeadows.de/2016/12/12/getting-an-image-into-and-out-of-tensorflow/\",\"WARC-Payload-Digest\":\"sha1:D7MNK7IU3I6DEWV52OJLGFPTEM2UUVNR\",\"WARC-Block-Digest\":\"sha1:HF2YXEMIEFN4SD3NHCU6OSABN565RI3L\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104683683.99_warc_CC-MAIN-20220707033101-20220707063101-00451.warc.gz\"}"} |
https://www.rdocumentation.org/packages/fts/versions/0.7.5/topics/fts | [
"# fts\n\nFrom fts v0.7.5\n0th\n\nPercentile\n\n##### Fts: a fast timeseries library\n\ncreate an fts object by specifying dates and data\n\nKeywords\nts\n##### Usage\nfts(data, dates)## S3 method for class 'fts':\n## S3 method for class 'fts':\ntail(x, n = 10,\\dots)\n## S3 method for class 'fts':\ndates(x)\n## S3 method for class 'fts':\ndates(x) <- value\n##### Arguments\ndata\na matrix, dataframe, or vector\ndates\na vector of dates (default format is POSIXct)\nx\nan fts object\nn\nan integer specifying the number of rows\nvalue\na vector of POSIXct dates\n...\nfurther arguments to function\n##### Details\n\nfts is an S3 class in which the fts object is represented as a native R matrix and the dates are attached as an attribute to the matrix\n\n##### Value\n\n• a fts object\n\nas.fts\n\n• fts\n• Ops.fts\n• [.fts\n• [<-.fts\n• dates\n• dates<-\n• dates.fts\n• dates<-.fts\n• cbind.fts\n• rbind.fts\n• tail.fts\n• print.fts\n• plot.fts\n##### Examples\nx <- fts()\ny <- fts(data=1:100)\nz <- fts(dates=seq.POSIXt(from=as.POSIXct(Sys.time()),by=\"months\",length.out=12))\n\nxx <- x[1:10,]\n\n## intersection of dates is taken for Arith methods\nxyp <- x + y\nxys <- x - y\nxym <- x * y\nxyd <- x / y\nxyg <- x > y\nxyl <- x < y\n\nplot(x)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.633716,"math_prob":0.9432893,"size":1133,"snap":"2019-51-2020-05","text_gpt3_token_len":366,"char_repetition_ratio":0.121346325,"word_repetition_ratio":0.07109005,"special_character_ratio":0.33362755,"punctuation_ratio":0.12184874,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9962945,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T23:09:42Z\",\"WARC-Record-ID\":\"<urn:uuid:80521bbc-209f-4393-b783-c1601a5f32b8>\",\"Content-Length\":\"15052\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b257db1-6bfc-4724-a349-d4e857439d53>\",\"WARC-Concurrent-To\":\"<urn:uuid:13cc089d-1f20-4425-90c0-097199922eb1>\",\"WARC-IP-Address\":\"34.238.223.233\",\"WARC-Target-URI\":\"https://www.rdocumentation.org/packages/fts/versions/0.7.5/topics/fts\",\"WARC-Payload-Digest\":\"sha1:YUSESLTU33DIUZ5XQKE7K6O4IGZK6MPY\",\"WARC-Block-Digest\":\"sha1:CZV7MD4J7WG2Y5WTEE6PG3LH33DBTAWD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541310970.85_warc_CC-MAIN-20191215225643-20191216013643-00363.warc.gz\"}"} |
https://mymathangels.com/solve-linear-systems-where-some-of-the-equations-have-missing-terms/ | [
"# Solve linear systems where some of the equations have missing terms\n\nIf a linear system has an equation missing a term or terms, one elimination step can be omitted.\n\nExample\n\nSolve the system\n\n6x – 12y = -5 (1)\n\n8y + z = 0 (2)\n\n9x – z = 12 (3)\n\nSince equation (3) is missing the variable y, a good way to begin the solution is to eliminate y using equations (1) and (2).\n\n12x – 24y = – 10 Multiply each side of (1) by 2.\n\n24y + 3z = 0 Multiply each side of (2) by 3.\n\n12x + 3z = -10 Add. (4)\n\nUse this result, together with equation (3), 9x – z = 12, to eliminate z. Multiply equation (3) by 3.\n\nThis gives\n\n27x – 3z = 36 Multiply each side of (3) by 3.\n\n12x + 3z = – 10 (4)\n\n39x = 26 Add.\n\nx = 26/39 = 2/3.\n\nSubstituting into equation (3) gives\n\n9x – z = 12 (3)\n\n9(2/3) – z = 12 Let x = 2/3.\n\n6 – z = 12\n\nz = – 6\n\nSubstituting -6 for z in equation (2) gives\n\n8y + z = 0 (2)\n\n8y – 6 = 0 Let z = – 6.\n\n8y = 6\n\ny = 3/4.\n\nThus, x = 2/3, y = 3/4, and z = -6. Check these values in each of the original equations of the system to verify that the solution set of the system is {(2/3, 3/4, -6)}."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85703117,"math_prob":1.0000057,"size":1017,"snap":"2023-40-2023-50","text_gpt3_token_len":359,"char_repetition_ratio":0.14807503,"word_repetition_ratio":0.050209206,"special_character_ratio":0.4218289,"punctuation_ratio":0.108597286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999815,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null,7,null,null,null,7,null,null,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T22:51:46Z\",\"WARC-Record-ID\":\"<urn:uuid:c16bcbd6-c1d3-4126-a225-31a8d8abea24>\",\"Content-Length\":\"64549\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e3a98abf-99c6-4279-8c71-ae199e9057c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b60a7bc-4971-4b0c-a8ed-74ae7c79a0a7>\",\"WARC-IP-Address\":\"185.212.70.183\",\"WARC-Target-URI\":\"https://mymathangels.com/solve-linear-systems-where-some-of-the-equations-have-missing-terms/\",\"WARC-Payload-Digest\":\"sha1:SAG3GSHTCUW4JAJ5WOOTB7JXYJZBWE32\",\"WARC-Block-Digest\":\"sha1:X3GJ2DAEQNRWL7UBTWJM2NNHY6KYO77C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100779.51_warc_CC-MAIN-20231208212357-20231209002357-00174.warc.gz\"}"} |
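The elimination above can be checked mechanically. A short Python sketch that retraces the same steps with exact rational arithmetic (the variable names are just for illustration):

```python
from fractions import Fraction

# System from the worked example:
#   6x - 12y = -5   (1)
#   8y +   z =  0   (2)
#   9x -   z = 12   (3)
# 2*(1) + 3*(2) gives 12x + 3z = -10; adding 3*(3) gives 39x = 26.
x = Fraction(26, 39)   # simplifies to 2/3
z = 9 * x - 12         # from (3): z = 9x - 12
y = -z / 8             # from (2): y = -z/8

# every original equation should balance exactly
checks = (6 * x - 12 * y == -5, 8 * y + z == 0, 9 * x - z == 12)
```

Using `Fraction` avoids floating-point rounding, so each equation balances exactly rather than to within a tolerance.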
http://ixtrieve.fh-koeln.de/birds/litie/document/14148 | [
"# Document (#14148)\n\nAuthor\nVinkler, P.\nTitle\nSome practical aspects of the standardization of scientometric indicators\nSource\nScientometrics. 35(1996) no.2, S.237-245\nYear\n1996\nAbstract\nIn the present stage of scientometrics, indicators published are mostly incomparable, which fact impedes the development of the field and makes the users of scientometric results mistrustful. Consequently, standardization of data, methods, indicators and their presentation is urgently needed. For instance, the time periods applied should be standardized across fields and subfields in calculating citation and publication indicators\nTheme\nInformetrie\n\n## Similar documents (author)\n\n1. Vinkler, P.: ¬A quasi-quantitative citation model (1987) 5.91\n```5.9096622 = sum of:\n5.9096622 = weight(author_txt:vinkler in 2299) [ClassicSimilarity], result of:\n5.9096622 = fieldWeight in 2299, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.45546 = idf(docFreq=8, maxDocs=42306)\n0.625 = fieldNorm(doc=2299)\n```\n2. Vinkler, P.: Relationships between the rate of scientific development and citations : the chance for citedness model (1996) 5.91\n```5.9096622 = sum of:\n5.9096622 = weight(author_txt:vinkler in 5146) [ClassicSimilarity], result of:\n5.9096622 = fieldWeight in 5146, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.45546 = idf(docFreq=8, maxDocs=42306)\n0.625 = fieldNorm(doc=5146)\n```\n3. Vinkler, P.: Characterization of the impact of sets of scientific papers : the Garfield (impact) Factor (2004) 5.91\n```5.9096622 = sum of:\n5.9096622 = weight(author_txt:vinkler in 3228) [ClassicSimilarity], result of:\n5.9096622 = fieldWeight in 3228, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.45546 = idf(docFreq=8, maxDocs=42306)\n0.625 = fieldNorm(doc=3228)\n```\n4. 
Vinkler, P.: ¬The institutionalization of scientific information : a scientometric model (ISI-S Model) (2002) 5.91\n```5.9096622 = sum of:\n5.9096622 = weight(author_txt:vinkler in 1943) [ClassicSimilarity], result of:\n5.9096622 = fieldWeight in 1943, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.45546 = idf(docFreq=8, maxDocs=42306)\n0.625 = fieldNorm(doc=1943)\n```\n5. Vinkler, P.: Application of the distribution of citations among publications in scientometric evaluations (2011) 5.91\n```5.9096622 = sum of:\n5.9096622 = weight(author_txt:vinkler in 1770) [ClassicSimilarity], result of:\n5.9096622 = fieldWeight in 1770, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.45546 = idf(docFreq=8, maxDocs=42306)\n0.625 = fieldNorm(doc=1770)\n```\n\n## Similar documents (content)\n\n1. Thelwall, M.; Sud, P.: Mendeley readership counts : an investigation of temporal and disciplinary differences (2016) 0.23\n```0.23198253 = sum of:\n0.23198253 = product of:\n0.7249454 = sum of:\n0.04216736 = weight(abstract_txt:citation in 130) [ClassicSimilarity], result of:\n0.04216736 = score(doc=130,freq=2.0), product of:\n0.077508345 = queryWeight, product of:\n1.0184498 = boost\n4.9240556 = idf(docFreq=835, maxDocs=42306)\n0.015455601 = queryNorm\n0.5440364 = fieldWeight in 130, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n4.9240556 = idf(docFreq=835, maxDocs=42306)\n0.078125 = fieldNorm(doc=130)\n0.032775223 = weight(abstract_txt:fields in 130) [ClassicSimilarity], result of:\n0.032775223 = score(doc=130,freq=1.0), product of:\n0.08255396 = queryWeight, product of:\n1.0510765 = boost\n5.0818014 = idf(docFreq=713, maxDocs=42306)\n0.015455601 = queryNorm\n0.39701575 = fieldWeight in 130, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.0818014 = idf(docFreq=713, maxDocs=42306)\n0.078125 = fieldNorm(doc=130)\n0.036748327 = weight(abstract_txt:needed in 130) [ClassicSimilarity], result 
of:\n0.036748327 = score(doc=130,freq=1.0), product of:\n0.089097574 = queryWeight, product of:\n1.0919389 = boost\n5.2793646 = idf(docFreq=585, maxDocs=42306)\n0.015455601 = queryNorm\n0.41245037 = fieldWeight in 130, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.2793646 = idf(docFreq=585, maxDocs=42306)\n0.078125 = fieldNorm(doc=130)\n0.038845085 = weight(abstract_txt:publication in 130) [ClassicSimilarity], result of:\n0.038845085 = score(doc=130,freq=1.0), product of:\n0.092455246 = queryWeight, product of:\n1.1123236 = boost\n5.377922 = idf(docFreq=530, maxDocs=42306)\n0.015455601 = queryNorm\n0.42015016 = fieldWeight in 130, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.377922 = idf(docFreq=530, maxDocs=42306)\n0.078125 = fieldNorm(doc=130)\n0.043062676 = weight(abstract_txt:makes in 130) [ClassicSimilarity], result of:\n0.043062676 = score(doc=130,freq=1.0), product of:\n0.099031836 = queryWeight, product of:\n1.1512053 = boost\n5.5659094 = idf(docFreq=439, maxDocs=42306)\n0.015455601 = queryNorm\n0.4348367 = fieldWeight in 130, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.5659094 = idf(docFreq=439, maxDocs=42306)\n0.078125 = fieldNorm(doc=130)\n0.09226169 = weight(abstract_txt:periods in 130) [ClassicSimilarity], result of:\n0.09226169 = score(doc=130,freq=1.0), product of:\n0.16458431 = queryWeight, product of:\n1.4840878 = boost\n7.1753473 = idf(docFreq=87, maxDocs=42306)\n0.015455601 = queryNorm\n0.560574 = fieldWeight in 130, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.1753473 = idf(docFreq=87, maxDocs=42306)\n0.078125 = fieldNorm(doc=130)\n0.12383625 = weight(abstract_txt:subfields in 130) [ClassicSimilarity], result of:\n0.12383625 = score(doc=130,freq=1.0), product of:\n0.20026545 = queryWeight, product of:\n1.6370744 = boost\n7.9150147 = idf(docFreq=41, maxDocs=42306)\n0.015455601 = queryNorm\n0.6183605 = fieldWeight in 130, product of:\n1.0 = 
tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.9150147 = idf(docFreq=41, maxDocs=42306)\n0.078125 = fieldNorm(doc=130)\n0.31524885 = weight(abstract_txt:indicators in 130) [ClassicSimilarity], result of:\n0.31524885 = score(doc=130,freq=2.0), product of:\n0.4704208 = queryWeight, product of:\n5.018092 = boost\n6.0654354 = idf(docFreq=266, maxDocs=42306)\n0.015455601 = queryNorm\n0.67014223 = fieldWeight in 130, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.0654354 = idf(docFreq=266, maxDocs=42306)\n0.078125 = fieldNorm(doc=130)\n0.32 = coord(8/25)\n```\n2. Waltman, L.; Schreiber, M.: On the calculation of percentile-based bibliometric indicators (2013) 0.15\n```0.14718619 = sum of:\n0.14718619 = product of:\n0.7359309 = sum of:\n0.04216736 = weight(abstract_txt:citation in 2617) [ClassicSimilarity], result of:\n0.04216736 = score(doc=2617,freq=2.0), product of:\n0.077508345 = queryWeight, product of:\n1.0184498 = boost\n4.9240556 = idf(docFreq=835, maxDocs=42306)\n0.015455601 = queryNorm\n0.5440364 = fieldWeight in 2617, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n4.9240556 = idf(docFreq=835, maxDocs=42306)\n0.078125 = fieldNorm(doc=2617)\n0.032775223 = weight(abstract_txt:fields in 2617) [ClassicSimilarity], result of:\n0.032775223 = score(doc=2617,freq=1.0), product of:\n0.08255396 = queryWeight, product of:\n1.0510765 = boost\n5.0818014 = idf(docFreq=713, maxDocs=42306)\n0.015455601 = queryNorm\n0.39701575 = fieldWeight in 2617, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.0818014 = idf(docFreq=713, maxDocs=42306)\n0.078125 = fieldNorm(doc=2617)\n0.079654984 = weight(abstract_txt:instance in 2617) [ClassicSimilarity], result of:\n0.079654984 = score(doc=2617,freq=1.0), product of:\n0.14922771 = queryWeight, product of:\n1.413156 = boost\n6.8324027 = idf(docFreq=123, maxDocs=42306)\n0.015455601 = queryNorm\n0.53378147 = fieldWeight in 2617, product of:\n1.0 = tf(freq=1.0), 
with freq of:\n1.0 = termFreq=1.0\n6.8324027 = idf(docFreq=123, maxDocs=42306)\n0.078125 = fieldNorm(doc=2617)\n0.13550411 = weight(abstract_txt:calculating in 2617) [ClassicSimilarity], result of:\n0.13550411 = score(doc=2617,freq=1.0), product of:\n0.2126551 = queryWeight, product of:\n1.6869541 = boost\n8.156177 = idf(docFreq=32, maxDocs=42306)\n0.015455601 = queryNorm\n0.6372013 = fieldWeight in 2617, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.156177 = idf(docFreq=32, maxDocs=42306)\n0.078125 = fieldNorm(doc=2617)\n0.4458292 = weight(abstract_txt:indicators in 2617) [ClassicSimilarity], result of:\n0.4458292 = score(doc=2617,freq=4.0), product of:\n0.4704208 = queryWeight, product of:\n5.018092 = boost\n6.0654354 = idf(docFreq=266, maxDocs=42306)\n0.015455601 = queryNorm\n0.9477243 = fieldWeight in 2617, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n6.0654354 = idf(docFreq=266, maxDocs=42306)\n0.078125 = fieldNorm(doc=2617)\n0.2 = coord(5/25)\n```\n3. 
Nederhof, A.J.; Visser, M.S.: Quantitative deconstruction of citation impact indicators : waxing field impact but waning journal impact (2004) 0.14\n```0.14136697 = sum of:\n0.14136697 = product of:\n0.5890291 = sum of:\n0.041315407 = weight(abstract_txt:citation in 420) [ClassicSimilarity], result of:\n0.041315407 = score(doc=420,freq=3.0), product of:\n0.077508345 = queryWeight, product of:\n1.0184498 = boost\n4.9240556 = idf(docFreq=835, maxDocs=42306)\n0.015455601 = queryNorm\n0.53304464 = fieldWeight in 420, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n4.9240556 = idf(docFreq=835, maxDocs=42306)\n0.0625 = fieldNorm(doc=420)\n0.02777518 = weight(abstract_txt:across in 420) [ClassicSimilarity], result of:\n0.02777518 = score(doc=420,freq=1.0), product of:\n0.08578646 = queryWeight, product of:\n1.071457 = boost\n5.180338 = idf(docFreq=646, maxDocs=42306)\n0.015455601 = queryNorm\n0.32377112 = fieldWeight in 420, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.180338 = idf(docFreq=646, maxDocs=42306)\n0.0625 = fieldNorm(doc=420)\n0.053825326 = weight(abstract_txt:publication in 420) [ClassicSimilarity], result of:\n0.053825326 = score(doc=420,freq=3.0), product of:\n0.092455246 = queryWeight, product of:\n1.1123236 = boost\n5.377922 = idf(docFreq=530, maxDocs=42306)\n0.015455601 = queryNorm\n0.5821771 = fieldWeight in 420, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n5.377922 = idf(docFreq=530, maxDocs=42306)\n0.0625 = fieldNorm(doc=420)\n0.07380935 = weight(abstract_txt:periods in 420) [ClassicSimilarity], result of:\n0.07380935 = score(doc=420,freq=1.0), product of:\n0.16458431 = queryWeight, product of:\n1.4840878 = boost\n7.1753473 = idf(docFreq=87, maxDocs=42306)\n0.015455601 = queryNorm\n0.4484592 = fieldWeight in 420, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.1753473 = idf(docFreq=87, maxDocs=42306)\n0.0625 = fieldNorm(doc=420)\n0.14010471 = 
weight(abstract_txt:subfields in 420) [ClassicSimilarity], result of:\n0.14010471 = score(doc=420,freq=2.0), product of:\n0.20026545 = queryWeight, product of:\n1.6370744 = boost\n7.9150147 = idf(docFreq=41, maxDocs=42306)\n0.015455601 = queryNorm\n0.69959503 = fieldWeight in 420, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n7.9150147 = idf(docFreq=41, maxDocs=42306)\n0.0625 = fieldNorm(doc=420)\n0.25219908 = weight(abstract_txt:indicators in 420) [ClassicSimilarity], result of:\n0.25219908 = score(doc=420,freq=2.0), product of:\n0.4704208 = queryWeight, product of:\n5.018092 = boost\n6.0654354 = idf(docFreq=266, maxDocs=42306)\n0.015455601 = queryNorm\n0.5361138 = fieldWeight in 420, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.0654354 = idf(docFreq=266, maxDocs=42306)\n0.0625 = fieldNorm(doc=420)\n0.24 = coord(6/25)\n```\n4. Moed, H.F.: Differences in the construction of SCI based bibliometric indicators among various producers : a first overview (1996) 0.13\n```0.13492307 = sum of:\n0.13492307 = product of:\n0.8432692 = sum of:\n0.039515786 = weight(abstract_txt:applied in 5142) [ClassicSimilarity], result of:\n0.039515786 = score(doc=5142,freq=1.0), product of:\n0.07472557 = queryWeight, product of:\n4.8348536 = idf(docFreq=913, maxDocs=42306)\n0.015455601 = queryNorm\n0.5288121 = fieldWeight in 5142, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.8348536 = idf(docFreq=913, maxDocs=42306)\n0.109375 = fieldNorm(doc=5142)\n0.041743558 = weight(abstract_txt:citation in 5142) [ClassicSimilarity], result of:\n0.041743558 = score(doc=5142,freq=1.0), product of:\n0.077508345 = queryWeight, product of:\n1.0184498 = boost\n4.9240556 = idf(docFreq=835, maxDocs=42306)\n0.015455601 = queryNorm\n0.53856856 = fieldWeight in 5142, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.9240556 = idf(docFreq=835, maxDocs=42306)\n0.109375 = fieldNorm(doc=5142)\n0.22147071 = 
weight(abstract_txt:standardization in 5142) [ClassicSimilarity], result of:\n0.22147071 = score(doc=5142,freq=1.0), product of:\n0.2970592 = queryWeight, product of:\n2.8196933 = boost\n6.8164024 = idf(docFreq=125, maxDocs=42306)\n0.015455601 = queryNorm\n0.745544 = fieldWeight in 5142, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.8164024 = idf(docFreq=125, maxDocs=42306)\n0.109375 = fieldNorm(doc=5142)\n0.54053915 = weight(abstract_txt:indicators in 5142) [ClassicSimilarity], result of:\n0.54053915 = score(doc=5142,freq=3.0), product of:\n0.4704208 = queryWeight, product of:\n5.018092 = boost\n6.0654354 = idf(docFreq=266, maxDocs=42306)\n0.015455601 = queryNorm\n1.1490545 = fieldWeight in 5142, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n6.0654354 = idf(docFreq=266, maxDocs=42306)\n0.109375 = fieldNorm(doc=5142)\n0.16 = coord(4/25)\n```\n5. Vinkler, P.: Core indicators and professional recognition of scientometricians (2017) 0.13\n```0.13120246 = sum of:\n0.13120246 = product of:\n0.6560123 = sum of:\n0.023853462 = weight(abstract_txt:citation in 248) [ClassicSimilarity], result of:\n0.023853462 = score(doc=248,freq=1.0), product of:\n0.077508345 = queryWeight, product of:\n1.0184498 = boost\n4.9240556 = idf(docFreq=835, maxDocs=42306)\n0.015455601 = queryNorm\n0.30775347 = fieldWeight in 248, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.9240556 = idf(docFreq=835, maxDocs=42306)\n0.0625 = fieldNorm(doc=248)\n0.03107607 = weight(abstract_txt:publication in 248) [ClassicSimilarity], result of:\n0.03107607 = score(doc=248,freq=1.0), product of:\n0.092455246 = queryWeight, product of:\n1.1123236 = boost\n5.377922 = idf(docFreq=530, maxDocs=42306)\n0.015455601 = queryNorm\n0.33612013 = fieldWeight in 248, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.377922 = idf(docFreq=530, maxDocs=42306)\n0.0625 = fieldNorm(doc=248)\n0.13207716 = 
weight(abstract_txt:scientometrics in 248) [ClassicSimilarity], result of:\n0.13207716 = score(doc=248,freq=2.0), product of:\n0.19254075 = queryWeight, product of:\n1.605191 = boost\n7.760864 = idf(docFreq=48, maxDocs=42306)\n0.015455601 = queryNorm\n0.6859699 = fieldWeight in 248, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n7.760864 = idf(docFreq=48, maxDocs=42306)\n0.0625 = fieldNorm(doc=248)\n0.21680656 = weight(abstract_txt:scientometric in 248) [ClassicSimilarity], result of:\n0.21680656 = score(doc=248,freq=1.0), product of:\n0.4253102 = queryWeight, product of:\n3.3739083 = boost\n8.156177 = idf(docFreq=32, maxDocs=42306)\n0.015455601 = queryNorm\n0.50976104 = fieldWeight in 248, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.156177 = idf(docFreq=32, maxDocs=42306)\n0.0625 = fieldNorm(doc=248)\n0.25219908 = weight(abstract_txt:indicators in 248) [ClassicSimilarity], result of:\n0.25219908 = score(doc=248,freq=2.0), product of:\n0.4704208 = queryWeight, product of:\n5.018092 = boost\n6.0654354 = idf(docFreq=266, maxDocs=42306)\n0.015455601 = queryNorm\n0.5361138 = fieldWeight in 248, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.0654354 = idf(docFreq=266, maxDocs=42306)\n0.0625 = fieldNorm(doc=248)\n0.2 = coord(5/25)\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.68710303,"math_prob":0.998353,"size":13069,"snap":"2019-51-2020-05","text_gpt3_token_len":4947,"char_repetition_ratio":0.2384998,"word_repetition_ratio":0.44247788,"special_character_ratio":0.5291912,"punctuation_ratio":0.2833887,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99986196,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T08:12:46Z\",\"WARC-Record-ID\":\"<urn:uuid:6bf4e24e-f036-4a0c-a53d-8cdec17210f6>\",\"Content-Length\":\"26390\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e9de8ba6-5ea8-4ff7-99c7-5e64ea976ab5>\",\"WARC-Concurrent-To\":\"<urn:uuid:a56d962e-99cc-4bd5-a579-f88e1bf9ce8b>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"http://ixtrieve.fh-koeln.de/birds/litie/document/14148\",\"WARC-Payload-Digest\":\"sha1:X563IFFT6GTZ6ZIC44DZBDCEFJEMK5DH\",\"WARC-Block-Digest\":\"sha1:TX65WA7SJW2LJCECRYD6EMFLGXFIVAYQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540585566.60_warc_CC-MAIN-20191214070158-20191214094158-00156.warc.gz\"}"} |
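The "Similar documents" listings above are raw Lucene explain output: each fieldWeight is tf × idf × fieldNorm under the classic TF-IDF similarity. A small Python sketch reproduces the repeated 5.9096622 author-match figure (the formulas follow Lucene's documented ClassicSimilarity, tf = √freq and idf = 1 + ln(maxDocs/(docFreq+1)); the tiny residual difference comes from float32 rounding inside Lucene):

```python
import math

def classic_idf(doc_freq, max_docs):
    """Lucene ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def field_weight(freq, doc_freq, max_docs, field_norm):
    """tf (sqrt of term frequency) * idf * fieldNorm, as in the explain dump."""
    return math.sqrt(freq) * classic_idf(doc_freq, max_docs) * field_norm

# the author-match score repeated above:
# 5.9096622 = tf(freq=1) * idf(docFreq=8, maxDocs=42306) * fieldNorm(0.625)
w = field_weight(1.0, 8, 42306, 0.625)
```

This also explains why every Vinkler author match scores identically: same term frequency, same document frequency, same field norm.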
http://shodor.org/interactivate/activities/ | [
"# Activities\n\nAlgebra\nBuild your own polygon and transform it in the Cartesian coordinate system. Experiment with reflections across any line, revolving around any line (which yields a 3-D image), rotations about any point, and translations in any direction.\n\nRelated Topics: angles, calculus, coordinate, coordinate plane, coordinate system, flips, geometry, glides, graph, polygon, polyhedra, reflections, rotation, symmetry, transformation, translation, transpose\n\nStudents play a generalized version of connect four, gaining the chance to place a piece on the board by solving an algebraic equation. Parameters: Level of difficulty of equations to solve and type of problem. Algebra Four is one of the Interactivate assessment games.\n\nRelated Topics: algebra, inverse, linear equations, quadratic, solving equations\n\nTest your algebra skills by answering questions. This quiz asks you to solve algebraic linear and quadratic equations of one variable. Choose difficulty level, question types, and time limit. Algebra Quiz is one of the Interactivate assessment quizzes.\n\nRelated Topics: addition, algebra, assessment, distributive, division, exponents, fractions, integers, inverse, linear equations, multiplication, negative number, polynomial, quadratic, solving equations, subtraction\n\nCreate your own affine cipher for encoding and decoding messages. Input your own constant and multiplier, then input a message to encode.\n\nRelated Topics: addition, arithmetic, cipher, cryptography, division, modular, multiplication, pattern, remainders\n\nEncode and decode messages to determine the form for an affine cipher. Input a message to encode, then input your guesses for the constant and multiplier. 
Caesar Cipher II is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, arithmetic, assessment, cipher, cryptography, division, modular, multiplication, pattern, remainders\n\nDecode encrypted messages to determine the form for an affine cipher, and practice your reasoning and arithmetic skills. Input your guesses for the multiplier and constant. Caesar Cipher III is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, arithmetic, assessment, cipher, cryptography, division, functions, modular, multiplication, pattern, remainders\n\nManipulate different types of conic section equations on a coordinate plane using slider bars. Learn how each constant and coefficient affects the resulting graph. Choose from vertical or horizontal parabola, circle, ellipse, and vertical or horizontal hyperbola.\n\nRelated Topics: algebra, cartesian coordinate, circles, conic section, coordinate plane, ellipse, function properties, functions, graph, hyperbola, parabola, pre-calculus, radius, slides, translation\n\nExplore cross sections of different geometric solids: cone, double cone, cylinder, pyramid, and prism. Manipulate the cross section with slider bars, and see how the graphical representation changes.\n\nRelated Topics: algebra, calculus, cartesian coordinate, circles, conic section, coordinate system, ellipse, flyer, graph, hyperbola, parabola, polygon, polyhedra, pre-calculus, prisms, pyramid, radius, rotation, transformation\n\nEnter a set of data points, then derive a function to fit those points. Manipulate the function on a coordinate plane using slider bars. 
Learn how each constant and coefficient affects the resulting graph.\n\nRelated Topics: algebra, cartesian coordinate, coordinate, coordinate plane, coordinate system, cosecant, cosine, cotangent, curve fitting, data plot, deviations, exponential, exponents, function properties, functions, graph, grouping, integers, intervals, inverse, linear equations, linear functions, logarithm, multiplication, parabola, polynomial, positive part of the operand, pre-calculus, range, secant, sine, slope, tangent, trigonometry\n\nView the graph and the equation of the line tangent to any function at any point on the function.\n\nRelated Topics: calculus, cartesian coordinate, coordinate plane, derivative, differentiate, function properties, graph, linear equations, slides, slope, tangent\n\nThis activity helps you understand how to balance an equation. You input the term and the operation. The activity uses that term and operates on both sides of the equation. It then displays the resulting equation. Equation Solver is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, algebra, assessment, division, fractions, identity, integers, inverse, linear equations, multiplication, solving equations, subtraction\n\nStudents play a generalized version of connect four, gaining the chance to place a piece after simplifying fractions, converting fractions to decimals and percentages, and answering algebra questions involving fractions. Choose difficulty level, question types, and time. Fraction Four is one of the Interactivate assessment games.\n\nRelated Topics: addition, algebra, arithmetic, assessment, decimals, division, fractions, mixed numbers, multiplication, percentages, percents, rational numbers, solving equations, subtraction\n\nTest your fraction skills by answering questions. This quiz asks you to simplify fractions, convert fractions to decimals and percentages, and answer algebra questions involving fractions. 
Choose difficulty level, question types, and time. Fraction Quiz is one of the Interactivate assessment quizzes.\n\nRelated Topics: addition, algebra, arithmetic, decimals, division, divisors, fractions, integers, mixed numbers, multiplication, percentages, percents, subtraction\n\nA more advanced version of Slope Slider, this activity allows the manipulation of the constants and coefficients in any function thereby encouraging the user to explore the effects on the graph of the function by changing those numbers.\n\nRelated Topics: algebra, bell curve, calculus, cartesian coordinate, coordinate plane, coordinate system, cosine, decimals, exponential, flyer, function properties, functions, graph, intervals, inverse, linear equations, linear functions, lines, logarithm, parabola, polynomial, pre-calculus, range, sine, slope, tangent, translation, trigonometry\n\nStudents investigate very simple functions by trying to guess the algebraic form from inputs and outputs. Function Machine is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, algebra, arithmetic, assessment, function machine, functions, input, integers, linear equations, multiplication, output, subtraction\n\nInvestigate the Cartesian coordinate system through identifying the coordinates of a randomly generated point, or requesting that a particular point be plotted. 
General Coordinates Game is one of the Interactivate assessment explorers.\n\nRelated Topics: algebra, assessment, cartesian coordinate, coordinate, coordinate plane, coordinate system, graph\n\nStudents can create graphs of functions entered as algebraic expressions -- similar to a graphing calculator.\n\nRelated Topics: algebra, calculus, cartesian coordinate, coordinate plane, cosecant, cosine, cotangent, functions, graph, inverse, linear equations, linear functions, logarithm, parabola, polynomial, positive part of the operand, pre-calculus, range, secant, sine, slope, tangent, trigonometry\n\nCreate graphs of functions and sets of ordered pairs on the same coordinate plane. This is like a graphing calculator with advanced viewing options.\n\nRelated Topics: algebra, calculus, cartesian coordinate, coordinate, coordinate plane, coordinate system, cosecant, cosine, cotangent, curve fitting, data plot, exponential, exponents, function properties, functions, graph, graph theory, intervals, linear equations, linear functions, logarithm, parabola, polynomial, positive part of the operand, pre-calculus, range, secant, sine, slope, tangent, trigonometry\n\nThis applet allows the user to make observations about the relationship between speed and position and how both of these are affected by initial velocity and the incline on which the biker is traveling.\n\nRelated Topics: algebra, distance, functions, graph, parabola, pre-calculus, scale, simulation\n\nInteGreat! allows the user to visually explore the idea of integration through approximating the integral value with partitions. The user controls the number of partitions, the upper and lower limits, and the method used to estimate the integral.\n\nRelated Topics: area, calculus, estimation, function properties, functions, graph, infinity, integral, integrate, intervals, limit\n\nStudents investigate linear functions by trying to guess the slope and intercept from inputs and outputs. 
Linear Function Machine is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, algebra, arithmetic, assessment, function machine, functions, input, integers, linear functions, multiplication, output, pattern, slope\n\nStudents create linear inequalities and systems of linear inequalities on a coordinate plane. This is like a graphing calculator with advanced viewing options.\n\nRelated Topics: algebra, cartesian coordinate, coordinate plane, data plot, function properties, functions, graph, inequality, intervals, linear equations, linear functions\n\nInvestigate the Cartesian coordinate system by directing a robot through a mine field laid out on the plane. Maze Game is one of the Interactivate assessment explorers.\n\nRelated Topics: assessment, cartesian coordinate, coordinate, coordinate plane, coordinate system, graph\n\nEnter a set of data points and a function or multiple functions, then manipulate those functions to fit those points. Manipulate the function on a coordinate plane using slider bars. Learn how each constant and coefficient affects the resulting graph.\n\nRelated Topics: algebra, calculus, cartesian coordinate, coordinate, coordinate plane, coordinate system, cosecant, cosine, cotangent, curve fitting, data plot, deviations, flyer, function properties, functions, graph, linear functions, logarithm, parabola, polynomial, pre-calculus, secant, sine, slides, slope, squares, tangent, trigonometry\n\nStudents compare multiple independent variables as predictors of one dependent variable. Students explore correlation and lines of best-fit.\n\nRelated Topics: algebra, cartesian coordinate, coordinate, coordinate plane, coordinate system, data plot, deviations, flyer, functions, graph, linear functions, squares\n\nSimilar to the original \"Function Machine\" but lists input and output in a table and will not let the user attempt to guess the rule without having at least two data points. 
Number Cruncher is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, algebra, arithmetic, assessment, data plot, function machine, functions, input, linear functions, multiplication, output, solving equations, subtraction\n\nPlot ordered pairs on the graph, and they will be connected in the order that they are input. This enables you to decide how the pairs should be connected, rather than having the computer connect them from left to right.\n\nRelated Topics: cartesian coordinate, coordinate, coordinate plane, coordinate system, data plot, graph, lines, vertical line test\n\nThis activity allows the user to explore the polar coordinate system. The applet is similar to GraphIt, but instead allows users to explore the representation of a function in the polar coordinate system.\n\nRelated Topics: calculus, coordinate plane, coordinate system, cosine, data plot, graph, polar coordinates, pre-calculus, sine, tangent, trigonometry\n\nStudents investigate linear functions with positive slopes by trying to guess the slope and intercept from inputs and outputs. Positive Linear Function Machine is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, algebra, arithmetic, assessment, function machine, functions, input, integers, linear equations, linear functions, multiplication, output, pattern, slope, solving equations\n\nReview the properties of functions by looking at ten different curves and deciding whether or not they meet the criteria for a graph of a function. 
This activity simply displays the curves - it does not quiz the user.\n\nRelated Topics: coordinate plane, function properties, functions, graph, linear functions, lines, pre-calculus, vertical line test\n\nGraph recursive functions by defining f(0)=C and defining f(n) based on f(n-1).\n\nRelated Topics: algebra, arithmetic sequences, cartesian coordinate, coordinate plane, functions, geometric sequences, graph, iteration, recursion, recursive functions, sequences\n\nPlot a bivariate data set, determine the line of best fit for your data, and then check the accuracy of your line of best fit.\n\nRelated Topics: calculus, cartesian coordinate, coordinate plane, correlation, curve fitting, data plot, deviations, residual, statistics\n\nGraph ordered pairs and customize the graph title and axis labels. Points are connected from left to right, rather than being connected in the order they are entered.\n\nRelated Topics: cartesian coordinate, coordinate, coordinate plane, data plot, graph, statistics\n\nLearn about number patterns in sequences and recursions by specifying a starting number, multiplier, and add-on. The numbers in the sequence are displayed on a graph, and they are also listed below the graph.\n\nRelated Topics: addition, arithmetic, arithmetic sequences, cartesian coordinate, coordinate plane, decimals, geometric sequences, graph, iteration, multiplication, pre-calculus, recursion, recursive functions, sequences\n\nInvestigate the first quadrant of the Cartesian coordinate system through identifying the coordinates of randomly generated points, or requesting that a particular point be plotted. Simple Coordinates Game is one of the Interactivate assessment explorers.\n\nRelated Topics: algebra, cartesian coordinate, coordinate, coordinate plane, coordinate system, graph, integers\n\nInvestigate the first quadrant of the Cartesian coordinate system by directing a robot through a mine field laid out on the grid.
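Checking the accuracy of a hand-drawn line of best fit, as in the regression activity above, is conventionally done against the least-squares line. A sketch of the standard formulas in plain Python (no plotting; the function name is illustrative):

```python
def best_fit_line(points):
    """Least-squares slope and intercept for a list of (x, y) points."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    # Slope = covariance of x and y divided by variance of x.
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in points)
    sxx = sum((x - mean_x) ** 2 for x, _ in points)
    m = sxy / sxx
    return m, mean_y - m * mean_x

# Points lying exactly on y = 2x + 1 recover slope 2 and intercept 1.
slope, intercept = best_fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])
```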
Simple Maze Game is one of the Interactivate assessment explorers.\n\nRelated Topics: algebra, cartesian coordinate, coordinate, coordinate plane, coordinate system, distance, graph\n\nPlot ordered pairs of numbers, either as a scatter plot or with the dots connected. Points are connected from right to left, rather than being connected in the order they are entered.\n\nRelated Topics: cartesian coordinate, coordinate plane, coordinate system, data plot, decimals, graph\n\nSimilar to other \"flyers\", Slope Slider uses slider bars to explore the effect of the multiplier and constant on a linear function of the form f(x)=mx+b. Explore the relationship between slope and intercept in the Cartesian coordinate system.\n\nRelated Topics: algebra, cartesian coordinate, coordinate plane, decimals, flyer, fractions, function properties, functions, graph, intercept, linear equations, linear functions, slides, slope\n\nEnter two complex numbers (z and c) as ordered pairs of real numbers, then click a button to iterate step by step. The iterates are graphed in the x-y plane and printed out in table form. This is an introduction to the idea of prisoners/escapees in iterated functions and the calculation of fractal Julia sets.\n\nRelated Topics: complex number, coordinate plane, coordinate system, exponents, fractals, functions, graph, infinity, iteration, julia set, mandelbrot set, recursion, recursive functions, sets\n\nLearn about the vertical line test for functions by trying to connect points in the plane to build a function. When you have connected all of the points, you will be told if your graph is a valid graph of a function. Vertical Line Test is one of the Interactivate assessment explorers.\n\nRelated Topics: assessment, cartesian coordinate, coordinate plane, function properties, functions, graph, lines, pre-calculus, vertical line test\n\nGive input to the Whole Number Cruncher and try to guess what it did from the output it generates. 
This activity only generates multiplication and addition functions to avoid outputting any negative numbers. Whole Number Cruncher is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, algebra, arithmetic, assessment, coordinate, function machine, functions, input, linear equations, linear functions, multiplication, output, pattern, solving equations, whole numbers\n\nCalculus (...)\nBuild your own polygon and transform it in the Cartesian coordinate system. Experiment with reflections across any line, revolving around any line (which yields a 3-D image), rotations about any point, and translations in any direction.\n\nRelated Topics: angles, calculus, coordinate, coordinate plane, coordinate system, flips, geometry, glides, graph, polygon, polyhedra, reflections, rotation, symmetry, transformation, translation, transpose\n\nExplore cross sections of different geometric solids: cone, double cone, cylinder, pyramid, and prism. Manipulate the cross section with slider bars, and see how the graphical representation changes.\n\nRelated Topics: algebra, calculus, cartesian coordinate, circles, conic section, coordinate system, ellipse, flyer, graph, hyperbola, parabola, polygon, polyhedra, pre-calculus, prisms, pyramid, radius, rotation, transformation\n\nView the graph and the equation of the line tangent to any function at any point on the function.\n\nRelated Topics: calculus, cartesian coordinate, coordinate plane, derivative, differentiate, function properties, graph, linear equations, slides, slope, tangent\n\nA more advanced version of Slope Slider, this activity allows the manipulation of the constants and coefficients in any function thereby encouraging the user to explore the effects on the graph of the function by changing those numbers.\n\nRelated Topics: algebra, bell curve, calculus, cartesian coordinate, coordinate plane, coordinate system, cosine, decimals, exponential, flyer, function properties, functions, graph, intervals, 
inverse, linear equations, linear functions, lines, logarithm, parabola, polynomial, pre-calculus, range, sine, slope, tangent, translation, trigonometry\n\nThis activity allows the user to find the volume and surface area of various functions as they are rotated around axes. This applet can be used to practice finding integrals using the disk and washer methods of calculating volume.\n\nRelated Topics: calculus, cartesian coordinate, coordinate plane, coordinate system, cosine, functions, graph, sine\n\nStudents can create graphs of functions entered as algebraic expressions -- similar to a graphing calculator.\n\nRelated Topics: algebra, calculus, cartesian coordinate, coordinate plane, cosecant, cosine, cotangent, functions, graph, inverse, linear equations, linear functions, logarithm, parabola, polynomial, positive part of the operand, pre-calculus, range, secant, sine, slope, tangent, trigonometry\n\nCreate graphs of functions and sets of ordered pairs on the same coordinate plane. This is like a graphing calculator with advanced viewing options.\n\nRelated Topics: algebra, calculus, cartesian coordinate, coordinate, coordinate plane, coordinate system, cosecant, cosine, cotangent, curve fitting, data plot, exponential, exponents, function properties, functions, graph, graph theory, intervals, linear equations, linear functions, logarithm, parabola, polynomial, positive part of the operand, pre-calculus, range, secant, sine, slope, tangent, trigonometry\n\nInteGreat! allows the user to visually explore the idea of integration through approximating the integral value with partitions. The user controls the number of partitions, the upper and lower limits, and the method used to estimate the integral.\n\nRelated Topics: area, calculus, estimation, function properties, functions, graph, infinity, integral, integrate, intervals, limit\n\nEnter a set of data points and a function or multiple functions, then manipulate those functions to fit those points. 
Manipulate the function on a coordinate plane using slider bars. Learn how each constant and coefficient affects the resulting graph.\n\nRelated Topics: algebra, calculus, cartesian coordinate, coordinate, coordinate plane, coordinate system, cosecant, cosine, cotangent, curve fitting, data plot, deviations, flyer, function properties, functions, graph, linear functions, logarithm, parabola, polynomial, pre-calculus, secant, sine, slides, slope, squares, tangent, trigonometry\n\nThis activity allows the user to plot ordered pairs and parametric equations on the same coordinate plane. The applet is similar to GraphIt, but instead allows users to explore the parametric representation of a function.\n\nRelated Topics: calculus, coordinate plane, coordinate system, functions, graph, parametric equations, pre-calculus\n\nThis activity allows the user to explore the polar coordinate system. The applet is similar to GraphIt, but instead allows users to explore the representation of a function in the polar coordinate system.\n\nRelated Topics: calculus, coordinate plane, coordinate system, cosine, data plot, graph, polar coordinates, pre-calculus, sine, tangent, trigonometry\n\nExperiment with a simple ecosystem consisting of grass, rabbits, and wolves, learning about probabilities, chaos, and simulation.\n\nRelated Topics: agent modeling, calculus, chaos, graph, iteration, pattern, predator-prey, probability simulation, random number, simulation, statistics\n\nPlot a bivariate data set, determine the line of best fit for your data, and then check the accuracy of your line of best fit.\n\nRelated Topics: calculus, cartesian coordinate, coordinate plane, correlation, curve fitting, data plot, deviations, residual, statistics\n\nDiscrete (...)\nStudents run a simulation of how a fire will spread through a stand of trees, learning about probability and chaos.
Parameters: Forest density, wind direction, size of forest.\n\nRelated Topics: agent modeling, chaos, conditional probability, experimental probability, iteration, percentages, percents, probability simulation, proportion, simulation, skewed distribution, variance\n\nCreate a game spinner with variable sized sectors to look at experimental and theoretical probabilities. Parameters: Sizes of sectors, number of sectors, number of trials.\n\nRelated Topics: angles, circle graph, circles, counting, decimals, events, experimental probability, fair, geometric probability, outcomes, percentages, percents, pie chart, probability simulation, simulation, spinner, statistics, theoretical probability\n\nChoose one of N doors to experimentally determine the odds of winning the grand prize behind one of the doors, as in the TV program \"Let's Make a Deal.\" Parameters: Number of doors, number of trials, staying or switching between the two remaining doors.\n\nRelated Topics: bar graph, conditional probability, events, experimental probability, fair, monty hall, outcomes, probability simulation, simulation, statistics, trials\n\nStudents work step-by-step through the generation of a different Hilbert-like Curve (a fractal made from deforming a line by bending it), allowing them to explore number patterns in sequences and geometric properties of fractals.\n\nRelated Topics: decimals, fractals, fractions, geometric sequences, geometry, iteration, length, pattern, self-similarity, sequences, surface area, symmetry\n\nStudents play a generalized version of connect four, gaining the chance to place a piece on the board by answering arithmetic questions (addition, subtraction, multiplication, division). Parameters: time, difficulty level, types of questions. 
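The "Let's Make a Deal" experiment described above is straightforward to simulate. A hedged sketch (not the applet's implementation), assuming the host always opens one losing, unchosen door:

```python
import random

def monty_hall(trials, doors=3, switch=True):
    """Estimate the win rate for staying vs. switching doors."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(doors)
        choice = random.randrange(doors)
        if switch:
            # Host opens a door that is neither the prize nor the choice.
            opened = random.choice([d for d in range(doors)
                                    if d != prize and d != choice])
            # Player switches to a random remaining closed door.
            choice = random.choice([d for d in range(doors)
                                    if d != choice and d != opened])
        wins += (choice == prize)
    return wins / trials

random.seed(1)
stay = monty_hall(20_000, switch=False)
swap = monty_hall(20_000, switch=True)
```

With three doors, staying wins about 1/3 of the time and switching about 2/3, which is the counterintuitive result the activity is built around.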
Arithmetic Four is one of the Interactivate assessment games.\n\nRelated Topics: addition, arithmetic, division, integers, multiplication, negative number, positive number, positive part of the operand, subtraction, whole numbers\n\nArithmetic Quiz gives the user randomized questions to answer on arithmetic with whole numbers and integers. Parameters: Level of difficulty and types of questions. Arithmetic Quiz is one of the Interactivate assessment quizzes.\n\nRelated Topics: addition, arithmetic, assessment, associative, commutative, division, integers, multiplication, negative number, positive number, positive part of the operand, subtraction, whole numbers\n\nExperiment with a simulation to get an approximation of Pi by dropping a needle on a lined sheet of paper.\n\nRelated Topics: angles, area, circumference, experimental probability, polygon, probability simulation, simulation\n\nCreate your own affine cipher for encoding and decoding messages. Input your own constant and multiplier, then input a message to encode.\n\nRelated Topics: addition, arithmetic, cipher, cryptography, division, modular, multiplication, pattern, remainders\n\nEncode and decode messages to determine the form for an affine cipher. Input a message to encode, then input your guesses for the constant and multiplier. Caesar Cipher II is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, arithmetic, assessment, cipher, cryptography, division, modular, multiplication, pattern, remainders\n\nDecode encrypted messages to determine the form for an affine cipher, and practice your reasoning and arithmetic skills. Input your guesses for the multiplier and constant. 
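The affine ciphers behind the Caesar Cipher activities combine a multiplier and a constant modulo the alphabet size. A minimal sketch, assuming a 26-letter alphabet (function names are illustrative):

```python
def affine_encode(text, mult, add):
    """Encode A-Z with E(x) = (mult*x + add) mod 26.

    `mult` must be coprime to 26 or the cipher cannot be decoded.
    Non-letters pass through unchanged.
    """
    out = []
    for ch in text.upper():
        if ch.isalpha():
            x = ord(ch) - ord('A')
            out.append(chr((mult * x + add) % 26 + ord('A')))
        else:
            out.append(ch)
    return ''.join(out)

def affine_decode(text, mult, add):
    """Invert the cipher: D(y) = inv*(y - add) mod 26."""
    inv = pow(mult, -1, 26)  # modular inverse (Python 3.8+)
    return affine_encode(text, inv, (-inv * add) % 26)

secret = affine_encode("MATH", 5, 8)   # "QIZR"
plain = affine_decode(secret, 5, 8)    # "MATH"
```

Decoding hinges on the modular inverse of the multiplier, which exists only when the multiplier shares no factor with 26 — the same divisibility fact the remainder-themed topics above point at.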
Caesar Cipher III is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, arithmetic, assessment, cipher, cryptography, division, functions, modular, multiplication, pattern, remainders\n\nLearn about fractions between 0 and 1 by repeatedly deleting portions of a line segment, and also learn about properties of fractal objects. Parameter: fraction of the segment to be deleted each time.\n\nRelated Topics: fractals, fractions, geometric sequences, pattern, pre-calculus, recursion, sequences, sets\n\nWork with various types of clocks in order to learn about modular arithmetic operations. Parameters: Number of hours on the clock.\n\nRelated Topics: addition, division, elapsed time, modular, multiplication, remainders, time\n\nSimulation of a coin toss allowing the user to input the number of flips. Toss results can be viewed as a list of individual outcomes, ratios, or table.\n\nRelated Topics: counting, events, experimental probability, fair, flips, outcomes, percentages, probability simulation, proportion, theoretical probability, theoretical value\n\nColor numbers in Pascal's Triangle by rolling a number and then clicking on all entries that are multiples of the number rolled, thereby practicing multiplication tables, investigating number patterns, and investigating fractal patterns. Coloring Multiples in Pascal's Triangle is one of the Interactivate assessment explorers.\n\nRelated Topics: algebra, combinatorics, division, divisors, factors, fractals, integers, multiples, pascal's triangle, pascals triangle, pattern, triangle, whole numbers\n\nColor numbers in Pascal's Triangle by rolling a number and then clicking on all entries that have the same remainder when divided by the number rolled, thereby practicing division and remainders, investigating number patterns, and investigating fractal patterns. 
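The Pascal's Triangle coloring described above amounts to building the triangle and testing each entry's remainder. A sketch (helper names are illustrative):

```python
def pascal_rows(n_rows):
    """First n_rows of Pascal's Triangle as lists of ints."""
    rows = [[1]]
    for _ in range(n_rows - 1):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1]
                           for i in range(len(prev) - 1)] + [1])
    return rows

def color_by_remainder(rows, divisor, remainder=0):
    """Mark entries whose value has the given remainder mod divisor.

    remainder=0 marks multiples; the True cells trace the fractal,
    Sierpinski-like patterns the activity has students discover.
    """
    return [[v % divisor == remainder for v in row] for row in rows]

rows = pascal_rows(5)
marks = color_by_remainder(rows, 2)   # multiples of 2
```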
Coloring Remainders in Pascal's Triangle is one of the Interactivate assessment explorers.\n\nRelated Topics: algebra, combinatorics, division, divisors, factors, fractals, integers, modular, pascal's triangle, pascals triangle, pattern, remainders, triangle, whole numbers\n\nCompare theoretical and experimental probabilities, using dice, cards, spinners, or coin tosses. Three different probabilities can be compared at once. Parameters: Type of probabilities, number of trials.\n\nRelated Topics: combinatorics, events, experimental probability, outcomes, probability simulation, random number, theoretical probability\n\nExperiment with the outcome distribution for a roll of two dice by simulating a dice throwing game. Parameters: Which player wins with which total rolled.\n\nRelated Topics: combinatorics, decimals, events, experimental probability, fair, fractions, outcomes, percentages, percents, probability simulation, proportion, random number, statistics, theoretical probability, theoretical value\n\nRun a simulation of how a fire will spread through a stand of trees, learning about probability and chaos. Parameters: Probability that a tree will set fire to each of its eight neighbors.\n\nRelated Topics: agent modeling, chaos, events, experimental probability, geometric probability, percentages, percents, probability simulation, proportion, recursion, simulation, statistics, theoretical probability, variance\n\nExperiment with probability using a fixed size section spinner, a variable section spinner, two regular 6-sided dice or customized dice.\n\nRelated Topics: circles, events, experimental probability, fair, fractions, geometric probability, outcomes, percentages, probability simulation, random number, simulation, spinner, statistics, theoretical probability, theoretical value, trials\n\nVisualize factors through building rectangular areas on a grid. First enter all the factorizations of a number, then draw each factor set as an area on the grid. 
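Each factorization in the Factorize activity corresponds to a rectangle whose area is the chosen number, so enumerating them is a short loop up to the square root (the helper name is illustrative):

```python
def factor_rectangles(n):
    """All (width, height) rectangles with area n, width <= height.

    Each pair is one factorization a student could draw on the grid;
    commutativity means (2, 6) and (6, 2) are the same rectangle.
    """
    pairs = []
    w = 1
    while w * w <= n:
        if n % w == 0:
            pairs.append((w, n // w))
        w += 1
    return pairs

rects = factor_rectangles(12)   # [(1, 12), (2, 6), (3, 4)]
```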
Factorize is one of the Interactivate assessment explorers.\n\nRelated Topics: area, arithmetic, assessment, commutative, divisors, factors, grouping, multiples, multiplication, rectangles, whole numbers\n\nVisualize factors through building rectangular areas on a grid. As you draw each factor set on the grid, the factors will be listed. Factorize 2 is one of the Interactivate assessment explorers.\n\nRelated Topics: area, assessment, commutative, division, divisors, factors, multiples, multiplication, rectangles, whole numbers\n\nRun a simulation of how a fire will spread through a stand of trees, learning about probability and chaos. Parameters: Probability that a tree catches fire if its neighbor is on fire.\n\nRelated Topics: agent modeling, chaos, decimals, events, experimental probability, outcomes, percentages, percents, proportion, simulation, squares, theoretical probability, variance\n\nDetermine the fractal dimensions of several line-deformation fractals. Input the scale factor and number of similar copies, and the dimension will be calculated. Fractal Dimensions is one of the Interactivate assessment explorers.\n\nRelated Topics: chaos, fractals, logarithm, pattern, scale, self-similarity\n\nGenerate complicated geometric fractals by specifying starting polygon and scale factor.\n\nRelated Topics: fractals, geometry, logarithm, polygon, recursion, scale, self-similarity\n\nStudents investigate very simple functions by trying to guess the algebraic form from inputs and outputs. Function Machine is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, algebra, arithmetic, assessment, function machine, functions, input, integers, linear equations, multiplication, output, subtraction\n\nRun a simulation to generate results from running the Monty Hall for multiple trials. This is similar to the game show \"Let's Make A Deal\", where you choose one of N doors in hopes of finding a grand prize behind one of the doors. 
Parameters: Number of doors, number of trials, staying, or switching between the two remaining doors.\n\nRelated Topics: conditional probability, events, experimental probability, fair, monty hall, outcomes, probability simulation, random number, simulation, theoretical probability, theoretical value, trials\n\nStep through the generation of a Hilbert Curve -- a fractal made from deforming a line by bending it, and explore number patterns in sequences and geometric properties of fractals.\n\nRelated Topics: chaos, fractals, geometric sequences, geometry, iteration, length, lines, pattern, recursion, self-similarity, sequences\n\nEnter a complex value for \"c\" in the form of an ordered pair of real numbers. The applet draws the fractal Julia set for that seed value.\n\nRelated Topics: complex number, coordinate plane, coordinate system, fractals, iteration, julia set, mandelbrot set, recursion, recursive functions, sets\n\nStep through the generation of the Koch Snowflake -- a fractal made from deforming the sides of a triangle, and explore number patterns in sequences and geometric properties of fractals.\n\nRelated Topics: decimals, fractals, geometric sequences, infinity, iteration, lines, pre-calculus, recursion, self-similarity, sequences\n\nStudents investigate linear functions by trying to guess the slope and intercept from inputs and outputs. Linear Function Machine is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, algebra, arithmetic, assessment, function machine, functions, input, integers, linear functions, multiplication, output, pattern, slope\n\nLearn about sampling with and without replacement by randomly drawing marbles from a bag. 
Parameters: Number and color of marbles in the bag, replacement rule.\n\nRelated Topics: conditional probability, events, experimental probability, outcomes, percentages, probability simulation, probability with replacement, probability without replacement, random number, simulation, statistics, theoretical probability, trials\n\nVisually explore counting and place value with different number bases, from base 2 to base 16, and up to the hundreds place using a clock like interface. The activity also allows you to look at the numbers on the clock in base 10 or in your other chosen base to explore the relationship between those values.\n\nRelated Topics: arithmetic, counting, exponents, modular\n\nSimilar to the original \"Function Machine\" but lists input and output in a table and will not let the user attempt to guess the rule without having at least two data points. Number Cruncher is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, algebra, arithmetic, assessment, data plot, function machine, functions, input, linear functions, multiplication, output, solving equations, subtraction\n\nStudents investigate linear functions with positive slopes by trying to guess the slope and intercept from inputs and outputs. Positive Linear Function Machine is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, algebra, arithmetic, assessment, function machine, functions, input, integers, linear equations, linear functions, multiplication, output, pattern, slope, solving equations\n\nReview the properties of functions by looking at ten different curves and deciding whether or not they meet the criteria for a graph of a function. 
This activity simply displays the curves - it does not quiz the user.\n\nRelated Topics: coordinate plane, function properties, functions, graph, linear functions, lines, pre-calculus, vertical line test\n\nSimulate a game where two players each roll a die, and the lucky player moves one step to the finish. Parameters: what rolls win and how many steps to the finish line.\n\nRelated Topics: circle graph, events, experimental probability, fair, geometric probability, percents, pie chart, probability, probability simulation, random number, theoretical probability, trials\n\nSimulate a game where \"N\" players roll two dice, and the lucky player has an advantage for reaching the finish. Parameters: the number of players, number of trials and length of the race.\n\nRelated Topics: combinatorics, experimental probability, fair, outcomes, probability, probability simulation, random number, theoretical probability, trials\n\nGraph recursive functions by defining f(0)=C and defining f(n) based on f(n-1).\n\nRelated Topics: algebra, arithmetic sequences, cartesian coordinate, coordinate plane, functions, geometric sequences, graph, iteration, recursion, recursive functions, sequences\n\nLearn about number patterns in sequences and recursions by specifying a starting number, multiplier, and add-on. The numbers in the sequence are displayed on a graph, and they are also listed below the graph.\n\nRelated Topics: addition, arithmetic, arithmetic sequences, cartesian coordinate, coordinate plane, decimals, geometric sequences, graph, iteration, multiplication, pre-calculus, recursion, recursive functions, sequences\n\nStep through the generation of Sierpinski's Carpet -- a fractal made from subdividing a square into nine smaller squares and cutting the middle one out. 
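The Sequencer's rule above — pick a starting number, then repeatedly multiply and add — is the same f(0)=C, f(n)-from-f(n-1) recursion described in the graphing activity. A sketch:

```python
def sequence(start, multiplier, add_on, terms):
    """f(0) = start, f(n) = multiplier * f(n-1) + add_on.

    multiplier=1 gives an arithmetic sequence; add_on=0 gives a
    geometric one, matching the topics listed above.
    """
    values = [start]
    for _ in range(terms - 1):
        values.append(multiplier * values[-1] + add_on)
    return values

geometric = sequence(3, 2, 0, 4)    # [3, 6, 12, 24]
arithmetic = sequence(3, 1, 5, 4)   # [3, 8, 13, 18]
```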
Explore number patterns in sequences and geometric properties of fractals.\n\nRelated Topics: algebra, area, fractals, fractions, geometric sequences, iteration, pattern, pre-calculus, recursion, recursive functions, self-similarity, sequences\n\nStep through the generation of Sierpinski's Triangle -- a fractal made from subdividing a triangle into four smaller triangles and cutting the middle one out. Explore number patterns in sequences and geometric properties of fractals.\n\nRelated Topics: dimension, fractals, fractions, geometric sequences, geometry, iteration, length, pattern, pre-calculus, recursion, recursive functions, self-similarity, sequences, symmetry, triangle, triangles\n\nChoose one of three doors to experimentally determine the odds of winning the grand prize behind one of the doors, as in the TV program \"Let's Make a Deal.\" Parameters: Staying or switching between the two remaining doors.\n\nRelated Topics: conditional probability, events, experimental probability, fair, monty hall, outcomes, percents, probability, probability simulation, random number, statistics, strategy, theoretical probability\n\nCreate a game spinner with one to twelve sectors in order to look at experimental and theoretical probabilities. Parameters: Number of sectors, number of trials.\n\nRelated Topics: angles, circle graph, circles, events, experimental probability, fair, geometric probability, outcomes, percents, pie chart, probability, probability simulation, spinner, theoretical probability, theoretical value, trials\n\nExplore perimeter through rectangular and straight-line arrangements of tables, calculating the number of chairs needed to surround different arrangements. 
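For the table arrangements just described, a straight row follows a simple linear pattern. A sketch, assuming square tables seating one chair per exposed edge (this seating rule is an assumption for illustration, not taken from the activity):

```python
def chairs_for_row(tables):
    """Chairs around a straight row of square tables pushed together.

    Assumption: one chair per exposed edge, so each table contributes
    a chair on top and bottom, plus one chair at each end of the row.
    """
    if tables == 0:
        return 0
    return 2 * tables + 2

counts = [chairs_for_row(n) for n in range(1, 5)]   # [4, 6, 8, 10]
```

Under that rule the count grows by 2 per table — the kind of linear pattern the activity asks students to spot.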
Tables and Chairs is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, agent modeling, arithmetic, associative, coordinate system, dimension, function properties, grouping, integers, multiples, perimeter, scale\n\nExplore fractals by investigating the relationships between the Mandelbrot set and Julia sets.\n\nRelated Topics: chaos, complex number, coordinate, coordinate plane, exponents, fractals, functions, geometric sequences, geometry, iteration, julia set, mandelbrot set, pattern, recursion, recursive functions, self-similarity, sets\n\nStep through the tortoise and hare race, based on Zeno's paradox, to learn about the multiplication of fractions and about convergence of an infinite sequence of numbers.\n\nRelated Topics: decimals, distance, fractals, geometric sequences, infinity, iteration, length, logarithm, multiplication, pattern, percentages, percents, proportion, rational numbers, recursion, recursive functions, self-similarity, sequences\n\nSort colored shapes into a three circle Venn Diagram.\n\nRelated Topics: pattern, sets, venn diagram\n\nChoose one of three boxes and choose one ball from the box to look at conditional probabilities. Parameters: Number of trials.\n\nRelated Topics: conditional probability, events, experimental probability, outcomes, probability, probability simulation, probability without replacement, statistics, theoretical probability, theoretical value, trials\n\nSort colored shapes into a Venn diagram based on various characteristics. Venn Diagram Shape Sorter is one of the Interactivate assessment explorers.\n\nRelated Topics: pattern, sets, sorter, venn diagram\n\nClassify various objects into categories in a Venn Diagram. Learn how categories in Venn Diagrams work. Venn Diagrams is one of the Interactivate assessment explorers.\n\nRelated Topics: assessment, sets, venn diagram\n\nLearn about the vertical line test for functions by trying to connect points in the plane to build a function. 
When you have connected all of the points, you will be told if your graph is a valid graph of a function. Vertical Line Test is one of the Interactivate assessment explorers.\n\nRelated Topics: assessment, cartesian coordinate, coordinate plane, function properties, functions, graph, lines, pre-calculus, vertical line test\n\nGive input to the Whole Number Cruncher and try to guess what it did from the output it generates. This activity only generates multiplication and addition functions to avoid outputting any negative numbers. Whole Number Cruncher is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, algebra, arithmetic, assessment, coordinate, function machine, functions, input, linear equations, linear functions, multiplication, output, pattern, solving equations, whole numbers\n\nFractions (...)\nStudents play a generalized version of connect four, gaining the chance to place a piece on the board by answering arithmetic questions (addition, subtraction, multiplication, division). Parameters: time, difficulty level, types of questions. Arithmetic Four is one of the Interactivate assessment games.\n\nRelated Topics: addition, arithmetic, division, integers, multiplication, negative number, positive number, positive part of the operand, subtraction, whole numbers\n\nSimulation of a coin toss allowing the user to input the number of flips. Toss results can be viewed as a list of individual outcomes, ratios, or table.\n\nRelated Topics: counting, events, experimental probability, fair, flips, outcomes, percentages, probability simulation, proportion, theoretical probability, theoretical value\n\nVisually represent two unique fractions that are equivalent to a given fraction. The fractional value is shown on a number line after you check to see if your fraction is correct. 
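Checking whether fractions are equivalent, and generating equivalent ones, reduces to scaling and the greatest common divisor. A sketch (helper names are illustrative):

```python
from math import gcd

def reduce_fraction(num, den):
    """Lowest-terms form of num/den (den > 0)."""
    g = gcd(num, den)
    return num // g, den // g

def equivalent_fractions(num, den, count=2):
    """`count` distinct fractions with the same value as num/den,
    built by scaling numerator and denominator together."""
    return [(num * k, den * k) for k in range(2, 2 + count)]

lowest = reduce_fraction(8, 12)        # (2, 3)
matches = equivalent_fractions(2, 3)   # [(4, 6), (6, 9)]
```

Two fractions are equivalent exactly when they reduce to the same lowest-terms pair, which is how a checker like the one described above can verify a student's answer.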
Equivalent Fraction Finder is one of the Interactivate assessment explorers.\n\nRelated Topics: comparing, fractions, identity, multiples, number line, proportion, rational numbers\n\nVisually represent two unique fractions that are equivalent to a given fraction. The fractional value is shown on a number line as you color in the fraction. Equivalent Fraction Pointer is one of the Interactivate assessment explorers.\n\nRelated Topics: comparing, fractions, identity, multiples, number line, ordering, proportion, rational numbers\n\nTest your fraction skills by answering questions. This quiz asks you to simplify fractions, convert fractions to decimals and percentages, and answer algebra questions involving fractions. Choose difficulty level, question types, and time. Fraction Quiz is one of the Interactivate assessment quizzes.\n\nRelated Topics: addition, algebra, arithmetic, decimals, division, divisors, fractions, integers, mixed numbers, multiplication, percentages, percents, subtraction\n\nVisualize fractions by coloring in the appropriate portions of either a circle or a square, then order those fractions from least to greatest. Fraction Sorter is one of the Interactivate assessment explorers.\n\nRelated Topics: area, circles, comparing, fractions, number line, ordering, proportion, rational numbers, squares\n\nDetermine the value of a given fraction represented as a point on a number line. Then find a fraction whose value is a given fraction. Fraction Pointer is one of the Interactivate assessment explorers.\n\nRelated Topics: equivalent, fractions, number line\n\nDetermine the value of a given fraction represented as a point on a number line. Then find a fraction whose value is the given fraction using an arrow on the number line as a guide. 
Single Fraction Pointer is one of the Interactivate assessment explorers.\n\nRelated Topics: equivalent, fractions, number line\n\nGeometry (...)\nBuild your own polygon and transform it in the Cartesian coordinate system. Experiment with reflections across any line, revolving around any line (which yields a 3-D image), rotations about any point, and translations in any direction.\n\nRelated Topics: angles, calculus, coordinate, coordinate plane, coordinate system, flips, geometry, glides, graph, polygon, polyhedra, reflections, rotation, symmetry, transformation, translation, transpose\n\nPractice your knowledge of acute, obtuse, and alternate angles. Also, practice relationships between angles - vertical, adjacent, alternate, same-side, and corresponding. Angles is one of the Interactivate assessment explorers.\n\nRelated Topics: acute, angles, assessment, geometry, lines, obtuse, parallel\n\nStudents work step-by-step through the generation of a different Hilbert-like Curve (a fractal made from deforming a line by bending it), allowing them to explore number patterns in sequences and geometric properties of fractals.\n\nRelated Topics: decimals, fractals, fractions, geometric sequences, geometry, iteration, length, pattern, self-similarity, sequences, surface area, symmetry\n\nLearn the relationship between perimeter and area. A shape will be automatically generated with the perimeter that you choose. Calculate the area of this shape. Area Explorer is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, area, arithmetic, assessment, comparing, coordinate plane, geometry, length, perimeter, polygon, rectangles, squares\n\nSort colored shapes into a bar graph. Practice sorting by shape or by color. Appropriate for elementary grades. 
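The polygon transformations in the Transmographer entry above — translate, rotate about a point, reflect — are short coordinate formulas. A sketch (reflection shown across the x-axis only; the applet supports arbitrary lines):

```python
import math

def translate(points, dx, dy):
    """Slide every vertex by (dx, dy)."""
    return [(x + dx, y + dy) for x, y in points]

def rotate(points, angle_deg, cx=0.0, cy=0.0):
    """Rotate counter-clockwise by angle_deg about the point (cx, cy)."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(cx + (x - cx) * c - (y - cy) * s,
             cy + (x - cx) * s + (y - cy) * c) for x, y in points]

def reflect_x(points):
    """Flip across the x-axis."""
    return [(x, -y) for x, y in points]

triangle = [(0, 0), (2, 0), (0, 1)]
moved = translate(triangle, 1, 1)
flipped = reflect_x(triangle)
turned = rotate(triangle, 90)   # ~[(0, 0), (0, 2), (-1, 0)]
```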
Bar Graph Sorter is one of the Interactivate assessment explorers.

Related Topics: bar graph, circles, counting, sorter, squares, statistics

Learn about fractions between 0 and 1 by repeatedly deleting portions of a line segment, and also learn about properties of fractal objects. Parameter: fraction of the segment to be deleted each time.

Related Topics: fractals, fractions, geometric sequences, pattern, pre-calculus, recursion, sequences, sets

Practice reading a clock, input times for the clock to display, or let the clock generate random times for you to read. Choose from three difficulty levels. Clock Wise is one of the Interactivate assessment explorers.

Related Topics: assessment, elapsed time, time

Color numbers in Pascal's Triangle by rolling a number and then clicking on all entries that are multiples of the number rolled, thereby practicing multiplication tables, investigating number patterns, and investigating fractal patterns. Coloring Multiples in Pascal's Triangle is one of the Interactivate assessment explorers.

Related Topics: algebra, combinatorics, division, divisors, factors, fractals, integers, multiples, pascal's triangle, pascals triangle, pattern, triangle, whole numbers

Color numbers in Pascal's Triangle by rolling a number and then clicking on all entries that have the same remainder when divided by the number rolled, thereby practicing division and remainders, investigating number patterns, and investigating fractal patterns. Coloring Remainders in Pascal's Triangle is one of the Interactivate assessment explorers.

Related Topics: algebra, combinatorics, division, divisors, factors, fractals, integers, modular, pascal's triangle, pascals triangle, pattern, remainders, triangle, whole numbers

Manipulate different types of conic section equations on a coordinate plane using slider bars. Learn how each constant and coefficient affects the resulting graph.
Choose from vertical or horizontal parabola, circle, ellipse, and vertical or horizontal hyperbola.

Related Topics: algebra, cartesian coordinate, circles, conic section, coordinate plane, ellipse, function properties, functions, graph, hyperbola, parabola, pre-calculus, radius, slides, translation

Explore cross sections of different geometric solids: cone, double cone, cylinder, pyramid, and prism. Manipulate the cross section with slider bars, and see how the graphical representation changes.

Related Topics: algebra, calculus, cartesian coordinate, circles, conic section, coordinate system, ellipse, flyer, graph, hyperbola, parabola, polygon, polyhedra, pre-calculus, prisms, pyramid, radius, rotation, transformation

Practice finding elapsed time given a starting time and an ending time. Elapsed Time is one of the Interactivate assessment explorers.

Related Topics: addition, assessment, counting, elapsed time, subtraction, time

Given a starting time and elapsed time, practice finding the ending time. Elapsed Time Two is one of the Interactivate assessment explorers.

Related Topics: addition, assessment, counting, elapsed time, subtraction, time

Visualize factors through building rectangular areas on a grid. First enter all the factorizations of a number, then draw each factor set as an area on the grid. Factorize is one of the Interactivate assessment explorers.

Related Topics: area, arithmetic, assessment, commutative, divisors, factors, grouping, multiples, multiplication, rectangles, whole numbers

Visualize factors through building rectangular areas on a grid. As you draw each factor set on the grid, the factors will be listed.
Factorize 2 is one of the Interactivate assessment explorers.

Related Topics: area, assessment, commutative, division, divisors, factors, multiples, multiplication, rectangles, whole numbers

Create your own fractals by drawing a "line deformation rule" and stepping through the generation of a geometric fractal. Parameters: Grid type, number of bending points on the line.

Related Topics: chaos, fractals, geometric sequences, iteration, pattern, pre-calculus, recursion, scale, self-similarity, sequences, symmetry, transformation

Build a "floor tile" by dragging the corners of a quadrilateral. Learn about tessellation of quadrilateral figures when the shape you built is tiled over an area.

Related Topics: angles, area, coordinate plane, coordinate system, length, pattern, planes, quadrilaterals, rectangles, reflections, rhombus, rotation, slides, symmetry, tessellations, transformation, translation

Determine the fractal dimensions of several line-deformation fractals. Input the scale factor and number of similar copies, and the dimension will be calculated. Fractal Dimensions is one of the Interactivate assessment explorers.

Related Topics: chaos, fractals, logarithm, pattern, scale, self-similarity

Generate complicated geometric fractals by specifying starting polygon and scale factor.

Related Topics: fractals, geometry, logarithm, polygon, recursion, scale, self-similarity

Investigate the Cartesian coordinate system through identifying the coordinates of a randomly generated point, or requesting that a particular point be plotted.
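The Fractal Dimensions explorer just described computes a dimension from a scale factor and a number of similar copies; the quantity behind it is the similarity dimension, d = log(copies) / log(scale factor). A minimal sketch of that calculation (the function name is mine, not the applet's):

```python
import math

def similarity_dimension(copies: int, scale: int) -> float:
    """Dimension of a self-similar fractal made of `copies` pieces,
    each scaled down by a factor of `scale`."""
    return math.log(copies) / math.log(scale)

# Sierpinski's triangle (3 copies at 1/2 scale) has dimension ~1.585;
# the Koch curve (4 copies at 1/3 scale) has dimension ~1.262.
```

A dimension strictly between 1 and 2 is exactly what captures the "in-between" character of these line-deformation fractals.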
General Coordinates Game is one of the Interactivate assessment explorers.

Related Topics: algebra, assessment, cartesian coordinate, coordinate, coordinate plane, coordinate system, graph

Step through the generation of a Hilbert Curve -- a fractal made from deforming a line by bending it, and explore number patterns in sequences and geometric properties of fractals.

Related Topics: chaos, fractals, geometric sequences, geometry, iteration, length, lines, pattern, recursion, self-similarity, sequences

Measure angles, distances, and areas in several different images (choices include maps, aerial photos, and others). A scale feature allows the user to set the scale used for measuring distances and areas.

Related Topics: algebra, angles, area, dimension, intervals, length, proportion, scale, transformation

Enter a complex value for "c" in the form of an ordered pair of real numbers. The applet draws the fractal Julia set for that seed value.

Related Topics: complex number, coordinate plane, coordinate system, fractals, iteration, julia set, mandelbrot set, recursion, recursive functions, sets

Step through the generation of the Koch Snowflake -- a fractal made from deforming the sides of a triangle, and explore number patterns in sequences and geometric properties of fractals.

Related Topics: decimals, fractals, geometric sequences, infinity, iteration, lines, pre-calculus, recursion, self-similarity, sequences

Investigate the Cartesian coordinate system by directing a robot through a mine field laid out on the plane. Maze Game is one of the Interactivate assessment explorers.

Related Topics: assessment, cartesian coordinate, coordinate, coordinate plane, coordinate system, graph

Plot ordered pairs on the graph, and they will be connected in the order that they are input.
This enables you to decide how the pairs should be connected, rather than having the computer connect them from left to right.

Related Topics: cartesian coordinate, coordinate, coordinate plane, coordinate system, data plot, graph, lines, vertical line test

Learn the relationship between perimeter and area. A shape will be automatically generated with the area that you choose. Calculate the perimeter of this shape. Perimeter Explorer is one of the Interactivate assessment explorers.

Related Topics: area, arithmetic, assessment, comparing, coordinate plane, coordinate system, geometry, length, perimeter, polygon, rectangles, squares

Calculate the length of one side of an automatically generated right triangle by using the Pythagorean Theorem, and then check your answers. Pythagorean Explorer is one of the Interactivate assessment explorers.

Related Topics: algebra, angles, assessment, distance, exponents, geometry, length, pythagorean theorem, solving equations, squares, triangle, trigonometry

This activity operates in one of two modes: auto draw and create shape mode, allowing you to explore relationships between area and perimeter. Shape Builder is one of the Interactivate assessment explorers.

Related Topics: area, assessment, comparing, geometry, perimeter

Learn the relationship between perimeter and area. A random shape will be automatically generated. Calculate the area and perimeter of this shape. Shape Explorer is one of the Interactivate assessment explorers.

Related Topics: area, arithmetic, assessment, comparing, geometry, length, perimeter, polygon, rectangles, squares

Step through the generation of Sierpinski's Carpet -- a fractal made from subdividing a square into nine smaller squares and cutting the middle one out.
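The Pythagorean Explorer above generates a right triangle and asks for the missing side; the underlying relation is a² + b² = c². A small sketch of the two computations involved (helper names are mine):

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Length of the hypotenuse from the two legs: c = sqrt(a^2 + b^2)."""
    return math.sqrt(a * a + b * b)

def missing_leg(c: float, a: float) -> float:
    """Length of the other leg given hypotenuse c and one leg a."""
    return math.sqrt(c * c - a * a)

# hypotenuse(3, 4) -> 5.0; missing_leg(13, 5) -> 12.0
```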
Explore number patterns in sequences and geometric properties of fractals.

Related Topics: algebra, area, fractals, fractions, geometric sequences, iteration, pattern, pre-calculus, recursion, recursive functions, self-similarity, sequences

Step through the generation of Sierpinski's Triangle -- a fractal made from subdividing a triangle into four smaller triangles and cutting the middle one out. Explore number patterns in sequences and geometric properties of fractals.

Related Topics: dimension, fractals, fractions, geometric sequences, geometry, iteration, length, pattern, pre-calculus, recursion, recursive functions, self-similarity, sequences, symmetry, triangle, triangles

Investigate the first quadrant of the Cartesian coordinate system through identifying the coordinates of randomly generated points, or requesting that a particular point be plotted. Simple Coordinates Game is one of the Interactivate assessment explorers.

Related Topics: algebra, cartesian coordinate, coordinate, coordinate plane, coordinate system, graph, integers

Investigate the first quadrant of the Cartesian coordinate system by directing a robot through a mine field laid out on the grid. Simple Maze Game is one of the Interactivate assessment explorers.

Related Topics: algebra, cartesian coordinate, coordinate, coordinate plane, coordinate system, distance, graph

Plot ordered pairs of numbers, either as a scatter plot or with the dots connected. Points are connected from right to left, rather than being connected in the order they are entered.

Related Topics: cartesian coordinate, coordinate plane, coordinate system, data plot, decimals, graph

Learn about how the Pythagorean Theorem works through investigating the standard geometric proof.
Parameters: Sizes of the legs of the triangle.

Related Topics: algebra, angles, area, coordinate plane, distance, geometry, length, pythagorean theorem, slides, solving equations, squares, triangle, triangles, trigonometry

Functions like a real stopwatch, recording times that you choose. This stopwatch is accurate to the nearest tenth of a second. Parameters: Count up from 0 or count down from a set time.

Related Topics: decimals, elapsed time, intervals, stopwatch, time

Manipulate dimensions of polyhedra, and watch how the surface area and volume change. Parameters: Type of polyhedron, length, width and height. Surface Area and Volume is one of the Interactivate assessment explorers.

Related Topics: assessment, dimension, geometry, length, polyhedra, prisms, surface area, volume

Explore perimeter through rectangular and straight-line arrangements of tables, calculating the number of chairs needed to surround different arrangements. Tables and Chairs is one of the Interactivate assessment explorers.

Related Topics: addition, agent modeling, arithmetic, associative, coordinate system, dimension, function properties, grouping, integers, multiples, perimeter, scale

Create a tessellation by deforming a triangle, rectangle or hexagon to form a polygon that tiles the plane. Corners of the polygons may be dragged, and corresponding edges of the polygons may be dragged. Parameters: Colors, starting polygon.

Related Topics: angles, area, flips, geometry, glides, hexagon, length, pattern, planes, polygon, reflections, rotation, slides, squares, symmetry, tessellations

Play the Chaos Game by experimenting with probabilities.
Learn about an apparently random process with a not-so-random, geometric fractal result.

Related Topics: chaos, fractals, geometric probability, iteration, pattern, random number, recursion

Explore fractals by investigating the relationships between the Mandelbrot set and Julia sets.

Related Topics: chaos, complex number, coordinate, coordinate plane, exponents, fractals, functions, geometric sequences, geometry, iteration, julia set, mandelbrot set, pattern, recursion, recursive functions, self-similarity, sets

Explore the world of translations, reflections, and rotations in the Cartesian coordinate system by transforming squares, triangles and parallelograms. Parameters: Shape, x or y translation, x or y reflection, angle of rotation.

Related Topics: cartesian coordinate, coordinate plane, coordinate system, flips, geometry, glides, graph, hexagon, image, polygon, pre-image, reflections, rotation, symmetry, transformation, translation, transpose, triangle, triangles

Build your own polygon and transform it in the Cartesian coordinate system. Experiment with reflections across any line, rotations about any point, and translations in any direction. Parameters: Shape, x or y translation, x or y reflection, angle of rotation.

Related Topics: angles, cartesian coordinate, coordinate, coordinate plane, coordinate system, flips, geometry, glides, graph, image, linear equations, polygon, pre-image, reflections, rotation, symmetry, transformation, translation, transpose

Calculate the area of a triangle drawn on a grid. Learn about areas of triangles and about the Cartesian coordinate system.
Triangle Explorer is one of the Interactivate assessment explorers.

Related Topics: acute, area, assessment, cartesian coordinate, coordinate plane, distance, geometry, length, pythagorean theorem, triangle, triangles

Sort colored shapes into a three circle Venn Diagram.

Related Topics: pattern, sets, venn diagram

Enter two complex numbers (z and c) as ordered pairs of real numbers, then click a button to iterate step by step. The iterates are graphed in the x-y plane and printed out in table form. This is an introduction to the idea of prisoners/escapees in iterated functions and the calculation of fractal Julia sets.

Related Topics: complex number, coordinate plane, coordinate system, exponents, fractals, functions, graph, infinity, iteration, julia set, mandelbrot set, recursion, recursive functions, sets

Sort colored shapes into a Venn diagram based on various characteristics. Venn Diagram Shape Sorter is one of the Interactivate assessment explorers.

Related Topics: pattern, sets, sorter, venn diagram

Graphs

Build your own polygon and transform it in the Cartesian coordinate system. Experiment with reflections across any line, revolving around any line (which yields a 3-D image), rotations about any point, and translations in any direction.

Related Topics: angles, calculus, coordinate, coordinate plane, coordinate system, flips, geometry, glides, graph, polygon, polyhedra, reflections, rotation, symmetry, transformation, translation, transpose

Enter data to create a bar graph, then change many settings for the graph's appearance.

Related Topics: bar graph, data plot, histogram, statistics

Sort colored shapes into a bar graph. Practice sorting by shape or by color. Appropriate for elementary grades.
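The prisoner/escapee iteration described above drives the Julia set activities: repeatedly apply z → z² + c and watch whether the orbit stays bounded. A minimal escape-time sketch, assuming the conventional escape radius of 2 (the function name is mine, not the applet's):

```python
def escape_count(z: complex, c: complex, max_iter: int = 50) -> int:
    """Iterate z -> z^2 + c; return the step at which |z| exceeds 2
    (an 'escapee'), or max_iter if the orbit stays bounded (a 'prisoner')."""
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

# z = 0 is a prisoner for c = 0; z = 2 + 2i escapes immediately.
```

A point whose orbit never escapes within the iteration budget is treated as a prisoner and belongs to the filled Julia set for that value of c.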
Bar Graph Sorter is one of the Interactivate assessment explorers.

Related Topics: bar graph, circles, counting, sorter, squares, statistics

Students can create box plots for either built-in or user-specified data as well as experiment with outliers. User may choose to use or not use the median for calculation of interquartile range.

Related Topics: box and whisker, box plot, coordinate plane, data plot, median, outlier, percentages, quartile, range, scale, skewed distribution, statistics

Explore cross sections of different geometric solids: cone, double cone, cylinder, pyramid, and prism. Manipulate the cross section with slider bars, and see how the graphical representation changes.

Related Topics: algebra, calculus, cartesian coordinate, circles, conic section, coordinate system, ellipse, flyer, graph, hyperbola, parabola, polygon, polyhedra, pre-calculus, prisms, pyramid, radius, rotation, transformation

InteGreat! allows the user to visually explore the idea of integration through approximating the integral value with partitions. The user controls the number of partitions, the upper and lower limits, and the method used to estimate the integral.

Related Topics: area, calculus, estimation, function properties, functions, graph, infinity, integral, integrate, intervals, limit

Enter data to create a double bar graph, then manipulate the graph's maximum and minimum values.

Related Topics: bar graph, data plot, double bar graph

Create a pie chart, adjusting the size of the divisions using your mouse or by entering values. Parameters: Number of sections, size of sections, whether to use percents or fractions.

Related Topics: angles, area, circle graph, circles, fractions, percentages, percents, pie chart, proportion, statistics

This activity allows the user to explore the polar coordinate system.
The applet is similar to GraphIt, but instead allows users to explore the representation of a function in the polar coordinate system.

Related Topics: calculus, coordinate plane, coordinate system, cosine, data plot, graph, polar coordinates, pre-calculus, sine, tangent, trigonometry

Graph recursive functions by defining f(0)=C and defining f(n) based on f(n-1).

Related Topics: algebra, arithmetic sequences, cartesian coordinate, coordinate plane, functions, geometric sequences, graph, iteration, recursion, recursive functions, sequences

Plot a bivariate data set, determine the line of best fit for your data, and then check the accuracy of your line of best fit.

Related Topics: calculus, cartesian coordinate, coordinate plane, correlation, curve fitting, data plot, deviations, residual, statistics

Graph ordered pairs and customize the graph title and axis labels. Points are connected from left to right, rather than being connected in the order they are entered.

Related Topics: cartesian coordinate, coordinate, coordinate plane, data plot, graph, statistics

Investigate the first quadrant of the Cartesian coordinate system through identifying the coordinates of randomly generated points, or requesting that a particular point be plotted. Simple Coordinates Game is one of the Interactivate assessment explorers.

Related Topics: algebra, cartesian coordinate, coordinate, coordinate plane, coordinate system, graph, integers

Investigate the first quadrant of the Cartesian coordinate system by directing a robot through a mine field laid out on the grid. Simple Maze Game is one of the Interactivate assessment explorers.

Related Topics: algebra, cartesian coordinate, coordinate, coordinate plane, coordinate system, distance, graph

Sort colored shapes into a Venn diagram based on various characteristics.
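The recursive-function grapher above plots sequences defined by f(0) = C and f(n) in terms of f(n-1). A hedged sketch of how such a table of points can be built (the particular rule used here is only an illustration):

```python
def iterate(rule, c, steps):
    """Return the points (n, f(n)) for f(0) = c and f(n) = rule(f(n-1))."""
    values = [c]
    for _ in range(steps):
        values.append(rule(values[-1]))
    return list(enumerate(values))

# Example rule: f(n) = 2*f(n-1) + 1 with f(0) = 0 gives 0, 1, 3, 7, 15, ...
points = iterate(lambda x: 2 * x + 1, 0, 4)
```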
Venn Diagram Shape Sorter is one of the Interactivate assessment explorers.

Related Topics: pattern, sets, sorter, venn diagram

Modeling

Students run a simulation of how a fire will spread through a stand of trees, learning about probability and chaos. Parameters: Forest density, wind direction, size of forest.

Related Topics: agent modeling, chaos, conditional probability, experimental probability, iteration, percentages, percents, probability simulation, proportion, simulation, skewed distribution, variance

Run a simulation of how a fire spreads through a stand of trees, learning about probability and chaos. Track the results of multiple burns and use the data to draw conclusions.

Related Topics: agent modeling, chaos, conditional probability, experimental probability, probability simulation, simulation, theoretical probability

Choose one of N doors to experimentally determine the odds of winning the grand prize behind one of the doors, as in the TV program "Let's Make a Deal." Parameters: Number of doors, number of trials, staying or switching between the two remaining doors.

Related Topics: bar graph, conditional probability, events, experimental probability, fair, monty hall, outcomes, probability simulation, simulation, statistics, trials

Enter a set of data points, then derive a function to fit those points. Manipulate the function on a coordinate plane using slider bars.
Learn how each constant and coefficient affects the resulting graph.

Related Topics: algebra, cartesian coordinate, coordinate, coordinate plane, coordinate system, cosecant, cosine, cotangent, curve fitting, data plot, deviations, exponential, exponents, function properties, functions, graph, grouping, integers, intervals, inverse, linear equations, linear functions, logarithm, multiplication, parabola, polynomial, positive part of the operand, pre-calculus, range, secant, sine, slope, tangent, trigonometry

Run a simulation of how a fire will spread through a stand of trees, learning about probability and chaos. Parameters: Probability that a tree will set fire to each of its eight neighbors.

Related Topics: agent modeling, chaos, events, experimental probability, geometric probability, percentages, percents, probability simulation, proportion, recursion, simulation, statistics, theoretical probability, variance

Learn about estimation through modeling of a forest fire. This activity allows the user to burn a virtual forest, then estimate the number, the percent, or the fraction of trees burned. Fire Assessment is one of the Interactivate assessment explorers.

Related Topics: estimation, fractions, percentages, percents

Run a simulation of how a fire will spread through a stand of trees, learning about probability and chaos. Parameters: Probability that a tree catches fire if its neighbor is on fire.

Related Topics: agent modeling, chaos, decimals, events, experimental probability, outcomes, percentages, percents, proportion, simulation, squares, theoretical probability, variance

Run a simulation to generate results from running the Monty Hall game for multiple trials. This is similar to the game show "Let's Make A Deal", where you choose one of N doors in hopes of finding a grand prize behind one of the doors.
Parameters: Number of doors, number of trials, staying or switching between the two remaining doors.

Related Topics: conditional probability, events, experimental probability, fair, monty hall, outcomes, probability simulation, random number, simulation, theoretical probability, theoretical value, trials

This applet allows the user to make observations about the relationship between speed and position and how both of these are affected by initial velocity and the incline on which the biker is traveling.

Related Topics: algebra, distance, functions, graph, parabola, pre-calculus, scale, simulation

Run the classic game of life, learning about probabilities, chaos and simulation. This activity allows the user to run a randomly generated world or test out various patterns. This is a very powerful activity with a wide range of options. It runs in a separate window.

Related Topics: agent modeling, chaos, iteration, pattern, probability simulation, random number, simulation

Run the classic game of life, learning about probabilities, chaos and simulation. This activity allows the user to run a randomly generated world or test out various patterns. This is a simple activity which runs directly in the Interactivate window.

Related Topics: agent modeling, chaos, iteration, pattern, probability simulation, random number, simulation

Enter a set of data points and a function or multiple functions, then manipulate those functions to fit those points. Manipulate the function on a coordinate plane using slider bars.
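The Monty Hall applets in this section compare "staying" with "switching" over many trials. A minimal sketch of one such trial, assuming the host always opens losing doors and never the player's pick (the function name is mine, not the applet's):

```python
import random

def monty_trial(switch: bool, doors: int = 3) -> bool:
    """One round: pick a door at random; the host opens every other
    losing door; optionally switch to the single remaining door.
    Returns True when the final choice hides the prize."""
    prize = random.randrange(doors)
    pick = random.randrange(doors)
    # After the host's reveals, switching wins exactly when the
    # original pick was wrong; staying wins when it was right.
    return pick != prize if switch else pick == prize

switch_wins = sum(monty_trial(switch=True) for _ in range(10_000))
# With 3 doors, switching wins about 2/3 of the trials; staying, about 1/3.
```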
Learn how each constant and coefficient affects the resulting graph.

Related Topics: algebra, calculus, cartesian coordinate, coordinate, coordinate plane, coordinate system, cosecant, cosine, cotangent, curve fitting, data plot, deviations, flyer, function properties, functions, graph, linear functions, logarithm, parabola, polynomial, pre-calculus, secant, sine, slides, slope, squares, tangent, trigonometry

Students compare multiple independent variables as predictors of one dependent variable. Students explore correlation and lines of best-fit.

Related Topics: algebra, cartesian coordinate, coordinate, coordinate plane, coordinate system, data plot, deviations, flyer, functions, graph, linear functions, squares

Experiment with a simple ecosystem consisting of grass, rabbits, and wolves, learning about probabilities, chaos, and simulation.

Related Topics: agent modeling, calculus, chaos, graph, iteration, pattern, predator-prey, probability simulation, random number, simulation, statistics

Models how a population of susceptible, infected, and recovered people is affected by a disease.

Related Topics: agent modeling, chaos, graph, iteration, probability simulation, simulation, statistics

Explore perimeter through rectangular and straight-line arrangements of tables, calculating the number of chairs needed to surround different arrangements. Tables and Chairs is one of the Interactivate assessment explorers.

Related Topics: addition, agent modeling, arithmetic, associative, coordinate system, dimension, function properties, grouping, integers, multiples, perimeter, scale

Choose one of three boxes and choose one ball from the box to look at conditional probabilities.
Parameters: Number of trials.

Related Topics: conditional probability, events, experimental probability, outcomes, probability, probability simulation, probability without replacement, statistics, theoretical probability, theoretical value, trials

Number and Operations

Students work step-by-step through the generation of a different Hilbert-like Curve (a fractal made from deforming a line by bending it), allowing them to explore number patterns in sequences and geometric properties of fractals.

Related Topics: decimals, fractals, fractions, geometric sequences, geometry, iteration, length, pattern, self-similarity, sequences, surface area, symmetry

Students play a generalized version of connect four, gaining the chance to place a piece on the board by answering arithmetic questions (addition, subtraction, multiplication, division). Parameters: time, difficulty level, types of questions. Arithmetic Four is one of the Interactivate assessment games.

Related Topics: addition, arithmetic, division, integers, multiplication, negative number, positive number, positive part of the operand, subtraction, whole numbers

Arithmetic Quiz gives the user randomized questions to answer on arithmetic with whole numbers and integers. Parameters: Level of difficulty and types of questions. Arithmetic Quiz is one of the Interactivate assessment quizzes.

Related Topics: addition, arithmetic, assessment, associative, commutative, division, integers, multiplication, negative number, positive number, positive part of the operand, subtraction, whole numbers

The same as "Bounded Fraction Pointer" but there is no arrow to help the user determine the value of a fraction between the two endpoints.
Bounded Fraction Finder is one of the Interactivate assessment explorers.

Related Topics: assessment, comparing, fractions, number line, ordering, rational numbers

Determine the value of two fractions you have chosen (which are represented as points on a number line). Then find a fraction whose value is between your two fractions (using an arrow on the number line as a guide) and determine its value. Bounded Fraction Pointer is one of the Interactivate assessment explorers.

Related Topics: assessment, comparing, fractions, number line, ordering, rational numbers

Create your own affine cipher for encoding and decoding messages. Input your own constant and multiplier, then input a message to encode.

Related Topics: addition, arithmetic, cipher, cryptography, division, modular, multiplication, pattern, remainders

Encode and decode messages to determine the form for an affine cipher. Input a message to encode, then input your guesses for the constant and multiplier. Caesar Cipher II is one of the Interactivate assessment explorers.

Related Topics: addition, arithmetic, assessment, cipher, cryptography, division, modular, multiplication, pattern, remainders

Decode encrypted messages to determine the form for an affine cipher, and practice your reasoning and arithmetic skills. Input your guesses for the multiplier and constant. Caesar Cipher III is one of the Interactivate assessment explorers.

Related Topics: addition, arithmetic, assessment, cipher, cryptography, division, functions, modular, multiplication, pattern, remainders

Learn about fractions between 0 and 1 by repeatedly deleting portions of a line segment, and also learn about properties of fractal objects. Parameter: fraction of the segment to be deleted each time.

Related Topics: fractals, fractions, geometric sequences, pattern, pre-calculus, recursion, sequences, sets

Work with various types of clocks in order to learn about modular arithmetic operations.
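The affine ciphers built in the Caesar Cipher activities above map each letter position x to (multiplier · x + constant) mod 26. A hedged sketch of the encoding step (decoding would additionally need the modular inverse of the multiplier; the function name is mine):

```python
def affine_encode(message: str, multiplier: int, constant: int) -> str:
    """Encode letters via x -> (multiplier * x + constant) mod 26;
    non-letters pass through unchanged."""
    out = []
    for ch in message:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            x = ord(ch) - base
            out.append(chr(base + (multiplier * x + constant) % 26))
        else:
            out.append(ch)
    return ''.join(out)

# With multiplier 1 this reduces to a plain Caesar shift by `constant`.
```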
Parameters: Number of hours on the clock.

Related Topics: addition, division, elapsed time, modular, multiplication, remainders, time

Color numbers in Pascal's Triangle by rolling a number and then clicking on all entries that are multiples of the number rolled, thereby practicing multiplication tables, investigating number patterns, and investigating fractal patterns. Coloring Multiples in Pascal's Triangle is one of the Interactivate assessment explorers.

Related Topics: algebra, combinatorics, division, divisors, factors, fractals, integers, multiples, pascal's triangle, pascals triangle, pattern, triangle, whole numbers

Color numbers in Pascal's Triangle by rolling a number and then clicking on all entries that have the same remainder when divided by the number rolled, thereby practicing division and remainders, investigating number patterns, and investigating fractal patterns. Coloring Remainders in Pascal's Triangle is one of the Interactivate assessment explorers.

Related Topics: algebra, combinatorics, division, divisors, factors, fractals, integers, modular, pascal's triangle, pascals triangle, pattern, remainders, triangle, whole numbers

Compare two sets of objects, using estimation to determine which is greater. Estimate a number of objects, the length of a line, or the area of a shape. Parameter: error tolerance. Comparison Estimator is one of the Interactivate assessment explorers.

Related Topics: area, assessment, comparing, counting, estimation, length, scale

Converts fractions to decimals and decimals to fractions. Observe the relationships between fractions and decimals.

Related Topics: decimals, division, fractions, rational numbers

Visually represent two unique fractions that are equivalent to a given fraction. The fractional value is shown on a number line after you check to see if your fraction is correct.
Equivalent Fraction Finder is one of the Interactivate assessment explorers.

Related Topics: comparing, fractions, identity, multiples, number line, proportion, rational numbers

Visually represent two unique fractions that are equivalent to a given fraction. The fractional value is shown on a number line as you color in the fraction. Equivalent Fraction Pointer is one of the Interactivate assessment explorers.

Related Topics: comparing, fractions, identity, multiples, number line, ordering, proportion, rational numbers

Practice estimation skills by determining the number of objects, the length of a line, or the area of a shape. Parameters: error tolerance of estimate. Estimator is one of the Interactivate assessment explorers.

Related Topics: area, assessment, counting, estimation, length, scale

Play a "Connect-Four" style game, earning a game piece each time you answer an estimation question correctly. This game asks the players to estimate the answer to addition, multiplication, and percentage questions. Choose difficulty level, question types, and time. Estimator Four is one of the Interactivate assessment games.

Related Topics: addition, arithmetic, assessment, estimation, integers, multiplication, percentages, percents

Test your estimation skills by answering questions. This quiz asks you to estimate the answer to addition, multiplication, and percentage questions. Choose difficulty level, question types, and time. Estimator Quiz is one of the Interactivate assessment quizzes.

Related Topics: addition, arithmetic, assessment, counting, distributive, integers, multiplication, percentages, percents

Visualize factors through building rectangular areas on a grid. First enter all the factorizations of a number, then draw each factor set as an area on the grid.
Factorize is one of the Interactivate assessment explorers.

Related Topics: area, arithmetic, assessment, commutative, divisors, factors, grouping, multiples, multiplication, rectangles, whole numbers

Visualize factors through building rectangular areas on a grid. As you draw each factor set on the grid, the factors will be listed. Factorize 2 is one of the Interactivate assessment explorers.

Related Topics: area, assessment, commutative, division, divisors, factors, multiples, multiplication, rectangles, whole numbers

Learn about estimation through modeling of a forest fire. This activity allows the user to burn a virtual forest, then estimate the number, the percent, or the fraction of trees burned. Fire Assessment is one of the Interactivate assessment explorers.

Related Topics: estimation, fractions, percentages, percents

Create your own fractals by drawing a "line deformation rule" and stepping through the generation of a geometric fractal. Parameters: Grid type, number of bending points on the line.

Related Topics: chaos, fractals, geometric sequences, iteration, pattern, pre-calculus, recursion, scale, self-similarity, sequences, symmetry, transformation

Determine the fractal dimensions of several line-deformation fractals. Input the scale factor and number of similar copies, and the dimension will be calculated. Fractal Dimensions is one of the Interactivate assessment explorers.

Related Topics: chaos, fractals, logarithm, pattern, scale, self-similarity

Determine the value of two given fractions represented as points on a number line. Then find a fraction whose value is between the two given fractions and determine its value.
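A handy trick for the "find a fraction between two given fractions" task is the mediant: for positive fractions a/b and c/d, the fraction (a+c)/(b+d) always lands strictly between them when they differ. A hedged sketch, not the applet's code:

```python
from fractions import Fraction

def mediant(x: Fraction, y: Fraction) -> Fraction:
    """The mediant of a/b and c/d is (a+c)/(b+d); for positive fractions
    it lies strictly between the two whenever they differ."""
    return Fraction(x.numerator + y.numerator, x.denominator + y.denominator)

lo, hi = Fraction(1, 3), Fraction(1, 2)
mid = mediant(lo, hi)
print(mid)            # 2/5
print(lo < mid < hi)  # True
```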
Fraction Finder is one of the Interactivate assessment explorers.

Related Topics: arithmetic, assessment, comparing, fractions, number line, ordering, rational numbers, subtraction

Students play a generalized version of connect four, gaining the chance to place a piece after simplifying fractions, converting fractions to decimals and percentages, and answering algebra questions involving fractions. Choose difficulty level, question types, and time. Fraction Four is one of the Interactivate assessment games.

Related Topics: addition, algebra, arithmetic, assessment, decimals, division, fractions, mixed numbers, multiplication, percentages, percents, rational numbers, solving equations, subtraction

Determine the value of two given fractions represented as points on a number line. Then find a fraction whose value is between the two given fractions (using an arrow on the number line as a guide) and determine its value. Fraction Pointer is one of the Interactivate assessment explorers.

Related Topics: arithmetic, assessment, comparing, fractions, number line, ordering, rational numbers, subtraction

Test your fraction skills by answering questions. This quiz asks you to simplify fractions, convert fractions to decimals and percentages, and answer algebra questions involving fractions. Choose difficulty level, question types, and time. Fraction Quiz is one of the Interactivate assessment quizzes.

Related Topics: addition, algebra, arithmetic, decimals, division, divisors, fractions, integers, mixed numbers, multiplication, percentages, percents, subtraction

Visualize fractions by coloring in the appropriate portions of either a circle or a square, then order those fractions from least to greatest.
Fraction Sorter is one of the Interactivate assessment explorers.

Related Topics: area, circles, comparing, fractions, number line, ordering, proportion, rational numbers, squares

Step through the generation of a Hilbert Curve -- a fractal made from deforming a line by bending it, and explore number patterns in sequences and geometric properties of fractals.

Related Topics: chaos, fractals, geometric sequences, geometry, iteration, length, lines, pattern, recursion, self-similarity, sequences

Enter a complex value for "c" in the form of an ordered pair of real numbers. The applet draws the fractal Julia set for that seed value.

Related Topics: complex number, coordinate plane, coordinate system, fractals, iteration, julia set, mandelbrot set, recursion, recursive functions, sets

Step through the generation of the Koch Snowflake -- a fractal made from deforming the sides of a triangle, and explore number patterns in sequences and geometric properties of fractals.

Related Topics: decimals, fractals, geometric sequences, infinity, iteration, lines, pre-calculus, recursion, self-similarity, sequences

Mixtures allows exploration of percents through two piles of colored and uncolored chips. The user must decide how many chips to color to create the desired percentage of colored chips compared to the total pile. Mixtures is one of the Interactivate assessment explorers.

Related Topics: assessment, counting, percentages, percents, proportion, sets

Practice estimation skills by determining whether the number of objects, the length of a line, or the area of a shape is more or less than the number given. Parameters: error tolerance of estimate. More or Less Estimator is one of the Interactivate assessment explorers.

Related Topics: area, comparing, counting, estimation, length, scale

Visually explore counting and place value with different number bases, from base 2 to base 16, and up to the hundreds place using a clock-like interface.
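Place value in another base, as the clock-style counter models, comes from repeated division by the base; a small illustrative sketch (not the applet's code):

```python
def to_base(n: int, base: int) -> str:
    """Write a non-negative integer in the given base (2 through 16)."""
    if not 2 <= base <= 16:
        raise ValueError("base must be between 2 and 16")
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, base)   # peel off the lowest-order digit
        out.append(digits[r])
    return "".join(reversed(out))

print(to_base(100, 2))   # 1100100
print(to_base(100, 16))  # 64
```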
The activity also allows you to look at the numbers on the clock in base 10 or in your other chosen base to explore the relationship between those values.

Related Topics: arithmetic, counting, exponents, modular

Similar to the original "Function Machine" but lists input and output in a table and will not let the user attempt to guess the rule without having at least two data points. Number Cruncher is one of the Interactivate assessment explorers.

Related Topics: addition, algebra, arithmetic, assessment, data plot, function machine, functions, input, linear functions, multiplication, output, solving equations, subtraction

Students play a generalized version of connect four, gaining the chance to place a piece on the board by answering order of operations questions (addition/subtraction, multiplication/division, exponents, and parentheses). Parameters: time, difficulty level, types of questions. Order of Operations Four is one of the Interactivate assessment games.

Related Topics: addition, arithmetic, assessment, associative, commutative, division, exponents, integers, multiplication, negative number, order of operations, subtraction

This activity allows the user to practice order of operations with addition/subtraction, multiplication/division, exponents, and parentheses. Order of Operations Quiz is one of the Interactivate assessment quizzes.

Related Topics: addition, arithmetic, assessment, associative, commutative, division, exponents, integers, multiplication, negative number, order of operations, subtraction

Recognize patterns in a series of shapes, numbers, or letters. After determining the pattern, the student fills in the missing pieces.
Three levels of difficulty are available.

Related Topics: arithmetic, arithmetic sequences, iteration, pattern, sequences

Graph recursive functions by defining f(0)=C and defining f(n) based on f(n-1).

Related Topics: algebra, arithmetic sequences, cartesian coordinate, coordinate plane, functions, geometric sequences, graph, iteration, recursion, recursive functions, sequences

Learn about number patterns in sequences and recursions by specifying a starting number, multiplier, and add-on. The numbers in the sequence are displayed on a graph, and they are also listed below the graph.

Related Topics: addition, arithmetic, arithmetic sequences, cartesian coordinate, coordinate plane, decimals, geometric sequences, graph, iteration, multiplication, pre-calculus, recursion, recursive functions, sequences

Step through the generation of Sierpinski's Carpet -- a fractal made from subdividing a square into nine smaller squares and cutting the middle one out. Explore number patterns in sequences and geometric properties of fractals.

Related Topics: algebra, area, fractals, fractions, geometric sequences, iteration, pattern, pre-calculus, recursion, recursive functions, self-similarity, sequences

Step through the generation of Sierpinski's Triangle -- a fractal made from subdividing a triangle into four smaller triangles and cutting the middle one out. Explore number patterns in sequences and geometric properties of fractals.

Related Topics: dimension, fractals, fractions, geometric sequences, geometry, iteration, length, pattern, pre-calculus, recursion, recursive functions, self-similarity, sequences, symmetry, triangle, triangles

Determine the value of a given fraction represented as a point on a number line. Then find a fraction equivalent to the given fraction.
Single Fraction Finder is one of the Interactivate assessment explorers.

Related Topics: equivalent, fractions, number line

Determine the value of a given fraction represented as a point on a number line. Then find a fraction whose value is the given fraction, using an arrow on the number line as a guide. Single Fraction Pointer is one of the Interactivate assessment explorers.

Related Topics: equivalent, fractions, number line

Explore perimeter through rectangular and straight-line arrangements of tables, calculating the number of chairs needed to surround different arrangements. Tables and Chairs is one of the Interactivate assessment explorers.

Related Topics: addition, agent modeling, arithmetic, associative, coordinate system, dimension, function properties, grouping, integers, multiples, perimeter, scale

Explore fractals by investigating the relationships between the Mandelbrot set and Julia sets.

Related Topics: chaos, complex number, coordinate, coordinate plane, exponents, fractals, functions, geometric sequences, geometry, iteration, julia set, mandelbrot set, pattern, recursion, recursive functions, self-similarity, sets

Step through the tortoise and hare race, based on Zeno's paradox, to learn about the multiplication of fractions and about convergence of an infinite sequence of numbers.

Related Topics: decimals, distance, fractals, geometric sequences, infinity, iteration, length, logarithm, multiplication, pattern, percentages, percents, proportion, rational numbers, recursion, recursive functions, self-similarity, sequences

Sort colored shapes into a three-circle Venn diagram.

Related Topics: pattern, sets, venn diagram

Enter two complex numbers (z and c) as ordered pairs of real numbers, then click a button to iterate step by step. The iterates are graphed in the x-y plane and printed out in table form.
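The step-by-step iteration in that explorer is z → z² + c; seeds whose iterates stay bounded are the "prisoners" and the rest are "escapees". A rough sketch of the escape test (the bound of 2 and the iteration cap are conventional choices, not taken from the applet):

```python
def escapes(z: complex, c: complex, max_iter: int = 100, bound: float = 2.0):
    """Iterate z -> z*z + c; return the step at which |z| first exceeds
    the bound, or None if z stays bounded (a 'prisoner')."""
    for step in range(max_iter):
        if abs(z) > bound:
            return step
        z = z * z + c
    return None

print(escapes(0, 0))       # None: the seed 0 is a prisoner for c = 0
print(escapes(2 + 2j, 0))  # 0: a large seed escapes immediately
```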
This is an introduction to the idea of prisoners/escapees in iterated functions and the calculation of fractal Julia sets.

Related Topics: complex number, coordinate plane, coordinate system, exponents, fractals, functions, graph, infinity, iteration, julia set, mandelbrot set, recursion, recursive functions, sets

Sort colored shapes into a Venn diagram based on various characteristics. Venn Diagram Shape Sorter is one of the Interactivate assessment explorers.

Related Topics: pattern, sets, sorter, venn diagram

Give input to the Whole Number Cruncher and try to guess what it did from the output it generates. This activity only generates multiplication and addition functions to avoid outputting any negative numbers. Whole Number Cruncher is one of the Interactivate assessment explorers.

Related Topics: addition, algebra, arithmetic, assessment, coordinate, function machine, functions, input, linear equations, linear functions, multiplication, output, pattern, solving equations, whole numbers

Probability (...)

Students run a simulation of how a fire will spread through a stand of trees, learning about probability and chaos. Parameters: Forest density, wind direction, size of forest.

Related Topics: agent modeling, chaos, conditional probability, experimental probability, iteration, percentages, percents, probability simulation, proportion, simulation, skewed distribution, variance

Create a game spinner with variable-sized sectors to look at experimental and theoretical probabilities. Parameters: Sizes of sectors, number of sectors, number of trials.

Related Topics: angles, circle graph, circles, counting, decimals, events, experimental probability, fair, geometric probability, outcomes, percentages, percents, pie chart, probability simulation, simulation, spinner, statistics, theoretical probability

Run a simulation of how a fire spreads through a stand of trees, learning about probability and chaos.
Track the results of multiple burns and use the data to draw conclusions.

Related Topics: agent modeling, chaos, conditional probability, experimental probability, probability simulation, simulation, theoretical probability

Choose one of N doors to experimentally determine the odds of winning the grand prize behind one of the doors, as in the TV program "Let's Make a Deal." Parameters: Number of doors, number of trials, staying or switching between the two remaining doors.

Related Topics: bar graph, conditional probability, events, experimental probability, fair, monty hall, outcomes, probability simulation, simulation, statistics, trials

Experiment with a simulation to get an approximation of Pi by dropping a needle on a lined sheet of paper.

Related Topics: angles, area, circumference, experimental probability, polygon, probability simulation, simulation

Simulation of a coin toss allowing the user to input the number of flips. Toss results can be viewed as a list of individual outcomes, ratios, or table.

Related Topics: counting, events, experimental probability, fair, flips, outcomes, percentages, probability simulation, proportion, theoretical probability, theoretical value

Compare theoretical and experimental probabilities, using dice, cards, spinners, or coin tosses. Three different probabilities can be compared at once. Parameters: Type of probabilities, number of trials.

Related Topics: combinatorics, events, experimental probability, outcomes, probability simulation, random number, theoretical probability

Experiment with the outcome distribution for a roll of two dice by simulating a dice-throwing game.
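The sums in a two-dice game like the one above are not equally likely; enumerating the 36 outcomes gives the theoretical distribution (an illustrative sketch):

```python
from collections import Counter
from itertools import product

# Count how many of the 36 equally likely (die1, die2) outcomes give each sum.
ways = Counter(a + b for a, b in product(range(1, 7), repeat=2))
for total in range(2, 13):
    print(total, ways[total] / 36)

# 7 is the most likely sum: 6 of the 36 outcomes produce it.
```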
Parameters: Which player wins with which total rolled.

Related Topics: combinatorics, decimals, events, experimental probability, fair, fractions, outcomes, percentages, percents, probability simulation, proportion, random number, statistics, theoretical probability, theoretical value

Run a simulation of how a fire will spread through a stand of trees, learning about probability and chaos. Parameters: Probability that a tree will set fire to each of its eight neighbors.

Related Topics: agent modeling, chaos, events, experimental probability, geometric probability, percentages, percents, probability simulation, proportion, recursion, simulation, statistics, theoretical probability, variance

Experiment with probability using a fixed-size section spinner, a variable section spinner, two regular 6-sided dice or customized dice.

Related Topics: circles, events, experimental probability, fair, fractions, geometric probability, outcomes, percentages, probability simulation, random number, simulation, spinner, statistics, theoretical probability, theoretical value, trials

Learn about estimation through modeling of a forest fire. This activity allows the user to burn a virtual forest, then estimate the number, the percent, or the fraction of trees burned. Fire Assessment is one of the Interactivate assessment explorers.

Related Topics: estimation, fractions, percentages, percents

Run a simulation of how a fire will spread through a stand of trees, learning about probability and chaos. Parameters: Probability that a tree catches fire if its neighbor is on fire.

Related Topics: agent modeling, chaos, decimals, events, experimental probability, outcomes, percentages, percents, proportion, simulation, squares, theoretical probability, variance

Run a simulation to generate results from running the Monty Hall scenario for multiple trials.
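The advantage of switching that a Monty Hall simulation reveals can be sketched in a few lines. The helper below is illustrative (seed and trial count are arbitrary choices), assuming the host always opens every losing door except one:

```python
import random

def monty_hall(trials: int, switch: bool, doors: int = 3, seed: int = 1) -> float:
    """Estimate the win rate when always staying or always switching."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(doors)
        pick = rng.randrange(doors)
        if switch:
            # If the host opens all other losing doors, switching wins
            # exactly when the first pick was wrong.
            wins += pick != prize
        else:
            wins += pick == prize
    return wins / trials

print(monty_hall(10_000, switch=False))  # close to 1/3
print(monty_hall(10_000, switch=True))   # close to 2/3
```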
This is similar to the game show "Let's Make A Deal", where you choose one of N doors in hopes of finding a grand prize behind one of the doors. Parameters: Number of doors, number of trials, staying or switching between the two remaining doors.

Related Topics: conditional probability, events, experimental probability, fair, monty hall, outcomes, probability simulation, random number, simulation, theoretical probability, theoretical value, trials

Run the classic Game of Life, learning about probabilities, chaos and simulation. This activity allows the user to run a randomly generated world or test out various patterns. This is a very powerful activity with a wide range of options. It runs in a separate window.

Related Topics: agent modeling, chaos, iteration, pattern, probability simulation, random number, simulation

Run the classic Game of Life, learning about probabilities, chaos and simulation. This activity allows the user to run a randomly generated world or test out various patterns. This is a simple activity which runs directly in the Interactivate window.

Related Topics: agent modeling, chaos, iteration, pattern, probability simulation, random number, simulation

Learn about sampling with and without replacement by randomly drawing marbles from a bag.
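Sampling with versus without replacement, as in the marble-bag activity, differs only in whether a drawn marble goes back into the pool; a small sketch with an arbitrary example bag (the function is illustrative, not the applet's code):

```python
import random

def draw(bag, k, replace, seed=7):
    """Draw k marbles from the bag, with or without replacement."""
    rng = random.Random(seed)
    if replace:
        return [rng.choice(bag) for _ in range(k)]
    pool = list(bag)
    picks = []
    for _ in range(k):
        picks.append(pool.pop(rng.randrange(len(pool))))
    return picks

bag = ["red"] * 3 + ["blue"] * 2
print(draw(bag, 5, replace=False))  # a permutation of the whole bag
print(draw(bag, 5, replace=True))   # repeats are possible
```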
Parameters: Number and color of marbles in the bag, replacement rule.

Related Topics: conditional probability, events, experimental probability, outcomes, percentages, probability simulation, probability with replacement, probability without replacement, random number, simulation, statistics, theoretical probability, trials

In this applet you can adjust the parameters on two Gaussian curves to determine if there is a possibility of a difference between the two means.

Related Topics: bell curve, continuous distribution, experimental probability, exponential, graph, integral, mean, normal distribution, standard deviation, statistics, theoretical probability

Experiment with a simple ecosystem consisting of grass, rabbits, and wolves, learning about probabilities, chaos, and simulation.

Related Topics: agent modeling, calculus, chaos, graph, iteration, pattern, predator-prey, probability simulation, random number, simulation, statistics

Simulate a game where two players each roll a die, and the lucky player moves one step to the finish. Parameters: what rolls win and how many steps to the finish line.

Related Topics: circle graph, events, experimental probability, fair, geometric probability, percents, pie chart, probability, probability simulation, random number, theoretical probability, trials

Simulate a game where "N" players roll two dice, and the lucky player has an advantage for reaching the finish.
Parameters: the number of players, number of trials and length of the race.

Related Topics: combinatorics, experimental probability, fair, outcomes, probability, probability simulation, random number, theoretical probability, trials

Choose one of three doors to experimentally determine the odds of winning the grand prize behind one of the doors, as in the TV program "Let's Make a Deal." Parameters: Staying or switching between the two remaining doors.

Related Topics: conditional probability, events, experimental probability, fair, monty hall, outcomes, percents, probability, probability simulation, random number, statistics, strategy, theoretical probability

Create a game spinner with one to twelve sectors in order to look at experimental and theoretical probabilities. Parameters: Number of sectors, number of trials.

Related Topics: angles, circle graph, circles, events, experimental probability, fair, geometric probability, outcomes, percents, pie chart, probability, probability simulation, spinner, theoretical probability, theoretical value, trials

Models how a population of susceptible, infected, and recovered people is affected by a disease.

Related Topics: agent modeling, chaos, graph, iteration, probability simulation, simulation, statistics

Choose one of three boxes and choose one ball from the box to look at conditional probabilities. Parameters: Number of trials.

Related Topics: conditional probability, events, experimental probability, outcomes, probability, probability simulation, probability without replacement, statistics, theoretical probability, theoretical value, trials

Statistics (...)

Create a game spinner with variable-sized sectors to look at experimental and theoretical probabilities.
Parameters: Sizes of sectors, number of sectors, number of trials.

Related Topics: angles, circle graph, circles, counting, decimals, events, experimental probability, fair, geometric probability, outcomes, percentages, percents, pie chart, probability simulation, simulation, spinner, statistics, theoretical probability

Run a simulation of how a fire spreads through a stand of trees, learning about probability and chaos. Track the results of multiple burns and use the data to draw conclusions.

Related Topics: agent modeling, chaos, conditional probability, experimental probability, probability simulation, simulation, theoretical probability

Choose one of N doors to experimentally determine the odds of winning the grand prize behind one of the doors, as in the TV program "Let's Make a Deal." Parameters: Number of doors, number of trials, staying or switching between the two remaining doors.

Related Topics: bar graph, conditional probability, events, experimental probability, fair, monty hall, outcomes, probability simulation, simulation, statistics, trials

Enter data to create a bar graph, then change many settings for the graph's appearance.

Related Topics: bar graph, data plot, histogram, statistics

Sort colored shapes into a bar graph. Practice sorting by shape or by color. Appropriate for elementary grades. Bar Graph Sorter is one of the Interactivate assessment explorers.

Related Topics: bar graph, circles, counting, sorter, squares, statistics

Students can create box plots for either built-in or user-specified data as well as experiment with outliers.
User may choose to use or not use the median for calculation of interquartile range.

Related Topics: box and whisker, box plot, coordinate plane, data plot, median, outlier, percentages, quartile, range, scale, skewed distribution, statistics

Experiment with a simulation to get an approximation of Pi by dropping a needle on a lined sheet of paper.

Related Topics: angles, area, circumference, experimental probability, polygon, probability simulation, simulation

Enter your own data categories and the value of each category to create a pie chart. There are also built-in data sets which can be viewed.

Related Topics: angles, area, circle graph, circles, data plot, fractions, percentages, pie chart, proportion, statistics

Simulation of a coin toss allowing the user to input the number of flips. Toss results can be viewed as a list of individual outcomes, ratios, or table.

Related Topics: counting, events, experimental probability, fair, flips, outcomes, percentages, probability simulation, proportion, theoretical probability, theoretical value

Compare theoretical and experimental probabilities, using dice, cards, spinners, or coin tosses. Three different probabilities can be compared at once. Parameters: Type of probabilities, number of trials.

Related Topics: combinatorics, events, experimental probability, outcomes, probability simulation, random number, theoretical probability

Enter a set of data points, then derive a function to fit those points. Manipulate the function on a coordinate plane using slider bars.
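The simplest case of fitting a function to data points is the least-squares line, which has a closed form; a minimal sketch (not the applet's implementation):

```python
def fit_line(xs, ys):
    """Least-squares slope m and intercept b for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    return m, my - m * mx

# Points that lie exactly on y = 2x + 1 are recovered exactly.
m, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(m, b)  # 2.0 1.0
```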
Learn how each constant and coefficient affects the resulting graph.

Related Topics: algebra, cartesian coordinate, coordinate, coordinate plane, coordinate system, cosecant, cosine, cotangent, curve fitting, data plot, deviations, exponential, exponents, function properties, functions, graph, grouping, integers, intervals, inverse, linear equations, linear functions, logarithm, multiplication, parabola, polynomial, positive part of the operand, pre-calculus, range, secant, sine, slope, tangent, trigonometry

Experiment with probability using a fixed-size section spinner, a variable section spinner, two regular 6-sided dice or customized dice.

Related Topics: circles, events, experimental probability, fair, fractions, geometric probability, outcomes, percentages, probability simulation, random number, simulation, spinner, statistics, theoretical probability, theoretical value, trials

This applet allows the user to experiment with randomly generated data sets at various sample sizes and standard deviations. Then, users can compare the distribution of the experimental data to the expected distribution.

Related Topics: bell curve, continuous distribution, exponential, histogram, infinity, mean, measures of central tendency, normal distribution, probability simulation, statistics, trials

Create graphs of functions and sets of ordered pairs on the same coordinate plane. This is like a graphing calculator with advanced viewing options.

Related Topics: algebra, calculus, cartesian coordinate, coordinate, coordinate plane, coordinate system, cosecant, cosine, cotangent, curve fitting, data plot, exponential, exponents, function properties, functions, graph, graph theory, intervals, linear equations, linear functions, logarithm, parabola, polynomial, positive part of the operand, pre-calculus, range, secant, sine, slope, tangent, trigonometry

View histograms for built-in or user-specified data.
Experiment with how the size of the class intervals influences the appearance of the histogram. Parameters: Data sets, class sizes.

Related Topics: counting, data plot, graph, histogram, intervals, mean, range, skewed distribution, slides, standard deviation

Enter data and view the mean, median, variance, and standard deviation of the data set. Parameters: Number of observations, range for observations, which statistics to view, identifiers for the data.

Related Topics: decimals, deviations, mean, measures of central tendency, median, mode, range, simulation, standard deviation, statistics, variance

Enter data to create a double bar graph, then manipulate the graph's maximum and minimum values.

Related Topics: bar graph, data plot, double bar graph

Enter a set of data points and a function or multiple functions, then manipulate those functions to fit those points. Manipulate the function on a coordinate plane using slider bars. Learn how each constant and coefficient affects the resulting graph.

Related Topics: algebra, calculus, cartesian coordinate, coordinate, coordinate plane, coordinate system, cosecant, cosine, cotangent, curve fitting, data plot, deviations, flyer, function properties, functions, graph, linear functions, logarithm, parabola, polynomial, pre-calculus, secant, sine, slides, slope, squares, tangent, trigonometry

Students compare multiple independent variables as predictors of one dependent variable. Students explore correlation and lines of best-fit.

Related Topics: algebra, cartesian coordinate, coordinate, coordinate plane, coordinate system, data plot, deviations, flyer, functions, graph, linear functions, squares

Change the standard deviation of an automatically generated normal distribution to create a new histogram. Observe how well the histogram fits the curve, and how areas under the curve correspond to the number of trials.
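Class intervals in any of these histogram tools work the same way: each observation is counted in the equal-width bin its value falls into. An illustrative sketch with made-up data:

```python
from statistics import mean, stdev

def histogram(data, lo, hi, bins):
    """Count observations per equal-width class interval on [lo, hi)."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in data:
        if lo <= x < hi:
            counts[int((x - lo) / width)] += 1
    return counts

data = [1, 2, 2, 3, 3, 3, 4, 4, 5]
print(histogram(data, 0, 6, 3))  # [1, 5, 3]
print(mean(data), stdev(data))   # mean is 3, sample standard deviation ~1.22
```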
Parameters: standard deviation, number of trials, class intervals.

Related Topics: bell curve, continuous distribution, exponential, histogram, infinity, mean, measures of central tendency, normal distribution, probability simulation, standard deviation, statistics, trials

Plot ordered pairs on the graph, and they will be connected in the order that they are input. This enables you to decide how the pairs should be connected, rather than having the computer connect them from left to right.

Related Topics: cartesian coordinate, coordinate, coordinate plane, coordinate system, data plot, graph, lines, vertical line test

In this applet you can adjust the parameters on two Gaussian curves to determine if there is a possibility of a difference between the two means.

Related Topics: bell curve, continuous distribution, experimental probability, exponential, graph, integral, mean, normal distribution, standard deviation, statistics, theoretical probability

Create a pie chart, adjusting the size of the divisions using your mouse or by entering values. Parameters: Number of sections, size of sections, whether to use percents or fractions.

Related Topics: angles, area, circle graph, circles, fractions, percentages, percents, pie chart, proportion, statistics

PlopIt allows users to build dot plots of data using the mouse. View how the mean, median, and mode change as entries are added to the plot.
Parameters: Range for observations.

Related Topics: bar graph, data plot, graph, histogram, mean, measures of central tendency, median, mode, range, statistics

Experiment with a simple ecosystem consisting of grass, rabbits, and wolves, learning about probabilities, chaos, and simulation.

Related Topics: agent modeling, calculus, chaos, graph, iteration, pattern, predator-prey, probability simulation, random number, simulation, statistics

Plot a bivariate data set, determine the line of best fit for the data, and then check the accuracy of your line of best fit.

Related Topics: calculus, cartesian coordinate, coordinate plane, correlation, curve fitting, data plot, deviations, residual, statistics

Graph ordered pairs and customize the graph title and axis labels. Points are connected from left to right, rather than being connected in the order they are entered.

Related Topics: cartesian coordinate, coordinate, coordinate plane, data plot, graph, statistics

Plot ordered pairs of numbers, either as a scatter plot or with the dots connected. Points are connected from right to left, rather than being connected in the order they are entered.

Related Topics: cartesian coordinate, coordinate plane, coordinate system, data plot, decimals, graph

Change the median and standard deviation of an automatically generated normal distribution to create a skewed distribution, allowing you to observe properties like what it means for the mean, median, and mode to be different. Parameters: median, standard deviation, number of trials, class intervals.

Related Topics: bell curve, continuous distribution, histogram, mean, measures of central tendency, median, normal distribution, skewed distribution, standard deviation, statistics, trials, variance

Create a game spinner with one to twelve sectors in order to look at experimental and theoretical probabilities.
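Comparing experimental and theoretical probabilities with an equal-sector spinner amounts to checking relative frequencies against 1/n; a seeded sketch (sector count and trial count are arbitrary choices):

```python
import random
from collections import Counter

def spin(sectors: int, trials: int, seed: int = 3):
    """Spin an equal-sector spinner repeatedly; return relative frequencies."""
    rng = random.Random(seed)
    counts = Counter(rng.randrange(sectors) for _ in range(trials))
    return {s: counts[s] / trials for s in range(sectors)}

freqs = spin(4, 10_000)
print(freqs)  # each value should be near the theoretical 1/4
```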
Parameters: Number of sectors, number of trials.

Related Topics: angles, circle graph, circles, events, experimental probability, fair, geometric probability, outcomes, percents, pie chart, probability, probability simulation, spinner, theoretical probability, theoretical value, trials

View stem-and-leaf plots of your own data, and then practice finding means, medians and modes. Stem and Leaf Plotter is one of the Interactivate assessment explorers.

Related Topics: assessment, data plot, graph, mean, measures of central tendency, median, mode, range, statistics, stem and leaf

Trigonometry (...)

Enter a set of data points, then derive a function to fit those points. Manipulate the function on a coordinate plane using slider bars. Learn how each constant and coefficient affects the resulting graph.

Related Topics: algebra, cartesian coordinate, coordinate, coordinate plane, coordinate system, cosecant, cosine, cotangent, curve fitting, data plot, deviations, exponential, exponents, function properties, functions, graph, grouping, integers, intervals, inverse, linear equations, linear functions, logarithm, multiplication, parabola, polynomial, positive part of the operand, pre-calculus, range, secant, sine, slope, tangent, trigonometry

A more advanced version of Slope Slider, this activity allows the manipulation of the constants and coefficients in any function, thereby encouraging the user to explore the effects on the graph of the function by changing those numbers.

Related Topics: algebra, bell curve, calculus, cartesian coordinate, coordinate plane, coordinate system, cosine, decimals, exponential, flyer, function properties, functions, graph, intervals, inverse, linear equations, linear functions, lines, logarithm, parabola, polynomial, pre-calculus, range, sine, slope, tangent, translation, trigonometry

Students can create graphs of functions entered as algebraic expressions -- similar to a graphing calculator.

Related Topics:
algebra, calculus, cartesian coordinate, coordinate plane, cosecant, cosine, cotangent, functions, graph, inverse, linear equations, linear functions, logarithm, parabola, polynomial, positive part of the operand, pre-calculus, range, secant, sine, slope, tangent, trigonometry\n\nCreate graphs of functions and sets of ordered pairs on the same coordinate plane. This is like a graphing calculator with advanced viewing options.\n\nRelated Topics: algebra, calculus, cartesian coordinate, coordinate, coordinate plane, coordinate system, cosecant, cosine, cotangent, curve fitting, data plot, exponential, exponents, function properties, functions, graph, graph theory, intervals, linear equations, linear functions, logarithm, parabola, polynomial, positive part of the operand, pre-calculus, range, secant, sine, slope, tangent, trigonometry\n\nEnter a set of data points and a function or multiple functions, then manipulate those functions to fit those points. Manipulate the function on a coordinate plane using slider bars. Learn how each constant and coefficient affects the resulting graph.\n\nRelated Topics: algebra, calculus, cartesian coordinate, coordinate, coordinate plane, coordinate system, cosecant, cosine, cotangent, curve fitting, data plot, deviations, flyer, function properties, functions, graph, linear functions, logarithm, parabola, polynomial, pre-calculus, secant, sine, slides, slope, squares, tangent, trigonometry\n\nThis activity allows the user to explore the polar coordinate system. The applet is similar to GraphIt, but instead allows users to explore the representation of a function in the polar coordinate system.\n\nRelated Topics: calculus, coordinate plane, coordinate system, cosine, data plot, graph, polar coordinates, pre-calculus, sine, tangent, trigonometry\n\nCalculate the length of one side of an automatically generated right triangle by using the Pythagorean Theorem, and then check your answers. 
Pythagorean Explorer is one of the Interactivate assessment explorers.\n\nRelated Topics: algebra, angles, assessment, distance, exponents, geometry, length, pythagorean theorem, solving equations, squares, triangle, trigonometry\n\nLearn about how the Pythagorean Theorem works through investigating the standard geometric proof. Parameters: Sizes of the legs of the triangle.\n\nRelated Topics: algebra, angles, area, coordinate plane, distance, geometry, length, pythagorean theorem, slides, solving equations, squares, triangle, triangles, trigonometry\n\nOther (...)\nPractice finding elapsed time given a starting time and an ending time. Elapsed Time is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, assessment, counting, elapsed time, subtraction, time\n\nGiven a starting time and elapsed time practice finding the ending time. Elapsed Time Two is one of the Interactivate assessment explorers.\n\nRelated Topics: addition, assessment, counting, elapsed time, subtraction, time\n\nSort colored shapes into a three circle Venn Diagram.\n\nRelated Topics: pattern, sets, venn diagram\n\nScience (...)\nPractice reading a clock, input times for the clock to display, or let the clock generate random times for you to read. Choose from three difficulty levels. Clock Wise is one of the Interactivate assessment explorers.\n\nRelated Topics: assessment, elapsed time, time\n\nClassify various objects into categories in a Venn Diagram. Learn how categories in Venn Diagrams work. Venn Diagrams is one of the Interactivate assessment explorers.\n\nRelated Topics: assessment, sets, venn diagram",
https://math.stackexchange.com/questions/2564450/conditional-expectation-of-uniform-random-variable-given-sum-of-two-uniform-rand/2564455
# Conditional expectation of uniform random variable given sum of two uniform random variables

We know $X_1 \sim U[0, 1]$ and $X_2 \sim U[0, 2]$. I'm trying to compute $E[X_1^3 | 2X_1 + X_2 = 2]$.

I was given the hint that $X_1 | 2X_1 + X_2 = 2 \sim U[0, 1]$, and from here on I can compute $E[X_1^3 | 2X_1 + X_2 = 2] = \frac{1}{4}$. However, I'm not sure how we can prove the hint.

When $2X_1+X_2=2$, we know that $X_1=\frac{2-X_2}{2}$. We also know that $X_2\sim U[0,2]$, so $2-X_2\sim U[0,2]$, and therefore $\frac{2-X_2}{2}\sim U[0,1]$. Hence, conditional on $2X_1+X_2=2$, we have $X_1=\frac{2-X_2}{2}\sim U[0,1]$.
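A quick Monte Carlo sanity check of the substitution argument: sample $X_2 \sim U[0,2]$, set $X_1 = (2-X_2)/2$ as forced by the condition, and average $X_1^3$. The function name and sample count here are arbitrary.

```python
import random

def conditional_third_moment(n_samples: int, seed: int = 0) -> float:
    """Estimate E[X1^3 | 2*X1 + X2 = 2] by sampling X2 ~ U[0, 2]
    and substituting the constraint X1 = (2 - X2) / 2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x2 = rng.uniform(0.0, 2.0)
        x1 = (2.0 - x2) / 2.0   # forced by the condition 2*X1 + X2 = 2
        total += x1 ** 3
    return total / n_samples

print(conditional_third_moment(200_000))  # ≈ 0.25, matching E[U^3] = 1/4 for U ~ U[0,1]
```

The estimate converges to $\int_0^1 x^3\,\mathrm{d}x = \frac{1}{4}$, as claimed.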
https://file.scirp.org/Html/4-2180220_85834.htm
Symmetries in Evolving Space-Time and Their Connection to High-Frequency Gravitational Wave Production

Journal of High Energy Physics, Gravitation and Cosmology
Vol. 04, No. 03 (2018), Article ID: 85834, 12 pages
10.4236/jhepgc.2018.43027

Andrew Walcott Beckwith

Physics Department, College of Physics, Chongqing University, Chongqing, China
Received: May 7, 2018; Accepted: July 3, 2018; Published: July 6, 2018

ABSTRACT

We claim that the linking of a shrinking prior universe to our own via a wormhole bridge solution of about $10^{-44}$ seconds permits the formation of a short-term quintessence scalar field. Symmetries allow for creating high-frequency gravitational waves at the onset of inflation, which has consequences in our present cosmological era. This instantaneous energy transfer between the prior and present universes permits relic graviton production, which we claim is a viable candidate for future propulsion technologies in spacecraft design. The Big Bang began with the passage of thermal energy from an existing universe into ours, triggering another Big Bang, and this picture helps us understand how a graviton burst can occur in the first place.

Keywords:

Wormhole, High-Frequency Gravitational Waves (HFGW), Symmetry, Causal Discontinuity
1. Introduction

We begin first with a restatement of the physics leading to a wormhole solution for the transfer of vacuum energy from a prior universe to today's expanding universe. We then argue that such a vacuum-energy transfer is a necessary condition for forming short-term quintessence scalar fields. This allows us to form quintessence scalar-field behavior that is consistent with the $w=-1$ value for the ratio of pressure and density, which is well within the known red shift $z=1100$ cosmic microwave background barrier. We should note that $w=0$ at the initial onset of the big bang, and that the CMB radiation cutoff region is 380 to 400 thousand years after the big bang. At that time, we are able to start seeing a separation of photons and matter, so we can observe stellar astrophysical processes reliably. After restating how a wormhole solution forms, we show that the results depend heavily on the scalar field becoming a real quantity after the initial phases of inflation, as well as rapidly damping out as the vacuum energy creates emergent field conditions, allowing for the relic production of gravitons. The existence of the solution of the Wheeler-DeWitt equation with a pseudo-time-like component provides an additional symmetry to space-time evolution, which is broken by the chaotic regime of the scale factor.

This leads to a bifurcation in the evolution of the quintessence scalar field, which starts from a zero value at the onset of inflation. In the present-day era, the cosmological data set leads us to conclude that the rate of expansion of the universe is actually increasing. In between these two zero values of the scalar field, we have non-zero values of the scalar field. This, together with the Seth Lloyd model of the universe as a quantum computing device, permits us to specify a shift to high-frequency gravitational waves.
This is a way to keep a finite but large number of computational bits of information for modeling universe expansion, regardless of how large the universe becomes from the present to the far future.

2. How a Wormhole Forms

Lorentzian wormholes have been modeled quite thoroughly. Visser (1995) states that the wormhole solution does not have an event horizon hiding a singularity, i.e., there is no singularity in the wormhole that is held open by dark energy. We are presenting a wormhole purely as a bridge between a prior and a present universe, as discussed by Crowell (2005) in his work on quantum fluctuations of spacetime. The equation for thermal/vacuum energy flux, which leads to a wormhole, uses a pseudo-time-like space coordinate in a modified Wheeler-DeWitt equation for a bridge between two universes. We also state that the wormhole solution is dominated by a huge vacuum-energy value.

To show this, we use results from Crowell (2005) on quantum fluctuations in space-time. This gives a model from a pseudo-time-component version of the Wheeler-DeWitt equation, with use of the Reissner-Nordstrom metric to help us obtain a solution that passes through a thin shell separating two space-times.
The radius of the shell ${r}_{0}\left(t\right)$ separating the two spacetimes is approximately of length ${l}_{P}$, leading to domination by the time component of the Reissner-Nordstrom metric

$\text{d}{S}^{2}=-F\left(r\right)\cdot \text{d}{t}^{2}+\frac{\text{d}{r}^{2}}{F\left(r\right)}+\text{d}{\Omega }^{2}$ (1)

This has:

$F\left(r\right)=1-\frac{2M}{r}+\frac{{Q}^{2}}{{r}^{2}}-\frac{\Lambda }{3}\cdot {r}^{2}\underset{T\to {10}^{32}\text{Kelvin}~\infty }{\to }-\frac{\Lambda }{3}\cdot {\left(r={l}_{P}\right)}^{2}$ (2)

This assumes that the cosmological vacuum energy parameter has a temperature dependence as outlined by Park (2003), leading to

$\frac{\partial F}{\partial r}~-2\cdot \frac{\Lambda }{3}\cdot \left(r\approx {l}_{P}\right)\equiv \eta \left(T\right)\cdot \left(r\approx {l}_{P}\right)$ (3)

and to a wave-functional solution to a Wheeler-DeWitt equation bridging two space-times. This solution describes a bridge between the two space-times, with "instantaneous" transfer of thermal heat, as given by Crowell (2005):

$\Psi \left(T\right)\propto -A\cdot \left\{{\eta }^{2}\cdot {C}_{1}\right\}+A\cdot \eta \cdot {\omega }^{2}\cdot {C}_{2}$ (4)

This has ${C}_{1}={C}_{1}\left(\omega ,t,r\right)$ as a cyclic and evolving function in terms of frequency, time, and spatial position, and similarly for ${C}_{2}={C}_{2}\left(\omega ,t,r\right)$, with ${C}_{1}\left(\omega ,t,r\right)\ne {C}_{2}\left(\omega ,t,r\right)$. The upshot of this is that a thermal bridge exists between a shrinking prior universe collapsing to a singularity and an expanding universe emerging from a singularity, with an almost instantaneous transfer of heat from that prior universe to today's cosmology.
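A minimal numerical sketch of the large-$\Lambda$ limit claimed in Equation (2): working in Planck units ($G=c=\hslash=1$), with assumed order-unity values for $M$ and $Q$ at $r={l}_{P}=1$ (toy inputs for illustration only, not fitted to any physical model), the $-\Lambda {r}^{2}/3$ term dominates $F\left(r\right)$ as $\Lambda$ grows:

```python
def F(r, M, Q, Lam):
    """Reissner-Nordstrom lapse function with cosmological term, Eq. (2)."""
    return 1.0 - 2.0 * M / r + Q**2 / r**2 - (Lam / 3.0) * r**2

# Toy inputs in Planck units (G = c = hbar = 1); M = 1, Q = 0.5 at r = l_P = 1
# are illustrative assumptions only.
r, M, Q = 1.0, 1.0, 0.5
for Lam in (1e3, 1e6, 1e9):
    full = F(r, M, Q, Lam)
    approx = -(Lam / 3.0) * r**2          # the claimed large-Lambda limit
    rel_err = abs(full - approx) / abs(approx)
    print(f"Lambda = {Lam:.0e}: rel. difference from -Lambda*r^2/3 is {rel_err:.2e}")
```

The relative difference from the pure $-\Lambda {r}^{2}/3$ term falls as $1/\Lambda$, illustrating the domination used in the wormhole construction.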
The thermal bridge being modeled as a wormhole is a necessary and sufficient condition for an almost instantaneous transfer mechanism of matter-energy to today's universe from a prior cosmological collapse. We get this by assuming that the absolute value of the five-dimensional "vacuum state" parameter varies with temperature T, as Beckwith (2007) writes:

$|{\Lambda }_{5-\mathrm{dim}}|\approx {c}_{1}\cdot \left(1/{T}^{\alpha }\right)$ (5)

This is in contrast with the more traditional four-dimensional version, without the minus sign of the brane-world theory version. The five-dimensional version is based on brane theory and higher dimensions, whereas the four-dimensional version is linked to more traditional de Sitter space-time geometry, as given by Park (2003):

${\Lambda }_{4-\mathrm{dim}}\approx {c}_{2}\cdot {T}^{\beta }$ (6)

If we look at the range of allowed upper bounds for the cosmological constant, there exists a difference between what Park (2003) predicts (an almost infinite value) and the much lower value given by Barvinsky (2006), which is 360 times the square of Planck's mass. The difference between these two values is commensurate with the existence of a symmetry-breaking phase transition, where we predict spin-two gravitons are released, and also where we observe axion domain wall decay. We should note that this assumes that a release of gravitons occurs, which leads to a removal of graviton-energy-stored contributions to this cosmological parameter, with ${m}_{P}$ as the Planck mass, i.e., the mass of a black hole of "radius" on the order of magnitude of the Planck length ${l}_{P}\sim {10}^{-35}$ m.

This leads to Planck's mass ${m}_{P}\approx 2.17645×{10}^{-8}$ kilograms, as alluded to by Barvinsky (2006):

${\Lambda }_{4-\mathrm{dim}}\propto {c}_{2}\cdot T\underset{\text{graviton-production}}{\to }360\cdot {m}_{P}^{2}\ll {c}_{2}\cdot \left[T\approx {10}^{32}\text{K}\right]$ (7)

Right after the gravitons are released, one still sees a drop-off of temperature contributions to the cosmological constant. Then we can write, for small time values $t\approx {\delta }^{1}\cdot {t}_{P}$, $0<{\delta }^{1}\le 1$, and for temperatures sharply lower than $T\approx {10}^{12}\text{Kelvin}$, as commented on by Beckwith (2007), that there exists a positive integer n that leads to a sharp phase-transition drop in temperature values as we approach Planck's time interval, ${t}_{P}~{10}^{-44}$ seconds.

After the exit of vacuum "thermal" energy from the mouth of the wormhole bridge between a prior universe and our universe, within a Planck time interval, we observe a decrease in values of the cosmological constant in the four-dimensional world and an increase in the absolute value of the "vacuum energy" in the five-dimensional brane world. However, their absolute magnitudes are approximately the same after the Planck time interval.

At and before this region of relative equivalence of these magnitudes (the cosmological constant in four dimensions and the absolute value of vacuum energy in five-dimensional brane worlds), a scale-factor discontinuity region (referred to later in this paper) exists. In the region of time and space before we traverse this thermal/spatial scale-factor discontinuity region, we have $10^{10}$ or so bits of "information".

After traversing this thermal/spatial scale-factor discontinuity region, this rapidly increases to $10^{120}$ or so bits of information, due to an increase in complexity of the cosmological space-time structure. The dividing line between these two regions of complexity is shown by the behavior of the four-dimensional cosmological constant energy and the five-dimensional "vacuum energy" expressions discussed above.

What we do state, however, is that in the region before this discontinuity, we have less than or equal to $10^{10}$ bits of "information".
This changes to a four-dimensional "cosmological constant" energy far smaller than the magnitude of the five-dimensional brane-world vacuum energy, becoming $10^{120}$ bits of actual cosmological information and leading to the large-scale creation of structure in our universe.

$\frac{{\Lambda }_{4-\mathrm{dim}}}{|{\Lambda }_{5-\mathrm{dim}}|}-1\approx \frac{1}{n}$ (8)

The transition outlined in Equation (7) above has a starting point with extremely high temperatures created by a vacuum-energy transfer between a prior universe and our present universe, as outlined by Equation (3) and Equation (4) above, whereas the regime where we look at an upper bound to vacuum energy in four dimensions is outlined in Equation (8) above. This wormhole solution is a necessary and sufficient condition for thermal transfer of heat from that prior universe to allow for graviton production under relic inflationary conditions.

Claim 1: The Following Are Equivalent

1) There exists a Reissner-Nordstrom metric with $-F\left(r\right)\text{d}{t}^{2}$ dominated by a cosmological vacuum energy term, $\left(-\Lambda /3\right)$ times $\text{d}{t}^{2}$, for early-universe conditions in the time range less than or equal to Planck's time ${t}_{P}$.

2) A solution for a pseudo-time-dependent version of the Wheeler-DeWitt equation exists with a wave function $\Psi \left(r,t,T\right)$, forming a wormhole bridge between two universe domains, with $\Psi \left(r,t,T\right)=\Psi \left(r,-t,T\right)$ for a region of space-time before signal causality discontinuity, and for times $|t|<{t}_{P}$.

3) The heat-flux-dominated vacuum energy value given by $\Psi \left(r,t,T\right)$ contributes to a relic graviton burst, in a region of time less than or equal to Planck's time ${t}_{P}$.

The proof of Claim 1 is referenced in an article by Beckwith (2007). This claim establishes the structure we outline in our causal discontinuity approach to wormholes.

3. Presenting Evidence for Causal Discontinuity Due to the Transfer of Thermal-Based Vacuum Energy Implied by the Wheeler-DeWitt Equation Wormhole Solution

Begin first by presenting a version of the Friedmann equation given by Frampton (2007):

${\left(\stackrel{˙}{a}/a\right)}^{2}=\frac{8\text{π}G}{3}\cdot \left[{\rho }_{\text{rel}}+{\rho }_{\text{matter}}\right]+\frac{\Lambda }{3}$ (9)

We argue that the existence of such a nonlinear equation for early-universe scale-factor evolution introduces a de facto "information" barrier between a prior universe and our own, which, as we argue, can only pass thermal bounce input to the new nucleation phase of our present universe. To see this, we can turn to Dr. Dowker's paper on causal sets. These require the following ordering with a relation $\prec$, where we assume that initial relic space-time is replaced by an assembly of discrete elements, so as to create, initially, a partially ordered set C:

1) If $x\prec y$ and $y\prec z$, then $x\prec z$.

2) If $x\prec y$ and $y\prec x$, then $x=y$ for $x,y\in C$.

3) For any pair of fixed elements x and z in C, the set $\left\{y|x\prec y\prec z\right\}$ of elements lying in between x and z is finite.

Items (1) and (2) give us that C is a partially ordered set, and the third item ensures local finiteness.
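The three causal-set conditions above can be checked mechanically on a small finite example; the four-element "diamond" order used here is purely illustrative:

```python
# A small "diamond" causal set: a ≺ b, a ≺ c, b ≺ d, c ≺ d (with reflexive
# pairs included so condition (2) reads as antisymmetry of a partial order).
elements = {"a", "b", "c", "d"}
prec = {(x, x) for x in elements} | {("a", "b"), ("a", "c"), ("a", "d"), ("b", "d"), ("c", "d")}

def is_transitive(rel):
    """Condition (1): x ≺ y and y ≺ z imply x ≺ z."""
    return all((x, z) in rel for (x, y) in rel for (w, z) in rel if y == w)

def is_antisymmetric(rel):
    """Condition (2): x ≺ y and y ≺ x imply x = y."""
    return all(x == y for (x, y) in rel if (y, x) in rel)

def interval(rel, x, z):
    """Condition (3): the set {y : x ≺ y ≺ z} must be finite for every x, z."""
    return {y for y in elements if (x, y) in rel and (y, z) in rel}

print(is_transitive(prec), is_antisymmetric(prec))   # True True
print(sorted(interval(prec, "a", "d")))              # ['a', 'b', 'c', 'd']
```

Every interval of a finite set is trivially finite; the interesting content of condition (3) appears only for infinite causal sets, where it rules out continuum-like accumulation between two elements.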
When combined with the Friedmann equation (9) as a model for how the universe evolves via a scale-factor equation, this permits us to write, after we substitute $a\left({t}^{\ast }\right)<{l}_{P}$ for ${t}^{\ast }<{t}_{P}$ = Planck time, ${a}_{0}\equiv {l}_{P}$, and ${a}_{0}/a\left({t}^{\ast }\right)\equiv {10}^{\alpha }$ for $\alpha \gg 0$, a discrete model of Equation (9), which leads to:

Claim 2: Using the Friedmann Equation for the Evolution of a Scale Factor $a\left(t\right)$, We Have a Non-Partially-Ordered-Set Evolution of the Scale Factor with Evolving Time, Implying a Causal Discontinuity

We establish the validity of this formalism by rewriting the Friedmann equation as follows:

$\left[\frac{a\left({t}^{\ast }+\delta t\right)}{a\left({t}^{\ast }\right)}\right]-1<\frac{\left(\delta t\cdot {l}_{P}\right)}{\sqrt{\Lambda /3}}\cdot {\left[1+\frac{8\text{π}}{\Lambda }\cdot \left[{\left({\rho }_{rel}\right)}_{0}\cdot {10}^{4\alpha }+{\left({\rho }_{m}\right)}_{0}\cdot {10}^{3\alpha }\right]\right]}^{1/2}\underset{\Lambda \to \infty }{\to }0$ (10)

So in the initial phases of the big bang, with a very large vacuum energy, we obtain the following relation, which violates (signal) causality, for any given fluctuation of time in the "positive" direction:

$\left[\frac{a\left({t}^{\ast }+\delta t\right)}{a\left({t}^{\ast }\right)}\right]<1$ (11)

We argue that such a violation of causality shows up in the evolution of the scale factor, as given by the ratio of the scale factor at a later time divided by the same factor at a prior time. The fact that this ratio is less than one argues for a break in information propagation from a prior universe to our present universe.
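The limit claimed in Equation (10) can be illustrated numerically. All inputs below ($\delta t\cdot {l}_{P}$, the two densities, $\alpha$) are dimensionless placeholders; only the behavior of the bound as $\Lambda$ grows is the point:

```python
import math

def rhs_bound(Lam, dt_lP=1.0, rho_rel=1.0, rho_m=1.0, alpha=1.0):
    """Right-hand side of Eq. (10) with toy (dimensionless) placeholder inputs."""
    prefactor = dt_lP / math.sqrt(Lam / 3.0)
    bracket = 1.0 + (8.0 * math.pi / Lam) * (rho_rel * 10 ** (4 * alpha) + rho_m * 10 ** (3 * alpha))
    return prefactor * math.sqrt(bracket)

for Lam in (1e2, 1e4, 1e6, 1e8):
    print(f"Lambda = {Lam:.0e}:  bound on a(t*+dt)/a(t*) - 1  <  {rhs_bound(Lam):.3e}")
```

The bound decreases monotonically toward zero as $\Lambda$ grows, so for sufficiently large vacuum energy the ratio $a\left({t}^{\ast }+\delta t\right)/a\left({t}^{\ast }\right)$ is forced below one, which is the inequality (11).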
One relevant area of inquiry to be investigated in the future is the following: Is this argument valid if there is some third choice of set structure? (For instance, do self-referential sets fall into one category or another?) The answer to this, we think, lies in the (entangled?) vortex structure of space-time, similar to that generated in the laboratory by Ruutu (1996). Self-referential sets may be part of the generated vortex structure, and we will endeavor to find out whether this can be experimentally investigated. If the causal-set argument, and its violation via this procedure, holds, we have the view that what we are seeing is a space-time "drum" effect, with the causal discontinuity forming the head of a "drum" for a region of about $10^{10}$ bits of "information" before our present universe, up to the instant of the big bang itself, for a time region less than $t~{10}^{-44}$ seconds in duration, with the bits of "information" then increasing up to $10^{120}$ due to condensed-matter-style vortex filaments forming through a symmetry-breaking phase transition.

4. Seth Lloyd's Universe as a Quantum Computer Model with Modifications

We use the formula given by Seth Lloyd (2002) that defines the number of operations the "Universe" can "compute" during its evolution. Lloyd (2002) uses the idea, attributed to Landauer, that the universe is a physical system with information processed over its evolutionary history. Lloyd also cites a prior paper where he attributes an upper bound to the permitted speed a physical system can have in performing operations via the Margolus-Levitin theorem.
He specifies a quantum-mechanically given upper limit value (assuming E is the average energy of the system above a ground-state value), obtaining a first limit of a quantum mechanical average energy bound value of

$\left[#\text{operations}/\mathrm{sec}\right]\le 2E/\text{π}\hslash$ (12)

The second limit to this number of operations is strictly linked to entropy, due to considerations of limits to memory space, which Lloyd writes as

$\left[#\text{operations}\right]\le S\left(entropy\right)/\left({k}_{B}\cdot \mathrm{ln}2\right)$ (13)

The third limit, based on strict considerations of a matter-dominated universe, relates the number of allowed computations (operations) within a volume for the alleged space of a universe (horizon). Lloyd identifies this space-time volume as ${c}^{3}\cdot {t}^{3}$, with c the speed of light and t an alleged time (age) for the universe. We further identify $E\left(\text{energy}\right)~\rho \cdot {c}^{2}$, with $\rho$ the density of matter, and $\rho \cdot {c}^{2}$ the energy density (per unit volume). This leads to

$\left[#\text{operations}/\mathrm{sec}\right]\le \rho \cdot {c}^{2}×{c}^{3}\cdot {t}^{3}$ (14)

We can then evaluate this with $\rho ~{10}^{-27}\text{kg}/{\text{meter}}^{3}$ and time approximately $t~{10}^{10}\text{years}$. This leads to a present upper bound of

$\left[#\text{operations}\right]\approx \rho \cdot {c}^{5}\cdot {t}^{4}\le 10{}^{120}$ (15)

Lloyd further refines this to read

$#\text{operations}=\frac{4E}{\hslash }\cdot \left({t}_{1}-\sqrt{{t}_{1}{t}_{0}}\right)\approx \left({t}_{\text{Final}}/{t}_{P}\right)\le {10}^{120}$ (16)

We assume that ${t}_{1}$ is the final time of physical evolution, whereas ${t}_{0}={t}_{P}~10{}^{-43}$ seconds, and that we can set an energy input by assuming, in early-universe conditions, that ${N}^{+}\ne {\epsilon }^{+}\ll 1$ and $0<{N}^{+}<1$.
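The order-of-magnitude estimate in Equation (15) can be checked directly. To obtain a dimensionless count we divide by $\hslash$, reading the bound as Lloyd's $E\cdot t/\hslash$ with $E\approx \rho {c}^{2}\cdot {c}^{3}{t}^{3}$; that division is our reading, consistent with Equations (12) and (16):

```python
import math

# Order-of-magnitude check of Eq. (15), read as (rho * c^2) * (c^3 * t^3) * t / hbar.
rho  = 1e-27          # matter density, kg / m^3 (value quoted in the text)
c    = 3.0e8          # speed of light, m / s
t    = 1e10 * 3.15e7  # 10^10 years in seconds
hbar = 1.05e-34       # reduced Planck constant, J * s

ops = rho * c**5 * t**4 / hbar
print(f"#operations ~ 10^{math.log10(ops):.0f}")   # → #operations ~ 10^119
```

The count lands at roughly $10^{119}$, consistent with the quoted bound of $10^{120}$.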
So we are looking at a graviton-burst-supplied energy value of

$E=\left({V}_{4-\text{Dim}}\right)\cdot \left[{\rho }_{Vac}=\frac{\Lambda }{8\text{π}G}\right]~{N}^{+}\cdot \left[{\rho }_{\text{graviton}}\cdot {V}_{4-\text{vol}}\approx \hslash \cdot {\omega }_{\text{graviton}}\right]$ (17)

Furthermore, assuming the initial temperature is within the range of $T\approx {10}^{32}-{10}^{29}$ Kelvin, we have a Hubble parameter defined along the route specified by Lloyd. This is in terms of time $t=1/H$, a horizon distance defined as $\approx c/H$, and a total energy value within the horizon of

Energy (within the horizon) $\approx {\rho }_{C}\cdot {c}^{3}/\left(H{}^{4}\cdot \hslash \right)\approx 1/\left({t}_{P}^{2}\cdot H\right)$ (18)

This is for a horizon parameter Lloyd (2002) defines as

$H=\sqrt{8\text{π}G\cdot \left[{\rho }_{crit}\right]/3\cdot {c}^{2}}$ (19)

and an early-universe critical density

${\rho }_{crit}~{\rho }_{\text{graviton}}~\hslash \cdot {\omega }_{\text{graviton}}/{V}_{4-\text{Vol}}$ (20)

Then

$\begin{array}{c}#\text{operations}\approx 1/\left[{t}_{P}^{2}\cdot H\right]\approx \sqrt{{V}_{4-\text{Vol}}}\cdot {t}_{P}^{-2}/\sqrt{\left[8\text{π}G\hslash {\omega }_{\text{graviton}}/3{c}^{2}\right]}\\ \approx {\left[3\mathrm{ln}2/4\right]}^{4/3}\cdot {\left[{S}_{\text{Entropy}}/{k}_{B}\mathrm{ln}2\right]}^{4/3}\end{array}$ (21)

Claim 3: The Number of Allowed Operations in the Evolution of the Universe Specifies a Relationship between an Evaluated Volume for Space-Time and Upper Limits of Released Relic-Graviton Frequencies

This is proved by appealing to Equation (21) above. Next, we will examine the existence of certain symmetries in the scalar field itself.

5. Formation of the Scalar Field, Bifurcation Results

The scalar-field potential and the field itself can be written in terms of the Hubble parameter as

$V\left(t\right)\equiv V\left(\phi \right)~\frac{3{H}^{2}}{8\text{π}G}\cdot \left(1+\frac{\stackrel{˙}{H}}{3{H}^{2}}\right)$ (22)

and

$\phi \left(t\right)~\int \text{d}t\cdot \sqrt{\frac{-\stackrel{˙}{H}}{4\text{π}G}}$ (23)

If $H=\stackrel{˙}{a}/a$, Equation (23) gives us zero scalar-field values at the beginning of quantum nucleation of a universe. At the point of late-time accelerated expansion (due to the final value of the cosmological constant), where the scale factor again grows exponentially, it likewise gives zero scalar-field values. We justify this statement by using early-universe expansion models, which have $a\left({t}_{\text{INITIAL}}\right)~{\text{e}}^{H\cdot t}$. This leads to the derivative of $H=\stackrel{˙}{a}/a$ going to zero. This is similar to present-time development of the scale factor along the lines of $a\left({t}_{\text{later}}\right)~{\text{e}}^{\left(\Lambda \left[\text{present-day}\right]\cdot t\right)}$, also leading to the derivative of $H=\stackrel{˙}{a}/a$ going to zero. When either situation occurs, we have the scalar field $\varphi =0$. Between initial and later times, the scale factor no longer has exponential time dependence, because it grows far more slowly, leading to $\varphi \ne 0$.

Both regimes specified by Equation (23) above lead to zero values for a quintessence scalar field. But it does not stop there.
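The two zero-field regimes can be seen numerically from the integrand of Equation (23). A minimal finite-difference sketch, with $G=1$ and toy scale factors (the exponent $2/3$ and the value of ${H}_{0}$ are illustrative choices, not fitted parameters):

```python
import math

def hdot(a, t, eps=1e-5):
    """Central-difference dH/dt for a given scale factor a(t), with H = a'/a."""
    def H(s):
        return (a(s + eps) - a(s - eps)) / (2 * eps * a(s))
    return (H(t + eps) - H(t - eps)) / (2 * eps)

# De Sitter-like phase, a ~ exp(H0 t): Hdot = 0, so the integrand
# sqrt(-Hdot / 4 pi G) in Eq. (23) vanishes and phi stays at zero.
H0 = 0.5  # illustrative value
de_sitter = lambda t: math.exp(H0 * t)

# Decelerating matter-like phase, a ~ t^(2/3): Hdot = -(2/3)/t^2 < 0,
# so the integrand is real and nonzero, and phi evolves.
matter = lambda t: t ** (2.0 / 3.0)

G = 1.0  # Planck-style units for illustration
print(abs(hdot(de_sitter, 1.0)))                           # ~ 0 (finite-difference noise)
print(math.sqrt(-hdot(matter, 1.0) / (4 * math.pi * G)))   # ~ sqrt((2/3) / 4*pi), nonzero
```

The integrand vanishes in both exponential epochs and is nonzero in between, which is exactly the bifurcation behavior of $\varphi \left(t\right)$ described above.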
We will show later that, in actuality, the scalar field likely damps out far before the CMBR barrier value of expansion at $z=1100$, about 380,000 to 400,000 years after the big bang.

Claim 4: We Observe That the Scalar Field $\varphi \left(t\right)$ Is Zero at the Onset of the Big Bang, and Also Is Zero During the Present Cosmological Era

This scalar "quintessence" field is non-zero for a brief period of time right after the inflationary era.

We show this by noting that in Equation (23), the time derivative of $H=\stackrel{˙}{a}/a$ goes to zero when both of the scale factors $a\left(t\left(\text{initial}\right)\right)~\mathrm{exp}\left(H\cdot t\left(\text{initial}\right)\right)$ and $a\left(t\left(\text{later}\right)\right)~\mathrm{exp}\left(\Lambda \left(\text{today}\right)\cdot t\left(\text{later}\right)\right)$ apply. The exponential scale factors in both cases (the initial inflationary environment and the present era) lead to the time derivative of the $H=\stackrel{˙}{a}/a$ expression in Equation (23) going to zero.

Sub-point to Claim 4: The existence of two zero values of the scalar field $\varphi \left(t\right)$, at both the onset and at a later time, implies a bifurcation behavior for modeling quintessence scalar fields. This is due to the non-zero $\varphi \left(t\right)$ values right after the initiation of inflation.

How do we construct high-frequency gravitational waves from all of this? Note that the right-hand side of Equation (21) above contains the square of the Planck time in its denominator. It so happens that inflation is characterized by a rapid buildup of space-time volume. Dr. Smoot (2007) specified initial computational bits of information transferred on the order of $10^{8}$ to $10^{10}$ bits of computation, expanding up to $10^{120}$ shortly after the initiation of the big bang itself.
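The role of ${\omega }_{\text{graviton}}$ in Equation (21) can be sketched numerically. The leading factor scales as ${\omega }_{\text{graviton}}^{-1/2}$, so larger graviton frequencies suppress the operation count; the volume value below is a pure placeholder, since only the frequency scaling is at issue:

```python
import math

G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8   # SI values
t_P = 5.39e-44                               # Planck time, seconds

def n_ops_factor(V4, omega):
    """Leading factor of Eq. (21): sqrt(V) * t_P^(-2) / sqrt(8 pi G hbar omega / 3 c^2).
    V4 is an assumed placeholder volume; omega is the graviton frequency."""
    return math.sqrt(V4) / (t_P ** 2 * math.sqrt(8 * math.pi * G * hbar * omega / (3 * c ** 2)))

V4 = 1.0  # placeholder; only the omega scaling matters here
for omega in (1.0, 1e3, 1e6):
    ratio = n_ops_factor(V4, omega) / n_ops_factor(V4, 1.0)
    print(f"omega = {omega:.0e} -> count suppressed by factor {ratio:.1e}")  # omega^(-1/2)
```

This is the content of Claim 5 below: with the huge ${t}_{P}^{-2}$ factor and a growing volume, only a large ${\omega }_{\text{graviton}}$ keeps the operation count from blowing far past $10^{120}$.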
This creation of additional degrees of freedom is in tandem with breaching the scale factor discontinuity mentioned in Claim 2 above. When we get to this regime of scale factor discontinuity, we get into the physics discussed by the claim given below.\n\nClaim 5: Unless the frequency ${\\omega }_{\\text{graviton}}$ in Equation (21) becomes large (~103 Hz or higher), the number of operations could effectively go to 101000 or higher.\n\nHow do we show this? One would need to have very large gravitational frequency range, with high frequency gravitational waves, in order to break the effects of a tiny Planck time interval ${t}_{P}^{-2}~{10}^{86}{\\mathrm{sec}}^{-2}$ put in the number of operations. So that instead of Equation (21) bounded by 10120, as the volume increased, the number of degrees of freedom of operations could become almost infinite.\n\nThis last claim―combined with the discussion right after Equation (11) above re the initial “drum head” model for a bounded region of space bracketed by causal discontinuity regions―constitutes our working model of an information-based model of cosmology that we expect will yield falsifiable experimental criteria.\n\n6. Smoot’s Information Theory/Cosmology Conclusions at Paris Colloquium (2007 “D. Chalonge” School)\n\nAt the “D. Chalonge” school presentation done by Dr. Smoot (2007) , he stated the following information―theory processing bits levels, which are due to different outlined physical processes. The following is Dr. 
Smoot's preliminary analysis of information content in the observable universe:

1) Physically observable bits of information possibly generated in the Universe: $10^{180}$

2) Holographic-principle-allowed bits (states) in the evolution (development) of the Universe: $10^{120}$

3) Initially available bits (states) given to us to work with at the onset of the inflationary era: $10^{10}$

4) Observable bits of information present due to quantum/statistical fluctuations: $10^{8}$

Our guess is as follows: the thermal flux implied by the existence of a wormhole accounts for perhaps $10^{10}$ bits of information. These could be transferred via a wormhole solution from a prior universe to our present one, per Equation (4) above. So there could be perhaps $10^{120}$ minus $10^{10}$ bits of information temporarily suppressed during the initial bosonification phase of matter right at the onset of the big bang itself. Then the degrees of freedom of our initial cosmological environment dramatically dropped during the beginning of the descent of temperature from about $T\approx 10^{32}$ Kelvin by at least three orders of magnitude. This drop in temperature occurs as we move out from an initial red shift of $z\approx 10^{25}$ to a far smaller value of the red shift. A good guess as to what is going on is embodied in figure 28.1 of the book by Volovik (2003), offering a condensed-matter analogy to current cosmology, where the formation of topological cosmic defects according to the Kibble-Zurek hypothesis may be in tandem with the growth of cosmological bits of information from a low number of $10^{10}$ to $10^{120}$ today. This is similar to the growth in baryons up to $10^{80}$ in the modern era. The formation of complexity of structure as given in figure 28.1, cited above, occurs at the same time as the bit complexity reaches its present value of nearly $10^{120}$.

7.
Conclusions

So far, what we have established is a working model for an information-theory-based model of cosmological evolution, with a lot of symmetry arguments thrown in. The approach is novel, leading to a new way of looking at CMBR space/volume and what it is, relative to bits of "information" computed during the course of cosmological evolution. This also permits a template for realistic models of a graviton burst.

As for future research, we should delineate in more detail what would be transferred―possibly by entanglement information transfer from a prior universe to our own. We should also understand how additional bits of information came to be in the present Universe. All of this would tie in with an accurate physical understanding of the points raised in Section 5 above. We would like to see―partly using the rich lore on liquid helium as outlined by Koplik and Levine (1993)―if there is a way to experimentally determine whether the growth and the relative increase in structure and bits of "information" are in some sense connected with a cosmological equivalent to the vortex-reconnection process outlined in liquid-helium experiments. In particular, the author is convinced that the fifth claim as given above is fundamental physics, and that as we have a growing volume during inflation, this needs to be investigated―hopefully in ways that lead to falsifiable computational quantum-computer models of the universe, and to their connections to initial inflation physics.

We should mention that the high-frequency gravitational wave regime is potentially accessible. A good write-up as to this is given in below.

In addition, we have that LIGO has made several gravitational wave discoveries. is the first one, as is and .
These establish basic working assumptions needed as far as gravitational wave astronomy data sets, whereas is for the different regime of gravitational wave astronomy needed as far as higher frequencies, and different strengths of GW than what is done by LIGO.

Finally, establishes whether we can discriminate between different models of gravity―whether these are scalar-tensor, general relativity, or some variant of quantum gravity―which may be identifiable if we have data sets to make a proper evaluation.

In all, this is a first step in terms of identification of basic foundations of gravitational wave astronomy that we hope is pursued in the near future.

We also wish to mention that additional issues brought up in and should be adhered to, in terms of experimental inquiry.

Acknowledgements

The author would like to acknowledge the contributions of Amara D. Angelica for stimulating discussions of the concepts and copyediting.

This work is supported in part by National Nature Science Foundation of China grant No. 11375279.

Cite this paper

Beckwith, A.W. (2018) Symmetries in Evolving Space-Time and Their Connection to High-Frequency Gravitational Wave Production. Journal of High Energy Physics, Gravitation and Cosmology, 4, 492-503. https://doi.org/10.4236/jhepgc.2018.43027

References

1. Ryden, B. (2017) Introduction to Cosmology. 2nd Edition, Cambridge University Press, Cambridge.

2. Plebański, J. and Krasiński, A. (2007) An Introduction to General Relativity and Cosmology. Cambridge University Press, Cambridge.

3. Lloyd, S. (2002) Computational Capacity of the Universe. Physical Review Letters, 88, Article ID: 237901. https://doi.org/10.1103/PhysRevLett.88.237901

4. Visser, M. (1995) Lorentzian Wormholes, from Einstein to Hawking. AIP Press, Woodbury, New York.

5. Crowell, L. (2005) Quantum Fluctuations of Space Time. World Scientific Series in Contemporary Chemical Physics, Volume 25.
World Scientific Publishing Company, Singapore.

6. Hamber, H. and Williams, R. (2011) Discrete Wheeler-DeWitt Equation. Physical Review D, 84, Article ID: 104033. https://doi.org/10.1103/PhysRevD.84.104033

7. Park, D.K., Kim, H. and Tamarayan, S. (2002) Nonvanishing Cosmological Constant of Flat Universe in Brane World Scenarios. Physics Letters B, 535, 5-10. https://doi.org/10.1016/S0370-2693(02)01729-X

8. Beckwith, A. (2007) Symmetries in Evolving Space Time from Present to Prior Universes. arXiv:math-ph/0501028

9. Barvinsky, A., Kamenschick, A. and Yu, A. (2006) Thermodynamics from Nothing: Limiting the Cosmological Constant Landscape. Physical Review D, 74, Article ID: 121502. https://doi.org/10.1103/PhysRevD.74.121502

10. Frampton, P. and Baum, L. (2007) Turnaround in Cyclic Cosmology. Physical Review Letters, 98, Article ID: 071301.

11. Dowker, F. (2005) Causal Sets and the Deep Structure of Space-Time. In: Ashtekar, A., Ed., 100 Years of Relativity, Space-Time Structure: Einstein and Beyond, World Scientific, Singapore, 445-464. https://doi.org/10.1142/9789812700988_0016

12. Simovici, D. and Djeraba, C. (2008) Partially Ordered Sets. In: Mathematical Tools for Data Mining: Set Theory, Partial Orders, Combinatorics. Springer, Berlin.

13. Schröder, B.S.W. (2003) Ordered Sets: An Introduction. Birkhäuser, Boston. https://doi.org/10.1007/978-1-4612-0053-6

14. Ruutu, V., Eltsov, V., Gill, A., Kibble, T., Krusius, M., Makhlin, Y.G., Placais, B., Volovik, G. and Wen, Z. (1996) Vortex Formation in Neutron-Irradiated 3He as an Analog of Cosmological Defect Formation. Nature, 382, 334-336. https://doi.org/10.1038/382334a0

15. Soto-Andrade, J., Jaramillo, S., Gutierrez, C. and Letelier, J.-C. (2011) Ouroboros Avatars: A Mathematical Exploration of Self-Reference and Metabolic Closure. MIT Press, Paris.

16. Padmanabhan, T. (2006) An Invitation to Astrophysics. World Scientific Publishing Co. Pte.
Ltd., Singapore. https://doi.org/10.1142/6010

17. Tsujikawa, S. (2013) Quintessence: A Review. Classical and Quantum Gravity, 30, Article ID: 214003. https://doi.org/10.1088/0264-9381/30/21/214003

18. Smoot, G. (2007) Nobel Lecture: Cosmic Microwave Background Radiation Anisotropies: Their Discovery and Utilization. Reviews of Modern Physics, 79, 1349-1379. https://cdn.journals.aps.org/files/RevModPhys.79.1349.pdf

19. Volovik, G. (2003) The Universe in a Helium Droplet. International Series of Monographs on Physics 117, Clarendon Press, Oxford.

20. Koplik, J. and Levine, H. (1993) Vortex Reconnection in Superfluid Helium. Physical Review Letters, 71, 1375-1378. https://doi.org/10.1103/PhysRevLett.71.1375

21. Li, F., Baker Jr., R.M.L., Fang, Z., Stephenson, G.V. and Chen, Z. (2008) Perturbative Photon Fluxes Generated by High-Frequency Gravitational Waves and Their Physical Effects. The European Physical Journal C, 56, 407-423. https://doi.org/10.1140/epjc/s10052-008-0656-9

22. Abbott, B.P., et al. (2016) Observation of Gravitational Waves from a Binary Black Hole Merger. Physical Review Letters, 116, Article ID: 061102.

23. Abbott, B.P., et al. (2016) GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary Black Hole Coalescence. Physical Review Letters, 116, Article ID: 241103.

24. Abbott, B.P., et al. (2017) GW170104: Observation of a 50-Solar-Mass Binary Black Hole Coalescence at Redshift 0.2. Physical Review Letters, 118, Article ID: 221101.

25. Corda, C. (2009) Interferometric Detection of Gravitational Waves: The Definitive Test for General Relativity. International Journal of Modern Physics D, 18, 2275-2282. https://arxiv.org/abs/0905.2502 https://doi.org/10.1142/S0218271809015904

26. Barcelo, C. and Visser, M. (1999) Traversable Wormholes from Massless Conformally Coupled Scalar Fields. Physics Letters B, 466, 127-134.

27. Visser, M.
(2002) Sakharov's Induced Gravity: A Modern Perspective.
https://datascience.stackexchange.com/questions/1036/how-to-numerically-estimate-mle-estimators-in-python-when-gradients-are-very-sma/1054
# How to numerically estimate MLE estimators in python when gradients are very small far from the optimal solution?

I am exploring how to model a data set using normal distributions with both mean and variance defined as linear functions of independent variables.

Something like N ~ (f(x), g(x)).

I generate a random sample like this:

```python
def draw(x):
    return norm(5 * x + 2, 3 * x + 4).rvs(1)
```

So I want to retrieve 5, 2 and 4 as the parameters for my distribution.

I generate my sample:

```python
smp = np.zeros((100, 2))

for i in range(0, len(smp)):
    smp[i][0] = i
    smp[i][1] = draw(i)
```

The likelihood function is:

```python
def lh(p):
    p_loc_b0 = p[0]
    p_loc_b1 = p[1]
    p_scl_b0 = p[2]
    p_scl_b1 = p[3]

    l = 1
    for i in range(0, len(smp)):
        x = smp[i][0]
        y = smp[i][1]
        l = l * norm(p_loc_b0 + p_loc_b1 * x, p_scl_b0 + p_scl_b1 * x).pdf(y)

    return -l
```

So the parameters for the linear functions used in the model are given in the 4-variable vector p.

Using scipy.optimize, I can solve for the MLE parameters using an extremely low xtol, already giving the solution as the starting point:

```python
fmin(lh, x0=[2, 5, 3, 4], xtol=1e-35)
```

Which does not work too well:

```
Warning: Maximum number of function evaluations has been exceeded.
array([ 3.27491346,  4.69237042,  5.70317719,  3.30395462])
```

Raising the xtol to higher values does no good.

So I try using a starting solution far from the real solution:

```python
>>> fmin(lh, x0=[1, 1, 1, 1], xtol=1e-8)
Optimization terminated successfully.
         Current function value: -0.000000
         Iterations: 24
         Function evaluations: 143
array([ 1.,  1.,  1.,  1.])
```

Which makes me think:

PDFs are largely clustered around the mean, and have very low gradients only a few standard deviations away from the mean, which must be not too good for numerical methods.

So how does one go about doing this kind of numerical estimation in functions where the gradient is very near zero away from the solution?

---

Second, fmin is using the Nelder-Mead simplex algorithm, which has no convergence guarantees according to the scipy documentation. This means the convergence is totally random and you should not expect to find parameters close to the originals. To get around this, I would suggest you use a gradient-based method like stochastic gradient descent or BFGS. Since you know the generative model (rvs are Gaussian distributed) you can write the likelihood and log likelihood as:
[Image in the original answer, showing the Gaussian likelihood and log-likelihood formulas: https://i.stack.imgur.com/bfsvr.png]
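Following the answer's suggestion, here is one way the fit could be done in practice. This is a sketch under the question's setup (true parameters loc = 5x + 2, scale = 3x + 4, i.e. p = (2, 5, 4, 3) in the question's ordering); everything else is standard NumPy/SciPy, not code from the original thread:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Sample from the question's generative model: y ~ N(5x + 2, (3x + 4)^2)
x = np.arange(100, dtype=float)
y = rng.normal(5 * x + 2, 3 * x + 4)

def nll(p):
    """Negative log-likelihood for mean p[0] + p[1]*x and scale p[2] + p[3]*x."""
    mu = p[0] + p[1] * x
    sigma = p[2] + p[3] * x
    if np.any(sigma <= 0):      # the scale must stay positive
        return np.inf
    return -np.sum(norm.logpdf(y, mu, sigma))

# Working in log space avoids the underflow and flat plateaus of the raw
# product of densities; a gradient-based method such as BFGS also works here.
res = minimize(nll, x0=[1, 1, 1, 1], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
print(res.x)  # estimates of (loc_b0, loc_b1, scl_b0, scl_b1); true values are (2, 5, 4, 3)
```

Because the log-likelihood neither underflows to zero nor flattens out far from the optimum, the optimizer no longer reports a meaningless `-0.000000` objective at the starting point.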
https://www.mathworks.com/matlabcentral/answers/112780-sum-the-digits-of-a-number
# Sum the digits of a number?

355 views (last 30 days)

**tzenh karetzh on 17 Jan 2014**

Hi, I'd like to know how someone can add the digits of a number down to a single digit in MATLAB.

For example 525 --> 5 + 2 + 5 = 12 --> 1 + 2 = 3

I'm thinking about dividing by ten and adding the digits after the decimal point while at the same time rounding the number. Any ideas on how to add the digits would be really helpful.

Also, how to identify a digit by knowing its position in a number, i.e. the 5th digit of 9483672 is 6. Thanks in advance.

**tzenh karetzh on 17 Jan 2014**

Not exactly. My homework is to find the prime numbers in a certain range. This is just a question that came to me because we know that numbers whose digits add up to 3, 6 or 9 (like 711) can be divided by 3. For the prime numbers it's much easier to go for mod(number,3). And maybe it can be useful to someone else for another use.

**Azzi Abdelmalek on 17 Jan 2014**

```matlab
a = 525;
b = num2str(a);
while numel(b) > 1
    a = sum(str2double(regexp(b, '\d', 'match')));
    b = num2str(a);
end
out = str2num(b)
% -------------------------------
a = 9483672;
b = num2str(a);
b(5)
```

**Stephen on 15 Nov 2019**

Simpler without regexp and str2double:

```matlab
a = 525;
b = num2str(a);
while numel(b) > 1
    a = sum(b - '0');
    b = num2str(a);
end
out = a;
```

**Les Beckham on 2 May 2020**

I know it sounds too easy to be true, but this manipulation is actually the same as modulo 9. No loops or string conversions needed.

```matlab
>> mod(525, 9)
ans =
     3
>> mod(9483672, 9)
ans =
     3
```

For the second part of your question, try this:

```matlab
function [out] = extract_digit(num, digit)
%EXTRACT_DIGIT Return the specified digit from a number
out = num2str(num);
out = out(digit);
end
```

**John D'Errico on 6 Apr 2021**

@Dimitri Assuming you want to keep on re-summing the digits until the sum is a single-digit number... then special-case handling still will work. You just need to think about what happens.

The ONLY case where that digit sum is zero is when the number is itself zero. Therefore, if the number is NOT zero, but the modulus was zero, then the digit sum would have been 9.

As such we can see a simple solution:

```matlab
N = 12345678;
if N == 0
    digsum = 0;
else
    digsum = mod(N, 9);
    if digsum == 0
        digsum = 9;
    end
end
digsum
```

digsum = 9

Is that algorithm correct for N larger than 0? Clearly it works when N == 9. We can write a really simple code to compute the sum of the digits directly, for just one iteration, and then just iterate until it is done.

```matlab
dsum = @(n) sum(dec2base(n, 10) - '0');
digsum = N;
while digsum > 9
    digsum = dsum(digsum);
end
digsum
```

digsum = 9

**Pablo López on 5 Feb 2019**

You can try this:

```matlab
n = 525;
sum(str2num(num2str(sum(str2num(num2str(n)')))'))
```

**Image Analyst on 7 Apr 2021**

Here is how I did it:

```matlab
fprintf('Beginning to run %s.m ...\n', mfilename);
% Get a random integer.
originalNumber = int64(randi(2^53 - 1, 1, 1))
% Do the first iteration.
strNumbers = num2str(originalNumber);
intNumbers = strNumbers - '0'
loopCounter = 1;
maxIterations = 100; % The Failsafe (so we never get an infinite loop due to a logic error).
% Every while loop should always have a failsafe.
while length(intNumbers) >= 2 && loopCounter < maxIterations
    theSum = sum(intNumbers);
    fprintf('After %d iterations, the number is %s and the sum of its digits is %d\n', ...
        loopCounter, strNumbers, theSum);
    % Prepare the next iteration:
    strNumbers = num2str(theSum);
    intNumbers = num2str(theSum) - '0';
    loopCounter = loopCounter + 1;
end
fprintf('Done running %s.m ...\n', mfilename);
```

I get:

```
originalNumber =
  int64
   7208285642958972
intNumbers =
     7  2  0  8  2  8  5  6  4  2  9  5  8  9  7  2
After 1 iterations, the number is 7208285642958972 and the sum of its digits is 84
After 2 iterations, the number is 84 and the sum of its digits is 12
After 3 iterations, the number is 12 and the sum of its digits is 3
```

**Ans sadiq on 19 Aug 2021**

```matlab
function out = digit_sum(in)
q = in;
a = q / 10;
b = floor(a);
c = q - b * 10;
w = c;
if q > 0
    w = w + digit_sum(b);
end
out = w;
end
```
https://blog.newtum.com/find-sum-of-natural-numbers-in-python-using-recursion/
# Find Sum of Natural Numbers in Python Using Recursion

(Last Updated On: 26/09/2023)

In this tutorial, we will learn how to find the sum of natural numbers in Python using recursion. Recursion is a technique in computer programming where a function calls itself until a condition is met. In this case, we will be using recursion to find the sum of natural numbers.

## Python Program to Find the Sum of Natural Numbers Using Recursion

The main function `rsum` finds the sum of natural numbers using recursion. It takes an integer `n` as an input argument and starts with an if-else statement:

- If the value of `n` is less than or equal to 1, the function returns `n`.
- If the value of `n` is greater than 1, the function returns `n` plus the result of calling `rsum` with the argument `n-1`.

We take the input from the user and convert it from a string to an integer using the `int()` function. The input number is passed as an argument to the `rsum` function, and the result is stored in the variable `ttl`. Finally, we print the sum of natural numbers stored in `ttl`.

```python
# Find Sum of Natural Numbers in Python Using Recursion

def rsum(n):
    if n <= 1:
        return n
    else:
        return n + rsum(n - 1)

# take a number from the user as input;
# the entered value is converted from string to int
num = int(input("Enter a number: "))

# pass the number as an argument to the rsum() function
ttl = rsum(num)
print("The sum is", ttl)
```

Let's run the code and see how it works.

#### Output:

```
Enter a number: 15
The sum is 120
```

So, the sum of the first 15 natural numbers (1 + 2 + 3 + ... + 15) is 120.

Recursion can be a powerful tool to solve problems in computer programming, and finding the sum of natural numbers is a classic example. Understanding how recursion works is an important aspect of computer programming and will help you in solving more complex problems in the future.

In this tutorial, we saw how to write a simple program to find the sum of natural numbers using recursion. You can use the same concept to find the sum of other sequences as well, for example even numbers or odd numbers. You can also try to modify the code to find the sum of squares of natural numbers.

In conclusion, using recursion to find the sum of natural numbers is a simple yet effective way to learn the basics of recursion. Recursion can seem confusing at first, but with practice and a solid understanding of the concept, you'll find that it can be a very useful tool for solving problems.

Remember, the key to using recursion effectively is to make sure that you have a base case (the condition that will stop the recursion) and that the problem you're trying to solve gets smaller with each recursive step. With these two things in place, you'll be able to use recursion to solve a wide variety of problems.

So, keep practising and experimenting with recursion, and you'll soon be able to use it to solve complex problems with ease. I hope this tutorial helped you understand the concept of recursion and how to find the sum of natural numbers using it. If you have any questions, feel free to ask.

For more Python programming exercises and solutions, check out our Python Exercises and Solutions.
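A quick way to sanity-check the recursive result is Gauss's closed form n(n+1)/2. An iterative version is also worth knowing, because Python's default recursion limit (about 1000 frames) makes the recursive version fail for large n. A small sketch comparing the three:

```python
def rsum(n):
    # recursive version from the tutorial
    return n if n <= 1 else n + rsum(n - 1)

def isum(n):
    # iterative version: works for large n without hitting the recursion limit
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# Both agree with the closed form n*(n+1)//2
for n in (1, 15, 100):
    assert rsum(n) == isum(n) == n * (n + 1) // 2

print(rsum(15))  # 120
```

For n in the tens of thousands, `rsum` raises `RecursionError` while `isum` and the closed form still work.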
https://www.quantumcalculus.org/perron-frobenius-vector-wu-characteristic/
# A Perron-Frobenius Vector to Wu Characteristic

> **Abstract:** If $L$ is the connection operator of a complex and $J$ is the checkerboard matrix $J(x,y)=\omega(x) \omega(y)$, where $\omega(x)=(-1)^{{\rm dim}(x)}$, we note that $LJ$ has the Perron-Frobenius eigenvector $v$ given by the Wu intersection numbers $v(x) = \omega(x) \omega(G,\{ x \})$ as components. The matrix $LJ$ has only one non-zero eigenvalue, $\omega(G)= {\rm tr}(LJ)$. We also have $\omega(G) |G| = {\rm tr}(JLJ)$, where $JLJ$ is symmetric and conjugated to $LJ$. The Wu characteristic is therefore spectrally defined through a symmetric matrix, but as $JLJ$ has rank one, this is not a good "Hamiltonian". We still don't know whether $\omega(G)$ can be obtained from the spectrum of $L$ alone.

In this post, I want to point out a connection of Wu characteristic with Perron-Frobenius. It is not deep at all but interesting, and illustrates how natural the Wu characteristic is. Perron-Frobenius is a key idea in mathematics. It is crucial for example to understand Markov processes, chaos, or page rank. Indeed, the "billion dollar vector" of Brin and Page is a Perron-Frobenius vector. See a (handout of 2011 [PDF]). The idea is tangentially also involved here. First of all, since the connection Laplacian $L$ has non-negative entries and is invertible, there must be a positive eigenvalue. I had tried first to prove directly that $L$ can never be positive definite if the complex has positive dimension, but this only emerged as a consequence of a more detailed analysis showing that the number of negative eigenvalues matches the number of odd-dimensional simplices in the complex, so that one can hear the Euler characteristic $\chi(G) = \sum_{x \in G} \omega(x)$ of a simplicial complex $G$. I also experimentally saw that the Wu characteristic $\omega(G) =\sum_{x \cap y \neq \emptyset} \omega(x) \omega(y)$ coincides for the pairs of $L$-isospectral complexes found so far.
This prompted the question whether one can hear the Wu characteristic using $L$. I don't know this yet. But it is already possible to express the Wu characteristic as an eigenvalue of a symmetric matrix $K$. We want to see here that this matrix $K$ has an explicitly given "Perron-Frobenius" eigenvector whose eigenvalue is the Wu characteristic.

I had actually started to look at the operator $L$ originally because ${\rm tr}(L J)$ is the Wu characteristic, where $J(x,y) = \omega(x) \omega(y)$. The observation done early in 2016 that $L$ is always invertible led a bit away from the Wu characteristic. But both notions are central in a calculus where not the simplices, but pairs of intersecting simplices are the focus. The case where functions on simplices are involved is the classical differential form calculus taught in calculus courses. The calculus of Wu characteristic and its cohomology has no continuum analogue so far.

In quantum mechanics, one could interpret $J$ as a density matrix and ${\rm tr}(L J)$ as the expectation of $L$ in that state. Actually $J = p p^T$ is a pure state given by the vector $x \to p(x)=(-1)^{{\rm dim}(x)}$. In Dirac notation, this is $| p \rangle \langle p |$. The operator $L J$ is not symmetric. Here is an observation: define the matrix $P=J/\sqrt{n}$, where $|G|=n$ is the number of simplices in $G$. The matrix $P$ is symmetric, so that also $K=P L P = P^T L P$ is symmetric. This matrix has the eigenvector $p(x)=\omega(x)$ with eigenvalue $\omega$. As $K$ has rank $1$, it should be dismissed as a Hamiltonian.

> The symmetric matrix $K$ has exactly one non-zero eigenvalue: the Wu characteristic. Its eigenvector is the vector $p$.

Proof. Since $J = p p^T$ and $p^T p = n$, we have $Jp = np$ and so $Pp = \sqrt{n}\, p$. Therefore $Kp = PLPp = \sqrt{n}\, P L p = J L p = p\,(p^T L p) = \omega\, p$, because $p^T L p = \sum_{x \cap y \neq \emptyset} \omega(x)\omega(y) = \omega$.
As $K$ and $J$ are symmetric matrices of rank one with the same eigenvectors, they are multiples of each other: we have $K = \omega J/n$.

So, the matrix $K=PLP$ is really not that interesting. However, we could also look at $L J + J L$, which is also symmetric but has exactly two nonzero eigenvalues.

What is the eigenvector of the matrix $L J$ to the single non-zero eigenvalue $\omega(G)$ it has? Note that the matrix $J L$ is easier, as it has the eigenvector $p$. The situation is a bit like for Markov operators, where the operator $M$ has a trivial constant eigenvector with dominant eigenvalue, but where the transpose $M^T$ is interesting. The eigenvalues of $M$ and $M^T$ are the same, but the eigenvectors are not.

Since the situation is so close to a Perron-Frobenius situation with one dominant eigenvalue, we can say that the Wu characteristic is essentially a Perron-Frobenius eigenvalue of $L J$ (we abuse the language a bit here, as $LJ$ is not a non-negative matrix but only conjugated to a non-negative matrix; in some sense, we are only at the boundary of the Perron-Frobenius story).

Given two simplicial complexes $G,H$, we called $\omega(G,H)$ the Wu intersection number. A special case is $\omega(G,G)$, which as a self-intersection number is the Wu characteristic itself. We can now see that the Perron-Frobenius vector is built up from such intersection numbers.

> The matrix $LJ$ has exactly one non-zero eigenvalue: it is the Wu characteristic. Its Perron-Frobenius eigenvector is $v(x) = \omega(x) \omega(G,\{x\})$.

For example, for a star graph $G$ with $n-1$ rays, the eigenvector has (up to sign) entries $n-2$ at the center vertex, $n-3$ on the edges, and zero else. In the particular case $n=5$, we have $G=\{\{1, 2\}, \{1, 3\}, \{1, 4\}, \{1, 5\}, \{1\},\{2\},\{3\}, \{4\}, \{5\}\}$.
Now

$$L=\left[\begin{array}{ccccccccc} 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \end{array} \right]$$

The Green's function matrix, the inverse, is

$$g=L^{-1} = \left[ \begin{array}{ccccccccc} -1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 1 & -3 & -1 & -1 & -1 & -1 \\ 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \end{array} \right]$$

By the energy theorem, the sum over all its matrix entries is the Euler characteristic $\chi(G)=1$.
Here is the checkerboard matrix $J$:\n\n$J= \\left[\\begin{array}{ccccccccc} 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 \\\\ 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 \\\\ 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 \\\\ 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 \\\\ -1 & -1 & -1 & -1 & 1 & 1 & 1 & 1 & 1 \\\\ -1 & -1 & -1 & -1 & 1 & 1 & 1 & 1 & 1 \\\\ -1 & -1 & -1 & -1 & 1 & 1 & 1 & 1 & 1 \\\\ -1 & -1 & -1 & -1 & 1 & 1 & 1 & 1 & 1 \\\\ -1 & -1 & -1 & -1 & 1 & 1 & 1 & 1 & 1 \\\\ \\end{array} \\right]$\n\nand\n\n$LJ=\\left[ \\begin{array}{ccccccccc} 2 & 2 & 2 & 2 & -2 & -2 & -2 & -2 & -2 \\\\ 2 & 2 & 2 & 2 & -2 & -2 & -2 & -2 & -2 \\\\ 2 & 2 & 2 & 2 & -2 & -2 & -2 & -2 & -2 \\\\ 2 & 2 & 2 & 2 & -2 & -2 & -2 & -2 & -2 \\\\ 3 & 3 & 3 & 3 & -3 & -3 & -3 & -3 & -3 \\\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ \\end{array} \\right]$\nand\n\n$JL= \\left[ \\begin{array}{ccccccccc} 2 & 2 & 2 & 2 & 3 & 0 & 0 & 0 & 0 \\\\ 2 & 2 & 2 & 2 & 3 & 0 & 0 & 0 & 0 \\\\ 2 & 2 & 2 & 2 & 3 & 0 & 0 & 0 & 0 \\\\ 2 & 2 & 2 & 2 & 3 & 0 & 0 & 0 & 0 \\\\ -2 & -2 & -2 & -2 & -3 & 0 & 0 & 0 & 0 \\\\ -2 & -2 & -2 & -2 & -3 & 0 & 0 & 0 & 0 \\\\ -2 & -2 & -2 & -2 & -3 & 0 & 0 & 0 & 0 \\\\ -2 & -2 & -2 & -2 & -3 & 0 & 0 & 0 & 0 \\\\ -2 & -2 & -2 & -2 & -3 & 0 & 0 & 0 & 0 \\\\ \\end{array} \\right]$.\n\nThe eigenvalues of $L$ are\n\n$\\left\\{\\frac{1}{2} \\left(5+\\sqrt{29}\\right),\\frac{1}{2} \\left(1+\\sqrt{5}\\right),\\frac{1}{2} \\left(1+\\sqrt{5}\\right),\\frac{1}{2} \\left(1+\\sqrt{5}\\right),1,\\frac{1}{2} \\left(1-\\sqrt{5}\\right),\\frac{1}{2} \\left(1-\\sqrt{5}\\right),\\frac{1}{2} \\left(1-\\sqrt{5}\\right),\\frac{1}{2} \\left(5-\\sqrt{29}\\right)\\right\\}$\n\nwhich are numerically given as\n\n$\\left\\{5.19258, 1.61803, 1.61803, 1.61803, 1., -0.618034, -0.618034, -0.618034, -0.192582 \\right\\}$\n\nconfirming that the number of positive eigenvalues agrees with the number $5$ of even 
dimensional simplices and the number of negative eigenvalues with the number 4 of odd dimensional simplices. The eigenvalues of $LJ$ and $JL$ are all 0 except one eigenvalue $\\omega(G)=5$. The Perron-Frobenius eigenvector is $\\vec{v} = (-2, -2, -2, -2, -3, 0, 0, 0, 0)$ (the sign of course does not matter, we took the sign which matches the explicit formula given for this vector involving Wu intersection numbers).\n\nHere is self-contained Mathematica code explaining things from another angle. Mathematica is great pseudo code. But of course, you can grab these few lines and try it for other complexes. Take an arbitrary set of sets A and use “Generate” to construct a finite abstract simplicial complex.\n\nOmega[x_]:=-(-1)^Length[x]; DJ[a_,b_]:=DisjointQ[a,b];\nEulerChi[G_]:=Total[Map[Omega,G]];\nFermiPhi[G_]:=Exp[Total[Log[Map[Omega,G]]]];\nGenerate[A_]:=Delete[Union[Sort[Flatten[Map[Subsets,A],1]]],1]\nCL[G_]:=Table[If[DJ[G[[k]],G[[l]]],0,1],{k,Length[G]},{l,Length[G]}];\nCB[G_]:=Table[Omega[G[[i]]]*Omega[G[[j]]],{i,Length[G]},{j,Length[G]}];\nEnergy[G_]:=Total[Flatten[Inverse[CL[G]]]];\nG={{1, 2}, {1, 3}, {1, 4}, {1, 5}, {1}, {2}, {3}, {4}, {5}}\nL = CL[G]\ng=Inverse[L] (* invert the connection matrix L, not the complex G *)\nJ=CB[G]\nJL=J.L\nLJ=L.J\nl=Eigenvalues[L]\nv=First[Eigenvectors[LJ]]\nwu=First[Eigenvalues[LJ]]\n\n\nOne can reformulate this a bit: given a simplicial complex $G$, denote by ${\\rm int}(B(x))$ the vertices of $B(x)$ in the connection graph $G’$. It is the interior of a ball. It is a subset of $G$ but not a simplicial complex. For a star graph G for example and the central node x, we have ${\\rm int}(B(x)) = \\{ \\{x\\}, \\{ x,y_1 \\}, … \\{x,y_m\\} \\}$ which is not a complex as it does not contain the 0-dimensional simplices $\\{y_j\\}$. The Perron-Frobenius eigenvector of $L J$ can now be rewritten as $\\omega(x) \\chi({\\rm int}(B(x)))$. It has as eigenvalue the Wu characteristic $\\omega(G)=n$.\n\nFor geometric graphs the Perron-Frobenius eigenvector is constant. 
The reason is that $\\chi({\\rm int}(B(x)))$ is then equal to $\\omega(x)$, the Wu characteristic of $x$. Indeed, for geometric complexes we have $\\omega(G) = \\chi(G) - \\chi(\\delta G)$, where $\\delta G$ is the boundary of $G$ (the boundary is defined as the set of vertices where the unit ball is contractible, the interior is the set of vertices where the unit ball is a sphere). These definitions make sense for geometric complexes with boundary, which are discrete analogues of manifolds with boundary. Indeed, by definition, the geometric realization of a discrete geometric simplicial complex with boundary is a manifold with boundary. The realization is homeomorphic to a smooth manifold with boundary. As already noted by Whitehead, every compact smooth manifold with smooth boundary can be modeled as such with a finite simplicial complex. For example, for a $d$-dimensional ball, where the boundary is a $(d-1)$-dimensional sphere, we have $\\chi(G)=1$ and $\\chi(\\delta G)= 1-(-1)^{d}$ so that $\\omega(G) = 1-(1-(-1)^{d}) = (-1)^{d}$, which is $1$ for even $d$ and $-1$ for odd $d$. The Wu characteristic of a wheel graph (a 2-dimensional disk) is $1$, while the Wu characteristic of a cone extension of an octahedron $G$ (a 3-dimensional ball) is $\\omega(G)=-1$.\n\nAnd here is a picture of the matrices $L$, $J$, and then $JL$ and finally $LJ$ for some random complex $G$:",
null,
""
] | [
null,
"https://www.quantumcalculus.org/wp-content/uploads/2017/12/l-j-lj-jl.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8020828,"math_prob":0.99997103,"size":11630,"snap":"2020-34-2020-40","text_gpt3_token_len":4093,"char_repetition_ratio":0.2115947,"word_repetition_ratio":0.33810142,"special_character_ratio":0.4032674,"punctuation_ratio":0.09735974,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996861,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T16:59:40Z\",\"WARC-Record-ID\":\"<urn:uuid:3a4be625-3328-4a61-ac8e-07a4c8511244>\",\"Content-Length\":\"59323\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:68a0064f-bb9b-45ac-b5ef-4d6f183810b1>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f3a126b-2c6d-4aba-bca8-d53110165e90>\",\"WARC-IP-Address\":\"208.113.163.198\",\"WARC-Target-URI\":\"https://www.quantumcalculus.org/perron-frobenius-vector-wu-characteristic/\",\"WARC-Payload-Digest\":\"sha1:I7WLGXMFOJFBA3FYY2NBPN5RTGORZ2CN\",\"WARC-Block-Digest\":\"sha1:Q5H3EXOOPHQEMURDVI5REZZK5OZQOWLJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402131777.95_warc_CC-MAIN-20201001143636-20201001173636-00611.warc.gz\"}"} |
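As a numerical cross-check of the star-graph example in the row above, here is my own NumPy translation of the Mathematica snippet (not part of the scraped page): it builds the connection matrix $L$ and checkerboard matrix $J$ and confirms that $LJ$ has the single nonzero eigenvalue $\omega(G)=5$ with eigenvector $(-2,-2,-2,-2,-3,0,0,0,0)$.

```python
import numpy as np

# Star complex from the post: 4 edges through vertex 1, plus the 5 vertices.
G = [{1, 2}, {1, 3}, {1, 4}, {1, 5}, {1}, {2}, {3}, {4}, {5}]
omega = np.array([-(-1) ** len(x) for x in G])  # omega(x) = (-1)^dim(x)

# Connection matrix: entry 1 iff the two simplices intersect.
L = np.array([[0 if a.isdisjoint(b) else 1 for b in G] for a in G])
# Checkerboard matrix: J[i, j] = omega(x_i) * omega(x_j).
J = np.outer(omega, omega)

LJ = L @ J
wu = np.trace(LJ)  # Wu characteristic: sum of omega(x)omega(y) over intersecting pairs
eigenvalues = np.linalg.eigvals(LJ)

# Claimed Perron-Frobenius eigenvector from the post.
v = np.array([-2, -2, -2, -2, -3, 0, 0, 0, 0])
```

With this setup `np.trace(LJ)` recovers the Wu characteristic directly, matching the eigenvalue computation in the post.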
https://lastdropmugs.com/what-is-sine-ratio/ | [
"# what is sine ratio\n\nWhat Is Sine Ratio? The definition of the sine ratio is the ratio of the length of the opposite side divided by the length of the hypotenuse.\n\nWhat is a sin ratio in trigonometry? Sine ratios, along with cosine and tangent ratios, are ratios of the lengths of two sides of the triangle. Sine ratios in particular are the ratios of the length of the side opposite the angle they represent over the hypotenuse. Sine ratios are useful in trigonometry when dealing with triangles and circles.\n\n## How do you understand sine?\n\nThe sine of one of the angles of a right triangle (often abbreviated “sin”) is the ratio of the length of the side of the triangle opposite the angle to the length of the triangle’s hypotenuse.\n\n## What are the 3 trigonometric ratios?\n\nThe three trig ratios in question are sine (sin), cosine (cos) and tangent (tan). In this article however we are going to concentrate for the most part on sine. The reference angle in a right triangle is, in general, given the symbol θ (theta).\n\n## Why are trigonometric ratios important?\n\nTrigonometry is a study of the relationship of angles, lengths, and heights. It helps in finding the angles and missing sides of a triangle with the help of trigonometric ratios. The angles are either measured in radians or degrees.\n\n## How many trigonometric ratios are there?\n\nReview all six trigonometric ratios: sine, cosine, tangent, cotangent, secant, & cosecant.\n\n## What leg is considered the hypotenuse?\n\nA right triangle consists of two legs and a hypotenuse. The two legs meet at a 90° angle and the hypotenuse is the longest side of the right triangle and is the side opposite the right angle.\n\n## What is a sine used for?\n\nAs we learned, sine is one of the main trigonometric functions and is defined as the ratio of the side of the angle opposite the angle divided by the hypotenuse. 
It’s important for finding distances or height and can also be used to find angle measures, which are measured in radians.\n\n## What does Sinx equal to?\n\nWe can say that sin x = sin(x + 360◦). We say the function is periodic, with periodicity 360◦. Sometimes we will want to work in radians instead of degrees. If we have sin x in radians, it is usually very different from sin x in degrees.\n\n## Why is sine opposite over hypotenuse?\n\nThe sine is always the measure of the opposite side divided by the measure of the hypotenuse. Because the hypotenuse is always the longest side, the number on the bottom of the ratio will always be larger than that on the top.\n\n## Is cotangent Cos over sin?\n\nThe cotangent of x is defined to be the cosine of x divided by the sine of x: cot x = cos x / sin x.\n\n## Why is trigonometry so hard?\n\nTrigonometry is hard because it deliberately makes difficult what is at heart easy. We know trig is about right triangles, and right triangles are about the Pythagorean Theorem. About the simplest math we can write is When this is the Pythagorean Theorem, we’re referring to a right isosceles triangle.\n\n## How do engineers use trig?\n\nEngineers routinely use trigonometric concepts to calculate angles. Civil and mechanical engineers use trigonometry to calculate torque and forces on objects, such as bridges or building girders. An example is the calculation of the static forces on an object that is not moving—such as a bridge.\n\n## Why is trigonometry used?\n\nTrigonometry is used to set directions such as the north south east west, it tells you what direction to take with the compass to get on a straight direction. It is used in navigation in order to pinpoint a location. It is also used to find the distance of the shore from a point in the sea.\n\n## Why is sin theta PH?\n\nRatio of sin/sine, i.e., in a right triangle the sin of one of the acute angles is equal to perpendicular (p) over hypotenuse (h) of the triangle. 
Where x is the angle, o is the length of the triangle side opposite the angle and h is the length of the triangle’s hypotenuse.\n\n## Who is the father of trigonometry?\n\nThe first known table of chords was produced by the Greek mathematician Hipparchus in about 140 BC. Although these tables have not survived, it is claimed that twelve books of tables of chords were written by Hipparchus. This makes Hipparchus the founder of trigonometry.\n\n## What is the difference between trigonometric ratios and trigonometric functions?\n\nWhile trigonometric ratios deal only with values of angle and the sides of the triangle, at the same time trigonometric functions deal with much larger components, like vectors, equations and many more. You can say that ratios are the subset of the functions, while functions have the upper hand."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91195124,"math_prob":0.98086476,"size":5187,"snap":"2022-27-2022-33","text_gpt3_token_len":1196,"char_repetition_ratio":0.19486783,"word_repetition_ratio":0.08169935,"special_character_ratio":0.21322536,"punctuation_ratio":0.102564104,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993106,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-25T19:35:16Z\",\"WARC-Record-ID\":\"<urn:uuid:ce93f7f4-bd6e-47d4-9cc8-72e23b901a74>\",\"Content-Length\":\"226676\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bb5587a3-b47b-4bb9-8cb8-9bf82e403dd3>\",\"WARC-Concurrent-To\":\"<urn:uuid:0f345dee-a452-4719-a003-c4b9c3a5c9f2>\",\"WARC-IP-Address\":\"107.155.81.110\",\"WARC-Target-URI\":\"https://lastdropmugs.com/what-is-sine-ratio/\",\"WARC-Payload-Digest\":\"sha1:FODRM6SKMPAGKW2BEFMDX25SABMUFDDJ\",\"WARC-Block-Digest\":\"sha1:UT2PZC5PE3KG7RD4ZOSRRY2GKH3BT4UQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103036099.6_warc_CC-MAIN-20220625190306-20220625220306-00023.warc.gz\"}"} |
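The row above defines the sine ratio only in words, so here is a short, self-contained Python check of the definition; the 3-4-5 right triangle is my own example, not from the page.

```python
import math

# Sine ratio: length of the opposite side divided by the hypotenuse.
opposite, hypotenuse = 3.0, 5.0      # a 3-4-5 right triangle
sine_ratio = opposite / hypotenuse   # 0.6

# Recover the angle measure from the ratio, in degrees.
angle_deg = math.degrees(math.asin(sine_ratio))

# The page's periodicity claim: sin x = sin(x + 360 degrees).
periodic_ok = math.isclose(math.sin(math.radians(30)),
                           math.sin(math.radians(30 + 360)))
```

This also illustrates the page's point that sine is useful both for finding side lengths (given an angle) and angle measures (given two sides).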
https://tex.stackexchange.com/questions/263354/how-to-globally-change-edge-properties-using-tikz | [
"# How to globally change edge properties using TikZ?\n\nIf I want to change all edge properties (to e.g. rounded edges instead of mitered edges) in a tikz picture (picture globally), I can do so as follows:\n\n\\begin{tikzpicture}[line join=round]\n\n\nThis works fine. However, when attempting to do so for the entire document (globally) using tikzset in the preamble (which seemed a logical step after reading this):\n\n\\tikzset{line join=round}\n\n\nIt does not work, although no error appears, and every tikz image uses the default setting (which is miter) unless otherwise specified locally or picture globally.\n\nDoes anyone know how to set this property (and the line cap and miter limit property) globally?\n\nEDIT: as requested a small example:\n\n\\documentclass{article}\n\n\\usepackage{pgfplots}\n\\pgfplotsset{width=7cm,compat=1.8}\n\\tikzset{line join=round} % <-- This does not have any effect\n\n\\begin{document}\n\n% Edges are not affected, as they are still 'mitered':\n\\begin{tikzpicture}\n\\begin{axis}\n\\addplot coordinates\n{(0, 0) (1, 1) (2, 0) (3, 1) (4, 0)};\n\\end{axis}\n\\end{tikzpicture}\n\n% This is how it's supposed to look like:\n\\begin{tikzpicture}[line join=round] % (!)\n\\begin{axis}\n\\addplot coordinates\n{(0, 0) (1, 1) (2, 0) (3, 1) (4, 0)};\n\\end{axis}\n\\end{tikzpicture}\n\n\\end{document}\n\n• Add to your question a little complete document illustrating the problem mentioned. Aug 26, 2015 at 19:48\n• \\tikzset{every picture/.append style={line join=round}} Aug 26, 2015 at 19:54\n\nYou can use the every picture style:\n\n\\documentclass{article}\n\\usepackage{tikz}\n\n\\tikzset{\nevery picture/.append style={\nline join=round,\nline cap=round,\n}\n}\n\n\\begin{document}\n\n\\begin{tikzpicture}[line width=10pt]\n\\draw (0,0) -- (0,3) -- (3,0);\n\\end{tikzpicture}\n\n\\end{document}",
null,
"As for the miter limit, if you try something like\n\n\\tikzset{\nevery picture/.append style={\nline join=round,\nline cap=round,\nmiter limit=25\n}\n}\n\n\nyou'll see the option has no real effect since the line join is selected as round for which miter limit doesn't apply. Define a style with line join=miter and an optional parameter for miter limit:\n\n\\documentclass{article}\n\\usepackage{tikz}\n\n\\tikzset{\nmystyle/.style={\nline join=miter,\nmiter limit=#1\n},\nmystyle/.default=0\n}\n\n\\begin{document}\n\n\\begin{tikzpicture}[line width=10pt]\n\\draw[mystyle] (0,0) -- ++(5,0.5) -- ++(-5,0.5);\n\\draw[miter limit=25] (5,0) -- ++(5,0.5) -- ++(-5,0.5);\n\\end{tikzpicture}\n\n\\end{document}",
null,
"• Thank you, I didn't know about the required every picture style, it wasn't mentioned in the pgfplots manual. Aug 26, 2015 at 20:41"
] | [
null,
"https://i.stack.imgur.com/fq4Ni.png",
null,
"https://i.stack.imgur.com/GXoPH.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.76870704,"math_prob":0.86193186,"size":1326,"snap":"2023-40-2023-50","text_gpt3_token_len":409,"char_repetition_ratio":0.12556732,"word_repetition_ratio":0.13407822,"special_character_ratio":0.27677226,"punctuation_ratio":0.136,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9868194,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T09:57:44Z\",\"WARC-Record-ID\":\"<urn:uuid:a213062a-5448-4398-aade-519980e7acc2>\",\"Content-Length\":\"147879\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e7c205c7-e5e0-4a95-9e7b-fbefd30dc831>\",\"WARC-Concurrent-To\":\"<urn:uuid:78ba2487-e7a3-4474-9ac8-92fd61792408>\",\"WARC-IP-Address\":\"104.18.43.226\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/263354/how-to-globally-change-edge-properties-using-tikz\",\"WARC-Payload-Digest\":\"sha1:NQRD4M3BEWWSAMQJ7MBWTDQEQWSEL6MD\",\"WARC-Block-Digest\":\"sha1:5VR5JIX6SXQQBE7Y273WF4OZXHSXCUZE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100381.14_warc_CC-MAIN-20231202073445-20231202103445-00682.warc.gz\"}"} |
https://magnimetrics.com/terminal-value-of-the-business/ | [
"# What is Terminal Value?\n\nThe Terminal Value (TV), or as it is also known, continuing value or horizon value, is the value of an investment or a business at the end of a specific period.\n\nA firm or project potentially has an infinite life, meaning its value is the Net Present Value (NPV) of cash flows for an indefinite period. However, forecasting beyond a specified period becomes impractical and can introduce various risks, as it limits the validity of the model due to the high uncertainty added to the model assumptions.\n\nSuch cases are where we use Terminal Value.\n\nIt helps us estimate the value of a business beyond a specific forecast period.",
null,
"Where:\n\n• t is the period;\n• FCF is the free cash flow; and\n• r is the discount rate, often WACC (weighted average cost of capital).\n\nThe Terminal Value shows us the current present value of all future cash flows, after the point to which we can forecast. It is commonly used in DCF analysis, when calculating the value of a business. TV is the second part of the DCF model, the value after the forecast period.\n\nThere are two most common methods to calculate the Terminal Value, and also a third, less popular method:\n\n• Perpetuity Growth Approach;\n• Exit Multiple Approach; and\n• No Growth perpetual approach.\n\nWe will look into the methods and see if one is preferred compared to the others.\n\n## Perpetuity Growth Method\n\nThe Perpetuity Growth Method, also known as the Gordon Growth Model, is the preferred method for scholars and academics, as it is backed with a math theory.\n\nIt assumes that the company will continue to grow at a constant rate and the return on capital will exceed the cost of capital, and therefore we mostly apply it for mature companies that have achieved a stable growth.\n\nThe formula for this method starts from the first period FCF, after our forecast:",
null,
"Where:\n\n• n is the last period in the forecast;\n• g is the growth rate.\n\nAs we do not have this Free Cash Flow estimated, we can present the formula like this:",
null,
"For this approach, it is critical to determine the right growth rate, and analysts usually drop it to arrive at a more conservative terminal value.\n\n## Exit Multiple Method\n\nThe assumption for this method is that we value the business via a multiple of some financial metrics (usually EBITDA or EBIT) based on currently observed comparable trading multiples for similar firms in the industry.",
null,
"Multiple of EBITDA measures the firm value of the enterprise, which is the value of business operations. This method only considers the operational value of the business. Although EBITDA is the most common financial statistic used, we can also use other metrics:\n\n• EBIT;\n• P/E multiple;\n• Market to book;\n• Price to Revenue.\n\n## No Growth Perpetuity Method\n\nThis is not a popular approach, but we can use it in highly competitive industries where the opportunity to earn excess returns is close to zero. The assumption in this method is that the growth rate (g) is close to zero and therefore return on capital is equal to cost of capital.",
null,
"# Which to use?\n\nIndustry professionals prefer the Exit Multiple approach as it compares the value of the business to something observable on the market.\n\nOn the other hand, academics prefer the Perpetuity Growth approach as it has more sound theory behind it.\n\nSome practitioners take a hybrid approach and use an averaged combination of both methods.\n\n# Considerations\n\nBecause Terminal Value is usually the more significant portion of Enterprise Value (EV), valuation models are susceptible to assumptions relating to the TV. We typically represent this via some sensitivity analysis.\n\nIt is also important to remember that after calculating TV, we do not include it directly in our EV calculation, but rather discount it to its present value.",
null,
"TV is critical to financial modeling for valuations, and a highly precise estimate of TV is crucial as it is often the more substantial portion of the total value.\n\n# Example Calculation\n\nTo better understand Terminal Value, let us take a look at an example of how to calculate it.\n\nWe first create a forecast of the performance of the company for the next three periods, based on our historical data available for 2X16, 2X17, and 2X18.",
null,
"We can set up our assumptions – a discount rate (r) of 10%, which is the weighted average cost of capital for the company, and an expected growth rate (g) after 2X21 at 4% in perpetuity.",
null,
"We will start by calculating the net present value of the forecasted free cash flows for the three periods. It is easy to do with Excel’s npv() function. We will need this to calculate the Enterprise Value of the company later.",
null,
"We then calculate the Terminal Value of the company at the end of the third forecasted period, using the formula from the perpetual growth model. We also discount it to the end of the last historical period.",
null,
"We can then add the net present values of the forecasted period free cash flows and the terminal value, discounted to present value, to arrive at the enterprise value. We can then get to the equity value by adjusting for financial debt and cash. Dividing this over the number of shares of the company, we can calculate the value per share.",
null,
"# Conclusion\n\nTerminal Value is an essential component in estimating the Discounted Cash Flow as it usually accounts for more than half of the total enterprise value. It is critical that we use the correct assumptions for the discount rate and the growth, or exit multiple, as even a 1% change in the assumptions can lead to significant deviations in the calculated enterprise value. However, Terminal Value provides us with the easiest way to calculate the value of the expected free cash flow of the company beyond the period of the explicit forecasted model.\n\nToday we took a look at Terminal Value, what is the concept behind it and how to calculate it. Do not forget to download the Excel working file below:",
null,
""
] | [
null,
"https://magnimetrics.com/terminal-value-of-the-business/image/gif;base64,R0lGODlhAQABAIAAAAAAAP/yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://magnimetrics.com/terminal-value-of-the-business/image/gif;base64,R0lGODlhAQABAIAAAAAAAP/yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://magnimetrics.com/terminal-value-of-the-business/image/gif;base64,R0lGODlhAQABAIAAAAAAAP/yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://magnimetrics.com/terminal-value-of-the-business/image/gif;base64,R0lGODlhAQABAIAAAAAAAP/yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://magnimetrics.com/terminal-value-of-the-business/image/gif;base64,R0lGODlhAQABAIAAAAAAAP/yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://magnimetrics.com/terminal-value-of-the-business/image/gif;base64,R0lGODlhAQABAIAAAAAAAP/yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://magnimetrics.com/terminal-value-of-the-business/image/gif;base64,R0lGODlhAQABAIAAAAAAAP/yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://magnimetrics.com/terminal-value-of-the-business/image/gif;base64,R0lGODlhAQABAIAAAAAAAP/yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://magnimetrics.com/terminal-value-of-the-business/image/gif;base64,R0lGODlhAQABAIAAAAAAAP/yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://magnimetrics.com/terminal-value-of-the-business/image/gif;base64,R0lGODlhAQABAIAAAAAAAP/yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://magnimetrics.com/terminal-value-of-the-business/image/gif;base64,R0lGODlhAQABAIAAAAAAAP/yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://magnimetrics.com/terminal-value-of-the-business/image/gif;base64,R0lGODlhAQABAIAAAAAAAP/yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92173666,"math_prob":0.91552615,"size":6945,"snap":"2022-40-2023-06","text_gpt3_token_len":1415,"char_repetition_ratio":0.13513903,"word_repetition_ratio":0.0,"special_character_ratio":0.19784017,"punctuation_ratio":0.09097859,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9618847,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-04T06:07:59Z\",\"WARC-Record-ID\":\"<urn:uuid:bd0194ac-e574-4796-bba5-5a62c08179b0>\",\"Content-Length\":\"173064\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:292cd268-e630-4800-a31c-104090939c1f>\",\"WARC-Concurrent-To\":\"<urn:uuid:94b39764-b9c3-4ab0-ab06-df8ba8864954>\",\"WARC-IP-Address\":\"35.214.133.146\",\"WARC-Target-URI\":\"https://magnimetrics.com/terminal-value-of-the-business/\",\"WARC-Payload-Digest\":\"sha1:5VUFJFIIOR4O33F7DIKJQ23GCMZPKHUZ\",\"WARC-Block-Digest\":\"sha1:PZJYRACGZS626F65L4CMCWGOBHT667KM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500094.26_warc_CC-MAIN-20230204044030-20230204074030-00707.warc.gz\"}"} |
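The article's formulas survive only as image placeholders in this scrape, so here is a sketch of the perpetuity-growth calculation using the stated assumptions (r = 10% WACC, g = 4%). The three forecast cash flows are invented for illustration; the article's actual figures are only visible in its screenshots.

```python
# Illustrative free cash flows for the three forecast periods (made up;
# the article's own forecast numbers appear only in its images).
fcf = [100.0, 110.0, 120.0]
r, g = 0.10, 0.04  # WACC and perpetual growth rate, as stated in the article

# Present value of the explicit forecast period (what Excel's NPV() computes).
pv_forecast = sum(c / (1 + r) ** t for t, c in enumerate(fcf, start=1))

# Gordon-growth terminal value at the end of the forecast:
# TV = FCF_n * (1 + g) / (r - g), then discounted back to today.
terminal_value = fcf[-1] * (1 + g) / (r - g)
pv_terminal = terminal_value / (1 + r) ** len(fcf)

enterprise_value = pv_forecast + pv_terminal
```

Note that the discounted terminal value dominates the enterprise value here, which is exactly why the article stresses how sensitive the model is to the r and g assumptions.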
https://letstacle.com/generate-random-numbers-in-java | [
"## Random Number Generation\n\nThere are several ways to generate random numbers in Java. We will use the method that utilizes the Math library.\n\nThis will require you to include the following line of code at the top of your program, before your Lab4d class declaration.\n\n``import java.lang.Math;``\n\nThe method, Math.random(), will return a random value between 0 (inclusively) and 1.0 (exclusively).\n\nIn order to modify this to a range between the integers 1 and some upper limit, we need to multiply the return value of Math.random() by the upper limit of the range we desire and add 1.\n\nFor example, if we wanted to generate a random number between 1 and 50, we would use the code:\n\n``double randNum = (Math.random() * 50) + 1;``\n\nIf we wanted to limit the result further to only include integer values we would cast the results to ints.\n\n``int randNum = (int)(Math.random() * 50) + 1;``\n\nTo make this code more useful and specify both an upper limit and a lower limit, we would modify the code as follows:\n\n``int randNum = (int)(Math.random() * (upperLimit - lowerLimit + 1)) + lowerLimit;``\n\nFor example, if we wanted to generate a random number between 17 and 1034, we would use the code:\n\n``int randNum = (int)(Math.random() * (1034 - 17 + 1)) + 17;``\n\nor more generically:\n\n``````//Define the range\nint max = 1034;\nint min = 17;\nint range = max - min + 1;\n\n//Generate a random number between min and max\nint randNum = (int)(Math.random() * range) + min;``````\n\n## Program: Guessing Game\n\nFor this lab, you’ll need to use do-while loops and if statements to construct a guessing game. The computer will choose a random number between 1 and 100 and the user will need to guess what the number is. If the user guesses incorrectly, the computer will indicate whether the user’s guess was too high or too low. 
If the user guesses correctly, the computer reports how many tries it took to get the correct answer, and then asks if the user would like to play again.\n\n## Technical Design:\n\nThis program will require four do-while loops:\n\nThe primary do-while loop (outermost) will contain almost all of the code in the program and keep running over and over until the user indicates they no longer wish to play.\n\nThe game do-while loop is nested inside the primary do-while loop and will keep running over and over again until the user guesses the correct answer and wins the game. As soon as the user wins a game, the program will exit this do-while loop.\n\nThe numeric input validation do-while loop is nested inside of the game do-while loop and will ensure that the user entered a valid whole number for each guess.\n\nThe non-numeric input validation do-while loop is located AFTER and OUTSIDE of the game do-while loop. It is only executed after a game is complete and asks the user if they would like to play again.\n\nIt is nested inside of the primary do-while loop and will ensure that the user entered a valid ‘Y’ / ‘y’ / ‘N’ / ‘n’ in answer to the “Would you like to play again (Y/N)?” question.\n\nBig Hint:\n\nWhen approaching a problem with several nested loops, it is sometimes easier to start with the innermost loops and work outward:\n\n• Create the numeric input validation loop\n• Indent the numeric input validation loop and surround it with the game loop\n• Add the non-numeric input validation loop after the game loop\n• Indent the game loop and the non-numeric input validation loops and surround them with the primary program loop\n\nThe diagram provides a rudimentary architecture for the guessing game program:\n\n### Key Program Requirements:\n\nAt the beginning of the program, output a message to the user describing what the program will do as shown in the example run.\n\nPerform numeric input validation to ensure the user is entering a valid whole number.\n\nPerform 
non-numeric input validation to ensure the user is entering a ‘Y’, ‘y’, ‘N’, or ‘n’ when asked if they want to play again.\n\n### Example Run:\n\nThis program is a guessing game.\nThe computer will generate a random integer between 1 and 100. The user will try to guess the number.\nLet’s get started!\n\nI’m thinking of a number between 1 and 100.\nError: Please enter a whole number.\n\nYour guess is too high. Try again.\n\nYour guess is too low. Try again.\n\nCORRECT! You guessed it in 3 tries!!\n\nWould you like to play again? Not sure\n\nWould you like to play again? y\n\nThis program is a guessing game.\nThe computer will generate a random integer between 1 and 100. The user will try to guess the number.\nLet’s get started!\n\nI’m thinking of a number between 1 and 100.\nYour guess is too high. Try again.\n\nYour guess is too high. Try again.\n\nYour guess is too high. Try again.\n\nYour guess is too high. Try again."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9002449,"math_prob":0.9143393,"size":5345,"snap":"2023-40-2023-50","text_gpt3_token_len":1276,"char_repetition_ratio":0.14753792,"word_repetition_ratio":0.22717842,"special_character_ratio":0.24153414,"punctuation_ratio":0.111919925,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.963783,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T22:23:57Z\",\"WARC-Record-ID\":\"<urn:uuid:684e6784-5211-4246-8eda-2db921bc6467>\",\"Content-Length\":\"481877\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b2ee49da-d7fc-47f7-8d2b-3115fc1cd1d1>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b8b9d87-6d06-40a8-a1e0-9346d094bf33>\",\"WARC-IP-Address\":\"199.16.173.117\",\"WARC-Target-URI\":\"https://letstacle.com/generate-random-numbers-in-java\",\"WARC-Payload-Digest\":\"sha1:OEPYVAEEF6CARILQNE264T6BQN7K6DKU\",\"WARC-Block-Digest\":\"sha1:NXH3WPVQJJNBNTYJO5RPL5L3OFYVXBZK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100452.79_warc_CC-MAIN-20231202203800-20231202233800-00564.warc.gz\"}"} |
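The four nested do-while loops described in the assignment above translate almost mechanically into code. A sketch in Python (which has no do-while, so each `while True: ... break` loop plays that role); the function name, the injected `inputs` iterator, and the `secret` override are my own scaffolding so the sketch runs without console input, not part of the assignment:

```python
import random

def run_game(inputs, secret=None, low=1, high=100):
    """Play guessing-game rounds until the user answers 'N'/'n'.

    `inputs` is an iterator of raw user-entered strings (a stand-in for
    console input) and `secret` pins the hidden number for reproducibility.
    Returns the number of tries each game took.
    """
    tries_per_game = []
    while True:                        # primary loop: one pass per game
        target = secret if secret is not None else random.randint(low, high)
        tries = 0
        while True:                    # game loop: runs until a correct guess
            while True:                # numeric input validation loop
                raw = next(inputs)
                try:
                    guess = int(raw)
                    break
                except ValueError:
                    pass               # "Error: Please enter a whole number."
            tries += 1
            if guess == target:        # "CORRECT! You guessed it in N tries!!"
                break
            # otherwise report "too high" / "too low" and loop again
        tries_per_game.append(tries)
        while True:                    # non-numeric (Y/N) validation loop
            answer = next(inputs)
            if answer in ("Y", "y", "N", "n"):
                break
        if answer in ("N", "n"):
            return tries_per_game      # user is done playing
```

With the canned inputs `["abc", "50", "75", "62", "y", "62", "n"]` and `secret=62`, the invalid "abc" is rejected by the numeric validation loop, the first game takes 3 tries, and the second takes 1.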
https://www.sawaal.com/aptitude-and-Reasoning/races-and-games-questions-and-answers.html | [
"# Races and Games Questions\n\nFACTS AND FORMULAE FOR RACES AND GAMES QUESTIONS\n\n1. Races : A contest of speed in running, riding, driving, sailing or rowing is called a race.\n\nRace Course : The ground or path on which contests are held is called a race course.\n\nStarting Point : The point from which a race begins is known as a starting point.\n\nWinning Point or Goal : The point set to bound a race is called a winning point or a Goal.\n\nWinner : The person who first reaches the winning point is called a winner.\n\nDead Heat Race : If all the persons contesting a race reach the goal exactly at the same time, then the race is said to be a dead heat race.\n\nStart : Suppose A and B are two contestants in a race. If before the start of the race, A is at the starting point and B is ahead of A by 12 metres, then we say that 'A gives B a start of 12 metres'.\n\nTo cover a race of 100 metres in this case, A will have to cover 100 metres while B will have to cover only (100 - 12) = 88 metres.\n\nIn a 100m race, 'A can give B 12m' or 'A can give B a start of 12m' or 'A beats B by 12m' means that while A runs 100m, B runs (100-12)=88m\n\n2. Games : A game of 100 means that the person among the contestants who scores 100 points first is the winner.\n\nIf A scores 100 points while B scores only 80 points, then we say that 'A can give B 20 points'.\n\nQ:\n\nIn a 100 m race, A beats B by 10 m and C by 13 m. In a race of 180 m, B will beat C by:\n\n A) 5.4m B) 4.5m C) 5m D) 6m\n\nExplanation:\n\nA : B = 100 : 90.\n\nA : C = 100 : 87.\n\nB/C = (B/A*A/C) = (90/100*100/87) = 30/29\n\nWhen B runs 30 m, C runs 29 m.\n\nWhen B runs 180 m, C runs (29/30*180)m = 174m\n\nB beats C by (180 - 174) m = 6 m.\n\n61 46931\nQ:\n\nIn a 1 km race, A beats B by 28 meters in 7sec. Find A's time over the course?\n\n A) 5min,4sec B) 4min,3sec C) 2min,3sec D) 3min,4sec\n\nExplanation:\n\nB covers 28 meters in 7sec. 
So, B's time over the course = (7/28)*1000 = 250 sec.\n\nWhereas A's time over the course = 250 -7 = 243 sec.\n\ni.e. A's time over the course is 4 min, 3 sec.\n\n44 28995\nQ:\n\nIn a 500 m race, the ratio of the speeds of two contestants A and B is 3 : 4. A has a start of 140 m. Then, A wins by:\n\n A) 60m B) 40m C) 20m D) 10m\n\nExplanation:\n\nTo reach the winning post A will have to cover a distance of (500 - 140)m, i.e., 360 m.\n\nWhile A covers 3 m, B covers 4 m.\n\nWhile A covers 360 m, B covers (4/3)*360 = 480 m.\n\nThus, when A reaches the winning post, B covers 480 m and therefore remains 20 m behind.\n\nA wins by 20 m.\n\n30 27521\nQ:\n\nIn 100 m race, A covers the distance in 36 seconds and B in 45 seconds. In this race A beats B by:\n\n A) 20m B) 25m C) 22.5m D) 9m\n\nExplanation:\n\nDistance covered by B in 9 sec. = (100/45)*9m = 20m\n\nA beats B by 20 metres.\n\n33 25343\nQ:\n\nA and B take part in 100 m race. A runs at 5 kmph. A gives B a start of 8 m and still beats him by 8 seconds. The speed of B is:\n\n A) 5.15 kmph B) 4.14 kmph C) 4.25 kmph D) 4.4 kmph\n\nExplanation:\n\nA's speed = (5*5/18)m/sec = (25/18)m/sec\n\nTime taken by A to cover 100 m = (100*18/25)sec = 72sec\n\nTime taken by B to cover 92 m = (72 + 8) = 80 sec.\n\nB's speed =(92/80*18/5)kmph =4.14kmph\n\n29 22434\nQ:\n\nIn a 100 m race, A can beat B by 25 m and B can beat C by 4 m. In the same race, A can beat C by\n\n A) 21 m B) 26 m C) 28m D) 29m\n\nExplanation:",
null,
"19 21034\nQ:\n\nAt a game of billiards, A can give B 15 points in 60 and A can give C 20 points in 60. How many points can B give C in a game of 90?\n\n A) 30 points B) 20 points C) 10 points D) 12 points\n\nExplanation:\n\nA : B = 60 : 45.\n\nA : C = 60 : 40.\n\nB/C = (B/A*A/C) = (45/60*60/40) = 45/40 = 90/80 = 90:80\n\nB can give C 10 points in a game of 90.\n\n26 19576\nQ:\n\nIn a game of 100 points, A can give B 20 points and C 28 points. Then, B can give C\n\n A) 8 points B) 10 points C) 14 points D) 40 points",
null,
""
] | [
null,
null,
null,
null,
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.94716203,"math_prob":0.9955241,"size":1328,"snap":"2019-51-2020-05","text_gpt3_token_len":357,"char_repetition_ratio":0.12462236,"word_repetition_ratio":0.0072727273,"special_character_ratio":0.2868976,"punctuation_ratio":0.11147541,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9910158,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T05:26:55Z\",\"WARC-Record-ID\":\"<urn:uuid:ff1a30fd-15a3-4ea5-ad31-67aba94400c0>\",\"Content-Length\":\"116460\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f81c7a02-6570-49f0-a1c6-ab08c738dce0>\",\"WARC-Concurrent-To\":\"<urn:uuid:553ef2c8-5f14-4e01-93c9-f3161645b83c>\",\"WARC-IP-Address\":\"13.126.246.212\",\"WARC-Target-URI\":\"https://www.sawaal.com/aptitude-and-Reasoning/races-and-games-questions-and-answers.html\",\"WARC-Payload-Digest\":\"sha1:KIBLZO3EPI7MULPOLM6EOEM4XPBCKXYI\",\"WARC-Block-Digest\":\"sha1:OX3C7G2CALMG5MKSYALSKLRBTBCZZPF5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250601615.66_warc_CC-MAIN-20200121044233-20200121073233-00163.warc.gz\"}"} |
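The chained-ratio method used in the worked answers above (B/C = (B/A)·(A/C)) is easy to verify with exact fractions; the second computation below also covers the question whose explanation image was lost (A beats C by 28 m). The helper name `beats` is mine:

```python
from fractions import Fraction

def beats(d, x):
    """In a d-metre race, the winner runs d while the loser runs d - x.
    Return the exact loser-to-winner distance ratio."""
    return Fraction(d - x, d)

# "A beats B by 10 m and C by 13 m in 100 m; over 180 m, B beats C by?"
c_per_b = beats(100, 13) / beats(100, 10)   # (87/100) / (90/100) = 29/30
print(180 - 180 * c_per_b)                  # -> 6 (metres)

# "A beats B by 25 m and B beats C by 4 m, both in 100 m; A beats C by?"
c_per_a = beats(100, 25) * beats(100, 4)    # (75/100) * (96/100) = 18/25
print(100 - 100 * c_per_a)                  # -> 28 (metres)
```

Using `Fraction` keeps every ratio exact, so the printed margins are whole metres rather than floating-point approximations.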
https://www.habitania.pl/13917642 | [
"•",
null,
"+86-21-63353309\n•",
null,
"[email protected]\n\n## hydro cyclone pressure drop equation\n\n### Cyclone Design Equations Formulas Calculator - Pressure Drop",
null,
"P drop = pressure drop: Q = gas flow rate: P = absolute pressure: p gas = gas density: u = air viscosity: u gas = gas viscosity: K = proportionality factor: T = temperature: v = settling velocity: S = separation factor: N = approximate effective turns: h = inlet height: L cylinder = cylinder length: L cone = cone length: d cut = cut diameter: W = inlet width\n\n### Cyclone Design Equations Formulas Calculator - Radial Velocity",
null,
"P drop = pressure drop: Q = gas flow rate: P = absolute pressure: p gas = gas density: u = air viscosity: u gas = gas viscosity: K = proportionality factor: T = temperature: v = settling velocity: S = separation factor: N = approximate effective turns: h = inlet height: L cylinder = cylinder length: L cone = cone length: d cut = cut diameter: W = inlet width\n\n### Basic Cyclone Design - ASME Met Section",
null,
"Pressure drop @ load. • Pressure drop goes down with increased dust load • Pressure drop dust loading equation: ΔP_L = C · ΔP_0, with C = 2.095·W^(-0.02) - 1.09, where ΔP_L = pressure drop @ load, ΔP_0 = pressure drop @ no load, W = dust load (grains/acf). Fractional efficiencies.\n\n### The Sizing & Selection of Hydrocyclones",
null,
"The correction for pressure drop is shown in Figure 7 and can be calculated from Equation 5. As indicated, a higher pressure drop would result in a finer separation and lower pressure drop in a coarser separation. C2 = 3.27 x ΔP^(-0.28) (Eq. 5), where C2 = correction for influence of pressure drop, ΔP = pressure drop in kPa.\n\n### Study of GLR and Inlet Velocity on Hydrocyclone for ...",
null,
"3.2.1. Pressure and Pressure Drop. The internal flow field of the hydrocyclone is a strong swirling flow, which causes a certain energy loss. The pressure loss is a prerequisite for effective separation. Figure 6 shows the characteristic curve of pressure at different inlet velocities of the hydrocyclone. When the GLR (volume fraction) is 0, the ...\n\n### Hydrocyclone Filter Calculation Parameters",
null,
"Cyclone pressure head losses, corresponding to the velocity, are determined by the formula: ∆p = ζ ц · [(ρ г ·v опт ²)/2] where: Δp – pressure drop across the cyclone, Pa; ζ ц – coefficient of hydraulic resistance of the cyclone; ρ г – gas density under operating conditions, kg/m³.\n\n### Hydrocyclone - SysCAD Documentation",
null,
"11/12/2020 · The pressure drop equation assumes free discharge from both the under and over flows from the cyclone. The mass weighted mean of the solids density is used to determine the cut point. Q is the total volumetric flow into the cyclones. Efficiency Curves\n\n### HYDROCYCLONES - Thermopedia",
null,
"The hydrocyclone is a simple piece of equipment that uses fluid pressure to generate centrifugal force and a flow pattern which can separate particles or droplets from a liquid medium. These particles or droplets must have a sufficiently different density relative to the medium in order to achieve separation.\n\n### Hydrocyclone Design & Sizing Parameters - Calculations ...",
null,
"Here is a hydrocyclone sizing calculator with immediate access to all design equations needed for your hydrocyclone design calculation in an online XLS spreadsheet format. Based on first principles of hydrocyclone theory and equations, this quasi design software lets you enter all cyclone design parameters such as cut size, D50, D60 (efficiency calculation), graphs your results. Calculate ...\n\n### Hydrocyclone Theory - 877 Words | Internet Public Library",
null,
"The relationship can be given by the following equation: ν_t · r^n = constant (1). The maximum rise in the tangential velocity occurs at a larger radial distance above the rim than below the rim, as can be seen from the above figure. The velocity profile remains almost the same at all the other axial positions."
] | [
null,
"https://www.habitania.pl/themes/nflg-19/assets/img/call.png",
null,
"https://www.habitania.pl/themes/nflg-19/assets/img/inbox.png",
null,
"https://www.habitania.pl/images/156.jpg",
null,
"https://www.habitania.pl/images/186.jpg",
null,
"https://www.habitania.pl/images/121.jpg",
null,
"https://www.habitania.pl/images/2.jpg",
null,
"https://www.habitania.pl/images/259.jpg",
null,
"https://www.habitania.pl/images/14.jpg",
null,
"https://www.habitania.pl/images/36.jpg",
null,
"https://www.habitania.pl/images/91.jpg",
null,
"https://www.habitania.pl/images/58.jpg",
null,
"https://www.habitania.pl/images/46.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8569174,"math_prob":0.98860264,"size":3796,"snap":"2021-21-2021-25","text_gpt3_token_len":875,"char_repetition_ratio":0.157173,"word_repetition_ratio":0.21428572,"special_character_ratio":0.23024236,"punctuation_ratio":0.13207547,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9957876,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-16T21:13:08Z\",\"WARC-Record-ID\":\"<urn:uuid:3d873bcd-a02b-4fad-84de-f8badb16d771>\",\"Content-Length\":\"34052\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ba913d9b-4c24-4316-8d69-2bcbee061612>\",\"WARC-Concurrent-To\":\"<urn:uuid:41cd9b39-426a-4608-95d2-166265540056>\",\"WARC-IP-Address\":\"104.21.9.61\",\"WARC-Target-URI\":\"https://www.habitania.pl/13917642\",\"WARC-Payload-Digest\":\"sha1:P2DYALY5YHPR7ZDXNK6X5HTEHBPUZKZ6\",\"WARC-Block-Digest\":\"sha1:HNI22472W7FHDXNE762BRISRV2FCLFOV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487626008.14_warc_CC-MAIN-20210616190205-20210616220205-00174.warc.gz\"}"} |
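Both formulas quoted in the excerpts above, the resistance-coefficient form Δp = ζ·ρ·v²/2 and the pressure-drop correction C2 = 3.27·ΔP^(-0.28) from the sizing paper, are one-liners to evaluate. A sketch with placeholder operating values (the ζ, density, and velocity below are illustrative, not taken from any of the cited sources):

```python
def cyclone_pressure_drop(zeta, rho_gas, velocity):
    """Pressure drop across a cyclone, Pa: Δp = ζ · ρ_gas · v² / 2.

    zeta     - hydraulic resistance coefficient of the cyclone (dimensionless)
    rho_gas  - gas density at operating conditions, kg/m³
    velocity - characteristic gas velocity, m/s
    """
    return zeta * rho_gas * velocity ** 2 / 2

def separation_correction(dp_kpa):
    """C2 = 3.27 · ΔP^(-0.28): a higher pressure drop gives a finer
    separation (smaller correction), a lower drop a coarser one."""
    return 3.27 * dp_kpa ** -0.28

# Placeholder operating point:
dp = cyclone_pressure_drop(zeta=105.0, rho_gas=1.2, velocity=3.5)
print(dp)                        # ~771.75 Pa
print(separation_correction(1))  # 3.27 at a reference drop of 1 kPa
```

Real values of ζ depend on the cyclone geometry and come from the manufacturer or handbook tables, which is why it is left as a parameter here.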
http://r64.is-programmer.com/posts/201527.html | [
"r_64",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"99621",
null,
"# Codechef SNACKDOWN 2016 full log\n\nr_64 posted @ June 1, 2016 12:26 in Uncategorized with tags codechef , 1086 reads\n\nupd: the elimination round turns out to be at 7:30 India time... I barely made it to school by 7:00 today, and this is what you show me?\n\n5.30\n\n## KTTABLE\n\nTaking differences of the $A_i$ gives the length of each usable time slot; then for each student just check whether the slot is long enough.\n\n## MMSUM\n\n$f_i$ is the maximum contiguous-subarray sum ending at $i$; $g_i$ is the largest sum over subarrays ending at $i$ with one element deleted. (A bit convoluted)\n\n$g_i=\\max(g_{i-1}+a_i,f_{i-1})$. This holds because the subarray either has some element before $a_i$ deleted, or has $a_i$ itself deleted.\n\n## FDIVGAME\n\n• The first segment has length 1\n• A segment of 2s is twice as long as the previous segment\n• A segment of 3s is as long as the previous segment\n• A segment of 0s is three times as long as the previous segment\n• A segment of 1s is twice as long as the previous segment\n• For example, the sequence starts 122330000001111111111112222222222222222222222223...\n\n6.5\n\n## MAKELIS\n\n$$dp_{i,j}=\\min_{l|i}(dp_{k,l},dp_{i/k,j-l})$$\n\n$$dp_{i,j}=\\min(dp_{i-1,j}+j)$$\n\nWA'd on a small detail; after one fix it passed\n\n## CHSPARR",
null,
""
] | [
null,
"http://r64.is-programmer.com/images/table_bottom.jpg",
null,
"http://r64.is-programmer.com/images/table_bottom.jpg",
null,
"http://r64.is-programmer.com/images/table_bottom.jpg",
null,
"http://r64.is-programmer.com/images/table_bottom.jpg",
null,
"http://r64.is-programmer.com/images/table_bottom.jpg",
null,
"http://r64.is-programmer.com/images/table_bottom.jpg",
null,
"http://r64.is-programmer.com/images/table_bottom.jpg",
null,
"http://r64.is-programmer.com/images/table_bottom.jpg",
null,
"http://r64.is-programmer.com/images/table_bottom.jpg",
null,
"http://r64.is-programmer.com/simple_captcha/simple_captcha",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.88914204,"math_prob":0.99478334,"size":2054,"snap":"2022-40-2023-06","text_gpt3_token_len":1711,"char_repetition_ratio":0.07463415,"word_repetition_ratio":0.0,"special_character_ratio":0.31986368,"punctuation_ratio":0.08169014,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9521366,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T13:36:19Z\",\"WARC-Record-ID\":\"<urn:uuid:c186f549-6505-4292-ab9a-816ea4cd7afd>\",\"Content-Length\":\"25407\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6949294e-d218-4887-81d6-8b862d7a822c>\",\"WARC-Concurrent-To\":\"<urn:uuid:c267503d-0ad8-4381-a45b-45b33e80329c>\",\"WARC-IP-Address\":\"198.58.112.156\",\"WARC-Target-URI\":\"http://r64.is-programmer.com/posts/201527.html\",\"WARC-Payload-Digest\":\"sha1:SPWGQU6F6UCEWITHJI5OQZS4X5NPHQSV\",\"WARC-Block-Digest\":\"sha1:BODUU4O7HYPRQ53BD5P7GC6UOZXNCRFP\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500339.37_warc_CC-MAIN-20230206113934-20230206143934-00404.warc.gz\"}"} |
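The MMSUM recurrence above, g_i = max(g_{i-1} + a_i, f_{i-1}), only needs the standard Kadane update f_i = max(f_{i-1} + a_i, a_i) running next to it. A sketch (assuming the chosen subarray must stay non-empty; the function name is mine):

```python
def mmsum(a):
    """Maximum contiguous-subarray sum when at most one element may be
    deleted, with the subarray itself staying non-empty.

    f = best sum of a subarray ending at the current element (no deletion),
    g = best sum of a subarray ending here with exactly one deletion.
    """
    neg_inf = float("-inf")
    f, g, best = neg_inf, neg_inf, neg_inf
    for x in a:
        # g_i = max(g_{i-1} + a_i, f_{i-1}): either keep x after an earlier
        # deletion, or delete x itself right after a no-deletion subarray
        g = max(g + x, f)
        # standard Kadane step: extend the running subarray or restart at x
        f = max(f + x, x)
        best = max(best, f, g)
    return best
```

`mmsum([1, -2, 3, -2, 5])` returns 8: take [3, -2, 5] and delete the -2.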
https://de.mathworks.com/matlabcentral/answers/479153-for-loop-execution-error?s_tid=prof_contriblnk | [
"# for loop execution error\n\n1 view (last 30 days)\nMUKESH KUMAR on 6 Sep 2019\nI had a for loop like this\nfor j=1:100\nfor i=1:48\nF{j,1}(i,1)= some equations;\nend\nend\nThis runs well when j (1:100) > i (1:48), but not for j < i values, e.g. if\nfor j=1:30\nfor i=1:48\nF{j,1}(i,1)= some equations;\nend\nend\nthen the F cell has only 30 entries. But I need 48 values for each j=1:30, i.e. the F cell size should be {30,1} with each cell holding a (1*48) array.\nAnd in another for loop,\nfor t=1:100\nG(t)=trapz(X(t),Y(t));\nend\nif G(55) cannot be calculated because X(55) or Y(55) does not have enough data for the operation, how can I execute this for loop without the error? I need the array up to the point where the error occurs; the G values before the error (through G(55)) are what I need, and I want the program to continue.\n\nSebastian Körner on 6 Sep 2019\nfor t=1:100\ntry\nG(t)=trapz(X(t),Y(t));\ncatch\ncontinue;\nend\nend\nWhen the error occurs at t=55, you jump to the catch part and your loop continues with the next iteration.\n\nMUKESH KUMAR on 6 Sep 2019\nThanks for your help, it works well, but after execution the G vector values from 56 to 100 are zero and the size of G is (100,1); I want it only through G(55,1).\nSebastian Körner on 6 Sep 2019\nIn this case simply replace the continue with break:\nfor t=1:100\ntry\nG(t)=trapz(X(t),Y(t));\ncatch\nbreak;\nend\nend"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8507944,"math_prob":0.9691903,"size":787,"snap":"2020-24-2020-29","text_gpt3_token_len":252,"char_repetition_ratio":0.12132823,"word_repetition_ratio":0.04255319,"special_character_ratio":0.33545107,"punctuation_ratio":0.10784314,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9922679,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-02T23:06:32Z\",\"WARC-Record-ID\":\"<urn:uuid:19a89fcc-753d-482d-a59c-546d45556f94>\",\"Content-Length\":\"113297\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4d5a074a-3478-49d8-92a0-781b39f19042>\",\"WARC-Concurrent-To\":\"<urn:uuid:f1c65889-30b6-419a-ba92-d670c6969a76>\",\"WARC-IP-Address\":\"96.7.70.236\",\"WARC-Target-URI\":\"https://de.mathworks.com/matlabcentral/answers/479153-for-loop-execution-error?s_tid=prof_contriblnk\",\"WARC-Payload-Digest\":\"sha1:YNSD57PNAINS7WA3XMMAHGMUWW3GBUZH\",\"WARC-Block-Digest\":\"sha1:64RBY6ROIDG2LOP7XWW5Z5GBUM3QFQGS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655880243.25_warc_CC-MAIN-20200702205206-20200702235206-00151.warc.gz\"}"} |
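The continue-versus-break distinction in the accepted answer is language-independent. Here is the same pattern sketched in Python, with a hand-rolled per-segment trapezoid standing in for trapz and a `None` entry standing in for the missing data at t = 55:

```python
def integrate_until_error(xs, ys):
    """Trapezoid areas per segment, stopping at the first failure.

    Mirrors the MATLAB try/catch-with-break pattern: results computed
    before the bad index are kept; nothing after it is attempted.
    Swapping `break` for `continue` would instead skip bad segments
    and keep going, leaving gaps rather than truncating.
    """
    g = []
    for x, y in zip(xs, ys):
        try:
            # one trapezoid: (y0 + y1) / 2 * (x1 - x0); raises TypeError
            # when a segment is None (our stand-in for missing data)
            g.append(0.5 * (y[0] + y[1]) * (x[1] - x[0]))
        except TypeError:
            break
    return g

xs = [(0, 1), (1, 2), None, (3, 4)]   # third segment is "bad", like t = 55
ys = [(0, 2), (2, 4), (4, 6), (6, 8)]
print(integrate_until_error(xs, ys))  # -> [1.0, 3.0]
```

As in the thread, `break` also gives a result whose length stops at the failure point, so there are no trailing zero entries to trim afterwards.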
http://hypergeometricaluniverse.com/blog/archive/2007/7/?page=3 | [
"# Hypergeometrical Universe\n\n## Comments #1 on The Silver Surfer",
null,
"Comments on The Silver Surfer and The Hypergeometrical Universe\n\nAnonymous said...\n\nSay, there are two observers A and B separated by distance X. Observer A starts moving in the direction of B with such an impulse that by the time he reaches B, his velocity will match the Hubble velocity. What will observer B see? Will A continue his motion? Will he slow down and stop?\n---------------------------------------------------\n---------------------------------------------------\n---------------------------------------------------\nTo answer this question, the first thing to notice is that one can see A and B on both cross-sections shown above. We will use just the left panel RXYZ cross-section to answer this question.\nLet's first check the initial condition where the Fabric of Space is relaxed at both A and B positions, recalling that the Fabric of Space is the wave we ride - where the 3D space and all dilators (matter) are located.\n\nAt first sight you might think that the objects at positions A and B are at rest with respect to one another. This would be the case if the 3D Hypersurface were not expanding at the speed of light. In reality they are traveling away from each other at their Hubble velocity. Let their distance be X; then they are receding from each other at X/R*c",
null,
"This can be easily derived if one considers that the radial expansion takes place at the speed of light, together with the geometry of the figure.\n\nThat is why one can only see things up to one radian in each direction.\nThe first thing to notice is that it is not clear how to measure the local deformation of the Fabric of Space. This topology (hyperspherical expanding 3D Universe) is connected to the 4D Big Bang proposed in the Cosmogenesis paper.\nUnder that assumption (Big Bang = 4D explosion), the average velocity in any region of the 3D space is the Hubble velocity, that is, farther regions contain celestial objects that on average run away from us at the Hubble speed. What one can distinguish from one region to another is the width of that distribution. When most of the interactions took place a long time ago, the distribution will be sharper. A sharper distribution of receding velocities means that the objects are close to their Fabric of Space relaxed positions.\nThe comment requires that object A, under the action of an impulse, reach its Hubble velocity at position B. Velocity is a tilt in the local Fabric of Space, and that tilt should make the local normal parallel to the Fabric of Space normal at position B. The diagram that would reflect that condition would be the following:",
null,
"For the distance between the objects to diminish, the tilt (velocity) has to be larger than the Hubble tilt (velocity) between the two objects; thus the proposed experiment doesn't make sense, and that is the reason for the paradox.",
null,
""
] | [
null,
"http://hypergeo.s3.amazonaws.com/static/media/uploads/blog/R-FI_Cross-Section.jpg",
null,
"http://hypergeo.s3.amazonaws.com/static/media/uploads/blog/SilverSurfer3.jpg",
null,
"http://hypergeo.s3.amazonaws.com/static/media/uploads/blog/SilverSurfer5.jpg",
null,
"http://www.maploco.com/vmap/3136089.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92442906,"math_prob":0.87550807,"size":2219,"snap":"2019-13-2019-22","text_gpt3_token_len":443,"char_repetition_ratio":0.18194132,"word_repetition_ratio":0.016129032,"special_character_ratio":0.2577738,"punctuation_ratio":0.06234414,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9660175,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-22T18:49:28Z\",\"WARC-Record-ID\":\"<urn:uuid:36a9d8e2-49b0-4fff-a6de-d1e26212db1f>\",\"Content-Length\":\"32638\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6223bccd-d71f-4754-be50-d278a856bf90>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff7017bb-acb2-4458-ac89-251ee6b42d77>\",\"WARC-IP-Address\":\"52.200.81.157\",\"WARC-Target-URI\":\"http://hypergeometricaluniverse.com/blog/archive/2007/7/?page=3\",\"WARC-Payload-Digest\":\"sha1:UMXH2CL2BK3JKNUN76GOCU6ZLXFHLFXF\",\"WARC-Block-Digest\":\"sha1:SS5YY3ECOWVQOJ3KPHXACGD23JOVCEZE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256948.48_warc_CC-MAIN-20190522183240-20190522205240-00022.warc.gz\"}"} |
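The recession rule stated in the post, that objects at 3D separation X recede at (X/R)·c and nothing beyond one radian is visible, is a one-line computation. A sketch (the radius value below is a placeholder, not a figure from the post):

```python
C = 299_792_458.0   # speed of light, m/s

def recession_speed(x, radius):
    """Recession speed in the expanding-hypersphere picture: v = (X/R)·c.

    Only separations up to one radian of the hyperspherical surface
    (X <= R) are visible in this model, hence the guard.
    """
    if x > radius:
        raise ValueError("separation beyond one radian is not visible")
    return (x / radius) * C

R = 1.0e26   # placeholder 4D radius in metres
print(recession_speed(0.5e26, R) / C)   # -> 0.5, half the speed of light
```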
http://ostts.org/PreCalc1b.html | [
"",
null,
"",
null,
"A Precalculus Outline",
null,
"Objective: Given information about a function write an expression for f(x) and find its domain.",
null,
"Objective: Given a function, evaluate the function at a specified value and simplify the result.",
null,
"Objective: Given a piecewise-defined function, evaluate the function at a specified value and simplify the result.",
null,
"Objective: Given a function, write its implied domain using interval notation.",
null,
"Objective: Apply the substitution principle to evaluate expressions.",
null,
"Objective: Simplify numerical and algebraic expressions.",
null,
"Objective: Given a rational expression, correctly determine which numbers must be excluded to prevent division by zero.",
null,
"Objective: Given a radical expression, determine which numbers must be included to produce real number results.",
null,
"Objective: Given functions f(x) and g(x), write (f+g)(x) and state its domain using interval notation.",
null,
"Objective: Given functions f(x) and g(x), write (f-g)(x) and state its domain using interval notation.",
null,
"Objective: Given functions f(x) and g(x), write (fg)(x) and state its domain using interval notation.",
null,
"Objective: Given functions f(x) and g(x), write (f/g)(x) and state its domain using interval notation.",
null,
"Objective: Given a function f(x), write the difference quotient for f(x).",
null,
"Objective: Apply the substitution principle to evaluate functions.",
null,
"Objective: Simplify numerical and algebraic expressions.",
null,
"Objective: Given a rational expression, correctly determine which numbers must be excluded to prevent division by zero.",
null,
"Objective: Given a radical expression, determine which numbers must be included to produce real number results.",
null,
"Objective: Given a function, determine its implied domain.",
null,
"Objective: Given two intervals, write their intersection as an interval.",
null,
"Objective: Simplify an expression in the form of a complex fraction.",
null,
"Objective: Simplify a radical expression by rationalizing its denominator using conjugates.",
null,
"Objective: Given a function, sketch its graph, state its domain using interval notation, identify any intercepts, and analytically test for symmetry.",
null,
"Objective: Given a function, analytically determine if a function is even, odd, or neither.",
null,
"Objective: Given the graph of a function, analyze the graph to write its domain, range, any intercepts, zeros, solutions to a function inequality and a function equation, intervals of increasing and decreasing, local and absolute extrema, and state the function as e",
null,
"Objective: Approximate the local and absolute extrema and determine intervals of increasing and decreasing regarding the graph of a function.",
null,
"Objective: Sketch the graph of a piecewise-defined function.",
null,
"Objective: Sketch and describe the graph of the greatest integer function.",
null,
"Objective: Apply the substitution principle to evaluate functions.",
null,
"Objective: Complete a table of ordered pairs that satisfy a given function.",
null,
"Objective: Using a table of ordered pairs, accurately plot and correctly connect points to sketch the graph of an function.",
null,
"Objective: Describe and demonstrate symmetry with respect to the y-axis.",
null,
"Objective: Describe and demonstrate symmetry with respect to the origin.",
null,
"Objective: Given an equation, substitute (-x) for x and simplify the result.",
null,
"Objective: Given an equation, substitute (-y) for y and substitute (-x) for x and simplify the result.",
null,
"Objective: Given the graph of a function, describe and sketch the graph as a result of a horizontal shift transformation.",
null,
"Objective: Given the graph of a function, describe and sketch the graph as a result of a horizontal scaling transformation.",
null,
"Objective: Given the graph of a function, describe and sketch the graph as a result of a vertical scaling transformation.",
null,
"Objective: Given the graph of a function, describe and sketch the graph as a result of a vertical shift transformation.",
null,
"Objective: Given the graph of a function, describe and sketch the graph as a result of a y-axis reflection transformation.",
null,
"Objective: Given the graph of a function, describe and sketch the graph as a result of an x-axis reflection transformation.",
null,
"Objective: Given the graph of y = f(x), describe and sketch the graph of g(x) = a f(bx + h) + k using transformations.",
null,
"Objective: Given the graph of a function y = f(x) and the graph of y = g(x), find a formula for g(x) based on transformations of the graph of f(x).",
null,
"Objective: Given the graph of a function, write a table of ordered pairs that represents the function.",
null,
"Objective: Given a set of ordered pairs, correctly plot and connect points to form the graph of a function.",
null,
"Objective: Given a point, plot its symmetric point with respect to the y-axis.",
null,
"Objective: Given a point, plot its symmetric point with respect to the x-axis.",
null,
"Objective: Utilize correct function notation to find function values."
] | [
null,
"http://ostts.org/Stitz_Zeager_Precalculus_Book_By_Section_files/shapeimage_1.jpg",
null,
"http://ostts.org/images/Grainline.gif",
null,
"http://www.OSTTS.org/images/HelpButtonSmall.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8013305,"math_prob":0.98902833,"size":4815,"snap":"2023-40-2023-50","text_gpt3_token_len":984,"char_repetition_ratio":0.26688838,"word_repetition_ratio":0.469657,"special_character_ratio":0.21225338,"punctuation_ratio":0.15603645,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996377,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T15:56:42Z\",\"WARC-Record-ID\":\"<urn:uuid:46c81e87-78d2-4378-ba80-3d9189db3b02>\",\"Content-Length\":\"29256\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:82645687-b750-4978-adfc-280801bc34a6>\",\"WARC-Concurrent-To\":\"<urn:uuid:251eb8c0-5d38-4259-96c0-6816d08ed665>\",\"WARC-IP-Address\":\"74.208.236.150\",\"WARC-Target-URI\":\"http://ostts.org/PreCalc1b.html\",\"WARC-Payload-Digest\":\"sha1:ELE5WMNPF3OGAMBLXXHYPF5DWASYXVRQ\",\"WARC-Block-Digest\":\"sha1:J3NMPQRXMAESRM5C65CKYQXM4XQHG64C\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510903.85_warc_CC-MAIN-20231001141548-20231001171548-00627.warc.gz\"}"} |
https://it.b-ok.org/book/445486/8792fc | [
"Pagina principale Kalman filtering theory\n\n# Kalman filtering theory\n\nThis is a textbook intended for a one-quarter (or one-semester, depending on the pace) course at the graduate level in Engineering. The prerequisites are Elementary State-Space theory and Elementary (second-order Gaussian) Stochastic Process theory. As a textbook, it does not not purport to be a compendium of all known work on the subject. Neither is it a ''trade book.'' Rather it attempts a logically sequenced set of topics of proven pedagogical value, emphasizing theory while not devoid of practical utility. The organization is based on experience gained over a period of ten years of class room teaching. It develops those aspects of Kalman Filtering lore which can be given a firm mathematical basis, avoiding the industry syndrome manifest in professional short courses: ''Here is the recipe. Use it, it will ''work''!''\nAnno:\n1984\nEditore:\nOptimization Software, Inc., Publications Division\nLingua:\nenglish\nPagine:\n236\nISBN 13:\n9780911575262\nISBN:\n091157526X\nSeries:\nUniversity series in modern engineering\nFile:\nDJVU, 4.24 MB\n\nAnno:\n2001\nLingua:\nenglish\nFile:\nDJVU, 1.64 MB\n\nAnno:\n2001\nLingua:\nenglish\nFile:\nDJVU, 2.85 MB\n\nAnno:\n2006\nLingua:\nenglish\nFile:\nPDF, 22.96 MB\n\nAnno:\n2011\nLingua:\nenglish\nFile:\nPDF, 6.19 MB\n\nAnno:\n2014\nLingua:\nenglish\nFile:\nPDF, 4.59 MB\n\n## Most frequently terms\n\nYou can write a book review and share your experiences. Other readers will always be interested in your opinion of the books you've read. Whether you've loved the book or not, if you give your honest and detailed thoughts then people will find new books that are right for them.\n1\n\nRok:\n1985\nJęzyk:\nenglish\nFile:\nDJVU, 753 KB\n2\n\n### Wavelets, multiscale systems and hypercomplex analysis\n\nRok:\n1980\nJęzyk:\nenglish\nFile:\nPDF, 1.26 MB\n```UNIVERSITY SERIES IN MODERN ENGINEERING Managing Editor: A.V. 
Balakrishnan, School of Engineering, University of California, Los Angeles, California 90024 USA

SYSTEMS & SIGNALS
N. Levan
1983, x + 173 pp. ISBN 0-911575-25-1 Optimization Software, Inc. ISBN 0-387-90900-1 Springer-Verlag New York Berlin Heidelberg Tokyo ISBN 3-540-90900-1 Springer-Verlag Berlin Heidelberg New York Tokyo

ELEMENTS OF STATE SPACE THEORY OF SYSTEMS
A.V. Balakrishnan
1983, vii + 187 pp. ISBN 0-911575-27-8 Optimization Software, Inc. ISBN 0-387-90904-4 Springer-Verlag New York Berlin Heidelberg Tokyo ISBN 3-540-90904-4 Springer-Verlag Berlin Heidelberg New York Tokyo

KALMAN FILTERING THEORY
A.V. Balakrishnan
1984, xii + 222 pp. ISBN 0-911575-26-X Optimization Software, Inc. ISBN 0-387-90903-6 Springer-Verlag New York Berlin Heidelberg Tokyo ISBN 3-540-90903-6 Springer-Verlag Berlin Heidelberg New York Tokyo

A.V. BALAKRISHNAN, KALMAN FILTERING THEORY. OPTIMIZATION SOFTWARE, INC., PUBLICATIONS DIVISION, NEW YORK

Author: A.V. Balakrishnan, School of Engineering, University of California, Los Angeles, California 90024 USA

Library of Congress Cataloging in Publication Data: Balakrishnan, A.V. Kalman filtering theory (University series in modern engineering). Bibliography: p. Includes index. 1. Control theory 2. Estimation theory 3. Kalman filtering. I. Title. II. Series. QA402.3.B287 1984 003 83-25122 ISBN 0-911575-26-X

© 1984 by Optimization Software, Inc., Publications Division, New York. All rights reserved. Printed in the United States of America. No part of this publication may be reproduced in any form or by any means without prior permission in writing from the publisher. Worldwide Distribution Rights by Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, New York 10010, USA and Springer-Verlag Berlin Heidelberg New York Tokyo, Heidelberg Platz 3, Berlin-Wilmersdorf 33, West Germany.

ABOUT THE AUTHOR

Professor A. V.
Balakrishnan is the founding Chairman of the Department of System Science in the School of Engineering and Applied Science at UCLA. He is a Fellow of the IEEE, and past Chairman of the IEEE Information Theory Group. He has received a Certificate of Recognition from the National Aeronautics and Space Administration for his scientific contributions to the processing of Flight Test data. More recently, he was awarded the Guillemin Prize for his many contributions to Communication and Control Systems Engineering. He is also the recipient of the Silver Core Award from the International Federation of Information Processing. He is the author of several books and monographs in Applied Mathematics and Engineering, and serves on the Editorial Boards of numerous journals and monograph series.

ERRATA (page, line | as was | change to)

Preface, +5 | not not | not
12, -1 | converge as n -> oo. | converge as n -> oo, even though bounded.
73, +8 | (P - P_c) | (P_c - P)
73, -5 | P_0 > A; | P_0 < A;
73, -3 | P_0 >= A. | P_0 <= A.
79, +9 | is white also. It is... | is white also. We assume furthermore that the signal and noise processes are mutually independent; or, equivalently, that the white noise processes N_n and K_n are mutually independent. It is...
80, +1 to +3 | We shall ... F_n G* ... implying ... | (delete)
80, +4 | cesses. Another... | Another...
89, -7 | U (C_n R_n ... | U sqrt(C_n ...)
92, +9 | v_{n-1} | v_{n-1} + V_{n-1}
95, -2 | ..._{n-1} + B_{n-1} U_{n-1} ... | ..._{n-1} + U_{n-1} ...
97, -11 | C_n R_n C_n* (I + C_n R_n ... | C_n R_n C_n* (G_n G_n* + C_n R_n ...
98, +10 | v_n - C_n x_n | v_n - m_n
100, +12 to +13 | equal to the variance of the innovation and is | (delete)
101, 102 | all | pages 101 and 102 are interchanged; replace each with the other

Please turn over
132 1 +8 | Show that (A^l?) is not } Controllable if P — (delete) I +9 is singular. — (delete) 170 +12 aN a11 vn I VN 216 +7 CG^XI^G^)^^*)^ (I + ^^(G^)-1) U (GA*-CAC*)\n\nUNIVERSITY SERIES IN MODERN ENGINEERING\n\nCONTENTS PREFACE ...,..,« ix NOTATION . . . . . xx CHAPTER 1. REVIEW OF LINEAR SYSTEM THEORY 1 CHAPTER 2. REVIEW OF SIGNAL THEORY . 13 CHAPTER 3. STATISTICAL ESTIMATION THEORY 34 3.1. Parameter estimation: the Cramer-Rao bound; the principle of maximum likelihood ...... 35 3.2. Bayesian theory of estimation: optimal mean square estimates and conditional expectation 46 3.3. Gaussian distributions: conditional density; unconditional maximum likelihood; mutual information ... ....... . 49 3.4. Gram-Schmidt orthogonalization and covariance matrix factorization ...... . 59 3.5. Estimation of signal parameters in additive noise 64 3.6. Performance degradation due to parameter uncertainty .......... 72\n\nVIII CONTENTS CHAPTER 4. THE KALMAN FILTER . . . . . . 76 4.1. Basic theory .... ..... . 77 4.2. Kalman filter; steady state theory . . . . . 100 4.3. Steady state theory: frequency domain analysis ..... 133 4.4. On-line estimation of system parameters . . . 149 4.5. (Kalman) smoother filter 171 4.6. Kalman filter: correlated signal and noise ................ 184 4.7. Kalman filter for colored (observation) noise . 193 4.8. Example ....... 200 CHAPTER 5. LIKELIHOOD RATIOS: GAUSSIAN SIGNALS IN GAUSSIAN NOISE ............. 211 BIBLIOGRAPHY • . . . 219 INDEX . 221\n\nPPEFACE This is a textbook intended for a one-quarter (or one- semester, depending on the pace) course at the graduate level in Engineering, The prerequisites are Elementary State-Space theory and Elementary (second-order Gaussian) Stochastic Process theory. As a textbook, it does not not purport to be a compendium of all known work on the subject. 
Neither is it a \"trade book.\" Rather it attempts a logically sequenced set of topics of proven pedagogical value, emphasizing theory while not devoid of practical utility, The organization is based on experience gained over a period of ten years of class room teaching* It develops those aspects of Kalman Filtering lore which can be given a firm mathematical basis, avoiding the industry syndrome manifest in professional short courses: \"Here is the recipe * Use it, it will \"work\"!\" The first two chapters cover review material on State- Space theory and Signal (Random Process) theory — necessary but not sufficient for the sequel„ The third chapter deals with Statistical Estimation theory, the mathematical framework\n\nX PREFACE on which Kalraan Filtering rests. The main chapter is the fourth chapter dealing with the subject matter per se. It begins in Section 4.1, with the basic theory and formulas, making a compromise in generality between too many obscuring details and too little practical application* Thus we consider only the case where the observation noise is white and is independent of the signal, although we allow the system to be time-varying. Because of the uncertainty in the initial covariances, in practice no Kalman filter can be optimal except in the steady state — and this is by far its important use. Hence Section 4.2 specializes to time-invariant systems and considers asymptotic behavior of the filter. Section 4.3 examines the steady-state results from the frequency-domain point of view, relating them to the more classical transfer- function approach. In Section 4,4 we study a canonical application of Kalman filtering: to System Identification. In Section 4.5 we study the !'Kalman smoother'1: the on-line version of two-sided interpolation,. 
In Sections 4.6 and 4.7 we study generalizations of the basic theory of Section 4.1; thus we allow the signal and noise to be correlated in Section 4.6, and allow the observation noise to be non-white in Section 4.7. Section 4,8 features a simple example which illustrates some of the theory and techniques discussed in the chapter. The book concludes with a chapter on Likelihood Ratios in which the Kalman filter formulation plays an essential role. We only consider discrete-time models throughout, since all Kalman filter implementation envisaged involves digital computation. The problems accompanying each chapter serve the traditional role of testing the student!s comprehension of the text, with an occasional foray into areas of contiguous interest.\n\nNOTATION Square Matrices I - Identity Matrix Tr, = Trace |A| Determinant of A Rectangular Matrices A =* (Conjugate) Transpose of A [A, b: « Tr. AB* 1 A|| - vTA, A3' {a. .} ~ Matrix with entries a. . 1 Column' vector v ~ v v is 1*1 'Row1 vector c - cc is lxl\n\nXII NOTATION Self-adjoint Matrices * A Self-adjoint ~ A = A A Nonnegative ~ CAx, xJ > 0 for every x A > B ¦*—»- (A-B) is nonnegative definite Gradient of a Function If gF) is a scalar function of 8: Gradient of gF) - ?egF), where (VegF))h - ^g@ + Ah) j V0gF) is l*rc if G is mxl Random Variables E(*) - Expected Value E(»]*) - Conditional Expectation p(*) - Probability Density Function\n\nChapter 1. REVIEW OF LINEAR SYSTEM THEORY A Kalman filter is a linear system. This chapter presents a brief review of Linear Systems theory from the \"state- space'1 point of view, since the Kalman filter is best described in that way. For an introductory treatment of State Space theory, the reader is referred to . More advanced treatments may he found in , , > among other texts. A system is characterized by its \"input,\" its \"state\" and the \"output.\" These are functions of time. Time may be continuous or discrete — in the latter case time is indexed by the integers. 
We shall only be concerned with the discrete case in this book. Let {u } denote the input and {v > the output. A n * n * linear system for our purposes is then completely characterized by a \"state-input\" equation x«j.i & A^x„ + B^u« A.1) n+1 n n n n\n\n2 CHAPTER 1. REVIEW OF LINEAR SYSTEM THEORY and by an \"outpui-state-input\" equation vn = Cnxn + Dnun • <X'2) where A is a square matrix and B , C , D are rectangu- n n n n lar matrices. If the state-space dimension is p, then A will be p x p. If the input sequence is such that each u is q x 1, then B will be p x q. If the output sequence is such that each v is m x 1, then C will be m * p and D will be m x q. We can \"solve\" A.1), A.2) or express the output in terms of the state at some initial time and the input. Thus we have, taking the initial (or starting) time to be k: n-1 x n *n,X*k * jk *n,i+lBiui ' A'3> where \\\\> . f called the State-Transition Matrix, is defined by +n.k \" An-1 •'• \\ • k < n-1 ; A.4) t ty ~ I (Identity Matrix) Note that it has the \"transition\" property: ¦ i *i .„ = ¦ - A.5) rn,krk,m Yn,m From A-3) which specifies the state at any time n, n _> k, the output is readily expressed explicitly in terms of the \"initial\" state and the current input as: X * Here and throughout, the letter I will always denote the identity matrix regardless of dimension.\n\nCHAPTER 1. REVIEW OF LINEAR SYSTEM THEORY 3 n-1 v * C Tb , x, + 7 C \\b . ,., B.u. + JD u , A.6) vn nrn,k k A?k n rn,i+l x l n n v ' Here the first term is the \"initial state\" (or initial condition) response and the second term is the \"input response,\" The function Wn,i m Cn *n,i+lBi • i < n , A.7) = r> i = n , is referred to as the \"weighting matrix\" or \"weighting pattern\" of the system. 
Time Invariant Systems We are most concerned with the case where the system is the \"time invariant,\" where the system matrices are all independent of time: An = A , Bn = B , Cn \" C ¦ D - D a n so that A.1), A.2) become: n+1 Ax + Bu n n vn \" Cxn + D% A.8) In this case the state-transition matrix\n\nIf CHAPTER 1. REVIEW OF LINEAR SYSTEM THEORY and hence V* = *** , n-1 . ., X = An K x. + I An 1\"*X Bu. A.9) n K i=k 1 and the output _, n~l _*_i v„ ^ CAn_J? x^ + J CAn i\"\"L Bu. + Du A.10) Note the \"time-invariance\" property: the response (state or output) is invariant with respect to any time translation. In particular, it is customary to set the initial time to zero (k = 0 in A.9), A.10)). The system weighting pattern depends also only on the time difference; = D n =* i It is more convenient now to write W. «= CAkB , k > 0 . A.11) Then A.10) becomes: vn - CA% + T Wn-l-i Ui + Dun • n > ° We can combine the second and third terms and write n vn \" CA x0 + I wiV~i A.10a)\n\nCHAPTER 1. REVIEW OF LINEAR SYSTEM THEORY 5 by defining = D , 1 = 0. Of the many descriptive properties of time invariant systems defined by A-8) we are mainly interested in three. These are: STABILITY CONTROLLABILITY OBSERVABILITY . Let us discuss these in turn. Stability We shall say that a state x (x e Rp, the linear space of p * 1 matrices) is stable (or \"A-stable\") if lim Anx = 0 . A.12) n In reference to A.9), this means that asymptotically the initial conditions term therein will vanish, and similarly also in A.10). The class of all stable states is a linear subspace. \"We call this the stable subspace. A system is stable if all its states are stable. A necessary and sufficient condition for a system to be stable is that the eigenvalues of A be all (strictly) less than 1 in magnitude. In that case we also say that the the matrix A is stable. We note that A is stable if A A is stable, although not conversely.\n\nb CHAPTER 1. 
REVIEW OF LINEAR SYSTEM THEORY For a stable system, the weighting pattern of the system can be characterized in terms of its Fourier transform — called the input-output \"transfer function,f: KA) - I w, e2irxXk , -J < X < i , A.13) 0 K - D + e2lriXcf Ak e2*ikX B . Let z be a complex variable. Then 0 K converges for |zj ? 1 and is called the z-transform. From A,11) we have readily that T w. zk = zCil-zAy1^ + D , |z| < 1 . A-14) 0 K All the properties of the z-transform can be inferred from the transfer function: ip(\\) - C(I-e27TiXA)'1 B e27TiX + D , -i < X < i . A.15) (It is of interest to mention in this connection a problem of importance, even if beyond our scope: given a system transfer function iKA), when can we express it in the form A.15), for appropriate A, B, C?) Note that the z-transform A.14) continues to be defined even if A is not stable,\n\nCHAPTER 1. REVIEW OF LINEAR SYSTEM THEORY 7 for Is I < r Ior some r, 0 < r _< 1; and hence its importance. fvmtrollability We say that a state x is Controllable (reachable is a better word) if it can be reached from the zero state in some finite number of steps by an appropriate input. More precisely., x is \"Controllable\" if for some n and some {u.}, xk+l \" Axk + Buk ' ° 1 k 1 n^1 > xn - x ; (or, equivalently, in terms of the explicit expression A.9) for the state any time, n-1 x - I 0 I A*1\"^ Buk ). A.16) The controllable states form a linear subspace which we label the \"Controllable Subspace.n When the latter is the whole state space then we say that the state space is Controllable. We shall then abbreviate this to: (A ^ B) is Controllable. A necessary and sufficient condition for Controllability is that the (compound) \"Controllability Matrix\" | B AB ... AP_1B | A.17) has full rank. Or, equivalently, the p * p matrix\n\n8 CHAPTER 1. REVIEW OP LINEAR SYSTEM THEORY Rp = 7 AJBB A J C 0 is nonsingular. (Recall that A is p x p.) Moreover, the subspace of controllable states is precisely the range of H^. 
If the state space is controllable, we can express the output v entirely in terms of the input history. To begin with, we have k-1 , *0 - I AJB u^j for some k, and we can rewrite this reversing time as: -1 x„ = I A 1 J Bu. , where In other words, we may think of the initial state Xq being accounted for in terms of an appropriate input history. Moreover, we can then write the current state x in the form: xn - An \"j A-^J Bu.. + Y A11 Bu. , n-1 ., . - I A* Bu, , ~k i where ui - u± 0 < i < n-1 , Gi -k < i < -1 ,\n\nCHAPTER 1. REVIEW OF LINEAR SYSTEM THEORY We can further rewrite this in the \"generic\" form: n-1 r» t ¦? where the input sequence u. is zero for -<*> < i <^ -N« In turn, we can express the output as: vn ^ Wn-l-i ui + Du„ ' _oo l W, u 1 . + Du 5 J n-l-j n \\$Vn-J • A-18) In other words, the output is expressed entirely in terms of the input without introducing state, as a consequence of controllability. Moreover, it is possible (although it is beyond our present scope) to deduce the \"s,tate~spaceM description in which the state space is controllable, starting from A.18). Remark. From A,10a) we note that if the system is stable, the first term in A.10a) goes to zero for large n, so that the representation A.18) holds \"asymptotically\" for stable systems. Observability To introduce the notion of Observability, let us begin with a problem — one which is not without practical importance. Let us assume that the system is known: A, B, C and\n\n10 CHAPTER 1. REVIEW OF LINEAR SYSTEM THEORY D are given. We are also given a sequence of input-output pairs: (u>,v.), i = l,...,n, u. being the input and v. the output. Can we determine the corresponding states x-, ..., x from this data? To answer this question we may proceed as follows. Since i-1 CA1x:t + I CA1^1^ Buk , i = l,...,n , let us subtract the response to the known input and define i-1 Then we have v. = v. - J CA1 1 k Bu> 1 i ? k v± = CA1 1x1 , i = l,...,n . 
A,19) If we can determine x1 from this, then of course X. - A1\"*, + \\ A^1^ Bu. x 1 ?* will determine the succeeding states for us. Now we may regard A,19) as a set of n \"equations\" to solve for x^. Moreover, the equations being linear, we see that A.19) has a unique solution only if the homogeneous equation 0 - CA1^1* , i~ lf...,n , A.20) has no \"nonzero\" solution z. Recalling that the state space is of dimension p and that A is p x p, we note that 0 = CA1^ , 1=1,...^ , A.21)\n\nCHAPTER 1. REVIEW OF LINEAR SYSTEM THEORY 11 implies A.20) for n _> p. We now define: a state x is Unobservable if CAkx - 0 , K > 0 . A.22) The class of unobservable states is clearly a linear subspace. Its orthogonal complement is called the Observable Subspace, We say that the state space is Observable if the subspace of unobservable states contains only the zero state. Let p be the dimension of the state space. Then the state space is observable if and only if p-1 +. * Rn - J A JC CAJ 0 0 is nonsingular. Or, equivalently, the (compound) matrix I C I CA I CAP I is nonsingular. Moreover, if the state space is observable, we can, going back to A.19), determine x.. as: -1 ? *i-l ~* ~ xx - nQx I A x C v. . A-23) \"We also use the notation \"(C ii A) Observable'1 to denote that the state space is observable. Note, in particular, that * * \"(C^A) Observable\" is equivalent to M(A ^C ) Controllable.\"\n\n12 CHAPTER 1. REVIEW OF LINEAR SYSTEM THEORY it PROBLEMS it Problem 1.1 Suppose (C ^ A) is observable. If (I - CK) is non- singular, then (C *v (I - KC)A) is also observable. Hint: C(I-KC)Ax = 0 => (I-CK)CAx =* 0 -> CAx *= 0 , hence C(I - KC)A(I -KC)Ax => 0 , => C(I-KC(A2x - 0 , <=> (I~CK)CA2x - 0 , *=> CA2x = 0 , etc. Problem 1.2 Suppose A is nonsingular. 
Show that (C ^ A~ ) is observable if (C ^ A) is* Problem 1,3 Show that (C^A) Observability is equivalent to: (C C ^ A) Observability (Vc C a, A) Observability Problem 1.4 Construct a (square) matrix A such that for some x IIAnxll does not converge as n + c°»\n\nChapters REVIEW OF SIGNAL THEORY In this chapter we present a brief review of the salient facts about Signals essential in the sequel. For more details, including details of proofs, the references [13, 14, 17, 38, 19] may be consulted. As in the case of Systems in Chapter 1, we shall consider only the \"discrete time\" case where the independent variable may then be replaced by integers. Thus we shall use the notation: s for the n sample, counting from some (arbitrary) initial sample (time), (The samples need not necessarily be taken at some .fixed rate, although that would be the most common situation.) We take each s to be an m * 1 column vector. Since the initial starting time is arbitrary, we may allow n to be positive as well as negative as need arises. Although it would be impossible in any physical device to process a non-finite number of samples, it would be equally\n\nIk CHAPTER 2. REVIEW OF SIGNAL THEORY unrealistic to limit the number of samples to be a fixed finite number, fixed once and for all. Hence we idealize our signals as nonterminating sequences: \"tsn^» n ~ 1»2>««'» running through all the positive integers; or, as necessary, the negative integers as well. Spectral Theory of Signals with Finite Power 4- We say a signal sequence {s } has finite energy if lim I ||s ||2 < - . N-*» -N \" We shall have little to do with the associated theory. We say that a signal {s } has Tlfinite power\" Actually we shall demand a little bit more than B.1). (This is usually assumed implicitly in the engineering literature.) Thus, following Wiener's definitive work , we shall assume that N iim 4 I snsnn exists and is finite for each m > 0. This does NOT necessarily follow from B.1)! 
Note that in what follows we omit the qualification "sequence."

R_m = lim_{N→∞} (1/2N) Σ_{n=-N}^{N} s_n s_{n+m}* .

Of course,

R_{-m} = R_m* .  (2.2)

R_0 = P_s (signal covariance) .

A typical example of a signal with finite power is:

s_n = a cos (2πnλ₀ + θ) ,  -1/2 < λ₀ ≤ 1/2 ,

which is periodic if λ₀ is rational. Note in this case:

R_m = aa* lim_{N→∞} (1/2N) Σ_{n=-N}^{N} cos (2πnλ₀ + θ) cos (2π(n+m)λ₀ + θ)
    = (aa*/2) cos 2πmλ₀ ,  λ₀ ≠ 0 .

Note that

R_m = ∫ e^{2πiλm} (aa*/4) (δ(λ - λ₀) + δ(λ + λ₀)) dλ

and, for λ₀ ≠ 0, is the same whatever the "phase angle" θ. This result can be generalized. We have in fact the "spectral representation" theorem due to Bochner-Khinchin-Wiener:

R_m = ∫ e^{2πiλm} p_s(λ) dλ ,  (2.3)

where p_s(λ) is self-adjoint, nonnegative definite (and of course may contain delta-functions) and is known as the spectral density of the signal. In fact, it may be obtained by the Fourier series expansion:

p_s(λ) = Σ_m e^{-2πiλm} R_m ,  (2.4)

or: the {R_m} are the Fourier coefficients of p_s(·). From (2.4) we have that

p_s(λ)* = p_s(λ) .

Let such a signal {s_n} be the input to a linear system with transfer function ψ(λ). Let {v_n} denote the corresponding output, so that

v_n = Σ_{k=0}^{∞} w_k s_{n-k} ,  ψ(λ) = Σ_{k=0}^{∞} w_k e^{2πikλ} ,  Σ_{k=0}^{∞} ‖w_k‖² < ∞ .

Then

(1/2N) Σ_{n=-N}^{N} v_n v_{n+m}* = Σ_j Σ_k w_j ((1/2N) Σ_{n=-N}^{N} s_{n-j} s_{n+m-k}*) w_k* ,

and upon taking limits as N → ∞, we obtain

lim_{N→∞} (1/2N) Σ_{n=-N}^{N} v_n v_{n+m}* = Σ_j Σ_k w_j R_{m+j-k} w_k*
    = ∫ e^{2πiλm} ψ(λ) p_s(λ) ψ(λ)* dλ

(where * indicates "conjugate-transpose"). In other words, the output {v_n} has "time average" properties similar to those of the input {s_n}, and the spectral density of the output is

p_v(λ) = ψ(λ) p_s(λ) ψ(λ)* .  (2.5)

In the special case where the system has the structure:

v_n = C x_n  (2.6)
x_{n+1} = A x_n + F s_n ,

we see (assuming A is stable) that

v_n = Σ_{k=0}^{∞} C A^k F s_{n-1-k} ,

where now

ψ(λ) = Σ_{k=0}^{∞} C A^k e^{2πikλ} F = C(I - Ae^{2πiλ})⁻¹ F ;

18 CHAPTER 2.
REVIEW OF SIGNAL THEORY so that we have for the spectral density; PVU) « C(I-Ae2^)^1rps(X)F*(I-AV27riX)'*1 e* . B.7) Stochastic Signals: Second Order Theory In his pioneering work fl8], Wiener used the theory sketched above for the description of the signals. This theory is a \"steady state1' theory. It is possible to obtain a more general theory which enables us to include the \"transient\" or \"nonsteady state1' analysis and at the same time make the steady state analysis easier. This is accomplished by the introduction of stochastic signals, including a \"signal generation\" theory, which enables us to construct a signal with given spectral density. A stochastic signal for us is a random process (sequence) (or time series) — a sequence of random variables whose joint distributions of any order are given (or calculable, in principle). A Gaussian stochastic signal is one whose joint distributions are all Gaussian. We shall only need to consider Gaussian signals in Kalman filtering theory. More generally, in linear filtering theory we shall be concerned only with moments up to the second order (means and covariences); and hence we shall need only \"second order\" theory. On the other hand, we may replace the gxven process by a Gaussian with the same means and covariances, and hence we may as well consider only Gaussian processes.\n\nCHAPTER 2. REVIEW OF SIGNAL THEORY 19 We say that a stochastic signal {sn} is (second order) stationary if E [s ] is independent of n (where here and below E [•] denotes expectation or \"phase average\") and the covariance E[(sn-E[sn])(s;+m-E[s;+m])] « Rm . In other words, a \"time translation\" does not make any difference. In particular, if the process is Gaussian, then the density functions are also invariant with respect to a \"time shift\": p(sn' sn+l' \" - ' Sn+p} = p(sn+m' sn+m+l' - * > Sn+p+m) for all n, ra and p. For a second order stationary process we have again the Wiener-Khinchin-Bochner theorem (cf. 
C143): , f _27TimX = j e'™ p(X) dX , m -i where p(X) is the \"spectral density\" of the process (and may contain S-functions, or more strictly speaking: = J e27rimX dpa) where P(X) is called the spectral distribution). In the case dP(X) *= p(X) dX (i.e., the process has a spectral density) it may be determined by the Fourier series:\n\n20 CHAPTER 2. REVIEW OF SIGNAL THEORY Note that p(X) is self-adjoint and nonnegative definite. Also so that P(-X) - I e-2*lmX R* Pii(X> = Pii(-X) > ° where p..(X) denotes the diagonal terms. The problem of estimating p(X) from one \"long\" realization of the process is referred to as \"spectral estimation\" and is an important one, although beyond the scope of the present work. Let us observe now that the spectral theory of signals with finite power that we started with, shows strong resemblance to the spectral theory of stochastic signals. We can make this connection more precise by invoking the central result of ergodic theory: For a stationary Gaussian process which has a continuous spectral distribution function, the \"phase average\" may be replaced by the time average. Thus, if there are no \"jumps\" in the spectral distributions (or no \"delta functions\" in the spectral density, in engineering language), then N E[f(sn)J = lim ± It f(sk) so that in particular\n\nCHAPTER 2. REVIEW OF SIGNAL THEORY 21 Also: 1 N E[s ] - lim m I s N E [snCm] \" i±m ^ I sks*+m In particular, therefore N * 1 v * H - Ess , ] = lim -j^tt > s. s, . m L n n+mJ ,Tv_ 2N %. k k+m Thus we may consider signals as ergodic Gaussian processes, so that we retain the \"time average\" notion as well as the phase average or statistical average notion. In this sense, modeling signals as stationary Gaussian signals is more general, and is the generally accepted modern view. 
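The Fourier pairing between the covariance sequence and the spectral density can be checked numerically in the simplest scalar case. The sketch below (numpy; the first-order model and every constant in it are assumptions made here purely for illustration) takes the stationary sequence generated by x_{n+1} = a x_n + f N_n with |a| < 1 and white noise N_n, whose covariance is R_m = a^{|m|} f²/(1 - a²) and whose spectral density is p(λ) = f²/|1 - a e^{-2πiλ}|², and verifies R_m = ∫ e^{2πimλ} p(λ) dλ by quadrature over one period:

```python
import numpy as np

a, f, m = 0.7, 1.3, 2                         # model constants (arbitrary)
R_exact = a ** abs(m) * f * f / (1 - a * a)   # stationary covariance R_m

K = 4096
lam = (np.arange(K) + 0.5) / K - 0.5          # midpoint grid on (-1/2, 1/2)
# spectral density of the first-order model: positive, period 1
p = f * f / np.abs(1.0 - a * np.exp(-2j * np.pi * lam)) ** 2
# midpoint-rule quadrature of the spectral representation integral;
# for a smooth periodic integrand this converges very rapidly
R_num = float(np.real(np.sum(np.exp(2j * np.pi * m * lam) * p))) / K
```

The quadrature agrees with the closed form essentially to machine precision, because the trapezoid/midpoint rule is spectrally accurate for smooth periodic integrands.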
Let a stationary (zero mean) Gaussian signal {s } with spectral density p^CX) be the input to a time invariant li- s near system and let the output \"be ^vnJ > so that v = Y w, s n o n Then the output {v } is also Gaussian and stationary with spectral density p (X): PV(X) = tpU) PS(X) «(X)* , where oo t <x> \\ *(X) = I wk e27rikX ( I l|wk||2 < - iS assumed] ,\n\n22 CHAPTER 2. REVIEW OF SIGNAL THEORY This can be proved essentially as in the deterministic case, but now using \"phase11 (statistical) averages: 00 K4 S D J i]j(x) p a) ^(A>* dx In particular, B.7) holds when the system is specified by B.6). Signal Generation Models A Gaussian signal (N } such that n E[Nn] = 0 , EtV0 = ° > n * m ' = I n = m , is called \"white noise11 with unit covariance. Let s =* C x n n n n+1 n n n n B.8) which is then a \"model\" for generating the signal {s }, referred to as \"white noise through a linear dynamic system.\" It turns out that for all practical purposes this is the only class of signals of interest in filtering theory. Let us now\n\nCHAPTER 2. REVIEW OF SIGNAL THEORY 23 study some properties of the signal process {s } so generated ¦ We assume that xft is Gaussian and is independent of the noise {N }} for n > 0, Then the process {x } — the n — n \"state\" process — is Gaussian and Markovian, Thus we can readily verify that Indeed, we have only to note that solving B.8): n n~l p p n-1 n-1 n-1 n-2 n~2 + A iA 0F „N 0 n-1 n-2 n-3 n-3 + • - • + A -A n*<- -A F N n-1 n-2 P p P the noise terms on the right being independent of x . Once again, this implies that all the memory resides in the state one step behind. Let us now calculate the means and covariance of {s }. n We have B[8n] - CnE[Xn] and - An-1 ••¦ V^ • In particular, s has zero mean if the initial state xQ has zero mean. Let us next examine the covariance structure. For sim-\n\nm CHAPTER 2. REVIEW OF SIGNAL THEORY plicity, we shall assume that E[x0] - 0 , so that E[xn] = 0 , n > 0 . 
(Otherwise we may work with the "centered" process

x̃_n = x_n - E[x_n] ,

which will satisfy the same dynamics and will have zero mean.) We have, letting

R(n,m) = E[x_n x_m*] ,

that

R(n+p,n) = A_{n+p-1} ··· A_n R(n,n) ,

and R(n,n) satisfies:

R(n+1,n+1) = A_n R(n,n) A_n* + F_n F_n* ;  R(0,0) = E[x₀ x₀*] .  (2.9)

And for the signal covariance we thus have:

E[s_n s_m*] = C_n R(n,m) C_m* .

Of special interest to us is the time invariant case where

A_n = A ,  F_n = F ,  C_n = C .

Then we have:

R(m,n) = A^{m-n} R(n,n) ,  m ≥ n ,

while

R(n,n) = A R(n-1,n-1) A* + FF* .  (2.10)

We can, of course, "solve" this difference equation to obtain

R(n,n) = A^n R(0,0) A*^n + Σ_{k=0}^{n-1} A^k FF* A*^k .

Of special interest to us is the case where A is stable. In that case, A^n R(0,0) A*^n → 0 and the series

Σ_{k=0}^{∞} A^k FF* A*^k

converges. Hence

lim_{n→∞} R(n,n) = Σ_{k=0}^{∞} A^k FF* A*^k .

Let us denote this limit by R_∞. Then

lim_{n→∞} E[x_n x_{n+p}*] = R_∞ A*^p ,  p ≥ 0 .  (2.11)

In other words, the Gaussian process {x_n}, and hence the signal process {s_n = C x_n}, is asymptotically stationary. Note that R_∞ satisfies

R_∞ = A R_∞ A* + FF* .  (2.12)

The spectral density corresponding to the asymptotically stationary process is

p_s(λ) = C p_x(λ) C* ,  (2.13)

where

p_x(λ) = Σ_{n=0}^{∞} e^{-2πiλn} A^n R_∞ + Σ_{n=1}^{∞} e^{2πiλn} R_∞ A*^n
       = (I - Ae^{-2πiλ})⁻¹ R_∞ + R_∞ ((I - A*e^{2πiλ})⁻¹ - I)
       = (I - Ae^{-2πiλ})⁻¹ [R_∞ (I - A*e^{2πiλ}) + (I - Ae^{-2πiλ}) R_∞
           - (I - Ae^{-2πiλ}) R_∞ (I - A*e^{2πiλ})] (I - A*e^{2πiλ})⁻¹ ,

where the quantity in square brackets can be evaluated using (2.12), yielding

p_x(λ) = (I - Ae^{-2πiλ})⁻¹ FF* (I - A*e^{2πiλ})⁻¹ = ψ(λ) ψ(λ)* ,

where ψ(λ) is the system transfer function

ψ(λ) = Σ_{n=0}^{∞} A^n e^{-2πinλ} F = (I - Ae^{-2πiλ})⁻¹ F .

In particular, we have the "factorization" for the signal spectral density:

p_s(λ) = (Cψ(λ))(Cψ(λ))* ,

where Cψ(λ) is the transfer function of a "physically realizable weighting pattern." Finally, let us note that for the stable system (A stable) we can show that the time average is equal to the (steady state) phase average:
> S ^ S N^ N nix n+p n = APR Example Let us illustrate our ideas with a simple example. Suppose our stochastic signal arises from sampling periodically the noise response of a linear oscillator. Thus s = s(nA) n v 2 ^~i + 2bnt + wos(t) = N(t) B.14) B.15) where N(t) is the noise input to the oscillator system and A is the sampling interval. We assume that wQA < ir. Our first step is to derive the signal-generation model for B.14 using B.15). For this purpose we first rewrite B.15) in \"state-space\" form: s(t) = Cx(t) x(t) Hx(t) + GN(t) B.16) where 0 1 ~% b J - [ 1, 0 ] ,\n\n28 CHAPTER 2. REVIEW OF SIGNAL THEORY Let g = ; xn * x^nA^ Then from B.16) we can write xn \" ^ x„-l + Vl » B-17) where A n-1 - | eH(A~a) GN(n~lA+a) da . B.18) 0 Let us assume that N(t) is a Gaussian process with mean zero. Then so is ? and further n A A * E[? C*] - / / efl<A~a) GR<5=5A+a-s)G* eH (A~s) da ds > n m Q Q where R(t2^t1) - E[N(tx) N(t2)*] We assume that (corresponding to \"large bandwidth\" noise) : B(t) * 0 for |t| > A f and further take A small enough so that the double integral can be approximated well by = AR(O) FF* for m - n , * 0 for jtj ^ n Hence we obtain ?n - /AR(O) GNn ,\n\nCHAPTER 2. REVIEW OF SIGNAL THEORY 29 where N is white Gaussian with unit variance. We thus have the representation in the canonical form: sn - Cx„ n n x .- = Axm + FN n+i n n where HA B.19) F - /AR(O) G . Note that B.19) is not obtained by directly \"discretizingM the differential equation B.15). Let us pursue this example further. We assume that u0 < 1 , so that we do have an oscillatory system. Then letting b u0 - ? (damping ratio) , we have HA ws * w0 ^X\"^ -bA A+-^ 3 aEV sin (a A JT? sin to A ¦z s -— sin (o^A co s s cos to A s 4 ¦ sin to A l-C Note that (C ^ A) is observable. Hence we can obtain a\n\n30 CHAPTER 2. REVIEW OF STGNAL THEORY \"difference equation\" for s . 
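The difference equation for s_n rests on the fact that the sampled-system matrix A = e^{HΔ} satisfies its own characteristic polynomial (the Cayley-Hamilton fact invoked in the derivation), namely A² - 2e^{-bΔ} cos(ω_sΔ) A + e^{-2bΔ} I = 0 with ω_s = ω₀√(1 - ζ²). That relation can be checked numerically; in the sketch below (numpy), the matrix exponential is computed by a plain truncated power series, and the concrete values of ω₀, b, Δ are arbitrary choices made here, not taken from the text:

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential by truncated power series (adequate for small M)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

w0, b, Delta = 2.0, 0.3, 0.1            # oscillator constants (arbitrary)
zeta = b / w0                            # damping ratio, here < 1
ws = w0 * np.sqrt(1 - zeta ** 2)         # damped natural frequency
H = np.array([[0.0, 1.0],
              [-w0 ** 2, -2.0 * b]])     # state matrix of the oscillator
A = expm_series(H * Delta)

# Cayley-Hamilton residual: should vanish to machine precision
resid = A @ A - 2.0 * np.exp(-b * Delta) * np.cos(ws * Delta) * A \
        + np.exp(-2.0 * b * Delta) * np.eye(2)
```

The residual is zero up to rounding because the eigenvalues of A are e^{(-b ± iω_s)Δ}, so its trace is 2e^{-bΔ} cos(ω_sΔ) and its determinant is e^{-2bΔ}.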
Exploiting the fact that (Cayley-Hamilton Theorem (see )): A2 + Be*\"bA coscosA)A + e~bAI * 0 , we obtain sn+2 + (Se coswsA)sn+l + e sn / K, sin (o A\\ Moreover, A being stable, the signal fs } is asymptotically stationary and ergodic. Its spectral density is An^rtx -2bA 2.2 AB@) e w0sj-ii & A A-52) -b& -2iri(A+fsA) e ~ e -2 -bA -2iri(X-fsA) e - e where 2Trf0 - 4,0 ; 2wfs Note that p(X) has a maximum at X - ±tgt z ±f0A for small ?,. Also -bA -<2*tyOE e - e\n\nCHAPTER 2. REVIEW OF SIGNAL THEORY 31 under our assumptions, for small ?. Finally, log p(X) = constant - log (l+e-2bA-2e\"Mcos(A-U)) s - log A+ e~2bA - 2e\"bA cos (A+f A)) , where the second term is symmetric about (f^A) with maximum at (f\\A), while the third is symmetric about (-f_A) with maximum at1 (-f„A). The smaller b, the sharper the peaks. * PROBLEMS * Problem 2,1 Let {s } be a stationary Gaussian signal with spectral density p(A). Define (for fixed B) the continuous time stochastic process by /4.\\ v -. Sln irBBt-n) , + _ 5(t) = l Sn TBB^_n) , -co < t < cc where the convergence of the infinite series is taken in the mean square sense. Find the covariance of the process s(kA) , _co < t < °° for fixed A, 2BA < 1. Specialize to the case where the spectral density of {s } is such that p(X) = 1 , -4 < X < i . Hint: E[s(t) s(t+kA)*j - I R I a (t) a (t+kA) , P n i\n\n32 CHAPTER 2. REVIEW OF SIGNAL THEORY where o z + ^i = sin 7rBBt~n) anlx' irBBt-n) Hence BfB(t> .(t+M)*] = | Rp al;(ag;^) -* p > / e ' p(A) dA -4 Hence spectral density JLnQJ >BA p[2BAj 2^ Jf i orrn i for -BA < A < BA = 0 otherwise Problem 2.2 For the signal model; sa \" Cxn Xn+1 \" Axn + FNn • n 1 ° where A is stable and x~ is independent of the unit- variance white-noise sequence fNn>, calculate the mean and variance of the \"sample\" covariance (i Jx snCp) for each N. Show that the variance goes to zero as N ¦+ ».\n\nCHAPTER 2. REVIEW OP SIGNAL THEORY Problem 2.3 Prove the Schwarz inequality for random vectors x, y: |EtV]|2 ? 
E[||x||2[ E[||y||2] . Prove the Holder inequality: ^Et||x+y||2] < ^E[||x||2] + <vM||y||2] .\n\nChapter 3. STATISTICAL ESTIMATION THEORY The function of a \"filter,\" as the name suggests, is to \"filter out\" the noise; or equivalently, to extract the desired signal from the corrupting noise. A filter is, in other words, an \"estimator\" of the signal. The paradigms for this are well-established in statistics — or, to be more specific, in the statistical theory of estimation. This is the subject matter of the present chapter. It is important not only because it is a logical starting point in developing the Kalman filter theory, but also because it is of independent interest in its own right¦\n\n3-1. PARAMETER ESTIMATION 35 3.1 PARAMETER ESTIMATION: THE CRAMER-RAO BOUND; THE PRINCIPLE Or MAXIMUM LIKELIHOOD We \"observe\" an n x 1 vector v. For example, v could be n samples (arranged as a column vector) of a continuous time waveform that is sampled at some discrete intervals of time. Given v we need to estimate an m x 1 \"parameter\" 0 (see below for examples). We consider the situation where nothing is known about 0: it is just an \"unknown\"*1 parameter, and v is modeled as a random variable whose distribution is known for each 0. We have then what is called a \"parameter estimation\" problem, in the classical statistical terminology associated with the names of Cramer and Fisher. (See ). Of course, as we shall see quickly, other points of view concerning 0 are possible, although we cannot discuss them all in this book. Which view we adopt is determined by our \"track record\" of success — how well we do. We retain the model if our experience is positive and discard it, otherwise. The unknown parameter 0 is any point in m-dimensional Euclidean space. 
For our purpose, in this book we need only to consider the case where the distribution of v can be replaced by its density; or, in \"pure mathematics\" terms, the distribution is absolutely continuous wjth respect to Lebesgue measure. Thus we are given a family (indexed by 0) of probability densities: p(vj6) > 0 ,\n\n36 CHAPTER 3- STATISTICAL ESTIMATION THEORY / p(v|0) d|v| - 1 . The notation p(vf<5>) is intended to suggest that we may think of p(v|0) as the \"conditional\" density of v given 0, even though 0 is not a random variable. Any \"estimate\" will be (in fact, will have to be) a function of v. Thus, the \"hat\" denoting an estimate, we can write: 0 = f(v) , where the function f(-) completely specifies the estimate. How shall we measure the \"goodness\" of our estimate? Note that f (v) is a random variable. Hence any such measure must involve \"statistical averages.\" We begin with the first moment: E0(fi) - / f(v) pCv|©) d|v| where the subscript 0 indicates that 0 is fixed. The quantity E0@) - 0 is called the estimate \"bias.\" The \"bias\" is clearly a function of 0: b@) - E0(g) - G . An estimate is said to be \"unbiased\" if the bias is zero. Let 0O denote the \"true\" value of 0, which is, of course, unknown and, in fact, unknowable. Nevertheless we\n\n3,1. PARAMETER ESTIMATION 37 can talk about the \"error\": Since GQ is unknown, we consider for arbitrary 0 the \"mean square error\" EG(H*<v)-0|jJ - J ||f(v)-0||2 p(v|G) d[v| , Rn or, more generally, the second moment matrix of the error: RF) = E0((i(v)-6)(f(v)-e)*) = / (f(v)-0)(f(v)-e)* p(v|Q) d|v| C.1.1) Rn as a function of 0, The remarkable discovery associated with the names of Cramer and Rao [3, 16] is that one can calculate the \"minimal\" second moment matrix R@) — the minimum of C.1.1) over the class of all possible estimators f(-) — without actually needing to know the corresponding estimate (except for its bias). 
Or, more precisely, we can obtain a lower bound to R@) in the form R@) > A + V0b@)) A@)\"*1 (I+V0(b@))* , C.1.2) where A@) - E0[(V0logp(v|6))*(V0logp(v|0))l J n log p(v|0)) (V log p(v|e>) p(v|6) d|v| ,\n\nM CHAPTER 3. STATISTICAL ESTIMATION THEORY -where Vq denotes the gradient A x m matrix) with respect to 6. To use C.1.2) we do need to know the bias in the estimate which can be difficult to evaluate in general. The inequality is most useful, in fact, when the bias is zero. Thus we have that for unbiased estimators the second moment matrix R@) (which is then also the variance matrix of the \"error\" F-0)) has the lower bound (MC-R bound\"): R(e) > {E0[(Velogp(v|0))*(V0logp(v|e))]}~1 . C.1.3) Again the inequality is most useful when the bound is independent of 0; its use is circumspect when this is not the case. Proof of the C-R Bound Formula Let us see how to prove C*1.2). First we shall need to assume that V6 / p(v|0) d|v| - J V0p(v|0) d|v| . Rn Rn Or, in other words, \"differentiation with respect to the parameter 6 is permitted under the integral sign*\" There are nontrivial cases where this does not hold (see ), but fortunately we will not need to be concerned with them in this book. Note, in particular, that since J p(v|G) d|vj = 1 , Rn we have that J V(vl0) d'vl = ° > Rn\n\nor, since 5.1. PARAMETER ESTIMATION 39 V(vi0> ve l0e *Me> \" p(v|e) (where we are not concerned with those values of v for which the denominator is zero), we can write equivalently J (V0 log p(v|0)) p(v|0) d|v| - 0 C.1.4) Rn or Ee(VG log p(v[e)) - 0 for all 0 in Rm. As we have seen, the bias b@) « E[G] - G =« / f(v) p(v|0) d|v| - 0 is a function of 0. Since we can differentiate under the integral sign, we have that V0 b@) = | f(v) V0p(v|6) dv - I , where I is the m * m identity matrix; and we can rewrite this as I + VQbF) - K0) > C.1.5) where, using C.1.4), we can write finally *<e) - / (f(v) -0)(VQlogp(v|0))p(v|0) d|v| . 
C.1.6) Rn The C-R inequality is basically a direct consequence of the well-known Schwarz inequality. To see this, let\n\ntfO CHAPTER 3. STATISTICAL ESTIMATION THEORY A =* f(v) - B B - Ve log p(v[0) . Note that A is m x 1 and B is 1 x m, so that B is m x 1. Let A be any m x m matrix. Then we note that ((A~ AB*)(A -AB*)*) > 0 and hence^ / (A- AB*)(A- AB*)*p(v|8) d|v| > 0 . C.1.7) Rn Let us \"expand\" C.1,7): we have E0(AA*) - AE0(B*A*) - E0(AB)A* + AE0(B*B)A* > 0 . C.1.8) We now assume that the m x m matrix ?0(B*B) - / (VGlogp(v|0))*(V0logp(v|O)) p(v|0) d|v| is nonsingular for all 6- Then we can choose, for each 0, A = E0(AB) (E0(B*B))^1 in C,1.8), yielding E0(AA*) - E0(AB)(E0(B*B))\"'1E0(B*A*) > 0 , which is readily recognized to be the same as C,1.2). An important question is: When can we find an estimator which actually attains the minimum second moment matrix? In other words, when does equality hold in C.1,2)? An estimate\n\n3-1. PARAMETER ESTIMATION 41 for which equality holds in C.1.2) is said to be 1Tef f icient.\" Now, equality holds in C.1.2) if equality holds in C.1.7), or A - AB* « 0 , or f ( ' ) satisfies f(v) - G - A(V0 log p(v|G))* C.1.9) (omitting values of v for which p(v|0) =* 0). But from C.1.9) we see that Ee(f(v) - 0) - AE0(V0 log p(v|G))* - 0 , or the estimate f(v) is unbiased. Moreover, the second moment matrix (which is now also the variance matrix) of the error is Ee((f(v)-G)(f(v)-e)*) = AEe[(V0logp(v|©))*(V0logp(v|e)KA* or = [Ee((V0logp(v|0))* Velogp(v|0))] . C.1.10) Hence for the estimate f(v) to be efficient we must have V0 log p(v|6)* - M@)(f(v) - G) C.1.11) for all v and 0. The most important instance of C.1.10) is the case where f(v) is linear in v. Finally we note an alternate formula for calculating the right side of C.1.10) (whose inverse is the C-R bound for\n\nH2 CHAPTER 3. STATISTICAL ESTIMATION THEORY unbiased estimates). Let 0 - Col.F^, .... G ). 
Then E[(V0logp(v|G)*(V0Iogp(v|0)] ^(aefafT iogP(v|e)J| C.1.12) This follows from 1 J Rn ?_ ..36 . - / 3*7357 »<vle) d|v| - /• ^ •V \\ x / (Ve%e iogp(v|e))P(v|e) d|v| . -\\.n v i i / logp(v|0)JUf7logp(v|8)Jp<v|8) d)v| Principle of Maximum Likelihood So far we have discussed only the measure of goodness of an estimate but have not considered the problem of finding estimates which are optimal: in other words, which minimize the error moment matrix (for all 0). Unfortunately, there is no systematic technique for finding the optimal estimate, even if one exists. An efficient estimate cannot always be shown to exist. We can, however» single out one prescription\n\n3.1. PARAMETER ESTIMATION **3 fox estimates, which has some features to recommend it — andi in fact, the only one that is used, for all practical purposes. This is the Principle of Maximum Likelihood. The Maximum Likelihood Estimate (MLE for short) is the one that maximizes p(v|0) with respect to 8, for each v. Since at a maximum (assuming the necessary differentiahility properties) the gradient must be zero, the MLE will satisfy: V0 log p(vJG) - 0 . We can state one desirable property of the MLE: An efficient estimate, if it exists, is a Maximum Likelihood Estimate. To see this, let us go back to C.1.11) which characterizes the efficient estimate f(v). Let us fix v in it. Then the MLE is that value of 0 which makes the left-hand side zero. Since M@) is nonsingular, this means that f(v) - 6 « 0 , or f(v), the efficient estimate, is also the MLE. The maximum likelihood estimate need not, in general, be unbiased; and the calculation of the corresponding error co- variance can also be nontrivial. Fortunately, we are often interested only in the \"asymptotic\" case: where 0 is fixed in dimension, while the dimension of v grows, as we take more and more data, for instance. Thus we may talk about an estimate being \"asymptotically11 unbiased and \"asymptotically\" efficient. 
As a rule, the MLE has these desirable asymptotic properties, but, of course, requires proof in each particular\n\n*+<* CHAPTER 3. STATISTICAL ESTIMATION THEORY instance. See [3, 16] for more of the statistical literature on this. EXAMPLES Example 1 Our first example is perhaps the oldest one of its kind. Let p(vJG) = i r Exp ^[R'1(v~L0),(v-L0)] , (/2?)n]R[2 L being n x m. We see that v is Gaussian with mean LQ and variance matrix R. We can readily verify that (V0 log p(v|0))* - L*R_1(v - L0) . We assume that L0 = 0 implies 0-0 Or equivalently h L is nonsingular Then ?[(Velogp(v|0))*(V0 log p(v|0)K - E[L*R_1(v- L6)(v- h0)\\~1h] - L*R_1L and is nonsingular; therefore the C-R bound matrix is (L R XL ) x The bound is thus independent of 0. The maximum likelihood estimate is given by\n\n3-1- PARAMETER ESTIMATION *f5 L*R ^V - LG) « 0 , or G = (L R \"X) x L R v . This estimate is unbiased since E[e] - (lV^l)-1 l*r\"\"xe[v] = e The estimate is efficient since (I^R-1]^)-1!,*!*-^ - e * (L*R^1L)L*R[v-L8] *» (LV1L)(VQlogp(v|0))* . Example 2 Our second example is one of the few cases where an efficient estimate can be explicitly calculated, and is nonlinear. It also illustrates the fact that even an efficient estimate can leave something to be desired. Thus, let v = aS + N , where S and N are independent Gaussians with zero means and the identity for the covariance matrix. It is desired to o estimate a (corresponds to signal ''power1'). It is immediate that P(v|a2) = 1 ~^-~ (Exp -i) -^1 . Vo^V </§*) 1+a Hence ~^logP(v|a2) - -|-J^ + | -^5 . 8a2 2 l+aZ 2 A+a2)^ \"Hie maximum likelihood estimate is such that\n\n<*6 CHAPTER 3. STATISTICAL ESTIMATION THEORY 2 + 2 0 U l+az ^ = fv.v] 1 = f0(v) say. This estimate is efficient since 2,2 tv,v] „ x _ a2 = 2(l+a n In particular, the error 2 2 However, the estimate, even if efficient, has the drawback that there is a nonzero probability that a^ < 0 , which is undesirable since we are estimating a positive quantity. 
We may define a new estimate which does not have this defect by taking instead iQ(v) + lf0(v)| 2 This estimate is biased but has a smaller mean square error! 3.2, BAYESIAN THEORY OF ESTIMATION: OPTIMAL MEAN SQUARE ESTIMATES AND CONDITIONAL EXPECTATION We next consider the \"Bayesian\" view, in which we assume that 0 is also random. Thus we are given the joint density\n\n3.2. BAYESIAN THEORY OF ESTIMATION 47 n@,v)- Our criterion of goodness of any estimate f(v) is aeain the second moment matrix: E [(f(v) - B)(f (v) - 0)*] . C.2.1) The optimal \"mean square\" estimate is the one that minimizes C.2.1). Let fQ(v) « E[0|v] . C.2.2) fe show that C.2*2) — the \"conditional expectation of 0 given v\" — minimizes C.2,1). For our purpose, we can express the conditional expectation as E[0|v] = / 0p@|v) d|0| , C.2.3) where p(Q|v) is the conditional probability: P@|v) = Eif^i . C.2.4) The crucial property of conditional expectation we need is that for any m * 1 (Borel measurable) function h(v) E[Gh(v)*3 * E[E[0)v] h(v)*] . C.2.5) This can be proved easily using C.2.3). In fact, the right side of C.2.5) is = f [J 0p@|v) d[0|) h(v)*p(v) d|v| RnV ' » J J 0h(v)*p@|v) p(v) d|0| d|v| , HnRm which by virtue of C.2.4) yields the left side of C.2.5).\n\n48 CHAPTER 3. STATISTICAL ESTIMATION THEORY Let us now rewrite C,2.1) as E[((f0(v)~e) + i(v) - f0(v))<(f0(v)~e) -f f(v) - f0(v))*] - E[(f0(v)-G)(f0(v)-G)*]+Ei;(f(v)^f0(v))(f(v)-*f0(v))*]; since from C,2.5) E[(f0(v) - 0)(f(v) - f0(v))*] - 0 , taking h(v) = f(v) - fQ(v) . Let us note that from C.2.5) (or directly) we can see that E[f0(v)] = E , C,2.6) therefore E[@ - fQ(v))(G - fQ(v))*] C.2.7) is the (minimal) error covariance matrix. We can \"expand\" C.2,7): - Et@ - fQ(v))e*D C.2.8) - E@0*) - E(fQ(v) fQ(v)*) . C.2.9) Both follow from C.2.5). From C.2.8) we have the inequality: E(fQ(v) f0(v)*) < E@0*) . C.2.10) Remark. 
The conditional expectation E[0|v] is definable as a Borel function of v for any two random variables 0, v, as soon as 0 has a finite first moment, that is\n\n3.3. GAUSSIAN DISTRIBUTIONS *f9 The*definition is based on C.2.5) rather than on C.2,3) and does not, in particular, require that a joint density exist. See C9], for example, for more on this. For our purpose, since we are concerned mostly with Gaussians, C.2.3) is adequate. Let A be any k * m matrix- Then we may consider the best mean square estimate of A0 rather than that of 0 — some linear combinations of the components of 0, in other words. Let h(v) be any estimate. Then EC(A0 - AE@|v))h(v)*3 = AEC@ - E(G|v))h(v)*3 - 0 , by C.2.5). Hence as before, it follows that AEC0|vD is the best mean square estimate of A0. An important property of the Conditional Expectation we shall need in the sequel is: EC0}v23 - E{EtQ\\vltv2l | v2) , C.2.11) which is easy to verify. 3.3, GAUSSIAN DISTRIBUTIONS: CONDITIONAL DENSITY; UNCONDITIONAL MAXIMUM LIKELIHOOD; MUTUAL INFORMATION Let us now specialize to the case where 0, v is (jointly) Gaussian. Let us see how to calculate the conditional\n\n50 CHAPTER 3- STATISTICAL ESTIMATION THEORY expectation E[0[v] in this case. Let 0 = 0 - E v = v - E [v ] r6 - E[§e*} Rv = E[vv*] R0v = E[©v ] and let A be the (m+n) x (m+n) compound matrix: \"ev R0v \\ Then assuming A to be nonsingular and writing Z » z « where Z is now an (n-fm)-dimensional Gaussian, we have p@,v) - (^?) n+m Exp -ilA^Z, Z) , r3.3.i) ,-i where we may \"partition*1 A in the same form as we did A, To calculate the conditional expectation E[0[v], we need not use C.2*3) but can proceed indirectly. Theorem 3.1. Let 0, v be jointly Gaussian. Then E[0|v] = E + A(v) , C.3.2) where A satisfies (the linear equation)\n\n3-3. GAUSSIAN DISTRIBUTIONS 51 E [0v ] *= AE [vv ] C.3.3) Remark. Equation C.3.3) is a discrete version of a more general equation known as the Wiener-^Hopf equation. Proof. 
We shall need to use one fact that uniquely characterizes Gaussians: that Muncorrelated Gaussians are independent.\" Or, more specifically: suppose X is m * 1, Y is n x 1 and they are jointly Gaussian; furthermore, suppose the cross correlation matrix E[(X - E(X))(Y - E(Y))*J = 0 . Then X and Y are independent: P(X,Y) = p(X) p(Y) C.3.4) We can see this readily, assuming that the covariance matrices of the variables Rx = E[(X - E(X))(X - E(X))*] , Ry = E[(Y - E(Y))(Y - E(Y))*] are nonsingular. In that case the covariance matrix of jzTj Rx 0 0 Ry and its inverse is .-1 0 RyJ\n\n52 CHAPTER 3- STATISTICAL ESTIMATION THEORY Hence we can write, using C.3.1): (|R |R |) * ^ _ P(X'Y) = N L_ n/m Exp \"*IKTX]+1UJ]) , ( /2rr) which is - P(X) p(Y) , so that X and Y are actually independent. If FU or R^ are singular, we cannot, of course, write down the probability densities; however, if Kx is singular, we can work with a submatrix which is nonsingular, whose ranK is the same as that of Rx; similarly for Ry. The result C.3.4) would apply to the variables corresponding to the submatrices. Since the original variables can be expressed as linear combinations of these, the independence result follows even if we cannot write down C.3.4). Next let us note that 0 - (E@) + Av) being linear in 0 and v, is jointly Gaussian with v, and by virtue of C.3.3) is uncorrelated with v, since E[@~ E@) - Av)v*] = E[0v*3 - AE[vv*] = 0 ; C.3.5) hence 0 - Av is independent of v. Hence, in particular, for any m * 1 function h(v) (with finite second moment):\n\n3.3. GAUSSIAN DISTRIBUTIONS 53 E[(G- Av)h(v)*] - (E[0-Av])E[h(v)]* - 0 . C.3.6) Or E[@ - (E +A(v-E[v])))h(v)*] - 0 . Or, from the definition C.2-5) of conditional expectations, our theorem follows. Remark. The Gaussian case is thus characterized by the fact that C.3,5) implies C.3*6). Also, the best mean square estimate is linear in v. 
Let us now calculate the corresponding error variance matrix: P - E[@ - E[0|v])@ - E[0[v])*] - E[(§ - E[0|v])(g - E[g|vJ)*J - E[(9 - E[0[v]H*] = R0 - AE[v0*] - Re - AR*v . C.3.7) Also specializing C.2.9), we have R0 - E[E[0|vJ Et§|v]*3 C,3.8) R0 - ARvA* . C.3.9) Moreover, if R is nonsingular, we have that V V1 C,3.10)\n\n5<* CHAPTER 3. STATISTICAL ESTIMATION THEORY so in that case P - R0 - RevSvlR0v • C.3.11) In particular, always: P < Re « C.3.12) Calculating the Conditional Density P@[v) It is interesting to note that (.3.2) can be used to calculate the conditional density: p(Q|v). We know that the mean is given by C.3.2) and the variance P defined by C.3,7). Let us assume that P is nonsingular: therefore R~ will be nonsingular also. Let us assume that R is nonsingular. If we assume that pF|v) is Gaussian, then we know it must be given by p(©|v) = ^ i ,t E^ -Mp'^o-av), e - ak\\ . (vS)m|p|^ C.3.13) To show that this is indeed the correct answer, we have only to multiply the right side of C.3.13) by p(v) and show that the product is indeed p@,v). For this purpose, let us first prove a result that is also of independent interest: viz: |A| = |RV| IP| , C.3.14) where | | denotes determinant. To this end, we recall that we may perform \"elementary operations\" on a matrix while computing its determinant. Thus\n\n3.3. GAUSSIAN DISTRIBUTIONS 55 |A| - v '0 ev Rv Det Rn-AR^ R-, -AR 0 0v 0v v tL0v J Det H0v Rv Det and by the rules of computing the determinant of a partitioned matrix: |A| P IL as required. Hence we see that, in multiplying the right side of C.3»13) by p(v), the constant factors check. Next we need to show that [P_1(e-Av),0~Av] + [R^v, v] = [A^Z, Z] , where Z ~ In other words, we have to show that we may partition A as p-1 -p-V -A?-1 \\1+A*p~h or we have only to show that the product matrix\n\nCHAPTER 3. STATISTICAL ESTIMATION THEORY R0 * Rev Rv X .-1 * -1 -1 * _1 A P Pv +A P A equals the Identity (n+m) matrix. 
But this follows since, by the rules of multiplying partitioned matrices, the blocks of the product are

(R_θ − R_θv A*) P^{-1} = P P^{-1} = I ,
−R_θ P^{-1}A + R_θv (R_v^{-1} + A*P^{-1}A) = R_θv R_v^{-1} − (R_θ − R_θv A*) P^{-1}A = A − A = 0 ,
(R_vθ − R_v A*) P^{-1} = 0 ,
−R_vθ P^{-1}A + R_v (R_v^{-1} + A*P^{-1}A) = I + (R_v A* − R_vθ) P^{-1}A = I ,

and the result follows since P is self-adjoint:

P = R_θ − A R_vθ = R_θ − R_θv A* ,   R_θv = A R_v ,   R_vθ = R_v A* .

Remark. The formula (3.3.14) does not tell us much about P, since it involves only its determinant. If, however, θ is scalar, then P = |P|. Thus we have

Error Variance = |Λ| / |R_v| ,    (3.3.14a)

and, of course, v can be of arbitrary dimension. Note that |R_v| in that case is the cofactor of the (1,1) position, since we can write

Λ = [ λ_1    λ_1v
      λ_1v*  R_v ] ,

where λ_1 = E[θ̃θ̃*], λ_1v = E[θ̃ṽ*], and λ_1v is 1 × m.

Unconditional Maximum Likelihood

We shall now indicate another way of deriving (3.3.2): by the principle of "unconditional maximum likelihood," the qualification "unconditional" indicating the slight difference from the principle of maximum likelihood we saw in Section 3.1. Thus we maximize the ("unconditional" or "joint") density function p(v, θ), which is the basic datum of our problem under the Bayesian assumption. Or, we seek that value of θ for which

∇_θ log p(v, θ) = 0    (3.3.2a)

for each v. But

p(v, θ) = p(θ|v) p(v) ,

and hence we only need take "the root of the gradient equation":

∇_θ log p(θ|v) = 0 .

Now, we have seen that p(θ|v) is Gaussian with mean E[θ|v], and the Gaussian density function attains its maximum at the mean. Hence the maximum unconditional likelihood estimate (MULE) is the same as the conditional expectation. The point in showing this is that (3.3.2a) is "operationally easier" for an important class of applications we shall deal with. (Cf. Section 3.5.)

Mutual Information

Let us next calculate the mutual information I(θ; v), that is, "information about θ given by v," which by definition (in our context) is

I(θ; v) = E[ log ( p(θ, v) / (p(θ) p(v)) ) ] .

We can show that

I(θ; v) = (1/2) log ( |R_θ| / |P| ) .    (3.3.15)

Let us outline the steps.
First,

log ( p(θ, v) / (p(θ) p(v)) ) = log p(θ|v) − log p(θ) ,

therefore

I(θ; v) = E[log p(θ|v)] − E[log p(θ)] .

We note that p(θ|v) is Gaussian and so is p(θ). Now, if X is any m × 1 Gaussian with variance R_X, we can see that

−log p(X) = (1/2) log |R_X| + log (√2π)^m + (1/2)[R_X^{-1}(X − E[X]), (X − E[X])] .

Then

E[ [R_X^{-1}(X − E[X]), (X − E[X])] ] = Tr. R_X^{-1} E[(X − E[X])(X − E[X])*] = Tr. R_X^{-1} R_X = m ,

or

E[−log p(X)] = (1/2) log |R_X| + log (√2π)^m + m/2 .    (3.3.16)

Since p(θ|v) has variance matrix P, and p(θ) has variance matrix R_θ, and both are m × m, we obtain (3.3.15), using (3.3.16) and simplifying.

3.4. GRAM-SCHMIDT ORTHOGONALIZATION AND COVARIANCE MATRIX FACTORIZATION

Let X be an m × 1 Gaussian with zero mean, and let us use the notation

X = Col. (x_1, ..., x_m) .

Let R_X denote the covariance matrix of X. Then

R_X = {λ_ij} ,   1 ≤ i, j ≤ m ,

where λ_ij = E[x_i x_j]. Let us construct a new set of Gaussian variables {y_i}, i = 1, ..., m, as follows:

y_1 = x_1 ,
y_2 = x_2 − E[x_2 | x_1] ,
y_3 = x_3 − E[x_3 | x_1, x_2] ,
   ...
y_j = x_j − E[x_j | x_1, ..., x_{j−1}] ,
   ...
y_m = x_m − E[x_m | x_1, ..., x_{m−1}] .

Note that the {y_i} are zero mean Gaussians, but furthermore

E[y_i y_j] = 0 ,   i ≠ j .    (3.4.1)

Indeed, for any i, by construction:

E[y_i x_j] = 0 ,   j = 1, ..., i−1 ;

and since y_j is a linear combination of x_1, ..., x_{j−1}, x_j,

E[y_i y_j] = 0 ,   j = 1, ..., i−1 .

For j > i, we can exploit the fact that E[y_j y_i] = E[y_i y_j] to obtain (3.4.1). We say that y_i is "orthogonal" to y_j if

E[y_i y_j] = 0 .

Hence the variables {y_i} are mutually orthogonal. Equivalently, the covariance matrix of

Y = Col. (y_1, ..., y_m)

is diagonal:

E[YY*] = D ,   D = {d_ij} ,   d_ii = E[y_i^2] ,   d_ij = 0 for i ≠ j .

Note that we can express the {y_i} as

y_i = a_i1 x_1 + a_i2 x_2 + ... + a_ii x_i ,   where a_ii = 1 .

Define the matrix L by

L = {ℓ_ij} ,   1 ≤ i, j ≤ m ,
ℓ_ij = a_ij ,   i ≥ j ;   = 0 ,   i < j .

Thus defined, L is an m × m "lower-triangular" matrix, and we then obtain

Y = L X .
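The construction above is easy to carry out numerically: the coefficients a_ij are ordinary regression coefficients obtainable from R_X alone via the normal equations. A sketch (the covariance below is an arbitrary randomly generated example, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
M = rng.standard_normal((m, m))
R_X = M @ M.T + np.eye(m)            # an arbitrary covariance matrix {lambda_ij}

# Build L row by row: row i holds the coefficients of
#   y_i = x_i - E[x_i | x_1, ..., x_{i-1}],
# where the conditional expectation has weights b solving the
# normal equations  R_X[:i, :i] b = R_X[:i, i].
L = np.eye(m)
for i in range(1, m):
    b = np.linalg.solve(R_X[:i, :i], R_X[:i, i])
    L[i, :i] = -b                    # y_i = x_i - sum_j b_j x_j

D = L @ R_X @ L.T                    # covariance of Y = L X
off_diag = D - np.diag(np.diag(D))
print(np.abs(off_diag).max())        # ~ 0: the y_i are mutually orthogonal
```

Note that L comes out with unit diagonal, exactly as in the text, and D holds the variances E[y_i^2] on its diagonal.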
The determinant of a lower-triangular matrix is the product of the diagonal elements, and hence in our case

|L| = 1 .

Hence L is nonsingular, and

X = L^{-1} Y ,

where L^{-1} is also lower-triangular, with the diagonal elements also equal to unity. Furthermore, for any p × m matrix A,

A X = (A L^{-1}) Y ,

so that any linear combination of the {x_i} can also be expressed as a linear combination of the {y_i}. Moreover, we have that

R_X = E[XX*] = L^{-1} E[YY*] L^{-1}* = L^{-1} D L^{-1}* = (L^{-1} √D)(L^{-1} √D)* .    (3.4.2)

Thus we have factorized the covariance matrix R_X as

R_X = L L* ,    (3.4.3)

where L (now denoting the matrix L^{-1} √D) is lower-triangular. Finally, let Z be an m × 1 Gaussian with zero mean and identity covariance matrix. Then

X = L Z    (3.4.4)

is Gaussian and has R_X for its covariance matrix. We note that (3.4.4) provides us with a "simulation" technique for constructing Gaussian vectors with prescribed covariance matrix, from a random number generator. Thus, let ξ_1, ξ_2, ..., ξ_m be mutually independent random variables uniformly distributed between 0 and 1. Define

z_i = Φ^{-1}(ξ_i) ,

where Φ(·) is the cumulative Gaussian distribution

Φ(z_i) = (1/√2π) ∫_{−∞}^{z_i} exp(−y²/2) dy .

Then Z = Col. (z_1, ..., z_m) is Gaussian with identity covariance matrix. Using the factorization (3.4.3) and taking

X = L Z ,

we can see that X is Gaussian with covariance R_X.

Remark. Even if X has nonzero mean, we may define

y_i = x_i − E[x_i | x_{i−1}, ..., x_1] ,   i = 1, ..., m ,

and {y_i} would again be zero mean Gaussian, orthogonal for i ≠ j. Also

y_i = x̃_i − E[x̃_i | x̃_{i−1}, ..., x̃_1] ;

or we can write

Y = Col. {y_i} = L X̃ = L X − L E[X]

and

X = L^{-1} Y + E[X] .

3.5. ESTIMATION OF SIGNAL PARAMETERS IN ADDITIVE NOISE

Let us now specialize our estimation models closer to practice.
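Before doing so, the factorization (3.4.3) and the simulation technique just described can be sketched in code. This is an illustration with an arbitrary example covariance: numpy's Cholesky routine produces exactly the required lower-triangular factor, and Φ^{-1} is evaluated here with Python's standard-library statistics.NormalDist.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
m = 4

# Arbitrary positive-definite covariance R_X to be matched.
M = rng.standard_normal((m, m))
R_X = M @ M.T + np.eye(m)

# Factorization (3.4.3): R_X = L L*, with L lower-triangular.
L = np.linalg.cholesky(R_X)
print(np.abs(L @ L.T - R_X).max())                 # ~ 0

# Simulation (3.4.4): z_i = Phi^{-1}(xi_i) turns independent uniforms into
# independent unit-variance Gaussians; X = L Z then has covariance R_X.
inv_cdf = np.vectorize(NormalDist().inv_cdf)
N = 50_000
xi = rng.uniform(1e-12, 1.0 - 1e-12, size=(N, m))  # independent U(0,1)
Z = inv_cdf(xi)
X = Z @ L.T                                        # row-wise X = L Z

R_emp = X.T @ X / N                                # empirical covariance
print(np.abs(R_emp - R_X).max())                   # small (sampling error only)
```

In practice one would draw Z directly from a Gaussian generator; the inverse-distribution route is shown only because it is the construction given in the text.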
We consider a communication channel with additive noise, where v_n represents the received signal at the nth sampling interval, so that we can write:

v_n = s_n + N_n ,    (3.5.1)

where {N_n} represents the channel noise and {s_n} the transmitted signal at the nth sampling interval. The canonical problem is to estimate the signal s_n from the received waveform samples v_n. "Real time" or "on-line" operation would mean that s_n would have to be estimated from v_k, k ≤ n. Moreover, we would need to specify the class of signals to be transmitted in some fashion. In keeping with our two points of view, we have the "sure signal" (or "deterministic signal") case, when we assume that signal parameters are unknown but that signals are specifiable once the parameters are specified. Or, we may have the case where the signal is a random process: a "stochastic signal." These, of course, are not necessarily mutually exclusive points of view, and further, various shades in between are also often employed. In this chapter we shall discuss only the former. We assume that the signal parameters enter linearly, and consider the following model (corresponding to processing over a fixed time interval):

V = Σ_{k=1}^m θ_k S_k + N ,    (3.5.2)

where V, S_k, N are all p × 1. The {S_k} are known to the receiver but the parameters {θ_k}, k = 1, ..., m, are not, and their values specify the transmitted waveform. We assume N is Gaussian with zero mean and covariance R_N. Although we have stated the problem in a communication system setting, such a model can occur in a variety of other applications; in any event the model (3.5.2) can be considered divorced from any specific application. To proceed with the estimation problem, it is convenient to let

θ = Col. (θ_1, ..., θ_m)

and write

Σ_{k=1}^m θ_k S_k = Lθ ,

where L is the p × m matrix defined by

L = {ℓ_ij} ,   ℓ_ij = ith component of S_j ;    (3.5.3)

equivalently,

L e_j = S_j ,

where the {e_j} are m × 1 unit basis vectors: e_j = Col. (e_j1, e_j2, ..., e_jm), where e_ji
= δ_ji : equal to 1 for i = j and to 0 for i ≠ j. Assuming that θ is an unknown parameter, we see that V is Gaussian with mean Lθ and variance R_N. Then, p(V|θ) denoting the density of V, we have

∇_θ log p(V|θ) = −(1/2) ∇_θ [R_N^{-1}(V − Lθ), (V − Lθ)] ,    (3.5.4)

where we have tacitly assumed that R_N is nonsingular. We know that an efficient estimate exists and is given by that value of θ for which (3.5.4) is zero.

We may also take the Bayesian view and assume that θ is Gaussian with zero mean and variance matrix Λ. The signal is then, of course, "stochastic" in a "trivial" way. It is natural to assume that θ is statistically independent of N ("signal and noise independent" case). We can then calculate the joint density p(θ, V) and, in turn, calculate the conditional expectation E[θ|V] yielding the best mean square estimate of θ. In fact, even without explicitly calculating p(θ, V), we know that

E[θ|V] = A V ,

since both θ and V have zero mean, and A is given by:

E[θV*] = A E[VV*] .

But

E[θV*] = E[θ(Lθ + N)*] = ΛL* ,
E[VV*] = E[(Lθ + N)(Lθ + N)*] = LΛL* + R_N ,

which is singular only if both LΛL* and R_N are singular. Assuming that at least one of them is nonsingular, we have

A = ΛL*(LΛL* + R_N)^{-1} ,    (3.5.5)

and thus our estimate is

ΛL*(LΛL* + R_N)^{-1} V .    (3.5.6)

The variance matrix of (3.5.6) is clearly ΛL*(LΛL* + R_N)^{-1}LΛ, and hence the error-covariance matrix P is

P = Λ − ΛL*(LΛL* + R_N)^{-1}LΛ .    (3.5.7)

If we assume that R_N is nonsingular, we can invoke the maximum unconditional likelihood principle. Since θ and N are independent, we have that the conditional density

p(V|θ) = p_N(V − Lθ) ,

p_N(·) being the density of N. Hence

∇_θ log p(V, θ) = ∇_θ log p(θ) + ∇_θ log p_N(V − Lθ)
               = −(1/2) ∇_θ ([Λ^{-1}θ, θ] + [R_N^{-1}(V − Lθ), (V − Lθ)]) .    (3.5.8)

The MULE makes (3.5.8) zero. But comparing with (3.5.4), we see that the difference between the MLE and MULE is the appearance of the additional term [Λ^{-1}θ, θ] in the MULE.
Or we may consider (3.5.4) as the case corresponding to Λ = +∞, which we may interpret as "maximum ignorance"; or, in other words, the MULE with the a priori variance equal to infinity is the same as the MLE, in this model. Let us now proceed to take the gradient in (3.5.8). We have, pairing with an arbitrary h in E^m,

−[Λ^{-1}θ, h] − [R_N^{-1}(V − Lθ), Lh] = −[Λ^{-1}θ − L*R_N^{-1}(V − Lθ), h] ;

and for the gradient to be zero this must be zero for every h in E^m; or

Λ^{-1}θ − L*R_N^{-1}(V − Lθ) = 0 ,
(Λ^{-1} + L*R_N^{-1}L)θ = L*R_N^{-1}V ;

or the MULE is given by

(Λ^{-1} + L*R_N^{-1}L)^{-1} L*R_N^{-1}V .

Hence

E[θ|V] = (Λ^{-1} + L*R_N^{-1}L)^{-1} L*R_N^{-1}V ,    (3.5.9)

which, under the assumption that R_N is nonsingular, must coincide with (3.5.6). Putting Λ = ∞ in (3.5.9), we get the MLE:

(L*R_N^{-1}L)^{-1} L*R_N^{-1}V ,    (3.5.10)

provided L*R_N^{-1}L is nonsingular. Or, since R_N is assumed to be nonsingular, provided L is one-to-one: that is to say, Lθ = 0 implies that θ = 0; or, equivalently, L*L is nonsingular. In terms of the {S_k}, L being one-to-one obviously means that the {S_k} are linearly independent, a natural condition in the signal transmission context. Moreover, the estimate being efficient, the error covariance matrix equals the C-R bound, which is

(L*R_N^{-1}L)^{-1} .    (3.5.11)

Let us calculate the error covariance matrix corresponding to (3.5.9). From (3.3.7), this is

Λ − (Λ^{-1} + L*R_N^{-1}L)^{-1} L*R_N^{-1} E[Vθ*]
= Λ − (Λ^{-1} + L*R_N^{-1}L)^{-1} L*R_N^{-1}LΛ
= (Λ^{-1} + L*R_N^{-1}L)^{-1} ((Λ^{-1} + L*R_N^{-1}L)Λ − L*R_N^{-1}LΛ)
= (Λ^{-1} + L*R_N^{-1}L)^{-1} .    (3.5.12)

Of course, (3.5.12) is readily seen to be smaller than (3.5.11), as it should be, since the latter corresponds to maximum ignorance. And (3.5.12) checks with (3.5.11) upon setting Λ = +∞. If P denotes the error covariance matrix, we can write (3.5.10) and (3.5.9) alike as

θ̂ = P L*R_N^{-1}V .    (3.5.13)

We note that in terms of the {S_k},

L*R_N^{-1}V = Col. ([S_1, R_N^{-1}V], ..., [S_m, R_N^{-1}V]) .
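The agreement of the two forms (3.5.6) and (3.5.9) of the estimate, and of the error covariances (3.5.7) and (3.5.12), can be checked numerically. The sketch below uses arbitrary example matrices (not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
p, m = 6, 3                                    # data and parameter dimensions

L = rng.standard_normal((p, m))                # columns play the role of the S_k
Lam = np.diag([2.0, 1.0, 0.5])                 # a priori covariance of theta
R_N = 0.3 * np.eye(p)                          # noise covariance (here d I)
inv = np.linalg.inv

theta = rng.multivariate_normal(np.zeros(m), Lam)
V = L @ theta + rng.multivariate_normal(np.zeros(p), R_N)

# (3.5.6):  Lam L* (L Lam L* + R_N)^{-1} V
est1 = Lam @ L.T @ inv(L @ Lam @ L.T + R_N) @ V
# (3.5.9):  (Lam^{-1} + L* R_N^{-1} L)^{-1} L* R_N^{-1} V   (the MULE)
est2 = inv(inv(Lam) + L.T @ inv(R_N) @ L) @ L.T @ inv(R_N) @ V
print(np.abs(est1 - est2).max())               # ~ 0: the two forms coincide

# Error covariances (3.5.7) and (3.5.12) coincide as well,
P1 = Lam - Lam @ L.T @ inv(L @ Lam @ L.T + R_N) @ L @ Lam
P2 = inv(inv(Lam) + L.T @ inv(R_N) @ L)
print(np.abs(P1 - P2).max())                   # ~ 0

# and lie below the Cramer-Rao bound (3.5.11) of the maximum-ignorance case.
CR = inv(L.T @ inv(R_N) @ L)
print(np.linalg.eigvalsh(CR - P2).min())       # > 0
```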
In most applications we may take R_N to be a multiple of the identity:

R_N = d I ;

in that case

P = (Λ^{-1} + L*L/d)^{-1} ,    (3.5.14)

where

L*L = {[S_i, S_j]} ,   1 ≤ i, j ≤ m .

Remark 1. If we do not care about optimality, and simply estimate θ as

(L*L)^{-1} L*V ,

which yields the right answer if there is no noise at all, we see that the corresponding error is

(L*L)^{-1} L*V − θ = (L*L)^{-1} L*N ,

so that the error covariance is

(L*L)^{-1} L*R_N L (L*L)^{-1} ,

which is, of course, always

≥ (L*R_N^{-1}L)^{-1} .

This is a direct consequence of our theory, although tedious to prove directly. It is immediate, however, in the important case R_N = d I.

Remark 2. Similarly, the proof of the equivalence of (3.5.6) and (3.5.9) in the case R_N is nonsingular is an algebraic exercise, which we now outline for the curious. Note first that

(Λ^{-1} + L*R_N^{-1}L) ΛL* = L* + L*R_N^{-1}LΛL* = L*R_N^{-1}(R_N + LΛL*) .    (3.5.15)

Hence

ΛL* = (Λ^{-1} + L*R_N^{-1}L)^{-1} L*R_N^{-1}(LΛL* + R_N) ,

and multiplying both sides on the right by (LΛL* + R_N)^{-1}, we obtain from (3.5.5)

A = ΛL*(LΛL* + R_N)^{-1} = (Λ^{-1} + L*R_N^{-1}L)^{-1} L*R_N^{-1} ,

as required.

3.6. PERFORMANCE DEGRADATION DUE TO PARAMETER UNCERTAINTY

We pause now, before treating more general estimation problems, to dwell on an important consideration in the practical utilization of our theory. We will need to deal with it more generally later, but it is instructive to see it in our present context. As we have noted, in the Bayesian approach it is necessary to specify the a priori probability density of the unknown parameter-vector θ. In the Gaussian case we have been specializing to, this means specifying the covariance matrix Λ. This is in general unknown, and one can argue that it is unknowable and hence must be replaced by a "guesstimate," say P_0. The question then is: What is a good guesstimate?
Let us ponder over this briefly. Based on our theory, our estimate will then use P_0 in place of Λ:

θ̂ = (P_0^{-1} + L*R_N^{-1}L)^{-1} L*R_N^{-1}V .

This estimate will, of course, no longer necessarily be optimal. Let us calculate the corresponding error covariance. We have

θ̂ − θ = (P_0^{-1} + L*R_N^{-1}L)^{-1} (L*R_N^{-1}V − (P_0^{-1} + L*R_N^{-1}L)θ) .

Let us use the notation

P_c = (P_0^{-1} + L*R_N^{-1}L)^{-1} ,

where the subscript c is supposed to indicate "calculated." Then substituting for V, we have:

θ̂ − θ = P_c (L*R_N^{-1}(Lθ + N) − (P_0^{-1} + L*R_N^{-1}L)θ) = P_c (L*R_N^{-1}N − P_0^{-1}θ) ,

from which it follows that the error matrix P is given by

P = P_c (L*R_N^{-1}L + P_0^{-1} Λ P_0^{-1}) P_c ,

which we can rewrite as

P = P_c + P_c P_0^{-1}(Λ − P_0) P_0^{-1} P_c .

Hence the discrepancy between the actual and the calculated:

(P − P_c) = P_c P_0^{-1}(Λ − P_0) P_0^{-1} P_c .    (3.6.1)

Note that this matrix is self-adjoint, and nonnegative/nonpositive definite according as (Λ − P_0) is. Hence we can conclude that:

True Error Variance ≥ Calculated Variance if P_0 ≤ Λ ;    (3.6.2)
True Error Variance ≤ Calculated Variance if P_0 ≥ Λ .    (3.6.3)

In other words, a pessimistic guesstimate (P_0 ≥ Λ) is conservative: the true error is then no larger than the calculated one. The degradation in performance, even if not calculable (since Λ is unknown), is

P − (Λ^{-1} + L*R_N^{-1}L)^{-1} = (Λ^{-1} + L*R_N^{-1}L)^{-1}(Λ^{-1} − P_0^{-1}) P_c + P_c D P_c ,

where

D = P_0^{-1}(Λ − P_0) P_0^{-1} .

The matrix

L*R_N^{-1}L    (3.6.4)

deserves to be called the signal-to-noise ratio matrix. Note that the degradation in performance is "small" when the signal-to-noise ratio is "large," the latter meaning

L*R_N^{-1}L ≫ Λ^{-1} .    (3.6.5)

This is generally true and explains incidentally why at high signal-to-noise ratio anything will work. Let us also note that we have here a dilemma that is typical in estimation theory: we may discard the overly-pessimistic "maximum-ignorance" view and go Bayesian. But the latter has the disadvantage that the necessary statistics may have to be "guesstimated." A way out of this is usually to show that as we process more and more data ("asymptotically") the estimate becomes independent of the a priori probability.
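The discrepancy between the true and calculated error covariances can be confirmed by simulation. The sketch below uses arbitrary example matrices (not from the text); the guesstimate P_0 is deliberately taken smaller than the true Λ, i.e., overconfident, so the true error covariance should exceed P_c:

```python
import numpy as np

rng = np.random.default_rng(3)
p, m = 5, 2
L = rng.standard_normal((p, m))
R_N = 0.5 * np.eye(p)
Lam = np.diag([1.5, 0.8])                      # true a priori covariance
P0  = np.diag([0.4, 0.4])                      # overconfident guesstimate: P0 <= Lam
inv = np.linalg.inv

Pc = inv(inv(P0) + L.T @ inv(R_N) @ L)         # the "calculated" covariance P_c
gain = Pc @ L.T @ inv(R_N)                     # mismatched estimator: theta_hat = gain V

# Monte-Carlo estimate of the true error covariance of the mismatched estimator.
Ns = 400_000
theta = rng.multivariate_normal(np.zeros(m), Lam, size=Ns)
noise = rng.multivariate_normal(np.zeros(p), R_N, size=Ns)
err = (theta @ L.T + noise) @ gain.T - theta
P_true = err.T @ err / Ns

# Closed form: P = P_c (L* R_N^{-1} L + P0^{-1} Lam P0^{-1}) P_c, so that
# P - P_c = P_c P0^{-1} (Lam - P0) P0^{-1} P_c, cf. (3.6.1).
P_formula = Pc @ (L.T @ inv(R_N) @ L + inv(P0) @ Lam @ inv(P0)) @ Pc
print(np.abs(P_true - P_formula).max())        # small (sampling error only)
print(np.linalg.eigvalsh(P_true - Pc).min())   # > 0: true error exceeds P_c here
```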
We shall elaborate on this in the next section.

Finally, let us examine the dependence on the uncertainty in the noise covariance R_N. We shall only consider the case where

R_N = σ² I .

Let σ_0² be our "guesstimate" of σ². Then our estimate of θ will be

θ̂ = (P_0^{-1} + L*L/σ_0²)^{-1} (L*/σ_0²) V ,

and the corresponding error covariance P = E[(θ̂ − θ)(θ̂ − θ)*] will be

P = P_c ( P_0^{-1} Λ P_0^{-1} + (σ²/σ_0²)(L*L/σ_0²) ) P_c ,   P_c = (P_0^{-1} + L*L/σ_0²)^{-1} .    (3.6.6)

Note that the noise uncertainty enters only through the ratio (σ²/σ_0²); this explains why the logarithmic measure

10 log_10 (σ²/σ_0²)

in "decibels" is often used, instead of the ratio itself. Note also, as before, that the larger the signal-to-noise ratio

L*L / σ_0² ,

the smaller the discrepancy (P − P_c). Note as well the interlacing of the noise-variance error and the a priori variance error in (3.6.6). We may also define the "normalized" discrepancy as

(√P_c)^{-1} (P − P_c) (√P_c)^{-1}    (3.6.7)

and note that this is also smaller the higher the signal-to-noise ratio.

Chapter 4. THE KALMAN FILTER

This is the main chapter of the book and is organized as follows. We begin in Section 4.1 with the basic theory and formulas, making a compromise in generality between too many obscuring details and too little practical application. Thus we consider only the case where the observation noise is white and is independent of the signal, although we allow the system to be time-varying. Because of the uncertainty in the initial covariances, in practice no Kalman filter can be optimal except in the steady state, and this is by far its most important use. Hence Section 4.2 specializes to time-invariant systems and considers steady-state behavior of the filter. Section 4.3 examines the steady-state results from the frequency-domain point of view, relating them to the more classical transfer-function approach. In Section 4.4 we study a canonical application of Kalman filtering: to System Identification. In Section 4.5 we study the "Kalman smoother": the on-line version of two-sided interpolation.
In Sections 4.6 and 4.7 we study generalizations of the basic theory of Section 4.1. Thus we allow the signal and noise to be correlated in Section 4.6, and allow the observation noise to be non-white in Section 4.7. We conclude in Section 4.8 with a simple example which illustrates some of the theory and techniques discussed in the chapter.

4.1. BASIC THEORY

The estimation techniques of the previous chapter involved "batch" processing: processing of all the data at once. We wish now to consider the "on-line" problem, where the data has to be processed sequentially, as sample-time progresses. Or, we need to design a "filter." Let us examine this more closely. We have as before the data model:

v_n = s_n + N_n^o ,   n ≥ 1 ,    (4.1.1)

where {s_n} is the signal and {N_n^o} the noise. The problem we pose now is that of estimating s_n from all the available data at sample time n. Our estimation criterion being Bayesian mean square error, to determine the optimal estimate, as we have seen, we must calculate the conditional expectation

ŝ_n = E[s_n | v_n, v_{n−1}, ..., v_1] .    (4.1.2)

We propose to do this by a (time varying, in general) filter. In block diagram we have:

v_n → [ Filter ] → ŝ_n

Since we will only be concerned with the Gaussian case, we know that the filter will be a "linear system" with input-output relations:

ŝ_n = Σ_{k=1}^n W_{n,k} ṽ_k + E[s_n] ,    (4.1.3)

where

ṽ_n = v_n − E[v_n] ,   s̃_n = s_n − E[s_n] ,

and the "system matrix" {W_{n,k}} is determined by solving the Wiener-Hopf equation:

E[s̃_n ṽ_j*] = Σ_{k=1}^n W_{n,k} E[ṽ_k ṽ_j*] ,   j = 1, ..., n .

For each n, we see from (4.1.3) that we need to "store" all the data samples v_k, up to k = n. This can lead to prohibitively large data storage, and so alternate means need to be found for practical implementation. The key is provided by having an appropriate signal-generation model for the signal {s_n}, such as we have studied in Chapter 2. Thus we now assume we have the "state space" model:

s_n = C_n x_n ,
x_{n+1} = A_n x_n + U_n + F_n N_n ,    (4.1.4)

where {N_n} is a white noise sequence, {U_n} is a known deterministic input, and of course, the matrices A_n and C_n are known. Next we assume that the "observation" noise {N_n^o} is white also. It is then convenient to combine the white noise processes in (4.1.4) and (4.1.1), using the following notation:

state noise = F_n N_n ,   observation noise = G_n N_n ,    (4.1.5)

where {N_n} is white Gaussian with unit covariance matrix, and of course:

E[(F_n N_n)(F_n N_n)*] = F_n F_n* ,
E[(G_n N_n)(G_n N_n)*] = G_n G_n* ,
E[(F_n N_n)(G_m N_m)*] = F_n G_n* δ_nm .

Then we can write our signal-generation model as:

v_n = s_n + G_n N_n ,
s_n = C_n x_n ,    (4.1.4a)
x_{n+1} = A_n x_n + U_n + F_n N_n .

We shall assume that

F_n G_n* = 0 ,

implying that signal and noise are mutually independent processes. Another important assumption throughout will be that the observation noise covariance is nonsingular:

G_n G_n* nonsingular .    (4.1.6)

The remarkable achievement of Kalman and Bucy was to show that, given the signal-generation model (4.1.4a) (and the assumption that the observation noise is white), it is possible to describe the optimal linear filter also in state-space form analogous to (4.1.4). This representation is now generally referred to as the "Kalman filter," which we shall now proceed to derive. First of all let

x̂_n = E[x_n | v_n, ..., v_1] ,   n ≥ 1 ,

which we refer to as the "state estimate." We assume that x_0 is Gaussian and independent of the noise sequence {N_n}, so that, in particular, we may define:

Λ = E[(x_0 − E[x_0])(x_0 − E[x_0])*] .

The Kalman filter is structured on two simple but essential ideas:

(i) The Innovation Sequence

We carry out a Gram-Schmidt orthogonalization of the
We can \"simplify\" the right side of D.1.7) by noting that va \" Vn + GnNn Hence we may proceed to take the conditional expectation term- by-term. We note first, however, that by the independence of signal and noise and E[Kn | vn_lf ...,va] - 0 . Hence EI'n' Vl 'J * WlV-l + Vl> • and hence we have the representation: vn \" vn ~ Cn<VlXn-l + Dn-l> \" <4-9> As in any Gram-Schmidt orthogonalization procedure, we know that (cf- Chapter 3)\n\n82 CHAPTER 4* THE KALMAN FILTER B[Vn] - 0 and in fact that v does not \"depend\" on the mean of the process (v }. It is convenient to express this explicitly. Let *n - vn \" E [vn] • *n \" xn \" E[xn] ¦ Then, as we know from Chapter 3, we can write xn = E[xni + ESn^n''4,^l] D.1.7a) and in particular, as a consequence, vn - VE[?n' Vl' •¦•• V • D.1.9a) \" *n \" CnVA-l • D-1-9b> where \\ = Et\\i v •••» V • D-1-10> and we note that in = E[xn] + xn . D.1.10a) The form D.1.9a) shows explicitly that the innovation process v remains the same whatever the deterministic input ttf }, since the latter enters only in the definition of the mean of {v }. In fact, let mn = Elxj . Then, taking expectations in the state equation D,1.4), we have:\n\n4.1. BASIC THEORY 83 mn - Vlmn-1 + Vl ' n 1 1 > mQ = E[x0] , D.1.11) and, of course, E[vn] = Cnmn . D.1.12) (ii) State-Innovation Sequence Next we (Gram-Schmidt) orthogonalize the state estimate {? }. Let n vn = x- :n-E^nlVl—-Sll • D.1.13) s v.. = X-. - m- We shall refer to {v^} as the State-Innovation Sequence: n ^ the superscript s signifying this. As before, we may remove the means: vn - xn \" E[*J Vl' \\-2 *±1 • To calculate the second term on the right, let us observe that K - E[*J Vv ¦•¦• «ii is uncorrelated with v -,, . . . , v- and hence also with *n-V * * * ' *1* since ^n-k is a lin,ear combination of v ., ..., v^. Hence it follows that But the left side can be expressed\n\n8?J CHAPTER h. 
as

E[ E[x̃_n | ṽ_n, ..., ṽ_1] | ṽ_{n−1}, ..., ṽ_1 ] ,

and this in turn, by (3.2.11), as

E[x̃_n | ṽ_{n−1}, ..., ṽ_1] ,

which we may now calculate term-by-term, using

x̃_n = A_{n−1} x̃_{n−1} + F_{n−1} N_{n−1} .

This yields

E[x̃_n | ṽ_{n−1}, ..., ṽ_1] = A_{n−1}(x̂_{n−1} − m_{n−1}) .    (4.1.14)

Hence we have that

ν_n^s = x̂_n − (A_{n−1} x̂_{n−1} + U_{n−1}) .    (4.1.15)

Thus the state innovation is the same whatever the input sequence, and if we wish, we may obtain it by setting U_k identically to zero. The Kalman filter is obtained by showing that

ν_n^s = K_n ν_n .    (4.1.16)

This is based on the important property of the innovation sequence {ν_n} that it "contains all the information in the observation sequence {v_n}." More precisely, for each n, we can express ν_n in terms of v_n, v_{n−1}, ..., v_1. But this is a consequence of the Gram-Schmidt orthogonalization process that we have already seen in Chapter 3. Hence in turn, it follows that v_n can be expressed linearly in terms of ν_n, ν_{n−1}, ..., ν_1. Furthermore, ν_n^s is uncorrelated with ν_{n−k}, k ≥ 1. This is because we can express ν_n^s as:

ν_n^s = x̂_n − A_{n−1} x̂_{n−1} − U_{n−1}
      = (x̂_n − x_n) − A_{n−1}(x̂_{n−1} − x_{n−1}) + x_n − A_{n−1} x_{n−1} − U_{n−1}
      = (x̂_n − x_n) − A_{n−1}(x̂_{n−1} − x_{n−1}) + F_{n−1} N_{n−1} .    (4.1.17)

The first two terms are uncorrelated with ν_{n−k}, k ≥ 1, by the optimality of the estimates x̂_n and x̂_{n−1}. As for the third term, writing

ν_{n−1} = C_{n−1}(x_{n−1} − A_{n−2} x̂_{n−2} − U_{n−2}) + G_{n−1} N_{n−1} ,

we have

E[F_{n−1} N_{n−1} ν_{n−1}*] = E[F_{n−1} N_{n−1} (C_{n−1}(x_{n−1} − A_{n−2} x̂_{n−2} − U_{n−2}) + G_{n−1} N_{n−1})*] .

Now the state noise F_{n−1} N_{n−1} is independent of x_{n−1} and of x̂_{n−2}, and hence the above expression is

= E[F_{n−1} N_{n−1} N_{n−1}* G_{n−1}*] = F_{n−1} G_{n−1}* = 0

by our assumption of signal-noise independence; and similarly for ν_{n−k}, k ≥ 2, which involve only noises up to N_{n−2}. Hence ν_n^s is uncorrelated with ν_{n−k} for k ≥ 1, and hence to prove (4.1.16) we have only to define K_n so that

E[ν_n^s ν_n*] = K_n E[ν_n ν_n*] .    (4.1.18)

Using K_n so defined, we have, rewriting (4.1.16) by replacing ν_n^s, ν_n by (4.1.15) and (4.1.9), respectively:
x̂_n − (A_{n−1} x̂_{n−1} + U_{n−1}) = K_n (v_n − C_n (A_{n−1} x̂_{n−1} + U_{n−1})) ,    (4.1.19)

or

x̂_n = (I − K_n C_n)(A_{n−1} x̂_{n−1} + U_{n−1}) + K_n v_n ,   ŝ_n = C_n x̂_n .    (4.1.19a)

The Kalman filter is defined by (4.1.19a). Note that this is a time-varying linear system with the state space the same as that of the signal process. The "input" is comprised of the terms

(I − K_n C_n) U_{n−1} + K_n v_n

and thus involves the observed data sequence {v_n} as well as the input sequence {U_n}. It is important to note that (4.1.19) can also be expressed as:

x̂_0 − m_0 = 0 ,
(x̂_n − m_n) = (I − K_n C_n) A_{n−1} (x̂_{n−1} − m_{n−1}) + K_n (v_n − C_n m_n) ,    (4.1.20)

where

m_{n+1} = A_n m_n + U_n ,

as is readily verified. What we have done in (4.1.20) is separate the "mean" process {m_n}. It is also important to note that

x̂_n = A_{n−1} x̂_{n−1} + U_{n−1} + K_n ν_n ,    (4.1.21)

which shows that {x̂_n} is a Gaussian Markov process just as {x_n}.

To instrument the filter system (4.1.19) we only need to calculate the gain matrix K_n. For this purpose, it is convenient to introduce a new notation for the state-estimation error covariance matrix. Thus let

P_n = E[e_n e_n*] ,    (4.1.22)

where

e_n = x_n − x̂_n = x̃_n − (x̂_n − m_n) .

First let us derive the difference equation satisfied by the error process {e_n}. Subtracting the difference equations for {x̂_n} and {x_n}, we obtain

e_n = A_{n−1} e_{n−1} + F_{n−1} N_{n−1} − K_n ν_n .    (4.1.23)

Now ν_n defined by (4.1.9) can be expressed in terms of {e_n} as:

ν_n = G_n N_n + C_n x_n − C_n (A_{n−1} x̂_{n−1} + U_{n−1}) ,

and substituting for x_n in this from (4.1.4a) we have:

ν_n = G_n N_n + C_n A_{n−1} e_{n−1} + C_n F_{n−1} N_{n−1} ,    (4.1.24)

so that, substituting this into our expression (4.1.23) for e_n, we have:

e_n = (I − K_n C_n) A_{n−1} e_{n−1} + (I − K_n C_n) F_{n−1} N_{n−1} − K_n G_n N_n .    (4.1.25)

This shows that {e_n} is also a Gaussian Markov process, e_{n−1} being independent of the white noise terms N_{n−1} and N_n; and since each of these terms is also independent of the others, we can readily calculate that

P_n = (I − K_n C_n) H_{n−1} (I − K_n C_n)* + K_n G_n G_n* K_n* ,    (4.1.26)

where we have used the notation

H_n = A_n P_n A_n* + F_n F_n* .    (4.1.27)

Let us next calculate K_n from (4.1.18) by calculating the necessary covariances. First, in (4.1.24), we note that e_{n−1} is independent of C_n F_{n−1} N_{n−1} because of our assumption of the independence of signal and noise; in fact, since v_{n−1}, ..., v_1 are all independent of F_{n−1} N_{n−1}, so is x̂_{n−1}, and x_{n−1} is independent of N_{n−1} as well. Thus every term in (4.1.24) is independent of the other two, so that

E[ν_n ν_n*] = G_n G_n* + C_n H_{n−1} C_n* ,    (4.1.28)

which is clearly nonsingular, since G_n G_n* is. Next let us calculate the cross-correlation matrix
First in D-1.24), we note that e 1 is independent of cn?tt_iNn-.i because of our assumption of the independence of signal and noise; in fact, ^(VrV^CiVi^ - -B<ViCiF*-i> since vn_n» • • • > v-i are a*l independent of ^n_iNn_i * Tnus every term in D.1.24) is independent of the other two, so that = G„< + CnHn~lCn > <4-1-28> which is clearly nonsingular, since GG is* Next let us calculate the cross correlation matrix\n\n**.l. BASIC THEORY 89 1 n nJ For this purpose, let us first use D.1.17), We see that the first term is uncorrelated with v . For the second and third terms we exploit D.1.24), noting that en„i is uncorrelated with F ^N - under our assumption of signal-noise independence. Hence it follows that E[vV*] = (AM *V -A* -+F -F* -)C* n n n-1 n-1 n-1 n-x n~l n * « H -.C n-1 n Hence we have: Hn-lC» \" KnBK<1 ' VGnG^CnHn-lCn> • f4'1-29) Let us now go back to D,1.26) and note that the right side can be rewritten: Pn - ^-KnCn)Vl-Hn-i«+K<CnHn-lCn%GnGX • But substituting D*1.29) into this, we see that Moreover, rewriting D.1.29) as <I-«toWlCn = KnGn< and using D.1.30), we can express K in tei%ms of P as: PC* * K G G* . n n n n n\n\ngo CHAPTER *f. THE KALMAN FILTER Or finally we have and K„ - wvO <4-1^ Pn \" ^-^VO^VVl \" <4'1-32) Collecting terms containing P we have VI + CXGn>\"lcnHn-l> = Vl ' D'1'33) The matrix in parentheses (see Problem 4.1.1, if necessary) is nonsingular and hence P« =* K 1<I + C^GrrG^y1Cr,K 1>\" i D.1.34) n n-1 n n n n n-1 N ' (and taking adjoints) Pn * (I + Hn-i<(GnGn>\"lcn)_:lHn-l ' ^•15> We have thus expressed P in terms of P ... To determine ^ n n~l P we have only to add n J P0 - E[(x0-^0)(x0^x0)*] * Et*o*o] - A f D.1.36) say. This completes our derivation of the Kalman filter, which we may now express as: x - A^ J - + V - + P C*(G G*) (v - (A _,? -t+U .)), n n-1 n-1 n-1 n n n ny v n v n-1 n-1 n-1 ' D.1.37)\n\n<+.l. 
BASIC THEORY 91 &0 - E[xo] ¦ where P is given by D.1.35) (which is our \"error propagation\" equation), with Pn defined by D.1.36). One-Step Predictor It is often useful to express the Kalman filter formula D.1.34) in a different way in terms of the \"one-step predictor.\" Thus let Vl = E[xn+livl> •¦•' V • Note that xn+1 is our prediction \"one step ahead\" of the state at (n+1) given the data up to n. We may express both v and v in terms of x , since and V n n —> « = V n V n X n - - E[v L n C x n n E[xn >vll - x - E [x | v 1( ,.., vJ n L n ¦ n-1' ' 1J n n Also using D.1.4), we have: VH = AA + Un \" D.1.38) The corresponding error is x^--x^- = A(x-x)+FN n+1 n + 1 n n n n n\n\n92 CHAPTER 4. THE KALMAN FILTER and hence the one-step-predictor error-covariance: yielding, in particular, an \"interpretation\" of H . Moreover, we can rewrite the Kalman filter formula D.1.37) in terms of the quantities involving one-step prediction, using D,1.38) and D.1.16), in the form: (x - x ) = K(v-Cx) v n nJ nv n n n' Hence we have Xn ' An-l(I~Kn-lCn-l>*n-l + An-lKn-lvn-l K = ^V^n + Vn > *i = UA * *n m ° = *n * D.1.39) * * *v-l K = H -C (C H -C + G G ) n n-1 nv n n-1 n n n Signal Estimate Error Covariance The signal estimation error covariance can be expressed in terms of P as: n E[(s -s )(s -s )*] » C P C* . lK n nJK a uJ J n n n * Let us now derive the equations that C P C must satisfy- ^ n n n From D.1.33), multiplying on the left by C and on the right \"by C , we have Cn\\-1< \" CnPn< + CnPn<<Gn<>~lcnHn-i<\n\n4.1. BASIC THEORY 93 or - <I + SA-i<<VO_1>~\\Hn-lCn ¦ D.1.41) From D.1.40) we can also write Fit Error We may also consider at this point what is known as the \"fit error.11 This is an error that we can observe: the error between the data sequence {v } and our \"best fit\" to the data {s }: n n n It is easy to see that this is a white noise sequence. 
To obtain the fit error covariance, we note that n n n n s= v -s -i-s ~s n n n n = z + C e n n n Since (s ~s ), by virtue of the optimality of s is uncor- related with v , , k > 0, it follows that z and ert are n-k' — n n independent, and hence, in particular, G G* - E[z z*] + C P C* . D.1.43) nn LnnJ nnn It follows that\n\n94 CHAPTER k. THE KALMAN FILTER In particular: ETziz] » GG*-CPC L n nJ nn nnn E [z z 1 < G G L n nJ — n n « G G iff, s\\ = s^ . n n n n From D.1.44) we see that a fit-error variance much smaller than the noise variance indicates poor filter performance. Let us show that the fit error variance is nonsingular. For this purpose let us express z in terms of v , We have * n n z « v - C x^ n nnn - v ~ C (A -? -+6^ -U 1 +P C*(G G^VV) n n n-1 n-1 n~l n-1 n n n n n = (I - C P C*(G G*))v - D.1.45) v n n nv n n/ J n x J From which we also have that S[V?] = <I-CnPn<<GnGn>>EK%KI-(GnG*)-1CnPnCn> • Suppose D.1.46) (I - (GnG*)CnPnC*)x = 0 , x f 0 . D.1.47) Then by D.1.42) we would have that C P C*x nnn and hence from D11.47) that x - 0\n\n*f.l. BASIC THEORY 95 leading to a contradiction. Hence and (I - <V5*rVnCn> A \" CnPn<(Gn<)_1) are nonsingular. And hence * * * E [z z^ ] « G G - C P C n n n n n n n - (I \" CnPnCn<Gn<>)<Gn<) are nonsingular. Now == I + CHC(GG)-CPC(GG) n n nv n n' nn nv n n' - (C H C*(G gV1^P„C*(G G*) n n nv n n n n nx n n = I , since C H -C* - C PC ~ C H ^C*(G G )~XC PC* = 0 n n-1 n n n n n n-1 nv n n' n n n by D.1.40). Hence we also obtain ^^n-lCXV^ \" \"-WX6^- D.1.48) We can also reformulate the Kalman filter equations D.1.21) in terms of the fit error rather than the innovation. Thus we have: *n - VlVl + VlVl + Jn2n ' H.1.49)\n\n96 CHAPTER *f. THE KAUAW FILTER Remark. Before we leave this section let us make an important observation. For the filter to be optimal we need to calculate P by formula D.1.32) which involves P0 = A . But this (starting) covariance is unknown, and perhaps even unknowable. 
Hence no Kalman filter in practice is optimal, except in the case where the signal-generator model is time invariant; then we can show that under certain conditions we can make our filter asymptotically optimal, whatever the initial guesstimate for \Lambda.

PROBLEMS

Problem 4.1.1. Let L, M be self-adjoint nonnegative definite matrices. Show that (I + LM) is nonsingular.
Hint: (I + LM)x = 0  \Rightarrow  [(I + LM)x, Mx] = 0  \Rightarrow  Mx = 0  \Rightarrow  x = 0.

Problem 4.1.2. Consider the case where \{v_n\} is one-dimensional. Show that if the fit error is zero, then the signal-estimation error is the maximum possible. What happens to the Kalman filter in this case?

Problem 4.1.3. Show that

    z_n = \nu_n - C_n (\hat{x}_n - \bar{x}_n) .

Problem 4.1.4. In the class of estimates of s_n of the form a_n v_n, find the optimal a_n that minimizes the error covariance E[(s_n - a_n v_n)(s_n - a_n v_n)^*]. Denoting the minimal error covariance by T_n, show that

    T_n = C_n R_n C_n^* (I + C_n R_n C_n^*)^{-1} ,   R_n = E[x_n x_n^*] ,

and show that the optimal a_n is given by:

    a_n = C_n R_n C_n^* (I + C_n R_n C_n^*)^{-1} .

Problem 4.1.5. Consider the case where there is no "state noise": F_n \equiv 0. Show that in that case, for zero input (U_n \equiv 0) and E[x_0] = 0:

    \hat{x}_k = \Phi_k \Big( \sum_{i=1}^{k} \Phi_i^* C_i^* (G_i G_i^*)^{-1} C_i \Phi_i + \Lambda^{-1} \Big)^{-1} \sum_{i=1}^{k} \Phi_i^* C_i^* (G_i G_i^*)^{-1} v_i ,

where

    \Phi_i = A_{i-1} A_{i-2} \cdots A_0 ,   \Phi_0 = \text{Identity} .

Show that

    P_n = \Phi_n \Big( \sum_{i=1}^{n} \Phi_i^* C_i^* (G_i G_i^*)^{-1} C_i \Phi_i + \Lambda^{-1} \Big)^{-1} \Phi_n^*

satisfies (4.1.33) (yielding a "closed form" solution to (4.1.32)).

Hint: Use the "batch" formulas of Chapter 3.

Problem 4.1.6. Find the optimal "zero memory" state estimator, i.e., find E[x_n \mid v_n] and the corresponding error matrix.

Problem 4.1.7. Let z_n denote the fit error and let, for each n:

    z = (z_1, z_2, \dots, z_n) ;   v = (v_1, v_2, \dots, v_n) ;   \nu_n = v_n - C_n \bar{x}_n .

Let R denote the covariance matrix of v:

    R = E[v v^*] .

Show that

    |E[z z^*]| = \prod_{k=1}^{n} |G_k G_k^* - C_k P_k C_k^*| ,

where |\cdot| denotes determinant. Similarly, writing \nu = (\nu_1, \nu_2, \dots, \nu_n), show that the determinant of the covariance matrix of \nu is

    |E[\nu \nu^*]| = \prod_{k=1}^{n} |C_k H_{k-1} C_k^* + G_k G_k^*| = |R| .
Hint: Use

    z_n = (I - C_n P_n C_n^* (G_n G_n^*)^{-1}) v_n + \text{terms containing } v_{n-1}, \dots, v_1 ;

that is, z = Lv, where L is "block" lower-triangular.

Problem 4.1.8. Show that

    P_{n+1} \le A_n P_n A_n^* + F_n F_n^* .

Problem 4.1.9. This problem illustrates the dependence on the guesstimate for the initial covariance. Assume zero input. Define the suboptimal filter

    \hat{x}_n^a = (I - P_n^a C_n^* C_n) A_{n-1} \hat{x}_{n-1}^a + P_n^a C_n^* v_n ,   \hat{x}_0^a = 0 ,
    P_n^a = (I + H_{n-1}^a C_n^* C_n)^{-1} H_{n-1}^a ,
    H_n^a = A_n P_n^a A_n^* + F_n F_n^* ,   P_0^a = \Lambda^a .

Let T_n denote the corresponding error-covariance matrix:

    T_n = E[(x_n - \hat{x}_n^a)(x_n - \hat{x}_n^a)^*] .

Show that, in particular,

    T_n \le P_n^a   if   \Lambda^a = (\Lambda^a)^* \ge P_0 = E[x_0 x_0^*] .

Problem 4.1.10. Let \hat{N}_n = E[N_n \mid v_1, \dots, v_n]. Is \{\hat{N}_n\} white noise? What is the covariance? Calculate E[\hat{s}_n \hat{N}_n^*]. Are \{\hat{N}_n\} and \{z_n\} independent?

Problem 4.1.11. Alternate Definition of Fit Error: We may define another "fit error," using now the one-step predictor, as:

    \bar{z}_n = v_n - \bar{s}_n ,   where   \bar{s}_n = C_n \bar{x}_n .

Show that the variance of this fit error is equal to the variance of the innovation, and is always larger than the noise variance. When does it attain its minimum?

4.2. KALMAN FILTER: STEADY STATE THEORY

By far the most important for us is the case where the signal is (asymptotically) stationary. Thus we need to consider the asymptotics of the case where the signal-generation model (4.1.4) is time-invariant. Let us restate this problem for the time-invariant system.

Since the signal-generation system is time-invariant, the concepts of Observability and Controllability can tell us much about its structure. Let us first examine what role these concepts play in analyzing filter performance.

Theorem 4.2.1. Suppose (A, F) is controllable. Then if P_n is nonsingular, so is P_{n+k}, defined by (4.2.2), for every integer k \ge 0.

Proof. Let P_n be nonsingular. Then H_n must also be nonsingular.
For suppose for some nonzero x we have:

    H_n x = A P_n A^* x + F F^* x = 0 .

Then

    [P_n A^* x, A^* x] + [F^* x, F^* x] = 0 ,

hence F^* x = 0 and P_n A^* x = 0. Since P_n is nonsingular, we must have A^* x = 0, and hence F^* A^{*k} x = 0, k \ge 1, with x \ne 0. This violates controllability. Hence H_n is nonsingular. But

    P_{n+1} = (I + H_n C^* C)^{-1} H_n ,

and hence is nonsingular. Hence by induction P_{n+k} is nonsingular for every positive integer k.

Let us next examine the role played by Observability.

    v_n = s_n + G N_n ,
    s_n = C x_n ,                                                                   (4.2.1)
    x_{n+1} = A x_n + u_n + F N_n ,   n \ge 0 .

Then our filter formulas are:

    \hat{x}_n = (I - P_n C^* (G G^*)^{-1} C) A \hat{x}_{n-1} + P_n C^* (G G^*)^{-1} v_n
              = A \hat{x}_{n-1} + H_{n-1} C^* (C H_{n-1} C^* + G G^*)^{-1} (v_n - C A \hat{x}_{n-1}) ,
    P_{n+1} = (I + H_n C^* (G G^*)^{-1} C)^{-1} H_n ,                                (4.2.2)
    H_n = A P_n A^* + F F^* ,
    P_0 = E[x_0 x_0^*] .

In order to save space, we shall from now on take

    G G^* = \text{Identity} = I .

Otherwise, we only have to replace C by (\sqrt{G G^*})^{-1} C and v_n by (\sqrt{G G^*})^{-1} v_n. We shall also set the input u_n to be zero, since our main interest is in the error covariance, which is the same regardless of the input.

Theorem 4.2.2. Suppose x is unobservable:

    C A^n x = 0   for every n \ge 0 ,

and further:

    \sup_n \|A^n x\| = +\infty .                                                    (4.2.3)

Then we can choose P_0 so that

    \lim_n \mathrm{Tr}\, P_n = +\infty .                                            (4.2.4)

Proof. We may without loss of generality assume that \|x\| = 1. Let us choose P_0 so that

    P_0 x = \lambda x ,   \lambda > 0 ,

and let

    x_0 = [x_0, x]\, x + z .

Now

    x_{n+1} = A x_n + F N_n

has the solution:

    x_n = A^n x_0 + \sum_{k=0}^{n-1} A^{n-1-k} F N_k .

Hence

    v_n = C x_n + G N_n = C A^n z + C \sum_{k=0}^{n-1} A^{n-1-k} F N_k + G N_n ,      (4.2.5)

since C A^n x = 0.

From (4.2.5) it follows that

    E[[x_0, x]\, v_n^*] = 0 ,

since

    E[[x_0, x]\, z^* A^{*n} C^*] = E[[x_0, x]\, z^*] A^{*n} C^*

and

    E[[x_0, x]\, z^*] = E[[x_0, x](x_0 - [x_0, x] x)^*]
                      = x^* E[x_0 x_0^*] - x^* E[x_0 x_0^*] x\, x^*
                      = \lambda x^* - \lambda x^* = 0 .

Hence \{v_n\} is independent of [x_0, x], and hence so is \hat{x}_n. Hence

    x_n - \hat{x}_n = [x_0, x] A^n x + A^n z + \sum_{k=0}^{n-1} A^{n-1-k} F N_k - \hat{x}_n ,

where the second, third and fourth terms are uncorrelated with the first. Hence

    \mathrm{Tr}\, P_n = \mathrm{Tr}\,
E[(\hat{x}_n - x_n)(\hat{x}_n - x_n)^*] \ge E[[x_0, x]^2]\, [A^n x, A^n x] . Since

    E[[x_0, x]^2] = [P_0 x, x] = \lambda ,

we have that

    \mathrm{Tr}\, P_n \ge \lambda \|A^n x\|^2 ,

or

    \sup_n \mathrm{Tr}\, P_n = +\infty .

Remark. Note that even though the trace is unbounded, we do have (cf. Exercise 4.1.4) that

    C P_n C^* \le C R_n C^* (I + C R_n C^*)^{-1} < I .

As we have already noted, it is impossible to construct (in general) an (optimal) Kalman filter, because to do so we need P_n, and P_n depends on P_0, which is unknown and unknowable. However, in the time-invariant case (4.2.1), we can show that it is possible to achieve "asymptotic" optimality. Thus under certain conditions (indicated below) we can show that P_n will converge to a limit matrix P as n \to \infty, independent of the initial matrix P_0, and further, if we define:

    \hat{x}_n^a = (I - P C^* C) A \hat{x}_{n-1}^a + P C^* v_n ,

then the corresponding error covariance

    E[(x_n - \hat{x}_n^a)(x_n - \hat{x}_n^a)^*] \to P ,   as n \to \infty .

For this purpose, let us define the function \Phi(\cdot) on self-adjoint nonnegative definite matrices P by

    \Phi(P) = (I + H(P) C^* C)^{-1} H(P) ,                                          (4.2.6)

where H(P) is defined by

    H(P) = A P A^* + F F^* .

Then (4.2.2) becomes:

    P_{n+1} = \Phi(P_n) .

Suppose P_n converges to P_\infty. Then we would have

    P_\infty = \Phi(P_\infty) .

The equation

    P = \Phi(P)                                                                     (4.2.7)

is called the Algebraic (or Steady State) Riccati Equation (SSRE); and we see that P_\infty, if it exists, satisfies this equation.

Let us recall next some definitions from state space theory. The class of (C, A) unobservable states is defined as: all x such that

    C A^k x = 0   for every k \ge 0 .

The class of A-stable states is defined as: all x such that

    \|A^k x\| \to 0   as k \to \infty .

The class of stable states is a linear subspace; let \Pi_s denote the corresponding projection. The class of (C, A) unobservable states is also a linear subspace; let P_u denote the corresponding projection, and P_r that of the orthogonal complement. Our basic result on the steady state Kalman filter is:

Theorem 4.2.3.
Suppose all (C, A) unobservable states are A-stable and all (F^*, A^*) unobservable states are A^*-stable. Then the steady state Riccati equation has a unique self-adjoint nonnegative definite solution. Denote it P_\infty. Moreover,

    (I - P_\infty C^* C) A

is stable, and the filter is asymptotically optimal.

Proof. Let us state the individual results we need as lemmas, since they are of independent interest.

Lemma 4.2.1. Suppose P and Q are self-adjoint and nonnegative definite. Then

    \Phi(P + Q) \ge \Phi(P) .

Proof. Let

    H(P) = A P A^* + F F^* ,

mapping self-adjoint nonnegative definite matrices into the same class. Let P, Q be self-adjoint and nonnegative definite, and let \lambda \ge 0. Then we can express \Phi(P+Q) - \Phi(P) as:

    \Phi(P+Q) - \Phi(P) = \int_0^1 \frac{d}{d\lambda}\, \Phi(P + \lambda Q)\, d\lambda .   (4.2.8)

Now:

    \frac{d}{d\lambda}\, \Phi(P + \lambda Q)
      = (I + H(P+\lambda Q) C^* C)^{-1} A Q A^*
        - (I + H(P+\lambda Q) C^* C)^{-1} A Q A^* C^* C (I + H(P+\lambda Q) C^* C)^{-1} H(P+\lambda Q)
      = (I + H(P+\lambda Q) C^* C)^{-1} A Q A^* (I - C^* C\, \Phi(P+\lambda Q)) .

For any self-adjoint nonnegative definite matrix M:

    I - \Phi(M) C^* C = I - (I + H(M) C^* C)^{-1} H(M) C^* C
                      = (I + H(M) C^* C)^{-1} [\,I + H(M) C^* C - H(M) C^* C\,]
                      = (I + H(M) C^* C)^{-1} .

Hence

    \frac{d}{d\lambda}\, \Phi(P+\lambda Q) = (I - \Phi(P+\lambda Q) C^* C)\, A Q A^*\, (I - C^* C\, \Phi(P+\lambda Q)) \ge 0 .

From (4.2.8) it follows that

    \Phi(P+Q) - \Phi(P) \ge 0   if   Q \ge 0 .

Hence, in particular:

    \Phi(P_{n+1}) \ge \Phi(P_n)   if   P_{n+1} \ge P_n .

Lemma 4.2.2. Suppose (C, A) unobservable states are A-stable. Then for any choice of P_0,

    P_n \le M   for all n ,                                                         (4.2.9)

where M is a self-adjoint nonnegative definite matrix, and may depend on P_0.

Proof. First let us assume that we have (C, A) observability. Assume the matrix A is n \times n. Define

    R = \sum_{k=0}^{n-1} A^{*k} C^* C A^k .

Then R must be nonsingular, as we have seen in Chapter 1. For m > n, define the suboptimal estimate:

    \hat{x}_m^s = A^n R^{-1} \sum_{k=0}^{n-1} A^{*k} C^* v_{m-n+k} .

What we have constructed thereby is a filter with a memory of n, but not optimal. We show that the corresponding error covariance is bounded.
Exploiting the time invariance of the system (4.2.1), we have:

    x_{m-n+k} = A^k x_{m-n} + \sum_{j=0}^{k-1} A^j F N_{m-n+k-1-j} ,   m > n .

Let

    \xi_{m-n+k} = \sum_{j=0}^{k-1} A^j F N_{m-n+k-1-j} ,   m > n .

Then we can express \hat{x}_m^s as

    \hat{x}_m^s = A^n R^{-1} \sum_{k=0}^{n-1} A^{*k} C^* C A^k x_{m-n}
                  + A^n R^{-1} \sum_{k=0}^{n-1} A^{*k} C^* (C \xi_{m-n+k} + G N_{m-n+k})
                = A^n x_{m-n} + A^n R^{-1} \sum_{k=0}^{n-1} A^{*k} C^* (C \xi_{m-n+k} + G N_{m-n+k}) ,

hence the error in the estimate:

    x_m - \hat{x}_m^s = \xi_m - A^n R^{-1} \sum_{k=0}^{n-1} A^{*k} C^* (C \xi_{m-n+k} + G N_{m-n+k}) .

It is readily verified that the covariance of the random vector on the right side is independent of m. Hence

    E[(x_m - \hat{x}_m^s)(x_m - \hat{x}_m^s)^*] = J ,

where J is a fixed matrix. Since P_m is the minimal error matrix, we have

    P_m \le J ,   m > n ;

hence, taking

    M = \max\,(J,\ \max_{k \le n} P_k) ,

the lemma follows.

Suppose next that A is stable: all eigenvalues are strictly less than one in magnitude. In that case the conditions of our Lemma are satisfied, and we can prove (4.2.9) very simply. We have

    P_n \le E[x_n x_n^*] ,

and we know from Chapter 2 that

    E[x_n x_n^*] = A^n P_0 A^{*n} + \sum_{k=0}^{n-1} A^k F F^* A^{*k} .

Now since A is stable, the first term goes to zero as n \to \infty (regardless of the choice of P_0), and the second term converges to

    \sum_{k=0}^{\infty} A^k F F^* A^{*k} .

Hence (4.2.9) is immediate. In other words, if (C, A) is observable or A is stable, the Lemma holds.

Let us now consider the more general situation where A may be unstable, but all (C, A) unobservable states are stable. We begin by noting that

    C x_n = C P_r x_n

and

    P_r x_n = P_r A x_{n-1} + P_r F N_{n-1} .

It is readily verified that A maps the subspace of (C, A) unobservable states into itself:

    A P_u x = P_u A P_u x ,

and hence also

    P_r A x = P_r A (P_u x + P_r x) = P_r A P_r x .

Let

    y_n = P_r x_n ,   n \ge 0 .                                                     (4.2.10)

Then, operating with P_r on the left of the state equation in (4.2.1), we see that y_n satisfies:

    y_n = (P_r A P_r) y_{n-1} + (P_r F) N_{n-1} ,
    v_n = C y_n + G N_n .                                                           (4.2.11)

Let

    \hat{y}_n = E[y_n \mid v_1, \dots, v_n] .

Then from (4.2.10) it follows that

    \hat{y}_n = P_r \hat{x}_n .

But since (C P_r, P_r A P_r) is observable, we see that

    E[(P_r x_n - P_r \hat{x}_n)(P_r x_n - P_r \hat{x}_n)^*] = E[(y_n - \hat{y}_n)(y_n - \hat{y}_n)^*]

satisfies (4.2.9).
In other words,

    P_r P_n P_r \le M < \infty .                                                    (4.2.12)

Next let

    z_n = (P_u A P_u) z_{n-1} + P_u F N_{n-1} ,   n \ge 1 ,                          (4.2.13)
    z_0 = P_u x_0 .

Let us see how z_n differs from P_u x_n. Operating on the left with P_u on the state equation in (4.2.1), we have:

    P_u x_n = P_u A x_{n-1} + P_u F N_{n-1}
            = P_u A P_u x_{n-1} + P_u A P_r x_{n-1} + P_u F N_{n-1} .                (4.2.14)

Hence, letting

    h_n = P_u x_n - z_n

and subtracting (4.2.13) from (4.2.14), we have

    h_n = (P_u A P_u) h_{n-1} + P_u A\, y_{n-1}
        = \sum_{j=0}^{n-1} (P_u A P_u)^j\, P_u A\, y_{n-1-j} ,

since

    h_0 = P_u x_0 - z_0 = 0 .

Now

    x_n - \hat{x}_n = (y_n - \hat{y}_n) + (z_n - \hat{z}_n) + (h_n - \hat{h}_n) ,    (4.2.15)

where

    \hat{z}_n = E[z_n \mid v_1, \dots, v_n] ,   \hat{h}_n = E[h_n \mid v_1, \dots, v_n] .

By virtue of the condition that unobservable states are stable, it follows that (P_u A P_u) is stable. Hence

    E[(z_n - \hat{z}_n)(z_n - \hat{z}_n)^*]

is bounded. Now for any two random vectors x, y we have

    E[\|x + y\|^2] \le E[\|x\|^2] + E[\|y\|^2] + 2 E[\|x\|\,\|y\|] ,

and by the Schwarz inequality:

    E[\|x\|\,\|y\|] \le \sqrt{E[\|x\|^2]}\, \sqrt{E[\|y\|^2]} .

Hence

    E[\|x + y\|^2] \le \Big( \sqrt{E[\|x\|^2]} + \sqrt{E[\|y\|^2]} \Big)^2 .

By induction, it follows that for any finite number of random vectors b_1, \dots, b_n we have that

    E\Big[ \Big\| \sum_{1}^{n} b_i \Big\|^2 \Big] \le \Big( \sum_{1}^{n} \sqrt{E[\|b_i\|^2]} \Big)^2 .   (4.2.16)

Hence in particular

    \sqrt{ \sup_n E[\|h_n - \hat{h}_n\|^2] } \le \Big( \sum_{j=0}^{\infty} \|A_u^j\| \Big) \|P_u A\| \sqrt{ \sup_k E[\|y_k - \hat{y}_k\|^2] } ,

where

    A_u = P_u A P_u .

Since A_u is stable, it follows therefore that

    \sup_n E[\|h_n - \hat{h}_n\|^2] = \sup_n \mathrm{Tr}\, E[(h_n - \hat{h}_n)(h_n - \hat{h}_n)^*] < \infty .

It follows in turn from (4.2.15) and the inequality (4.2.16) that

    \sup_n E[\|x_n - \hat{x}_n\|^2] = \sup_n \mathrm{Tr}\, P_n < \infty ,

or

    \sup_n P_n < \infty ,

as was required to be proved.

Remark. It should be noted that h_n need not be zero, in general.

Lemma 4.2.3. Let P be any self-adjoint nonnegative definite solution of the steady state Riccati equation (4.2.7). If (F^*, A^*) unobservable states are A^*-stable, then (I - P C^* C) A is stable, and moreover, P is the only self-adjoint nonnegative definite solution of (4.2.7).

Proof. Let P be a self-adjoint nonnegative definite solution of (4.2.7).
Then

    (I + H(P) C^* C) P = H(P) ,

or

    P + H(P) C^* C P = H(P) ,

or

    P = H(P)(I - C^* C P) ,

so that

    P - P C^* C P = (I - P C^* C) P = (I - P C^* C) H(P) (I - C^* C P) ,

or

    P = (I - P C^* C) H(P) (I - C^* C P) + P C^* C P                                (4.2.17)

(which is no more than the steady-state version of (4.1.26)).

Let

    \Psi = (I - P C^* C) A ,   J = I - P C^* C .

Then we can rewrite (4.2.17):

    P = \Psi P \Psi^* + J F F^* J^* + P C^* C P .                                   (4.2.18)

Suppose \Psi is not stable. Then neither is \Psi^*. Let x be an unstable eigenvector of \Psi^*:

    \Psi^* x = \gamma x ,   |\gamma| \ge 1 ,   \|x\| = 1 .

Then, substituting in (4.2.18), we have

    [P x, x] = |\gamma|^2 [P x, x] + \|F^* J^* x\|^2 + \|C P x\|^2 ,

or

    (1 - |\gamma|^2)[P x, x] = \|F^* J^* x\|^2 + \|C P x\|^2 .                       (4.2.19)

Since P is nonnegative definite, [P x, x] \ge 0, and hence the right side of (4.2.19) must be zero if |\gamma| \ge 1. Hence

    C P x = 0   and   F^* J^* x = 0 .

Since

    J^* x = x - C^* C P x = x ,

it follows that F^* x = 0 and

    \Psi^* x = A^* J^* x = A^* x = \gamma x .

Hence for every nonnegative integer k:

    F^* (A^*)^k x = \gamma^k F^* x = 0 ,

or x is (F^*, A^*) unobservable. This leads to a contradiction, since by assumption x must then be A^*-stable, whereas \|(A^*)^k x\| = |\gamma|^k \ge 1. Hence \Psi must be stable.

Let us next prove uniqueness of the solution. Thus, let P_1, P_2 be two self-adjoint nonnegative definite solutions of the SSRE (4.2.7). Let

    \Psi_1 = (I - P_1 C^* C) A ,   \Psi_2 = (I - P_2 C^* C) A .

Then both \Psi_1 and \Psi_2 must be stable. Let

    Q = P_1 - P_2 .

As we have seen, if P is any solution of the SSRE, we have

    (I + H(P) C^* C)^{-1} = I - P C^* C .

Hence the SSRE yields

    H(P_1) = P_1 (I - C^* C P_1)^{-1} ,   H(P_2) = (I - P_2 C^* C)^{-1} P_2 .

Subtracting, we have

    A Q A^* = P_1 (I - C^* C P_1)^{-1} - (I - P_2 C^* C)^{-1} P_2 .

Hence

    (I - P_2 C^* C) A Q A^* (I - C^* C P_1) = (I - P_2 C^* C) P_1 - P_2 (I - C^* C P_1) = Q ,

or

    Q = \Psi_2 Q \Psi_1^* ,

and hence, for every positive integer k,

    Q = \Psi_2^k\, Q\, \Psi_1^{*k} .

Hence

    [Q x, x] = [Q \Psi_1^{*k} x,\ \Psi_2^{*k} x] ,

and, \Psi_1 and \Psi_2 being stable, letting k go to infinity we obtain

    [Q x, x] = 0 .

Furthermore, Q being self-adjoint, it follows that Q = 0, or

    P_1 = P_2 .

Remark. Note that for uniqueness we require that the solution (of the SSRE) be nonnegative definite.
From (4.2.18) it follows that any self-adjoint solution is nonnegative definite if, in addition, \Psi is stable.

We can now complete the proof of Theorem 4.2.3 with the aid of our lemmas. First, the condition that (C, A) unobservable states are A-stable yields that, whatever P_0,

    \sup_n P_n \le M < \infty ,

where

    P_{n+1} = \Phi(P_n) .

Take the special case where P_0 is zero. Then by Lemma 4.2.1,

    P_n = \Phi^n(0)

is monotone nondecreasing and, being bounded, converges to a finite limit. Denote the limit by P_s. Then P_s is a solution of the SSRE:

    P_s = \Phi(P_s) .

By the condition that (F^*, A^*) unobservable states are A^*-stable (Lemma 4.2.3), we obtain that any such solution must be unique. Moreover,

    \Psi = (I - P_s C^* C) A

is stable. Let us consider the performance of the suboptimal filter using P_s:

    \hat{x}_n^a = A \hat{x}_{n-1}^a + P_s C^* (v_n - C A \hat{x}_{n-1}^a) ,   \hat{x}_0^a = 0 .   (4.2.20)

Let

    e_n = x_n - \hat{x}_n^a ,

and let T_n denote the corresponding (error) covariance:

    T_n = E[e_n e_n^*] .

Then T_n satisfies

    T_{n+1} = \Psi T_n \Psi^* + (I - P_s C^* C) F F^* (I - C^* C P_s) + P_s C^* C P_s .   (4.2.20')

But using the SSRE in the form (4.2.18), we see that

    (T_{n+1} - P_s) = \Psi (T_n - P_s) \Psi^* ,

or

    T_n - P_s = \Psi^n (\Lambda - P_s) \Psi^{*n} ,                                  (4.2.21)

where \Lambda = E[x_0 x_0^*]. Since \Psi is stable, it follows that

    \lim_n T_n = P_s ,

or the filter is asymptotically optimal, the rate of convergence to optimality being determined by (4.2.21). Clearly, the more stable \Psi, the faster the convergence.

Next we note that for any initial P_0: 0 \le P_0. Hence by Lemma 4.2.1,

    \Phi^n(0) \le \Phi^n(P_0) = P_n \le T_n .

But the left side co
```
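The two levels of the construction just proved, the filter recursion (one-step prediction plus a gain correction by the innovation) and the covariance iteration P_{n+1} = Phi(P_n) whose limit and stability Theorem 4.2.3 establishes, can be probed numerically. The following is a minimal NumPy sketch, not the book's code: it assumes G G* = I and zero input as in (4.2.2), and the 2-state matrices are an arbitrary small example chosen by us, not taken from the text.

```python
import numpy as np

def kalman_step(x_hat, P, v, A, C, F):
    """One filtering step in one-step-predictor form, with G G* = I.

    x_hat, P : previous estimate and its error covariance P_{n-1}
    v        : the new observation v_n
    Returns the updated pair (x_hat_n, P_n).
    """
    d = A.shape[0]
    x_bar = A @ x_hat                         # one-step prediction (zero input)
    H = A @ P @ A.T + F @ F.T                 # predictor error covariance H_{n-1}
    K = H @ C.T @ np.linalg.inv(C @ H @ C.T + np.eye(C.shape[0]))
    x_new = x_bar + K @ (v - C @ x_bar)       # correction by the innovation
    P_new = (np.eye(d) - K @ C) @ H           # error covariance P_n
    return x_new, P_new

def phi(P, A, C, F):
    """The Riccati map: Phi(P) = (I + H(P) C*C)^{-1} H(P)."""
    H = A @ P @ A.T + F @ F.T
    return np.linalg.solve(np.eye(H.shape[0]) + H @ C.T @ C, H)

# An arbitrary 2-state example (stable A, observable pair), not from the book.
A = np.array([[0.8, 0.3], [0.0, 0.5]])
C = np.array([[1.0, 0.0]])
F = np.array([[0.4], [0.2]])

# Iterating from P_0 = 0 is monotone nondecreasing and converges ...
P = np.zeros((2, 2))
for _ in range(500):
    P = phi(P, A, C, F)
# ... to a fixed point of the steady-state Riccati equation,
assert np.allclose(phi(P, A, C, F), P)
# ... which is reached from a different initial guesstimate as well,
Q = 10.0 * np.eye(2)
for _ in range(500):
    Q = phi(Q, A, C, F)
assert np.allclose(Q, P)
# ... and Psi = (I - P C*C) A is stable, as Lemma 4.2.3 asserts.
Psi = (np.eye(2) - P @ C.T @ C) @ A
assert max(abs(np.linalg.eigvals(Psi))) < 1.0
```

Note that the covariance produced by one `kalman_step` coincides with one application of `phi`, since (I - K C) H = (I + H C*C)^{-1} H.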
https://studyres.com/doc/1260016/68000-conditions-and-condition-code-register
"Survey\n\n* Your assessment is very important for improving the work of artificial intelligence, which forms the content of this project\n\nDocument related concepts\nno text concepts found\nTranscript\n```3D1-Microprocessor Systems 1\nLecture 9: 68000 Conditions and the Condition Code Register\nThe computational power of all computers lies in their ability to choose between two or more\ncourses of actions on the basis of the available information. Without such powers of decision,\nthe computer would be almost entirely worthless, as it could only execute a long list of\ninstructions “blindly”.\nInstructions that make decisions at the assembly language level are used to implement\nconstructs like IF…THEN…ELSE and REPEAT…UNTIL at the high-level language level. We are\nnow going to look at the family of instructions that enable the programmer to synthesise\nhigh-level control constructs.\nLearning Outcomes:\nOn completion of this lecture, you will be able to:\n Discuss the need for conditional instructions;\n Describe the flag bits of the Condition Code register;\n Discuss the conditions the 68000 recognises;\n Implement compare, test and bit manipulation instructions;\n Implement conditional branches.\n9.1\n68000 conditions\nThere are instructions that only have an effect if a particular condition holds true. In the\n68000, conditions are:\n Boolean expressions that take the five bits of the Condition Code Register (Fig. 10.1)\n True, false\n the Boolean operators ‘and’ ,‘or’, ‘eor’ and ‘not’.\nFig. 9.1 Conditions the 68000 recognises\nA computer chooses between two courses of action by examining the state of one or more\nbits in its CCR and associating one action with the outcome of the test and another action\nwith the other outcome.\n9-1\n3D1-Microprocessor Systems 1\n9.2\nCalculating Condition in the CCR\nRemember that the Condition Code register (CCR) is a special register of the CPU. 
It contains a set of flag bits (X, N, Z, V and C) which are set or cleared according to the result of an arithmetic or logical operation. That is, the CCR provides a status report about the operation.

Many instructions have an effect on the X, N, Z, V and C bits of the CCR. In the Instruction Set Summary:
* means the bit is affected 'as you might expect'
- means the bit is unaffected: it retains its present value

What you might expect…
- C: the carry/borrow bit. Set if an arithmetic carry or borrow occurred; cleared otherwise.
- V: the arithmetic overflow bit. Set if sign (two's-complement) overflow occurred; cleared otherwise.
- Z: the zero bit. Set if the result is zero; cleared otherwise.
- N: the negative bit. Set if the result is negative; cleared otherwise.
- X: the extend bit. This is the same as the carry bit, but is only affected by a subset of the instructions that affect the carry bit: the 'arithmetic' instructions. X is set if an arithmetic carry or borrow has occurred.

Some instructions have no effect apart from their effect on the CCR: TST, CMP, BTST, etc. The purpose of these instructions is to calculate an appropriate condition and place it in the CCR so that it can have a desired effect on program execution.

9.3 Compare and Test Instructions: CMP and TST

CMP -- the Compare instruction sets the CCR as if the first (source) operand had been subtracted from the second (destination) operand. The N-, Z-, V- and C-bits are all updated and the X-bit remains unaffected. The destination operand must be a data register and the source operand may be specified by any of the 68000's addressing modes. For example:

    CMP.L D0,D1       evaluates [D1(0:31)] - [D0(0:31)]
    CMP.B TEMP1,D3    evaluates [D3(0:7)]  - [M(TEMP1)]
    CMP.L TEMP1,D3    evaluates [D3(0:31)] - [M(TEMP1)]
    CMP.W (A3),D2     evaluates [D2(0:15)] - [M([A3])]

TST -- the Test [against zero] instruction reads the operand, compares it with zero, and sets the bits of the CCR accordingly. For example, TST D3 has a similar effect to
For example, TST D3 has a similar effect to\nCMP #0,D3.\n9.4\nBit manipulation instructions: BTST, BSET, BCLR, BCHG\nThe 68000 provides a family of four instructions that act on a single bit of an operand,\nrather than the entire operand: BTST, BSET, BCLR, BCHG. The selected bit is first tested to\ndetermine whether it is a one or a zero, and then it is operated on according to the actual\ninstruction. In each of the bit manipulation instructions, the complement of the selected bit\nis moved to the Z-bit of the CCR and then the bit is left unchanged, set, cleared or\ncomplemented. The N-,V-,C- and X-bits are not affected.\n\nBTST: Bit TeST tests a bit of an operand. If the test bit is zero, the Z-bit of the CCR is\nset, else the Z-bit is cleared. A bit test does not affect the value of the operand under\ntest in any way. The location of the bit to be tested is specified by a constant, or as\nthe contents of a data register. For example, BTST #3,(A0) tests bit 3 of the byte in\n9-2\n3D1-Microprocessor Systems 1\n\n\n\n9.5\nmemory pointed at by A0. BTST D0,(A0) tests the bit in the byte pointed at by A0\nwhose bit-number is in D0.\nBSET: Bit Test and SET causes the Z-bit of the CCR to be set if the specified bit of an\noperand is zero, and then forces it to be set to one. eg: BTSET #4,D0.\nBCLR: Bit and CLeaR works exactly like BSET, except that the specified bit is cleared\nafter it has been tested.\nBCHG: Bit test and CHanGe causes the value of the specified bit to be tested and\nthen inverts its state.\nBranch Conditionally\nThe 68000 provides the programmer with a toolkit containing instructions for the\nimplementation of conditional structures:\nBcc\n<label>\nBranch to label on condition cc true.\nThe condition cc, calculated into the CCR, is any one of the fourteen conditions listed below:\nFig. 
9.2 authorised conditions with the Bcc instruction\nAfter an arithmetic or logical operation is carried out, the value of the Z, N, C, V flags in the\nCCR are updated accordingly. These flag bits are used to determine whether the appropriate\nlogical condition is true or false. For example, BCS LABEL causes the state of the carry-bit to\nbe tested. If it is set, a branch is made to the point in the program called LABEL. Otherwise,\nthe instruction immediately following BCS LABEL is executed.\nWe can divide branch instructions into two classes: those that branch on an unsigned\ncondition and those that branch on a signed condition. Branch instructions used in\nconjunction with signed arithmetic are: BGE (Branch on Greater than or Equal), BGT (Branch\non Greater than), BLE (Branch on Less than or Equal), and BLT (Branch on Less than).\nBranch instructions used in conjunction with unsigned arithmetic are: BHI (Branch on Higher\nthan), BCC (Branch on Carry Clear), BLS (Branch on Less than or same), and BCS (Branch\non Carry Set). Some assemblers let you write BHS (Branch on Higher or Same) instead of\nBCC, and BLO (Branch on Lower or Same) rather than BCS.\nE.g: if [D0]=\\$CO and we perform the operation CMP.B #\\$2,D0, the result is C016-2516=9B16,\nand Z=0, N=1, V=0, C=0. If we perform a BHS, the branch will be taken because \\$CO is\nhigher than \\$25 when using unsigned arithmetic. If we use a signed branch, BGE, the\nbranch will not be taken, because \\$CO is less than \\$25 (i.e., -64 is less than 37).\n9-3\n3D1-Microprocessor Systems 1\n9.6\nTest Condition, Decrement and Branch Conditionally\nThe DBcc instruction makes it easier to execute a loop a given number of times. The\nassembly language form is:\nDBcc Dn,<label> test condition cc, decrement Dn and branch to label if cc is false\nDn is a data register and <label> is a label used to specify a branch address. 
The label is assembled to a 16-bit signed displacement, which permits a range of ±32K bytes.

If the result of the test is true, the branch is not taken and the next instruction in sequence is executed (i.e., the loop is exited).

If the specified condition is not true, the low-order 16 bits of register Dn are decremented by 1. In addition, if the resulting contents of Dn are equal to -1, the next instruction in sequence is executed; otherwise the branch to <label> is taken.

Note that DBcc has the opposite effect to a Bcc instruction; a Bcc instruction takes the branch on condition cc true, whereas a DBcc instruction exits the loop on condition cc true.

As in the case of the Bcc instruction, there are 14 possible computed values of the condition cc, plus two static values: cc = T (true) and cc = F (false), as shown in Fig. 9.1.

E.g.: DBF Dn,<label> always causes Dn to be decremented and a branch made to <label> until the contents of Dn are -1. Many assemblers permit the use of the mnemonic DBRA instead of DBF.

9.7 Conclusion

- CMP is exactly like a SUB instruction, except that it doesn't replace the second operand with the result.
- TST is somewhat equivalent to CMP #0,…
- The bit test family of instructions is used mostly when dealing with peripherals. Memory being byte addressable, if the destination address is a memory location, the source (bit-number) operand is treated as a modulo-8 value. If the destination address is a data register, then the source operand is treated as a modulo-32 value.
- Bcc and DBcc are conditional branch instructions. The DBcc instruction is designed to support applications in which one of two conditions may terminate a loop. One condition is the loop count in the specified data register, and the other is the condition specified by the test. Note that DBcc Dn,<label> works only with 16-bit values in Dn; that is, loops of more than 65,536 iterations cannot be implemented directly by this instruction. Two operations have to be carried out to implement a 32-bit count.

REFERENCES

A.
Clements; "The 68000's Instruction Set," in: 68000 Family Assembly Language; pp. 203-209; PWS Publishing Company; 1994.

Dr. Mike Brady, Microprocessor Systems 1, Dept. of Computer Science, Trinity College Dublin: http://www.tcd.ie/Engineering/Courses/BAI/JS_Subjects/3D1/.

Look on the Web at http://www.mee.tcd.ie/~assambc/3D1.
```
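The flag rules of §9.2, the signed/unsigned branch split of §9.5 and the DBF exit rule of §9.6 can all be checked with a small simulation. The Python sketch below is ours, not Motorola's definition: it models only the 8-bit CMP case, a few branch conditions, and the DBF counter.

```python
def ccr_after_cmp_b(dst, src):
    """N, Z, V, C after CMP.B #src,dst (dst - src on 8-bit operands).

    dst and src are unsigned byte values 0..255. The difference itself is
    discarded, exactly as CMP discards it -- only the CCR report survives.
    """
    res = (dst - src) & 0xFF
    n = res >> 7                               # sign bit of the result
    z = 1 if res == 0 else 0
    c = 1 if src > dst else 0                  # borrow out of bit 7
    # V: operand signs differ and the result's sign differs from dst's.
    v = 1 if ((dst ^ src) & (dst ^ res) & 0x80) else 0
    return {"N": n, "Z": z, "V": v, "C": c}

# A few of the Fig. 9.2 conditions, written as predicates over the flags.
def bhs(f): return f["C"] == 0                 # unsigned >=  (alias of BCC)
def blo(f): return f["C"] == 1                 # unsigned <   (alias of BCS)
def bge(f): return f["N"] == f["V"]            # signed >=
def blt(f): return f["N"] != f["V"]            # signed <

# The section 9.5 worked example: CMP.B #$25,D0 with [D0] = $C0.
flags = ccr_after_cmp_b(0xC0, 0x25)
assert flags == {"N": 1, "Z": 0, "V": 0, "C": 0}
assert bhs(flags) and not bge(flags)           # BHS taken, BGE not taken

def dbf_loop(count, body):
    """Simulate  LOOP: <body> ... DBF Dn,LOOP  entered with Dn = count."""
    dn = count & 0xFFFF
    passes = 0
    while True:
        body()
        passes += 1
        dn = (dn - 1) & 0xFFFF                 # only the low word decrements
        if dn == 0xFFFF:                       # counter has reached -1:
            break                              # fall through, loop exited
    return passes

assert dbf_loop(4, lambda: None) == 5          # the body runs count + 1 times
assert dbf_loop(0, lambda: None) == 1
```

Entered with $FFFF in Dn, `dbf_loop` runs 65,536 times, which is the most a single DBcc can count; this matches the conclusion's remark that 32-bit loop counts need more than one such loop.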
https://slideplayer.com/slide/244798/
"",
null,
"# Volume of Rectangular and Triangular Prisms\n\n## Presentation on theme: \"Volume of Rectangular and Triangular Prisms\"— Presentation transcript:\n\nVolume of Rectangular and Triangular Prisms\nGeometry Volume of Rectangular and Triangular Prisms\n\nVolume – the amount of space occupied by an object.\nExample: The VOLUME of this cube is all the space contained by the sides of the cube, measured in cube units (units3). h w l\n\nVolume Volume – To calculate the volume of a prism, we first need to calculate the area of the BASE of the prism. Example: The AREA of the base of this rectangular prism is l x w. h w l\n\nVolume We find that: Volume = (l x w) x h\nVolume – Once we know the area of the base, this is then multiplied by the height to determine the VOLUME of the prism. We find that: Volume = Area of Base x Height Volume = (l x w) x h h w l\n\nVolume (rectangular prism)\nFormula: V = B x h V = l x w x h h w l\n\nVolume Formula: V = B x h V = l x w x h Find the volume of this prism…\n7 cm 4 cm 5 cm\n\nVolume Formula: V = B x h V = l x w x h V = 5cm x 4cm x 7cm\nFind the volume of this prism… Formula: V = B x h V = l x w x h V = 5cm x 4cm x 7cm 7 cm 4 cm 5 cm\n\nVolume Formula: V = B x h V = l x w x h V = 5cm x 4cm x 7cm V = 140cm3\nFind the volume of this prism… Formula: V = B x h V = l x w x h V = 5cm x 4cm x 7cm V = cm3 7 cm 4 cm 5 cm\n\nVolume Formula: V = B x h V = l x w x h V = 7cm x 4cm x 5cm\nDoes it matter which side is the base? Formula: V = B x h V = l x w x h V = 7cm x 4cm x 5cm 5 cm 4 cm 7 cm\n\nVolume Volume of a Triangular Prism\n\nVolume The same principles apply to the triangular prism.\nTo find the volume of the triangular prism, we must first find the area of the triangular base (shaded in yellow). h b\n\nVolume To find the area of the Base… Area (triangle) = b x h 2\nThis gives us the Area of the Base (B). h b\n\nVolume Now to find the volume…\nWe must then multiply the area of the base (B) by the height (h) of the prism. 
This will give us the Volume of the Prism. [Figure: a triangular prism with base area B and height h.]

Volume (triangular prism):

    V = B × h

Together…

    V = B × h
    V = ((8 × 4) / 2) × 12
    V = 16 × 12
    V = 192 cm³

Your turn… Find the Volume.
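The two formulas on these slides can be wrapped up in a couple of lines of code; a small sketch (the function names are ours):

```python
def rectangular_prism_volume(l, w, h):
    """V = B x h with a rectangular base, B = l x w."""
    return l * w * h

def triangular_prism_volume(b, h_triangle, h_prism):
    """V = B x h with a triangular base, B = (b x h) / 2."""
    return (b * h_triangle / 2) * h_prism

assert rectangular_prism_volume(5, 4, 7) == 140   # the 140 cm^3 example
assert rectangular_prism_volume(7, 4, 5) == 140   # the base choice doesn't matter
assert triangular_prism_volume(8, 4, 12) == 192   # the "Together..." example
```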
https://www.toppr.com/bytes/cbse-class-10-mathematics-paper-analysis-review/ | [
"",
null,
"## CBSE Class 10 Mathematics 2020 – Paper Analysis & Review\n\nThe students of Class 10 appeared for their Maths paper today, March 12, under the ongoing Central Board of Secondary Examination (CBSE). We bring you CBSE Class 10 Maths 2020 – Paper Analysis & Review.\n\nCBSE Class 10 Maths paper consisted of four parts. All questions were compulsory.\n\nSection A\nThere were 6 questions each carrying 1 mark.\n\nSection B\nThere were 6 questions each carrying 2 marks.\n\nSection C\nThere were 10 questions each carrying 3 marks.\n\nSection D\nThere were 8 questions each carrying 4 marks.\n\n### Marking Scheme & Chapter Distribution (Weightage) for CBSE Class 10 Maths Paper 2020\n\n Chapter Name Weightage in terms of marks (2018) Weightage in terms of marks (2020) Real Numbers 1+2+3 = 6 Polynomials 3 Pair of Linear Equations in two variables 3+4+4(Extra) = 11 Quadratic Equations 1 Arithmetic Progression 1+2+4 = 7 Triangles 1+3+3(Extra)+4+4(Extra) = 15 Coordinate Geometry 1+2+3+3(Extra) = 9 Introduction to Trigonometry 1+3+3(Extra)+4 = 11 Some Applications of Trigonometry 4 Circles 3 Constructions 4 Areas Related to Circles 3 Surface Area and Volume 2+3+3(Extra)+4 = 12 Statistics 3+4+4(Extra) = 11 Probability 2+2 = 4\n\nNote: In the analysis above the total no. of marks is equal to 104(3 questions of 4 marks each and 4 questions of 3 marks each) as it includes optional questions as well.\n\n### Comparison with Last Year’s Question Paper\n\n• Questions were difficult compared to last year’s paper.\n• This year questions from all the chapters were asked in comparison to last year’s paper\n• There was more emphasis on Trigonometry, Statistics and Surface area and volume.\n\n### Overview of CBSE Class 10 Maths 2020 – Paper Analysis & Review\n\n• According to our analysis, 65+ out of 80 is easily scorable.\n• Overall, the level of the paper was medium.\n\n## Request a Free 60 minute counselling session at your home\n\nPlease enter a valid phone number\n•",
null
] | [
null,
"https://www.facebook.com/tr",
null,
"https://www.toppr.com/bytes/wp-content/themes/bytes-v2/images/happy-students.svg",
null,
"https://www.toppr.com/bytes/wp-content/themes/bytes-v2/images/ques-attempted.svg",
null,
"https://www.toppr.com/bytes/wp-content/themes/bytes-v2/images/tests-taken.svg",
null,
"https://www.toppr.com/bytes/wp-content/themes/bytes-v2/images/doubts-answered.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90218395,"math_prob":0.8612135,"size":1901,"snap":"2020-34-2020-40","text_gpt3_token_len":521,"char_repetition_ratio":0.13811281,"word_repetition_ratio":0.061290324,"special_character_ratio":0.28879538,"punctuation_ratio":0.06051873,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99015343,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T16:03:24Z\",\"WARC-Record-ID\":\"<urn:uuid:2e667a0a-b431-4b64-89a4-40fecafa2e2c>\",\"Content-Length\":\"81377\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:80917745-8f4b-4ac5-b675-99daa27d9f37>\",\"WARC-Concurrent-To\":\"<urn:uuid:07f8f77d-1199-489a-9508-1036859aaca4>\",\"WARC-IP-Address\":\"104.20.83.29\",\"WARC-Target-URI\":\"https://www.toppr.com/bytes/cbse-class-10-mathematics-paper-analysis-review/\",\"WARC-Payload-Digest\":\"sha1:ZBGY4YU2SPXXOTLXK5U4DWHAZJYJQFNX\",\"WARC-Block-Digest\":\"sha1:FWXA5QD24B7TCOWCG3M6SAV6PJKRK5VC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400283990.75_warc_CC-MAIN-20200927152349-20200927182349-00203.warc.gz\"}"} |
https://johanbroddfelt.se/list_the_items-9 | [
"Johan Broddfelt\n\n# List the items\n\nNow that we got some posts, we would like to display them in a list. So we need a function in our class that fetches an array of items. And of course we want the same function to work for all out classes so we put it in the Db.php. We also need a count function, to return how many items we actually have.\n\n`````` // Fetch an array of items in a table as objects\n// Sample: \\$filter = ' WHERE id > 50 ';\n// If you provide a subset of columns as values the query might be faster, But remember that you do not get all values in your objects\nfunction fetchArray(\\$filter=' ORDER BY id ', \\$values=' * ') {\n\\$ret = array();\n\\$vals = '';\n\nif (\\$values != ' * ') {\n\\$fields = array();\n\\$fields = explode(',', \\$values);\n} else {\n\\$fields = \\$this->fetchFields();\n}\n\\$first = true;\nforeach (\\$fields as \\$field) {\nif (!\\$first) {\n\\$vals .= ', ';\n}\n\\$first = false;\n\\$vals .= \\$field->columnName;\n}\n\\$sql = \"SELECT \" . \\$vals . \" FROM `\" . \\$this->table . '` ' . \\$filter;\n#print \\$sql.'<br>';\n\\$res = Db::query(\\$sql);\nwhile(\\$row = Db::fetch_assoc(\\$res)) {\n\\$className = \\$this->className;\n\\$obj = new \\$this->className;\n\\$obj->fetchObject(\\$row, \\$values);\n\\$ret[] = \\$obj;\n}\nreturn \\$ret;\n}\n\nfunction fetchCount(\\$filter='') {\n\\$sql = \"SELECT count(id) as cnt FROM `\" . \\$this->table . '` ' . 
\\$filter;\n\\$res = Db::query(\\$sql);\nwhile(\\$row = Db::fetch_assoc(\\$res)) {\nreturn \\$row['cnt'];\n}\nreturn 0;\n}\n``````\n\nWith our new function we can now create a new view in views/posts and call it items.php\n\n``````<div>\n<?php \\$list = \\$obj->fetchArray(); ?>\n<?php \\$count = \\$obj->fetchCount(); ?>\n<h1>Posts</h1>\n<?php echo \\$count; ?> posts found<br>\n<table class=\"list\">\n<tr><th>Title</th><th>Posted</th><th>Tags</th></tr>\n<?php foreach(\\$list as \\$item) { ?>\n<tr>\n<td><?php echo \\$item->title; ?></td>\n<td><?php echo \\$item->posted; ?></td>\n<td><?php echo \\$item->tags; ?></td>\n</tr>\n<?php } ?>\n</table>\n</div>\n``````\n\nAnd since the list looks a bit cramped, we add some styling to the table.list structure as follows, in the main.css\n\n``````table.list th, table.list td {\n}\ntable.list th {\nbackground: #a73;\ncolor: #fff;\nfont-weight: normal;\ntext-align: left;\nborder: 1px solid #962;\n}\ntable.list td {\nborder: 1px solid #a73;\n}\ntable.list tr:nth-child(odd) td {\nbackground: #ffe;\n}\n``````\n\nNow it looks more like a real list. And we can add the link (index.php?module=post&view=items) to our menu in the main_template.php\n\n`````` <div id=\"menu\"><div>\n<a href=\"index.php?module=post&view=items\">Posts</a>\n<a href=\"\">Tags</a>\n</div></div>\n``````\n\nBut since we also want to be able to click the posts to read the content we need to add a link in the list, like this: index.php?module=post&view=item&id=[post_number]. 
So now the list file looks like this:\n\n``````<div>\n<?php \\$list = \\$obj->fetchArray(); ?>\n<?php \\$count = \\$obj->fetchCount(); ?>\n<h1>Post list</h1>\n<?php echo \\$count; ?> posts found<br>\n<table class=\"list\">\n<tr><th>Title</th><th>Posted</th><th>Tags</th></tr>\n<?php foreach(\\$list as \\$item) { ?>\n<tr>\n<td><a href=\"index.php?module=post&view=item&id=<?php echo \\$item->id; ?>\"><?php echo \\$item->title; ?></a></td>\n<td><?php echo \\$item->posted; ?></td>\n<td><?php echo \\$item->tags; ?></td>\n</tr>\n<?php } ?>\n</table>\n</div>\n``````\n\n- Framework, PHP, Database"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.51410055,"math_prob":0.58355474,"size":3256,"snap":"2020-45-2020-50","text_gpt3_token_len":947,"char_repetition_ratio":0.106396064,"word_repetition_ratio":0.15071283,"special_character_ratio":0.3581081,"punctuation_ratio":0.21216407,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95392615,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T22:53:28Z\",\"WARC-Record-ID\":\"<urn:uuid:c60ef5c1-4267-4933-85e1-b68047403777>\",\"Content-Length\":\"60853\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3eb8f64f-f4ea-4efe-96e6-7891d9c731e7>\",\"WARC-Concurrent-To\":\"<urn:uuid:51f2779f-c0be-4c72-bbd0-05d3f8394781>\",\"WARC-IP-Address\":\"77.111.240.177\",\"WARC-Target-URI\":\"https://johanbroddfelt.se/list_the_items-9\",\"WARC-Payload-Digest\":\"sha1:25MPGKLM76VEVGQBIXXF44OQAEHNCAJJ\",\"WARC-Block-Digest\":\"sha1:HWISNWHDMW3RERFZXCGG2FGQAX5FJH5N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107911792.65_warc_CC-MAIN-20201030212708-20201031002708-00306.warc.gz\"}"} |
https://baseballwithr.wordpress.com/2020/08/03/bayesian-pythagorean-modeling-part-ii/ | [
"# Bayesian Pythagorean Modeling – Part II\n\n### Introduction\n\nIn Part I of this Bayesian modeling post, I introduced a Bayesian version of Bill James’ Pythagorean relationship. This model had two unknowns, the Pythagorean exponent k and the error standard deviation",
null,
"$\\sigma$ and I described constructing a prior for (k,",
null,
"$\\sigma$) that reflected one’s knowledge about the value of the power in James’ formula and how win/loss ratios can deviate from this formula. In this post, assuming we have a sampling model and a prior, I am going to discuss the mechanics of Bayesian learning about these parameters and predicting future response values.\n\nSince one typically is interested in predicting wins instead of win/loss ratios, I am going to work with a slightly different version of the model where the number of team wins (in a 162-game season) follows a normal distribution where the mean is given by James’ formula, and",
null,
"$\\sigma$ reflects the variation of the win totals about the mean.\n\nFor a prior, I will assume k is normal with mean 2 and standard deviation 0.5 and",
null,
"$\\sigma$ is exponential with rate 1.\n\n### Posterior\n\nWe now observe data, that is, the W and R/RA values for the 30 teams in the 2019 season, and by Bayes’ rule, we find the new or posterior probability distribution for (k,",
null,
"$\\sigma$). The posterior density of (k,",
null,
"$\\sigma$) is equal to (up to a proportionality constant) the product of the likelihood (the probability of the observed data viewed as a function of the parameters) and the prior. All types of inference are found by summarizing this bivariate probability distribution.\n\n### Brute Force Calculation\n\nA straightforward way to summarize the posterior is to find a grid of (k,",
null,
"$\\sigma$) values that cover most of the probability. I have a R package LearnBayes that makes it easy to implement this brute-force method. One writes a function logpost2() that computes the logarithm of the posterior density of (k,",
null,
"$\\sigma$). The arguments to this function are the vector of parameters theta and a data frame d that contains the data. Note that I’ve included both the likelihood and prior terms in this function.\n\nlogpost2 <- function(theta, d){\nk <- theta\nsigma <- theta\nn_mean <- 162 * d$RR ^ k / (1 + d$RR ^ k)\nsum(dnorm(d\\$W, n_mean, sigma, log = TRUE)) +\ndnorm(k, 2, 0.5, log = TRUE) +\ndexp(sigma, 1, log = TRUE)\n}\n\n\nNext, by use of the gcontour() function, one finds a rectangle that covers the region where most of the probability falls. This function produces a contour plot where the contours are drawn at 10%, 1% and 0.1% of the largest posterior value and, by trial and error, I find a rectangle (specifically 1.6 < k < 2.3 and 2 <",
null,
"$\\sigma$ < 6) that covers these three contour lines.\n\ngcontour(logpost2, c(1.6, 2.3, 2, 6), d)\n\n\nThis contour graph is produced by computing the posterior density on a 50 by 50 grid. One way to summarize the posterior is to simulate a large number of values from the posterior computed on this grid and then summarize the simulated draws. I use the simcontour() function to do the simulation from this grid:\n\npts <- simcontour(logpost2, c(1.6, 2.3, 2, 6),\nd, 5000)\n\n\nHere I display 5000 simulated draws from the posterior on the contour plot.\n\nIf I am interested primarily in learning about the exponent value k, I collect those simulated values of k to perform inference. For example, here is a density estimate which represents the marginal posterior density of k. A 90% interval estimate for k (found from the simulated draws) is (1.81, 2.09).\n\n### Simulation using Stan Software\n\nCurrently a popular and powerful way to perform Bayesian computations is to simulate from a Markov Chain that convergences in theory to the posterior distribution of interest. One particular type of simulation methodology is Hamiltonian MCMC sampling implemented by the Stan software. There are interfaces of Stan to other languages such as R or Python and the brms package provides a very attractive interface for fitting a large class of Bayesian models. I’ll illustrate using this package below.\n\nFirst I specify the prior on (k,",
null,
"$\\sigma$) by use of two applications of the prior() function.\n\nprior1 <- c(prior(normal(2, 0.5), nlpar = \"b1\"),\nprior(exponential(1), class = \"sigma\"))\n\n\nThen I implement the Stan simulation by use of the brm() function in the brms package. Note that this function actually specifies this nonlinear model and the “family = gaussian” argument indicates that we are assuming normally distributed errors.\n\nfit1 <- brm(bf(W ~ 0 +\n162 * RR ^ b1 / (1 + RR ^ b1),\nb1 ~ 1, nl = TRUE),\ndata = d, prior = prior1,\nfamily = gaussian)\n\n\nBelow I show some output of this function. The right set of graphs show streams of simulated draws of the parameters k and",
null,
"$\\sigma$ and the left set of graphs display density estimates of the simulated posterior draws of these parameters. One gets essentially the same estimates of the power parameter k as I got using the brute-force method. For example, the 90% interval estimate using Stan is (1.79, 2.12) which is very close to the interval (1.81, 2.09) using brute force. One advantage of the Stan software is that it can efficiently sample from very complicated Bayesian models with many parameters.\n\n### Prediction of Wins\n\nActually, the objective here is not about learning about the parameters k and",
null,
"$\\sigma$, but rather to predict the number of wins of a team who has a particular runs ratio.\n\nWe predict the number of wins by use of the (posterior) predictive distribution which can be simulated in a similar way that we simulated future data from the (prior) predictive distribution in Part I of my post. Specifically, we simulate a future number of wins by first simulating values of (k,",
null,
"$\\sigma$) from the posterior distribution and then simulating a wins total from the normal sampling distribution using these simulated parameter values. Using the brms package this is done using the posterior_predict() function. The arguments are fit1 (the output of the posterior fitting) and a data frame containing the values of the runs ratio that we are interested in.\n\nPP <- posterior_predict(fit1,\nnewdata = data.frame(\nRR = seq(0.8, 1.2, by = 0.1)))\n\n\nI use violin plots below to summarize the simulated number of wins from the predictive distributions for these five values of the runs ratio. For example, a team that scores the same number of runs as it allows (RR = 1) will likely win between 70 and 90 games in a 162-game season. The amount of variation in these win totals might be surprising — this tells us, that there is more to winning games than just having a good runs ratio.\n\n### Summing Up\n\n• All of the R code for this exercise is available on my Github gist site.\n• In this study, we placed priors on the parameters and one general concern is the impact of these priors on the posterior analysis. This is easy to address by trying other priors and seeing if there is a change in the inferential summaries or predictions. If the predictions seem to depend on the choice of prior, then I’d think harder about the prior.\n• When possible, it is good to check one’s work by trying different computational methods. We did this by showing both a brute-force method and a modern simulation approach — both methods gave very similar estimates at the parameter k and",
null,
"$\\sigma$.\n• The purpose of this two-part post was to illustrate Bayesian modeling for a small regression problem. But we can easily generalize this problem to a situation where there is a obvious grouping of the data, and one is interested in doing many regressions, one for each group. For example, suppose we are exploring this Pythagorean relationship for many seasons and we want to apply this regression model for each season. Or maybe we have data for different professional leagues, say minor league, MLB, Japanese professional baseball, etc. and we want to apply this model to each league. This motivates the consideration of Bayesian multilevel modeling (and fitting using Stan) which will be the subject for a future set of posts."
] | [
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8732724,"math_prob":0.9860435,"size":7816,"snap":"2021-21-2021-25","text_gpt3_token_len":1770,"char_repetition_ratio":0.13568868,"word_repetition_ratio":0.008882309,"special_character_ratio":0.23183215,"punctuation_ratio":0.09966777,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99886596,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-08T09:47:49Z\",\"WARC-Record-ID\":\"<urn:uuid:f8f9fdbf-842b-4c6e-9beb-79ef5e7ec5a3>\",\"Content-Length\":\"97938\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:266a3847-3754-4e9a-95b2-73db6dac3cbd>\",\"WARC-Concurrent-To\":\"<urn:uuid:e9031fc8-2b32-4849-a8d4-2fb094a19ba1>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://baseballwithr.wordpress.com/2020/08/03/bayesian-pythagorean-modeling-part-ii/\",\"WARC-Payload-Digest\":\"sha1:UWPHQL2MR7IDPAYFMGT3DMW33ZK2AITZ\",\"WARC-Block-Digest\":\"sha1:REH43C4WDD5SESOMDXLS444VB7GC5OII\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988858.72_warc_CC-MAIN-20210508091446-20210508121446-00572.warc.gz\"}"} |
https://calculator.academy/credit-utilization-ratio-calculator/ | [
"Enter the total amount of open credit balances and the total credit limit on all lines of credit to determine the credit utilization ratio.\n\n## Credit Utilization Ratio Formula\n\nThe following formula is used to calculate a credit utilization ratio.\n\nCU = TB / TCL *100\n\n• Where CU is the credit utilization ratio (%)\n• TB is the total credit balances (\\$)\n• TCL is the total credit limit (\\$)\n\n## Credit Utilization Ratio Definition\n\nA credit utilization ratio is a measure of the total percentage or proportion of the total available credit limit that a person uses at any given time.\n\n## Credit Utilization Ratio Example\n\nHow to calculate a credit utilization ratio?\n\n1. First, determine the total credit balances.\n\nAdd together all of the balances on open credit cards/ credit lines.\n\n2. Next, determine the total available credit.\n\nAdd together all of the credit limits together to find the total.\n\n3. Finally, calculate the credit utilization.\n\nCalculate the credit utilization using the formula above.\n\n## FAQ\n\nWhat is a credit utilization ratio?\n\nA credit utilization ratio is the portion of available credit that a person uses at any given time."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8533075,"math_prob":0.98392946,"size":1083,"snap":"2021-43-2021-49","text_gpt3_token_len":211,"char_repetition_ratio":0.24559778,"word_repetition_ratio":0.07821229,"special_character_ratio":0.20129271,"punctuation_ratio":0.07526882,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.990077,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T11:13:08Z\",\"WARC-Record-ID\":\"<urn:uuid:a57a59f2-dc5b-470c-863a-8d3ec59188a8>\",\"Content-Length\":\"125780\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3ab35b19-419f-4f1d-9780-59f1023470ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ae713cf-e744-410a-987e-0cc12206d135>\",\"WARC-IP-Address\":\"172.67.69.208\",\"WARC-Target-URI\":\"https://calculator.academy/credit-utilization-ratio-calculator/\",\"WARC-Payload-Digest\":\"sha1:GOENNEH43F7PUC3G74FPYXZSFDCNEY6Z\",\"WARC-Block-Digest\":\"sha1:RXUN72RATYZJ3FI5QN2PGF47RTFKLMTB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358180.42_warc_CC-MAIN-20211127103444-20211127133444-00587.warc.gz\"}"} |
https://www.superfastcpa.com/what-is-period-order-quantity/ | [
"# What is Period Order Quantity?",
null,
"## Period Order Quantity\n\nPeriod Order Quantity (POQ) is an inventory management technique where orders are placed for a quantity that will last for a predetermined period. The idea is to order a large quantity of inventory at once and then not order again until that inventory is expected to run out. It is often used when there are quantity discounts, or if there are high ordering costs relative to holding costs.\n\nThe order quantity varies from order to order, unlike the Economic Order Quantity (EOQ) model, which maintains a consistent order quantity. The objective of POQ is to reduce total inventory costs (ordering and carrying costs) by determining the optimal order size and frequency of orders.\n\nPOQ balances the need to maintain inventory levels to meet customer demand against the costs associated with ordering and holding inventory. It can also help to improve cash flow management by aligning inventory orders with expected sales periods.\n\nThe POQ is calculated using the formula:\n\nPOQ = √(2DS/H)\n\nwhere:\n\n• D is the annual demand,\n• S is the ordering cost per order, and\n• H is the annual holding cost per unit.\n\nThis formula is the same as the EOQ formula, but unlike EOQ, it is applied at specific, regular intervals (the period), and the order quantity may vary from one period to the next based on anticipated demand.\n\n## Example of Period Order Quantity\n\nAssume a retail store sells a specific brand of shoes, with an annual demand of 1,200 pairs. The cost of placing an order (including shipping, handling, and administrative costs) is \\$50 per order, and the annual holding cost per pair of shoes (storage, insurance, potential spoilage, etc.) 
is \\$3.\n\nThe Period Order Quantity (POQ) would be calculated using the formula:\n\nPOQ = √(2DS/H)\n\nSo,\n\nPOQ = √[(2 × 1,200 × \\$50)/\\$3]\nPOQ = √[(120,000)/3]\nPOQ = √[40,000]\nPOQ = 200 pairs\n\nSo, the store should order 200 pairs of shoes at a time to minimize total inventory costs (ordering and holding costs).\n\nIf the store wants to keep inventory for three months at a time, it would divide the annual demand by the number of periods in a year. In this case, there are four 3-month periods in a year. So, the store would order 1,200/4 = 300 pairs of shoes every three months. The order quantity is larger because the store is ordering less frequently.\n\nThis calculation is a simplified example. In reality, a company would need to consider other factors such as seasonal demand fluctuations, lead times, safety stock, and the risk of stockouts.\n\nRemember that the goal of the POQ model is to minimize total inventory costs while meeting customer demand. The optimal order quantity may vary from one period to the next based on changes in these factors.\n\n## Other Posts You'll Like...",
null,
""
] | [
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201024%20576'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%202120%201414'%3E%3C/svg%3E",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93796027,"math_prob":0.95671964,"size":2686,"snap":"2023-40-2023-50","text_gpt3_token_len":596,"char_repetition_ratio":0.14466816,"word_repetition_ratio":0.06153846,"special_character_ratio":0.2274758,"punctuation_ratio":0.10485437,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98922026,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T13:37:27Z\",\"WARC-Record-ID\":\"<urn:uuid:b908750d-731f-445e-93dd-0ecd14fe8f75>\",\"Content-Length\":\"352255\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b300b71b-336e-40a7-9e1f-84899c47393a>\",\"WARC-Concurrent-To\":\"<urn:uuid:36741bec-7ec4-4d09-8952-98fbb5dda0be>\",\"WARC-IP-Address\":\"162.159.135.42\",\"WARC-Target-URI\":\"https://www.superfastcpa.com/what-is-period-order-quantity/\",\"WARC-Payload-Digest\":\"sha1:ZFBN64O64KGTL6QNKNGCQWBXDLHX7APE\",\"WARC-Block-Digest\":\"sha1:DC3RCURWRQE73IKNGZQEVXEG7KWYNHIK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511106.1_warc_CC-MAIN-20231003124522-20231003154522-00855.warc.gz\"}"} |
https://vtechworks.lib.vt.edu/handle/10919/24285/browse?type=dateissued | [
"Now showing items 1-20 of 225\n\n• #### expansion-free electromagnetic solutions of Kerr-Schild class \n\n(AIP Publishing, 1974-07)\nStarting with the general Kerr_Schild form of the metric tensor,d s2=_+l_l (where l is null and _ is flat space_time), a study is made for those solutions of the Einstein_Maxwell equations in which l is geodesic, shear_free, ...\n• #### power statistics for wave propagation in one-dimension and comparison with radiative transport-theory. II. \n\n(AIP Publishing, 1974-12)\nWe consider the one_dimensional problem of a slab having a random index of refraction and illuminated from within by a point source. We compute the expected value and the fluctuations of both the total power and power flux. ...\n• #### conditional entropy in microcanonical ensemble \n\n(AIP Publishing, 1975-08)\nThe existence of the configurational microcanonical conditional entropy in classical statistical mechanics is proved in the thermodynamic limit for a class of long_range multiparticle observables. This result generalizes ...\n• #### Case eigenfunction expansion for a conservative medium \n\n(AIP Publishing, 1976-02)\nBy using the resolvent integration technique introduced by Larsen and Habetler, the one‐speed, isotropic scattering,neutron transport equation is treated in the infinite and semi‐infinite media. It is seen that the results ...\n• #### functional calculus for symmetric multigroup transport operator \n\n(AIP Publishing, 1976-02)\nA rigorous treatment of the symmetric multigroup transport equation is given by developing the functional calculus for the transport operator. Von Neumann spectral theory is applied to nonorthogonal cyclic subspaces, and ...\n• #### solution of multigroup transport equation in Lp spaces \n\n(AIP Publishing, 1976-11)\nThe isotropic multigroup transport equation is solved in L p , p_1, for both half range and full range problems, using resolvent integration techniques. 
The connection between these techniques and a spectral decomposition ...\n• #### Vlasov theory of plasma oscillations: linear-approximation \n\n(AIP Publishing, 1977)\nA functional analytic approach to the linearized collisionless Vlasov equation is presented utilizing a resolvent integration technique on the resolvent of the transport operator evaluated at a particular point. Formulae ...\n• #### interaction function and lattice duals \n\n(AIP Publishing, 1977-10)\nAn interaction function is defined for lattice models in statistical mechanics. A correlation function expansion is derived, giving a direct proof of the duality relations for correlation functions.\n• #### multiple-scales space-time analysis of a randomly perturbed one-dimensional wave-equation \n\n(AIP Publishing, 1977-10)\nAn initial value problem for one-dimensional wave propagation is considered; the medium is assumed to be randomly perturbed as a function of both space and time. The stochastic perturbation theory of Papanicolaou and Keller ...\n• #### uniqueness of solutions to linearized Boltzmann equation \n\n(AIP Publishing, 1978-01)\nUniqueness theorems are proved for the linearized Boltzmann equation for both the \"exterior\" and \"interior\" problems under generalized Maxwell boundary conditions. The solution space is a weighted L_p space, and agrees ...\n• #### transverse plasma oscillations \n\n(AIP Publishing, 1979)\nAn operator theoretic approach is used to solve the linearized Vlasov–Maxwell equations for transverse plasma oscillations. In particular, the special cases of simple and second‐order real zeros of the plasma dispersion ...\n• #### resolvent integration techniques for generalized transport equations \n\n(AIP Publishing, 1979-06)\nA generalized class of \"transport type\" equations is studied, including most of the known exactly solvable models; in particular, the transport operator K is a scalar type spectral operator. 
A spectral resolution for K ...\n• #### Peano compactifications and property S metric spaces \n\n(Hindawi, 1980-01-01)\nLet (X,d) denote a locally connected, connected separable metric space. We say that X is S-metrizable provided there is a topologically equivalent metric ρ on X such that (X,ρ) has Property S, i.e. for any ϵ>0, X is the ...\n• #### k-space function spaces \n\n(Hindawi, 1980-01-01)\nA study is made of the properties on X which characterize when Cπ(X) is a k-space, where Cπ(X) is the space of real-valued continuous functions on X having the topology of pointwise convergence. Other properties related ...\n• #### Pulse propagation in a randomly perturbed ocean: Single pulse statistics \n\n(Acoustical Society of America, 1980-10)\nA statistical theory of broadband single pulse propagation in a random ocean is presented. The mutual coherence function of the received signal is derived using an analysis based upon coupled mode theory. As propagation ...\n• #### Note on a role for entire functions of the classes P and P* \n\n(Hindawi, 1981-01-01)\nWe use the B and B* operators of Levin on the Classes P and P* and a comparison principle to prove a Gauss-Lucas Theorem for differential operators. The connection with the determination of final sets for differential ...\n• #### Complete function spaces \n\n(Hindawi, 1983-01-01)\nA study is made of certain completeness properties of the space of all continuous real-valued functions on a space, where this function space has the compact-open topology.\n• #### analytical solutions of model equations for two phase gas mixtures: Transverse velocity perturbations \n\n(AIP Publishing, 1984)\nModel equations for a dilute binary gas system are derived, using a linear BGK scheme. Complete analytical solutions for the stationary half-space problem are obtained for transverse velocity perturbations. 
The method of ...\n• #### Fine topology on function spaces \n\n(Hindawi, 1986-01-01)\nThis paper studies the topological properties of two kinds of fine topologies on the space C(X,Y) of all continuous functions from X into Y.\n• #### the effect of surface tension on the shape of a Hele-Shaw cell bubble \n\n(AIP Publishing, 1986-11)\nNumerical and asymptotic solutions are found for the steady motion of a symmetrical bubble through a parallel‐sided channel in a Hele–Shaw cell containing a viscous liquid. The degeneracy of the Taylor–Saffman zero ..."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8531957,"math_prob":0.9054301,"size":6124,"snap":"2019-43-2019-47","text_gpt3_token_len":1347,"char_repetition_ratio":0.14836602,"word_repetition_ratio":0.0,"special_character_ratio":0.2111365,"punctuation_ratio":0.12262358,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96956193,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T04:08:13Z\",\"WARC-Record-ID\":\"<urn:uuid:f9631af5-1d3f-44a0-9545-72b3fca8d2c1>\",\"Content-Length\":\"56515\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dfcc0795-184b-4460-b33f-2c07eb20330b>\",\"WARC-Concurrent-To\":\"<urn:uuid:e4d3c890-6f25-4cf4-9838-8fd098386111>\",\"WARC-IP-Address\":\"198.82.230.9\",\"WARC-Target-URI\":\"https://vtechworks.lib.vt.edu/handle/10919/24285/browse?type=dateissued\",\"WARC-Payload-Digest\":\"sha1:SUG57DDJ3RQN535K7PWEWAEF3FBQUBNC\",\"WARC-Block-Digest\":\"sha1:U7NNIP56QHRBWZ72I3UKEA77R66HQSO4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670729.90_warc_CC-MAIN-20191121023525-20191121051525-00549.warc.gz\"}"} |
http://www.createmepink.com/2018/06/parametric-vector-form-linear-algebra/x-t-y-t-z-t-x-t-y-t-z-t-x-t-y-t-z-t-x-t-y-t-z-t/ | [
"CreateMePink\n\nVector Reference",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"X T Y T Z T X T Y T Z T X T Y T Z T X T Y T Z T\n\nThis post categorized under Vector and posted on June 21st, 2018.",
null,
"This X T Y T Z T X T Y T Z T X T Y T Z T X T Y T Z T has 960 x 1390 pixel resolution with jpeg format. Parametric Vector Form Linear Algebra Calculator, Describe All Solutions Of Ax=0 In Parametric Vector Form, Parametric Vector Form Definition, Parameterize The Solutions To The Following Linear Equation, And Write Your Answer In Vector Form., Matrix Parametric Form Calculator, Linear To Parametric, Parametric Vector Form Khan, Putting A Matrix In Parametric Vector Form, Parametric Vector Form Definition, Matrix Parametric Form Calculator, Parametric Vector Form Khan was related topic with this X T Y T Z T X T Y T Z T X T Y T Z T X T Y T Z T. You can download the X T Y T Z T X T Y T Z T X T Y T Z T X T Y T Z T picture by right click your mouse and save from your browser.\n\nNote You can click on the graphicon above to toggle light and dark. You can also click on the graphicon that floats on the right side of the screen. ---Haas parameter list (milllathe) for cnc machinists who work on Haas cnc lathemill. Parameters are seldom-modified values that change the operation of thegraphiconBgraphic Original Piano - This is the original gray piano we had on the front of the site for many years. It has an echo feature thats pretty cool\n\nIn mathematics the logarithm is the inverse function to exponentiation.That means the logarithm of a given number x is the exponent to which another fixed number the base x must be raised to produce that number x.(C) Trim & Seats Code Description C02 SW cargo carpet C07 litter container C12 shoulder belts C13 frt shoulder belts C14 rear shoulder belts C15 Deluxe seat beltElectronica 300 Series Universal 2 Axis Lathe Display - The proper terminology for describing a lathes axis are X and Z. - This display has X Y and Z"
] | [
null,
"http://www.createmepink.com/wp-content/uploads/st/thumb-stock-illustration-fruits-and-vegetables-set-in.jpg",
null,
"http://www.createmepink.com/wp-content/uploads/re/thumb-red-realistic-gift-bow-ribbon-isolated.jpg",
null,
"http://www.createmepink.com/wp-content/uploads/za/thumb-zatrzask-yamaha-grizzly-apex-rs-vector.jpg",
null,
"http://www.createmepink.com/wp-content/uploads/sh/thumb-shooting-a-kriss-vector-acp-sbr-with-suppressor-by-ospreylzvguxylwveo.jpg",
null,
"http://www.createmepink.com/wp-content/uploads/po/thumb-postcard-poster-abstract-flower-bud-in-the-form-of-a-heart-and-the-silhouette-of-gm.jpg",
null,
"http://www.createmepink.com/wp-content/uploads/x-/x-t-y-t-z-t-x-t-y-t-z-t-x-t-y-t-z-t-x-t-y-t-z-t.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.853435,"math_prob":0.94992477,"size":6791,"snap":"2019-43-2019-47","text_gpt3_token_len":1551,"char_repetition_ratio":0.19198468,"word_repetition_ratio":0.6032787,"special_character_ratio":0.22912678,"punctuation_ratio":0.0677837,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95828897,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,2,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-14T00:36:44Z\",\"WARC-Record-ID\":\"<urn:uuid:583044c0-d325-4b64-b33c-aee2b562bc76>\",\"Content-Length\":\"23623\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d704aab5-da50-4b63-98a6-2f366d0f1168>\",\"WARC-Concurrent-To\":\"<urn:uuid:12b5c633-58e8-48b5-bab9-3b88bba1245e>\",\"WARC-IP-Address\":\"93.104.213.16\",\"WARC-Target-URI\":\"http://www.createmepink.com/2018/06/parametric-vector-form-linear-algebra/x-t-y-t-z-t-x-t-y-t-z-t-x-t-y-t-z-t-x-t-y-t-z-t/\",\"WARC-Payload-Digest\":\"sha1:ZRR2RA3CNVHVB52JM32V2D2AFWZJN4HC\",\"WARC-Block-Digest\":\"sha1:BPH4SWDWGBVOITYTDJPEIZ3VJQYWDKUM\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986648481.7_warc_CC-MAIN-20191014003258-20191014030258-00010.warc.gz\"}"} |
https://dlx.booksc.org/book/12694068/0b829e | [
"Main Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg Bifurcation and non-bifurcation of compact orbits by change of stability behavior\n\n# Bifurcation and non-bifurcation of compact orbits by change of stability behavior\n\nVolume:\n55\nLanguage:\nenglish\nPages:\n2\nDOI:\n10.1007/bf02941484\nDate:\nDecember, 1985\nFile:\nPDF, 92 KB\n\nYou can write a book review and share your experiences. Other readers will always be interested in your opinion of the books you've read. Whether you've loved the book or not, if you give your honest and detailed thoughts then people will find new books that are right for them.\n1\n\nYear:\n1985\nLanguage:\nenglish\nFile:\nPDF, 497 KB\n2\n\n### Different period limits in Hopf bifurcation\n\nYear:\n1985\nLanguage:\nenglish\nFile:\nPDF, 97 KB\n```Abh. Math. Sere. Univ. Hamburg 55, 17--18 (1985)\n\nBifurcation and non-bifurcation of compact orbits\nby change of stability behavior\nVon D. ERLE\n\nConsider a continuous 1-parameter family\n\n= h(x),\n\n~ c [o, 1[,\n\nof ordinary differential equations on a neighborhood of the origin in ]Rn with\nthe following property:\n(*)\n\nThe origin is a stationary point which is asymptotically stable for \\$ ----/o(x)\na n d for ~ = --f~(x) if • > O.\n\nDoes this change of stability behavior of the origin at 2 = 0 imply the bifurcation of compact orbits? I n some cases it does, and we answer this question\nfor families of C 1 vector fields. Essentially, the answer is yes for n odd a n d\nn = 2, due to the Poincard-Hopf Theorem and the Poincarfi-Bendixson Theorem, respectively; for n > 4, n even, there are counterexamples involving\nwork of F. W. Wilson a n d P. A. Sehweitzer.\n\nProposition: L e t f~, 2 E [0, 1[, be a continuous 1-parameter family of C 1\nvector fields on an open neighborhood of 0 E •\" such t h a t the differential\nequations \\$ = / ~ ( x ) satisfy (*). 
Then for a n y neighborhood V of 0 E IR\"~\nthere is e > 0 such t h a t for a n y 2 E ]0, ~[ the following holds:\n(i) I f n is odd, the system 9 = f~(x) has a stationary point in V \\ {0}.\n(ii) If n = 2, the system \\$ = / ~ ( x ) has a s t a t i o n a r y point or a closed orbit in\nv \\ {o}.\n\nProof. B y Theorems 1.1, 1.2, 2.2, ]0 has a transversal cross-section Z ~\nin V t h a t is a C 1 manifold and a h o m o t o p y (n - - 1)-sphere. X is also a crosssection of [~ for all ~ in a suitable interval ]0, ~[, ~ > 0. Fix ,~ E ]0, e[. Then,\nagain b y , f~ has a transversal cross-section 27~ inside 27 such t h a t 27~ is a\nh o m o t o p y (n - - 1)-sphere, too. 27 u 27~ is the b o u n d a r y of a compact C ~ manifold M~ which has the integral homology of S\"-L Moreover, f~ points inward\non OM~. I f n is odd, M~ has non-zero Euler characteristic. B y the PoincardH o p f Theorem (, ), M~ contains a zero of ]~. I f n = 2 a n d M~ does n o t\ncontain a z; ero of ]~, then there is a closed orbit in M~ by the Poincard-Bendixson Theorem .\nExample: L e t n > 4, n even, and g: S n-1 --> ~ n a nowhere vanishing tangent C 1 vector field of S n-1 without a n y closed integral curves. Such a vector\nfield exists according to P. A. Schweitzer (, Theorems A and B, cf. F. W.\n\n18\n\nD. Erle\n\nWilson for n ~ 5). L e t ~ , ~ be the s t a n d a r d inner p r o d u c t in ]pn. Define\n]~: ]l( u -> P.~, 2 C [0, 1[, b y\n\nI ~ --(X, X}X + (X, X~g(~x,Z) -1/2 X)\n/~(x):= (-\n\nif\n\nif\n\nx * 0,\n\nx---- 0.\n\nT h e n ]~ is a family of C 1 v e c t o r fields with p r o p e r t y (*). F o r )~ z 0, x ~-~ (x, x)\nis a L y a p u n o v function. I f ~ > 0, we can use the s a m e L y a p u n o v function\nf o r Nx]] ~ ~f~. T h e sphere of this radius is i n v a r i a n t a n d orbitally a s y m p t o t i c a l l y\nstable. The phase p o r t r a i t of \\$ = / ~ ( x ) on this sphere is equivalent to t h a t\nof \\$-----g(x). 
As c o m p a c t orbits are either s t a t i o n a r y or closed orbits (\nT h e o r e m 2.51), no s y s t e m in the family ~ ----/~(x) has a c o m p a c t orbit outside\nt h e origin.\n\nRemark: If g can be chosen C k, 2 _~ k g c~, which is k n o w n only for n ~ 5,\n~hen/~ can be m a d e C k b y defining/~(x) -~ )~x -- (x, x) x + ~( (x, x)) g( (x, X)-112 X)\nfor X :# 0, where ~ is a strictly increasing C ~ function on It( all of whose\nd e r i v a t i v e s vanish a t 0.\n\nReferences\n E. A. CODDINGTON, N. LEVINSON: Theory of ordinary differential equations. New\nYork 1955.\n H. HOPF: Vektorfelder in n-dimensionalen Mannigfaltigkeiten. Math. Annalen 96\n(1926), 225--250.\n M. C. IawIN: Smooth dynamical systems. London 1980.\n J. W. M_m~OR: Topology from the differentiable viewpoint. Charlottesville 1965.\n P. A. SCHWEITZER: Counterexamples to the Seifert conjecture and opening closed\nleaves of foliations. Ann. Math. 100 (1974), 386--400.\n F. W. WILSON: On the minimal sets of non-singular vector fields. Ann. Math. 84\n(1966), 529--536.\n F. W. WILSOn: The structure of the level surfaces of a Lyapunov function. J. Diff.\nEq. 3 (1967), 323--329.\n\nEingegangen am 16. 5. 198g\nAnsehrift des Autors: Dieter Erle, Institut ffir Mathematik der Universit~t Dortmund,\nD-4600 Dortmund 50.\n\n```"
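The invariance claim in the Example can be sanity-checked on the radial part of the flow. Reading the OCR-damaged formula as f_λ(x) = λx - ⟨x,x⟩x + ⟨x,x⟩ g(⟨x,x⟩^(-1/2) x) (an assumption, since the typesetting above is garbled) and using that g is tangent to spheres, the quantity u = ⟨x,x⟩ evolves by du/dt = 2u(λ - u), so u = λ (the sphere of radius √λ) should attract trajectories from both sides. A plain Euler-integration sketch:

```python
def radial_flow(u0, lam, dt=1e-3, steps=200_000):
    """Euler-integrate du/dt = 2*u*(lam - u), the radial part of the example
    field under the reconstruction described above."""
    u = u0
    for _ in range(steps):
        u += dt * 2.0 * u * (lam - u)
    return u

lam = 0.25
inside = radial_flow(0.01, lam)   # trajectory starting inside the sphere
outside = radial_flow(2.00, lam)  # trajectory starting outside the sphere
# both converge to u = lam, i.e. the sphere of radius sqrt(lam) is invariant
# and attracting, while the origin (u = 0) is unstable for lam > 0
```

This matches the text's claim that the sphere is invariant and orbitally asymptotically stable, with the origin losing stability as λ crosses 0.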
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7884335,"math_prob":0.9750988,"size":5387,"snap":"2020-24-2020-29","text_gpt3_token_len":1812,"char_repetition_ratio":0.078023404,"word_repetition_ratio":0.09296482,"special_character_ratio":0.35455728,"punctuation_ratio":0.14106351,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99738955,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-29T19:05:22Z\",\"WARC-Record-ID\":\"<urn:uuid:612db8e2-9bff-48df-b259-61a05619e37d>\",\"Content-Length\":\"46484\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f34453bf-a16a-4393-a1f9-040f84e7fdf1>\",\"WARC-Concurrent-To\":\"<urn:uuid:6dceaccc-eb99-4c1b-810f-1045d6a6647c>\",\"WARC-IP-Address\":\"81.17.17.254\",\"WARC-Target-URI\":\"https://dlx.booksc.org/book/12694068/0b829e\",\"WARC-Payload-Digest\":\"sha1:CAHWV6FERXBG6U3YT4FIB4JT22QE2362\",\"WARC-Block-Digest\":\"sha1:MDX73QHJQLXUYJCT4IJQSWHSTCSPCVEG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347406365.40_warc_CC-MAIN-20200529183529-20200529213529-00081.warc.gz\"}"} |
http://www.avrocks.com/shader-storage-block-length-returns-zero-or-less.html | [
"# shader storage block length returns zero or less\n\nI am getting zero or less when I try to get the length of an indeterminate-length array in a shader storage block.\n\nSetting up the storage:\n\n````geometryBuffer = ByteBuffer.allocateDirect(12* 4) .order(ByteOrder.nativeOrder()); geometryBuffer.putFloat(1f); geometryBuffer.putFloat(0.25f); geometryBuffer.putFloat(0.5f); geometryBuffer.putFloat(0.75f); geometryBuffer.putFloat(1.1f); geometryBuffer.putFloat(0.35f); geometryBuffer.putFloat(0.6f); geometryBuffer.putFloat(0.85f); geometryBuffer.putFloat(1.2f); geometryBuffer.putFloat(0.45f); geometryBuffer.putFloat(0.7f); geometryBuffer.putFloat(0.95f); geometryBuffer.flip(); geometryBufferId = GL15.glGenBuffers(); GL15.glBindBuffer(GL43.GL_SHADER_STORAGE_BUFFER, geometryBufferId); System.out.println(\"bb\" + GL11.glGetError()); GL15.glBufferData(GL43.GL_SHADER_STORAGE_BUFFER, geometryBuffer, GL15.GL_STATIC_DRAW); System.out.println(\"bd\" + GL11.glGetError()); GL30.glBindBufferRange(GL43.GL_SHADER_STORAGE_BUFFER, 0, geometryBufferId, 0, 36); System.out.println(\"br\" + GL11.glGetError()); GL15.glBindBuffer(GL43.GL_SHADER_STORAGE_BUFFER, 0); `\n```\n\nThe frag shader:\n\n````#version 430 out vec4 outColour; layout(std430, binding=0) buffer Geometry { vec4 a; vec4 myVec[]; } ; void main() { // if (myVec.y == 0.35) { if (myVec.length() == -1) { outColour = vec4 (1,0,0,1); } else { outColour = vec4(0,0,1,1); } } `\n```\n\nI expect length to be 2 but the length returned in this example is -1. I can retrieve the values in the array as the test `if (myVec.y == 0.35)` also returns red.\n\nAccording to https://www.opengl.org/wiki/Interface_Block_(GLSL) and the GLSL spec v4.5 section 4.1.9, the size is calculated at runtime from the size of the buffer or range, minus all the sized components in the block and then divided by the size of an individual array entry. 
What I seem to be getting is zero minus the number of vec4 units before the indeterminate storage in the block.\n\nAn alternative would be to pass a uniform with the array size, but why doesn't length() return the correct value? I'm running on a GTX660M\n\nCategory: opengl Time: 2016-07-31\nTags: opengl glsl"
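The runtime rule quoted from the spec can be written out as plain arithmetic. The sketch below is my own illustration (not OpenGL API code) applied to the std430 block in the question: the fixed part `vec4 a` occupies 16 bytes and each `vec4` element of `myVec` has a 16-byte stride, so binding the full 48-byte buffer should give a length of 2, while the 36-byte `glBindBufferRange` shown above could at most yield 1 even on a correct driver:

```python
VEC4_BYTES = 16  # std430: a vec4 is 16 bytes, and vec4[] has a 16-byte stride

def unsized_array_length(bound_size, fixed_part_bytes, stride):
    """GLSL rule: (size of the bound range - sized members) / element stride."""
    return (bound_size - fixed_part_bytes) // stride

full_buffer = unsized_array_length(48, VEC4_BYTES, VEC4_BYTES)  # 12 floats * 4 B
bound_range = unsized_array_length(36, VEC4_BYTES, VEC4_BYTES)  # the range above
```

By that arithmetic the expected value of 2 is only reachable when the whole 48-byte buffer is bound; the -1 actually observed is a separate issue.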
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.63656324,"math_prob":0.8919242,"size":2040,"snap":"2019-35-2019-39","text_gpt3_token_len":568,"char_repetition_ratio":0.22200392,"word_repetition_ratio":0.0,"special_character_ratio":0.2872549,"punctuation_ratio":0.26618704,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97971475,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-26T01:38:14Z\",\"WARC-Record-ID\":\"<urn:uuid:ddb01652-a212-4a34-ae62-ff5de676efd8>\",\"Content-Length\":\"34233\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:07984b38-529d-4b26-8070-12caf55d5d97>\",\"WARC-Concurrent-To\":\"<urn:uuid:9ff3e1b0-39bd-469f-a9bd-e58ec2de9ad7>\",\"WARC-IP-Address\":\"104.31.90.38\",\"WARC-Target-URI\":\"http://www.avrocks.com/shader-storage-block-length-returns-zero-or-less.html\",\"WARC-Payload-Digest\":\"sha1:VC3V7BUZVMYSPHSOMGMXXSBQOVFXZE6A\",\"WARC-Block-Digest\":\"sha1:SIYTNQ62I2JOZKWZWXXCH2DGGRVNCOM3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027330913.72_warc_CC-MAIN-20190826000512-20190826022512-00489.warc.gz\"}"} |
http://exceltemplates.net/academic/pythagorean-theorem-calculator/ | [
"Pythagorean Theorem Calculator",
null,
"",
null,
"[enlarge]\n\nThe Pythagorean Theorem is a mathematical formula used to determine the length of a side of a right triangle when only two sides are known. Our Pythagorean Theorem calculator is 100% free and guaranteed to save you time and effort. With our Pythagorean Theorem calculator with variables A2 + B2 = C2 has never been so easy. Instead of relying on memory and counting on your fingers, let our calculator with steps do the work of determining the lengths of the sides on a right triangle for you! Get your free calculator here!\n\nPythagorean Theorem Calculator with Variables\n\nSo why would anyone need a Pythagorean Theorem calculator with variables? So the equation can be used to find any answer you want of course. If you know the hypotenuse and one leg of the right triangle, a calculator with variables has got your answer! Know only the two legs but not the hypotenuse? A Pythagorean Theorem calculator with variables has you covered.\n\nPythagorean Theorem Calculator with Steps\n\nIt is important to note that our excel application, the Pythagorean Theorem calculator, also works out the process with steps to serve as a learning aid. Pythagorean Theorem calculator with steps not only finds you the answer to your question but tells you how that answer was found. Pythagorean Theorem calculator with steps is a great learning tool for students and the Pythagorean Theorem calculator with steps would make a great teaching tool in the Internet connected classroom as well.\n\nNo matter what situation you find yourself in requiring a Pythagorean Theorem calculator, our new excel based tool will work for you. Be it doing homework or trying to measure straight line distances between two objects, our calculator is 100% guaranteed to work for every situation, and considering the Pythagorean Theorem calculator is totally free you have nothing to lose by using this wonderful tool."
] | [
null,
"http://exceltemplates.net/wp-content/uploads/2012/10/pt-125x125.jpg",
null,
"http://exceltemplates.net/images/20096/pt.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9276289,"math_prob":0.9854436,"size":2082,"snap":"2019-43-2019-47","text_gpt3_token_len":430,"char_repetition_ratio":0.22569779,"word_repetition_ratio":0.0147492625,"special_character_ratio":0.18443803,"punctuation_ratio":0.06666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994191,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-23T18:03:37Z\",\"WARC-Record-ID\":\"<urn:uuid:e8e2b929-7a6b-4eec-bc67-da57816e807c>\",\"Content-Length\":\"38917\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8e9ff72f-2eac-4a7c-bc17-bd6dd1e09218>\",\"WARC-Concurrent-To\":\"<urn:uuid:bb1b1863-0c2d-4271-85c6-a51f565f08d5>\",\"WARC-IP-Address\":\"69.46.21.158\",\"WARC-Target-URI\":\"http://exceltemplates.net/academic/pythagorean-theorem-calculator/\",\"WARC-Payload-Digest\":\"sha1:UIATHWG2W6HSESUXO3VFBKNORED33SRH\",\"WARC-Block-Digest\":\"sha1:YMBA2RCCQ3I7K6BI6NJ6EBG35C3GKEAZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987835748.66_warc_CC-MAIN-20191023173708-20191023201208-00016.warc.gz\"}"} |
https://www.dhimanrajeshdhiman.com/2019/01/piece-of-wire-28-cm-long-is-to-be-cut-two-pieces.html | [
"Maths (41) reasoning (33) Home (14) integration (6)\n\n## Application of Derivative\n\nA piece of wire 28 cm long is to be cut into two pieces. One piece is to be made into a circle and another into a square. How should the wire be cut so that the combined area of the two figures is as small as possible?\n\nLet the wire be cut at a distance of x meter from one end. Therefore then two pieces of wire be x m and (28-x) m.\n\n## Calculate Dimension of Circle and Square\n\nNow 1st part be turned into a square and the 2nd part be be made into a circle.\n\nSince 1st part of the wire is turned into square. then its perimeter will be x m.\nSo using formula of perimeter of square , we can calculate side of the square = x/4 m\n\n## Calculate Areas of Circle and Square\n\nTherefore Area of square = (x/4)(x/4) sq m\n\nA1 = x2/16\n\nAnd when 2nd part of the wire is turned to circle, then its perimeter ( circumference ) will be 28 - x m. So using formula of perimeter of square , And if \"r\" be radius of the circle , Then\nCircumference of circle = 2 π r = (28-x)\n∴ r = (28-x)/2π\n\nWe know that Area of Circle A2 = π r2\n\nA2 π[(28-x)/2π]2\n\n## To find value/s of x\n\nNow to find the value of x for which this function A(x) is maximum or minimum ,put A(x) = 0\n\n## To Test the Minimum Value of Function\n\nNow we have the value of \"x\" on which either A(x) have maximum or minimum value . 
To check the maximum or minimum value we have to find A''(x) as follows\n\nSo A''(x) has positive value Therefore A(x) shall have maximum value at x = 112/(π + 4)\n\nHence two pieces of wire should be of length x m and (28-x) m\n\nThese pieces should be of length 112/(π+4) and 28π/(π + 4)\n\n## Verification\n\nwe can calculate the sum of these pieces , it must be 28 m\n\n#### 1st part\n\n112/(π+4) = 112/{(22/7)+4}=112×7/50 = 784/50\n\n#### 2nd part\n\n28π/(π + 4) = {28×22/7}/{(22/7)+4} = 88×7/50 = 616/50\n\n#### Sum of Two Parts\n\n112×7/50 + 28×7/50 = (784+616)/50\n\n= 1400/50= 28 m\n\n## Conclusion\n\nThanks for visiting this website and spending your valuable time to read this post regarding How should the wire of 28 m be cut so that the combined area of the circle and square is as small as possible , s .If you liked this post , don't forget to share it with your friends to benefit them also ,we shall meet in next post , till then bye and take care......\n\nIf you are a mathematician Don't forget to visit my Mathematics You tube channel ,Mathematics Website and Mathematics Facebook Page , whose links are given below\n\nShare:"
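The calculus above is easy to sanity-check numerically: a brute-force scan of candidate cut points should find the smallest combined area at the stationary point x = 112/(π + 4) ≈ 15.68 m (the square's perimeter), leaving 28 − x ≈ 12.32 m for the circle. A short sketch:

```python
import math

def combined_area(x, total=28.0):
    """Square made from a piece of length x, circle made from the rest."""
    side = x / 4.0
    r = (total - x) / (2.0 * math.pi)
    return side * side + math.pi * r * r

# brute-force scan of cut points in 1 mm steps
best_x = min((i / 1000.0 for i in range(1, 28000)), key=combined_area)
analytic = 112.0 / (math.pi + 4.0)  # stationary point from A'(x) = 0
```

The scanned minimum agrees with the analytic value to within the 1 mm step size, confirming that the cut described above minimizes (not maximizes) the combined area.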
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89049935,"math_prob":0.9968429,"size":2857,"snap":"2022-27-2022-33","text_gpt3_token_len":788,"char_repetition_ratio":0.11356467,"word_repetition_ratio":0.1764706,"special_character_ratio":0.30031502,"punctuation_ratio":0.050819673,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99953175,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-25T10:54:05Z\",\"WARC-Record-ID\":\"<urn:uuid:d6eb8b5c-ab6f-4434-9b9d-dc11b596d1fa>\",\"Content-Length\":\"286986\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b12cdd80-0bda-4fa1-898d-0377c27396ca>\",\"WARC-Concurrent-To\":\"<urn:uuid:cc342355-1d4d-43b6-933c-b2aab16e5548>\",\"WARC-IP-Address\":\"172.253.115.121\",\"WARC-Target-URI\":\"https://www.dhimanrajeshdhiman.com/2019/01/piece-of-wire-28-cm-long-is-to-be-cut-two-pieces.html\",\"WARC-Payload-Digest\":\"sha1:JRJRF5LXLTXRPM32ISKGPJUEYDPN2AEO\",\"WARC-Block-Digest\":\"sha1:HDGYEN43GVUNK4QEDPUWJCPYPTQ5R4KN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103034930.3_warc_CC-MAIN-20220625095705-20220625125705-00096.warc.gz\"}"} |
https://patchwork.kernel.org/project/linux-block/patch/20210225070231.21136-28-chaitanya.kulkarni@wdc.com/ | [
"# [RFC,27/39] blktrace: add a new formatting routine\n\nMessage ID 20210225070231.21136-28-chaitanya.kulkarni@wdc.com (mailing list archive) New, archived show [RFC,01/39] blktrace_api: add new trace definitions | expand\n\n## Commit Message\n\nChaitanya Kulkarni Feb. 25, 2021, 7:02 a.m. UTC\n```Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>\n---\nkernel/trace/blktrace.c | 57 +++++++++++++++++++++++++++++++++++++++++\n1 file changed, 57 insertions(+)\n```\n\n## Patch\n\n```diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c\nindex 84bee8677162..2241c7304749 100644\n--- a/kernel/trace/blktrace.c\n+++ b/kernel/trace/blktrace.c\n@@ -1722,6 +1722,62 @@ static void fill_rwbs(char *rwbs, const struct blk_io_trace *t)\nrwbs[i] = '\\0';\n}\n\n+static void fill_rwbs_ext(char *rwbs, const struct blk_io_trace_ext *t)\n+{\n+\tint i = 0;\n+\tint tc = t->action >> BLK_TC_SHIFT_EXT;\n+\n+\tif ((t->action & ~__BLK_TN_CGROUP) == BLK_TN_MESSAGE_EXT) {\n+\t\trwbs[i++] = 'N';\n+\t\tgoto out;\n+\t}\n+\n+\tif (tc & BLK_TC_FLUSH)\n+\t\trwbs[i++] = 'F';\n+\n+\t\trwbs[i++] = 'D';\n+\telse if ((tc & BLK_TC_WRITE_ZEROES)) {\n+\t\trwbs[i++] = 'W';\n+\t\trwbs[i++] = 'Z';\n+\t} else if ((tc & BLK_TC_ZONE_RESET)) {\n+\t\trwbs[i++] = 'Z';\n+\t\trwbs[i++] = 'R';\n+\t} else if ((tc & BLK_TC_ZONE_RESET_ALL)) {\n+\t\trwbs[i++] = 'Z';\n+\t\trwbs[i++] = 'R';\n+\t\trwbs[i++] = 'A';\n+\t} else if ((tc & BLK_TC_ZONE_APPEND)) {\n+\t\trwbs[i++] = 'Z';\n+\t\trwbs[i++] = 'A';\n+\t} else if ((tc & BLK_TC_ZONE_OPEN)) {\n+\t\trwbs[i++] = 'Z';\n+\t\trwbs[i++] = 'O';\n+\t} else if ((tc & BLK_TC_ZONE_CLOSE)) {\n+\t\trwbs[i++] = 'Z';\n+\t\trwbs[i++] = 'C';\n+\t} else if ((tc & BLK_TC_ZONE_FINISH)) {\n+\t\trwbs[i++] = 'Z';\n+\t\trwbs[i++] = 'F';\n+\t} else if (tc & BLK_TC_WRITE)\n+\t\trwbs[i++] = 'W';\n+\telse if (t->bytes)\n+\t\trwbs[i++] = 'R';\n+\telse\n+\t\trwbs[i++] = 'N';\n+\n+\tif (tc & BLK_TC_FUA)\n+\t\trwbs[i++] = 'F';\n+\t\trwbs[i++] = 'A';\n+\tif (tc & 
BLK_TC_SYNC)\n+\t\trwbs[i++] = 'S';\n+\tif (tc & BLK_TC_META)\n+\t\trwbs[i++] = 'M';\n+out:\n+\trwbs[i] = '\\0';\n+}\n+\nstatic inline\nconst struct blk_io_trace *te_blk_io_trace(const struct trace_entry *ent)\n{\n@@ -2514,5 +2570,6 @@ void blk_fill_rwbs(char *rwbs, unsigned int op)\n}\nEXPORT_SYMBOL_GPL(blk_fill_rwbs);\n\n+\n#endif /* CONFIG_EVENT_TRACING */\n\n```"
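The flag-to-letter chain the patch adds in fill_rwbs_ext() can be summarized with a small model. The Python sketch below is my own illustration, not kernel code: set membership stands in for the BLK_TC_* bit tests, and the conditions guarding the orphan 'D' and 'A' assignments (their `if` lines were lost in the extracted diff) are assumed to be discard and read-ahead, matching the pre-existing fill_rwbs():

```python
# Zone-operation categories and their letter pairs, in the order the diff
# tests them within its if/else-if chain.
ZONE_OPS = [
    ("WRITE_ZEROES", "WZ"),
    ("ZONE_RESET", "ZR"),
    ("ZONE_RESET_ALL", "ZRA"),
    ("ZONE_APPEND", "ZA"),
    ("ZONE_OPEN", "ZO"),
    ("ZONE_CLOSE", "ZC"),
    ("ZONE_FINISH", "ZF"),
]

def fill_rwbs_ext(flags, nbytes=512):
    out = []
    if "FLUSH" in flags:
        out.append("F")
    if "DISCARD" in flags:      # assumed condition for the diff's orphan 'D'
        out.append("D")
    else:
        for name, letters in ZONE_OPS:
            if name in flags:
                out.append(letters)
                break
        else:
            if "WRITE" in flags:
                out.append("W")
            elif nbytes:
                out.append("R")
            else:
                out.append("N")
    if "FUA" in flags:
        out.append("F")
    if "AHEAD" in flags:        # assumed condition for the diff's orphan 'A'
        out.append("A")
    if "SYNC" in flags:
        out.append("S")
    if "META" in flags:
        out.append("M")
    return "".join(out)
```

For example, a synchronous zone append renders as "ZAS" and a flushed FUA write as "FWF", mirroring the strings a blktrace consumer would see.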
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.54973894,"math_prob":0.99818254,"size":1728,"snap":"2021-21-2021-25","text_gpt3_token_len":658,"char_repetition_ratio":0.3161253,"word_repetition_ratio":0.20608108,"special_character_ratio":0.5034722,"punctuation_ratio":0.16666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9632628,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-11T21:35:03Z\",\"WARC-Record-ID\":\"<urn:uuid:13a38ebf-04b5-485a-9a5c-3e7b9e41edd6>\",\"Content-Length\":\"23687\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3d6a2328-8956-4d4c-9b68-5182cfff77a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:525581ca-19e9-4ef9-b817-0d00fe0abc22>\",\"WARC-IP-Address\":\"198.145.29.83\",\"WARC-Target-URI\":\"https://patchwork.kernel.org/project/linux-block/patch/20210225070231.21136-28-chaitanya.kulkarni@wdc.com/\",\"WARC-Payload-Digest\":\"sha1:ULJC5UZS3QFSWDDXYJMXN2BVENWWIX3G\",\"WARC-Block-Digest\":\"sha1:L7U2667EZPGLGZGWBKQ755ZLQZQ5QMEP\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989856.11_warc_CC-MAIN-20210511184216-20210511214216-00294.warc.gz\"}"} |
https://puzzling.stackexchange.com/questions/29090/toy-boat-on-a-string | [
"# Toy boat on a string\n\nLittle Johnny Red is standing on the edge of a lake, and the place where he stands is $1$ yard above the water level. Johnny's toy boat is floating in the lake, and the boy is pulling it ashore with a string (that is more than $2$ yards long). When Johnny pulls in $1$ yard of string, will the boat advance towards the shore by (i) more than one yard, (ii) exactly one yard, or (iii) less than one yard?\n\n(Note: This puzzle is tagged as [math]. The solution does neither require lateral thinking nor physics.)\n\n• yard vs meter? or this is kinda conversion question? :) – Oray Mar 17 '16 at 12:08\n• Where is the string tied? Is it exactly at the water level, or is it tied to the mast at some height above the water? Also, is he holding the string with his feet (i.e. at the ground), or at some height above where he is standing? – dpwilson Mar 17 '16 at 12:17\n• @Gamow could you clarify if it's possible for the rope to be less than 2 yard to begin with? In those cases the boat would reach the shore and would be lifted in the air I guess – Ivo Beckers Mar 17 '16 at 12:51\n• @Ivo Beckers: The string should be at least 2y at the beginning, so that the situation can take place. – Gamow Mar 17 '16 at 12:59\n• A shame about the two-yards stipulation. I liked the complexity of it. – goodguy5 Mar 17 '16 at 13:30\n\n## 12 Answers\n\nProof by self-explanatory picture:\n\nThe answer is on the pic below:\n\n• Ha -- I had a very similar idea and we posted almost the exact same time =). – Tyler Seacrest Mar 17 '16 at 14:49\n\nIt has to be (mathematically)\n\n(i)",
null,
"This is our boat and geometry of it.\n\nFirst of all we know that\n\n$x-y=1$\n\nsince the Little Johhny pulled 1 yard the boat and he is 1 meter above from the sea level.\n\n$b^2+1=y^2$ and $1+a^2+2ab+b^2=x^2$\n\nand these are the simple the Pythagorean theorem of an triangle, so by using both equations;\n\n$x^2-y^2=(x-y)(x+y)=1+a^2+2ab+b^2-(b^2+1)=a^2+2ab$\n\nso;\n\n$x+y=a(a+2b)$\n\nThe question is asking about $a$ whether if greater, less or equal to $1$;\n\n1. $y$ has to be greater than $1$ since $b$ should be greater than $0$.\n2. $x$ has to be greater than $2$ since $x-y=1$.\n\nTherefore,\n\n$a(a+2b)>3$\n\nthe largest value of $a$ is possible only when $b\\rightarrow0$, which makes $x$ and $y$ value as $2$ and $1$ sequently, that makes $a$ value as;\n\n$a^2=3$ or $a=\\sqrt3$\n\nthe smallest value of $a$ is possible only while $b\\rightarrow \\infty$ that makes $a$ value;\n\n$a\\geqslant1$\n\nso the range of a becomes;\n\n$\\sqrt3 > a\\geqslant1$\n\n• I'm not sure what you've proved; if $b=150$, then your final equation only shows that $a\\geq0.01$ or so. – Zandar Mar 17 '16 at 12:55\n• @Zandar there are some typos I am on it :) – Oray Mar 17 '16 at 12:56\n• @Oray is there reference on x-y = 1? – Alex Mar 17 '16 at 14:28\n• @Alex x-y=1 is the total length of the string which John pulled. – Oray Mar 17 '16 at 14:29\n• @Oray ok thanks I was thinking you meant it's part of the geometry – Alex Mar 17 '16 at 14:31\n\nMore than 1 yard.\n\nImagine if instead of pulling the boat in by 1 yard of rope, the rope broke 1 yard from the end. The 1 yard falls to the water (pivoting at the original boat location), and the remainder's tip falls to the water. As the shortest distance was the original rope, the two tips have to move away from each other. Hence, more than 1 yard.",
null,
"Ignoring catenary curvature, and momentum of the boat, I would say\n\nMore than a yard.\n\nBecause\n\nImagine Johnny's hand is directly above a point A on the shore. Boat to A and A to Johnny's hand and Johnny's hand to the boat forms a right-angled triangle with the string as hypotenuse. Imagine that Johnny is 0 yards tall and the string is 2 yards in length. The boat is $sqrt(2^2 - 1^2) = sqrt(3)$ from the shore. When Johnny reels in a yard of string, the boat is right at the bank. It has traveled $sqrt(3)$ yards, which is more than one.\n\nMore generally,\n\nIf the distance from Johnny's hand to A is x and the length of the string is y then the square of the distance of the boat from the shore is $sqrt(y^2 - x^2)$. Because the bank is raised, we know both that x is constant and > 0. Therefore the distance from the shore must decrease at a rate strictly greater than the decrease in the length of the string.\n\nJust to note,\n\nWe don't actually know the height of Johnny's hand above water level. The point is only that since the shore is a yard above water level and Johnny is standing, then the distance above water level is somewhere between 1 yard and however tall it is reasonable for Johnny to be. The point being that under any reasonable definition of the word \"standing\" and reasonable geometry of Johnny (e.g. his arms don't hang below his feet) we know that the length of the string is strictly greater than the distance of the boat to shore. So, one more try at clarity.\n\nCall the length of the string y and the distance from shore x. The height of Johnny's hand above water is a constant $n >= 1$. We have:\n$y^2 = x^2 + n^2$\nWhen y decreases by 1, the left hand side of the equation decreases by 2y - 1\nthe right hand side must decrease by more than 2x -1 to maintain equality since y is strictly greater than x.\n\n• if the boat is 1y away from the shore, how can it travel more than 1y ? 
Sorry if this is a strange question, but I don't understand – Marius Mar 17 '16 at 12:31\n• @Marius It reaches the shore before you pull the rope 1y. I think starting with a hypotenuse of 2y might be a better example, though. – Zandar Mar 17 '16 at 12:33\n• @question_asker. I was quoting from the answer \"...and the boat is 1 yard from the shore\" – Marius Mar 17 '16 at 12:35\n• @Marius ah, gotcha – question_asker Mar 17 '16 at 12:36\n\nThe boat is pulled\n\nmore than one yard.\n\nSuppose the boat begins $10$ yards from shore. The string forms the hypotenuse of a right triangle with legs $10$ yards and $1$ yard, resulting in a length of $\sqrt{10^2+1^2}=\sqrt{101}$. When the string is pulled one yard, the hypotenuse is reduced by $1$ to $\sqrt{101}-1$. We now have a new right triangle with legs $x$ and $1$, yielding $\sqrt{101}-1=\sqrt{x^2+1}$. Rearranging for $x$ yields $x^2=101-2\sqrt{101}$, so $x\approx{}8.994$. Therefore,\n\nthe boat has been pulled ever so slightly more than one yard.\n\nAnswer\n\nMore than 1 yard.\n\nWhy?\n\nLet's take a simple case. The rope is 6y long. This means the boat is $\sqrt{6^2 - 1^2} = \sqrt{35}$, approximately $5.916$ yards, from the shore.\nPull 1y of rope, and the rest of the rope is 5y. The boat is $\sqrt{5^2 - 1^2} = \sqrt{24}$, approximately $4.899$ yards, from the shore.\nThe difference is more than 1y.\n\n• That's not the distance travelled by the boat - the triangles before and after pulling are not similar. – Zandar Mar 17 '16 at 12:31\n• @Zandar. Hmmm...true. I'll fix it in a few minutes. Thanks for the heads up – Marius Mar 17 '16 at 12:33\n• @Zandar. Fixed it. – Marius Mar 17 '16 at 13:05\n\nThe answer is\n\n(iii) Less than one yard. The water surface, the string and the quay form a right triangle, with the string being the hypotenuse.
Imagine a one-yard-long segment of the hypotenuse casting a \"shadow\" onto the cathetus; the shadow would be shorter than the original piece (scaled by the factor cathetus/hypotenuse).\n\nThis is valid as long as\n\nthe string is still longer than sqrt(2) yards; from there on, it is more than one yard (see @Hugh Meyers' answer).\n\n• Actually, the rope should be over 2 yards. If it's less than that, Johnny will start lifting the boat when the rope length goes under 1y. – Marius Mar 17 '16 at 13:46\n• He'll lift it at 1y, but for (iii) to be true, a length greater than sqrt(2) does suffice. With a rope length of 2, there would be sqrt(3) yards of water between the boat and the quay. – Verzweifler Mar 17 '16 at 13:50\n\nDefine $D(r)$ as the distance from the boat to the shore when the rope is of length $r$. From the Pythagorean theorem, $D(r) = \sqrt{r^2-1}$. Now, if the derivative of this function is always greater than $1$, then $D(r)$ will change faster than $r$; that is, decreasing $r$ by $1$ will always decrease $D(r)$ by more than $1$. This would mean that pulling the rope $1$ yard would always bring the boat more than $1$ yard closer to the shore.\n\nNow, we see that\n\n\begin{align} D'(r) & =\frac{d}{dr}(r^2-1)^\frac{1}{2} \\ & =\frac{1}{2}(r^2-1)^{-\frac{1}{2}} \frac{d}{dr}(r^2-1) \\ & = \frac{2r}{2\sqrt{r^2-1}} \\ & = \frac{r}{\sqrt{r^2-1}}\\ \end{align}\n\nSince $r>\sqrt{r^2-1}$, this derivative is always greater than $1$. Thus when Johnny pulls in $1$ yard of string, the boat will always move more than $1$ yard towards shore.\n\nMore than a yard\n\nBecause\n\nImagine the scenario the other way around. The boat is $n$ yards from the shore, so the rope is $\sqrt{1+n^2}$ yards long.
Johnny wants to pull the boat in $1$ yd, so the rope will be $\sqrt{1+(n-1)^2}=\sqrt{1+n^2-2n+1}$ yards long.\n\nMaths bit\n\n$\sqrt{1+n^2}-\sqrt{1+n^2-2n+1}=x\to 1+n^2-2\sqrt{(1+n^2)(1+n^2-2n+1)}+1+n^2-2n+1=x^2$, which implies $x\lt1$ by the AM-GM inequality.\n\nI would say\n\nMore than one yard - even if the yard of string only accounted for less than one yard of distance traveled, surely momentum would carry the boat further forward.\n\n(This answer was posted before multiple changes were made to the question)\n\n• I was thinking this as well. Unfortunately there seem to be a lot of assumptions that can be made that yield different answers. – dpwilson Mar 17 '16 at 12:14\n• @dpwilson Oh, I'm positive this wasn't the intended answer, but until it's asserted that this is a lake unlike any that exists in the real world, I'm leaving this here as a totally reasonable and realistic answer, heh – question_asker Mar 17 '16 at 12:18\n• If momentum is kept at bay, I trivially would say that for one yard of rope the distance travelled by boat in water will be less than one yard. I'm not good with mathematics but that seems like it to me. – Prashant Mar 17 '16 at 12:18\n• After a few hit and trials, I take my words back. It will always be more than a yard. I tested with boat being two yards and four yards in water respectively. – Prashant Mar 17 '16 at 12:31\n• @dpwilson Why did you delete your answer? – question_asker Mar 17 '16 at 12:44\n\nIf the planet he stands on is very small and made of water and the boat is on the opposite side of the planet, he could pull it directly towards him through the planet, and then it will be exactly 1 yard closer",
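The mathematical answers above all reduce to the same computation: with the hand one yard above the water, a rope of length $r$ puts the boat $\sqrt{r^2-1}$ yards from shore. Here is a short numerical sketch of that (not from the original posts; it assumes yards throughout and a fixed hand height of 1), checking that reeling in one yard always moves the boat more than one yard:

```python
import math

def distance_from_shore(rope):
    """D(r) = sqrt(r^2 - 1): the boat's distance from shore when the rope
    is `rope` yards long and the pulling hand is 1 yard above the water."""
    return math.sqrt(rope**2 - 1)

def boat_moved(rope, pulled=1.0):
    """Distance the boat travels when `pulled` yards of rope are reeled in."""
    return distance_from_shore(rope) - distance_from_shore(rope - pulled)

# e.g. a 6-yard rope gives sqrt(35) - sqrt(24), just over 1.017 yards
```

The derivative argument explains why this always exceeds 1: $D'(r) = r/\sqrt{r^2-1} > 1$ whenever $r > 1$.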
null,
""
] | [
null,
"https://i.stack.imgur.com/P6PTl.jpg",
null,
"https://i.stack.imgur.com/6fuWc.png",
null,
"https://i.stack.imgur.com/m9SWp.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.95326823,"math_prob":0.9954797,"size":508,"snap":"2021-04-2021-17","text_gpt3_token_len":123,"char_repetition_ratio":0.11706349,"word_repetition_ratio":0.0,"special_character_ratio":0.2519685,"punctuation_ratio":0.10280374,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991509,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,9,null,9,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-11T00:44:46Z\",\"WARC-Record-ID\":\"<urn:uuid:e4226fa6-d125-4273-82de-611001e83d6b>\",\"Content-Length\":\"264643\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b528546e-0cfe-425c-a851-4a60ac76a10d>\",\"WARC-Concurrent-To\":\"<urn:uuid:30c8278a-1f11-4d97-abd6-667f138412cb>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://puzzling.stackexchange.com/questions/29090/toy-boat-on-a-string\",\"WARC-Payload-Digest\":\"sha1:NSAI6WZPMX4IFUJ5XNZB6VLXZCB6QZQH\",\"WARC-Block-Digest\":\"sha1:JYXF27QZT47OFSJJFUULODNSXNSGZV6B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038060603.10_warc_CC-MAIN-20210411000036-20210411030036-00242.warc.gz\"}"} |
https://es.mathworks.com/matlabcentral/cody/problems/39-which-values-occur-exactly-three-times/solutions/172360 | [
"Cody\n\n# Problem 39. Which values occur exactly three times?\n\nSolution 172360\n\nSubmitted on 4 Dec 2012\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1 Fail\n%% x = [1 2 5 2 2 7 8 3 3 1 3 8 8 8]; y_correct = [2 3]; assert(isequal(threeTimes(x),y_correct))\n\nError: Assertion failed.\n\n2 Pass\n%% x = [1 1 1]; y_correct = ; assert(isequal(threeTimes(x),y_correct))\n\n3 Pass\n%% x = [5 10 -3 10 -3 11 -3 5 5 7]; y_correct = [-3 5]; assert(isequal(threeTimes(x),y_correct))\n\n### Community Treasure Hunt\n\nFind the treasures in MATLAB Central and discover how the community can help you!\n\nStart Hunting!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.58489907,"math_prob":0.9781514,"size":575,"snap":"2020-45-2020-50","text_gpt3_token_len":202,"char_repetition_ratio":0.15236427,"word_repetition_ratio":0.0,"special_character_ratio":0.40521738,"punctuation_ratio":0.112068966,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96567166,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-31T02:15:16Z\",\"WARC-Record-ID\":\"<urn:uuid:d11a8b97-86e0-4140-82b0-bc8bcf8e32a4>\",\"Content-Length\":\"81148\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:930280f3-856f-4b35-8d53-f0508e0bca85>\",\"WARC-Concurrent-To\":\"<urn:uuid:b002db0c-9270-4f47-802f-7e081f805d46>\",\"WARC-IP-Address\":\"23.212.144.59\",\"WARC-Target-URI\":\"https://es.mathworks.com/matlabcentral/cody/problems/39-which-values-occur-exactly-three-times/solutions/172360\",\"WARC-Payload-Digest\":\"sha1:BF254RXXQOL2I4A2WCLQJTPBTIRVWHYQ\",\"WARC-Block-Digest\":\"sha1:EUBIJNNQJFSENLSJ2A5KBZZAX2B5JR3F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107912593.62_warc_CC-MAIN-20201031002758-20201031032758-00618.warc.gz\"}"} |