URL (stringlengths 15–1.68k) | text_list (sequencelengths 1–199) | image_list (sequencelengths 1–199) | metadata (stringlengths 1.19k–3.08k)
---|---|---|---|
https://www.r-bloggers.com/2009/09/power-analysis-for-mixed-effect-models-in-r/ | [
"Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.\n\nThe power of a statistical test is the probability that a null hypothesis will be rejected when the alternative hypothesis is true. In lay terms, power is your ability to refine or “prove” your expectations from the data you collect. The most frequent motivation for estimating the power of a study is to figure out what sample size will be needed to observe a treatment effect. Given a set of pilot data or some other estimate of the variation in a sample, we can use power analysis to inform how much additional data we should collect.\n\nI recently did a power analysis on a set of pilot data for a long-term monitoring study of the US National Park Service. I thought I would share some of the things I learned and a bit of R code for others that might need to do something like this. If you aren’t into power analysis, the code below may still be useful as examples of how to use the error handling functions in R (`withCallingHandlers`, `withRestarts`), parallel programming using the `snow` package, and linear mixed effect regression using `nlme`. If you have any suggestions for improvement or if I got something wrong on the analysis, I’d love to hear from you.\n\n## 1 The Study\n\nThe study system was cobblebars along the Cumberland river in Big South Fork National Park (Kentucky and Tennessee, United States). Cobblebars are typically dominated by grassy vegetation that include disjunct tall-grass prairie species. It is hypothesized that woody species will encroach onto cobblebars if they are not seasonally scoured by floods. The purpose of the NPS sampling was to observe changes in woody cover through time. The study design consisted of two-stages of clustering: the first being cobblebars, and the second being transects within cobblebars. The response variable was the percentage of the transect that was woody vegetation. Because of the clustered design, the inferential model for this study design has mixed-effects. I used a simple varying intercept model:",
null,
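The model equation above was an image in the source and did not survive the scrape. A plausible reconstruction of the nested varying-intercept model, inferred from the description below and from the simulation code (so an assumption, not the post's exact notation), is

$$y_{tjk} \sim \mathrm{N}\!\left(\alpha_j + \gamma_{jk} + \beta\,\mathrm{year}_t,\; \sigma^2_y\right), \qquad \alpha_j \sim \mathrm{N}\!\left(\mu_\alpha,\; \sigma^2_\alpha\right), \qquad \gamma_{jk} \sim \mathrm{N}\!\left(0,\; \sigma^2_\gamma\right)$$

for years $t = 1,\dots,n$, cobblebars $j = 1,\dots,J$, and transects $k = 1,\dots,K$.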
"where y is the percent of each transect in woody vegetation sampled n times within J cobblebars, each with K transects. The parameter of inference for the purpose of monitoring change in woody vegetation through time is β, the rate at which cover changes as a function of time. α, γ, σ2γ, and σ2y are hyper-parameters that describe the hierarchical variance structure inherent in the clustered sampling design.\n\nBelow is the function code used I used to regress the pilot data. It should be noted that with this function you can log or logit transform the response variable (percentage of transect that is woody). I put this in because the responses are proportions (0,1) and errors should technically follow a beta-distribution. Log and logit transforms with Gaussian errors could approximate this. I ran all the models with transformed and untransformed response, and the results did not vary at all. So, I stuck with untransformed responses:\n\n```Model <- function(x = cobblebars,\ntype = c(\"normal\",\"log\",\"logit\")){\n## Transforms\nif (type == \"log\")\nx\\$prop.woody <- log(x\\$prop.woody)\nelse if (type == \"logit\")\nx\\$prop.woody <- log(x\\$prop.woody / (1 - x\\$prop.woody))\n\nmod <- lme(prop.woody ~ year,\ndata = x,\nrandom = ~ 1 | cobblebar/transect,\nna.action = na.omit,\ncontrol = lmeControl(opt = \"optim\",\nmaxIter = 800, msMaxIter = 800)\n)\nmod\\$type <- type\n\nreturn(mod)\n}\n```\n\nHere are the results from this regression of the pilot data:\n\n```Linear mixed-effects model fit by REML\nData: x\nAIC BIC logLik\n-134.4319 -124.1297 72.21595\n\nRandom effects:\nFormula: ~1 | cobblebar\n(Intercept)\nStdDev: 0.03668416\n\nFormula: ~1 | transect %in% cobblebar\n(Intercept) Residual\nStdDev: 0.02625062 0.05663784\n\nFixed effects: prop.woody ~ year\nValue Std.Error DF t-value p-value\n(Intercept) 0.12966667 0.01881983 29 6.889896 0.0000\nyear -0.00704598 0.01462383 29 -0.481815 0.6336\nCorrelation:\n(Intr)\nyear -0.389\n\nNumber of Observations: 60\nNumber of Groups:\ncobblebar transect %in% cobblebar\n6 30\n```\n\n## 2 We don't learn about power analysis and complex models\n\nWhen I decided upon the inferential model the first thing that occurred to me was that I never learned in any statistics course I had taken how to do such a power analysis on a multi-level model. I've taken more statistics courses than I'd like to count and taught my own statistics courses for undergrads and graduate students, and the only exposure to power analysis that I had was in the context of simple t-tests or ANOVA. You learn about it in your first 2 statistics courses, then it rarely if ever comes up again until you actually need it.\n\nI was, however, able to find a great resource on power analysis from a Bayesian perspective in the excellent book \"Data Analysis Using Regression and Multilevel/Hierarchical Models\" by Andrew Gelman and Jennifer Hill. Andrew Gelman has thought and debated about power analysis and you can get more from his blog. The approach in the book is a simulation-based one and I have adopted it for this analysis.\n\n## 3 Analysis Procedure\n\nFor the current analysis we needed to know three things: effect size, sample size, and estimates of population variance. We set effect size beforehand. In this context, the parameter of interest is the rate of change in woody cover through time β, and effect size is simply how large or small a value of β you want to distinguish with a regression. Sample size is also set a priori. 
In the analysis we want to vary sample size by varying the number of cobblebars, the number of transects per cobblebar, or the number of years the study is conducted.\n\nThe population variance cannot be known precisely, and this is where the pilot data come in. By regressing the pilot data using the model we can obtain estimates of all the different components of the variance (cobblebars, transects within cobblebars, and the residual variance). Below is the R function that will return all the hyperparameters (and β) from the regression:\n\n```GetHyperparam<-function(x,b=NULL){\n## Get the hyperparameters from the mixed effect model\nfe <- fixef(x)\n\nif(is.null(b))\nb<-fe[2] # use the data effect size (the year slope) if not supplied\n\nmu.a <- fe[1] # intercept\n\nvc <- VarCorr(x)\nsigma.y <- as.numeric(vc[5, 2]) # Residual StdDev\nsigma.a <- as.numeric(vc[2, 2]) # Cobblebar StdDev\nsigma.g <- as.numeric(vc[4, 2]) # Cobblebar:transect StdDev\n\nhp<-c(b, mu.a, sigma.y, sigma.a, sigma.g)\nnames(hp)<-c(\"b\", \"mu.a\", \"sigma.y\", \"sigma.a\", \"sigma.g\")\nreturn(hp)\n}\n```\n\nTo calculate power we regress the simulated data in the same way we did the pilot data, and check for a significant β. Since optimization is done using numeric methods there is always the chance that the optimization will not work. So, we make sure the regression on the fake data catches and recovers from all errors. The solution for error recovery is to simply try the regression on a new set of fake data. This function is a pretty good example of using the R error handling functions `withCallingHandlers` and `withRestarts`.\n\n```fakeModWithRestarts <- function(m.o, n = 100, ...){\n## A Fake Model\nwithCallingHandlers({\ni <- 0\nmod <- NULL\nwhile (i < n & is.null(mod)){\nmod <- withRestarts({\nf <- fake(m.orig = m.o, transform = F, ...)\nreturn(update(m.o, data = f))\n},\nrs = function(){\ni <<- i + 1\nreturn(NULL)\n})\n}\nif(is.null(mod))\nwarning(\"ExceededIterations\")\nreturn(mod)\n},\nerror = function(e){\ninvokeRestart(\"rs\")\n},\nwarning = function(w){\nif(w\\$message == \"ExceededIterations\")\ncat(\"\\n\", w\\$message, \"\\n\")\nelse\ninvokeRestart(\"rs\")\n})\n}\n```\n\nTo calculate the power of a particular design we run `fakeModWithRestarts` 1000 times and look at the proportion of significant β values:\n\n```dt.power <- function (m, n.sims = 1000, alpha=0.05, ...){\n## Calculate power for a particular sampling design\nsignif<-rep(NA, n.sims)\nfor(i in 1:n.sims){\nlme.power <- fakeModWithRestarts(m.o = m, ...)\nif(!is.null(lme.power))\nsignif[i] <- summary(lme.power)\\$tTable[2, 5] < alpha\n}\npower <- mean(signif, na.rm = T)\nreturn(power)\n}\n```\n\nFinally, we want to perform this analysis on many different sampling designs. In my case I did all combinations of a set of effect sizes, cobblebars, transects, and years. 
So, I generated the appropriate designs:\n\n```factoredDesign <- function(Elevs = 0.2/c(1,5,10,20),\nNlevs = seq(2, 10, by = 2),\nJlevs = seq(4, 10, by = 2),\nKlevs = c(3, 5, 7), ...){\n## Generates factored series of sampling designs for simulation\n## of data that follow a particular model.\n## Inputs:\n## Elevs - vector of effect sizes for the slope parameter.\n## Nlevs - vector of number of years to sample.\n## Jlevs - vector of number of cobblebars to sample.\n## Klevs - vector of number of transects to sample.\n## Results:\n## Data frame where columns are the factors and\n## rows are the designs.\n\n# Level lengths\nlE <- length(Elevs)\nlN <- length(Nlevs)\nlJ <- length(Jlevs)\nlK <- length(Klevs)\n\n# Generate repeated vectors for each factor\nE <- rep(Elevs, each = lN*lJ*lK)\nN <- rep(rep(Nlevs, each = lJ*lK), times = lE)\nJ <- rep(rep(Jlevs, each = lK), times = lE*lN)\nK <- rep(Klevs, times = lE*lN*lJ)\n\nreturn(data.frame(E, N, J, K))\n}\n```\n\nOnce we know our effect sizes, the different sample sizes we want, and the estimates of population variance we can generate simulated datasets that are similar to the pilot data. To calculate power we simply simulate a large number of datasets and calculate the proportion of slopes, β, that are significantly different from zero (p-value < 0.05). This procedure is repeated for all the effect sizes and sample sizes of interest. Here is the code for generating a simulated dataset. It also handles the inverse transform of the response variable if necessary.\n\n```fake <- function(N = 2, J = 6, K = 5, b = NULL, m.orig = mod,\ntransform = TRUE, ...){\n## Simulated Data for power analysis\n## N = Number of years\n## J = Number of cobblebars\n## K = Number of transects within cobblebars\nyear <- rep(0:(N-1), each = J*K)\ncobblebar <- factor(rep(rep(1:J, each = K), times = N))\ntransect <- factor(rep(1:K, times = N*J))\n\n## Simulated parameters\nhp<-GetHyperparam(x=m.orig)\nif(is.null(b))\nb <- hp['b']\ng <- rnorm(J*K, 0, hp['sigma.g'])\na <- rnorm(J*K, hp['mu.a'] + g, hp['sigma.a'])\n\n## Simulated responses\neta <- rnorm(J*K*N, a + b * year, hp['sigma.y'])\nif (transform){\nif (m.orig\\$type == \"normal\"){\ny <- eta\ny[y > 1] <- 1 # Fix any boundary problems.\ny[y < 0] <- 0\n}\nelse if (m.orig\\$type == \"log\"){\ny <- exp(eta)\ny[y > 1] <- 1\n}\nelse if (m.orig\\$type == \"logit\")\ny <- exp(eta) / (1 + exp(eta))\n}\nelse{\ny <- eta\n}\n\nreturn(data.frame(prop.woody = y, year, transect, cobblebar))\n}\n```\n\nThen I performed the power calculations on each of these designs. This could take a long time, so I set this procedure to use parallel processing if needed. Note that I had to re-`source` the file with all the necessary functions for each processor.\n\n```powerAnalysis <- function(parallel = T, ...){\n## Full Power Analysis\n\n## Parallel\nif(parallel){\ncloseAllConnections()\ncl <- makeCluster(7, type = \"SOCK\")\non.exit(closeAllConnections())\nclusterEvalQ(cl, source(\"cobblebars2.r\"))\n}\n\n## The simulations\ndat <- factoredDesign(...)\n\nif (parallel){\ndat\\$power <- parRapply(cl, dat, function(x,...){\ndt.power(N = x[\"N\"], J = x[\"J\"], K = x[\"K\"], b = x[\"E\"], ...)\n}, ...)\n} else {\ndat\\$power <- apply(dat, 1, function(x, ...){\ndt.power(N = x[\"N\"], J = x[\"J\"], K = x[\"K\"], b = x[\"E\"], ...)\n}, ...)\n}\n\nreturn(dat)\n}\n```\n\nThe output of the `powerAnalysis` function is a data frame with columns for the power and all the sample design settings. 
So, I wrote a custom plotting function for this data frame:\n\n```plotPower <- function(dt){\nxyplot(power~N|J*K, data = dt, groups = E,\npanel = function(...){panel.xyplot(...)\npanel.abline(h = 0.8, lty = 2)},\ntype = c(\"p\", \"l\"),\nxlab = \"sampling years\",\nylab = \"power\",\nstrip = strip.custom(var.name = c(\"C\", \"T\"),\nstrip.levels = c(T, T)),\nauto.key = T\n)\n}\n```\n\nBelow is the figure for the cobblebar power analysis. I won't go into detail on what the results mean since I am concerned here with illustrating the technique and the R code. Obviously, as the number of cobblebars and transects per year increase, so does power. And, as the effect size increases, observing it with a test is easier.",
null,
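As a recap, here is a minimal end-to-end sketch of how the pieces above fit together. It assumes the functions are saved in `cobblebars2.r` (the file name used inside `powerAnalysis`) and that the pilot data live in a data frame `cobblebars` with columns `prop.woody`, `year`, `cobblebar`, and `transect`; it is a sketch under those assumptions, not code from the original post.

```
## End-to-end sketch (assumptions noted above)
library(nlme)     # lme() for the mixed-effect regressions
library(snow)     # makeCluster()/parRapply() for the parallel runs
library(lattice)  # xyplot() used by plotPower()

source("cobblebars2.r")                 # Model(), GetHyperparam(), fake(), dt.power(), ...

mod <- Model(cobblebars, type = "normal")       # regress the pilot data
GetHyperparam(mod)                              # variance components that drive the simulations

dat <- powerAnalysis(parallel = TRUE, m = mod)  # power for every design from factoredDesign()
plotPower(dat)                                  # power curves with the 0.8 reference line
```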
"Date: 2009-09-18 Fri\n\nHTML generated by org-mode 6.30trans in emacs 22"
] | [
null,
"https://i2.wp.com/toddjobe.blogspot.com/2009/09/ltxpng/powerAnalysis_0001.png",
null,
"https://i2.wp.com/toddjobe.blogspot.com/2009/09/ltxpng/powerAnalysis_0002.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8126074,"math_prob":0.98227674,"size":12511,"snap":"2022-05-2022-21","text_gpt3_token_len":3264,"char_repetition_ratio":0.11809387,"word_repetition_ratio":0.031538088,"special_character_ratio":0.27759573,"punctuation_ratio":0.1543332,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9973056,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-27T12:29:18Z\",\"WARC-Record-ID\":\"<urn:uuid:ce60ff46-f74a-46ea-98ff-90dc818395ce>\",\"Content-Length\":\"111180\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cbf9754d-4385-4b99-b29b-f56382028778>\",\"WARC-Concurrent-To\":\"<urn:uuid:2091afd8-9ae6-430b-a6d4-f5dc5e6a9be6>\",\"WARC-IP-Address\":\"172.64.104.18\",\"WARC-Target-URI\":\"https://www.r-bloggers.com/2009/09/power-analysis-for-mixed-effect-models-in-r/\",\"WARC-Payload-Digest\":\"sha1:VKYJ6CVGJRMQL2DKK3BF7VHMIBOICW5U\",\"WARC-Block-Digest\":\"sha1:LVJQEWZ7VZ7RCNKVP47GIFH3GN3RPSVB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662647086.91_warc_CC-MAIN-20220527112418-20220527142418-00374.warc.gz\"}"} |
https://russianpatents.com/patent/221/2214680.html | [
"Way to normalize the metric values of the component decoder in a mobile communication system and device for its implementation\n\nThe invention relates to communication systems and can be used in means of mobile communication. The technical result is to use changes in the set of metrics on the set of time segments. The decoder contains a decision tree, which generates a signal decisions when all metric values exceed the predetermined value. MyCitadel subtracts the predetermined value from the metric values in response to the signal decision to normalize the value of metrics. The decision tree contains many storage devices that remember the appropriate value metrics. 3 S. and 3 C.p. f-crystals, 13 ill., table 1.\n\nTechnical field the Present invention relates to a device and method of iterative decoding for mobile communication systems and, in particular, relates to devices and method for normalizing metric values stored in the component decoder is an iterative decoder in a mobile communication system.\n\nThe prior art In General, iterative decoding is used in such mobile communication systems, as IMT-2000 (or systems multiple access, code-division multiplexing mdcr-2000 and UMTS), which is applied that is a similar connection, using cascaded convolutional codes, cascading block codes or composite codes. The technical scope of iterative decoding is associated with the so-called \"soft\" (not defined) solutions and optimal characteristics-correcting code errors.\n\nIn Fig.1 shows a known iterative decoder, comprising two component decoder. According Fig.1 the first component decoder 101 receives signals Xtosystematic code, the first signal parity Y1Kreceived from the demultiplexer 107 (which demuxes input signals parity Ytoand first external information signal. The first component decoder 101 decodes the received signals, giving primary decoded signal associated with the decoding results. This signal consists of component Xtosignals a systematic code and the second external information component. The interleaver 103 performs interleaving of primary decoded signal. The second component decoder 105 receives the primary decoded signal coming from the output of the interleaver 103, and the second signal parity Y2Kreceived from demultiply parity Y2Kby issuing a second decoded signal is converted interleaver 111. Next, the second component decoder 105 via directed interleaver 109 delivers the external information component to the first component decoder 101.\n\nAs shown in Fig.2, the first component decoder includes unit 113 measurements of branching (WWII) to calculate metrics branching and block 115 summation-compare-select (CERs) to calculate metrics and perform the comparison in each state to choose the path with fewer errors.\n\nIn the General case, the iterative decoder computes the metric value Mtaccording to the following equation (1).",
null,
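Equation (1) was an image in the source and is not recoverable from this scrape. For orientation only, an accumulated-metric update of the general kind being described, written with the quantities defined below, would look like

$$M_t = M_{t-1} + \tfrac{1}{2}\,u_t\,L(u_t) + \tfrac{L_c}{2}\,u_t\,y_{t,0} + \tfrac{L_c}{2}\sum_{j\geq 1} x_{t,j}\,y_{t,j},$$

where $y_{t,0}$ denotes the received systematic value; this is an assumed typical form, not the patent's exact expression.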
"where Mt- the accumulated value of the metric at time t; Ut- the code word for the systematic bits, the code word for each bit Xto; xt,j- codeword redundancy bits; yt,j- the resulting value for the channel (systematic + excess); Lc is the reliability of the channel, and L(ut) is the a priori value of reliability at time t.\n\nFrom equation (1) implies that at each calculation of the metric the metric value Mtcontinuously growing by the second, third and fourth members. When perepolneny. However, the main purpose of the iterative decoder is performing iterative decoding to improve the characteristics of the decoding (i.e. error rate bit (hospital has no facilities) or error rate for personnel (PSCS)). Thus, during execution of the iterative decoder's function after a number of consecutive iterations metric values can increase and grow beyond the specified range. Therefore, if you are developing hardware decoder assumes the task of a certain range for the values of the metrics, the value metric may exceed the specified range, and there is a problem of overflow.\n\nSummary of the invention Therefore, the object of the present invention is to provide a device and method for normalizing metric values of the component decoder, and exceeding all accumulated metric values for the current state of a certain threshold, these accumulated metric values are normalized to a specific level after subtracting from them the specified value.\n\nTo achieve the above result, it is proposed a decoder that uses the change in the set of metrics on the set of time segments. The decoder includes therefore the TES value. MyCitadel it subtracts a predetermined value from the metric values in response to the signal decision to normalize the metric values. The decision tree includes multiple storage devices for storing the corresponding values of the metrics with a predetermined number of bits. The logical element AND-NOT to generate a signal (\"1\" or high signal) when all values most bits (PRS) provided in the respective storage device is equal to \"1\" (high level). MyCitadel sets to zero the PRS in each storage device, when the logical element AND-NOT outputs of higher-level decisions, resulting from each of the metric values is subtracted preset value.\n\nBrief description of drawings\nThe above and other objectives, features and advantages of the present invention are explained in the following detailed description, illustrated by the drawings, which represent the following:\nFig. 1 is a block diagram showing the iterative decoder containing two-component decoder;\nFig.2 is a detailed block diagram showing the component decoders of Fig. 1;\nFig. 3 is a diagram illustrating the operation of CERs komponentov is but first embodiment of the present invention;\nFig. 4 is a flowchart showing the procedure of normalizing metric values according to the first embodiment of the present invention;\nFig. 5 is a diagram illustrating the operation of the CER component decoder, which has the facility to normalize the metric values in the block CER component decoder according to the second variant of the present invention;\nFig. 6 is a diagram showing the format of the storage device for the values of metrics to normalize the metric values according to the second variant of the present invention;\nFig. 7 is a flowchart illustrating the procedure of normalizing metric values according to the second variant of the present invention;\nFig. 
8A and 8B is a diagram illustrating the right way, a wrong way and a difference of ways, and the quantization scheme for the code symbols;\nFig. 9A-9C is a diagram illustrating a right way and wrong way in accordance with the signal-to-noise ratio; and\nFig. 10 is a graph showing the value of",
null,
"maxin the saturation state, depending on the relationship of energy (signal) to the noise power Eb/No.\n\nDetailed description the preferred option of carrying out the invention\nCERs for a component decoder in accordance with the present itaut the threshold value.\n\nThere are two ways to normalize the accumulated metric values in accordance with the present invention. According to the first method the accumulated metric values are normalized using the minimum accumulated metric values when one of the accumulated metric values of the respective States exceeds the threshold value. According to the second method accumulated metric values are normalized using the pre-set value, when all the accumulated metric values exceed the threshold value.\n\nNormalization for CERs of the present invention can be used for the normalization to unit CERs 115 iterative decoder 101 described above in connection with Fig.2.\n\nA. the First option\nNext with reference to Fig.3 describes a first variant embodiment of the invention. In Fig.3 shows the block structure of CERs, with the device normalizing metric values for the code restriction K=3 according to the first embodiment of the present invention.\n\nBelow with reference to Fig.3 describes a device normalizing metric values. In Fig.3 shows four \"current state\", each of which has a metric value. When K=3 the number of shift registers for the values of the metrics is the La of each state. When all defined values of the metrics exceeds a threshold value, the comparator 117 outputs the specified value to the adders 125 - 125d, and each adder connected between one's current state and one by the following conditions. Then the adders 125 - 125d of the accumulated metric values for the current state subtracts the specified value, and the resulting values are given in the following States. In this description, the term \"cumulative metric values of the current state\" is used instead of the term \"metric values of the current state and on the contrary, to emphasize the fact that the values of the metrics for the current state of the sequential computation of the metrics are accumulated.\n\nIn Fig. 4 shows the procedure for normalizing metric values according to the first embodiment of the present invention. According Fig.4, the comparator 117 at step 401 determines the metric values for the four current States. After determining the values of the metrics comparator 117 at step 403 checks whether at least one of the specific accumulated metric values in the threshold value. If none of the accumulated metric values exceeds the threshold value, the comparator 117 passes to step 407 for performing, the comparator 117 at step 405 outputs the adders 125-125d minimum of four detected accumulated metric values. Then the adders 125-125d from all four of the accumulated metric values are subtracted minimum accumulated metric value, and then proceeds in the following States. After that, as shown in step 407, the decoder goes to the normal operation of CERs.\n\nC. the Second option\nThe following describes a second variant embodiment of the invention.\n\nIn Fig.5 shows the block structure normalization of CERs according to the second variant of the present invention. 
According Fig.5, the comparator includes many storage devices 130, 132, 134 and 136 for storing the accumulated metric values of the respective States, the logical element And 121 to determine whether all the accumulated metric values stored in the storage devices 130, 132, 134 and 136, exceeds a threshold value, and the inverter 119 for setting to zero the high order bit (PRS) of the respective storage devices 130, 132, 134 and 136 in response to a high signal issued by the logical element And 121.\n\nThe format of the memory devices described with reference to Fig.6. Here it is assumed that each accumulated Zn is with one additional bits to prevent overflow of the accumulated metric values. Thus, the accumulated value of the metric is only 9 bits per sample. As shown in Fig.5, the logical element 121 And receives the ninth bit, which is the highest bit (PRS) storage devices 130, 132, 134 and 136, and generates an output signal of high level when all the input signals is \"1\". That is, when none of the PRS storage devices 130, 132, 134 and 136 is not equal to \"1\", the logical element 121 And does not generate an output signal (low level signal). When all of the PRS of the storage devices have a high level or \"1\", the logical element 121 And generates a high signal. When the logical element 121 And outputs a signal of high level, the inverter 119 outputs a signal installation in the zero bit of the PRS storage devices 130, 132, 134, 136, thereby establishing the PRS bits to zero. This is equivalent to subtraction of 256 from each of the accumulated metric values, which allows to Express the accumulated metric values using 8 bits.\n\nSuppose, further, that the difference between the accumulated values of the metrics of the two States is",
null,
"k= (uki-ukj-)",
null,
"",
null,
"maxwhere i and j represent one of the values 0, 1, 2 and 3, a k - p the IR for the two States is",
null,
"max= 255 = 28-1. Finally, suppose that u1k- minimum metric value, a u3k- maximum metric value, as shown in Fig.6.\n\nAs for overflow, if all bits of the PRS metric values at time k is equal to \"1\", the minimum value is 256.\n\nAccording to the above assumptions, if the PRS for u3kis \"1\", then the PRS other States will be equal to \"0\" or \"1\". Until all bits of the PRS for uik(where 0",
null,
"i",
null,
"3) becomes equal to \"1\", the output signal of the transfer will not appear even if the PRS u3k(and maybe one or two other metric values) is equal to \"1\". That is, until all bits of the PRS becomes equal to \"1\", the output signal of the transfer on the ninth bit does not occur for any of them. This means that",
null,
"kdoes not exceed",
null,
"max.\nIn Fig.7 shows a flowchart illustrating the procedure of normalizing metric values according to the second variant. According Fig.5 and 7, at step 501, the comparator 117 determines the accumulated metric values in specific units, the logical element 121 And the comparator 117a determines (or receives) PRS bits of the accumulated values ametryplene metric values corresponding to the current state exceeds a threshold value. That is, the logical element 121 of the comparator 117 determines whether all bits of the PRS is \"1\", as shown in decision block 503. If none of the PRS is not equal to \"1\", the comparator 117 proceeds to step 507 to perform common operations CERs. If all bits of the PRS accumulated metric values are equal to \"1\", the comparator 117a goes to step 505, where each metric value is subtracted threshold value. That is, all bits of the PRS are set to zero. This corresponds to the logical element 121 And applying a high signal to the inverter 119, which in response to this signal generates a setting signal to zero bits PRS respective accumulated metric values, thereby establishing the PRS to zero. After installing PRS bits to zero comparator 117a, as shown in step 507, performs the normal operation of CERs.\n\nNext with reference to Fig. with 8A 10 description",
null,
"maxdefined above. When",
null,
"k<",
null,
"maxoverflow does not occur.",
null,
"maxhas a lower value at low value of the ratio Eb/No and has a higher value at high value of the ratio Eb/No. That is, the difference between the values of the metrics has the m, that noise at low Eb/No increases, resulting in reduction of the aforementioned difference, at high Eb/No noise is extremely small, which increases the difference",
null,
"maxbetween the values of the metrics. Therefore, it is very important, which is set to",
null,
"maxat high Eb/No. In the first case, you can just assume that",
null,
"maxhas an infinite value at infinite value of Eb/No. However, for example, the Viterbi algorithm with weakly defined (\"soft\") output (SOVA algorithm) the difference between the metrics is limited to a constant, defined asfree.\n\nFor example, suppose you have 4 bits per sample, the rate of the code R= 1/3, K= 9 convolutional code transmits a code word with all zeros \"000\". In this case, when the high value of the ratio Eb/No most of the errors you receive during the comparison/selection between the way with all zeros and by dfreeas shown in Fig.8A. Here, the value of the branch metric and the metric value of the path is calculated by the following equations (2) and (3) respectively.",
null,
"",
null,
"where 1=0, 1, 2, and 3, Ck,j- code word y(i)k,j- received signal, r is the PTO is sup>ki= us,ki-uc,ki",
null,
"",
null,
"maxwhere \"s\" denotes the selected path, and \"C\" indicates a competitive way. You must calculate",
null,
"maxat high Eb/No, when",
null,
"kihas a maximum value. This means that if",
null,
"kiless",
null,
"maxat high Eb/No, when",
null,
"kihas a maximum value, then the difference between the values of the metrics does not exceed",
null,
"max.\nIn this state of \"i\" there is a difference metric between two paths: the path with all zeros and the path dfree. In Fig.8B shows that the difference between the two paths depends on the code symbol dfree.\n\nIn other words, the metric of the selected path is a value obtained by summing the metric of the path for the zero path with a metric value in the first state in the previous time, and the metric competitive path is a value obtained by adding the path metric corresponding to a competitive way, with the metric value in the second state in PA more than the metric of the path between the first state and the time of the comparison, the difference",
null,
"maxequal to or greater than",
null,
"ki. Therefore, when the conditions are satisfied for",
null,
"maxalso satisfied the terms and",
null,
"ki. The fact that the difference does not exceed",
null,
"maxthat means that the difference in metric values between the two conditions in the above-mentioned point in time does not exceed",
null,
"The difference metric is given by the following expression:\n(dfree(convolutional encoder)=18 for K=9, R=1/3)",
null,
"where M denotes the value of the metric at the point of branching of the chosen path and competitive way. Therefore, when a condition is met",
null,
"max",
null,
"270, the value of the difference between the corresponding States does not exceed",
null,
"max. Because it was assumed that the sample has 4 bits, the number of storage devices for storing the metric values is 8, and as to prevent overflow is added 1-bit storage device,/p> In Fig. 9A shows the value of",
null,
"maxat high signal-to-noise ratio, the value of",
null,
"maxis calculated by the formula",
null,
"max= dfree",
null,
"Max(Q[ctot]) ... (4)\nwhere Q denotes the level of quantization, a Max(Q[.]) denotes the distance between \"0\" and \"1\". For example, 4 bits per sample Q=16 and Max(Q[.])= 15, and 3 bits per sample Q=8, a Max(Q[.])=7.\n\nIn Fig.9B shows the value of",
null,
"maxwith an average signal-to-noise ratio, where a value of",
null,
"maxat this point, is calculated by the formula",
null,
"max= (dfree+",
null,
")",
null,
"Max(Q[.]) ... Ur.5\nwhere the value of",
null,
"due to noise, a very small value, and it is less than or equal to 2 x dfreex Max(Q[.]) in the convolutional encoder (SC). However, this is not the case when",
null,
"summed as shown in equation (5).\n\nIn Fig. 9C shows the value of",
null,
"maxat low signal-to-noise, while at this point, the value of",
null,
"maxis calculated by the formula",
null,
"max=0, it should be noted that the value of",
null,
"maxgradually increases with increasing Eb/No, and starting from a certain point saturation occurs. If",
null,
"maxsatisfies equation (5), equation (6) is also satisfied.\n\nThe following describes the characteristics of the convolutional encoder in the system of the CDMA-2000.\n\nFor K=9 and R=1/2, dfree=12, and dfree",
null,
"14, 16, 18, 20.\n\nFor K=9 and R=1/3, dfree=18, and the next dfree",
null,
"20, 22.\n\nFor K=9 and R=1/4, dfree=24, and the next dfree",
null,
"26, 18.\n\nThe table shows the value of",
null,
"maxin the convolutional encoder (SC).\n\nTherefore, the number of bits that are added to prevent overflow for 8 bits per sample, which are assigned for metric values, is defined as follows.\n\nFor R=1/2 the number of bits equal to 1, since 28=256 180<256; for R= 1/3 the number of bits is 2, because 29=512, and 270<512; for R=1/4 the number of bits is 2, because 29=512, and 360<512. In other words, because the rate of the code R=1/2 requires 8 bits to prevent overflow, you need to add only 1 bit. In addition, because scene, adding to the number of bits required for a given speed of your code, only 1 bit.\n\nAs described above, the new device can prevent errors due to overflow by normalizing the accumulated metric values for decoding, resulting in more efficient memory usage.\n\nAlthough the invention has been shown and described with reference to specific preferred implementation, specialists in the art it is obvious that it can be made various changes in form and detail, not beyond being and scope of the invention defined in the claims.\n\nClaims\n\n1. The decoder uses the change in the set of metrics on the set of time segments, containing the decision tree, which generates a signal decisions when all metric values exceed the predetermined value, and myCitadel, which subtracts the predetermined value from the metric values in response to a signal solutions.\n\n2. The decoder under item 1, characterized in that the decision tree contains many storage devices having a pre-set E, which as the input signal gets values most bits (PRS) supplied from the respective storage devices, and the output of the logical element AND-NOT given the signal of the higher-level decisions, when all values of the PRS have a high level.\n\n3. The decoder under item 2, characterized in that myCitadel contains an inverter that has an input signal, which is combined with the output signal of the logical element AND-NOT, and the output signal, which mates with the PRS storage devices, and the signal of the higher-level decisions issued by the logical element AND-NOT on the inverter generates an output signal of the inverter, which sets to zero the value of the PRS of the respective storage devices.\n\n4. The method of normalizing metric values in the decoder, using the edit multiple values of the metrics on the set of time segments, comprising the steps of determining whether all of the metric values exceed the preset level, and subtracting a predetermined value from the values of the metrics for transition to the next state when all metric values exceeded this predetermined value.\n\n5. The method according to p. 4, characterized in that the subtraction pre-installed is achene metrics.\n\n6. The method of normalizing metric values in the decoder, using the edit multiple values of the metrics on the set of temporal segments and having multiple storage devices for storing the metric values with a predetermined number of bits, comprising the steps of determining whether all the values of the PRS of the respective storage devices is equal to 1, and setting to zero the values of the PRS, when all values of the PRS is equal to 1.\n\nSame patents:",
null,
"The invention relates to communication systems and can be used in the quantization means",
null,
"The invention relates to the field of radio communications and computing, and more particularly to methods and devices for data transmission in the computer network by radio with pseudorandom change the operating frequency",
null,
"The invention relates to electrical engineering and can be used in communication systems with pseudorandom change the operating frequency",
null,
"The invention relates to electrical engineering and can be used in the communication system with signal transmission in a wide range, in particular to the actions of search honeycomb performed by the mobile station, and to receive specific honeycomb long code used in the communication system in a wide range",
null,
"The invention relates to a communication system, multiple access, code-division multiplexing (mdcr)",
null,
"SUBSTANCE: proposed device depends for its operation on comparison of read-out signal with two thresholds, probability of exceeding these thresholds being enhanced during search interval with the result that search is continued. This broadband signal search device has linear part 1, matched filter 2, clock generator 19, channel selection control unit 13, inverter 12, fourth adder 15, two detectors 8, 17, two threshold comparison units 9, 18, NOT gates 16, as well as AND gate 14. Matched filter has pre-filter 3, delay line 4, n attenuators, n phase shifters, and three adders 7, 10, 11.\n\nEFFECT: enhanced noise immunity under structural noise impact.\n\n1 cl, 3 dwg",
null,
"SUBSTANCE: proposed automatically tunable band filter has series-connected limiting amplifier 1, tunable band filter 2 in the form of first series-tuned circuit with capacitor whose value varies depending on voltage applied to control input, first buffer amplifier 3, parametric correcting unit 4 in the form of second series-tuned circuit incorporating variable capacitor, second buffer amplifier 5, first differential unit 6, first amplitude detector 7, first integrating device 9, and subtraction unit 9. Inverting input of subtraction unit 9 is connected to reference-voltage generator 10 and output, to control input of variable capacitors 2 and 4. Automatically tunable band filter also has series-connected second amplitude detector 11, second integrating unit 12, and threshold unit 13. Synchronous operation of this filter during reception and processing of finite-length radio pulses is ensured by synchronizer 14 whose output is connected to units 10, 8, and 12. This automatically tunable band filter also has second differential unit whose input is connected to output of buffer amplifier 3 and output, to second control input of variable capacitor of band filter 2.\n\nEFFECT: enhanced noise immunity due to maintaining device characteristics within wide frequency range.\n\n1 cl, 1 dwg",
null,
"FIELD: radio communications engineering; mobile ground- and satellite-based communication systems.\n\nSUBSTANCE: proposed modulator that incorporates provision for operation in single-channel mode with selected frequency modulation index m = 0.5 or m = 1.5, or in dual-channel mode at minimal frequency shift and without open-phase fault has phase-shifting voltage analyzer 1, continuous periodic signal train and clock train shaping unit 2, control voltage shaping unit 3 for switch unit 3, switch unit 3, switch unit 4, two amplitude-phase modulators 5, 6, phase shifter 7, carrier oscillator 8, and adder 9.\n\nEFFECT: enlarged functional capabilities.\n\n1 cl, 15 dwg",
null,
"FIELD: electronic engineering.\n\nSUBSTANCE: device has data processing circuit, transmitter, commutation unit, endec, receiver, computation unit, and control unit.\n\nEFFECT: high reliability in transmitting data via radio channel.\n\n4 dwg",
null,
"FIELD: electronic engineering.\n\nSUBSTANCE: method involves building unipolar pulses on each current modulating continuous information signal reading of or on each pulse or some continuous pulse sequence of modulating continuous information code group. The number of pulses, their duration, amplitude and time relations are selected from permissible approximation error of given spectral value and formed sequence parameters are modulated.\n\nEFFECT: reduced inetrsymbol interference; high data transmission speed.\n\n16 cl, 8 dwg",
null,
"FIELD: communication system transceivers.\n\nSUBSTANCE: transceiver 80 has digital circuit 86 for converting modulating signals into intermediate-frequency ones. Signal source 114 transmits first periodic reference signal 112 at first frequency. Direct digital synthesizer 84 receives second periodic signal 102 at second frequency from first periodic reference signal. Converter circuit affording frequency increase in digital form functions to convert and raise frequency of modulating signals into intermediate-frequency digital signals using second periodic signal 102. Digital-to-analog converter 82 converts intermediate-frequency digital signals into intermediate-frequency analog signals using first periodic reference signal 112.\n\nEFFECT: reduced power requirement at low noise characteristics.\n\n45 cl, 3 dwg",
null,
"SUBSTANCE: proposed receiver has multiplier 4, band filter 6, demodulator 8, weighting coefficient unit 5, adding unit 7, analyzing and control unit 10, synchronizing unit 3, n pseudorandom sequence generators 21 through 2n, decoder 1, and switch unit 9. Receiver also has narrow-band noise suppression unit made in the form of transversal filter. Novelty is that this unit is transferred to correlator reference signal channel, reference signal being stationary periodic signal acting in absence of noise and having unmodulated harmonic components that can be rejected by filters of simpler design than those used for rejecting frequency band of input signal and noise mixture. Group of synchronized pseudorandom sequence generators used instead of delay line does not need in-service tuning.\n\nEFFECT: facilitated realization of narrow-band noise suppression unit; simplified design of rejection filters.\n\n1 cl, 8 dwg",
null,
"SUBSTANCE: proposed method and device are intended to control transmission power levels for plurality of various data streams transferred from at least one base station to mobile one in mobile radio communication system. First and second data streams are transmitted from base station and received by mobile station. Power-control instruction stream is generated in mobile station in compliance with first or second data stream received. Power control signal is shaped in mobile station from first power control instruction stream and transferred to base station. Received power control instruction stream is produced from power control signal received by base station; power transmission levels of first and second data streams coming from base station are controlled in compliance with power control instruction stream received. In this way control is effected of transmission power levels of first data stream transferred from each base station out of first active set to mobile station and of transmission power levels of second data stream which is transferred from each base station out of second active set to mobile station.\n\nEFFECT: enlarged functional capabilities.\n\n80 cl, 21 dwg",
null,
"SUBSTANCE: proposed method and device designed for fast synchronization of signal in wade-band code-division multiple access (WCDMA) system involve use of accumulations of variable-length samples, testing of decoder estimates for reliability, and concurrent decoding of plurality of sync signals in PERCH channel. Receiver accumulates samples required for reliable estimation of time interval synchronization. As long as time interval synchronization estimates have not passed reliability tests, samples are accumulated for frame synchronization estimates. As long as frame synchronization estimates have not passed reliability tests, samples are analyzed to determine channel pilot signal shift.\n\nEFFECT: reduced time for pulling into synchronism.\n\n13 cl, 9 dwg",
null,
"FIELD: satellite navigation systems and may be used at construction of imitators of signals of satellite navigational system GLONASS and pseudo-satellites.\n\nSUBSTANCE: for this purpose two oscillators of a lettered frequency and of a fixed frequency are used. Mode includes successive fulfillment of the following operations - generation of a stabilized lettered frequency, its multiplication with an oscillator's fixed frequency and filtration of lateral multipliers with means of filters of L1 and L2 ranges and corresponding option of a fixed and a lettered frequencies.\n\nEFFECT: reduces phase noise and ensures synthesizing of lettered frequencies of L1 and L2 ranges of satellite navigational system from one supporting generator at minimum number of analogous super high frequency units.\n\n3 cl, 1 dwg",
null,
""
] | [
null,
"https://img.russianpatents.com/img_data/65/658960.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/8804.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/8804.gif",
null,
"https://img.russianpatents.com/chr/8804.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/img_data/65/658961.gif",
null,
"https://img.russianpatents.com/img_data/65/658962.gif",
null,
"https://img.russianpatents.com/chr/8804.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/img_data/65/658963.gif",
null,
"https://img.russianpatents.com/img_data/65/658964.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/8804.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/215.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/948.gif",
null,
"https://img.russianpatents.com/chr/215.gif",
null,
"https://img.russianpatents.com/chr/948.gif",
null,
"https://img.russianpatents.com/chr/948.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/chr/8658.gif",
null,
"https://img.russianpatents.com/chr/8658.gif",
null,
"https://img.russianpatents.com/chr/8658.gif",
null,
"https://img.russianpatents.com/chr/916.gif",
null,
"https://img.russianpatents.com/img_data/65/658937-s.jpg",
null,
"https://img.russianpatents.com/img_data/64/641765-s.jpg",
null,
"https://img.russianpatents.com/img_data/64/641756-s.jpg",
null,
"https://img.russianpatents.com/img_data/63/637493-s.jpg",
null,
"https://img.russianpatents.com/img_data/63/633893-s.jpg",
null,
"https://img.russianpatents.com/img_data/88/881714-s.jpg",
null,
"https://img.russianpatents.com/img_data/91/912005-s.jpg",
null,
"https://img.russianpatents.com/img_data/91/912028-s.jpg",
null,
"https://img.russianpatents.com/img_data/91/912078-s.jpg",
null,
"https://img.russianpatents.com/img_data/91/912083-s.jpg",
null,
"https://img.russianpatents.com/img_data/91/916002-s.jpg",
null,
"https://img.russianpatents.com/img_data/92/926680-s.jpg",
null,
"https://img.russianpatents.com/img_data/92/926696-s.jpg",
null,
"https://img.russianpatents.com/img_data/93/931158-s.jpg",
null,
"https://img.russianpatents.com/img_data/93/934536-s.jpg",
null,
"https://img.russianpatents.com/top.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8762438,"math_prob":0.97096294,"size":27281,"snap":"2019-26-2019-30","text_gpt3_token_len":5829,"char_repetition_ratio":0.16149136,"word_repetition_ratio":0.072443515,"special_character_ratio":0.21007295,"punctuation_ratio":0.108457714,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9846552,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,3,null,4,null,6,null,3,null,3,null,null,null,null,null,null,null,10,null,9,null,9,null,null,null,8,null,8,null,8,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-20T02:08:08Z\",\"WARC-Record-ID\":\"<urn:uuid:759d15de-bff2-4401-9b6c-d570956f4d62>\",\"Content-Length\":\"85502\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7153b383-4df2-4513-864e-14f81062119f>\",\"WARC-Concurrent-To\":\"<urn:uuid:87646a70-7da6-4dd9-8d43-091c9a34b0df>\",\"WARC-IP-Address\":\"45.32.90.106\",\"WARC-Target-URI\":\"https://russianpatents.com/patent/221/2214680.html\",\"WARC-Payload-Digest\":\"sha1:3MFIT335WHOPDBFXYOESHU3LPBDJYJPF\",\"WARC-Block-Digest\":\"sha1:4PKJWXUTIZGSD5EEMMRV2AUNJXHSUQSM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999130.50_warc_CC-MAIN-20190620004625-20190620030625-00223.warc.gz\"}"} |
https://itprospt.com/num/20695790/show-that-the-setb-x27-1-1-2-1-1-1-4-3-1-2-0-3-1-2-2-0-basis | [
"1\n\n# Show that the setB' = {(1,-1,2,1), (1,1,-4,3),(1,2,0.3), (1,2,-2,0)}basis for R\"...\n\n## Question\n\n###### Show that the setB' = {(1,-1,2,1), (1,1,-4,3),(1,2,0.3), (1,2,-2,0)}basis for R\"\n\nShow that the set B' = {(1,-1,2,1), (1,1,-4,3),(1,2,0.3), (1,2,-2,0)} basis for R\"",
null,
"",
null,
"#### Similar Solved Questions\n\n##### Problem 1Q An object is thrown downward from the top of the building with initial velocity of 50 ftls. Assuming positive direction of y measured downward from the top; derive an expression for (a) the velocity; and the displacement as function of time_ Assume J(0) = 0 .What is the velocityat F052What is the positionat F25?\nProblem 1Q An object is thrown downward from the top of the building with initial velocity of 50 ftls. Assuming positive direction of y measured downward from the top; derive an expression for (a) the velocity; and the displacement as function of time_ Assume J(0) = 0 . What is the velocity at F052 ...\n##### 14 In &udying Ine ortena gaelicchaadoisic the following sanpledata weecbtanulCaxtuisicFrequencIf the assumption that the characteristics occur #ith the same frequency, find the - test stalistic4) 825233300D) 11.0715) Oneway mayasof vaimoisusd letto equdity Inreed moopopuldion varanos oudity = Inrea0r moc eamplcmcay qudity theeo moropopulaion prcportions equaity tnreaor mcre popultion maans\n14 In &udying Ine ortena gaelicchaadoisic the following sanpledata weecbtanul Caxtuisic Frequenc If the assumption that the characteristics occur #ith the same frequency, find the - test stalistic 4) 82 5233 300 D) 11.07 15) Oneway mayasof vaimoisusd letto equdity Inreed moopopuldion varanos oud...\n##### Eanpe Dests2 3 4 5 61 7 8438 : 10 8\nEanpe Dests 2 3 4 5 61 7 8 43 8 : 10 8...\n##### A) 83% 16:) The\" B) 120% elemental% C) )b% (a 58 butane, C4H1o; 2 & 68.241\nA) 83% 16:) The\" B) 120% elemental% C) )b% (a 58 butane, C4H1o; 2 & 68.24 1...\n##### Choose the correct events referring to the synthesis, translocation and transport in the cell of a multipass transmembrane O-glycosylated plasma membrane protein from the list below and order them chronologicallyProtein is glycosylated in the Golgi SRP (Signal recognition particle) recognizes the signal sequence Protein is glycosylated in the ER (endoplasmic reticulum) Translocator opens and translocation starts Protein is transported to the Golgi Vesicles transporting the protein fuse with the\nChoose the correct events referring to the synthesis, translocation and transport in the cell of a multipass transmembrane O-glycosylated plasma membrane protein from the list below and order them chronologically Protein is glycosylated in the Golgi SRP (Signal recognition particle) recognizes the s...\n##### 105. The hydrogen atom contains 1 proton and 1 electron: The radius of the proton is approximately 1.0 fm (femtometer) , and the radius of the hydrogen atom is approximately 53 pm (picometers) Calculate the volume of the nucleus and the volume of theatom for hydrogen. What percentage of the hydrogen atom s volume does the nucleus occupy? (Hint: Convert both given radii t0 m, and then calculate their volumes using the formula for the volume of sphere, which is V = 37r' ,)\n105. The hydrogen atom contains 1 proton and 1 electron: The radius of the proton is approximately 1.0 fm (femtometer) , and the radius of the hydrogen atom is approximately 53 pm (picometers) Calculate the volume of the nucleus and the volume of theatom for hydrogen. What percentage of the hydrogen...\n##### Explore how a technique called centered differencing can be used to extend our understanding of derivatives by accommodating corners, and some cusps. These exercises should be done in order. 
The graph of $f$ is shown in Figure $4.14$, and the point $P$ is at $left(t_{0}, fleft(t_{0}ight)ight)$.(a) Draw the line segment whose slope is $M(Delta t)=frac{fleft(t_{0}+Delta tight)-fleft(t_{0}-Delta tight)}{2 Delta t}$.(b) Suppose that $f$ is differentiable at $t_{0}$. Verify (analytically, not graphic\nExplore how a technique called centered differencing can be used to extend our understanding of derivatives by accommodating corners, and some cusps. These exercises should be done in order. The graph of $f$ is shown in Figure $4.14$, and the point $P$ is at $left(t_{0}, fleft(t_{0} ight) ight)$. (...\n##### Consider the following differential equation V4) -v\"+a\" + bv' + cy=0where a, b, € are constants: Given equation has the following general solution: Kx) = €1e ~x+ Gzez* C3cos(3x) + c4sin(3x) where €1, Cz, C3, C4 are arbitrary constants Then, which of the following is true?Yanitiniz:a + b + 0=30a + b+c=20a +b+c= - 200 + b+c= -30None of the above\nConsider the following differential equation V4) -v\"+a\" + bv' + cy=0 where a, b, € are constants: Given equation has the following general solution: Kx) = €1e ~x+ Gzez* C3cos(3x) + c4sin(3x) where €1, Cz, C3, C4 are arbitrary constants Then, which of the following i...\n##### Determine currents in each of the resistors Rl = 12 }; R2 = 12 02; R3 = 5 (};R4 =6 (1; RS = 6 ( ; R6 = 6\nDetermine currents in each of the resistors Rl = 12 }; R2 = 12 02; R3 = 5 (};R4 =6 (1; RS = 6 ( ; R6 = 6...\n##### Question 4 (8 p) Complete the following reactions providing the structures of (A), (B), (C) and (D): Write reasonable mechanism for the preparation of (D) from (A}:EtoM 'EtOCH;2.B,OC4HjoO1.H,0CH;0OCHi EL,02 , H,0 +C;HpO\nQuestion 4 (8 p) Complete the following reactions providing the structures of (A), (B), (C) and (D): Write reasonable mechanism for the preparation of (D) from (A}: Eto M 'EtO CH; 2.B,O C4HjoO 1.H,0 CH;0 OCHi EL,0 2 , H,0 + C;HpO...\n##### (6 x 5) Trarslate each of these logical expressions into English Let M(-)\" denote 'X is man\" Let W(r) denote \"X = woman\" Let W (I,y) denote works for y\"_ Let - denote Ivan and denote Peter.J(W(s) _ Vy( M(y) = -W(I,y))) Bolucion:VI(M(c) 3y(W(y) W(I,y))) Bolucion:Hi(M(I) Vy(W(I,y) - W(y))) Bolutioc:Hiy(M(c) W(y,I) W(y)) Bolucion:W(i,p) Vr(W(p,r) +-W(r)) Bolucion:\n(6 x 5) Trarslate each of these logical expressions into English Let M(-)\" denote 'X is man\" Let W(r) denote \"X = woman\" Let W (I,y) denote works for y\"_ Let - denote Ivan and denote Peter. J(W(s) _ Vy( M(y) = -W(I,y))) Bolucion: VI(M(c) 3y(W(y) W(I,y))) Bolucion: Hi(M(...\n##### Question 4A set contains fourteen elements. How many subsets does it have? 16384Blank16384Question 5The set contalning all the elements that are common tO both set A and set B Is called:None.The Intersection of set A and set B.The Unlon of set A and set B_\nQuestion 4 A set contains fourteen elements. How many subsets does it have? 16384 Blank 16384 Question 5 The set contalning all the elements that are common tO both set A and set B Is called: None. The Intersection of set A and set B. The Unlon of set A and set B_...\n##### What is the E\" for the following reaction: 2H+ + 2e < > Hzat 1M[P OA-100v OB.0v 0€ 100 MA 00. 100 V 0 E 100 mA\nWhat is the E\" for the following reaction: 2H+ + 2e < > Hzat 1M[P OA-100v OB.0v 0€ 100 MA 00. 
100 V 0 E 100 mA...\n##### Anenbor child rocelves 59.000 9It towar contcounded quanadyInyasied 21525Honh(Round Vt neltest cenL)\nAnenbor child rocelves 59.000 9It towar contcounded quanady Inyasied 21525 Honh (Round Vt neltest cenL)...\n##### If there is a reaction where the reactants and products are about equal in energy will tend to he reversible or irreversable? and would it be under kinetic or therodynamic control?\nIf there is a reaction where the reactants and products are about equal in energy will tend to he reversible or irreversable? and would it be under kinetic or therodynamic control?...\n##### LlunCCCinvaQuustion 8 ptsLonsidcrthamacnonZHz (e} Nz (c} ZH,0 {El Lab cxperimcnnId conchnon thit thc MNOJ[H-l?Mthc mcacnon undcr ccrln condition: %ath 0.10 MNO and 0.10 MHz, te nte rExcbion @ mneaeuned 64 i0 Mk When INO] k Increxsed 0 40MAhil Ulothermaction prameten renulnthc *mnthc Terchon FunJA Cakculate tte exercted ncw ratc(Report Jrsvrer decimal torm with Ixo signifcInt 6 gures JadUnits Ior mtewould be M/sQuestion 3neaction second ordcr wlth rEspect to bcntophenole (lets cullit 82 Whn sturt\nLlunCC Cinva Quustion 8 pts Lonsidcrthamacnon ZHz (e} Nz (c} ZH,0 {El Lab cxperimcnnId conchnon thit thc MNOJ[H-l? Mthc mcacnon undcr ccrln condition: %ath 0.10 MNO and 0.10 MHz, te nte rExcbion @ mneaeuned 64 i0 Mk When INO] k Increxsed 0 40MAhil Ulothermaction prameten renulnthc *mnthc Terchon Fu..."
] | [
null,
"https://cdn.numerade.com/ask_images/abc14d350605460ba3cd6c4b554c94da.jpg ",
null,
"https://cdn.numerade.com/previews/1f9a1d05-bbda-4b8e-ac1d-e0fb388c715e_large.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8163194,"math_prob":0.9813712,"size":10147,"snap":"2022-40-2023-06","text_gpt3_token_len":3120,"char_repetition_ratio":0.11071675,"word_repetition_ratio":0.59598345,"special_character_ratio":0.288164,"punctuation_ratio":0.13225499,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.995507,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T13:33:03Z\",\"WARC-Record-ID\":\"<urn:uuid:43705ac2-f548-424a-8bb3-ae6a3fe25d93>\",\"Content-Length\":\"53479\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e45d648b-f1db-4adc-ba8a-8d1a7a2c3109>\",\"WARC-Concurrent-To\":\"<urn:uuid:7802d4f7-7c23-4718-bcde-3bed59d7f96f>\",\"WARC-IP-Address\":\"104.26.6.163\",\"WARC-Target-URI\":\"https://itprospt.com/num/20695790/show-that-the-setb-x27-1-1-2-1-1-1-4-3-1-2-0-3-1-2-2-0-basis\",\"WARC-Payload-Digest\":\"sha1:UQLRR4MBPDJXKX62SXKQ7PFQ4PVCXK3Q\",\"WARC-Block-Digest\":\"sha1:LHCZZTXRMPCMA6DGCTZAWWNU2FOAHCCL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337836.93_warc_CC-MAIN-20221006124156-20221006154156-00273.warc.gz\"}"} |
https://www.wowebook.com/book/excel-2013-formulas/ | [
"# Excel 2013 Formulas",
null,
"Maximize the power of Excel 2013 formulas with this must-have Excel reference\n\nJohn Walkenbach, known as “Mr. Spreadsheet,” is a master at deciphering complex technical topics and Excel formulas are no exception. This fully updated book delivers more than 800 pages of Excel 2013 tips, tricks, and techniques for creating formulas that calculate, developing custom worksheet functions with VBA, debugging formulas, and much more.\n\n• Demonstrates how to use all the latest features in Excel 2013\n• Shows how to create financial formulas and tap into the power of array formulas\n• Serves as a guide to using various lookup formulas, working with conditional formatting, and developing custom functions\n• Shares proven solutions for handling typical (and not-so-typical) Excel formula challenges\n• Includes links to the “Mr. Spreadsheet” website, which contains all the templates and worksheets used in the book, plus access to John Walkenbach’s award-winning Power Utility Pak.\n\nFrom charts to PivotTables and everything in between, Excel 2013 Formulas is your formula for Excel success.\n\nPart I: Basic Information\nChapter 1. Excel in a Nutshell\nChapter 2. Basic Facts about Formulas\nChapter 3. Working with Names\n\nPart II: Using Functions in Your Formulas\nChapter 4. Introducing Worksheet Functions\nChapter 5. Manipulating Text\nChapter 6. Working with Dates and Times\nChapter 7. Counting and Summing Techniques\nChapter 8. Using Lookup Functions\nChapter 9. Working with Tables and Lists\nChapter 10. Miscellaneous Calculations\n\nPart III: Financial Formulas\nChapter 11. Borrowing and Investing Formulas\nChapter 12. Discounting and Depreciation Formulas\nChapter 13. Financial Schedules\n\nPart IV: Array Formulas\nChapter 14. Introducing Arrays\nChapter 15. Performing Magic with Array Formulas\n\nPart V: Miscellaneous Formula Techniques\nChapter 16. Importing and Cleaning Data\nChapter 17. Charting Techniques\nChapter 18. Pivot Tables\nChapter 19. Conditional Formatting\nChapter 20. Using Data Validation\nChapter 21. Creating Megaformulas\nChapter 22. Tools and Methods for Debugging Formulas\n\nPart VI: Developing Custom Worksheet Functions\nChapter 23. Introducing VBA\nChapter 24. Function Procedure Basics\nChapter 25. VBA Programming Concepts\nChapter 26. VBA Custom Function Examples\n\nPart VII: Appendixes\nAppendix A. Excel Function Reference\nAppendix B. Using Custom Number Formats\n\n### Book Details\n\n• Paperback: 864 pages\n• Publisher: Wiley (April 2013)\n• Language: English\n• ISBN-10: 1118490444\n• ISBN-13: 978-1118490440"
] | [
null,
"http://img.wowebook.com/images/2038502170.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67605716,"math_prob":0.4453315,"size":2534,"snap":"2021-21-2021-25","text_gpt3_token_len":590,"char_repetition_ratio":0.186166,"word_repetition_ratio":0.0,"special_character_ratio":0.2190213,"punctuation_ratio":0.13551402,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99001837,"pos_list":[0,1,2],"im_url_duplicate_count":[null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-16T12:59:01Z\",\"WARC-Record-ID\":\"<urn:uuid:8a850e7b-eeaa-4e9a-9787-99f0b1e6d871>\",\"Content-Length\":\"35846\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:038a577a-be17-4e3d-9afb-6219098da31b>\",\"WARC-Concurrent-To\":\"<urn:uuid:f9450147-0ad2-40f2-ab91-a2bc0edafae9>\",\"WARC-IP-Address\":\"172.67.133.191\",\"WARC-Target-URI\":\"https://www.wowebook.com/book/excel-2013-formulas/\",\"WARC-Payload-Digest\":\"sha1:KCY2ESPIA66YVDRE6HUSWURWPHCFCJ5V\",\"WARC-Block-Digest\":\"sha1:JZKZPN6RYVVEUZ3IXZDXJ4NEOGSWMIK6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991269.57_warc_CC-MAIN-20210516105746-20210516135746-00107.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/gr-qc/0005086/ | [
"# A class of perfect-fluid cosmologies with polarised Gowdy symmetry and a Kasner-like singularity\n\nKeith Anguige\n###### Abstract\n\nWe prove the existence of a class of perfect-fluid cosmologies which have polarised Gowdy symmetry and a Kasner like singularity. These solutions of the Einstein equations depend on four free functions of one space coordinate and are constructed by solving a system of Fuchsian equations.\n\n## 1 Introduction\n\nIn this paper we use the Fuchsian algorithm to construct a family of perfect-fluid cosmologies with polarised Gowdy symmetry and with Kasner like asymptotics at early times. This family depends on the maximum number of free functions for spacetimes within the symmetry class. The technique used is to perturb exact Bianchi I solutions in one space-direction and to solve a Fuchsian system of equations for the perturbation. The results obtained are a generalisation of those in in the sense that the symmetry requirement has been relaxed a little, but here we require the free data for the field equations to be analytic rather than merely .\n\nThe main result is stated as Theorem 6.1 at the end of the paper.\n\n## 2 Exact solutions\n\nThe general Bianchi I solution of the Einstein-perfect fluid equations, as given in is\n\n ds2=−B2(γ−1)dτ2+τ2p1B2q1dx2+τ2p2B2q2dy2+τ2p3B2q3dz2 (1)\n\nwhere\n\n B2−γ=α+m2τ2−γ α≥0, m>0 (2)\n p1+p2+p3=1 , p21+p22+p23=1 , qi=23−pi (3)\n\n(The case is an FRW model and is excluded in what follows.)\n\nThe density of the fluid is given by\n\n μ=4m23τγBγ (4)\n\nWhen we come to write down Einstein’s equations for a metric with polarised Gowdy symmetry we will use conformal coordinates, i.e coordinates for which the metric takes the form\n\n ds2=e2A(t,x)(−dt2+dx2)+R(t,x)eW(t,x)dy2+R(t,x)e−W(t,x)dz2 (5)\n\nThe model solution (1) may be written in conformal coordinates by making the transformation\n\n τ→t=∫τ0Bp1+γ−53(s)s−p1 ds (6)\n\nThen the metric (1) takes the form (5) with\n\n eA=τp1B23−p1 , eW=τp2−p3Bp3−p2 , R=τ1−p1B(p1+13) (7)\n\nand given implicitly as function of by the relation (6).\n\n## 3 The Einstein-perfect fluid equations\n\nWe will assume that spacetime is filled with a polytropic perfect fluid, so that the stress tensor takes the form\n\n Tαβ=(ρ+P)uαuβ+Pgαβ (8)\n\nwith and .\n\nWith this stress tensor the Einstein evolution equations for the metric (5) are\n\n Rtt−Rxx=Re2Aρ(2−γ) (9)\n Wtt−Wxx+R−1(RtWt−RxWx)=0 (10)\n Att−Axx−14R−2(R2t−R2x)+14(W2t−W2x)=−12γe2Aρ (11)\n\n R−1Rxx−14R−2(R2t+R2x)−R−1RtAt−R−1RxAx+14(W2t+W2x)\n =−e2Aρ(1+γ(v1)2) (12)\n −R−1Rtx+R−1(RtAx+AtRx)+12R−2RtRx−12WtWx=−e2Aγρv0v1 (13)\n\nwhere and hence .\n\nThe Euler equations take the following explicit form\n\n γv0(v0ρt+v1ρx)+(1−γ)ρt\n +γρ{v0(2v0t+Axv1)+v1(v1x−v1Ax+Axv0+Atv1)}\n +γρv0{v0At+v1x+v1Ax+R−1(v0Rt+v1Rx)}=0 (14)\n\nand\n\n γv1(v0ρt+v1ρx)+(1−γ)ρx+γρ{v0(v1t+v0Ax)}\n +γρ{v1(v0t+2v1x+v0(2At+R−1Rt)+v1(Ax+R−1Rx)}=0 (15)\n\n## 4 Inhomogeneous perturbations\n\nWe now seek solutions of (9)-(15) which depend on the maximum number of free analytic functions, namely four, and which approach the model forms (1)-(4) for each as .\n\nTo be specific we make the following ansatz\n\n A=p1logτ+(23−p1)logB+t~A (16)\n R=τ1−p1B(p1+13)(1+~R) (17)\n W=c(x)+(p2−p3)logτ+(p3−p2)logB+t~W (18)\n logρ=logμ+t(γ−1−p1)/(1−p1)+ϵϕ (19)\n v1=τγ−1−p1(G(x)+tϵ~ψ) (20)\n\nwhere\n\n B2−γ=α(x)+m2(x)τ2−γ (21)\n G(x)=−34m2γα(p1+13)/(2−γ)× (22)\n {−12(p2−p3)cx+αxα(3γ(1−p1)+2(3p1−1))3(2−γ)+(p1)x(2(1−p1)(2−γ)logα+1)} (23)\n\nand is determined by the relation (6).\n\nand are chosen as strictly 
positive analytic functions, is an analytic function and the Kasner exponent is subject to the restriction for some arbitrarily small constant . The analytic function is chosen to satisfy\n\n 0<ϵ(x)\n\nWe are looking for analytic solutions of the Einstein-perfect fluid equations for which and tend to zero as tends to zero. The choice of ensures that the momentum constraint is satisfied at .\n\nIt is convenient to write the field equations for and in first order form. To do this we introduce the following new variables:\n\n U=~Rt, Q=(t~A)t, X=~Rx, Y=t~Ax, V=(t~W)t, Z=t~Wx, S=t−1~R\n\nIn terms of these variables the evolution equations (9)-(11) take the form\n\n tXt=tUx (25)\n tYt=tQx (26)\n t~Rt=tU (27)\n t~At+~A−Q=0 (28)\n tZt=tVx (29)\n t~Wt+~W−V=0 (30)\n tSt+S−U=0 (31)\n tUt+2U=tXx+2U(1−((1−p1)B2−γ+m2(p1+13)τ2−γ)B−(p1+13)tτp1−1)\n +tτp1−1B−(p1+13){2X(τ1−p1Bp1+13)x+(1+~R)(τ1−p1Bp1+13)xx}\n +4m23(2−γ)(1+tS)tτ2p1−γB43−2p1−γ(exp(2t~A+tϵ+(γ−1−p1)(1−p1)−1ϕ)−1) (32)\n tQt−12U+12(p2−p3)(1−p1)V=tYx+t{(p1logτ)x+(logB23−p1)xx}\n +14(1+~R)−2tU2−12U(1−tτp1−1B−(p1+13)(1+tS)−1((1−p1)B2−γ+m2(p1+13)τ2−γ))\n +14tτ2(p1−1)B−2(p1+13)(τ1−p1Bp1+13X+(1+~R)(τ1−p1Bp1+13)x)2\n −14tV2+12V(p2−p3)((1−p1)−1−tτp1−1B53+p1−γ+m2tτ1+p1−γB−13−p1)\n +14t(cx+((p2−p3)logτ)x+((p3−p2)logB)x+Z)2\n −2m23γB43−2p1−γtτ2p1−γ(exp(2t~A+tϵ+(γ−1−p1)(1−p1)−1ϕ)−1) (33)\n tVt+V+(p2−p3)(1−p1)U=tZx+t{cxx+((p2−p3)logτ)xx+((p3−p2)logB)xx}\n +tτp1−1B−p1−13(1+~R)(cx+((p2−p3)logτ)x+((p3−p2)logB)x+Z)(τ1−p1Bp1+13X+(1+~R)(τ1−p1Bp1+13)x)\n +U((p2−p3)(1−p1)−(1+tS)−1((p2−p3)tτp1−1B53−p1−γ+m2(p3−p2)tτ1+p1−γB−13−p1+tV))\n +V(1−B−p1−13tτp1−1((1−p1)B2−γ+m2τ2−γ(p1+1/3))) (34)\n\nThe Euler equations , after some rearrangement, may be written\n\n (1+2τ2βψ2v0−2γv0τ2βψ21+γτ2βψ2)(t~ψt+ϵ~ψ)=\n t1−ϵψ(γv0(v0+τβ)1+γτ2βψ2−2)(τβψx+ψ(τβ)x)\n +t1−ϵτ−β(γ−1γ+γ(v0)2τ2βψ21+γτ2βψ2)((m2)x−γ(τx+Bx)+tϵ+β(1−p1)−1(ϕx+ϕ(β(1−p1)−1)xlogt))\n +t1−ϵv0ψ(γ((v0)2+τ2βψ2)1+γτ2βψ2−2)((2/3−p1)B−13−p1m2τ−β+Q)\n −t1−ϵτ−β(1+2τ2βψ2−γτβψv01+γτ2βψ2(3v0τβψ−τ2βψ2))((p1logτ)x+(logB23−p1)x+Y)\n +t1−ϵψ(γ(v0)31+γτ2βψ2−v0)((1+~R)−1U+m2(p1+1/3)B−p1−13τβ)\n +t1−ϵτγ−2ψ2(γ(v0)21+γτ2βψ2−1)B−p1−13(1+~R)−1(τ1−p1Bp1+13X+(1+~R)(τ1−p1Bp1+13)x)\n +t1−ϵτp1−1ψB53−p1−γ{β(1+2γv0τ2βψ21+γτ2βψ2−1+2τ2βψ2v0)\n +p1(γ(v0((v0)2+τ2βψ2)1+γτ2βψ2−1)+2(1−v0))+(1−p1)(γ((v0)21+γτ2βψ2−1)+1−v0)} (35)\n\nand\n\n (1+γτ2βψ2−2γ(v0)2τ2βψ21+2τ2βψ2)tϕt+(ϵ+β1−p1)ϕ=\n =γ{(ϵ+β(1−p1)−1)ϕ(1γ+2(v0)2τ2βψ21+2τ2βψ2−1+γτ2βψ2γ)\n +γt1+βp1−1−ϵτp1−1τ2βψ2(1−2(v0)21+2τ2βψ2)(B53−p1−γ+m2τ2−γB−13−p1)\n −(tϕx+tϕlogt(ϵ+β(1−p1)−1)x+t1+βp1−1−ϵ((m2)x−γ(τx+Bx)))v0τβψ\n ×(1+21+2τ2βψ2(γ−1γ−τ2βψ2))\n +t1+βp1−1−ϵ(4τ2βψ2v01+2τ2βψ2−v0−τβψ)((τβG)x+τβtϵ~ψx+(τβtϵ)x~ψ)\n +t1+βp1−1−ϵ(4(v0)2τ2βψ21+2τ2βψ2−1−2τ2βψ2)Q\n +t1+βp1−1−ϵτ2βψ2(4(v0)21+2τ2βψ2−2)(p1τp1−1B53−p1−γ+((2/3)−p1)B−13−p1m2τ−β)\n +t1+βp1−1−ϵ(τβψ(τβψ−v0)((p1logτ)x+(logB23−p1)x+Y)+(v0τβψ−(v0)2)1+~RU)\n +t1+βp1−1−ϵτ2βψ2(2(v0)21+2τ2βψ2−1)τp1−1B−p1−13((1−p1)B2−γ+m2(p1+1/3)τ2−γ)\n +t1+βp1−1−ϵτγ−2ψ(2v0τ2βψ21+2τ2βψ2−v0)B−p1−131+~R(τ1−p1Bp1+13X+(1+~R)(τ1−p1Bp1+13)x)⎫⎬⎭ (36)\n\n## 5 Existence and uniqueness of solutions\n\nA careful inspection of the field equations (25)-(36) shows that they may be written in the form\n\n t∂tu+N(x)u=tδH(t,x,u,ux) (37)\n\nwhere stands for is a strictly positive constant and is continuous in and analytic in its other arguments. 
The matrix has positive eigenvalues.\n\nIt follows by that (37) has a unique analytic solution with\n\n## 6 The constraints\n\nDefine constraint quantities by\n\n C0=R−1Rxx−14R−2(R2t+R2x)−R−1RtAt−R−1RxAx+14(W2t+W2x)\n +e2Aρ(1+γ(v1)2) (38)\n C1=−R−1Rtx+R−1(RtAx+AtRx)+12R−2RtRx−12WtWx+e2Aγρv0v1 (39)\n\nIf the evolution equations (25)-(36) are satisfied then a calculation shows that the following hold\n\n ∂tC0=−∂xC1−RtRC0−RxRC1 (40)\n ∂tC1=−∂xC0−RtRC1−RtRC0 (41)\n\nOne also calculates that the quantities tend to zero as tends to zero. These quantities satisfy the following\n\n t∂t~C0+(tRtR−1)~C0=−t∂x~C1−tRxR~C1 (42)\n t∂t~C1+(tRtR−1)~C1=−t∂x~C0−tRxR~C0 (43)\n\nNow is and for some . It thus follows that and are identically zero and thus the constraints are satisfied.\n\nSummarising the results of sections (4)-(6) we have proved the following\n\nTheorem 6.1 Given two strictly positive analytic functions , an analytic function and an analytic function satisfying\n\n −13≤p1(x)<γ−1−k (44)\n\nfor some small , there exists a unique solution of the Einstein equations coupled to a -law perfect fluid on satisfying\n\n ds2=e2A(t,x)(−dt2+dx2)+R(t,x)eW(t,x)(dy)2+R(t,x)e−W(t,x)(dz)2\n A=p1logt+logB23−p1+O(t)\n R=τ1−p1Bp1+13(1+O(t))\n W=c(x)+(p2−p3)logt+(p3−p2)logB+O(t)\n logρ=log4m23τγBγ+O(t(γ−1−p1)(1−p1)−1)\n v1=O(t(γ−1−p1)(1−p1)−1)\n\nwhere is implicitly given by\n\n t=∫τ0Bp1+γ−53(s)s−p1 ds."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.874589,"math_prob":0.9996031,"size":4659,"snap":"2022-40-2023-06","text_gpt3_token_len":1088,"char_repetition_ratio":0.13834587,"word_repetition_ratio":0.026143791,"special_character_ratio":0.25241467,"punctuation_ratio":0.055622734,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976473,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-01T16:08:00Z\",\"WARC-Record-ID\":\"<urn:uuid:63162cf6-bdb6-4681-9dc1-d5e210d7e145>\",\"Content-Length\":\"925843\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6db5e8e0-0530-42a5-9ff0-b7fb9ced2f19>\",\"WARC-Concurrent-To\":\"<urn:uuid:80bef897-013c-47c6-835c-ba34ca4e0e50>\",\"WARC-IP-Address\":\"172.67.158.169\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/gr-qc/0005086/\",\"WARC-Payload-Digest\":\"sha1:X3NHTYMLD4CRL4JXBYOW7N3NMT6DTQLY\",\"WARC-Block-Digest\":\"sha1:Z62AB4BE24S326HDSA75CITGHB6GV3DV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499946.80_warc_CC-MAIN-20230201144459-20230201174459-00328.warc.gz\"}"} |
https://www.colorhexa.com/5b698e | [
"# #5b698e Color Information\n\nIn a RGB color space, hex #5b698e is composed of 35.7% red, 41.2% green and 55.7% blue. Whereas in a CMYK color space, it is composed of 35.9% cyan, 26.1% magenta, 0% yellow and 44.3% black. It has a hue angle of 223.5 degrees, a saturation of 21.9% and a lightness of 45.7%. #5b698e color hex could be obtained by blending #b6d2ff with #00001d. Closest websafe color is: #666699.\n\n• R 36\n• G 41\n• B 56\nRGB color chart\n• C 36\n• M 26\n• Y 0\n• K 44\nCMYK color chart\n\n#5b698e color description : Mostly desaturated dark blue.\n\n# #5b698e Color Conversion\n\nThe hexadecimal color #5b698e has RGB values of R:91, G:105, B:142 and CMYK values of C:0.36, M:0.26, Y:0, K:0.44. Its decimal value is 5990798.\n\nHex triplet RGB Decimal 5b698e `#5b698e` 91, 105, 142 `rgb(91,105,142)` 35.7, 41.2, 55.7 `rgb(35.7%,41.2%,55.7%)` 36, 26, 0, 44 223.5°, 21.9, 45.7 `hsl(223.5,21.9%,45.7%)` 223.5°, 35.9, 55.7 666699 `#666699`\nCIE-LAB 44.632, 4.261, -22.03 14.247, 14.28, 27.595 0.254, 0.254, 14.28 44.632, 22.438, 280.948 44.632, -8.546, -32.143 37.789, 1.169, -16.845 01011011, 01101001, 10001110\n\n# Color Schemes with #5b698e\n\n• #5b698e\n``#5b698e` `rgb(91,105,142)``\n• #8e805b\n``#8e805b` `rgb(142,128,91)``\nComplementary Color\n• #5b838e\n``#5b838e` `rgb(91,131,142)``\n• #5b698e\n``#5b698e` `rgb(91,105,142)``\n• #675b8e\n``#675b8e` `rgb(103,91,142)``\nAnalogous Color\n• #838e5b\n``#838e5b` `rgb(131,142,91)``\n• #5b698e\n``#5b698e` `rgb(91,105,142)``\n• #8e675b\n``#8e675b` `rgb(142,103,91)``\nSplit Complementary Color\n• #698e5b\n``#698e5b` `rgb(105,142,91)``\n• #5b698e\n``#5b698e` `rgb(91,105,142)``\n• #8e5b69\n``#8e5b69` `rgb(142,91,105)``\n• #5b8e80\n``#5b8e80` `rgb(91,142,128)``\n• #5b698e\n``#5b698e` `rgb(91,105,142)``\n• #8e5b69\n``#8e5b69` `rgb(142,91,105)``\n• #8e805b\n``#8e805b` `rgb(142,128,91)``\n• #3d475f\n``#3d475f` `rgb(61,71,95)``\n• #47526f\n``#47526f` `rgb(71,82,111)``\n• #515e7e\n``#515e7e` `rgb(81,94,126)``\n• #5b698e\n``#5b698e` `rgb(91,105,142)``\n• #66759d\n``#66759d` `rgb(102,117,157)``\n• #7583a7\n``#7583a7` `rgb(117,131,167)``\n• #8591b1\n``#8591b1` `rgb(133,145,177)``\nMonochromatic Color\n\n# Alternatives to #5b698e\n\nBelow, you can see some colors close to #5b698e. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #5b768e\n``#5b768e` `rgb(91,118,142)``\n• #5b728e\n``#5b728e` `rgb(91,114,142)``\n• #5b6d8e\n``#5b6d8e` `rgb(91,109,142)``\n• #5b698e\n``#5b698e` `rgb(91,105,142)``\n• #5b658e\n``#5b658e` `rgb(91,101,142)``\n• #5b618e\n``#5b618e` `rgb(91,97,142)``\n• #5b5c8e\n``#5b5c8e` `rgb(91,92,142)``\nSimilar Colors\n\n# #5b698e Preview\n\nThis text has a font color of #5b698e.\n\n``<span style=\"color:#5b698e;\">Text here</span>``\n#5b698e background color\n\nThis paragraph has a background color of #5b698e.\n\n``<p style=\"background-color:#5b698e;\">Content here</p>``\n#5b698e border color\n\nThis element has a border color of #5b698e.\n\n``<div style=\"border:1px solid #5b698e;\">Content here</div>``\nCSS codes\n``.text {color:#5b698e;}``\n``.background {background-color:#5b698e;}``\n``.border {border:1px solid #5b698e;}``\n\n# Shades and Tints of #5b698e\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #07080b is the darkest color, while #fefefe is the lightest one.\n\n• #07080b\n``#07080b` `rgb(7,8,11)``\n• #0e1116\n``#0e1116` `rgb(14,17,22)``\n• #161922\n``#161922` `rgb(22,25,34)``\n• #1e222e\n``#1e222e` `rgb(30,34,46)``\n• #252b3a\n``#252b3a` `rgb(37,43,58)``\n• #2d3446\n``#2d3446` `rgb(45,52,70)``\n• #353d52\n``#353d52` `rgb(53,61,82)``\n• #3c465e\n``#3c465e` `rgb(60,70,94)``\n• #444e6a\n``#444e6a` `rgb(68,78,106)``\n• #4c5776\n``#4c5776` `rgb(76,87,118)``\n• #536082\n``#536082` `rgb(83,96,130)``\n• #5b698e\n``#5b698e` `rgb(91,105,142)``\n• #63729a\n``#63729a` `rgb(99,114,154)``\n• #6e7ca2\n``#6e7ca2` `rgb(110,124,162)``\n• #7a87aa\n``#7a87aa` `rgb(122,135,170)``\n• #8692b1\n``#8692b1` `rgb(134,146,177)``\n• #929db9\n``#929db9` `rgb(146,157,185)``\n• #9ea7c1\n``#9ea7c1` `rgb(158,167,193)``\n• #aab2c8\n``#aab2c8` `rgb(170,178,200)``\n• #b6bdd0\n``#b6bdd0` `rgb(182,189,208)``\n• #c2c8d8\n``#c2c8d8` `rgb(194,200,216)``\n• #ced3df\n``#ced3df` `rgb(206,211,223)``\n``#dadde7` `rgb(218,221,231)``\n• #e6e8ef\n``#e6e8ef` `rgb(230,232,239)``\n• #f2f3f6\n``#f2f3f6` `rgb(242,243,246)``\n• #fefefe\n``#fefefe` `rgb(254,254,254)``\nTint Color Variation\n\n# Tones of #5b698e\n\nA tone is produced by adding gray to any pure hue. In this case, #6d717c is the less saturated color, while #0141e8 is the most saturated one.\n\n• #6d717c\n``#6d717c` `rgb(109,113,124)``\n• #646d85\n``#646d85` `rgb(100,109,133)``\n• #5b698e\n``#5b698e` `rgb(91,105,142)``\n• #526597\n``#526597` `rgb(82,101,151)``\n• #4961a0\n``#4961a0` `rgb(73,97,160)``\n• #405da9\n``#405da9` `rgb(64,93,169)``\n• #3759b2\n``#3759b2` `rgb(55,89,178)``\n• #2e55bb\n``#2e55bb` `rgb(46,85,187)``\n• #2551c4\n``#2551c4` `rgb(37,81,196)``\n• #1c4dcd\n``#1c4dcd` `rgb(28,77,205)``\n• #1349d6\n``#1349d6` `rgb(19,73,214)``\n• #0a45df\n``#0a45df` `rgb(10,69,223)``\n• #0141e8\n``#0141e8` `rgb(1,65,232)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #5b698e is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.50437844,"math_prob":0.6499177,"size":3719,"snap":"2021-43-2021-49","text_gpt3_token_len":1683,"char_repetition_ratio":0.12113055,"word_repetition_ratio":0.011070111,"special_character_ratio":0.5574079,"punctuation_ratio":0.23756906,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9895715,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-06T20:54:00Z\",\"WARC-Record-ID\":\"<urn:uuid:71aa1e59-95cd-435e-b3eb-8d079ac2826a>\",\"Content-Length\":\"36207\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:267e4882-bcff-456b-8d27-9caff7543672>\",\"WARC-Concurrent-To\":\"<urn:uuid:7a79c3ef-94d2-485a-a177-80c6efeb2d5f>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/5b698e\",\"WARC-Payload-Digest\":\"sha1:B3BLEYQZHOZCFSRISZRJGYUECLTQUMKI\",\"WARC-Block-Digest\":\"sha1:4OEXYKZGOQD6BDIJMRSKIUK73LKJVQKU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363312.79_warc_CC-MAIN-20211206194128-20211206224128-00023.warc.gz\"}"} |
https://www.onlinemathlearning.com/area-perimeter-4md3.html | [
"",
null,
"# Area & Perimeter of Rectangles (Grade 4)\n\nRelated Topics:\nLesson Plans and Worksheets for Grade 4\nLesson Plans and Worksheets for all Grades\n\nVideos, examples, solutions, and lessons to help Grade 4 students learn to apply the area and perimeter formulas for rectangles in real world and mathematical problems.\n\nFor example, find the width of a rectangular room given the area of the flooring and the length, by viewing the area formula as a multiplication equation with an unknown factor\n.\n\nCommon Core: 4.MD.3\n\n### Suggested Learning Targets\n\n• I can explain the area and perimeter formula.\n• I can use the formulas to solve problems.\nFinding Perimeter & Unknown Measure (4.MD.3) Example:\n1. Mr. Myers is planning to make a flower garden outside our classroom window. He wants the garden to be 16 feet long and 3 feet wide. If he decides to fence in this flower garden, how much fencing would Mr. Myers need?\n2. An outdoor deck is 7 feet wide. The perimeter of the deck is 64 feet. What is the length of the deck?\n3. Fred has 30 yards of border for his rectangular bulletin board. The bulletin board is 6 yards long. How wide is Fred's bulletin board?\nArea Formula for Square or Rectangle\n\nPerimeter and Area of Squares and Rectangles\n• Definitions of perimeter and area.\n• How to find perimeter and area of squares and rectangles.\n• Area formulas for squares and rectangles.\n• Several examples narrated and explained.\n• Each polygon or figure shown with square units. Missing Lengths Given Area or Perimeter - 4.MD.3\nIn this video, students will learn to solve real world problems using the formulas for area and perimeter. This video relates to Common Core Standard 4.MD.3. Students learn to solve for a missing side given the perimeter or area.\n\nTry the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations.",
null,
"",
null,
""
] | [
null,
"https://www.onlinemathlearning.com/objects/default_image.gif",
null,
"https://www.onlinemathlearning.com/objects/default_image.gif",
null,
"https://www.onlinemathlearning.com/objects/default_image.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85775566,"math_prob":0.92255163,"size":2083,"snap":"2020-45-2020-50","text_gpt3_token_len":458,"char_repetition_ratio":0.12169312,"word_repetition_ratio":0.0056497175,"special_character_ratio":0.21363418,"punctuation_ratio":0.12224939,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9898071,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-28T23:34:15Z\",\"WARC-Record-ID\":\"<urn:uuid:34eb4d32-c3e0-4195-9215-32916ab519ca>\",\"Content-Length\":\"44516\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:227e07a3-881a-482a-ae0b-a1413d044231>\",\"WARC-Concurrent-To\":\"<urn:uuid:e84e8650-7b3b-49e9-a898-cfff09e0d2b9>\",\"WARC-IP-Address\":\"173.247.219.45\",\"WARC-Target-URI\":\"https://www.onlinemathlearning.com/area-perimeter-4md3.html\",\"WARC-Payload-Digest\":\"sha1:UXIHUHOAO6WNPXPNOIQQCAYX23PMCMQJ\",\"WARC-Block-Digest\":\"sha1:NDKJAC6ZKYAA35DHBBDV7Z4Z4BPDQPLT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141195929.39_warc_CC-MAIN-20201128214643-20201129004643-00461.warc.gz\"}"} |
https://demo.formulasearchengine.com/wiki/Cohomology | [
"# Cohomology\n\nTemplate:More footnotes In mathematics, specifically in homology theory and algebraic topology, cohomology is a general term for a sequence of abelian groups defined from a co-chain complex. That is, cohomology is defined as the abstract study of cochains, cocycles, and coboundaries. Cohomology can be viewed as a method of assigning algebraic invariants to a topological space that has a more refined algebraic structure than does homology. Cohomology arises from the algebraic dualization of the construction of homology. In less abstract language, cochains in the fundamental sense should assign 'quantities' to the chains of homology theory.\n\nFrom its beginning in topology, this idea became a dominant method in the mathematics of the second half of the twentieth century; from the initial idea of homology as a topologically invariant relation on chains, the range of applications of homology and cohomology theories has spread out over geometry and abstract algebra. The terminology tends to mask the fact that in many applications cohomology, a contravariant theory, is more natural than homology. At a basic level this has to do with functions and pullbacks in geometric situations: given spaces X and Y, and some kind of function F on Y, for any mapping f : XY composition with f gives rise to a function F o f on X. Cohomology groups often also have a natural product, the cup product, which gives them a ring structure. Because of this feature, cohomology is a stronger invariant than homology, as it can differentiate between certain algebraic objects that homology cannot.\n\n## Definition\n\nIn algebraic topology, the cohomology groups for spaces can be defined as follows (see Hatcher). Given a topological space X, consider the chain complex\n\n$\\cdots \\rightarrow C_{n}{\\stackrel {\\partial _{n}}{\\rightarrow }}\\ C_{n-1}\\rightarrow \\cdots$",
null,
"as in the definition of singular homology (or simplicial homology). Here, the Cn are the free abelian groups generated by formal linear combinations of the singular n-simplices in X and ∂n is the nth boundary operator.\n\nNow replace each Cn by its dual space C*n−1 = Hom(Cn, G), and ∂n by its transpose\n\n$\\delta ^{n}:C_{n-1}^{*}\\rightarrow C_{n}^{*}$",
null,
"to obtain the cochain complex\n\n$\\cdots \\leftarrow C_{n}^{*}{\\stackrel {\\delta ^{n}}{\\leftarrow }}\\ C_{n-1}^{*}\\leftarrow \\cdots$",
null,
"Then the nth cohomology group with coefficients in G is defined to be Ker(δn+1)/Im(δn) and denoted by Hn(C; G). The elements of C*n are called singular n-cochains with coefficients in G , and the δn are referred to as the coboundary operators. Elements of Ker(δn+1), Im(δn) are called cocycles and coboundaries, respectively.\n\nNote that the above definition can be adapted for general chain complexes, and not just the complexes used in singular homology. The study of general cohomology groups was a major motivation for the development of homological algebra, and has since found applications in a wide variety of settings (see below).\n\nGiven an element φ of C*n-1, it follows from the properties of the transpose that $\\delta ^{n}(\\varphi )=\\varphi \\circ \\partial _{n}$",
null,
"as elements of C*n. We can use this fact to relate the cohomology and homology groups as follows. Every element φ of Ker(δn) has a kernel containing the image of ∂n. So we can restrict φ to Ker(∂n−1) and take the quotient by the image of ∂n to obtain an element h(φ) in Hom(Hn, G). If φ is also contained in the image of δn−1, then h(φ) is zero. So we can take the quotient by Ker(δn), and to obtain a homomorphism\n\n$h:H^{n}(C;G)\\rightarrow {\\text{Hom}}(H_{n}(C),G).$",
null,
"It can be shown that this map h is surjective, and that we have a short split exact sequence\n\n$0\\rightarrow \\ker h\\rightarrow H^{n}(C;G){\\stackrel {h}{\\rightarrow }}{\\text{Hom}}(H_{n}(C),G)\\rightarrow 0.$",
null,
"## History\n\nAlthough cohomology is fundamental to modern algebraic topology, its importance was not seen for some 40 years after the development of homology. The concept of dual cell structure, which Henri Poincaré used in his proof of his Poincaré duality theorem, contained the germ of the idea of cohomology, but this was not seen until later.\n\nThere were various precursors to cohomology. In the mid-1920s, J. W. Alexander and Solomon Lefschetz founded the intersection theory of cycles on manifolds. On an n-dimensional manifold M, a p-cycle and a q-cycle with nonempty intersection will, if in general position, have intersection a (p + q − n)-cycle. This enables us to define a multiplication of homology classes\n\nHp(M) × Hq(M) → Hp+qn(M).\n\nAlexander had by 1930 defined a first cochain notion, based on a p-cochain on a space X having relevance to the small neighborhoods of the diagonal in Xp+1.\n\nIn 1931, Georges de Rham related homology and exterior differential forms, proving De Rham's theorem. This result is now understood to be more naturally interpreted in terms of cohomology.\n\nIn 1934, Lev Pontryagin proved the Pontryagin duality theorem; a result on topological groups. This (in rather special cases) provided an interpretation of Poincaré duality and Alexander duality in terms of group characters.\n\nAt a 1935 conference in Moscow, Andrey Kolmogorov and Alexander both introduced cohomology and tried to construct a cohomology product structure.\n\nIn 1936 Norman Steenrod published a paper constructing Čech cohomology by dualizing Čech homology.\n\nFrom 1936 to 1938, Hassler Whitney and Eduard Čech developed the cup product (making cohomology into a graded ring) and cap product, and realized that Poincaré duality can be stated in terms of the cap product. Their theory was still limited to finite cell complexes.\n\nIn 1944, Samuel Eilenberg overcame the technical limitations, and gave the modern definition of singular homology and cohomology.\n\nIn 1945, Eilenberg and Steenrod stated the axioms defining a homology or cohomology theory. In their 1952 book, Foundations of Algebraic Topology, they proved that the existing homology and cohomology theories did indeed satisfy their axioms.\n\nIn 1948 Edwin Spanier, building on work of Alexander and Kolmogorov, developed Alexander–Spanier cohomology.\n\n## Cohomology theories\n\n### Eilenberg–Steenrod theories\n\nA cohomology theory is a family of contravariant functors from the category of pairs of topological spaces and continuous functions (or some subcategory thereof such as the category of CW complexes) to the category of Abelian groups and group homomorphisms that satisfies the Eilenberg–Steenrod axioms.\n\nSome cohomology theories in this sense are:\n\n{{safesubst:#invoke:anchor|main}}\n\n### Axioms and generalized cohomology theories\n\n{{#invoke:see also|seealso}} There are various ways to define cohomology groups (for example singular cohomology, Čech cohomology, Alexander–Spanier cohomology or Sheaf cohomology). These give different answers for some exotic spaces, but there is a large class of spaces on which they all agree. This is most easily understood axiomatically: there is a list of properties known as the Eilenberg–Steenrod axioms, and any two constructions that share those properties will agree at least on all finite CW complexes, for example.\n\nOne of the axioms is the so-called dimension axiom: if P is a single point, then Hn(P) = 0 for all n ≠ 0, and H0(P) = Z. 
We can generalise slightly by allowing an arbitrary abelian group A in dimension zero, but still insisting that the groups in nonzero dimension are trivial. It turns out that there is again an essentially unique system of groups satisfying these axioms, which are denoted by $H_{*}(X;A)$",
null,
". In the common case where each group Hk(X) is isomorphic to Zrk for some rk in N, we just have $H_{k}(X;A)=A^{r_{k}}$",
null,
". In general, the relationship between Hk(X) and $H_{k}(X;A)$",
null,
"is only a little more complicated, and is again controlled by the Universal coefficient theorem.\n\nMore significantly, we can drop the dimension axiom altogether. There are a number of different ways to define groups satisfying all the other axioms, including the following:\n\nThese are called generalised homology theories; they carry much richer information than ordinary homology, but are often harder to compute. Their study is tightly linked (via the Brown representability theorem) to stable homotopy.\n\nA cohomology theory E is said to be multiplicative if $E^{*}(X)$",
null,
"is a graded ring.\n\n### Other cohomology theories\n\nTheories in a broader sense of cohomology include:"
] | [
null,
"https://demo.formulasearchengine.com/index.php",
null,
"https://demo.formulasearchengine.com/index.php",
null,
"https://demo.formulasearchengine.com/index.php",
null,
"https://demo.formulasearchengine.com/index.php",
null,
"https://demo.formulasearchengine.com/index.php",
null,
"https://demo.formulasearchengine.com/index.php",
null,
"https://demo.formulasearchengine.com/index.php",
null,
"https://demo.formulasearchengine.com/index.php",
null,
"https://demo.formulasearchengine.com/index.php",
null,
"https://demo.formulasearchengine.com/index.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89422756,"math_prob":0.9269235,"size":10126,"snap":"2020-45-2020-50","text_gpt3_token_len":2565,"char_repetition_ratio":0.1641968,"word_repetition_ratio":0.0012861736,"special_character_ratio":0.21666996,"punctuation_ratio":0.1169045,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99720395,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T16:20:47Z\",\"WARC-Record-ID\":\"<urn:uuid:a9cd2249-27aa-4bf3-bd43-3829db0501e7>\",\"Content-Length\":\"120099\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c7dad296-37e1-4cf3-b832-21e64080712e>\",\"WARC-Concurrent-To\":\"<urn:uuid:4bed199a-0f93-43c1-831f-f98606133a63>\",\"WARC-IP-Address\":\"132.195.228.228\",\"WARC-Target-URI\":\"https://demo.formulasearchengine.com/wiki/Cohomology\",\"WARC-Payload-Digest\":\"sha1:YCN6FTQA2TLN7LPX5LENZRCFMRAETBMQ\",\"WARC-Block-Digest\":\"sha1:VI4U5TXJGX3QYDATL542KMIWT3KW67VB\",\"WARC-Truncated\":\"disconnect\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107904834.82_warc_CC-MAIN-20201029154446-20201029184446-00082.warc.gz\"}"} |
http://codeforces.com/problemset/problem/1144/E | [
"E. Median String\ntime limit per test\n2 seconds\nmemory limit per test\n256 megabytes\ninput\nstandard input\noutput\nstandard output\n\nYou are given two strings $s$ and $t$, both consisting of exactly $k$ lowercase Latin letters, $s$ is lexicographically less than $t$.\n\nLet's consider list of all strings consisting of exactly $k$ lowercase Latin letters, lexicographically not less than $s$ and not greater than $t$ (including $s$ and $t$) in lexicographical order. For example, for $k=2$, $s=$\"az\" and $t=$\"bf\" the list will be [\"az\", \"ba\", \"bb\", \"bc\", \"bd\", \"be\", \"bf\"].\n\nYour task is to print the median (the middle element) of this list. For the example above this will be \"bc\".\n\nIt is guaranteed that there is an odd number of strings lexicographically not less than $s$ and not greater than $t$.\n\nInput\n\nThe first line of the input contains one integer $k$ ($1 \\le k \\le 2 \\cdot 10^5$) — the length of strings.\n\nThe second line of the input contains one string $s$ consisting of exactly $k$ lowercase Latin letters.\n\nThe third line of the input contains one string $t$ consisting of exactly $k$ lowercase Latin letters.\n\nIt is guaranteed that $s$ is lexicographically less than $t$.\n\nIt is guaranteed that there is an odd number of strings lexicographically not less than $s$ and not greater than $t$.\n\nOutput\n\nPrint one string consisting exactly of $k$ lowercase Latin letters — the median (the middle element) of list of strings of length $k$ lexicographically not less than $s$ and not greater than $t$.\n\nExamples\nInput\n2\naz\nbf\n\nOutput\nbc\n\nInput\n5\nafogk\nasdji\n\nOutput\nalvuw\n\nInput\n6\nnijfvj\ntvqhwp\n\nOutput\nqoztvz"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86700237,"math_prob":0.9977703,"size":1573,"snap":"2020-24-2020-29","text_gpt3_token_len":410,"char_repetition_ratio":0.14850223,"word_repetition_ratio":0.29554656,"special_character_ratio":0.32231405,"punctuation_ratio":0.0866426,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993449,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-28T15:44:04Z\",\"WARC-Record-ID\":\"<urn:uuid:485bea23-3fad-4027-95f3-1d3d53ff43dd>\",\"Content-Length\":\"52139\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0d471701-69f4-4842-ac75-314ad7475bc3>\",\"WARC-Concurrent-To\":\"<urn:uuid:655f6353-1fcd-4f78-98c7-b497b04fbc56>\",\"WARC-IP-Address\":\"81.27.240.126\",\"WARC-Target-URI\":\"http://codeforces.com/problemset/problem/1144/E\",\"WARC-Payload-Digest\":\"sha1:NER273JHJMHERMMJG5P6ZM2MYAKVDX6I\",\"WARC-Block-Digest\":\"sha1:DLDRBLCQ5BQTRUSXXALUTTDMMWOXUIDE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347399820.9_warc_CC-MAIN-20200528135528-20200528165528-00564.warc.gz\"}"} |
https://www.storyofmathematics.com/factors/factors-of-388/ | [
"",
null,
"# Factors of 388: Prime Factorization, Methods, and Examples\n\nThe factors of 388 are 1, 2, 4, 97, 194, and 388. These numbers produce a zero remainder when divided by 388. 388 is an even composite number having 6 factors. It has both positive and negative integers as factors.",
null,
"Further information is provided in the article.\n\n### Factors of 388\n\nHere are the factors of number 388.\n\nFactors of 388: 1, 2, 4, 97, 194, 388\n\n### Negative Factors of 388\n\nThe negative factors of 388 are similar to its positive aspects, just with a negative sign.\n\nNegative Factors of 388: -1, -2, -4, -97, -194, -388\n\n### Prime Factorization of 388\n\nThe prime factorization of 388 is the way of expressing its prime factors in the product form.\n\nPrime Factorization: 22 × 97\n\nIn this article, we will learn about the factors of 388 and how to find them using various techniques such as upside-down division, prime factorization, and factor tree.\n\n## What Are the Factors of 388?\n\nThe factors of 388 are 1, 2, 4, 97, 194, 388. These numbers are the factors as they do not leave any remainder when divided by 388.\n\nThe factors of 388 are classified as prime numbers and composite numbers. The prime factors of the number 388 can be determined using the prime factorization technique.\n\n## How To Find the Factors of 388?\n\nYou can find the factors of 388 by using the rules of divisibility. The divisibility rule states that any number, when divided by any other natural number, is said to be divisible by the number if the quotient is the whole number and the resulting remainder is zero.\n\nTo find the factors of 388, create a list containing the numbers that are exactly divisible by 388 with zero remainders. One important thing to note is that 1 and 388 are the 388’s factors as every natural number has 1 and the number itself as its factor.\n\n1 is also called the universal factor of every number. The factors of 388 are determined as follows:\n\n$\\dfrac{388}{1} = 388$\n\n$\\dfrac{388}{2} = 194$\n\n$\\dfrac{388}{4} = 97$\n\n$\\dfrac{388}{388} = 1$\n\nTherefore, 1, 2, 4, 97, 194, 388 are the factors of 388.\n\n### Total Number of Factors of 388\n\nFor 388, there are 6 positive factors and 6 negative ones. So in total, there are 12 factors of 388.\n\nTo find the total number of factors of the given number, follow the procedure mentioned below:\n\n1. Find the factorization/prime factorization of the given number.\n2. Demonstrate the prime factorization of the number in the form of exponent form.\n3. Add 1 to each of the exponents of the prime factor.\n4. Now, multiply the resulting exponents together. This obtained product is equivalent to the total number of factors of the given number.\n\nBy following this procedure, the total number of factors of X is given as:\n\nFactorization of 388 is 1, 2, 4, 97, 194, 388.\n\nThe exponent of 1, 2, 4, 97, 194, 388 is 1.\n\nAdding 1 to each and multiplying them together results in m.\n\nTherefore, the total number of factors of 388 is 12. 6 is positive, and 6 factors are negative.\n\n### Important Notes\n\nHere are some essential points that must be considered while finding the factors of any given number:\n\n• The factor of any given number must be a whole number.\n• The factors of the number cannot be in the form of decimals or fractions.\n• Factors can be positive as well as negative.\n• Negative factors are the additive inverse of the positive factors of a given number.\n• The factor of a number cannot be greater than that number.\n• Every even number has 2 as its prime factor, the smallest prime factor.\n\n## Factors of 388 by Prime Factorization\n\nThe number 388 is a composite number. Prime factorization is a valuable technique for finding the number’s prime factors and expressing the number as the product of its prime factors.",
null,
"Before finding the factors of 388 using prime factorization, let us find out what prime factors are. Prime factors are the factors of any given number that are only divisible by 1 and themselves.\n\nTo start the prime factorization of 388, start dividing by its most minor prime factor. First, determine that the given number is either even or odd. If it is an even number, then 2 will be the smallest prime factor.\n\nContinue splitting the quotient obtained until 1 is received as the quotient. The prime factorization of 388 can be expressed as:\n\n388 = 22 × 97\n\n## Factors of 388 in Pairs\n\nThe factor pairs are the duplet of numbers that, when multiplied together, result in the factorized number. Factor pairs can be more than one depending on the total number of factors given.",
null,
"For 388, the factor pairs can be found as:\n\n1 x 388 = 388\n\n2 x 194 = 388\n\n4 x 97 = 388\n\nThe possible factor pairs of 388 are given as (1, 388), (2, 194), and (4, 97).\n\nAll these numbers in pairs, when multiplied, give 388 as the product.\n\nThe negative factor pairs of 388 are given as:\n\n-1 x -388 = 388\n\n-2 x -194 = 388\n\n-4 x -97 = 388\n\nIt is important to note that in negative factor pairs, the minus sign has been multiplied by the minus sign, due to which the resulting product is the original positive number. Therefore, -1, -2, -4, -97, -194, -388 are called negative factors of 388.\n\nThe list of all the factors of 388, including positive as well as negative numbers, is given below.\n\nFactor list of 388: 1, -1, 2, -2, 4, -4, 97, -97, 194, -194, 388, and -388\n\n## Factors of 388 Solved Examples\n\nTo better understand the concept of factors, let’s solve some examples.\n\n### Example 1\n\nHow many factors of 388 are there?\n\n### Solution\n\nThe total number of Factors of 388 is 12.\n\nFactors of 388 are 1, 2, 4, 97, 194, and 388.\n\n### Example 2\n\nFind the factors of 388 using prime factorization.\n\n### Solution\n\nThe prime factorization of 388 is given as:\n\n388 $\\div$ 2 = 194\n\n194 $\\div$ 2 = 97\n\n97 $\\div$ 97 = 1\n\nSo the prime factorization of 388 can be written as:\n\n22 × 97 = 388"
] | [
null,
"https://www.storyofmathematics.com/wp-content/uploads/2022/02/som-header1.jpg",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%201200%20630%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%201800%20900%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%201800%20900%22%3E%3C/svg%3E",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90164655,"math_prob":0.99565583,"size":5688,"snap":"2023-40-2023-50","text_gpt3_token_len":1494,"char_repetition_ratio":0.2489444,"word_repetition_ratio":0.06621881,"special_character_ratio":0.3043249,"punctuation_ratio":0.13939899,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998771,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T08:35:35Z\",\"WARC-Record-ID\":\"<urn:uuid:031d6e6a-929a-4e40-bbc2-742dcef5132a>\",\"Content-Length\":\"192278\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3e0b65f2-5054-4ec8-9912-ea8dfc22d5b3>\",\"WARC-Concurrent-To\":\"<urn:uuid:4c9f7602-8316-4e6a-8522-09d4ca2ce54a>\",\"WARC-IP-Address\":\"172.67.190.47\",\"WARC-Target-URI\":\"https://www.storyofmathematics.com/factors/factors-of-388/\",\"WARC-Payload-Digest\":\"sha1:IV47QWNO4NIQIGTLNABWSXKUAE7SRPYZ\",\"WARC-Block-Digest\":\"sha1:34CJ7EA7CRLAVHUYVHCPGRMAFHU3CZQM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511055.59_warc_CC-MAIN-20231003060619-20231003090619-00875.warc.gz\"}"} |
https://math.stackexchange.com/questions/4316470/what-is-the-difference-between-formula-and-function | [
"# What is the difference between 'formula' and 'function'?\n\nI hope this question fits in this site. I was asked by my student when I was explaining the concept of relation and function and after thinking, I still can't differentiate those two concepts (formula and function). I've looked for the answers from other sites and I have read these:\n\nThe first link above tells us\n\n'The difference between a formula and function is that a formula is defined as the statement used for the calculation [a]. These formulas could be simple or complex and always start with equal to operator [b]. While function is defined as the code that is designed for the calculations and is used inside the formula [c].'\n\nWe know a function can be used to calculate to find the result of the calculation from the range with the given number taken from the domain [a]. For instance, the exponential function\n\n$$P(t) = P_0e^{kt}$$\n\ncan be used to calculate the population in the region X.\n\nA function also has the equal sign, because if it doesn't, I'd say it's an expression [b] & [c]. And I don't know about the relevancy of the second link since the context is used in Excel. But it says that\n\nFormula(s) is/are nothing but doing some arithmetic operation(s) or calculation(s).\n\nDoes that mean these are formulas?:\n\n$$2+2=4\\quad 5-3=2\\quad 2(3+5)/4=4$$\n\nHow to tell to my students about their difference? Are the exactly the same concept? I have a class tomorrow and I've promised to answer their question that day.\n\n• It is common to use a formula/expression to define a function, but not all functions have a formula that describe them. (In fact, by just about any halfway principled way of comparing, most functions don't). Nov 26, 2021 at 2:41\n• Almost anything is a formula. You pick a language and its words are called formulas. So, yes $2+2=5$, $\\rightarrow\\text{car}$, $\\forall x>0, \\exists y$ can be formulas in some language. The concept of function is more concrete, but even it has two inequivalent definitions that are often used interchangeably in mathematics. I think they mention both here.\n– plop\nNov 26, 2021 at 2:43\n• See Relation for the definition of binary relation, and function is a special type of binary relation as discussed. I believe 'formula' is not something fundamental, but we commonly use it in English to refer symbolic expressions. Nov 26, 2021 at 2:55\n• How could I forget to give $H_2O$ as an example of formula?\n– plop\nNov 26, 2021 at 3:00\n• The Math Educators StackExchange may be a good place for this question.\n– Blue\nNov 26, 2021 at 3:28\n\nIn my estimation, you are asking two different questions here. The one about what might tell your students is easier to answer, but there are lots of different answers that are reasonable. Here is what I would say to, e.g., a college student in any course before linear algebra:\n\nA function is a machine that takes in numbers and spits out other numbers. A formula is a mathematical expression that I can use to calculate. Functions are more general objects than formulas, because not every function can be written as a formula. My favorite example is the floor function, which works like this: it takes in a number, cuts off the decimal, and spits out the whole number that's left over. [examples]. This function makes perfect sense, even though I can't write down a \"formula\" for it like x^2 or $$\\sin(x)$$.\n\nNone of the first three sentences are true (the fourth one is!), even to the extent that they are meaningful. 
But the lies are mild, and they help simplify the language to get the point across.\n\nThe second question is what the difference actually is. I will not come to any forceful conclusion on this question, but instead frame my thoughts as a series of remarks.\n\nRemark 1. This question has been discussed a lot on MSE, because, as you have observed, it is subtle, and it concerns relatively elementary objects. Exactly which question it is a duplicate of is not totally clear, though.\n\nRemark 2. I can't open the first link, but based on the language used, it kind of sounds like this source is using the programming definition of a 'function', which differs in a subtle way from the mathematical definition. In particular, we usually do not agree with [c]: for us, the function is the encapsulating object, and there are various methods by which a function's values may be given, one of which is a formula (see this Wikipedia quote)\n\nRemark 3. Continuing on this, you write\n\nWe know a function can be used to calculate to find the result of the calculation from the range with the given number taken from the domain [a]...\n\nI know no such thing. A function may encode a calculation (or it may not!) but it is odd to me to say that we use a function to calculate elements in its range. For instance, in your example I would not say I am using the function $$P$$ to calculate $$P(3)$$; I guess I can see why you would say this, but it doesn't reflect how the calculation actually proceeds. When I am actually getting around to doing the computation, what I am using is the expression $$P_0e^{kt}$$ by substituting $$t=3$$. The result of this calculation is certainly in the range of $$P$$; this is guaranteed by the equation $$P(t)=P_0e^{kt}$$... indeed this equation is what convinced me to use the expression to compute $$P(3)$$.\n\nRemark 4. The word \"formula\" did not appear in the previous remark, what's up with that? Well, in casual speech, I would describe both $$P(t)=P_0e^{kt}$$ and $$P_0e^{kt}$$ as formulas for $$P(t)$$. If I am required to be more careful, I would not use the word at all.\n\nRemark 5. Logicians have strong opinions about [read: a definition of] what a \"formula\" is, that does not align well with the common-language meaning, despite being related. (The tension is well-illustrated by this answer.) To answer your question, the three equalities in your question undoubtedly are formulas if we are using the logician's definition. Still, I would not call them formulas because for me a formula needs to be something that I can compute with; those equalities look to me like a \"record\" of a computation that has already been completed.\n\nAs this answer has now been accepted, let me highlight Mauro Allegranza's very useful comment [emphasis mine]:\n\nThe simplest explanation is in the answer to the linked post: \"A formula is a string of symbols, arranged according to mathematical grammar [i.e. an expression of the mathematical language]. A function is a mathematical object.\" The clear distinction is that between the world of (mathematical) objects and the language used to speak of them (exactly like natural language). –\n\n• You forgot to include the link for the examples. Nov 26, 2021 at 6:21\n• No I didn't; sorry for the confusion. I mean something like doing out $\\lfloor 3.482\\rfloor$ on paper, but I didn't feel like boring this crowd with the roleplay. 
Nov 26, 2021 at 8:38\n\nThe difference between a formula and a function is like the difference between the word \"Earth\" and the planet Earth. A formula is something formed by combining mathematical symbols in a syntactically valid way. A function is a relation mapping each element from the domain to an element in its range. A formula can define a function.\n\nFor example, $$1 + x$$ is a formula because it combines the symbols '1', '+' and '$$x$$' in a valid way; and this formula defines a function which relates each number to the number which is 1 larger than it. On the other hand, $$x + 1$$ is a different formula, because the symbols are combined in a different way; however, this different formula defines the same function because it relates numbers to each other in the same way.\n\nPerhaps a clearer example is the prime-counting function, which is certainly a function, but the phrase \"the prime-counting function\" is not a formula, because it combines English words, not mathematical symbols. In fact, almost all functions are not defined by any formula, because there are uncountably many functions $$: \\mathbb{N} \\rightarrow \\mathbb{N}$$ (just taking $$\\mathbb{N}$$ as an example for the domain and range) but only countably many formulae (each of which is a finite sequence of symbols chosen from a finite alphabet).\n\nA function $$f$$ is a triple, consisting of a domain $$D_f$$ (a set), a set of values $$V_f$$, and a relation $$R_f$$, i.e. a subset of $$D_f \\times V_f$$, which has the additional condition that for any element $$d \\in D_f$$, there is only one element in $$R_f$$ which has this particular element $$d$$ as first component.\n\nThat's the mathematical definition of a function. The intention is:\n\n$$D_f$$ defines all possible inputs; $$V_f$$ is a set containing all possible outputs, and $$R_f$$ contains all pairs of $$(d,v)$$ where $$d$$ is an input and $$f(d) = v$$, i.e. $$v$$ is the output belonging to the input $$d$$. The condition described above means that for any input there is only one output.\n\nThere is no mention at all of any formula, because there is no need for this. As Eric Nathan Stucky has mentioned, there are a lot of functions that cannot be described with a formula.\n\nIf you have a formula, you are in the lucky situation that you have a way of calculating your function values. Although I've never before heard the word \"formula\" in this context; to me the word would be \"function term\".\n\n• What happened to Binet's formula? en.wikipedia.org/wiki/Fibonacci_number Nov 26, 2021 at 17:31\n• @lalala Oops. Learned something, edited my post. Thanks! Nov 30, 2021 at 7:55\n\nBest illustrated by examples.\n\n$$y(x)=ax^2+bx+c$$\n\nis a particuar function among many. Satisfied only by two x values when $$y(0)=0$$\n\n$$\\sin (a+b)=\\sin a\\cos b+\\cos a \\sin b$$\n\nis a formula/ identity satisfied by any values of $$(a,b).$$\n\nThe difference comes from arbitrariness of chosen variables.\n\n• Not sure what you mean by saying that a function is \"satisfied by only two x values\". Also, a logician might say that the formula $\\sin(a + b) = \\sin a \\cos b + \\cos a \\sin b$ defines a function $f : \\mathbb{R}^2 \\rightarrow \\{\\top, \\bot\\}$ which is equivalent to $f(a, b) = \\top$. Nov 26, 2021 at 11:19\n• No \"$y(x)=ax^2+bx+c$\" is not a function. 
It is an equation, that in a context where it is understood that $y$ designates a function $\\Bbb R\\to\\Bbb R$, that $a,b,c$ are fixed real numbers, and that the equation is asserted to hold for all $x\\in\\Bbb R$, can be used to specify the function designated by $y$. And in no way it is only satisfied by two values of $x$. Nov 27, 2021 at 4:28\n• It was just given in simpler terms to contrast with a formula. Also the quadratic y(x)=0 has two roots. Nov 27, 2021 at 15:39\n\n## What to tell your students\n\nIn math class, a function is a restricted kind of relation between two sets. For the sake of simplicity we'll call these sets \"domain\" and \"range\". A relation is a function when, for any given element of the domain, there is at most one related element of the range. (There are other names for these sets in use. These are the names I learned in school.) Also, most functions do not have names, but some few do. An example is cosine.\n\nA formula is a (valid) sequence of symbols in some formal language. Most of the formulae we work with in this class are of the algebraic type, so they involve numbers, variables, operators, and the occasional function. Ordinarily, you'll interpret or manipulate a formula in order to draw some conclusion.\n\nOften a function may be defined by a formula. For example, the function square(x) may be defined as x*x.\n\nIn other contexts, these terms may be used slightly differently. Ask a chemist about functional groups, for example."
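To make the distinction above concrete, here is a tiny illustrative sketch (in Python, which is not part of the original discussion): two different expressions define one and the same function, because the function is only the input-output relation.

```python
# Two different "formulas" (expressions written in different ways) ...
f = lambda x: 1 + x   # defined via the expression 1 + x
g = lambda x: x + 1   # defined via the expression x + 1

# ... define the same function: they relate every input to the same output.
print(all(f(x) == g(x) for x in range(-1000, 1000)))  # True

# The expressions themselves, as strings of symbols, are different objects.
print("1 + x" == "x + 1")  # False
```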
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.94804204,"math_prob":0.99264205,"size":4031,"snap":"2023-40-2023-50","text_gpt3_token_len":904,"char_repetition_ratio":0.11547057,"word_repetition_ratio":0.0,"special_character_ratio":0.22500621,"punctuation_ratio":0.11604938,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997446,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T15:26:01Z\",\"WARC-Record-ID\":\"<urn:uuid:d5c1c84f-6f65-4924-a551-f2497209eda4>\",\"Content-Length\":\"200914\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dad727a5-040d-43c9-be74-a3f9b4e0b2bf>\",\"WARC-Concurrent-To\":\"<urn:uuid:192e5150-b5b7-448b-8e70-2b6c083330a5>\",\"WARC-IP-Address\":\"104.18.10.86\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/4316470/what-is-the-difference-between-formula-and-function\",\"WARC-Payload-Digest\":\"sha1:CKWZ3BGEDPRAVFGET4DJJLLZQHCN37PE\",\"WARC-Block-Digest\":\"sha1:MM76TE35ABMDXSFZ6SYPJPF4E2J7XVMR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510300.41_warc_CC-MAIN-20230927135227-20230927165227-00405.warc.gz\"}"} |
https://www.geeksforgeeks.org/gate-gate-cs-2006-question-4/ | [
"Related Articles\n\n# GATE | GATE-CS-2006 | Question 4\n\n• Last Updated : 26 Feb, 2021\n\nA relation R is defined on ordered pairs of integers as follows: (x,y) R(u,v) if x < u and y > v. Then R is:\nThen R is:\n(A) Neither a Partial Order nor an Equivalence Relation\n(B) A Partial Order but not a Total Order\n(C) A Total Order\n(D) An Equivalence Relation\n\nExplanation: An equivalence relation on a set x is a subset of x*x, i.e., a collection R of ordered pairs of elements of x, satisfying certain properties. Write “x R y” to mean (x,y) is an element of R, and we say “x is related to y,” then the properties are:\n1. Reflexive: a R a for all a Є R,\n2. Symmetric: a R b implies that b R a for all a,b Є R\n3. Transitive: a R b and b R c imply a R c for all a,b,c Є R.\n\nAn partial order relation on a set x is a subset of x*x, i.e., a collection R of ordered pairs of elements of x, satisfying certain properties. Write “x R y” to mean (x,y) is an element of R, and we say “x is related to y,” then the properties are:\n1. Reflexive: a R a for all a Є R,\n2. Anti-Symmetric: a R b and b R a implies that for all a,b Є R\n3. Transitive: a R b and b R c imply a R c for all a,b,c Є R.\n\nAn total order relation a set x is a subset of x*x, i.e., a collection R of ordered pairs of elements of x, satisfying certain properties. Write “x R y” to mean (x,y) is an element of R, and we say “x is related to y,” then the properties are:\n1. Reflexive: a R a for all a Є R,\n2. Anti-Symmetric: a R b implies that b R a for all a,b Є R\n3. Transitive: a R b and b R c imply a R c for all a,b,c Є R.\n4. Comparability : either a R b or b R a for all a,b Є R.\n\nAs given in question, a relation R is defined on ordered pairs of integers as follows: (x,y) R(u,v) if x < u and y > v , reflexive property is not satisfied here, because there is > or < relationship between (x ,y) pair set and (u,v) pair set . Other way , if there would have been x <= u and y>= v (or x=u and y=v) kind of relation among elements of sets then reflexive property could have been satisfied. Since reflexive property in not satisfied here , so given realtion can not be equivalence, partial orderor total order relation.\n\nSo, option (A) is correct.\n\nThis solution is contributed by Nirmal Bharadwaj.\n\nQuiz of this Question\n\nAttention reader! Don’t stop learning now. Practice GATE exam well before the actual exam with the subject-wise and overall quizzes available in GATE Test Series Course.\n\nLearn all GATE CS concepts with Free Live Classes on our youtube channel.\n\nMy Personal Notes arrow_drop_up"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8409971,"math_prob":0.9849987,"size":2468,"snap":"2021-31-2021-39","text_gpt3_token_len":703,"char_repetition_ratio":0.12540585,"word_repetition_ratio":0.5516569,"special_character_ratio":0.28119937,"punctuation_ratio":0.15165877,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995703,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-24T09:01:05Z\",\"WARC-Record-ID\":\"<urn:uuid:3d5e3e99-334c-48af-b119-2b29e6e1eaca>\",\"Content-Length\":\"97610\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:04bc4cdb-a994-4266-885d-5281f453215b>\",\"WARC-Concurrent-To\":\"<urn:uuid:02af864e-5b2d-48f1-b523-33c71cc42956>\",\"WARC-IP-Address\":\"23.46.239.11\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/gate-gate-cs-2006-question-4/\",\"WARC-Payload-Digest\":\"sha1:BGA3EY6QCOZSZ6HE2KD4MQS26Y76QWVE\",\"WARC-Block-Digest\":\"sha1:LRSXHHQJOZTYKGXF3NLOCTKRDBNA6RCG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046150134.86_warc_CC-MAIN-20210724063259-20210724093259-00715.warc.gz\"}"} |
https://www.studysmarter.us/textbooks/math/precalculus-enhanced-with-graphing-utilities-6th/sequences-induction-the-binomial-theorem/q-76-in-problems-71-82-find-the-sum-of-each-sequence/ | [
"",
null,
"Suggested languages for you:\n\nEurope\n\nAnswers without the blur. Sign up and see all textbooks for free!",
null,
"Q 76.\n\nExpert-verified",
null,
"Found in: Page 809",
null,
"### Precalculus Enhanced with Graphing Utilities\n\nBook edition 6th\nAuthor(s) Sullivan\nPages 1200 pages\nISBN 9780321795465",
null,
"# In Problems 71-82, find the sum of each sequence.$\\underset{k=1}{\\overset{26}{\\sum \\left(3}}k-7\\right)$\n\nThe sum of this sequence is 871.\n\nSee the step by step solution\n\n## Step 1. Write the given information.\n\nThe sum of sequence:\n\n$\\underset{k=1}{\\overset{26}{\\sum \\left(3k-7\\right)}}$\n\n## Step 2. Use the properties of sequence.\n\nUsing the properties of sequence:\n\n$\\underset{k=1}{\\overset{n}{\\sum \\left({a}_{k}-{b}_{k}\\right)}}=\\underset{k=1}{\\overset{n}{\\sum \\left({a}_{k}\\right)}}-\\underset{k=1}{\\overset{n}{\\sum \\left({b}_{k}\\right)}}\\phantom{\\rule{0ex}{0ex}}&\\phantom{\\rule{0ex}{0ex}}\\underset{k=1}{\\overset{n}{\\sum \\left(c{a}_{k}\\right)}}=c\\underset{k=1}{\\overset{n}{\\sum {a}_{k}}}$\n\nSo,\n\n$\\underset{k=1}{\\overset{26}{\\sum \\left(3k-7\\right)}}=\\underset{k=1}{\\overset{26}{\\sum \\left(3k\\right)}}-\\underset{k=1}{\\overset{26}{\\sum \\left(7\\right)}}$\n\n&\n\n$\\underset{k=1}{\\overset{26}{\\sum \\left(3k\\right)}}=\\underset{k=1}{\\overset{26}{3\\sum \\left(k\\right)}}$\n\n## Step 3. Use the formula for sum of sequences of n real numbers.\n\nUsing formula for summation is:\n\n$\\underset{k=1}{\\overset{n}{\\sum k}}=1+2+...+n\\phantom{\\rule{0ex}{0ex}}⇒\\underset{k=1}{\\overset{n}{\\sum k}}=\\frac{n\\left(n+1\\right)}{2}$\n\nSo,\n\nrole=\"math\" localid=\"1646750162806\" $3\\underset{k=1}{\\overset{26}{\\sum k}}=3×\\frac{26\\left(26+1\\right)}{2}\\phantom{\\rule{0ex}{0ex}}⇒\\underset{k=1}{\\overset{26}{3\\sum k}}=3×\\frac{26\\left(27\\right)}{2}\\phantom{\\rule{0ex}{0ex}}⇒\\underset{k=1}{\\overset{26}{3\\sum k}}=3×351\\phantom{\\rule{0ex}{0ex}}⇒\\underset{k=1}{\\overset{26}{3\\sum k}}=1053$\n\n## Step 4. Use the formula for sum of sequence.\n\nUsing formula for summation is:$\\underset{k=1}{\\overset{n}{\\sum c}}=c+c+...+c\\phantom{\\rule{0ex}{0ex}}⇒\\underset{k=1}{\\overset{n}{\\sum c}}=cn,c-realnumber\\phantom{\\rule{0ex}{0ex}}So,\\phantom{\\rule{0ex}{0ex}}\\underset{k=1}{\\overset{26}{\\sum 7}}=7×26\\phantom{\\rule{0ex}{0ex}}⇒\\underset{k=1}{\\overset{26}{\\sum 7}}=182\\phantom{\\rule{0ex}{0ex}}$\n\n## Step 5. Now, subtract Step 3 from Step 4.\n\nFrom Step 3,\n\n$\\underset{k=1}{\\overset{26}{3\\sum k}}=1053$\n\nFrom Step 4,\n\n$\\underset{k=1}{\\overset{26}{\\sum 7}}=182$\n\nSo,\n\n$\\underset{k=1}{\\overset{26}{\\sum \\left(3k-7\\right)}}=\\underset{k=1}{\\overset{26}{\\sum \\left(3k\\right)}}\\underset{k=1}{\\overset{26}{-\\sum \\left(3\\right)}}\\phantom{\\rule{0ex}{0ex}}⇒\\underset{k=1}{\\overset{26}{\\sum \\left(3k-7\\right)}}=1053-182\\phantom{\\rule{0ex}{0ex}}⇒\\underset{k=1}{\\overset{26}{\\sum \\left(3k-7\\right)}}=871$",
null,
"### Want to see more solutions like these?",
null,
""
] | [
null,
"https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/dist/assets/images/header-logo.svg",
null,
"https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/src/assets/images/ab-test/searching-looking.svg",
null,
"https://studysmarter-mediafiles.s3.amazonaws.com/media/textbook-images/Sullivan_Precalculus.jpeg",
null,
"https://studysmarter-mediafiles.s3.amazonaws.com/media/textbook-images/Sullivan_Precalculus.jpeg",
null,
"https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/src/assets/images/ab-test/businessman-superhero.svg",
null,
"https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/img/textbook/banner-top.svg",
null,
"https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/img/textbook/cta-icon.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6977346,"math_prob":0.9997594,"size":375,"snap":"2023-14-2023-23","text_gpt3_token_len":105,"char_repetition_ratio":0.14824797,"word_repetition_ratio":0.03508772,"special_character_ratio":0.29333332,"punctuation_ratio":0.175,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999243,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-25T21:03:36Z\",\"WARC-Record-ID\":\"<urn:uuid:a680a25f-566e-4a6e-a97d-8e6e1755bbb4>\",\"Content-Length\":\"158370\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e1291b38-058e-45c1-a800-b9bb2f70de8f>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba744501-b4d8-46f8-b4b3-0708dc177260>\",\"WARC-IP-Address\":\"18.194.226.228\",\"WARC-Target-URI\":\"https://www.studysmarter.us/textbooks/math/precalculus-enhanced-with-graphing-utilities-6th/sequences-induction-the-binomial-theorem/q-76-in-problems-71-82-find-the-sum-of-each-sequence/\",\"WARC-Payload-Digest\":\"sha1:WUT55JOQ2OVQ7YHBEEZ2HJSAXKEKXUAR\",\"WARC-Block-Digest\":\"sha1:UFCBYP3YH3IROMFPPWZ6VWMDUB4SFK3W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945372.38_warc_CC-MAIN-20230325191930-20230325221930-00750.warc.gz\"}"} |
https://chem.libretexts.org/Courses/City_College_of_San_Francisco/Foundations_-_Review_Source_for_Chem_101A/06%3A_Appendices/7.01%3A_Essential_Mathematics | [
"Skip to main content\n\n# 7.1: Essential Mathematics\n\n## Exponential Arithmetic\n\nExponential notation is used to express very large and very small numbers as a product of two numbers. The first number of the product, the digit term, is usually a number not less than 1 and not greater than 10. The second number of the product, the exponential term, is written as 10 with an exponent. Some examples of exponential notation are:\n\n\\begin{align*} 1000&=1×10^3\\\\ 100&=1×10^2\\\\ 10&=1×10^1\\\\ 1&=1×10^0\\\\ 0.1&=1×10^{−1}\\\\ 0.001&=1×10^{−3}\\\\ 2386&=2.386×1000=2.386×10^3\\\\ 0.123&=1.23×0.1=1.23×10^{−1} \\end{align*}\n\nThe power (exponent) of 10 is equal to the number of places the decimal is shifted to give the digit number. The exponential method is particularly useful notation for every large and very small numbers. For example, 1,230,000,000 = 1.23 × 109, and 0.00000000036 = 3.6 × 10−10.\n\n### Addition of Exponentials\n\nConvert all numbers to the same power of 10, add the digit terms of the numbers, and if appropriate, convert the digit term back to a number between 1 and 10 by adjusting the exponential term.\n\nExercise $$\\PageIndex{1}$$: Adding Exponentials\n\nAdd 5.00 × 10−5 and 3.00 × 10−3.\n\nSolution\n\n\\begin{align*} 3.00×10^{−3}&=300×10^{−5}\\\\ (5.00×10^{−5})+(300×10^{−5})&=305×10^{−5}=3.05×10^{−3} \\end{align*}\n\n### Subtraction of Exponentials\n\nConvert all numbers to the same power of 10, take the difference of the digit terms, and if appropriate, convert the digit term back to a number between 1 and 10 by adjusting the exponential term.\n\nExercise $$\\PageIndex{2}$$: Subtracting Exponentials\n\nSubtract 4.0 × 10−7 from 5.0 × 10−6.\n\nSolution\n\n$4.0×10^{−7}=0.40×10^{−6}\\\\ (5.0×10^{−6})−(0.40×10^{−6})=4.6×10^{−6}$\n\n### Multiplication of Exponentials\n\nMultiply the digit terms in the usual way and add the exponents of the exponential terms.\n\nExercise $$\\PageIndex{3}$$: Multiplying Exponentials\n\nMultiply 4.2 × 10−8 by 2.0 × 103.\n\nSolution\n\n$(4.2×10^{−8})×(2.0×10^3)=(4.2×2.0)×10^{(−8)+(+3)}=8.4×10^{−5}$\n\n### Division of Exponentials\n\nDivide the digit term of the numerator by the digit term of the denominator and subtract the exponents of the exponential terms.\n\nExercise $$\\PageIndex{4}$$: Dividing Exponentials\n\nDivide 3.6 × 105 by 6.0 × 10−4.\n\nSolution\n\n$\\dfrac{3.6×10^{−5}}{6.0×10^{−4}}=\\left(\\dfrac{3.6}{6.0}\\right)×10^{(−5)−(−4)}=0.60×10^{−1}=6.0×10^{−2}$\n\n### Squaring of Exponentials\n\nSquare the digit term in the usual way and multiply the exponent of the exponential term by 2.\n\nExercise $$\\PageIndex{5}$$: Squaring Exponentials\n\nSquare the number 4.0 × 10−6.\n\nSolution\n\n$(4.0×10^{−6})^2=4×4×10^{2×(−6)}=16×10^{−12}=1.6×10^{−11}$\n\n### Cubing of Exponentials\n\nCube the digit term in the usual way and multiply the exponent of the exponential term by 3.\n\nExercise $$\\PageIndex{6}$$: Cubing Exponentials\n\nCube the number 2 × 104.\n\nSolution\n\n$(2×10^4)^3=2×2×2×10^{3×4}=8×10^{12}$\n\n### Taking Square Roots of Exponentials\n\nIf necessary, decrease or increase the exponential term so that the power of 10 is evenly divisible by 2. 
Extract the square root of the digit term and divide the exponential term by 2.\n\nExercise $$\\PageIndex{7}$$: Finding the Square Root of Exponentials\n\nFind the square root of 1.6 × 10−7.\n\nSolution\n\n\\begin{align*} 1.6×10^{−7}&=16×10^{−8}\\\\ \\sqrt{16×10^{−8}}=\\sqrt{16}×\\sqrt{10^{−8}}&=\\sqrt{16}×10^{−\\large{\\frac{8}{2}}}=4.0×10^{−4} \\end{align*}\n\n## Significant Figures\n\nA beekeeper reports that he has 525,341 bees. The last three figures of the number are obviously inaccurate, for during the time the keeper was counting the bees, some of them died and others hatched; this makes it quite difficult to determine the exact number of bees. It would have been more accurate if the beekeeper had reported the number 525,000. In other words, the last three figures are not significant, except to set the position of the decimal point. Their exact values have no meaning useful in this situation. In reporting any information as numbers, use only as many significant figures as the accuracy of the measurement warrants.\n\nThe importance of significant figures lies in their application to fundamental computation. In addition and subtraction, the sum or difference should contain as many digits to the right of the decimal as that in the least certain of the numbers used in the computation (indicated by underscoring in the following example).\n\nExercise $$\\PageIndex{8}$$: Addition and Subtraction with Significant Figures\n\nAdd 4.383 g and 0.0023 g.\n\nSolution\n\n\\begin{align*} &\\mathrm{4.38\\underline{3}\\:g}\\\\ &\\mathrm{\\underline{0.002\\underline{3}\\:g}}\\\\ &\\mathrm{4.38\\underline{5}\\:g} \\end{align*}\n\nIn multiplication and division, the product or quotient should contain no more digits than that in the factor containing the least number of significant figures.\n\nExercise $$\\PageIndex{9}$$: Multiplication and Division with Significant Figures\n\nMultiply 0.6238 by 6.6.\n\nSolution\n\n$0.623\\underline{8}×6.\\underline{6}=4.\\underline{1}$\n\nWhen rounding numbers, increase the retained digit by 1 if it is followed by a number larger than 5 (“round up”). Do not change the retained digit if the digits that follow are less than 5 (“round down”). If the retained digit is followed by 5, round up if the retained digit is odd, or round down if it is even (after rounding, the retained digit will thus always be even).\n\n## The Use of Logarithms and Exponential Numbers\n\nThe common logarithm of a number (log) is the power to which 10 must be raised to equal that number. For example, the common logarithm of 100 is 2, because 10 must be raised to the second power to equal 100. Additional examples follow.\n\nLogarithms and Exponential Numbers\nNumber Number Expressed Exponentially Common Logarithm\n1000 103 3\n10 101 1\n1 100 0\n0.1 10−1 −1\n0.001 10−3 −3\n\nWhat is the common logarithm of 60? Because 60 lies between 10 and 100, which have logarithms of 1 and 2, respectively, the logarithm of 60 is 1.7782; that is,\n\n$60=10^{1.7782}$\n\nThe common logarithm of a number less than 1 has a negative value. The logarithm of 0.03918 is −1.4069, or\n\n$0.03918=10^{-1.4069}=\\dfrac{1}{10^{1.4069}}$\n\nTo obtain the common logarithm of a number, use the log button on your calculator. To calculate a number from its logarithm, take the inverse log of the logarithm, or calculate 10x (where x is the logarithm of the number).\n\nThe natural logarithm of a number (ln) is the power to which e must be raised to equal the number; e is the constant 2.7182818. 
For example, the natural logarithm of 10 is 2.303; that is,\n\n$10=e^{2.303}=2.7182818^{2.303}$\n\nTo obtain the natural logarithm of a number, use the ln button on your calculator. To calculate a number from its natural logarithm, enter the natural logarithm and take the inverse ln of the natural logarithm, or calculate ex (where x is the natural logarithm of the number).\n\nLogarithms are exponents; thus, operations involving logarithms follow the same rules as operations involving exponents.\n\n1. The logarithm of a product of two numbers is the sum of the logarithms of the two numbers. $\\log xy= \\log x + \\log y, \\textrm{ and }\\ln xy=\\ln x + \\ln y$\n2. The logarithm of the number resulting from the division of two numbers is the difference between the logarithms of the two numbers. $\\log\\dfrac{x}{y}=\\log x-\\log y,\\textrm{ and } \\ln\\dfrac{x}{y}=\\ln x-\\ln y$\n3. The logarithm of a number raised to an exponent is the product of the exponent and the logarithm of the number. $\\log x^n=n\\log x \\textrm{ and }\\ln x^n=n\\ln x$\n\n## The Solution of Quadratic Equations\n\nMathematical functions of this form are known as second-order polynomials or, more commonly, quadratic functions.\n\n$ax^2+bx+c=0$\n\nThe solution or roots for any quadratic equation can be calculated using the following formula:\n\n$x=\\dfrac{-b±\\sqrt{b^2−4ac}}{2a}$\n\nSolving Quadratic Equations Solve the quadratic equation 3x2 + 13x − 10 = 0.\n\nSolution Substituting the values a = 3, b = 13, c = −10 in the formula, we obtain\n\n$x=\\dfrac{−13±\\sqrt{(13)^2−4×3×(−10)}}{2×3}$\n$x=\\dfrac{−13±\\sqrt{169+120}}{6}=\\dfrac{−13±\\sqrt{289}}{6}=\\dfrac{−13±17}{6}$\n\nThe two roots are therefore\n\n$x=\\dfrac{−13+17}{6}=\\dfrac{2}{3}\\textrm{ and }x=\\dfrac{−13−17}{6}=−5$\n\nQuadratic equations constructed on physical data always have real roots, and of these real roots, often only those having positive values are of any significance.\n\n## Two-Dimensional (x-y) Graphing\n\nThe relationship between any two properties of a system can be represented graphically by a two-dimensional data plot. Such a graph has two axes: a horizontal one corresponding to the independent variable, or the variable whose value is being controlled (x), and a vertical axis corresponding to the dependent variable, or the variable whose value is being observed or measured (y).\n\nWhen the value of y is changing as a function of x (that is, different values of x correspond to different values of y), a graph of this change can be plotted or sketched. The graph can be produced by using specific values for (x,y) data pairs.\n\nExercise $$\\PageIndex{10}$$\n\nGraphing the Dependence of y on x\n\nx y\n1 5\n2 10\n3 7\n4 14\n\nThis table contains the following points: (1,5), (2,10), (3,7), and (4,14). Each of these points can be plotted on a graph and connected to produce a graphical representation of the dependence of y on x.",
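The two roots found above are easy to verify numerically; the following short Python sketch of the quadratic formula is illustrative only and not part of the original text:

```python
import math

def quadratic_roots(a, b, c):
    # x = (-b +/- sqrt(b^2 - 4ac)) / (2a), assuming real roots (b^2 - 4ac >= 0)
    disc = b * b - 4 * a * c
    return ((-b + math.sqrt(disc)) / (2 * a),
            (-b - math.sqrt(disc)) / (2 * a))

print(quadratic_roots(3, 13, -10))  # (0.666..., -5.0), i.e. x = 2/3 and x = -5
```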
null,
"If the function that describes the dependence of y on x is known, it may be used to compute x,y data pairs that may subsequently be plotted.\n\nExercise $$\\PageIndex{11}$$\n\nPlotting Data Pairs If we know that y = x2 + 2, we can produce a table of a few (x,y) values and then plot the line based on the data shown here.\n\nx y = x2 + 2\n1 3\n2 6\n3 11\n4 18",
null,
"• Was this article helpful?"
] | [
null,
"https://chem.libretexts.org/@api/deki/files/61354/CNX_Chem_00_BB_Dependence_img.jpg",
null,
"https://chem.libretexts.org/@api/deki/files/61355/CNX_Chem_00_BB_Function_img.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8290494,"math_prob":0.99938685,"size":9110,"snap":"2021-21-2021-25","text_gpt3_token_len":2638,"char_repetition_ratio":0.16571492,"word_repetition_ratio":0.09244884,"special_character_ratio":0.31635565,"punctuation_ratio":0.1131874,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999871,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-19T07:31:37Z\",\"WARC-Record-ID\":\"<urn:uuid:e153b724-6c97-49a3-9883-9fc622c4e658>\",\"Content-Length\":\"112820\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4cf6139a-50c1-4245-9aee-1dfce54b26c7>\",\"WARC-Concurrent-To\":\"<urn:uuid:c80bd688-bea9-4a1b-a891-7d5a517b268e>\",\"WARC-IP-Address\":\"13.249.43.88\",\"WARC-Target-URI\":\"https://chem.libretexts.org/Courses/City_College_of_San_Francisco/Foundations_-_Review_Source_for_Chem_101A/06%3A_Appendices/7.01%3A_Essential_Mathematics\",\"WARC-Payload-Digest\":\"sha1:HBQZPNFSIASDVYV3MK2GSYZ5RVZQAHO3\",\"WARC-Block-Digest\":\"sha1:W37ELTRFR4WW6Q4L7ARAYWWZ6C2OWLGZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487643703.56_warc_CC-MAIN-20210619051239-20210619081239-00483.warc.gz\"}"} |
https://python.engineering/python-check-whether-two-lists-circularly-identical/ | [
"# Python | Check if two lists match in a circle\n\n| | |",
null,
"Examples :\n\n` Input: list1 = [10, 10, 0, 0, 10] list2 = [10, 10, 10, 0, 0] Output: Yes Explanation: yes they are circularly identical as when we write the list1 last index to second last index, then we find it is circularly same with list1 Input: list1 = [10, 10, 10, 0, 0] list2 = [1, 10, 10, 0, 0] Output: No `\n\nMethod 1. Using list traversal\n\nUsing traversing , we have to double the given list. Check for any x (0 \"= n) for any x + n and compare with list2 to see if list1 and list2 are the same, if both are the same then list2 is cyclically identical. Check this property using two loops. The first loop will run from 0 to len (list1) and then check if the index (x to x + n) is the same as list2, if yes then returns true, otherwise returns false.\n\nPython implementation of the above approach:\n\n ` # Python program to check if two ` ` # lists circularly identical ` ` # using a workaround ` ` # function to check circular identity or not ` ` def ` ` circularly_identical (list1, list2): ` ` # doubling list ` ` list3 ` = ` list1 ` ` * ` ` 2 ` ` # crawl list twice1 ` ` for ` ` x ` ` in ` ` range ` ` (` ` 0 ` `, ` ` len ` ` (list1)): ` ` z ` ` = ` ` 0 ` ` # check if list2 == list1 briefly ` ` for ` ` y ` ` in ` ` range ` ` (x, x ` ` + ` ` len ` ` (list1)): ` ` if ` ` list2 [z] ` ` = ` ` = ` ` list3 [y]: ` ` z ` ` + ` ` = ` ` 1 ` ` else ` `: ` ` break ` ` # if all n elements are the same in a circle ` ` if ` ` z ` ` = ` ` = ` ` len ` ` (list1): ` ` return ` ` True ` ` return ` ` False ` ` ` ` ` ` ` ` # driver code ` ` list1 ` ` = ` ` [` ` 10 ` `, ` ` 10 ` `, ` ` 0 ` `, ` ` 0 ` `, ` ` 10 ] `` list2 = [ 10 , 10 , 10 , 0 , 0 ] list3 = [ 1 , 10 , 10 , 0 , 0 ] # check list 1 and list 2 if < / code> (circularly_identical (list1, list2)): print \" Yes \" else : print \"No\" # check list 2 and list 3 if (circularly_identical (list2, list3)): print \" Yes \" else : print \"No\" `\n\nExit :\n\n` Yes No `\n\nC tutorial 1: using the map () function\n\nUsing the Python built-in function map (), we can do this in one step where we need to map list2 to a string and then see if it exists when mapping a double from list1 (2 * list1) to on another line.\n\nBelow is the Python implementation of the above approach:\n\n ` # Python program to check if two ` ` # the lists are circularly identical ` ` # using the map function ` ` # function to check circular identity or not ` ` def ` ` circularly_identical (list1, list2): ` ` ` ` ` ` return ` ` ( ` ` ’’ ` `. join (` ` map ` ` (` ` str ` `, list2)) ` ` in ` ` ’’ ` `. join (` ` map ` ` (` ` str ` `, list1 ` ` * ` ` 2 ` `))) ` ` # driver code ` ` list1 ` ` = ` ` [` ` 10 ` ` , ` ` 10 ` `, < / code> 0 , 0 , 10 ] `` list2 = [ 10 , 10 , 10 , 0 , 0 ] list3 = [ 1 , 10 , 10 , 0 , 0 ] # check list 1 and list 2 if (circularly_identical (list1, list2)): print \"Yes\" else : print \"No\" # check list 2 and list 3 if (circularly_identical (list2, list3)): print \"Yes\" else : `` print < code class = \"string\"> \"No\" `\n\nExit:\n\n` Yes No `\n\n## Shop",
null,
"Best laptop for Fortnite\n\n\\$",
null,
"Best laptop for Excel\n\n\\$",
null,
"Best laptop for Solidworks\n\n\\$",
null,
"Best laptop for Roblox\n\n\\$",
null,
"Best computer for crypto mining\n\n\\$",
null,
"Best laptop for Sims 4\n\n\\$",
null,
"Best laptop for Zoom\n\n\\$499",
null,
"Best laptop for Minecraft\n\n\\$590\n\nLatest questions\n\nNUMPYNUMPY\n\npsycopg2: insert multiple rows with one query\n\nNUMPYNUMPY\n\nHow to convert Nonetype to int or string?\n\nNUMPYNUMPY\n\nHow to specify multiple return types using type-hints\n\nNUMPYNUMPY\n\nJavascript Error: IPython is not defined in JupyterLab\n\n## Wiki\n\nPython OpenCV | cv2.putText () method\n\nnumpy.arctan2 () in Python\n\nPython | os.path.realpath () method\n\nPython OpenCV | cv2.circle () method\n\nPython OpenCV cv2.cvtColor () method\n\nPython - Move item to the end of the list\n\ntime.perf_counter () function in Python\n\nCheck if one list is a subset of another in Python\n\nPython os.path.join () method"
] | [
null,
"http://espressocode.top/images/paichildbusverthhandre372881.jpg",
null,
"https://python.engineering/python-check-whether-two-lists-circularly-identical/",
null,
"https://python.engineering/python-check-whether-two-lists-circularly-identical/",
null,
"https://python.engineering/python-check-whether-two-lists-circularly-identical/",
null,
"https://python.engineering/python-check-whether-two-lists-circularly-identical/",
null,
"https://python.engineering/python-check-whether-two-lists-circularly-identical/",
null,
"https://python.engineering/wp-content/uploads/2022/04/pye-best-laptop-for-sims-4.jpg",
null,
"https://python.engineering/wp-content/uploads/2022/04/pye-best-laptop-for-zoom.jpg",
null,
"https://python.engineering/wp-content/uploads/2022/04/pye-best-laptop-for-minecraft.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5803006,"math_prob":0.8873737,"size":2662,"snap":"2022-27-2022-33","text_gpt3_token_len":889,"char_repetition_ratio":0.16365689,"word_repetition_ratio":0.2984293,"special_character_ratio":0.3658903,"punctuation_ratio":0.16666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99474204,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,2,null,5,null,5,null,5,null,5,null,5,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T00:19:54Z\",\"WARC-Record-ID\":\"<urn:uuid:65ef2bd0-15f2-4d89-84f3-683daae85d97>\",\"Content-Length\":\"38370\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:705aff63-a253-41cb-b383-155b3f9d5d7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:e8cb065b-840f-4f51-8b94-fb1ce790d564>\",\"WARC-IP-Address\":\"172.67.181.212\",\"WARC-Target-URI\":\"https://python.engineering/python-check-whether-two-lists-circularly-identical/\",\"WARC-Payload-Digest\":\"sha1:EFKMJDV4X25HT2SEGZKMAM6OHMF6ASBM\",\"WARC-Block-Digest\":\"sha1:6WYAGRTQBIYOXIMHWXVTH7AMZZDMU26P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103619185.32_warc_CC-MAIN-20220628233925-20220629023925-00338.warc.gz\"}"} |
https://www.geeksforgeeks.org/p5-js-createvector-function/ | [
"# p5.js | createVector() function\n\nThe createVector() function in p5.js is used to create the new p5 vector which contains both magnitude and direction. This provides a two or three dimensional vector, specifically a geometric vector.\n\nSyntax:\n\n`createVector([x], [y], [z])`\n\nParameters: This function accepts three parameters as mentioned above and described below:\n\n• x: This parameter stores the x component of the vector.\n• y: This parameter stores the y component of the vector.\n• z: This parameter stores the z component of the vector.\n• Below programs illustrate the createVector() function in p5.js:\n\nExample 1: This example uses createVector() function to draw a line.\n\n `function` `setup() { ` ` ` ` ``// Create a Canvas ` ` ``createCanvas(500, 550); ` `} ` ` ` `function` `draw() { ` ` ` ` ``// Vector initislisation ` ` ``// using createVector ` ` ``t1 = createVector(10, 40); ` ` ``t2 = createVector(411, 500); ` ` ` ` ``// Set background color ` ` ``background(200); ` ` ` ` ``// Set stroke weight ` ` ``strokeWeight(2); ` ` ` ` ``// line using vector ` ` ``line(t1.x, t1.y, t2.x, t2.y); ` ` ` ` ``translate(12, 54); ` ` ` ` ``line(t1.x, t1.y, t2.x, t2.y); ` `} `\n\nOutput:",
null,
"Example 2: This example uses createVector() function to draw a circle.\n\n `function` `setup() { ` ` ` ` ``// Create a Canvas ` ` ``createCanvas(500, 550); ` `} ` ` ` `function` `draw() { ` ` ` ` ``// Vector initislisation ` ` ``// using createVector ` ` ``t1 = createVector(10, 40); ` ` ``t2 = createVector(41, 50); ` ` ` ` ``// Set background color ` ` ``background(200); ` ` ` ` ``// Set stroke weight ` ` ``strokeWeight(2); ` ` ` ` ``// Fill yellow ` ` ``fill(``'yellow'``); ` ` ` ` ``// ellispe using vector ` ` ``ellipse(t1.x*millis() / 1000 * 20, ` ` ``t1.y, t2.x+100, t2.y+100); ` `} `\n\nOutput:",
null,
"My Personal Notes arrow_drop_up",
null,
"Check out this Author's contributed articles.\n\nIf you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.\n\nPlease Improve this article if you find anything incorrect by clicking on the \"Improve Article\" button below.\n\nArticle Tags :\n\nBe the First to upvote.\n\nPlease write to us at contribute@geeksforgeeks.org to report any issue with the above content."
] | [
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20190420232304/Screenshot-2019-04-20-at-11.22.51-PM.png",
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20190422183953/circle6.png",
null,
"https://media.geeksforgeeks.org/auth/profile/6z3x57km8koa88tq98ev",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5256069,"math_prob":0.84704137,"size":2233,"snap":"2019-35-2019-39","text_gpt3_token_len":607,"char_repetition_ratio":0.19380888,"word_repetition_ratio":0.18879056,"special_character_ratio":0.3036274,"punctuation_ratio":0.1970803,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98314106,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-22T12:19:03Z\",\"WARC-Record-ID\":\"<urn:uuid:261ba453-f1db-4326-ac61-31459dc1efaa>\",\"Content-Length\":\"97849\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:85e28a74-be45-4f0e-a286-1f215652c68b>\",\"WARC-Concurrent-To\":\"<urn:uuid:f60d7bfc-d8a6-4d9e-87c1-d3ac34915d5e>\",\"WARC-IP-Address\":\"104.96.221.51\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/p5-js-createvector-function/\",\"WARC-Payload-Digest\":\"sha1:V4SHRNCH3KQWVEVHLMBXNMJ4OMBRRNSO\",\"WARC-Block-Digest\":\"sha1:C23FAIZHUNZX3Q7ZXTPLYK4TC5NXMM4X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027317113.27_warc_CC-MAIN-20190822110215-20190822132215-00240.warc.gz\"}"} |
https://discuss.pytorch.org/t/second-derivative-passing-from-linear-to-non-linear-formula/130435 | [
"# Second derivative - passing from linear to non-linear formula\n\nHi guys!\n\nI am interested about passing from this linear function created below, to a non-linear one, in order to estimate the second derivative. Is there exist some trick that I can use to have the second derivative different from zero? I am trying to estimate a greek called “Gamma” for a Call Option, that is priced using Montecarlo method\n\n``````import torch as np\nvalues = torch.tensor([1., 1.1, 1.2], requires_grad=True)\n\ndef delta_gamma(xi):\n\nk,T,j,sigma = 1.5,1.,10000,0.5\n\nmean = -.5 * sigma * sigma * T\n\nvolatility = sigma\n\nBM = torch.randn(1, j)*volatility+mean\n\nproduct = S*np.exp(BM)\n\np = torch.maximum(product-k,torch.zeros_like(product))\n\nresult = torch.mean(p, 1)\n\nreturn result\n\nfor i in values:"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7919438,"math_prob":0.9863236,"size":792,"snap":"2022-05-2022-21","text_gpt3_token_len":217,"char_repetition_ratio":0.095177665,"word_repetition_ratio":0.0,"special_character_ratio":0.28156567,"punctuation_ratio":0.22950819,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99931204,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-28T23:35:53Z\",\"WARC-Record-ID\":\"<urn:uuid:99b11fd3-adc3-44a6-826c-33bbb92972e7>\",\"Content-Length\":\"13264\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0d7eb513-140f-4003-9e8d-4830aa65f762>\",\"WARC-Concurrent-To\":\"<urn:uuid:abd91548-dd80-49ce-985d-10cde31a415c>\",\"WARC-IP-Address\":\"159.203.145.104\",\"WARC-Target-URI\":\"https://discuss.pytorch.org/t/second-derivative-passing-from-linear-to-non-linear-formula/130435\",\"WARC-Payload-Digest\":\"sha1:LMLRNTOPSMK6B7ODO5IUN7ZFO6FOJIMG\",\"WARC-Block-Digest\":\"sha1:Y5LA7ZRFRELWEJ34X6QVEFMUTTXSWDCQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663021405.92_warc_CC-MAIN-20220528220030-20220529010030-00596.warc.gz\"}"} |
https://www.smartzworld.com/notes/network-theory-pdf-notes-nt-pdf-notes/ | [
"# Network Theory Pdf Notes – NT Pdf Notes\n\nHere you can download the free lecture Notes of Neheory Ptwork Tdf Notes – NT Pdf Notes materials with multiple file links to download. Network Theory Notes Pdf – NT Notes Pdf book starts with the topics Introduction,Advantages of Three Phase is preferred Over Single Phase,Frequency-selective or filter circuits pass to the output only those input signals that are in a desired range of frequencies (called pass band).\n\n## The Network Theory Pdf Notes – NT Pdf Notes\n\nNetwork Theory Notes pdf – NT pdf notes – NT notes pdf file to download are listed below please check it –\n\n#### Network Theory Book\n\nComplete notes\n\nUnit 1\n\nUnit 2\n\nUnit 3\n\nUnit 4\n\nUnit 5\n\nNote :- These notes are according to the R09 Syllabus book of JNTU.In R13 and R15,8-units of R09 syllabus are combined into 5-units in R13 and R15 syllabus. If you have any doubts please refer to the JNTU Syllabus Book.\n\n### Network Theory Notes Pdf – NT Notes Pdf\n\nUnit-1:\n\nIntroduction,Advantages of Three Phase is preferred Over Single Phase,STAR CONNECTION,THE STAR CONNECTION,Delta Connection,ANALYSIS OF BALANCED THREE PHASE CIRCUITS,ANALYSIS OF UNBALANCED LOADS,Power in three-phase circuits.\n\nUnit-2:\n\nIntroduction,What is meant by Transients,Series RL Circuit,Series RC circuit,Transient During Discharging a Capacitor,Response of a series R-L-C circuit due to a dc voltage source,Overdamped response,Under damped response,Critically damped response.\n\nUnit-3:\n\nThe concept of complex frequency,Terminal pairs or poles,Network Function,RESTRICTION ON POLE AND ZERO LOCATIONS OF NETWORK FUNCTION,A pair of terminals through which a current may enter or leave a network is known as a port. Two-terminal devices or elements (such as resistors, capacitors, and inductors) result in one-port networks.\n\n### Network Theory Notes pdf Details\n\nUnit-4:\n\nA pair of terminals through which a current may enter or leave a network is known as a port. A port is an access to the network and consists of a pair of terminals; the current entering one terminal leaves through the other terminal so that the net current entering the port equals zero.\n\nUnit-5:\n\nFrequency-selective or filter circuits pass to the output only those input signals that are in a desired range of frequencies (called pass band). The amplitude of signals outside this range of frequencies (called stop band) is reduced (ideally reduced to zero). Typically in these circuits, the input and output currents are kept to a small value and as such, the current transfer function is not an important parameter. The main parameter is the voltage transfer function in the frequency domain.\n\n## Network Theory Notes Reviews\n\n#### Avg. Score\n\nUsers rating & review for Network Theory Pdf Notes (NT Pdf Notes)\n\nUser Rating: Be the first one !\n\n## Basic Electronics PDF VSSUT | BE Notes VSSUT\n\nBasic Electronics PDF VSSUT – BE Notes VSSUT of Total Complete Notes Please find the …"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8564314,"math_prob":0.8259887,"size":2839,"snap":"2019-51-2020-05","text_gpt3_token_len":650,"char_repetition_ratio":0.1276896,"word_repetition_ratio":0.18666667,"special_character_ratio":0.20183162,"punctuation_ratio":0.10128913,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9686088,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T03:07:05Z\",\"WARC-Record-ID\":\"<urn:uuid:33899b2e-5465-4a77-8d46-c3d829266ffe>\",\"Content-Length\":\"75086\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:34c3e325-8583-4b8a-90b4-e2202172bbb7>\",\"WARC-Concurrent-To\":\"<urn:uuid:80cbab17-e611-4ffa-82a1-8da6da33a484>\",\"WARC-IP-Address\":\"209.182.192.62\",\"WARC-Target-URI\":\"https://www.smartzworld.com/notes/network-theory-pdf-notes-nt-pdf-notes/\",\"WARC-Payload-Digest\":\"sha1:WK4DLGGZWLDQDZP7GDQ5OBSIWABSMKAU\",\"WARC-Block-Digest\":\"sha1:ZTONVV7XIYVXJOJEUQO24DCQKJHNTBV2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540548537.21_warc_CC-MAIN-20191213020114-20191213044114-00288.warc.gz\"}"} |
https://www.hindawi.com/journals/aaa/2013/432509/ | [
"#### Abstract\n\nThis paper considers the existence of solutions for two boundary value problems for fractional -Laplacian equation. Under certain nonlinear growth conditions of the nonlinearity, two new existence results are obtained by using Schaefer's fixed point theorem. As an application, an example to illustrate our results is given.\n\n#### 1. Introduction\n\nFractional calculus is a generalization of ordinary differentiation and integration on an arbitrary order that can be noninteger. This subject, as old as the problem of ordinary differential calculus, can go back to the times when Leibniz and Newton invented differential calculus. As is known to all, the problem for fractional derivative was originally raised by Leibniz in a letter, dated September 30, 1695. A fractional derivative arises from many physical processes, such as a non-Markovian diffusion process with memory , charge transport in amorphous semiconductors , and propagations of mechanical waves in viscoelastic media , and so forth. Moreover, phenomena in electromagnetics, acoustics, viscoelasticity, electrochemistry, and material science are also described by differential equations of fractional order . For instance, Pereira et al. considered the following fractional Van der Pol equation: where is the fractional derivative of order and is a control parameter that reflects the degree of nonlinearity of the system. Equation (1) is obtained by substituting the capacitance by a fractance in the nonlinear RLC circuit model.\n\nRecently, fractional differential equations have been of great interest due to the intensive development of the theory of fractional calculus itself and its applications. For example, for fractional initial value problems, the existence and multiplicity of solutions (or positive solutions) were discussed in . On the other hand, for fractional boundary value problems (FBVPs), Agarwal et al. considered a two-point boundary value problem at nonresonance, and Bai considered a -point boundary value problem at resonance. For more papers on FBVPs, see and the references therein.\n\nThe turbulent flow in a porous medium is a fundamental mechanics problem. For studying this type of problems, Leibenson introduced the -Laplacian equation as follows: where . Obviously, is invertible and its inverse operator is , where is a constant such that .\n\nIn the past few decades, many important results relative to (2) with certain boundary value conditions have been obtained. We refer the reader to and the references cited therein. For boundary value problems of fractional -Laplacian equations, Chen and Liu considered an antiperiodic boundary value problem with the following form: where , , , and is Caputo fractional derivative. Under certain nonlinear growth conditions of the nonlinearity, an existence result was obtained by using degree theory. In addition, Yao et al. studied a three-point boundary value problem given by where is the standard Riemann-Liouville derivative with , , , , and the constant is a positive number satisfying . 
The monotone iterative technique was applied to establish the existence results on multiple positive solutions in .\n\nMotivated by the works mentioned previously, in this paper, we investigate the existence of solutions for fractional -Laplacian equation of the form subject to either boundary value conditions or where , , , is a Caputo fractional derivative, and is continuous.\n\nNote that the nonlinear operator reduces to the linear operator when and the additive index law holds under some reasonable constraints on the function .\n\nThe rest of this paper is organized as follows. Section 2 contains some necessary notations, definitions, and lemmas. In Section 3, based on Schaefer’s fixed point theorem, we establish two theorems on existence of solutions for FBVP (5) and (6) (Theorem 7) and FBVP (5) and (7) (Theorem 8). Finally, in Section 4, an explicit example is given to illustrate the main results. Our results are different from those of bibliographies listed in the previous texts.\n\n#### 2. Preliminaries\n\nFor the convenience of the reader, we present here some necessary basic knowledge and definitions about fractional calculus theory, which can be found, for instance, in [31, 32].\n\nDefinition 1. The Riemann-Liouville fractional integral operator of order of a function is given by provided that the right side integral is pointwise defined on .\n\nDefinition 2. The Caputo fractional derivative of order of a continuous function is given by where is the smallest integer greater than or equal to , provided that the right side integral is pointwise defined on .\n\nLemma 3 (see ). Let . Assume that . Then the following equality holds: where , ; here is the smallest integer greater than or equal to .\n\nLemma 4 (see ). For fixed , let one define Then the equation has a unique solution .\n\nIn this paper, we take with the norm and with the norm . By means of the linear functional analysis theory, we can prove that is a Banach space.\n\n#### 3. Existence Results\n\nIn this section, two theorems on existence of solutions for FBVP (5) and (6) and FBVP (5) and (7) will be given under nonlinear growth restriction of .\n\nAs a consequence of Lemma 3, we have the following results that are useful in what follows.\n\nLemma 5. Given , the unique solution of is where\n\nProof. Assume that satisfies the equation of FBVP (13); then Lemma 3 implies that From the boundary value condition , one has Thus, we have By condition , we get . The proof is complete.\n\nDefine the operator by where is the Nemytskii operator defined by Clearly, the fixed points of the operator are solutions of FBVP (5) and (6).\n\nLemma 6. Given , the unique solution of is where here is a constant dependent on .\n\nProof. Assume that satisfies the equation of FBVP (21); then Lemma 3 implies that From condition , one has Based on Lemma 4, we know that (25) has a unique solution . Moreover, by the integral mean value theorem, there exists a constant such that , which implies that . Thus, we have Hence From condition , we get . The proof is complete.\n\nDefine the operator by where and is the Nemytskii operator defined by (20). Clearly, the fixed points of the operator are solutions of FBVP (5) and (7).\n\nOur first result, based on Schaefer’s fixed point theorem and Lemma 5, is stated as follows.\n\nTheorem 7. Let be continuous. Assume that there exist nonnegative functions such that Then FBVP (5) and (6) has at least one solution, provided that\n\nProof. We will use Schaefer’s fixed point theorem to prove that has a fixed point. 
The proof will be given in the following two steps.\nStep 1. is completely continuous.\nLet be an open bounded subset. By the continuity of , we can get that is continuous and is bounded. Moreover, there exists a constant such that , for all , . Thus, in view of the Arzelà-Ascoli theorem, we need only to prove that is equicontinuous.\nFor and , we have Since is uniformly continuous on , we can obtain that is equicontinuous. A similar proof can show that is equicontinuous. This, together with the uniform continuity of on , yields that is also equicontinuous.\nStep 2 (priori bounds). Set Now it remains to show that the set is bounded.\nFrom Lemma 3 and boundary value condition , one has Thus, we get That is,\nFor , we have So, from , we obtain that which, together with and (35), yields that In view of (30), from (38), we can see that there exists a constant such that Thus, from (35), we get Combining (39) with (40), we have\nAs a consequence of Schaefer’s fixed point theorem, we deduce that has a fixed point which is the solution of FBVP (5) and (6). The proof is complete.\n\nOur second result, based on Schaefer’s fixed point theorem and Lemma 6, is stated as follows.\n\nTheorem 8. Let be continuous. Suppose that holds; then FBVP (5) and (7) has at least one solution, provided that (30) is satisfied.\n\nProof. The proof work is similar to the proof of Theorem 7, so we omit the details.\n\n#### 4. An Example\n\nIn this section, we will give an example to illustrate our main results.\n\nExample 1. Consider the following fractional -Laplacian equation: Corresponding to (5), we get that , , , and Choose , , and . By a simple calculation, we can obtain that , and Obviously, (42) subject to boundary value conditions (6) (or (7)) satisfies all assumptions of Theorem 7 (or Theorem 8). Hence, FBVP (42) and (6) (or FBVP (42) and (7)) has at least one solution.\n\n#### Acknowledgments\n\nThis work was supported by the Fundamental Research Funds for the Central Universities (2012QNA50) and the National Natural Science Foundation of China (11271364)."
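A closing note for readers of the extracted text above, in which most inline symbols were lost: the p-Laplacian operator referred to throughout is conventionally defined as below. This is the standard definition, stated here as an assumption about the stripped notation rather than recovered from the paper itself.

```latex
\varphi_p(s) = |s|^{p-2}\, s \quad (p > 1), \qquad
\varphi_p^{-1} = \varphi_q \quad \text{where } \frac{1}{p} + \frac{1}{q} = 1 .
```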
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8599623,"math_prob":0.9888475,"size":16918,"snap":"2022-40-2023-06","text_gpt3_token_len":4301,"char_repetition_ratio":0.16235071,"word_repetition_ratio":0.14124294,"special_character_ratio":0.25198013,"punctuation_ratio":0.21600714,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990263,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-03T17:51:00Z\",\"WARC-Record-ID\":\"<urn:uuid:9a627864-f25b-4bf2-b474-10b3a73d091c>\",\"Content-Length\":\"1049027\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9457a28f-4f6e-4bd7-b4b5-4b1a35a217ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:331c70aa-4895-436d-9cc1-2bfff82da139>\",\"WARC-IP-Address\":\"13.249.39.61\",\"WARC-Target-URI\":\"https://www.hindawi.com/journals/aaa/2013/432509/\",\"WARC-Payload-Digest\":\"sha1:QHP7QPAS2X2BJQ2G554E6UKQSKO2PFW2\",\"WARC-Block-Digest\":\"sha1:Y2JSTRFXDPYCR4ML5EAC7EEPHLKRGSSA\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337428.0_warc_CC-MAIN-20221003164901-20221003194901-00655.warc.gz\"}"} |
https://www.java67.com/2016/10/ | [
"HTML\n\n# How to transpose a matrix in Java? Example Tutorial\n\nHello guys, continuing the tradition of this week, where I have mostly published articles about coding exercises for Java beginners, today also I am going to share an interesting coding problem, many of you have solved in your college or school days. Yes, it's about writing a Java program to transpose a matrix. In the last couple of tutorials, we have learned to how to add and subtract two matrices in Java (see here) and how to multiply two matrices in Java (see here). In this tutorial, I'll show you how to transpose a matrix in Java. The transpose of a matrix is a new matrix whose rows are the columns of the original. This means when you transpose a matrix the columns of the new matrix becomes the rows of the original matrix and vice-versa. In short, to transpose a matrix, just swap the rows and columns of the matrix. For example, if you have a matrix with 2 rows and 3 columns then transpose of that matrix will contain 3 rows and two columns.\n\n# How to Add and Subtract Two Matrices in Java\n\nThis is the second program in the series of matrices related programming exercises in Java. In the last program, you have learned matrix multiplication and in this program, you will learn how to perform addition and subtraction of two matrices in Java. We'll create methods to calculate both sum and difference of two matrices in Java program. In Mathematics, a matrix is a rectangular array with two dimensions known as rows and columns. In Java, your can use a two-dimensional array to represent a matrix because it also has two dimensions rows and columns. Since a 2D array is nothing but an array of the array in Java, the length of the outer array is equal to the number of rows and length of sub-array is equal to the number of columns.\n\n# How to calculate sum and difference of two complex numbers in Java\n\nFrom the last couple of articles, I am writing about coding exercises for beginners e.g. yesterday you learned how to write a program from matrix multiplication in Java (see here) and a couple of days back, you have learned recursive binary search algorithm. To continue that tradition today I am going to show you how to write a program for calculating sum and difference of two complex numbers in Java. If you remember the complex number from you maths classes, it has two part real and imaginary and to add a complex number we add their real and imaginary part separately, similar to subtract complex number we minus their real and imaginary part separately. For example, if first complex number is A + iB and the second complex number is X + iY then the addition of these two complex number will be equal to (A +X ) + i(B + Y).\n\n# How to Multiply Two Matrices in Java\n\nI first learned about matrix in class 12th and I first wrote the program to multiply two matrices on my first semester of engineering, so, when I thought about this program, It brings a lot of memories from the past. It's actually a beginner exercise to develop coding logic, much like Fibonacci, prime, and palindrome check, but what make this program interesting is the use of the two-dimensional array to represent a matrix in Java. Since matrix has both rows and columns, two-dimensional array just naturally fits into the requirement. Another important thing to solve this problem is to remember the rule of matrix multiplication in mathematics. If you don't remember the rule, just forget about how to solve this problem, unless you have access to Google. 
So, first, we'll refresh the rules of multiplication and then we'll look into coding aspect.\n\n# java.lang.OutOfMemoryError: unable to create new native thread - Cause and Solution\n\nThere are several types of OutOfMemoryError in Java e.g. OutOfMemoryError related to Java heap space and permgen space, and a new one coming in Java 8, Java.lang.OutOfMemoryError: MetaSpace. Each and every OutOfMemoryError has their own unique reason and corresponding unique solution. For example, java.langOutOfMemoryError: Java Heap Space comes when the application has exhausted all heap memory and tries to create an object which requires further memory allocation, that time JVM throws this error to tell the application that it's not possible to create any object. Similarly java.lang.OutOfMemoryError: PermGen Space comes when there is no more memory in permgen space and application tries to load more classes (as class metadata is stored in this area) or tries to create new String (because prior to Java 7 String pool has also existed on permgen space).\n\n# 5 Difference between StringBuffer, StringBuilder and String in Java\n\nThough all three classes StringBuffer, StringBuilder and String are used for representing text data in Java there are some significant differences between them. One of the most notable differences between StringBuilder, StringBuffer, and String in Java is that both StringBuffer and StrinBuilder are Mutable class but String is Immutable in Java. What this means is, you can add, remove or replace characters from StringBuffer and StringBuilder object but any change on String object e.g. converting uppercase to lowercase or appending a new character using String concatenation will always result in a new String object. Another key difference between them is that both StringBuffer and String are thread-safe but StringBuilder is not thread-safe in Java. String achieves its thread-safety from Immutability but StringBuffer achieves it via synchronization, which is also the main difference between the StringBuffer and StringBuilder in Java.\n\n# JDBC - How to solve java.sql.BatchUpdateException: String or binary data would be truncated.\n\nRecently I was working in Java application which uses Microsoft SQL Server at its backend. The architecture of Java application was old i.e. even though there was heavy database communication back and forth there was no ORM used e.g. no Hibernate, JPA, or Apache iBatis. The Java application was using old DAO design pattern, where the DB related classes which are responsible for loading and storing data from database was calling stored procedure to do their Job. These stored procedure takes data from Java application and insert into SQL Server tables. One day, one of my collegue called me to troubleshoot \"java.sql.BatchUpdateException: String or binary data would be truncated\", which it's application was throwing and he has no clue whatsoever that what is wrong with the data he is getting from other system and trying to store.\n\n# How to calculate average of all numbers of array in Java\n\nIn the last article, I teach you how to calculate the sum of all numbers in a given array and in this article, we'll go one more step. This time, you need to write a program to calculate the average of all numbers from a given array, for example, you will be passed salaries of Java developers in different states in the USA and you need to calculate the average salary of Java developer in the USA. 
# How to calculate sum of array elements in Java

In today's coding problem, we'll see how to write a program to calculate the sum of array elements in Java. You need to write a method which accepts an integer array and returns the total sum of all the elements. The array could contain both positive and negative numbers, but only whole (integer) numbers are allowed. The array can also be null or empty, so make sure your solution handles those cases as well. In the case of a null or empty array, your program can throw an IllegalArgumentException. An empty array means an array whose length is zero, i.e. there is no element inside it. Well, that's enough for the requirements of this simple coding problem. The solution is really simple: just loop through the array and keep adding elements into the sum until you have processed all the elements.

# 10 Reasons for java.lang.NumberFormatException in Java - Solution

The NumberFormatException is one of the most common errors in Java applications, along with NullPointerException. This error comes when you try to convert a String into a numeric data type, e.g. int, float, double, long, short, char or byte. The data type conversion methods like Integer.parseInt(), Float.parseFloat(), Double.parseDouble(), and Long.parseLong() throw NumberFormatException to signal that the input String is not a valid numeric value. Even though the root cause is always something which cannot be converted into a number, there are many reasons and inputs due to which NumberFormatException occurs in a Java application. Most of the time I have faced this error while converting a String to int or Integer in Java, but there are other scenarios as well where this error occurs. In this article, I am sharing 10 of the most common reasons for java.lang.NumberFormatException in Java programs.

# How to calculate area of triangle in Java - Program

Writing a Java program to calculate the area of a triangle is one of the basic programming exercises to develop coding sense in beginner programmers. Like many mathematical conceptual programs, e.g. square root, factorial, or prime number, this also serves as a good exercise for beginners. Now, if you remember your maths, you have probably seen two main ways to calculate the area of a triangle: using vertices and using base and height. In this program, I have created two methods to calculate the area of a triangle both ways. In the first method, area(Point a, Point b, Point c), we expect the coordinates of the three vertices of the triangle and then calculate the area using the formula (Ax(By - Cy) + Bx(Cy - Ay) + Cx(Ay - By))/2, while in the second method, area(int base, int height), we expect the values of base and height and then calculate the area of the triangle using the formula (base * height) / 2.

# 3 ways to convert String to JSON object in Java?

It's very common nowadays to receive a JSON String from a Java web service instead of XML, but unfortunately, the JDK doesn't yet support conversion from a JSON String to a JSON object. Keeping JSON as a String is not a good option because you cannot operate on it easily; you need to convert it into a JSON object before you do anything else, e.g. retrieve a field or set different values. Fortunately, there are many open-source libraries which allow you to create a JSON object from a JSON-formatted String, e.g. Gson from Google, Jackson, and json-simple. In this tutorial, you will learn how to use these 3 main libraries to do this conversion with step-by-step examples.
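As a rough illustration of the kind of conversion described above (a sketch of my own, assuming the Gson and org.json libraries are on the classpath; it is not the article's step-by-step example):

```java
import com.google.gson.Gson;
import org.json.JSONObject;

import java.util.Map;

public class JsonDemo {
    public static void main(String[] args) {
        String json = "{\"name\":\"joe\",\"salary\":100000}";

        // Option 1: org.json - wrap the String in a JSONObject and read fields directly.
        JSONObject obj = new JSONObject(json);
        System.out.println(obj.getString("name"));   // joe
        System.out.println(obj.getInt("salary"));    // 100000

        // Option 2: Gson - bind the String to a Map (or to your own POJO class).
        Map<?, ?> map = new Gson().fromJson(json, Map.class);
        System.out.println(map.get("name"));         // joe
    }
}
```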
# 3 Ways to Reverse an Array in Java - Coding Interview Question

One of the common coding questions is: how do you reverse an array in Java? Well, there are multiple ways to solve this problem. You can reverse an array by writing your own function, which loops through the array and swaps elements until the whole array is reversed. That should actually be your first approach in interviews. Later you can impress the interviewer with a couple of other tricks which are specific to the Java development world. For example, you can reverse an array by converting the array to an ArrayList and then using this code to reverse the ArrayList. You can also use the Apache Commons ArrayUtils.reverse() method to reverse any array in Java. This method is overloaded to reverse byte, short, long, int, float, double and String arrays. You can use any of these methods depending upon your array type."
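For reference, a minimal in-place reversal along the lines of the first approach mentioned above (an illustrative sketch, not the article's listing):

```java
import java.util.Arrays;

public class ReverseArray {

    // Swap the two ends and move inward until the indices meet.
    public static void reverse(int[] a) {
        for (int i = 0, j = a.length - 1; i < j; i++, j--) {
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5};
        reverse(a);
        System.out.println(Arrays.toString(a)); // [5, 4, 3, 2, 1]
    }
}
```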
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90767497,"math_prob":0.7706526,"size":11833,"snap":"2019-51-2020-05","text_gpt3_token_len":2365,"char_repetition_ratio":0.1323865,"word_repetition_ratio":0.026170107,"special_character_ratio":0.1963154,"punctuation_ratio":0.10128832,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9757091,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T22:41:49Z\",\"WARC-Record-ID\":\"<urn:uuid:091bb468-7013-4e73-bf41-8a86de6762c5>\",\"Content-Length\":\"276211\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8a26dd1d-b656-41f0-b6c3-bb62e45be956>\",\"WARC-Concurrent-To\":\"<urn:uuid:194cdec1-1f60-474b-b4fc-d3f9e9ede080>\",\"WARC-IP-Address\":\"172.217.5.243\",\"WARC-Target-URI\":\"https://www.java67.com/2016/10/\",\"WARC-Payload-Digest\":\"sha1:NJHIERCFZDSBN66CNEVBU3M7IMTT67HI\",\"WARC-Block-Digest\":\"sha1:YTR645SBYQ2M2WMY42E4IVYM7HYKN466\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540491491.18_warc_CC-MAIN-20191206222837-20191207010837-00198.warc.gz\"}"} |
https://www.math-for-all-grades.com/Worksheet-on-weighted-averages.html | [
"## WORKSHEET ON WEIGHTED AVERAGES:\n\nHi welcome visitors to our website:\n\nin this worksheet, I am going to discuss in detail the concept of weighted average and one very interesting short-cut to solve questions on weighted average.\n\nTo enrich your learning, you can also visit our you tube on the concept of weighted average.\n\nSo, let’s get on with the formula of weighted average, the property of weighted average and finally the short cut to find the ratio of the number of terms in two groups whose individual averages are given.\n\nLet us take up the following problem on weighted average:\n\nSOLVED PROBLEM 1:\n\nIn a group of 80 members, the number of males is 45 and the number of females is 35. The average age of the males is m and that of the females is f. compare the average age of the two groups compared to the average of the ages of the two groups i.e. (f + m)/2. f > m.\n\nSolution:\n\nFirst all, let us set up the data for weighted average as we do normally as follows:\n\nIf you have watched our webpage on weighted average and its property, then you will understand that weighted average will be a number nearer to the average of the group of males i.e. ‘m’, which is the average of the group having more number of terms.\n\n(here, terms represent ages of the males and females)\n\nNow, tell me if a number is closer to the average ‘m’ than to the other average ‘f’, then what will it be compared to (f + m)/2 which is the midpoint of the two average numbers m and f?\n\nAs per the data written above, you may conclude that since the weighted average age of the two groups males and females lies closer to the average age ‘m’ from the property of weighted average, therefore from the diagram drawn above in the data representation, weighted average age of the two groups will be less than the midpoint (f + m)/2.\n\nHere lies the mistake often committed by students while solving this question on weighted average age of the two groups of males and female in the above question.\n\nYour conclusion that weighted average age of the two groups is nearer to the average age ‘m’ than to the other average ‘f’ is true; but compared to (f + m)/2 it is less or greater cannot be still determined without knowing the relationship between ‘f’ and ‘m’.\n\nBecause there can exist three relationships between ‘f’ and ‘m’:\nf < m or f = m or f > m\n\ni.e. the average age of the females group may be less than the average age of the males or\n\nthe average age of the group of females may be same as that of the average age of the males or even\n\nthe average age of the group of females could be greater than the average of the males.\n\nThe exploration of the above three possibilities of relationship between the average age of the females and the average age of the males is triggered by the information given by the question : f > m\n\nSuppose the average age of the two groups of females and the males is same i.e. f = m, then how can you conclude from the property of weighted average that the weighted average age of the two groups will be less than the average of the average ages: (f + m)/2.\n\nWhat I am trying to drive home the point is that the weighted average property will enable you to tell that weighted average age of the two groups will definitely be closer to the average of the males, but what the property cannot tell you is what it will be compared to the average (f + m)/2.\n\nTo be able to determine the second point above i.e. 
what is weighted average compared to the average (f + m)/2, we need to know the relationship between the average ages ‘m’ and ‘f’, i.e. which of the three relationships between them exists as explored above.\n\nThat’s why the question has supplied the additional detail f > m.\nWe must realize the importance of the information f > m and its purpose is to determine if weighted average age, which is closer to the average age ‘m’ of the males, is less than or equal to or greater than the average of the average ages : f and m i.e. (f + m)/2\n\nNow, after having ascertained the purpose of the relationship between f and m i.e. f > m, we can now give the answer that weighted average age of the two groups will be less than the average (f + m)/2.\n\nIf it is not still clear, then we can make the explanation more clear by assuming some numbers for f and m based on the relationship f > m.\n\nLet us take average age of the males’ group, m as 20 and the average age of the females’ group as some 40\n\nSo by substituting the above assumed values for f and m, the average (f + m)/2 will be (40 + 20)/2 = 60/2 = 30\n\nNow, you can clearly see that the weighted average age of the two groups males and females will be less than 30.\n\nHow? Its not clear still. Ok, we will go as below:\n\nFrom the weighted average property, what we can say is the weighted average age of the two males and females whose individual average ages have been taken as 20 and 40 will be a number closer to the average age of the males which is 20.\n\nA number which is closer to the average age 20 than to the other average age 40 must by implication be less than the midpoint of 20 and 40 i.e. 30\n\nFor example, 28 is closer to 20 than to 40 as 28 is only 8 numbers away from 20 but slightly more away from 40, i.e. 12 away from 40.\n\nNow figure out in your mind what 28 is compared to 30.\nObviously less.\n\nThat’s why weighted average age of the two groups by being nearer to the average age of the males i.e. ‘m’ will be also less than the average of the average ages i.e. (f + m)/2 by taking into consideration the information f > m given in the question.\n\nNow I will tell you a short cut to solve a question on weighted average:\n\nConsider the following problem:\n\nThe average marks of a group having ‘p’ students in a class is 75 and the average marks of another group having ‘n’ students is 92. Find what value the fraction p/n will be, if the weighted average marks of the two groups is 84\n\nSolution:\n\nFirst of all, present the data for the weighted average as we do usually:\n\nTo find the value of the fraction p/n, move in the direction of the arrows as drawn above in the presentation:\n\np / n = (92 – 84)/(84 – 75) = 8/9.\n\nThat’s it.\n\nIsn’t that a wonderful short-cut.\n\nWell, the above fraction p/n signifies another important property of weighted average, which will discuss in another worksheet."
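For readers who want to see why the shortcut works, here is a short derivation of the p/n ratio using the standard alligation argument (added here for illustration; it is not part of the worksheet):

```latex
\begin{align*}
\text{Weighted average: } \frac{75\,p + 92\,n}{p + n} &= 84 \\
75p + 92n &= 84p + 84n \\
92n - 84n &= 84p - 75p \\
8n &= 9p \\
\frac{p}{n} &= \frac{92 - 84}{84 - 75} = \frac{8}{9}
\end{align*}
```

In words: the group sizes are in the inverse ratio of how far each group's average sits from the combined average, which is exactly what "moving in the direction of the arrows" computes.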
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9541041,"math_prob":0.9913733,"size":6229,"snap":"2022-40-2023-06","text_gpt3_token_len":1452,"char_repetition_ratio":0.26473895,"word_repetition_ratio":0.13289036,"special_character_ratio":0.24418044,"punctuation_ratio":0.08175624,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99558747,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T11:01:16Z\",\"WARC-Record-ID\":\"<urn:uuid:c0df443f-db45-4c0c-a954-9aa59e19ba56>\",\"Content-Length\":\"38480\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2e56960d-d928-40f9-8f6f-5ac940cc88d7>\",\"WARC-Concurrent-To\":\"<urn:uuid:6b67a823-b36b-482f-a4f3-baaaf448de93>\",\"WARC-IP-Address\":\"104.21.77.198\",\"WARC-Target-URI\":\"https://www.math-for-all-grades.com/Worksheet-on-weighted-averages.html\",\"WARC-Payload-Digest\":\"sha1:7QEBB75TPYK4JAT2DIZNAYQYOGEACC2E\",\"WARC-Block-Digest\":\"sha1:XYDL5MLPEWFC6TU7RA2CQLXWFUD262PY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335350.36_warc_CC-MAIN-20220929100506-20220929130506-00778.warc.gz\"}"} |
https://gmplib.org/list-archives/gmp-discuss/2004-October/001378.html | [
"# Computing A^x (mod n)\n\nPhilip Lee rocketman768 at gmail.com\nSat Oct 2 18:57:14 CEST 2004\n\n```I'm just wondering the algorithm behind mpz_powm(). It can't be doing\nthe calculation (mod n) inside whatever loop it's running, because if\nI take a a 572-bit number and raise it to a 576-bit number modulo a\n576-bit number, the program will terminate. I'm guessing it is doing\nthe exponential junk first and then just taking the modulus right?\nBecause this would cause an extreme overflow in such a case as mine.\n(As you can tell, I'm doing some RSA stuff ).\n\nSo my question is, \"if the mpz_powm() function isn't doing a running\ncalculation (mod n ), why is it not?\" As I said, this would cause\noverflows which would be prevented if it was instead doing a running\ncalculation (mod n). Anyways, just wondering.\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9055826,"math_prob":0.75676066,"size":848,"snap":"2023-14-2023-23","text_gpt3_token_len":219,"char_repetition_ratio":0.109004736,"word_repetition_ratio":0.013888889,"special_character_ratio":0.25589624,"punctuation_ratio":0.10465116,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9805292,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T00:08:01Z\",\"WARC-Record-ID\":\"<urn:uuid:255c7c8d-cdc7-4cdd-9255-16d5ef5e8402>\",\"Content-Length\":\"3000\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f8751048-b19b-4471-bb6e-01492e6b84a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:f28b0db6-fd58-4ea3-bafd-5502fe85d261>\",\"WARC-IP-Address\":\"130.242.124.102\",\"WARC-Target-URI\":\"https://gmplib.org/list-archives/gmp-discuss/2004-October/001378.html\",\"WARC-Payload-Digest\":\"sha1:4NUY3LL2LX2EBV76BUEDMZBZ4WAE2KBJ\",\"WARC-Block-Digest\":\"sha1:H46CKIUNE47YC7NTNOQIKWVSGPV54VMB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224652184.68_warc_CC-MAIN-20230605221713-20230606011713-00395.warc.gz\"}"} |
http://mathcentral.uregina.ca/QQ/database/QQ.09.06/patricia4.html | [
"Subject: Problem Solving Inequalities Name: Patricia Who are you: Student Your quiz grades are 73, 75, 89 and 91. What is the lowest grade you can obtain on the last quiz and still achieve an average at least 85? Thank you. Hi Patricia, If you had all 5 grades and the average was 85 then the sum of the 5 grades divided by 5 would be 85. Thus the sum of the 5 grades would be 5",
null,
"85. Penny Hi Patricia. I'll show you how to do this with different numbers: Let's say your quiz grades are 75, 85, 95 and want to achieve an 80 average overall. What minimum score do you need on the fourth and final quiz? Let x be the score of the final quiz. Then the average for the four quizzes is: (75 + 85 + 95 + x ) / 4 and we want a minimum average of 80, so: (75 + 85 + 95 + x ) / 4 ≥ 80 Solve for x: (75 + 85 + 95 + x ) ≥ 80",
null,
"4 x ≥ 80",
null,
"4 - 75 - 85 - 95 x ≥ 65 So for this question, you would need at least a 64 on the final quiz to achieve an overall average of 80. Use this method to solve your own question. Stephen La Rocque."
] | [
null,
"http://mathcentral.uregina.ca/images/multiply.gif",
null,
"http://mathcentral.uregina.ca/images/multiply.gif",
null,
"http://mathcentral.uregina.ca/images/multiply.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90940756,"math_prob":0.97308207,"size":792,"snap":"2020-24-2020-29","text_gpt3_token_len":253,"char_repetition_ratio":0.12182741,"word_repetition_ratio":0.10869565,"special_character_ratio":0.3699495,"punctuation_ratio":0.097826086,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9928106,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-03T20:20:27Z\",\"WARC-Record-ID\":\"<urn:uuid:bb5e3c46-c833-4eba-9a28-ffc8f5f2acba>\",\"Content-Length\":\"6763\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:52366b8b-24ec-452a-a8a8-3e68ddd0fe12>\",\"WARC-Concurrent-To\":\"<urn:uuid:68d36697-384c-499c-8beb-359bce4ade07>\",\"WARC-IP-Address\":\"142.3.156.43\",\"WARC-Target-URI\":\"http://mathcentral.uregina.ca/QQ/database/QQ.09.06/patricia4.html\",\"WARC-Payload-Digest\":\"sha1:PCTKN6RAO2HUIOSUKJL4UBRSPLR7BTH7\",\"WARC-Block-Digest\":\"sha1:F7BPY6A35J4SJLTPIL2KDFFCRHX6GSAZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655882934.6_warc_CC-MAIN-20200703184459-20200703214459-00285.warc.gz\"}"} |
https://education.blurtit.com/181877/2x59 | [
"# 2x+5=9?\n\n2x+5=9 2x=9-5=4 so, x=2\nthanked the writer.\nRULES TO SOLVE SUCH TYPE OF PROBLEMS IS :-\n\nfirst takes all the \"x\" variable on one side and all the single digit{constants} on\nthe other, remember when shifting the numbers , their signs should get change...\n\n2x + 5 = 9\n2x = 9 - 5\n2x = 4\nx = 4/2",
null,
""
] | [
null,
"https://cf.blurtitcdn.com/var/avatar/thumb_default_avatar.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7845951,"math_prob":0.99552155,"size":584,"snap":"2021-43-2021-49","text_gpt3_token_len":226,"char_repetition_ratio":0.10344828,"word_repetition_ratio":0.752,"special_character_ratio":0.38869864,"punctuation_ratio":0.120567374,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98896503,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T23:25:58Z\",\"WARC-Record-ID\":\"<urn:uuid:a41a90e9-059b-4df7-b5d6-562aa3badb61>\",\"Content-Length\":\"44458\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:56778836-ba06-4a23-8cef-373a59f3eb42>\",\"WARC-Concurrent-To\":\"<urn:uuid:dde1aafe-98ac-4cb6-8f5f-176079886811>\",\"WARC-IP-Address\":\"172.67.65.240\",\"WARC-Target-URI\":\"https://education.blurtit.com/181877/2x59\",\"WARC-Payload-Digest\":\"sha1:3M55ALL7KY3JEL6BQX2CDPBQHALFE4QV\",\"WARC-Block-Digest\":\"sha1:BEJ6AVEPXVVL57LELQC55A7RAK6RAZK4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585449.31_warc_CC-MAIN-20211021230549-20211022020549-00513.warc.gz\"}"} |
https://philoid.com/question/34703-four-equal-circles-each-of-radius-5-cm-touch-each-other-as-shown-in-the-figure-find-the-area-included-between-them-take-314 | [
"",
null,
"## Book: RS Aggarwal - Mathematics\n\n### Chapter: 18. Area of Circle, Sector and Segment\n\n#### Subject: Maths - Class 10th\n\n##### Q. No. 37 of Exercise 18B\n\nListen NCERT Audio Books - Kitabein Ab Bolengi\n\n37\n##### Four equal circles, each of radius 5 cm, touch each other, as shown in the figure. Find the area included between them. [Take π = 3.14]",
null,
"Here, first we join the center of all adjacent circles then the distance between the center of circles touching each other is equal to the side of the square formed by joining the center of adjacent circles. Therefore, we can say that the side of the square equal to the twice of the radius of circle. Now by simply calculating the area of the 4 quadrants and then subtracting it from the area of the square we can easily calculate the area of the shaded region.\n\nGiven radius of each circle = r = 5 cm\n\nCentral angle of each sector formed at corner = θ = 90°\n\nSide of square ABCD = a = 2×r = 2×5 = 10 cm",
null,
"where R = radius of circle",
null,
"(putting value of r and θ)",
null,
"",
null,
"Area of all 4 quadrants = 25π eqn2\n\nArea of square = side×side = a×a = a2\n\nArea of square = 102 (putting value of side of square)\n\nArea of square = 100 cm2 eqn3\n\nArea of shaded region = Area of square – Area of all 4 quadrants\n\nArea of shaded region = 100 – 25π (from eqn3 and eqn2)\n\n= 100 – (25×3.14) (put π = 3.14)\n\n= 100 – 78.5\n\n= 21.5 cm2\n\nThe area of shaded region is 21.5 cm2.\n\n37\n\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46\n47\n48\n49\n50\n51\n52\n53\n54\n55\n56\n57\n58\n59\n60"
] | [
null,
"https://static.philoid.co/philoid-com/assets/images/cc-optimised/jrsa1cc.jpg",
null,
"https://static.insightsonindia.in/ncertusercontent/solutions/",
null,
"https://static.insightsonindia.in/ncertusercontent/solutions/",
null,
"https://static.insightsonindia.in/ncertusercontent/solutions/",
null,
"https://static.insightsonindia.in/ncertusercontent/solutions/",
null,
"https://static.insightsonindia.in/ncertusercontent/solutions/",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89612645,"math_prob":0.9988864,"size":1948,"snap":"2023-40-2023-50","text_gpt3_token_len":605,"char_repetition_ratio":0.16100822,"word_repetition_ratio":0.031325303,"special_character_ratio":0.31570843,"punctuation_ratio":0.10046729,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99787086,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T00:54:30Z\",\"WARC-Record-ID\":\"<urn:uuid:fb3e1ab1-e1b0-446c-8a42-160f3a85708f>\",\"Content-Length\":\"107611\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:93fb0b1b-1a3c-473e-af19-66817b343232>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa268999-9c6a-4384-ac18-cd60ac3e2222>\",\"WARC-IP-Address\":\"172.105.252.124\",\"WARC-Target-URI\":\"https://philoid.com/question/34703-four-equal-circles-each-of-radius-5-cm-touch-each-other-as-shown-in-the-figure-find-the-area-included-between-them-take-314\",\"WARC-Payload-Digest\":\"sha1:SFGV3A2UNEEVQXAFPSWKXJB2Y7AMKOCI\",\"WARC-Block-Digest\":\"sha1:NJNQNNJ4S7HIHU2C5Q3UT4KM6FCAVW7Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506429.78_warc_CC-MAIN-20230922234442-20230923024442-00544.warc.gz\"}"} |
https://portal.research.lu.se/portal/en/publications/twoview-orthographic-epipolar-geometry(0b2d14df-4a11-4639-887a-de2ed8101e60).html | [
"Two-View Orthographic Epipolar Geometry: Minimal and Optimal Solvers\n\nResearch output: Contribution to journalArticle\n\nAbstract\n\nWe will in this paper present methods and algorithms for estimating two-view geometry based on an orthographic camera model. We use a previously neglected nonlinear criterion on rigidity to estimate the calibrated essential matrix. We give efficient algorithms for estimating it minimally (using only three point correspondences), in a least squares sense (using four or more point correspondences), and optimally with respect to the number of inliers. The inlier-optimal algorithm is based on a three-point solver and gives a fourth-order polynomial time algorithm. These methods can be used as building blocks to robustly find inlier correspondences in the presence of high degrees of outliers. We show experimentally that our methods can be used in many instances, where the orthographic camera model isn’t generally used. A case of special interest is situations with repetitive structures, which give high amounts of outliers in the initial feature point matching.\n\nDetails\n\nAuthors\nOrganisations\nResearch areas and keywords\n\nSubject classification (UKÄ) – MANDATORY\n\n• Computational Mathematics\nOriginal language English 163-173 Journal of Mathematical Imaging and Vision 60 2 Published - 2018 Feb Research Yes"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8232522,"math_prob":0.6390784,"size":1432,"snap":"2019-43-2019-47","text_gpt3_token_len":283,"char_repetition_ratio":0.083333336,"word_repetition_ratio":0.0,"special_character_ratio":0.18784916,"punctuation_ratio":0.058558557,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9519454,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-15T17:18:39Z\",\"WARC-Record-ID\":\"<urn:uuid:92d1b310-b594-4623-99b7-849cfeb25d01>\",\"Content-Length\":\"18292\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b0adb3ed-33e4-4a6f-818d-70682f79cbaa>\",\"WARC-Concurrent-To\":\"<urn:uuid:c029df59-685a-406d-b20b-c667584cb82c>\",\"WARC-IP-Address\":\"130.235.140.134\",\"WARC-Target-URI\":\"https://portal.research.lu.se/portal/en/publications/twoview-orthographic-epipolar-geometry(0b2d14df-4a11-4639-887a-de2ed8101e60).html\",\"WARC-Payload-Digest\":\"sha1:ZNPNQ2J6BMJSQS26K6F6UOFM7JAWHDP2\",\"WARC-Block-Digest\":\"sha1:XA2DSULUV7C4BAZ66LTIBZCFCIKYKJUX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986660067.26_warc_CC-MAIN-20191015155056-20191015182556-00221.warc.gz\"}"} |
https://lovethedax.com/qa/question-what-is-difference-between-speed-and-velocity.html | [
"",
null,
"# Question: What Is Difference Between Speed And Velocity?\n\n## How can speed be calculated?\n\nThe formula for speed is speed = distance ÷ time.\n\nTo work out what the units are for speed, you need to know the units for distance and time.\n\nIn this example, distance is in metres (m) and time is in seconds (s), so the units will be in metres per second (m/s)..\n\n## What are the three differences between speed and velocity?\n\nVelocity: Velocity is a physical vector quantity. It has a magnitude as well as direction….Speed & Velocity.SpeedVelocitySpeed is a scalar quantityVelocity is a vector quantity.Speed ascertains how fast a body moves.Velocity ascertains the object’s speed and the direction it takes while moving.2 more rows\n\n## What is the SI unit of speed and velocity?\n\nmeter per secondThe SI unit of speed and velocity is the ratio of two — the meter per second .\n\n## How do you find speed and velocity?\n\nVelocity (v) is a vector quantity that measures displacement (or change in position, Δs) over the change in time (Δt), represented by the equation v = Δs/Δt. Speed (or rate, r) is a scalar quantity that measures the distance traveled (d) over the change in time (Δt), represented by the equation r = d/Δt.\n\n## What is the relationship between velocity and acceleration?\n\nAcceleration is the rate of change of velocity. (when velocity changes -> acceleration exists) If an object is changing its velocity, i.e. changing its speed or changing its direction, then it is said to be accelerating. Acceleration = Velocity / Time (Acceleration)\n\n## What is an example of speed and velocity?\n\nSpeed is how fast something moves. Velocity is speed with a direction. Saying Ariel the Dog runs at 9 km/h (kilometers per hour) is a speed….Speed and Velocity.SpeedVelocityHas:magnitudemagnitude and directionExample:60 km/h60 km/h NorthExample:5 m/s5 m/s upwards\n\n## Can velocity be negative?\n\nVelocity is a vector quantity. If we’re moving along a line, positive velocity means we’re moving in one direction, and negative velocity means we’re moving in the other direction. Speed is the magnitude of the velocity vector, and hence is always positive.\n\n## What is velocity in physics class 9?\n\nVelocity: Velocity is the speed of an object moving in a definite direction. The SI unit of velocity is also metre per second. Velocity is a vector quantity; it has both magnitude and direction.\n\n## What is the definition of speed velocity and acceleration?\n\nAverage speed is distance divided by time. Velocity is speed in a given direction. Acceleration is change in velocity divided by time. Movement can be shown in distance-time and velocity-time graphs.\n\n## Can speed be negative in physics?\n\nThe ratio of distance travelled and the time taken by a body can be zero but not negative. Since distance and time are positive quantities and speed is obtained by the ratio of these two quantities, speed cannot be negative.\n\n## What is the correct SI unit of velocity?\n\nmeter per secondThe SI unit of velocity is meter per second (m/s).\n\n## What is the difference between speed and velocity Class 9?\n\nSpeed is the rate at which an object covers distance in a particular time. … Velocity is the displacement traveled by a body per unit time.\n\n## Which is faster speed or velocity?\n\nSpeed is the magnitude of velocity. Velocity is the speed of an object plus its direction. Speed is called a scalar quantity and velocity is a vector quantity. 
The fastest possible speed in the universe is the speed of light.\n\n## What is the main difference between speed and velocity quizlet?\n\nWhat is the difference between speed and velocity? Speed is just how fast you travel but velocity is the speed in a given direction.\n\n## What are the similarities and differences between speed and velocity?\n\nSpeed, being a scalar quantity, is the rate at which an object covers distance. The average speed is the distance (a scalar quantity) per time ratio. Speed is ignorant of direction. On the other hand, velocity is a vector quantity; it is direction-aware.\n\n## What is SI unit of time?\n\nThe second, symbol s, is the SI unit of time. It is defined by taking the fixed numerical value of the caesium frequency Cs. , the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s–1.\n\n## What is definition of velocity?\n\nThe velocity of an object is the rate of change of its position with respect to a frame of reference, and is a function of time. … Velocity is a physical vector quantity; both magnitude and direction are needed to define it.\n\n## Which is an example of speed?\n\nAn example of speed is a car being driven 45 miles per hour. An example of speed is someone cleaning a room in 10 minutes. An example of speed is how quickly a jaguar runs. Speed is defined as to help someone or something along, or move too quickly.\n\n## What is the difference between speed and velocity Brainly?\n\nSpeed measures an object’s change in position, while velocity measures an object’s change in position per unit time.\n\n## What is acceleration and its SI unit?\n\nIn physics or physical science, acceleration (symbol: a) is defined as the rate of change (or derivative with respect to time) of velocity. It is thus a vector quantity with dimension length/time². In SI units, acceleration is measured in meters/second² using an accelerometer."
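A short worked example (added here for illustration; it is not part of the original Q&A) that makes the scalar/vector distinction concrete — a runner completes one 400 m lap of a track in 80 s, finishing where she started:

```latex
\text{average speed} = \frac{\text{distance}}{\text{time}} = \frac{400\ \text{m}}{80\ \text{s}} = 5\ \text{m/s},
\qquad
\text{average velocity} = \frac{\text{displacement}}{\text{time}} = \frac{0\ \text{m}}{80\ \text{s}} = 0\ \text{m/s}.
```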
] | [
null,
"https://mc.yandex.ru/watch/67749877",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92949843,"math_prob":0.9749104,"size":6152,"snap":"2021-04-2021-17","text_gpt3_token_len":1314,"char_repetition_ratio":0.21877033,"word_repetition_ratio":0.20207743,"special_character_ratio":0.21407673,"punctuation_ratio":0.111853085,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9982268,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-11T10:20:21Z\",\"WARC-Record-ID\":\"<urn:uuid:9f9bf61f-b116-4f52-bbc4-bb5138dff906>\",\"Content-Length\":\"41981\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0b05630e-753e-4bd4-8304-cc0fbc8f42d1>\",\"WARC-Concurrent-To\":\"<urn:uuid:b023a7c5-d41a-4f1e-9435-230434d8887f>\",\"WARC-IP-Address\":\"193.176.77.119\",\"WARC-Target-URI\":\"https://lovethedax.com/qa/question-what-is-difference-between-speed-and-velocity.html\",\"WARC-Payload-Digest\":\"sha1:UIUBDZARKIPJTYX67QX7WGXYVK3JD5JI\",\"WARC-Block-Digest\":\"sha1:PLYADEXY47I3YSCPIYIUYNBL7IJ54MKF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038061820.19_warc_CC-MAIN-20210411085610-20210411115610-00253.warc.gz\"}"} |
https://www.shaalaa.com/question-bank-solutions/distinguish-between-percentage-method-counting-elasticity-demand-geometric-method-counting-elasticity-demand-supply-curve-schedule_51762 | [
"Share\n\n# Distinguish Between :Percentage Method of Counting Elasticity of Demand and Geometric Method of Counting Elasticity of Demand. - Economics\n\nConceptSupply Curve and Schedule\n\n#### Question\n\nDistinguish between :\n\nPercentage method of counting elasticity of demand and Geometric method of counting elasticity of demand.\n\n#### Solution\n\n Basis of Difference Percentage method of counting elasticity of demand Geometric method of counting elasticity of demand Method $E_d = \\frac{\\text { Percentage change in demand}}{\\text { Percentage change in price}}$ $E_d = \\frac{\\text { Lower segment of the demand curve}}{\\text { Upper segment of the demand curve}}$ Also known as Arithmetic method Point elasticity method\nIs there an error in this question or solution?\n\n#### APPEARS IN\n\n2012-2013 (March) (with solutions)\nQuestion 3.1.3 | 2.00 marks\n\n#### Video TutorialsVIEW ALL \n\nSolution Distinguish Between :Percentage Method of Counting Elasticity of Demand and Geometric Method of Counting Elasticity of Demand. Concept: Supply Curve and Schedule.\nS"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7880363,"math_prob":0.45660216,"size":766,"snap":"2020-10-2020-16","text_gpt3_token_len":163,"char_repetition_ratio":0.17585301,"word_repetition_ratio":0.11111111,"special_character_ratio":0.21018277,"punctuation_ratio":0.06140351,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99783206,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-31T18:28:12Z\",\"WARC-Record-ID\":\"<urn:uuid:4f594cda-d936-4299-806f-d96bc3fdd9e9>\",\"Content-Length\":\"59136\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a34d7620-2292-4217-8bcd-c6736411a82e>\",\"WARC-Concurrent-To\":\"<urn:uuid:fdccff1e-728c-42fb-b946-0d8cad418b59>\",\"WARC-IP-Address\":\"172.105.37.75\",\"WARC-Target-URI\":\"https://www.shaalaa.com/question-bank-solutions/distinguish-between-percentage-method-counting-elasticity-demand-geometric-method-counting-elasticity-demand-supply-curve-schedule_51762\",\"WARC-Payload-Digest\":\"sha1:YFRA7S2ZDN23MQK6M54CMMGXF3PMUJWN\",\"WARC-Block-Digest\":\"sha1:HB6WN5EFVSBLVFDEH5DNHCULLJ7PWQKF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370503664.38_warc_CC-MAIN-20200331181930-20200331211930-00236.warc.gz\"}"} |
https://logic.pdmi.ras.ru/seminars/complexity-seminar/2016-10-17 | [
"## Понедельник 17.10. Krzysztof Sornat: \"Approximation and Parameterized Complexity of Minimax Approval Voting\"\n\nПонедельник, 17 октября, ауд. 106. Начало в 14:00.\n\nДокладчик: Krzysztof Sornat (University of Wroclaw).\n\nТема: Approximation and Parameterized Complexity of Minimax Approval Voting.\n\n### Abstract\n\nWe present three results on the complexity of Minimax Approval Voting that is a voting rule for choosing committee of fixed size k. Here, we see the votes and the choice as",
null,
"strings of length",
null,
"(characteristic vectors of the subsets). The goal is to minimize the maximum Hamming distance to a vote. First, we study Minimax Approval Voting parameterized by the Hamming distance",
null,
"from the solution to the votes. We show Minimax Approval Voting admits no algorithm running in time",
null,
", unless the Exponential Time Hypothesis (ETH) fails. This means that the",
null,
"algorithm of Misra et al. [AAMAS 2015] is essentially optimal. Motivated by this, we then show a parameterized approximation scheme, running in time",
null,
", which is essentially tight assuming ETH. Finally, we get a new polynomial-time randomized approximation scheme for Minimax Approval Voting, which runs in time",
null,
", almost matching the running time of the fastest known PTAS for Closest String due to Ma and Sun [SIAM J. Comp. 2009].\n\nAuthors: Marek Cygan, Łukasz Kowalik, Arkadiusz Socała, Krzysztof Sornat\nhttp://arxiv.org/abs/1607.07906"
] | [
null,
"https://logic.pdmi.ras.ru/sites/default/files/tex/3579696f06fe2e58f62d997256e0c9d669e94672.png",
null,
"https://logic.pdmi.ras.ru/sites/default/files/tex/05b806fb3a39e75606f12e544fec1ae578c32b0b.png",
null,
"https://logic.pdmi.ras.ru/sites/default/files/tex/f5efd7e164b692a1c06faa4659d1c840d8761de8.png",
null,
"https://logic.pdmi.ras.ru/sites/default/files/tex/030cff1e4bd7c8031b93bcea4553dd5b04893ab2.png",
null,
"https://logic.pdmi.ras.ru/sites/default/files/tex/460256232f10d4e18a7a2d96d1a24886ee755bbe.png",
null,
"https://logic.pdmi.ras.ru/sites/default/files/tex/390a3b919015d40c7099e8debeafa0c16f637f11.png",
null,
"https://logic.pdmi.ras.ru/sites/default/files/tex/def26e76d55c16542a2900a14ec52d944e377953.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.80214745,"math_prob":0.8655567,"size":1272,"snap":"2019-13-2019-22","text_gpt3_token_len":331,"char_repetition_ratio":0.11750789,"word_repetition_ratio":0.0,"special_character_ratio":0.21462265,"punctuation_ratio":0.15611814,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95883036,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,2,null,2,null,6,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-21T15:39:20Z\",\"WARC-Record-ID\":\"<urn:uuid:1cf3d6ce-17da-4807-8e14-4a0394a7b2dd>\",\"Content-Length\":\"10316\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c10be76-3775-4bd1-a061-2a3196d2092a>\",\"WARC-Concurrent-To\":\"<urn:uuid:dae84b12-64e6-42c3-a506-4ee30042fec3>\",\"WARC-IP-Address\":\"83.149.197.121\",\"WARC-Target-URI\":\"https://logic.pdmi.ras.ru/seminars/complexity-seminar/2016-10-17\",\"WARC-Payload-Digest\":\"sha1:LHGWFCYEXBCMMNZDT2SXXSQGZRIFL66K\",\"WARC-Block-Digest\":\"sha1:SPY7D6FZNSXLBOPBFHGX5CG4MBR2M27O\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202526.24_warc_CC-MAIN-20190321152638-20190321174638-00206.warc.gz\"}"} |
https://www.geogebra.org/m/ex29ay9z | [
"# Baby-Step Modeling in 3D GC with AR: Part 1\n\n## MATH Ts and Ss:\n\nIf you're studying classes of functions and function transformations in 2D, YOU CAN build and model in 3D! In 2D, consider the equation . It's graph is a circle with center (0,0) and radius = 2. To see why, check out this resource here. Yet in this equation, if we replace y with z, this equation becomes . If we solve explicitly for z, we get (upper semicircle) and (lower semicircle). In 3D, think of z as the new DEPENDENT VARIABLE. If we graph these 2 surfaces in 3D, the value of y doesn't matter. Thus, these semicircles become infinitely long half-cylinders (see screencast below). Here, if we study the relationship of z with respect to x (only) -- while ignoring y --, we have cross sections of planes (that are parallel to the xAxis) that are circles (formed from the upper and lower semicircles put together). Now if we add \"0.3y\" to both surfaces, what happens? Why do we get a slide?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8950579,"math_prob":0.94124997,"size":1045,"snap":"2021-31-2021-39","text_gpt3_token_len":280,"char_repetition_ratio":0.10951009,"word_repetition_ratio":0.0,"special_character_ratio":0.25358853,"punctuation_ratio":0.1559633,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99699974,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-01T18:40:26Z\",\"WARC-Record-ID\":\"<urn:uuid:605d6b51-66e5-46ce-b45e-f6997ce6e211>\",\"Content-Length\":\"40803\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dfa0d9cc-17af-4649-b32e-bd07081b5794>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b818f1a-6f42-4e2c-a6d0-2fb0d3d4b484>\",\"WARC-IP-Address\":\"52.85.144.70\",\"WARC-Target-URI\":\"https://www.geogebra.org/m/ex29ay9z\",\"WARC-Payload-Digest\":\"sha1:BHN42NE2DZO535CKVNUGZE5QTS543Z2W\",\"WARC-Block-Digest\":\"sha1:63YVNC7BFYBAB2ET22OQJQ6EXK6LC6E3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154214.63_warc_CC-MAIN-20210801154943-20210801184943-00389.warc.gz\"}"} |
https://brainmass.com/math/functional-analysis/polar-coordinates-43494 | [
"Explore BrainMass\n\n# Polar coordinates\n\nNot what you're looking for? Search our solutions OR ask your own Custom question.\n\nThis content was COPIED from BrainMass.com - View the original, and get the already-completed solution here!\n\nX and y represents rectangular coordinates. What is the given equation using polar coordinates (r, theta). x^2 = 4y\n\nhttps://brainmass.com/math/functional-analysis/polar-coordinates-43494\n\n#### Solution Preview\n\nusing x = r cos (theta) y = r sin( theta)\n\nwe obtain r^2 cos^2(theta) = 4 r ...\n\n#### Solution Summary\n\nThis shows how to translate an equation into polar coordinates.\n\n\\$2.49"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7788177,"math_prob":0.95611024,"size":643,"snap":"2021-31-2021-39","text_gpt3_token_len":168,"char_repetition_ratio":0.14710484,"word_repetition_ratio":0.0,"special_character_ratio":0.25660965,"punctuation_ratio":0.16,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9972738,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-04T07:21:24Z\",\"WARC-Record-ID\":\"<urn:uuid:e07f17f2-868e-4e45-ad87-5b196ec32926>\",\"Content-Length\":\"330377\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:86770a59-c346-4d2f-8c2c-87c29509b7a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:f280104a-2e9a-4b2b-adf9-d6c025992f73>\",\"WARC-IP-Address\":\"104.26.5.245\",\"WARC-Target-URI\":\"https://brainmass.com/math/functional-analysis/polar-coordinates-43494\",\"WARC-Payload-Digest\":\"sha1:OS2ATV442A3V47J7FUC4S73QTMCBOWTI\",\"WARC-Block-Digest\":\"sha1:A5HKCPGOGIHWODWVWSPEFISNSYWGTCDV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154796.71_warc_CC-MAIN-20210804045226-20210804075226-00193.warc.gz\"}"} |
https://veronak.savingadvice.com/2013/08/ | [
Archive for August, 2013: Amber's Personal Finance Blog

Home > Archive: August, 2013
null,
"",
null,
"",
null,
"# Archive for August, 2013\n\n## I'm back! Again ;)\n\nAugust 25th, 2013 at 10:24 pm\n\nHi all!\nI am officially back, if I'm not mistaken my last post was the start of my summer classes in May , which by the way was super tough.\n\nIn the midst of all that I manage to add a little funds to CC, I had previously paid off but it's under control and should be paid off again in September",
null,
A lot has happened. First, I graduated! Yay, I'm so excited. I now have a master's degree, the first and only in my family. Was it tough? Absolutely. Am I glad I stuck it out? Yes
The day of graduation I received a $15,000 raise, sweet. Unfortunately the money now must go toward student loans.

Also some more great news: I've started a Human Resource firm. Basically the company is geared toward small businesses. I'd like to be the point of contact for these businesses, acting as their human resource department but outsourced. I have my first client company, a construction business started by the owner of the company where I am currently employed. I'll be drafting up the contract Wednesday for her to sign. It's great for me because construction is a tad different with all the OSHA requirements, so this will help me gain more experience and knowledge

Please like my page on Facebook if you can, and share a few of my posts. The goal is to hit 100 likes by the end of the month and get a little traffic on the page. The name of the company is KHRServices

Now that school is out of the way and I've started the new business, I can start focusing on paying down the rest of my debt.

I have been keeping up with my 52 week challenge.
null,
One thing though: I have about $7,000 in my savings, which is about 6 months' worth of expenses. I wonder if I should stop socking away the $25 a week and start paying that on the CC? Any thoughts on this?

Glad to be officially back, and don't forget to like (and share) my new business on Facebook

KHRServices"
] | [
null,
"https://www.savingadvice.com/blogs/images/search/top_left.php",
null,
"https://www.savingadvice.com/blogs/images/search/top_right.php",
null,
"https://www.savingadvice.com/blogs/images/search/bottom_left.php",
null,
"https://www.savingadvice.com/forums/core/images/smilies/smile.png",
null,
"https://www.savingadvice.com/forums/core/images/smilies/smile.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9783472,"math_prob":0.9955059,"size":1869,"snap":"2019-51-2020-05","text_gpt3_token_len":433,"char_repetition_ratio":0.09383378,"word_repetition_ratio":0.0,"special_character_ratio":0.23434992,"punctuation_ratio":0.09113924,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996884,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-24T13:47:21Z\",\"WARC-Record-ID\":\"<urn:uuid:5a3774bd-3212-46b9-b454-9739349f21e0>\",\"Content-Length\":\"70445\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f6a50c7b-24cb-4adf-85c5-d038a5f36747>\",\"WARC-Concurrent-To\":\"<urn:uuid:f71d183e-a09b-4d98-8075-8886926bb151>\",\"WARC-IP-Address\":\"173.231.200.26\",\"WARC-Target-URI\":\"https://veronak.savingadvice.com/2013/08/\",\"WARC-Payload-Digest\":\"sha1:BPN5YPZHRFAFPFZYPGDGXITOVSLJCDYA\",\"WARC-Block-Digest\":\"sha1:J4HRCWNMPBAD6KFVRF7AYN5G2C7CKGML\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250620381.59_warc_CC-MAIN-20200124130719-20200124155719-00480.warc.gz\"}"} |
http://alameencollege.org/Al-ameen_departments_backend.php?id=9 | [
"# Department of Mathematics\n\n## Department Of Mathematics\n\nThe Department of Mathematics was established in the year 1995, as a complementary subject for B.Sc Petrochemicals and B.Sc. Physics courses. In 2013 M.Sc. Mathematics has been started.\n\nOur vision is to develop a love and appreciation for the discipline as an intellectual endeavor. To foster a Solid competency in Mathematics. To create an awareness of the intrinsic relation of mathematics with other technical fields. To set a foundation for further studies and a career in mathematically related fields.\n\nOur Mission is to familiarize students with basic mathematical tools. To generate mathematical concepts and ways of thinking.To highlight the importance of mathematical thinking. To develop communication skills appropriate to the science of mathematics.\n\nGraduates Of The Mathematics Program Will Be Able To:\n\nApply Knowledge Of Mathematics, In All The Fields Of Learning Including Higher Research And Its Extensions.\n\nInnovate, Invent And Solve Complex Mathematical Problems Using The Knowledge Of Pure And Applied Mathematics\n\nTo Solve One Dimensional Wave And Heat Equations Employing The Methods In Partial Differential Equations.\n\nUtilize Number Theory In The Field Of Cryptography That Helps In Hiding Information And Maintaining Secrecy In Military Information Transmission, Computer Password And Electronic Commerce.\n\nFacilitate In The Study Of Crystallographic Groups In Chemistry And Lie Symmetry Groups In Physics.\n\nDemonstrate Risk Assessment In Financial Markets, Disease Spread In Biology And Punnett Squares In Ecology.\n\nLearn To Solve Improper Integrals.\n\nMake Use Of Linear Equations For Solving Any Differential Equations Understand Various Problems Related With Planar Graphs.\n\nUnderstand The Concepts Of Matrices And Linear Equations\n\nLearn Properties Of Inverse Laplace Transforms\n\nDemonstrate A Competence In Formulating, Analysing, And Solving Problems In Several Core Areas Of Mathematics At A Detailed Level, Including Analysis, Linear And Abstract Algebra, Statistics, Probability And Applications Of Mathematics\n\nDemonstrate An Advanced Knowledge And Fundamental Understanding Of A Number Of Specialist Mathematical Topics, Including The Ability To Solve Problems Related To Those Topics Using Appropriate Tools And Techniques\n\nCommunicate Clearly In Writing And Orally Knowledge, Ideas And Conclusions About Mathematics, Including Formulating Complex Mathematical Arguments, Using Abstract Mathematical Thinking, Synthesising Intuition About Mathematical Ideas And Their Applications\n\nDemonstrate That They Can Advance Their Own Knowledge And Understanding Of Mathematics And Its Applications With A High Degree Of Autonomy.\n\nMSc Mathematics Course Mainly Focus On The Thrust Areas Of Mathematics Like Linear Algebra, Discrete Mathematics, Combinatorics, Number Theory, Analysis, Differential Equations, Linear Programming And OR, Topology Etc. And Upon Successful Completion Of This Courses Students Will Be Able To:\n\nSolve Systems Of Linear Equations,\n\nRecognize The Concepts Of The Terms Span, Linear Independence, Basis, And Dimension, And Apply These Concepts To Various Vector Spaces And Subspaces,\n\nUse Matrix Algebra And The Related Matrices To Linear Transformations,\n\nUse Technological Tools Such As Computer Algebra Systems Or Graphing Calculators For Visualization And Calculation Of Linear Algebra Concepts.\n\nFormulate And Interpret Statements Presented In Boolean Logic. 
Reformulate Statements From Common Language To Formal Logic. Apply Truth Tables And The Rules Of Propositional And Predicate Calculus,\n\nDemonstrate A Working Knowledge Of Set Notation And Elementary Set Theory, Recognize The Connection Between Set Operations And Logic, Prove Elementary Results Involving Sets, And Explain Russells Paradox,\n\nGain An Historical Perspective Of The Development Of Modern Discrete Mathematics\n\nApply Diverse Counting Strategies To Solve Varied Problems Involving Strings, Combinations, Distributions, And Partitions,\n\nWrite And Analyze Combinatorial, Algebraic, Inductive, And Formal Proofs Of Combinatoric Identities, And\n\nRecognize Properties Of Graphs Such As Distinctive Circuits Or Trees.\n\nDefine And Interpret The Concepts Of Divisibility, Congruence, Greatest Common Divisor, Prime, And Prime-Factorization,\n\nApply The Law Of Quadratic Reciprocity And Other Methods To Classify Numbers As Primitive Roots, Quadratic Residues, And Quadratic Non-Residues,\n\nDetermine The Continuity, Differentiability, And Integrability Of Functions Defined On Subsets Of The Real Line,\n\nApply The Mean Value Theorem And The Fundamental Theorem Of Calculus To Problems In The Context Of Real Analysis, And\n\nWrite Solutions To Problems And Proofs Of Theorems That Meet Rigorous Standards Based On Content, Organization And Coherence, Argument And Support, And Style And Mechanics.\n\nSolve Differential Equations Of First Order Using Graphical, Numerical, And Analytical Methods,\n\nSolve And Apply Linear Differential Equations Of Second Order (And Higher),\n\nSolve Linear Differential Equations Using The Laplace Transform Technique,\n\nDevelop The Ability To Apply Differential Equations To Significant Applied And/Or Theoretical Problems.\n\nAssess Properties Implied By The Definitions Of Groups And Rings,\n\nUse Various Canonical Types Of Groups (Including Cyclic Groups And Groups Of Permutations) And Canonical Types Of Rings (Including Polynomial Rings And Modular Rings),\n\nProduce Rigorous Proofs Of Propositions Arising In The Context Of Abstract Algebra\n\nFormulate And Model A Linear Programming Problem From A Word Problem And Solve Them Graphically In 2 And 3 Dimensions, While Employing Some Convex Analysis,\n\nBe Able To Modify A Primal Problem, And Use The Fundamental Insight Of Linear Programming To Identify The New Solution, Or Use The Dual Simplex Method To Restore Feasibility.\n\nDefine And Illustrate The Concept Of Topological Spaces And Continuous Functions,\n\nProve A Selection Of Theorems Concerning Topological Spaces, Continuous Functions, Product Topologies, And Quotient Topologies\n\nRepresent Complex Numbers Algebraically And Geometrically,\n\nApply The Concept And Consequences Of Analyticity And The Cauchy-Riemann Equations And Of Results On Harmonic And Entire Functions Including The Fundamental Theorem Of Algebra,\n\nAnalyze Sequences And Series Of Analytic Functions And Types Of Convergence,\n\n• M.Sc Mathematics\n\n• Complementary Courses\n\n• Latex\n\n•Well Occupied Library\n\n•Provided With The Journal Of Ramanujan Mathematical Society\n\n•Csir Net/Jrf Coaching For Final Year M.Sc Students\n\nName Designation\nDr. Vinitha T Assistant Professor & HOD\nRilga K.O Assistant Professor on Contract\nSulfith Salim Assistant Professor on Contract\nShanthy Jose Assistant Professor on Contract\nSonia George Assistant Professor on Contract\nAnju Jose Assistant Professor on Contract\nSONI SEBASTIAN T Assistant Professor on Contract\n\nView More >>"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.66842115,"math_prob":0.7091562,"size":6838,"snap":"2019-51-2020-05","text_gpt3_token_len":1338,"char_repetition_ratio":0.13227978,"word_repetition_ratio":0.002118644,"special_character_ratio":0.16145071,"punctuation_ratio":0.10570627,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9716541,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-27T03:18:23Z\",\"WARC-Record-ID\":\"<urn:uuid:711d7fb7-74bc-4c17-b457-c3e699e55118>\",\"Content-Length\":\"83991\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:34631b17-8ee8-4f19-b7b4-48e2c46bbc26>\",\"WARC-Concurrent-To\":\"<urn:uuid:68d85a61-1903-480c-a729-e2fad02f53c1>\",\"WARC-IP-Address\":\"166.62.27.186\",\"WARC-Target-URI\":\"http://alameencollege.org/Al-ameen_departments_backend.php?id=9\",\"WARC-Payload-Digest\":\"sha1:YMR776FZN5JWHPIWDS5DXONOGYOFV2YV\",\"WARC-Block-Digest\":\"sha1:WRZFKR2WARG6VA23YHHLFFWHONCQPUX3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251694176.67_warc_CC-MAIN-20200127020458-20200127050458-00365.warc.gz\"}"} |
https://discourse.dhall-lang.org/t/are-there-type-variables-generic-types/348 | [
"",
null,
"# Are there type variables / generic types?\n\n#1\n\nHi, just starting with Dhall. I will like to do something like this:\n\n``````let (Thing a) : Type =\n{ output : a\n, a : Text\n, b : Text\n}\n\nlet stringThings : List (Thing Text) =\n[ {output = \"out\", a= \"a\", b = \"b\"}\n, {output = \"out\", a= \"a\", b = \"b\"}\n]\n\nlet intThings : List (Thing Integer) =\n[ {output = 1, a= \"a\", b = \"b\"}\n, {output = 1, a= \"a\", b = \"b\"}\n]\n``````\n\nSo I would like to have a list of types where one or two attributes are generic.\nIs there such a thing in Dhall?\n\nOne way I can think of is to create different concrete types:\n\n``````let TextThing : Type\nlet IntThing: Type\nlet DoubleThing : Type\nlet OptionalTextThing : Type\n...\n``````\n\nBut this will create a lot of repetition.\n\nAnother way is to use a union in the `output`, But then elements in the list could have mixed outputs. I want all elements in a list to have the same output.\n\nAny other way to achieve something like this?\n\n#2\n\nPerhaps you can achieve that with a function to create the type, for example:\n\n``````let Thing = λ(a : Type) → { output : a, a : Text, b : Text }\n\nlet stringThings\n: List (Thing Text)\n= [ { output = \"out\", a = \"a\", b = \"b\" }\n, { output = \"out\", a = \"a\", b = \"b\" }\n]\n\nlet intThings\n: List (Thing Integer)\n= [ { output = +1, a = \"a\", b = \"b\" }, { output = +1, a = \"a\", b = \"b\" } ]\n\nlet Thing/getA = λ(a : Type) → λ(thing : Thing a) → thing.a\n\nin Thing/getA Text { output = \"out\", a = \"a\", b = \"b\" }\n``````\n\n#3\n\nNice, this works really well, thanks."
] | [
null,
"https://dhall-discourse-uploads.s3.dualstack.us-west-1.amazonaws.com/original/1X/87f17b6b61493e8404c449e71ccbeb0b44db5fb8.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8294706,"math_prob":0.9234208,"size":862,"snap":"2020-34-2020-40","text_gpt3_token_len":251,"char_repetition_ratio":0.13986014,"word_repetition_ratio":0.13259669,"special_character_ratio":0.32134572,"punctuation_ratio":0.1904762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9984708,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-21T02:47:24Z\",\"WARC-Record-ID\":\"<urn:uuid:f68b8f05-8a90-42ba-86c8-3b5622ba1269>\",\"Content-Length\":\"10792\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fa52bab9-d67b-4f45-8f33-44649b55839c>\",\"WARC-Concurrent-To\":\"<urn:uuid:af0b5364-ece9-464b-bf48-ba704ae2b9b4>\",\"WARC-IP-Address\":\"173.255.251.40\",\"WARC-Target-URI\":\"https://discourse.dhall-lang.org/t/are-there-type-variables-generic-types/348\",\"WARC-Payload-Digest\":\"sha1:D6HNX4LWXJ5II2KOMGVM2XVCX4TX7XBX\",\"WARC-Block-Digest\":\"sha1:NUSNJ4LQVH7DMHVHBLGLXGSTOP3MM5H2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400198887.3_warc_CC-MAIN-20200921014923-20200921044923-00293.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/1602.07370/ | [
"arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF. Read this paper on arXiv.org.\n\n# Exact solutions for logistic reaction-diffusion in biology\n\n1. Dept. of Mathematics and Statistics, La Trobe University, Victoria, Australia.\n2. Phenomics and Bioinformatics Research Centre, School of Information Technology and Mathematical Sciences, University of South Australia. Bronwyn.H\n###### Abstract\n\nReaction-diffusion equations with a nonlinear source have been widely used to model various systems, with particular application to biology. Here, we provide a solution technique for these types of equations in -dimensions. The nonclassical symmetry method leads to a single relationship between the nonlinear diffusion coefficient and the nonlinear reaction term; the subsequent solutions for the Kirchhoff variable are exponential in time (either growth or decay) and satisfy the linear Helmholtz equation in space. Example solutions are given in two dimensions for particular parameter sets for both quadratic and cubic reaction terms.\n\nKeywords: Nonclassical symmetries, Reaction-diffusion equations, Fisher equation, Fitzhugh-Nagumo equation, KPP equation, Exact solutions.\n\n## 1 Introduction\n\nLogistic reaction-diffusion equations with a nonlinear source are widely used to model many different systems, particularly in biology. One of the earliest appearances of such a model was in the seminal paper by Fisher in 1937 , where he introduced the equation\n\n ∂θ∂t=D∂2θ∂x2+sθ(1−θ). (1)\n\nFisher’s equation originally modelled the frequency in a diploid population, of a new advantageous recessive gene, labelled ‘’. In fact, for a sexually reproducing species, Fisher’s equation follows only if there are three different phenotypes , and whose relative fitness coefficients are in linear progression; otherwise Fisher’s assumptions lead to a cubic rather than a quadratic source term [2, 3]. Fisher’s equation remains the model of choice for many biological problems such as those in population dynamics (where is the population density divided by the carrying capacity of the environment ), and for biological cellular tissue growth (where is the cell population density divided by the steady-state tissue density, see for example ).\n\nSince cell mobility depends on cell density, such models naturally generalise to nonlinear reaction-diffusion with logistic reaction and nonlinear diffusion. Extending the model to three dimensions and including a broader range of source terms, the general form of a logistic reaction-diffusion equation for a density is\n\n θt=∇⋅[D(θ)∇θ]+R(θ) (2)\n\nwhere is the density dependent diffusion coefficient, is a logistic type source term which may be quadratic or cubic, and is the usual gradient operator in three dimensions. Some properties of these equations are shared more generally by equations of Kolmogorov-Piscounov-Petrov type but here, we consider to be either quadratic (as in the traditional Fisher-type model) or cubic (as in the Huxley and Fitzhugh-Nagumo models). The Fitzhugh-Nagumo equation has been used to model a nerve axon potential, the intermediate unstable steady state being the threshold electrical potential that separates the basins of attraction for the stable steady activated state and the quiescent state .\n\nDespite the wide use of equation (2), few exact analytic solutions are known even in the one-dimensional case. 
We have not been able to find previously published exact solutions for logistic reaction-diffusion when the diffusivity D(θ) is non-constant. When n = 1 (i.e. the one-dimensional case), D is constant and R is quadratic, equation (2) is known as the Fisher, Fisher-Kolmogorov or KPP equation due to the classic papers by these authors [1, 8]. An exact travelling wave solution was first presented by Ablowitz and Zeppetella in 1979 . In two dimensions, explicit and approximate travelling wave solutions have been presented by Brazhnik and Tyson .\n\nWhen n = 1, D is constant, and R is cubic or of higher order Huxley type, Kametaka found a travelling wave solution, while other travelling wave solutions have been found by McKean and Rinzel . Periodic solutions were found by Carpenter and Hastings . Arrigo, et al. , and Clarkson and Mansfield found some exact solutions using the method of nonclassical symmetry analysis and these solutions have been applied to the problem of a new advancing recessive gene [3, 18]. The solutions found using the nonclassical symmetry approach can also be found using the Painlevé approach [19, 20]. Other solutions have been presented by Kawahara and Tanaka , Kudryashov , Chen and Gu and Nikitin and Barannyk . The existence of solutions in the cubic case was investigated by Nagylaki and Conley .\n\nIn this paper, we use the nonclassical symmetry method to present exact analytical solutions to equation (2) when R(θ) is quadratic with two real roots, cubic with one doubly repeated root, and cubic with three distinct roots. In each case, the nonlinear diffusion coefficient takes on a particular form.\n\nIn Section 2, we describe the solution technique; the solutions are presented in Sections 3, 4 and 5. The application of the solutions to population genetics is discussed in Section 6 and some final remarks are presented in Section 7.\n\n## 2 Nonclassical reduction to the Helmholtz equation\n\nA full Lie point symmetry classification of equation (2) was made by Dorodnitsyn et al. . In classical Lie point symmetry analysis, one seeks transformations that leave the governing equation invariant. In some cases, these transformations may be used to simplify the governing equation, leading to a possible analytic solution. Classically invariant solutions include travelling waves and scale-invariant solutions that may exhibit extinction, single-peak or multi-peak blow-up in finite or infinite time, with unbounded or compact spatial support, only when the free functions for diffusivity and/or reaction are power-laws, exponentials or of some other special form.\n\nFollowing Ovsiannikov's general formulation of partial symmetries , the idea of nonclassical symmetries was pioneered by Bluman and Cole . This requires a transformation to leave the system, consisting of the governing equation and the invariant surface condition, invariant. This extra requirement can sometimes give rise to transformations that cannot be found by the Lie point method. Nonclassical symmetry methods, also known as Q-conditional symmetries, have been applied to equations belonging to the class (2), and a number of forms of D(θ) and R(θ) have been found to admit strictly nonclassical symmetries [16, 17, 30, 31]. The complete nonclassical symmetry classification of equation (2) in two dimensions was given by Goard and Broadbridge . 
Some of the same nonclassical symmetries readily extend to n dimensions .\n\nBy writing equation (2) in terms of the Kirchhoff variable (see for example )\n\n u = u₀ + ∫_{θ₀}^{θ} D(θ′) dθ′, (3)\n\nso that a boundary condition θ = θ₀ corresponds to u = u₀, we obtain\n\n F(u) ∂u/∂t = ∇²u + Q(u) (4)\n\nwhere F(u) = dθ/du = 1/D(θ) and Q(u) = R(θ(u)). This equation admits the nonclassical reduction operator\n\n Γ = ∂/∂t + Au ∂/∂u (5)\n\nwhenever F and Q are related by\n\n Q(u) = AuF(u) + κu (6)\n\nfor constants A and κ. Given D(θ), this gives R(θ) by direct integration; however, given R(θ), D(θ) is obtained by solving a differential equation:\n\n D(θ) = du/dθ = Au/(R − κu),\n\nequivalently\n\n D(θ) = A ∫_{θ₀}^{θ} D dθ / [R(θ) − κ ∫_{θ₀}^{θ} D dθ]. (7)\n\nEquation (5) is a genuine nonclassical symmetry because it leaves equation (2) invariant only if one also makes use of the invariant surface condition, u_t = Au. However this conditional invariance allows a consistent reduction of the original PDE, to a differential equation among the invariants of the symmetry. Making use of this nonclassical reduction, equation (4) can be transformed to the linear Helmholtz equation\n\n ∇²Φ + κΦ = 0 with u = e^{At} Φ(x). (8)\n\nIn this manner, an arbitrary solution of the linear Helmholtz equation in 2 or 3 dimensions may be used to construct a solution of the nonlinear reaction-diffusion equation. For example, one may use any of the solutions that have previously been constructed for the amplitude of a scattered acoustic wave . However, such solutions that represent scattering by a finite body, must approach the isotropic solution at large distances. Therefore the radial solutions are considered to be canonical, and indicative of features of fields scattered from aspherical boundaries. In the current context, there are three possible types of solution to be considered, namely those with κ = 0, κ < 0 and κ > 0.\n\nWith κ = 0, Φ is a radial solution of the Laplace equation, which can only be a linear combination of the trivial constant solution and the unit point source solution. Any such non-constant solution must be singular at the origin, where there is a steady flux from or to a point source or sink. The differential equation (7) is now linear, with general solution\n\n u = c₁ exp ∫ [A/R(θ)] dθ. (9)\n\nWith κ < 0, Φ in two dimensions must be a linear combination of modified Bessel functions I₀ and K₀, which must either be infinite at the origin or be unbounded at large r. A similar situation pertains in three dimensions when the modified Bessel functions are replaced by spherical modified Bessel functions.\n\nFinally, in the case κ > 0, there are positive bounded solutions that are Bessel functions in two dimensions and spherical Bessel functions in three dimensions. They satisfy boundary conditions\n\n u_r(0,t) = 0, (10)\n u(r₁,t) = 0, (11)\n\nwhere r₁ = λ₁/√κ, λ₁ being the first zero of the Bessel function. This outer boundary condition may represent extreme total culling of a species at a boundary, for example extreme harvesting of a prey species at the boundary of a protection zone, selective culling of some genotype, or removal of outer tumour cells by radiation or chemo-therapy or diathermy. Alternatively, since −u_r/u varies from 0 at r = 0 to infinity at r = r₁, the outer boundary may be relocated to some location where it satisfies a Robin condition u_r + bu = 0 for some pre-chosen parameter b. 
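As a quick consistency check on the reduction, the following short calculation (added here for clarity, using only the quantities already defined in (4), (6) and (8)) shows why the ansatz u = e^{At}Φ(x) collapses (4) to the Helmholtz equation.

```latex
% With u = e^{At}\Phi(x), the invariant surface condition gives u_t = Au, so
% F(u)\,u_t = A u F(u).  Using Q(u) = A u F(u) + \kappa u from (6), equation (4) reads
%   A u F(u) = \nabla^2 u + A u F(u) + \kappa u ,
% i.e. \nabla^2 u + \kappa u = 0.  Dividing by the time factor e^{At} leaves
\[
\nabla^{2}\Phi + \kappa\,\Phi = 0 ,
\]
% which is the Helmholtz equation (8).
```
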
Since u_r is simply the radial flux density of the population, this may approximately represent individuals responding to an external chemoattractant with a fixed probability proportional to u, except that when the diffusivity is not constant, u is not exactly proportional to the population density θ.\n\nFor convenience, from here on we set κ = K².\n\nA solution of (7) must be a fixed point of the map\n\n D_{n+1}(θ) = A ∫₀^θ D_n dθ′ / [R(θ) − κ ∫₀^θ D_n dθ′]. (12)\n\nFor some values of the system parameters, this is a contraction map that converges to a unique solution D(θ). The modelling constraint D > 0 may restrict the values of the temporal exponential growth parameter A that can occur. These details will depend on the form of the reaction function R(θ). We now present some exact analytic solutions in the case where R(θ) is quadratic or cubic.\n\n## 3 Fisher-type logistic equations, R(θ) = sθ(1 − θ)\n\nThe standard generalisation of Fisher's equation to n dimensions is\n\n ∂θ/∂t = D ∇²θ + sθ(1 − θ).\n\nSince cell mobility depends on cell density, Fisher's equation naturally generalises to nonlinear reaction-diffusion (2) with nonlinear diffusion as well as logistic reaction. Even in one spatial dimension, there are very few known exact solutions, apart from the one-dimensional travelling wave solution with a special non-minimal group velocity . The logistic source changes sign at the carrying capacity, quite different from the positive definite source term of combustion. Despite that major difference, the construction that was previously applied to combustion modelling still can be applied to population dynamics after some restrictions on the system parameters.\n\nFirst consider the case κ = 0. The diffusivity is given explicitly by\n\n D = −(A/s) θ⁻² (θ⁻¹ − 1)^{−(A/s)−1}.\n\nSince D must be positive for θ ∈ (0,1), it must be true that A < 0. Then D → ∞ as θ → 0. Since in biological applications the diffusivity must be bounded, the case κ = 0 is inadmissible.\n\nSecondly, consider the case κ < 0. Since D(1) = −A/κ, equation (7) then implies A > 0. This can represent only a growing population (by equation (8)). If we presume that D has an upper bound, then exponential growth of u implies unbounded growth of the population density θ. However, this is problematic for population modelling since the source term is negative when θ > 1. The radial solution is a linear combination of modified Bessel functions that have a point source or a singularity at the origin. Alternatively, D may diverge at some finite value of θ so that θ remains bounded as u diverges. However, an unbounded diffusivity is untenable in population modelling.\n\nFinally, we consider the applicable case κ > 0. It follows from (7) that D(1) = −A/κ. Therefore the only valid solutions remaining are those with A < 0, so that u approaches zero exponentially in time (by equation (8)). Since R is now analytic at θ = 0 with leading order θ, compared to the case of Arrhenius combustion , R is no longer negligible compared to κu at small θ. From (7) as θ → 0, D(0) = AD(0)/[s − κD(0)], implying\n\n A = s − κD(0). (13)\n\nFor example if r₁ is the radius of a circular aquatic reserve, outside of which the population of a mobile species is practically zero, one may assume u = 0 at r = r₁, from which it follows that K = λ₁/r₁ (λ₁ being the first zero of Bessel function J₀), and (13) gives a condition that guarantees the non-existence of the undesirable solution with A < 0, that approaches extinction. This condition may be expressed\n\n r₁ > λ₁ √(D(0)/s). (14)
For example, for a mobile species with a given rate at which an individual's range expands per year and a given time scale for uninhibited exponential growth, (14) prescribes the conservative safe diameter of a marine park.\n\nBy choosing as the first estimate for D(θ) the constant value D(0), it follows that all subsequent iterates of the map (12) must have the correct values of D at θ = 0 and θ = 1. The first few iterates are:\n\n D₀ = D(0), (15)\n D₁ = |A| D₀ / (sθ + K²D₀ − s), (16)\n D₂ = −A²D₀ log{[sθ + K²D₀ − s]/[K²D₀ − s]} / (s²θ(1 − θ) + K²AD₀ log{[sθ + K²D₀ − s]/[K²D₀ − s]}). (17)\n\nThese are shown in Figure 1 for the example with K = 1, s = 1 and A = −1.5. The iterated approximation closely agrees with the numerical approximation obtained by solving (7) with the third/fourth-order Runge-Kutta routine ode45 of Matlab. From θ = 0 up to a value well over double the carrying capacity (θ = 1), the diffusivity is positive-valued and decreasing.",
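The iterates (15)–(17) are straightforward to reproduce numerically. The sketch below is an illustration only, not code from the paper: it assumes the parameter values quoted with Figure 1 (K = 1, s = 1, A = −1.5), fixes D(0) through (13), and applies the map (12) with a simple trapezoidal rule; all variable names are my own.

```python
import numpy as np
from scipy.special import jn_zeros

# Parameter values follow the example quoted with Figure 1 (K = 1, s = 1, A = -1.5);
# the grid, the number of iterations and the quadrature are assumptions made for this sketch.
K, s, A = 1.0, 1.0, -1.5
D0 = (s - A) / K**2                    # from (13): A = s - K^2 D(0), so D(0) = 2.5 here

theta = np.linspace(1e-6, 1.2, 4000)   # start just above 0 to avoid 0/0 at theta = 0
R = s * theta * (1.0 - theta)          # Fisher source term

D = np.full_like(theta, D0)            # first estimate D_0(theta) = D(0)
for _ in range(6):                     # successive iterates of the map (12)
    panels = 0.5 * (D[1:] + D[:-1]) * np.diff(theta)
    integral = np.concatenate(([0.0], np.cumsum(panels)))   # int_0^theta D_n dtheta'
    D = A * integral / (R - K**2 * integral)

print(D[50], D[-1])                    # away from theta = 0 the iterates stay near D(0)

# Condition (14): the extinguishing solution is ruled out when r_1 > lambda_1 sqrt(D(0)/s),
# with lambda_1 the first zero of the Bessel function J_0 (about 2.405).
lam1 = jn_zeros(0, 1)[0]
print("critical radius:", lam1 * np.sqrt(D0 / s))
```

The last two lines also evaluate the critical reserve radius in (14), using the first zero of J₀ from scipy.
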
null,
"Figure 1: D(θ) for the Fisher source term constructed by Matlab routine ode45 (solid), as well as approximations D0 (solid), D1(θ) (dashed), and D2(θ) (dash-dot) (K=1, s=1 and A=−1.5).\n\nThe exact solution for the doomed population is shown in Figure 2.",
null,
"Figure 2: Solution for the decreasing population density versus radial coordinate, at times |A|t=−1.5,0.0,1.5,2.5, according to the Fisher-type equation with K=1 and s=1.\n\nThe shape of the population curve approaches that of the Kirchhoff variable but from above in this case, since is now a decreasing function of population density, a reasonable model for some populations and for cells whose mobility is decreased by close packing.\n\n## 4 Huxley equation, with R(θ)=sθ2(1−θ)\n\nIn this case, the equation in dimensions with nonlinear diffusivity may be written\n\n θt=∇⋅[D(θ)∇θ]+sθ2(1−θ).\n\nThis better models the frequency of a new advantageous recessive gene in the important case considered by Fisher, that of a Mendelian diploid sexually reproducing population. It also occurs in the Huxley approach to modelling the electrical potential in a nervous system.\n\nWe first consider the case . Equation (7) has general solution\n\n D(θ)=c1θ−1(1−θ)−2(1−θ−1)−(A/s)+1exp(−Asθ)\n\nThis expression is divergent as for , and also divergent as when . As such, the case when is inadmissible for modelling a population.\n\nSecondly, we consider the case when . Since , equation (7) implies that . As described in the previous section, the case when is not of interest in population modelling.\n\nFinally, consider the case when . Since , we find that , so that any valid solutions have , and by equation (8), approaches zero exponentially in time. When is small, , so that by using a Taylor series approximation for , in the limit as ,\n\nimplying .\n\nFor a problem in 2-dimensions, the solution to equation (8) is the Bessel function . Again, if we assume, for example, that is the radius of an aquatic reserve and that outside the reserve the population is practically zero, at , then , where is the first zero of the Bessel function . Within the reserve, and so .\n\nIt would be desirable if we could calculate a minimum radius for the aquatic reserve such that the population inside could be sustained and not become extinct. However, this can only happen if , and since and , this is not possible. As a result, the extinguishing solution will always exist for a population that can be appropriately modelled using the Huxley source term.\n\nThe nonlinear diffusivity can be calculated in the same way as for the Fisher case. By choosing for the first estimate for we find that\n\n D0 = D(0), (18) D1 = −A2K21sθ(1−θ)+A , (19) D2 = (20)\n\nwhere . These iterates for the nonlinear diffusion are shown in Figure 3, together with the solution to equation (7) found using Matlab’s ode45 routine.",
null,
"Figure 3: D(θ) for the Huxley source term, constructed by Matlabroutine ode45 (solid), as well as approximations D0 (solid), D1(θ) (dashed), and D2(θ) (dash-dot) (K=1, s=1 and A=−1.5). The second iterate, D2(θ), is almost indistinguishable from the numerical solution.\n\nThe exact solution for the population is shown in Figure 4, which shows the population density decreasing over time. In this case, in general the diffusivity is not a monotonic function of density but it has a local maximum. At late times, is small, is increasing for small , and the shape of the solution approaches the Bessel function from below. At early times, the range of densities may allow to be non-monotonic, and the scaled density may lie above the limiting Bessel function.",
null,
"Figure 4: Solution for the decreasing population density versus radial coordinate, for the Huxley source term, at times |A|t=0.0,0.1,0.2,0.3, according to the Huxley-type equation with K=1 and s=1.\n\nIn the population genetics application, this means that a new recessive engineered gene can be completely removed from an isolated population simply by extreme culling at the boundary. In the application to the nerve axon potential, it means that the potential can be set to a uniform value internally by applying that potential at the boundary.\n\n## 5 Fitzhugh-Nagumo equation, with R(θ)=sθ(1−θ)(θ−θ1)\n\nThe cubic reaction term as an appropriate model in biological applications was first introduced in the late 1960’s and early 1970s (see for example ). In this case, the generalisation to higher dimensions and to include nonlinear diffusion is\n\n θt=∇⋅[D(θ)∇θ]+sθ(1−θ)(θ−θ1).\n\nWhereas the Fisher and Huxley equations have a single stable steady state at , the Fitzhugh Nagumo equation is a bi-stable model, with an intermediate unstable steady state between the stable activated state and stable quiescent state . The model is best known in one spatial dimension for propagation of the axon potential. It could be considered in higher dimensions for bi-stable activation of a specific neurological function associated with one region of the nervous system. The model arises also in diploid population genetics in the generic case that three possible genotypes have fitness coefficients that are neither Mendelian nor in arithmetic progression [3, 18]. It could also represent a population that is artificially introduced to a new environment where it must have a threshold density of in order to survive.\n\nFirst, consider the case when . The solution to equation (7) is\n\n D(θ)=c1θ−(1+α/θ1)(θ−1)−1+α/(θ1−1)(θ−θ1)−1−α/(θ1(θ1−1)),\n\nwhere . When , the expression for is divergent as and when , it is divergent at both and . When , we see that is divergent at all three zeros of . As such, the case when is inadmissible when modelling a population.\n\nWhen , we can use either the fact that or to show that we must have in order to have positive. We may then use the same arguments presented in section 3 to conclude that this case is also inadmissible.\n\nFinally, we consider the case . From (7), we see that\n\nexcept in the singular case of the the denominator being zero, . Also . Since , we can deduce that and will approach zero exponentially in time from (8). Also, in the non-singular case,\n\n A=−(sθ1+K2D(0)).\n\nThis may represent an axon potential or a population decaying towards extinction, because of culling at the boundary even if it started above the activation threshold. For a 2-dimensional problem with rotational invariance, the solution to equation (8) is the Bessel function, . Once again, we can deduce the minimum size of an aquatic reserve such that the undesirable extinguishing solution does not exist. If is the radius of the aquatic reserve, outside of which the population is practically zero, then , where is the first zero of the Bessel function. For the population to be maintained, we require . For a population with a carrying capacity at , it must be true that so that the local growth rate is negative at higher populations. Since and , this means that . In that case, the quiescent state (or extinction state) is now the intermediate unstable steady state. It then follows that\n\n r1>λ1√D(0)s|θ1|\n\nis the minimum radius for a marine park in order for the extinction point not to be stable. 
For example, using the same parameters as before, with a given range expansion rate in km per year, a given time scale of uninhibited growth in years, and a given threshold θ₁, this prescribes the minimum safe diameter of the reserve in km.\n\nOnce again, an estimate for D(θ) can be calculated using the iterative map (12). Since the carrying capacity better represents the ecological potential, we use the value D(1) = −A/K² as a starting value for the iterative procedure, guaranteeing that all subsequent iterates have the correct value at θ = 1. The first two iterates are:\n\n D₀ = −A/K²,\n D₁ = −(A²/(K²s)) (1/P(θ)),\n\nwhere P(θ) = (1 − θ)(θ − θ₁) + A/s. The form of the next iterate will depend on the values of the parameters s, A and θ₁. We have the necessary condition that P(θ) < 0, since we must have D₁ > 0. If P(θ) has two real roots, then θ must lie either below the smaller root or above the larger root. In this case, the next iterate, D₂, may be written in terms of logarithmic functions and, as such, it becomes singular for certain values of θ in the domain of interest.\n\nIf P(θ) has a repeated root, the next iterate will depend inversely upon a quartic in θ. This iterate for the diffusivity then also has singularities for certain values of θ in the domain of interest and as such, we proceed no further.\n\nIf P(θ) has two complex roots, then P(θ) < 0 for all θ, as required. In this case, the integral of D₁ is an inverse tan function, and the next iterate, D₂, is given by\n\n D₂ = (A³/K²)[tan⁻¹((1/β)(θ − ½(θ₁+1))) + tan⁻¹((1/(2β))(θ₁+1))] / {βs²θ(1 − θ)(θ − θ₁) − A²[tan⁻¹((1/β)(θ − ½(θ₁+1))) + tan⁻¹((1/(2β))(θ₁+1))]},\n\nwhere β² = θ₁ − A/s − ¼(1 + θ₁)². Some values of the parameters produce reasonable functions for D(θ); for example, if K = 1, s = 1 and A = −1.5, the first three iterates are shown in Figure 5, together with the numerical solution to equation (7) found using Matlab's ode45 routine.",
null,
"Figure 5: D(θ) for the Fitzhugh-Nagumo source term, constructed by Matlabroutine ode45 (solid), as well as approximations D0 (solid), D1(θ) (dashed), D2(θ) (dash-dot) (K=1, s=1 and A=−1.5).\n\nThe exact solution for the population is plotted in Figure 6, showing the population is doomed to extinction.",
null,
"Figure 6: Solution for the decreasing population density versus radial coordinate, for the Fitzhugh-Nagumo source term, at times |A|t=0.0,0.1,0.2,0.3, according to the Fitzhugh-Nagumo type equation with K=1 and s=1.\n\n## 6 Application to population genetics\n\nLet be the frequency of new advantageous allele , and be the frequency of the original allele . Let be the fitness coefficients of genotypes with zero, one and two copies of respectively. Allele will be said to be partially recessive if . That is, addition of the second copy of gene gives a greater advantage than does the first copy. For example, in the reaction diffusion equations with linear diffusion, calculations in show that the generic source term is that of Fitzhugh-Nagumo-type, with\n\n s=γ11−2γ12+γ22;θ1=12−ν;ν=γ22−γ11γ22−γ12\n\nFor a partially recessive gene , and . For the nonlinear diffusion model with Fitzhugh-Nagumo-type source, the extinguishing solution developed in Section 5 exists only if the radius of the boundary where the new gene is selectively removed, is less than the critical value .\n\nThe case is a special case that gives rise to the Fisher equation with quadratic logistic source term . This arises in the context of diploid sexual population genetics only in the special case that the fitness coefficients are in arithmetic progression, ie . In that case, allele is neither partially recessive nor partially dominant. The Fisher equation arises also as the equation for the frequency of a new advantageous gene of an asexual species . The critical radius for existence of the extinguishing solution in the case of the quadratic logistic source term is .\n\nAnother special case arises when , the case of a totally recessive gene . The phenotype whose expression requires two copies of a new gene, is particularly vulnerable. Now , the value at which the critical radius diverges. The extinguishing solution of the Huxley reaction-diffusion equation (Section 4) always exists, no matter how large is the diameter of the boundary where individuals with the totally recessive new gene are removed.\n\nIf a new, perhaps genetically engineered, advantageous gene is totally recessive or if it is partially recessive within a population that is contained within a circle smaller than that with critical radius, then the frequency of the new gene can be reduced to zero uniformly just by selective culling at the boundary.\n\nIf the new gene is partially recessive and the circular domain of the species has a radius larger than the critical value, then its removal cannot be achieved by actions taken at the boundary alone.\n\n## 7 Discussion and final remarks\n\nReaction-diffusion equations, with a source term that is either quadratic or cubic, are commonly used to model various physical and biological systems. Here we have shown that equations of this type with nonlinear diffusion, are exactly solvable provided the nonlinear diffusion and source terms satisfy a particular relationship. This relationship can be found by first writing the equation in terms of the Kirchhoff variable, and then using the nonclassical symmetry reduction method. Provided that the relationship is satisfied, the equation may be separated into spatial and temporal parts. 
The solutions for the Kirchhoff variable are then exponential in time (either growth or decay), and satisfy either a Helmholtz, modified Helmholtz or Laplace equation in space.\n\nIn general, one may either specify the nonlinear diffusivity and then calculate the corresponding reaction term which will allow exact solution, or alternatively, the reaction term may be specified and the diffusivity calculated using the aforementioned relationship. In this paper, we are particularly interested in logistic Fisher-type models and cubic Huxley- or Fitzhugh Nagumo-type models, and so we calculate the nonlinear diffusivity after specifying the reaction term. In this case, the diffusivity can be constructed by solving a differential equation that is equivalent to an Abel equation if the Kirchhoff variable satisfies a Helmholtz (or modified Helmholtz) equation, or a separable equation if the Kirchhoff variable satisfies the Laplace equation. In cases where the nonlinear diffusivity cannot be written in closed form, it may be written in terms of a series expansion. It is important to note that, although the diffusivity is required to be a nonlinear function of the dependent variable, for reasonable parameter sets it is well behaved and slowly varying over the relevant range of the population density.\n\nInterpretation of the obtained solutions, in the application of population genetics, is discussed in Section 6. The extinguishing solution of the Huxley reaction-diffusion equation (Section 4) always exists, no matter how large is the diameter of the boundary where individuals with the totally recessive new gene are removed. In practice, it is more likely that only individuals with the distinctive phenotype expressed from two copies of the recessive gene would be identified and removed at the boundary. This would locally remove a fraction θ of the new genes from the gene pool. This suggests a nonlinear Robin boundary condition at r = r₁, representing a selective removal from the domain of the new gene, in proportion to its frequency. If only this proportion of the new genes is detected, then the radiation constant is proportional to θ, which is approximately proportional to u when D is approximately constant. The nonlinear radiation boundary condition therefore is\n\n −u_r = Hu²; r = r₁,\n\nwith H constant. This boundary condition would be of interest for the application to population genetics."
] | [
null,
"https://media.arxiv-vanity.com/render-output/3658870/x1.png",
null,
"https://media.arxiv-vanity.com/render-output/3658870/x2.png",
null,
"https://media.arxiv-vanity.com/render-output/3658870/x3.png",
null,
"https://media.arxiv-vanity.com/render-output/3658870/x4.png",
null,
"https://media.arxiv-vanity.com/render-output/3658870/x5.png",
null,
"https://media.arxiv-vanity.com/render-output/3658870/x6.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88791287,"math_prob":0.9886248,"size":31153,"snap":"2020-34-2020-40","text_gpt3_token_len":6949,"char_repetition_ratio":0.16315131,"word_repetition_ratio":0.040265754,"special_character_ratio":0.22395916,"punctuation_ratio":0.14457418,"nsfw_num_words":3,"has_unicode_error":false,"math_prob_llama3":0.9956901,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T12:56:38Z\",\"WARC-Record-ID\":\"<urn:uuid:c2e46a89-9daa-4fd0-b807-563f4b8e93cf>\",\"Content-Length\":\"616130\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec178dd5-d3cc-48ba-971f-f98b26e2c099>\",\"WARC-Concurrent-To\":\"<urn:uuid:d9b7266a-6fa1-403d-8f83-38a24d6f1701>\",\"WARC-IP-Address\":\"104.28.21.249\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/1602.07370/\",\"WARC-Payload-Digest\":\"sha1:O6NISDJADWMIKZKQCYC7J7NIXQEQDG6X\",\"WARC-Block-Digest\":\"sha1:PDPXJRGUVUVQNUYG65D2ZYF3JZHBNKA6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400279782.77_warc_CC-MAIN-20200927121105-20200927151105-00442.warc.gz\"}"} |
https://vi-magento.com/library-function-in-qbasic-quick-answer/ | [
"Chuyển tới nội dung\nTrang chủ » Library Function In Qbasic? Quick Answer\n\n# Library Function In Qbasic? Quick Answer\n\nAre you looking for an answer to the topic “library function in qbasic“? We answer all your questions at the website https://vi-magento.com in category: Top 794 tips update new. You will find the answer right below.\n\nLibrary function is also known as built in function or routine function. A library function in QBASIC may be a string function for numeric function. All string library functions can manipulate either string or numeric data and return a string value.\n\n## Functions in QBasic | Library Functions in QBasic | How to use Library Functions\n\nFunctions in QBasic | Library Functions in QBasic | How to use Library Functions\nFunctions in QBasic | Library Functions in QBasic | How to use Library Functions\n\nLibrary functions are built-in functions that are grouped together and placed in a common location called library. Each function here performs a specific operation. We can use this library functions to get the pre-defined output. All C standard library functions are declared by using many header files.\n\nLibrary Function in QBASIC\n• User-defined function.\n• Built-in function.\nFunctions in QBASIC are readymade programs, which take some data, manipulate them and return a value, which may be a string or a numeric type.\n\nREVIEW OF LIBRARY FUNCTION.\nFunction name Syntax Purpose\nSTR () STR\\$ (x) To convert a numeric expression to its string expression.\nSQR () SQR (x) It returns square root of any positive number.\nStandard C Library Functions Table, By Name\nFunction System Include File Function Prototype\nabort stdlib.h void abort(void);\nabs stdlib.h int abs(int n);\nacos math.h double acos(double x);\nasctime time.h char *asctime(const struct tm *time);\n\n## What are the two types of library functions in Qbasic?\n\nLibrary Function in QBASIC\n• User-defined function.\n• Built-in function.\n\nWhat are library functions in QBASIC?\n\nB) BUILD-IN OR LIBRARY FUNCTIONS: –\n\nThe functions which are provided by the QBASIC system and also allow the programmer to use them according to the requirement are known as BUILT-IN or Library Functions. These functions are required to be called by the programmer to use them in a program.\n\nWhat are library functions name two types of library functions in QBASIC?\n\nAnswer: It is a built-in or ready-made program which helps us to perform a certain task such as mathematical, financial, etc. Explanation: A function manipulates data passes to it and returns either a string or a numeric.\n\nHow many types of functions are there in QBASIC?\n\nIn QBasic, there are two types of functions: standard or “built-in” and user-defined. Standard or “built-in” functions are provided by the QBasic system or compiler environment and basically allow the programmer to use them so he/she doesn’t have to write code to handle certain situations.\n\nWhat are the two types of statements in QBASIC?\n\nThere are several different types statements in QBASIC Programming language. 
For example, CLS statement clears the screen, PRINT statement displays output and INPUT statement takes the input from the users.\n\n## What are the standard library functions in C?\n\nStandard C Library Functions Table, By Name\nFunction System Include File Function Prototype\nabort stdlib.h void abort(void);\nabs stdlib.h int abs(int n);\nacos math.h double acos(double x);\nasctime time.h char *asctime(const struct tm *time);\n\nWhat are the standard library functions?\n\nLibrary functions include standard input/output (stdio. h), string manipulation (string. h), math functions (math. h), and date and time functions (time.\n\nWhat are standard functions in C language?\n\nThe standard functions are also called as library functions or pre-defined functions. In C when we use standard functions, we must include the respective header file using #include statement. For example, the function printf() is defined in header file stdio. h (Standard Input Output header file).\n\nHow many standard library are there in C?\n\nThe ANSI C standard library consists of 24 C header files which can be included into a programmer’s project with a single directive. Each header file contains one or more function declarations, data type definitions and macros.\n\nWhat is standard library in C language?\n\nThe C standard library or libc is the standard library for the C programming language, as specified in the ISO C standard. Starting from the original ANSI C standard, it was developed at the same time as the C library POSIX specification, which is a superset of it.\n\n## Which is the library function?\n\nLibrary functions are built-in functions that are grouped together and placed in a common location called library. Each function here performs a specific operation. We can use this library functions to get the pre-defined output. All C standard library functions are declared by using many header files.\n\nWhat is library function example?\n\nExample: Square root using sqrt() function\n\nSuppose, you want to find the square root of a number. To compute the square root of a number, you can use the sqrt() library function. The function is defined in the math. h header file.\n\nLibrary functions are pre defined functions. Inorder to ease the work of the user, the developer provides some built in functions. For some operations we need not write our own code. We can simply use one of those library functions to get our work done.\n\nIs Main () a library function?\n\nmain is not a library function. However, the standard also imposes some requirements on its signature, and that it must not be overloaded or declared static or inline. In particular, it must not be used, meaning that you cannot call it.\n\nWhat are the two library functions?\n\nThe library functions to allocate memory is malloc() and calloc() methods. The answer to the question: Specify the 2 library functions to dynamically allocate memory is option C which is malloc() and calloc().\n\n## What are the 3 types of functions available in QBasic?\n\nCOS, SIN, and TAN Function\n\nThe CINT function gives results by rounding up the fractional portion. It supports only -32768 to +32767.\n\nWhat are the types of functions in QBasic?\n\nIn QBasic, there are two types of functions: standard or “built-in” and user-defined. 
Standard or "built-in" functions are provided by the QBasic system or compiler environment and basically allow the programmer to use them so he/she doesn't have to write code to handle certain situations.\n\nWhat is QBasic and its function?\n\nQBasic is an integrated development environment (IDE) and interpreter for a variety of dialects of BASIC which are based on QuickBASIC. Code entered into the IDE is compiled to an intermediate representation (IR), and this IR is immediately executed on demand within the IDE.\n\nHow many types of QBasic are there?\n\nIn QBASIC, variables are also of two types: Numeric variable. String variable.\n\nWhat are the two types of library functions in QBasic?\n\nc. Functions provided by the QBASIC system are called built-in or Standard or Library functions. d. The Val function is used to convert a number in string form into a numeric value."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87876123,"math_prob":0.7494675,"size":7404,"snap":"2023-14-2023-23","text_gpt3_token_len":1597,"char_repetition_ratio":0.21851352,"word_repetition_ratio":0.24916667,"special_character_ratio":0.20813075,"punctuation_ratio":0.10885341,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97415036,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-20T09:40:15Z\",\"WARC-Record-ID\":\"<urn:uuid:17b123b4-720f-4f39-b9f3-d5667e69b4f6>\",\"Content-Length\":\"76146\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ee70f637-d5fb-425d-b4ed-d1acaeae0f30>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c50c77b-ea64-4c05-93e1-669b9e772fc1>\",\"WARC-IP-Address\":\"45.252.250.21\",\"WARC-Target-URI\":\"https://vi-magento.com/library-function-in-qbasic-quick-answer/\",\"WARC-Payload-Digest\":\"sha1:ZVYLGASSHYKX44NLOJOFL337TTV6IZUU\",\"WARC-Block-Digest\":\"sha1:U37UMG3HD3KQ2DIWUCLUKNFL6NZHW5TM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943471.24_warc_CC-MAIN-20230320083513-20230320113513-00565.warc.gz\"}"} |
https://www.vutbr.cz/en/students/courses/detail/209371?apid=209371 | [
"Course detail\n\n# Signals and Systems\n\nThis module provides an introduction to the linear time-invariant continuous- and discrete-time signals and systems. Students are introduced with the various methods of description and analysis of the continuous- and discrete-time signals and systems: time domain, frequency domain, spectrum, Fourier series, sampling, transforms (Laplace, Fourier, Z) and difference or differential equations. These methods are used to analyse signals and systems properties and to determine basic characteristics: linearity, time-invariance, causality, stability, etc.\n\nLearning outcomes of the course unit\n\nAn absolvent is able to:\n- describe continuous and discrete time signals in time and frequency domain,\n- perform continuous and discrete time signal transform using the Fourier series, the Fourier transform, the Laplace transform and the Z-transform,\n- discuss practical interpretations of these transforms and their properties,\n- describe fundamental properties of LTI continuous-time systems,\n- describe fundamental properties of LTI discrete-time systems,\n- use the different methods to describe LTI systems,\n- determine system response of an LTI system to standard and general signals,\n- determine from the description of the LTI system its characteristics such as linearity, time-invariance, causality and stability.\n\nPrerequisites\n\nStudent should have sufficient knowledge of the mathematical analysis for bachelors, mainly: differential and integral calculus, series, basic transforms and complex numbers.\n\n• compulsory prerequisite\n\nCo-requisites\n\nNot applicable.\n\nRecommended optional programme components\n\nNot applicable.\n\nChi-Tsong Chen:System and Signal Analysis,Saunders College publishing, 1994. (EN)\nOPENHEIM, Alan, WILSKY, Alan. Signals and Systems. Second edition. New Jersey: Prentice Hall 1997, 957 s. ISBN 0-13-814757-4. (EN)\nCHI-TSONG CHEN. System and Signal Analysis. Oxford University press 2004. 424 s. ISBN 0-19-515661-7. (EN)\n\nPlanned learning activities and teaching methods\n\nTeaching methods include lectures with demonstrations of practical computations, and pc labs. Students have to finish individual projects during semester.\n\nAssesment methods and criteria linked to learning outcomes\n\n30 points for individual projects\n70 points for final exam\n\nLanguage of instruction\n\nEnglish\n\nWork placements\n\nNot applicable.\n\nCourse curriculum\n\n1. General introduction and motivation; continuous and discrete world, signals classification.\n2. Continuous-time and discrete-time signals - basic operations and manipulations, discretization of continuous-time signals.\n3. Frequency domain of the continuous-time signal, the Fourier series.\n4. The Fourier transform, examples.\n5. Time and frequency domain of the discrete-time signal, the discrete Fourier series, the discrete Fourier transform (DFT).\n6. Systems – definition, classification, the examples of real systems.\n7. Continuous-time LTI system – description using the differential equations, the Laplace transform.\n8. Continuous-time LTI system – transfer function, poles and zeros, stability of LTI systems.\n9. Continuous-time LTI system – response on the standard input signals, the relation to the BIBO stability.\n10. Discrete-time LTI system - description using the difference equations, the Z-transform.\n11. Discrete-time LTI system – transfer function, poles and zeros, stability of LTI systems.\n12. 
Discrete-time LTI system – response on the standard input signals, the relation to the BIBO stability.\n13. Summary.\n\nAims\n\nTo acquaint with the fundamentals of signals and systems with the continuous and discrete time. To learn to apply the fundamentals to real signals and systems.\n\nSpecification of controlled education, way of implementation and compensation for absences\n\nThe content and forms of instruction in the evaluated course are specified by a regulation issued by the lecturer responsible for the course and updated for every academic year.\n\nClassification of course in study plans\n\n• Programme BPA-ELE Bachelor's\n\nspecialization BPA-PSA , 2. year of study, winter semester, 6 credits, compulsory\n\n#### Type of course unit\n\nLecture\n\n52 hours, optionally\n\nTeacher / Lecturer\n\nExercise in computer lab\n\n13 hours, compulsory\n\nTeacher / Lecturer"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.83098996,"math_prob":0.85611725,"size":4148,"snap":"2021-31-2021-39","text_gpt3_token_len":844,"char_repetition_ratio":0.15781853,"word_repetition_ratio":0.07760141,"special_character_ratio":0.19623916,"punctuation_ratio":0.15804598,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9589677,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-29T08:27:19Z\",\"WARC-Record-ID\":\"<urn:uuid:8a93fb85-c2e1-450a-b013-e38c41d0e817>\",\"Content-Length\":\"59292\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:148a7aab-07e5-4b28-873d-2f2b3af56d1e>\",\"WARC-Concurrent-To\":\"<urn:uuid:bc73af39-449d-43a2-a60f-f1a0e0815220>\",\"WARC-IP-Address\":\"147.229.2.90\",\"WARC-Target-URI\":\"https://www.vutbr.cz/en/students/courses/detail/209371?apid=209371\",\"WARC-Payload-Digest\":\"sha1:W4Z3I4P6FMWMLFA67UEDFLMCIF7UYNTM\",\"WARC-Block-Digest\":\"sha1:4ISKVC4CK4LH34AZKYJ2737DPO3TQLAQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153854.42_warc_CC-MAIN-20210729074313-20210729104313-00454.warc.gz\"}"} |
https://gamedev.stackexchange.com/questions/53419/animating-a-background-pulse-with-easing-equation | [
"# Animating a background 'pulse' with easing equation\n\nI'm trying to find a suitable easing equation (or other method) to animate an object so that it 'pulses' (imagine a 'spike' on a music visualiser, or see the image I drew badly below)",
null,
"'v' is the value I'm using to scale. 't1' and 't2' mark the end of a single pulse.\n\nThe game I'm working on is a puzzler, but I'd like to have objects in the background pulse in time with user-selected music.\n\nI'm sure it's just a case of knowing what to search for, but any advice would be appreciated on how best to achieve this.\n\nMy answer is similar to Miro's, but I think the Math ought to be a lot simpler. Of course, the details of your curve make all the difference. If you don't care precisely what the curve looks like, then all you need is the basic sawtooth.\n\nvar clock = function(x) {\nreturn (1-x) - Math.floor(1-x); //I like this method, though not the simplest. (Math.floor is needed: a bare floor is undefined in JavaScript.)\n}\n\nvar clockVal = clock(time);\n\n\nThat yields:",
null,
"If you want to make it more curvy, raise it to some power:\n\nvar clockVal = Math.pow(clock(time), 2);\nvar clockVal = Math.pow(clock(time), 3);",
null,
"",
null,
"• Interim results say this looks good - though in my original drawing there was a small amount of time over which the pulse 'spikes' (rather than right on t = 1). Any ideas? – codinghands Apr 7 '13 at 4:56\n• Another way to give credit to @Miro; he already included that. Change the call to clock to input a longer period like so: var clockVal = clock(time / period); – Seth Battin Apr 7 '13 at 5:30\n\nI assume that you want to create periodic function so you need to periodize x:\n\np(x) = x/T - floor(x/T)\n\n\nThen you'll create rational function from two linear functions.\n\nf(x) = ( a*x + b ) / ( c*x + d )\n\n\nYou've got 2 points [0,V], [1,0] and together you have:\n\nf(x) = (V - V * p(x)) / (1 + p(x) * shape)",
null,
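The comments below ask how the rational function is actually used. As a concrete, unofficial illustration, here is the same construction in Python: `p` is the periodised ramp and `pulse` implements `f(x) = (V - V*p(x)) / (1 + p(x)*shape)` from the answer above. The names `T`, `V` and `shape` follow the answer; everything else is assumed.

```python
# Sketch of the pulse from the answer: it spikes to V at the start of each period,
# then decays towards 0; a larger "shape" makes the decay sharper.
import math

def p(x, T):
    # periodise x into [0, 1): the fractional part of x/T
    return x / T - math.floor(x / T)

def pulse(x, T=1.0, V=1.0, shape=5.0):
    px = p(x, T)
    return (V - V * px) / (1 + px * shape)   # denominator is never 0 for px in [0, 1)

# sample a couple of periods to see the decaying spike repeat
print([round(pulse(t / 10.0), 3) for t in range(25)])
```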
"• p(x) = x/T - floor(x/T) : ten times faster with : (x/T) % 1 . How to be sure (c*x+ d) !== 0 ? And how do you use the rationnal function ? – GameAlchemist Apr 6 '13 at 20:40\n• @VincentPiel Modulo is usually associated with integers. Some languages may allow n % 1 to retrieve the decimal portion of the number, but it's not necessarily a good idea. And the claim that using the operator is \"ten times faster\" that some other operation is probably untrue. – Seth Battin Apr 6 '13 at 21:34\n• jsperf.com/trunc-or-1/2 floor 92% slower (% 1) is faster, and valid in Javascript. The fact that it doesn't work in other languages doesn't seem like a good reason not to use it in Javascript. How to be sure (c*x+ d) !== 0 ? And how do you use the rationnal function ? – GameAlchemist Apr 6 '13 at 21:40\n• Wow. Well, congratulations on writing proving yourself correct. – Seth Battin Apr 6 '13 at 21:51\n• Hi Miro - thanks for your answer. As Vincent said: \"And how do you use the rational function?\" – codinghands Apr 7 '13 at 4:55"
] | [
null,
"https://i.stack.imgur.com/D2l1N.png",
null,
"https://i.stack.imgur.com/KuTGh.png",
null,
"https://i.stack.imgur.com/COWKf.png",
null,
"https://i.stack.imgur.com/B4Qst.png",
null,
"https://i.stack.imgur.com/fAb9r.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9600001,"math_prob":0.9240426,"size":513,"snap":"2021-04-2021-17","text_gpt3_token_len":134,"char_repetition_ratio":0.08840864,"word_repetition_ratio":0.0,"special_character_ratio":0.24951267,"punctuation_ratio":0.06542056,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96444654,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-19T06:07:12Z\",\"WARC-Record-ID\":\"<urn:uuid:1b5a471c-32fc-487f-aeeb-f1ab240fd642>\",\"Content-Length\":\"164148\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:49171afb-2411-47da-bfdc-10a9738c11ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3815b6f-0310-4801-aa38-f3e5865f6ecf>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://gamedev.stackexchange.com/questions/53419/animating-a-background-pulse-with-easing-equation\",\"WARC-Payload-Digest\":\"sha1:42XE5JJWQHMTRGCZMZ4VB7A64IJJG5MU\",\"WARC-Block-Digest\":\"sha1:EUQOXIY5QRTG2YMQQEHCXX2SMKVOELEV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703517966.39_warc_CC-MAIN-20210119042046-20210119072046-00320.warc.gz\"}"} |
https://www.vatsalsaglani.dev/leetcode/leetcode-167 | [
"# Leetcode - 167: Two Sum II\n\n## The Two Sum II problem is an extension of the Two Sum problem and can be solved with the two-pointer technique\n\nMEDIUM\n–––\n\nAs seen in the question Two Sum in Python using a dictionary: given a list of `integers` and a `target` integer, the algorithm needs to return the indices of two numbers which can be added to get the target. Here the only change is that the given list is sorted in ascending order, which can be used as an advantage.\n\n## Two Sum II\n\n### Question\n\nGiven an array of integers `numbers` that is already sorted in ascending order, find two numbers such that they add up to a specific `target` number.\n\nReturn the indices of the two numbers (1-indexed) as an integer array `answer` of size `2`, where `1 <= answer[1] < answer[2] <= numbers.length`.\n\nThe brute-force solution is the same as the one in the previous blog.\n\n### Solution and Explanation\n\ntwosumtwo_onepass.py\n``````def twoSum(numbers, target):\n    left_ptr = 0\n    right_ptr = len(numbers) - 1\n\n    while left_ptr < right_ptr:\n        current_sum = numbers[right_ptr] + numbers[left_ptr]\n\n        if current_sum > target:\n            right_ptr -= 1\n        elif current_sum < target:\n            left_ptr += 1\n        else:\n            return [left_ptr + 1, right_ptr + 1]\n``````\n\n#### Explanation\n\nAs the numbers list is sorted, we can use the one-pass two-pointer method: we start with two different pointers, one from the start of the list and another from the end.\n\nThe logic becomes simpler. We just need to move the pointers based on the sum of the values at their indices.\n\nIf the sum is higher than the given target, then move the right pointer towards the left, i.e. `right_ptr -= 1`.\n\nIf the sum is lower than the target, then we need to move the left pointer towards the right, i.e. `left_ptr += 1`.\n\nThe only condition left is when the sum is equal to the given target, and as mentioned in the problem we need to return the 1-indexed positions of the two pointers. Hence, we add 1 to both `left_ptr` and `right_ptr`.\n\nThat's it for this blog, hope you found this helpful. You can connect with me on Twitter."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8410859,"math_prob":0.9853875,"size":1940,"snap":"2023-14-2023-23","text_gpt3_token_len":464,"char_repetition_ratio":0.14359504,"word_repetition_ratio":0.0,"special_character_ratio":0.2386598,"punctuation_ratio":0.079155676,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993444,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-26T05:36:33Z\",\"WARC-Record-ID\":\"<urn:uuid:637d069a-c287-40d8-ad17-00f558b9edc4>\",\"Content-Length\":\"28213\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4d171ff6-1706-44d2-851b-6d08d2dcf95a>\",\"WARC-Concurrent-To\":\"<urn:uuid:01bbe976-57ae-49ef-8ba9-167646fd0d32>\",\"WARC-IP-Address\":\"76.76.21.241\",\"WARC-Target-URI\":\"https://www.vatsalsaglani.dev/leetcode/leetcode-167\",\"WARC-Payload-Digest\":\"sha1:Y6XFAGV2RZYJZH6G4OFDTWCXMECDKPLV\",\"WARC-Block-Digest\":\"sha1:UNS5YYWI2ZHHGCO45OJ2AL3PYG2S43LS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945433.92_warc_CC-MAIN-20230326044821-20230326074821-00569.warc.gz\"}"} |
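With the `twoSum` function from the post above pasted into a Python session, a quick sanity check looks like this (the first call is the example from the problem statement):

```python
print(twoSum([2, 7, 11, 15], 9))   # -> [1, 2]
print(twoSum([2, 3, 4], 6))        # -> [1, 3]
print(twoSum([-1, 0], -1))         # -> [1, 2]
```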
https://spsstools.net/en/syntax/syntax-index/ranking-largest-values-sorting-grouping/calculate-mode/ | [
"``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46``` ```*(Q)In my data set I have 4 categories of 5 questions each I would like to calculate the mode for each category for each subject, i.e., compute modecat1, modecat2, modecat3 and modeca4 for each subject. *(A) Posted by rlevesque@videotron.ca to SPSSX-L on 2001/08/28 Note that when the distribution is multi-modal, the syntax selects the mode having the highest value. * This assumes you need the mode in the data editor. If it is sufficient to have the mode in the Output window, use FREQ with the /STATISTICS=mode subcommand. INPUT PROGRAM. SET SEED=987651423. VECTOR q(20F8.0). LOOP id=1 TO 25. + LEAVE id. + LOOP #=1 TO 20. + COMPUTE q(#)=TRUNC(UNIFORM(5)+1). + END LOOP. + END CASE. END LOOP. END FILE. END INPUT PROGRAM. EXECUTE. **** Start job. VECTOR resp=q1 TO q20. LOOP qnb=1 TO 20. COMPUTE resp=resp(qnb). COMPUTE cat=1+TRUNC(qnb/5.1). XSAVE OUTFILE='c:\\\\temp\\\\temp.sav' /KEEP=id cat qnb resp. END LOOP. EXECUTE. GET FILE='c:\\\\temp\\\\temp.sav'. * aggregate cases by response values within id and category. AGGREGATE /OUTFILE=* /BREAK=id cat resp /nbresp = N(qnb). * identify then keep the line with the largest number of cases. SORT CASES BY id cat nbresp. ADD FILES FILE=* /BY=id cat /LAST=last. SELECT IF last=1. VARIABLE LABELS nbresp 'Number of response' resp 'Response'. EXECUTE. ```\nRelated pages\n\n..."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5314019,"math_prob":0.9612735,"size":1455,"snap":"2023-40-2023-50","text_gpt3_token_len":485,"char_repetition_ratio":0.1026878,"word_repetition_ratio":0.0,"special_character_ratio":0.35670102,"punctuation_ratio":0.15522388,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9834556,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T04:03:08Z\",\"WARC-Record-ID\":\"<urn:uuid:3c612cac-ed3d-41f4-a5c3-7257c4493cea>\",\"Content-Length\":\"15402\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3d668c57-b3a0-4941-9847-a65d34d92044>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a7905ba-e146-4f83-bcdc-999debd039ab>\",\"WARC-IP-Address\":\"78.108.88.70\",\"WARC-Target-URI\":\"https://spsstools.net/en/syntax/syntax-index/ranking-largest-values-sorting-grouping/calculate-mode/\",\"WARC-Payload-Digest\":\"sha1:XOXV5GMEWUVNQ3H45ZSKGFDJUDZXJQJO\",\"WARC-Block-Digest\":\"sha1:RQ63WHOQZZTJBEJYLZ6VXGEBTBCVWV5U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506559.11_warc_CC-MAIN-20230924023050-20230924053050-00106.warc.gz\"}"} |
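For readers outside SPSS, the same job can be sketched in pandas. This is only a rough equivalent: it assumes the data have already been restructured to one row per subject, category and question (columns `id`, `cat`, `resp`, mirroring the temp.sav file built above) and, like the syntax, it keeps the largest value when the distribution is multi-modal.

```python
import pandas as pd

def per_subject_mode(df: pd.DataFrame) -> pd.DataFrame:
    # count how often each response occurs within id x category
    counts = (df.groupby(["id", "cat", "resp"]).size()
                .reset_index(name="nbresp"))
    # sort so the most frequent (and, on ties, the largest) response comes last,
    # then keep that last row per id x category
    counts = counts.sort_values(["id", "cat", "nbresp", "resp"])
    return counts.groupby(["id", "cat"]).tail(1)
```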
https://mairehofemulg.ml/4727.php | [
"# Coupon bond value calculation\n\nThe theoretical fair value of a bond is calculated by discounting the present value of its coupon payments by an appropriate discount rate.\n\nIn either case, the next payment will occur in exactly six months. This will be important because we are going to use the time value of money keys to find the present value of the cash flows.\n\nThe value of any asset is the present value of its cash flows. Therefore, we need to know two things:. We have already identified the cash flows above. Take a look at the time line and see if you can identify the two types of cash flows. Using the principle of value additivity , we know that we can find the total present value by first calculating the present value of the interest payments and then the present value of the face value.\n\nAdding those together gives us the total present value of the bond. We don't have to value the bond in two steps, however. Assuming that your required return for the bond is 9. We can calculate the present value of the cash flows using the TVM keys. Enter the data: 6 into N , 4. Notice that the bond is currently selling at a discount i. This discount must eventually disappear as the bond approaches its maturity date. A bond selling at a premium to its face value will slowly decline as maturity approaches. In the chart below, the blue line shows the price of our example bond as time passes.\n\nThe red line shows how a bond that is trading at a premium will change in price over time. Both lines assume that market interest rates stay constant. In either case, at maturity a bond will be worth exactly its face value. Keep this in mind as it will be a key fact in the next section.\n\nIn the previous section we saw that it is very easy to find the value of a bond on a coupon payment date. However, calculating the value of a bond in-between coupon payment dates is more complex. As we'll see, the reason is that interest does not compound between payment dates. That means that you cannot get the correct answer by entering fractional periods e. Let's start by using the same bond, but we will now assume that 6 months have passed.\n\n1. Bond Pricing Formula.\n3. 2) Key Bond Characteristics.\n5. Bond Price | Definition, Formula and Example?\n6. toledo walleye coupons!\n7. insignia vxr deals.\n\nThat is, today is now the end of period 1. What is the value of the bond at this point? To figure this out, note that there are now 5 periods remaining until maturity, but nothing else has changed. Therefore, simply change the value in N to 5.\n\n• HP 10bII+ Financial Calculator - Bond Calculations | HP® Customer Support!\n• Excel formula: Bond valuation example | Exceljet?\n• Bond valuation example;\n• health retreat deals nsw.\n• Notice that the value of the bond has increased a little bit since period 0. As noted previously, this is because the discount must eventually vanish as the maturity date approaches. Now, is there another way that we might arrive at that period 1 value? Of course. Remember that your required return is 4. Therefore, the value of the bond must increase by that amount each period.\n\n### Coupon and Yield\n\nWait a minute! That's not the same answer. However, remember that this is the total value of your holdings at the end of period 1. If we subtract that, you can see that we do get the same result:. 
This is one of the key points that you must understand to value a bond between coupon payment dates.\n\n## Bond valuation\n\nLet me recap what we just did: We wanted to know the value of the bond at the end of period 1. Step 3: Now, the total number of periods till maturity is computed by multiplying the number of years till maturity and the frequency of the coupon payments in a year. The number of periods till maturity is denoted by n. Step 4: Now, the yield to maturity YTM is the discounting factor and it is determined based on the current market return from an investment with similar risk profile.\n\nThe YTM is denoted by r. Step 5: Now, the present value of the first, second, third coupon payment and so on so forth along with the present value of the par value to be redeemed after n periods is derived as,. Step 6: Finally, adding together the present value of all the coupon payments and the par value gives the bond price as below,. Below are some of the Examples of Bond Pricing Formula. Let us take an example of a bond with annual coupon payments.\n\n## Bond Valuation using Yield to Maturity & Spot Interest Rates\n\nSince the coupon rate is lower than the YTM , the bond price is less than the face value and as such the bond is said to be traded at discount. Let us take an example of a bond with semi-annual coupon payments. Since the coupon rate is higher than the YTM, the bond price is higher than the face value and as such, the bond is said to be traded at a premium. Let us take the example of a zero coupon bond.\n\nThe concept of bond pricing is very important because bonds form an indispensable part of the capital markets, and as such investors and analysts are required to understand how the different factors of a bond behave in order to determine its intrinsic value.\n\nHow to Price/Value Bonds - Formula, Annual, Semi-Annual, Market Value, Accrued Interest\n\nSimilar to stock valuation, the pricing of a bond is helpful in understanding whether it is a suitable investment for a portfolio and consequently forms an integral part of bond investing. This has been a guide to Bond Pricing Formula."
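The same present-value logic can be written as a short function instead of calculator keystrokes. This is only a sketch with hypothetical numbers: the coupon rate and yield are per period (for a semi-annual bond pass the semi-annual figures), matching the way the article uses N, I/YR, PMT and FV.

```python
def bond_price(face, coupon_rate, ytm, periods):
    # price = PV of the coupon stream + PV of the face value
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, periods + 1))
    pv_face = face / (1 + ytm) ** periods
    return pv_coupons + pv_face

# A 6-period bond with a 4% per-period coupon priced at a 4.75% per-period yield:
print(round(bond_price(1000, 0.04, 0.0475, 6), 2))   # about 961.6: below par, i.e. a discount
```

As the article notes, the price sits below face value whenever the yield exceeds the coupon rate, and above it in the opposite case.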
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9360834,"math_prob":0.98445565,"size":5835,"snap":"2020-10-2020-16","text_gpt3_token_len":1243,"char_repetition_ratio":0.15520494,"word_repetition_ratio":0.059099436,"special_character_ratio":0.21182519,"punctuation_ratio":0.10051546,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99709207,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-26T10:49:37Z\",\"WARC-Record-ID\":\"<urn:uuid:8a5fcc66-ce45-46ae-b3e8-08ebaeee14c6>\",\"Content-Length\":\"16049\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c7dd812e-d2ac-4a1b-a259-d639906c8ad7>\",\"WARC-Concurrent-To\":\"<urn:uuid:7ce7c9ee-19ee-4b83-85fe-21de985dc50f>\",\"WARC-IP-Address\":\"104.18.59.86\",\"WARC-Target-URI\":\"https://mairehofemulg.ml/4727.php\",\"WARC-Payload-Digest\":\"sha1:LMHAHXV6SITHDBR6YCNO6HWPN43YJXIG\",\"WARC-Block-Digest\":\"sha1:LOH7KPE73WWRECW72O6S345ELFFY6NKP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146341.16_warc_CC-MAIN-20200226084902-20200226114902-00421.warc.gz\"}"} |
https://www.equationsworksheets.net/plancks-equation-chem-worksheet-5-2-answer-key/ | [
"# Planck’s Equation Chem Worksheet 5 2 Answer Key\n\nPlanck’s Equation Chem Worksheet 5 2 Answer Key – Expressions and Equations Worksheets are created to assist children in learning quicker and more effectively. These worksheets contain interactive exercises and problems that are determined by the order of operations. Through these worksheets, kids can master both basic and complex concepts in a brief amount of time. These PDF resources are free to download and can be used by your child to learn math concepts. These resources are beneficial for students in the 5th to 8th Grades.\n\n## Get Free Planck’s Equation Chem Worksheet 5 2 Answer Key",
null,
"These worksheets are for students in the 5th to 8th grades. These two-step word problems are designed using decimals, fractions or fractions. Each worksheet contains ten problems. These worksheets are available both online and in printed. These worksheets can be used to test the practice of rearranging equations. Alongside practicing restructuring equations, they can also aid your student in understanding the principles of equality as well as the inverse of operations.\n\nThese worksheets are targeted at students in the fifth and eighth grades. These worksheets are suitable for those who are struggling with calculating percentages. There are three types of problems. You can choose to solve one-step challenges that contain decimal or whole numbers, or you can use word-based approaches to solve problems involving decimals and fractions. Each page is comprised of ten equations. These worksheets on Equations are suitable for students in the 5th through 8th grade.",
null,
"These worksheets are a great way to practice fraction calculations and other algebraic concepts. You can choose from many different types of problems with these worksheets. You can choose the one that is numerical, word-based, or a mixture of both. The type of problem is vital, as each will have a distinct problem type. Each page will have ten challenges, making them a great source for students in the 5th-8th grade.\n\nThese worksheets are designed to teach students about the relationships between variables and numbers. These worksheets help students practice solving polynomial equations and to learn how to apply equations in daily life. If you’re looking for a great educational tool to learn about expressions and equations, you can start with these worksheets. These worksheets will help you learn about different types of mathematical problems along with the different symbols that are utilized to represent them.",
null,
"These worksheets are extremely beneficial to students in the beginning grade. These worksheets will aid them develop the ability to graph and solve equations. These worksheets are ideal for learning about polynomial variables. These worksheets will help you simplify and factor them. There are numerous worksheets that can be used to teach children about equations. Working on the worksheet yourself is the best method to get a grasp of equations.\n\nThere are plenty of worksheets to learn about quadratic equations. There are various levels of equation worksheets for each degree. These worksheets are a great way to solve problems to the fourth level. Once you have completed an amount of work then you are able to work on solving different types of equations. Then, you can continue to work on similar problems. As an example, you may find a problem with the same axis, but as an elongated number."
] | [
null,
"https://www.equationsworksheets.net/wp-content/uploads/2022/07/planck-s-equation-1.png",
null,
"https://www.equationsworksheets.net/wp-content/uploads/2022/07/planck-s-equation-chem-worksheet-5-2-answer-key-math-db-excel-2.jpg",
null,
"https://www.equationsworksheets.net/wp-content/uploads/2022/07/planck-s-equation-chem-worksheet-5-2-answer-key-math-db-excel-1.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9517734,"math_prob":0.9642012,"size":3383,"snap":"2022-40-2023-06","text_gpt3_token_len":627,"char_repetition_ratio":0.20242675,"word_repetition_ratio":0.042990655,"special_character_ratio":0.18001774,"punctuation_ratio":0.08389262,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9959542,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-30T13:27:02Z\",\"WARC-Record-ID\":\"<urn:uuid:0e7dda1d-0874-497f-9cd6-3a7e26471e1a>\",\"Content-Length\":\"65425\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8560eb17-2898-4c05-91c0-7c9480546ed0>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d04c953-c309-4d33-a0e4-7c0bfe8a00e3>\",\"WARC-IP-Address\":\"104.21.91.193\",\"WARC-Target-URI\":\"https://www.equationsworksheets.net/plancks-equation-chem-worksheet-5-2-answer-key/\",\"WARC-Payload-Digest\":\"sha1:UCCA4T7IW7X2T2XWUBOXRHY7ZQMGZ3NL\",\"WARC-Block-Digest\":\"sha1:HIN65I2NOAUV63UI7CGBND32YFDBNQQM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335469.40_warc_CC-MAIN-20220930113830-20220930143830-00424.warc.gz\"}"} |
http://www.northindiacampus.com/2015/11/applied-chemistry-basic-chemistry.html | [
"## Applied Chemistry, Basic Chemistry, Question Paper of BE 1 and 2 Semester, Download Previous Years Question Paper 2\n\nSunday, November 29, 2015",
"S.B. Roll No……..\n\nAPPLIED CHEMISTRY\n1st Exam/Common/2555/Dec-2011\n\nTime: 3 Hrs. Max. Marks: 75\n\nSECTION-A\nQ1. Fill in the blanks:\n(1) The dimensional formula of density is …................\n(2) Negatively charged ions are called …………\n(3) One mole of particles means …………. particles.\n(4) The shape of a p-orbital is like ……………\n(5) Bond length is measured in …………. unit.\n(6) A base is a proton ………\n(7) Oxidation involves ………. of electrons by atoms or ions.\n(8) The functional group of aldehydes is …………\n\nState true or false:\n(2) Isotopes have the same number of protons.\n(3) Electronic energy is negative because the electron has a negative charge.\n(4) An element with atomic number a belongs to the p-block.\n(5) The molecule of methane (CH4) is tetrahedral.\n(6) NH4OH is a strong electrolyte.\n(7) The chlorine molecule is formed by ionic linkage.\n\nSECTION-B\n\nQ2. Attempt any ten questions.\n\n(1) Write the significance of a chemical equation with an example.\n(2) Calculate the number of atoms in 23 g of Na (atomic mass of Na = 23).\n(3) Differentiate between s and p orbitals.\n(4) Explain the line spectrum of hydrogen.\n(5) Define covalent bonding with at least two examples.\n(6) What are the disadvantages of using hard water in laundry work and in the paper and textile industries?\n(7) Explain open and closed systems.\n(8) Define ionization and degree of ionization.\n(9) A current of 2 A, on passing through a solution of AgNO3 for 100 s, deposited 2.22 g of Ag. Calculate the electrochemical equivalent of Ag.\n(10) Define catenation and functional group.\n(11) What is the difference between an atom and an ion?\n(12) Explain the defects in the long form of the periodic table.\n\nSECTION-C\n\nQ3. Attempt any three questions.\n(1) (a) What are the various factors favouring the formation of an ionic bond?\n(b) Balance the following equation by the hit and trial method:\nKClO3 → KCl + O2\n(c) How many protons and neutrons are there in the nucleus of 17O (atomic number 8)?\n(2) (a) A sample of hard water is found to contain 272 mg of CaSO4 per litre. What will be its hardness in ppm? (Ca = 40, S = 32, O = 16, C = 12)\n(b) Explain industrial applications of pH.\n(3) (a) Define and explain the process of electrolysis.\n(b) Differentiate between saturated and unsaturated hydrocarbons, giving two examples of each.\n(4) (a) Write IUPAC names of the following compounds:\nCH3-CH(OH)-CH3\nCH3-CH(COOH)-CH2-CH2-CH3\nCH3-CH2-CH2-CH2-CH(NO2)-CH3\nCH3-CH(CH3)-CH(CH3)-CH3\n(b) Write the characteristics of chemical equilibrium and explain the types of chemical equilibrium.\n(5) (a) Define a co-ordinate (dative) bond. Explain with the examples of NH4+ (formed from NH3) and BF4−. 5\n(b) Calculate the pH value of a 0.01 M HCl solution. 5"
] | [
null,
"http://3.bp.blogspot.com/-Bj2uRClxp9U/U4kn7eaZPLI/AAAAAAAAJrk/R6pWjwVeCyg/s1600/icon-date.png",
null,
"http://3.bp.blogspot.com/-BkivJqU8NdQ/U4lVtHakxjI/AAAAAAAAJtk/o_pdeIfArhw/s1600/icon-user.png",
null,
"http://4.bp.blogspot.com/-9KbQMsa89eo/U4kn8kFt1HI/AAAAAAAAJqw/ci0ClwtVGgI/s1600/icon-folder.png",
null,
"http://2.bp.blogspot.com/-ZUHocoseTuQ/U4kn6ycylVI/AAAAAAAAJqk/CXinGoLCO9k/s1600/icon-comment.png",
null,
"file:///C:/Users/Dragon/AppData/Local/Temp/msohtmlclip1/01/clip_image002.gif",
null,
"file:///C:/Users/Dragon/AppData/Local/Temp/msohtmlclip1/01/clip_image004.gif",
null,
"file:///C:/Users/Dragon/AppData/Local/Temp/msohtmlclip1/01/clip_image004.gif",
null,
"file:///C:/Users/Dragon/AppData/Local/Temp/msohtmlclip1/01/clip_image006.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7934636,"math_prob":0.94801223,"size":2726,"snap":"2020-45-2020-50","text_gpt3_token_len":839,"char_repetition_ratio":0.09368112,"word_repetition_ratio":0.0,"special_character_ratio":0.30447543,"punctuation_ratio":0.13768116,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97047687,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-02T10:07:46Z\",\"WARC-Record-ID\":\"<urn:uuid:f5f8d8d6-04aa-45d5-bb80-34de16553445>\",\"Content-Length\":\"334721\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:daeaf6e5-5354-4fa0-898d-c3fe4d1ca9a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:1508d0b1-8881-4844-b0af-c39afc779f6f>\",\"WARC-IP-Address\":\"172.217.15.83\",\"WARC-Target-URI\":\"http://www.northindiacampus.com/2015/11/applied-chemistry-basic-chemistry.html\",\"WARC-Payload-Digest\":\"sha1:GHENBWBHXA54XRE4XLDUYE2R3RDMEGAU\",\"WARC-Block-Digest\":\"sha1:F4KOVANQKB67V6AS4XJTRL7PPOTEMU44\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141706569.64_warc_CC-MAIN-20201202083021-20201202113021-00578.warc.gz\"}"} |
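Two of the numerical parts of the question paper above can be checked with a short calculation. This is my own working, not an official answer key; it assumes hardness is to be reported as CaCO3 equivalents, the usual convention.

```python
import math

# Q3(2)(a): 272 mg of CaSO4 per litre, expressed in ppm as CaCO3 equivalents.
# Molar masses: CaSO4 = 136 g/mol, CaCO3 = 100 g/mol.
caso4_mg_per_litre = 272
hardness_ppm = caso4_mg_per_litre * 100 / 136
print(hardness_ppm)          # 200 ppm

# Q3(5)(b): pH of a 0.01 M HCl solution (strong acid, fully ionised).
print(-math.log10(0.01))     # pH = 2.0
```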
https://pastelink.net/38vas | [
"## No Title\n\nAuthor: 965dcca30a",
null,
"18 August 2021\n\nViews: 12\n\n// Implementing merge sort in C\n\n#include <stdio.h>\n\n// Merge two sorted subarrays L and M back into arr\nvoid merge(int arr[], int p, int q, int r) {\n  // Create L = arr[p..q] and M = arr[q+1..r]\n  int n1 = q - p + 1;\n  int n2 = r - q;\n  int L[n1], M[n2];\n  for (int i = 0; i < n1; i++)\n    L[i] = arr[p + i];\n  for (int j = 0; j < n2; j++)\n    M[j] = arr[q + 1 + j];\n\n  // Maintain current index of sub-arrays and main array\n  int i = 0, j = 0, k = p;\n\n  // Until we reach the end of L or M, pick the smaller of L[i] and M[j]\n  // and place it in the correct position at arr[p..r]\n  while (i < n1 && j < n2) {\n    if (L[i] <= M[j]) {\n      arr[k] = L[i];\n      i++;\n    } else {\n      arr[k] = M[j];\n      j++;\n    }\n    k++;\n  }\n\n  // When we run out of elements in either L or M,\n  // pick up the remaining elements and put them in arr[p..r]\n  while (i < n1) {\n    arr[k] = L[i];\n    i++;\n    k++;\n  }\n  while (j < n2) {\n    arr[k] = M[j];\n    j++;\n    k++;\n  }\n}\n\n// Divide the array into two subarrays, sort them and merge them\nvoid mergeSort(int arr[], int l, int r) {\n  if (l < r) {\n    // m is the point where the array is divided into two subarrays\n    int m = l + (r - l) / 2;\n    mergeSort(arr, l, m);\n    mergeSort(arr, m + 1, r);\n    merge(arr, l, m, r); // Merge the sorted subarrays\n  }\n}\n\n// Print the array\nvoid printArray(int arr[], int size) {\n  for (int i = 0; i < size; i++)\n    printf(\"%d \", arr[i]);\n  printf(\"\\n\");\n}\n\n// Driver program\nint main() {\n  int arr[] = {8, 13, 24, 33, 48};\n  int size = sizeof(arr) / sizeof(arr[0]); // number of elements in arr\n  mergeSort(arr, 0, size - 1);\n  printf(\"Sorted array: \\n\");\n  printArray(arr, size);\n}"
] | [
null,
"https://pastelink.net/assets/images/public.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5338799,"math_prob":0.9990019,"size":1475,"snap":"2021-43-2021-49","text_gpt3_token_len":528,"char_repetition_ratio":0.113528214,"word_repetition_ratio":0.024691358,"special_character_ratio":0.44474575,"punctuation_ratio":0.200542,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996507,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T21:03:58Z\",\"WARC-Record-ID\":\"<urn:uuid:de3f080e-7826-4de3-97b6-e01932b16a5f>\",\"Content-Length\":\"16890\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:22931d2b-fbcb-4f4e-98ac-233b6257b4df>\",\"WARC-Concurrent-To\":\"<urn:uuid:629245a9-4fc4-4842-8369-e256423cbe10>\",\"WARC-IP-Address\":\"178.79.155.87\",\"WARC-Target-URI\":\"https://pastelink.net/38vas\",\"WARC-Payload-Digest\":\"sha1:RHY7TFAJLH5Z4QWIM4H5OPPWLDOD7I7Y\",\"WARC-Block-Digest\":\"sha1:M42VEVR5WFI3HWKJT2XPLVJ4JPDGQD23\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587606.8_warc_CC-MAIN-20211024204628-20211024234628-00000.warc.gz\"}"} |
http://freesourcecode.net/matlabprojects/57951/gmmem-based-pixel-labeling-and-segmentation-in-matlab | [
"# Gmmem based pixel labeling and segmentation in matlab\n\nThe following Matlab project contains the source code and Matlab examples used for gmmem based pixel labeling and segmentation. Program computes the parameter of GMM model using EM algorithm,optimized parameter thus obtained are used to calculate probability of every pixel with respect to number of classes(for given example it is taken as k).\n\nThe source code and files included in this project are listed in the project files section, please make sure whether the listed source code meet your needs there.\n\n## Project Files:\n\nFile NameSize\ncallexpectmaxx.m 813\nexpectmaxx_opt.m 1141\ngauss_dist_opt.m 221"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8214414,"math_prob":0.47922117,"size":628,"snap":"2020-45-2020-50","text_gpt3_token_len":131,"char_repetition_ratio":0.10737179,"word_repetition_ratio":0.0,"special_character_ratio":0.21019109,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96579885,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-23T12:01:37Z\",\"WARC-Record-ID\":\"<urn:uuid:7e7dc826-104b-41dc-ae43-3d25755a9590>\",\"Content-Length\":\"26315\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d8afd16b-107e-4093-aed2-696125769738>\",\"WARC-Concurrent-To\":\"<urn:uuid:e0f4624b-a38a-4ab7-8b40-372782288edf>\",\"WARC-IP-Address\":\"191.96.51.188\",\"WARC-Target-URI\":\"http://freesourcecode.net/matlabprojects/57951/gmmem-based-pixel-labeling-and-segmentation-in-matlab\",\"WARC-Payload-Digest\":\"sha1:DDVOFDEIBHHYDCRTYQ5AUXXWGKSE3WYL\",\"WARC-Block-Digest\":\"sha1:3EWJWPVWX2HVGE7QSMPIHSB3NEBHGKIL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107881369.4_warc_CC-MAIN-20201023102435-20201023132435-00599.warc.gz\"}"} |
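The project files themselves are MATLAB, but the pipeline the description sketches — fit a k-component Gaussian mixture with EM, then give every pixel the label of its most probable class — can be illustrated with scikit-learn's `GaussianMixture`. The snippet below is only an approximation of the idea, not the project's code; it assumes a grayscale image whose intensities are used as the single feature.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def label_pixels(image, k=3):
    pixels = image.reshape(-1, 1).astype(float)                       # one intensity feature per pixel
    gmm = GaussianMixture(n_components=k, max_iter=200).fit(pixels)   # EM fit of the mixture
    probs = gmm.predict_proba(pixels)                                 # per-pixel class probabilities
    labels = probs.argmax(axis=1).reshape(image.shape)                # most probable class per pixel
    return labels, probs

# toy usage on random data
labels, probs = label_pixels(np.random.randint(0, 255, size=(64, 64)), k=3)
```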
https://managenergy.tv/what-law-to-the-trends-of-lattice-energy-represent/ | [
"# What Law To.The Trends of Lattice Energy Represent\n\nAuthor:\n\nPublished:\n\nUpdated:\n\nAffiliate Disclaimer\n\nAs an affiliate, we may earn a commission from qualifying purchases. We get commissions for purchases made through links on this website from Amazon and other third parties.\n\nI’ve got a question for you: ever wondered what law governs the trends of lattice energy? Well, hold tight, because I’m about to dive into the fascinating world of lattice energy and its trends.\n\nIn this article, we’ll explore the importance of lattice energy, factors that influence it, and its impact on solubility, melting points, and stability. By understanding these trends, we’ll gain insights into the intriguing applications of lattice energy.\n\nSo, let’s buckle up and unravel the mysteries of this captivating topic together.\n\nContents\n\n## Key Takeaways\n\n• Lattice energy determines the stability and reactivity of compounds.\n• Periodic table trends provide insights into the relationship between lattice energy and reactivity.\n• Larger ionic radii decrease lattice stability due to less efficient packing arrangements.\n• Crystal packing plays a crucial role in determining lattice energy and overall stability.\n\n## The Importance of Lattice Energy\n\nLattice energy plays a crucial role in determining the stability of a crystal lattice. It is a measure of the energy released when ions come together to form a solid lattice structure.\n\nTo understand the importance of lattice energy, one must consider its role in chemical reactions. Lattice energy calculations allow us to predict the stability and reactivity of compounds.\n\nHigher lattice energy values indicate stronger ionic bonds and greater stability. This means that compounds with high lattice energy are less likely to undergo chemical reactions.\n\nOn the other hand, compounds with low lattice energy are more likely to react and form new compounds. By understanding the lattice energy of a compound, we can gain insight into its behavior in chemical reactions and its overall stability.\n\n## Understanding Lattice Energy Trends\n\nWhen studying lattice energy, it’s important to examine the factors that influence its value and how these factors can be observed in the periodic table trends.\n\nBy understanding the key factors affecting lattice energy, such as ion size, ion charge, and ionic radius, we can analyze the patterns and trends that arise within the periodic table.\n\nThis analytical approach allows us to make precise observations and draw objective conclusions about the relationship between lattice energy and the elements.\n\n### Factors Affecting Lattice Energy\n\nOne of the factors that affects lattice energy is the size of the ions involved. Smaller ions have higher charges and therefore stronger attractions to each other, resulting in higher lattice energies. On the other hand, larger ions have lower charges and weaker attractions, leading to lower lattice energies. This relationship between ion size and lattice energy can be quantitatively determined using calculations. By considering the charges and radii of the ions, we can calculate the lattice energy using the Born-Haber cycle or the Born-Landé equation. These calculations involve various factors such as ion charges, ion sizes, and the Madelung constant. Additionally, other factors that influence lattice energy include the type of bonding (ionic, covalent, or metallic), crystal structure, and temperature. 
Understanding these factors is crucial in predicting and interpreting lattice energy trends in different compounds.\n\nFactors Effect on Lattice Energy\nIon Size Smaller ions have higher lattice energies, while larger ions have lower lattice energies.\nIon Charge Higher charges on ions result in stronger attractions and higher lattice energies.\nCrystal Structure Different crystal structures can affect the arrangement and strength of the lattice, influencing lattice energy.\nTemperature Higher temperatures can disrupt the lattice and reduce lattice energy.\n\n### Periodic Table Trends\n\nTo understand periodic table trends, you can observe how certain properties of elements change as you move across a period or down a group. These trends can provide valuable insights into the behavior and reactivity of elements.\n\nWhen it comes to lattice energy, which is the energy required to separate the ions in a solid ionic compound, there are specific patterns that can be observed on the periodic table. Here are four key points to consider:\n\n• Lattice energy generally increases as you move across a period from left to right.\n• Lattice energy tends to decrease as you move down a group.\n• Elements with higher lattice energies are typically more reactive.\n• The relationship between lattice energy and reactivity can be explained by the fact that higher lattice energy indicates stronger ionic bonds, making it more difficult for the elements to react with other substances.\n\nThese periodic table trends help us understand the relationship between lattice energy and reactivity, providing crucial information for studying chemical reactions and the behavior of elements.\n\n## Factors Affecting Lattice Energy\n\nFactors affecting lattice energy include the types of ions present and their charges. Lattice energy, which is the energy released when ions come together to form a solid lattice, is influenced by several factors.\n\nThe first factor is the size of the ions. Smaller ions have a higher lattice energy because they can get closer together, resulting in stronger electrostatic attractions.\n\nThe second factor is the charge of the ions. Ions with higher charges have a higher lattice energy because the electrostatic attraction is stronger.\n\nAdditionally, the lattice energy is affected by the ionic radius ratio. When the ratio of the radii of the cation and anion is close to 1, the lattice energy is maximized.\n\nUnderstanding these factors helps to explain the trends in lattice energy observed in different compounds.\n\n## Ionic Radii and Lattice Energy\n\nIn this discussion, I’ll be exploring the relationship between size and energy. I’ll also be looking at the factors that affect lattice energy and how it’s connected to ionic radii and stability.\n\nUnderstanding how the size of ions impacts their energy levels is crucial. It’s also important to grasp how lattice energy is influenced by various factors. By examining the interplay between ionic radii, lattice energy, and stability, we can gain a deeper understanding of the behavior of ionic compounds.\n\n### Relationship Between Size and Energy\n\nThe relationship between size and energy can be observed through trends in lattice energy. Lattice energy is the energy released when ions combine to form a solid crystal lattice.\n\nThere is a strong correlation between lattice energy and atomic size. As the size of an atom increases, its lattice energy decreases. 
This is because larger atoms have more electron-electron repulsion, which weakens the ionic bond and reduces the energy released during formation of the lattice.\n\nFurthermore, the relationship between size and energy can evoke several emotions in the audience:\n\n• Surprise: The inverse relationship between size and energy may be unexpected.\n• Curiosity: People may wonder why atomic size affects lattice energy.\n• Intrigue: The idea that something as small as atomic size can impact energy is fascinating.\n• Wonder: Observing the trends in lattice energy can ignite a sense of awe and wonder about the intricacies of the atomic world.\n\n### Factors Affecting Lattice Energy\n\nOne important aspect to consider when examining lattice energy is the charge of the ions involved. The magnitude of the lattice energy is influenced by several factors, including the size and charge of the ions. As the charge of the ions increases, the lattice energy also increases. This is because higher charges result in stronger electrostatic attractions between the ions, leading to a higher lattice energy. Additionally, the size of the ions affects the lattice energy. Smaller ions can get closer to each other, resulting in stronger attractions and higher lattice energies. Conversely, larger ions have weaker attractions and lower lattice energies. To illustrate this relationship, consider the following table:\n\nIon 1 Ion 2 Lattice Energy\n+1 -1 High\n+2 -2 Very High\n+1 -2 Medium\n+2 -1 Medium\n\nIn the previous section, we discussed the various factors that affect lattice energy. Now, let’s delve into the relationship between ionic radii and lattice stability.\n\nWhen it comes to ionic compounds, the size of the ions significantly impacts the stability of the lattice structure. As the ionic radii increase, the lattice stability decreases. This is because larger ions create less efficient packing arrangements in the crystal lattice, leading to weaker electrostatic forces of attraction between the ions.\n\nThe relationship between lattice energy and bond length is also worth mentioning. As the bond length increases, the lattice energy decreases. This is due to the decrease in the electrostatic interactions between the ions, resulting in a less stable lattice.\n\nTo summarize, the size of the ions and the bond length play crucial roles in determining the stability of the lattice structure. Understanding these relationships is vital in predicting the behavior and properties of ionic compounds.\n\nMoving forward, let’s explore the next section on electrostatic interactions in lattice energy.\n\n## Electrostatic Interactions in Lattice Energy\n\nElectrostatic interactions play a crucial role in determining lattice energy trends. These interactions are governed by Coulomb’s law, which states that the force between two charged particles is directly proportional to the product of their charges and inversely proportional to the square of the distance between them.\n\nIn the context of lattice energy, electrostatic forces arise from the attraction between positively and negatively charged ions in an ionic compound. The magnitude of the lattice energy is directly related to the charges of the ions involved and the distance between them. When ions have higher charges or are closer together, the electrostatic forces are stronger, resulting in a higher lattice energy.\n\nThis understanding of electrostatic interactions is essential in predicting and explaining the trends observed in lattice energy values. 
Moving forward, we will explore the relationship between crystal structure and lattice energy.\n\n## Crystal Structure and Lattice Energy\n\nThe relationship between crystal structure and lattice energy can be better understood by examining the arrangement of ions within a crystal lattice. Crystal packing plays a crucial role in determining the overall stability and lattice energy of a crystal.\n\nThe close arrangement of ions in a crystal lattice leads to stronger electrostatic interactions, resulting in higher lattice energy. On the other hand, lattice defects, such as vacancies or interstitial ions, can disrupt the regular arrangement of ions and decrease the lattice energy.\n\nCrystal defects can also affect the crystal packing, leading to changes in the lattice energy. Therefore, understanding the crystal structure and the effects of crystal packing and lattice defects is essential in predicting and explaining the trends in lattice energy.\n\n• Regular arrangement of ions: Provides stability and strong electrostatic interactions.\n• Lattice defects: Disrupts arrangement, lowers lattice energy.\n• Crystal packing: Determines overall stability and lattice energy.\n• Crystal defects: Affect crystal packing, leading to changes in lattice energy.\n\n## Periodic Trends in Lattice Energy\n\nYou can observe periodic trends in lattice energy by analyzing the arrangement of ions within crystal structures and how they interact with each other. Understanding lattice energy trends is crucial in predicting and explaining the stability and properties of ionic compounds.\n\nLattice energy, which is the energy released when gaseous ions come together to form a solid crystal lattice, is influenced by various factors. One important factor is the size of the ions involved. As the size of the ions increases, the lattice energy decreases. This is because larger ions have a greater distance between them, resulting in weaker electrostatic attractions.\n\nAdditionally, the charges of the ions also play a role. Higher charges lead to stronger electrostatic attractions and higher lattice energy. By studying these factors, we can gain insights into the behavior of lattice energy in ionic compounds.\n\n## Lattice Energy in Ionic Compounds\n\nWhen discussing lattice energy in ionic compounds, it’s important to consider the factors that affect its value and the role it plays in compound stability.\n\nFactors such as ion size, ion charge, and crystal structure all contribute to the overall lattice energy of a compound.\n\nUnderstanding these factors is crucial in predicting the stability and properties of various ionic compounds.\n\n### Factors Affecting Lattice Energy\n\nFactors such as ionic size and charge influence the trends of lattice energy. Ionic size refers to the size of the ions involved in the ionic compound, while charge refers to the magnitude of the charges on the ions. 
These factors play a crucial role in determining the strength of the electrostatic attraction between ions in the crystal lattice.\n\n• Greater ionic size leads to lower lattice energy, as the ions are farther apart and the electrostatic attraction is weaker.\n• Higher charge on ions results in higher lattice energy, as the attraction between oppositely charged ions is stronger.\n• The arrangement of ions in the crystal lattice also affects lattice energy, with closer packing leading to higher lattice energy.\n• The Coulomb’s law, which describes the relationship between the charges and distances of particles, is fundamental in understanding the trends of lattice energy.\n\nThe relationship between lattice energy and enthalpy is that lattice energy is a component of the enthalpy change when ionic compounds are formed or dissolved.\n\n### Importance in Compound Stability\n\nThe stability of a compound is influenced by various factors, such as ionic size and charge. Another important aspect that affects compound stability is the formation energy. Compound formation energy calculations play a crucial role in understanding the stability of a compound. By calculating the energy required for the formation of a compound, we can determine whether the compound is thermodynamically stable or not. This information is valuable in predicting the behavior of compounds under different conditions and in designing new materials with desired properties.\n\nTo illustrate the importance of compound formation energy, consider the following table:\n\nCompound Formation Energy (kJ/mol)\nNaCl -786\nMgO -1204\nAl2O3 -1676\n\nFrom this table, we can observe that the formation energies of these compounds are negative, indicating that they are stable. Additionally, the magnitude of the formation energy provides information about the relative stability of the compounds. In this case, Al2O3 has the highest magnitude, suggesting that it is the most stable compound among the three.\n\n## Lattice Energy and Bond Strength\n\nLattice energy and bond strength are closely related in determining the stability of a compound. When bond formation occurs, it involves the transfer or sharing of electrons between atoms. This process leads to the formation of strong chemical bonds, which contribute to the overall stability of the compound.\n\nThe lattice enthalpy, or lattice energy, measures the strength of the bonds in the ionic lattice of a compound. It represents the energy required to separate one mole of a solid ionic compound into its gaseous ions. The higher the lattice energy, the stronger the bonds and the more stable the compound.\n\nIn summary, the bond strength and lattice energy play crucial roles in determining the stability of a compound, influencing its physical and chemical properties.\n\nMoving on to the subsequent section about lattice energy and solubility…\n\n## Lattice Energy and Solubility\n\nWhen a compound has a high lattice energy, it will be less soluble in water. Solubility refers to the ability of a substance to dissolve in a solvent, such as water.\n\nLattice energy, on the other hand, is the energy released when ions come together to form a solid crystal lattice. The relationship between lattice energy and solubility is based on the principle that compounds with high lattice energies have strong ionic bonds, making it difficult for the compound to break apart and dissolve in water.\n\nIn contrast, compounds with low lattice energies have weaker ionic bonds and are more likely to dissolve in water. 
Therefore, the higher the lattice energy, the lower the solubility, and vice versa.\n\nThis relationship between lattice energy and solubility can be used to predict the solubility of different compounds in water.\n\n## Lattice Energy and Melting Points\n\nIf you want to understand the relationship between lattice energy and melting points, you should consider the strength of the ionic bonds in the compound. The melting point of a substance is directly influenced by the lattice energy, which is the energy required to break apart the ionic lattice structure and convert it into a liquid or gas. The stronger the ionic bonds, the higher the lattice energy and the higher the melting point.\n\nTo measure lattice energy, various techniques are employed, such as the Born-Haber cycle, heats of formation, and enthalpy of sublimation. These methods allow for accurate determination of lattice energy, enabling scientists to study the correlation between lattice energy and melting points.\n\nIn summary, the relationship between lattice energy and melting points is crucial in understanding the behavior of ionic compounds. By measuring lattice energy through various techniques, scientists can gain insight into the factors that influence melting points and further our understanding of these compounds.\n\n• The strength of ionic bonds directly impacts melting points\n• Lattice energy measurement techniques enable accurate determination\n• Higher lattice energy leads to higher melting points\n• Understanding the relationship enhances knowledge of ionic compounds.\n\n## Lattice Energy and Stability\n\nTo fully comprehend the concept of lattice energy and stability, you should consider the overall balance between attractive and repulsive forces within the ionic compound.\n\nLattice energy refers to the energy required to break apart an ionic crystal into its constituent ions in the gas phase. It is influenced by the distance between ions and the charges of the ions involved. Higher charges or smaller distances result in stronger attractive forces and higher lattice energies.\n\nThe enthalpy change associated with lattice energy is a measure of the stability of the ionic compound. A more negative enthalpy change indicates greater stability.\n\nLattice energy also plays a role in reaction rates. Higher lattice energy implies a stronger attraction between ions, making it more difficult for them to come into contact with other reactants, thereby slowing down reaction rates.\n\n## Applications of Lattice Energy\n\nUnderstanding the applications of lattice energy can help you predict the stability of different ionic compounds. Lattice energy, the energy released when ions combine to form a solid crystal lattice, plays a crucial role in determining the properties of these compounds.\n\nHere are some important applications and properties of lattice energy:\n\n• Solubility: Compounds with high lattice energy tend to have low solubility in water.\n• Melting and boiling points: Compounds with high lattice energy have high melting and boiling points.\n• Crystal structure: Lattice energy influences the arrangement of ions in the crystal lattice, determining the crystal structure.\n• Reaction rates: Lattice energy affects the rate at which ionic compounds react with other substances.\n\n### How Is Lattice Energy Related to Chemical Bonding?\n\nLattice energy is closely related to chemical bonding. It is a measure of the strength of the bonds within a crystal lattice structure. 
The higher the lattice energy, the stronger the bonds between the ions in the crystal.\n\nLattice energy is influenced by factors such as ion size, charge, and the arrangement of ions in the crystal lattice.\n\nUnderstanding the trends of lattice energy helps us understand the stability and formation of crystals.\n\n### What Is the Relationship Between Lattice Energy and the Strength of an Ionic Bond?\n\nThe relationship between lattice energy and the strength of an ionic bond is crucial in understanding the stability and reactivity of compounds. Factors such as the charge of the ions and the size of the ions play significant roles in determining lattice energy.\n\nThese factors directly impact the ionic bond strength, which can be likened to the unbreakable bond between two lovers. The higher the lattice energy, the stronger the bond, resulting in a more stable compound.\n\n### How Does the Size of Ions Affect Lattice Energy?\n\nThe size of ions plays a critical role in determining lattice energy. As the ionic radius increases, the distance between ions in the crystal lattice also increases. This leads to a decrease in the strength of the ionic bond and a decrease in lattice energy.\n\nConversely, when the ionic radius decreases, the ions are closer together, resulting in a stronger ionic bond and higher lattice energy. Therefore, there is an inverse relationship between ionic radius and lattice energy.\n\n### Can Lattice Energy Be Used to Predict the Solubility of an Ionic Compound?\n\nLattice energy, while not directly predicting solubility, can provide some predictive power when it comes to the solubility of an ionic compound. This is because lattice energy is a measure of the strength of the ionic bonds in a compound.\n\nGenerally, higher lattice energy indicates stronger bonds, which can make it more difficult for the compound to dissolve in a solvent. However, it is important to note that experimental validation is necessary to fully understand the solubility of a specific compound.\n\n### Is Lattice Energy a Factor in Determining the Stability of a Crystal Structure?\n\nLattice energy plays a significant role in determining the stability of a crystal structure. It is a measure of the energy released when ions come together to form a crystal lattice.\n\nHigher lattice energy indicates stronger ionic bonds and greater stability of the crystal structure. This is because the strong ionic bonds hold the ions in place, preventing them from easily moving or breaking apart.\n\nTherefore, lattice energy is an important factor in understanding the stability of crystal formations.\n\n## Conclusion\n\nIn conclusion, the study of lattice energy is crucial in understanding the behavior of ionic compounds. By analyzing lattice energy trends, we can uncover the factors that affect the stability, solubility, and melting points of these compounds.\n\nThe correlation between ionic radii and lattice energy sheds light on the electrostatic interactions at play. This knowledge has practical applications in various fields, such as material science and pharmaceuticals.\n\nOverall, the exploration of lattice energy opens up a world of possibilities for scientific advancements and technological innovations."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90443695,"math_prob":0.93565404,"size":23013,"snap":"2023-40-2023-50","text_gpt3_token_len":4289,"char_repetition_ratio":0.22856274,"word_repetition_ratio":0.11107971,"special_character_ratio":0.17520532,"punctuation_ratio":0.09431061,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9653776,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T20:22:36Z\",\"WARC-Record-ID\":\"<urn:uuid:fcb8ea69-c6b6-479e-bd94-6199d2f88059>\",\"Content-Length\":\"203220\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b41af54-23f8-4753-b728-6215fb3b1a29>\",\"WARC-Concurrent-To\":\"<urn:uuid:b7c303ea-c36a-4151-ad2b-ce64c83099ca>\",\"WARC-IP-Address\":\"85.13.154.120\",\"WARC-Target-URI\":\"https://managenergy.tv/what-law-to-the-trends-of-lattice-energy-represent/\",\"WARC-Payload-Digest\":\"sha1:4ZZFEAIGOQ6KVHUONEFDHUXA4LD4APIR\",\"WARC-Block-Digest\":\"sha1:P6SH723KK6LLIHZ4T3HON6RXBMUNZI2S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506528.3_warc_CC-MAIN-20230923194908-20230923224908-00321.warc.gz\"}"} |
https://unige.it/en/off.f/2021/ins/48632 | [
"# MATHEMATICAL ANALYSIS 4\n\n_\niten\nCode\n86902\n2021/2022\nCREDITS\n6 credits during the 1st year of 8738 Naval Architecture and Marine Engineering (LM-34) GENOVA\nSCIENTIFIC DISCIPLINARY SECTOR\nMAT/05\nLANGUAGE\nItalian\nTEACHING LOCATION\nGENOVA (Naval Architecture and Marine Engineering)\nsemester\n1° Semester\nTeaching materials\n\nOVERVIEW\n\nThe course focuses on Fourier analysis methods as applied to the solutions of boundary value problems for classical partial differential equations. Thus, a mathematical presentation of Fourier series and transforms is presented, combining a reasonable amount of formal precision with applications to explicit problems, to be solved with workable formulae. The basic facts about analytic functions of one complex variable are also introduced because of their pervasive use in applications, with particular emphasis on the elementary and fundamentally geometric aspects of analyticity.\n\n## AIMS AND CONTENT\n\nLEARNING OUTCOMES\n\nThe main objective is to achieve a solid basic operative knowledge of Fourier analysis techniques (Fourier series and Fourier transform) for functions of one real variable as applied to boundary value problems for the classical partial differential equations (heat, Poisson, waves), and to understand the main properties of analytic functions of one complex variable.\n\nAIMS AND LEARNING OUTCOMES\n\nStudents are expected to master the basic Fourier analysis techniques (series and transforms) that are needed in order to solve standard boundary value problems for classical partial differential equations (heat, Laplace-Poisson, waves), both using series expansions and integral formulae. Basic operative knowledge concerning analytic functions of one complex variable is also expected.\n\nPREREQUISITES\n\nCalculus of functions of one and several real variables, linear algebra\n\nTeaching methods\n\nBlackboard and computer illustrations\n\nSYLLABUS/CONTENT\n\nFourier series for periodic functions and Fourier transform on R; main properties and applications to finding solutions of boundary value problems for the classical PDE, essentially through separation of variables techniques or via Fourier transform methods. The notion of holomorphic map is introduced and the main properties of analytic functions are investigated.\n\nS. Salsa - Partial differential equations in action: from modelling to theory - Springer 2016\n\n## TEACHERS AND EXAM BOARD\n\nRicevimento: On appointment\n\n## LESSONS\n\nTeaching methods\n\nBlackboard and computer illustrations\n\nLESSONS START\n\nThe class will start according to the academic calendar.\n\nORARI\n\nL'orario di tutti gli insegnamenti è consultabile su EasyAcademy.\n\n## EXAMS\n\nExam description\n\nWritten and oral examination\n\nAssessment methods\n\nStudents are required to work on standard problems in series expansions, Fourier transforms, applications to boundary value problems for classical PDE and basic properties of analytic functions."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8656918,"math_prob":0.8822383,"size":2610,"snap":"2021-31-2021-39","text_gpt3_token_len":492,"char_repetition_ratio":0.12509593,"word_repetition_ratio":0.08988764,"special_character_ratio":0.1559387,"punctuation_ratio":0.07435898,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9653773,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-05T00:54:15Z\",\"WARC-Record-ID\":\"<urn:uuid:640fbadc-d9b3-45fb-80c2-a3a2229d5024>\",\"Content-Length\":\"56875\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c78e9015-c571-475d-88f7-86a437ab471e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7e38d07e-cd7b-45b1-a7d1-8c6dbd9b4a31>\",\"WARC-IP-Address\":\"130.251.222.83\",\"WARC-Target-URI\":\"https://unige.it/en/off.f/2021/ins/48632\",\"WARC-Payload-Digest\":\"sha1:UYWQH4I62VY7ZZE5CZ6UFTRXJGAKTGS4\",\"WARC-Block-Digest\":\"sha1:6X6ES7TCLSJ4YJI7MDK3A4ZTVULDKD7T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046155268.80_warc_CC-MAIN-20210805000836-20210805030836-00104.warc.gz\"}"} |
https://www.colorhexa.com/ebdbed | [
"# #ebdbed Color Information\n\nIn a RGB color space, hex #ebdbed is composed of 92.2% red, 85.9% green and 92.9% blue. Whereas in a CMYK color space, it is composed of 0.8% cyan, 7.6% magenta, 0% yellow and 7.1% black. It has a hue angle of 293.3 degrees, a saturation of 33.3% and a lightness of 89.4%. #ebdbed color hex could be obtained by blending #ffffff with #d7b7db. Closest websafe color is: #ffccff.\n\n• R 92\n• G 86\n• B 93\nRGB color chart\n• C 1\n• M 8\n• Y 0\n• K 7\nCMYK color chart\n\n#ebdbed color description : Light grayish magenta.\n\n# #ebdbed Color Conversion\n\nThe hexadecimal color #ebdbed has RGB values of R:235, G:219, B:237 and CMYK values of C:0.01, M:0.08, Y:0, K:0.07. Its decimal value is 15457261.\n\nHex triplet RGB Decimal ebdbed `#ebdbed` 235, 219, 237 `rgb(235,219,237)` 92.2, 85.9, 92.9 `rgb(92.2%,85.9%,92.9%)` 1, 8, 0, 7 293.3°, 33.3, 89.4 `hsl(293.3,33.3%,89.4%)` 293.3°, 7.6, 92.9 ffccff `#ffccff`\nCIE-LAB 89.13, 8.636, -6.813 74.876, 74.44, 90.54 0.312, 0.31, 74.44 89.13, 11, 321.731 89.13, 7.955, -12.086 86.279, 3.922, -1.823 11101011, 11011011, 11101101\n\n# Color Schemes with #ebdbed\n\n• #ebdbed\n``#ebdbed` `rgb(235,219,237)``\n• #ddeddb\n``#ddeddb` `rgb(221,237,219)``\nComplementary Color\n• #e2dbed\n``#e2dbed` `rgb(226,219,237)``\n• #ebdbed\n``#ebdbed` `rgb(235,219,237)``\n• #eddbe6\n``#eddbe6` `rgb(237,219,230)``\nAnalogous Color\n• #dbede2\n``#dbede2` `rgb(219,237,226)``\n• #ebdbed\n``#ebdbed` `rgb(235,219,237)``\n• #e6eddb\n``#e6eddb` `rgb(230,237,219)``\nSplit Complementary Color\n• #dbedeb\n``#dbedeb` `rgb(219,237,235)``\n• #ebdbed\n``#ebdbed` `rgb(235,219,237)``\n• #edebdb\n``#edebdb` `rgb(237,235,219)``\n• #dbdded\n``#dbdded` `rgb(219,221,237)``\n• #ebdbed\n``#ebdbed` `rgb(235,219,237)``\n• #edebdb\n``#edebdb` `rgb(237,235,219)``\n• #ddeddb\n``#ddeddb` `rgb(221,237,219)``\n• #cfa8d4\n``#cfa8d4` `rgb(207,168,212)``\n• #d8b9dc\n``#d8b9dc` `rgb(216,185,220)``\n• #e2cae5\n``#e2cae5` `rgb(226,202,229)``\n• #ebdbed\n``#ebdbed` `rgb(235,219,237)``\n• #f4ecf6\n``#f4ecf6` `rgb(244,236,246)``\n• #fefdfe\n``#fefdfe` `rgb(254,253,254)``\n• #ffffff\n``#ffffff` `rgb(255,255,255)``\nMonochromatic Color\n\n# Alternatives to #ebdbed\n\nBelow, you can see some colors close to #ebdbed. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #e7dbed\n``#e7dbed` `rgb(231,219,237)``\n• #e8dbed\n``#e8dbed` `rgb(232,219,237)``\n``#eadbed` `rgb(234,219,237)``\n• #ebdbed\n``#ebdbed` `rgb(235,219,237)``\n• #eddbed\n``#eddbed` `rgb(237,219,237)``\n• #eddbec\n``#eddbec` `rgb(237,219,236)``\n• #eddbeb\n``#eddbeb` `rgb(237,219,235)``\nSimilar Colors\n\n# #ebdbed Preview\n\nThis text has a font color of #ebdbed.\n\n``<span style=\"color:#ebdbed;\">Text here</span>``\n#ebdbed background color\n\nThis paragraph has a background color of #ebdbed.\n\n``<p style=\"background-color:#ebdbed;\">Content here</p>``\n#ebdbed border color\n\nThis element has a border color of #ebdbed.\n\n``<div style=\"border:1px solid #ebdbed;\">Content here</div>``\nCSS codes\n``.text {color:#ebdbed;}``\n``.background {background-color:#ebdbed;}``\n``.border {border:1px solid #ebdbed;}``\n\n# Shades and Tints of #ebdbed\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #030203 is the darkest color, while #faf5fa is the lightest one.\n\n• #030203\n``#030203` `rgb(3,2,3)``\n• #0f0810\n``#0f0810` `rgb(15,8,16)``\n• #1c0f1d\n``#1c0f1d` `rgb(28,15,29)``\n• #28152a\n``#28152a` `rgb(40,21,42)``\n• #341c38\n``#341c38` `rgb(52,28,56)``\n• #412245\n``#412245` `rgb(65,34,69)``\n• #4d2952\n``#4d2952` `rgb(77,41,82)``\n• #5a2f5f\n``#5a2f5f` `rgb(90,47,95)``\n• #66366c\n``#66366c` `rgb(102,54,108)``\n• #723c79\n``#723c79` `rgb(114,60,121)``\n• #7f4386\n``#7f4386` `rgb(127,67,134)``\n• #8b4a93\n``#8b4a93` `rgb(139,74,147)``\n• #9750a0\n``#9750a0` `rgb(151,80,160)``\n• #a258ac\n``#a258ac` `rgb(162,88,172)``\n• #aa65b2\n``#aa65b2` `rgb(170,101,178)``\n• #b172b9\n``#b172b9` `rgb(177,114,185)``\n• #b87fbf\n``#b87fbf` `rgb(184,127,191)``\n• #bf8dc6\n``#bf8dc6` `rgb(191,141,198)``\n• #c79acc\n``#c79acc` `rgb(199,154,204)``\n• #cea7d3\n``#cea7d3` `rgb(206,167,211)``\n• #d5b4d9\n``#d5b4d9` `rgb(213,180,217)``\n• #dcc1e0\n``#dcc1e0` `rgb(220,193,224)``\n• #e4cee6\n``#e4cee6` `rgb(228,206,230)``\n• #ebdbed\n``#ebdbed` `rgb(235,219,237)``\n• #f2e8f4\n``#f2e8f4` `rgb(242,232,244)``\n• #faf5fa\n``#faf5fa` `rgb(250,245,250)``\nTint Color Variation\n\n# Tones of #ebdbed\n\nA tone is produced by adding gray to any pure hue. In this case, #e5e3e5 is the less saturated color, while #f8cafe is the most saturated one.\n\n• #e5e3e5\n``#e5e3e5` `rgb(229,227,229)``\n• #e6e1e7\n``#e6e1e7` `rgb(230,225,231)``\n• #e8dfe9\n``#e8dfe9` `rgb(232,223,233)``\n• #e9ddeb\n``#e9ddeb` `rgb(233,221,235)``\n• #ebdbed\n``#ebdbed` `rgb(235,219,237)``\n• #edd9ef\n``#edd9ef` `rgb(237,217,239)``\n• #eed7f1\n``#eed7f1` `rgb(238,215,241)``\n• #f0d5f3\n``#f0d5f3` `rgb(240,213,243)``\n• #f1d3f5\n``#f1d3f5` `rgb(241,211,245)``\n• #f3d1f7\n``#f3d1f7` `rgb(243,209,247)``\n• #f5cff9\n``#f5cff9` `rgb(245,207,249)``\n• #f6ccfc\n``#f6ccfc` `rgb(246,204,252)``\n• #f8cafe\n``#f8cafe` `rgb(248,202,254)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #ebdbed is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
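The hex-to-RGB decomposition and the shade/tint constructions described above can be sketched with base R's grDevices helpers. The mixing weight below is illustrative and not necessarily the exact step size used to generate the tables on this page:

```r
# Split the hex colour into red/green/blue components on the 0-255 scale
grDevices::col2rgb("#ebdbed")   # red 235, green 219, blue 237

# A tint mixes the colour towards white, a shade mixes it towards black
mix_with <- function(hex, target, weight) {
  x <- grDevices::col2rgb(hex)[, 1]
  t <- grDevices::col2rgb(target)[, 1]
  m <- round((1 - weight) * x + weight * t)
  grDevices::rgb(m[1], m[2], m[3], maxColorValue = 255)
}

mix_with("#ebdbed", "#ffffff", 0.5)   # a lighter tint
mix_with("#ebdbed", "#000000", 0.5)   # a darker shade
```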
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.52694046,"math_prob":0.603821,"size":3716,"snap":"2020-10-2020-16","text_gpt3_token_len":1684,"char_repetition_ratio":0.12688577,"word_repetition_ratio":0.011090573,"special_character_ratio":0.507535,"punctuation_ratio":0.23725055,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97447777,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-25T21:53:36Z\",\"WARC-Record-ID\":\"<urn:uuid:99c641ef-e487-41aa-a457-e62f673745e1>\",\"Content-Length\":\"36348\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9febd258-e529-40bd-81d8-ca3782fe6c7b>\",\"WARC-Concurrent-To\":\"<urn:uuid:f2218833-b766-46f0-92db-842fcec4a173>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/ebdbed\",\"WARC-Payload-Digest\":\"sha1:H7AHLHMDKJPZIYC6FX5TSBMSNW4KPVG6\",\"WARC-Block-Digest\":\"sha1:MI26M2MCL64U6GMXR6IBV4PN3U2WCKZW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146160.21_warc_CC-MAIN-20200225202625-20200225232625-00452.warc.gz\"}"} |
https://factorization.info/factors/0/factors-of-1198.html | [
"Factors of 1198",
null,
"We have all the information you will ever need about the Factors of 1198. We will provide you with the definition of Factors of 1198, show you how to find the Factors of 1198, give you all the Factors of 1198, tell you how many Factors 1198 has, and supply you with all the Factor Pairs of 1198 to prove that our answer is solved correctly.\n\nFactors of 1198 definition\nThe Factors of 1198 are all the integers (positive and negative whole numbers) that you can evenly divide into 1198. 1198 divided by a Factor of 1198 will equal another Factor of 1198.\n\nHow to find the Factors of 1198\nSince the Factors of 1198 are all the numbers that you can evenly divide into 1198, we simply need to divide 1198 by all numbers up to 1198 to see which ones result in an even quotient. When we did that, we found that these calculations resulted in an even quotient:\n\n1198 ÷ 1 = 1198\n1198 ÷ 2 = 599\n1198 ÷ 599 = 2\n1198 ÷ 1198 = 1\n\nThe Postive Factors of 1198 are therefore all the numbers we used to divide (divisors) above to get an even number. Here is the list of all Postive Factors of 1198 in numerical order:\n\n1, 2, 599, and 1198.\n\nFactors of 1198 include negative numbers. Therefore, all the Positive Factors of 1198 can be converted to negative numbers. The list of Negative Factors of 1198 are:\n\n-1, -2, -599, and -1198.\n\nHow many Factors of 1198?\nWhen we counted the Factors of 1198 that we listed above, we found that 1198 has 4 Positive Factors and 4 Negative Factors. Thus, the total number of Factors of 1198 is 8.\n\nFactor Pairs of 1198\nFactor Pairs of 1198 are combinations of two factors that when multiplied together equal 1198. Here are all the Positive Factor Pairs of 1198\n\n1 × 1198 = 1198\n2 × 599 = 1198\n599 × 2 = 1198\n1198 × 1 = 1198\n\nLike we said above, Factors of 1198 include negative numbers. Minus times minus equals plus, thus you can convert the Positive Factor Pair list above by simply putting a minus in front of every factor to get all the Negative Factor Pairs of 1198:\n\n-1 × -1198 = 1198\n-2 × -599 = 1198\n-599 × -2 = 1198\n-1198 × -1 = 1198\n\nFactor Calculator\nDo you need the factors for a particular number? You can submit a number below to find the factors for that number with detailed explanations like we did with Factors of 1198 above.\n\nFactors of 1199\nWe hope this step-by-step tutorial to teach you about Factors of 1198 was helpful. Do you want to see if you learned something? If so, give the next number on our list a try and then check your answer here."
] | [
null,
"https://factorization.info/images/factors-of.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9035778,"math_prob":0.97426814,"size":2538,"snap":"2023-40-2023-50","text_gpt3_token_len":701,"char_repetition_ratio":0.25177586,"word_repetition_ratio":0.028747434,"special_character_ratio":0.3325453,"punctuation_ratio":0.08267716,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99984074,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T08:05:26Z\",\"WARC-Record-ID\":\"<urn:uuid:aab2771b-1f7d-4c9c-9819-a774f5e0e6d7>\",\"Content-Length\":\"7583\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:36b91c75-7328-40fe-a3b9-192849e70898>\",\"WARC-Concurrent-To\":\"<urn:uuid:ae2598ce-af40-493d-af54-0b36b532af82>\",\"WARC-IP-Address\":\"18.67.65.54\",\"WARC-Target-URI\":\"https://factorization.info/factors/0/factors-of-1198.html\",\"WARC-Payload-Digest\":\"sha1:A4U4FFXRCS663KTTG7XOTJRCL5EJNHMB\",\"WARC-Block-Digest\":\"sha1:YJS2FUJCOAVZDSA6VX7K5CFCVZIH2GM6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510284.49_warc_CC-MAIN-20230927071345-20230927101345-00331.warc.gz\"}"} |
https://ltwork.net/sat/50 | [
"# SAT\n\nAll new and solved questions in SAT category\n\n### A solution is prepared by adding 1.60 g of solid nacl to 50.0 ml of 0.100 m cacl2. what is the molarity of chloride ion in the final\n\na solution is prepared by adding 1.60 g of solid nacl to 50.0 ml of 0.100 m cacl2. what is the molarity of chloride ion in the final solution? assume that the volume of the final solution is 50.0 ml....\n\nFree points...\n\n### Which of the following is associated with our seasons, and the fact that the farther away from the equator you move, the more distinct the seasons become?Question\n\nWhich of the following is associated with our seasons, and the fact that the farther away from the equator you move, the more distinct the seasons become? Question 6 options: a) Earth’s revolution b) Earth’s distance from the Sun c) Earth’s rotation d) Earth’s tilt...\n\n### What is the capacitor's potential difference before the mylar is inserted?\n\nWhat is the capacitor's potential difference before the mylar is inserted?...\n\n### What is the antependium? a. It is the front surface of a block altar. b. It is the back surface of a block. c. It is the top of an\n\nWhat is the antependium? a. It is the front surface of a block altar. b. It is the back surface of a block. c. It is the top of an altar. Please select the best answer from the choices provided A B C...\n\n### As seen in porter five forces model condition under which a supplier group can be powerful include all the following except\n\nas seen in porter five forces model condition under which a supplier group can be powerful include all the following except...\n\n### A typical major requirement for a business major would be a course in:\n\nA typical major requirement for a business major would be a course in:...\n\n### How many sigma (σ) bonds and pi (π) bonds are in propene, ch2chch3?\n\nHow many sigma (σ) bonds and pi (π) bonds are in propene, ch2chch3?...\n\n### Which statement about –2h2 – 15h – 7 is true? One of the factors is (h 2). One of the factors is (3h – 2). One of the factors\n\nWhich statement about –2h2 – 15h – 7 is true? One of the factors is (h 2). One of the factors is (3h – 2). One of the factors is (2h 1). One of the factors is (h – 7)....\n\n### Bob's lawn-mowing service is a profit-maximizing, competitive firm. bob mows lawns for \\$30 each. his total cost each day is \\$250,\n\nbob's lawn-mowing service is a profit-maximizing, competitive firm. bob mows lawns for \\$30 each. his total cost each day is \\$250, of which \\$50 is a fixed cost. he mows 5 lawns a day....\n\n### For each of the situations below, a charged particle enters a region of uniform magnetic field. draw a vector to represent the direction\n\nfor each of the situations below, a charged particle enters a region of uniform magnetic field. draw a vector to represent the direction of the magnetic force on the particle....\n\n### Which characteristic should a good scientific question have?It should lead to a hypothesis that is testable.It should have\n\nWhich characteristic should a good scientific question have? It should lead to a hypothesis that is testable. It should have an answer that is already known. It should not be answerable just through investigation. It should include many variables about a wide range of topics....\n\n### (2x + 6) + (x² + 2x + 1) = ? *\n\n(2x + 6) + (x² + 2x + 1) = ? *...\n\n### When the following two solutions are mixed: k2co3(aq)+fe(no3)3(aq) the mixture contains the ions listed below. 
sort these species\n\nwhen the following two solutions are mixed: k2co3(aq)+fe(no3)3(aq) the mixture contains the ions listed below. sort these species into spectator ions and ions that react....\n\n### Which of the following molecules can cross the lipid bilayer of a membrane directly, without a transport protein or other mechanism?\n\nwhich of the following molecules can cross the lipid bilayer of a membrane directly, without a transport protein or other mechanism? select all that apply....\n\n### How many grams of naoh (mw = 40.0) are there in 500.0 ml of a 0.250 m naoh solution?\n\nHow many grams of naoh (mw = 40.0) are there in 500.0 ml of a 0.250 m naoh solution?...\n\n### What is the magnitude of the electric field at one electron due to the other?\n\nWhat is the magnitude of the electric field at one electron due to the other?...\n\n### Cyclopropane is a highly strained molecule. why? select all that apply. why is cyclopropane highly strained?\n\ncyclopropane is a highly strained molecule. why? select all that apply. why is cyclopropane highly strained?..."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9287532,"math_prob":0.9303078,"size":4808,"snap":"2022-27-2022-33","text_gpt3_token_len":1209,"char_repetition_ratio":0.13218151,"word_repetition_ratio":0.59090906,"special_character_ratio":0.25665557,"punctuation_ratio":0.15406977,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97208536,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-18T19:22:11Z\",\"WARC-Record-ID\":\"<urn:uuid:ee79f74c-47fc-49e1-80ae-6e50adf84aa6>\",\"Content-Length\":\"46899\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a48e3ead-e0f8-4ba8-b715-3d2e69e26060>\",\"WARC-Concurrent-To\":\"<urn:uuid:202f07e8-916e-475f-8b21-f94f15c961bf>\",\"WARC-IP-Address\":\"172.67.210.72\",\"WARC-Target-URI\":\"https://ltwork.net/sat/50\",\"WARC-Payload-Digest\":\"sha1:7ZB47FN5UGPG2GQMMYM4LHUFUYNUYSSW\",\"WARC-Block-Digest\":\"sha1:OZLO36V3D2FFYJOC7U4G23IZLF7LUP6D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573399.40_warc_CC-MAIN-20220818185216-20220818215216-00187.warc.gz\"}"} |
http://www.global-sci.org/intro/article_detail/nmtma/10648.html | [
"Volume 11, Issue 1\nLocal Convergence for a Fifth Order Traub-Steffensen-Chebyshev-Like Composition Free of Derivatives in Banach Space\n\nNumer. Math. Theor. Meth. Appl., 11 (2018), pp. 160-168.\n\nPublished online: 2018-11\n\nCited by\n\nExport citation\n• Abstract\n\nWe present the local convergence analysis of a fifth order Traub-Steffensen-Chebyshev-like composition for solving nonlinear equations in Banach spaces. In earlier studies, hypotheses on the Fréchet derivative up to the fifth order of the operator under consideration are used to prove the convergence order of the method although only divided differences of order one appear in the method. That restricts the applicability of the method. In this paper, we extended the applicability of the fifth order Traub-Steffensen-Chebyshev-like composition without using hypotheses on the derivatives of the operator involved. Our convergence conditions are weaker than the conditions used in earlier studies. Numerical examples where earlier results cannot apply to solve equations but our results can apply are also given in this study.\n\n• Keywords\n\n• BibTex\n• RIS\n• TXT\n@Article{NMTMA-11-160, author = {}, title = {Local Convergence for a Fifth Order Traub-Steffensen-Chebyshev-Like Composition Free of Derivatives in Banach Space}, journal = {Numerical Mathematics: Theory, Methods and Applications}, year = {2018}, volume = {11}, number = {1}, pages = {160--168}, abstract = {\n\nWe present the local convergence analysis of a fifth order Traub-Steffensen-Chebyshev-like composition for solving nonlinear equations in Banach spaces. In earlier studies, hypotheses on the Fréchet derivative up to the fifth order of the operator under consideration are used to prove the convergence order of the method although only divided differences of order one appear in the method. That restricts the applicability of the method. In this paper, we extended the applicability of the fifth order Traub-Steffensen-Chebyshev-like composition without using hypotheses on the derivatives of the operator involved. Our convergence conditions are weaker than the conditions used in earlier studies. Numerical examples where earlier results cannot apply to solve equations but our results can apply are also given in this study.\n\n}, issn = {2079-7338}, doi = {https://doi.org/10.4208/nmtma.OA-2017-0017}, url = {http://global-sci.org/intro/article_detail/nmtma/10648.html} }\nTY - JOUR T1 - Local Convergence for a Fifth Order Traub-Steffensen-Chebyshev-Like Composition Free of Derivatives in Banach Space JO - Numerical Mathematics: Theory, Methods and Applications VL - 1 SP - 160 EP - 168 PY - 2018 DA - 2018/11 SN - 11 DO - http://doi.org/10.4208/nmtma.OA-2017-0017 UR - https://global-sci.org/intro/article_detail/nmtma/10648.html KW - AB -\n\nWe present the local convergence analysis of a fifth order Traub-Steffensen-Chebyshev-like composition for solving nonlinear equations in Banach spaces. In earlier studies, hypotheses on the Fréchet derivative up to the fifth order of the operator under consideration are used to prove the convergence order of the method although only divided differences of order one appear in the method. That restricts the applicability of the method. In this paper, we extended the applicability of the fifth order Traub-Steffensen-Chebyshev-like composition without using hypotheses on the derivatives of the operator involved. Our convergence conditions are weaker than the conditions used in earlier studies. 
Numerical examples where earlier results cannot apply to solve equations but our results can apply are also given in this study.\n\nIoannis K. Argyros & Santhosh George. (2020). Local Convergence for a Fifth Order Traub-Steffensen-Chebyshev-Like Composition Free of Derivatives in Banach Space. Numerical Mathematics: Theory, Methods and Applications. 11 (1). 160-168. doi:10.4208/nmtma.OA-2017-0017\nCopy to clipboard\nThe citation has been copied to your clipboard"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8695232,"math_prob":0.6596066,"size":1914,"snap":"2023-40-2023-50","text_gpt3_token_len":417,"char_repetition_ratio":0.11727749,"word_repetition_ratio":0.86567163,"special_character_ratio":0.20741902,"punctuation_ratio":0.10843374,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96189815,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T19:56:23Z\",\"WARC-Record-ID\":\"<urn:uuid:387461ac-ceb6-43f5-928e-4d785d6d7720>\",\"Content-Length\":\"60574\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:031f0a44-3f52-4267-95bb-aa7d7609cb01>\",\"WARC-Concurrent-To\":\"<urn:uuid:4fd4ffc6-6f70-48a7-adc3-835786c5b51a>\",\"WARC-IP-Address\":\"8.218.69.127\",\"WARC-Target-URI\":\"http://www.global-sci.org/intro/article_detail/nmtma/10648.html\",\"WARC-Payload-Digest\":\"sha1:UCSVYE6QI4P6X6EYV2TFZWK5S7FVZJ2B\",\"WARC-Block-Digest\":\"sha1:YV6SECZKUTRPIQGJ7NEX7HCDHRDGPOZK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100769.54_warc_CC-MAIN-20231208180539-20231208210539-00028.warc.gz\"}"} |
https://www.onmaths.com/skills_busters/constructions-and-loci-constructions-and-loci/ | [
"1\nConstruct the perpendicular of the line AB, shown in the diagram above.\n2\n2\nConstruct the perpendicular of the line AB, shown in the diagram above.\n2\n3\nConstruct the perpendicular of the line AB, shown in the diagram above.\n2\n4\nConstruct the bisector of the angle ABC, shown in the diagram above.\n2\n5\nConstruct the bisector of the angle ABC, shown in the diagram above.\n2\n6\nDraw the locus of points which are from point A, shown in the diagram above.\n2\n7\nDraw the locus of points which are from point A, shown in the diagram above.\n2\n8\nA gardener wants to plant a tree in a garden shown in the diagram above.\nThe tree must be at least away from point C and be closer to CD than AB.\nShade the region that the tree can be planted.\n3\n9\nA gardener wants to plant a tree in a garden shown in the diagram above.\nThe tree must be at least away from point C and be closer to CD than AB.\nShade the region that the tree can be planted.\n3"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87343454,"math_prob":0.9823148,"size":261,"snap":"2022-27-2022-33","text_gpt3_token_len":62,"char_repetition_ratio":0.11673152,"word_repetition_ratio":0.0,"special_character_ratio":0.21455939,"punctuation_ratio":0.05357143,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9815565,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T07:54:22Z\",\"WARC-Record-ID\":\"<urn:uuid:141a8f71-523c-4dd3-a401-29cea5b472e1>\",\"Content-Length\":\"160371\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c68a90a8-3ebd-4287-9b33-789e55e8a472>\",\"WARC-Concurrent-To\":\"<urn:uuid:f262f55b-e9ed-4eb8-aba3-df9a3504f9f0>\",\"WARC-IP-Address\":\"172.67.170.76\",\"WARC-Target-URI\":\"https://www.onmaths.com/skills_busters/constructions-and-loci-constructions-and-loci/\",\"WARC-Payload-Digest\":\"sha1:SQGM6PAMITXLAHDCA7I2IRYSHOXEL3TS\",\"WARC-Block-Digest\":\"sha1:A7IPURQKQBLSYNLO3BQ2C4ZLDS2JDJVC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103922377.50_warc_CC-MAIN-20220701064920-20220701094920-00069.warc.gz\"}"} |
https://bmcresnotes.biomedcentral.com/articles/10.1186/s13104-015-1176-y | [
"# A parameter estimation method for fluorescence lifetime data\n\n## Abstract\n\n### Background\n\nWhen modeling single-molecule fluorescence lifetime experimental data, the analysis often involves fitting a biexponential distribution to binned data. When dealing with small sample sizes, there is the potential for convergence failure in numerical optimization, for convergence to local optima resulting in physically unreasonable parameter estimates, and also for overfitting the data.\n\n### Results\n\nTo avoid the problems that arise in small sample sizes, we have developed a gamma conversion method to estimate the lifetime components. The key idea is to use a gamma distribution for initial numerical optimization and then convert the gamma parameters to biexponential ones via moment matching. A simulation study is undertaken with 30 unique configurations of parameter values. We also performed the same analysis on data obtained from a fluorescence lifetime experiment using the fluorophore Cy3. In both the simulation study and the real data analysis, fitting the biexponential directly led to a large number of data sets whose estimates were physically unreasonable, while using the gamma conversion yielded estimates consistently close to the true values.\n\n### Conclusions\n\nOur analysis shows that using numerical optimization methods to fit the biexponential distribution directly can lead to failure to converge, convergence to physically unreasonable parameter estimates, and overfitting the data. The proposed gamma conversion method avoids these numerical difficulties, yielding better results.\n\n## Findings\n\n### Background\n\nIn the single-molecule fluorescence lifetime experiments, a fluorophore is attached to the molecule under study, which is placed in a focal volume illuminated by a pulsed laser. The fluorophore emits photons when excited by the pulsed laser. The time length that it takes for the fluorophore to release the photon from the moment that it is excited is termed as the photon delay time (or fluorescence lifetime). The photon delay time is recorded by time-correlated single photon counting device .\n\nBecause the dye’s photon emission pattern depends on its photophysical state and molecular environment which are then affected by the conformational or electronic state of the molecule with which it is interacting (e.g., the active and inactive states of an enzyme could have different effect on the dye’s photon emission intensity in certain cases), by examining how the photon emission pattern fluctuates over time, one can investigate the underlying dynamics of the molecules. It is thus of interest to study the photon delay time.\n\nThis time lapse (or fluorescence lifetime) data are often binned to form count data. The decay curve describing the stochasticity of the continuous time lapse is then indirectly estimated from the count data. This leads to a two-level hierarchical model, where the first level models the binned counts and the second level models the continuous time lapse. That is, the stochasticity of the count data is determined by certain bin probabilities (the first level), and these probabilities are in turn modeled by the decay curve corresponding to the time lapse (the second level).\n\nMore specifically, conditioning on the total number of photons counted, the bin counts follow a multinomial distribution. The probability that a photon is counted during a given time interval (bin) is determined by the cumulative distribution function (cdf) of the time lapse. 
A mixture of exponential probability density functions (pdf) is most widely used to model the decay curve of the fluorescence lifetime. The specific context considered here is that the data follow a two-component mixture of exponentials (biexponential distribution). Furthermore, we assume that, by carefully controlled experimental conditions, the major lifetime component is known (though as we will see later, this restriction is not necessary to our method of parameter estimation) and we aim to estimate the second component.\n\nParameter estimation in this context can often be difficult, unreliable and biased. Novikov et al. showed that parameter estimation for biexponential decays is considerably more demanding and depends on the detection procedure, leaving substantial uncertainty in the estimates. Early work to address this was done by Sasaki and Masuhara, who used a convolved autoregressive model that can be fitted using the least squares (LS) method. This approach was made more efficient by Enderlein and Erdmann. However, employing LS leads to unnecessary bias.\n\nA better statistical approach would be to try to find the maximum likelihood estimators (MLEs) of the biexponential distribution involved in this hierarchical model. Indeed, using LS is equivalent to finding the MLE while assuming the bin counts follow a normal pdf; however, this assumption of normality is clearly not the case, as small bin counts and sparsity of the data make the normal model an inadequate approximation of the distribution of the bin counts. The fact that finding the MLE is more appropriate than using LS has been reviewed by Maus et al., Edel et al., and Laurence and Chromy.\n\nWhen dealing with mixture models such as a biexponential pdf, the expectation–maximization (EM) algorithm has been widely used for finding MLEs. In this hierarchical setting, however, the EM algorithm may be both difficult to implement and slow to converge, and hence other numerical optimization methods may be employed. With a small sample size and small bin width, there will inevitably be a zero count in many of the bins, and such sparsity of the data may cause these numerical optimization techniques to be unstable and error-prone in finding the MLE for a mixture distribution. The commonly used direct search Nelder–Mead algorithm was found to perform poorly with such a two-level hierarchical model (see McKinnon for more details on situations in which the Nelder–Mead algorithm fails). Enderlein et al. used an MLE approach to distinguish between distinct states or molecules, and related work used MLE and iterative convolutions to fit the arrival time histograms to a single exponential decay. Enderlein and Sauer presented a pattern-matching procedure for identifying single molecules from a mixture of molecules, although the algorithm presented works best only if the lifetimes are already known. This is not applicable to cases where we cannot experimentally separate the two distinct states of a complex; such a complex always exhibits mixed states because we cannot predetermine the lifetimes of the respective states. Edel et al. developed a modified MLE method to compensate for localized background fluorescence and the instrument response function (IRF). However, this method focuses on fitting only the monoexponential decay curve.\n\nMoreover, there are some non-MLE based parameter estimation methods in the literature. For example, Digman et al. developed a phasor plot method, which requires labor-intensive visual inspection. Kim et al. 
developed a promptness ratio method for estimating the lifetime.\n\nThis paper focuses on two issues, numerical stability and overfitting small data sets. Overfitting the data in this context can be described as yielding a model which gives very high probability to data similar to the observed data yet describing the true underlying generative process poorly. When fitting a mixture of exponential decay curves with binned data, the numerical optimization algorithm for finding MLE may not converge. Even if it converges, in practice the numerical optimization algorithm may converge to a value that is physically unreasonable. In addition, we show that the MLE’s for the mixture of exponential distributions can often overfit the data, hence giving estimates that appear satisfactory but fail to accurately represent the true parameter values.\n\nTo address these issues, we propose a novel method of estimating the parameters of the biexponential distribution using binned count data. The object is to find a generalization of the mono-exponential distribution whose pdf is flexible enough to well approximate the shape of a biexponential density curve. With this motivation in mind, we propose a new estimation method which utilizes the gamma distribution family, a family which contains that of monoexponential distributions. We show that our approach can successfully recover the parameters of the underlying biexponential distribution, while avoiding the inherent numerical instabilities involved with a mixture distribution. Our proposed estimation algorithm is robust, and is not likely to overfit the data.\n\nThe rest of the paper is organized as follows. We first present the model and the estimation method. We then demonstrate the performance of the proposed method through simulations in which data sets are generated using biexponential pdfs with varying parameters. We finally present results for real data analysis from the fluorophore Cy3, collected via single photon counting technique.\n\n### Methods\n\nGiven the total number of photons, denoted by $$n$$, the photon counts in $$m$$ time intervals $$\\mathbf Y =\\left\\{ Y_1 \\ldots Y_m \\right\\}$$ has multinomial distribution $$\\mathbf Y \\sim \\text{ Multinom }(n, p_1, p_2, \\ldots ,p_m)$$ . Let the delay time for the $$i$$-th photon be $$X_i$$. We assume that $$X_i$$’s are independently and identically distributed with some pdf $$f_X(x)$$ and cdf $$F_X(x)=\\int _{0}^x f_X(s)ds$$. Then $$p_j$$ is the probability that $$X_i$$ falls in $$\\left( \\delta (j-1), \\delta j \\right)$$, where $$\\delta$$ is the width of time interval (bin width). To ensure the constraint that $$\\sum _{j=1}^m p_j=1,$$ we have the following normalized bin probabilities:\n\n\\begin{aligned} p_j &= \\, Prob\\left( \\delta (j-1) \\le X_i < \\delta j \\right) /Prob\\left( 0 \\le X_i < \\delta m \\right) \\nonumber \\\\ &= \\frac{F_X( \\delta j)- F_X(\\delta (j-1))}{F_X(\\delta m)} \\end{aligned}\n(1)\n\nAs discussed before, we assume henceforth that $$f_X$$ is a biexponential pdf, and further from carefully controlled experimental conditions the main lifetime component is known. When fitting the model to the data via finding the MLEs of the parameters, instead of directly fitting a biexponential distribution, we propose to fit a gamma distribution to the $$X_i$$’s. 
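Before the gamma fit is introduced in detail below, the normalized bin probabilities of Eq. (1) can be transcribed directly for the biexponential case. This is our own sketch in R; the function and argument names are not from the paper:

```r
# cdf of the two-component exponential mixture
p_biexp <- function(x, c, tau1, tau2) {
  c * pexp(x, rate = 1 / tau1) + (1 - c) * pexp(x, rate = 1 / tau2)
}

# Normalized probability that a photon falls in bin j = 1, ..., m  (Eq. 1)
bin_probs <- function(m, delta, c, tau1, tau2) {
  edges <- seq(0, m * delta, by = delta)
  diff(p_biexp(edges, c, tau1, tau2)) / p_biexp(m * delta, c, tau1, tau2)
}

p <- bin_probs(m = 200, delta = 50, c = 0.75, tau1 = 1500, tau2 = 3000)
sum(p)   # equals 1 by construction
```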
In particular, the pdf of the biexponential distribution and of the gamma distribution are given respectively as\n\n\\begin{aligned} f_X(x)=\\frac{c}{\\tau _1}e^{-\\frac{x}{\\tau _1}}+\\frac{1-c}{\\tau _2}e^{-\\frac{x}{\\tau _2}} \\end{aligned}\n(2)\n\\begin{aligned} g_X(x)=\\frac{1}{\\left( \\frac{\\tau _{\\gamma }}{\\alpha }\\right) ^\\alpha \\Gamma (\\alpha )}x^{\\alpha -1}e^{-\\frac{\\alpha x}{\\tau _{\\gamma }}}. \\end{aligned}\n(3)\n\nThe gamma distribution is chosen because it avoids the numerical instability found in practice when finding MLEs of the parameters in a mixture distribution and is flexible enough to approximate the biexponential distribution while being less likely than biexponential to overfit small data sets. It is worth noting that when $$\\alpha =1$$ and either $$c\\in \\{0,1\\}$$ or $$\\tau _1=\\tau _2$$, the gamma distribution is exactly the biexponential distribution (and both equal the monoexponential). Also of note is that although $$g_X(x)$$ diverges as $$x\\rightarrow 0$$ (for $$0<\\alpha \\le 1$$, which we restrict to be the case), this is in practice negligible since the probability of $$x$$ being in a neighborhood around zero goes to zero as the neighborhood itself shrinks to zero, i.e., $$\\mathbb {P}(x<\\epsilon )\\rightarrow 0$$ as $$\\epsilon \\rightarrow 0$$.\n\nWe estimate the parameters $$(\\alpha ,\\tau _{\\gamma })$$ of the hierarchical model via maximum likelihood method through the minimization of\n\n\\begin{aligned} -loglik(\\varvec{\\alpha ,\\tau _{\\gamma }};\\varvec{y}) = n\\log (G_X(\\delta m)) - \\sum _{j=1}^m y_j \\log (G_X(\\delta j)-G_X(\\delta (j-1))) + C \\end{aligned}\n(4)\n\nwhere $$G_X$$ is the cumulative distribution function corresponding to the gamma distribution in (3), $$y_j$$ is the observed photon count in the $$j$$th time interval, and $$C$$ is a constant; $$G_X$$ implicitly depends on the parameters $$(\\alpha ,\\tau _{\\gamma })$$. The minimization of (4) is carried out using the Nelder–Mead algorithm.\n\nThe estimates of parameters in the gamma distribution are then converted to those in the biexponential distribution with equal mean and variance, i.e., we match the first and second moments of the biexponential and the gamma pdf’s. Since there are two remaining unknown parameters in the biexponential distribution, we solve the system of equations satisfying\n\n\\begin{aligned} \\mathbb {E}(X^k | X \\sim \\text{ Gamma }) = \\mathbb {E}(X^k | X \\sim \\text{ Biexp }) \\quad \\text{ for } k=1,2. \\end{aligned}\n(5)\n\nBy solving this system of equations, we are matching the expected value of $$X^k$$ for $$k=1,2$$. This is equivalent to the system of equations given by the derivatives of the moment generating functions as following\n\n\\begin{aligned} M^{(k)}_{X,\\gamma }(0)=M^{(k)}_{X,BE}(0) \\quad \\text{ for } k=1,2. \\end{aligned}\n(6)\n\nwhere\n\n$$M_{X,\\gamma }(t) \\, = \\, \\left(1-\\frac{\\tau _{\\gamma }}{\\alpha }t\\right)^{-\\alpha },$$\n(7)\n$$M_{X,BE}(t) \\, = \\, c(1-\\tau _{1}t)^{-1}+(1-c)(1-\\tau _{2}t)^{-1},$$\n(8)\n\nand where $$M_{X,\\cdot }^{(k)}$$ is the kth derivative of $$M_{X,\\cdot }$$ with respect to $$t$$. 
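A minimal sketch of this first stage, assuming the binned counts y from the earlier sketch: the negative log-likelihood of Eq. (4) is written with the gamma cdf (shape $$\alpha$$, scale $$\tau_{\gamma}/\alpha$$, so that the mean is $$\tau_{\gamma}$$) and handed to R's optim with the Nelder–Mead method. The starting values here are only illustrative:

```r
neg_loglik_gamma <- function(par, y, delta) {
  alpha <- par[1]; tau_g <- par[2]
  if (alpha <= 0 || alpha > 1 || tau_g <= 0) return(1e12)   # large penalty outside the allowed range
  m     <- length(y)
  edges <- seq(0, m * delta, by = delta)
  G     <- pgamma(edges, shape = alpha, scale = tau_g / alpha)  # gamma cdf at bin edges
  dG    <- diff(G)                                              # bin increments
  n     <- sum(y)
  # Eq. (4), up to the additive constant C; empty bins contribute nothing
  n * log(G[m + 1]) - sum(y[y > 0] * log(dG[y > 0]))
}

fit <- optim(par = c(alpha = 0.5, tau_g = 1500),
             fn  = neg_loglik_gamma,
             y   = as.numeric(y), delta = 50,
             method = "Nelder-Mead")
fit$par   # MLEs of (alpha, tau_gamma)
```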
The closed form solutions using the MLEs from fitting the gamma distribution to approximate the parameters of the biexponential distribution are\n\n\\begin{aligned} \\hat{\\tau }_2 &= \\, \\frac{2\\alpha \\tau _{\\gamma }\\tau _1-(1+\\alpha )\\tau _{\\gamma }^2}{2\\alpha \\tau _1-2\\alpha \\tau _{\\gamma }}, \\nonumber \\\\ \\hat{c} &= \\, \\frac{(1-\\alpha )\\tau _{\\gamma }^2}{2\\alpha \\tau _1^2-4\\alpha \\tau _{\\gamma }\\tau _1+(1+\\alpha )\\tau _{\\gamma }^2}. \\end{aligned}\n(9)\n\nNote that, while the focus of this paper is the context where one lifetime component is known, this new method of estimating the parameters of the biexponential distribution can be easily extended to the cases where the main lifetime component is unknown. We can accomplish this by simply matching the first three moments, i.e. letting $$k=1,2,3$$ in Eq. (5). If this is the case, the conversion equations become\n\n\\begin{aligned} \\widehat{\\tau }_1 &= \\, \\frac{\\tau _{\\gamma }\\left( \\sqrt{2(2-\\alpha )(\\alpha +1)}-\\alpha -1 \\right) }{\\sqrt{2(2-\\alpha )(\\alpha +1)}-4\\alpha +2w} \\nonumber \\\\ \\widehat{\\tau }_2 &= \\, \\frac{\\tau _{\\gamma }\\left( \\sqrt{2(2+\\alpha -\\alpha ^2)}+2(\\alpha +1) \\right) }{6\\alpha }\\nonumber \\\\ \\widehat{c} &= \\, \\frac{\\left( \\frac{\\alpha +1}{2\\alpha }\\right) \\tau _{\\gamma }^2-\\tau _2^2}{\\tau _1^2-\\tau _2^2}. \\end{aligned}\n(10)\n\nTo the authors’ knowledge, single molecule lifetime analyses involve only as complex a model as a biexponential decay. However, there is no theoretical reason as to why our method could not be applied to an exponential mixture decay curve with greater than two components, though as the number of components in the exponential mixture model increases, the algebra in finding the closed form solution quickly becomes tedious and impractical. In general, assuming no lifetime components known a priori, the system of equations needed to be solved for a $$M$$ component exponential mixture model is $$M^{(k)}_{X,\\gamma }(0)=M^{(k)}_{X,BE}(0)$$ for $$k=1,2,\\ldots ,2M-1$$.\n\n### Results\n\nThe performance of the proposed method can be assessed in two ways. First, we compare the estimates of the second unknown lifetime component $$\\tau _2$$ to the true value. Second, we evaluate the overfitness of the estimation by comparing how well the estimates fit the data to how close the estimates are to the truth. To this end we use two quantitative measures, Pearson’s $$\\chi ^2$$ statistic and the Hellinger Distance (see., e.g., ). We compute the Pearson’s $$\\chi ^2$$ statistic based on the multinomial distribution of the binned count data to determine how closely our fitted model fits the data. It is computed as\n\n\\begin{aligned} \\chi ^2 = \\sum _{j=1}^m \\frac{(y_j - \\mathbb {E}(y_j))^2}{\\mathbb {E}(y_j)} \\end{aligned}\n(11)\n\nwhere $$\\mathbb {E}(y_j)$$ is\n\n\\begin{aligned} \\mathbb {E}[Y_j]=n \\cdot p_j = n \\cdot \\frac{F_X \\left( \\delta j \\right) - F_X \\left( \\delta (j-1) \\right) }{F_X\\left( \\delta m \\right) }. \\end{aligned}\n(12)\n\nTo measure how close our estimated biexponential curve is to the true curve we use Hellinger’s Distance. This is a metric commonly used in the statistical literature to compare two different pdf’s. By using Hellinger’s Distance to compare the true curve and the estimated curve, we see how close to the truth our estimations are. 
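Before turning to the Hellinger computation below, the two-moment conversion of Eq. (9) above can be transcribed directly: given the gamma MLEs and the known component $$\tau_1$$, it returns the implied biexponential estimates. Again this is our own R sketch with our own function name:

```r
gamma_to_biexp <- function(alpha, tau_g, tau1) {
  tau2_hat <- (2 * alpha * tau_g * tau1 - (1 + alpha) * tau_g^2) /
              (2 * alpha * tau1 - 2 * alpha * tau_g)
  c_hat    <- ((1 - alpha) * tau_g^2) /
              (2 * alpha * tau1^2 - 4 * alpha * tau_g * tau1 + (1 + alpha) * tau_g^2)
  list(tau2 = tau2_hat, c = c_hat)
}

# Using the gamma fit from the previous sketch, with tau_1 assumed known
gamma_to_biexp(alpha = fit$par[["alpha"]], tau_g = fit$par[["tau_g"]], tau1 = 1500)
```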
Hellinger Distance can by computed as\n\n\\begin{aligned} H(f,h)\\propto & {} \\left\\{ \\int \\left( f^{\\frac{1}{2}} - h^{\\frac{1}{2}} \\right) ^2 \\right\\} ^{\\frac{1}{2}} \\nonumber \\\\\\propto & {} \\sqrt{ 2-2\\int (fh)^{\\frac{1}{2}} } \\end{aligned}\n(13)\n\nwhere $$f$$ represents a curve fit from the small data sets, and $$h$$ represents the true (or our best approximation to the true) data generating process. We do not have a closed form for the Hellinger Distance between two biexponential distributions; however, since\n\n\\begin{aligned} \\int (fh)^{\\frac{1}{2}}= \\int \\sqrt{\\frac{f}{h}}h = E_h \\left( \\sqrt{\\frac{f}{h}} \\right) \\end{aligned}\n\nand since one can make random draws from $$h$$, it is straightforward to use the Monte Carlo estimate of the Hellinger Distance $$H(f,h)$$ for any $$f$$. That is, for sufficiently large N,\n\n\\begin{aligned} H(f,h) \\approx \\sqrt{2- \\frac{2}{N} \\sum _{i=1}^N \\left( \\sqrt{\\frac{f(z_i)}{h(z_i)} }\\right) } \\end{aligned}\n(14)\n\nwhere $$z_1,z_2, \\ldots , z_N$$ are independent draws from $$h$$.\n\n### Simulation results\n\nBiexponential data were simulated as follows. In the biexponential distribution, the first lifetime component $$\\tau _1$$ was fixed at 1,500, and assumed known when fitting the biexponential distribution directly by maximizing the likelihood and when fitting the biexponential distribution indirectly by using the gamma conversion method; the first component weight $$c$$ took values in $$\\{ 0.60, \\, 0.75, \\, 0.90\\}$$; the second lifetime component $$\\tau _2$$ took values equaling $$k\\tau _1$$, for $$k$$ in {0.500, 0.800, 0.900, 0.950, 0.990, 1.01, 1.05, 1.10, 1.20, 2.00}; the bin width $$\\delta$$ was set to be 50. We generated 1,000 data sets of 50 photons for each of the 30 configurations. For each data set we estimated the lifetime parameter values by fitting the biexponential directly and also by using our proposed approach. In both cases optimization was performed by using the Nelder–Mead algorithm, setting the maximum number of iterations to be 10,000 and the relative convergence tolerance to be $$1\\times 10^{-8}$$. In the former case, we attempted to fit the data using 25 different starting values of $$c$$ and initializing $$\\tau _2$$ to be equal to the mean of the bin counts (i.e., $$\\sum _j (jy_j)/\\sum _{\\ell }y_{\\ell }$$). In the latter case we initialized $$\\alpha =0.5$$, and $$\\tau _{\\gamma }$$ was initialized similarly to $$\\tau _2$$ when fitting the biexponential directly. We note here that to find good solutions from the optimization algorithm it was necessary to use multiple starting points for fitting the biexponential directly whereas this was not necessary with our method; in particular, without using multiple initialization points for fitting the biexponential distribution directly we would often fail to converge or obtain poor estimates. Out of the 30,000 simulated data sets, attempting to fit the biexponential model directly failed to converge in 14,272 instances even while using multiple starting points, as opposed to 2,160 instances when using the proposed gamma method using only one starting point.\n\nTo compare the performance of the methods, we focus on the data sets in which both methods converged and use the quantity $$\\log (\\widehat{\\tau }_2/\\tau _2)$$ (base $$e$$) as a benchmark. 
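The Monte Carlo approximation of Eq. (14) above is straightforward to sketch: draw from the reference density $$h$$ and average the square-root density ratio. The densities and parameter values below are generic placeholders, reusing r_biexp from the earlier sketch:

```r
hellinger_mc <- function(dens_f, dens_h, rand_h, N = 1e5) {
  z <- rand_h(N)                                    # independent draws from h
  sqrt(2 - 2 * mean(sqrt(dens_f(z) / dens_h(z))))   # Eq. (14)
}

d_biexp <- function(x, c, tau1, tau2) {
  c * dexp(x, rate = 1 / tau1) + (1 - c) * dexp(x, rate = 1 / tau2)
}

# Distance between a hypothetical fitted curve f and a reference curve h
hellinger_mc(
  dens_f = function(x) d_biexp(x, c = 0.60, tau1 = 1500, tau2 = 2500),
  dens_h = function(x) d_biexp(x, c = 0.75, tau1 = 1500, tau2 = 3000),
  rand_h = function(n) r_biexp(n, c = 0.75, tau1 = 1500, tau2 = 3000)
)
```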
Figure 1 shows the 2-dimensional histogram of these estimates, obtained both from directly fitting the biexponential distribution (vertical axis) and from using the gamma conversion method (horizontal axis). It can be seen from this plot that when fitting the biexponential directly, even among the data sets in which convergence was reached, there are a large number of occasions where the estimated values of $$\\tau _2$$ are physically unreasonable, yet the gamma conversion method provides reasonable answers. This can be further seen by looking at, e.g., the 5th and 95th quantiles of $$\\widehat{\\tau }_2/\\tau _2$$, which were 0.23 and 202,000 respectively when fitting the biexponential directly and 0.35 and 5.2 respectively when applying the gamma conversion method. These numbers suggest the estimates obtained from directly fitting biexponential distribution are numerically instable compared to those obtained from gamma conversion method.\n\nIn reality, many of these results yielding extremely large estimates simply would not be accepted in practice. Instead, an artificial ceiling may be put on the lifetime estimates. When we do this in our simulation study, using a cap of 100 ns, our results lead to the same conclusions. To give a brief summary of these slightly modified results, we computed the mean square error (MSE), which is the average of $$(\\widehat{\\tau }_2-\\tau _2)^2$$, for the direct fitting of the biexponential ($$1{,}500\\,ns^2$$) and for our proposed gamma conversion method ($$55\\,ns^2$$); clearly even with this truncation of extremely high estimates, our proposed method is performing much better.\n\nTo evaluate the overfitting problem, we compute the Hellinger Distance and Pearson’s $$\\chi ^2$$ statistics. For each of the simulated data sets in which both methods converged, these two values were computed by fitting the biexponential distribution directly and also by using the gamma conversion method. Figure 2 gives the two-dimensional histogram of these values, where the plot on the left corresponds to fitting the biexponential directly and the plot on the right corresponds to using the gamma conversion method. We see that fitting the biexponential directly, in a large number of the data sets, yields estimates which fit the data quite well, as evidenced by a small $$\\chi ^2$$ value, but are far from the truth, as evidenced by a large Hellinger Distance. Using the gamma conversion method eliminates this overfitting problem, as evidenced by the observations that all the Hellinger Distance values are small.\n\nFigures 1 and 2 broken down by simulation configuration are given in the Additional files 1, 2, 3 and 4. What is evident is that while the problems of numerical instability and overfitting which arise from fitting the biexponential distribution directly are milder in some configurations than others, these problems do in fact exist for each configuration, while our proposed approach greatly ameliorates these issues. When $$\\tau _1$$ was treated as unknown we obtained similar results. See Additional files 1, 2, 3 and 4 for these results.\n\n### Fluorophore Cy3 results\n\nSingle molecule fluorescence lifetime was measured as follows. We used a confocal microscope setup to minimize the detection volume. A DNA strand labeled with Cy3\n\n(5′-Cy3-TATTATATAAGTAATAAATA-3′) was excited by 532 nm quasi-continuous pulsed laser (Vanguard VGND2000-76-HM532, Spectra-Physics), which has 12 ps pulse width. These make a broadening of about 5% of the lifetime we are measuring. 
While there exist methods to deconvolute the IRF and our software is capable of doing it, we did not do it for this study as the broadening is negligible compared to the broadening that we are dealing with, coming from the small number of photons. Further, we considered only the time window after the peak of the lifetime decay in order that our fitting would not be seriously affected by the IRF broadening. The emitted photons passed through the objective lens, dichroic mirror, emission filter, and focusing lens, and then were collected at the avalanche photodiode (PD5CTC, Micro Photon Devices). Out-of-focus signal was rejected by use of the pinhole pair in this confocal setup (see Figure 3).\n\nThe excitation pulses were branched to a photodiode for synchronization. Time delay between the signals from the avalanche and sync photodiodes was measured by the time correlated single photon counter (SPC630, Becker&Hickl GmbH). Figure 3 shows a schematic of that described above. We used 2 nM of fluorophores for detecting fluorescence from diffusing molecules. We set it up such that it gives the APD counting rate smaller than $$10^5$$/s. Considering that the excitation pulse repeats at 80 MHz (i.e. 12.5 ns), this corresponds to detecting less than one photon every 800 pulses on average. The probability of detecting more than two photons (from two different molecules) from a single pulse is less than 1/800. As we used only 50 photons per histogram and also the pulse interval of 12.5 ns much longer than the decay time, there will be practically no photon that is not coming from the latest excitation pulse. Thus we confirm that we are measuring tightly correlated photon emission from excited single molecules. The data from SPC630 were collected until desired number of photons were detected and then plotted as a lifetime histogram with appropriate bin sizes. See for more details.\n\nOne large data set ($$\\approx$$1.2 million photons) was obtained from the above experiment. The bin width used was 12.5 ns/256 = 48.8 ps; the maximum amount of time in the observation window is 10,101.6 ps (207 bins). The biexponential distribution was fit directly to obtain $$\\tau _1=927.4$$ ps, $$\\tau _2=2739$$ ps, and $$c=0.6171$$. These components are comparable to those found by , who obtained the estimates $$\\tau _1=530$$ ps, $$\\tau _2=2000$$ ps, and $$c=0.51$$. The main component here was $$\\tau _1$$, and for the remainder of the analysis we assumed this value known, while $$c$$ and $$\\tau _2$$ remained to be estimated, treating the estimates of 2,739 ps and 0.6171 as the “true” values for the rest of the analysis. Figure 4 gives the histogram of the large data set, with the estimated biexponential curve superimposed. Next we sampled without replacement from the large data set to obtain 2,586 small data sets of sample size 50. Photon decay curves were estimated from these small data sets using both the biexponential pdf directly and the gamma conversion method, and the results were compared to the true values. The intialization of the optimization algorithms was the same as that done for the simulated data.\n\nAttempting to fit the biexponential pdf directly led to 364 instances of failure to converge, as opposed to just one using the gamma conversion method. 
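A schematic of the subsampling experiment just described, reusing the neg_loglik_gamma and gamma_to_biexp sketches from above. The variable names, the binning shortcut and the placeholder all_times vector are ours; the paper's exact bookkeeping may differ:

```r
# 'all_times' would hold the pooled delay times (about 1.2 million values, in ps)
refit_one <- function(all_times, size = 50, delta = 12.5e3 / 256, m = 207) {
  x   <- sample(all_times, size)                              # one 50-photon data set
  y   <- tabulate(pmin(floor(x / delta) + 1, m), nbins = m)   # quick-and-dirty binning
  fit <- optim(c(0.5, mean(x)), neg_loglik_gamma, y = y, delta = delta,
               method = "Nelder-Mead")
  gamma_to_biexp(fit$par[1], fit$par[2], tau1 = 927.4)$tau2   # estimated tau_2 (ps)
}

# tau2_hats <- replicate(2586, refit_one(all_times))
```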
For each of the small data sets where both methods converged, the quantity $$\log(\widehat{\tau}_2/\tau_2)$$ was computed for $$\widehat{\tau}_2$$, which was estimated by fitting the biexponential directly and also by the gamma conversion method. Figure 5 is a two-dimensional histogram of these values, where the vertical axis corresponds to fitting the biexponential directly and the horizontal axis corresponds to the gamma conversion method. From this figure, it is clear that there are a large number of data sets in which the estimates of $$\tau_2$$, obtained directly by fitting the biexponential distribution, are physically unreasonable values, yet the gamma conversion method provides reasonable answers. The 5th and 95th quantiles of $$\widehat{\tau}_2/\tau_2$$ were 0.55 and 67,000 from fitting the biexponential directly, and 0.49 and 1.1 when using the gamma conversion method. These numbers suggest that the gamma conversion method gives more stable results than those obtained from fitting the biexponential distribution directly.

As with the simulation study, we again applied a ceiling of 100 ns to the extremely high estimates of $$\tau_2$$. Again the conclusions were the same for these modified results. The MSE for directly fitting the biexponential was $$725\,ns^2$$ and for the gamma conversion method was $$0.89\,ns^2$$.

We also computed, for each of the small data sets, the Hellinger Distance and $$\chi^2$$ statistic for both methods. Figure 6 is the two-dimensional histogram of Hellinger Distances vs. $$\chi^2$$ statistics, where the plot on the left corresponds to the values computed when fitting the biexponential distribution directly and the plot on the right corresponds to the gamma conversion method. We observe, just as in the simulations, that when fitting the biexponential directly, in a large number of the smaller data sets we obtain estimates that fit the data quite well, as evidenced by small $$\chi^2$$ values, but whose estimated decay curve is far from the truth, as evidenced by a large Hellinger Distance. Using the gamma conversion method eliminates this overfitting problem, as evidenced by the fact that all the Hellinger Distance values are small.

Similar results were obtained for the fluorophore Cy3 data when $$\tau_1$$ was assumed to be unknown. See Additional files 1, 2, 3 and 4 for these results.

### Conclusion

In single-molecule fluorescence lifetime experiments, it is of great interest to study the photon delay time. In particular, we are interested in fitting a mixture-of-exponentials model to the photon count data. However, directly fitting a mixture-of-exponentials model may lead to numerical optimization problems, whether that be failure to converge or convergence to local optima resulting in physically unreasonable values or overfitting. In this paper, we proposed the gamma conversion method, where we first fit a gamma distribution to the data and then, via moment matching, estimate the biexponential parameters. In this manner both the numerical instability and the overfitting problems are avoided.

The proposed method was evaluated using Pearson's $$\chi^2$$ statistic and the Hellinger Distance. As an alternative to Pearson's $$\chi^2$$ statistic and the Hellinger Distance, we could have compared the MSE, just as we did when we applied the ceiling to the lifetime estimates.
Calculating the ratio of MSEs obtained from fitting the biexponentials directly and from our proposed method yielded a value of about $$5.6\times10^{10}$$ for the real data example, and similar ratios were consistently found in all 30 simulation configurations. These observations suggest that the estimates obtained from gamma conversion significantly outperform those obtained from directly fitting biexponentials.

Although the method was designed to analyze photon counts in single-molecule fluorescence lifetime experiments, it may be applied to other problems involving the fitting of mixtures of exponential distributions. Most FLIM measurements, however, have a rather large number of photons ($$\sim1{,}000$$) for each pixel, and thus do not suffer from the overfitting or numerical instability issues highlighted here for single-molecule fluorescence lifetime data.

## References

1. Sorokina M, Koh H, Patel SS, Ha T (2009) Fluorescent lifetime trajectories of a single fluorophore reveal reaction intermediates during transcription initiation. J Am Chem Soc 131(28):9630–9631

2. Kollner M, Wolfrum J (1992) How many photons are necessary for fluorescence-lifetime measurements? Chem Phys Lett 200(1,2):199–204

3. Xie XS (2002) Single-molecule approach to dispersed kinetics and dynamic disorder: probing conformational fluctuation and enzymatic dynamics. J Chem Phys 117(24):11024–11032

4. Duncan R, Bergmann A, Cousin M, Apps D, Shipston M (2004) Multi-dimensional time-correlated single photon counting (TCSPC) fluorescence lifetime imaging microscopy (FLIM) to detect FRET in cells. J Microsc 215(1):1–12

5. Novikov E, Hofkens J, Cotlet M, Maus M, De Schryver FC, Boens N (2001) A new analysis method of single molecule fluorescence using series of photon arrival times: theory and experiment. Spectrochim Acta Part A Mol Biomol Spectrosc 57(11):2109

6. Sasaki K, Masuhara H (1991) Analysis of transient emission curves by a convolved autoregressive model. Appl Opt 30(8):977–980

7. Enderlein J, Erdmann R (1997) Fast fitting of multi-exponential decay curves. Opt Commun 134(1):371–378

8. Turton D, Reid G, Beddard G (2003) Accurate analysis of fluorescence decays from single molecules in photon counting experiments. Anal Chem 75(16):4182–4187

9. Laurence TA, Chromy BA (2010) Efficient maximum likelihood estimator fitting of histograms. Nat Methods 7(5):338–339

10. Maus M, Cotlet M, Hofkens J, Gensch T, De Schryver FC, Schaffer J et al (2001) An experimental comparison of the maximum likelihood estimation and nonlinear least-squares fluorescence lifetime analysis of single molecules. Anal Chem 73(9):2078–2086

11. Edel JB, Eid JS, Meller A (2007) Accurate single molecule FRET efficiency determination for surface immobilized DNA using maximum likelihood calculated lifetimes. J Phys Chem B 111(11):2986–2990

12. Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Ser B 39:1–37 (with discussion)

13. Nelder JA, Mead R (1965) A simplex method for function minimization. Comput J 7(4):308–313

14. McKinnon KIM (1998) Convergence of the Nelder–Mead simplex method to a nonstationary point. SIAM J Optim 9(1):148–158

15. Enderlein J, Goodwin PM, Van Orden A, Patrick Ambrose W, Erdmann R, Keller RA (1997) A maximum likelihood estimator to distinguish single molecules by their fluorescence decays. Chem Phys Lett 270(5):464–470

16. 
Rothwell P, Berger S, Kensch O, Felekyan S, Antonik M, Wöhrl B et al (2003) Multiparameter single-molecule fluorescence spectroscopy reveals heterogeneity of HIV-1 reverse transcriptase: primer/template complexes. Proc Natl Acad Sci 100(4):1655–1660\n\n17. Enderlein J, Sauer M (2001) Optimal algorithm for single-molecule identification with time-correlated single-photon counting. J Phys Chem A 105(1):48–53\n\n18. Digman MA, Caiolfa VR, Zamai M, Gratton E (2008) The phasor approach to fluorescence lifetime imaging analysis. Biophys J 94(2):14–16\n\n19. Kim G-H, Legresley SE, Snyder N, Aubry PD, Antonik M (2011) Single-molecule analysis and lifetime estimates of heterogeneous low-count-rate time-correlated fluorescence data. Appl Spectrosc 65(9):981–990\n\n20. Le Cam L, Lo Yang G (2000) Asymptotics in statistics: some basic concepts. Springer, New York\n\n21. Sanborn ME, Connolly BK, Gurunathan K, Levitus M (2007) Fluorescence properties and photophysics of the sulfoindocyanine Cy3 linked covalently to DNA. J Phys Chem B 111(37):11064–11074\n\n## Authors' contributions\n\nPM conceived of the study. DS and PM designed algorithm. HK and TH performed experiments. DS, HK, TH, and PM wrote the manuscript. All authors read and approved the final manuscript.\n\n### Acknowledgements\n\nThe study was partially supported by NSF Grants DMS-1055815 (DMS-1438957), DMS-1228288 (DMS-1440038) and DMS-1222718 (DMS 1440037). Hajin Kim was supported by the Institute for Basic Science (IBS-R020-D1), 2014 UNIST Research Fund (1.130091.01), and Basic Science Research Program through NRF of Korea (2014R1A1A1003949).\n\n### Compliance with ethical guidelines\n\nCompeting interests The authors declare that they have no competing interests.\n\n## Author information\n\nAuthors\n\n### Corresponding author\n\nCorrespondence to Ping Ma.\n\n## Rights and permissions",
null,
""
] | [
null,
"https://bmcresnotes.biomedcentral.com/track/article/10.1186/s13104-015-1176-y",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88576895,"math_prob":0.9921352,"size":33692,"snap":"2022-27-2022-33","text_gpt3_token_len":7560,"char_repetition_ratio":0.16738898,"word_repetition_ratio":0.048756607,"special_character_ratio":0.22770984,"punctuation_ratio":0.10629139,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991134,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T23:36:34Z\",\"WARC-Record-ID\":\"<urn:uuid:6901572b-cdbf-4b45-8ab6-7550b349dd34>\",\"Content-Length\":\"278858\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a2e8762a-37e6-4e86-b24d-c1154d30a97f>\",\"WARC-Concurrent-To\":\"<urn:uuid:e807ab47-e545-42c1-bf9c-9ae95c8a3204>\",\"WARC-IP-Address\":\"146.75.36.95\",\"WARC-Target-URI\":\"https://bmcresnotes.biomedcentral.com/articles/10.1186/s13104-015-1176-y\",\"WARC-Payload-Digest\":\"sha1:MLGLISAJAZS567HY6PZ4BSK665O5G63W\",\"WARC-Block-Digest\":\"sha1:VJSM3G6I5WHLGSGHAQMTVEZ2BTAGKIK2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103915196.47_warc_CC-MAIN-20220630213820-20220701003820-00237.warc.gz\"}"} |
https://www.hackmath.net/en/word-math-problems/rational-numbers?tag_id=111 | [
"# Rational numbers + real numbers - math problems\n\n#### Number of problems found: 6\n\n• Distance of numbers",
null,
"Which number has the same distance from the numbers -5.65 and 7.25 on the numerical axis?\n• Factors",
null,
"Can the expression ? be factored into rational factors?\n• Closest natural number",
null,
"Find the closest natural number to 4.456 to 44.56 and 445.6.\n• Server",
null,
"Calculate how many average minutes a year is a webserver is unavailable, the availability is 99.99%.\n• Discounts on rail (ZSSK)",
null,
"The railway company ZSSK offers a discount of 15% (REGIONAL) for 5 Eur/year. Calculate the real discount rate as a percentage, where passengers will travel 19 Eur per week.\n• Youth track",
null,
"Youth track from Hronská Dúbrava to Banská Štiavnica which announced cancellation attracted considerable media attention and public opposition, has cost 6.3 euro per capita and revenue 13 cents per capita. Calculate the size of subsidies to a trip group o\n\nWe apologize, but in this category are not a lot of examples.\nDo you have an interesting mathematical word problem that you can't solve it? Submit a math problem, and we can try to solve it.\n\nWe will send a solution to your e-mail address. Solved examples are also published here. Please enter the e-mail correctly and check whether you don't have a full mailbox.\n\nPlease do not submit problems from current active competitions such as Mathematical Olympiad, correspondence seminars etc..."
] | [
null,
"https://www.hackmath.net/thumb/93/t_4793.jpg",
null,
"https://www.hackmath.net/thumb/17/t_417.jpg",
null,
"https://www.hackmath.net/thumb/99/t_5399.jpg",
null,
"https://www.hackmath.net/thumb/40/t_440.jpg",
null,
"https://www.hackmath.net/thumb/20/t_120.jpg",
null,
"https://www.hackmath.net/thumb/20/t_520.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.897949,"math_prob":0.8657114,"size":1239,"snap":"2020-45-2020-50","text_gpt3_token_len":300,"char_repetition_ratio":0.10850202,"word_repetition_ratio":0.1904762,"special_character_ratio":0.23405972,"punctuation_ratio":0.1125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95741117,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-29T05:16:26Z\",\"WARC-Record-ID\":\"<urn:uuid:532747c6-7ca3-42a4-bb39-9a6c6b8306b9>\",\"Content-Length\":\"34474\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9f95ae29-82b4-45a6-a6ac-939006070693>\",\"WARC-Concurrent-To\":\"<urn:uuid:6e86be1d-6dc0-4037-9795-2cf00752ecc3>\",\"WARC-IP-Address\":\"104.24.105.91\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/word-math-problems/rational-numbers?tag_id=111\",\"WARC-Payload-Digest\":\"sha1:HD7REN5BQ3QNHYDLGHOFIJF77ARCWVCR\",\"WARC-Block-Digest\":\"sha1:365OBLLGWZCOLG3QYXRU4LNCOR6BBAIX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141196324.38_warc_CC-MAIN-20201129034021-20201129064021-00389.warc.gz\"}"} |
https://ualib.org/Base.Terms.Properties.html | [
"",
null,
"### Properties of Terms and the Term Algebra\n\nThis is the Base.Terms.Properties module of the Agda Universal Algebra Library.\n\n```{-# OPTIONS --without-K --exact-split --safe #-}\n\nopen import Overture using ( 𝓞 ; 𝓥 ; Signature )\n\nmodule Base.Terms.Properties {𝑆 : Signature 𝓞 𝓥} where\n\n-- Imports from Agda and the Agda Standard Library --------------------------------------\nopen import Agda.Primitive using () renaming ( Set to Type )\nopen import Data.Product using ( _,_ ; Σ-syntax )\nopen import Function using ( _∘_ )\nopen import Data.Empty.Polymorphic using ( ⊥ )\nopen import Level using ( Level )\nopen import Relation.Binary using ( IsEquivalence ; Setoid ; Reflexive )\nusing ( Symmetric ; Transitive )\nopen import Relation.Binary.PropositionalEquality as ≡\nusing ( _≡_ ; module ≡-Reasoning )\nopen import Axiom.Extensionality.Propositional\nusing () renaming (Extensionality to funext)\n\n-- Imports from the Agda Universal Algebra Library ----------------------------------------\nopen import Overture using ( _⁻¹ ; 𝑖𝑑 ; ∣_∣ ; ∥_∥ )\nopen import Base.Functions using ( Inv ; InvIsInverseʳ ; Image_∋_)\nusing ( eq ; IsSurjective )\nopen import Base.Equality using ( swelldef )\n\nopen import Base.Algebras {𝑆 = 𝑆} using ( Algebra ; _̂_ ; ov )\nopen import Base.Homomorphisms {𝑆 = 𝑆} using ( hom )\nopen import Base.Terms.Basic {𝑆 = 𝑆} using ( Term ; 𝑻 )\n\nopen Term\nprivate variable α β χ : Level\n```\n\n#### The universal property\n\nThe term algebra `𝑻 X` is absolutely free (or universal, or initial) for algebras in the signature `𝑆`. That is, for every 𝑆-algebra `𝑨`, the following hold.\n\n1. Every function from `𝑋` to `∣ 𝑨 ∣` lifts to a homomorphism from `𝑻 X` to `𝑨`.\n2. The homomorphism that exists by item 1 is unique.\n\nWe now prove this in Agda, starting with the fact that every map from `X` to `∣ 𝑨 ∣` lifts to a map from `∣ 𝑻 X ∣` to `∣ 𝑨 ∣` in a natural way, by induction on the structure of the given term.\n\n```private variable X : Type χ\n\nfree-lift : (𝑨 : Algebra α 𝑆)(h : X → ∣ 𝑨 ∣) → ∣ 𝑻 X ∣ → ∣ 𝑨 ∣\nfree-lift _ h (ℊ x) = h x\nfree-lift 𝑨 h (node f 𝑡) = (f ̂ 𝑨) (λ i → free-lift 𝑨 h (𝑡 i))\n\n```\n\nNaturally, at the base step of the induction, when the term has the form `generator` x, the free lift of `h` agrees with `h`. For the inductive step, when the given term has the form `node f 𝑡`, the free lift is defined as follows: Assuming (the induction hypothesis) that we know the image of each subterm `𝑡 i` under the free lift of `h`, define the free lift at the full term by applying `f ̂ 𝑨` to the images of the subterms.\n\nThe free lift so defined is a homomorphism by construction. Indeed, here is the trivial proof.\n\n```lift-hom : (𝑨 : Algebra α 𝑆) → (X → ∣ 𝑨 ∣) → hom (𝑻 X) 𝑨\nlift-hom 𝑨 h = free-lift 𝑨 h , λ f a → ≡.cong (f ̂ 𝑨) ≡.refl\n\n```\n\nFinally, we prove that the homomorphism is unique. 
This requires `funext 𝓥 α` (i.e., function extensionality at universe levels `𝓥` and `α`) which we postulate by making it part of the premise in the following function type definition.\n\n```open ≡-Reasoning\n\nfree-unique : swelldef 𝓥 α → (𝑨 : Algebra α 𝑆)(g h : hom (𝑻 X) 𝑨)\n→ (∀ x → ∣ g ∣ (ℊ x) ≡ ∣ h ∣ (ℊ x))\n→ ∀(t : Term X) → ∣ g ∣ t ≡ ∣ h ∣ t\n\nfree-unique _ _ _ _ p (ℊ x) = p x\n\nfree-unique wd 𝑨 g h p (node 𝑓 𝑡) =\n∣ g ∣ (node 𝑓 𝑡) ≡⟨ ∥ g ∥ 𝑓 𝑡 ⟩\n(𝑓 ̂ 𝑨)(∣ g ∣ ∘ 𝑡) ≡⟨ Goal ⟩\n(𝑓 ̂ 𝑨)(∣ h ∣ ∘ 𝑡) ≡⟨ (∥ h ∥ 𝑓 𝑡)⁻¹ ⟩\n∣ h ∣ (node 𝑓 𝑡) ∎\nwhere\nGoal : (𝑓 ̂ 𝑨) (λ x → ∣ g ∣ (𝑡 x)) ≡ (𝑓 ̂ 𝑨) (λ x → ∣ h ∣ (𝑡 x))\nGoal = wd (𝑓 ̂ 𝑨)(∣ g ∣ ∘ 𝑡)(∣ h ∣ ∘ 𝑡)(λ i → free-unique wd 𝑨 g h p (𝑡 i))\n\n```\n\nLet’s account for what we have proved thus far about the term algebra. If we postulate a type `X : Type χ` (representing an arbitrary collection of variable symbols) such that for each `𝑆`-algebra `𝑨` there is a map from `X` to the domain of `𝑨`, then it follows that for every `𝑆`-algebra `𝑨` there is a homomorphism from `𝑻 X` to `∣ 𝑨 ∣` that “agrees with the original map on `X`,” by which we mean that for all `x : X` the lift evaluated at `ℊ x` is equal to the original function evaluated at `x`.\n\nIf we further assume that each of the mappings from `X` to `∣ 𝑨 ∣` is surjective, then the homomorphisms constructed with `free-lift` and `lift-hom` are epimorphisms, as we now prove.\n\n```lift-of-epi-is-epi : (𝑨 : Algebra α 𝑆){h₀ : X → ∣ 𝑨 ∣}\n→ IsSurjective h₀ → IsSurjective ∣ lift-hom 𝑨 h₀ ∣\n\nlift-of-epi-is-epi 𝑨 {h₀} hE y = Goal\nwhere\nh₀⁻¹y = Inv h₀ (hE y)\n\nη : y ≡ ∣ lift-hom 𝑨 h₀ ∣ (ℊ h₀⁻¹y)\nη = (InvIsInverseʳ (hE y))⁻¹\n\nGoal : Image ∣ lift-hom 𝑨 h₀ ∣ ∋ y\nGoal = eq (ℊ h₀⁻¹y) η\n```\n\nThe `lift-hom` and `lift-of-epi-is-epi` types will be called to action when such epimorphisms are needed later (e.g., in the Base.Varieties module)."
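For readers who find the Agda notation dense, the free-lift idea above can be mirrored in any language: a term is either a generator (a variable) or an operation node, and the lift evaluates it by induction, reading the environment at generators and applying the algebra's operation to the lifted subterms. The sketch below is an illustrative, language-agnostic rendering only; none of the names in it come from the library itself.

```python
# Illustrative only: a Python rendering of the free-lift recursion described above.
from dataclasses import dataclass
from typing import Callable, Dict, List, Union

@dataclass
class Gen:
    var: str            # a generator / variable symbol

@dataclass
class Node:
    op: str             # an operation symbol of the signature
    args: List["Term"]  # subterms

Term = Union[Gen, Node]

def free_lift(interp: Dict[str, Callable], env: Dict[str, object], t: Term):
    """Base case: read the environment; inductive case: apply the algebra's
    operation to the images of the subterms."""
    if isinstance(t, Gen):
        return env[t.var]
    return interp[t.op]([free_lift(interp, env, a) for a in t.args])

# Example: lift the assignment {x -> 2, y -> 3} into the algebra (N, +).
interp = {"plus": lambda xs: xs[0] + xs[1]}
env = {"x": 2, "y": 3}
term = Node("plus", [Gen("x"), Node("plus", [Gen("x"), Gen("y")])])
print(free_lift(interp, env, term))  # 7
```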
] | [
null,
"https://ualib.org/css/lambda.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6957736,"math_prob":0.9336711,"size":4341,"snap":"2022-27-2022-33","text_gpt3_token_len":1663,"char_repetition_ratio":0.14779802,"word_repetition_ratio":0.05292172,"special_character_ratio":0.31029716,"punctuation_ratio":0.110491075,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99850905,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-02T15:11:55Z\",\"WARC-Record-ID\":\"<urn:uuid:74e7c0a2-1d44-42ed-89aa-4af845a31887>\",\"Content-Length\":\"49180\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f451bb57-fc33-4c16-9949-0b39724c4ef7>\",\"WARC-Concurrent-To\":\"<urn:uuid:33a3f592-e301-4d56-8133-796c7447925a>\",\"WARC-IP-Address\":\"172.67.205.5\",\"WARC-Target-URI\":\"https://ualib.org/Base.Terms.Properties.html\",\"WARC-Payload-Digest\":\"sha1:3ECG5SNT5EMQGZA6JVLKQ2PKLAWZ6MHI\",\"WARC-Block-Digest\":\"sha1:MUU3YAXHHTKBGJPCZ6M5C3PJJ47Q23QI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104141372.60_warc_CC-MAIN-20220702131941-20220702161941-00402.warc.gz\"}"} |
https://proceedings.neurips.cc/paper/2017/hash/0f3d014eead934bbdbacb62a01dc4831-Abstract.html | [
"#### Authors\n\nJeffrey Pennington, Pratik Worah\n\n#### Abstract\n\nNeural network configurations with random weights play an important role in the analysis of deep learning. They define the initial loss landscape and are closely related to kernel and random feature methods. Despite the fact that these networks are built out of random matrices, the vast and powerful machinery of random matrix theory has so far found limited success in studying them. A main obstacle in this direction is that neural networks are nonlinear, which prevents the straightforward utilization of many of the existing mathematical results. In this work, we open the door for direct applications of random matrix theory to deep learning by demonstrating that the pointwise nonlinearities typically applied in neural networks can be incorporated into a standard method of proof in random matrix theory known as the moments method. The test case for our study is the Gram matrix $Y^TY$, $Y=f(WX)$, where $W$ is a random weight matrix, $X$ is a random data matrix, and $f$ is a pointwise nonlinear activation function. We derive an explicit representation for the trace of the resolvent of this matrix, which defines its limiting spectral distribution. We apply these results to the computation of the asymptotic performance of single-layer random feature methods on a memorization task and to the analysis of the eigenvalues of the data covariance matrix as it propagates through a neural network. As a byproduct of our analysis, we identify an intriguing new class of activation functions with favorable properties."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90833324,"math_prob":0.98202926,"size":1631,"snap":"2020-45-2020-50","text_gpt3_token_len":315,"char_repetition_ratio":0.12784265,"word_repetition_ratio":0.0,"special_character_ratio":0.18270999,"punctuation_ratio":0.067857146,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98241675,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-24T21:27:46Z\",\"WARC-Record-ID\":\"<urn:uuid:f25a303c-e4f1-4c58-a278-a8f9a3cbdc01>\",\"Content-Length\":\"8296\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eb4944ad-d0d3-4c25-8a8b-ffa64ef9e176>\",\"WARC-Concurrent-To\":\"<urn:uuid:c29d1547-4787-464f-9e75-932b6d893ef8>\",\"WARC-IP-Address\":\"198.202.70.94\",\"WARC-Target-URI\":\"https://proceedings.neurips.cc/paper/2017/hash/0f3d014eead934bbdbacb62a01dc4831-Abstract.html\",\"WARC-Payload-Digest\":\"sha1:WIFQBRFUI5DQNJO44JPD632X5MDERW2T\",\"WARC-Block-Digest\":\"sha1:DRE75U4IIUL7KW7ZNSGX2U6QW2HQLVHI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141177566.10_warc_CC-MAIN-20201124195123-20201124225123-00221.warc.gz\"}"} |
https://www.javatpoint.com/java-for-loop | [
"# Loops in Java\n\nThe Java for loop is used to iterate a part of the program several times. If the number of iteration is fixed, it is recommended to use for loop.\n\nThere are three types of for loops in Java.",
null,
"• Simple for Loop\n• For-each or Enhanced for Loop\n• Labeled for Loop\n\n## Java Simple for Loop\n\nA simple for loop is the same as C/C++. We can initialize the variable, check condition and increment/decrement value. It consists of four parts:\n\n1. Initialization: It is the initial condition which is executed once when the loop starts. Here, we can initialize the variable, or we can use an already initialized variable. It is an optional condition.\n2. Condition: It is the second condition which is executed each time to test the condition of the loop. It continues execution until the condition is false. It must return boolean value either true or false. It is an optional condition.\n3. Increment/Decrement: It increments or decrements the variable value. It is an optional condition.\n4. Statement: The statement of the loop is executed each time until the second condition is false.\n\nSyntax:\n\nFlowchart:",
null,
"Example:\n\nForExample.java\n\nTest it Now\n\nOutput:\n\n```1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n```\n\n## Java Nested for Loop\n\nIf we have a for loop inside the another loop, it is known as nested for loop. The inner loop executes completely whenever outer loop executes.\n\nExample:\n\nNestedForExample.java\n\nOutput:\n\n```1 1\n1 2\n1 3\n2 1\n2 2\n2 3\n3 1\n3 2\n3 3\n```\n\nPyramid Example 1:\n\nPyramidExample.java\n\nOutput:\n\n```*\n* *\n* * *\n* * * *\n* * * * *\n```\n\nPyramid Example 2:\n\nPyramidExample2.java\n\nOutput:\n\n```* * * * * *\n* * * * *\n* * * *\n* * *\n* *\n*\n```\n\n## Java for-each Loop\n\nThe for-each loop is used to traverse array or collection in Java. It is easier to use than simple for loop because we don't need to increment value and use subscript notation.\n\nIt works on the basis of elements and not the index. It returns element one by one in the defined variable.\n\nSyntax:\n\nExample:\n\nForEachExample.java\n\nTest it Now\n\nOutput:\n\n```12\n23\n44\n56\n78\n```\n\n## Java Labeled For Loop\n\nWe can have a name of each Java for loop. To do so, we use label before the for loop. It is useful while using the nested for loop as we can break/continue specific for loop.\n\n#### Note: The break and continue keywords breaks or continues the innermost for loop respectively.\n\nSyntax:\n\nExample:\n\nLabeledForExample.java\n\nOutput:\n\n```1 1\n1 2\n1 3\n2 1\n```\n\nIf you use break bb;, it will break inner loop only which is the default behaviour of any loop.\n\nLabeledForExample2.java\n\nOutput:\n\n```1 1\n1 2\n1 3\n2 1\n3 1\n3 2\n3 3\n```\n\n## Java Infinitive for Loop\n\nIf you use two semicolons ;; in the for loop, it will be infinitive for loop.\n\nSyntax:\n\nExample:\n\nForExample.java\n\nOutput:\n\n```infinitive loop\ninfinitive loop\ninfinitive loop\ninfinitive loop\ninfinitive loop\nctrl+c\n```\n\nNow, you need to press ctrl+c to exit from the program.\n\n## Java for Loop vs while Loop vs do-while Loop\n\nComparison for loop while loop do-while loop\nIntroduction The Java for loop is a control flow statement that iterates a part of the programs multiple times. The Java while loop is a control flow statement that executes a part of the programs repeatedly on the basis of given boolean condition. The Java do while loop is a control flow statement that executes a part of the programs at least once and the further execution depends upon the given boolean condition.\nWhen to use If the number of iteration is fixed, it is recommended to use for loop. If the number of iteration is not fixed, it is recommended to use while loop. If the number of iteration is not fixed and you must have to execute the loop at least once, it is recommended to use the do-while loop.\nSyntax for(init;condition;incr/decr){\n// code to be executed\n}\nwhile(condition){\n//code to be executed\n}\ndo{\n//code to be executed\n}while(condition);\nExample //for loop\nfor(int i=1;i<=10;i++){\nSystem.out.println(i);\n}\n//while loop\nint i=1;\nwhile(i<=10){\nSystem.out.println(i);\ni++;\n}\n//do-while loop\nint i=1;\ndo{\nSystem.out.println(i);\ni++;\n}while(i<=10);\nSyntax for infinitive loop for(;;){\n//code to be executed\n}\nwhile(true){\n//code to be executed\n}\ndo{\n//code to be executed\n}while(true);\n\nNext TopicJava While Loop\n\n### Feedback",
null,
"",
null,
"",
null,
""
] | [
null,
"https://static.javatpoint.com/images/java-loops.png",
null,
"https://static.javatpoint.com/cpages/images/forloop.png",
null,
"https://www.javatpoint.com/images/facebook32.png",
null,
"https://www.javatpoint.com/images/twitter32.png",
null,
"https://www.javatpoint.com/images/pinterest32.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7221357,"math_prob":0.71671677,"size":2108,"snap":"2023-40-2023-50","text_gpt3_token_len":590,"char_repetition_ratio":0.18726236,"word_repetition_ratio":0.13513513,"special_character_ratio":0.2756167,"punctuation_ratio":0.12975392,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95465827,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,6,null,8,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T07:59:25Z\",\"WARC-Record-ID\":\"<urn:uuid:9f70a932-ea32-4731-9455-2af3d1d3629f>\",\"Content-Length\":\"46219\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:187a49f4-6419-41d1-a999-c5322c5ef457>\",\"WARC-Concurrent-To\":\"<urn:uuid:817312a8-a82b-4e57-85b1-ad4c9dbb6e92>\",\"WARC-IP-Address\":\"104.21.22.242\",\"WARC-Target-URI\":\"https://www.javatpoint.com/java-for-loop\",\"WARC-Payload-Digest\":\"sha1:WQHCEN2S7KIAEE4Z37STMTGFHFSPHEBO\",\"WARC-Block-Digest\":\"sha1:6E2DVM76UWHQWOUTOKGNJUQZISG5XHSK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510284.49_warc_CC-MAIN-20230927071345-20230927101345-00192.warc.gz\"}"} |
https://www.colorhexa.com/00d929 | [
"# #00d929 Color Information\n\nIn a RGB color space, hex #00d929 is composed of 0% red, 85.1% green and 16.1% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 0% magenta, 81.1% yellow and 14.9% black. It has a hue angle of 131.3 degrees, a saturation of 100% and a lightness of 42.5%. #00d929 color hex could be obtained by blending #00ff52 with #00b300. Closest websafe color is: #00cc33.\n\n• R 0\n• G 85\n• B 16\nRGB color chart\n• C 100\n• M 0\n• Y 81\n• K 15\nCMYK color chart\n\n#00d929 color description : Pure (or mostly pure) lime green.\n\n# #00d929 Color Conversion\n\nThe hexadecimal color #00d929 has RGB values of R:0, G:217, B:41 and CMYK values of C:1, M:0, Y:0.81, K:0.15. Its decimal value is 55593.\n\nHex triplet RGB Decimal 00d929 `#00d929` 0, 217, 41 `rgb(0,217,41)` 0, 85.1, 16.1 `rgb(0%,85.1%,16.1%)` 100, 0, 81, 15 131.3°, 100, 42.5 `hsl(131.3,100%,42.5%)` 131.3°, 100, 85.1 00cc33 `#00cc33`\nCIE-LAB 75.936, -75.015, 67.152 25.212, 49.783, 10.378 0.295, 0.583, 49.783 75.936, 100.68, 138.166 75.936, -71.339, 88.418 70.557, -59.693, 40.669 00000000, 11011001, 00101001\n\n# Color Schemes with #00d929\n\n• #00d929\n``#00d929` `rgb(0,217,41)``\n• #d900b0\n``#d900b0` `rgb(217,0,176)``\nComplementary Color\n• #44d900\n``#44d900` `rgb(68,217,0)``\n• #00d929\n``#00d929` `rgb(0,217,41)``\n• #00d996\n``#00d996` `rgb(0,217,150)``\nAnalogous Color\n• #d90044\n``#d90044` `rgb(217,0,68)``\n• #00d929\n``#00d929` `rgb(0,217,41)``\n• #9600d9\n``#9600d9` `rgb(150,0,217)``\nSplit Complementary Color\n• #d92900\n``#d92900` `rgb(217,41,0)``\n• #00d929\n``#00d929` `rgb(0,217,41)``\n• #2900d9\n``#2900d9` `rgb(41,0,217)``\n• #b0d900\n``#b0d900` `rgb(176,217,0)``\n• #00d929\n``#00d929` `rgb(0,217,41)``\n• #2900d9\n``#2900d9` `rgb(41,0,217)``\n• #d900b0\n``#d900b0` `rgb(217,0,176)``\n• #008d1b\n``#008d1b` `rgb(0,141,27)``\n• #00a61f\n``#00a61f` `rgb(0,166,31)``\n• #00c024\n``#00c024` `rgb(0,192,36)``\n• #00d929\n``#00d929` `rgb(0,217,41)``\n• #00f32e\n``#00f32e` `rgb(0,243,46)``\n• #0dff3b\n``#0dff3b` `rgb(13,255,59)``\n• #27ff4f\n``#27ff4f` `rgb(39,255,79)``\nMonochromatic Color\n\n# Alternatives to #00d929\n\nBelow, you can see some colors close to #00d929. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #0dd900\n``#0dd900` `rgb(13,217,0)``\n• #00d905\n``#00d905` `rgb(0,217,5)``\n• #00d917\n``#00d917` `rgb(0,217,23)``\n• #00d929\n``#00d929` `rgb(0,217,41)``\n• #00d93b\n``#00d93b` `rgb(0,217,59)``\n• #00d94d\n``#00d94d` `rgb(0,217,77)``\n• #00d95f\n``#00d95f` `rgb(0,217,95)``\nSimilar Colors\n\n# #00d929 Preview\n\nText with hexadecimal color #00d929\n\nThis text has a font color of #00d929.\n\n``<span style=\"color:#00d929;\">Text here</span>``\n#00d929 background color\n\nThis paragraph has a background color of #00d929.\n\n``<p style=\"background-color:#00d929;\">Content here</p>``\n#00d929 border color\n\nThis element has a border color of #00d929.\n\n``<div style=\"border:1px solid #00d929;\">Content here</div>``\nCSS codes\n``.text {color:#00d929;}``\n``.background {background-color:#00d929;}``\n``.border {border:1px solid #00d929;}``\n\n# Shades and Tints of #00d929\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000100 is the darkest color, while #edfff0 is the lightest one.\n\n• #000100\n``#000100` `rgb(0,1,0)``\n• #001504\n``#001504` `rgb(0,21,4)``\n• #002808\n``#002808` `rgb(0,40,8)``\n• #003c0b\n``#003c0b` `rgb(0,60,11)``\n• #00500f\n``#00500f` `rgb(0,80,15)``\n• #006313\n``#006313` `rgb(0,99,19)``\n• #007716\n``#007716` `rgb(0,119,22)``\n• #008b1a\n``#008b1a` `rgb(0,139,26)``\n• #009e1e\n``#009e1e` `rgb(0,158,30)``\n• #00b222\n``#00b222` `rgb(0,178,34)``\n• #00c525\n``#00c525` `rgb(0,197,37)``\n• #00d929\n``#00d929` `rgb(0,217,41)``\n• #00ed2d\n``#00ed2d` `rgb(0,237,45)``\n• #01ff31\n``#01ff31` `rgb(1,255,49)``\n• #15ff41\n``#15ff41` `rgb(21,255,65)``\n• #28ff51\n``#28ff51` `rgb(40,255,81)``\n• #3cff61\n``#3cff61` `rgb(60,255,97)``\n• #50ff71\n``#50ff71` `rgb(80,255,113)``\n• #63ff81\n``#63ff81` `rgb(99,255,129)``\n• #77ff91\n``#77ff91` `rgb(119,255,145)``\n• #8bffa1\n``#8bffa1` `rgb(139,255,161)``\n• #9effb0\n``#9effb0` `rgb(158,255,176)``\n• #b2ffc0\n``#b2ffc0` `rgb(178,255,192)``\n• #c5ffd0\n``#c5ffd0` `rgb(197,255,208)``\n• #d9ffe0\n``#d9ffe0` `rgb(217,255,224)``\n• #edfff0\n``#edfff0` `rgb(237,255,240)``\nTint Color Variation\n\n# Tones of #00d929\n\nA tone is produced by adding gray to any pure hue. In this case, #647567 is the less saturated color, while #00d929 is the most saturated one.\n\n• #647567\n``#647567` `rgb(100,117,103)``\n• #5c7d62\n``#5c7d62` `rgb(92,125,98)``\n• #53865d\n``#53865d` `rgb(83,134,93)``\n• #4b8e58\n``#4b8e58` `rgb(75,142,88)``\n• #439653\n``#439653` `rgb(67,150,83)``\n• #3a9f4d\n``#3a9f4d` `rgb(58,159,77)``\n• #32a748\n``#32a748` `rgb(50,167,72)``\n• #2aaf43\n``#2aaf43` `rgb(42,175,67)``\n• #21b83e\n``#21b83e` `rgb(33,184,62)``\n• #19c039\n``#19c039` `rgb(25,192,57)``\n• #11c833\n``#11c833` `rgb(17,200,51)``\n• #08d12e\n``#08d12e` `rgb(8,209,46)``\n• #00d929\n``#00d929` `rgb(0,217,41)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #00d929 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
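The shade, tint, and tone listings above can be reproduced approximately by interpolating the RGB channels toward black, white, or gray respectively. The following is a small illustrative sketch of that idea; the helper names are arbitrary and not tied to any colorhexa tooling.

```python
# Minimal sketch: derive a shade (toward black) and a tint (toward white)
# of #00d929 by linear interpolation of the RGB channels.
def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(rgb):
    return "#{:02x}{:02x}{:02x}".format(*rgb)

def mix(color, target, amount):
    """Interpolate each channel from color toward target by the given fraction."""
    return tuple(round(c + (t - c) * amount) for c, t in zip(color, target))

base = hex_to_rgb("#00d929")
shade = rgb_to_hex(mix(base, (0, 0, 0), 0.5))        # 50% toward black
tint = rgb_to_hex(mix(base, (255, 255, 255), 0.5))   # 50% toward white
tone = rgb_to_hex(mix(base, (128, 128, 128), 0.5))   # 50% toward gray
print(shade, tint, tone)
```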
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5329265,"math_prob":0.79265314,"size":3673,"snap":"2022-40-2023-06","text_gpt3_token_len":1630,"char_repetition_ratio":0.13545926,"word_repetition_ratio":0.007352941,"special_character_ratio":0.5649333,"punctuation_ratio":0.23172103,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98793894,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T05:37:35Z\",\"WARC-Record-ID\":\"<urn:uuid:cfb3d25b-665d-4032-9106-8d8e8f090c5c>\",\"Content-Length\":\"36112\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7afdad5a-b3ea-44fe-a9f5-d3d10f2dd953>\",\"WARC-Concurrent-To\":\"<urn:uuid:14bee64a-98d3-4a75-b3be-e09816db1e77>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/00d929\",\"WARC-Payload-Digest\":\"sha1:RFZYYOUYEVCFQ5DQCZQHNMI624EFQAVM\",\"WARC-Block-Digest\":\"sha1:KJSDXHDUCRAHMCCPVSUFUDIL4RYWTXR2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335304.71_warc_CC-MAIN-20220929034214-20220929064214-00374.warc.gz\"}"} |
https://www.slideserve.com/natala/slide-3-powerpoint-ppt-presentation | [
"# Slide 3 - PowerPoint PPT Presentation",
null,
"Download Presentation",
null,
"Slide 3\n\nSlide 3",
null,
"Download Presentation",
null,
"## Slide 3\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - E N D - - - - - - - - - - - - - - - - - - - - - - - - - - -\n##### Presentation Transcript\n\n1. Test Review Table of Contents Basic Number Problems Slide 3 Number and Money Problems Slide 33 Age and Digit Problems Slide 48 Mixture Problems Slide 75 Motion (D=RT) Problems Slide 90 Test Review Problems Slide 98\n\n2. Test Review\n\n3. Systems of Equations chapter 6 Word Problems: Money\n\n4. Ex #1 1 taco 1 milk Total \\$2.10 2 taco 3 milk Total \\$5.15 Find the cost of a taco Define variables:\n\n5. Ex #1 1 taco 1 milk Total \\$2.10 2 taco 3 milk Total \\$5.15 Find the cost of a taco Write two equations:\n\n6. Ex #1 Solve the system\n\n7. Ex #1 1 taco 1 milk Total \\$2.10 2 taco 3 milk Total \\$5.15 Find the cost of a taco\n\n8. #2 Four Oranges and five apples cost \\$3.56. Three oranges and four apples cost \\$2.76. Find the cost of an orange. Solve the system Define variables: Let r be the cost of an orange Let a be the cost of an Apple Write two equations E1 E2 An orange is \\$0.44\n\n9. Ex. #3 A jar of dimes and quarters contains \\$15.25. There are 103 coins in all. How many quarters are in the jar? Define variables:\n\n10. Ex. #3 A jar of dimes and quarters contains \\$15.25. There are 103 coins in all. How many quarters are in the jar? Define variables: E1 Write two equations E2\n\n11. Ex. #3 A jar of dimes and quarters contains \\$15.25. There are 103 coins in all. How many quarters are in the jar? Define variables:\n\n12. Ex. #4 Combined, Peyton and Eli have \\$106.75. Peyton has \\$43.75 more than Eli. How much money does Peyton have? Define variables: Write two equations E1 E2\n\n13. #1 At a football game, a popcorn and a soda purchased together costs \\$4.00. Three popcorns and five sodas would cost \\$16.50. What is the cost of a single soda? Let p be the cost of a popcorn Define variables Let s be the cost of a soda E1 Write two equations E2\n\n14. #1 At a football game, a popcorn and a soda purchased together costs \\$4.00. Three popcorns and five sodas would cost \\$16.50. What is the cost of a single soda? Let p be the cost of a popcorn Let s be the cost of a soda E1 E2\n\n15. #2 Four apples and five bananas cost \\$3.75. Six apples and two bananas cost \\$2.82. What is the cost of a single banana? Let a be the cost of 1 apple Define variables Let b be the cost of 1 banana E1 Write two equations E2\n\n16. #2 Four apples and five bananas cost \\$3.75. Six apples and two bananas cost \\$2.82. What is the cost of a single banana? Let a be the cost of 1 apple Let b be the cost of 1 banana\n\n17. #3 A vending machine takes only dimes and quarters. There are 113 coins in the machine totaling \\$17.60. How many quarters are in the machine? Let d be the number of dimes Define variables: Let q be the number of quarters E1 Write two equations E2\n\n18. #3 A vending machine takes only dimes and quarters. There are 113 coins in the machine totaling \\$17.60. How many quarters are in the machine? Let d be the number of dimes Let q be the number of quarters\n\n19. #4 There are 40 coins in Jenny’s coin purse – all dimes and nickels. All together it adds to \\$2.65. How many nickels are in Jenny’s purse? Let d be the number of dimes Define variables: Let n be the number of nickels Write two equations\n\n20. #4 There are 40 coins in Jenny’s coin purse – all dimes and nickels. All together it adds to \\$2.65. How many nickels are in Jenny’s purse? Let d be the number of dimes Let n be the number of nickels\n\n21. 
#5 Combined, Bart and Lisa have \\$62.75. Lisa has \\$13.75 more than Bart. How much money does Bart have? Let L be Lisa’s money Define variables: Let B be Bart’s money Write two equations\n\n22. #6 Otis has three times as much money as Milo. Together they have \\$60.84. How much money does each one of them have? Let t be Otis’ money Let m be Milo’s money\n\n23. Systems of Equations chapter 6 Basic Word Problems:\n\n24. Example 1 The sum of two numbers is 49. One number is 13 less than the other. Find the numbers. Define variables: Solve the system Let x be the larger number Let y be the smaller number Write two equations E1 E2\n\n25. Example 2 The difference between two numbers is 16. Three times the larger number is seven times the smaller. What are the numbers? Define variables: Let x be the larger number Let y be the smaller number Write two equations E1 E2\n\n26. Example 3 The sum of a number and twice another number is 13. The first number is 4 larger than the second number. What are the two numbers? Solve the system Define variables: Let x be the first number (larger) Let y be the second number Write two equations E1 E2\n\n27. #3 The sum of two numbers is 82. One number is 12 more than the other. Find the larger number. Define variables: Let L be the larger number Let S be the smaller number Write two equations E1 E2\n\n28. #3 The sum of two numbers is 82. One number is 12 more than the other. Find the larger number. Define variables: Let L be the larger number Let S be the smaller number Write two equations E1 E2\n\n29. #3 The sum of two numbers is 82. One number is 12 more than the other. Find the larger number. Define variables: Solve the system Let L be the larger number Let S be the smaller number Write two equations E1 E2\n\n30. #4 The difference between two numbers is 6. Ten times the smaller number is six times the larger. Find the numbers. Define variables: Solve the system Let L be the larger number Let S be the smaller number Write two equations E1 E2\n\n31. #5 The sum of a number and twice another number is 37. The first number is 10 larger than the second number. What are the two numbers? Define variables: Solve the system Let L be the larger number Let S be the smaller number Write two equations E1 E2\n\n32. #6 The product of 4 times the sum of a number and 3 is another number. If the sum of the numbers is 67, what is the smallest of the two numbers? Define variables: Solve the system Let x be one number Let y be the “other” number Write two equations E1 E2\n\n33. Systems of Equations chapter 6 More Word Problems:\n\n34. #1 Farmer Bob had 25 animals in the barn – all of them either cows or chickens. He counted 66 legs in all. How many cows are in the barn?\n\n35. #2 The price of a ticket for the AVHS basketball game is \\$2.75 for a student, but only \\$2.25 if you have a discount card. One ticket taker sold 59 tickets for \\$141.75. How many students didn’t use a discount card? Let x be the number of students w/o discount cards Let y be the number of students with discount cards\n\n36. #3 At Randy’s bike shop, they only work on bicycles and tricycles. When Randy disassembled all the bikes and trikes he ended up with 34 seats and 89 wheels. How many tricycles does he have in his shop?\n\n37. #4 Sydney took a math test that had 32 questions on it and scored 111 points. Each correct answer was awarded 5 points and for each wrong answer two points were deducted. How many questions did she miss on her test?\n\n38. 
#5 Will set a school record by scoring 30 points in his basketball game. What was amazing is that he scored all his points without a single free-throw. Out of the 13 baskets that he made, how many were 3-point shots?\n\n39. #6 Jackie’s coin purse had only dimes and quarters in it. There were 5 more dimes than quarters, and the total amount of money was \\$7.85. How many dimes were in the purse?\n\n40. A science test has 25 questions on it and is worth a total of 66 points. The true/false questions are worth 2 points each and the rest of the questions are worth 3 points each. How many true/false questions are on the test? #7 Let x be the number of T/F questions Let y be the number of “other” questions\n\n41. #8 At a movie theater, tickets cost \\$9.50 for adults and \\$6.50 for children. A group of 7 moviegoers pays a total of \\$54.50. How many adults are in the group? Let a be the number of adults Let c be the number of children\n\n42. #1 At the baseball game field level seats cost \\$9.50 each, while seats in the second deck cost \\$6.25. If a ticket seller sold 52 tickets and collected \\$425.75, how many second deck seats did she sell? Let f be the number of field level tickets. Let s be the number of 2nd deck tickets.\n\n43. #4 A History test has 40 questions on it and is worth a total of 174 points. The true/false questions are worth 3 points each and the rest of the questions are worth 5 points each. How many true/false questions are on the test?\n\n44. A jar contains quarters and dimes. There are 15 more quarters than dimes. The total amount of money in the jar is \\$23. How many quarters are in the jar? #6\n\n45. #7 At the coffee shop, two bagels and three muffins cost \\$12.45. Three bagels and five muffins cost \\$20.00. What is the cost of a single bagel? Let b be the cost of a bagel Let m be the cost of a muffin E1 E2\n\n46. #10 The sum of two integers is 35 and the difference between the same two integers is 81. What is the smaller integer? Let L be the larger number Let S be the smaller number E1 E2\n\n47. #9- Bonus Haley was going to be paid to unpack a box of 125 delicate crystal ornaments. She would be paid 75 cents for each ornament unpacked, but would be charged \\$2.50 for any that she broke. After finishing the job she was paid \\$74.25. How many ornaments did she break? Let x be the number of ornaments unpacked successfully. Let y be the number of ornaments broken\n\n48. Systems of Equations chapter 6 • Basic Word Problems: • Age • Number-Digit\n\n49. #1 The sum of the digits of a two-digit number is 10. When the digits are reversed, the new number is 54 more than the original number. What is the original number? Let t be the tens digit of the original number Let u be the units (ones) digit of the original number Original number New number E1: E2:\n\n50. #2 The sum of the digits of a two-digit number is 7. When the digits are reversed, the new number is 45 less than the original number. What is the original number? Let t be the tens digit of the original number Let u be the units (ones) digit of the original number Original number New number E1: E2:"
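Each of the word problems in this transcript reduces to a small linear system once the variables and two equations are defined. For readers who want to check the worked slides numerically, the snippet below solves Ex #1 from the beginning of the deck (one taco plus one milk costs $2.10; two tacos and three milks cost $5.15) with a generic linear-algebra routine; it is an illustration added here, not part of the original presentation.

```python
# Solve the 2x2 system from Ex #1: t + m = 2.10 and 2t + 3m = 5.15
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 3.0]])
b = np.array([2.10, 5.15])
taco, milk = np.linalg.solve(A, b)
print(f"taco: ${taco:.2f}, milk: ${milk:.2f}")  # taco: $1.15, milk: $0.95
```

Any of the later money, digit, or age problems can be set up and solved the same way by changing the coefficient matrix and the right-hand side.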
] | [
null,
"https://www.slideserve.com/img/replay.png",
null,
"https://thumbs.slideserve.com/1_4993090.jpg",
null,
"https://www.slideserve.com/photo/34031.jpeg",
null,
"https://www.slideserve.com/img/output_cBjjdt.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92556846,"math_prob":0.99450207,"size":9566,"snap":"2020-45-2020-50","text_gpt3_token_len":2467,"char_repetition_ratio":0.19044133,"word_repetition_ratio":0.43552703,"special_character_ratio":0.27022788,"punctuation_ratio":0.0988806,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9996753,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,3,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-04T23:50:13Z\",\"WARC-Record-ID\":\"<urn:uuid:74bfbe98-81d8-4de0-8cb9-6d82db802fa7>\",\"Content-Length\":\"98757\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:04cc4684-ce3c-4184-b427-ca2cbdf575b4>\",\"WARC-Concurrent-To\":\"<urn:uuid:3b1b8898-a867-40cc-b029-e2ee08f74afb>\",\"WARC-IP-Address\":\"44.235.246.14\",\"WARC-Target-URI\":\"https://www.slideserve.com/natala/slide-3-powerpoint-ppt-presentation\",\"WARC-Payload-Digest\":\"sha1:SA6C7C62YKHHUJE7UD5QNEIBE5PICCIP\",\"WARC-Block-Digest\":\"sha1:CWU3FNKSADXHSJOKK4WQLIHCCVXGKNAI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141745780.85_warc_CC-MAIN-20201204223450-20201205013450-00284.warc.gz\"}"} |
http://getwiki.net/-stochastic_differential_equation | [
"# GetWiki\n\n### stochastic differential equation\n\nARTICLE SUBJECTS\nnews →\nunix →\nwiki →\nARTICLE TYPES\nfeed →\nhelp →\nwiki →\nARTICLE ORIGINS",
null,
"stochastic differential equation\n[ temporary import ]\nplease note:\n- the content below is remote from Wikipedia\n- it has been imported raw for GetWiki\n{{Short description|differential equations involving stochastic processes}}{{more footnotes|date=July 2013}}{{Differential equations}}A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs are used to model various phenomena such as unstable stock prices or physical systems subject to thermal fluctuations. Typically, SDEs contain a variable which represents random white noise calculated as the derivative of Brownian motion or the Wiener process. However, other types of random behaviour are possible, such as jump processes.\n\n## Background\n\nEarly work on SDEs was done to describe Brownian motion in Einstein's famous paper, and at the same time by Smoluchowski. However, one of the earlier works related to Brownian motion is credited to Bachelier (1900) in his thesis 'Theory of Speculation'. This work was followed upon by Langevin. Later Itô and Stratonovich put SDEs on more solid mathematical footing.\n\n### Terminology\n\nThe most common form of SDEs in the literature is an ordinary differential equation with the right hand side perturbed by a term dependent on a white noise variable. In most cases, SDEs are understood as continuous time limit of the corresponding stochastic difference equations. This understanding of SDEs is ambiguous and must be complemented by an \"interpretation\". The most famous interpretations are provided by Itô and Stratonovich calculi, with the former being most frequently used in mathematics and quantitative finance. An alternative view on SDEs is the stochastic flow of diffeomorphisms. This understanding is unambiguous and corresponds to the Stratonovich version of the continuous time limit of stochastic difference equations. Associated with SDEs is the Smoluchowski equation or the Fokker–Planck equation, an equation describing the time evolution of probability distribution functions. The generalization of the Fokker–Planck evolution to temporal evolution of differential forms is provided by the concept of stochastic evolution operator.In physical science, there is an ambiguity in the usage of the term \"Langevin SDEs\". While Langevin SDEs can be of a more general form, this term typically refers to a narrow class of SDEs with gradient flow vector fields. This class of SDEs is particularly popular because it is a starting point of the Parisi–Sourlas stochastic quantization procedure,JOURNAL, Parisi, G., Sourlas, N., 1979, Random Magnetic Fields, Supersymmetry, and Negative Dimensions, Physical Review Letters, 43, 11, 744–745, 10.1103/PhysRevLett.43.744, 1979PhRvL..43..744P, leading to a N=2 supersymmetric model closely related to supersymmetric quantum mechanics. From the physical point of view, however, this class of SDEs is not very interesting because it never exhibits spontaneous breakdown of topological supersymmetry, i.e., (overdamped) Langevin SDEs are never chaotic.\n\n### Stochastic calculus\n\nBrownian motion or the Wiener process was discovered to be exceptionally complex mathematically. The Wiener process is almost surely nowhere differentiable; thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Itô stochastic calculus and the Stratonovich stochastic calculus. 
### Numerical solutions

The numerical solution of stochastic differential equations, and especially of stochastic partial differential equations, is a relatively young field. Almost all algorithms that are used for the solution of ordinary differential equations will work very poorly for SDEs, having very poor numerical convergence. A textbook describing many different algorithms is Kloeden & Platen (1995). Methods include the Euler–Maruyama method, the Milstein method and the Runge–Kutta method (SDE); a minimal illustrative sketch of the Euler–Maruyama update appears below.

## Use in physics

In physics, SDEs have the widest applicability, ranging from molecular dynamics to neurodynamics and to the dynamics of astrophysical objects. More specifically, SDEs describe all dynamical systems in which quantum effects are either unimportant or can be taken into account as perturbations. SDEs can be viewed as a generalization of dynamical systems theory to models with noise. This is an important generalization because real systems cannot be completely isolated from their environments and for this reason always experience external stochastic influence.

There are standard techniques for transforming higher-order equations into several coupled first-order equations by introducing new unknowns. Therefore, the following is the most general class of SDEs:

$$\frac{dx(t)}{dt} = F(x(t)) + \sum_{\alpha=1}^{n} g_\alpha(x(t))\,\xi^\alpha(t),$$

where x ∈ X is the position of the system in its phase (or state) space X, assumed to be a differentiable manifold, F ∈ TX is a flow vector field representing the deterministic law of evolution, and the g_α ∈ TX are a set of vector fields that define the coupling of the system to Gaussian white noise ξ^α. If X is a linear space and the g_α are constants, the system is said to be subject to additive noise; otherwise it is said to be subject to multiplicative noise. This term is somewhat misleading, as it has come to mean the general case even though it appears to imply the limited case in which g(x) ∝ x.

For a fixed configuration of noise, an SDE has a unique solution differentiable with respect to the initial condition (Slavík A (2013) Generalized differential equations: differentiability of solutions with respect to initial conditions and parameters. J Math Anal Appl 402(1):261–274, doi:10.1016/j.jmaa.2013.01.027). The nontriviality of the stochastic case shows up when one tries to average various objects of interest over noise configurations. In this sense, an SDE is not a uniquely defined entity when noise is multiplicative and when the SDE is understood as a continuous-time limit of a stochastic difference equation. In this case, the SDE must be complemented by what is known as an "interpretation of the SDE", such as the Itô or the Stratonovich interpretation. Nevertheless, when an SDE is viewed as a continuous-time stochastic flow of diffeomorphisms, it is a uniquely defined mathematical object that corresponds to the Stratonovich approach to the continuous-time limit of a stochastic difference equation.
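The following is a minimal illustrative sketch of the Euler–Maruyama scheme mentioned under "Numerical solutions", applied to a one-dimensional Itô SDE. The particular drift and diffusion functions are arbitrary example choices, not taken from the article.

```python
# Sketch: simulate dX_t = mu(X_t) dt + sigma(X_t) dB_t with Euler-Maruyama.
import numpy as np

def euler_maruyama(mu, sigma, x0, t_max, n_steps, rng):
    dt = t_max / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))            # Brownian increment
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * dB
    return x

rng = np.random.default_rng(42)
path = euler_maruyama(mu=lambda x: -1.0 * x,         # linear restoring drift
                      sigma=lambda x: 0.3,            # additive noise
                      x0=1.0, t_max=5.0, n_steps=500, rng=rng)
print(path[-1])
```

Higher-order schemes such as Milstein refine the diffusion term of this update, at the cost of requiring the derivative of the diffusion coefficient.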
In physics, the main method of solution is to find the probability distribution function as a function of time using the equivalent Fokker–Planck equation (FPE). The Fokker–Planck equation is a deterministic partial differential equation. It tells how the probability distribution function evolves in time, similarly to how the Schrödinger equation gives the time evolution of the quantum wave function or the diffusion equation gives the time evolution of chemical concentration. Alternatively, numerical solutions can be obtained by Monte Carlo simulation. Other techniques include path integration, which draws on the analogy between statistical physics and quantum mechanics (for example, the Fokker–Planck equation can be transformed into the Schrödinger equation by rescaling a few variables), and writing down ordinary differential equations for the statistical moments of the probability distribution function.

## Use in probability and mathematical finance

The notation used in probability theory (and in many applications of probability theory, for instance mathematical finance) is slightly different. This notation makes the exotic nature of the random function of time η_m in the physics formulation more explicit. It is also the notation used in publications on numerical methods for solving stochastic differential equations. In strict mathematical terms, η_m cannot be chosen as an ordinary function, but only as a generalized function. The mathematical formulation treats this complication with less ambiguity than the physics formulation.

A typical equation is of the form

$$\mathrm{d}X_t = \mu(X_t,t)\,\mathrm{d}t + \sigma(X_t,t)\,\mathrm{d}B_t,$$

where B denotes a Wiener process (standard Brownian motion). This equation should be interpreted as an informal way of expressing the corresponding integral equation

$$X_{t+s} - X_t = \int_t^{t+s} \mu(X_u,u)\,\mathrm{d}u + \int_t^{t+s} \sigma(X_u,u)\,\mathrm{d}B_u.$$

The equation above characterizes the behavior of the continuous-time stochastic process X_t as the sum of an ordinary Lebesgue integral and an Itô integral. A heuristic (but very helpful) interpretation of the stochastic differential equation is that in a small time interval of length δ the stochastic process X_t changes its value by an amount that is normally distributed with expectation μ(X_t, t) δ and variance σ(X_t, t)² δ, and is independent of the past behavior of the process. This is so because the increments of a Wiener process are independent and normally distributed. The function μ is referred to as the drift coefficient, while σ is called the diffusion coefficient. The stochastic process X_t is called a diffusion process, and satisfies the Markov property.

The formal interpretation of an SDE is given in terms of what constitutes a solution to the SDE. There are two main definitions of a solution to an SDE, a strong solution and a weak solution. Both require the existence of a process X_t that solves the integral equation version of the SDE. The difference between the two lies in the underlying probability space (Ω, F, P). A weak solution consists of a probability space and a process that satisfies the integral equation, while a strong solution is a process that satisfies the equation and is defined on a given probability space.

An important example is the equation for geometric Brownian motion

$$\mathrm{d}X_t = \mu X_t\,\mathrm{d}t + \sigma X_t\,\mathrm{d}B_t,$$

which is the equation for the dynamics of the price of a stock in the Black–Scholes options pricing model of financial mathematics.
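As a worked addition (a standard fact not spelled out in the original text): for constant μ and σ, applying Itô's lemma to log X_t solves this equation in closed form,

$$X_t = X_0 \exp\!\left(\left(\mu - \frac{\sigma^2}{2}\right) t + \sigma B_t\right),$$

which is the lognormal price process underlying the Black–Scholes formula.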
A weak solution consists of a probability space and a process that satisfies the integral equation, while a strong solution is a process that satisfies the equation and is defined on a given probability space. An important example is the equation for geometric Brownian motion\n\mathrm{d}X_t = \mu X_t\,\mathrm{d}t + \sigma X_t\,\mathrm{d}B_t,\nwhich is the equation for the dynamics of the price of a stock in the Black–Scholes options pricing model of financial mathematics. There are also more general stochastic differential equations where the coefficients μ and σ depend not only on the present value of the process X_t, but also on previous values of the process and possibly on present or previous values of other processes too. In that case the solution process, X, is not a Markov process, and it is called an Itô process and not a diffusion process. When the coefficients depend only on present and past values of X, the defining equation is called a stochastic delay differential equation.\n\n## Existence and uniqueness of solutions\n\nAs with deterministic ordinary and partial differential equations, it is important to know whether a given SDE has a solution, and whether or not it is unique. The following is a typical existence and uniqueness theorem for Itô SDEs taking values in n-dimensional Euclidean space R^n and driven by an m-dimensional Brownian motion B; the proof may be found in Øksendal (2003, §5.2). Let T > 0, and let\n\mu : \mathbb{R}^n \times [0, T] \to \mathbb{R}^n, \qquad \sigma : \mathbb{R}^n \times [0, T] \to \mathbb{R}^{n \times m}\nbe measurable functions for which there exist constants C and D such that\n|\mu(x, t)| + |\sigma(x, t)| \leq C(1 + |x|), \qquad |\mu(x, t) - \mu(y, t)| + |\sigma(x, t) - \sigma(y, t)| \leq D|x - y|\nfor all t ∈ [0, T] and all x and y ∈ R^n, where\n|\sigma|^2 = \sum_{i, j = 1}^{n} |\sigma_{ij}|^2.\nLet Z be a random variable that is independent of the σ-algebra generated by B_s, s ≥ 0, and with finite second moment:\n\mathbb{E}[|Z|^2] < +\infty.\nThen the stochastic differential equation/initial value problem\n\mathrm{d}X_t = \mu(X_t, t)\,\mathrm{d}t + \sigma(X_t, t)\,\mathrm{d}B_t \text{ for } t \in [0, T], \qquad X_0 = Z,\nhas a P-almost surely unique t-continuous solution (t, ω) ↦ X_t(ω) such that X is adapted to the filtration F_t^Z generated by Z and B_s, s ≤ t, and\n\mathbb{E}\left[\int_0^T |X_t|^2\,\mathrm{d}t\right] < +\infty.\n\n## Some explicitly solvable SDEs (Kloeden & Platen 1995, p. 118)\n\n### Linear SDE: general case\n\ndX_t = (a(t)X_t + c(t))\,dt + (b(t)X_t + d(t))\,dW_t\nhas the solution\nX_t = \Phi_{t,t_0}\left(X_{t_0} + \int_{t_0}^t \Phi^{-1}_{s,t_0}(c(s) - b(s)d(s))\,ds + \int_{t_0}^t \Phi^{-1}_{s,t_0} d(s)\,dW_s\right)\nwhere\n\Phi_{t,t_0} = \exp\left(\int_{t_0}^t \left(a(s) - \frac{b^2(s)}{2}\right)ds + \int_{t_0}^t b(s)\,dW_s\right).\n\n### Reducible SDEs: Case 1\n\ndX_t = \frac{1}{2} f(X_t) f'(X_t)\,dt + f(X_t)\,dW_t\nfor a given differentiable function f is equivalent to the Stratonovich SDE\ndX_t = f(X_t) \circ dW_t,\nwhich has the general solution\nX_t = h^{-1}(W_t + h(X_0))\nwhere\nh(x) = \int^{x} \frac{ds}{f(s)}.\n\n### Reducible SDEs: Case 2\n\ndX_t = \left(\alpha f(X_t) + \frac{1}{2} f(X_t) f'(X_t)\right)dt + f(X_t)\,dW_t\nfor a given differentiable function f is equivalent to the Stratonovich SDE\ndX_t = \alpha f(X_t)\,dt + f(X_t) \circ dW_t,\nwhich is reducible to\ndY_t = \alpha\,dt + dW_t,\nwhere Y_t = h(X_t) and h is defined as before. Its general solution is\nX_t = h^{-1}(\alpha t + W_t + h(X_0)).\n\n## SDEs and supersymmetry\n\nIn the supersymmetric theory of SDEs, stochastic dynamics is defined via a stochastic evolution operator
acting on the differential forms on the phase space of the model. In this exact formulation of stochastic dynamics, all SDEs possess a topological supersymmetry which represents the preservation of the continuity of the phase space by continuous time flow. The spontaneous breakdown of this supersymmetry is the mathematical essence of the ubiquitous dynamical phenomenon known across disciplines as chaos, turbulence, self-organized criticality, etc., and the Goldstone theorem explains the associated long-range dynamical behavior, i.e., the butterfly effect, 1/f and crackling noises, and the scale-free statistics of earthquakes, neuroavalanches, solar flares, etc. The theory also offers a resolution of the Itô–Stratonovich dilemma in favor of the Stratonovich approach.\n\n## Further reading\n\n• Adomian, George. Stochastic Systems. Mathematics in Science and Engineering (169). Academic Press, Orlando, FL, 1983.\n• Adomian, George. Nonlinear Stochastic Operator Equations. Academic Press, Orlando, FL, 1986.\n• Adomian, George. Nonlinear Stochastic Systems Theory and Applications to Physics. Mathematics and its Applications (46). Kluwer Academic Publishers Group, Dordrecht, 1989.\n• Øksendal, Bernt K. Stochastic Differential Equations: An Introduction with Applications. Springer, Berlin, 2003. ISBN 3-540-04758-1.\n• Teugels, J. and Sund, B. (eds.). Encyclopedia of Actuarial Science. Wiley, Chichester, 2004, pp. 523–527.\n• Gardiner, C. W. Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences. Springer, 2004.\n• Mikosch, Thomas. Elementary Stochastic Calculus: with Finance in View. World Scientific Publishing, Singapore, 1998. ISBN 981-02-3543-7.\n• Kadry, Seifedine. A Solution of Linear Stochastic Differential Equation. WSEAS Transactions on Mathematics, USA, April 2007. ISSN 1109-2769.\n• Kloeden, P. E. and Platen, E. Numerical Solution of Stochastic Differential Equations. Springer, 1995. ISBN 0-387-54062-8.\n• Higham, Desmond J. An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations. SIAM Review 43 (3), 525–546, 2001. doi:10.1137/S0036144500378302.\n\n- content above as imported from Wikipedia"
] | [
null,
"http://getwiki.net/images/gw-banner.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86377966,"math_prob":0.98281497,"size":16935,"snap":"2019-35-2019-39","text_gpt3_token_len":4194,"char_repetition_ratio":0.15037505,"word_repetition_ratio":0.042030133,"special_character_ratio":0.2289932,"punctuation_ratio":0.12230216,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9975952,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-19T22:58:20Z\",\"WARC-Record-ID\":\"<urn:uuid:6ce2ef4a-ecad-4164-86b9-896bd30261c2>\",\"Content-Length\":\"38840\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6442fee-2d28-48dd-a715-4e69f08ca11c>\",\"WARC-Concurrent-To\":\"<urn:uuid:6521d6cc-8ee9-43ff-9d02-236ac0f84169>\",\"WARC-IP-Address\":\"96.127.142.114\",\"WARC-Target-URI\":\"http://getwiki.net/-stochastic_differential_equation\",\"WARC-Payload-Digest\":\"sha1:I3W346WTAEPTXV3FBDZQ7ZJWCIQJQVIJ\",\"WARC-Block-Digest\":\"sha1:JHQRCYN3R5U4NUMJNCC2WJVPXRUD6PQQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027315132.71_warc_CC-MAIN-20190819221806-20190820003806-00156.warc.gz\"}"} |
https://corporatefinanceinstitute.com/resources/accounting/depreciated-cost/ | [
"# Depreciated Cost\n\nThe useful value remaining after the asset's been depreciated for a certain period\n\nOver 1.8 million professionals use CFI to learn accounting, financial analysis, modeling and more. Start with a free account to explore 20+ always-free courses and hundreds of finance templates and cheat sheets\n\n## What is Depreciated Cost?\n\nDepreciated cost is the remaining cost of an asset after reducing the asset’s original cost by the accumulated depreciation. Understanding the concept of a depreciation schedule and the depreciated cost is important for both accounting and valuation purposes.",
null,
"### Summary\n\n• The depreciated cost of an asset is the purchase price less the total depreciation taken to date.\n• The depreciated cost equals the net book value if the asset is not written off for impairment.\n• The depreciated cost of an asset is determined by the depreciation method applied.\n\n### Depreciation and Depreciated Cost\n\nIn accounting, depreciation is an accounting process of reducing the cost of a physical asset over the asset’s useful life to mirror its wear and tear. It can be applied to tangible assets, of which the values decrease as they are used up. Buildings, vehicles, computers, equipment, and computers are some other examples of depreciable assets.\n\nThe fixed tangible assets typically come with a high purchase cost and a long life expectancy. Expensing the costs fully to a single accounting period doesn’t portray the benefits of usage over time accurately. Thus, the IFRS and the GAAP allow companies to allocate the costs over several periods through depreciation.\n\nThe depreciated cost of an asset is the value that remained after the asset’s been depreciated over a period of time. It will be equal to the net book value or the carrying value of an asset if there is no impairment or other write-offs on that asset. At the end of its useful life, an asset’s depreciated cost will be equal to its salvage value.\n\nThe depreciated cost can be used as an asset valuation tool to determine the useful value of an asset at a specific point in time. It can be compared with the market value to examine whether there is an impairment to the asset. If an asset is sold, the depreciated cost can be compared with the sales price to report a gain or loss from the sale.\n\n### Calculation of Depreciated Cost\n\nThe depreciated cost of an asset can be calculated by deducting the acquisition cost of the asset by the accumulated depreciation. The formula is shown below:",
null,
"The acquisition cost refers to the overall cost of purchasing an asset, which includes the purchase price, the shipping cost, sales taxes, installation fees, testing fees, and other acquisition costs.\n\nAccumulated depreciation is the summation of the depreciation expense taken on the assets over time. It is a contra-asset account and is displayed together with the asset on the balance sheet.\n\n### Depreciated Cost and Depreciation Expense\n\nAlthough the two terms look similar, depreciated cost and depreciation expense come with very different meanings and should not be confused with one another. The depreciation expense refers to the value depreciated during a certain period. It is reported in the income statement for that period.\n\nThe accumulated depreciation is equal to the sum of the incurred depreciation expenses. The depreciated cost can also be calculated by deducting the sum of depreciation expenses from the acquisition cost.",
null,
"For example, a manufacturing company purchased a machine at the beginning of 2017. The purchase price of the machine was \\$100,000, and the company paid another \\$10,000 for shipment and installation. The overall acquisition cost was \\$110,000 (\\$100,000 + \\$10,000).\n\nIf the machine’s life expectancy is 20 years and its salvage value is \\$15,000, in the straight-line depreciation method, the depreciation expense is \\$4,750 [(\\$110,000 – \\$15,000) / 20].\n\nThus, at the end of 2019, the accumulated depreciation is \\$14,250 (\\$4,750 * 3), and the depreciated cost is \\$95,750 (\\$110,000 – \\$14,250).\n\nAt the end of the useful life of the asset, the accumulated depreciation will be \\$95,000 (\\$4,750 * 20). The depreciated cost will be \\$15,000 (\\$110,000 – \\$95,000), equal to the salvage value.\n\n### Depreciation Schedules\n\nThe depreciated cost of an asset can be determined by a depreciation schedule that a company applies to the asset. There are several allowable methods of depreciation, which will lead to different rates of depreciation, as well as different depreciation expenses for each period. Thus, the depreciated cost balance will also differ under different depreciation methods.\n\n#### 1. Straight-Line Method\n\nThe most common depreciation method is the straight-line method, which is used in the example above. The cost available for depreciation is equally allocated over the asset’s life span. As the depreciation expense is constant for each period, the depreciated cost decreases at a constant rate under the straight-line depreciation method.\n\n#### 2. Declining-Balance Method\n\nThe declining-balance method is an accelerated depreciation method. An asset is depreciated faster with higher depreciation expenses in the earlier years, compared with the straight-line method.\n\nThus, the depreciated cost decreases faster at first and slows down later. The double declining-balance depreciation is a commonly used type of declining-balance method.",
null,
"#### 3. Sum-of-the-Year’s-Digits (SYD) Method\n\nSimilar to the declining-balance method, the sum-of-the-year’s method also accelerates the depreciation of an asset. The asset will lose more of its book value during the early periods of its lifespan.\n\n#### 4. Units-of-Production Method\n\nThe units-of-production method depreciates equipment based on its usage versus the equipment’s expected capacity. The more units produced by the equipment, the greater amount the equipment is depreciated, and the lower the depreciated cost is."
] | [
null,
"https://corporatefinanceinstitute.com/resources/accounting/depreciated-cost/",
null,
"https://corporatefinanceinstitute.com/resources/accounting/depreciated-cost/",
null,
"https://corporatefinanceinstitute.com/resources/accounting/depreciated-cost/",
null,
"https://corporatefinanceinstitute.com/resources/accounting/depreciated-cost/",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9258763,"math_prob":0.92522043,"size":6591,"snap":"2023-40-2023-50","text_gpt3_token_len":1332,"char_repetition_ratio":0.2076818,"word_repetition_ratio":0.03036239,"special_character_ratio":0.20588681,"punctuation_ratio":0.10158201,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9936866,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T04:26:07Z\",\"WARC-Record-ID\":\"<urn:uuid:b7a8f49a-a949-4ea5-a638-4bde5a86f411>\",\"Content-Length\":\"481654\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:11633853-d6f7-4453-847d-dd76e0aaa9b9>\",\"WARC-Concurrent-To\":\"<urn:uuid:1392e46e-1000-4641-9df9-0334c4f43406>\",\"WARC-IP-Address\":\"104.22.28.212\",\"WARC-Target-URI\":\"https://corporatefinanceinstitute.com/resources/accounting/depreciated-cost/\",\"WARC-Payload-Digest\":\"sha1:BGSXD6ZAGNHMKQ66AU7FYKKZ7CMKI6YS\",\"WARC-Block-Digest\":\"sha1:5KGKNLCYU4LKY4HSIYVNP3I6NVPWV2UR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100056.38_warc_CC-MAIN-20231129041834-20231129071834-00043.warc.gz\"}"} |
https://ro.scribd.com/document/210104756/Chapter-2 | [
"Sunteți pe pagina 1din 60\n\n# Chapter 2: Statistical Measures\n\n## Chapter 2: Statistical Measures\n\nKeith E. Emmert\nDepartment of Mathematics Tarleton State University\n\nJune 7, 2011\n\n## Some Basic Denitions\n\nThe population is the entire group of objects or individuals under study, about which information is wanted. A unit is an individual object or person in the population. The units are often called subjects if the population consists of people. A sample is a part of the population that is actually used to get information. A variable is a characteristic of interest to be measured for each unit in the sample. The size of the population is denoted by the capital letter N . The size of the sample is denoted by the small letter n.\n\n## Chapter 2: Statistical Measures Parameters and Statistics\n\nExample\nPopulation, Unit, Sample, Size\n\nSample\n\n## Some Basic Denitions\n\nA parameter is a numerical value that would be calculated using all of the values of the units in the population. A statistic is a numerical value that is calculated using all of the values of the units in a sample.\n\n## Chapter 2: Statistical Measures Parameters and Statistics\n\nExample\nParameter or Statistic?\n\nAccording to the Campus Housing Fact Sheet at a Big-Ten University, 60% of the students living in campus housing are in-state residents. In a sample of 200 students living in campus housing, 56.5% were found to be in-state residents. Circle your answer.\n1\n\nIn this particular situation, the value of 60% is a (parameter, statistic). In this particular situation, the value of 56.5% is a (parameter, statistic).\n\n## Chapter 2: Statistical Measures Parameters and Statistics\n\nDenitions\n\nA unit is the item or object we observe. When the object is a person, we refer to the unit as a subject. An observation is the information or characteristic recorded for each unit. A characteristic that can vary from unit to unit is called a variable. A collection of observations on one or more variables is called a data set.\n\n## Chapter 2: Statistical Measures Parameters and Statistics\n\nDenitions\n\nQualitative variables are those which classify the units into categories. The categories may or may not have a natural ordering to them. Qualitative variables are also called categorical variables. Quantitative variables have numerical values that are measurements (length, weight, and so on) or counts (of how many). Arithmetic operations on such numerical values do have meaning.\nA discrete variable can only take on a nite (or countable) number of values. A continuous variable can take on any value in an interval (or collection of intervals).\n\n## Chapter 2: Statistical Measures Parameters and Statistics\n\nExample\nUnit, Observation, Variables\n\nComposer Ludwig Van Beethoven Nikolai Karlovich Medtner Jacques Oenbach Identify the following: Unit Obervation\n\nSiblings 6 5 6\n\n## Chapter 2: Statistical Measures Parameters and Statistics\n\nExample\nWhat Type of Random Variable is Weight?\n\nPackages are brought to a mailing center and weighed. Their results are recorded. Is weight discrete or continuous? Packages are brought to a mailing center and weighed. Their weights are recorded to the nearest pound. Is weight discrete or continuous? Packages under 5 pounds are classied as light, those weighing between 5 and 20 pounds are classied as medium and those over 20 pounds are classied as heavy. We record the variable weight, which takes on the values light, medium, or heavy. 
Now the variable weight is qualitative. Random variables are determined by their context in experiments, not by general categories. It is important to ask many questions about the data and how they were obtained.\n\n## Chapter 2: Statistical Measures Central Tendency of a Data Set\n\nMeasures of Center\nWhat single number would best represent the most typical age for the 20 subjects?\n\nSubject, Gender, Age: 1 M 45; 2 M 41; 3 F 51; 4 F 46; 5 F 47; 6 F 42; 7 M 43; 8 F 50; 9 M 39; 10 M 32; 11 M 41; 12 F 44; 13 F 47; 14 F 49; 15 M 45; 16 F 42; 17 M 41; 18 F 40; 19 M 45; 20 M 37\n\nMeasures of center are numerical values that tend to report (in some sense) the middle of the data. We shall focus on two such measures: the mean and the median.\n\nMeasures of Center: Mean\n\nThe mean of a set of n observations is the sum of the observations divided by the number of observations, n. If the observations are a sample of a larger group, then we denote the mean by x̄ (pronounced x-bar). If the observations are the entire group, i.e. the entire population, then we denote the mean by the Greek letter μ. Math Trip: If x1, x2, . . . , xn denote the observations, then the mean is calculated by (x1 + x2 + · · · + xn)/n. Note the parentheses in the numerator...if you forget these in your calculator, things will go horribly wrong!\n\nExample\nMeasures of Center: Mean\n\nConsider the data in the table above. Find the mean of the ages of the male subjects. x̄ = (45 + 41 + 43 + 39 + 32 + 41 + 45 + 41 + 45 + 37)/10 = 409/10 = 40.9\n\nExample\nMeasures of Center: Mean\n\nSuppose that the number of children in a simple random sample of 10 households is as follows: 2, 3, 0, 2, 1, 0, 3, 0, 1, 4.\n1. Calculate the sample mean number of children per household. 2. Interpret your answer. 3. Suppose that the observation for the last household in the above list was incorrectly recorded as 40 instead of 4. What would happen to the mean?\n\nSolution\nMeasures of Center: Mean\n\nCalculate the sample mean number of children per household: x̄ = (2+3+0+2+1+0+3+0+1+4)/10 = 16/10 = 1.6. Interpret your answer: note that 1.6 is not rounded up to, say, 2. We are reporting a value that we would expect on average, over many samples of 10 households. Suppose that the observation for the last household in the above list was incorrectly recorded as 40 instead of 4. What would happen to the mean? x̄ = (2+3+0+2+1+0+3+0+1+40)/10 = 52/10 = 5.2.\n\nThus we say the mean is sensitive to extreme observations. Most graphical displays would detect this...always graph your data!\n\nLet's Do It!\nMeasures of Center: Mean\n\nSuppose a sample of size n = 10 observations is observed. Can x̄ be larger than the maximum value or less than the minimum value? If yes, give an example. Can x̄ be the minimum value? If yes, give an example. Can x̄ be the maximum value? If yes, give an example. Can x̄ be exactly the midpoint between the minimum and maximum value (when the minimum does not equal the maximum)? If yes, give an example. 
Can x̄ be exactly the second smallest value (out of the 10, not all equal observations, when they are ordered from smallest to largest)? If yes, give an example. Can x̄ be not equal to any value in the sample? If yes, give an example.\n\nLet's Do It!\nA Mean Is Not Always Representative\n\nKim's biology test scores are 7, 98, 25, 19, and 26. Calculate Kim's mean test score. Explain why the mean does not do a very good job at summarizing Kim's test scores.\n\nLet's Do It!\nCombining Means\n\nWe have seven students. The mean score for three of these students is 54 and the mean score for the four other students is 76. What is the mean score for all seven students?\n\nThe Mean as an Equilibrium Point\n\nThe mean is the point of equilibrium, the point where the distribution would balance.\n\n[Dot plots: a symmetric distribution of 1, 2, 3 with Mean = 2; moving the largest observation out to 7 shifts the mean to 3.]\n\nIf the distribution is symmetric, as in the first picture at the left, the mean would be exactly at the center of the distribution. As the largest observation is moved further to the right, making this observation somewhat extreme, the mean shifts towards the extreme observation. If a distribution appears to be skewed, we may wish also to report a more resistant measure of center.\n\nFrequency Tables\n\nSometimes data is grouped into classes. This is called a frequency table. The data represent the number of miles run during one week for a sample of 20 runners: 7, 13, 15, 18, 18, 20, 22, 22, 24, 24, 25, 26, 27, 28, 29, 33, 34, 35, 37, 40. This can be grouped into the following frequency table (based upon given classes).\n\nClass: 5.5-10.5, 10.5-15.5, 15.5-20.5, 20.5-25.5, 25.5-30.5, 30.5-35.5, 35.5-40.5\nFrequency: 1, 2, 3, 5, 4, 3, 2\n\n## The Mean of Grouped Data/Frequency Tables\n\nUnfortunately, if the original data is not available, then finding the mean becomes a bit more interesting. Assume that all observations in a given class, a-b, are at the midpoint, xm = (a + b)/2.\n\nClass 5.5-10.5: f = 1, xm = (5.5 + 10.5)/2 = 8, f·xm = 8\nClass 10.5-15.5: f = 2, xm = (10.5 + 15.5)/2 = 13, f·xm = 26\nClass 15.5-20.5: f = 3, xm = 18, f·xm = 54\nClass 20.5-25.5: f = 5, xm = 23, f·xm = 115\nClass 25.5-30.5: f = 4, xm = 28, f·xm = 112\nClass 30.5-35.5: f = 3, xm = 33, f·xm = 99\nClass 35.5-40.5: f = 2, xm = 38, f·xm = 76\nTotals: n = 20, Σ f·xm = 490\n\nx̄ = Σ f·xm / n = 490/20 = 24.5 miles\n\nLet's Do It!\nThe Mean of Grouped Data/Frequency Tables\n\nEighty randomly selected light bulbs were tested to determine their lifetime in hours. The frequency table of the results is shown below. Find the average lifetime of a light bulb.\n\nClass: 53-63, 64-74, 75-85, 86-96, 97-107, 108-118\nFrequency, f: 6, 12, 25, 18, 14, 5\n(Fill in n, the midpoints xm, the products f·xm, Σ f·xm, and x̄ = Σ f·xm / n.)\n\nLet's Do It!\nThe Mean of Grouped Data/Frequency Tables\n\nThe cost per load (in cents) of 35 laundry detergents tested by a consumer organization is given below.\n\nClass: 13-19, 20-26, 27-33, 34-40, 41-47, 48-54, 55-61, 62-68\nFrequency, f: 2, 7, 12, 5, 6, 1, 0, (last frequency not shown)\n(Fill in the midpoints xm, the products f·xm, Σ f·xm, and the mean x̄ = Σ f·xm / n.)\n\nMeasures of Center: Median\n\nA measure of center that is more resistant to extreme values is the median.
The median, M, of a set of n observations, ordered from smallest to largest, is a value such that half of the observations are less than or equal to that value and half the observations are greater than or equal to that value. If the number of observations is odd, the median is the middle observation. If the number of observations is even, the median is any number between the two middle observations, including either of the two middle observations. To be consistent, we will define the median as the mean or average of the two middle observations.\n\nExample\nMedian\n\nFind the median, M, of the ages of the following 8 subjects: 30, 37, 39, 40, 41, 42, 43, 44. The two middle observations are 40 and 41, so M = (40 + 41)/2 = 40.5.\n\nLet's Do It!\nMedian\n\nThe number of children in each of 10 households is shown below: 2, 3, 0, 1, 4, 0, 3, 0, 1, 2. Median, M = ___. What happens to the median if the fifth observation in the first list was incorrectly recorded as 40 instead of 4? What happens to the median if the third observation in the first list was incorrectly recorded as -20 instead of 0? The median is resistant; that is, it does not change, or changes very little, in response to extreme observations.\n\nMeasures of Center: Mode\n\nThe mode of a set of observations is the most frequently occurring value; it is the value having the highest frequency among the observations. The mode of the values {0, 0, 0, 0, 1, 1, 2, 2, 3, 4} is 0. For {0, 0, 0, 1, 1, 2, 2, 2, 3, 4} there are two modes, 0 and 2 (bimodal). The mode for {0, 1, 2, 4, 5, 8} is none! The mode is not often used as a measure of center for quantitative data. The mode can be computed for qualitative (non-numeric) data.\n\nMeasures of Center: Different Measures Can Give Different Impressions\n\nConsider the annual incomes of five families in a neighborhood: \\$12,000, \\$12,000, \\$30,000, \\$90,000, \\$100,000. Calculate the average income. Calculate the median income. Calculate the modal income. If you were trying to promote that this is an affluent neighborhood, which measure might you prefer to present? If you were trying to argue against a tax increase, which measure might you prefer to present? If you want to represent these values with the income that is in the middle, which measure might you prefer to present?\n\nMeasures of Center: Shapes of Distributions\n\n[Figure: sketches of distribution shapes. Symmetric: Mean = Median = Mode. Bimodal: the two modes flank Mean = Median. Left skewed: Mean < Median < Mode. Right skewed: Mode < Median < Mean.]\n\nHomework\n\n## Measures of Variation or Spread\n\nConsider the following data sets. List 1: 55, 56, 57, 58, 59, 60, 60, 60, 61, 62, 63, 64, 65. List 2: 35, 40, 45, 50, 55, 60, 60, 60, 65, 70, 75, 80, 85.\n\n[Dot plots of List 1 and List 2 on a common axis from 35 to 85.]\n\nRange\n\nRange is just the difference between the largest value and the smallest value. Consider the data sets above. Range of List 1: 65 - 55 = 10. Range of List 2: 85 - 35 = 50. 
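As an illustrative aside (not part of the original slides), the following Java sketch computes the measures of center defined above (mean and median) together with the range for List 1 and List 2; the class and method names are made up for this sketch. Both lists turn out to have the same mean and median (60), while their ranges (10 versus 50) reflect the difference in spread discussed next.

```
import java.util.Arrays;

/** Illustrative sketch (assumed names): mean, median, and range for the two lists on this slide. */
public class CenterAndSpread {

    static double mean(double[] x) {
        double sum = 0;
        for (double v : x) sum += v;
        return sum / x.length;                      // (x1 + ... + xn) / n
    }

    static double median(double[] x) {
        double[] s = x.clone();
        Arrays.sort(s);                             // order from smallest to largest
        int n = s.length;
        return (n % 2 == 1) ? s[n / 2]              // odd n: middle observation
                            : (s[n / 2 - 1] + s[n / 2]) / 2.0;  // even n: average of the two middle ones
    }

    static double range(double[] x) {
        double[] s = x.clone();
        Arrays.sort(s);
        return s[s.length - 1] - s[0];              // largest value minus smallest value
    }

    public static void main(String[] args) {
        double[] list1 = {55, 56, 57, 58, 59, 60, 60, 60, 61, 62, 63, 64, 65};
        double[] list2 = {35, 40, 45, 50, 55, 60, 60, 60, 65, 70, 75, 80, 85};
        System.out.printf("List 1: mean=%.1f median=%.1f range=%.1f%n",
                mean(list1), median(list1), range(list1));   // 60.0, 60.0, 10.0
        System.out.printf("List 2: mean=%.1f median=%.1f range=%.1f%n",
                mean(list2), median(list2), range(list2));   // 60.0, 60.0, 50.0
    }
}
```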
Clearly, List 2 is spread out more than List 1.\n\nProblems with Range\n\nConsider the following data sets.\n\n[Dot plots of List 1 and List 2 on a common axis from 35 to 85.]\n\nBoth lists have ranges of 50. Obviously, List 1 has more data concentrated in the middle. List 2 has more data concentrated on the ends.\n\nQuartiles\n\nThe three values that divide the data into four parts are called the quartiles, represented by Q1, Q2 = M = Median, and Q3. Finding the quartiles: Find the median of all of the observations. First Quartile = Q1 = median of the observations that fall below the median. Third Quartile = Q3 = median of the observations that fall above the median.\n\nQuartiles\n\nSome things to remember: When the number of observations is odd, the middle observation is the median. This observation is not included in either of the two halves when computing Q1 and Q3. Although different books, calculators, and computers may use slightly different ways to compute the quartiles, they are all based on the same idea. In a left-skewed distribution, the first quartile will be farther from the median than the third quartile. If the distribution is symmetric, the quartiles should be the same distance from the median. In a right-skewed distribution, the third quartile will be farther from the median than the first quartile.\n\n## Chapter 2: Statistical Measures Variation or Spread of a Data Set\n\nExample\nQuartiles\n\nFind the quartiles of the ages of the following 8 subjects: 30, 37, 39, 40, 41, 42, 43, 44.\nQ1 = (37 + 39)/2 = 38\nM = Q2 = (40 + 41)/2 = 40.5\nQ3 = (42 + 43)/2 = 42.5\n[Data with markers: 30 37 | Q1 | 39 40 | M | 41 42 | Q3 | 43 44]\n\nInterquartile Range\n\nThe interquartile range measures the spread of the middle 50% of the data and is defined to be IQR = Q3 - Q1. Find the interquartile range of the ages of the following 8 subjects: 30, 37, 39, 40, 41, 42, 43, 44.\n\nInterquartile Range\n\nThe pth percentile is the value such that p% of the observations fall at or below that value and (100 - p)% of the observations fall at or above that value. The first quartile Q1 is the 25th percentile since 25% of the data fall below and 75% of the data fall above. The second quartile Q2 = M (the median) is the 50th percentile since 50% of the data fall below and 50% of the data fall above. The third quartile Q3 is the 75th percentile since 75% of the data fall below and 25% of the data fall above.\n\nFive Number Summary\n\nOne well-used measure of variation is the five number summary, defined to be the Minimum, Q1, Median, Q3, and Maximum of the data set. Find the five number summary of the ages of the following 8 subjects: 30, 37, 39, 40, 41, 42, 43, 44. Solution: Min = 30, Q1 = 38, M = 40.5, Q3 = 42.5, Max = 44.\n\nBoxplots\n\nA boxplot is a graphical representation of the five number summary of a data set. List the data values in order from smallest to largest. Find the five number summary: Minimum, Q1, Median, Q3, and Maximum. Q1 and Q3 determine the ends of the box, and a line is drawn inside the box to mark the value of the Median. Draw lines (called whiskers) from the midpoints of the ends of the box out to the Minimum and Maximum.\n\n[Sketch: a box from Q1 to Q3 with the Median marked inside, and whiskers out to the Min and Max.]\n\nExample\nBoxplots\n\nRecall: Min = 30, Q1 = 38, M = 40.5, Q3 = 42.5, Max = 44.\n[Boxplot of the 8 ages: box from Q1 = 38 to Q3 = 42.5 with the median line at 40.5, whiskers to Min = 30 and Max = 44, plotted on an age axis from 30 to 45.]
1.5 IQR Rule to Identify Outliers and Build Modified Boxplots\n\nFind the five number summary. Draw the box part of the boxplot using Q1, M, and Q3. Find the Interquartile Range, IQR = Q3 - Q1. Compute the quantity STEP = 1.5 IQR. Find the location of the inner fences.\nLower Inner Fence = Q1 - STEP, Upper Inner Fence = Q3 + STEP.\n\nDraw whiskers from the midpoints of the ends of the box to the smallest and largest values within the inner fences. These whiskers end with small vertical lines. All of the observations that fall outside the inner fences are potential outliers and are plotted with solid dots.\n\nExample\nModified Boxplots\n\nConstruct a modified boxplot for the ages of the following 8 subjects: 30, 37, 39, 40, 41, 42, 43, 44. Recall: Min = 30, Q1 = 38, M = 40.5, Q3 = 42.5, Max = 44. Note: IQR = Q3 - Q1 = 42.5 - 38 = 4.5 and STEP = 1.5 IQR = 1.5(4.5) = 6.75. Lower Fence: Q1 - STEP = 38 - 6.75 = 31.25. Upper Fence: Q3 + STEP = 42.5 + 6.75 = 49.25.\n[Modified boxplot: box from Q1 = 38 to Q3 = 42.5 with M = 40.5; the whiskers stay within the fences, so Min = 30, which falls below the lower fence of 31.25, is plotted separately as a potential outlier; age axis from 30 to 50.]\n\nExample\nSide-by-Side Boxplots\n\nSide-by-side boxplots are helpful for comparing two or more distributions with respect to the five-number summary.\n\nAlthough the median of the first process is closer to the target value of 20.000 cm, the second process produces a less variable distribution.\n\nLet's Do It!\nModified Boxplots\n\nVariable = age for 23 children randomly assigned to one of two treatment groups. Amoxicillin 8 9 9 10 10 11 11 12 14 14 17 9 10 10 11 12 13 14 Cefadroxil 7 8 9 9 (a) Give the five-number summary for each of the two treatment groups. Comment on your results. (b) Make side-by-side boxplots for the antibiotic study data in part (a). (c) Using our rule of thumb, are there any outliers for the Amoxicillin group? If so, modify your boxplot above. (d) Using our rule of thumb, are there any outliers for the Cefadroxil group? If so, modify your boxplot above.\n\nLet's Do It!\nModified Boxplots\n\nFor each of the following modified boxplots, report the corresponding five-number summary and list the values for all outliers (if any).\n(a) [modified boxplot on an axis from 0 to 100] Min ___, Q1 ___, M ___, Q3 ___, Max ___, Outliers ___\n(b) [modified boxplot on an axis from 0 to 100] Min ___, Q1 ___, M ___, Q3 ___, Max ___, Outliers ___\n(c) [modified boxplot on an axis from 0 to 100] Min ___, Q1 ___, M ___, Q3 ___, Max ___, Outliers ___\n\nStandard Deviation: The Idea\n\nStandard deviation is a measure of the spread of the observations from the mean. Think of the standard deviation as roughly an average (or standard) distance of the observations from their mean. If all of the observations are the same, then the standard deviation will be 0 (i.e. no spread). Otherwise the standard deviation is positive and the more spread out the observations are about their mean, the larger the value of the standard deviation.\n\nStandard Deviation: The Idea\n\nSuppose you make three observations: 0, 5, 7. Then, the sample mean is x̄ = (0 + 5 + 7)/3 = 4. 
[Number line: the observations 0, 5, 7 with x̄ = 4 marked; the deviations from the mean are -4, 1, and 3.]\n\nProblem: The average of the deviations is zero! (-4 + 1 + 3)/3 = 0/3 = 0. (That's boring!) It turns out that the average of the deviations from the mean will always be zero...so we need a little trick.\n\nStandard Deviation: The Idea\n\nSuppose you make three observations: 0, 5, 7. Then, the sample mean is x̄ = 4. Solution: Use the squared deviations from the mean. Deviations from the mean: -4, 1, 3. Squared deviations: 16, 1, 9.\n\nThe average, which is called the sample variance, is (16 + 1 + 9)/(3 - 1) = 26/2 = 13. The sample standard deviation is √13 ≈ 3.60555.\n\nStandard Deviation: The Idea\n\nNotes: When calculating the sample variance in the previous example, (16 + 1 + 9)/(3 - 1) = 26/2 = 13, we subtract one in the denominator...this is because we estimated the mean and hence have used up some information...if you want more information, then take advanced statistics courses. When calculating (sample or population) standard deviation, we square all of the numbers and then add them...so the variance is measured in squared units...so we take a square root to return to the original units. Just as the mean is not a resistant measure of center, since the standard deviation uses the mean in its definition, it is not a resistant measure of spread. It is heavily influenced by extreme values.\n\nStandard Deviation: The Math\n\nLet x1, x2, . . . , xn denote n observations.\nThe sample variance is denoted by s^2 = Σ_{i=1}^n (x_i - x̄)^2 / (n - 1). The sample standard deviation is s = √(s^2).\nSuppose μ is the population mean. The population variance is denoted by σ^2 = Σ_{i=1}^n (x_i - μ)^2 / n. The population standard deviation is σ = √(σ^2). Note that when dealing with the population variance or standard deviation, we do not divide by n - 1 since we have not estimated the mean...the population mean can be calculated exactly.\n\nStandard Deviation: The Math - Shortcut Formulas for Sample Variance or Sample Standard Deviation\n\nSome shortcut formulas are presented for calculating the sample variance and sample standard deviation. Let x1, x2, . . . , xn denote a sample of n observations. Then,\ns^2 = [ Σ_{i=1}^n x_i^2 - (Σ_{i=1}^n x_i)^2 / n ] / (n - 1) and s = √( [ Σ_{i=1}^n x_i^2 - (Σ_{i=1}^n x_i)^2 / n ] / (n - 1) ).\n\nStandard Deviation: The Math - Shortcut Formulas for Population Variance or Population Standard Deviation\n\nSome shortcut formulas are presented for calculating the population variance and population standard deviation. Let x1, x2, . . . , xn denote all n observations in a population. Then,\nVariance: σ^2 = [ Σ_{i=1}^n x_i^2 - (Σ_{i=1}^n x_i)^2 / n ] / n. Standard Deviation: σ = √(σ^2) = √( [ Σ_{i=1}^n x_i^2 - (Σ_{i=1}^n x_i)^2 / n ] / n ).\n\nExample\nStandard Deviation\n\nIn a recent study of the effect of a certain diet on weight reduction, 11 subjects were put on the diet for two weeks and their weight loss/gain in lbs was measured (positive values indicate weight loss). 1, 1, 2, 2, 3, 2, 1, 1, 3, 2.5, 23. 
What is the standard deviation of the weight loss? First compute the sums Σ_{i=1}^{11} x_i and Σ_{i=1}^{11} x_i^2.\n\nExample\nStandard Deviation (Continued)\n\nΣ_{i=1}^{11} x_i = 4.5 and Σ_{i=1}^{11} x_i^2 = 569.25.\nThe sample variance is s^2 = [ Σ x_i^2 - (Σ x_i)^2 / n ] / (n - 1) = 38.516. The sample standard deviation is s = √(s^2) = √38.516 ≈ 6.20613. So, our answer is s = 6.2 lbs.\n\nLet's Do It!\nStandard Deviation\n\nThe following are the ages of a sample of 20 patients seen in the emergency room of a hospital on a Friday night: 35 37 32 53 21 45 43 23 39 64 60 10 36 34 12 22 54 36 45 55\n\n## Variance and Standard Deviation of Grouped Data/Frequency Tables\n\nUnfortunately, if the original data is not available, then finding the sample standard deviation becomes a bit more interesting. Assume that all observations in a given class, a-b, are at the midpoint, xm = (a + b)/2.\n\nClass 5.5-10.5: f = 1, xm = 8, f·xm = 8, f·xm^2 = 64\nClass 10.5-15.5: f = 2, xm = 13, f·xm = 26, f·xm^2 = 338\nClass 15.5-20.5: f = 3, xm = 18, f·xm = 54, f·xm^2 = 972\nClass 20.5-25.5: f = 5, xm = 23, f·xm = 115, f·xm^2 = 2645\nClass 25.5-30.5: f = 4, xm = 28, f·xm = 112, f·xm^2 = 3136\nClass 30.5-35.5: f = 3, xm = 33, f·xm = 99, f·xm^2 = 3267\nClass 35.5-40.5: f = 2, xm = 38, f·xm = 76, f·xm^2 = 2888\nTotals: n = 20, Σ f·xm = 490, Σ f·xm^2 = 13,310\n\n## Variance and Standard Deviation of Grouped Data/Frequency Tables\n\nThe Formulas\n\nWe found n = 20, Σ f·xm = 490, and Σ f·xm^2 = 13,310. The formula for the sample variance of grouped data is\ns^2 = [ Σ f·xm^2 - (Σ f·xm)^2 / n ] / (n - 1) = [ 13,310 - (490)^2/20 ] / (20 - 1) = 68.68.\nThe formula for the sample standard deviation of grouped data is s = √(s^2) = √68.68 ≈ 8.28734. So, our final answer is s = 8.3.\n\n## Sample Standard Deviation of Grouped Data\n\nThe data show the distribution of the birth weight (in oz.) of 100 consecutive deliveries. Find the variance and the standard deviation.\n\nClass: 29.50-69.45, 69.50-89.45, 89.50-99.45, 99.50-109.45, 109.50-119.45, 119.50-129.45, 129.50-139.45, 139.50-169.45\nFrequency, f: 5, 10, 11, 19, 17, 20, 20, 6\n(Fill in the midpoints xm, the products f·xm and f·xm^2, and their sums.)\n\nHomework\n\nHW page 37: 2, 3, 9, 13"
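As a final illustrative aside (not part of the original slides), this Java sketch pulls together the quartile, five-number-summary, 1.5 IQR fence, and sample-standard-deviation procedures from this chapter, using the convention defined above (quartiles are medians of the lower and upper halves, excluding the overall median when n is odd). It reproduces the eight-subject age example (Min = 30, Q1 = 38, M = 40.5, Q3 = 42.5, Max = 44, fences at 31.25 and 49.25) and computes the standard deviation asked for in the emergency-room exercise; all class and method names are made up.

```
import java.util.Arrays;

/** Illustrative sketch (assumed names) of the Chapter 2 spread measures. */
public class SpreadMeasures {

    static double median(double[] sorted, int from, int to) {   // median of sorted[from..to)
        int n = to - from;
        int mid = from + n / 2;
        return (n % 2 == 1) ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0;
    }

    /** Five-number summary {Min, Q1, M, Q3, Max}; the middle observation is excluded
        from both halves when n is odd, as the slides specify. */
    static double[] fiveNumberSummary(double[] x) {
        double[] s = x.clone();
        Arrays.sort(s);
        int n = s.length;
        double m  = median(s, 0, n);
        double q1 = median(s, 0, n / 2);
        double q3 = median(s, (n + 1) / 2, n);
        return new double[]{s[0], q1, m, q3, s[n - 1]};
    }

    /** Sample standard deviation via the shortcut formula s^2 = (Σx² - (Σx)²/n)/(n-1). */
    static double sampleStdDev(double[] x) {
        double sum = 0, sumSq = 0;
        for (double v : x) { sum += v; sumSq += v * v; }
        int n = x.length;
        return Math.sqrt((sumSq - sum * sum / n) / (n - 1));
    }

    public static void main(String[] args) {
        double[] ages8 = {30, 37, 39, 40, 41, 42, 43, 44};
        double[] f = fiveNumberSummary(ages8);
        double iqr = f[3] - f[1], step = 1.5 * iqr;
        System.out.println("Five-number summary: " + Arrays.toString(f)); // Min=30, Q1=38, M=40.5, Q3=42.5, Max=44
        System.out.println("IQR = " + iqr + ", fences = " + (f[1] - step) + " and " + (f[3] + step));
        // 30 lies below the lower fence (31.25), so it is flagged as a potential outlier.

        double[] erAges = {35, 37, 32, 53, 21, 45, 43, 23, 39, 64,
                           60, 10, 36, 34, 12, 22, 54, 36, 45, 55};
        System.out.printf("ER ages: s = %.2f%n", sampleStdDev(erAges));   // about 15.14
    }
}
```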
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88578993,"math_prob":0.9900436,"size":19543,"snap":"2019-51-2020-05","text_gpt3_token_len":5635,"char_repetition_ratio":0.14954706,"word_repetition_ratio":0.1629139,"special_character_ratio":0.3060431,"punctuation_ratio":0.13478462,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.996553,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-20T18:28:20Z\",\"WARC-Record-ID\":\"<urn:uuid:234799d0-4e18-42e2-aff4-f366de2c9132>\",\"Content-Length\":\"398331\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:14f7d531-64bf-446b-91b8-55f46b2c4d31>\",\"WARC-Concurrent-To\":\"<urn:uuid:da1bc893-a646-4b96-bc96-417fb07700cc>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://ro.scribd.com/document/210104756/Chapter-2\",\"WARC-Payload-Digest\":\"sha1:EKL3KIZR3QXFCISA54NUFR6K5CD2LAA3\",\"WARC-Block-Digest\":\"sha1:AUDEVTKZ7ZFGDNKSTK4YQYAYP77PW7RX\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250599718.13_warc_CC-MAIN-20200120165335-20200120194335-00085.warc.gz\"}"} |
https://oeis.org/A343234 | [
"The OEIS is supported by the many generous donors to the OEIS Foundation.",
null,
"Year-end appeal: Please make a donation to the OEIS Foundation to support ongoing development and maintenance of the OEIS. We are now in our 60th year, we have over 367,000 sequences, and we’ve reached 11,000 citations (which often say “discovered thanks to the OEIS”). Other ways to Give\n Hints (Greetings from The On-Line Encyclopedia of Integer Sequences!)\n A343234 Triangle T read by rows: lower triangular Riordan matrix of the Toeplitz type with first column A067687. 2\n 1, 1, 1, 2, 1, 1, 5, 2, 1, 1, 12, 5, 2, 1, 1, 29, 12, 5, 2, 1, 1, 69, 29, 12, 5, 2, 1, 1, 165, 69, 29, 12, 5, 2, 1, 1, 393, 165, 69, 29, 12, 5, 2, 1, 1, 937, 393, 165, 69, 29, 12, 5, 2, 1, 1, 2233, 937, 393, 165, 69, 29, 12, 5, 2, 1, 1 (list; table; graph; refs; listen; history; text; internal format)\n OFFSET 0,4 COMMENTS This infinite lower triangular Riordan matrix T is the so-called L-eigen-matrix of the infinite lower triangular Riordan matrix A027293 (but with offset 0 for rows and columns). Such eigentriangles have been considered by Paul Barry in the paper given as a link in A186020. This means that E is the L-eigen-matrix of an infinite lower triangular matrix M if M*E = L*(E - I), with the unit matrix I and the matrix L with elements L(i, j) = delta_{i, j-1} (Kronecker's delta-symbol; first upper diagonal with 1's). Therefore, this notion is analogous to calling sequence S an L-eigen-sequence of matrix M if M*vec(S) = L.vec(S) (or vec(S) is an eigensequence of M - L with eigenvalue 0), used by Bernstein and Sloane, see the links in A155002. L*(E - I) is the E matrix after elimination of the main diagonal and then the first row, and starting with offset 0. Because for infinite lower triangular matrices L^{tr}.L = I (tr stands for transposed), this leads to M = L*(I - E^{-1}) or E = (I - L^{tr}*M)^{-1}. Note that Gary W. Adamson uses a different notion: E is the eigentriangle of a triangle T if the columns of E are the columns j of T multiplied by the sequence elements B_j of B with o.g.f. x/(1 - x*G(x)), with the o.g.f. G(x) of column nr. 1 of T. Or E(i, j) = T(i, j)*B(j). In short: sequence B is the L-eigen-sequence of the infinite lower triangular matrix T (but with offset 1): T*vec(B) = L.vec(B). See, e.g., A143866. Thanks to Gary W. Adamson for motivating my occupation with such eigentriangles and eigensequences. The first column of the present triangle T is A067687, which is then shifted downwards (Riordan of Toeplitz type). LINKS Table of n, a(n) for n=0..65. FORMULA Matrix elements: T(n, m) = A067687(n-m), for n >= m >= 0, and 0 otherwise. O.g.f. of row polynomials R(n,x) = Sum_{m=0..n} T(n, m)*x^m is G(z, x) = 1/((1 - z*P(z))*(1 - x*z)), with the o.g.f. P of A000041 (number of partitions). O.g.f. column m: G_m(x) = x^m/(1 - x*P(x)), for m >= 0. EXAMPLE The triangle T begins: n \\ m 0 1 2 3 4 5 6 7 8 9 ... ----------------------------------------- 0: 1 1: 1 1 2: 2 1 1 3: 5 2 1 1 4: 12 5 2 1 1 5: 29 12 5 2 1 1 6: 69 29 12 5 2 1 1 7: 165 69 29 12 5 2 1 1 8: 393 165 69 29 12 5 2 1 1 9: 937 393 165 69 29 12 5 2 1 1 ... CROSSREFS Cf. A000041, A027293, A067687, A143866, A155002, A186020. Sequence in context: A120292 A179318 A162470 * A174004 A128604 A098885 Adjacent sequences: A343231 A343232 A343233 * A343235 A343236 A343237 KEYWORD nonn,easy,tabl AUTHOR Wolfdieter Lang, Apr 16 2021 STATUS approved\n\nLookup | Welcome | Wiki | Register | Music | Plot 2 | Demos | Index | Browse | More | WebCam\nContribute new seq. 
"
] | [
null,
"https://oeis.org/banner2021.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.733689,"math_prob":0.9851486,"size":2781,"snap":"2023-40-2023-50","text_gpt3_token_len":1074,"char_repetition_ratio":0.15484336,"word_repetition_ratio":0.08741259,"special_character_ratio":0.4512765,"punctuation_ratio":0.21592443,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98954153,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T05:24:08Z\",\"WARC-Record-ID\":\"<urn:uuid:a6c34a17-1d98-4b29-a57b-890b359c9005>\",\"Content-Length\":\"20843\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a1d03bcb-78d0-417a-adeb-b792b2849880>\",\"WARC-Concurrent-To\":\"<urn:uuid:61889f02-badd-4293-8062-5a96e98c2b55>\",\"WARC-IP-Address\":\"104.239.138.29\",\"WARC-Target-URI\":\"https://oeis.org/A343234\",\"WARC-Payload-Digest\":\"sha1:ZNDBV7FLWI2HSADVK7XXMC5Y5OGAQHCZ\",\"WARC-Block-Digest\":\"sha1:ZXDFZC5O45JHATOQMXQ5MGLVONIKEAUJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679101195.85_warc_CC-MAIN-20231210025335-20231210055335-00241.warc.gz\"}"} |
https://www.javaprogramto.com/2019/04/java-program-to-display-fibonacci.html | [
"## \\$show=/label\n\nIn this program, you'll learn to display fibonacci series in Java using for and while loops. You'll learn how to display the fibonacci series upto a specific term or a number and how to find the nth number in the fibonacci series using recursion.\n\n## Java Program to Display Fibonacci Series:\n\nThe Fibonacci series is a series where the next term is the sum of previous two numbers. The first two numbers of the Fibonacci sequence is 0 followed by 1.\n\nThe Fibonacci numbers are the sums of the \"shallow\" diagonals (shown in red) of Pascal's triangle as below.\n\nWe will learn how to find the Fibonacci series to print the n numbers in the series. Below is the example Fibonacci series for first 20 numbers.\n\n```0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765\n```\n\nWe can find this in many ways. Below three ways, we will learn in this post.\n\n1) While Loop\n2) For Loop\n3) Using Recursive\n\nThe Java program is successfully compiled and run on a Windows system.\n\n## Example 1: Display Fibonacci series using for loop\n\nExample program to print the Fibonacci numbers using for loop.\n\n```package examples.java.w3schools.array.programs;\n\nimport java.util.Scanner;\n\npublic class FibonaciSeries {\n\npublic static void main(String[] args) {\n\nint n, num1 = 0, num2 = 1;\n\nScanner s = new Scanner(System.in);\nSystem.out.print(\"Enter value of n:\");\nn = s.nextInt();\n\nSystem.out.print(\"First \" + n + \" Numbers in the Fibonacci series: \");\nint i = 1;\nwhile (i <= n) {\nif (i == n)\nSystem.out.print(num1);\nelse\nSystem.out.print(num1 + \" + \");\n\nint sum = num1 + num2;\nnum1 = num2;\nnum2 = sum;\ni++;\n}\n}\n}\n```\n\nThe above program takes input n from the user. num1 and num2 will be assiged to default values i.e 0 and 1. The next number will be sum of num1 and num2. This will be third number. 
Like this, the while loop continues for 10 iterations if the value of n is 10, displaying the sum of the previous two terms stored in variable num2.\n\n### Output:\n\nThe above program produces the following output.\n\n```Enter value of n:10\nFirst 10 Numbers in the Fibonacci series: 0 + 1 + 1 + 2 + 3 + 5 + 8 + 13 + 21 + 34\n```\n\n## Example 2: Display Fibonacci series using for loop\n\nExample program to print the Fibonacci numbers using a for loop.\n\n```package examples.java.w3schools.array.programs;\n\nimport java.util.Scanner;\n\npublic class FibonaciSeries {\n\npublic static void main(String[] args) {\n\nint n, num1 = 0, num2 = 1;\n\nScanner s = new Scanner(System.in);\nSystem.out.print(\"Enter value of n:\");\nn = s.nextInt();\n\nSystem.out.print(\"First \" + n + \" Numbers in the Fibonacci series: \");\nfor (int i = 1; i <= n; ++i) {\nif (i == n)\nSystem.out.print(num1);\nelse\nSystem.out.print(num1 + \" + \");\n\nint sum = num1 + num2;\nnum1 = num2;\nnum2 = sum;\n}\n}\n}\n```\n\n### Output:\n\n```Output will be the same as the above program using the while loop.\n```\n\n## Example 3: Display nth number in the Fibonacci series using recursion\n\nIn this example, we will write a Java program to find the nth number in the Fibonacci series.\n\n```package examples.java.w3schools.array.programs;\n\nimport java.util.Scanner;\n\npublic class FibonaciSeries {\n\npublic static void main(String[] args) {\n\nint n, num1 = 0, num2 = 1;\n\nScanner s = new Scanner(System.in);\nSystem.out.print(\"Enter value of n:\");\nn = s.nextInt();\n\nint nthNumber = fibonacci(n);\nSystem.out.println(n + \"th number in the fibonacci series: \" + nthNumber);\n}\n\nprivate static int fibonacci(int number) {\nif (number == 1 || number == 2) {\nreturn 1;\n}\nreturn fibonacci(number - 1) + fibonacci(number - 2);\n}\n\n}\n```\n\nThis program takes the number n and finds the nth number in the Fibonacci series. The above program is implemented using a recursive approach.\n\n### Output:\n\n```Enter value of n:20\n20th number in the fibonacci series: 6765\n\n-------------------\n\nEnter value of n:10\n10th number in the fibonacci series: 55\n\n-------------------\n\nEnter value of n:30\n30th number in the fibonacci series: 832040\n\n-------------------\n\nEnter value of n:40\n40th number in the fibonacci series: 102334155\n\n-------------------\n\nEnter value of n:45\n45th number in the fibonacci series: 1134903170\n\n```",
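One practical note (an addition, not part of the original post): the plain recursive fibonacci(n) above recomputes the same subproblems exponentially many times, which is why the call for n = 45 is already noticeably slow. A hedged sketch of two common fixes, memoization and a simple iterative loop, is shown below using the same 1-based indexing as the article (fibonacci(10) = 55, fibonacci(20) = 6765); the class and method names are made up.

```
import java.util.HashMap;
import java.util.Map;

public class FibonacciFaster {

    private static final Map<Integer, Long> CACHE = new HashMap<>();

    // Memoized recursion: each value is computed once and then reused.
    static long fibMemo(int n) {
        if (n == 1 || n == 2) return 1L;
        Long cached = CACHE.get(n);
        if (cached != null) return cached;
        long value = fibMemo(n - 1) + fibMemo(n - 2);
        CACHE.put(n, value);
        return value;
    }

    // Iterative version: O(n) time, O(1) extra space.
    static long fibIterative(int n) {
        long prev = 1, curr = 1;          // fib(1), fib(2)
        for (int i = 3; i <= n; i++) {
            long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        System.out.println(fibMemo(20));      // 6765
        System.out.println(fibIterative(45)); // 1134903170
    }
}
```

Using long instead of int also postpones overflow, which for int occurs shortly after the 45th term shown in the output above.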
null,
"BLOGGER\n\nAuthor: Venkatesh - I love to learn and share the technical stuff.\nName\n\naccumulo,1,ActiveMQ,2,Adsense,1,API,37,ArrayList,17,Arrays,16,Bean Creation,3,Bean Scopes,1,BiConsumer,1,Blogger Tips,1,Books,1,C Programming,1,Collection,5,Collections,29,Collector,1,Command Line,1,Compile Errors,1,Configurations,7,Constants,1,Control Statements,8,Conversions,6,Core Java,93,Corona India,1,Create,2,CSS,1,Date,3,Date Time API,35,Dictionary,1,Difference,1,Download,1,Eclipse,2,Efficiently,1,Error,1,Errors,1,Exception,1,Exceptions,3,Fast,1,Files,14,Float,1,Font,1,Form,1,Freshers,1,Function,3,Functional Interface,2,Garbage Collector,1,Generics,4,Git,4,Grant,1,Grep,1,HashMap,1,HomeBrew,2,HTML,2,HttpClient,2,Immutable,1,Installation,1,Interview Questions,5,Iterate,2,Jackson API,3,Java,30,Java 10,1,Java 11,5,Java 12,5,Java 13,2,Java 14,2,Java 8,111,Java 8 Difference,2,Java 8 Stream Conversions,2,java 8 Stream Examples,3,Java 9,1,Java Conversions,11,Java Design Patterns,1,Java Files,1,Java Program,3,Java Programs,105,Java Spark,1,java.lang,5,java.util. function,1,jQuery,1,Kotlin,11,Kotlin Conversions,6,Kotlin Programs,10,Lambda,1,lang,29,Leap Year,1,live updates,1,LocalDate,1,Logging,1,Mac OS,2,Math,1,Matrix,5,Maven,1,Method References,1,Mockito,1,MongoDB,3,New Features,1,Operations,1,Optional,6,Oracle,5,Oracle 18C,1,Partition,1,Patterns,1,Programs,1,Property,1,Python,2,Quarkus,1,Read,1,Real Time,1,Recursion,2,Remove,2,Rest API,1,Schedules,1,Serialization,1,Servlet,2,Sort,1,Sorting Techniques,8,Spring,2,Spring Boot,23,Spring Email,1,Spring MVC,1,Streams,28,String,59,String Programs,12,String Revese,1,Swing,1,System,1,Tags,1,Threads,11,Tomcat,1,Tomcat 8,1,Troubleshoot,16,Unix,3,Updates,3,util,5,While Loop,1,\nltr\nitem\nJavaProgramTo.com: Java Program to Display Fibonacci Series | Find nth Fibonacci number using recursion\nJava Program to Display Fibonacci Series | Find nth Fibonacci number using recursion\nIn this program, you'll learn to display fibonacci series in Java using for and while loops. You'll learn how to display the fibonacci series upto a specific term or a number and how to find the nth number in the fibonacci series using recursion.\nhttps://4.bp.blogspot.com/-Am4lcnI23sA/XLQyg61om0I/AAAAAAAABVc/Ga-RVcCsiewOmE3b7hgbTo9WAW2WTu63ACLcBGAs/s400/Java%2BProgram%2Bto%2BDisplay%2BFibonacci%2BSeries.PNG\nhttps://4.bp.blogspot.com/-Am4lcnI23sA/XLQyg61om0I/AAAAAAAABVc/Ga-RVcCsiewOmE3b7hgbTo9WAW2WTu63ACLcBGAs/s72-c/Java%2BProgram%2Bto%2BDisplay%2BFibonacci%2BSeries.PNG\nJavaProgramTo.com\nhttps://www.javaprogramto.com/2019/04/java-program-to-display-fibonacci.html\nhttps://www.javaprogramto.com/\nhttps://www.javaprogramto.com/\nhttps://www.javaprogramto.com/2019/04/java-program-to-display-fibonacci.html\ntrue\n3124782013468838591\nUTF-8"
] | [
null,
"https://4.bp.blogspot.com/-Am4lcnI23sA/XLQyg61om0I/AAAAAAAABVc/Ga-RVcCsiewOmE3b7hgbTo9WAW2WTu63ACLcBGAs/s400/Java%2BProgram%2Bto%2BDisplay%2BFibonacci%2BSeries.PNG",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6553631,"math_prob":0.94843775,"size":3948,"snap":"2021-43-2021-49","text_gpt3_token_len":1025,"char_repetition_ratio":0.19726166,"word_repetition_ratio":0.25588235,"special_character_ratio":0.31585613,"punctuation_ratio":0.17305315,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9842915,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T15:40:19Z\",\"WARC-Record-ID\":\"<urn:uuid:bc9f4e09-b09b-4cb9-ac04-9c6f0de94871>\",\"Content-Length\":\"478856\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae55c624-d745-4bd7-94a9-9484bd426b1f>\",\"WARC-Concurrent-To\":\"<urn:uuid:f2969f85-f4de-431f-8000-64c4adc6ba43>\",\"WARC-IP-Address\":\"142.251.33.211\",\"WARC-Target-URI\":\"https://www.javaprogramto.com/2019/04/java-program-to-display-fibonacci.html\",\"WARC-Payload-Digest\":\"sha1:7KRHSVXBVV57VP73LODPFU3NNKMDG7LT\",\"WARC-Block-Digest\":\"sha1:AFYUMHBSZ27TCGDPFOJSOO3L64J5VBM2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585696.21_warc_CC-MAIN-20211023130922-20211023160922-00628.warc.gz\"}"} |
http://wallpaintings.at/geminga/FT_tachyonic_cherenkov_emission_jupiter_radio_electrons.htm | [
"Volume 377, Issues 45–48, 17 December 2013, Pages 3247–3253\n\n# Tachyonic Cherenkov emission from Jupiterʼs radio electrons\n\n• Department of Physics, Hiroshima University, 1-3-1 Kagami-yama, Higashi-Hiroshima 739-8526, Japan\n\n## Highlights\n\nTachyonic radio emission from uniformly moving electrons in a dispersive spacetime.\n\nTransversal and longitudinal flux densities due to superluminal Cherenkov effect.\n\nMildly relativistic thermal electron populations in Jupiterʼs radiation belts.\n\nSpectral fit of ground-based and in-situ measurements of the Jovian radio flux.\n\nJupiterʼs low flux density at 13.8 GHz (Cassini fly-by) due to thermal spectral cutoff.\n\n## Abstract\n\nTachyonic Cherenkov radiation from inertial relativistic electrons in the Jovian radiation belts is studied. The tachyonic modes are coupled to a frequency-dependent permeability tensor and admit a negative mass-square, rendering them superluminal and dispersive. The superluminal radiation field can be cast into Maxwellian form, using 3D field strengths and inductions, and the spectral densities of tachyonic Cherenkov radiation are derived. The negative mass-square gives rise to a longitudinal flux component. A spectral fit to Jupiterʼs radio spectrum, inferred from ground-based observations and the Cassini 2001 fly-by, is performed with tachyonic Cherenkov flux densities averaged over a thermal electron population.\n\n## Keywords\n\n• Superluminal radiation modes with negative mass-square;\n• Transversal and longitudinal tachyons;\n• Tachyonic Maxwell–Proca fields in a permeable spacetime;\n• Thermal electron plasma in Jupiterʼs radiation belts;\n• Cassiniʼs Jupiter fly-by and Jovian radio spectrum\n\n## 1. Introduction\n\nDuring the Cassini Jupiter fly-by in January 2001, an in situ measurement of the Jovian radio emission was performed, resulting in an unexpectedly low flux density of 0.44±0.15 Jy at 13.8 GHz, as compared to ground-based observations at 8.6 GHz, which produced an averaged flux of 2.3±0.6 Jy. This sudden decline is hard to explain with magnetospheric ultra-relativistic synchrotron radiation models. Here, we investigate the presently available radio spectrum ranging from 74 MHz up to the Cassini flux at 13.8 GHz. We perform a spectral fit to the Jovian radio flux with a tachyonic Cherenkov density , , and produced by mildly relativistic thermal electrons in the radiation belts. The superluminal radiation field (Proca field) satisfies Maxwellʼs equations with negative mass-square . On that basis, we derive the tachyonic Cherenkov flux generated by inertial charges propagating in a dispersive spacetime.\n\nIn Section 2, we discuss Proca fields with negative mass-square and frequency-dependent permeabilities. We outline the tachyonic Maxwell equations in terms of 3D field strengths and inductions and develop the 4D Lagrange formalism using dispersive permeability tensors. We obtain field equations which have a manifestly covariant appearance, even though the permeability tensors are a manifestation of the absolute spacetime required for causal superluminal signal transfer , and .\n\nIn Section 3, we analyze asymptotic tachyonic radiation fields generated by a classical subluminal current density, and decompose them into transversal and longitudinal field components. In Section 4, we derive the tachyonic Cherenkov flux densities of a relativistic charge in uniform motion. A longitudinal radiation component emerges whose intensity scales with the negative mass-square of the radiation field. 
As in the case of electromagnetic Cherenkov radiation, the mass of the radiating particle does not enter in the tachyonic Cherenkov densities. This gives credence to the view , that Cherenkov radiation is not so much radiation by a charge passing through a medium, but rather radiation by the medium itself, excited by the field of the inertial charge.\n\nIn Section 5, we average the differential tachyonic Cherenkov flux over relativistic electron distributions (thermal Maxwell–Boltzmann and nonthermal power-law distributions). In Section 6.1, we discuss spectral fitting with Cherenkov flux densities, adapted to Jupiterʼs radio band. In Section 6.2, a spectral fit to the Jovian radio emission is performed. The low flux density at 13.8 GHz measured by the Cassini spacecraft can well be explained by tachyonic Cherenkov emission from thermal electrons in Jupiterʼs radiation belts, the Cassini data point being located in the exponentially decaying spectral tail. In Section 7, we present our conclusions.\n\n## 2. Tachyonic Proca–Maxwell fields: manifestly covariant field equations in a permeable spacetime\n\nWe start with some conventions regarding the Fourier time transform. The vector potential Aμ=(A0,A) transforms as , and the same holds for the field strengths, inductions and the current. Since Aμ is real, . The homogeneous Maxwell equations in space–frequency representation read , . The field strengths and are related to the vector potential by , . The constitutive equations defining the inductions and and the inductive potential read and \n\nequation2.1",
[Equation (2.1) is not recoverable from the extracted source.]

The permeabilities ε(ω), μ(ω), μ0(ω) and ε0(ω) are independent of the space variable, real and symmetric, ε(ω)=ε(−ω). The inhomogeneous field equations, coupled to a current […] (which will be defined in (2.5)), read

[Equation (2.2) is not recoverable from the extracted source.]

Here, […] is the negative tachyonic mass-square ([…] in our sign convention), which can be frequency-dependent like the permeabilities, […]. We take the divergence of the first equation in (2.2) and substitute the second, to obtain the Lorentz condition […], subject to current conservation […]. The corresponding Fourier representation of the Poynting flux vector is […].

To write the Maxwell equations manifestly covariantly in Fourier space, we start with the field tensor Fμν(x,t)=Aν,μ−Aμ,ν. We use the convention that time differentiation in Fourier space means to multiply with a factor −iω, e.g. […]. For conjugated fields, […]. Thus, […] and […], which actually means […]. The 3D field strengths are […] and […], and inversely […], where εkij is the Levi-Civita 3-tensor. The manifestly covariant homogeneous field equations read […], where εκλμν is the totally antisymmetric 4-tensor.

The permeabilities (ε0(ω),μ0(ω)) and (ε(ω),μ(ω)) in (2.1) define isotropic real and symmetric permeability tensors […] and […],

[Equation (2.3) is not recoverable from the extracted source.]

with zero flanks […] and […]. Greek indices are raised and lowered with the Minkowski metric ημν=diag(−1,1,1,1). We may then write the inductions (2.1) manifestly covariantly as […] and […], so that […] and […], and inversely […]. Also, […] and […].

The action functional is […], defined by the Lagrangian

[Equation (2.4) is not recoverable from the extracted source.]

The coupling of the wave modes to an external current […] is effected by a permeability tensor […],

[Equation (2.5) is not recoverable from the extracted source.]

with the same properties as the tensors […] in (2.3). In the field equations (2.2), we use the ‘dressed’ current […]
". Euler variation of the action gives the manifestly covariant field equations Hˆμν,νm2tCˆμ=jˆμΩ in space–frequency (x,ω) representation, equivalent to (2.2). (Hˆm0,0(x,ω)=iωHˆm0, as defined above.) If the current jˆμΩ is conserved, jˆμΩ,μ=0, we find the Lorentz condition Cˆμ,μ=0. The permeability tensor (2.5) amounts to a frequency-dependent coupling constant in Lagrangian (2.4) if Ω0(ω)=1/Ω(ω), which we assume in the following, jˆμΩ=jˆμ/Ω(ω). Thus, if the external current is conserved, jˆm,miωjˆ0=0, this also holds for the dressed current jˆμΩ. In fact, Ω(ω) can be scaled into the permeabilities (2.1), εΩε, μμ/Ω, and analogously for (ε0,μ0), cf. (2.2).\n\n## 3. Asymptotic radiation fields: time-averaged transversal and longitudinal energy flux\n\nThe tachyonic Maxwell equations (stated in (2.2) and after (2.5), with permeability tensors in (2.3) and (2.5)) are solved by the transversal and longitudinal asymptotic vector potentials and \n\nequation3.1\nAˆT,L(x,ω)1ΩλT,L4πreikT,LreikT,LnxjˆT,L(x,ω)dx.",
[Equation (3.1), as far as it can be reconstructed from the garbled line above: ÂT,L(x,ω) ≃ (1/Ω) λT,L e^{i kT,L r}/(4πr) ∫ e^{−i kT,L n·x′} ĵT,L(x′,ω) dx′.]
"Here, we substitute the transversal/longitudinal current components, defined with the radial unit wave vector n=x/r: and . We also note and . λT,L in (3.1) stands for λT=μ, λL=ε0μ0/ε, and the wave numbers kT,L are defined by the dispersion relations kT,L=sign(ω)κT,L(ω), where and . It is convenient to define the current transform whose transversal and longitudinal projections read\nequation3.4\nso that the integral in (3.1) can be replaced by .\n\nequation3.5",
[Equation (3.5) is not recoverable from the extracted source.]
"We perform a time average (also see (4.1) below),\nequation3.6",
[Equation (3.6) is not recoverable from the extracted source.]
"to find\nequation3.7",
[Equation (3.7) is not recoverable from the extracted source.]
"from which the transversal and longitudinal Cherenkov flux densities can be extracted, cf. Section 4.\n\n## 4. Superluminal radiation by an inertial charge in a dispersive spacetime\n\nWe consider a classical point charge q in uniform motion x0(t)=υt, with subluminal speed υ<1. The charge and current densities are ρ=qδ(xυt) and j(x,t)=qυδ(xυt). We use the Heaviside–Lorentz system, so that αe=e2/(4πc)≈1/137 and αq=q2/(4πc) are the electric and tachyonic fine-structure constants. The current transform (3.3) reads\n\nwhere the time cutoff T→∞ has been introduced as a regularization. The time integration is performed by means of the limit definition",
"of the Dirac function. A second limit definition, , is invoked in (4.2) to calculate the time-averaged flux vectors. We write",
"and use υ as polar axis, , to find, cf. (3.7),\nequation4.2",
[Equation (4.2) is not recoverable from the extracted source.]
"Performing the limit T→∞, we can replace δ(2),T by the ordinary delta function.\n\nThe radiant transversal/longitudinal power is obtained by integrating the Poynting vectors (4.2) over a sphere of radius r→∞, . By interchanging the dθ and dω integrations, we find\n\nwhere pT,L(ω) are the tachyonic Cherenkov densities for transversal/longitudinal radiation, The integration range in (4.3) is actually over positive ω intervals in which ω/(kT,L(ω)υ)⩽1, cf. (3.2). From now on, we will use constant (i.e. frequency-independent) permeabilities (ε,μ) and (ε0,μ0) as well as a constant tachyonic mass-square . In this case, the integration range in (4.3) is defined by cutoff frequencies ωT,L,max obtained as the solutions of kT,L(ω)υ=ω, respectively.\n\nIn the tachyonic spectral densities (4.4), we substitute the wave numbers (3.2), and parametrize the particle velocity with the Lorentz factor, :\n\nwhere we use the shortcut\nequation4.6",
[Equation (4.6) is not recoverable from the extracted source.]
"We also define ηL(γ) analogous to ηT(γ) with the product εμ in (4.6) replaced by ε0μ0. In the following, we restrict to permeabilities satisfying εμ⩽1 and ε0μ0⩽1, since the spectral averaging carried out in the next section leads to exponentially decaying spectral densities only in this case. Subject to these constraints, the functions ηT,L(γ) are positive, irrespectively of the Lorentz factor γ⩾1, and related to the cutoff frequencies in (4.3) by .\n\n## 5. Tachyonic Cherenkov densities averaged with relativistic electron distributions\n\nWe rescale the frequency in the radiation densities (4.5), ,\n\nequation5.1",
[Equation (5.1) is not recoverable from the extracted source.]
"where we have introduced the frequency-dependent tachyonic fine-structure constant αq(ω)=q2/(4πΩ2(ω)). The scale factor Ω2(ω), cf. after (2.5), is chosen as\nequation5.2",
[Equation (5.2) is not recoverable from the extracted source.]
"where the exponent σ is to be determined from an empirical spectral fit. We note Ω(ω)→1 for ω→∞ as well as mt→0, and . The permeabilities ε,μ,ε0,μ0 are positive and constant, satisfying εμ⩽1 and ε0μ0⩽1, cf. after (4.6). The transversal/longitudinal spectral range is 0⩽ωωT,L,max(γ), with . The cutoff factors ηT,L(γ) in (4.6) are monotonically increasing, from zero at γ=1 to the maximum ηT(∞)=[εμ0(1/(εμ)−1)]−1 and ηL(∞)=[εμ0(1/(ε0μ0)−1)]−1, respectively.\n\nWe average the radiation densities (5.1) with an electronic power-law distribution , , and ,\n\nequation5.3",
[Equation (5.3) is not recoverable from the extracted source.]
"where the dimensionless normalization constant Aα,β is related to the electronic number count Ne by The exponential cutoff β=me/(kBT) determines the electron temperature, T[K]≈5.93×109/β. kB is the Boltzmann constant and me the electron mass. A thermal Maxwell–Boltzmann distribution requires the electron index α=−2.\n\nThe spectral average of the transversal radiation densities in (5.1) is carried out as\n\nequation5.5\nwhere θ is the Heaviside step function. The same applies to the longitudinal density, if we replace T→L and εμε0μ0. To evaluate integral (5.5), we solve the inequality ω<ωT,max, , for γ, cf. (4.6): which is valid if the denominator is positive. As pointed out after (5.2), only frequencies in the range 0<ω<ωT,max(∞) can be radiated, where . In this frequency range, the denominator in (5.6) is positive and γT,min(ω) is monotonically increasing, reaching infinity at ωT,max(∞). Thus, for a frequency in the range 0<ω<ωT,max(∞) to be radiated, this requires the electronic Lorentz factor γ to exceed γT,min(ω). This holds true for longitudinal radiation as well, if we perform the substitutions T→L and εμε0μ0, which also define γL,min(ω) via (5.6).\n\nWith these prerequisites, the average (5.5) can be reduced to the spectral functions \n\nso that, cf. (5.6),\nequation5.8\npT(ω)〉α,β=θ(ωT,max(∞)−ω)BT(ω,γT,min(ω)).",
"In the Heaviside function θ, it is convenient to rescale the argument, writing , where . The longitudinal average pL(ω)〉α,β is obtained by the substitutions T→L and εμε0μ0 in (5.6) and (5.8).\n\nThe spectral functions BT,L(ω,γ1) in (5.7) admit integration in terms of incomplete gamma functions,\n\nequation5.9",
[Equation (5.9) is not recoverable from the extracted source.]
"where αq(ω) is the tachyonic fine-structure constant defined before (5.2). The longitudinal spectral function reads\nequation5.10",
[Equation (5.10) is not recoverable from the extracted source.]
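The spectral functions (5.9) and (5.10) above are built from upper incomplete gamma functions. As a small numerical sanity check of the special case quoted in the next paragraph (Γ(1, x) = e^(−x) for the thermal index α = −2) and of the exponential large-argument decay, assuming SciPy's convention that gammaincc is the regularized upper incomplete gamma:

```
import numpy as np
from scipy.special import gammaincc, gamma

def upper_incomplete_gamma(a, x):
    """Unregularized upper incomplete gamma Gamma(a, x), valid for a > 0."""
    return gammaincc(a, x) * gamma(a)

x = np.array([1.0, 5.0, 20.0])

# Thermal (Maxwell-Boltzmann) case alpha = -2, i.e. a = -alpha - 1 = 1:
# Gamma(1, x) = exp(-x)
print(upper_incomplete_gamma(1.0, x))
print(np.exp(-x))                        # agrees to machine precision

# Large-argument behaviour: Gamma(a, x) ~ x**(a-1) * exp(-x) for x >> 1
a = 1.5
print(upper_incomplete_gamma(a, 50.0))
print(50.0 ** (a - 1) * np.exp(-50.0))   # close to the exact value for large x
```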
"We note that Γ is elementary for Maxwell–Boltzmann averages, α=−2, Γ(1,βγ1)=eβγ1, and it decays exponentially for βγ1≫1, Γ(−α−1,βγ1)∼(βγ1)α−2eβγ1. Accordingly, BT(ω,γT,min(ω)) decays exponentially as well, since γT,min(ω) diverges for ωωT,max(∞), cf. (5.6), and the same holds true for BL(ω,γL,min(ω)) and ωωL,max(∞). In the low-frequency limit, , we find , cf. (5.2).\n\n## 6. Tachyonic spectral fit to Jupiterʼs radio emission\n\n### 6.1. Superluminal Cherenkov flux in the radio band\n\nWe restore the units =c=1 and use eV units for the tachyon mass, so that mt stands for mtc2[eV]. As for the radiated frequencies, we put , where . The energy-dependent tachyonic fine-structure constant is dimensionless, αq(ω)=αt/Ω2(ω), cf. (5.2); the proportionality factor αt=q2/(4πc) is the tachyonic counterpart to the electric fine-structure constant e2/(4πc), αq(ω→∞)=αt. The permeabilities (2.1) and the temperature parameter β are dimensionless. The spectral functions BT,L in (5.9) and (5.10) and the averaged densities pT,L(ω)〉α,β in (5.8) are in eV units accordingly. The power transversally and longitudinally radiated is thus , where we substitute mtmtc2[eV] and in the integrand. The transversal/longitudinal flux densities read\n\nequation6.1",
[Equation (6.1) is not recoverable from the extracted source.]
"where d[cm] is the distance to the radiating source. The total unpolarized flux density is . Jupiterʼs standard geocentric distance is d≈4.04 AU≈6.04×1013 cm , and . As we have already expressed pT,L(ω)〉α,β in terms of the rescaled frequency , we only need to substitute in the spectral densities (5.8), where ν is measured in hertz, and κt,Hz[s] is a fitting parameter, , determining the tachyon mass in the radio band.\n\nWe note the conversions ν[GHz]≈2.418×105E[eV] and 1 Jy≈6.2415×10−12 eV/(cm2), so that",
null,
". To better distinguish linear spectral slopes from curved spectral cutoffs, one can use the rescaled flux density , where k is preferably a positive integer exponent. In Fig. 1, we plot against ν[GHz], and the fit is performed with the total flux density .",
"Fig. 1.\n\nTachyonic Cherenkov fit to the Jovian radio emission. Data points of the 1998 multi-site campaign from , VLA (Very Large Array), DSN (Deep Space Network) and Cassini 2001 data from . The fit T + L (solid curve) depicts the total tachyonic flux density rescaled with frequency. This unpolarized density is obtained by adding the transversal flux component (dotted curve labeled T) and the longitudinal component (dashed curve L), generated by a thermal electron plasma, cf. (6.2). Jupiterʼs radio spectrum shows an extended power-law ascent terminating in a spectral peak around 6 GHz, which is followed by an exponentially decaying tail inferred from the Cassini data point at 13.8 GHz. The fitting parameters are recorded in Section 6.2.\n\nSummarizing the flux densities employed in the spectral fit in Fig. 1, we assemble in the above stated units. The transversal Cherenkov density (6.1) reads\n\nequation6.2",
[Equation (6.2) is not recoverable from the extracted source.]
"where , cf. after (5.8). The spectral function BT in (5.9) and the argument γT,min(ω) in (5.6) are already expressed in the rescaled variable . The longitudinal counterpart to (6.2) is obtained by replacing T→L and εμε0μ0, so that .\n\nIn (6.2), we substitute , where the frequency ν is measured in GHz so that κt=109κt,Hz or, cf. after (6.1),\n\nIn this parametrization, we find the transversal spectral function (5.9) as\nequation6.4",
[Equation (6.4) is not recoverable from the extracted source.]
"and the longitudinal spectral function in (5.10) reads\nequation6.5",
[Equation (6.5) is not recoverable from the extracted source.]
"In (6.4) and (6.5), we have to insert the frequency-dependent minimal Lorentz factors, cf. (5.6),\nequation6.6",
[Equation (6.6) is not recoverable from the extracted source.]
"and the scale factor\nequation6.7",
[Equation (6.7) is not recoverable from the extracted source.]
"of the tachyonic fine-structure constant αq(ω)=αt/Ω2, αt=q2/(4πc), see (5.2) and before (6.1).\n\nThe Heaviside function in the flux density (6.2) can be replaced by . The highest frequency transversally/longitudinally radiated is νT,max=(1/(εμ)−1)−1/2/κt and νL,max=(1/(ε0μ0)−1)−1/2/κt respectively, cf. after (6.2). The tachyonic mass parameter κt is defined in (6.3). The constant amplitudes in (6.2) and (6.4) can be combined to one fitting parameter,\n\nwith 4πd2≈4.58×1028 cm2 for Jupiter, cf. after (6.1). This amplitude at can also be used for the longitudinal radiation component in (6.2) and (6.5).\n\n### 6.2. Spectral asymptotics and tachyonic Cherenkov fit of the Jovian radio emission\n\nWe consider a thermal electron distribution (5.3) with electron index α=−2, and permeabilities (2.1) satisfying εμ=ε0μ0=1. (This is slightly more general than vacuum permeabilities ε=μ=1, ε0=μ0=1.) The transversal spectral function BT(ω,γT,min) in (6.4) then simplifies to\n\nequation6.9",
[Equation (6.9) is not recoverable from the extracted source.]
"and its longitudinal counterpart in (6.5) is\nequation6.10",
[Equation (6.10) is not recoverable from the extracted source.]
"The minimal Lorentz factor to be substituted is , cf. (6.6), and the fine-structure scale factor Ω2(ν) is stated in (6.7). ν is measured in GHz, and the tachyonic mass parameter κt is defined in (6.3).\n\nThe flux densities in (6.2) apply, with the Heaviside function dropped. In the low-frequency limit κtν→0, we find the unpolarized flux\n\nThe parameter at is defined in (6.8). We can estimate the amplitude A0 and the exponent σ by fitting this power-law slope, which is linear in a log–log plot, to the low-frequency spectrum. In the asymptotic high-frequency limit κtν≫1, the unpolarized flux reads\nequation6.12",
[Equation (6.12) is not recoverable from the extracted source.]
"An initial estimate of A and ρ is obtained by fitting this exponentially decaying flux in the high-frequency regime. Once the parameters A0,∞, σ and ρ have been estimated from the asymptotic spectral fits, we find the temperature parameter β of the radiating electron population by solving\nequation6.13",
[Equation (6.13) is not recoverable from the extracted source.]
"This equation readily follows from the definition of the asymptotic fitting parameters A0,∞ and ρ in (6.11) and (6.12). Initial values for κt and at are found as κt=ρ/β and at=Aβ2ρ.\n\nThe least-squares fit of the flux densities (6.2) is performed by varying the parameters A0,∞,ρ and σ in the vicinity of their initial values obtained from the asymptotic fits (6.11) and (6.12); the corresponding β is obtained by solving (6.13). In addition, we may vary the permeabilities in the vicinity of εμ=ε0μ0=1, subject to the constraints εμ⩽1 and ε0μ0⩽1, cf. after (4.6). The electron index can also be varied around its equilibrium value α=−2, cf. (5.3), by employing the nonthermal spectral functions (6.4) and (6.5) instead of (6.9) and (6.10), but this is not necessary for the Jovian radio spectrum.\n\nThe tachyonic Cherenkov fit of Jupiterʼs radio emission depicted in Fig. 1 is performed with a thermal electron distribution α=−2 and permeabilities εμ=ε0μ0=1. The numerical values of the fitting parameters are\n\nequation6.14",
[Equation (6.14), which lists the numerical values of the fitting parameters, is not recoverable from the extracted source.]
"The fine-structure scaling exponent σ is defined in (6.7), the temperature parameter β in (5.3), the tachyonic mass parameter κt[s] in (6.3), and the flux amplitude at[eV/cm2] in (6.8). In practice, we use the parameters of the asymptotic flux limits (6.11) and (6.12) as fitting parameters, which are A0≈4.9, A≈4.0×109, σ≈0.5 and ρ≈0.8. The parameters in (6.14) have been calculated from these values as explained above.\n\nThe electron temperature is T[K]≈2.51×108, cf. after (5.4), which is the only point where the electron mass enters, via β=me/(kBT). (The mass of the radiating particle does not show in the classical Cherenkov densities (4.5), in contrast to the tachyon mass mt.) Assuming vacuum permeabilities, εμ≈1, ε0μ0≈1, we can estimate the tachyon mass in the radio band, mtc2[eV]≈1.22×10−4 or mtc2≈29.5 GHz, cf. (6.3) and (6.14). From the amplitude at in (6.8) and the Jovian distance estimate, we find the product of the asymptotic tachyonic fine-structure constant αt (defined before (6.1)) and the normalization factor of the electron distribution (5.3) as Aα,βαt≈4.17×1033. The integral Kα,β in (5.4) determining the electron number Ne=Aα,βKα,β is calculated with the temperature parameter β in (6.14), Kα=−2,β≈6.65×10−13. The estimate for the product of electron count and tachyonic fine-structure constant is thus Neαt≈2.77×1021.\n\n## 7. Conclusion\n\nWe have investigated the emission of tachyonic radiation modes by freely propagating electrons in a dispersive spacetime. The superluminal group velocity υT,L=dω/dkT,L(ω) is caused by the negative mass-square in the wave numbers (3.2) of the tachyonic radiation field. υT,L differs for transversal and longitudinal modes , unless the wave numbers coincide, which requires permeabilities satisfying ε0μ0=εμ. The tachyonic wavelength is λT,L=2π/kT,L. In the radio band and with vacuum permeabilities, we find the group velocity and wavelength , with ν in GHz. A tachyon mass mt of 29.5 GHz is inferred from the spectral fit in Fig. 1, cf. after (6.14).\n\nWe introduced tachyonic field strengths and inductions defined by constitutive relations with frequency-dependent permeabilities. We then developed an equivalent 4D space–frequency representation of the dispersive radiation field, deriving manifestly covariant field equations. Thus the suggestive and efficient formalism of manifest covariance can be maintained, but the underlying space conception is non-relativistic, as superluminal wave propagation requires an absolute spacetime conception to preserve causality , and .\n\nWe focused on superluminal Cherenkov radiation, the tachyonic radiation field being generated by a classical subluminal charge uniformly moving in a permeable spacetime. The electron mass does not enter in the classical radiation densities (4.4), which suggests that the tachyonic quanta are actually emitted by the medium stimulated by the field of the moving electron and . A longitudinal radiation component emerges, with amplitude proportional to the negative mass-square of the tachyonic modes. We parametrized the Cherenkov flux densities with the Lorentz factor of the inertial charge, averaged them with a relativistic electron distribution, and explained how to perform tachyonic spectral fits in the radio band.\n\nA flux average over a mildly relativistic thermal electron population (with Maxwell–Boltzmann distribution) suffices to model the currently observed Jovian radio spectrum. 
We also calculated the tachyonic Cherenkov flux radiated by nonthermal electron populations (with power-law distribution), cf. Section 5, which can be useful to model X- and γ-ray spectra , and . As for Jupiter, the low flux density at 13.8 GHz (Cassini 2001 in-situ measurement and ) is caused by an exponential spectral cutoff. The low-frequency spectrum is linear in the double-logarithmic flux representation of Fig. 1. The cross-over region around the spectral peak at 6 GHz is also well reproduced by the tachyonic flux densities (4.4) averaged over a thermal electron population.\n\n## References\n\n• \n• Electrodynamics of Continuous Media\n\n• Pergamon, Oxford (1984)\n\n• \n• Transition Radiation and Transition Scattering\n\n• Hilger, Bristol (1990)\n\n• \n• Sov. Phys. Usp., 15 (1972), p. 184"
https://archives.zujuan.com/beike/P/12379854 | [
"# 2022年全国乙卷高考真题数学(理科)试卷(图片版,无答案)\n\n(在此卷上答题无效)\n\n2022年普通高等学校招生全国统一考试\n\n1.答卷前,考生务必将自己的姓名和座位号填写在答题卡上。\n2.回答选择题时,选出每小题答案后,用铅笔把答题卡上对应题目的答案标号涂黑。如粉\n\n3.考试结束后,将本试卷和答题卡一并交回。\n\n1.设全集U={,2,3,4,5},集合M满足CuM={L,3},则\nA.2∈M\nB.3∈M\nC.4gM\nD.5M\n2.已知z=1-2i,且z+az+b=0,其中a,b为实数,则\nA.a=1,b=-2\nB.a=-1,b=2\nC.a=1,b=2\nD.a=-1,b=-2\n3.已知向量a,b满足|al=1,|b=√5,|a-2b1=3,则ab=\nA.-2\nB.-1\nC.1\nD.2\n4.嫦娥二号卫星在完成探月任务后,继续进行深空探测,成为我国第一颗环绕太飞\n\n6=1+,=1+1\na,\n1,6=1+\na1+\nG1二,…,依此类推,其中\n1\n2+\na∈N(k=1,2).则\nA.bB.b3C.b。D.ba5.设F为抛物线C:y2=4x的焦点,点A在C上,点B3,0),若AF|BF,则|AB1=\nD.32\nA.2\nB.2√2\nC.3\nW数学(理科)试题第1页(共5页)\n6.执行右边的程序框图,输出的n=\n\nA.3\n\nB.4\nb=b+2a\nC.5\na=b-a,n=n+l\nD.6\nb2\n\n-2k0.00\n\n7.在正方体ABCD-AB,CD中,E,F分别为AB,B......",
https://de.b-ok.xyz/book/3385643/22ef39 | [
"Startseite Applied Numerical Methods with MATLAB for Engineers and Scientists\n\n# Applied Numerical Methods with MATLAB for Engineers and Scientists\n\nApplied Numerical Methods with MATLAB is written for students who want to learn and apply numerical methods in order to solve problems in engineering and science. As such, the methods are motivated by problems rather than by mathematics. That said, sufficient theory is provided so that students come away with insight into the techniques and their shortcomings.\nMcGraw-Hill's Connect, is also available as an optional, add on item. Connect is the only integrated learning system that empowers students by continuously adapting to deliver precisely what they need, when they need it, how they need it, so that class time is more effective. Connect allows the professor to assign homework, quizzes, and tests easily and automatically grades and records the scores of the student's work. Problems are randomized to prevent sharing of answers an may also have a \"multi-step solution\" which helps move the students' learning along if they experience difficulty.\nJahr: 2017\nEdition: 4\nVerlag: McGraw-Hill Education\nSprache: english\nSeiten: 720 / 714\nISBN 10: 0073397962\nISBN 13: 978-0073397962\nFile: PDF, 31.57 MB\n\n## Most frequently terms\n\nPost a Review",
"You can write a book review and share your experiences. Other readers will always be interested in your opinion of the books you've read. Whether you've loved the book or not, if you give your honest and detailed thoughts then people will find new books that are right for them.\n1\n\n### Commercial Aviation Safety, Sixth Edition\n\nAnno: 2017\nLingua: english\nFile: AZW3 , 14.42 MB\n2\n\n### Global Forum on Transparency and Exchange of Information for Tax Purposes peer reviews : Austria 2011 ; phase 1 ; August 2011 (reflecting the legal and regulatory framework as at June 2011)\n\nAnno: 2011\nLingua: english\nFile: PDF, 1.20 MB\n```\fApplied Numerical Methods\nwith MATLAB® for Engineers and Scientists\nFourth Edition\n\nSteven C. Chapra\nBerger Chair in Computing and Engineering\nTufts University\n\nAPPLIED NUMERICAL METHODS WITH MATLAB® FOR ENGINEERS AND SCIENTISTS,\nFOURTH EDITION\n2005. No part of this publication may be reproduced or distributed in any form or by any means, or stored in a\ndatabase or retrieval system, without the prior written consent of McGraw-Hill Education, including, but not\nlimited to, in any network or other electronic storage or transmission, or broadcast for distance learning.\nSome ancillaries, including electronic and print components, may not be available to customers outside the\nUnited States.\nThis book is printed on acid-free paper.\n1 2 3 4 5 6 7 8 9 LCR 21 20 19 18 17\nISBN 978-0-07-339796-2\nMHID 0-07-339796-2\nChief Product Officer, SVP Products & Markets: G. Scott Virkler\nVice President, General Manager, Products & Markets: Marty Lange\nVice President, Content Design & Delivery: Betsy Whalen\nManaging Director: Thomas Timp\nBrand Manager: Thomas M. Scaife, Ph.D.\nDirector, Product Development: Rose Koos\nProduct Developer: Jolynn Kilburg\nMarket Development Manager: Shannon O’Donnell\nDigital Product Developer: Joan Weber\nDirector, Content Design & Delivery: Linda Avenarius\nProgram Manager: Lora Neyens\nContent Project Managers: Jeni McAtee; Sandy Schnee\nContent Licensing Specialists: Lorraine Buczek\nCompositor: MPS Limited\nPrinter: LSC Communications\nAll credits appearing on page or at the end of the book are considered to be an extension of the copyright page.\nChapra, Steven C., author.\nApplied numerical methods with MATLAB for engineers and scientists /\n\u0007Steven C. Chapra, Berger Chair in Computing and Engineering, Tufts University.\nFourth edition. | New York, NY : McGraw-Hill Education, |\nIncludes bibliographical references and index.\nLCCN 2016038044 | ISBN 9780073397962 (alk. paper) | ISBN 0073397962 (alk. paper)\nLCSH: Numerical analysis—Data processing—Textbooks. | Engineering\nmathematics—Textbooks. | MATLAB—Textbooks.\nLCC QA297 .C4185 2018 | DDC 518—dc23 LC record available at https://lccn.loc.gov/2016038044\nThe Internet addresses listed in the text were accurate at the time of publication. The inclusion of a website\ndoes not indicate an endorsement by the authors or McGraw-Hill Education, and McGraw-Hill Education does\nnot guarantee the accuracy of the information presented at these sites.\nmheducation.com/highered\n\nTo\nMy brothers,\nJohn and Bob Chapra\nand\nFred Berger (1947–2015)\nwho I miss as a good friend, a good man.\nand a comrade in bringing the light of engineering\nto some of world’s darker corners.\n\nSteve Chapra teaches in the Civil and Environmental Engineering Department at Tufts\nUniversity, where he holds the Louis Berger Chair in Computing and Engineering. 
His other\nbooks include Numerical Methods for Engineers and Surface Water-Quality Modeling.\nSteve received engineering degrees from Manhattan College and the University of\nMichigan. Before joining the faculty at Tufts, he worked for the Environmental Protection\nAgency and the National Oceanic and Atmospheric Administration, and taught at Texas\nA&M University and the University of Colorado. His general research interests focus on\nsurface water-quality modeling and advanced computer applications in environmental\nengineering.\nHe has received a number of awards for his scholarly contributions, including the\nRudolph Hering Medal, the Meriam/Wiley Distinguished Author Award, and the ChandlerMisener Award. He has also been recognized as the outstanding teacher at Texas A&M\nUniversity (1986 Tenneco Award), the University of Colorado (1992 Hutchinson Award),\nand Tufts University (2011 Professor of the Year Award).\nSteve was originally drawn to environmental engineering and science because of his\nlove of the outdoors. He is an avid fly fisherman and hiker. An unapologetic nerd, his love\naffair with computing began when he was first introduced to Fortran programming as an\nundergraduate in 1966. Today, he feels truly blessed to be able to meld his love of mathematics, science, and computing with his passion for the natural environment. In addition,\nhe gets the bonus of sharing it with others through his teaching and writing!\nBeyond his professional interests, he enjoys art, music (especially classical music,\njazz, and bluegrass), and reading history. Despite unfounded rumors to the contrary, he\nnever has, and never will, voluntarily bungee jump or sky dive.\nhttp://engineering.tufts.edu/cee/people/chapra/ or e-mail him at steven.chapra@tufts.edu.\n\niv\n\nCONTENTS\nPreface\n\nPart One\n\nxiv\n\nModeling, Computers, and Error Analysis\n1.1\n1.2\n\nMotivation 1\nPart Organization\n\n1\n\n2\n\nCHAPTER 1\nMathematical Modeling, Numerical Methods,\nand Problem Solving 4\n1.1 A Simple Mathematical Model 5\n1.2 Conservation Laws in Engineering and Science\n1.3 Numerical Methods Covered in This Book 13\n1.4 Case Study: It’s a Real Drag 17\nProblems 20\n\n12\n\nCHAPTER 2\nMATLAB Fundamentals 27\n2.1 The MATLAB Environment 28\n2.2 Assignment 29\n2.3 Mathematical Operations 36\n2.4 Use of Built-In Functions 39\n2.5 Graphics 42\n2.6 Other Resources 46\n2.7 Case Study: Exploratory Data Analysis 46\nProblems 49\n\nCHAPTER 3\nProgramming with MATLAB\n3.1 M-Files 54\n3.2 Input-Output\n\n53\n\n61\nv\n\nvi\n\nCONTENTS\n\n3.3 Structured Programming 65\n3.4 Nesting and Indentation 79\n3.5 Passing Functions to M-Files 81\n3.6 Case Study: Bungee Jumper Velocity 87\nProblems 91\n\nCHAPTER 4\nRoundoff and Truncation Errors 99\n4.1 Errors 100\n4.2 Roundoff Errors 106\n4.3 Truncation Errors 114\n4.4 Total Numerical Error 125\n4.5 Blunders, Model Errors, and Data Uncertainty 130\nProblems 131\n\nPart Two\n\nRoots and Optimization\n\n135\n\n2.1 Overview 135\n2.2 Part Organization 136\nCHAPTER 5\nRoots: Bracketing Methods\n\n138\n\n5.1 Roots in Engineering and Science 139\n5.2 Graphical Methods 140\n5.3 Bracketing Methods and Initial Guesses 141\n5.4 Bisection 146\n5.5 False Position 152\n5.6 Case Study: Greenhouse Gases and Rainwater\nProblems 159\n\nCHAPTER 6\nRoots: Open Methods\n\n164\n\n6.1 Simple Fixed-Point Iteration 165\n6.2 Newton-Raphson 169\n6.3 Secant Methods 174\n6.4 Brent’s Method 176\n6.5 MATLAB Function: fzero 181\n6.6 Polynomials 183\n6.7 Case Study: Pipe Friction 186\nProblems 
191\n\n156\n\nvii\n\nCONTENTS\b\n\nCHAPTER 7\nOptimization\n\n198\n\n7.1 Introduction and Background 199\n7.2 One-Dimensional Optimization 202\n7.3 Multidimensional Optimization 211\n7.4 Case Study: Equilibrium and Minimum Potential Energy\nProblems 215\n\nPart Three\n\nLinear Systems 223\n3.1 Overview 223\n3.2 Part Organization\n\n225\n\nCHAPTER 8\nLinear Algebraic Equations and Matrices\n\n227\n\n8.1 Matrix Algebra Overview 229\n8.2 Solving Linear Algebraic Equations with MATLAB 238\n8.3 Case Study: Currents and Voltages in Circuits 240\nProblems 244\n\nCHAPTER 9\nGauss Elimination 248\n9.1 Solving Small Numbers of Equations 249\n9.2 Naive Gauss Elimination 254\n9.3 Pivoting 261\n9.4 Tridiagonal Systems 264\n9.5 Case Study: Model of a Heated Rod 266\nProblems 270\n\nCHAPTER 10\nLU Factorization 274\n10.1 Overview of LU Factorization 275\n10.2 Gauss Elimination as LU Factorization\n10.3 Cholesky Factorization 283\n10.4 MATLAB Left Division 286\nProblems 287\n\n276\n\n213\n\nviii\n\nCONTENTS\n\nCHAPTER 11\nMatrix Inverse and Condition\n\n288\n\n11.1 The Matrix Inverse 288\n11.2 Error Analysis and System Condition 292\n11.3 Case Study: Indoor Air Pollution 297\nProblems 300\n\nCHAPTER 12\nIterative Methods\n\n305\n\n12.1\t\t\t Linear Systems: Gauss-Seidel 305\n12.2\t\t Nonlinear Systems 312\n12.3 Case Study: Chemical Reactions 320\nProblems 323\n\nCHAPTER 13\nEigenvalues 326\n13.1\t\t Mathematical Background 328\n13.2 Physical Background 331\n13.3 The Power Method 333\n13.4 MATLAB Function: eig 336\n13.5 Case Study: Eigenvalues and Earthquakes 337\nProblems 340\n\nPart Four\n\nCurve Fitting 343\n4.1 Overview 343\n4.2 Part Organization 345\n\nCHAPTER 14\nLinear Regression 346\n14.1\t\t Statistics Review 348\n14.2 Random Numbers and Simulation 353\n14.3 Linear Least-Squares Regression 358\n14.4 Linearization of Nonlinear Relationships 366\n14.5 Computer Applications 370\n14.6 Case Study: Enzyme Kinetics 373\nProblems 378\n\nix\n\nCONTENTS\b\n\nCHAPTER 15\nGeneral Linear Least-Squares and Nonlinear Regression\n15.1\t\tPolynomial Regression 385\n15.2 Multiple Linear Regression 389\n15.3 General Linear Least Squares 391\n15.4 QR Factorization and the Backslash Operator 394\n15.5 Nonlinear Regression 395\n15.6 Case Study: Fitting Experimental Data 397\nProblems 399\n\nCHAPTER 16\nFourier Analysis 404\n16.1\t\t Curve Fitting with Sinusoidal Functions\n16.2 Continuous Fourier Series 411\n16.3 Frequency and Time Domains 414\n16.4 Fourier Integral and Transform 415\n16.5 Discrete Fourier Transform (DFT) 418\n16.6 The Power Spectrum 423\n16.7 Case Study: Sunspots 425\nProblems 426\n\n405\n\nCHAPTER 17\nPolynomial Interpolation\n\n429\n\n17.1\t\t Introduction to Interpolation 430\n17.2\t\t Newton Interpolating Polynomial 433\n17.3\t\t Lagrange Interpolating Polynomial 441\n17.4\t\tInverse Interpolation 444\n17.5\t\t Extrapolation and Oscillations 445\nProblems 449\n\nCHAPTER 18\nSplines and Piecewise Interpolation\n\n453\n\n18.1\t\t Introduction to Splines 453\n18.2 Linear Splines 455\n18.4 Cubic Splines 462\n18.5 Piecewise Interpolation in MATLAB 468\n18.6 Multidimensional Interpolation 473\n18.7 Case Study: Heat Transfer 476\nProblems 480\n\n385\n\nx\n\nCONTENTS\n\nPart Five Integration and Differentiation\n5.1 Overview 485\n5.2 Part Organization\n\n485\n\n486\n\nCHAPTER 19\nNumerical Integration Formulas 488\n19.1\t\t Introduction and Background 489\n19.2 Newton-Cotes Formulas 492\n19.3 The Trapezoidal Rule 494\n19.4 Simpson’s Rules 501\n19.5 Higher-Order Newton-Cotes Formulas 507\n19.6 Integration 
with Unequal Segments 508\n19.7\t\tOpen Methods 512\n19.8 Multiple Integrals 512\n19.9 Case Study: Computing Work with Numerical Integration 515\nProblems 518\n\nCHAPTER 20\nNumerical Integration of Functions 524\n20.1\t\t\t Introduction 524\n20.2 Romberg Integration 525\n20.5 Case Study: Root-Mean-Square Current 540\nProblems 544\n\nCHAPTER 21\nNumerical Differentiation 548\n21.1\t\t Introduction and Background 549\n21.2 High-Accuracy Differentiation Formulas 552\n21.3 Richardson Extrapolation 555\n21.4 Derivatives of Unequally Spaced Data 557\n21.5 Derivatives and Integrals for Data with Errors 558\n21.6 Partial Derivatives 559\n21.7\t\t Numerical Differentiation with MATLAB 560\n21.8 Case Study: Visualizing Fields 565\nProblems 567\n\nxi\n\nCONTENTS\b\n\nPart six\n\nOrdinary Differential Equations\n6.1 Overview 573\n6.2 Part Organization\n\n573\n\n577\n\nCHAPTER 22\nInitial-Value Problems\n\n579\n\n22.1 Overview 581\n22.2 Euler’s Method 581\n22.3 Improvements of Euler’s Method 587\n22.4 Runge-Kutta Methods 593\n22.5 Systems of Equations 598\n22.6 Case Study: Predator-Prey Models and Chaos 604\nProblems 609\n\nCHAPTER 23\n\n615\n\n23.2 Multistep Methods 624\n23.3 Stiffness 628\n23.4 MATLAB Application: Bungee Jumper with Cord 634\n23.5 Case Study: Pliny’s Intermittent Fountain 635\nProblems 640\n\nCHAPTER 24\nBoundary-Value Problems 646\n24.1\t\t\t Introduction and Background 647\n24.2 The Shooting Method 651\n24.3 Finite-Difference Methods 658\n24.4 MATLAB Function: bvp4c 665\nProblems 668\n\nAPPENDIX A: MATLAB BUILT-IN FUNCTIONS 674\nAPPENDIX B: MATLAB M-FILE FUNCTIONS 676\nAPPENDIX C: INTRODUCTION TO SIMULINK 677\nBIBLIOGRAPHY 685\nINDEX 687\n\nRequired=Results\n\nMcGraw-Hill Connect®\nLearn Without Limits\n\nConnect is a teaching and learning platform that is proven\nto deliver better results for students and instructors.\nConnect empowers students by continually adapting to deliver\nprecisely what they need, when they need it, and how they\nneed it, so your class time is more engaging and effective.\n\n73% of instructors who use\nConnect require it; instructor\nsatisfaction increases by 28%\nwhen Connect is required.\n\nAnalytics\n\nUsing Connect improves retention rates\nby 19.8%, passing rates by\n12.7%, and exam scores by 9.1%.\n\nConnect Insight®\nConnect Insight is Connect’s new one-of-a-kind\nvisual analytics dashboard—now available for both\ninstructors and students—that provides at-aglance information regarding student performance,\nwhich is immediately actionable. By presenting\nassignment, assessment, and topical performance\nresults together with a time metric that is easily visible for\naggregate or individual results, Connect Insight gives the user\nthe ability to take a just-in-time approach to teaching and\nlearning, which was never before available. Connect Insight presents\ndata that empowers students and helps instructors improve class\nperformance in a way that is efficient and effective.\n\nMobile\nConnect’s new, intuitive mobile interface gives students\nand instructors flexible and convenient, anytime–anywhere\n\nStudents can view\ntheir results for any\nConnect course.\n\nDESIGNED TO TRANSFORM\n\nMore students earn A’s and\nB’s when they use McGraw-Hill\n\nSmartBook®\nProven to help students improve grades and study more\nefficiently, SmartBook contains the same content within\nthe print book, but actively tailors that content to the\nneeds of the individual. 
SmartBook’s adaptive technology\nprovides precise, personalized instruction on what the\nstudent should do next, guiding the student to master\nand remember key concepts, targeting gaps in knowledge\nand offering customized feedback, and driving the student\ntoward comprehension and retention of the subject matter.\nAvailable on tablets, SmartBook puts learning at the student’s\nfingertips—anywhere, anytime.\n\nOver 8 billion questions have been\nEducation products more intelligent,\nreliable, and precise.\n\nwww.mheducation.com\n\nPREFACE\n\nThis book is designed to support a one-semester course in numerical methods. It has been\nwritten for students who want to learn and apply numerical methods in order to solve problems in engineering and science. As such, the methods are motivated by problems rather\nthan by mathematics. That said, sufficient theory is provided so that students come away\nwith insight into the techniques and their shortcomings.\nMATLAB® provides a great environment for such a course. Although other environments (e.g., Excel/VBA, Mathcad) or languages (e.g., Fortran 90, C++) could\nhave been chosen, MATLAB presently offers a nice combination of handy programming features with powerful built-in numerical capabilities. On the one hand, its\nM-file programming environment allows students to implement moderately complicated algorithms in a structured and coherent fashion. On the other hand, its built-in,\nnumerical capabilities empower students to solve more difficult problems without trying to “reinvent the wheel.”\nThe basic content, organization, and pedagogy of the third edition are essentially preserved in the fourth edition. In particular, the conversational writing style is intentionally\nmaintained in order to make the book easier to read. This book tries to speak directly to the\nreader and is designed in part to be a tool for self-teaching.\nThat said, this edition differs from the past edition in three major ways: (1) new\nmaterial, (2) new and revised homework problems, and (3) an appendix introducing\n1.\n\n2.\n\n3.\nxiv\n\nNew Content. I have included new and enhanced sections on a number of topics. The\nprimary additions include material on some MATLAB functions not included in previous editions (e.g., fsolve, integrate, bvp4c), some new applications of Monte Carlo\nfor problems such as integration and optimization, and MATLAB’s new way to pass\nparameters to function functions.\nNew Homework Problems. Most of the end-of-chapter problems have been modified,\nand a variety of new problems have been added. In particular, an effort has been made\nto include several new problems for each chapter that are more challenging and difficult than the problems in the previous edition.\nI have developed a short primer on Simulink which I have my students read prior to\ncovering that topic. Although I recognize that some professors may not choose to\ncover Simulink, I included it as a teaching aid for those that do.\n\nPREFACE\b\n\nxv\n\nAside from the new material and problems, the fourth edition is very similar to the\nthird. In particular, I have endeavored to maintain most of the features contributing to its\npedagogical effectiveness including extensive use of worked examples and engineering and\nscientific applications. As with the previous edition, I have made a concerted effort to make\nthis book as “student-friendly” as possible. 
Thus, I’ve tried to keep my explanations straightforward and practical.\nAlthough my primary intent is to empower students by providing them with a sound\nintroduction to numerical problem solving, I have the ancillary objective of making this\nintroduction exciting and pleasurable. I believe that motivated students who enjoy engineering and science, problem solving, mathematics—and yes—programming, will ultimately make better professionals. If my book fosters enthusiasm and appreciation for these\nsubjects, I will consider the effort a success.\nAcknowledgments. Several members of the McGraw-Hill team have contributed to\nthis project. Special thanks are due to Jolynn Kilburg, Thomas Scaife, Ph.D., Chelsea\nHaupt, Ph.D., and Jeni McAtee for their encouragement, support, and direction.\nDuring the course of this project, the folks at The MathWorks, Inc., have truly demonstrated their overall excellence as well as their strong commitment to engineering and\nscience education. In particular, Naomi Fernandes of The MathWorks, Inc., Book Program\nhas been especially helpful and Jared Wasserman of the MathWorks Technical Support\nDepartment was of great help with technical questions.\nThe generosity of the Berger family has provided me with the opportunity to work on\ncreative projects such as this book dealing with computing and engineering. In addition,\nmy colleagues in the School of Engineering at Tufts, notably Masoud Sanayei, Babak\nMoaveni, Luis Dorfmann, Rob White, Linda Abriola, and Laurie Baise, have been very\nSignificant suggestions were also given by a number of colleagues. In particular, Dave\nClough (University of Colorado–Boulder), and Mike Gustafson (Duke University) provided valuable ideas and suggestions. In addition, a number of reviewers provided useful feedback and advice including Karen Dow Ambtman (University of Alberta), Jalal\nBehzadi (Shahid Chamran University), Eric Cochran (Iowa State University), Frederic\nGibou (University of California at Santa Barbara), Jane Grande-Allen (Rice University),\nRaphael Haftka (University of Florida), Scott Hendricks (Virginia Tech University), Ming\nHuang (University of San Diego), Oleg Igoshin (Rice University), David Jack (Baylor University), Se Won Lee (Sungkyunkwan University), Clare McCabe (Vanderbilt University),\nEckart Meiburg (University of California at Santa Barbara), Luis Ricardez (University of\nWaterloo), James Rottman (University of California, San Diego), Bingjing Su (University\nof Cincinnati), Chin-An Tan (Wayne State University), Joseph Tipton (The University\nof Evansville), Marion W. Vance (Arizona State University), Jonathan Vande Geest\n(University of Arizona), Leah J. Walker (Arkansas State University), Qiang Hu (University\nof Alabama, Huntsville), Yukinobu Tanimoto (Tufts University), Henning T. Søgaard\n(Aarhus University), and Jimmy Feng (University of British Columbia).\nIt should be stressed that although I received useful advice from the aforementioned\nindividuals, I am responsible for any inaccuracies or mistakes you may find in this book.\n\nxvi\n\nPREFACE\n\nFinally, I want to thank my family, and in particular my wife, Cynthia, for the love,\npatience, and support they have provided through the time I’ve spent on this project.\nSteven C. Chapra\nTufts University\nMedford, Massachusetts\nsteven.chapra@tufts.edu\nPEDAGOGICAL TOOLS\n\nTheory Presented as It Informs Key Concepts. The text is intended for Numerical\nMethods users, not developers. 
Therefore, theory is not included for “theory’s sake,” for example no proofs. Theory is included as it informs key concepts such as the Taylor series, convergence, condition, etc. Hence, the student is shown how the theory connects with practical\nissues in problem solving.\nIntroductory MATLAB Material. The text includes two introductory chapters on how to\nuse MATLAB. Chapter 2 shows students how to perform computations and create graphs\nin MATLAB’s standard command mode. Chapter 3 provides a primer on developing\nnumerical programs via M\n ATLAB M-file functions. Thus, the text provides students with\nthe means to develop their own numerical algorithms as well as to tap into MATLAB’s\npowerful built-in routines.\nAlgorithms Presented Using MATLAB M-files. Instead of using pseudocode, this book\npresents algorithms as well-structured MATLAB M-files. Aside from being useful computer programs, these provide students with models for their own M-files that they will\ndevelop as homework exercises.\nWorked Examples and Case Studies. Extensive worked examples are laid out in detail\nso that students can clearly follow the steps in each numerical computation. The case studies consist of engineering and science applications which are more complex and richer than\nthe worked examples. They are placed at the ends of selected chapters with the intention\nof (1) illustrating the nuances of the methods and (2) showing more realistically how the\nmethods along with MATLAB are applied for problem solving.\nProblem Sets. The text includes a wide variety of problems. Many are drawn from engineering and scientific disciplines. Others are used to illustrate numerical techniques and\ntheoretical concepts. Problems include those that can be solved with a pocket calculator as\nwell as others that require computer solution with MATLAB.\nUseful Appendices and Indexes. Appendix A contains MATLAB commands, Appendix\nB contains M-file functions, and new Appendix C contains a brief Simulink primer.\nInstructor Resources. Solutions Manual, Lecture PowerPoints, Text images in PowerPoint, M-files and additional MATLAB resources are available through Connect®.\n\nPart One\nModeling, Computers,\nand Error Analysis\n1.1\n\nMOTIVATION\nWhat are numerical methods and why should you study them?\nNumerical methods are techniques by which mathematical problems are formulated\nso that they can be solved with arithmetic and logical operations. Because digital computers excel at performing such operations, numerical methods are sometimes referred to as\ncomputer mathematics.\nIn the pre–computer era, the time and drudgery of implementing such calculations\nseriously limited their practical use. However, with the advent of fast, inexpensive digital\ncomputers, the role of numerical methods in engineering and scientific problem solving\nhas exploded. Because they figure so prominently in much of our work, I believe that\nnumerical methods should be a part of every engineer’s and scientist’s basic education.\nJust as we all must have solid foundations in the other areas of mathematics and science,\nwe should also have a fundamental understanding of numerical methods. In particular, we\nshould have a solid appreciation of both\ntheir capabilities and their limitations.\nreasons why you should study numerical\nmethods:\n1. 
N\n\u0007 umerical methods greatly expand the\ntypes of problems you can address.\nThey are capable of handling large systems of equations, nonlinearities, and\ncomplicated geometries that are not\nuncommon in engineering and science\nand that are often impossible to solve\nanalytically with standard calculus. As\nsuch, they greatly enhance your problem-solving skills.\n2. \u0007Numerical methods allow you to use\n“canned” software with insight. During\n1\n\n2\n\nPART 1 Modeling, Computers, and Error Analysis\n\n3.\n\n4.\n\n5.\n\nyour career, you will invariably have occasion to use commercially available prepackaged computer programs that involve numerical methods. The intelligent use of these\nprograms is greatly enhanced by an understanding of the basic theory underlying the\nmethods. In the absence of such understanding, you will be left to treat such packages\nas “black boxes” with little critical insight into their inner workings or the validity of\nthe results they produce.\nMany problems cannot be approached using canned programs. If you are conversant\nwith numerical methods, and are adept at computer programming, you can design\nyour own programs to solve problems without having to buy or commission expensive\nsoftware.\nNumerical methods are an efficient vehicle for learning to use computers. Because numerical methods are expressly designed for computer implementation, they are ideal for\nillustrating the computer’s powers and limitations. When you successfully implement\nnumerical methods on a computer, and then apply them to solve otherwise intractable\nproblems, you will be provided with a dramatic demonstration of how computers can\nserve your professional development. At the same time, you will also learn to acknowledge and control the errors of approximation that are part and parcel of large-scale\nnumerical calculations.\nNumerical methods provide a vehicle for you to reinforce your understanding of mathematics. Because one function of numerical methods is to reduce higher mathematics\nto basic arithmetic operations, they get at the “nuts and bolts” of some otherwise\nobscure topics. Enhanced understanding and insight can result from this alternative\nperspective.\n\nWith these reasons as motivation, we can now set out to understand how numerical\nmethods and digital computers work in tandem to generate reliable solutions to mathematical problems. The remainder of this book is devoted to this task.\n\n1.2\n\nPART ORGANIZATION\nThis book is divided into six parts. The latter five parts focus on the major areas of numerical methods. Although it might be tempting to jump right into this material, Part One\nconsists of four chapters dealing with essential background material.\nChapter 1 provides a concrete example of how a numerical method can be employed\nto solve a real problem. To do this, we develop a mathematical model of a free-falling\nbungee jumper. The model, which is based on Newton’s second law, results in an ordinary\ndifferential equation. After first using calculus to develop a closed-form solution, we then\nshow how a comparable solution can be generated with a simple numerical method. We\nend the chapter with an overview of the major areas of numerical methods that we cover in\nParts Two through Six.\nChapters 2 and 3 provide an introduction to the MATLAB® software environment.\nChapter 2 deals with the standard way of operating MATLAB by entering commands one\nat a time in the so-called calculator, or command, mode. 
This interactive mode provides a straightforward means to orient you to the environment and illustrates how it is used for common operations such as performing calculations and creating plots.

Chapter 3 shows how MATLAB’s programming mode provides a vehicle for assembling individual commands into algorithms. Thus, our intent is to illustrate how MATLAB serves as a convenient programming environment to develop your own software.

Chapter 4 deals with the important topic of error analysis, which must be understood for the effective use of numerical methods. The first part of the chapter focuses on the roundoff errors that result because digital computers cannot represent some quantities exactly. The latter part addresses truncation errors that arise from using an approximation in place of an exact mathematical procedure.

1
Mathematical Modeling, Numerical Methods, and Problem Solving

CHAPTER OBJECTIVES
The primary objective of this chapter is to provide you with a concrete idea of what numerical methods are and how they relate to engineering and scientific problem solving. Specific objectives and topics covered are

• Learning how mathematical models can be formulated on the basis of scientific principles to simulate the behavior of a simple physical system.
• Understanding how numerical methods afford a means to generate solutions in a manner that can be implemented on a digital computer.
• Understanding the different types of conservation laws that lie beneath the models used in the various engineering disciplines and appreciating the difference between steady-state and dynamic solutions of these models.
• Learning about the different types of numerical methods we will cover in this book.

YOU’VE GOT A PROBLEM

Suppose that a bungee-jumping company hires you. You’re given the task of predicting the velocity of a jumper (Fig. 1.1) as a function of time during the free-fall part of the jump. This information will be used as part of a larger analysis to determine the length and required strength of the bungee cord for jumpers of different mass.

You know from your studies of physics that the acceleration should be equal to the ratio of the force to the mass (Newton’s second law). Based on this insight and your knowledge of physics and fluid mechanics, you develop the following mathematical model for the rate of change of velocity with respect to time,

dυ/dt = g − (cd/m)υ²

where υ = downward vertical velocity (m/s), t = time (s), g = the acceleration due to gravity (≅ 9.81 m/s²), cd = a lumped drag coefficient (kg/m), and m = the jumper’s mass (kg). The drag coefficient is called “lumped” because its magnitude depends on factors such as the jumper’s area and the fluid density (see Sec. 1.4).

Because this is a differential equation, you know that calculus might be used to obtain an analytical or exact solution for υ as a function of t. However, in the following pages, we will illustrate an alternative solution approach. This will involve developing a computer-oriented numerical or approximate solution.

Aside from showing you how the computer can be used to solve this particular problem, our more general objective will be to illustrate (a) what numerical methods are and (b) how they figure in engineering and scientific problem solving.
In so doing, we will also show how mathematical models figure prominently in the way engineers and scientists use numerical methods in their work.

FIGURE 1.1 Forces acting on a free-falling bungee jumper: the downward force due to gravity and the upward force due to air resistance.

1.1 A SIMPLE MATHEMATICAL MODEL

A mathematical model can be broadly defined as a formulation or equation that expresses the essential features of a physical system or process in mathematical terms. In a very general sense, it can be represented as a functional relationship of the form

Dependent variable = f(independent variables, parameters, forcing functions)  (1.1)

where the dependent variable is a characteristic that typically reflects the behavior or state of the system; the independent variables are usually dimensions, such as time and space, along which the system’s behavior is being determined; the parameters are reflective of the system’s properties or composition; and the forcing functions are external influences acting upon it.

The actual mathematical expression of Eq. (1.1) can range from a simple algebraic relationship to large complicated sets of differential equations. For example, on the basis of his observations, Newton formulated his second law of motion, which states that the time rate of change of momentum of a body is equal to the resultant force acting on it. The mathematical expression, or model, of the second law is the well-known equation

F = ma  (1.2)

where F is the net force acting on the body (N, or kg m/s²), m is the mass of the object (kg), and a is its acceleration (m/s²).

The second law can be recast in the format of Eq. (1.1) by merely dividing both sides by m to give

a = F/m  (1.3)

where a is the dependent variable reflecting the system’s behavior, F is the forcing function, and m is a parameter. Note that for this simple case there is no independent variable because we are not yet predicting how acceleration varies in time or space.

Equation (1.3) has a number of characteristics that are typical of mathematical models of the physical world.

• It describes a natural process or system in mathematical terms.
• It represents an idealization and simplification of reality. That is, the model ignores negligible details of the natural process and focuses on its essential manifestations. Thus, the second law does not include the effects of relativity that are of minimal importance when applied to objects and forces that interact on or about the earth’s surface at velocities and on scales visible to humans.
• Finally, it yields reproducible results and, consequently, can be used for predictive purposes. For example, if the force on an object and its mass are known, Eq. (1.3) can be used to compute acceleration.

Because of its simple algebraic form, the solution of Eq. (1.2) was obtained easily. However, other mathematical models of physical phenomena may be much more complex, and either cannot be solved exactly or require more sophisticated mathematical techniques than simple algebra for their solution. To illustrate a more complex model of this kind, Newton’s second law can be used to determine the terminal velocity of a free-falling body near the earth’s surface. Our falling body will be a bungee jumper (Fig. 1.1).
For this case, a model can be derived by expressing the acceleration as the time rate of change of the velocity (dυ/dt) and substituting it into Eq. (1.3) to yield

dυ/dt = F/m  (1.4)

where υ is velocity (in meters per second). Thus, the rate of change of the velocity is equal to the net force acting on the body normalized to its mass. If the net force is positive, the object will accelerate. If it is negative, the object will decelerate. If the net force is zero, the object’s velocity will remain at a constant level.

Next, we will express the net force in terms of measurable variables and parameters. For a body falling within the vicinity of the earth, the net force is composed of two opposing forces: the downward pull of gravity FD and the upward force of air resistance FU (Fig. 1.1):

F = FD + FU  (1.5)

If force in the downward direction is assigned a positive sign, the second law can be used to formulate the force due to gravity as

FD = mg  (1.6)

where g is the acceleration due to gravity (9.81 m/s²).

Air resistance can be formulated in a variety of ways. Knowledge from the science of fluid mechanics suggests that a good first approximation would be to assume that it is proportional to the square of the velocity,

FU = −cd υ²  (1.7)

where cd is a proportionality constant called the lumped drag coefficient (kg/m). Thus, the greater the fall velocity, the greater the upward force due to air resistance. The parameter cd accounts for properties of the falling object, such as shape or surface roughness, that affect air resistance. For the present case, cd might be a function of the type of clothing or the orientation used by the jumper during free fall.

The net force is the difference between the downward and upward force. Therefore, Eqs. (1.4) through (1.7) can be combined to yield

dυ/dt = g − (cd/m)υ²  (1.8)

Equation (1.8) is a model that relates the acceleration of a falling object to the forces acting on it. It is a differential equation because it is written in terms of the differential rate of change (dυ/dt) of the variable that we are interested in predicting. However, in contrast to the solution of Newton’s second law in Eq. (1.3), the exact solution of Eq. (1.8) for the velocity of the jumper cannot be obtained using simple algebraic manipulation. Rather, more advanced techniques such as those of calculus must be applied to obtain an exact or analytical solution. For example, if the jumper is initially at rest (υ = 0 at t = 0), calculus can be used to solve Eq. (1.8) for

υ(t) = √(gm/cd) tanh(√(gcd/m) t)  (1.9)

where tanh is the hyperbolic tangent that can be either computed directly¹ or via the more elementary exponential function as in

tanh x = (eˣ − e⁻ˣ)/(eˣ + e⁻ˣ)  (1.10)

¹ MATLAB allows direct calculation of the hyperbolic tangent via the built-in function tanh(x).

Note that Eq. (1.9) is cast in the general form of Eq. (1.1) where υ(t) is the dependent variable, t is the independent variable, cd and m are parameters, and g is the forcing function.

EXAMPLE 1.1 Analytical Solution to the Bungee Jumper Problem

Problem Statement. A bungee jumper with a mass of 68.1 kg leaps from a stationary hot air balloon. Use Eq. (1.9) to compute velocity for the first 12 s of free fall. Also determine the terminal velocity that will be attained for an infinitely long cord (or alternatively, the jumpmaster is having a particularly bad day!). Use a drag coefficient of 0.25 kg/m.

Solution. Inserting the parameters into Eq. (1.9) yields

υ(t) = √(9.81(68.1)/0.25) tanh(√(9.81(0.25)/68.1) t) = 51.6938 tanh(0.18977t)

which can be used to compute

t, s	υ, m/s
0	0
2	18.7292
4	33.1118
6	42.0762
8	46.9575
10	49.4214
12	50.6175
∞	51.6938

According to the model, the jumper accelerates rapidly (Fig. 1.2). A velocity of 49.4214 m/s (about 110 mi/hr) is attained after 10 s. Note also that after a sufficiently long time, a constant velocity, called the terminal velocity, of 51.6938 m/s (115.6 mi/hr) is reached. This velocity is constant because, eventually, the force of gravity will be in balance with the air resistance. Thus, the net force is zero and acceleration has ceased.

FIGURE 1.2 The analytical solution for the bungee jumper problem as computed in Example 1.1. Velocity increases with time and asymptotically approaches a terminal velocity.
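To make the connection to MATLAB concrete, the closed-form result of Example 1.1 can be evaluated directly in the command mode described in Chap. 2. The following short script is a sketch of that calculation; the variable names are chosen here for illustration and are not taken from the text.

```
% Analytical solution (Eq. 1.9) for the bungee jumper of Example 1.1
g = 9.81;              % acceleration due to gravity, m/s^2
m = 68.1;              % jumper's mass, kg
cd = 0.25;             % lumped drag coefficient, kg/m

t = 0:2:12;                               % times at which to evaluate velocity, s
v = sqrt(g*m/cd) * tanh(sqrt(g*cd/m)*t);  % Eq. (1.9)
vterm = sqrt(g*m/cd);                     % terminal velocity, m/s

disp([t' v'])          % table of time versus velocity
fprintf('Terminal velocity = %.4f m/s\n', vterm)
```

Running these commands reproduces the tabulated values above, with the terminal velocity of 51.6938 m/s emerging as the limit of the tanh term.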
Equation (1.9) is called an analytical or closed-form solution because it exactly satisfies the original differential equation. Unfortunately, there are many mathematical models that cannot be solved exactly. In many of these cases, the only alternative is to develop a numerical solution that approximates the exact solution.

Numerical methods are those in which the mathematical problem is reformulated so it can be solved by arithmetic operations. This can be illustrated for Eq. (1.8) by realizing that the time rate of change of velocity can be approximated by (Fig. 1.3):

dυ/dt ≅ Δυ/Δt = [υ(ti+1) − υ(ti)]/(ti+1 − ti)  (1.11)

where Δυ and Δt are differences in velocity and time computed over finite intervals, υ(ti) is velocity at an initial time ti, and υ(ti+1) is velocity at some later time ti+1. Note that dυ/dt ≅ Δυ/Δt is approximate because Δt is finite. Remember from calculus that

dυ/dt = lim(Δt→0) Δυ/Δt

Equation (1.11) represents the reverse process.

FIGURE 1.3 The use of a finite difference to approximate the first derivative of υ with respect to t: the approximate slope Δυ/Δt = [υ(ti+1) − υ(ti)]/(ti+1 − ti) versus the true slope dυ/dt.

Equation (1.11) is called a finite-difference approximation of the derivative at time ti. It can be substituted into Eq. (1.8) to give

[υ(ti+1) − υ(ti)]/(ti+1 − ti) = g − (cd/m)υ(ti)²

This equation can then be rearranged to yield

υ(ti+1) = υ(ti) + [g − (cd/m)υ(ti)²](ti+1 − ti)  (1.12)

Notice that the term in brackets is the right-hand side of the differential equation itself [Eq. (1.8)]. That is, it provides a means to compute the rate of change or slope of υ. Thus, the equation can be rewritten more concisely as

υi+1 = υi + (dυi/dt)Δt  (1.13)

where the nomenclature υi designates velocity at time ti, and Δt = ti+1 − ti.

We can now see that the differential equation has been transformed into an equation that can be used to determine the velocity algebraically at ti+1 using the slope and previous values of υ and t. If you are given an initial value for velocity at some time ti, you can easily compute velocity at a later time ti+1. This new value of velocity at ti+1 can in turn be employed to extend the computation to velocity at ti+2 and so on. Thus at any time along the way,

New value = old value + slope × step size

This approach is formally called Euler’s method. We’ll discuss it in more detail when we turn to differential equations later in this book.

EXAMPLE 1.2 Numerical Solution to the Bungee Jumper Problem

Problem Statement. Perform the same computation as in Example 1.1 but use Eq. (1.12) to compute velocity with Euler’s method. Employ a step size of 2 s for the calculation.

Solution. At the start of the computation (t0 = 0), the velocity of the jumper is zero. Using this information and the parameter values from Example 1.1, Eq. (1.12) can be used to compute velocity at t1 = 2 s:

υ = 0 + [9.81 − (0.25/68.1)(0)²] × 2 = 19.62 m/s

For the next interval (from t = 2 to 4 s), the computation is repeated, with the result

υ = 19.62 + [9.81 − (0.25/68.1)(19.62)²] × 2 = 36.4137 m/s

The calculation is continued in a similar fashion to obtain additional values:

t, s	υ, m/s
0	0
2	19.6200
4	36.4137
6	46.2983
8	50.1802
10	51.3123
12	51.6008
∞	51.6938

The results are plotted in Fig. 1.4 along with the exact solution. We can see that the numerical method captures the essential features of the exact solution. However, because we have employed straight-line segments to approximate a continuously curving function, there is some discrepancy between the two results. One way to minimize such discrepancies is to use a smaller step size. For example, applying Eq. (1.12) at 1-s intervals results in a smaller error, as the straight-line segments track closer to the true solution. Using hand calculations, the effort associated with using smaller and smaller step sizes would make such numerical solutions impractical. However, with the aid of the computer, large numbers of calculations can be performed easily. Thus, you can accurately model the velocity of the jumper without having to solve the differential equation exactly.

FIGURE 1.4 Comparison of the approximate, numerical solution and the exact, analytical solution for the bungee jumper problem.

As in Example 1.2, a computational price must be paid for a more accurate numerical result. Each halving of the step size to attain more accuracy leads to a doubling of the number of computations. Thus, we see that there is a trade-off between accuracy and computational effort. Such trade-offs figure prominently in numerical methods and constitute an important theme of this book.
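The hand calculation above is tedious, but the recurrence in Eq. (1.12) maps directly onto a loop. A minimal MATLAB sketch of Example 1.2 might look like the following; the variable names are illustrative, not from the text.

```
% Euler's method (Eq. 1.12) for the bungee jumper of Example 1.2
g = 9.81; m = 68.1; cd = 0.25;   % parameters from Example 1.1
dt = 2;                          % step size, s
t = 0:dt:12;                     % time grid, s
v = zeros(size(t));              % velocity, m/s; v(1) = 0 (jumper starts at rest)

for i = 1:length(t)-1
  dvdt = g - (cd/m)*v(i)^2;      % slope from the right-hand side of Eq. (1.8)
  v(i+1) = v(i) + dvdt*dt;       % new value = old value + slope x step size
end

disp([t' v'])                    % compare with the analytical values of Example 1.1
```

Halving dt (for example, dt = 1) tightens the agreement with the analytical solution at the cost of twice as many steps, which is exactly the accuracy-versus-effort trade-off noted above.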
1.2 CONSERVATION LAWS IN ENGINEERING AND SCIENCE

Aside from Newton’s second law, there are other major organizing principles in science and engineering. Among the most important of these are the conservation laws. Although they form the basis for a variety of complicated and powerful mathematical models, the great conservation laws of science and engineering are conceptually easy to understand. They all boil down to

Change = increases − decreases  (1.14)

This is precisely the format that we employed when using Newton’s law to develop a force balance for the bungee jumper [Eq. (1.8)].

Although simple, Eq. (1.14) embodies one of the most fundamental ways in which conservation laws are used in engineering and science—that is, to predict changes with respect to time. We will give it a special name—the time-variable (or transient) computation.

Aside from predicting changes, another way in which conservation laws are applied is for cases where change is nonexistent. If change is zero, Eq. (1.14) becomes

Change = 0 = increases − decreases

or

Increases = decreases  (1.15)

Thus, if no change occurs, the increases and decreases must be in balance. This case, which is also given a special name—the steady-state calculation—has many applications in engineering and science. For example, for steady-state incompressible fluid flow in pipes, the flow into a junction must be balanced by flow going out, as in

Flow in = flow out

For the junction in Fig. 1.5, this balance can be used to compute that the flow out of the fourth pipe must be 60.

For the bungee jumper, the steady-state condition would correspond to the case where the net force was zero or [Eq. (1.8) with dυ/dt = 0]

mg = cd υ²  (1.16)

Thus, at steady state, the downward and upward forces are in balance and Eq. (1.16) can be solved for the terminal velocity

υ = √(gm/cd)

Although Eqs. (1.14) and (1.15) might appear trivially simple, they embody the two fundamental ways that conservation laws are employed in engineering and science. As such, they will form an important part of our efforts in subsequent chapters to illustrate the connection between numerical methods and engineering and science.

FIGURE 1.5 A flow balance for steady incompressible fluid flow at the junction of pipes (flow in: Pipe 1 = 100, Pipe 2 = 80; flow out: Pipe 3 = 120, Pipe 4 = ?).
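Both steady-state balances just described reduce to one-line computations. The following sketch, using the values given above, is purely illustrative:

```
% Steady-state balances from Sec. 1.2
% Flow balance at the pipe junction of Fig. 1.5: flow in = flow out
Q4 = (100 + 80) - 120;          % flow out of the fourth pipe = 60

% Terminal velocity of the bungee jumper from Eq. (1.16): mg = cd*v^2
g = 9.81; m = 68.1; cd = 0.25;
vterm = sqrt(g*m/cd);           % = 51.6938 m/s, matching Examples 1.1 and 1.2
```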
Table 1.1 summarizes some models and associated conservation laws that figure prominently in engineering. Many chemical engineering problems involve mass balances for reactors. The mass balance is derived from the conservation of mass. It specifies that the change of mass of a chemical in the reactor depends on the amount of mass flowing in minus the mass flowing out.

Civil and mechanical engineers often focus on models developed from the conservation of momentum. For civil engineering, force balances are utilized to analyze structures such as the simple truss in Table 1.1. The same principles are employed for the mechanical engineering case studies to analyze the transient up-and-down motion or vibrations of an automobile.

Finally, electrical engineering studies employ both current and energy balances to model electric circuits. The current balance, which results from the conservation of charge, is similar in spirit to the flow balance depicted in Fig. 1.5. Just as flow must balance at the junction of pipes, electric current must balance at the junction of electric wires.
The energy balance specifies that the changes of voltage around any loop of the circuit must add up to zero.

It should be noted that there are many other branches of engineering beyond chemical, civil, electrical, and mechanical. Many of these are related to the Big Four. For example, chemical engineering skills are used extensively in areas such as environmental, petroleum, and biomedical engineering. Similarly, aerospace engineering has much in common with mechanical engineering. I will endeavor to include examples from these areas in the coming pages.

1.3 NUMERICAL METHODS COVERED IN THIS BOOK

Euler’s method was chosen for this introductory chapter because it is typical of many other classes of numerical methods. In essence, most consist of recasting mathematical operations into the simple kind of algebraic and logical operations compatible with digital computers. Figure 1.6 summarizes the major areas covered in this text.

TABLE 1.1 Devices and types of balances that are commonly used in the four major areas of engineering. For each case, the conservation law on which the balance is based is specified.

• Chemical engineering (device: reactors; organizing principle: conservation of mass). Mass balance over a unit of time period: Δmass = inputs − outputs.
• Civil engineering (device: structure; organizing principle: conservation of momentum). Force balance at each node: Σ horizontal forces (FH) = 0 and Σ vertical forces (FV) = 0.
• Mechanical engineering (device: machine; organizing principle: conservation of momentum). Force balance: m d²x/dt² = downward force − upward force.
• Electrical engineering (device: circuit; organizing principles: conservation of charge and conservation of energy). Current balance for each node: Σ current (i) = 0. Voltage balance around each loop: Σ emf’s − Σ voltage drops for resistors = 0, that is, Σ ξ − Σ iR = 0.

FIGURE 1.6 Summary of the numerical methods covered in this book:
(a) Part 2, Roots and optimization. Roots: solve for x so that f(x) = 0. Optimization: solve for x so that f′(x) = 0.
(b) Part 3, Linear algebraic equations. Given the a’s and the b’s, solve for the x’s: a11x1 + a12x2 = b1 and a21x1 + a22x2 = b2.
(c) Part 4, Curve fitting: interpolation and regression.
(d) Part 5, Integration and differentiation. Integration: find the area under the curve. Differentiation: find the slope of the curve.
(e) Part 6, Differential equations. Given dy/dt ≈ Δy/Δt = f(t, y), solve for y as a function of t: yi+1 = yi + f(ti, yi)Δt.

Part Two deals with two related topics: root finding and optimization. As depicted in Fig. 1.6a, root location involves searching for the zeros of a function. In contrast, optimization involves determining a value or values of an independent variable that correspond to a “best” or optimal value of a function. Thus, as in Fig. 1.6a, optimization involves identifying maxima and minima.
Although somewhat different approaches are used, root location\nand optimization both typically arise in design contexts.\nPart Three is devoted to solving systems of simultaneous linear algebraic equations\n(Fig. 1.6b). Such systems are similar in spirit to roots of equations in the sense that they\nare concerned with values that satisfy equations. However, in contrast to satisfying a single\nequation, a set of values is sought that simultaneously satisfies a set of linear algebraic\nequations. Such equations arise in a variety of problem contexts and in all disciplines\nof engineering and science. In particular, they originate in the mathematical modeling of\nlarge systems of interconnected elements such as structures, electric circuits, and fluid\nnetworks. However, they are also encountered in other areas of numerical methods such as\ncurve fitting and differential equations.\nAs an engineer or scientist, you will often have occasion to fit curves to data points.\nThe techniques developed for this purpose can be divided into two general categories: regression and interpolation. As described in Part Four (Fig. 1.6c), regression is employed\nwhere there is a significant degree of error associated with the data. Experimental results\nare often of this kind. For these situations, the strategy is to derive a single curve that\nrepresents the general trend of the data without necessarily matching any individual\npoints.\nIn contrast, interpolation is used where the objective is to determine intermediate values between relatively error-free data points. Such is usually the case for tabulated information. The strategy in such cases is to fit a curve directly through the data points and use the\ncurve to predict the intermediate values.\nAs depicted in Fig. 1.6d, Part Five is devoted to integration and differentiation. A\nphysical interpretation of numerical integration is the determination of the area under a\ncurve. Integration has many applications in engineering and science, ranging from the\ndetermination of the centroids of oddly shaped objects to the calculation of total quantities\nbased on sets of discrete measurements. In addition, numerical integration formulas play\nan important role in the solution of differential equations. Part Five also covers methods\nfor numerical differentiation. As you know from your study of calculus, this involves the\ndetermination of a function’s slope or its rate of change.\nFinally, Part Six focuses on the solution of ordinary differential equations (Fig. 1.6e).\nSuch equations are of great significance in all areas of engineering and science. This is\nbecause many physical laws are couched in terms of the rate of change of a quantity rather\nthan the magnitude of the quantity itself. Examples range from population-forecasting\nmodels (rate of change of population) to the acceleration of a falling body (rate of change\nof velocity). Two types of problems are addressed: initial-value and boundary-value\nproblems.\n\n1.4\n\n1.4 CASE STUDY\n\n17\n\nCASE STUDY\b\n\nIT’S A REAL DRAG\nBackground. In our model of the free-falling bungee jumper, we assumed that drag\ndepends on the square of velocity (Eq. 1.7). 
A more detailed representation, which was originally formulated by Lord Rayleigh, can be written as

Fd = −(1/2) ρυ²ACd υ⃗  (1.17)

where Fd = the drag force (N), ρ = fluid density (kg/m³), A = the frontal area of the object on a plane perpendicular to the direction of motion (m²), Cd = a dimensionless drag coefficient, and υ⃗ = a unit vector indicating the direction of velocity.

This relationship, which assumes turbulent conditions (i.e., a high Reynolds number), allows us to express the lumped drag coefficient from Eq. (1.7) in more fundamental terms as

cd = (1/2) ρACd  (1.18)

Thus, the lumped drag coefficient depends on the object’s area, the fluid’s density, and a dimensionless drag coefficient. The latter accounts for all the other factors that contribute to air resistance such as the object’s “roughness.” For example, a jumper wearing a baggy outfit will have a higher Cd than one wearing a sleek jumpsuit.

Note that for cases where velocity is very low, the flow regime around the object will be laminar and the relationship between the drag force and velocity becomes linear. This is referred to as Stokes drag.

In developing our bungee jumper model, we assumed that the downward direction was positive. Thus, Eq. (1.7) is an accurate representation of Eq. (1.17), because υ⃗ = +1 and the drag force is negative. Hence, drag reduces velocity.

But what happens if the jumper has an upward (i.e., negative) velocity? In this case, υ⃗ = −1 and Eq. (1.17) yields a positive drag force. Again, this is physically correct as the positive drag force acts downward against the upward negative velocity.

Unfortunately, for this case, Eq. (1.7) yields a negative drag force because it does not include the unit directional vector. In other words, by squaring the velocity, its sign and hence its direction is lost. Consequently, the model yields the physically unrealistic result that air resistance acts to accelerate an upward velocity!

In this case study, we will modify our model so that it works properly for both downward and upward velocities. We will test the modified model for the same case as Example 1.2, but with an initial value of υ(0) = −40 m/s. In addition, we will also illustrate how we can extend the numerical analysis to determine the jumper’s position.

Solution. The following simple modification allows the sign to be incorporated into the drag force:

Fd = −(1/2) ρυ∣υ∣ACd  (1.19)

or in terms of the lumped drag:

Fd = −cd υ∣υ∣  (1.20)

Thus, the differential equation to be solved is

dυ/dt = g − (cd/m)υ∣υ∣  (1.21)

In order to determine the jumper’s position, we recognize that distance traveled, x (m), is related to velocity by

dx/dt = −υ  (1.22)

In contrast to velocity, this formulation assumes that upward distance is positive. In the same fashion as Eq. (1.12), this equation can be integrated numerically with Euler’s method:

xi+1 = xi − υ(ti)Δt  (1.23)

Assuming that the jumper’s initial position is defined as x(0) = 0, and using the parameter values from Examples 1.1 and 1.2, the velocity and distance at t = 2 s can be computed as

υ(2) = −40 + [9.81 − (0.25/68.1)(−40)(40)] × 2 = −8.6326 m/s
x(2) = 0 − (−40) × 2 = 80 m

Note that if we had used the incorrect drag formulation, the results would be −32.1274 m/s and 80 m.

The computation can be repeated for the next interval (t = 2 to 4 s):

υ(4) = −8.6326 + [9.81 − (0.25/68.1)(−8.6326)(8.6326)] × 2 = 11.5346 m/s
x(4) = 80 − (−8.6326) × 2 = 97.2651 m

The incorrect drag formulation gives −20.0858 m/s and 144.2549 m.

The calculation is continued and the results shown in Fig. 1.7 along with those obtained with the incorrect drag model. Notice that the correct formulation decelerates more rapidly because drag always diminishes the velocity.

With time, both velocity solutions converge on the same terminal velocity because eventually both are directed downward, in which case Eq. (1.7) is correct. However, the impact on the height prediction is quite dramatic, with the incorrect drag case resulting in a much higher trajectory.

This case study demonstrates how important it is to have the correct physical model. In some cases, the solution will yield results that are clearly unrealistic. The current example is more insidious as there is no visual evidence that the incorrect solution is wrong. That is, the incorrect solution “looks” reasonable.

FIGURE 1.7 Plots of (a) velocity and (b) height for the free-falling bungee jumper with an upward (negative) initial velocity generated with Euler’s method. Results for both the correct (Eq. 1.20) and incorrect (Eq. 1.7) drag formulations are displayed.
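A rough MATLAB sketch of this case study, with illustrative variable names, marches Eqs. (1.21) and (1.23) forward with Euler’s method:

```
% Euler's method for the bungee jumper with sign-corrected drag (Eqs. 1.21 and 1.23)
g = 9.81; m = 68.1; cd = 0.25;   % parameters from Examples 1.1 and 1.2
dt = 2;                          % step size, s
t = 0:dt:12;
v = zeros(size(t));  v(1) = -40; % downward velocity, m/s (negative = moving upward)
x = zeros(size(t));  x(1) = 0;   % height above the starting point, m (upward positive)

for i = 1:length(t)-1
  v(i+1) = v(i) + (g - (cd/m)*v(i)*abs(v(i)))*dt;  % Eq. (1.21): drag always opposes the motion
  x(i+1) = x(i) - v(i)*dt;                         % Eq. (1.23): upward distance is positive
end

disp([t' v' x'])   % at t = 2 s this reproduces -8.6326 m/s and 80 m
```

Replacing v(i)*abs(v(i)) with v(i)^2 reproduces the incorrect-drag results plotted in Fig. 1.7.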
PROBLEMS

1.1 Use calculus to verify that Eq. (1.9) is a solution of Eq. (1.8) for the initial condition υ(0) = 0.
1.2 Use calculus to solve Eq. (1.21) for the case where the initial velocity is (a) positive and (b) negative. (c) Based on your results for (a) and (b), perform the same computation as in Example 1.1 but with an initial velocity of −40 m/s. Compute values of the velocity from t = 0 to 12 s at intervals of 2 s. Note that for this case, the zero velocity occurs at t = 3.470239 s.
1.3 The following information is available for a bank account:

Date	Deposits	Withdrawals	Balance
5/1			1512.33
	220.13	327.26
6/1
	216.80	378.61
7/1
	450.25	106.80
8/1
	127.31	350.61
9/1

Note that the money earns interest which is computed as

Interest = iBi

where i = the interest rate expressed as a fraction per month, and Bi the initial balance at the beginning of the month.
(a) Use the conservation of cash to compute the balance on 6∕1, 7∕1, 8∕1, and 9∕1 if the interest rate is 1% per month (i = 0.01∕month).
Show each step in the computation.\n(b) Write a differential equation for the cash balance in the\nform\n___\n dB = f [D(t), W(t), i]\ndt\nwhere t = time (months), D(t) = deposits as a function\nof time (\\$/month), W(t) = withdrawals as a function of\ntime (\\$/month). For this case, assume that interest is\ncompounded continuously; that is, interest = iB.\n(c) Use Euler’s method with a time step of 0.5 month to\nsimulate the balance. Assume that the deposits and withdrawals are applied uniformly over the month.\n(d) Develop a plot of balance versus time for (a) and (c).\n1.4 Repeat Example 1.2. Compute the velocity to t = 12 s,\nwith a step size of (a) 1 and (b) 0.5 s. Can you make any\nstatement regarding the errors of the calculation based on\nthe results?\n\n1.5 Rather than the nonlinear relationship of Eq. (1.7), you\nmight choose to model the upward force on the bungee\njumper as a linear relationship:\nFU = −c′υ\nwhere c′ = a first-order drag coefficient (kg/s).\n(a) Using calculus, obtain the closed-form solution for the\ncase where the jumper is initially at rest (υ = 0 at t = 0).\n(b) Repeat the numerical calculation in Example 1.2 with\nthe same initial condition and parameter values. Use a\nvalue of 11.5 kg/s for c′.\n1.6 For the free-falling bungee jumper with linear drag\n(Prob. 1.5), assume a first jumper is 70 kg and has a drag coefficient of 12 kg/s. If a second jumper has a drag coefficient\nof 15 kg/s and a mass of 80 kg, how long will it take her to\nreach the same velocity jumper 1 reached in 9 s?\n1.7 For the second-order drag model (Eq. 1.8), compute the\nvelocity of a free-falling parachutist using Euler’s method\nfor the case where m = 80 kg and cd = 0.25 kg/m. Perform\nthe calculation from t = 0 to 20 s with a step size of 1 s. Use\nan initial condition that the parachutist has an upward velocity of 20 m/s at t = 0. At t = 10 s, assume that the chute is\ninstantaneously deployed so that the drag coefficient jumps\nto 1.5 kg/m.\n1.8 The amount of a uniformly distributed radioactive contaminant contained in a closed reactor is measured by its\nconcentration c (becquerel/liter or Bq/L). The contaminant\ndecreases at a decay rate proportional to its concentration;\nthat is\nDecay rate = −kc\nwhere k is a constant with units of day−1. Therefore, according to Eq. (1.14), a mass balance for the reactor can be\nwritten as\n___\n dc \n= −kc\ndt\nchange\ndecrease\n=\n in mass by decay \n\n(\n\n) (\n\n)\n\n(a) Use Euler’s method to solve this equation from t = 0 to\n1 d with k = 0.175 d–1. Employ a step size of Δt = 0.1 d.\nThe concentration at t = 0 is 100 Bq/L.\n(b) Plot the solution on a semilog graph (i.e., ln c versus t)\nand determine the slope. Interpret your results.\n1.9 A storage tank (Fig. P1.9) contains a liquid at depth y\nwhere y = 0 when the tank is half full. Liquid is withdrawn\nat a constant flow rate Q to meet demands. The contents are\n\n21\n\nPROBLEMS\b\n\ny\nytop\n\nrtop\n\nQin\n\ny\n0\ns\n\nQout\n\nyout\n\n1\n\n0\n\nFIGURE P1.9\n\nFIGURE P1.11\n\nresupplied at a sinusoidal rate 3Q sin2(t). Equation (1.14)\ncan be written for this system as\n\nThe liquid flows in at a sinusoidal rate of Q in = 3 sin2(t) and\nflows out according to\n\nd(Ay)\n _____\n = 3 Q sin2(t) −\nQ\ndt\nchange in\n\n = (inflow) − (outflow)\nvolume\n\n(\n\n)\n\nor, since the surface area A is constant\ndy\nQ\nQ\n___\n = 3 __\n sin2(t) − __\n \nA\nA\ndt\nUse Euler’s method to solve for the depth y from t = 0 to\n10 d with a step size of 0.5 d. 
The parameter values are A =\n1250 m2 and Q = 450 m3/d. Assume that the initial condition\nis y = 0.\n1.10 For the same storage tank described in Prob. 1.9, suppose that the outflow is not constant but rather depends on\nthe depth. For this case, the differential equation for depth\ncan be written as\ndy\nα(1 + y)1.5\nQ\n ___ = 3 __ sin2(t) − _________\n\nA\nA\ndt\nUse Euler’s method to solve for the depth y from t = 0 to\n10 d with a step size of 0.5 d. The parameter values are A =\n1250 m2, Q = 450 m3/d, and α = 150. Assume that the initial\ncondition is y = 0.\n1.11 Apply the conservation of volume (see Prob. 1.9) to simulate the level of liquid in a conical storage tank (Fig. P1.11).\n\nQout = 3(y − yout)1.5\nQout = 0\n\ny > yout\ny ≤ yout\n\nwhere flow has units of m3/d and y = the elevation of the\nwater surface above the bottom of the tank (m). Use Euler’s\nmethod to solve for the depth y from t = 0 to 10 d with a step\nsize of 0.5 d. The parameter values are rtop = 2.5 m, ytop = 4 m,\nand yout = 1 m. Assume that the level is initially below the\noutlet pipe with y(0) = 0.8 m.\n1.12 A group of 35 students attend a class in an insulated room which measures 11 by 8 by 3 m. Each student\ntakes up about 0.075 m3 and gives out about 80 W of heat\n(1 W = 1 J/s). Calculate the air temperature rise during\nthe first 20 minutes of the class if the room is completely\nsealed and insulated. Assume the heat capacity Cυ for air is\n0.718 kJ/(kg K). Assume air is an ideal gas at 20 °C and\n101.325 kPa. Note that the heat absorbed by the air Q is\nrelated to the mass of the air m the heat capacity, and the\nchange in temperature by the following relationship:\n\n∫\n\nT2\n\nQ = m Cυ dT = mCυ (T2 − T1)\nT1\n\nThe mass of air can be obtained from the ideal gas law:\nPV = ____\n m RT\nMwt\nwhere P is the gas pressure, V is the volume of the gas, Mwt\nis the molecular weight of the gas (for air, 28.97 kg/kmol),\nand R is the ideal gas constant [8.314 kPa m3/(kmol K)].\n\n22\n\nMathematical Modeling, Numerical Methods, and Problem Solving\n\nSkin\nUrine\n\nFeces\n\nFood\n\nAir\nBODY\n\nDrink\n\nSweat\n\n(c) Use calculus to obtain the closed form solution where\nυ = υ0 at x = 0.\n(d) Use Euler’s method to obtain a numerical solution from\nx = 0 to 100,000 m using a step of 10,000 m where the\ninitial velocity is 1500 m/s upward. Compare your result\nwith the analytical solution.\n1.15 Suppose that a spherical droplet of liquid evaporates at\na rate that is proportional to its surface area.\n___\n dV = −kA\ndt\n\nMetabolism\n\nFIGURE P1.13\n\n1.13 Figure P1.13 depicts the various ways in which an\naverage man gains and loses water in one day. One liter\nis ingested as food, and the body metabolically produces\n0.3 liters. In breathing air, the exchange is 0.05 liters while\ninhaling, and 0.4 liters while exhaling over a one-day period.\nThe body will also lose 0.3, 1.4, 0.2, and 0.35 liters through\nsweat, urine, feces, and through the skin, respectively. To\nmaintain steady state, how much water must be drunk per day?\n1.14 In our example of the free-falling bungee jumper, we\nassumed that the acceleration due to gravity was a constant\nvalue of 9.81 m/s2. Although this is a decent approximation when we are examining falling objects near the surface\nof the earth, the gravitational force decreases as we move\nabove sea level. 
A more general representation based on\nNewton’s inverse square law of gravitational attraction can\nbe written as\n2\ng(x) = g(0) _______\n R 2 \n(R + x)\nwhere g(x) = gravitational acceleration at altitude x (in m)\nmeasured upward from the earth’s surface (m/s2), g(0) =\ngravitational acceleration at the earth’s surface (≅ 9.81 m/s2),\nand R = the earth’s radius (≅ 6.37 × 106 m).\n(a) In a fashion similar to the derivation of Eq. (1.8), use a\nforce balance to derive a differential equation for velocity as a function of time that utilizes this more complete\nrepresentation of gravitation. However, for this derivation, assume that upward velocity is positive.\n(b) For the case where drag is negligible, use the chain rule\nto express the differential equation as a function of altitude rather than time. Recall that the chain rule is\n\n___\n dυ = ___\n dυ ___\n dx \ndt dx dt\n\nwhere V = volume (mm3), t = time (min), k = the evaporation rate (mm/min), and A = surface area (mm2). Use\nEuler’s method to compute the volume of the droplet from\nt = 0 to 10 min using a step size of 0.25 min. Assume that\nk = 0.08 mm/min and that the droplet initially has a radius of\n2.5 mm. Assess the validity of your results by determining\nit is consistent with the evaporation rate.\n1.16 A fluid is pumped into the network shown in Fig. P1.16.\nIf Q2 = 0.7, Q3 = 0.5, Q7 = 0.1, and Q8 = 0.3 m3/s, determine\nthe other flows.\n1.17 Newton’s law of cooling says that the temperature of\na body changes at a rate proportional to the difference between its temperature and that of the surrounding medium\n(the ambient temperature),\ndT = −k(T − T )\n ___\na\ndt\nwhere T = the temperature of the body (°C), t = time (min),\nk = the proportionality constant (per minute), and Ta = the\nambient temperature (°C). Suppose that a cup of coffee\noriginally has a temperature of 70 °C. Use Euler’s method to\ncompute the temperature from t = 0 to 20 min using a step\nsize of 2 min if Ta = 20 °C and k = 0.019/min.\n\nQ1\n\nQ3\n\nQ2\n\nQ10\n\nFIGURE P1.16\n\nQ5\n\nQ4\n\nQ9\n\nQ6\n\nQ8\n\nQ7\n\nPROBLEMS\b\n\n23\n\n1.18 You are working as a crime scene investigator and\nmust predict the temperature of a homicide victim over a\n5-hr period. You know that the room where the victim was\nfound was at 10 °C when the body was discovered.\n(a) Use Newton’s law of cooling (Prob. 1.17) and Euler’s\nmethod to compute the victim’s body temperature for the\n5-hr period using values of k = 0.12/hr and Δt = 0.5 hr.\nAssume that the victim’s body temperature at the time\nof death was 37 °C, and that the room temperature was\nat a constant value of 10 °C over the 5-hr period.\n(b) Further investigation reveals that the room temperature\nhad actually dropped linearly from 20 to 10 °C over the\n5-hr period. Repeat the same calculation as in (a) but\nincorporate this new information.\n(c) Compare the results from (a) and (b) by plotting them\non the same graph.\n1.19 The velocity is equal to the rate of change of distance,\nx (m):\n\n1.21 As noted in Sec. 1.4, a fundamental representation of\nthe drag force, which assumes turbulent conditions (i.e., a\nhigh Reynolds number), can be formulated as\n\n___\n dx = υ(t)\ndt\n\n(P1.19)\n\nUse Euler’s method to numerically integrate Eqs. 
(P1.19)\nand (1.8) in order to determine both the velocity and distance\nfallen as a function of time for the first 10 seconds of freefall\nusing the same parameters and conditions as in Example 1.2.\nDevelop a plot of your results.\n1.20 In addition to the downward force of gravity (weight)\nand drag, an object falling through a fluid is also subject\nto a buoyancy force which is proportional to the displaced\nvolume (Archimedes’ principle). For example, for a sphere\nwith diameter d (m), the sphere’s volume is V = π d 3/6, and\nits projected area is A = πd2/4. The buoyancy force can then\nbe computed as Fb = −ρVg. We neglected buoyancy in our\nderivation of Eq. (1.8) because it is relatively small for an\nobject like a bungee jumper moving through air. However,\nfor a more dense fluid like water, it becomes more prominent.\n(a) Derive a differential equation in the same fashion as\nEq. (1.8), but include the buoyancy force and represent\nthe drag force as described in Sec. 1.4.\n(b) Rewrite the differential equation from (a) for the special\ncase of a sphere.\n(c) Use the equation developed in (b) to compute the terminal\nvelocity (i.e., for the steady-state case). Use the following parameter values for a sphere falling through water:\nsphere diameter = 1 cm, sphere density = 2700 kg/m3,\nwater density = 1000 kg/m3, and Cd = 0.47.\n(d) Use Euler’s method with a step size of Δt = 0.03125 s\nto numerically solve for the velocity from t = 0 to 0.25 s\nwith an initial velocity of zero.\n\n1 ρAC υ∣υ∣\nFd = − __\nd\n2\nwhere Fd = the drag force (N), ρ = fluid density (kg/m3), A =\nthe frontal area of the object on a plane perpendicular to the\ndirection of motion (m2), υ = velocity (m/s), and Cd = a dimensionless drag coefficient.\n(a) Write the pair of differential equations for velocity and\nposition (see Prob. 1.19) to describe the vertical motion of\na sphere with diameter, d (m), and a density of ρs (kg/m3).\nThe differential equation for velocity should be written as\na function of the sphere’s diameter.\n(b) Use Euler’s method with a step size of Δt = 2 s to compute the position and velocity of a sphere over the first\n14 seconds. Employ the following parameters in your\ncalculation: d = 120 cm, ρ = 1.3 kg/m3, ρs = 2700 kg/m3,\nand Cd = 0.47. Assume that the sphere has the initial\nconditions: x(0) = 100 m and υ(0) = −40 m/s.\n(c) Develop a plot of your results (i.e., y and υ versus t) and\nuse it to graphically estimate when the sphere would hit\nthe ground.\n(d) Compute the value for the bulk second-order drag coefficient, cd′ (kg/m). Note that the bulk second-order drag\ncoefficient is the term in the final differential equation\nfor velocity that multiplies the term υ |υ|.\n1.22 As depicted in Fig. P1.22, a spherical particle settling through a quiescent fluid is subject to three forces: the\ndownward force of gravity (FG), and the upward forces of\nbuoyancy (FB) and drag (FD). Both the gravity and buoyancy\nforces can be computed with Newton’s second law with the\nlatter equal to the weight of the displaced fluid. For laminar\nflow, the drag force can be computed with Stoke’s law,\nFD = 3πμdυ\nwhere μ = the dynamic viscosity of the fluid (N s/m2), d =\nthe particle diameter (m), and υ = the particle’s settling\nvelocity (m/s). The mass of the particle can be expressed as\nthe product of the particle’s volume and density, ρs (kg/m3),\nand the mass of the displaced fluid can be computed as the\nproduct of the particle’s volume and the fluid’s density,\nρ (kg/m3). 
The volume of a sphere is πd3/6. In addition,\nlaminar flow corresponds to the case where the dimensionless Reynolds number, Re, is less than 1, where Re = ρdυ/μ.\n(a) Use a force balance for the particle to develop the differential equation for dυ/dt as a function of d, ρ, ρs, and μ.\n\n24\n\nMathematical Modeling, Numerical Methods, and Problem Solving\nx=0\nFB\n\nFD\n\nx=L\n\n0\n\nw\n\ny\nd\n\nFG\n\nFIGURE P1.22\n\n(b) At steady-state, use this equation to solve for the particle’s terminal velocity.\n(c) Employ the result of (b) to compute the particle’s terminal velocity in m/s for a spherical silt particle settling\nin water: d = 10 μm, ρ = 1 g/cm3, ρs = 2.65 g/cm3, and\nμ = 0.014 g/(cm·s).\n(d) Check whether flow is laminar.\n(e) Use Euler’s method to compute the velocity from t = 0 to\n2−15 s with Δt = 2−18 s given the initial condition: υ(0) = 0.\n1.23 As depicted in Fig. P1.23, the downward deflection,\ny (m), of a cantilever beam with a uniform load, w = 10,000\nkg/m, can be computed as\n\nFIGURE P1.23\nA cantilever beam.\n\n1.24 Use Archimedes’ principle to develop a steady-state\nforce balance for a spherical ball of ice floating in seawater.\nThe force balance should be expressed as a third-order polynomial (cubic) in terms of height of the cap above the water\nline (h), and the seawater’s density (ρf), the ball’s density\n1.25 Beyond fluids, Archimedes’ principle has proven\nuseful in geology when applied to solids on the earth’s\ncrust. Figure P1.25 depicts one such case where a lighter\nconical granite mountain “floats on” a denser basalt layer\nat the earth’s surface. Note that the part of the cone below\nthe surface is formally referred to as a frustum. Develop a\nsteady-state force balance for this case in terms of the following parameters: basalt’s density (ρb), granite’s density\n(ρg), the cone’s bottom radius (r), and the height above (h1)\nand below (h2) the earth’s surface.\n\ny = _____\n w (x4 − 4L x3 + 6L2x2)\n24EI\n\nh\n\nwhere x = distance (m), E = the modulus of elasticity =\n2 × 1011 Pa, I = moment of inertia = 3.25 × 10–4 m4, and L =\nlength = 4 m. This equation can be differentiated to yield the\nslope of the downward deflection as a function of x\n\nr\n\ndy\n___\n = _____\n w (4x3 − 12 L x2 + 12L2x)\ndx 24EI\nIf y = 0 at x = 0, use this equation with Euler’s method (Δx =\n0.125 m) to compute the deflection from x = 0 to L. Develop\na plot of your results along with the analytical solution computed with the first equation.\n\nFIGURE P1.24\n\n25\n\nPROBLEMS\b\nResistor\n\nInductor\nL di\ndt\n\niR\n\nCapacitor\nq\nC\n\nh1\nr1\n\nH\n\nh2\nGranite\n\ni\n\nr2\n\nBasalt\n\nFIGURE P1.25\n\n1.26 As depicted in Fig. P1.26, an RLC circuit consists of\nthree elements: a resistor (R), an inductor (L), and a capacitor (C). The flow of current across each element induces a\nvoltage drop. Kirchhoff’s second voltage law states that the\nalgebraic sum of these voltage drops around a closed circuit\nis zero,\n\nFIGURE P1.26\n\n(c) Develop plots of y versus t and y versus x. Use the plot to\ngraphically estimate when and where the jumper would\nhit the ground if the chute failed to open.\n1.28 Figure P1.28 shows the forces exerted on a hot air balloon system.\n\nq\ndi + __\niR + L __\n = 0\ndt C\nwhere i = current, R = resistance, L = inductance, t = time,\nq = charge, and C = capacitance. 
In addition, the current is\nrelated to charge as in\n\nFB\n\ndb\n\nFD\n\ndq\n___\n = i\ndt\n(a) If the initial values are i(0) = 0 and q(0) = 1 C, use\nEuler’s method to solve this pair of differential equations from t = 0 to 0.1 s using a step size of Δt = 0.01 s.\nEmploy the following parameters for your calculation:\nR = 200 Ω, L = 5 H, and C = 10−4 F.\n(b) Develop a plot of i and q versus t.\n1.27 Suppose that a parachutist with linear drag (m = 70 kg,\nc = 12.5 kg/s) jumps from an airplane flying at an altitude\nof 200 m with a horizontal velocity of 180 m/s relative to\nthe ground.\n(a) Write a system of four differential equations for x, y, υx =\ndx/dt and υy = dy/dt.\n(b) If the initial horizontal position is defined as x = 0, use\nEuler’s methods with Δt = 1 s to compute the jumper’s\nposition over the first 10 seconds.\n\nFG\n\nFP\n\nFIGURE P1.28\nForces on a hot air balloon: FB = buoyancy, FG = weight\nof gas, FP = weight of payload (including the balloon\nenvelope), and FD = drag. Note that the direction of the\ndrag is downward when the balloon is rising.\n\n26\n\nMathematical Modeling, Numerical Methods, and Problem Solving\n\nFormulate the drag force as\n 1 ρa υ2ACd\nFD = __\n2\nwhere ρa = air density (kg/m3), υ = velocity (m/s), A = projected frontal area (m2), and Cd = the dimensionless drag coefficient (≅ 0.47 for a sphere). Note also that the total mass\nof the balloon consists of two components:\nm = mG + mP\nwhere mG = the mass of the gas inside the expanded balloon\nand the unexpanded balloon = 265 kg). Assume that the\nideal gas law holds (P = ρRT), that the balloon is a perfect\nsphere with a diameter of 17.3 m, and that the heated air\n\ninside the envelope is at roughly the same pressure as the\noutside air.\nOther necessary parameters are:\nNormal atmospheric pressure, P = 101,300 Pa\nThe gas constant for dry air, R = 287 Joules/(kg K)\nThe air inside the balloon is heated to an average temperature, T = 100 ºC\nThe normal (ambient) air density, ρ = 1.2 kg/m3.\n(a) Use a force balance to develop the differential equation for dυ/dt as a function of the model’s fundamental\nparameters.\n(b) At steady-state, calculate the particle’s terminal velocity.\n(c) Use Euler’s method and Excel to compute the velocity\nfrom t = 0 to 60 s with Δt = 2 s given the previous\nparameters along with the initial condition: υ(0) = 0.\nDevelop a plot of your results.\n\n2\n\nMATLAB Fundamentals\n\nCHAPTER OBJECTIVES\nKnowledge\nunderstanding\nare prerequisites\nforan\ntheintroduction\neffective implementation\nof\nThe\nprimaryand\nobjective\nof this chapter\nis to provide\nand overview of\nany tool.\nhow\nMATLAB’s calculator mode is used to implement interactive computations.\nNo matter\nhow and\nimpressive\nSpecific\nobjectives\ntopics covered\narechest, you will be hard-pressed to repair a car\nif you do not understand how it works.\n• Learning how real and complex numbers are assigned to variables.\n• Learning\nThis is thehow\nfirstvectors\nchapterand\nobjectives\nmatricesentry.\nare assigned values using simple assignment,\n• the\nSecond\nobjective\nthe entries\nuse ic/lc\nper manuscript, the first and\ncolonchapter\noperator,\nand theentry,\nlinspace\nand logspace\nfunctions.\n• Understanding\nlast entry have space\naboverules\nor below,\nmiddle entries\ndo not. 
expressions.\nthe priority\nfor constructing\nmathematical\n• Gaining\nThird chapter\nentryunderstanding\ncopy goes here.\na general\nabout them with MATLAB’s Help facilities.\n• Learning how to use vectors to create a simple line plot based on an equation.\n\nYOU’VE GOT A PROBLEM\n\nI\n\nn Chap. 1, we used a force balance to determine the terminal velocity of a free-falling\nobject like a bungee jumper:\n___\n\ngm\nυt = ___\n c \nd\n\n√\n\nwhere υt = terminal velocity (m/s), g = gravitational acceleration (m/s2), m = mass (kg),\nand cd = a drag coefficient (kg/m). Aside from predicting the terminal velocity, this equation can also be rearranged to compute the drag coefficient\nmg\ncd = ___\n 2 \b\nυt \n\n(2.1)\n27\n\n28\n\nMATLAB Fundamentals\nTABLE 2.1 Data for the mass and associated terminal velocities of a number of jumpers.\nm, kg\nυt , m/s\n\n83.6\n53.4\n\n60.2\n48.5\n\n72.1\n50.9\n\n91.1\n55.7\n\n92.9\n54\n\n65.3\n47.7\n\n80.9\n51.1\n\nThus, if we measure the terminal velocity of a number of jumpers of known mass, this\nequation provides a means to estimate the drag coefficient. The data in Table 2.1 were collected for this purpose.\nIn this chapter, we will learn how MATLAB can be used to analyze such data. Beyond\nshowing how MATLAB can be employed to compute quantities like drag coefficients, we\nwill also illustrate how its graphical capabilities provide additional insight into such analyses.\n\n2.1\n\nTHE MATLAB ENVIRONMENT\nMATLAB is a computer program that provides the user with a convenient environment for\nperforming many types of calculations. In particular, it provides a very nice tool to implement numerical methods.\nThe most common way to operate MATLAB is by entering commands one at a time\nin the command window. In this chapter, we use this interactive or calculator mode to introduce you to common operations such as performing calculations and creating plots. In\nChap. 3, we show how such commands can be used to create MATLAB programs.\nOne further note. This chapter has been written as a hands-on exercise. That is, you\nshould read it while sitting in front of your computer. The most efficient way to become\nproficient is to actually implement the commands on MATLAB as you proceed through\nthe following material.\nMATLAB uses three primary windows:\n•\n•\n•\n\nCommand window. Used to enter commands and data.\nGraphics window. Used to display plots and graphs.\nEdit window. Used to create and edit M-files.\n\nIn this chapter, we will make use of the command and graphics windows. In Chap. 3 we\nwill use the edit window to create M-files.\nAfter starting MATLAB, the command window will open with the command prompt\nbeing displayed\n>>\n\nThe calculator mode of MATLAB operates in a sequential fashion as you type in commands line by line. For each command, you get a result. Thus, you can think of it as operating like a very fancy calculator. For example, if you type in\n>> 55 − 16\n\nMATLAB will display the result1\nans =\n39\n1\n\nMATLAB skips a line between the label (ans =) and the number (39). Here, we omit such blank lines for\nc onciseness. You can control whether blank lines are included with the format compact and format loose\ncommands.\n\n2.2\n\n29\n\nASSIGNMENT\b\n\nNotice that MATLAB has automatically assigned the answer to a variable, ans. 
Thus, you\ncould now use ans in a subsequent calculation:\n>> ans + 11\n\nwith the result\nans =\n50\n\nMATLAB assigns the result to ans whenever you do not explicitly assign the calculation to\na variable of your own choosing.\n\n2.2\n\nASSIGNMENT\nAssignment refers to assigning values to variable names. This results in the storage of the\nvalues in the memory location corresponding to the variable name.\n2.2.1 Scalars\nThe assignment of values to scalar variables is similar to other computer languages.\nTry typing\n>> a = 4\n\nNote how the assignment echo prints to confirm what you have done:\na =\n4\n\nEcho printing is a characteristic of MATLAB. It can be suppressed by terminating the command line with the semicolon (;) character. Try typing\n>> A = 6;\n\nYou can type several commands on the same line by separating them with commas or\nsemicolons. If you separate them with commas, they will be displayed, and if you use the\nsemicolon, they will not. For example,\n>> a = 4,A = 6;x = 1;\na =\n4\n\nMATLAB treats names in a case-sensitive manner—that is, the variable\nsame as A. To illustrate this, enter\n>> a\n\nand then enter\n>> A\n\nSee how their values are distinct. They are distinct names.\n\na\n\nis not the\n\n30\n\nMATLAB Fundamentals\n\nWe can assign complex values to variables, since\n___MATLAB handles complex arithmetic automatically. The unit imaginary number √\n −1 is preassigned to the variable i.\nConsequently, a complex value can be assigned simply as in\n>> x = 2 +i*4\nx =\n2.0000 + 4.0000i\n\nIt should be noted that MATLAB allows the symbol j to be used to represent the unit\nimaginary number for input. However, it always uses an i for display. For example,\n>> x = 2 +j*4\nx =\n2.0000 + 4.0000i\n\nThere are several predefined variables, for example, pi.\n>> pi\nans =\n3.1416\n\nNotice how MATLAB displays four decimal places. If you desire additional precision,\nenter the following:\n>> format long\n\nNow when pi is entered the result is displayed to 15 significant figures:\n>> pi\nans =\n3.14159265358979\n\n>> format short\n\nThe following is a summary of the format commands you will employ routinely in engineering and scientific calculations. They all have the syntax: format type.\ntype\n\nResult\n\nExample\n\nshort\nlong\nshort e\nlong e\nshort g\nlong g\n\nScaled fixed-point format with 5 digits\nScaled fixed-point format with 15 digits for double and 7 digits for single\nFloating-point format with 5 digits\nFloating-point format with 15 digits for double and 7 digits for single\nBest of fixed- or floating-point format with 5 digits\nBest of fixed- or floating-point format with 15 digits for double\nand 7 digits for single\nEngineering format with at least 5 digits and a power that is a multiple of 3\nEngineering format with exactly 16 significant digits and a power\nthat is a multiple of 3\nFixed dollars and cents\n\n3.1416\n3.14159265358979\n3.1416e+000\n3.141592653589793e+000\n3.1416\n3.14159265358979\n\nshort eng\nlong eng\nbank\n\n3.1416e+000\n3.14159265358979e+000\n3.14\n\n2.2\n\n31\n\nASSIGNMENT\b\n\n2.2.2 Arrays, Vectors, and Matrices\nAn array is a collection of values that are represented by a single variable name.\nOne-d imensional arrays are called vectors and two-dimensional arrays are called\nmatrices. The scalars used in Sec. 2.2.1 are actually matrices with one row and one\ncolumn.\nBrackets are used to enter arrays in the command mode. 
For example, a row vector can\nbe assigned as follows:\n>> a = [1 2 3 4 5]\na =\n1\n\n2\n\n3\n\n4\n\n5\n\nNote that this assignment overrides the previous assignment of a = 4.\nIn practice, row vectors are rarely used to solve mathematical problems. When we\nspeak of vectors, we usually refer to column vectors, which are more commonly used. A\ncolumn vector can be entered in several ways. Try them.\n>> b = [2;4;6;8;10]\n\nor\n>> b = [2\n4\n6\n8\n10]\n\nor, by transposing a row vector with the ' operator,\n>> b = [2 4 6 8 10]'\n\nThe result in all three cases will be\nb =\n2\n4\n6\n8\n10\n\nA matrix of values can be assigned as follows:\n>> A = [1 2 3; 4 5 6; 7 8 9]\nA =\n1\n4\n7\n\n2\n5\n8\n\n3\n6\n9\n\nIn addition, the Enter key (carriage return) can be used to separate the rows. For example,\nin the following case, the Enter key would be struck after the 3, the 6, and the ] to assign\nthe matrix:\n>> A = [1 2 3\n4 5 6\n7 8 9]\n\n32\n\nMATLAB Fundamentals\n\nFinally, we could construct the same matrix by concatenating (i.e., joining) the vectors\nrepresenting each column:\n>> A = [[1 4 7]' [2 5 8]' [3 6 9]']\nwho\n\nAt any point in a session, a list of all current variables can be obtained by entering the\ncommand:\n>> who\nA\na\nans b\nx\n\nor, with more detail, enter the whos command:\n>> whos\nName\n\nSize\n\nA\na\nans\nb\nx\n\n3x3\n1x5\n1x1\n5x1\n1x1\n\nBytes\n\nClass\n\n72 double array\n40 double array\n8 double array\n40 double array\n16 double array (complex)\n\nGrand total is 21 elements using 176 bytes\n\nNote that subscript notation can be used to access an individual element of an array.\nFor example, the fourth element of the column vector b can be displayed as\n>> b(4)\nans =\n8\n\nFor an array, A(m,n) selects the element in mth row and the nth column. For example,\n>> A(2,3)\nans =\n6\n\nThere are several built-in functions that can be used to create matrices. For example, the ones and zeros functions create vectors or matrices filled with ones and zeros,\nrespectively. Both have two arguments, the first for the number of rows and the second for\nthe number of columns. For example, to create a 2 × 3 matrix of zeros:\n>> E = zeros(2,3)\nE =\n0\n0\n\n0\n0\n\n0\n0\n\nSimilarly, the ones function can be used to create a row vector of ones:\n>> u = ones(1,3)\nu =\n1\n\n1\n\n1\n\n2.2\n\n33\n\nASSIGNMENT\b\n\n2.2.3 The Colon Operator\nThe colon operator is a powerful tool for creating and manipulating arrays. If a colon is\nused to separate two numbers, MATLAB generates the numbers between them using an\nincrement of one:\n>> t = 1:5\nt =\n1\n\n2\n\n3\n\n4\n\n5\n\nIf colons are used to separate three numbers, MATLAB generates the numbers between the\nfirst and third numbers using an increment equal to the second number:\n>> t = 1:0.5:3\nt =\n1.0000\n\n1.5000\n\n2.0000\n\n2.5000\n\n3.0000\n\nNote that negative increments can also be used\n>> t = 10:−1:5\nt =\n10\n\n9\n\n8\n\n7\n\n6\n\n5\n\nAside from creating series of numbers, the colon can also be used as a wildcard to\nselect the individual rows and columns of a matrix. When a colon is used in place of a\nspecific subscript, the colon represents the entire row or column. For example, the second\nrow of the matrix A can be selected as in\n>> A(2,:)\nans =\n4\n\n5\n\n6\n\nWe can also use the colon notation to selectively extract a series of elements from\nwithin an array. 
For example, based on the previous definition of the vector t:\n>> t(2:4)\nans =\n9\n\n8\n\n7\n\nThus, the second through the fourth elements are returned.\n2.2.4 The linspace and logspace Functions\nThe linspace and logspace functions provide other handy tools to generate vectors of spaced\npoints. The linspace function generates a row vector of equally spaced points. It has the\nform\nlinspace(x1, x2, n)\n\n34\n\nMATLAB Fundamentals\n\nwhich generates n points between x1 and x2. For example\n>> linspace(0,1,6)\nans =\n0\n\n0.2000\n\n0.4000\n\n0.6000\n\n0.8000\n\n1.0000\n\nIf the n is omitted, the function automatically generates 100 points.\nThe logspace function generates a row vector that is logarithmically equally spaced. It\nhas the form\nlogspace(x1, x2, n)\n\nwhich generates n logarithmically equally spaced points between decades 10x1 and 10x2.\nFor example,\n>> logspace(-1,2,4)\nans =\n0.1000\n\n1.0000\n\n10.0000\n\n100.0000\n\nIf n is omitted, it automatically generates 50 points.\n2.2.5 Character Strings\nAside from numbers, alphanumeric information or character strings can be represented by\nenclosing the strings within single quotation marks. For example,\n>> f = 'Miles ';\n>> s = 'Davis';\n\nEach character in a string is one element in an array. Thus, we can concatenate (i.e., paste\ntogether) strings as in\n>> x = [f s]\nx =\nMiles Davis\n\nNote that very long lines can be continued by placing an ellipsis (three consecutive\nperiods) at the end of the line to be continued. For example, a row vector could be entered as\n>> a = [1 2 3 4 5 ...\n6 7 8]\na =\n1\n\n2\n\n3\n\n4\n\n5\n\n6\n\n7\n\n8\n\nHowever, you cannot use an ellipsis within single quotes to continue a string. To enter a\nstring that extends beyond a single line, piece together shorter strings as in\n>> quote = ['Any fool can make a rule,' ...\n' and any fool will mind it']\nquote =\nAny fool can make a rule, and any fool will mind it\n\n2.2\n\n35\n\nASSIGNMENT\b\n\nTABLE 2.2 Some useful string functions.\nFunction\n\nDescription\n\nn=length(s)\n\nNumber of characters, n, in a string, s.\n\nb=strcmp(s1,s2)\n\nCompares two strings, s1 and s2; if equal returns true (b = 1). If not equal,\nreturns false (b = 0).\n\nn=str2num(s)\n\nConverts a string, s, to a number, n.\n\ns=num2str(n)\n\nConverts a number, n, to a string, s.\n\ns2=strrep(s1,c1,c2)\n\nReplaces characters in a string with different characters.\n\ni=strfind(s1,s2)\n\nReturns the starting indices of any occurrences of the string s2 in the\nstring s1.\n\nS=upper(s)\n\nConverts a string to upper case.\n\ns=lower(S)\n\nConverts a string to lower case.\n\nA number of built-in MATLAB functions are available to operate on strings. Table 2.2\nlists a few of the more commonly used ones. For example,\n>> x1 = ‘Canada’; x2 = ‘Mexico’; x3 = ‘USA’; x4 = ‘2010’; x5 = 810;\n>> strcmp(a1,a2)\nans =\n0\n>> strcmp(x2,’Mexico’)\nans =\n1\n>> str2num(x4)\nans =\n2010\n>> num2str(x5)\nans =\n810\n>> strrep\n>> lower\n>> upper\n\nNote, if you want to display strings in multiple lines, use the sprint function and insert\nthe two-character sequence \\n between the strings. For example,\n\nyields\nYo\n\n36\n\nMATLAB Fundamentals\n\n2.3\n\nMATHEMATICAL OPERATIONS\nOperations with scalar quantities are handled in a straightforward manner, similar to other\ncomputer languages. The common operators, in order of priority, are\n^\n−\n*/\n\\\n+−\n\nExponentiation\nNegation\nMultiplication and division\nLeft division2\n\nThese operators will work in calculator fashion. 
Try\n>> 2*pi\nans =\n6.2832\n\nAlso, scalar real variables can be included:\n>> y = pi/4;\n>> y ^ 2.45\nans =\n0.5533\n\nResults of calculations can be assigned to a variable, as in the next-to-last example, or\nsimply displayed, as in the last example.\nAs with other computer calculation, the priority order can be overridden with parentheses. For example, because exponentiation has higher priority than negation, the following result would be obtained:\n>> y = −4 ^ 2\ny =\n−16\n\nThus, 4 is first squared and then negated. Parentheses can be used to override the priorities\nas in\n>> y = (−4) ^ 2\ny =\n16\n\nWithin each precedence level, operators have equal precedence and are evaluated from left\nto right. As an example,\n>> 4^2^3\n>> 4^(2^3)\n>> (4^2)^3\n\n2\n\nLeft division applies to matrix algebra. It will be discussed in detail later in this book.\n\n2.3\n\nMathematical Operations\b\n\n37\n\nIn the first case 42 = 16 is evaluated first, which is then cubed to give 4096. In the second\ncase 23 = 8 is evaluated first and then 48 = 65,536. The third case is the same as the first,\nbut uses parentheses to be clearer.\nOne potentially confusing operation is negation; that is, when a minus sign is employed with a single argument to indicate a sign change. For example,\n>> 2*−4\n\nThe −4 is treated as a number, so you get −8. As this might be unclear, you can use parentheses to clarify the operation\n>> 2*(−4)\n\nHere is a final example where the minus is used for negation\n>> 2^−4\n\nAgain −4 is treated as a number, so 2^−4 = 2−4 = 1/24 = 1/16 = 0.0625. Parentheses can\nmake the operation clearer\n>> 2^(−4)\n\nCalculations can also involve complex quantities. Here are some examples that use the\nvalues of x (2 + 4i) and y (16) defined previously:\n>> 3 * x\nans =\n6.0000 + 12.0000i\n>> 1 / x\nans =\n0.1000 − 0.2000i\n>> x ^ 2\nans =\n−12.0000 + 16.0000i\n>> x + y\nans =\n18.0000 + 4.0000i\n\nThe real power of MATLAB is illustrated in its ability to carry out vector-matrix\nc alculations. Although we will describe such calculations in detail in Chap. 8, it is worth\nintroducing some examples here.\nThe inner product of two vectors (dot product) can be calculated using the *\noperator,\n>> a * b\nans =\n110\n\n38\n\nMATLAB Fundamentals\n\nand likewise, the outer product\n>> b * a\nans =\n2\n4\n6\n8\n10\n\n4\n8\n12\n16\n20\n\n6\n12\n18\n24\n30\n\n8\n16\n24\n32\n40\n\n10\n20\n30\n40\n50\n\nTo further illustrate vector-matrix multiplication, first redefine a and b:\n>> a = [1 2 3];\n\nand\n>> b = [4 5 6]';\n\nNow, try\n>> a * A\nans =\n30\n\n36\n\n42\n\nor\n>> A * b\nans =\n32\n77\n122\n\nMatrices cannot be multiplied if the inner dimensions are unequal. Here is what happens\nwhen the dimensions are not those required by the operations. Try\n>> A * a\n\nMATLAB automatically displays the error message:\n??? Error using ==> mtimes\nInner matrix dimensions must agree.\n\nMatrix-matrix multiplication is carried out in likewise fashion:\n>> A * A\nans =\n30\n66\n102\n\n36\n81\n126\n\n42\n96\n150\n\nMixed operations with scalars are also possible:\n>> A/pi\nans =\n0.3183\n1.2732\n2.2282\n\n0.6366\n1.5915\n2.5465\n\n0.9549\n1.9099\n2.8648\n\n2.4\tUSE OF BUILT-IN FUNCTIONS\b\n\n39\n\nWe must always remember that MATLAB will apply the simple arithmetic operators\nin vector-matrix fashion if possible. At times, you will want to carry out calculations item\nby item in a matrix or vector. MATLAB provides for that too. 
For example,\n>> A^2\nans =\n30\n66\n102\n\n36\n81\n126\n\n42\n96\n150\n\nresults in matrix multiplication of A with itself.\nWhat if you want to square each element of A? That can be done with\n>> A.^2\nans =\n1\n16\n49\n\n4\n25\n64\n\n9\n36\n81\n\nThe . preceding the ^ operator signifies that the operation is to be carried out element by\nelement. The MATLAB manual calls these array operations. They are also often referred\nto as element-by-element operations.\ndone. Press the up-arrow key. You should get back the last line you typed in.\n>> A.^2\n\nPressing Enter will perform the calculation again. But you can also edit this line. For\nexample, change it to the line below and then press Enter.\n>> A.^3\nans =\n1\n64\n343\n\n8\n125\n512\n\n27\n216\n729\n\nUsing the up-arrow key, you can go back to any command that you entered. Press the uparrow until you get back the line\n>> b * a\n\nAlternatively, you can type b and press the up-arrow once and it will automatically bring\nup the last command beginning with the letter b. The up-arrow shortcut is a quick way to\nfix errors without having to retype the entire line.\n\n2.4\n\nUSE OF BUILT-IN FUNCTIONS\nMATLAB and its Toolboxes have a rich collection of built-in functions. You can use\nfunction, type in\n>> help log\nLOG\n\nNatural logarithm.\n\n40\n\nMATLAB Fundamentals\nLOG(X) is the natural logarithm of the elements of X.\nComplex results are produced if X is not positive.\n\nFor a list of all the elementary functions, type\n>> help elfun\n\nOne of their important properties of MATLAB’s built-in functions is that they will\noperate directly on vector and matrix quantities. For example, try\n>> log(A)\nans =\n0\n1.3863\n1.9459\n\n0.6931\n1.6094\n2.0794\n\n1.0986\n1.7918\n2.1972\n\nand you will see that the natural logarithm function is applied in array style, element by\nelement, to the matrix A. Most functions, such as sqrt, abs, sin, acos, tanh, and exp, operate\nin array fashion. Certain functions, such as exponential and square root, have matrix definitions also. MATLAB will evaluate the matrix version when the letter m is appended to the\nfunction name. Try\n>> sqrtm(A)\nans =\n0.4498 + 0.7623i\n1.0185 + 0.0842i\n1.5873 − 0.5940i\n\n0.5526 + 0.2068i\n1.2515 + 0.0228i\n1.9503 − 0.1611i\n\n0.6555 − 0.3487i\n1.4844 − 0.0385i\n2.3134 + 0.2717i\n\nThere are several functions for rounding. For example, suppose that we enter a vector:\n>> E = [−1.6 −1.5 −1.4 1.4 1.5 1.6];\n\nThe round function rounds the elements of E to the nearest integers:\n>> round(E)\nans =\n−2\n\n−2\n\n−1\n\n1\n\n2\n\n2\n\nThe ceil (short for ceiling) function rounds to the nearest integers toward infinity:\n>> ceil(E)\nans =\n−1\n\n−1\n\n−1\n\n2\n\n2\n\n2\n\nThe floor function rounds down to the nearest integers toward minus infinity:\n>> floor(E)\nans =\n−2\n\n−2\n\n−2\n\n1\n\n1\n\n1\n\nThere are also functions that perform special actions on the elements of matrices and\narrays. For example, the sum function returns the sum of the elements:\n>> F = [3 5 4 6 1];\n>> sum(F)\nans =\n19\n\n2.4\tUSE OF BUILT-IN FUNCTIONS\b\n\n41\n\nIn a similar way, it should be pretty obvious what’s happening with the following\ncommands:\n>> min(F),max(F),mean(F),prod(F),sort(F)\nans =\n1\nans =\n6\nans =\n3.8000\nans =\n360\nans =\n1\n\n3\n\n4\n\n5\n\n6\n\nA common use of functions is to evaluate a formula for a series of arguments. Recall\nthat the velocity of a free-falling bungee jumper can be computed with [Eq. 
(1.9)]:\n___\n\n(\n\n____\n\n)\n\ngc\ngm\nυ = ___\n c tanh √____\n md t \nd\n\n√\n\nwhere υ is velocity```"
https://snipplr.com/view/5993/excel-spreadsheet-to-mysql-table-inserts

# Excel spreadsheet to MySQL Table inserts

Posted by jeffreality on 04/24/08 / Published in: Visual Basic
Viewed 560 times, favorited by 3 user(s).
```vb
Sub Excel2MySQL()
    ' Excel2MySQL
    ' Versions:
    '   1.0 - Jeffrey Berthiaume - October 26, 2005 - Initial version
    '
    ' How to use:
    '   - name the worksheet the name of the Table you want to import into
    '   - name each of the columns (row 1) the name of the row
    '   - run the macro.
    ' The sql file will be saved to c:\[Tablename].sql

    Open "c:\" & Sheet1.Name & ".sql" For Output As #1

    totalrows = ActiveSheet.UsedRange.Rows.Count
    totalcols = ActiveSheet.UsedRange.Columns.Count

    ' Build the comma-separated column list from the header row (row 1)
    colnames = ""
    For y = 1 To totalcols
        colnames = colnames & Cells(1, y)
        If y < totalcols Then
            colnames = colnames & ","
        End If
    Next y

    ' Emit one INSERT statement per data row (rows 2..totalrows)
    For x = 2 To totalrows
        s = "INSERT INTO " & Sheet1.Name & " (" & colnames & ") VALUES ("
        For y = 1 To totalcols
            s = s & "'" & Replace(Cells(x, y).Value, "'", "\'") & "'"
            If y < totalcols Then
                s = s & ","
            Else
                s = s & ");"
                Print #1, s
            End If
        Next y
    Next x

    Close #1
End Sub
```
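For illustration, this is roughly what the generated file looks like. The worksheet name (`customers`) and the header columns (`id`, `name`) are invented for this example and are not part of the original snippet:

```sql
INSERT INTO customers (id,name) VALUES ('1','Alice');
INSERT INTO customers (id,name) VALUES ('2','O\'Brien');
```

Note that the macro quotes every cell value as a string (and escapes single quotes); MySQL will normally coerce such literals for numeric columns, but it is worth keeping in mind.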
https://phpeditors.partners.phpclasses.org/browse/file/240305.html

# File: demo.php

```php
// [The beginning of this listing -- the <?php tag and the opening of the Proxy
//  class -- did not survive extraction of the original page.]
        name = 'proxy';
    }

    public function sayHello() {
        echo 'Hello from class ' . __CLASS__ . " named {$this->name} \n";
    }
}

Class A {
    public function __construct() {
        $this->name = 'A';
    }

    public function sayHello() {
        return function() { echo 'Hello from class ' . __CLASS__ . " named {$this->name} as " . get_called_class() . " \n"; };
    }
}

Class B {
    public function __construct() {
        $this->name = 'B';
    }

    public function sayHello() {
        return function() { echo 'Hello from class ' . __CLASS__ . " named {$this->name} as " . get_called_class() . " \n"; };
    }
}

function globalFunction($text) {
    echo "Hello from global scope $text \n";
}

$p = new Proxy;
$a = new A;
$b = new B;
$text = 'none';

$p->a1 = $a->sayHello();
$p->b1 = $b->sayHello();
$p->p1 = [$p, 'sayHello'];
$p->g1 = 'globalFunction';
$p->g2 = function() use ($text) { return globalFunction($text); };

print_r($p);
print_r($a);
print_r($b);

$p->a1();
$p->b1();
$p->p1();
$p->g1('G1');
$p->g2('G2');

$a->name = 'AA';
$b->name = 'BB';
$p->name = 'same';

$p->a1();
$p->b1();
$p->p1();

/* Outputs:
Proxy Object
(
    [armory:protected] => Array
        (
            [name] => proxy
            [a1] => Closure Object ([this] => A Object ([name] => A))
            [b1] => Closure Object ([this] => B Object ([name] => B))
            [p1] => Array ([0] => Proxy Object *RECURSION* [1] => sayHello)
            [g1] => globalFunction
            [g2] => Closure Object ([static] => Array ([text] => none))
        )
)
A Object ([name] => A)
B Object ([name] => B)
Hello from class A named A as A
Hello from class B named B as B
Hello from class Proxy named proxy
Hello from global scope G1
Hello from global scope none
Hello from class A named AA as A
Hello from class B named BB as B
Hello from class Proxy named same
*/
?>
```
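The opening of `demo.php` (everything before `name = 'proxy';`) is missing above, so the `Proxy` class is shown without its declaration. Judging from the `print_r` output — a protected `armory` array holding the assigned closures — and from the fact that calls such as `$p->a1()` and `$p->g1('G1')` work, the missing code presumably declares that array plus `__set`/`__get`/`__call` magic methods. The sketch below is only a guess at a minimal class with that behaviour; it is not the author's original code:

```php
<?php
// Hypothetical reconstruction -- NOT the original demo.php code.
class Proxy
{
    protected $armory = array();

    // Catch assignments such as $p->a1 = ... and stash them in the armory.
    public function __set($name, $value)
    {
        $this->armory[$name] = $value;
    }

    // Read them back, so {$this->name} interpolation works in sayHello().
    public function __get($name)
    {
        return isset($this->armory[$name]) ? $this->armory[$name] : null;
    }

    // Let stored callables be invoked as $p->a1(), $p->g1('G1'), ...
    public function __call($name, $args)
    {
        return call_user_func_array($this->armory[$name], $args);
    }

    public function __construct()
    {
        $this->name = 'proxy';   // routed through __set() into $armory
    }

    public function sayHello()
    {
        echo 'Hello from class ' . __CLASS__ . " named {$this->name} \n";
    }
}
```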
https://www.howtoforge.com/tutorial/linux-c-programming-tutorial-part-24-multi-dimensional-arrays/

# Linux C Programming Tutorial Part 24 - Multi dimensional arrays

If you're following this ongoing C programming tutorial series, you'd be aware of the concept of arrays. To quickly refresh, arrays are used to store multiple values of the same type in continuous storage.

## Multidimensional arrays in C

For example, the following is an integer array capable of storing 5 numbers:

```c
int arr[5];
```

Any value stored in an array can be accessed easily using the array name and the corresponding index value. As indexes begin from 0, if you want to access the second element in an array, you can do that in the following way:

```c
arr[1]
```

The following program accepts 5 integers from the user as input, stores them in an array, and then outputs them back to the user.

```c
#include <stdio.h>

int main()
{
   int arr[5], i;

   printf("Enter 5 integer values\n");

   for(i=0; i<5; i++)
      scanf("%d", &(arr[i]));

   printf("You entered the following values:\n");

   for(i=0; i<5; i++)
      printf("%d\n", arr[i]);

   return 0;
}
```

Now, this type of array is known as a single dimensional array. Yes, that means there also exist multi dimensional arrays - two dimensional arrays, three dimensional arrays, and so on. For example, the following is a two dimensional array:

```c
int arr[2][3];
```

You can visualize this array as a 2-D table of numbers with 2 rows and 3 columns - something like the following:

```
x x x
x x x
```

So there are a total of 6 elements this array can hold. It's worth mentioning that the total number of elements an array can hold can be easily calculated by multiplying the indices in the declaration of the array. For example, in the case of 'arr[2][3]', the capacity of the array can be calculated by doing 2x3, which equals 6.

Coming to the initialization part, a 2-D array like 'arr' can be initialized in the following way:

```c
int arr[2][3] = {1,2,3,4,5,6};
```

As this above initialization makes it hard to visualize these values in a 2-D array, there's another (read: better) way that you can opt for. Here it is:

```c
int arr[2][3] = { {1,2,3}, {4,5,6} };
```

So now it's easy to visualize that numbers 1,2,3 are in one row, while 4,5,6 are in the other. Here you go:

```
1 2 3
4 5 6
```

As for how to deal with a 2-D array in C, the following is a small program that accepts these 6 values from the user, stores them in a 2-D array 'arr', and then finally outputs them back to the user:

```c
#include <stdio.h>

int main()
{
   int arr[2][3], i, j;

   printf("You are about to enter values for a 2x3 array\n");

   for(i=0; i<2; i++)
   {
      for(j=0; j<3; j++)
      {
         printf("\n Enter value to be stored at row %d and column %d :: ", i, j);
         scanf("%d", &arr[i][j]);
      }
   }

   printf("\n You entered the following values:\n");

   for(i=0; i<2; i++)
   {
      for(j=0; j<3; j++)
      {
         printf("\n Row %d and column %d = %d\n", i, j, arr[i][j]);
      }
   }

   return 0;
}
```

And here's the output:

```
You are about to enter values for a 2x3 array

 Enter value to be stored at row 0 and column 0 :: 1
 Enter value to be stored at row 0 and column 1 :: 2
 Enter value to be stored at row 0 and column 2 :: 3
 Enter value to be stored at row 1 and column 0 :: 4
 Enter value to be stored at row 1 and column 1 :: 5
 Enter value to be stored at row 1 and column 2 :: 6

 You entered the following values:

 Row 0 and column 0 = 1
 Row 0 and column 1 = 2
 Row 0 and column 2 = 3
 Row 1 and column 0 = 4
 Row 1 and column 1 = 5
 Row 1 and column 2 = 6
```
So that was some basic information about two dimensional arrays. What about 3-D arrays? Well, on the same lines, you can define and initialize three dimensional arrays as well. Here's an example:

```c
int arr[2][3][4];
```

So how would one go about visualizing this array? Well, think of a three dimensional world (the world we live in), and then visualize three dimensions perpendicular to each other. That's how the three dimensions of this array fit in.

Carrying a capacity of 24 elements (2x3x4), this array can be initialized in the following way:

```c
int x[2][3][4] =
{
   { {1,2,3,4}, {5,6,7,8}, {9,10,11,12} },
   { {13,14,15,16}, {17,18,19,20}, {21,22,23,24} }
};
```

And here's a C program making use of a 3-D array:

```c
#include <stdio.h>

int main()
{
   int arr[2][3][4], i, j, k;

   printf("You are about to enter values for a 2x3x4 array\n");

   for(i=0; i<2; i++)
   {
      for(j=0; j<3; j++)
      {
         for(k=0; k<4; k++)
         {
            printf("\n Enter value to be stored at arr[%d][%d][%d] :: ", i, j, k);
            scanf("%d", &arr[i][j][k]);
         }
      }
   }

   printf("\n You entered the following values:\n");

   for(i=0; i<2; i++)
   {
      for(j=0; j<3; j++)
      {
         for(k=0; k<4; k++)
         {
            printf("\n arr[%d][%d][%d] = %d\n", i, j, k, arr[i][j][k]);
         }
      }
   }

   return 0;
}
```

## Conclusion

In this tutorial, we expanded upon our existing understanding of arrays by discussing the concept of multidimensional arrays. You are advised to try out the examples used in this tutorial on your system (as well as create new ones) to get a better understanding of how these arrays work. In case of any doubt or query, leave a comment below.
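As one extra, hedged illustration that is not part of the original article: the same nested-loop pattern used above for input and output is also how you would typically compute something over a multidimensional array. A minimal sketch that sums all six elements of the 2x3 array from the examples:

```c
#include <stdio.h>

int main()
{
   /* Same shape as in the tutorial: 2 rows and 3 columns */
   int arr[2][3] = { {1,2,3}, {4,5,6} };
   int i, j, sum = 0;

   for(i=0; i<2; i++)
   {
      for(j=0; j<3; j++)
      {
         sum = sum + arr[i][j];   /* visit each element exactly once */
      }
   }

   printf("Sum of all elements = %d\n", sum);

   return 0;
}
```

Compiling and running this should print `Sum of all elements = 21`.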
http://forums.wolfram.com/mathgroup/archive/1999/Sep/msg00226.html

Re: simple Simplify[] question

• To: mathgroup at smc.vnet.net
• Subject: [mg19789] Re: simple Simplify[] question
• From: "Allan Hayes" <hay at haystack.demon.co.uk>
• Date: Fri, 17 Sep 1999 01:36:44 -0400
• References: <7rnkqn$emf@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com

Ulf Saalmann <us at atom.msi.se> wrote in message news:7rnkqn$emf at smc.vnet.net...
>
> Hello,
>
> why does Mathematica give
>
>     Simplify[(a^2)^(3/2)]
>
>     (a^2)^(3/2)
>
> and not
>
>     a^3
>
> and how to convince Mathematica to do it?
>
> Thanks
> Ulf (us at atom.msi.se)

Ulf,

Mathematica works with complex numbers; and even with reals your reduction is not always true:

    Simplify[((-1)^2)^(3/2)]

    1

    (-1)^3

    -1

However, we can tell Mathematica that a is positive, and then your reduction is true:

    Simplify[(a^2)^(3/2), a > 0]

    a^3

Allan
---------------------
Allan Hayes
Mathematica Training and Consulting
Leicester UK
www.haystack.demon.co.uk
hay at haystack.demon.co.uk
Voice: +44 (0)116 271 4198
Fax: +44 (0)870 164 0565
https://cm-to-inches.appspot.com/673-cm-to-inches.html

Cm To Inches

# 673 cm to in / 673 Centimeters to Inches

## How to convert 673 centimeters to inches?

673 cm × (0.3937007874 in / 1 cm) = 264.960629921 in

A common question is: how many centimeters are in 673 inches? The answer is 1709.42 cm in 673 in. Likewise, the question of how many inches are in 673 centimeters has the answer of 264.960629921 in in 673 cm.

## How much are 673 centimeters in inches?

673 centimeters equal 264.960629921 inches (673 cm = 264.960629921 in). Converting 673 cm to in is easy. Simply use our calculator above, or apply the formula to change the length 673 cm to in.

## Convert 673 cm to common lengths

| Unit | Length |
| --- | --- |
| Nanometer | 6730000000.0 nm |
| Micrometer | 6730000.0 µm |
| Millimeter | 6730.0 mm |
| Centimeter | 673.0 cm |
| Inch | 264.960629921 in |
| Foot | 22.0800524934 ft |
| Yard | 7.3600174978 yd |
| Meter | 6.73 m |
| Kilometer | 0.00673 km |
| Mile | 0.0041818281 mi |
| Nautical mile | 0.0036339093 nmi |

## What is 673 centimeters in in?

To convert 673 cm to in, multiply the length in centimeters by 0.3937007874. The 673 cm in in formula is [in] = 673 * 0.3937007874. Thus, for 673 centimeters in inch we get 264.960629921 in.

## 673 Centimeter Conversion Table

(The conversion table on the original page was provided as an image and is not reproduced here.)
## Alternative spelling

673 Centimeter to Inches, 673 Centimeter in Inches, 673 cm to in, 673 cm in in, 673 cm to Inch, 673 cm in Inch, 673 Centimeters to Inch, 673 Centimeters in Inch, 673 Centimeters to Inches, 673 Centimeters in Inches, 673 cm to Inches, 673 cm in Inches, 673 Centimeters to in, 673 Centimeters in in
http://www.indiabix.com/chemical-engineering/chemical-reaction-engineering/109001

# Chemical Engineering - Chemical Reaction Engineering

### Exercise :: Chemical Reaction Engineering - Section 7

1.

The minimum energy required to allow a chemical reaction to proceed is termed the 'threshold energy'. Chemical reactions with low activation energy are

 A. always irreversible. B. insensitive to temperature changes. C. mostly irreversible. D. highly temperature sensitive.

Explanation:

No answer description available for this question. Let us discuss.

2.

If Thiele modulus is __________ , then the pore diffusion resistance in a catalyst may be considered as negligible.

 A. 0 B. ∞ C. < 0.5 D. > 0.5

Explanation:

No answer description available for this question. Let us discuss.

3.

From collision theory, the reaction rate constant is proportional to

 A. (formula image) B. (formula image) C. T D. (formula image)

(The options for this question were given as formula images on the original page and are not recoverable here.)

Explanation:

No answer description available for this question. Let us discuss.

4.

For a mixed flow reactor operating at steady state, the rate of reaction is given by

 A. (formula image) B. (formula image) C. (formula image) D. (formula image)

(The options for this question were given as formula images on the original page and are not recoverable here.)

Explanation:

No answer description available for this question. Let us discuss.

5.

Catalytic action in a catalytic chemical reaction follows from the ability of the catalyst to change the

 A. activation energy B. equilibrium constant C. heat of reaction D. none of these
https://mirror.pkill.info/ocaml-tutorial.org/data_types_and_matching.html
"# Data Types and Matching\n\nAs with Perl, OCaml has support for lists built into the language. All elements of a list in OCaml must be the same type. To write a list, use:\n\n```[1; 2; 3]\n```\n\n(Note semicolons, NOT commas).\n\n`[]` is the empty list.\n\nA list has a head (the first element) and a tail (the rest of the elements). The head is an element, and the tail is a list, so in the above example, the head is the integer `1` while the tail is the list `[2; 3]`.\n\nAn alternative way to write a list is to use the cons operator `head :: tail`. So the following ways to write a list are exactly the same:\n\n```[1; 2; 3]\n1 :: [2; 3]\n1 :: 2 :: \n1 :: 2 :: 3 :: []\n```\n\nWhy do I mention the cons operator? Well, it's useful when we start doing pattern matching on lists, which I'll talk about below.\n\n### The type of a linked list\n\nThe type of a linked list of integers is `int list`, and in general the type of a linked list of `foo`s is `foo list`.\n\nThis implies that all the elements of a linked list must have the same type. But the type can be polymorphic (ie. `'a list`), which is really useful if you want to write generic functions which operate on \"lists of anything\". (But note: `'a list` doesn't mean that each individual element has a different type - you still can't use this to construct a list containing, say, a mixture of ints and strings. It means that the type of the elements is anything, but all the same type of anything.)\n\nThe `length` function defined as part of the OCaml `List` module is a good example of this. It doesn't matter if the list contains ints or strings or objects or small furry animals, the `List.length` function can still be called on it. The type of `List.length` is therefore:\n\n```List.length : 'a list -> int\n```\n\n## Structures\n\nC and C++ have the concept of a simple `struct`, short for structure. Java has classes which can be used to similar effect, albeit much more laboriously.\n\nConsider this simple C structure:\n\n```struct pair_of_ints {\nint a, b;\n};\n```\n\nThe simplest equivalent to this in OCaml is a tuple such as `(3, 4)` which has the type `int * int`. Unlike lists, tuples can contain elements of different types, so for example `(3, \"hello\", 'x')` has type `int * string * char`.\n\nA slightly more complex alternative way of writing a C struct is to use a record. Records, like C structs, allow you to name the elements. Tuples don't let you name the elements, but instead you have to remember the order in which they appear. Here is the equivalent record for our C struct above:\n\n```type pair_of_ints = { a : int; b : int };;\n```\n\nThat defines the type, and here is how we actually create objects of this type:\n\n```{ a=3; b=5 }\n```\n\nNote that we use \":\" in the type definition and \"=\" when creating objects of this type.\n\nHere are some examples of this typed into the toplevel:\n\n```# type pair_of_ints = { a : int; b : int };;\ntype pair_of_ints = { a : int; b : int; }\n# {a=3; b=5};;\n- : pair_of_ints = {a = 3; b = 5}\n# {a=3};;\nSome record field labels are undefined: b\n```\n\nSo OCaml won't let you leave some fields in your structure undefined.\n\n## Variants (qualified unions and enums)\n\nA \"qualified union\" doesn't really exist in C, although there is support in the gcc compiler for it. 
Here is the pattern which one commonly uses for a qualified union in C:\n\n```struct foo {\nint type;\n#define TYPE_INT 1\n#define TYPE_PAIR_OF_INTS 2\n#define TYPE_STRING 3\nunion {\nint i; // If type == TYPE_INT.\nint pair; // If type == TYPE_PAIR_OF_INTS.\nchar *str; // If type == TYPE_STRING.\n} u;\n};\n```\n\nWe've all seen this I should think, and it's not a pretty sight. For a start it's not safe: the programmer might make a mistake and accidentally use, say, the `u.i` field when in fact a string was stored in the structure. Also the compiler can't easily check if all possible types have been examined in a switch statement (you can use an `enum` type instead to solve this particular problem). The programmer might forget to set the `type` field, which would result in all sorts of fun and games. Furthermore, it's cumbersome.\n\nHere is the elegant and concise equivalent in OCaml:\n\n```type foo = Nothing | Int of int | Pair of int * int | String of string;;\n```\n\nThat's the type definition. First part of each `|` separated part is called the constructor. It can be called anything, as long as it starts with a capital letter. If the constructor can be used to define a value, it's followed by the `of type` part, where type always starts with a lowercase letter. In the above example, Nothing is used as a constant and the other constructors are used with values.\n\nTo actually create things of this type you would write:\n\n```Nothing\nInt 3\nPair (4, 5)\nString \"hello\"\n&c.\n```\n\nEach of these expressions has type `foo`.\n\nNote that you use `of` when writing the type definition, but NOT when writing elements of the type.\n\nBy extension, a simple C `enum` defined as:\n\n```enum sign { positive, zero, negative };\n```\n\ncan be written in OCaml as:\n\n```type sign = Positive | Zero | Negative;;\n```\n\n### Recursive variants (used for trees)\n\nVariants can be recursive, and the common use for this is to define tree structures. This really is where the expressive power of functional languages come into their own:\n\n```type binary_tree = Leaf of int | Tree of binary_tree * binary_tree;;\n```\n\nHere're some binary trees. For practice, try drawing them on paper.\n\n```Leaf 3\n```\n```Tree (Leaf 3, Leaf 4)\n```\n```Tree (Tree (Leaf 3, Leaf 4), Leaf 5)\n```\n```Tree (Tree (Leaf 3, Leaf 4), Tree (Tree (Leaf 3, Leaf 4), Leaf 5))\n```\n\n### Parameterized variants\n\nThe binary tree in the previous section has integers at each leaf, but what if we wanted to describe the shape of a binary tree, but decide exactly what to store at each leaf node later? We can do this by using a parameterized (or polymorphic) variant, like this:\n\n```type 'a binary_tree = Leaf of 'a | Tree of 'a binary_tree * 'a binary_tree;;\n```\n\nThis is a general type. The specific type which stores integers at each leaf is called `int binary_tree`. Similarly the specific type which stores strings at each leaf is called `string binary_tree`. In the next example we type some instances into the top-level and allow the type inference system to show the types for us:\n\n```# Leaf \"hello\";;\n- : string binary_tree = Leaf \"hello\"\n# Leaf 3.0;;\n- : float binary_tree = Leaf 3.\n```\n\nNotice how the type name is backwards. Compare this to the type names for lists, eg. `int list` etc.\n\nIn fact it is no coincidence that `'a list` is written \"backwards\" in the same way. 
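As an extra illustration that is not part of the original tutorial: the type parameter of `'a binary_tree` can itself be any type, including the tuples introduced earlier. Assuming the `binary_tree` definition above is still loaded, a toplevel session along these lines should infer a tuple-parameterized tree type:

```ocaml
# Tree (Leaf ("x", 1), Leaf ("y", 2));;
- : (string * int) binary_tree = Tree (Leaf ("x", 1), Leaf ("y", 2))
```

The same `Leaf` and `Tree` constructors work unchanged; only the inferred element type differs.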
Lists are simply parameterized variant types with the following slightly strange definition:\n\n``` type 'a list = [] | :: of 'a * 'a list (* not real OCaml code *)\n```\n\nActually the definition above doesn't quite compile. Here's a pretty-much equivalent definition:\n\n```# type 'a list = Nil | :: of 'a * 'a list;;\ntype 'a list = Nil | :: of 'a * 'a list\n# Nil;;\n- : 'a list = Nil\n# 1 :: Nil;;\n- : int list = :: (1, Nil)\n# 1 :: 2 :: Nil;;\n- : int list = :: (1, :: (2, Nil))\n```\n\nRecall earlier that we said lists could be written two ways, either with the simple syntactic sugar of `[1; 2; 3]` or more formally as `1 :: 2 :: 3 :: []`. If you look at the definition for `'a list` above, you may be able to see the reason for the formal definition.\n\n## Lists, structures and variants - summary\n\nOCaml name Example type definition Example usage\n\n```list int list [1; 2; 3]\ntuple int * string (3, \"hello\")\nrecord type pair = { a : int; b : string } { a = 3; b = \"hello\" }\nvariant type foo = Int of int Int 3\n| Pair of int * string\nvariant type sign = Positive | Zero Positive\n| Negative Zero\nparameterized type 'a my_list = Empty Cons (1, Cons (2, Empty))\nvariant | Cons of 'a * 'a my_list\n```\n\n## Pattern matching (on datatypes)\n\nSo one Really Cool Feature of functional languages is the ability to break apart data structures and do pattern matching on the data. This is again not really a \"functional\" feature - you could imagine some variation of C appearing which would let you do this, but it's a Cool Feature nonetheless.\n\nLet's start with a real program requirement: I wish to represent simple mathematical expressions like `n * (x + y)` and multiply them out symbolically to get `n * x + n * y`.\n\nLet's define a type for these expressions:\n\n```type expr = Plus of expr * expr (* means a + b *)\n| Minus of expr * expr (* means a - b *)\n| Times of expr * expr (* means a * b *)\n| Divide of expr * expr (* means a / b *)\n| Value of string (* \"x\", \"y\", \"n\", etc. *)\n;;\n```\n\nThe expression `n * (x + y)` would be written:\n\n```Times (Value \"n\", Plus (Value \"x\", Value \"y\"))\n```\n\nLet's write a function which prints out `Times (Value \"n\", Plus (Value \"x\", Value \"y\"))` as something more like `n * (x + y)`. Actually, I'm going to write two functions, one which converts the expression to a pretty string, and one which prints it out (the reason is that I might want to write the same string to a file and I wouldn't want to repeat the whole of the function just for that).\n\n```let rec to_string e =\nmatch e with\nPlus (left, right) -> \"(\" ^ (to_string left) ^ \" + \" ^ (to_string right) ^ \")\"\n| Minus (left, right) -> \"(\" ^ (to_string left) ^ \" - \" ^ (to_string right) ^ \")\"\n| Times (left, right) -> \"(\" ^ (to_string left) ^ \" * \" ^ (to_string right) ^ \")\"\n| Divide (left, right) -> \"(\" ^ (to_string left) ^ \" / \" ^ (to_string right) ^ \")\"\n| Value v -> v\n;;\n\nlet print_expr e =\nprint_endline (to_string e);;\n```\n\n(NB: The `^` operator concatenates strings.)\n\nHere's the print function in action:\n\n```# print_expr (Times (Value \"n\", Plus (Value \"x\", Value \"y\")));;\n(n * (x + y))\n```\n\nThe general form for pattern matching is:\n\n```match object with\npattern -> result\n| pattern -> result\n...\n```\n\nThe patterns on the left hand side can be simple, as in the `to_string` function above, or complex and nested. 
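As a small added sketch (not part of the original tutorial), here is the kind of list pattern matching promised earlier, using the cons operator to take a list apart:\n\n```let rec sum xs =\n  match xs with\n    [] -> 0\n  | head :: tail -> head + sum tail;;\n```\n\nCalling `sum [1; 2; 3]` matches `1 :: [2; 3]`, then `2 :: [3]`, then `3 :: []`, and finally the empty list, giving 6. 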
The next example is our function to multiply out expressions of the form `n * (x + y)` or `(x + y) * n` and for this we will use a nested pattern:\n\n```let rec multiply_out e =\nmatch e with\nTimes (e1, Plus (e2, e3)) ->\nPlus (Times (multiply_out e1, multiply_out e2),\nTimes (multiply_out e1, multiply_out e3))\n| Times (Plus (e1, e2), e3) ->\nPlus (Times (multiply_out e1, multiply_out e3),\nTimes (multiply_out e2, multiply_out e3))\n| Plus (left, right) -> Plus (multiply_out left, multiply_out right)\n| Minus (left, right) -> Minus (multiply_out left, multiply_out right)\n| Times (left, right) -> Times (multiply_out left, multiply_out right)\n| Divide (left, right) -> Divide (multiply_out left, multiply_out right)\n| Value v -> Value v\n;;\n```\n\nHere it is in action:\n\n```# print_expr (multiply_out (Times (Value \"n\", Plus (Value \"x\", Value \"y\"))));;\n((n * x) + (n * y))\n```\n\nHow does the `multiply_out` function work? The key is in the first two patterns. The first pattern is `Times (e1, Plus (e2, e3))` which matches expressions of the form `e1 * (e2 + e3)`. Now look at the right hand side of this first pattern match, and convince yourself that it is the equivalent of `(e1 * e2) + (e1 * e3)`.\n\nThe second pattern does the same thing, except for expressions of the form `(e1 + e2) * e3`.\n\nThe remaining patterns don't change the form of the expression, but they crucially do call the `multiply_out` function recursively on their subexpressions. This ensures that all subexpressions within the expression get multiplied out too (if you only wanted to multiply out the very top level of an expression, then you could replace all the remaining patterns with a simple `e -> e` rule).\n\nCan we do the reverse (ie. factorizing out common subexpressions)? We sure can! (But it's a bit more complicated). The following version only works for the top level expression. You could certainly extend it to cope with all levels of an expression and more complex cases:\n\n```let factorize e =\nmatch e with\nPlus (Times (e1, e2), Times (e3, e4)) when e1 = e3 -> Times (e1, Plus (e2, e4))\n| Plus (Times (e1, e2), Times (e3, e4)) when e2 = e4 -> Times (Plus (e1, e3), e4)\n| e -> e\n;;\n```\n```# factorize (Plus (Times (Value \"n\", Value \"x\"), Times (Value \"n\", Value \"y\")));;\n- : expr = Times (Value \"n\", Plus (Value \"x\", Value \"y\"))\n```\n\nThe factorize function above introduces another couple of features. You can add what are known as guards to each pattern match. A guard is the conditional which follows the `when`, and it means that the pattern match only happens if the pattern matches and the condition in the `when`-clause is satisfied.\n\n```match object with\npattern [ when condition ] -> result\npattern [ when condition ] -> result\n...\n```\n\nThe second feature is the `=` operator which tests for \"structural equality\" between two expressions. That means it goes recursively into each expression checking they're exactly the same at all levels down.\n\nOCaml is able to check at compile time that you have covered all possibilities in your patterns. I changed the type definition of `type expr` above by adding a `Product` variant:\n\n```type expr = Plus of expr * expr (* means a + b *)\n| Minus of expr * expr (* means a - b *)\n| Times of expr * expr (* means a * b *)\n| Divide of expr * expr (* means a / b *)\n| Product of expr list (* means a * b * c * ... *)\n| Value of string (* \"x\", \"y\", \"n\", etc. *)\n;;\n```\n\nI then recompiled the `to_string` function without changing it. 
OCaml reported the following warning:\n\n```Warning: this pattern-matching is not exhaustive.\nHere is an example of a value that is not matched:\nProduct _\n```"
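"One way to make the match exhaustive again, added here as a sketch rather than as the original author's code (it assumes a Product list should simply print its sub-expressions joined by \" * \"), is to handle the new constructor explicitly:\n\n```let rec to_string e =\n  match e with\n    Plus (left, right) -> \"(\" ^ (to_string left) ^ \" + \" ^ (to_string right) ^ \")\"\n  | Minus (left, right) -> \"(\" ^ (to_string left) ^ \" - \" ^ (to_string right) ^ \")\"\n  | Times (left, right) -> \"(\" ^ (to_string left) ^ \" * \" ^ (to_string right) ^ \")\"\n  | Divide (left, right) -> \"(\" ^ (to_string left) ^ \" / \" ^ (to_string right) ^ \")\"\n  | Product exprs -> \"(\" ^ String.concat \" * \" (List.map to_string exprs) ^ \")\"\n  | Value v -> v\n;;\n```\n\nWith the extra case added, recompiling no longer produces the warning."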
https://www.jiskha.com/questions/5730/write-the-equation-for-copper-ii-sulfate-pentahydrate-and-ammonium-sulfate-forming | [
"# chemistry\n\nwrite the equation for copper(II) sulfate pentahydrate and ammonium sulfate forming:\nCu(NH4)2(SO4)2*6H2O\n\nnot sure how to balance it\n\nCheck your question very carefully. Is this a question given to you by a teacher?\n\nyeah it is in a lab\n\nCuSO4.5H2O + (NH4)2SO4 >>Cu(NH4)2(SO4)2*6H2O\n\nNote a water is missing on the left side, if you really need 6H2O on the right, you will have to add H2O to the left.\n\nso would this be a redox reaction\n\nWhat do you think is oxidized and reduced?\n\nNo. Where did you get that idea?\n\nthe product would be oxidized?\n\nI don't see any element that has changed oxidation state.\nCu is +2 on the left and right.\nS is +6 on left and right.\nO is -2 on left and right.\nN is -3 on left and right.\n\nI think it is only fair to point out that I have taught chemistry for almost 50 years and I never heard of Cu(NH4)2(SO4)2. That doesn't mean that it doesn't exist and I may have overlooked something in your question (or I may have overlooke something in these last several years). But I hope we have helped you. Thanks for using the Jiska help site.\n\ni really need a suggestion here.\n\nNH3 + CuSO4*5H2O = Cu(NH3)4SO4*H2O\n\nYou need to be more specific about the kind of suggestion for which you are looking. And yes, CuSO4*5H2O + NH3 will produce Cu(NH3)4SO4 + H2O but it won't do it with (NH4)2SO4. Sometimes (NH4)2SO4 is added to CuSO4 BEFORE NH3 is added in order to keep Cu(OH)2 from forming upon the initial addition of NH3.\n\noh thanks!\nso how would i balance the second equation i sent iN?\n\nI don't undersand why we are playing \"twenty questions\" here. You could tell us what you did at the get go? As for balancing, surely you can balance that last equation. However, IF that is the equation you want balanced, a hint is to add 4NH3. You can take it from there.\n\ni am wondering what reasonable values would be for the x in the formula [Cu(NH3)x]SO4*H2O\n\n1. 👍 0\n2. 👎 0\n3. 👁 369\n\n## Similar Questions\n\n1. ### Chemistry\n\n2. The reaction can be described in words as follows: aluminum sulfate reacted with ammonium hydroxide forming the insoluble product or precipitate, aluminum hydroxide and the water-soluble ammonium sulfate. The chemical equation\n\nasked by Jerry on June 24, 2018\n2. ### Chemistry\n\nWrite the dissociation equation for iron(II) ammonium sulfate hexahydrate in water . Fe(NH4)2(SO4)2 * 6H2O --> __?________ I know dissociate means to take apart so for example; NaCl would look like Na^+ + Cl^- And another one\n\nasked by Christine on October 4, 2015\n3. ### Chemistry\n\nWrite the dissociation equation for iron(II) ammonium sulfate hexahydrate in water . Fe(NH4)2(SO4)2 * 6H2O --> __?________ I know dissociate means to take apart so for example; NaCl would look like Na^+ + Cl^- And another one\n\nasked by Christine on October 4, 2015\n4. ### Chemistry\n\nIf 0.273 moles of ferrous ammonium sulfate (Fe(NH4)2(SO4)2 . 6H2O) and an excess of all other reagents are used in a synthesis of K2[Fe(C2O4)3] . 3H2O, how many grams of product will be obtained if the reaction gives a 100% yield?\n\nasked by Anna on January 7, 2012\n5. ### Science\n\nIf .275 moles of ferrous ammonium sulfate (Fe(NH4)2(SO4)2. 6H2O) and an excess of all other reagents are used in a synthesis of K3[Fe(C2O4)3].3H2O, how many grams of product will be obtained if the reaction gives a 100% yield? Can\n\nasked by Anna on January 7, 2012\n6. 
### chemistry\n\nwrite a net ionic equation for the following: Copper (II) sulfate reacts with Ammonium hydroxide to produce Copper (II) oxide, Ammonium sulfate, and water.\n\nasked by rachel on October 26, 2010\n7. ### Chemistry\n\n2. The reaction can be described in words as follows: aluminum sulfate reacted with ammonium hydroxide forming the insoluble product or precipitate, aluminum hydroxide and the water-soluble ammonium sulfate. The chemical equation\n\nasked by Jerry on June 24, 2018\n8. ### Chemistry\n\nIn a lab, we added NH3(aq) to CuSO4(aq), and the solution turned dark blue with a precipitate. The problem is that the products ( Cu(NH3)4 and SO4) should be soluble, no? The question is then, what is the percipitate? The ppt is\n\nasked by Karl on December 17, 2006\n9. ### Chemistry\n\nThe concentration of ammonium ion in the solution of the salt called ferrous ammonium sulfate, formula [Fe(NH4)]2(SO4)3, is 0.360 M. The concentration of sulfate ions in this solution is?\n\nasked by Colby on September 27, 2016\n10. ### Chemistry\n\nWhat volume of gas can be produced in a lab where pressure is 16.0 psi and temperature is 25 C if you collect it over water from the reaction of 50g of ammonium sulfate with 50g of aluminum hydroxide. The equation for the reaction\n\nasked by Victor on April 7, 2017\n\nMore Similar Questions"
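"Following the hint above (a water molecule is missing on the left if 6H2O is wanted on the right), one balanced form of the first equation would be:\n\nCuSO4*5H2O + (NH4)2SO4 + H2O ==> Cu(NH4)2(SO4)2*6H2O\n\nAtom check: Cu 1 = 1; S 2 = 2; N 2 = 2; H 10 + 8 + 2 = 20 on the left and 8 + 12 = 20 on the right; O 9 + 4 + 1 = 14 on the left and 8 + 6 = 14 on the right."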
https://avesis.comu.edu.tr/yayin/3d802932-4094-4dd8-a909-2d4958616e74/scalar-field-solutions-for-anisotropic-universe-models-in-various-gravitation-theories | [
"## Scalar field solutions for anisotropic universe models in various gravitation theories\n\nINTERNATIONAL JOURNAL OF GEOMETRIC METHODS IN MODERN PHYSICS, cilt.17, 2020 (SCI İndekslerine Giren Dergi)",
"• Cilt numarası: 17 Konu: 2\n• Basım Tarihi: 2020\n• Doi Numarası: 10.1142/s0219887820500255\n• Dergi Adı: INTERNATIONAL JOURNAL OF GEOMETRIC METHODS IN MODERN PHYSICS\n\n#### Özet\n\nIn this study, we have investigated homogeneous and anisotropic Marder and Bianchi type I universe models filled with normal and phantom scalar field matter distributions with. in f(R, T) gravitation theory (T. Harko et al., Phys. Rev. D 84 (2011) 024020). In this model, R is the Ricci scalar and T is the trace of energy-momentum tensor. To obtain exact solutions of modified field equations, we have used anisotropy feature of the universe and different scalar potential models with f(R, T) = R + 2f(T) function. Also, we have obtained general relativity (GR) solutions for normal and phantom scalar field matter distributions in Marder and Bianchi type I universes. Additionally, we obtained the same scalar function values by using different scalar field potentials for Marder and Bianchi type I universe models with constant difference in f(R, T) gravity and GR theory. From obtained solutions, we get negative cosmological term value for V(phi) = V-0 constant scalar potential model with Marder and Bianchi type I universes in GR theory. These results agree with the studies of Maeda and Ohta, Aktas et al. also Biswas and Mazumdar. Finally, we have discussed and compared our results in gravitation theories."
https://www.physicsforums.com/threads/angular-width-of-first-order-visible-spectrum.986014/ | [
"# Angular width of first-order visible spectrum\n\nBolter\nHomework Statement:\nSee image attached below\nRelevant Equations:\ndsin(theta) = m*lambda\nThis is a diffraction grating problem I have been given that I am trying to answer",
null,
"Made a attempt at it and just wanted to see if I done this correctly or not? I get an angular width of 0.71 degrees which is very small",
null,
"Any help is much appreciated! Thanks\n\nYou switched from lines per millimetre to lines per centimetre at the start.\n\n•",
null,
"Bolter\nBolter\nYou switched from lines per millimetre to lines per centimetre at the start.\n\nOh shoot yes you're right",
null,
"I don't know what I was thinking there. This must change the whole answer then. I get my distance between adjacent slits to be 1/415,000 m = 2.409... x 10^-6 m\n\nRunning through the same process but with new d value now, I get the angular width to be 7.33 degrees now?"
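"A quick check of that figure, assuming the visible range is taken as roughly 400 nm to 700 nm (the attached problem statement is not reproduced here) and using d = 2.41 x 10^-6 m in d sin(theta) = m*lambda with m = 1:\n\ntheta(400 nm) = arcsin(400 x 10^-9 / 2.41 x 10^-6) = arcsin(0.166) ≈ 9.6°\ntheta(700 nm) = arcsin(700 x 10^-9 / 2.41 x 10^-6) = arcsin(0.290) ≈ 16.9°\nangular width ≈ 16.9° − 9.6° ≈ 7.3°\n\nwhich is consistent with the 7.33 degrees quoted above."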
https://nmfin.tech/forums/reply/2671/ | [
"",
null,
"",
null,
"#2671\n\nThank you very much Ding Hao,\n\nI have one more question about “public VARFitting(MultivariateIntTimeTimeSeries mts,int p)”. How can I estimate p value by using akaike information criterion. For example if I used ARIMA model below code is ok for me. But how can I estimate p value for VARMA model by using akaike information criterion?\n\nBest wishes,\nozcanalp\n\ndouble[] inputNumbers = new double[] { 2, 3, 4 };\nint P_VALUE = 2;\nint D_VALUE = 1;\nint Q_VALUE = 3;\n\nint p = P_VALUE;\nint d = D_VALUE;\nint q = Q_VALUE;\n\nARIMAModel arima = null;\n\ndouble maxAic = 0;\ndouble actualAic = 0;\nint maxp = 10;\nint maxd = 10;\nint maxq = 10;\n\nConditionalSumOfSquares csos = new ConditionalSumOfSquares(inputTimeSeries, p, d, q);\n\nfor(int ii=0;ii<=3;ii++){ for(int jj=0;jj<=2;jj++){ for(int kk=0;kk maxAic){ maxAic = actualAic; maxp = p; maxd = d; maxq = q; csos = new ConditionalSumOfSquares(inputTimeSeries, p, d, q); } } } } Console.Write(\"\\n\"); Console.Write(\"maxp: \" + maxp + \",\" + maxd + \",\" + maxq); Console.Write(\"\\n\"); arima = csos.getModel(); IntTimeTimeSeries xt = new SimpleTimeSeries(inputTimeSeries); ARIMAForecast instance = new ARIMAForecast(xt, arima); ARIMAForecast.Forecast frc = instance.next(); double next = frc.xHat(); double err = frc.var();"
https://r-coder.com/cut-r/ | [
"Home » Data Manipulation » Cut in R\n\n# Cut in R",
null,
"## Cut function in R\n\nSometimes it is useful to categorize the values of a continuous variable in different levels of a factor. For that purpose, you can use the R cut function. In the following block of code we show the syntax of the function and the simplified description of the arguments.\n\ncut(num_vector, # Numeric input vector\nbreaks, # Number or vector of breaks\nlabels = NULL, # Labels for each group\ninclude.lowest = FALSE, # Whether to include the lowest 'break' or not\nright = TRUE, # Whether the right interval is closed (and the left open) or vice versa\ndig.lab = 3, # Number of digits of the groups if labels = NULL\nordered_result = FALSE, # Whether to order the factor result or not\n…) # Additional arguments\n\n### Cut in R: the breaks argument\n\nThe breaks argument allows you to cut the data in bins and hence to categorize it. Consider the following vector:\n\nx <- -5:5\n\nOn the one hand, you can set the breaks argument to any integer number, creating as many intervals (levels) as the specified number. These intervals will be all of the same length.\n\ncut(x, breaks = 2)\n(-5.01,0] (-5.01,0] (-5.01,0] (-5.01,0] (-5.01,0]\n(-5.01,0] (0,5.01] (0,5.01] (0,5.01] (0,5.01] (0,5.01]\nLevels: (-5.01,0] (0,5.01]\n\nOn the other hand, you can specify the intervals you prefer.\n\ncut(x, breaks = c(-6, 2, 5))\n(-6,2] (-6,2] (-6,2] (-6,2] (-6,2] (-6,2] (-6,2] (-6,2] (2,5] (2,5]\n(2,5]\nLevels: (-6,2] (2,5]\n\nIt is worth to mention that if the intervals have decimals you can modify the number of decimals with the dig.lab argument and decide whether to order the results or not with the ordered_result argument.\n\n### Cut in R: the labels argument\n\nYou can also change the levels of the output factor with the labels argument.\n\nx <- c(12, 1, 25, 12, 65, 2, 6, 17)\n\ncut(x, breaks = c(0, 3, 12, 15, 20, 80),\nlabels = c(\"First\", \"Second\", \"Third\", \"Fourth\", \"Fifth\"))\n\n# Equivalent to\nc <- cut(x, breaks = c(0, 3, 12, 15, 20, 80))\nlevels(c) <- c(\"First\", \"Second\", \"Third\", \"Fourth\", \"Fifth\")\nSecond First Fifth Second Fifth First Second Fourth\nLevels: First Second Third Fourth Fifth\n\n### Include lowest value\n\nThe include.lowest argument specify whether to include the lowest break or not. By default, it is set to FALSE.\n\nx <- 15:25\n\ncut(x, breaks = c(15, 20, 25), include.lowest = FALSE)\n<NA> (15,20] (15,20] (15,20] (15,20]\n(15,20] (20,25] (20,25] (20,25] (20,25] (20,25]\nLevels: (15,20] (20,25]\n\nIn this case, the lowest value (15), specified as a break, it is not included in the interval (the left interval is open), so the value is categorized as NA, because the number 15 doesn’t belong to any of the intervals. 
However, if you set include.lowest to TRUE, the value will be included, as the left interval of the lowest break will be closed.\n\ncut(x, breaks = c(15, 20, 25), include.lowest = TRUE)\n[15,20] [15,20] [15,20] [15,20] [15,20]\n[15,20] (20,25] (20,25] (20,25] (20,25] (20,25]\nLevels: [15,20] (20,25]\n\n### The argument ‘right’\n\nConsider, for instance, you want to categorize some data ( x ) in the following categories:\n\n• Low, if x \\in [0, 150).\n• Medium, if x \\in [150, 200).\n• High, if x \\in [200, \\infty ).\n\nBy default, the argument right is set to TRUE, so the intervals are opened on the left and closed on the right (x, y].\n\nx <- c(75, 150, 160, 151, 216, 149)\n\ncategories <- cut(x, breaks = c(0, 150, 200, Inf),\nlabels = c(\"low\", \"medium\", \"high\"))\n\ndata.frame(x, categories)\n\nIn this scenario, not all the values are categorized well.\n\n x categories\n75 low\n150 low # <-- Categorized as low\n160 medium\n151 medium\n216 high\n149 low\n\nHowever, if you set right = FALSE, the intervals will be closed on the left and open on the right.\n\ncategories <- cut(x, breaks = c(0, 150, 200, Inf),\nlabels = c(\"low\", \"medium\", \"high\"),\nright = FALSE)\n\ndata.frame(x, categories)\n\nNow the data is categorized correctly:\n\n x categories\n75 low\n150 medium # <-- Categorized as medium\n160 medium\n151 medium\n216 high\n149 low\nChanging arguments right and include.lowest can lead to mistakes, so we recommend changing the values of the breaks argument instead of the others.\n\n## Example: How to categorize age groups in R?\n\nConsider, for instance, that you want to categorize a numeric vector of ages in the following categories:\n\n• 0-14: Children.\n• 15-24: Youth.\n• 65 and over: Senior.\nage <- c(0, 12, 89, 14, 25, 2, 65, 1, 16, 24, 67, 61, 64)\n\nAt first glance, you could think in set the following, but an error will arise.\n\ncut(age, breaks = c(14, 24, 64, Inf),\nlabels = c(\"Children\", \"Youth\", \"Adult\", \"Senior\"))\nError in cut.default(age, breaks = c(14, 24, 64, Inf), labels = c(“Children”,: lengths of ‘breaks’ and ‘labels’ differ\n\nNonetheless, if you have specified 4 break values and 4 labels, as the breaks are intervals, you are generating three intervals instead of four (14-24, 24-64 and 64-Inf) . Consequently, you will need to add in this case the lowest value to have four intervals:\n\ncut(age, breaks = c(0, 14, 24, 64, Inf),\nlabels = c(\"Children\", \"Youth\", \"Adult\", \"Senior\"))\n<NA> Children Senior Children Adult Children Senior Children\nLevels: Children Youth Adult Senior\n\nBut now the lowest age (0), will be categorized as NA, as the lowest value of the breaks is not included by default. You could solve this changing the 0 of the breaks (for example setting -0.01 instead of 0) or setting the include.lowest argument to TRUE.\n\ncut(age, breaks = c(-0.01, 14, 24, 64, Inf),\nlabels = c(\"Children\", \"Youth\", \"Adult\", \"Senior\"))\n\n# Equivalent to:\ncut(age, breaks = c(0, 14, 24, 64, Inf),\nlabels = c(\"Children\", \"Youth\", \"Adult\", \"Senior\"),\ninclude.lowest = TRUE)\nChildren Children Senior Children Adult Children Senior Children\nLevels: Children Youth Adult Senior\n\n## Example: How to categorize exam notes?\n\nAs another example, exam notes can be categorized as fail, if the note is lower than 5 points out of 10, or pass in the other case. 
We will generate a simple data set to categorize exam qualifications.\n\nnumeric <- c(6.1, 5.3, 8.9, 5.0, 8.8, 1.9, 6.6, 7.2, 9.4, 4.9,\n7.1, 3.9, 1.0, 9.3, 9.9, 5.9, 5.1, 8.4, 3.2, 10.0)\n\nIn this example you could implement the function as follows:\n\ncategorized_note <- cut(numeric, breaks = c(0, 4.9, 10),\nlabels = c(\"fail\", \"pass\"))\n\n# Equivalent to:\n# categorized_note <- cut(numeric, breaks = c(0, 5, 10.1),\n# labels = c(\"fail\", \"pass\"), right = FALSE)\n\n# You could specify factor levels with levels function\n# levels(categorized_note) <- c(\"fail\", \"pass\")\n\n# Generating the dataframe\nfinal_notes <- data.frame(numeric, categorized_note)\nhead(final_notes)\n\nNote that in the equivalent alternative we set right = FALSE, because if TRUE, a 5 would be fail instead of pass. However, when setting this argument to FALSE, the right interval is open, so a 10 won’t enter the interval and that is the reason because we set the third break as 10.1 instead of 10. The final result is as follows:\n\n numeric categorized_note\n1 6.1 pass\n2 5.3 pass\n3 8.9 pass\n4 5.0 pass\n5 8.8 pass\n6 1.9 fail"
https://physics.gurumuda.net/linear-momentum-impulse-collisions.htm | [
"Basic Physics\n\n# Linear Momentum, Impulse, Collisions\n\n1. Linear Momentum\n\n1.1 Linear Momentum Definition\n\nThe linear momentum of an object is defined as the result of multiplying the mass of the object by the velocity of the object.\n\np = m v\n\nwhere:\n\np = momentum, m = mass (kg), v = velocity (m/s)\n\nLinear momentum, or simply momentum, is a vector quantity as it is derived by multiplying a vector (velocity) and a scalar (mass). As momentum is a vector quantity, it has direction and magnitude. Momentum shares direction with the velocity or motion of an object.\n\nMomentum is proportional to mass and velocity since the greater the mass, the greater the momentum. Likewise, the greater the velocity, the greater the momentum. Suppose there are two cars, say cars A and B. If car A’s mass is greater than car B’s and both cars move at the same velocity, car A will have greater momentum than that of car B. Similarly, if cars A and B are of the same mass, but car A moves faster than car B, car A’s momentum is greater than that of car B.\n\nIf an object that has mass does not move or is at rest (has zero velocity), the momentum of the object is zero.\n\nThe SI unit of momentum is kg m/s, which is comprised of the unit of mass and unit of velocity.\n\nRead : The ideal gas law\n\n1.2 Newton’s Second Law\n\nPreviously, you have learned Newton’s Second Law which is stated in the equation ΣF = m a and explains the relationship between the net force and mass as well as acceleration of an object. The net force acting on an object which has mass renders acceleration to the object. This time, you are to be introduced to another form of Newton’s Second Law, which explains the relationship between the net force and change in momentum of an object.\n\nIf the net force acts on an object which is initially at rest, the object will move. Before moving, the object does not have any momentum. The object has momentum after movement is rendered. In other words, the net force acting on the object causes a change in the object’s momentum for a given time interval. The rate of change in an object’s momentum is equal to the net force acting on the object.",
"ΣF = Δp / Δt   (Equation 1.1)",
"Where:\n\nΣF = net force (Newton), Δt = time interval (second), Δp = m (vt – vo) = change in momentum (kg m/s).\n\nEquation 1.1 is another form of Newton’s Second Law, which explains the relationship between the net force and rate of change in momentum of an object, either when the object’s mass is constant or changes.",
"ΣF = m a   (Equation 1.2; for constant mass, Δp = m Δv, so Equation 1.1 reduces to ΣF = m Δv/Δt = m a)",
"Where:\n\nΣF = net force (Newton), m = mass (kg), a = acceleration (m/s2)\n\nEquation 1.2 is a Newton’s Second Law equation that explains the relationship between the net force and acceleration of an object with a constant mass.\n\nRead : Equation of diverging (concave) lens\n\n2. Impulse\n\n2.1 Impulse Definition\n\nImpulse is defined as the result of multiplying force or net force by the time interval.\n\n###",
"I = ΣF Δt",
"Where:\n\nI = impulse, ΣF = net force (Newton), Δt = time interval (second).\n\n2.2 Impulse-Momentum Theorem\n\nImpulse-momentum theorem is obtained by deriving an equation from equation 1.1\n\nΣF Δt = Δp\n\nI = Δp ………………….. Equation 1.3\n\nEquation 1.3 indicates that impulse is equal to change in momentum.\n\nI = ΣF Δt\n\nΔp = m vt – m vo = m (vt – vo)\n\nExample question 1:\n\nA ball with a mass of 1 kg is thrown horizontally at a speed of 2 m/s. Then, the ball is hit in the same direction as the initial direction. The ball takes 1 ms to come into contact with the hitter, and the speed of the ball after leaving the hitter is 4 m/s. What is the force applied by the hitter on the ball?\n\nKnown :\n\nmass (m) = 1 kg, Initial velocity (vo) = 2 m/s, time interval (Δt) = 1 x 10-3 second, final velocity (vt) = 4 m/s\n\nThe direction of the ball’s motion does not change, thus the initial speed and the final speed have the same mark.\n\nWanted: force (F)\n\nSolution :",
"F = m (vt – vo) / Δt = (1 kg)(4 m/s – 2 m/s) / (1 x 10-3 s) = 2000 N",
"Read : Definition of heat, mechanical equivalent of heat, equation of heat\n\nExample question 2:\n\nA ball with a mass of 1 kg is thrown horizontally to the right at a speed of 10 m/s. After being hit, the ball moves to the left at a speed of 20 m/s. Determine the impulse is acting on the ball.\n\nKnown :\n\nmass (m) = 1 kg\n\nInitial velocity (vo) = 10 m/s,\n\nFinal velocity (vt) = -20 m/s\n\nThe directions of the ball’s motion (directions of velocity) are opposite, thus the initial speed and the final speed have different sign.\n\nWanted: Impulse (I)\n\nSolution :\n\nI = m (vt – vo) = 1 kg (-20 m/s – 10 m/s) = 1 kg (-30 m/s) = – 30 kg m/s\n\nThe negative sign indicates that the direction of the impulse is the same as the direction of the final speed of the ball (to the left)\n\nExample question 3\n\nA student hits a 0.1 kg volleyball which is initially at rest. The student’s hand comes into contact with the volleyball for 0.01 second. After being hit, the volleyball moves at a speed of 2 m/s.\n\n(a) What is the amount of force exerted by the student’s hand to the volleyball?\n\n(b) Newton’s Third Law states that if the student exerts force to the volleyball, the volleyball will exert force too to the student. What is the size of force exerted by the volleyball to the student’s hand?\n\n(c) If the student’s hand comes into contact with the volleyball for 0.001 seconds, what is the size of force exerted by the volleyball to the student’s hand?\n\nKnown :\n\nmass (m) = 0.1 kg,\n\nTime interval 1 (Δt1) = 0.01 s = 1 x 10-2 s\n\nInitial velocity (vo) = 0\n\nFinal velocity (vt) = 2 m/s\n\nTime interval 2 (Δt2) = 0.001 s = 1 x 10-3 s\n\nWanted: force (F)\n\nSolution :\n\n(a) The force applied by the student’s hand to the volleyball for a period of contact time of 0.01 second is",
"F = m (vt – vo) / Δt1 = (0.1 kg)(2 m/s – 0) / (0.01 s) = 20 N",
"(b) The force exerted by the volleyball to the student’s hand for a period of contact time of 0.01 second is\n\nNewton’s Third Law: F action = – F reaction\n\nThe size of force exerted by the ball to the student’s hand is 200 N\n\n(c) The force exerted by the ball to the student’s hand for a period of contact time of 0.001 seconds is",
"F = m (vt – vo) / Δt2 = (0.1 kg)(2 m/s – 0) / (0.001 s) = 200 N",
"Based on the results obtained, it can be concluded that the force exerted by the ball to the student’s hand is greater when the contact time is shorter. Greater force cause greater pain to the student’s hand. You can prove this when you play volleyball. The contact time you will take when you hit a harder volleyball is shorter than when you hit the softer one. The difference in the contact time makes your hand feel greater pain when you hit a harder ball.\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed."
https://file.scirp.org/Html/1-2320026_25400.htm | [
"Heat Transfer with Viscous Dissipation in Couette-Poiseuille Flow under Asymmetric Wall Heat Fluxes\n\nOpen Journal of Fluid Dynamics\nVol.2 No.4(2012), Article ID:25400,4 pages DOI:10.4236/ojfd.2012.24011\n\nHeat Transfer with Viscous Dissipation in Couette-Poiseuille Flow under Asymmetric Wall Heat Fluxes\n\nJ. Sheela-Francisca1, C. P. Tso2, Dirk Rilling3\n\n1Faculty of Engineering, Multimedia University, Cyberjaya, Malaysia\n\n2School of Mechanical & Aerospace Engineering, Nanyang Technological University, Singapore City, Singapore\n\n3Faculty of Engineering & Technology, Multimedia University, Jalan Ayer Keroh Lama, Melaka, Malaysia\n\nEmail: franciscasheela@hotmail.com\n\nReceived June 29, 2012; revised August 5, 2012; accepted August 13, 2012\n\nKeywords: Viscous Dissipation; Couette-Poiseuille Flow; Newtonian Fluid; Nusselt Number; Brinkman Number; Constant Heat-Flux\n\nABSTRACT\n\nAnalytical solutions of temperature distributions and the Nusselt numbers in forced convection are reported for flow through infinitely long parallel plates, where the upper plate moves in the flow direction with constant velocity and the lower plate is kept stationary. The flow is assumed to be laminar, both hydro-dynamically and thermally fully developed, taking into account the effect of viscous dissipation of the flowing fluid. Both the plates being kept at specified and at different constant heat fluxes are considered as thermal boundary conditions. The solutions obtained from energy equation are in terms of Brinkman number, dimensionless velocity and heat flux ratio. These parameters greatly influence and give complete understanding on heat transfer rates that has potentials for designing and analyzing energy equipment and processes.\n\n1. Introduction\n\nFlow of Newtonian fluids through various channels is of practical importance and heat transfer is dependent on flow conditions such as flow geometry and physical properties. Investigations in heat transfer behavior through various channels showed that the effect of viscous dissipation cannot be neglected for some applications, such as flow through micro-channels, small conduits and extrusion at high speeds. The thermal development of forced convection through infinitely long fixed parallel plates, both plates having specified constant heat flux had been investigated [1-5]. For the same but filled by a saturated porous medium, heat transfer analysis was done where the walls were kept at uniform wall temperature with the effect of viscous dissipation and axial conduction taken into account . In , it was concluded that in a porous medium, the absence of viscous dissipation effect can have great impact. For the horizontal double passage channel, uniform wall temperature with asymmetric and symmetric heating and the effect of viscous dissipation had been investigated .\n\nFor the pipe flow, where the walls are kept either at constant heat flux or constant wall temperature, analytical solution is obtained for both hydro-dynamically and thermally fully developed and thermally developing Newtonian fluid flow, considering the effect of viscous dissipation [9,10].\n\nAnalytical solution with the effect of viscous dissipation was derived for Couette-Poiseuille flow of nonlinear visco-elastic fluids and with the simplified Phan-ThienTanner fluid between parallel plates, with stationary plate subjected to constant heat flux and the other plate moving with constant velocity but insulated [11-13]. 
Numerical solution of fully developed laminar heat transfer of power-law non-Newtonian fluids in plane Couette flow, with constant heat flux at one wall with other wall insulated had been investigated and analytical solution was derived for Newtonian fluid .\n\nA numerical investigation had been done to find the heat transfer for the simultaneously developing steady laminar flow, where the fluid was considered to be viscous non-Newtonian described by a power-law model flowing between two parallel plates with several different thermal boundary conditions . When a thin slab was symmetrically heated on both sides, the hyperbolic heat conduction equation was solved analytically . Considering the effect of viscous dissipation and pressure stress work of the fluid, the steady laminar boundary layer flow along a vertical stationary isothermal plate was studied. The variation of wall heat transfer and wall shear stress along the plate was discussed .\n\nThe Bingham fluid was assumed to be flowing in between two porous parallel plates. With the slip effect at the porous walls, the analytical solutions were obtained for the Couette-Poiseuille flow . Numerical evaluation for developing temperature profiles by a finite-difference method were carried out for non-Newtonian fluid through parallel plates and circular ducts. The effects of viscous dissipation and axial heat conduction were taken into account. Graphical representation of Nusselt numbers were noted for various parameters . The thermal entrance region of a horizontal parallel plate channel, where the lower plate was heated isothermally and the upper plate was cooled isothermally was considered. Numerical results were found on the onset of instability for longitudinal vortices, with effect of viscous dissipation . A numerical analysis was carried out, taking viscous dissipation into account for pseudo-plastic nonNewtonian fluids aligned with a semi-infinite plate .\n\nFrom the literature survey, it is observed that heat transfer analysis with effect of viscous dissipation is not found for the Couette-Poiseuille flow with both the plates being kept at specified but different constant heat fluxes. The heat transfer analysis with one plate moving is a different fundamental problem worth pursuing. This study is necessary specifically in the design of special heat exchangers and other devices where the dimensions have to be kept very small. Hence, the case of lower plate being fixed and the upper plate moving with constant velocity, both being imposed to different but constant heat fluxes is considered. The energy equation is solved leading to expressions in temperature profiles and Nusselt number, that could be useful to industrial applications.\n\n2. Statement of Problem and Mathematical Formulation\n\nConsider two flat infinitely long parallel plates distanced W or 2 apart, where the upper plate is moving with constant velocity U and the lower plate is fixed. The coordinate system chosen is shown in Figure 1. The flow through the plates is considered at a sufficient distance from the entrance such that it is both hydro-dynamically and thermally fully developed. The axial heat conduction in the fluid and through the wall is assumed to be negligible. The fluid is assumed to be Newtonian and with constant properties. The thermal boundary conditions are the upper plate is kept at constant heat flux while the lower plate at different constant heat flux.\n\nThe momentum equation in the x-direction is described as",
"Equation (1), where u is the velocity of the fluid, μ is the dynamic viscosity and P is the pressure.\n\nFigure 1. Notation to the problem.\n\nThe velocity boundary conditions are u = 0 when y = 0 and u = U when y = W. Using the dimensionless parameters defined in Equation (2), the well-known velocity distribution takes the form of Equation (3), in which the mean velocity (um) is given by Equation (4); the expression for u itself is obtained by solving the momentum Equation (1).",
"The energy equation, including the effect of viscous dissipation, is given by Equation (5), where the second term on the right-hand side is the viscous-dissipative term. In accordance with the assumption of a thermally fully developed flow with uniformly heated boundary walls, the longitudinal conduction term is neglected in the energy equation. Following this, the temperature gradient along the axial direction is independent of the transverse direction and is given by Equation (6), written in terms of the upper and lower wall temperatures. Introducing the non-dimensional quantity θ defined in Equation (7), a dimensionless constant defined in Equation (8), and the modified Brinkman number Brq1 defined in Equation (9), Equation (5) can be written in the dimensionless form of Equation (10).",
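"Likewise only as a hedged sketch (the exact dimensionless groups used by the authors are not reproduced here), the dimensional energy equation with viscous dissipation and a modified Brinkman number based on the upper-wall heat flux $q_1$ are commonly written as $\rho c_p u \, \partial T/\partial x = k \, \partial^2 T/\partial y^2 + \mu (du/dy)^2$ (cf. Equation (5)) and $Br_{q1} = \mu u_m^2/(q_1 W)$ (cf. Equation (9)).",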
"(10)\n\nThe thermal boundary conditions are",
null,
"(11)\n\nThe solution of Equation (10) under the above thermal boundary conditions can be obtained as",
null,
"(12)\n\nTo evaluate",
null,
"in the above equation, a third boundary condition is required:",
null,
". (13)\n\nBy substituting Equation (13) into Equation (12),",
null,
"can be expressed as",
null,
"(14)\n\nTherefore, the solution of Equation (10) under the above thermal boundary conditions can be written in a simplified form as",
null,
"(15)\n\nwhere",
null,
"(16)\n\nIn fully developed flow, it is usual to utilize the mean fluid-temperature,",
null,
", rather than the centerline temperature, when defining the Nusselt number. Thus mean or bulk temperature is given by",
null,
", (17)\n\nwith",
null,
"the cross-sectional area of the channel and the denominator on the right-hand side of Equation (17) can be written as",
null,
". (18)\n\nUsing Equations (3) and (15), the numerator of Equation (17) can be found. Therefore the dimensionless mean temperature is given by",
null,
". (19)\n\nAt this point, the convective heat transfer coefficient can be evaluated by the equation",
null,
". (20)\n\nDefining Nusselt number to be",
null,
", (21)\n\nwhere Dh is the hydraulic diameter defined by Dh = 2W, the expression for Nusselt number can be shown to be",
null,
"(22)\n\nWhen q2 = 0,",
null,
"(23)\n\nagreeing with reference .\n\nExplicit expressions for Nusselt number for various values of U*,",
null,
"and",
null,
"are given in the following discussions.\n\n3. Graphical Results and Discussions\n\nFor the purpose of discussion on the behavior of the Couette-Poiseuille flow, two types of graphs based on the analytical solutions are made. The temperature profile in the channel is plotted with variations of various parameters to indicate the heated region, and the Nusselt number is plotted to reveal the heat transfer characteristics of the flow.\n\n3.1. Temperature Profiles against the Channel Width for Various Parameters\n\n3.1.1. Temperature Profiles for the Case of Insulated Lower Plate\n\nFigure 2 shows the dimensionless temperature profiles of",
"θ versus Y, where the lower plate is insulated, at five dimensionless velocities U* = −1.0, −0.5, 0.0, 0.5 and 1.0, and at six selected Brq1 values from −0.01 to 0.5, as shown in (a) to (f). The temperature distributions have a similar pattern but different shapes, and all the curves converge at Y = 1, θ equal to 0, by definition. At Y = 0, the curves are vertical to satisfy the insulated condition. As expected, generally the motion of the upper plate tends to impart more heat into the fluid layers that are dragged along, unless off-set by the viscous dissipation effects. It is observed that when Brq1 = −0.01, 0.0, 0.01 and 0.1, the temperature distribution is negative, which implies there is a decrease in heat transfer, whereas when Brq1 = −0.1 and 0.5, θ manifests in a different way such that θ takes both negative and positive values.\n\n3.1.2. Temperature Profiles for a Fixed Brinkman Number for Various Heat Flux Ratios\n\nThe effect of viscous dissipation is seen in the value of the modified Brinkman number. It is interesting to observe the behavior of the temperature profiles for various heat flux ratios at a fixed modified Brinkman number and hence to note the effect of viscous dissipation. In Figure 3, for a Brq1 value of 0.01, the temperature distribution is investigated at U* = −1.0, −0.5, 0.0, 0.5 and 1.0 for various heat flux ratios. When q2/q1 = 0, the values of theta are all negative. For the equal heat fluxes, for U* = −1.0, −0.5 and 0.0, theta takes only negative values, but for U* = 0.5 and 1.0, theta takes both positive as well as negative values. When q2/q1 = 2.0 and 10.0, theta takes both positive as well as negative values. For q2/q1 = 50.0, when the upper plate moves in the negative direction with values U* = −1.0, −0.5, theta takes both positive as well as negative values, and when the upper plate is fixed or moves in the positive direction with values U* = 0.5 and 1.0, theta takes positive values. As expected again, all the curves converge at Y = 1.\n\n3.2. Nusselt Number Variations\n\nFigure 4 shows the plots of Nusselt number versus the heat flux ratio q2/q1 at U* = −1.0, −0.5, 0.0, 0.5 and 1.0 at various Brq1 values. The hyperbolic curves have asymptotes occurring at different q2/q1 values. It is observed that, for the specified values of U*, when Brq1 = −0.01, 0.0, 0.01 and 0.1, the asymptotes fall on the positive side of q2/q1, whereas for the remaining Brinkman numbers the asymptotes fall at the q2/q1 values given in Table 1.",
"Figure 2. Temperature profiles at U* = −1.0, −0.5, 0.0, 0.5 and 1.0 at various Brq1 for the case of q2/q1 = 0. (a) Brq1 = −0.01; (b) Brq1 = −0.10; (c) Brq1 = 0.00; (d) Brq1 = 0.01; (e) Brq1 = 0.10; (f) Brq1 = 0.50.\n\nFigure 3. Temperature profiles at U* = −1.0, −0.5, 0.0, 0.5 and 1.0 at various q2/q1, for the case of Brq1 = 0.01. (a) q2/q1 = 0.0; (b) q2/q1 = 0.50; (c) q2/q1 = 1.0; (d) q2/q1 = 2.0; (e) q2/q1 = 10.0; (f) q2/q1 = 50.0.\n\nFigure 4. Nusselt number versus q2/q1, at U* = −1.0, −0.5, 0.0, 0.5, and 1.0, at various Brq1. Vertical lines are asymptotes. (a) Brq1 = −0.01; (b) Brq1 = −0.10; (c) Brq1 = 0.00; (d) Brq1 = 0.01; (e) Brq1 = 0.10; (f) Brq1 = 0.50.\n\nTable 1. Values of q2/q1 at various Brq1 and U*.\n\n4. Conclusion\n\nHeat transfer with the effect of viscous dissipation has been analysed. Analytical expressions for the Nusselt number have been obtained for fully developed Newtonian fluid flow between infinitely long parallel plates, where the lower plate is fixed and the upper plate is moving with constant velocity. When both plates are kept at different constant heat fluxes, the dimensionless temperature distribution is given by Equation (15) and the Nusselt number by Equation (22); they are in terms of Brq1, q2/q1, U*,",
"and Various dimensionless velocity values such as −1.0, −0.5, 0.0, 0.5, and 1.0, constant heat flux ratios 0, 0.5, 1.0, 2.0, 10.0 and 50.0 and modified Brinkaman numbers −0.01, −0.1, 0.0, 0.01, and 1.0 are considered in the analysis. The behaviour of the temperature distribution and the Nusselt number against these parameters are discussed. The Brinkman Number, the speed of the moving plate and different values of heat fluxes at both the plates have significant impact in the thermal development.\n\n5. Acknowledgements\n\nPart of the results was submitted as a conference paper to the 4th International Meeting of Advances in Thermo Fluids (IMAT2011), Melaka, Malaysia.\n\nREFERENCES\n\n1. J. Sheela-Francisca and C. P. Tso, “Viscous Dissiption Effects on Parallel Plates with Constant Heat Flux Boundary Conditions,” International Communcations in Heat Mass Transfer, Vol. 36, No. 3, 2009, pp. 249-254. doi:10.1016/j.icheatmasstransfer.2008.11.003\n2. O. Aydin and M. Avci, “Viscous-Dissipation Effects on the Heat Transfer in a Poiseuille Flow,” Applied Energy, Vol. 83, No. 5, 2006, pp. 495-512. doi:10.1016/j.apenergy.2005.03.003\n3. J. W. Ou and K. C. Cheng, “Effects of Pressure Work and Viscous Dissipation on Graetz Problem for Gas Flows in Parallel Plate Channels,” Warme-und Stoffubertraggung, Vol. 6, No. 4, 1973, pp. 191-198. doi:10.1007/BF02575264\n4. C. P. Tso, J. Sheela Francisca and Y.-M. Hung, “Viscous Dissipation Effects of Power-Law Fluid within Parallel Plates with Constant Heat Flues,” Journal of Non-Newtonian Fluid Mechanics, Vol. 165, No. 11-12, 2010, pp. 625-630. doi:10.1016/j.jnnfm.2010.02.023\n5. D. E. Gray, “The Motion of Viscous Fluids,” Amercan Institute of Physics Handbook, 3rd Edition, Section 3c-2, American Institute of Physics, McGraw-Hill, New York, 1972.\n6. D. A. Nield, A. V. Kuznetsov and M. Xiong, “Thermally Developing Forced Convection in a Porous Medium: Parallel Plate Channel with Walls at Uniform Temperature, with Axial Conduction and Viscous Dissipation Effects,” International Journal Heat and Mass Transfer, Vol. 46, No. 4, 2003, pp. 643-651. doi:10.1016/S0017-9310(02)00327-7\n7. Y. M. Hung and C. P. Tso, “Effects of Viscous Disspation on Fully Developed Forced Convection in Porous Media,” International Communications in Heat Mass Transfer, Vol. 36, No. 6, 2009, pp. 597-603. doi:10.1016/j.icheatmasstransfer.2009.03.008\n8. M. M. Salah El-Din, “Effect of Viscous Dissipation on Fully Developed Combined Convection in a Horizontal Double-Passage Channel,” Heat Mass Transfer, Vol. 38, No. 7-8, 2002, pp. 673-677. doi:10.1007/s002310100255\n9. O. Aydin, “Effects of Viscous Dissipation on the Heat Transfer in Forced Pipe Flow. Part 1: Both Hydro-Dynamically and Thermally Fully Developed Flow,” Energy Conversion Management, Vol. 46, 2005, pp. 757-769.\n10. O. Aydin, “Effects of Viscous Dissipation on the Heat Transfer in a Forced Pipe Flow. Part 2: Thermally Developing Flow,” Energy Conversion Management, Vol. 46, No. 18-19, 2005, pp. 3091-3102. doi:10.1016/j.enconman.2005.03.011\n11. S. H. Hashemabadi, S. Gh. Etemad and J. Thibault, “Forced Convection Heat Transfer of Couette-Poiseuille Flow of Nonlinear Visco-Elastic Fluids between Parallel Plates,” International Journal of Heat and Mass Transfer, Vol. 47, No. 17-18, 2004, pp. 3985-3991. doi:10.1016/j.ijheatmasstransfer.2004.03.026\n12. F. T. Pinho and P. J. Oliveira, “Analysis of Forced Convection in Pipes And Channels with the Simplified Phan-Thien-Tanner Fluid,” International Journal of Heat and Mass Transfer, Vol. 
43, No. 13, 2000, pp. 2273-2287. doi:10.1016/S0017-9310(99)00303-8\n13. S. H. Hashemabadi, S. Gh. Etemad and J. Thibault, “Mathematical Modeling of Laminar Forced Convection of Simplified Phan-Thien-Tanner (SPTT) Fluid between Moving Parallel Plates,” International Communications in Heat Mass Transfer, Vol. 30, No. 2, 2003, pp. 197-205. doi:10.1016/S0735-1933(03)00030-7\n14. G. Davaa, T. Shigechi and S. Momoki, “Effect of Viscous Dissipation on Fully Developed Heat Transfer of Non-Newtonian Fluids in Plane Laminar Poiseuille-Couette Flow,” International Communications in Heat Mass Transfer, Vol. 31, No. 5, 2004, pp. 663-672. doi:10.1016/S0735-1933(04)00053-3\n15. O. Aydin and M. Avci, “Laminar Forced Convection with Viscous Dissipation in a Couette-Poiseuille Flow between Parallel Plates,” Applied Energy, Vol. 83, No. 8, 2006, pp. 856-867. doi:10.1016/j.apenergy.2005.08.005\n16. S. Gh. Etemad, A. S. Majumdar and B. Huang, “Viscous Dissipation Effects in Entrance Region Heat Transfer for a Power Law Fluid Flowing between Parallel Plates,” International Journal of Heat and Fluid Flow, Vol. 15, No. 2, 1994, pp. 122-131. doi:10.1016/0142-727X(94)90066-3\n17. M. Lewandowska and L. Malinowski, “An Analytical Solution of the Hyperbolic Heat Conduction Equation for the Case of a Finite Medium Symmetrically Heated on Both Sides,” International Communications in Heat Mass Transfer, Vol. 33, No. 1, 2006, pp. 61-69. doi:10.1016/j.icheatmasstransfer.2005.08.004\n18. A. Pantokratoras, “Effect of Viscous Dissipation and Pressure Stress Work in Natural Convection along a Vertical Isothermal Plate. New Results,” International Journal of Heat and Mass Transfer, Vol. 46, No. 25, 2003, pp. 4979-4983. doi:10.1016/S0017-9310(03)00321-1\n19. Y.-L. Chen and K.-Q. Zhu, “Couette-Poiseuille Flow of Bingham Fluids between Two Porous Parallel Plates with Slip Conditions,” Journal of Non-Newtonian Fluid Mechanics, Vol. 153, No. 1, 2008, pp. 1-11. doi:10.1016/j.jnnfm.2007.11.004\n20. O. Jambal, T. Shigechi, G. Davaa and S. Momoki, “Effects of Viscous Dissipation and Fluid Axial Heat Conduction on Heat Transfer for Non-Newtonian Fluids in Ducts with Uniform Wall Temperature Part I: Parallel Plates and Circular Ducts,” International Communications in Heat Mass Transfer, Vol. 32, No. 9, 2005, pp. 1165-1173. doi:10.1016/j.icheatmasstransfer.2005.07.002\n21. K. C. Cheng and R. S. Wu, “Viscous Dissipation Effects on Convective Instability and Heat Transfer in Plane Poiseuille Flow Heated from Below,” Applied Science Research, Vol. 32, No. 4, 1976, pp. 327-346.\n22. B. Li, L. Zheng and X. Zhang, “Heat Transfer in Pseudo-Plastic Non-Newtonian Fluids with Variable Thermal Conductivity,” Energy Conversion Management, Vol. 52, No. 1, 2011, pp. 355-358. doi:10.1016/j.enconman.2010.07.008\n23. W. M. Kays, “Convective Heat and Mass Transfer,” 4th Edition, McGraw-Hill, New York, 1966, p. 104."
] | [
null,
"https://file.scirp.org/Html/1-2320026\\5f3946e3-092f-432b-9f06-ce4547c8978b.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\8062c74e-dada-41ce-8570-4dbf6eb76be7.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\245da12b-6469-4459-9cb9-e3bf966d8ebc.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\8b409296-6ccc-45e5-9e78-94765cc70dfe.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\d26f9d85-c7ec-41b5-adf9-c5dc5c78f411.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\b26cc3fc-e5d5-4bac-9aee-38d1391dc3ab.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\718c1f5a-df46-40bb-b5f3-49232a311cc2.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\cc21fef1-67fe-4ef5-8065-efa67f27d150.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\b1c5013b-344d-4abb-b767-4fbf02e76243.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\4b88dd09-1824-4b4e-a45b-76be37ea7cfd.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\ef1aec8e-3059-4bbb-b7c5-21ae514fa6c0.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\655b5325-2b6c-4075-b698-f78ae8ebd247.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\8b50fd77-8642-42ff-8c4d-28edc1f8f2a2.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\6e0e5824-4d7f-43da-8d51-916f63956a6e.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\6aafca00-9c8c-459f-b27f-eedc4229ba26.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\2847afa9-4ead-470b-9751-e5e02ca9a28b.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\ef51cb6b-4949-4c8f-80f8-f3b6074b0499.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\97717a7f-103c-440f-9e69-5a47c82a1434.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\fd0b8931-6925-4607-b747-9c39acf526ca.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\d7a5e4dd-10af-4d75-93ef-827f8397f008.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\48716a78-ac21-42ba-bfa9-61e6acc1e04b.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\49681665-fb70-4f24-9a5f-ed1317a78d2b.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\44cbf146-bfa5-4436-8516-1a28facf0591.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\b902e21e-369d-498a-9bbe-8e7a59c45b88.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\597e4b86-faaa-4f47-8d20-91c98f52d33d.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\50bd7d5c-093c-4ffa-96c0-a34b19d96480.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\47f1abc2-a6b7-4e3b-964e-6c0bab790a77.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\5f3f77ae-b522-4d6b-8af5-d9b65f7a6fb3.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\21da5cc4-1f25-4200-9014-8dfdd4e36999.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\1912bf35-7d31-40fe-bc68-638ac3fe4547.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\7959b4cc-8eb3-450b-a7f2-23cc85b48ac2.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\a9288d9b-f628-4032-bc2e-e04c0f7521a4.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\04ca974f-6432-4bd0-a9ab-a2237b7e6164.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\45cfeac5-7344-42b2-86cc-717eca568d83.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\05128bd8-6452-49f1-86e9-7fbd3359c3f2.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\c12e671e-6c96-4f43-b8cb-ff6ff5a59b73.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\26539396-8423-4dd6-b129-5639c531bc3b.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\35777d7f-5853-4bb5-8331-a1a84fe10831.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\1a05a3b7-be69-4151-8416-6aa70020dcdd.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\071e792a-4959-466e-853e-04d4715237c8.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\e09a5838-a499-430a-80f9-49b20ea9d2ea.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\ce1a2ddf-4767-4016-89de-3edd3ff99d01.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\8839ff61-d25c-4a16-b1e4-e3e9c3d8d204.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\9f112d3a-646c-4ae5-a14a-af9f1f1165d9.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\4f199ac0-0d47-422a-9bfa-4d138433613e.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\76619924-dcad-4a65-b48d-6ee33e9c3f91.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\cf485ce9-f7a9-4763-821f-38e6c92946ce.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\d7409d66-2944-48d5-9321-154a1f075285.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\1d4ba6b6-adb3-4896-a59a-1210c81a3797.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\975af202-ee6a-45df-bcd3-3093d2b63444.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\9e905836-3884-42de-87c6-b5bf230b23e1.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\bfa30581-3dfc-4a4e-8ace-2b3b2a7a8724.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\c5c8c920-2c8d-4866-bfb9-54ff322b317c.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\1fb16b0a-c17b-4f64-b4cd-61c2f9d85b70.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\3a3a8c84-06f4-4041-9fa5-d6a6e2e52555.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\bf233d08-8ccc-4b92-b6c9-90d779b3faa3.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\365c0c9f-f461-42cd-bd69-aae08d62e405.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\ddd175c5-4024-444b-9df9-4bbd8cdbe896.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\f917a611-b80e-4129-b038-13c61cc18a55.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\589cb0f3-25c6-48ac-be84-405728be16de.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\875a1dc7-1aae-4de3-ae16-240107db0566.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\a185ecb8-d95d-471c-b74b-798009822097.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\92bfcb1a-e90e-4189-9663-eb0d16d40c52.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\7dfef274-d80b-431a-a91c-00b21ce6af1e.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\a7d2b45a-ce99-4107-a594-d02bc508f534.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\10bcddb8-ebd8-4a5e-9179-f2c5b9ffbf8f.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\953d71bc-fe94-4f53-9049-d66225f1126a.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\cd513387-31f5-4295-bbcf-1a9927b76d2c.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\564bf23f-a88b-487c-b6a4-09d56b1dfa85.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\96ffa2a5-09bb-4b35-a4ef-0940cd495866.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\ba024082-e2bb-4445-8998-ea588bd962f0.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\69daff68-d85d-4e49-a478-7e839ddbfd38.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\5cc64c55-0c64-4a41-820c-0e2c486388ff.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\e3a14a82-9710-4e1d-8867-646f32b9fc20.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\7dbce7d9-78a8-43b8-9d50-f81dce372f2c.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\b44ad63c-a4d6-4e5c-a513-ad88d0413023.jpg",
null,
"https://file.scirp.org/Html/1-2320026\\6ebed996-b849-4bd8-bb98-ea746a9cc533.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8681169,"math_prob":0.9450659,"size":19914,"snap":"2019-43-2019-47","text_gpt3_token_len":5340,"char_repetition_ratio":0.12938222,"word_repetition_ratio":0.08918033,"special_character_ratio":0.26915738,"punctuation_ratio":0.18973105,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98405504,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-22T09:43:04Z\",\"WARC-Record-ID\":\"<urn:uuid:1f0b5057-8b6d-4913-ac05-5e5078a278c0>\",\"Content-Length\":\"51051\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1b6ef680-0d38-4770-bc81-860e32b7cd51>\",\"WARC-Concurrent-To\":\"<urn:uuid:ebb0f5d1-97cf-4f67-b711-bcd867fc037d>\",\"WARC-IP-Address\":\"173.208.146.83\",\"WARC-Target-URI\":\"https://file.scirp.org/Html/1-2320026_25400.htm\",\"WARC-Payload-Digest\":\"sha1:7CZZTIY6ETZ3RLCW7NSXA23GSNAJ47TY\",\"WARC-Block-Digest\":\"sha1:3KCWK24MTAV3BQHQY6XRG4PCVPM64BDV\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987813307.73_warc_CC-MAIN-20191022081307-20191022104807-00558.warc.gz\"}"} |
https://chemistry.stackexchange.com/questions/19150/conditions-for-spontaneity?noredirect=1 | [
"# Conditions for spontaneity\n\nSo the two conditions for a reaction to be spontaneous are:\n\n1. tendency to achieve minimum energy\n2. tendency to achieve maximum randomness.\n\nBoth these things are contrasting and the only way I see them working are when they work independent of each other. Do these work simultaneously?\n\nI am really having a hard time in figuring out a reaction where energy decreases and entropy increases. Can this happen?\n\nLuckily for us, a really smart guy named Josiah Willard Gibbs figured out how these two tendencies work together back in the late 1800's. As a result, in chemical thermodynamics we now have a state function called the Gibbs Free Energy that describes what is called the thermodynamic potential of a system at constant pressure, temperature, and number of molecules.\n\nSince most of the systems we work with under lab conditions are at constant pressure and temperature and are typically closed to mass transfer (abbreviated as NPT), Gibbs free energy is a very convenient way of determining what will happen to a given chemical system under normal lab conditions.\n\nThe reason it works is that it is a potential energy function - it describes how a system's potential energy will change as different state variables change, and therefore lets us predict where the system will \"try\" to go. An analogy that works pretty well is to imagine it like the surface of a planet, where gravity is the potential energy. If the derivative or slope is negative in a given direction, then things will tend to \"roll downhill\" at that point. On the other hand, to make things go uphill, you need to add energy.\n\nThe other thing we can get from this analogy is that given the opportunity, things will eventually move from a state of higher potential energy to a state of lower potential energy. So we can compare the energy at two different states (the reactants and products, for example), and decide based on the difference in potential energy whether the reaction would be spontaneous under those conditions.\n\nThe equation for Gibbs free energy is:\n\n$G = H - TS$\n\nWhere $H$ is enthalpy (a.k.a heat of reaction), $T$ is absolute temperature (usually measured in Kelvin) and $S$ is entropy.\n\nIf you take the derivative at constant temperature and pressure, you get:\n\n$\\Delta G = \\Delta H - T\\Delta S$\n\n(This is for large changes. for small changes replace $\\Delta$ with $\\delta$). What this tells us is that change in Gibbs Free Energy is a function of change in enthalpy, change in entropy, and the absolute temperature.\n\nTo illustrate how this works, take a look at the following diagram.",
null,
"Here the line represents the Gibbs Free energy \"surface\" - the thermodynamic potential as a function of some variable (a reaction coordinate, for example). The ball represents the system at some point along that coordinate axis. If this were regular potential energy, it would be easy to see what happens - the ball will roll downhill if it gets the chance, and it will stop at the lower energy state. To get back up the hill, someone would have to put in that amount of energy. For thermodynamic potentials, it works the same way - the system will move along the coordinate axis in the direction of decreasing thermodynamic potential energy ($G$ in this case).\n\nThis means that under constant NPT conditions, any process that involves an overall decrease in Gibbs Free energy will be spontaneous. It will also tend to move in the \"direction\" that has the most negative slope in G at any given time. In mathematical terms,\n\n$\\Delta G < 0$ - overall process is spontaneous\n\n$dG < 0$ - process will be moving in that direction\n\n$dG = 0$ - process is at equilibrium\n\nSo how does this all fit in with what you described about systems trying to reach minimum energy, and maximum entropy? The answer is: those are both thermodynamic potentials for different types of systems. For a system with constant entropy, volume, and number of particles, (NSV) the internal energy (or total energy) is the thermodynamic potential. For a system at constant entropy and pressure (NSP), enthalpy is the thermodynamic potential. And for a system at constant volume and internal energy (NVE), the negative of entropy is the thermodynamic potential.\n\nIn other words, all of these thermodynamic variables are interconnected, and a change in one affects all of the others. You can hold three constant at any given time and still allow the system to \"move\" through phase-space. Which thermodynamic potential you need to describe how the system will move depends on which variables you choose.\n\nLet's look at the equation for $\\Delta G$ and see how changes in enthalpy and entropy affect it.\n\n$\\downarrow \\Delta G = \\space \\downarrow \\Delta H - T \\Delta S$\n\nIf $\\Delta H$ decreases, it will make $\\Delta G$ decrease as well. This makes sense, since we know that exothermic processes tend to be spontaneous, because they are releasing energy and therefore the final system energy is lower than the initial.\n\n$\\downarrow \\Delta G = \\Delta H - T \\space \\uparrow \\Delta S$\n\nOn the other hand, $\\Delta G$ tends to decrease as $\\Delta S$ increases - this is because the change in entropy is subtracted in the equation. This also matches up with what you know - an increase in entropy indicates a spontaneous process.\n\nFor your last question:\n\nI am really having a hard time in figuring out a reaction where energy decreases and entropy increases. Can this happen?\n\nYes, it can! 
In fact, under these conditions, the process is guaranteed to be spontaneous - if $\\Delta H$ is negative, and $\\Delta S$ is positive, then $\\Delta G$ has to be negative - the reaction would be spontaneous at any temperature under these conditions.\n\nLet's look at the other possibilities:\n\n$\\Delta H > 0; \\Delta S < 0$\n\nIn this case, since $\\Delta H$ is always positive, and we are subtracting a negative $\\Delta S$, the reaction can never be spontaneous.\n\n$\\Delta H > 0; \\Delta S > 0$\n\n$\\Delta H < 0; \\Delta S < 0$\n\nIn these two cases, the reaction could be spontaneous or it could be non-spontaneous - it depends on the relative magnitudes of the enthalpy and entropy terms as well as the temperature at which it occurs.\n\nTo summarize - all thermodynamic variables are related to each other in fairly complicated ways. When you hold three of them constant (two if you don't count number of molecules), you can derive thermodynamic potential energy functions that describe the behavior of the system in terms of a single quantity. For most cases in chemistry, Gibbs Free Energy is the thermodynamic potential that we use. It gives us the relationship between enthalpy, entropy and temperature under constant pressure conditions. Since it's the thermodynamic potential, it also lets us predict how the system will behave - how it will move through phase space.\n\nI feel that the Original Poster actually needed examples of:\n$\\Delta H < 0 ; \\Delta S > 0$\nsince the majority of common reactions do not fall under this criterion. If not, then Thomij's answer is perfect for the understanding of Gibbs Free Energy.\n\nSo here are a few examples:\n\n• Hydrogen peroxide is thermodynamically unstable and decomposes to form water and oxygen $\\ce{2 H_2O_2 → 2 H_2O + O_2}$\n$\\Delta H = -98.2~\\mathrm{kJ\\, mol^{-1}}$\n$\\Delta S = 70.5~\\mathrm{J\\, mol^{-1}K^{-1}}$\n\n• Decomposition of Ammonium dichromate\n$\\ce{(NH_4)_2Cr_2O_7 → Cr_2O_3 + N_2 + 4 H_2O}$\n$\\Delta H = -1794.9~\\mathrm{kJ\\, mol^{-1}}$\n$\\Delta S > 0$ due to formation of gases from the solid"
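To make the sign logic above concrete, here is a minimal Python sketch (an addition, not part of the original answer) that evaluates $\Delta G = \Delta H - T\Delta S$ and classifies the result. It assumes the quoted enthalpy is in kJ/mol, the entropy is in J/(mol K), and T = 298 K for the hydrogen peroxide example.

```python
# Minimal sketch: classify spontaneity from dH, dS and T via dG = dH - T*dS.
def gibbs_change(dH_kJ_per_mol, dS_J_per_mol_K, T_K):
    """Return dG in kJ/mol; dS is converted from J/(mol*K) to kJ/(mol*K)."""
    return dH_kJ_per_mol - T_K * dS_J_per_mol_K / 1000.0

def classify(dG):
    if dG < 0:
        return "spontaneous"
    if dG > 0:
        return "non-spontaneous"
    return "at equilibrium"

# Hydrogen peroxide decomposition, using the values quoted above at T = 298 K
dG = gibbs_change(-98.2, 70.5, 298.0)
print(f"dG = {dG:.1f} kJ/mol -> {classify(dG)}")  # negative dH, positive dS -> spontaneous
```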
] | [
null,
"https://i.stack.imgur.com/Fry9V.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9257933,"math_prob":0.98702025,"size":6346,"snap":"2019-35-2019-39","text_gpt3_token_len":1356,"char_repetition_ratio":0.14900662,"word_repetition_ratio":0.011029412,"special_character_ratio":0.21556886,"punctuation_ratio":0.08966695,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.998634,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-21T13:41:40Z\",\"WARC-Record-ID\":\"<urn:uuid:253564f7-09d9-4f90-953a-629bd9a17705>\",\"Content-Length\":\"148743\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:02ffc3bc-b3dd-41cf-9401-894120177830>\",\"WARC-Concurrent-To\":\"<urn:uuid:6f7bf59c-ea26-4fb8-9b16-08b300f97d92>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://chemistry.stackexchange.com/questions/19150/conditions-for-spontaneity?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:YWEQM2LKMXJHWVXUJXN4CEEEY4AGHFQF\",\"WARC-Block-Digest\":\"sha1:67SNJQGBTWDMON4BFWBGSCW45UKPUUCI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574501.78_warc_CC-MAIN-20190921125334-20190921151334-00291.warc.gz\"}"} |
http://www.talks.cam.ac.uk/talk/index/39846 | [
"# Recent progress in the spectral theory of first order elliptic systems\n\n•",
null,
"Vassiliev, D (University College London)\n•",
null,
"Tuesday 18 September 2012, 10:10-10:50\n•",
null,
"Seminar Room 1, Newton Institute.\n\nMathematics and Physics of Anderson localization: 50 Years After\n\nThe talk deals with the distribution of eigenvalues of a linear self-adjoint elliptic operator. The eigenvalue problem is considered in the deterministic setting, i.e. the coefficients of the operator are prescribed smooth functions. The objective is to derive a two-term asymptotic formula for the counting function (number of eigenvalues between zero and a positive lambda) as lambda tends to plus infinity.\n\nThere is an extensive literature on the subject (see, for example, ), mostly dealing with scalar operators. It has always been taken for granted that all results extend in a straightforward manner to systems. However, the author has recently discovered [2,3] that all previous publications on first order systems give formulae for the second asymptotic coefficient that are either incorrect or incomplete (i.e. an algorithm for the calculation of the second asymptotic coefficient rather than an explicit formula). The aim of the talk is to explain the spectral theoretic difference between scalar operators and systems and to present the correct formula for the second asymptotic coefficient.\n\n Yu.Safarov and D.Vassiliev, The asymptotic distribution of eigenvalues of partial differential operators, American Mathematical Society, 1997 (hardcover), 1998 (softcover).\n\n Preprint arXiv:1204.6567.\n\n Preprint arXiv:1208.6015.\n\nThis talk is part of the Isaac Newton Institute Seminar Series series."
] | [
null,
"http://talks.cam.ac.uk/images/user.jpg",
null,
"http://talks.cam.ac.uk/images/clock.jpg",
null,
"http://talks.cam.ac.uk/images/house.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89036727,"math_prob":0.90232366,"size":2854,"snap":"2022-05-2022-21","text_gpt3_token_len":577,"char_repetition_ratio":0.08947369,"word_repetition_ratio":0.03874092,"special_character_ratio":0.18815698,"punctuation_ratio":0.09513742,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9705974,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-20T08:58:38Z\",\"WARC-Record-ID\":\"<urn:uuid:e1a8f53c-6be0-42a9-936f-46ef3eb51635>\",\"Content-Length\":\"14468\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b43ec0b-6bf5-4e6d-9b61-0e5e2c0ae73a>\",\"WARC-Concurrent-To\":\"<urn:uuid:e40e6993-2783-4a90-9c97-e83378856b5f>\",\"WARC-IP-Address\":\"131.111.150.181\",\"WARC-Target-URI\":\"http://www.talks.cam.ac.uk/talk/index/39846\",\"WARC-Payload-Digest\":\"sha1:YHEOLQRMX43RZ4E363PVPKLQFVSFLZJ6\",\"WARC-Block-Digest\":\"sha1:ROAWMT4VIZO5W2YRUGSCTQRRK4KHFUFR\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662531762.30_warc_CC-MAIN-20220520061824-20220520091824-00655.warc.gz\"}"} |
https://www.xoax.net/math/ref/algebra/incl/common_products/ | [
"# Common Products\n\nGeneral Binomial Product",
null,
"Square of a Binomial",
null,
"Difference of Squares",
null,
"Here, we have three equations for the product of two binomials, which are commonly used in algebra. The first formula is the general formula because the use of four distinct letters implies that all of the terms could be different. The second formula is for the square of a binomial, which is equivalent to the first formula with c = a and d = b. The third formula is called the difference of squares because of the right-hand side, and it is equivalent to the general formula with c = a and d = -b. The last two equations are just specializations of the first one.\n\n### Derivations\n\nGeneral Binomial Product\n\nFor the first equality, we apply the right-distributive property to get the right-hand side. Then we use the left-distributive property twice to get the second and final equality.",
null,
"Square of a Binomial\n\nTo get the right-hand side of the first equality, we use the definition of an exponent. To get the second equality, we use the general formula that we derived above with c = a and d = b. For the third and final equality, we use the definition of exponents on the first and last terms and use commutativity and distributivity to combine the ab terms.",
null,
"Difference of Squares\n\nFor the first equality, we use the general formula with c = a and d = -b to get the right-hand side. For the second inequality, we use the property that a product of an number and an additive inverse is the additive inverse of the product. For the third and final equality, we commutativity to say that ba = ab and the definition of an additive inverse to cancel the -ab and ab terms.",
null,
""
] | [
null,
"https://www.xoax.net/math/ref/algebra/incl/common_products/formula_general_product_of_binomials.png",
null,
"https://www.xoax.net/math/ref/algebra/incl/common_products/formula_square_of_a_binomial.png",
null,
"https://www.xoax.net/math/ref/algebra/incl/common_products/formula_difference_of_squares.png",
null,
"https://www.xoax.net/math/ref/algebra/incl/common_products/derivation_of_general_product_of_binomials.png",
null,
"https://www.xoax.net/math/ref/algebra/incl/common_products/derivation_of_square_of_a_binomial.png",
null,
"https://www.xoax.net/math/ref/algebra/incl/common_products/derivation_of_difference_of_squares.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9270572,"math_prob":0.9984922,"size":1644,"snap":"2020-24-2020-29","text_gpt3_token_len":362,"char_repetition_ratio":0.15060976,"word_repetition_ratio":0.1462585,"special_character_ratio":0.20620438,"punctuation_ratio":0.075235106,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99902546,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-30T01:39:43Z\",\"WARC-Record-ID\":\"<urn:uuid:7fa3a873-e676-4087-9909-6be99348969a>\",\"Content-Length\":\"22991\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dba0fb54-b65b-4c6b-9aeb-b1621727bce0>\",\"WARC-Concurrent-To\":\"<urn:uuid:40f689e3-d006-467d-acc5-78471edd93dc>\",\"WARC-IP-Address\":\"132.148.102.183\",\"WARC-Target-URI\":\"https://www.xoax.net/math/ref/algebra/incl/common_products/\",\"WARC-Payload-Digest\":\"sha1:4VDZAYNLVAVVQ23N4NS7AGG5JLG6E4PA\",\"WARC-Block-Digest\":\"sha1:PNA5SYSP3NQLB5QFQKUEWMTW4U52ZGXC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347407001.36_warc_CC-MAIN-20200530005804-20200530035804-00169.warc.gz\"}"} |
http://www.wikicalculator.com/formula_calculator/Inductance-of-solenoid-(given-magnetic-permeability-number-of-turns-area-length-of-coil)-266.htm | [
"Inductance of solenoid (given magnetic permeability,number of turns,area,length of coil)\n\nFormula",
null,
"L\ninductance of a solenoid\nN\ntotal number of turns\nmu\nmagnetic permeability\nA\ncross-section area\nl\nlength of the coil\n\nFormula description\n\nInductance is the property of an electrical circuit causing voltage to be generated proportional to the rate of change in current in a circuit. This property also is called self inductance to discriminate it from mutual inductance, describing the voltage induced in one electrical circuit by the rate of change of the electric current in another circuit. solenoid is a long, thin coil, i.e. a coil whose length is much greater than the diameter. Under these conditions, and without any magnetic material used, the magnetic flux density B within the coil is practically constant.\n\nCalculator (how to use calculator?)",
null,
"N\nmu\nA\nl\nL\nPrecision"
] | [
null,
"http://www.wikicalculator.com/formula_image/Inductance-of-solenoid-(given-magnetic-permeability-number-of-turns-area-length-of-coil)-266.png",
null,
"http://www.wikicalculator.com/formula_image/Inductance-of-solenoid-(given-magnetic-permeability-number-of-turns-area-length-of-coil)-266.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88783145,"math_prob":0.99310833,"size":813,"snap":"2022-05-2022-21","text_gpt3_token_len":177,"char_repetition_ratio":0.12855376,"word_repetition_ratio":0.0,"special_character_ratio":0.17712177,"punctuation_ratio":0.0952381,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99639714,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-21T02:21:30Z\",\"WARC-Record-ID\":\"<urn:uuid:bd4c549f-83ab-4c17-9a4e-b8ca7583ce6f>\",\"Content-Length\":\"27602\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c5006a0-41cc-416e-9444-c56f6f62530d>\",\"WARC-Concurrent-To\":\"<urn:uuid:f83c8112-372d-4fad-a964-146aaf1a2f7e>\",\"WARC-IP-Address\":\"74.208.155.67\",\"WARC-Target-URI\":\"http://www.wikicalculator.com/formula_calculator/Inductance-of-solenoid-(given-magnetic-permeability-number-of-turns-area-length-of-coil)-266.htm\",\"WARC-Payload-Digest\":\"sha1:MMYWYD43JCCH37P5VLZLKDRFZL7ZJ5RR\",\"WARC-Block-Digest\":\"sha1:GWLFBI7SEK65FRRV6MYRKPFNSQW7M3TJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320302715.38_warc_CC-MAIN-20220121010736-20220121040736-00696.warc.gz\"}"} |
http://www.aforgenet.com/framework/docs/html/a7e937d4-2a3d-8bdf-5dbf-1058a3df2a60.htm | [
"Posit Class",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"3D pose estimation algorithm.\n\nNamespace: AForge.Math.Geometry\nAssembly: AForge.Math (in AForge.Math.dll) Version: 2.2.5.0 (2.2.5.0)\n\n#",
null,
"Syntax\n\nC#\n`public class Posit`\n\n#",
null,
"Remarks\n\nThe class implements an algorithm for 3D object's pose estimation from it's 2D coordinates obtained by perspective projection, when the object is described none coplanar points. The idea of the implemented math and algorithm is described in \"Model-Based Object Pose in 25 Lines of Code\" paper written by Daniel F. DeMenthon and Larry S. Davis (the implementation of the algorithm is almost 1 to 1 translation of the pseudo code given by the paper, so should be easy to follow).",
null,
"Note:At this point the implementation works only with models described by 4 points, which is the minimum number of points enough for 3D pose estimation.",
null,
"Note:The 4 model's point must not be coplanar, i.e. must not reside all within same planer. See CoplanarPosit for coplanar case.\n\nSample usage:",
null,
"CopyC#\n```// points of real object - model\nVector3[] positObject = new Vector3\n{\nnew Vector3( 28, 28, -28 ),\nnew Vector3( -28, 28, -28 ),\nnew Vector3( 28, -28, -28 ),\nnew Vector3( 28, 28, 28 ),\n};\n// focal length of camera used to capture the object\nfloat focalLength = 640; // depends on your camera or projection system\n// initialize POSIT object\nPosit posit = new Posit( positObject, focalLength );\n\n// 2D points of te object - projection\nAForge.Point[] projectedPoints = new AForge.Point\n{\nnew AForge.Point( -4, 29 ),\nnew AForge.Point( -180, 86 ),\nnew AForge.Point( -5, -102 ),\nnew AForge.Point( 76, 137 ),\n};\n// estimate pose\nMatrix3x3 rotationMatrix;\nVector3 translationVector;\nposit.EstimatePose( projectedPoints,\nout rotationMatrix, out translationVector );```\n\n#",
null,
"Inheritance Hierarchy\n\nSystem..::.Object\nAForge.Math.Geometry..::.Posit"
] | [
null,
"http://www.aforgenet.com/framework/docs/icons/collapse_all.gif",
null,
"http://www.aforgenet.com/framework/docs/icons/expand_all.gif",
null,
"http://www.aforgenet.com/framework/docs/icons/collapse_all.gif",
null,
"http://www.aforgenet.com/framework/docs/icons/expand_all.gif",
null,
"http://www.aforgenet.com/framework/docs/icons/dropdown.gif",
null,
"http://www.aforgenet.com/framework/docs/icons/dropdownHover.gif",
null,
"http://www.aforgenet.com/framework/docs/icons/copycode.gif",
null,
"http://www.aforgenet.com/framework/docs/icons/copycodeHighlight.gif",
null,
"http://www.aforgenet.com/framework/docs/icons/collapse_all.gif",
null,
"http://www.aforgenet.com/framework/docs/icons/collapse_all.gif",
null,
"http://www.aforgenet.com/framework/docs/icons/alert_note.gif",
null,
"http://www.aforgenet.com/framework/docs/icons/alert_note.gif",
null,
"http://www.aforgenet.com/framework/docs/icons/copycode.gif",
null,
"http://www.aforgenet.com/framework/docs/icons/collapse_all.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.70979667,"math_prob":0.934487,"size":1884,"snap":"2023-14-2023-23","text_gpt3_token_len":494,"char_repetition_ratio":0.13723405,"word_repetition_ratio":0.014336918,"special_character_ratio":0.2754777,"punctuation_ratio":0.23342176,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98428965,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-20T21:40:30Z\",\"WARC-Record-ID\":\"<urn:uuid:760a2de0-d758-4265-9318-a0c4dc8a94b2>\",\"Content-Length\":\"14998\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:54232cb4-c35a-4e6f-a4c1-53dd83d07fc5>\",\"WARC-Concurrent-To\":\"<urn:uuid:68a8e061-88b7-4162-8a6c-6e92b354d049>\",\"WARC-IP-Address\":\"65.182.101.165\",\"WARC-Target-URI\":\"http://www.aforgenet.com/framework/docs/html/a7e937d4-2a3d-8bdf-5dbf-1058a3df2a60.htm\",\"WARC-Payload-Digest\":\"sha1:R3FZWQKG7VPZBZKUR7SGRFKA7JEVURIP\",\"WARC-Block-Digest\":\"sha1:XEFLGVGFW7YM72LLG3MSYLT6XKOF5VFM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943562.70_warc_CC-MAIN-20230320211022-20230321001022-00407.warc.gz\"}"} |
https://info.enterprisedna.co/dax-function-guide/rounddown/ | [
"## How does the ROUNDDOWN function (DAX) work?\n\nThe ROUNDDOWN function (DAX) rounds a number down, toward zero.\n\n## ROUNDDOWN Formula Syntax\n\nROUNDDOWN(\n<number>, <num_digits>\n)\n\n## How do you use the ROUNDDOWN function?\n\nThe ROUNDDOWN function will always round numbers down. The number of places to round to is controlled by the num_digits argument. Positive numbers round to the right of the decimal point, negative numbers round to the left, and zero rounds to the nearest 1.\n\n## Related Support Forum Posts\n\nIterating a distinctcount between two tables\nGrouping customers evenly by rank\nDax Calculation question\n\n## Considerations when using the ROUNDDOWNfunction\n\nIf num_digits is greater than 0 (zero), then the value in number is rounded down to the specified number of decimal places.\n\nIf num_digits is 0, then the value in number is rounded down to the nearest integer.\n\nIf num_digits is less than 0, then the value in number is rounded down to the left of the decimal point.\n\n## Formula examples using the ROUNDDOWN function\n\n=ROUNDDOWN(3.14159,3)\n\n=ROUNDDOWN(31415.92654, -2)\n\n=ROUNDDOWN(76.9,0)\n\n## Related Course Modules",
null,
""
] | [
null,
"https://lh3.googleusercontent.com/UcotfeAuLLWg2iVup7FXgVKRQ-o0eNVJY0WniI65-NHiaKAoIWpOOQLtq8ku0jlPtmKouCeHAY1XT5ifsxCspg=w794",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.691838,"math_prob":0.99556595,"size":1143,"snap":"2020-45-2020-50","text_gpt3_token_len":293,"char_repetition_ratio":0.19139597,"word_repetition_ratio":0.115606934,"special_character_ratio":0.22134733,"punctuation_ratio":0.106796116,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99856216,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T00:05:43Z\",\"WARC-Record-ID\":\"<urn:uuid:94f63c61-6693-49b0-b720-3557c2980dc8>\",\"Content-Length\":\"128594\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:040bde21-e04d-42d7-aff4-17476595a402>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b4e4a93-e48d-4a36-92f8-44d6bde1ea99>\",\"WARC-IP-Address\":\"35.213.182.208\",\"WARC-Target-URI\":\"https://info.enterprisedna.co/dax-function-guide/rounddown/\",\"WARC-Payload-Digest\":\"sha1:ATLMNHLZO2TXKGXPECCNQAE2DRQHKY2X\",\"WARC-Block-Digest\":\"sha1:LBNXNVQOJAY4WCTJR6JWEWDKXVPRS5WI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107902038.86_warc_CC-MAIN-20201028221148-20201029011148-00282.warc.gz\"}"} |
https://es.mathworks.com/help/stateflow/ug/matlab-functions-in-a-chart.html | [
"## Reuse MATLAB Code by Defining MATLAB Functions\n\nA MATLAB® function in a Stateflow® chart is a graphical element that you use to write algorithms that are easier to implement by calling built-in MATLAB functions. Typical applications include:\n\n• Matrix-oriented calculations\n\n• Data analysis and visualization\n\nThis type of function is useful for coding algorithms that are more easily expressed by using MATLAB instead of the graphical Stateflow constructs. MATLAB functions also provide optimizations for generating efficient, production-quality C code for embedded applications.\n\nA MATLAB function can reside anywhere in a chart, state, or subchart. The location of the function determines the set of states and transitions that can call the function.\n\n• If you want to call the function only within one state or subchart and its substates, put your MATLAB function in that state or subchart. That function overrides any other functions of the same name in the parents and ancestors of that state or subchart.\n\n• If you want to call the function anywhere in that chart, put your MATLAB function at the chart level.\n\n• If you want to call the function from any chart in your model, put your MATLAB function at the chart level and enable exporting of chart-level functions. For more information, see Export Stateflow Functions for Reuse.\n\nFor example, this MATLAB function has the name `stdevstats`. It takes an argument `vals` and returns an output value `stdevout`.",
null,
"To compute the standard deviation of the values in `vals`, the function uses this code.\n\n```function stdevout = stdevstats(vals) %#codegen % Calculates the standard deviation for vals len = length(vals); stdevout = sqrt(sum(((vals-avg(vals,len)).^2))/len); function mean = avg(array,size) mean = sum(array)/size;```\n\n### Define a MATLAB Function in a Chart\n\n1. In the object palette, click the MATLAB function icon",
null,
". Move your pointer to the location for the new MATLAB function in your chart.\n\n2. Enter the signature label for the function, as described in Declare Function Arguments and Return Values.\n\n3. To program the function, open the MATLAB editor by double-clicking the function box. For more information, see Program a MATLAB Function in a Chart.\n\n4. In the editor, enter the MATLAB code implementing your function.\n\n5. In the Model Explorer, expand the chart object and select the MATLAB function. The arguments and return values of the function signature appear as data items that belong to your function. Arguments have the scope `Input`. Return values have the scope `Output`.\n\n6. In the Data properties dialog box for each argument and return value, specify the data properties, as described in Set Data Properties.\n\nYour function can access its own data or data belonging to parent states or the chart. The data items in the function can have one of these scopes:\n\n• `Local` — Local data persists from one function call to the next function call. Valid for C charts only.\n\n• `Constant` — Constant data retains its initial value through all function calls.\n\n• `Parameter` — Parameter data retains its initial value through all function calls.\n\n• `Temporary` — Temporary data initializes at the start of every function call. Valid for C charts only.\n\n#### Initialize Function Data\n\nYou can initialize your function data (other than arguments and return values) from the MATLAB workspace. For more information, see Initialize Data from the MATLAB Base Workspace.\n\n#### Define Temporary Data\n\nIn charts that use C as the action language, define temporary data when you want to use data that is only valid while a function executes. For example, you can designate a loop counter to have `Temporary` scope if the counter value does not need to persist after the function completes.\n\nIn charts that use MATLAB as the action language, you do not need to define temporary function data. If you use an undefined variable, Stateflow creates a temporary variable. The variable is available to the rest of the function.\n\n#### Use Enumerated and Variable-Size Data\n\nIn Stateflow charts in Simulink® models, you can use enumerated data and variable-size data in MATLAB functions. For more information, see Code Generation for Enumerations (Simulink) and Declare Variable-Size Inputs and Outputs (Simulink).\n\n### Declare Function Arguments and Return Values\n\nThe function signature label specifies a name for your function and the formal names for its arguments and return values. A signature label has this syntax:\n\n```[return_val1,return_val2,...] = function_name(arg1,arg2,...) ```\nYou can specify multiple return values and multiple input arguments. Each return value and input argument can be a scalar, vector, or matrix of values. For functions with only one return value, omit the brackets in the signature label.\n\nYou can use the same variable name for both arguments and return values. For example, a function with this signature label uses the variables `y1` and `y2` as both inputs and outputs:\n\n```[y1,y2,y3] = f(y1,u,y2) ```\nIf you export this function to C code, `y1` and `y2` are passed by reference (as pointers), and `u` is passed by value. 
Passing inputs by reference reduces the number of times that the generated code copies intermediate data, resulting in more optimal code.\n\n### Call MATLAB Functions in States and Transitions\n\nYou can call MATLAB functions from the actions of any state or transition. You can also call MATLAB functions from other functions. If you export a MATLAB function, you can call it from any chart in the model.\n\nThe syntax for a call to a MATLAB function is the same as the function signature, with actual arguments replacing the formal ones specified in a signature. If the data types of an actual and formal argument differ, a function casts the actual argument to the type of the formal argument.\n\n### Tip\n\nIf the formal arguments of a function signature are scalars, verify that inputs and outputs of function calls follow the rules of scalar expansion. For more information, see Assign Values to All Elements of a Matrix.\n\n### Specify MATLAB Function Properties in a Chart\n\nYou can set general properties for your MATLAB function through its properties dialog box. To open the function properties dialog box, right-click the MATLAB function box and select Properties from the context menu.\n\n#### Name\n\nFunction name. Click the function name link to open your function in the MATLAB editor.\n\n#### Function Inline Option\n\nControls the inlining of your function in generated code:\n\n• `Auto` — Determines whether to inline your function based on an internal calculation.\n\n• `Inline` — Inlines your function if you do not export it to other charts and it is not part of a recursion. (A recursion exists if your function calls itself directly or indirectly through another function call.)\n\n• `Function` — Does not inline your function.\n\n#### Saturate on Integer Overflow\n\nSpecifies whether integer overflows saturate in the generated code. For more information, see Handle Integer Overflow for Chart Data.\n\n#### MATLAB Function `fimath`\n\nDefines the `fimath` properties for the MATLAB function. The `fimath` properties specified are associated with all `fi` and `fimath` objects constructed in the MATLAB function. Choose one of these options:\n\n• Same as MATLAB — The function uses the same `fimath` properties as the current global `fimath`. The edit box appears dimmed and displays the current global `fimath` in read-only form. For more information on the global `fimath` and `fimath` objects, see the Fixed-Point Designer™ documentation.\n\n• Specify Other — Specify your own `fimath` object by one of these methods:\n\n• Construct the `fimath` object inside the edit box.\n\n• Construct the `fimath` object in the MATLAB or model workspace and enter its variable name in the edit box.\n\n#### Description\n\nFunction description. You can enter brief descriptions of functions in the hierarchy.",
null,
""
] | [
null,
"https://es.mathworks.com/help/stateflow/ug/eml_function.png",
null,
"https://es.mathworks.com/help/stateflow/ug/icon_eml_function.png",
null,
"https://es.mathworks.com/images/responsive/supporting/apps/doc_center/bg-trial-arrow.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7064839,"math_prob":0.934708,"size":7509,"snap":"2020-34-2020-40","text_gpt3_token_len":1524,"char_repetition_ratio":0.18854097,"word_repetition_ratio":0.032392025,"special_character_ratio":0.19176988,"punctuation_ratio":0.10491563,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.988167,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T20:07:44Z\",\"WARC-Record-ID\":\"<urn:uuid:a66c560d-2773-40db-96ce-93dab1a39fbe>\",\"Content-Length\":\"79053\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:33470d64-7ba3-4dbc-aa98-1ab34671fb33>\",\"WARC-Concurrent-To\":\"<urn:uuid:3ef8257f-e9c9-4846-8b8e-cef3f49312ba>\",\"WARC-IP-Address\":\"184.30.230.165\",\"WARC-Target-URI\":\"https://es.mathworks.com/help/stateflow/ug/matlab-functions-in-a-chart.html\",\"WARC-Payload-Digest\":\"sha1:IBMOIHCZ7ZDFIJUDZVCJAS22E7ZVFGJU\",\"WARC-Block-Digest\":\"sha1:DBFEEF24XWTHYVZA4UHKSRTC7S7N5JGG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735964.82_warc_CC-MAIN-20200805183003-20200805213003-00534.warc.gz\"}"} |
https://canadam.math.ca/2015f/program/res/gpa | [
"",
null,
"CanaDAM 2015 UniversitÃĐ de la Saskatchewan, 1 - 4 juin 2015 www.smc.math.ca//2015f",
null,
"english accueil réunion accueil canadam\n\nGraph packings and colorings\nResponsable et prÃĐsident: Daniel Kral (University of Warwick) et Bojan Mohar (Simon Fraser University)\n[PDF]\n\nROSS CHURCHLEY, Simon Fraser University\nPacking odd edge-disjoint $(u,v)$-trails [PDF]\n\nMenger's theorem gives a famous duality between packings and coverings of $(u,v)$-paths in a graph. But perfect duality may not exist if the paths are further restricted: for example, the maximum number of edge-disjoint odd $(u,v)$-paths may be less than the number of edges it takes to cover all such paths. In this talk, we explain an approximate duality for packings of odd trails: if a graph does not have $k$ edge-disjoint odd $(u,v)$-trails, it has a set of fewer than $8k$ edges intersecting all such trails.\n\nThis is joint work with Bojan Mohar and Hehui Wu.\n\nPING HU, University of Warwick\nRainbow triangles in three-colored graphs [PDF]\n\nErd\\H os and S\\'os proposed a problem of determining the maximum number $F(n)$ of rainbow triangles in 3-edge-colored complete graphs on $n$ vertices. They conjectured that $F(n)=F(a)+F(b)+F(c)+F(d)+abc+abd+acd+bcd$, where $a+b+c+d=n$ and $a,b,c,d$ are as equal as possible. We prove that the conjectured recurrence holds for sufficiently large $n$. We also prove the conjecture for $n = 4^k$ for all $k \\geq 0$. These results imply that $\\lim F(n)/{n\\choose 3}=0.4$, and determine the unique limit object. In the proof we use flag algebras combined with stability arguments.\n\nJoint work with JÃģzsef Balogh, Bernard LidickÃ―, Florian Pfender, Jan Volec and Michael Young.\n\nPartition of random graphs into subgraphs of bounded component order [PDF]\n\nWe consider vertex partitions of the binomial random graph $G_{n,p}$. For $np\\to\\infty$, we observe the following phenomenon: for any partition into asymptotically fewer than $\\chi(G_{n,p})$ parts, i.e. $o(np/\\log np)$ parts, there must be one part whose induced subgraph has a connected component of order at least roughly the average part size. For $0 < p <1$ fixed, we obtain more precise information: in particular, we find something subtle happens at the threshold $t = \\Theta(\\log n)$, and we determine the asymptotic first-order behaviour.\n\nThis is joint work with Nicolas Broutin (INRIA Rocquencourt).\n\nROBERT SAMAL, Charles University\nUnique Vector Coloring and Cores [PDF]\n\nStrict vector coloring is one formulation of an optimization program for Lovasz theta function: assign vectors to vertices so that adjacent vertices obtain vectors with large angle in-between. We study when this assignment is unique by using geometric rigidity theory. We then show how to use this notion as a tool to prove a graph is a core. We successfully apply this to several instances of strongly regular graphs.\n\nHEHUI WU, University of Olemiss\nTriangle-free subgraph with large fractional chromatic number [PDF]\n\nIt is well known that for any $k$ and $g$, there is a graph with chromatic number at least $k$ and girth at least $g$. In 1970's, Erd\\H{o}s and Hajnal conjectured that for any numbers $k$ and $g$, there exists a number $f(k, g)$, such that for any graph with chromatic number at least $f(k, g)$, it contains a subgraph with chromatic number at least $k$ and girth at least $g$. In 1978, R\\\"{o}dl proved the case for $g=4$ and arbitrary $k$. We prove the fractional chromatic number version of R\\\"{o}dl's result.\n\nThis is joint work with Bojan Mohar."
] | [
null,
"https://canadam.math.ca/styles/standardwrap-1/printlogo.png",
null,
"https://canadam.math.ca/2015f/styles/global-1/transparent.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84711516,"math_prob":0.99469817,"size":3426,"snap":"2021-43-2021-49","text_gpt3_token_len":906,"char_repetition_ratio":0.09994155,"word_repetition_ratio":0.044280443,"special_character_ratio":0.24138938,"punctuation_ratio":0.10313901,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99939394,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T21:11:49Z\",\"WARC-Record-ID\":\"<urn:uuid:c4eef2c4-93a9-45a0-bfcf-e2ecdb4817a5>\",\"Content-Length\":\"13332\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a16c18a-e9ce-4060-99f2-ed0c4645afb5>\",\"WARC-Concurrent-To\":\"<urn:uuid:eaee4fc8-5bc7-440f-aad4-2179b0e35ecc>\",\"WARC-IP-Address\":\"137.122.61.199\",\"WARC-Target-URI\":\"https://canadam.math.ca/2015f/program/res/gpa\",\"WARC-Payload-Digest\":\"sha1:MJCZ44EC4ON3HNM2XRKN2SJZCFDYK7KZ\",\"WARC-Block-Digest\":\"sha1:VODXRCJF46XRS43CZ7TF7SMBG2B2S3WR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588526.57_warc_CC-MAIN-20211028193601-20211028223601-00249.warc.gz\"}"} |
http://web.math.ucsb.edu/~cblacker/ | [
"# Casey Blacker\n\nI am a China Postdoctoral Science Foundation (CPSF) International Exchange Postdoc at the East China Normal University in Shanghai. I received my PhD from UC Santa Barbara in 2018 under the supervision of Xianzhe Dai.\n\nMy interests include symplectic geometry, moduli spaces, and geometric quantization. You can find more information in my Research Statement.\n\n## Publications\n\n1. Reduction of multisymplectic manifolds, (Submitted)\nhttp://arxiv.org/abs/2002.10062\n2. Quantization of polysymplectic manifolds, J. Geom. Phys., 2019,\nhttps://doi.org/10.1016/j.geomphys.2019.103480\n3. Polysymplectic reduction and the moduli space of flat connections, J. Phys. A, 2019,\nhttps://doi.org/10.1088/1751-8121/ab2eed\n4. First eigenvalue of the p-Lapacian on Kähler manifolds (with Shoo Seto), Proc. Amer. Math. Soc., 2019, https://doi.org/10.1090/proc/14395\n5. The Moduli Space of Flat Connections over Higher Dimensional Manifolds (PhD Thesis), 2018, https://escholarship.org/uc/item/0535z0rb\n\n## Selected Talks\n\nPolysymplectic Reduction and the Moduli Space of Flat Connections\n\nUniversity of Tokyo, Dec 3, 2019.\n\nIn a landmark paper, Atiyah and Bott showed that the moduli space of flat connections on a principal bundle over an oriented closed surface is the symplectic reduction of the space of all connections by the action of the gauge group. By appealing to polysymplectic geometry, a generalization of symplectic geometry in which the symplectic form takes values in a fixed vector space, we may extend this result to the case of higher-dimensional base manifolds. In this setting, the space of connections exhibits a natural polysymplectic structure and the reduction by the action of the gauge group yields the moduli space of flat connections equipped with a 2-form taking values in the cohomology of the base manifold. In this talk, I will first review the polysymplectic formalism and then outline its role in obtaining the moduli space of flat connections.\n\nQuantization of Polysymplectic Manifolds\n\nUniversity of Cologne, July 7, 2019.\n\nGeometric quantization is a method for taking a symplectic manifold and returning a complex Hilbert space. A polysymplectic manifold is a smooth manifold equipped with a symplectic structure taking values in a fixed vector space. Both geometric quantization and polysymplectic geometry have their roots in physics, and have each engendered a rich mathematical literature. In this talk, I will review both formalisms independently and then introduce an extension of geometric quantization to the setting of polysymplectic manifolds. No familiarity with geometric quantization or polysymplectic geometry will be assumed.\n\nDifferential Geometry and Classical Mechanics\n\nNYU Shanghai, Dec 11, 2018.\n\nSymplectic geometry arose in physics as the ideal setting for classical mechanics, and multisymplectic geometry has recently emerged as an analogous candidate in classical field theory. In this talk, I will introduce symplectic geometry according to the perspective that every symplectic manifold is locally the phase space of a classical mechanical system. The related theories of contact and polysymplectic geometry will also be discussed, along with relevant historical background. Working by analogy with the symplectic approach to classical mechanics, I will conclude with a brief introduction to the multisymplectic formalism in classical field theory."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90563035,"math_prob":0.79424274,"size":2744,"snap":"2020-45-2020-50","text_gpt3_token_len":566,"char_repetition_ratio":0.17737226,"word_repetition_ratio":0.034398034,"special_character_ratio":0.17274052,"punctuation_ratio":0.086283185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95140636,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-24T03:18:18Z\",\"WARC-Record-ID\":\"<urn:uuid:c033a98c-57c6-4100-a47e-987998afd5b6>\",\"Content-Length\":\"5417\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a7b5f2f-8bed-41af-83bb-fa41b1a6ca4d>\",\"WARC-Concurrent-To\":\"<urn:uuid:d870a220-37f6-4669-a693-9db09a9aa02f>\",\"WARC-IP-Address\":\"128.111.88.13\",\"WARC-Target-URI\":\"http://web.math.ucsb.edu/~cblacker/\",\"WARC-Payload-Digest\":\"sha1:7UXILW4KTRZ2XU6WYHMNW6YUQDP2CIHS\",\"WARC-Block-Digest\":\"sha1:G36IQLWT4DC73UWB75JD4B6GIZUCP4YT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107881640.29_warc_CC-MAIN-20201024022853-20201024052853-00263.warc.gz\"}"} |
https://math.stackexchange.com/questions/694672/strangest-notation | [
"# Strangest Notation? [closed]\n\nWhile this may be a fruitless pursuit of anecdotes, I still ask: what is the strangest (or most blatantly wrong (at least in the eyes of common notation)) mathematical notation you have ever seen?\n\n• There is a thread about this on MathOverflow also. There, as here, the top scorer is Mazur's $\\overline{\\Xi}\\over\\Xi$. – MJD Mar 1 '14 at 4:39\n• This is not an answer, but $dx$ and $dy$ are the best notation for confusing students. – Sawarnik Mar 1 '14 at 5:02\n• @Sawarnik do you mean contrary to using $\\Delta x$ or $\\Delta y$? – user76061 Mar 1 '14 at 5:05\n• @user76061 I meant that unlike $f'(x)$, the imprecise $dx$.. confuses people. Why the derivative became a quotient, what is a differential, why did $dx$ come in the integral, among other things. – Sawarnik Mar 1 '14 at 5:11\n• \"A woman stepped forward and asked, / What is the strangest day? // Tuesday, the Master replied.\" —Kehlog Albran, The Profit – user856 Mar 3 '14 at 22:39\n\nThere is an old story about Lang and Mazur, Mazur tried to get Lang attention by using the worst notation possible. He wrote Xi conjugated over Xi, which looks like:\n\n$$\\frac{\\overline{\\Xi}}{\\Xi}$$\n\nP.S. You can read the story, narrated by Paul Vojta, in the AMS Notices issue dedicated to Lang: AMS Nottices Lang\n\nIt is on pages 546-547.\n\n• Objective achieved! – user76061 Mar 1 '14 at 3:21\n• Just curious: What did Mazur use the notation for (except for getting Lang's attention, of course)? – user122283 Mar 1 '14 at 3:25\n• @SanathDevalapurkar Just for getting Lang's attention. I posted the link to the story, on short Lang was criticizing a lot the notations Mazur was using, so they prepared a t-shirt for Lang, and they were waiting for Lang to say the magic words... But for some reason, that particular day Lang was quiet... – N. S. Mar 1 '14 at 3:29\n\nThe single worst use of mathematical notation I have ever seen was in a set of lecture notes in which the author wanted to construct a sequence of equivalence relations, each one ($\\equiv_n$) derived from the previous one ($\\equiv_{n-1}$). After $i_0$ iterations of this procedure, the construction has no more work to do, and the sequence has converged to a certain equivalence relation $\\equiv$ with desirable properties. The notes contained this formula: $$\\equiv_{i_0+1}=\\equiv_{i_0}=\\equiv$$\n\nI regret that I did not make a note of the source.\n\n• Could this actually be considered correct? – user76061 Mar 1 '14 at 4:31\n• I believe it is correct. – MJD Mar 1 '14 at 4:31\n• So it treats the equivalence like a variable? – user76061 Mar 1 '14 at 4:32\n• I'm not sure what you mean. An equivalence relation, as any relation, is a set. The $=$ signs assert that the sets are equal. – MJD Mar 1 '14 at 4:33\n• Wait. It just clicked, I misread. Sorry – user76061 Mar 1 '14 at 4:34\n\nThe Landau big-$O$ notation is extremely strange.\n\n1. One writes $$f(x) = O(g(x))$$ which looks like $f$ is the composition of $O$ and $g$, but it is nothing of the sort. Is $O()$ an operator that can be applied to any term? Can I write $$O(x^2) = O(x^3)$$ or $O(x^2) = 2x^2$? Not normally.\n\n2. It is easily confused with a whole family of similar notations for similar notions; computer programmers regularly talk about $O(n)$ algorithms when they mean $\\Omega(n)$ algorithms, for example. This is exacerbated because someone decided that instead of using mnemonic abbreviations, it would be a good idea arbitrarily assign every possible variant of the letter ‘o’ in naming them. 
Then when they ran out of letter O’s they used $\\Theta$, seemingly because it looks enough like an O that you might confuse it with one.\n\n3. It is written with an $=$ even though the relation is asymmetric! We have both $x=O(x^2)$ and $x=O(x^3)$ although $O(x^2)$ and $O(x^3)$ are not the same, and we have both $1 = O(x)$ and $x = O(x)$ even though $1\\ne x$.\n\n• It probably makes more sense if you give $O$ a variable to bind, e.g. $Ox(x^2)$ could denote the equivalence class of functions associated with the mapping $x \\in \\mathbb{R} \\mapsto x^2 \\in \\mathbb{R}$. – goblin GONE Mar 1 '14 at 5:28\n• There is still no excuse for the $=$ sign. If it were instead a $\\in$ sign, or some other asymmetric sign like $\\prec$, instead, my objection would only be a minor quibble. – MJD Mar 3 '14 at 21:50\n• @MJD I've always treated $O$ as a function which when given a function $f$ returns the set of functions asymptotically upper-bounded by $f$, because the = in that context is absolutely appalling! Makes much more sense to say $\\lambda x.4x^2-3 \\in O(\\lambda x. x^2)$. Regarding your point 2, I do believe $\\Theta$ is gaining much more popularity in computer science circles these days, at least. – Ray Toal Mar 4 '14 at 6:08\n• On the equals sign for O notation, Knuth indicates that (at least) he thinks of (say) $O(x^2)$ as \"something that is at most a constant times $x^2$\"—see his letter \"Teach Calculus with Big O\" (blog repost, PDF of AMS) — and of the equals sign as the English \"is\": \"Aristotle is a man, but a man isn’t necessarily Aristotle\". He's channelling de Bruijn: see pp. 5–7 of Asymptotic Methods in Analysis. – ShreevatsaR Mar 4 '14 at 20:14\n• In particular, Knuth (and de Bruijn before him, and perhaps even Bachmann and Landau onwards) would write $O(x^2) = O(x^3)$ (for $x \\to \\infty$, not $x \\to 0$ of course), with the meaning that \"something that is at most a constant times $x^2$ is also at most a constant times $x^3$\". E.g. de Bruijn writes, on page 6 of his book, the equation $O(x) + O(x^2) = O(x) \\quad (x \\to 0).$ At least de Bruijn agrees that it is abuse of notation and that the equals sign is a poor choice because it \"suggests symmetry, and there is no such symmetry\". But it's customary. – ShreevatsaR Mar 4 '14 at 20:25\n\nI took a long time to get used to derivative of integrals like this $$\\frac{\\partial}{\\partial x}\\int_{x_0}^x f(x,y) \\ dx$$\n\nIt's just too much $x'$s in the same formula, and each one has a different meaning. Nevertheless, its common to see people writing down this way.\n\n• I think this is actually meaningless. You cannot have the same variable in the integrand as in one of the limits of integration. – Steven Gubkin Mar 1 '14 at 4:21\n• I couldn't agree more. – Integral Mar 1 '14 at 4:25\n• If you are careful, you can, but there is no excuse to require the care. The $x$ in $f(x,y)$ is bound by the integral, while the $x$ upper limit is free, so that is what the derivative is taken with respect to. The result is then $f(x,y)$ with $x$ free. – Ross Millikan Mar 1 '14 at 5:07\n• The integration variable could be named anything; why on earth would you pick the one letter that would cause the most confusion. – mjqxxxx Mar 3 '14 at 22:33\n• @mjqxxxx: Because it's consistent with how one usually notates antidifferentiation; in fact, the integral above can be viewed not as intending to be a definite integral, but as intending to be an anti-derivative with a particular constant of integration determined by the lower bound $x_0$. 
– user14972 May 5 '14 at 21:52\n\nThe usage of pi:\n\n$\\pi$ is a constant. $\\pi(x)$ is the prime counting function. $\\prod(x)$ is a product of a sequence.\n\n• $\\pi$ is often a homomorphism or projection map as well. – Steven Gubkin Mar 1 '14 at 4:21\n• $\\phi = \\frac{1 + \\sqrt 5}{2}$, $\\phi (n)$ is the totient function, $\\Phi = \\frac{1 - \\sqrt5}{2}$, $\\Phi_n$ is the $n$th cyclotomic polynomial, $\\phi$ is often used in logic (e.g. $\\psi \\implies \\phi$)... – MCT Mar 1 '14 at 4:50\n• $\\phi(\\alpha)$ is also used in NF literature as a special operation on cardinals. If you're writing about the proof theory surrounding NF's big cardinals, it gets ugly. That's why I've pledged to overwork $\\xi$ in my own writing... – Malice Vidrine Mar 1 '14 at 7:44\n• isn't used for the fundamental group as well? – MphLee Mar 6 '14 at 7:25\n• In statistics, we use $\\pi$ as a variable to represent the population proportion. For example, we might be \"95% confident that $\\pi$ is between $0.41$ and $0.43$.\" Without knowing that $\\pi$ is being abused, that sentence would make absolutely no sense. – wchargin Nov 17 '14 at 3:36\n\nFrom a proof that convergence a.e. implies convergence in measure for $\\mu(\\Omega)<\\infty$:$$\\bigcup_{r\\geq 1}\\bigcap_{n\\geq 1}\\bigcup_{j\\geq n}\\{|{f_j-f}|>\\frac{1}{r}\\}=\\{\\omega:f_j(\\omega) \\not \\to f(\\omega)\\}$$\n\nAlso, labeling graphs of functions as $f(x)$ (which I end up still doing to my undergraduates, who are bored when I mention my reservations about it), $\\coprod$, \"Random Variable,\" calling a domain the preimage but switching it to a connected open set in complex talk, etc. etc. etc.\n\n$$\\large{\\prod_{n = 1}^3 \\mathbb{R} = \\mathbb{R}^3}$$\n\nEdit: Apparently this is common notation. MJD suggests a better example:\n\n$$\\large{\\prod_{n = 1}^3 S \\neq S^3}$$\n\n• Where has that been used? I've never seen this before (I've seen $(\\mathbb{R}^1)^3=\\mathbb{R}^3$), though I can understand the logic behind it. – user122283 Mar 1 '14 at 3:43\n• Why is this strange? $\\mathbb{R}^3 = \\mathbb{R}\\times\\mathbb{R}\\times\\mathbb{R}$. It's is a cartesian product, but still a product. Its quite natural to write like the way you showed. – Integral Mar 1 '14 at 3:47\n• Maybe I just don't work with this a lot so it seems weird to me. Usually I see it as $\\mathbb{R} \\times \\mathbb{R} \\times \\mathbb{R}$ -- the use of product notation of $n = 1$ to $3$ just seems a bit unnatural to me. – MCT Mar 1 '14 at 4:05\n• This is quite natural. The one you should have mentioned is $$\\prod_{n=1}^3 S\\color{red}{\\ne} S^3.$$ – MJD Mar 1 '14 at 4:23\n• $S$ is the circle, $S^2$ is the sphere (that is, the two-dimensional manifold that is the boundary of the ball in $\\Bbb R^3$), $S^3$ is the 3-sphere, which is the 3-dimensional boundary of the ball in $\\Bbb R^4$. But unfortunately $S^2 \\ne S\\times S$; the latter is the torus, sometimes written $T^2$, just to really confuse matters. – MJD Mar 1 '14 at 4:44\n\nHow about using pairs of letters like $r,s$ or $u,v$ , or $m,n$ when writing on a blackboard? Unless you're extremely careful, the two in any pair get very easily confused with each other. Or, when you're told you have two collections of objects ( with maybe some additional propreties ) , say $S,X$ , and then you have that $a$, or worse $x$ is an element in $S$. Isn't it so much better to just say $s$ is in $S$, and $x$ is in $X$ ; isn't an element $s$ in $S$ better than any other letter?"
https://mathspace.co/textbooks/syllabuses/Syllabus-452/topics/Topic-8345/subtopics/Subtopic-109602/ | [
"",
null,
"# Numbers, words and symbols (millions)\n\nLesson\n\nIn this video we look at writing numbers in numerals from words, writing numbers in expanded notation and also comparing numbers using the inequality symbols for greater than and less than. Having a strong sense of place value is important to be confident with all of these skills.\n\n#### Worked Examples\n\n##### question 1\n\nExpress the following as a single number:\n\n$4000000+200000+40000+7000+800+80+1$4000000+200000+40000+7000+800+80+1.\n\nNote: Do not use comma separators\n\n##### question 2\n\nConsider the place-value chart for the number $8739142$8739142.\n\n Millions period Thousands period Ones period hundreds tens ones hundreds tens ones hundreds tens ones $8$8 $7$7 $3$3 $9$9 $1$1 $4$4 $2$2\n1. In order to round $8739142$8739142 to the nearest million, we have to look at the millions digit.\n\nWhat is the millions digit?\n\n2. What is the digit to the right of the millions place?\n\n3. What should happen to the digit that is to be rounded?\n\nWe add $1$1 to it.\n\nA\n\nWe do not change it.\n\nB\n\nWe add $1$1 to it.\n\nA\n\nWe do not change it.\n\nB\n4. Hence, what is $8739142$8739142 rounded to the nearest million?\n\n##### question 3\n\nWrite an inequality symbol ($<$< or $>$>) in the box to make this number sentence true.\n\n1. $107714\\editable{}107540$107714107540"
https://link.springer.com/article/10.1007/s10546-023-00792-3 | [
"## 1 Introduction\n\nDust devils are columnar convective vortices with a vertical axis of rotation. On Earth, they typically extend to 1–10 m horizontally and 10–100 m vertically and last for several minutes (Balme and Greeley 2006). Dust devils mainly occur in arid regions like deserts (Lorenz et al. 2016) and swirl up loose material from the ground, which makes them visible. The basic meteorological conditions for dust devil development are clear skies with strong insolation that favors a large super-adiabatic lapse rate near the surface and background winds below certain critical values ($$\\sim$$ 5 m $$\\hbox {s}^{-1}$$) (Ives 1947; Williams 1948; Horton et al. 2016; Giersch et al. 2019). These conditions cause a convective boundary layer characterized by a near-surface cellular pattern with a size on the order of the boundary layer height ($$\\sim$$ 1 km). This pattern shows small cell boundaries where the horizontal flow converges and the air rises. Simultaneously, the cell centers contain broad regions of descending air (e.g., Khanna and Brasseur 1998). Dust devil-like vortices preferentially appear at the cell branches and vertices (Kanak 2006; Raasch and Franke 2011).\n\nTypical characteristics of dust devils include high vertical vorticity as well as positive temperature and negative pressure deviations (Sinclair 1964; Mullen and Maxworthy 1977). Their mean flow features a radial inflow near the surface, with highest wind speeds just outside the dust devil core, and a spiraling upward motion with positive peak vertical velocities located near the border of the visible dust column similar to the maximum tangential velocities (Sinclair 1973; Balme and Greeley 2006). However, the short-term flow structure is often more complicated and is influenced by, for example, secondary vortices (Sinclair 1973; Zhao et al. 2004) or central downdraughts (Kaimal and Businger 1970; Sinclair 1973).\n\nIn addition to the impressive appearance and dynamical behavior described above, dust devils carry particles into the atmosphere that may effect cloud microphysics (DeMott et al. 2003) and the radiation budget (Myhre and Stordal 2001). The contribution of dust devil sediment flux to the terrestrial dust budget is still under debate (Koch and Rennó 2005; Neakrase and Greeley 2010; Jemmett-Smith et al. 2015; Klose and Shao 2016). Also, dust devils modify the vertical momentum and heat fluxes significantly (as shown by field measurements from Kaimal and Businger (1970)). All this motivates the ongoing research on dust devils.\n\nSince the late 1990 s, direct numerical simulations (DNS, e.g., Cortese and Balachandar 1998; Giersch and Raasch 2021) and large-eddy simulations (LES) (e.g., Kanak 2005; Ohno and Takemi 2010a; Raasch and Franke 2011; Spiga et al. 2016; Giersch et al. 2019) have been increasingly used to study convective vertical vortices. One crucial parameter in numerical set-ups is the grid spacing, which often affects the simulated flow dynamics and the statistics of dust devils. However, meaningful numerical results must not depend on the model resolution. A prerequisite for an adequate simulation of dust devils is the sufficient resolution of the mean flow and its low-order moments. 
For daytime convective planetary boundary layers (PBLs) in which dust devils can form, Sullivan and Patton (2011) proposed a resolution of $$z_\\text {i}/\\varDelta _\\text {f}$$ > 60 so that the majority of the low-order moments (means, variances, and fluxes) become grid independent in the boundary layer interior (0.1 < $$z/z_\\text {i}$$ < 0.9). $$z_\\text {i}$$ and $$\\varDelta _\\text {f}$$ describe the boundary layer height and LES filter width, respectively. In our LES simulations, $$\\varDelta _\\text {f}$$ can be interpreted as the spatially uniform grid spacing $$\\varDelta$$. For $$z_\\text {i}/\\varDelta _\\text {f}$$ > 230 ($$\\varDelta$$ $$\\approx$$ 5 m), Sullivan and Patton (2011) demonstrated a coupling between large-scale thermal plumes and dust devil-like vortices. The dust devil cores tended to develop in the branches or spokes of the surface updrafts. Wurps et al. (2020) concluded that for convective simulations ($$z_\\text {i}$$ $$\\sim$$ 1000 m) a resolution of 20 m is adequate to accurately capture the profiles of the mean flow variables, the resolved turbulence kinetic energy (TKE), the resolved turbulent shear stresses, and the energy spectra. From Bopape et al. (2020), it can be inferred that a minimum grid width on the order of 25 m is sufficient to provide an accurate simulation of the convective boundary layer. However, the aforementioned studies focused on the grid-resolution requirements for low-order moments or flow structures with scales much larger than those of dust devils.\n\nNumerical simulations by Ito et al. (2010) (20 and 50 m grid spacing) and Zhao et al. (2004) (50 m grid spacing vertically and 100 m horizontally) indicated that relatively coarse-resolution LES are capable of resolving basic dust devil characteristics qualitatively but Ito et al. (2010) admitted that higher resolutions would affect the intensity and structures of the vortices. To simulate smaller dust devils, they suggested a grid spacing of a few meters. Resolutions of several tens of meters only capture very large dust devils, which are quite rare in nature (Kurgansky 2006). Even the largest vortices may not be accurately captured but only the large-scale thermal updraughts associated with them (Sinclair 1969). According to Kanak et al. (2000), horizontal wind speeds of the dust devil-like vortices are expected to strengthen and their circulation diameters are expected to decrease with increasing horizontal resolution. Kanak (2005) performed high-resolution LES with 2 m grid spacing that enabled the detection of smaller vortices. Despite the high resolution, vortex characteristics did not match the observations, especially the lower vortex vertical wind speed and the absolute pressure drop at the center. Raasch and Franke (2011) and Giersch et al. (2019) partly focused on the effects of grid resolution on simulated dust devils. Raasch and Franke (2011) concluded that vortices were stronger and appeared more frequently in high-resolution runs, whereas vortex diameter and height were the same. Therefore, a 2 m grid spacing should be sufficient to simulate the typical spatial structure of vortices. However, they only compared two different resolutions (1 and 2 m) and used model domains that were far too small to capture the large-scale cellular patterns to which the dust devils are tied. By comparing a 10 m to a 2 m simulation, Giersch et al. (2019) obtained a similar result as Raasch and Franke (2011). 
Dust devil-like vortices were more numerous and intense with finer model resolution. Giersch et al. (2019) also showed that the mean radius and lifetime of a whole dust devil sample decreased with higher resolutions and stated that a further reduction of the grid spacing (below 2 m) might result in even higher core pressure drops. Klose and Shao (2016) concluded that their number density, defined by the number of dust devils per square kilometer and per hour was smaller than in other studies due to the lower horizontal resolution they used (10 m). This was explained by an underestimation of the number of small dust devils. Interestingly, Ito et al. (2013) found no significant change in the dust devil strength (measured by vertical vorticity) between model runs with 50 and 5 m grid size. Both resolutions suggested a typical value of $$\\sim$$ 10$$^{-1}$$ $$\\hbox {s}^{-1}$$. Zhao et al. (2004) and Gu et al. (2008) simulated dust devils at ultra-fine resolution (down to 0.1 m). These studies modeled single, idealized dust devils in a cylindrical domain by applying appropriate boundary and initial conditions to force vortex development. While the results helped to understand basic vortex physics, it is unclear how applicable they are to atmospheric dust devils in more realistic environments.\n\nFor individual dust devils, some of the aforementioned studies showed that higher resolutions reveal more flow details. For example, Zhao et al. (2004) examined the flow structure of dust devil-like vortices at different grid spacings. Their vortex at 100 m resolution was characterized by local maxima in vertical vorticity, vertical velocity and temperature. However, their high-resolution vortex at 0.1 m suggested that these maxima do not necessarily occur at the same location, which is consistent with the simulations of Raasch and Franke (2011). The high-resolution vortex of Zhao et al. (2004) also revealed dynamic flow features like downdraughts at the core or several intense secondary vortices. Both phenomena were not observed in their coarse-resolution vortex. As admitted by Kanak et al. (2000), structures at the dust devil scale can not be resolved by a grid size of several tens of meters. Instead, the larger-scale circulations in which dust devils are embedded are captured. Our investigations of selected vortices will show how smaller grid spacings modify the appearance of dust devils and reveal many more flow details. However, a detailed investigation of the flow dynamics is not part of this study. Instead, we focus on the minimum grid spacing at which various dust devil features reach statistical convergence.\n\nBy determining the resolution where convergence occurs, future studies can improve estimations of particle concentrations and fluxes within dust devils. Numerical simulations require emission schemes that connect the turbulent wind to the particle release and transport (e.g. Klose and Shao 2013). Therefore, a prerequisite for these schemes to work appropriately is a well-resolved flow within and around dust devils. Currently available concentration and emission data vary a lot and are subject to great uncertainties. For example, Rennó et al. (2004) found through field experiments a typical dust content and vertical dust flux of 0.1 g m$$^{-3}$$ and 0.1–1 g m$$^{-2}$$ s$$^{-1}$$, respectively. Neakrase and Greeley (2010) estimated the sediment flux of dust devils by means of laboratory investigations to be between 10$$^{-3}$$ and 10$$^{3}$$ g m$$^{-2}$$ s$$^{-1}$$. 
Estimates from LES studies indicate dust concentrations of 10$$^{-5}$$–10$$^{-4}$$ g m$$^{-3}$$ (Ito et al. 2010) and 10$$^{-4}$$–10$$^{-3}$$ g m$$^{-3}$$ (Klose and Shao 2016). In these studies, grid spacings of 20 and 10 m were applied, which is too high for an adequate quantification of the particle load, especially within smaller dust devils with a horizontal size of a few meters. Both Ito et al. (2010) and Klose and Shao (2016) mentioned this insufficient resolution with respect to smaller vortices.\n\nTable 1 provides a summary of simulation parameters that have been used to study terrestrial dust devil-like vortices by LES while being able to capture the large-scale cellular pattern (about 1 km $$\\times$$ 1 km horizontal domain size or more) and while using a grid spacing of 10 m or below. $$L_{i}$$ describes the spatial extents along the x-, y-, and z-axis. All grids were stretched vertically except the one of Ito et al. (2013). For all we know, a grid spacing of 2 m is the highest resolution that has ever been used so far to investigate terrestrial dust devils in the convective PBL. Higher resolutions at large domains or larger domains at high resolutions were computationally too intensive. We overcome this problem by using a nesting technique (Hellsten et al. 2021) that enables simulations of a 4 km $$\\times$$ 4 km large horizontal model domain with grid spacings down to 0.625 m.\n\nThe above considerations indicate that there is a strong need to know which model resolution is necessary to adequately resolve terrestrial dust devils with LES and to capture their characteristics qualitatively and quantitatively correctly. Considering this, the paper structure results as follows. Section 2 gives an overview of the dust devil detection and analysis algorithm, the analyzed quantities characterizing a dust devil, the PALM model system, which is used in this study to perform the LES simulations, and the numerical set-ups. The results are introduced and discussed in Sect. 3 with a focus on the convergence behavior of general flow features, the dust devil statistics, and their three-dimensional structure. In Sect. 4, a summary and conclusions are given.\n\n## 2 Methodology\n\nThis section introduces the numerical and analysis methods that are used in our study. First, the PALM model system with its nesting technique is presented, followed by an introduction to the numerical set-ups that are performed. We will only concentrate on those features of PALM (formerly an abbreviation for Parallelized Large-eddy Simulation Model but now an independent name) which are actually used. Second, it is clarified how a dust devil center is detected during the numerical simulation and how centers are combined to form a dust devil with a certain lifetime. To conclude this section, specific dust devil quantities are introduced, which are statistically analyzed and provide the basis for the grid convergence study. Note, (convective) vortex, dust devil, and dust devil-like vortex are used as synonyms in the following and do not distinguish between a dust-laden vortex and an invisible one. Neither dust lifting nor dust transport processes are included in our simulations and, consequently, no potential effects of the particles on the turbulent flow (e.g., Richter and Sullivan 2013) are considered.\n\n### 2.1 The PALM Model System and its Nesting Technique\n\nThe numerical simulations are carried out with the PALM model system (revision 4732, Raasch and Schröter 2001; Maronga et al. 
2015, 2020), which is an open source model code written in Fortran designed for atmospheric and oceanic boundary-layer flows. It is publicly available at http://palm.muk.uni-hannover.de/trac/browser/?rev=4732 and designed for massively parallel computer architectures with distributed memory, utilizing the message passing interface (e.g., Gropp et al. 1999). In its default state, PALM calculates the flow solution based on the non-hydrostatic, filtered, Navier–Stokes equations in a Boussinesq-approximated form by solving the conservation equations for momentum, mass, and internal energy on a staggered Cartesian Arakawa-C grid (Arakawa and Lamb 1977). On such a grid, scalars like the potential temperature $$\\theta$$ are defined in the grid box center, whereas the velocity components u, v, and w are shifted by half of the grid spacing in x-, y-, and z-direction, respectively. Hence, the velocities are defined in the middle of the side walls of the grid box. The filtering of the smallest eddies is done implicitly based on a spatial scale separation approach after Schumann (1975). The spatial discretization of the model domain is realized through finite differences with an equidistant horizontal and a variable vertical grid width. A third-order Runge–Kutta scheme (Williamson 1980) is used for the integration in time. Advection is discretized by a fifth-order scheme of Wicker and Skamarock (2002). The parameterization of the SGS turbulence uses a 1.5-order closure after Deardorff (1980) but in the revised formulation of Moeng and Wyngaard (1988) and Saiki et al. (2000). PALM’s data output is based on the [UC]$$^2$$ data standard (Scherer et al. 2020) and is performed with the open, self-describing netCDF format.\n\nOne main assumption of the Boussinesq-approximated system of equations is the incompressibility of the flow. To reach this, a Poisson equation for the so-called perturbation pressure $$p^*$$ is solved by applying a predictor-corrector method (Patrinos and Kistler 1977) together with a fast Fourier transform after every Runge–Kutta sub-time step. The total dynamic pressure perturbation $$\\pi ^*$$ = $$p^*$$ + 2/3 $$\\rho e$$ is interpreted as the dust devil pressure drop with respect to the surroundings, similar to other LES studies of dust devils (e.g., Kanak et al. 2000; Raasch and Franke 2011; Giersch et al. 2019). The second term includes the air density $$\\rho$$ and the SGS TKE e. It describes the isotropic part of the SGS tensor that arises from the filtering of the model equations.\n\nIn order to allow for a high resolution of small-scale turbulent processes near the surface and to simulate a sufficiently large model domain to capture the large-scale cellular flow pattern in the convective PBL, PALM’s self-nesting capabilities are used (Hellsten et al. 2021). In our study, we apply the pure vertical (one-dimensional) nesting, where the high-resolution nested and coarse-resolution domains (also called child and parent domains) have identical horizontal model extents and where the children only obtain their boundary conditions from the parent at the top boundary instead of all boundary surfaces. Identical horizontal model extents are mandatory to capture the large-scale cellular pattern in all domains and enough high-resolved dust devil-like structures for statistical analysis. Regions above the dust devils can be simulated with the coarser parent resolutions to spare computational costs. 
The domain with the coarsest resolution is called the root domain.\n\n### 2.2 Numerical Set-ups\n\nTo study the effects of model resolution on the dust devil statistics, we performed differently resolved and mostly nested simulations while maintaining the horizontal model extent. In the following, an overview about the simulation initialization is given (see also Fig. 1). To force convection, a homogeneous heating is prescribed at the surface with a temporally and spatially constant vertical sensible heat flux $$w\\theta _0$$ of 0.24 K m s$$^{-1}$$ (approximately 285 W m$$^{-2}$$). The subscript 0 indicates a surface value. The vertical potential temperature profile at the beginning features a constant value of 300 K up to a height of 1000 m. Above, a capping inversion with a gradient of 0.02 K m$$^{-1}$$ is prescribed. For heights of 1300 m and more, a sponge layer (Rayleigh damping) is applied for all prognostic variables to reduce spurious reflections of vertically propagating waves from the model top. Wind velocities are initially set to zero everywhere because no background wind is considered. To accelerate the development of convection, random perturbations with a maximum amplitude of 0.25 m s$$^{-1}$$ are imposed on the horizontal velocity field during the beginning of the simulation until a prescribed domain-averaged perturbation energy (or resolved-scale TKE) limit of 0.01 m$$^2$$ s$$^{-2}$$ is exceeded. Besides, large-scale subsidence with a magnitude of up to 0.023 m s$$^{-1}$$ guarantees a constant boundary layer height during the simulation. Otherwise, the dust devil statistics might change in time because of the dependency between the boundary layer height and dust devils (e.g., Hess and Spillane 1990; Fenton and Lorenz 2015). With these initial conditions, a quasi-stationary state of the convective PBL is simulated that mimics the reality during the afternoon of a sunny day.\n\nThe bottom and top boundary are regarded as impermeable (w = 0 m s$$^{-1}$$) with no-slip (Dirichlet) conditions (u = v = 0 m s$$^{-1}$$) at the ground and free-slip (Neumann) conditions ($$\\partial u/\\partial z$$ = $$\\partial v/\\partial z$$ = 0 s$$^{-1}$$) at the top. For the perturbation pressure and the SGS TKE, a Neumann condition ($$\\partial p^*/\\partial z$$ = 0 Pa m$$^{-1}$$, $$\\partial e/\\partial z$$ = 0 m s$$^{-2}$$) is applied at the bottom surface. At the top surface, a fixed value of $$p^*$$ = 0 Pa is set, except for child domains, where a Neumann condition ($$\\partial p^*/\\partial z$$ = 0 Pa m$$^{-1}$$) is used. The potential temperature at the model top is derived locally by linear interpolation during each time step, utilizing the initial gradient at that height (horizontally homogeneous and constant during the run) and the temperature one grid point below, which is calculated through the prognostic model equation for the thermal internal energy. A further Neumann condition is set for e ($$\\partial e/\\partial z$$ = 0 m s$$^{-2}$$) at the top boundary. The nested domains obtain their top boundary conditions for the prognostic variables ($$\\theta$$, u, v, w) from their respective parent solution through a zero-order interpolation as described in Hellsten et al. (2021). In addition, a constant flux layer is assumed between the surface and the first computational grid point above. In this layer, unknown fluxes are calculated using Monin-Obukhov similarity theory (MOST), which requires to specify a roughness length of the surface. 
We choose a value of 0.1 m (typical for rural areas). In the horizontal directions, cyclic boundary conditions are applied. Finally, the Coriolis force, although probably not that important for dust devils (e.g., Balme and Greeley 2006), is considered by setting the earth’s angular velocity to 7.29 $$\\times$$ 10$$^{-5}$$ rad s$$^{-1}$$ and the latitude to 52$$^\\circ$$.\n\nThe computational domain has a horizontal extent of 4 km $$\\times$$ 4 km in order to resolve the large-scale polygonal convective cells. This is necessary because vertical vortices in the convective PBL are strongly tied to these cells (see Sect. 1). By the application of the vertical grid stretching above 1.2 km height, a vertical extent of the model domain of approximately 2 km is reached, which is well above the inversion layer. The child domains are about 240 m high, except for the simulation with the highest resolution (0.625 m), where a two-stage nesting (three domains) is used. Here, the second child domain has a height of 120 m. The selected heights are a compromise between computational costs and the demand to resolve the whole vertical thickness of a dust devil with fine resolution. The grid spacings in the root domains are 10 or 5 m along each spatial direction (except where the vertical grid stretching is applied) to capture the general flow statistics of the convective PBL well enough (see Sect. 1) and to limit the grid spacing ratio between parent and child domain to a maximum value of 5. This limitation is applied due to observations made by Hellsten et al. (2021) who state that the child solutions are almost independent of the chosen grid spacing ratio and that they all match to the non-nested case with the child resolution everywhere. Besides, our own test simulations show no significant difference between the dust devil statistics derived from an ensemble of 10 nested simulations with grid spacing ratio 5 (parent: 10 m, child: 2 m grid spacing) and the non-nested case, where a fine resolution of 2 m is used everywhere. In the nested set-ups of the convergence study, the child grid spacing is uniform in each direction and gradually reduced from 5 to 0.625 m (5, 2.5, 2, 1.25, 1, and 0.625 m), which results in 7 simulations each with a total simulated time of 4 h. An overview off all performed simulations is given in Table 2, stating the simulation name, the grid spacing, the domain size, and the number of grid points. The name is used as a reference in this study and is selected as follows: The first part indicates the root domain’s grid spacing (e.g., R10). The second part, if any, describes the nested domain’s grid spacing (e.g., N5) and simultaneously clarifies that a single-stage nesting (one child) is applied. If a two-stage nesting is used, a third part illustrates the resolution of the second nest and the first nest acts also as a parent domain. In the same row of the simulation name, the root domain characteristics are given. The first and second line below show the characteristics of the first and second child domain, respectively. Due to the grid stretching, the vertical extent of the root domains is rounded. 
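For reference, the initial state described above is simple enough to write down explicitly. The following minimal Python sketch only reproduces the prescribed temperature sounding (constant 300 K up to 1 km, capping inversion of 0.02 K m$$^{-1}$$ above); the function name and the use of NumPy are choices of this sketch and are not part of PALM or its input files.

```python
import numpy as np

def initial_theta_profile(z):
    """Initial potential temperature (K) at height z (m): 300 K up to 1000 m,
    capping inversion with a gradient of 0.02 K/m above."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= 1000.0, 300.0, 300.0 + 0.02 * (z - 1000.0))

# example: evaluate the sounding on a 10 m grid up to the 2 km model top
z = np.arange(0.0, 2000.0 + 10.0, 10.0)
theta = initial_theta_profile(z)
```

Together with the prescribed large-scale subsidence of up to 0.023 m s$$^{-1}$$, this capping inversion keeps the boundary layer height constant during the runs.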
Note that the dust devil data of a simulation are only taken from the nested domain with the finest resolution.\n\n### 2.3 Detection of Vortex Centers and the Generation of Vortex Tracks\n\nIn the following, we will comprehensively clarify how a vortex center is identified and how various centers compose a vortex track, which we interpret as a dust devil-like vortex with a certain lifetime. The different tracks form the basis for our statistical analysis of dust devil-like vortices (see Sect. 2.4). Our experiences show that the dust devil statistics can strongly depend on details of the detection and analysis algorithm, especially if a quantitative analysis is performed. In principle, the algorithm is based on the vortex detection and analysis approach from Raasch and Franke (2011) and Giersch et al. (2019), which we abbreviate with VDA11 (Vortex Detection and Analysis approach 2011) in the following. However, their approach has some drawbacks, which would heavily affect the comparison of results from runs with different resolution. For example, their vortices were detected on a horizontal plane taken from the first computational grid level above the surface at $$\\varDelta /2$$. If the grid spacing changes, the physical detection height also changes, which will result in different statistics. Such a procedure would complicate or even prevent a quantitative comparison between differently resolved simulations. Therefore, a requirement for the algorithm in this study is a constant detection/analysis height, which we fix to 10 m or the next higher level, where a scalar grid point is located (e.g., 12.5 m height in the 5 m simulation). We select this height level to exclude potential effects of the SGS model on the statistics at least for high-resolution runs (only the first grid points above the surface are significantly affected by the SGS model (e.g., Schmidt and Schumann 1989; Gadde et al. 2021) and to enable comparisons with field measurements. Besides, dust devils are ground-level vortices and their vertical extent is often limited to several tens of meters (Balme and Greeley 2006), which is why the detection height is chosen close to the ground.\n\nVortex centers are identified during the simulation after each model time step and after the model spin-up time of 45 min (see Sect. 3) by local minima of the dynamic perturbation pressure and local maxima of the absolute value of the vertical vorticity once certain thresholds are exceeded (see below). The vertical vorticity (hereafter only “vorticity”) is defined as the vertical component of the rotation of the velocity field:\n\n\\begin{aligned} |\\zeta | = \\left| \\frac{\\partial v}{\\partial x} - \\frac{\\partial u}{\\partial y}\\right| . \\end{aligned}\n(1)\n\nThe local minima/maxima refer to the lowest/highest value within a resolution-independent square of size 20 $$\\times$$ 20 m$$^2$$ instead of just considering the eight neighbouring grid points as in VDA11. This square size reproduces typical sizes of large dust devils (e.g., Kurgansky 2006) and simultaneously guarantees comparability between all performed simulations of different resolution. For example, an area of 20 $$\\times$$ 20 m$$^2$$ is the smallest possible square that can be defined for a local minima/maxima in R10.\n\nThe exact position of the center is exclusively defined by the location of the pressure minimum and not by the location of the absolute vorticity maximum, which is a further difference to VDA11. 
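The detection step just described can be condensed into a few lines. The sketch below is a post-processing illustration, not the online implementation used during the PALM runs: the field names, the [y, x] array orientation, and the use of SciPy's moving-window filters are assumptions of this sketch, whereas Eq. (1), the resolution-independent 20 m $$\times$$ 20 m search window, and the requirement that a sufficiently strong vorticity maximum lies within the same window follow the description in this subsection (the threshold values themselves are discussed below).

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def center_candidates(pi_star, u, v, dx, p_thresh, zeta_thresh):
    """Flag candidate vortex-center points on one horizontal slice.

    pi_star : 2-D total dynamic pressure perturbation (Pa), negative in a vortex core
    u, v    : 2-D horizontal velocity components (m/s), arrays indexed [y, x]
    dx      : horizontal grid spacing (m)
    p_thresh, zeta_thresh : detection thresholds for the pressure drop and |zeta|
    """
    # Eq. (1): vertical vorticity zeta = dv/dx - du/dy via centred differences
    zeta = np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)
    abs_zeta = np.abs(zeta)

    # resolution-independent search window of +-10 m (a 20 m x 20 m square)
    half = max(int(round(10.0 / dx)), 1)
    win = 2 * half + 1

    # local pressure minima and local |vorticity| maxima within that window
    is_p_min = pi_star == minimum_filter(pi_star, size=win)
    is_z_max = abs_zeta == maximum_filter(abs_zeta, size=win)

    # a candidate is a sufficiently deep pressure minimum with a sufficiently
    # strong vorticity maximum located somewhere inside the same window
    strong_max_nearby = maximum_filter(np.where(is_z_max, abs_zeta, 0.0), size=win) >= zeta_thresh
    return is_p_min & (pi_star <= -p_thresh) & strong_max_nearby
```

As in the analysis itself, the candidate position is tied to the pressure minimum alone; the vorticity maximum only has to lie somewhere inside the surrounding window.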
This definition is more consistent with the calculation of the vortex radius that is determined from the tangentially (in circular direction) averaged pressure drop distribution around each center. The radial distance at which the absolute pressure drop is 50% of that at the center is interpreted as the (core) radius (see also Kanak 2005; Raasch and Franke 2011; Giersch et al. 2019; Giersch and Raasch 2021). This procedure agrees well with analytical and empirical models of dust devils, where the radius also often describes the location of the maximum tangential velocity (Lorenz 2014). The numerically calculated core radius can be used as an estimate for the visible dust column radius of real vortices.\n\nSimultaneously to the pressure minimum, an absolute vorticity maximum must be located within a square of size 20 $$\\times$$ 20 m$$^2$$ around a center. In this way, the positions of the maximum rotation and the pressure minimum can be slightly shifted, which can especially happen for high resolutions (see discussion about Fig. 3). The size of the square should not be too large. Otherwise, rotations in the flow field that do not belong to the same vortex structure might be assigned to it by mistake.\n\nThe vortex center is only considered for the generation of vortex tracks if certain pressure and vorticity thresholds are exceeded. Different values have been proposed in former studies that mainly depend on the model resolution or the detection height (Ohno and Takemi 2010a; Raasch and Franke 2011; Klose and Shao 2016; Nishizawa et al. 2016). These studies have in common that certain pressure and vorticity deviations from the horizontal mean are chosen for the thresholds. In our simulations, 3 times the standard deviation of the dynamic perturbation pressure and 5 times the standard deviation of the vorticity are chosen. The standard deviation is calculated as the arithmetic mean of the standard deviations from 4 instantaneous horizontal cross-sections at analysis height after 1, 2, 3, and 4 h simulated time. It must be taken into account that all simulations from Table 2 (except R10N2.5N0.625) are executed at least twice, once for calculating the correct thresholds and once for identifying the dust devil-like vortices. Two simulations of R10N2.5N0.625 with 4 h simulated time are too expensive for us in terms of time and available computational resources (one simulation needs about 20 days wall-clock time on 6688 cores of an Atos/Bull system equipped with Intel Xeon Platinum 9242 processors) and thresholds are based on the first cross-section after the model spin-up time of 45 min. For the vorticity, the values 0.1, 0.24, 0.51, 0.63, 0.92, 1.08, and 1.56 s$$^{-1}$$ are used, which indicates a continuous increase in local vorticity fluctuations with decreasing grid spacing, similar to other studies (Nishizawa et al. 2016; Giersch et al. 2019). The thresholds for $$\\pi ^*$$ show no clear trend and a grid-independent value of 3.5 Pa is taken. Note, the threshold values are derived empirically to capture most of the vortex lifetime, to eliminate the background noise of non-coherent turbulence, and to limit the data for the post processing, which especially includes the formation of vortex tracks (see next paragraph).\n\nTo form dust devil tracks, the detected centers are sequentially processed after the simulation in a grid-independent way. The processing steps are first described technically. Details for each step and the main differences to VDA11 are explained further below.\n\n1. 
Centers of vortices with a radius $$r\ge$$ 100 m are neglected.

2. For finer resolutions than 10 m (all nested simulations), only the strongest center (rated by $$\pi ^*$$) is considered if two or more centers are closer than 20 m to each other at the same time.

3. Two centers at different times are combined to the same vortex track if the displacement speed from the location of the center detected first to the center detected at later times (second center) is $$\le$$ $$\sqrt{2} \times 10$$ m (corresponding to the distance of one R10 grid spacing in each horizontal direction) per 1.4 s (this time is defined by the mean time step of R10). If no second center is found after 4.2 s (three mean time steps in R10), a new dust devil track is initiated.

4. For simulations with $$\varDelta < 10$$ m (all nested runs), centers of the same vortex can move up to 20 m in just one time step (see detailed explanation below), which would exceed the maximum displacement speed allowed by criterion 3. Therefore, two centers from a nested simulation that are detected at different time steps will be connected regardless of the third criterion if they are 20 m or less apart, even for very short time periods between the detections.

5. A further condition for connecting two vortex centers to a track is that their difference in $$\pi ^*$$ must be less than 10% of the value of the first center.

6. Finally, the area-averaged (over 20 $$\times$$ 20 m$$^2$$ around the center’s location) local vorticities must have the same sign to connect two vortex centers.

The first action, which was not part of VDA11, guarantees that only well-developed dust devil-like centers are connected. An investigation of the events with radii above 100 m has shown that such centers are weakly pronounced, with values close to the detection thresholds, so that they often vanish again quite quickly.

The second step avoids counting a single dust devil with several sub-vortices (see e.g., Bluestein et al. 2004) twice or even more, and it takes care of the merging of centers, which is an important process for vortex intensification (Ohno and Takemi 2010a). Additionally, the comparability between differently resolved simulations is increased. Due to the definition of a center as a local extremum within a square of 20 $$\times$$ 20 m$$^2$$, the 10 m run cannot resolve situations where two or more centers are closer than 20 m (see Fig. 2, which shows the smallest possible distance between two centers A and B that might occur in R10). However, all nested simulations can principally resolve distances below 20 m, as is shown in Fig. 2 for R10N5 (centers A and C). With the second criterion, we prohibit this technical discrepancy. The old algorithm VDA11 disregards all weaker vortex centers as soon as a center is within the radius of another (sub)vortex at the same time step. Taking the radius instead of a fixed value of 20 m is critical: our results will show that the radius is strongly affected by the resolution. As a consequence, the detection and analysis method would indirectly depend on the grid, which we want to avoid as much as possible.

The third step allows the dust devil to occasionally have absolute vorticity or pressure drop values less than the thresholds.
For example, consecutive centers of a dust devil track in R10 might have a temporal distance of three mean R10 time steps (3 $$\times$$ 1.4 s) if thresholds were not exceeded after the first and second time step and, thus, no suitable center was stored at these times. For the nested runs, the physical scales regarding time and space are maintained, i.e., the maximum allowed covered distance in 1.4 s is always $$\sqrt{2} \times 10$$ m, which corresponds to a displacement speed of about 10 m s$$^{-1}$$, and a period of 4.2 s is always scanned for a potentially subsequent center, independent of the actual number of time steps that are needed to cover this time frame. For the highest resolution (R10N2.5N0.625), it follows that consecutive centers of a dust devil track are allowed to have temporal distances of more than 100 time steps (the mean time step in R10N2.5N0.625 is 0.038 s). In VDA11, two vortex centers were only connected if the position of the second center is not more than two grid points away from the first one after a maximum of three time steps. This procedure links the detection of dust devils to the model resolution. Therefore, we reject it.

Criterion 3 is completed by step 4 to allow a center displacement speed of much more than 10 m s$$^{-1}$$, which might be physically unrealistic but can happen technically anyway. Imagine a situation in the child of simulation R10N5 (mean time step of 0.75 s) where two centers of the same dust devil-like vortex with similar strength are closer than 20 m to each other (see Fig. 2, centers A and C, distance is approximately 18 m). It might happen that at one time step center A is preferred and C is sorted out (based on criterion 2) but at the subsequent time step center C is preferred because it becomes stronger than A, which is then rejected. For this situation, the vortex center’s displacement speed is 24 m s$$^{-1}$$ (18 m/0.75 s) and, thus, it exceeds the maximum allowed value of 10 m s$$^{-1}$$ according to point 3. Therefore, a new track would be generated by mistake instead of connecting center A (remaining center after first time step) with center C (remaining center after second time step). We enable this connection by step 4.

Finally, points 5 and 6 describe additional criteria intended to avoid counting two centers from different dust devils to the same track and to prohibit unrealistically high short-term fluctuations of the dust devil features during its lifetime. The area-averaged local vorticity is taken instead of the local vorticity as used in VDA11 because the area average is more robust with respect to the overall rotation direction of the dust devil-like vortex, especially for finer resolutions. To illustrate this, Fig. 3 shows two snapshots of the local vorticity in and around a cyclonic (a) and anticyclonic vortex (b) resolved with 10 and 1 m grid spacing, respectively. For 1 m, the structure is quite diverse and although the overall rotation is in the clockwise direction some locations show a pronounced positive vorticity (anticlockwise spin), even in the central region. Therefore, it can happen that the value of the local vorticity attributed to a vortex center can change its sign within just two consecutive time steps. This would initiate a new dust devil track and centers would not be combined if local vorticity values are taken.
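The linking rules of criteria 3–6 reduce to a short predicate. The sketch below is a simplified post-processing view with illustrative field names and record structure (the actual bookkeeping differs in detail); note that it is the area-averaged vorticity, not the local value, that enters the sign check, for exactly the reason just described.

```python
import math

MAX_STEP_DIST = math.sqrt(2.0) * 10.0  # m per REF_TIME (one R10 grid spacing in each direction)
REF_TIME      = 1.4                    # s, mean time step of R10
SEARCH_WINDOW = 4.2                    # s, after which a new track is initiated
NEST_DIST     = 20.0                   # m, relaxed distance criterion for nested runs (step 4)

def can_link(prev, new, nested):
    """Decide whether a newly detected center continues an existing track.

    prev, new : dicts with keys 't' (s), 'x', 'y' (m), 'pi_star' (Pa) and
                'zeta_av' (s-1, vorticity averaged over 20 m x 20 m);
                these names are illustrative only.
    nested    : True for child domains with grid spacings below 10 m.
    """
    dt = new['t'] - prev['t']
    if dt <= 0.0 or dt > SEARCH_WINDOW:
        return False                                             # criterion 3: start a new track
    dist = math.hypot(new['x'] - prev['x'], new['y'] - prev['y'])
    slow_enough   = dist <= MAX_STEP_DIST * (dt / REF_TIME)      # criterion 3: ~10 m/s displacement limit
    close_enough  = nested and dist <= NEST_DIST                 # criterion 4
    similar_drop  = abs(new['pi_star'] - prev['pi_star']) < 0.1 * abs(prev['pi_star'])  # criterion 5
    same_rotation = new['zeta_av'] * prev['zeta_av'] > 0.0       # criterion 6
    return (slow_enough or close_enough) and similar_drop and same_rotation
```

A new track is initiated whenever no existing track accepts a center within the 4.2 s search window.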
However, the area averages of the local vorticities around the two vortex centers possess a clear positive and negative value (0.4 s$$^{-1}$$ and $$-$$0.58 s$$^{-1}$$).\n\nThe high positive vorticities in the clockwise rotating vortex can be better understood by transforming the vorticity into natural coordinates:\n\n\\begin{aligned} \\zeta = -\\frac{\\partial V}{\\partial n} + \\frac{V}{R}, \\end{aligned}\n(2)\n\nwith V the horizontal wind speed, n the normal direction to the velocity vector, and R the curvature radius ($$R>0$$ for",
"counterclockwise (cyclonically) curved flow and $$R<0$$ for clockwise (anticyclonically) curved flow",
"). Accordingly, the first term describes the rate of change of V normal to the direction of the flow (shear vorticity) and the second term describes the turning of the wind along a streamline (curvature vorticity). By analyzing high-density streamlines and vector fields (see Fig. 3c, d), it turns out that the high positive vorticity values in the vortex center are a combination of both the shear and curvature vorticity. At these locations (e.g., at around x = -11 m, y = 4 m, see white rectangle), the flow shows a cyclonic curvature (Fig. 3c) and also partly a decelerating flow along the normal direction (Fig. 3d), resulting in an overall positive vorticity.\n\n### 2.4 Statistical Vortex Analysis\n\nThe resulting vortex tracks from the formation algorithm above create a subset of an unknown dust devil population for which statistics shall be derived. We call this subset a dust devil sample. Each simulation creates such a sample but with a different resolution the underlying dust devil population changes. For example, high-resolution runs produce very short-lived vortices of a few seconds, while coarse-resolution runs do not capture them at all because of their higher time steps. This complicates a comparison between the statistics of differently resolved simulations. Therefore, it is mandatory to restrict the statistical analysis only to those vortices with a lifetime above a certain threshold. In this way, the dust devil population can be compared much better among the differently resolved simulations because all dust devil lifetimes that might occur in the population can principally be captured with every resolution. Additionally, we assume that very short-lived vertical vortices of several tens of seconds do not appear in nature as well-developed dust devils because of the small time frame for reaching a significant strength that initiates strong dust lifting. Therefore, they are usually not considered in any dust devil statistics which is based on observations. To increase the comparability with data from field studies and between simulations with different grid spacings, we restrict our dust devil population to those vortices with a lifetime $$\\tau$$ of at least 120 s.\n\nThe horizontal size of the vortices within the regarded population must also be constrained to allow for a good comparability between the differently resolved simulations. If the grid spacing is reduced, more and more smaller vortices occur (e.g., Giersch et al. 2019). For example, R10 allows a minimum radius of 10 m, while in R5N1 also vortices with radii of several meters can be simulated. Therefore, we limit the spatial vortex scale at the lower bound, i.e., the vortex must have a certain size and all other vortices smaller than this size are not considered. If the lower bound is selected high enough, a detection of vortices with spatial scales close to this bound would become possible for each applied grid spacing. In addition, limiting the statistics to larger vortices enables a proper resolution of their dynamics, which typically requires a minimum of 5–10 grid points along the dust devil’s axes. In our study, the most obvious selection for the lower bound would be a minimum radius of 10 m because such a vortex could principally occur in each simulation, even though poorly resolved in the coarse-resolution runs. However, the minimum vortex radius must not be too large either. 
Otherwise, each sample contains only a few dust devil-like vortices from which a statistically profound derivation of vortex characteristics is impossible. Observational data suggest that a core radius of 10 m and more is already too large to get a sufficient number of dust devils during a simulated time of 4 h (Houser et al. 2003; Balme and Greeley 2006; Kurgansky 2006). They usually show radii of around 5 m. That is why we exclude all vortices with a lifetime-averaged radius lower than this value from our statistical investigation as a compromise between the above requirements for comparability and statistical analysis.\n\nAs explained above, our regarded dust devil population only considers vortex tracks with lifetimes of at least 120 s and lifetime-averaged radii of $$\\ge$$ 5 m. We mainly calculate double-averaged quantities at the detection height of 10 m to describe this population statistically. The first average is executed over the vortex lifetime. Subsequently, an average over the sample is performed. In this way, each model run creates one value for one specific quantity. Instantaneous events of single dust devils or extreme values of a sample might be rare events and do not necessarily represent typical characteristics of a population. Taking only these events would make it difficult to compare results among the differently resolved simulations. That is why we focus our statistical analysis to mean values (if not otherwise specified).\n\nThe application of the sample average necessitates the first quantity that is analyzed: the sample size, which is expressed by the number of dust devil-like vortices N or the number density n in km$$^{-2}$$ h$$^{-1}$$ or km$$^{-2}$$ d$$^{-1}$$. In addition, the statistical focus is on the sample-averaged quantities $$\\overline{\\pi ^*}$$, $$\\tau$$, $$\\overline{r}$$, the tangential, radial, and vertical velocity $$\\overline{u_{\\text {tan}}}$$, $$\\overline{u_{\\text {rad}}}$$, and $$\\overline{w_{\\text {d}}}$$ (\"d\" for dust devil), respectively, the area-averaged vorticity $$\\overline{\\zeta _{\\text {av}}}$$ and, finally, $$\\overline{\\theta }$$. Here, the overbar indicates a time-averaged value over the whole vortex lifetime. The quantities $$\\pi ^*$$, $$\\zeta _{\\text {av}}$$ and $$\\theta$$ are defined in the vortex center. Instead, $$u_{\\text {tan}}$$, $$u_{\\text {rad}}$$, and $$w_{\\text {d}}$$ represent the maximum of the tangentially averaged velocity distribution of the respective cylindrical component around the vortex center. Note that the radial and tangential components are calculated during the simulation through the transformation of the total Cartesian velocity components u and v to polar coordinates. Thus, radial and tangential velocities also contain the translational speed of the dust devil.\n\nBased on the quantities above, the minimum lifetime of a vortex, which shall be considered in the statistics, is further restricted. An air parcel, which moves with the velocity $$\\overline{u_{\\text {tan}}}$$, must be able to circulate the vortex with a circumference of $$2\\pi \\overline{r}$$ at least once during its lifetime. This is a further reasonable condition to focus the dust devil analysis only to well-developed vortices.\n\n## 3 Results and Discussion\n\nThis section starts with an overview of general flow characteristics, which describe the physics of the simulated PBL in more detail. 
A focus is on the grid convergence of flow quantities that are typically used within studies of the PBL, like vertical profiles of the potential temperature or the friction velocity. In Sect. 3.2, dust devil statistics for variable grid spacings are analyzed with respect to their convergence. In addition, a quantitative comparison to observational data is performed. Finally, features of the three-dimensional structure of selected dust devil-like vortices are addressed without the claim to give a comprehensive description and explanation of the dust devil flow dynamics. A focus is again on how these features change with grid spacing.

### 3.1 General Flow Features

The general development of the flow in all root domains is very similar and can be evaluated through results from R10. A quasi-stationary state, where the turbulence statistics do not change substantially, is reached after 45 min as, for example, indicated by time series of the domain-averaged total kinetic energy E = 0.5 $$\times$$ ($$u^2$$ + $$v^2$$ + $$w^2$$) of the flow (see Fig. 5a). Here, the domain average refers to all heights up to the top of the (first) child domain (240 m) in every simulation to enable better comparability between the different model domains. The subsequent analysis only includes periods after the spin-up time of 45 min.

Figure 4 shows horizontally and temporally averaged vertical profiles of the potential temperature (a), the total vertical turbulent heat flux (b), composed of the resolved-scale and subgrid-scale turbulent heat flux, and variances of v (c) and w (d). The horizontal average is marked by angular brackets, the overbar describes a temporal average over a period of 15 min before the respective output time, and the prime denotes a resolved-scale turbulent fluctuation, which is interpreted as the deviation of an instantaneous resolved-scale quantity from its horizontal domain average. The profiles reveal the typical characteristics of a convective PBL (see also Schmidt and Schumann 1989; Moeng and Sullivan 1993; Park and Baik 2014). The potential temperature indicates constant values in the so-called mixed layer and strong vertical gradients near the surface and in the entrainment zone at around 1 km, where the heat flux becomes negative (downward flux). The negative slope of $$\overline{<w\theta >}$$ indicates a negative flux divergence, causing a mean temperature increase in the PBL with time. In upper layers ($$\approx$$ 1000–1100 m), overshooting thermals mix warmer air from the inversion layer downwards. This, together with the large-scale subsidence (see Sect. 2.2), results in a warming of the PBL and the free atmosphere during the simulation. Variances of the horizontal velocity components (only $$\overline{<v'^2>}$$ is shown because $$\overline{<u'^2>}$$ looks very similar) show stronger turbulence near the surface and in the entrainment zone generated by wind shear and buoyancy forces. Vertical velocity fluctuations reach a maximum in the lower third of the boundary layer, whose height is constant over time ($$\approx$$ 1 km). In our study, the boundary layer height is defined as the point where the minimum of the total sensible heat flux profile is reached.
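Both diagnostics used here, the boundary layer height $$z_\text {i}$$ (height of the minimum of the total sensible heat flux profile) and the Deardorff convective velocity scale, are easy to evaluate from profile data. The following sketch uses an idealized flux profile and illustrative surface values; none of the numbers are model output:

```python
import numpy as np

def boundary_layer_height(z, total_heat_flux):
    """z_i defined as the height where the total vertical sensible heat
    flux profile reaches its minimum (the entrainment flux)."""
    return z[np.argmin(total_heat_flux)]

def deardorff_velocity_scale(z_i, surface_heat_flux, theta_ref, g=9.81):
    """Standard convective velocity scale w* = (g z_i <w'theta'>_0 / theta_0)**(1/3)."""
    return (g * z_i * surface_heat_flux / theta_ref) ** (1.0 / 3.0)

# Idealized flux profile: linear decrease to the entrainment minimum at 1 km.
z = np.linspace(0.0, 1500.0, 151)                                    # height levels (m)
flux = np.where(z <= 1000.0, 0.24 * (1.0 - 1.3 * z / 1000.0), 0.0)   # K m s-1 (illustrative)

z_i = boundary_layer_height(z, flux)                     # -> 1000.0 m
w_star = deardorff_velocity_scale(z_i, flux[0], 300.0)   # roughly 2 m s-1
print(z_i, round(w_star, 2))
```

Dividing $$w_*$$ by the friction velocity yields the ratio discussed in the next paragraph.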
Profiles in b, c, and d still show fluctuations around the actual mean state because the temporal average refers to a period of only 15 min ($$\sim$$ one to two times the large-eddy turnover time defined by the ratio of the boundary layer height $$z_\text {i}$$ and the convective velocity scale $$w_*$$, which is also referred to as the Deardorff velocity scale). Normally, several large-eddy turnover times are necessary to better capture the mean state. However, the shape remains similar.

The grid convergence of the general flow state is first investigated by time series of E, the horizontal average of the friction velocity $$<u_*>$$ calculated by means of MOST (Maronga et al. 2020), and the maximum of the vertical velocity $$w_\text {max}$$ of the whole domain (see Fig. 5). The domain-averaged total kinetic energy represents the whole (resolved) flow, including a mean and a turbulent contribution. However, because no mean wind is applied, E and the domain-averaged resolved-scale TKE of the flow are the same. All simulations show similar values for E that oscillate around a mean state of approximately 1.7 m$$^2$$ s$$^{-2}$$. Therefore, this quantity can be regarded as converged for every resolution.

The friction velocity, characterizing the turbulent momentum exchange near the surface, shows a grid-spacing-independent behaviour. This indicates that a resolution of 10 m is already fine enough to resolve the mean turbulent transport in the surface layer. According to Lyons et al. (2008), the ratio of the convective velocity scale and the friction velocity must be larger than 5 for dust devil formation. Our simulations indicate a grid-independent value of 2 m s$$^{-1}$$ (not shown) for the convective velocity scale and, thus, a ratio $$w_*/u_*$$ of approximately 10.

In contrast to the friction velocity, the total maximum values of the velocity components, which occur somewhere in the respective domain and which represent extraordinary flow events, do not converge (the time evolution of $$u_\text {max}$$ and $$v_\text {max}$$ looks very similar to that of $$w_\text {max}$$). The magnitude constantly increases with a decrease in the grid spacing, resulting in peak velocities of about 20 m s$$^{-1}$$ for each component in simulation R10N2.5N0.625. This corresponds to about 10 times the convective velocity scale. In R5N1, maximum vertical velocities always occur near the center of a dust devil-like vortex after 1, 2, 3, and 4 h simulated time. The average height of the location of $$w_\text {max}$$ after the spin-up time is 30 m. In most cases, maxima occur below 20 m. Only in sporadic cases are the strongest updrafts in the child domain simulated at heights between 100 and 240 m. Thus, we presume that maximum velocities are mostly caused by dust devils or the strong larger-scale updrafts connected to them. Because $$w_\text {max}$$ values of up to 25 m s$$^{-1}$$ have already been reported for dust devils (Balme and Greeley 2006), we assume that our $$u_\text {max}$$, $$v_\text {max}$$, and $$w_\text {max}$$ are still realistic, and we speculate that a grid spacing of 0.625 m is close to, but not yet at, the resolution required to show a convergence of the maximum velocities. To clarify this point, a further simulation with an even finer resolution would be required, which is currently beyond our available computational resources.
However, the time series are a first indicator that extreme values in the dust devil statistics are not well suited at this stage for evaluating a grid convergence (see Sect. 3.2).

In Fig. 6a, vertical profiles of the potential temperature show a constant increase of the surface temperature and the near-surface temperature gradients with a decrease in the grid spacing (the surface value is set to the value at the first computational grid level above but does not enter the prognostic model equations). With higher resolution, the thin super-adiabatic layer close to the ground, showing strong vertical temperature gradients, is much better resolved. According to the theoretical work by Horton et al. (2016), a critical value of 1–10 K m$$^{-1}$$ is needed in the first few meters above the hot surface for the onset of dust devils. This has also been shown by observations (e.g., Oke et al. 2007; Ansmann et al. 2009). Such high gradients are only simulated in the simulations with child grid spacings of 1 and 0.625 m. As a consequence of increasing surface temperatures, stronger plumes are probably generated, resulting in higher vertical velocities. This is consistent with the result that $$w_\text {max}$$ constantly increases with decreasing grid spacing (see previous paragraph). For heights of 20 m and above, all profiles of the potential temperature overlap, indicating a converged situation. This is also true for the potential temperature variance displayed in Fig. 6b. Very close to the surface, however, more intense temperature fluctuations are simulated if the grid spacing is reduced, which is directly related to stronger gradients in the mean profile. With such gradients, even small displacements of air parcels cause high temperature fluctuations.

As Fig. 6c shows, the mean variance of the u-component $$\overline{<u'^2>}$$ ($$\overline{<v'^2>}$$ looks almost the same) increases if the resolution changes from 10 to 5 m. A further reduction causes a significant increase below heights of 20 m only until a resolution of 1.25 m is reached. For 1.25, 1, and 0.625 m grid spacing, profiles almost overlap, even very close to the surface. This is confirmed by the resolved-scale TKE $$e_\text {res}$$ = 0.5 $$\times$$ ($$u'^2$$ + $$v'^2$$ + $$w'^2$$) (see Fig. 6d), which includes all variances in one quantity. For resolutions of 5 m and below, maximum differences among the profiles amount to only several per cent if heights above 20 m are regarded. At lower heights, only the profiles of R5N1.25, R5N1, and R10N2.5N0.625 match well.

Finally, the resolved-scale and SGS vertical turbulent heat fluxes are displayed in Fig. 6e, f (a double prime denotes SGS quantities). In each simulation, the first two vertical grid levels above the surface are significantly affected by non-resolved processes. At 20 m and above, however, SGS fluxes can be neglected for all resolutions because they contribute to the total flux only marginally (less than 10%). From a height of 30 m, all profiles start to overlap and are converged. As discussed in Sect. 2.3, the detection height of dust devil-like vortices is set to 10 m or the next higher grid level. The heat flux profiles show that the turbulence is already well resolved at that height for most of the simulations except R10. Therefore, uncertainties that might be introduced by the surface parameterization are mostly negligible in the subsequent dust devil statistics.
The total vertical turbulent heat flux profiles overlap for all resolutions (not shown).

Dust devils are connected to the large-scale cellular pattern of the convective PBL (see e.g., Kanak 2006; Raasch and Franke 2011). Figure 7 reveals this pattern by horizontal cross-sections of the instantaneous vertical velocity at 100 m height and shows how structures depend on model resolution. With a resolution of 10 m, the large-scale polygonal structures are well resolved. However, more flow details become apparent for R10N5 (Fig. 7b). A further reduction in grid spacing does not change the overall flow pattern, although smaller and smaller turbulent scales are captured; the displayed figure size in Fig. 7, however, does not allow them to be seen. The number of detected vortex centers at 4 h simulated time amounts to 33, 58, 178, 256, 254, and 261 for R10, R10N5, R10N2.5, R10N2, R5N1.25, and R5N1, respectively. Consequently, simulations with grid spacings of 2 m and below create a similar number of centers. A more detailed discussion about the number of detected vortex centers and, thereby, dust devil-like vortices follows in Sect. 3.2.

All in all, our results support the findings from past studies about resolution requirements for LES of the convective PBL (see Sect. 1). These studies mainly recommend a grid spacing on the order of 10 m. However, if the research focus is more on the surface layer, processes that originate from there, or details of the flow structures, 10 m is still too coarse. This is probably also true for dust devils, which are quite small-scale flow phenomena. Especially the above number of detected vortex centers and the resolution-dependent variation of the near-surface temperature profile suggest a strong influence of the grid spacing on dust devils. We will investigate this in more detail in Sects. 3.2 and 3.3.

### 3.2 Dust Devil Statistics

In the subsequent paragraphs, dust devil statistics at detection height are quantitatively analyzed for samples of a dust devil population, which only contains vortices with a lifetime-averaged radius of 5 m or more and with a lifetime of at least 120 s. This limited population and the quantities that are analyzed have already been motivated and explained in Sect. 2.4. The statistics are usually based on one simulation with a certain grid spacing. However, to estimate statistical uncertainties, ensembles with 10 members have been created for grid spacings of 5 (R10N5) and 2.5 m (R10N2.5) by applying different random perturbations at the beginning of the respective simulation (see also Sect. 2.2). From these members, 95% confidence intervals have been derived for the mean and the standard deviation of the corresponding ensemble. Such an interval covers the true value with a probability of 95%. We take the same statistical significance interval as defined in Giersch and Raasch (2021, Appendix C) to assess whether a value might be part of the same ensemble or not, i.e., as soon as a value lies outside the significance interval, differences are rated as statistically significant. A calculation of the confidence intervals for higher resolutions was not possible because the required ensemble runs were beyond our current computational resources. The statistical uncertainty ranges of R10N5 and R10N2.5 are very similar for most of the analyzed quantities (see Fig. 8), which is why we assume that they are also applicable to the other resolutions.
Note that the exact detection height varies with the grid spacing due to the arrangement of the numerical grid. In case of 10 m grid spacing, dust devil centers are actually detected at 15 m and for 1 m resolution at a height of 10.5 m. This creates a systematic bias towards higher or lower values. However, with decreasing grid spacing the analyzed grid level approaches the physical height of 10 m and the systematic bias is reduced.

The dust devil characteristics and their dependencies on the grid width are shown in Fig. 8. The number of detected dust devil-like vortices (Fig. 8a) varies between 700 and 2400, which corresponds to number densities of 13 km$$^{-2}$$ h$$^{-1}$$ and 46 km$$^{-2}$$ h$$^{-1}$$, respectively (the dust devil detection time during the simulation is 3 h and 15 min). Therefore, the order of magnitude for n is assumed to be 10 km$$^{-2}$$ h$$^{-1}$$. The deviations between coarser resolutions (10–2.5 m) are mainly provoked by the definition of the regarded dust devil population. The smallest vortices with radii between 5 and 10 m, which occur more frequently than larger vortices, can hardly exist in R10, R10N5, and R10N2.5, if at all. The finer the resolution, the more of these vortices are resolved, which can be seen in Fig. 9, where the radius data are grouped into bins with an equal size ratio of about $$\sqrt{2}$$. The maximum number moves towards smaller radii and increases. For R10, R10N5, and R10N2.5, it is located at the bins [15;21.63), [7.21;10.4), and [5;7.21), respectively. A further reduction in grid spacing (2.5–0.625 m) causes no further increase of N because dust devil-like vortices with radii smaller than 5 m are neglected in our population. Instead, the total number decreases. For grid spacings of 2 m or lower, Fig. 9 and observational data by Oncley et al. (2016) suggest that the maximum of the number distribution of $$\overline{r}$$ is not captured anymore by a population that neglects vortices smaller than 5 m. The maximum moves out of the considered radius range if the grid spacing reduces. According to the significance intervals from simulations R10N5 and R10N2.5, we assume a converged value of about N=1000, corresponding to 19 km$$^{-2}$$ h$$^{-1}$$ or 77 km$$^{-2}$$ d$$^{-1}$$ if a typical sunny day allows for 4 h of strong dust devil activity (see also Lorenz 2014). Optical detections during field experiments report frequencies between 0.1 and 800 km$$^{-2}$$ d$$^{-1}$$, depending on the survey area (Balme and Greeley 2006; Lorenz 2009; Lorenz and Jackson 2016). Lorenz (2009) proposes the formula n $$\sim$$ 50/A, indicating that the number density in km$$^{-2}$$ d$$^{-1}$$ is inversely proportional to the survey area A in km$$^2$$. An application of this formula to our study results in n $$\approx$$ 3 km$$^{-2}$$ d$$^{-1}$$, which is less than the simulated values. However, according to Lorenz (2014), approximately 100 dust devil counts per square kilometer per day is the most likely formation rate of visible dust devils under favorable meteorological conditions. The simulated value of 77 km$$^{-2}$$ d$$^{-1}$$ is quite close to this rate. It must be noted that not all of the simulated vortices would be visible in nature because pressure and vorticity detection thresholds in the numerical simulations correspond to intensities much lower than the values that would be needed for dust lifting.
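The conversion from raw vortex counts to the number densities quoted above is simple bookkeeping; a minimal sketch (the 4 $$\times$$ 4 km$$^2$$ footprint, the 3 h 15 min detection window, and the assumed 4 h of daily activity are taken from the text, the rest is arithmetic):

```python
def number_density(count, area_km2, hours):
    """Convert a raw vortex count into a number density in km-2 h-1."""
    return count / (area_km2 * hours)

area = 4.0 * 4.0          # horizontal domain footprint in km2 (approx.)
detection_time = 3.25     # detection period in h (3 h 15 min)

n_hourly = number_density(1000, area, detection_time)  # ~19 km-2 h-1
n_daily = n_hourly * 4.0                               # ~77 km-2 d-1, assuming 4 h of activity per day
n_lorenz = 50.0 / area                                 # Lorenz (2009): n ~ 50/A, here ~3 km-2 d-1
print(round(n_hourly, 1), round(n_daily), round(n_lorenz, 1))
```

The same bookkeeping applies to any subset of the sample, for example the vortices exceeding a given pressure-drop threshold.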
If we additionally assume a threshold core pressure drop of 30 Pa for the occurrence of dust lifting (Lorenz 2014), only 200–300 vortices occur in simulations R5N1.25, R5N1, and R10N2.5N0.625 that reach such a high value at least once during their lifetimes. This is approximately equal to a frequency of 20 instead of 77 km$$^{-2}$$ d$$^{-1}$$, illustrating how an apparently simple statistical quantity like the number density can fluctuate strongly depending on the details of the investigation approach.

A typical lifetime for the investigated dust devil population varies between 240 and 310 s (Fig. 8b). Maximum values fluctuate between 1500 and 2600 s. However, a convergence cannot be identified. The changes from R5N1 to R10N2.5N0.625 are still statistically significant if error bars of a size comparable to those of R10N5 or R10N2.5 are applied to the data of R5N1 and R10N2.5N0.625. Additionally, the results indicate that there is no benefit from performing high-resolution and costly simulations if the mean lifetime is to be estimated. No clear trend in a certain direction is visible. Observational data show that most dust devils last for only a few minutes (Lorenz 2013). In rare cases, a duration of several tens of minutes and even several hours is possible (Balme and Greeley 2006). Consequently, the simulated lifetimes match the observed range for each simulation. The absence of dust devil-like vortices with a duration of several hours might be attributed to the rareness of such events in combination with the idealized setups (e.g., lack of large-scale vorticity or limited simulation time). Note that lifetimes derived from field studies mostly refer to the time during which the vortex is visible. Because our detection thresholds are rather low, simulated vortices would probably not be visible in nature during the whole lifetime. Therefore, our values might have a bias towards larger lifetimes compared to observations.

The mean potential temperature at detection height (Fig. 8d) tends to increase with a reduction in grid width, especially for coarser resolutions of the investigated range. The trend suggests a converged value slightly above 306 K. This corresponds to a mean horizontal temperature difference between the dust devil core and the surroundings of about 3.5 K if the profile data from Fig. 6a are taken as a reference for the ambient conditions. The mean vertical temperature gradient (difference between the mean surface value and the mean value at analysis height) constantly increases with finer resolution and amounts to 0.13 K m$$^{-1}$$ for $$\varDelta =0.625$$ m. According to Balme and Greeley (2006), measured horizontal temperature excursions of less than 10 K are common. The total possible range is quite large and stated as 1–20 K. Sinclair (1973) made measurements at 31 ft (similar to our analysis height) that indicate typical excursions of 2–4 K. Data derived from thermal image velocimetry show a temperature difference of up to 3 K for a single dust devil (Inagaki and Kanda 2022). However, measurement studies often refer to the maximum recorded temperature excursion during the dust devil's lifetime instead of lifetime-averaged values. If we consult our maximum values and average them over the sample, they are 1–2 K higher than the sample mean of the lifetime-averaged temperatures. In R10N2.5N0.625, one dust devil even shows an instantaneous temperature of nearly 311 K, corresponding to a deviation of more than 8 K.
All this suggests that our simulated values fit quite well to reality, at least for resolutions below 2 m. Coarser grids underestimate the temperature jump within dust devils significantly because the super-adiabatic layer close to the ground, from which the heat is sucked into the dust devil's core, is poorly resolved (see also Fig. 6a).

Figure 8e demonstrates the development of the mean absolute pressure drop in the dust devil's core with the grid spacing. In accordance with Giersch et al. (2019), the strength increases with decreasing grid spacing. From a resolution of 1.25 m onwards, the changes are insignificant and a converged value of approximately 14 Pa is reached. A fair comparison to measurements is only possible with the peak pressure drop of a dust devil-like vortex because this is the quantity that best matches what is reported in field studies. For finer resolutions (1.25, 1, and 0.625 m), single dust devils show maximum pressure drops between 200 and 300 Pa. Sample averages of the maximum values vary between 25 and 35 Pa. The coarser resolutions (5 and 10 m) only simulate maximum values below 120 Pa (sample averages between 10 and 20 Pa). Unlike the lifetime-averaged pressure drops, maximum values constantly increase and show no convergence, similar to what has been shown in Fig. 5 for the peak vertical velocity of the model domain. In reality, intense and visible dust devils show pressure excursions of several hundred pascals (Balme and Greeley 2006), which is similar to the results from the high-resolution runs. Pressure measurements recorded by Lorenz and Lanagan (2014) disclose peak pressure dips of convective vortices that may or may not be dust-laden in a range of 20–150 Pa, which suggests smaller intensities compared to Balme and Greeley (2006). This might be due to the fact that Lorenz and Lanagan (2014) derived the pressure drops from fixed stations that were not necessarily inside the dust devil core and from dust devils that were not necessarily dust-laden. All in all, the simulated values for resolutions below 2 m agree well with reality. Otherwise, pressure drops tend to be underestimated. Our numerical simulations would probably produce even stronger vortices if heterogeneities and background winds were considered (Giersch et al. 2019).

Another measure besides the pressure drop that is used to evaluate the vortex strength is the vertical vorticity. Figure 8f visualizes the mean absolute value of the area-averaged vorticity around the center (for the definition see Sect. 2.3) and how it depends on the resolution. Similar to the lifetime, no definite trend is observable and values fluctuate around 0.23 s$$^{-1}$$, which we define as our best vorticity estimate of the regarded population. The sample mean of the maximum area-averaged vorticity during a dust devil's lifetime varies between 0.3 and 0.4 s$$^{-1}$$, depending on the resolution. Single vortices with a very strong rotation even reach instantaneous area-averaged vorticities of approximately 1 s$$^{-1}$$. Higher resolutions tend to produce higher peak values. References to measured vorticities within terrestrial dust devils are rare. Doppler radar measurements by Bluestein et al. (2004) show local core vorticities that range from 0.5 to 1 s$$^{-1}$$, similar to observations by Oncley et al. (2016).
Temporally and tangentially averaged data for one single dust devil show a peak vertical vorticity of 1.8 s$$^{-1}$$ in the core region, which decreases nearly linearly to approximately 0 s$$^{-1}$$ at the core radius of about 12 m (Inagaki and Kanda 2022). All these values indicate typical vorticities of $$\sim$$ 1 s$$^{-1}$$ within the dust devil core. If the vorticity is additionally averaged in a horizontal plane of 20 $$\times$$ 20 m$$^2$$ around that core, results similar to our estimate of 0.23 s$$^{-1}$$ are expected. However, high-resolution runs are not mandatory for obtaining this estimate.

The tangential velocity in Fig. 8g shows a pattern similar to that of $$\overline{|\zeta _\text {av}|}$$. This indicates that the horizontally averaged vorticity around a dust devil-like vortex strongly correlates with the maximum tangential velocity, occurring at a certain distance from the center. The pressure drop does not show this correlation, which could be expected from the cyclostrophic balance. It might be that the pressure drop in the center has less effect on the maximum tangential velocity and more on the overall rotation, which could be better described by integral quantities like the circulation or the horizontally averaged tangential velocity. However, these results are contrary to observations made by Oncley et al. (2016, Fig. 9f), which showed a fairly good cyclostrophic balance for the maximum tangential velocity but a poor one for the averaged tangential velocity. This mismatch with our results might be caused by the different methodologies used to define a vortex. For example, Oncley et al. (2016) fitted the circular vortex structure by eye, whereas we use the pressure distribution to determine the vortex scale (see again Sect. 2.3). Consequently, completely different horizontal vortex surfaces could be determined for the same vortex, which significantly influences the calculation of maximum and averaged tangential velocities. Similar to Oncley et al. (2016), we additionally argue that the radial velocity causes a significant deviation from the cyclostrophic balance and, thus, disturbs a potential correlation between $$\overline{u_{\text {tan}}}$$ and $$\overline{|\pi ^*|}$$. Fluctuations of $$\overline{u_{\text {tan}}}$$ appear in a range of 2.05–2.20 m s$$^{-1}$$ with no distinct trend towards higher or lower values. Therefore, very fine resolutions are not beneficial for this measure. In contrast, instantaneous peak values of 13 m s$$^{-1}$$ are simulated in R10N2.5N0.625, whereas simulation R10 reveals much smaller peak tangential velocities ($$\approx$$ 5 m s$$^{-1}$$). This is also true for the sample averages of the maximum tangential velocity during a vortex lifetime that range from 2.5 to 3.5 m s$$^{-1}$$, depending on the resolution. Smaller grid spacings simulate higher maximum velocities. Based on several measurement studies, Balme and Greeley (2006) stated that the peak tangential component of the wind speed usually reaches 5–10 m s$$^{-1}$$. In extreme cases, up to 20 m s$$^{-1}$$ are possible. Likewise, observations from a fixed array of 31 turbulence sensors demonstrate a maximum tangential velocity of 8.9 m s$$^{-1}$$ (Oncley et al. 2016). Stull (1988) specifies that tangential velocities are on the order of 10 m s$$^{-1}$$. Novel measurements with high spatial and temporal resolution show a tangential velocity component of up to 4.2 m s$$^{-1}$$ for a single dust devil (Inagaki and Kanda 2022).
Because this value represents a velocity averaged along the circular direction and over 40 s, it appears smaller than the other measurements mentioned before. In summary, our numerical simulations with a grid spacing of $$\sim$$ 1 m reproduce the tangential velocity more realistically than, for example, the 10 m run.

The radial velocity converges to 1.35 m s$$^{-1}$$ (see Fig. 8h) if grid spacings below 2 m are applied. A magnitude of about 1.35 m s$$^{-1}$$ agrees very well with the temporally and tangentially averaged measurements from Inagaki and Kanda (2022), which show radial velocities between 1 and 2 m s$$^{-1}$$ along the radial direction of a single dust devil-like vortex. A typical maximum radial velocity that is simulated during a dust devil's lifetime and for resolutions below 2 m is 2 m s$$^{-1}$$, while higher values are reached for higher resolutions. In extreme events and only for resolutions below 2 m, peak radial velocities up to 4 m s$$^{-1}$$ are simulated at certain times. Based on measurements, Sinclair (1973) derived typical maximum radial velocities of $$\sim$$ 5 m s$$^{-1}$$, similar to Kaimal and Businger (1970), whose time series suggest maximum radial velocities between 3 and 6 m s$$^{-1}$$, depending on the measured height. Note that the surface roughness, the main reason for the radial velocity and the deviation from the cyclostrophic balance, is assumed to be 0.1 m in our simulations. A different roughness would produce different values that might fit observations better or worse.

Relative to the changes caused by a modification of the grid spacing, the vertical velocity component shows the smallest significance intervals of all investigated quantities. Over the entire resolution range, the lifetime-averaged vertical velocities almost double (from 1 to 2 m s$$^{-1}$$) with higher values at small grid spacings, whereas the relative statistical uncertainty expressed by the significance interval is just a few per cent. No convergence is reached for the vertical component. This might be related to the potential temperature, which also constantly increases for higher resolutions (see discussion above). The buoyancy and, thus, the strength of updraughts are determined by horizontal temperature differences. The profile data in Fig. 6 indicate a grid-independent reference temperature at 10 m height. If the core region of a dust devil-like vortex becomes warmer for higher resolutions at the same height, horizontal temperature differences increase on average and, consequently, updraughts become stronger. The maximum vertical velocities during the vortex lifetime show sample averages of about 4–5 m s$$^{-1}$$ for 1.25, 1 and 0.625 m grid spacing with peak values of 20 m s$$^{-1}$$. A constant increase with decreasing grid spacing is also evident for the sample-averaged maximum values. The coarsest resolution run R10 reveals a value of only 1.5 m s$$^{-1}$$. Comparisons with field studies suggest that fine resolutions below 2 m are necessary to capture the vertical component realistically. Using a mobile instrumented tower, Sinclair (1973) measured maximum vertical motions of $$\sim$$ 10 m s$$^{-1}$$ in all investigated dust devils and at all regarded height levels (7 ft (2 m), 17 ft (5 m), and 31 ft (9 m)). In-situ wind speed measurements by Kaimal and Businger (1970) also show roughly height-independent magnitudes of the w-component. Peak values of 3 to 4 m s$$^{-1}$$ are reported, similar to measurement data acquired by Fitzjarrald (1973) and Tratt et al.
(2003). According to Balme and Greeley (2006), typical vertical wind speeds are less than 10 m s$$^{-1}$$.

In addition to a comparison with measurements, dust devil data can also be compared to theoretical models like the thermodynamical scaling theory of Rennó et al. (1998). In this theory, the pressure drop and the maximum tangential wind speed across a dust devil can be approximated by:

$$\begin{aligned} \varDelta p \approx \frac{\gamma \eta c_{\text {p}} p_{\infty } \varDelta T}{R T_{\infty }}, \end{aligned}$$

(3)

and:

$$\begin{aligned} v_{\text {m}} \approx \sqrt{\gamma \eta c_{\text {p}} \varDelta T}. \end{aligned}$$

(4)

The variable $$\gamma$$ describes the fraction of the total dissipation of mechanical energy consumed by friction at the surface, with typical values between 0.5 and 1 (Rennó et al. 1998), $$\eta$$ is the thermal efficiency of a heat engine, $$c_{\text {p}}=1005$$ J kg$$^{-1}$$ K$$^{-1}$$ is the specific heat capacity at constant pressure, $$R=287$$ J kg$$^{-1}$$ K$$^{-1}$$ is the gas constant of dry air, $$p_{\infty }$$ is the surface pressure (101,325 Pa in our simulations), $$T_{\infty }$$ is the absolute temperature of the ambient air outside the dust devil determined by the profiles of Fig. 6a, and $$\varDelta T$$ is the effective temperature perturbation. According to Souza et al. (2000) and Kurgansky et al. (2016), $$\eta$$ can be calculated via:

$$\begin{aligned} \eta \approx \frac{g z_{\text {i}}}{{c_{\text {p}} T_{\infty }}}, \end{aligned}$$

(5)

where $$g=9.81$$ m s$$^{-2}$$ is the gravitational acceleration. We use the simplified formulas suggested by Kurgansky et al. (2016). Because this model describes the order of magnitude of the maximum values, we apply it to the peak values of our simulated dust devil statistics at 10 m height. Therefore, $$\varDelta T=T_{\text {m}} - T_{\infty }$$, with $$T_{\text {m}}$$ the peak absolute core temperature of every simulated dust devil sample. Similarly, $$|\pi ^*|_\text {m}$$ and $$u_{\text {tan}_\text {m}}$$ describe the simulated pressure drop and tangential velocity maxima for each sample. Table 3 shows the results. Only for resolutions of approximately 1 m or lower do the simulated peak values ($$|\pi ^*|_\text {m}$$, $$u_{\text {tan}_\text {m}}$$) match the range of the theoretical values ($$\varDelta p$$, $$v_{\text {m}}$$). Otherwise, our simulations underestimate the magnitude, which is in agreement with the comparison to observational studies performed before.

The above discussion demonstrates that vortex properties can vary significantly if the grid spacing is changed and that the concrete resolution dependence differs between the regarded quantities. So far, no general answer can be given to the question of at what resolution the overall statistics are converged. However, resolutions below 2 m show a convergence for most of the analyzed mean quantities and the results fit very well to observations and measurements of real dust devils. Comparisons of the strongest dust devil events with observations and the thermodynamical theory of Rennó et al. (1998) suggest that a resolution of 0.625 m is just above the minimum grid spacing that is required to reach a convergence even of the peak values. This section also indicates that a quantitative comparison between numerical simulations and field experiments must be carefully performed. Different analysis heights might be relevant and lifetime-averaged values must not be mixed with maximum values.
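As an aside, the scaling relations (3)–(5) are straightforward to evaluate numerically. The sketch below uses $$\gamma = 1$$ from the stated 0.5–1 range and placeholder values for $$\varDelta T$$, $$T_{\infty }$$, and $$z_\text {i}$$; it is not a reproduction of Table 3:

```python
def scaling_theory(delta_T, T_inf, z_i, gamma=1.0, c_p=1005.0, R=287.0,
                   p_inf=101325.0, g=9.81):
    """Evaluate the simplified scaling relations (3)-(5): thermal efficiency
    eta, pressure drop delta_p, and maximum tangential wind v_m."""
    eta = g * z_i / (c_p * T_inf)                                 # Eq. (5)
    delta_p = gamma * eta * c_p * p_inf * delta_T / (R * T_inf)   # Eq. (3)
    v_m = (gamma * eta * c_p * delta_T) ** 0.5                    # Eq. (4)
    return eta, delta_p, v_m

# Illustrative call with placeholder inputs (5 K perturbation, 300 K ambient, 1 km PBL):
eta, dp, vm = scaling_theory(delta_T=5.0, T_inf=300.0, z_i=1000.0)
print(round(eta, 3), round(dp, 1), round(vm, 1))
```

For these placeholder inputs the relations give a pressure drop of roughly 190 Pa and a maximum tangential wind of about 13 m s$$^{-1}$$, i.e., the order of magnitude of the strongest simulated vortices.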
Additionally, huge differences of several orders of magnitude partly exist among various measurement data, which makes a comparison to numerical results quite challenging. Also, sometimes only visible, dust-laden vortices are considered in the statistics. In other cases, a pronounced pressure drop is enough to define a dust devil-like vortex. Therefore, it is mandatory to clearly define the characteristics of the regarded dust devil population. Changes in the considered population can cause significant changes in the results. Finally, different boundary conditions (e.g., roughness length, terrain type or slope) and meteorological conditions (e.g., background wind, boundary layer height, heating rate) complicate a comparison between numerical simulations and measurements. Nevertheless, it can be summarized that with grid spacings of less than 2 m, dust devils with radii of more than 5 m are detected with sufficient accuracy. In the following, the three-dimensional flow structure is investigated with respect to grid spacing changes to examine the above resolution suggestion of less than 2 m. However, a detailed and sophisticated flow analysis is not intended.

### 3.3 Three-Dimensional Structure

The study of the grid convergence of the three-dimensional vortex structure is realized by the analysis of instantaneous as well as time-averaged horizontal and vertical cross-sections of selected vortices. Only data from the most durable dust devil-like vortices are considered. These dust devils can be regarded as representative of well-developed and pronounced vertical vortices occurring in the regarded population. The most persistent vortices in R10, R10N5, R10N2.5, R10N2, R5N1.25, R5N1, and R10N2.5N0.625 are tracked for 1829, 1590, 2549, 2207, 2203, 1838, and 1628 s, respectively. The sampling of the vortices' variables ($$p^*$$, $$\theta$$, $$\zeta$$, u, v, and w) uses the same algorithm as presented in Raasch and Franke (2011). It is based on a three-dimensional grid defined in the vortices' centers that moves together with them during their whole lifetime. Instantaneous (at every time step) as well as time-averaged data are stored after the simulation. The averaging procedure is performed during the model run. For each simulation, it is guaranteed that the output volume has at least an extent of 140 $$\times$$ 140 $$\times$$ 100 m$$^3$$ (see cross-section size in Figs. 12 and 13). This is a compromise between the available storage space and memory on the one hand and the need to record the dust devil's main sphere of influence on the other. Note that two identical simulations are required for the three-dimensional analysis. Based on the first run, the dust devil to be examined and its track are identified. The second run uses the center coordinates from the first run to perform the sampling. A rerun of R10N2.5N0.625 was not possible due to the high computational demand associated with it. Therefore, only results for grid spacings between 10 and 1 m are discussed.

#### 3.3.1 Instantaneous Data

This section provides an overview of instantaneous and short-term vortex features that are not visible in time-averaged fields. The main focus is on how these features change with the resolution. Our data reveal a frequent interaction of approaching vortices, independent of the grid spacing. The interaction can result in an intensification, dilution, or maintenance of the original vortex.
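As a side note on the moving output volume described above, the horizontal part of such a vortex-following window can be sketched in a few lines. The array sizes and the periodic wrapping via np.roll are illustrative assumptions, not the PALM implementation:

```python
import numpy as np

def window_around_center(field, ic, jc, half=70):
    """Extract a horizontally periodic window (2*half + 1 points per axis)
    centred on the vortex centre indices (ic, jc) of a 2-D field."""
    rolled = np.roll(np.roll(field, half - ic, axis=0), half - jc, axis=1)
    return rolled[:2 * half + 1, :2 * half + 1]

# Example: a 400 x 400 field at 1 m spacing, vortex centre at (350, 20);
# the returned 141 x 141 window wraps around the cyclic boundaries.
f = np.arange(400 * 400, dtype=float).reshape(400, 400)
print(window_around_center(f, 350, 20).shape)  # (141, 141)
```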
Figure 10 shows an example of how a dust devil-like vortex is maintained and intensified by absorbing another vortex with the same sense of rotation (positive vorticity). Between about 6833 and 6866 s simulated time (Fig. 10a, b), the main vortex in the middle shows perturbation pressures between -37 and -44 Pa, whereas the second and weaker vortex on the right has minimum pressures in the range of -20 to -30 Pa. After the merging, only one strong vortex remains with a maximum pressure drop of more than 50 Pa (Fig. 10d). In Sect. 3.2, the clustered occurrence of dust devils along convergence lines and near vertices of the cellular flow pattern has already suggested a strong interaction between different vortex centers during their lifetimes (see Fig. 7). The results are in agreement with Ohno and Takemi (2010a), who pointed out that most of the strong dust devils are first intensified through the merger of multiple vortices and subsequently maintained and further enhanced by additionally incorporating small-scale vortices. Doppler radar observations by Bluestein et al. (2004) also support a frequent interaction between different dust devil-like vortices.

The decrease of the grid spacing reveals more and more instantaneous fine-scale flow features. Some of them are displayed in Fig. 11. Panels a and b show horizontal snapshots of the vertical velocity and the horizontal wind at detection height for the analyzed vortices taken from simulations R10 and R10N2, respectively. Besides the overestimation of the vortex size in R10 (see also Sects. 3.2 and 3.3.2), downdraughts are apparent in both core regions, surrounded by pronounced positive vertical velocities. Such downdraughts appear to be an inherent feature of all investigated fully developed dust devils, which agrees well with measurements (Balme and Greeley 2006). However, vortices in R10 rarely show these central downdraughts, whereas vortices resolved with finer resolutions reveal a descending motion in or close to the center for most of the time. In the time-averaged dust devil data, upward motions are dominant nearly everywhere (see Sect. 3.3.2). This does not contradict the instantaneous data because the location of the region of descending air changes during the vortex lifetime. Thus, positive and negative vertical velocities alternate in the center (defined by the pressure minimum), still resulting in a positive time-averaged value. Horizontal gradients of w are much more distinct for the smaller grid spacings. The vortex in R10N2 partially shows an increase in the vertical velocity from -5 to 6 m s$$^{-1}$$ over a distance of just 10 m. Observations related to dust devils have already shown the existence of pronounced downdraughts within the core (Kaimal and Businger 1970; Sinclair 1973). The two-cell vortex concept with descending air close to the axis of rotation and upward motions to the side (e.g., Mullen and Maxworthy 1977) also fits our numerical simulations very well.

Figure 11c indicates an extended low pressure region southwest of the dust devil center (x=[-30;0], y=[-40;-20]), which is another distinct flow feature that arises in our simulations independent of the resolution. We refer to this region as a tail-like structure in the following. The most striking pattern, however, is visible in the vertical velocity. Strong descending air is separated from strong upward motions in the same region where the tail in the pressure data is visible. The updraughts are always located closer to the center than the downdraughts.
As far as we know, this dust devil characteristic has not been reported so far, probably because of missing high-resolution three-dimensional data. Nevertheless, a comprehensive analysis is not intended here and would go far beyond the scope of this study.

Finally, Fig. 11d indicates that dust devils can momentarily have several sub-centers that appear, for example, through neighbouring local pressure minima. This supports previous findings of secondary vortices within or around a main dust devil (Bluestein et al. 2004; Zhao et al. 2004; Lorenz et al. 2016; Oncley et al. 2016). Especially Zhao et al. (2004) highlight this issue in their dust-devil-scale simulation with a resolution down to 0.1 m. Pressure contours showed up to eight sub-centers that occur along the annular zone around the center where the strongest radial shear in both the tangential and the axial velocity components exists. In the most persistent vortex from R5N1.25, up to three distinct pressure drops appear next to each other. The number of secondary vortices scales with the resolution. A maximum of two sub-centers is visible in the dust devil data of simulation R10N2.5. For a grid width of 1 m, up to four secondary vortices are observable. However, they seem to be quite unstable and last no more than several tens of seconds, which is why they cannot be seen in the time-averaged fields.

The above discussion shows again that the model resolution is one of the most relevant parameters with respect to LES of dust devils. A definite recommendation of the minimum grid spacing that must be used to capture the instantaneous vortex characteristics realistically is difficult to derive because appropriate field data that contain instantaneous, three-dimensional vortex information with high spatial and temporal resolution rarely exist or are limited to a few dust devil properties only (e.g., Bluestein et al. 2004; Oncley et al. 2016; Inagaki and Kanda 2022). Also, the exact research question of follow-up studies will determine the resolution requirements. Our simulations with grid spacings of 10 and 5 m miss some dust devil features like sub-vortices and they overestimate the sphere of influence significantly. This is why we suggest a minimum resolution below 5 m to qualitatively capture the instantaneous dust devil behaviour realistically.

#### 3.3.2 Time-Averaged Data

A temporal average of quantities within the volume around the vortices' centers reveals typical dust devil features that have already been reported in other field or LES studies (for a comprehensive overview see Reiss et al. 2017). As indicated by horizontal cross-sections in Fig. 12, the surrounding region of a dust devil is characterized by negative pressure excursions with maximum values of the normalized perturbation pressure $$p^*/p^*_\text {min}$$ close to the center. The variable $$p^*_\text {min}$$ describes the minimum pressure drop that occurs in the whole region of 140 $$\times$$ 140 m$$^2$$. Arrows of the horizontal wind vector show that the horizontal convergent flow rotates most strongly just outside the central region. Directly around the center, a flow with a weak radial component pointing away from the core is partly visible. This was already noticed by Sinclair (1973), who stated that there might be a radial outflow within the dust column. Also, Balme and Greeley (2006) illustrate this issue in their Fig. 8.
The potential temperature, the vertical vorticity, and the vertical velocity show distinct maximum values in or close to the center with a pattern similar to the one in Fig. 12. The local vertical vorticity is even more concentrated in the vortex core than the pressure drop and starts to fluctuate randomly around zero at distances of more than 20–30 m from the center (not shown). The mean vertical velocity maxima are mostly located several grid points away from the vortex core, similar to Raasch and Franke (2011, Fig. 8). Independent of the regarded quantity, all cross-sections demonstrate that coarser grid resolutions significantly overestimate the horizontal vortex size and, thereby, the vortex sphere of influence. Grid resolutions of 2.5 m or smaller (Fig. 12c–f) do not alter this sphere anymore, which is in agreement with the more quantitative analysis in Sect. 3.2 (see Fig. 8c).

Temporally averaged vertical cross-sections of the normalized temperature difference $$(\theta -\theta _\text {min})/(\theta _\text {max}-\theta _\text {min})$$ in Fig. 13 indicate the highest temperatures at ground level in the vortex core. The quantities $$\theta _\text {max}$$ and $$\theta _\text {min}$$ depict the maximum and minimum temperature occurring in the analyzed region of 140 $$\times$$ 100 m$$^2$$. $$\varDelta \theta$$ is defined as $$\theta _\text {max}-\theta _\text {min}$$. Temperature gradients tend to increase with smaller grid spacings, which is a consequence of the better resolution of the super-adiabatic layer close to the surface. Throughout the whole displayed vertical range, warmer temperatures are apparent above the central region compared to temperatures outside the vortex core at the same height level. This is caused by the predominant vertical flow, which lifts the super-adiabatic surface layer of warm air. In agreement with the results from Sect. 3.1, the vertical extent of this layer outside the dust devil core is simulated to be very small for resolutions of around 2 m and below, but the layer expands to several tens of metres in height after being sucked into the vortex. A downward flow, which has often been reported (Kaimal and Businger 1970; Sinclair 1973; Balme and Greeley 2006), does not occur in the mean fields. However, reduced positive vertical velocities are visible in the vortex core compared to the values next to the center (not shown), which is in agreement with Raasch and Franke (2011). This is caused by instantaneous central downdraughts (see Sect. 3.3.1). Additionally, strong radial inflow is limited to the lower vortex regions. Air parcels that move near the surface heat up due to the prescribed positive surface heat flux and reach their highest temperatures when they approach the updraft region. Again, the size of the vortices is overestimated for resolutions larger than 2.5 m, which is also supported by vertical cross-sections of $$\zeta /|\zeta |_\text {max}$$ and $$p^*/p^*_\text {min}$$ (not shown).

Isolines of the perturbation pressure in Fig. 13 reveal a minimum of the pressure at about 10 m height, but only for grid spacings of 2 m and less. Such a pronounced and lifted minimum does not occur for coarser resolutions. Because DNS simulations of dust devils in Rayleigh-Bénard convection have also shown this feature (Giersch and Raasch 2021), we interpret it as an inherent characteristic of dust devil-like vortices.
To our knowledge, neither LES studies nor case studies of real dust devils have shown a lifted pressure minimum yet, probably because of a too coarse resolution or the scarcity of three-dimensional measurement data. A simple assumption of a steady circular vortex in cyclostrophic balance can explain this finding. Due to surface friction, maximum rotational velocities occur above the surface and not directly adjacent to it, which results in maximum radial pressure gradients at the same height level and, thus, stronger pressure drops. This situation is first captured at a resolution of 2 m. Also, the mathematical model of whirlwinds by Pandey and Maurya (2017) includes a negative pressure gradient along z, which is especially important for the whirlwind to grow vertically.

To conclude, the mean three-dimensional structure of single dust devils reaches a nearly converged state at resolutions of approximately 2 m. For grid spacings above 2 m, especially the high spatial gradients and the vortex size are not captured appropriately. The overall convergent and spiraling upward flow is realistically simulated for each selected resolution, at least from a purely qualitative perspective.

## 4 Summary and Conclusions

In this study, we numerically investigated atmospheric dust devil-like vortices and their statistical properties. We focused on the resolution-dependent convergence of various dust devil parameters. Simulations were performed with the large-eddy simulation (LES) model PALM. By using the nesting feature of PALM, we explored grid spacings between 10 and 0.625 m within a domain of approximately 4 $$\times$$ 4 $$\times$$ 2 km$$^3$$. This domain and resolution captured the large-scale cellular pattern of the convective planetary boundary layer (PBL), in which dust devils naturally form.

As a first step, we developed a revised and resolution-independent version of the dust devil detection and analysis algorithm of Raasch and Franke (2011) and Giersch et al. (2019). It improved the comparability of dust devil statistics and properties between simulations of different resolutions and facilitated a direct comparison with field measurements and observations. We showed how carefully this algorithm must be designed to adequately capture the natural spatial and temporal vortex scales and highlighted the challenges of creating comparable results between model runs with different resolutions. Also, the analyzed dust devil population for which the derived statistics are valid must be clearly defined. Such a definition should include the detection algorithm itself, the current knowledge about real dust devils and changes in the dust devil physics that follow from a change in model resolution.

Convergence of the general flow in the convective PBL agrees well with previous studies (e.g., Sullivan and Patton 2011; Bopape et al. 2020; Wurps et al. 2020). For the boundary layer interior, grid spacings on the order of 10 m were sufficient to capture the mean flow and low-order moment statistics of the convective boundary layer appropriately. However, properties of the near-surface layer, where dust devils mainly occur, still vary with resolution. Additionally, previous studies never analyzed convergence of peak values, like maximum wind velocities. Although not thoroughly investigated, our results indicated that extrema do not converge. Cursory analysis showed that most of the maxima are located within dust devils at heights below 30 m.
Maxima at these heights are likely related to buoyancy caused by the super-adiabatic layer that is drawn into the cores of dust devils. Due to the non-convergence of the peak values, we concentrated our statistical analysis on mean dust devil properties. These mean properties were averaged over both the whole vortex lifetime and the sample at a height of 10 m. We chose this height to avoid surface interference.

We defined the dust devil population to be investigated as all vortices with lifetimes of at least 120 s and lifetime-averaged radii of at least 5 m. The properties of this population were consistent with the results of field studies if resolutions of 1.25, 1, and 0.625 m were chosen. Therefore, we generally recommend a grid spacing below 2 m if quantitative results are desired from numerical simulations. However, the quantitative comparison between different measurement campaigns and model results remains challenging (e.g., because of poor three-dimensional measurement data or different analysis heights).

With a grid spacing of 1.25 m or below, our results showed a converged dust devil occurrence rate of 19 km$$^{-2}$$ h$$^{-1}$$ or 77 km$$^{-2}$$ d$$^{-1}$$. Typical lifetimes were 4–5 min. The mean converged radius was between 6 and 8 m. Typical temperature excursions were 3–4 K with maxima of approximately 8 K. The mean strength of vortices, rated by the pressure drop at convergence, was approximately 14 Pa. Maximum instantaneous pressure drops of several hundreds of pascals existed in some vortices. The vertical vorticity averaged over a horizontal plane of 20 $$\times$$ 20 m$$^2$$ around the dust devil center typically ranged between 0.1 and 1 s$$^{-1}$$. Finally, converged mean tangential and radial velocities were 2.1 m s$$^{-1}$$ and 1.35 m s$$^{-1}$$, respectively, while maxima were 13 and 4 m s$$^{-1}$$, respectively. The vertical velocity never converged, even for the mean values, which constantly increased from 1 to 2 m s$$^{-1}$$ as the grid spacing decreased. At the finest resolution of 0.625 m, peak vertical velocities of nearly 20 m s$$^{-1}$$ were simulated. A comparison with observations suggested that convergence is expected at resolutions of just less than 0.5 m, which might be affordable within the next few years.

Finally, the analysis of the instantaneous and time-averaged three-dimensional flow structure indicated realistic results for grid spacings of 2.5 m or smaller. With such a resolution, all vortex features are qualitatively captured. This especially includes central downdrafts, sub-vortices and the near-ground convergent flow that becomes a dominant spiraling upward motion near the center. Also, the vertically thin, super-adiabatic layer with high vertical temperature gradients in the dust devil surroundings, which is drawn into and stretched within the core, is captured appropriately. In instantaneous pressure fields, we frequently observe tail-like structures that separate strong updraughts from downdraughts. These structures have not been reported in the current literature.

Follow-up grid convergence studies should focus especially on convergence of extrema. Namely, do extrema converge at resolutions of just less than 0.5 m? These very high-resolution studies could then investigate the small-scale structures of dust devils in more detail. The results presented here are useful for deriving better and more convincing quantitative estimates of dust devil phenomena (e.g., the amount of dust that is typically released by dust devil-like vortices).
It is now clear that results derived from simulations at 10 m grid spacing or larger are not meaningful because pressure drops and wind velocities, which are directly related to turbulent dust emission, are simulated too weakly. In ongoing work, we are currently estimating the sediment fluxes and particle concentrations within dust devils. Generally speaking, this study can be taken as the basis for the resolution requirements that are needed to capture coherent vortex structures like dust devils in future LES studies.
https://www.easycalculation.com/cube-roots-125.html | [
"# What is Cube Root of 125?

125 is said to be a perfect cube because 5 x 5 x 5 is equal to 125. Since 125 is a whole number, it is a perfect cube. The nearest previous perfect cube is 64 and the nearest next perfect cube is 216.

Cube Root of 125: ∛125 = ∛(5 x 5 x 5) = 5"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9479889,"math_prob":0.99962395,"size":403,"snap":"2022-27-2022-33","text_gpt3_token_len":110,"char_repetition_ratio":0.22055137,"word_repetition_ratio":0.95348835,"special_character_ratio":0.32506204,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96474075,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T10:09:00Z\",\"WARC-Record-ID\":\"<urn:uuid:d81f46cc-f9df-450b-ba9b-a90d9a37e7e0>\",\"Content-Length\":\"28492\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9684db40-d5aa-4079-9f49-0df031d2d581>\",\"WARC-Concurrent-To\":\"<urn:uuid:38e1b6b3-4e56-4323-a43a-53045268fd51>\",\"WARC-IP-Address\":\"50.116.14.108\",\"WARC-Target-URI\":\"https://www.easycalculation.com/cube-roots-125.html\",\"WARC-Payload-Digest\":\"sha1:2GUDT4MBEMKRRMQ2S2D5PZBAULXBUANV\",\"WARC-Block-Digest\":\"sha1:NFJ2TDRZMMYUAQ3ETLWDPZJF7VT7HAH2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103940327.51_warc_CC-MAIN-20220701095156-20220701125156-00573.warc.gz\"}"} |
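The perfect-cube check described on the page above can be scripted. Below is a minimal Python sketch (not part of the original page; the function name is illustrative) that finds the exact integer cube root by rounding the floating-point estimate and then verifying the result exactly.

```python
# Minimal sketch: check whether an integer is a perfect cube and find its cube root.
def integer_cube_root(n):
    """Return the exact integer cube root of n, or None if n is not a perfect cube."""
    if n < 0:
        return None
    r = round(n ** (1 / 3))               # floating-point estimate of the cube root
    for candidate in (r - 1, r, r + 1):   # correct for possible rounding error
        if candidate ** 3 == n:
            return candidate
    return None

print(integer_cube_root(125))  # 5, because 5 * 5 * 5 = 125
print(integer_cube_root(64))   # 4, the nearest smaller perfect cube
print(integer_cube_root(216))  # 6, the nearest larger perfect cube
print(integer_cube_root(100))  # None, 100 is not a perfect cube
```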
https://www.slideshare.net/ashvini1991/final-btp-presentation | [
null,
"# Meshless Point Collocation Method For 1D and 2D Groundwater Flow Simulation

902 views

This is my presentation of research on the meshless point collocation method for 1D and 2D groundwater flow simulation.

Published in: Engineering",
null,
"### Meshless Point Collocation Method For 1D and 2D Groundwater Flow Simulation

1. Meshless Point Collocation Method for 1D and 2D Groundwater Flow Simulation. By Ashvini Kumar (10CE31005), under the supervision of Prof. Anirban Dhar.
2. Introduction: Groundwater contamination and soil pollution have been recognized as critical environmental problems; therefore, the flow of groundwater must be analysed. Analysing groundwater flow with analytical methods, with numerical methods such as FEM and FDM, or with the boundary element method is complex. The meshfree (Mfree) method is a powerful numerical technique for obtaining more accurate approximate solutions in a more convenient manner for complex systems. The method establishes the algebraic equations for the whole problem domain without the use of a pre-defined mesh. Different from FDM and FEM, Mfree methods use a set of scattered nodes in the problem domain and on its boundaries. In the present project, an Mfree model is developed for groundwater flow problems in 1D and 2D based on collocation techniques.
3. Literature Review: The types of models used to study groundwater flow can be classified into three categories: sand tank models; analog models (viscous fluid models and electrical models); and mathematical models (analytical models and numerical models).
4. Problem Description: In this project, a meshless method called the Polynomial Point Collocation Method (PPCM) with radial basis functions has been developed for groundwater flow simulation in porous media in one and two dimensions. The developed model has been applied to computing the head distribution in a hypothetical confined aquifer having different boundary conditions and source and sink terms. The developed model is tested against analytical and FEM solutions available in the literature and found to be satisfactory.
5. Development of PPCM: governing equation for the analysis of groundwater flow.
6. PPCM formulation for the 1D transient flow equation for a confined aquifer: the transient flow of groundwater in homogeneous isotropic media in 1D is written in terms of K, the hydraulic conductivity (m/d) of the aquifer, and Sy, the specific yield. To use the Mfree method, the first step is to define the trial solution as a combination of the unknown nodal heads and the shape functions, where n is the total number of nodes in the support domain.
7. PPCM formulation for the 1D transient flow equation (continued): for the MQ-RBF, q = 0.5; substituting the shape functions and their derivatives into the governing equation and rearranging the terms gives the PPCM model for 1D groundwater flow.
8. PPCM formulation for the 2D transient flow equation for a confined aquifer: the transient flow of groundwater in homogeneous isotropic media in 2D is treated analogously. The trial solution is defined first, and the single and double derivatives of the shape functions with respect to x and y are formed. Substituting these into the governing equation and arranging the terms gives a system in which [K1] is the global matrix of the shape functions, [K2] is the global matrix of the double derivatives of the shape functions w.r.t. x, and [K3] is the global matrix of the double derivatives of the shape functions w.r.t. y.
9. Model Development
10. Support Domain and Collocation Point: 1D Model and 2D Model
11. Verification of 1D Model
12. Verification of 1D Model
13. Verification of 2D Model
14. Verification of 2D Model:

| Node No. | Analytical (t = 0.2 days) | FEM (t = 0.2 days) | % Error (Analytical vs FEM) | PPCM (t = 0.2 days) | % Error (Analytical vs PPCM) | FEM (t = 1 day) | PPCM (t = 1 day) | % Difference (FEM vs PPCM) |
|---|---|---|---|---|---|---|---|---|
| 29 | 97.013 | 96.993 | 0.02 | 97.0316 | 0.019 | 97.2316 | 97.2316 | 0.084 |
| 43 | 93.804 | 93.768 | 0.036 | 93.7606 | 0.046 | 94.0059 | 94.0059 | 0.047 |
| 57 | 90.095 | 90.051 | 0.044 | 90.0756 | 0.022 | 90.0245 | 90.0245 | 0.133 |
| 71 | 85.451 | 85.413 | 0.038 | 8 | 0.07 | 85.6105 | 85.6105 | 0.041 |
| 85 | 78.983 | 78.974 | 0.009 | 78.9544 | 0.024 | 78.8544 | 78.8544 | 0.272 |
| 99 | 67.953 | 67.762 | 0.191 | 67.8552 | 0.144 | 67.8212 | 67.8212 | 0.204 |

15. Verification of 2D Model
16. Case Study
17. Case Study:

| Properties | ZONE I | ZONE II | ZONE III |
|---|---|---|---|
| Transmissivity Tx (m2/d) | 500 | 400 | 250 |
| Transmissivity Ty (m2/d) | 300 | 250 | 200 |
| Porosity | 0.2 | 0.25 | 0.15 |

18. Case Study:

| Node No. | FEM | PPCM | % Difference |
|---|---|---|---|
| 21 | 98.2 | 98.149 | 0.0129 |
| 34 | 97.1 | 96.927 | 0.0445 |
| 51 | 95.25 | 95.313 | 0.0165 |

19. Case Study
20. Thank You All"
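A minimal sketch of the collocation idea outlined in slides 6 and 7 above: multiquadric (MQ) radial basis functions serve as shape functions, interior nodes enforce the governing equation, and boundary nodes enforce the prescribed heads. This Python example is not the presentation's actual model; it solves only a steady-state, homogeneous 1D problem, and the node count, shape parameter, and boundary heads are made-up values for illustration.

```python
import numpy as np

# Minimal sketch of 1D steady-state head computation with multiquadric (MQ)
# RBF point collocation (Kansa-type). All parameter values below are
# illustrative and are NOT taken from the presentation.

L_domain, h_left, h_right = 100.0, 100.0, 90.0   # aquifer length (m), fixed heads (m)
N = 21                                           # number of collocation nodes
x = np.linspace(0.0, L_domain, N)
c = 2.0 * (x[1] - x[0])                          # MQ shape parameter (simple heuristic)

def phi(xi, xj):
    """Multiquadric basis: sqrt(r^2 + c^2)."""
    return np.sqrt((xi - xj) ** 2 + c ** 2)

def d2phi_dx2(xi, xj):
    """Second x-derivative of the multiquadric: c^2 / phi^3."""
    return c ** 2 / phi(xi, xj) ** 3

# Build the collocation system A @ coeffs = b:
#   interior rows enforce K * d2h/dx2 = 0 (steady flow; constant K cancels),
#   the first and last rows enforce the Dirichlet head boundary conditions.
XI, XJ = np.meshgrid(x, x, indexing="ij")
A = d2phi_dx2(XI, XJ)
A[0, :] = phi(x[0], x)
A[-1, :] = phi(x[-1], x)
b = np.zeros(N)
b[0], b[-1] = h_left, h_right

coeffs = np.linalg.solve(A, b)
head = phi(XI, XJ) @ coeffs      # trial solution h(x_i) = sum_j phi(x_i, x_j) * coeff_j
print(head[::5])                 # expect a smooth, nearly linear drop from 100 m to 90 m
```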
] | [
null,
"https://image.slidesharecdn.com/finalbtppresentation-141113051845-conversion-gate01/95/meshless-point-collocation-method-for-1d-and-2d-groundwater-flow-simulation-1-638.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88440955,"math_prob":0.8865054,"size":691,"snap":"2020-34-2020-40","text_gpt3_token_len":133,"char_repetition_ratio":0.101892285,"word_repetition_ratio":0.537037,"special_character_ratio":0.18089725,"punctuation_ratio":0.088709675,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98411125,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-09T10:41:15Z\",\"WARC-Record-ID\":\"<urn:uuid:c50c5cee-1d47-4799-9300-69f9b146af77>\",\"Content-Length\":\"173633\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8ddb41bd-66ee-4b33-950d-3571bbf29f26>\",\"WARC-Concurrent-To\":\"<urn:uuid:b5e7d3aa-ac48-478d-b831-7c680a5c02bb>\",\"WARC-IP-Address\":\"108.174.10.19\",\"WARC-Target-URI\":\"https://www.slideshare.net/ashvini1991/final-btp-presentation\",\"WARC-Payload-Digest\":\"sha1:F7ZP4YTUJ4INLPQU73WP5RAHH6L4VLWH\",\"WARC-Block-Digest\":\"sha1:2XOCVX33HZTJGV5NVT73BP7VVPJSBOXU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738552.17_warc_CC-MAIN-20200809102845-20200809132845-00358.warc.gz\"}"} |
https://pockettorch.net/book/nonlinear-functional-analysis-and-its-applications-volume-4-applications-to-mathematical-physics/ | [
"Nonlinear Functional Analysis and its Applications",
null,
"IV: Applications to Mathematical Physics\n\nAuthor: E. Zeidler\n\nPublisher: Springer\n\nISBN:\n\nCategory: Mathematics\n\nPage: 993\n\nView: 171\n\nThe fourth of a five-volume exposition of the main principles of nonlinear functional analysis and its applications to the natural sciences, economics, and numerical analysis. The presentation is self-contained and accessible to the non-specialist, and topics covered include applications to mechanics, elasticity, plasticity, hydrodynamics, thermodynamics, statistical physics, and special and general relativity including cosmology. The book contains a detailed physical motivation of the relevant basic equations and a discussion of particular problems which have played a significant role in the development of physics and through which important mathematical and physical insight may be gained. It combines classical and modern ideas to build a bridge between the language and thoughts of physicists and mathematicians. Many exercises and a comprehensive bibliography complement the text.\n\nNonlinear Functional Analysis and its Applications",
null,
"IV: Applications to Mathematical Physics\n\nAuthor: E. Zeidler\n\nPublisher: Springer\n\nISBN:\n\nCategory: Mathematics\n\nPage: 993\n\nView: 797\n\nThe fourth of a five-volume exposition of the main principles of nonlinear functional analysis and its applications to the natural sciences, economics, and numerical analysis. The presentation is self-contained and accessible to the non-specialist, and topics covered include applications to mechanics, elasticity, plasticity, hydrodynamics, thermodynamics, statistical physics, and special and general relativity including cosmology. The book contains a detailed physical motivation of the relevant basic equations and a discussion of particular problems which have played a significant role in the development of physics and through which important mathematical and physical insight may be gained. It combines classical and modern ideas to build a bridge between the language and thoughts of physicists and mathematicians. Many exercises and a comprehensive bibliography complement the text.\n\nOxford Users' Guide to Mathematics",
null,
"Author: Eberhard Zeidler\n\nPublisher: Oxford University Press\n\nISBN:\n\nPage: 1284\n\nView: 359\n\nThe Oxford Users' Guide to Mathematics is one of the leading handbooks on mathematics available. It presents a comprehensive modern picture of mathematics and emphasises the relations between the different branches of mathematics, and the applications of mathematics in engineering and the natural sciences. The Oxford User's Guide covers a broad spectrum of mathematics starting with the basic material and progressing on to more advanced topics that have come to the fore in the last few decades. The book is organised into mathematical sub-disciplines including analysis, algebra, geometry, foundations of mathematics, calculus of variations and optimisation, theory of probability and mathematical statistics, numerical mathematics and scientific computing, and history of mathematics. The book is supplemented by numerous tables on infinite series, special functions, integrals, integral transformations, mathematical statistics, and fundamental constants in physics. It also includes a comprehensive bibliography of key contemporary literature as well as an extensive glossary and index. The wealth of material, reaching across all levels and numerous sub-disciplines, makes The Oxford User's Guide to Mathematics an invaluable reference source for students of engineering, mathematics, computer science, and the natural sciences, as well as teachers, practitioners, and researchers in industry and academia.\n\nApplied Functional Analysis",
null,
"Applications to Mathematical Physics\n\nAuthor: Eberhard Zeidler\n\nPublisher: Springer Science & Business Media\n\nISBN:\n\nCategory: Mathematics\n\nPage: 481\n\nView: 594\n\nThe first part of a self-contained, elementary textbook, combining linear functional analysis, nonlinear functional analysis, numerical functional analysis, and their substantial applications with each other. As such, the book addresses undergraduate students and beginning graduate students of mathematics, physics, and engineering who want to learn how functional analysis elegantly solves mathematical problems which relate to our real world. Applications concern ordinary and partial differential equations, the method of finite elements, integral equations, special functions, both the Schroedinger approach and the Feynman approach to quantum physics, and quantum statistics. As a prerequisite, readers should be familiar with some basic facts of calculus. The second part has been published under the title, Applied Functional Analysis: Main Principles and Their Applications.\n\nNonlinear Functional Analysis and its Applications",
null,
"I: Fixed-Point Theorems

Author: Eberhard Zeidler

Publisher: Springer

ISBN:

Category: Mathematics

Page: 909

View: 687

The greatest mathematicians, such as Archimedes, Newton, and Gauss, always united theory and applications in equal measure. Felix Klein There exists the remarkable possibility that one can master a subject mathematically, without really understanding its essence. Albert Einstein Don't give us numbers: give us insight! A contemporary natural scientist to a mathematician Numerous questions in physics, chemistry, biology, and economics lead to nonlinear problems; for example, deformation of rods, plates, and shells; behavior of plastic materials; surface waves of fluids; flows around objects in fluids or gases; shock waves in gases; movement of viscous fluids; equilibrium forms of rotating fluids in astrophysics; determination of the shape of the earth through gravitational measurements; behavior of magnetic fields of astrophysical objects; melting processes; chemical reactions; heat radiation; processes in nuclear reactors; nonlinear oscillation in physics, chemistry, and biology; existence and stability of periodic and quasiperiodic orbits in celestial mechanics; stability of physical, chemical, biological, ecological, and economic processes; diffusion processes in physics, chemistry, and biology; processes with entropy production, and self-organization of systems in physics, chemistry, and biology; study of the electrical potential variation in the heart through measurements on the body surface to prevent heart attacks; determining material constants or material laws (e. g.

Applied Nonlinear Functional Analysis",
null,
"An Introduction\n\nAuthor: Nikolaos S. Papageorgiou\n\nPublisher: Walter de Gruyter GmbH & Co KG\n\nISBN:\n\nCategory: Mathematics\n\nPage: 622\n\nView: 384\n\nThe aim of this book is to provide a concise but complete introduction to the main mathematical tools of nonlinear functional analysis, which are also used in the study of concrete problems in economics, engineering, and physics. This volume gathers the mathematical background needed in order to conduct research or to deal with theoretical problems and applications using the tools of nonlinear functional analysis.\n\nQuantum Field Theory II: Quantum Electrodynamics",
null,
"A Bridge between Mathematicians and Physicists

Author: Eberhard Zeidler

Publisher: Springer Science & Business Media

ISBN:

Category: Mathematics

Page: 1101

View: 883

And God said, Let there be light; and there was light. Genesis 1,3 Light is not only the basis of our biological existence, but also an essential source of our knowledge about the physical laws of nature, ranging from the seventeenth century geometrical optics up to the twentieth century theory of general relativity and quantum electrodynamics. Folklore Don't give us numbers: give us insight! A contemporary natural scientist to a mathematician The present book is the second volume of a comprehensive introduction to the mathematical and physical aspects of modern quantum field theory, which comprehends the following six volumes: Volume I: Basics in Mathematics and Physics Volume II: Quantum Electrodynamics Volume III: Gauge Theory Volume IV: Quantum Mathematics Volume V: The Physics of the Standard Model Volume VI: Quantum Gravitation and String Theory. It is our goal to build a bridge between mathematicians and physicists based on the challenging question about the fundamental forces in • macrocosmos (the universe) and • microcosmos (the world of elementary particles). The six volumes address a broad audience of readers, including both undergraduate and graduate students, as well as experienced scientists who want to become familiar with quantum field theory, which is a fascinating topic in modern mathematics and physics.

Applications of functional analysis in mathematical physics",
null,
"Author: Sergeĭ Lʹvovich Sobolev\n\nPublisher:\n\nISBN:\n\nCategory: Mathematics\n\nPage: 239\n\nView: 686\n\nMethods of Modern Mathematical Physics: Functional analysis",
null,
"Author: Michael Reed\n\nPublisher: Gulf Professional Publishing\n\nISBN:\n\nCategory: Science\n\nPage: 400\n\nView: 633\n\nThis book is the first of a multivolume series devoted to an exposition of functional analysis methods in modern mathematical physics. It describes the fundamental principles of functional analysis and is essentially self-contained, although there are occasional references to later volumes. We have included a few applications when we thought that they would provide motivation for the reader. Later volumes describe various advanced topics in functional analysis and give numerous applications in classical physics, modern physics, and partial differential equations.\n\nFunctional Analysis and Applications",
null,
"Author: Abul Hasan Siddiqi\n\nPublisher: Springer\n\nISBN:\n\nCategory: Mathematics\n\nPage: 562\n\nView: 151\n\nThis self-contained textbook discusses all major topics in functional analysis. Combining classical materials with new methods, it supplies numerous relevant solved examples and problems and discusses the applications of functional analysis in diverse fields. The book is unique in its scope, and a variety of applications of functional analysis and operator-theoretic methods are devoted to each area of application. Each chapter includes a set of problems, some of which are routine and elementary, and some of which are more advanced. The book is primarily intended as a textbook for graduate and advanced undergraduate students in applied mathematics and engineering. It offers several attractive features making it ideally suited for courses on functional analysis intended to provide a basic introduction to the subject and the impact of functional analysis on applied and computational mathematics, nonlinear functional analysis and optimization. It introduces emerging topics like wavelets, Gabor system, inverse problems and application to signal and image processing."
] | [
null,
"https://books.google.com/books/content",
null,
"https://books.google.com/books/content",
null,
"https://books.google.com/books/content",
null,
"https://books.google.com/books/content",
null,
"https://books.google.com/books/content",
null,
"https://books.google.com/books/content",
null,
"https://books.google.com/books/content",
null,
"https://books.google.com/books/content",
null,
"https://books.google.com/books/content",
null,
"https://books.google.com/books/content",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92158777,"math_prob":0.6191134,"size":8999,"snap":"2019-26-2019-30","text_gpt3_token_len":1551,"char_repetition_ratio":0.15619789,"word_repetition_ratio":0.20684169,"special_character_ratio":0.16279587,"punctuation_ratio":0.12068965,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9726259,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-25T09:25:13Z\",\"WARC-Record-ID\":\"<urn:uuid:5fdddb82-6440-41f1-a080-990ad019b25f>\",\"Content-Length\":\"36105\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b05115b-6af2-4c4e-9eef-9fbc080d5b9d>\",\"WARC-Concurrent-To\":\"<urn:uuid:ee59b965-1ef9-44bc-b42d-99cb8984a513>\",\"WARC-IP-Address\":\"104.27.177.181\",\"WARC-Target-URI\":\"https://pockettorch.net/book/nonlinear-functional-analysis-and-its-applications-volume-4-applications-to-mathematical-physics/\",\"WARC-Payload-Digest\":\"sha1:XWJ2BGGLCR2DHG3JNJOICT3F2F6FTOBC\",\"WARC-Block-Digest\":\"sha1:PHRS6ADRG4BAOQOOEJY6RB7WLARC3TOM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999817.30_warc_CC-MAIN-20190625092324-20190625114324-00010.warc.gz\"}"} |
https://www.lexico.com/definition/confidence_interval | [
"# Meaning of confidence interval in English:\n\n## confidence interval\n\n### noun\n\nStatistics\n• A range of values so defined that there is a specified probability that the value of a parameter lies within it.\n\n• ‘Furthermore, the confidence interval of the parameters and the correlation among them were both very low, indicating stability of the estimations.’\n• ‘The confidence interval of fit parameters was either determined from the covariance matrix or by F-test analysis.’\n• ‘The confidence interval was defined as a drop of 1 in the LOD score.’\n• ‘This significance was lost for a third exacerbation, possibly because of the small numbers; the resulting confidence interval is very wide.’\n• ‘It is clear from the size of the confidence interval that these data do not imply strong differences between the three groups.’"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9008391,"math_prob":0.8416778,"size":921,"snap":"2020-10-2020-16","text_gpt3_token_len":181,"char_repetition_ratio":0.17884405,"word_repetition_ratio":0.0,"special_character_ratio":0.19001086,"punctuation_ratio":0.077419356,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95984256,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-28T13:42:08Z\",\"WARC-Record-ID\":\"<urn:uuid:dc1cb809-d8d6-4804-abca-5f97f8025d51>\",\"Content-Length\":\"69852\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ddd588b-71eb-49e2-99d0-87fedfc9b3cb>\",\"WARC-Concurrent-To\":\"<urn:uuid:cc2b30db-b9f9-40bf-b75e-574af4ced97e>\",\"WARC-IP-Address\":\"151.101.250.133\",\"WARC-Target-URI\":\"https://www.lexico.com/definition/confidence_interval\",\"WARC-Payload-Digest\":\"sha1:JMDB63JHA3QA6M4INUUHANOQ42OXI35H\",\"WARC-Block-Digest\":\"sha1:6G4YPKG4BPOVVEP3LVRFOYZ332RJKXIK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875147154.70_warc_CC-MAIN-20200228104413-20200228134413-00205.warc.gz\"}"} |
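The dictionary definition above can be made concrete with a short script. The following Python sketch (not part of the dictionary entry) builds a 95% confidence interval for a sample mean using the t-distribution; the data values are invented and SciPy is assumed to be available.

```python
# Minimal sketch: a 95% confidence interval for a sample mean via the t-distribution.
import math
import statistics
from scipy import stats   # SciPy is assumed to be available

sample = [4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0]   # made-up observations
n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)        # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)                # two-sided 95% critical value
lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"95% CI for the mean: ({lower:.2f}, {upper:.2f})")
```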
https://www.boost.org/doc/libs/1_48_0/libs/geometry/doc/html/geometry/reference/adapted/c_array.html | [
null,
"Boost C++ Libraries

...one of the most highly regarded and expertly designed C++ library projects in the world.

#### C array

C arrays are adapted to the Boost.Geometry point concept

###### Description

C arrays of arithmetic types, such as double or int, are (optionally) adapted to the Boost.Geometry point concept. They can therefore be used in many Boost.Geometry algorithms.

Note that a C array cannot be the point type of a linestring or a polygon. The reason for that is that a std::vector does not allow containing C arrays (this is not related to Boost.Geometry). The C array is therefore limited to the point type.

###### Model of

Point Concept

###### Header

`#include <boost/geometry/geometries/adapted/c_array.hpp>`

The standard header `boost/geometry.hpp` does not include this header.

###### Example

Small example showing the combination of an array with a Boost.Geometry algorithm

```#include <iostream>
#include <boost/geometry.hpp>

BOOST_GEOMETRY_REGISTER_C_ARRAY_CS(cs::cartesian)

int main()
{
    int a[3] = {1, 2, 3};
    int b[3] = {2, 3, 4};

    std::cout << boost::geometry::distance(a, b) << std::endl;

    return 0;
}
```

Output:

```1.73205
```"
] | [
null,
"https://www.boost.org/gfx/space.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7874009,"math_prob":0.7888338,"size":1125,"snap":"2022-27-2022-33","text_gpt3_token_len":290,"char_repetition_ratio":0.14362177,"word_repetition_ratio":0.012903226,"special_character_ratio":0.25244445,"punctuation_ratio":0.19650654,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9764683,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T13:30:00Z\",\"WARC-Record-ID\":\"<urn:uuid:4965107a-0b11-423d-87a6-6ec0f0d1ca21>\",\"Content-Length\":\"9793\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:29ef7431-3040-4fb6-99ff-ab47be7cc724>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6457e9a-3ecd-4e89-a403-a2a467fa292d>\",\"WARC-IP-Address\":\"146.20.110.251\",\"WARC-Target-URI\":\"https://www.boost.org/doc/libs/1_48_0/libs/geometry/doc/html/geometry/reference/adapted/c_array.html\",\"WARC-Payload-Digest\":\"sha1:7KURCPR5PJ7FJHNA6S4G2MJHAY3COYDP\",\"WARC-Block-Digest\":\"sha1:7LAZ5QXUIS4NLIKKHQDNNOTM7MUGFVQ4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104672585.89_warc_CC-MAIN-20220706121103-20220706151103-00409.warc.gz\"}"} |
https://www.harrisgeospatial.com/docs/idlgrmodel__scale.html | [
"### IDLgrModel::Scale

The IDLgrModel::Scale procedure method scales the model by the specified scaling factors. IDL computes and maintains the resulting transform matrix in double-precision floating-point.

## Syntax

Obj->[IDLgrModel::]Scale, Sx, Sy, Sz [, /PREMULTIPLY]

## Arguments

### Sx, Sy, Sz

The floating-point scaling factors in the x, y, and z dimensions by which the model is to be scaled. For example, specifying a value of 0.5 for Sx scales the model to 50% of its original size in the x dimension.

Note: Because scale factors are multiplied together with the three-dimensional transformation matrix, specifying negative values for Sx, Sy, or Sz has the effect of reversing the corresponding axis. For example, specifying a value of -1.0 for Sy leaves the scale unchanged at 100%, but flips the model around the y axis.

## Keywords

### PREMULTIPLY

Set this keyword to cause the scaling matrix specified by Sx, Sy, Sz to be pre-multiplied to the model's transformation matrix. By default, the scaling matrix is post-multiplied.

## Example

Changing the scale factors used by the IDLgrModel object is one way to change the “zoom” of a displayed graphic.

First, we create an IDLgrWindow containing a simple plot:

`mywindow = OBJ_NEW('IDLgrWindow', RETAIN=2)`
`myview = OBJ_NEW('IDLgrView', VIEWPLANE_RECT=[-10,-10,20,20])`
`mymodel = OBJ_NEW('IDLgrModel')`
`x = (FINDGEN(21) / 10.0 - 1.0) * 10.0`
`y = RANDOMU(seed, 21) * 10.0 - 5.0`
`myplot1 = OBJ_NEW('IDLgrPlot', x, y)`
`myview->Add, mymodel`
`mymodel->Add, myplot1`
`mywindow->Draw, myview`

Next, change the scale of the x dimension to 80% of the current value, leaving the scale in the y and z dimensions unchanged:

`mymodel->Scale, .8, 1.0, 1.0`
`mywindow->Draw, myview`

When you have finished with the example, destroy the IDLgrWindow and IDLgrView objects:

`OBJ_DESTROY, mywindow`
`OBJ_DESTROY, myview`

## Version History

5: Introduced"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67103505,"math_prob":0.9340959,"size":1884,"snap":"2020-10-2020-16","text_gpt3_token_len":520,"char_repetition_ratio":0.12074468,"word_repetition_ratio":0.01438849,"special_character_ratio":0.24893843,"punctuation_ratio":0.18181819,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9867836,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-24T17:02:09Z\",\"WARC-Record-ID\":\"<urn:uuid:f6cbda25-de9a-46dc-8bd2-7a185eb66a5c>\",\"Content-Length\":\"58798\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ce47535d-0ba0-44af-b401-fac3191370fc>\",\"WARC-Concurrent-To\":\"<urn:uuid:ee789354-732b-4dd0-886e-abb2591ead50>\",\"WARC-IP-Address\":\"192.5.156.19\",\"WARC-Target-URI\":\"https://www.harrisgeospatial.com/docs/idlgrmodel__scale.html\",\"WARC-Payload-Digest\":\"sha1:OG5TVCFDOCFXYSQOWIHOAVFITQIV7PGD\",\"WARC-Block-Digest\":\"sha1:CWHDEUUY7ZGYKRDVGZKVSMSQH7BMIAIA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145966.48_warc_CC-MAIN-20200224163216-20200224193216-00161.warc.gz\"}"} |
https://www.colorhexa.com/56ed98 | [
"# #56ed98 Color Information\n\nIn a RGB color space, hex #56ed98 is composed of 33.7% red, 92.9% green and 59.6% blue. Whereas in a CMYK color space, it is composed of 63.7% cyan, 0% magenta, 35.9% yellow and 7.1% black. It has a hue angle of 146.2 degrees, a saturation of 80.7% and a lightness of 63.3%. #56ed98 color hex could be obtained by blending #acffff with #00db31. Closest websafe color is: #66ff99.\n\n• R 34\n• G 93\n• B 60\nRGB color chart\n• C 64\n• M 0\n• Y 36\n• K 7\nCMYK color chart\n\n#56ed98 color description : Soft cyan - lime green.\n\n# #56ed98 Color Conversion\n\nThe hexadecimal color #56ed98 has RGB values of R:86, G:237, B:152 and CMYK values of C:0.64, M:0, Y:0.36, K:0.07. Its decimal value is 5696920.\n\nHex triplet RGB Decimal 56ed98 `#56ed98` 86, 237, 152 `rgb(86,237,152)` 33.7, 92.9, 59.6 `rgb(33.7%,92.9%,59.6%)` 64, 0, 36, 7 146.2°, 80.7, 63.3 `hsl(146.2,80.7%,63.3%)` 146.2°, 63.7, 92.9 66ff99 `#66ff99`\nCIE-LAB 84.386, -58.67, 29.7 39.787, 64.81, 40.117 0.275, 0.448, 64.81 84.386, 65.76, 153.151 84.386, -62.845, 51.348 80.505, -52.666, 26.808 01010110, 11101101, 10011000\n\n# Color Schemes with #56ed98\n\n• #56ed98\n``#56ed98` `rgb(86,237,152)``\n• #ed56ab\n``#ed56ab` `rgb(237,86,171)``\nComplementary Color\n• #60ed56\n``#60ed56` `rgb(96,237,86)``\n• #56ed98\n``#56ed98` `rgb(86,237,152)``\n• #56ede4\n``#56ede4` `rgb(86,237,228)``\nAnalogous Color\n• #ed5660\n``#ed5660` `rgb(237,86,96)``\n• #56ed98\n``#56ed98` `rgb(86,237,152)``\n• #e456ed\n``#e456ed` `rgb(228,86,237)``\nSplit Complementary Color\n• #ed9856\n``#ed9856` `rgb(237,152,86)``\n• #56ed98\n``#56ed98` `rgb(86,237,152)``\n• #9856ed\n``#9856ed` `rgb(152,86,237)``\n• #abed56\n``#abed56` `rgb(171,237,86)``\n• #56ed98\n``#56ed98` `rgb(86,237,152)``\n• #9856ed\n``#9856ed` `rgb(152,86,237)``\n• #ed56ab\n``#ed56ab` `rgb(237,86,171)``\n• #18df6f\n``#18df6f` `rgb(24,223,111)``\n• #28e87c\n``#28e87c` `rgb(40,232,124)``\n• #3feb8a\n``#3feb8a` `rgb(63,235,138)``\n• #56ed98\n``#56ed98` `rgb(86,237,152)``\n• #6defa6\n``#6defa6` `rgb(109,239,166)``\n• #84f2b4\n``#84f2b4` `rgb(132,242,180)``\n• #9bf4c2\n``#9bf4c2` `rgb(155,244,194)``\nMonochromatic Color\n\n# Alternatives to #56ed98\n\nBelow, you can see some colors close to #56ed98. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #56ed72\n``#56ed72` `rgb(86,237,114)``\n• #56ed7f\n``#56ed7f` `rgb(86,237,127)``\n• #56ed8b\n``#56ed8b` `rgb(86,237,139)``\n• #56ed98\n``#56ed98` `rgb(86,237,152)``\n• #56eda5\n``#56eda5` `rgb(86,237,165)``\n• #56edb1\n``#56edb1` `rgb(86,237,177)``\n• #56edbe\n``#56edbe` `rgb(86,237,190)``\nSimilar Colors\n\n# #56ed98 Preview\n\nThis text has a font color of #56ed98.\n\n``<span style=\"color:#56ed98;\">Text here</span>``\n#56ed98 background color\n\nThis paragraph has a background color of #56ed98.\n\n``<p style=\"background-color:#56ed98;\">Content here</p>``\n#56ed98 border color\n\nThis element has a border color of #56ed98.\n\n``<div style=\"border:1px solid #56ed98;\">Content here</div>``\nCSS codes\n``.text {color:#56ed98;}``\n``.background {background-color:#56ed98;}``\n``.border {border:1px solid #56ed98;}``\n\n# Shades and Tints of #56ed98\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #010804 is the darkest color, while #f6fef9 is the lightest one.\n\n• #010804\n``#010804` `rgb(1,8,4)``\n• #031a0d\n``#031a0d` `rgb(3,26,13)``\n• #052c16\n``#052c16` `rgb(5,44,22)``\n• #073d1f\n``#073d1f` `rgb(7,61,31)``\n• #084f27\n``#084f27` `rgb(8,79,39)``\n• #0a6130\n``#0a6130` `rgb(10,97,48)``\n• #0c7339\n``#0c7339` `rgb(12,115,57)``\n• #0e8442\n``#0e8442` `rgb(14,132,66)``\n• #10964b\n``#10964b` `rgb(16,150,75)``\n• #12a853\n``#12a853` `rgb(18,168,83)``\n• #14ba5c\n``#14ba5c` `rgb(20,186,92)``\n• #16cb65\n``#16cb65` `rgb(22,203,101)``\n• #18dd6e\n``#18dd6e` `rgb(24,221,110)``\n• #21e778\n``#21e778` `rgb(33,231,120)``\n• #33e982\n``#33e982` `rgb(51,233,130)``\n• #44eb8d\n``#44eb8d` `rgb(68,235,141)``\n• #56ed98\n``#56ed98` `rgb(86,237,152)``\n• #68efa3\n``#68efa3` `rgb(104,239,163)``\n• #79f1ae\n``#79f1ae` `rgb(121,241,174)``\n• #8bf3b8\n``#8bf3b8` `rgb(139,243,184)``\n• #9df5c3\n``#9df5c3` `rgb(157,245,195)``\n• #aff6ce\n``#aff6ce` `rgb(175,246,206)``\n• #c0f8d9\n``#c0f8d9` `rgb(192,248,217)``\n• #d2fae4\n``#d2fae4` `rgb(210,250,228)``\n• #e4fcee\n``#e4fcee` `rgb(228,252,238)``\n• #f6fef9\n``#f6fef9` `rgb(246,254,249)``\nTint Color Variation\n\n# Tones of #56ed98\n\nA tone is produced by adding gray to any pure hue. In this case, #9ea5a1 is the less saturated color, while #48fb96 is the most saturated one.\n\n• #9ea5a1\n``#9ea5a1` `rgb(158,165,161)``\n• #97aca0\n``#97aca0` `rgb(151,172,160)``\n• #90b39f\n``#90b39f` `rgb(144,179,159)``\n• #88bb9e\n``#88bb9e` `rgb(136,187,158)``\n• #81c29d\n``#81c29d` `rgb(129,194,157)``\n• #7ac99d\n``#7ac99d` `rgb(122,201,157)``\n• #73d09c\n``#73d09c` `rgb(115,208,156)``\n• #6cd79b\n``#6cd79b` `rgb(108,215,155)``\n• #64df9a\n``#64df9a` `rgb(100,223,154)``\n• #5de699\n``#5de699` `rgb(93,230,153)``\n• #56ed98\n``#56ed98` `rgb(86,237,152)``\n• #4ff497\n``#4ff497` `rgb(79,244,151)``\n• #48fb96\n``#48fb96` `rgb(72,251,150)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #56ed98 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
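The RGB, HSL, and CMYK figures quoted at the top of this page can be cross-checked with a short script. The following Python sketch (not part of the ColorHexa page) uses only the standard library; the CMYK values come from the naive device-independent formula, so they match the page only approximately, and the snippet assumes the color is not pure black.

```python
# Minimal sketch: reproduce the RGB -> HSL and RGB -> CMYK figures for #56ed98.
import colorsys

r, g, b = 0x56 / 255, 0xED / 255, 0x98 / 255            # 86, 237, 152
h, l, s = colorsys.rgb_to_hls(r, g, b)                   # note: colorsys returns H, L, S
print(f"RGB: {r:.1%} {g:.1%} {b:.1%}")                   # ~33.7%, 92.9%, 59.6%
print(f"HSL: {h * 360:.1f} deg, {s:.1%}, {l:.1%}")       # ~146.2 deg, 80.7%, 63.3%

k = 1 - max(r, g, b)                                     # naive CMYK (no ICC profile)
c = (1 - r - k) / (1 - k)                                # assumes color is not pure black
m = (1 - g - k) / (1 - k)
y = (1 - b - k) / (1 - k)
print(f"CMYK: {c:.1%} {m:.1%} {y:.1%} {k:.1%}")          # ~63.7%, 0%, 35.9%, 7.1%
```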
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5175273,"math_prob":0.61289024,"size":3719,"snap":"2020-34-2020-40","text_gpt3_token_len":1654,"char_repetition_ratio":0.12113055,"word_repetition_ratio":0.011049724,"special_character_ratio":0.55364347,"punctuation_ratio":0.23756906,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.979747,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-06T20:10:21Z\",\"WARC-Record-ID\":\"<urn:uuid:67042930-aaa1-4e03-8393-73c55a30e16a>\",\"Content-Length\":\"36325\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f57f0f79-6e15-40bc-a06c-4b20646e455b>\",\"WARC-Concurrent-To\":\"<urn:uuid:3d143fa6-f8ff-41f4-b3ed-4110f0b14bd7>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/56ed98\",\"WARC-Payload-Digest\":\"sha1:5YUYWSA6DE7ETIMT3PHVUTBV6G2TKMBV\",\"WARC-Block-Digest\":\"sha1:A2C4VJP5DKIC5QFONOTYOOYZEHXHWNHN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737019.4_warc_CC-MAIN-20200806180859-20200806210859-00385.warc.gz\"}"} |