Source: https://thatsmaths.com/2018/04/05/fouriers-wonderful-idea-ii/
### Fourier’s Wonderful Idea – II

Solving PDEs by a Roundabout Route
Joseph Fourier (1768-1830)

Joseph Fourier, born just 250 years ago, introduced a wonderful idea that revolutionized science and mathematics: any function or signal can be broken down into simple periodic sine-waves. Radio waves, micro-waves, infra-red radiation, visible light, ultraviolet light, X-rays and gamma rays are all forms of electromagnetic radiation, differing only in frequency [TM136 or search for “thatsmaths” at irishtimes.com].

The ability to break down signals into components of different frequencies is very useful. Examples include tuning a radio to a specific station, using ultrasound to examine a developing foetus, and extracting a single phone conversation from a complex combination of inputs.

#### Solving PDEs

The equations that govern many physical processes were formulated in the seventeenth century but, although many of them were linear, having the output proportional to the input, nobody knew how to solve them. These partial differential equations remained intractable for a hundred years or more. Around 1807, Joseph Fourier developed a powerful method of expressing the solutions as combinations of simple trigonometric functions, thereby solving the equations. His ideas had a profound effect, and now permeate all areas of mathematical physics and engineering.

#### The Essential Idea

Fourier showed that (almost) any periodic function can be expressed as a sum of sine and cosine functions. This allowed him to solve many of the equations of physics. Fourier analysis also turned out to be an ideal language for quantum mechanics. Fourier showed that functions that are not periodic, but that become small rapidly outside a bounded range, can be analyzed into periodic components using what is now called the Fourier transform. This transform is like a mathematical prism, breaking up inputs into constituents with different frequencies, just as a prism splits light into colours having different wavelengths.

#### Solving Problems by a Roundabout Route

Partial differential equations are solved by Fourier analysis via a “detour” from physical space (or the time domain) to Fourier space (or the frequency domain), where the solution is simple, and then returning to physical space by an inverse transformation of the solution. We might compare this to the problem of calculating with Roman numerals: what is LXXXVI multiplied by XLI? Roman accountants and book-keepers had various tables to help them, but it was still a cumbersome process.

With the Hindu-Arabic numerals to hand, there is an easier way: we convert the two numbers to the “decimal domain”, LXXXVI = 86 and XLI = 41. Then we multiply to get 3526 and convert this back to the “Roman domain” to get MMMDXXVI.

Partial differential equations are solved in an analogous manner: we transform from the time domain, where all the components are entangled, to the frequency domain where they are distinguishable, solve for each component, and transform back to the time domain to get the solution.

#### Faster Computation

With the development of computers it became possible to analyze complicated signals using Fourier analysis. But the amount of calculation grows rapidly with the signal length: computing a transform of a signal of length N directly takes on the order of N² operations. The Fast Fourier Transform (FFT), developed around 1965, provided a solution to this problem. By reducing the cost to the order of N log N operations, it made it practicable to solve many problems that could not otherwise be tackled.

An article on the Fast Fourier Transform will follow later.
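The “detour” described above can be seen in a few lines of code. The sketch below is a minimal illustration in plain Python (a naive O(N²) discrete Fourier transform rather than the FFT; the function names `dft` and `idft` are ours): it builds a signal from two sine waves, identifies their frequencies in the frequency domain, and transforms back to recover the original signal.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform: time domain -> frequency domain."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse transform: frequency domain -> back to the time domain."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N = 64
# A signal made of two sine waves: 3 and 10 cycles per window.
signal = [math.sin(2 * math.pi * 3 * n / N) + 0.5 * math.sin(2 * math.pi * 10 * n / N)
          for n in range(N)]

spectrum = dft(signal)
# The "prism": the energy concentrates in the bins for 3 and 10 cycles.
peaks = sorted(k for k in range(N // 2) if abs(spectrum[k]) > 1.0)
# The inverse transform returns us to the time domain.
recovered = idft(spectrum)
```

Here `peaks` comes out as `[3, 10]`, and `recovered` matches the original signal to within floating-point rounding; in a PDE solver, one would additionally manipulate `spectrum` component by component before inverting.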
[Figure: https://thatsmaths.files.wordpress.com/2018/04/fourier-3.jpg]
---

Source: https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2-1.21/share/doc/Macaulay2/GKMVarieties/html/_make__K__Class_lp__G__K__M__Variety_cm__Flag__Matroid_rp.html
# makeKClass(GKMVariety,FlagMatroid) -- the equivariant K-class of a flag matroid

## Description

A flag matroid whose constituent matroids have ranks $r_1, \ldots, r_k$ and ground set size $n$ defines a KClass on the (partial) flag variety $Fl(r_1,\ldots, r_k;n)$. When the flag matroid arises from a matrix representing a point on the (partial) flag variety, this equivariant K-class coincides with that of the structure sheaf of its torus orbit closure. See [CDMS18] or [DES20].

```
i1 : X = generalizedFlagVariety("A",2,{1,2})

o1 = a "GKM variety" with an action of a 3-dimensional torus

o1 : GKMVariety

i2 : A = matrix{{1,2,3},{0,2,3}}

o2 = | 1 2 3 |
     | 0 2 3 |

              2        3
o2 : Matrix ZZ  <--- ZZ

i3 : FM = flagMatroid(A,{1,2})

o3 = a "flag matroid" with rank sequence {1, 2} on 3 elements

o3 : FlagMatroid

i4 : C1 = makeKClass(X,FM)

o4 = an "equivariant K-class" on a GKM variety

o4 : KClass

i5 : C2 = orbitClosure(X,A)

o5 = an "equivariant K-class" on a GKM variety

o5 : KClass

i6 : C1 === C2

o6 = true
```
---

Source: https://aws-amplify.github.io/aws-sdk-ios/docs/reference/AWSTextract/Classes/AWSTextractPoint.html
# AWSTextractPoint

Objective-C

```
@interface AWSTextractPoint
```

Swift

```
class AWSTextractPoint
```

The X and Y coordinates of a point on a document page. The X and Y values that are returned are ratios of the overall document page size. For example, if the input document is 700 x 200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the document page.

An array of `Point` objects, `Polygon`, is returned by DetectDocumentText. `Polygon` represents a fine-grained polygon around detected text. For more information, see Geometry in the Amazon Textract Developer Guide.

## `X`

The value of the X coordinate for a point on a `Polygon`.

#### Declaration

Objective-C

```
@property (nonatomic, strong) NSNumber *_Nullable X;
```

Swift

```
var x: NSNumber? { get set }
```

## `Y`

The value of the Y coordinate for a point on a `Polygon`.

#### Declaration

Objective-C

```
@property (nonatomic, strong) NSNumber *_Nullable Y;
```

Swift

```
var y: NSNumber? { get set }
```
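Because the returned coordinates are ratios of the page size, converting a point to pixel coordinates is a single multiplication per axis. A minimal sketch (the helper name `point_to_pixels` is ours, not part of the SDK):

```python
def point_to_pixels(x_ratio, y_ratio, page_width, page_height):
    """Scale Textract's normalized point coordinates to pixel coordinates."""
    return (x_ratio * page_width, y_ratio * page_height)

# The example from the description: a 700 x 200 page with X=0.5, Y=0.25.
print(point_to_pixels(0.5, 0.25, 700, 200))  # → (350.0, 50.0)
```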
---

Source: http://pike.lysator.liu.se/docs/ietf/rfc/13/rfc1320.xml
Network Working Group
Request for Comments: 1320
Obsoletes: RFC 1186
R. Rivest
MIT Laboratory for Computer Science
and RSA Data Security, Inc.
April 1992

# The MD4 Message-Digest Algorithm

## Status of this Memo

This memo provides information for the Internet community. It does not specify an Internet standard. Distribution of this memo is unlimited.

## Acknowledgements

We would like to thank Don Coppersmith, Burt Kaliski, Ralph Merkle, and Noam Nisan for numerous helpful comments and suggestions.

## Table of Contents

```
1. Executive Summary
2. Terminology and Notation
3. MD4 Algorithm Description
4. Summary
References
APPENDIX A - Reference Implementation
Security Considerations
```

## 1. Executive Summary

This document describes the MD4 message-digest algorithm [1]. The algorithm takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input. It is conjectured that it is computationally infeasible to produce two messages having the same message digest, or to produce any message having a given prespecified target message digest. The MD4 algorithm is intended for digital signature applications, where a large file must be "compressed" in a secure manner before being encrypted with a private (secret) key under a public-key cryptosystem such as RSA.

The MD4 algorithm is designed to be quite fast on 32-bit machines. In addition, the MD4 algorithm does not require any large substitution tables; the algorithm can be coded quite compactly.

The MD4 algorithm is being placed in the public domain for review and possible adoption as a standard.

This document replaces the October 1990 RFC 1186 [2]. The main difference is that the reference implementation of MD4 in the appendix is more portable.

For OSI-based applications, MD4's object identifier is:

```
md4 OBJECT IDENTIFIER ::=
  {iso(1) member-body(2) US(840) rsadsi(113549) digestAlgorithm(2) 4}
```

In the X.509 type AlgorithmIdentifier [3], the parameters for MD4 should have type NULL.

## 2. Terminology and Notation

In this document a "word" is a 32-bit quantity and a "byte" is an eight-bit quantity. A sequence of bits can be interpreted in a natural manner as a sequence of bytes, where each consecutive group of eight bits is interpreted as a byte with the high-order (most significant) bit of each byte listed first. Similarly, a sequence of bytes can be interpreted as a sequence of 32-bit words, where each consecutive group of four bytes is interpreted as a word with the low-order (least significant) byte given first.

Let x_i denote "x sub i". If the subscript is an expression, we surround it in braces, as in x_{i+1}. Similarly, we use ^ for superscripts (exponentiation), so that x^i denotes x to the i-th power.

Let the symbol "+" denote addition of words (i.e., modulo-2^32 addition). Let X <<< s denote the 32-bit value obtained by circularly shifting (rotating) X left by s bit positions. Let not(X) denote the bit-wise complement of X, and let X v Y denote the bit-wise OR of X and Y. Let X xor Y denote the bit-wise XOR of X and Y, and let XY denote the bit-wise AND of X and Y.

## 3. MD4 Algorithm Description

We begin by supposing that we have a b-bit message as input, and that we wish to find its message digest. Here b is an arbitrary nonnegative integer; b may be zero, it need not be a multiple of eight, and it may be arbitrarily large. We imagine the bits of the message written down as follows:

```
m_0 m_1 ... m_{b-1}
```

The following five steps are performed to compute the message digest of the message.

### 3.1 Step 1. Append Padding Bits

The message is "padded" (extended) so that its length (in bits) is congruent to 448, modulo 512. That is, the message is extended so that it is just 64 bits shy of being a multiple of 512 bits long. Padding is always performed, even if the length of the message is already congruent to 448, modulo 512.

Padding is performed as follows: a single "1" bit is appended to the message, and then "0" bits are appended so that the length in bits of the padded message becomes congruent to 448, modulo 512. In all, at least one bit and at most 512 bits are appended.

### 3.2 Step 2. Append Length

A 64-bit representation of b (the length of the message before the padding bits were added) is appended to the result of the previous step. In the unlikely event that b is greater than 2^64, then only the low-order 64 bits of b are used. (These bits are appended as two 32-bit words, low-order word first, in accordance with the previous conventions.)

At this point the resulting message (after padding with bits and with b) has a length that is an exact multiple of 512 bits. Equivalently, this message has a length that is an exact multiple of 16 (32-bit) words. Let M[0 ... N-1] denote the words of the resulting message, where N is a multiple of 16.

### 3.3 Step 3. Initialize MD Buffer

A four-word buffer (A,B,C,D) is used to compute the message digest. Here each of A, B, C, D is a 32-bit register. These registers are initialized to the following values (in hexadecimal, low-order bytes first):

```
word A: 01 23 45 67
word B: 89 ab cd ef
word C: fe dc ba 98
word D: 76 54 32 10
```

### 3.4 Step 4. Process Message in 16-Word Blocks

We first define three auxiliary functions that each take as input three 32-bit words and produce as output one 32-bit word.

```
F(X,Y,Z) = XY v not(X) Z
G(X,Y,Z) = XY v XZ v YZ
H(X,Y,Z) = X xor Y xor Z
```

In each bit position F acts as a conditional: if X then Y else Z.
The function F could have been defined using + instead of v, since XY and not(X)Z will never have "1" bits in the same bit position. In each bit position G acts as a majority function: if at least two of X, Y, Z are on, then G has a "1" bit in that bit position, else G has a "0" bit. It is interesting to note that if the bits of X, Y, and Z are independent and unbiased, then each bit of F(X,Y,Z) will be independent and unbiased, and similarly each bit of G(X,Y,Z) will be independent and unbiased. The function H is the bit-wise "XOR" or "parity" function; it has properties similar to those of F and G.

Do the following:

```
/* Process each 16-word block. */
For i = 0 to N/16-1 do

  /* Copy block i into X. */
  For j = 0 to 15 do
    Set X[j] to M[i*16+j].
  end /* of loop on j */

  /* Save A as AA, B as BB, C as CC, and D as DD. */
  AA = A
  BB = B
  CC = C
  DD = D

  /* Round 1. */
  /* Let [abcd k s] denote the operation
       a = (a + F(b,c,d) + X[k]) <<< s. */
  /* Do the following 16 operations. */
  [ABCD  0  3]  [DABC  1  7]  [CDAB  2 11]  [BCDA  3 19]
  [ABCD  4  3]  [DABC  5  7]  [CDAB  6 11]  [BCDA  7 19]
  [ABCD  8  3]  [DABC  9  7]  [CDAB 10 11]  [BCDA 11 19]
  [ABCD 12  3]  [DABC 13  7]  [CDAB 14 11]  [BCDA 15 19]

  /* Round 2. */
  /* Let [abcd k s] denote the operation
       a = (a + G(b,c,d) + X[k] + 5A827999) <<< s. */
  /* Do the following 16 operations. */
  [ABCD  0  3]  [DABC  4  5]  [CDAB  8  9]  [BCDA 12 13]
  [ABCD  1  3]  [DABC  5  5]  [CDAB  9  9]  [BCDA 13 13]
  [ABCD  2  3]  [DABC  6  5]  [CDAB 10  9]  [BCDA 14 13]
  [ABCD  3  3]  [DABC  7  5]  [CDAB 11  9]  [BCDA 15 13]

  /* Round 3. */
  /* Let [abcd k s] denote the operation
       a = (a + H(b,c,d) + X[k] + 6ED9EBA1) <<< s. */
  /* Do the following 16 operations. */
  [ABCD  0  3]  [DABC  8  9]  [CDAB  4 11]  [BCDA 12 15]
  [ABCD  2  3]  [DABC 10  9]  [CDAB  6 11]  [BCDA 14 15]
  [ABCD  1  3]  [DABC  9  9]  [CDAB  5 11]  [BCDA 13 15]
  [ABCD  3  3]  [DABC 11  9]  [CDAB  7 11]  [BCDA 15 15]

  /* Then perform the following additions. (That is, increment each
     of the four registers by the value it had before this block
     was started.) */
  A = A + AA
  B = B + BB
  C = C + CC
  D = D + DD

end /* of loop on i */
```

Note. The value 5A..99 is a hexadecimal 32-bit constant, written with the high-order digit first. This constant represents the square root of 2. The octal value of this constant is 013240474631.

The value 6E..A1 is a hexadecimal 32-bit constant, written with the high-order digit first. This constant represents the square root of 3. The octal value of this constant is 015666365641.

See Knuth, The Art of Computer Programming, Volume 2 (Seminumerical Algorithms), Second Edition (1981), Addison-Wesley, Table 2, page 660.

### 3.5 Step 5. Output

The message digest produced as output is A, B, C, D. That is, we begin with the low-order byte of A, and end with the high-order byte of D.

This completes the description of MD4. A reference implementation in C is given in the appendix.

## 4. Summary

The MD4 message-digest algorithm is simple to implement, and provides a "fingerprint" or message digest of a message of arbitrary length. It is conjectured that the difficulty of coming up with two messages having the same message digest is on the order of 2^64 operations, and that the difficulty of coming up with any message having a given message digest is on the order of 2^128 operations. The MD4 algorithm has been carefully scrutinized for weaknesses. It is, however, a relatively new algorithm and further security analysis is of course justified, as is the case with any new proposal of this sort.

## References

[1] Rivest, R., "The MD4 message digest algorithm", in A.J. Menezes
    and S.A. Vanstone, editors, Advances in Cryptology - CRYPTO '90
    Proceedings, pages 303-311, Springer-Verlag, 1991.

[2] Rivest, R., "The MD4 Message Digest Algorithm", RFC 1186, MIT,
    October 1990.

[3] CCITT Recommendation X.509 (1988), "The Directory -
    Authentication Framework".

[4] Rivest, R., "The MD5 Message-Digest Algorithm", RFC 1321, MIT and
    RSA Data Security, Inc, April 1992.

## APPENDIX A - Reference Implementation

This appendix contains the following files:

```
global.h   -- global header file
md4.h      -- header file for MD4
md4c.c     -- source code for MD4
mddriver.c -- test driver for MD2, MD4 and MD5
```

The driver compiles for MD5 by default but can compile for MD2 or MD4 if the symbol MD is defined on the C compiler command line as 2 or 4.

The implementation is portable and should work on many different platforms. However, it is not difficult to optimize the implementation on particular platforms, an exercise left to the reader. For example, on "little-endian" platforms where the lowest-addressed byte in a 32-bit word is the least significant and there are no alignment restrictions, the call to Decode in MD4Transform can be replaced with a typecast.

### A.1 global.h

```
/* PROTOTYPES should be set to one if and only if the compiler supports
   function argument prototyping.
   The following makes PROTOTYPES default to 0 if it has not already
   been defined with C compiler flags.
 */
#ifndef PROTOTYPES
#define PROTOTYPES 0
#endif

/* POINTER defines a generic pointer type */
typedef unsigned char *POINTER;

/* UINT4 defines a four byte word */
typedef unsigned long int UINT4;

/* PROTO_LIST is defined depending on how PROTOTYPES is defined above.
   If using PROTOTYPES, then PROTO_LIST returns the list, otherwise it
   returns an empty list.
 */
#if PROTOTYPES
#define PROTO_LIST(list) list
#else
#define PROTO_LIST(list) ()
#endif
```

### A.2 md4.h

Copyright (C) 1991-2, RSA Data Security, Inc. Created 1991. All rights reserved.

License to copy and use this software is granted provided that it is identified as the "RSA Data Security, Inc. MD4 Message-Digest Algorithm" in all material mentioning or referencing this software or this function.

License is also granted to make and use derivative works provided that such works are identified as "derived from the RSA Data Security, Inc. MD4 Message-Digest Algorithm" in all material mentioning or referencing the derived work.

RSA Data Security, Inc. makes no representations concerning either the merchantability of this software or the suitability of this software for any particular purpose. It is provided "as is" without express or implied warranty of any kind.

These notices must be retained in any copies of any part of this documentation and/or software.

```
/* MD4 context. */
typedef struct {
  UINT4 state[4];                                   /* state (ABCD) */
  UINT4 count[2];        /* number of bits, modulo 2^64 (lsb first) */
  unsigned char buffer[64];                         /* input buffer */
} MD4_CTX;

void MD4Init PROTO_LIST ((MD4_CTX *));
void MD4Update PROTO_LIST
  ((MD4_CTX *, unsigned char *, unsigned int));
void MD4Final PROTO_LIST ((unsigned char [16], MD4_CTX *));
```

### A.3 md4c.c

Copyright (C) 1991-2, RSA Data Security, Inc. Created 1991. All rights reserved.

License to copy and use this software is granted provided that it is identified as the "RSA Data Security, Inc. MD4 Message-Digest Algorithm" in all material mentioning or referencing this software or this function.

License is also granted to make and use derivative works provided that such works are identified as "derived from the RSA Data Security, Inc. MD4 Message-Digest Algorithm" in all material mentioning or referencing the derived work.

RSA Data Security, Inc. makes no representations concerning either the merchantability of this software or the suitability of this software for any particular purpose.
It is provided "as is" without express or implied warranty of any kind.

These notices must be retained in any copies of any part of this documentation and/or software.

```
#include "global.h"
#include "md4.h"

/* Constants for MD4Transform routine.
 */
#define S11 3
#define S12 7
#define S13 11
#define S14 19
#define S21 3
#define S22 5
#define S23 9
#define S24 13
#define S31 3
#define S32 9
#define S33 11
#define S34 15

static void MD4Transform PROTO_LIST ((UINT4 [4], unsigned char [64]));
static void Encode PROTO_LIST
  ((unsigned char *, UINT4 *, unsigned int));
static void Decode PROTO_LIST
  ((UINT4 *, unsigned char *, unsigned int));
static void MD4_memcpy PROTO_LIST ((POINTER, POINTER, unsigned int));
static void MD4_memset PROTO_LIST ((POINTER, int, unsigned int));

static unsigned char PADDING[64] = {
  0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
};

/* F, G and H are basic MD4 functions.
 */
#define F(x, y, z) (((x) & (y)) | ((~x) & (z)))
#define G(x, y, z) (((x) & (y)) | ((x) & (z)) | ((y) & (z)))
#define H(x, y, z) ((x) ^ (y) ^ (z))

/* ROTATE_LEFT rotates x left n bits.
 */
#define ROTATE_LEFT(x, n) (((x) << (n)) | ((x) >> (32-(n))))

/* FF, GG and HH are transformations for rounds 1, 2 and 3 */
/* Rotation is separate from addition to prevent recomputation */
#define FF(a, b, c, d, x, s) { \
    (a) += F ((b), (c), (d)) + (x); \
    (a) = ROTATE_LEFT ((a), (s)); \
  }
#define GG(a, b, c, d, x, s) { \
    (a) += G ((b), (c), (d)) + (x) + (UINT4)0x5a827999; \
    (a) = ROTATE_LEFT ((a), (s)); \
  }
#define HH(a, b, c, d, x, s) { \
    (a) += H ((b), (c), (d)) + (x) + (UINT4)0x6ed9eba1; \
    (a) = ROTATE_LEFT ((a), (s)); \
  }

/* MD4 initialization. Begins an MD4 operation, writing a new context.
 */
void MD4Init (context)
MD4_CTX *context;                                        /* context */
{
  context->count[0] = context->count[1] = 0;

  /* Load magic initialization constants.
   */
  context->state[0] = 0x67452301;
  context->state[1] = 0xefcdab89;
  context->state[2] = 0x98badcfe;
  context->state[3] = 0x10325476;
}

/* MD4 block update operation. Continues an MD4 message-digest
   operation, processing another message block, and updating the
   context.
 */
void MD4Update (context, input, inputLen)
MD4_CTX *context;                                        /* context */
unsigned char *input;                                /* input block */
unsigned int inputLen;                     /* length of input block */
{
  unsigned int i, index, partLen;

  /* Compute number of bytes mod 64 */
  index = (unsigned int)((context->count[0] >> 3) & 0x3F);
  /* Update number of bits */
  if ((context->count[0] += ((UINT4)inputLen << 3))
      < ((UINT4)inputLen << 3))
    context->count[1]++;
  context->count[1] += ((UINT4)inputLen >> 29);

  partLen = 64 - index;

  /* Transform as many times as possible.
   */
  if (inputLen >= partLen) {
    MD4_memcpy
      ((POINTER)&context->buffer[index], (POINTER)input, partLen);
    MD4Transform (context->state, context->buffer);

    for (i = partLen; i + 63 < inputLen; i += 64)
      MD4Transform (context->state, &input[i]);

    index = 0;
  }
  else
    i = 0;

  /* Buffer remaining input */
  MD4_memcpy
    ((POINTER)&context->buffer[index], (POINTER)&input[i],
     inputLen-i);
}

/* MD4 finalization. Ends an MD4 message-digest operation, writing the
   message digest and zeroizing the context.
 */
void MD4Final (digest, context)
unsigned char digest[16];                         /* message digest */
MD4_CTX *context;                                        /* context */
{
  unsigned char bits[8];
  unsigned int index, padLen;

  /* Save number of bits */
  Encode (bits, context->count, 8);

  /* Pad out to 56 mod 64.
   */
  index = (unsigned int)((context->count[0] >> 3) & 0x3f);
  padLen = (index < 56) ? (56 - index) : (120 - index);
  MD4Update (context, PADDING, padLen);

  /* Append length (before padding) */
  MD4Update (context, bits, 8);
  /* Store state in digest */
  Encode (digest, context->state, 16);

  /* Zeroize sensitive information.
   */
  MD4_memset ((POINTER)context, 0, sizeof (*context));
}

/* MD4 basic transformation. Transforms state based on block.
 */
static void MD4Transform (state, block)
UINT4 state[4];
unsigned char block[64];
{
  UINT4 a = state[0], b = state[1], c = state[2], d = state[3], x[16];

  Decode (x, block, 64);

  /* Round 1 */
  FF (a, b, c, d, x[ 0], S11); /* 1 */
  FF (d, a, b, c, x[ 1], S12); /* 2 */
  FF (c, d, a, b, x[ 2], S13); /* 3 */
  FF (b, c, d, a, x[ 3], S14); /* 4 */
  FF (a, b, c, d, x[ 4], S11); /* 5 */
  FF (d, a, b, c, x[ 5], S12); /* 6 */
  FF (c, d, a, b, x[ 6], S13); /* 7 */
  FF (b, c, d, a, x[ 7], S14); /* 8 */
  FF (a, b, c, d, x[ 8], S11); /* 9 */
  FF (d, a, b, c, x[ 9], S12); /* 10 */
  FF (c, d, a, b, x[10], S13); /* 11 */
  FF (b, c, d, a, x[11], S14); /* 12 */
  FF (a, b, c, d, x[12], S11); /* 13 */
  FF (d, a, b, c, x[13], S12); /* 14 */
  FF (c, d, a, b, x[14], S13); /* 15 */
  FF (b, c, d, a, x[15], S14); /* 16 */

  /* Round 2 */
  GG (a, b, c, d, x[ 0], S21); /* 17 */
  GG (d, a, b, c, x[ 4], S22); /* 18 */
  GG (c, d, a, b, x[ 8], S23); /* 19 */
  GG (b, c, d, a, x[12], S24); /* 20 */
  GG (a, b, c, d, x[ 1], S21); /* 21 */
  GG (d, a, b, c, x[ 5], S22); /* 22 */
  GG (c, d, a, b, x[ 9], S23); /* 23 */
  GG (b, c, d, a, x[13], S24); /* 24 */
  GG (a, b, c, d, x[ 2], S21); /* 25 */
  GG (d, a, b, c, x[ 6], S22); /* 26 */
  GG (c, d, a, b, x[10], S23); /* 27 */
  GG (b, c, d, a, x[14], S24); /* 28 */
  GG (a, b, c, d, x[ 3], S21); /* 29 */
  GG (d, a, b, c, x[ 7], S22); /* 30 */
  GG (c, d, a, b, x[11], S23); /* 31 */
  GG (b, c, d, a, x[15], S24); /* 32 */

  /* Round 3 */
  HH (a, b, c, d, x[ 0], S31); /* 33 */
  HH (d, a, b, c, x[ 8], S32); /* 34 */
  HH (c, d, a, b, x[ 4], S33); /* 35 */
  HH (b, c, d, a, x[12], S34); /* 36 */
  HH (a, b, c, d, x[ 2], S31); /* 37 */
  HH (d, a, b, c, x[10], S32); /* 38 */
  HH (c, d, a, b, x[ 6], S33); /* 39 */
  HH (b, c, d, a, x[14], S34); /* 40 */
  HH (a, b, c, d, x[ 1], S31); /* 41 */
  HH (d, a, b, c, x[ 9], S32); /* 42 */
  HH (c, d, a, b, x[ 5], S33); /* 43 */
  HH (b, c, d, a, x[13], S34); /* 44 */
  HH (a, b, c, d, x[ 3], S31); /* 45 */
  HH (d, a, b, c, x[11], S32); /* 46 */
  HH (c, d, a, b, x[ 7], S33); /* 47 */
  HH (b, c, d, a, x[15], S34); /* 48 */

  state[0] += a;
  state[1] += b;
  state[2] += c;
  state[3] += d;

  /* Zeroize sensitive information.
   */
  MD4_memset ((POINTER)x, 0, sizeof (x));
}

/* Encodes input (UINT4) into output (unsigned char). Assumes len is
   a multiple of 4.
 */
static void Encode (output, input, len)
unsigned char *output;
UINT4 *input;
unsigned int len;
{
  unsigned int i, j;

  for (i = 0, j = 0; j < len; i++, j += 4) {
    output[j] = (unsigned char)(input[i] & 0xff);
    output[j+1] = (unsigned char)((input[i] >> 8) & 0xff);
    output[j+2] = (unsigned char)((input[i] >> 16) & 0xff);
    output[j+3] = (unsigned char)((input[i] >> 24) & 0xff);
  }
}

/* Decodes input (unsigned char) into output (UINT4). Assumes len is
   a multiple of 4.
 */
static void Decode (output, input, len)
UINT4 *output;
unsigned char *input;
unsigned int len;
{
  unsigned int i, j;

  for (i = 0, j = 0; j < len; i++, j += 4)
    output[i] = ((UINT4)input[j]) | (((UINT4)input[j+1]) << 8) |
      (((UINT4)input[j+2]) << 16) | (((UINT4)input[j+3]) << 24);
}

/* Note: Replace "for loop" with standard memcpy if possible.
 */
static void MD4_memcpy (output, input, len)
POINTER output;
POINTER input;
unsigned int len;
{
  unsigned int i;

  for (i = 0; i < len; i++)
    output[i] = input[i];
}

/* Note: Replace "for loop" with standard memset if possible.
 */
static void MD4_memset (output, value, len)
POINTER output;
int value;
unsigned int len;
{
  unsigned int i;

  for (i = 0; i < len; i++)
    ((char *)output)[i] = (char)value;
}
```

### A.4 mddriver.c

Copyright (C) 1990-2, RSA Data Security, Inc. Created 1990. All rights reserved.

RSA Data Security, Inc. makes no representations concerning either the merchantability of this software or the suitability of this software for any particular purpose.
It is provided "as is" without express or implied warranty of any kind.

These notices must be retained in any copies of any part of this documentation and/or software.

```
/* The following makes MD default to MD5 if it has not already been
   defined with C compiler flags.
 */
#ifndef MD
#define MD MD5
#endif

#include <stdio.h>
#include <time.h>
#include <string.h>
#include "global.h"
#if MD == 2
#include "md2.h"
#endif
#if MD == 4
#include "md4.h"
#endif
#if MD == 5
#include "md5.h"
#endif

/* Length of test block, number of test blocks.
 */
#define TEST_BLOCK_LEN 1000
#define TEST_BLOCK_COUNT 1000

static void MDString PROTO_LIST ((char *));
static void MDTimeTrial PROTO_LIST ((void));
static void MDTestSuite PROTO_LIST ((void));
static void MDFile PROTO_LIST ((char *));
static void MDFilter PROTO_LIST ((void));
static void MDPrint PROTO_LIST ((unsigned char [16]));

#if MD == 2
#define MD_CTX MD2_CTX
#define MDInit MD2Init
#define MDUpdate MD2Update
#define MDFinal MD2Final
#endif
#if MD == 4
#define MD_CTX MD4_CTX
#define MDInit MD4Init
#define MDUpdate MD4Update
#define MDFinal MD4Final
#endif
#if MD == 5
#define MD_CTX MD5_CTX
#define MDInit MD5Init
#define MDUpdate MD5Update
#define MDFinal MD5Final
#endif

/* Main driver.

   Arguments (may be any combination):
     -sstring - digests string
     -t       - runs time trial
     -x       - runs test script
     filename - digests file
     (none)   - digests standard input
 */
int main (argc, argv)
int argc;
char *argv[];
{
  int i;

  if (argc > 1)
    for (i = 1; i < argc; i++)
      if (argv[i][0] == '-' && argv[i][1] == 's')
        MDString (argv[i] + 2);
      else if (strcmp (argv[i], "-t") == 0)
        MDTimeTrial ();
      else if (strcmp (argv[i], "-x") == 0)
        MDTestSuite ();
      else
        MDFile (argv[i]);
  else
    MDFilter ();

  return (0);
}

/* Digests a string and prints the result.
 */
static void MDString (string)
char *string;
{
  MD_CTX context;
  unsigned char digest[16];
  unsigned int len = strlen (string);

  MDInit (&context);
  MDUpdate (&context, string, len);
  MDFinal (digest, &context);

  printf ("MD%d (\"%s\") = ", MD, string);
  MDPrint (digest);
  printf ("\n");
}

/* Measures the time to digest TEST_BLOCK_COUNT TEST_BLOCK_LEN-byte
   blocks.
 */
static void MDTimeTrial ()
{
  MD_CTX context;
  time_t endTime, startTime;
  unsigned char block[TEST_BLOCK_LEN], digest[16];
  unsigned int i;

  printf
    ("MD%d time trial. Digesting %d %d-byte blocks ...", MD,
     TEST_BLOCK_COUNT, TEST_BLOCK_LEN);

  /* Initialize block */
  for (i = 0; i < TEST_BLOCK_LEN; i++)
    block[i] = (unsigned char)(i & 0xff);

  /* Start timer */
  time (&startTime);

  /* Digest blocks */
  MDInit (&context);
  for (i = 0; i < TEST_BLOCK_COUNT; i++)
    MDUpdate (&context, block, TEST_BLOCK_LEN);
  MDFinal (digest, &context);

  /* Stop timer */
  time (&endTime);

  printf (" done\n");
  printf ("Digest = ");
  MDPrint (digest);
  printf ("\nTime = %ld seconds\n", (long)(endTime-startTime));
  printf
    ("Speed = %ld bytes/second\n",
     (long)TEST_BLOCK_LEN * (long)TEST_BLOCK_COUNT/(endTime-startTime));
}

/* Digests a reference suite of strings and prints the results.
 */
static void MDTestSuite ()
{
  printf ("MD%d test suite:\n", MD);

  MDString ("");
  MDString ("a");
  MDString ("abc");
  MDString ("message digest");
  MDString ("abcdefghijklmnopqrstuvwxyz");
  MDString
    ("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789");
  MDString
    ("1234567890123456789012345678901234567890\
1234567890123456789012345678901234567890");
}

/* Digests a file and prints the result.
 */
static void MDFile (filename)
char *filename;
{
  FILE *file;
  MD_CTX context;
  int len;
  unsigned char buffer[1024], digest[16];

  if ((file = fopen (filename, "rb")) == NULL)
    printf ("%s can't be opened\n", filename);
  else {
    MDInit (&context);
    while (len = fread (buffer, 1, 1024, file))
      MDUpdate (&context, buffer, len);
    MDFinal (digest, &context);

    fclose (file);

    printf ("MD%d (%s) = ", MD, filename);
    MDPrint (digest);
    printf ("\n");
  }
}

/* Digests the standard input and prints the result.
 */
static void MDFilter ()
{
  MD_CTX context;
  int len;
  unsigned char buffer[16], digest[16];

  MDInit (&context);
  while (len = fread (buffer, 1, 16, stdin))
    MDUpdate (&context, buffer, len);
  MDFinal (digest, &context);

  MDPrint (digest);
  printf ("\n");
}

/* Prints a message digest in hexadecimal.
 */
static void MDPrint (digest)
unsigned char digest[16];
{
  unsigned int i;

  for (i = 0; i < 16; i++)
    printf ("%02x", digest[i]);
}
```

### A.5 Test suite

The MD4 test suite (driver option "-x") should print the following results:

```
MD4 test suite:
MD4 ("") = 31d6cfe0d16ae931b73c59d7e0c089c0
MD4 ("a") = bde52cb31de33e46245e05fbdbd6fb24
MD4 ("abc") = a448017aaf21d8525fc10ae87aa6729d
MD4 ("message digest") = d9130a8164549fe818874806e1c7014b
MD4 ("abcdefghijklmnopqrstuvwxyz") = d79e1c308aa5bbcdeea8ed63df412da9
MD4 ("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789") =
043f8582f241db351ce627e153e7f0e4
MD4 ("123456789012345678901234567890123456789012345678901234567890123456
78901234567890") = e33b4ddc9c38f2199c3e7b164fcc0536
```

## Security Considerations

The level of security discussed in this memo is considered to be sufficient for implementing moderate security hybrid digital-signature schemes based on MD4 and a public-key cryptosystem. We do not know of any reason that MD4 would not be sufficient for implementing very high security digital-signature schemes, but because MD4 was designed to be exceptionally fast, it is "at the edge" in terms of risking successful cryptanalytic attack. After further critical review, it may be appropriate to consider MD4 for very high security applications.
For very high security applications before the completion of that review, the MD5 algorithm is recommended.\n\n``` Ronald L. Rivest"
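The test-suite vectors above can be cross-checked without compiling the C driver. The following is a compact, self-contained MD4 sketch in Python, written directly from the algorithm's three-round structure; it is an independent illustration, not the RFC reference code.

```python
import struct

def _rol(x, n):
    """Rotate a 32-bit value left by n bits."""
    x &= 0xffffffff
    return ((x << n) | (x >> (32 - n))) & 0xffffffff

def md4(data: bytes) -> str:
    """Return the MD4 digest of `data` as a lowercase hex string."""
    # Initial state (RFC 1320, little-endian word constants)
    a, b, c, d = 0x67452301, 0xefcdab89, 0x98badcfe, 0x10325476

    # Padding: 0x80, zeros to 56 mod 64, then 64-bit little-endian bit length
    msg = data + b"\x80"
    msg += b"\x00" * ((56 - len(msg) % 64) % 64)
    msg += struct.pack("<Q", (len(data) * 8) & 0xffffffffffffffff)

    F = lambda x, y, z: (x & y) | (~x & z)          # round 1 function
    G = lambda x, y, z: (x & y) | (x & z) | (y & z)  # round 2 function
    H = lambda x, y, z: x ^ y ^ z                    # round 3 function

    for off in range(0, len(msg), 64):
        X = struct.unpack("<16I", msg[off:off + 64])
        aa, bb, cc, dd = a, b, c, d

        for i in range(16):                          # Round 1
            s = (3, 7, 11, 19)[i % 4]
            if   i % 4 == 0: a = _rol(a + F(b, c, d) + X[i], s)
            elif i % 4 == 1: d = _rol(d + F(a, b, c) + X[i], s)
            elif i % 4 == 2: c = _rol(c + F(d, a, b) + X[i], s)
            else:            b = _rol(b + F(c, d, a) + X[i], s)

        order2 = (0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15)
        for i, k in enumerate(order2):               # Round 2
            s = (3, 5, 9, 13)[i % 4]
            if   i % 4 == 0: a = _rol(a + G(b, c, d) + X[k] + 0x5a827999, s)
            elif i % 4 == 1: d = _rol(d + G(a, b, c) + X[k] + 0x5a827999, s)
            elif i % 4 == 2: c = _rol(c + G(d, a, b) + X[k] + 0x5a827999, s)
            else:            b = _rol(b + G(c, d, a) + X[k] + 0x5a827999, s)

        order3 = (0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15)
        for i, k in enumerate(order3):               # Round 3
            s = (3, 9, 11, 15)[i % 4]
            if   i % 4 == 0: a = _rol(a + H(b, c, d) + X[k] + 0x6ed9eba1, s)
            elif i % 4 == 1: d = _rol(d + H(a, b, c) + X[k] + 0x6ed9eba1, s)
            elif i % 4 == 2: c = _rol(c + H(d, a, b) + X[k] + 0x6ed9eba1, s)
            else:            b = _rol(b + H(c, d, a) + X[k] + 0x6ed9eba1, s)

        a = (a + aa) & 0xffffffff
        b = (b + bb) & 0xffffffff
        c = (c + cc) & 0xffffffff
        d = (d + dd) & 0xffffffff

    return struct.pack("<4I", a, b, c, d).hex()
```

Running it over the suite strings should reproduce the digests listed in A.5 (MD4 is, of course, long broken and shown here only for verification of the historical vectors).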
https://rdrr.io/cran/timetk/man/slidify.html
# slidify: Create a rolling (sliding) version of any function (in timetk: A Tool Kit for Working with Time Series in R)

## Description

`slidify` returns a rolling (sliding) version of the input function, with a rolling (sliding) `.period` specified by the user.

## Usage

```r
slidify(
  .f,
  .period = 1,
  .align = c("center", "left", "right"),
  .partial = FALSE,
  .unlist = TRUE
)
```

## Arguments

- `.f`: A function, formula, or vector (not necessarily atomic).
  - If a function, it is used as is.
  - If a formula, e.g. `~ .x + 2`, it is converted to a function. There are three ways to refer to the arguments: for a single-argument function, use `.`; for a two-argument function, use `.x` and `.y`; for more arguments, use `..1`, `..2`, `..3`, etc. This syntax allows you to create very compact anonymous functions.
  - If a character vector, numeric vector, or list, it is converted to an extractor function. Character vectors index by name and numeric vectors index by position; use a list to index by position and name at different levels. If a component is not present, the value of `.default` will be returned.
- `.period`: The period size to roll over.
- `.align`: One of "center", "left" or "right".
- `.partial`: Should the moving window be allowed to return partial (incomplete) windows instead of `NA` values? Set to `FALSE` by default, but can be switched to `TRUE` to remove `NA`s.
- `.unlist`: If the function returns a single value each time it is called, use `.unlist = TRUE`. If the function returns more than one value, or a more complicated object (like a linear model), use `.unlist = FALSE` to create a list-column of the rolling results.

## Details

The `slidify()` function is almost identical to `tibbletime::rollify()` with 3 improvements:

1. Alignment ("center", "left", "right")
2. Partial windows are allowed
3. Uses `slider` under the hood, which improves speed and reliability by implementing code at the C++ level

Make any function a sliding (rolling) function

`slidify()` turns a function into a sliding version of itself for use inside a call to `dplyr::mutate()`; however, it works equally well when called from `purrr::map()`.

Because of its intended use with `dplyr::mutate()`, `slidify` creates a function that always returns output with the same length as the input.

Alignment

Rolling/sliding functions generate `.period - 1` fewer values than the incoming vector, so the vector needs to be aligned. Alignment of the vector follows 3 types:

- center (default): `NA` or `.partial` values are divided and added to the beginning and end of the series to "center" the moving average. This is common in time-series applications (e.g. denoising).
- left: `NA` or `.partial` values are added to the end to shift the series to the left.
- right: `NA` or `.partial` values are added to the beginning to shift the series to the right. This is common in financial applications (e.g. moving-average cross-overs).

Allowing partial windows

A key improvement over `tibbletime::rollify()` is that `timetk::slidify()` implements `.partial` rolling windows.
Just set `.partial = TRUE`.

## References

- The Tibbletime R Package by Davis Vaughan, which includes the original `rollify()` function

## See Also

Transformation Functions:

- `slidify_vec()` - A simple vectorized function for applying summary functions to rolling windows.

Augmentation Functions (Add Rolling Multiple Columns):

- `tk_augment_slidify()` - For easily adding multiple rolling windows to your data

Slider R Package:

- `slider::pslide()` - The workhorse function that powers `timetk::slidify()`

## Examples

```r
library(tidyverse)
library(tidyquant)
library(tidyr)
library(timetk)

FB <- FANG %>% filter(symbol == "FB")

# --- ROLLING MEAN (SINGLE ARG EXAMPLE) ---

# Turn the normal mean function into a rolling mean with a 5 row .period
mean_roll_5 <- slidify(mean, .period = 5, .align = "right")

FB %>%
    mutate(rolling_mean_5 = mean_roll_5(adjusted))

# Use `partial = TRUE` to allow partial windows (those with less than the full .period)
mean_roll_5_partial <- slidify(mean, .period = 5, .align = "right", .partial = TRUE)

FB %>%
    mutate(rolling_mean_5 = mean_roll_5_partial(adjusted))

# There's nothing stopping you from combining multiple rolling functions with
# different .period sizes in the same mutate call
mean_roll_10 <- slidify(mean, .period = 10, .align = "right")

FB %>%
    select(symbol, date, adjusted) %>%
    mutate(
        rolling_mean_5  = mean_roll_5(adjusted),
        rolling_mean_10 = mean_roll_10(adjusted)
    )

# For summary operations like rolling means, we can accomplish large-scale
# multi-rolls with tk_augment_slidify()
FB %>%
    select(symbol, date, adjusted) %>%
    tk_augment_slidify(
        adjusted,
        .period = 5:10,
        .f      = mean,
        .align  = "right",
        .names  = str_c("MA_", 5:10)
    )

# --- GROUPS AND ROLLING ----

# One of the most powerful things about this is that it works with
# groups since `mutate` is being used
data(FANG)

mean_roll_3 <- slidify(mean, .period = 3, .align = "right")

FANG %>%
    group_by(symbol) %>%
    mutate(mean_roll = mean_roll_3(adjusted)) %>%
    slice(1:5)

# --- ROLLING CORRELATION (MULTIPLE ARG EXAMPLE) ---

# With 2 args, use the purrr syntax of ~ and .x, .y
# Rolling correlation example
cor_roll <- slidify(~ cor(.x, .y), .period = 5, .align = "right")

FB %>%
    mutate(running_cor = cor_roll(adjusted, open))

# With >2 args, create an anonymous function with >2 args or use
# the purrr convention of ..1, ..2, ..3 to refer to the arguments
avg_of_avgs <- slidify(
    function(x, y, z) (mean(x) + mean(y) + mean(z)) / 3,
    .period = 10,
    .align  = "right"
)

# Or
avg_of_avgs <- slidify(
    ~ (mean(..1) + mean(..2) + mean(..3)) / 3,
    .period = 10,
    .align  = "right"
)

FB %>%
    mutate(avg_of_avgs = avg_of_avgs(open, high, low))

# Optional arguments MUST be passed at the creation of the rolling function.
# Only data arguments that are "rolled over" are allowed when calling the
# rolling version of the function
FB$adjusted[1] <- NA

roll_mean_na_rm <- slidify(~ mean(.x, na.rm = TRUE), .period = 5, .align = "right")

FB %>%
    mutate(roll_mean = roll_mean_na_rm(adjusted))

# --- ROLLING REGRESSIONS ----

# Rolling regressions are easy to implement using `.unlist = FALSE`
lm_roll <- slidify(~ lm(.x ~ .y), .period = 90, .unlist = FALSE, .align = "right")

FB %>%
    drop_na() %>%
    mutate(numeric_date = as.numeric(date)) %>%
    mutate(rolling_lm = lm_roll(adjusted, numeric_date)) %>%
    filter(!is.na(rolling_lm))
```

timetk documentation built on Jan. 19, 2021, 1:06 a.m.
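The semantics described above (output the same length as the input, three alignments, optional partial windows) can be sketched outside R as well. The following minimal Python analogue of `slidify()` is an illustration of the idea, not the timetk implementation, and the function and parameter names simply mirror the R arguments:

```python
def slidify(f, period=1, align="center", partial=False):
    """Return a rolling version of f over fixed-size windows.

    Mirrors the timetk semantics: the result has the same length as the
    input; out-of-range windows yield None (the NA analogue) unless
    `partial=True`, in which case the truncated window is used.
    """
    def rolled(xs):
        n = len(xs)
        out = []
        for i in range(n):
            if align == "right":        # window ends at i
                lo, hi = i - period + 1, i + 1
            elif align == "left":       # window starts at i
                lo, hi = i, i + period
            else:                       # center: split the overhang
                lo, hi = i - (period - 1) // 2, i + period // 2 + 1
            if lo < 0 or hi > n:
                out.append(f(xs[max(lo, 0):min(hi, n)]) if partial else None)
            else:
                out.append(f(xs[lo:hi]))
        return out
    return rolled

mean = lambda w: sum(w) / len(w)
mean_roll_3 = slidify(mean, period=3, align="right")
print(mean_roll_3([1, 2, 3, 4]))   # [None, None, 2.0, 3.0]
```

Note how `align="right"` puts the `None` padding at the start of the series, exactly as the man page describes for `NA` values.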
https://socratic.org/questions/if-the-slope-of-a-line-is-17-13-what-is-the-slope-of-a-perpendicular-line
# If the slope of a line is 17/13, what is the slope of a perpendicular line?

Oct 21, 2015

**Answer:** $-\frac{13}{17}$

#### Explanation:

Suppose a line has equation $y = mx + c$.

This is in slope-intercept form with slope $m$ and intercept $c$.

If we reflect this line in the line $y = x$, that is equivalent to swapping $x$ and $y$ in the equation, resulting in a line with equation:

$x = my + c$

If we then reflect that line in the $x$ axis, that is equivalent to replacing $y$ with $-y$, so we get a line with equation:

$x = -my + c$

Subtract $c$ from both sides to get:

$x - c = -my$

Divide both sides by $-m$ to get:

$y = -\frac{1}{m}x + \frac{c}{m}$

This is in slope-intercept format.

Notice that the geometric result of the two reflections is a rotation through a right angle (try it yourself with a square of paper with an arrow on one side).

Notice that the effect on the slope is to replace $m$ by $-\frac{1}{m}$.

Any parallel line will just have a different intercept value, so we have shown that a line perpendicular to a line of slope $m$ has slope $-\frac{1}{m}$.
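The result can be checked with exact rational arithmetic. This short Python check (illustrative only) confirms that the perpendicular slope to 17/13 is -13/17, and that the two slopes multiply to -1:

```python
from fractions import Fraction

m = Fraction(17, 13)     # slope of the original line
m_perp = -1 / m          # a perpendicular line has slope -1/m

# The derivation above predicts -13/17, and perpendicular slopes
# always multiply to -1.
assert m_perp == Fraction(-13, 17)
assert m * m_perp == -1
```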
https://answers.everydaycalculation.com/compare-fractions/24-8-and-1-20
Solutions by everydaycalculation.com

## Compare 24/8 and 1/20

1st number: 3 0/8, 2nd number: 1/20

24/8 is greater than 1/20

#### Steps for comparing fractions

1. Find the least common denominator, or LCM of the two denominators: LCM of 8 and 20 is 40. Next, find the equivalent fraction of both fractional numbers with denominator 40.
2. For the 1st fraction, since 8 × 5 = 40, 24/8 = (24 × 5)/(8 × 5) = 120/40
3. Likewise, for the 2nd fraction, since 20 × 2 = 40, 1/20 = (1 × 2)/(20 × 2) = 2/40
4. Since the denominators are now the same, the fraction with the bigger numerator is the greater fraction
5. 120/40 > 2/40, so 24/8 > 1/20
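The common-denominator steps above can be reproduced programmatically. This small Python sketch (illustrative; the helper name `compare` is ours) mirrors the method exactly, rewriting both fractions over the LCM of their denominators:

```python
from math import gcd

def compare(n1, d1, n2, d2):
    """Compare n1/d1 with n2/d2: return 1, 0 or -1."""
    lcm = d1 * d2 // gcd(d1, d2)   # least common denominator
    a = n1 * (lcm // d1)           # numerator of 1st fraction over lcm
    b = n2 * (lcm // d2)           # numerator of 2nd fraction over lcm
    return (a > b) - (a < b)

# 24/8 vs 1/20: LCM(8, 20) = 40, giving 120/40 > 2/40
print(compare(24, 8, 1, 20))   # 1, i.e. 24/8 is greater
```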
https://openstax.org/books/college-physics/pages/18-section-summary
College Physics

# Section Summary

### 18.1 Static Electricity and Charge: Conservation of Charge

• There are only two types of charge, which we call positive and negative.
• Like charges repel, unlike charges attract, and the force between charges decreases with the square of the distance.
• The vast majority of positive charge in nature is carried by protons, while the vast majority of negative charge is carried by electrons.
• The electric charge of one electron is equal in magnitude and opposite in sign to the charge of one proton.
• An ion is an atom or molecule that has nonzero total charge due to having unequal numbers of electrons and protons.
• The SI unit for charge is the coulomb (C), with protons and electrons having charges of opposite sign but equal magnitude; the magnitude of this basic charge $|q_e|$ is

  $|q_e| = 1.60 \times 10^{-19}\ \text{C}.$

• Whenever charge is created or destroyed, equal amounts of positive and negative are involved.
• Most often, existing charges are separated from neutral objects to obtain some net charge.
• Both positive and negative charges exist in neutral objects and can be separated by rubbing one object with another.
For macroscopic objects, negatively charged means an excess of electrons and positively charged means a depletion of electrons.
• The law of conservation of charge ensures that whenever a charge is created, an equal charge of the opposite sign is created at the same time.

### 18.2 Conductors and Insulators

• Polarization is the separation of positive and negative charges in a neutral object.
• A conductor is a substance that allows charge to flow freely through its atomic structure.
• An insulator holds charge within its atomic structure.
• Objects with like charges repel each other, while those with unlike charges attract each other.
• A conducting object is said to be grounded if it is connected to the Earth through a conductor. Grounding allows transfer of charge to and from the earth's large reservoir.
• Objects can be charged by contact with another charged object and obtain the same sign charge.
• If an object is temporarily grounded, it can be charged by induction, and obtains the opposite sign charge.
• Polarized objects have their positive and negative charges concentrated in different areas, giving them a non-symmetrical charge.
• Polar molecules have an inherent separation of charge.

### 18.3 Coulomb's Law

• Frenchman Charles Coulomb was the first to publish the mathematical equation that describes the electrostatic force between two objects.
• Coulomb's law gives the magnitude of the force between point charges. It is

  $F = k \frac{|q_1 q_2|}{r^2},$

  where $q_1$ and $q_2$ are two point charges separated by a distance $r$, and $k \approx 8.99 \times 10^9\ \text{N·m}^2/\text{C}^2$.

• This Coulomb force is extremely basic, since most charges are due to point-like particles.
It is responsible for all electrostatic effects and underlies most macroscopic forces.
• The Coulomb force is extraordinarily strong compared with the gravitational force, another basic force; but unlike the gravitational force it can cancel, since it can be either attractive or repulsive.
• The electrostatic force between two subatomic particles is far greater than the gravitational force between the same two particles.

### 18.4 Electric Field: Concept of a Field Revisited

• The electrostatic force field surrounding a charged object extends out into space in all directions.
• The electrostatic force exerted by a point charge on a test charge at a distance $r$ depends on the charge of both charges, as well as the distance between the two.
• The electric field $E$ is defined to be

  $E = \frac{F}{q},$

  where $F$ is the Coulomb or electrostatic force exerted on a small positive test charge $q$. $E$ has units of N/C.

• The magnitude of the electric field $E$ created by a point charge $Q$ is

  $E = k \frac{|Q|}{r^2},$

  where $r$ is the distance from $Q$. The electric field $E$ is a vector, and fields due to multiple charges add like vectors.

### 18.5 Electric Field Lines: Multiple Charges

• Drawings of electric field lines are useful visual tools.
The properties of electric field lines for any charge distribution are that:

• Field lines must begin on positive charges and terminate on negative charges, or at infinity in the hypothetical case of isolated charges.
• The number of field lines leaving a positive charge or entering a negative charge is proportional to the magnitude of the charge.
• The strength of the field is proportional to the closeness of the field lines; more precisely, it is proportional to the number of lines per unit area perpendicular to the lines.
• The direction of the electric field is tangent to the field line at any point in space.
• Field lines can never cross.

### 18.6 Electric Forces in Biology

• Many molecules in living organisms, such as DNA, carry a charge.
• An uneven distribution of the positive and negative charges within a polar molecule produces a dipole.
• The effect of a Coulomb field generated by a charged object may be reduced or blocked by other nearby charged objects.
• Biological systems contain water, and because water molecules are polar, they have a strong effect on other molecules in living systems.

### 18.7 Conductors and Electric Fields in Static Equilibrium

• A conductor allows free charges to move about within it.
• The electrical forces around a conductor will cause free charges to move around inside the conductor until static equilibrium is reached.
• Any excess charge will collect along the surface of a conductor.
• Conductors with sharp corners or points will collect more charge at those points.
• A lightning rod is a conductor with sharply pointed ends that collect excess charge on the building caused by an electrical storm and allow it to dissipate back into the air.
• Electrical storms result when the electrical field of Earth's surface in certain locations becomes more strongly charged, due to changes in the insulating effect of the air.
• A Faraday cage acts like a shield around an object, preventing electric charge from
penetrating inside.

### 18.8 Applications of Electrostatics

• Electrostatics is the study of electric fields in static equilibrium.
• In addition to research using equipment such as a Van de Graaff generator, many practical applications of electrostatics exist, including photocopiers, laser printers, ink-jet printers and electrostatic air filters.
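The summary's claim in 18.3 that the electrostatic force between subatomic particles dwarfs gravity can be checked numerically. The sketch below (illustrative; the constants are standard rounded values) computes the ratio of Coulomb to gravitational force for an electron and a proton; the ratio is independent of the separation $r$, since both forces fall off as $1/r^2$:

```python
# Electrostatic vs. gravitational force between an electron and a proton.
k   = 8.99e9      # Coulomb constant, N*m^2/C^2
G   = 6.674e-11   # gravitational constant, N*m^2/kg^2
q_e = 1.60e-19    # magnitude of the basic charge, C
m_e = 9.11e-31    # electron mass, kg
m_p = 1.67e-27    # proton mass, kg

# F_coulomb / F_gravity = k q^2 / (G m_e m_p); the 1/r^2 factors cancel.
ratio = (k * q_e**2) / (G * m_e * m_p)
print(f"Coulomb force is about {ratio:.1e} times the gravitational force")
```

With these values the ratio comes out around 2 x 10^39, which is why gravity is negligible at the atomic scale.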
https://b-ok.org/book/2464071/d862b2
Medical Statistics Made Easy 2nd edition continues to provide the easiest possible explanations of the key statistical techniques used throughout the medical literature.

Featuring a comprehensive updating of the 'Statistics at work' section, this new edition retains a consistent, concise, and user-friendly format. Each technique is graded for ease of use and frequency of appearance in the mainstream medical journals.

Medical Statistics Made Easy 2nd edition is essential reading for anyone looking to understand:

* confidence intervals and probability values
* numbers needed to treat
* t tests and other parametric tests
* survival analysis

If you need to understand the medical literature, then you need to read this book.

Reviews:

"This book helps medical students understand the basic concepts of medical statistics starting in a 'step-by-step approach'. The authors have designed the book assuming that the reader has no prior knowledge. It focuses on the most common statistical concepts that are likely to be faced in medical literature.

All chapters are concise and simple to understand. Each chapter starts with an introduction which consists of 'how important' that particular statistical concept is, using a 'star' system. A 'thumbs-up' system shows how easy the statistical concept is to understand. Both these systems indicate time-efficient learning, allowing yourself to focus on areas you find most difficult. Following this, there are worked out examples with exam tips at the end of some chapters.

The last chapter, 'Statistics at Work', shows how medical statistics is put into practice using worked out examples from renowned journals. This helps in assessing the reader's own knowledge and gives them confidence in analysis of statistics of a journal.

In conclusion, we would recommend this book as an introduction into medical statistics before plunging into the deep 'statistical' waters!
It gives confidence to the reader in taking up the challenge of understanding statistics and [being] able to apply knowledge in analysing medical literature."
Stefanie Zhao Lin Lip & Louise Murchison, Scottish Medical Journal, June 2010

"If ever there was a book that completely lived up to its title, this is it... Perhaps above everything, it is the chapter layout and design that makes this book stand out head and shoulders above the crowd. At the beginning of each chapter two questions are posed: how important is the subject in question, and how difficult is it to understand? The first is answered on the basis of how often the subject is mentioned/used in papers published in mainstream medical journals. A star rating is then given from one to five, with five stars implying use in the majority of papers published. The second question is answered by means of a 'thumbs up' grading system. The more thumbs, the easier the concept is to understand (maximum of five). This, of course, provides a route into statistics for even the most idle of uneducated individuals! Five stars and five thumbs must surely indicate time-efficient learning! At the end of each chapter exam tips (light bulb icon!) are given. I doubt anyone could ask for more!

The whole way in which the authors have written this book is commendable; the chapters are succinct, easy to follow and a pleasure to read... Is it value for money? A definite yes, even at twice the price. Of course I never exaggerate, but if you breathe, you should own this book!"
Ian Pearce, Urology News, June 2010

Year: 2008
Edition: 2nd
Language: English
Pages: 116
ISBN 10: 1904842550
ISBN 13: 9781904842552
MEDICAL STATISTICS MADE EASY 2

New Clinical Genetics
978 1 904842 31 6
"This book is a very valuable tool that will be used by future geneticists all over Europe and beyond, both as a teaching material and as a source of excellent knowledge."
European Journal of Human Genetics

Puzzles for Medical Students
R. Howard
978 1 904842 34 8
"Unlike bland medical textbooks offering learning by rote, this book challenges the medical student to learn key medical facts through the solving of puzzles and the author should be congratulated for this novel approach."
Dr Andrew Catto

Clinical Skills for OSCEs
N.
Burton
978 1 904842 59 0
"An invaluable guide to clinical skills for OSCEs. A must have for all students!"
International Journal of Clinical Skills

MEDICAL STATISTICS MADE EASY 2

Michael Harris
General Practitioner and Senior Lecturer in Medical Education, Bristol, UK

and

Gordon Taylor
Senior Lecturer in Medical Statistics, University of Bath, UK

Second edition © Scion Publishing Ltd, 2008
ISBN 978 1 904842 55 2
Reprinted 2008, 2009

First edition published in 2003 by Martin Dunitz (ISBN 1 85996 219 X)
Reprinted 2004, 2005, 2007

No part of this book may be reproduced or transmitted, in any form or by any means, without permission.

A CIP catalogue record for this book is available from the British Library.

Scion Publishing Limited
Bloxham Mill, Barford Road, Bloxham, Oxfordshire OX15 4FF
www.scionpublishing.com

Important Note from the Publisher

The information contained within this book was obtained by Scion Publishing Limited from sources believed by us to be reliable. However, while every effort has been made to ensure its accuracy, no responsibility for loss or injury whatsoever occasioned to any person acting or refraining from action as a result of information contained herein can be accepted by the authors or publishers.

Although every effort has been made to ensure that all owners of copyright material have been acknowledged in this publication, we would be pleased to acknowledge in subsequent reprints or editions any omissions brought to our attention.

Typeset by Phoenix Photosetting, Chatham, Kent, UK
Printed by Gutenberg Press Ltd, Malta

CONTENTS

Abbreviations
Preface
Foreword
How to use this book
How this book is designed

Statistics which describe data
  Percentages
  Mean
  Median
  Mode
  Standard deviation

Statistics which test confidence
  Confidence intervals
  P values

Statistics which test differences
  t tests and other parametric tests
  Mann–Whitney and other
non-parametric tests\nChi-squared test\n\n28\n31\n34\n\nStatistics which compare risk\nRisk ratio\nOdds ratio\nRisk reduction and numbers needed to treat\n\n37\n40\n43\n\nvi\n\nContents\n\nStatistics which analyze relationships\nCorrelation\nRegression\n\n48\n53\n\nStatistics which analyze survival\nSurvival analysis: life tables and Kaplan–Meier\nplots\nThe Cox regression model\n\n57\n60\n\nStatistics which analyze clinical investigations and screening\nSensitivity, specificity and predictive value\nLevel of agreement and Kappa\n\n62\n67\n\nOther concepts\n\n69\n\nStatistics at work\n\n73\n\nStandard deviation, relative risk and numbers\nneeded to treat\nOdds ratios and confidence intervals\nCorrelation and regression\nSurvival analysis\nSensitivity, specificity and predictive values\n\n74\n78\n81\n85\n88\n\nGlossary\n\n93\n\nIndex\n\n113\n\nABBREVIATIONS\n\nARR\nBMI\nBP\nCI\ndf\nHR\nIQR\nLR\nNNH\nNNT\nNPV\nP\nPPV\nRRR\nSD\nSE\n\nabsolute risk reduction\nbody mass index\nblood pressure\nconfidence interval\ndegrees of freedom\nhazard ratio\ninter-quartile range\nlikelihood ratio\nnumber needed to harm\nnumber needed to treat\nnegative predictive value\nprobability\npositive predictive value\nrelative risk reduction\nstandard deviation\nside effect\n\nPREFACE\n\nThis book is designed for healthcare students and\nprofessionals who need a basic knowledge of when\ncommon statistical terms are used and what they\nmean.\nWhether you love or hate statistics, you need to have\nsome understanding of the subject if you want to\ncritically appraise a paper. To do this, you do not\nneed to know how to do a statistical analysis. What\nyou do need is to know why the test has been used\nand how to interpret the resulting figures.\nThis book does not assume that you have any\nprior statistical knowledge. 
However basic your mathematical or statistical knowledge, you will find that everything is clearly explained.
A few readers will find some of the sections ridiculously simplistic; others will find some bafflingly difficult. Use the gradings to pick out concepts that suit your level of understanding, and to concentrate on the most important concepts if you are short of time. This book is also produced for those who may be preparing for examinations: look for the "Exam tips" sections if you are in a hurry.
You can test your understanding of what you have learnt by working through extracts from original papers in the "Statistics at work" section.

Dr Michael Harris MB BS FRCGP MMEd is a General Practitioner and Senior Lecturer in Medical Education in Bristol, UK. He teaches nurses, medical students and GP Registrars. Until recently he was an examiner for the MRCGP.

Dr Gordon Taylor PhD MSc BSc (Hons) is a Senior Lecturer in Medical Statistics at the University of Bath, UK. His main role is in the teaching, support and supervision of health care professionals involved in non-commercial research.

FOREWORD

A love of statistics is, oddly, not what attracts most young people to a career in medicine, and I suspect that many clinicians, like me, have at best a sketchy and incomplete understanding of this difficult subject.
Delivering modern, high quality care to patients now relies increasingly on routine reference to scientific papers and journals, rather than traditional textbook learning. Acquiring the skills to appraise medical research papers is a daunting task. Realizing this, Michael Harris and Gordon Taylor have expertly constructed a practical guide for the busy clinician. One a practising NHS doctor, the other a medical statistician with tremendous experience in clinical research, they have produced a unique handbook.
It is short, readable and useful, without becoming overly bogged down in the mathematical detail that frankly puts so many of us off the subject.
I commend this book to all healthcare professionals, general practitioners and hospital specialists. It covers all the ground necessary to critically evaluate the statistical elements of medical research papers, in a friendly and approachable way. The scoring of each section for importance and ease of comprehension will efficiently guide the busy practitioner through his or her reading. In particular it is almost unique in covering this part of the syllabus for Royal College and other postgraduate examinations. Certainly a candidate familiar with the contents of this short book and taking note of its numerous helpful examination tips should have few difficulties when answering the questions on statistics in both the MCQ and Written modules of the new MRCGP exam.
November 2007
Bill Irish
BSc MB BChir DCH DRCOG MMEd FRCGP
(Head of GP School, Severn Deanery, UK and Senior Examiner for the MRCGP(UK)).

> HOW TO USE THIS BOOK

You can use this book in a number of ways.

If you want a statistics course
• Work through from start to finish for a complete course in commonly used medical statistics.

If you are in a hurry
• Choose the sections with the most stars to learn about the commonest statistical methods and terms.
• Start with percentages (page 7), mean (page 9), standard deviation (page 16), confidence intervals (page 20) and P values (page 24).

If you are daunted by statistics
• If you are bewildered every time someone tries to explain a statistical method, then pick out the sections with the most thumbs up symbols to find the easiest and most basic concepts.
• Start with mean (page 9), median (page 12) and mode (page 14), then move on to risk ratio (page 37), incidence and prevalence (page 70).

If you are taking an exam
• The "Exam tips" give you pointers to the topics favoured by examiners.
• You will 
find these in the following sections: mean (page 9), standard deviation (page 16), confidence intervals (page 20), P values (page 24), risk reduction and NNT (page 43), sensitivity, specificity and predictive value (page 62), incidence and prevalence (page 70).

Statistics at work
• See how statistical methods are used in five extracts from real-life papers in the "Statistics at work" section (page 73).
• Work out which statistical methods have been used, why, and what the results mean. Then check your interpretation against the explanations given.

Glossary
• Use the glossary (page 93) as a quick reference for statistical words or phrases that you do not know.

General advice
• Go through difficult sections when you are fresh and try not to cover too much at once.
• You may need to read some sections a couple of times before the meaning sinks in. You will find that the worked examples help to illustrate the principles.
• We have tried to cut down the jargon as much as possible. If there is a word that you do not understand, check it out in the glossary.

> HOW THIS BOOK IS DESIGNED

Every section uses the same series of headings to help you understand the concepts.

"How important is it?"
We noted how often statistical terms were used in 200 quantitative papers in mainstream medical journals. All the papers selected were published during the last year in the British Medical Journal, The Lancet, the New England Journal of Medicine and the Journal of the American Medical Association.
We grouped the terms into concepts and graded them by how often they were used. This helped us to develop a star system for importance. We also took into account usefulness to readers.
For example, "numbers needed to treat" are not often quoted but are fairly easy to calculate and useful in making treatment decisions.

★★★★★  Concepts which are used in the majority of medical papers.
★★★★   Important concepts which are used in at least a third of papers.
★★★    Less frequently used, but still of value in decision-making.
★★     Found in at least 1 in 10 papers.
★      Rarely used in medical journals.

How easy is it to understand?
We have found that the ability of health care professionals to understand statistical concepts varies more widely than their ability to understand anything else related to medicine. This ranges from those who have no difficulty learning how to understand regression to those who struggle with percentages. One of the authors (not the statistician!) fell into the latter category. He graded each section by how easy it is to understand the concept.

Even the most statistic-phobic will have little difficulty in understanding these sections.
With a little concentration, most readers should be able to follow these.
Some readers will have difficulty following these. You may need to go over these sections a few times to be able to take them in.
Quite difficult to understand. Only tackle these sections when you are fresh.
Statistical concepts that are very difficult to grasp.

When is it used?
One thing you need to do if critically appraising a paper is check that the right statistical technique has been used. This part explains which statistical method should be used for what scenario.

What does it mean?
This explains the bottom line – what the results mean in practice.

Examples
Sometimes the best way to understand a statistical technique is to work through an example. Simple, fictitious examples are given to illustrate the principles and how to interpret them.

Watch out for . . .
This includes more detailed explanation, tips and common pitfalls.

EXAM TIP
Some topics are particularly popular with examiners because they test understanding and involve simple calculations. We have given tips on how to approach these concepts.

> PERCENTAGES

How important are they?

★★★★★

An understanding of percentages is probably the first and most important concept to understand in statistics!

How easy are they to understand?
Percentages are easy to understand.

When are they used?
Percentages are mainly used in the tabulation of data in order to give the reader a scale on which to assess or compare the data.

What do they mean?
"Per cent" means per hundred, so a percentage describes a proportion of 100. For example 50% is 50 out of 100, or as a fraction 1⁄2. Other common percentages are 25% (25 out of 100 or 1⁄4) and 75% (75 out of 100 or 3⁄4).
To calculate a percentage, divide the number of items or patients in the category by the total number in the group and multiply by 100.

EXAMPLE
Data were collected on 80 patients referred for heart transplantation. The researcher wanted to compare their ages. The data for age were put in "decade bands" and are shown in Table 1.

Table 1. Ages of 80 patients referred for heart transplantation

Years    Frequency    Percentage
0–9          2           2.5
10–19        5           6.25
20–29        6           7.5
30–39       14          17.5
40–49       21          26.25
50–59       20          25
≥60         12          15
Total       80         100

Frequency = number of patients referred; Percentage = percentage of patients in each decade band. For example, in the 30–39 age band there were 14 patients and we know the ages of 80 patients, so 14/80 × 100 = 17.5%.

Watch out for . . .
Authors can use percentages to hide the true size of the data.
To say that 50% of a sample has a certain condition when there are only four people in the sample is clearly not providing the same level of information as 50% of a sample based on 400 people. So, percentages should be used as an adjunct to, not a replacement for, the actual data.

> MEAN

Otherwise known as an arithmetic mean, or average.

How important is it?

★★★★★

A mean appeared in 90% of the papers surveyed, so it is important to have an understanding of how it is calculated.

How easy is it to understand?
One of the simplest statistical concepts to grasp. However, in most groups that we have taught there has been at least one person who admits not knowing how to calculate the mean, so we do not apologize for including it here.

When is it used?
It is used when the spread of the data is fairly similar on each side of the mid point, for example when the data are "normally distributed".
The "normal distribution" is referred to a lot in statistics. It's the symmetrical, bell-shaped distribution of data shown in Fig. 1.

Fig. 1. The normal distribution. The dotted line shows the mean of the data.

What does it mean?
The mean is the sum of all the values, divided by the number of values.

EXAMPLE
Five women in a study on lipid-lowering agents are aged 52, 55, 56, 58 and 59 years.
Add the ages together: 52 + 55 + 56 + 58 + 59 = 280
Now divide by the number of women: 280 / 5 = 56
So the mean age is 56 years.

Watch out for . . .
If a value (or a number of values) is a lot smaller or larger than the others, "skewing" the data, the mean will then not give a good picture of the typical value. For example, if there is a sixth patient aged 92 in the study then the mean age would be 62, even though only one woman is over 60 years old.
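The pull that a single outlier exerts on the mean is easy to verify for yourself; a quick sketch (not from the book) using Python's standard `statistics` module with the ages from the example:

```python
from statistics import mean, median

ages = [52, 55, 56, 58, 59]        # the five women in the example
print(mean(ages))                   # 280 / 5 = 56 years

# A sixth, much older patient "skews" the data and pulls the mean up:
ages_with_outlier = ages + [92]
print(mean(ages_with_outlier))      # 372 / 6 = 62 years

# The median is far less affected by the single outlier:
print(median(ages_with_outlier))    # halfway between 56 and 58 = 57 years
```

One added value moves the mean by six years but the median by only one.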
In this case, the "median" may be a more suitable mid-point to use (see page 12).

EXAM TIP
A common multiple choice question is to ask the difference between mean, median (see page 12) and mode (see page 14) – make sure that you do not get confused between them.

> MEDIAN

Sometimes known as the mid-point.

How important is it?

★★★★

It is given in over half of mainstream papers.

How easy is it to understand?
Even easier than the mean!

When is it used?
It is used to represent the average when the data are not symmetrical, for instance the "skewed" distribution in Fig. 2. Compare the shape of the graph with the normal distribution shown in Fig. 1.

Fig. 2. A skewed distribution. The dotted line shows the median.

What does it mean?
It is the point which has half the values above, and half below.

EXAMPLE
Using the first example from page 10 of five patients aged 52, 55, 56, 58 and 59, the median age is 56, the same as the mean – half the women are older, half are younger.
However, in the second example with six patients aged 52, 55, 56, 58, 59 and 92 years, there are two "middle" ages, 56 and 58. The median is halfway between these, i.e. 57 years. This gives a better idea of the mid-point of this skewed data than the mean of 62.

Watch out for . . .
The median may be given with its inter-quartile range (IQR). The 1st quartile point has 1⁄4 of the data below it, the 3rd quartile point has 3⁄4 of the sample below it, so the IQR contains the middle 1⁄2 of the sample. This can be shown in a "box and whisker" plot.

EXAMPLE
A dietician measured the energy intake over 24 hours of 50 patients on a variety of wards. One ward had two patients that were "nil by mouth". The median was 12.2 megajoules, IQR 9.9 to 13.6. The lowest intake was 0, the highest was 16.7. This distribution is represented by the box and whisker plot in Fig.
3.

Fig. 3. Box and whisker plot of energy intake of 50 patients over 24 hours. The ends of the whiskers represent the maximum and minimum values, excluding extreme results like those of the two "nil by mouth" patients.

> MODE

How important is it?

★

Rarely quoted in papers and of limited value.

How easy is it to understand?
An easy concept.

When is it used?
It is used when we need a label for the most frequently occurring event.

What does it mean?
The mode is the most common of a set of events.

EXAMPLE
An eye clinic sister noted the eye colour of 100 consecutive patients. The results are shown in Fig. 4.

Fig. 4. Graph of eye colour of patients attending an eye clinic.

In this case the mode is brown, the commonest eye colour.

You may see reference to a "bi-modal distribution". Generally when this is mentioned in papers it is as a concept rather than from calculating the actual values, e.g. "The data appear to follow a bi-modal distribution". See Fig. 5 for an example of where there are two "peaks" to the data, i.e. a bi-modal distribution.

Fig. 5. 
Graph of ages of patients with asthma in a practice. The arrows point to the modes at ages 10–19 and 60–69.

Bi-modal data may suggest that two populations are present that are mixed together, so an average is not a suitable measure for the distribution.

> STANDARD DEVIATION

How important is it?

★★★★★

Quoted in two-thirds of papers, it is used as the basis of a number of statistical calculations.

How easy is it to understand?
It is not an intuitive concept.

When is it used?
Standard deviation (SD) is used for data which are "normally distributed" (see page 9), to provide information on how much the data vary around their mean.

What does it mean?
SD indicates how much a set of values is spread around the average.
A range of one SD above and below the mean (abbreviated to ± 1 SD) includes 68.2% of the values.
± 2 SD includes 95.4% of the data.
± 3 SD includes 99.7%.

EXAMPLE
Let us say that a group of patients enrolling for a trial had a normal distribution for weight. The mean weight of the patients was 80 kg. For this group, the SD was calculated to be 5 kg.
1 SD below the average is 80 – 5 = 75 kg.
1 SD above the average is 80 + 5 = 85 kg.
± 1 SD will include 68.2% of the subjects, so 68.2% of patients will weigh between 75 and 85 kg.
95.4% will weigh between 70 and 90 kg (± 2 SD).
99.7% of patients will weigh between 65 and 95 kg (± 3 SD).
See how this relates to the graph of the data in Fig. 6.

Fig. 6. 
Graph showing normal distribution of weights of patients enrolling in a trial with mean 80 kg, SD 5 kg.

If we have two sets of data with the same mean but different SDs, then the data set with the larger SD has a wider spread than the data set with the smaller SD.
For example, if another group of patients enrolling for the trial has the same mean weight of 80 kg but an SD of only 3, ± 1 SD will include 68.2% of the subjects, so 68.2% of patients will weigh between 77 and 83 kg (Fig. 7). Compare this with the example above.

Fig. 7. Graph showing normal distribution of weights of patients enrolling in a trial with mean 80 kg, SD 3 kg.

Watch out for . . .
SD should only be used when the data have a normal distribution. However, means and SDs are often wrongly used for data which are not normally distributed.
A simple check for a normal distribution is to see if 2 SDs away from the mean are still within the possible range for the variable. For example, if we have some length of hospital stay data with a mean stay of 10 days and an SD of 8 days then:
mean – 2 × SD = 10 – 2 × 8 = 10 – 16 = -6 days.
This is clearly an impossible value for length of stay, so the data cannot be normally distributed. The mean and SDs are therefore not appropriate measures to use.
Good news – it is not necessary to know how to calculate the SD.

EXAM TIP
It is worth learning the figures above off by heart, so a reminder –
± 1 SD includes 68.2% of the data
± 2 SD includes 95.4%
± 3 SD includes 99.7%.
Keeping the "normal distribution" curve in Fig. 6 in mind may help.
Examiners may ask what percentages of subjects are included in 1, 2 or 3 SDs from the mean.
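The 68.2 / 95.4 / 99.7% figures can also be checked empirically by simulation; a sketch (not from the book) drawing illustrative weights with Python's `random.gauss`, using the trial example's mean of 80 kg and SD of 5 kg:

```python
import random

# Simulate normally distributed weights: mean 80 kg, SD 5 kg
# (illustrative simulated data, not the book's patients).
random.seed(1)
weights = [random.gauss(80, 5) for _ in range(100_000)]

def coverage(k):
    """Fraction of values lying within +/- k SDs of the mean."""
    return sum(80 - k * 5 <= w <= 80 + k * 5 for w in weights) / len(weights)

for k, theory in [(1, 68.2), (2, 95.4), (3, 99.7)]:
    print(f"± {k} SD: {coverage(k) * 100:.1f}% (theory {theory}%)")

# The "impossible value" check from the text: hospital stays with
# mean 10 days and SD 8 days give mean - 2 * SD = -6 days, so those
# data cannot be normally distributed.
print(10 - 2 * 8)  # -6
```

With 100 000 simulated values the observed fractions land within a fraction of a percentage point of the theoretical figures.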
Again, try to memorize those percentages.

> CONFIDENCE INTERVALS

How important are they?

★★★★★

Important – given in three-quarters of papers.

How easy are they to understand?
A difficult concept, but one where a small amount of understanding will get you by without having to master the detail.

When is it used?
Confidence intervals (CI) are typically used when, instead of simply wanting the mean value of a sample, we want a range that is likely to contain the true population value.
This "true value" is another tough concept – it is the mean value that we would get if we had data for the whole population.

What does it mean?
Statisticians can calculate a range (interval) in which we can be fairly sure (confident) that the "true value" lies.
For example, we may be interested in blood pressure (BP) reduction with antihypertensive treatment. From a sample of treated patients we can work out the mean change in BP.
However, this will only be the mean for our particular sample. If we took another group of patients we would not expect to get exactly the same value, because chance can also affect the change in BP.
The CI gives the range in which the true value (i.e. the mean change in BP if we treated an infinite number of patients) is likely to be.

EXAMPLES
The average systolic BP before treatment in study A, of a group of 100 hypertensive patients, was 170 mmHg. After treatment with the new drug the mean BP dropped by 20 mmHg.
If the 95% CI is 15–25, this means we can be 95% confident that the true effect of treatment is to lower the BP by 15–25 mmHg.
In study B 50 patients were treated with the same drug, also reducing their mean BP by 20 mmHg, but with a wider 95% CI of -5 to +45. This CI includes zero (no change).
This means there is more than a 5% chance that there was no true change in BP, and that the drug was actually ineffective.

Watch out for . . .
The size of a CI is related to the sample size of the study. Larger studies usually have a narrower CI.
Where a few interventions, outcomes or studies are given it is difficult to visualize a long list of means and CIs. Some papers will show a chart to make it easier.
For example, "meta-analysis" is a technique for bringing together results from a number of similar studies to give one overall estimate of effect. Many meta-analyses compare the treatment effects from those studies by showing the mean changes and 95% CIs in a chart. An example is given in Fig. 8.

Fig. 8. Plot of 5 studies of a new antihypertensive drug. See how the results of studies A and B above are shown by the top two lines, i.e. a 20 mmHg reduction in BP, 95% CI 15–25 for study A and a 20 mmHg reduction, 95% CI -5 to +45 for study B.

The vertical axis does not have a scale. It is simply used to show the zero point on each CI line.
The statistician has combined the results of all five studies and calculated that the overall mean reduction in BP is 14 mmHg, CI 12–16. This is shown by the "combined estimate" diamond. See how combining a number of studies reduces the CI, giving a more accurate estimate of the true treatment effect.
The chart shown in Fig. 8 is called a "Forest plot" or, more colloquially, a "blobbogram".

Standard deviation and confidence intervals – what is the difference? Standard deviation tells us about the variability (spread) in a sample.
The CI tells us the range in which the true value (the mean if the sample were infinitely large) is likely to be.

EXAM TIP
An exam question may give a chart similar to that in Fig. 
8 and ask you to summarize the findings. Consider:
• Which study showed the greatest change?
• Did all the studies show change in favour of the intervention?
• Were the changes statistically significant?
In the example above, study D showed the greatest change, with a mean BP drop of 25 mmHg.
Study C resulted in a mean increase in BP, though with a wide CI. The wide CI could be due to a low number of patients in the study.
The combined estimate of a mean BP reduction of 14 mmHg, 95% CI 12–16, is statistically significant.

> P VALUES

How important is it?

★★★★★

A really important concept, P values are given in more than four out of five papers.

How easy is it to understand?
Not easy, but worth persevering as it is used so frequently.
It is not important to know how the P value is derived – just to be able to interpret the result.

When is it used?
The P (probability) value is used when we wish to see how likely it is that a hypothesis is true. The hypothesis is usually that there is no difference between two treatments, known as the "null hypothesis".

What does it mean?
The P value gives the probability of any observed difference having happened by chance.
P = 0.5 means that the probability of the difference having happened by chance is 0.5 in 1, or 50:50.
P = 0.05 means that the probability of the difference having happened by chance is 0.05 in 1, i.e. 1 in 20.
It is the figure frequently quoted as being "statistically significant", i.e. unlikely to have happened by chance and therefore important. However, this is an arbitrary figure.
If we look at 20 studies, even if none of the treatments work, one of the studies is likely to have a P value of 0.05 and so appear significant!
The lower the P value, the less likely it is that the difference happened by chance and so the higher the significance of the finding.
P = 0.01 is often considered to be "highly significant".
It means that the difference will only have happened by chance 1 in 100 times. This is unlikely, but still possible.
P = 0.001 means the difference will have happened by chance 1 in 1000 times, even less likely, but still just possible. It is usually considered to be "very highly significant".

EXAMPLES
Out of 50 new babies on average 25 will be girls, sometimes more, sometimes less.
Say there is a new fertility treatment and we want to know whether it affects the chance of having a boy or a girl. Therefore we set up a null hypothesis – that the treatment does not alter the chance of having a girl. Out of the first 50 babies resulting from the treatment, 15 are girls. We then need to know the probability that this just happened by chance, i.e. did this happen by chance or has the treatment had an effect on the sex of the babies?
The P value gives the probability that the null hypothesis is true.
The P value in this example is 0.007. Do not worry about how it was calculated, concentrate on what it means. It means the result would only have happened by chance in 0.007 in 1 (or 1 in 140) times if the treatment did not actually affect the sex of the baby. This is highly unlikely, so we can reject our hypothesis and conclude that the treatment probably does alter the chance of having a girl.

Try another example: patients with minor illnesses were randomized to see either Dr Smith or Dr Jones. Dr Smith ended up seeing 176 patients in the study whereas Dr Jones saw 200 patients (Table 2).

Table 2. Number of patients with minor illnesses seen by two GPs

                                              Dr Jones (n=200)*  Dr Smith (n=176)  P value (i.e. could have happened by chance)
Patients satisfied with consultation (%)      186 (93)           168 (95)          0.38 – possible
Mean (SD) consultation length (minutes)       16 (3.1)           6 (2.8)           <0.001 – less than one time in 1000, very unlikely
Patients getting a prescription (%)           65 (33)            67 (38)           0.28 – possible
Mean (SD) number of days off work             3.58 (1.3)         3.61 (1.3)        0.82 – probable
Patients needing a follow-up appointment (%)  68 (34)            78 (44)           0.044 – only one time in 23, fairly unlikely

* n=200 means that the total number of patients seen by Dr Jones was 200.

Watch out for . . .
The "null hypothesis" is a concept that underlies this and other statistical tests.
The test method assumes (hypothesizes) that there is no (null) difference between the groups. The result of the test either supports or rejects that hypothesis.
The null hypothesis is generally the opposite of what we are actually interested in finding out. If we are interested in whether there is a difference between two treatments then the null hypothesis would be that there is no difference and we would try to disprove this.

EXAM TIP
Try not to confuse statistical significance with clinical relevance. If a study is too small, the results are unlikely to be statistically significant even if the intervention actually works. Conversely a large study may find a statistically significant difference that is too small to have any clinical relevance.
You may be given a set of P values and asked to interpret them. Remember that P = 0.05 is usually classed as "significant", P = 0.01 as "highly significant" and P = 0.001 as "very highly significant".
In the example above, only two of the sets of data showed a significant difference between the two GPs. Dr Smith's consultations were very highly significantly shorter than those of Dr Jones. Dr Smith's follow-up rate was significantly higher than that of Dr Jones.

> t TESTS AND OTHER PARAMETRIC TESTS

How important are they?

★★★★

Used in one in three papers, they are an important aspect of medical statistics.

How easy are they to understand?
The details of the tests themselves are difficult to understand.
Thankfully you do not need to know them.
Just look for the P value (see page 24) to see how significant the result is. Remember, the smaller the P value, the smaller the chance that the "null hypothesis" is true.

When are they used?
Parametric statistics are used to compare samples of "normally distributed" data (see page 9). If the data do not follow a normal distribution, these tests should not be used.

What do they mean?
A parametric test is any test which requires the data to follow a specific distribution, usually a normal distribution. Common parametric tests you will come across are the t test and the χ² test.

Analysis of variance (ANOVA). This is a group of statistical techniques used to compare the means of two or more samples to see whether they come from the same population – the "null hypothesis". These techniques can also allow for independent variables which may have an effect on the outcome.
Again, check out the P value.

t test (also known as Student's t). t tests are typically used to compare just two samples. They test the probability that the samples come from a population with the same mean value.

χ² test. A frequently used test is the χ² test. It is covered separately (page 34).

EXAMPLE
Two hundred adults seeing an asthma nurse specialist were randomly assigned to either a new type of bronchodilator or placebo.
After 3 months the peak flow rates in the treatment group had increased by a mean of 96 l/min (SD 58), and in the placebo group by 70 l/min (SD 52). The null hypothesis is that there is no difference between the bronchodilator and the placebo.
The t statistic is 11.14, resulting in a P value of 0.001.
It is therefore very unlikely (1 in 1000 chance) that the null hypothesis is correct so we reject the hypothesis and conclude that the new bronchodilator is significantly better than the placebo.

Watch out for . . .
Parametric tests should only be used when the data follow a "normal" distribution. You may find reference to the "Kolmogorov Smirnov" test. This tests the hypothesis that the collected data are from a normal distribution and therefore assesses whether parametric statistics can be used.
Sometimes authors will say that they have "transformed" data and then analyzed them with a parametric test. This is quite legitimate – it is not cheating! For example, a skewed distribution might become normally distributed if the logarithm of the values is used.

> MANN–WHITNEY AND OTHER NON-PARAMETRIC TESTS

How important are they?

★★

Used in one in five papers.

How easy are they to understand?
Non-parametric testing is difficult to understand. However, you do not need to know the details of the tests. Look out for the P value (see page 24) to see how significant the results are. Remember, the smaller the P value, the smaller the chance that the "null hypothesis" is true.

When are they used?
Non-parametric statistics are used when the data are not normally distributed and so are not appropriate for "parametric" tests.

What do they mean?
Rather than comparing the values of the raw data, statisticians "rank" the data and compare the ranks.

EXAMPLE
Mann–Whitney U test. A GP introduced a nurse triage system into her practice. She was interested in finding out whether the age of the patients attending for triage appointments was different to that of patients who made emergency appointments with the GP.
Six hundred and forty-six patients saw the triage nurse and 532 patients saw the GP.
The median age of the triaged patients was 50 years (1st quartile 40 years, 3rd quartile 54), for the GP it was 46 (22, 58). Note how the quartiles show an uneven distribution around the median, so the data cannot be normally distributed and a non-parametric test is appropriate.
The graph in Fig. 9 shows the ages of the patients seen by the nurse and confirms a skewed, rather than normal, distribution.

Fig. 9. Graph of ages of patients seen by triage nurse (number of patients in each 10-year age band, 10–19 to ≥70).

The statistician used a “Mann–Whitney U test” to test the hypothesis that there is no difference between the ages of the two groups. This gave a U value of 133 200 with a P value of <0.001. Ignore the actual U value but concentrate on the P value, which in this case suggests that the triage nurse’s patients were very highly significantly older than those who saw the GP.

Watch out for...
The “Wilcoxon signed rank test”, “Kruskal–Wallis” and “Friedman” tests are other non-parametric tests. Do not be put off by the names – go straight to the P value.

> CHI-SQUARED TEST
Usually written as χ² (for the test) or Χ² (for its value); Chi is pronounced as in sky without the s.

How important is it?
★★★★  A frequently used test of significance, given in a quarter of papers.

How easy is it to understand?
Do not try to understand the Χ² value, just look at whether or not the result is significant.

When is it used?
It is a measure of the difference between actual and expected frequencies.

What does it mean?
The “expected frequency” is that there is no difference between the sets of results (the null hypothesis). In that case, the Χ² value would be zero.
The larger the actual difference between the sets of results, the greater the Χ² value.
However, it is difficult to interpret the Χ² value by itself as it depends on the number of factors studied. Statisticians make it easier for you by giving the P value (see page 24), giving you the likelihood there is no real difference between the groups.
So, do not worry about the actual value of Χ² but look at its P value.

EXAMPLES
A group of patients with bronchopneumonia were treated with either amoxicillin or erythromycin. The results are shown in Table 3.

Table 3. Comparison of effect of treatment of bronchopneumonia with amoxicillin or erythromycin

                            Type of antibiotic given
                            Amoxicillin    Erythromycin    Total
Improvement at 5 days       144 (60%)      160 (67%)       304 (63%)
No improvement at 5 days     96 (40%)       80 (33%)       176 (37%)
Total                       240 (100%)     240 (100%)      480 (100%)

Χ² = 2.3; P = 0.13

A table like this is known as a “contingency table” or “two-way table”.
First, look at the table to get an idea of the differences between the effects of the two treatments.
Remember, do not worry about the Χ² value itself, but see whether it is significant. In this case P is 0.13, so the difference in treatments is not statistically significant.

Watch out for...
Some papers will also give the “degrees of freedom” (df), for example Χ² = 2.3; df 1; P = 0.13. See page 98 for an explanation. This is used with the Χ² value to work out the P value.
Other tests you may find. Instead of the χ² test, “Fisher’s exact test” is sometimes used to analyze contingency tables. Fisher’s test is the best choice as it always gives the exact P value, particularly where the numbers are small.
The χ² test is simpler for statisticians to calculate but gives only an approximate P value and is inappropriate for small samples.
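For readers who like to check the arithmetic, the chi-squared value of 2.3 quoted under Table 3 can be reproduced with a short calculation (a sketch, not part of the book; the function name is ours):

```python
# Chi-squared statistic for a 2x2 contingency table, checked
# against the Table 3 example (amoxicillin vs erythromycin).

def chi_squared_2x2(table):
    """table: [[a, b], [c, d]] of observed counts, rows = outcomes,
    columns = treatment groups."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count = row total x column total / grand total
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Improvement at 5 days: amoxicillin 144/240, erythromycin 160/240
chi2 = chi_squared_2x2([[144, 160], [96, 80]])
print(round(chi2, 1))  # 2.3, matching the value quoted under Table 3
```

Each expected count here is the row total multiplied by the column total, divided by the grand total, which is exactly the “no difference between the groups” null hypothesis described above.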
Statisticians may apply “Yates’ continuity correction” or other adjustments to the χ² test to improve the accuracy of the P value.
The “Mantel–Haenszel test” is an extension of the χ² test that is used to compare several two-way tables.

> RISK RATIO
Often referred to as relative risk.

How important is it?
★★★  Used in one in six papers.

How easy is it to understand?
Risk is a relatively intuitive concept that we encounter every day, but interpretation of risk (especially low risk) is often inconsistent. The risk of death while travelling to the shops to buy a lottery ticket can be higher than the risk of winning the jackpot!

When is it used?
Relative risk is used in “cohort studies”, prospective studies that follow a group (cohort) over a period of time and investigate the effect of a treatment or risk factor.

What does it mean?
First, risk itself. Risk is the probability that an event will happen. It is calculated by dividing the number of events by the number of people at risk.
One boy is born for every two births, so the probability (risk) of giving birth to a boy is 1⁄2 = 0.5
If one in every 100 patients suffers a side-effect from a treatment, the risk is 1⁄100 = 0.01
Compare this with odds (page 40).
Now, risk ratios.
These are calculated by dividing the risk in the treated or exposed group by the risk in the control or unexposed group.
A risk ratio of one indicates no difference in risk between the groups.
If the risk ratio of an event is >1, the rate of that event is increased compared to controls. If <1, the rate of that event is reduced.
Risk ratios are frequently given with their 95% CIs – if the CI for a risk ratio does not include one (no difference in risk), it is statistically significant.

EXAMPLES
A cohort of 1000 regular football players and 1000 non-footballers were followed to see if playing football was significant in the injuries that they sustained.
After 1 year of follow-up there had been 12 broken legs in the football players and only four in the non-footballers.
The risk of a footballer breaking a leg was therefore 12/1000 or 0.012. The risk of a non-footballer breaking a leg was 4/1000 or 0.004.
The risk ratio of breaking a leg was therefore 0.012/0.004, which equals three. The 95% CI was calculated to be 0.97 to 9.41. As the CI includes the value 1 we cannot exclude the possibility that there was no difference in the risk of footballers and non-footballers breaking a leg. However, given these results further investigation would clearly be warranted.

> ODDS RATIO
How important is it?
★★★★  Used in a third of papers.

How easy is it to understand?
Odds are difficult to understand. Just aim to understand what the ratio means.

When is it used?
Used by epidemiologists in studies looking for factors which do harm, it is a way of comparing patients who already have a certain condition (cases) with patients who do not (controls) – a “case–control study”.

What does it mean?
First, odds.
Odds are calculated by dividing the number of times an event happens by the number of times it does not happen.
One boy is born for every two births, so the odds of giving birth to a boy are 1:1 (or 50:50) = 1⁄1 = 1
If one in every 100 patients suffers a side-effect from a treatment, the odds are 1:99 = 1⁄99 = 0.0101
Compare this with risk (page 37).
Next, odds ratios. They are calculated by dividing the odds of having been exposed to a risk factor by the odds in the control group.
An odds ratio of 1 indicates no difference in risk between the groups, i.e. the odds in each group are the same.
If the odds ratio of an event is >1, the rate of that event is increased in patients who have been exposed to the risk factor. If <1, the rate of that event is reduced.
Odds ratios are frequently given with their 95% CI – if the CI for an odds ratio does not include 1 (no difference in odds), it is statistically significant.

EXAMPLES
A group of 100 patients with knee injuries, “cases”, was matched for age and sex to 100 patients who did not have injured knees, “controls”.
In the cases, 40 skied and 60 did not, giving the odds of being a skier for this group of 40:60 or 0.66.
In the controls, 20 patients skied and 80 did not, giving the odds of being a skier for the control group of 20:80 or 0.25.
We can therefore calculate the odds ratio as 0.66/0.25 = 2.64. The 95% CI is 1.41 to 5.02.
If you cannot follow the maths, do not worry! The odds ratio of 2.64 means that the number of skiers in the cases is higher than the number of skiers in the controls, and as the CI does not include 1 (no difference in risk) this is statistically significant. Therefore, we can conclude that skiers are more likely to get a knee injury than non-skiers.

Watch out for...
Authors may give the percentage change in the odds ratio rather than the odds ratio itself.
In the example above, the odds ratio of 2.64 means the same as a 164% increase in the odds of injured knees amongst skiers.
Odds ratios are often interpreted by the reader in the same way as risk ratios. This is reasonable when the odds are low, but for common events the odds and the risks (and therefore their ratios) will give very different values. For example, the odds of giving birth to a boy are 1, whereas the risk is 0.5. However, in the side-effect example given above the odds are 0.0101, a similar value to the risk of 0.01. For this reason, if you are looking at a case–control study, check that the authors have used odds ratios rather than risk ratios.

> RISK REDUCTION AND NUMBERS NEEDED TO TREAT

How important are they?
★★★  Although only quoted in less than 5% of papers, they are helpful in trying to work out how worthwhile a treatment is in clinical practice.

How easy are they to understand?
“Relative risk reduction” (RRR) and “absolute risk reduction” (ARR) need some concentration. “Numbers needed to treat” (NNT) are pretty intuitive, useful and not too difficult to work out for yourself.

When are they used?
They are used when an author wants to know how often a treatment works, rather than just whether it works.

What do they mean?
ARR is the difference between the event rate in the intervention group and that in the control group. It is also the reciprocal of the NNT and is usually given as a percentage, i.e. ARR = 100/NNT
NNT is the number of patients who need to be treated for one to get benefit.
RRR is the proportion by which the intervention reduces the event rate.

EXAMPLES
One hundred women with vaginal candida were given an oral antifungal, 100 were given placebo. They were reviewed 3 days later. The results are given in Table 4.
Table 4.
Results of placebo-controlled trial of oral antifungal agent

                  Given antifungal               Given placebo
                  Improved   No improvement      Improved   No improvement
                  80         20                  60         40

ARR = improvement rate in the intervention group – improvement rate in the control group = 80% – 60% = 20%
NNT = 100/ARR = 100/20 = 5
So five women have to be treated for one to get benefit.
The incidence of candidiasis was reduced from 40% with placebo to 20% with treatment, i.e. by half.
Thus, the RRR is 50%.

In another trial young men were treated with an expensive lipid-lowering agent. Five years later the death rate from ischaemic heart disease (IHD) is recorded. See Table 5 for the results.

Table 5. Results of placebo-controlled trial of Cleverstatin

                  Given Cleverstatin             Given placebo
                  Survived      Died             Survived      Died
                  998 (99.8%)   2 (0.2%)         996 (99.6%)   4 (0.4%)

ARR = improvement rate in the intervention group – improvement rate in the control group = 99.8% – 99.6% = 0.2%
NNT = 100/ARR = 100/0.2 = 500
So 500 men have to be treated for 5 years for one to survive who would otherwise have died.
The incidence of death from IHD is reduced from 0.4% with placebo to 0.2% with treatment – i.e. by half.
Thus, the RRR is 50%.
The RRR and NNT from the same study can have opposing effects on prescribing habits. The RRR of 50% in this example sounds fantastic. However, thinking of it in terms of an NNT of 500 might sound less attractive: for every life saved, 499 patients had unnecessary treatment for 5 years.

Watch out for...
Usually the necessary percentages are given in the abstract of the paper.
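As a quick check, the arithmetic from both trials above can be replayed in a few lines (a sketch; the function name is ours, and rates are entered as fractions rather than percentages):

```python
# ARR, RRR and NNT from event rates, as in the worked examples above.

def risk_reduction(control_rate, treated_rate):
    """Rates are event rates as fractions, e.g. 0.004 = 0.4%."""
    arr = control_rate - treated_rate  # absolute risk reduction
    rrr = arr / control_rate           # relative risk reduction
    nnt = 1 / arr                      # numbers needed to treat
    return arr, rrr, nnt

# Cleverstatin trial (Table 5): death rate 0.4% on placebo, 0.2% on treatment
arr, rrr, nnt = risk_reduction(control_rate=4/1000, treated_rate=2/1000)
print(round(arr, 4), round(rrr, 2), round(nnt))  # 0.002 0.5 500

# Antifungal trial (Table 4): no-improvement rates 40% vs 20%
print(round(risk_reduction(0.40, 0.20)[2]))  # NNT = 5
```

Note that both trials halve the event rate (RRR = 50%), yet the NNTs of 5 and 500 tell very different clinical stories.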
Calculating the ARR is easy: subtract the percentage that improved without treatment from the percentage that improved with treatment.
Again, dividing that figure into 100 gives the NNT.
With an NNT you need to know:
(a) What treatment?
    • What are the side-effects?
    • What is the cost?
(b) For how long?
(c) To achieve what?
    • How serious is the event you are trying to avoid?
    • How easy is it to treat if it happens?
For treatments, the lower the NNT the better – but look at the context.
(a) NNT of 10 for treating a sore throat with expensive blundamycin
    • not attractive
(b) NNT of 10 for prevention of death from leukaemia with a non-toxic chemotherapy agent
    • worthwhile
Expect NNTs for prophylaxis to be much larger. For example, an immunization may have an NNT in the thousands but still be well worthwhile.
Numbers needed to harm (NNH) may also be important.
NNH = 100 / ((% on treatment that had SEs) – (% not on treatment that had SEs))
In the example above, 6% of those on Cleverstatin had peptic ulceration as opposed to 1% of those on placebo.
NNH = 100/(6 – 1) = 100/5 = 20
i.e. for every 20 patients treated, one peptic ulcer was caused.

EXAM TIP
You may see ARR and RRR given as a proportion instead of a percentage. So, an ARR of 20% is the same as an ARR of 0.2.
Be prepared to calculate RRR, ARR and NNT from a set of results. You may find that it helps to draw a simple table like Table 5 and work from there.

> CORRELATION
How important is it?
★★  Only used in 15% of medical papers.

How easy is it to understand?

When is it used?
Where there is a linear relationship between two variables there is said to be a correlation between them.
Examples are height and weight in children, or socio-economic class and mortality.
The strength of that relationship is given by the “correlation coefficient”.

What does it mean?
The correlation coefficient is usually denoted by the letter “r”, for example r = 0.8.
A positive correlation coefficient means that as one variable is increasing the value for the other variable is also increasing – the line on the graph slopes up from left to right. Height and weight have a positive correlation: children get heavier as they grow taller.
A negative correlation coefficient means that as the value of one variable goes up the value for the other variable goes down – the graph slopes down from left to right. Higher socio-economic class is associated with a lower mortality, giving a negative correlation between the two variables.
If there is a perfect relationship between the two variables then r = 1 (if a positive correlation) or r = –1 (if a negative correlation).
If there is no correlation at all (the points on the graph are completely randomly scattered) then r = 0.
The following is a good rule of thumb when considering the size of a correlation:
r = 0–0.2   : very low and probably meaningless.
r = 0.2–0.4 : a low correlation that might warrant further investigation.
r = 0.4–0.6 : a reasonable correlation.
r = 0.6–0.8 : a high correlation.
r = 0.8–1.0 : a very high correlation. Possibly too high! Check for errors or other reasons for such a high correlation.
This guide also applies to negative correlations.

Examples
A nurse wanted to be able to predict the laboratory HbA1c result (a measure of blood glucose control) from the fasting blood glucoses which she measured in her clinic. On 12 consecutive diabetic patients she noted the fasting glucose and simultaneously drew blood for HbA1c. She compared the pairs of measurements and drew the graph in Fig.
10.

Fig. 10. Plot of fasting blood glucose (mmol/l) against HbA1c (%) in 12 patients with diabetes. For these results r = 0.88, showing a very high correlation.

A graph like this is known as a “scatter plot”.

An occupational therapist developed a scale for measuring physical activity and wondered how much it correlated to Body Mass Index (BMI) in 12 of her adult patients. Fig. 11 shows how they related.

Fig. 11. BMI and activity measure in 12 adult patients.

In this example, r = –0.34, indicating a low correlation. The fact that the r value is negative shows that the correlation is negative, indicating that patients with a higher level of physical activity tended to have a lower BMI.

Watch out for...
Correlation tells us how strong the association between the variables is, but does not tell us about cause and effect in that relationship.
The “Pearson correlation coefficient”, Pearson’s r, is used if the values are sampled from “normal” populations (page 9). Otherwise the “Spearman rank correlation coefficient” is used. However, the interpretation of the two is the same.
Where the author shows the graph, you can get a good idea from the scatter as to how strong the relationship is without needing to know the r value.
Authors often give P values with correlations; however, take care when interpreting them. Although a correlation needs to be significant, we need also to consider the size of the correlation. If a study is sufficiently large, even a small clinically unimportant correlation will be highly significant.
R² is sometimes given.
As it is the square of the r value, and squares are always positive, you cannot use it to tell whether the graph slopes up or down. What it does tell you is how much of the variation in one value is caused by the other.
In Fig. 10, r = 0.88. R² = 0.88 × 0.88 = 0.77. This means that 77% of the variation in HbA1c is related to the variation in fasting glucose.
Again, the closer the R² value is to 1, the higher the correlation.
It is very easy for authors to compare a large number of variables using correlation and only present the ones that happen to be significant. So, check to make sure there is a plausible explanation for any significant correlations.
Also bear in mind that a correlation only tells us about linear (straight line) relationships between variables. Two variables may be strongly related but not in a straight line, giving a low correlation coefficient.

> REGRESSION
How important is it?
★★★  Regression analysis is used in a half of papers.

How easy is it to understand?
The idea of trying to fit a line through a set of points to make the line as representative as possible is relatively straightforward. However, the mathematics involved in fitting regression models are more difficult to understand.

When is it used?
Regression analysis is used to find how one set of data relates to another.
This can be particularly helpful where we want to use one measure as a proxy for another – for example, a near-patient test as a proxy for a lab test.

What does it mean?
A regression line is the “best fit” line through the data points on a graph.
The regression coefficient gives the “slope” of the graph, in that it gives the change in value of one outcome, per unit change in the other.

EXAMPLE
Consider the graph shown in Fig. 10 (page 50). A statistician calculated the line that gave the “best fit” through the scatter of points, shown in Fig.
12.

Fig. 12. Plot with linear regression line of fasting glucose and HbA1c in 12 patients with diabetes.

The line is called a “regression line”.
To predict the HbA1c for a given blood glucose a nurse could simply plot it on the graph, as here where a fasting glucose of 15 predicts an HbA1c of 9.95%.
This can also be done mathematically. The slope and position of the regression line can be represented by the “regression equation”:
HbA1c = 3.2 + (0.45 × blood glucose)
The 0.45 figure gives the slope of the graph and is called the “regression coefficient”.
The “regression constant” that gives the position of the line on the graph is 3.2: it is the point where the line crosses the vertical axis.
Try this with a glucose of 15:
HbA1c = 3.2 + (0.45 × 15) = 3.2 + 6.75 = 9.95%

This regression equation can be applied to any regression line. It is represented by:
y = a + bx
To predict the value y (value on the vertical axis of the graph) from the value x (on the horizontal axis), b is the regression coefficient and a is the constant.

Other values sometimes given with regression
You may see other values quoted. The regression coefficient and constant can be given with their “standard errors”. These indicate the accuracy that can be given to the calculations. Do not worry about the actual value of these but look at their P values. The lower the P value, the greater the significance.
The R² value may also be given. This represents the amount of the variation in the data that is explained by the regression. In our example the R² value is 0.77.
This is stating that 77% of the variation in the HbA1c result is accounted for by variation in the blood glucose.

Other types of regression
The example above is a “linear regression”, as the line that best fits the points is straight.
Other forms of regression include:
Logistic regression. This is used where each case in the sample can only belong to one of two groups (e.g. having disease or not) with the outcome as the probability that a case belongs to one group rather than the other.
Poisson regression. Poisson regression is mainly used to study waiting times or time between rare events.
Cox proportional hazards regression model. The Cox regression model (page 60) is used in survival analysis where the outcome is time until a certain event.

Watch out for...
Regression should not be used to make predictions outside of the range of the original data. In the example above, we can only make predictions from blood glucoses which are between 5 and 20.

Regression or correlation?
Regression and correlation are easily confused.
Correlation measures the strength of the association between variables.
Regression quantifies the association. It should only be used if one of the variables is thought to precede or cause the other.

> SURVIVAL ANALYSIS: LIFE TABLES AND KAPLAN–MEIER PLOTS

How important are they?
★★★  Survival analysis techniques are used in 20% of papers.

How easy are they to understand?
Life tables are difficult to interpret. Luckily, most papers make it easy for you by showing the resulting plots – these graphs give a good visual feel of what has happened to a population over time.

When are they used?
Survival analysis techniques are concerned with representing the time until a single event occurs.
That event is often death, but it could be any other single event, for example time until discharge from hospital.
Survival analysis techniques are able to deal with situations in which the end event has not happened in every patient or when information on a case is only known for a limited duration – known as “censored” observations.

What do they mean?
Life table. A life table is a table of the proportion of patients surviving over time.
Life table methods look at the data at a number of fixed time points and calculate the survival rate at those times. The most commonly used method is Kaplan–Meier.

Kaplan–Meier
The Kaplan–Meier approach recalculates the survival rate when an end event (e.g. death) occurs in the data set, i.e. when a change happens rather than at fixed intervals.
This is usually represented as a “survival plot”. Fig. 13 shows a fictitious example.

Fig. 13. Kaplan–Meier survival plot (cumulative survival, %, against survival in years) of a cohort of patients with rheumatoid arthritis.

The dashed line shows that at 20 years, 36% of this group of patients were still alive.

Watch out for...
Life tables and Kaplan–Meier survival estimates are also used to compare survival between groups. The plots make any difference between survival in two groups beautifully clear. Fig. 14 shows the same group of patients as above, but compares survival for men and women.

Fig. 14. Kaplan–Meier survival plot comparing men and women with rheumatoid arthritis.

In this example 46% of women were still alive at 20 years but only 18% of men.
The test to compare the survival between these two groups is called the “log rank test”.
Its P value will tell you how significant the result of the test is.

> THE COX REGRESSION MODEL
Also known as the proportional hazards survival model.

How important is it?
★★  It appeared in a quarter of papers.

How easy is it to understand?
Just aim to understand the end result – the “hazard ratio” (HR).

When is it used?
The Cox regression model is used to investigate the relationship between an event (usually death) and possible explanatory variables, for instance smoking status or weight.

What does it mean?
The Cox regression model provides us with estimates of the effect that different factors have on the time until the end event.
As well as considering the significance of the effect of different factors (e.g. how much shorter male life expectancy is compared to that of women), the model can give us an estimate of life expectancy for an individual.
The “HR” is the ratio of the hazard (chance of something harmful happening) of an event in one group of observations divided by the hazard of an event in another group. An HR of 1 means the risk is 1 × that of the second group, i.e. the same. An HR of 2 implies twice the risk.

EXAMPLE
The Cox regression model shows us the effect of being in one group compared with another.
Using the rheumatoid arthritis cohort on page 58, we can calculate the effect that gender has on survival. Table 6 gives the results of a Cox model estimate of the effect.
Table 6.
Cox model estimate of the effect of sex on survival in a cohort of patients with rheumatoid arthritis

Parameter     HR(a) (95% CI)(b)       df(c)    P value(d)
Sex (Male)    1.91 (1.21 to 3.01)     1        <0.05

(a) The HR of 1.91 means that the risk of death in any particular time period for men was 1.91 times that for women.
(b) This CI means we can be 95% confident that the true HR is between 1.21 and 3.01.
(c) Degrees of freedom – see glossary, page 98.
(d) The P value of <0.05 suggests that the result is significant.

> SENSITIVITY, SPECIFICITY AND PREDICTIVE VALUE

How important are they?
★★★  They are discussed in 40% of papers, so a working knowledge is important in interpreting papers that study screening.

How easy are they to understand?
The tables themselves are fairly easy to understand. However, there is a bewildering array of information that can be derived from them. Leave this section until you are feeling fresh. You may need to go over it a few days running until it is clear in your mind.

When are they used?
They are used to analyze the value of screening or tests.

What do they mean?
Think of any screening test for a disease. For each patient:
• the disease itself may be present or absent;
• the test result may be positive or negative.
We need to know how useful the test is.
The results can be put in the “two-way table” shown in Table 7. Try working through it.

Table 7. Two-way table

                  Disease:
Test result:      Present                 Absent
Positive          A                       B (False positive)
Negative          C (False negative)      D

Sensitivity. If a patient has the disease, we need to know how often the test will be positive, i.e. “positive in disease”.
This is calculated from: A/(A + C)
This is the rate of pick-up of the disease in a test, and is called the Sensitivity.
Specificity.
If the patient is in fact healthy, we want to know how often the test will be negative, i.e. “negative in health”.
This is given by: D/(D + B)
This is the rate at which a test can exclude the possibility of the disease, and is known as the Specificity.
Positive Predictive Value. If the test result is positive, what is the likelihood that the patient will have the condition?
Look at: A/(A + B)
This is known as the Positive Predictive Value (PPV).
Negative Predictive Value. If the test result is negative, what is the likelihood that the patient will be healthy?
Here we use: D/(D + C)
This is known as the Negative Predictive Value (NPV).
In a perfect test, the sensitivity, specificity, PPV and NPV would each have a value of 1. The lower the value (the nearer to zero), the less useful the test is in that respect.

EXAMPLES
Confused? Try working through an example.
Imagine a blood test for gastric cancer, tried out on 100 patients admitted with haematemesis. The actual presence or absence of gastric cancers was diagnosed from endoscopic findings and biopsy. The results are shown in Table 8.

Table 8. Two-way table for blood test for gastric cancer

                  Gastric cancer:
Blood result:     Present      Absent
Positive          20           30
Negative          5            45

Sensitivity = 20/(20 + 5) = 20/25 = 0.8
If the gastric cancer is present, there is an 80% (0.8) chance of the test picking it up.
Specificity = 45/(30 + 45) = 45/75 = 0.6
If there is no gastric cancer there is a 60% (0.6) chance of the test being negative – but 40% will have a false positive result.
PPV = 20/(20 + 30) = 20/50 = 0.4
There is a 40% (0.4) chance, if the test is positive, that the patient actually has gastric cancer.
NPV = 45/(45 + 5) = 45/50 = 0.9
There is a 90% (0.9) chance, if the test is negative, that the patient does not have gastric cancer.
However, there is still a 10% chance of a false negative, i.e. that the patient does have gastric cancer.

Watch out for...
One more test to know about. The “Likelihood Ratio” (LR) gives an estimate of how much a test result will change the odds of having a condition.
The LR for a positive result (LR+) tells us how much the odds of the condition increase when the test result is positive.
The LR for a negative result (LR–) tells us how much the odds of the condition decrease when the test result is negative.
To calculate LR+, divide the sensitivity by (1 – specificity). To calculate LR–, divide (1 – sensitivity) by the specificity.
Head spinning again? Try using the example above to calculate the LR for a positive result.
LR+ = sensitivity/(1 – specificity) = 0.8/(1 – 0.6) = 0.8/0.4 = 2
In this example, LR+ for a positive result = 2. This means that if the test is positive in a patient, the odds of that patient having gastric cancer are doubled.

Tip: Invent an imaginary screening or diagnostic test of your own, fill the boxes in and work out the various values. Then change the results to make the test a lot less or more effective and see how it affects the values.
One thing you may notice is that in a rare condition, even a diagnostic test with a very high sensitivity may result in a low PPV.
If you are still feeling confused, you are in good company.
Many colleagues far brighter than us admit that they get confused over sensitivity, PPV etc. Try copying the following summary into your diary and refer to it when reading a paper:

• Sensitivity: how often the test is positive if the patient has the disease.
• Specificity: if the patient is healthy, how often the test will be negative.
• PPV: if the test is positive, the likelihood that the patient has the condition.
• NPV: if the test is negative, the likelihood that the patient will be healthy.
• LR: if the test is positive, how much more likely the patient is to have the disease than not have it.

EXAM TIP

Examiners love to give a set of figures which you can turn into a two-way table and ask you to calculate sensitivity, PPV etc. from them. Keep practising until you are comfortable at working with these values.

> LEVEL OF AGREEMENT AND KAPPA

Kappa is often seen written as κ.

How important is it? ★

Not often used.

How easy is it to understand?

When is it used?

It is a comparison of how well people or tests agree and is used when data can be put into ordered categories. Typically it is used to look at how accurately a test can be repeated.

What does it mean?

The kappa value can vary from zero to 1.

A κ of zero means that there is no significant agreement – no more than would have been expected by chance.

A κ of 0.5 or more is considered a good agreement; a value of 0.7 shows very good agreement.

A κ of 1 means that there is perfect agreement.

EXAMPLE

If the same cervical smear slides are examined by the cytology departments of two hospitals and κ = 0.3, it suggests that there is little agreement between the two laboratories.

Watch out for...

Kappa can be used to analyze cervical smear results because they can be ordered into separate categories, e.g.
CIN 1, CIN 2, CIN 3 – so-called "ordinal data". When the variable that is being considered is continuous, for example blood glucose readings, the "intra-class correlation coefficient" should be used (see glossary).

> OTHER CONCEPTS

Multiple testing

Importance: ★
Ease of understanding:

One fundamental principle of statistics is that we accept there is a chance we will come to the wrong conclusion. If we reject a null hypothesis with a P value of 0.05, then there is still the 5% possibility that we should not have rejected the hypothesis and therefore a 5% chance that we have come to the wrong conclusion.

If we do lots of testing then this chance of making a mistake will be present each time we do a test, and therefore the more tests we do the greater the chance of drawing the wrong conclusion. To allow for this, statisticians adjust the P value to keep the overall chance of coming to the wrong conclusion at a certain level (usually 5%). The most commonly used method is the Bonferroni correction (see glossary).

1- and 2-tailed tests

Importance: ★
Ease of understanding:

When trying to reject a "null hypothesis" (page 27), we are generally interested in two possibilities: either we can reject it because the new treatment is better than the current one, or because it is worse. By allowing the null hypothesis to be rejected from either direction we are performing a "two-tailed test" – we are rejecting it when the result is in either "tail" of the test distribution.

Occasionally there are situations where we are only interested in rejecting a hypothesis if the new treatment is worse than the current one but not if it is better. This would be better analyzed with a one-tailed test. However, be very sceptical of one-tailed tests. A P value that is not quite significant on a two-tailed test may become significant if a one-tailed test is used.
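Both ideas – adjusting for multiple tests, and the effect of switching from a two-tailed to a one-tailed test – can be put into numbers. A minimal sketch (our own function names; the Bonferroni correction is the method named in the glossary, and halving the P value assumes a symmetric test statistic):

```python
def bonferroni_threshold(alpha, n_tests):
    """Divide the significance level by the number of tests, keeping the
    overall chance of a false positive at roughly alpha."""
    return alpha / n_tests

def one_tailed_p(two_tailed_p):
    """For a symmetric test statistic, a directional (one-tailed) test
    simply halves the two-tailed P value - which is exactly why a
    'not quite significant' two-tailed result can sneak under 0.05."""
    return two_tailed_p / 2

print(round(bonferroni_threshold(0.05, 5), 4))  # 0.01: each of 5 tests needs P < 0.01
print(one_tailed_p(0.08))                       # 0.04: now 'significant' - be sceptical
```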
Authors have been known to use this to their advantage.

Incidence

Importance: ★★★★
Ease of understanding:

The number of new cases of a condition over a given time as a percentage of the population.

Example: Each year 15 people in a practice of 1000 patients develop Brett's palsy.

(15 / 1000) × 100 = yearly incidence of 1.5%

Prevalence (= Point Prevalence Rate)

Importance: ★★★★
Ease of understanding:

The existing number of cases of a condition at a single point in time as a percentage of the population.

Example: At the time of a study 90 people in a practice of 1000 patients were suffering from Brett's palsy (15 diagnosed in the last year plus 75 diagnosed in previous years).

(90 / 1000) × 100 = a prevalence of 9%

With chronic diseases like the fictitious Brett's palsy, the incidence will be lower than the prevalence – each year's new diagnoses swell the number of existing cases.

With short-term illnesses the opposite may be true. 75% of a population may have a cold each year (incidence), but at any moment only 2% are actually suffering (prevalence).

EXAM TIP

Check that you can explain the difference between incidence and prevalence.

Power

Importance: ★★
Ease of understanding:

The power of a study is the probability that it would detect a statistically significant difference.

If the difference expected is 100% cure compared with 0% cure with previous treatments, a very small study would have sufficient power to detect that. However, if the expected difference is much smaller, e.g. 1%, then a small study would be unlikely to have enough power to produce a result with statistical significance.

Bayesian statistics

Importance: ★
Ease of understanding:

Bayesian analysis is not often used.
It is a totally different statistical approach to the classical "frequentist" statistics explained in this book.

In Bayesian statistics, rather than considering the sample of data on its own, a "prior distribution" is set up using information that is already available. For instance, the researcher may give a numerical value and weighting to previous opinion and experience as well as previous research findings.

One consideration is that different researchers may put different weightings on the same previous findings.

The new sample data are then used to adjust this prior information to form a "posterior distribution". Thus these resulting figures have taken both the disparate old data and the new data into account.

Bayesian methodology is intuitively appealing because it reflects how we think. When we read about a new study we do not consider its results in isolation; we factor it into our pre-existing opinions, knowledge and experience of dealing with patients.

It is only recently that computer power has been sufficient to calculate the complex models that are needed for Bayesian analysis.

> STATISTICS AT WORK

In this section we have given five real-life examples of how researchers use statistical techniques to describe and analyze their work.

The extracts have been taken from research papers published in the British Medical Journal (reproduced with permission of The BMJ Publishing Group), The Lancet (reproduced with permission from Elsevier), and the New England Journal of Medicine (reproduced with permission from Massachusetts Medical Society).
If you want to see the original articles, they are available at:

• The BMJ website http://www.bmj.com/
• The Lancet website http://www.thelancet.com/
• The NEJM website http://content.nejm.org/

If you wish, you can use this part to test what you have learnt:

• First, go through the abstracts and results and note down what statistical techniques have been used.
• Then try to work out why the authors have used those techniques.
• Next, try to interpret the results.
• Finally, check out your understanding by comparing it with our commentary.

The following extract is reproduced with permission from The BMJ Publishing Group.

Standard deviation, relative risk and numbers needed to treat

Alho, O.-P., Koivunen, P., Penna, T., et al. (2007). Tonsillectomy versus watchful waiting in recurrent streptococcal pharyngitis in adults: randomised controlled trial. BMJ 334: 939. (Originally published online 8 Mar 2007; doi:10.1136/bmj.39140.632604.55.)

Abstract

Objective: To determine the short-term efficacy and safety of tonsillectomy for recurrent streptococcal pharyngitis in adults.

Design: Randomised controlled trial.

Setting: Academic referral centre in Finland.

Participants: 70 adults with documented recurrent episodes of streptococcal group A pharyngitis.

Intervention: Instant tonsillectomy (n=36) or remaining on waiting list as control (n=34).

Main outcome measures: Percentage change in the risk of an episode of streptococcal pharyngitis at 90 days. Rates of all episodes of pharyngitis and days with symptoms and adverse effects.

Results: The mean (SD) follow-up was 164 (63) days in the control group and 170 (12) days in the tonsillectomy group. At 90 days, streptococcal pharyngitis had recurred in 24% (8/34) in the control group and 3% (1/36) in the tonsillectomy group (difference 21%; 95% confidence interval 6% to 36%). The number needed to undergo tonsillectomy to prevent one recurrence was 5 (95% confidence interval 3 to 16).
During the whole follow-up, the rates of other episodes of pharyngitis and days with throat pain and fever were significantly lower in the tonsillectomy group than in the control group. The most common morbidity related to tonsillectomy was postoperative throat pain (mean length 13 days, SD 4).

Conclusions: Adults with a history of documented recurrent episodes of streptococcal pharyngitis were less likely to have further streptococcal or other throat infections or days with throat pain if they had their tonsils removed.

What statistical methods were used and why?

While tonsillectomy has been used to prevent recurrent streptococcal throat infections, a recent review showed no evidence that it was effective in adults with recurrent episodes of streptococcal pharyngitis, so this trial sought to determine the effects of tonsillectomy.

The authors assumed that the length of follow-up was normally distributed, so compared the two groups by giving their mean length of follow-up. They wanted to indicate how much the length of follow-up was spread around the mean, so gave the standard deviation of the number of days.

As there were different numbers of adults in each group, percentages were given as a scale on which to compare the number of patients with a recurrence of streptococcal pharyngitis (the incidence of pharyngitis). This was a prospective study, following two cohorts of adults, and used the difference in incidences of recurrence of streptococcal pharyngitis to compare the effect of the two different methods of management.

The authors wanted to give the range that was likely to contain the true difference, so gave it with its 95% confidence interval.

The null hypothesis was that there would be no difference between the incidences of recurrence of streptococcal pharyngitis in the two groups.

What do the results mean?

The standard deviation of the mean duration of postoperative throat pain was 4 days. In this group:

1 SD below the
average is 13 – 4 = 9 days.

1 SD above the average is 13 + 4 = 17 days.

± 1 SD will include 68.2% of the subjects, so 68.2% had pain for between 9 and 17 days.

95.4% had pain for between 5 and 21 days (± 2 SD).

99.7% of the patients would have had postoperative throat pain for between 1 and 25 days (± 3 SD).

In the control group, 8 of the 34 patients had a recurrence of streptococcal pharyngitis. As a percentage, this is:

(8 / 34) × 100 = 24%

This is the same as the risk, or probability, that a recurrence would happen in this group.

The difference in incidence of recurrence over 90 days was 21%. The confidence interval (CI) of 6% to 36% doesn't include 0% (no difference in risk), so it is statistically significant.

The risk ratio wasn't given in the abstract, but can be calculated by dividing the risk in the tonsillectomy group by that in the control group:

3% / 24% = 0.125

The risk ratio of <1 shows that the rate of recurrence in the tonsillectomy group was lower than that in the control group.

From the results of the research, the clinician can calculate absolute risk reduction (ARR), number needed to treat (NNT) and relative risk reduction (RRR) from doing a tonsillectomy.

ARR = [risk in the control group] – [risk in the tonsillectomy group] = 24% – 3% = 21%

NNT = 100 / ARR = 100 / 21 = 4.8

So, for every 5 adults, tonsillectomy would prevent one patient from experiencing a recurrence of streptococcal pharyngitis in the first 90 days.

The risk of recurrence was reduced from 24% to 3% by tonsillectomy, so the RRR is given by:

RRR = (24% – 3%) / 24% = 21 / 24 = 0.88 = 88%

The researchers concluded that adults with recurrent episodes of streptococcal pharyngitis were less likely to have further streptococcal throat infections or days with throat pain if they had their tonsils removed.

Tonsillectomy significantly reduced rates of a recurrence of streptococcal pharyngitis in the first 90 days, with an NNT of 5.

The following extract
is reproduced with permission from Elsevier.

Odds ratios and confidence intervals

Yin, P., Jiang, C.Q., Cheng, K.K., et al. (2007). Passive smoking exposure and risk of COPD among adults in China: the Guangzhou Biobank Cohort Study. Lancet 370: 751–57.

Summary

Background: Chronic obstructive pulmonary disease (COPD) is a leading cause of mortality in China, where the population is also exposed to high levels of passive smoking, yet little information exists on the effects of such exposure on COPD. We examined the relation between passive smoking and COPD and respiratory symptoms in an adult Chinese population.

Methods: We used baseline data from the Guangzhou Biobank Cohort Study. Of 20 430 men and women over the age of 50 recruited in 2003–06, 15 379 never smokers (6497 with valid spirometry) were included in this cross-sectional analysis. We measured passive smoking exposure at home and work by two self-reported measures (density and duration of exposure). Diagnosis of COPD was based on spirometry and defined according to the GOLD guidelines.

Findings: There was an association between risk of COPD and self-reported exposure to passive smoking at home and work (adjusted odds ratio 1.48, 95% CI 1.18–1.85 for high level exposure; equivalent to 40 h a week for more than 5 years). There were significant associations between reported respiratory symptoms and increasing passive smoking exposure (1.16, 1.07–1.25 for any symptom).

What statistical methods were used and why?

The authors felt that the evidence of effects of passive smoking exposure on lung function showed mixed results.

They therefore analyzed a large cohort of patients to study the association between passive smoking and COPD.
They compared the odds of COPD in passive smokers with those of patients who had not been exposed to tobacco, expressing the comparison as an odds ratio.

The authors wanted to give the ranges that were likely to contain the true odds ratios, so gave them with their 95% confidence intervals (CI).

What do the results mean?

In the Lancet paper, the authors gave the results and odds ratios for 8 outcomes. We have extracted the results for COPD risk for those with the highest passive exposure to tobacco (Table 9).

Table 9. Relation between self-reported passive smoking exposure and COPD in never smokers: total hours of adulthood home and work exposure.

                            n (%) without COPD    n (%) with COPD    OR (95% CI)
<2 yrs of 40 h per week     2999 (94.0)           191 (6.0)          1
2–5 yrs of 40 h per week    1409 (94.5)            82 (5.5)          0.91 (0.70–1.19)
>5 yrs of 40 h per week     1660 (91.4)           156 (8.6)          1.48 (1.18–1.84), P = 0.001

Consider the risk of COPD:

Odds for COPD in patients with low passive exposure (<2 yrs) to tobacco
= (number of times the event (COPD) happens) / (number of times it doesn't happen)
= 191 / 2999 = 0.064

Odds for COPD in patients with high passive exposure (>5 yrs) to tobacco
= 156 / 1660 = 0.094

Odds ratio (OR)
= (odds of COPD in patients with high tobacco exposure) / (odds of COPD in patients with low tobacco exposure)
= 0.094 / 0.064 = 1.48

The OR of >1 indicates that the rate of COPD is increased in patients with high levels of passive smoking compared to patients with low levels.

The 95% confidence interval was calculated to be 1.18–1.85. As the CI for the odds ratio doesn't include 1 (no difference in odds), the difference between the results in this study is statistically significant.

The OR between reported respiratory symptoms and increasing passive smoking exposure was 1.16. Again, the OR of >1 indicates an increased risk with high levels of passive smoking.
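The arithmetic above can be reproduced with a short Python sketch (the function names are ours, not the paper's), using the counts from Table 9. Note the published 1.48 was additionally adjusted for covariates; the crude calculation happens to round to the same figure.

```python
def odds(events, non_events):
    """Odds = times the event happens / times it doesn't."""
    return events / non_events

def odds_ratio(events_exposed, non_events_exposed,
               events_unexposed, non_events_unexposed):
    """Odds in the exposed group divided by odds in the reference group."""
    return (odds(events_exposed, non_events_exposed)
            / odds(events_unexposed, non_events_unexposed))

# High exposure (>5 yrs): 156 with COPD, 1660 without.
# Low exposure (<2 yrs, reference): 191 with COPD, 2999 without.
or_copd = odds_ratio(156, 1660, 191, 2999)
print(round(or_copd, 2))  # 1.48
```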
The fact that the CI for the OR is 1.07–1.25, and doesn't include 1, indicates that this result is also significant.

The authors concluded that exposure to passive smoking is associated with an increased prevalence of COPD and respiratory symptoms.

The following extract is reproduced with permission from The BMJ Publishing Group.

Correlation and regression

Priest, P., Yudkin, P., McNulty, C. and Mant, D. (2001). Antibacterial prescribing and antibacterial resistance in English general practice: cross sectional study. BMJ 323: 1037–1041.

Abstract

Objective: To quantify the relation between community based antibacterial prescribing and antibacterial resistance in community acquired disease.

Design: Cross sectional study of antibacterial prescribing and antibacterial resistance of routine isolates within individual practices and primary care groups.

Setting: 405 general practices in south west and north west England.

Main outcome measures: Correlation between antibacterial prescribing and resistance for urinary coliforms.

Results: Antibacterial resistance in urinary coliform isolates is common but the correlation with prescribing rates was relatively low for individual practices (ampicillin and amoxicillin rs = 0.20, P = 0.001). The regression coefficient was also low; a practice prescribing 20% less ampicillin and amoxicillin than average would have about 1% fewer resistant isolates.

[Scatter plot: x-axis, ampicillin/amoxicillin prescriptions/1000 patients/year (0–900); y-axis, % of urinary coliforms resistant to ampicillin/amoxicillin (0–80); fitted regression line y = 0.019x + 39.6.]

Fig. 15.
Graph of regression of antibacterial resistance of urinary coliforms on prescribing at practice level

What statistical methods were used and why?

The researchers wanted to know whether there was a relationship between antibacterial prescribing and antibiotic resistance, so needed to measure the correlation between the two.

As the data were skewed, i.e. not normally distributed, they needed to use the Spearman rank correlation coefficient as a measure of the correlation.

They wished to predict how much a 20% reduction in antibiotic prescribing would be likely to reduce antibacterial resistance. To do this they needed to calculate the regression coefficient.

What do the results mean?

The Spearman rank correlation coefficient for penicillin prescribing and resistance for urinary coliforms is given as rs = 0.20.

The r indicates that it is a correlation coefficient, and the subscript s shows that it is a Spearman rank correlation coefficient.

The r value of 0.20 indicates a low correlation.

P = 0.001 indicates that an r value of 0.20 would have happened by chance 1 in 1000 times if there were really no correlation.

Figure 15 shows the authors' regression graph comparing antibiotic prescribing with antibiotic resistance.
Each dot represents one practice's results. The line through the dots is the regression line.

The authors show the regression equation in the graph: y = 0.019x + 39.6.

In this equation:

x (the horizontal axis of the graph) indicates the number of penicillin prescriptions per 1000 patients per year.

y (the vertical axis of the graph) indicates the percentage of urinary coliforms resistant to penicillin.

The regression constant of 39.6 is the point at which the regression line would hit the vertical axis of the graph (when number of prescriptions = 0).

The regression coefficient gives the slope of the line – the percentage of resistant bacteria reduces by 0.019 for every one less penicillin prescription per 1000 patients per year.

Using this value, the authors calculated that a practice prescribing 20% less penicillin than average would only have about 1% fewer resistant bacteria.

The authors therefore suggested that trying to reduce the overall level of antibiotic prescribing in UK general practice may not be the most effective strategy for reducing antibiotic resistance in the community.

The following extract is reproduced with permission from the Massachusetts Medical Society, © 2007; all rights reserved.

Survival analysis

Sjöström, L., Narbro, K., Sjöström, C.D., et al. (2007). Effects of bariatric surgery on mortality in Swedish obese subjects. N Engl J Med 357: 741–52.

Abstract

Background: Obesity is associated with increased mortality. Weight loss improves cardiovascular risk factors, but no prospective interventional studies have reported whether weight loss decreases overall mortality.
In fact, many observational studies suggest that weight reduction is associated with increased mortality.

Methods: The prospective, controlled Swedish Obese Subjects study involved 4047 obese subjects. Of these subjects, 2010 underwent bariatric surgery (surgery group) and 2037 received conventional treatment (matched control group). We report on overall mortality during an average of 10.9 years of follow-up. Cox proportional-hazards models were used to evaluate time to death while adjusting for potentially significant risk factors.

Results: There were 129 deaths in the control group and 101 deaths in the surgery group. The overall hazard ratio was 0.76 in the surgery group (P = 0.04), as compared with the control group.

[Plot: cumulative mortality (%) against years of follow-up (0–16) for the surgery and control groups, with the control curve above the surgery curve and the numbers at risk in each group tabulated beneath the graph.]

Fig. 16. Cumulative mortality. The hazard ratio for subjects who underwent bariatric surgery, as compared with control subjects, was 0.76 (95% CI, 0.59 to 0.99; P=0.04), with 129 deaths in the control group and 101 in the surgery group.

What statistical methods were used and why?

The authors wanted to find out whether surgical procedures to treat severe obesity (bariatric surgery) affect mortality.

The null hypothesis was that there was no difference between the surgery group and those receiving conventional treatment.

The cumulative mortality plot (Fig. 16) was used to give a visual representation of the mortality over time in each group.
The survival between these two groups was compared using the hazard ratio, and the likelihood that there was no real difference between the groups was given by the P value.

The Cox regression model was used so that the effect of different variables could be studied.

What do the results mean?

The ratio of the chance (hazard) of death in the surgery group, divided by the chance of death in the conventional treatment group, was calculated to be 0.76. A hazard ratio (HR) of 1 would mean the risk is 1× that of the second group, i.e. the same. The hazard ratio of 0.76 in this study means that there was a lower risk in the surgery group.

P = 0.04, so the probability of the difference having happened by chance is 0.04 in 1, i.e. 1 in 25. As P<0.05, this is considered to be statistically significant.

The same point is made by the 95% confidence interval given in Fig. 16. The CI of 0.59 to 0.99 doesn't include 1 (no difference in risk), again demonstrating statistical significance.

The conclusion was that bariatric surgery for severe obesity is associated with decreased overall mortality.

The following extract is reproduced with permission from The BMJ Publishing Group.

Sensitivity, specificity and predictive values

Hopper, A.D., Cross, S.S., Hurlstone, D.P., et al. (2007). Pre-endoscopy serological testing for coeliac disease: evaluation of a clinical decision tool. BMJ 334: 729. (Originally published online 23 Mar 2007; doi:10.1136/bmj.39133.668681.BE.)

Abstract

Objective: To determine an effective diagnostic method of detecting all cases of coeliac disease in patients referred for gastroscopy without performing routine duodenal biopsy.

Design: An initial retrospective cohort of patients attending for gastroscopy was analysed to derive a clinical decision tool that could increase the detection of coeliac disease without performing routine duodenal biopsy.
The tool incorporated serology and stratifying patients according to their referral symptoms ("high risk" or "low risk" of coeliac disease). The decision tool was then tested on a second cohort of patients attending for gastroscopy. In the second cohort all patients had a routine duodenal biopsy and serology performed.

Participants: 2000 patients referred for gastroscopy recruited prospectively.

Main outcome measure: Evaluation of a clinical decision tool using patients' referral symptoms, tissue transglutaminase antibody results, and duodenal biopsy results.

Results: No cases of coeliac disease were missed by the pre-endoscopy testing algorithm. Evaluation of the clinical decision tool gave a sensitivity, specificity, positive predictive value, and negative predictive value of 100%, 60.8%, 9.3%, and 100%, respectively.

Table 10. Two-way table of number of patients categorised as having coeliac disease by clinical decision tool and by gold standard (duodenal biopsy): values derived from data given in the paper.

                                      Outcome of duodenal biopsy
                                      Positive    Negative    Total
Result from clinical    Positive         77          753        830
decision tool           Negative          0         1170       1170
Total                                    77         1923       2000

What statistical methods were used and why?

The researchers wanted to know whether they could use their clinical decision tool to replace duodenal biopsy in diagnosing coeliac disease. They decided whether patients actually had coeliac disease by using a "gold standard" test, duodenal biopsy.

We have used a two-way table to show the results, shown in Table 10.

Sensitivity, specificity, predictive values and likelihood ratios were used to give the value of their clinical decision tool.

What do the results mean?

In a perfect test, the sensitivity, specificity and predictive values would each have a value of 1.
The lower the value (the nearer to zero), the less useful the test is in that respect.

Sensitivity shows the rate of pick-up of the disease – if a patient has coeliac disease, how often the tool will be positive. This is calculated from:

77 / 77 = 1.0, or 100%

Specificity is the rate at which the tool can exclude coeliac disease – if the patient is in fact healthy, how often the tool will be negative. This is given by:

1170 / 1923 = 0.608, or 60.8%

Positive predictive value is the likelihood that the patient has coeliac disease if the tool is positive.

77 / 830 = 0.093, or 9.3%

So, in patients for whom the tool is positive, the incidence of coeliac disease is 9.3%. However, 91 in 100 patients with a positive result won't actually have coeliac disease.

Negative predictive value shows the likelihood that the patient won't have coeliac disease if the tool result is negative.

1170 / 1170 = 1.0, or 100%

Following a negative test, no patient will turn out to have the condition.

The likelihood ratio (LR) gives an estimate of how much a test result will change the odds of having coeliac disease.

The LR for a positive result (LR+) tells us how much the odds of coeliac disease increase when the tool result is positive.

The LR for a negative result (LR–) tells us how much the odds of coeliac disease decrease when the tool result is negative.

LR+ = sensitivity / (1 – specificity) = 1 / (1 – 0.608) = 1 / 0.392 = 2.55

So, if the tool is positive in a patient, the odds of having coeliac disease are increased by a factor of 2.6.

Pre-endoscopy serological testing in combination with biopsy of high risk cases had a 100% negative predictive value – no patients who were negative were found to have coeliac disease.
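As a final check, the decision-tool figures from Table 10 can be run through the same arithmetic in one go (a Python sketch with our own function name, not anything from the paper):

```python
def evaluate_tool(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and LR+ from the four
    cells of a two-way table (tp/fp/fn/tn = true/false positives/negatives)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_plus": sens / (1 - spec),
    }

# Coeliac decision tool: TP = 77, FP = 753, FN = 0, TN = 1170
m = evaluate_tool(77, 753, 0, 1170)
print({k: round(v, 3) for k, v in m.items()})
# sensitivity 1.0, specificity 0.608, ppv 0.093, npv 1.0, lr_plus 2.554
```

Note how the zero in the false-negative cell is what drives both the 100% sensitivity and the 100% NPV.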
The authors therefore suggested that patients with a negative result might not need a duodenal biopsy.

> GLOSSARY

Cross-references to other parts of the glossary are given in italics.

Absolute risk reduction, ARR
The difference between the event rate in the intervention group and that in the control group. It is also the reciprocal of the NNT.

Alpha, α
The alpha value is equivalent to the P value and should be interpreted in the same way.

ANCOVA (ANalysis of COVAriance)
Analysis of covariance is an extension of analysis of variance (ANOVA) to allow for the inclusion of continuous variables in the model. See page 29.

ANOVA (ANalysis Of VAriance)
This is a group of statistical techniques used to compare the means of two or more samples to see whether they come from the same population. See page 29.

Association
A word used to describe a relationship between two variables.

Beta, β
The beta value is the probability of accepting a hypothesis that is actually false. 1 – β is known as the power of the study.

Bayesian statistics
An alternative way of analyzing data, it creates and combines numerical values for prior belief, existing data and new data. See page 72.

Binary variable
See categorical variable below.

Bi-modal distribution
Where there are 2 modes in a set of data, it is said to be bi-modal. See page 15.

Binomial distribution
When data can only take one of two values (for instance male or female), it is said to follow a binomial distribution.

Bonferroni
A method that allows for the problems associated with making multiple comparisons. See page 69.

Box and whisker plot
A graph showing the median, range and inter-quartile range of a set of values. See page 13.

Case–control study
A retrospective study which investigates the relationship between a risk factor and one or more outcomes.
This is done by selecting patients who already have the disease or outcome (cases), matching them to patients who do not (controls) and then comparing the effect of the risk factor on the two groups. Compare this with Cohort study. See page 40.

Cases
This usually refers to patients but could refer to hospitals, wards, counties, blood samples etc.

Categorical variable
A variable whose values represent different categories of the same feature. Examples include different blood groups, different eye colours, and different ethnic groups. When the variable has only two categories, it is termed "binary" (e.g. gender). Where there is some inherent ordering (e.g. mild, moderate, severe), this is called an "ordinal" variable.

Causation
The direct relationship of the cause to the effect that it produces, usually established in experimental studies.

Censored
A censored observation is one where we do not have information for all of the observation period. This is usually seen in survival analysis where patients are followed for some time and then move away or withdraw consent for inclusion in the study. We cannot include them in the analysis after this point as we do not know what has happened to them. See page 57.

Central tendency
The "central" scores in a set of figures. Mean, median and mode are measures of central tendency.

Chi-squared test, χ²
The chi-squared test is a test of association between two categorical variables. See page 34.

Cohort study
A prospective, observational study that follows a group (cohort) over a period of time and investigates the effect of a treatment or risk factor. Compare this with case–control study. See page 37.

Confidence interval, CI
A range of values within which we are fairly confident the true population value lies. For example, a 95% CI means that we can be 95% confident that the population value lies within those limits.
See page 20.\n\nConfounding\nA confounding factor is the effect of a covariate or\nfactor that cannot be separated out. For example, if\nwomen with a certain condition received a new\ntreatment and men received placebo, it would not be\npossible to separate the treatment effect from the\neffect due to gender. Therefore gender would be a\nconfounding factor.\n\nContinuous variable\nA variable which can take any value within a given\nrange, for instance BP. Compare this with discrete\nvariable.\n\nCorrelation\nWhen there is a linear relationship between two\nvariables there is said to be a correlation between\nthem. Examples are height and weight in children, or\nsocio-economic class and mortality.\nMeasured on a scale from -1 (perfect negative\ncorrelation), through 0 (no relationship between the\nvariables at all), to +1 (perfect positive correlation).\nSee page 48.\n\nCorrelation coefficient\nA measure of the strength of the linear relationship\nbetween two variables. See page 48.\n\nCovariate\nA covariate is a continuous variable that is not of\nprimary interest but is measured because it may\naffect the outcome and may therefore need to be\nincluded in the analysis.\n\nCox regression model\nA method which explores the effects of different\nvariables on survival. See page 60.\n\nDatabase\nA collection of records that is organized for ease and\nspeed of retrieval.\n\nDegrees of freedom, df\nThe number of degrees of freedom, often abbreviated\nto df, is the number of independent pieces of\ninformation available for the statistician to make the\ncalculations.\n\nDescriptive statistics\nDescriptive statistics are those which describe the\ndata in a sample. They include means, medians,\nstandard deviations, quartiles and histograms. They\nare designed to give the reader an understanding of\nthe data. 
Compare this with inferential statistics.\n\nDiscrete variable\nA variable where the data can only be certain values,\nusually whole numbers, for example the number of\nchildren in families. Compare this with continuous\nvariable.\n\nDistribution\nA distinct pattern of data may be considered as\nfollowing a distribution. Many patterns of data have\nbeen described, the most useful of which is the\nnormal distribution. See page 9.\n\nFisher’s exact test\nFisher’s exact test is an accurate test for association\nbetween categorical variables. See page 35.\n\nFields\nSee variables below.\n\nHazard ratio, HR\nThe HR is the ratio of the hazard (chance of\nsomething harmful happening) of an event in one\ngroup of observations divided by the hazard of an\nevent in a different group. An HR of 1 implies no\ndifference in risk between the two groups, an HR of\n2 implies double the risk. The HR should be stated\nwith its confidence intervals. See page 61.\n\nHistogram\nA graph of continuous data with the data categorized\ninto a number of classes. See example on page 15.\n\nHypothesis\nA statement which can be tested that predicts t```"
] | [
null,
"https://b-ok.org/book/2464071/css/jscomments/loader.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.937829,"math_prob":0.8440683,"size":105232,"snap":"2019-26-2019-30","text_gpt3_token_len":24367,"char_repetition_ratio":0.14932337,"word_repetition_ratio":0.08029077,"special_character_ratio":0.2375038,"punctuation_ratio":0.10551791,"nsfw_num_words":6,"has_unicode_error":false,"math_prob_llama3":0.9653755,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-25T10:43:34Z\",\"WARC-Record-ID\":\"<urn:uuid:c0a6e859-d104-40ae-ba5f-ed11d3df0579>\",\"Content-Length\":\"141810\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3748afcf-682f-4596-b2ab-4ce5b0bf16f5>\",\"WARC-Concurrent-To\":\"<urn:uuid:8d68ad12-35d6-45b4-8d1e-e4637173b170>\",\"WARC-IP-Address\":\"179.43.147.124\",\"WARC-Target-URI\":\"https://b-ok.org/book/2464071/d862b2\",\"WARC-Payload-Digest\":\"sha1:MJOKQGNOHIHQABVNSFYKN6O4NLNHHEUO\",\"WARC-Block-Digest\":\"sha1:3KCPEXCP2YZKUK6BX3ISRQJS5BDX3FGO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999817.30_warc_CC-MAIN-20190625092324-20190625114324-00057.warc.gz\"}"} |
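As a quick numeric illustration of two related glossary entries in the record above (absolute risk reduction and NNT): the ARR is the difference between the event rates of the two groups, and the number needed to treat is its reciprocal. A minimal Python sketch; the event rates below are made-up example values, not figures from the source:

```python
# Illustrative check of the ARR/NNT definitions; event rates are invented.
control_event_rate = 0.20       # 20% of the control group have the event
intervention_event_rate = 0.15  # 15% of the intervention group do

# Absolute risk reduction (ARR) and number needed to treat (NNT = 1/ARR).
arr = control_event_rate - intervention_event_rate
nnt = 1 / arr

print(f"ARR = {arr:.2f}")  # ARR = 0.05
print(f"NNT = {nnt:.0f}")  # NNT = 20
```

With these example rates, roughly 20 patients would need to be treated to prevent one additional event.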
https://developer.valvesoftware.com/w/index.php?title=Character_Setup_Overview:jp&oldid=18121 | [
"# Character Setup Overview:jp\n\n(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)\n\noriginally translated by N-neko(C-SEC), 2005/3/25\noriginal English version: Character Setup Overview\n\n=\n\n=\n\n[[]]\n\n=\n\n```\"eyeball_r.tga\"<\"eyeball_l.tga\"<\"dark_eyeball_r.tga\"<\"dark_eyeball_l.tga\"< ```\n\n``` [[]] ```\n\n```= ```\n\n```\"mouth.tga\"<\"fmouth.tga\"< ```\n\n```= ```\n\n``` ```\n\n```= ```\n\n``` ```\n\n```- ```\n\n``` ```\n\n``` ```\n\n```= ```\n\n``` ```\n\n``` ```\n\n```= ```\n\n```キャラクタ顔アニメーションシェイプキー一覧 ```\n\n```- ```\n\n``` ```\n\n``` ```\n\n``` ```\n\n``` ```\n\n``` [[]] ```\n\n```= ```\n\n``` ```\n\n``` ```\n\n``` ```\n\n```[[]] ```\n\n```= ```\n\n```= ```\n\n``` ```\n\n``` ```\n\n``` ```\n\n``` ```\n\n``` = ```\n\n``` ```\n\n``` ```\n\n``` ```\n\n``` ```\n\n``` = ```\n\n``` ```\n\n``` [[]] ```\n\n```[[]] ```\n\n```= ```\n\n``` ```\n\n``` ```\n\n``` [[]] (modelname_reference.smd)< ```\n\n``` = ```\n\n```[http://www.c-- ```\n\n```[http://www.c-- ```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7882362,"math_prob":0.90392554,"size":448,"snap":"2021-31-2021-39","text_gpt3_token_len":165,"char_repetition_ratio":0.18468468,"word_repetition_ratio":0.0,"special_character_ratio":0.390625,"punctuation_ratio":0.1875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99252987,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-17T15:50:29Z\",\"WARC-Record-ID\":\"<urn:uuid:bebc00bf-635e-43db-8c97-6c31796cf466>\",\"Content-Length\":\"19966\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fb476aca-8706-45b5-a02e-51fa37e44e27>\",\"WARC-Concurrent-To\":\"<urn:uuid:67b26e97-33fa-4f6d-ada8-e873bf822490>\",\"WARC-IP-Address\":\"104.18.23.32\",\"WARC-Target-URI\":\"https://developer.valvesoftware.com/w/index.php?title=Character_Setup_Overview:jp&oldid=18121\",\"WARC-Payload-Digest\":\"sha1:OWYBXLQI5V3NJXXTTGLUJUDT7XNPYWLR\",\"WARC-Block-Digest\":\"sha1:YB2QG4VGLN5ZW2JOKXVZLGJYNERGCXR2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780055684.76_warc_CC-MAIN-20210917151054-20210917181054-00302.warc.gz\"}"} |
https://www.johncanessa.com/2021/02/10/binary-tree-pruning/ | [
"# Binary Tree Pruning\n\nHave an on-line call at 02:30 PM. I believe I am well prepared. Will see how it goes.\n\nEarlier today I selected LeetCode 814 Binary Tree Pruning problem.\n\n```We are given the head node root of a binary tree,\nwhere additionally every node's value is either a 0 or a 1.\n\nReturn the same tree where every subtree (of the given tree)\nnot containing a 1 has been removed.\n\n(Recall that the subtree of a node X is X, plus every node that is a descendant of X.)\n\nNote:\n\no The binary tree will have at most 200 nodes.\no The value of each node will only be 0 or 1.\n```\n\nWe are given a binary tree whose nodes are holding 0 or 1 as value. We need to prune the BT as described in the requirements. If you are interested in this problem please navigate to the LeetCode web site and read the actual requirements. In addition LeetCode has a set of test cases.",
null,
"In this post we will solve the problem using the Java programming language on a Windows 10 computer using the VSCode IDE. You can use any of the languages supported by LeetCode and solve the problem on the LeetCode side with the IDE provided.\n\nSince I am going to solve the problem on my computer I will need to generate test scaffolding. Note that such code IS NOT PART OF THE SOLUTON.\n\n```/**\n* Definition for a binary tree node.\n* public class TreeNode {\n* int val;\n* TreeNode left;\n* TreeNode right;\n* TreeNode() {}\n* TreeNode(int val) { this.val = val; }\n* TreeNode(int val, TreeNode left, TreeNode right) {\n* this.val = val;\n* this.left = left;\n* this.right = right;\n* }\n* }\n*/\nclass Solution {\npublic TreeNode pruneTree(TreeNode root) {\n\n}\n}\n```\n\nThe signature for the method in question is as expected. In addition we will be using the TreeNode class to represent and manipulate the binary tree.\n\n```1,null,0,0,1\nmain <<< strArr: [1, null, 0, 0, 1]\nmain <<< arr: [1, null, 0, 0, 1]\nmain <<< bt levelOrder:\n1\n0\n0,1\nmain <<< pruned levelOrder:\n1\n0\n1\n\n1,0,1,0,0,0,1\nmain <<< strArr: [1, 0, 1, 0, 0, 0, 1]\nmain <<< arr: [1, 0, 1, 0, 0, 0, 1]\nmain <<< bt levelOrder:\n1\n0,1\n0,0,0,1\nmain <<< pruned levelOrder:\n1\n1\n1\n\n1,1,0,1,1,0,1,0\nmain <<< strArr: [1, 1, 0, 1, 1, 0, 1, 0]\nmain <<< arr: [1, 1, 0, 1, 1, 0, 1, 0]\nmain <<< bt levelOrder:\n1\n1,0\n1,1,0,1\n0\nmain <<< pruned levelOrder:\n1\n1,0\n1,1,1\n\n0,null,0,0,0\nmain <<< strArr: [0, null, 0, 0, 0]\nmain <<< arr: [0, null, 0, 0, 0]\nmain <<< bt levelOrder:\n0\n0\n0,0\nmain <<< pruned levelOrder:\n\n```\n\nThe input line per test contains a single line with the depth first traversal of the nodes in the binary tree. This technique is typically used by LeetCode to define binary trees.\n\nOur test scaffold seems to read and parse the input line. The values are placed in a String[] array. 
Using the String[] array we appear to create an Integer[] and populate it with the same values. The two arrays are displayed.\n\nWe then appear to generate and populate a binary tree with the values from the Integer[] array. The binary tree is displayed in level order. The display seems to match the specified values.\n\nWe then seem to make a call to the method in question to prune the binary tree. The resulting binary tree is then displayed.\n\nWhile looking at the test cases and associated diagrams I noticed the end case in which all the nodes in the binary tree may need to be pruned. I forgot the condition before I submitted the code. After addressing the issue the code was accepted by LeetCode.\n\n``` /**\n* Test scaffolding\n*\n* @throws IOException\n*/\npublic static void main(String[] args) throws IOException {\n\n// **** open buffered reader ****\n\n// **** read String[] with node values for binary tree ****\n\n// **** close buffered reader ****\nbr.close();\n\n// ???? ????\nSystem.out.println(\"main <<< strArr: \" + Arrays.toString(strArr));\n\n// **** create and populate Integer[] ****\nInteger[] arr = new Integer[strArr.length];\nfor (int i = 0; i < strArr.length; i++) {\nif (strArr[i].equals(\"null\"))\narr[i] = null;\nelse\narr[i] = Integer.parseInt(strArr[i]);\n}\n\n// ???? ????\nSystem.out.println(\"main <<< arr: \" + Arrays.toString(arr));\n\n// **** create and populate an original binary tree ****\nBST bt = new BST();\nbt.root = bt.populate(arr);\n\n// ???? ????\nSystem.out.println(\"main <<< bt levelOrder:\");\nSystem.out.println(bt.levelOrder());\n\n// **** prune the BT ****\nbt.root = pruneTree(bt.root);\n\n// ???? ????\nSystem.out.println(\"main <<< pruned levelOrder:\");\nSystem.out.println(bt.levelOrder());\n}\n```\n\nThe test scaffold seems to follow (what a surprise) our description while looking at the test cases. Note that we create the binary tree using the BST class. Such code is only used in the test scaffold. 
As usual all the code can be found in the associated GitHub repository (https://github.com/JohnCanessa/BinaryTreePrunning) for this post.\n\n``` /**\n* Return the same tree where every subtree (of the given tree)\n* not containing a 1 has been removed.\n*\n* Runtime: 0 ms, faster than 100.00% of Java online submissions.\n* Memory Usage: 36.5 MB, less than 77.69% of Java online submissions.\n*/\nstatic TreeNode pruneTree(TreeNode root) {\n\n// **** sanity check ****\nif (root == null)\nreturn null;\n\n// **** recursive call ****\nprune(root);\n\n// **** delete root node (if needed) ****\nif (leafNode(root))\nroot = null;\n\n// **** return pruned BT ****\nreturn root;\n}\n```\n\nThe idea is to perform a depth first search traversal on the binary tree. We will be checking and pruning nodes.\n\nAfter the sanity check we prune the binary tree. If we end with a root node with value 0 (an end condition that I initially forgot to code) we set the root of the binary tree to null (pruned). The binary tree is then returned.\n\n``` /**\n* DFS traversal deleting nodes.\n* Recursive call.\n*/\nstatic void prune(TreeNode root) {\nif (root != null) {\n\n// **** visit left sub tree ****\nprune(root.left);\n\n// **** delete left node (if needed) ****\nif (root.left != null && leafNode(root.left))\nroot.left = null;\n\n// **** visit right sub tree ****\nprune(root.right);\n\n// **** delete right node (if needed) ****\nif (root.right != null && leafNode(root.right))\nroot.right = null;\n}\n}\n```\n\nThe prune() method implements DFS in a recursive fashion. We traverse the left and then the right subtrees. 
After each traversal we prune the left or the right node as needed.\n\n``` /**\n* Auxiliary method.\n*/\nstatic boolean leafNode(TreeNode root) {\n\n// **** sanity check ****\nif (root.val == 1)\nreturn false;\n\n// **** check if end node ****\nif (root.left == null && root.right == null)\nreturn true;\n\n// **** not a leaf node ****\nreturn false;\n}\n```\n\nThis is an auxiliary method. We only prune a node with value 0 if it is a leaf node. That implies it has no children that meets the requirements.\n\nHope you enjoyed solving this problem as much as I did. The entire code for this project can be found in my GitHub repository.\n\nIf you have comments or questions regarding this, or any other post in this blog, or if you would like for me to help out with any phase in the SDLC (Software Development Life Cycle) of a project associated with a product or service, please do not hesitate and leave me a note below. If you prefer, send me a private e-mail message. I will reply as soon as possible.\n\nKeep on reading and experimenting. It is one of the best ways to learn, become proficient, refresh your knowledge and enhance your developer toolset.\n\nOne last thing, many thanks to all 6,488 subscribers to this blog!!!\n\nKeep safe during the COVID-19 pandemic and help restart the world economy. I believe we can all see the light at the end of the tunnel.\n\nRegards;\n\nJohn\n\njohn.canessa@gmail.com\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed."
] | [
null,
"https://www.johncanessa.com/wp-content/uploads/2021/02/binary_tree_pruning-300x80.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7330223,"math_prob":0.8505913,"size":7534,"snap":"2022-27-2022-33","text_gpt3_token_len":1919,"char_repetition_ratio":0.13532537,"word_repetition_ratio":0.066967644,"special_character_ratio":0.3068755,"punctuation_ratio":0.19107479,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95110035,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T09:26:43Z\",\"WARC-Record-ID\":\"<urn:uuid:f61eda83-9c72-4a84-89fc-e26ef744791b>\",\"Content-Length\":\"62241\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cf9ce6ac-386f-4001-af56-5b22a55a90f6>\",\"WARC-Concurrent-To\":\"<urn:uuid:1dcd5896-0dea-4093-8940-001455bcae54>\",\"WARC-IP-Address\":\"208.113.168.135\",\"WARC-Target-URI\":\"https://www.johncanessa.com/2021/02/10/binary-tree-pruning/\",\"WARC-Payload-Digest\":\"sha1:RTX6BJ4OJUU6VDCOASUU455HUIA5Q627\",\"WARC-Block-Digest\":\"sha1:5JWBBWLAK5UWIRDWZHO774R7GUUGKC7X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103626162.35_warc_CC-MAIN-20220629084939-20220629114939-00542.warc.gz\"}"} |
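For illustration, the post-order pruning walked through in the blog record above can be re-sketched as a short Python routine. This is a sketch under the post's stated rule (remove every subtree containing no 1s); the class and function names here are this sketch's own, not the post's:

```python
class TreeNode:
    """Minimal binary tree node holding 0 or 1."""
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def prune_tree(root):
    """Remove every subtree that does not contain a 1 (post-order DFS)."""
    if root is None:
        return None
    # Prune the children first, so a node whose entire subtrees were
    # removed is itself seen as a leaf on the way back up.
    root.left = prune_tree(root.left)
    root.right = prune_tree(root.right)
    # A zero-valued leaf at this point heads a subtree with no 1s.
    if root.val == 0 and root.left is None and root.right is None:
        return None
    return root

# First example from the post: level-order [1,null,0,0,1].
root = TreeNode(1, None, TreeNode(0, TreeNode(0), TreeNode(1)))
pruned = prune_tree(root)
print(pruned.right.left)       # None  (the 0 leaf was pruned)
print(pruned.right.right.val)  # 1
```

Returning the (possibly replaced) child from each recursive call plays the role of the post's leafNode() checks on root.left and root.right, and it also covers the all-zeros end case by letting the root itself come back as None.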
https://forum.allaboutcircuits.com/threads/explanation-for-time-dependent-distribution-of-electric-field-in-a-closed-circuit.121867/ | [
"# Explanation for time dependent distribution of electric field in a closed circuit\n\n#### Bastians\n\nJoined Mar 13, 2016\n2\nupon closing a electric circuit DC or AC , the potential difference generates an electrical field and electrons start moving through the circuit based on the electric field force.\n\nThe distribution of this electric field is setup at a very high speed. A light bulb lights up almost instantaneously.\nWhere does the electric field starts first? at the terminal where electrons are pushed into the circuit, where electrons are pulled out from the circuit , or at both terminals simultaneously ? There must be some initial time frame to get the electrical field throughout the entire circuit.\ncan someone explain this ?\n\n#### crutschow\n\nJoined Mar 14, 2008\n33,346\nThe electric field travels at the speed of light and emanates from the source generating the voltage difference across the lines (or line and return).\n\n#### Bastians\n\nJoined Mar 13, 2016\n2\nThe electric field travels at the speed of light and emanates from the source generating the voltage difference across the lines (or line and return).\ndoes the electric field is distributed at both + and - terminal of the source at the same time ?\nor only at the terminal where electrons enters the circuit , almost at speed of light, but there must be a delay, heading towards the other terminal ?\ni am looking for a mathematical formulation of this phenomena.\n\n#### Pinkamena\n\nJoined Apr 20, 2012\n22\n\"The speed at which energy or signals travel down a cable is actually the speed of the electromagnetic wave, not the movement of electrons. 
Electromagnetic wave propagation is fast and depends on the dielectric constant of the material.\"\n\nSo if you assume you have a voltage source that is terminated to ground or some other constant potential, and the voltage source turns on at some time t, then the electric field will propagate along the wire at a speed determined by the dielectric constant of the material. For a PCB trace, this will mostly depend on the dielectric constant of the PCB material, I believe. It's still quite fast, about 66% of the speed of light for most pcb materials.\n\nHere is a blog post that goes a bit more in depth about the propagation speed for pcb traces: https://blogs.mentor.com/hyperblog/blog/tag/velocity-of-propagation/\n\n#### nsaspook\n\nJoined Aug 27, 2009\n12,286\ndoes the electric field is distributed at both + and - terminal of the source at the same time ?\nor only at the terminal where electrons enters the circuit , almost at speed of light, but there must be a delay, heading towards the other terminal ?\ni am looking for a mathematical formulation of this phenomena.\nIt's not Either/Or, the electrons in the wire and the electric field act as a system together to guide electrical energy from both terminals at the source to the load.",
null,
"https://en.wikipedia.org/wiki/Energy_current\nhttps://en.wikipedia.org/wiki/Poynting_vector\n\nLast edited:"
] | [
null,
"https://forum.allaboutcircuits.com/proxy.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9316884,"math_prob":0.94228727,"size":1049,"snap":"2023-40-2023-50","text_gpt3_token_len":188,"char_repetition_ratio":0.17607656,"word_repetition_ratio":0.80701756,"special_character_ratio":0.18303145,"punctuation_ratio":0.09677419,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9603944,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T11:50:24Z\",\"WARC-Record-ID\":\"<urn:uuid:f84b0d45-fd4e-4bdb-81ad-af03ec67b16a>\",\"Content-Length\":\"110521\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38bc7ac4-b3a2-4c5b-8f25-5520ee8dd63e>\",\"WARC-Concurrent-To\":\"<urn:uuid:9cc876a5-f377-4a14-9f2b-dd286fb16a4d>\",\"WARC-IP-Address\":\"104.17.144.194\",\"WARC-Target-URI\":\"https://forum.allaboutcircuits.com/threads/explanation-for-time-dependent-distribution-of-electric-field-in-a-closed-circuit.121867/\",\"WARC-Payload-Digest\":\"sha1:JSK727CJ4BNKBD5HGBUFPEPDQDVP4NO7\",\"WARC-Block-Digest\":\"sha1:45W7DJCOFVETRRKPTOHHMHDCYWYM2OPV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100499.43_warc_CC-MAIN-20231203094028-20231203124028-00013.warc.gz\"}"} |
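The propagation-speed point made in the forum record above can be put into rough numbers: a wave in a dielectric travels at v = c/sqrt(eps_r). A minimal Python sketch; the effective dielectric constant used below is an assumed illustrative value (chosen to land near the "66% of c" figure quoted in the thread), not a measured one:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def propagation_speed(relative_permittivity):
    """Phase velocity of an electromagnetic wave in a dielectric."""
    return C / math.sqrt(relative_permittivity)

# Assumed effective permittivity for a microstrip-style PCB trace;
# the real value depends on the board material and trace geometry.
eps_eff = 2.3
v = propagation_speed(eps_eff)
print(f"v is about {v / C:.0%} of c")  # about 66% of c, as noted above
```

In vacuum (relative permittivity 1) the formula reduces to v = c, matching the speed-of-light answer given earlier in the thread.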
https://ch.mathworks.com/matlabcentral/cody/problems/135-inner-product-of-two-vectors/solutions/506777 | [
"Cody\n\n# Problem 135. Inner product of two vectors\n\nSolution 506777\n\nSubmitted on 30 Sep 2014 by Mike\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1 Pass\n%% x = 1:3; y= 3:-1:1; z_correct = 10; assert(isequal(your_fcn_name(x,y),z_correct))\n\n2 Pass\n%% x = 1:6; y= ones(1,6); z_correct = 21; assert(isequal(your_fcn_name(x,y),z_correct))"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5477463,"math_prob":0.9893253,"size":404,"snap":"2020-10-2020-16","text_gpt3_token_len":139,"char_repetition_ratio":0.145,"word_repetition_ratio":0.0,"special_character_ratio":0.37128714,"punctuation_ratio":0.18604651,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9554686,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-01T09:08:10Z\",\"WARC-Record-ID\":\"<urn:uuid:63f464e9-e6b5-42a3-baec-ed7a224102d1>\",\"Content-Length\":\"72677\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b92f2a4d-de11-40e9-8089-30600f85143c>\",\"WARC-Concurrent-To\":\"<urn:uuid:ef5fa7c2-dbbb-4c33-ac56-f7dd8cf995fc>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://ch.mathworks.com/matlabcentral/cody/problems/135-inner-product-of-two-vectors/solutions/506777\",\"WARC-Payload-Digest\":\"sha1:PA6QGOHLTRMNRMBI2HI2NSOVHH2Q2YEC\",\"WARC-Block-Digest\":\"sha1:RB3JV77VJXL5MSMDTWSQRC36Z4FTJGKV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370505550.17_warc_CC-MAIN-20200401065031-20200401095031-00201.warc.gz\"}"} |
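The Cody record above shows only the test suite, not a solution. For illustration, the inner product it exercises is just the sum of elementwise products; a minimal Python sketch (the function name is this sketch's own):

```python
def inner_product(x, y):
    """Sum of elementwise products of two equal-length vectors."""
    if len(x) != len(y):
        raise ValueError("vectors must have the same length")
    return sum(a * b for a, b in zip(x, y))

print(inner_product([1, 2, 3], [3, 2, 1]))        # 10, as in test 1
print(inner_product(list(range(1, 7)), [1] * 6))  # 21, as in test 2
```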
https://technologica.be/quarry/calculation-of-circulating-load-on-quarry/ | [
" calculation of circulating load on quarry\n\n# calculation of circulating load on quarry\n\n•",
null,
"### calculating circulating load on a cone crusher –\n\n» Stone Crusher Rate Sand Making Stone Quarry calculating circulating load on a cone crusher calculation of circulating load of a grinding mill pdf\n\n•",
null,
"### calculate circulating load primary crusher –\n\nCalculate Circulating Load Primary Crusher. calculation of circulating load on quarryStone Crusher . how to calculate circulating load\n\n•",
null,
"### Calculate Circulating Load Primary Crusher\n\npe series jaw crusher is usually used as primary crusher in quarry production calculate circulating load ball mill pdf,circulating load calculation\n\n•",
null,
"### How To Calculate Circulating Load In Cone Crusher\n\ngrinding circulating load calculation india xsmcirculating load calculation india from xsm. shanghai xsm circulating load Income From A Rock Quarry next:\n\n•",
null,
"### Ball Mill Calculation Crusher Mills, Cone Crusher,\n\nCrushers France Quarry Producing Railway Ballast ball mill calculation ball mill circulating load calculation. circulating load in roll mill calculation\n\n•",
null,
"### sag mill circulating load rate fifbowling.pw\n\ncalculating circulating load ball mill Mining & World Quarry. ground in a SAGball mill circuit. circulating load calculation made in a mill bim.\n\n•",
null,
"### calculation of circulating load on quarry\n\nCirculating Load Calculation Formula. Here is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocylone as part of a\n\n•",
null,
"### calculating circulating load in crushing circuit pdf\n\nThe grinding mill receives crushed ore feed.This is %Solids/Densities Based Circulating Load Calculation Method obtained from the plants crushing site quarry\n\n•",
null,
"### Circulating load calculation in mineral processing\n\nA problem for solving mass balances in mineral processing plants is the calculation of circulating load in closed circuits. A family of possible methods to the\n\n•",
null,
"### calculation of circulating load on quarry travelkare\n\ncirculating load in ball mill Mining & Quarry Plant . ball mill circulating load calculation. circulating load calculation – OneMine Mining and Minerals\n\n•",
null,
"Circulating Load Calculation Screen Crusher, quarry, ball mill circulating load calculation Ball mills are predominantly used machines for grinding in the\n\n•",
null,
"### circulating load formula of ball mill trigo.co\n\n•",
null,
"### the effect of circulating load on sag mill power\n\nball mill circulating load calculation. Gulin provides crusehr and grinding mill in quarry and ore plant the effect of circulating load on sag mill power\n\n•",
null,
"calculate circulating load crusher calculation of circulating load of a grinding mill pdf. calculation of circulating load on quarry.\n\n•",
null,
"### crusher circulating load calculation aiips\n\ncrusher circulating load calculation Jaw crusher balance reasons energy and mass balance crusher in Mumbai. >> calculation of circulating load on quarry\n\n•",
null,
"### calculation of circulating load on quarry\n\ncrushing flowsheet simulation increased productivity and. english channel island with a limestone quarry that was used as source of .lost circulation materials\n\n•",
null,
"### mill circulation load calculation gravity\n\n»stone crusher and quarry plant in sucre venezuela »concentrator machines quarry Circulating load calculation in grinding circuits SciELO. Keywords:\n\n•",
null,
"Calculation Of Circulating Load On Quarry CAESAR Mining calculation of circulating loads in kil sweden . Sweden. After the millennium shift it slowly became obvious .\n\n•",
null,
"### calculation of circulating loads in kil sweden\n\nblack galaxy granite quarry calculation of circulating loads of electrical equipment causes load losses. . grid structured circulating currents will\n\n•",
null,
""
] | [
null,
"https://technologica.be/images/case/1105.jpg",
null,
"https://technologica.be/images/case/769.jpg",
null,
"https://technologica.be/images/case/57.jpg",
null,
"https://technologica.be/images/case/956.jpg",
null,
"https://technologica.be/images/case/63.jpg",
null,
"https://technologica.be/images/case/1187.jpg",
null,
"https://technologica.be/images/case/92.jpg",
null,
"https://technologica.be/images/case/958.jpg",
null,
"https://technologica.be/images/case/308.jpg",
null,
"https://technologica.be/images/case/795.jpg",
null,
"https://technologica.be/images/case/217.jpg",
null,
"https://technologica.be/images/case/715.jpg",
null,
"https://technologica.be/images/case/74.jpg",
null,
"https://technologica.be/images/case/93.jpg",
null,
"https://technologica.be/images/case/295.jpg",
null,
"https://technologica.be/images/case/262.jpg",
null,
"https://technologica.be/images/case/1280.jpg",
null,
"https://technologica.be/images/case/358.jpg",
null,
"https://technologica.be/images/case/142.jpg",
null,
"https://technologica.be/images/case/1177.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7698522,"math_prob":0.993508,"size":3956,"snap":"2020-24-2020-29","text_gpt3_token_len":716,"char_repetition_ratio":0.3861336,"word_repetition_ratio":0.110912345,"special_character_ratio":0.15419616,"punctuation_ratio":0.06081081,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9880718,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-13T17:51:29Z\",\"WARC-Record-ID\":\"<urn:uuid:7d1d6e28-bc7f-49f3-a0d0-3ed7b517bb5f>\",\"Content-Length\":\"12872\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ef4fed91-fcd6-4fb5-9f61-105469f36e5f>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b1e66b1-8f83-4525-8238-6f6f28a92c7d>\",\"WARC-IP-Address\":\"104.28.13.179\",\"WARC-Target-URI\":\"https://technologica.be/quarry/calculation-of-circulating-load-on-quarry/\",\"WARC-Payload-Digest\":\"sha1:SG5TP22EVYTG5YXKSDB3OG55NHQHBAMV\",\"WARC-Block-Digest\":\"sha1:TXXTSFCUNWV3ZYPJECCSSRDRQNHBC2XX\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657146247.90_warc_CC-MAIN-20200713162746-20200713192746-00025.warc.gz\"}"} |
https://de.mathworks.com/matlabcentral/cody/problems/848-calculate-a-modified-levenshtein-distance-between-two-strings/solutions/489921 | [
"Cody\n\n# Problem 848. Calculate a modified Levenshtein distance between two strings\n\nSolution 489921\n\nSubmitted on 23 Aug 2014 by rifat\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1 Pass\n%% s1 = 'I do not like MATLAB'; s2 = 'I love MATLAB a lot'; d_correct = 4; assert(isequal(modlevenshtein(s1,s2),d_correct))\n\nc1 = 'i' 'do' 'not' 'like' 'matlab' c2 = 'i' 'love' 'matlab' 'a' 'lot'\n\n2 Pass\n%% s1 = 'Which words need to be edited?'; s2 = 'Can you tell which words need to be edited?'; d_correct = 3; assert(isequal(modlevenshtein(s1,s2),d_correct))\n\nc1 = 'which' 'words' 'need' 'to' 'be' 'edited' c2 = 'can' 'you' 'tell' 'which' 'words' 'need' 'to' 'be' 'edited'\n\n3 Pass\n%% s1 = 'Are these strings identical?'; s2 = 'These strings are not identical!'; d_correct = 3; assert(isequal(modlevenshtein(s1,s2),d_correct))\n\nc1 = 'are' 'these' 'strings' 'identical' c2 = 'these' 'strings' 'are' 'not' 'identical'\n\n4 Pass\n%% s1 = 'Settlers of Catan is my favorite game'; s2 = 'Tic-tac-toe is also one of my favorite games'; d_correct = 6; assert(isequal(modlevenshtein(s1,s2),d_correct))\n\nc1 = 'settlers' 'of' 'catan' 'is' 'my' 'favorite' 'game' c2 = 'tic-tac-toe' 'is' 'also' 'one' 'of' 'my' 'favorite' 'games'\n\n5 Pass\n%% s1 = 'This one should be simple, but maybe it isn''t'; s2 = 'This one should be simple, but maybe it isn''t'; d_correct = 0; assert(isequal(modlevenshtein(s1,s2),d_correct))\n\nc1 = 'this' 'one' 'should' 'be' 'simple' 'but' 'maybe' 'it' 'isn't' c2 = 'this' 'one' 'should' 'be' 'simple' 'but' 'maybe' 'it' 'isn't'\n\n6 Pass\n%% s1 = 'Testing, testing, one, two, three,...'; s2 = 'Testing, testing, one, two,...'; d_correct = 1; assert(isequal(modlevenshtein(s1,s2),d_correct))\n\nc1 = 'testing' 'testing' 'one' 'two' 'three' c2 = 'testing' 'testing' 'one' 'two'\n\n7 Pass\n%% s1 = 'How many edits do you think there are in this example? 
I don''t know!'; s2 = 'Well, it is hard to tell how many edits are required because there are big differences in the two strings.'; d_correct = 15; assert(isequal(modlevenshtein(s1,s2),d_correct))\n\nc1 = Columns 1 through 10 'how' 'many' 'edits' 'do' 'you' 'think' 'there' 'are' 'in' 'this' Columns 11 through 14 'example' 'i' 'don't' 'know' c2 = Columns 1 through 10 'well' 'it' 'is' 'hard' 'to' 'tell' 'how' 'many' 'edits' 'are' Columns 11 through 19 'required' 'because' 'there' 'are' 'big' 'differences' 'in' 'the' 'two' Column 20 'strings'"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5955507,"math_prob":0.6466995,"size":2392,"snap":"2019-51-2020-05","text_gpt3_token_len":881,"char_repetition_ratio":0.11515913,"word_repetition_ratio":0.101333335,"special_character_ratio":0.38419732,"punctuation_ratio":0.13729978,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99562836,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-24T11:47:27Z\",\"WARC-Record-ID\":\"<urn:uuid:a898983c-025b-4596-97c1-24eb40384c8a>\",\"Content-Length\":\"78844\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5f3ab839-bec0-45a5-a01e-93cabd7c1426>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c72aa28-32d1-450c-b10c-4339746732f8>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://de.mathworks.com/matlabcentral/cody/problems/848-calculate-a-modified-levenshtein-distance-between-two-strings/solutions/489921\",\"WARC-Payload-Digest\":\"sha1:VQYALYVKPVSPQ7LB7CFLWK7CLTGBOUQL\",\"WARC-Block-Digest\":\"sha1:IRFH2LCCO5CBIIETF4CKNN5ER2VEPZK5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250619323.41_warc_CC-MAIN-20200124100832-20200124125832-00058.warc.gz\"}"} |
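The Cody record above again shows only the test suite. Judging by the tokenized word lists in the printed output, the intended metric is classic Levenshtein edit distance applied to words rather than characters. A Python sketch under that assumption; the tokenizer (lower-cased words, keeping apostrophes and hyphens) is inferred from the token lists in the test output:

```python
import re

def word_levenshtein(s1, s2):
    """Levenshtein distance between the word sequences of two strings."""
    a = re.findall(r"[a-z'\-]+", s1.lower())
    b = re.findall(r"[a-z'\-]+", s2.lower())
    # Rolling single-row dynamic programming over the edit-distance table.
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cost = 0 if wa == wb else 1
            cur.append(min(prev[j] + 1,          # delete a word
                           cur[j - 1] + 1,       # insert a word
                           prev[j - 1] + cost))  # substitute a word
        prev = cur
    return prev[-1]

print(word_levenshtein('I do not like MATLAB', 'I love MATLAB a lot'))  # 4
```

The printed value matches d_correct in the first test case, and the same routine reproduces the other distances in the suite where the tokenization is unambiguous.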
https://sheir.org/edu/statistics-quiz/ | [
"# Statistics Quiz\n\nMathematics → Statistics → Statistics Quiz\n\n11. In a module, quiz contributes 10%, assignment 30%, and final exam contributes 60% towards the final result. A student obtained 80% marks in quiz, 65% in assignment, and 75% in the final exam. What are the average marks?\n(A) 64.5%\n(B) 68.5%\n(C) 72.5%\n(D) 76.5%\n\n12. In a university, the average height of students is 165 cm. Now, consider the following table:\n\nHeight (cm): 160–162, 162–164, 164–166, 166–168, 168–170\nStudents: 16, 20, 24, 20, 16\nWhat type of distribution is this?\n(A) Normal\n(B) Uniform\n(C) Poisson\n(D) Binomial\n\n13. What is the average of 3%, 7%, 10%, and 16%?\n(A) 8%\n(B) 9%\n(C) 10%\n(D) 11%\n\n11. (C) 72.5%\n12. (A) Normal\n13. (B) 9%\n\nSOLUTIONS: STATISTICS QUIZ\n\n11. By using the formula for calculating the weighted average, we have",
null,
"12.",
null,
"13.",
null,
""
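The worked solutions shown in the images reduce to a single formula, the weighted mean; here is a quick numeric check (a sketch, not the site's own working):

```python
def weighted_average(weights, marks):
    """Weighted mean: sum(w_i * x_i) / sum(w_i)."""
    assert len(weights) == len(marks)
    return sum(w * x for w, x in zip(weights, marks)) / sum(weights)

# Q11: quiz 10%, assignment 30%, final exam 60%
print(weighted_average([10, 30, 60], [80, 65, 75]))  # 72.5

# Q13: equal weights reduce to the ordinary mean
print(weighted_average([1, 1, 1, 1], [3, 7, 10, 16]))  # 9.0
```

Q12 needs no computation: the frequencies rise to a peak at the middle class and fall off symmetrically, which is the shape of a normal distribution.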
] | [
null,
"https://sheir.org/edu/wp-content/uploads/2020/08/statistics-quiz.png",
null,
"https://sheir.org/edu/wp-content/uploads/2020/08/basic-statistics-mcqs-solution.png",
null,
"https://sheir.org/edu/wp-content/uploads/2019/12/statistics-mcqs-question.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7413269,"math_prob":0.95836294,"size":889,"snap":"2021-43-2021-49","text_gpt3_token_len":316,"char_repetition_ratio":0.09378531,"word_repetition_ratio":0.0,"special_character_ratio":0.44656917,"punctuation_ratio":0.17525773,"nsfw_num_words":3,"has_unicode_error":false,"math_prob_llama3":0.972156,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-09T04:11:35Z\",\"WARC-Record-ID\":\"<urn:uuid:9780f09d-6aca-4567-96f8-f8828ab8f873>\",\"Content-Length\":\"18454\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d9ba7202-0346-41ca-88c3-329255eb3558>\",\"WARC-Concurrent-To\":\"<urn:uuid:77b3d039-8958-4652-9226-67bf513b69df>\",\"WARC-IP-Address\":\"43.255.154.97\",\"WARC-Target-URI\":\"https://sheir.org/edu/statistics-quiz/\",\"WARC-Payload-Digest\":\"sha1:F2ZST35XP7GYHUWLCNMUUNUD63HROPFD\",\"WARC-Block-Digest\":\"sha1:K5Z6D36WV6HG2MTZXXNMM2TIJMWMJRAK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363659.21_warc_CC-MAIN-20211209030858-20211209060858-00409.warc.gz\"}"} |
http://andeekaplan.com/2015/01/29/ModelVis | [
"# Reading: Visualization of the model fitting process\n\nToggle Code Viewing: ON\n\nAs part of the course Stat 503 I am taking this semester, I will be posting a series of responses to assigned course readings. Mostly these will be my rambling thoughts as I skim papers.\n\nThis week we are reading a paper by Hadley Wickham, Dianne Cook, and Heike Hofmann entitled “Visualizing Statistical Models: Removing the Blindfold” which was submitted to the Annals of Applied Statistics. This paper presents multiple ways that visualization can be useful throughout the model fitting process, rather than solely after the models have been fit and a “best” model has been selected. This paper has been my favorite reading assignment thus far in the semester; I feel as though I’d been thinking of some of these concepts without knowing how to put words to them properly.\n\n# Strategies\n\nThere are three main strategies presented in the paper to enhance model fitting through the use of visualization and graphics:\n\n1. Display the model in the data space\n2. Look at all members in the collection\n3. Understand the process of model fitting\n\nThese three ideas on their own can be very powerful, but together I imagine the improvement to the model fitting would be more than the sum of the parts. To get an idea for all three of these suggestions, I will run through a case study of fitting linear models to the mtcars dataset.\n\n# Model in the data space\n\nA common way to diagnose the fit of a model is to display the data in the model space. For example, with linear regression we often look at plots of residuals versus fitted values to assess the assumptions of a linear model. This maps each observation (data) to a point in two dimensions that are summaries from the model (in the model space). The authors suggest flipping this paradigm and mapping the model in the data space instead. 
One way to accomplish this for a predictive model with continuous response is to visualize the response surface over a grid of predictors.\n\nFor a (very simple) regression example of fitting a linear model to the mtcars dataset, predicting miles per gallon (mpg) with engine displacement (disp) and horsepower (hp), we can fit the model and then view the resulting prediction surface (a plane) in the data space with the data overlaid in red.",
null,
"This is commonly done for OLS regression where we have 2 dimensions. Once we get beyond 2 predictors (3 dimensions), this visualization requires much more creativity and some special tools (like GGobi).\n\n# A look at the members\n\nIn many modeling activities, some models are fit within the same family of models (think linear models with a main effect for example) and then a "best" combination of variables is chosen through some criteria (AIC, BIC, etc.). However, the only model that gets explored in depth is this best model. The less optimal models are thrown away in the selection process. The authors argue that visualizing multiple models in the selection process can give "more insight into the data and the relative merits of different models". To quote John Tukey,\n\nThe greatest value of a picture is when it forces us to notice what we never expected to see.\n\nThis is an important concept that can be applied to the model selection process, and is what the authors suggest when they say "look at members in a collection". We can explore multiple models and hopefully find out something we never expected about the data and benefits of different models.\n\nTo get an idea for how this works, I will fit every possible combination of predictors in the mtcars example for models with at least two explanatory variables. We can then look at the values for the estimates of the coefficients associated with each variable in a parallel coordinates plot. Note that a value of zero means that variable wasn't included in the model. We can also plot some (standardized) model fit diagnostics for each model and look for a relationship between complexity and fit for our many models.",
null,
"",
null,
"It would be interesting to explore these plots with interactive linked brushing, as was done in the paper, however there are still interesting things to note about the models. First, number of carburetors (carb), number of cylinders in the engine (cyl), and weight (wt) always negatively affect fuel efficiency when included in the model. Secondly it appears the number of forward gears (gear), 1/4 mile time (qsec) and V/S (vs) can either negatively or positively affect fuel efficiency. It would be interesting to investigate which models these are to explore collinearity in the model.\n\n# Understanding the process\n\nBy understanding how a model is fit, and by that I mean the algorithm that fits the model, we can more fully understand how specific data affect the resulting model. This can often be accomplished by visualizing the iterations of a model fitting algorithm to view each step in the process. For example, using the Newton-Raphson method, we could see the steps taken to arrive at a maximum in maximum likelihood fitting for a specific model.\n\n# Conclusion\n\nTo sum of this approach, why only look at one model when you can look at hundreds? By visualizing multiple models we can find interesting things about the data that we may not have known from just looking at one model. Additionally, viewing the model in the data space can provide additional insight as to why a certain model is fit. Finally, understanding the model that is being fit is always a good idea (let’s avoid the dreaded black box!). Visualization of models can help with all three of these ideas."
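The "look at all members" idea — fit every subset of predictors and compare their fits — is easy to sketch. The post's figures use R and mtcars; the sketch below uses Python with synthetic data standing in for mtcars, so the variable names and numbers are illustrative only:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a dataset like mtcars: 3 predictors, one response.
# The true model uses x1 and x3; x2 is pure noise.
n = 40
X = rng.normal(size=(n, 3))
y = 2.0 - 1.5 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n)
names = ['x1', 'x2', 'x3']

# Fit OLS for every non-empty subset of predictors and record R^2.
results = {}
for k in range(1, len(names) + 1):
    for subset in itertools.combinations(range(len(names)), k):
        A = np.column_stack([np.ones(n), X[:, subset]])  # intercept + chosen columns
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        results[tuple(names[i] for i in subset)] = r2

# Looking at the whole collection, not just the single "best" model:
for subset, r2 in sorted(results.items(), key=lambda kv: -kv[1]):
    print(subset, round(r2, 3))
```

From here, plotting the coefficient estimates of all models in one parallel coordinates plot (as the post does) is a small additional step with any plotting library.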
] | [
null,
"http://andeekaplan.com/images/blog/2015-01-29-ModelVis/unnamed-chunk-1-1.png",
null,
"http://andeekaplan.com/images/blog/2015-01-29-ModelVis/unnamed-chunk-2-1.png",
null,
"http://andeekaplan.com/images/blog/2015-01-29-ModelVis/unnamed-chunk-2-2.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9344613,"math_prob":0.94046414,"size":5552,"snap":"2021-43-2021-49","text_gpt3_token_len":1120,"char_repetition_ratio":0.14329489,"word_repetition_ratio":0.010593221,"special_character_ratio":0.19560519,"punctuation_ratio":0.07219512,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98095423,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-28T21:26:04Z\",\"WARC-Record-ID\":\"<urn:uuid:0141a3ed-3c6d-4756-84c8-c5ea4c3e02a4>\",\"Content-Length\":\"31975\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:000e371c-25e7-45f8-ab7f-6597922db37b>\",\"WARC-Concurrent-To\":\"<urn:uuid:894f82f3-b13b-4d63-aac2-ef0d09547ac1>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"http://andeekaplan.com/2015/01/29/ModelVis\",\"WARC-Payload-Digest\":\"sha1:54P4QIZT67ITPMKBHLIYKNGBFVNWNTWO\",\"WARC-Block-Digest\":\"sha1:OT7PO2BGV4LQEZHJ5CR5VFVQTA6OUDKG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358591.95_warc_CC-MAIN-20211128194436-20211128224436-00583.warc.gz\"}"} |
https://www.ahyungo.com/distance-formula-physics-no-longer-a-mystery-3/ | [
"# Distance Formula Physics: No Longer a Mystery\n\n## Distance Formula Physics Ideas\n\nA heavy body moving at a quick velocity is hard to stop. In case the distance is less than one wavelength, then the solution is unique. Its motion is known as projectile motion.\n\nHere's the trajectory (path) between the 2 points. When it regards projectile motion, there are lots of equations to consider. This change in position is known as displacement.\n\nThere are various kinds of tests included in the majority of standards. As is typical in any matter, there are assumptions hidden in the way in which the dilemma is stated and we need to work out the way to care for it. Mass spectrometers have many designs, and a lot of use magnetic fields to measure mass.\n\nSeveral of the words may be incorrectly translated or mistyped. Charge Q functions as a point charge to make an electric field. It is possible to add distances and you may add times, but you cannot add rates.\n\nThe work done by friction on the vehicle is linked to the initial kinetic energy of the vehicle. In case the traffic ahead has stopped, that could mess up your day promptly. Imagine that you're driving your vehicle on a normal street.\n\nSo if, for instance, you're travelling at 50mph in your vehicle and the road is wet, you are going to want to double the conventional figures given for stopping distance. Speed only extends to you a number which lets you understand how fast you're going. Stopping distance is the typical speed of the vehicle times the average stopping time.\n\nThe standard assumption that collisions are binary leads to severe problems when seeking to take several interactions into consideration. It is possible to use any of them dependent on the nature of a certain problem. 
You always need to approach an issue first by thinking about the crucial interactions described, no matter what quantity you're requested to find.\n\nHigher fidelity can be accomplished by calculating higher order moments. Therefore, we use that which we can see to tell us about that which we can't. The velocity is changing over the term of time.\n\n## Introducing Distance Formula Physics\n\nThe key idea about angular momentum, much like linear momentum, is the fact that it's conserved. Among the most fundamental scientific disciplines, the most important purpose of physics is to comprehend the way the universe behaves. The idea of moment in physics comes from the mathematical notion of moments.\n\nAfter all the ideal thing about physics is the fact that it can be utilised to address real world difficulties. The primary goal of these subjects is to study and attempt to know the universe and everything within it. A comprehensive discussion of modulo is beyond the range of this piece, but here is a fantastic link on Khan Academy.\n\n## Finding the Best Distance Formula Physics\n\nProjectile motion has two key components. Therefore, the angular velocity will differ. Acceleration doesn't occur by itself.\n\nYes, it becomes rather complicated, but all you want to understand is that the angular momentum is dependent on how fast the rotors spin. The equation that's utilized to figure distance and velocity is provided below. You should also think of whether it's possible to pick the initial speed for the ball and just figure out the angle at which it is thrown.\n\nThe Synchrotron Light source is going to be employed by researchers to study a huge selection of scientific questions. The electric field strength isn't dependent upon the number of charge on the test charge. 
There are also a number of organic units that are employed in astronomy and space science.\n\n## Facts, Fiction and Distance Formula Physics\n\nIt’s complicated to memorize each arrangement of the 2 equations and I recommend that you practice creating new combinations from the original equations. These equations are called kinematic equations. The equations of motion could be utilised to figure out the qualities of the trajectory.\n\nYou will need a calculator for that, but nevertheless, it shouldn’t be essential for the theory test. This is among the most typical procedures of calculus. It is derived from the Pythagorean theorem.\n\nRotational kinetic energy has to be supplied to the blades to make them rotate faster, and enough energy cannot be supplied in time to prevent a crash. That means you can observe that flying a drone is really simple if you enable the computer do all of the work. The whole thrust force will stay equal to the weight, or so the drone will stay at the identical vertical level.\n\nThe very first step, naturally, is to establish what we mean when we discuss acceleration for a method of talking about force. Two charges would always be essential to encounter a force. It’s the huge force acting for a tiny interval of time.\n\n## What Distance Formula Physics Is – and What it Is Not\n\nIt should be put approximately ten centimeters under the base of the streamlined bob when it’s suspended by the magnet. This sort of bond is quite strong. When the time was determined, a horizontal equation can be utilised to ascertain the horizontal displacement of the pool ball.\n\nFor the triangle to be a proper triangle, D3 the biggest of the 3 distances have to be the amount of the hypotenuse and all 3 sides must satisfy Pythagorean theorem. This is the point where the lines crossed. Be ready for black ice on cold days, and look out for loose surfaces like gravel.\n\nA lot of men and women spend far too much time reading as a means to revise. 
Another sort of directed distance is that between two distinct particles or point masses at a particular time. In order to make sure that the stopping sight distance provided is adequate, we are in need of a more in-depth comprehension of the frictional force.\n\n## Whispered Distance Formula Physics Secrets\n\nDetailed mathematical solutions of practical problems typically don’t have closed-form solutions, and so need numerical strategies to handle. There are quite a lot of approaches that were designed for just such systems. See below to learn more."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9417359,"math_prob":0.9493041,"size":6029,"snap":"2022-27-2022-33","text_gpt3_token_len":1188,"char_repetition_ratio":0.105892114,"word_repetition_ratio":0.0,"special_character_ratio":0.1909106,"punctuation_ratio":0.086766,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98525995,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T10:27:18Z\",\"WARC-Record-ID\":\"<urn:uuid:3a89a7dc-f8c7-4e4e-8697-99dadcda8bf3>\",\"Content-Length\":\"34817\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6e1bd34a-851b-404f-a3c1-e87798a7d5ae>\",\"WARC-Concurrent-To\":\"<urn:uuid:f7361ec2-1087-4c29-90df-68acc2f869a9>\",\"WARC-IP-Address\":\"182.92.175.202\",\"WARC-Target-URI\":\"https://www.ahyungo.com/distance-formula-physics-no-longer-a-mystery-3/\",\"WARC-Payload-Digest\":\"sha1:3VBDWTKGCC4TV5WGAQL54BZYVFCHAFZH\",\"WARC-Block-Digest\":\"sha1:B2JTJVCAQXNVFLZBVZSMWLLYGG7SK3KD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572898.29_warc_CC-MAIN-20220817092402-20220817122402-00067.warc.gz\"}"} |
https://numbermatics.com/n/10718221303/ | [
"# 10718221303\n\n## 10,718,221,303 is an odd composite number composed of two prime numbers multiplied together.\n\nWhat does the number 10718221303 look like?\n\nThis visualization shows the relationship between its 2 prime factors (large circles) and 10 divisors.\n\n10718221303 is an odd composite number. It is composed of two distinct prime numbers multiplied together. It has a total of ten divisors.\n\n## Prime factorization of 10718221303:\n\n### 101⁴ × 103\n\n(101 × 101 × 101 × 101 × 103)\n\nSee below for interesting mathematical facts about the number 10718221303 from the Numbermatics database.\n\n### Names of 10718221303\n\n• Cardinal: 10718221303 can be written as Ten billion, seven hundred eighteen million, two hundred twenty-one thousand, three hundred three.\n\n### Scientific notation\n\n• Scientific notation: 1.0718221303 × 10¹⁰\n\n### Factors of 10718221303\n\n• Number of distinct prime factors ω(n): 2\n• Total number of prime factors Ω(n): 5\n• Sum of prime factors: 204\n\n### Divisors of 10718221303\n\n• Number of divisors d(n): 10\n• Complete list of divisors:\n• Sum of all divisors σ(n): 10930504520\n• Sum of proper divisors (its aliquot sum) s(n): 212283217\n• 10718221303 is a deficient number, because the sum of its proper divisors (212283217) is less than itself. 
Its deficiency is 10505938086.\n\n### Bases of 10718221303\n\n• Binary: 1001111110110110110001001111110111 (base 2)\n• Base-36: 4X9COHJ\n\n### Squares and roots of 10718221303\n\n• 10718221303 squared (10718221303²) is 114880267900083017809\n• 10718221303 cubed (10718221303³) is 1231312134701016876948952185127\n• The square root of 10718221303 is 103528.8428555059\n• The cube root of 10718221303 is 2204.8255854831\n\n### Scales and comparisons\n\nHow big is 10718221303?\n• 10,718,221,303 seconds is equal to 340 years, 41 weeks, 6 days, 11 hours, 41 minutes, 43 seconds.\n• To count from 1 to 10,718,221,303 would take you about six hundred eighty-one years!\n\nThis is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. (We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)\n\n• A cube with a volume of 10718221303 cubic inches would be around 183.7 feet tall.\n\n### Recreational maths with 10718221303\n\n• 10718221303 backwards is 30312281701\n• The number of decimal digits it has is: 11\n• The sum of 10718221303's digits is 28\n• More coming soon!\n\n#### Copy this link to share with anyone:\n\nMLA style:\n"Number 10718221303 - Facts about the integer". Numbermatics.com. 2023. Web. 5 December 2023.\n\nAPA style:\nNumbermatics. (2023). Number 10718221303 - Facts about the integer. Retrieved 5 December 2023, from https://numbermatics.com/n/10718221303/\n\nChicago style:\nNumbermatics. 2023. "Number 10718221303 - Facts about the integer". https://numbermatics.com/n/10718221303/\n\nThe information we have on file for 10718221303 includes mathematical data and numerical statistics calculated using standard algorithms and methods. We are adding more all the time. 
If there are any features you would like to see, please contact us. Information provided for educational use, intellectual curiosity and fun!\n\nKeywords: Divisors of 10718221303, math, Factors of 10718221303, curriculum, school, college, exams, university, Prime factorization of 10718221303, STEM, science, technology, engineering, physics, economics, calculator, ten billion, seven hundred eighteen million, two hundred twenty-one thousand, three hundred three."
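The divisor statistics quoted on this page follow directly from the prime factorization via the standard multiplicative formula σ(n) = Π (p^(e+1) − 1)/(p − 1). A short sketch reproducing the page's figures:

```python
def sigma(factorization):
    """Sum of divisors from a prime factorization {p: e}:
    sigma(n) = prod((p**(e+1) - 1) // (p - 1)) over prime powers."""
    total = 1
    for p, e in factorization.items():
        total *= (p ** (e + 1) - 1) // (p - 1)
    return total

n = 101 ** 4 * 103            # 10718221303
s = sigma({101: 4, 103: 1})   # sum of ALL divisors, including n itself
aliquot = s - n               # sum of proper divisors

print(n)            # 10718221303
print(s)            # 10930504520
print(aliquot)      # 212283217
print(n - aliquot)  # 10505938086 (the deficiency: aliquot < n)
```

The divisor count d(n) = (4+1)(1+1) = 10 and Ω(n) = 4+1 = 5 follow from the same exponents.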
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81235915,"math_prob":0.9274606,"size":3550,"snap":"2023-40-2023-50","text_gpt3_token_len":982,"char_repetition_ratio":0.15877044,"word_repetition_ratio":0.054613937,"special_character_ratio":0.38056338,"punctuation_ratio":0.16640747,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9832218,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T21:47:36Z\",\"WARC-Record-ID\":\"<urn:uuid:5348642f-e41a-45ff-80f2-a707eb068cf3>\",\"Content-Length\":\"20178\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:26d0c619-5a3b-4d03-813f-7bc13b4c955c>\",\"WARC-Concurrent-To\":\"<urn:uuid:e5403b9c-f110-4089-b2db-664605c08832>\",\"WARC-IP-Address\":\"72.44.94.106\",\"WARC-Target-URI\":\"https://numbermatics.com/n/10718221303/\",\"WARC-Payload-Digest\":\"sha1:PXR3ECRNTIZ3QGENDJZCOT3EN5S5SJIY\",\"WARC-Block-Digest\":\"sha1:5YUI5BYL5VLUEF3DJASRYBREDW2KAFZ2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100568.68_warc_CC-MAIN-20231205204654-20231205234654-00790.warc.gz\"}"} |
https://ktbssolutions.com/1st-puc-physics-question-bank-chapter-5/ | [
"# 1st PUC Physics Question Bank Chapter 5 Laws of Motion\n\n## Karnataka 1st PUC Physics Question Bank Chapter 5 Laws of Motion\n\n### 1st PUC Physics Laws of Motion TextBook Questions and Answers\n\nQuestion 1.\nGive the magnitude and direction of the net force acting on\n\n1. a drop of rain falling down at a constant speed.\n2. a cork of mass 10 g floating on water\n3. a kite skillfully held stationary in the sky.\n4. a car moving with a constant velocity of 30 km/h on a rough road.\n5. a high-speed electron in space far from ail material objects, and free of electric and magnetic fields.\n\n1. In accordance with the first law of motion, there is no net force on the drop since it is moving with constant speed.\n2. The weight of the cork is balanced by upthrust which is equal to the weight of water displaced. Hence no net force on the cork.\n3. Since the kite is in the state of rest net force on it is zero.\n4. From the first law of motion, since the velocity of the car is a constant net force on it is zero.\n5. Since the electron is in free space no gravitational or electric or magnetic force is acting on it. Hence net force on it is zero.\n\nQuestion 2.\nA pebble of mass 0.05 kg is thrown vertically upwards. Give the direction and magnitude of the net force on the pebble,\n\n1. during its upward motion.\n2. during its downward motion.\n3. at the highest point where it is momentarily at rest. Do your answers change if the pebble was thrown at an angle of 45° with the horizontal direction? Ignore air resistance.\n\n1. When the pebble is moving upward the force acting on it is gravitational force in downward direction. F = mg = 0.05 × 10 = 0.5 N\n2. Even in this case F = mg = 0.5 N in downward direction.\n3. Since there is no force other than gravitational force acting on pebble, during the whole process F = mg = 0.5 N. Note that pebble moves in opposite direction because of its initial velocity. 
The situation remains the same for a pebble thrown at an angle.\n\nQuestion 3.\nGive the magnitude and direction of the net force acting on a stone of mass 0.1 kg,\n\n1. Just after it is dropped from the window of a stationary train.\n2. Just after it is dropped from the window of a train running at a constant velocity of 36 km/h\n3. Just after it is dropped from the window of a train accelerating with 1 m s-2\n4. Lying on the floor of a train which is accelerating with 1 m s-2, the stone being at rest relative to the train. Neglect air resistance throughout.\n\n1. Gravitational force is acting on the stone in the downward direction, F = mg = 0.1 × 10 = 1 N\n2. Once the stone is dropped from the train, the only force acting on it is the gravitational force = 1 N.\n3. Since there is no contact between the train and the stone, the force acting on it is again the gravitational force.\n4. Since the stone is lying on the floor of the train, its acceleration is the same as that of the train. Hence the force exerted by the train on the stone is F = ma = 0.1 × 1 = 0.1 N in the direction of motion of the train. The weight is balanced by the normal reaction of the floor of the train.\n\nQuestion 4.\nOne end of a string of length L is connected to a particle of mass m and the other to a small peg on a smooth horizontal table. If the particle moves in a circle with speed v, the net force on the particle (directed towards the centre) is:\n\n1. T.\n2. T – $$\frac{m v^{2}}{L}$$\n3. T + $$\frac{m v^{2}}{L}$$\n4. 0\n\nT is the tension in the string. [Choose the correct alternative].\n1. The centripetal force necessary for the particle to move in a circular path is provided by the tension in the string. Hence the net force on the particle is just the tension T, and alternative (1) is correct.",
null,
"Question 5.\nA constant retarding force of 50 N is applied to a body of mass 20 kg moving initially with a speed of 15 m s-1. How long does the body take to stop?\nGiven F = – 50 N (retarding force)\nm = 20 kg\nu = 15 m/s, V = 0 m/s\nt = ?\nF = ma ⇒ a = $$\frac{\mathrm{F}}{\mathrm{m}}$$ = $$\frac{-50}{20}$$ = – 2.5 m/s²\nbut we know that\nV = u + at\n0 = 15 + (- 2.5) t\n⇒ t = 6 s.\n\nQuestion 6.\nA constant force acting on a body of mass 3.0 kg changes its speed from 2.0 m s-1 to 3.5 m s-1 in 25 s. The direction of the motion of the body remains unchanged. What is the magnitude and direction of the force?\nGiven m = 3 kg\nu = 2 m/s\nv = 3.5 m/s\nt = 25 s.\nF = ma\nbut we know that a = $$\frac{v-u}{t}$$\n∴ F = m $$\left(\frac{v-u}{t}\right)$$ = 3 $$\left(\frac{3.5-2}{25}\right)$$\n= 0.18 N\nSince the acceleration 'a' is positive, the force acts in the direction of motion.\n\nQuestion 7.\nA body of mass 5 kg is acted upon by two perpendicular forces 8 N and 6 N. Give the magnitude and direction of the acceleration of the body.",
null,
"Given Fa = 8 N\nFb = 6 N\nm = 5 kg\nThe resultant force F is given by,\nF = $$\sqrt{\mathrm{Fa}^{2}+\mathrm{Fb}^{2}}$$\n= $$\sqrt{64+36}$$ = 10 N\n∴ a = F/m = 10/5 = 2 m/s²\nwe know from the figure that\ntan θ = $$\frac{F_{b}}{F_{a}}=\frac{6}{8}$$ = 0.75\n⇒ θ = tan⁻¹(0.75) = 37° with the 8 N force\n\nQuestion 8.\nThe driver of a three wheeler moving with a speed of 36 km/h sees a child standing in the middle of the road and brings his vehicle to rest in 4.0 s just in time to save the child. What is the average retarding force on the vehicle? The mass of the three-wheeler is 400 kg and the mass of the driver is 65 kg.\nGiven, u = 36 km/h\n= 36 × $$\frac{1000}{3600}$$ m/s\n= 10 m/s\nv = 0\nt = 4 s\nm = 400 + 65 = 465 kg\na = $$\frac{v-u}{t}$$ = $$\frac{-10}{4}$$ = – 2.5 m/s²\nF = ma = 465 × (- 2.5)\n= – 1162.5 N (retarding force)\n\nQuestion 9.\nA rocket with a lift-off mass 20,000 kg is blasted upwards with an initial acceleration of 5.0 m s-2. Calculate the initial thrust (force) of the blast.\nGiven m = 20000 kg\na = 5 m s-2 (against gravity); since the rocket has to move upwards against gravity, the total initial thrust of the blast is given by\nF = ma + mg\n= m (a + g) = 20000 (5 + 9.8)\n= 2.96 × 10⁵ N.\n\nQuestion 10.\nA body of mass 0.40 kg moving initially with a constant speed of 10 m s-1 to the north is subject to a constant force of 8.0 N directed towards the south for 30 s. 
Take the instant the force is applied to be t = 0, the position of the body at that time to be x = 0, and predict its position at t = – 5 s, 25 s, 100 s.\nGiven mass m = 0.4 kg\nRetarding force F = – 8 N\n∴ acceleration a = $$\frac{F}{m}$$ = $$\frac{-8}{0.4}$$ = – 20 m/s² (acting only for 0 ≤ t ≤ 30 s)\nat t = – 5 s\na = 0 for t < 0\n∴ s = ut + $$\frac{1}{2}$$ at²\n= 10 (-5) + $$\frac{1}{2}$$ (0)(-5)²\n= – 50 m\nat t = 25 s\nS = ut + $$\frac{1}{2}$$ at²\n= 10 (25) + $$\frac{1}{2}$$ (- 20) (25)²\n= – 6000 m\nat t = 100 s\nsince the retarding force acts only for the first 30 s,\nS1 = ut + $$\frac{1}{2}$$ at²\n= 10 (30) + $$\frac{1}{2}$$ (-20) (30)²\n= – 8700 m\nafter 30 s it moves with a constant velocity.\nV = u + at\n= 10 – 20 (30)\n= – 590 m/s\nfor the remaining 70 s,\nS2 = – 590 (70) = – 41300 m\n∴ position at t = 100 s: x = S1 + S2 = – 50000 m.",
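The piecewise reasoning in Q10 — uniform motion for t < 0, uniform deceleration for 0 ≤ t ≤ 30 s, uniform motion afterwards — can be checked numerically. This is a sketch; the function and variable names are mine, not the textbook's:

```python
def position(t, u=10.0, a=-20.0, t_force=30.0):
    """Position of the Q10 body (x = 0 at t = 0, northward positive).
    The 8 N retarding force acts only for 0 <= t <= 30 s; before t = 0
    and after t = 30 s the body moves with constant velocity."""
    if t < 0:
        return u * t                        # uniform motion before the force
    if t <= t_force:
        return u * t + 0.5 * a * t * t      # uniformly decelerated segment
    x30 = u * t_force + 0.5 * a * t_force ** 2  # position when force stops
    v30 = u + a * t_force                       # velocity when force stops
    return x30 + v30 * (t - t_force)        # uniform motion after 30 s

for t in (-5, 25, 100):
    print(t, position(t))  # -50.0, -6000.0, -50000.0
```

The three printed values match the solution's – 50 m, – 6000 m, and – 50000 m.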
null,
"Question 11.\nA truck starts from rest and accelerates uniformly at 2.0 m s-2. At t = 10 s, a stone is dropped by a person standing on the top of the truck (6 m high from the ground). What are the\n\n1. velocity, and\n2. acceleration of the stone at t = 11 s? (Neglect air resistance.)\n\nWe have, Vt = u + at\ni.e. Vt = 0 + 2 (10) = 20 m/s\nDuring the next one second, the stone is under the effect of gravity.",
null,
Vg = u + gt\n= 0 + 9.8 (1) = 9.8 m/s\n∴ Net velocity of stone at t = 11 s\nv = $$\sqrt{V_{t}^{2}+V_{g}^{2}}$$ = 22.27 m/s\ntan θ = $$\frac{V_{g}}{V_{t}}=\frac{9.8}{20}$$\n⇒ θ = 26.1° with the horizontal.\n\n(b) The moment the stone is dropped from the truck, only the gravitational force acts on it.\n∴ acceleration = g = 9.8 m/s².\n\nQuestion 12.\nA bob of mass 0.1 kg hung from the ceiling of a room by a string 2 m long is set into oscillation. The speed of the bob at its mean position is 1 m s-1. What is the trajectory of the bob if the string is cut when the bob is\n\n1. at one of its extreme positions\n2. at its mean position.\n\n1. When the bob is at one of its extreme positions its velocity is zero. Hence if the string is cut, it will fall straight down due to the gravitational force.\n\n2. At the mean position the bob has a horizontal velocity of 1 m/s. If the string is cut, the bob is acted on only by gravity, with a vertical acceleration g = 9.8 m/s². Hence the bob behaves like a projectile and follows a parabolic path.\n\nQuestion 13.\nA man of mass 70 kg stands on a weighing scale in a lift which is moving\n\n1. upwards with a uniform speed of 10 m s-1\n2. downwards with a uniform acceleration of 5 m s-2\n3. upwards with a uniform acceleration of 5 m s-2. What would be the readings on the scale in each case?\n4. What would be the reading if the lift mechanism failed and it hurtled down freely under gravity?\n\nThe weighing machine measures the reaction R which is nothing but the apparent weight.\n1. When the lift is moving upwards with uniform speed,\nR = mg = 70 × 9.8 = 686 N.\n\n2. When the lift moves downwards with an acceleration of 5 m/s²,\nR = m (g – a) = 70 (9.8 – 5) = 336 N.\n\n3. When the lift moves upwards with an acceleration of 5 m/s²,\nR = m (g + a) = 70 (9.8 + 5) = 1036 N.\n\n4. If the lift falls down freely under gravity,\nR = m (g – g) = 0.\n\nQuestion 14.\nFigure shows the position-time graph of a particle of mass 4 kg. What is the\n\n1.
force on the particle for t < 0, t > 4 s, 0 < t < 4 s?\n2. impulse at t = 0 and t = 4 s? (Consider one-dimensional motion only)",
null,
"1. From the position-time graph we can see that the particle is at rest during t < 0 and t > 4 s. Hence the net force on it is zero for t < 0 and t > 4 s. During 0 < t < 4 s, the graph has a constant slope, i.e., the particle\nhas a uniform velocity = 3/4 = 0.75 m/s.\nHence the net force is zero.\n\n2. at t = 0, u = 0, v = 0.75\nimpulse = change in momentum\n= M (v – u) = 4 (0.75 – 0)\n= 3 kg m/s\nat t = 4 s, u = 0.75, v = 0\nimpulse = 4 (0 – 0.75) = – 3 kg m/s.\n\nQuestion 15.\nTwo bodies of masses 10 kg and 20 kg respectively kept on a smooth, horizontal surface are tied to the ends of a light string, and a horizontal force F = 600 N is applied to\n\n1. A\n2. B along the direction of the string. What is the tension in the string in each case?",
null,
"1. Acceleration of the whole system a = $$\\frac{F}{M_{1}+M_{2}}=\\frac{600}{10+20}$$ = 20ms-2\nThe net force, acting on A\n= 600 – T = m1 (a)\n∴ 600 – T = 10 × 20\n⇒ T = 400 N.\n\n2. similar to the above case.\nThe net force acting on B\n= 600 – T = M2 a\n∴ 600 – T = 20 × 20\n⇒ T = 200 N.",
null,
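The tensions found in Question 15 can be cross-checked numerically. A minimal Python sketch (the helper name `string_tension` is illustrative, not from the text): whichever block the force is applied to, the string only has to drag the block behind it.

```python
def string_tension(F, m_front, m_back):
    """Tension in a light string joining two blocks on a smooth surface
    when force F is applied to the front block: T = m_back * a."""
    a = F / (m_front + m_back)   # common acceleration of the system
    return m_back * a

# Case 1: F = 600 N applied to A (10 kg); the string drags B (20 kg).
print(string_tension(600, 10, 20))  # 400.0 N
# Case 2: F applied to B (20 kg); the string drags A (10 kg).
print(string_tension(600, 20, 10))  # 200.0 N
```

Both values agree with the solution above.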
"Question 16.\nTwo masses 8 kg and 12 kg are connected at the two ends of a light inextensible string that goes over a frictionless pulley. Find the acceleration of the masses, and the tension in the string when the masses are released.",
null,
"Let ‘a’ be the acceleration of the masses. Then\nfor block m1, T – m1g = m1a → (1)\nfor block m2, m2g – T = m2 a → (2)\n(1) + (2): (m2 – m1) g = (m1 + m2) a\n⇒ a = $$\frac{12-8}{12+8}$$ g = 2 m/s²\nsubstituting in (1)\nT – 8 × 10 = 8 × 2\n⇒ T = 96 N\n\nQuestion 17.\nA nucleus is at rest in the laboratory frame of reference. Show that if it disintegrates into two smaller nuclei the products must move in opposite directions.\nLet m1 & m2 be the masses of the smaller nuclei and let $$\vec{v}_{1}$$ & $$\vec{v}_{2}$$ be their velocities.\nAccording to the law of conservation of momentum,\nInitial momentum = final momentum\n0 = m1$$\vec{v}_{1}$$ + m2$$\vec{v}_{2}$$\nOr $$\vec{v}_{2}=-\frac{m_{1}}{m_{2}} \vec{v}_{1}$$\nHence $$\vec{v}_{1}$$ & $$\vec{v}_{2}$$ are in opposite directions.\n\nQuestion 18.\nTwo billiard balls each of mass 0.05 kg moving in opposite directions with speed 6 m s-1 collide and rebound at the same speed. What is the impulse imparted to each ball due to the other?\nImpulse = change in momentum\nInitial momentum of each ball = 0.05 × 6\n= 0.3 kg m/s\nFinal momentum of each ball = 0.05 × (-6)\n= – 0.3 kg m/s\nImpulse = 0.6 kg m/s (in magnitude).\n\nQuestion 19.\nA shell of mass 0.020 kg is fired by a gun of mass 100 kg. If the muzzle speed of the shell is 80 m s-1, what is the recoil speed of the gun?\nGiven\nm1 = 0.02 kg, m2 = 100 kg\nv1 = 80 m/s, v2 = ?\nAccording to the law of conservation of momentum\nm1u1 + m2u2 = m1v1 + m2v2\n0.02 (0) + 100 (0) = 0.02 × 80 + 100 × v2\nv2 = – 1.6 × 10-2 m/s\n\nQuestion 20.\nA batsman deflects a ball by an angle of 45° without changing its initial speed which is equal to 54 km/h. What is the impulse imparted to the ball? (Mass of the ball is 0.15 kg.)",
null,
"Given\nm = 0.15 kg\nu = 54 kmph\n= 54 × $$\frac{1000}{3600}$$\n= 15 m/s\nAlong x-axis\nInitial velocity = – u cos θ\n= – 15 cos (22.5°)\nFinal velocity = u cos θ\n= 15 cos (22.5°)\n∴ Impulse = change in momentum\n= 0.15 [15 cos (22.5°) – (-15 cos (22.5°))]\n= 4.16 kg m/s.\nAlong y-axis\nInitial velocity = Final velocity = – u sin θ\n∴ No impulse along the y-axis.\n\nQuestion 21.\nA stone of mass 0.25 kg tied to the end of a string is whirled round in a circle of radius 1.5 m with a speed of 40 rev/min in a horizontal plane. What is the tension in the string? What is the maximum speed with which the stone can be whirled around if the string can withstand a maximum tension of 200 N?\nGiven m = 0.25 kg\nr = 1.5 m\nω = 40 rpm = $$\frac{40 \times 2 \pi}{60}$$ rad/s\n= $$\frac{4}{3}$$π rad/s\nNow Tension T = mrω²\n= 0.25 × 1.5 × $$\left(\frac{4}{3} \pi\right)^{2}$$\n≈ 6.6 N\nTmax = 200 N\nTmax = $$\frac{\mathrm{m} \mathrm{v}_{(\mathrm{max})}^{2}}{\mathrm{r}}$$\n⇒ Vmax = $$\sqrt{\frac{200 \times 1.5}{0.25}}$$ = 34.6 m/s.\n\nQuestion 22.\nIf, in Exercise 5.21, the speed of the stone is increased beyond the maximum permissible value, and the string breaks suddenly, which of the following correctly describes the trajectory of the stone after the string breaks:\n\n1. the stone moves radially outwards,\n2. the stone flies off tangentially from the instant the string breaks,\n3. the stone flies off at an angle with the tangent whose magnitude depends on the speed of the particle?\n\nThe answer is 2. When a particle moves in a circular path, at each point the velocity is directed along the tangent to the circular path. Hence when the string breaks, the stone moves along the tangent in accordance with Newton’s 1st law of motion.\n\nQuestion 23.\nExplain why\n\n1. a horse cannot pull a cart and run in empty space.\n2. passengers are thrown forward from their seats when a speeding bus stops suddenly.\n3. it is easier to pull a lawnmower than to push it.\n4. 
a cricketer moves his hands backward while holding a catch.\n\n1. The horse pushes the ground backwards and the reaction of the ground pushes the horse (and cart) forward. In empty space there is nothing to push against, so no reaction force is available and the horse cannot pull the cart and run.\n2. The passengers continue to move forward when a speeding bus brakes, because of their inertia of motion. Hence they are thrown forward from their seats.\n3. A lawn mower is pulled or pushed by applying a force at an angle. When it is pushed, the vertical component of the applied force adds to the weight, so the normal force (N) is more than the weight. This results in greater friction and hence a greater applied force is needed to move it. It is just the opposite while pulling.\n4. The ball has a large momentum. If the player tries to stop it instantaneously, the time of contact is small, which results in a large force that may hurt his hand. Hence he moves his hands backward, which increases the time of contact and so reduces the force.\n\n### 1st PUC Physics Laws of Motion Additional Exercises Questions and Answers\n\nQuestion 24.\nThe figure shows the position-time graph of a body of mass 0.04 kg. Suggest a suitable physical context for this motion. What is the time between two consecutive impulses received by the body? What is the magnitude of each impulse?",
null,
"The graph could be representing a ball rebounding between two walls separated by 2 cm with a constant velocity in free space. After receiving the impulse ball changes its direction. Hence time between two impulses is 2 seconds.\nvelocity = $$\\frac{\\text { displacement }}{\\text { time }}$$ = $$\\frac{2 \\times 10^{-2}}{2}$$ = 0.01m/s\nInitial momentum,\nmu = 0.04 × 10-2kgm/s\nFinal momentum,\nmv = – 0.04 × 10-2kgm/s\n∴ Change in momentum = 0.08 × 10-2kgm/s\n\nQuestion 25.\nFigure 5.18 shows a man standing stationary with respect to a horizontal conveyor belt that is accelerating with 1 m s-2. What is the net force on the man? If the coefficient of static friction between the man’s shoes and the belt is 0.2, up to what acceleration of the belt can the man continue to be stationary relative to the belt?\n(Mass of the man = 65 kg.)",
null,
"Given acceleration of conveyor belt a = 1 m s-2\nµs = 0.2\nmass of man m = 65 kg\nIn the accelerating frame of the belt, the man experiences a pseudo force Fs = ma, as shown in the figure. Hence, for him to remain in equilibrium, static friction must supply a force F = – Fs, of magnitude ma = 65 × 1 = 65 N, in the direction of motion of the belt.\n∴ Net force acting on the man = 65 N. The man continues to be stationary with respect to the belt as long as the maximum static friction is at least equal to the force required, i.e.",
null,
"µs N = mamax\nµs . m .g = mamax\na(max) = µs × g\n= 0.2 × 10\n= 2m s-2\n\nQuestion 26.\nA stone of mass m tied to the end of a string revolves in a vertical circle of radius R. The net forces at the lowest and highest points of the circle directed vertically downwards are : [Choose the correct alternative]",
null,
"T1 and v1 denote the tension and speed at the lowest point. T2 and v2 denote corresponding values at the highest point.",
null,
"The net force acting on the stone at the lowest point, directed vertically downward, = mg – T1, and at the highest point = mg + T2. Hence option (a) is the correct answer.\n\nQuestion 27.\nA helicopter of mass 1000 kg rises with a vertical acceleration of 15 ms-2. The crew and the passengers weigh 300 kg. Give the magnitude and direction of the\n\n1. force on the floor by the crew and passengers,\n2. action of the rotor of the helicopter on the surrounding air,\n3. force on the helicopter due to the surrounding air.\n\nmass of helicopter = mh = 1000 kg, mass of crew = mc = 300 kg\nvertical acceleration, a = 15 m/s².\n1. force on the floor by crew & passengers = apparent weight of crew & passengers\n= mc (g + a)\n= 300 (10 + 15)\n= 7500 N.\n\n2. The action of the rotor of the helicopter on the surrounding air is vertically downwards. The helicopter rises on account of the reaction to this force\n= (mh + mc) (g + a)\n= (1000 + 300) (10 + 15)\n= 32500 N.\n\n3. force on the helicopter due to the surrounding air is nothing but the reaction to the action of the rotor = 32500 N in the vertically upward direction.",
null,
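All three magnitudes in Question 27 come from the single relation m(g + a). A minimal Python sketch, assuming g = 10 m/s² as in the solution (the function name is illustrative, not from the text):

```python
def upward_force(m, a, g=10.0):
    """Force needed to give mass m an upward acceleration a
    (equivalently, the apparent weight m(g + a))."""
    return m * (g + a)

print(upward_force(300, 15))    # 7500.0 N: crew and passengers on the floor
print(upward_force(1300, 15))   # 32500.0 N: rotor on air, and air on helicopter
```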
"Question 28.\nA stream of water flowing horizontally with a speed of 15 m s-1 gushes out of a tube of cross-sectional area 10-2 m² and hits a vertical wall nearby. What is the force exerted on the wall by the impact of water, assuming it does not rebound?\nThe volume of water hitting the wall per second\n= (Area × Velocity) of the stream of water\n= 10-2 × 15\n= 0.15 m3s-1\ndensity of water = 1000 kg/m3\n∴ mass of water hitting the wall per second\n= 0.15 × 1000 = 150 kg/s\nInitial momentum of water hitting the wall per second\n= 150 × 15\n= 2250 kg m/s², i.e. 2250 N\nFinal momentum per second = 0\n∴ Force exerted on the wall = change in momentum per second\n= 2250 N.\n\nQuestion 29.\nTen one-rupee coins are put on top of each other on a table. Each coin has a mass m. Give the magnitude and direction of\n\n1. the force on the 7th coin (counted from the bottom) due to all the coins on its top,\n2. the force on the 7th coin by the 8th coin,\n3. the reaction of the 6th coin on the 7th coin.\n\n1. There are 3 coins above the 7th coin.\nHence force = (3m) g\n= 3mg N\n\n2. The 8th coin has two coins above it. Hence the force exerted by the 8th coin on the 7th is its weight plus the weight of the two coins\n= mg + 2 mg\n= 3 mg N.\n\n3. The 6th coin is under the weight of the 4 coins above it.\nReaction R = – F = – 4 mg N.\n\nQuestion 30.\nAn aircraft executes a horizontal loop at a speed of 720 km/h with its wings banked at 15°. What is the radius of the loop?\nυ = 720 km/hr = 720 × $$\frac{1000}{3600}$$\n= 200 m/s\nθ = 15°",
null,
"From the relation\ntan θ = $$\\frac{v^{2}}{r g}$$\n⇒ r = $$\\frac{v^{2}}{\\tan \\theta \\times g}$$ = $$\\frac{200 \\times 200}{\\tan 15^{\\circ} \\times 10}$$\n= $$\\frac{200 \\times 200}{0.2679 \\times 10}$$\n= 14931 m\n\nQuestion 31.\nA train runs along an unbanked circular track of radius 30 m at a speed of 54 km/h. The mass of the train is 106 kg. What provides the centripetal force required for this purpose The engine or the rails? What is the angle of banking required to prevent wearing out of the rail?\nvelocity υ = 54 km/h = 54 × $$\\frac{1000}{3600}$$= 15 m/s\nmass m = 106 kg\nThe centripetal force F = $$\\frac{m v^{2}}{r}$$ is provided by the lateral frictional force between rails and wheels of train.\nThe angle of banking required to prevent the wearing out of rail\ntan θ = $$\\frac{v^{2}}{r g}$$ = $$\\frac{15 \\times 15}{30 \\times 10}$$ = 0.75\nθ = tan-1 (0.75) ≈ 37°.",
null,
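Questions 30 and 31 both use the banking relation tan θ = v²/(rg), once solved for r and once for θ. A short Python sketch (function names illustrative, g = 10 m/s² as in the solutions):

```python
import math

def banking_angle_deg(v, r, g=10.0):
    """Angle of banking from tan(theta) = v^2 / (r g)."""
    return math.degrees(math.atan(v**2 / (r * g)))

def loop_radius(v, theta_deg, g=10.0):
    """The same relation solved for the radius r."""
    return v**2 / (math.tan(math.radians(theta_deg)) * g)

print(banking_angle_deg(15, 30))  # about 36.9 degrees (Question 31)
print(loop_radius(200, 15))       # about 1.49e4 m (Question 30)
```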
"Question 32.\nA block of mass 25 kg is raised by a 50 kg man in two different ways as shown in Fig. What is the action on the floor by the man in the two cases? If the floor yields to a normal force of 700 N, which mode should the man adopt to lift the block without the floor yielding?",
null,
"In the first case the man applies an upward force of 25 kg weight. Hence the action on the floor by the man is\n= 50 kg weight + 25 kg weight = 75 kg weight\n= 75 × 10\n= 750 N.\n\nIn the second case the man applies a downward force of 25 kg weight on the rope. Hence the action on the floor by the man is\n= 50 kg weight – 25 kg weight = 25 × 10\n= 250 N.\n(the remaining load is borne by the ceiling through the pulley.) Hence the man should adopt the second mode.\n\nQuestion 33.\nA monkey of mass 40 kg climbs on a rope (Fig.) which can stand a maximum tension of 600 N. In which of the following cases will the rope break: the monkey\n\n1. climbs up with an acceleration of 6 m s-2\n2. climbs down with an acceleration of 4 m s-2\n3. climbs up with a uniform speed of 5 m s-1\n4. falls down the rope nearly freely under gravity? (Ignore the mass of the rope).",
null,
"1. When monkey climbs up with an acceleration ‘a’ then\nT – mg = ma\nOr T = m (g + a)\n= 40 (10 + 6)\n= 640 N\nwhich exceeds the maximum tension which rope can withstand (600 N), hence rope breaks.\n\n2. when monkey is climbing down with an acceleration a\nmg – T = ma\nor T = m (g – a)\n= 40 (10-4)\n= 240 N\nThe rope will not break.\n\n3. when the monkey climbs up with uniform speed then\nT = mg\n= 40 × 10\n= 400 N\nThe rope will not break.\n\n4. when the monkey is falling freely, it would be a state of weightlessness. So, there won’t be any tension in the rope hence it will not break.\n\nQuestion 34.\nTwo bodies A and B of masses 5 kg and 10 kg in contact with each other rest on a table against a rigid wail (Fig). The coefficient of friction between the bodies and the table is 0.15. A force of 200 N is applied horizontally to A. What are\n\n1. the reaction of the partition\n2. the action-reaction forces between A and B?\n3. What happens when the wall is removed? Does the answer to (b) change, when the bodies are in motion? ignore the difference between µs and µk.",
null,
"1. As the blocks are at rest against the rigid wall, the reaction of the partition = – (force applied on A)\n= 200 N towards left.\n\n2. The action-reaction forces between A & B are 200 N each.\n\n3. When the wall is removed, the pushing force gives acceleration to the system. Taking the coefficient of friction into account,\n200 – µ (m1 + m2) g = (m1 + m2) a\na = $$\frac{200-0.15(5+10) \times 10}{(5+10)}$$\n= 11.8 ms-2\nLet the force exerted by A on B be FBA. Considering the equilibrium of block A alone,\n200 – fk1 = m1 a + FBA\nFBA = 200 – µ m1 g – m1 a\n= 200 – 7.5 – 59\n= 133.5 N towards left.\n\nQuestion 35.\nA block of mass 15 kg is placed on a long trolley. The coefficient of static friction between the block and the trolley is 0.18. The trolley accelerates from rest with 0.5 m s-2 for 20 s and then moves with uniform velocity. Discuss the motion of the block as viewed by\n\n1. a stationary observer on the ground,\n2. an observer moving with the trolley.\n\n1. Force experienced by the block\nF = ma = 15 × 0.5 = 7.5 N\nForce of friction, Ff = µ mg = 0.18 × 15 × 10 = 27 N\nSince the force experienced by the block is less than the frictional force, it will remain stationary with respect to the trolley. For an observer on the ground the block appears to move with the same acceleration as the trolley.\n\n2. For an observer moving with the trolley the block appears to be stationary, as there is no relative motion between him and the block.\n\nQuestion 36.\nThe rear side of a truck is open and a box of 40 kg mass is placed 5 m away from the open end as shown in Fig. The coefficient of friction between the box and the surface below it is 0.15. On a straight road, the truck starts from rest and accelerates with 2 m s-2. At what distance from the starting point does the box fall off the truck? (Ignore the size of the box).",
null,
"Force experienced by the box F = ma = 40 × 2 = 80 N\nfrictional force Ffriction = µ mg = 0.15 × 40 × 10 = 60 N\n∴ Net force on the box = F – Ffriction = 80 – 60 = 20 N\n∴ The backward acceleration experienced by the box is given by,\na = $$\frac{\text { Net force }}{\text { mass }}$$ = $$\frac{20}{40}$$ = 0.5 m/s²\nLet ‘t’ be the time taken by the box to move through 5 m backwards.\nWe have, S = ut + $$\frac{1}{2}$$ at²\n∴ 5 = 0 × t + $$\frac{1}{2}$$ × 0.5 × t²\nt = $$\sqrt{20} \approx$$ 4.47 s\nThe distance travelled by the truck in t = 4.47 s is\ns = ut + $$\frac{1}{2}$$ at² (a = 2 m/s²)\ns = 0 × $$(\sqrt{20})$$ + $$\frac{1}{2}$$ × 2 $$(\sqrt{20})^{2}$$\ns = 20 m\nThe box will fall off the truck 20 m from the starting point.\n\nQuestion 37.\nA disc revolves with a speed of $$33 \frac{1}{3}$$ rev/min, and has a radius of 15 cm. Two coins are placed at 4 cm and 14 cm away from the centre of the record. If the co-efficient of friction between the coins and the record is 0.15, which of the coins will revolve with the record?\nIf a coin is to revolve with the record then the force of friction must be enough to provide the necessary centripetal force,\ni.e. mr ω² ≤ µs mg or r ≤ $$\frac{\mu_{\mathrm{s}} \mathrm{g}}{\omega^{2}}$$\nHere, ω = $$33 \frac{1}{3}$$ rpm",
null,
"$$\approx$$ 0.12 m\nA coin placed within a radial distance of 0.12 m will revolve with the record. Hence the coin at 4 cm will revolve with the record.\n\nQuestion 38.\nYou may have seen in a circus a motorcyclist driving in vertical loops inside a ‘deathwell’ (a hollow spherical chamber with holes, so the spectators can watch from outside). Explain clearly why the motorcyclist does not drop down when he is at the uppermost point, with no support from below. What is the minimum speed required at the uppermost position to perform a vertical loop if the radius of the chamber is 25 m?\nWhen the motorcyclist is at the highest point of the death-well, the normal reaction R on the motorcycle from the ceiling of the chamber acts downwards. His weight mg also acts downwards. Together these two forces provide the necessary centripetal force.\n∴ R + mg = $$\frac{m v^{2}}{r}$$ → (1)\nThe minimum speed required to perform the vertical loop is given by equation (1) when R = 0\n∴ mg = $$\frac{m v^{2}_{(\min )}}{r}$$\nυmin = $$\sqrt{\mathrm{rg}}$$ = $$\sqrt{25 \times 10}$$\n≈ 15.8 m/s.",
null,
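The minimum-speed condition of Question 38 (normal reaction R = 0 at the top, so mg = mv²/r) reduces to v = √(rg). A minimal Python sketch with g = 10 m/s², as used in the solution:

```python
import math

def v_min_at_top(r, g=10.0):
    """Minimum speed at the top of a vertical loop: R = 0 gives mg = m v^2 / r."""
    return math.sqrt(r * g)

print(v_min_at_top(25))  # about 15.8 m/s, matching the answer above
```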
"Question 39.\nA 70 kg man stands in contact against the inner wall of a hollow cylindrical drum of radius 3 m rotating about its vertical axis with 200 rev/min. The coefficient of friction between the wall and his clothing is 0.15. What is the minimum rotational speed of the cylinder to enable the man to remain stuck to the wall (without falling) when the floor is suddenly removed?",
null,
"The horizontal force N exerted by the wall on the man provides the necessary centripetal force.\n∴ N = m ω² r\nThe static frictional force (vertically upwards) balances the weight of the man, mg.\nThe man remains stuck to the wall after the floor is removed if mg ≤ µ N,\ni.e., mg ≤ µ m r ω²\n∴ The minimum angular speed of rotation is\nωmin = $$\sqrt{\frac{g}{\mu_{s} r}}$$ = $$\sqrt{\frac{10}{0.15 \times 3}}$$ ≈ 4.7 rad/s\nThus the minimum rotational speed of the cylinder required to hold the man stuck to the wall is about 4.7 rad/s\n\nQuestion 40.\nA thin circular loop of radius R rotates about its vertical diameter with an angular frequency ω\n\n1. Show that a small bead on the wire loop remains at its lowermost point for ω ≤ $$\sqrt{g / R}$$\n2. What is the angle made by the radius vector joining the centre to the bead with the vertical downward direction for ω = $$\sqrt{2 \mathrm{g} / \mathrm{R}}$$? Neglect friction.",
null,
"1. Let the radius vector joining the bead to the centre of the wire make an angle θ with the vertical downward direction. Let N be the normal reaction. From the figure,\nmg = N cos θ → (1)\nmr ω² = N sin θ → (2)\nor m (R sin θ) ω² = N sin θ\nmRω² = N\nOn substituting in (1)\nmg = (m Rω²) cos θ\nOr ω = $$\sqrt{\frac{g}{\mathrm{R} \cos \theta}}$$\nFor the bead to remain in the lowermost position θ = 0\n⇒ cos θ = 1\n⇒ ω ≤ $$\sqrt{\frac{\mathrm{g}}{\mathrm{R}}}$$\n\n2. when ω = $$\sqrt{\frac{2 g}{R}}$$\ncos θ = $$\frac{g}{R \omega^{2}}$$ = $$\frac{g}{R\left(\frac{2 g}{R}\right)}$$ = $$\frac{1}{2}$$\n⇒ θ = 60°\n\n### 1st PUC Physics Laws of Motion One Mark Questions and Answers\n\nQuestion 1.\nDefine force.\nForce is that external agent which, acting on a body, changes its state of rest or of uniform motion along a straight line.\n\nQuestion 2.\nDefine inertia.\nThe tendency of a body to oppose any change in its state of rest or of uniform motion is called inertia.\n\nQuestion 3.\nState Newton’s first law of motion (or Law of inertia).\nEvery body continues to be in its state of rest or of uniform motion along a straight line unless compelled to change that state by an external force.\n\nQuestion 4.\nDefine linear momentum.\nThe momentum of a body is defined as the product of its mass and velocity: $$\overrightarrow{\mathrm{P}} = m \vec{v}$$\n\nQuestion 5.\nState Newton’s second law of motion.\nThe rate of change of momentum of a body is directly proportional to the applied force and takes place in the direction of the force.\n\nQuestion 6.\nDefine newton.\nOne newton is that force which, acting on a body of mass 1 kg, produces an acceleration of 1 m/s².\n\nQuestion 7.\nGive the dimensional formula for\na) Force\nb) Momentum\nForce – MLT-2\nMomentum – MLT-1",
null,
"Question 8.\nWhat quantity is conserved during rocket propulsion?\nLinear momentum.\n\nQuestion 9.\nAction and reaction forces do not cancel each other. Why?\nAction and reaction forces do not cancel each other because they act on different objects.\n\nQuestion 10.\nWhat is the apparent weight measured when a person of mass ‘m’ is standing in a lift accelerating with an acceleration ‘a’\n(i) Downwards?\nThe weighing machine measures the reaction force given by the floor. So when the lift is accelerating\ni) downwards, the weight measured is less:\n∴ apparent weight = m (g – a)\n\nQuestion 11.\nii) Upwards?\nii) Upwards, the weight measured is more:\n∴ apparent weight = m (g + a)\n\nQuestion 12.\nState Newton’s third law of motion.\nFor every action, there is an equal and opposite reaction.\n\nQuestion 13.\nWhich is the weakest force in nature?\nGravitational force.\n\nQuestion 14.\nIs it possible for the weight of a body to be zero?\nYes, whenever a body is in free fall its weight is zero, but its mass remains unaltered.\n\nQuestion 15.\nTwo masses are in the ratio 1:2. What is the ratio of their inertia?\nThe inertia of a body is directly proportional to its mass. Therefore their inertia is also in the ratio 1:2.\n\nQuestion 16.\nPassengers in buses tend to fall back as the bus accelerates. Why?\nDue to inertia, the passengers tend to continue in their state of rest when the bus moves off by accelerating.\n\nQuestion 17.\nA cricket player catches the ball by moving his hand along the direction of motion of the ball. Why?\nBy moving his hand along the direction of motion of the ball, the player increases the time of contact, thus reducing the force felt.\n\nQuestion 18.\nA stone breaks the window glass, but a bullet makes only a hole. 
Why?\nSince the velocity of the bullet is much greater than that of the stone, the bullet is in contact with the glass for a very short time that the glass can’t give enough resistance.\n\nQuestion 19.\nCan a moving body be in equilibrium?\nYes, If a body is in a state of uniform motion in a straight line (net force acting on it is zero), its a moving body in equilibrium.\n\nQuestion 20.\nState law of conservation of momentum?\nWhen the net external force on a system is zero, then there is no change in momentum of the system.",
null,
"Question 21.\nWhat is friction?\nThe property by virtue of which an opposing force is created between two bodies in contact, which opposes their relative motion is called friction.\n\nQuestion 22.\nWhat is frictional force?\nThe force, which opposes the motion of one body over the other in contact with it, is called frictional force\n\nQuestion 23.\nWhat is static friction?\nFrictional force, which balances the applied force when the body is in the state of rest is called static friction.\n\nQuestion 24.\nWhat is limiting friction?\nThe maximum static friction that a body can exert on the other body in contact with it is called limiting friction.\n\nQuestion 25.\nWhat is sliding friction?\nThe frictional force that opposes the relative motion between the surfaces when one body slides over the other body is called sliding friction.\n\nQuestion 26.\nWhat is rolling friction?\nRolling friction is defined as the force of friction acting when a body rolls over the other body.\n\nQuestion 27.\nDefine angle of friction.\nAngle made by the resultant of normal reaction and limiting friction with the normal reaction is called angle of friction.\n\nQuestion 28.\nDefine angle of repose.\nAngle of repose is defined as the angle that an inclined plane makes with the horizontal when a body placed on it just starts sliding.",
null,
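At the angle of repose defined above, the gravity component mg sin θ just equals the limiting friction µs mg cos θ, so θ = arctan(µs). A minimal Python sketch (this standard relation is assumed here; it is not derived in the text above):

```python
import math

def angle_of_repose_deg(mu_s):
    """Angle of repose from mg sin(theta) = mu_s * mg cos(theta)."""
    return math.degrees(math.atan(mu_s))

print(angle_of_repose_deg(1.0))   # 45.0 degrees
print(angle_of_repose_deg(0.15))  # about 8.5 degrees
```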
"Question 29.\nDefine coefficient of static friction.\nThe coefficient of static friction is defined as the ratio of the limiting friction to the normal reaction between the surfaces.\n\nQuestion 30.\nWhat are the units and dimensions of the coefficient of friction?\nThe coefficient of friction is a ratio of two forces, so it is unitless and dimensionless.\n\nQuestion 31.\nWhen a wheel is rolling, what is the direction of the friction?\nFriction is tangential to the wheel and in the direction opposite to the motion.\n\nQuestion 32.\nWhich of the following is a scalar quantity? Force, momentum & inertia.\nInertia.\n\nQuestion 33.\nIf the string of a rotating stone is cut, in which direction will the stone move?\nThe stone will move along the tangent at the point where the string was cut.\n\nQuestion 34.\nDoes a stone moving in uniform circular motion (constant speed) have no net external force on it?\nThe speed is uniform but the direction is changing, so there is a change in velocity (the acceleration is non-zero). Hence the stone is under the influence of a net external force.\n\nQuestion 35.\nA man of mass 60 kg is in a lift which is moving up with uniform speed [g = 10 m s-2]. Find the apparent weight.\nSince it is moving with uniform speed, there is no additional force (a = 0). So, apparent weight = m(g + 0) = (60 kg) × (10 m s-2)\n= 600 N.\n\nQuestion 36.\nWhat happens to the coefficient of friction if the weight of a body is doubled?\nThe coefficient of friction remains constant.\n\nQuestion 37.\nWhat provides the centripetal force for a car taking a turn on a level road?\nFrictional force.\n\nQuestion 38.\nFind the force on a body if the change in momentum of the body is 20 kg ms-1 over 5 seconds.\nForce is the rate of change of momentum\nF = $$\frac{\Delta(\text { momentum })}{t}$$ = $$\frac{20 \mathrm{kg} \mathrm{ms}^{-1}}{5 \mathrm{s}}$$ = 4 N\n\nQuestion 39.\nA 50 kg mass is subjected to a force of 5 N. 
What is the acceleration of the body?\nWe know that, F = ma\n⇒ a = $$\frac{F}{m}$$\n= $$\frac{5 \mathrm{N}}{50 \mathrm{kg}}$$ = 0.1 m s-2",
null,
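The one-mark calculations above are two forms of Newton's second law, F = Δp/t and F = ma. A minimal Python sketch (function names illustrative, not from the text):

```python
def force_from_momentum(dp, t):
    """Second law as rate of change of momentum."""
    return dp / t

def force_from_ma(m, a):
    """Second law in the constant-mass form."""
    return m * a

print(force_from_momentum(20, 5))  # 4.0 N, as in Question 38
print(force_from_ma(25e-3, 0))     # 0.0 N: no acceleration, no net force
```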
"Question 40.\nA 25 g body is moving with a uniform velocity of 5 m s-1. What is the force acting on the body?\nSince there is no change in velocity, a = 0.\n⇒ F = ma = (25 × 10-3) × (0) = 0 N\n\n### 1st PUC Physics Laws of Motion Two Marks Questions and Answers\n\nQuestion 1.\nState the 4 basic forces of nature.\n\n1. Gravitational force\n2. Electromagnetic force\n3. Strong nuclear force\n4. Weak nuclear force.\n\nQuestion 2.\nWhich are the strongest & weakest forces in nature?\nStrongest – strong nuclear force. Weakest – gravitational force.\n\nQuestion 3.\nExplain why a cyclist bends while riding along a curved road.",
null,
"On bending, a component of the normal force on the cyclist acts as the centripetal force, which keeps him on the circular path.\nFc = N sin θ = $$\frac{m v^{2}}{r}$$\n\nQuestion 4.\nShow that the impulse of a force is equal to the change in momentum of a body.\nLet a force F act on a body of mass ‘m’ for a short interval of time ‘t’.\nThen impulse of the force = F t\n= mat = $$m\left(\frac{v-u}{t}\right) t$$ = m (v – u). Therefore the impulse of the force is equal to the change in momentum.\n\nQuestion 5.\nDefine impulse of a force and impulsive force.\nThe product of the force and the time for which it acts on a body is called the impulse of the force. A force acting on a body for a short interval of time is called an impulsive force.\n\nQuestion 6.\nA ball hits the ground with a momentum $$\overrightarrow{\mathbf{p}}$$ and bounces back with the same magnitude of momentum. Find the change in momentum.\nInitial momentum = $$\overrightarrow{\mathbf{p}}$$\nFinal momentum = – $$\overrightarrow{\mathbf{p}}$$\nChange in momentum\nΔp = – $$\overrightarrow{\mathbf{p}}$$ – $$\overrightarrow{\mathbf{p}}$$ = – 2$$\overrightarrow{\mathbf{p}}$$",
null,
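The sign reversal in Question 6 and the impulse definition of Question 5 can be checked with a short Python sketch (names illustrative; momenta treated as one-dimensional signed numbers):

```python
def momentum_change_on_rebound(p):
    """Ball reverses a momentum of magnitude p: final - initial = -p - p = -2p."""
    return -p - p

def impulse(F, t):
    """Impulse of a constant force F acting for time t."""
    return F * t

print(momentum_change_on_rebound(0.3))  # -0.6: twice the magnitude, reversed
print(impulse(50, 0.1))                 # 5.0 N s for a 50 N force acting 0.1 s
```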
"Question 7.\nWhile jumping off a cement floor, we weigh less than when standing on it. Why?\nWhen we stand on the floor, it exerts a reaction on us equal to our weight. While we are in the air after jumping, the reaction on us is zero. Therefore, while jumping off a cement floor we weigh less than when standing on it.\n\nQuestion 8.\nDistinguish between conservative and non-conservative forces.\nConservative forces are those for which the work done does not depend on the path followed but only on the initial and final positions. Non-conservative forces are those for which the work done depends on the path taken.\n\nQuestion 9.\nCalculate the impulse of a force of 50 N acting for 0.1 s.\nF = 50 N, t = 0.1 s\nImpulse = 50 N × 0.1 s = 5 N s\n\nQuestion 10.\nWhat are the methods of reducing friction?\n\n1. Friction between two surfaces can be reduced by polishing them.\n2. Jets, aeroplanes and cars are given a streamlined shape to reduce friction due to air resistance.\n3. The use of lubricants like oil, grease, etc. reduces the friction in machines.\n4. By using ball bearings, friction in the wheels of a car or cycle can be minimised.\n\nQuestion 11.\nStatic friction is a self-adjusting force. Comment.\nThe magnitude of static friction depends on the magnitude of the applied force. As the applied force increases, the magnitude of the static friction also increases. Thus static frictional force is a self-adjusting force.\n\nQuestion 12.\nMention some uses of friction.\n\n1. Brakes of vehicles work due to friction.\n2. Friction helps in driving vehicles.\n3. A match stick is lighted because of friction.\n\nQuestion 13.\nIs the earth an inertial frame of reference?\nNo. 
The earth cannot be considered an inertial frame of reference, because the earth is rotating and revolving, which means it is accelerating.\n\nQuestion 14.\nDerive an expression for the recoil velocity of a gun.\nLet mg be the mass of the gun and $$\vec{v}_{g}$$ its recoil velocity; let mb be the mass of the bullet and $$\vec{v}_{b}$$ its velocity.\nInitial momentum = 0,\nFinal momentum = mg $$\vec{v}_{g}$$ + mb $$\vec{v}_{b}$$\nBy the law of conservation of momentum\n0 = mg $$\vec{v}_{g}$$ + mb $$\vec{v}_{b}$$\n⇒ $$\vec{v}_{g}=-\frac{m_{b} \vec{v}_{b}}{m_{g}}$$\n\nQuestion 15.\nHow do lubricants help in reducing friction?\nThe lubricant spreads over the irregularities on the surfaces that make contact. So the contact between the lubricant and the moving objects reduces the friction.\n\nQuestion 16.\nA bubble generator is kept at the bottom of an aquarium which is in free fall. Will the bubbles generated rise to the top?\nNo, the bubbles generated at the bottom will not rise to the surface. This is because the water in the aquarium is in a state of weightlessness and exerts no buoyant (upward) force on the bubbles.",
null,
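The recoil-velocity expression derived in Question 14 can be sketched numerically; with the shell-and-gun numbers used earlier in this chapter (0.02 kg shell at 80 m/s, 100 kg gun) it reproduces the recoil speed of about 0.016 m/s (function name illustrative):

```python
def recoil_velocity(m_bullet, v_bullet, m_gun):
    """Conservation of momentum: 0 = m_g v_g + m_b v_b, solved for v_g."""
    return -(m_bullet * v_bullet) / m_gun

print(recoil_velocity(0.02, 80, 100))  # about -0.016 m/s (recoil opposes the shell)
```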
"Question 17.\nWhy is it easier to pull a roller than push it?",
null,
"When we pull with a force ‘F’ at an angle, the normal force is reduced by F sin θ. So the friction experienced is less, which makes it easier to pull.\n\nQuestion 18.\nAn object of mass m collides with another object of mass 2m. If the initial velocity of the object of mass m is v1 and that of mass 2m is 0, find the final velocity, assuming they stick to each other.\nInitial momentum\n= (m) (v1) + (2m) × (0)\n= mv1\nFinal momentum = m(vx) + 2m (vx)\n= 3m vx,\nwhere vx is the velocity of the combined mass.\nBy the law of conservation of momentum\nmv1 = 3mvx\nvx = $$\frac{m v_{1}}{3 m}$$\nvx = $$\frac{v_{1}}{3}$$\n\nQuestion 19.\nIf a boat’s sail is blown by air produced by a fan on the boat, can the boat move forward?\nNo, the boat cannot be moved by a fan on the boat. This is because when the fan pushes the sail by blowing air, the air pushes the fan backwards with the same force. Since there is no external force on the system, the net change in momentum is zero.\n\nQuestion 20.\nA retarding force is applied to a motor car. If the speed is doubled, how much more distance will it travel before stopping?\nLet the force be F, the mass m, the velocity v and the stopping distance S. Then,\nF = ma, & v² = 2aS\n⇒ F = $$\frac{m v^{2}}{2 S}$$ ⇒ S = $$\frac{m v^{2}}{2 F}$$\nIf the velocity is doubled, the distance travelled before coming to a halt increases 4 times (S ∝ v²).\n\n### 1st PUC Physics Laws of Motion Three Marks Questions and Answers\n\nQuestion 1.\nDistinguish between mass and weight.\n\n1. Mass is the amount of matter contained in a body while weight is the gravitational force acting on a body.\n2. The mass of a body remains the same while the weight of a body varies from place to place.\n3. Mass is a scalar but weight is a vector.\n4. The unit of mass is the kilogram and that of weight is the newton.\n5. Mass is measured using a physical balance and weight is measured using a spring balance.\n\nQuestion 2.\nDerive the equation F = ma.\nConsider a body of mass ‘m’ moving with a velocity ‘u’. 
Let a constant force ‘F’ applied on the body change its velocity to ‘v’ in ‘t’ seconds.\nInitial momentum of the body = mass × initial velocity = m u\nFinal momentum = mass × final velocity = m v\nChange of momentum in ‘t’ seconds = mv – mu.\nRate of change of momentum\n= $$\frac{m v-m u}{t}$$ = m $$\left(\frac{v-u}{t}\right)$$ = ma\n∵ $$\frac{v-u}{t}$$ = a, acceleration\nAccording to Newton’s second law, the rate of change of momentum is directly proportional to the applied force.\ni. e. Force α rate of change of momentum\nF α ma\nF = kma\nwhere ‘k’ is a proportionality constant. In SI system k = 1.\n∴ F = ma\n\nQuestion 3.\nState and explain Newton’s third law of motion. Give illustrations for the same.\nNewton’s third law states that for every action, there is an equal and opposite reaction.\nLet F1 be the force exerted by body A on body B; F1 is called action. Then the force F2 exerted by B on A is called reaction. According to the third law F1 = – F2.\nIllustrations:\n\n1. When a book is placed on the table the weight of the book acts vertically downwards (action). The table exerts an equal and opposite force vertically upwards (reaction).\n2. A swimmer pushes the water in the backward direction with a certain force (action) and the water pushes him in the forward direction with an equal and opposite force (reaction).\n3. The sailing of a boat is due to the action of the boat on water and reaction from water on the boat.\n4. When an object is suspended from a string, the weight of the object acts vertically downwards. The reaction in the string, called the tension, acts vertically upwards.\n5. The earth attracts the moon with a force that constitutes action. 
In turn the moon attracts the earth with an equal and opposite force (reaction).\n\nQuestion 4.\nDerive a relation for the safe velocity of a body negotiating a banked curve with frictional coefficient ‘µ’.\nThe net force along the x-direction inwards should provide the centripetal force\n∴ Ffriction cos θ + N sin θ = $$\frac{m v^{2}}{r}$$ → (1)\n∵ there is no motion in the y-direction\nN cos θ – Ffriction sin θ = mg → (2)\nwe know that\nFfriction = µ N.\nDividing (1) by (2)",
null,
"$$\\frac{\\mu \\mathrm{N} \\cos \\theta+\\mathrm{N} \\sin \\theta}{\\mathrm{N} \\cos \\theta-\\mu \\mathrm{N} \\sin \\theta}$$ = $$\\frac{m v^{2}}{r(m g)}$$\n⇒ $$\\frac{N(\\mu+\\tan \\theta)}{N(1-\\mu \\tan \\theta)}$$ = $$\\frac{\\mathrm{v}^{2}}{\\mathrm{rg}}$$\n⇒ v² = rg$$\\left(\\frac{\\mu+\\tan \\theta}{1-\\mu \\tan \\theta}\\right)$$\nThe max velocity to safely negotiate the turn is $$\\sqrt{r g\\left(\\frac{\\mu+\\tan \\theta}{1-\\mu \\tan \\theta}\\right)}$$",
null,
"Question 5.\nWrite the equation corresponding to the ones given, for rotational motion about a fixed axis.\n(i) x (t) = x (0) + v (0) t + 1a/2 t²\nθ (t) = θ (0) + ω(0) + $$\\frac{1}{2}$$ α t²\n\n(ii) v² (t) = v² (0) + 2a [x t) – x (0)]\nω² (t) = ω² (0) + 2 a α [θ (t) – θ (0)]\n\n(iii) $$\\overline{\\mathbf{v}}$$ = $$\\frac{v(t)-v(0)}{2}$$\n$$\\bar{\\omega}$$ = $$\\frac{\\omega(t)-\\omega(0)}{2}$$\n\n(iv) v(t) = v(0) + at\nω (t) = ω (0) + α t\n\nQuestion 6.\nTwo masses m1 and m2 are connected to ends of string passing over a pulley. Find tension and acceleration associated.",
null,
"Assuming mass mt moves down with an\nacceleration ‘a’\nm1g -T = m1a1 ………….. (1)\nT – m2 g = m2 a1 ………. (2)\n(1) + (2)\nm1g – m2g = (m1 + m2) a1\n⇒ a1 = $$\\left(\\frac{m_{1}-m_{2}}{m_{1}+m_{2}}\\right) g$$\n& T = m1g – m1 $$\\left(\\left(\\frac{m_{1}-m_{2}}{m_{1}+m_{2}}\\right) g\\right)$$\n⇒ T = m1g $$\\left[1-\\frac{m_{1}-m_{2}}{m_{1}+m_{2}}\\right]$$\nT = $$\\frac{2 m_{1} m_{2} g}{m_{1}+m_{2}}$$\n\nQuestion 7.\nName a mass varying system. Derive an expression for the velocity of the rocket at any instant of time\nA rocket-propelled into space is a mass varying system as it losses the weight of the fuel burnt.\nLet the velocity of gas used for propelling be ‘vg’ & let the rate of decrease in mass of the body be $$\\frac{\\mathrm{d} m}{\\mathrm{dt}}$$\nThen, by law of conservation of momentum since initial momentum is zero, dp = 0\n⇒ d (mv) = 0\n⇒ (dm) v + m d v = 0\n⇒ vdm = – mdv ⇒ dm = – $$\\frac{\\mathrm{d} \\mathrm{v}}{\\mathrm{v}}$$\nIntegrating on both sides\nv = vg (Inm) + c\n⇒ v = – vg logc m + c\nwhere c is a constant.\n\nQuestion 8.\nIndicate the force acting on a block of mass ‘m’ at rest on an Inclined plane of angle θ.",
null,
"Ffriction = μ N, N = mg cos θ & mg sin θ = Ffriction\n\nQuestion 9.\nDistinguish between static friction, limiting friction & kinetic friction. How do they vary with applied force? Explain.\nThe static friction is a friction that acts on a body at rest.\nLimiting friction is the maximum value of static friction. It is the force that is required for the body to just start moving.\nKinetic friction is the frictional force that action a body which is in motion.\nOn increasing the applied force, static friction increase, until it reaches limiting friction which is fixed, and kinetic friction also remains constant.",
null,
"Question 10.\nDefine Impulse. What graphical methods can be used to calculate impulse in the following cases\n\n1. constant force\n2. variable force acting on a body\n\nImpulse is a force that acts a body for a very short duration of time. It is defined as the product of force and the time for which it acts.\n1.",
null,
"In case of a constant force, say F1, the impulse is simply product of force and duration\nImpulse = F1 × t1\n2.",
null,
"In case of a variable force, the impulse will be the integral of F2 one of the interval [0,t2]\nImpulse = $$\\int_{0}^{t_{2}} F_{2}(t) d t$$\n\nQuestion 11.\nAn object of mass ‘m’ is on a inclined plane (θ). Find\n\n1. The effective resistant force on the body if it’s moving downwards.\n2. The minimum force if it is being pushed upwards.\n\nAssume a friction with a coefficient of μ",
null,
"1. The net force along the slope is\n⇒ Feq = F1 – Mg sin θ\n= μ N – Mg sin θ\nBut, N = mg cos θ\n⇒ Feq = mg (μ cos θ – sin θ)",
null,
"2. Additional force required be Fa\nFa = mg sin θ + μ N\n= mg sin θ + μ mg cos θ\n⇒ Fa = mg (sin θ + μ cos θ)\n\n### 1st PUC Physics Laws of Motion Five Marks Questions and Answers\n\nQuestion 1.\nState and prove the law of conservation of momentum.\nIn a closed system, the total linear momentum of the system remains constant or conserved.\nProof:\nConsider two bodies A and B of masses m1 and m2 moving in the same direction with uniform velocities u1 and u2 respectively. After the collision let their uniform velocities be v1 and v2. Let‘t’ be the time of impact.\nChange in momentum of A\n= m1v1 – m1u1\nRate of change of momentum of A\n= $$\\frac{m_{1} v_{1}-m_{1} \\mu_{1}}{t}$$\nchange in momentum of B\n= m2v2 – m2u2\nRate of change of momentum of B\n= $$\\frac{m_{2} v_{2}-m_{2} u_{2}}{t}$$\nIf F1 is the force exerted by A on B then according to second law,\nF1 = $$\\frac{m_{2} v_{2}-m_{2} u_{2}}{t}$$ (action)\nIf F2 is the force exerted by B on A then\nF2 = $$\\frac{m_{1} v_{1}-m_{1} \\mu_{1}}{t}$$ (reaction)\nAccording to Newton’s third law, action and reaction are equal and opposite i.e.\nF1 = – F2\n$$\\left[\\frac{m_{2} v_{2}-m_{2} u_{2}}{t}\\right]$$ = – $$\\left[\\frac{m_{1} v_{1}-m_{1} \\mu_{1}}{t}\\right]$$\nm2v2 – m2u2 = – m1v1 + m1u1\nOR\nm1u1 + m2u2 = m1v1 + m2v2\ni.e., Total momentum before collision = Total momentum after collision. Hence the momentum is conserved.",
null,
"Question 2.\nA stone weighing 5kg. falls from the top of a tower 100m high and buries itself 1 m deep in the sand. What is the average resistance offered by sand?",
null,
"Mass of the stone, m = 5kg\nHeight of the tower h = s = 100m\nInitial velocity u =0\nFinal velocity v = ?\nFrom the relation, v² = u² + 2gs\nv² = 0 + 2 × 9.8 × 100\nv² = 1960\nV = $$\\sqrt{1960}$$\nv = 44.27 ms-1\nThen the stone penetrates through the sand with a initial velocity, u = 44.27 ms-1\nDistance travelled, S = 1 m\nFinal velocity, v = 0\nacceleration, a =?\nFrom the equation, V² = u²+2as\n0² = (44.27)² + 2 × a × 1\n– 1960 = 2a\na = -980ms-2\n∴ The average resistance offered by the sand is F = ma\n= 5 × 980\nF = 4900 N\n\nQuestion 3.\nState the Newton’s laws of motion. Write any two illustrations.\nNewton’s first law of motion states that, everybody continues to be in its state of rest or uniform motion along a straight line unless compelled to change its state by an external force. Newton’s second law of motion states that the rate of change of momentum of a body is directly proportional to the applied force and takes place in the direction of the force.\nNewton’s third law of motion states that for every action there is. an equal and opposite reaction.\nIllustrations for Newton’s third law of motion are\n\n• when a book is placed on the table the book exerts force on the table (action), in turn the table exerts an equal and opposite force (reaction) on the book in the upward direction.\n• The sailing of a boat is due to the action of the boat on water and the reaction from water on boat.\n\nQuestion 4.\nConsider a body of mass ‘m’ attached to a string of length ‘L’. If the ring is forming a vertical circle, derive an expression for velocity and tension at any point.\nAlso, find the velocity that is required for mass to just reach the peak point of the circle.",
null,
"Let ‘θ’ be the angle the string makes with the vertical at any instant of time. Let vx be the velocity of body at the lowermost point x. The distance traveled by the body from the given point ‘p’ to ‘X’ in ‘y’ direction is given by,\nXY = L – Lcos θ = L(1 – cos θ)",
null,
"T – mg cos θ = $$\\frac{m v^{2}}{L}$$\nUsing the equation v² = u² + 2as, we can write\nVx² = v² + 2g (XY)\ni.e. V² = Vx² – 2g L (1 – cos θ) ……….. (1)\nTension, T = mg cos θ + $$\\frac{m v^{2}}{L}$$ using (1)\nT = mg cos θ + $$\\frac{m}{L}$$ (Vx² – 2gL (1 – cos θ)\n= mg cos θ + $$\\frac{m v_{x}^{2}}{L}$$ – 2 mg (1 – cos θ)\nT = $$\\frac{m v_{x}^{2}}{L}$$ + mg (3 cos θ – 2)\nFor the vertical circle to be reached v = 0 & vx = ? & θ = 180°\n⇒ vx² = v² + 2g L (1 – cos θ)\nvx² = 0 + 2g L (1 – (- 1))\nvx = $$\\sqrt{4g L}$$\nvx = $$2 \\sqrt{g L}$$\n\nQuestion 5.\nIf the system Is on a frictionless surface. Find the ratio of tensions in the string.",
null,
"The acceleration of the system is given\nby a = $$\\frac{\\text { Force applied }}{\\text { Total mass }}$$\n= $$\\frac{120 \\mathrm{N}}{(10+20+30) \\mathrm{kg}}$$ = 2 ms-2\nFor the last block\n120 – T2 = ma\n⇒ T2 = 120- (30) × (2)\n= 60 N\nFor the middle block\nT2 – T1 = ma\n60 – T1 = (20) × (2)\nT1 = 60 – 40 = 20 N\nThe ratio of tensions is,\nT1 : T2 = 20 : 60 = 1 : 3\n\nQuestion 6.\nFor the figure shown, find acceleration produced and the force of contact between the blocks. What is the effect of this force if it is applied to other blocks.",
null,
"The acceleration of system is\na = $$\\frac{F}{\\left(m_{1}+m_{2}\\right)}$$\nWhen the force is applied on block m, we have,\nF – Fc = m1 a\nwhere Fc is force of contact\n⇒ Fc = F – m1 $$\\left(\\frac{F}{m_{1}+m_{2}}\\right)$$ = $$\\left(\\frac{F m_{2}}{m_{1}+m_{2}}\\right)$$\nIf F is applied to other block ,",
null,
"Fc = m1 a = $$\\left(\\frac{\\mathrm{m}_{1}}{\\mathrm{m}_{1}+\\mathrm{m}_{2}}\\right)$$ F\n\nQuestion 7.\nIn the system shown if μk (kinetic friction coefficient) is 0.04. Find acceleration of the trolley,\n[g = 10ms-2]",
null,
"Free body diagram of trolley",
null,
"We know that Ff = μ N\n= μk mg\n= 0.04 × 15 × 10\n= 6N\n⇒ T – 6N = ma\n= 15 × a\n⇒ T – 5 a = 6 …………. (1)\nFree body diagram of mass\n⇒ 2g – T = ma\n⇒ 20 – T = 20\n⇒ 2a + T = 20 ……….. (2)\n0n,(1) – (2)\n– 17 a = – 14\na = $$\\frac{14}{17}$$ = 0.82ms-2",
null,
"Question 8.\nWeights of 250 g & 200 g are connected by a string over a smooth pulley. I system is traveling 4.95 m in the first 3 second. Find the value of g.",
null,
"For the pulley system,\nT – (200 g × 10-3) g = (200 × 10-3kg) a\nT – (0.29) = 0.2a …………….(1)\nan (0.25g) – T = (0.25a) …………(2)\n⇒ From (1) & (2)\n0.45 a = (0.25 – 0.2) g\na = $$\\frac{0.05}{0.45}$$g = $$\\frac{g}{9}$$ ms-2\nNow, S = 4.95 m, t = 3s u = 0\nFrom S = ut + $$\\frac{1}{2}$$ at²\n4.95 = 0 + $$\\frac{1}{2}$$ × a × (3)²\n4.95 = $$\\frac{1}{2}$$ × $$\\frac{10}{9}$$ × g × 9²\ng = $$\\frac{4.95}{5}$$ = 9.9 ms-2\n\nQuestion 9.\nA force of 80N acting on a body at rest for 2 sec imparts it a velocity of 20ms-1 what is the mass of the body calculate the distance traveled by the body in 2 seconds?\nForce, F = 80N\nInitial velocity, u =0\nTime for which force acts on the body, t = 25s,\nFinal velocity, v = 20ms-1\nFrom the equation v = u + at\n20 = 0 + a × 2\na = 10ms-2\n∴ The mass of the body, from the equation F = ma\nm = $$\\frac{F}{a}$$ = $$\\frac{80}{10}$$ = 8kg\n∴ The distance travelled by the body in a time, t = 2s. From the equation\ns = ut + $$\\frac{1}{2}$$ at²\n= 0 × 2 + $$\\frac{1}{2}$$ × 10 × 2²\n= 0 + 20 = 20m.\n\nQuestion 10.\nA man of 60 kg is standing on a weighing machine placed on the floor of a lift which reads the force in newtons. Find the reading of the weighing machine when the lift is\n\n1. stationary\n2. moving upwards with uniform speed of 10 ms-1\n3. moving downwards with uniform acceleration of 5 ms-2\n4. moving upwards with uniform acceleration of 5 ms-2.\n5. What would be the reading of the weighing machine if the connecting rope of the lift suddenly breaks and lift begins to fall freely under gravity, g = 10 ms-2.\n\nWeight of the man is due to the reaction from the floor of the lift. It is given by,\nR = mg + ma*\nwhere a* is the acceleration of the lift.\n1. when the lift is at rest a* = 0\n∴ Reading of the weighing machine\nR = mg = 60 × 10 = 600 N.\n\n2. when the lift is moving up or down with uniform velocity, a* = 0\n∴ Reading of the weighing machine,\nR = 600 N.\n\n3. 
When the lift is moving downwards with an acceleration a*.\nR = mg – ma* = m(g – a*)\n= 60 × (10 – 5) = 300 N.\n\n4. When the lift is moving upwards with acceleration a*,\nR = mg + ma* = m(g + a*)\n= 60 × (10 + 5)\n= 900 N.\n\n5. If the lift falls freely under gravity, a* = g\n∴ R = m(g – a*) = 0.\n\nQuestion 11.\nA rubber ball of mass 0.1 Kg is dropped on the ground from a height of 2.5 m and it rises to a height of 0.4m. Assuming the time of contact with the ground to be 0.01 s, calculate the force exerted by the ground on the ball, g = 10ms-2\nMass of the ball m = 0.1 kg\nTime in contact with the ground, t = 0.01s\n1. When the ball is dropped on the ground,\nu = 0, s = 2.5m, g = 10ms-2\nFrom the equation, v² = u² + 2gs\nv² = 0 + 2 × 10 × 2.5\nv = 7.07ms-1, downwards.\n\n2. When the ball rises up from the ground,\nv = 0, s = 0.4m, g = 10ms-2\nFrom the equation v² = u² + 2gs\n0 = u² + 2(– 10)0.4\nu² = 8\nu = 2.83ms-1, upwards.\nAssuming the upward velocity +ve & downward velocity –ve,\nchange of velocity = 2.83 – (– 7.07)\ni.e v – u = 9.9ms-1\n∴ Force exerted by the ground on the ball is,\nF = m $$\left(\frac{v-u}{t}\right)$$ = 0.1 $$\left(\frac{9.9}{0.01}\right)$$ = 99 N.\n\nQuestion 12.\nA bullet flying with a velocity of 50ms-1 hits a block of wood and penetrates through a distance of 0.2 m before coming to rest. The mass of the bullet is 0.03kg. Calculate the resistance offered by the block of wood.\nInitial velocity, u = 50ms-1\nDistance travelled, s = 0.2 m\nFinal velocity v = 0\nMass of the bullet, m = 0.03 kg\nFrom the equation, v² = u² + 2as\n0 = 50² + 2 × a × 0.2\na = – $$\frac{2500}{0.4}$$ = – 6250ms-2\n∴ The resistance offered by the wood is\nF = ma = 0.03 × 6250 = 187.5 N.\n\nQuestion 13.\nA machine gun fires 200 bullets per minute with a velocity of 60ms-1. 
If the mass of each bullet is 0.02kg, calculate the power of the gun.\nNumber of bullets fired in one minute = 200\nWork done by the gun in one minute is,\nW = kinetic energy of 200 bullets\nW = 200 ($$\frac{1}{2}$$mv²)\n= 200 × $$\frac{1}{2}$$ × 0.02 × (60)²\n= 7200J\n∴ Power, P = $$\frac{W}{t}$$, t = 60 seconds\nP = $$\frac{7200}{60}$$ = 120 watt.\n∴ Power of the gun = 120 watts.\n\nQuestion 14.\nTwo metal balls of masses 10kg and 8kg are moving in the same direction with velocities 10m/s and 4m/s respectively. They stick together after collision. Find their common velocity after collision if they are moving\n\n1. in the same direction,\n2. in opposite directions before collision.\n\nm1 = 10kg, m2 = 8kg\nu1 = 10m/s and u2 = 4m/s.\n1. v1 = v2 = v, common velocity.\nm1u1 + m2u2 = m1v1 + m2v2\n10 × 10 + 8 × 4 = (10 + 8)v\nor v = 7.33m/s\n\n2. m1u1 – m2u2 = (m1 + m2) v\n10 × 10 – 8 × 4 = (10 + 8)v\nv = 3.778m/s.\n\nQuestion 15.\nDerive the equation F = ma.\nConsider a body of mass ‘m’ moving with a velocity ‘u’. Let a constant force ‘F’ applied on the body change its velocity to ‘v’ in ‘t’ seconds.\nInitial momentum of the body = mass × initial velocity = m u\nFinal momentum = mass × final velocity = m v\nChange of momentum in ‘t’ seconds = mv – mu.\nRate of change of momentum\n= $$\frac{m v-m u}{t}$$ = m$$\left(\frac{v-u}{t}\right)$$ = ma\n∵ $$\frac{v-u}{t}$$ = a, acceleration\nAccording to Newton’s second law, the rate of change of momentum is directly proportional to the applied force.\ni.e. Force α rate of change of momentum\nF α ma\nF = kma\nwhere ‘k’ is a proportionality constant. In SI system k = 1.\n∴ F = ma.",
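The derivation just given is easy to exercise numerically, since it reduces to F = m(v – u)/t. A sketch (the function name and the cross-check values are my own, drawn from the worked problems above):

```python
def force_from_momentum_change(m, u, v, t):
    # Newton's second law in the form derived above:
    # F = (m*v - m*u)/t = m*(v - u)/t
    return m * (v - u) / t

# Cross-check with Question 9 of the five-mark section: an 80 N force acting
# for 2 s takes an 8 kg body from rest to 20 m/s.
F = force_from_momentum_change(8.0, 0.0, 20.0, 2.0)
```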
null,
"Question 16.\nName the basic forces in nature.\nBasic forces in nature are,\n\n1. Gravitational force\n2. Electromagnetic force\n3. Nuclear force and\n4. Weak force\n\nQuestion 17.\nA body is moving on a frictionless curved path of radius of 1.8 km with a speed of 30 ms-1. Find the banking angle required.\nThe centripetal force required to keep the body in circular motion is $$\\frac{m v^{2}}{r}$$\nHere, v = 30 ms-1\nr = $$\\frac{1.8 \\times 10^{3} m}{2}$$ = 0.9 103 = 900m\nN cos θ = mg\nand $$\\frac{m v^{2}}{r}$$ = N sin θ\n⇒ $$\\frac{m v^{2}}{r}$$ = $$\\frac{m g}{\\cos \\theta}$$ sin θ\n⇒ tan θ = $$\\frac{v^{2}}{r g}$$\n⇒ θ = tan-1 $$\\left(\\frac{30^{2}}{900 \\times 10}\\right)$$\n⇒ θ = tan-1 (0.1) = 5.71°.\n\nQuestion 18.\nWhat is the acceleration of a body moving on a circular path of radius 400 m. If it has\n\n1. constant speed of 40 ms-1\n2. speed increases at 3 ms-2\n\nA body on a circular path has two kinds of accelerations: radial & linear\n1. If speed is constant linear acceleration is zero & radical acceleration is ar = $$\\frac{v^{2}}{r}$$\nv = 40ms-2\nr = 400 m ⇒ ar = $$\\frac{40 \\times 40}{400}$$ = 4ms-2\n\n2. If the speed increases at 3 m/s², it has a linear acceleration of 3ms-2\na = $$\\sqrt{a_{r}^{2}+a_{1}^{2}}$$\n= $$\\sqrt{3^{2}+4^{2}}$$\n= 5 ms-2.\n\nQuestion 19.\nAn aeroplane at 360 km hr-1 has its wing banked at an angle 20°. Find the radius of the circle traversed by the plane, [g = 10ms-2]\nSpeed of the plane = 360 km/hr\n= $$\\frac{360 \\times 10^{3}}{3600}$$ = 100 ms -1\nwe know that tan θ = $$\\frac{v^{2}}{r g}$$\n⇒ r = $$\\frac{v^{2}}{\\tan \\theta \\times g}$$ = $$\\frac{100 \\times 100}{\\tan \\left(20^{\\circ}\\right) \\times 10}$$\n= 2.747 km.\n\nQuestion 20.\nA uniform chain of length L is kept on a table of coefficient of static friction μ(limiting value). Find the maximum length of chain that can be outside the table, without it sliding away.",
null,
"Let x be the length of the chain that can be outside the table.\nLet ‘M’ be the total mass of the chain.\nMass on the table is $$\\frac{M}{L}$$(L – x)\nMass of the chain outside = $$\\frac{M}{L}$$ x\nFor, equilibrium,\nForce of friction = weight of the hanging part.\ni.e., μN = $$\\left(\\frac{M}{L} x\\right)$$g × α\ni.e., μ$$\\left(\\frac{M}{L}(L-x) g\\right)$$ = $$\\left(\\frac{M}{L} x\\right)$$g\nμ(L – x) = x or x = $$\\frac{\\mu L}{1+\\mu}$$\n\nQuestion 21.\nFor the system shown in the figure, the coefficient of Kinetic friction between the mass and plane is 0.25.\nGiven that M2 = 5kg & M3 = 7kg. Find M1, such that the body M1, is moving with uniform velocity. Sin37° = $$\\frac{3}{5}$$, cos 37° = $$\\frac{4}{5}$$",
null,
"For the mass m1 to have a uniform velocity the system should be in equilibrium.\n⇒ T1 = m1 g\nT1 = 10m1 …………… (1) (g =10 m/s²)\nT1 =T2 + F11 + m2 g sin θ\n= T2 + μ N + m2 g sin 37°\n= T2 + μm2g cos 37° + m2g sin 37°\n= T2 + m2g $$\\left(\\mu \\frac{4}{5}+\\frac{3}{5}\\right)$$ (g =10 m/s²)\nT1 = T2 + m2[8μ + 6] ………….. (2)\nT2 = F12\nT2 = μ N\n= μ m3 g\nT2 = (0.25) (7) (10)\nT2 = 17.5 N ………….. (3)\nSubstituting (1) & (3) in (2)\n10m1 = 17.5 + 5 [8(0.25) + 6]\n10m1 = 57.5 N\nm1 = 5.75 kg.\n\n1st PUC Physics Laws of Motion Numerical Problems Questions and Answers\n\nQuestion 1.\nTwo masses 4 kg & 2 kg are connected by a massless string and they are placed on a smooth surface. The 2 kg mass is pulled by a force of 12 N as shown\n\n1. Find the acceleration of the system.\n2. If the string is replaced by a spring then what change do you notice in the acceleration\n3. In the string, system find the Tension",
null,
"1. We know that from Newtons second Law,\nF = ma\nF Force on the system\n⇒ a = $$\\frac{F}{m}$$ = $$\\frac{\\text { Force on the system }}{\\text { Total mass }}$$\n= $$\\frac{12 \\mathrm{N}}{(4+2) \\mathrm{kg}}$$\na = 2 ms-2\n\n2. If the string is replaced by spring, there is no change in mass of system. So there is no change in the acceleration a = 2m s-2\nc) We know that, F = m a\n(12 – T) = (m a)\nT = 12 – (2 × 2)\nT = 8 N",
null,
"a = 2 ms-2\n\nQuestion 2.\nA force of 98 N acts on a body of mass 10 kg which is at rest. Calculate\n\n1. Velocity at the end of 5 seconds.\n2. Distance traveled by the body in 5 seconds.\n\nSolution:\n1. To find the velocity at the end of 5 seconds.\nWe have, F = ma\nGiven, F = 98 N and\nm = 10 kg\n∴acceleration a = $$\\frac{F}{m}$$ = $$\\frac{98}{10}$$\n= 9.8 ms-2\nvelocity v = u + at\nHere a = 0,\na = 9.8 ms-2 and\nt = 5 s\n∴ v = 0 + 9.8 × 5\n= 49.0 ms-1\n\n2. To find the distance travelled\nwe have, s = ut + $$\\frac{1}{2}$$ at²\nHere, u = 0,\na = 9.8 ms-2\nt = 5 seconds\n∴ s = 0 × 5 + $$\\frac{1}{2}$$ × 9.8 × (5)²\n= 122.5 m.\n\nQuestion 3.\nA truck of mass 3000 kg is moving with a velocity of 10 m/s is accelerated by a force of 600N.\n\n1. What is the rate at which its velocity increases?\n2. How far will it travel In 10s?\n\nSolution:\n1. To find the rate at which velocity is increasing\nForce F = 600 N\nmass m = 3000 kg\nRate of increase in speed,\na = $$\\frac{F}{m}$$\n= $$\\frac{600}{3000}$$\n= 0.2 ms-2\n\n2. To find the distance travelled in 10 s\nWe have, s = ut + $$\\frac{1}{2}$$ at²\nHere, u =10 ms-1\nt = 10 s\na = 0.2 ms-2\n∴ s = 10 × 10 + $$\\frac{1}{2}$$ × 0.2 × (10)²\n= 100 + 10\n= 110 m.\n\nQuestion 4.\nA certain force acting on a body of mass 10 kg at rest moves it through 125 m in 5 seconds. If the same force acts on a body of mass 15 kg, what is the acceleration produced?\nSolution:\nin the case of the first body,\nu = 0;\nt = 5 s;\ns = 125 m\nSubstituting these values in the equation,\ns = ut + $$\\frac{1}{2}$$ at\n125 = 0 × 5 + $$\\frac{1}{2}$$ × a × (5)²\n$$\\frac{1}{2}$$ × a × 25\n∴ a = $$\\frac{2 \\times 125}{25}$$ = 10 ms-2\nF = m × a = 10 × 10 = 100 N\nIf the same force acts on another body of mass 15 kg, the amount of acceleration produced is,\na = $$\\frac{F}{m}$$ = $$\\frac{100}{15}$$\n= 6.67 ms-2",
null,
"Question 5.\nA cricket ball of mass 0.15 kg is moving with a velocity of 12 ms-1 and is hit by a bat so that the ball is turned back with a velocity of 20 ms-1. If the force of blow acts for 0.01 s, find the average force exerted on the ball by the bat.\nSolution:\nInitial velocity u = 12 ms-1\nFinal velocity v = 20 ms-1\nChange in velocity =20 – (- 12)\n= 20 + 12\n= 32 ms-1\n(-ve sign is taken because initial and final velocities are in opposite direction)\nTime for which force is acting, t = 0.01 s\n∴ acceleration a = $$\\frac{\\text { change in velocity }}{\\text { time }}$$\n= $$\\frac{32}{0.01}$$\n= 3200 ms-2\nForce F = ma\n= 0.15 × 3200\n= 480 N.\n\nQuestion 6.\nA hammer of mass 1 kg moving with a speed of 6 ms-1 strikes a wall and comes to rest in 0.1 s. Find the\n\n1. Impulse\n2. Retarding force on the hammer\n3. Retardation\n\nSolution:\n1. The initial momantum of the hammer is,\nm × v = 1 kg × 6 m s-1\n= 6 kg m s-1\n= 6 Ns\nImpulse = F . t = Δ P\n= 0 – mv\n= – 6 Ns.\n\n2. The force on the hammer\nF = $$\\frac{\\text { Impulse }}{\\text { time }}$$ =$$\\frac{6 \\mathrm{Ns}}{0.1 \\mathrm{s}}$$\n60 N.\n\n3. Retardation = a =$$\\frac{F}{m}$$ = $$\\frac{60 \\mathrm{N}}{1 \\mathrm{kg}}$$\n= 60 m s-2\n\nQuestion 7.\nA disc of mass 200 g is kept floating horizontally by throwing 40 pebbles per second against it from below. If the mass of each pebble is 2g, calculate the velocity with which the pebbles are striking the disc. Assume the pebbles strike the disc normally and rebound with the same speed.\nSolution:\nMass of the disc M = 200 g\n= 0.2 kg\nTotal downward force\nF = Mg\n= 0.2 × 9.8 =1.96 N\nMass of one pebble m = 2 g = 2 × 10-3 kg Let v be the velocity with which the pebbles strike the disc. 
Momentum given by one pebble = mv. The pebbles strike from below and rebound downward.\n∴ net momentum given to the disc in the upward direction\n= change in velocity of the pebble × m = (2v) m\nTotal momentum given in one second\n= 40 × m × 2v\n= 80 mv\nThe disc remains horizontal if this is equal to the weight of the disc, Mg\n∴ 80 mv = Mg\n80 × 2 × 10-3 × v = 1.96\nv = 12.25 ms-1\n\nQuestion 8.\nWater ejects with a speed of 0.2 ms-1 through a pipe of area of cross-section 1 × 10-2 m². If the water strikes a wall normally, calculate the force on the wall in newtons, assuming the velocity of the water normal to the wall is zero after the collision.\nSolution:\nVolume of water striking the wall per second = 0.2 × 10-2 = 2 × 10-3 m3\nMass of the water striking the wall in one second = volume × density = 2 × 10-3 × 1000\n= 2 kg\nChange in velocity of water on striking the wall in one second = 0.2 – 0 = 0.2 ms-1\nForce acting on the wall\n= change in momentum per second\n= 2 × 0.2\n= 0.4N.\n\nQuestion 9.\nA gun of mass 5 kg fires a bullet of mass 20g with a velocity of 110.2ms-1. Find the recoil velocity of the gun.\nSolution:\nInitially, both the gun and the bullet are at rest.\n∴ The total initial momentum of the system is zero.\nIf v1 and v2 are the final velocities of the gun and the bullet, the final momentum is given by,\npf = m1v1 + m2v2\nAccording to the law of conservation of momentum, pi = pf\nm1v1 + m2v2 = 0\ni.e., v1 = – $$\frac{m_{2} v_{2}}{m_{1}}$$\n= $$\frac{-20 \times 110.2}{5 \times 1000}$$ (both masses in grams)\n= – 0.44 ms-1\n∴ Recoil velocity of the gun is 0.44 ms-1.\n\nQuestion 10.\nA gun weighing 1000 kg recoils with a velocity of 3 × 10-2 m/s when a shell of mass 1 kg is shot from it. 
If the shell hits the target in 8 seconds, find the gun–target distance.\nSolution:\nInitial momentum of the gun & that of the shell is zero as they are at rest.\nRecoil velocity of the gun v1 = – 3 × 10-2ms-1\nMass of the gun m1 = 1000kg\nMass of the shell m2 = 1 kg\nvelocity of the shell v2 = ?\nFrom the equation\nm1 u1 + m2 u2 = m1v1 + m2 v2\n0 = 1000(– 3 × 10-2) + 1 . v2\n∴ v2 = 30ms-1\nThe gun–target distance,\ns = v2 × t\n= 30 × 8\n= 240m.\n\nQuestion 11.\nA machine gun has a mass of 20 kg, a firing rate of 500 bullets per second, and the mass of each bullet is 20 g. If the speed of the bullets is 500 m s-1, find the force required to keep the gun in its position.\nSolution:\nmgun = 20 kg, mb = 20 g\nvgun = ? vb = 500 ms-1\nFrom the law of conservation of momentum\nmgun vgun + mb vb = 0\n⇒ vgun = –$$\frac{20 \times 10^{-3} \times 500}{20}$$\n= – 0.5 ms-1\n∴ Force required to hold its position\nF = m$$\left(\frac{v-u}{t}\right)$$ = 20 × $$\frac{(0.5-0)}{\left(\frac{1}{500}\right) s}$$ = 5000 N\n\nQuestion 12.\nA body of mass 20 kg moving with a velocity of 10 ms-1 collides with another body of mass 40 kg moving in the same direction with a velocity of 5 ms-1. If both the bodies stick together after the collision, find the common velocity after collision.\nSolution:\nIf u1 and u2 are the initial velocities of the two bodies before the collision, the total momentum before the collision is\npi = m1u1 + m2u2\nLet v be the common velocity of the two bodies after collision. Then the final momentum after collision is,\npf = (m1 + m2)v\nFrom the law of conservation of momentum\npi = pf\nm1u1 + m2u2 = (m1 + m2) v",
null,
"= 6.67 ms-1.\n\nQuestion 13.\nA shell of mass 10 kg flying horizontally with a velocity of 36 kmph explodes in air into two fragments. The larger fragment has a velocity of 25ms-1 & is directed In the same direction as the initial velocity of the shell. The smaller fragment has a velocity of 12.5 ms-1 in the opposite direction. Find the masses of the fragments.\nSolution:\nLet the mass of larger fragment be m1 = x Then the mass of smaller fragment is m2 = 10 – x\nInitial velocity of larger fragment,\nu1 = 36kmph = 10 ms-1.\nFinal velocity of larger fragment,\nv1 = 25ms-1.\nInitial velocity of smaller fragment,\nu2 = 10ms-1.\nFinal velocity of smaller fragment,\nv2 = – 12.5ms-1.\nAccording to the law of conservation of momentum,\nmu = m1v1 + m2v2\n10 × 10 = x.25 + (10 – x) – 12.5\n100 = 25x – 125 + 12.5x\n225 = 37.5x\n∴ x = $$\\frac{225}{37.5}$$ = 6Kg\n∴ Mass of larger fragment, m1 =. 6kg\nMass of smaller fragment m2 = (10 – x) = 4kg.",
null,
"Question 14.\nA neutron (mass = 1.67 × 1o-27kg) at a speed of 108m s-1. Collides with detron and gets sticked to it Find the velocity of the composite particle.\nSolution:\nMass of neutron 1.67 × 10-27 kg\n= Mn,\nmass of detron = (mn) = 3.34 × 10-27 kg\n= md,\nvelocity of neutron = 108m s-1 = Vn\nvelocity of detron = 0 m s-1 = Vd\nOn collision,\nmass of composite particle = Mc = Mn + Md\n= (1.67 + 3.34) × 10-27 kg\n= 6.01 × 10-27 kg\nvelocity of composite particle = vc\nFrom Law of conservation of momentum\nMn + Vn + McVc = McVc\n1.67 × 10-27 × 10+8 + 0 = (5.01 × 10-27) Vc\n⇒ Vc = $$\\left(\\frac{1.67}{5.01}\\right)$$ × 10+8\nVc = 0.33 × 108m s-1\n\nQuestion 15.\nA projectile is fired a with velocity ‘V’ at an angle ‘θ’. If the projective breaks into 2 equal parts and one of them retraces the path then find the velocity of the other part\nSolution:",
null,
"At the highest point the projective will have only x-direction velocity and it is constant throughout the path.\nS0, Vx = Vi cos θ\nLet ‘M’ be the initial mass, $$\\frac{M}{2}$$ be mass of the halves. Velocity of 1 half changes from vx to – vx. Let the velocity of other half be V0. From Law of conservation of momentum.",
null,
"⇒ V0 = 3 Vx\n⇒ v0 = 3 vi cos θ."
Source: https://rdrr.io/github/trevorld/piecepack/man/aabb_piece.html
"# aabb_piece: Calculate axis-aligned bounding box for set of game pieces In trevorld/piecepack: Board Game Graphics\n\n## Description\n\nCalculate axis-aligned bounding box (AABB) for set of game pieces with and without an “oblique projection”.\n\n## Usage\n\n ```1 2 3 4 5 6 7 8``` ```aabb_piece( df, cfg = getOption(\"piecepackr.cfg\", pp_cfg()), envir = getOption(\"piecepackr.envir\"), op_scale = getOption(\"piecepackr.op_scale\", 0), op_angle = getOption(\"piecepackr.op_angle\", 45), ... ) ```\n\n## Arguments\n\n `df` A data frame of game piece information with (at least) the named columns “piece_side”, “x”, and “y”. `cfg` Piecepack configuration list or `pp_cfg` object, a list of `pp_cfg` objects, or a character vector referring to names in `envir` or a character vector referring to object names that can be retrieved by `base::dynGet()`. `envir` Environment (or named list) containing configuration list(s). `op_scale` How much to scale the depth of the piece in the oblique projection (viewed from the top of the board). `0` (the default) leads to an “orthographic” projection, `0.5` is the most common scale used in the “cabinet” projection, and `1.0` is the scale used in the “cavalier” projection. `op_angle` What is the angle of the oblique projection? Has no effect if `op_scale` is `0`. 
`...` Ignored\n\n## Details\n\nThe “oblique projection” of a set of (x,y,z) points onto the xy-plane is (x + λ * z * cos(α), y + λ * z * sin(α)) where λ is the scale factor and α is the angle.\n\n## Value\n\nA named list of ranges with five named elements `x`, `y`, and `z` for the axis-aligned bounding cube in xyz-space plus `x_op` and `y_op` for the axis-aligned bounding box of the “oblique projection” onto the xy plane.\n\n## Examples\n\n ``` 1 2 3 4 5 6 7 8 9 10``` ``` df_tiles <- data.frame(piece_side=\"tile_back\", x=0.5+c(3,1,3,1), y=0.5+c(3,3,1,1), suit=NA, angle=NA, z=NA, stringsAsFactors=FALSE) df_coins <- data.frame(piece_side=\"coin_back\", x=rep(4:1, 4), y=rep(4:1, each=4), suit=1:16%%2+rep(c(1,3), each=8), angle=rep(c(180,0), each=8), z=1/4+1/16, stringsAsFactors=FALSE) df <- rbind(df_tiles, df_coins) aabb_piece(df, op_scale = 0) aabb_piece(df, op_scale = 1, op_angle = 45) aabb_piece(df, op_scale = 1, op_angle = -90) ```\n\ntrevorld/piecepack documentation built on July 22, 2021, 3:26 a.m."
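The oblique-projection formula in the Details section is easy to prototype outside R. The following plain-Python sketch (our own code; the real implementation is the R function documented above) computes the bounding box of the projected points:

```python
import math

# Plain-Python sketch of the oblique projection described in Details
# (the real implementation lives in the R package; function and variable
# names here are our own).  A point (x, y, z) projects onto the xy-plane
# as (x + scale*z*cos(angle), y + scale*z*sin(angle)).

def aabb_oblique(points, op_scale=0.0, op_angle=45.0):
    """Axis-aligned bounding box of the oblique projection of xyz points."""
    a = math.radians(op_angle)
    xs = [x + op_scale * z * math.cos(a) for x, y, z in points]
    ys = [y + op_scale * z * math.sin(a) for x, y, z in points]
    return (min(xs), max(xs)), (min(ys), max(ys))

pts = [(1, 1, 0), (4, 4, 0), (2, 3, 2)]
print(aabb_oblique(pts))                           # op_scale=0: orthographic bounds
print(aabb_oblique(pts, op_scale=1, op_angle=45))  # depth shifts the box up-right
```

With `op_scale = 0` the z-coordinates drop out entirely, matching the “orthographic” behaviour of the default.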
Source: https://physics.stackexchange.com/questions/494771/what-is-the-exact-role-of-friction-in-rolling-without-slipping
"# What is the exact role of friction in rolling without slipping?\n\nWhen an object rolls along a plane, we say that for the object to roll without slipping, the velocity of the center of mass must be equal to angular velocity times radius, so that at the point of contact with the ground the two velocities cancel each other, and the point is instantaneously at rest.\n\nI understand this.\n\nBut then we say that if the ground is rough, there exists a friction on the bottom of the wheel. Which gives it torque. And this friction is responsible for the rotational motion of the wheel. Now if there is a external force acting on the center of mass of the wheel, then the only way the wheel will roll without slipping is if the tangential acceleration is greater than or equal to the acceleration caused by the external force at the center of mass. I don't understand why this is.\n\nFirst of all, if the point is momentarily at rest, how does friction know which direction to apply the force in? Secondly, if friction applies a torque, couldn't it be the case that the linear acceleration caused by the same is greater than the acceleration due to the external force? In that case also, shouldn't the wheel slip?\n\n• if your car is rolling down a hill, linear acceleration can be greater than the external force applied by the drive train to the center of the wheel, but the wheel still may not slip. Aug 2, 2019 at 2:47\n• Related question: Can we know when rolling occurs without slipping? Aug 2, 2019 at 4:34\n• If the tangential acceleration is lesser than the force at CM then it would be as if the cylinder/wheel is skidding. It’s intuitive to think of pushing a wheel rather than rolling it uphill or downhill would be difficult rather than rolling. Rolling friction is lesser than surface friction hence its easier roll rather than push an object across provided the object is circular in shape. 
Aug 2, 2019 at 5:54
• Your first question, “if the point is at rest how does friction know where to act?”, is like asking how friction knows where to act when an object is pushed along a rough plane. For the second question, friction itself does not act on the cm; it acts at a distance, causing a torque. This friction cannot cause linear acceleration because the net forces cancel out across opposite points of the circle. In that case the wheel cannot slip unless a large enough linear force on the cm is applied. Aug 2, 2019 at 6:18

The part you understand is correct. Let's get to the rest of your question.

But then we say that if the ground is rough, there exists a friction on the bottom of the wheel.

Not ideally, no. Just like how there is no static friction acting on a book resting on a table with no other horizontal forces acting on it, there is no static friction force acting on the wheel if there are no other forces/torques trying to change the wheel's velocity.

Which gives it torque. And this friction is responsible for the rotational motion of the wheel.

This is invalidated by the above discussion. If the wheel is rolling then it will keep on rolling. Friction is not responsible for this, just like how it is not responsible for keeping the book at rest on the table.

Now if there is an external force acting on the center of mass of the wheel, then the only way the wheel will roll without slipping is if the tangential acceleration is greater than or equal to the acceleration caused by the external force at the center of mass.

Here is where friction now comes into play, just like how static friction would be required to keep our book at rest if we started to push on it. Although I must say your terminology confuses me here. The simplest way to think about it is just that the static friction has some maximum value. If friction needs to be larger than this maximum value to prevent slipping, then slipping will occur.
Once again, just think about the book.\n\nFirst of all, if the point is momentarily at rest, how does friction know which direction to apply the force in?\n\nFriction opposes relative motion. It \"knows\" which direction to act because that is the direction that opposes slipping. Once again, think about the book.\n\nSecondly, if friction applies a torque, couldn't it be the case that the linear acceleration caused by the same is greater than the acceleration due to the external force? In that case also, shouldn't the wheel slip?\n\nThis is confusing again. Just talk about forces. I see many people, including you now, getting things mixed up with comparing \"accelerations due to forces and torques\".\n\nYou can work out the problem in more detail using Newton's second law. What is interesting is that depending on where you apply your force and the type of wheel you have the friction force could act either in the direction of the applied force or opposite the direction of the applied force to prevent slipping. I discuss this here and here\n\nWhen a wheel is rolling without slipping at constant speed, friction has no role: The rotational and translational speeds are perfectly matched, and neither force nor torque from friction is needed nor available.\n\nTo put it another way, friction is about force and torque. Those in turn are related to angular and linear acceleration, not velocity. No acceleration, no force and/or torque, no friction.\n\nNow, what happens if some (external) force accelerates the axle?\n\nIn the absence of friction, the forward linear motion of the wheel increases due to the force on the axle, but there's nothing that increases the angular speed of the wheel: With that constant angular speed, the touch point will be moving forwards.\n\nIn the frictionless case, the wheel will start sliding forward on the surface. 
But that sliding is what friction at that surface can oppose with a force that generates a torque to speed the wheel up.

Imagine a thrown bowling ball. At first it's sliding, and the friction acts to make it spin until it's rolling.

With a small acceleration, the wheel might not reach the point of sliding and kinetic friction. For small accelerations and large friction, it'll stay in the static friction regime. But that static friction will be generating a force in that same direction, causing the wheel to speed up its rotation until it's rolling at the right speed to be rolling without friction.

• I believe the question mainly concerns acceleration. Aug 2, 2019 at 3:16
• "In the absence of friction, the forward linear motion of the wheel increases due to the force on the axle, but there's nothing that increases the angular speed of the wheel: With that constant angular speed, the touch point will be moving forwards." So the wheels will slip? Aug 2, 2019 at 23:24
• In the absence of friction, yes, if the linear motion increases by itself the wheel will slip. With friction, the friction force speeds up the rotation to match. Aug 2, 2019 at 23:26
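The "maximum static friction" criterion discussed in the answers can be made concrete with a short calculation. The sketch below (our own numbers and helper function, not from the thread) computes the friction force required for a uniform disk pulled at its axle to roll without slipping, and compares it with the available static friction μmg:

```python
# Sketch of the maximum-static-friction criterion (our own numbers and
# helper, not from the thread): a uniform disk of mass m and radius R is
# pulled at its axle with force F on a surface with friction coefficient mu.
g = 9.81  # m/s^2

def rolling_friction(F, m, R, I):
    """Friction force required for rolling without slipping.

    Newton:  m*a = F - f;   torque about the center:  I*alpha = f*R;
    rolling: a = R*alpha  =>  f = I*F / (m*R**2 + I).
    """
    return I * F / (m * R**2 + I)

m, R, mu = 2.0, 0.1, 0.3          # kg, m, dimensionless
I = 0.5 * m * R**2                # uniform disk about its axis

f_needed = rolling_friction(F=6.0, m=m, R=R, I=I)   # equals F/3 for a disk
f_max = mu * m * g                                   # available static friction
print(f_needed, f_max, f_needed <= f_max)            # rolls without slipping
```

If `f_needed` exceeded `f_max`, static friction could not supply the torque demanded by `a = R*alpha`, and the wheel would slip, exactly the threshold the answer describes.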
Source: https://lists.nongnu.org/archive/html/auctex-devel/2021-12/msg00016.html
"auctex-devel\n[Top][All Lists]\n\n## a function that converts expression1/expression2 to \\frac{expression1}{e\n\n From: Uwe Brauer Subject: a function that converts expression1/expression2 to \\frac{expression1}{expression2} Date: Wed, 29 Dec 2021 08:22:47 +0100 User-agent: Gnus/5.13 (Gnus v5.13) Emacs/29.0.50 (gnu/linux)\n\nHi\n\nThis has annoyed me for years, honestly.\n\nI sometimes receive latex files, with constructions like this\n\nIn fact, if $6/5<\\gamma<2$, there exists a solution\nThen\n\\begin{equation}\n\\rho(t, x)=\\left\\{\n\\begin{array}{ll}\n{\\left[\\frac{K \\gamma}{4 \\pi\n\\kappa(\\gamma-1)}\\right]^{1/\\left(2-\\gamma^{\\prime}\\right)} u(|x|)^{1\n/(\\gamma-1)}} & \\text { for } 0 \\leqq|x| \\leqq R \\\\\n0 & \\text { for } R<|x|\n\\end{array}\n\\right.\n\\end{equation}\n\nWhat annoys me is the use of 6/5 instead of \\frac{5}{6} (I know\nsometimes it is even advised to use it).\n\nBut in general I want \\frac\n\nSo I have the following function\n\n(defun my-change-frac ()\n\"Changes stuff like 1/2 to \\frac{1}{2}.\"\n(interactive)\n(query-replace-regexp \"\\$$\\\\<[0-9]*\\$$\\$$[\\\\/]\\$$\\$$[0-9]*\\\\>\\$$\"\n\"\\\\\\\\frac{\\\\1}{\\\\3}\"))\n\nIt covers the 6/5 case but not, for example,\n1/\\left(2-\\gamma^{\\prime}\\right)\n\nDoes anybody have an idea?\nIn general shouldn't auctex have such a function?\n\nRegards\n\nUwe Brauer",
Source: https://epub.ub.uni-muenchen.de/88963/
"",
null,
"",
null,
"Mousset, Frank; Noever, Andreas; Panagiotou, Konstantinos; Samotij, Wojciech (2020): ON THE PROBABILITY OF NONEXISTENCE IN BINOMIAL SUBSETS. In: Annals of Probability, Vol. 48, No. 1: pp. 493-525\nFull text not available from 'Open Access LMU'.\n\n### Abstract\n\nGiven a hypergraph Gamma = (Omega, chi) and a sequence p = (p(omega))(omega is an element of)(Q) of values in (0, 1), let Omega(p) be the random subset of Omega obtained by keeping every vertex omega independently with probability p(omega). We investigate the general question of deriving fine (asymptotic) estimates for the probability that Omega(p) is an independent set in Gamma, which is an omnipresent problem in probabilistic combinatorics. Our main result provides a sequence of upper and lower bounds on this probability, each of which can be evaluated explicitly in terms of the joint cumulants of small sets of edge indicator random variables. Under certain natural conditions, these upper and lower bounds coincide asymptotically, thus giving the precise asymptotics of the probability in question. We demonstrate the applicability of our results with two concrete examples: subgraph containment in random (hyper)graphs and arithmetic progressions in random subsets of the integers.",
Source: https://www.geogebra.org/m/d2Sfwu5z
"# The ratio of the areas of two similar triangles\n\nProof: We are given two triangles ABC and PQR such that Δ ABC ~Δ PQR We need to prove that ar(ABC) / ar(PQR) = (AB/PQ)² = (BC/QR)²=(CA/RP)² For Finding the areas of the two triangles, we draw altiude AM and PN of the triangle Now, ar(ABC) = 1/2 BC x AM and ar(PQR) = 1/2 QR x PN Therefore , AM/PN = AB/PQ Also, Δ ABC ~ Δ PQR So, AB/PQ = BC/QR = CA/RP Therefore, ar(ABC)/ar(PQR) = AB/PQ x AM/PN = AB/PQ x AB/PQ = (AB/PQ)² Weget ar(ABC)/ar(PQR) = (AB/PQ)² = (BC/QR)² = (CA/RP)² So, ar(ABC)/ar(PQR) = (1/2 x BC x AM) / 1/2 x QR x PN = (BC x AM) / (QR x PN) Now, in Δ ABM and Δ PQN, ∠B = ∠Q (As Δ ABC ~ Δ PQR) ∠M = ∠N (Each is of 90⁰) and Δ ABM ~Δ PQN (AA similarity criterion)"
Source: http://conceptmap.cfapps.io/wikipage?lang=en&name=Linearized_gravity
"# Linearized gravity\n\nLinearized gravity is an approximation scheme in general relativity in which the nonlinear contributions from the spacetime metric are ignored, simplifying the study of many problems while still producing useful approximate results.\n\n## The method\n\nIn linearized gravity the metric tensor, $g$ , of spacetime is treated as a sum of an exact solution of Einstein's equations (often Minkowski spacetime) and a perturbation $h$ .\n\n$g\\,=\\eta +h$\n\nwhere $\\eta$ is the nondynamical background metric that is being perturbed about, and $h$ represents the deviation of the true metric ($g$ ) from flat spacetime.\n\nThe perturbation is treated using the methods of perturbation theory, \"linearized\" by ignoring all terms of order higher than one (quadratic in $h$ , cubic in $h$ etc...) in the perturbation.\n\n## Applications\n\nThe Einstein field equations (EFE), being nonlinear in the metric, are difficult to solve exactly and the above perturbation scheme allows linearized Einstein field equations to be obtained. These equations are linear in the metric, and the sum of two solutions of the linearized EFE is also a solution. The idea of 'ignoring the nonlinear part' is thus encapsulated in this linearization procedure.\n\nThe method is used to derive the Newtonian limit, including the first corrections, much like for a derivation of the existence of gravitational waves that led, after quantization, to gravitons. 
This is why the conceptual approach of linearized gravity is the canonical one in particle physics, string theory, and more generally quantum field theory where classical (bosonic) fields are expressed as coherent states of particles.

This approximation is also known as the weak-field approximation, as it is only valid if the perturbation h is very small.

### Weak-field approximation

In a weak-field approximation, the gauge symmetry is associated with diffeomorphisms with small "displacements" (diffeomorphisms with large displacements obviously violate the weak field approximation), which has the exact form (for infinitesimal transformations)

$\delta _{\vec {\xi }}h=\delta _{\vec {\xi }}g-\delta _{\vec {\xi }}\eta ={\mathcal {L}}_{\vec {\xi }}g={\mathcal {L}}_{\vec {\xi }}\eta +{\mathcal {L}}_{\vec {\xi }}h=\left[\xi _{\nu ;\mu }+\xi _{\mu ;\nu }+\xi ^{\alpha }h_{\mu \nu ;\alpha }+\xi _{;\mu }^{\alpha }h_{\alpha \nu }+\xi _{;\nu }^{\alpha }h_{\mu \alpha }\right]dx^{\mu }\otimes dx^{\nu }$

where ${\mathcal {L}}$ is the Lie derivative, and we used the fact that η does not transform (by definition). Note that we are raising and lowering the indices with respect to η and not g, and taking the covariant derivatives (Levi-Civita connection) with respect to η. This is the standard practice in linearized gravity.
The way of thinking in linearized gravity is this: the background metric η is the metric and h is a field propagating over the spacetime with this metric.\n\nIn the weak field limit, this gauge transformation simplifies to\n\n$\\delta _{\\vec {\\xi }}h_{\\mu \\nu }\\approx \\left({\\mathcal {L}}_{\\vec {\\xi }}\\eta \\right)_{\\mu \\nu }=\\xi _{\\nu ;\\mu }+\\xi _{\\mu ;\\nu }$\n\nThe weak-field approximation is useful in finding the values of certain constants, for example in the Einstein field equations and in the Schwarzschild metric.\n\n## Linearized Einstein field equations\n\nThe linearized Einstein field equations (linearized EFE) are an approximation to Einstein's field equations that is valid for a weak gravitational field and is used to simplify many problems in general relativity and to discuss the phenomena of gravitational radiation. The approximation can also be used to derive Newtonian gravity as the weak-field approximation of Einsteinian gravity.\n\nThe equations are obtained by assuming the spacetime metric is only slightly different from some baseline metric (usually a Minkowski metric). Then the difference in the metrics can be considered as a field on the baseline metric, whose behaviour is approximated by a set of linear equations.\n\n### Derivation for the Minkowski metric\n\nStarting with the metric for a spacetime in the form\n\n$g_{ab}=\\eta _{ab}+h_{ab}$\n\nwhere $\\,\\eta _{ab}$ is the Minkowski metric and $\\,h_{ab}$ — sometimes written as $\\epsilon \\,\\gamma _{ab}$ — is the deviation of $\\,g_{ab}$ from it. $h$ must be negligible compared to $\\eta$ : $\\left|h_{\\mu \\nu }\\right|\\ll 1$ (and similarly for all derivatives of $h$ ). Then one ignores all products of $h$ (or its derivatives) with $h$ or its derivatives (equivalent to ignoring all terms of higher order than 1 in $\\epsilon$ ). 
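The rule of ignoring all terms beyond first order in $h$ can be made concrete on the inverse metric: for $g = \eta + h$ the exact inverse differs from $\eta - h$ only at second order in $h$. A toy numerical check (our own 2×2 example with arbitrary small entries):

```python
# Quick check (our own toy numbers) of the first-order inverse
# g^{ab} = eta^{ab} - h^{ab}: for g = eta + h with |h| << 1, the exact
# inverse differs from eta - h only at order h^2.  2x2 case, eta = diag(-1, 1).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

eta = [[-1.0, 0.0], [0.0, 1.0]]
eps = 1e-4
h = [[2 * eps, eps], [eps, -3 * eps]]          # small symmetric perturbation

g = [[eta[i][j] + h[i][j] for j in range(2)] for i in range(2)]

# raise indices with eta: h^{ab} = eta^{ac} eta^{bd} h_{cd}; here eta^{-1} = eta
h_up = matmul(matmul(eta, h), eta)
g_inv_linear = [[eta[i][j] - h_up[i][j] for j in range(2)] for i in range(2)]
g_inv_exact = inv2(g)

err = max(abs(g_inv_exact[i][j] - g_inv_linear[i][j])
          for i in range(2) for j in range(2))
assert err < 20 * eps**2                        # discrepancy is O(h^2)
```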
It is further assumed in this approximation scheme that all indices of h and its derivatives are raised and lowered with $\\eta$ .\n\nThe metric h is clearly symmetric, since g and η are. The consistency condition $g_{ab}g^{bc}=\\delta _{a}{}^{c}$ shows that\n\n$g^{ab}\\,=\\eta ^{ab}-h^{ab}$\n\nThe Christoffel symbols can be calculated as\n\n$2\\Gamma _{bc}^{a}=(h^{a}{}_{b,c}+h^{a}{}_{c,b}-h_{bc,}{}^{a})$\n\nwhere $h_{bc,}{}^{a}\\ {\\stackrel {\\mathrm {def} }{=}}\\ \\eta ^{ar}h_{bc,r}$ , and this is used to calculate the Riemann tensor:\n\n$2R^{a}{}_{bcd}=2(\\Gamma _{bd,c}^{a}-\\Gamma _{bc,d}^{a})=\\eta ^{ae}(h_{eb,dc}+h_{ed,bc}-h_{bd,ec}-h_{eb,cd}-h_{ec,bd}+h_{bc,ed})=$\n$=\\eta ^{ae}(h_{ed,bc}-h_{bd,ec}-h_{ec,bd}+h_{bc,ed})=h_{d,bc}^{a}-h_{bd,}{}^{a}{}_{c}+h_{bc,}{}^{a}{}_{d}-h^{a}{}_{c,bd}$\n\nUsing $R_{bd}=\\delta ^{c}{}_{a}R^{a}{}_{bcd}$ gives\n\n$2R_{bd}=h_{d,br}^{r}+h_{b,dr}^{r}-h_{,bd}-h_{bd,rs}\\eta ^{rs}$\n\nFor the Ricci scalar we have:\n\n$R=R_{bd}\\eta ^{bd}=h_{,ab}^{ab}-\\square h$ .\n\nThen the linearized Einstein equations are\n\n$8\\pi T_{bd}\\,=R_{bd}-R_{ac}\\eta ^{ac}\\eta _{bd}/2$\n\nor\n\n$8\\pi T_{bd}=(h_{d,br}^{r}+h_{b,dr}^{r}-h_{,bd}-h_{bd,r}{}^{r}-h_{s,r}^{r}{}^{s}\\eta _{bd})/2+(h_{,a}{}^{a}\\eta _{bd}+h_{ac,r}{}^{r}\\eta ^{ac}\\eta _{bd})/4$\n\nOr, equivalently:\n\n$8\\pi (T_{bd}-T_{ac}\\eta ^{ac}\\eta _{bd}/2)\\,=R_{bd}$\n$16\\pi (T_{bd}-T_{ac}\\eta ^{ac}\\eta _{bd}/2)\\,=h_{d,br}^{r}+h_{b,dr}^{r}-h_{,bd}-h_{bd,rs}\\eta ^{rs}$\n\n## With a coordinate condition\n\nIf one uses the Lorentz invariant harmonic coordinate condition\n\n$h_{\\alpha \\beta ,\\gamma }\\eta ^{\\beta \\gamma }={\\frac {1}{2}}h_{\\beta \\gamma ,\\alpha }\\eta ^{\\beta \\gamma }\\,,$\n\nthen the last form above of the linearized Einstein equation simplifies to\n\n$16\\pi (T_{bd}-T_{ac}\\eta ^{ac}\\eta _{bd}/2)\\,=\\,-h_{bd,rs}\\eta ^{rs}\\,.$\n\nTo solve it, this can be rewritten as\n\n$\\Delta h_{bd}={-16\\pi G \\over c^{4}}(T_{bd}-T_{ac}\\eta ^{ac}\\eta 
_{bd}/2)+{\\frac {\\partial ^{2}h_{bd}}{c^{2}{\\partial t}^{2}}}\\,$\n\nwhere ∆ is the Laplacian on a spatial slice. If the stress-energy changes slowly (velocities are low compared to c), then this gives\n\n$h_{bd}(r)={\\frac {-1}{4\\pi }}\\int \\left({-16\\pi G \\over c^{4}}(T_{bd}(s)-T_{ac}(s)\\eta ^{ac}\\eta _{bd}/2)+{\\frac {\\partial ^{2}h_{bd}(s)}{c^{2}{\\partial t}^{2}}}\\right){\\frac {1}{\\vert r-s\\vert }}d^{3}s\\,$\n\nas a generalization of the Newtonian formula for gravitational potential. This is solved iteratively by first replacing the second time derivative by zero and then inserting the h so obtained repeatedly until convergence.\n\n## Applications\n\nThe linearized EFE are used primarily in the theory of gravitational radiation, where the gravitational field far from the source is approximated by these equations."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8953738,"math_prob":0.99959105,"size":5509,"snap":"2019-13-2019-22","text_gpt3_token_len":1144,"char_repetition_ratio":0.15894641,"word_repetition_ratio":0.0,"special_character_ratio":0.19077873,"punctuation_ratio":0.09334764,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000006,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-18T16:10:43Z\",\"WARC-Record-ID\":\"<urn:uuid:5ad11695-94a7-4367-b188-4a9524fab64c>\",\"Content-Length\":\"124477\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f72ce757-d0fe-470e-890e-15e26f074493>\",\"WARC-Concurrent-To\":\"<urn:uuid:7fc28fd4-2133-4ada-8138-71d8eaaee97f>\",\"WARC-IP-Address\":\"52.207.110.63\",\"WARC-Target-URI\":\"http://conceptmap.cfapps.io/wikipage?lang=en&name=Linearized_gravity\",\"WARC-Payload-Digest\":\"sha1:BJRHTF2TUWBQOWEMYIOKXBQWIA2QTWX4\",\"WARC-Block-Digest\":\"sha1:7MZHTDLU5TLUDEZ2DPEPNR3NE27DQKNE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912201455.20_warc_CC-MAIN-20190318152343-20190318174343-00385.warc.gz\"}"} |
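The first-order inverse metric quoted in the row above, $g^{ab}=\eta^{ab}-h^{ab}$ (indices raised with η), can be checked numerically. The sketch below is illustrative and not part of the original article; the particular `h` is an arbitrary symmetric matrix. It verifies that the residual of $g_{ab}\,g^{bc}-\delta_a{}^c$ shrinks quadratically with the perturbation, i.e. the proposed inverse is exact to first order:

```python
# Plain-Python check that eta^{ab} - eps*h^{ab} inverts eta_{ab} + eps*h_{ab}
# to first order in eps. eta = diag(-1, 1, 1, 1) is its own inverse.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

eta = [[-1.0 if i == 0 and j == 0 else (1.0 if i == j else 0.0) for j in range(4)] for i in range(4)]
h = [[0.1 * (1 + i + j) for j in range(4)] for i in range(4)]  # arbitrary symmetric perturbation

def residual(eps):
    g = [[eta[i][j] + eps * h[i][j] for j in range(4)] for i in range(4)]
    h_up = matmul(matmul(eta, h), eta)          # raise both indices with eta
    g_inv = [[eta[i][j] - eps * h_up[i][j] for j in range(4)] for i in range(4)]
    p = matmul(g, g_inv)
    return max(abs(p[i][j] - (1.0 if i == j else 0.0)) for i in range(4) for j in range(4))

# Halving eps quarters the residual, so the deviation from the identity is O(eps^2).
r1, r2 = residual(1e-3), residual(5e-4)
print(round(r1 / r2, 2))  # 4.0
```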
https://gamedev.stackexchange.com/questions/151056/how-can-i-move-a-3d-object-to-a-known-cartesian-point-on-the-surface-of-a-sphere | [
"How can I move a 3d object to a known Cartesian point on the surface of a sphere?\n\nI'm trying to figure out how to move an object from one Cartesian point to another located on the surface of a 3D sphere, so that the object will follow the spherical coordinate system (Theta and Phi).\n\nI have a function that converts a Cartesian to Spherical but how can I make it work on the surface of a sphere just like we are using the equation of a line y=mx+b when moving from one point to another on a flat surface?\n\nI'm going to use it to move an object from its point to a selected point somewhere else on the sphere so that the object will move in a straight line but following the curvature.\n\n• Can you give some example start/stopping points and what exactly you need as the output? It sounds like you currently can get both the cartesian coordinates and the spherical coordinates but you need 3D vectors describing the position? Nov 17 '17 at 21:04\n• Yes that's right I can but I don't know how to make the movement in between the starting and stopping point in a straight line over the curved surface. I know how to do it in a completely straight line through the surface but not over on the curved surface. I don't even know where to begin my thinking on this one. Nov 17 '17 at 21:09\n\nThe general idea behind interpolation between two points on a sphere is to use a \"Spherical Lerp\" function or \"slerp\" for short.\n\nSome engines have it as a built-in function, for others, either it's only available for Quaternions or missing entirely and you'll have to write it yourself. 
The page on Wikipedia provides a good explanation of both the Vector and Quaternion versions of slerp, but to summarize the Vector3 part here:\n\nThe basic formula combines the start and end points with weights formed from ratios of sines of the subtended angle:\n\ntheta = acos(dot(a, b))\nslerp(a, b, t) = (sin((1 - t) * theta) / sin(theta)) * a + (sin(t * theta) / sin(theta)) * b\n\nIf you are animating in a loop, you can use a more efficient algorithm by iteratively reflecting each point after you've found the first intermediate point:\n\nc = dot(p0, p1) * 2\np(k+1) = c * p(k) - p(k-1)\n• That was all I needed, here's the final function in C# public Vector3 slerpIt(Vector3 a, Vector3 b, float t) { float theta = (float)Math.Acos(Vector3.Dot(Vector3.Normalize(a), Vector3.Normalize(b))); return (float)(Math.Sin((1 - t) * theta) / Math.Sin(theta)) * a + (float)(Math.Sin(t * theta) / Math.Sin(theta)) * b; } Nov 18 '17 at 12:36\n\nIt's very simple. First of all, you need to get the line between the starting and ending point (marked with purple on the following image):",
null,
"(The green point is the center of the sphere, a is the vector from the center to the start point and b is the vector from the center to the end point, the red line is the path on the surface of the sphere)\n\nNow to make the object follow the surface of the sphere you simply need to interpolate between the start and endpoint, get the vector between that point and the center (if c is the center and p is the interpolated position, then the vector is simply p - c), normalize it, multiply it by the radius of the sphere, then add the result to the center of the sphere to get the object's position.\n\nPseudo code:\n\nvec3 getPositionOnSphere(vec3 startPos, vec3 endPos, vec3 center, float radius, float time) {\nvec3 pos = (endPos - startPos) * time + startPos\nreturn normalize(pos - center) * radius + center\n}\n\nThe problem with this approach is that the end result won't be uniform. You should generate the line first, then move the object on that.\n\n• I've upvoted your answer because I made use of your explanation, it's very informative. My reputation is too low to make it seen here, so I wanted to tell you. Nov 18 '17 at 12:47"
] | [
null,
"https://i.stack.imgur.com/PIFGt.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9363036,"math_prob":0.97477645,"size":598,"snap":"2022-05-2022-21","text_gpt3_token_len":130,"char_repetition_ratio":0.13131313,"word_repetition_ratio":0.035714287,"special_character_ratio":0.21070234,"punctuation_ratio":0.024793388,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99847883,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-24T11:51:02Z\",\"WARC-Record-ID\":\"<urn:uuid:24091221-5577-4860-bbbd-b3aa3932bc2b>\",\"Content-Length\":\"136295\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:084bd98a-2b3d-450f-b1c4-001fd20b9507>\",\"WARC-Concurrent-To\":\"<urn:uuid:11ef64e7-c543-4a78-aeac-fbc9a527156a>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://gamedev.stackexchange.com/questions/151056/how-can-i-move-a-3d-object-to-a-known-cartesian-point-on-the-surface-of-a-sphere\",\"WARC-Payload-Digest\":\"sha1:V4F3NSTQSO26O2SVZWPHW3NCB45DMZFU\",\"WARC-Block-Digest\":\"sha1:TZUQT6JX4NYPWKMKKBTJXHZ5TILJJS5W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304528.78_warc_CC-MAIN-20220124094120-20220124124120-00384.warc.gz\"}"} |
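The accepted slerp formula from the thread above can be written as a small self-contained Python sketch (illustrative; the thread's own code is C# and GLSL-style pseudocode):

```python
import math

def slerp(a, b, t):
    """Spherical interpolation between unit vectors a and b (length-3 lists).

    Antipodal inputs (theta ~ pi) are ambiguous and not handled here."""
    dot = sum(x * y for x, y in zip(a, b))
    dot = max(-1.0, min(1.0, dot))          # clamp for acos safety
    theta = math.acos(dot)
    if theta < 1e-9:                        # nearly parallel: fall back to lerp
        return [x + (y - x) * t for x, y in zip(a, b)]
    s = math.sin(theta)
    w0 = math.sin((1 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [w0 * x + w1 * y for x, y in zip(a, b)]

# Halfway along the arc from the x-axis to the y-axis:
p = slerp([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 0.5)
print([round(c, 4) for c in p])  # [0.7071, 0.7071, 0.0]
```

Note that the result stays on the unit sphere, which is exactly the "uniform" behaviour the second answer's lerp-then-project approach lacks.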
https://data-mining-tutorials.blogspot.com/2012/11/ | [
"Monday, November 5, 2012\n\nLinear Discriminant Analysis - Tools comparison\n\nLinear discriminant analysis is a popular method in the domains of statistics, machine learning and pattern recognition. Indeed, it has interesting properties: it is rather fast on large databases; it naturally handles multi-class problems (target attribute with more than 2 values); it generates a linear classifier that is easy to interpret; it is robust and fairly stable, even applied on small databases; and it has an embedded variable selection mechanism. Personally, I appreciate linear discriminant analysis because it admits multiple interpretations (probabilistic, geometric), and thus highlights various aspects of supervised learning.\n\nIn this tutorial, we highlight the similarities and the differences between the outputs of Tanagra, R (MASS and klaR packages), SAS, and SPSS software. The main conclusion is that, although the presentation is not always the same, ultimately we have exactly the same results. This is what matters most.\n\nKeywords: linear discriminant analysis, predictive discriminant analysis, canonical discriminant analysis, variable selection, feature selection, sas, stepdisc, candisc, R software, xlsx package, MASS package, lda, klaR package, greedy.wilks, confusion matrix, resubstitution error rate\nComponents: LINEAR DISCRIMINANT ANALYSIS, CANONICAL DISCRIMINANT ANALYSIS, STEPDISC\nTutorial: en_Tanagra_LDA_Comparisons.pdf\nDataset: alcohol\nReferences :\nWikipedia - \"Linear Discriminant Analysis\""
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81935644,"math_prob":0.78513086,"size":1422,"snap":"2019-26-2019-30","text_gpt3_token_len":293,"char_repetition_ratio":0.116361074,"word_repetition_ratio":0.0,"special_character_ratio":0.17651196,"punctuation_ratio":0.20083682,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9508003,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-24T17:24:56Z\",\"WARC-Record-ID\":\"<urn:uuid:80ebf1a7-9c0a-4579-9f8e-9d0130fec3a7>\",\"Content-Length\":\"85191\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e92812ac-17a2-4c16-8ee8-8cdb1b82f433>\",\"WARC-Concurrent-To\":\"<urn:uuid:bcdbb4b9-3748-4f4f-8843-beea64242ab3>\",\"WARC-IP-Address\":\"172.217.8.1\",\"WARC-Target-URI\":\"https://data-mining-tutorials.blogspot.com/2012/11/\",\"WARC-Payload-Digest\":\"sha1:GJM3S3K624NJHAOLADDJ3IVX75N7NC4L\",\"WARC-Block-Digest\":\"sha1:JQ5QSM3VRVJPTS6FMQLSCYHA6A7XDPE6\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999620.99_warc_CC-MAIN-20190624171058-20190624193058-00500.warc.gz\"}"} |
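For readers who want a concrete feel for what the tools in the post compute, here is a minimal from-scratch sketch of the two-class case in Python. It is illustrative only — the post's actual comparison runs Tanagra, R, SAS and SPSS on the `alcohol` data, and the toy points below are made up:

```python
# Two-class linear discriminant: w = S^{-1} (m1 - m0), with S the pooled
# within-class scatter; classify by projecting onto w and comparing to the
# projection of the midpoint of the class means.
def mean_vec(rows):
    n = len(rows)
    return [sum(r[j] for r in rows) / n for j in range(len(rows[0]))]

def lda_direction(c0, c1):
    m0, m1 = mean_vec(c0), mean_vec(c1)
    S = [[0.0, 0.0], [0.0, 0.0]]          # pooled within-class scatter (2x2)
    for rows, m in ((c0, m0), (c1, m1)):
        for r in rows:
            dx, dy = r[0] - m[0], r[1] - m[1]
            S[0][0] += dx * dx
            S[0][1] += dx * dy
            S[1][0] += dy * dx
            S[1][1] += dy * dy
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    inv = [[S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det, S[0][0] / det]]
    d = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * d[0] + inv[0][1] * d[1],
         inv[1][0] * d[0] + inv[1][1] * d[1]]
    threshold = sum(w[k] * (m0[k] + m1[k]) / 2 for k in range(2))
    return w, threshold

class0 = [[1.0, 2.0], [1.5, 1.8], [1.2, 2.2], [0.8, 1.9]]
class1 = [[3.0, 4.0], [3.5, 3.8], [3.2, 4.2], [2.8, 3.9]]
w, t = lda_direction(class0, class1)
score = lambda p: w[0] * p[0] + w[1] * p[1]
print(score([1.0, 2.0]) < t, score([3.3, 4.0]) > t)  # True True
```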
https://mathemerize.com/tag/types-of-functions-in-maths/ | [
"## Types of Functions in Maths – Domain and Range\n\nHere you will learn the types of functions in maths, i.e. polynomial function, logarithmic function, etc., and their domain and range. Let's begin – Types of Functions in Maths (a) Polynomial function If a function is defined by f(x) = $$a_0x^n$$ + $$a_1x^{n-1}$$ + $$a_2x^{n-2}$$ + ….. + $$a_{n-1}x$$ + $$a_n$$ where n is a non …"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7629729,"math_prob":1.000001,"size":368,"snap":"2022-40-2023-06","text_gpt3_token_len":113,"char_repetition_ratio":0.18956044,"word_repetition_ratio":0.032786883,"special_character_ratio":0.3451087,"punctuation_ratio":0.07042254,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999976,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-28T19:30:38Z\",\"WARC-Record-ID\":\"<urn:uuid:65628842-5bef-414c-bcce-01ea4fc4ef36>\",\"Content-Length\":\"199669\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:66b812ef-6c8b-4a17-9fd6-f689fd944c92>\",\"WARC-Concurrent-To\":\"<urn:uuid:eeafc03c-2de8-4f51-a273-a6aeb177cee2>\",\"WARC-IP-Address\":\"3.234.104.255\",\"WARC-Target-URI\":\"https://mathemerize.com/tag/types-of-functions-in-maths/\",\"WARC-Payload-Digest\":\"sha1:JDPWRFQMUNC6G54RY3PZDAUDFVMJYBYF\",\"WARC-Block-Digest\":\"sha1:553ZP5SCN4TFD74NUXTT2SM5B3MGH5D7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335276.85_warc_CC-MAIN-20220928180732-20220928210732-00162.warc.gz\"}"} |
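A polynomial function as defined in the snippet above can be evaluated for any real x (its domain is all real numbers). A small sketch in Python, using Horner's rule and made-up coefficients:

```python
# Horner's rule evaluates f(x) = a_0*x^n + a_1*x^(n-1) + ... + a_(n-1)*x + a_n
# with the coefficients listed from the highest power down, as in the snippet.
def poly_eval(coeffs, x):
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

print(poly_eval([2, 3, 1], 2))  # f(x) = 2x^2 + 3x + 1, so f(2) = 8 + 6 + 1 = 15
```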
https://testbook.com/question-answer/the-characteristic-equation-of-a-control-system-is--5fd34ee1c59dd0a44872d503 | [
"# The characteristic equation of a control system is given as:\n\n$$1+\\frac{K(s + 1)}{s(s+4)(s^2+2s+2)} =0$$\n\nFor a large value of s, the root loci for K ≥ 0 are asymptotic to asymptotes, where do the asymptotes intersect on the real axis?\n\nThis question was previously asked in\nESE Electronics 2011 Paper 2: Official Paper\n1. 5/3\n2. 2/3\n3. 6/3\n4. 4/3\n\nOption 1 : 5/3\n\n## Detailed Solution\n\nConcept:\n\nAll the asymptotes meet at a common point on the real axis known as the centroid, which is given by:\n\nCentroid:\n\n$$\\frac{{\\sum real\\;part\\;of\\;pole - \\sum real\\;part\\;of\\;zero}}{{No.\\;\\;of\\;poles - No.\\;\\;of\\;zeros}}$$\n\nAnalysis:\n\nFrom the characteristic equation, the open-loop transfer function has\n\nZeros = -1\n\nPoles = 0, -4, -1 + i, -1 - i.\n\nTherefore centroid = -5 / 3.",
null,
"Important Points\n\nIn a root locus, when P ≠ Z, some of the root locus branches tend to infinity along the directions of the asymptotes.\n\nAngle of asymptotes = $$\\frac{{\\left( {2k \\pm 1} \\right)180}}{{P - Z}}$$ for k = 0, 1, 2, 3, …\n\nRemember: the centroid always lies on the real axis but need not lie on the root locus, whereas a breakaway (saddle) point must lie on the root locus."
] | [
null,
"https://cdn.testbook.com/resources/lms_creative_elements/important-point-image.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.72738487,"math_prob":0.9918089,"size":987,"snap":"2021-31-2021-39","text_gpt3_token_len":307,"char_repetition_ratio":0.12716176,"word_repetition_ratio":0.0,"special_character_ratio":0.31914893,"punctuation_ratio":0.16744186,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99094343,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T16:39:47Z\",\"WARC-Record-ID\":\"<urn:uuid:8e6e48c0-fda5-4112-a4fb-29c7a67c5017>\",\"Content-Length\":\"114938\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5ad6365a-a41d-4fbe-9117-6145e27854c3>\",\"WARC-Concurrent-To\":\"<urn:uuid:cd5ed56b-f433-4a4e-b315-4e83dfaf329a>\",\"WARC-IP-Address\":\"104.22.44.238\",\"WARC-Target-URI\":\"https://testbook.com/question-answer/the-characteristic-equation-of-a-control-system-is--5fd34ee1c59dd0a44872d503\",\"WARC-Payload-Digest\":\"sha1:5IU3JPDVNEEYQIX4QLROOLQB3YH25Q7E\",\"WARC-Block-Digest\":\"sha1:2ISP6KHSS2VJNRQ74BVE7O2UT4UPZ5DC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780060877.21_warc_CC-MAIN-20210928153533-20210928183533-00184.warc.gz\"}"} |
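The centroid calculation in the solution above is easy to reproduce programmatically (a quick illustrative check, not part of the original page):

```python
# Centroid of the root-locus asymptotes for the open-loop transfer function
# K(s+1) / (s (s+4) (s^2 + 2s + 2)) from the question:
poles = [0, -4, -1 + 1j, -1 - 1j]   # roots of s(s+4)(s^2+2s+2)
zeros = [-1]

sigma = (sum(p.real for p in poles) - sum(z.real for z in zeros)) \
        / (len(poles) - len(zeros))
print(round(sigma, 4))  # -1.6667, i.e. -5/3
```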
https://thebestuknow.com/maths/multiplication/maths-table-30/ | [
"# Table of 30\n\nThe multiplication table of 30 is given below in tabular form, so you can learn it easily.\n\nIn the table of 30 below, the numerals are on the left side and how to read them is on the right side, which is the right way to understand and learn the table.\n\nSince this is the table of 30, the numerals column shows 30 multiplied by each factor from 1 to 20, together with the results.\n\nNow let's see how to learn-\n\n## Table of 30 in Tabular Form\n\nNumerals | Read as\n1. 30 x 1 = 30 | Thirty ones are thirty\n2. 30 x 2 = 60 | Thirty twos are sixty\n3. 30 x 3 = 90 | Thirty threes are ninety\n4. 30 x 4 = 120 | Thirty fours are hundred and twenty\n5. 30 x 5 = 150 | Thirty fives are hundred and fifty\n6. 30 x 6 = 180 | Thirty sixes are hundred and eighty\n7. 30 x 7 = 210 | Thirty sevens are two hundred and ten\n8. 30 x 8 = 240 | Thirty eights are two hundred and forty\n9. 30 x 9 = 270 | Thirty nines are two hundred and seventy\n10. 30 x 10 = 300 | Thirty tens are three hundred\n11. 30 x 11 = 330 | Thirty elevens are three hundred and thirty\n12. 30 x 12 = 360 | Thirty twelves are three hundred and sixty\n13. 30 x 13 = 390 | Thirty thirteens are three hundred and ninety\n14. 30 x 14 = 420 | Thirty fourteens are four hundred and twenty\n15. 30 x 15 = 450 | Thirty fifteens are four hundred and fifty\n16. 30 x 16 = 480 | Thirty sixteens are four hundred and eighty\n17. 30 x 17 = 510 | Thirty seventeens are five hundred and ten\n18. 30 x 18 = 540 | Thirty eighteens are five hundred and forty\n19. 30 x 19 = 570 | Thirty nineteens are five hundred and seventy\n20. 30 x 20 = 600 | Thirty twenties are six hundred\n\n### Get more tables other than a table of 30-\n\n Table of 1 Table of 2 Table of 3 Table of 4 Table of 5 Table of 6 Table of 7 Table of 8 Table of 9 Table of 10 Table of 11 Table of 12 Table of 13 Table of 14 Table of 15 Table of 16 Table of 17 Table of 18 Table of 19 Table of 20 Table of 21 Table of 22 Table of 23 Table of 24 Table of 25 Table of 26 Table of 27 Table of 28 Table of 29 Table of 30\n\nYou may also like to learn the terms given below-\n\nMultiplication Table of 30 to List of Chapters in Mathematics"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8808053,"math_prob":0.9889413,"size":1800,"snap":"2021-43-2021-49","text_gpt3_token_len":556,"char_repetition_ratio":0.266147,"word_repetition_ratio":0.0,"special_character_ratio":0.37333333,"punctuation_ratio":0.08226221,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99685997,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-01T09:02:33Z\",\"WARC-Record-ID\":\"<urn:uuid:92dd2bb3-d3ce-4131-8414-9d01a138512f>\",\"Content-Length\":\"36498\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e3482d8-936c-49a1-a7e8-a57c895c99af>\",\"WARC-Concurrent-To\":\"<urn:uuid:2c965775-dafc-49e7-9443-33f9ea362a29>\",\"WARC-IP-Address\":\"204.93.216.87\",\"WARC-Target-URI\":\"https://thebestuknow.com/maths/multiplication/maths-table-30/\",\"WARC-Payload-Digest\":\"sha1:IL4OS6TEKIBJTQZOXMB2CYDJKD6M5OSW\",\"WARC-Block-Digest\":\"sha1:R6ZM7PGZQDXMF7HTDSXVO5ZNJDBM3E2R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964359976.94_warc_CC-MAIN-20211201083001-20211201113001-00365.warc.gz\"}"} |
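The numeral column of the table above can be reproduced with a short loop (an illustrative Python snippet, not part of the original page):

```python
# Print the table of 30 from 30 x 1 up to 30 x 20.
for n in range(1, 21):
    print(f"30 x {n} = {30 * n}")
```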
https://uk.mathworks.com/matlabcentral/answers/489726-parfeval-doesn-t-call-function?s_tid=prof_contriblnk | [
"# parfeval() doesn't call function\n\n10 views (last 30 days)\nTimo Schmid on 7 Nov 2019\nEdited: Edric Ellis on 8 Nov 2019\nI am trying to process data from a UDP server close to real time.\nFor that I wrote this MATLAB code to fill a buffer with the UDP datagrams and process the data (split the strings, etc.) once the buffer from my MATLAB function is full (myBuffer).\nWhile processing the data (which takes about 0.9 s) I need to go on receiving data and store it in the (now) emptied buffer.\nI found the parfeval function of the \"Parallel Computing Toolbox\" which might suit my needs as I need my function \"ProcessData\" to run in the background.\nThe problem I encountered is that I can't make it run, as the parfeval function doesn't enter my function ProcessData. I tested it by setting a breakpoint in ProcessData() but the program never stops. Did I do anything wrong with the function parameters?\nThis is what the MATLAB help says: F = parfeval(p,fcn,numout,in1,in2,...) requests asynchronous execution of the function fcn on a worker contained in the parallel pool p, expecting numout output arguments and supplying as input arguments in1,in2,....\nHope you guys can help me with this problem! 
Thanks in advance.\n%% Specify a Server (host name or IP address) with Port 8080\nu = udp('192.168.0.164', 8080); %UDP object at home\n%u = udp('169.254.38.221', 8080); %UDP object at the Pilotfabrik\n% Buffer in the enclosing function\nmyBuffer = {}; %Initialization\nMAXBUFFLEN = 100; %Maximum number of entries in the buffer (1 entry = 1 datagram)\nu.InputBufferSize = 4060;\nu.ErrorFcn = @ErrorFcn;\nu.DatagramTerminateMode =\nu.Terminator = '!';\n%% Initialize Parallel pool\npool = gcp();\n%% Open the connection\nfopen(u);\nif (~strcmp(u.Status,'open'))\nNetworkError(u,'Connection failed!');\nend\n%% Start Data transmission by trigger\nfprintf(u, 'Requesting Data')\n%% Callback function\ndatagram = fscanf(u);\nmyBuffer{end+1} = datagram; %Appends datagram to buffer\n[~, bufflen] = size(myBuffer);\nif bufflen < MAXBUFFLEN\nreturn;\nelse\nf = parfeval(pool, @ProcessData, 1, myBuffer);\nmyBuffer = {}; %empty the buffer\nend\nend\nfunction ErrorFcn(u,~)\ndisp(\"An Error occurred\");\nend\nend\nfunction datagram_values = ProcessData(myBuffer)\nstringvalues = split(myBuffer, \";\"); %Split strings\ndoublevalues = str2double(stringvalues) %Convert strings to doubles\ndim_doublevalues = size(doublevalues); %Dimension of double output array\ni_max = dim_doublevalues(2) %Number of data packets\nj_max = (dim_doublevalues(3))-1 %Number of values per data packet; -1 because of the empty value after \";\" at the end\nk_max = i_max*j_max %Total number of values in the buffer\nk=1;\nwhile k<=k_max\nfor i = 1:i_max\nfor j = 1:j_max\ndatagram_values(k,1)=doublevalues(1,i,j);\nk=k+1;\nend\nend\nend\ndisp(datagram_values);\nend\n\nEdric Ellis on 8 Nov 2019\nEdited: Edric Ellis on 8 Nov 2019\nUnfortunately, the MATLAB debugger can't stop in code running on the workers - only code running at the client.\nIn this case, you should try looking at the diary output of the future f, like this:\nf = parfeval(..);\nwait(f); % wait for the worker to complete\ndisp(f.Diary); % display the output\nIf you don't wish to 
block the client, you could use afterEach to invoke the call to disp, like this:\nf = parfeval(..);\nafterEach(f, @(f) disp(f.Diary), 0, 'PassFuture', true);"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6587103,"math_prob":0.7896169,"size":2199,"snap":"2020-24-2020-29","text_gpt3_token_len":590,"char_repetition_ratio":0.13986333,"word_repetition_ratio":0.0,"special_character_ratio":0.26466575,"punctuation_ratio":0.19952494,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95653754,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-06T00:34:20Z\",\"WARC-Record-ID\":\"<urn:uuid:c84c4c43-deb9-4b4a-acce-22149670d49d>\",\"Content-Length\":\"117959\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6cb3549e-3d21-4ed8-ad20-ba70199ca1cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:97a318af-2e1d-4421-8545-4cf8674bab74>\",\"WARC-IP-Address\":\"23.223.252.57\",\"WARC-Target-URI\":\"https://uk.mathworks.com/matlabcentral/answers/489726-parfeval-doesn-t-call-function?s_tid=prof_contriblnk\",\"WARC-Payload-Digest\":\"sha1:3KG46A4KYSIGVJ2WVQ3D76OVSW6ZQTAO\",\"WARC-Block-Digest\":\"sha1:3I6UFAGHLA2BLTUXZQAQSB2IBW3XRCAP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655889877.72_warc_CC-MAIN-20200705215728-20200706005728-00171.warc.gz\"}"} |
https://www.folkstalk.com/2022/10/python-how-to-copy-a-2d-array-leaving-out-last-column-with-code-examples.html | [
"# Python How To Copy A 2D Array Leaving Out Last Column With Code Examples\n\nIn this session, we will try our hand at solving the Python How To Copy A 2D Array Leaving Out Last Column puzzle by using the computer language. The following piece of code will demonstrate this point.\n\n```In : import numpy as np\n\nIn : H = np.meshgrid(np.arange(5), np.arange(5))[0]\n\nIn : H\nOut:\narray([[0, 1, 2, 3, 4],\n[0, 1, 2, 3, 4],\n[0, 1, 2, 3, 4],\n[0, 1, 2, 3, 4],\n[0, 1, 2, 3, 4]])\n\nIn : Hsub = H[1:-1,1:-1]\n\nIn : Hsub\nOut:\narray([[1, 2, 3],\n[1, 2, 3],\n[1, 2, 3]])\n```\n\nThere are a lot of real-world examples that show how to fix the Python How To Copy A 2D Array Leaving Out Last Column issue.\n\n## How do you pull a column out of an array in Python?\n\nUse the syntax array[:, [i, j]] to extract the i and j indexed columns from array . Like lists, NumPy arrays use zero-based indexes. Use array[:, i:j+1] to extract the i through j indexed columns from array .\n\n## How do you remove the last element of a NumPy array in Python?\n\nApproach\n\n• Import numpy library and create numpy array.\n• Using the len() method to get the length of the given array.\n• Now use slicing to remove the last element by setting the start of slicing=0 and end = lastIndex.\n• lastIndex is calculated by decrementing the length of array by one.\n\n## How do I find the last column of an array?\n\nIf you want the values in the last column as a simple one-dimensional array, use the syntax ar[:, -1] . If you want the resulting values in a column vector (for example, with shape (n, 1) where n is the total number of rows), use the syntax ar[:, [-1]] .\n\n## How do you extract a matrix element in Python?\n\nGiven a Matrix, Extract all the elements that are of string data type. 
Input : test_list = [[5, 6, 3], [\"Gfg\", 3], [9, \"best\", 4]] Output : ['Gfg', 'best'] Explanation : All strings are extracted.04-Aug-2022\n\n## How do you extract a column from a multidimensional array in Python?\n\nUse a list comprehension to extract a column from an array. Use the syntax [row[i] for row in array] to extract the i – indexed column from array . Further reading: A list comprehension is often useful for extracting elements from a list. You can read more about list comprehensions here.\n\n## How do I extract a column from a data frame?\n\nExtracting Multiple columns from dataframe\n\n• Syntax : variable_name = dataframe_name [ row(s) , column(s) ]\n• Example 1: a=df[ c(1,2) , c(1,2) ]\n• Explanation : if we want to extract multiple rows and columns we can use c() with row names and column names as parameters.\n• Example 2 : b=df [ c(1,2) , c(“id”,”name”) ]\n\n## How do I remove the last element from a list?\n\npop() function. The simplest approach is to use the list's pop([i]) function, which removes an element present at the specified position in the list. If we don't specify any index, pop() removes and returns the last element in the list.\n\n## How do I remove the last two elements from a list in Python?\n\nUsing len() + list slicing to remove last K elements of list. List slicing can perform this particular task in which we just slice the first len(list) – K elements to be in the list and hence remove the last K elements.14-Sept-2022\n\n## How do you pop the last element of a list in Python?\n\nPython list pop() is an inbuilt function in Python that removes and returns the last value from the List or the given index value.25-Aug-2022\n\n## How do you pick up the last value in a column?\n\nCOLUMNS Function – Example 3 If we wish to get only the first column number, we can use the MIN function to extract just the first column number, which will be the lowest number in the array. 
Once we get the first column, we can just add the total columns in the range and subtract 1, to get the last column number.14-Jun-2022"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.68599755,"math_prob":0.909925,"size":3714,"snap":"2023-40-2023-50","text_gpt3_token_len":998,"char_repetition_ratio":0.15956873,"word_repetition_ratio":0.083333336,"special_character_ratio":0.29348412,"punctuation_ratio":0.15238096,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9877586,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T13:29:07Z\",\"WARC-Record-ID\":\"<urn:uuid:a6362513-47fc-40fc-b2e4-19be02a8d04a>\",\"Content-Length\":\"72206\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9d5927d9-40f7-4e34-ad6a-3f6643de98b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:c0991d43-b37a-4c5f-bc11-c101ba40480f>\",\"WARC-IP-Address\":\"104.21.49.226\",\"WARC-Target-URI\":\"https://www.folkstalk.com/2022/10/python-how-to-copy-a-2d-array-leaving-out-last-column-with-code-examples.html\",\"WARC-Payload-Digest\":\"sha1:BG44NFAGEISSPSDOTQ2QR4LHTXJQXCHK\",\"WARC-Block-Digest\":\"sha1:3NM4YZYSEKUFKZ5GTQZY2N5SWVMD2SLQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506481.17_warc_CC-MAIN-20230923130827-20230923160827-00361.warc.gz\"}"} |
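The title question itself — copying a 2-D array while leaving out the last column — has a one-line answer. With NumPy the idiom is `a[:, :-1].copy()`; the same idea in pure Python with nested lists (illustrative data made up here):

```python
# Slice off the last column of each row; slicing builds new row lists,
# so the result is an independent copy of the trimmed array.
grid = [[0, 1, 2, 3],
        [4, 5, 6, 7],
        [8, 9, 10, 11]]

trimmed = [row[:-1] for row in grid]   # row[:-1] already makes a new list per row
trimmed[0][0] = 99                     # so mutating the copy...
print(grid[0][0], trimmed[0])          # ...leaves the original intact: 0 [99, 1, 2]
```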
http://mathcentral.uregina.ca/QQ/database/QQ.09.14/h/victoria1.html | [
"Math Central Quandaries & Queries",
null,
"Question from Victoria, a student: find the area of a regular pentagon inscribed in a circle with radius 3 units",
null,
"Hi Victoria,\n\nIf you join each of the vertices of the pentagon to the center $C$ of the circle you will see that the pentagon is partitioned into five congruent isosceles triangles. I have labeled one of these triangles $ABC.$ Thus the area of the pentagon is 5 times the area of the triangle $ABC.$",
null,
"Also you can see that the measure of the angle $BCA$ is $\frac{360^\circ}{5} = 72^\circ.$ You can use the technique in my response to a question by Sela to find the area of triangle $ABC.$\n\nPenny",
null,
"Math Central is supported by the University of Regina and the Imperial Oil Foundation."
] | [
null,
"http://mathcentral.uregina.ca/QQ/database/QQ.09.14/h/victoria1.1.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91102237,"math_prob":0.9740514,"size":588,"snap":"2023-14-2023-23","text_gpt3_token_len":154,"char_repetition_ratio":0.16267124,"word_repetition_ratio":0.0,"special_character_ratio":0.26020408,"punctuation_ratio":0.060344826,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99802303,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-08T21:13:42Z\",\"WARC-Record-ID\":\"<urn:uuid:fb7b9280-9b87-43a1-ae36-253f9452d7cf>\",\"Content-Length\":\"7147\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a186b9a-6c22-4ff5-a7d3-c73d02e69804>\",\"WARC-Concurrent-To\":\"<urn:uuid:3d7b3f89-2949-4c6f-9563-98f214130b3b>\",\"WARC-IP-Address\":\"142.3.156.40\",\"WARC-Target-URI\":\"http://mathcentral.uregina.ca/QQ/database/QQ.09.14/h/victoria1.html\",\"WARC-Payload-Digest\":\"sha1:BMFX3X3XMR6NI3URI6CYE4TWN4GJIAY6\",\"WARC-Block-Digest\":\"sha1:POPKNQ6OFE5DO43RQ7RSHAGYBC55PHFC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655143.72_warc_CC-MAIN-20230608204017-20230608234017-00368.warc.gz\"}"} |
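Penny's outline can be carried through to a number; a short Python sketch (the half-apex-angle triangle-area step is standard trigonometry, not taken from the linked Sela response):

```python
import math

def inscribed_polygon_area(n_sides, radius):
    """Area of a regular n-gon inscribed in a circle of the given radius.

    The polygon splits into n congruent isosceles triangles with apex
    angle 2*pi/n at the center; each has area (1/2) * r^2 * sin(2*pi/n).
    """
    apex = 2 * math.pi / n_sides
    return n_sides * 0.5 * radius**2 * math.sin(apex)

area = inscribed_polygon_area(5, 3)
print(round(area, 4))  # -> 21.3988
```

For Victoria's pentagon this is 5 × ½ × 3² × sin 72° ≈ 21.4 square units.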
https://answers.everydaycalculation.com/lcm/16-27

## What is the LCM of 16 and 27?

The LCM of 16 and 27 is 432.

#### Steps to find LCM

1. Find the prime factorization of 16: 16 = 2 × 2 × 2 × 2
2. Find the prime factorization of 27: 27 = 3 × 3 × 3
3. Multiply each factor the greater number of times it occurs in steps 1) or 2) above to find the LCM: LCM = 2 × 2 × 2 × 2 × 3 × 3 × 3
4. LCM = 432
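The same result can be checked programmatically; a minimal Python sketch using the identity lcm(a, b) = a·b / gcd(a, b) instead of explicit prime factorization:

```python
import math

def lcm(a, b):
    """Least common multiple via the gcd identity: lcm(a, b) = a*b // gcd(a, b)."""
    return a * b // math.gcd(a, b)

print(lcm(16, 27))  # -> 432
```

Since 16 and 27 share no prime factors, gcd(16, 27) = 1 and the LCM is simply 16 × 27 = 432.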
http://nonlinearsolve.sciml.ai/dev/basics/NonlinearProblem/

# Nonlinear Problems

SciMLBase.NonlinearProblem — Type

Defines a nonlinear system problem. Documentation page: https://nonlinearsolve.sciml.ai/dev/basics/NonlinearProblem/

Mathematical Specification of a Nonlinear Problem

To define a Nonlinear Problem, you simply need to give the function $f$ which defines the nonlinear system:

$$f(u,p) = 0$$

and an initial guess u₀ of where f(u,p)=0. f should be specified as f(u,p) (or in-place as f(du,u,p)), and u₀ should be an AbstractArray (or number) whose geometry matches the desired geometry of u. Note that we are not limited to numbers or vectors for u₀; one is allowed to provide u₀ as arbitrary matrices / higher-dimension tensors as well.

Problem Type

Constructors

    NonlinearProblem(f::NonlinearFunction,u0,p=NullParameters();kwargs...)
    NonlinearProblem{isinplace}(f,u0,p=NullParameters();kwargs...)

isinplace optionally sets whether the function is in-place or not. This is determined automatically, but not inferred.

Parameters are optional, and if not given, then a NullParameters() singleton will be used, which will throw nice errors if you try to index non-existent parameters. Any extra keyword arguments are passed on to the solvers. For example, if you set a callback in the problem, then that callback will be added in every solve call.

For specifying Jacobians and mass matrices, see the NonlinearFunctions page.

Fields

- f: The function in the problem.
- u0: The initial guess for the steady state.
- p: The parameters for the problem. Defaults to NullParameters.
- kwargs: The keyword arguments passed on to the solvers.
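What a solver does with this f(u, p) = 0 specification can be sketched outside Julia as well; a minimal pure-Python Newton iteration (an illustration, not the NonlinearSolve.jl API) for a scalar residual with a finite-difference Jacobian:

```python
def solve_nonlinear(f, u0, p, tol=1e-12, max_iter=50, h=1e-7):
    """Newton iteration on a scalar residual f(u, p) = 0.

    The Jacobian is approximated by a forward finite difference,
    mimicking what a solver does when no Jacobian is supplied.
    """
    u = u0
    for _ in range(max_iter):
        r = f(u, p)
        if abs(r) < tol:
            break
        jac = (f(u + h, p) - r) / h  # finite-difference derivative
        u -= r / jac
    return u

# f(u, p) = u^2 - p with initial guess u0 = 1.0 and parameter p = 2.0
root = solve_nonlinear(lambda u, p: u**2 - p, 1.0, 2.0)
print(root)  # ≈ 1.41421 (sqrt(2))
```

The `(f, u0, p)` triple mirrors the NonlinearProblem fields above: residual function, initial guess, and parameters.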
https://zbmath.org/?q=an%3A1269.65063

# zbMATH — the first resource for mathematics

W-methods in optimal control. (English) Zbl 1269.65063

Summary: This paper addresses the consistency and stability of W-methods up to order three for nonlinear ordinary differential equation-constrained control problems with possible restrictions on the control. The analysis is based on the transformed adjoint system and the control uniqueness property. These methods can also be applied to large-scale partial differential equation-constrained optimization, since they offer an efficient way to compute gradients of the discrete objective function.

##### MSC:

65K10 Numerical optimization and variational techniques
49J15 Existence theories for optimal control problems involving ordinary differential equations
49M25 Discrete approximations in optimal control

##### Software:

DONLP2; CG_DESCENT; RODAS
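W-methods are linearly implicit (Rosenbrock-type) schemes: each stage is a linear solve against an approximate Jacobian rather than a full nonlinear solve, and they retain their order of consistency even with an inexact Jacobian. A minimal illustrative Python sketch of the simplest such step (not code from the reviewed paper), applied to a stiff scalar test problem:

```python
def linearly_implicit_euler(f, dfdy, y, h):
    """One step of the simplest W-method: solve (1 - h*J) k = f(y), then y_new = y + h*k.

    J = dfdy(y) may be any approximation to the Jacobian of f; for this
    scalar sketch the linear solve is just a division.
    """
    jac = dfdy(y)
    k = f(y) / (1.0 - h * jac)
    return y + h * k

# Stiff test problem y' = -50*y with exact solution y(t) = exp(-50*t)
y, h = 1.0, 0.1
for _ in range(10):
    y = linearly_implicit_euler(lambda u: -50.0 * u, lambda u: -50.0, y, h)
print(y)  # ~1.65e-08: decays monotonically although h*|lambda| = 5
```

Explicit Euler would diverge at this step size (amplification factor |1 − 5| = 4 per step), while the linearly implicit step contracts by 1/6 per step.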
https://www.intechopen.com/books/thermodynamics-physical-chemistry-of-aqueous-systems/thermodynamics-and-the-glass-forming-ability-of-alloys
Open access peer-reviewed chapter

# Thermodynamics and the Glass Forming Ability of Alloys

By Chengying Tang and Huaiying Zhou

Submitted: November 10th 2010. Reviewed: May 30th 2011. Published: September 15th 2011

DOI: 10.5772/20803

## 1. Introduction

Bulk metallic glasses (BMGs) have received a great deal of attention due to scientific and technological interest ever since the first successful synthesis of an amorphous phase in the Au–Si system in 1960 (Klement et al., 1960). A great deal of scientific effort has been devoted to identifying parameters that quantify the glass forming ability (GFA) of various alloy systems and compositions. As a result, many criteria, including the confusion rule and the deep eutectic rule, for evaluating the GFA of an amorphous alloy have been proposed. Among them, the criteria most commonly used are the supercooled liquid region ΔTx (= Tx − Tg, where Tg and Tx are the glass transition temperature and the crystallization temperature, respectively) (Inoue et al., 1993), the reduced glass transition temperature Trg (= Tg/Tl, where Tl is the liquidus temperature) (Turnbull, 1969) and the more recently defined parameters γ (= Tx/(Tg + Tl)) (Lu & Liu, 2002), δ (= Tx/(Tl − Tg)) (Chen et al., 2005), β [= Tx·Tg/(Tl + Tx)²] (Yuan et al., 2008), ϕ (= ΔTrg·(Tx/Tg)^0.143) (Fan et al., 2007), ω [= Tl(Tl + Tx)/(Tx(Tl − Tx))] (Ji & Pan, 2009), γc [= (3Tx − 2Tg)/Tl] (Guo, 2010), and so on. These criteria have generally proved to be useful parameters for evaluating the GFA of an amorphous alloy. In order to guide the design of alloy compositions with high GFA, Inoue et al.
(Inoue et al., 1998) and Johnson (Johnson, 1999) have proposed the following empirical rules: (I) multicomponent systems, (II) significant atomic size ratios above 12%, (III) negative heat of mixing and (IV) the deep eutectic rule based on the Trg criterion. However, Al-based metallic glasses with rare earth metal additions (Guo et al., 2000), rare earth (RE) based glasses and some binary BMGs such as the Zr-Cu and Ni-Nb alloys (Xia et al., 2006) provide important exceptions to this generality, because most of the above-mentioned GFA parameters and rules capable of identifying metallic glasses with high GFA are not applicable to these Al-based and RE-based amorphous systems. Furthermore, all the above parameters require the alloy to first be prepared in glassy form, so that the crystallization temperature Tx, the liquidus temperature Tl, and/or the glass transition temperature Tg can be measured. Hence, the above parameters are not predictive in nature, as they cannot predict a good glass forming composition without actually making that alloy and rapidly solidifying it into the glassy state. It is well known that crystallization is the only event that prevents the formation of an amorphous phase. Metallic glass formation is always a competing process between the undercooled melt and the resulting crystalline phases. The GFA of a melt is thus virtually determined by the stability of the undercooled melt and the competing crystalline phases. Thermodynamic analysis can be useful in evaluating the stability of the undercooled melt and the formation enthalpies of the crystalline phases.
So far, several attempts have been made successfully to investigate the GFA and predict the glass forming range (GFR) in several binary and ternary amorphous alloy systems, using a pure thermodynamic approach or a combined thermodynamic and kinetic approach.

From a thermodynamic point of view, there are generally the following methods for calculating the GFA and predicting the GFR of an alloy system. The first approach is based on the T0 curve, which has been used to predict the GFR in several binary and some ternary systems. The quality of these predictions depends critically on the accuracy of the thermodynamic description. The second method is based on the semi-empirical Miedema model, which has been successfully applied to calculate and predict the glass forming range of some binary and ternary systems. The third approach directly employs the calculation of the driving forces of crystalline phases (minimum driving force criterion) in a supercooled melt using a calculation of phase diagrams (CALPHAD) database. By employing the driving force criterion with the thermodynamic description obtained for the investigated system, the GFA and predicted GFR of an alloy system are determined by comparing the driving forces of the crystalline phases precipitating from the undercooled melt. This evaluation has been successfully used to assess the GFA of several binary and ternary systems. In particular, it can be used to analyze the GFA of alloy systems with unique glass forming ability, such as Al-based systems. Other thermodynamic considerations, such as suppression of the formation of intermetallic phases, have also been introduced.

From a combined thermodynamic and kinetic approach, the GFA of the alloys is evaluated by introducing thermodynamic quantities obtained from the CALPHAD method into Davies–Uhlmann kinetic formulations.
In this evaluation, by assuming homogeneous nucleation without pre-existing nuclei and following the simplest treatment based on Johnson-Mehl-Avrami isothermal transformation kinetics, time–temperature-transformation (TTT) curves are obtained, which are a measure of the time t for formation of a phase Φ with a minimum detectable mass of crystal as a function of temperature. The critical cooling rate (Rc) for glass formation, calculated on the basis of the TTT curves, is used to evaluate the glass-forming ability of a binary or ternary alloy. The calculated GFA results show good agreement with the experimental data in the compositional glass formation range of the investigated systems.

This chapter is intended to present systematically the methods and progress on the glass forming ability investigated by a thermodynamic approach or a combined thermodynamic and kinetic approach.

## 2. Calculation of GFA based on thermodynamic analysis

Usually, it is regarded that the formation of metallic glasses is controlled by two factors, i.e., the cooling rate and the composition of the alloy. The critical cooling rate, which is the most effective gauge of the GFA of an alloy, is hard to measure experimentally. Hence, a great deal of effort has been devoted to investigating the correlation between the GFA and the composition of glass forming alloys. Inoue et al. (Inoue et al., 1998) and Johnson (Johnson, 1999) proposed empirical rules to guide the element selection and compositional range of glass forming alloys. These rules have played an important role as a guideline for the synthesis of BMGs over the last decade. However, recent experimental results have shown that the "confusion principle" and the "deep eutectic rule" are not applicable to the Cu-Zr and Ni-Nb binary systems (Xia et al., 2006) or to Al-based ternary systems (Guo et al., 2000).
From a thermodynamic point of view, it is well known that crystallization is the only event that prevents the formation of an amorphous phase. During a melt-quenching process for metallic glass formation, the glass is exposed to crystallization competition from other crystalline phases in the undercooled melt between the liquidus temperature Tl and the glass transition temperature Tg. The GFA of a melt is thus virtually determined by the stability of the undercooled melt and the competing crystalline phases, which can be assessed by thermodynamic analysis. In this section, several GFA calculations based on thermodynamic analysis are introduced.

### 2.1. Calculation of the GFA of alloys based on the T0 curve

#### 2.1.1. Method

Generally, a glass can be formed during cooling when crystallization is avoided up to the occurrence of the glass transition. Thus, in order to predict the tendency to glass formation in a system and the composition regions where it is most probable, nucleation of crystals in the undercooled melt must be considered. The GFR will be the region of composition where nucleation of crystalline phases is least likely. Various models, with different levels of approximation, have been developed in the literature to analyze the GFA of alloys, as will be discussed in the following sections. The T0 curve is one of the approaches used to estimate the GFA of alloys.

A T0 curve is the locus of the compositions and temperatures where the free energies of two phases are equal. Thus, T0 curves can be calculated provided that the Gibbs free energies are known, i.e. an assessment of the system is available. The T0 curve between the liquid and a solid phase determines the minimum undercooling of the liquid for the partitionless formation of a crystalline solid with the same composition (Boettinger & Perepezko, 1993). Fig. 1 shows one example for a simple eutectic system. For alloys whose T0 curves plunge steadily to low temperatures (dashed lines in Fig.
1a), there will be no driving force for partitionless transformation in the composition region between them. If the equilibrium crystalline phases are not prone to nucleation, a glass can thus form. On the contrary, T0 curves that are only slightly depressed below the stable liquidus curves (dashed lines in Fig. 1b) make the alloy a good candidate for partitionless transformation to crystalline phases over the entire composition range.
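At fixed composition, finding a point on a T0 curve is a root-finding problem in temperature: locate where the Gibbs energies of the liquid and the solid cross. A minimal Python sketch, assuming hypothetical linear G(T) curves for the two phases (the numeric values are illustrative, not from any real assessment):

```python
def t0_temperature(g_liquid, g_solid, t_low, t_high, tol=1e-6):
    """Bisection for the T0 temperature where G_liquid(T) = G_solid(T).

    Assumes the solid is the stable (lower-G) phase at t_low and the
    liquid at t_high, so G_liquid - G_solid changes sign once in between.
    """
    lo, hi = t_low, t_high
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g_liquid(mid) - g_solid(mid) > 0:  # solid still stable: T0 lies above mid
            lo = mid
        else:                                 # liquid stable: T0 lies below mid
            hi = mid
    return 0.5 * (lo + hi)

# Toy linear free energies G = H - T*S (J/mol); the coefficients are made up.
g_liq = lambda t: 12000.0 - 22.0 * t
g_sol = lambda t: 2000.0 - 12.0 * t
print(t0_temperature(g_liq, g_sol, 300.0, 2000.0))  # -> ~1000.0 K
```

In a real calculation the two G(T) functions come from an assessed CALPHAD database, and the procedure is repeated over composition to trace the full T0 curve.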
Figure 1. Hypothetical T0 curves for a binary eutectic A–B system. (a) T0 curves drop to low temperature: glass formation is possible. (b) T0 curves intersect at low temperature: partitionless crystalline phase formation occurs (redrawn from Boettinger & Perepezko, 1993).

#### 2.1.2. Application of the T0 curve

Predictions of GFR based on T0 curves have been performed for several binary and some ternary systems. The construction of T0 curves for alloy glasses requires a precise knowledge of the thermodynamic properties of the supercooled liquid alloy and the introduction of the transition to the glassy state (Kim et al., 1998). The quality of these predictions depends critically on the accuracy of the thermodynamic description, and the introduction of the excess specific heat contribution is expected to improve the quality of the results (Palumbo & Battezzati, 2008). However, as pointed out by Schwarz and co-workers (Schwarz et al., 1987), some discrepancies have been observed between the predictions and experimental results. For example, even when using the most recent thermodynamic assessment (Kumar, 1996) to calculate T0 curves in the Cu–Ti system, the results are not in agreement with the reported experimental GFR. In fact, the T0 curves for the terminal solid solutions do not plunge at low temperatures as expected for glass forming systems (Kumar et al., 1996). Battezzati and co-workers (Battezzati et al., 1990) have shown that in the Cu–Ti system the contribution of the excess specific heat is essential for describing the glass forming ability. An excess specific heat contribution has also been considered in the Al–Ti system (Cocco et al., 1990) and the Fe–B system (Palumbo et al., 2001).

### 2.2. Calculation of the GFA of alloys based on Miedema's model

#### 2.2.1. Method

Miedema's model is an empirical theory for calculating the heat of mixing in various binary systems, both for the solid state (Miedema et al., 1975) and the liquid (Boom et al., 1976).
This model involves the calculation of the formation enthalpy of metallic glasses (amorphous phase) (ΔHamor), solid solutions (ΔHss), and intermetallic compounds (ΔHinter) according to the following equations (Bakker, 1988; Boer et al., 1988):

ΔHamor = ΔHchem(amor) + ΔHtopo        (1)

ΔHss = ΔHchem(ss) + ΔHelastic + ΔHstructure        (2)

ΔHinter = ΔHchem(inter)        (3)

where ΔHchem(amor) is the chemical mixing enthalpy of the amorphous state, ΔHtopo is the topological enthalpy of a glass, ΔHchem(ss) is the chemical mixing enthalpy of a solid solution, ΔHelastic is the elastic enthalpy of the solid solution calculated with the continuum elastic model proposed by Friedel (Friedel, 1954) and Eshelby (Eshelby, 1954 & 1956), ΔHstructure is the structure enthalpy induced by structural changes, and ΔHchem(inter) is the chemical mixing enthalpy of an intermetallic compound. The formation enthalpy ΔHinter of a composition between two adjacent intermetallic compounds can be calculated using the lever principle.

The chemical contribution to the enthalpy of mixing of a solid solution can be written as

ΔHchem = xA·xB·[xA·ΔH(ss, B in A) + xB·ΔH(ss, A in B)]        (4)

where xA and xB represent the mole fractions of A and B atoms and ΔH(ss, i in j) is the enthalpy of solution of one element in another at infinite dilution. The data have been taken from Niessen et al. (Niessen et al., 1983).

The elastic term in the enthalpy of formation originates from the atomic size mismatch, and can be expressed as

ΔHelastic = xA·xB·[xA·ΔH(elastic, B in A) + xB·ΔH(elastic, A in B)]        (5)

The ΔH(elastic, i in j) has been obtained by using the formalism of Simozar and Alonso (Simozar & Alonso, 1984) as

ΔH(elastic, i in j) = 2μj·(Vi − Vj)² / [Vj·(3 + 4·μj·Ki)]        (6)

where μj is the shear modulus of the solvent, Vi and Vj are the molar volumes of the solute and the solvent, respectively, and Ki is the compressibility of the solute.

The structural contribution to the enthalpy of a solid solution originates from the valence and the crystal structure of the solute and the solvent atoms.
It is found to have a very minor contribution and it is difficult to calculate. Hence, the structural contribution to the enthalpy has usually been neglected (Basu et al., 2008). In the case where the elastic and structural contributions are absent, the formation enthalpy of glasses can be calculated as

ΔHamor = ΔHchem(amor) + 3.5·Σ(i=1..n) xi·Tm,i        (7)

where xi represents the mole fraction of component i and Tm,i is the melting temperature of component i.

According to Miedema's model, an amorphous phase can be formed if the enthalpy of formation of the amorphous phase is less than that of the solid solution phase. The heat of formation in alloys generally arises from the interactions among the constituent atoms, where the interfacial energy plays a major role. The interfacial energy mainly comes from the atomic size difference. It has also been postulated that the number of intermetallic phases appearing in an alloy system is a strong function of the heat of mixing: the number of intermetallic phases increases with increasing heat of mixing. This model can be directly used to determine the glass forming range in binary alloy systems and can be extended to ternary systems by neglecting the ternary interactions.

#### 2.2.2. Calculation of the GFA for binary alloy systems

Since the metallic glass formation process is controlled by thermodynamic factors, Miedema's model was first used to predict the composition range of amorphous binary transition metal alloys (Kolk et al., 1988; Coehoorn et al., 1988; Murty et al., 1992; Basu et al., 2008). It is found that the predicted glass forming composition ranges are in good agreement with the experimental results. In the work of Takeuchi and Inoue (Takeuchi & Inoue, 2000), this approach was used to calculate the mixing enthalpy and mismatch entropy of a number of bulk metallic glass alloy systems.
It has been observed that the mixing enthalpy and normalised mismatch entropy for glass forming alloys vary within a certain range.
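Once the model parameters are tabulated, the enthalpy expressions above reduce to simple arithmetic. A minimal Python sketch of the amorphous-phase enthalpy, using the same functional form as Eq. (4) for the chemical term plus the topological term of Eq. (7); the infinite-dilution enthalpies and melting points below are hypothetical, not tabulated Miedema data:

```python
def miedema_amorphous_enthalpy(x_a, dh_a_in_b, dh_b_in_a, tm_a, tm_b):
    """Formation enthalpy of the amorphous phase for a binary A-B alloy.

    Chemical term:    x_A * x_B * [x_A * dH(B in A) + x_B * dH(A in B)]
    Topological term: 3.5 * sum_i x_i * T_m,i   (J/mol with T in K)
    All parameter values passed in are illustrative placeholders.
    """
    x_b = 1.0 - x_a
    dh_chem = x_a * x_b * (x_a * dh_b_in_a + x_b * dh_a_in_b)
    dh_topo = 3.5 * (x_a * tm_a + x_b * tm_b)
    return dh_chem + dh_topo

# Hypothetical numbers: enthalpies in J/mol, melting points in K
h = miedema_amorphous_enthalpy(0.5, dh_a_in_b=-40000.0, dh_b_in_a=-30000.0,
                               tm_a=1200.0, tm_b=2100.0)
print(round(h, 1))  # -> -2975.0 (J/mol)
```

Scanning `x_a` over [0, 1] and comparing the result with the corresponding solid solution enthalpy is exactly the composition-range construction shown in Fig. 2 below.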
Figure 2. Enthalpy-composition curves for binary Ti–Ni, Zr–Ni, Hf–Ni and Ti–Cu alloy systems (a–d). The curve with (•) and the curve with (Δ) represent the amorphous and solid solution phases, respectively. The enthalpy values are in J/mol (from Basu et al., 2008).

As shown in Fig. 2, in the work of Basu et al. (Basu et al., 2008), the glass forming range (GFR) was determined for the different binary systems (Ti–Ni, Zr–Ni, Hf–Ni, Ti–Cu, Zr–Cu, Hf–Cu) in (Zr, Ti, Hf)–(Cu, Ni) alloys based on mixing enthalpy and mismatch entropy calculations. Though copper and nickel appear next to each other in the periodic table, the glass forming ability of the copper- and nickel-bearing alloys is different. Thermodynamic analysis reveals that the glass forming behaviour of Zr and Hf is similar, whereas it differs from that of Ti. The smaller atomic size of Ti and the difference in the heat of mixing of Ti, Zr and Hf with Cu and Ni lead to the observed changes in the glass forming behaviour. Enthalpy contour plots can be used to distinguish the glass forming compositions on the basis of the increasing negative enthalpy of the composition. This method reveals the high glass forming ability of the binary Zr–Cu, Hf–Cu and Hf–Ni systems over a narrow composition range.

In the recent work performed by Xia et al. (Xia et al., 2006), it is considered that the GFA for formation of the metastable amorphous state includes two aspects: (1) the driving force for glass formation, i.e., −ΔHamor, and (2) the resistance of the glass to crystallization, i.e. the difference between the driving forces for glass and intermetallic compound formation, ΔHamor − ΔHinter. When two glass forming alloys have the same −ΔHamor but different ΔHamor − ΔHinter, their GFA is dominated by ΔHamor − ΔHinter. The lower the value of ΔHamor − ΔHinter, the higher the GFA of the alloy.
On the other hand, when two glass forming alloys have the same ΔHamor − ΔHinter but different ΔHamor, their GFA is dominated by −ΔHamor: the higher the value of −ΔHamor, the better the GFA. Since the contribution from the entropies is much smaller compared with that from the formation enthalpy of solid compounds (Delamare et al., 1994), the GFA is expressed in terms of the formation enthalpies alone. Based on this thermodynamic consideration, a new parameter γ* to evaluate the GFA for glass formation was proposed by Xia et al. (Xia et al., 2006) and expressed as

γ* = GFA ∝ ΔHamor / (ΔHinter − ΔHamor)        (8)

where ΔHamor and ΔHinter are the enthalpies for glass and intermetallic formation, respectively. Both ΔHamor and ΔHinter are calculated by Miedema's macroscopic atom model.

This parameter has been successfully used to predict the GFR and the best GFA alloy compositions in the Zr-Cu and Ni-Nb systems by comparing the values of γ* across the composition range (Xia et al., 2006).
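With Miedema-model enthalpies in hand, Eq. (8) is a one-line comparison between candidate compositions. A sketch with hypothetical enthalpy values (J/mol; illustrative only, not computed from the Miedema tables):

```python
def gamma_star(dh_amor, dh_inter):
    """GFA indicator of Eq. (8): gamma* = dH_amor / (dH_inter - dH_amor).

    Both enthalpies are negative; a more negative dh_amor (larger driving
    force) or a smaller gap dh_amor - dh_inter (larger resistance to
    crystallization) raises gamma*.
    """
    return dh_amor / (dh_inter - dh_amor)

# Hypothetical enthalpies (dH_amor, dH_inter) for two candidate compositions
candidates = {"alloy_1": (-20000.0, -30000.0), "alloy_2": (-25000.0, -28000.0)}
best = max(candidates, key=lambda k: gamma_star(*candidates[k]))
print(best)  # alloy_2: stronger driving force and smaller gap to the compound
```

Evaluating `gamma_star` over a composition grid and taking the maximum is exactly the construction behind the γ* curves in Fig. 3.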
null,
"Figure 3.Calculated dependence of the parameter γ* on Zr and Ni concentration in Cu-Zr (a) and Ni-Nb (b) binary alloys, respectively (fromXia, et al., 2006).\n\nFig. 3 shows the calculated dependence of the parameter γ* on Zr and Ni concentration in Cu-Zr (a) and Ni-Nb (b) binary alloys, respectively, suggesting that the alloys Cu64Zr36 and Cu50Zr50 in Cu-Zr system, and Ni61.5Nb38.5 in Ni-Nb system are the best glass former, respectively. These predicted results are in good agreement with the experimentally reported Cu64.5Zr35.5 and Cu50Zr50, and Ni62Ni38 that could be made into bulk metallic glass rods with 2 mm in diameter, indicating that γ* is an effective parameter in identifying the best glass former in the Zr-Cu and Ni-Nb binary system.\n\nSimilarly, considering both the stability of liquid employing ΔHliq/ΔHinter, and the competition of glass and crystal using ΔHamor/ΔHinter, Ji et al. (Ji, et al., 2009) proposed a new parameter γ’ of GFA as\n\nγ'=GFAΔHliqΔHamor(ΔHinter)2E9\n\nAs Ji et al. described, this parameter γ’ is not only verified in five different binary bulk metallic glasses (Cu–Hf, Ni–Nb, Cu–Zr, Ca–Al, Pd–Si) but also showed wider application range comparing with the former model, but also have a better GFA estimation on the different composition than the parameter γ* because it is including ΔHliq in the evaluation expression. The predicated results are in good agreement with the experiments in all five different kinds of binary BMG systems and the biggest deviation of the peak of γ’ from the best current GFA composition is only about 6 at.% in Ca–Al alloy. Comparing with former GFA parameter γ*, γ’ takes account of liquid stability and shows more universal for evaluation GFA in different kinds of binary alloys (Ji, et al., 2009). Recently, Wang et al. also made a modification to Xia’s proposal and it works more convenient to describe the GFA of transition metal systems (Wang, et al., 2009).\n\n#### 2.2.3. 
Calculation of the GFA for the multicomponent alloy systems

Miedema's approach has been extensively used by Nagarajan and Ranganathan (Nagarajan & Ranganathan, 1994), Takeuchi and Inoue (Takeuchi & Inoue, 2001 & 2004) and other researchers (Murty, et al., 1992; Rao, et al., 2007; Basu, et al., 2008; Wang & Liu, 2009; Sun, et al., 2010) to determine the glass forming composition range (GFR) in a number of ternary and multicomponent systems. In the work performed by Takeuchi and Inoue (Takeuchi & Inoue, 2001), the amorphous-forming composition range (GFR) was calculated for 338 ternary amorphous alloy systems on the basis of the database given by Miedema's model, in order to examine the applicability of the model, to analyze the stability of the amorphous phase, and to determine the dominant factors influencing the ability to form an amorphous phase. The mixing enthalpies of the amorphous and solid solution phases were expressed as a function of alloy composition on the basis of chemical enthalpy. The GFR was calculated for 335 systems, the exceptions being the Al-Cu-Fe, Al-Mo-Si and Au-Ge-Si systems. The calculated results are in agreement with the experimental data for Cu-Ni- and Al-Ti-based systems. For typical amorphous alloy systems exemplified by the Zr-, La-, Fe- and Mg-based systems, it was recognized that the calculated GFR was overestimated as a result of the simplifications in the model. It is found that the elastic enthalpy term arising in a solid solution phase stabilizes the amorphous phase, and this stabilization mechanism is particularly notable in Mg-based amorphous alloy systems. Short-range order plays an important role in the formation of Al-, Fe- and Pd-metalloid based systems (Takeuchi & Inoue, 2001).

Based on Miedema's model and Alonso's method, the glass forming ability/range (GFA/GFR) of the Fe–Zr–Cu system was studied by thermodynamic calculation.
It is found that when the atomic concentration of Zr is between 34% and 56%, no matter what the atomic concentrations of Fe and Cu are, an amorphous phase can be obtained, so the atomic mismatch plays the dominant role in influencing the GFA. When the atomic concentration of Zr is outside this range, the GFA is strongly influenced by the immiscibility between Fe and Cu (Wang & Liu, 2009).

The glass forming composition range for the ternary Zr–Ti–Ni, Zr–Hf–Ni, Ti–Hf–Ni, Zr–Ti–Cu, Zr–Hf–Cu and Ti–Hf–Cu systems has been determined by extending Miedema's model to ternary alloy systems and neglecting the ternary interaction parameter (Basu, et al., 2008). In these calculations, the solid pure metals were chosen as the standard state and their enthalpy was assigned to be zero. It is seen that the glass forming composition range for the Ni-bearing alloys is larger than that of the Cu-bearing alloys, as the heat of mixing of Ni with Ti, Zr and Hf is higher than that of Cu. In these ternary (Zr, Ti, Hf)–(Cu, Ni) alloys the mixing enthalpy and mismatch entropy vary between −13 and −42 kJ/mol and between 0.13 and 0.25, respectively, which is within the range predicted for glass formation (Basu, et al., 2008).

In the work of Oliveira et al., the γ* parameter proposed by Xia et al. was extended to the ternary Al–Ni–Y system. The calculated γ* isocontours in the ternary diagram are compared with experimental results of glass formation in that system. Despite some misfitting, the best glass formers are found quite close to the highest γ* values, leading to the conclusion that this thermodynamic approach can be extended to ternary systems, serving as a useful tool for the development of new glass-forming compositions (Oliveira et al., 2008).

Rao et al. (Rao et al.
2007) identified the composition with the highest glass forming ability in Zr-Ti-Ni-Cu-Al quinary systems, using the Gibbs-energy change between the amorphous and solid solution phases as the thermodynamic parameter, by calculating the Gibbs-energy change with the help of the Miedema, Miracle, mismatch entropy and configurational entropy models. ΔG shows a strong correlation with the reduced glass transition temperature (Tg/Tl) in Zr-based metallic glasses. Thus, ΔG can be used as a predictive GFA parameter to identify compositions with the highest GFA. The compositions with the highest GFA have been identified in a number of quinary systems by drawing iso-Gibbs-energy-change contour maps and representing the quinary systems as quasi-ternary (pseudo-ternary) ones (Rao et al. 2007). Attempts have been made to correlate the Gibbs-energy change with different existing glass forming criteria, and it is found that this thermodynamic parameter correlates well with the reduced glass transition temperature. Further, encouraging correlations have been obtained between the energy required for amorphization during mechanical alloying and the Gibbs-energy change between the amorphous and solid solutions.

### 2.3. Calculation of the GFA of alloys based on the driving force criterion

#### 2.3.1. Method

The basic concept underlying the prediction of alloy compositions with high GFA using the thermodynamic approach is that compositions exhibiting local melting-point minima favour amorphous phase formation. The driving force criterion is based on a different concept. During a melt-quenching process for metallic glass formation, glass formation is exposed to competition from the crystallization of other crystalline phases from the undercooled melt between the liquidus temperature Tl and the glass transition temperature Tg.
It is well known that crystallization is the only event that prevents the formation of the amorphous phase. Considering that crystallization usually proceeds through nucleation and growth, high GFA can be predicted inversely by searching for conditions under which the nucleation and growth of crystalline phases are retarded. There are three dominating factors in the kinetics: (i) the chemical driving force; (ii) the interfacial energy, acting as an energy barrier, between the amorphous phase and the crystalline phases; and (iii) the atomic mobility for rearrangement or transport of the partitioning atoms. According to classical nucleation theory, the driving force for formation of the crystalline phases and the interfacial energy, among other things, affect the nucleation rate of the product phases. The interfacial energy between liquid and crystalline phases is known to be small compared with surface energy or grain boundary energy (Porter & Easterling, 1992), and therefore the role of interfacial energy in the nucleation kinetics of crystalline phases is small. The driving force of formation then becomes the major factor affecting the nucleation kinetics of crystalline phases from amorphous alloy melts. It is believed that alloys with a lower driving force for the formation of crystalline phases in the supercooled liquid state have a higher GFA within the glass forming range. Therefore, Kim and co-workers proposed the minimum driving force criterion as a new thermodynamic calculation scheme to evaluate the composition dependence of the GFA (Kim et al., 2004). The driving force for the crystalline phases can be calculated using critically assessed thermodynamic parameters obtained by the CALPHAD method (Kaufman & Bernstein, 1970). In the CALPHAD method, the Gibbs energies of the individual phases are described using thermodynamic models, and the model parameters are optimized against the relevant experimental information on phase equilibria and other thermodynamic properties.
The calculation of phase equilibrium is performed based on the minimum Gibbs energy criterion.

#### 2.3.2. Application of the driving force criterion

The driving force criterion has been successfully used to explain the composition dependence of GFA in several glass forming alloys with unique GFA, such as the Cu-Zr-Ti (Kim, et al., 2004), Mg-Cu-Y (Kim, et al., 2005), Al-Ce-Ni (Tang, et al., 2010), and Al-Cu-Zr (Bo, et al., 2010) systems, by calculating the driving force for the formation of crystalline phases in the metastable supercooled liquid state and searching for the local minima of the driving forces for crystallization. The calculated results are in good agreement with the experimental results. It has been indicated that the driving force criterion can be used as a new thermodynamic scheme to estimate the composition dependence of GFA in multicomponent alloy systems for the development of bulk amorphous alloys.
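The minimum-driving-force search can be sketched as follows. This is an illustrative toy, not from the source: the phase names and driving-force values are invented stand-ins for CALPHAD-computed Gibbs-energy differences. The GFA is expected to peak where the largest driving force among the competing crystalline phases is locally minimal along the composition line.

```python
# Illustrative sketch of the minimum-driving-force criterion. The phase names
# and driving-force values below are invented stand-ins for CALPHAD-computed
# Gibbs-energy differences; GFA is expected to peak where the largest driving
# force among the competing crystalline phases is locally minimal along the
# composition line.

def best_gfa_compositions(x, driving_forces):
    """x: composition grid; driving_forces: {phase: [dG at each x]} (J/mol).
    Returns compositions at local minima of the max-over-phases envelope."""
    envelope = [max(vals[i] for vals in driving_forces.values())
                for i in range(len(x))]
    return [x[i] for i in range(1, len(x) - 1)
            if envelope[i] < envelope[i - 1] and envelope[i] < envelope[i + 1]]

x = [0, 5, 10, 15, 20, 25, 30]   # e.g. at.% Ti along a Cu55Zr45-xTix-like line
dG = {                            # hypothetical driving forces, J/mol
    "Zr-rich phase": [400, 600, 900, 1200, 1500, 1800, 2100],
    "Ti-rich phase": [2100, 1800, 1500, 1200, 900, 600, 400],
    "middle phase":  [800, 1000, 1300, 1600, 1300, 1000, 800],
}
print(best_gfa_compositions(x, dG))   # two local minima along the line
```

With the crossing driving-force curves chosen here, the envelope dips twice, mirroring the two-minima behaviour reported for Cu55Zr45–xTix in Fig. 4.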
Figure 4. Calculated driving forces of crystalline phases for Cu55Zr45–xTix alloys, versus Ti content at (a) 1073 K, (b) 973 K, and (c) 873 K (from Kim, et al., 2004).

For the Cu-Zr-Ti system, among the series of ternary alloys Cu60Zr40–xTix (x = 10, 20, 30), the alloy with the highest GFA should be Cu60Zr20Ti20 according to the maximum Trg criterion (Turnbull, 1969), while experiments (Inoue, et al., 2001) show that it is Cu60Zr30Ti10. Although other alloys based on the Cu–Ti–Zr ternary system but in different regions, Cu55Ti35Zr10 (Lin & Johnson, 1995) and Cu47Ti33Zr11Ni8Si1 (Choi, et al., 1998), have been published as alloys with high GFA, there is no empirical rule or factor that can explain why high GFA is obtained at these particular compositions in the Cu–Ti–Zr system (roughly Zr:Ti = 3:1 and Zr:Ti = 1:3). As the thermodynamic parameters for all phases in this system had already been obtained by the CALPHAD method, the GFA was estimated by calculating the driving forces of all crystalline phases in the undercooled liquid state. Fig. 4 shows the calculated driving forces of the individual crystalline phases as a function of Ti content in the temperature range (600–800 °C) where the alloys correspond to supercooled liquids. As shown in this figure, along the composition line Cu55Zr45–xTix with varying Ti content, the driving forces of the crystalline phases show two local minima, one in the Zr-rich region (x = 7–10) and the other in the Ti-rich region (x = 28–29). According to the driving force criterion, the two local minimum points in Fig. 4 are the compositions where the GFA is expected to be higher than in other compositional regions. In the sense that the Zr:Ti ratios at the two local minimum points are roughly 3:1 and 1:3, it can be said that the former is close to Inoue's composition and the latter is close to Johnson's composition (Kim et al., 2004).
This finding indicates that the composition dependence of the GFA in the Cu-Zr-Ti ternary alloy system can be explained by calculating the driving forces for the formation of crystalline phases in the metastable supercooled liquid state and searching for the local minima of the driving forces for crystallization (Kim, et al. 2004).

Similarly, Al-based amorphous alloys, discovered in 1988 (He, et al., 1988; Inoue, et al., 1988), are also of particular interest because of their low density, good bending ductility and high tensile strength. It was found, however, that most of the above-mentioned parameters and rules for identifying metallic glasses with high GFA are not applicable to Al-based amorphous alloys (Guo, et al., 2000; Hackenberg, et al., 2002; Gao, et al., 2003; Zhu, et al., 2004). The Al–Ce–Ni system is a unique Al-based system, which can be synthesized into a strong, flexible metallic glass with the widest GFR, covering 2–15 at.% Ce and 1–30 at.% Ni (Inoue, 1998; Kawazoe et al., 1997). The alloys with high GFA are situated away from the eutectic point. Experimental results for Al–Ce–Ni bulk amorphous alloys prepared by copper mold casting indicate that amorphous sheets 5 mm wide and 0.2 mm thick are obtained for the Al86Ce4Ni10 and Al88Ce6Ni6 alloys without an appreciable glass transition. On the contrary, the alloys Al82Ce8Ni10 and Al80Ce6Ni14, with ΔTx values of 20 and 21 K, consist mainly of crystalline phases (Inoue, 1998). After a thermodynamic assessment of the Al-Ce-Ni system in the Al-rich corner was performed, a set of consistent thermodynamic parameters was obtained, and the thermodynamic properties of the Al-Ce-Ni amorphous alloys were calculated. The calculated results indicated that the alloys with high GFA in the Al–Ce–Ni system are far from the eutectic point, and that the heats of mixing are from −15 to −49 kJ/mol of atoms for the observed amorphous alloys (Tang et al., 2010).

As shown in Fig.
5, the relatively smaller nucleation driving forces for the formation of crystalline phases in the Al–10Ce based alloys (Fig. 5a) are generally indicative of their higher GFA, with a reportedly wider GFR (1–30 at.% Ni) (Kawazoe et al., 1997). In contrast, the relatively larger driving forces in the Al–10Ni based alloys (Fig. 5b) are associated with their poorer GFA and narrower GFR (2–10 at.% Ce) (Kawazoe et al., 1997). This finding is further confirmed by the melt spinning (Tang, et al., 2010) and copper mold casting (Inoue, 1998) experimental results.

Based on the experimental enthalpies of mixing of ternary liquid and undercooled liquid alloys, as well as the evaluated isothermal sections, the Al–Cu–Zr ternary system has been assessed using the CALPHAD method. Most of the calculated results show good agreement with the experimental thermodynamic data and the reported phase diagrams. By employing the driving force criterion with this thermodynamic description, the observed glass-forming ability in the Al–Cu–Zr system can be accounted for satisfactorily (Bo, et al., 2010).
Figure 5. Calculated normalized nucleation driving force (per mole of atoms) for crystalline phases from undercooled (a) Al–10Ce–Ni, (b) Al–Ce–10Ni and (c) Al-6Ce-Ni metastable liquids at 800 °C (from Tang, et al., 2010).

### 2.4. Calculation of the GFA of alloys based on other thermodynamic approaches

By treating the glass transition as a second-order phase transformation from the liquid phase (Palumbo, et al., 2001; Shao et al., 2005), which gives good predictions of all the important GFA indicators such as the reduced glass transition temperature and the thermodynamic stability of the amorphous phase, Shao et al. established a full thermodynamic database for glass forming ability (GFA). The resultant thermodynamic database can be used to produce all the major temperature-related GFA indicators, such as Tg/Tl, Tg/Tm and Tx/(Tg+Tl). Together with phase diagram prediction, such an extensive CALPHAD approach is a powerful tool for designing alloys with large GFA (Shao et al., 2005).

By using a computational thermodynamic approach exhibiting low-lying liquidus surfaces, coupled with Turnbull's reduced glass transition temperature criterion, regions of alloy composition suitable for experimental tests of glass formation in the Zr–Ti–Ni–Cu–Al system were identified rapidly by Cao et al. (Cao, et al., 2006). The glass forming ability of the alloys studied can be understood in terms of the relative liquidus temperature in a thermodynamically calculated temperature vs. composition section through a multicomponent phase diagram; it does not follow several other proposed thermodynamic or topological criteria.

A thermodynamic parameter (ΔHchem × Sσ/kB), applied in the configurational entropy (Sconfig/R) range of 0.8-1.0, has been developed to identify excellent BMG compositions using the enthalpy of chemical mixing (ΔHchem), the mismatch entropy normalized by Boltzmann's constant (Sσ/kB) and the configurational entropy (Sconfig/R) by Bhatt et al.
and it has been demonstrated for the Zr-Cu-Al based ternary system. It is found that this approach can predict the best BMG composition more closely than the earlier models (Bhatt, et al., 2007).

Based on the undercooling theory resulting from the existence of multicomponent chemical short-range order (CSRO) domains, the glass forming range (GFR) in the Zr-Ni-Ti alloy system was predicted by thermodynamic calculation. The GFR predicted by the thermodynamic calculation is consistent with the experimental results (Liu, et al. 2008).

One way to predict possible bulk glass forming compositions is phase diagram calculation with suppression of the formation of intermetallic phases. The formation of stoichiometric intermetallic compounds, which have atoms ordered on specific lattice sites, requires time for the rearrangement of atoms from the liquid state. Thus, the formation of intermetallic compounds can be suppressed during the fast solidification process normally applied in bulk glass production. Combining the obtained thermodynamic database with the above concept, an amorphous formation diagram of the Cu–Zr–Ag system, with all binary and ternary intermetallic phases suppressed, has been proposed by Kang and Jung (Kang & Jung, 2010).

## 3. Calculation of the GFA of alloys based on a combined thermodynamics and kinetics approach

As discussed above, the thermodynamic approach is useful since the thermodynamic parameters can be used to calculate the GFR in binary alloys and to predict the GFR in ternary systems based on the constituent binaries. One limitation of a purely thermodynamic approach is that it does not give the critical cooling rates for glass formation. A combined thermodynamic and kinetic treatment, based on time-temperature-transformation (TTT) curves in the manner of Uhlmann and Davies, has been presented (Saunders & Miodownik, 1986 & 1988).
This combined approach takes the thermodynamic parameters obtained from phase diagram calculations and derives values for the free energy barrier for nucleation, the free energy driving forces, and the melting points used in the kinetic equations. The combined approach has been successfully used to calculate the glass forming ability (GFA) of a wide range of binary and ternary alloy systems (Saunders & Miodownik, 1988). The calculated glass forming ranges for a wide number of binary and ternary alloy systems are in good agreement with experiment. A significant advantage of the combined approach is that data from binary alloy systems, often with little or no ternary modification, can be used to calculate the necessary thermodynamic input for the kinetic equations in higher order systems. This section briefly outlines the combined thermodynamic and kinetic method used for the calculation of the GFA of alloy systems.

### 3.1. Method

Critical cooling rates for glass formation can be obtained from Johnson-Mehl-Avrami isothermal transformation kinetics using the equation

X = 1 − exp[−(π/3) Iv Uc^3 t^4]    (E10)

where X is the volume fraction of material transformed, Iv is the nucleation frequency, Uc is the crystal growth rate, and t is the time taken to transform X. In the early stages of transformation the value of X approximates to

X ≈ (π/3) Iv Uc^3 t^4    (E11)

For homogeneous nucleation without pre-existing nuclei, the nucleation frequency Ivh is given by

Ivh = (Dn Nv/a0^2) exp(−ΔG*/kT)    (E12)

where Dn is the diffusion coefficient necessary for crystallisation, Nv is the number of atoms per unit volume, a0 is an atomic diameter, k is Boltzmann's constant, T is the transformation temperature, and ΔG* is the free energy barrier for nucleation of a spherical nucleus, given by the expression

ΔG* = (16π/3)(σ^3/Gv^2)    (E13)

where σ is the liquid/crystal interfacial energy and Gv is the change in free energy per unit volume on solidification.
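Eqs. (10)–(13) can be sketched numerically as below. This is not from the source: all numerical inputs are invented placeholders for a generic undercooled melt, chosen only to keep the exponentials in a sensible range.

```python
import math

# Illustrative sketch of Eqs. (10)-(13): the Johnson-Mehl-Avrami transformed
# fraction built from a homogeneous nucleation frequency and a nucleation
# barrier. All numerical inputs are invented placeholders for a generic
# undercooled melt.

k = 1.380649e-23  # Boltzmann constant, J/K

def barrier(sigma, Gv):
    """Eq. (13): DG* = (16*pi/3) * sigma^3 / Gv^2 (sigma in J/m^2, Gv in J/m^3)."""
    return (16.0 * math.pi / 3.0) * sigma**3 / Gv**2

def nucleation_frequency(Dn, Nv, a0, dGstar, T):
    """Eq. (12): Ivh = (Dn*Nv/a0^2) * exp(-DG*/kT), per m^3 per s."""
    return Dn * Nv / a0**2 * math.exp(-dGstar / (k * T))

def jma_fraction(Iv, Uc, t):
    """Eq. (10): X = 1 - exp(-(pi/3)*Iv*Uc^3*t^4)."""
    return 1.0 - math.exp(-(math.pi / 3.0) * Iv * Uc**3 * t**4)

# Hypothetical inputs:
T = 900.0                                        # K
dGstar = barrier(sigma=0.1, Gv=3e8)              # J
Iv = nucleation_frequency(Dn=1e-12, Nv=5e28, a0=2.8e-10, dGstar=dGstar, T=T)
X = jma_fraction(Iv, Uc=1e-6, t=1e-3)
print(dGstar, Iv, X)
```

Note how strongly X depends on the barrier: because ΔG* enters through exp(−ΔG*/kT), modest changes in σ or Gv shift the transformed fraction by orders of magnitude.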
An equation for Uc can be written as

Uc = (f Dg/a0)[1 − exp(−ΔGm/RT)]    (E14)

where Dg is the diffusion coefficient for the atomic motion necessary for liquid-to-crystal growth, ΔGm is the molar free energy driving force for liquid-to-crystal growth, R is the universal gas constant, and f is a structural constant denoting the fraction of sites on the interface where atoms may preferentially be added or removed, given by the following expression (Uhlmann, 1972)

f = 0.2(Tm − T)/Tm    (E15)

where Tm is the liquidus temperature. By assuming that Dn = Dg = the bulk liquid diffusivity, and invoking the Stokes-Einstein relationship between diffusivity and viscosity η, equations (12) and (14) can be combined to give the time t needed to form a volume fraction X of transformed crystalline phase in an undercooled liquid:

t ≈ (9.3η/kT){a0^9 X exp(ΔG*/kT)/(f^3 Nv [1 − exp(−ΔGm/RT)]^3)}^(1/4)    (E16)

where t is the time taken to transform a volume fraction X of crystalline solid, η is the viscosity of the liquid, a0 is an atomic diameter, f is a structural constant, Nv is the number of atoms per unit volume, ΔG* is the Gibbs energy barrier to nucleation and ΔGm is the Gibbs energy driving force for the liquid-crystal transformation. The constants have typically been taken as X = 10^-6, a0 = 0.28×10^-9 m, f = 0.1 and Nv = 5×10^28 atoms/m^3. In order to apply this equation to a real alloy system, it is necessary to derive or estimate the parameters η, ΔG*, and ΔGm (Saunders & Miodownik, 1988).

### 3.2. Estimation of η, ΔG*, and ΔGm

Since it is very difficult to measure the viscosity of a supercooled liquid experimentally, there have been few measurements of it. In this case, the viscosity between the liquidus temperature Tm and the glass transition Tg can generally be described using a Doolittle-type expression involving the relative free volume fT (Ramachandrarao, et al., 1977) as

η = A exp(B/fT)    (E17)

where

fT = C exp(−EH/RT)    (E18)

EH is the hole formation energy and A, B, and C are constants.
Because of the lack of experimental data, the EH value was estimated by means of a direct relationship with Tg (Ramachandrarao, et al., 1977). Assuming that B is unity and that fT and η are 0.03 and 10^12 N·s/m^2, respectively, at Tg, A and C have been approximated as 3.33×10^-3 and 10.1, respectively. If Tg values are not available, crystallisation temperatures Tx are used as a first approximation.
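The viscosity model of Eqs. (17)–(18) can be sketched with the constants quoted above (A = 3.33×10^-3, B = 1, C = 10.1). Back-solving EH from the condition fT(Tg) = 0.03 is my assumption for this illustration, and the Tg value used is hypothetical.

```python
import math

# Illustrative sketch of the Doolittle-type model, Eqs. (17)-(18):
# eta = A*exp(B/fT), fT = C*exp(-EH/(R*T)), with the constants quoted in the
# text (A = 3.33e-3, B = 1, C = 10.1, fT(Tg) = 0.03, eta(Tg) = 1e12 N*s/m^2).
# Back-solving EH from fT(Tg) = 0.03 is an assumption made here for the
# sketch; the Tg value itself is hypothetical.

R = 8.314                      # gas constant, J/(mol*K)
A, B, C = 3.33e-3, 1.0, 10.1   # constants from the text

def hole_formation_energy(Tg):
    """EH chosen so that fT(Tg) = 0.03, i.e. EH = -R*Tg*ln(0.03/C)."""
    return -R * Tg * math.log(0.03 / C)

def viscosity(T, Tg):
    """Eqs. (17)-(18): eta(T) in N*s/m^2, intended for Tg <= T <= Tm."""
    fT = C * math.exp(-hole_formation_energy(Tg) / (R * T))
    return A * math.exp(B / fT)

Tg = 700.0                     # hypothetical glass transition temperature, K
print(viscosity(Tg, Tg))       # ~1e12 by construction
print(viscosity(900.0, Tg))    # many orders of magnitude lower in the melt
```

The steep rise of η on cooling toward Tg is exactly what makes t in Eq. (16) grow so rapidly at low temperature, producing the lower branch of the TTT curve.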
Figure 6. The construction used in calculating the driving force, ΔGm, for the crystallization of compound AB2 from a liquid of composition x1 in the A-B system.

For the crystallization of the compound AB2 from a liquid of composition x1 in the A-B system (Fig. 6), the molar free energy driving force for liquid-to-crystal growth, ΔGm, represents the Gibbs energy required to form one mole of crystalline phase from the liquid of composition x1. It can be obtained from thermodynamic phase diagram calculations, which explicitly give the molar heat of fusion Hmf and the driving force used to evaluate ΔG*. Values of Hmf and ΔGm are calculated from the partial molar Gibbs energies of elements A and B and from free energy values. Therefore, ΔGm is expressed by the following equation

ΔGm = xA ḠA^L + xB ḠB^L − Gcryst    (E19)

where xA and xB are the mole fractions of elements A and B in the precipitating crystalline phase, respectively, ḠA^L and ḠB^L are the partial molar Gibbs energies of elements A and B in the liquid phase, respectively, and Gcryst is the integral free energy of the precipitating crystalline phase. The Gibbs energy functions in Eq. (19) can be obtained from the thermodynamic model parameters evaluated in the literature. In an A-B alloy system, a liquid of composition x1 becomes unstable with respect to the compound AB2 at the liquidus temperature Tm. At a given temperature T1, there is a driving force for the precipitation of the compound AB2 given by ΔG1 (Fig. 6), where ΔG1 is defined as the driving force to form one mole of compound AB2 in a liquid of composition x1. In all cases here ΔGm is equal to ΔG1. By using heats of formation in place of free energy values, Hmf can be similarly evaluated.

The Gibbs energy barrier to nucleation of a spherical nucleus ΔG* can be described as

ΔG* = (16π/3N)(σm^3/ΔGm^2)    (E20)

where N is Avogadro's number and σm is the molar liquid/crystal interfacial energy.
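The construction behind Eq. (19) can be illustrated with a toy liquid model. The regular-solution description, the interaction parameter and all numbers below are my invented assumptions, not the assessed CALPHAD Gibbs energies a real calculation would use.

```python
import math

# Illustrative sketch of Eq. (19): dGm = xA*GA_L + xB*GB_L - Gcryst for
# precipitating the compound AB2 from an undercooled liquid of composition x1.
# The regular-solution liquid model, the interaction parameter omega and all
# numerical values are invented assumptions; a real calculation would take
# every Gibbs energy from an assessed CALPHAD database.

R = 8.314  # gas constant, J/(mol*K)

def partial_gibbs_liquid(xB, omega, T):
    """Partial molar Gibbs energies (J/mol) of A and B in a regular-solution
    liquid, taken relative to the pure liquid elements."""
    xA = 1.0 - xB
    GA = omega * xB**2 + R * T * math.log(xA)
    GB = omega * xA**2 + R * T * math.log(xB)
    return GA, GB

def driving_force_AB2(x1_B, omega, T, G_cryst):
    """Eq. (19) for the compound AB2 (xA = 1/3, xB = 2/3 in the precipitate)."""
    GA, GB = partial_gibbs_liquid(x1_B, omega, T)
    return GA / 3.0 + 2.0 * GB / 3.0 - G_cryst

# Hypothetical inputs: liquid at xB = 0.60, omega = -40 kJ/mol, compound
# Gibbs energy -25 kJ/mol, T = 900 K.
dGm = driving_force_AB2(0.60, -40e3, 900.0, -25e3)
print(dGm)  # positive: there is a driving force to precipitate AB2
```

With the sign convention of Eq. (16), a positive ΔGm means the liquid lies above the compound in Gibbs energy, i.e. there is a driving force for precipitation.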
σm is directly related to the molar enthalpy of fusion Hmf and expressed as

σm = α Hmf    (E21)

where α is a proportionality constant. Hmf can be obtained in a similar way to ΔGm, based on bond energy values across the interface (Turnbull, 1950). Saunders and Miodownik empirically evaluated the constant α to be 0.41 (Saunders & Miodownik, 1988).

### 3.3. Calculation of critical cooling rates below T0 of disordered solid phases

The expression for t in equation (16) is derived assuming that the kinetics of the liquid-to-crystal transformation are limited by the bulk diffusivity, which is appropriate when the crystal composition differs from that of the liquid, or at compound compositions where substantial diffusion is necessary before the correct spatial relationships that define the ordered structure of the compound are achieved. However, at temperatures below the T0 temperature of a disordered solid solution phase, the liquid becomes unstable with respect to a molecularly simple phase of the same composition. Consequently, no long-range diffusion is necessary for the liquid-to-crystal transformation, and the kinetics are governed by atomic motions over distances of less than one atomic diameter. The transformation is then considered extremely difficult to suppress, and this forms the T0 criterion for GFA. In such cases, it has been suggested that the rate-limiting step for crystal growth is proportional to the rate at which atoms collide at the liquid/crystal interface, and an expression for the crystal growth rate is then given (Boettinger et al., 1984) by

Uc = f V0 [1 − exp(−ΔGm/RT)]    (E22)

where V0 is the velocity of sound in the liquid metal. This is the same form as equation (14), but with V0 replacing Dg/a0.
Replacing Dn/a0 in equation (12) with V0 and rearranging equations (12) and (14), an expression for t is derived as

t ≈ (1/V0){X a0 exp(ΔG*/kT)/(π f^3 Nv [1 − exp(−ΔGm/RT)]^3)}^(1/4)    (E23)

The value of V0 has been taken as 1000 m/s by Saunders and Miodownik (Saunders & Miodownik, 1988), close to the value used by Boettinger et al. (Boettinger et al., 1984), and no transformation is considered to occur below Tg.

From equations (10) to (23), the time-temperature-transformation (TTT) curve can be obtained. The critical cooling rate Rc necessary for amorphous phase formation by melt quenching can be evaluated from the calculated TTT curve and approximated as

Rc = (Tm − Tn)/(5 tn)    (E24)

where Tn and tn are the temperature and time at the nose of the TTT curve, respectively. Since the cooling rate calculated directly from the isothermal transformation curve is somewhat overestimated compared with that from the CCT (continuous cooling transformation) curve, the right side of equation (24) has been divided by a factor of 5 to emulate continuous cooling. In the composition range with Rc < 1×10^7 K/s, which has generally been taken as the maximum cooling rate available in melt quenching, amorphous phase formation should be possible.

### 3.4. Evaluation of glass forming ranges in alloy systems

The combined thermodynamic and kinetic approach has been used to evaluate the GFA of a wide number of binary and ternary alloy systems since the pioneering work of Saunders and Miodownik (Saunders & Miodownik, 1988; Shim et al., 1999; Clavaguera-Mora, 1995; Tokunaga, et al., 2004; Abe, et al., 2006; Ge, et al., 2008; Palumbo & Battezzati, 2008; Mishra & Dubey, 2009). These works calculated the free energy driving forces, the free energy barrier for nucleation and the melting points from thermodynamic databases, and employed these data in the kinetic calculations. There is good agreement between the predicted glass forming ranges and those experimentally observed.
It is indicated that the approach has the potential to predict glass forming ability in multicomponent alloys using mainly binary input data.

The first attempt to couple kinetic models with reliable thermodynamic data using the CALPHAD methodology was made by Saunders and Miodownik (Saunders & Miodownik, 1988). In their work, the combined thermodynamics and kinetics approach was presented in detail and used to evaluate the GFA of a wide range of binary (Au-Si, Pd-Si, Ti-Be, Zr-Be, Hf-Be, Cu-Ti, Co-Zr, Ni-Zr, Cu-Zr, Ni-P, Pd-P) and ternary (Ni-Pd-P, Cu-Pd-P, Co-Ti-Zr, Zr-Be-Hf, Ti-Be-Hf) alloy systems (Saunders & Miodownik, 1988). The TTT curves and the critical cooling rate for glass formation Rc were estimated. There is excellent agreement between the predicted and observed GFRs of the binary systems, apart from discrepancies in the Ti-Be and Cu-Ti systems. The approach was then extended to give predictions of critical cooling rates in ternary and multicomponent alloys using mainly binary information. The results indicate that the combined approach takes into account a number of the major effects that govern glass formation and has the potential to predict GFA in multicomponent systems (Saunders & Miodownik, 1988).

In the work performed by Ge et al. (Ge, et al., 2008), the glass forming ability (GFA) of nine compositions of Cu-Zr and thirteen of Cu-Zr-Ti alloys was evaluated, in terms of critical cooling rate and fragility, by combining the CALPHAD technique with the kinetic approach. The driving forces for crystallization from the undercooled liquid alloys were calculated using the Turnbull and Thompson-Spaepen (TS) approximate Gibbs free energy equations, respectively. As shown in Fig. 7, time-temperature-transformation (TTT) curves of these alloys were obtained with the Davies-Uhlmann kinetic equations based on classical nucleation theory.
With the Turnbull and TS equations, the critical cooling rates are calculated to be in the range of 9.78×10^3–8.23×10^5 K/s and 4.32×10^2–3.63×10^4 K/s, respectively, for the Cu-Zr alloys, and 1.38×10^2–7.34×10^5 K/s and 0.64–1.36×10^4 K/s, respectively, for the Cu-Zr-Ti alloys (Ge, et al., 2008).
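The route from Eq. (16) to the TTT nose and Eq. (24) can be sketched end to end. Everything below (the viscosity law, the driving force and the nucleation barrier) is an invented toy model, not the Ge et al. or Saunders-Miodownik thermodynamics; only the constants X, a0, f and Nv follow the values quoted in Section 3.1.

```python
import math

# Illustrative end-to-end sketch: Eq. (16) over a temperature grid gives a
# crude TTT curve, and Eq. (24), Rc = (Tm - Tn)/(5*tn), gives the critical
# cooling rate. The viscosity, driving-force and barrier models are invented
# toy forms; only X, a0, f and Nv follow the constants quoted in Section 3.1.

k = 1.380649e-23      # Boltzmann constant, J/K
R = 8.314             # gas constant, J/(mol*K)
X, a0, f, Nv = 1e-6, 0.28e-9, 0.1, 5e28   # constants quoted in Section 3.1
Tm, Tg = 1200.0, 700.0                    # hypothetical liquidus and Tg, K

def eta(T):
    """Made-up Vogel-Fulcher-like viscosity (N*s/m^2), rising sharply near Tg."""
    return 1e-3 * math.exp(15.0 * Tg / (T - 0.6 * Tg))

def dGm(T):
    """Made-up Turnbull-like molar driving force (J/mol), zero at Tm."""
    return 8000.0 * (Tm - T) / Tm

def dGstar(T):
    """Made-up nucleation barrier (J), diverging as T approaches Tm."""
    return 2.0 * k * T * (Tm / (Tm - T))**2

def t_transform(T):
    """Eq. (16): time to transform fraction X at temperature T."""
    core = a0**9 * X * math.exp(dGstar(T) / (k * T)) / (
        f**3 * Nv * (1.0 - math.exp(-dGm(T) / (R * T)))**3)
    return 9.3 * eta(T) / (k * T) * core**0.25

# TTT nose: the grid stops 100 K below Tm, where Eq. (16) diverges anyway.
temps = [float(T) for T in range(710, 1101, 10)]
tn, Tn = min((t_transform(T), T) for T in temps)
Rc = (Tm - Tn) / (5.0 * tn)               # Eq. (24)
print(f"nose: Tn = {Tn:.0f} K, tn = {tn:.3g} s, Rc = {Rc:.3g} K/s")
```

The C-shape of the TTT curve emerges naturally here: near Tm the diverging barrier ΔG* suppresses nucleation, while near Tg the diverging viscosity suppresses growth, so the shortest transformation time (the nose) lies in between.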
Figure 7. Calculated TTT curves of Cu-Zr (left) and Cu-Zr-Ti (right) alloys by (a) the Turnbull model and (b) the TS model (from Ge, et al., 2008).

Based on topological, kinetic and thermodynamic considerations, Yang et al. (Yang, et al., 2010) have discussed the existence of multiple GFA maxima in a single eutectic system, the Al–Zr–Ni system. It is apparent that, taken alone, none of the factors is able to fully explain the observed phenomenon. It is suggested that glass formation is an intricate balance of kinetic, thermodynamic and also topological factors. Perhaps in good glass formers all factors come to a consensus at one composition or one compositional zone, where the best glass former(s) are located; for marginal glass formers like Al-based alloys, however, each of these factors can point to a different alloy composition where conditions are best suited for glass formation.

Recently, considering chemical short-range ordering and the metastability of undercooled melts, Zhu and co-workers applied a simplified quasi-kinetic approach to predict the GFR in the binary Al-rare earth (Zhu, et al., 2004) and Al-based Al-Gd-Ni(Fe) ternary (Zhu, et al., 2004) systems, using CALPHAD databases. They derived an expression for the reduced time t' = t/tmin for the formation of a minimal quantity of crystalline solid, where t represents the composition-dependent time needed for the transformation and tmin the minimum transformation time at a certain optimum composition:

t' ∝ {exp(ΔG*/kT)/[1 − exp(−ΔGm/RT)]^3}^(1/4)    (E25)

This formula is in fact equivalent to Eqs. (10) to (23), except that the effect of parameters related to atomic transport is neglected. The calculated reduced times for the various solid crystalline phases are then used to predict the GFR, i.e. the region of composition space where these times are higher. Qualitatively satisfactory agreement is observed (Zhu, et al., 2004).
The ability to predict the GFR of candidate metallic glass systems indicates a simple but effective approach for reducing reliance on extensive experimental trial and error in the search for new metallic glass systems (Zhu, et al., 2004).\n\n## 4. Conclusion\n\nSearch for new bulk metallic glasses (BMGs) system or composition by predicting the GFA of an alloy system is of interesting and theoretical and practical significance. In this chapter, the progress on the calculation or predication the glass forming ability by thermodynamics approach or a combined thermodynamics and kinetics approach have been reviewed. It is found that a good agreement between the predicated glass forming ability and those experimentally observed has been obtained. It is indicated that the thermodynamic approach developed in the literature has proved useful to predict the glass forming ability of a number of alloys system. It has revealed that the combined thermodynamics and kinetics approach has the advantage to predict the glass forming ability of the multicomponent alloys using the reliable database of binary system assessed by CALPHAD method. It has been accepted that the thermodynamic approach and/or the combined thermodynamic and kinetic approach are effective ways for the prediction of the GFA of metallic glass alloys.\n\nchapter PDF\nCitations in RIS format\nCitations in bibtex format\n\n## How to cite and reference\n\n### Cite this chapter Copy to clipboard\n\nChengying Tang and Huaiying Zhou (September 15th 2011). Thermodynamics and the Glass Forming Ability of Alloys, Thermodynamics - Physical Chemistry of Aqueous Systems, Juan Carlos Moreno-Piraján, IntechOpen, DOI: 10.5772/20803. Available from:\n\n### chapter statistics\n\n3Crossref citations\n\n### Related Content\n\nNext chapter\n\nBy Bohdan Hejna\n\nFirst chapter\n\n#### Thermodynamics of Ligand-Protein Interactions: Implications for Molecular Design\n\nBy Agnieszka K. 
Bronowska\n\nWe are IntechOpen, the world's leading publisher of Open Access books. Built by scientists, for scientists. Our readership spans scientists, professors, researchers, librarians, and students, as well as business professionals. We share our knowledge and peer-reviewed research papers with libraries, scientific and engineering societies, and also work with corporate R&D departments and government entities."
] | [
null,
"https://www.intechopen.com/media/chapter/20155/media/image2.jpeg",
null,
"https://www.intechopen.com/media/chapter/20155/media/image11.png",
null,
"https://www.intechopen.com/media/chapter/20155/media/image13.png",
null,
"https://www.intechopen.com/media/chapter/20155/media/image15.jpeg",
null,
"https://www.intechopen.com/media/chapter/20155/media/image16.png",
null,
"https://www.intechopen.com/media/chapter/20155/media/image35.jpeg",
null,
"https://www.intechopen.com/media/chapter/20155/media/image53.jpeg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91870093,"math_prob":0.9300013,"size":36831,"snap":"2021-21-2021-25","text_gpt3_token_len":8461,"char_repetition_ratio":0.18258344,"word_repetition_ratio":0.062239945,"special_character_ratio":0.21180527,"punctuation_ratio":0.10906682,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9630499,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-17T20:07:43Z\",\"WARC-Record-ID\":\"<urn:uuid:17a652f5-4011-4a54-8528-3246614a5b9b>\",\"Content-Length\":\"708498\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c1257a45-c549-4f81-bd2d-dc1167cb144c>\",\"WARC-Concurrent-To\":\"<urn:uuid:556e1144-34e0-4ad9-9452-d218a593ee55>\",\"WARC-IP-Address\":\"35.171.73.43\",\"WARC-Target-URI\":\"https://www.intechopen.com/books/thermodynamics-physical-chemistry-of-aqueous-systems/thermodynamics-and-the-glass-forming-ability-of-alloys\",\"WARC-Payload-Digest\":\"sha1:IYNE6TBI6IC4TOGOPB52LIRINABV5YH2\",\"WARC-Block-Digest\":\"sha1:I5JF5UXYQ5RPE2ON5ZU4IXUMNNBQXWA4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243992440.69_warc_CC-MAIN-20210517180757-20210517210757-00598.warc.gz\"}"} |
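The reduced-time expression in the chapter above (Eq. 25) can be evaluated numerically. The sketch below assumes the common JMAK-style reading of that formula, t′ ∝ (exp(ΔG*/kT) · [1 − exp(−ΔGm/RT)]⁻³)^(1/4); the function name and the dimensionless barrier/driving-force inputs are illustrative, not taken from the chapter.

```python
import math

def reduced_time(dG_star_over_kT, dGm_over_RT):
    """JMAK-style reduced transformation time (up to a constant factor).

    Assumes t' ~ (exp(dG*/kT) * [1 - exp(-dGm/RT)]**-3)**(1/4): time grows
    with the nucleation barrier and shrinks as the growth-rate factor rises.
    """
    growth = 1.0 - math.exp(-dGm_over_RT)  # growth-rate factor in [0, 1)
    return (math.exp(dG_star_over_kT) * growth ** -3) ** 0.25

# A higher nucleation barrier dG*/kT lengthens the transformation time,
# while a larger driving force dGm/RT shortens it.
slow = reduced_time(dG_star_over_kT=40.0, dGm_over_RT=0.5)
fast = reduced_time(dG_star_over_kT=30.0, dGm_over_RT=0.5)
print(slow > fast)  # True
```

Consistent with the chapter's TTT discussion, compositions with longer reduced times are the ones predicted to fall inside the glass-forming range.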
https://byjus.com/poisson-distribution-formula/ | [
"",
null,
"# Poisson Distribution Formula\n\nThe Poisson distribution is another probability distribution. Unlike the binomial distribution, we are not given the number of trials or the probability of success on a single trial. Instead, the average number of successes in a certain time interval is given. The average number of successes is called “Lambda” and denoted by the symbol “λ”.\n\nThe formula for the Poisson distribution is given below:\n\n$\\large P\\left(X=x\\right)=\\frac{e^{-\\lambda}\\:\\lambda^{x}}{x!}$\n\nHere,\n\n$$\\begin{array}{l}\\lambda\\end{array}$$\nis the average number of successes,\nx is a Poisson random variable, and\ne is the base of the natural logarithm, e = 2.71828 (approx.).\n\n### Solved Example\n\nQuestion: Given that only 3 students came to attend the class today, find the probability that exactly 4 students will attend the class tomorrow.\n\nSolution:\n\nGiven,\nAverage rate of success (\n\n$$\\begin{array}{l}\\lambda\\end{array}$$\n) = 3\nPoisson random variable (x) = 4\n\nPoisson distribution = P(X = x) =\n\n$$\\begin{array}{l}\\frac{e^{-\\lambda} \\lambda^{x}}{x!}\\end{array}$$\n\n$$\\begin{array}{l}\\begin{array}{c}P(X = 4)=\\frac{e^{-3} \\cdot 3^{4}}{4 !} \\\\ \\\\P(X = 4)=0.16803135574154\\end{array}\\end{array}$$"
] | [
null,
"https://www.facebook.com/tr",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8100217,"math_prob":0.9997458,"size":862,"snap":"2023-40-2023-50","text_gpt3_token_len":208,"char_repetition_ratio":0.13519813,"word_repetition_ratio":0.015625,"special_character_ratio":0.23201856,"punctuation_ratio":0.10759494,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000081,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T08:04:41Z\",\"WARC-Record-ID\":\"<urn:uuid:39c63a3d-5d01-48e7-859a-0f241c4e237e>\",\"Content-Length\":\"552085\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d435dc38-9b79-460a-aeef-636f71abd036>\",\"WARC-Concurrent-To\":\"<urn:uuid:b0021662-c425-4ed2-b1d3-bec3eb1ceab9>\",\"WARC-IP-Address\":\"34.36.4.163\",\"WARC-Target-URI\":\"https://byjus.com/poisson-distribution-formula/\",\"WARC-Payload-Digest\":\"sha1:WRCI6MLKAVZX5YIZ3LHHOS2KQKYYR7NI\",\"WARC-Block-Digest\":\"sha1:6RMP7V7QLAN74KH4IBYULIZT3XXI7ZIT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679101282.74_warc_CC-MAIN-20231210060949-20231210090949-00735.warc.gz\"}"} |
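The solved example above (λ = 3, x = 4) is easy to double-check programmatically. A minimal sketch in Python — the function name `poisson_pmf` is mine, not from the article:

```python
import math

def poisson_pmf(x, lam):
    """P(X = x) = e^-lam * lam^x / x!  -- the formula from the article."""
    return math.exp(-lam) * lam ** x / math.factorial(x)

# 3 students attended today (lam = 3); probability exactly 4 attend tomorrow:
p = poisson_pmf(4, 3)
print(round(p, 6))  # 0.168031
```

The printed value matches the article's result of 0.16803135574154 after rounding.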
https://zbmath.org/?q=an%3A0907.62017 | [
"# zbMATH — the first resource for mathematics\n\nRecord statistics. (English) Zbl 0907.62017\nCommack, NY: Nova Science Publishers, Inc. 227 p. (1995).\nContents (Chapter headings): 1. Record statistics; 2. Exponential distribution; 3. Generalized extreme value distributions; 4. Generalized Pareto distribution; 5. Power function distribution; 6. Geometric distribution; 7. Some selected distributions; 8. Additional topics; Appendix; References.\n\n##### MSC:\n 62E15 Exact distribution theory in statistics 62G30 Order statistics; empirical distribution functions 62-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to statistics 60G70 Extreme value theory; extremal stochastic processes 62-02 Research exposition (monographs, survey articles) pertaining to statistics"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.63503313,"math_prob":0.50011665,"size":917,"snap":"2021-31-2021-39","text_gpt3_token_len":231,"char_repetition_ratio":0.16648412,"word_repetition_ratio":0.036697246,"special_character_ratio":0.25627044,"punctuation_ratio":0.2760736,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9612058,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-04T00:58:22Z\",\"WARC-Record-ID\":\"<urn:uuid:d04e9092-fc90-4ae8-8957-f3fd6c1c87b3>\",\"Content-Length\":\"45938\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f55b1e77-a1b2-4593-a788-bf4bf5ca0aaf>\",\"WARC-Concurrent-To\":\"<urn:uuid:f162eb26-ff0c-4a3f-9222-22c640daacde>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/?q=an%3A0907.62017\",\"WARC-Payload-Digest\":\"sha1:TSCV6HMZWRMSDWZXCGMP7ERJMTU7NJ65\",\"WARC-Block-Digest\":\"sha1:AJVZJGKCRI75645QNZGPNZH7DM5WFBJJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154486.47_warc_CC-MAIN-20210803222541-20210804012541-00080.warc.gz\"}"} |
https://stats.stackexchange.com/questions/156210/an-example-where-the-output-of-the-k-medoid-algorithm-is-different-than-the-outp/156492 | [
"# An example where the output of the k-medoid algorithm is different than the output of the k-means algorithm\n\nI understand the difference between k-medoids and k-means. But can you give me an example with a small data set where the k-medoids output is different from the k-means output?\n\nk-medoids is based on medoids (each of which is a point that belongs to the dataset), calculated by minimizing the absolute distance between the points and the selected centroid, rather than minimizing the squared distance. As a result, it's more robust to noise and outliers than k-means.\n\nHere is a simple, contrived example with 2 clusters (ignore the reversed colors)"
null,
"As you can see, the medoids and centroids (of k-means) are slightly different in each group. Also you should note that every time you run these algorithms, because of the random starting points and the nature of the minimization algorithm, you will get slightly different results. Here is another run:",
null,
"And here is the code:\n\nlibrary(cluster)\nx <- rbind(matrix(rnorm(100, mean = 0.5, sd = 4.5), ncol = 2),\nmatrix(rnorm(100, mean = 0.5, sd = 0.1), ncol = 2))\ncolnames(x) <- c(\"x\", \"y\")\n# using 2 clusters because we know the data comes from two groups\ncl <- kmeans(x, 2)\nkclus <- pam(x,2)\npar(mfrow=c(1,2))\nplot(x, col = kclus$clustering, main=\"Kmedoids Cluster\")\npoints(kclus$medoids, col = 1:3, pch = 10, cex = 4)\nplot(x, col = cl$cluster, main=\"Kmeans Cluster\")\npoints(cl$centers, col = 1:3, pch = 10, cex = 4)\n\n• @frc, if you think someone's answer is incorrect, don't edit it to correct it. You can leave a comment (once your rep is >50), &/or downvote. Your best option is to post your own answer w/ what you believe to be the correct information (cf, here). Nov 22, 2016 at 16:18\n• K-medoids minimizes an arbitrarily chosen distance (not necessarily an absolute distance) between clustered elements and the medoid. Actually the pam method (an example implementation of K-medoids in R) used above, by default, uses the Euclidean distance as a metric. K-means always uses the squared Euclidean. The medoids in K-medoids are chosen out of the cluster elements, not out of the whole point space, as centroids are in K-means. Nov 27, 2016 at 16:40\n• I do not have enough reputation to comment, but wanted to mention that there is a mistake in the plots of Ilanman's answer: he ran the whole code, such that the data was modified. If you run only the clustering part of the code, the clusters are quite stable, more stable for PAM than for k-means, by the way. Jun 14, 2017 at 10:40\n\nA medoid has to be a member of the set, a centroid does not.\n\nCentroids are typically discussed in the context of solid, continuous objects, but there's no reason to believe that the extension to discrete samples would require the centroid to be a member of the original set.\n\nBoth the k-means and k-medoids algorithms break the dataset up into k groups. 
Also, they are both trying to minimize the distance between points of the same cluster and a particular point which is the center of that cluster. In contrast to the k-means algorithm, the k-medoids algorithm chooses points as centers that belong to the dataset. The most common implementation of the k-medoids clustering algorithm is the Partitioning Around Medoids (PAM) algorithm. The PAM algorithm uses a greedy search which may not find the global optimum solution. Medoids are more robust to outliers than centroids, but they need more computation for high-dimensional data."
] | [
null,
"https://i.stack.imgur.com/wBlqF.png",
null,
"https://i.stack.imgur.com/ytx3W.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9069461,"math_prob":0.98406637,"size":3531,"snap":"2022-05-2022-21","text_gpt3_token_len":891,"char_repetition_ratio":0.13070598,"word_repetition_ratio":0.02725724,"special_character_ratio":0.24723874,"punctuation_ratio":0.11522049,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9979269,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-21T09:32:26Z\",\"WARC-Record-ID\":\"<urn:uuid:ab6ed376-5def-40af-a67a-71e9bf7e8fc3>\",\"Content-Length\":\"245328\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5c82a66e-32d9-44e5-8503-c5dd8c357091>\",\"WARC-Concurrent-To\":\"<urn:uuid:ca5b0b68-6cb3-4e69-a3ce-3eec38e2c63f>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/156210/an-example-where-the-output-of-the-k-medoid-algorithm-is-different-than-the-outp/156492\",\"WARC-Payload-Digest\":\"sha1:OWTIIGQD64D6EZQMDFSD7UCJ3LHRW5EK\",\"WARC-Block-Digest\":\"sha1:HGRXKVUG2ZHRIWQYSZWDWLDU4JONIAEH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662539049.32_warc_CC-MAIN-20220521080921-20220521110921-00537.warc.gz\"}"} |
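The point made in the answers above — a medoid must be a member of the dataset while a k-means centroid need not be — can be reproduced in a few lines. This sketch uses a made-up 1-D sample with one outlier; the helper names are mine:

```python
def centroid(points):
    """Mean of the points -- minimizes summed squared distance (k-means)."""
    return sum(points) / len(points)

def medoid(points):
    """Dataset member minimizing summed absolute distance (k-medoids)."""
    return min(points, key=lambda m: sum(abs(p - m) for p in points))

pts = [0.0, 1.0, 2.0, 10.0]  # one outlier at 10
print(centroid(pts))  # 3.25 -- pulled toward the outlier, not a data point
print(medoid(pts))    # 1.0  -- an actual data point, robust to the outlier
```

With a single cluster the contrast is already visible: the centroid is dragged toward the outlier, while the medoid stays on a real observation.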
https://support.pulse-eight.com/support/solutions/articles/30000044395-proaudio-setting-up-the-subwoofer-setting-up-the-subwoofer-crossover-filters-putty- | [
Each zone has a crossover filter that can allow only high frequencies, or only low frequencies to pass, or be disabled (and allow all frequencies to pass). To enable a filter, both the filter type and the filter frequency have to be set before the filter is enabled.\n\nFor the following example, we're assuming the main stereo speakers are connected to zone 1, and the subwoofer is connected to zone 2.\n\nStart by picking the type of filter and slope:\n\n0 - Disabled, the filter is bypassed.\n\n1 - Also disables the filter, but you should use 0.\n\n2 - 12dB / Octave Low Pass Filter\n\n3 - 12dB / Octave High Pass Filter\n\n4 - 24dB / Octave Low Pass Filter\n\n5 - 24 dB / Octave High Pass Filter\n\nand then send the command:\n\n• ^FTYPE @1,5\\$ ; set the stereo speakers to use a 24dB / octave high pass filter\n• ^FTYPE @2,4\\$ ; set the subwoofer to use a 24dB / octave low pass filter\n\nIf these don't make sense, that's fine. Use '4' (24dB / Octave Low Pass Filter) for the subwoofer zone and '5' (24dB / Octave High Pass Filter) for the main stereo speakers zone. The 12dB / 24dB is the sharpness of the filter, how fast it cuts off high or low frequencies. Experimenting with this setting and listening for what sounds best cannot hurt anything.\n\nNow pick the crossover frequency:\n\n0 - Disabled, the filter is bypassed.\n\n1 = 50Hz 9 = 79Hz 17 = 126Hz 25 = 200Hz\n\n2 = 53Hz 10 = 84Hz 18 = 133Hz 26 = 212Hz\n\n3 = 56Hz 11 = 89Hz 19 = 141Hz 27 = 224Hz\n\n4 = 59Hz 12 = 94Hz 20 = 150Hz 28 = 238Hz\n\n5 = 63Hz 13 = 100Hz 21 = 159Hz 29 = 252Hz\n\n6 = 67Hz 14 = 106Hz 22 = 168Hz 30 = 267Hz\n\n7 = 71Hz 15 = 112Hz 23 = 178Hz 31 = 283Hz\n\n8 = 75Hz 16 = 119Hz 24 = 189Hz 32 = 300Hz\n\nand send the frequency command using the above table to select a frequency:\n\n• ^FFREQ @1@2,16\\$ ; set the crossover frequency for stereo and subwoofer speakers to 119Hz\n\nExperimenting with this setting and listening for what sounds best is encouraged. 
Both the main speakers and the subwoofer should be set to the same frequency. When going up in frequency, this setting indicates the frequency at which the subwoofer stops working and the main speakers take over. This value is usually between 100Hz and 150Hz, but we allow values between 50 and 300Hz.\n\nNote: Changes to this setting should be backed up in case of power failure by sending the \"SS\" command:\n\n• ^SS 512\\$ ; Backup zone settings"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8214113,"math_prob":0.9777799,"size":2307,"snap":"2022-40-2023-06","text_gpt3_token_len":685,"char_repetition_ratio":0.14198871,"word_repetition_ratio":0.052747253,"special_character_ratio":0.33550066,"punctuation_ratio":0.09151786,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9770222,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T20:44:38Z\",\"WARC-Record-ID\":\"<urn:uuid:ea17e16b-e392-4d71-95a7-a56716d231bf>\",\"Content-Length\":\"27447\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bb826cfe-47d2-4f7e-8482-c526004614c5>\",\"WARC-Concurrent-To\":\"<urn:uuid:4989c2ed-90ac-4372-b86d-dda918ea5d61>\",\"WARC-IP-Address\":\"44.205.166.88\",\"WARC-Target-URI\":\"https://support.pulse-eight.com/support/solutions/articles/30000044395-proaudio-setting-up-the-subwoofer-setting-up-the-subwoofer-crossover-filters-putty-\",\"WARC-Payload-Digest\":\"sha1:WVZI6XL2RIEXHAIBPPW5NC5ZVUEXF7EM\",\"WARC-Block-Digest\":\"sha1:2LU72XYI5U25RHN7KR7KILQPCH4VIFU4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337524.47_warc_CC-MAIN-20221004184523-20221004214523-00096.warc.gz\"}"} |
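For scripting these serial commands, the index-to-frequency table above can be kept as a lookup. The helper below is a hypothetical sketch — the `FFREQ_HZ` table is transcribed from the text above, but the function name and its behavior are my own, not part of the device protocol:

```python
# Crossover-frequency table from the manual: index -> Hz (0 disables the filter).
FFREQ_HZ = {0: None,
     1: 50,   2: 53,   3: 56,   4: 59,   5: 63,   6: 67,   7: 71,   8: 75,
     9: 79,  10: 84,  11: 89,  12: 94,  13: 100, 14: 106, 15: 112, 16: 119,
    17: 126, 18: 133, 19: 141, 20: 150, 21: 159, 22: 168, 23: 178, 24: 189,
    25: 200, 26: 212, 27: 224, 28: 238, 29: 252, 30: 267, 31: 283, 32: 300}

def ffreq_command(zones, index):
    """Build a ^FFREQ command string for the given zones and table index."""
    if index not in FFREQ_HZ:
        raise ValueError("index must be 0-32")
    zone_part = "".join(f"@{z}" for z in zones)
    return f"^FFREQ {zone_part},{index}$"

# Set zones 1 and 2 to index 16 (119 Hz), as in the example above:
print(ffreq_command([1, 2], 16))  # ^FFREQ @1@2,16$
print(FFREQ_HZ[16])               # 119
```

Keeping the table as data means the crossover frequency in Hz is always recoverable from the index actually sent to the device.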
https://www.jpost.com/israel/military-court-upholds-pacifists-jail-term | [
""
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9825574,"math_prob":0.96919554,"size":2923,"snap":"2023-14-2023-23","text_gpt3_token_len":646,"char_repetition_ratio":0.12298732,"word_repetition_ratio":0.045081966,"special_character_ratio":0.20629491,"punctuation_ratio":0.09318996,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9789482,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T23:04:25Z\",\"WARC-Record-ID\":\"<urn:uuid:d0647fe2-37b2-43cc-a39a-3c0a905d0364>\",\"Content-Length\":\"82833\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b4a1b0d3-2c57-4308-97f0-196e541cb10a>\",\"WARC-Concurrent-To\":\"<urn:uuid:1d5550f4-a15c-491c-8c58-66386b95e62a>\",\"WARC-IP-Address\":\"159.60.130.79\",\"WARC-Target-URI\":\"https://www.jpost.com/israel/military-court-upholds-pacifists-jail-term\",\"WARC-Payload-Digest\":\"sha1:7JTVLQOFPUINCD34MVBELULKU6ITAKLL\",\"WARC-Block-Digest\":\"sha1:PXDWL75S77VZZ6GCZ7BN24IG7BMOMW43\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224653183.5_warc_CC-MAIN-20230606214755-20230607004755-00234.warc.gz\"}"} |
https://jp.mathworks.com/matlabcentral/answers/488134-interpolate-among-datasets-so-one-set-matches-the-other?s_tid=prof_contriblnk | [
"# Interpolate among Datasets so one set matches the other\n\n47 views (last 30 days)\nStelios Fanourakis on 29 Oct 2019\nCommented: Image Analyst on 26 Jul 2020\nHi\nI have two datasets with different dimensions. They need to be matched somehow, and one dimension needs to be interpolated to fit the dimension of the second data set.\nMy first dataset is X,Y (2D) and my second dataset is only Y (1D). I need to somehow combine those two datasets. Like trying to plug in the Y axis data from the second dataset to the first set which is 2D. So, the X axis from the first set will be interpolated to match the data of the new Y axis data (second set). Am I clear?\n##### 2 Comments (1 older comment hidden)\nStelios Fanourakis on 30 Oct 2019\nThe datasets are two different Excel files. They both have the same X axis but differ in the Y axis. I want to plug in the Y values of the second set to the first one and interpolate. E.g. if the Y values are in Newtons and there are thickness changes of a material vertically (compressive forces), for every X there is a Y value (Newtons) that caused the deformation or the change in thickness (shape). For different Y values (second set of Excel data), it needs to interpolate the appropriate forces.\n\nSign in to comment.\n\n### Answers (2)\n\nthe cyclist on 30 Oct 2019\n% Data from your file that has both x and y\nx1 = [2 3 5];\ny1 = [1 2 3];\n% Data from your file that has only y\ny2 = [1.5 2.5];\n% Interpolate the set of x data based on the above.\nx2 = interp1(y1,x1,y2)\nHere is a simple example using the interp1 function. 
I'm not certain if this is what you mean.\nIf you look at that documentation page, note that the \"x\" and \"y\" variables are swapped compared to what is written here, because the normal convention is that y is dependent on x, but that is not what you have described in your question.\nIf this doesn't do what you want, or if it is unclear, then I suggest you actually load your data from Excel into MATLAB, save to a *.mat file, and upload those data here.\n##### 9 Comments (8 older comments hidden)\nthe cyclist on 21 Jul 2020\nYou should really open a new question, rather than burying this as a comment on an 8-month-old question.\nBut, since I happened to see it ...\nYou'll need to separate your problem into two different \"branches\" of y, because the sample points must be unique. So, you'll need to do one interpolation with, for example, the values of y>=0, and another with y<0.\n\nSign in to comment.\n\nMOHD UWAIS on 26 Jul 2020\nActually my problem is to find the FWHM(s) of a large number of curves like the following. So I require the interpolated values of x corresponding to the average y values to write the code (because the x values do not lie exactly on y data points at the average of y). There exist two values of x corresponding to a single average value of y in one curve (therefore the sample points are not unique).\nI am looking forward to your reply.\nThank you.",
null,
"##### 1 Comment\nImage Analyst on 26 Jul 2020\nBut you still posted your question here, as an answer to Stelios's question. Why?\nWe look forward to answering your question when you post it as your own question, not here in Stelios's thread. In the meantime, check out the find() function, like index=find(signal < threshold).\n\nSign in to comment."
] | [
null,
"https://www.mathworks.com/matlabcentral/answers/uploaded_files/337132/image.jpeg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9475428,"math_prob":0.7872511,"size":579,"snap":"2021-43-2021-49","text_gpt3_token_len":161,"char_repetition_ratio":0.107826084,"word_repetition_ratio":0.0,"special_character_ratio":0.29533678,"punctuation_ratio":0.14074074,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96891445,"pos_list":[0,1,2],"im_url_duplicate_count":[null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T12:32:11Z\",\"WARC-Record-ID\":\"<urn:uuid:777eab40-03db-481e-8125-0444615fde42>\",\"Content-Length\":\"195383\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2588b4b2-5101-4da6-8882-d84d8be91123>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3f1300c-7c29-4198-8e33-3608601170db>\",\"WARC-IP-Address\":\"104.69.217.80\",\"WARC-Target-URI\":\"https://jp.mathworks.com/matlabcentral/answers/488134-interpolate-among-datasets-so-one-set-matches-the-other?s_tid=prof_contriblnk\",\"WARC-Payload-Digest\":\"sha1:CAMJ4LOXNSE7Q7VLE3G3WWAEJVV4W3ZD\",\"WARC-Block-Digest\":\"sha1:U4XZKK6653PAP5E2YLZTF3OFBJK5ELEI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588153.7_warc_CC-MAIN-20211027115745-20211027145745-00325.warc.gz\"}"} |
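For readers porting the cyclist's interp1 example out of MATLAB, the same interpolation can be sketched with NumPy; note that `numpy.interp`, like the swapped-argument `interp1` call above, expects its sample points (here the y values) to be increasing:

```python
import numpy as np

# Data from the file that has both x and y
x1 = np.array([2.0, 3.0, 5.0])
y1 = np.array([1.0, 2.0, 3.0])
# Data from the file that has only y
y2 = np.array([1.5, 2.5])

# Interpolate x at the new y values: np.interp(query, sample_points, sample_values)
x2 = np.interp(y2, y1, x1)
print(x2)  # [2.5 4. ]
```

As in the MATLAB answer, x is treated as a function of y here, which is the reverse of the usual convention.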
https://culturasuicida.com/z7ai9c1j7.html | [
"Home\n\n# Geoda manual spatial regression coefficients\n\nSpatial Regression in GeoDa GeoDa is a software package developed by the Spatial Analysis Lab (Luc Anselin) at the University of Illinois. It is Compare the individual coefficients of the OLS model and the spatial lag model.\n\n4. Use model comparison statistics to compare the OLS and spatial S4 Training Modules GeoDa: Spatial Regression This box inquires the information to be included in the output results. Check Morans I zvalue as shown, and click OK.\n\nb. Geoda Manual Spatial Regression alternative forms of multivariate analysis and introduces students to spatial regression models and the experience working with data and statistical software packages (STATA, JMP and GeoDa) in GeoDa and Spatial Regression Modeling June 9, 2006 Spatial Regression in GeoDa 3.\n\nExamples This presentation draws on examples and text from both the GeoDa Workbook (0. 95i) and the by their regression coefficients) and observed values of the explanatory variable 2. Discover which of the explanatory variables contribute ii Acknowledgments The development of the GeoDa software for geodata analysis and its antecedents has been supported in part by research projects funded by a variety of sources. You can access GeoDa's regression functionality without opening a spatial file by going directly to Regress after opening GeoDa.\n\nThis option is particularly useful if you are working with large datasets (e. g.several hundred thousand observations), to avoid loading times of the map file. Spatial regression is used to model spatial relationships. Regression models investigate what variables explain their location. Home GIS Analysis How to Build Spatial Regression Models in ArcGIS We can manually plug in the betacoefficient model into the regression model.\n\nThe result is the predicted value. 
In our case, it is the Run the nonspatial regression Test the regression residuals for spatial autocorrelation, using Moran's I or some other index If no significant spatial autocorrelation exists, STOP.\n\nThis paper briefly reviews how to derive and interpret coefficients of spatial regression models, including topics of direct and indirect (spatial spillover) effects. These topics have been addressed Interpreting Regression Output in Geoda and ArcMap Summary Statistics: Geoda: ArcMap: Traditional Measures of Regression Fit: FstatisticsJoint FStatistic: typically No spatial regression method is effective for both characteristics.\n\nLinear Regression Spatial Lag Model (Geoda) Use the coefficients to form a regression equation: y 10. 5a 6b 8c GeoDa is the flagship program of the GeoDa Center, following a long line of software tools developed by Dr.\n\nLuc Anselin. It is designed to implement techniques for exploratory spatial data analysis (ESDA) on lattice data (points and polygons). Spatial Regression in GeoDa. Introduction. In the first exercise, we explored relationships between variables in the Cairo dataset.\n\nIn this exercise we will test these relationships by modeling fertility in a Spatial Regression User's Guide (Book) The user's guide to the spatial regression functionality in GeoDa can be purchased here: Luc Anselin and Sergio J. Rey. (2014). An Introduction to Spatial Autocorrelation Analysis with GeoDa Luc Anselin higher order contiguity. To create distancebased weights, it is easiest to compute the say to include as an instrumental variable in a regression.\n\nYou can add spatial lags for any variable in"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86283416,"math_prob":0.7742378,"size":3413,"snap":"2019-35-2019-39","text_gpt3_token_len":677,"char_repetition_ratio":0.17043121,"word_repetition_ratio":0.0,"special_character_ratio":0.17990038,"punctuation_ratio":0.09122807,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98928237,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-25T06:37:49Z\",\"WARC-Record-ID\":\"<urn:uuid:d075941d-d0f5-4257-a9bf-4c6316a73e3b>\",\"Content-Length\":\"9505\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3c3b3c85-6744-4984-8e17-413daeff2ae4>\",\"WARC-Concurrent-To\":\"<urn:uuid:0718f620-fcd1-4864-b4c5-b2fbb05a0632>\",\"WARC-IP-Address\":\"104.27.132.213\",\"WARC-Target-URI\":\"https://culturasuicida.com/z7ai9c1j7.html\",\"WARC-Payload-Digest\":\"sha1:R7SCBTSFDFYDUM76P6PESAS3JXUHYPG6\",\"WARC-Block-Digest\":\"sha1:XVPRJHSTHRX7RKTP2CKY4KZAJYFIAYFK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027323067.50_warc_CC-MAIN-20190825042326-20190825064326-00390.warc.gz\"}"} |
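The workflow quoted above — run a non-spatial regression, then test the residuals for spatial autocorrelation using Moran's I — hinges on computing Moran's I from a spatial weights matrix. Below is a minimal textbook-style sketch, not GeoDa's implementation; the toy values and the line-contiguity weights are made up:

```python
def morans_i(values, weights):
    """Moran's I for values at n locations; weights[i][j] is the spatial
    weight between locations i and j (a minimal textbook sketch)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)  # sum of all weights
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# Four locations on a line, each neighboring the next (binary contiguity):
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
clustered = [1.0, 2.0, 8.0, 9.0]   # similar residuals sit next to each other
print(morans_i(clustered, W) > 0)  # True: positive spatial autocorrelation
```

A clearly positive Moran's I on the residuals is the signal, in the workflow above, to move from plain OLS to a spatial lag or spatial error model.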
http://wordsearchfun.com/192937_Hamsters_wordsearch.html | [
"Hamsters\nhave fun\n\nLogin to be the first to rate this puzzle!\nBABIES\nBURROW\nCAGE\nCHUBBY\nCUTE\nDWARF\nEXCIRCISE\nFAST\nFOOD\nFUN\nFUNNY\nHAMSTERS\nNICE\nNOCTURNAL\nPET\nPLAYFUL\nPLAYHOUSE\nPOUCHES\nSLEEP\nSMALL\nSTRAWBERRY\nSYRIAN\nTIBBLES\nTOYS\nTREATS\nVET\nVET\nWATERBOTTEL\nWHISKERS\nYOGGIES\n V R P O U C H E S L K O O R H E G A C V Q G G J U S H S J H J T Q W Z U Z C W E V L O T Y E J U E N U L O F S F B D Q Y E L E X E I X Y B L A L W E Q D G J H C A L N Q X G Z J B B B I I X D H S F K O M P L T K G W M Q Y Y B R F Q W P M V E Y R A A N O O L E P A H I Y A H A T K K L O T S M Y R R E B W A R T S I U R T E K O C W E S I C R I C X E E S S S W F D N G L A E C D V H K L R C X R K V E V Z O D K I N W O R R U B I Q E U E Q R C G D C L A N R U T C O N N T N R R U H Z O Q E Q S T A E R T O Y S A C N S D V I L F F Y C X O T T Y J M X H Q U Q C E L B T P A L G T E V E A P L A Y F U L Y T T S S S K P L A Y H O U S E A O U D U D G K H Y X E Q Z P M T L P S A O Y N A R J N S I F T W Z D D E Z T R L D F I N R P W E Z D G W J F E D K T F S Q J V I Y Q H D Q O X X E P D N G K G N Y Q Z N D F"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6825115,"math_prob":0.51209533,"size":1913,"snap":"2021-31-2021-39","text_gpt3_token_len":927,"char_repetition_ratio":0.2812991,"word_repetition_ratio":0.2568371,"special_character_ratio":0.65081024,"punctuation_ratio":0.0022421526,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999994,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T16:49:38Z\",\"WARC-Record-ID\":\"<urn:uuid:5cf0c440-38d6-461e-b945-e424038d858a>\",\"Content-Length\":\"35713\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:93fa62f5-97da-46fc-ae9e-d2c0d73e109a>\",\"WARC-Concurrent-To\":\"<urn:uuid:c4014f20-625c-4472-805b-c431ff90350b>\",\"WARC-IP-Address\":\"88.208.252.230\",\"WARC-Target-URI\":\"http://wordsearchfun.com/192937_Hamsters_wordsearch.html\",\"WARC-Payload-Digest\":\"sha1:ADH3ZZHMLGREYUI4LUDXKKTYT5DT4QG4\",\"WARC-Block-Digest\":\"sha1:PJY4APQ2FYLPFE3626NWOX5NNUSIGT63\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057225.57_warc_CC-MAIN-20210921161350-20210921191350-00085.warc.gz\"}"} |
https://www.itsolutionstuff.com/post/how-to-get-difference-between-two-dates-in-phpexample.html | [
"# How to Get Difference Between Two Dates in PHP?\n\nBy Hardik Savani | November 8, 2019 | Category : PHP\n\nDo you want to calculate the difference between two dates in PHP? I mean the difference between two dates in days, months, and years using PHP. If yes, then I will show you how to get the difference between two dates in PHP using strtotime(), abs() and floor().\n\nWe may sometimes need to get the difference between two dates in PHP for an application. Even if you use a PHP framework like Laravel, CodeIgniter, or WordPress, you can use this PHP code anywhere. So you can see the following examples.\n\nThe examples below will help you calculate the difference between two dates in PHP.",
null,
"I will give you simple examples of the following, one by one.\n\n1) PHP Calculate difference between two dates in days\n\n2) PHP Calculate difference between two dates in months\n\n3) PHP Calculate difference between two dates in years\n\nExample:\n\n`<?php\n\\$startDate = \"2018-05-20\";\n\\$endDate = \"2019-08-27\";\n\n\\$diffData = abs(strtotime(\\$endDate) - strtotime(\\$startDate));\n\n\\$yearsDiff = floor(\\$diffData / (365*60*60*24));\nprint_r(\"Years:\".\\$yearsDiff);\n\n\\$monthsDiff = floor((\\$diffData - \\$yearsDiff * 365*60*60*24) / (30*60*60*24));\nprint_r(\" Months:\".\\$monthsDiff);\n\n\\$daysDiff = floor((\\$diffData - \\$yearsDiff * 365*60*60*24 - \\$monthsDiff*30*60*60*24) / (60*60*24));\nprint_r(\" Days:\".\\$daysDiff);`\n\nOutput:\n\n`Years:1 Months:3 Days:9`",
null,
""
] | [
null,
"https://www.itsolutionstuff.com/upload/php-get-difference-days.png",
null,
"https://www.itsolutionstuff.com/newTheme/mypic.jpeg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8462899,"math_prob":0.9537122,"size":2068,"snap":"2019-51-2020-05","text_gpt3_token_len":518,"char_repetition_ratio":0.16472869,"word_repetition_ratio":0.09509202,"special_character_ratio":0.2761122,"punctuation_ratio":0.14,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98500633,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-27T13:07:04Z\",\"WARC-Record-ID\":\"<urn:uuid:6d939804-fc96-46a3-a7af-13e2f077c98c>\",\"Content-Length\":\"30530\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ab4f2baa-d6b4-4ff1-865f-5fc816653be4>\",\"WARC-Concurrent-To\":\"<urn:uuid:d341cdd8-20aa-4e2f-a9d4-d039227e35ef>\",\"WARC-IP-Address\":\"148.72.92.152\",\"WARC-Target-URI\":\"https://www.itsolutionstuff.com/post/how-to-get-difference-between-two-dates-in-phpexample.html\",\"WARC-Payload-Digest\":\"sha1:QKPGK4GX7LHTOZCECYDPIVTB7OOX457X\",\"WARC-Block-Digest\":\"sha1:I3UKHYUUGODYHTHCIYZ2S4YWLV66GYXI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251700675.78_warc_CC-MAIN-20200127112805-20200127142805-00061.warc.gz\"}"} |
https://stackoverflow.com/questions/27001604/32-bit-unsigned-multiply-on-64-bit-causing-undefined-behavior | [
"# 32 bit unsigned multiply on 64 bit causing undefined behavior?\n\n``````uint32_t s1 = 0xFFFFFFFFU;\nuint32_t s2 = 0xFFFFFFFFU;\nuint32_t v;\n...\nv = s1 * s2; /* Only need the low 32 bits of the result */\n``````\n\nIn all the followings I assume the compiler couldn't have any preconceptions on the range of `s1` or `s2`, the initializers only serving for an example above.\n\nIf I compiled this on a compiler with an integer size of 32 bits (such as when compiling for x86), no problem. The compiler would simply use `s1` and `s2` as `uint32_t` typed values (not being able to promote them further), and the multiplication would simply give the result as the comment says (modulo `UINT_MAX + 1` which is 0x100000000 this case).\n\nHowever if I compiled this on a compiler with an integer size of 64 bits (such as for x86-64), there might be undefined behavior from what I can deduce from the C standard. Integer promotion would see `uint32_t` can be promoted to `int` (64 bit signed), the multiplication would then attempt to multiply two `int`'s, which, if they happen to have the values shown in the example, would cause an integer overflow, which is undefined behavior.\n\nAm I correct with this and if so how would you avoid it in a sane way?\n\nI spotted this question which is similar, but covers C++: What's the best C++ way to multiply unsigned integers modularly safely?. Here I would like to get an answer applicable to C (preferably C89 compatible). 
I wouldn't consider making a poor 32 bit machine potentially executing a 64 bit multiply an acceptable answer though (usually in code where this would be of concern, 32 bit performance might be more critical as typically those are the slower machines).\n\nNote that the same problem can apply to 16 bit unsigned ints when compiled with a compiler having a 32 bit int size, or unsigned chars when compiled with a compiler having a 16 bit int size (the latter might be common with compilers for 8 bit CPUs: the C standard requires integers to be at least 16 bits, so a conforming compiler is likely affected).\n\n• `int` is 32 bits even on most modern 64-bit architectures en.wikipedia.org/wiki/64-bit_computing#64-bit_data_models – phuclv Nov 18 '14 at 18:56\n• Oh come on. Is anything even defined in C? – harold Nov 18 '14 at 19:09\n• @harold, yes: `__STDC__` is. – fuz Nov 18 '14 at 19:14\n• @Tim: I posted it especially for that, although now even I realize that I might actually need to stroll through an entire project which is set up to be compiler int size independent (currently working fine on both 32 and 64 bits) to check for such multiplies and other stuff I took for granted. Proves you can just never know all the beasts lurking in the shadows of C... – Jubatian Nov 18 '14 at 19:55\n• @TimSeguine Promotion from uint to int is acceptable if one of the operands is a wider int, however two uints converting to int is evil. 
– 2501 Nov 18 '14 at 19:56

The simplest way to get the multiplication to happen in an unsigned type that is at least `uint32_t`, and also at least `unsigned int`, is to involve an expression of type `unsigned int`.

``````v = 1U * s1 * s2;
``````

This either converts `1U` to `uint32_t`, or `s1` and `s2` to `unsigned int`, depending on what's appropriate for your particular platform.

@Deduplicator comments that some compilers, where `uint32_t` is narrower than `unsigned int`, may warn about the implicit conversion in the assignment, and notes that such warnings are likely suppressible by making the conversion explicit:

``````v = (uint32_t) (1U * s1 * s2);
``````

It looks a bit less elegant, in my opinion, though.

• +1 Yep, I thought too complicated. – Deduplicator Nov 18 '14 at 19:21
• @Deduplicator That happens implicitly, since `v` is defined as `uint32_t`. – user743382 Nov 18 '14 at 19:22
• @Deduplicator I'm not sure I agree that it's necessarily good for a compiler to warn for this, but I do agree that it is likely that there are compilers that warn for it. Will add a note. – user743382 Nov 18 '14 at 19:24
• @hvd This works because multiply is from left-to-right, right? If 1U was all the way on the right, the first multiplication would still be `s1 * s2` and converted to int. – 2501 Nov 18 '14 at 19:27
• @2501 That's correct. `1U * s1 * s2` always means `(1U * s1) * s2`, and `s1 * s2 * 1U` always means `(s1 * s2) * 1U`. An alternative that just might be slightly more readable, by not requiring the reader to know how `*` binds, would be `s1 * 1U * s2`. – user743382 Nov 18 '14 at 19:29

Congratulations on finding a friction point.

A possible way:

``````v = (uint32_t) (UINT_MAX<=0xffffffff
? s1 * s2
: (unsigned)s1 * (unsigned)s2);
``````

Anyway, looks like adding some typedefs to `<stdint.h>` for types guaranteed to be no smaller than `int` would be in order ;-).

• Does it even need the conditional? 
I assume it would play OK as simply `v = (unsigned)s1 * (unsigned)s2;`, the type of `v` would take care of the proper truncating anyway, while on 32 bits, it is still a 32 bit multiply. At least if I expect `unsigned` to be at least 32 bits... Wait, aren't there such types already defined? – Jubatian Nov 18 '14 at 19:18
• Trouble is, `int` need not have more than 16 bits... And no, there are no such types. – Deduplicator Nov 18 '14 at 19:19
• Eh, sorry for being vague, I meant a 32 bit int compiler... Well, the type I mean is `uint_least32_t` in `stdint.h`; a suitable substitution may even be ifdeffed for C89 I guess. – Jubatian Nov 18 '14 at 19:21
• does this work with 1's complement or sign-magnitude? – phuclv Nov 19 '14 at 8:04
• @LưuVĩnhPhúc: For unsigned numbers, those terms are meaningless. – Deduplicator Nov 19 '14 at 16:42"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9158813,"math_prob":0.8889179,"size":2160,"snap":"2019-43-2019-47","text_gpt3_token_len":529,"char_repetition_ratio":0.124304265,"word_repetition_ratio":0.05235602,"special_character_ratio":0.2625,"punctuation_ratio":0.0917647,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9572727,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-13T12:27:29Z\",\"WARC-Record-ID\":\"<urn:uuid:1be79634-8fa1-4cb0-9e1a-27e175b5084e>\",\"Content-Length\":\"166252\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:79fc6e58-942a-47b6-add7-eb090c625121>\",\"WARC-Concurrent-To\":\"<urn:uuid:4e4454e0-5b24-4a0c-bd65-f73483742def>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/27001604/32-bit-unsigned-multiply-on-64-bit-causing-undefined-behavior\",\"WARC-Payload-Digest\":\"sha1:RTWZQS6DRUERGRH2MZMBZ3CMOQ5GPQN5\",\"WARC-Block-Digest\":\"sha1:3V4WT42RCAKUT5D35AARVG5XEGN3MU22\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496667260.46_warc_CC-MAIN-20191113113242-20191113141242-00201.warc.gz\"}"} |
https://nl.mathworks.com/matlabcentral/cody/problems/885 | [
"Cody\n\n# Problem 885. Create logical matrix with a specific row and column sums\n\nGiven two numbers n and s, build an n-by-n logical matrix (of only zeros and ones), such that both the row sums and the column sums are all equal to s. Additionally, the main diagonal must be all zeros.\n\nYou can assume that: 0 < s < n\n\nExample:\n\nTake n=10 and s=3, here is a possible solution\n\n```M =\n0 1 0 0 1 1 0 0 0 0\n0 0 1 0 1 1 0 0 0 0\n0 0 0 0 1 1 0 0 1 0\n0 0 0 0 0 0 1 1 0 1\n1 0 0 0 0 0 1 0 1 0\n0 1 1 0 0 0 0 1 0 0\n1 0 0 1 0 0 0 0 0 1\n0 0 0 1 0 0 0 0 1 1\n1 1 0 0 0 0 0 1 0 0\n0 0 1 1 0 0 1 0 0 0\n```\n\nNote that the following conditions are all true:\n\n```all(sum(M,1)==3) % column sums equal to s\nall(sum(M,2)==3) % row sums equal to s\nall(diag(M)==0) % zeros on the diagonal\nislogical(M) % logical matrix\nndims(M)==2 % 2D matrix\nall(size(M)==n) % square matrix\n```\n\nUnscored bonus:\n\nVisualize the result as a graph where M represents the adjacency matrix:\n\n```% circular layout\nt = linspace(0, 2*pi, n+1)';\nxy = [cos(t(1:end-1)) sin(t(1:end-1))];\nsubplot(121), spy(M)\nsubplot(122), gplot(M, xy, '*-'), axis image\n```\n\n### Solution Stats\n\n38.69% Correct | 61.31% Incorrect\nLast Solution submitted on Jan 20, 2020"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6939465,"math_prob":0.99924994,"size":1239,"snap":"2019-51-2020-05","text_gpt3_token_len":500,"char_repetition_ratio":0.22024292,"word_repetition_ratio":0.31617647,"special_character_ratio":0.42372882,"punctuation_ratio":0.08459215,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9978504,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T11:55:05Z\",\"WARC-Record-ID\":\"<urn:uuid:ce986acb-c8b2-4fee-9bf5-2cfbedf0229b>\",\"Content-Length\":\"104468\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6bcf1f4-686b-4805-be5d-c7b408c1e0d2>\",\"WARC-Concurrent-To\":\"<urn:uuid:1595ac14-c63b-4145-832a-762021b5892d>\",\"WARC-IP-Address\":\"23.32.68.178\",\"WARC-Target-URI\":\"https://nl.mathworks.com/matlabcentral/cody/problems/885\",\"WARC-Payload-Digest\":\"sha1:NWGA5LMULJHPBZ3YE6FNBBR5UIOIJLHB\",\"WARC-Block-Digest\":\"sha1:H52KKDD4YTX5MVCKJNWLUVXZ2VAMVECZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250603761.28_warc_CC-MAIN-20200121103642-20200121132642-00543.warc.gz\"}"} |
https://isiarticles.com/article/10106 | [
"دانلود مقاله ISI انگلیسی شماره 10106\nترجمه فارسی عنوان مقاله\n\n# رویکرد سلسله مراتبی بیزی برای تجزیه و تحلیل داده های شمارش طولی با پراکندگی بیش از حد: یک مطالعه شبیه سازی\n\nعنوان انگلیسی\nA hierarchical Bayesian approach for the analysis of longitudinal count data with overdispersion: A simulation study\nکد مقاله سال انتشار تعداد صفحات مقاله انگلیسی ترجمه فارسی\n10106 2013 13 صفحه PDF سفارش دهید\nدانلود فوری مقاله + سفارش ترجمه\n\nنسخه انگلیسی مقاله همین الان قابل دانلود است.\n\nهزینه ترجمه مقاله بر اساس تعداد کلمات مقاله انگلیسی محاسبه می شود.\n\nاین مقاله تقریباً شامل 7236 کلمه می باشد.\n\nهزینه ترجمه مقاله توسط مترجمان با تجربه، طبق جدول زیر محاسبه می شود:\n\nشرح تعرفه ترجمه زمان تحویل جمع هزینه\nترجمه تخصصی - سرعت عادی هر کلمه 90 تومان 11 روز بعد از پرداخت 651,240 تومان\nترجمه تخصصی - سرعت فوری هر کلمه 180 تومان 6 روز بعد از پرداخت 1,302,480 تومان\nپس از پرداخت، فوراً می توانید مقاله را دانلود فرمایید.\nمنبع",
null,
"Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)\n\nJournal : Computational Statistics & Data Analysis, , Volume 57, Issue 1, January 2013, Pages 233-245\n\nترجمه کلمات کلیدی\n- معیارهای انحراف اطلاعات - مدل پواسون نرمال سلسله مراتبی - مدل پراکند ه پواسون نرمال سلسله مراتبی -\nکلمات کلیدی انگلیسی\nDeviance information criteria, Hierarchical Poisson–Normal model , Hierarchical Poisson–Normal overdispersed model\n\n#### چکیده انگلیسی\n\nIn sets of count data, the sample variance is often considerably larger or smaller than the sample mean, known as a problem of over- or underdispersion. The focus is on hierarchical Bayesian modeling of such longitudinal count data. Two different models are considered. The first one assumes a Poisson distribution for the count data and includes a subject-specific intercept, which is assumed to follow a normal distribution, to account for subject heterogeneity. However, such a model does not fully address the potential problem of extra-Poisson dispersion. The second model, therefore, includes also random subject and time dependent parameters, assumed to be gamma distributed for reasons of conjugacy. To compare the performance of the two models, a simulation study is conducted in which the mean squared error, relative bias, and variance of the posterior means are compared.\n\n#### مقدمه انگلیسی\n\nIn medical research, data are often collected in the form of counts, e.g., corresponding to the number of times that a particular event of interest occurs. A common model for count data is the Poisson model, which is rather restrictive, given that variance and mean are equal. Often, in observed count data, the sample variance is considerably larger (smaller) than the sample mean—a phenomenon called overdispersion (underdispersion). Generically, this is referred to as extra-(Poisson)-dispersion (Iddi and Molenberghs, 2012). 
If not appropriately accounted for, extra-dispersion may cause serious flaws in precision estimation, and inferences based there upon (Breslow, 1990). However, such excess variation has little effect on the estimation of the regression coefficients of primary interest (Cox, 1983). One of the approaches to this problem is to assume a specific, flexible parametric distribution for the Poisson means associated with each observed count. Margolin et al. (1981) assumed a gamma mixing distribution for the Poisson means which leads to the negative binomial model. The advantage of this parametric approach is that parameter estimates may be obtained by maximum likelihood, leading to estimates that are asymptotically normal, consistent, and efficient if the parametric assumptions are accurate (Cramér, 1946 and Wald, 1949). Under conditions discussed by Cox (1983), maximum likelihood methods maintain high efficiency for modest amounts of extra-dispersion, even when not explicitly accounted for in the parametric model. Pocock et al. (1981) proposed an intermediate solution, via maximum likelihood, to the problem of fitting regression models to tables of frequencies when the residual variation is substantially larger than would be expected from assumptions. Williams (1982) proposed a moment method for logistic linear models, and Breslow (1984) used the method proposed by Pocock et al. (1981) and Williams (1982) for log-linear models. Furthermore, the quasi-likelihood method, which can be considered a moment method, was applied for overdispersion by McCullagh and Nelder (1989) and Wedderburn (1974). The asymptotic properties of all these moment methods for extra-binomial and extra-Poisson variations were studied by Moore (1986). 
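As a quick numerical illustration of the gamma-mixing idea above (a sketch for this summary, not code from the paper; numpy and the specific parameter values are assumptions), Poisson counts whose means are drawn from a gamma distribution show variance well above the mean, while plain Poisson counts do not:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
mu, alpha = 5.0, 0.5  # illustrative mean and gamma shape; smaller alpha means more overdispersion

# Gamma-mixed Poisson (negative binomial): E[y] = mu, Var[y] = mu + mu**2 / alpha
lam = rng.gamma(shape=alpha, scale=mu / alpha, size=n)  # E[lam] = mu
y_nb = rng.poisson(lam)

y_pois = rng.poisson(mu, size=n)  # plain Poisson for comparison

print(y_nb.mean(), y_nb.var())      # mean near 5, variance far above 5
print(y_pois.mean(), y_pois.var())  # mean and variance both near 5
```

Here the theoretical variance of the mixed counts is mu + mu²/alpha = 55, roughly eleven times the mean - exactly the kind of extra-Poisson dispersion the moment methods above are designed to absorb.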
For modeling longitudinal count data with overdispersion, Thall and Vail (1990), similarly to Zeger (1988), developed a mixed-effects approach in which the regression coefficients are estimated by generalized estimating equations and the variance component is estimated using the method of moments. This may be viewed as an extension of Liang and Zeger's (1986) model for longitudinal count data. Variance components are generally of broad interest (Pryseley et al., 2011). Besides, Booth et al. (2003) and Molenberghs et al. (2007) brought together both modeling strands and allowed at the same time correlation between repeated measures and overdispersion in the counts. This work was extended by Molenberghs et al. (2010) to data types different from counts. Molenberghs et al. (2007) termed their model the combined model. All of these authors conducted parameter estimation and inference using a likelihood paradigm. In contrast, this paper takes a Bayesian perspective. In particular, two versions of a hierarchical Poisson model for longitudinal count data are studied. The first one includes subject-specific random effects to account for subject heterogeneity (a conventional generalized linear mixed model) and the second one includes an additional parameter accounting for overdispersion, generated through an additional gamma-distributed random effect (a combined model). The two models are applied to real longitudinal count data and compared using a simulation study. This paper proceeds as follows. In Section 2, the motivating study is described, which comprises a set of data on epileptic patients. The statistical methodology is laid out in Section 3. In Section 4, the data set is analyzed, followed by a simulation study in Section 5.

#### English Conclusion

A Bayesian inferential route was proposed for the HPNOD (and the HPN), and the performance of the HPN and HPNOD models was compared on data generated with and without overdispersion. A Bayesian approach was adopted. 
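The two data-generating mechanisms being compared can be sketched as follows (an illustration for this summary only, not the authors' code; the parameter values, variable names, and use of numpy are assumptions). The HPN layer draws a normal subject intercept; the HPNOD layer multiplies in an additional gamma-distributed overdispersion factor with unit mean:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_times = 100, 5
beta0, sigma, alpha = 0.8, 0.4, 0.3  # illustrative true values

b = rng.normal(0.0, sigma, size=(n_subjects, 1))  # subject-specific random intercepts

# HPN: counts are Poisson with a log-linear, subject-specific mean
mu = np.exp(beta0 + b)                            # shape (n_subjects, 1)
y_hpn = rng.poisson(np.broadcast_to(mu, (n_subjects, n_times)))

# HPNOD: multiply in a gamma overdispersion factor theta_ij with E[theta] = 1
theta = rng.gamma(shape=alpha, scale=1.0 / alpha, size=(n_subjects, n_times))
y_hpnod = rng.poisson(theta * mu)

# The gamma layer inflates the variance-to-mean ratio of the counts
print(y_hpn.var() / y_hpn.mean(), y_hpnod.var() / y_hpnod.mean())
```

Fitting either model to such data (e.g., by MCMC) and comparing bias, MSE, and DIC is the shape of the simulation study described here.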
When the data are generated with high overdispersion levels, the HPN model leads to higher bias and less precise estimates for the variance of the random effect (σ²) than the HPNOD. HPN and HPNOD produce similar results for the slopes. HPNOD and HPN provide similar bias and precision for the slopes and for the random-effects variance σ. To check the problem with the intercept estimates using the HPNOD model, the correlation between the parameters was calculated. The intercepts of the two models cannot be directly compared, but only indirectly, given that the intercept takes the form log E(θᵢⱼ) + β₀ + 0.5σ² in the HPNOD and β₀ + 0.5σ² in the HPN. The Deviance Information Criterion (DIC) was applied to check the overall performance of both models. The DIC result seems to imply that the HPNOD is much better than the HPN model for data with high, moderate, and low overdispersion. Nevertheless, the HPNOD model has only slightly smaller DIC values than the HPN for data without overdispersion. The results of the simulation study also show that there is an effect of cluster size and sample size. The bias and the MSE decrease when the cluster size increases, and there is a slight decrease of the bias and the MSE when the sample size increases. To investigate the robustness of the simulation study, three different true values for one of the model parameters were chosen. The results obtained were similar under these three different true values, which shows the robustness of the simulation study. Most of our findings for the analysis of the epilepsy data set are in agreement with the findings reported in Molenberghs et al. (2007). In both studies, there was a difference in the estimates of the intercepts and also in the inference on the slopes using both models. The HPNOD model also shows that there is no significant change in the number of epileptic seizures over time for the patients who received the treatment, while the HPN model does. 
This underscores the importance of careful extra-dispersion modeling. Further, both models produce non-significant values for the difference and ratio in slopes. However, the study done by Molenberghs et al. (2007) shows that there is a significant difference in the slopes using the HPN. In both studies, the HPNOD model fits better than the HPN model. Note that our findings are different from the ones reported in Thall and Vail (1990) and in Lindsey (1993). This should not come as a surprise, because these authors consider a different set of data, studying different compounds. To conclude, the HPNOD model performs better than the HPN model for data featuring high, moderate, and low overdispersion levels. However, both models perform similarly for data without overdispersion. Using the HPN model, the bias and MSE of all parameters increase when the overdispersion level increases. The HPN model results in biased and inefficient estimates for all parameters, especially for σ and for data with high overdispersion (0 < α ≤ 0.25). This may be due to the excess variability resulting from overdispersion not being taken into account by the HPN model. This underscores that we should accommodate the extra-model variability. 
Further investigation is needed to answer the question of why the HPNOD model provides unbiased estimates of the intercepts when the data are generated with a moderate overdispersion level, but not when there is high overdispersion, low overdispersion, or no overdispersion."
] | [
null,
"https://isiarticles.com/bundles/Article/front/images/Elsevier-Logo.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8577388,"math_prob":0.89858854,"size":9423,"snap":"2023-14-2023-23","text_gpt3_token_len":2348,"char_repetition_ratio":0.13515235,"word_repetition_ratio":0.010914052,"special_character_ratio":0.20969968,"punctuation_ratio":0.10060241,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9696998,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T09:06:44Z\",\"WARC-Record-ID\":\"<urn:uuid:e0aec076-ceb3-4b36-b3c7-6ed2018dc91f>\",\"Content-Length\":\"43318\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b5e115f-7a9c-46f1-aabb-07c07818e17c>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f952716-c132-435b-ad9a-44f9cbaef1bc>\",\"WARC-IP-Address\":\"45.159.197.11\",\"WARC-Target-URI\":\"https://isiarticles.com/article/10106\",\"WARC-Payload-Digest\":\"sha1:TSW5O7PUQ4JZRVHZLWSWRK4GUATFK6XE\",\"WARC-Block-Digest\":\"sha1:JXUDMGNC27BBFO3L2YVJZGR3355S5MPJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224652494.25_warc_CC-MAIN-20230606082037-20230606112037-00335.warc.gz\"}"} |
https://www.powershow.com/view/16c8dc-MzdhM/Parallel_Spectral_Methods_Solving_Elliptic_Problems_with_FFTs_powerpoint_ppt_presentation | [
"# Parallel Spectral Methods: Solving Elliptic Problems with FFTs - PowerPoint PPT Presentation\n\nPPT – Parallel Spectral Methods: Solving Elliptic Problems with FFTs PowerPoint presentation | free to download - id: 16c8dc-MzdhM",
null,
"The Adobe Flash plugin is needed to view this content\n\nGet the plugin now\n\nView by Category\nTitle:\n\n## Parallel Spectral Methods: Solving Elliptic Problems with FFTs\n\nDescription:\n\n### Solving Elliptic Problems with FFTs. Kathy Yelick. www.cs.berkeley.edu/~yelick/cs267_s07 ... 1D FFT is due to Edelman (see http://www-math.mit.edu/~edelman) ... – PowerPoint PPT presentation\n\nNumber of Views:112\nAvg rating:3.0/5.0\nSlides: 53\nProvided by: DavidE1\nCategory:\nTags:\nTranscript and Presenter's Notes\n\nTitle: Parallel Spectral Methods: Solving Elliptic Problems with FFTs\n\n1\nParallel Spectral MethodsSolving Elliptic\nProblems with FFTs\n• Kathy Yelick\n• www.cs.berkeley.edu/yelick/cs267_s07\n\n2\nReferences\n• Previous CS267 lectures\n• Lecture by Geoffrey Fox\n• http//grids.ucs.indiana.edu/ptliupages/presentati\nons/PC2007/cps615fft00.ppt\n• FFTW project\n• http//www.fftw.org\n• Spiral project\n• http//www.spiral.net\n\n3\nPoissons equation arises in many models\n3D ?2u/?x2 ?2u/?y2 ?2u/?z2 f(x,y,z)\nf represents the sources also need boundary\nconditions\n2D ?2u/?x2 ?2u/?y2 f(x,y)\n1D d2u/dx2 f(x)\n• Electrostatic or Gravitational Potential\nPotential(position)\n• Heat flow Temperature(position, time)\n• Diffusion Concentration(position, time)\n• Fluid flow Velocity,Pressure,Density(position,tim\ne)\n• Elasticity Stress,Strain(position,time)\n• Variations of Poisson have variable coefficients\n\n4\nAlgorithms for 2D (3D) Poisson Equation (N n2\n(n3) vars)\n• Algorithm Serial PRAM Memory Procs\n• Dense LU N3 N N2 N2\n• Band LU N2 (N7/3) N N3/2 (N5/3) N (N4/3)\n• Jacobi N2 (N5/3) N (N2/3) N N\n• Explicit Inv. 
| N² | log N | N² | N²
• Conj. Gradients | N^(3/2) (N^(4/3)) | N^(1/2) (N^(1/3)) · log N | N | N
• Red/Black SOR | N^(3/2) (N^(4/3)) | N^(1/2) (N^(1/3)) | N | N
• Sparse LU | N^(3/2) (N²) | N^(1/2) | N·log N (N^(4/3)) | N
• FFT | N·log N | log N | N | N
• Multigrid | N | log² N | N | N
• Lower bound | N | log N | N
• PRAM is an idealized parallel model with zero-cost communication
• Reference: James Demmel, Applied Numerical Linear Algebra, SIAM, 1997.

5
Solving Poisson's Equation with the FFT
• Express any 2D function defined in 0 ≤ x,y ≤ 1 as a series Φ(x,y) = Σⱼ Σₖ Φⱼₖ sin(πjx) sin(πky)
• Here the Φⱼₖ are called the Fourier coefficients of Φ(x,y)
• The inverse of this is Φⱼₖ = 4 ∫∫ Φ(x,y) sin(πjx) sin(πky) dx dy
• Poisson's equation ∂²Φ/∂x² + ∂²Φ/∂y² = f(x,y) becomes
• Σⱼ Σₖ (−π²j² − π²k²) Φⱼₖ sin(πjx) sin(πky)
• = Σⱼ Σₖ fⱼₖ sin(πjx) sin(πky)
• where the fⱼₖ are the Fourier coefficients of f(x,y)
• and f(x,y) = Σⱼ Σₖ fⱼₖ sin(πjx) sin(πky)
• This implies the PDE can be solved exactly algebraically: Φⱼₖ = fⱼₖ / (−π²j² − π²k²)

6
Solving Poisson's Equation with the FFT
• So the solution of Poisson's equation involves the following steps
• 1) Find the Fourier coefficients fⱼₖ of f(x,y) by performing the integral
• 2) Form the Fourier coefficients of Φ 
by
• Φⱼₖ = fⱼₖ / (−π²j² − π²k²)
• 3) Construct the solution by performing the sum Φ(x,y)
• There is another version of this (the Discrete Fourier Transform) which deals with functions defined at grid points, and not directly with the continuous integral
• Also, the simplest (mathematically) transform uses exp(−2πijx), not sin(πjx)
• Let us first consider the 1D discrete version of this case
• The PDE case normally deals with discretized functions, as these are needed for other parts of the problem

7
Serial FFT
• Let i = sqrt(−1) and index matrices and vectors from 0.
• The Discrete Fourier Transform of an m-element vector v is
• F·v
• where F is the m×m matrix defined as
• F[j,k] = ω^(j·k)
• where ω is
• ω = e^(2πi/m) = cos(2π/m) + i·sin(2π/m)
• ω is a complex number whose m-th power ω^m = 1, and it is therefore called an m-th root of unity
• E.g., for m = 4:
• ω = i, ω² = −1, ω³ = −i, ω⁴ = 1

8
Using the 1D FFT for filtering
• Signal = sin(7t) + .5 sin(5t) at 128 points
• Noise = random number bounded by .75
• Filter by zeroing out FFT components < .25

9
Using the 2D FFT for image compression
• Image = 200x320 matrix of values
• Compress by keeping the largest 2.5% of FFT components
• Similar idea used by JPEG

10
Related Transforms
• Most applications require multiplication by both F and inverse(F).
• Multiplying by F and inverse(F) are essentially the same. 
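The slide-8 filtering recipe can be reproduced in a few lines (a sketch, not the lecture's original code; numpy is assumed, and the cutoff is set slightly below the slide's .25 so the 0.5-amplitude mode is kept safely):

```python
import numpy as np

# Signal from slide 8: sin(7t) + .5 sin(5t) at 128 points, noise bounded by .75
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
signal = np.sin(7 * t) + 0.5 * np.sin(5 * t)
rng = np.random.default_rng(0)
noisy = signal + rng.uniform(-0.75, 0.75, size=t.size)

# Forward FFT, zero the small (noise) components, inverse FFT
spectrum = np.fft.fft(noisy)
spectrum[np.abs(spectrum) / t.size < 0.15] = 0.0  # keep only the two strong modes
filtered = np.fft.ifft(spectrum).real

print(np.max(np.abs(noisy - signal)))     # up to 0.75: the raw noise level
print(np.max(np.abs(filtered - signal)))  # much smaller after filtering
```

Only the bins carrying the two sinusoids survive the threshold, so the reconstruction recovers the clean signal up to the small noise that leaked into those bins.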
(inverse(F) is the complex conjugate of F divided by n.)
• For solving the Poisson equation and various other applications, we use variations on the FFT
• The sin transform -- imaginary part of F
• The cos transform -- real part of F
• Algorithms are similar, so we will focus on the forward FFT.

11
Serial Algorithm for the FFT
• Compute the FFT of an m-element vector v, F·v
• (F·v)[j] = Σ_{k=0}^{m−1} F[j,k] · v[k]
• = Σ_{k=0}^{m−1} ω^(j·k) · v[k]
• = Σ_{k=0}^{m−1} (ω^j)^k · v[k]
• = V(ω^j)
• where V is defined as the polynomial
• V(x) = Σ_{k=0}^{m−1} x^k · v[k]

12
Divide and Conquer FFT
• V can be evaluated using divide-and-conquer
• V(x) = Σ_{k=0}^{m−1} x^k · v[k]
• = v[0] + x²·v[2] + x⁴·v[4] + …
• + x·(v[1] + x²·v[3] + x⁴·v[5] + …)
• = Veven(x²) + x·Vodd(x²)
• V has degree m−1, so Veven and Vodd are polynomials of degree m/2 − 1
• We evaluate these at the points (ω^j)² for 0 ≤ j ≤ m−1
• But this is really just m/2 different points, since
• (ω^(j+m/2))² = (ω^j · ω^(m/2))² = ω^(2j) · ω^m = (ω^j)²
• So the FFT on m points is reduced to 2 FFTs on m/2 points
• Divide and conquer!

13
Divide-and-Conquer FFT
• FFT(v, ω, m)
• if m = 1 return v[0]
• else
• v_even = FFT(v[0:2:m−2], ω², m/2)
• v_odd = FFT(v[1:2:m−1], ω², m/2)
• ω_vec = [ω⁰, ω¹, …, ω^(m/2−1)]  (precomputed)
• return [v_even + (ω_vec . v_odd),
• v_even − (ω_vec . v_odd)]
• The "." 
above is component-wise multiply.
• The [ , ] is constructing an m-element vector from two m/2-element vectors
• This results in an O(m log m) algorithm.

14
An Iterative Algorithm
• The call tree of the d&c FFT algorithm is a complete binary tree of log m levels
• An iterative algorithm that uses loops rather than recursion goes through each level in the tree, starting at the bottom
• The algorithm overwrites v[i] by (F·v)[bitreverse(i)]
• Practical algorithms combine recursion (for the memory hierarchy) and iteration (to avoid function call overhead)

FFT(0,1,2,3,…,15) = FFT(xxxx)
even: FFT(0,2,…,14) = FFT(xxx0) / odd: FFT(1,3,…,15) = FFT(xxx1)
FFT(xx00) FFT(xx10) FFT(xx01) FFT(xx11)
FFT(x000) FFT(x100) FFT(x010) FFT(x110) FFT(x001) FFT(x101) FFT(x011) FFT(x111)
FFT(0) FFT(8) FFT(4) FFT(12) FFT(2) FFT(10) FFT(6) FFT(14) FFT(1) FFT(9) FFT(5) FFT(13) FFT(3) FFT(11) FFT(7) FFT(15)

15
Parallel 1D FFT
• Data dependencies in the 1D FFT
• Butterfly pattern
• A PRAM algorithm takes O(log m) time
• each step to the right is parallel
• there are log m steps
• What about communication cost?
• See the LogP paper for details

16
Block Layout of 1D FFT
• Using a block layout (m/p contiguous elements per processor)
• No communication in the last log(m/p) steps
• Each step requires fine-grained communication in the first log(p) steps

17
Cyclic Layout of 1D FFT
• Cyclic layout (only 1 element per processor, wrapped)
• No communication in the first log(m/p) steps
• Communication in the last log(p) steps

18
Parallel Complexity
• m = vector size, p = number of processors
• f = time per flop = 1
• α = startup cost for a message (in units of f)
• β = time per word in a message (in units of f)
• Time(blockFFT) = Time(cyclicFFT)
• = 2·m·log(m)/p
• + log(p) · α
• + (m·log(p)/p) · β

19
FFT With Transpose
• If we start with a cyclic layout for the first log(p) steps, there is no communication
• Then transpose the vector for the last log(m/p) steps
• All communication is in the transpose
• Note: This example has log(m/p) = log(p)
• If 
log(m/p) > log(p), more phases/layouts will be needed
• We will work with this assumption for simplicity

20
Why is the Communication Step Called a Transpose?
• Analogous to transposing an array
• View as a 2D array of n/p by p
• Note: the same idea is useful for uniprocessor caches

21
Complexity of the FFT with Transpose
• If no communication is pipelined (an overestimate!)
• Time(transposeFFT)
• = 2·m·log(m)/p   [same as before]
• + (p−1) · α   [was log(p) · α]
• + (m·(p−1)/p²) · β   [was (m·log(p)/p) · β]
• If communication is pipelined, so we do not pay for p−1 messages, the second term becomes simply α, rather than (p−1)·α.
• This is close to optimal. See the LogP paper for details.
• See also the following papers on the class resource page
• A. Sahai, "Hiding Communication Costs in Bandwidth Limited FFT"
• R. Nishtala et al, "Optimizing bandwidth limited problems using one-sided communication"

22
Comment on the 1D Parallel FFT
• The above algorithm leaves data in bit-reversed order
• Some applications can use it this way, like Poisson
• Others require another transpose-like operation
• Other parallel algorithms also exist
• A very different 1D FFT is due to Edelman (see http://www-math.mit.edu/~edelman)
• Based on the Fast Multipole algorithm
• Less communication for the non-bit-reversed algorithm

23
Higher Dimension FFTs
• FFTs in 2 or 3 dimensions are defined as 1D FFTs on vectors in all dimensions.
• E.g., a 2D FFT does 1D FFTs on all rows and then all columns
• There are 3 obvious possibilities for the 2D FFT
• (1) 2D blocked layout for the matrix, using 1D algorithms for each row and column
• (2) Block row layout for the matrix, using serial 1D FFTs on rows, followed by a transpose, then more serial 1D FFTs
• (3) Block row layout for the matrix, using serial 1D FFTs on rows, followed by parallel 1D FFTs on columns
• Option 2 is best, if we overlap communication and computation
• For a 3D FFT the options are similar
• 2 phases done with serial FFTs, followed by 
a transpose for the 3rd dimension
• can overlap communication with the 2nd phase in practice

24
FFTW: Fastest Fourier Transform in the West
• www.fftw.org
• Produces an FFT implementation optimized for:
• your version of FFT (complex, real, …)
• your value of n (arbitrary, possibly prime)
• Close to optimal for serial, can be improved for parallel
• Similar in spirit to PHiPAC/ATLAS/Sparsity
• Won the 1999 Wilkinson Prize for Numerical Software
• Widely used for serial FFTs
• Had parallel FFTs in version 2, but no longer supporting them
• Layout constraints from users/apps and network differences are hard to support

25
Bisection Bandwidth
• FFT requires one (or more) transpose operations
• Every processor sends 1/P of its data to each other one
• Bisection bandwidth limits this performance
• Bisection bandwidth is the bandwidth across the narrowest part of the network
• Important in global transpose operations, all-to-all, etc.
• Full bisection bandwidth is expensive
• The fraction of machine cost in the network is increasing
• Fat-tree and full crossbar topologies may be too expensive
• Especially on machines with 100K and more processors
• SMP clusters often limit bandwidth at the node level

26
Modified LogGP Model
• LogGP, no overlap
• EEL: end-to-end latency (1/2 roundtrip)
• g: minimum time between small message sends
• G: additional gap per byte for larger messages

27
Historical Perspective
• ½ round-trip latency
• Potential performance advantage for fine-grained, one-sided programs
• Potential productivity advantage for irregular applications

28
General Observations
• The overlap potential is the difference between the gap and the overhead
• No potential if the CPU is tied up throughout the message send
• E.g., no send-side DMA
• Grows with message size for machines with DMA
• Because the per-byte cost is handled by the NIC
• Grows with the amount of network congestion
• Because the gap grows as
network becomes saturated\n• Remote overhead is 0 for machine with RDMA\n\n29\nGASNet Communications System\n• GASNet offers put/get communication\n• One-sided no remote CPU involvement required in\nAPI (key difference with MPI)\n• Message contains remote address\n• No need to match with a receive\n• No implicit ordering required\n\nCompiler-generated code\n• Used in language runtimes (UPC, etc.)\n• Fine-grained and bulk xfers\n• Split-phase communication\n\nLanguage-specific runtime\nGASNet\nNetwork Hardware\n30\nPerformance of 1-Sided vs 2-sided Communication\nGASNet vs MPI\n• Comparison on Opteron/InfiniBand GASNets\nvapi-conduit and OSU MPI 0.9.5\n• Up to large message size (gt 256 Kb), GASNet\nprovides up to 2.2X improvement in streaming\nbandwidth\n• Half power point (N/2) differs by one order of\nmagnitude\n\n31\nGASNet Performance for mid-range message sizes\nGASNet usually reaches saturation bandwidth\nbefore MPI - fewer costs to amortize Usually\noutperform MPI at medium message sizes - often by\na large margin\n32\nNAS FT Case Study\n• Performance of Exchange (Alltoall) is critical\n• Communication to computation ratio increases with\nfaster, more optimized 1-D FFTs\n• Determined by available bisection bandwidth\n• Between 30-40 of the applications total runtime\n• Two ways to reduce Exchange cost\n• 1. Use a better network (higher Bisection BW)\n• 2. 
Overlap the all-to-all with communication\n(where possible) break up the exchange\n• Default NAS FT Fortran/MPI relies on 1\n• Our approach uses UPC/GASNet and builds on 2\n• Started as CS267 project\n• 1D partition of 3D grid is a limitation\n• At most N processors for N3 grid\n• HPC Challenge benchmark has large 1D FFT (can be\nviewed as 3D or more with proper roots of unity)\n\n33\n3D FFT Operation with Global Exchange\n1D-FFT Columns\nTranspose 1D-FFT (Rows)\n1D-FFT (Columns)\nCachelines\n1D-FFT Rows\nExchange (Alltoall)\nsend to Thread 0\nsend to Thread 1\nTranspose 1D-FFT\nDivide rows among threads\nsend to Thread 2\nLast 1D-FFT (Thread 0s view)\n• Single Communication Operation (Global Exchange)\nsends THREADS large messages\n• Separate computation and communication phases\n\n34\nCommunication Strategies for 3D FFT\nchunk all rows with same destination\n• Three approaches\n• Chunk\n• Wait for 2nd dim FFTs to finish\n• Minimize messages\n• Slab\n• Wait for chunk of rows destined for 1 proc to\nfinish\n• Overlap with computation\n• Pencil\n• Send each row as it completes\n• Maximize overlap and\n• Match natural layout\n\npencil 1 row\nslab all rows in a single plane with same\ndestination\nJoint work with Chris Bell, Rajesh Nishtala, Dan\nBonachea\n35\nDecomposing NAS FT Exchange into Smaller Messages\n• Three approaches\n• Chunk\n• Wait for 2nd dim FFTs to finish\n• Slab\n• Wait for chunk of rows destined for 1 proc to\nfinish\n• Pencil\n• Send each row as it completes\n• Example Message Size Breakdown for\n• Class D (2048 x 1024 x 1024)\n• at 256 processors\n\n36\nOverlapping Communication\n• Goal make use of all the wires\n• Distributed memory machines allow for\nasynchronous communication\n• Berkeley Non-blocking extensions expose GASNets\nnon-blocking operations\n• Approach Break all-to-all communication\n• Interleave row computations and row\ncommunications since 1D-FFT is independent across\nrows\n• Decomposition can be into slabs (contiguous sets\nof 
rows) or pencils (individual row)\n• Pencils allow\n• Earlier start for communication phase and\nimproved local cache use\n• But more smaller messages (same total volume)\n\n37\nNAS FT UPC Non-blocking MFlops\n• Berkeley UPC compiler support non-blocking UPC\nextensions\n• Produce 15-45 speedup over best UPC Blocking\nversion\n• Non-blocking version requires about 30 extra\nlines of UPC code\n\n38\nNAS FT Variants Performance Summary\n• Shown are the largest classes/configurations\npossible on each test machine\n• MPI not particularly tuned for many small/medium\nsize messages in flight (long message matching\nqueue depths)\n\n39\nPencil/Slab optimizations UPC vs MPI\n• Same data, viewed in the context of what MPI is\nable to overlap\n• For the amount of time that MPI spends in\ncommunication, how much of that time can UPC\neffectively overlap with computation\n• On Infiniband, UPC overlaps almost all the time\nthe MPI spends in communication\n• On Elan3, UPC obtains more overlap than MPI as\nthe problem scales up\n\n40\nSummary of Overlap in FFTs\n• One-sided communication has performance\n• Better match for most networking hardware\n• Most cluster networks have RDMA support\n• Machines with global address space support (X1,\nAltix) shown elsewhere\n• Smaller messages may make better use of network\n• Spread communication over longer period of time\n• Postpone bisection bandwidth pain\n• Smaller messages can also prevent cache thrashing\nfor packing\n• Avoid packing overheads if natural message size\nis reasonable\n\n41\nFFTW\nthe Fastest Fourier Tranform in the West\nC library for real complex FFTs (arbitrary\nsize/dimensionality)\n( parallel versions for threads MPI)\nComputational kernels (80 of code)\nautomatically generated\nSelf-optimizes for your hardware (picks best\ncomposition of steps) portability performance\n42\nFFTW performancepower-of-two sizes, double\nprecision\n833 MHz Alpha EV6\n2 GHz PowerPC G5\n500 MHz Ultrasparc IIe\n2 GHz AMD 
Opteron

43
FFTW performance: non-power-of-two sizes, double precision
• unusual non-power-of-two sizes receive as much optimization as powers of two
• 833 MHz Alpha EV6; 2 GHz AMD Opteron
• because we let the code do the optimizing

44
FFTW performance: double precision, 2.8 GHz Pentium IV, 2-way SIMD (SSE2)
• powers of two / non-powers-of-two
• exploiting CPU-specific SIMD instructions (rewriting the code) is easy
• because we let the code write itself

45
Why is FFTW fast? Three unusual features
• FFTW implements many FFT algorithms: a planner picks the best composition by measuring the speed of different combinations.
• The resulting plan is executed with explicit recursion: enhances locality
• The base cases of the recursion are codelets: highly optimized dense code automatically generated by a special-purpose compiler

46
FFTW is easy to use

complex x[n];
plan p;
p = plan_dft_1d(n, x, x, FORWARD, MEASURE);
...
execute(p); /* repeat as needed */
...
destroy_plan(p);

47
Why is FFTW fast? Three unusual features
• FFTW implements many FFT algorithms: a planner picks the best composition by measuring the speed of different combinations. (3)
• The resulting plan is executed with explicit recursion: enhances locality (1)
• The base cases of the recursion are codelets: highly optimized dense code automatically generated by a special-purpose compiler (2)

48
FFTW Uses Natural Recursion
• Size 8 DFT, p = 2 (radix 2) → two Size 4 DFTs → four Size 2 DFTs

49
Traditional cache solution: Blocking
• Size 8 DFT, p = 2 (radix 2) → two Size 4 DFTs → four Size 2 DFTs
• breadth-first, but with blocks of size = cache
• requires a program specialized for the cache size

50
Recursive Divide & Conquer is Good
• [Singleton, 1967] (depth-first traversal)
• Size 8 DFT, p = 2 (radix 2) → two Size 4 DFTs → four Size 2 DFTs

51
Cache Obliviousness
• A cache-oblivious algorithm does not know the cache size: it can be
optimal for any machine\nfor all levels of cache simultaneously\nExist for many other algorithms, too Frigo et\nal. 1999\nall via the recursive divide conquer approach\n52\nWhy is FFTW fast?three unusual features\nFFTW implements many FFT algorithms A planner\npicks the best composition by measuring the speed\nof different combinations.\n3\nThe resulting plan is executed with explicit\nrecursion enhances locality\n1\nThe base cases of the recursion are\ncodelets highly-optimized dense\ncode automatically generated by a special-purpose\ncompiler\n2"
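The iterative algorithm on slide 14 overwrites vi with (Fv)bitreverse(i). A minimal sketch of that index permutation in C (function names are mine, not from the slides):

```c
#include <stddef.h>

/* Reverse the lowest `bits` bits of index i, as used by the iterative
 * FFT: element i of the input ends up at position bitreverse(i). */
unsigned bitreverse(unsigned i, unsigned bits)
{
    unsigned r = 0;
    for (unsigned b = 0; b < bits; ++b) {
        r = (r << 1) | (i & 1u);  /* shift the low bit of i into r */
        i >>= 1;
    }
    return r;
}

/* In-place bit-reversal permutation of an array of length m = 2^bits.
 * Swapping only when j > i visits each pair exactly once. */
void bitreverse_permute(double *v, unsigned bits)
{
    unsigned m = 1u << bits;
    for (unsigned i = 0; i < m; ++i) {
        unsigned j = bitreverse(i, bits);
        if (j > i) { double tmp = v[i]; v[i] = v[j]; v[j] = tmp; }
    }
}
```

For 4 bits this reproduces the leaf order of the call tree above: position 1 maps to FFT(8), position 2 to FFT(4), position 3 to FFT(12), and so on.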
https://biharboardsolutions.com/bihar-board-12th-chemistry-objective-answers-chapter-9-in-english/
"# Bihar Board 12th Chemistry Objective Answers Chapter 9 Coordination Compounds\n\nBihar Board 12th Chemistry Objective Questions and Answers\n\n## Bihar Board 12th Chemistry Objective Answers Chapter 9 Coordination Compounds\n\nQuestion 1.\nCopper sulphate dissolves in ammonia due to the formation of\n(a) Cu2O\n(b) [Cu(NH3)4]SO4\n(c) [Cu(NH3)4]OH\n(d) [Cu(H2O)4]SO4\n(b) [Cu(NH3)4]SO4\n\nQuestion 2.\nThe number of ions given by [Pt(NH3)6]Cl4 inaqueous solution will be\n(a) two\n(b) three\n(c) five\n(d) eleven\n(c) five",
"Question 3.\nWhen one mole of each of the following complexes is treated with excess of AgNO3, which will give maximum amount of AgCl ?\n(a) [Co(NH3)6]Cl3\n(b)[Co(NH3)5CI]Cl2\n(c) [Co(NH3)4Cl2]Cl\n(d) [Co(NH3)3Cl3]\n(a) [Co(NH3)6]Cl3\n\nQuestion 4.\nAccording to Werner’s theory of coordination compounds.\n(a) primary valency is ionisable\n(b) secondary valency is ionisable\n(c) primary and secondary valencies are ionisable\n(d) neither primary nor secondary valency is ionisable\n(a) primary valency is ionisable\n\nQuestion 5.\nWhich of the following primary and secondary valencies are not correctly marked against the compounds ?\n(a) [Cr(NH3)6]Cl3, p = 3,s = 6\n(b) K2[Pt(Cl4], p = 2, s = 4\n(c) [Pt(NH3)2CI2], p = 2, s = 4\n(d) [Cu(NH3)4]SO4 , p = 4, s = 4\n(d) [Cu(NH3)4]SO4 , p = 4, s = 4\n\nQuestion 6.\nThe ligand N(CH2CH2NH2)3 is\n(a) bidentate\n(b) tridentate\n\nQuestion 7.\nWhich of the following is a tridentate ligand ?\n(a)EDTA4-\n(b) (COO)2\n(c) dien\n(d) NO2\n(c) dien",
"Question 8.\nAmong the following, which are ambidentate ligands ?\n(i)SCN\n(ii) NO\n(iii) NO;\n(iv) C2O4\n(a) (i) and (iii)\n(b) (i) and (iv)\n(c) (ii) and (iii)\n(d) (ii) and (iv)\n(a) (i) and (iii)\n\nQuestion 9.\nWhich of the following ligands form a chelate ?\n(a) Acetate\n(b) Oxalate\n(c) Cyanide\n(d) Ammonia\n(b) Oxalate\n\nQuestion 10.\nWhich of the following is not a neutral ligand ?\n(a) H2O\n(b) NH3\n(c) ONO\n(d) CO\n(c) ONO",
"Question 11.\nWhich of the following ligands will not show chelation ?\n(a) EDTA\n(b) DMG\n(c) Ethane – 1, 2-diamine\n(d) SCN\n(d) SCN\n\nQuestion 12.\nThe coordination number and the oxidation state of the element E in the complex [E(en)2(C2O4)]NO2 (where (en) is ethylenediamine) are, respectively\n(a) 6 and 3\n(b) 6 and 2\n(c) 4 and 2\n(d) 4 and 3\n(a) 6 and 3\n\nQuestion 13.\nThe correct IUPAC name of the coordination compound K3[Fe(CN)5NO] is\n(a) potassium pentacyanonitrosylferrate (II)\n(b) potassium pentacyanonitroferrate(III)\n(c) potassium nitritopentacyanoeferrate (IV)\n(d) potassium nitritepentacyanoiron (II)\n(a) potassium pentacyanonitrosylferrate (II)\n\nQuestion 14.\nCorrect formula of tetraamminechloronitroplatinum (IV) sulphate can be written as\n(a) [ Pt(NH3 )4 (ONO)Cl ]SO4\n(b) [Pt(NH3)4Cl2NO2]2SO4\n(c) [Pt(NH3)4(NO2)Cl]SO4\n(d) [PtCl (ONO) NH3(SO4)]\n(c) [Pt(NH3)4(NO2)Cl]SO4\n\nQuestion 15.\nWhich among the following will be named as dibromidobis (ethylenediamine) chromium (III) bromide ?\n(a) [Cr(en)2Br2]Br\n(b) [Cr(en)Br]\n(c) [Cr(en)Br2]Br\n(d) [Cr(en)3]Br3\n(a) [Cr(en)2Br2]Br",
"Question 16.\nThe name of the compound [Co(NH3)5NO2]Cl2 will be\n(a) pentaamminonitrocobalt (II) chloride\n(b) pentaamminenitrochloridecobaltate (III)\n(c) pentaamminenitrocobalt (III) chloride\n(d) pentanitrosoamminechlorocobaltate (III)\n(c) pentaamminenitrocobalt (III) chloride\n\nQuestion 17.\nHow many geometrical isomers are there for [Co(NH3)2Cl4] (octahedral) and [AuCl2Br2] (square planar) ?\n(a) Two cis and trans, no geometrical isomers\n(b) Two cis and trans, two cis and trans\n(c) No geometrical isomers, two cis and trans\n(d) No geometrical isomers, no geometrical isomers\n(b) Two cis and trans, two cis and trans\n\nQuestion 18.\nWhich of the following will not show geometrical isomerism ?\n(a) [Cr(NH3)4Cl2]Cl\n(b) [Co(en)2Cl2]CI\n(c) [Co(NH3)5NO2]Cl2\n(d) [Pt(NH3)2Cl2]\n(c) [Co(NH3)5NO2]Cl2",
"Question 19.\nWhich of the following shows maximum number of isomers ?\n(a) [Co(NH3)4Cl2]\n(b) [Ni(en)(NH3)4]2+\n(c) [Ni(C2O4)(en)2]2-\n(d) [Cr(SCN)2(NH3)4]+\n(d) [Cr(SCN)2(NH3)4]+\n\nQuestion 20.\nWhich of the following complexes exists as pair of enantiomers ?\n(a) [Co(NH3)4Cl2]+\n(b) [Cr(en)3]3+\n(c) [Co(P(C2H5)3)2ClBr]\n(d) trans- [Co(en)2Cl2 ]+\n(b) [Cr(en)3]3+\n\nQuestion 21.\nTwo isomers of a compound Co(NH3)3Cl3(MA3B3type) are shown in the figures.",
"The isomers can be classified as\n(a) (i) fac-isomers (ii) mer-isomer\n(b) (i) optical-isomer (ii) trans-isomer\n(c) (i) mer-isomer (ii) fac-isomer\n(d) (i) trans-isomer (ii) cis-isomer\n(a) (i) fac-isomers (ii) mer-isomer\n\nQuestion 22.\nWhich of the following compounds exhibits linkage isomerism ?\n(a) [Co(en)3]Cl3\n(b) [Co(NH3)6][Cr(en)3]\n(c) [Co(en)2 (NO2)Cl]Br\n(d) [Co(NH3)5Cl]Br2\n(c) [Co(en)2 (NO2)Cl]Br\n\nQuestion 23.\n[Pt(NH3)4][CuCl4] and tCu(NH3)4][PtCl4] are known is\n(a) ionisation isomers\n(b) coordination isomers\n(d) polymerisation isomers\n(b) coordination isomers",
"Question 24.\nWhich of the following isomers will give white precipitate with BaCl2 solution ?\n(a) [Co(NH3)5SO4]Br\n(b) [Co(NH3)5 Br]SO4\n(c) [Co(NH3)4(SO4)2]Br\n(d) [Co(NH3)4 Br(SO4)]\n(b) [Co(NH3)5 Br]SO4\n\nQuestion 25.\nCrCl3.6H2O exists in different isomeric forms which show different colours like violet and green. This is due to\n(a) ionisation isomerism\n(b) coordination isomerism\n(c) optical isomerism\n(d) hydrate isomerism\n(d) hydrate isomerism\n\nQuestion 26.\nThe hybridisation involved in [Co(C2O4)3]3- is\n(a) sp3d2\n(b) sp3d3\n(c) dsp3\n(d) d2sp3\n(d) d2sp3\n\nQuestion 27.\nWhich of the following complexes will have tetrahedral shape ?\n(a) [PdCl4]2-\n(b) [Pd(CN)4]2-\n(c) [Ni(CN)4l2-\n(d) [NiCl4]2-\n(d) [NiCl4]2-",
"Question 28.\nThe complex ion which has no d-electrons in the central metal atom is\n(a) [MnO4]\n(b) [Co(NH3)6]3+\n(c) [Fe(CN)6]3-\n(d) [Cr(H2O)6]+\n(a) [MnO4]\n\nQuestion 29.\nThe lowest value paramagnetism is shown by\n(a) [Co(CN)6]3-\n(b) [Fe(CN)6]3-\n(c) [Cr(CN)6]3-\n(d) [Mn(CN)6]3-\n(a) [Co(CN)6]3-\n\nQuestion 30.\n[CoF6]3- is\n(a) paramagnetic and undergoes sp3d2 hybridisation\n(b) diamagnetic and undergoes d2sp3 hybridisation\n(c) paramagnetic and undergoes sp3d hybridisation\n(d) diamagnetic and undergoes sp3 hybridisation\n(a) paramagnetic and undergoes sp3d2 hybridisation",
"Question 31.\nThe magnitude of magnetic moment (spin only) of [NiCl4]2- will be\n(a) 2.82B.M.\n(b) 3.25B.M.\n(c) 1.23 B.M.\n(d) 5.64 B.M.\n(a) 2.82B.M.\n\nQuestion 32.\nWhich of the following has largest paramagnetism ?\n(a) [Cr(H2O)6]3+\n(b) [Fe(H2O)6]2+\n(c) [Cu(H2O)6]2+\n(d) [Zn(H2O)2]2+\n(b) [Fe(H2O)6]2+\n\nQuestion 33.\nWhich of the following descriptions about [FeCl6 ]4- is correct about the complex ion ?\n(a) sp3 d, inner orbital complex, diamagnetic\n(b) sp3 d2, outer orbital complex, paramagnetic\n(c) d2sp3, inner orbital complex, paramagnetic\n(d) d2sp3, outer orbital complex, diamagnetic\n(b) sp3 d2, outer orbital complex, paramagnetic\n\nQuestion 34.\nWhen excess of ammonia is added to copper sulphate solution, the deep blue coloured complex is formed. The complex is\n(a) tetrahedral and paramagnetic\n(b) tetrahedral and diamagnetic\n(c) square planar and diamagnetic\n(d) square planar and paramagnetic\n(d) square planar and paramagnetic",
"Question 35.\nAmong the following compounds which is both paramagnetic and coloured ?\n(a) K2Cr2O7\n(b) [Co(SO4)]\n(c) (NH4)2[TiCl6]\n(d) K3[Cu(CN)4]\n(b) [Co(SO4)]\n\nQuestion 36.\nThe spin only magnetic moment value of Cr(CO)6 is\n(a) 2.84 B.M.\n(b) 4.90 B.M.\n(c) 5.92 B.M.\n(d) O.B.M.\n(d) O.B.M.\n\nQuestion 37.\nWhich of the following complexes will show maximum paramagnetism ?\n(a) 3d4\n(b) 3d5\n(c) 3d6\n(d) 3d7\n(b) 3d5\n\nQuestion 38.\nElectronic configuration of [Cu(NH3)6]2+ on the basis of crystal field splitting theory is",
"(b)\n\nQuestion 39.\nWhich of the following shall form an octahedral complex ?\n(a) d4 (low spin)\n(b) d8 (high spin)\n(c) d6 (low spin)\n(d) None of these\n(c) d6 (low spin)\n\nQuestion 40.\nThe value of the ‘spin only’ magnetic moment for one of the following configuration is 2.84 BM. The correct one is\n(a) d4(in strong ligand field)\n(b) d4 (in weak ligand field)\n(c) d3 (in weak as well as in strong fields)\n(d) d5 (in strong ligand field)\n(a) d4(in strong ligand field)",
"Question 41.\nCuSO4.5H2O is blue in colour while CuSO4 is colourless due to\n(a) presence of strong field ligand in CuSO4.5H2O\n(b) due to absence of water (ligand), d-d transitions are not possible in CuSO4\n(c) anhydrous CuSO4 undergoes d-d transitions due to crystal field splitting\n(d) colour is lost due to loss of unpaired electrons\n(b) due to absence of water (ligand), d-d transitions are not possible in CuSO4\n\nQuestion 42.\n[Fe(CN)6]4- and [Fe(H2O)6]2+ show different colours in dilute solution because\n(a) CN is a strong field ligand and H2O is a weak field ligand hence magnitude of CFSE is different\n(b) both CN and H2O absorb same wavelength of energy\n(c) complexes of weak field ligands are generally colourless\n(d) the sizes of CN and H2O are different hence their colours are also different\n(d) the sizes of CN and H2O are different hence their colours are also different\n\nQuestion 43.\nThe terminal and bridged CO ligands in the compound [Co2(CO)8] are respectively\n(a) 0, 2\n(b) 6, 1\n(c) 5,2\n(d) 6, 2\n(b) 6, 1",
"Question 44.\nThe geometry possessed by [Ni(CO)4] is\n(a) tetrahedral\n(b) square planar\n(c) linear\n(d) octahedral\n(a) tetrahedral\n\nQuestion 45.\nCr-C bond in the compound [Cr(CO)6] shows 71- character due to\n(a) covalent bonding\n(b) coordinate bonding\n(c) synergic bonding\n(d) ionic bonding\n(c) synergic bonding",
"Question 46.\nThe overall complex dissociation equilibrium constant for the complex [Cu(NH3)4]2+ ion will be (P4 for this complex is 2.1 x 1013)\n(a) 7 x 10-14\n(b) 2.1 x 1013\n(c) 11.9 x 10-2\n(d) 2.1 x 103\n(a) 7 x 10-14\n\nQuestion 47.\nMark the incorrect match\n(a) Insulin – Zinc\n(b) Haemoglobin – Iron\n(c) Vitamin Bp – Cobalt\n(d) Chlorophyll – Chromium\n(d) Chlorophyll – Chromium\n\nQuestion 48.\nThe correct IUPAC name of [Pt(NH3)2Cl2] is ……………….\n(a) diamminedichloridoplatinum (II)\n(b) diamminedichloridoplatinum (IV)\n(c) diamminedichloridoplatinum (0)\n(d) dichloridodiammineplatinum (IV)\n(a) diamminedichloridoplatinum (II)\n\nQuestion 49.\nThe stabilisation of coordination compounds due to chelation is called the chelate effect. Which of the following is the most stable complex species ?\n(a) [Fe(CO)5]\n(b) [Fe(CN)6]-\n(c) [Fe(C2O4)3]3-\n(d) [Fe(H2O)6]3+\n(c) [Fe(C2O4)3]3-\n\nQuestion 50.\nIndicate the complex ion which shows geometrical isomerism.\n(a) [Cr(H2O)4Cl2]+\n(b) [Pt(NH3)3Cl]\n(c) [Co(NH3)6]3+\n(d) [Co(CN)5(NC)]3-\n(a) [Cr(H2O)4Cl2]+",
"Question 51.\nThe CFSE for octa hedral [CoCl6]4- is 18,000 cm-1. The CFSE for tetrahedral [CoCl4 ]2 will be\n(a) 18,0000 cm-1\n(b) 16,000 cm-1\n(c) 8,000 cm-1\n(d) 20,000 cm-1\n(c) 8,000 cm-1\n\nQuestion 52.\nThe compounds [Co(SO4)(NH3)s] Br and [Co(SO4)(NH3)5]Cl represent\n(b) ionisation isomerism\n(c) coordination isomerism\n(d) no isomerism\n(d) no isomerism\n\nQuestion 53.\nA chelating agent has two or more than two donor atoms to bind to a single metal ion. Which of the following is not a chelating agent ?\n(a) Thiosulphato\n(b) Oxalato\n(c) Glycinato\n(d) Ethane-1. 2-diamine\n(a) Thiosulphato\n\nQuestion 54.\nWhich of the following species is not expected to be a ligand ?\n(a) NO\n(b) NH4\n(c) NH2CH2CH2NH2\n(d) CO"
https://jrobio.springeropen.com/articles/10.1186/s40638-017-0077-z
# Improved 3D measurement with a novel preprocessing method in DFP

## Abstract

Shadow and background are two common factors in digital fringe projection which lead to ambiguity in three-dimensional measurement, and thereby need to be seriously considered. Preprocessing is often needed to segment the object from invalid points. The existing segmentation approaches based on modulation normally perform well against a pure dark background, but lose accuracy with a white or complex background. In this paper, an accurate shadow and background removal technique is proposed, which segments the shadow by one threshold from the modulation histogram and segments the background by a threshold in the intensity histogram. Experiments are designed and conducted to verify the effectiveness and reliability of the proposed method.

## Background

Digital fringe projection (DFP) techniques are widely employed in flexible, non-contact and high-speed 3D shape measurement. In a DFP system, a sequence of phase-shifted sinusoidal fringes is projected onto the object by the projector; the fringes are distorted by the object surface and captured by a camera. A phase map can be retrieved from the deformed fringes, and the object height information is calculated from the phase map in a calibrated DFP system. However, shadow and background are inevitable, since the projector and camera are arranged at different viewpoints. Invalid points such as shadow and background should be identified and removed from the object.

Researchers have made great efforts to remedy the influence of invalid points, including shadow and background. Skydan et al. utilized multiple projectors to probe the object from different viewpoints to achieve shadow-free reconstruction. However, the increased hardware cost keeps this method from being commonly utilized.
Zhang proposed to apply a Gaussian filter to the fringes to remove random noise and to identify the invalid points by the monotonicity of the unwrapped phase. However, the Gaussian filter introduces errors to the object details. Chen et al. applied a threshold to the least-squares fitting errors in temporal phase unwrapping for invalid point detection. However, this method is vulnerable to noise.

Huang and Asundi proposed a compact framework combining modulation, RMS error and monotonicity for shadow and background removal and error detection. Intensity modulation is very effective in measuring how informative the pixels are, and can be used to detect background and shadow. However, manually adjusting the threshold is time-consuming. In practice, the threshold selection is subject to measurement conditions such as the environmental illumination and object surface characteristics. Lu et al. proposed a technique to remove shadow points by mapping the 3D results into projector coordinates, so that the modulation is not needed. However, this method can only detect shadow caused by the DFP system.

Otsu’s method is widely utilized for thresholding in image segmentation, being automatic and efficient. However, it fails to provide an optimal threshold when the number of classes to be separated increases or when the intensity histogram is close to a unimodal distribution. Ng improved this technique through a weighting factor, considering the occurrence probability of the threshold point. Both Otsu’s method and Ng’s method aim at image segmentation based on the intensity histogram. The literature utilized the automatic thresholding method on the modulation histogram for object detection. However, that method can only deal with a dark background with low modulation, since the background and shadow have similarly low modulation while the object has an obviously higher modulation level, so only one threshold is needed to segment the object.
When the background is a white board, or is complex with a higher or similar modulation level, it is difficult to segment the background from the object. In this situation, there will be three classes in the modulation map, and two thresholds are needed to separate the object from the background and shadow, as shown in Fig. 1. The existing method cannot deal well with this situation.

In this paper, we apply the multi-thresholding technique to the modulation histogram and propose a preprocessing method to detect the valid points of the object by firstly segmenting the shadow using one threshold from the modulation histogram. Secondly, we project one more picture onto the object and reference plane and calculate the intensity difference of the captured images, and the histogram of the difference map is analyzed for background detection. We call this additional picture the coding map.

The rest of this paper is organized as follows: we introduce the related principles and existing methods in “Related work”. In “Methods”, we introduce the details of how to implement our proposed object segmentation technique. In the experiments and results part, we present and compare segmentation results using our method and the expanded conventional method. The 3D shape reconstruction result is also presented in that section. In the end, we make a summary in “Conclusion”.

## Related work

### N-step phase shifting and modulation

Phase-shifting algorithms are widely utilized in stationary object measurement due to their high accuracy and flexibility. They carry out point-by-point measurement and calculate a wrapped phase value from −π to π.
For the N-step phase-shifting method, sinusoidal fringes with the following intensity distribution are often used,

$$I_{n}(x,y) = I_{\text{a}} + I_{\text{m}} \cos\left[\varphi(x,y) + \frac{2\pi(n-1)}{N}\right]$$
(1)

where n is the phase-shifting number and N is the total number of phase-shifting steps. I_n is the intensity map of the nth sinusoidal fringe pattern, and I_a and I_m are the average intensity and modulation intensity, respectively. The wrapped phase φ^w can be calculated as,

$$\varphi^{\text{w}} = -\tan^{-1} \frac{\sum_{n=0}^{N-1} I_{n} \cdot \sin\frac{2n\pi}{N}}{\sum_{n=0}^{N-1} I_{n} \cdot \cos\frac{2n\pi}{N}}$$
(2)

The modulation M is defined as,

$$M = \frac{2}{N}\sqrt{\left[\sum_{n=0}^{N-1} I_{n} \cdot \sin\frac{2n\pi}{N}\right]^{2} + \left[\sum_{n=0}^{N-1} I_{n} \cdot \cos\frac{2n\pi}{N}\right]^{2}}$$
(3)

It shows how much useful information is contained in each pixel. It is usually selected as the reliability map to guide phase unwrapping and object segmentation. If a proper threshold t is found, the object can be identified among the background, the shadow and the less informative pixels. However, manually adjusting the modulation threshold is very tedious and unstable, since the modulation varies according to measuring conditions, such as incoherent illumination, the reflectivity of the object and background, and occlusion caused by object step height.

### Existing methods of threshold selection

Otsu’s method is commonly utilized for quick segmentation of the object and background based on image intensity.
For a given image, distribute the gray levels into L bins ranging from 1 to L, let k_i be the total number of pixels with gray level i, and let K be the total number of pixels, $$K = k_{1} + k_{2} + \cdots + k_{\text{L}}$$. The occurrence probability of gray level i is,

$$p_{i} = \frac{{k_{i} }}{K},\quad p_{i} \ge 0, \quad \mathop \sum \limits_{i = 1}^{L} p_{i} = 1.$$
(4)

When a single threshold is applied, the pixels of the image are divided into two classes (typically the object versus background plus shadow): class C_0 contains the pixels with levels $$\left\{ {k_{1} ,k_{2} , \ldots ,k_{t} } \right\}$$, and class C_1 contains the pixels with levels $$\left\{ {k_{t + 1} ,k_{t + 2} , \ldots ,k_{L} } \right\}$$, where t is the threshold level to be determined. The occurrence probability of each class is,

$$\omega_{0} = P_{r} \left( {C_{0} } \right) = \mathop \sum \limits_{i = 1}^{t} p_{i} = \omega \left( t \right)$$
(5)
$$\omega_{1} = P_{r} \left( {C_{1} } \right) = \mathop \sum \limits_{i = t + 1}^{L} p_{i} = 1 - \omega \left( t \right)$$
(6)

and the class mean levels are,

$$\mu_{0} = \mathop \sum \limits_{i = 1}^{t} i \cdot p_{i} /\omega_{0} = \mu \left( t \right)/\omega \left( t \right)$$
(7)
$$\mu_{1} = \mathop \sum \limits_{i = t + 1}^{L} i \cdot p_{i} /\omega_{1} = \frac{{\mu_{\varGamma } - \mu \left( t \right)}}{1 - \omega \left( t \right)}$$
(8)

where ω(t) and μ(t) are the zeroth-order and first-order cumulative moments of the histogram up to the tth level, respectively.
The total average gray level of the whole image is,

$$\mu_{\varGamma } = \mathop \sum \limits_{i = 1}^{L} i \cdot p_{i}$$
(9)

For any choice of t, it is easily verified that

$$\omega_{0} \cdot \mu_{0} + \omega_{1} \cdot \mu_{1} = \mu_{\varGamma }$$
(10)
$$\omega_{0} + \omega_{1} = 1$$
(11)

According to discriminant criterion analysis, Otsu showed that the optimal threshold $$t^{*}$$ maximizes the between-class variance,

$$t^{*} = {\text{Arg}}\,{\text{Max}}\left\{ {\sigma_{B}^{2} \left( t \right)} \right\}$$
(12)

where the between-class variance $$\sigma_{\text{B}}^{2}$$ is defined as,

$$\sigma_{\text{B}}^{2} = \omega_{0} \left( {\mu_{0} - \mu_{\varGamma } } \right)^{2} + \omega_{1} \left( {\mu_{1} - \mu_{\varGamma } } \right)^{2}$$
(13)

The optimal threshold $$t^{*}$$ is often calculated from an equivalent but simpler criterion,

$$t^{*} = {\text{Arg}}\,{\text{Max}}\left\{ {\omega_{0} \mu_{0}^{2} + \omega_{1} \mu_{1}^{2} } \right\}$$
(14)

Otsu's method works well on bimodal histograms, but is not robust when the histogram is unimodal or close to unimodal. Ng improved Otsu's method with a valley-emphasis weighting factor, so that the selected threshold combines a small occurrence probability (a valley) with a large between-class variance. The threshold of Ng's method is,

$$t_{\text{v}}^{*} = {\text{Arg}}\,{\text{Max}}\left\{ {\left( {1 - p_{t} } \right)\sigma_{\text{B}}^{2} \left( t \right)} \right\}$$
(15)

These two automatic threshold selection methods were designed for image segmentation based on the gray-level histogram; the literature applies them to the modulation histogram for object segmentation.
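As a concrete illustration, the criteria of Eqs. (14) and (15) can be sketched in a few lines of Python/NumPy. This is our own sketch, not code from the paper: it scans all candidate thresholds of a histogram and returns the arg-max of the chosen criterion.

```python
import numpy as np

def otsu_threshold(hist, valley_emphasis=False):
    """Select a threshold from a gray-level (or modulation) histogram.

    Implements the criterion of Eq. (14) (Otsu) and, when
    valley_emphasis=True, the weighted criterion of Eq. (15) (Ng).
    `hist` is a 1-D array of counts; the returned index t means
    class C0 = levels <= t and class C1 = levels > t.
    """
    p = hist.astype(float) / hist.sum()      # occurrence probabilities, Eq. (4)
    levels = np.arange(len(p))
    omega = np.cumsum(p)                     # omega(t), Eq. (5)
    mu = np.cumsum(levels * p)               # mu(t), first-order cumulative moment
    mu_total = mu[-1]                        # mu_Gamma, Eq. (9)

    best_t, best_score = 0, -np.inf
    for t in range(len(p) - 1):
        w0, w1 = omega[t], 1.0 - omega[t]
        if w0 == 0.0 or w1 == 0.0:           # skip degenerate splits
            continue
        mu0 = mu[t] / w0                     # Eq. (7)
        mu1 = (mu_total - mu[t]) / w1        # Eq. (8)
        score = w0 * mu0 ** 2 + w1 * mu1 ** 2   # Eq. (14)
        if valley_emphasis:
            score *= (1.0 - p[t])            # Eq. (15): favour low-occurrence valleys
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```

On a clearly bimodal histogram both variants place the threshold in the valley between the two modes; the valley-emphasis weight only matters when the valley is shallow.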
However, in their work the background is dark, so the invalid points in the shadow and background have a low modulation level while the object has a higher one; a single threshold is then enough to segment the object. As shown in Fig. 1, Fig. 1a shows a captured fringe pattern on the object with a dark background, Fig. 1b the modulation map of the captured fringes, and Fig. 1c the histogram of the modulation map. The modulation histogram falls into two classes, and it is easy to find the threshold t_1 that separates the valid points from the invalid points.

In practice, the modulation histogram does not necessarily fall into two classes, for example when a white board is used as the background for system calibration, as shown in Fig. 1d. Figure 1e shows the modulation map of Fig. 1d, and Fig. 1f the histogram of the modulation map. When the background is a white board, its modulation level is high, and the histogram in Fig. 1f splits into three categories: the background has a middle-to-high modulation level, the object a medium level, and the shadow a low level. Two thresholds must be calculated to segment the shadow and the background separately. In this situation, the conventional method cannot be applied directly.

## Methods

To segment the object from a white or complex background, we first apply the expanded Ng's method for multi-threshold calculation on the modulation histogram; we then apply the proposed method for shadow and background detection. Figure 2 shows the flowchart of our method. The first threshold calculated from the modulation histogram is used for shadow segmentation. For background segmentation, we project one coding image onto the object, calculate the intensity difference between the object and the background, and use a threshold on the intensity histogram to remove the background.
Details of the shadow and background segmentation are given below.

### Expanded thresholding method

The literature has improved and applied Ng's method for single thresholding of the fringe modulation histogram for object detection in the digital fringe projection technique, but it only discussed the situation of a dark background, where one threshold suffices. For a DFP system with a white or complex background, we apply the multi-thresholding version of Ng's method to the modulation histogram. The expanded Ng's method can be described by,

$$\left\{ {t_{1}^{*} ,t_{2}^{*} , \ldots t_{M - 1}^{*} } \right\} = {\text{Arg }}\,{\text{Max}}\left\{ {\left( {1 - \mathop \sum \limits_{j = 1}^{M - 1} p_{tj} } \right)\left( {\mathop \sum \limits_{k = 1}^{M} \omega_{k} \cdot \mu_{k}^{2} } \right)} \right\}$$
(16)

With this equation, the two thresholds t_1 and t_2 in Fig. 1f can be calculated. Pixels with a modulation level smaller than t_1 are regarded as shadow, pixels with a modulation level larger than t_2 as background, and the object pixels lie at the medium modulation level. However, multi-threshold calculation is less credible. Worse, when the background is complex, with modulation levels spread over a large range, it is difficult to segment the background by modulation alone. In our method, only t_1 is used, for shadow detection, and the background is segmented from the image intensity. Figure 3 shows the preliminary detection results; black pixels are shadow and invalid points.

### Intensity-based background segmentation

For background segmentation, we project an extra coding image with the intensity of Eq. (17) onto the object and background and analyze the intensity of their difference to calculate a reliable intensity threshold t_in.

$$I\left( {x,y} \right) = 255 \times \frac{x}{N}$$
(17)

Here 255 is the total gray-level range, and N is the number of columns of the projected image.
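The ramp of Eq. (17) can be generated directly; the sketch below is our own illustration in Python/NumPy (the paper does not specify an implementation), with the projector resolution left as parameters.

```python
import numpy as np

def coding_image(rows, cols):
    """Generate the coding map of Eq. (17): I(x, y) = 255 * x / N.

    Every row is the same left-to-right ramp rising from 0 toward 255,
    where N (= cols) is the number of projector columns.
    """
    x = np.arange(cols, dtype=float)
    ramp = 255.0 * x / cols                      # Eq. (17)
    return np.tile(ramp, (rows, 1)).astype(np.uint8)
```

Projecting this ramp makes the captured intensity vary monotonically across the scene, which is what makes the object/reference difference map informative for background detection.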
The coding image for projection is shown in Fig. 4. The captured coding image on the reference plane, I_flat, is shown in Fig. 5a, and the captured coding image on the object, I_obj, in Fig. 5b. The intensity difference map I_diff shown in Fig. 5c is calculated by subtracting I_flat from I_obj; here (x, y) is omitted for simplicity.

$$I_{\text{obj}} - I_{\text{flat}} = I_{\text{diff}}$$
(18)

Since this extra projected image contains much useful information for background detection, we call it the coding map.

The histogram of the difference map I_diff is shown in Fig. 6a. Using the single-threshold criterion, we can calculate a reliable intensity threshold t_in for segmenting the background. The cross-section intensity of the 150th row of Fig. 5a–c is shown in Fig. 6b.

With the multi-thresholding Ng's method applied to the modulation histogram, the object valid-points matrix V_valid is computed as,

$$V_{\text{valid}} = B\left( {M,t_{1} } \right) \circ \neg B\left( {M,t_{2} } \right)$$
(19)

where B is a binary matrix with the same size as the modulation map M,

$$B_{ij} \left( {M,t} \right) = \left\{ {\begin{array}{*{20}l} {1,} \hfill & {{\text{where}}\quad M_{ij} > t} \hfill \\ {0,} \hfill & {{\text{where}}\quad M_{ij} \le t} \hfill \\ \end{array} } \right.$$

t_1 and t_2 are the first and second thresholds of the modulation histogram calculated by Eq. (16), ∘ denotes the Hadamard (element-wise) product of two matrices, and $$\neg$$ denotes logical negation. Because multi-threshold calculation is less credible and the background may be complex, we instead analyze the intensity difference of the coding map to find t_in for background segmentation, while the lower threshold t_1 from the modulation histogram is still used for shadow detection.
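The mask arithmetic of Eq. (19), and of the proposed variant in Eq. (20) below, reduces to element-wise operations on binary arrays. The following Python/NumPy sketch is our own illustration, not code from the paper:

```python
import numpy as np

def binarize(A, t):
    """B(A, t) of the text: 1 where A > t, 0 elsewhere."""
    return (A > t).astype(np.uint8)

def valid_points(M, I_diff, t1, t_in):
    """Proposed mask of Eq. (20): keep pixels that are above the shadow
    threshold t1 in the modulation map M (not shadow) and not above the
    intensity threshold t_in in the difference map I_diff (not background).

    The Hadamard product is element-wise multiplication, and (1 - mask)
    implements the logical negation of a binary mask.
    """
    return binarize(M, t1) * (1 - binarize(I_diff, t_in))
```

Replacing `I_diff`/`t_in` with `M`/`t2` in `valid_points` would give the purely modulation-based mask V_valid of Eq. (19).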
The proposed object valid-points matrix V_pro is calculated as,

$$V_{\text{pro}} = B\left( {M,t_{1} } \right) \circ \neg B\left( {I_{\text{diff}} ,t_{\text{in}} } \right)$$
(20)

where I_diff is the intensity difference map from Eq. (18) and t_in is the intensity threshold.

## Experiments and results

Experiments were carried out to test the proposed shadow and background removal technique. The DFP 3D shape measurement system in Fig. 7, with a defocused projector projecting binary fringes of width T = 30, is employed to measure the 3D objects. Using defocused binary fringes avoids nonlinear gamma correction. The projected fringes are deformed by the object and captured by a camera; the phase of the object surface is retrieved by the phase-shifting technique, and the height information is calculated after system calibration. The hardware includes a DLP projector (AAXA P4-X, native resolution 480 × 854 pixels) and a CCD camera (Point Grey FL3-U3-13S2M-CS, resolution 1328 × 1048 pixels) fitted with a 6-mm focal-length lens (Kowa LM6JC). The projection distance is about 40 cm.

In this experiment, two different objects are tested and segmented; the results are shown in Fig. 8 for the first object and Fig. 9 for the second, and the calculated thresholds are listed in Table 1. Three projector defocusing levels are used to produce different fringe contrasts and modulation levels. Figure 8a shows the modulation histogram of the captured fringe patterns, and Fig. 8b the histogram of the intensity difference of the captured coding image. Figure 8c shows the object segmentation with a single threshold: one threshold is not enough to segment the whole object when the background has a high modulation level; it only separates the shadow from the object.
Figure 8d shows the object detected with the modulation thresholds t_1 and t_2. This separates the shadow and background from the object, but part of the background is still detected as valid object points, for two reasons: first, multi-threshold calculation is not always credible; second, when the background is complicated, with modulation levels distributed over both the second and third clusters, background segmentation based on modulation alone is prone to error. Figure 8e shows the object detected by our proposed method, in which the background is segmented with the threshold t_in obtained from the intensity difference histogram of the coding map in Fig. 8b; the detected object is more accurate than in Fig. 8c. Similar trends are shown in Fig. 8f–j for a slightly defocused projector and in Fig. 8k–o for a strongly defocused projector, which provide different fringe contrasts and modulation levels. As the projector defocusing level increases, the modulation thresholds t_1 and t_2 become smaller, because defocusing generally depresses the fringe modulation level. The same experiments were performed on the second object, with similar results shown in Fig. 9. To demonstrate that the proposed method can also cope with a more complex background, we placed a small statue near the measured object. Results are shown in Fig. 10: Fig. 10a shows the modulation histogram of the captured fringes, Fig. 10b the histogram of the intensity difference of the captured coding map, and Fig. 10c the object with the small statue beside it. The object segmented by Ng's method based on modulation is shown in Fig. 10d, and by our proposed method in Fig. 10e. Our method accurately segments the object from the complex background, while the modulation-based method cannot.
In most practical conditions, the proposed method segments the valid points of the object more accurately than segmentation based on modulation alone.

### 3D reconstruction

After the phase map of the object is retrieved, the height information can be calculated through system calibration. One commonly used approach calibrates the camera and the projector separately to find the system parameters. This kind of method is easy to understand, because each parameter has a geometric meaning, but it is also time-consuming and error-prone: since the projector is treated as an inverse camera, its calibration accuracy depends on the camera calibration process. In this work, we apply a published calibration framework to calculate the height information of the object.

For a general DFP system with an arbitrary arrangement, the governing equation of the 3D height is [18, 19],

\begin{aligned} z & = f_{c} /f_{d} , \\ f_{c} & = 1 + c_{1} \varphi + \left( {c_{2} + c_{3} \varphi } \right)i + \left( {c_{4} + c_{5} \varphi } \right)j \\ & \quad + \left( {c_{6} + c_{7} \varphi } \right)i^{2} + (c_{8} + c_{9} \varphi )j^{2} , \\ f_{d} & = d_{0} + d_{1} \varphi + \left( {d_{2} + d_{3} \varphi } \right)i + \left( {d_{4} + d_{5} \varphi } \right)j \\ & \quad + \left( {d_{6} + d_{7} \varphi } \right)i^{2} + (d_{8} + d_{9} \varphi )j^{2} , \\ \end{aligned}
(21)

where z is the height at pixel (i, j) and φ is the phase value of the projected fringe at that pixel. c_1–c_9 and d_0–d_9 are constants related to the system parameters. To determine these 19 coefficients, we need the height of a set of sample points on the calibration board together with their corresponding phase φ and pixel positions (i, j), and a least-squares algorithm to fit the coefficients.

In our experiment, a 2D checkerboard with 12 × 16 black and white squares is used as the calibration object.
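Once the coefficients are known, evaluating the rational model of Eq. (21) per pixel is straightforward. The following Python sketch is our own illustration of that evaluation (the coefficient values themselves come from the calibration fit described below):

```python
def height_from_phase(phi, i, j, c, d):
    """Evaluate z = f_c / f_d of Eq. (21) at pixel (i, j) with phase phi.

    `c` holds c1..c9 (the leading 1 in f_c is fixed by the model) and
    `d` holds d0..d9, all obtained from the least-squares calibration.
    """
    c1, c2, c3, c4, c5, c6, c7, c8, c9 = c
    d0, d1, d2, d3, d4, d5, d6, d7, d8, d9 = d
    f_c = (1 + c1 * phi
           + (c2 + c3 * phi) * i + (c4 + c5 * phi) * j
           + (c6 + c7 * phi) * i ** 2 + (c8 + c9 * phi) * j ** 2)
    f_d = (d0 + d1 * phi
           + (d2 + d3 * phi) * i + (d4 + d5 * phi) * j
           + (d6 + d7 * phi) * i ** 2 + (d8 + d9 * phi) * j ** 2)
    return f_c / f_d
```

Because both numerator and denominator are linear in the coefficients, the linearized residual f_c − f_d·z is what makes the initial linear least-squares guess of Eq. (23) possible.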
The calibration consists of obtaining the 3D coordinates and phase values of all calibration points on the checkerboard at ten different positions. Phase-shifted sinusoidal fringes and an extra white image are projected onto the calibration board and captured by the camera. The camera intrinsic and extrinsic parameters are calibrated from the captured checkerboard images. We denote points in the world and camera coordinate systems by $$\left\{ {x_{\text{w}} , \left. {y_{\text{w}} , z_{\text{w}} } \right\}} \right.^{\text{T}}$$ and $$\left\{ {x_{\text{c}} , \left. {y_{\text{c}} , z_{\text{c}} } \right\}} \right.^{\text{T}}$$, respectively. With z_w set to zero on the board, the relationship between the world and camera coordinate systems is,

$$\left\{ {\begin{array}{*{20}c} {x_{\text{c}} } \\ {y_{\text{c}} } \\ {z_{\text{c}} } \\ \end{array} } \right\} = \left[ {\begin{array}{*{20}c} {R_{11} } & {R_{12} } & {T_{1} } \\ {R_{21} } & {R_{22} } & {T_{2} } \\ {R_{31} } & {R_{32} } & {T_{3} } \\ \end{array} } \right]\left\{ {\begin{array}{*{20}c} {x_{\text{w}} } \\ {y_{\text{w}} } \\ 1 \\ \end{array} } \right\},$$
(22)

where R and T are the rotation and translation elements of the camera extrinsic parameters. Using Eq. (22), all calibration points can be expressed in the camera coordinate system. The first calibration board position is taken as the reference plane and its coordinate system as the world coordinate system. The literature computes the reference plane equation in the camera coordinate system and takes the distance of each calibration point to this plane as the point's height.
In our experiments, all calibration points are instead transformed to the world coordinate system by their respective transformation matrices, and z_w is then taken as each point's height.

The system coefficients c_1–c_9 and d_0–d_9 are computed by minimizing the nonlinear least-squares error function,

$$\arg \mathop {\hbox{min} }\limits_{c,d} \mathop \sum \limits_{k = 1}^{m} \left( {\frac{{f_{c} }}{{f_{d} }} - z_{k}^{b} } \right)^{2} ,$$
(23)

where k is the index of each point and m the total number of points. An initial guess of the coefficients c_1–c_9 and d_0–d_9 is obtained by minimizing the linear least-squares error $$S = \mathop \sum \limits_{k = 1}^{m} \left( {f_{c} - f_{d} z_{k}^{b} } \right)^{2}$$; the Levenberg–Marquardt algorithm is then used to refine the result.

The reconstructed 3D object is shown in Fig. 11. The object in Fig. 11a was preprocessed by segmentation based on the modulation histogram alone, and that in Fig. 11b by the proposed method, in which both the modulation and the intensity histograms are analyzed. The modulation-based segmentation removes the shadow correctly, as does the proposed method; however, in Fig. 11a part of the measurement platform is kept as part of the object although it should have been removed as background, while the proposed method accurately removes both the shadow and the complex background from the object points.

## Conclusion

In this paper, we proposed a novel preprocessing method for object segmentation in DFP 3D shape measurement. We first applied the multi-threshold Ng's method to the modulation histogram and then proposed a shadow and background detection scheme based on the modulation and intensity histograms. Experiments verified that the proposed method improves 3D shape measurement with white and complex backgrounds.

## References

1. Gorthi SS, Rastogi P. Fringe projection techniques: whither we are? Opt Lasers Eng. 2010;48(2):133–40.
2. Guo Q, Xi J, Song L. Fringe pattern analysis with message passing based expectation maximization for fringe projection profilometry. IEEE Access. 2016;4:4310–20.

3. Skydan OA, Lalor MJ, Burton DR. Using coloured structured light in 3-D surface measurement. Opt Lasers Eng. 2005;43:801–14.

4. Zhang S. Phase unwrapping error reduction framework for a multiple-wavelength phase-shifting algorithm. Opt Eng. 2009;48(10):105601.

5. Chen F, Su X, Xiang L. Analysis and identification of phase error in phase measuring profilometry. Opt Express. 2010;18(11):11300–7.

6. Huang L, Asundi AK. Phase invalidity identification framework with the temporal phase unwrapping method. Meas Sci Technol. 2011;22(3):35304.

7. Lu L, Xi J, Yu Y, Guo Q, Yin Y, Song L. Shadow removal method for phase-shifting profilometry. Appl Opt. 2015;54(19):6059.

8. Zhang W, Li W, Yan J, Yu L. Adaptive threshold selection for background removal in fringe projection profilometry. Opt Lasers Eng. 2017;90:209–16.

9. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern. 1979;9(1):62–6.

10. Ng HF. Automatic thresholding for defect detection. Pattern Recognit Lett. 2006;27(14):1644–9.

11. Malacara D. Optical shop testing, vol. 59. New York: Wiley; 2007.

12. Su X, Chen W. Reliability-guided phase unwrapping algorithm: a review. Opt Lasers Eng. 2004;42(3):245–61.

13. Gdeisat M, Burton D, Lilley F, Arevalillo-Herráez M. Fast fringe pattern phase demodulation using FIR Hilbert transformers. Opt Commun. 2016;359:200–6.

14. Xiao Y, Li Y. High-quality binary fringe generation via joint optimization on intensity and phase. Opt Lasers Eng. 2017;90:19–26.

15. Vo M, Wang Z, Hoang T, Nguyen D. Flexible calibration technique for fringe-projection-based three-dimensional imaging. Opt Lett. 2010;35(15):3192–4.

16. Li Z, et al.
Accurate calibration method for a structured light system. Opt Eng. 2008;47(5):053604. http://dx.doi.org/10.1117/1.2931517

17. Zhang X, Zhu L. Projector calibration from the camera image point of view. Opt Eng. 2009;48(11):117208. http://dx.doi.org/10.1117/1.3265551

18. Huang L, Chua P, Asundi A. Least-squares calibration method for fringe projection profilometry considering camera lens distortion. Appl Opt. 2010;49(9):1539–48.

19. Wang Z, Nguyen D, Barnes J. Some practical considerations in fringe projection profilometry. Opt Lasers Eng. 2010;48(2):218–25.

## Authors' contributions

YX built the experiment system, implemented the algorithm, collected and analyzed the data, and wrote the manuscript. YL supervised the main idea and revised the manuscript. Both authors read and approved the final manuscript.

### Competing interests

The authors declare that they have no competing interests.

### Funding

This work was financially supported by the Research Grants Council of Hong Kong (Project No. CityU 11205015), the National Natural Science Foundation of China (Grant No. 61673329) and the Center for Robotics and Automation (CRA) at CityU. The funding body had no direct input on the data collection, the design or execution of the experiments, or the writing of the manuscript.

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Author information

Correspondence to You-Fu Li.
"# Hidden markov model tutorial in r\n\n### Hidden Markov Models iui.ku.edu.tr",
null,
"A Novel Approach for Record Deduplication using Hidden. Hidden Markov Models (Part 1) BMI/CS 576 www.biostat.wisc.edu/bmi576.html Mark Craven craven@biostat.wisc.edu Fall 2011 A simple HMM A C T, Do you know any good literature and/or tutorials about how to implement HMM in python, R (Bioconductor)? (especially for sequence analysis).\n\n### depmixS4 An R Package for Hidden Markov Models\n\nHidden Markov Models with example YouTube. Generating a DNA sequence using a multinomial modelВ¶ We can use R to generate a DNA sequence using a particular multinomial model. in a Hidden Markov model, An Introduction to Markov Modeling: Concepts and Uses //ntrs.nasa.gov/search.jsp?R lustrate each modeling situation covered. This tutorial will be aimed at.\n\nHidden Markov Models Andrew W. Moore or the following link to the source repository of Andrew’s tutorials: R STATE q = Location of Robot, A step-by-step tutorial on HMMs (University of Leeds) Hidden (a portable toolkit for building and manipulating hidden Markov models) Hidden Markov Model R\n\nHidden Markov Models for classification tasks using hidden markov models. The tutorial series will cover how to build and train a hidden markov models in R. I am going to tell you a story. A story where a Hidden Markov Model(HMM) is used to nab a thief even when there were no real witnesses at the scene of crime; you’ll\n\nthe Baum-Welch algorithm for Hidden Markov Models, but instead, refer you to a very good and old tutorial by L. Rabiner on HMM models and their estimation. 30/04/2013В В· Hidden Markov Models, with example Hidden Markov Model: Markov Chain Matlab Tutorial--part 1 - Duration: 10:52.\n\nAn Introduction to Hidden Markov Models of these models. It is the purpose of this tutorial paper to An HMM is a doubly stochastic process with an unde'r In this paper we describe an algorithm for clustering multivariate time series with variables taking Rabiner L.R. 
A tutorial on hidden Markov models and selected\n\nKISS ILVB Tutorial Hidden Markov Model Hidden Markov Model What is вЂhidden’? V = { R, G, B } • Initial state distribution KISS ILVB Tutorial Hidden Markov Model Hidden Markov Model What is вЂhidden’? V = { R, G, B } • Initial state distribution\n\nThe Application of Hidden Markov Models the hidden Markov model ous density HMMs were introduced.1 An excellent tutorial covering the Hidden Markov model \"An Introduction to Hidden Markov Models,\" 2001. L. R. Rabiner, \"A tutorial on hidden Markov models and selected applications in speech\n\nReveals How HMMs Can Be Used as General-Purpose Time Series Models Implements all methods in RHidden Markov Models for Time Series: An Introduction Using R applies I'm not sure what exactly you want to do, but you might find this excellent tutorial on hidden Markov models using R useful. You build the functions and Markov models\n\nr hidden-markov-model. and I mainly studied them on the Rabiner tutorial from 1989 and the book \"Hidden Markov Models for newest hidden-markov-model Hidden Markov Models: The objective of this tutorial is to introduce basic concepts of a Hidden Markov Model (HMM). The tutorial is intended for the R E\n\n2 1 Hidden Markov Models Deп¬Ѓnition 1.1. A kernel from a measurable space (E,E) to a measurable space (F,F) is a map P : E Г—F в†’ R + such that r hidden-markov-model. and I mainly studied them on the Rabiner tutorial from 1989 and the book \"Hidden Markov Models for newest hidden-markov-model\n\n“A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition, Lalit R. Bahl, Frederick Jelinek and Robert L. Mercer. An Application of Hidden Markov Model. for details and Getting Started with Hidden Markov Models in R for a very brief information of HMM model using R.\n\nIn this paper we describe an algorithm for clustering multivariate time series with variables taking Rabiner L.R. 
A tutorial on hidden Markov models and selected An Introduction to Hidden. Markov Models ,-'\" 4 - is the purpose of this tutorial paper to L. R. Rabiner B. H. luang . Consider\n\nPackage вЂHMM ’ February 19, 2015 A Tutorial on Hidden Markov Models and Selected Applications in Lawrence R. Rabiner: A Tutorial on Hidden Markov Models A Revealing Introduction to Hidden Markov Models This tutorial was originally published online we want to uncover the hidden part of the Hidden Markov Model.\n\nA Revealing Introduction to Hidden Markov Models This tutorial was originally published online we want to uncover the hidden part of the Hidden Markov Model. r ainy and fo ggy Lets assume for the Markov Assumption In a sequence f w n w g P w n j This is called a Hidden Mark o v Mo dels So what mak es a Hidden Mark\n\nAn Application of Hidden Markov Model. for details and Getting Started with Hidden Markov Models in R for a very brief information of HMM model using R. Hidden Markov Models for classification tasks using hidden markov models. The tutorial series will cover how to build and train a hidden markov models in R.\n\nHidden Markov model (HMM) L. R. Rabiner, \"A tutorial on hidden Markov models and selected applications in speech recognition,\" Proceedings of the IEEE, “A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition, Lalit R. Bahl, Frederick Jelinek and Robert L. Mercer.\n\nHidden and non Hidden Markov Models HMM. (hidden) Markov Models of biased , e.g. modelled by a finite-state model, or by any \"left to right\" model: D. R Hidden Markov models. r s t b v f o • Here we have to determine the best sequence of hidden states, the one that most likely produced word image.\n\nHidden Markov Models: The objective of this tutorial is to introduce basic concepts of a Hidden Markov Model (HMM). The tutorial is intended for the R E An Introduction to Hidden. Markov Models ,-'\" 4 - is the purpose of this tutorial paper to L. R. Rabiner B. H. 
luang . Consider\n\nthe Baum-Welch algorithm for Hidden Markov Models, but instead, refer you to a very good and old tutorial by L. Rabiner on HMM models and their estimation. Bioinformatics Introduction to Hidden Markov Path through states aligns sequence to model F i g u r e f r o m (K A tutorial on hidden Markov models and\n\nThis post will explore how to train hidden markov models in R. The previous posts in this series detailed the maths that power the HMM, fortunately all of this has Hidden Markov Models Andrew W. Moore or the following link to the source repository of Andrew’s tutorials: R STATE q = Location of Robot,\n\nGenerating a DNA sequence using a multinomial modelВ¶ We can use R to generate a DNA sequence using a particular multinomial model. in a Hidden Markov model 2 1 Hidden Markov Models Deп¬Ѓnition 1.1. A kernel from a measurable space (E,E) to a measurable space (F,F) is a map P : E Г—F в†’ R + such that\n\nHidden Markov Models – Examples In R – Part 3 of 4 Gekko. 12/11/2018В В· Bayesian Hierarchical Hidden Markov Models applied to r stan hidden-markov-model gsoc HMMLab is a Hidden Markov Model editor oriented on, Hidden and non Hidden Markov Models HMM. (hidden) Markov Models of biased , e.g. modelled by a finite-state model, or by any \"left to right\" model: D. R.\n\n### Hidden Markov Models for Speech Recognition B. H. Juang L",
"Hidden Markov Models – Examples In R – Part 3 of 4 Gekko. References Discrete State HMMs: A. W. Moore, Hidden Markov Models. Slides from a tutorial presentation. L. R. Rabiner (1989), A Tutorial on Hidden Markov Models, Hidden Markov Models for Regime Detection using R Hidden Markov Models for Regime Detection using R. Hidden Markov Models (Pomegranate Tutorial),.",
"### Hidden Markov Models for Time Series An Introduction",
"A Novel Approach for Record Deduplication using Hidden. 2 1 Hidden Markov Models Definition 1.1. A kernel from a measurable space (E,E) to a measurable space (F,F) is a map P : E ×F → R + such that This is the 2nd part of the tutorial on Hidden Markov models. In this post we will look at a possible implementation of the described algorithms and estimate model.",
"• Hidden Markov Models Aprendizaje automГЎtico Python\n• Clustering Multivariate Time Series Using Hidden Markov Models\n• Newest 'hidden-markov-model' Questions Cross Validated\n\n• 30/04/2013В В· Hidden Markov Models, with example Hidden Markov Model: Markov Chain Matlab Tutorial--part 1 - Duration: 10:52. An Application of Hidden Markov Model. for details and Getting Started with Hidden Markov Models in R for a very brief information of HMM model using R.\n\nHidden Markov Model Hidden Markov Model I Hidden Markov models have close connection with mixture models. I A mixture model generates data as follows. Jia Li http The Basic of Hidden Markov Model. L. R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proceedings of the IEEE,\n\nSome packages are: 1. Page on r-project.org 2. CRAN - Package HiddenMarkov I can't say which one is better or what is the best one (among these two and some other Hidden Markov Models Fundamentals Daniel Ramage R ( jS +1). The aluev The matrix B encodes the probability of our hidden state generating\n\n... hidden markov model for beginners, hidden markov model tutorial, Digital image processing/Data mining using Advanced R or Python Hidden Markov model (HMM) L. R. Rabiner, \"A tutorial on hidden Markov models and selected applications in speech recognition,\" Proceedings of the IEEE,\n\nHidden Markov Models: The objective of this tutorial is to introduce basic concepts of a Hidden Markov Model (HMM). The tutorial is intended for the R E the Baum-Welch algorithm for Hidden Markov Models, but instead, refer you to a very good and old tutorial by L. Rabiner on HMM models and their estimation.\n\nSome packages are: 1. Page on r-project.org 2. 
CRAN - Package HiddenMarkov I can't say which one is better or what is the best one (among these two and some other depmixS4 : An R-package for hidden Markov models Ingmar Visser University of Amsterdam Maarten Speekenbrink University College London Abstract\n\nIn this paper we describe an algorithm for clustering multivariate time series with variables taking Rabiner L.R. A tutorial on hidden Markov models and selected Bioinformatics Introduction to Hidden Markov Path through states aligns sequence to model Figure from (K A tutorial on hidden Markov models and\n\nI have studied some of these resources and I know that there is an R package called HMM. Could anybody explain the usefulness of the 'forward algorithm' with a simple A Novel Approach for Record Deduplication using Hidden Markov Model hidden markov model is used for record duplication R.Parimala devi et al, /\n\nI'm not sure what exactly you want to do, but you might find this excellent tutorial on hidden Markov models using R useful. You build the functions and Markov models A step-by-step tutorial on HMMs (University of Leeds) Hidden (a portable toolkit for building and manipulating hidden Markov models) Hidden Markov Model R\n\nHidden Markov Models for Regime Detection using R Hidden Markov Models for Regime Detection using R. Hidden Markov Models (Pomegranate Tutorial), A Revealing Introduction to Hidden Markov Models This tutorial was originally published online we want to uncover the hidden part of the Hidden Markov Model.\n\n... hidden markov model for beginners, hidden markov model tutorial, Digital image processing/Data mining using Advanced R or Python Do you know any good literature and/or tutorials about how to implement HMM in python, R (Bioconductor)? (especially for sequence analysis)\n\n## machine learning Hidden Markov models package in R",
null,
"AN INTRODUCTION TO HIDDEN MARKOV MODELS. This post will explore how to train hidden markov models in R. The previous posts in this series detailed the maths that power the HMM, fortunately all of this has, Principles of Autonomy and Decision Making L. Rabiner, \\A tutorial on Hidden Markov Models...\" E. Frazzoli (MIT) R +, i.e.,transition.\n\n### Hidden Markov Models Aprendizaje automГЎtico Python\n\nNewest 'hidden-markov-model' Questions Cross Validated. An Introduction to Markov Modeling: Concepts and Uses //ntrs.nasa.gov/search.jsp?R lustrate each modeling situation covered. This tutorial will be aimed at, 2 depmixS4: An R Package for Hidden Markov Models (1982), for an overview, and e.g.,Schmittmann, Visser, and Raijmakers(2006), for a recent application..\n\nI am going to tell you a story. A story where a Hidden Markov Model(HMM) is used to nab a thief even when there were no real witnesses at the scene of crime; you’ll Hidden Markov Models (Part 1) BMI/CS 576 www.biostat.wisc.edu/bmi576.html Mark Craven craven@biostat.wisc.edu Fall 2011 A simple HMM A C T\n\n2 depmixS4: An R Package for Hidden Markov Models (1982), for an overview, and e.g.,Schmittmann, Visser, and Raijmakers(2006), for a recent application. This is the 2nd part of the tutorial on Hidden Markov models. In this post we will look at a possible implementation of the described algorithms and estimate model\n\nI'm not sure what exactly you want to do, but you might find this excellent tutorial on hidden Markov models using R useful. You build the functions and Markov models Hidden Markov Models Fundamentals Daniel Ramage R ( jS +1). The aluev The matrix B encodes the probability of our hidden state generating\n\nAn Introduction to Hidden. Markov Models ,-'\" 4 - is the purpose of this tutorial paper to L. R. Rabiner B. H. luang . Consider Hidden Markov Models for classification tasks using hidden markov models. 
The tutorial series will cover how to build and train a hidden markov models in R.\n\nA step-by-step tutorial on HMMs (University of Leeds) Hidden (a portable toolkit for building and manipulating hidden Markov models) Hidden Markov Model R Hidden Markov Models A Summary for “A tutorial on Hidden Markov Models and Selected Applications in Speech Recognition, by Lawrence R. Rabiner”\n\nr hidden-markov-model. and I mainly studied them on the Rabiner tutorial from 1989 and the book \"Hidden Markov Models for newest hidden-markov-model depmixS4 : An R-package for hidden Markov models Ingmar Visser University of Amsterdam Maarten Speekenbrink University College London Abstract\n\n... hidden markov model for beginners, hidden markov model tutorial, Digital image processing/Data mining using Advanced R or Python An Introduction to Hidden Markov Models of these models. It is the purpose of this tutorial paper to An HMM is a doubly stochastic process with an under\n\nKISS ILVB Tutorial Hidden Markov Model Hidden Markov Model What is 'hidden'? V = { R, G, B } • Initial state distribution In this paper we describe an algorithm for clustering multivariate time series with variables taking Rabiner L.R. A tutorial on hidden Markov models and selected\n\nA step-by-step tutorial on HMMs (University of Leeds) Hidden (a portable toolkit for building and manipulating hidden Markov models) Hidden Markov Model R r hidden-markov-model. and I mainly studied them on the Rabiner tutorial from 1989 and the book \"Hidden Markov Models for newest hidden-markov-model\n\nAn Introduction to Markov Modeling: Concepts and Uses //ntrs.nasa.gov/search.jsp?R illustrate each modeling situation covered. This tutorial will be aimed at Hidden Markov Models for classification tasks using hidden markov models. 
hidden markov model for beginners, hidden markov model tutorial, Digital image processing/Data mining using Advanced R or Python HOW TO IMPLEMENT HIDDEN MARKOV CHAIN A Framework and C++ Code Before we go into what is Hidden Markov Model, Let's start by introducing you to what is\n\nReferences Discrete State HMMs: A. W. Moore, Hidden Markov Models. Slides from a tutorial presentation. L. R. Rabiner (1989), A Tutorial on Hidden Markov Models Some packages are: 1. Page on r-project.org 2. CRAN - Package HiddenMarkov I can't say which one is better or what is the best one (among these two and some other\n\nHidden Markov Models: The objective of this tutorial is to introduce basic concepts of a Hidden Markov Model (HMM). The tutorial is intended for the R E Hidden Markov Models A Summary for “A tutorial on Hidden Markov Models and Selected Applications in Speech Recognition, by Lawrence R. Rabiner”\n\nAn Introduction to Hidden Markov Models ,-'\" 4 - is the purpose of this tutorial paper to L. R. Rabiner B. H. Juang. Consider 12/11/2018 · Bayesian Hierarchical Hidden Markov Models applied to r stan hidden-markov-model gsoc HMMLab is a Hidden Markov Model editor oriented on\n\n2 1 Hidden Markov Models Definition 1.1. A kernel from a measurable space (E,E) to a measurable space (F,F) is a map P : E ×F → R+ such that 2 depmixS4: An R Package for Hidden Markov Models (1982), for an overview, and e.g., Schmittmann, Visser, and Raijmakers (2006), for a recent application.\n\ndepmixS4 : An R-package for hidden Markov models Ingmar Visser University of Amsterdam Maarten Speekenbrink University College London Abstract\n\nThis is the 2nd part of the tutorial on Hidden Markov models. In this post we will look at a possible implementation of the described algorithms and estimate model\n\nI am going to tell you a story. 
A story where a Hidden Markov Model (HMM) is used to nab a thief even when there were no real witnesses at the scene of crime; you’ll Acoustic Modelling for Speech Recognition: Hidden Markov Models and Beyond? An Engineering Solution - should planes flap their wings? Cambridge University\n\nHidden Markov Models for classification tasks using hidden markov models. The tutorial series will cover how to build and train a hidden markov models in R. The Application of Hidden Markov Models the hidden Markov model ous density HMMs were introduced.1 An excellent tutorial covering the\n\nAn Introduction to Hidden Markov Models of these models. It is the purpose of this tutorial paper to An HMM is a doubly stochastic process with an under 2 1 Hidden Markov Models Definition 1.1. A kernel from a measurable space (E,E) to a measurable space (F,F) is a map P : E ×F → R+ such that\n\nThis is the 2nd part of the tutorial on Hidden Markov models. In this post we will look at a possible implementation of the described algorithms and estimate model The Basic of Hidden Markov Model. L. R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proceedings of the IEEE,\n\nA Revealing Introduction to Hidden Markov Models This tutorial was originally published online we want to uncover the hidden part of the Hidden Markov Model. A Novel Approach for Record Deduplication using Hidden Markov Model hidden markov model is used for record duplication R.Parimala devi et al, /\n\n### Hidden Markov Models for Speech Recognition B. H. Juang L",
null,
"Practical Machine Learning Lecture Hidden Markov models. We provide a tutorial on learning and inference in hidden Markov models in the context Hidden Markov models R P(W)P(X)P(YjW)P(ZjX;Y), ... hidden markov model for beginners, hidden markov model tutorial, Digital image processing/Data mining using Advanced R or Python.\n\n### depmixS4 An R-package for hidden Markov models",
null,
"Hidden Markov Models inplementation in R or python Stack. Hidden Markov Models (Part 1) BMI/CS 576 www.biostat.wisc.edu/bmi576.html Mark Craven craven@biostat.wisc.edu Fall 2011 A simple HMM A C T I have studied some of these resources and I know that there is an R package called HMM. Could anybody explain the usefulness of the 'forward algorithm' with a simple.",
null,
"• Topic hidden-markov-model В· GitHub\n• Practical Machine Learning Lecture Hidden Markov models\n• AN INTRODUCTION TO HIDDEN MARKOV MODELS\n\n• A Novel Approach for Record Deduplication using Hidden Markov Model hidden markov model is used for record duplication R.Parimala devi et al, / An Introduction to Hidden Markov Models of these models. It is the purpose of this tutorial paper to An HMM is a doubly stochastic process with an unde'r\n\nHidden Markov Models A Summary for “A tutorial on Hidden Markov Models and Selected Applications in Speech Recognition, by Lawrence R. Rabiner” KISS ILVB Tutorial Hidden Markov Model Hidden Markov Model What is вЂhidden’? V = { R, G, B } • Initial state distribution\n\n2 depmixS4: An R Package for Hidden Markov Models (1982), for an overview, and e.g.,Schmittmann, Visser, and Raijmakers(2006), for a recent application. Hidden Markov Models Andrew W. Moore or the following link to the source repository of Andrew’s tutorials: R STATE q = Location of Robot,\n\n30/04/2013В В· Hidden Markov Models, with example Hidden Markov Model: Markov Chain Matlab Tutorial--part 1 - Duration: 10:52. Hidden and non Hidden Markov Models HMM. (hidden) Markov Models of biased , e.g. modelled by a finite-state model, or by any \"left to right\" model: D. R\n\n7/07/2011В В· Definition of a hidden Markov model (HMM). Description of the parameters of an HMM (transition matrix, emission probability distributions, and initial 12/11/2018В В· Bayesian Hierarchical Hidden Markov Models applied to r stan hidden-markov-model gsoc HMMLab is a Hidden Markov Model editor oriented on\n\nBioinformatics Introduction to Hidden Markov Path through states aligns sequence to model F i g u r e f r o m (K A tutorial on hidden Markov models and Hidden Markov model \"An Introduction to Hidden Markov Models,\" 2001. L. R. 
Rabiner, \"A tutorial on hidden Markov models and selected applications in speech\n\nReveals How HMMs Can Be Used as General-Purpose Time Series Models Implements all methods in RHidden Markov Models for Time Series: An Introduction Using R applies Do you know any good literature and/or tutorials about how to implement HMM in python, R (Bioconductor)? (especially for sequence analysis)\n\nAn Application of Hidden Markov Model. for details and Getting Started with Hidden Markov Models in R for a very brief information of HMM model using R. This post will explore how to train hidden markov models in R. The previous posts in this series detailed the maths that power the HMM, fortunately all of this has\n\nr hidden-markov-model. and I mainly studied them on the Rabiner tutorial from 1989 and the book \"Hidden Markov Models for newest hidden-markov-model A Revealing Introduction to Hidden Markov Models This tutorial was originally published online we want to uncover the hidden part of the Hidden Markov Model.\n\n[Rabiner89] Lawrence R. Rabiner “A tutorial on hidden Markov models and selected applications in speech recognition”, Proceedings of the IEEE 77.2, pp. 257-286, 1989. Hidden Markov Models Andrew W. Moore or the following link to the source repository of Andrew’s tutorials: R STATE q = Location of Robot,\n\nReferences Discrete State HMMs: A. W. Moore, Hidden Markov Models. Slides from a tutorial presentation. L. R. Rabiner (1989), A Tutorial on Hidden Markov Models 14/08/2015В В· A hidden Markov model is a statistical model which builds upon the concept of a Markov chain. The idea behind the model is simple: imagine your system can\n\n26/03/2011В В· Making a Sweater Coat Part 1 Inspriation and Vogue 1266 Pattern for the recycled sweater coats and is now a bead kit and tutorial on Recycled sweater coat tutorial England When gifting a bottle of wine this holiday season, add a personal touch by wrapping it inside an upcycled sweater sleeve. 
Get the tutorial at That's What Che Said"
] | [
null,
"https://remtheory.com/images/490983.jpg",
null,
"https://remtheory.com/images/hidden-markov-model-tutorial-in-r.jpg",
null,
"https://remtheory.com/images/hidden-markov-model-tutorial-in-r-2.jpg",
null,
"https://remtheory.com/images/7d417772149165a074464092c388753d.jpg",
null,
"https://remtheory.com/images/217511.jpg",
null,
"https://remtheory.com/images/hidden-markov-model-tutorial-in-r-3.jpg",
null,
"https://remtheory.com/images/38c801ffc1002c558634e4d092a1290c.jpg",
null,
"https://remtheory.com/images/c6f828448024d4a30cc1bee9a34c9b95.jpg",
null,
"https://remtheory.com/images/d93ca84a501883b9dbbadc4ae6a5639e.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8075817,"math_prob":0.82859325,"size":22958,"snap":"2023-14-2023-23","text_gpt3_token_len":5524,"char_repetition_ratio":0.26404983,"word_repetition_ratio":0.7550859,"special_character_ratio":0.21391237,"punctuation_ratio":0.10894677,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99723196,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-05T17:53:13Z\",\"WARC-Record-ID\":\"<urn:uuid:e79e7016-bf43-4f0e-8366-063944bea647>\",\"Content-Length\":\"50925\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ce9ebd7-e796-45e1-9b98-7aaef6f75776>\",\"WARC-Concurrent-To\":\"<urn:uuid:6e4a7c56-4112-4d1d-8939-d1245d42ab5b>\",\"WARC-IP-Address\":\"88.119.175.235\",\"WARC-Target-URI\":\"https://remtheory.com/saskatchewan/hidden-markov-model-tutorial-in-r.php\",\"WARC-Payload-Digest\":\"sha1:47UGOT4XROWXXE4UGYJSLMCDRVUSSRFS\",\"WARC-Block-Digest\":\"sha1:C6GAEXX5N6664A2PBKD4QXOYKRIKKESV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224652149.61_warc_CC-MAIN-20230605153700-20230605183700-00420.warc.gz\"}"} |
https://macdownload.informer.com/Tg/solve-equations/pages/8/ | [
"Featured\n\n# Solve Equations\n\nSoftware",
null,
"## Soulver",
null,
"|\n1,099\n|\nAcqualia Software Pty. Ltd.\nSoulver is a math calculation utility designed for Mac. This nicely-designed program enables...\n...to write your math equations. The screen contain ...with Mac's Calculator for solving math equations.",
null,
"## LuxRender\n\nfree",
null,
"|\n808\n|\nLuxRender Team\nLuxRender is a physically based and unbiased rendering engine. The program simulates...\n...light according to physical equations, thus producing realistic images...",
null,
"## PCalc",
null,
"|\n754\n|\nTLA Systems Ltd.\nPCalc is a full-featured, scriptable scientific calculator with support...\nPCalc is a full-featured, scriptable scientific calculator with support for hexadecimal, octal, and...",
null,
"## Daum Equation Editor\n\nfree",
null,
"|\n628\n|\nDaum Communications\nDaum Equation Editor is, as its name suggests, a program that enables you...\nDaum Equation Editor is, as it ...sum things up, Daum Equation Editor is a simple yet...",
null,
"## Maxima\n\nfree",
null,
"|\n183\n|\nMaxima Team\nMaxima is a system for the manipulation of symbolic and numerical expressions.\n...transforms, ordinary differential equations, systems of linear equations, polynomials, and...",
null,
"## Microsoft Office 2016",
null,
"|\n71\n|\nMicrosoft\nMicrosoft Office 2016 is a tool set for the most commonly used office applications...\n...it integrates plenty of equations patterns to fill in ...can type your own equations, which can be...",
null,
"## Study Center",
null,
"|\n48\n|\nRPG Softworks\nReviews: You can check more at rpgmac.wordpress.com \"SC is super fast and neat…\" \"I used...\n...Custom Templates - Write Math Equations using LaTeX - PDF Export...",
null,
"## EdenGraph\n\nfree",
null,
"|\n46\n|\nEdenwaith\nEdenGraph is a free Mac application that allows you to generate two-dimensional graphs using...\n...can save and edit equations. This way, you ...the desired functions and equations every single time you...",
null,
"## Longhand\n\nfree",
null,
"|\n34\n|\nScott Fortmann-Roe\nLonghand is an application that allows users to perform basic as well as complex mathematical calculations...\nLonghand is an application that allows users to perform basic as well as complex mathematical...",
null,
"## InstaCalc\n\nfree",
null,
"|\n29\n|\nInstaCalc is a simple Mac utility that enables you to make basic and complex calculations on your computer...\n...access to several equation examples which you ...calculator app for writing equations. The most obvious...",
null,
"## iMathGeo",
null,
"|\n27\n|\nPhilippe Logel\niMathGeo is a software I began to develop 10 years ago when I was a student in math...\n...to import, like an equations editor. It can easily...",
null,
"## Equation Service\n\nfree",
null,
"|\n11\n|\nedu.umn\nEquation Service is a program that uses pdflatex to produce small PDF files containing equations...\n...small PDF files containing equations and other text ...in the main Equation Service window and...",
null,
"## Mathpix Snipping Tool\n\nfree",
null,
"|\n9\n|\nMathpix\nMathpix leverages the world's most powerful math OCR technology to faciliate interacting with digital...\nMathpix leverages the world's most powerful math OCR technology to faciliate interacting with...",
null,
"## Sigfit",
null,
"|\n3\n|\nJ.P.G. Malthouse\nThis program fits data to the equations:- Alkaline sigmoid kobs = k/(1+[H]/Ka...\n...fits data to the equations:- Alkaline sigmoid kobs = k/(1+[H]/Ka...",
null,
"## KLatexFormula\n\nfree",
null,
"|\n6\n|\nPhilippe Faist\nKLatexFormula is an easy-to-use graphical application for generating images (that you can drag and drop...\n...to disk) from LaTeX equations. These images can be...",
null,
"## FX Chem 3",
null,
"|\n1\n|\nEfofex Software\nFX Chem makes typing chemical equations almost as easy as typing your name. With FX Chem...\n...makes typing chemical equations almost as easy a ...components of a chemical equation and puts them in...",
null,
"## Factormania",
null,
"|\nReally Early Morning Software\nFactormania! is an AppleScript Studio application that performs some basic mathematical factoring methods on numbers...\n...methods on numbers and equations.",
null,
"## Year 10 Interactive Maths (2nd Ed) - Home Licence",
null,
"|\nmathsteacher\nYear 10 Interactive Maths (Second Edition) by G S Rehill, an experienced mathematics author...",
null,
"## Year 9 Interactive Maths (2nd Ed) - Home Licence",
null,
"|\nmathsteacher\nYear 9 Interactive Maths (Second Edition) by G S Rehill, an experienced mathematics author...\n...Law, Linear Equations and Inequalitie ...Linear Graphs, Simultaneous Equations, Indices, Surd...",
null,
"## MathTabs",
null,
"|\neQuatrix\nDo the Math in QuarkXPress! Now for Quark 6 and OS X. Create business documents...\n...automatically by assigning equations to standard text ...pages Assign different equations to different..."
] | [
null,
"https://img.informer.com/icons_mac/png/128/473/473996.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/icons_mac/png/128/352/352699.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/icons_mac/png/128/476/476871.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/icons_mac/png/128/196/196159.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/icons_mac/png/128/21/21889.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/images/default_icon/default_128_1.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/icons_mac/png/128/151/151624.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/icons_mac/png/128/41/41765.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/icons_mac/png/128/34/34575.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/icons_mac/png/128/258/258151.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/icons_mac/png/128/327/327066.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/images/default_icon/default_128_4.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/icons_mac/png/128/453/453319.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/icons_mac/png/128/222/222555.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/icons_mac/png/128/347/347317.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/images/default_icon/default_128_0.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/images/default_icon/default_128_0.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/images/default_icon/default_128_5.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/images/default_icon/default_128_5.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null,
"https://img.informer.com/images/default_icon/default_128_0.png",
null,
"https://img.informer.com/images/v3/trend_red_stars_small.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8020648,"math_prob":0.96896905,"size":4979,"snap":"2019-51-2020-05","text_gpt3_token_len":1261,"char_repetition_ratio":0.12984924,"word_repetition_ratio":0.104575165,"special_character_ratio":0.23518778,"punctuation_ratio":0.24793388,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96865654,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-25T00:18:12Z\",\"WARC-Record-ID\":\"<urn:uuid:af48c652-9830-460f-9015-98ee4be0a865>\",\"Content-Length\":\"62959\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1e854c72-1c93-4db6-8e10-952f2503aea4>\",\"WARC-Concurrent-To\":\"<urn:uuid:e354dd1b-0339-4ed3-8d97-1cf06466c7df>\",\"WARC-IP-Address\":\"206.54.189.100\",\"WARC-Target-URI\":\"https://macdownload.informer.com/Tg/solve-equations/pages/8/\",\"WARC-Payload-Digest\":\"sha1:NBVLEIE767YSIZXQ7IJTNOHLJTOHZFAD\",\"WARC-Block-Digest\":\"sha1:YJUT2MBSL6OTFEIAAEZD7XTRTSVQBC7M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250626449.79_warc_CC-MAIN-20200124221147-20200125010147-00437.warc.gz\"}"} |
https://docs.oracle.com/cd/B10501_01/appdev.920/a96584/oci18m64.htm | [
"OCI Datatype Mapping and Manipulation Functions, 64 of 134\n\n## OCIIntervalToText()\n\n### Purpose\n\nGiven an interval, produces a string representing the interval.\n\n### Syntax\n\n```sword OCIIntervalToText ( dvoid *hndl,\nOCIError *err,\nCONST OCIInterval *interval,\nub1 lfprec,\nub1 fsprec,\nOraText *buffer,\nsize_t buflen,\nsize_t *resultlen );\n```\n\n### Parameters\n\nhndl (IN)\n\nThe OCI user session handle or the environment handle.\n\nerr (IN/OUT)\n\nThe OCI error handle. If there is an error, it is recorded in `err` and this function returns OCI_ERROR. Obtain diagnostic information by calling `OCIErrorGet()`.\n\ninterval (IN)\n\nInterval to be converted.\n\nlfprec (IN)\n\nLeading field precision. (The number of digits used to represent the leading field.)\n\nfsprec (IN)\n\nFractional second precision of the interval (the number of digits used to represent the fractional seconds).\n\nbuffer (OUT)\n\nBuffer to hold the result.\n\nbuflen (IN)\n\nThe length of `buffer`.\n\nresultlen (OUT)\n\nThe length of the result placed into `buffer`.\n\nThe interval literal is output as 'year' or '[year-]month' for `INTERVAL YEAR TO MONTH` intervals and as 'seconds' or 'minutes[:seconds]' or 'hours[:minutes[:seconds]]' or 'days[ hours[:minutes[:seconds]]]' for `INTERVAL` `DAY` `TO` `SECOND` intervals (where optional fields are surrounded by brackets).\n\n### Returns\n\nOCI_SUCCESS,\n\nOCI_INVALID_HANDLE, if `err` is a null pointer,\n\nOCI_ERROR, if the buffer is not large enough to hold the result.\n\n### Related Functions\n\n`OCIIntervalFromText()`\n\n## OCI Number Functions\n\nThis section describes the OCI Number functions.\n\n##### Table 18-4 Number Functions\nFunction/Page Purpose\n\n`OCINumberAbs()`\n\nComputes the absolute value\n\n`OCINumberAdd()`\n\n`OCINumberArcCos()`\n\nComputes the arc cosine\n\n`OCINumberArcSin()`\n\nComputes the arc sine\n\n`OCINumberArcTan()`\n\nComputes the arc tangent\n\n`OCINumberArcTan2()`\n\nComputes the arc tangent of two 
numbers\n\n`OCINumberAssign()`\n\nAssigns one number to another\n\n`OCINumberCeil()`\n\nComputes the ceiling of number\n\n`OCINumberCmp()`\n\nCompares numbers\n\n`OCINumberCos()`\n\nComputes the cosine\n\n`OCINumberDec()`\n\nDecrements an OCI number\n\n`OCINumberDiv()`\n\nDivides two numbers\n\n`OCINumberExp()`\n\nRaises e to the specified Oracle number power\n\n`OCINumberFloor()`\n\nComputes the floor of a number\n\n`OCINumberFromInt()`\n\nConverts an integer to an Oracle number\n\n`OCINumberFromReal()`\n\nConvert a real to an Oracle number\n\n`OCINumberFromText()`\n\nConvert a string to an Oracle number\n\n`OCINumberHypCos()`\n\nComputes the hyperbolic cosine\n\n`OCINumberHypSin()`\n\nComputes the hyperbolic sine\n\n`OCINumberHypTan()`\n\nComputes the hyperbolic tangent\n\n`OCINumberInc()`\n\nIncrements an Oracle number\n\n`OCINumberIntPower()`\n\nRaises a given base to an integer power\n\n`OCINumberIsInt()`\n\nTests if a number is an integer\n\n`OCINumberIsZero()`\n\nTests if a number is zero\n\n`OCINumberLn()`\n\nComputes the natural logarithm\n\n`OCINumberLog()`\n\nComputes the logarithm to an arbitrary base\n\n`OCINumberMod()`\n\nModulo division\n\n`OCINumberMul()`\n\nMultiplies numbers\n\n`OCINumberNeg()`\n\nNegates a number\n\n`OCINumberPower()`\n\nExponentiation to base e\n\n`OCINumberPrec()`\n\nRounds a number to a specified number of decimal places\n\n`OCINumberRound()`\n\nRounds an Oracle number to a specified decimal place\n\n`OCINumberSetPi()`\n\nInitializes a number to Pi\n\n`OCINumberSetZero()`\n\nInitializes a number to zero\n\n`OCINumberShift()`\n\nMultiplies by 10, shifting specified number of decimal places\n\n`OCINumberSign()`\n\nObtains the sign of an Oracle number\n\n`OCINumberSin()`\n\nComputes the sine\n\n`OCINumberSqrt()`\n\nComputes the square root of a number\n\n`OCINumberSub()`\n\nSubtracts numbers\n\n`OCINumberTan()`\n\nComputes the tangent\n\n`OCINumberToInt()`\n\nConverts an Oracle number to an 
integer\n\n`OCINumberToReal()`\n\nConverts an Oracle number to a real\n\n`OCINumberToText()`\n\nConverts an Oracle number to a string\n\n`OCINumberTrunc()`\n\nTruncates an Oracle number at a specified decimal place"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.563886,"math_prob":0.98312837,"size":2926,"snap":"2019-26-2019-30","text_gpt3_token_len":711,"char_repetition_ratio":0.2019165,"word_repetition_ratio":0.08196721,"special_character_ratio":0.22693096,"punctuation_ratio":0.08163265,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99459404,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-15T20:07:45Z\",\"WARC-Record-ID\":\"<urn:uuid:a96c4199-1b5d-4be0-a10c-daf95ceab442>\",\"Content-Length\":\"26026\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0dbed899-270d-40aa-af01-6993d4909d4f>\",\"WARC-Concurrent-To\":\"<urn:uuid:7cedcfc8-6f97-4280-bcb7-6d1b8a117b9f>\",\"WARC-IP-Address\":\"23.219.223.23\",\"WARC-Target-URI\":\"https://docs.oracle.com/cd/B10501_01/appdev.920/a96584/oci18m64.htm\",\"WARC-Payload-Digest\":\"sha1:K4DWIPNBWYPVRYIYRQNR7Z2PUJNRNXEU\",\"WARC-Block-Digest\":\"sha1:OCSS7GQG4NEFV4ABXTRNKKHAFVX463X6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195524111.50_warc_CC-MAIN-20190715195204-20190715221204-00339.warc.gz\"}"} |
https://wcipeg.com/problem/coci077p4 | [
"### COCI 2007/2008, Croatian Regional\n\nWhile browsing a math book, Mirko found a strange equation of the form A=S. What makes the equation strange is that A and S are not the same. Mirko realized that the left side of the equation should have addition operations between some pairs of digits in A.\n\nWrite a program that inserts the smallest number of addition operations on the left side to make the equation correct. The numbers in the corrected equation may contain arbitrary amounts of leading zeros.\n\n### Input\n\nThe first line contains the equation in the form A=S.\n\nA and S will both be positive integers without leading zeros. They will be different.\n\nA will contain at most 1000 digits.\n\nS will be less than or equal to 5000.\n\nNote: The input data will guarantee that a solution, although not necessarily unique, will always exist.\n\n### Output\n\nOutput the corrected equation. If there are multiple solutions, output any of them.\n\n### Input\n\n`143175=120`\n\n### Output\n\n`14+31+75=120`\n\n### Input\n\n`5025=30`\n\n### Output\n\n`5+025=30`\n\n### Input\n\n`999899=125`\n\n### Output\n\n`9+9+9+89+9=125`\n\nPoint Value: 17 (partial)\nTime Limit: 1.00s\nMemory Limit: 64M"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.80500174,"math_prob":0.97424483,"size":1305,"snap":"2021-43-2021-49","text_gpt3_token_len":354,"char_repetition_ratio":0.121445045,"word_repetition_ratio":0.0,"special_character_ratio":0.2835249,"punctuation_ratio":0.15636364,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991658,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T19:41:38Z\",\"WARC-Record-ID\":\"<urn:uuid:b275d5c6-df60-46dd-9a6c-b00f9820cd40>\",\"Content-Length\":\"11022\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7a727ac8-a509-4d60-83f2-d3be1a5221c5>\",\"WARC-Concurrent-To\":\"<urn:uuid:7fbe6591-1421-4298-8fea-47021809f608>\",\"WARC-IP-Address\":\"172.67.147.117\",\"WARC-Target-URI\":\"https://wcipeg.com/problem/coci077p4\",\"WARC-Payload-Digest\":\"sha1:MS5DB66MNCFV22HHBU2ZZGQHMKQCRR3R\",\"WARC-Block-Digest\":\"sha1:VDODVSGSZA3JFQ3676MNWVJI36CU5UKS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588526.57_warc_CC-MAIN-20211028193601-20211028223601-00443.warc.gz\"}"} |
http://101iq.com/infoaop.htm | [
"",
null,
"",
null,
"Area of Panels This is the total area the panels will take on your roof. The foot-by-foot calculation is just to show roughly how large the area would be if it were exactly square. You might find it interesting to increase the number of panels until the area is equal to the area of your entire roof. You may find that if you had a roof that entirely faced south toward the sun, and was entirely covered with solar panels, that this would be enough to cover all of your average electricity needs. Unfortunately, the subsidy is probably not enough for this in Austin where we use air conditioners much of the year. Another interesting calculation is to increase the number of panels to about 30 billion (30000000000) panels. This would take roughly 10,000 square miles of area, a 100 mile by 100 mile square in west Texas, and on average enough to power the electic needs of the whole country.",
null,
"",
null,
"\u001a"
] | [
null,
"http://101iq.com/INDEX_files/ULh.gif",
null,
"http://101iq.com/INDEX_files/UR.gif",
null,
"http://101iq.com/INDEX_files/LL.gif",
null,
"http://101iq.com/INDEX_files/LR.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9772905,"math_prob":0.95850354,"size":721,"snap":"2020-10-2020-16","text_gpt3_token_len":155,"char_repetition_ratio":0.10878661,"word_repetition_ratio":0.03125,"special_character_ratio":0.23994452,"punctuation_ratio":0.07913669,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.951394,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-25T06:59:21Z\",\"WARC-Record-ID\":\"<urn:uuid:b756c05d-3e45-48cc-ae71-0772d96b28c3>\",\"Content-Length\":\"2452\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec6184d1-cefe-4c68-80f0-37f5a38d47a8>\",\"WARC-Concurrent-To\":\"<urn:uuid:c15f791d-489b-42ca-993f-6f16b436fe19>\",\"WARC-IP-Address\":\"63.230.168.74\",\"WARC-Target-URI\":\"http://101iq.com/infoaop.htm\",\"WARC-Payload-Digest\":\"sha1:32ZZXDAYET6HP76DJWQP3OSHMOSLCLXR\",\"WARC-Block-Digest\":\"sha1:DLLC3OT2LYJB25JLIV5NJ7OHQFO5STNY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146033.50_warc_CC-MAIN-20200225045438-20200225075438-00324.warc.gz\"}"} |
https://de.scribd.com/document/204271958/1-2-1 | [
"You are on page 1of 15\n\n# Hindawi Publishing Corporation\n\n## Abstract and Applied Analysis\n\nVolume 2011, Article ID 709427, 15 pages\ndoi:10.1155/2011/709427\nResearch Article\nAsymptotic Convergence of\nthe Solutions of a Discrete Equation with\nTwo Delays in the Critical Case\nL. Berezansky,\n1\nJ. Dibl k,\n2\nM. R u zi ckov a,\n2\nand Z.\n\nSut a\n2\n1\nDepartment of Mathematics, Ben-Gurion University of the Negev, 84105 Beer-Sheva, Israel\n2\nDepartment of Mathematics, University of\n\nZilina, 01026\n\nZilina, Slovakia\nCorrespondence should be addressed to J. Diblk, diblik@feec.vutbr.cz\nReceived 9 October 2010; Revised 17 March 2011; Accepted 13 April 2011\nAcademic Editor: Elena Braverman\nCopyright q 2011 L. Berezansky et al. This is an open access article distributed under the Creative\nCommons Attribution License, which permits unrestricted use, distribution, and reproduction in\nany medium, provided the original work is properly cited.\nA discrete equation yn nyn j yn k with two integer delays k and j, k > j 0\nis considered for n . We assume : Z\n\nn0k\n0, , where Z\n\nn0\n{n\n0\n, n\n0\n1, . . .}, n\n0\nN and\nn Z\n\nn0\n. Criteria for the existence of strictly monotone and asymptotically convergent solutions for\nn are presented in terms of inequalities for the function . Results are sharp in the sense that\nthe criteria are valid even for some functions with a behavior near the so-called critical value,\ndened by the constant k j\n1\n. Among others, it is proved that, for the asymptotic convergence\nof all solutions, the existence of a strictly monotone and asymptotically convergent solution is\nsucient.\n1. Introduction\nWe use the following notation: for integers s, q, s q, we dene Z\nq\ns\n: {s, s 1, . . . , q}, where\nthe cases s and q are admitted too. 
Throughout this paper, using the notation Z\nq\ns\nor another one with a pair of integers s, q, we assume s q.\nIn this paper we study a discrete equation with two delays\nyn n\n_\ny\n_\nn j\n_\nyn k\n_\n1.1\nas n . Integers k and j in 1.1 satisfy the inequality k > j 0 and : Z\n\nn\n0\nk\nR\n\n:\n0, , where n\n0\nN and n Z\n\nn\n0\n. Without loss of generality, we assume n\n0\nk > 0 throughout\nthe paper this is a technical detail, necessary for some expressions to be well dened.\n2 Abstract and Applied Analysis\nThe results concern the asymptotic convergence of all solutions of 1.1. We focus on\nwhat is called the critical case with respect to the function which separates the case when\nall solutions are convergent from the case when there exist divergent solutions.\nSuch a critical case is characterized by the constant value\nn\ncr\n:\n_\nk j\n_\n1\n, n Z\n\nn\n0\nk\n, 1.2\nand below we explain its meaning and importance by an analysis of the asymptotic behavior\nof solutions of 1.1.\nConsider 1.1 with n\n0\n, where\n0\nis a positive constant; that is, we consider the\nfollowing equation:\nyn\n0\n\n_\ny\n_\nn j\n_\nyn k\n_\n. 1.3\nLooking for a solution of 1.3 in the form yn\nn\n, C \\ {0} using the usual procedure,\nwe get the characteristic equation\n\nk1\n\nk\n\n0\n\n_\n\nkj\n1\n_\n. 1.4\nDenote its roots by\ni\n, i 1, . . . , k 1. Then characteristic equation 1.4 has a root\nk1\n1.\nRelated solution of 1.3 is y\nk1\nn 1. Then there exists a one-parametric family of constant\nsolutions of 1.3 yn c\nk1\ny\nk1\nn c\nk1\n, where c\nk1\nis an arbitrary constant. Equation 1.4\ncan be rewritten as\n\nk\n1\n0\n1\n_\n\nkj1\n\nkj2\n1\n_\n, 1.5\nand, instead of 1.4, we can consider the following equation:\nf :\nk\n\n0\n\n_\n\nkj1\n\nkj2\n1\n_\n0. 1.6\nLet\n0\n\ncr\n. Then 1.6 has a root\nk\n1 which is a double root of 1.4. 
By the theory of\nlinear dierence equations, 1.3 has a solution y\nk\nn n, linearly independent with y\nk1\nn.\nThere exists a two-parametric family of solutions of 1.3\nyn c\nk\ny\nk\nn c\nk1\ny\nk1\nn c\nk\nn c\nk1\n, 1.7\nwhere c\nk\n, c\nk1\nare arbitrary constants. Then lim\nn\nyn if c\nk /\n0. This means that\nsolutions with c\nk /\n0 are divergent.\nLet\n0\n<\ncr\nand k j > 1. We dene two functions of a complex variable\nF :\nk\n, :\n0\n\n_\n\nkj1\n\nkj2\n1\n_\n, 1.8\nAbstract and Applied Analysis 3\nand 1.6 can be written as\nF 0. 1.9\nBy Rouches theorem, all roots\ni\n, i 1, 2, . . . , k of 1.6 satisfy |\ni\n| < 1 because, on the\nboundary C of a unit circle || < 1, we have\n||\nC\n\n0\n\n_\n_\n_\nkj1\n\nkj2\n1\n_\n_\n_ <\n1\nk j\n_\nk j\n_\n1 |F|\nC\n, 1.10\nand the functions F, F have the same number of zeros in the domain || < 1.\nThe case\n0\n<\ncr\nand k j 1 is trivial because 1.6 turns into\n\nk\n\n0\n0 1.11\nand, due to inequality ||\nk\n\n0\n<\ncr\n1, has all its roots in the domain || < 1.\nThen the relevant solutions y\ni\nn, i 1, 2, . . . , k satisfy lim\nn\ny\ni\nn 0, and the limit\nof the general solution of 1.3, yn lim\nn\n\nk1\ni1\nc\ni\ny\ni\nn where c\ni\nare arbitrary constants,\nis nite because\nlim\nn\nyn lim\nn\nk1\n\ni1\nc\ni\ny\ni\nn c\nk1\n. 1.12\nLet\n0\n>\ncr\n. Since f1 1\n0\nk j < 0 and f , there exists a root\n\n## > 1 of 1.6 and a solution y\n\nn\nof 1.3 satisfying lim\nn\ny\n\nn . This\nmeans that solution y\n\nn is divergent.\nGathering all the cases considered, we have the following:\ni if 0 <\n0\n<\ncr\n, then all solutions of 1.3 have a nite limit as n ,\nii if\n0\n\ncr\n, then there exists a divergent solution of 1.3 when n .\nThe above analysis is not applicable in the case of a nonconstant function n in 1.1.\nTo overcome some diculties, the method of auxiliary inequalities is applied to investigate\n1.1. 
From our results it follows that, for example, all solutions of (1.1) have a finite limit for n → ∞ or, in accordance with the definition below, are asymptotically convergent if there exists a p > 1 such that the inequality\nβ(n) ≤ 1/(k-j) - p(k-j+1)/(2n(k-j)) (1.13)\nholds for all n ∈ Z∞_{n0-k}, where n0 is a sufficiently large natural number. The limit of the right-hand side of (1.13) as n → ∞ equals the critical value β_cr:\nlim_{n→∞} [1/(k-j) - p(k-j+1)/(2n(k-j))] = 1/(k-j) = β_cr. (1.14)\nIt means that the function β(n) in (1.1) can be sufficiently close to the critical value β_cr but such that all solutions of (1.1) are convergent as n → ∞.\nThe proofs of the results are based on comparing the solutions of (1.1) with those of an auxiliary inequality that formally copies (1.1). First, we prove that, under certain conditions, (1.1) has an increasing and convergent solution y = y(n), i.e., there exists a finite limit lim_{n→∞} y(n). Then we extend this statement to all the solutions of (1.1). It is an interesting fact that, in the general case, the asymptotic convergence of all solutions is characterized by the existence of a strictly increasing and bounded solution.\nThe problem concerning the asymptotic convergence of solutions in the continuous case, that is, in the case of delayed differential equations or other classes of equations, is a classical one and has attracted much attention recently. The problem of the asymptotic convergence of solutions of discrete and difference equations with delay has not yet received much attention. 
We mention some papers from both of these fields (in most of them, equations and systems with a structure similar to the discrete equation (1.1) are considered).\nArino and Pituk [1], for example, investigate linear and nonlinear perturbations of a linear autonomous functional-differential equation which has infinitely many equilibria. Bereketoğlu and Karakoç [2] derive sufficient conditions for the asymptotic constancy and estimates of the limits of solutions for an impulsive system, and Győri et al. give sufficient conditions for the convergence of solutions of a nonhomogeneous linear system of impulsive delay differential equations and a limit formula in [3]. Bereketoğlu and Pituk [4] give sufficient conditions for the asymptotic constancy of solutions of nonhomogeneous linear delay differential equations with unbounded delay. The limits of the solutions can be computed in terms of the initial conditions and a special matrix solution of the corresponding adjoint equation. In [5] Diblík studies the scalar equation under the assumption that every constant is its solution. Criteria and sufficient conditions for the convergence of solutions are found. The paper by Diblík and Růžičková [6] deals with the asymptotic behavior of a first-order linear homogeneous differential equation with double delay. The convergence of solutions of the delay Volterra equation in the critical case is studied by Messina et al. in [7]. Berezansky and Braverman study the behavior of solutions of a food-limited population model with time delay in [8].\nBereketoğlu and Huseynov [9] give sufficient conditions for the asymptotic constancy of the solutions of a system of linear difference equations with delays. The limits of the solutions, as t → ∞, can be computed in terms of the initial function and a special matrix solution of the corresponding adjoint equation. 
Dehghan and Douraki [10] study the global behavior of a certain difference equation and show, for example, that zero is always an equilibrium point which satisfies a necessary and sufficient condition for its local asymptotic stability. Győri and Horváth [11] study a system of linear delay difference equations such that every solution has a finite limit at infinity. The stability of difference equations is studied intensively in papers by Stević [12, 13]. In [12], for example, he proves the global asymptotic stability of a class of difference equations. Baštinec and Diblík [14] study a class of positive and vanishing at infinity solutions of a linear difference equation with delay. Nonoscillatory solutions of second-order difference equations of the Poincaré type are investigated by Medina and Pituk in [15].\nComparing the known investigations with the results presented, we can see that our results can be applied to the critical case, giving strong sufficient conditions of asymptotic convergence of solutions for this case. Nevertheless, we are not concerned with computing the limits of the solutions as n → ∞.\nThe paper is organized as follows. In Section 2 auxiliary results are collected, an auxiliary inequality is studied, and the relationship of its solutions with the solutions of (1.1) is derived. The existence of a strictly increasing and convergent solution of (1.1) is established in Section 3. Section 4 contains results concerning the convergence of all solutions of (1.1). An example illustrating the sharpness of the results derived is given as well.\nThroughout the paper we adopt the customary convention Σ_{i=k+s}^{k} B(i) = 0, where k is an integer, s is a positive integer, and B denotes the function under consideration regardless of whether it is defined for the arguments indicated or not.\n2. Auxiliary Results\nLet C := C(Z^0_{-k}, R) be the space of discrete functions mapping the discrete interval Z^0_{-k} into R.\nLet v ∈ Z∞_{n0} be given. 
The function y : Z\n\nvk\nR is said to be a solution of 1.1 on Z\n\nvk\nif it\nsatises 1.1 for every n Z\n\nv\n. A solution y of 1.1 on Z\n\nvk\nis asymptotically convergent if the\nlimit lim\nn\nyn exists and is nite. For a given v Z\n\nn\n0\nand C, we say that y y\nv,\nis\na solution of 1.1 dened by the initial conditions v, if y\nv,\nis a solution of 1.1 on Z\n\nvk\nand y\nv,\nv m m for m Z\n0\nk\n.\n2.1. Auxiliary Inequality\nThe auxiliary inequality\nn n\n_\n\n_\nn j\n_\nn k\n_\n2.1\nwill serve as a helpful tool in the analysis of 1.1. Let v Z\n\nn\n0\n. The function : Z\n\nvk\nR is\nsaid to be a solution of 2.1 on Z\n\nvk\nif satises inequality 2.1 for n Z\n\nv\n. A solution of\n2.1 on Z\n\nvk\nis asymptotically convergent if the limit lim\nn\nn exists and is nite.\nWe give some properties of solutions of inequalities of the type 2.1, which will be\nutilized later on. We will also compare the solutions of 1.1 with the solutions of inequality\n2.1.\nLemma 2.1. Let C be strictly increasing (nondecreasing, strictly decreasing, nonincreasing) on\nZ\n0\nk\n. Then the corresponding solution y\nn\n\n,\nn of 1.1 with n\n\nn\n0\nis strictly increasing (non-\ndecreasing, strictly decreasing, nonincreasing) on Z\n\nk\ntoo.\nIf is strictly increasing (nondecreasing) and : Z\n\nn\n0\nk\nR is a solution of inequality 2.1\nwith n\n0\nm m, m Z\nn\n0\nn\n0\nk\n, then is strictly increasing (nondecreasing) on Z\n\nn\n0\nk\n.\nProof. This follows directly from 1.1, inequality 2.1, and from the properties n > 0,\nn Z\n\nn\n0\nk\n, k > j 0.\nTheorem 2.2. Let n be a solution of inequality 2.1 on Z\n\nn\n0\nk\n. Then there exists a solution yn\nof 1.1 on Z\n\nn\n0\nk\nsuch that the inequality\nyn n 2.2\n6 Abstract and Applied Analysis\nholds on Z\n\nn\n0\nk\n. In particular, a solution yn\n0\n, of 1.1 with C dened by the equation\nm : n\n0\nm, m Z\n0\nk\n2.3\nis such a solution.\nProof. Let n be a solution of inequality 2.1 on Z\n\nn\n0\nk\n. 
We will show that the solution\nyn : y\nn\n0\n,\nn of 1.1 satises inequality 2.2, that is,\ny\nn\n0\n,\nn n 2.4\non Z\n\nn\n0\nk\n. Let W : Z\n\nn\n0\nk\nR be dened by Wn n yn. Then W 0 on Z\nn\n0\nn\n0\nk\n, and,\nin addition, W is a solution of 2.1 on Z\n\nn\n0\nk\n. Lemma 2.1 implies that W is nondecreasing.\nConsequently, n yn n\n0\nyn\n0\n0 for all n n\n0\n.\n2.2. Comparison Lemma\nNow we consider an inequality of the type 2.1\n\nn\n1\nn\n_\n\n_\nn j\n_\n\nn k\n_\n, 2.5\nwhere\n1\n: Z\n\nn\n0\nk\nR\n\n## is a discrete function satisfying\n\n1\nn n on Z\n\nn\n0\nk\n. The following\ncomparison lemma holds.\nLemma 2.3. Let\n\n: Z\n\nn\n0\nk\nR\n\nn\n0\nk\n.\nThen\n\nn\n0\nk\ntoo.\nProof. Let\n\n## be a nondecreasing solution of 2.5 on Z\n\nn\n0\nk\n. We have\n\n_\nn j\n_\n\nn k 0 2.6\nbecause n k < n j. Then\n\nn\n1\nn\n_\n\n_\nn j\n_\n\nn k\n_\nn\n_\n\n_\nn j\n_\n\nn k\n_\n2.7\non Z\n\nn\n0\n. Consequently, the function :\n\n## solves inequality 2.1 on Z\n\nn\n0\n, too.\n2.3. A Solution of Inequality 2.1\nWe will construct a solution of inequality 2.1. In the following lemma, we obtain a solution\nof inequality 2.1 in the form of a sum. This auxiliary result will help us derive sucient\nconditions for the existence of a strictly increasing and asymptotically convergent solution of\n1.1 see Theorem 3.2 below.\nAbstract and Applied Analysis 7\nLemma 2.4. Let there exist a discrete function : Z\n\nn\n0\nk\nR\n\nsuch that\nn 1\nnj\n\nink1\ni 1i 2.8\non Z\n\nn\n0\n. Then there exists a solution n\n\nn\n0\nk\nhaving the\nform\n\nn :\nn\n\nin\n0\nk1\ni 1i.\n2.9\nProof. For n Z\n\nn\n0\n, we get\n\nn 1\n\nn1\n\nin\n0\nk1\ni 1i\nn\n\nin\n0\nk1\ni 1i\nnn 1,\n\n_\nn j\n_\n\nn k\nnj\n\nin\n0\nk1\ni 1i\nnk\n\nin\n0\nk1\ni 1i\n\nnj\n\nink1\ni 1i.\n2.10\nWe substitute\n\n## for in 2.1. Using 2.10, we get\n\nnn 1 n\nnj\n\nnk1\ni 1i. 2.11\nThis inequality will be satised if inequality 2.8 holds. 
Indeed, reducing the last inequality\nby n, we obtain\nn 1\nnj\n\nnk1\ni 1i, 2.12\nwhich is inequality 2.8.\n8 Abstract and Applied Analysis\n2.4. Decomposition of a Function into the Difference of\nTwo Strictly Increasing Functions\nIt is well known that every absolutely continuous function is representable as the dierence of\ntwo increasing absolutely continuous functions 16, page 318. We will need a simple discrete\nanalogue of this result.\nLemma 2.5. Every function C can be decomposed into the dierence of two strictly increasing\nfunctions\nj\nC, j 1, 2, that is,\nn\n1\nn\n2\nn, n Z\n0\nk\n. 2.13\nProof. Let constants M\nn\n> 0, n Z\n0\nk\nbe such that inequalities\nM\nn1\n> M\nn\nmax\n_\n0, n n 1\n_\n2.14\nare valid for n Z\n1\nk\n. We set\n\n1\nn : n M\nn\n, n Z\n0\nk\n,\n\n2\nn : M\nn\n, n Z\n0\nk\n.\n2.15\nIt is obvious that 2.13 holds. Now we verify that both functions\nj\n, j 1, 2 are strictly\nincreasing. The rst one should satisfy\n1\nn 1 >\n1\nn for n Z\n1\nk\n, which means that\nn 1 M\nn1\n> n M\nn\n2.16\nor\nM\nn1\n> M\nn\nn n 1. 2.17\nWe conclude that the last inequality holds because, due to 2.14, we have\nM\nn1\n> M\nn\nmax\n_\n0, n n 1\n_\nM\nn\nn n 1. 2.18\nThe inequality\n2\nn 1 >\n2\nn obviously holds for n Z\n1\nk\ndue to 2.14 as well.\n2.5. Auxiliary Asymptotic Decomposition\nThe following lemma can be proved easily by induction. The symbol O stands for the Landau\norder symbol.\nAbstract and Applied Analysis 9\nLemma 2.6. For xed r, R \\ {0}, the asymptotic representation\nn r\n\n_\n1\nr\nn\nO\n_\n1\nn\n2\n__\n2.19\nholds for n .\n3. Convergent Solutions of 1.1\nThis part deals with the problem of detecting the existence of asymptotically convergent\nsolutions. The results shown below provide sucient conditions for the existence of\nan asymptotically convergent solution of 1.1. First we present two obvious statements\nconcerning asymptotic convergence. From Lemma 2.1 and Theorem 2.2, we immediately get\nthe following.\nTheorem 3.1. 
Let n be a strictly increasing and bounded solution of 2.1 on Z\n\nn\n0\nk\n. Then there\nexists a strictly increasing and asymptotically convergent solution yn of 1.1 on Z\n\nn\n0\nk\n.\nFrom Lemma 2.1, Theorem 2.2, and Lemma 2.4, we get the following.\nTheorem 3.2. Let there exist a function : Z\n\nn\n0\nk\nR\n\nsatisfying\n\nin\n0\nk1\ni 1i <\n3.1\nand inequality 2.8 on Z\n\nn\n0\n. Then the initial function\nn\nn\n0\nn\n\nin\n0\nk1\ni 1i, n Z\n0\nk\n3.2\ndenes a strictly increasing and asymptotically convergent solution y\nn\n0\n,\nn of 1.1 on Z\n\nn\n0\nk\nsatisfying the inequality\nyn\nn\n\nin\n0\nk1\ni 1i\n3.3\non Z\n\nn\n0\n.\nAssuming that the coecient n in 1.1 can be estimated by a suitable function, we\ncan prove that 1.1 has a convergent solution.\nTheorem 3.3. Let there exist a p > 1 such that the inequality\nn\n1\nk j\n\np\n_\nk j 1\n_\n2n\n_\nk j\n_ 3.4\n10 Abstract and Applied Analysis\nholds for all n Z\n\nn\n0\nk\n. Then there exists a strictly increasing and asymptotically convergent solution\nyn of 1.1 as n .\nProof. In the proof, we assume without loss of generality that n\n0\nis suciently large for\nasymptotic computations to be valid. Let us verify that inequality 2.8 has a solution such\nthat\n\nin\n0\nk1\ni 1i < .\n3.5\nWe put\nn\n\nn :\n1\nk j\n\np\n\n2n\n, n :\n1\nn\n\n3.6\nin inequality 2.8, where p\n\n> 0 and > 1 are constants. Then, for the right-hand side Rn\nof 2.8, we have\nRn\nnj\n\nink1\n_\n1\nk j\n\np\n\n2i 1\n_\n1\ni\n\n1\nk j\nnj\n\nink1\n1\ni\n\n2\nnj\n\nink1\n1\ni 1i\n\n1\nk j\n_\n1\nn k 1\n\n1\nn k 2\n\n1\n_\nn j\n_\n\n2\n_\n1\nn kn k 1\n\n1\nn k 1n k 2\n\n1\n_\nn j 1\n__\nn j\n_\n\n_\n.\n3.7\nWe asymptotically decompose Rn as n using decomposition formula 2.19 in\nLemma 2.6. We apply this formula to each term in the rst square bracket with and\nwith r k 1 for the rst term, r k 2 for the second term, and so forth, and, nally, r j\nfor the last term. 
To estimate the terms in the second square bracket, we need only the rst\nterms of the decomposition and the order of accuracy, which can be computed easily without\nusing Lemma 2.6. We get\nRn\n1\n_\nk j\n_\nn\n\n_\n1\nk 1\nn\n1\nk 2\nn\n1\nj\nn\nO\n_\n1\nn\n2\n__\n\n2n\n1\n_\n1 1 1 O\n_\n1\nn\n__\nAbstract and Applied Analysis 11\n\n1\n_\nk j\n_\nn\n1\n_\n_\nk j\n_\nn k 1 k 2 j O\n_\n1\nn\n__\n\n2n\n1\n_\n_\nk j\n_\nO\n_\n1\nn\n__\n\n1\nn\n\n_\nk j\n_\nn\n1\n_\nk j 1\n__\nk j\n_\n2\n\np\n\n2n\n1\n_\nk j\n_\nO\n_\n1\nn\n2\n_\n,\n3.8\nand, nally,\nRn\n1\nn\n\n2n\n1\n_\nk j 1\n_\n\n2n\n1\n_\nk j\n_\nO\n_\n1\nn\n2\n_\n. 3.9\nA similar decomposition of the left-hand side Ln n 1 n 1\n\nin inequality 2.8\nleads to\nLn\n1\nn 1\n\n1\nn\n\n_\n1\n\nn\nO\n_\n1\nn\n2\n__\n\n1\nn\n\nn\n1\nO\n_\n1\nn\n2\n_\n3.10\nwe use decomposition formula 2.19 in Lemma 2.6 with and r 1.\nComparing Ln and Rn, we see that, for Ln Rn, it is necessary to match the\ncoecients of the terms n\n1\nbecause the coecients of the terms n\n\n## are equal. It means that\n\nwe need the inequality\n>\n\n_\nk j 1\n_\n2\n\np\n\n2\n_\nk j\n_\n.\n3.11\nSimplifying this inequality, we get\np\n\n2\n_\nk j\n_\n>\n\n_\nk j 1\n_\n2\n,\np\n\n_\nk j\n_\n>\n_\nk j 1\n_\n,\n3.12\nand, nally,\np\n\n>\n\n_\nk j 1\n_\nk j\n. 3.13\nWe set\np\n\n: p\nk j 1\nk j\n, 3.14\n12 Abstract and Applied Analysis\nwhere p const. Then the previous inequality holds for p > , that is, for p > 1. Consequently,\nthe function\n\n## dened by 3.6 has the form\n\nn\n1\nk j\n\np\n_\nk j 1\n_\n2\n_\nk j\n_\nn\n3.15\nwith p > 1, and, for the function\n\nn\nn\n\nin\n0\nk1\n_\n1\nk j\n\np\n_\nk j 1\n_\n2\n_\nk j\n_\ni 1\n_\n1\ni\n\n. 3.16\nFunction\n\n## n is a positive solution of inequality 2.1, and, moreover, it is easy to verify that\n\n< since > 1. 
This is a solution to every inequality of the type 2.1 if the function\n\n## xed by formula 3.15 is changed by an arbitrary function satisfying inequality 3.4.\n\nThis is a straightforward consequence of Lemma 2.3 if, in its formulation, we set\n\n1\nn :\n\nn\n1\nk j\n\np\n_\nk j 1\n_\n2\n_\nk j\n_\nn\n3.17\nwith p > 1 since\n\nwith :\n\n## as dened by 3.16, we conclude that there exists a strictly increasing and\n\nconvergent solution yn of 1.1 as n satisfying the inequality\nyn <\n\nn 3.18\non Z\n\nn\n0\nk\n.\n4. Convergence of All Solutions\nIn this part we present results concerning the convergence of all solutions of 1.1. First we\nuse inequality 3.4 to state the convergence of all the solutions.\nTheorem 4.1. Let there exist a p > 1 such that inequality 3.4 holds for all n Z\n\nn\n0\nk\n. Then all\nsolutions of 1.1 are convergent as n .\nProof. First we prove that every solution dened by a monotone initial function is convergent.\nWe will assume that a strictly monotone initial function C is given. For deniteness,\nlet be strictly increasing or nondecreasing the case when it is strictly decreasing or\nnonincreasing can be considered in much the same way. By Lemma 2.1, the solution y\nn\n0\n,\nis monotone; that is, it is either strictly increasing or nondecreasing. We prove that y\nn\n0\n,\nis\nconvergent.\nBy Theorem 3.3 there exists a strictly increasing and asymptotically convergent\nsolution y Yn of 1.1 on Z\n\nn\n0\nk\n. Without loss of generality we assume y\nn\n0\n, /Yn on\nAbstract and Applied Analysis 13\nZ\n\nn\n0\nk\nsince, in the opposite case, we can choose another initial function. Similarly, without\nloss of generality, we can assume\nYn > 0, n Z\nn\n0\n1\nn\n0\nk\n. 4.1\nHence, there is a constant > 0 such that\nYn yn > 0, n Z\nn\n0\n1\nn\n0\nk\n4.2\nor\n\n_\nYn yn\n_\n> 0, n Z\nn\n0\n1\nn\n0\nk\n, 4.3\nand the function Yn yn is strictly increasing on Z\nn\n0\n1\nn\n0\nk\n. 
Then Lemma 2.1 implies that\nYn yn is strictly increasing on Z\n\nn\n0\nk\n. Thus\nYn yn > Yn\n0\nyn\n0\n, n Z\n\nn\n0\n4.4\nor\nyn <\n1\n\nYn Yn\n0\nyn\n0\n, n Z\n\nn\n0\n, 4.5\nand, consequently, yn is a bounded function on Z\n\nn\n0\nk\nbecause of the boundedness of Yn.\nObviously, in such a case, yn is asymptotically convergent and has a nite limit.\nSummarizing the previous section, we state that every monotone solution is conver-\ngent. It remains to consider a class of all nonmonotone initial functions. For the behavior of a\nsolution y\nn\n0\n,\ngenerated by a nonmonotone initial function C, there are two possibilities:\ny\nn\n0\n,\nis either eventually monotone and, consequently, convergent, or y\nn\n0\n,\nis eventually\nnonmonotone.\nNow we use the statement of Lemma 2.5 that every discrete function C can be\ndecomposed into the dierence of two strictly increasing discrete functions\nj\nC, j 1, 2.\nIn accordance with the previous part of the proof, every function\nj\nC, j 1, 2 denes\na strictly increasing and asymptotically convergent solution y\nn\n0\n,\nj\n\n## . Now it is clear that the\n\nsolution y\nn\n0\n,\nis asymptotically convergent.\nWe will nish the paper with two obvious results. Inequality 3.4 in Theorem 3.3 was\nnecessary only for the proof of the existence of an asymptotically convergent solution. If we\nassume the existence of an asymptotically convergent solution rather than inequality 3.4,\nwe can formulate the following result, the proof of which is an elementary modication of\nthe proof of Theorem 4.1.\nTheorem 4.2. If 1.1 has a strictly monotone and asymptotically convergent solution on Z\n\nn\n0\nk\n, then\nall the solutions of 1.1 dened on Z\n\nn\n0\nk\nare asymptotically convergent.\n14 Abstract and Applied Analysis\nCombining the statements of Theorems 2.2, 3.1, and 4.2, we get a series of equivalent\nstatements below.\nTheorem 4.3. 
The following three statements are equivalent.\na Equation 1.1 has a strictly monotone and asymptotically convergent solution on Z\n\nn\n0\nk\n.\nb All solutions of 1.1 dened on Z\n\nn\n0\nk\nare asymptotically convergent.\nc Inequality 2.1 has a strictly monotone and asymptotically convergent solution on Z\n\nn\n0\nk\n.\nExample 4.4. We will demonstrate the sharpness of the criterion 3.4 by the following\nexample. Let k 1, j 0, n 1 1/n, n Z\n\nn\n0\n1\n, n\n0\n2 in 1.1; that is, we consider\nthe equation\nyn\n_\n1\n1\nn\n_\n_\nyn yn 1\n_\n. 4.6\nBy Theorems 3.3 and 4.3, all solutions are asymptotically convergent if\nn\n1\nk j\n\np\n_\nk j 1\n_\n2n\n_\nk j\n_ 1\np\nn\n, 4.7\nwhere a constant p > 1. In our case the inequality 4.7 does not hold since inequality\nn 1\n1\nn\n1\np\nn\n4.8\nis valid only for p 1. Inequality 4.7 is sharp because there exists a solution y y\n\nn of\n4.6 having the form of an nth partial sum of harmonic series, that is,\ny\n\nn\nn\n\ni1\n1\ni\n4.9\nwith the obvious property lim\nn\ny\n\n## n . Then by Theorem 4.3, all strictly monotone\n\nincreasing or decreasing solutions of 4.6 tend to innity.\nAcknowledgments\nThe research was supported by the Project APVV-0700-07 of the Slovak Research and\nDevelopment Agency and by the Grant no. 1/0090/09 of the Grant Agency of Slovak\nRepublic VEGA.\nReferences\n1 O. Arino and M. Pituk, Convergence in asymptotically autonomous functional-dierential equa-\ntions, Journal of Mathematical Analysis and Applications, vol. 237, no. 1, pp. 376392, 1999.\nAbstract and Applied Analysis 15\n2 H. Bereketo glu and F. Karakoc, Asymptotic constancy for impulsive delay dierential equations,\nDynamic Systems and Applications, vol. 17, no. 1, pp. 7183, 2008.\n3 I. Gy ori, F. Karakoc, and H. Bereketo glu, Convergence of solutions of a linear impulsive dierential\nequations system with many delays, Dynamics of Continuous, Discrete and Impulsive Systems Series A,\nMathematical Analysis, vol. 18, no. 2, pp. 
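As a quick numerical sanity check of Example 4.4 (not part of the paper): with (4.6) read as Δy(n) = (1 + 1/n)^{−1}(y(n) − y(n − 1)), the harmonic partial sums in (4.9) satisfy it exactly, since y*(n+1) − y*(n) = 1/(n+1) and (n/(n+1))·(1/n) = 1/(n+1). A short verification in exact rational arithmetic:

```python
from fractions import Fraction

# Verify that y*(n) = sum_{i=1}^{n} 1/i satisfies
#   y(n+1) - y(n) = (1 + 1/n)^(-1) * (y(n) - y(n-1))   for n >= 2.
def harmonic(n):
    return sum(Fraction(1, i) for i in range(1, n + 1))

for n in range(2, 60):
    lhs = harmonic(n + 1) - harmonic(n)                          # = 1/(n+1)
    rhs = (harmonic(n) - harmonic(n - 1)) / (1 + Fraction(1, n))
    assert lhs == rhs == Fraction(1, n + 1)

print("y*(n) solves the difference equation for n = 2..59")
```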
https://sutherland.che.utah.edu/teaching/chen1703/lectures/
## Lecture Notes

| Lecture Notes | Video | Other resources |
| --- | --- | --- |
| Introduction.pdf | Introduction | |
| Matlab Introduction | Matlab Introduction | projectile.m - solves the projectile motion equations |
| Arrays in Matlab | Arrays in Matlab | degRadConvert.m - example of making and using arrays. Here is another example of the same thing. |
| Matrix & vector algebra | | |
| Introduction to Excel | Introduction to Excel | excelIntro.xls - example of very basic Excel usage. |
| Linear Systems of Equations | Linear Systems of Equations | incinerator.m - example of solving a linear system of equations using Matlab. See the lecture notes on linear systems of equations for the problem definition. |
| Plotting in Matlab | | sincos.m - example of plotting where the user chooses either radians or degrees to input the plotting range. |
| Logic & Loops | Logic & Loops - Part 1; Logic & Loops - Part 2 | nfactfor.m - calculates the factorial of a number using a FOR loop. nfactwhile.m - calculates the factorial of a number using a WHILE loop. vecOps.m - calculates the dot product or elemental product using for loops; demonstrates usage of if statements and for loops. |
| Reports in MS Word | Reports in MS Word | Example of using equations in MS Word, with automatic numbering and cross-referencing. |
| Input & Output in Matlab | Input & Output in Matlab - Part 2 | Examples of using fprintf: a temperature conversion table (output to the command window) and a version that outputs to a file. |
| Interpolation | Interpolation | Polynomial interpolation using Matlab's built-in functions and the "manual" way. NOTE: these use different data than what you should use for your homework. Use the data from the lecture notes. |
| Regression | Regression - Part 1; Regression - Part 2 | Examples of regression: the ball drop example and the kinetics example |
| Functions in Matlab | Functions in Matlab | Example of creating a function in Matlab: angle_table.m |
| Nonlinear equations - Part 1; Part 2 | Nonlinear equations - Part 1; Part 2 | Using FZERO to solve nonlinear equations in Matlab (the projectile example: function and driver file). Solving a system of nonlinear equations using SOLVER in Excel: nonlinSys.xls. Regression using optimization in Excel (regression.xls) and in Matlab (kineticsInClass.m). |
| Matlab's Symbolic Toolbox - Part 1; Part 2 | Matlab's Symbolic Toolbox - Part 1; Part 2; Part 3 | |
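The factorial scripts listed above (nfactfor.m, nfactwhile.m) are a classic loop exercise. For readers without Matlab, here is the same two-loop idea sketched in Python; this is an illustrative analogue, not the course's actual code:

```python
def fact_for(n):
    # Same idea as nfactfor.m: accumulate the product with a for loop.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def fact_while(n):
    # Same idea as nfactwhile.m: count down with a while loop.
    result = 1
    while n > 1:
        result *= n
        n -= 1
    return result

assert fact_for(5) == fact_while(5) == 120
```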
https://www.colorhexa.com/549a52
"# #549a52 Color Information\n\nIn a RGB color space, hex #549a52 is composed of 32.9% red, 60.4% green and 32.2% blue. Whereas in a CMYK color space, it is composed of 45.5% cyan, 0% magenta, 46.8% yellow and 39.6% black. It has a hue angle of 118.3 degrees, a saturation of 30.5% and a lightness of 46.3%. #549a52 color hex could be obtained by blending #a8ffa4 with #003500. Closest websafe color is: #669966.\n\n• R 33\n• G 60\n• B 32\nRGB color chart\n• C 45\n• M 0\n• Y 47\n• K 40\nCMYK color chart\n\n#549a52 color description : Dark moderate lime green.\n\n# #549a52 Color Conversion\n\nThe hexadecimal color #549a52 has RGB values of R:84, G:154, B:82 and CMYK values of C:0.45, M:0, Y:0.47, K:0.4. Its decimal value is 5544530.\n\nHex triplet RGB Decimal 549a52 `#549a52` 84, 154, 82 `rgb(84,154,82)` 32.9, 60.4, 32.2 `rgb(32.9%,60.4%,32.2%)` 45, 0, 47, 40 118.3°, 30.5, 46.3 `hsl(118.3,30.5%,46.3%)` 118.3°, 46.8, 60.4 669966 `#669966`\nCIE-LAB 57.659, -37.262, 30.997 16.734, 25.604, 12.043 0.308, 0.471, 25.604 57.659, 48.469, 140.244 57.659, -33.463, 44.279 50.601, -29.52, 21.31 01010100, 10011010, 01010010\n\n# Color Schemes with #549a52\n\n• #549a52\n``#549a52` `rgb(84,154,82)``\n• #98529a\n``#98529a` `rgb(152,82,154)``\nComplementary Color\n• #789a52\n``#789a52` `rgb(120,154,82)``\n• #549a52\n``#549a52` `rgb(84,154,82)``\n• #529a74\n``#529a74` `rgb(82,154,116)``\nAnalogous Color\n• #9a5278\n``#9a5278` `rgb(154,82,120)``\n• #549a52\n``#549a52` `rgb(84,154,82)``\n• #74529a\n``#74529a` `rgb(116,82,154)``\nSplit Complementary Color\n• #9a5254\n``#9a5254` `rgb(154,82,84)``\n• #549a52\n``#549a52` `rgb(84,154,82)``\n• #52549a\n``#52549a` `rgb(82,84,154)``\n• #9a9852\n``#9a9852` `rgb(154,152,82)``\n• #549a52\n``#549a52` `rgb(84,154,82)``\n• #52549a\n``#52549a` `rgb(82,84,154)``\n• #98529a\n``#98529a` `rgb(152,82,154)``\n• #396837\n``#396837` `rgb(57,104,55)``\n• #427940\n``#427940` `rgb(66,121,64)``\n• #4b8949\n``#4b8949` `rgb(75,137,73)``\n• #549a52\n``#549a52` 
`rgb(84,154,82)``\n• #5fa95d\n``#5fa95d` `rgb(95,169,93)``\n• #6fb26d\n``#6fb26d` `rgb(111,178,109)``\n• #80ba7e\n``#80ba7e` `rgb(128,186,126)``\nMonochromatic Color\n\n# Alternatives to #549a52\n\nBelow, you can see some colors close to #549a52. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #669a52\n``#669a52` `rgb(102,154,82)``\n• #609a52\n``#609a52` `rgb(96,154,82)``\n• #5a9a52\n``#5a9a52` `rgb(90,154,82)``\n• #549a52\n``#549a52` `rgb(84,154,82)``\n• #529a56\n``#529a56` `rgb(82,154,86)``\n• #529a5c\n``#529a5c` `rgb(82,154,92)``\n• #529a62\n``#529a62` `rgb(82,154,98)``\nSimilar Colors\n\n# #549a52 Preview\n\nThis text has a font color of #549a52.\n\n``<span style=\"color:#549a52;\">Text here</span>``\n#549a52 background color\n\nThis paragraph has a background color of #549a52.\n\n``<p style=\"background-color:#549a52;\">Content here</p>``\n#549a52 border color\n\nThis element has a border color of #549a52.\n\n``<div style=\"border:1px solid #549a52;\">Content here</div>``\nCSS codes\n``.text {color:#549a52;}``\n``.background {background-color:#549a52;}``\n``.border {border:1px solid #549a52;}``\n\n# Shades and Tints of #549a52\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000000 is the darkest color, while #f3f8f3 is the lightest one.\n\n• #000000\n``#000000` `rgb(0,0,0)``\n• #070d07\n``#070d07` `rgb(7,13,7)``\n• #0e1a0e\n``#0e1a0e` `rgb(14,26,14)``\n• #152715\n``#152715` `rgb(21,39,21)``\n• #1c341b\n``#1c341b` `rgb(28,52,27)``\n• #234022\n``#234022` `rgb(35,64,34)``\n• #2a4d29\n``#2a4d29` `rgb(42,77,41)``\n• #315a30\n``#315a30` `rgb(49,90,48)``\n• #386737\n``#386737` `rgb(56,103,55)``\n• #3f743e\n``#3f743e` `rgb(63,116,62)``\n• #468044\n``#468044` `rgb(70,128,68)``\n• #4d8d4b\n``#4d8d4b` `rgb(77,141,75)``\n• #549a52\n``#549a52` `rgb(84,154,82)``\n• #5ba759\n``#5ba759` `rgb(91,167,89)``\n``#68ad66` `rgb(104,173,102)``\n• #74b473\n``#74b473` `rgb(116,180,115)``\n• #81bb7f\n``#81bb7f` `rgb(129,187,127)``\n• #8ec28c\n``#8ec28c` `rgb(142,194,140)``\n• #9ac999\n``#9ac999` `rgb(154,201,153)``\n• #a7d0a6\n``#a7d0a6` `rgb(167,208,166)``\n• #b4d6b3\n``#b4d6b3` `rgb(180,214,179)``\n• #c0ddbf\n``#c0ddbf` `rgb(192,221,191)``\n• #cde4cc\n``#cde4cc` `rgb(205,228,204)``\n• #d9ebd9\n``#d9ebd9` `rgb(217,235,217)``\n• #e6f2e6\n``#e6f2e6` `rgb(230,242,230)``\n• #f3f8f3\n``#f3f8f3` `rgb(243,248,243)``\nTint Color Variation\n\n# Tones of #549a52\n\nA tone is produced by adding gray to any pure hue. 
In this case, #6e7f6d is the less saturated color, while #07ec00 is the most saturated one.\n\n• #6e7f6d\n``#6e7f6d` `rgb(110,127,109)``\n• #658864\n``#658864` `rgb(101,136,100)``\n• #5d915b\n``#5d915b` `rgb(93,145,91)``\n• #549a52\n``#549a52` `rgb(84,154,82)``\n• #4ba349\n``#4ba349` `rgb(75,163,73)``\n• #43ac40\n``#43ac40` `rgb(67,172,64)``\n• #3ab537\n``#3ab537` `rgb(58,181,55)``\n• #32be2e\n``#32be2e` `rgb(50,190,46)``\n• #29c725\n``#29c725` `rgb(41,199,37)``\n• #21d01c\n``#21d01c` `rgb(33,208,28)``\n• #18da12\n``#18da12` `rgb(24,218,18)``\n• #0fe309\n``#0fe309` `rgb(15,227,9)``\n• #07ec00\n``#07ec00` `rgb(7,236,0)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #549a52 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
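The RGB, CMYK and HSL figures quoted above follow from the standard conversion formulas; a short sketch reproducing them for #549a52, rounding to one decimal place as the page does (the hue branch below assumes green is the largest channel, which holds for this color):

```python
def hex_to_conversions(hex_code):
    r, g, b = (int(hex_code[i:i + 2], 16) / 255 for i in (0, 2, 4))
    mx, mn = max(r, g, b), min(r, g, b)
    # CMYK: K from the brightest channel, then C/M/Y relative to it.
    k = 1 - mx
    c, m, y = ((mx - v) / mx for v in (r, g, b))
    # HSL: for this color max is g, so hue = 60 * ((b - r) / (mx - mn) + 2).
    h = 60 * ((b - r) / (mx - mn) + 2)
    l = (mx + mn) / 2
    s = (mx - mn) / (mx + mn)  # valid branch since l < 0.5 here
    return {name: round(val, 1) for name, val in {
        "C": c * 100, "M": m * 100, "Y": y * 100, "K": k * 100,
        "H": h, "S": s * 100, "L": l * 100}.items()}

print(hex_to_conversions("549a52"))
# matches the page: C 45.5, M 0.0, Y 46.8, K 39.6; H 118.3, S 30.5, L 46.3
```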
https://www.openset.wang/problems/n-th-tribonacci-number/
# N-th Tribonacci Number

## 1137. N-th Tribonacci Number (Easy)

The Tribonacci sequence T_n is defined as: T0 = 0, T1 = 1, T2 = 1, and T(n+3) = T(n) + T(n+1) + T(n+2) for n >= 0.

```
Input: n = 4
Output: 4
Explanation:
T_3 = 0 + 1 + 1 = 2
T_4 = 1 + 1 + 2 = 4
```

```
Input: n = 25
Output: 1389537
```

Constraints:

• `0 <= n <= 37`
• The answer is guaranteed to be a 32-bit integer, i.e. `answer <= 2^31 - 1`

[Recursion]

1. Climbing Stairs (Easy)
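Although the page tags the problem [Recursion], an iterative solution comfortably fits the `n <= 37` constraint in O(n) time and O(1) space:

```python
def tribonacci(n):
    # T0 = 0, T1 = 1, T2 = 1; iterate keeping only the last three values.
    if n == 0:
        return 0
    a, b, c = 0, 1, 1
    for _ in range(n - 2):
        a, b, c = b, c, a + b + c
    return c if n >= 2 else b

print(tribonacci(4))   # 4
print(tribonacci(25))  # 1389537
```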
https://extropians.weidai.com/extropians.4Q97/1462.html
# Re: Infinities

Keith Elis (hagbard@ix.netcom.com)
Sun, 16 Nov 1997 11:59:46 -0500

John K Clark wrote:

> On Fri, 14 Nov 1997 Harvey Newstrom <harv@gate.net> wrote:
>
> >It seems obvious to me from a physical standpoint, that the 2 x
> >infinity inches volume is twice as large as the 1 x infinity inches
> >volume. It would take two of the former added together to equal the
> >latter. If you superimposed the 1 x infinity inches volume inside
> >the 2 x infinity inches volume, you would have 1 x infinity inches
> >volume left over.
>
> Infinite numbers do not obey the same laws of arithmetic that finite numbers
> do; however, you do count them, that is, determine how big they are, in
> exactly the same way: by putting them in a one-to-one relationship with
> something else. I know that if I can put each of the fingers on my right
> hand in a one-to-one correspondence with some apples and have no apples or
> fingers left over, then there must be 5 apples. In the same way I can put
> the odd integers in a one-to-one relationship with all the integers, both
> the odd AND the even, so there must be an equal number of both.
>
> 1 - 1
> 3 - 2
> 5 - 3
> 7 - 4
> 9 - 5
> .
> .
>
> Or you can prove that all lines are composed of the same number of points
> regardless of length. Draw 2 parallel lines, a short one and a long one
> below it, pick a point midway along the short line but above it.
>
>         /\
>        /  \
>       /________\
>      /          \
>     /________________\
>
> Draw a line from that point to any place on the short line, then continue it
> until you hit the long line. You've made a one-to-one correspondence between
> all the points in the short line and all the points in the long line, so they
> must have an equal number of points.
>
> But not all infinities are equal.
Let's try to put the integers in a one-to-one
> correspondence with all the points in the line from 0 to 1, expressed as a
> decimal.
>
> 1 - 0.a1,a2,a3,a4,a5 ...
> 2 - 0.b1,b2,b3,b4,b5 ...
> 3 - 0.c1,c2,c3,c4,c5 ...
> 4 - 0.d1,d2,d3,d4,d5 ...
> .
> .
>
> The trouble is it doesn't work; there are decimals not included, for example
> the point 0.A1,B2,C3,D4,E5 ... where A1 is any digit except a1, B2 is any
> digit except b2, C3 is any digit except c3, etc. This point differs in at
> least one decimal place from any point in our one-to-one scheme; we've used
> all the integers but there are still points remaining, so there must be more
> points on a line than integers.

There are an infinite number of points between 0 and 1. There are an infinite
number of points between 1 and 2. What about all the points between 0 and 2?
There is an infinite number, yes, but it is composed of the points from 0 to 1
and 1 to 2. Does this mean that the number of points between 0 and 2 is double
that of both?

No, it's just infinite. Does it really make sense to say that all infinities are
not equal? I mean, infinity is used in mathematics thanks to a symbol that
supposedly quantizes it, or at least allows us to represent the mathematical
concept of infinity. But is infinity ever a quantity? We can't really ever grasp
infinity any more than we can grasp the size of [(10^100)^100]^100. The difference
So then the ultimate infinity is the\ninfinity that is larger than all the rest -- the inifinitely infinite infinity.\nBut then what about an infinity even greater than that....\n\nBlah, blah, blah....\n\nThere doesn't seem to be a worthwhile way to deal with this unless every infinity\nis just plain old infinity, unbounded, and -- to try to keep it within our\nsemantic framework -- equal."
https://online-calculator.org/how-old-am-i-if-i-was-born-on-10-24-1911
How old am I if I was born on October 24, 1911?

How old am I if I was born on October 24, 1911? - October 24, 1911 age, to find out how old someone born on October 24, 1911 is in years, months, weeks, days, hours, minutes and seconds.

October 24, 1911 Age

You are 107 years, 11 months, 3 weeks, and 3 days old

or 1,295 months old
or 5,634 weeks old
or 39,441 days old
or 946,584 hours old
or 56,795,040 minutes old
or 3,407,702,400 seconds old

You were born on a Tuesday.
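These figures are easy to reproduce with Python's datetime module. The page does not show its reference date, but its own 39,441-day count pins "today" to 2019-10-18, which is assumed below:

```python
from datetime import date

birth = date(1911, 10, 24)
today = date(2019, 10, 18)  # inferred from the page's 39,441-day figure

days = (today - birth).days
print(days)                        # 39441
print(days // 7, days % 7)         # 5634 full weeks, 3 days over
print(days * 24, days * 24 * 60)   # 946584 hours, 56795040 minutes
print(birth.strftime("%A"))        # Tuesday
```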
https://www.splashlearn.com/s/math-worksheets/subtract-fractions-from-mixed-numbers
# Subtract Fractions from Mixed Numbers Worksheet

## Know more about Subtract Fractions from Mixed Numbers Worksheet
Struggles with subtraction of fractions can easily be overcome if students practice the concept in a fun and engaging way! Visuals in the content attract students' attention and aid comprehension. Here, students solve a variety of problems using fraction models as a visual aid. The worksheet involves both fractions and mixed numbers; it is important for students to gain confidence in a concept by working at different levels of complexity.
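A worked example of the kind of problem such a worksheet drills: subtracting a fraction from a mixed number. Python's exact Fraction type makes the arithmetic explicit (the specific numbers are illustrative, not taken from the worksheet):

```python
from fractions import Fraction

# 2 3/4 - 5/8: convert the mixed number to an improper fraction first.
mixed = 2 + Fraction(3, 4)       # 11/4
result = mixed - Fraction(5, 8)  # 22/8 - 5/8 = 17/8

print(result)                    # 17/8
whole, rem = divmod(result.numerator, result.denominator)
print(f"{whole} {rem}/{result.denominator}")  # 2 1/8
```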
https://physics.stackexchange.com/questions/505708/magnetic-vector-potential-a-measure-of-momentum/505747
# Magnetic Vector Potential - a measure of Momentum

I am trying to understand the magnetic vector potential as momentum per unit charge, that is, somehow, the "total" momentum should be $$p = mv + qA$$.

Here's a part of the paper I was reading - I am facing difficulties understanding the mathematical gymnastics involved, and would appreciate any help!
[figure omitted: excerpt from the paper showing Eqs. (16)-(19)]
Firstly, let's start with (16). On the RHS, the dot product of velocity with $$v \times B$$ is zero, and only the first term remains. To arrive at (18) from (16), I added the time derivative of $$q\phi$$ on both sides - the LHS is now what is desired, so manipulation of the RHS should yield the RHS of (18); and this is where I'm stuck. How should I take it from here? Please help! A detailed explanation would be great. I'm using Griffiths as the text for my E&M course, and I am only familiar with the level of vector calculus used in the book, and that is probably why I'm stuck where I am.

Also, I need help deriving (19) (trust me, I tried).

Lastly, what is the difference between $$p = mv$$ and $$p = mv + qA$$? We call both momentum, and I'd learnt only the former in a classical mechanics course. So what is the difference in meaning and interpretation of the two different momenta? I am really interested in understanding this concept - the vector potential as momentum per unit charge - please help me out! Also, this is a very absorbing concept, one that isn't touched upon in most E&M texts, so this post, when complete with answers, would definitely interest many people on Physics SE!

Here's the link to the entire paper, if the readers would like to know more: Thoughts on the magnetic vector potential

• Equation (14) in your document gives you the Lagrangian. The Euler-Lagrange method then gives you the equations of motion, which yield equation (19).
– Eli Oct 1 '19 at 13:08\n\nFirst the derivations:\n\nEquation 18:\n\nThe connection between total time derivative $$\\frac{d}{dt}$$ and partial time derivative $$\\frac{\\partial}{\\partial t}$$ of a scalar function is, using the chain rule,\n\n$$\\frac{d}{dt} = \\frac{\\partial}{\\partial t} + \\frac{\\partial x}{\\partial t} \\frac{\\partial}{\\partial x} + \\frac{\\partial y}{\\partial t} \\frac{\\partial}{\\partial y} + \\frac{\\partial z}{\\partial t} \\frac{\\partial}{\\partial z}$$\n\nNotice that this is equal to\n\n$$\\frac{d}{dt} = \\frac{\\partial}{\\partial t} + \\mathbf{v} \\cdot \\nabla$$\n\nHence $$q \\frac{d \\phi}{dt} = q \\frac{\\partial \\phi}{\\partial t} + q \\mathbf{v} \\cdot \\nabla \\phi$$, where the last term is the one appearing in Eq. 16, but with a minus sign.\n\nFrom this we get\n\n$$\\frac{d}{dt} \\left( \\frac{1}{2} m v^2 \\right) = -q \\mathbf{v} \\cdot \\nabla \\phi - q \\mathbf{v} \\cdot \\frac{\\partial \\mathbf{A}}{\\partial t}$$\n\n$$\\frac{d}{dt} \\left( \\frac{1}{2} m v^2 \\right) = q \\frac{\\partial \\phi}{\\partial t} - q \\frac{d \\phi}{dt} - q \\mathbf{v} \\cdot \\frac{\\partial \\mathbf{A}}{\\partial t}$$\n\n$$\\frac{d}{dt} \\left( \\frac{1}{2} m v^2 \\right) + q \\frac{d \\phi}{dt} = q \\frac{\\partial \\phi}{\\partial t} - q \\mathbf{v} \\cdot \\frac{\\partial \\mathbf{A}}{\\partial t}$$\n\n$$\\frac{d}{dt} \\left(\\frac{1}{2} m v^2 + q \\phi \\right) = q \\left(\\frac{\\partial \\phi}{\\partial t} - \\mathbf{v} \\cdot \\frac{\\partial \\mathbf{A}}{\\partial t} \\right)$$\n\nwhich is what I think Eq. 18 is supposed to be. Here it is also perfectly clear that if $$\\phi, \\mathbf{A}$$ are not explicitly time dependent, then their partial time derivatives are 0, and the whole RHS is zero.\n\nEquation 19:\n\nHere we need the total time derivative of a vector function $$\\mathbf{A}$$. 
By looking at the components of the vector, we should realize that

$$\frac{d \mathbf{A}}{dt} = \frac{\partial \mathbf{A}}{\partial t} + \left( \mathbf{v} \cdot \nabla \right) \mathbf{A}$$

with the interpretation that e.g.

$$\frac{d A_x}{dt} = \frac{\partial A_x}{\partial t} + \frac{\partial x}{\partial t} \frac{\partial A_x}{\partial x} + \frac{\partial y}{\partial t} \frac{\partial A_x}{\partial y} + \frac{\partial z}{\partial t} \frac{\partial A_x}{\partial z}$$

We have a useful vector calculus identity:

$$\mathbf{v} \times (\nabla \times \mathbf{A}) = \nabla_\mathbf{A} (\mathbf{v} \cdot \mathbf{A}) - (\mathbf{v} \cdot \nabla) \mathbf{A}$$

where the notation $$\nabla_\mathbf{A}$$ means only the variation due to $$\mathbf{A}$$ is taken into account in that term.

Starting from Newton's second law, $$\frac{d}{dt}(m \mathbf{v} ) = q(\mathbf{E} + \mathbf{v} \times \mathbf{B})$$ and inserting the potentials, we get

$$\frac{d}{dt}(m \mathbf{v} ) = q \left(- \nabla \phi - \frac{\partial \mathbf{A}}{\partial t} + \mathbf{v} \times (\nabla \times \mathbf{A}) \right)$$

Using the identity gives

$$\frac{d}{dt}(m \mathbf{v} ) = q \left(- \nabla \phi - \frac{\partial \mathbf{A}}{\partial t} + \nabla_\mathbf{A} (\mathbf{v} \cdot \mathbf{A}) - (\mathbf{v} \cdot \nabla) \mathbf{A} \right)$$

$$\frac{d}{dt}(m \mathbf{v} ) + q \left( \frac{\partial \mathbf{A}}{\partial t} + (\mathbf{v} \cdot \nabla) \mathbf{A} \right) = q \left(- \nabla \phi + \nabla_\mathbf{A} (\mathbf{v} \cdot \mathbf{A}) \right)$$

On the LHS we identify the total derivative of the vector potential, so finally

$$\frac{d}{dt} \left( m \mathbf{v} + q \mathbf{A} \right) = - q \nabla \left(\phi - \mathbf{v} \cdot \mathbf{A} \right)$$

where the gradient on the RHS is understood in accordance with the clarification about the identity we used.

Now the
meaning:

The question of the difference between $$p=mv$$ and $$p=mv + qA$$ has been asked before on this site, e.g. here. In short, "momentum" is a type of quantity that is often of great importance in physics, just as "energy" is, and there exist several different momenta, just as there exist different important forms of energy.

In the classical theories of Lagrangian and Hamiltonian mechanics, the fundamental variables are so-called generalized coordinates $$q_i$$ and their time derivatives $$\dot{q}_i$$. They can be, and often are, real-space coordinates, but they can also be angles or other ways of describing the configuration of a system. For each generalized coordinate, there exists a corresponding quantity called the conjugate momentum, which is given by

$$p_i = \frac{\partial L}{\partial \dot{q}_i}$$

where $$L$$ is the Lagrangian, usually the kinetic minus the potential energy. In the case of a charged particle in an EM field, the conjugate momentum to position turns out to be exactly $$mv + qA$$.

a) Eq. 18 does not tell us that $$\mathbf{A}$$ is momentum per unit charge; Eq. 19 does. Eq. 18 is a conservation law; it tells you that when your charged particle is moving around, the conserved quantity is not just the kinetic energy, but $$1/2 m v^2 + q \phi$$. Hence we can think of $$q \phi$$ as potential energy $$E_p$$, and then

$$\frac{E_p}{q} = \phi$$

is interpreted as potential energy per charge. Likewise for Eq. 19, it establishes a mathematical form similar to Newton's second law when written in the form

$$\frac{d p}{dt} = - \nabla U$$

Comparing Eq. 19 to this form, we see that the "total momentum" is now $$m \mathbf{v} + q \mathbf{A}$$, so kinetic momentum $$p_k$$ + electromagnetic momentum $$p_{EM}$$. Hence

$$\frac{p_{EM}}{q} = \frac{q \mathbf{A}}{q} = \mathbf{A}$$

is electromagnetic momentum per unit charge.

b) I'll explain the chain rule for scalar functions, called fields.
Remember that fields $$\\phi$$ depend on position in space, and possibly explicitly on time too:\n\n$$\\phi = \\phi(t, x(t), y(t), z(t))$$\n\nThe reason that I'm letting the spatial coordinates depend on $$t$$, is that the particle we are describing the motion of, will of course move around, and the only value of $$\\phi$$ relevant to the particle is the field value at the particle's position.\n\nSo how can the field at the position of the particle change with time? Of course it can change explicitly with time in a given fixed point in space, so that is what $$\\frac{\\partial}{\\partial t}$$ gives you. But if the particle moves a small amount in the $$x$$ direction over a small time, then the change is\n\n$$\\frac{\\partial x}{\\partial t} \\frac{\\partial \\phi}{\\partial x}$$\n\nby the ordinary chain rule. And since the three spatial directions are not explicitly dependent on each other (instead they all depend on time $$t$$), we get contributions like this for all three directions.\n\nc) The general rule for the gradient of the dot product of two vector fields is\n\n$$\\nabla(\\mathbf{v} \\cdot \\mathbf{A}) = \\mathbf{v} \\times (\\nabla \\times \\mathbf{A}) + \\mathbf{A} \\times (\\nabla \\times \\mathbf{v}) + (\\mathbf{v} \\cdot \\nabla) \\mathbf{A} + (\\mathbf{A} \\cdot \\nabla) \\mathbf{v}$$\n\nOf course it is symmetric in the two vector fields. The point here is that in our physical situation, it is only $$\\mathbf{A}$$ that is a proper vector field (one vector value for each point in space). The quantity $$\\mathbf{v}$$ is not a field. There is only one value of $$\\mathbf{v}$$ for each point in time, and it is not associated to anything in space, apart from maybe the position of the particle whose motion we are describing. In that sense, we don't consider the \"spatial variation\" in $$\\mathbf{v}$$, so we drop the two terms where $$\\nabla$$ is acting on $$\\mathbf{v}$$, and keep the two terms where $$\\nabla$$ acts on $$\\mathbf{A}$$. 
The notation $$\\nabla_\\mathbf{A}$$ signifies just that.\n\n• How does equation (18) explicitly tell me that magnetic potential is indeed momentum per unit charge? Is it because the partial time derivative qA yields force, and that dotted with velocity is the rate of work done? Makes sense only dimensionally to me, a little more explanation of (18) would help! – arya_stark Oct 2 '19 at 3:44\n• Also, I'm not exactly proficient in multivariable calculus yet (in process of learning for the first time) - so I wonder how you got to the chain rule used in derivation of (18)? – arya_stark Oct 2 '19 at 3:45\n• Lastly, I don't understand what grad subscript A means, exactly? Could you explain in a little more detail? – arya_stark Oct 2 '19 at 4:50\n• Great! I understand the chain rule better now! – arya_stark Oct 2 '19 at 6:29\n\nFrom the linked paper, $$\\frac{d}{dt} \\left( \\frac{1}{2} mv^2 \\right) = q \\mathbf v \\cdot (\\mathbf E + \\mathbf v \\times \\mathbf B) \\label{16}\\tag{16}$$\n\n$$\\mathbf E = - \\nabla \\phi - \\frac{\\partial}{\\partial t} \\mathbf A\\ \\text{ and }\\ \\mathbf B = \\nabla \\times \\mathbf A \\label{17}\\tag{17}$$\n\nI assume when taking the total derivative of $$\\phi$$, the following was meant: $$q \\frac{d}{dt} \\phi(\\mathbf r,t) = q \\frac{\\partial}{\\partial t} \\phi + q \\mathbf v \\cdot \\nabla \\phi \\label{n.1}\\tag{n.1}$$\n\nIf we dot \\ref{17}.a with $$q \\mathbf v$$, we get $$q \\mathbf v \\cdot \\mathbf E = - q \\mathbf v \\cdot \\nabla \\phi - q \\mathbf v \\cdot \\frac{\\partial}{\\partial t} \\mathbf A$$\n\nSubstituting $$- q \\mathbf v \\cdot \\nabla \\phi$$ from \\ref{n.1} $$q \\mathbf v \\cdot \\mathbf E = - q \\frac{d}{dt} \\phi + q \\frac{\\partial}{\\partial t} \\phi - q \\mathbf v \\cdot \\frac{\\partial}{\\partial t} \\mathbf A$$\n\nFrom \\ref{16}, we have $$\\frac{d}{dt} \\left( \\frac{1}{2} mv^2 \\right) = q \\mathbf v \\cdot (\\mathbf E + \\mathbf v \\times \\mathbf B) = q \\mathbf v \\cdot \\mathbf E$$\n\nsince 
$$\\mathbf v \\cdot (\\mathbf v \\times \\mathbf B) = 0$$. We already calculated $$q \\mathbf v \\cdot \\mathbf E$$: $$\\frac{d}{dt} \\left( \\frac{1}{2} mv^2 \\right) = - q \\frac{d}{dt} \\phi + q \\frac{\\partial}{\\partial t} \\phi - q \\mathbf v \\cdot \\frac{\\partial}{\\partial t} \\mathbf A$$\n\n$$\\frac{d}{dt} \\left( \\frac{1}{2} mv^2 + q \\phi \\right) = q \\frac{\\partial}{\\partial t} \\phi - q \\mathbf v \\cdot \\frac{\\partial}{\\partial t} \\mathbf A$$\n\n$$\\frac{d}{dt} \\left( \\frac{1}{2} mv^2 + q \\phi \\right) = \\frac{\\partial}{\\partial t} q (\\phi - \\mathbf v \\cdot \\mathbf A) \\tag{18}$$ $$\\tag*{\\blacksquare}$$\n\n$$\\frac{d}{dt} (m \\mathbf v + q \\mathbf A) = - \\nabla q (\\phi-\\mathbf v \\cdot \\mathbf A) \\label{19}\\tag{19}$$\n\nFor \\ref{19}, you need to use (again, I am not sure about this) $$q \\frac{d}{dt} \\mathbf A = q \\frac{\\partial}{\\partial t} \\mathbf A + q (\\mathbf v \\cdot \\nabla) \\mathbf A \\label{n.2}\\tag{n.2}$$\n\nFrom $$\\mathbf F = m \\mathbf a = m \\frac{d}{dt} \\mathbf v = q(\\mathbf E + \\mathbf v \\times \\mathbf B)$$, $$m \\frac{d}{dt} \\mathbf v = q(\\mathbf E + \\mathbf v \\times \\mathbf B)$$\n\nAdding both sides $$\\frac{d}{dt} q \\mathbf A$$, $$\\frac{d}{dt} (m \\mathbf v + q \\mathbf A) = q(\\mathbf E + \\mathbf v \\times \\mathbf B + \\frac{\\partial}{\\partial t} \\mathbf A + (\\mathbf v \\cdot \\nabla) \\mathbf A) \\label{n.3}\\tag{n.3}$$\n\nUsing $$\\mathbf A \\times (\\mathbf B \\times \\mathbf C) = (\\mathbf A \\times \\mathbf C) \\mathbf B - (\\mathbf A \\times \\mathbf B) \\mathbf C$$ and \\ref{17}.b, $$\\mathbf v \\times \\mathbf B = \\mathbf v \\times (\\nabla \\times \\mathbf A) = \\nabla (\\mathbf v \\cdot \\mathbf A) - (\\mathbf v \\cdot \\nabla) \\mathbf A$$\n\nInserting this into \\ref{n.3}, $$\\frac{d}{dt} (m \\mathbf v + q \\mathbf A) = q(\\mathbf E + \\frac{\\partial}{\\partial t} \\mathbf A + \\nabla (\\mathbf v \\cdot \\mathbf A))$$\n\nUsing \\ref{17}.a, we have $$\\mathbf E + 
\\frac{\\partial}{\\partial t} \\mathbf A = - \\nabla \\phi$$, so $$\\frac{d}{dt} (m \\mathbf v + q \\mathbf A) = - q(\\nabla \\phi - \\nabla (\\mathbf v \\cdot \\mathbf A))$$\n\n$$\\frac{d}{dt} (m \\mathbf v + q \\mathbf A) = - \\nabla q (\\phi-\\mathbf v \\cdot \\mathbf A)$$ $$\\tag*{\\blacksquare}$$\n\nI will not be able to address your interpretation question at all since I am not accustomed to it either. Also sorry for the backwards proof. Any and all improvements are welcome.\n\n• Please explain (n.1) and (n.2) – arya_stark Oct 2 '19 at 5:17\n• @arya_stark Meyer explained it more clearly than I could, what I advise you is to try to get the intuition of derivatives of a (vector/scalar) field as well. I kind of made these up while I was answering the question. – acarturk Oct 2 '19 at 7:41\n\nThe potential energy deu to the Lorenz force is (Goldstein book):\n\n$$U=q\\,\\phi(x)-q\\,A(x,t)\\cdot\\,v(t)$$\n\nand the kinetic energy is:\n\n$$T=\\frac{1}{2}\\,m\\,v^2$$\n\nso the total energy $$E=T+U$$ is explicit independent from t,so:\n\n$$\\frac{d}{dt}\\,E=0=\\frac{d}{dt}\\left(\\frac{1}{2}\\,m\\,v^2+q\\,\\phi(x)-q\\,A(x,t)\\cdot\\,v(t)\\right)$$ or: $$\\frac{d}{dt}\\left(\\frac{1}{2}\\,m\\,v^2+q\\,\\phi(x)-\\underbrace{q\\,\\phi(x)}_{\\ne f(t)}-q\\,A(x,t)\\cdot\\,v(t)\\right)=0$$\n\n$$\\Rightarrow\\quad$$equation (18)\n\nwith:\n\n$$L=T-U=\\frac{1}{2}\\,m\\,v^2-q\\,\\phi(x)+q\\,A(x,t)\\cdot\\,v(t)$$\n\nthe equations of motion\n\n$$\\frac{d}{dt}\\left(\\frac{\\partial L}{\\partial v}\\right)+\\frac{\\partial L}{\\partial x}=0$$\n\n$$\\Rightarrow$$\n\n$$\\frac{\\partial L}{\\partial v}=m\\,v+q\\,A$$\n\n$$\\frac{\\partial L}{\\partial x} =\\frac{\\partial }{\\partial x} \\left[q\\,(\\phi-A\\cdot v)\\right]$$\n\n$$\\Rightarrow\\quad$$equation (19)"
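The vector identity used at the start of the derivation can be sanity-checked numerically. Below is a sketch with numpy and central finite differences, using an arbitrary made-up field $\mathbf{A}$ and a constant $\mathbf{v}$ (so the $\nabla_\mathbf{A}$ gradient is simply the gradient of $\mathbf{v}\cdot\mathbf{A}$ with $\mathbf{v}$ held fixed):

```python
import numpy as np

# Made-up smooth vector field A(r) and a constant particle velocity v
def A(r):
    x, y, z = r
    return np.array([y**2, z * x, x * y])

v = np.array([1.0, 2.0, 3.0])
h = 1e-6  # finite-difference step

def curl_A(r):
    # numerical curl of A via central differences on the Jacobian
    J = np.zeros((3, 3))
    for j in range(3):
        dr = np.zeros(3); dr[j] = h
        J[:, j] = (A(r + dr) - A(r - dr)) / (2 * h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

def grad_v_dot_A(r):
    # nabla_A (v . A): gradient of v . A with v held fixed
    g = np.zeros(3)
    for j in range(3):
        dr = np.zeros(3); dr[j] = h
        g[j] = (v @ A(r + dr) - v @ A(r - dr)) / (2 * h)
    return g

def v_dot_grad_A(r):
    # (v . nabla) A, the convective derivative of A
    out = np.zeros(3)
    for j in range(3):
        dr = np.zeros(3); dr[j] = h
        out += v[j] * (A(r + dr) - A(r - dr)) / (2 * h)
    return out

r0 = np.array([0.3, -0.7, 1.1])
lhs = np.cross(v, curl_A(r0))
rhs = grad_v_dot_A(r0) - v_dot_grad_A(r0)
print(lhs, rhs)  # the two sides agree to finite-difference accuracy
```

Both sides evaluate to the same vector at any test point, which is exactly the statement $\mathbf{v} \times (\nabla \times \mathbf{A}) = \nabla_\mathbf{A}(\mathbf{v}\cdot\mathbf{A}) - (\mathbf{v}\cdot\nabla)\mathbf{A}$ for constant $\mathbf{v}$.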
https://tools.carboncollective.co/inflation/us/1947/1000000/
"# $1,000,000 in 1947 is worth$13,287,488.79 today\n\n$1,000,000 in 1947 has the same purchasing power as$13,287,488.79 today. Over the 75 years this is a change of $12,287,488.79. The average inflation rate of the dollar between 1947 and 2022 was 2.19% per year. The cumulative price increase of the dollar over this time was 1,228.75%. ## The value of$1,000,000 from 1947 to 2022\n\nSo what does this data mean? It means that the prices in 2022 are 132,874.89 higher than the average prices since 1947. A dollar today can buy 7.53% of what it could buy in 1947.\n\nWe can look at the buying power equivalent for $1,000,000 in 1947 to see how much you would need to adjust for in order to beat inflation. For 1947 to 2022, if you started with$1,000,000 in 1947, you would need to have $13,287,488.79 in 1947 to keep up with inflation rates. So if we are saying that$1,000,000 is equivalent to $13,287,488.79 over time, you can see the core concept of inflation in action. The \"real value\" of a single dollar decreases over time. It will pay for fewer items at the store than it did previously. In the chart below you can see how the value of the dollar is worth less over 75 years. ## Value of$1,000,000 Over Time\n\nIn the table below we can see the value of the US Dollar over time. 
According to the BLS, each of these amounts are equivalent in terms of what that amount could purchase at the time.\n\nYear Dollar Value Inflation Rate\n1947 $1,000,000.00 14.36% 1948$1,080,717.49 8.07%\n1949 $1,067,264.57 -1.24% 1950$1,080,717.49 1.26%\n1951 $1,165,919.28 7.88% 1952$1,188,340.81 1.92%\n1953 $1,197,309.42 0.75% 1954$1,206,278.03 0.75%\n1955 $1,201,793.72 -0.37% 1956$1,219,730.94 1.49%\n1957 $1,260,089.69 3.31% 1958$1,295,964.13 2.85%\n1959 $1,304,932.74 0.69% 1960$1,327,354.26 1.72%\n1961 $1,340,807.17 1.01% 1962$1,354,260.09 1.00%\n1963 $1,372,197.31 1.32% 1964$1,390,134.53 1.31%\n1965 $1,412,556.05 1.61% 1966$1,452,914.80 2.86%\n1967 $1,497,757.85 3.09% 1968$1,560,538.12 4.19%\n1969 $1,645,739.91 5.46% 1970$1,739,910.31 5.72%\n1971 $1,816,143.50 4.38% 1972$1,874,439.46 3.21%\n1973 $1,991,031.39 6.22% 1974$2,210,762.33 11.04%\n1975 $2,412,556.05 9.13% 1976$2,551,569.51 5.76%\n1977 $2,717,488.79 6.50% 1978$2,923,766.82 7.59%\n1979 $3,255,605.38 11.35% 1980$3,695,067.26 13.50%\n1981 $4,076,233.18 10.32% 1982$4,327,354.26 6.16%\n1983 $4,466,367.71 3.21% 1984$4,659,192.83 4.32%\n1985 $4,825,112.11 3.56% 1986$4,914,798.21 1.86%\n1987 $5,094,170.40 3.65% 1988$5,304,932.74 4.14%\n1989 $5,560,538.12 4.82% 1990$5,860,986.55 5.40%\n1991 $6,107,623.32 4.21% 1992$6,291,479.82 3.01%\n1993 $6,479,820.63 2.99% 1994$6,645,739.91 2.56%\n1995 $6,834,080.72 2.83% 1996$7,035,874.44 2.95%\n1997 $7,197,309.42 2.29% 1998$7,309,417.04 1.56%\n1999 $7,470,852.02 2.21% 2000$7,721,973.09 3.36%\n2001 $7,941,704.04 2.85% 2002$8,067,264.57 1.58%\n2003 $8,251,121.08 2.28% 2004$8,470,852.02 2.66%\n2005 $8,757,847.53 3.39% 2006$9,040,358.74 3.23%\n2007 $9,297,847.53 2.85% 2008$9,654,843.05 3.84%\n2009 $9,620,493.27 -0.36% 2010$9,778,295.96 1.64%\n2011 $10,086,950.67 3.16% 2012$10,295,695.07 2.07%\n2013 $10,446,502.24 1.46% 2014$10,615,964.13 1.62%\n2015 $10,628,565.02 0.12% 2016$10,762,645.74 1.26%\n2017 $10,991,928.25 2.13% 2018$11,260,403.59 2.44%\n2019 $11,464,439.46 1.81% 
2020$11,605,874.44 1.23%\n2021 $0.00 -100.00% 2022$13,287,488.79 0.00%\n\n## US Dollar Inflation Conversion\n\nIf you're interested to see the effect of inflation on various 1950 amounts, the table below shows how much each amount would be worth today based on the price increase of 1,228.75%.\n\nInitial Value Equivalent Value\n$1.00 in 1947$13.29 today\n$5.00 in 1947$66.44 today\n$10.00 in 1947$132.87 today\n$50.00 in 1947$664.37 today\n$100.00 in 1947$1,328.75 today\n$500.00 in 1947$6,643.74 today\n$1,000.00 in 1947$13,287.49 today\n$5,000.00 in 1947$66,437.44 today\n$10,000.00 in 1947$132,874.89 today\n$50,000.00 in 1947$664,374.44 today\n$100,000.00 in 1947$1,328,748.88 today\n$500,000.00 in 1947$6,643,744.39 today\n$1,000,000.00 in 1947$13,287,488.79 today\n\n## Calculate Inflation Rate for $1,000,000 from 1947 to 2022 To calculate the inflation rate of$1,000,000 from 1947 to 2022, we use the following formula:\n\n$$\\dfrac{ 1947\\; USD\\; value \\times CPI\\; in\\; 2022 }{ CPI\\; in\\; 1947 } = 2022\\; USD\\; value$$\n\nWe then replace the variables with the historical CPI values. The CPI in 1947 was 22.3 and 296.311 in 2022.\n\n$$\\dfrac{ \\1,000,000 \\times 296.311 }{ 22.3 } = \\text{ \\13,287,488.79 }$$\n\n$1,000,000 in 1947 has the same purchasing power as$13,287,488.79 today.\n\nTo work out the total inflation rate for the 75 years between 1947 and 2022, we can use a different formula:\n\n$$\\dfrac{\\text{CPI in 2022 } - \\text{ CPI in 1947 } }{\\text{CPI in 1947 }} \\times 100 = \\text{Cumulative rate for 75 years}$$\n\nAgain, we can replace those variables with the correct Consumer Price Index values to work out the cumulativate rate:\n\n$$\\dfrac{\\text{ 296.311 } - \\text{ 22.3 } }{\\text{ 22.3 }} \\times 100 = \\text{ 1,228.75\\% }$$\n\n## Inflation Rate Definition\n\nThe inflation rate is the percentage increase in the average level of prices of a basket of selected goods over time. 
It indicates a decrease in the purchasing power of currency and results in an increased consumer price index (CPI). Put simply, the inflation rate is the rate at which the general prices of consumer goods increases when the currency purchase power is falling.\n\nThe most common cause of inflation is an increase in the money supply, though it can be caused by many different circumstances and events. The value of the floating currency starts to decline when it becomes abundant. What this means is that the currency is not as scarce and, as a result, not as valuable.\n\nBy comparing a list of standard products (the CPI), the change in price over time will be measured by the inflation rate. The prices of products such as milk, bread, and gas will be tracked over time after they are grouped together. Inflation shows that the money used to buy these products is not worth as much as it used to be when there is an increase in these products’ prices over time.\n\nThe inflation rate is basically the rate at which money loses its value when compared to the basket of selected goods – which is a fixed set of consumer products and services that are valued on an annual basis."
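The two formulas above can be checked directly in plain Python, using the CPI values quoted in the text:

```python
# CPI values quoted above: annual 1947 CPI and the 2022 CPI used by the page
cpi_1947 = 22.3
cpi_2022 = 296.311
amount_1947 = 1_000_000

# Equivalent 2022 value: 1947 value * (CPI in 2022 / CPI in 1947)
equivalent_2022 = amount_1947 * cpi_2022 / cpi_1947

# Cumulative inflation over the 75 years, in percent
cumulative_rate = (cpi_2022 - cpi_1947) / cpi_1947 * 100

print(round(equivalent_2022, 2))  # about 13,287,488.79
print(round(cumulative_rate, 2))  # about 1,228.75
```

Both numbers reproduce the figures stated on the page.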
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.zmap.html
"# scipy.stats.zmap¶\n\nscipy.stats.zmap(scores, compare, axis=0, ddof=0)[source]\n\nCalculate the relative z-scores.\n\nReturn an array of z-scores, i.e., scores that are standardized to zero mean and unit variance, where mean and variance are calculated from the comparison array.\n\nParameters\nscoresarray_like\n\nThe input for which z-scores are calculated.\n\ncomparearray_like\n\nThe input from which the mean and standard deviation of the normalization are taken; assumed to have the same dimension as scores.\n\naxisint or None, optional\n\nAxis over which mean and variance of compare are calculated. Default is 0. If None, compute over the whole array scores.\n\nddofint, optional\n\nDegrees of freedom correction in the calculation of the standard deviation. Default is 0.\n\nReturns\nzscorearray_like\n\nZ-scores, in the same shape as scores.\n\nNotes\n\nThis function preserves ndarray subclasses, and works also with matrices and masked arrays (it uses asanyarray instead of asarray for parameters).\n\nExamples\n\n>>> from scipy.stats import zmap\n>>> a = [0.5, 2.0, 2.5, 3]\n>>> b = [0, 1, 2, 3, 4]\n>>> zmap(a, b)\narray([-1.06066017, 0. , 0.35355339, 0.70710678])\n\n\n#### Previous topic\n\nscipy.stats.trim1\n\n#### Next topic\n\nscipy.stats.zscore"
https://www.studysmarter.us/textbooks/physics/physics-principles-with-applications-7th/fluids/q-11p-i-a-calculate-the-total-force-of-the-atmosphere-acting/
"• :00Days\n• :00Hours\n• :00Mins\n• 00Seconds\nA new era for learning is coming soon",
null,
"Suggested languages for you:\n\nEurope\n\nAnswers without the blur. Sign up and see all textbooks for free!",
null,
"Q 11P\n\nExpert-verified",
null,
"Found in: Page 260",
null,
"### Physics Principles with Applications\n\nBook edition 7th\nAuthor(s) Douglas C. Giancoli\nPages 978 pages\nISBN 978-0321625922",
null,
"# (I) (a) Calculate the total force of the atmosphere acting on the top of a table that measures 1.7m× 2.6m. (b) What is the total force acting upward on the underside of the table?\n\n1. The force on the top of the table is 4.47 × 105 N.\n2. The force on the underside of the table is 4.47 × 105 N.\nSee the step by step solution\n\n## Step-1: Understanding the force acting on a substance\n\nIn order to evaluate the force acting on a table utilize the relation of force with the pressure and the area of the table.\n\n## Step 2: Given the data\n\nThe area of table A =1.7m × 2.6m.\n\nThe atmospheric pressure P = 1.013 × 105 N/m2.\n\n## Step-3:-Calculation of force on the top of the table\n\nThe force acting on the table is calculated as;\n\nF = P × A\n\nSubstitute the values in the above relation\n\nF = (1.013×105 N/m2) × (1.7m × 2.6m)\n\nF = 4.47×105 N\n\nThus, the force acting on the top of the table is 4.47×105 N.\n\n## Step-4: Calculation of force on the underside of the table\n\nThe atmospheric pressure on the table is equivalent at the top and underside of the table because air pressure's upward and downward forces are similar. Therefore, the force acting on the table is also the same on both sides.\n\nThus, the force acting on the underside of the table is also 4.47×105 N.",
https://sparkbyexamples.com/pandas/pandas-merge-series-into-dataframe/
"# Pandas – How to Merge Series into DataFrame\n\nLet’s say you already have a pandas DataFrame with few columns and you would like to add/merge Series as columns into existing DataFrame, this is certainly possible using `pandas.Dataframe.merge()` method.\n\nI will explain with the examples in this article. first create a sample DataFrame and a few Series. You can also try by combining Multiple Series to create DataFrame.\n\n``````\nimport pandas as pd\n\ntechnologies = ({\n'Fee' :[22000,25000,23000]\n})\ndf = pd.DataFrame(technologies)\nprint(df)\n\n# Create Series\ndiscount = pd.Series([1000,2300,1000], name='Discount')\n\n``````\n\n## 1. Merge Series into pandas DataFrame\n\nNow let’s say you wanted to merge by adding Series object `discount` to DataFrame `df`.\n\n``````\n# Merge Series into DataFrame\ndf2=df.merge(discount,left_index=True, right_index=True)\nprint(df2)\n``````\n\nYields below output. It merges the Series with DataFrame on index.\n\n``````\nCourses Fee Discount\n0 Spark 22000 1000\n1 PySpark 25000 2300\n``````\n\nThis also works if your rows are in different order, but in this case you should have custom indexes. I will leave this to you to explore.\n\n``````\n# Rename Series before Merge\ndf2=df.merge(discount.rename('Course_Discount'),\nleft_index=True, right_index=True)\nprint(df2)\n``````\n\nYields below output\n\n``````\nCourses Fee Course_Discount\n0 Spark 22000 1000\n1 PySpark 25000 2300\n``````\n\n## 2. Using Series.to_frame() & DataFrame.merge() Methods\n\nYou can also create a DataFrame from Series using `Series.to_frame()` and use it with DataFrame to merge.\n\n``````\n# Merge by creating DataFrame from Series\ndf2=df.merge(discount.to_frame(), left_index=True, right_index=True)\nprint(df2)\n``````\n\nYields same output as in first example.\n\n## Conclusion\n\nIn this article I have explained how to merge/add series objects to existing pandas DataFrame as columns by using merge() method. 
also covered by creating a DataFrame from series using to_frame() and using on merge() method.\n\nHappy Learning\n\n## Reference",
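Picking up the "rows in a different order" case mentioned above, here is a small sketch with custom indexes (the index labels and values are made up for illustration). `merge()` with `left_index=True, right_index=True` aligns on index labels, not on row position:

```python
import pandas as pd

# DataFrame and Series with custom indexes, deliberately in different orders
df = pd.DataFrame({"Fee": [22000, 25000]}, index=["spark", "pyspark"])
discount = pd.Series([2300, 1000], index=["pyspark", "spark"], name="Discount")

# The merge lines up "spark" with "spark" and "pyspark" with "pyspark"
df2 = df.merge(discount, left_index=True, right_index=True)
print(df2)
```

Each Discount value lands next to the Fee that shares its index label, regardless of the order the rows were given in.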
https://datascience.stackexchange.com/questions/27225/how-what-to-initialize-the-hidden-states-in-rnn-sequence-to-sequence-models/47959
"# How/What to initialize the hidden states in RNN sequence-to-sequence models?\n\nIn an RNN sequence-to-sequence model, the encode input hidden states and the output's hidden states needs to be initialized before training.\n\nWhat values should we initialize them with? How should we initialize them?\n\nFrom the PyTorch tutorial, it simply initializes zeros to the hidden states.\n\nIs initializing zero the usual way of initializing hidden states in RNN seq2seq networks?\n\nFor a single-layer vanilla RNN wouldn't the fan-in and fan-out be equals to $(1 + 1)$ which gives a variance of $1$ and the gaussian distribution with $mean=0$ gives us a uniform distribution of $0$s.\n\nfor-each input-hidden weight\nvariance = 2.0 / (fan-in +fan-out)\nstddev = sqrt(variance)\nweight = gaussian(mean=0.0, stddev)\nend-for\n\n\nFor single layer encoder-decoder architecture with attention, if we use glorot, we'll get a very very small variance when initializing the decoder hidden state since the fan-in would include the attention which is mapped to all possible vocabulary from the encoder output. So we result in a gaussian mean of ~= 0 too since stdev is really really small.\n\nWhat other initialization methods are there, esp. for the use on RNN seq2seq models?"
https://tamathisland.com/determination-relationship-tom-shearer-derived-a-new-strain-energy/
"",
null,
"Determination of the strain energy\nfunction using stress-strain response of\na single fascicle for the modeling of ligaments and tendons\n\nMd Asif Arefeen\n\nWe Will Write a Custom Essay Specifically\nFor You For Only \\$13.90/page!\n\norder now\n\nABSTRACT\n\nA\nreview and analysis of the strain energy function by using the distribution of crimp angles of the fibrils to\ndetermine the stress-strain response of single fascicle. (Kastelic, Palley et al. 1980) gave a\nnon-linear stress-strain relationship based on the radial variation of the\nfibril crimp. By correcting this relationship Tom\nShearer derived a new strain energy function and compared it with the commonly\nused model HGO. The relative and absolute errors related to the new model are\nless than 9% and 40% than of that HGO model. Undoubtedly\nnew model gives a better performance than the HGO model. But\nit\nis mandatory to measure the\n\nand\n\no separately for\nthe ligament or tendon in order to validate this model.\n\n1.\nIntroduction\n\nA fascicle is the main subunit of the\nligaments and tendons which are the soft collagenous tissue. These tissues are the fundamental structures of in\nthe musculoskeletal systems and play a significant role in biomechanics.\nLigaments provide stability and also make the joints work perfectly by\nconnecting bone to bone, on the other hand, tendons transfer force to a skeleton which is generated by muscle by\nconnecting bone to muscle. The collagenous fibers\nlike fascicle consist of crimped pattern\nfibrils and this crimp are called the waviness\nof the fibrils(see fig.1) which contributes significantly to the non-linear stress-strain\nresponse for ligaments and tendons.As an anisotropic tissue, the characteristic\nof stress-strain of ligaments and tendons within a non-linear elastic framework\noccur in the toe region where mechanically loading of the tendon up to 2%\nstrain(see fig.2).\n\nFig 1. Tendon hierarchy\nFig 2. 
Model within a non-linear framework\n\n(Fung 1967) gave an exponential stress-strain relationship based on rabbit mesentery, but only in a phenomenological sense; there was no microstructural basis for the choice of the exponential function. Based on his work, (Gou 1970) proposed a strain energy function for isotropic tissues that also gave an exponential stress-strain relationship but was not suitable for tissues like tendons and ligaments. (Kastelic, Palley et al. 1980) gave a non-linear stress-strain relationship based on the radial variation of the fibril crimp, but an error in the implementation of Hooke's law rendered this relationship incorrect. The strain energy function that has long been used for modeling biological tissue is the Holzapfel-Gasser-Ogden (HGO) model, given by\n\nW = (c/2)(I1 - 3) + (k1/(2 k2))(exp(k2 (I4 - 1)^2) - 1), where I1 = tr C, I4 = M.(CM), C = F^T F\n\nI1 and I4 are the strain invariants, where I4 has a direct interpretation as the square of the stretch in the direction of the fiber. More explanation of the invariants can be found in (Holzapfel et al. 2010). “C is the right Cauchy-Green tensor, F is the deformation gradient tensor and M is a unit vector pointing in the direction of the tissue's fibers before any deformation has taken place; c, k1 and k2 are material parameters, and the above expression is only valid when I4 >= 1 (when I4 < 1, W = (c/2)(I1 - 3)). As a phenomenological model, the parameters are not directly linked to measurable quantities.” So this model has some limitations.\n\nA large number of SEF models have been proposed so far by different researchers, e.g. (Humphrey and Lin 1987), (Humphrey et al. 1990), (Fung et al. 
1993), (Taber 2004), (Murphy 2013), but none of them were valid for ligaments and tendons. In 2014 Tom Shearer proposed a model by correcting the work done by Kastelic based on the fibril crimp angle. This new model is more efficient than the HGO model.\n\n2. Development of the new stress-strain relationship\n\nA new stress-strain response was given by Tom Shearer based on the radial variation in the crimp angle of a fascicle's fibrils, by correcting the Hooke's law in that paper. The Hooke's law stated by Kastelic et al. (1980) is given by\n\nσp(θ) = E* Δεp(θ), where Δεp(θ) = ε - εp(θ)\n\nHere Δεp(θ) (the elastic deformation) is not the fibril strain; it differs from the fibril strain by a quantity that depends on θ. All fibrils should have the same Young's modulus, so E* is not valid for all θ. A new Hooke's law was given by Tom Shearer in his paper, which can be derived from figure 3 below:\n\nσp(θ) = E εf(θ)   (1)\n\nwhere εf(θ) = cos(θ) (ε - εp(θ)) = (ε + 1) cos(θ) - 1\n\nFig 3: Stretching of a fibril of initial length lp(θ) within a fascicle of initial length L\n\nUsing equation (1) he derived an expression for the average traction in the direction of the fascicle in terms of Pp, the tensile load carried by the fascicle. Taking p = 1, 2 and simplifying, Tom Shearer derived a new piecewise stress-strain relationship in the stretch λ = ε + 1, which becomes linear with gradient set by E once all the fibrils are straight. Tom Shearer used this form to derive the new strain energy function.\n\n3. Strain Energy Function\n\nIn this section the derived strain energy function for ligaments and tendons is shown; for the details, the reader is referred to Tom Shearer (2014). His strain energy function is valid for both isotropic and anisotropic tissue.\n\nFor anisotropic tissue the SEF is\n\nW = … (4 … I4 - 3 log(I4) - … - 3)\n\n“The neo-Hookean model is still reasonable for isotropic tissue”. 
Based on this, an isotropic SEF can be derived:\n\nW = (1 - φ)(μ/2)(I1 - 3)\n\nThe full form of the strain energy function can now be given as\n\nW = (1 - φ)(μ/2)(I1 - 3) + … (4 … I4 - 3 log(I4) - … - 3), for I4 below the value at which all the fibrils become straight\n\nW = (1 - φ)(μ/2)(I1 - 3) + … (… I4 - … log(I4) + …), for I4 above that value\n\nwhere φ is the collagen volume fraction, E is the fibril stiffness and θo is the outer fibril crimp angle. θo cannot be measured directly; as a result, it was taken based on assumptions. Finally, the above SEF gives the stress-strain response for both isotropic and anisotropic tissues. The form may seem unusual for an isotropic SEF, but this is due to the absence of a linear term in the fascicles' stress-strain relationship at small strains.\n\n4. Result\n\nIn this section, a comparison of the stress-strain relationships of the new model, the HGO model and experimental data is shown. The experimental data were taken from (Johnson, Tramaglini et al. 1994). Parameter values: c = (1 - φ)μ = 0.01 MPa, k1 = 25 MPa, k2 = 183 MPa, … = 552 MPa. Since the stiffness of the ligament and tendon matrix is insignificant compared with that of its fascicles, (1 - φ)μ was chosen to be small. θo cannot be measured directly, so it was taken based on assumptions, in the range 0.11 to 1. Also, one further parameter value was not available, so a predicted value was used. Based on this, Tom Shearer computed the stress-strain response shown below.\n\nFig 4: Comparison of the stress-strain curves of the new model and the HGO model with experimental data. Black: new model, Blue: HGO model, Red: experimental data.\n\nFrom the above graph, the average relative and absolute errors of the models can be calculated. Tom Shearer's calculations suggested that the average relative and absolute errors of the new model are less than those of the HGO model: 0.053 (new model)",
null,
""
] | [
null,
"https://tamathisland.com/wp-content/themes/business-gravity/assets/images/placeholder/loader.gif",
null,
"https://randomuser.me/api/portraits/women/60.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8849934,"math_prob":0.8354132,"size":7069,"snap":"2019-51-2020-05","text_gpt3_token_len":1865,"char_repetition_ratio":0.13432413,"word_repetition_ratio":0.03938356,"special_character_ratio":0.24218418,"punctuation_ratio":0.11794501,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9810556,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,4,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T13:05:45Z\",\"WARC-Record-ID\":\"<urn:uuid:48f8e97c-2fb8-47bd-bac6-a6a813b1993c>\",\"Content-Length\":\"51062\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63a316f1-d9b6-42f3-bbff-c607de815657>\",\"WARC-Concurrent-To\":\"<urn:uuid:4711dee8-ea7e-40c7-98ec-30beeb06938f>\",\"WARC-IP-Address\":\"104.31.74.12\",\"WARC-Target-URI\":\"https://tamathisland.com/determination-relationship-tom-shearer-derived-a-new-strain-energy/\",\"WARC-Payload-Digest\":\"sha1:CPVXYG7CZSTDSOHT7GHECIMPDLRPRQTU\",\"WARC-Block-Digest\":\"sha1:5CLWDX43BPYYVTLS45PY7POOJANBEG53\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541157498.50_warc_CC-MAIN-20191214122253-20191214150253-00411.warc.gz\"}"} |
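The HGO strain-energy function quoted in the record above can be sketched numerically. This is an illustrative Python sketch, not code from the essay: the parameter defaults reuse the values listed in its section 4 (c = 0.01 MPa, k1 = 25 MPa, k2 = 183 MPa), and the uniaxial-stretch kinematics (incompressible material, fibres along the loading axis) are an assumption made for the example.

```python
import math

def hgo_energy(lam, c=0.01, k1=25.0, k2=183.0):
    """Holzapfel-Gasser-Ogden strain energy (MPa) for an incompressible
    uniaxial stretch lam with the fibre direction along the loading axis:
    I1 = lam^2 + 2/lam and I4 = lam^2 (square of the fibre stretch)."""
    i1 = lam ** 2 + 2.0 / lam
    i4 = lam ** 2
    w = 0.5 * c * (i1 - 3.0)  # neo-Hookean (matrix) part
    if i4 >= 1.0:             # anisotropic part is active only in fibre tension
        w += k1 / (2.0 * k2) * (math.exp(k2 * (i4 - 1.0) ** 2) - 1.0)
    return w
```

The exponential fibre term makes the response stiffen rapidly with stretch, which is why the parameters of this phenomenological model cannot be read off directly from measurable tissue quantities.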
https://models.cellml.org/workspace/noble_varghese_kohl_noble_1998/@@rawfile/995dda261096e1a8576203ccc5e8d3a7c1365da7/noble_varghese_kohl_noble_1998_a.cellml | [
"Improved guinea-pig ventricular cell model incorporating a diadic space, IKr and IKs, and length- and tension-dependent processes Penny Noble Oxford University\nModel Status This version has been curated and unit checked by Penny Noble and is known to run in COR and PCEnv. This variant is parameterised for the BASIC model, excluding acetylcholine dependent modulation and electromechanical processes. This model is also associated with a PCEnv session file.\nModel Structure ABSTRACT: The guinea-pig ventricular cell model, originally developed by Noble et al. in 1991, has been greatly extended to include accumulation and depletion of calcium in a diadic space between the sarcolemma and the sarcoplasmic reticulum where, according to contemporary understanding, the majority of calcium-induced calcium release is triggered. The calcium in this space is also assumed to play the major role in calcium-induced inactivation of the calcium current. Delayed potassium current equations have been developed to include the rapid (IKr) and slow (IKs) components of the delayed rectifier current based on the data of Heath and Terrar, along with data from Sanguinetti and Jurkiewicz. Length- and tension-dependent changes in mechanical and electrophysiological processes have been incorporated as described recently by Kohl et al. Drug receptor interactions have started to be developed, using the sodium channel as the first target. The new model has been tested against experimental data on action potential clamp, and on force-interval and duration-interval relations; it has been found to reliably reproduce experimental observations. The original paper reference is cited below: Improved guinea-pig ventricular cell model incorporating a diadic space, IKr and IKs, and length- and tension-dependent processes, Denis Noble, Anthony Varghese, Peter Kohl and Penelope Noble, 1998, Can J Cardiol, 14, 123-134. 
PubMed ID: 9487284 cell diagram of Noble'98 model showing ionic currents, pumps and exchangers within the sarcolemma and the sarcoplasmic reticulum A schematic diagram describing the current flows across the cell membrane that are captured in the Noble'98 model.\n$\\mathrm{i_Stim}=\\begin{cases}\\mathrm{stim_amplitude} & \\text{if (\\mathrm{time}\\ge \\mathrm{stim_start})\\land (\\mathrm{time}\\le \\mathrm{stim_end})\\land (\\mathrm{time}-\\mathrm{stim_start}-\\lfloor \\frac{\\mathrm{time}-\\mathrm{stim_start}}{\\mathrm{stim_period}}\\rfloor \\mathrm{stim_period}\\le \\mathrm{stim_duration})}\\\\ 0 & \\text{otherwise}\\end{cases}\\frac{d V}{d \\mathrm{time}}}=\\frac{-1}{\\mathrm{Cm}}(\\mathrm{i_Stim}+\\mathrm{i_K1}+\\mathrm{i_to}+\\mathrm{i_Kr}+\\mathrm{i_Ks}+\\mathrm{i_NaK}+\\mathrm{i_Na}+\\mathrm{i_b_Na}+\\mathrm{i_p_Na}+\\mathrm{i_Ca_L_Na_cyt}+\\mathrm{i_Ca_L_Na_ds}+\\mathrm{i_NaCa_cyt}+\\mathrm{i_NaCa_ds}+\\mathrm{i_Ca_L_Ca_cyt}+\\mathrm{i_Ca_L_Ca_ds}+\\mathrm{i_Ca_L_K_cyt}+\\mathrm{i_Ca_L_K_ds}+\\mathrm{i_b_Ca})$ $\\mathrm{E_Na}=\\frac{RT}{F}\\ln \\left(\\frac{\\mathrm{Na_o}}{\\mathrm{Na_i}}\\right)\\mathrm{E_K}=\\frac{RT}{F}\\ln \\left(\\frac{\\mathrm{K_o}}{\\mathrm{K_i}}\\right)\\mathrm{E_Ks}=\\frac{RT}{F}\\ln \\left(\\frac{\\mathrm{K_o}+\\mathrm{P_kna}\\mathrm{Na_o}}{\\mathrm{K_i}+\\mathrm{P_kna}\\mathrm{Na_i}}\\right)\\mathrm{E_Ca}=\\frac{0.5RT}{F}\\ln \\left(\\frac{\\mathrm{Ca_o}}{\\mathrm{Ca_i}}\\right)\\mathrm{E_mh}=\\frac{RT}{F}\\ln \\left(\\frac{\\mathrm{Na_o}+0.12\\mathrm{K_o}}{\\mathrm{Na_i}+0.12\\mathrm{K_i}}\\right)$ $\\mathrm{i_K1}=\\frac{\\frac{\\mathrm{g_K1}\\mathrm{K_o}}{\\mathrm{K_o}+\\mathrm{K_mk1}}(V-\\mathrm{E_K})}{1+e^{\\frac{(V-\\mathrm{E_K}-10)F\\times 1.25}{RT}}}$ $\\mathrm{i_Kr}=\\frac{(\\mathrm{g_Kr1}\\mathrm{xr1}+\\mathrm{g_Kr2}\\mathrm{xr2})\\times 1}{1+e^{\\frac{V+9}{22.4}}}(V-\\mathrm{E_K})$ $\\mathrm{alpha_xr1}=\\frac{50}{1+e^{\\frac{-(V-5)}{9}}}\\mathrm{beta_xr1}=0.05e^{\\frac{-(V-20)}{15}}\\frac{d \\mathrm{xr1}}{d 
\\mathrm{time}}}=\\mathrm{alpha_xr1}(1-\\mathrm{xr1})-\\mathrm{beta_xr1}\\mathrm{xr1}$ $\\mathrm{alpha_xr2}=\\frac{50}{1+e^{\\frac{-(V-5)}{9}}}\\mathrm{beta_xr2}=0.4e^{-\\left(\\frac{V+30}{30}\\right)^{3}}\\frac{d \\mathrm{xr2}}{d \\mathrm{time}}}=\\mathrm{alpha_xr2}(1-\\mathrm{xr2})-\\mathrm{beta_xr2}\\mathrm{xr2}$ $\\mathrm{i_Ks}=\\mathrm{g_Ks}\\mathrm{xs}^{2}(V-\\mathrm{E_Ks})$ $\\mathrm{alpha_xs}=\\frac{14}{1+e^{\\frac{-(V-40)}{9}}}\\mathrm{beta_xs}=1e^{\\frac{-V}{45}}\\frac{d \\mathrm{xs}}{d \\mathrm{time}}}=\\mathrm{alpha_xs}(1-\\mathrm{xs})-\\mathrm{beta_xs}\\mathrm{xs}$ $\\mathrm{i_Na}=\\mathrm{g_Na}m^{3}h(V-\\mathrm{E_mh})$ $\\mathrm{E0_m}=V+41\\mathrm{alpha_m}=\\begin{cases}2000 & \\text{if \\left|\\mathrm{E0_m}\\right|< \\mathrm{delta_m}}\\\\ \\frac{200\\mathrm{E0_m}}{1-e^{-0.1\\mathrm{E0_m}}} & \\text{otherwise}\\end{cases}\\mathrm{beta_m}=8000e^{-0.056(V+66)}\\frac{d m}{d \\mathrm{time}}}=\\mathrm{alpha_m}(1-m)-\\mathrm{beta_m}m$ $\\mathrm{alpha_h}=20e^{-0.125(V+75-\\mathrm{shift_h})}\\mathrm{beta_h}=\\frac{2000}{1+320e^{-0.1(V+75-\\mathrm{shift_h})}}\\frac{d h}{d \\mathrm{time}}}=\\mathrm{alpha_h}(1-h)-\\mathrm{beta_h}h$ $\\mathrm{i_p_Na}=\\frac{\\mathrm{g_pna}\\times 1}{1+e^{\\frac{-(V+52)}{8}}}(V-\\mathrm{E_Na})$ $\\mathrm{i_b_Na}=\\mathrm{g_bna}(V-\\mathrm{E_Na})$ $\\mathrm{i_Ca_L_Ca_cyt}=\\frac{\\frac{(1-\\mathrm{FrICa})\\times 4\\mathrm{P_Ca_L}df\\mathrm{f2}(V-50)F}{RT}}{1-e^{\\frac{-(V-50)F\\times 2}{RT}}}(\\mathrm{Ca_i}e^{\\frac{100F}{RT}}-\\mathrm{Ca_o}e^{\\frac{-(V-50)F\\times 
2}{RT}})\\mathrm{i_Ca_L_K_cyt}=\\frac{\\frac{(1-\\mathrm{FrICa})\\mathrm{P_CaK}\\mathrm{P_Ca_L}df\\mathrm{f2}(V-50)F}{RT}}{1-e^{\\frac{-(V-50)F}{RT}}}(\\mathrm{K_i}e^{\\frac{50F}{RT}}-\\mathrm{K_o}e^{\\frac{-(V-50)F}{RT}})\\mathrm{i_Ca_L_Na_cyt}=\\frac{\\frac{(1-\\mathrm{FrICa})\\mathrm{P_CaNa}\\mathrm{P_Ca_L}df\\mathrm{f2}(V-50)F}{RT}}{1-e^{\\frac{-(V-50)F}{RT}}}(\\mathrm{Na_i}e^{\\frac{50F}{RT}}-\\mathrm{Na_o}e^{\\frac{-(V-50)F}{RT}})\\mathrm{i_Ca_L_Ca_ds}=\\frac{\\frac{\\mathrm{FrICa}\\times 4\\mathrm{P_Ca_L}df\\mathrm{f2ds}(V-50)F}{RT}}{1-e^{\\frac{-(V-50)F\\times 2}{RT}}}(\\mathrm{Ca_i}e^{\\frac{100F}{RT}}-\\mathrm{Ca_o}e^{\\frac{-(V-50)F\\times 2}{RT}})\\mathrm{i_Ca_L_K_ds}=\\frac{\\frac{\\mathrm{FrICa}\\mathrm{P_CaK}\\mathrm{P_Ca_L}df\\mathrm{f2ds}(V-50)F}{RT}}{1-e^{\\frac{-(V-50)F}{RT}}}(\\mathrm{K_i}e^{\\frac{50F}{RT}}-\\mathrm{K_o}e^{\\frac{-(V-50)F}{RT}})\\mathrm{i_Ca_L_Na_ds}=\\frac{\\frac{\\mathrm{FrICa}\\mathrm{P_CaNa}\\mathrm{P_Ca_L}df\\mathrm{f2ds}(V-50)F}{RT}}{1-e^{\\frac{-(V-50)F}{RT}}}(\\mathrm{Na_i}e^{\\frac{50F}{RT}}-\\mathrm{Na_o}e^{\\frac{-(V-50)F}{RT}})\\mathrm{i_Ca_L}=\\mathrm{i_Ca_L_Ca_cyt}+\\mathrm{i_Ca_L_K_cyt}+\\mathrm{i_Ca_L_Na_cyt}+\\mathrm{i_Ca_L_Ca_ds}+\\mathrm{i_Ca_L_K_ds}+\\mathrm{i_Ca_L_Na_ds}$ $\\mathrm{E0_d}=V+24-5\\mathrm{alpha_d}=\\begin{cases}120 & \\text{if \\left|\\mathrm{E0_d}\\right|< 0.0001}\\\\ \\frac{30\\mathrm{E0_d}}{1-e^{\\frac{-\\mathrm{E0_d}}{4}}} & \\text{otherwise}\\end{cases}\\mathrm{beta_d}=\\begin{cases}120 & \\text{if \\left|\\mathrm{E0_d}\\right|< 0.0001}\\\\ \\frac{12\\mathrm{E0_d}}{e^{\\frac{\\mathrm{E0_d}}{10}}-1} & \\text{otherwise}\\end{cases}\\frac{d d}{d \\mathrm{time}}}=\\mathrm{speed_d}(\\mathrm{alpha_d}(1-d)-\\mathrm{beta_d}d)$ $\\mathrm{E0_f}=V+34\\mathrm{alpha_f}=\\begin{cases}25 & \\text{if \\left|\\mathrm{E0_f}\\right|< \\mathrm{delta_f}}\\\\ \\frac{6.25\\mathrm{E0_f}}{e^{\\frac{\\mathrm{E0_f}}{4}}-1} & \\text{otherwise}\\end{cases}\\mathrm{beta_f}=\\frac{12}{1+e^{\\frac{-1(V+34)}{4}}}\\frac{d 
f}{d \\mathrm{time}}}=\\mathrm{speed_f}(\\mathrm{alpha_f}(1-f)-\\mathrm{beta_f}f)$ $\\frac{d \\mathrm{f2}}{d \\mathrm{time}}}=1-1(\\frac{\\mathrm{Ca_i}}{\\mathrm{Km_f2}+\\mathrm{Ca_i}}+\\mathrm{f2})$ $\\frac{d \\mathrm{f2ds}}{d \\mathrm{time}}}=\\mathrm{R_decay}(1-\\frac{\\mathrm{Ca_ds}}{\\mathrm{Km_f2ds}+\\mathrm{Ca_ds}}+\\mathrm{f2ds})$ $\\mathrm{i_b_Ca}=\\mathrm{g_bca}(V-\\mathrm{E_Ca})$ $\\mathrm{i_to}=\\mathrm{g_to}(\\mathrm{g_tos}+s(1-\\mathrm{g_tos}))r(V-\\mathrm{E_K})$ $\\mathrm{alpha_s}=0.033e^{\\frac{-V}{17}}\\mathrm{beta_s}=\\frac{33}{1+e^{-0.125(V+10)}}\\frac{d s}{d \\mathrm{time}}}=\\mathrm{alpha_s}(1-s)-\\mathrm{beta_s}s$ $\\frac{d r}{d \\mathrm{time}}}=333(\\frac{1}{1+e^{\\frac{-(V+4)}{5}}}-r)$ $\\mathrm{i_NaK}=\\frac{\\frac{\\mathrm{i_NaK_max}\\mathrm{K_o}}{\\mathrm{K_mK}+\\mathrm{K_o}}\\mathrm{Na_i}}{\\mathrm{K_mNa}+\\mathrm{Na_i}}$ $\\mathrm{i_NaCa_cyt}=\\frac{(1-\\mathrm{FRiNaCa})\\mathrm{k_NaCa}(e^{\\frac{\\mathrm{gamma}(\\mathrm{n_NaCa}-2)VF}{RT}}\\mathrm{Na_i}^{\\mathrm{n_NaCa}}\\mathrm{Ca_o}-e^{\\frac{(\\mathrm{gamma}-1)(\\mathrm{n_NaCa}-2)VF}{RT}}\\mathrm{Na_o}^{\\mathrm{n_NaCa}}\\mathrm{Ca_i})}{(1+\\mathrm{d_NaCa}(\\mathrm{Ca_i}\\mathrm{Na_o}^{\\mathrm{n_NaCa}}+\\mathrm{Ca_o}\\mathrm{Na_i}^{\\mathrm{n_NaCa}}))(1+\\frac{\\mathrm{Ca_i}}{0.0069})}\\mathrm{i_NaCa_ds}=\\frac{\\mathrm{FRiNaCa}\\mathrm{k_NaCa}(e^{\\frac{\\mathrm{gamma}(\\mathrm{n_NaCa}-2)VF}{RT}}\\mathrm{Na_i}^{\\mathrm{n_NaCa}}\\mathrm{Ca_o}-e^{\\frac{(\\mathrm{gamma}-1)(\\mathrm{n_NaCa}-2)VF}{RT}}\\mathrm{Na_o}^{\\mathrm{n_NaCa}}\\mathrm{Ca_ds})}{(1+\\mathrm{d_NaCa}(\\mathrm{Ca_ds}\\mathrm{Na_o}^{\\mathrm{n_NaCa}}+\\mathrm{Ca_o}\\mathrm{Na_i}^{\\mathrm{n_NaCa}}))(1+\\frac{\\mathrm{Ca_ds}}{0.0069})}\\mathrm{i_NaCa}=\\mathrm{i_NaCa_cyt}+\\mathrm{i_NaCa_ds}$ 
$\\mathrm{K_1}=\\frac{\\mathrm{K_cyca}\\mathrm{K_xcs}}{\\mathrm{K_srca}}\\mathrm{K_2}=\\mathrm{Ca_i}+\\mathrm{Ca_up}\\mathrm{K_1}+\\mathrm{K_cyca}\\mathrm{K_xcs}+\\mathrm{K_cyca}\\mathrm{i_up}=\\frac{\\mathrm{Ca_i}}{\\mathrm{K_2}}\\mathrm{alpha_up}-\\frac{\\mathrm{Ca_up}\\mathrm{K_1}}{\\mathrm{K_2}}\\mathrm{beta_up}$ $\\mathrm{i_trans}=50(\\mathrm{Ca_up}-\\mathrm{Ca_rel})$ $\\mathrm{VoltDep}=e^{0.08(V-40)}\\mathrm{CaiReg}=\\frac{\\mathrm{Ca_i}}{\\mathrm{Ca_i}+\\mathrm{K_m_Ca_cyt}}\\mathrm{CadsReg}=\\frac{\\mathrm{Ca_ds}}{\\mathrm{Ca_ds}+\\mathrm{K_m_Ca_ds}}\\mathrm{RegBindSite}=\\mathrm{CaiReg}-1\\mathrm{CadsReg}\\mathrm{ActRate}=0\\mathrm{VoltDep}+500\\mathrm{RegBindSite}^{2}\\mathrm{InactRate}=60+500\\mathrm{RegBindSite}^{2}\\mathrm{SpeedRel}=\\begin{cases}5 & \\text{if V< -50}\\\\ 1 & \\text{otherwise}\\end{cases}\\mathrm{PrecFrac}=1-\\mathrm{ActFrac}-\\mathrm{ProdFrac}\\frac{d \\mathrm{ActFrac}}{d \\mathrm{time}}}=\\mathrm{PrecFrac}\\mathrm{SpeedRel}\\mathrm{ActRate}-\\mathrm{ActFrac}\\mathrm{SpeedRel}\\mathrm{InactRate}\\frac{d \\mathrm{ProdFrac}}{d \\mathrm{time}}}=\\mathrm{ActFrac}\\mathrm{SpeedRel}\\mathrm{InactRate}-\\mathrm{SpeedRel}\\times 1\\mathrm{ProdFrac}\\mathrm{i_rel}=(\\left(\\frac{\\mathrm{ActFrac}}{\\mathrm{ActFrac}+0.25}\\right)^{2}\\mathrm{K_m_rel}+\\mathrm{K_leak_rate})\\mathrm{Ca_rel}$ $\\frac{d \\mathrm{Na_i}}{d \\mathrm{time}}}=\\frac{-1}{1\\mathrm{V_i}F}(\\mathrm{i_Na}+\\mathrm{i_p_Na}+\\mathrm{i_b_Na}+3\\mathrm{i_NaK}+3\\mathrm{i_NaCa_cyt}+\\mathrm{i_Ca_L_Na_cyt}+\\mathrm{i_Ca_L_Na_ds})$ $\\frac{d \\mathrm{K_i}}{d \\mathrm{time}}}=\\frac{-1}{1\\mathrm{V_i}F}(\\mathrm{i_K1}+\\mathrm{i_Kr}+\\mathrm{i_Ks}+\\mathrm{i_Ca_L_K_cyt}+\\mathrm{i_Ca_L_K_ds}+\\mathrm{i_to}-2\\mathrm{i_NaK})$ $\\mathrm{V_Cell}=3.141592654\\mathrm{radius}^{2}\\mathrm{length}\\mathrm{V_i_ratio}=1-\\mathrm{V_e_ratio}-\\mathrm{V_up_ratio}-\\mathrm{V_rel_ratio}\\mathrm{V_i}=\\mathrm{V_Cell}\\mathrm{V_i_ratio}\\frac{d \\mathrm{Ca_i}}{d \\mathrm{time}}}=\\frac{-1}{2\\times 
1\\mathrm{V_i}F}(\\mathrm{i_Ca_L_Ca_cyt}+\\mathrm{i_b_Ca}-2\\mathrm{i_NaCa_cyt}-2\\mathrm{i_NaCa_ds})+\\mathrm{Ca_ds}\\mathrm{V_ds_ratio}\\mathrm{Kdecay}+\\frac{\\mathrm{i_rel}\\mathrm{V_rel_ratio}}{\\mathrm{V_i_ratio}}-\\frac{d \\mathrm{Ca_Calmod}}{d \\mathrm{time}}}-\\frac{d \\mathrm{Ca_Trop}}{d \\mathrm{time}}}-\\mathrm{i_up}\\frac{d \\mathrm{Ca_ds}}{d \\mathrm{time}}}=\\frac{-1\\mathrm{i_Ca_L_Ca_ds}}{2\\times 1\\mathrm{V_ds_ratio}\\mathrm{V_i}F}-\\mathrm{Ca_ds}\\mathrm{Kdecay}\\frac{d \\mathrm{Ca_up}}{d \\mathrm{time}}}=\\frac{\\mathrm{V_i_ratio}}{\\mathrm{V_up_ratio}}\\mathrm{i_up}-\\mathrm{i_trans}\\frac{d \\mathrm{Ca_rel}}{d \\mathrm{time}}}=\\frac{\\mathrm{V_up_ratio}}{\\mathrm{V_rel_ratio}}\\mathrm{i_trans}-\\mathrm{i_rel}\\frac{d \\mathrm{Ca_Calmod}}{d \\mathrm{time}}}=\\mathrm{alpha_Calmod}\\mathrm{Ca_i}(\\mathrm{Calmod}-\\mathrm{Ca_Calmod})-\\mathrm{beta_Calmod}\\mathrm{Ca_Calmod}\\frac{d \\mathrm{Ca_Trop}}{d \\mathrm{time}}}=\\mathrm{alpha_Trop}\\mathrm{Ca_i}(\\mathrm{Trop}-\\mathrm{Ca_Trop})-\\mathrm{beta_Trop}\\mathrm{Ca_Trop}$ Changed stim_end in stimulus protocol from 10.1 to 1,000,000 seconds Penny Noble This file is a CellML description of Noble, Carghese, Kohl and Noble's 1998 extension of their 1991 guinea pig ventricular cell model. This model incorporates a diadic space, IKr and IKs currents, and length and tension dependent processes. Variant 1 is parameterised for the 'basic' model. To simulate multiple action potentials with this model the following integrator parameters must be used: max step size must be 0.001 or less, max point density must be 10,000 or more. Improved guinea-pig ventricular cell model incorporating a diadic space, IKr and IKs, and length- and tension-dependent processes. 14(1) 123 134 Penny Noble J D Noble James Lawson 1998-01-01 00:00 2007-06-22T14:01:50+12:00 Penny Noble was informed by a community member that INaCads was not included in the dCads/dt ODE and hence the system was out of balance. 
This has been fixed in version 07. An alteration was also made to the model to remove two differential expressions in the intracellular_calcium concentration component to allow PCEnv 0.2 to run the model. Units checked, curated. James Lawson Richard keyword diadic space excitation-contraction coupling Ventricular Myocyte ventricular myocyte electrophysiology myofilament mechanics cardiac Improved guinea-pig ventricular cell model incorporating a diadic space, IKr and IKs, and length- and tension-dependent processes (Basic Model) Department of Physiology, Anatomy & Genetics, University of Oxford James Lawson James Lawson Richard 2007-12-04T15:09:12+13:00 This version was created by Penny Noble and is known to read into COR. James Lawson fixed an error (19/04/07) that prevented it reading in PCEnv (unsupported predefined operator diff in component intracellular_calcium_concentration. This variant of version 06 is parameterised for the BASIC model, excluding acetylcholine dependent modulation and electromechanical processes. A Varghese Several variables have been given cmeta:id's to allow creation of a PCEnv session file. Penny Noble J 2007-02-06T14:05:00+13:00 2007-09-06T12:57:25+12:00 2007-05-03T00:00:00+00:00 University of Oxford Department of Physiology, Anatomy & Genetics This model has been curated by Penny Noble of Oxford University 2007-07-18T12:58:29+12:00 Penny Noble J 2008-05-14T03:12:08+12:00 James Lawson Richard Note that the missing i_NCX_ds component was added to the dCai/dt rather than the dCads/dt equation in order to achieve balance in the model. 9487284 P Noble 50000 0.001 Canadian Journal of Cardiology 100000 0.001 bdf15 P Kohl penny.noble@dpag.ox.ac.uk"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8681042,"math_prob":0.9998534,"size":4785,"snap":"2021-31-2021-39","text_gpt3_token_len":1199,"char_repetition_ratio":0.10792721,"word_repetition_ratio":0.11859444,"special_character_ratio":0.23030303,"punctuation_ratio":0.114450864,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994537,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T00:57:03Z\",\"WARC-Record-ID\":\"<urn:uuid:6c810cd1-c8f7-4dc9-9adc-d1098c159644>\",\"Content-Length\":\"181383\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c99c07c-bd87-4a6b-a4ee-c9261af66b06>\",\"WARC-Concurrent-To\":\"<urn:uuid:ed1a9b4a-42ce-4b63-96ed-13bf095c7405>\",\"WARC-IP-Address\":\"54.186.195.229\",\"WARC-Target-URI\":\"https://models.cellml.org/workspace/noble_varghese_kohl_noble_1998/@@rawfile/995dda261096e1a8576203ccc5e8d3a7c1365da7/noble_varghese_kohl_noble_1998_a.cellml\",\"WARC-Payload-Digest\":\"sha1:YCNJEOVSXB5SYF5SVMQXWUAY6L6QO3SM\",\"WARC-Block-Digest\":\"sha1:RPH56FGOMR4FALZFKE26W2R7CULLMGSW\",\"WARC-Identified-Payload-Type\":\"application/xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057479.26_warc_CC-MAIN-20210923225758-20210924015758-00058.warc.gz\"}"} |
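The E_Na, E_K and E_Ca expressions in the CellML listing above are standard Nernst reversal potentials. The sketch below is illustrative only: the ionic concentrations are typical textbook values, not parameters read from the model file, and the temperature of 310 K is an assumption.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol
T = 310.0    # absolute temperature, K (assumed body temperature)

def nernst_mv(conc_out, conc_in, z=1):
    """Nernst reversal potential in mV: E = (RT/zF) ln([X]o / [X]i).
    For Ca2+ (z = 2) this reproduces the 0.5*RT/F prefactor in E_Ca."""
    return 1000.0 * R * T / (z * F) * math.log(conc_out / conc_in)

# illustrative concentrations in mM (assumptions, not model values)
e_na = nernst_mv(140.0, 10.0)      # Na+: positive reversal potential
e_k = nernst_mv(4.0, 140.0)        # K+: strongly negative
e_ca = nernst_mv(2.0, 1e-4, z=2)   # Ca2+: very positive (low intracellular Ca)
```

These reversal potentials feed the driving-force terms (V - E_X) in the membrane-current equations of the model.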
http://ixtrieve.fh-koeln.de/birds/litie/document/23777 | [
"# Document (#23777)\n\nAuthor\nGrossmann, S.\nTitle\nMeta-Strukturen in Intranets : Konzepte, Vorgehensweise, Beispiele\nSource\nInformation Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt\nImprint\nFrankfurt am Main : DGI\nYear\n2001\nPages\nS.67-73\nSeries\nTagungen der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis; 4\nAbstract\nDie meisten Intranets stehen vor einem Informationsinfarkt - es fehlt in den Organisationen vielfach an klaren Rollenkonzepten zur Eingabe, Pflege und Weiterentwicklung der Intranets, vor allem aber auch an methodischen Grundsätzen zur Erfassung und Erschließung der verschiedenartigen Informationen. In diesem Beitrag werden die Grundkonzepte zur Meta-Strukturierung beschrieben, eine erprobte Vorgehensweise bei der Implementierung entsprechender Standards erarbeitet und zur besseren Illustration an konkreten Beispielen dargestellt\nTheme\nIntranet\n\n## Similar documents (content)\n\n1. 
Schmidt, A.: Endo-Management : Wissenslenkung in Cyber-Ökonomien (1999) 0.11\n```0.10593288 = sum of:\n0.10593288 = product of:\n0.3783317 = sum of:\n0.025941519 = weight(abstract_txt:meisten in 6008) [ClassicSimilarity], result of:\n0.025941519 = score(doc=6008,freq=1.0), product of:\n0.12370121 = queryWeight, product of:\n1.0421788 = boost\n6.710756 = idf(docFreq=140, maxDocs=42596)\n0.017687248 = queryNorm\n0.20971112 = fieldWeight in 6008, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.710756 = idf(docFreq=140, maxDocs=42596)\n0.03125 = fieldNorm(doc=6008)\n0.045661163 = weight(abstract_txt:strukturen in 6008) [ClassicSimilarity], result of:\n0.045661163 = score(doc=6008,freq=3.0), product of:\n0.12503585 = queryWeight, product of:\n1.0477859 = boost\n6.746861 = idf(docFreq=135, maxDocs=42596)\n0.017687248 = queryNorm\n0.36518455 = fieldWeight in 6008, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n6.746861 = idf(docFreq=135, maxDocs=42596)\n0.03125 = fieldNorm(doc=6008)\n0.026894936 = weight(abstract_txt:weiterentwicklung in 6008) [ClassicSimilarity], result of:\n0.026894936 = score(doc=6008,freq=1.0), product of:\n0.12671383 = queryWeight, product of:\n1.0547931 = boost\n6.791981 = idf(docFreq=129, maxDocs=42596)\n0.017687248 = queryNorm\n0.21224941 = fieldWeight in 6008, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.791981 = idf(docFreq=129, maxDocs=42596)\n0.03125 = fieldNorm(doc=6008)\n0.046314023 = weight(abstract_txt:organisationen in 6008) [ClassicSimilarity], result of:\n0.046314023 = score(doc=6008,freq=2.0), product of:\n0.1444914 = queryWeight, product of:\n1.1263576 = boost\n7.252796 = idf(docFreq=81, maxDocs=42596)\n0.017687248 = queryNorm\n0.32053134 = fieldWeight in 6008, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n7.252796 = idf(docFreq=81, maxDocs=42596)\n0.03125 = fieldNorm(doc=6008)\n0.046542827 = weight(abstract_txt:meta in 6008) 
[ClassicSimilarity], result of:\n0.046542827 = score(doc=6008,freq=1.0), product of:\n0.23012061 = queryWeight, product of:\n2.0102406 = boost\n6.47213 = idf(docFreq=178, maxDocs=42596)\n0.017687248 = queryNorm\n0.20225406 = fieldWeight in 6008, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.47213 = idf(docFreq=178, maxDocs=42596)\n0.03125 = fieldNorm(doc=6008)\n0.07692492 = weight(abstract_txt:vorgehensweise in 6008) [ClassicSimilarity], result of:\n0.07692492 = score(doc=6008,freq=1.0), product of:\n0.32168567 = queryWeight, product of:\n2.3767643 = boost\n7.6521826 = idf(docFreq=54, maxDocs=42596)\n0.017687248 = queryNorm\n0.2391307 = fieldWeight in 6008, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.6521826 = idf(docFreq=54, maxDocs=42596)\n0.03125 = fieldNorm(doc=6008)\n0.11005232 = weight(abstract_txt:intranets in 6008) [ClassicSimilarity], result of:\n0.11005232 = score(doc=6008,freq=1.0), product of:\n0.46753797 = queryWeight, product of:\n3.509331 = boost\n7.532381 = idf(docFreq=61, maxDocs=42596)\n0.017687248 = queryNorm\n0.23538691 = fieldWeight in 6008, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.532381 = idf(docFreq=61, maxDocs=42596)\n0.03125 = fieldNorm(doc=6008)\n0.28 = coord(7/25)\n```\n2. 
Hanke, M.: Bibliothekarische Klassifikationssysteme im semantischen Web : zu Chancen und Problemen von Linked-data-Repräsentationen ausgewählter Klassifikationssysteme (2014) 0.06\n```0.056315336 = sum of:\n0.056315336 = product of:\n0.35197085 = sum of:\n0.06723734 = weight(abstract_txt:weiterentwicklung in 3464) [ClassicSimilarity], result of:\n0.06723734 = score(doc=3464,freq=1.0), product of:\n0.12671383 = queryWeight, product of:\n1.0547931 = boost\n6.791981 = idf(docFreq=129, maxDocs=42596)\n0.017687248 = queryNorm\n0.53062356 = fieldWeight in 3464, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.791981 = idf(docFreq=129, maxDocs=42596)\n0.078125 = fieldNorm(doc=3464)\n0.0905555 = weight(abstract_txt:strukturierung in 3464) [ClassicSimilarity], result of:\n0.0905555 = score(doc=3464,freq=1.0), product of:\n0.15453501 = queryWeight, product of:\n1.1648465 = boost\n7.500633 = idf(docFreq=63, maxDocs=42596)\n0.017687248 = queryNorm\n0.5859869 = fieldWeight in 3464, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.500633 = idf(docFreq=63, maxDocs=42596)\n0.078125 = fieldNorm(doc=3464)\n0.091127075 = weight(abstract_txt:pflege in 3464) [ClassicSimilarity], result of:\n0.091127075 = score(doc=3464,freq=1.0), product of:\n0.1551846 = queryWeight, product of:\n1.1672921 = boost\n7.516381 = idf(docFreq=62, maxDocs=42596)\n0.017687248 = queryNorm\n0.5872173 = fieldWeight in 3464, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.516381 = idf(docFreq=62, maxDocs=42596)\n0.078125 = fieldNorm(doc=3464)\n0.10305093 = weight(abstract_txt:besseren in 3464) [ClassicSimilarity], result of:\n0.10305093 = score(doc=3464,freq=1.0), product of:\n0.16844247 = queryWeight, product of:\n1.216133 = boost\n7.8308744 = idf(docFreq=45, maxDocs=42596)\n0.017687248 = queryNorm\n0.6117871 = fieldWeight in 3464, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.8308744 = idf(docFreq=45, 
maxDocs=42596)\n0.078125 = fieldNorm(doc=3464)\n0.16 = coord(4/25)\n```\n3. Palme, J.: HTML / XML / SGML : Gemeinsamkeiten und Unterschiede (1998) 0.05\n```0.052635454 = sum of:\n0.052635454 = product of:\n0.3289716 = sum of:\n0.057293948 = weight(abstract_txt:konzepte in 621) [ClassicSimilarity], result of:\n0.057293948 = score(doc=621,freq=1.0), product of:\n0.11389102 = queryWeight, product of:\n6.43916 = idf(docFreq=184, maxDocs=42596)\n0.017687248 = queryNorm\n0.5030594 = fieldWeight in 621, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.43916 = idf(docFreq=184, maxDocs=42596)\n0.078125 = fieldNorm(doc=621)\n0.08999509 = weight(abstract_txt:erfassung in 621) [ClassicSimilarity], result of:\n0.08999509 = score(doc=621,freq=1.0), product of:\n0.15389678 = queryWeight, product of:\n1.1624386 = boost\n7.4851284 = idf(docFreq=64, maxDocs=42596)\n0.017687248 = queryNorm\n0.5847757 = fieldWeight in 621, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.4851284 = idf(docFreq=64, maxDocs=42596)\n0.078125 = fieldNorm(doc=621)\n0.0905555 = weight(abstract_txt:strukturierung in 621) [ClassicSimilarity], result of:\n0.0905555 = score(doc=621,freq=1.0), product of:\n0.15453501 = queryWeight, product of:\n1.1648465 = boost\n7.500633 = idf(docFreq=63, maxDocs=42596)\n0.017687248 = queryNorm\n0.5859869 = fieldWeight in 621, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.500633 = idf(docFreq=63, maxDocs=42596)\n0.078125 = fieldNorm(doc=621)\n0.091127075 = weight(abstract_txt:pflege in 621) [ClassicSimilarity], result of:\n0.091127075 = score(doc=621,freq=1.0), product of:\n0.1551846 = queryWeight, product of:\n1.1672921 = boost\n7.516381 = idf(docFreq=62, maxDocs=42596)\n0.017687248 = queryNorm\n0.5872173 = fieldWeight in 621, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.516381 = idf(docFreq=62, maxDocs=42596)\n0.078125 = fieldNorm(doc=621)\n0.16 = coord(4/25)\n```\n4. 
Franke, F.; Pfister, S.; Schüller-Zwierlein, A.: \"Hätten wir personelle Valenzen, würden wir uns um stärkere Nutzung bemühen.\" : Eine Umfrage zur Vermittlung von lnformationskompetenz an Schüler an den bayerischen wissenschaftlichen Bibliotheken (2007) 0.05\n```0.047171116 = sum of:\n0.047171116 = product of:\n0.29481947 = sum of:\n0.045835156 = weight(abstract_txt:konzepte in 1943) [ClassicSimilarity], result of:\n0.045835156 = score(doc=1943,freq=1.0), product of:\n0.11389102 = queryWeight, product of:\n6.43916 = idf(docFreq=184, maxDocs=42596)\n0.017687248 = queryNorm\n0.4024475 = fieldWeight in 1943, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.43916 = idf(docFreq=184, maxDocs=42596)\n0.0625 = fieldNorm(doc=1943)\n0.07456437 = weight(abstract_txt:strukturen in 1943) [ClassicSimilarity], result of:\n0.07456437 = score(doc=1943,freq=2.0), product of:\n0.12503585 = queryWeight, product of:\n1.0477859 = boost\n6.746861 = idf(docFreq=135, maxDocs=42596)\n0.017687248 = queryNorm\n0.5963439 = fieldWeight in 1943, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.746861 = idf(docFreq=135, maxDocs=42596)\n0.0625 = fieldNorm(doc=1943)\n0.06484724 = weight(abstract_txt:konkreten in 1943) [ClassicSimilarity], result of:\n0.06484724 = score(doc=1943,freq=1.0), product of:\n0.14353286 = queryWeight, product of:\n1.1226152 = boost\n7.2286987 = idf(docFreq=83, maxDocs=42596)\n0.017687248 = queryNorm\n0.45179367 = fieldWeight in 1943, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.2286987 = idf(docFreq=83, maxDocs=42596)\n0.0625 = fieldNorm(doc=1943)\n0.10957272 = weight(abstract_txt:vielfach in 1943) [ClassicSimilarity], result of:\n0.10957272 = score(doc=1943,freq=2.0), product of:\n0.16161512 = queryWeight, product of:\n1.1912317 = boost\n7.6705317 = idf(docFreq=53, maxDocs=42596)\n0.017687248 = queryNorm\n0.6779856 = fieldWeight in 1943, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = 
termFreq=2.0\n7.6705317 = idf(docFreq=53, maxDocs=42596)\n0.0625 = fieldNorm(doc=1943)\n0.16 = coord(4/25)\n```\n5. Geeb, F.: Lexikographische Informationsstrukturierung mit XML (2003) 0.05\n```0.045305442 = sum of:\n0.045305442 = product of:\n0.37754536 = sum of:\n0.10544994 = weight(abstract_txt:strukturen in 2843) [ClassicSimilarity], result of:\n0.10544994 = score(doc=2843,freq=1.0), product of:\n0.12503585 = queryWeight, product of:\n1.0477859 = boost\n6.746861 = idf(docFreq=135, maxDocs=42596)\n0.017687248 = queryNorm\n0.8433576 = fieldWeight in 2843, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.746861 = idf(docFreq=135, maxDocs=42596)\n0.125 = fieldNorm(doc=2843)\n0.12720662 = weight(abstract_txt:erarbeitet in 2843) [ClassicSimilarity], result of:\n0.12720662 = score(doc=2843,freq=1.0), product of:\n0.1416914 = queryWeight, product of:\n1.1153907 = boost\n7.182179 = idf(docFreq=87, maxDocs=42596)\n0.017687248 = queryNorm\n0.8977724 = fieldWeight in 2843, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.182179 = idf(docFreq=87, maxDocs=42596)\n0.125 = fieldNorm(doc=2843)\n0.14488879 = weight(abstract_txt:strukturierung in 2843) [ClassicSimilarity], result of:\n0.14488879 = score(doc=2843,freq=1.0), product of:\n0.15453501 = queryWeight, product of:\n1.1648465 = boost\n7.500633 = idf(docFreq=63, maxDocs=42596)\n0.017687248 = queryNorm\n0.9375791 = fieldWeight in 2843, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.500633 = idf(docFreq=63, maxDocs=42596)\n0.125 = fieldNorm(doc=2843)\n0.12 = coord(3/25)\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.57743233,"math_prob":0.9966147,"size":10540,"snap":"2019-51-2020-05","text_gpt3_token_len":4062,"char_repetition_ratio":0.23974943,"word_repetition_ratio":0.36684996,"special_character_ratio":0.5222011,"punctuation_ratio":0.2829248,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99960107,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-28T09:51:28Z\",\"WARC-Record-ID\":\"<urn:uuid:dd102099-e460-48c1-a9ea-988750e6827d>\",\"Content-Length\":\"20365\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fc0960ac-d2bc-4d9c-be38-0264c6977975>\",\"WARC-Concurrent-To\":\"<urn:uuid:279cc073-5246-4e91-9a1f-baa3e77c2d0a>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"http://ixtrieve.fh-koeln.de/birds/litie/document/23777\",\"WARC-Payload-Digest\":\"sha1:URLWBAG5JBFT75VYYOCGK64Z54ZKXXU4\",\"WARC-Block-Digest\":\"sha1:BKORHWSINYH23KUAZG2P2CWP62NKKKEH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251778168.77_warc_CC-MAIN-20200128091916-20200128121916-00384.warc.gz\"}"} |
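The score breakdowns in the record above all follow Lucene's ClassicSimilarity recipe: each term contributes queryWeight × fieldWeight, where queryWeight = idf · boost · queryNorm and fieldWeight = tf · idf · fieldNorm, and the per-document sum is finally scaled by the coord factor (e.g. `0.16 = coord(4/25)`). As a sketch (the helper name is mine, not Lucene's), the numbers from the `abstract_txt:konzepte` line of doc 621 can be reproduced directly:

```python
# Reproduce one term score from a Lucene ClassicSimilarity "explain" tree.
# For a single term: score = queryWeight * fieldWeight, with
#   queryWeight = idf * boost * queryNorm
#   fieldWeight = tf * idf * fieldNorm   (tf = sqrt(termFreq))
# Values below are copied from the abstract_txt:konzepte entry for doc 621.

def classic_similarity_score(tf, idf, boost, query_norm, field_norm):
    query_weight = idf * boost * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

score = classic_similarity_score(tf=1.0, idf=6.43916, boost=1.0,
                                 query_norm=0.017687248, field_norm=0.078125)
print(score)  # agrees with the printed 0.057293948 up to float32 rounding
```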
http://hi.gher.space/forum/viewtopic.php?f=3&t=2377 | [
"The downfall of the first axiom\n\nHigher-dimensional geometry (previously \"Polyshapes\").\n\nThe downfall of the first axiom\n\nEuclid made a bunch of axioms which are basically useless to graph theorists and topologists. But this could give rise to a new system of geometry, which is probably already a thing. The first axiom is like so: If you pick one point, and another distinct point, you can draw a single line between them. Now let's generalize this to death. If you pick one point and another point or the same point, you can put at least one line between them. Now, this allows for a lot of weird things such as monogons and digons with area, and a good visualization of sheaved polyhedra, which have their edges replaced with digons. Now to end this off, is this system equivalent to any existing one? If it is, I wasted my time writing this.",
null,
"ubersketch\nTrionian\n\nPosts: 143\nJoined: Thu Nov 30, 2017 12:00 am\n\nRe: The downfall of the first axiom\n\nI make use of the 'art of circle-drawing' as a means to avoid advanced hyperbolic and trigonometric functions. It allows me to deal with spheric, hyperbolic and euclidean geometries as variations of the same thing. The definition of 'straight' or 'flat' is that a subspace is straight / flat, if it has the same curvature as 'all-space'. This means that if you draw a circle in these, and a second one on the radius, the ratios of circumferences of the greater to the radial circles must be identical.\n\nA straight line is then a particular instance of an isocurve, which is concentric with all-space, (as great circles are lines and circles centred at the centre of a sphere), and that all iso-curves are defined by three point-like conditions. There is a class of 'parallelisms' that are defined by two point-like conditions. Straight lines contain the point 'U' (at infinity).\n\nSo there is at most one line through A, B, and U. A general parallelism is defined by two points. So there are an infinite number of straight lines through C, U, but only one of these will contain a separate point A. There are an infinite set of isocurves through AB, that pass through any point C.\n\nThe interesting thing is that one can derive Möbius geometry, if one supposes that space is a sphere, and the point U is the full interior of the sphere. This is what happens when you evaluate the hyperbolic space horizon. Every circle drawn on the sphere passes through U, and has the same curvature as the space it's in: i.e. every circle is straight. You can then draw a straight line through any three points.\n\nIn regards the digon and so forth, the digon arises in reflection symmetry when one considers that the edge of a polyhedron (like a dodecahedron), where the mirror runs along the edge, is not so much o--------o but o======o. 
Of course, the polygon-rules of area etc still apply, but it is easier to generalise the cycle of polygons, not so much as 3 pentagons, but three-times alternately pentagon, digon. Of course, if one derives the polygon by Wythoff's construction, it is easy to replace the edge with a rectangle, viz. 8======8, of zero height.\n\nIn essence, each node of a CD diagram has both a 'surround' and 'arround' symmetry, where the first contains the vertex-nodes, and forms a visible surtope, while the second contains no vertex-nodes. So you get a zero-height surtope, such as the rectangle: the surround mirrors make the --------- part while the around mirrors make the 8 bit. The mirrors that mark out the surtope are 'wall' mirrors, which reflect a copy of the surtope onto a different copy.\n\nIn low dimensions, it is not necessary to be all that fussy, and digons will do for s=edge, a=edge prisms.\nThe dream you dream alone is only a dream\nthe dream we dream together is reality.\n\n\\(Latex\\) at https://greasyfork.org/en/users/188714-wendy-krieger",
null,
"wendy\nPentonian\n\nPosts: 1901\nJoined: Tue Jan 18, 2005 12:42 pm\nLocation: Brisbane, Australia"
] | [
null,
"https://i.imgur.com/ZLVpaTX.png",
null,
"http://www.msfn.org/board/uploads/av-30458.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9229493,"math_prob":0.9586351,"size":2727,"snap":"2019-43-2019-47","text_gpt3_token_len":656,"char_repetition_ratio":0.11678296,"word_repetition_ratio":0.004201681,"special_character_ratio":0.22955629,"punctuation_ratio":0.11310592,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98827386,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T06:58:16Z\",\"WARC-Record-ID\":\"<urn:uuid:4a99d385-089f-4a40-9350-92e40c404d3d>\",\"Content-Length\":\"19940\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f220bae2-8fe4-4e4a-8649-7530cbeb6db7>\",\"WARC-Concurrent-To\":\"<urn:uuid:526c36d0-c8e2-46fd-b7c9-6c407974fc5d>\",\"WARC-IP-Address\":\"178.79.184.59\",\"WARC-Target-URI\":\"http://hi.gher.space/forum/viewtopic.php?f=3&t=2377\",\"WARC-Payload-Digest\":\"sha1:IFISJCJUINHC7KNON25E35F7AQ3Y44TR\",\"WARC-Block-Digest\":\"sha1:2ONJFQTCOZLEDUVTX6EAPQO5IW7BQYDP\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986692126.27_warc_CC-MAIN-20191019063516-20191019091016-00090.warc.gz\"}"} |
https://stats.stackexchange.com/questions/273465/neural-network-softmax-activation/280048 | [
"# Neural network softmax activation\n\nI'm trying to perform backpropagation on a neural network using Softmax activation on the output layer and a cross-entropy cost function. Here are the steps I take:\n\n1. Calculate the error gradient with respect to each output neuron's input:\n\n$$\\frac{\\partial E} {\\partial z_j} = {\\frac{\\partial E} {\\partial o_j}}{\\frac{\\partial o_j} {\\partial z_j}}$$\n\nwhere $\\frac{\\partial E} {\\partial o_j}$ is the derivative of the cost function with respect to the node's output and $\\frac{\\partial o_j} {\\partial z_j}$ is the derivative of the activation function.\n\n2. Adjust the output layer's weights using the following formula:\n\n$$w_{ij} = w'_{ij} - r{\\frac{\\partial E} {\\partial z_j}} {o_i}$$\n\nwhere $r$ is some learning rate constant and $o_i$ is the $i$th output from the previous layer.\n\n3. Adjust the hidden layer's weights using the following formula:\n\n$$w_{ij} = w'_{ij} - r{\\frac{\\partial o_j} {\\partial z_j}} {\\sum_k (E_k w'_{jk})} {o_i}$$\n\nwhere ${\\frac{\\partial o_j} {\\partial z_j}}$ is the derivative of the hidden layer's activation function and $E$ is the vector of output layer error gradients computed in Step 1.\n\nQuestion: The internet has told me that when using Softmax combined with cross entropy, Step 1 simply becomes\n\n$$\\frac{\\partial E} {\\partial z_j} = o_j - t_j$$ where $t$ is a one-hot encoded target output vector. Is this correct?\n\nFor some reason, each round of backpropagation is causing my network to adjust itself heavily toward the provided label - so much that the network's predictions are always whatever the most recent backpropagation label was, regardless of input. I don't understand why this is happening, or how it can even be possible.\n\nThere must be something wrong with the method I'm using. 
Any ideas?\n\nThe internet has told me that when using Softmax combined with cross entropy, Step 1 simply becomes $\\frac{\\partial E} {\\partial z_j} = o_j - t_j$ where $t$ is a one-hot encoded target output vector. Is this correct?\n\nYes. Before going through the proof, let me change the notation to avoid careless mistakes in translation:\n\n### Notation:",
null,
"whereby $j$ is the index denoting any of the $K$ output neurons - not necessarily the one corresponding to the true ($t$) value. Now,\n\n\\begin{align} o_j&=\\sigma(j)=\\sigma(z_j)=\\text{softmax}(j)=\\text{softmax (neuron }j)=\\frac{e^{z_j}}{\\displaystyle\\sum_K e^{z_k}}\\\\[3ex] z_j &= \\mathbf w_j^\\top \\mathbf x = \\text{preactivation (neuron }j) \\end{align}\n\nThe loss function is the negative log likelihood:\n\n$$E = -\\log \\sigma(t) = -\\log \\left(\\text{softmax}(t)\\right)$$\n\nThe negative log likelihood is also known as the multiclass cross-entropy (ref: Pattern Recognition and Machine Learning Section 4.3.4), as they are in fact two different interpretations of the same formula.\n\n### Gradient of the loss function with respect to the pre-activation of an output neuron:\n\n\\begin{align} \\frac{\\partial E}{\\partial z_j}&=\\frac{\\partial}{\\partial z_j}\\,-\\log\\left( \\sigma(t)\\right)\\\\[2ex] &= \\frac{-1}{\\sigma(t)}\\quad\\frac{\\partial}{\\partial z_j}\\sigma(t)\\\\[2ex] &= \\frac{-1}{\\sigma(t)}\\quad\\frac{\\partial}{\\partial z_j}\\sigma(z_t)\\\\[2ex] &= \\frac{-1}{\\sigma(t)}\\quad\\frac{\\partial}{\\partial z_j}\\frac{e^{z_t}}{\\displaystyle\\sum_K e^{z_k}}\\\\[2ex] &= \\frac{-1}{\\sigma(t)}\\quad\\left[ \\frac{\\frac{\\partial }{\\partial z_j }e^{z_t}}{\\displaystyle\\sum_K e^{z_k}} \\quad - \\quad \\frac{e^{z_t}\\quad \\frac{\\partial}{\\partial z_j}\\displaystyle \\sum_K e^{z_k}}{\\left[\\displaystyle\\sum_K e^{z_k}\\right]^2}\\right]\\\\[2ex] &= \\frac{-1}{\\sigma(t)}\\quad\\left[ \\frac{\\delta_{jt}\\;e^{z_t}}{\\displaystyle\\sum_K e^{z_k}} \\quad - \\quad \\frac{e^{z_t}}{\\displaystyle\\sum_K e^{z_k}} \\frac{e^{z_j}}{\\displaystyle\\sum_K e^{z_k}}\\right]\\\\[2ex] &= \\frac{-1}{\\sigma(t)}\\quad\\left(\\delta_{jt}\\sigma(t) - \\sigma(t)\\sigma(j) \\right)\\\\[2ex] &= - (\\delta_{jt} - \\sigma(j))\\\\[2ex] &= \\sigma(j) - \\delta_{jt} \\end{align}\n\nThis is practically identical to $\\frac{\\partial E} {\\partial z_j} = o_j - t_j$, and it does become
identical if instead of focusing on $j$ as an individual output neuron, we transition to vectorial notation (as indicated in your question), and $t_j$ becomes the one-hot encoded vector of true values, which in my notation would be $\\small \\begin{bmatrix}0&0&0&\\cdots&1&0&0&0_K\\end{bmatrix}^\\top$.\n\nThen, with $\\frac{\\partial E} {\\partial z_j} = o_j - t_j$ we are really calculating the gradient of the loss function with respect to the preactivation of all output neurons: the vector $t_j$ will contain a $1$ only in the neuron corresponding to the correct category, which is equivalent to the delta function $\\delta_{jt}$, which is $1$ only when differentiating with respect to the pre-activation of the output neuron of the correct category.\n\nIn Geoffrey Hinton's Coursera ML course the following chunk of code illustrates the implementation in Octave:\n\n%% Compute derivative of cross-entropy loss function.\nerror_deriv = output_layer_state - expanded_target_batch;\n\n\nThe expanded_target_batch corresponds to the one-hot encoded sparse matrix of targets for the training set. 
Hence, in the majority of the output neurons, the error_deriv = output_layer_state $(\\sigma(j))$, because $\\delta_{jt}$ is $0$, except for the neuron corresponding to the correct classification, in which case, a $1$ is going to be subtracted from $\\sigma(j).$\n\nThe actual measurement of the cost is carried out with...\n\n% MEASURE LOSS FUNCTION.\nCE = -sum(sum(...\nexpanded_target_batch .* log(output_layer_state + tiny))) / batchsize;\n\n\nWe see again the $\\frac{\\partial E}{\\partial z_j}$ in the beginning of the backpropagation algorithm:\n\n$$\\small\\frac{\\partial E}{\\partial W_{hidd-2-out}}=\\frac{\\partial \\text{outer}_{input}}{\\partial W_{hidd-2-out}}\\, \\frac{\\partial E}{\\partial \\text{outer}_{input}}=\\frac{\\partial z_j}{\\partial W_{hidd-2-out}}\\, \\frac{\\partial E}{\\partial z_j}$$\n\nin\n\nhid_to_output_weights_gradient = hidden_layer_state * error_deriv';\n\n\nsince $z_j = \\text{outer}_{in}= W_{hidd-2-out} \\times \\text{hidden}_{out}$\n\n• The splitting of partials in the OP, $\\frac{\\partial E} {\\partial z_j} = {\\frac{\\partial E} {\\partial o_j}}{\\frac{\\partial o_j} {\\partial z_j}}$, seems unwarranted.\n\n• The updating of the weights from hidden to output proceeds as...\n\nhid_to_output_weights_delta = ...\nmomentum .* hid_to_output_weights_delta + ...\n\nwhich don't include the output $o_j$ in the OP formula: $w_{ij} = w'_{ij} - r{\\frac{\\partial E} {\\partial z_j}} {o_i}.$ The formula would be more along the lines of...\n$$W_{hidd-2-out}:=W_{hidd-2-out}-r\\, \\small \\frac{\\partial E}{\\partial W_{hidd-2-out}}\\, \\Delta_{hidd-2-out}$$"
] | [
null,
"https://i.stack.imgur.com/0rewJ.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6736136,"math_prob":0.9994209,"size":5127,"snap":"2019-51-2020-05","text_gpt3_token_len":1517,"char_repetition_ratio":0.1764591,"word_repetition_ratio":0.05042017,"special_character_ratio":0.30836746,"punctuation_ratio":0.09942196,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000038,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-09T00:09:18Z\",\"WARC-Record-ID\":\"<urn:uuid:632ddfb9-094d-4133-8b0e-dfd62c53f1be>\",\"Content-Length\":\"140071\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1518872f-90c3-49de-aa67-c6bc4a00ee0b>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a628229-8ecf-47d1-ab28-768f13d9b8cf>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/273465/neural-network-softmax-activation/280048\",\"WARC-Payload-Digest\":\"sha1:W3ITS5E5JHK73PTGVRD6FV3YNF3Y2CX3\",\"WARC-Block-Digest\":\"sha1:PQ567XOX5KKRJX6V5DEQR5NNTQWLCEFB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540515344.59_warc_CC-MAIN-20191208230118-20191209014118-00155.warc.gz\"}"} |
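The accepted claim in the record above — that softmax plus cross-entropy collapses the output-layer gradient to $o_j - t_j$ — is easy to sanity-check numerically. The sketch below (illustrative names, assuming NumPy is available) compares the closed form σ(j) − δ_jt against central finite differences of E = −log softmax(z)_t:

```python
import numpy as np

# Numerical sanity check of dE/dz_j = softmax(z)_j - t_j for the
# cross-entropy loss E = -log(softmax(z)[t]) with a one-hot target.
# All names here are illustrative, not from any particular framework.

def softmax(z):
    e = np.exp(z - z.max())        # subtract max for numerical stability
    return e / e.sum()

z = np.array([0.3, -1.2, 2.0, 0.5])   # made-up pre-activations
t = 2                                  # index of the true class

analytic = softmax(z).copy()
analytic[t] -= 1.0                     # sigma(j) - delta_jt

# Central finite differences of E(z) = -log(softmax(z)[t])
eps = 1e-6
numeric = np.zeros_like(z)
for j in range(len(z)):
    zp, zm = z.copy(), z.copy()
    zp[j] += eps
    zm[j] -= eps
    numeric[j] = (-np.log(softmax(zp)[t]) + np.log(softmax(zm)[t])) / (2 * eps)

print(np.abs(analytic - numeric).max())  # tiny: the closed form checks out
```

Note that if this gradient is applied with too large a learning rate, each update can swing the network toward the most recent label, which is consistent with the symptom described in the question.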
https://essayparlour.com/paperhelp/economics/consider-the-following-cobb-douglas-production-function-for/ | [
"",
null,
"# Consider the following Cobb-Douglas production function for\n\nConsider the following Cobb-Douglas production function for the bus transportation system in a particular city: (Q = alpha * L^beta1 * F^beta2 * K^beta3) where L = labor input in worker hours, F = fuel input in gallons, K = capital input in number of buses, Q = output measured in millions of bus miles. Suppose that the parameters (alpha, beta1, beta2, beta3) of this model were estimated using annual data for the past 25 years. The following results were obtained: alpha=0.0012 beta1=0.45 beta2=0.20 beta3=0.30. a. Determine the (i) labor, (ii) fuel, (iii) capital input production elasticities. b. Suppose that labor input (worker hours) is increased by 2 percent next year (with the other inputs held constant). Determine the approximate percentage change in output. c. Suppose that capital input (number of buses) is decreased by 3 percent next year (when certain older buses are taken out of service). Assuming that the other inputs are held constant, determine the approximate percentage change in output. d. What type of returns to scale appears to characterize this bus transportation system? (Ignore the issue of statistical significance.) e. Discuss some of the methodological and measurement problems one might encounter in using time-series data to estimate the parameters of this model.",
null,
""
] | [
null,
"https://www.facebook.com/tr",
null,
"https://essayparlour.com/wp-content/uploads/2016/07/8952039_orig.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8976871,"math_prob":0.8938346,"size":1396,"snap":"2021-31-2021-39","text_gpt3_token_len":300,"char_repetition_ratio":0.09913793,"word_repetition_ratio":0.06603774,"special_character_ratio":0.21489972,"punctuation_ratio":0.10121457,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98996276,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-26T19:29:16Z\",\"WARC-Record-ID\":\"<urn:uuid:ae9eeeb8-9b69-48cc-9b19-d2e00ed6217b>\",\"Content-Length\":\"34949\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5fdf64ca-1c4e-42ba-a746-93492922be37>\",\"WARC-Concurrent-To\":\"<urn:uuid:edd93a8a-9a12-4881-ad1c-eae4cedad222>\",\"WARC-IP-Address\":\"172.67.162.9\",\"WARC-Target-URI\":\"https://essayparlour.com/paperhelp/economics/consider-the-following-cobb-douglas-production-function-for/\",\"WARC-Payload-Digest\":\"sha1:KJ3TG46DQCFXFUJ275R2Z3VY63K4SQ77\",\"WARC-Block-Digest\":\"sha1:3OCFGNVJUJDMUDOIF2JGFJ6Q3R4UJ2JF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057913.34_warc_CC-MAIN-20210926175051-20210926205051-00627.warc.gz\"}"} |
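For the Cobb-Douglas question in the record above, parts (a)–(d) follow directly from the fact that each exponent is the output elasticity of its input, so %ΔQ ≈ elasticity × %Δinput. A quick check (variable names are mine, not from the problem set):

```python
# Parts (a)-(d): in Q = alpha * L^b1 * F^b2 * K^b3 each exponent is the
# output elasticity of its input, and %dQ ~= elasticity * %d(input).

b1, b2, b3 = 0.45, 0.20, 0.30   # (a) labor, fuel, capital elasticities

dq_labor = b1 * 2.0             # (b) labor +2%   -> output up ~0.9%
dq_capital = b3 * (-3.0)        # (c) capital -3% -> output down ~0.9%
scale = b1 + b2 + b3            # (d) 0.95 < 1 -> decreasing returns to scale

print(round(dq_labor, 2), round(dq_capital, 2), round(scale, 2))  # 0.9 -0.9 0.95
```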
http://www.moomoomathblog.com/2021/01/all-factors-of-625-how-many-factors.html | [
"All factors of 625\n\nHow many factors does 625 have?\n\n5 numbers will factor into 625. This means that there are 5 numbers 625 can be divided by completely, with no remainder.\n\nThe factors of 625 include:\n\n1, 5, 25, 125, 625.\n\nThe factors of 625 in pairs include:\n\n1 and 625, 5 and 125, and 25 and 25.\n\nRemember that a prime number is a number greater than 1 that can only be evenly divided by itself and 1. For example, 5 is a prime number.\n\nIs 625 a prime number?\n\nNo, because it can be divided by numbers other than itself and 1.\n\nA composite number can be divided by numbers other than itself and 1 exactly (no remainder). To put it another way, numbers besides 1 and the number itself can be multiplied together to equal that number.\n\nIs 625 composite?\n\nYes, 625 is composite because many numbers besides 625 and 1 can be multiplied together to equal 625 (i.e. 5 and 125, 25 and 25). 625 can be divided by numbers like 5, 25, and 125 with no remainder.\n\nWhat is the prime factorization of a number? The prime factorization of a number is the process of finding all the prime factors that multiply together to equal a given number.\n\nFor instance, the prime factorization of 12 is 2 x 2 x 3 = 12.\n\nHow can we find the prime factorization of 625?\n\nI like to create a factor tree\n\nSee the picture below for a factor tree of 625.",
null,
"Prime factorization of 625\n\n5 x 5 x 5 x 5 = 625"
] | [
null,
"https://docs.google.com/drawings/u/0/d/s74suF7mMTs5ucUsGGwvFkw/image",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9099251,"math_prob":0.9984099,"size":1400,"snap":"2023-40-2023-50","text_gpt3_token_len":374,"char_repetition_ratio":0.1769341,"word_repetition_ratio":0.0701107,"special_character_ratio":0.3042857,"punctuation_ratio":0.13015874,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99979633,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T18:39:28Z\",\"WARC-Record-ID\":\"<urn:uuid:46bf9134-7b40-45af-a60d-d923e336f6f9>\",\"Content-Length\":\"94951\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b21b8375-91e5-47c6-9bd6-33d780f63a40>\",\"WARC-Concurrent-To\":\"<urn:uuid:a007238e-f2a1-472c-9895-3903250c59c4>\",\"WARC-IP-Address\":\"172.253.63.121\",\"WARC-Target-URI\":\"http://www.moomoomathblog.com/2021/01/all-factors-of-625-how-many-factors.html\",\"WARC-Payload-Digest\":\"sha1:OB6EDBFRKD3MVQSOQ6PS5WTTE7D3AEW7\",\"WARC-Block-Digest\":\"sha1:NHGV6ZIAGQR7PX7WBSABLPMF7DZBJORA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100448.65_warc_CC-MAIN-20231202172159-20231202202159-00783.warc.gz\"}"} |
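The factor list and the 5 x 5 x 5 x 5 factor tree in the post above can be checked by brute force (trial division; the helper names are illustrative):

```python
# Recompute the post's claims: the factors of 625 and its prime
# factorization 5 x 5 x 5 x 5.

def factors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def prime_factors(n):
    out, d = [], 2
    while d * d <= n:          # trial division up to sqrt(n)
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

print(factors(625))        # [1, 5, 25, 125, 625]
print(prime_factors(625))  # [5, 5, 5, 5]
```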
https://community.rstudio.com/t/how-can-find-the-row-indices-in-one-data-frame-where-its-values-exist-in-another-sorther-data-frame/46448 | [
"# How can I find the row indices in one data frame where its values exist in another shorter data frame?\n\nDear R Users,\n\nI have two data frames. Each of them has three columns but the number of rows is different. I would like to find the row indices of the longer data frame where its values exist in the other data frame. I tried the following:\n\n``````ind = which(as.numeric(Results[,1])\n%in% as.numeric(Hour.mean[,1])\n& as.numeric(Results[,2]) %in% as.numeric(Hour.mean[,2])\n& as.numeric(Results[,3]) %in% as.numeric(Hour.mean[,3])\n)\n``````\n\nbut it does not give correct values. Could someone suggest a solution?\n\nHi!\n\nTo help us help you, could you please prepare a reproducible example (reprex) illustrating your issue? Please have a look at this guide, to see how to create one:\n\nIn the meantime I found a solution. I converted the column values of Results and also the Hour.mean with the `paste` function and then I could find the row indices with the `which` function in the following way:\n\n``````List1 = paste(Results[,1], Results[,2], Results[,3], sep=\".\")\nList2 = paste(Hour.mean[,1], Hour.mean[,2], Hour.mean[,3], sep=\".\")\nind = which(List1 %in% List2)\n``````\n\nThis topic was automatically closed 21 days after the last reply. New replies are no longer allowed."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8269744,"math_prob":0.9133578,"size":494,"snap":"2022-05-2022-21","text_gpt3_token_len":133,"char_repetition_ratio":0.16734694,"word_repetition_ratio":0.0,"special_character_ratio":0.27530363,"punctuation_ratio":0.19298245,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9859662,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-17T00:58:49Z\",\"WARC-Record-ID\":\"<urn:uuid:2833e024-5698-4644-82e0-c834d2daa687>\",\"Content-Length\":\"27152\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:165fa4f6-8024-41a3-97bc-bfd1babcca1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:a5dfb3ee-a7c8-435b-9048-a1504d07347a>\",\"WARC-IP-Address\":\"167.99.20.217\",\"WARC-Target-URI\":\"https://community.rstudio.com/t/how-can-find-the-row-indices-in-one-data-frame-where-its-values-exist-in-another-sorther-data-frame/46448\",\"WARC-Payload-Digest\":\"sha1:UHYDCJPBRFBIQY2GUTWOJJGK5JUJCQMB\",\"WARC-Block-Digest\":\"sha1:PYL354VUAVFJMPDOM3OJCWDJPHUBCXVF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662515466.5_warc_CC-MAIN-20220516235937-20220517025937-00418.warc.gz\"}"} |
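For the record above: the first attempt fails because `%in%` is applied to each column independently, so a row matches whenever each of its three values appears *somewhere* in `Hour.mean`, even in different rows; the accepted `paste()` trick works because it compares whole rows at once. A Python sketch of the same whole-row idea (made-up sample data, 1-based indices to mirror R's `which`):

```python
# Whole-row matching: indices (1-based, like R's which) of rows in the
# longer table that also occur in the shorter one.

results = [(1, 2, 3), (4, 5, 6), (7, 8, 9), (1, 2, 3)]   # longer table
hour_mean = [(1, 2, 3), (7, 8, 9)]                        # shorter table

lookup = set(hour_mean)          # rows compared as whole tuples
ind = [i + 1 for i, row in enumerate(results) if row in lookup]
print(ind)  # [1, 3, 4]
```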
http://oeis.org/A258333 | [
null,
"Hints (Greetings from The On-Line Encyclopedia of Integer Sequences!)\n A258333 Number of (primitive) weird numbers of the form 2^n*p*q, with odd primes p < q. 12\n 1, 1, 5, 3, 10, 23, 29, 53, 115, 210, 394, 683, 1389, 3118, 6507, 9120 (list; graph; refs; listen; history; text; internal format)\n OFFSET 1,3 COMMENTS Sequence taken from page 3 of \"On primitive weird numbers of the form 2^k*p*q\". The (primitive) weird numbers considered here are listed in A258882, a proper subset of A002975. If 2^k*p*q is weird, then 2^(k+1) < p < 2^(k+2)-2 < q < 2^(2k+1). This being the case the number of possible pwn of the form 2^n*p*q with p unique is: 1, 2, 4, 7, 12, 23, 43, 75, 137, 255, 463, 872, 1612, 3030, 5708, .... However, p is usually not unique, e.g., for k=3, p=19 we have two pwn (with q=61 and q=71), and for k=5, p=71 yields two pwn (for q=523 and q=541) and p=67 yields three pwn (for q=887, 971 and 1021). I conjecture that there is an increasing number of pwn with, e.g., p=nextprime(2^(k+1)). Also, if 2^k p q and 2^k p' q are both weird, then usually 2^k p\" q is weird for all p\" between p and p'. There is one exception [p, p', q] = [2713, 2729, 8191] for k=10, five exceptions [6197, 6203, 12049], [6113, 6131, 12289], [6113, 6131, 12301], [6121, 6133, 12323], [5441, 5449, 16411] for k=11, and seven exceptions for k=12. These exceptions occur when q/p is close to an integer, (p, q) ~ (3/4, 3/2)*2^(k+2) or (2/3, 2)*2^(k+2). - M. F. Hasler, Jul 16 2016 LINKS Douglas E. Iannucci, On primitive weird numbers of the form 2^k*p*q, arXiv:1504.02761 [math.NT], 2015. EXAMPLE The only primitive weird number of the form 2*p*q is 70 so a(1) = 1; The only primitive weird number of the form 2^2*p*q is 836 so a(2) = 1; There are 5 primitive weird numbers of the form 2^3*p*q and they are 5704, 7912, 9272, 10792 & 17272; so a(3) = 5; etc. 
PROG (PARI) A258333(n)={ local(s=0, p, M=2^(n+1)-1, qn, T(P=p-1)=is_A006037(qn*p=precprime(P)) && s+=1); forprime(q=2*M, M*(M+1), qn=q<M, T() || T() || break)); s} \\\\ Not very efficient, for illustrative purpose only. - M. F. Hasler, Jul 18 2016 CROSSREFS Cf. A002975, A258882. Sequence in context: A141620 A195140 A049829 * A137613 A259650 A165670 Adjacent sequences: A258330 A258331 A258332 * A258334 A258335 A258336 KEYWORD hard,nonn,more AUTHOR Douglas E. Iannucci and Robert G. Wilson v, May 27 2015 EXTENSIONS a(15) from Robert G. Wilson v, Jun 14 2015 a(16) from Robert G. Wilson v, Dec 06 2015 STATUS approved\n\nThe OEIS Community | Maintained by The OEIS Foundation Inc.\n\nLast modified November 13 12:45 EST 2019. Contains 329094 sequences. (Running on oeis4.)"
] | [
null,
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7675117,"math_prob":0.98913676,"size":1465,"snap":"2019-43-2019-47","text_gpt3_token_len":614,"char_repetition_ratio":0.091718,"word_repetition_ratio":0.0076923077,"special_character_ratio":0.51262796,"punctuation_ratio":0.21568628,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99436814,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-13T18:15:01Z\",\"WARC-Record-ID\":\"<urn:uuid:d05b86ef-6587-42fd-b172-98200d8c34bc>\",\"Content-Length\":\"18340\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:810a444d-9c27-49b6-aa88-021702b58bb9>\",\"WARC-Concurrent-To\":\"<urn:uuid:7a8b87db-776e-42f8-8881-7a8416af461f>\",\"WARC-IP-Address\":\"104.239.138.29\",\"WARC-Target-URI\":\"http://oeis.org/A258333\",\"WARC-Payload-Digest\":\"sha1:Q7UQVV3KXS5EICLNRJLNXV2XA32ZG6FZ\",\"WARC-Block-Digest\":\"sha1:6YEVWX5OPZUZXJK6AKO5JRVPKZM3HIZQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496667319.87_warc_CC-MAIN-20191113164312-20191113192312-00439.warc.gz\"}"} |
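A weird number, as counted by the entry above, is one that is abundant (its proper divisors sum to more than it) but not semiperfect (no subset of those divisors sums to exactly it). A brute-force check of the two smallest cases quoted in the EXAMPLE section — 70 = 2·5·7 and 836 = 2²·11·19 — is feasible because their divisor lists are short:

```python
from itertools import combinations

# Brute-force weirdness test: abundant but not semiperfect.
# Only practical for small n (it enumerates divisor subsets).

def is_weird(n):
    divs = [d for d in range(1, n) if n % d == 0]
    if sum(divs) <= n:                       # not abundant -> not weird
        return False
    for r in range(1, len(divs) + 1):        # semiperfect check
        if any(sum(c) == n for c in combinations(divs, r)):
            return False
    return True

print(is_weird(70), is_weird(836))  # True True, matching the a(1), a(2) examples
```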
https://www.skytowner.com/explore/numpy_meshgrid_method | [
"# NumPy | meshgrid method\n\nDocumentation | Last updated Jul 1, 2022 | Tags: Python, NumPy\n\nNumpy's `meshgrid(~)` method returns a grid that is useful in plotting contour plots of 3D graphs. Since it is difficult to explain in words what this method does, consult the examples below.\n\n# Parameters\n\n1. `x1` | `array-like`\n\nThe input array to make a grid out of.\n\n2. `x2` | `array-like` | `optional`\n\nIf you want a 2D grid, then this is required.\n\n* Note that you could add in more arrays (e.g. x3, x4, ...) if desired.\n\n3. `sparse` | `boolean` | `optional`\n\nWhether to return a sparse grid. Set this to `True` if you're dealing with large volumes of data that cannot be fit into memory. By default, `sparse=False`.\n\n4. `copy` | `boolean` | `optional`\n\nIf `True`, then a new Numpy array is created and returned. If `False`, then a view is returned so as to save memory. By default, `copy=True`.\n\n# Return value\n\nIf only `x1` is specified, then a Numpy array is returned. Otherwise, a tuple of Numpy arrays will be returned.\n\n# Examples\n\n## Visualising the mesh grid\n\nLet us create a 2D mesh grid:\n\n``` x = [1,2,3,4]\ny = [5,6,7,8]\nxx, yy = np.meshgrid(x, y) ```\n\nLet's unveil the content of the returned arrays:\n\n``` print(xx) [[1 2 3 4] [1 2 3 4] [1 2 3 4] [1 2 3 4]] ```\n\nHere's `yy`:\n\n``` print(yy) [[5 5 5 5] [6 6 6 6] [7 7 7 7] [8 8 8 8]] ```\n\nUsing Matplotlib, we can graph them like so:\n\n``` plt.scatter(xx, yy)\nplt.show() ```\n\nThis gives us the following:",
"## Drawing a contour plot\n\nLet's now talk about the practical benefits of having these data-points. They come in handy when we draw contour plots of 3D functions.\n\nHere's an example:\n\n``` def f(x, y): return x**2 + y **2 ```\n\nWe can draw the contour like so:\n\n``` x = np.linspace(-5, 5, 100)y = np.linspace(-5, 5, 100)xx, yy = np.meshgrid(x, y)zz = f(xx, yy)plt.contour(xx, yy, zz)plt.show() ```\n\nThis gives us the following contour plot:",
https://convertoctopus.com/138-centimeters-to-decimeters | [
"## Conversion formula\n\nThe conversion factor from centimeters to decimeters is 0.1, which means that 1 centimeter is equal to 0.1 decimeters:\n\n1 cm = 0.1 dm\n\nTo convert 138 centimeters into decimeters we have to multiply 138 by the conversion factor in order to get the length amount from centimeters to decimeters. We can also form a simple proportion to calculate the result:\n\n1 cm → 0.1 dm\n\n138 cm → L(dm)\n\nSolve the above proportion to obtain the length L in decimeters:\n\nL(dm) = 138 cm × 0.1 dm\n\nL(dm) = 13.8 dm\n\nThe final result is:\n\n138 cm → 13.8 dm\n\nWe conclude that 138 centimeters is equivalent to 13.8 decimeters:\n\n138 centimeters = 13.8 decimeters\n\n## Alternative conversion\n\nWe can also convert by utilizing the inverse value of the conversion factor. In this case 1 decimeter is equal to 0.072463768115942 × 138 centimeters.\n\nAnother way is saying that 138 centimeters is equal to 1 ÷ 0.072463768115942 decimeters.\n\n## Approximate result\n\nFor practical purposes we can round our final result to an approximate numerical value. We can say that one hundred thirty-eight centimeters is approximately thirteen point eight decimeters:\n\n138 cm ≅ 13.8 dm\n\nAn alternative is also that one decimeter is approximately zero point zero seven two times one hundred thirty-eight centimeters.\n\n## Conversion table\n\n### centimeters to decimeters chart\n\nFor quick reference purposes, below is the conversion table you can use to convert from centimeters to decimeters\n\ncentimeters (cm) decimeters (dm)\n139 centimeters 13.9 decimeters\n140 centimeters 14 decimeters\n141 centimeters 14.1 decimeters\n142 centimeters 14.2 decimeters\n143 centimeters 14.3 decimeters\n144 centimeters 14.4 decimeters\n145 centimeters 14.5 decimeters\n146 centimeters 14.6 decimeters\n147 centimeters 14.7 decimeters\n148 centimeters 14.8 decimeters"
https://0-bmcmedresmethodol-biomedcentral-com.brum.beds.ac.uk/articles/10.1186/s12874-018-0547-1 | [
"# Heckman imputation models for binary or continuous MNAR outcomes and MAR predictors\n\n## Abstract\n\n### Background\n\nMultiple imputation by chained equations (MICE) requires specifying a suitable conditional imputation model for each incomplete variable and then iteratively imputes the missing values. In the presence of missing not at random (MNAR) outcomes, valid statistical inference often requires joint models for missing observations and their indicators of missingness. In this study, we derived an imputation model for missing binary data with MNAR mechanism from Heckman’s model using a one-step maximum likelihood estimator. We applied this approach to improve a previously developed approach for MNAR continuous outcomes using Heckman’s model and a two-step estimator. These models allow us to use a MICE process and can thus also handle missing at random (MAR) predictors in the same MICE process.\n\n### Methods\n\nWe simulated 1000 datasets of 500 cases. We generated the following missing data mechanisms on 30% of the outcomes: MAR mechanism, weak MNAR mechanism, and strong MNAR mechanism. We then resimulated the first three cases and added an additional 30% of MAR data on a predictor, resulting in 50% of complete cases. 
We evaluated and compared the performance of the developed approach to that of a complete case approach and classical Heckman’s model estimates.

### Results

With MNAR outcomes, only methods using Heckman’s model were unbiased, and with a MAR predictor, the developed imputation approach outperformed all the other approaches.

### Conclusions

In the presence of MAR predictors, we proposed a simple approach to address MNAR binary or continuous outcomes under a Heckman assumption in a MICE procedure.

## Background

In clinical epidemiology, missing data are generally classified as (i) missing completely at random (MCAR); (ii) missing at random (MAR) when, conditional on the observed data, the probability of data being missing does not depend on unobserved data; or (iii) missing not at random (MNAR) when, conditional on the observed data, the probability of data being missing still depends on unobserved data, i.e., neither MCAR nor MAR [1, 2]. Unfortunately, the MNAR, MAR and MCAR mechanisms are generally not testable unless the missing data mechanisms are explicitly modelled. Although methods for handling MCAR or MAR data in clinical epidemiology have been widely described and studied, methods adapted to MNAR mechanisms are less studied.

In the presence of MNAR missing outcomes, valid statistical inference implies describing the missing data mechanism [1, 3]. Hence, it often requires joint models for missing outcomes and their indicators of missingness. Two principal factorisations of these joint models have been proposed: pattern-mixture models and selection models [1, 57]. The first consists of using different distributions to model individuals with and without missing observations [8, 9]. The second directly models the relationship between the risk of a variable being missing and its unseen value. It involves defining an analysis model for the outcome and a selection model (i.e., the missing data mechanism).
It generally relies on a bivariate distribution to model the outcome and its missing binary indicator simultaneously. This approach, called the sample selection model, Tobit type-2 model or Heckman’s model, was first introduced by Heckman for continuous outcomes [12, 13]. For continuous outcomes, two approaches have been proposed to estimate the model parameters: a one-step process that directly estimates all parameters of the joint model using the maximum likelihood estimator, and a two-step process [12, 13]. The first step of the latter consists of estimating the parameters of the selection model. The second step consists of fitting the outcome model adjusted on a correction term named the “inverse Mills ratio” (IMR), which is obtained via the first step. The IMR corresponds to the mean of the conditional distribution of the outcome within the bivariate normal distribution, given that the outcome has been observed. This allows unbiased estimates of the parameters of the outcome model to be calculated.

For binary outcomes, sample selection methods rely on a different model. This model is not simply an adaptation of the continuous case; in particular, it is not the two-step estimator with a different outcome model such as a generalised linear model. In the setting of binary outcomes, the use of a bivariate probit model and a one-step maximum likelihood estimator is mandatory. Indeed, the use of Heckman’s model implies linking the outcome model and the selection model through their error terms. Some authors, by analogy with Heckman’s two-step estimator, proposed modelling binary outcomes using a probit model adjusted on the IMR. Although such approaches have been used, it has been specifically demonstrated that a two-step approach including the IMR in a probit model for binary outcomes is not valid [10, 16].
More generally, Heckman’s two-step estimator cannot be extended straightforwardly to generalised linear outcome models by plugging the IMR into the linear predictor. This stems from the fact that the outcome expectation in non-linear models subject to selection does not reduce to a simple correction term in the linear predictor.

If Heckman’s model handles MNAR missing binary outcomes well using a bivariate probit model, then in the presence of additional missing data on predictors, there is no process that can address all the missing data simultaneously. In this setting, missing data on predictors are typically treated using an unsatisfactory complete-predictors approach, i.e., cases with at least one missing predictor are removed from the analysis. In the presence of missing data on more than one variable (including the outcome), multiple imputation (MI) appears to be one of the most flexible and easiest methods to apply, due to the numerous types of variables handled and the extensive development of statistical packages dedicated to its implementation. Galimard et al. previously developed an approach based on a conditional imputation model for an MNAR mechanism using Heckman’s model and a two-step estimator to impute MNAR missing continuous outcomes. This approach allows imputing MAR missing covariates and MNAR missing outcomes within a multiple imputation by chained equations (MICE) procedure. MICE specifies a suitable conditional imputation model for each incomplete variable and iteratively imputes the missing values until convergence. The key concept of MI procedures is to use the distribution of the observed data to draw a set of plausible values for the missing data. Thus, imputing MNAR missing binary outcomes implies developing valid methods to obtain a valid distribution of the missing binary outcomes. As mentioned above, the direct extension of the work of Galimard et al.
on continuous outcomes cannot be considered because it involves a two-step estimator, which is not compatible with Heckman’s model for binary outcomes.

### Aims of this work

The first aim of this work is to propose an approach to handle MNAR binary outcomes. To our knowledge, the use of sample selection models as imputation models has never been proposed for missing binary outcomes, which is a common situation in clinical research. Thus, we propose developing an imputation method for binary outcomes based on a bivariate probit model associated with a one-step maximum likelihood estimator.

The second aim is to extend this approach to continuous outcomes, proposing a new solution to the issue raised by Galimard et al. Indeed, for continuous outcomes, one of the main drawbacks of Heckman’s two-step estimator is that the uncertainties of the first-step estimates are not taken into account in the second step: the IMR values are treated as known observed quantities in the second step, whereas they are estimated in the first step. Thus, the uncertainties around the final estimates are not fully assessed using a two-step estimator. This point could impact the quality of the imputation. This is the reason why we hypothesised that the use of a one-step estimator could also improve the performance of Heckman’s model as an imputation model for continuous outcomes. Therefore, we also proposed a new approach for continuous missing outcomes.

The final aim is to integrate the developed MNAR models into a MICE procedure. It will handle both MNAR outcomes and MAR predictors in the same process.

In what follows, we introduce the study that motivated this work. Then, the “Methods” section develops our proposed imputation model using one-step ML estimation for binary and continuous outcomes.
The “Results” section presents the evaluation of its performance using a simulation study and an illustrative example using data from our motivating example. Finally, a discussion and some conclusions are provided.

## Motivating example: the BIVIR study

The BIVIR study was a three-arm, parallel, randomised clinical trial that aimed to assess the efficacy of the Oseltamivir-Zanamivir combination relative to each monotherapy in patients with seasonal influenza. This study was conducted by 145 general practitioners throughout France during the 2008-2009 seasonal influenza epidemic and included 541 patients. Primary analyses of the trial showed that the Oseltamivir-Zanamivir combination is less effective than Oseltamivir monotherapy and not significantly more effective than Zanamivir monotherapy, based on the proportion of patients with nasal influenza reverse transcription (RT)-PCR below 200 copies genome equivalent (cgeq)/μl at day 2 after randomisation. We focused our work on evaluating the impact of the treatment group on adherence, adjusted on the first-day severity score of flu symptoms. Adherence was defined as completing the full treatment between day 1 and day 5 and was self-reported by the patient. Unfortunately, adherence was missing for 115 (21%) patients. It was reasonable to suspect that patients who decided to stop treatment might be more likely to not record data on their adherence, resulting in an MNAR mechanism. The severity score corresponding to flu symptoms was measured as a weighted sum (ranging from 0 to 78) of 13 symptom intensities. The score was missing for 114 (21%) patients, and a MAR mechanism was suspected.

## Methods

### Heckman’s model

Let Yi be a binary outcome and Xi be a p-vector of covariates for individual i=1,...,n.
Adopt the following probit regression model as the outcome model:

$$P(Y_{i}=1|X_{i})= \Phi(X_{i}\beta)$$
(1)

where Φ is the standard normal cumulative distribution function and β is a p-vector of fixed effects. Assuming an underlying MNAR mechanism for Y, introduce a selection model that represents the non-random sampling of the missingness process:

$$P\left(R_{yi}=1|X^{s}_{i}\right)=\Phi\left(X^{s}_{i}\beta^{s}\right)$$
(2)

where Ryi is an indicator of Yi missingness (equal to 1 if Yi is observed and 0 if Yi is missing), $X_{i}^{s}$ is a q-vector of observed covariates potentially associated with the missingness mechanism, and βs is an unknown q-vector of coefficients.

According to the bivariate probit model, define Y′ and Ry′ as two latent normally distributed variables associated with Y and Ry, respectively, such that for individual i, Yi=1 if Yi′>0 and Yi=0 otherwise, and Ryi=1 if Ryi′>0 and Ryi=0 otherwise. Heckman’s model considers that the two latent formulations of the selection and outcome models are linked through their error terms, which follow a bivariate normal distribution. The joint model of the outcome and selection models is defined as:

$$\begin{array}{ll} R_{yi}' &= X^{s}_{i}\beta^{s} + \varepsilon_{i}^{s} \\ Y'_{i} &= X_{i}\beta + \varepsilon_{i} \end{array}, \quad \text{with} \quad \left(\begin{array}{l} \varepsilon^{s} \\ \varepsilon \end{array}\right) \sim N\left(\left(\begin{array}{l} 0 \\ 0 \end{array}\right), \left(\begin{array}{cc} 1 & \rho \\ \rho & 1 \end{array}\right)\right)$$
(3)

where ρ corresponds to the correlation coefficient between the error terms of the selection model $(\varepsilon^{s}_{i})$ and outcome model (εi). When ρ equals 0, the selection and outcome models are independent, $E\left(R_{yi}'|Y_{i},X_{i},X_{i}^{s}\right)$ does not depend on Yi, and the mechanism is MAR.
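The role of ρ can be illustrated with a small simulation of the latent model above. This is a sketch under assumed values (Python/NumPy rather than the R packages used later in the paper; covariate variances of 0.5 and a selection intercept of 0.75 are borrowed from the simulation design described below):

```python
import numpy as np

def selection_gap(rho, n=200_000, seed=1):
    """Simulate the latent bivariate model and compare P(Y=1) among
    observed (R=1) and missing (R=0) cases."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(0, np.sqrt(0.5), n)   # outcome covariate
    x3 = rng.normal(0, np.sqrt(0.5), n)   # selection-only covariate
    e_s, e = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n).T
    y = (x1 + e > 0).astype(int)          # Y = 1 iff Y' > 0
    r = 0.75 + x3 + e_s > 0               # R = 1 iff R' > 0
    return y[r].mean(), y[~r].mean()

print(selection_gap(0.0))  # proportions roughly equal: MAR
print(selection_gap(0.6))  # observed cases over-represent Y=1: MNAR
```

When ρ=0 the two proportions agree up to Monte Carlo error; when ρ=0.6 the observed cases have a markedly higher proportion of Y=1, so the observed data alone misrepresent the distribution of the missing outcomes.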
When ρ is not equal to 0, $E\left(R_{yi}'|Y_{i},X_{i},X_{i}^{s}\right)$ depends on Yi, and the mechanism is MNAR. The larger ρ is, the stronger the MNAR mechanism is.

For a continuous outcome, Heckman’s model given in Eq. (3) is simplified: Yi, the non-latent outcome, is directly inserted in the joint model instead of Yi′. The joint model for continuous outcomes is presented below:

$$\begin{array}{ll} R_{yi}' &= X^{s}_{i}\beta^{s} + \varepsilon_{i}^{s} \\ Y_{i} &= X_{i}\beta + \varepsilon_{i} \end{array}, \quad \text{with} \quad \left(\begin{array}{l} \varepsilon^{s} \\ \varepsilon \end{array}\right) \sim N\left(\left(\begin{array}{l} 0 \\ 0 \end{array}\right), \left(\begin{array}{cc} 1 & \rho\sigma_{\varepsilon} \\ \rho\sigma_{\varepsilon} & \sigma_{\varepsilon} \end{array}\right)\right)$$
(4)

where σε is the variance of the error terms (εi).

### Model estimation

#### Maximum likelihood estimator

The parameters of Heckman’s model (β,βs,ρ) are directly obtained by maximising the following log-likelihood of the joint bivariate probit model [10, 15, 19]:

$$l = \sum_{\{i:R_{y}=0\}} \log \Phi\left(-X^{s}_{i}\beta^{s}\right) + \sum_{\{i:R_{y}=1,Y_{i}=1\}} \log\Phi_{2}\left(X_{i}\beta,X_{i}^{s}\beta^{s},\rho\right) + \sum_{\{i:R_{y}=1,Y_{i}=0\}} \log\Phi_{2}\left(-X_{i}\beta,X_{i}^{s}\beta^{s},-\rho\right)$$

where Φ2 corresponds to the binormal cumulative density function.

For a continuous outcome, the one-step estimator consists of estimating the parameters of the joint model (β,βs,ρ,σε) via the following log-likelihood:

$$l = \sum_{\{i:R_{y}=0\}} \log \Phi\left(-X^{s}_{i}\beta^{s}\right) + \sum_{\{i:R_{y}=1\}} \left[ \log \Phi\left(\frac{X^{s}_{i}\beta^{s}+\frac{\rho}{\sigma_{\varepsilon}}(Y_{i}-X_{i}\beta)}{\sqrt{1-\rho^{2}}}\right) - \frac{1}{2}\log 2\pi - \log\sigma_{\varepsilon} - \frac{1}{2}\frac{(Y_{i}-X_{i}\beta)^{2}}{\sigma_{\varepsilon}^{2}} \right]$$

#### Two-step estimator

For a continuous outcome, Heckman proposed a two-step approach to estimate the parameters of the joint model given in Eq. (4). His development comes from the expression of the following conditional expectation of the outcome:

$$E(Y_{i}|X_{i},X^{s}_{i},R_{yi}=1)=X_{i}\beta +\rho\sigma_{\varepsilon}\lambda_{i}$$
(5)

where $\lambda_{i}=\phi\left(X^{s}_{i}\beta^{s}\right)/\Phi\left(X^{s}_{i}\beta^{s}\right)$ is called the “inverse Mills ratio” (IMR); ϕ corresponds to the probability density function of the normal distribution. As the IMR of each individual corresponds to an error term resulting from the probit selection model, Heckman proposed the following two-step procedure:

1. Estimate the selection model parameters $\widehat{\beta^{s}}$ by maximum likelihood
2. For each observed i, compute $\widehat{\lambda_{i}}$ using $\widehat{\beta^{s}}$
3. Estimate $\widehat{\beta}$ from Eq. (5)

#### Exclusion-restriction rule

In practice, Heckman’s model must avoid collinearity between the two linear predictors of the outcome model and the selection model. Indeed, if the variables included in the selection and outcome models are exactly the same, then E[Yi|Xi,Ryi=1]=Xiβ+ρσελi is only identified through the IMR (λ), producing collinearity issues and possibly erroneous estimation. To avoid this concern, it has been recommended to include at least one supplementary variable in the selection equation [14, 22, 23].
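As an illustration, the two-step procedure can be sketched with NumPy/SciPy (the paper's own implementations use R packages; this is a sketch on simulated data following the paper's design, with X3 entering only the selection equation, which also respects the exclusion-restriction rule just discussed):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 20_000

# Simulated design: X1, X2, X3 ~ N(0, 0.5), correlated errors, rho = 0.6.
X = rng.normal(0, np.sqrt(0.5), size=(n, 3))
e_s, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T
y = X[:, 0] + X[:, 1] + e                               # outcome model
r = 0.75 + X[:, 0] - 0.5 * X[:, 1] + X[:, 2] + e_s > 0  # selection model

# Step 1: probit maximum likelihood for the selection model.
Xs = np.column_stack([np.ones(n), X])
def nll(b):
    xb = Xs @ b
    return -(norm.logcdf(xb)[r].sum() + norm.logcdf(-xb)[~r].sum())
b_s = minimize(nll, np.zeros(4), method="BFGS").x

# Step 2: inverse Mills ratio, lambda_i = phi(Xs b_s) / Phi(Xs b_s).
imr = norm.pdf(Xs @ b_s) / norm.cdf(Xs @ b_s)

# Step 3: OLS of Y on X1, X2 and the IMR, observed cases only; the
# IMR coefficient estimates rho * sigma_eps.
Z = np.column_stack([np.ones(r.sum()), X[r, :2], imr[r]])
beta = np.linalg.lstsq(Z, y[r], rcond=None)[0]
print(beta)  # approx. [0, 1, 1, 0.6]
```

Because X3 shifts selection but not the outcome, the IMR is not collinear with (1, X1, X2), and the OLS step recovers β together with ρσε.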
Ideally, this supplementary variable should be linked to the indicator of missingness and linked to the outcome.

### Imputation model using Heckman’s model

Under the MAR mechanism, imputation approaches use the conditional distribution of observed Y given the other covariates to impute the missing Y. However, in Heckman’s model, the conditional expectations of the observed and missing Y are different. For a binary outcome (, p. 921):

$$P\left(Y_{i}=1|X_{i},X_{i}^{s},R_{yi}=0\right)=\frac{\Phi_{2}\left(X_{i}\beta,-X_{i}^{s}\beta^{s},-\rho\right)}{\Phi\left(-X_{i}^{s}\beta^{s}\right)}$$
(6)

We propose using Eq. (6) to define the imputation model for binary outcomes.

#### Imputation algorithm

For a binary outcome, consider Heckman’s model parameters θ=(β,βs,ρ). The imputation algorithm consists of the following steps:

1. Use the one-step estimator to obtain Heckman’s model parameter estimates $\widehat{\theta}$ and $\widehat{\Psi}$, where $\widehat{\Psi}$ is the variance-covariance matrix of $\widehat{\theta}$
2. Draw θ* from $N\left(\widehat{\theta},\widehat{\Psi}\right)$
3. Draw $Y^{*}_{i}$ from a Bernoulli distribution with parameter $p^{*}_{i}$ given by:
$$p^{*}_{i}=\frac{\Phi_{2}\left(X_{i}\beta^{*},-X^{s}_{i}\beta^{s*},-\rho^{*}\right)}{\Phi\left(-X^{s}_{i}\beta^{s*}\right)}$$

For a continuous outcome, Eq. (6) becomes (, p. 913):

$$E\left(Y_{i}|X_{i},X_{i}^{s},R_{yi}=0\right)= X_{i}\beta+\rho\sigma_{\varepsilon}\frac{-\phi\left(X^{s}_{i}\beta^{s}\right)}{\Phi\left(-X^{s}_{i}\beta^{s}\right)}$$
(7)

With model parameters θ=(β,βs,σε,ρ), the third step of the imputation algorithm becomes:
3. Draw $Y^{*}_{i}$ from:
$$Y^{*}_{i}=X_{i}\beta^{*} + \rho^{*}\sigma_{\varepsilon}^{*}\frac{-\phi\left(X^{s}_{i}\beta^{s*}\right)}{\Phi\left(-X^{s}_{i}\beta^{s*}\right)} + \varepsilon^{*}, \quad \text{with} \quad \varepsilon^{*} \sim N\left(0,\sigma_{\varepsilon}^{*2}\right)$$

### Multiple imputation by chained equations using Heckman’s imputation model

The final aim of this work is to provide a global framework to impute MNAR outcomes and MAR predictors through a MICE procedure. This procedure requires specifying conditional imputation models for each variable with missing data. The global procedure starts with an initial fill of all missing data using random draws from observed values. The posterior predictive distribution of the first incomplete variable is obtained using all observed values. Then, for a given observation with a missing value of the first variable, imputations are generated given all the other variables. The following variables with missing values are similarly and repeatedly imputed in an iterated sequence. The key point of chained equations is that consecutive iterations use the imputed values of the previous one. Missing values are thus iteratively imputed until convergence (at least 10 cycles). The theoretical properties of MICE are not well understood: except in simple cases, the conditional imputation models do not correspond to any joint model [25, 26]. However, it performs well in practice [27, 28]. This procedure is run in parallel to obtain several imputed datasets. Analyses and Rubin’s rules are then applied to obtain the final estimates of the parameters of interest.

We propose using Heckman’s imputation model for MNAR outcomes and standard imputation regression models for missing predictors, such as linear models for continuous covariates and logistic models for binary covariates. In this framework, Galimard et al.
proved that the missing data indicator of MNAR outcomes should be included in the imputation models of all other variables. The MICE algorithm involves defining conditional imputation models. In our case, the definition of such imputation models depends on the type of missing data mechanism:

• Heckman’s imputation model for the MNAR outcome, specifying outcome and selection models

• General linear imputation models for MAR predictors, as described by van Buuren et al., adding Ry and the outcome to the other variables in the linear predictors

## Simulation study

### Data-generating process

We generated three normally independent and identically distributed variables, X1, X2 and X3, with Xj ∼ N(0,σ²). Two error terms, ε and εs, were generated from a bivariate normal distribution according to the model given in Eq. (3), with ρ fixed at 0, 0.3 and 0.6 to simulate MAR, light MNAR and heavy MNAR settings.

For binary outcomes, Y was generated as follows: if β0+β1X1+β2X2+ε>0, then Y=1; otherwise, Y=0. The missing indicator Ry of Y was generated according to the following algorithm: if $\beta_{0}^{s}+\beta_{1}^{s} X_{1}+\beta_{2}^{s} X_{2}+\beta_{3}^{s} X_{3}+\varepsilon^{s}>0$, then Ry=1; otherwise, Ry=0.

For continuous outcomes, Y was generated according to Y=β0+β1X1+β2X2+ε. Note that in that case, and according to the model given in Eq. (4), σε=1.

We fixed σ² to 0.5 and (β0,β1,β2) to (0,1,1). $\left(\beta_{0}^{s},\beta_{1}^{s},\beta_{2}^{s},\beta_{3}^{s}\right)$ were fixed to (0.75,1,-0.5,1), which resulted in approximately 30% missing data for the outcome.

To evaluate the robustness of our approach, we also generated a non-Heckman MNAR mechanism by directly including Y in the following selection equation: $P(R_{y}=1)= logit\left(\beta_{0}^{sl}+X_{1}-0.5 \times X_{2}+X_{3}+\beta_{Y}^{sl} Y\right)$. Two sets of parameters were considered.
To obtain approximately 30% missing data on Y, we fixed $\beta_{0}^{sl}$ to 0.60 and 0.20 for binary outcomes and to 1.31 and 1.86 for continuous outcomes, with $\beta_{Y}^{sl}$ equal to 0, 1 and 2.

We first simulated scenarios with only missing outcomes to validate our approach in a simple setting. Then, to evaluate the performance of the MICE process, we generated missing data on X2 using two MAR mechanisms depending on either (X1,Y) or (X1,X3). Thus, R2, the indicator of X2 missingness, was defined by either:

• P(R2=1|X1,X3)=Φ(0.25+X1+X3)

• $P(R_{2}=1|X_{1},Y)=\Phi\left(\beta_{0}^{R_{2}}+X_{1}+Y\right)$

$\beta_{0}^{R_{2}}$ was fixed to 1.10 and 0.25 for binary and continuous outcomes, respectively. We obtained approximately 30% missing data for X2.

A total of N=1000 independent datasets of size 500 were generated for each setting. The sample size was chosen to be similar to that of our motivating example.

### Analysis methods

The analysis models were probit models and linear models for binary and continuous outcomes, respectively, including X1 and X2 as predictors. The simulated data were first analysed prior to data deletion as a benchmark. The incomplete data were then analysed using the following methods:

• Complete case analysis (CCA).

• Heckman’s model (HEml), consisting of one-step ML estimation, as described in the “Methods” section for binary and continuous outcomes.

• Multiple imputation using Heckman’s one-step ML estimation (MIHEml), as described in the “Methods” section.

For continuous outcomes exclusively, two-step approaches were also performed:

• Heckman’s two-step estimation (HE2steps), consisting of Heckman’s two-step estimator for continuous outcomes, as described in the “Methods” section.

• Multiple imputation using Heckman’s two-step model estimation (MIHE2steps) for continuous outcomes, as described in Galimard et al.
For HEml, MIHEml, HE2steps, and MIHE2steps, the selection equation included X1, X2 and X3. For MIHEml and MIHE2steps, the incomplete data were imputed m=50 times, and final estimates were obtained by applying Rubin’s rules for small samples.

For scenarios with missing X2: (1) for the HEml and HE2steps approaches, observations with missing X2 were deleted from the analyses, as previously described for the complete-predictors approach; (2) for MIHEml and MIHE2steps, a MICE procedure was applied. X2 was imputed using a linear regression model and an approximate proper imputation algorithm. As recommended, we included Ry and Y in its imputation model [2, 18]. Twenty iterations of the chained equation process were applied.

In each data-generating scenario, the performance of each method was assessed by computing the percent relative bias (%Rbias), the root mean square of the estimated standard error (SEcal), the empirical Monte Carlo standard error (SEemp), the root mean square error (RMSE) and the percent coverage of nominal 95% confidence intervals (Cover) of β1 and β2.

### Computational settings

Simulations and analyses were performed using R statistical software, version 3.3.0. We computed the imputation procedure within the mice R package version 2.25. Heckman’s one-step model estimator was supplied by the functions semiParBIV() and copulaSampleSel() of the GJRM R package version 0.1-1, for binary and continuous cases respectively [19, 32]. Our code is available in the supplementary materials (S1 for binary outcomes and S2 for continuous outcomes). Heckman’s two-step model estimator was performed using the function heckit() of package sampleSelection version 1.0-4.

## Results

In this section, only the results of the β1 estimations are presented. β2 estimations are presented in Additional file 1.

### Only missing data on outcome Y

Table 1 (Fig.
1) presents the results of the simulation study based on a scenario with missing binary outcome Y and complete predictors X. When Y is missing due to a MAR mechanism (ρ=0), all methods provide unbiased estimates of β1 (relative biases less than 2%). The standard errors of the approaches using Heckman’s model are greater than those of CCA. Nevertheless, all coverages are close to their nominal values. In the presence of an MNAR mechanism, CCA is biased by 6.1% with ρ=0.3 and by 11.9% with ρ=0.6. HEml and MIHEml are unbiased. The results for β2 are similar (Additional file 1: Table S8).

The results of the simulations that considered missing data on a continuous outcome are presented in Table 2 (Fig. 2). Compared to a binary outcome, similar results are observed. HEml, HE2steps, MIHEml and MIHE2steps presented similar results concerning biases; nevertheless, the standard errors obtained with HEml and MIHEml with ρ≠0 are smaller than those observed with HE2steps and MIHE2steps, while the confidence intervals remain near 95%. The results for β2 are similar (Additional file 1: Table S9).

The results of the simulations with data created using a logit selection model including Y as a covariate (i.e., a non-Heckman MNAR mechanism) are presented in Table 3 (Fig. 1) for binary outcomes and in Table 4 (Fig. 2) for continuous outcomes. CCA is not biased for $\beta^{sl}_{Y}=0$ and is biased for $\beta^{sl}_{Y} \neq 0$. The biases increase with the effect of Y. For MNAR binary outcomes, HEml and MIHEml are biased from 2.5 to 4.2% but are less biased than CCA.
For continuous outcomes, HEml, HE2steps, MIHEml and MIHE2steps are slightly biased for $$\beta ^{sl}_{Y}\neq 0$$, and lower standard errors are obtained using HEml and MIHEml, although their biases appear to be very slightly greater.

### Missing data on outcome Y and covariate X2

The results of the simulations that considered missing data on a binary outcome Y and on X2 depending on X1 and X3 are presented in Table 5 (Fig. 1). Approximately 50% of the cases were analysed with CCA, 70% with HEml, and the entire dataset with MIHEml. Under a MAR mechanism for the missing outcome (ρ=0), the biases for CCA, HEml and MIHEml range from 1.0 to 2.1%. The smallest standard error is obtained using CCA. If the missing mechanism is MNAR, then CCA is biased from 3.8 to 8.4%, whereas the biases of HEml and MIHEml remain less than 2.5%. MIHEml provides lower standard errors than HEml, notably because HEml uses only approximately 70% of the observations. The results for β2 are similar (Additional file 1: Table S10).

The results of the simulations that considered missing data on the binary outcome Y and on X2 depending on X1 and Y are presented in Table 5 (Fig. 1). Regardless of ρ, CCA and HEml are biased from 20% to more than 33%, whereas MIHEml is almost unbiased (relative bias of less than 2.5%). The results for β2 are similar, except that HEml is unbiased (Additional file 1: Table S10).

The results of the simulation studies with missing continuous outcomes Y and missing X2 depending on X1 and X3 are presented for β1 in Table 6 (Fig. 2). When ρ=0, all methods are unbiased (relative biases of less than 1%). The smallest standard error is obtained with CCA. When ρ≠0, CCA is biased from 6.3 to 13.3%. The other methods are almost unbiased (relative biases of less than 2.2%).
The results for β2 are similar (Additional file 1: Table S11).

The results of the simulations with missing continuous outcomes Y and missingness of X2 depending on X1 and Y are presented for β1 in Table 6 (Fig. 2). Regardless of ρ, CCA, HEml and HE2steps are biased from 27.7% to more than 37.7%. CCA presents the smallest standard error. Regardless of ρ, MIHEml and MIHE2steps are unbiased (relative biases of less than 2%). The standard errors observed for MIHEml are smaller than those observed for MIHE2steps, while the coverage remains close to 95%. The results for β2 are similar (Additional file 1: Table S11). However, when ρ=0.6, MIHEml and MIHE2steps are slightly biased for β2 (3.4% and 4.5%, respectively).

Similar results are observed when the sample size is decreased to 200, although the biases and standard errors slightly increase (Additional file 1: Tables S12, S13, S14 and S15).

## Application to illustrative examples

The impact of treatment group on adherence was assessed using a probit model adjusted on the severity score. Adherence was missing for 115 (21%) patients; there were 51 non-adherent and 375 adherent patients. The missing data mechanism of adherence was strongly suspected to be MNAR. The severity score was missing for 114 (21%) patients, and its missing data mechanism was suspected to be MAR. Four methods were applied: CCA, HEml, MIHEml and MI. The standard MI approach was added using a MICE procedure with a linear imputation model for the severity score and a probit imputation model for adherence; the aim of this latter model was to assess the performance of a misspecified but widely used approach. The missing data mechanisms assumed by each method are presented in Table 7. The HEml and MIHEml selection equations for adherence included treatment group, severity score and antibiotic treatment. The latter binary variable was chosen to fulfill the exclusion-restriction criterion.
The MAR variables were imputed using linear and probit regression models for continuous and binary variables, respectively. Using MIHEml, the indicator of adherence missingness was included in the severity score imputation model. The MICE procedure was applied for 20 iterations, and m=100 datasets were generated. Finally, Rubin's rules for small samples were applied.

The results are presented in Table 7. The reference group for treatment is the combination group. The severity score coefficient corresponds to an increase of 20 units. CCA includes only 359 cases, i.e., 66% of the entire dataset. Observations with missing predictors are ignored in the HEml analyses, i.e., only 427 (79%) cases are retained. MI and MIHEml consider all observations. As expected, MI and MIHEml have lower standard errors than CCA and HEml. The coefficients estimated for Oseltamivir-Placebo with MI and MIHEml are similar and higher than those obtained with CCA or HEml. The effect of Oseltamivir-Placebo reached significance with MIHEml, thus enabling the assessment of the impact of Oseltamivir-Placebo on adherence. The estimated coefficients of Zanamivir-Placebo and the severity score are similar for CCA and MI, slightly higher for HEml and higher for MIHEml. Not surprisingly, the proportions of imputed values corresponding to the non-adherent outcome were 13% and 47% for MI and MIHEml, respectively, indicating that missing values on self-reported adherence are more likely to correspond to non-adherent patients.

We also challenged the MAR assumption concerning the missing mechanism associated with the severity score. Thus, we performed a new MICE procedure encoding two Heckman's imputation models, for adherence and for the severity score; this involves defining selection and outcome models for the severity score. The results for the effects were similar: 0.376 (0.186) and 0.096 (0.179) for Oseltamivir-Placebo and Zanamivir-Placebo, respectively.
These results suggest a weak impact of the MNAR mechanism for the severity score.

## Discussion

The first aim of this work was to propose a unique approach to address binary outcomes with an MNAR mechanism and missing predictors with a MAR mechanism. According to our simulation results, for MNAR outcomes, only MIHEml and HEml were unbiased. Our simulation studies were generated using a real Heckman's model; in addition, we generated MNAR outcomes using a logistic selection model directly including Y as a predictor, i.e., an MNAR mechanism that is not compatible with Heckman's model. Although the results remain biased in that case, the use of MIHEml reduced the biases compared to CCA. Because it is not possible to confirm the validity of Heckman's model from the observed data alone [17, 33], the developed approach appears to at least reduce the biases under an MNAR mechanism when Heckman's hypotheses do not hold.

To thoroughly evaluate our approach in a MICE procedure, we simulated missing data on predictors following two scenarios: one where the MAR mechanism for X2 depended on the fully observed X1 and X3, and one where the mechanism depended on X1 and Y. For these two scenarios, Heckman's model (HEml) used only cases with complete predictors to estimate the model parameters, i.e., it did not use all the available information. This loss of information produced larger standard errors, particularly for β1 and only slightly for β2. This result is not surprising because the information lost by ignoring patients with missing X2 primarily affected X1. In terms of bias, the first scenario presented results similar to those obtained without missing X2 data. In the second scenario, where the missing mechanism for X2 also depended on Y, MIHEml outperformed all the other methods. The second aim was to validate the proposition of Galimard et al. using a one-step ML estimator for continuous outcomes.
Our simulations showed that MIHEml performs slightly better than MIHE2steps in terms of standard errors for MNAR missing outcomes.

Although our method performs well in the presence of a MAR mechanism, i.e., when ρ=0, it is preferable to determine whether the missing data mechanism is more likely to be MNAR or MAR, to avoid modelling a selection equation unnecessarily; indeed, the standard errors are greater than those of the standard approaches for ρ=0. Unfortunately, it is not possible to distinguish between MAR and MNAR from the observed data alone [17, 33]. Hence, sensitivity analyses are often performed to evaluate departures from MAR. Some authors have proposed a pattern mixture model using δ adjustment, i.e., systematically adding a certain increment δ to the linear predictors of the imputed values. Despite its simplicity, van Buuren considered this method to be a powerful approach for evaluating the MAR mechanism by varying δ [2, 8, 17]. This method identifies two patterns: one for the observed data and one for the unobserved data. Missing values are imputed conditionally on the observed data with an additional shift parameter δ, which is the magnitude of the departure from MAR. Then, the model for the observed data is different from the model for the missing data. Similarly, MIHEml can be viewed as a method that applies a shift term, or a correction term for the selection bias, in the imputation model specific to each observation i. Precisely, as $$E(Y_{i}|R_{yi}=0)=X_{i}\beta +\rho \sigma _{\varepsilon }\left (-\phi \left (X^{s}_{i}\beta ^{s}\right)\right)/\Phi \left (-X^{s}_{i}\beta ^{s}\right)$$, MIHEml uses a selection correction term that can be considered as an individual δi for each patient (adjusted on the parameters of the selection equation). In this sense, we obtain a more precise δ-adjustment approach.

The construction of the selection model follows strict rules [14, 23].
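As an aside, the individual correction term δi discussed above is easy to evaluate numerically. The following Python sketch (ours, with illustrative parameter values; it is not the paper's code, which is in R) computes the Heckman shift ρσε·(−φ(Xsβs))/Φ(−Xsβs) for a single observation:

```python
import math

def _phi(z: float) -> float:
    # Standard normal density.
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def _Phi(z: float) -> float:
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def heckman_shift(xs_beta_s: float, rho: float, sigma_eps: float) -> float:
    """Individual correction term delta_i added to X_i*beta when imputing a
    non-observed outcome: rho * sigma_eps * (-phi(Xs*bs)) / Phi(-Xs*bs)."""
    return rho * sigma_eps * (-_phi(xs_beta_s)) / _Phi(-xs_beta_s)

# Under MAR (rho = 0) the shift vanishes; under MNAR it depends on the
# observation's own selection linear predictor Xs*bs.
print(heckman_shift(0.5, 0.0, 1.0))  # 0.0
print(heckman_shift(0.5, 0.6, 1.0))  # a negative shift
```

This makes explicit why each patient receives a different δi: the shift is a function of that patient's selection linear predictor, not a single constant as in the classical δ-adjustment approach.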
In our experience, the exclusion-restriction criterion should be strictly respected. Indeed, Heckman's model can inflate standard errors due to the collinearity between the regressors and the IMR, and this problem is exacerbated when the exclusion-restriction criterion does not hold. Moreover, MICE (or full conditional specification) follows certain rules. Each variable with missing data requires a specific conditional imputation model that is generally defined by a link function and a linear predictor with its set of predictors. Theoretically, imputation models should be derived from the global joint distribution of the variables, including the outcome [2, 35], and misspecification may result in biased parameter estimates. Despite recent work in simple cases, the theoretical properties of MICE are not fully understood [25, 26, 28, 37]. Nevertheless, it performs well in practice, particularly when the conditional imputation models are well accommodated to the substantive model. The efficiency of the MICE approach is generally validated by simulation studies, and the results appear robust even when the compatibility between the full conditional distribution and the global joint distribution is not proven. Although a simulation study can never be fully comprehensive, our simulations suggest that multiple imputation using Heckman's model, and its use in a MICE process, are valid and could be useful when the MNAR mechanism on the outcome is compatible with Heckman's model. To avoid the bivariate normality assumption of Heckman's model, Marchenko and Genton proposed a Heckman's model with a bivariate Student distribution for the error terms. Ogundimu and Collins developed an imputation model using this selection-t model; unfortunately, their imputation model is only available for continuous outcomes. We compare the proposition in the current paper for continuous outcomes to the propositions of Ogundimu and Collins and of Galimard et al. in Additional file 2.
Not surprisingly, the results were similar; indeed, the t-distribution is very close to a normal distribution for high degrees of freedom. In this paper, we focused on frequentist sample selection approaches within a MICE procedure. Nevertheless, the Bayesian posterior distribution of sample selection models can be obtained using Gibbs sampling and data augmentation [40, 41]. Such a fully Bayesian framework could improve the imputation for small samples; this could be evaluated in further research.

Finally, our simulation study does not explore MNAR mechanisms on both covariates and outcomes. Such a situation requires specifying a Heckman's imputation model (i.e., selection and outcome models) for each MNAR variable. Nevertheless, we used this type of approach in our example analysis to evaluate the departure from MAR for the missing predictors.

## Conclusion

In the presence of MAR predictors, we proposed a simple approach to address MNAR binary or continuous missing outcomes under a Heckman assumption in a MICE procedure. This approach can be used either directly to handle such a framework (MNAR outcomes and MAR predictors) or to challenge the robustness of a suspected MAR mechanism for missing outcomes, as in a sensitivity analysis.
Finally, an R package named "miceMNAR", dedicated to the proposed approaches, has been implemented and is available on CRAN (https://cran.r-project.org/package=miceMNAR).

## Abbreviations

CCA: Complete case analysis
Cover: Coverage of nominal 95% confidence intervals
HEml: Heckman's one-step ML estimation
HE2steps: Heckman's two-step estimation
IMR: Inverse Mills ratio
MAR: Missing at random
MCAR: Missing completely at random
MNAR: Missing not at random
MI: Multiple imputation
MICE: Multiple imputation by chained equations
MIHEml: Multiple imputation using Heckman's one-step ML estimation
MIHE2steps: Multiple imputation using Heckman's two-step estimation
Rbias: Relative bias
RMSE: Root mean square error
SEcal: Root mean square of estimated standard errors
SEemp: Empirical Monte Carlo standard errors

## References

1. Little RJ, Rubin DB. Statistical Analysis with Missing Data. New York: Wiley; 2002.
2. van Buuren S. Flexible Imputation of Missing Data. Boca Raton: CRC Press; 2012.
3. Thijs H, Molenberghs G, Michiels B, Verbeke G, Curran D. Strategies to fit pattern-mixture models. Biostatistics. 2002; 3(2):245–65.
4. Fitzmaurice GM, Kenward MG, Molenberghs G, Verbeke G, Tsiatis AA. Missing data: Introduction and statistical preliminaries. In: Handbook of Missing Data Methodology. Boca Raton: Chapman and Hall/CRC Press; 2014. p. 3–22.
5. Little RJ. Pattern-mixture models for multivariate incomplete data. J Am Stat Assoc. 1993; 88(421):125–34.
6. Rubin DB. Formalizing subjective notions about the effect of nonrespondents in sample surveys. J Am Stat Assoc. 1977; 72(359):538–43.
7. Glynn RJ, Laird NM, Rubin DB. Selection modeling versus mixture modeling with nonignorable nonresponse. In: Drawing Inferences from Self-selected Samples. New York: Springer; 1986. p. 115–42.
8.
van Buuren S, Boshuizen HC, Knook DL. Multiple imputation of missing blood pressure covariates in survival analysis. Stat Med. 1999; 18(6):681–94.
9. Ratitch B, O'Kelly M, Tosiello R. Missing data in clinical trials: From clinical assumptions to statistical analysis using pattern mixture models. Pharm Stat. 2013; 12(6):337–47.
10. Greene WH. Econometric Analysis: International Edition (7th ed.). Edinburgh: Pearson; 2011.
11. Amemiya T. Tobit models: A survey. J Econom. 1984; 24(1):3–61.
12. Heckman JJ. The common structure of statistical models of truncation, sample selection and limited dependent variables and a simple estimator for such models. Ann Econ Soc Meas. 1976; 5(4):475–92.
13. Heckman JJ. Sample selection bias as a specification error. Econometrica. 1979; 47(1):153–61.
14. Toomet O, Henningsen A. Sample selection models in R: Package sampleSelection. J Stat Softw. 2008; 27(7):1–23.
15. Van de Ven WPMM, Van Praag BMS. The demand for deductibles in private health insurance: A probit model with sample selection. J Econom. 1981; 17(2):229–52.
16. Greene W. A stochastic frontier model with correction for sample selection. J Prod Anal. 2010; 34(1):15–24.
17. White IR, Royston P, Wood AM. Multiple imputation using chained equations: Issues and guidance for practice. Stat Med. 2011; 30(4):377–99.
18. Galimard J-E, Chevret S, Protopopescu C, Resche-Rigon M. A multiple imputation approach for MNAR mechanisms compatible with Heckman's model. Stat Med. 2016; 35(17):2907–20.
19. Marra G, Radice R. A penalized likelihood estimation approach to semiparametric sample selection binary response modeling. Electron J Stat. 2013; 7:1432–55.
20. Duval X, van der Werf S, Blanchon T, Mosnier A, Bouscambert-Duchamp M, Tibi A, Enouf V, Charlois-Ou C, Vincent C, Andreoletti L, Tubach F, Lina B, Mentré F, Leport C, and the Bivir Study Group.
Efficacy of oseltamivir-zanamivir combination compared to each monotherapy for seasonal influenza: A randomized placebo-controlled trial. PLoS Med. 2010; 7(11):1000362.
21. Treanor JJ, Hayden FG, Vrooman PS, Barbarash R, Bettis R, Riff D, Singh S, Kinnersley N, Ward P, Mills RG, et al. Efficacy and safety of the oral neuraminidase inhibitor oseltamivir in treating acute influenza: a randomized controlled trial. JAMA. 2000; 283(8):1016–24.
22. Vella F. Estimating models with sample selection bias: A survey. J Hum Resour. 1998; 33(1):127–69.
23. Puhani P. The Heckman correction for sample selection and its critique. J Econ Surveys. 2000; 14(1):53–68.
24. Marra G, Radice R, Bärnighausen T, Wood SN, McGovern ME. A simultaneous equation approach to estimating HIV prevalence with nonignorable missing responses. J Am Stat Assoc. 2017; 112(518):484–96.
25. Chen HY. Compatibility of conditionally specified models. Stat Probab Lett. 2010; 80(7):670–7.
26. Hughes RA, White IR, Seaman SR, Carpenter JR, Tilling K, Sterne JA. Joint modelling rationale for chained equations. BMC Med Res Methodol. 2014; 14(1):28.
27. van Buuren S. Multiple imputation of discrete and continuous data by fully conditional specification. Stat Meth Med Res. 2007; 16(3):219–42.
28. van Buuren S, Brand JP, Groothuis-Oudshoorn C, Rubin DB. Fully conditional specification in multivariate imputation. J Stat Comput Simul. 2006; 76(12):1049–64.
29. Rubin DB. Multiple Imputation for Nonresponse in Surveys. New York: Wiley; 1987.
30. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2016. http://www.r-project.org/.
31. van Buuren S, Groothuis-Oudshoorn K. mice: Multivariate imputation by chained equations in R. J Stat Softw. 2011; 45(3):1–67.
32. Marra G, Radice R.
Estimation of a regression spline sample selection model. Comput Stat Data Anal. 2013; 61:158–73.
33. Kaambwa B, Bryan S, Billingham L. Do the methods used to analyse missing data really matter? An examination of data from an observational study of intermediate care patients. BMC Res Notes. 2012; 5(1):330.
34. Bushway S, Johnson BD, Slocum LA. Is the magic still there? The use of the Heckman two-step correction for selection bias in criminology. J Quant Criminol. 2007; 23(2):151–78.
35. Gilks WR, Richardson S, Spiegelhalter DJ. Introducing Markov chain Monte Carlo. In: Markov Chain Monte Carlo in Practice. Boca Raton: CRC Press; 1996. p. 75–88.
36. Meng X-L. Multiple-imputation inferences with uncongenial sources of input. Stat Sci. 1994; 9:538–58.
37. Liu J, Gelman A, Hill J, Su Y-S, Kropko J. On the stationary distribution of iterative imputations. Biometrika. 2014; 101(1):155–73.
38. Marchenko YV, Genton MG. A Heckman selection-t model. J Am Stat Assoc. 2012; 107(497):304–17.
39. Ogundimu EO, Collins GS. A robust imputation method for missing responses and covariates in sample selection models. Stat Meth Med Res. 2017; 0(0). https://0-doi-org.brum.beds.ac.uk/10.1177/0962280217715663.
40. Kai L. Bayesian inference in a simultaneous equation model with limited dependent variables. J Econom. 1998; 85(2):387–400.
41. van Hasselt M. Bayesian inference in a sample selection model. J Econom. 2011; 165(2):221–32.

## Acknowledgements

We thank the scientific committee of the BIVIR study group for the permission to use their data (see Additional file 3).

### Funding

MRR, SC and JEG are funded by Paris Diderot University, Paris, France. EC is granted by Paris Descartes University, Paris, France. MRR, SC and EC are funded by AP-HP (Assistance publique - Hôpitaux de Paris), Paris, France.
The funding sources had no role in the study design, data collection, data analysis, data interpretation, or manuscript writing.

### Availability of data and materials

The R codes for imputation models using Heckman's model are available in additional files (Additional file 4 for binary outcomes and Additional file 5 for continuous outcomes) and can easily be used with the MICE package. The R code corresponding to the data-generating process can be obtained on request to Jacques-Emmanuel Galimard (jacques-emmanuel.galimard@inserm.fr).

The real dataset supporting the findings (BIVIR Study) can be obtained on request to the scientific committee of the BIVIR study group by contacting Professor Catherine Leport (catherine.leport@bch.aphp.fr).

## Author information

JEG, SC, EC and MRR contributed to the design of the paper and the writing and revision of the manuscript. JEG performed the simulations and prepared and analysed the data. All authors read and approved the final manuscript.

Correspondence to Jacques-Emmanuel Galimard.

## Ethics declarations

### Ethics approval and consent to participate

All the data have already been published in: "Efficacy of oseltamivir-zanamivir combination compared to each monotherapy for seasonal influenza: a randomized placebo-controlled trial." (http://0-dx.doi.org.brum.beds.ac.uk/10.1371/journal.pmed.1000362). This study was approved on July 18, 2008 by the Ethics Committee of Ile de France 1 ("CPP Ile de France 1") and the French drug administration (AFSSAPS). We used already analysed data and a pre-specified secondary outcome on compliance to antiviral treatment (Trial registration: http://www.clinicaltrials.gov, NCT00799760).

### Consent for publication

All the data have already been published in: "Efficacy of oseltamivir-zanamivir combination compared to each monotherapy for seasonal influenza: a randomized placebo-controlled trial." (http://0-dx.doi.org.brum.beds.ac.uk/10.1371/journal.pmed.1000362).
This study was approved on July 18, 2008 by the Ethics Committee of Ile de France 1 ("CPP Ile de France 1") and the French drug administration (AFSSAPS). We used already analysed data and a pre-specified secondary outcome on compliance to antiviral treatment (Trial registration: http://www.clinicaltrials.gov, NCT00799760).

### Competing interests

The authors declare that they have no competing interests.

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Comparison to Ogundimu and Collins. (PDF 129 kb)

BIVIR study group. (PDF 78 kb)

R code to impute binary outcome. (R 1 kb)
"# Java Program To Demonstrate Arithmetic Operators\n\n## Write Java program to demonstrate the use of arithmetic operators\n\n// WAP to demonstrate arithmetic operators.\n\npublic class ArithmeticOperator\n{\npublic static void main(String[] args)\n{\nint a = 12,b = 8,c;\n\nSystem.out.println(\"Value of num1 is : \"+a);\nSystem.out.println(\"Value of num2 is : \"+b);\n\nc = a+b;\nSystem.out.println(\"\"+a+\" + \"+b+\" = \"+c);\n\nc = a-b;\nSystem.out.println(\"\"+a+\" - \"+b+\" = \"+c);\n\nc = a*b;\nSystem.out.println(\"\"+a+\" * \"+b+\" = \"+c);\n\nfloat c1 = a/(float)b;\nSystem.out.println(\"\"+a+\" / \"+b+\" = \"+c1);\n\nc = a%b;\nSystem.out.println(\"\"+a+\" mod \"+b+\" = \"+c);\n\nc = a+1;\nSystem.out.println(\"\"+a+\" + \"+1+\" = \"+c);\n\nc = a-1;\nSystem.out.println(\"\"+a+\" - \"+1+\" = \"+c);\n\n}\n}\n\nOutput:"
"Solutions by everydaycalculation.com\n\n## 56 percent of what number is 50?\n\n50 is 56% of 89.29\n\n#### Steps to solve \"50 is 56 percent of what number?\"\n\n1. We have, 56% × x = 50\n2. or, 56/100 × x = 50\n3. Multiplying both sides by 100 and dividing both sides by 56,\nwe have x = 50 × 100/56\n4. x = 89.29\n\nIf you are using a calculator, simply enter 50×100÷56, which will give you the answer.\n\nMathStep (Works offline)",
"Browser\nContinue\n\n# How to Find Square Root of a Number?\n\nIn everyday situations, the challenge of calculating the square root of a number is faced. What if one doesn’t have access to a calculator or any other gadget? It can be done with old-fashioned paper and pencil in a long division style. Yes, there are a variety of ways to do so. Let’s start with discussing Square root and its properties first.\n\n### What is a Square Root?\n\nA square root is a value, which gives the original number that multiplication of itself. e.g., 6 multiplied by itself gives 36 (i.e. 6 × 6 = 36), therefore, 6 is the square root of 36 or in other words, 36 is the square number of 6.\n\nSuppose, a is the square root of b, then it is represented as,\n\na = √b or\n\na2 = b\n\nLet the square of 2 is 4 so the square root of 4 will be 2 i.e.\n\n√4 = 2\n\nThe following are square roots of the first 50 digits as:\n\nHence, the square root of the square of a positive number gives the original number. However, the square root of a negative number represents a complex number.\n\n### Properties of Square Root\n\n• A perfect square root always exists if a number is a perfect square.\n• The square root of 4 is 2 and the square root of 16 is 4. So we can conclude that the square root of an even perfect square is even.\n• The square root of 9 is 3 and the square root of 81 is 9. 
So we can conclude that the square root of an odd perfect square is odd.
• A perfect square cannot be negative, and hence the square root of a negative number is not defined.
• From the above point, it can be concluded that a perfect square ends with (has as its unit digit) 1, 4, 5, 6 or 9, or with an even number of zeros (0's); only such numbers can have an exact square root.
• If a number ends with 2, 3, 7 or 8 (in the unit digit), then it is not a perfect square and a perfect square root does not exist.
• Two square roots can be multiplied: for example, √5 multiplied by √2 gives √10.
• Multiplying a square root by itself gives the original (non-root) number: √5 multiplied by √5 gives 5.

### Perfect Square

A number that can be expressed as the product of two identical integers is called a perfect square. Perfect squares are the numbers made by squaring a whole number.

For example, 9 is a perfect square because it is the product of two equal integers, 3 × 3 = 9. However, 10 is not a perfect square because it cannot be expressed as the product of two equal integers (10 = 5 × 2).

Thus, a perfect square is an integer that is the square of an integer; in other words, it is the product of some integer with itself.

The numbers that are perfect squares are mentioned below, and finding the square roots of such numbers is easy. Here are a few examples:

• 1² = 1
• 2² = 4
• 3² = 9
• 4² = 16
• 5² = 25
• 6² = 36
• 7² = 49
• 8² = 64
• 9² = 81
• 10² = 100

As a result, the perfect squares up to 100 are 1, 4, 9, 16, 25, 36, 49, 64, 81 and 100.

### Methods to Find the Square Root of a Number

To find the square root of a given number, we must first determine whether the number is a perfect square or an imperfect square.
If the number is a perfect square, such as 4, 9, 16, etc., we can use the prime factorization process to factorize it. If the number is an imperfect square, such as 2, 3, 5, and so on, we must use the long division approach to find the root.\n\n1. Repeated Subtraction Method\n2. Prime Factorization Method\n3. Division Method\n\n### 1. Repeated Subtraction Method\n\nThe sum of the first n odd natural numbers is known to be n². So, to calculate the square root of a perfect square, we repeatedly subtract the successive odd numbers 1, 3, 5, … from it until we reach 0; the number of subtractions performed is the square root.\n\nLet’s consider the following examples to understand how the repeated subtraction method determines square roots.\n\nExample 1: Determine the square root of 25 using the repeated subtraction method.\nSolution:\n\nThe steps to find the square root of 25 are:\n\n• 25 – 1 = 24\n• 24 – 3 = 21\n• 21 – 5 = 16\n• 16 – 7 = 9\n• 9 – 9 = 0\n\nHere it takes five steps to reach 0.\n\nHence, the square root of 25 is 5.\n\nExample 2: Determine the square root of 16 using the repeated subtraction method.\nSolution:\n\nThe steps to find the square root of 16 are:\n\n• 16 – 1 = 15\n• 15 – 3 = 12\n• 12 – 5 = 7\n• 7 – 7 = 0\n\nHere it takes four steps to reach 0.\n\nHence, the square root of 16 is 4.\n\nExample 3: Find the square root of 49 using the repeated subtraction method.\n\nSolution:\n\nThe steps to find the square root of 49 are:\n\n• 49 – 1 = 48\n• 48 – 3 = 45\n• 45 – 5 = 40\n• 40 – 7 = 33\n• 33 – 9 = 24\n• 24 – 11 = 13\n• 13 – 13 = 0\n\nHere it takes seven steps to reach 0.\n\nHence, the square root of 49 is 7.\n\n### 2. Prime Factorization Method\n\nThe prime factorization method involves expressing a number as a product of its prime factors. 
The square root of the number is given by the product of one element from each pair of equal prime factors. This approach can also be used to determine whether a given number is a perfect square or not. This method, however, cannot be used to find the square root of numbers that are not perfect squares.\n\ne.g.: The prime factors of 126 are 2, 3, and 7, since 2 × 3 × 3 × 7 = 126 and 2, 3, 7 are prime numbers.\n\n• 16 = 2 × 2 × 2 × 2 = 2² × 2², so √16 = 2 × 2 = 4\n• 25 = 5 × 5 = 5², so √25 = 5\n• 64 = 2 × 2 × 2 × 2 × 2 × 2 = 2² × 2² × 2², so √64 = 2 × 2 × 2 = 8\n\n### 3. Division Method\n\nWhen the integers are sufficiently large, it is easier to obtain the square root of a perfect square by utilizing the long division approach, because finding square roots through factorization becomes lengthy and complicated. This method basically uses repeated division, at each stage choosing a divisor whose product with the new quotient digit is less than or equal to the current dividend.\n\nFollowing are the steps for the division method:\n\nStep 1: Take the number whose square root is to be found. Place a bar over every pair of digits of the number, starting from the one in the unit’s place (rightmost side).\n\nStep 2: Divide the number under the leftmost bar by the largest number whose square is less than or equal to it. Take this number as both the divisor and the quotient; the number under the leftmost bar is the dividend.\n\nStep 3: Divide and obtain the remainder. Bring down the number under the next bar to the right of the remainder.\n\nStep 4: Double the divisor (or add the divisor to itself). To the right of this doubled divisor, find a suitable digit which, appended to it, forms a new divisor for the new dividend; the same digit is also appended to the quotient. The condition is that the new divisor multiplied by this digit must be less than or equal to the new dividend.\n\nStep 5: Continue this process till we get zero as the remainder. 
The quotient thus obtained will be the square root of the number.\n\nLet’s consider the following examples to understand how the division method determines square roots.\n\nExample 1: Find the square root of 144 using the division method.\n\nSolution:\n\nThe steps to determine the square root of 144 are:\n\nStep 1: Start the division from the leftmost side. Here 1 is the largest number whose square (1) does not exceed the number under the leftmost bar.\n\nStep 2: Putting it in the divisor and the quotient and then doubling it will give,",
null,
"Step 3: Now it is required to find a digit for the blanks in the divisor and the quotient. Let that digit be x.\n\nStep 4: Therefore, check when the number “2x” (the digit 2 followed by the digit x) multiplied by x gives a number less than or equal to 44. Take x = 1, 2, 3, and so on and check.\n\nIn this case,\n\n• 21 × 1 = 21\n• 22 × 2 = 44\n\nSo we choose x = 2 as the new digit to be put in the divisor and in the quotient.\n\nThe remainder here is 0 and hence 12 is the square root of 144.\n\nExample 2: Find the square root of 196 using the division method.\n\nSolution:\n\nThe steps to determine the square root of 196 are:\n\nStep 1: Start the division from the leftmost side. Here 1 is the largest number whose square (1) does not exceed the number under the leftmost bar.\n\nStep 2: Putting it in the divisor and the quotient and then doubling it will give.",
null,
"Step 3: Now we need to find a digit for the blanks in the divisor and the quotient. Let that digit be x.\n\nStep 4: We need to check when the number “2x” multiplied by x gives a number less than or equal to 96. Take x = 1, 2, 3 and so on and check.\n\nIn this case,\n\n• 21 × 1 = 21\n• 22 × 2 = 44\n• 23 × 3 = 69\n• 24 × 4 = 96\n\nSo, choose x = 4 as the new digit to be put in the divisor and in the quotient.\n\nThe remainder here is 0 and hence 14 is the square root of 196.\n\nExample 3: Find the square root of 225 using the division method.\n\nSolution:\n\nThe steps to determine the square root of 225 are:\n\nStep 1: Start the division from the leftmost side. Here 1 is the largest number whose square (1) does not exceed the number under the leftmost bar.\n\nStep 2: Putting it in the divisor and the quotient and then doubling it will give.",
null,
"Step 3: Now we need to find a digit for the blanks in the divisor and the quotient. Let that digit be x.\n\nStep 4: We need to check when the number “2x” multiplied by x gives a number which is less than or equal to 125. Take x = 1, 2, 3 and so on and check.\n\nIn this case,\n\n• 21 × 1 = 21\n• 22 × 2 = 44\n• 23 × 3 = 69\n• 24 × 4 = 96\n• 25 × 5 = 125\n\nSo we choose x = 5 as the new digit to be put in the divisor and in the quotient.\n\nThe remainder here is 0 and hence 15 is the square root of 225.\n\n### 4. Square Roots of Complex Numbers\n\nTo calculate the square root of a complex number, suppose that the root is x + iy; squaring this and comparing it with the original number yields the values of x and y, and hence the square root.\n\nLet a + ib be a complex number; then, to find the square root of a + ib, the following formula can be used",
null,
"Let’s consider the following examples to understand the determination of the square roots of complex numbers.\n\nExample 1: Find the square root of 6 – 8i.\n\nSolution:\n\nLet’s use the following formula to determine the square root of the given complex number as:",
null,
"For the given case, substitute a = 6 and b = (-8) in the above formula,",
null,
"which is the required solution.\n\nExample 2: Find the square root of 9 + 40i.\n\nSolution:\n\nLet’s use the following formula to determine the square root of the given complex number as:",
null,
"For the given case, substitute a = 9 and b = 40 in the above formula,",
null,
"which is the required solution.\n\nExample 3: Find the square root of 3 + 4i.\n\nSolution:\n\nLet’s use the following formula to determine the square root of the given complex number as:",
null,
"For the given case, substitute a = 3 and b = 4 in the above formula,",
null,
"which is the required solution."
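The repeated-subtraction method, the digit-by-digit division method, and the complex square-root formula described above can be sketched in Python. This is an illustrative sketch of my own, not code from the original article; the function names are made up, and the first two functions assume a non-negative integer input (a perfect square, for exact results).

```python
import math

def sqrt_by_subtraction(n):
    """Repeated subtraction: subtract 1, 3, 5, ... until 0 is reached.
    The number of subtractions is sqrt(n) when n is a perfect square."""
    odd, steps = 1, 0
    while n > 0:
        n -= odd
        odd += 2
        steps += 1
    return steps

def sqrt_long_division(n):
    """Digit-by-digit (long division) method on pairs of decimal digits."""
    s = str(n)
    if len(s) % 2:                 # pad so the digits group into pairs
        s = "0" + s
    root, rem = 0, 0
    for i in range(0, len(s), 2):
        rem = rem * 100 + int(s[i:i + 2])   # bring down the next pair
        x = 9
        # largest digit x such that (doubled root with x appended) * x <= rem,
        # exactly the "2x multiplied by x" check in the worked examples
        while (20 * root + x) * x > rem:
            x -= 1
        rem -= (20 * root + x) * x
        root = root * 10 + x
    return root                     # floor of the square root of n

def complex_sqrt(a, b):
    """One square root of a + bi, via
    sqrt((|z| + a)/2) + i * sign(b) * sqrt((|z| - a)/2), |z| = sqrt(a^2 + b^2)."""
    mod = math.hypot(a, b)
    real = math.sqrt((mod + a) / 2)
    imag = math.copysign(math.sqrt((mod - a) / 2), b)
    return real, imag
```

For instance, `sqrt_long_division(144)` reproduces the worked example (12), and `complex_sqrt(9, 40)` gives (5.0, 4.0), matching the complex-number Example 2.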
] | [
] |
https://socratic.org/questions/what-is-the-percent-yield-for-the-following-chemical-reaction | [
"# What is the percent yield for the following chemical reaction?\n\n## The Haber process can be used to produce ammonia, $NH_3$, and it is based on the following reaction: $N_2(g) + 3H_2(g) \\to 2NH_3(g)$. If one mole each of $N_2$ and $H_2$ are mixed and 0.50 moles of $NH_3$ are produced, what is the percent yield for the reaction?\n\nSep 19, 2017\n\n75%\n\n#### Explanation:\n\nThe first thing that you need to do here is to calculate the theoretical yield of the reaction, i.e. what you get if the reaction has a 100% yield.\n\nThe balanced chemical equation\n\n$N_2(g) + 3H_2(g) \\to 2NH_3(g)$\n\ntells you that every 1 mole of nitrogen gas that takes part in the reaction will consume 3 moles of hydrogen gas and produce 2 moles of ammonia.\n\nIn your case, you know that 1 mole of nitrogen gas reacts with 1 mole of hydrogen gas. Since you don't have enough hydrogen gas to ensure that all the moles of nitrogen gas can react\n\n3 moles H₂ (what you need) > 1 mole H₂ (what you have)\n\nyou can say that hydrogen gas will act as a limiting reagent, i.e. it will be completely consumed before all the moles of nitrogen gas will get the chance to take part in the reaction.\n\nSo, the reaction will consume 1 mole of hydrogen gas and produce\n\n1 mole H₂ × (2 moles NH₃ / 3 moles H₂) = 0.667 moles NH₃\n\nat 100% yield. This represents the reaction's theoretical yield.\n\nNow, you know that the reaction produced 0.50 moles of ammonia. This represents the reaction's actual yield.\n\nIn order to find the percent yield, you need to figure out how many moles of ammonia are actually produced for every 100 moles of ammonia that could theoretically be produced.\n\nYou know that 0.667 moles in theory correspond to 0.50 moles actually produced, so you can say that\n\n100 moles NH₃ (in theory) × (0.50 moles NH₃ actual / 0.667 moles NH₃ in theory) = 75 moles NH₃ actual\n\nTherefore, you can say that the reaction has a percent yield equal to\n\n% yield = 75%\n\nI'll leave the answer rounded to two sig figs."
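The limiting-reagent arithmetic in this answer can be sketched in a few lines of Python. This is my own illustration (the function name and structure are not from the answer): each reagent's hypothetical NH₃ output is computed from the stoichiometric ratios, the smaller one is the theoretical yield, and the percent yield is the actual-to-theoretical ratio.

```python
def percent_yield(moles_n2, moles_h2, actual_nh3):
    """Percent yield for N2 + 3 H2 -> 2 NH3."""
    nh3_from_n2 = moles_n2 * 2 / 1   # 2 mol NH3 per 1 mol N2
    nh3_from_h2 = moles_h2 * 2 / 3   # 2 mol NH3 per 3 mol H2
    theoretical = min(nh3_from_n2, nh3_from_h2)  # limiting reagent decides
    return 100 * actual_nh3 / theoretical

# 1 mol N2, 1 mol H2, 0.50 mol NH3 produced -> 75 (to two sig figs)
```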
] | [
] |
http://umj.imath.kiev.ua/volumes/issues/?lang=en&year=1997&number=2 | [
"# Volume 49, № 2, 1997\n\nArticle (Russian)\n\n### On extremal problems for symmetric disjoint domains\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 179–185\n\nWe study two extremal problems for the product of powers of conformal radii of symmetric disjoint domains.\n\nArticle (Ukrainian)\n\n### Problem with nonlocal conditions for weakly nonlinear hyperbolic equations\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 186–195\n\nFor weakly nonlinear hyperbolic equations of order n, n≥3, with constant coefficients in the linear part of the operator, we study a problem with nonlocal two-point conditions in time and periodic conditions in the space variable. Generally speaking, the solvability of this problem is connected with the problem of small denominators whose estimation from below is based on the application of the metric approach. For almost all (with respect to the Lebesgue measure) coefficients of the equation and almost all parameters of the domain, we establish conditions for the existence of a unique classical solution of the problem.\n\nArticle (Russian)\n\n### Phase transition in an exactly solvable model of interacting bosons\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 196–205\n\nIn the formalism of the grand canonical ensemble, we study a model system of a lattice Bose gas with repulsive hard-core interaction on a perfect graph. We show that the corresponding ideal system may undergo a phase transition (Bose-Einstein condensation). For a system of interacting particles, we obtain an explicit expression for pressure in the thermodynamic limit. The analysis of this expression demonstrates that the phase transition does not take place in the indicated system.\n\nArticle (Russian)\n\n### Mean oscillations and the convergence of Poisson integrals\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 
206–222\n\nWe establish conditions for mean oscillations of a periodic summable function under which the summability of its Fourier series (conjugate series) by the Abel-Poisson method at a given point implies the convergence of Steklov means (the existence of the conjugate function) at the indicated point. Similar results are also obtained for the Poisson integral in ℝ_+^{n+1}.\n\nArticle (Russian)\n\n### Periodic solutions of systems of differential equations with random right-hand sides\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 223–227\n\nWe prove a theorem on the existence of periodic solutions of a system of differential equations with random right-hand sides and small parameter of the form dx/dt = εX(t, x, ξ(t)) in a neighborhood of the equilibrium state of the averaged deterministic system dx/dt = εX_0(t).\n\nArticle (Russian)\n\n### Potential fields with axial symmetry and algebras of monogenic functions of vector variables. III\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 228–243\n\nWe obtain new representations of the potential and flow function of three-dimensional potential solenoidal fields with axial symmetry, study principal algebraic analytic properties of monogenic functions of vector variables with values in an infinite-dimensional Banach algebra of even Fourier series, and establish the relationship between these functions and the axially symmetric potential or the Stokes flow function. The developed approach to the description of the indicated fields is an analog of the method of analytic functions in the complex plane used for the description of two-dimensional potential fields.\n\nArticle (Russian)\n\n### Nonlinear nonlocal problems for a parabolic equation in a two-dimensional domain\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 244–254\n\nWe establish the convergence of the Rothe method for a parabolic equation with nonlocal boundary conditions and obtain an a priori estimate for the constructed difference scheme in the grid norm on a ball. 
We prove that the suggested iterative process for the solution of the posed problem converges in the small.\n\nArticle (Ukrainian)\n\n### On direct decompositions in modules over group rings\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 255–261\n\nIn the theory of infinite groups, one of the most important useful generalizations of the classical Maschke theorem is the Kovács-Newman theorem, which establishes sufficient conditions for the existence of G-invariant complements in modules over a periodic group G finite over the center. We generalize the Kovács-Newman theorem to the case of modules over a group ring KG, where K is a Dedekind domain.\n\nArticle (Ukrainian)\n\n### On a limit theorem for an additive functional on a nonrecurrent Markovian chain\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 262–271\n\nWe establish conditions under which the distribution of an additive functional on a nonrecurrent Markovian chain is asymptotically normal.\n\nArticle (Russian)\n\n### Weakly nonlinear boundary-value problems for operator equations with pulse influence\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 272–288\n\nWe consider the problem of finding conditions of solvability and algorithms for construction of solutions of weakly nonlinear boundary-value problems for operator equations (with the Noetherian linear part) with pulse influence at fixed times. The method of investigation is based on passing by methods of the Lyapunov—Schmidt type from a pulse boundary-value problem to an equivalent operator system that can be solved by iteration procedures based on the fixed-point principle.\n\nArticle (Russian)\n\n### On finite-dimensional approximation of solutions of ill-posed problems\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 
289–295\n\nWe show that the modified method for finite-dimensional approximation of solutions of Fredholm integral equations of the first kind presented in this paper is more economical than traditional methods for finite-dimensional approximation.\n\nArticle (Ukrainian)\n\n### The solvability of a boundary-value periodic problem\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 302–308\n\nIn the space of functions B_a^{3+} = {g(x, t) = −g(−x, t) = g(x+2π, t) = −g(x, t+T_3/2) = g(x, −t)}, we establish that if the condition aT_3(2s−1) = 4πk, (4πk, a(2s−1)) = 1, k ∈ ℤ, s ∈ ℕ, is satisfied, then the linear problem u_tt − a²u_xx = g(x, t), u(0, t) = u(π, t) = 0, u(x, t+T_3) = u(x, t), (x, t) ∈ ℝ², is always consistent. To prove this statement, we construct an exact solution in the form of an integral operator.\n\nArticle (Ukrainian)\n\n### Existence of the Vejvoda-Shtedry spaces\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 302–308\n\nWe investigate the linear periodic problem u_tt − u_xx = F(x, t), u(x+2π, t) = u(x, t+T) = u(x, t), (x, t) ∈ ℝ², and establish conditions for the existence of its classical solution in spaces that are subspaces of the Vejvoda-Shtedry spaces.\n\nBrief Communications (English)\n\n### On the problem on periodic solutions of one class of systems of difference equations\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 309–314\n\nThe scheme of the Samoilenko numerical-analytic method for finding periodic solutions in the form of a uniformly convergent sequence of periodic functions is applied to one class of difference equations.\n\nArticle (Ukrainian)\n\n### A projective method for the construction of solutions of the problem of normal symmetric oscillations of viscous liquid\n\nUkr. Mat. Zh. - 1997. - 49, № 2. - pp. 315–320\n\nWe propose a variational formulation of the spectral problem of normal symmetric oscillations of viscous liquid. On the basis of this formulation, we construct a projective method for the determination of real eigenvalues of the problem. 
We present the numerical realization of this method in the case of a spherical cavity."
] | [
] |
https://physics.stackexchange.com/questions/338124/does-violating-cosmic-censorship-really-mean-violating-causality?noredirect=1 | [
"# Does Violating Cosmic Censorship Really Mean Violating Causality?\n\nAs I understand it, the basic motivation behind ruling out a naked singularity is that we don't know what is happening at a singularity and thus, we won't be able to predict anything in the universe if there is no horizon around such an unknown region. But the reason we don't understand what is happening at the singularity is that we don't have a theory of quantum gravity. But when we have a theory of quantum gravity, this limitation should go away. And thus, causality should be preserved even with naked singularities.\n\nIt is very much a cultural fact that we don't know how to deal with singularities without horizons at this stage. Thus, it seems quite naive to assume that causality would actually be violated if horizons don't cover the singularity. That said, I believe the censorship conjecture has been proven under some restricted energy conditions, so the censorship might be correct for reasons other than causality; but causality alone doesn't seem to force the censorship.\n\n• Kerr solutions have closed timelike curves.\n– user107153\nJun 15, 2017 at 19:52\n\nThe reason naked singularities are a problem is not that they imply causality violation in the sense of closed timelike curves existing (although sometimes they do: see below), it is that they imply that GR is not a useful theory, even in the cases where it ought to be useful, because the future can't be predicted from the past in many cases. So, in particular, if GR predicts that uncensored singularities arise when starting from physically-reasonable initial conditions, then GR is not useful at predicting what happens in those cases: you need a better theory which makes useful predictions about what happens when GR predicts a singularity.\n\nIf cosmic censorship fails, then GR thus fails to be a usefully predictive theory in many cases. In particular it ceases (or may cease) to be a usefully predictive theory for cosmology. 
Well, we would like it to be useful for cosmology of course.\n\nSo the question that cosmic censorship seeks to answer is 'is GR, which we know is not a completely correct theory, still usable in the regimes where we would like it to be a good approximation, or does it fail even there?'.\n\nNote that a reasonable (indeed common) definition of 'causality violating' is 'not usefully predictive', as Ben Crowell says in a comment: in that sense naked singularities always violate causality.\n\nHowever it is actually worse than that. As mentioned in other answers some solutions (Kerr) can have both naked singularities and CTCs while some (Reissner-Nordström) have only naked singularities.\n\nBut these are two different pathologies. So it is not sufficient to have some QG theory which fixes the singularities: that theory would also need to fix the CTCs.\n\n• Thank you for your answer. Although it's not related to my original question, can you elaborate why CTCs are considered highly pathological? Except for messing with the human intuition of not being able to go into one's own past, does it create any concrete theoretical/mathematical issues that an \"intuition-less\" theoretical physicist would appreciate?\n– user87745\nJun 29, 2017 at 14:54\n• I think that might be worth an independent question: it's interesting enough, and you will get better answers than this as more people will see it. 
However I think the problem is that, since there are now events which are in their own pasts, it becomes impossible to predict the future in the way you would like: so if I take some suitable spacelike surface (a Cauchy surface) I can no longer predict the future from it.\n– user107153\nJun 29, 2017 at 16:34\n• The reason naked singularities are a problem is not that they imply causality violation (although sometimes they do: see below) A naked singularity always implies causality violation, if you're using the (AFAIK) standard condition that the spacetime should be globally hyperbolic. If you lack global hyperbolicity, then you don't have existence and uniqueness for solutions to Cauchy problems, and that's pretty much the definition of violating causality.\n– user4552\nJun 29, 2017 at 16:41\n• @BenCrowell: I agree with that. I was using a definition in the sense of 'closed timelike curves existing', but I had not stated that. I've elaborated the answer to be, I hope, more satisfactory (at least it now says what I mean!)\n– user107153\nJun 29, 2017 at 17:12\n\nThere are closed timelike curves in the interior of the Kerr horizon. The obvious way to see this is if you go through the center of the ring singularity (thus, not intersecting the ring singularity), the Boyer-Lindquist $r$ goes negative, and the Boyer-Lindquist $\\phi$ becomes timelike. Since, by construction, the orbits of $\\phi$ are closed, this means that they are closed timelike curves.\n\n• Thank you for your answer! But I don't understand how closed timelike curves are related to naked singularities. Can you explain a bit?\n– user87745\nJun 16, 2017 at 5:38\n• @Dvij There are closed timelike curves in the interior of the kerr horizon. What's there to explain? If you strip away the horizon, there are regions where the past flows into the future. 
Jun 16, 2017 at 16:12\n• @Dvij: I guess what I'm saying is that there is a known class of GR solutions (namely, the $a > M$ Kerr models), that has a naked singularity and that has closed timelike curves. Therefore, if it were possible to \"spin up\" a Kerr hole such that $a > M$, then it would also be possible to create causality violations. The only way we prevent this is through cosmic censorship. Jun 26, 2017 at 15:03\n• Ok. I understand it that in the case of the Kerr solutions, the only way to prevent causality violation (i.e. preclude naked CTCs) is to avoid naked singularities. But does this imply that we must avoid naked singularities in all the cases? I mean, for example, in a pure RN solution if we admit the super-extremal case then there are no CTCs but we have the naked singularity. How does a naked singularity (by itself) violate causality (considering that actually there exists a quantum gravity theory that is in principle capable of figuring out what is happening at the center)?\n– user87745\nJun 26, 2017 at 15:18\n• Why the downvotes? Jun 30, 2017 at 3:36\n\nTo my knowledge a naked singularity doesn't imply closed timelike curves or other alterations of the ordering of events. I agree with the OP that a primary example is an overcharged Reissner–Nordström solution.\n\nStill, a naked singularity is a problem, so an actual theory of quantum gravity will need to remove these pathologies. To be more explicit, a naked singularity means that the spacetime is not globally hyperbolic, that is, there isn't a Cauchy surface; so, given a set of valid and complete initial conditions, I cannot predict the future, since singularities act as disturbance points in your equations. 
See Wald for more info.\n\nI personally found solutions to supergravity (related to some string theory configurations of branes) with the same asymptotic charges of a naked singularity, but without actual singularities (https://arxiv.org/pdf/1701.05520.pdf, but it's technical, you have been warned!).\n\n• Thank you for your answer and the reference therein. Can you elaborate how \"a naked singularity is a problem\" on its own? As you agree, a naked singularity doesn't necessarily imply CTCs. And if we have a proper theory of QG (which the nature itself presumably has) then what is going to come out of the naked singularity is not really indeterminate. It would be dictated by the laws of QG. And thus, I believe naked singularities shouldn't cause a problem of broken down predictability. Can you elaborate what kind of problems you have in mind that a naked singularity can cause?\n– user87745\nJun 29, 2017 at 13:45\n• In reference to some recent literature (arxiv.org/pdf/1702.05490.pdf), existence of naked singularities might mean some problems for the weak gravity conjecture. The weak gravity conjecture, I feel, is most probably correct based on many impressive restricted proofs that we have obtained for it so far.\n– user87745\nJun 29, 2017 at 13:52\n• A naked singularity does imply causality violation. When you have a naked singularity, the spacetime is not globally hyperbolic. Global hyperbolicity is the condition that's needed if you want solutions to Cauchy problems to exist and be unique.\n– user4552\nJun 29, 2017 at 16:38\n• It is a matter of how you define causality. Here the OP was not considering the rigorous definition of causality, but the more common meaning of \"well ordered causal flow\". I agree that it can be misleading. Jun 29, 2017 at 17:39"
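The claim in the thread about closed timelike curves inside Kerr can be illustrated numerically. In Boyer–Lindquist coordinates at the equator (θ = π/2), the Kerr metric component along the closed φ-orbits reduces to g_φφ = r² + a² + 2Ma²/r (geometric units, G = c = 1). Where g_φφ < 0, those closed orbits are timelike. The snippet below is my own illustration, not code from the thread:

```python
def g_phiphi_equatorial(r, a, M=1.0):
    """Kerr g_{phi,phi} at theta = pi/2 in Boyer-Lindquist coordinates,
    geometric units (G = c = 1)."""
    return r**2 + a**2 + 2.0 * M * a**2 / r

# For small negative r (the region reached through the ring singularity),
# the 2Ma^2/r term dominates and g_phiphi goes negative, so the closed
# phi-orbits become timelike -- the CTC region; at large positive r it
# is positive, as for an ordinary axial angle.
```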
https://physics.stackexchange.com/questions/338124/does-violating-cosmic-censorship-really-mean-violating-causality
https://demonstrations.wolfram.com/MaximumEntropyProbabilityDensityFunctions/
# Maximum Entropy Probability Density Functions

The principle of maximum entropy can be used to find the probability distribution, subject to a specified constraint, that is maximally noncommittal regarding missing information about the distribution. In this Demonstration the principle of maximum entropy is used to find the probability density function of discrete random variables defined on the interval $(0,1)$, subject to user-specified constraints regarding the mean $\mu$ and variance $\sigma^2$. The resulting probability distribution is referred to as an $A_p$ distribution [1]. The mean of the $A_p$ distribution associated with a proposition is the probability of that proposition, and the variance of the $A_p$ distribution is a measure of the amount of confidence associated with predicting the probability of the proposition. When only the mean is specified, the entropy $S$ of the $A_p$ distribution is maximal when the specified mean probability is $1/2$. When both mean and variance are specified, the entropy $S$ of the $A_p$ distribution decreases as the specified variance decreases.

Contributed by: Marshall Bradley (March 2011)
Open content licensed under CC BY-NC-SA

## Details

Probabilities are used to characterize the likelihood of events or propositions. In some circumstances, predictions of probability carry a high degree of confidence. For example, an individual can confidently predict that a fair coin will produce "heads" in one flip with probability $1/2$. By way of contrast, there is more uncertainty associated with a weather prediction that states the probability of rain tomorrow as $1/2$. E. T. Jaynes developed the concept of the $A_p$ distribution to deal with what he described as different states of external and internal knowledge. In the terminology of Jaynes, the probability of a proposition is found by computing the mean of the $A_p$ distribution, and the variance of the $A_p$ distribution is a measure of the amount of confidence associated with the prediction of the mean. In situations where you have high states of internal knowledge, like the case of the coin, the variance of the $A_p$ distribution is small. In fact, for the case of the coin, the variance of the $A_p$ distribution is 0.

The entropy $S$ is a measure of the amount of disorder in a probability density function. The principle of maximum entropy can be used to find $A_p$ distributions in circumstances where the only specified information is the mean of the distribution, or the mean and variance of the distribution. The $A_p$ distributions in this Demonstration are evaluated at $n$ discrete points $x_i$ on the interval. If the probability density at these $n$ points is denoted by $p_i$, then the mean $\mu$, variance $\sigma^2$, and entropy $S$ of the $A_p$ distribution are respectively given by

$$\mu = \sum_{i=1}^{n} p_i x_i, \qquad \sigma^2 = \sum_{i=1}^{n} p_i (x_i - \mu)^2, \qquad S = -\sum_{i=1}^{n} p_i \ln p_i.$$

If the mean $\mu$ of the $A_p$ distribution is specified, then the corresponding maximum entropy probability distribution $p_i$ can be found using the technique of Lagrange multipliers [2]. This requires finding the maximum of the quantity

$$S + \lambda_0\Bigl(1 - \sum_{i=1}^{n} p_i\Bigr) + \lambda_1\Bigl(\mu - \sum_{i=1}^{n} p_i x_i\Bigr),$$

where the unknowns are the probabilities $p_i$ and the Lagrange multipliers $\lambda_0$ and $\lambda_1$. If the mean $\mu$ and the variance $\sigma^2$ of the $A_p$ distribution are both specified, then it is necessary to find the maximum value of the quantity

$$S + \lambda_0\Bigl(1 - \sum_{i=1}^{n} p_i\Bigr) + \lambda_1\Bigl(\mu - \sum_{i=1}^{n} p_i x_i\Bigr) + \lambda_2\Bigl(\sigma^2 - \sum_{i=1}^{n} p_i (x_i - \mu)^2\Bigr),$$

where $\lambda_2$ is an additional Lagrange multiplier.

References

[1] E. T. Jaynes, Probability Theory: The Logic of Science, New York: Cambridge University Press, 2003.

[2] P. Gregory, Bayesian Logical Data Analysis for the Physical Sciences, Cambridge: Cambridge University Press, 2005.

Permanent Citation: Marshall Bradley
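For the mean-only case, setting the derivative of the Lagrangian with respect to each probability to zero forces the exponential form $p_i \propto e^{\lambda x_i}$, so the whole problem reduces to a one-dimensional search for $\lambda$. The sketch below (function names and the bisection bounds are my own choices, not from the Demonstration) finds the maximum entropy distribution on a grid of points with a prescribed mean:

```python
import math

def maxent_dist(xs, target_mean, lo=-200.0, hi=200.0):
    """Max-entropy pmf on the points xs subject to a fixed mean.
    The stationarity conditions give p_i ∝ exp(lam * x_i); we find
    lam by bisection, since the implied mean is increasing in lam."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in xs]
        z = sum(w)
        return sum(x * wi for x, wi in zip(xs, w)) / z
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

def entropy(p):
    """Discrete Shannon entropy S = -sum p_i ln p_i."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)
```

When the target mean is the midpoint of the grid, the result is the uniform (maximum-entropy) distribution; pushing the mean away from the midpoint necessarily lowers the entropy, as the article states for the variance constraint as well.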
https://www.colorhexa.com/42c96b
# #42c96b Color Information

In the RGB color space, hex #42c96b is composed of 25.9% red, 78.8% green and 42% blue, whereas in the CMYK color space it is composed of 67.2% cyan, 0% magenta, 46.8% yellow and 21.2% black. It has a hue angle of 138.2 degrees, a saturation of 55.6% and a lightness of 52.4%. The hex color #42c96b could be obtained by blending #84ffd6 with #009300. The closest websafe color is #33cc66.

#42c96b color description: moderate cyan-lime green.

# #42c96b Color Conversion

The hexadecimal color #42c96b has RGB values of R:66, G:201, B:107 and CMYK values of C:0.67, M:0, Y:0.47, K:0.21. Its decimal value is 4376939.

| Notation | Value |
| --- | --- |
| Hex triplet | 42c96b |
| RGB decimal | rgb(66, 201, 107) |
| RGB percent | rgb(25.9%, 78.8%, 42%) |
| CMYK | 67, 0, 47, 21 |
| HSL | hsl(138.2, 55.6%, 52.4%) |
| HSV (HSB) | 138.2°, 67.2, 78.8 |
| Websafe | 33cc66 |
| CIE-LAB | 72.222, −56.588, 36.478 |
| XYZ | 25.786, 43.991, 21.041 |
| xyY | 0.284, 0.484, 43.991 |
| CIE-LCH | 72.222, 67.327, 147.193 |
| CIE-LUV | 72.222, −56.418, 56.727 |
| Hunter-Lab | 66.325, −46.673, 27.619 |
| Binary | 01000010, 11001001, 01101011 |

# Color Schemes with #42c96b

- Complementary: #c942a0
- Analogous: #5dc942, #42c9af
- Split complementary: #c9425d, #af42c9
- Triadic: #c96b42, #6b42c9
- Tetradic: #a0c942, #6b42c9, #c942a0
- Monochromatic: #2a944a, #30a854, #36bc5e, #42c96b, #56cf7b, #6ad48a, #7eda9a

# Alternatives to #42c96b

Below are some colors close to #42c96b. A set of related colors can be useful if you need an inspirational alternative to your original color choice.

#42c949, #42c955, #42c960, #42c96b, #42c976, #42c982, #42c98d

# #42c96b Preview

Text color: `<span style="color:#42c96b;">Text here</span>`

Background color: `<p style="background-color:#42c96b;">Content here</p>`

Border color: `<div style="border:1px solid #42c96b;">Content here</div>`

CSS codes:
`.text {color:#42c96b;}`
`.background {background-color:#42c96b;}`
`.border {border:1px solid #42c96b;}`

# Shades and Tints of #42c96b

A shade is achieved by adding black to any pure hue, while a tint is created by mixing white into any pure color. In this example, #030905 is the darkest color, while #f9fdfa is the lightest one.

Shades: #030905, #07190c, #0b2814, #10371c, #144623, #18562b, #1d6533, #21743a, #268342, #2a934a, #2ea251, #33b159, #37c061, #42c96b

Tints: #51cd77, #61d283, #70d68f, #7fda9b, #8edfa7, #9ee3b3, #ade8bf, #bceccb, #cbf0d7, #dbf5e2, #eaf9ee, #f9fdfa

# Tones of #42c96b

A tone is produced by adding gray to any pure hue. In this case, #838885 is the least saturated color, while #13f859 is the most saturated one.

#838885, #7a9181, #719a7d, #67a47a, #5ead76, #55b672, #4bc06f, #42c96b, #39d267, #2fdc64, #26e560, #1dee5c, #13f859

# Color Blindness Simulator

Below you can see how #42c96b is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.

- Monochromacy: Achromatopsia (0.005% of the population), Atypical Achromatopsia (0.001% of the population)
- Dichromacy: Protanopia (1% of men), Deuteranopia (1% of men), Tritanopia (0.001% of the population)
- Trichromacy: Protanomaly (1% of men, 0.01% of women), Deuteranomaly (6% of men, 0.4% of women), Tritanomaly (0.01% of the population)
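The hex-to-RGB and RGB-to-HSL figures quoted above can be reproduced with Python's standard `colorsys` module; note that `colorsys.rgb_to_hls` works on 0–1 values and returns hue, lightness, saturation in that order (helper names below are my own):

```python
import colorsys

def hex_to_rgb(hex_str):
    """'#42c96b' -> (66, 201, 107): each pair of hex digits is one channel."""
    h = hex_str.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def hex_to_hsl(hex_str):
    """Return (hue in degrees, saturation %, lightness %)."""
    r, g, b = (v / 255.0 for v in hex_to_rgb(hex_str))
    hue, light, sat = colorsys.rgb_to_hls(r, g, b)  # H, L, S order!
    return hue * 360.0, sat * 100.0, light * 100.0
```

For #42c96b this gives hue 138.2°, saturation 55.6% and lightness 52.4%, matching the values on the page.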
https://www.turing.ac.uk/research/research-projects/geometry-and-topology-complex-interconnected-systems
# Geometry and topology of complex interconnected systems

## Introduction

Geometry plays an important part in understanding the behaviour of systems. For example, periodic or recurrent systems (systems which repeat every so often) are naturally described by a circle – this includes the yearly cycle of the seasons and daily commuting traffic patterns. This may not be a geometric circle. For example, the seasons vary from year to year, so they are characterised by an 'angle' which shows where in the year we are and, e.g., that it's more likely that a hot day occurs in summer.

Understanding this shape is better done in terms of topology, or 'rubber sheet' geometry, which maintains the qualitative features of the shape while allowing bending and stretching. Understanding these types of structures is critical for performing statistical analyses based on collected data as well as making and testing predictions.

## Explaining the science

While understanding the overall shape is important in analysing data, geometry still plays a critical role. A typical example is the scale at which we analyse a system. Local behaviour may be geometrically intricate, but at larger scales emergent phenomena may appear giving more insight into the system, e.g. the circle in the case of a yearly cycle. In complex systems, structures can interact at several scales, resulting in different phenomena at a number of different scales. This is perhaps best understood in terms of temporal scales, where we can understand and interpret events in the short term, mid-term, or long-term. Constructing a quantitative measure is critical, e.g. is the climate changing on a scale of 10 years, 100 years, or 1000 years?

This issue of scale is ubiquitous and requires an understanding of the interplay between geometry (quantitative measures) and topology (overall shape). One way of understanding this relationship is through algebraic invariants, which are descriptors of the global structure that are also directly related to the local geometry. To further complicate matters, complex systems are most often random. This requires a further understanding of how randomness affects both the local geometry and global topology.

*Figure: The Lorenz system – a simple chaotic system which has two attractors (the two holes which the system circles). Understanding the system away from the middle is straightforward (it will continue around the hole), while at the center it is nearly impossible to predict which hole it will go around next. Knowing this, a complex trajectory in 3 dimensions can be plotted and analysed with 2 mutually exclusive impulses describing which hole it is circling (far right). These appear at a given scale, so we must understand the 'scale' of the holes (far left).*

## Project aims

The project is at the intersection of geometry, topology, algebra, and probability. The goal is to answer the following questions:

- How do probabilistic effects appear at different scales in complex interconnected systems?
- How can such an understanding be used to verify hypotheses about different complicated datasets?
- How can prediction and statistical analysis take these structures into account?

This project is part of the Data-centric engineering programme's 'Mathematical foundations' challenge.

## Applications

As we collect more and more data, tools to understand non-linear and high-dimensional structures are becoming increasingly important. How these structures appear and their role in understanding systems and data has been largely unexplored; however, as we consider the complex interaction of many different factors, it is likely that they will play an important role in future developments of analysis and interpretation. The applications include understanding and predicting random systems arising from any number of real-world systems.

## Researchers and collaborators

### Dr Wajid Mannan

Postdoctoral Research Assistant, QMUL
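The "two mutually exclusive impulses" in the Lorenz figure caption can be sketched numerically: integrate the Lorenz equations and record the sign of x, which indicates which lobe ("hole") the trajectory is currently circling. The sketch below is illustrative only (step size, classic parameters σ = 10, ρ = 28, β = 8/3, and function names are my own choices, not the project's):

```python
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def rk4_step(f, s, dt):
    """One classical Runge-Kutta 4 step."""
    k1 = f(s)
    k2 = f([si + 0.5 * dt * ki for si, ki in zip(s, k1)])
    k3 = f([si + 0.5 * dt * ki for si, ki in zip(s, k2)])
    k4 = f([si + dt * ki for si, ki in zip(s, k3)])
    return [si + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

def lobe_symbols(s0, dt=0.01, steps=5000):
    """Symbolic 'which hole' sequence: the sign of x picks the lobe.
    Consecutive repeats are collapsed, so the output records switches."""
    s, out = list(s0), []
    for _ in range(steps):
        s = rk4_step(lorenz, s, dt)
        lobe = 1 if s[0] > 0 else -1
        if not out or out[-1] != lobe:
            out.append(lobe)
    return out, s
```

Over a moderate time span the trajectory stays bounded on the attractor yet visits both lobes in an effectively unpredictable order – exactly the two-symbol description the caption refers to.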
http://virtu-software.com/toolbox/prime.asp
Prime Factorization Calculator

The image shown is the prime factorization calculator, which allows the user to determine whether a number is prime or composite. The calculator also provides the prime factors, and the prime factorization (prime factors with powers).

Math tools included in the Mega Toolbox are:
• Least Common Multiple and Greatest Common Factor calculator
• Base converter
• Happy Number Calculator
• Polar and Rectangular Calculator
• Prime Factorization Calculator
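The calculator's core behaviour — primality testing and prime factorization with powers — can be sketched with simple trial division (the helper names below are my own; this is not the toolbox's actual code):

```python
def prime_factorization(n):
    """Trial division: return {prime: exponent} for n >= 2, e.g. 360 -> 2^3 * 3^2 * 5."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                       # whatever remains is a prime factor
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_prime(n):
    """A number is prime exactly when its factorization is itself to the first power."""
    return n > 1 and prime_factorization(n) == {n: 1}
```

Trial division only needs divisors up to the square root of n, because any composite n has at least one prime factor no larger than √n.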
https://classnotes.ng/lesson/new-lesson-45/
# CONIC SECTIONS: PARABOLA, ELLIPSE AND HYPERBOLA

THE PARABOLA

The parabola is the locus of points equidistant from a given point, called the focus, and from a given line, called the directrix.

(Distance of the directrix from V, AV = distance of the focus from V, FV.)

The line AB, a distance a from the y-axis, is called the directrix. The line AF is called the axis of symmetry.

Since BP = FP,

BP² = FP²

(x + a)² = (x − a)² + (y − 0)²

x² + 2ax + a² = x² − 2ax + a² + y²

4ax = y²

Thus, y² = 4ax is the equation of the parabola.

The chord RQ through F, perpendicular to AF, is called the latus rectum; V is called the vertex and F the focus of the parabola.

If the vertex of the parabola is translated to a point (x₁, y₁), the equation of the parabola becomes

(y − y₁)² = 4a(x − x₁).

The above equation is said to be in the standard or canonical form.

Examples

1. Find the focus and directrix of the parabola y² = 16x.

2. Write down the equation of the parabola y² − 4y − 12x + 40 = 0 in its canonical form and hence find (i) the vertex, (ii) the focus, (iii) the directrix of the parabola.

Solution

1. Comparing y² = 16x with y² = 4ax gives 4a = 16, so a = 4.

Thus the focus is (4, 0) while the directrix is x = −4.

2. y² − 4y − 12x + 40 = 0

y² − 4y + 4 = 12x − 40 + 4 …… (completing the square and rearranging)

(y − 2)² = 12(x − 3) …… (factorising)

Comparing with (y − y₁)² = 4a(x − x₁):

(i) the vertex (x₁, y₁) = (3, 2);

(ii) since 4a = 12, a = 3, so the focus is (x₁ + a, y₁) = (3 + 3, 2) = (6, 2);

(iii) the equation of the directrix is x = 3 − 3, i.e. x = 0.

Note that the directrix is at an equal but opposite distance from the vertex as the focus: the distance between the focus and the vertex equals the distance between the directrix and the vertex.
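The completing-the-square steps in Example 2 can be checked numerically. The sketch below (my own helper, not from the lesson) handles any parabola of the form y² + b·y + c·x + d = 0 with c ≠ 0, and also verifies the defining focus-directrix property for a point on the curve:

```python
def parabola_from_general(b, c, d):
    """For y² + b·y + c·x + d = 0 (c != 0), complete the square to the
    canonical form (y - y1)² = 4a(x - x1) and return vertex, focus,
    and the directrix line x = x1 - a."""
    y1 = -b / 2.0              # y² + b·y = (y - y1)² - y1²
    x1 = (y1 * y1 - d) / c     # so (y - y1)² = -c·(x - x1)
    a = -c / 4.0               # 4a = -c
    vertex = (x1, y1)
    focus = (x1 + a, y1)
    directrix_x = x1 - a
    return vertex, focus, directrix_x
```

For y² − 4y − 12x + 40 = 0 (b = −4, c = −12, d = 40) this reproduces the lesson's answers: vertex (3, 2), focus (6, 2), directrix x = 0; and any point on the curve, e.g. (6, 8), is equidistant from the focus and the directrix.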
https://uk.mathworks.com/help/stats/feature-extraction.html
"## Feature Extraction\n\n### What Is Feature Extraction?\n\nFeature extraction is a set of methods that map input features to new output features. Many feature extraction methods use unsupervised learning to extract features. Unlike some feature extraction methods such as PCA and NNMF, the methods described in this section can increase dimensionality (and decrease dimensionality). Internally, the methods involve optimizing nonlinear objective functions. For details, see Sparse Filtering Algorithm or Reconstruction ICA Algorithm.\n\nOne typical use of feature extraction is finding features in images. Using these features can lead to improved classification accuracy. For an example, see Feature Extraction Workflow. Another typical use is extracting individual signals from superpositions, which is often termed blind source separation. For an example, see Extract Mixed Signals.\n\nThere are two feature extraction functions: `rica` and `sparsefilt`. Associated with these functions are the objects that they create: `ReconstructionICA` and `SparseFiltering`.\n\n### Sparse Filtering Algorithm\n\nThe sparse filtering algorithm begins with a data matrix `X` that has `n` rows and `p` columns. Each row represents one observation and each column represents one measurement. The columns are also called the features or predictors. The algorithm then takes either an initial random `p`-by-`q` weight matrix `W` or uses the weight matrix passed in the `InitialTransformWeights` name-value pair. `q` is the requested number of features that `sparsefilt` computes.\n\nThe algorithm attempts to minimize the Sparse Filtering Objective Function by using a standard limited memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) quasi-Newton optimizer. See Nocedal and Wright . This optimizer takes up to `IterationLimit` iterations. 
It stops iterating earlier when it takes a step whose norm is less than `StepTolerance`, or when it computes that the norm of the gradient at the current point is less than `GradientTolerance` times a scalar τ, where\n\n`$\\tau =\\mathrm{max}\\left(1,\\mathrm{min}\\left(|f|,{‖{g}_{0}‖}_{\\infty }\\right)\\right).$`\n\n|f| is the norm of the objective function, and ${‖{g}_{0}‖}_{\\infty }$ is the infinity norm of the initial gradient.\n\nThe objective function attempts to simultaneously obtain few nonzero features for each data point, and for each resulting feature to have nearly equal weight. To understand how the objective function attempts to achieve these goals, see Ngiam, Koh, Chen, Bhaskar, and Ng .\n\nFrequently, you obtain good features by setting a relatively small value of `IterationLimit`, from as low as 5 to a few hundred. Allowing the optimizer to continue can result in overtraining, where the extracted features do not generalize well to new data.\n\nAfter constructing a `SparseFiltering` object, use the `transform` method to map input data to the new output features.\n\n#### Sparse Filtering Objective Function\n\nTo compute an objective function, the sparse filtering algorithm uses the following steps. The objective function depends on the `n`-by-`p` data matrix `X` and a weight matrix `W` that the optimizer varies. The weight matrix `W` has dimensions `p`-by-`q`, where `p` is the number of original features and `q` is the number of requested features.\n\n1. Compute the `n`-by-`q` matrix `X*W`. Apply the approximate absolute value function $\\varphi \\left(u\\right)=\\sqrt{{u}^{2}+{10}^{-8}}$ to each element of `X*W` to obtain the matrix `F`. ϕ is a smooth nonnegative symmetric function that closely approximates the absolute value function.\n\n2. Normalize the columns of `F` by the approximate L2 norm. 
In other words, define the normalized matrix $\\stackrel{˜}{F}\\left(i,j\\right)$ by\n\n`$\\begin{array}{c}‖F\\left(j\\right)‖=\\sqrt{\\sum _{i=1}^{n}{\\left(F\\left(i,j\\right)\\right)}^{2}+{10}^{-8}}\\\\ \\stackrel{˜}{F}\\left(i,j\\right)=F\\left(i,j\\right)/‖F\\left(j\\right)‖.\\end{array}$`\n3. Normalize the rows of $\\stackrel{˜}{F}\\left(i,j\\right)$ by the approximate L2 norm. In other words, define the normalized matrix $\\stackrel{^}{F}\\left(i,j\\right)$ by\n\n`$\\begin{array}{c}‖\\stackrel{˜}{F}\\left(i\\right)‖=\\sqrt{\\sum _{j=1}^{q}{\\left(\\stackrel{˜}{F}\\left(i,j\\right)\\right)}^{2}+{10}^{-8}}\\\\ \\stackrel{^}{F}\\left(i,j\\right)=\\stackrel{˜}{F}\\left(i,j\\right)/‖\\stackrel{˜}{F}\\left(i\\right)‖.\\end{array}$`\n\nThe matrix $\\stackrel{^}{F}$ is the matrix of converted features in `X`. Once `sparsefilt` finds the weights `W` that minimize the objective function h (see below), which the function stores in the output object `Mdl` in the `Mdl.TransformWeights` property, the `transform` function can follow the same transformation steps to convert new data to output features.\n\n4. Compute the objective function h(`W`) as the 1–norm of the matrix $\\stackrel{^}{F}\\left(i,j\\right)$, meaning the sum of all the elements in the matrix (which are nonnegative by construction):\n\n`$h\\left(W\\right)=\\sum _{j=1}^{q}\\sum _{i=1}^{n}\\stackrel{^}{F}\\left(i,j\\right).$`\n5. If you set the `Lambda` name-value pair to a strictly positive value, `sparsefilt` uses the following modified objective function:\n\n`$h\\left(W\\right)=\\sum _{j=1}^{q}\\sum _{i=1}^{n}\\stackrel{^}{F}\\left(i,j\\right)+\\lambda \\sum _{j=1}^{q}{w}_{j}^{T}{w}_{j}.$`\n\nHere, wj is the jth column of the matrix `W` and λ is the value of `Lambda`. The effect of this term is to shrink the weights `W`. 
If you plot the columns of `W` as images, with positive `Lambda` these images appear smooth compared to the same images with zero `Lambda`.\n\n### Reconstruction ICA Algorithm\n\nThe Reconstruction Independent Component Analysis (RICA) algorithm is based on minimizing an objective function. The algorithm maps input data to output features.\n\nThe ICA source model is the following. Each observation x is generated by a random vector s according to\n\n`$x=\\mu +As.$`\n• x is a column vector of length `p`.\n\n• μ is a column vector of length `p` representing a constant term.\n\n• s is a column vector of length `q` whose elements are zero mean, unit variance random variables that are statistically independent of each other.\n\n• A is a mixing matrix of size `p`-by-`q`.\n\nYou can use this model in `rica` to estimate A from observations of x. See Extract Mixed Signals.\n\nThe RICA algorithm begins with a data matrix `X` that has `n` rows and `p` columns consisting of the observations xi:\n\n`$X=\\left[\\begin{array}{c}{x}_{1}^{T}\\\\ {x}_{2}^{T}\\\\ ⋮\\\\ {x}_{n}^{T}\\end{array}\\right].$`\n\nEach row represents one observation and each column represents one measurement. The columns are also called the features or predictors. The algorithm then takes either an initial random `p`-by-`q` weight matrix `W` or uses the weight matrix passed in the `InitialTransformWeights` name-value pair. `q` is the requested number of features that `rica` computes. The weight matrix `W` is composed of columns wi of size `p`-by-1:\n\n`$W=\\left[\\begin{array}{cccc}{w}_{1}& {w}_{2}& \\dots & {w}_{q}\\end{array}\\right].$`\n\nThe algorithm attempts to minimize the Reconstruction ICA Objective Function by using a standard limited memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) quasi-Newton optimizer. See Nocedal and Wright . This optimizer takes up to `IterationLimit` iterations. 
It stops iterating when it takes a step whose norm is less than `StepTolerance`, or when it computes that the norm of the gradient at the current point is less than `GradientTolerance` times a scalar τ, where\n\n`$\\tau =\\mathrm{max}\\left(1,\\mathrm{min}\\left(|f|,{‖{g}_{0}‖}_{\\infty }\\right)\\right).$`\n\n|f| is the norm of the objective function, and ${‖{g}_{0}‖}_{\\infty }$ is the infinity norm of the initial gradient.\n\nThe objective function attempts to obtain a nearly orthonormal weight matrix that minimizes the sum of elements of g(`XW`), where g is a function (described below) that is applied elementwise to `XW`. To understand how the objective function attempts to achieve these goals, see Le, Karpenko, Ngiam, and Ng .\n\nAfter constructing a `ReconstructionICA` object, use the `transform` method to map input data to the new output features.\n\n#### Reconstruction ICA Objective Function\n\nThe objective function uses a contrast function, which you specify by using the `ContrastFcn` name-value pair. The contrast function is a smooth convex function that is similar to an absolute value. By default, the contrast function is $g=\\frac{1}{2}\\mathrm{log}\\left(\\mathrm{cosh}\\left(2x\\right)\\right)$. For other available contrast functions, see `ContrastFcn`.\n\nFor an `n`-by-`p` data matrix `X` and `q` output features, with a regularization parameter λ as the value of the `Lambda` name-value pair, the objective function in terms of the `p`-by-`q` matrix `W` is\n\n`$h=\\frac{\\lambda }{n}\\sum _{i=1}^{n}{‖W{W}^{T}{x}_{i}-{x}_{i}‖}_{2}^{2}+\\frac{1}{n}\\sum _{i=1}^{n}\\sum _{j=1}^{q}{\\sigma }_{j}g\\left({w}_{j}^{T}{x}_{i}\\right)$`\n\nThe σj are known constants that are ±1. When σj = +1, minimizing the objective function h encourages the histogram of ${w}_{j}^{T}{x}_{i}$ to be sharply peaked at 0 (super Gaussian). When σj = –1, minimizing the objective function h encourages the histogram of ${w}_{j}^{T}{x}_{i}$ to be flatter near 0 (sub Gaussian). 
Specify the σj values using the `rica` `NonGaussianityIndicator` name-value pair.\n\nThe objective function h can have a spurious minimum of zero when λ is zero. Therefore, `rica` minimizes h over W that are normalized to 1. In other words, each column wj of W is defined in terms of a column vector vj by\n\n`${w}_{j}=\\frac{{v}_{j}}{\\sqrt{{v}_{j}^{T}{v}_{j}+{10}^{-8}}}.$`\n\n`rica` minimizes over the vj. The resulting minimal matrix `W` provides the transformation from input data `X` to output features `XW`.\n\n Ngiam, Jiquan, Zhenghao Chen, Sonia A. Bhaskar, Pang W. Koh, and Andrew Y. Ng. “Sparse Filtering.” Advances in Neural Information Processing Systems. Vol. 24, 2011, pp. 1125–1133. `https://papers.nips.cc/paper/4334-sparse-filtering.pdf`.\n\n Nocedal, J. and S. J. Wright. Numerical Optimization, Second Edition. Springer Series in Operations Research, Springer Verlag, 2006.\n\n Le, Quoc V., Alexandre Karpenko, Jiquan Ngiam, and Andrew Y. Ng. “ICA with Reconstruction Cost for Efficient Overcomplete Feature Learning.” Advances in Neural Information Processing Systems. Vol. 24, 2011, pp. 1017–1025. `https://papers.nips.cc/paper/4467-ica-with-reconstruction-cost-for-efficient-overcomplete-feature-learning.pdf`."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.80312955,"math_prob":0.99543077,"size":8406,"snap":"2020-10-2020-16","text_gpt3_token_len":1945,"char_repetition_ratio":0.14413235,"word_repetition_ratio":0.27014926,"special_character_ratio":0.20045206,"punctuation_ratio":0.113768585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99919647,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-18T23:17:41Z\",\"WARC-Record-ID\":\"<urn:uuid:79b02200-ffb0-4e5d-8ace-40ad54fc3692>\",\"Content-Length\":\"88297\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5f34235b-fb36-486a-8983-638b293ec146>\",\"WARC-Concurrent-To\":\"<urn:uuid:69518196-21da-46ed-a0fe-4b311573bf3a>\",\"WARC-IP-Address\":\"23.32.68.178\",\"WARC-Target-URI\":\"https://uk.mathworks.com/help/stats/feature-extraction.html\",\"WARC-Payload-Digest\":\"sha1:TP3U2AA4NCBM4ID4OSBM3AN56QNNYXS3\",\"WARC-Block-Digest\":\"sha1:K3ZH77WQ2TK26IUFJCYIWN7XVO2J6OUL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875143815.23_warc_CC-MAIN-20200218210853-20200219000853-00103.warc.gz\"}"} |
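The four numbered steps of the sparse filtering objective above translate almost line-for-line into NumPy. The sketch below is an illustrative re-implementation, not MATLAB's actual `sparsefilt` internals; the `lam` argument plays the role of the `Lambda` name-value pair, and the matrix sizes are arbitrary example choices:

```python
import numpy as np

EPS = 1e-8  # smoothing constant from phi(u) = sqrt(u^2 + 1e-8)

def sparse_filtering_objective(X, W, lam=0.0):
    """Steps 1-5 of the sparse filtering objective h(W) described above."""
    # Step 1: soft absolute value of X*W
    F = np.sqrt((X @ W) ** 2 + EPS)
    # Step 2: normalize each column by its approximate L2 norm
    F = F / np.sqrt((F ** 2).sum(axis=0) + EPS)
    # Step 3: normalize each row by its approximate L2 norm
    F = F / np.sqrt((F ** 2).sum(axis=1, keepdims=True) + EPS)
    # Step 4: the objective is the sum of all (nonnegative) entries
    h = F.sum()
    # Step 5: optional shrinkage term lambda * sum_j w_j' w_j
    if lam > 0:
        h += lam * (W ** 2).sum()
    return h

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))   # n = 50 observations, p = 8 features
W = rng.normal(size=(8, 3))    # q = 3 requested output features
h0 = sparse_filtering_objective(X, W)
h_reg = sparse_filtering_objective(X, W, lam=0.1)
```

In the real toolbox, an L-BFGS optimizer minimizes this quantity over `W`; here the function is only evaluated, not optimized.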
https://rdrr.io/bioc/preprocessCore/man/rcModels.html | [
"# rcModels: Fit row-column model to a matrix In preprocessCore: A collection of pre-processing functions\n\n## Description\n\nThese functions fit row-column effect models to matrices.\n\n## Usage\n\n```rcModelPLM(y, row.effects=NULL, input.scale=NULL)\nrcModelWPLM(y, w, row.effects=NULL, input.scale=NULL)\nrcModelMedianPolish(y)```\n\n## Arguments\n\n- `y`: A numeric matrix\n- `w`: A matrix or vector of weights. These should be non-negative.\n- `row.effects`: If these are supplied then the fitting procedure uses these (and analyzes individual columns separately)\n- `input.scale`: If supplied, will be used rather than estimating the scale from the data\n\n## Details\n\nThese functions fit row-column models to the specified input matrix. Specifically the model\n\ny_ij = r_i + c_j + e_ij\n\nwith r_i and c_j as row and column effects respectively. Note that these functions treat the row effect as the parameter to be constrained, using sum to zero (for `rcModelPLM` and `rcModelWPLM`) or median of zero (for `rcModelMedianPolish`).\n\nThe `rcModelPLM` and `rcModelWPLM` functions use a robust linear model procedure for fitting the model.\n\nThe function `rcModelMedianPolish` uses the median polish algorithm.\n\n## Value\n\nA list with the following items:\n\n- `Estimates`: The parameter estimates, stored in column effect then row effect order\n- `Weights`: The final weights used\n- `Residuals`: The residuals\n- `StdErrors`: Standard error estimates, stored in column effect then row effect order\n- `Scale`: Scale estimates\n\n## See Also\n\n`rcModelPLMr`, `rcModelPLMd`\n\n## Examples\n\n```col.effects <- c(10,11,10.5,12,9.5)\nrow.effects <- c(seq(-0.5,-0.1,by=0.1),seq(0.1,0.5,by=0.1))\ny <- outer(row.effects, col.effects,\"+\")\nw <- runif(50)\n\nrcModelPLM(y)\nrcModelWPLM(y, w)\nrcModelMedianPolish(y)\n\ny <- y + rnorm(50)\n\nrcModelPLM(y)\nrcModelWPLM(y, w)\nrcModelMedianPolish(y)\n\nrcModelPLM(y,row.effects=row.effects)\nrcModelWPLM(y,w,row.effects=row.effects)\n\nrcModelPLM(y,input.scale=1.0)\nrcModelWPLM(y, w,input.scale=1.0)\n\nrcModelPLM(y,row.effects=row.effects,input.scale=1.0)\nrcModelWPLM(y,w,row.effects=row.effects,input.scale=1.0)```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5365507,"math_prob":0.9784495,"size":2113,"snap":"2022-27-2022-33","text_gpt3_token_len":626,"char_repetition_ratio":0.1844476,"word_repetition_ratio":0.042105265,"special_character_ratio":0.2654993,"punctuation_ratio":0.17954545,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99684566,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-04T08:32:50Z\",\"WARC-Record-ID\":\"<urn:uuid:5cd03631-c083-4782-adb2-34be9d34dab8>\",\"Content-Length\":\"43360\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b8060913-382b-4444-a3ef-5b10a8b7b3fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:482bdb43-b7cc-472a-9d77-96600d8c7037>\",\"WARC-IP-Address\":\"51.81.83.12\",\"WARC-Target-URI\":\"https://rdrr.io/bioc/preprocessCore/man/rcModels.html\",\"WARC-Payload-Digest\":\"sha1:4SLZXKEUBWMWGIR7KE2YYWNIPA25JT7P\",\"WARC-Block-Digest\":\"sha1:JGXODBVXFHFOULVFSVMD3WU4MZBVNI3G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104364750.74_warc_CC-MAIN-20220704080332-20220704110332-00188.warc.gz\"}"} |
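The median polish fit behind `rcModelMedianPolish` can be sketched in a few lines of NumPy. This is an independent illustration of the row-column decomposition y_ij = overall + r_i + c_j + e_ij (the real function is compiled code inside preprocessCore), reusing the additive toy data from the help-page examples:

```python
import numpy as np

def median_polish(y, n_iter=10):
    """Fit y_ij = overall + r_i + c_j + e_ij with median-zero row/column effects."""
    resid = np.asarray(y, dtype=float).copy()
    overall = 0.0
    row = np.zeros(resid.shape[0])
    col = np.zeros(resid.shape[1])
    for _ in range(n_iter):
        # sweep row medians out of the residuals into the row effects
        rmed = np.median(resid, axis=1)
        row += rmed
        resid -= rmed[:, None]
        # keep the column effects median-zero, folding into the overall term
        m = np.median(col)
        col -= m
        overall += m
        # sweep column medians into the column effects
        cmed = np.median(resid, axis=0)
        col += cmed
        resid -= cmed[None, :]
        # keep the row effects median-zero
        m = np.median(row)
        row -= m
        overall += m
    return overall, row, col, resid

# Additive toy data mirroring outer(row.effects, col.effects, "+") from the examples
col_eff = np.array([10, 11, 10.5, 12, 9.5])
row_eff = np.array([-0.5, -0.4, -0.3, -0.2, -0.1, 0.1, 0.2, 0.3, 0.4, 0.5])
y = row_eff[:, None] + col_eff[None, :]
overall, row, col, resid = median_polish(y)
```

For exactly additive data like this, the residuals vanish and the fitted row effects recover `row_eff` (whose median is already zero).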
https://www.proofwiki.org/wiki/Definition:Model_(Logic) | [
"# Definition:Model (Logic)\n\n## Definition\n\nLet $\\mathscr M$ be a formal semantics for a logical language $\\LL$.\n\nLet $\\MM$ be a structure of $\\mathscr M$.\n\n### Model of Logical Formula\n\nLet $\\phi$ be a logical formula of $\\LL$.\n\nThen $\\MM$ is a model of $\\phi$ if and only if:\n\n$\\MM \\models_{\\mathscr M} \\phi$\n\nthat is, if $\\phi$ is valid in $\\MM$.\n\n### Model of Set of Logical Formulas\n\nLet $\\FF$ be a set of logical formulas of $\\LL$.\n\nThen $\\MM$ is a model of $\\FF$ if and only if:\n\n$\\MM \\models_{\\mathscr M} \\phi$ for every $\\phi \\in \\FF$\n\nthat is, if it is a model of every logical formula $\\phi \\in \\FF$.\n\n## Specific Examples\n\n### Boolean Interpretations\n\nLet $\\LL_0$ be the language of propositional logic.\n\nLet $v: \\LL_0 \\to \\set {T, F}$ be a boolean interpretation of $\\LL_0$.\n\nThen $v$ models a WFF $\\phi$ if and only if:\n\n$\\map v \\phi = T$\n\nand this relationship is denoted as:\n\n$v \\models_{\\mathrm {BI} } \\phi$\n\nWhen pertaining to a collection of WFFs $\\FF$, one says $v$ models $\\FF$ if and only if:\n\n$\\forall \\phi \\in \\FF: v \\models_{\\mathrm {BI} } \\phi$\n\nthat is, if and only if it models all elements of $\\FF$.\n\nThis can be expressed symbolically as:\n\n$v \\models_{\\mathrm {BI}} \\FF$\n\n### Predicate Logic\n\nLet $\\LL_1$ be the language of predicate logic.\n\nLet $\\AA$ be a structure for predicate logic.\n\nThen $\\AA$ models a sentence $\\mathbf A$ if and only if:\n\n$\\map {\\operatorname{val}_\\AA} {\\mathbf A} = T$\n\nwhere $\\map {\\operatorname{val}_\\AA} {\\mathbf A}$ denotes the value of $\\mathbf A$ in $\\AA$.\n\nThis relationship is denoted:\n\n$\\AA \\models_{\\mathrm{PL} } \\mathbf A$\n\nWhen pertaining to a collection of sentences $\\FF$, one says $\\AA$ models $\\FF$ if and only if:\n\n$\\forall \\mathbf A \\in \\FF: \\AA \\models_{\\mathrm{PL} } \\mathbf A$\n\nthat is, if and only if it models all elements of $\\FF$.\n\nThis can be expressed 
symbolically as:\n\n$\\AA \\models_{\\mathrm {PL} } \\FF$\n\n## Also known as\n\nIf $\\MM$ is a model of $\\phi$, respectively $\\FF$, one sometimes says that $\\MM$ models $\\phi$, respectively $\\FF$."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5987068,"math_prob":1.0000026,"size":2324,"snap":"2022-27-2022-33","text_gpt3_token_len":718,"char_repetition_ratio":0.13491379,"word_repetition_ratio":0.205,"special_character_ratio":0.33950087,"punctuation_ratio":0.1627907,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000095,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T12:20:15Z\",\"WARC-Record-ID\":\"<urn:uuid:7892d06f-8f76-4c6d-9c01-f061e4e54725>\",\"Content-Length\":\"44455\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1a8c7ed2-cd61-4bae-b11d-95db4096d53b>\",\"WARC-Concurrent-To\":\"<urn:uuid:36f6c81b-12a9-460f-9f18-ed064ceb5ca9>\",\"WARC-IP-Address\":\"172.67.198.93\",\"WARC-Target-URI\":\"https://www.proofwiki.org/wiki/Definition:Model_(Logic)\",\"WARC-Payload-Digest\":\"sha1:JOKDSOAW7WUCKN5UC4X7XTJARUOOFHIW\",\"WARC-Block-Digest\":\"sha1:JRPOKARAS4WDMS4SZSC72QTFM3LDCJEM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104672585.89_warc_CC-MAIN-20220706121103-20220706151103-00201.warc.gz\"}"} |
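The "model of a set of logical formulas" definition is easy to make concrete for boolean interpretations: represent each WFF as a predicate over a valuation and check it pointwise. A minimal Python sketch (the encoding of formulas as lambdas over a dict is an ad-hoc illustration, not part of the formal definitions above):

```python
def models(v, formulas):
    """Return True iff the valuation v is a model of every formula in the set."""
    return all(phi(v) for phi in formulas)

# WFFs over atoms p, q, written as predicates on a valuation dict
F = [
    lambda v: (not v["p"]) or v["q"],   # p -> q
    lambda v: v["p"] or v["q"],         # p or q
    lambda v: v["q"] or (not v["p"]),   # (not q) -> (not p), i.e. q or not p
]

v1 = {"p": True, "q": True}    # maps every formula in F to T
v2 = {"p": True, "q": False}   # falsifies p -> q
m1 = models(v1, F)
m2 = models(v2, F)
```

Here `v1` models F (written v1 ⊨ F in the article's notation) while `v2` does not, since it fails the first formula.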
https://studyqas.com/the-table-represents-a-linear-function-a-two-column-table/ | [
"# The table represents a linear function. A two column table with six rows. The first column, x, has the entries, negative 2, negative\n\nThe table represents a linear function. A two column table with six rows. The first column, x, has the entries, negative 2, negative 1, 0, 1, 2. The second column, y, has the entries, 8, 2, negative 4, negative 10, negative 16. What is the slope of the function? –6 –4 4 6\n\n## This Post Has 5 Comments\n\n1.",
null,
"Skylynn11 says:\n\nc just took it\n\nStep-by-step explanation:\n\n2.",
null,
"madim1275 says:\n\nGiven : The table represents a linear function.\n\nTo Find : slope of the function\n\nSolution:\n\nx y\n\n-2 8\n\n-1 2\n\n0 -4\n\n1 -10\n\n2 -16\n\nSlope of the line = ( 2 - 8) / ( - 1 -(-2))\n\n= - 6 / 1\n\n= - 6\n\n-6 is slope of the line\n\ny -(-4) = -6(x - 0)\n\n=> y + 4 = -6x\n\n=> 6x + y + 4 = 0 is Equation of line\n\nslope of the function = - 6\n\n3.",
null,
"jennaranelli05 says:\n\nTHIRD OPTION.\n\nStep-by-step explanation:\n\nThe slope can found by using the following formula:\n\n$m=\\frac{y_2-y_1}{x_2-x_1}$\n\nIn this case, in order to find the slope of the given function, we need to choose two points from the table provided in the exercise. Let's choose these points:\n\n$(-2,-2)\\\\\\\\(2,10)$\n\nWe can identify that:\n\n$y_2=-2\\\\y_1=10\\\\\\\\x_2=-2\\\\x_1=2$\n\nSubstituting these values into the formula, we get that the slope of the function is:\n\n$m=\\frac{-2-10}{-2-2}\\\\\\\\m=3$\n\nThis matches with the third option.\n\n4.",
null,
"Maddy4965 says:\n\nbutter\n\nStep-by-step explanation:\n\n5.",
null,
"milkshakegrande101 says:\n\nthe slope of the graph is 4 ( because -4 minus 0 is negative 4 but u have to flip it to a positive number)\n\nStep-by-step explanation:\n\nI hope this helps"
] | [
null,
"https://secure.gravatar.com/avatar/",
null,
"https://secure.gravatar.com/avatar/",
null,
"https://secure.gravatar.com/avatar/",
null,
"https://secure.gravatar.com/avatar/",
null,
"https://secure.gravatar.com/avatar/",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8198274,"math_prob":0.99376476,"size":2249,"snap":"2023-14-2023-23","text_gpt3_token_len":721,"char_repetition_ratio":0.14209354,"word_repetition_ratio":0.024813896,"special_character_ratio":0.33881724,"punctuation_ratio":0.12474438,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992648,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-11T01:15:48Z\",\"WARC-Record-ID\":\"<urn:uuid:b70c0c75-0db8-4381-96f6-92470c027ad8>\",\"Content-Length\":\"137141\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:64bef93b-4c7b-4d88-a3c0-9f50b0615dd9>\",\"WARC-Concurrent-To\":\"<urn:uuid:faec5525-e412-4325-af1a-eb92569d1059>\",\"WARC-IP-Address\":\"172.67.196.109\",\"WARC-Target-URI\":\"https://studyqas.com/the-table-represents-a-linear-function-a-two-column-table/\",\"WARC-Payload-Digest\":\"sha1:BN2IQZH6HRDNAHFUHLGBVJSUUBRMAARX\",\"WARC-Block-Digest\":\"sha1:4ZWTRVK3WWW5SX7ESN5CM5CLSTPPNMAA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224646652.16_warc_CC-MAIN-20230610233020-20230611023020-00701.warc.gz\"}"} |
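The slope computation the answers rely on can be checked mechanically. A small Python sketch, reading the first y entry as 8 so that the table is actually linear (consistent with the accepted slope of -6):

```python
def slope(p1, p2):
    """Slope of the line through two points (x1, y1) and (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

# (x, y) rows of the table
pts = [(-2, 8), (-1, 2), (0, -4), (1, -10), (2, -16)]

# For a linear function, every pair of consecutive rows gives the same slope
slopes = [slope(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
```

Every consecutive pair yields -6, confirming both the linearity of the table and the answer choice.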
https://mathscribe.com/grade-8/congruent/similarity.html | [
"# Similarity\n\nYou have already learned about rigid motions and dilations. Two figures that are related by a combination of rigid motions and dilations are called similar. Roughly speaking, this means they are the same shape as each other (but perhaps not the same size). In this lesson, you will study similarity, and learn about ways to determine that two triangles are similar to each other.\n\n## Definition and properties of similarity\n\nTwo figures are called similar if there is some combination of translations, rotations, dilations, and reflections which makes them coincide with each other.\n\n Using the sliders to the left, what combination of translations, rotations, dilations, and reflections do you need to make the red sailboat coincide with the black sailboat? (Start by using translations to make the point \\$\\cl\"red\"A\\$ coincide with the corresponding point on the black sailboat. Then use the other controls to get the rest of the red sailboat to coincide.)\n Are the red sailboat and the black sailboat similar?\n Using the sliders to the left, what combination of translations, rotations, dilations, and reflections do you need to make the red sailboat coincide with the black sailboat?\n Are the red sailboat and the black sailboat similar?\n\nA combination of rigid motions and dilations is called a similarity transformation. A triangle \\$\\cl\"red\"{▵ABC}\\$ is drawn to the left, with sliders that allow you to transform it using a similarity transformation made up of several dilations and rigid motions.\n\n As you slide the sliders, do the sides of \\$\\cl\"red\"{▵ABC}\\$ remain straight?\n Do the measures of \\$\\cl\"red\"{▵ABC}\\$’s angles change or stay the same?\n Do the lengths of its sides change or stay the same?\n\nBecause any rigid motion or dilation takes straight line segments to straight line segments and preserves angle measures, any similarity transformation does as well. 
Because any rigid motion preserves lengths, and any dilation multiplies every length by the same number, any similarity transformation multiplies every length by the same number.\n\n## Triangle similarity — AA\n\nTwo triangles \\$\\cl\"red\"{▵ABC}\\$ and \\$\\cl\"blue\"{▵A'B'C'}\\$ are drawn to the left. There is one slider that lets you change the measures of \\$\\cl\"red\"{∠A}\\$ and \\$\\cl\"blue\"{∠A'}\\$, and another slider that lets you change the measures of \\$\\cl\"red\"{∠B}\\$ and \\$\\cl\"blue\"{∠B'}\\$.\n\n When the measure of \\$\\cl\"red\"{∠A}\\$ is 40˚ and the measure of \\$\\cl\"red\"{∠B}\\$ is 80˚, what is the measure of \\$\\cl\"red\"{∠C}\\$?\n When the measure of \\$\\cl\"blue\"{∠A'}\\$ is 40˚ and the measure of \\$\\cl\"blue\"{∠B'}\\$ is 80˚, what is the measure of \\$\\cl\"blue\"{∠C'}\\$?\n However you slide the sliders, \\$\\cl\"red\"{∠A}\\$ and \\$\\cl\"blue\"{∠A'}\\$ will always have equal measure, as will \\$\\cl\"red\"{∠B}\\$ and \\$\\cl\"blue\"{∠B'}\\$. Will \\$\\cl\"red\"{∠C}\\$ and \\$\\cl\"blue\"{∠C'}\\$ always have equal measure?\n\nAs you can see:\n\nIf you have any two triangles (such as \\$\\cl\"red\"{▵ABC}\\$ and \\$\\cl\"blue\"{▵A'B'C'}\\$) with two pairs of related angles that are equal in measure (for example, \\$\\cl\"red\"{∠A}\\$ and \\$\\cl\"blue\"{∠A'}\\$ are equal in measure, as are \\$\\cl\"red\"{∠B}\\$ and \\$\\cl\"blue\"{∠B'}\\$), then the third pair of related angles (in this case \\$\\cl\"red\"{∠C}\\$ and \\$\\cl\"blue\"{∠C'}\\$) will also be equal in measure.\n\nIn aa-to-aaa-qn, you saw that, in any two triangles with two pairs of related equal-measured angles, the third pair of related angles were also equal-measured. 
Now we’ll explore what else can be said about triangles with equal-measured related angles.\n\nThe two triangles \\$\\cl\"red\"{▵ABC}\\$ and \\$\\cl\"blue\"{▵A'B'C'}\\$ drawn to the left both have a 45˚ angle (\\$\\cl\"red\"{∠A}\\$ and \\$\\cl\"blue\"{∠A'}\\$) and a 60˚ angle (\\$\\cl\"red\"{∠B}\\$ and \\$\\cl\"blue\"{∠B'}\\$).\n\n You can slide the slider to dilate \\$\\cl\"red\"{▵ABC}\\$. How much do you need to dilate \\$\\cl\"red\"{▵ABC}\\$ by to make the side \\$\\cl\"red\"\\ov{AB}\\$ have the same length as the side \\$\\cl\"blue\"\\ov{A'B'}\\$?\n Once the sides \\$\\cl\"red\"\\ov{AB}\\$ and \\$\\cl\"blue\"\\ov{A'B'}\\$ are equally long, the triangles \\$\\cl\"red\"{▵ABC}\\$ and \\$\\cl\"blue\"{▵A'B'C'}\\$ have two pairs of corresponding equal-measured angles, and the corresponding sides between those angles have equal length. That is, the triangles have corresponding measurements equal for an Angle, then a Side, and then another Angle. Which congruence rule applies to two triangles like this: SAS, ASA, or SSS?\n If the slider is set so that the sides \\$\\cl\"red\"\\ov{AB}\\$ and \\$\\cl\"blue\"\\ov{A'B'}\\$ are equally long, are \\$\\cl\"red\"{▵ABC}\\$ and \\$\\cl\"blue\"{▵A'B'C'}\\$ congruent to each other?\n\nClick . Now the original triangles \\$\\cl\"red\"{▵ABC}\\$ and \\$\\cl\"blue\"{▵A'B'C'}\\$ are drawn, along with the dilated copy \\$\\cl\"green\"{▵A''B''C''}\\$ of \\$\\cl\"red\"{▵ABC}\\$ in which \\$\\cl\"green\"\\ov{A''B''}\\$ has the same length as \\$\\cl\"blue\"\\ov{A'B'}\\$.\n\n What rigid motion (combination of translations, rotations, and reflections) do you need to apply to the green triangle \\$▵A''B''C''\\$ to make it coincide with the blue triangle \\$▵A'B'C'\\$?\n You have found a dilation which takes the red triangle \\$▵ABC\\$ to the green triangle \\$▵A''B''C''\\$, and a combination of translations, rotations, and reflections which takes the green triangle \\$▵A''B''C''\\$ to the blue triangle \\$▵A'B'C'\\$. 
Is there a similarity transformation (combination of dilations, translations, rotations, and reflections) which takes the red triangle to the blue triangle?\n Are the triangles \\$\\cl\"red\"{▵ABC}\\$ and \\$\\cl\"blue\"{▵A'B'C'}\\$ similar?\n\nIf you have two triangles with two pairs of corresponding equal-measured angles, as in \\$\\cl\"red\"{▵ABC}\\$ and \\$\\cl\"blue\"{▵A'B'C'}\\$, then you can always make the triangles coincide with a similarity transformation in this way:\n\n• Using a dilation, make the sides between the two angles equally long.\n• Now, by the ASA criterion, these two triangles are congruent. Use rigid motions to make them coincide.\n\nSo the two triangles are similar. This is called the Angle-Angle or AA rule for similarity of triangles. Because of what you learned in similarity-transformation-qn, that also means that the length of every side of one of those triangles is the same multiple of the length of the corresponding side of the other triangle.\n\n Three triangles are drawn to the left, along with some of their measurements. Which triangle must be similar to the red triangle (\\$\\cl\"red\"{▵ABC}\\$) by the AA rule: the blue triangle (\\$\\cl\"blue\"{▵A'B'C'}\\$) or the green triangle (\\$\\cl\"green\"{▵A''B''C''}\\$)?\n\n## Triangle similarity — proportional sides\n\nTwo triangles \\$\\cl\"red\"{▵ABC}\\$ and \\$\\cl\"blue\"{▵A'B'C'}\\$ are drawn to the left. Notice that every side of \\$\\cl\"blue\"{▵A'B'C'}\\$ is twice as long as the corresponding side of \\$\\cl\"red\"{▵ABC}\\$.\n\n You can slide the slider to dilate \\$\\cl\"red\"{▵ABC}\\$. 
How much do you need to dilate \\$\\cl\"red\"{▵ABC}\\$ by to make the side \\$\\cl\"red\"\\ov{AB}\\$ have the same length as the side \\$\\cl\"blue\"\\ov{A'B'}\\$?\nWhen \\$\\cl\"red\"\\ov{AB}\\$ has the same length as \\$\\cl\"blue\"\\ov{A'B'}\\$, what are the lengths of \\$\\cl\"red\"\\ov{BC}\\$ and \\$\\cl\"red\"\\ov{AC}\\$?\n length of \\$\\cl\"red\"\\ov{BC}\\$: length of \\$\\cl\"red\"\\ov{AC}\\$:\n Are these lengths equal to the lengths of the sides \\$\\cl\"blue\"\\ov{B'C'}\\$ and \\$\\cl\"blue\"\\ov{A'C'}\\$ of \\$\\cl\"blue\"{▵A'B'C'}\\$?\n When two triangles have all corresponding Side lengths equal, which congruence rule applies to those triangles: SAS, ASA, or SSS?\n If the slider is set so that the sides \\$\\cl\"red\"\\ov{AB}\\$ and \\$\\cl\"blue\"\\ov{A'B'}\\$ are equally long, are \\$\\cl\"red\"{▵ABC}\\$ and \\$\\cl\"blue\"{▵A'B'C'}\\$ congruent to each other?\n\nClick . Now the original triangles \\$\\cl\"red\"{▵ABC}\\$ and \\$\\cl\"blue\"{▵A'B'C'}\\$ are drawn, along with the dilated copy \\$\\cl\"green\"{▵A''B''C''}\\$ of \\$\\cl\"red\"{▵ABC}\\$ that has the same corresponding side lengths as \\$\\cl\"blue\"{▵A'B'C'}\\$.\n\n What rigid motion do you need to apply to the green triangle \\$▵A''B''C''\\$ to make it coincide with the blue triangle \\$▵A'B'C'\\$?\n You have found a dilation which takes the red triangle \\$▵ABC\\$ to the green triangle \\$▵A''B''C''\\$, and a combination of translations, rotations, and reflections which takes the green triangle \\$▵A''B''C''\\$ to the blue triangle \\$▵A'B'C'\\$. 
Is there a similarity transformation (combination of dilations, translations, rotations, and reflections) which takes the red triangle to the blue triangle?

Are the triangles $\cl"red"{▵ABC}$ and $\cl"blue"{▵A'B'C'}$ similar?

If you can multiply the side lengths of one triangle by a single number $r$ to get the side lengths of another triangle (the way each side of $\cl"blue"{▵A'B'C'}$ was 2 times as long as the corresponding side of $\cl"red"{▵ABC}$), then you can always make them coincide with a similarity transformation in this way:

• Using a dilation, make their corresponding sides equally long.
• Now, by the SSS criterion, these two triangles are congruent. Use rigid motions to make them coincide.

So the two triangles are similar. This is called the proportional sides rule for similarity of triangles. Because of what you learned in similarity-transformation-qn, that also means that every angle of each triangle has a corresponding equal-measured angle in the other triangle.

Three triangles are drawn to the left, along with some of their measurements. We would like to use the proportional sides rule to determine whether the blue triangle ($\cl"blue"{▵A'B'C'}$) or the green triangle ($\cl"green"{▵A''B''C''}$) is similar to the red triangle ($\cl"red"{▵ABC}$).

Notice that the shortest side of the red triangle has a length of 2, while the shortest side of the blue triangle has a length of 3. So, if you can multiply the side lengths of the red triangle by a single number to get the side lengths of the blue triangle, that number must be $$3/2 = 1.5$$.

If you multiply the side lengths of the red triangle by 1.5, do you get the side lengths of the blue triangle?

If you can multiply the side lengths of the red triangle by a single number to get the side lengths of the green triangle, what must that number be?

If you multiply the side lengths of the red triangle by that number, do you get the side lengths of the green triangle?

Which triangle must be similar to the red triangle ($\cl"red"{▵ABC}$) by the proportional sides rule: the blue triangle ($\cl"blue"{▵A'B'C'}$) or the green triangle ($\cl"green"{▵A''B''C''}$)?

## Non-similar figures

It’s possible to decide that two triangles are similar just by looking at their angles (by using the AA rule) or just by looking at their sides (by using the proportional sides rule). Now we’ll study whether or not there are rules like this which allow you to determine that two quadrilaterals (four-sided polygons) are similar.

Two quadrilaterals $\cl"red"{ABCD}$ and $\cl"blue"{A'B'C'D'}$ are drawn to the left, with a slider that dilates $\cl"red"{ABCD}$. Is there any way you can slide the slider so that $\cl"red"{ABCD}$ and $\cl"blue"{A'B'C'D'}$ are congruent?

Are $\cl"red"{ABCD}$ and $\cl"blue"{A'B'C'D'}$ similar?

Notice that all of the angles of both quadrilaterals are right angles, so they are all equal. Can there be a rule for quadrilaterals that tells you they are similar only by looking at their angles (like the AA rule for triangles)?

Two quadrilaterals $\cl"red"{ABCD}$ and $\cl"blue"{A'B'C'D'}$ are drawn to the left, with a slider that dilates $\cl"red"{ABCD}$. Is there any way you can slide the slider so that $\cl"red"{ABCD}$ and $\cl"blue"{A'B'C'D'}$ are congruent?

Are $\cl"red"{ABCD}$ and $\cl"blue"{A'B'C'D'}$ similar?

Notice that both quadrilaterals have four equal sides, which means that however you dilate $\cl"red"{ABCD}$ the two quadrilaterals will have proportional sides. Can there be a rule for quadrilaterals that tells you they are similar only by looking at their sides (like the proportional sides rule for triangles)?
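The proportional-sides check described above (find the candidate scale factor from the shortest sides, then test every pair of corresponding sides) can be sketched in a few lines of code. The helper name and the sample side lengths below are illustrative, not taken from the lesson's figures:

```python
def similar_by_proportional_sides(tri1, tri2, tol=1e-9):
    """Return the scale factor r if every side of tri2 is r times the
    corresponding side of tri1 (proportional sides rule), else None."""
    a, b = sorted(tri1), sorted(tri2)   # pair shortest with shortest, etc.
    r = b[0] / a[0]                     # candidate factor from the shortest sides
    if all(abs(y - r * x) <= tol for x, y in zip(a, b)):
        return r
    return None

# Hypothetical side lengths: a "red" triangle and a "blue" one scaled by 1.5.
print(similar_by_proportional_sides([2, 3, 4], [3, 4.5, 6]))   # 1.5
print(similar_by_proportional_sides([2, 3, 4], [3, 4.5, 7]))   # None
```

If the function returns a number, a dilation by that factor followed by rigid motions (via SSS congruence) makes the two triangles coincide, exactly as in the two-step argument above.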
https://topnotchhomeworks.com/exercises-943/

1. What is trend analysis? Explain how the percent change from one period to the next is calculated.
2. What is common-size analysis? How is common-size analysis information used?
3. Explain the difference between trend analysis and common-size analysis.
4. Name the ratios used to evaluate profitability. Explain what the statement “evaluate profitability” means.
5. Coca-Cola’s return on assets was 19.4 percent, and return on common shareholders’ equity was 41.7 percent. Briefly explain why these two percentages are different.
6. Coca-Cola had earnings per share of $5.12, and PepsiCo had earnings per share of $3.97. Is it accurate to conclude PepsiCo was more profitable? Explain your reasoning.
7. Name the ratios used to evaluate short-term liquidity. Explain what the statement “evaluate short-term liquidity” means.
8. Explain the difference between the current ratio and the quick ratio.
9. Coca-Cola had an inventory turnover ratio of 5.07 times (every 71.99 days), and PepsiCo had an inventory turnover ratio of 8.87 times (every 41.15 days). Which company had the better inventory turnover? Explain your reasoning.
10. Name the ratios used to evaluate long-term solvency. Explain what the term “long-term solvency” means.
11. Name the measures used to determine and evaluate the market value of a company. Briefly describe the meaning of each measure.
12. What is the balanced scorecard? Briefly describe the four perspectives of the balanced scorecard.
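Questions 1, 2, and 9 each turn on a small calculation: percent change for trend analysis, a line item as a percent of a base figure for common-size analysis, and days in inventory from the turnover ratio. A minimal sketch — the dollar figures in the first two calls are made up, while the turnover ratios 5.07 and 8.87 are the ones quoted in question 9:

```python
def percent_change(base, current):
    """Trend analysis: change from the base period as a percent of the base."""
    return (current - base) / base * 100

def common_size(item, base_amount):
    """Common-size analysis: a line item as a percent of a base figure
    (e.g., net sales on the income statement, total assets on the balance sheet)."""
    return item / base_amount * 100

def days_in_inventory(turnover):
    """Convert an inventory turnover ratio into days needed to sell the inventory."""
    return 365 / turnover

print(round(percent_change(200_000, 230_000), 1))  # 15.0  (hypothetical sales figures)
print(round(common_size(46_000, 230_000), 1))      # 20.0  (hypothetical line item)
print(round(days_in_inventory(5.07), 2))           # 71.99 (Coca-Cola, per question 9)
print(round(days_in_inventory(8.87), 2))           # 41.15 (PepsiCo, per question 9)
```

Note how the days-in-inventory conversion reproduces the parenthetical figures in question 9, which is why a higher turnover ratio means inventory sells faster.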