URL: string (length 15 to 1.68k)
text_list: sequence (length 1 to 199)
image_list: sequence (length 1 to 199)
metadata: string (length 1.19k to 3.08k)
https://stackoverflow.com/questions/1102692/how-to-alpha-blend-rgba-unsigned-byte-color-fast/61943155#61943155
[ "# How to alpha blend RGBA unsigned byte color fast?\n\nI am using c++ , I want to do alpha blend using the following code.\n\n``````#define CLAMPTOBYTE(color) \\\nif ((color) & (~255)) { \\\ncolor = (BYTE)((-(color)) >> 31); \\\n} else { \\\ncolor = (BYTE)(color); \\\n}\n#define GET_BYTE(accessPixel, x, y, scanline, bpp) \\\n((BYTE*)((accessPixel) + (y) * (scanline) + (x) * (bpp)))\n\nfor (int y = top ; y < bottom; ++y)\n{\nBYTE* resultByte = GET_BYTE(resultBits, left, y, stride, bytepp);\nBYTE* srcByte = GET_BYTE(srcBits, left, y, stride, bytepp);\nBYTE* srcByteTop = GET_BYTE(srcBitsTop, left, y, stride, bytepp);\nint alpha = 0;\nint red = 0;\nint green = 0;\nint blue = 0;\nfor (int x = left; x < right; ++x)\n{\nred = (srcByteTop[R] * alpha + srcByte[R] * (255 - alpha)) / 255;\ngreen = (srcByteTop[G] * alpha + srcByte[G] * (255 - alpha)) / 255;\nblue = (srcByteTop[B] * alpha + srcByte[B] * (255 - alpha)) / 255;\nCLAMPTOBYTE(red);\nCLAMPTOBYTE(green);\nCLAMPTOBYTE(blue);\nresultByte[R] = red;\nresultByte[G] = green;\nresultByte[B] = blue;\nsrcByte += bytepp;\nsrcByteTop += bytepp;\nresultByte += bytepp;\n}\n}\n``````\n\nhowever I find it is still slow, it takes about 40 - 60 ms when compose two 600 * 600 image. Is there any method to improve the speed to less then 16ms?\n\nCan any body help me to speed this code? Many thanks!\n\n• What compiler are you using? What platform are you developing this software for? Are you willing to use off the shelf tools? Jul 9, 2009 at 23:36\n• I am using VS2005, the software is designed for windows platform. I am willing to use any method to accelerate this code. I think maybe it can be accelerated alot Jul 10, 2009 at 2:13\n• Let me know if you have trouble coding up the rest of the SIMD instructions in my solution Jul 11, 2009 at 21:40\n\nUse SSE - start around page 131.\n\nThe basic workflow\n\n1. Load 4 pixels from src (16 1 byte numbers) RGBA RGBA RGBA RGBA (streaming load)\n\n2. Load 4 more which you want to blend with srcbytetop RGBx RGBx RGBx RGBx\n\n3. Do some swizzling so that the A term in 1 fills every slot I.e\n\nxxxA xxxB xxxC xxxD -> AAAA BBBB CCCC DDDD\n\nIn my solution below I opted instead to re-use your existing \"maskcurrent\" array but having alpha integrated into the \"A\" field of 1 will require less loads from memory and thus be faster. Swizzling in this case would probably be: And with mask to select A, B, C, D. Shift right 8, Or with origional, shift right 16, or again.\n\n4. Add the above to a vector that is all -255 in every slot\n\n5. Multiply 1 * 4 (source with 255-alpha) and 2 * 3 (result with alpha).\n\nYou should be able to use the \"multiply and discard bottom 8 bits\" SSE2 instruction for this.\n\n6. add those two (4 and 5) together\n\n7. 
Store those somewhere else (if possible) or on top of your destination (if you must)\n\nHere is a starting point for you:\n\n`````` //Define your image with __declspec(align(16)) i.e. char __declspec(align(16)) image[640*480]\n// so the first byte is aligned correctly for SIMD.\n// Stride must be a multiple of 16.\n\nfor (int y = top ; y < bottom; ++y)\n{\nBYTE* resultByte = GET_BYTE(resultBits, left, y, stride, bytepp);\nBYTE* srcByte = GET_BYTE(srcBits, left, y, stride, bytepp);\nBYTE* srcByteTop = GET_BYTE(srcBitsTop, left, y, stride, bytepp);\nfor (int x = left; x < right; x += 4)\n{\n//If you can't align, use _mm_loadu_si128()\n// Step 1\n// Step 2\n\n// Step 3\n// Fill the 4 positions for the first pixel with maskCurrent, etc\n// Could do better with shifts and so on, but this is clear\n\n// step 4\n\n//Todo : Multiply, with saturate - find correct instructions for 4..6\n\nred = (srcByteTop[R] * alpha + srcByte[R] * (255 - alpha)) / 255;\ngreen = (srcByteTop[G] * alpha + srcByte[G] * (255 - alpha)) / 255;\nblue = (srcByteTop[B] * alpha + srcByte[B] * (255 - alpha)) / 255;\nCLAMPTOBYTE(red);\nCLAMPTOBYTE(green);\nCLAMPTOBYTE(blue);\nresultByte[R] = red;\nresultByte[G] = green;\nresultByte[B] = blue;\n//----\n\n// Step 7 - store result.\n//Store aligned if output is aligned on a 16 byte boundary\n_mm_store_si128(reinterpret_cast<__m128i*>(resultByte), result);\n//Slow version if you can't guarantee alignment\n//_mm_storeu_si128(reinterpret_cast<__m128i*>(resultByte), result);\n\n//Move pointers forward 4 places\nsrcByte += bytepp * 4;\nsrcByteTop += bytepp * 4;\nresultByte += bytepp * 4;\n}\n}\n``````\n\nTo find out which AMD processors will run this code (currently it is using SSE2 instructions) see Wikipedia's List of AMD Turion microprocessors. You could also look at other lists of processors on Wikipedia, but my research shows that AMD CPUs from around 4 years ago all support at least SSE2.\n\nYou should expect a good SSE2 implementation to run around 8-16 times faster than your current code. That is because we eliminate branches in the loop, process 4 pixels (or 12 channels) at once and improve cache performance by using streaming instructions. As an alternative to SSE, you could probably make your existing code run much faster by eliminating the if checks you are using for saturation. Beyond that I would need to run a profiler on your workload.\n\nOf course, the best solution is to use hardware support (i.e. code your problem up in DirectX) and have it done on the video card.\n\n• See edits to my original post to address your question. Short answer - yes, if not an ancient CPU. Jul 9, 2009 at 23:23\n• Will it work only on Windows or on other platforms as well? (If I define BYTE, of course) Main question is: are SIMD instructions cross-platform? Aug 18, 2017 at 1:28\n• Yes, SIMD instructions require that the CPU support them, but they don't care about the OS (Windows, etc). The compiler also needs to translate the intrinsics (such as `_mm_set_epi8`) but I believe that GCC can do this. Aug 21, 2017 at 8:03\n\nYou can always calculate the alpha of red and blue at the same time. 
You can also use this trick with the SIMD implementation mentioned before.\n\n``````unsigned int blendPreMulAlpha(unsigned int colora, unsigned int colorb, unsigned int alpha)\n{\nunsigned int rb = (colora & 0xFF00FF) + ( (alpha * (colorb & 0xFF00FF)) >> 8 );\nunsigned int g = (colora & 0x00FF00) + ( (alpha * (colorb & 0x00FF00)) >> 8 );\nreturn (rb & 0xFF00FF) + (g & 0x00FF00);\n}\n\nunsigned int blendAlpha(unsigned int colora, unsigned int colorb, unsigned int alpha)\n{\nunsigned int rb1 = ((0x100 - alpha) * (colora & 0xFF00FF)) >> 8;\nunsigned int rb2 = (alpha * (colorb & 0xFF00FF)) >> 8;\nunsigned int g1 = ((0x100 - alpha) * (colora & 0x00FF00)) >> 8;\nunsigned int g2 = (alpha * (colorb & 0x00FF00)) >> 8;\nreturn ((rb1 | rb2) & 0xFF00FF) + ((g1 | g2) & 0x00FF00);\n}\n``````\n\n0 <= alpha <= 0x100\n\n• Nice trick. You should add handling of saturation in there too (right now it overflows) Jul 9, 2009 at 23:29\n• The overflow is intentional, it's handled in the return statement. Jul 10, 2009 at 0:41\n• It's got a rather rude handling of overflow: wraparound instead of saturation. Jul 10, 2009 at 8:55\n• @MSalters, could be because of the hangover, but I don't see the overflow; or well, I see an intentional overflow in rb and g, but they're masked out in the return statement. (As long as int is 32 bits). Jul 10, 2009 at 11:57\n• @JasperBekkers, did you actually try your example? With alpha=0xff (opaque), the result is 0xff80 (the red completely disappears, other colours wrong as well). Aug 17, 2012 at 20:10\n\nFor people that want to divide by 255, I found a perfect formula:\n\n``````pt->r = (r+1 + (r >> 8)) >> 8; // fast way to divide by 255\n``````\n• This can be extended to two 16-bit words: ((r+0x10001+((r>>8)&0xFF00FF))>>8) & 0xFF00FF and this allows multiplexing xRxB and AxGx ops in ARGB, similar in RGBA and other variants Dec 23, 2013 at 17:14\n• `(x+1+((x+1)>>8))>>8 // integer div 255 for [0..65790)` -- slightly better Apr 1, 2015 at 16:13\n• `((x+1)*257)>>16 // integer div 255 for [0..65790)` -- alternative formulation which might be faster on some platforms -- interesting notes: Division via Multiplication Apr 1, 2015 at 16:42\n• @nobar: The standard compiler trick of doing division with a multiplicative inverse is also worth considering: `n/255` compiles to asm that does `(n*0x8081) >> 23`. That also works for all 16-bit `n`. (I just noticed your upper bound was higher than 65536). With x86 SSE2, that's one `_mm_mulhi_epu16` and one `_mm_srli_epu16(mul, 23-16)`. `(x+1) * 257` is one paddw and one pmulhuw, so that's actually better (since mul and shift may compete for the same port). Apr 27, 2017 at 4:38\n\nHere are some pointers.\n\nConsider using pre-multiplied foreground images as described by Porter and Duff. As well as potentially being faster, you avoid a lot of potential colour-fringing effects.\n\nThe compositing equation changes from\n\n``````r = kA + (1-k)B\n``````\n\n... to ...\n\n``````r = A + (1-k)B\n``````\n\nAlternatively, you can rework the standard equation to remove one multiply.\n\n``````r = kA + (1-k)B\n== kA + B - kB\n== k(A-B) + B\n``````\n\nI may be wrong, but I think you shouldn't need the clamping either...\n\nI can't comment because I don't have enough reputation, but I want to say that Jasper's version will not overflow for valid input. 
Masking the multiplication result is necessary because otherwise the red+blue multiplication would leave bits in the green channel (this would also be true if you multiplied red and blue separately, you'd still need to mask out bits in the blue channel) and the green multiplication would leave bits in the blue channel. These are bits that are lost to the right shift if you separate the components out, as is often the case with alpha blending. So they're not overflow, or underflow. They're just useless bits that need to be masked out to achieve the expected results.\n\nThat said, Jasper's version is incorrect. It should be 0xFF-alpha (255-alpha), not 0x100-alpha (256-alpha). This would probably not produce a visible error.\n\nI've found an adaptation of Jasper's code to be faster than my old alpha blending code, which was already decent, and am currently using it in my software renderer project. I work with 32-bit ARGB pixels:\n\n``````Pixel AlphaBlendPixels(Pixel p1, Pixel p2)\n{\nstatic const int AMASK = 0xFF000000;\nstatic const int RBMASK = 0x00FF00FF;\nstatic const int GMASK = 0x0000FF00;\nstatic const int AGMASK = AMASK | GMASK;\nstatic const int ONEALPHA = 0x01000000;\nunsigned int a = (p2 & AMASK) >> 24;\nunsigned int na = 255 - a;\nunsigned int rb = ((na * (p1 & RBMASK)) + (a * (p2 & RBMASK))) >> 8;\nunsigned int ag = (na * ((p1 & AGMASK) >> 8)) + (a * (ONEALPHA | ((p2 & GMASK) >> 8)));\nreturn ((rb & RBMASK) | (ag & AGMASK));\n}\n``````\n• This is exactly what I was looking for. Are you certain about the precision with `na = 255 - a` rather than 256, or is it something that can't be helped in this case?\n– Nolo\nDec 10, 2016 at 11:56\n• Sorry for the late response, haven't had anything to contribute on SO in years until tonight and have been going through old notifications. There are inaccuracies either way, due to c/256 not always equaling c/255, but 256 - a is more inaccurate than 255 - a. Rounding up by adding to the highest of the bits you lose could reduce the inaccuracy, but it also isn't perfectly accurate. I'm pretty sure the only way to get perfect accuracy is to divide each color channel individually by 255, which is costly. Jasper's code saturates quickly, while mine tends towards black. Dec 3, 2021 at 8:42\n\nNot exactly answering the question but...\n\nOne thing is to do it fast, the other thing is to do it right. Alpha compositing is a dangerous beast, it looks straightforward and intuitive but common errors have been widespread for decades without anybody noticing it (almost)!\n\nThe most famous and common mistake is about NOT using premultiplied alpha. I highly recommend this: Alpha Blending for Leaves\n\n• It's not necessary to use premultiplied alpha, only to make sure the background color is removed from partially transparent pixels. Removing the background color may be part of the process of converting to premultiplied alpha, but it can be done independently as well. Nov 9, 2011 at 19:29\n\nYou can use 4 bytes per pixel in both images (for memory alignment), and then use SSE instructions to process all channels together. 
Search \"visual studio sse intrinsics\".\n\nFirst of all lets use the proper formula for each color component\n\n`````` v = ( 1-t ) * v0 + t * v1\n``````\n\nwhere t=interpolation parameter [0..1] v0=source color value v1=transfer color value v=output value\n\nReshuffling the terms, we can reduce the number of operations:\n\n`````` v = v0 + t * (v1 - v0)\n``````\n\nYou would need to perform this calculation once per color channel (3 times for RGB).\n\nFor 8-bit unsigned color components, you need to use correct fixed point math:\n\n`````` i = i0 + t * ( ( i1 - i0 ) + 127 ) / 255\n``````\n\nwhere t = interpolation parameter [0..255] i0= source color value [0..255] i1= transfer color value [0..255] i = output color\n\nIf you leave out the +127 then your colors will be biased towards the darker end. Very often, people use /256 or >> 8 for speed. This is not correct! If you divide by 256, you will never be able to reach pure white (255,255,255) because 255/256 is slightly less than one.\n\nI hope this helps.\n\n• Interesting ideas there, but you do pay a steep price for your / 255. You have to calculate an intermediate 16 bit result using t * ( ( v1 - v0 ) + 127 ) that you then divide. Are you sure that your formula is really simpler than ( 1-t ) * v0 + t * v1 ? Remember that 1-t is pre-calculated and that / is often more expensive than * Aug 5, 2009 at 23:34\n• The formula is a reference for what the numerically correct formula looks like. It is certainly slower, however the results are accurate. It is useful to know what the right answer looks like in order to determine if the error in the optimized result is acceptible or not. Aug 7, 2009 at 9:39\n• Yes, i = i0 + t * ( ( i1 - i0 ) + 127 ) / 255 is more efficient than your formula, which for integers would be (I think) : i = ( ( 255 - t ) * i0 + ( t * i1 ) ) / 255 Aug 7, 2009 at 9:42\n• Most images on the PC have Gamma burnt in. So if it's pixel value is say 127, that's NOT exactly half way between white and black. It's actual brighness is.. powf( (c) / 255.f, gamma) .. or about 0.19 So all your calculations that assumes pixels brightness is linear are wrong. Mar 31, 2010 at 23:15\n• With Guilerme answer this makes for a correct and fast blending. Thanks! Dec 11, 2011 at 19:25\n\nI think hardware support will help you. try to move the logic from software to hardware if feasible\n\nI've done similar code in unsafe C#. Is there any reason you aren't looping through each pixel directly? Why use all the BYTE* and GET_BYTE() calls? That is probably part of the speed issue.\n\nWhat does GET_GRAY look like?\n\nMore importantly, are you sure your platform doesn't expose alpha blending capabilities? What platform are you targeting? Wiki informs me that the following support it out of the box:\n\n• Mac OS X\n• Windows 2000, XP, Server 2003, Windows CE, Vista and Windows 7\n• The XRender extension to the X Window System (this includes modern Linux systems)\n• QNX Neutrino\n• Plan 9\n• Inferno\n• AmigaOS 4.1\n• BeOS, Zeta and Haiku\n• Syllable\n• MorphOS\n• This alpha blend is used for a certain image enhancement algorithm, not for displaying. So I can not use platform capabilities. Thanks! remove most GET_BYTE() seems useless, maybe the multiply operation and divid 255 operation is the problem. Jul 9, 2009 at 13:27\n• Even if you aren't displaying the image you can still definitely use platform capabilities. For example, on Windows you can use GDI+ or the .NET wrappers to do alpha blending without ever displaying it. 
I'd assume other platforms are similar. Jul 9, 2009 at 21:26\n\nThe main problem will be the poor loop construct, possibly made worse by a compiler failing to eliminate CSEs. Move the real common bits outside the loops. `int red` isn't common, though - that should be inside the inner loop.\n\nFurthermore, red, green and blue are independent. If you calculate them in turn, you don't need to keep interim red results in registers when you are calculating green results. This is especially important on CPUs with limited registers like x86.\n\nThere will be only a limited number of values allowed for bytepp. Make it a template parameter, and then call the right instantiation from a switch. This will produce multiple copies of your function, but each can be optimized a lot better.\n\nAs noted, clamping is not needed. In alpha blending, you're creating a linear combination of two images a[x][y] and b[x][y]. Since 0<=alpha<=255, you know that each output is bounded by max(255*a[x][y], 255*b[x][y]). And since your output range is the same as both input ranges (0-255), this is OK.\n\nWith a small loss of precision, you could calculate `(a[x][y]*alpha + b[x][y]*(256-alpha))>>8`. Bitshifts are often faster than division.\n\n• Modern CPUs prefer interleaved instructions as much as possible. This is because independent work (i.e. calculating R while G is processing) suits the pipelined nature of modern CPUs well. See the Intel optimisation manual: intel.com/Assets/PDF/manual/248966.pdf. - The registers might seem limited to you, but the CPU has many more actual registers than you think, using \"register renaming\" Jul 9, 2009 at 23:28\n\nDepending on the target architecture, you could try to either vectorize or parallelize the function.\n\nOther than that, try to linearize the whole method (i.e. no loop-in-loop) and work with a quadruple of bytes at once; that would lose the overhead of working with single bytes plus make it easier for the compiler to optimize the code.\n\nMove it to the GPU.\n\nI am assuming that you want to do this in a completely portable way, without the help of a GPU or the use of a proprietary Intel SIMD library (which may not work as efficiently on AMD processors).\n\nPut the following in place of your calculation for RGB\n\n``````R = TopR + (SourceR * alpha) >> 8;\nG = TopG + (SourceG * alpha) >> 8;\nB = TopB + (SourceB * alpha) >> 8;\n``````\n\nIt is a more efficient calculation.\n\nAlso use a shift-left instruction on your get pixel macro instead of multiplying by the BPP.\n\n• SSE is pretty well adopted by both programmers and chip manufacturers. 
Jul 9, 2009 at 23:25\n\nThis one works when the first color (colora, the destination) also has an alpha channel (blending two transparent ARGB colors). The alpha is in the second color's alpha (colorb, the source).\n\nThis adds the two alphas (0 = transparent, 255 = fully opaque). It is a modified version of Jasper Bekkers' answer.\n\nI use it to blend transparent pixel art onto a transparent screen.\n\n``````Uint32 alphaBlend(unsigned int colora, unsigned int colorb) {\nunsigned int a2 = (colorb & 0xFF000000) >> 24;\nunsigned int alpha = a2;\nif (alpha == 0) return colora;\nif (alpha == 255) return colorb;\nunsigned int a1 = (colora & 0xFF000000) >> 24;\nunsigned int nalpha = 0x100 - alpha;\nunsigned int rb1 = (nalpha * (colora & 0xFF00FF)) >> 8;\nunsigned int rb2 = (alpha * (colorb & 0xFF00FF)) >> 8;\nunsigned int g1 = (nalpha * (colora & 0x00FF00)) >> 8;\nunsigned int g2 = (alpha * (colorb & 0x00FF00)) >> 8;\nunsigned int anew = a1 + a2;\nif (anew > 255) {anew = 255;}\nreturn ((rb1 + rb2) & 0xFF00FF) + ((g1 + g2) & 0x00FF00) + (anew << 24);\n}\n``````\n\nHere's my adaptation of a software alpha blend that works well for 2 unsigned integers.\n\nMy code differs a bit, as the code above basically always assumes the destination alpha is 255.\n\nWith a decent optimizing compiler most calculations should be in registers, as the scope of most variables is very short. I also opted to progressively shift the result << 8 incrementally to avoid << 24, << 16 when putting the ARGB back together. I know it's a long time ago... but I remember that on the 286 the cycle count for a shift was (1 + 1*each bit shifted), so I assume there is still some sort of penalty for larger shifts.\n\nAlso... instead of \"/ 255\" I opted for \">> 8\", which can be changed as desired.\n\n``````/*\nalpha blend source and destination, either may have an alpha!!!!\n\nSrc AAAAAAAA RRRRRRRR GGGGGGGG BBBBBBBB\nDest AAAAAAAA RRRRRRRR GGGGGGGG BBBBBBBB\n\nres AAAAAAAA RRRRRRRR GGGGGGGG BBBBBBBB\n\nNOTE - α = αsrc + αdest(1.0-αsrc) where α = 0.0 - 1.0\n\nALSO - DWORD is unsigned int so (F8000000 >> 24) = F8 not FFFFFFF8 as it would with int (signed)\n*/\n\ninline DWORD raw_blend(const DWORD src, const DWORD dest)\n{\n// setup and calculate α\n\nDWORD src_a = src >> 24;\nDWORD src_a_neg = 255 - src_a;\nDWORD dest_a = dest >> 24;\n\nDWORD res = src_a + ((dest_a * src_a_neg) >> 8);\n\n// setup and calculate R\n\nDWORD src_r = (src >> 16) & 255;\nDWORD dest_r = (dest >> 16) & 255;\n\nres = (res << 8) | (((src_r * src_a) + (dest_r * src_a_neg)) >> 8);\n\n// setup and calculate G\n\nDWORD src_g = (src >> 8) & 255;\nDWORD dest_g = (dest >> 8) & 255;\n\nres = (res << 8) | (((src_g * src_a) + (dest_g * src_a_neg)) >> 8);\n\n// setup and calculate B\n\nDWORD src_b = src & 255;\nDWORD dest_b = dest & 255;\n\nreturn (res << 8) | (((src_b * src_a) + (dest_b * src_a_neg)) >> 8);\n}\n``````\n``````; In\\ EAX = background color (ZRGB) 32bit (Z means zero, always zero)\n; In\\ EDX = foreground color (RGBA) 32bit\n; Out\\ EAX = new color\n; free registers (R10, RDI, RSI, RSP, RBP)\nabg2:\nmov r15b, dl ; av\nmovzx ecx, dl\nnot ecx ; faster than 255 - dl\nmov r14b, cl ; rem\n\nshr edx, 8\nand edx, 0x00FFFFFF\nmov r12d, edx\nmov r13d, eax ; RGBA ---> ZRGB\n\n; s: eax\n; d: edx\n\n;=============================red = ((s >> 16) * rem + (d >> 16) * av) >> 8;\nmov edx, r12d\nshr edx, 0x10\nmovzx eax, r14b\nimul edx, eax\nmov ecx, r13d\nshr ecx, 0x10\nmovzx eax, r15b\nimul eax, ecx\nlea eax, [eax + edx] ; faster than add eax, edx\nshr eax, 0x8\nmov r9b, al\nshl r9d, 
8\n\n;=============================green = (((s >> 8) & 0x0000ff) * rem + ((d >> 8) & 0x0000ff) * av) >> 8;\nmov eax, r12d\nshr eax, 0x8\nmovzx edx, al\nmovzx eax, r14b\nimul edx, eax\nmov eax, r13d\nshr eax, 0x8\nmovzx ecx, al\nmovzx eax, r15b\nimul eax, ecx\nlea eax, [eax + edx] ; faster than add eax, edx\nshr eax, 0x8\nmov r9b, al\nshl r9d, 8\n\n;=============================blue = ((s & 0x0000ff) * rem + (d & 0x0000ff) * av) >> 8;\nmovzx edx, r12b\nmovzx eax, r14b\nimul edx, eax\nmovzx ecx, r13b\nmovzx eax, r15b\nimul eax, ecx\nlea eax, [eax + edx] ; faster than add eax, edx\nshr eax, 0x8\nmov r9b, al\n\nmov eax, r9d\nret\n``````" ]
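Pulling the accepted answer's steps together: the sketch below is a minimal, self-contained C++ version (my own code and naming, not the answerer's). It blends four RGBA pixels using one constant alpha, so step 3's per-pixel alpha swizzle is omitted for clarity, and it uses the exact divide-by-255 rounding trick from the comments above instead of a plain >> 8.

``````
#include <emmintrin.h>   // SSE2 intrinsics
#include <cstdint>

// Blend 4 RGBA pixels: out = (top*alpha + src*(255-alpha)) / 255 per channel.
void Blend4Pixels(const uint8_t* srcTop, const uint8_t* src,
                  uint8_t* out, uint8_t alpha)
{
    const __m128i zero = _mm_setzero_si128();
    const __m128i a    = _mm_set1_epi16(alpha);
    const __m128i na   = _mm_set1_epi16(static_cast<short>(255 - alpha));
    const __m128i half = _mm_set1_epi16(128);

    // Steps 1-2: load 16 bytes (4 pixels) from each layer.
    __m128i t = _mm_loadu_si128(reinterpret_cast<const __m128i*>(srcTop));
    __m128i s = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src));

    // Widen bytes to 16-bit lanes so the multiplies below cannot overflow.
    __m128i tLo = _mm_unpacklo_epi8(t, zero), tHi = _mm_unpackhi_epi8(t, zero);
    __m128i sLo = _mm_unpacklo_epi8(s, zero), sHi = _mm_unpackhi_epi8(s, zero);

    // Steps 5-6: top*alpha + src*(255 - alpha) in every lane.
    __m128i lo = _mm_add_epi16(_mm_mullo_epi16(tLo, a), _mm_mullo_epi16(sLo, na));
    __m128i hi = _mm_add_epi16(_mm_mullo_epi16(tHi, a), _mm_mullo_epi16(sHi, na));

    // Rounded x/255, exact for x <= 255*255: (x + 128 + ((x + 128) >> 8)) >> 8.
    lo = _mm_add_epi16(lo, half);
    lo = _mm_srli_epi16(_mm_add_epi16(lo, _mm_srli_epi16(lo, 8)), 8);
    hi = _mm_add_epi16(hi, half);
    hi = _mm_srli_epi16(_mm_add_epi16(hi, _mm_srli_epi16(hi, 8)), 8);

    // Step 7: narrow back to bytes with saturation and store.
    _mm_storeu_si128(reinterpret_cast<__m128i*>(out), _mm_packus_epi16(lo, hi));
}
``````

The caller's inner loop then advances all three pointers by 16 bytes per iteration, exactly as in the skeleton above; no clamping is needed because the weighted sum of two in-range channels stays in range.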
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69429976,"math_prob":0.9683434,"size":3974,"snap":"2023-40-2023-50","text_gpt3_token_len":1130,"char_repetition_ratio":0.1256927,"word_repetition_ratio":0.014423077,"special_character_ratio":0.30120784,"punctuation_ratio":0.12880887,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97774804,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T14:29:24Z\",\"WARC-Record-ID\":\"<urn:uuid:3c2a4d29-d06d-4166-867c-4167aacba5f1>\",\"Content-Length\":\"378761\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0cc6891e-5de5-4b72-bbfb-e6eedf0d02b7>\",\"WARC-Concurrent-To\":\"<urn:uuid:c72cf01a-7a49-43e1-9745-2f6fe9bf54d1>\",\"WARC-IP-Address\":\"104.18.23.201\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/1102692/how-to-alpha-blend-rgba-unsigned-byte-color-fast/61943155#61943155\",\"WARC-Payload-Digest\":\"sha1:LFJVJES7BX236HDCYQD665XYDNQBR7OQ\",\"WARC-Block-Digest\":\"sha1:ZLIN37ROVZCOZT5FRQEYERLWAYC5OWPS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510676.40_warc_CC-MAIN-20230930113949-20230930143949-00130.warc.gz\"}"}
https://numberworld.info/11001010
[ "# Number 11001010\n\n### Properties of number 11001010\n\nCross Sum:\nFactorization:\nDivisors:\nCount of divisors:\nSum of divisors:\nPrime number?\nNo\nFibonacci number?\nNo\nBell Number?\nNo\nCatalan Number?\nNo\nBase 2 (Binary):\nBase 3 (Ternary):\nBase 4 (Quaternary):\nBase 5 (Quintal):\nBase 8 (Octal):\na7dcb2\nBase 32:\nafn5i\nsin(11001010)\n0.68684924308834\ncos(11001010)\n0.72679991556753\ntan(11001010)\n0.94503208981801\nln(11001010)\n16.213497644729\nlg(11001010)\n7.0414325594574\nsqrt(11001010)\n3316.7770500894\nSquare(11001010)\n\n### Number Look Up\n\nLook Up\n\n11001010 which is pronounced (eleven million one thousand ten) is a very impressive figure. The cross sum of 11001010 is 4. If you factorisate 11001010 you will get these result 2 * 5 * 1100101. The figure 11001010 has 8 divisors ( 1, 2, 5, 10, 1100101, 2200202, 5500505, 11001010 ) whith a sum of 19801836. The figure 11001010 is not a prime number. The figure 11001010 is not a fibonacci number. The number 11001010 is not a Bell Number. 11001010 is not a Catalan Number. The convertion of 11001010 to base 2 (Binary) is 101001111101110010110010. The convertion of 11001010 to base 3 (Ternary) is 202200220112211. The convertion of 11001010 to base 4 (Quaternary) is 221331302302. The convertion of 11001010 to base 5 (Quintal) is 10304013020. The convertion of 11001010 to base 8 (Octal) is 51756262. The convertion of 11001010 to base 16 (Hexadecimal) is a7dcb2. The convertion of 11001010 to base 32 is afn5i. The sine of the number 11001010 is 0.68684924308834. The cosine of 11001010 is 0.72679991556753. The tangent of 11001010 is 0.94503208981801. The root of 11001010 is 3316.7770500894.\nIf you square 11001010 you will get the following result 121022221020100. The natural logarithm of 11001010 is 16.213497644729 and the decimal logarithm is 7.0414325594574. You should now know that 11001010 is very amazing number!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7442872,"math_prob":0.88571316,"size":2110,"snap":"2019-51-2020-05","text_gpt3_token_len":731,"char_repetition_ratio":0.21699905,"word_repetition_ratio":0.24683544,"special_character_ratio":0.5099526,"punctuation_ratio":0.15189873,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99845153,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-20T15:18:02Z\",\"WARC-Record-ID\":\"<urn:uuid:74191947-60cd-4be8-8e4f-b1fee9b62d01>\",\"Content-Length\":\"13671\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b5531bf4-f621-4325-b98d-6e538cd25c5b>\",\"WARC-Concurrent-To\":\"<urn:uuid:70e1661d-ade1-4248-8902-ae385265432a>\",\"WARC-IP-Address\":\"176.9.140.13\",\"WARC-Target-URI\":\"https://numberworld.info/11001010\",\"WARC-Payload-Digest\":\"sha1:57TL72OBXZMKRFVIEGWPRAPNAFD4WMLF\",\"WARC-Block-Digest\":\"sha1:NR7NBIO6OSHUP7IISD7ZIEEHARLAKDL4\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250598800.30_warc_CC-MAIN-20200120135447-20200120164447-00090.warc.gz\"}"}
https://elimdigital.com/topic-9-ratio-profit-and-loss/
[ "Home MATHEMATICS TOPIC 9: RATIO, PROFIT AND LOSS~ MATHEMATICS FORM 1\n\n# TOPIC 9: RATIO, PROFIT AND LOSS~ MATHEMATICS FORM 1\n\n133\n0\nSHARE", null, "Ratio\n\nA ratio – is a way of comparing quantities measured in the same units\nExamples of ratios\n1. A class has 45 girls and 40 boys. The ratio of number of boys to the number of girls = 40: 45\n2. A football ground 100 𝑚 long and 50 𝑚 wide. The ratio of length to the width = 100: 50\nNOTE: Ratios can be simplified like fractions\n1. 40: 45 = 8: 9\n2. 100: 50 = 2: 1\nA Ratio in its Simplest Form\nExpress a ratio in its simplest form\nExample 1\nSimplify the following ratios, giving answers as whole numbers", null, "Solution", null, "A Given Quantity into Proportional Parts\nDivide a given quantity into proportional parts\nExample 2\nExpress the following ratios in the form of", null, "Solution", null, "To increase or decrease a certain quantity in a given ratio, multiply the quantity with that ratio\nExample 3\n1. Increase 6 𝑚 in the ratio 4 ∶ 3\n2. Decrease 800 /− in the ratio 4 ∶ 5\nSolution", null, "Profit or Loss\nFind profit or loss\nIf you buy something and then sell it at a higher price, then you have a profit which is given by: Profit = selling price − buying price\nIf you buy something and then sell it at a lower price, then you have a loss which is given by: Loss = buying price − selling price\nThe profit or loss can also be expressed as a percentage of buying price as follows:", null, "Percentage Profit and Percentage Loss\nCalculate percentage profit and percentage profit and percentage loss\nExample 4\nMr. Richard bought a car for 3, 000, 000/− and sold for 3, 500, 000/−. What is the profit and percentage profit obtained?\nSolution\nProfit= selling price − buying price = 3,500,000-3,000,000=500,000\nTherefore the profit obtained is 500,000/-", null, "Example 5\nSolution", null, "But buying price = 780, 000/− and loss = buying price − selling price = 780, 000 − 720, 000 = 60, 000/−", null, "Simple Interest\nCalculate simple interest\nThe amount of money charged when a person borrows money e. g from a bank is called interest (I)\nThe amount of money borrowed is called principle (P)\nTo calculate interest, we use interest rate (R) given as a percentage and is usually taken per year or per annum (p.a)", null, "Example 6\nCalculate the simple interest charged on the following\n1. 850, 000/− at 15% per annum for 9 months\n2. 200, 000/− at 8% per annum for 2 years\nSolution", null, "Real Life Problems Related to Simple Interest\nSolve real life problems related to simple interest\nExample 7\nMrs. Mihambo deposited money in CRDB bank for 3 years and 4 months. A t the end of this time she earned a simple interest of 87, 750/− at 4.5% per annum. How much had she deposited in the bank?\nSolution\nGiven I = 87, 750/− R = 4.5% % T = 3 years and 4 months\nChange months to years", null, "SHARE\nPrevious articleTOPIC 8: NUMBERS (II)~ MATHEMATICS FORM 1" ]
[ null, "https://elimdigital.com/wp-content/uploads/2021/06/RATIO-PROFIT-AND-LOS-640x346.jpg", null, "https://sdimg.blob.core.windows.net/images/ShuleDirect/22472/Original/Screen_Shot_2016-06-08_at_15.38.43_1465389584390.png", null, "https://sdimg.blob.core.windows.net/images/ShuleDirect/22472/Original/Screen_Shot_2016-06-08_at_15.40.47_1465389677439.png", null, "https://sdimg.blob.core.windows.net/images/ShuleDirect/22472/Original/Screen_Shot_2016-06-08_at_15.46.27_1465390024092.png", null, "https://sdimg.blob.core.windows.net/images/ShuleDirect/22472/Original/Screen_Shot_2016-06-08_at_15.49.44_1465390210793.png", null, "https://sdimg.blob.core.windows.net/images/ShuleDirect/22472/Original/Screen_Shot_2016-06-08_at_15.56.23_1465390605588.png", null, "https://sdimg.blob.core.windows.net/images/ShuleDirect/22472/Original/Screen_Shot_2016-06-08_at_15.59.51_1465390825484.png", null, "https://sdimg.blob.core.windows.net/images/ShuleDirect/22472/Original/Screen_Shot_2016-06-08_at_16.06.24_1465391264034.png", null, "https://sdimg.blob.core.windows.net/images/ShuleDirect/22472/Original/Screen_Shot_2016-06-08_at_16.09.47_1465391412914.png", null, "https://sdimg.blob.core.windows.net/images/ShuleDirect/22472/Original/Screen_Shot_2016-06-08_at_16.12.15_1465391562368.png", null, "https://sdimg.blob.core.windows.net/images/ShuleDirect/22472/Original/Screen_Shot_2016-06-08_at_16.15.55_1465391834547.png", null, "https://sdimg.blob.core.windows.net/images/ShuleDirect/22472/Original/Screen_Shot_2016-06-08_at_16.19.14_1465391982423.png", null, "https://sdimg.blob.core.windows.net/images/ShuleDirect/22472/Original/Screen_Shot_2016-06-08_at_16.30.40_1465392677105.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8926568,"math_prob":0.9859122,"size":2603,"snap":"2023-14-2023-23","text_gpt3_token_len":721,"char_repetition_ratio":0.13813005,"word_repetition_ratio":0.04453441,"special_character_ratio":0.28851324,"punctuation_ratio":0.08795411,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992948,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,2,null,null,null,null,null,null,null,null,null,null,null,9,null,9,null,9,null,9,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-26T09:26:50Z\",\"WARC-Record-ID\":\"<urn:uuid:22da9295-70b9-496d-ac59-8d5b41c480d1>\",\"Content-Length\":\"148715\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e237d9aa-a532-4c14-ae5e-ae21044160fa>\",\"WARC-Concurrent-To\":\"<urn:uuid:5be7f605-a3c0-4848-8eb5-724f3ed7f429>\",\"WARC-IP-Address\":\"67.223.118.26\",\"WARC-Target-URI\":\"https://elimdigital.com/topic-9-ratio-profit-and-loss/\",\"WARC-Payload-Digest\":\"sha1:N6KWR7F75DTSR55LBXV6UE3GG4NBDTJ6\",\"WARC-Block-Digest\":\"sha1:C2GPQMKLQRGDDAEKIMTVM6VLSWDCIB5I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945440.67_warc_CC-MAIN-20230326075911-20230326105911-00384.warc.gz\"}"}
https://studysoup.com/tsg/21984/an-introduction-to-thermal-physics-1st-edition-chapter-2-problem-29p
[ "# Consider a system of two Einstein solids, with NA = 300,", null, "## Problem 29P Chapter 2\n\nAn Introduction to Thermal Physics | 1st Edition\n\n• 2901 Step-by-step solutions solved by professors and subject experts\n• Get 24/7 help from StudySoup virtual teaching assistants", null, "An Introduction to Thermal Physics | 1st Edition\n\n4 5 0 397 Reviews\n14\n4\nProblem 29P\n\nConsider a system of two Einstein solids, with NA = 300, NB = 200, and (qtotal = 100 (as discussed in Section 2.3). Compute the entropy of the most likely macrostate and of the least likely macrostate. Also compute the entropy over long time scales, assuming that all microstates are accessible. (Neglect the factor of Boltzmann’s constant in the definition of entropy; for systems this small it is best to think of entropy as a pure number.)\n\nStep-by-Step Solution:\nStep 1<p>In this system we have", null, "and", null, "and", null, ".\n\nThe maximum probable microstate is when", null, ". And minimum probable microstate is when", null, ".\n\nNow we have to calculate the entropy as pure number (ignoring the boltzmann's constant) for the both states.\n\nStep 2<p>The total number of microstates  in the most probable state is", null, "(given in the book)\n\nHence the entropy is", null, "Hence the entropy of the most probable state is 264.\n\nStep 3 of 3\n\n##### ISBN: 9780201380279\n\nThis full solution covers the following key subjects: entropy, likely, macrostate, compute, accessible. This expansive textbook survival guide covers 10 chapters, and 454 solutions. Since the solution to 29P from 2 chapter was answered, more than 233 students have viewed the full step-by-step answer. The full step-by-step solution to problem: 29P from chapter: 2 was answered by Sieva Kozinsky, our top Physics solution expert on 07/05/17, 04:29AM. This textbook survival guide was created for the textbook: An Introduction to Thermal Physics , edition: 1st. The answer to “Consider a system of two Einstein solids, with NA = 300, NB = 200, and (qtotal = 100 (as discussed in Section 2.3). Compute the entropy of the most likely macrostate and of the least likely macrostate. Also compute the entropy over long time scales, assuming that all microstates are accessible. (Neglect the factor of Boltzmann’s constant in the definition of entropy; for systems this small it is best to think of entropy as a pure number.)” is broken down into a number of easy to follow steps, and 77 words. An Introduction to Thermal Physics was written by Sieva Kozinsky and is associated to the ISBN: 9780201380279.\n\n#### Related chapters\n\nUnlock Textbook Solution\n\nConsider a system of two Einstein solids, with NA = 300,\n\n×\nGet Full Access to An Introduction To Thermal Physics - 1st Edition - Chapter 2 - Problem 29p\n\nGet Full Access to An Introduction To Thermal Physics - 1st Edition - Chapter 2 - Problem 29p\n\nI don't want to reset my password\n\nNeed help? Contact support\n\nNeed an Account? Is not associated with an account\nWe're here to help" ]
[ null, "https://studysoup.com/cdn/56cover_2610073", null, "https://studysoup.com/cdn/56cover_2610073", null, "https://lh3.googleusercontent.com/YrSwg4YpXLGB-1RbG3i1ZRtAbcFnVexT8DO5l3seaq1PIp2MLU6fgTW-vk2GgZ9RhHoRa1DEijaLTH8_dSJM4HCLR4IAMH6UmGfwhj05J-NE8Kkq75MQyL25CMiNjUegMve6kTtw", null, "https://lh5.googleusercontent.com/ROyMFdKbv_hoOfK4EKKIhZHJCAVa6sMx8wRELGA-E5cASetUZ-NkEBWgRC8K80BJlf44rI9h8Sh3g_zAHNYMGWXL7TxQU4u8ZfHh4S5TSek8FwxclNuQya7x6EC2mnKwPfu_R836", null, "https://lh3.googleusercontent.com/ApXyNNAvwYrQTGnuDwt7JGIhwTIn9qL_OEr5CzdYjjuXahV44UDWKhw2XiPE7IJbJPi6ka0cNCwxXsYO1WwcUKDwOHpnXgTuugI35G9AySDMf-OYq6N0sy6Ys0ne5vSGZ_8ddQdb", null, "https://lh6.googleusercontent.com/9uSh3h8xyOTm0iexFANdh39BljWK7gxoK9r1mf9_4WHbYJdSmw1QhNal1LHpFVEOg50dabAjqb4XuIeYZjOiTidnVnMrCTiMQ1t_wf3vzU-Sv13Md-4Fxsd4afLG2tjgwmjzANLs", null, "https://lh4.googleusercontent.com/mAjAAcX_UBQ70rbsFUTeppL4YNXHvoH8jEM44GKexK9h8ENh60MOCvNAMOhPrxhm2ZWD4eG2yqzGt6UHFCSV-iraZpxCZ-4WJD0Yrc_8F40vNlcK6FQPdKB4pARKiUIqzCRggsAU", null, "https://lh6.googleusercontent.com/wpktHHpZsLvy_3VSyusYGh7DzLcduSLGJlPxKsl0qDGa8nCTVN3xyKTtybi4bZzIlbRgjKn3DrnHHDzkSyWvvDu21XPCdnBRycmn33f5CNaaurEdYF2FQEzJ4uDIjcvwQp_EPYO_", null, "https://lh5.googleusercontent.com/TeFOEgYVUwxzxt8oNRaHgK9qVcfvP06gh_vdJBgdMPYH3DY4wfewFJeeRw05viVJ5di0E4l7qPR2FCZOsII-iusmAC9WpMxlRTrLfc-ZxTofFMWsvIz3CeC5wMXrryGHBe5WkLEc", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89109564,"math_prob":0.69278866,"size":809,"snap":"2019-43-2019-47","text_gpt3_token_len":188,"char_repetition_ratio":0.16397515,"word_repetition_ratio":0.02919708,"special_character_ratio":0.22991347,"punctuation_ratio":0.09677419,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99605095,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T10:53:46Z\",\"WARC-Record-ID\":\"<urn:uuid:4e4cfb5e-c204-4a5c-a2f7-a56d1a69a58b>\",\"Content-Length\":\"84056\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:45c9de49-c37f-4c31-a95e-2e7ad2a1c2ba>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa87d7ba-5d63-45f7-a729-32052da5d080>\",\"WARC-IP-Address\":\"54.189.254.180\",\"WARC-Target-URI\":\"https://studysoup.com/tsg/21984/an-introduction-to-thermal-physics-1st-edition-chapter-2-problem-29p\",\"WARC-Payload-Digest\":\"sha1:X7J24CRUCMTXKHLYDVFDBJMTG5DHNV46\",\"WARC-Block-Digest\":\"sha1:HF2CBOZOP2HQN4Y4DGJFXCIRYFYQHVB2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670770.21_warc_CC-MAIN-20191121101711-20191121125711-00487.warc.gz\"}"}
http://www.kejimatou.com/xinwen/41.html
[ "# sumif函数多条件求和案例汇总\n\nSUMIF函数是Excel使用频率很高的函数,使用SUMIF函数可以对报表范围中符合指定条件的值求和。Excel中SUMIF函数的用法是根据指定条件对若干单元格、区域或引用求和。\n\n=SUMIF(C2:C9,\"小米\",D2:D9)", null, "=SUMIF(C2:C9,\"<>小米\",D2:D9)", null, "=SUMIF(D2:D9,\">300\")", null, "=SUMIF(D2:D9,\"<\"&AVERAGE(D2:D9))", null, "=SUMIF(C2:C9,F1,D2:D9)", null, "=SUMIF(C2:C9,\"*\",D2:D9)", null, "=SUMIF(C2:C9,\"???\",D2:D9)", null, "=SUMIF(C2:C9,\"*米*\",D2:D9)", null, "=SUMIF(C2:C9,\"大*\",D2:D9)", null, "=SUMIF(B2:B9,TODAY(),D2:D9)", null, "=SUMIF(D2:D9,\"<9e307\")", null, "=SUM(SUMIF(C2:C9,{\"小米\",\"大米\"},D2:D9))", null, "=SUMIF(B3:F10,\"\",B2:F9)/5", null, "=SUMIF(C2:F9,\"小米\",D2:G9)", null, "" ]
[ null, "http://p3.pstatp.com/large/pgc-image/dc385b105773439991658511f641b2d7", null, "http://p9.pstatp.com/large/pgc-image/94c89a32854f41058a479667b993abab", null, "http://p3.pstatp.com/large/pgc-image/6d899bc0bc614abf84fb40865c0358b1", null, "http://p99.pstatp.com/large/pgc-image/f5f017d6794c496199a0436c03ba5615", null, "http://p99.pstatp.com/large/pgc-image/1a83815107b24f808e5bfc0e6210f229", null, "http://p99.pstatp.com/large/pgc-image/8e1ecfa8bfb74c028cca29e6c79978e0", null, "http://p1.pstatp.com/large/pgc-image/64ec41beaae944cbbcf72fe9087c972d", null, "http://p99.pstatp.com/large/pgc-image/83a1f569d0274a41bf0fb61fcec0ce67", null, "http://p3.pstatp.com/large/pgc-image/d19c7120078040bd9242bb9e32edaf85", null, "http://p99.pstatp.com/large/pgc-image/c8ed904df7d6488a99d3b57b07bfefe9", null, "http://p99.pstatp.com/large/pgc-image/18f5e7c0eb0a42ad9054864cabc795b2", null, "http://p1.pstatp.com/large/pgc-image/9a65995983b84ef88bb1ace4b72cc64e", null, "http://p99.pstatp.com/large/pgc-image/e571bcb1311d408aafee41f7a7287b17", null, "http://p1.pstatp.com/large/pgc-image/7606a19f5656445094efccdc5237ade3", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.64635915,"math_prob":0.9980608,"size":965,"snap":"2019-26-2019-30","text_gpt3_token_len":789,"char_repetition_ratio":0.22372529,"word_repetition_ratio":0.0,"special_character_ratio":0.34404144,"punctuation_ratio":0.2961165,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9942409,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-17T06:24:18Z\",\"WARC-Record-ID\":\"<urn:uuid:156fd5a3-c015-4549-8a24-3673a45abc41>\",\"Content-Length\":\"21092\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:79bffbed-e27f-4570-b52a-4791115b2431>\",\"WARC-Concurrent-To\":\"<urn:uuid:911aef63-6146-4554-81f2-bfa8a0313ca3>\",\"WARC-IP-Address\":\"154.211.144.187\",\"WARC-Target-URI\":\"http://www.kejimatou.com/xinwen/41.html\",\"WARC-Payload-Digest\":\"sha1:QEQWJPPLY3LA2AQXDXLXEZBZMOPSFA2L\",\"WARC-Block-Digest\":\"sha1:IJY5IJWS2OVDUKN6FN6O2VUWQB4CEZSS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525094.53_warc_CC-MAIN-20190717061451-20190717083451-00007.warc.gz\"}"}
https://foreach.id/EN/fluids/viscositydynamic/decipoise-to-pound_force_second%7Csq_inch.html
[ "# Convert decipoise to pound-force second/inch² (dP to lbf s/in²)\n\nBatch Convert\n• decipoise [dP]\n• pound-force second/inch² [lbf s/in²]\nCopy\n_\nCopy\n• decipoise [dP]\n• pound-force second/inch² [lbf s/in²]\n\n## Decipoise to Pound-force second/inch² (dP to lbf s/in²)\n\n### Decipoise (Symbol or Abbreviation: dP)\n\nDecipoise is one of dynamic viscosity units. Decipoise abbreviated or symbolized by dP. The value of 1 decipoise is equal to 0.01 pascal second. In its relation with pound-force second/inch², 1 decipoise is equal to 0.0000014504 pound-force second/inch².\n\n#### Relation with other units\n\n1 decipoise equals to 0.01 pascal second\n\n1 decipoise equals to 0.0010197 kilogram-force second/meter²\n\n1 decipoise equals to 0.01 newton second/meter²\n\n1 decipoise equals to 10 millinewton second/meter²\n\n1 decipoise equals to 0.1 dyne second/centimeter²\n\n1 decipoise equals to 0.1 poise\n\n1 decipoise equals to 1e-19 exapoise\n\n1 decipoise equals to 1e-16 petapoise\n\n1 decipoise equals to 1e-13 terapoise\n\n1 decipoise equals to 1e-10 gigapoise\n\n1 decipoise equals to 1e-7 megapoise\n\n1 decipoise equals to 0.0001 kilopoise\n\n1 decipoise equals to 0.001 hectopoise\n\n1 decipoise equals to 0.01 dekapoise\n\n1 decipoise equals to 10 centipoise\n\n1 decipoise equals to 100 millipoise\n\n1 decipoise equals to 100,000 micropoise\n\n1 decipoise equals to 100,000,000 nanopoise\n\n1 decipoise equals to 100,000,000,000 picopoise\n\n1 decipoise equals to 100,000,000,000,000 femtopoise\n\n1 decipoise equals to 100,000,000,000,000,000 attopoise\n\n1 decipoise equals to 0.0000014504 pound-force second/inch²\n\n1 decipoise equals to 0.00020885 pound-force second/foot²\n\n1 decipoise equals to 0.0067197 poundal second/foot²\n\n1 decipoise equals to 0.1 gram/(centimeter*second)\n\n1 decipoise equals to 0.00020885 slug/(foot*second)\n\n1 decipoise equals to 0.0067197 pound/(foot*second)\n\n1 decipoise equals to 24.191 pound/(foot*hour)\n\n1 decipoise equals to 0.0000014514 reyn\n\n### Pound-force second/inch² (Symbol or Abbreviation: lbf s/in²)\n\nPound-force second/inch² is one of dynamic viscosity units. Pound-force second/inch² abbreviated or symbolized by lbf s/in². The value of 1 pound-force second/inch² is equal to 6894.8 pascal second. 
In its relation with decipoise, 1 pound-force second/inch² is equal to 689480 decipoise.\n\n#### Relation with other units\n\n1 pound-force second/inch² equals to 6,894.8 pascal second\n\n1 pound-force second/inch² equals to 703.07 kilogram-force second/meter²\n\n1 pound-force second/inch² equals to 6,894.8 newton second/meter²\n\n1 pound-force second/inch² equals to 6,894,800 millinewton second/meter²\n\n1 pound-force second/inch² equals to 68,948 dyne second/centimeter²\n\n1 pound-force second/inch² equals to 68,948 poise\n\n1 pound-force second/inch² equals to 6.8948e-14 exapoise\n\n1 pound-force second/inch² equals to 6.8948e-11 petapoise\n\n1 pound-force second/inch² equals to 6.8948e-8 terapoise\n\n1 pound-force second/inch² equals to 0.000068948 gigapoise\n\n1 pound-force second/inch² equals to 0.068948 megapoise\n\n1 pound-force second/inch² equals to 68.948 kilopoise\n\n1 pound-force second/inch² equals to 689.48 hectopoise\n\n1 pound-force second/inch² equals to 6,894.8 dekapoise\n\n1 pound-force second/inch² equals to 689,480 decipoise\n\n1 pound-force second/inch² equals to 6,894,800 centipoise\n\n1 pound-force second/inch² equals to 68,948,000 millipoise\n\n1 pound-force second/inch² equals to 68,948,000,000 micropoise\n\n1 pound-force second/inch² equals to 68,948,000,000,000 nanopoise\n\n1 pound-force second/inch² equals to 68,948,000,000,000,000 picopoise\n\n1 pound-force second/inch² equals to 68,948,000,000,000,000,000 femtopoise\n\n1 pound-force second/inch² equals to 6.8948e+22 attopoise\n\n1 pound-force second/inch² equals to 144 pound-force second/foot²\n\n1 pound-force second/inch² equals to 4,633.1 poundal second/foot²\n\n1 pound-force second/inch² equals to 68,948 gram/(centimeter*second)\n\n1 pound-force second/inch² equals to 144 slug/(foot*second)\n\n1 pound-force second/inch² equals to 4,633.1 pound/(foot*second)\n\n1 pound-force second/inch² equals to 16,679,000 pound/(foot*hour)\n\n1 pound-force second/inch² equals to 1.0007 reyn\n\n### How to convert Decipoise to Pound-force second/inch² (dP to lbf s/in²):\n\n#### Conversion Table for Decipoise to Pound-force second/inch² (dP to lbf s/in²)\n\ndecipoise (dP) pound-force second/inch² (lbf s/in²)\n0.01 dP 1.4504e-8 lbf s/in²\n0.1 dP 1.4504e-7 lbf s/in²\n1 dP 0.0000014504 lbf s/in²\n2 dP 0.0000029008 lbf s/in²\n3 dP 0.0000043511 lbf s/in²\n4 dP 0.0000058015 lbf s/in²\n5 dP 0.0000072519 lbf s/in²\n6 dP 0.0000087023 lbf s/in²\n7 dP 0.000010153 lbf s/in²\n8 dP 0.000011603 lbf s/in²\n9 dP 0.000013053 lbf s/in²\n10 dP 0.000014504 lbf s/in²\n20 dP 0.000029008 lbf s/in²\n25 dP 0.000036259 lbf s/in²\n50 dP 0.000072519 lbf s/in²\n75 dP 0.00010878 lbf s/in²\n100 dP 0.00014504 lbf s/in²\n250 dP 0.00036259 lbf s/in²\n500 dP 0.00072519 lbf s/in²\n750 dP 0.0010878 lbf s/in²\n1,000 dP 0.0014504 lbf s/in²\n100,000 dP 0.14504 lbf s/in²\n1,000,000,000 dP 1,450.4 lbf s/in²\n1,000,000,000,000 dP 1,450,400 lbf s/in²\n\n#### Conversion Table for Pound-force second/inch² to Decipoise (lbf s/in² to dP)\n\npound-force second/inch² (lbf s/in²) decipoise (dP)\n0.01 lbf s/in² 6,894.8 dP\n0.1 lbf s/in² 68,948 dP\n1 lbf s/in² 689,480 dP\n2 lbf s/in² 1,379,000 dP\n3 lbf s/in² 2,068,400 dP\n4 lbf s/in² 2,757,900 dP\n5 lbf s/in² 3,447,400 dP\n6 lbf s/in² 4,136,900 dP\n7 lbf s/in² 4,826,300 dP\n8 lbf s/in² 5,515,800 dP\n9 lbf s/in² 6,205,300 dP\n10 lbf s/in² 6,894,800 dP\n20 lbf s/in² 13,790,000 dP\n25 lbf s/in² 17,237,000 dP\n50 lbf s/in² 34,474,000 dP\n75 lbf s/in² 51,711,000 dP\n100 lbf s/in² 68,948,000 dP\n250 lbf s/in² 172,370,000 dP\n500 
lbf s/in² 344,740,000 dP\n750 lbf s/in² 517,110,000 dP\n1,000 lbf s/in² 689,480,000 dP\n100,000 lbf s/in² 68,948,000,000 dP\n1,000,000,000 lbf s/in² 689,480,000,000,000 dP\n1,000,000,000,000 lbf s/in² 689,480,000,000,000,000 dP\n\n#### Steps to Convert Decipoise to Pound-force second/inch² (dP to lbf s/in²)\n\n1. Example: Convert 442 decipoise to pound-force second/inch² (442 dP to lbf s/in²).\n2. 1 decipoise is equivalent to 0.0000014504 pound-force second/inch² (1 dP is equivalent to 0.0000014504 lbf s/in²).\n3. 442 decipoise (dP) is equivalent to 442 times 0.0000014504 pound-force second/inch² (lbf s/in²).\n4. Hence 442 decipoise is equivalent to 0.00064107 pound-force second/inch² (442 dP is equivalent to 0.00064107 lbf s/in²)." ]
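As a cross-check on the conversion factor that anchors these tables, the 6,894.8 Pa·s figure can be derived from SI definitions, since 1 lbf = 4.4482216 N and 1 in = 0.0254 m; a small sketch (my own code):

``````
#include <cstdio>

int main() {
    const double lbf  = 4.4482216152605;      // newtons per pound-force
    const double inch = 0.0254;               // metres per inch (exact)
    const double paS  = lbf / (inch * inch);  // Pa*s per (lbf*s/in^2): ~6894.76
    const double dP   = paS / 0.01;           // 1 dP = 0.01 Pa*s: ~689476
    std::printf("1 lbf s/in^2 = %.2f Pa s = %.0f dP\n", paS, dP);
    return 0;
}
``````

The table's 689,480 dP is this value rounded to five significant figures.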
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.50206625,"math_prob":0.9990213,"size":6158,"snap":"2021-21-2021-25","text_gpt3_token_len":2338,"char_repetition_ratio":0.33685407,"word_repetition_ratio":0.110199295,"special_character_ratio":0.4272491,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9838038,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-19T04:17:54Z\",\"WARC-Record-ID\":\"<urn:uuid:a0d5d940-5503-4937-80c5-36cc9054ddf1>\",\"Content-Length\":\"60610\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:14b04439-25e6-4f76-a1c3-b0f771e6b72c>\",\"WARC-Concurrent-To\":\"<urn:uuid:18c51859-7476-4189-81bd-2d73f5945e00>\",\"WARC-IP-Address\":\"104.21.3.52\",\"WARC-Target-URI\":\"https://foreach.id/EN/fluids/viscositydynamic/decipoise-to-pound_force_second%7Csq_inch.html\",\"WARC-Payload-Digest\":\"sha1:4COOFWV5FIMXI5JMJXG4ZI6GKOENRB3I\",\"WARC-Block-Digest\":\"sha1:LJKQXLQVIL3EWWOUZNIBTUGHJNMB4QRU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487643380.40_warc_CC-MAIN-20210619020602-20210619050602-00033.warc.gz\"}"}
https://socratic.org/questions/the-ratio-of-the-measures-of-two-supplementary-angles-is-2-7-how-do-you-find-the
[ "# The ratio of the measures of two supplementary angles is 2:7. How do you find the measures of the angles?\n\nMar 29, 2017\n\n${40}^{\\circ} \\text{ and } {140}^{\\circ}$\n\n#### Explanation:\n\ncolor(orange)\"Reminder \" color(red)(bar(ul(|color(white)(2/2)color(black)(\" the sum of 2 supplementary angles\" = 180^@)color(white)(2/2)|)))\n\n$\\text{sum the parts of the ratio}$\n\n$\\Rightarrow 2 + 7 = 9 \\text{ parts in total}$\n\nFind the value of 1 part by dividing ${180}^{\\circ} \\text{ by } 9$\n\n$\\Rightarrow {180}^{\\circ} / 9 = {20}^{\\circ} \\leftarrow \\textcolor{red}{\\text{ value of 1 part}}$\n\n$\\Rightarrow \\text{2 parts } = 2 \\times {20}^{\\circ} = {40}^{\\circ}$\n\n$\\Rightarrow \\text{7 parts } = 7 \\times {20}^{\\circ} = {140}^{\\circ}$\n\n$\\text{Thus the supplementary angles are \" 40^@\" and } {140}^{\\circ}$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7614711,"math_prob":0.9997359,"size":703,"snap":"2019-43-2019-47","text_gpt3_token_len":235,"char_repetition_ratio":0.13161659,"word_repetition_ratio":0.0,"special_character_ratio":0.36273116,"punctuation_ratio":0.046875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998309,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-22T04:29:43Z\",\"WARC-Record-ID\":\"<urn:uuid:8eeb20bd-2838-44b9-8a68-fe66ec168820>\",\"Content-Length\":\"33885\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dcea5486-b0e8-4452-bddc-3ad912a355be>\",\"WARC-Concurrent-To\":\"<urn:uuid:c19e3d94-02bd-402e-9230-3a17b3f08afa>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/the-ratio-of-the-measures-of-two-supplementary-angles-is-2-7-how-do-you-find-the\",\"WARC-Payload-Digest\":\"sha1:H5U7LQCNMXVE4E3RKXRW77DW7BTPTELC\",\"WARC-Block-Digest\":\"sha1:GLHPX3NLRWSANAGSEDID75YN5L73GQNW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496671239.99_warc_CC-MAIN-20191122042047-20191122070047-00363.warc.gz\"}"}
https://targetmol.com/compound/CX-157
[ "# CX-157\n\nCatalog No. T15023   CAS 205187-53-7\n\nCX-157 is a reversible Monoamine Oxidase-A (MAO-A) inhibitor (EC50:19.3 ng/mL).\n\nAll products from TargetMol are for Research Use Only. Not for Human or Veterinary or Therapeutic Use.", null, "CX-157, CAS 205187-53-7\nProduct consultation\nGet quote\nPurity: 98%\nBiological Description\nChemical Properties\nStorage & Solubility Information\n Description CX-157 is a reversible monoamine oxidase-A (MAO-A) inhibitor (EC50:19.3 ng/mL). Targets&IC50 MAO-A:(EC50)19.3 ng/ml In vivo CX-157 is an investigational reagent currently in development for the treatment of major depressive disorder (MDD). Mechanistic studies in animals have shown that CX-157 acts to inhibit MAO-A activity in a reversible and competitive manner which improving brain levels of monoamine neurotransmitters.\n Molecular Weight 348.27 Formula C14H8F4O4S CAS No. 205187-53-7\n\n#### Storage\n\nPowder: -20°C for 3 years\n\nIn solvent: -80°C for 2 years\n\n#### Solubility Information\n\n( < 1 mg/ml refers to the product slightly soluble or insoluble )\n\n##", null, "Dose Conversion\n\nYou can also refer to dose conversion for different animals. More\n\n##", null, "In vivo Formulation Calculator (Clear solution)\n\nStep One: Enter information below\nDosage\nmg/kg\nAverage weight of animals\ng\nDosing volume per animal\nul\nNumber of animals\nStep Two: Enter the in vivo formulation\n% DMSO\n%\n% Tween 80\n% ddH2O\n\n##", null, "Calculator\n\nMolarity Calculator\nDilution Calculator\nReconstitution Calculation\nMolecular Weight Calculator\n=\nX\nX\n\n### Molarity Calculator allows you to calculate the\n\n• Mass of a compound required to prepare a solution of known volume and concentration\n• Volume of solution required to dissolve a compound of known mass to a desired concentration\n• Concentration of a solution resulting from a known mass of compound in a specific volume\nSee Example\n\nAn example of a molarity calculation using the molarity calculator\nWhat is the mass of compound required to make a 10 mM stock solution in 10 ml of water given that the molecular weight of the compound is 197.13 g/mol?\nEnter 197.13 into the Molecular Weight (MW) box\nEnter 10 into the Concentration box and select the correct unit (millimolar)\nEnter 10 into the Volume box and select the correct unit (milliliter)\nPress calculate\nThe answer of 19.713 mg appears in the Mass box\n\nX\n=\nX\n\n### Calculator the dilution required to prepare a stock solution\n\nCalculate the dilution required to prepare a stock solution\nThe dilution calculator is a useful tool which allows you to calculate how to dilute a stock solution of known concentration. 
Enter C1, C2 & V2 to calculate V1.\n\nSee Example\n\nAn example of a dilution calculation using the Tocris dilution calculator\nWhat volume of a given 10 mM stock solution is required to make 20ml of a 50 μM solution?\nUsing the equation C1V1 = C2V2, where C1=10 mM, C2=50 μM, V2=20 ml and V1 is the unknown:\nEnter 10 into the Concentration (start) box and select the correct unit (millimolar)\nEnter 50 into the Concentration (final) box and select the correct unit (micromolar)\nEnter 20 into the Volume (final) box and select the correct unit (milliliter)\nPress calculate\nThe answer of 100 microliter (0.1 ml) appears in the Volume (start) box\n\n=\n/\n\n### Calculate the volume of solvent required to reconstitute your vial.\n\nThe reconstitution calculator allows you to quickly calculate the volume of a reagent to reconstitute your vial.\nSimply enter the mass of reagent and the target concentration and the calculator will determine the rest.\n\ng/mol\n\n### Enter the chemical formula of a compound to calculate its molar mass and elemental composition\n\nTip: Chemical formula is case sensitive: C10H16N2O2 c10h16n2o2\n\nInstructions to calculate molar mass (molecular weight) of a chemical compound:\nTo calculate molar mass of a chemical compound, please enter its chemical formula and click 'Calculate'.\nDefinitions of molecular mass, molecular weight, molar mass and molar weight:\nMolecular mass (molecular weight) is the mass of one molecule of a substance and is expressed n the unified atomic mass units (u). (1 u is equal to 1/12 the mass of one atom of carbon-12)\nMolar mass (molar weight) is the mass of one mole of a substance and is expressed in g/mol.\n\nbottom\n\n## Tech Support\n\nPlease see Inhibitor Handling Instructions for more frequently ask questions. Topics include: how to prepare stock solutions, how to store products, and cautions on cell-based assays & animal experiments, etc." ]
[ null, "https://www.targetmol.com/file/group1/M00/03/7D/CgoaDmTA9kmENx1IAAAAAPdWbvw363.png", null, "https://targetmol.com/images/icons2/calculator.svg", null, "https://targetmol.com/images/icons2/calculator.svg", null, "https://targetmol.com/images/icons2/calculator.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8083452,"math_prob":0.9483172,"size":1140,"snap":"2023-40-2023-50","text_gpt3_token_len":318,"char_repetition_ratio":0.10915493,"word_repetition_ratio":0.0875,"special_character_ratio":0.28245613,"punctuation_ratio":0.106481485,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97000796,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T17:15:31Z\",\"WARC-Record-ID\":\"<urn:uuid:0035228e-5558-43a4-8098-ead672669cb3>\",\"Content-Length\":\"111296\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4bcece40-b248-4779-ada2-9b9b3dd8c310>\",\"WARC-Concurrent-To\":\"<urn:uuid:66ef3d63-361f-4345-91af-93b93caae1a5>\",\"WARC-IP-Address\":\"49.51.37.151\",\"WARC-Target-URI\":\"https://targetmol.com/compound/CX-157\",\"WARC-Payload-Digest\":\"sha1:77PDV2OT4F53IQHL62X4YXAXQAUHT7OT\",\"WARC-Block-Digest\":\"sha1:ZUUIVWEV5BZWZPIUAKF5OY5R53SF7FUI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510520.98_warc_CC-MAIN-20230929154432-20230929184432-00408.warc.gz\"}"}
https://mathemania.com/lesson/exponential-function/
[ "# Exponential function\n\nIf $a$ is the given base, $a>0$, $a \\neq 1$, and if $x$ is any real number, then the function\n\n$$f(x) = a^x$$\n\nis called the exponential function.\n\nThe base $a = 10^x$ has the special role in the calculating of powers. Let’s observe powers of the shape $10^x$.\n\nFor this purpose, in the coordinate plane we will draw the graph of the function $f(x) = 10^x$.", null, "Therefore, the graph of the function $f(x) = 10^x$ is:", null, "As we can see, for the positive real numbers $x$, this exponential function grows very fast, and for the negative real numbers $x$ the function falls to the zero and it’s very close to the negative part of the $x$ – axis.\n\nSimilarly, we can draw the graph of the function $f(x) = a ^x$, for any base $a$, $a>0$, $a \\neq 1$. For instance, we will draw the graph of the function $f(x) = 3^x$:", null, "Now we will observe the exponential function with the base $0 < a < 1$.  For instance, let be $a = \\displaystyle{\\frac{1}{3}}$. In the same coordinate plane we will draw functions $f(x) = 3^x$ and $g(x) = \\left( \\displaystyle{\\frac{1}{3}} \\right)^x$:", null, "We can notice that functions $f$ and $g$ have the same values for numbers of opposite signs, because $f(-x) = 3^{-x} = g(x)$. Therefore, graphs of these functions are symmetrical to the respect of the $y$- axis.\n\nProperties of exponential function\n\nThe exponential function $f(x) = a^x$, $a>0$, $a \\neq 1$, has the following properties:\n\n1. The function is defined for every  real number $x$.\n2. All values of the function are positive real numbers.\n3.  $a^x \\cdot a^y = a^{x+y}$\n4. $(a^x)^y = a^{xy}$\n5. $(a \\cdot b)^x = a^x \\cdot b^x$\n6. $a^0 =1$\n7. If $a>1$, then for $x_1 < x_2$ is valid $a^{x_1} < a^{x_2}$\n8. If $0<a<1$, then for $x_1 > x_2$ is valid $a^{x_1}> a^{x_2}$" ]
[ null, "https://mathemania.com/wp-content/uploads/2017/07/tablica.png", null, "https://mathemania.com/wp-content/uploads/2017/07/eksponencijalna.png", null, "https://mathemania.com/wp-content/uploads/2017/07/eksp1.png", null, "https://mathemania.com/wp-content/uploads/2017/07/eksp2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.66328144,"math_prob":1.000009,"size":1711,"snap":"2022-40-2023-06","text_gpt3_token_len":567,"char_repetition_ratio":0.16285881,"word_repetition_ratio":0.06774194,"special_character_ratio":0.3547633,"punctuation_ratio":0.10215054,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000099,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,2,null,3,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-01T19:44:21Z\",\"WARC-Record-ID\":\"<urn:uuid:eb675da2-736e-402f-871f-e3b3e6a4c243>\",\"Content-Length\":\"92995\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8689bedc-eb5b-4961-8501-3660e175cb6c>\",\"WARC-Concurrent-To\":\"<urn:uuid:97d2f984-6e15-4498-ba2b-03655685a190>\",\"WARC-IP-Address\":\"137.184.39.193\",\"WARC-Target-URI\":\"https://mathemania.com/lesson/exponential-function/\",\"WARC-Payload-Digest\":\"sha1:3CLXTPZONNJLIC5677JKMQSLSSRETGFZ\",\"WARC-Block-Digest\":\"sha1:SMFIOJ4MT2OZOCHMQBLAMR2DGBVND5QN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499949.24_warc_CC-MAIN-20230201180036-20230201210036-00591.warc.gz\"}"}
https://www.sourcetrail.com/python/programm-the-fibonacci-sequence/
[ "# Solved: programm the fibonacci sequence\n\nThe main problem with programming the Fibonacci sequence is that it is not a precise sequence. The first two numbers in the sequence are always the same, but the next two numbers are not always equal. This can cause problems when trying to create a program to calculate the next number in the sequence.\n\n```\ndef Fibonacci(n):\nif n<0:\nprint(\"Incorrect input\")\n\nelif n==1:\nreturn 0\n\nelif n==2:\nreturn 1\nelse:\nreturn Fibonacci(n-1)+Fibonacci(n-2)```\n\nThis is a recursive function for generating Fibonacci numbers. The function takes an integer input, n, and returns the nth Fibonacci number. If the input is less than 0, it prints an error message. If the input is 1 or 2, it returns the first or second Fibonacci number, respectively. Otherwise, it returns the sum of the previous two Fibonacci numbers.\n\nContents\n\n## Fibonacci\n\nIn mathematics, Fibonacci is a sequence of numbers which starts with 0 and 1, and goes on to each successive number by adding the previous two numbers together. The sequence is named after Leonardo Fibonacci, who introduced it in 1202.\n\n## Sequences\n\nSequences are a powerful data structure in Python. They allow you to store multiple values in a single location, and access them sequentially.\n\nFor example, you can create a sequence of numbers using the range() function:\n\n1, 2, 3, 4, 5\n\nYou can also create a sequence of strings using the string() function:\n\n“one”, “two”, “three”, “four”, “five”\n\nRelated posts:" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81675607,"math_prob":0.97630256,"size":1399,"snap":"2022-27-2022-33","text_gpt3_token_len":322,"char_repetition_ratio":0.1734767,"word_repetition_ratio":0.0,"special_character_ratio":0.231594,"punctuation_ratio":0.13829787,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992923,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-04T21:06:12Z\",\"WARC-Record-ID\":\"<urn:uuid:ea01398b-fb74-4e2a-838c-ace9dd9a2c28>\",\"Content-Length\":\"98162\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f68cb3b1-1679-47cf-b9b6-c373acda25ac>\",\"WARC-Concurrent-To\":\"<urn:uuid:2392a1fd-f9d2-4b26-85f5-d9131dc57b61>\",\"WARC-IP-Address\":\"178.255.231.122\",\"WARC-Target-URI\":\"https://www.sourcetrail.com/python/programm-the-fibonacci-sequence/\",\"WARC-Payload-Digest\":\"sha1:AAK2IUILXCSRAPGT6Z6Y5TLWLORNXDQR\",\"WARC-Block-Digest\":\"sha1:FY7BWTFSZDUFA2W6V3XI4Q6WY3VXMBZN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104496688.78_warc_CC-MAIN-20220704202455-20220704232455-00706.warc.gz\"}"}
https://etoobusy.polettix.it/2022/02/03/pwc150-square-free-integer/
[ "TL;DR\n\nOn with TASK #2 from The Weekly Challenge #150. Enjoy!\n\n# The challenge\n\nWrite a script to generate all square-free integers <= 500.\n\nIn mathematics, a square-free integer (or squarefree integer) is an integer which is divisible by no perfect square other than 1. That is, its prime factorization has exactly one factor for each prime that appears in it. For example, 10 = 2 ⋅ 5 is square-free, but 18 = 2 ⋅ 3 ⋅ 3 is not, because 18 is divisible by 9 = 3**2.\n\nExample\n\nThe smallest positive square-free integers are\n1, 2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17, 19, 21, 22, 23, 26, 29, 30, ...\n\n\n# The questions\n\nAs it often happens, I’m nitpicking on the details about the domain of our investigation: should we consider negative values? I guess not by the examples…\n\n# The solution\n\nsub is_square_free ($N) { return unless$N % 4;\nmy $divisor = 3; while ($N > $divisor) { if ($N % $divisor == 0) {$N /= $divisor; return unless$N % $divisor; }$divisor += 2; # go through odd candidates only\n}\nreturn 1;\n}\n\n\nThe goal is not to find all divisors, so… we don’t find them and we take every possible chance to bail out with a false value. It can happen in two cases:\n\n• if the number is a multiple of 4 because… 4 is a square, you know;\n• otherwise, if the number happens to have the same divisor twice.\n\nWhy the explicit check on 4? Well, in this way we can get the prime number 2 out of the way, and iterate only through odd divisors, starting at 3. Actually, we might start at 7 because the first positive integer that is neither a multiple of 4 nor square-free is 9. Whatever.\n\nI like the Raku translation better because it allows us to use the is-divisible-by operator %%, instead of its “contrary” (sort of) remainder-in-the-division-by %:\n\nsub is-square-free ($N is copy) { return False if$N %% 4;\nmy $divisor = 3; while$N > $divisor { if$N %% $divisor {$N = ($N /$divisor).Int;\nreturn False if $N %%$divisor;\n}\n$divisor += 2; # go through odd candidates only } return True; } This makes the whole thing more readable, but at the end of the day it was pretty readable also to begin with. I have a little itch in the fact that the division between the two integers gives out a rational even when the result is an integer… but whatever. I also like the availability of proper boolean constants, again I think it adds to the readability. The Raku version also allowed me to play a bit with multi subroutines, in the MAIN: multi sub MAIN (Int$limit = 500) {\nmy @list = (1 .. $limit).grep({is-square-free($_)});\nwhile @list {\n@list.splice(0, 20).join(', ').print;\nput @list ?? ',' !! '';\n}\n}\n\nmulti sub MAIN (*@args) {\nput $_, ' ', (is-square-free($_) ?? 'is' !! 'is not'), ' square free'\nfor @args;\n}\n\n\nI’m providing three different ways to call the program:\n\n• with no parameter, the limit is set to 500 like the challenge asks;\n• with one single parameter, the limit is set by the parameter itself;\n• with multiple parameters, each is checked for being square-free or not.\n\nThe multi helps distinguishing the first two cases from the last, which is functionally nifty.\n\nOK, I didn’t include the full programs for both languages… but you know where to find them should you be curious.\n\nStay safe and have fun!\n\nComments? Octodon, , GitHub, Reddit, or drop me a line!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8645165,"math_prob":0.967691,"size":3298,"snap":"2023-40-2023-50","text_gpt3_token_len":916,"char_repetition_ratio":0.11596843,"word_repetition_ratio":0.042414356,"special_character_ratio":0.3020012,"punctuation_ratio":0.15767045,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98852044,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T23:54:23Z\",\"WARC-Record-ID\":\"<urn:uuid:b4de5514-45ea-4fa7-9247-3489c3d48e54>\",\"Content-Length\":\"9801\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:292d8052-2fd0-44fd-b4c7-1d07c0088edc>\",\"WARC-Concurrent-To\":\"<urn:uuid:32cca44f-8448-4b0d-9531-f0c7d1a5bb78>\",\"WARC-IP-Address\":\"217.197.91.145\",\"WARC-Target-URI\":\"https://etoobusy.polettix.it/2022/02/03/pwc150-square-free-integer/\",\"WARC-Payload-Digest\":\"sha1:DWL6XF7M2GAST7SXCRN46Q25Y2ASVASC\",\"WARC-Block-Digest\":\"sha1:5C5DYQ3DMCAORSHAK2RBI4AG56K34ALX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510462.75_warc_CC-MAIN-20230928230810-20230929020810-00018.warc.gz\"}"}
https://metanumbers.com/1076153
[ "# 1076153 (number)\n\n1,076,153 (one million seventy-six thousand one hundred fifty-three) is an odd seven-digits composite number following 1076152 and preceding 1076154. In scientific notation, it is written as 1.076153 × 106. The sum of its digits is 23. It has a total of 2 prime factors and 4 positive divisors. There are 993,360 positive integers (up to 1076153) that are relatively prime to 1076153.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 7\n• Sum of Digits 23\n• Digital Root 5\n\n## Name\n\nShort name 1 million 76 thousand 153 one million seventy-six thousand one hundred fifty-three\n\n## Notation\n\nScientific notation 1.076153 × 106 1.076153 × 106\n\n## Prime Factorization of 1076153\n\nPrime Factorization 13 × 82781\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 2 Total number of prime factors rad(n) 1076153 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 1,076,153 is 13 × 82781. Since it has a total of 2 prime factors, 1,076,153 is a composite number.\n\n## Divisors of 1076153\n\n4 divisors\n\n Even divisors 0 4 4 0\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 4 Total number of the positive divisors of n σ(n) 1.15895e+06 Sum of all the positive divisors of n s(n) 82795 Sum of the proper positive divisors of n A(n) 289737 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 1037.38 Returns the nth root of the product of n divisors H(n) 3.71424 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 1,076,153 can be divided by 4 positive divisors (out of which 0 are even, and 4 are odd). The sum of these divisors (counting 1,076,153) is 1,158,948, the average is 289,737.\n\n## Other Arithmetic Functions (n = 1076153)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 993360 Total number of positive integers not greater than n that are coprime to n λ(n) 248340 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 83828 Total number of primes less than or equal to n r2(n) 16 The number of ways n can be represented as the sum of 2 squares\n\nThere are 993,360 positive integers (less than 1,076,153) that are coprime with 1,076,153. 
And there are approximately 83,828 prime numbers less than or equal to 1,076,153.\n\n## Divisibility of 1076153\n\n m | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9\n n mod m | 1 | 2 | 1 | 3 | 5 | 1 | 1 | 5\n\n1,076,153 is not divisible by any of the numbers from 2 to 9.\n\n## Classification of 1076153\n\n• Arithmetic\n• Semiprime\n• Deficient\n• Polite\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n\n## Base conversion (1076153)\n\n Base | System | Value\n 2 | Binary | 100000110101110111001\n 3 | Ternary | 2000200012112\n 4 | Quaternary | 10012232321\n 5 | Quinary | 233414103\n 6 | Senary | 35022105\n 8 | Octal | 4065671\n 10 | Decimal | 1076153\n 12 | Duodecimal | 43a935\n 20 | Vigesimal | 6ea7d\n 36 | Base36 | n2d5\n\n## Basic calculations (n = 1076153)\n\n### Multiplication\n\nn×2 = 2152306\nn×3 = 3228459\nn×4 = 4304612\nn×5 = 5380765\n\n### Division\n\nn÷2 ≈ 538076\nn÷3 ≈ 358718\nn÷4 ≈ 269038\nn÷5 ≈ 215231\n\n### Exponentiation\n\nn^2 = 1158105279409\nn^3 = 1246298470751833577\nn^4 = 1341207838194997959389281\nn^5 = 1443344838697061638990652915993\n\n### Nth Root\n\n2√n ≈ 1037.38\n3√n ≈ 102.477\n4√n ≈ 32.2084\n5√n ≈ 16.0833\n\n## 1076153 as geometric shapes\n\n### Circle\n\nDiameter 2.15231e+06 | Circumference 6.76167e+06 | Area 3.6383e+12\n\n### Sphere\n\nVolume 5.22048e+18 | Surface area 1.45532e+13 | Circumference 6.76167e+06\n\n### Square\n\nLength = n\nPerimeter 4.30461e+06 | Area 1.15811e+12 | Diagonal 1.52191e+06\n\n### Cube\n\nLength = n\nSurface area 6.94863e+12 | Volume 1.2463e+18 | Space diagonal 1.86395e+06\n\n### Equilateral Triangle\n\nLength = n\nPerimeter 3.22846e+06 | Area 5.01474e+11 | Height 931976\n\n### Triangular Pyramid\n\nLength = n\nSurface area 2.0059e+12 | Volume 1.46878e+17 | Height 878675\n\n## Cryptographic Hash Functions\n\nmd5: f6c66c397bb75510a95a78378d97039c\nsha1: bde2601d043239c32023e931fdcbfba9faf017bf\nsha256: 776516786a70c3c1fd09bbbd36fbf70ab789fd96546e67351067172c2b6270f9\nsha512: 6179aaa6ce483b4c56e239eba58913ff3ebd795e713a4913dd460c365582c84b850c2e0dfd921be1d59efab721485df03d695c04da95071089abb0dcee425a61\nripemd-160: bc1585eb4d0af71ada35e4dd843e35482000df00" ]
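Most of the values on this page can be recomputed in a few lines. A sketch using sympy (assuming it is available):

```python
from sympy import divisors, factorint, totient

n = 1076153
print(factorint(n))      # {13: 1, 82781: 1} -- the prime factorization above
print(divisors(n))       # [1, 13, 82781, 1076153], so tau(n) = 4
print(sum(divisors(n)))  # 1158948 -- sigma(n)
print(totient(n))        # 993360 -- phi(n)
```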
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.62649095,"math_prob":0.9814391,"size":4768,"snap":"2021-43-2021-49","text_gpt3_token_len":1681,"char_repetition_ratio":0.12153652,"word_repetition_ratio":0.03211679,"special_character_ratio":0.46371645,"punctuation_ratio":0.08405439,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99567026,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T06:41:59Z\",\"WARC-Record-ID\":\"<urn:uuid:1a8d46d6-8504-4e94-99f2-876a84ca8575>\",\"Content-Length\":\"40121\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:51521b20-c73a-451d-88dd-eb5b74cef83a>\",\"WARC-Concurrent-To\":\"<urn:uuid:a12242f4-fcbe-4b17-8b6b-e63a8164e3d6>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/1076153\",\"WARC-Payload-Digest\":\"sha1:3TT6MJA3IZMJZKE37YIA447TG3577MU5\",\"WARC-Block-Digest\":\"sha1:LFNB5LUUTSATILX6JJRH2STWY6S2XBCO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363336.93_warc_CC-MAIN-20211207045002-20211207075002-00106.warc.gz\"}"}
https://www.softmath.com/math-book-answers/sum-of-cubes/how-to-use-algebrator.html
[ "", null, "## What our customers say...\n\nThousands of users are using our software to conquer their algebra homework. Here are some of their experiences:\n\nAll in all, this is a very useful, well-designed algebra help tool for school classes and homework.\nReese Pontoon, MO\n\nOK here is what I like: much friendlier interface, coverage of functions, trig. better graphing, wizards. However, still no word problems, pre-calc, calc. (Please tell me that you are working on it - who is going to do my homework when I am past College Algebra?!?\nNathan Lane, AZ\n\nI have tried many other programs that did not deliver on what they promised. I decided to take a chance with the Algebrator. All I can say is WOW! Thank you!\nAlan Cox, TX\n\n## Search phrases used on 2013-07-10:\n\nStudents struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?\n\n• solving for 2nd order equation\n• second order linear nonhomogeneous differential equations\n• abstract algebra tests quizzes\n• simplify square roots calculator\n• solve and graph\n• mixed numbers to decimals\n• algebra 1 homework\n• what is the method to solve mixed numbers\n• online ratio simplifier\n• standard form to vertex form\n• chemistry points all free for teachers\n• a two digit number that is a perfect square\n• free example beginner algebra\n• free samples of iq questions for written bank exam\n• how do i write a standard form equation into vertex form\n• accounting notes calc ti programs\n• introduction to slope graphing calculators\n• algebra for beginners PDF\n• Who is the person who discovered greatest common factors\n• free college math tutorials - video\n• answer key to glencoe algebra 2\n• finding the roots of an expression calculator\n• least common denominator worksheets\n• Abstract algebra help\n• free worksheet for application of proportion - 6 grade\n• free algebra expressions and equations worksheets\n• applications of common logarithms worksheet\n• asset examination maths practice papers for class V\n• online t83 calculator\n• worksheets on adding and subtracting integers\n• middle school math with pizzazz order of operations\n• 5th grade algebraic expressions worksheets\n• aptitude questions pdf\n• 8th grade math solved papers\n• least common denominator calculator online\n• factor cubed polynomials\n• continuous discrete data activity hands-on 5th grade\n• worksheet algebra\n• merrill chemistry worksheet answer keys\n• solve my fraction\n• intermediate algebra proofs\n• java aptitude test\n• exponential simplify square root\n• free online college algebra solver\n• exponential equation excel\n• multiplying and dividing in scientific notation\n• finding the vertex of a absolute value\n• identify skew segments, pre algebra\n• www.algebrahelp.net\n• saxon math algebra 1 answers\n• McDougal Littell online Answer Key\n• scientific notation ( radical expressions)\n• what is a non factorable polynomial called?\n• algebra 2 statistic worksheets\n• free printable pictograph worksheets\n• factoring program for ti 84\n• factorize complex number\n• how to convert repeated decimals into fraction\n• subtract worksheets\n• logarithms ti-83 plus\n• lineal calculator\n• Order Operations Math Worksheet\n• multiplying binomials and monomials calculator online" ]
[ null, "https://www.softmath.com/r-solver/images/tutor.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8442152,"math_prob":0.8939636,"size":3985,"snap":"2021-43-2021-49","text_gpt3_token_len":890,"char_repetition_ratio":0.11931676,"word_repetition_ratio":0.0,"special_character_ratio":0.20828106,"punctuation_ratio":0.0576,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99787337,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T02:59:39Z\",\"WARC-Record-ID\":\"<urn:uuid:5c016bed-13c1-4ca8-b7dd-20b06f23c3d7>\",\"Content-Length\":\"35078\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae71a6bc-240c-4be3-bfb7-3eaf853b1893>\",\"WARC-Concurrent-To\":\"<urn:uuid:27f891cd-0741-4866-9789-4cf9124ce1e2>\",\"WARC-IP-Address\":\"52.43.142.96\",\"WARC-Target-URI\":\"https://www.softmath.com/math-book-answers/sum-of-cubes/how-to-use-algebrator.html\",\"WARC-Payload-Digest\":\"sha1:62VWQY4PHWTMJ6E5J6BZBQY42D6SW2KQ\",\"WARC-Block-Digest\":\"sha1:DDMFOSEPRIQPQ22LYGMZB4TQ2XRB47B2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358078.2_warc_CC-MAIN-20211127013935-20211127043935-00504.warc.gz\"}"}
https://wiki.bash-hackers.org/syntax/arith_expr?rev=1351878670&do=diff
[ "# Differences\n\nThis shows you the differences between two versions of the page.\n\n syntax:arith_expr [2012/11/02 17:51]techlivezheng [Arithmetic expressions] syntax:arith_expr [2017/02/11 14:22] (current)fgrose [Table] meaning of ternary operator 2017/02/11 14:22 fgrose [Table] meaning of ternary operator2013/04/19 18:49 thebonsai [Arithmetic expressions] change link to Greg's wiki, thanks Joan2012/11/02 18:00 techlivezheng [Arithmetic expressions and return codes] Fix a mistake2012/11/02 17:51 techlivezheng [Arithmetic expressions] 2011/11/17 23:13 ormaaj declare -i foo=[arith]2011/03/22 06:12 fgrose [Arithmetic expressions and return codes] 2011/03/21 05:29 fgrose [Arithmetic expressions and return codes] 2011/03/21 05:27 fgrose [Arithmetic expressions and return codes] 2011/03/21 05:16 fgrose [Arithmetic expressions and return codes] 2010/11/23 21:40 external edit 2017/02/11 14:22 fgrose [Table] meaning of ternary operator2013/04/19 18:49 thebonsai [Arithmetic expressions] change link to Greg's wiki, thanks Joan2012/11/02 18:00 techlivezheng [Arithmetic expressions and return codes] Fix a mistake2012/11/02 17:51 techlivezheng [Arithmetic expressions] 2011/11/17 23:13 ormaaj declare -i foo=[arith]2011/03/22 06:12 fgrose [Arithmetic expressions and return codes] 2011/03/21 05:29 fgrose [Arithmetic expressions and return codes] 2011/03/21 05:27 fgrose [Arithmetic expressions and return codes] 2011/03/21 05:16 fgrose [Arithmetic expressions and return codes] 2010/11/23 21:40 external edit Line 15: Line 15: These expressions are evaluated following some rules described below. The operators and rules of arithmetic expressions are mainly derived from the C programming language. These expressions are evaluated following some rules described below. The operators and rules of arithmetic expressions are mainly derived from the C programming language. - This article describes the theory of the used syntax and the behaviour. To get practical examples without big explanations,​ see [[http://​wooledge.org/​mywiki/ArithmeticExpression ​| the article ​on Greg's wiki]]. + This article describes the theory of the used syntax and the behaviour. To get practical examples without big explanations,​ see [[http://mywiki.wooledge.org/​BashGuide/CompoundCommands#​Arithmetic_Evaluation ​| this page on Greg's wiki]]. 
===== Constants ===== ===== Constants ===== Line 164: Line 164: ==== Misc ==== ==== Misc ==== - ^Operator^Description^ + ^ Operator ​                     ^ Description ​                                                                          ​^ - |''​id++''​|**post-increment** of the variable ''​id''​ (not required by POSIX(r))| + | ''​id++'' ​                     | **post-increment** of the variable ''​id''​ (not required by POSIX(r)) ​                 | - |''​id<​nowiki>​--​''​|**post-decrement** of the variable ''​id''​ (not required by POSIX(r))| + | ''​id%%--%%'' ​                 | **post-decrement** of the variable ''​id''​ (not required by POSIX(r)) ​                 | - |''​++id''​|**pre-increment** of the variable ''​id''​ (not required by POSIX(r))| + | ''​++id'' ​                     | **pre-increment** of the variable ''​id''​ (not required by POSIX(r)) ​                  ​| - |''​<​nowiki>​--​id''​|**pre-decrement** of the variable ''​id''​ (not required by POSIX(r))| + | ''​%%--%%id'' ​                 | **pre-decrement** of the variable ''​id''​ (not required by POSIX(r)) ​                  ​| - |''​+''​|unary plus| + | ''​+'' ​                        ​| unary plus                                                                            | - |''​-''​|unary minus| + | ''​-'' ​                        ​| unary minus                                                                           ​| - |''<​EXPR>​ ? <​EXPR>​ : <​EXPR>''​|conditional (ternary) operator| + | ''<​EXPR>​ ? <​EXPR>​ : <​EXPR>'' ​ | conditional (ternary) operator ​\\\\ <​condition>​ ? <​result-if-true>​ : <​result-if-false>  ​| - |''<​EXPR>​ , <​EXPR>''​|expression list| + | ''<​EXPR>​ , <​EXPR>'' ​          ​| expression list                                                                       ​| - |''​( <​EXPR>​ )''​|subexpression (to force precedence)| + | ''​( <​EXPR>​ )'' ​               | subexpression (to force precedence) ​                                                  ​| Line 205: Line 205: Bash's overall language construct is based on exit codes or return codes of commands or functions to be executed. ''​if''​ statements, ''​while''​ loops, etc., they all take the return codes of commands as conditions. Bash's overall language construct is based on exit codes or return codes of commands or functions to be executed. ''​if''​ statements, ''​while''​ loops, etc., they all take the return codes of commands as conditions. - Now the problem is: The return codes (0 means \"​TRUE\"​ or \"​SUCCESS\",​ not 0 means \"​FALSE\"​ or \"​FAILURE\"​) don't correspond to the meaning of the result of an arithmetic expression (0 means \"TRUE\", not 0 means \"FALSE\"). + Now the problem is: The return codes (0 means \"​TRUE\"​ or \"​SUCCESS\",​ not 0 means \"​FALSE\"​ or \"​FAILURE\"​) don't correspond to the meaning of the result of an arithmetic expression (0 means \"FALSE\", not 0 means \"TRUE\"). That's why all commands and keywords that do arithmetic operations attempt to **translate** the arithmetical meaning into an equivalent return code. This simply means: That's why all commands and keywords that do arithmetic operations attempt to **translate** the arithmetical meaning into an equivalent return code. This simply means:\n• syntax/arith_expr.1351878670.txt" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.755238,"math_prob":0.82278764,"size":3753,"snap":"2019-51-2020-05","text_gpt3_token_len":984,"char_repetition_ratio":0.1157642,"word_repetition_ratio":0.51492536,"special_character_ratio":0.31841195,"punctuation_ratio":0.09715243,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98035944,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T01:42:34Z\",\"WARC-Record-ID\":\"<urn:uuid:5448fdf0-cf47-4bfe-be68-9c8f82e72537>\",\"Content-Length\":\"33534\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9d74d6a9-5d3b-4ab7-bd97-af0fa79704c5>\",\"WARC-Concurrent-To\":\"<urn:uuid:dde135e2-1935-4158-a993-f5cb902849ae>\",\"WARC-IP-Address\":\"83.243.40.67\",\"WARC-Target-URI\":\"https://wiki.bash-hackers.org/syntax/arith_expr?rev=1351878670&do=diff\",\"WARC-Payload-Digest\":\"sha1:AQCOT7B4TGHVG3CW6YMIUVRGCLOSHTQ7\",\"WARC-Block-Digest\":\"sha1:4OUCHVBNB5PECE4YBOWY3VHH4K2EJGGS\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540482954.0_warc_CC-MAIN-20191206000309-20191206024309-00073.warc.gz\"}"}
https://www.calendarbede.com/book/calculation-orthodox-easter-sunday
[ "# Calculation of Orthodox Easter Sunday in the Julian calendar\n\nNot only does the Orthodox Church use the old Julian calendar, but other chronological cycles of the year to calculate the calendar date for Easter Sunday (and Passover) as well. Of course, the final date of Easter Sunday coincides with the western calculation of Easter Sunday in the Julian calendar (see the page Calculating Easter Sunday in the Julian calendar). The principle of the calculation is the same: first, we calculate the date of the first spring ecclesiastical full moon, and then the following Sunday is Easter Sunday. In the following calculations, only integers (whole numbers) are counted, with the character '%' (modulo) indicating that only the remainder after division is sought (e.g. 23/5 = 4 and 23 % 5 = 3).\n\nTo start with, we need to convert the given year to the year in the Byzantine calendar (in English, this means “from the creation of the world”, while in Russian it’s лето от сотворения мира nebo i лето от Адама). According to the Orthodox Church, the world was created on 1 September, year 1 of the Byzantine era. Sometimes, two values for the Orthodox calendar year’s foundations are given, with some calendars listing home many days until 1 September (Julian calendar) and how many days since 1 September (in brackets). In the case of Easter, we’re only interested in the first value. The leap year system is the same as it is for the Julian calendar. The conversion to the Byzantine date is then made by simply adding the constant 5508 to our year:\n\ncreation_of_world = year + 5508\n\nIn year 1 of the Byzantine era, three Orthodox calendar cycles (Russian: круг Луны, круг Солнцу, индикт) had a value of 1. By simply dividing by the length of the selected cycle and finding the remainder after this division, we obtain the value of the cycle. If the resulting value is 0, then this 0 is adjusted to a value equal to the cycle length. The first cycle is the lunar cycle (Russian: Kруг Луны), which is the equivalent of our Golden Number with the same 19-year long duration. It only one needs to be shifted by three years, which makes for an easy calculation:\n\nlunar_cycle = (creation_of_world - 1) % 19 + 1\n\nFor the sake of comparison, below is a table with the values of the Golden Number and the Kруг Луны. Unlike the Golden Number cycle, where the lunar leap (Latin: saltus lunae) is performed at the end of the cycle, the same leap (Russian: скачок Луны) is performed at the end of the 16th year of their lunar cycle. The table shows that although the monthly leap is performed at different values in both cycles, they are in fact, timed out the same.\n\n Golden Number 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 Lunar cycle 17 18 19 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16\n\nFrom the value of the lunar cycle, we then calculate the foundation (Russian: основание), which indicates the age of the moon on 1 March. If the lunar cycle is larger than 16, a minor correction is required due to the monthly leap:\n\nfoundation = (11 × lunar_cycle + 3) % 30\nif lunar_cycle is greater than 16, foundation = foundation + 1\n\nIf we already know the lunar phase on 1 March, by subtracting 30 from it, we get the March date of the New Moon (Russian: мартовское новолуние), and by adding 14 to that we’ll get the date of the half moon (Russian: мартовское полнолуние). 
From here, it's necessary to apply a special rule: in order to align the calculated phases of the Moon with the actual phases of the moon at the time of the Council of Nicaea, three more days need to be added. If what we get for the final date of the full moon (Russian: пасхальное полнолуние) is before 21 March (i.e. before the date declared by the church as the beginning of spring), we choose the next full moon, which can be obtained by simply adding the entire length of a lunation (30 days). Everything can be summarized in a fairly simple calculation:\n\necclesiastical_full_moon = 47 - foundation\nif the ecclesiastical_full_moon is less than 21, then ecclesiastical_full_moon = ecclesiastical_full_moon + 30\n\nThe result is the March date of the first spring ecclesiastical full moon. If the number exceeds 31, for example the number 33, then subtract 31 and set it as an April date (i.e. 2 April in this example). Another Orthodox foundation of the year is the epact (Russian: епакта), the value of which shows the March date that falls on the twentieth day of the lunar month (this coincides with the end of the Passover celebration). For example, if the lunar cycle (the Orthodox counterpart to the Golden Number) is equal to 1, then the foundation is equal to 14, meaning the moon is 14 days old on 1 March. In 6 more days (7 March) the Moon will be 20 days old, which means that the epact has a value of 7. This Orthodox epact (do not confuse it with the Gregorian epact) is obtained by simply taking 21 and subtracting the foundation. If we get a number less than 1, we add 30. However, this epact is not needed for this calculation of Easter Sunday. The term “correct date” (Russian: исправная дата), defined as the date in a given year before which Passover is not possible, can be found throughout the sources. In essence, it is the date of the ecclesiastical full moon plus one day.\n\nNow we can make a simple table with the foundational elements of the Orthodox calculation for each value of the lunar cycle. The foundation increases regularly by 11, and if the value is greater than 30 we subtract 30. After 16 years of the lunar cycle, a monthly leap is made (this is highlighted in the table) and the value of the foundation increases by 12. Similarly, the epact decreases by 11 (or by 12 at the monthly leap), and if the result is less than 1, we add 30.\n\n Lunar cycle (круг Луны) | foundation (основание) | epact (епакта) | ecclesiastical full moon (пасхальное полнолуние)\n 1 | 14 | 7 | 33\n 2 | 25 | 26 | 22\n 3 | 6 | 15 | 41\n 4 | 17 | 4 | 30\n 5 | 28 | 23 | 49\n 6 | 9 | 12 | 38\n 7 | 20 | 1 | 27\n 8 | 1 | 20 | 46\n 9 | 12 | 9 | 35\n 10 | 23 | 28 | 24\n 11 | 4 | 17 | 43\n 12 | 15 | 6 | 32\n 13 | 26 | 25 | 21\n 14 | 7 | 14 | 40\n 15 | 18 | 3 | 29\n 16 | 29 | 22 | 48\n 17 | 11 | 10 | 36\n 18 | 22 | 29 | 25\n 19 | 3 | 18 | 44\n\nWe already know the date for the first full moon of spring, but now it is necessary to find out when the next Sunday occurs. The equivalent of our solar cycle in the Orthodox calendar is the круг Солнцу, which also takes values from 1 to 28. Its calculation is rather simple:\n\nsolar_cycle = (creation_of_world - 1) % 28 + 1\n\nAnother Orthodox foundation for calculating the year is its 'vruceleto' (Russian: вруцелето), which determines which day of the week (день недели) falls on 1 September (the beginning of the Byzantine year). 
Its value is easily obtained using the following table:\n\n vruceleto (вруцелето) | solar cycle (круг Солнцу)\n 1 | 1, 7, 12, 18\n 2 | 2, 13, 19, 24\n 3 | 3, 8, 14, 25\n 4 | 9, 15, 20, 26\n 5 | 4, 10, 21, 27\n 6 | 5, 11, 16, 22\n 7 | 6, 17, 23, 28\n\nThe value for vruceleto can also be obtained by a small calculation:\n\nvruceleto = (solar_cycle + solar_cycle / 4 - 1) % 7 + 1\n\nInstead of a number, the first letters (Russian: буква) of the Cyrillic alphabet (азбука) were most frequently used. Often a word was added to the letter, all in accordance with the following table:\n\n vruceleto (вруцелето) | letter (буква) | 1st of September\n 1 | А (аз) | Sunday\n 2 | В (веди) | Monday\n 3 | Г (глаголь) | Tuesday\n 4 | Д (добро) | Wednesday\n 5 | Е (есть) | Thursday\n 6 | S (зело) | Friday\n 7 | З (земля) | Saturday\n\nBy following along with this calculation, we'll find the day in March when the first Sunday of the month, or the 'first resurrection' (Russian: первое воскресение), occurs:\n\nif vruceleto is less than 4, first_resurrection = 4 - vruceleto\notherwise, first_resurrection = 11 - vruceleto\n\nThe result is shown in this small table:\n\n vruceleto (вруцелето) | first resurrection (первое воскресенье)\n 1 | 3rd of March\n 2 | 2nd of March\n 3 | 1st of March\n 4 | 7th of March\n 5 | 6th of March\n 6 | 5th of March\n 7 | 4th of March\n\nFinally, the last calculation finds the next Sunday after the first spring ecclesiastical full moon. This is a March date, so if the result is greater than 31, it is April, and in order to obtain the April date, it's necessary to subtract 31 from the result. This Sunday is what we've been looking for all along, and gives us Orthodox Easter Sunday (Russian: Христианская Пасха):\n\npassover = ecclesiastical_full_moon + 7 - (ecclesiastical_full_moon - first_resurrection) % 7\nif we get a number greater than 31, it's an April date and it is necessary to subtract 31; otherwise it's a March date\n\nIn printed calendars, instead of the calendar date for Orthodox Easter Sunday, the 'boundary key' (Russian: ключ границ) was mentioned. It shows how many days after 21 March Easter Sunday falls. This number can take values from 1 to 35: at the value 1, Easter Sunday falls on its earliest date, 22 March, while at the value 35 we have the latest possible date for Easter, 25 April. Each number is assigned a letter according to the table:\n\n 1 (А) March 22 | 2 (Б) March 23 | 3 (В) March 24 | 4 (Г) March 25 | 5 (Д) March 26 | 6 (Е) March 27 | 7 (Ж) March 28 | 8 (Ѕ) March 29 | 9 (З) March 30 | 10 (И) March 31 | 11 (І) Apr 1 | 12 (К) Apr 2 | 13 (Л) Apr 3 | 14 (М) Apr 4 | 15 (Н) Apr 5 | 16 (О) Apr 6 | 17 (П) Apr 7 | 18 (Р) Apr 8 | 19 (С) Apr 9 | 20 (Т) Apr 10 | 21 (У) Apr 11 | 22 (Ф) Apr 12 | 23 (Х) Apr 13 | 24 (Ѿ) Apr 14 | 25 (Ц) Apr 15 | 26 (Ч) Apr 16 | 27 (Ш) Apr 17 | 28 (Щ) Apr 18 | 29 (Ъ) Apr 19 | 30 (Ы) Apr 20 | 31 (Ь) Apr 21 | 32 (Ѣ) Apr 22 | 33 (Ю) Apr 23 | 34 (Ѫ) Apr 24 | 35 (Ѧ) Apr 25\n\nNote: in some calendars, the “indiction” (Russian: индикт) was sometimes displayed, the value of which is always the same as the Western indiction. However, it plays no role in the Easter Sunday calculation. Do not confuse this Western indiction with the term “large indiction” (Russian: великий индиктион or миротворный круг, or even more rarely круг великой альфы), which is a cycle of 532 years. After this length of time, the calendar dates for Passover are repeated in the Julian calendar, and in the same order. In the Julian calendar, the calendar dates are repeated in the same order after 28 years (7 × 4: 7 days a week, and a leap every fourth year), a number that shares no common factor with the 19 values of the lunar cycle. 
It is from this that we calculate the large indiction as 28 × 19 = 532. We are currently in the fifteenth cycle since year 1 of the Byzantine era. This cycle began in 1941 AD and ends in 2472 AD.\n\nThe above calculations and tables apply only to the old Julian calendar!" ]
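All of the steps above condense into a short program. Here is a Python transcription (my own, following the formulas exactly), returning the month and day in the Julian calendar:

```python
def orthodox_easter_julian(year):
    """Julian-calendar date of Orthodox Easter, computed step by step as above."""
    creation_of_world = year + 5508                    # Byzantine year
    lunar_cycle = (creation_of_world - 1) % 19 + 1
    foundation = (11 * lunar_cycle + 3) % 30           # age of the moon on 1 March
    if lunar_cycle > 16:                               # correction for the lunar leap
        foundation += 1
    full_moon = 47 - foundation                        # March date of the paschal full moon
    if full_moon < 21:                                 # must fall on or after 21 March
        full_moon += 30
    solar_cycle = (creation_of_world - 1) % 28 + 1
    vruceleto = (solar_cycle + solar_cycle // 4 - 1) % 7 + 1
    first_resurrection = 4 - vruceleto if vruceleto < 4 else 11 - vruceleto
    pascha = full_moon + 7 - (full_moon - first_resurrection) % 7
    return ("April", pascha - 31) if pascha > 31 else ("March", pascha)

print(orthodox_easter_julian(2024))  # ('April', 22), i.e. 5 May 2024 in the Gregorian calendar
```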
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86355066,"math_prob":0.9752707,"size":10209,"snap":"2022-40-2023-06","text_gpt3_token_len":3192,"char_repetition_ratio":0.14483097,"word_repetition_ratio":0.027983105,"special_character_ratio":0.30424136,"punctuation_ratio":0.08593375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9518115,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-05T10:38:56Z\",\"WARC-Record-ID\":\"<urn:uuid:3f6e2176-fb02-47aa-9ee1-9abc02e9c298>\",\"Content-Length\":\"41845\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:53ca39f2-2f3b-45fd-8aeb-964ce53c24c7>\",\"WARC-Concurrent-To\":\"<urn:uuid:07f9df98-9293-4b18-a62f-6c170bac472a>\",\"WARC-IP-Address\":\"80.211.207.141\",\"WARC-Target-URI\":\"https://www.calendarbede.com/book/calculation-orthodox-easter-sunday\",\"WARC-Payload-Digest\":\"sha1:5UWBSYXCVZ5PWYFJER37L3LBOYGEXQ6Z\",\"WARC-Block-Digest\":\"sha1:CVBDOVE2YDFTFA4KBCLE6HP4RNCAQR5R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500251.38_warc_CC-MAIN-20230205094841-20230205124841-00226.warc.gz\"}"}
https://stackoverflow.com/questions/26355942/why-is-the-f-measure-a-harmonic-mean-and-not-an-arithmetic-mean-of-the-precision/26360501
[ "# Why is the F-Measure a harmonic mean and not an arithmetic mean of the Precision and Recall measures?\n\nWhen we calculate the F-Measure considering both Precision and Recall, we take the harmonic mean of the two measures instead of a simple arithmetic mean.\n\nWhat is the intuitive reason behind taking the harmonic mean and not a simple average?\n\nAccording to the theory of measurement the composite measure should satisfy the following 6 definitions:\n\n1. Connectedness(two pairs can be ordered) and transitivity(if e1 >= e2 and e2 >= e3 then e1 >= e3)\n2. Independence: two components contribute their effects independently to the effectiveness.\n3. Thomsen condition: Given that at a constant recall (precision) we find a difference in effectiveness for two values of precision (recall) then this difference cannot be removed or reversed by changing the constant value.\n4. Restricted solvability.\n5. Each component is essential: Variation in one while leaving the other constant gives a variation in effectiveness.\n6. Archimedean property for each component. It merely ensures that the intervals on a component are comparable.\n\nWe can then derive and get the function of the effectiveness:", null, "And normally we don't use the effectiveness but the much simper F score because:", null, "Now that we have the general formula of F measure:", null, "where we can place more emphesis on recall or precision by setting beta, because beta is defined as follows:", null, "If we weight recall more important than precision(all relevant are selected) we can set beta as 2 and we get the F2 measure. And if we do the reverse and weight precision higher than recall(as much selected elements are relevant as possible, for instance in some grammar error correction scenarios like CoNLL) we just set beta as 0.5 and get the F0.5 measure. And obviously we can set beta as 1 to get the mostly used F1 measure(harmonic mean of precision and recall).\n\nI think to some extent I have already answered why we do not use the arithmetic mean.\n\nTo explain, consider for example, what the average of 30mph and 40mph is? if you drive for 1 hour at each speed, the average speed over the 2 hours is indeed the arithmetic average, 35mph.\n\nHowever if you drive for the same distance at each speed -- say 10 miles -- then the average speed over 20 miles is the harmonic mean of 30 and 40, about 34.3mph.\n\nThe reason is that for the average to be valid, you really need the values to be in the same scaled units. Miles per hour need to be compared over the same number of hours; to compare over the same number of miles you need to average hours per mile instead, which is exactly what the harmonic mean does.\n\nPrecision and recall both have true positives in the numerator, and different denominators. To average them it really only makes sense to average their reciprocals, thus the harmonic mean.\n\n• Thanks, that is a good argument on why this is supported from theory; my answer was more on the pragmatic side. – Anony-Mousse Oct 14 '14 at 20:51\n\nBecause it punishes extreme values more.\n\nConsider a trivial method (e.g. always returning class A). There are infinite data elements of class B, and a single element of class A:\n\n``````Precision: 0.0\nRecall: 1.0\n``````\n\nWhen taking the arithmetic mean, it would have 50% correct. Despite being the worst possible outcome! 
With the harmonic mean, the F1-measure is 0.\n\n``````Arithmetic mean: 0.5\nHarmonic mean: 0.0\n``````\n\nIn other words, to have a high F1, you need both high precision and high recall.\n\n• When the recall is 0.0, the precision has to be greater than 0.0, right? But I get the point in your example. Nicely explained - Thanks. – London guy Oct 14 '14 at 12:29\n• In your example, precision for class A is 0.5 instead of 0 and recall of class A is 1; precision for class B is 0 and recall of class B is 0 as well. I assume your balanced class means the true labels are A and B; each applies to 50% of data. – greeness Oct 15 '14 at 8:52\n• Let's make infinite elements of class B, and a single element of class A. It doesn't change the math behind F1. – Anony-Mousse Oct 15 '14 at 13:20\n• It is not just a heuristic to select more balance. The harmonic mean is the only way that makes sense given the units of these ratios. The mean wouldn't have a meaning in comparison. – Sean Owen Mar 2 '16 at 9:02\n• Where does it say \"heuristic\", and where does your comment differ from my answer? But: F-measure is a heuristic in that it assumes precision and recall are equally important. That is why the beta term needs to be chosen - heuristically, one usually uses beta=1. – Anony-Mousse Mar 2 '16 at 9:06\n\nThe harmonic mean is the equivalent of the arithmetic mean for reciprocals of quantities that should be averaged by the arithmetic mean. More precisely, with the harmonic mean, you transform all your numbers to the \"averageable\" form (by taking the reciprocal), you take their arithmetic mean, and then transform the result back to the original representation (by taking the reciprocal again).\n\nPrecision and recall are \"naturally\" reciprocals because their numerator is the same and their denominators are different. Fractions are more sensible to average by arithmetic mean when they have the same denominator.\n\nFor more intuition, suppose that we keep the number of true positive items constant. Then by taking the harmonic mean of the precision and the recall, you implicitly take the arithmetic mean of the false positives and the false negatives. It basically means that false positives and false negatives are equally important to you when the true positives stay the same. If an algorithm has N more false positive items but N fewer false negatives (while having the same true positives), the F-measure stays the same.\n\nIn other words, the F-measure is suitable when:\n\n1. mistakes are equally bad, whether they are false positives or false negatives\n2. the number of mistakes is measured relative to the number of true positives\n3. true negatives are uninteresting\n\nPoint 1 may or may not be true; there are weighted variants of the F-measure that can be used if this assumption isn't true. Point 2 is quite natural since we can expect the results to scale if we just classify more and more points. The relative numbers should stay the same.\n\nPoint 3 is quite interesting. In many applications negatives are the natural default, and it may even be hard or arbitrary to specify what really counts as a true negative. For example, a fire alarm has a true negative event every second, every nanosecond, every time a Planck time has passed, etc. Even a piece of rock has these true negative fire-detection events all the time.\n\nOr in a face detection case, most of the time you \"correctly don't return\" billions of possible areas in the image, but this is not interesting. 
The interesting cases are when you do return a proposed detection or when you should return it.\n\nBy contrast, classification accuracy cares equally about true positives and true negatives and is more suitable if the total number of samples (classification events) is well-defined and rather small.\n\n• Very well explained! – Oren Yosifon Jul 30 '16 at 9:30\n\nThe above answers are well explained. This is just a quick reference to understand the nature of the arithmetic mean and the harmonic mean with plots. Consider the X axis and Y axis as precision and recall, and the Z axis as the F1 score. As the plot of the harmonic mean shows, both precision and recall have to contribute for the F1 score to rise, unlike with the arithmetic mean.\n\nThis is for the arithmetic mean.", null, "This is for the Harmonic mean.", null, "• Please use formatting tools to properly edit and format your answer. The image should be displayed here, it's not a hyperlink. – Prateek Mar 28 '18 at 13:30" ]
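As a quick illustration, here is a Python sketch of the general F-beta formula shown above, applied to the degenerate classifier from the accepted answer (precision 0, recall 1):

```python
def f_beta(precision, recall, beta=1.0):
    """Weighted harmonic mean: (1 + b^2) * P * R / (b^2 * P + R)."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.0, 1.0
print((p + r) / 2)                 # arithmetic mean: 0.5, despite useless precision
print(f_beta(p, r))                # F1 (harmonic mean): 0.0
print(f_beta(0.2, 0.8, beta=2))    # 0.5   -- beta=2 weights recall higher
print(f_beta(0.2, 0.8, beta=0.5))  # ~0.235 -- beta=0.5 weights precision higher
```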
[ null, "https://i.stack.imgur.com/VX8co.png", null, "https://i.stack.imgur.com/ACesD.png", null, "https://i.stack.imgur.com/VGDMI.png", null, "https://i.stack.imgur.com/2hCM1.png", null, "https://i.stack.imgur.com/SYA7h.jpg", null, "https://i.stack.imgur.com/slxFc.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92572194,"math_prob":0.9776598,"size":5813,"snap":"2019-43-2019-47","text_gpt3_token_len":1229,"char_repetition_ratio":0.15028404,"word_repetition_ratio":0.07931727,"special_character_ratio":0.20677792,"punctuation_ratio":0.09460654,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9949066,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,4,null,2,null,3,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-19T17:36:18Z\",\"WARC-Record-ID\":\"<urn:uuid:aae38ff8-3fe6-4da0-8dc2-877ec4eed303>\",\"Content-Length\":\"182841\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c73d5659-dfbf-419b-b424-13278985fb86>\",\"WARC-Concurrent-To\":\"<urn:uuid:f76b21da-1b6f-4464-89cf-ac79884c8c37>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/26355942/why-is-the-f-measure-a-harmonic-mean-and-not-an-arithmetic-mean-of-the-precision/26360501\",\"WARC-Payload-Digest\":\"sha1:5L4JWZIWUEHMX22GJT7VPMYGNDBUV762\",\"WARC-Block-Digest\":\"sha1:W7QECH3JATMKT2GGHX4PIB6OL4W5VB2Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670162.76_warc_CC-MAIN-20191119172137-20191119200137-00492.warc.gz\"}"}
https://socratic.org/questions/how-do-you-express-x-2-x-3-in-partial-fractions
[ "# How do you express (x²+2) / (x+3) in partial fractions?\n\nFeb 22, 2016\n\n$\\frac{x}{1} + \\frac{- 3 x + 2}{x + 3}$\n\n#### Explanation:\n\nbecause the top quadratic and the bottom is linear you're looking for something or the form\n\n$\\frac{A}{1} + \\frac{B}{x + 3}$, were $A$ and $B$ will both be linear functions of $x$ (like 2x+4 or similar).\n\nWe know one bottom must be one because x+3 is linear.\n\nWe're starting with\n$\\frac{A}{1} + \\frac{B}{x + 3}$.\nWe then apply standard fraction addition rules. We need to get then to a common base.\n\nThis is just like numerical fractions $\\frac{1}{3} + \\frac{1}{4} = \\frac{3}{12} + \\frac{4}{12} = \\frac{7}{12.}$\n\n$\\frac{A}{1} + \\frac{B}{x + 3} \\implies \\frac{A \\cdot \\left(x + 3\\right)}{1 \\cdot \\left(x + 3\\right)} + \\frac{B}{x + 3} = \\frac{A \\cdot \\left(x + 3\\right) + B}{x + 3}$.\nSo we get the bottom automatically.\n\nNow we set $A \\cdot \\left(x + 3\\right) + B = {x}^{2} + 2$\n$A x + 3 A + B = {x}^{2} + 2$\n$A$ and $B$ are linear terms so the ${x}^{2}$ must come from $A x$.\nlet $A x = {x}^{2}$ $\\implies$ $A = x$\nThen\n$3 A + B = 2$\nsubstituting $A = x$, gives\n$3 x + B = 2$\nor\n$B = 2 - 3 x$\nin standard from this is $B = - 3 x + 2$.\nPutting it all together we have\n$\\frac{x}{1} + \\frac{- 3 x + 2}{x + 3}$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76485986,"math_prob":1.0000099,"size":547,"snap":"2020-34-2020-40","text_gpt3_token_len":129,"char_repetition_ratio":0.09944751,"word_repetition_ratio":0.0,"special_character_ratio":0.21572211,"punctuation_ratio":0.057692308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.000009,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T13:09:00Z\",\"WARC-Record-ID\":\"<urn:uuid:2816bdfc-5ca5-4723-aff4-daa9a640071d>\",\"Content-Length\":\"35375\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:508b2d5b-dbf1-4f48-b998-8311c16b58d9>\",\"WARC-Concurrent-To\":\"<urn:uuid:ec6bd552-b4d3-4fe6-ae7d-ef6138f974aa>\",\"WARC-IP-Address\":\"216.239.34.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-express-x-2-x-3-in-partial-fractions\",\"WARC-Payload-Digest\":\"sha1:7S3FLCYTOOHKVCYX7BGBHTUJ2QKC5CU3\",\"WARC-Block-Digest\":\"sha1:MYIFAGY4PRIDDQPFJZNQ3WEXWUY6P735\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402131412.93_warc_CC-MAIN-20201001112433-20201001142433-00345.warc.gz\"}"}
https://onlineforexcharts.org/general/all-about-fibonacci-retracements-and-fibonacci-ratios/
[ "# All about Fibonacci Retracements and Fibonacci Ratios\n\nFibonacci retracements are a common tool among traders and are based on the Fibonacci series identified in the 13th century by the famous Leonardo Fibonacci. The Fibonacci sequence is a set of numbers that is also related to nature, and also called the golden ratio.\n\nA Fibonacci retracement tool is used by technical analysts as a guide to the behavior of the market. Traders use the tool for identification of resistance levels, drawing support lines setting target prices, and placing stop-loss orders.\n\nDuring a technical analysis, Fibonacci retracement is done by picking a different extreme point on the stock chart, mostly peak and trough. The vertical distance is then divided by the Fibonacci ratios, i.e., 23.6%, 38.2%, 50%, 61,8% and 100%. After identifying the levels, a horizontal line is drawn and identifies the hypothetical resistance and support levels.\n\n## How the Fibonacci Ratio Works\n\nThe Fibonacci series is: 0,1,1,2,3,5,8,13,21,34,55, etc. Each term in the series is a sum of the preceding terms, and the sequence goes on to infinity. One characteristic of the Fibonacci sequence is that every number in the series is approximately 1.618 times greater than the previous number. This is the very foundation of the ratio’s traders use to work out the retracement level.\n\nThe key ratio of 61.8% is derived by the division of one number in the sequence, by the number following it. For instance, 21 divided by 34= 0.6176, 55 divided by 89= 0.61798. The 38.2% is arrived at by dividing one number in the sequence by the number falling two places to its right.\n\nFor example, 55 divided by 144= around 0.38194. The 23,6% ratio is arrived at by dividing one number in the sequence, by the number three spots to its right. The 23.6% ratio is arrived at by dividing one number in the sequence by a number three spots to its right. For instance, 8/32= approximately 0.23529.\n\n## Fibonacci Retracing and How it Predicts Stock Prices\n\nFor some reason, the Fibonacci ratios work in the stock market as they work in nature. Traders try to use them for the determination of crucial points where the price momentum of an asset is likely to go backward.\n\nFibonacci retracements are the most common of the Fibonacci trading tools. This is because of their simplicity in part, and because they apply to almost all trading tools. They are used as primary mechanisms in countertrend trading strategies.\n\nFibonacci retracement levels are horizontal lines, which show the hypothetical support and resistance levels. Every level is related to one of the percentages or ratios. It indicates how much or the previous move the price retraces, with the previous trend’s direction likely to go on. The asset price normally retraces to one of the ratios before this happens.\n\n## Pros and Cons of the Fibonacci Retracement\n\nAs popular as the Fibonacci retracements are, these tools have some disadvantages.\n\n• Using the Fibonacci retracement tool is subjective, and can be used in different ways. Some people who make money using it swear on its effectiveness, while those who make losses swear on its unreliability.\n• Some traders consider the Fibonacci retracement as an illusion. Since the tool is repeatedly used by most traders, they often get similar results every time. This means that orders are stuck at similar price levels, which pushes the price in the direction they want.\n• To use the Fibonacci tool, you have to have an in-depth understanding of the tool. 
## Fibonacci Retracements and How They Predict Stock Prices

For some reason, the Fibonacci ratios seem to work in the stock market as they do in nature. Traders use them to identify crucial points where the price momentum of an asset is likely to reverse.

Fibonacci retracements are the most common of the Fibonacci trading tools. This is partly because of their simplicity, and partly because they apply to almost all trading instruments. They are used as primary mechanisms in countertrend trading strategies.

Fibonacci retracement levels are horizontal lines which show the hypothetical support and resistance levels. Every level is related to one of the percentages or ratios. It indicates how much of the previous move the price retraces before the previous trend is likely to resume. The asset price normally retraces to one of the ratios before this happens.

## Pros and Cons of the Fibonacci Retracement

As popular as Fibonacci retracements are, these tools have some disadvantages.

• Using the Fibonacci retracement tool is subjective, and it can be applied in different ways. Some traders who make money with it swear by its effectiveness, while those who lose money dismiss it as unreliable.
• Some traders consider the Fibonacci retracement a self-fulfilling prophecy. Since the tool is used by so many traders, they often get similar results, which means that orders cluster at similar price levels, pushing the price in the expected direction.
• Using the Fibonacci tool requires an in-depth understanding of it. Lines drawn on price charts at the percentages will not give you results unless you know what to look for. Beginner traders should be wary of using these tools and should make sure that an asset price's dip is temporary, not a permanent reversal.

## Conclusion

A Fibonacci retracement is a powerful tool when used alongside other technical or indicator signals. The support and resistance levels are an indicator of potential rises or dips in market trends. This can show traders when it is lucrative to open or close positions. The Fibonacci retracement can be very useful and yield rewards for any trader who knows how to use it properly. Beginners are advised to approach the tool with caution, or they risk losing money while using this complex tool." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94367844,"math_prob":0.91512454,"size":4176,"snap":"2021-43-2021-49","text_gpt3_token_len":895,"char_repetition_ratio":0.15604027,"word_repetition_ratio":0.047482014,"special_character_ratio":0.21599618,"punctuation_ratio":0.116930574,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98745584,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-22T04:15:32Z\",\"WARC-Record-ID\":\"<urn:uuid:5ead9b50-ec5f-48c7-a88f-46c9cd5a5a12>\",\"Content-Length\":\"37757\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ee41477f-7acb-4ace-a6ad-674578bc850c>\",\"WARC-Concurrent-To\":\"<urn:uuid:84ed569b-610e-407c-bb03-2201b791cbf4>\",\"WARC-IP-Address\":\"188.165.139.11\",\"WARC-Target-URI\":\"https://onlineforexcharts.org/general/all-about-fibonacci-retracements-and-fibonacci-ratios/\",\"WARC-Payload-Digest\":\"sha1:AY3S77RAFGDUFJJWA3CZ3UZBRE4TFX5F\",\"WARC-Block-Digest\":\"sha1:JTPV5OHPWFX2A4GYBASYGKUCGTEULSH2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585450.39_warc_CC-MAIN-20211022021705-20211022051705-00053.warc.gz\"}"}
https://api.dart.cn/stable/2.18.5/dart-collection/ListMixin/reduce.html
[ "# reduce method Null safety\n\nE reduce(\n1. E combine(\n1. E previousValue,\n2. E element\n)\n)\noverride\n\nReduces a collection to a single value by iteratively combining elements of the collection using the provided function.\n\nThe iterable must have at least one element. If it has only one element, that element is returned.\n\nOtherwise this method starts with the first element from the iterator, and then combines it with the remaining elements in iteration order, as if by:\n\n``````E value = iterable.first;\niterable.skip(1).forEach((element) {\nvalue = combine(value, element);\n});\nreturn value;\n``````\n\nExample of calculating the sum of an iterable:\n\n``````final numbers = <double>[10, 2, 5, 0.5];\nfinal result = numbers.reduce((value, element) => value + element);\nprint(result); // 17.5\n``````\n\n## Implementation\n\n``````E reduce(E combine(E previousValue, E element)) {\nint length = this.length;\nif (length == 0) throw IterableElementError.noElement();\nE value = this;\nfor (int i = 1; i < length; i++) {\nvalue = combine(value, this[i]);\nif (length != this.length) {\nthrow ConcurrentModificationError(this);\n}\n}\nreturn value;\n}``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6023763,"math_prob":0.9971267,"size":1065,"snap":"2023-40-2023-50","text_gpt3_token_len":254,"char_repetition_ratio":0.16965127,"word_repetition_ratio":0.0,"special_character_ratio":0.26760563,"punctuation_ratio":0.205,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9879374,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T05:26:03Z\",\"WARC-Record-ID\":\"<urn:uuid:3b6d8355-46c9-46f2-93d4-38f7901d21de>\",\"Content-Length\":\"12241\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:19481e80-9156-4364-83c2-cbc7afa314de>\",\"WARC-Concurrent-To\":\"<urn:uuid:36ad5cb7-617c-4390-a202-eee3cda59951>\",\"WARC-IP-Address\":\"8.45.176.211\",\"WARC-Target-URI\":\"https://api.dart.cn/stable/2.18.5/dart-collection/ListMixin/reduce.html\",\"WARC-Payload-Digest\":\"sha1:B2ZQUZWYSWFSTR4HWQ35JHNAD5U54LQG\",\"WARC-Block-Digest\":\"sha1:DZOVX7WA5DKR4EQQSDKNACXEN6MFILEV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100583.13_warc_CC-MAIN-20231206031946-20231206061946-00306.warc.gz\"}"}
https://scipost.org/SciPostPhys.11.3.058
[ "## First law and quantum correction for holographic entanglement contour\n\nMuxin Han, Qiang Wen\n\nSciPost Phys. 11, 058 (2021) · published 14 September 2021\n\n### Abstract\n\nEntanglement entropy satisfies a first law-like relation, which equates the first order perturbation of the entanglement entropy for the region $A$ to the first order perturbation of the expectation value of the modular Hamiltonian, $\\delta S_{A}=\\delta \\langle K_A \\rangle$. We propose that this relation has a finer version which states that, the first order perturbation of the entanglement contour equals to the first order perturbation of the contour of the modular Hamiltonian, i.e. $\\delta s_{A}(\\textbf{x})=\\delta \\langle k_{A}(\\textbf{x})\\rangle$. Here the contour functions $s_{A}(\\textbf{x})$ and $k_{A}(\\textbf{x})$ capture the contribution from the degrees of freedom at $\\textbf{x}$ to $S_{A}$ and $K_A$ respectively. In some simple cases $k_{A}(\\textbf{x})$ is determined by the stress tensor. We also evaluate the quantum correction to the entanglement contour using the fine structure of the entanglement wedge and the additive linear combination (ALC) proposal for partial entanglement entropy (PEE) respectively. The fine structure picture shows that, the quantum correction to the boundary PEE can be identified as a bulk PEE of certain bulk region. While the \\textit{ALC proposal} shows that the quantum correction to the boundary PEE comes from the linear combination of bulk entanglement entropy. We focus on holographic theories with local modular Hamiltonian and configurations of quantum field theories where the \\textit{ALC proposal} applies.\n\n### Cited by 7", null, "### Authors / Affiliations: mappings to Contributors and Organizations\n\nSee all Organizations.\nFunder for the research work leading to this publication" ]
[ null, "https://scipost.org/static/scipost/images/citedby.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7457514,"math_prob":0.96513194,"size":1960,"snap":"2022-40-2023-06","text_gpt3_token_len":486,"char_repetition_ratio":0.13445808,"word_repetition_ratio":0.07089552,"special_character_ratio":0.22551021,"punctuation_ratio":0.07763975,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9928776,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T21:07:01Z\",\"WARC-Record-ID\":\"<urn:uuid:91474bea-2bbf-4204-8fdc-2375f8364fe3>\",\"Content-Length\":\"31029\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:02ad15e2-5a12-4d39-ae5d-7c3875a2e0b1>\",\"WARC-Concurrent-To\":\"<urn:uuid:95f00396-7d53-4abb-b3a0-f843d1637623>\",\"WARC-IP-Address\":\"142.93.224.252\",\"WARC-Target-URI\":\"https://scipost.org/SciPostPhys.11.3.058\",\"WARC-Payload-Digest\":\"sha1:D7ND7RBA36AFYWKUS4LWQVJHOMKHSBEL\",\"WARC-Block-Digest\":\"sha1:AF7FNEWSIBO6PWLNCPDMUKGJ77J35ELT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337668.62_warc_CC-MAIN-20221005203530-20221005233530-00445.warc.gz\"}"}
https://www.apsnet.org/edcenter/disimpactmngmnt/topc/EcologyAndEpidemiologyInR/DiseaseProgress/Pages/LinearRegression.aspx
[ "# Linear regression in R\n\nThe AUDPC is not the only method for summarizing disease progress; regression analyses are also often applied. Regression analysis is a statistical tool for describing the relationship between two or more quantitative variables such that one variable (the dependent or response variable) may be predicted from other variable(s) (the independent or predictor variable(s)). For example, if the relationship between the severity of disease and time is known, disease severity can be predicted at a specified time. If we have only one predictor variable and the response and the predictor variable have a linear relationship, the data can be analyzed with a simple linear model. When there is more than one predictor variable, multiple regression may be used. When linear models are not sufficient to describe the data, a nonlinear regression model may be useful. In this section, some examples of simple linear regression are presented using R.\n\n## Linear regression\n\nLinear regression compares two variables x and y to answer the question, 'how does y change with x?' ' For example, what is disease severity (y) at two weeks (x)? Or, what will be the expected disease severity at the end of the growing season if no treatment is applied?\n\nA simple linear regression model is of the form: yi = β0ixii, where εi is iid Normal (0, σ2); that is, the error terms are independent and identically distributed following a normal distribution with mean 0 and variance σ2. The word \"linear\" in the model refers to the linear influence of the parameters β0 and β1, which are the regression coefficients. Specifically, β1 is the slope of the regression line, that is, the change in y corresponding to a unit change in x. β0 is the intercept, or the y value when x=0. The intercept has no practical meaning if the condition x=0 cannot occur, but is necessary to specify the model.\n\nIn simple linear regression the influence of random error in the model is allowed only in the error term ε. The predictor variable, x, is considered to be a deterministic quantity. The validity of the result for a typical linear regression model requires the fulfillment of the following assumptions:\n\n• the linear model is appropriate,\n• the error terms are independent,\n• the error terms are approximately normally distributed,\n• and the error terms have a common variance.\n\nFor more discussion on simple linear regression the reader should refer to a text for regression analysis (e.g. Harrell 2001). 
If model checking for a simple linear regression model indicates that a linear model is not appropriate, one could consider transformation, nonlinear regression, or other nonparametric options.

## How to perform linear regression in R

R makes the function `lm(formula, data, subset)` available.
For a complete description of `lm`, type: `help(lm)`.
Here is a simple example of the relationship between disease severity and temperature:

`## Disease severity as a function of temperature

# Response variable, disease severity
diseasesev <- c(1.9,3.1,3.3,4.8,5.3,6.1,6.4,7.6,9.8,12.4)

# Predictor variable, temperature (Centigrade)
temperature <- c(2,1,5,5,20,20,23,10,30,25)

# Take a look at the data
plot(temperature, diseasesev)`

## Output

`## For convenience, the data may be formatted into a dataframe
severity <- as.data.frame(cbind(diseasesev, temperature))

## Fit a linear model for the data and summarize
## the output from function lm()
severity.lm <- lm(diseasesev ~ temperature, data=severity)

## Generate a summary of the linear model
summary(severity.lm)`

## Output

`Residuals:
    Min      1Q  Median      3Q     Max 
-2.1959 -1.3584 -0.3417  0.7460  3.6957 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)  
(Intercept)  2.66233    1.10082   2.418  0.04195 *
temperature  0.24168    0.06346   3.808  0.00518 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

Residual standard error: 2.028 on 8 degrees of freedom
Multiple R-Squared: 0.6445, Adjusted R-squared: 0.6001 
F-statistic: 14.5 on 1 and 8 DF, p-value: 0.005175`

## Graphical tools for testing assumptions

Common graphical tools for testing assumptions include plots evaluating the characteristics of residuals, the part of the observations that is not explained by the model:

1. scatter plots of the residuals vs. x or the fitted value
2. normal probability plots of the residuals.

Look for a pattern in the graph of residuals vs. fitted values that might suggest a non-constant variance. Equal variances should be dispersed evenly around zero. Other patterns can indicate that a linear regression model may not be appropriate for the data. If the pattern indicates unequal variances, a more complicated model that specifies unequal variances may be appropriate, or transformation of y might be useful.

`## which=1 produces a graph of residuals vs fitted values
plot(severity.lm, which=1)`

The graph shown below indicates some pattern in residuals which could be explored further, though this level of pattern relative to the magnitude of changes in the observed data may be unimportant.

## Output

`## which=2 produces a graph of a quantile-quantile (QQ) plot
plot(severity.lm, which=2)`

## Output

Points should lie close to a line on the QQ plot if residuals are from a normal distribution. The QQ plot above shows a very slight indication that the residuals might not come from a normal distribution, since the last two observations deviate farther from the fitted line.

After fitting the linear model, the function `predict()` can be used to generate fitted values.
The function `data.frame()` is used below to create a table of original data and fitted values.

`options(digits=4)
fit.with.se <- predict(severity.lm, se.fit=TRUE)
data.frame(severity,
  fitted.value=predict(severity.lm),
  residual=resid(severity.lm),
  fit.with.se)`

## Output

`   diseasesev temperature fitted.value residual   fit se.fit df residual.scale
1         1.9           2        3.146  -1.2457 3.146 1.0004  8          2.028
2         3.1           1        2.904   0.1960 2.904 1.0499  8          2.028
3         3.3           5        3.871  -0.5707 3.871 0.8629  8          2.028
4         4.8           5        3.871   0.9293 3.871 0.8629  8          2.028
5         5.3          20        7.496  -2.1959 7.496 0.7425  8          2.028
6         6.1          20        7.496  -1.3959 7.496 0.7425  8          2.028
7         6.4          23        8.221  -1.8209 8.221 0.8545  8          2.028
8         7.6          10        5.079   2.5209 5.079 0.6920  8          2.028
9         9.8          30        9.913  -0.1127 9.913 1.1955  8          2.028
10       12.4          25        8.704   3.6957 8.704 0.9432  8          2.028`

We can also plot the data set together with the fitted line.

`plot(diseasesev ~ temperature,
  data=severity,
  xlab=\"Temperature\",
  ylab=\"% Disease Severity\",
  pch=16)
abline(severity.lm, lty=1)
title(main=\"Graph of % Disease Severity vs Temperature\")`

## Output

If the process of checking model assumptions indicates that the linear model is adequate, the linear model can be used to predict the response variable for a given predictor variable.

`## Predict disease severity for three new temperatures
new <- data.frame(temperature=c(15,16,17))
predict(severity.lm, newdata=new, interval=\"confidence\")`

## Output

`    fit   lwr   upr
1 6.288 4.803 7.772
2 6.529 5.025 8.034
3 6.771 5.233 8.309`

For example, the first fitted value follows directly from the estimated regression equation: 2.66233 + 0.24168 × 15 = 6.288.

Linear regression with intercept and slope parameters can be used to describe straight-line relationships between variables. For disease progress measured over a limited time period, a straight line may provide a perfectly adequate description. Linear regression that includes additional parameters can be used to describe curved relationships, for example by including a squared term (quadratic term) in the set of predictors, such as time squared. More complicated relationships can readily be described using nonlinear regression.

Next, nonlinear regression in R" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.775284,"math_prob":0.96467555,"size":7595,"snap":"2019-51-2020-05","text_gpt3_token_len":1986,"char_repetition_ratio":0.14688447,"word_repetition_ratio":0.020495303,"special_character_ratio":0.280711,"punctuation_ratio":0.16687346,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99690074,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-10T15:47:29Z\",\"WARC-Record-ID\":\"<urn:uuid:6e023c6a-4116-493d-8e5c-24be8695067f>\",\"Content-Length\":\"111147\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fc582084-1abd-459d-af28-e86cbae02381>\",\"WARC-Concurrent-To\":\"<urn:uuid:76bc14c4-ca42-4258-b321-8c0034acb854>\",\"WARC-IP-Address\":\"199.86.26.56\",\"WARC-Target-URI\":\"https://www.apsnet.org/edcenter/disimpactmngmnt/topc/EcologyAndEpidemiologyInR/DiseaseProgress/Pages/LinearRegression.aspx\",\"WARC-Payload-Digest\":\"sha1:PMFYQ6LKVUJWFEFF62A36STCDZ5FQDDU\",\"WARC-Block-Digest\":\"sha1:6VSVEBYFN3ATRR63M3UNCOSCYR3UOE55\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540528457.66_warc_CC-MAIN-20191210152154-20191210180154-00228.warc.gz\"}"}
https://browse.dgit.debian.org/yosys.git/commit/?id=3377a04bf26b9310017a71b2df587bad661e0da2
[ "summaryrefslogtreecommitdiff log msg author committer range\ndiff options\n context: 12345678910152025303540 space: includeignore mode: unifiedssdiffstat only\nauthor committer Clifford Wolf 2013-03-14 16:15:24 +0100 Clifford Wolf 2013-03-14 16:15:24 +0100 3377a04bf26b9310017a71b2df587bad661e0da2 (patch) f652d63a7614bf7d5e13ab3067beca2a5d5cd6ba 697cf1eb807ce48b69946a13769c647c82869efb (diff)\nChanged prefix for selection operators from # to %\n-rw-r--r--kernel/select.cc56\n1 files changed, 28 insertions, 28 deletions\n diff --git a/kernel/select.cc b/kernel/select.ccindex 650774c0..3c6fd669 100644--- a/kernel/select.cc+++ b/kernel/select.cc@@ -421,53 +421,53 @@ static void select_stmt(RTLIL::Design *design, std::string arg) if (arg.size() == 0) return; - if (arg == '#') {- if (arg == \"#\") {+ if (arg == '%') {+ if (arg == \"%\") { if (design->selection_stack.size() > 0) work_stack.push_back(design->selection_stack.back()); } else- if (arg == \"##\") {+ if (arg == \"%%\") { while (work_stack.size() > 1) { select_op_union(design, work_stack.front(), work_stack.back()); work_stack.pop_back(); } } else- if (arg == \"#n\") {+ if (arg == \"%n\") { if (work_stack.size() < 1)- log_cmd_error(\"Must have at least one element on the stack for operator #n.\\n\");+ log_cmd_error(\"Must have at least one element on the stack for operator %%n.\\n\"); select_op_neg(design, work_stack[work_stack.size()-1]); } else- if (arg == \"#u\") {+ if (arg == \"%u\") { if (work_stack.size() < 2)- log_cmd_error(\"Must have at least two elements on the stack for operator #u.\\n\");+ log_cmd_error(\"Must have at least two elements on the stack for operator %%u.\\n\"); select_op_union(design, work_stack[work_stack.size()-2], work_stack[work_stack.size()-1]); work_stack.pop_back(); } else- if (arg == \"#d\") {+ if (arg == \"%d\") { if (work_stack.size() < 2)- log_cmd_error(\"Must have at least two elements on the stack for operator #d.\\n\");+ log_cmd_error(\"Must have at least two elements on the stack for operator %%d.\\n\"); select_op_diff(design, work_stack[work_stack.size()-2], work_stack[work_stack.size()-1]); work_stack.pop_back(); } else- if (arg == \"#i\") {+ if (arg == \"%i\") { if (work_stack.size() < 2)- log_cmd_error(\"Must have at least two elements on the stack for operator #i.\\n\");+ log_cmd_error(\"Must have at least two elements on the stack for operator %%i.\\n\"); select_op_intersect(design, work_stack[work_stack.size()-2], work_stack[work_stack.size()-1]); work_stack.pop_back(); } else- if (arg == \"#x\" || (arg.size() > 2 && arg.substr(0, 2) == \"#x\" && (arg == ':' || arg == '*' || arg == '.' || ('0' <= arg && arg <= '9')))) {+ if (arg == \"%x\" || (arg.size() > 2 && arg.substr(0, 2) == \"%x\" && (arg == ':' || arg == '*' || arg == '.' || ('0' <= arg && arg <= '9')))) { if (work_stack.size() < 1)- log_cmd_error(\"Must have at least one element on the stack for operator #x.\\n\");+ log_cmd_error(\"Must have at least one element on the stack for operator %%x.\\n\"); select_op_expand(design, arg, 'x'); } else- if (arg == \"#ci\" || (arg.size() > 3 && arg.substr(0, 3) == \"#ci\" && (arg == ':' || arg == '*' || arg == '.' || ('0' <= arg && arg <= '9')))) {+ if (arg == \"%ci\" || (arg.size() > 3 && arg.substr(0, 3) == \"%ci\" && (arg == ':' || arg == '*' || arg == '.' 
|| ('0' <= arg && arg <= '9')))) { if (work_stack.size() < 1)- log_cmd_error(\"Must have at least one element on the stack for operator #ci.\\n\");+ log_cmd_error(\"Must have at least one element on the stack for operator %%ci.\\n\"); select_op_expand(design, arg, 'i'); } else- if (arg == \"#co\" || (arg.size() > 3 && arg.substr(0, 3) == \"#co\" && (arg == ':' || arg == '*' || arg == '.' || ('0' <= arg && arg <= '9')))) {+ if (arg == \"%co\" || (arg.size() > 3 && arg.substr(0, 3) == \"%co\" && (arg == ':' || arg == '*' || arg == '.' || ('0' <= arg && arg <= '9')))) { if (work_stack.size() < 1)- log_cmd_error(\"Must have at least one element on the stack for operator #co.\\n\");+ log_cmd_error(\"Must have at least one element on the stack for operator %%co.\\n\"); select_op_expand(design, arg, 'o'); } else log_cmd_error(\"Unknown selection operator '%s'.\\n\", arg.c_str());@@ -705,25 +705,25 @@ struct SelectPass : public Pass { log(\"\\n\"); log(\"The following actions can be performed on the top sets on the stack:\\n\"); log(\"\\n\");- log(\" #\\n\");+ log(\" %%\\n\"); log(\" push a copy of the current selection to the stack\\n\"); log(\"\\n\");- log(\" ##\\n\");+ log(\" %%%%\\n\"); log(\" replace the stack with a union of all elements on it\\n\"); log(\"\\n\");- log(\" #n\\n\");+ log(\" %%n\\n\"); log(\" replace top set with its invert\\n\"); log(\"\\n\");- log(\" #u\\n\");+ log(\" %%u\\n\"); log(\" replace the two top sets on the stack with their union\\n\"); log(\"\\n\");- log(\" #i\\n\");+ log(\" %%i\\n\"); log(\" replace the two top sets on the stack with their intersection\\n\"); log(\"\\n\");- log(\" #d\\n\");+ log(\" %%d\\n\"); log(\" pop the top set from the stack and subtract it from the new top\\n\"); log(\"\\n\");- log(\" #x[|*][.][:[:..]]\\n\");+ log(\" %%x[|*][.][:[:..]]\\n\"); log(\" expand top set num times accorind to the specified rules.\\n\"); log(\" (i.e. select all cells connected to selected wires and select all\\n\"); log(\" wires connected to selected cells) The rules specify which cell\\n\");@@ -736,14 +736,14 @@ struct SelectPass : public Pass { log(\" limit is reached. When '*' is used instead of then the process\\n\"); log(\" is repeated until no further object are selected.\\n\"); log(\"\\n\");- log(\" #ci[|*][.][:[:..]]\\n\");- log(\" #co[|*][.][:[:..]]\\n\");- log(\" simmilar to #x, but only select input (#ci) or output cones (#co)\\n\");+ log(\" %%ci[|*][.][:[:..]]\\n\");+ log(\" %%co[|*][.][:[:..]]\\n\");+ log(\" simmilar to %%x, but only select input (%%ci) or output cones (%%co)\\n\"); log(\"\\n\"); log(\"Example: the following command selects all wires that are connected to a\\n\"); log(\"'GATE' input of a 'SWITCH' cell:\\n\"); log(\"\\n\");- log(\" select */t:SWITCH #x:+[GATE] */t:SWITCH #d\\n\");+ log(\" select */t:SWITCH %%x:+[GATE] */t:SWITCH %%d\\n\"); log(\"\\n\"); } virtual void execute(std::vector args, RTLIL::Design *design)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59042686,"math_prob":0.9414322,"size":6256,"snap":"2023-40-2023-50","text_gpt3_token_len":2116,"char_repetition_ratio":0.1877799,"word_repetition_ratio":0.37594798,"special_character_ratio":0.45620206,"punctuation_ratio":0.19983687,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9894163,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-21T16:45:05Z\",\"WARC-Record-ID\":\"<urn:uuid:f727ae7c-433e-4f23-ae6e-5394610ef9e9>\",\"Content-Length\":\"15756\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b70de2a1-cf37-4804-bf3e-d1796078a64f>\",\"WARC-Concurrent-To\":\"<urn:uuid:305d842d-3744-4a25-8b25-90fd44e5f958>\",\"WARC-IP-Address\":\"194.177.211.202\",\"WARC-Target-URI\":\"https://browse.dgit.debian.org/yosys.git/commit/?id=3377a04bf26b9310017a71b2df587bad661e0da2\",\"WARC-Payload-Digest\":\"sha1:SDCLAJJPL5YMXUG6OEF4U7MCQRBITTDQ\",\"WARC-Block-Digest\":\"sha1:LV6MCWXYV4LHPBF5XUL6DW7EHQMKN72P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506028.36_warc_CC-MAIN-20230921141907-20230921171907-00790.warc.gz\"}"}
https://tipcalc.net/how-much-is-a-5-percent-tip-on-400.26
[ "# Tip Calculator\n\nHow much is a 5 percent tip on \\$400.26?\n\nTIP:\n\\$ 0\nTOTAL:\n\\$ 0\nTIP PER PERSON:\n\\$ 0\nTOTAL PER PERSON:\n\\$ 0\n\n## How much is a 5 percent tip on \\$400.26? How to calculate this tip?\n\nAre you looking for the answer to this question: How much is a 5 percent tip on \\$400.26? Here is the answer.\n\nLet's see how to calculate a 5 percent tip when the amount to be paid is 400.26. Tip is a percentage, and a percentage is a number or ratio expressed as a fraction of 100. This means that a 5 percent tip can also be expressed as follows: 5/100 = 0.05 . To get the tip value for a \\$400.26 bill, the amount of the bill must be multiplied by 0.05, so the calculation is as follows:\n\n1. TIP = 400.26*5% = 400.26*0.05 = 20.013\n\n2. TOTAL = 400.26+20.013 = 420.273\n\n3. Rounded to the nearest whole number: 420\n\nIf you want to know how to calculate the tip in your head in a few seconds, visit the Tip Calculator Home.\n\n## So what is a 5 percent tip on a \\$400.26? The answer is 20.01!\n\nOf course, it may happen that you do not pay the bill or the tip alone. A typical case is when you order a pizza with your friends and you want to split the amount of the order. For example, if you are three, you simply need to split the tip and the amount into three. In this example it means:\n\n1. Total amount rounded to the nearest whole number: 420\n\n2. Split into 3: 140\n\nSo in the example above, if the pizza order is to be split into 3, you’ll have to pay \\$140 . Of course, you can do these settings in Tip Calculator. You can split the tip and the total amount payable among the members of the company as you wish. So the TipCalc.net page basically serves as a Pizza Tip Calculator, as well.\n\n## Tip Calculator Examples (BILL: \\$400.26)\n\nHow much is a 5% tip on \\$400.26?\nHow much is a 10% tip on \\$400.26?\nHow much is a 15% tip on \\$400.26?\nHow much is a 20% tip on \\$400.26?\nHow much is a 25% tip on \\$400.26?\nHow much is a 30% tip on \\$400.26?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9622917,"math_prob":0.9946978,"size":2605,"snap":"2023-40-2023-50","text_gpt3_token_len":854,"char_repetition_ratio":0.31987697,"word_repetition_ratio":0.24064171,"special_character_ratio":0.40460652,"punctuation_ratio":0.16235632,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99777305,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T02:41:47Z\",\"WARC-Record-ID\":\"<urn:uuid:a420fc40-2454-40b3-9c33-be432aaaa0d8>\",\"Content-Length\":\"13253\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c87edd0b-dddc-49a7-807a-72754918ce7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:4a9aed67-e249-44a4-b0c9-4031b6620475>\",\"WARC-IP-Address\":\"161.35.97.186\",\"WARC-Target-URI\":\"https://tipcalc.net/how-much-is-a-5-percent-tip-on-400.26\",\"WARC-Payload-Digest\":\"sha1:JUTCQX462WNJWZNNXTU4H7SFHUQY4KNN\",\"WARC-Block-Digest\":\"sha1:MDJU6CZQ4HXCRZPEG5M5SYLD7K24B55E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679103464.86_warc_CC-MAIN-20231211013452-20231211043452-00198.warc.gz\"}"}
http://fstate.org/knowndecays/K*(892)~0
[ "# Known decays of K*(892)~0\n\n## (Only the first generation of decays, no cascades)\n\n Final state Branching $\\bar{K}'^{0}_{}(892)$ $\\to$ $K^{-}_{}$ $\\pi^{+}_{}$ 1.0e-0" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5091949,"math_prob":0.99996364,"size":285,"snap":"2020-24-2020-29","text_gpt3_token_len":97,"char_repetition_ratio":0.085409254,"word_repetition_ratio":0.0,"special_character_ratio":0.34385964,"punctuation_ratio":0.074074075,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9655584,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-27T22:41:43Z\",\"WARC-Record-ID\":\"<urn:uuid:78c6eb43-2019-4982-9304-b801e97eace9>\",\"Content-Length\":\"10428\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e982f12a-5340-4bf1-89db-d00a8d42d2ca>\",\"WARC-Concurrent-To\":\"<urn:uuid:77cdcd23-d482-401c-814a-d1cbbe5a9368>\",\"WARC-IP-Address\":\"45.79.85.160\",\"WARC-Target-URI\":\"http://fstate.org/knowndecays/K*(892)~0\",\"WARC-Payload-Digest\":\"sha1:UOP3GRQCZ6Y22WE327AY25E52WXIVPXI\",\"WARC-Block-Digest\":\"sha1:YHGV3H5DQWNFHBYIBPKFN7V2LWXIC6LF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347396163.18_warc_CC-MAIN-20200527204212-20200527234212-00459.warc.gz\"}"}
https://pinoybix.org/2014/10/mcqs-in-bjt-and-fet-frequency-response.html
[ "", null, "# MCQs in BJT and FET Frequency Response\n\n(Last Updated On: December 8, 2017)\n\nThis is the Multiple Choice Questions in BJT and FET Frequency Respons from the book Electronic Devices and Circuit Theory 10th Edition by Robert L. Boylestad. If you are looking for a reviewer in Electronics Engineering this will definitely help. I can assure you that this will be a great help in reviewing the book in preparation for your Board Exam. Make sure to familiarize each and every questions to increase the chance of passing the ECE Board Exam.\n\n### Online Questions and Answers Topic Outline\n\n• MCQs in Logarithms\n• MCQs in Decibels\n• MCQs in General Frequency Considerations\n• MCQs in Low-frequency analysis – Bode Plot\n• MCQs in Low-frequency response – BJT Amplifier\n• MCQs in Low-frequency response – FET Amplifier\n• MCQs in Miller Effect Capacitance\n• MCQs in High-frequency response – BJT Amplifier\n• MCQs in High-frequency response – FET Amplifier\n• MCQs in Multistage Frequency Effects\n• MCQs in Square wave testing\n• MCQs in Computer Analysis\n\n### Practice Exam Test Questions\n\nChoose the letter of the best answer in each questions.\n\n1. What is the ratio of the common logarithm of a number to its natural logarithm?\n\n• a. 0.435\n• b. 2\n• c. 2.3\n• d. 3.2\n\n2. logea = _____ log10a\n\n• a. 2.3\n• b. 2.718\n• c. e\n• d. 1.414\n\n3. By what factor does an audio level change if the power level changes from 4 W to 4096 W?\n\n• a. 2\n• b. 4\n• c. 6\n• d. 8\n\n4. The input power to a device is 10,000 W at 1000 V. The output power is 500 W, and the output impedance is 100 Ω. Find the voltage gain in decibels.\n\n• a. –30.01 dB\n• b. –20.0 dB\n• c. –13.01 dB\n• d. –3.01 dB\n\n5. What magnitude voltage gain corresponds to a decibel gain of 50?\n\n• a. 31.6238\n• b. 316.228\n• c. 3162.38\n• d. 31623.8\n\n6. An amplifier rated at 30-W output is connected to a 5-Ω speaker. Calculate the input power required for full power output if the power gain is 20 dB.\n\n• a. 3 mW\n• b. 30 mW\n• c. 300 mW\n• d. 3 W\n\n7. An amplifier rated at 30-W output is connected to a 5-Ω speaker. Calculate the input voltage for the rated output if the amplifier voltage gain is 20 dB.\n\n• a. 1.225 mV\n• b. 12.25 mV\n• c. 122.5 mV\n• d. 1.225 V\n\n8. For audio systems, the reference level is generally accepted as\n\n• a. 1 mW\n• b. 1 W\n• c. 10 mW\n• d. 100 mW\n\n9. For which of the following frequency region(s) can the coupling and bypass capacitors no longer be replaced by the short-circuit approximation?\n\n• a. Low-frequency\n• b. Mid-frequency\n• c. High-frequency\n• d. All of the above\n\n10. By what other name(s) are the cut-off frequencies in a frequency response plot called?\n\n• a. Corner frequency\n• b. Break frequency\n• c. Half-power frequency\n• d. All of the above\n\n11. What is the ratio of the output power to the input power at the cut-off frequencies in a normalized frequency response plot?\n\n• a. 0.25\n• b. 0.50\n• c. 0.707\n• d. 1\n\n12. What is the ratio of the output voltage to the input voltage at the cut-off frequencies in a normalized frequency response plot?\n\n• a. 0.25\n• b. 0.50\n• c. 0.707\n• d. 1\n\n13. What is the normalized gain expressed in dB for the cut-off frequencies?\n\n• a. –3 dB\n• b. +3 dB\n• c. –6 dB\n• d. –20 dB\n\n14. The ________-frequency response of a transformer-coupled system is calculated primarily by the stray capacitance between the turns of the primary and secondary windings.\n\n• a. Low\n• b. Mid\n• c. High\n\n15. 
The larger capacitive elements of the design will determine the ________ cut-off frequency.\n\n• a. Low\n• b. Mid\n• c. High\n\n16. The smaller capacitive elements of the design will determine the ________ cut-off frequencies.\n\n• a. Low\n• b. Mid\n• c. High\n\n17. What is the ratio of the capacitive reactance XCS to the input resistance RI of the input RC circuit of a single-stage BJT amplifier at the low-frequency cut-off?\n\n• a. 0.25\n• b. 0.50\n• c. 0.75\n• d. 1.0\n\n18. In the input RC circuit of a single-stage BJT, by how much does the base voltage lead the input voltage for frequencies much larger than the cut-off frequency in the low-frequency region?\n\n• a. About 0º\n• b. 45º\n• c. About 90º\n• d. None of the above\n\n19. In the input RC circuit of a single-stage BJT, by how much does the base voltage lead the input voltage at the cut-off frequency in the low-frequency region?\n\n• a. About 0º\n• b. 45º\n• c. About 90º\n• d. None of the above\n\n20. Determine the break frequency for this circuit.", null, "• a. 15.915 Hz\n• b. 159.15 Hz\n• c. 31.85 Hz\n• d. 318.5 Hz\n\n21. Refer to Figure 9.19. Calculate θ at 0.5f1.\n\n• a. 63.43º\n• b. 26.56º\n• c. 45º\n• d. Undefined\n\n22. A change in frequency by a factor of ________ is equivalent to 1 octave.\n\n• a. 2\n• b. 10\n• c. 5\n• d. 20\n\n23. A change in frequency by a factor of ________ is equivalent to 1 decade.\n\n• a. 2\n• b. 10\n• c. 5\n• d. 20\n\n24. For the low-frequency response of a BJT amplifier, the maximum gain is where ________ .\n\n• a. RB = 0 Ω\n• b. RC = 0 Ω\n• c. RE = 0 Ω\n\n25. Which of the low-frequency cutoffs determined by CS, CC, or CE will be the predominant factor in determining the low-frequency response for the complete system?\n\n• a. Lowest\n• b. Middle\n• c. Highest\n• d. None of the above\n\n26. Determine the lower cut-off frequency of this network.\n\n• a. 15.8 Hz\n• b. 46.13 Hz\n• c. 238.73 Hz\n• d. 1575.8 Hz\n\n27. Which of the following elements is (are) important in determining the gain of the system in the high-frequency region?\n\n• a. Interelectrode capacitances\n• b. Wiring capacitances\n• c. Miller effect capacitance\n• d. All of the above\n\n28. In the ________-frequency region, the capacitive elements of importance are the interelectrode (between terminals) capacitances internal to the active device and the wiring capacitance between the leads of the network.\n\n• a. Low\n• b. Mid\n• c. High\n\n29. Which of the following capacitors is (are) included in Ci for the high-frequency region of a BJT or FET amplifier?\n\n30. In the hybrid", null, "or Giacoletto model, which one of the following does rb include?\n\n• a. Base spreading resistance\n• b. Base contact\n• c. Base bulk\n• d. All of the above\n\n31. Which of the following configurations does (do) not involve the Miller effect capacitance?\n\n• a. Common-emitter\n• b. Common-base\n• c. Common-collector\n• d. All of the above\n\n32. A 3-dB drop in hfe will occur at a frequency defined by ________.\n\n33. What is the range of the capacitors Cgs and Cgd?\n\n• a. 1 to 10 pF\n• b. 1 to 10 nF\n• c. 1 to 10 F\n• d. 1 to 10 F\n\n34. What is the range of the capacitor Cds?\n\n• a. 0.01 to 0.1 pF\n• b. 0.1 to 1 pF\n• c. 0.1 to 1 nF\n• d. 0.1 to 1 F\n\n35. Which of the following statements is true for a square-wave signal?\n\n• a. It is composed of both even and odd harmonics.\n• b. It is composed only of odd harmonics.\n• c. It is composed only of even harmonics.\n• d. The harmonics waveforms are also square waves.\n\n### FILL-IN-THE-BLANKS\n\n1. 
Logarithms taken to the base _____ are referred to as common logarithms, while logarithms taken to the base _____ are referred to as natural logarithms.\n\n• A. 10, e\n• B e, 10\n• C. 5, e\n• D. 10, 5\n\n2. The logarithm of a number _____ than 1 is always _____.\n\n• A. greater, negative\n• B. less, positive\n• C. less, negative\n• D. None of the above\n\n3. The decibel (dB) is defined such that _____ decibel(s) = _____ bel(s).\n\n• A. 1, 10\n• B. 10, 1\n• C. 1, 1\n• D. 10, 10\n\n4. The resistance associated with the 1-mW power level is _____ , chosen because it is the characteristic impedance of audio transmission lines.\n\n• A. 100\n• B. 250\n• C. 400\n• D. 600\n\n5. The decibel gain of a cascaded system is the _____ of the decibel gains of each stage.\n\n• A. sum\n• B. difference\n• C. product\n• D. quotient\n\n6. Voltage gains of _____ dB or higher should immediately be recognized as being quite high.\n\n• A. 3\n• B. 6\n• C. 20\n• D. 50\n\n7. For the RC-coupled amplifier, the drop in gain at low frequencies is due to the increasing reactance of _____.\n\n• A. CC\n• B. Cs\n• C. CE\n• D. All of the above\n\n8. To fix the frequency boundaries of relatively high gain, _____ was chosen to be the gain at the cut-off levels.\n\n• A. 0.5Av mid\n• B. 0.707Av mid\n• C. Av low\n• D. 0.5Av high\n\n9. In the input RC circuit of a single-stage BJT or FET amplifier, as the frequency _____, the capacitive reactance _____ and _____ of the input voltage appears across the output terminals.\n\n• A. increases, decreases, more\n• B. increases, decreases, less\n• C. increases, increases, more\n• D. decreases, decreases, less\n\n10. A change in frequency by a factor of 2 results in a _____ change in the ratio of the normalized gain.\n\n• A. 3-dB\n• B. 6-dB\n• C. 10-dB\n• D. 20-dB\n\n11. A change in frequency by a factor of 10 results in a _____ change in the ratio of the normalized gain.\n\n• A. 3-dB\n• B. 6-dB\n• C. 10-dB\n• D. 20-dB\n\n12. In the low-frequency region, the _____ low-frequency cut-off determined by CS, CC, or CE will have the greatest impact on the network.\n\n• A. highest\n• B. average\n• C. lowest\n• D. None of the above\n\n13. The _____ region produces the maximum voltage gain in a single-stage BJT or FET amplifier.\n\n• A. low-frequency\n• B. mid-frequency\n• C. high-frequency\n• D. None of the above\n\n14. For any inverting amplifier, the impedance capacitance will be _____ by a Miller effect capacitance sensitive to the gain of the amplifier and the interelectrode capacitance.\n\n• A. unaffected\n• B. increased\n• C. decreased\n• D. None of the above\n\n15. The Miller effect is meaningful in the _____ amplifier.\n\n• A. inverting\n• B. noninverting\n• C. inverting/noninverting\n• D. None of the above\n\n16. With a BJT amplifier in the high-frequency region, the capacitance Cbe is the _____ of the parasitic capacitances while Cce is the _____.\n\n• A. smallest, largest\n• B. largest, smallest\n• C. smallest, medium\n• D. None of the above\n\n17. At very high frequencies, the effect of Ci is to _____ the total impedance of the parallel combination of R1, R2, R3, and Ci.\n\n• A. increase\n• B. maintain\n• C. decrease\n• D. None of the above\n\n18. If the parasitic capacitors were the only elements to determine the high cut-off frequency, the _____ frequency would be the determining factor.\n\n• A. lowest\n• B. highest\n• C. lowest or highest\n• D. None of the above\n\n19. The _____ configuration displays improved high-frequency characteristics over the _____ configuration.\n\n• A. 
common-collector, common-emitter
• B. common-emitter, common-base
• C. common-emitter, common-collector
• D. common-base, common-emitter

20. The _____ of the upper cut-off frequencies defines a _____ possible bandwidth for a system.

• A. highest, maximum
• B. lowest, maximum
• C. lowest, minimum
• D. None of the above
• A. significantly smaller
• B. smaller
• C. significantly greater
• D. None of the above

22. For two identical stages in cascade, the drop-off rate in the high- and low-frequency regions has increased to _____ per decade.

• A. –3 dB
• B. –6 dB
• C. –20 dB
• D. –40 dB

23. The bandwidth _____ in a multistage amplifier compared to an identical single-stage amplifier.

• A. increases
• B. decreases
• C. remains the same
• D. None of the above

24. The _____ in the Fourier series has the same frequency as the square wave itself.

• A. fundamental
• B. third harmonic
• C. fifth harmonic
• D. seventh harmonic

25. The magnitude of the third harmonic is _____ of the magnitude of the fundamental.

• A. 1
• B. 0.5
• C. 0.33
• D. 0.25
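As a quick self-check on the decibel arithmetic behind several items above, here is a short Python sketch we added (not part of the original reviewer); it reproduces the values offered in questions 4, 5 and 7:

```python
import math

# Q4: Pout = 500 W into 100 ohms, Vin = 1000 V -> voltage gain in dB
v_out = math.sqrt(500 * 100)              # V = sqrt(P * R) ~ 223.6 V
print(20 * math.log10(v_out / 1000))      # -13.01 dB

# Q5: magnitude voltage gain corresponding to a 50 dB gain
print(10 ** (50 / 20))                    # 316.227...

# Q7: 30 W rated output into 5 ohms with a 20 dB (10x) voltage gain
v_out = math.sqrt(30 * 5)                 # ~12.25 V across the speaker
print(v_out / 10)                         # 1.2247 V required input
```" ]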
[ null, "https://www.facebook.com/tr", null, "https://lh4.ggpht.com/-1NYiPg1xQOs/VFHMgdZdUgI/AAAAAAAABm4/UnAGUUS5Yfc/clip_image002_thumb4%25255B6%25255D.png", null, "https://lh6.ggpht.com/-r00jB2kgyuU/VFHMlgES-EI/AAAAAAAABng/90BojUoDJKQ/clip_image008_thumb%25255B2%25255D.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8308556,"math_prob":0.95234483,"size":11215,"snap":"2019-13-2019-22","text_gpt3_token_len":3445,"char_repetition_ratio":0.17955579,"word_repetition_ratio":0.17296313,"special_character_ratio":0.33294696,"punctuation_ratio":0.18409714,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.9950952,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,3,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-25T03:15:49Z\",\"WARC-Record-ID\":\"<urn:uuid:f2879175-9b5c-4e4e-bd53-9df8f75d4542>\",\"Content-Length\":\"85258\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b424184-e038-4ba4-8a8b-5bd3f387de84>\",\"WARC-Concurrent-To\":\"<urn:uuid:1c630e15-7154-48cb-a8cd-c84246a60fb2>\",\"WARC-IP-Address\":\"104.24.99.66\",\"WARC-Target-URI\":\"https://pinoybix.org/2014/10/mcqs-in-bjt-and-fet-frequency-response.html\",\"WARC-Payload-Digest\":\"sha1:T7TYMN7T5YP5CDH5FUYTMY4QKNEWT743\",\"WARC-Block-Digest\":\"sha1:RY3A2L4VQL5LTZP6BMQIV57M5KVOL47I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912203548.81_warc_CC-MAIN-20190325031213-20190325053213-00140.warc.gz\"}"}
http://jasss.soc.surrey.ac.uk/19/4/7.html
[ "Home > 19 (4), 7\n\n# Robust Clustering in Generalized Bounded Confidence Models", null, ", and\n\naJacobs University Bremen, Germany; bUniversity of Groningen, Netherlands\n\nJournal of Artificial Societies and Social Simulation 19 (4) 7", null, "<http://jasss.soc.surrey.ac.uk/19/4/7.html>\nDOI: 10.18564/jasss.3220", null, "Received: 10-Jul-2016    Accepted: 03-Sep-2016    Published: 31-Oct-2016\n\n### Abstract\n\nBounded confidence models add a critical theoretical ingredient to the explanation of opinion clustering, opinion polarisation, and the persistence of opinion diversity, assuming that individuals are only influenced by others who are sufficiently similar and neglect actors with too different views. However, despite its enormous recognition in the literature, the bounded confidence assumption has been criticized for being able to explain diversity only when implemented in a very strict and unrealistic way. The model is unable to explain patterns of opinion diversity when actors are sometimes influenced also by others who hold distant views, even when these deviations from the bounded-confidence assumption are rare and random. Here, we echo this criticism but we also show that the model's ability to explain opinion diversity can be regained when another assumption is relaxed. Building on modeling work from statistical mechanics, we include that actors' opinion changes do not only result from social influence. When other influences are modelled as random, uniformly distributed draws, then robust patterns of opinion clustering emerge also with the relaxed implementations of bounded confidence. The results holds under both communication regimes: the updating to the average of all acceptable opinions as in the model of Hegselmann and Krause (2002) and random pair-wise communication as in the model of Deffuant et al. (2000). We discuss implications for future modelling work and point to gaps in empirical research on influence.\nKeywords: Opinion Dynamics, Continuous Opinions, Noise, Diversity Puzzle, Facilitation, Probability of Acceptance\n\n### Introduction\n\nThe bounded-confidence model (Hegselmann & Krause 2002; Deffuant et al. 2000) is one of the success stories of formal modeling work in the social sciences. It is build on very few, simple assumptions about individual behavior but generates complex and surprising dynamics and clustering patterns. This is certainly the main reason why it has attracted impressive scholarly attention in fields as diverse as sociology, physics, computer science, philosophy, economics, communication science, and political science.\n\nThe bounded-confidence model proposes a prominent solution to one of the most persistent research puzzles of the social sciences, Abelson’s diversity puzzle (Abelson 1964). Building on Harary (1959), Abelson studied social-influence dynamics in networks where connected nodes exert influence on each others’ opinions in that they grow more similar through repeated averaging. He analytically proved that social influence will always generate perfect consensus unless the network consists of multiple unconnected subsets with zero influence between them. This result was later extended to a theory of rational consensus in science and society (DeGroot 1974; Lehrer & Wagner 1981). 
Stunned by his finding, Abelson formulated an intriguing research puzzle: “Since universal ultimate agreement is an ubiquitous outcome of a very broad class of mathematical models, we are naturally led to inquire what on earth one must assume in order to generate the bimodal outcome of community cleavage studies.” Abelson's puzzle is challenging, because consensus is a very robust pattern. For instance, Abelson (1964) showed that consensus is inevitable even when one also includes the assumption that social influence between actors is weaker when they hold very discrepant opinions prior to the interaction, an assumption that was supported by laboratory studies (Fisher & Lubin 1958; Aronson et al. 1963).

Panel (a) of Figure 1 illustrates the inevitable trend towards perfect uniformity that social influence generates in a fixed network with 100 agents. For this ideal-typical simulation run, we assumed that agents' initial opinions are randomly drawn from a uniform distribution ranging from zero to one. Agents with an initial opinion distance of less than 0.15 were connected by a fixed network link. In each time step, agents update their opinions, adopting the arithmetic mean of their own opinion and the opinions of their network contacts. The figure shows that opinions converge to consensus despite the high initial opinion diversity and the relatively sparse influence network. Clustering of opinions can only be observed transiently.", null, "Figure 1. Panel (a) shows a run of opinion dynamics with a fixed network of 100 agents as in (Abelson 1964). Agents influence each other when their initial opinions differ by not more than 0.15. Nevertheless, this model generates consensus. Panel (b) shows a run with the model of Hegselmann & Krause (2002) with a bounded-confidence threshold of $$\epsilon$$ = 0.15. The influence network is not fixed. This model generates multiple clusters of opinions. Panel (c) also assumes $$\epsilon$$ = 0.15 but adds the assumption that influence weights turn 1 with a probability of 0.001. This model generates opinion clusters but clusters ultimately converge as a result of the random deviations from the bounded-confidence assumption.

The solution to Abelson's puzzle proposed by the bounded-confidence model is strikingly simple, because it sharpens just one of Abelson's own modelling assumptions: The influence of the opinions of others on the individual is not only declining with opinion discrepancy but individuals ignore influence by actors when opinion discrepancy exceeds a threshold – the bound of confidence. Panel (b) of Figure 1 shows opinion dynamics of the bounded confidence model of Hegselmann & Krause (2002) with initial conditions identical to the one from Panel (a). The crucial difference between the two simulation runs is that the influence network in Panel (b) is not fixed. New ties are created when two agents' opinion distance has decreased to a value below the bounded-confidence threshold of $$\epsilon$$ = 0.15, and ties are dissolved whenever opinion differences exceed 0.15. Panel (b) illustrates that the bounded-confidence model predicts the emergence of increasingly homogenous clusters, which at some point in time adopt opinions that differ too much from each other. As a consequence, opinions cannot converge further and clusters remain stable.
Other approaches to Abelson's puzzle included assumptions about negative social influence (Macy et al. 2003; Mark 2003; Salzarulo 2006; Flache & Mäs 2008), the communication of persuasive arguments (Mäs et al. 2013; Mäs & Flache 2013), biased assimilation (Dandekar et al. 2013), information accumulation (Shin & Lorenz 2010), striving for uniqueness (Mäs et al. 2010), attraction by initial views (Friedkin 2015), influences by external forces such as media and opinion leaders (Watts & Dodds 2007), and the assumption that opinions are measured on nominal scales (Axelrod 1997; Liggett 2013). However, none of these approaches has attracted as much scholarly attention as the bounded confidence model.

The bounded-confidence assumption has also been subject to important criticism (Mäs et al. 2013; Mäs & Flache 2013). To be sure, the assumption that individuals tend to be influenced by similar others is certainly very plausible in many social settings and has been supported by sociological research on homophily (McPherson et al. 2001; Lazarsfeld & Merton 1954) and social-psychological studies in the Similarity-Attraction Paradigm (Byrne 1971). However, bounded-confidence models assume that individuals always ignore opinions that differ too much from their own views, which is an extreme and unrealistic interpretation. In other words, it is certainly plausible that individuals' confidence in others is bounded, but it is certainly not plausible that individuals are never influenced by actors with very different opinions.

In fact, even a tiny relaxation of the bounded-confidence assumption fundamentally changes model predictions, as Panel (c) of Figure 1 illustrates. These dynamics were generated with a model identical to the model of Panel (b) but with a slightly relaxed bounded-confidence assumption. We included that, with a tiny probability of 0.001, agents with otherwise too big opinion differences (difference exceeds 0.15) also take each other's opinions into account when they update their views. Panel (c) shows that this seemingly innocent relaxation of the bounded-confidence assumption results in the emergence of opinion consensus and, thus, suggests that the bounded-confidence assumption might not be a satisfactory solution to Abelson's puzzle.

Even though we emphasize this criticism of bounded-confidence models, we show here that the model's ability to solve Abelson's puzzle can be regained if another model assumption is relaxed. In particular, we relax the common assumption of social-influence models that actors' opinions are only affected by social influence. Inspired by modelling work published in the field of statistical mechanics (Pineda et al. 2009, 2013; Carro et al. 2013), we include the assumption that an agent's opinion might sometimes happen to switch to another, new opinion not as a result of social influence but at random. These random opinion changes might reflect, for instance, influences on agents' opinions from external sources, or turnover dynamics in an organization where employees leave the organization and are replaced by new workers who hold opinions that are unrelated to the opinion of the leaving person. We study the conditions under which this mechanism leads to the emergence and persistence of opinion diversity and opinion clustering even when the bounded-confidence assumption is implemented in a weaker fashion than in its original strict way, but accompanied by random opinion replacement.

### The model

We adopt an agent-based modeling framework and assume that each agent i of N agents is described by a continuous opinion xi(t) ranging from 0 to 1.
At every time step t agents' opinions are updated in two ways. First, agents are socially influenced by the opinions of others, as assumed by classical social-influence models and the bounded-confidence models. Second, we included that agents' opinions may also change due to other forces. To this end, we added that agents can adopt a random opinion, as studied in models inspired by statistical mechanics (Pineda et al. 2009, 2013; Carro et al. 2013).

Social influence is implemented as follows. First, the computer program identifies those agents who will exert influence on i's opinion. To this end, the program first calculates what Fishbein & Ajzen (1975) called the probability of acceptance, i.e. the probability that an agent j is influential, using Equation 1. Next, agent i updates her opinion, adapting to the average of the opinions of the influential agents and i's own opinion.

Following psychological literature on opinion and attitude change (e.g., Abelson 1964; Fishbein & Ajzen 1975; Hunter et al. 1984), we define opinion discrepancy between agents i and j as the distance in opinion D(i, j) = |xi(t) - xj(t)| and assume that the probability that i is influenced by j declines with opinion discrepancy. To that end, we define the probability of acceptance for i and j as

 $$P(i,j) = \begin{cases} \max\{\beta, \displaystyle{\left(1 - \frac{D(i,j)}{\varepsilon}\right)^{1/f}}\} & \text{if } D(i,j) \leq \varepsilon,\\ \\ \beta & \text{otherwise,} \end{cases}$$ (1)
where $$\varepsilon \in[0, 1]$$ is the bound of confidence, f reflects the facilitation, and $$\beta$$ the default influence probability. We adopt the concept of “facilitation” from Fishbein & Ajzen (1975)'s seminal attitude theory. Facilitation determines the strength of the decay of the probability of acceptance with increasing discrepancy. Lower facilitation f gives an initially steeper downward slope. When facilitation is maximal 1 / f = 0, or f = ∞, however, P(i, j) is constant at one for discrepancies smaller than or equal to $$\varepsilon$$. The default influence probability is the minimal probability of acceptance. Thus, for $$\beta$$ > 0 there is always a certain chance that i is influenced by j. For the simulation run shown in Panel (c) of Figure 1, for instance, we assumed that $$\beta$$ = 0.001. Thus, we have extended the bounded-confidence assumption to a functional form of a probability of acceptance with three parameters $$\varepsilon$$, $$f$$ and $$\beta$$. For $$\beta$$ = 0 and maximal facilitation, Equation 1 turns into a step function identical to the model of Hegselmann & Krause (2002).

The update of the opinion of agent i is modelled as follows

 $$x_{i}(t+1) = \frac{1}{n}\sum_{j'}x_{j'}(t)$$ (2)
where n is the number of agents $$j'$$ who are accepted by i through decisions based on the probability of acceptance. This implements the notion that an agent checks all other agents for inclusion in its confidence set, as assumed by Hegselmann & Krause (2002). Therefore, we call this model the generalized Hegselmann-Krause model.

For $$\varepsilon$$ = 1 and $$\beta$$ = 0, the definition of P is identical to the definition in the seminal model in social psychology of Fishbein & Ajzen (1975), from which the concept “probability of acceptance” is also adopted.

The second model ingredient is random opinion replacement. We implemented that agents adopt a randomly picked opinion with probability m. The new opinion is a random value in the range of [0,1] and is drawn from a uniform distribution (Pineda et al. 2009, 2013; Carro et al. 2013). The rate of random opinion influx is given by $$m\cdot N$$ per N time steps. This means that $$m\cdot N$$ independent opinion renewals would occur on average during N interdependent opinion renewals, hence m gives the ratio, or relative importance, of independent opinion replacement.

In a nutshell, the simulation program proceeds as follows. In the iteration from time step t to t + 1, the program decides for each individual agent i whether each of the other agents exerts influence on i by random events with respect to the probability of acceptance. Next, the program computes the updated opinion of agent i at time step t + 1 as the average of the opinions of all those agents that have been selected to exert influence. Next, the program decides through a random event with respect to the probability of random opinion replacements whether this updated opinion is replaced by a completely new, randomly drawn opinion. The order in which agents are updated does not matter in this process, because we always use the opinions from time step t to compute the opinion for time step t + 1 (synchronous updating).

In the following we assume one thousand agents randomly distributed along the entire opinion axis in the initial state (N = 1000). Effects of different initial conditions (i.e., hysteresis problems) are not discussed here.
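For illustration, here is a compact Python sketch of one synchronous update step of the generalized Hegselmann-Krause model with random opinion replacement (our reading of Equations 1 and 2, not the authors' original code; the parameter values are the examples used in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, eps=0.15, f=1.0, beta=0.001, m=0.1):
    # Probability of acceptance P(i, j) from Equation 1.
    D = np.abs(x[:, None] - x[None, :])              # pairwise opinion discrepancies
    base = np.clip(1 - D / eps, 0.0, 1.0)            # zero outside the confidence bound
    P = np.where(D <= eps, np.maximum(beta, base ** (1 / f)), beta)
    accept = rng.random(P.shape) < P                 # random acceptance decisions
    np.fill_diagonal(accept, True)                   # i's own opinion always counts
    # Equation 2: average over all accepted opinions (synchronous update).
    x_new = (accept * x[None, :]).sum(axis=1) / accept.sum(axis=1)
    # Random opinion replacement with probability m.
    renew = rng.random(x.size) < m
    x_new[renew] = rng.random(renew.sum())
    return x_new

x = rng.random(1000)                                 # N = 1000 initial opinions
for _ in range(100):
    x = step(x)
```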
### Results

Figure 2 shows three ideal-typical opinion dynamics to illustrate the effects of random opinion changes on opinion dynamics. In the figures, the agent distribution is shown as the probability density, which is estimated by kernel density estimation with a bandwidth of 0.005. For all three simulation runs, we assigned positive values to parameters $$\beta$$ and f, implementing that there is always a small but positive probability that an agent j exerts influence on i's opinion. When agents' opinions only depend on social influence and there are no random opinion changes (m = 0), then this parameter setting implies that dynamics will always generate opinion consensus. Panel (a) of Figure 2 shows that social influence leads to the formation of clusters. However, as there is always a positive chance that members of distinct clusters exert influence on each other, clusters merge and the population reaches a perfect opinion consensus.

Panel (b) of Figure 2, in contrast, shows that including random opinion replacements (m = 0.1) leads to a very different outcome. Like in the dynamics shown in Panel (a), social influence leads to the formation of clusters of agents with similar opinions. However, these clusters remain stable. This result is obtained even though there is always a positive probability of acceptance and even though the opinion replacements are entirely random.

The intuition is simple. Social influence leads to the formation of clusters, but due to the random opinion replacements, clusters also lose members from time to time. These agents will first adopt a random opinion but will sooner or later join one of the clusters. Thus, all clusters have an input and an output of agents and remain stable when input and output are balanced. To be sure, it is possible that a cluster happens to disappear in this dynamic. However, randomness in tandem with social influence will lead to the formation of a new cluster (Pineda et al. 2009, 2013; Carro et al. 2013) and the collective pattern of opinion clustering will reemerge.
(Pineda et al. 2009, 2013; Carro et al. 2013) and the collective pattern of opinion clustering will reemerge.

Panel (c) of Figure 2 shows that clusters disappear, however, when random opinion replacements happen too frequently. For this run, we imposed that opinions adopt random values with a probability of 0.5 (m = 0.5). Under this condition, social influence is too weak to generate stable clustering.

Figure 2. Time evolution of the normalized agent density for three different magnitudes of random opinion replacement.

Figures 3 and 4 extend the observations from the three ideal-typical runs to broader parameter spaces, using opinion pattern diagrams to visualize aggregated agent distributions for quasi-steady states (except for Panel (a) of Figure 4, as we discuss below) in twenty independent simulation runs per parameter combination. The number of simulation events (Tmax) required to reach the quasi-steady states greatly depends on the parameter settings; hence we adjusted Tmax for each panel empirically to save computation time.

Figure 3 focuses on the original implementation of confidence bounds, where the probability of acceptance is either zero or one (see Panel (a)). Panel (b), thus, replicates the findings from earlier studies with the original bounded-confidence model (Lorenz 2007), showing that the bounded-confidence model can explain the emergence and stability of opinion clustering when agents’ confidence is sufficiently bounded. When $$\varepsilon$$ exceeds a value of 0.23, agents arrive at consensus, forming one big cluster close to the center of the opinion space. However, agents form multiple clusters when they have narrower bounds of confidence ($$\varepsilon$$ < 0.23). These clusters are stable because the probability that an agent is influenced by an agent from another cluster is zero. The number of clusters forming is roughly $$1/(2\varepsilon)$$ (e.g., Deffuant et al. 2000; Ben-Naim et al. 2003). Note that Panel (b) fails to visualize the emergent clustering for very small values of $$\varepsilon$$, as the model generates a large number of opinion clusters with only small (but stable) opinion differences between clusters. When $$\varepsilon$$ = 0.05, for instance, dynamics typically settle when about eight distinct clusters have formed. Random differences in the initial opinion distributions of the twenty realizations per parameter combination, however, lead to small differences in the exact positions of the clusters between simulation runs. As a consequence, the differences between clusters are not visible in the aggregated opinion distributions shown in Panel (b) of Figure 3.

The predictions of the standard bounded-confidence model are robust to including random opinion replacements (m = 0.1), as Panel (c) of Figure 3 shows. When agents sometimes happen to adopt a random opinion (drawn from a uniform distribution), clustering can be stable when the bounds of confidence are sufficiently small.

However, the ability of the bounded-confidence model to generate stable clustering breaks down when one slightly relaxes the bounded-confidence assumption, allowing also agents with otherwise too divergent opinions to exert influence on each other with a small probability of $$\beta$$ = 0.001.
Demonstrating this, Panel (d) of Figure 3 shows that the initial opinion diversity decreases and the agent populations arrive at a consensus, even when agents have very narrow bounds of confidence.

Strikingly, the ability of the bounded-confidence model to explain clustering is regained when both forms of randomness are included (see Panel (e) of Figure 3). That is, when agents sometimes adopt random opinion values (m = 0.1) and when random violations of the bounded-confidence assumption are included ($$\beta$$ = 0.001), opinion clustering remains stable when agents’ confidence in others is sufficiently bounded.

Figure 3. Opinion pattern diagrams for the original Hegselmann-Krause model with bounded confidence. The studied parameter range is 0 ≤ $$\varepsilon$$ ≤ 0.5 and the resolution is 0.01.

Figure 4 shows the same analyses as Figure 3 but focuses on the less restrictive interpretation of the bounded-confidence assumption, where the probability of acceptance is not modelled as a step function. To generate these figures, we assumed that $$\varepsilon$$ = 1 and varied parameter f (see Panel (a)). Accordingly, the x-axis represents parameter f rather than $$\varepsilon$$.

Panel (b) of Figure 4 focuses on the condition without random opinion changes (m = 0) and without random deviations from the bounded-confidence rule ($$\beta$$ = 0). It is important to understand that the opinion variance shown in the left part of Panel (b) is an artifact of our decision to run simulations for “only” 5000 simulation events, and of floating-point precision close to zero. Even under small facilitation values the probability of acceptance is always positive, which implies that populations will always reach a consensus, because every possible interaction will continue to occur ad infinitum, although quite rarely. However, small f values lead to probabilities of acceptance that are so small that agents with distant opinions hardly influence each other, or that are even represented as zero due to floating-point imprecision. As a consequence, consensus is not reached within the limit of 5000 simulation events and may never be reached without technical modifications dealing with floating-point precision.

In order to test our intuition that the model generates consensus also for small facilitation values, we ran twenty additional simulations with f = 0.05 and without a predefined simulation end (m = 0, $$\beta$$ = 0). Supporting our intuition, we observed that all twenty populations generated a consensus. On average, it took systems approximately 30,000 iterations to decrease the value range of opinions to less than $$10^{-10}$$.

Panel (d) of Figure 4 is based on the same parameter conditions as Panel (b) but adds a small default acceptance probability of $$\beta$$ = 0.001, which excludes the artifact observed in Panel (b). Accordingly, Panel (d) shows that all simulation runs generated consensus, independent of the degree f of facilitation. This shows again that the bounded-confidence model fails to generate opinion clustering when the bounded-confidence assumption is implemented in a less restrictive way than in the original contributions.

However, system behaviour differs very much when random opinion replacements are included (m = 0.1), as Panels (c) and (e) of Figure 4 show.
In particular, Panel (e) shows that the model generates phases of opinion clustering even though there is always a positive probability of acceptance and even though independent opinion changes are added.

In order to demonstrate that the opinion clustering observed in the simulation runs with random opinion replacements (Panels (c) and (e) of Figure 4) is not an artifact of a limited duration of the simulations and/or of floating-point issues, we reran the simulation experiment for these experimental treatments assuming that all agents held the same opinion at the outset of the dynamics (xi(0) = 0.5 for all i). In Appendix B we show that the resulting opinion pattern diagrams are virtually identical to those generated in the main simulation experiment, where simulations departed from a uniform opinion distribution. This shows that the opinion diversity shown in Panels (c) and (e) of Figure 4 also emerges when dynamics start from perfect consensus, demonstrating that these results are not artifacts.

So far, we focused on the bounded-confidence model developed by Hegselmann & Krause (2002), which assumes that agents consider the opinions of multiple sources when they update their opinions. The alternative bounded-confidence model by Deffuant et al. (2000), in contrast, assumed that influence is dyadic in the sense that agents always consider only the opinion of one other agent for an opinion update. We focused our analyses on the Hegselmann-Krause model mainly because it is more similar to the original work of Abelson (1964). We demonstrate in Appendix A, however, that our conclusions also hold under the dyadic influence regime proposed by Deffuant et al. (2000).

Figure 4. Opinion pattern diagrams for the generalized Hegselmann-Krause model with a smooth acceptance function. The studied parameter range is f ≥ 0.01 and the resolution is 0.03 in terms of $$log_{10}\frac{1}{f}$$, which gives finer resolution for smaller f, so that we keep better resolution in regions with smaller structures while saving computation cost. For example, the resolution corresponds to $$\sim0.0014$$ in terms of f at f = 0.02, and to $$\sim0.0067$$ at f = 0.1.

We provide a NetLogo implementation of our model which allows readers to produce trajectories of all model variations presented here (Lorenz et al. 2016). The reader can use it to observe and explore our claims on the basis of independent simulation runs. The model also includes “example buttons”, which set the model parameters to the values studied in our examples.

### Summary and conclusions

Bounded-confidence models proposed the most prominent solution to Abelson’s puzzle of explaining opinion clustering in connected networks where nodes exert positive influence on each other. However, the models have been criticized for being able to generate opinion clustering only when the assumption that agents’ confidence in others is bounded is interpreted in a maximally strict sense. When one allows agents to sometimes deviate from the bounded-confidence assumption, clustering breaks down even when these deviations are rare and random.

Even though we echoed the criticism that the predictions of the original bounded-confidence models are not robust to random deviations, we showed that the models’ ability to explain clustering can be regained if another typical assumption of social-influence models is relaxed. Building on modeling advances from the field of statistical mechanics
(Pineda et al. 2009, 2013; Carro et al. 2013), we showed that a bounded-confidence model that takes into account random deviations from the bounded-confidence assumption is able to explain opinion clustering when one includes that agents’ opinions are not only affected by influence from other agents. If, in addition, opinions can change in a random fashion, clustering can emerge and remain stable. Thus, our results demonstrate that the bounded-confidence model does provide an important answer to Abelson’s puzzle, despite the criticism.

Our findings illustrate that in complex systems seemingly innocent events can have profound effects, even when these events are rare and random. One important way to deal with this problem is to base models on empirically validated assumptions. With regard to the bounded-confidence model, however, too little is known about how bounded individuals’ confidence actually is. How is this boundedness distributed in our societies, and under what conditions can bounds shift and increase or decrease openness to distant opinions? Under which conditions do individuals deviate from the bounded-confidence assumption? Are these deviations random or do they follow certain patterns? Our finding that deviations have a decisive effect on the predictions of bounded-confidence models shows that empirical answers to these questions are urgently needed.

What is more, empirical research is needed to inform models about how to formally incorporate deviations from model assumptions. For instance, we adopted from earlier modeling work (Pineda et al. 2009, 2013; Carro et al. 2013) the assumption that agents sometimes adopt a random opinion and that this random opinion is drawn from a uniform distribution. This is certainly a plausible model of turnover dynamics, where individuals may happen to leave a social group or organization, making space for a replacement with an independent opinion. However, this model of deviations appears to be a less plausible representation of other influences, such as social influences from outside the modelled population or the empirically well-documented striving for unique opinions (Mäs et al. 2010). For instance, deviations have also been modelled as “white noise”. That is, rather than assigning a new, random value to the agent’s opinion, a random value drawn from a normal distribution with an average of zero was added to agents’ opinions (Mäs et al. 2010). This “white noise” leads on average to much smaller random opinion changes, which generates very different opinion dynamics, as it does not make agents leave and enter clusters as observed with the uniformly distributed noise. In contrast, white-noise deviations at the level of individual agents aggregate to random opinion shifts of whole clusters. As a consequence, it is possible that clusters happen to adopt similar opinions and merge, a process that will inevitably generate global consensus rather than the clustering observed with uniformly distributed noise (Mäs et al. 2010). In sum, deviations from model assumptions can be incorporated in different ways, and model predictions often depend on the exact formal implementation. Theoretical and empirical research is, therefore, needed to identify conditions under which alternative models of deviations are more or less plausible. Empirical research in the field of evolutionary game theory has shown, for instance, that deviations are more likely to occur when they imply small costs (Mäs & Nax 2016).
Similar research should be conducted also in the field of social-influence dynamics.

The bounded-confidence model is yet another example of a theory that makes fundamentally different predictions when its assumptions are implemented probabilistically rather than deterministically (Macy & Tsvetkova 2015). It is, therefore, important that theoretical predictions, independent of whether they have been derived formally or not, are tested for robustness to randomness. To be sure, there is no doubt that deterministic models are insightful, as they allow the researcher to analyze theories in a clean environment without any perturbations. This often simplifies analyses, making it easier to understand the model’s mechanisms and their consequences. Nevertheless, a model prediction that only holds in deterministic settings is of limited scientific value, because it cannot be put to the empirical test in a non-deterministic world. The notion that individual behaviour is deterministic may be useful, but it is not an innocent assumption.

In sum, we studied a new answer to Abelson’s question of how to generate the empirical finding of bimodal opinion distributions. Abelson’s approach of assuming that influence declines with opinion discrepancy is supported by empirical research but turned out not to solve his puzzle. The original bounded-confidence models were able to explain opinion clustering by assuming that the influence network is not fixed, and by further strengthening Abelson’s assumption, implementing a threshold in discrepancy beyond which influence vanishes completely. We showed here that Abelson’s weaker implementation of influence declining in discrepancy does generate opinion clustering when it is combined with random independent opinion replacements.

### Notes

1. Abelson’s diversity puzzle is called the “community cleavage problem” by Friedkin (2015).
2. This observation was later generalized by Davis (1996).
3. Formally, opinion discrepancy is nothing else than distance in opinion, and the latter notion is usually used in the social simulation literature. We use "discrepancy" here because it is the original notion used by Abelson and well established in psychological models (see e.g. Hunter et al. 1984). In this paper, discrepancy is the same as distance, which is also the case for a large part of the psychological literature.
4. Another criticism was that there are no “hard” bounds but “smooth” ones. Therefore, already some early variations of bounded-confidence models on the evolution of extremism (Deffuant et al. 2002, 2004) introduced different kinds of smoother bounds. Neither study analysed the impact of smoothness itself.
5. Mäs & Bischofberger (2015) use the same function for the definition of similarity and interaction probability. They use a parameter h = 1/f quantifying the degree of homophily in the society. In our parametrization, they further used $$\beta$$ = 0 and $$\varepsilon$$ = 2 for an opinion space where opinions can take values between -1 and 1.
6. Following Ben-Naim et al. (2003), these diagrams are called “bifurcation diagrams” in the physics literature (e.g., Lorenz 2007; Pineda et al. 2009, 2013; Carro et al. 2013).

### Appendix

#### Appendix A: Model based on Deffuant et al.

The conclusions drawn from our analyses do not only apply to the bounded-confidence model proposed by Hegselmann & Krause (2002) but also hold for the model by Deffuant et al. (2000), as Figures 5 and 6 show.
In this alternative model, the agent-interaction scheme is divided into two subprocesses. First, two agents i and j are randomly picked. Second, i and j influence each other and adjust their opinions with a given probability of acceptance P if their opinions do not differ too much.

When agents i and j are selected for interaction and accept each other, they adjust their opinions towards each other by averaging their current opinions. This leads to the following dynamic equation

 $$x_{i}(t+1) = \begin{cases} \displaystyle{\frac{x_{i}(t)+x_{j}(t)}{2}} & \text{with probability } P(i,j), \\ \\ x_{i}(t) & \text{otherwise.} \end{cases}$$ (3)

Figure 5. Opinion pattern diagrams for the original Deffuant et al. model with bounded confidence.

Figure 6. Opinion pattern diagrams for the generalized Deffuant et al. model with a smooth acceptance function.

#### Appendix B: Opinion patterns

The experiments described in the main text cannot exclude the possibility that the opinion patterns shown in Panels (c) and (e) of Figures 3 and 4 merely result from the fact that we did not run the simulations long enough. To ensure that the diagrams in those panels do not suffer from insufficient simulation length, we conducted additional simulations that started with a perfect consensus (i.e., all agents have an opinion of 0.5 in the initial state). The additional simulations resulted in very similar general opinion patterns (Figure 7), which shows that our discussion in this study is valid. This holds for both versions of the bounded-confidence model.

Panel (a) of Figure 7 should be compared to Panel (c) of Figure 4, and Panel (c) of Figure 7 should be compared to Panel (e) of Figure 4. Likewise, Panel (b) of Figure 7 should be compared to Panel (c) of Figure 3, and Panel (d) of Figure 7 should be compared to Panel (e) of Figure 3.

Figure 7. Opinion patterns emerging when dynamics start from perfect consensus at an opinion value of 0.5.

### References

ABELSON, R. P. (1964). Mathematical models of the distribution of attitudes under controversy. In Contributions to Mathematical Psychology (pp. 142–160). Holt, Rinehart & Winston, New York.

ARONSON, E., Turner, J. A. & Carlsmith, J. M. (1963). Communicator credibility and communication discrepancy as determinants of opinion change. The Journal of Abnormal and Social Psychology, 67(1), 31–36. [doi:10.1037/h0045513]

AXELROD, R. (1997). The dissemination of culture. Journal of Conflict Resolution, 41(2), 203–226. [doi:10.1177/0022002797041002001]

BEN-NAIM, E., Krapivsky, P. L. & Redner, S. (2003). Bifurcation and patterns in compromise processes. Physica D, 183, 190–204. [doi:10.1016/S0167-2789(03)00171-4]

CARRO, A., Toral, R. & San Miguel, M. (2013). The role of noise and initial conditions in the asymptotic solution of a bounded confidence, continuous-opinion model. Journal of Statistical Physics, 151(1), 131–149. [doi:10.1007/s10955-012-0635-2]

DANDEKAR, P., Goel, A. & Lee, D. T. (2013). Biased assimilation, homophily, and the dynamics of polarization. Proceedings of the National Academy of Sciences, 110(15), 5791–5796. [doi:10.1073/pnas.1217220110]

DAVIS, J. H. (1996). Group decision making and quantitative judgments: A consensus model. In E. H. Witte & J. H. Davis (Eds.), Understanding Group Behavior: Consensual Action by Small Groups (pp. 35–59). Mahwah, NJ: Lawrence Erlbaum.
DEFFUANT, G., Amblard, F. & Weisbuch, G. (2004). Modelling group opinion shift to extreme: The smooth bounded confidence model. arXiv preprint cond-mat/0410199: https://arxiv.org/ftp/cond-mat/papers/0410/0410199.pdf.

DEFFUANT, G., Neau, D., Amblard, F. & Weisbuch, G. (2000). Mixing beliefs among interacting agents. Advances in Complex Systems, 3, 87–98. [doi:10.1142/S0219525900000078]

DEFFUANT, G., Neau, D., Amblard, F. & Weisbuch, G. (2002). How can extremism prevail? A study based on the relative agreement interaction model. Journal of Artificial Societies and Social Simulation, 5(4), 1: http://jasss.soc.surrey.ac.uk/5/4/1.html.

DEGROOT, M. H. (1974). Reaching a consensus. Journal of the American Statistical Association, 69(345), 118–121. [doi:10.1080/01621459.1974.10480137]

FISHBEIN, M. & Ajzen, I. (1975). Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research. Reading, MA: Addison-Wesley.

FISHER, S. & Lubin, A. (1958). Distance as a determinant of influence in a two-person serial interaction situation. The Journal of Abnormal and Social Psychology, 56(2), 230–238. [doi:10.1037/h0044609]

FLACHE, A. & Mäs, M. (2008). How to get the timing right. A computational model of the effects of the timing of contacts on team cohesion in demographically diverse teams. Computational & Mathematical Organization Theory, 14(1), 23–51. [doi:10.1007/s10588-008-9019-1]

FRIEDKIN, N. E. (2015). The problem of social control and coordination of complex systems in sociology: A look at the community cleavage problem. IEEE Control Systems, 35(3), 40–51. [doi:10.1109/MCS.2015.2406655]

HARARY, F. (1959). A criterion for unanimity in French’s theory of social power. Studies in Social Power, 6, 168.

HEGSELMANN, R. & Krause, U. (2002). Opinion dynamics and bounded confidence: models, analysis and simulation. Journal of Artificial Societies and Social Simulation, 5(3), 2: http://jasss.soc.surrey.ac.uk/5/3/2.html.

HUNTER, J. E., Danes, J. E. & Cohen, S. H. (1984). Mathematical Models of Attitude Change. Human Communication Research Series. Academic Press.

LAZARSFELD, P. F. & Merton, R. K. (1954). Friendship as a social process: A substantive and methodological analysis. In M. Berger & T. Abel (Eds.), Freedom and Control in Modern Society (pp. 18–66). Van Nostrand.

LEHRER, K. & Wagner, C. (1981). Rational Consensus in Science and Society. D. Reidel Publishing Company, Dordrecht, Holland. [doi:10.1007/978-94-009-8520-9]

LIGGETT, T. M. (2013). Stochastic Interacting Systems: Contact, Voter and Exclusion Processes, vol. 324. Springer Science & Business Media.

LORENZ, J. (2007). Continuous opinion dynamics under bounded confidence: A survey. International Journal of Modern Physics C, 18(12), 1819–1838. [doi:10.1142/S0129183107011789]

LORENZ, J., Kurahashi-Nakamura, T. & Mäs, M. (2016). ContinuousOpinionDynamicsNetlogo: Robust clustering in generalized bounded confidence models. http://doi.org/10.5281/zenodo.61205.

MACY, M. W., Kitts, J. A., Flache, A. & Benard, S. (2003). Polarization in dynamic networks: A Hopfield model of emergent structure. In Dynamic Social Network Modeling and Analysis (pp. 162–173). National Academies Press.

MACY, M. & Tsvetkova, M. (2015). The signal importance of noise. Sociological Methods & Research, 44(2), 306–332.

MARK, N. P. (2003). Culture and competition: Homophily and distancing explanations for cultural niches. American Sociological Review, 68(3), 319–345. [doi:10.2307/1519727]
MÄS, M. & Bischofberger, L. (2015). Will the personalization of online social networks foster opinion polarization? Available at SSRN: http://ssrn.com/abstract=2553436.

MÄS, M. & Flache, A. (2013). Differentiation without distancing. Explaining bi-polarization of opinions without negative influence. PLoS ONE, 8(11), e74516. [doi:10.1371/journal.pone.0074516]

MÄS, M., Flache, A. & Helbing, D. (2010). Individualization as driving force of clustering phenomena in humans. PLoS Computational Biology, 6(10), e1000959. [doi:10.1371/journal.pcbi.1000959]

MÄS, M., Flache, A., Takacs, K. & Jehn, K. A. (2013). In the short term we divide, in the long term we unite: Demographic crisscrossing and the effects of fault lines on subgroup polarization. Organization Science, 24(3), 716–736. [doi:10.1287/orsc.1120.0767]

MÄS, M. & Nax, H. H. (2016). A behavioral study of “noise” in coordination games. Journal of Economic Theory, 162, 195–208. [doi:10.1016/j.jet.2015.12.010]

MCPHERSON, M., Smith-Lovin, L. & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27, 415–444. [doi:10.1146/annurev.soc.27.1.415]

PINEDA, M., Toral, R. & Hernández-García, E. (2009). Noisy continuous-opinion dynamics. Journal of Statistical Mechanics: Theory and Experiment, 2009(08), P08001 (18pp).

PINEDA, M., Toral, R. & Hernández-García, E. (2013). The noisy Hegselmann-Krause model for opinion dynamics. The European Physical Journal B, 86(12), 1–10. [doi:10.1140/epjb/e2013-40777-7]

SALZARULO, L. (2006). A continuous opinion dynamics model based on the principle of meta-contrast. Journal of Artificial Societies and Social Simulation, 9(1), 13: http://jasss.soc.surrey.ac.uk/9/1/13.html.

SHIN, J. K. & Lorenz, J. (2010). Tipping diffusivity in information accumulation systems: More links, less consensus. Journal of Statistical Mechanics: Theory and Experiment, 2010(06), P06005. [doi:10.1088/1742-5468/2010/06/p06005]

WATTS, D. J. & Dodds, P. S. (2007). Influentials, networks, and public opinion formation. Journal of Consumer Research, 34, 441–458. [doi:10.1086/518527]
[ null, "http://jasss.soc.surrey.ac.uk/gifs/pdf-icon.png", null, "http://jasss.soc.surrey.ac.uk/gifs/open-access-logo120.jpg", null, "http://jasss.soc.surrey.ac.uk/gifs/download.png", null, "http://jasss.soc.surrey.ac.uk/19/4/7/Figure1.png", null, "http://jasss.soc.surrey.ac.uk/19/4/7/Figure2.png", null, "http://jasss.soc.surrey.ac.uk/19/4/7/Figure3.png", null, "http://jasss.soc.surrey.ac.uk/19/4/7/Figure4.png", null, "http://jasss.soc.surrey.ac.uk/19/4/7/Figure5.png", null, "http://jasss.soc.surrey.ac.uk/19/4/7/Figure6.png", null, "http://jasss.soc.surrey.ac.uk/19/4/7/Figure7.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8873123,"math_prob":0.89029825,"size":37658,"snap":"2019-51-2020-05","text_gpt3_token_len":8547,"char_repetition_ratio":0.15881447,"word_repetition_ratio":0.039333805,"special_character_ratio":0.23742631,"punctuation_ratio":0.1475882,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9726355,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-28T00:10:43Z\",\"WARC-Record-ID\":\"<urn:uuid:aca252c3-6f94-41f3-b4ae-c82c08c84929>\",\"Content-Length\":\"72091\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:33aa3df8-5db8-48c5-9a6d-1c729e281cb9>\",\"WARC-Concurrent-To\":\"<urn:uuid:6bb5fded-f125-492a-b4de-a426ffc5475e>\",\"WARC-IP-Address\":\"35.177.28.97\",\"WARC-Target-URI\":\"http://jasss.soc.surrey.ac.uk/19/4/7.html\",\"WARC-Payload-Digest\":\"sha1:4UBNCPVCFBXCWFT6CACGO2JK2CPOKG3O\",\"WARC-Block-Digest\":\"sha1:DQHWOQPVZ6Q7PFEHLGVSCCWAKDZPTTZK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251737572.61_warc_CC-MAIN-20200127235617-20200128025617-00169.warc.gz\"}"}
https://freakonometrics.hypotheses.org/12081
[ "# Triangle for Parameters of AR(2) Stationary Processes\n\nWe’ve seen yesterday conditions on so that the canonical process, , satisfying\n\nThe condition is rather simple, since should be a triangular region. But the proof is a bit more tricky…\n\nRecall that we want to parametrize the region\n\nSince we have a true process, then . Our polynomial is here\n\nwhere ‘s are the roots – in – of . Consider now some kind of dual version of that polynomial,\n\nHaving the roots of outside the unit circle is the same as having the roots of inside the unit circle. Obserse that we can write\n\nRoots of are then\n\nFrom this point, we should discuss a little bit, depending on the value of .\n\n• if\n\nThen there is one root, and only one. So we need to have or equivalently .\n\n• if\n\nThen we got roots in , and\n\nmeans, equivalently, that\n\n• if\n\nThen we have two (conjugate) roots in , and the square of norm of those roots is . Thus, .\n\nWe get what was mention in the course: the canonical has a stationary solution if, and only if\n\nwhich is a triangular region, see", null, "## 2 thoughts on “Triangle for Parameters of AR(2) Stationary Processes”\n\n1.", null, "dong says:\n\nthere is a little typo of $\\phi_2$ on the parameter set\n\n2.", null, "Adil says:\n\nCan you explain more of how you arrived at “some kind of dual version of that polynomial”?\n\nAlso, should the signs be – instead of + in that dual version? If not, I’m not sure how factoring out \\phi_{2} flips the signs here.\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed." ]
[ null, "http://freakonometrics.hypotheses.org/files/2014/01/AR2-simulation-triangle.gif", null, "https://secure.gravatar.com/avatar/327141b0e2c007396c838966f3a0ddcc", null, "https://secure.gravatar.com/avatar/cfc7e472fb5746df86f749223d83163c", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9260037,"math_prob":0.97573745,"size":1320,"snap":"2023-40-2023-50","text_gpt3_token_len":321,"char_repetition_ratio":0.10106383,"word_repetition_ratio":0.015936255,"special_character_ratio":0.23863636,"punctuation_ratio":0.10989011,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97617835,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-04T19:05:11Z\",\"WARC-Record-ID\":\"<urn:uuid:68e8c699-e0f1-498b-b6d2-433d0b8ee607>\",\"Content-Length\":\"200720\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:30e33329-08cb-4543-9540-28ca5884e156>\",\"WARC-Concurrent-To\":\"<urn:uuid:b3b45c24-4a98-4151-864c-aa4381c7bb85>\",\"WARC-IP-Address\":\"134.158.39.132\",\"WARC-Target-URI\":\"https://freakonometrics.hypotheses.org/12081\",\"WARC-Payload-Digest\":\"sha1:CIJHH5T4LIDF47ONMMU5KHQ5IGQ66SIC\",\"WARC-Block-Digest\":\"sha1:7N6XMPIPEB6NSOES6AFLGH33GGIQA7LN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100534.18_warc_CC-MAIN-20231204182901-20231204212901-00439.warc.gz\"}"}
https://byjus.com/rd-sharma-solutions/class-8-maths-chapter-7-factorization-exercise-7-1/
[ "", null, "# RD Sharma Solutions for Class 8 Chapter - 7 Factorization Exercise 7.1\n\n### RD Sharma Class 8 Solutions Chapter 7 Ex 7.1 PDF Free Download\n\nOur expert team has designed the solutions for RD Sharma Class 8 Maths Chapter 7 to help students prepare for their exams at ease. RD Sharma Solutions is one of the best reference material for CBSE students. Learners can download the pdf from the links provided below. Experts suggest students practice the solutions many numbers of times to yield good results in their exams. In Exercise 7.1 of Chapter 7 Factorization, we shall discuss the basic definitions for factors, factorization and we also find the common factors and greatest common factor of the monomial.\n\n## Download the pdf of RD Sharma For Class 8 Maths Exercise 7.1 Chapter 7 Factorization", null, "", null, "", null, "", null, "### Access other Exercises of RD Sharma Solutions for Class 8 Maths Chapter 7 Factorization\n\nExercise 7.2 Solutions\n\nExercise 7.3 Solutions\n\nExercise 7.4 Solutions\n\nExercise 7.5 Solutions\n\nExercise 7.6 Solutions\n\nExercise 7.7 Solutions\n\nExercise 7.8 Solutions\n\nExercise 7.9 Solutions\n\n### Access Answers to RD Sharma Solutions for Class 8 Maths Exercise 7.1 Chapter 7 Factorization\n\nFind the greatest common factor (GCF/HCF) of the following polynomials: (1-14)\n\n1. 2x2 and 12x2\n\nSolution:\n\nWe know that the numerical coefficients of given numerical are 2 and 12\n\nThe greatest common factor of 2 and 12 is 2\n\nThe common literals appearing in given monomial is x\n\nThe smallest power of x in two monomials is 2\n\nThe monomial of common literals with smallest power is x2\n\n∴ The greatest common factor = 2x2\n\n2. 6x3y and 18x2y3\n\nSolution:\n\nWe know that the numerical coefficients of given numerical are 6 and18\n\nThe greatest common factor of 6 and 18 is 6\n\nCommon literals appearing in given numerical are x and y\n\nSmallest power of x in three monomial is 2\n\nSmallest power of y in three monomial is 1\n\nMonomial of common literals with smallest power is x2y\n\n∴ The greatest common factor = 6x2y\n\n3. 7x, 21x2 and 14xy2\n\nSolution:\n\nWe know that the numerical coefficients of given numerical are 7, 21 and 14\n\nGreatest common factor of 7, 21 and 14 is 7\n\nCommon literals appearing in given numerical are x and y\n\nSmallest power of x in three monomials is 1\n\nSmallest power of y in three monomials is 0\n\nMonomials of common literals with smallest power is x\n\n∴ The greatest common factor = 7x\n\n4. 42x2yz and 63x3y2z3\n\nSolution:\n\nWe know that the numerical coefficients of given numerical are 42 and 63.\n\nGreatest common factor of 42, 63 is 21.\n\nCommon literals appearing in given numerical are x, y and z\n\nSmallest power of x in two monomials is 2\n\nSmallest power of y in two monomials is 1\n\nSmallest power of z in two monomials is 1\n\nMonomials of common literals with smallest power is x2yz\n\n∴ The greatest common factor = 21x2yz\n\n5. 12ax2, 6a2x3 and 2a3x5\n\nSolution:\n\nWe know that the numerical coefficients of given numerical are 12, 6 and 2\n\nGreatest common factor of 12, 6 and 2 is 2.\n\nCommon literals appearing in given numerical are a and x\n\nSmallest power of x in three monomials is 2\n\nSmallest power of a in three monomials is 1\n\nMonomials of common literals with smallest power is ax2\n\n∴ The greatest common factor = 2ax2\n\n6. 
6. 9x², 15x²y³, 6xy² and 21x²y²

Solution:

The numerical coefficients of the given monomials are 9, 15, 6 and 21.

The greatest common factor of 9, 15, 6 and 21 is 3.

The common literals appearing in the given monomials are x and y.

The smallest power of x in the four monomials is 1.

The smallest power of y in the four monomials is 0.

The monomial of common literals with the smallest powers is x.

∴ The greatest common factor = 3x

7. 4a²b³, -12a³b, 18a⁴b³

Solution:

The numerical coefficients of the given monomials are 4, -12 and 18.

The greatest common factor of 4, 12 and 18 is 2.

The common literals appearing in the given monomials are a and b.

The smallest power of a in the three monomials is 2.

The smallest power of b in the three monomials is 1.

The monomial of common literals with the smallest powers is a²b.

∴ The greatest common factor = 2a²b

8. 6x²y², 9xy³, 3x³y²

Solution:

The numerical coefficients of the given monomials are 6, 9 and 3.

The greatest common factor of 6, 9 and 3 is 3.

The common literals appearing in the given monomials are x and y.

The smallest power of x in the three monomials is 1.

The smallest power of y in the three monomials is 2.

The monomial of common literals with the smallest powers is xy².

∴ The greatest common factor = 3xy²

9. a²b³, a³b²

Solution:

The numerical coefficient of each of the given monomials is 1, so the greatest common factor of the coefficients is 1.

The common literals appearing in the given monomials are a and b.

The smallest power of a in the two monomials is 2.

The smallest power of b in the two monomials is 2.

The monomial of common literals with the smallest powers is a²b².

∴ The greatest common factor = a²b²

10. 36a²b²c⁴, 54a⁵c², 90a⁴b²c²

Solution:

The numerical coefficients of the given monomials are 36, 54 and 90.

The greatest common factor of 36, 54 and 90 is 18.

The common literals appearing in the given monomials are a, b and c.

The smallest power of a in the three monomials is 2.

The smallest power of b in the three monomials is 0.

The smallest power of c in the three monomials is 2.

The monomial of common literals with the smallest powers is a²c².

∴ The greatest common factor = 18a²c²

11. x³, -yx²

Solution:

The numerical coefficients of the given monomials are 1 and -1, so the greatest common factor of the coefficients is 1.

The common literals appearing in the given monomials are x and y.

The smallest power of x in the two monomials is 2.

The smallest power of y in the two monomials is 0.

The monomial of common literals with the smallest powers is x².

∴ The greatest common factor = x²

12. 15a³, -45a², -150a

Solution:

The numerical coefficients of the given monomials are 15, -45 and -150.

The greatest common factor of 15, 45 and 150 is 15.

The common literal appearing in the given monomials is a.

The smallest power of a in the three monomials is 1.

The monomial of common literals with the smallest power is a.

∴ The greatest common factor = 15a

13. 2x³y², 10x²y³, 14xy

Solution:

The numerical coefficients of the given monomials are 2, 10 and 14.

The greatest common factor of 2, 10 and 14 is 2.

The common literals appearing in the given monomials are x and y.

The smallest power of x in the three monomials is 1.

The smallest power of y in the three monomials is 1.

The monomial of common literals with the smallest powers is xy.

∴ The greatest common factor = 2xy
14. 14x³y⁵, 10x⁵y³, 2x²y²

Solution:

The numerical coefficients of the given monomials are 14, 10 and 2.

The greatest common factor of 14, 10 and 2 is 2.

The common literals appearing in the given monomials are x and y.

The smallest power of x in the three monomials is 2.

The smallest power of y in the three monomials is 2.

The monomial of common literals with the smallest powers is x²y².

∴ The greatest common factor = 2x²y²

Find the greatest common factor of the terms in each of the following expressions:

15. 5a⁴ + 10a³ - 15a²

Solution:

The greatest common factor of the three terms is 5a².

16. 2xyz + 3x²y + 4y²

Solution:

The greatest common factor of the three terms is y.

17. 3a²b² + 4b²c² + 12a²b²c²

Solution:

The greatest common factor of the three terms is b².
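Answers like these can also be double-checked mechanically; here is a short SymPy snippet (an illustrative aid, not part of the textbook solutions) that computes the GCF for two of the exercises above:

```python
from sympy import symbols, gcd

x, y, a, b, c = symbols('x y a b c')

# Problem 2: GCF of 6x^3*y and 18x^2*y^3 should be 6x^2*y.
print(gcd(6*x**3*y, 18*x**2*y**3))                                    # -> 6*x**2*y

# Problem 10: GCF of 36a^2b^2c^4, 54a^5c^2 and 90a^4b^2c^2 should be 18a^2*c^2.
# gcd takes two arguments, so nest the calls for three monomials.
print(gcd(gcd(36*a**2*b**2*c**4, 54*a**5*c**2), 90*a**4*b**2*c**2))   # -> 18*a**2*c**2
```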
[ null, "https://www.facebook.com/tr", null, "https://cdn1.byjus.com/wp-content/uploads/2019/11/rd-sharm-class-8-maths-chapter-7-ex-7.1-1.png", null, "https://cdn1.byjus.com/wp-content/uploads/2019/11/rd-sharma-class-8-maths-chapter-7-ex-7.1-2.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/11/rd-sharma-class-8-maths-chapter-7-ex-7.1-3.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/11/rd-sharma-class-8-maths-chapter-7-ex-7.1-4.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8978079,"math_prob":0.96873754,"size":6578,"snap":"2019-51-2020-05","text_gpt3_token_len":1944,"char_repetition_ratio":0.24688165,"word_repetition_ratio":0.46887967,"special_character_ratio":0.26284584,"punctuation_ratio":0.078967944,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.999595,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T03:30:30Z\",\"WARC-Record-ID\":\"<urn:uuid:d2398110-e34c-4455-9dfa-cac811e35b55>\",\"Content-Length\":\"574090\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:68e5eb50-0632-44ce-b167-c647f576479a>\",\"WARC-Concurrent-To\":\"<urn:uuid:dce453a4-09bc-4fc8-88a5-5c113ac554ea>\",\"WARC-IP-Address\":\"52.77.80.199\",\"WARC-Target-URI\":\"https://byjus.com/rd-sharma-solutions/class-8-maths-chapter-7-factorization-exercise-7-1/\",\"WARC-Payload-Digest\":\"sha1:2AIGA4WNONJAMGDI2DCLJFBLUHVEGUAO\",\"WARC-Block-Digest\":\"sha1:K4RXEGRQSVBGABP6OPJA7ZR75VF5V2LR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541301014.74_warc_CC-MAIN-20191215015215-20191215043215-00247.warc.gz\"}"}
https://www.physicsforums.com/threads/one-to-one-functions.134861/
[ "# One-to-one Functions.\n\n#### AznBoi\n\nHow do you solve one-to-one functions algebraically?\n\nProblem: f(x) (3x+4)/5\n\nWhat are you suppose to substitue for x?\n\nI know that if f(a)=f(b), a=b...\n\n#### Office_Shredder\n\nStaff Emeritus\nGold Member\nWhat do you mean solve?\n\nIf f(x) = (3x+4)/5, then you have your function. If you plug in an x, you can solve for f(x) by doing basic math. Given what f(x) is, you can do something like this:\n\n5*f(x) = 3x + 4\n\n5f(x) - 4=3x\n\n(5f(x)-4)/3=x\n\n#### AznBoi\n\nIt says: Determine algebraically whether the function is one-to-one. How do I do that?\n\n#### Office_Shredder\n\nStaff Emeritus\nGold Member\nOh, I get it. You want to show that if you plug two distinct values, x1 and x2 into f(x), the values that are returned either are not equal to each other, or x1 and x2 are the same (that's the definition of one to one)\n\n#### AznBoi\n\nSo do you plug 2 and -2. Their negatives? or what two numbers do you plug in. I plugged in 2 and -2 and I got 2 and -2/5. What am I doing wrong?\n\n#### Hurkyl\n\nStaff Emeritus\nGold Member\nAznBoi said:\nHow do I do that?\nJust do what it says:\n\nI know that if f(a)=f(b), a=b...\nStart by writing down f(a) = f(b).\nUse algebra to derive a = b.\n\nProof finished.\n\n#### berkeman\n\nMentor\nCan you use calculus? If so, you can show that the slope (derivative) of the function is always positive and bounded. That means that the function cannot double-back on itself to create a second y value for any x value.\n\nEven if you aren't supposed to use calculus, at least for this problem, the equation is the equation of a straight line, right? y = mx + b\n\nEDIT -- Ooo. I like Hurkyl's method better!\n\n#### AznBoi\n\nwhat do you mean? like f(a)=3a+4/5 f(b)=3b+4/5?? a will always equal b if you do it that way wouldn't it? No I can't use calculus, I'm in pre cal xD. Can you show me how to do it?\n\n#### Hurkyl\n\nStaff Emeritus\nGold Member\nAznBoi said:\nwhat do you mean? like f(a)=3a+4/5 f(b)=3b+4/5?? a will always equal b if you do it that way wouldn't it?\nIf you can prove what you just said, then you've proven f is one-to-one. It's that easy.\n\n#### AznBoi\n\nWhat about f(x)=x^2 It's not a one-to-one fucntion even though f(a)=a^2 is equal to f(b)=b^2 a=b in that case but if you put -2 and 2 in f(a)=f(b) but a doesn't equal b.. So I'm confused.\n\n#### AznBoi\n\nCan you give me some examples of functions that aren't one-by-one. I mean if you use the same function for x=a and b. aren't they alwasy equal to each other?\n\n#### Office_Shredder\n\nStaff Emeritus\nGold Member\nFor x^2, if a^2=b^2, then a=-b means x^2 isn't necessarily one to one\n\n#### Hurkyl\n\nStaff Emeritus\nGold Member\nWhat about f(x)=x^2 It's not a one-to-one fucntion even though f(a)=a^2 is equal to f(b)=b^2 a=b in that case\nWhy do you think a=b in that case? You've demonstrated that's not always true... so think hard about why you would (incorrectly) believe a=b must be true here.\n\n#### AznBoi\n\nOkay I know that -2^2 and 2^2 are both equal to 4.. So that means you can't use a and b.. cause a^2 and b^2 would be a=b if you solve it algebrically. What numbers do I need to substitue for x? How do I know that a=b or a doesn't equal b. Thats what I'm trying to figure out. 
=P

#### Hurkyl

AznBoi said:
'cause a^2 and b^2 would be a=b if you solve it algebraically.

#### AznBoi

a^2 = b^2 because you square root both sides and you get a = b?

#### Hurkyl

AznBoi said:
a^2 = b^2 because you square root both sides and you get a = b?

Nope. You get |a| = |b|.

#### AznBoi

So basically anything that is to an even power is not a one-to-one function. ok this is weird. lol

#### berkeman

AznBoi said:
So basically anything that is to an even power is not a one-to-one function. ok this is weird. lol

Just for my peace of mind, could you please post the textbook definition of a one-to-one function? I think that my practical definition of a one-to-one function (any x maps to only one y) may not match what others are asking you to show.

#### Data

One-to-one means IF f(x) = f(y) THEN x = y.

So think about f(x) = x^2. If f(x) = f(y), is it NECESSARILY true that x = y? If not, then f is not 1-1. If so, then f is 1-1.
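For reference, here is the algebra the thread is driving at, written out in full (an editorial summary, not a post from the thread):

```latex
% f(x) = (3x+4)/5 is one-to-one:
\begin{align*}
f(a) = f(b) &\Rightarrow \frac{3a+4}{5} = \frac{3b+4}{5} \\
            &\Rightarrow 3a + 4 = 3b + 4 \Rightarrow a = b.
\end{align*}
% g(x) = x^2 is not one-to-one:
\begin{align*}
g(a) = g(b) &\Rightarrow a^2 = b^2 \Rightarrow |a| = |b|,
\end{align*}
% which allows a = -b, e.g. g(-2) = g(2) = 4 with -2 \neq 2.
```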
{"ft_lang_label":"__label__en","ft_lang_prob":0.95175153,"math_prob":0.94218016,"size":1024,"snap":"2019-13-2019-22","text_gpt3_token_len":300,"char_repetition_ratio":0.12254902,"word_repetition_ratio":0.0,"special_character_ratio":0.28222656,"punctuation_ratio":0.11494253,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99768966,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-25T22:05:44Z\",\"WARC-Record-ID\":\"<urn:uuid:a9b78c16-29cc-4d31-8a2a-b7d9a7d87f14>\",\"Content-Length\":\"140101\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:262ed6d6-1d78-4e80-ae55-cdd524d16c0c>\",\"WARC-Concurrent-To\":\"<urn:uuid:74aa3566-bbe0-48de-b213-36d500d752b2>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/one-to-one-functions.134861/\",\"WARC-Payload-Digest\":\"sha1:L57LWRTPS5SEYQA6FKRIANFR5ZU5JF7W\",\"WARC-Block-Digest\":\"sha1:3CLV4XKLNE5SOGRNHGGN772KOL2YFYBY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912204461.23_warc_CC-MAIN-20190325214331-20190326000331-00041.warc.gz\"}"}
https://forces-3-1-0.embotech.com/Documentation/solver_options/index.html
[ "# 12. Solver Options¶\n\nThe default solver options can be loaded when giving a name to the solver with the following command\n\ncodeoptions = getOptions('solvername');\n\n\nIn the documentation below, we assume that you have created this struct and named it codeoptions.\n\n## 12.1. General options¶\n\nWe will first discuss how to change several options that are valid for all the FORCES PRO interfaces.\n\n### 12.1.1. Solver name¶\n\nThe name of the solver will be used to name variables, functions, but also the MEX file and associated help file. This helps you to use multiple solvers generated by FORCES within the same software project or Simulink model. To set the name of the solver use:\n\ncodeoptions.name = 'solvername';\n\n\nAlternatively, you can directly name the solver when generating the options struct by calling:\n\ncodeoptions = getOptions('solvername');\n\n\n### 12.1.3. Maximum number of iterations¶\n\nTo set the maximum number of iterations of the generated solver, use:\n\ncodeoptions.maxit = 200;\n\n\nThe default maximum number of iterations for all solvers provided by FORCES PRO is set to 200.\n\n### 12.1.4. Compiler optimization level¶\n\nThe compiler optimization level can be varied by changing the field optlevel from 0 to 3 (default):\n\ncodeoptions.optlevel = 0;\n\n\nImportant\n\nIt is recommended to set optlevel to 0 during prototyping to evaluate the functionality of the solver without long compilation times. Then set it back to 3 when generating code for deployment or timing measurements.\n\n### 12.1.5. Running solvers in parallel¶\n\nThe generated solver can be run in parallel on different threads by changing the field threadSafeStorage from false to true:\n\ncodeoptions.threadSafeStorage = true;\n\n\n### 12.1.6. Measure Computation time¶\n\nYou can measure the time used for executing the generated code by using:\n\ncodeoptions.timing = 1;\n\n\nBy default the execution time is measured. The execution time can be accessed in the field solvetime of the information structure returned by the solver. In addition, the execution time is printed in the console if the flag printlevel is greater than 0.\n\nImportant\n\nSetting timing on will introduce a dependency on libraries used for accessing the system clock. Timing should be turned off when deploying the code on an autonomous embedded system.\n\nBy default when choosing to generate solvers for target platforms, timing is disabled. You can manually enable timing on embedded platforms by using:\n\ncodeoptions.embedded_timing = 1;\n\n\n### 12.1.7. Datatypes¶\n\nThe type of variables can be changed by setting the field floattype as outlined in Table 12.2.\n\nTable 12.2 Data type options\n\nfloattype\n\nDecimation\n\nWidth (bits)\n\nSupported algorithms\n\n'double' (default)\n\n64 bit\n\nFloating point\n\n'float'\n\n32 bit\n\nFloating point\n\n'int'\n\n32 bit\n\nFixed point\n\n'short'\n\n16 bit\n\nFixed point\n\nImportant\n\nUnless running on a resource-constrained platform, we recommend using double precision floating point arithmetics to avoid problems in the solver. If single precision floating point has to be used, reduce the required tolerances on the solver accordingly by a power of two (i.e. from 1E-6 to 1E-3).\n\n### 12.1.8. 
### 12.1.8. Overwriting existing solvers

When a new solver is generated with the same name as an existing solver, one can control the overwriting behaviour by setting the field overwrite as outlined in Table 12.3.

Table 12.3 Overwrite existing solver options

| overwrite | Result |
| --- | --- |
| 0 | Never overwrite. |
| 1 | Always overwrite. |
| 2 (default) | Ask to overwrite. |

### 12.1.10. Code generation server

By default, code generation requests are routed to embotech’s server. To send a code generation request to a local server, for example when FORCES PRO is used in an enterprise setting, set the following field to an appropriate value:

codeoptions.server = 'http://embotech-server2.com:8114/v1.5.beta';

### 12.1.12. Skipping automatic cleanup

FORCES PRO automatically cleans up some of the files that it generates during the code generation, but which are usually not needed any more after building the MEX file. In particular, some intermediate CasADi generated files are deleted. If you would like to prevent any cleanup by FORCES, set the option:

codeoptions.cleanup = 0;

The default value is 1 (true).

Important

The library or object files generated by FORCES PRO contain only the solver itself. To retain the CasADi generated files for function evaluations, switch off automatic cleanup as shown above. This is needed if you want to use the solver within another software project and need to link to it.

### 12.1.13. Target platform

As a default option, FORCES PRO generates code for simulation on the host platform. To obtain code for deployment on a target embedded platform, set the field platform to the appropriate value. The platforms currently supported by FORCES PRO are given in Table 12.4. Licenses for a specific platform can be requested on the portal by selecting the platform name stated in the Portal Selection column.

Table 12.4 Target platforms supported by FORCES PRO

| platform | Description | Portal Selection |
| --- | --- | --- |
| 'Generic' (default) | For the architecture of the host platform. | 'x86_64' (Engineering Node) |
| 'x86_64' | For x86_64 based 64-bit platforms (detected OS). | 'x86_64' |
| 'x86' | For x86 based 32-bit platforms (detected OS). | 'x86' |
| 'Win-x86_64' | For Windows x86_64 based 64-bit platforms (supports Microsoft/Intel compiler). | 'x86_64' |
| 'Win-x86' | For Windows x86 based 32-bit platforms (supports Microsoft/Intel compiler). | 'x86' |
| 'Win-MinGW-x86_64' | For Windows x86_64 based 64-bit platforms (supports MinGW compiler). | 'x86_64' |
| 'Win-MinGW-x86' | For Windows x86 based 32-bit platforms (supports MinGW compiler). | 'x86' |
| 'Mac-x86_64' | For Mac x86_64 based 64-bit platforms (supports GCC/Clang compiler). | 'x86_64' |
| 'Gnu-x86_64' | For Linux x86_64 based 64-bit platforms (supports GCC compiler). | 'x86_64' |
| 'Gnu-x86' | For Linux x86 based 32-bit platforms (supports GCC compiler). | 'x86' |
| 'Docker-Gnu-x86_64' | For Linux x86_64 based 64-bit platforms on Docker (supports GCC compiler). | 'Docker-Gnu-x86_64' |
| 'Docker-Gnu-x86' | For Linux x86 based 32-bit platforms on Docker (supports GCC compiler). | 'Docker-Gnu-x86' |
| 'ARM-Generic' | For ARM Cortex 32-bit processors (Gnueabih machine type). | 'ARM-Generic-Gnu' |
| 'ARM-Generic64' | For ARM Cortex 64-bit processors (Aarch machine type). | 'ARM-Generic64-Gnu' |
| 'Integrity-ARM-x86' | For ARM Cortex 32-bit processors using the Integrity toolchain. | 'Integrity-ARM-x86' |
| 'Integrity-ARM-x64' | For ARM Cortex 64-bit processors using the Integrity toolchain. | 'Integrity-ARM-x64' |
| 'ARM Cortex-M3' | For ARM Cortex M3 32-bit processors. | 'ARM-Cortex-M3' |
| 'ARM-Cortex-M4-NOFPU' | For the ARM Cortex M4 32-bit processors without a floating-point unit. | 'ARM-Cortex-M4' |
| 'ARM-Cortex-M4' | For the ARM Cortex M4 32-bit processors with a floating-point unit. | 'ARM-Cortex-M4' |
| 'ARM-Cortex-A7' | For the ARM Cortex A7 32-bit processors (Gnueabih machine type). | 'ARM-Cortex-A7' |
| 'ARM-Cortex-A8' | For the ARM Cortex A8 32-bit processors (Gnueabih machine type). | 'ARM-Cortex-A8' |
| 'ARM-Cortex-A9' | For the ARM Cortex A9 32-bit processors (Gnueabih machine type). | 'ARM-Cortex-A9' |
| 'ARM-Cortex-A15' | For the ARM Cortex A15 32-bit processors (Gnueabih machine type). | 'ARM-Cortex-A15' |
| 'ARM-Cortex-A53' | For the ARM Cortex A53 64-bit processors (Gnueabih machine type). | 'ARM-Cortex-A53' |
| 'ARM-Cortex-A72' | For the ARM Cortex A72 64-bit processors (Gnueabih machine type). | 'ARM-Cortex-A72' |
| 'TI-Cortex-A15' | For the ARM Cortex A15 32-bit processors (Gnueabih machine type). | 'TI-Cortex-A15' |
| 'NVIDIA-Cortex-A57' | For the NVIDIA Cortex A57 64-bit processors (Aarch machine type). | 'NVIDIA-Cortex-A57' |
| 'AARCH-Cortex-A57' | For the ARM Cortex A57 64-bit processors (Aarch machine type). | 'AARCH-Cortex-A57' |
| 'AARCH-Cortex-A72' | For the ARM Cortex A72 64-bit processors (Aarch machine type). | 'AARCH-Cortex-A72' |
| 'PowerPC' | For 32-bit PowerPC based platforms (supports GCC compiler). | 'PowerPC-Gnu' |
| 'PowerPC64' | For 64-bit PowerPC based platforms (supports GCC compiler). | 'PowerPC64-Gnu' |
| 'MinGW32' | For Windows x86 based 32-bit platforms (supports MinGW compiler). | 'x86' |
| 'MinGW64' | For Windows x86_64 based 64-bit platforms (supports MinGW compiler). | 'x86_64' |
| 'dSPACE-MABII' | For the dSPACE MicroAutoBox II real-time system (supports Microtec compiler). | 'dSPACE-MABII-Microtec' |
| 'dSPACE-MABIII' | For the dSPACE MicroAutoBox III real-time system (supports Gcc compiler). | 'dSPACE-MABIII-Gcc' |
| 'dSPACE-MABXII' | For the dSPACE MicroAutoBox II real-time system (supports Microtec compiler). | 'dSPACE-MABII-Microtec' |
| 'dSPACE-MABXIII' | For the dSPACE MicroAutoBox III real-time system (supports Gcc compiler). | 'dSPACE-MABIII-Gcc' |
| 'Speedgoat-x86' | For Speedgoat 32-bit real-time platforms (supports Microsoft compiler). | 'Speedgoat-x86' |
| 'Speedgoat-x64' | For Speedgoat 64-bit real-time platforms (supports Microsoft compiler). | 'Speedgoat-x64' |
| 'IAtomE680_Bachmann' | For Bachmann PLC platforms (supports VxWorks compiler). | 'IAtomE680-VxWorks' |

Note

If a solver for another platform is requested, FORCES PRO will still provide the simulation interfaces for the 'Generic' host platform to enable users to run simulations.

#### 12.1.13.1. Cross compilation

To generate code for other operating systems different from the host platform, set the appropriate flag from the following list to 1:

codeoptions.win
codeoptions.mac
codeoptions.gnu

Note that this will only affect the target platform. Interfaces for the host platform will be built automatically.

#### 12.1.13.2. Mac compilation

When compiling for Mac platforms, it is possible to select the compiler to be used for the web compilation. Select from the available values gcc (default) and clang with the following code option:

codeoptions.maccompiler
#### 12.1.13.3. SIMD instructions

On x86-based host platforms, one can enable the sse field to accelerate the execution of the solver:

codeoptions.sse = 1;

From version 1.9.0, on x86-based host platforms one can also add the avx field to significantly accelerate the compilation and execution of the convex solver:

codeoptions.avx = 1;

Note

Currently, when the options avx and blckMatrices are enabled simultaneously, blckMatrices is automatically disabled.

Note

When sparse parameters are present in the model, the options avx and neon are automatically set to zero.

Depending on the host platform, avx may be automatically enabled. If the machine on which the solver is to be run does not support AVX and the message "Illegal Instruction" is returned at run-time, one can explicitly disable avx by setting:

codeoptions.avx = -1;

If the host platform supports AVX but the user prefers not to have AVX intrinsics in the generated code, one can also keep the default option value:

codeoptions.avx = 0;

On 'NVIDIA-Cortex-A57', 'AARCH-Cortex-A57' and 'AARCH-Cortex-A72' target platforms, one can also enable the field neon in order to accelerate the execution of the convex solver. From version 1.9.0, the typical behaviour is that the host platform gets vectorized code based on AVX intrinsics when avx = 1, and the target platform gets AVX vectorized code if it supports it when avx = 1, and NEON vectorized code if it is one of the above Cortex platforms and neon = 1.

For single precision, the options are:

codeoptions.floattype = 'float'
codeoptions.neon = 1;

For double precision, the options are:

codeoptions.floattype = 'double'
codeoptions.neon = 2;

To disable NEON intrinsics in the generated target code, keep the default value of the neon option:

codeoptions.neon = 0;

If NEON vectorization is being used and there is a mismatch between the float precision and the value of the neon option, the solver is automatically generated with the following options:

codeoptions.floattype = 'double'
codeoptions.neon = 2;

and a warning message is raised by the MATLAB client.

Note

From version 1.9.0, ARMv8-A NEON instructions are generated. Hence, target platforms based on ARMv7 and earlier versions are currently not supported.
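A minimal sketch combining these vectorization rules (the target choice is illustrative):

codeoptions.platform = 'AARCH-Cortex-A72'; % NEON-capable target from the list above
codeoptions.avx = 1;                       % host (and AVX-capable targets) get AVX code
codeoptions.floattype = 'double';
codeoptions.neon = 2;                      % NEON code on the target; 2 pairs with 'double', 1 with 'float'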
### 12.1.14. MISRA 2012 compliance

If your license allows it, add the following field to generate C code that is compliant with the MISRA 2012 rules:

codeoptions.misra2012_check = 1;

This option makes the generated solver code MISRA compliant. After compilation, the client also downloads a folder whose name ends with _misra2012_analysis. The folder contains one summary of all MISRA violations for the solver source and header files. Note that the option only produces MISRA compliant code when used with the algorithms PDIP and PDIP_NLP.

### 12.1.15. Optimizing code size

The size of the solver libraries generated with code option PDIP_NLP can be reduced by means of the option nlp.compact_code. By setting

codeoptions.nlp.compact_code = 1;

the user enables the FORCES PRO server to generate smaller code, which results in shorter compilation time and, in some cases, slightly better solve time. This feature is especially effective on long-horizon problems.

The size of the sparse linear algebra routines in the generated code can be reduced by changing the option compactSparse from 0 to 1:

codeoptions.compactSparse = 1;

### 12.1.16. Optimizing Linear Algebra Operations

Some linear algebra routines in the generated code have optimizations that can be enabled by changing the options optimize_<optimization> from 0 to 1. These optimizations change the code to make better use of some embedded architectures, where hardware is more limited than on host PC architectures. They therefore show better results on embedded platforms such as ARM targets than in simulations on host PCs. The available optimizations are:

• Cholesky Division: Performs the divisions included in the Cholesky factorization more efficiently to reduce its computation time.

• Registers: Attempts to use the architecture's registers in order to reduce memory operations, which can take significant time.

• Use Locals: These options (separated into simple/heavy/all in ascending complexity) make better use of data locality in order to reduce memory jumps.

• Operations Rearrange: Rearranges operations in order to make more efficient use of data and reduce memory jumps.

• Loop Unrolling: Unrolls some of the loops in order to remove their overhead.

• Enable Offset: Allows the rest of the optimizations to take place in cases where the matrix contains offsets.

codeoptions.optimize_choleskydivision = 1;
codeoptions.optimize_registers = 1;
codeoptions.optimize_uselocalsall = 1;
codeoptions.optimize_uselocalsheavy = 1; % overridden if uselocalsall is enabled
codeoptions.optimize_uselocalssimple = 1; % overridden if uselocalsheavy is enabled
codeoptions.optimize_operationsrearrange = 1;
codeoptions.optimize_loopunrolling = 1;
codeoptions.optimize_enableoffset = 1;

### 12.1.17. Dump problem formulation

The MATLAB client of FORCES PRO provides a built-in tool to dump the problem formulation, so that the exact same solver can be reproduced for future reference. This tool is explained in detail in Section 13 and can be turned on by using the setting:

codeoptions.dump_formulation = 1;

## 12.2. High-level interface options

The FORCES PRO NLP solver of the high-level interface implements a nonlinear barrier interior-point method. We will now discuss how to change several parameters in the solver.

### 12.2.1. Integrators

When providing the continuous dynamics, the user must select a particular integrator by setting nlp.integrator.type as outlined in Table 12.5.

Table 12.5 Integrator options

| nlp.integrator.type | Type | Order |
|---|---|---|
| 'ForwardEuler' | Explicit Euler Method | 1 |
| 'ERK2' | Explicit Runge-Kutta | 2 |
| 'ERK3' | Explicit Runge-Kutta | 3 |
| 'ERK4' (default) | Explicit Runge-Kutta | 4 |
| 'BackwardEuler' | Implicit Euler Method | 1 |
| 'IRK2' | Implicit Runge-Kutta | 2 |
| 'IRK4' | Implicit Runge-Kutta | 4 |

The user must also provide the discretization interval (in seconds) and the number of intermediate shooting nodes per interval. For instance:

codeoptions.nlp.integrator.type = 'ERK2';
codeoptions.nlp.integrator.Ts = 0.01;
codeoptions.nlp.integrator.nodes = 10;

Tip

Usually an explicit integrator such as RK4 should suffice for most applications. If you have stiff systems, or suspect inaccurate integration to be the cause of convergence failure of the NLP solver, consider using the implicit integrators from the table above.

Note

The implicit integrators BackwardEuler, IRK2 and IRK4 currently rely on the CasADi AD tool to work.
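For a stiff system, per the Tip and Note above, a configuration along these lines could be used (the interval and node count are illustrative values, not recommendations):

codeoptions.nlp.integrator.type = 'BackwardEuler'; % implicit; relies on CasADi
codeoptions.nlp.integrator.Ts = 0.005;             % shorter interval for stiff dynamics
codeoptions.nlp.integrator.nodes = 4;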
### 12.2.2. Accuracy requirements

One can modify the termination criteria by altering the KKT tolerances with respect to stationarity, equality constraints, inequality constraints and complementarity conditions, respectively, using the following fields:

% default tolerances
codeoptions.nlp.TolStat = 1E-5; % inf norm tol. on stationarity
codeoptions.nlp.TolEq = 1E-6; % tol. on equality constraints
codeoptions.nlp.TolIneq = 1E-6; % tol. on inequality constraints
codeoptions.nlp.TolComp = 1E-6; % tol. on complementarity

All tolerances are computed using the infinity norm $$\lVert \cdot \rVert_\infty$$.

### 12.2.3. Barrier strategy

The strategy for updating the barrier parameter is set using the field:

codeoptions.nlp.BarrStrat = 'loqo';

It can be set to 'loqo' (default) or to 'monotone'. The default setting often leads to faster convergence, while 'monotone' may help convergence for difficult problems.

### 12.2.4. Hessian approximation

The way the Hessian of the Lagrangian function is computed can be set using the field:

codeoptions.nlp.hessian_approximation = 'bfgs';

FORCES PRO currently supports BFGS updates ('bfgs', the default) and the Gauss-Newton approximation ('gauss-newton'). Exact Hessians will be supported in a future version. Read the subsequent section corresponding to the Hessian approximation method of your choice.

#### 12.2.4.1. BFGS options

When the Hessian is approximated using BFGS updates, the initialization of the estimates can play a critical role in the convergence of the method. The default value is the identity matrix, but the user can modify it using e.g.:

codeoptions.nlp.bfgs_init = diag([0.1, 10, 4]);

Note that BFGS updates are carried out individually per stage in the FORCES NLP solver, so the size of this matrix is the size of the stage variable. Also note that this matrix must be positive definite. When the cost function is positive definite, it often helps to initialize BFGS with the Hessian of the cost function.

This matrix is also used to restart the BFGS estimates whenever the BFGS updates are skipped several times in a row. The maximum number of updates skipped before the approximation is re-initialized is set using:

codeoptions.nlp.max_update_skip = 2;

The default value for max_update_skip is 2.
#### 12.2.4.2. Gauss-Newton options

For problems that have a least-squares objective, i.e. where the cost function can be expressed by a vector-valued function $$r_k : \mathbb{R}^n \rightarrow \mathbb{R}^m$$ that implicitly defines the objective function as:

$f_k(z_k,p_k) = \frac{1}{2} \lVert r_k(z_k,p_k) \rVert_2^2 \,,$

the Gauss-Newton approximation of the Hessian is given by:

$\nabla_{xx}^2 L_k \approx \nabla r_k(z_k,p_k) \nabla r_k(z_k,p_k)^\top$

and can lead to faster convergence and a more reliable method. When this option is selected, the functions $$r_k$$ have to be provided by the user in the field LSobjective. For example, if $$r(z) = (\sqrt{0.2}\, z_1, \sqrt{0.02}\, z_2)^\top$$, i.e. $$f(z) = 0.1 z_1^2 + 0.01 z_2^2$$, then the following code defines the least-squares objective (note that $$r$$ is a vector-valued function):

nlp.objective = @(z) 0.1*z(1)^2 + 0.01*z(2)^2;
nlp.LSobjective = @(z) [sqrt(0.2)*z(1); sqrt(0.02)*z(2)];

Important

The field LSobjective takes precedence over objective, which need not be defined in this case.

When providing your own function evaluations in C, you must populate the Hessian argument with a positive definite Hessian.
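Because LSobjective takes precedence, it can be worth checking numerically that the two fields describe the same cost. A small sketch using the example above (the test point is arbitrary):

z = [1.3; -0.7];            % arbitrary test point
r = nlp.LSobjective(z);
assert(abs(0.5*(r.'*r) - nlp.objective(z)) < 1e-12); % f = 0.5*||r||^2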
### 12.2.5. Line search settings

The line search first computes the maximum step that can be taken while keeping the iterates inside the feasible region (with respect to the inequality constraints). The maximum distance is then scaled back using the following setting:

% default fraction-to-boundary scaling
codeoptions.nlp.ftbr_scaling = 0.9900;

### 12.2.6. Regularization

To avoid ill-conditioned saddle point systems, FORCES employs two different types of regularization: static and dynamic.

#### 12.2.6.1. Static regularization

Static regularization of the augmented Hessian by $$\delta_w I$$, and of the multipliers corresponding to the equality constraints by $$-\delta_c I$$, helps avoid problems with rank deficiency. The constants $$\delta_w$$ and $$\delta_c$$ vary at each iteration according to the following heuristic rule:

$\begin{split}\delta_w & = \eta_w \min(\mu,\lVert c(x) \rVert)^{\beta_w} \cdot (i+1)^{-\gamma_w} + \delta_{w,\min} \\ \delta_c & = \eta_c \min(\mu,\lVert c(x) \rVert)^{\beta_c} \cdot (i+1)^{-\gamma_c} + \delta_{c,\min} \\\end{split}$

where $$\mu$$ is the barrier parameter and $$i$$ is the number of iterations.

This rule has been chosen to accommodate two goals: First, make the regularization dependent on the progress of the algorithm - the closer we are to the optimum, the smaller the regularization should be, in order not to affect the search directions generated close to the solution, promoting superlinear convergence properties. Second, the amount of regularization employed should decrease with the number of iterations to a certain minimum level, at a certain sublinear rate, in order to prevent stalling due to too large a regularization. FORCES NLP does not employ an inertia-correcting linear system solver, and so relies heavily on the parameters of this regularization being chosen carefully.

You can change these parameters by using the following settings:

% default static regularization parameters
codeoptions.nlp.reg_eta_dw = 1E-4;
codeoptions.nlp.reg_beta_dw = 0.8;
codeoptions.nlp.reg_min_dw = 1E-9;
codeoptions.nlp.reg_gamma_dw = 1.0/3.0;

codeoptions.nlp.reg_eta_dc = 1E-4;
codeoptions.nlp.reg_beta_dc = 0.8;
codeoptions.nlp.reg_min_dc = 1E-9;
codeoptions.nlp.reg_gamma_dc = 1.0/3.0;

Note that by choosing $$\delta_w=0$$ and $$\delta_c=0$$, you can turn off the progress- and iteration-dependent regularization and rely on a completely static regularization by $$\delta_{w,\min}$$ and $$\delta_{c,\min}$$, respectively.

#### 12.2.6.2. Dynamic regularization

Dynamic regularization regularizes the matrix on-the-fly to avoid instabilities due to numerical errors. During the factorization of the saddle point matrix, whenever a pivot smaller than $$\epsilon$$ is encountered, it is replaced by $$\delta$$. There are two parameter pairs: $$(\epsilon,\delta)$$ affects the augmented Hessian and $$(\epsilon_2,\delta_2)$$ affects the search direction computation. You can set these parameters by:

% default dynamic regularization parameters
codeoptions.regularize.epsilon = 1E-12; % (for Hessian approx.)
codeoptions.regularize.delta = 4E-6; % (for Hessian approx.)
codeoptions.regularize.epsilon2 = 1E-14; % (for Normal eqs.)
codeoptions.regularize.delta2 = 1E-14; % (for Normal eqs.)
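Conceptually, the rule described above amounts to a guard on each pivot during the factorization; a sketch of the idea (illustrative only, not the actual generated code):

epsilon = 1e-12; delta = 4e-6; % values as in codeoptions.regularize above
pivot = 3e-13;                 % example of a dangerously small pivot
if abs(pivot) < epsilon
    pivot = delta;             % replace it by the lower bound delta
end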
### 12.2.7. Linear system solver

The interior-point method solves a linear system to find a search direction at every iteration. FORCES NLP offers the following three linear solvers:

• 'normal_eqs' (default): Solves the KKT system in normal equations form.

• 'symm_indefinite_fast': Solves the KKT system in augmented / symmetric indefinite form, using regularization and positive definite Cholesky factorizations only.

• 'symm_indefinite': Solves the KKT system in augmented / symmetric indefinite form, using block-indefinite factorizations.

The linear system solver can be selected by setting the following field:

codeoptions.nlp.linear_solver = 'symm_indefinite';

It is recommended to try different linear solvers when experiencing convergence problems. The most stable method is 'symm_indefinite', while the fastest is 'symm_indefinite_fast'.

Note

Independent of the linear system solver choice, the generated code is always library-free and statically allocated, i.e. it can be embedded anywhere.

The 'normal_eqs' solver is the standard FORCES linear system solver, based on a full reduction of the KKT system (the so-called normal equations form). It works well for standard problems, especially convex problems or nonlinear problems where the BFGS or Gauss-Newton approximations of the Hessian are numerically sufficiently well conditioned.

The 'symm_indefinite' solver is the most robust solver, while still being fast. It is based on block-wise factorization of the symmetric indefinite form of the KKT system (the so-called augmented form). Each block is handled by a symmetric indefinite LDL factorization, with (modified) on-the-fly Bunch-Kaufmann permutations leading to bounded lower triangular factors for the highest numerical stability. This is our most robust linear system solver, with only a modest performance penalty (about 30% compared to 'symm_indefinite_fast').

The 'symm_indefinite_fast' solver is robust, but even faster. It is based on block-wise factorization of the symmetric indefinite KKT matrix, where each block is handled by a Cholesky factorization, and it uses regularization to increase numerical stability. It is currently only used for receding-horizon/MPC-like problems where the dimensions of all stages are equal (except for the first and last stage, which are handled separately). It is more robust and faster than the normal equations form, and is likely to become the default option in the future.

### 12.2.8. Automatic differentiation tool

If external functions and derivatives are not provided directly as C code by the user, FORCES PRO makes use of an automatic differentiation (AD) tool to generate efficient C code for all the functions (and their derivatives) inside the problem formulation. Currently, three different AD tools are supported; they can be chosen by means of the setting nlp.ad_tool as summarized in Table 12.6.

Table 12.6 Automatic differentiation tool options

| nlp.ad_tool | Tool |
|---|---|
| 'casadi' | CasADi |
| 'casadi-351' | CasADi 3.5.1 |
| 'symbolic-math-tbx' | MathWorks Symbolic Math Toolbox |

Note that the MathWorks Symbolic Math Toolbox requires an additional license, which is why the default option is set to:

codeoptions.nlp.ad_tool = 'casadi';

Also note that the implicit integrators BackwardEuler, IRK2 and IRK4 (see Section 12.2.1) currently still rely on using the CasADi AD tool.

### 12.2.9. Safety checks

By default, the output of the function evaluations is checked for the presence of NaNs or INFs in order to diagnose potential initialization problems. In order to speed up the solver, one can remove these checks by setting:

codeoptions.nlp.checkFunctions = 0;

## 12.3. Convex branch-and-bound options

The settings of the FORCES PRO mixed-integer branch-and-bound convex solver are accessed through the codeoptions.mip struct. It is worthwhile to explore different values for the settings in Table 12.7, as they might have a severe impact on the performance of the branch-and-bound procedure.

Note

All the options described below are currently not available with the FORCES PRO nonlinear solver. For mixed-integer nonlinear programs and the available options, please have a look at the paragraph Mixed-integer nonlinear solver.

Table 12.7 Branch-and-bound options

| Setting | Values | Default |
|---|---|---|
| mip.timeout | Any value $$\geq 0$$ | 31536000 (1 year) |
| mip.mipgap | Any value $$\geq 0$$ | 0 |
| mip.branchon | 'mostAmbiguous', 'leastAmbiguous' | 'mostAmbiguous' |
| mip.stageinorder | 0 (OFF), 1 (ON) | 1 (ON) |
| mip.explore | 'bestFirst', 'depthFirst' | 'bestFirst' |
| mip.inttol | Any value $$> 0$$ | 1E-5 |
| mip.queuesize | Any integer value $$\geq 0$$ | 1000 |

A description of each setting is given below:

• mip.timeout: Timeout in seconds, after which the search is stopped and the best solution found so far is returned.

• mip.mipgap: Relative sub-optimality after which the search shall be terminated. For example, a value of 0.01 will search for a feasible solution that is at most 1% suboptimal. Set to zero if the optimal solution is required.

• mip.branchon: Determines which variable to branch on after having solved the relaxed problem. Options are 'mostAmbiguous' (i.e. the variable closest to 0.5) or 'leastAmbiguous' (i.e. the variable closest to 0 or 1).

• mip.stageinorder: Stage-in-order heuristic: for the branching, determines whether to fix variables in order of the stage number, i.e. all variables of stage $$i$$ will be fixed before fixing any of the variables of stage $$i+1$$. This is often helpful in multistage problems where a timeout is expected to occur and it is important to fix the early stages first (for example MPC problems). Options are 0 for OFF and 1 for ON.

• mip.explore: Determines the exploration strategy when selecting pending nodes. Options are 'bestFirst', which chooses the node with the lowest lower bound from all pending nodes, or 'depthFirst', which prioritizes nodes with the highest number of fixed binaries in order to quickly reach a leaf node.
• mip.inttol: Integer tolerance for identifying binary solutions of relaxed problems. A solution of a relaxed problem with variable values that are within inttol of binary will be declared binary.

• mip.queuesize: Maximum number of pending nodes that the branch-and-bound solver can store. If that number is exceeded during the search, the solver quits with an exitflag value of -2 and returns the best solution found so far.

## 12.4. Solve methods

The primal-dual interior-point method is used as the default optimization method. Several other methods are available. To change the solve method, set the solvemethod field as outlined in Table 12.8.

Table 12.8 Solve methods

| solvemethod | Method | Description |
|---|---|---|
| 'PDIP' (default) | Primal-Dual Interior-Point Method | The Primal-Dual Interior-Point Method is a stable and robust method for most problems. |
| 'ADMM' | Alternating Direction Method of Multipliers | For some problems, ADMM may be faster. The method variant and several algorithm parameters can be tuned in order to improve performance. |
| 'DFG' | Dual Fast Gradient Method | For some problems with simple constraints, our implementation of the dual fast gradient method can be the fastest option. No parameters need to be tuned in this method. |
| 'FG' | Primal Fast Gradient Method | For problems with no equality constraints (only one stage) and simple constraints, the primal fast gradient method can give medium accuracy solutions extremely quickly. The method has several tuning parameters that can significantly affect the performance. |

### 12.4.1. Primal-Dual Interior-Point Method

The Primal-Dual Interior-Point Method is the default optimization method. It is a stable and robust method for most problems.

#### 12.4.1.1. Solver Initialization

The performance of the solver can be influenced by the way the variables are initialized. The default method (cold start) should work extremely reliably in most cases, so in general there should be no need to try other methods unless you are experiencing problems with the default initialization scheme. To change the method of initialization in FORCES PRO, set the field init to one of the values in Table 12.9.

Table 12.9 PDIP solver initialization

| init | Method | Initialization method |
|---|---|---|
| 0 (default) | Cold start | Set all primal variables to $$0$$, and all dual variables to the square root of the initial complementarity gap $$\mu_0$$: $$z_i=0, s_i=\sqrt{\mu_0}, \lambda_i=\sqrt{\mu_0}$$. The default value is $$\mu_0=10^6$$. |
| 1 | Centered start | Set all primal variables to zero, the slacks to the RHS of the corresponding inequality, and the Lagrange multipliers associated with the inequalities such that the pairwise product between slacks and multipliers equals the parameter $$\mu_0$$: $$z_i=0, s_i=b_{\mathrm{ineq}}$$ and $$s_i \lambda_i = \mu_0$$. |
| 2 | Primal warm start | Set all primal variables as provided by the user. Calculate the residuals and set the slacks to the residuals if they are sufficiently positive (larger than $$10^{-4}$$), or to one otherwise. Compute the associated Lagrange multipliers such that $$s_i \lambda_i = \mu_0$$. |

#### 12.4.1.2. Initial Complementary Slackness

The default value for $$\mu_0$$ is $$10^6$$. To use a different value, set:

codeoptions.mu0 = 10;
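For instance, the two settings above can be combined for a warm-started run (values illustrative; the primal guess itself is supplied separately through the solver's run-time inputs):

codeoptions.init = 2; % primal warm start, see Table 12.9
codeoptions.mu0 = 10; % smaller initial complementarity gap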
#### 12.4.1.3. Accuracy Requirements

The accuracy for which FORCES PRO returns the OPTIMAL flag can be set as follows:

codeoptions.accuracy.ineq = 1e-6; % infinity norm of residual for inequalities
codeoptions.accuracy.eq = 1e-6; % infinity norm of residual for equalities
codeoptions.accuracy.mu = 1e-6; % absolute duality gap
codeoptions.accuracy.rdgap = 1e-4; % relative duality gap := (pobj-dobj)/pobj

#### 12.4.1.4. Line Search Settings

If FORCES PRO experiences convergence difficulties, you can try selecting different line search parameters. The first two parameters of codeoptions.linesearch, factor_aff and factor_cc, are the backtracking factors for the line search (if the current step length is infeasible, it is reduced by multiplication with these factors) for the affine and combined search directions, respectively.

codeoptions.linesearch.factor_aff = 0.9;
codeoptions.linesearch.factor_cc = 0.95;

The remaining two parameters of the field linesearch determine the minimum (minstep) and maximum step size (maxstep). Choosing minstep too high will cause the generated solver to quit with an exitcode saying that the line search has failed, i.e. no progress could be made along the computed search direction. Choosing maxstep too close to 1 is likely to cause numerical issues, but choosing it too conservatively (too low) is likely to increase the number of iterations needed to solve a problem.

codeoptions.linesearch.minstep = 1e-8;
codeoptions.linesearch.maxstep = 0.995;

#### 12.4.1.5. Regularization

During the factorization of supposedly positive definite matrices, FORCES PRO applies a regularization to the $$i$$-th pivot element if it is smaller than $$\epsilon$$. In this case, it is set to $$\delta$$, which is the lower bound on the pivot element that FORCES PRO allows to occur.

codeoptions.regularize.epsilon = 1e-13; % if pivot element < epsilon ...
codeoptions.regularize.delta = 1e-8; % then set it to delta

#### 12.4.1.6. Multicore parallelization

FORCES PRO supports computation on multiple cores, which is particularly useful for large problems and long horizons (the workload is split along the horizon across multiple cores). This is implemented by means of OpenMP and can be switched on by using:

codeoptions.parallel = 1;

By default, multicore computation is switched off.

### 12.4.2. Alternating Direction Method of Multipliers

FORCES PRO implements several optimization methods based on the ADMM framework. Different variants can handle different types of constraints, and FORCES PRO will automatically choose an ADMM variant that can handle the constraints in a given problem. To manually choose a specific method in FORCES PRO, use the ADMMvariant field of codeoptions:

codeoptions.ADMMvariant = 1; % can be 1 or 2

where variant 1 is as follows:

\begin{align*} \text{minimize} \quad & \frac{1}{2} y^\top H y + f^\top y \\ \text{subject to} \quad & Dy=c \\ & \underline{z} \leq z \leq \bar{z} \\ & y = z \end{align*}

and variant 2 is as follows:

\begin{align*} \text{minimize} \quad & \frac{1}{2} y^\top H y + f^\top y \\ \text{subject to} \quad & Dy=c \\ & A y = z \\ & z \leq b \end{align*}
#### 12.4.2.1. Accuracy requirements

The accuracy for which FORCES PRO returns the OPTIMAL flag can be set as follows:

codeoptions.accuracy.consensus = 1e-3; % infinity norm of the consensus equality
codeoptions.accuracy.dres = 1e-3; % infinity norm of the dual residual

Note that, in contrast to primal-dual interior-point methods, the required number of ADMM iterations varies very significantly with the requested accuracy. ADMM typically requires few iterations to compute medium accuracy solutions, but many more iterations to achieve the same accuracy as interior-point methods. For feedback applications, medium accuracy solutions are typically sufficient. Also note that the ADMM accuracy requirements have to be changed depending on the problem scaling.

#### 12.4.2.2. Method parameters

ADMM uses a regularization parameter $$\rho$$, which also acts as the step size in the gradient step. The convergence speed of ADMM is highly sensitive to the parameter $$\rho$$. Its value should satisfy $$\rho > 0$$. This parameter can be tuned using the following command:

codeoptions.ADMMrho = 1;

In some cases it may be possible to let FORCES PRO choose the value of $$\rho$$ automatically. To enable this feature, set:

codeoptions.ADMMautorho = 1;

Please note that this does not guarantee that the choice of $$\rho$$ will be optimal.

ADMM can also include an 'over-relaxation' step that can improve the convergence speed. This step is typically useful for problems where ADMM exhibits very slow convergence, and can be tuned using the parameter $$\alpha$$. Its value should satisfy $$1 \leq \alpha \leq 2$$. This step is tuned using the following command:

codeoptions.ADMMalpha = 1;

#### 12.4.2.3. Precomputations

For problems with time-invariant data, FORCES PRO can compute full matrix inverses at code generation time and then implement matrix solves online by dense matrix-vector multiplication. In some cases, especially when the prediction horizon is long, it may be better to factorize the matrix and implement matrix solves using forward and backward solves with the pre-computed factors. To manually switch on this option, use the ADMMfactorize field of codeoptions:

codeoptions.ADMMfactorize = 0;

When the data is time-varying, or when the prediction horizon is longer than 15 steps, FORCES PRO automatically switches to a factorization-based method.
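Putting the ADMM settings of this section together, a sketch (all values are illustrative and problem-dependent):

codeoptions.solvemethod = 'ADMM';
codeoptions.ADMMvariant = 1;   % box-constrained variant
codeoptions.ADMMrho = 2;       % rho > 0
codeoptions.ADMMalpha = 1.5;   % over-relaxation, 1 <= alpha <= 2
codeoptions.ADMMfactorize = 1; % use factorization-based solves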
### 12.4.3. Dual Fast Gradient Method

For some problems with simple constraints, our implementation of the dual fast gradient method can be the fastest option. No parameters need to be tuned in this method.

### 12.4.4. Primal Fast Gradient Method

For problems with no equality constraints (only one stage) and simple constraints, the primal fast gradient method can give medium accuracy solutions extremely quickly. The method has several tuning parameters that can significantly affect the performance.

#### 12.4.4.1. Accuracy requirements

The accuracy for which FORCES PRO returns the OPTIMAL flag can be set as follows:

codeoptions.accuracy.gmap = 1e-5; % infinity norm of the gradient map

The gradient map is related to the difference with respect to the optimal objective value. Just like with other first-order methods, the required number of FG iterations varies very significantly with the requested accuracy. Medium accuracy solutions can typically be computed very quickly, but many iterations are needed to achieve the same accuracy as with interior-point methods.

#### 12.4.4.2. Method parameters

The user has to determine the step size of the fast gradient method. The convergence speed of FG is highly sensitive to this parameter, which should typically be set to one over the maximum eigenvalue of the quadratic cost function. This parameter can be tuned using the following command:

codeoptions.FGstep = 1/1000;

In some cases it may be possible to let FORCES PRO choose the step size automatically. To enable this feature, set:

codeoptions.FGautostep = 1;

#### 12.4.4.3. Warm starting

The performance of the fast gradient method can be greatly influenced by the way the variables are initialized. Unlike interior-point methods, fast gradient methods can be very efficiently warm started with a good guess for the optimal solution. To enable this feature, set:

codeoptions.warmstart = 1;

When the user turns warm start on, a new parameter z_init_0 is automatically added. The user should set it to a good guess for the solution, which is typically available when solving a sequence of problems.
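At run time the warm start is supplied through that parameter; a sketch (the problem struct usage is illustrative):

codeoptions.warmstart = 1;   % adds the parameter z_init_0
% ... generate the solver, then before each solve:
problem.z_init_0 = prev_sol; % e.g. the optimizer from the previous problem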
https://www.diagramelectric.co/formula-for-finding-total-resistance-in-parallel-circuit/
[ "# Formula For Finding Total Resistance In Parallel Circuit\n\nThe formula for finding the total resistance in a parallel circuit is: ``` Rtotal = 1 / (1 / R1 + 1 / R2 + ... + 1 / Rn) ``` where: *\n\n### Rtotal\n\nis the total resistance of the circuit in ohms (Ω). *\n\n,\n\n, ...,\n\n### Rn\n\nare the resistances of the individual resistors in the circuit in ohms (Ω). To use this formula, simply add the reciprocals of the individual resistances and then take the reciprocal of the resulting sum. For example, if you have a circuit with three resistors with resistances of 2 Ω, 4 Ω, and 6 Ω, the total resistance would be: ``` Rtotal = 1 / (1 / 2 Ω + 1 / 4 Ω + 1 / 6 Ω) = 1 / (0.5 Ω + 0.25 Ω + 0.167 Ω) = 1 / 0.917 Ω = 1.09 Ω ```\n\nThe total resistance of a parallel circuit is always less than the resistance of any of the individual resistors in the circuit. This is because when resistors are connected in parallel, the current through each resistor is the same, but the voltage across each resistor is different. The total voltage across the circuit is the same as the voltage across any of the individual resistors.\n\nThe formula for finding the total resistance in a parallel circuit can be used to calculate the resistance of a circuit when you know the resistance of the individual resistors. It can also be used to determine the current through a circuit when you know the total resistance and the voltage across the circuit.\n\nParallel circuits are used in a variety of applications, including electrical wiring, electronic circuits, and automotive systems. They are often used to increase the current capacity of a circuit or to reduce the voltage drop across a circuit.\n\nHere are some additional tips for finding the total resistance in a parallel circuit: * If you have a circuit with only two resistors, you can use the following formula to find the total resistance: ``` Rtotal = R1 * R2 / (R1 + R2) ``` * If you have a circuit with a large number of resistors, you can use a spreadsheet or a graphing calculator to find the total resistance. 
* If you are unsure of the resistance of any of the resistors in a circuit, you can measure it with a multimeter.

By understanding the formula for finding the total resistance in a parallel circuit, you can calculate the resistance of any circuit and design circuits that meet your specific needs.
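The reciprocal-sum rule is also easy to script; a minimal sketch in MATLAB (values from the worked example above):

```
R = [2 4 6];             % resistances in ohms
Rtotal = 1 / sum(1 ./ R) % returns approx. 1.09 ohms, as computed above
```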
[ null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201%201'%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8348064,"math_prob":0.99280965,"size":3349,"snap":"2023-40-2023-50","text_gpt3_token_len":768,"char_repetition_ratio":0.22242153,"word_repetition_ratio":0.12776831,"special_character_ratio":0.22454464,"punctuation_ratio":0.07628524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995809,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-07T06:18:14Z\",\"WARC-Record-ID\":\"<urn:uuid:a2a3719c-c9e1-4bb0-b4fa-ad45702e2e58>\",\"Content-Length\":\"65351\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47e87491-7495-462f-a9ae-750c9518c1a5>\",\"WARC-Concurrent-To\":\"<urn:uuid:de5edbd8-1f13-4c94-aecc-1d18de993b5b>\",\"WARC-IP-Address\":\"172.67.175.41\",\"WARC-Target-URI\":\"https://www.diagramelectric.co/formula-for-finding-total-resistance-in-parallel-circuit/\",\"WARC-Payload-Digest\":\"sha1:ZDESCXQMVVCQSPMH24WTCNUW46FQRCUH\",\"WARC-Block-Digest\":\"sha1:TCN6SQ5SLRGO7EVM2WZVFAXH63CQRJY5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100650.21_warc_CC-MAIN-20231207054219-20231207084219-00314.warc.gz\"}"}
https://physics.stackexchange.com/questions/92058/is-the-assumption-that-the-two-reference-frames-be-inertial-required-in-the-deri
[ "# Is the assumption that the two reference frames be inertial required in the derivation of transformation equations?\n\nIn the derivation of Galilean transformations the only assumption is that the two frames are moving with some uniform relative velocity $u$.\n\nSuppose with respect to some inertial frame $O$ the two frames $S$ and $S'$ are moving with the same uniform acceleration $a$.\n\nLet $V$ be the velocity of $S$ w.r.t. $O$. Similarly, let $V'$ be the velocity of $S'$ w.r.t. $O$. Furthermore, let $V_0' - V_0 = u$ (const.). Then\n\n$$V = V_0 + at$$ $$V' = V_0' + at$$\n\nThen the relative velocity is $V' - V = u$.\n\nThis is the only result required in deriving the Galilean transformation. So why do people assume that the reference frames be inertial. (I know the point is so that Newton's laws would be valid, but exclusively in the derivation of the transformation equation is this assumption needed?) The same applies in the derivation of Lorentz transformation.\n\n## 2 Answers\n\nI would write a comment but i don't have the privilege so am writing an answer\n\n\"In the derivation of Galilean transformations the only assumption is that the two frames are moving with some uniform relative velocity u.\"\n\nHint: \"In the derivation of Galilean transformations the only assumption is that the two frames are moving with respect to each other with some uniform relative velocity u.\"\nRead this sentecse carefully.\nPS: Since you have marched towards relativity remember every measurement is taken w.r.t some frame of reference(coordinate system). $S$ and $S^{'}$ are inertial w.r.t each other but are non-inertial w.r.t $O$.\n\nWhat do you mean that's the only result required? Are you referring to the assumption that both frames have the same acceleration, that the relative velocity is constant, or what?\n\nThat's true if both frames are accelerating at a uniform velocity - your coordinate transformations would be the same even if Newtonian physical laws don't hold. If they have two separate accelerations then the law \"$\\mathbf{V'}-\\mathbf{V}=u$\" (where $u$ is independent of time) holds, and your equation for $v'$ in the frame of $S$ will look Galilean.\n\nIn the case of two frames with identical accelerations, by measuring the velocity of the other frame and not taking into account any physical observations, yes you can derive the Galilean transformations. But it's easy to construct situations where this is not true. If the accelerations of the frames are different you don't get a galilean transformation, and if the frames have angular velocity (that is, if they are rotating reference frames) things would get even weirder. (In that case I'm not sure what an interesting question to ask would be!)\n\nThe special relativistic version of this would be different. One issue is usually phrased as: If you tie a string between two spaceships accelerating uniformly (with the same acceleration) separated by some distance, the string will break. (Bell's Spaceship Paradox) Since in one frame the other spaceship would look as if it accelerated away, clearly you can't have events transform linearly from one frame to the other, so the Lorentz transform won't hold. I don't know if there's some configuration of reference frames and accelerations that would allow this to hold - that would be something to prove, and you can ask it on stackexchange after phrasing the question precisely! (and giving it a shot yourself. 
I'd phrase it as something like: "Given frames $S_1$ and $S_2$, with positions $X_1(t)$ and $X_2(t)$, and with $X_1(0)=X_2(0)$ in an inertial reference frame $O$, what restrictions must be placed on $X_1$ and $X_2$ so that events transform in a Lorentz-like way from one frame to another? More specifically, must they always have zero second derivative?" [To avoid lack of clarity, when I say "frame" here I mean "instantaneous rest frame".])
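To make the questioner's point explicit, one can write out the positions of $S$ and $S'$ in $O$ (this simply restates the setup above; $X_0$ and $X_0'$ denote the initial positions, notation introduced here):

$$X_S(t) = X_0 + V_0 t + \tfrac{1}{2}at^2, \qquad X_{S'}(t) = X_0' + V_0' t + \tfrac{1}{2}at^2$$

$$X_{S'}(t) - X_S(t) = (X_0' - X_0) + ut$$

The separation grows linearly in $t$, exactly as for two inertial frames, which is why the resulting transformation takes the Galilean form.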
https://www.tutorialspoint.com/java_cryptography/java_cryptography_keys.htm
[ "# Java Cryptography - Keys\n\nAdvertisements\n\nA cryptosystem is an implementation of cryptographic techniques and their accompanying infrastructure to provide information security services. A cryptosystem is also referred to as a cipher system.\n\nThe various components of a basic cryptosystem are Plaintext, Encryption Algorithm, Ciphertext, Decryption Algorithm, Encryption Key and, Decryption Key.\n\nWhere,\n\n• Encryption Key is a value that is known to the sender. The sender inputs the encryption key into the encryption algorithm along with the plaintext in order to compute the cipher text.\n\n• Decryption Key is a value that is known to the receiver. The decryption key is related to the encryption key, but is not always identical to it. The receiver inputs the decryption key into the decryption algorithm along with the cipher text in order to compute the plaintext.\n\nFundamentally there are two types of keys/cryptosystems based on the type of encryption-decryption algorithms.\n\n## Symmetric Key Encryption\n\nThe encryption process where same keys are used for encrypting and decrypting the information is known as Symmetric Key Encryption.\n\nThe study of symmetric cryptosystems is referred to as symmetric cryptography. Symmetric cryptosystems are also sometimes referred to as secret key cryptosystems.\n\nFollowing are a few common examples of symmetric key encryption −\n\n• Digital Encryption Standard (DES)\n• Triple-DES (3DES)\n• IDEA\n• BLOWFISH\n\n## Asymmetric Key Encryption\n\nThe encryption process where different keys are used for encrypting and decrypting the information is known as Asymmetric Key Encryption. Though the keys are different, they are mathematically related and hence, retrieving the plaintext by decrypting cipher text is feasible.\n\nAdvertisements" ]
https://rational-equations.com/rational-equations/algebra-formulas/solve-algebra-equation.html
Algebra Tutorials!

### Our users:

I bought Algebrator last year, and now its helping me with my 9th Grade Algebra class, I really like the step by step solving of equations, it's just GREAT!
Kathleen Becker, PA

To be honest I was a little skeptical at first about how easy Algebrator would be. But it really is the easiest program to get up and running. I was learning algebra within minutes of downloading the software.
Sonya Johnson, TX

Graduating high-school, I was one of the best math students in my class. Entering into college was humbling because suddenly I was barely average. So, my parents helped me pick out Algebrator and, within weeks, I was back again. Your program is not only great for beginners, like my younger brothers in high-school, but it helped me, as a new college student!
Patrick Ocean, FL

Vow! What a teacher! Thanks for making Algebra easy! The software provides amazing ways to deal with complex problems. Anyone caught up there and finds it hard to solve out must buy a copy.
You'll get a great tool at a reasonable price.
Alex Martin, NH

I think this program is one of the most useful learning tools I have purchased (and believe me, I purchased a lot!), It's easy for us as parents- to work with, it saves a lot of our children's precious time.
M.H., Georgia

### Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?

#### Search phrases used on 2013-11-29:

• percent, fraction, decimal problem solving worksheets
• square root in simplified radical form
• 5th grade multiplication of decimals worksheet
• balanced chemical equation between base and acid
• convert a decimal to mixed fraction
• online answer key for Glencoe Algebra 2
• completing complex squares formula on the calculator
• second-order differential runge-kutta
• how do you add fractions?
• 5th grade convert decimal into a fraction
• teaching algebra
• what is the difference between an equation and a expression
• Algebra Intermedia citrus college
• easy way simultaneous equation
• GETTING A+BI FROM A QUADRATIC EQUATION
• student long division math steps flow chart
• Dividing Rational Expression fractions calculator
• parabola equation algebra
• how to algebraic expressions of limits
• multiply and divide integers
• online trig calculator
• multiplying and dividing decimals online lesson
• mathquizes to solve on line
• lowest common denominator worksheet
• green globs graphing equations free trial
• common denominator with variables
• free online large equation calculator
• printable school papers 1st grade
• convert exponential to decimal excel
• comparing fractions calculator
• elementary algebra tutorial or review
• pizzazz worksheets
• combining like terms then naming polynomials worksheet
• positive negative numbers adding subtracting powerpoint presentation
• notes for permutations and combinations middle school
• McDougal Little Algebra 2
• multiplying tenths whole numbers worksheets
• greatest common factor finder
• review mathematics grade 10 worksheets
• solving nonhomogeneous differential equation
• Scolastic Math
• quadratic formula in calculator TI84
• solving equations by completing the square generator
• multiplacation charts
• formulae worksheets in maths
• tricky math questions for grade 2
• ti-89 laplace
• root of third order Polynomial expression
• direct and inverse proportions in algebra
• whole numbers to decimal
• Fraction formulas
• Free Rational Equation Solver
• Online Polynomial solver
• basic fractions
• matlab code solve determinant with variables
• powerpoint rational exponents algebra
• holt rinehart winston practical mathematics third edition answers
• Conceptual Physics third edition answers
• compound inequalities games
• year 8 maths papers to do online
• TI-89 multiple equations
• McDougal Littell Algebra 2 Florida Edition
• trigonomic calculators
• Abstract algebra HW solutions Gallian
• multiplying fractions with negatives worksheets
• adding and subtracting negative and positive number worksheets
• ellipse equation solver
• free clep guide
• multiplication lattice worksheets
• ALGEGBRA HELP
• rules for adding square roots
[ null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null, "https://rational-equations.com/images/left_bullet.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.97998667,"math_prob":0.96253186,"size":1257,"snap":"2021-43-2021-49","text_gpt3_token_len":290,"char_repetition_ratio":0.08619314,"word_repetition_ratio":0.0,"special_character_ratio":0.21559268,"punctuation_ratio":0.13584906,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99879616,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-04T22:43:01Z\",\"WARC-Record-ID\":\"<urn:uuid:3ab6d60d-f564-4e5b-9eab-edf1411b3971>\",\"Content-Length\":\"92061\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cf1ab53f-2df4-4d33-8d21-99901dfaeb89>\",\"WARC-Concurrent-To\":\"<urn:uuid:91879ea3-0ac0-48ab-9554-659f66381bfd>\",\"WARC-IP-Address\":\"54.197.228.212\",\"WARC-Target-URI\":\"https://rational-equations.com/rational-equations/algebra-formulas/solve-algebra-equation.html\",\"WARC-Payload-Digest\":\"sha1:LTSDISWOLM4WVTLLLMKSY3OHM52SKN3J\",\"WARC-Block-Digest\":\"sha1:EYSB3DU77XCIBMEZVYVQ56RXM6VQMAFW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363125.46_warc_CC-MAIN-20211204215252-20211205005252-00096.warc.gz\"}"}
http://ballistipedia.com/index.php?title=Ballistic_Accuracy_Classification&oldid=1431
[ "# Ballistic Accuracy Classification\n\n(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)\nJump to: navigation, search\n\n## Introduction\n\nBallistic Accuracy Classification™ is a mathematically rigorous system for describing and understanding the precision of ballistic tools like rifles. It provides information that is:\n\n• Useful and easy for consumers to understand\n• Straightforward for enthusiasts and builders to calculate\n• Statistically sound enough for experts to validate\n\nFor more background see Why BAC.\n\nAll firearms and components can be assigned a Ballistic Accuracy Class™ (BAC™), which fully characterizes their accuracy potential. Lower numbers are better. In practice, the lowest possible BAC is 1. (A theoretically perfect rifle system that always puts every shot through the same hole would be BAC 0.)\n\nBehind the scenes, BAC is defined by a single statistical parameter known as sigma (σ). Using the associated statistical model (known as the Rayleigh distribution), statisticians can not only determine the BAC for a particular gun or component, but also compute its expected shooting precision.\n\nBAC™ Sigma (σ) Typical examples Shots within\n¼ MOA radius\nShots within\n½ MOA radius\nClass 1 < 0.1MOA Rail guns 96% 100%\nClass 2 < 0.2MOA Benchrest guns 54% 96%\nClass 3 < 0.3MOA Mil-spec for PSR 29% 75%\nClass 4 < 0.4MOA Competitive auto-loaders 18% 54%\nClass 5 < 0.5MOA Mil-spec for M110 and M24 12% 39%\nClass 6 < 0.6MOA Mil-spec for infantry rifles and ammo 8% 29%\n\nWe can also generate the expected values of more familiar measures, like the extreme spread of a 3- or 5-shot group:\n\nBAC™ 5-shot Groups\n⌀ < 1MOA\nMedian 5-shot\nGroup Spread\n3-shot Groups\n⌀ < 1MOA\nMedian 3-shot\nGroup Spread\nClass 1 100% 0.3MOA 100% 0.2MOA\nClass 2 98% 0.6MOA 99% 0.5MOA\nClass 3 65% 0.9MOA 85% 0.7MOA\nClass 4 26% 1.2MOA 57% 0.9MOA\nClass 5 9% 1.5MOA 35% 1.2MOA\nClass 6 3% 1.8MOA 21% 1.4MOA\n\n### Understanding MOA\n\nAccuracy is described in angular terms, most commonly using the unit \"Minute of Arc\" (MOA). One arc minute spans 1.047\" at 100 yards. Rifle shooters often practice on 100 yard targets, and so they often think in terms of how wide their groups are at 100 yards. People often just round it off and think of 1 MOA as \"one inch at 100 yards.\"\n\nIn the absence of an atmosphere, the angular precision measured at one distance would be valid at all other distances. I.e., a 1\" group at 100 yards would measure 5\" at 500 yards. However, in reality the effects of wind and drag (which, together with gravity, accentuates variations in muzzle velocity) will only increase the angular spread of ballistic groups as distance increases. Therefore, in practice one should expect worse-than-advertised accuracy when shooting at longer distances. The distance at which atmosphere begins to significantly affect precision depends on a bullet’s muzzle velocity and ballistic coefficient. For high-power rifles this is usually beyond 100 yards. Guns shooting subsonic projectiles can begin to suffer after just 25 yards.\n\n### Nomenclature\n\nBAC™ is only meaningful when it conforms to established terms and conventions. BAC must be determined by testing in accordance with the BAC Protocol described in this document.\n\nBallistic Accuracy Classification™ must be supported by the following descriptive parameters:\n\n1. Product tested. (Any component, or group of components, associated with accuracy can be tested.)\n2. Configuration tested. 
### Nomenclature

BAC™ is only meaningful when it conforms to established terms and conventions. BAC must be determined by testing in accordance with the BAC Protocol described in this document.

Ballistic Accuracy Classification™ must be supported by the following descriptive parameters:

1. Product tested. (Any component, or group of components, associated with accuracy can be tested.)
2. Configuration tested. This must include the following details:
   1. Barrel length, material, profile, rifling. (E.g., 20" stainless 1" bull contour with 6-land 1:10"-twist cut rifling.)
   2. Receiver, action, and feed mechanism. (E.g., AR-10 magazine-fed semi-automatic.)
   3. Ammunition.
      1. If commercial, this must include brand, model, and lot. (E.g., Federal GM308M Lot#214374H077.)
      2. If custom, this must list components and load formula. (E.g., Lapua .308 full-sized brass, WLR primer, 168gr SMK, 44gr Varget, 2.800" COAL.)
3. Confidence in BAC. When not conspicuously mentioned, it is assumed that the upper 90% confidence value of sigma is referenced for BAC.

BAC measures are intentionally kept somewhat coarse. Care should be taken to avoid suggesting more than two significant digits of precision. For example, even after shooting 20 rounds through a gun, the 80% confidence interval on the precision estimate typically spans 0.9-1.2 times the estimated value.

### Trademarks

The following terms are trademarks of Scribe Logistics LLC. They are free to use so long as their use complies with the Nomenclature and Protocol outlined here.

• Ballistic Accuracy Classification™
• Ballistic Accuracy Class™
• BAC™

The trademarks are claimed solely for the purpose of maintaining the integrity of the system and avoiding market confusion.

## Theory

### Statistical Model

BAC™ assumes that the impacts of ballistic shots on a target are normally distributed with the same variance along any axis. (Empirical data validate this assumption, and it should be true as long as atmospheric effects are negligible.) Therefore, we use the Rayleigh distribution to model the radius r, or dispersion of each shot, from the center of impact. When the coordinates of the shots have independent $$N(0,\sigma)$$ distributions along orthogonal axes, the radius of each shot is described by the Rayleigh probability density function:

$$f(r,\sigma)=\frac{r}{\sigma^2}e^{-r^2/2\sigma^2}$$

The unbiased estimator for the parameter σ comes from $$\widehat{\sigma^2} = \frac{\sum r_i^2}{2(n-1)}$$, with confidence $$\widehat{\sigma^2} \sim \frac{\sum r_i^2}{\chi_{2n-2}^2}$$.

### Simulation

Monte Carlo simulation is adequate for studying and characterizing precision. In fact, many of the results associated with BAC, like the distribution of the extreme spread of a particular number of shots, can only be produced through simulation.

For simulation purposes random shots should be generated as (x, y) coordinates, where $$X,Y \sim N(0,\sigma)$$. It is critical to "forget" the known center when using simulated data. When shooting a real gun we never get to know the true center of impact, and instead have to use the sample center. Likewise, Monte Carlo simulations must not reference the known 0 center, and should instead only reference the sample center of whatever group size is being studied.
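To make the simulation recipe concrete, here is a minimal Monte Carlo sketch (our own illustration, assuming NumPy is available; the function name is not from the article). Note that extreme spread is built from pairwise distances only, so the known (0, 0) center is never referenced:

```python
import numpy as np

def median_extreme_spread(sigma, shots=5, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    # X, Y ~ N(0, sigma) for every shot in every simulated group
    pts = rng.normal(0.0, sigma, size=(trials, shots, 2))
    # extreme spread = largest pairwise distance within a group
    diff = pts[:, :, None, :] - pts[:, None, :, :]
    spreads = np.linalg.norm(diff, axis=-1).max(axis=(1, 2))
    return np.median(spreads)

# For Class 1 (sigma = 0.1 MOA) this should land near the 0.3 MOA
# median 5-shot group spread quoted in the table above.
print(median_extreme_spread(0.1, shots=5))
```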
## Protocol

The Ballistic Accuracy Classification™ shall be the upper bound of the 90% confidence range on estimated sigma, in units of MOA, multiplied by 10 and rounded to the nearest integer. For example, if the 90% confidence value for sigma on a tested gun is 0.47MOA, then the BAC value is 10 ✕ 0.47 = 4.7, rounded = 5. I.e., in this example we are saying with 90% confidence that the tested gun's accuracy is no worse than Class 5.

### Classifying a Specimen

It is not realistic to assign a BAC with fewer than 10 shots. The 90% confidence range with just 10 shots will typically extend to 1.4 times the estimated accuracy value. It will typically take 20 shots to get the outside bound of the 90% confidence interval to within 20% of the estimated value.

You must not discard data points during testing except for an unrelated failure. (E.g., if you are testing a barrel and you encounter a squib load, that shot may be excluded. But "fliers" should not generally be excluded.)

Data: target distance, and (x, y) coordinates of the center of each shot impact. All shots with the same point of aim must be grouped, but multiple groups can be used.

Calculations: the formulas to transform these data into a confidence interval for sigma, and the corresponding BAC, are shown in Media:BallisticAccuracyClassification.xlsx. Given a sample of n shots, over g groups, at a distance d:

1. Convert each measurement to units of MOA. For example, if measurements are taken in inches, and the target was shot at a distance of d yards, divide each measurement by 0.01047d.
2. For each group g, calculate the center of the group as $$(\bar{x}_{i \in g},\bar{y}_{i \in g})$$
3. For each shot i, find its radius squared relative to the center of its group as $$r_i^2=(x_i-x_g)^2+(y_i-y_g)^2$$
4. Calculate the upper 90% confidence value for sigma as σU = SQRT[SUM(r_i^2)/CHIINV(0.9, 2n-2g)]
5. Calculate the Ballistic Accuracy Class as =ROUND(10*σU, 0)
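The same calculation is easy to script. Below is a minimal sketch of steps 1-5 (assuming SciPy; the function name and input conventions are ours, not part of the protocol). Excel's CHIINV(0.9, dof) is the right-tailed inverse, which corresponds to chi2.ppf(0.10, dof):

```python
import numpy as np
from scipy.stats import chi2

def bac(groups, distance_yards):
    """groups: list of arrays of (x, y) impact coordinates in inches,
    one array per point of aim; returns (sigma_hat, sigma_upper, BAC)."""
    r2_sum, n, g = 0.0, 0, len(groups)
    for shots in groups:
        pts = np.asarray(shots, float) / (0.01047 * distance_yards)  # inches -> MOA
        r2_sum += ((pts - pts.mean(axis=0)) ** 2).sum()  # sum of r_i^2 about group center
        n += len(pts)
    sigma_hat = np.sqrt(r2_sum / (2 * (n - g)))                # unbiased point estimate
    sigma_u = np.sqrt(r2_sum / chi2.ppf(0.10, 2 * n - 2 * g))  # upper 90% bound
    return sigma_hat, sigma_u, round(10 * sigma_u)
```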
{"ft_lang_label":"__label__en","ft_lang_prob":0.87972695,"math_prob":0.9622203,"size":9052,"snap":"2019-35-2019-39","text_gpt3_token_len":2273,"char_repetition_ratio":0.10156941,"word_repetition_ratio":0.018220043,"special_character_ratio":0.25375608,"punctuation_ratio":0.12413395,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97014683,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-20T16:27:32Z\",\"WARC-Record-ID\":\"<urn:uuid:41701392-d889-4d53-b2f0-0b53fbf1e6d1>\",\"Content-Length\":\"30212\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8f20c793-6394-4486-bf18-a0eb02efc10c>\",\"WARC-Concurrent-To\":\"<urn:uuid:79cdb66d-00bc-4821-bb5a-3c178ebc5347>\",\"WARC-IP-Address\":\"74.208.236.150\",\"WARC-Target-URI\":\"http://ballistipedia.com/index.php?title=Ballistic_Accuracy_Classification&oldid=1431\",\"WARC-Payload-Digest\":\"sha1:WYZMHXAQSVBCLYPED6Y6SKEYSPKNQEDO\",\"WARC-Block-Digest\":\"sha1:ZQBHJ755TVSFXXUCQN5RLSFYDNQYU4OO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027315551.61_warc_CC-MAIN-20190820154633-20190820180633-00060.warc.gz\"}"}
http://hakology.co.uk/tag/objects/
[ "# PyGame : Tutorial PT2 (Drawing objects to the screen)\n\nSo its that time of the week … I apologise if the tutorials are slow in being published as I’m working on a few other projects in my spare time and want these tutorials to be correctly written so I don’t get you in to any bad habits or poor coding situations … (I’ve been reading the documentation for pygame A LOT) so without further a do, here’s pygame tutorial part 2. (Click here for part 1)\n\nIn the last tutorial we covered setting up the basic game window and initialising pygame.\n\nIn this tutorial were going to be drawing some simple shapes and text to the screen and moving them around using the keyboard.\n\nThere are some minor modifications to last weeks code I will explain these as we go along.\n\nnormal = from original tutorial\n\nimport pygame\nfrom pygame.locals import *\n\nwWIDTH = 640 # game window width\nwHEIGHT = 480 # game window height\nloc = [0, 0] # location data\n\nThe above code is pretty much the same as last week apart from the loc[0,0] (loc[x,y]) variable this is going to be used to store the xy location of the player.\nHere we set the variables both equal to zero. These numbers represent the players starting position so x=0 and y=0 would result in the player starting in the top left hand corner.\n\ndef main():\n\nr = 1\nwhile(1): # do for a while (game loop)\n\nfor event in pygame.event.get(): # handle events\n\nif event.type == QUIT: # ctrl+c\n\nr = 0 # return 0 = (game is over)\n\nelif event.type == KEYDOWN: # down arrow\n\nif event.key == K_ESCAPE:\n\nr = 0 # return 0 = (game is over)\n\nif event.key == K_q: # q key pressed\n\nr = 0\n\nif event.key == K_LEFT:\n\nloc = loc – 1 # move left\n\nif event.key == K_RIGHT:\n\nloc = loc + 1 # move right\n\nif event.key == K_UP:\n\nloc = loc – 1 # move up\n\nif event.key == K_DOWN:\n\nloc = loc + 1 # move down\n\nSo whats changed above well again most of the code is exactly the same but this time were trapping keys to trigger more events. In the first tutorial we trapped the escape key to make the game exit. Here we are trapping the arrow keys to change the variables held in the (loc) location variable. If the key is pressed down then add or subtract to the loc[x,y] variable depending on which key is being pressed (UP, DOWN, LEFT or RIGHT).\n\nWe will use the loc variable later to draw the objects to the screen.\n\n# keep location visible on the screen\n# check loc is not out of bounds (screen/window size)\n\nif loc < 0:\n\nloc = wWIDTH\n\nif loc > wWIDTH:\n\nloc = 0\n\nif loc < 0:\n\nloc = wHEIGHT\n\nif loc > wHEIGHT:\n\nloc = 0\n\nThe above code checks that the numbers stored in the loc(x,y) variable are within the bounds of the screen. If the bounds are exceeded the code keeps them on the screen. So if the object goes off the left of the screen it will appear on the right and also if the object / player moves off the bottom they will appear at the top and vice versa. The loc positions are compared to the window bounds (the variables we defined at the top of the code. 
```python
        # fill the screen with black
        GSURF.fill((0, 0, 0))

        # draw stuff here ...
        # circle(Surface, color, pos, radius, width=0) -> Rect
        # rect is (left, top, width, height)
        pygame.draw.circle(GSURF, (0, 255, 0), (loc[0], loc[1]), 6, 2)
        pygame.draw.circle(GSURF, (255, 0, 0), (loc[0], loc[1]), 4)

        pygame.draw.rect(GSURF, (0, 0, 255), [loc[0]-10, loc[1]-10, 20, 20], 3)
        pygame.draw.ellipse(GSURF, (0, 255, 0), [loc[0]-30, loc[1]-10, 60, 20], 1)
        pygame.draw.arc(GSURF, (255, 0, 0), [loc[0]-40, loc[1]-40, 80, 80], 3.141, 2*3.141, 1)
        pygame.draw.line(GSURF, (0, 0, 255), [loc[0]-50, loc[1]-50], [loc[0]+50, loc[1]+50], 1)
```

Here's the good stuff … first of all we fill the canvas / surface with black. It's important to draw the background colour first; if we drew the bg colour last we would just end up with a black screen no matter what we'd drawn underneath / first.

`pygame.draw.circle(GSURF, (0, 255, 0), (loc[0], loc[1]), 6, 2)` draws a green circle to the game surface at the location held in loc[x,y], with a radius of 6 pixels and a stroke / border 2 pixels wide, so this will not fill the circle.

`pygame.draw.circle(GSURF, (255, 0, 0), (loc[0], loc[1]), 4)` is the same as the last snippet but will fill the circle in red (if the stroke width isn't provided the circle is filled).

`pygame.draw.rect(GSURF, (0, 0, 255), [loc[0]-10, loc[1]-10, 20, 20], 3)` draws a blue rectangle to the screen. Again the last parameter is optional; if you omit this parameter the rectangle will be filled.

`pygame.draw.ellipse(GSURF, (0, 255, 0), [loc[0]-30, loc[1]-10, 60, 20], 1)` draws an ellipse to the screen, with an optional border parameter.

`pygame.draw.arc(GSURF, (255, 0, 0), [loc[0]-40, loc[1]-40, 80, 80], 3.141, 2*3.141, 1)` draws an arc to the screen; the last parameter represents line thickness. NB. Arc start and end points are measured in radians. These sound complicated but they are really easy: basically a circle contains 6.28 radians (2*pi). A start point of 0 and an endpoint of 3.141 would draw half a circle. A start point of 0 and an endpoint of 6.282 would draw a whole circle. Degrees to radians = 6.282 / 360 * degrees.

`pygame.draw.line(GSURF, (0, 0, 255), [loc[0]-50, loc[1]-50], [loc[0]+50, loc[1]+50], 1)` draws a line from one point to another.

All the above shapes are drawn at the location of the player.

```python
        myfont = pygame.font.SysFont("monospace", 15)
        label = myfont.render("Frame rate : " + str(int(GCLOCK.get_fps())), 1, (255, 255, 0))
        GSURF.blit(label, (loc[0], loc[1]))

        # check the 'pygame draw' documentation for more information on
        # other things you can draw to the screen; above are most of the basics.
        pygame.display.flip()  # update the screen
        GCLOCK.tick()          # update the clock

        if r != 1:
            break

    return r
```

Above we define a font variable as myfont, using one of the default pygame fonts (which are all listed in the documentation). Next we make a label variable; this is going to contain the frame rate, and as this value isn't a string we have to use str() to convert the fps from a number to a string. (NB. GCLOCK.get_fps() actually returns a float, so the int() call is there to truncate it and keep the label tidy.) Then we blit the label object to the screen at the specified location.
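As a side note to the radians explanation above, the conversion is easy to wrap in a helper (my own addition, not part of the tutorial code):

```python
import math

def deg_to_rad(degrees):
    # a full circle is 360 degrees == 2*pi (about 6.283) radians
    return degrees * math.pi / 180.0

# e.g. a quarter-circle arc from 0 to 90 degrees:
# pygame.draw.arc(GSURF, (255, 0, 0), [10, 10, 80, 80], 0, deg_to_rad(90), 1)
```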
display.flip() is used for double buffering (which is a whole other topic, but I'll cover it briefly here): if we replaced the last frame's image with the current one while we were still drawing it in the loop, bad things would happen. Basically, double buffering keeps displaying the last drawn image until the next one has been drawn and is ready to be displayed. Double buffering cuts out screen flicker.

tick() is then called to update the pygame clock and advance to the next frame. No number is passed in here. Before, we were passing a number, which will restrict or slow the pace of your game by a set amount depending on what number you use. Calling it with no argument also helps us determine accurately how many cycles per second the game is running at.

```python
if __name__ == "__main__":  # main function call
    r = 1                   # set variable for return value
    pygame.init()           # initialise python pygame
    GCLOCK = pygame.time.Clock()  # set game clock
    GSURF = pygame.display.set_mode((wWIDTH, wHEIGHT))  # main game surface
    pygame.key.set_repeat(1, 0)   # repeat held keys with no delay
    while r == 1:           # quit on 0
        r = main()          # get return value from main loop (1 == OK ... 0 == exit)
    pygame.quit()
```

Here is the last bit of code (the bit that initialises the whole game, so rather important). Only one line changed here, which is the keyboard repeat rate; this forces the keyboard to repeat key presses if they are held down for a specified period of time.

That concludes tutorial part two.

Here is a screen shot of what the code does …

(Screenshot: http://hakology.files.wordpress.com/2013/07/screenshot-from-2013-07-09-015841.png)

Here is a link to the code …
GIST : https://gist.github.com/caffeinemonster/5953538
PASTE BIN : http://pastebin.com/sxchRZx7
[ null, "http://hakology.files.wordpress.com/2013/07/screenshot-from-2013-07-09-015841.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8253763,"math_prob":0.91977036,"size":7779,"snap":"2021-31-2021-39","text_gpt3_token_len":2184,"char_repetition_ratio":0.1318328,"word_repetition_ratio":0.047058824,"special_character_ratio":0.30350944,"punctuation_ratio":0.15109573,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97382116,"pos_list":[0,1,2],"im_url_duplicate_count":[null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T17:01:48Z\",\"WARC-Record-ID\":\"<urn:uuid:61a9704c-6361-470b-99a7-56ca1ac1baaf>\",\"Content-Length\":\"45325\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e5a40609-ba98-478a-9727-ccb7c430f975>\",\"WARC-Concurrent-To\":\"<urn:uuid:c6d78a1a-1dd5-4d4c-974a-bf15c622723d>\",\"WARC-IP-Address\":\"85.233.160.139\",\"WARC-Target-URI\":\"http://hakology.co.uk/tag/objects/\",\"WARC-Payload-Digest\":\"sha1:WGR44MIUYRJ43FMYGZ7GGYNDW3A2CVN6\",\"WARC-Block-Digest\":\"sha1:WYAQYO3GMFGZC3ZXSPNENXWPCBWIWE55\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153739.28_warc_CC-MAIN-20210728154442-20210728184442-00050.warc.gz\"}"}
https://www.unige.ch/math/tggroup/doku.php?id=fables
[ "fables\n\n# Séminaire \"Fables Géométriques\".\n\nThe normal starting time of this seminar is 16.30 on Monday.\n\n2020, Monday, February 17, 16:30, Battelle, Karim Adiprasito\n(University of Copenhagen, Hebrew University of Jerusalem)\n\nAlgebraic geometry of the sphere at infinity, polyhedral de Rham theory and L^2 vanishing conjectures\n\nI will discuss a conjecture of Singer concerning the vanishing of L^2 cohomology on non-positively curved manifolds, and relate it to Hodge theory on a Hilbert space that arises as the limit of Chow rings of certain complex varieties.\n\n2019, Friday, December 6, 15:00, Battelle, Tomasz Pelka (UniBe)\n\nQ-homology planes satisfying the Negativity Conjecture\n\nA smooth complex algebraic surface S is called a Q-homology plane if H_i(S,Q)=0 for i>0. This holds for example if S is a complement of a rational cuspidal curve in P^2. The geometry of such S is understood unless S is of log general type, in which case the log MMP applied to the log smooth completion (X,D) of S is insufficient. The idea of K. Palka was to study the pair (X,(1/2)D) instead. This approach gives much stronger constraints on the shape of D, and leads to the Negativity Conjecture, which asserts that the Kodaira dimension of K_X+(1/2)D is negative. It is a natural generalization e.g. of the Coolidge-Nagata conjecture about rational cuspidal curves, which was recently proved using these methods by M. Koras and K. Palka.\n\nIf this conjecture holds, all Q-homology planes of log general type can be classified. It turns out that, as expected by tom Dieck and Petrie, they are arranged in finitely many discrete families, each obtainable in a uniform way from certain arrangements of lines and conics on P^2. As a consequence, they all satisfy the Strong Rigidity Conjecture of Flenner and Zaidenberg; and their automorphism groups are subgroups of S_3. To illustrate this surprising rigidity, I will show how to construct all rational cuspidal curves (with complements of log general type, satisfying the Negativity Conjecture) inductively, by iterating quadratic Cremona maps. This construction in particular shows that any such curve is uniquely determined, up to a projective equivalence, by the topology of its singular points.\n\n2019, Monday, November 25, 16:30, Battelle, Felix Schlenk (UniNe)\n\n(Real) Lagrangian submanifolds\n\nWe start with describing how Lagrangian submanifolds of symplectic manifolds naturally appear in many ways: In celestial mechanics, integrable systems, symplectic geometry, and algebraic geometry. We then look at real Lagrangians, namely those which are the fixed point set of an anti-symplectic involution. How special is the property of being real? While many of the examples discussed above are real, we explain why the central fibres in toric symplectic manifolds are real only if the moment polytope is centrally symmetric. The talk is based on work of and with Joé Brendel, Yuri Chekanov, and Joontae Kim.\n\n 2019, Friday, November 8, 14:00, Battelle, Johannes Rau (University of Tübingen)\n\nThe dimension of an amoeba\n\nAmoebas are projections of algebraic varieties in logarithmic coordinates and were originally introduced by Gelfand, Kapranov and Zelevinsky in their influential book. Based on some computation, Nisse and Sottile formulated some questions concerning the dimension of amoebas. In a joint work with Jan Draisma and Chi Ho Yuen, we answer these questions by providing a general formula that computes the dimension of amoebas. 
If time permits, we also discuss the consequences of this formula for matroidal fans.

2019, Monday, November 4, 16:30, Battelle, Pierrick Bousseau (ETH Zurich)

Quasimodular forms from Betti numbers

This talk will be about refined curve counting on local P^2, the noncompact Calabi-Yau 3-fold given by the total space of the canonical line bundle of the projective plane. I will explain how to construct quasimodular forms starting from Betti numbers of moduli spaces of dimension 1 coherent sheaves on P^2. This gives a proof of some stringy predictions about the refined topological string theory of local P^2 in the Nekrasov-Shatashvili limit. Partly based on work in progress with Honglu Fan, Shuai Guo, and Longting Wu.

2019, Monday, October 28, 16:30, Battelle, Ilia Itenberg (Sorbonne University)

Planes in four-dimensional cubics

We discuss possible numbers of 2-planes in a smooth cubic hypersurface in the 5-dimensional projective space. We show that, in the complex case, the maximal number of planes is 405, the maximum being realized by the Fermat cubic. In the real case, the maximal number of planes is 357.

The proofs deal with the period spaces of cubic hypersurfaces in the 5-dimensional complex projective space and are based on the global Torelli theorem and the surjectivity of the period map for these hypersurfaces, as well as on Nikulin's theory of discriminant forms.

Joint work with Alex Degtyarev and John Christian Ottem.

2019, Monday, October 14, 16:30, Battelle, Igor Krichever (Columbia University)

Degenerations of real normalized differentials

The behavior of real-normalized (RN) meromorphic differentials on Riemann surfaces under degeneration is studied. In particular, it is proved that the residues at the nodes are solutions of a suitable Kirchhoff problem on the dual graph of the curve. It is further shown that the limits of zeroes of RN differentials are the divisor of zeroes of a twisted differential: an explicitly constructed collection of RN differentials on the irreducible components of the stable curve, with higher order poles at some nodes. Our main tool is a new method for constructing differentials on smooth Riemann surfaces, in a plumbing neighborhood of a given stable curve.

2019, Monday, October 7, 16:30, Battelle, Jérémy Blanc (University of Basel)

Quotients of higher dimensional Cremona groups

We study large groups of birational transformations $\mathrm{Bir}(X)$, where $X$ is a variety of dimension at least $3$, defined over $\mathbb{C}$ or a subfield of $\mathbb{C}$. Two prominent cases are when $X$ is the projective space $\mathbb{P}^n$, in which case $\mathrm{Bir}(X)$ is the Cremona group of rank $n$, or when $X \subset \mathbb{P}^{n+1}$ is a smooth cubic hypersurface. In both cases, and more generally when $X$ is birational to a conic bundle, we produce infinitely many distinct group homomorphisms from $\mathrm{Bir}(X)$ to $\mathbb{Z}/2$. As a consequence we also obtain that the Cremona group of rank $n \ge 3$ is not generated by linear and Jonquières elements.

Joint work with Stéphane Lamy and Susanna Zimmermann.

2019, Monday, September 30, 16:30, Battelle, Roman Golovko (Charles University in Prague)

The wrapped Fukaya category of a Weinstein manifold is generated by the Lagrangian cocore discs

In a joint work with B. Chantraine, G. Dimitroglou Rizell and P. Ghiggini, we decompose any object in the wrapped Fukaya category as a twisted complex built from the cocores of the critical
(i.e. half-dimensional) handles in a Weinstein handle decomposition. The main tools used are the Floer homology theories of exact Lagrangian immersions, of exact Lagrangian cobordisms in the SFT sense (i.e. between Legendrians), as well as relations between these theories.

2019, Wednesday, September 25, 11:00, Battelle, Ivan Fesenko (University of Nottingham)

Two-dimensional local fields and integration on them

Two-dimensional local fields include formal loop objects such as $R((t))$, $C((t))$, $Q_p((t))$, and also fields such as $F_p((t_1))((t_2))$ and $Q_p\{\{t\}\}$. They play a fundamental role in two-dimensional number theory, arithmetic geometry, representation theory, algebraic topology and mathematical physics. I will explain basic things about such fields, including their unusual topology and the theory of measure and integration on them, together with a Fourier transform which can be viewed as a (rigorous) arithmetic version of the Feynman integral. While one-dimensional local fields show up in the tropical geometry of curves, one may expect that two-dimensional local fields should be involved in the tropical geometry of surfaces.

2019, Monday, September 16, 16:30, Battelle, Gleb Smirnov (ETH Zurich)

From flops to diffeomorphism groups

Following a short introduction to the flop surgery, I will explain how this surgery can be used to detect non-contractible loops of diffeomorphisms for many algebraic surfaces.

2019, Monday, May 20, 14:00, Battelle, Ziming Ma (Chinese University of Hong Kong)

Geometry of the Maurer-Cartan equation near degenerate Calabi-Yau varieties

In this talk, we construct a dgBV algebra $PV^{*,*}(X)$ associated to a possibly degenerate Calabi-Yau variety $X$ equipped with local thickening data. This gives a singular version of the (extended) Kodaira-Spencer dgLa which is applicable to both log smooth and maximally degenerate Calabi-Yau varieties. We use this to prove an unobstructedness result about the smoothing of degenerate log Calabi-Yau varieties $X$ satisfying the Hodge-de Rham degeneracy property for the cohomology of $X$, in the spirit of Kontsevich-Katzarkov-Pantev. We also demonstrate how our construction can be applied to produce a log Frobenius manifold structure on a formal neighborhood of the extended moduli space, using Barannikov's technique. This is a joint work with Kwokwai Chan and Naichung Conan Leung.

2019, Monday, April 8, 16:00, Battelle, Michele Ancona (Institut Camille Jordan)

Random sections of line bundles over real Riemann surfaces

Given a line bundle L over a real Riemann surface, we study the number of real zeros of a random section of L. We prove a rarefaction result for sections whose number of real zeros deviates from the expected one.

2019, Monday, April 1, 16:00, Battelle, Mikhail Shkolnikov (IST Austria)

PSL-tropical limits

The classical tropical limit is defined for families of varieties in the algebraic torus. One of the ways to generalize this framework is to consider non-commutative groups instead of algebraic tori. We describe tropical limits for subvarieties in PSL(2,C): the result is spelled out in terms of floor diagrams and has parallels with symplectic field theory.
The talk is based on work in progress with Grigory Mikhalkin.

2019, Tuesday, March 26, 14:00, Battelle, Enrica Mazzon (Imperial College London)

Berkovich approach to degenerations of hyper-Kähler varieties

To a degeneration of varieties, we can associate the dual intersection complex, a topological space that encodes the combinatorics of the central fiber and reflects the geometry of the generic fiber. The points of the dual complex can be identified with valuations on the function field of the variety, hence the dual complex can be embedded in the Berkovich space of the variety. In this talk I will explain how this interpretation gives an insight in the study of dual complexes. I will focus on some degenerations of hyper-Kähler varieties and show that we are able to determine the homeomorphism type of their dual complex using techniques of Berkovich geometry. The results are in accordance with the predictions of mirror symmetry, and the recent work about the rational homology of dual complexes of degenerations of hyper-Kähler varieties, due to Kollár, Laza, Saccà and Voisin. This is joint work with Morgan Brown.

2019, Monday, March 18, 16:00, Battelle, Danilo Lewanski (Max Planck Institut für Mathematik)

Refreshing Tropical Jucys curves

We derive explicit formulae for the generating series of Grothendieck dessins d'enfants and monotone Hurwitz numbers via the semi-infinite wedge formalism, and from it we obtain bosonic Fock space expressions. This yields a tropical geometric interpretation involving Gromov-Witten invariants as local multiplicities.

2019, Monday, March 11, 16:00, Battelle, Anton Mellit (University of Vienna)

Five-term relations

I will review how the five-term relation for Faddeev-Kashaev's quantum dilogarithm arises in the Hall algebra context, and sketch a simple proof. Then I will explain how this proof can be transported to the elliptic Hall algebra situation, where the five-term relation implies identities between Macdonald polynomials conjectured by Bergeron and Haiman. This is a joint work with Adriano Garsia.

2019, Monday, March 4, 16:00, Battelle, Andras Stipsicz (Budapest University)

Knot Floer homology and double branched covers

We will review the basic constructions of (various versions of) knot Floer homologies, show some applications, and discuss extensions of the definitions to the double branched cover, also using the covering transformation.

2019, Monday, February 25, 16:00, Battelle, Erwan Brugallé (Université de Nantes)

On the invariance of Welschinger invariants

Welschinger invariants are real analogs of Gromov-Witten invariants for symplectic 4-manifolds X. In this talk, I will strengthen Welschinger's original invariance result. Our main result is that when X is a real rational algebraic surface, Welschinger invariants eventually only depend on the number of real interpolated points, and some homological data associated to X. This result follows easily from a formula relating Welschinger invariants of two real symplectic manifolds differing by a surgery along a real Lagrangian sphere. As an application, we complete the computation of Welschinger invariants of real rational algebraic surfaces, and obtain vanishing, sign, and sharpness results generalizing previously known statements.
If time permits, we will also discuss some hypothetical relations with tropical refined invariants defined by Block-Göttsche and Göttsche-Schroeter.

2018, Monday, December 10, 16:00, Battelle, Arthur Renaudineau (Lille)

Lefschetz hyperplane section theorem for tropical hypersurfaces

We will discuss variants of the Lefschetz hyperplane section theorem for the integral tropical homology groups of non-singular tropical hypersurfaces of toric varieties. As an application, we get that the integral tropical homology groups of non-singular tropical hypersurfaces are torsion free. This is a joint work with Charles Arnal and Kristin Shaw.

2018, Monday, November 26, 16:00, Battelle, Vladimir Fock (Strasbourg)

Higher complex structures on surfaces

We suggest a definition of a differential geometric structure on surfaces generalizing the notion of complex structure and discuss its properties. The moduli space of such structures shares many common features with, and conjecturally coincides with, the higher Teichmüller space: the space of positive representations of the fundamental group of the surface into PGL(N) (just as moduli of ordinary complex structures give a representation of the fundamental group to PGL(2)). Joint work with A. Thomas.

2018, Monday, November 19, 16:15, Battelle, Stepan Orevkov (Moscow, Toulouse)

Orthogonal polynomials in two variables

A natural generalization of classical systems of (one-variable) orthogonal polynomials is as follows. Let $D$ be a domain in $R^n$ endowed with a Riemannian metric and a measure. Suppose that the Laplace-Beltrami operator (for the given metric) is symmetric (for the given measure) and leaves invariant the set of polynomials of a given degree. Then its eigenfunctions form a system of orthogonal polynomials.

I present a complete classification of domains in $R^2$ for which this construction can be applied. The talk is based on a joint work with D. Bakry and M. Zani.

2018, Monday, October 8, 16:30, Battelle, Sione Ma'u (Auckland)

Polynomial degree via pluripotential theory

Given a complex polynomial $p$ in one variable, $\log|p|$ is a subharmonic function that grows like $(\deg p)\log|z|$ as $|z|\to\infty$. Such functions are studied using complex potential theory, based on the Laplace operator in the complex plane.

Multivariable polynomials can also be studied using potential theory (more precisely, a non-linear version called pluripotential theory, which is based on the complex Monge-Ampère operator). In this talk I will motivate and define a notion of degree of a polynomial on an affine variety using pluripotential theory (Lelong degree). Using this notion, a straightforward calculation yields a version of Bezout's theorem. I will present some examples and describe how to compute the Lelong degree explicitly on an algebraic curve. This is joint work with Jesse Hart.

2018, Monday, October 1, 16:00, Battelle, Mikhail Shkolnikov (Klosterneuburg)

Extended sandpile group and its scaling limit

Since its invention, the sandpile model has been believed to be renormalizable due to the presence of power laws. It appears that the sandpile group, made of recurrent configurations of the model, approximates a continuous object that we call the extended sandpile group. In fact, this is a tropical abelian variety defined over Z, and the subgroup of its integer points is exactly the usual sandpile group.
Moreover, the extended sandpile group is naturally a sheaf on discrete domains and thus brings an explicit scale renormalization procedure for recurrent configurations. We compute the (projective) scaling limit of sandpile groups along growing convex domains: it is equal to the quotient of real-valued discrete harmonic functions by the subgroup of integer-valued ones. This is a joint work with Moritz Lang.

2018, Wednesday, July 18, 16:30, Battelle, Kristin Shaw (University of Oslo)

Chern-Schwartz-MacPherson classes of matroids. Part II

Chern-Schwartz-MacPherson (CSM) classes are one way to extend the notion of Chern classes to singular and non-complete varieties. Matroids are an abstraction of the notion of independence in mathematics. In this talk, I will provide a combinatorial analogue of CSM classes for matroids, motivated by the geometry of hyperplane arrangements. In this setting, CSM classes are polyhedral fans which are Minkowski weights. One goal for defining these classes is to express matroid invariants as invariants from algebraic geometry. The CSM classes can be used to study the complexity of more general objects such as subdivisions of matroid polytopes and tropical manifolds. This is based on joint work with Lucia López de Medrano and Felipe Rincón.

2018, Monday, July 16, 16:00, Battelle, Kristin Shaw (University of Oslo)

Chern-Schwartz-MacPherson classes of matroids

Chern-Schwartz-MacPherson (CSM) classes are one way to extend the notion of Chern classes to singular and non-complete varieties. Matroids are an abstraction of the notion of independence in mathematics. In this talk, I will provide a combinatorial analogue of CSM classes for matroids, motivated by the geometry of hyperplane arrangements. In this setting, CSM classes are polyhedral fans which are Minkowski weights. One goal for defining these classes is to express matroid invariants as invariants from algebraic geometry. The CSM classes can be used to study the complexity of more general objects such as subdivisions of matroid polytopes and tropical manifolds. This is based on joint work with Lucia López de Medrano and Felipe Rincón.

2018, Monday, July 9, 16:30, Battelle, Ernesto Lupercio (CINVESTAV)

Convex geometry, complex systems and quantum physics

I will speak about our work on sandpiles and quantum integrable systems. Just as in classical mechanics toric manifolds correspond to rational convex polytopes, the irrational case is informed by the theory of sandpiles. Joint with Kalinin, Shkolnikov, Katzarkov, Meersseman and Verjovsky.

Workshop "Fables Géométriques", 2018, Friday, June 15 and Saturday, June 16, Battelle

Friday, June 15th

11:00-12:00 Yakov Eliashberg (Stanford)

12:30 lunch

14:30-15:30 Sergey Finashin (METU)

16:00-17:00 Viatcheslav Kharlamov (Strasbourg)

Saturday, June 16th

16:00-17:00 Stepan Orevkov (Toulouse)

17:30-18:30 Oleg Viro (Stony Brook)

19:30 dinner

2018, Monday, May 28, 15:00, Battelle, Alexander Esterov (Higher School of Economics, Moscow)

Tropical characteristic classes and Plücker formulas

Given a proper generic map of manifolds, the Thom polynomial counts (in terms of characteristic classes of the manifolds) how many fibers of the map have a prescribed singularity. However, this tool cannot be directly applied to the study of generic polynomial maps $C^m \to C^n$, because they are not proper.
An attempt to extend Thom polynomials in this natural direction leads to what can be called tropical Thom polynomials and tropical characteristic classes.

I will introduce tropical characteristic classes of (very) affine algebraic varieties, compute the tropical version of the simplest Thom polynomials (the Plücker formulas for the number of cusps and nodes of a projectively dual curve), and outline their relation to tropical correspondence theorems and some other possible applications.

2018, Friday, May 25, 10:30, Battelle, Dimitry Kaledin (Steklov & NRU HSE, Moscow)

Witt vectors, commutative and non-commutative, II

Witt vectors were first introduced eighty years ago, but they still come up in different questions of commutative and homological algebra, algebraic geometry, and even algebraic topology. I will try to give a general introduction to this remarkable subject, and show both its classical parts and some recent discoveries. The first talk will be quite elementary; first-year algebra should be enough. In the somewhat more advanced second talk, I will try to explain how the simple constructions of the first talk lead to a non-commutative generalization of Grothendieck's crystalline cohomology of smooth algebraic varieties over a finite field.

2018, Tuesday, May 22, 15:30, Battelle, Dimitry Kaledin (Steklov & NRU HSE, Moscow)

Witt vectors, commutative and non-commutative, I

Witt vectors were first introduced eighty years ago, but they still come up in different questions of commutative and homological algebra, algebraic geometry, and even algebraic topology. I will try to give a general introduction to this remarkable subject, and show both its classical parts and some recent discoveries. The first talk will be quite elementary; first-year algebra should be enough. In the somewhat more advanced second talk, I will try to explain how the simple constructions of the first talk lead to a non-commutative generalization of Grothendieck's crystalline cohomology of smooth algebraic varieties over a finite field.

2018, Monday, May 14, 16:30, Battelle, Ilia Itenberg (Paris VI - ENS)

Finite real algebraic curves

The talk is devoted to real plane algebraic curves with finitely many real points. We study the following question: what is the maximal possible number of real points of such a curve provided that it has given (even) degree and given geometric genus? We obtain a complete answer in the case where the degree is sufficiently large with respect to the genus, and prove certain lower and upper bounds for the number in question in the general case. This is a joint work with E. Brugallé, A. Degtyarev and F. Mangolte.

2018, Monday, April 16, 16:30, Battelle, Ludmil Katzarkov (Vienna)

Homological mirror symmetry and the P=W conjecture

2018, Monday, March 5, 16:00, Battelle, Rahul Pandharipande (ETH)

On Lehn's conjecture for Segre classes on Hilbert schemes of points of surfaces and generalizations

Let L→S be a line bundle on a nonsingular projective surface. I will discuss recent progress concerning the formula conjectured by Lehn in 1999 for the top Segre class of the tautological bundle L^[n] on Hilb(S,n) and the parallel question for vector bundles V→S. Results of Voisin play a crucial role. The talk represents joint work with A. Marian and D. Oprea.
2018, Monday, February 26, 16:30, Battelle, Anton Fonarev (Higher School of Economics)

Embedding derived categories of curves into derived categories of moduli of stable vector bundles

One out of many interesting questions about derived categories is the following conjecture of A. Bondal: the bounded derived category of coherent sheaves of a smooth projective variety can be embedded into the bounded derived category of coherent sheaves of a smooth Fano variety. This conjecture is rather nontrivial even for curves. We will show how to embed the derived category of a generic curve of genus g > 1 into the derived category of rank 2 stable vector bundles with a fixed determinant of odd degree. The proof is a nice interplay of algebraic geometry, representation theory and categorical methods. The talk is based on a joint work with A. Kuznetsov.

2017, Monday, November 6, 16:30, Battelle, Jeffrey Giansiracusa (Swansea University)

Tropical geometry as a scheme theory

Tropical geometry has become a powerful tool set for tackling problems in algebraic geometry, combinatorics, and number theory. The basic objects have traditionally been considered as certain polyhedral sets and heuristically thought of as algebraic objects defined over the real numbers with the max-plus semiring structure. I will explain how to realize this within an extension of scheme theory and describe the particular form of the equations of tropical varieties in terms of matroids.

2017, Monday, October 30, 16:30, Battelle, Diego Matessi (Università degli Studi di Milano)

From tropical hypersurfaces to Lagrangian submanifolds

I will explain a construction of Lagrangian submanifolds of $(\mathbb{C}^*)^2$ or $(\mathbb{C}^*)^3$ which lift tropical hypersurfaces in $\mathbb{R}^2$ or $\mathbb{R}^3$. The building blocks are what I call Lagrangian pairs of pants. These can be constructed as graphs of the differential of a smooth function defined on a Lagrangian co-amoeba. I will also explain some possible generalizations and applications to mirror symmetry.

2017, Monday, October 2, 16:30, Battelle, Dmitry Novikov (Weizmann Institute)

Complex cellular parameterization

(joint work with Gal Binyamini)

We introduce the notion of a complex cell, a complex analog of the cell decompositions used in real algebraic and analytic geometry. Complex cells defined using holomorphic data admit a natural notion of analytic continuation called $\delta$-extension, which gives rise to a rich hyperbolic geometric structure absent in the real case. We use this structure to prove that complex cellular decompositions share some interesting features with the classical constructions in the theory of resolution of singularities. Restriction of a complex cellular decomposition to the reals recovers the preparation theorem for subanalytic functions, and can be viewed as an analytic continuation thereof.

A key difference in comparison to the classical resolution of singularities is that the cellular decompositions are intrinsically uniform over (sub)analytic families. We deduce a subanalytic version of the Yomdin-Gromov theorem where $C^k$-smooth maps are replaced by mild maps.

2016, Friday, June 23, 11:00, Battelle, Ernesto Lupercio (CINVESTAV)

Quantum toric varieties

I will describe the theory of quantum toric varieties, which generalizes usual toric geometry.
Joint with Meersseman, Katzarkov and Verjovsky.

2016, Thursday, June 22, 11:30, Battelle, Conan Leung (CUHK)

Informal introduction to G_2-manifolds III

2016, Wednesday, June 21, 11:30, Battelle, Conan Leung (CUHK)

Informal introduction to G_2-manifolds II

2016, Monday, June 19, 15:00, Battelle, Conan Leung (CUHK)

Informal introduction to G_2-manifolds I

Villa Battelle, May 2, 14:00-15:00; May 3, 14:15-15:15; May 5, 14:30-15:30, Aaron Bertram (Utah)

Minicourse: "Moduli Spaces of Complexes in Algebraic Geometry"

The ideal of the twisted cubic in projective three-space is completely described by a 2×3 matrix of linear forms in four variables. The space of such matrices (modulo the actions of GL(2) and GL(3)) is a smooth, projective variety compactifying the space of twisted cubics. But the objects parametrized by the points at the boundary of this moduli space are not ideals of curves. They are complexes of line bundles that are stable with respect to a "stability condition on the derived category." What does this mean? Can this be used to systematically find nice models for moduli and relate them to moduli spaces of coherent sheaves?

Day 1) Introduction to Stability Conditions. Ordinary stability of vector bundles on a Riemann surface relies on two invariants: the rank and degree (first Chern class). A stability condition on the derived category of coherent sheaves on a complex manifold relies on a generalized rank and degree, and also on an exotic t-structure on the derived category, with an abelian category of complexes at its heart. On an algebraic surface, there are stability conditions whose underlying heart can be described by a tilting construction. However, finding a single stability condition on a projective Calabi-Yau threefold (e.g. the quintic in P^4) remains open.

Day 2) Models of the Hilbert Schemes of Points on a Surface. As the stability condition varies, the moduli spaces of stable objects (with respect to the stability condition) undergo a series of birational transformations. The particular example of the Hilbert scheme of ideal sheaves on an algebraic surface has been studied for various classes of surfaces. We will survey some results.

Day 3) The Euler Stability Condition on Projective Space. An interesting stability condition on P^n has the Euler characteristic playing the role of the rank. We will use this stability condition to study stratifications of the spaces of symmetric tensors, generalizing the secant varieties to the Veronese embeddings of P^n. This is joint work with Brooke Ullery.

Villa Battelle, Monday, Apr 3, 16:30-17:30, Lionel Lang (Uppsala University)

The vanishing cycles of curves in toric surfaces: the spin case

If the interior polygon of a lattice polygon $\Delta$ is divisible by 2, any generic curve $C$ of the linear system associated to $\Delta$ admits a spin structure $q$. If a loop in $C$ is a vanishing cycle, then the Dehn twist along the loop has to preserve $q$. As a consequence, the image of the monodromy of the linear system is a subgroup of the mapping class group $MCG(C,q)$ that preserves $q$. The main goal of this talk is to compare the image of the monodromy with $MCG(C,q)$. To this aim, we will show on the one hand that $MCG(C,q)$ admits a very explicit set of generators. On the other, we will construct elements of the monodromy by tropical means.
The conclusion will be that the image of the monodromy is the full group $MCG(C,q)$ if and only if the interior polygon admits no divisors other than 2. (Joint with R. Crétois.)

Villa Battelle, Wednesday, Mar 8, 12:00, Maksim Karev (PDMI)

Monotone Hurwitz Numbers

Usual Hurwitz numbers count the number of covers over CP^1 with a fixed ramification profile over the point $\infty$, simply ramified over a specified set of points. They can also be treated as a weighted count of factorizations in the symmetric group. It is known that Hurwitz numbers can be calculated via intersection indices on the moduli spaces of complex curves by the so-called ELSV formula.

In my talk, I will discuss monotone Hurwitz numbers, which also arise as a factorization count with restrictions. It turns out that they too can be related to intersection indices on the moduli spaces of complex curves. I will give a definition of monotone Hurwitz numbers, and try to explain the origin of the monotone ELSV formula. If time permits, I will speak about further developments of the subject.

The talk is based on joint work with Norman Do (Monash University).

Villa Battelle, Tuesday, Feb 21, 15:30, Yang-Hui He (London, Nankai and Oxford)

Calabi-Yau Varieties: From Quiver Representations to Dessins d'Enfants

We discuss how bipartite graphs on Riemann surfaces capture a wealth of information about the physics and the mathematics of gauge theories. The correspondence between the gauge theory, the underlying algebraic geometry of its space of vacua as a quiver variety, the combinatorics of dimers and toric varieties, as well as the number theory of dessins d'enfants becomes particularly intricate under this light.

Joint session of "Fables géométriques" and "Groupes de Lie et espaces des modules" seminars.

Villa Battelle, Monday, Feb 20, 16:30, Yang-Hui He (London, Nankai and Oxford)

There tend to be exceptional structures in classifications: in geometry, there are the Platonic solids; in algebra, there are the exceptional Lie algebras; in group theory, there are the sporadic groups, to name but a few. Could these exceptional structures be related in some way? A champion of such correspondences is Prof. John McKay. We take a casual promenade in this land of exceptionology, reviewing some classic results and presenting some new ones based on joint work with Prof. McKay.

Special lecture for "Geometry, Topology and Physics" masterclass students.

Villa Battelle, Friday, December 9, 14:30-15:30, Ozgur Ceyhan (Luxembourg)

Backpropagation, its geometry and tropicalisation

The algorithms that make the current successes of artificial neural networks possible are decades old. They became applicable only recently, as these algorithms demand huge computational power. Any technique which reduces the need for computation has the potential to make a great impact. In this talk, I am going to discuss the basics of backpropagation techniques and a tropicalisation of the problem that promises to reduce the time complexity and accelerate computations.

2016, Monday, November 7, 16:30, Battelle, Vladimir Fock

Separation of variables in cluster integrable systems

Cluster integrable systems can be viewed from five rather different points of view: 1. As a double Bruhat cell of an affine Lie-Poisson group; 2. As a space of pairs (planar algebraic curve, line bundle on it); 3. As a space of Abelian connections on a bipartite graph on a torus; 4. As the Hilbert scheme of points on an algebraic torus; 5.
As a collection of flags in an infinite space invariant under the action of two commuting operators. We will see the relation between all these descriptions and discuss their quantization and possible generalizations.

2016, Friday, Nov 4, 14:30-15:15 (part I) and 15:30-16:15 (part II), Johannes Walcher (Heidelberg)

Ideas of D-branes

Abstract: I will give an introduction to D-branes from the point of view of their origin in the physics of string theory. I will discuss both world-sheet and space-time aspects.

2016, Monday, May 23, 16:30, Battelle, Frédéric Bihan

Descartes' Rule of Signs for Polynomial Systems Supported on Circuits

Descartes' rule of signs bounds the number of positive roots of a univariate real polynomial by the number of sign changes between consecutive coefficients (ordered by increasing powers). In particular, this produces a sharp bound depending on the number of monomials. Generalizing Descartes' rule of signs or the corresponding sharp bound to the multivariable case is a challenging problem. In this talk, I will present a generalization of Descartes' rule of signs for the number of positive solutions of any system of n real polynomial equations in n variables with at most n+2 monomials. As with the usual rule of Descartes, our bound is sharp, and it is expressed as a number of sign changes of a sequence of numbers obtained from the maximal minors of the matrix of coefficients and of the matrix of exponents of the system. This is a joint work with Alicia Dickenstein (Buenos Aires University).

2016, Monday, May 9, 18:30-19:15, Battelle, Eugenii Shustin

On refined tropical invariants of toric surfaces.

We discuss two examples of refined counts of plane tropical curves. One of them is the refined broccoli invariant. It was introduced by Göttsche and Schroeter for the genus zero case, and it turns into some descendant invariant or the broccoli invariant according as the parameter takes value 1 or -1. A possible extension of broccoli invariants to positive genera appeared to be rather problematic. However, the refined version turns out to be easier to treat. Jointly with F. Schroeter, we have defined a refined broccoli invariant counting elliptic tropical curves. This can be done for higher genera as well (work in progress). Another example (joint work with L. Blechman) is the refined descendant tropical invariant (involving arbitrary powers of psi-classes).
We also discuss the most interesting related question: what is the complex and real enumerative meaning of these invariants?

2016, Monday, 4 April, 16:30, Battelle.

Lionel Lang (Uppsala)

The vanishing cycles of curves in toric surfaces (joint work with Rémi Crétois)

In [Do], Donaldson addressed the following question: do all Lagrangian spheres in a complex projective manifold arise from the vanishing cycles of a deformation to singular varieties? The answer might depend on the choice of the moduli space in which we are allowed to deform our manifold. Already for curves, this leads to interesting questions. In the Deligne-Mumford moduli space M_g, any loop inside a smooth curve can be contracted along a deformation towards a nodal (stable) curve, provided that the genus g>1. What happens if one restricts to a chosen linear system on a toric surface? Degree d curves in the projective plane, for instance. There, two obstructions occur: the loop should not be separating for d>2 (Bezout), and the Dehn twist along the loop should preserve a certain spin structure on the curve for d odd (see [Beau]). In [Beau], Beauville proves (in particular) that any non-obstructed loop is homologous to a vanishing cycle. In this talk, we suggest a tropical proof of Beauville's result as well as an extension to any (big enough) linear system on any smooth toric surface. This problem is directly related to the monodromy group given by the complement of the discriminant in the considered linear system. The proof will involve simple Harnack curves, introduced by Mikhalkin, and monodromies given by partial tropical compactifications of the linear system. If time permits, we will also discuss this problem at the isotopic level, a problem that is still open.

[Beau]: Le groupe de monodromie des familles universelles d'hypersurfaces et d'intersections complètes. A. Beauville, 1986. [Do]: Polynomials, vanishing cycles and Floer homology. S.K. Donaldson, 2000.

2016, Monday, 21 March, 16:30, Battelle.

Boris Shapiro (Stockholm)

On the Waring problem for polynomial rings

We discuss a natural analog of the classical Waring problem for $C[x_1,…,x_n]$. Namely, we show that a general form p from $C[x_1,…,x_n]$ of degree kd, where k>1, can be represented as a sum of at most $k^n$ k-th powers of forms of degree d. Noticeably, $k^n$ coincides with the number obtained by a naive dimension count if d is sufficiently large.

2016, Friday, March 18, 14:15, Villa Battelle.

Sergey Galkin (Moscow)

Gamma conjectures and mirror symmetry

I will speak about an exotic integral structure in the cohomology of Fano manifolds that conjecturally can be expressed in terms of Euler's gamma function, how one can observe it by computing asymptotics of a quantum differential equation, and how one can prove the conjectures using mirror symmetry. This is joint work with Vasily Golyshev and Hiroshi Iritani (1404.6407, 1508.00719).

2016, Thursday, March 17, Colloquium, Villa Battelle

Vassily Golyshev (Moscow), 16:15

Around the gamma conjectures.

Abstract: We will state the gamma conjectures for Fano manifolds and explain how quantum cohomology makes it possible to enhance the classical Riemann-Roch-Hirzebruch theorem by relating the curve count on a variety to its characteristic classes. We will indicate how the gamma conjectures are proved in the known cases.

2016, Monday, 14 March, 16:30, Battelle.

E. Abakoumov (Paris-Est)
Growth of proper holomorphic maps and tropical power series

How fast can a proper holomorphic map, say, from C to C^n, grow? It turns out that tropical power series appear naturally in answering this question, as well as in some related approximation problems on the complex plane. The talk is based on joint work with E. Dubtsov.

2015, Tuesday, 8 December, 14:30, Battelle. (joint with the seminar "Groupes de Lie et espaces des modules")

Bernd Sturmfels (UC Berkeley)

Exponential Varieties

Exponential varieties arise from exponential families in statistics. These real algebraic varieties have strong positivity and convexity properties, familiar from toric varieties and their moment maps. Another special class, including Gaussian graphical models, are inverses of symmetric matrices satisfying linear constraints. We present a general theory of exponential varieties, with a focus on those defined by hyperbolic polynomials. This is joint work with Mateusz Michalek, Caroline Uhler, and Piotr Zwiernik.

2015, Tuesday, December 8, 11:15-12:15, Battelle.

Tropical geometry: a graphical interface for the GW/Hurwitz correspondence.

In their study of the Gromov-Witten theory of curves [OP], Okounkov and Pandharipande used the degeneration formula to express stationary descendant invariants of curves in terms of Hurwitz numbers and one-point descendant relative invariants. They then used operator formalism to organize the combinatorics of the degeneration formula and the one-point invariants into completed cycles. In joint work with Paul Johnson, Hannah Markwig and Dhruv Ranganathan, we revisit their formalism and show that the Feynman diagrams that are secretly behind the scenes in [OP] are in fact tropical curves. This yields some mild refinements of the Gromov-Witten/Hurwitz correspondence of [OP]. Time permitting, we will describe how a generalization of these techniques should lead to unveiling a similar structure in the stationary/descendant GW theory of sliceable surfaces.

2015, Monday, 7 December, 16:15, Villa Battelle

Israel Vainsencher (Universidade Federal de Minas Gerais, Brasil)

Legendrian curves

A twisted cubic curve in 3-space is known to define a (non-integrable) distribution of planes. The planes of the distribution osculate the original twisted cubic. We show how to define virtual numbers N_d which enumerate the rational curves of degree d that are tangent to that distribution and further meet 2d+1 general lines. (Based on Eden Amorim's thesis.)

The next lecture of the course

"Imaginary time in Kaehler geometry, quantization and tropical amoebas" by José Mourão

will be on Monday 9 November, 17:00, Battelle.

2015, October 27, 15:15 and October 29, 16:15, and November 2, 17:00, Villa Battelle

(Minicourse) Imaginary time in Kaehler geometry, quantization and tropical amoebas.

José Mourão, Mathematics Department, Instituto Superior Tecnico, Portugal.

For a compact Kaehler manifold $M$ and a function $H$ on $M$ we give a simple definition of the continuation of the flow defined by $H$ to complex time, $\tau$, using the Groebner theory of Lie series. The resulting complexified (or complex time) symplectomorphisms are diffeomorphisms for some $|\tau| < R_H$. For larger values of $|\tau|$ they may correspond, e.g., to the collapse of $M$ to a totally real submanifold.
Simple examples will be discussed.

Kaehler geometry applications: imaginary time symplectomorphisms correspond to Mabuchi geodesics in the infinite-dimensional space of Kaehler metrics with fixed cohomology class. We thus get an explicit way of constructing Mabuchi geodesics from Hamiltonian flows.

Quantum theory applications: by lifting the imaginary time symplectomorphisms to the quantum bundle we get generalized coherent state transforms and are able to study the unitary equivalence of quantizations corresponding to nonequivalent polarizations.

Tropical geometry applications: for toric varieties the toric geodesics of the Mabuchi metric are straight lines in the space of Guillemin-Abreu symplectic potentials. Taking a strictly convex function $H$ (as a function on the moment polytope), one has that, for large geodesic times s, there is a simple relation between the moment map $\mu_s$ and the $Log_t$ map of amoeba theory ($t = e^s$). This relation further simplifies if one takes as $H$ the full symplectic potential, which is continuous but not smooth on $M$ and corresponds to a geodesic of Kaehler metrics with cone angle singularities. The tropical limit thus corresponds, in this setting, to the infinite geodesic time limit for convex Hamiltonians.

Lecture 1 (introduction and different definitions of complex time evolution): http://www.math.tecnico.ulisboa.pt/~jmourao/talkscourses/Lectures_UG_L1.pdf

Lecture 2: Kaehler tropicalization of C^*: http://www.math.tecnico.ulisboa.pt/~jmourao/talkscourses/Lectures_UG_L2.pdf

Lecture 3: Kaehler tropicalization of C and (strange) actions of G_C on Kaehler structures: http://www.math.tecnico.ulisboa.pt/~jmourao/talkscourses/Lectures_UG_L3.pdf

Lecture 4: C^infty Kaehler tropicalization of toric varieties and of hypersurfaces in toric varieties: http://www.math.tecnico.ulisboa.pt/~jmourao/talkscourses/Lectures_UG_L4.pdf

Lecture 5: C^0 Kaehler tropicalization of toric varieties and of hypersurfaces in toric varieties: http://www.math.tecnico.ulisboa.pt/~jmourao/talkscourses/Lectures_UG_L5.pdf

2015, October 27, Tuesday, 15:15, Villa Battelle (together with the seminar "Groupes de Lie et espaces des modules")

Imaginary time in Kaehler geometry, quantization and tropical amoebas.

José Manuel Cidade Mourão, Mathematics Department, Instituto Superior Tecnico, Portugal.

For a compact Kaehler manifold $M$ and a function $H$ on $M$ we define a continuation of the Hamiltonian flow of $H$ to complex time $\tau$. The resulting complexified (or complex time) symplectomorphisms are diffeomorphisms for some $|\tau| < R_H$. For larger values of $|\tau|$ they may correspond, e.g., to the collapse of $M$ to a totally real submanifold. We'll discuss some simple examples and applications to Kaehler geometry, quantization and tropical geometry. This talk is the first lecture of a mini-course to be given during October-November 2015.

2015, October 5, Monday, 16:20, Villa Battelle

Tropicalization of Poisson-Lie groups

Anton Alexeev (UniGe)

In the first part of the talk, we recall the notion of Poisson-Lie groups and cluster coordinates for some simple examples.

In the second part, we use the notion of tropicalization to construct completely integrable systems, and for the Poisson-Lie group SU(n)^* we match it with the Gelfand-Zetlin integrable system.

The talk is based on joint works with I. Davydenkova, M. Podkopaeva and A. Szenes.
2015, September 28, Monday, 16:15, Villa Battelle

What is moonshine?

Sergey Galkin (HSE, Moscow)

I will describe a few instances of geometric moonshine: the surprising appearance of modular forms and sporadic groups as the answers to seemingly unrelated geometric and topological questions.

2015, 21 September, Monday, 16:15, Villa Battelle.

Cohomology of superforms on polyhedral complexes and Poincaré duality for tropical manifolds

Kristin Shaw.

Superforms, introduced by Lagerberg, are bigraded differential forms on $\mathbb R^n$ which can be restricted to polyhedral complexes. We extend these forms to $\mathbb T^n = [-\infty, \infty)^n$ and show that their de Rham cohomology is equivalent to tropical $(p, q)$ cohomology. Furthermore, we establish Poincaré duality for the cohomology of tropical manifolds. As in the classical theory, the Poincaré pairing can be formulated in terms of integration of superforms.

Old page of the seminar: http://www.unige.ch/math/folks/langl/fables/
[ null, "https://www.unige.ch/math/tggroup/lib/exe/indexer.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8630935,"math_prob":0.9224606,"size":48292,"snap":"2019-51-2020-05","text_gpt3_token_len":11651,"char_repetition_ratio":0.13388419,"word_repetition_ratio":0.097662196,"special_character_ratio":0.20935145,"punctuation_ratio":0.1275551,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.975214,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-26T03:09:32Z\",\"WARC-Record-ID\":\"<urn:uuid:193c65f1-94b3-41d0-96fe-6e490389d793>\",\"Content-Length\":\"68109\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2e67f209-9dea-419b-af40-c006d2547d95>\",\"WARC-Concurrent-To\":\"<urn:uuid:b186a3c0-9672-4be2-8a36-029f2771cbca>\",\"WARC-IP-Address\":\"129.194.6.50\",\"WARC-Target-URI\":\"https://www.unige.ch/math/tggroup/doku.php?id=fables\",\"WARC-Payload-Digest\":\"sha1:3KL4U7HJQDKB2IDB4RYBYBSG6ODWD6OY\",\"WARC-Block-Digest\":\"sha1:VIYFM74VHMB4Z4YBE3HSOS7OLWVHCFBF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251684146.65_warc_CC-MAIN-20200126013015-20200126043015-00492.warc.gz\"}"}
http://www.ux1.eiu.edu/~cfadd/1360/23EFields/Continu.html
[ "# Due to Continuous Charge Distributions\n\nCoulomb's Law tells us the force between two point charges. Our variation tells us the Electric field due to a single point charge. What do we do if we have a continuous charge distribution? We can sum up the electric field caused by each tiny, infinitesimal part of the charge distribution. This means an integral over the charge distribution:", null, "For a single point charge Q, we had", null, "where r is the distance from the charge Q. Remember, E is only the magnitude of the electric field; we must take care of its vector nature separately! That's important! Now we have a distribution of charge and we must replace Q by dQ and E by dE -- and take care of the direction of E.", null, "", null, "r, the distance from the tiny, elemental, infinitesimal charge dQ to the point in question, is a function of where that charge dQ is. And, what does it mean to \"integrate over the charge dQ\"? We know how to integrate over a variable like dx, or a plane like dA = dx dy or dA = 2", null, "r dr or dA = r d", null, "dr, or a volume like dV = dx dy dz. So we will need to change from this symbolic charge dQ to a charge density multiplied by some spatial differential,\n\ndQ =", null, "dx\n\ndQ =", null, "dA\n\ndQ =", null, "dV\n\nLook at Example 23.7 in Serway's and Beichner's textbook (p.724): A rod of length", null, "has a uniform charge per unit length", null, "and a total charge Q. Calculate the electric field at a point P along the axis of the rod, a distance d from one end.", null, "What is the electric field at point P because of a little piece of charge", null, "Q located at position x, as shown in the sketch?", null, "E = k", null, "Q / x2\n\nWe will carry out an integration from x = d to x = d +", null, "so we need to change this small amount of charge", null, "Q into a small length", null, "x,", null, "Q =", null, "", null, "x", null, "E = k (", null, "", null, "x) / x2\n\nwhere", null, "= Q /", null, "dE = k (", null, "dx) / x2\n\ndE = k", null, "(dx / x2)", null, "", null, "", null, "", null, "", null, "", null, "", null, "What about other geometries?\n\nLook at Example 23.8, on page 724, of Serway's and Beichner's text. Find the electric field due to a ring of charge: A ring of radius a has a uniform charge density with a total charge Q. Calculate the electric field along the axis of the ring at a point P, a distance x from the center of the ring.", null, "The charge density is", null, "= Q / (2", null, "a)\n\nRemember, our equation for the electric field is for the magnitude of the electric field. Consider a little piece of charge dq as sketched in the diagram. Because that charge dq is there, there is an electric field dE at point P in the direction shown. The component dEx of that electric field along the direction of the axis perpendicular to the plane of the ring is\n\ndEx = dE cos", null, "dEx = dE (x/r)\n\ndEx = [k dq/r2] (x/r)\n\ndEx = [k dq/r3] x\n\ndEx = [k x dq/r3]\n\ndEx = [k x/r3] dq\n\nNotice that, with this geometry, once the radius of the ring a is specified and the position x, that fully specifies r. r and x do not change as we integrate over dq.\n[[ Remember, SQRT() means \"square root of ()\" because that is easier for me to type. ]]\n\nr = SQRT(a2 + x2)\n\nr3 = (a2 + x2)3/2\n\n1/r3 = 1/(a2 + x2)3/2", null, "", null, "", null, "", null, "Remember, x and a are not variables.\n\nWhat about the component of E that is perpendicular to this direction? By symmetry that component is zero. 
From the diagram, you can see that for each element of charge dq, there is another element of charge dq on the opposite side of the ring that causes an electric field that just cancels the first one -- that is, their components perpendicular to the axis of symmetry just cancel. Notice that their components along the axis do not cancel for they lie in the same direction. Diagrams are very important. Don't start writing equations before you have made good, clear, complete diagrams!\n\nNow that we have looked at the electric field because of a ring of charge, we can build upon that and extend our ideas and look at the electric field due to a disk of charge. Look at Example 23.9, on page 725 of Serway's and Beichner's textbook.\n\nA disk of radius R has a uniform charge per unit area", null, ". Calculate the electric field at a point P that lies along the central axis of the disk and a distance x from its center.", null, "Consider a ring of charge as sketched here. The ring has a radius of r and a thickness of dr and carries a charge of dq. But that charge dq is just proportional to the area,\n\ndq =", null, "dA\n\ndq =", null, "[C dr]\n\ndq =", null, "[ (2", null, "r) dr ]\n\ndq = 2", null, "", null, "r dr\n\nThink back on what we just did in the previous example. For charge Q on a ring of radius r, we found that the electric field at a distance x from the plane of the ring was", null, "That is exactly what we have now -- except we have charge dq instead of Q and the ring has a radius of r instead of a. So we can write", null, "", null, "", null, "", null, "", null, "Be careful; the limits of integration are important.", null, "We could look this up in a table of integrals. But a variable substitution is still fairly direct and straightforward;", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "This result is only valid for x > 0 and must be modified slightly for x < 0." ]
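As a quick sanity check on the ring result (our addition, not part of the original notes), the field can be summed numerically over discrete charge elements. Here is a minimal sketch in Go, with k, Q, a and x all set to illustrative unit-scale values:

```go
package main

import (
	"fmt"
	"math"
)

// Numerically sum the on-axis field of a uniformly charged ring and
// compare with the closed form Ex = k x Q / (a^2 + x^2)^(3/2).
func main() {
	const (
		k = 1.0  // Coulomb constant (set to 1 for the check)
		Q = 1.0  // total charge on the ring
		a = 0.5  // ring radius
		x = 1.0  // distance from the ring's center along the axis
		N = 1000 // number of discrete charge elements dq
	)

	r := math.Sqrt(a*a + x*x) // distance from each dq to the field point
	sum := 0.0
	for i := 0; i < N; i++ {
		dq := Q / N
		// axial component of each contribution: (k dq / r^2) * (x / r)
		sum += k * dq / (r * r) * (x / r)
	}

	closed := k * x * Q / math.Pow(a*a+x*x, 1.5)
	fmt.Printf("numerical: %.6f  closed form: %.6f\n", sum, closed)
}
```

Because every element dq sits at the same distance r from the axial point, the discrete sum reproduces the closed form exactly; this is the same symmetry argument made above.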
[ null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/Fig23.15.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldEq.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldEq2.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldEq3.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/pi.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/theta.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/lambda.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/sigma.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/rho.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/LtrEl.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/lambda.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/Fig23.16.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/delta.gif", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/delta.gif", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/delta.gif", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/LtrEl.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/delta.gif", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/delta.gif", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/delta.gif", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/lambda.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/delta.gif", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/delta.gif", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/lambda.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/delta.gif", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/lambda.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/LtrEl.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/lambda.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/lambda.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldEq4.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldEq5.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldEq6.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldEq7.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldEq8.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldEq9.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldEq10.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/Fig23.17.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/lambda.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/pi.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/theta.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldRingEq01.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldRingEq02.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldRingEq03.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldRingEq04.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/sigma.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/Fig23.18.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/sigma.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/sigma.jpg", null, 
"http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/sigma.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/pi.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/pi.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/sigma.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/EFldRingEq04.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq01.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq02.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq03.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq05.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq06.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq06b.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq07.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq08.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq09.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq10.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq11.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq12.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq13.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq14.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq15.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq16.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq17.jpg", null, "http://www.ux1.eiu.edu/~cfadd/1360/23EFields/23Images/ElFldDskEq18.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92007005,"math_prob":0.9977291,"size":4823,"snap":"2019-13-2019-22","text_gpt3_token_len":1224,"char_repetition_ratio":0.15625648,"word_repetition_ratio":0.040041067,"special_character_ratio":0.25502798,"punctuation_ratio":0.08438409,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99812806,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,5,null,3,null,9,null,6,null,1,null,3,null,9,null,1,null,9,null,9,null,9,null,3,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,3,null,9,null,9,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,9,null,5,null,3,null,2,null,2,null,2,null,3,null,6,null,1,null,6,null,6,null,6,null,5,null,5,null,6,null,3,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-20T12:07:05Z\",\"WARC-Record-ID\":\"<urn:uuid:f91dd0f5-b923-4b92-9e1e-7938d25d2147>\",\"Content-Length\":\"13375\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b8129df-d63f-4b2f-bc66-1166b20b5ce1>\",\"WARC-Concurrent-To\":\"<urn:uuid:fb7cf2ad-7f5c-42c6-bf09-b7f353e0961f>\",\"WARC-IP-Address\":\"139.67.8.135\",\"WARC-Target-URI\":\"http://www.ux1.eiu.edu/~cfadd/1360/23EFields/Continu.html\",\"WARC-Payload-Digest\":\"sha1:2HJQCNCGMZIDIXXBACVKMZTV4V4JBE6M\",\"WARC-Block-Digest\":\"sha1:4UJ2FLGGX2LTCMIL2RHAROVTVABCL3EF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202326.46_warc_CC-MAIN-20190320105319-20190320131319-00447.warc.gz\"}"}
https://whatisconvert.com/234-centimeters-in-meters
[ "# What is 234 Centimeters in Meters?\n\n## Convert 234 Centimeters to Meters\n\nTo calculate 234 Centimeters to the corresponding value in Meters, multiply the quantity in Centimeters by 0.01 (conversion factor). In this case we should multiply 234 Centimeters by 0.01 to get the equivalent result in Meters:\n\n234 Centimeters x 0.01 = 2.34 Meters\n\n234 Centimeters is equivalent to 2.34 Meters.\n\n## How to convert from Centimeters to Meters\n\nThe conversion factor from Centimeters to Meters is 0.01. To find out how many Centimeters in Meters, multiply by the conversion factor or use the Length converter above. Two hundred thirty-four Centimeters is equivalent to two point three four Meters.\n\n## Definition of Centimeter\n\nThe centimeter (symbol: cm) is a unit of length in the metric system. It is also the base unit in the centimeter-gram-second system of units. The centimeter practical unit of length for many everyday measurements. A centimeter is equal to 0.01(or 1E-2) meter.\n\n## Definition of Meter\n\nThe meter (symbol: m) is the fundamental unit of length in the International System of Units (SI). It is defined as \"the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second.\" In 1799, France start using the metric system, and that is the first country using the metric.\n\n## Using the Centimeters to Meters converter you can get answers to questions like the following:\n\n• How many Meters are in 234 Centimeters?\n• 234 Centimeters is equal to how many Meters?\n• How to convert 234 Centimeters to Meters?\n• How many is 234 Centimeters in Meters?\n• What is 234 Centimeters in Meters?\n• How much is 234 Centimeters in Meters?\n• How many m are in 234 cm?\n• 234 cm is equal to how many m?\n• How to convert 234 cm to m?\n• How many is 234 cm in m?\n• What is 234 cm in m?\n• How much is 234 cm in m?" ]
https://graz.pure.elsevier.com/en/publications/a-tauberian-theorem-for-ideal-statistical-convergence
[ "# A Tauberian theorem for ideal statistical convergence\n\nMarek Balcerzak, Paolo Leonetti*\n\n*Corresponding author for this work\n\nResearch output: Contribution to journalArticlepeer-review\n\n## Abstract\n\nGiven an ideal I on the positive integers, a real sequence (xn) is said to be I-statistically convergent to ℓ provided that n∈N:[Formula presented]|{k≤n:xk∉U}|≥ε∈Ifor all neighborhoods U of ℓ and all ε>0. First, we show that I-statistical convergence coincides with J-convergence, for some unique ideal J=J(I). In addition, J is Borel [analytic, coanalytic, respectively] whenever I is Borel [analytic, coanalytic, resp.]. Then we prove, among others, that if I is the summable ideal {A⊆N:∑a∈A1∕a<∞} or the density zero ideal {A⊆N:limn→∞[Formula presented]|A∩[1,n]|=0} then I-statistical convergence coincides with statistical convergence. This can be seen as a Tauberian theorem which extends a classical theorem of Fridy. Lastly, we show that this is never the case if I is maximal.\n\nOriginal language English 83-95 13 Indagationes Mathematicae 31 1 https://doi.org/10.1016/j.indag.2019.10.001 Published - Jan 2020\n\n## Keywords\n\n• Generalized density ideal\n• Ideal statistical convergence\n• Maximal ideals\n• Submeasures\n• Tauberian condition\n\n## ASJC Scopus subject areas\n\n• Mathematics(all)\n\n## Fingerprint\n\nDive into the research topics of 'A Tauberian theorem for ideal statistical convergence'. Together they form a unique fingerprint." ]
https://www.filmannex.com/blogs/understanding-matlab-commands/276567
[ "# understanding Matlab Commands\n\nPosted on at\n\nmatlab is a powerful tool on which we can perform or apply different algorithms. this is very much powerful almost used everywhere in the industry, without this nothing is possible and due to this powerfulness this tool is used also in industry and military for performing different tasks and projects and this is the reason that is also used in the aero space industry for manufacturing different types of planes and their control.\n\nQuestion # 01\n\n% Program P1_7 % Generate the input signal\n\nclear all; close all; clc;\n\nn =0:100;\n\ns1 = cos(2*pi*0.05*n); % A low frequency sinusoid\n\ns2 = cos(2*pi*0.47*n); % A high frequency sinusoid\n\nx = s1+s2; % Implementation of the moving average filter\n\nM = input('Desired length of the filter = ');\n\nnum = ones(1,M);\n\ny = filter(num,1,x)/M; % Display the input and output signals\n\nsubplot(2,2,1);\n\nplot(n,s1);\n\naxis([0, 100, -2, 2]);\n\nxlabel('Time index n');\n\nylabel('Amplitude');\n\ntitle('Signal # 1');\n\nsubplot(2,2,2);\n\nplot(n,s2);\n\naxis([0, 100, -2, 2]);\n\nxlabel('Time index n'); ylabel('Amplitude');title('Signal # 2');\n\nsubplot(2,2,3);\n\nplot(n,x); axis([0, 100, -2, 2]);\n\nxlabel('Time index n'); ylabel('Amplitude'); title('Input Signal');\n\nsubplot(2,2,4); plot(n,y); axis([0, 100, -2, 2]);\n\nxlabel('Time index n'); ylabel('Amplitude'); title('Output Signal');\n\nIn Command Window\n\nDesired length of the filter = 2", null, "As  this bhave as a low pass filter so its allow the low  frequency  signal S1 and suppressed the high frequency component  as S2\n\nQuestion #02\n\n% Program P1_7 % Generate the input signal\n\nclear all; close all; clc;\n\nn =0:100;\n\ns1 = cos(2*pi*0.05*n); % A low frequency sinusoid\n\ns2 = cos(2*pi*0.47*n); % A high frequency sinusoid\n\nx = s1+s2 % Implementation of the moving average filter\n\ny1=0.5*([x 0]+[0 x])\n\ny2=0.5*([x 0]-[0 x])\n\nsubplot(4,1,1)\n\nstem(y1)\n\nsubplot(4,1,2)\n\nstem(y2)\n\nsubplot(4,1,3)\n\nstem(s1)\n\nsubplot(4,1,4)\n\nstem(s2)", null, "By using Y1 and Y2 I recover s1 and s2…!\n\nQuestion #3\n\n(a)\n\nIn Command Window\n\nDesired length of the filter = 3\n\n(b)\n\nIn Command Window\n\nDesired length of the filter = 5\n\n(a)", null, "(b)by increasing the frequency high and by incresasing the value of M filter its supressed yhe high frequencies", null, "Question #4\n\nFrequency of s1=0.05\n\nFrequency of s2 = 0.47\n\ns1 = cos(2*pi*0.05*n);\n\ns2 = cos(2*pi*0.47*n);", null, "Frequency of s1 = 0.10\n\nFrequency of s2  =0.90\n\ns1 = cos(2*pi*0.10*n);\n\ns2 = cos(2*pi*0.90*n);", null, "Frequency of s1=0.30\n\nFrequency of s2=0.47\n\ns1 = cos(2*pi*0.30;\n\ns2 = cos(2*pi*0.47*n);", null, "Question #05\n\n% Program P1_2 % Generate the input sequences\n\nclose all;\n\nclear all; clc\n\nn =0:40;\n\na = 2;\n\nb = -3;\n\nx1 = cos(2*pi*0.1*n);\n\nx2 = cos(2*pi*0.4*n);\n\nx = a*x1 + b*x2;\n\nnum = [2.2403 2.4908 2.2403];\n\nden = [1 -0.4 0.75]; ic = [0 0];\n\n% Set zero initial conditions\n\ny1 = filter(num,den,x1,ic); % Compute the output y1[n]\n\ny2 = filter(num,den,x2,ic); % Compute the output y2[n]\n\ny = filter(num,den,x,ic); % Compute the output y[n]\n\nyt = a*y1 + b*y2; d=y-yt;% Compute the difference output d[n] % Plot the outputs and the difference signal\n\nsubplot(3,1,1)\n\nstem(n,y);\n\nylabel('Amplitude');\n\ntitle('Output Due to Weighted Input: a \\cdot+ x_{1}+[n]+ b \\cdot+ x_{2}+[n]');\n\nsubplot(3,1,2)\n\nstem(n,yt);\n\nylabel('Amplitude');\n\ntitle('Weighted Output: a \\cdot+ y_{1}+[n] + b 
\\cdot+y_{2}+[n]');\n\nsubplot(3,1,3)\n\nstem(n,d);\n\nxlabel('Time index n');\n\nylabel('Amplitude');\n\ntitle('Difference Signal')", null, "Yes this system is liner an if v add the both signal then its sum become zero..so both signals are equal.\n\nQuestion #06\n\n% Program P1_2 % Generate the input sequences\n\nclose all; clear all;\n\nclc\n\nn =0:40;\n\na = 2;\n\nb = -3;\n\nx1 = cos(2*pi*0.1*n);\n\ns1=[x1 0].*[0 x1];\n\nx2 = cos(2*pi*0.4*n);\n\ns2=[x2 0].*[0 x2];\n\nx = a*x1 + b*x2;\n\ns=[x 0].*[0 x];\n\nnum = [2.2403 2.4908 2.2403];\n\nden = [1 -0.4 0.75]; ic = [0 0]; % Set zero initial conditions\n\ny1 = filter(num,den,s1,ic); % Compute the output y1[n]\n\ny2 = filter(num,den,s2,ic); % Compute the output y2[n]\n\ny = filter(num,den,s,ic); % Compute the output y[n]\n\nyt = a*y1 + b*y2;\n\nd=y-yt;% Compute the difference output d[n]\n\n% Plot the outputs and the difference signal\n\ndd=0:length(n);\n\nsubplot(3,1,1)\n\nstem(dd,y);\n\nylabel('Amplitude');\n\ntitle('Output Due to Weighted Input: a\\cdotx_{1}[n]+b\\cdotx_{2}[n]');\n\nsubplot(3,1,2)\n\nstem(dd,yt);\n\nylabel('Amplitude');\n\ntitle('Weighted Output: a.y_{1}[n] + b.y_{2}[n]');\n\nsubplot(3,1,3)\n\nstem(dd,d); xlabel('Time index n'); ylabel('Amplitude');\n\ntitle('Difference Signal')\n\nD = y[n] - yt[n] not equal to zero so both are not equal..", null, "And the system is not linear.\n\nQuestion #o7\n\n% Program P1_3 % Generate the input sequences\n\nclose all; clear all;\n\nclc\n\nn = 0:40;\n\nD = 10;a = 3.0;\n\nb = -2;\n\nx = a*cos(2*pi*0.1*n) + b*cos(2*pi*0.4*n);\n\nxd = [zeros(1,D) x];\n\nnum = [2.2403 2.4908 2.2403];\n\nden = [1 -0.4 0.75];\n\nic = [0 0];% Set initial conditions % Compute the output y[n]\n\ny = filter(num,den,x,ic); % Compute the output yd[n]\n\nyd = filter(num,den,xd,ic); % Compute the difference output d[n]\n\nd=y- yd(D+1:41+D); % Plot the outputs\n\nsubplot(3,1,1)\n\nstem(n,y); ylabel('Amplitude'); title('Output y[n]');grid;\n\nsubplot(3,1,2);\n\nstem(n,yd(1:41)); ylabel('Amplitude');\n\ntitle(['Output Due to Delayed Input x[n ', num2str(D),']']);\n\ngrid;\n\nsubplot(3,1,3);\n\nstem(n,d); xlabel('Time index n'); ylabel('Amplitude');\n\ntitle('Difference Signal');grid;", null, "This iz time invariant system..and both signals are anot equal in sequence..\n\nQuestion #08\n\n% Program P1_3 % Generate the input sequences\n\nclose all; clear all; clc; n = 0:40; D = 10;a = 3.0;b = -2;\n\nx = a*cos(2*pi*0.1*n) + b*cos(2*pi*0.4*n);\n\ns=[n.*x 0]+[0 x];\n\nxd = [zeros(1,D) s];\n\nnum = [2.2403 2.4908 2.2403];\n\nden = [1 -0.4 0.75];\n\nic = [0 0];% Set initial conditions % Compute the output y[n]\n\ny = filter(num,den,s,ic); % Compute the output yd[n]\n\nyd = filter(num,den,xd,ic); % Compute the difference output d[n]\n\nd=y- yd(D+1:42+D); % Plot the outputs\n\nsubplot(3,1,1)\n\nstem(y); ylabel('Amplitude'); title('Output y[n]');grid;\n\nsubplot(3,1,2);\n\nstem(yd(1:42)); ylabel('Amplitude');\n\ntitle(['Output Due to Delayed Input x[n ', num2str(D),']']);\n\ngrid;\n\nsubplot(3,1,3);\n\nstem(d); xlabel('Time index n'); ylabel('Amplitude');\n\ntitle('Difference Signal');grid;", null, "Yes the system is time invariant.." ]
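A note we add here (it is not part of the original lab hand-out): the low-pass behaviour seen in Questions 1-3 follows from the frequency response of the M-point moving average,

$$H(e^{j\omega}) = \frac{1}{M}\sum_{n=0}^{M-1} e^{-j\omega n} = \frac{e^{-j\omega (M-1)/2}}{M}\,\frac{\sin(M\omega/2)}{\sin(\omega/2)},$$

whose magnitude equals 1 at $\omega = 0$ and vanishes at $\omega = 2\pi k/M$ for $k = 1, \dots, M-1$. For M = 2 the null sits at $\omega = \pi$ (normalized frequency 0.5), which is why the component s2 at frequency 0.47 is almost completely removed, and larger M narrows the passband further.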
[ null, "https://cdn.bitlanders.com/users/galleries/294792/294792_gallery_53d691d267643_png_fa_rszd.jpg", null, "https://cdn.bitlanders.com/users/galleries/294792/294792_gallery_53d6920a01f16_png_fa_rszd.jpg", null, "https://cdn.bitlanders.com/users/galleries/294792/294792_gallery_53d692221c679_png_fa_rszd.jpg", null, "https://cdn.bitlanders.com/users/galleries/294792/294792_gallery_53d692432a821_png_fa_rszd.jpg", null, "https://cdn.bitlanders.com/users/galleries/294792/294792_gallery_53d6927ac0e5b_png_fa_rszd.jpg", null, "https://cdn.bitlanders.com/users/galleries/294792/294792_gallery_53d6929ada18f_png_fa_rszd.jpg", null, "https://cdn.bitlanders.com/users/galleries/294792/294792_gallery_53d692b60c471_png_fa_rszd.jpg", null, "https://cdn.bitlanders.com/users/galleries/294792/294792_gallery_53d692eb25809_png_fa_rszd.jpg", null, "https://cdn.bitlanders.com/users/galleries/294792/294792_gallery_53d693107ef22_png_fa_rszd.jpg", null, "https://cdn.bitlanders.com/users/galleries/294792/294792_gallery_53d693320e7c5_png_fa_rszd.jpg", null, "https://cdn.bitlanders.com/users/galleries/294792/294792_gallery_53d69356b0363_png_fa_rszd.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.50114965,"math_prob":0.9983523,"size":6015,"snap":"2021-43-2021-49","text_gpt3_token_len":2139,"char_repetition_ratio":0.14822824,"word_repetition_ratio":0.39800444,"special_character_ratio":0.4054863,"punctuation_ratio":0.22243467,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99958175,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T17:43:32Z\",\"WARC-Record-ID\":\"<urn:uuid:57338dfd-1487-4924-b604-3b07f193f4ee>\",\"Content-Length\":\"30747\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8e3b1ac8-caa5-4515-ad1b-ecb99c043e71>\",\"WARC-Concurrent-To\":\"<urn:uuid:ffcab619-d7d0-4679-98d4-183c724fe87a>\",\"WARC-IP-Address\":\"40.66.63.152\",\"WARC-Target-URI\":\"https://www.filmannex.com/blogs/understanding-matlab-commands/276567\",\"WARC-Payload-Digest\":\"sha1:2RLXES5FS7K6RJP5GWBM7S3CXNXZSV2U\",\"WARC-Block-Digest\":\"sha1:MQ63W7Z3A5ENLXT4K2AV53BS5MJYZXI4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585439.59_warc_CC-MAIN-20211021164535-20211021194535-00630.warc.gz\"}"}
https://codezup.com/insert-delete-copy-iterate-in-arrays-slices-golang/amp/
[ "# Insert, Delete, Copy, Iterate in Arrays & Slices | Golang\n\nHi, in this tutorial, we are going to talk about How to iterate, extract, insert & delete arrays & slices in Golang.\n\n## Slices Vs Arrays in Golang\n\nIn the last few tutorials, we have talked about What are arrays & slices & how we can declare & initialize in Golang. Now, we will continue with slices Vs Arrays in Golang.\n\n### Extract Part of Slice or Array\n\nEarlier, we talked about how to extract elements by index from array or slice. To extract a part of the array, we need to do slicing.\n\n``````\narr := []int{1,2,3,4,5}\n\nfmt.Println(arr[:2]) // [1,2]\n\nfmt.Println(arr[2:]) // [3,4,5]\n\nfmt.Println(arr[2:3]) // \n\nfmt.Println(arr[:]) // [1,2,3,4,5]``````\n\nSo, this is what slicing means. In slicing, the first number before colon(:) indicates the start index of the element to extract while the second number after colon(:) indicates the end index(not inclusive).\n\nSo, 0:3 means extracting all items from 0 indexes till the 3rd index excluding the 3rd index value. So, the total number of elements, in that case, will be 3 elements that are 0, 1, & 2.\n\nIf the starting index is 0, then you can omit the first number. Similarly, if you want an array till the end, then the second number after colon(:) can also be omitted.\n\nAnd if you want to get all elements in the array, then omit both numbers before & after the colon.\n\n### Iterate Through a Slice or Array\n\nIf you want to iterate through an array or slice, you can use the for-range loop that we have already discussed earlier.\n\n``````for i,v := range arr {\nfmt.Println(i, v)\n}\n\n// OUTPUT\n0 1\n1 2\n2 3\n3 4\n4 5``````\n\n### Create Copy of array or slice\n\nCreating copy in the case of the array is pretty straightforward. 
You just have to assign an array to a new variable & a copy will be created for you.\n\n``````arr1 := int{1,2,3,4,5}\n\narr2 =: arr1\n\nfmt.Println(arr1) // 1 2 3 4 5\n\nfmt.Println(arr2) // 1 2 3 4 5``````\n\nNow, if you make any change to any of the arrays, it will only affect the particular array where you are changing.\n\nBut, the same trick doesn’t work with slices.\n\nIf you try to assign the slice to a new variable the same as we do for arrays, then it will create another slice that points to the same array pointed by the original slice.\n\nSo, to create a copy of the slice, you need to use the copy() function.\n\n``````s1 := []int{1,2,3,4,5}\n\ns2 := make([]int, len(s1))\n\ncopy(s2, s1)\n\nfmt.Println(s1) // 1 2 3 4 5\nfmt.Println(s2) // 1 2 3 4 5``````\n\nSo, you need to first define the size of the new slice you want to copy & then you can use the copy() function after.\n\nThis copy() function examines the length of both slices & considers the minimum of these 2 lengths to copy only particular length elements.\n\n``````s1 := []int{1,2,3,4,5}\n\ns2 := make([]int, 2, 5)\n\ncopy(s2, s1)\n\nfmt.Println(s1) // 1 2 3 4 5\nfmt.Println(s2) // 1 2\n\n// If copied more than length\n\ns3 := make([]int, 10)\n\ncopy(s3, s1)\n\nfmt.Println(s1) // 1 2 3 4 5\nfmt.Println(s3) // 1 2 3 4 5 0 0 0 0 0``````\n\n### Inserting into slice\n\nGolang doesn’t support any built-in functions to add items into a slice.\n\nTo do this, we have to manually implement the insert() method using the append() function.\n\n``````func insert(original []int, index int, value int) ([]int, error) {\n// TODO\n}``````\n\nThis above insert() function takes 3 arguments: the original slice where we have to add an item, the index at where we have to insert & the value that we have to insert at that particular index.\n\nThen this function returns the modified slice & error if occurs.\n\nSo, we will slice all elements till [:index+1] & all the items from [index:]. Then we will append the items from [index:] to the items from [:index+1].\n\nAnd at last, we will replace the value at a particular index with what value we want to insert.\n\nSo, the complete code to insert the item to slice will be:\n\n``````func insert(orig []int, index int, value int) ([]int, error) {\nif index < 0 {\nreturn nil, errors.New(\"Index cannot be less than 0\")\n}\n\nif index >= len(orig) {\nreturn append(orig, value), nil\n}\n\norig = append(orig[:index+1], orig[index:]...)\norig[index] = value\n\nreturn orig, nil\n}\n\nt := []int{1, 2, 3, 4, 5}\n\nt, err := insert(t, 2, 9)\n\nif err == nil {\nfmt.Println(t) // 1 2 9 3 4 5]\n} else {\nfmt.Println(err)\n}``````\n\n### Remove Item from a slice\n\nRemoving an item from a slice is similar to adding an element to a slice, except it is easier or straightforward.\n\nWe will slice all elements till [:index] & all the items from [index+1:]. Then we will append the items from [index+1:] to the items from [: index].\n\nSo, the complete code to delete item to slice will be:\n\n``````func delete(orig []int, index int) ([]int, error) {\nif index < 0 || index >= len(orig) {\nreturn nil, errors.New(\"Index cannot be less than 0\")\n}\n\norig = append(orig[:index], orig[index+1:]...)\n\nreturn orig, nil\n}\n\nt := []int{1, 2, 3, 4, 5}\n\nt, err := delete(t, 2)\n\nif err == nil {\nfmt.Println(t) // 1 2 4 5]\n} else {\nfmt.Println(err)\n}``````\n\nSo, this is it for this tutorial on Slices & Arrays in Golang. I hope you guys like the tutorial, feel free to drop any comments in the comment section down below." ]
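As a complement to the copy section above, here is a small self-contained sketch (ours, not from the original article) showing why plain assignment merely aliases a slice while copy() produces an independent one:

```go
package main

import "fmt"

func main() {
	s1 := []int{1, 2, 3, 4, 5}

	alias := s1 // shares the same backing array as s1
	alias[0] = 99

	clone := make([]int, len(s1))
	copy(clone, s1) // copies the elements into a separate backing array
	clone[1] = 42

	fmt.Println(s1)    // [99 2 3 4 5] -- changed through alias
	fmt.Println(alias) // [99 2 3 4 5]
	fmt.Println(clone) // [99 42 3 4 5] -- independent of s1 after the copy
}
```

Writing to alias is visible through s1, because both slice headers point at the same underlying array; writing to clone is not.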
https://byjus.com/rs-aggarwal-solutions-class-6-maths-chapter-9-linear-equations-in-one-variable/
[ "", null, "# RS Aggarwal Solutions for Class 6 Maths Chapter 9 Linear Equations in One Variable\n\n## Class 6 RS Aggarwal Chapter 9 – Linear Equations in One Variable\n\nRS Aggarwal Solutions for Class 6 Maths Chapter 9 are provided here. Students are advised to go through the RS Aggarwal Solutions for Class 6 Maths to prepare well for the exam and to gain high marks. Students need to practice RS Aggarwal Solutions diligently to score high marks. By going through RS Aggarwal Solutions, students will clearly understand the chapter. Practising textbook questions will help the students in boosting their self- confidence and to understand the topics discussed in this chapter in detail.\n\nStudents are advised to go through the RS Aggarwal Solutions for Class 6 Chapter 9 which have been solved by the BYJU’S experts in pdf format. Download pdf of Class 6 Chapter 9 in their respective links.\n\n## Download PDF of RS Aggarwal Solutions for Class 6 Chapter 9 Linear Equations in One Variable", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "## Exercise 9A PaGE no: 138\n\n1. Write each of the following statements as an equation:\n\n(i) 5 times a number equals 40.\n\n(ii) A number increased by 8 equals 15.\n\n(iii) 25 exceeds a number by 7.\n\n(iv) A number exceeds 5 by 3.\n\n(v) 5 subtracted from thrice a number is 16.\n\n(vi) If 12 is subtracted from a number, the result is 24.\n\n(vii) Twice a number subtracted from 19 is 11.\n\n(viii) A number divided by 8 gives 7.\n\n(ix) 3 less than 4 times a number is 17.\n\n(x) 6 times a number is 5 more than the number.\n\nSolution\n\n(i) Let required number be x\n\n5 times a number = 5x\n\n∴ 5 times a number equals 40 can be written as 5x = 40\n\n(ii) Let the number be x\n\nA number increased by 8 = x + 8\n\n∴ A number increased by 8 equals 15 can be written as x + 8 = 15\n\n(iii) Let the number be x\n\n25 exceeds a number = 25 – x\n\n∴ 25 exceeds a number by 7 can be written as 25 – x = 7\n\n(iv) Let the required number be x\n\nA number exceeds 5 = x – 5\n\n∴ A number exceeds 5 by 3 can be written as x – 5 = 3\n\n(v) Let the required number be x\n\nThrice a number = 3x\n\n5 subtracted from thrice a number = 3x – 5\n\n∴ 5 subtracted from thrice a number is 16 can be written as 3x – 5 = 16\n\n(vi) Let the number be x\n\n12 subtracted from a number = x – 12\n\n∴ If 12 is subtracted from a number, the result is 24 can be written as x – 12 = 24\n\n(vii) Let the number be x\n\nTwice a number = 2x\n\nTwice a number subtracted from 19 = 19 – 2x\n\n∴ Twice a number subtracted from 19 is 11 can be written as 19 – 2x = 11\n\n(viii) Let the number be x\n\nA number divided by 8 = x / 8\n\n∴ A number divided by 8 gives can be written as x / 8 = 7\n\n(ix) Let he number be x\n\n4 times a number = 4x\n\n3 less than 4 times a number = 4x – 3\n\n∴ 3 less than 4 times a number is 17 can be written as 4x – 3 = 17\n\n(x) Let the number be x\n\n6 times a number = 6x\n\n5 more than the number = x + 5\n\n∴ 6 times a number is 5 more than the number can be written as 6x = x + 5\n\n2. 
Write a statement for each of the equations, give below:\n\n(i) x – 7 = 14\n\n(ii) 2y = 18\n\n(iii) 11 + 3x = 17\n\n(iv) 2x – 3 = 13\n\n(v) 12y – 30 = 6\n\n(vi) 2z / 3 = 8\n\nSolutions\n\n(i)The statement of equation x – 7 = 14 can be written as 7 less from the number x is 14\n\n(ii) The statement of equation 2y = 18 can be written as twice a number y is 18\n\n(iii) The statement of equation 11 + 3x = 17 can be written as 11 increased by thrice a number x is 17\n\n(iv) The statement of equation 2x – 3 = 13 can be written as 3 less from twice the number x is 13\n\n(v) The statement of equation 12y – 30 = 6 can be written as 12 times the number y decreased by 30 is 6\n\n(vi) The statement of equation 2z / 3 = 8 can be written as twice the number z divided by 3 is 8\n\n## Exercise 9B Page no: 143\n\nSolve each of the following equations and verify the answer in each case:\n\n1. x+ 5 = 12\n\nSolution\n\nGiven x + 5 = 12\n\nSubtracting -5 from both sides\n\nx + 5 – 5 = 12 – 5\n\nx = 7\n\nCheck\n\nSubstituting x = 7 in equation x + 5 = 12\n\nWe get\n\n7 + 5 = 12\n\n12 = 12\n\nLHS = RHS\n\n∴ LHS = RHS, when x = 7\n\n2. x + 3 = -2\n\nSolution\n\nGiven\n\nx + 3 = – 2\n\nSubtracting -3 from both sides\n\nx + 3 – 3 = -2 – 3\n\nx = -5\n\nCheck\n\nSubstituting x = -5 in equation x + 3 = – 2\n\nWe get,\n\nx + 3 = -2\n\n-5 + 3 = -2\n\n-2 = -2\n\nLHS = RHS\n\n∴ LHS = RHS, when x = -5\n\n3. x – 7 = 6\n\nSolution\n\nGiven\n\nx – 7 = 6\n\nx – 7 + 7 = 6 + 7\n\nx = 13\n\nCheck\n\nSubstituting x = 13 in equation x -7 = 6\n\nWe get,\n\nx – 7 = 6\n\n13 – 7 = 6\n\n6 = 6\n\nLHS = RHS\n\n∴ LHS = RHS, when x = 13\n\n4. x – 2 = -5\n\nSolution\n\nGiven\n\nx – 2 = -5\n\nx – 2 + 2 = -5 + 2\n\nx = -3\n\nCheck\n\nSubstituting x = -3 in equation x – 2 = -5\n\nWe get,\n\nx – 2 = -5\n\n-3 – 2 = -5\n\n-5 = -5\n\nLHS = RHS\n\n∴ LHS = RHS, when x = -3\n\n5. 3x – 5 = 13\n\nSolution\n\nGiven\n\n3x – 5 = 13\n\n3x – 5 + 5 = 13 + 5\n\n3x = 18\n\nx = 18 / 3\n\nx = 6\n\nCheck\n\nSubstituting x = 6 in equation 3x – 5 = 13\n\nWe get,\n\n3x – 5 = 13\n\n3 (6) – 5 = 13\n\n3 × 6 – 5 = 13\n\n18 – 5 = 13\n\n13 = 13\n\nLHS = RHS\n\n∴ LHS = RHS, when x = 6\n\n6. 4x + 7 = 15\n\nSolution\n\nGiven\n\n4x + 7 = 15\n\nSubtracting 7 from both sides\n\n4x + 7 – 7 = 15 – 7\n\n4x = 8\n\nx = 8 / 4\n\nx = 2\n\nCheck\n\nSubstituting x = 2 in equation 4x + 7 = 15\n\nWe get,\n\n4x + 7 = 15\n\n4 (2) + 7 = 15\n\n4 × 2 + 7 = 15\n\n8 + 7 = 15\n\n15 = 15\n\nLHS = RHS\n\n∴ LHS = RHS, when x = 2\n\n7. x / 5 = 12\n\nSolution\n\nGiven\n\nx / 5 = 12\n\nMultiplying both sides by 5\n\nx / 5 × 5 = 12 × 5\n\nx = 60\n\nCheck\n\nSubstitute x = 60 in equation x / 5 = 12\n\n60 / 5 = 12\n\n12 = 12\n\nLHS = RHS\n\n∴ LHS = RHS, when x = 60\n\n8. 3x / 5 = 15\n\nSolution\n\nGiven\n\n3x / 5 = 15\n\nMultiplying both sides by 5\n\n3x / 5 × 5 = 15 × 5\n\n3x = 75\n\nx = 75 / 3\n\nx = 25\n\nCheck\n\nSubstitute x = 25 in equation 3x / 5 = 15\n\n3x / 5 = 15\n\n3 × 25 / 5 = 15\n\n3 × 5 = 15\n\n15 = 15\n\nLHS = RHS\n\n∴ LHS = RHS, when x = 25\n\n9. 5x – 3 = x + 17\n\nSolution\n\nGiven\n\n5x – 3 = x + 17\n\nTransposing x to LHS and -3 to RHS\n\n5x – x = 17 + 3\n\n4x = 20\n\nx = 20 / 4\n\nx = 5\n\nCheck\n\nSubstituting x = 5 in equation 5x – 3 = x + 17\n\n5x -3 = x + 17\n\n5 (5) – 3 = 5 + 17\n\n5 × 5 – 3 = 22\n\n25 – 3 = 22\n\n22 = 22\n\nLHS = RHS\n\n∴ LHS = RHS, when x = 5\n\n10. 
2x – 1 / 2 = 3\n\nSolution\n\nGiven\n\n2x – 1 / 2 = 3\n\nAdding 1 / 2 to both sides\n\n2x – 1 / 2 + 1 / 2 = 3 + 1 / 2\n\n2x – 0 = (6 +1) / 2 [By taking LCM]\n\n2x = 7 / 2\n\nDividing both sides by 2\n\n2x / 2 = 7 / 2 × 2\n\nx = 7 / 4\n\nCheck\n\nSubstituting x = 7 / 4 in equation 2x – 1 / 2 = 3\n\n2x – 1 / 2 = 3\n\n2 (7 / 4) – 1 / 2 = 3\n\n2 × 7 / 4 – 1 / 2 = 3\n\n7 / 2 – 1 / 2 = 3\n\n(7 – 1) / 2 = 3\n\n6 / 2 = 3\n\n3 = 3\n\nLHS = RHS\n\n∴ LHS = RHS, when x = 7 / 4\n\n11. 3(x + 6) = 24\n\nSolution\n\nGiven\n\n3(x + 6) = 24\n\n3x + 18 = 24 [removing parentheses]\n\nSubtracting 18 from both sides\n\n3x + 18 – 18 = 24 – 18\n\n3x = 6\n\nx = 6 / 3\n\nx = 2\n\nCheck\n\nSubstituting x = 2 in equation 3(x + 6) = 24\n\n3(x + 6) = 24\n\n3(2 + 6) = 24\n\n3 (8) = 24\n\n3 × 8 = 24\n\n24 = 24\n\nLHS = RHS\n\nLHS = RHS\n\n∴ LHS = RHS, when x = 2\n\n12. 6x + 5 = 2x + 17\n\nSolution\n\nGiven\n\n6x + 5 = 2x + 17\n\nTransposing 2x to LHS and 5 to RHS\n\n6x – 2x = 17 – 5\n\n4x = 12\n\nx = 12 / 4\n\nx = 3\n\nCheck\n\nSubstituting x = 3 in equation 6x + 5 = 2x + 17\n\nLHS = 6x + 5\n\n= 6 (3) + 5\n\n= 6 × 3 + 5\n\n= 18 + 5\n\n= 23\n\nRHS = 2x + 17\n\n= 2 (3) + 17\n\n= 2 × 3 + 17\n\n= 6 + 17\n\n= 23\n\nLHS = RHS\n\n∴ LHS = RHS, when x = 3\n\n13. x / 4 – 8 = 1\n\nSolution\n\nGiven\n\nx / 4 – 8 = 1\n\nx / 4 – 8 + 8 = 1 + 8\n\nx / 4 = 9\n\nMultiplying both sides by 4\n\nx / 4 × 4 = 9 × 4\n\nx = 36\n\nCheck\n\nSubstituting x = 36 in equation x / 4 – 8 = 1\n\nx / 4 – 8 = 1\n\n36 / 4 – 8 = 1\n\n9 – 8 = 1\n\n1 = 1\n\nLHS = RHS\n\n∴ LHS = RHS, when x = 36\n\n## Exercise 9C PAGE no: 144\n\n1. If 9 is added to certain number, the result is 36. Find the number.\n\nSolution\n\nLet the number be x\n\n9 added to a number = x + 9\n\nGiven\n\nx + 9 = 36\n\nx = 36 – 9\n\nx = 27\n\n∴ The number when added to 9 gives 36 is 27\n\n2. If 11 is subtracted from 4 times a number, the result is 89. Find the number.\n\nSolution\n\nLet the number be x\n\n4 times a number = 4x\n\nGiven\n\n4x – 11 = 89\n\n4x = 89 + 11\n\n4x = 100\n\nx = 100 / 4\n\nx = 25\n\n3. Find a number which when multiplied by 5 is increased by 80.\n\nSolution\n\nLet the number be x\n\nMultiplied by 5 = 5x\n\nAccording to the question\n\n5x = x + 80\n\n5x – x = 80\n\n4x = 80\n\nx = 80 / 4\n\nx = 20\n\n∴ A number which when multiplied by 5 is increased by 80 is 20\n\n4. The sum of three consecutive natural numbers is 114. Find the numbers.\n\nSolution\n\nLet the three consecutive natural numbers be x, (x + 1), and (x + 2)\n\nGiven\n\nx + (x + 1) + (x + 2) = 114\n\nx + x + 1 + x + 2 = 114\n\n3x + 3 = 114 [subtracting 3 from both sides]\n\n3x + 3 – 3 = 114 – 3\n\n3x = 111\n\nDividing both sides by 3\n\n3x / 3 = 111\n\nx = 111 / 3\n\nx = 37\n\nx + 1 = 37 + 1\n\n= 38\n\nx + 2 = 37 + 2\n\n= 39\n\nThe three consecutive natural numbers are 37, 38 and 39\n\n5. When Raju multiplies certain number by 17 and adds 4 to the product, he gets 225. Find the number.\n\nSolution\n\nLet the number be x\n\nWhen multiplied by 17 becomes 17x\n\nGiven\n\n17x + 4 = 225\n\nSubtracting 4 from both sides\n\n17x + 4 – 4 = 225 – 4\n\n17x = 221\n\nDivide both sides by 17\n\n17x / 17 = 221 / 17\n\nx = 221 / 17\n\nx =13\n\n∴ The number is 13 when Raju multiplies by 17 and adds to the product, he gets 225\n\n6. If the number is tripled and the result is increased by 5, we get 50. 
Find the number.

Solution

Let the number be x

According to the question, when the number is tripled and increased by 5 we get 50

3x + 5 = 50

Subtracting 5 from both sides

3x + 5 – 5 = 50 – 5

3x = 45

Dividing both sides by 3

x = 45 / 3 = 15

∴ 15 is the number which, when tripled and increased by 5, gives 50

7. Find two numbers such that one of them exceeds the other by 18 and their sum is 92.

Solution

Let one of the numbers be x

The number exceeding it by 18 = x + 18

According to the question

x + (x + 18) = 92

2x + 18 = 92

Subtracting 18 from both sides

2x + 18 – 18 = 92 – 18

2x = 74

Dividing both sides by 2

x = 74 / 2 = 37

(x + 18) = 37 + 18 = 55

∴ The two numbers are 37 and 55

8. One out of two numbers is thrice the other. If their sum is 124, find the numbers.

Solution

Let one number be x; then the other number is 3x

According to the question

x + 3x = 124

4x = 124

Dividing both sides by 4

x = 124 / 4 = 31

3x = 3 × 31 = 93

∴ The required numbers are 31 and 93

9. Find two numbers such that one of them is five times the other and their difference is 132.

Solution

Let one number be x; then the other number is 5x

According to the question

5x – x = 132

4x = 132

Dividing both sides by 4

x = 132 / 4 = 33

5x = 5 × 33 = 165

∴ The required two numbers are 33 and 165

10. The sum of two consecutive even numbers is 74. Find the numbers.

Solution

Let one of the even numbers be x

The next consecutive even number is (x + 2)

As per the question

x + (x + 2) = 74

2x + 2 = 74

Subtracting 2 from both sides

2x + 2 – 2 = 74 – 2

2x = 72

Dividing both sides by 2

x = 72 / 2 = 36

(x + 2) = 38

∴ 36 and 38 are the two consecutive even numbers

11. The sum of three consecutive odd numbers is 21. Find the numbers.

Solution

Let one of the required odd numbers be x

The other two consecutive odd numbers are (x + 2) and (x + 4)

As per the question

x + (x + 2) + (x + 4) = 21

3x + 6 = 21

Subtracting 6 from both sides

3x + 6 – 6 = 21 – 6

3x = 15

Dividing both sides by 3

x = 15 / 3 = 5

x + 2 = 7 and x + 4 = 9

∴ 5, 7 and 9 are the three consecutive odd numbers

12. Reena is six years older than her brother Ajay. If the sum of their ages is 28 years, what are their present ages?

Solution

Let x years be the present age of Ajay

Since Reena is 6 years older than Ajay, her age is (x + 6) years

According to the question

x + (x + 6) = 28

2x + 6 = 28

Subtracting 6 from both sides

2x + 6 – 6 = 28 – 6

2x = 22

Dividing both sides by 2

x = 22 / 2 = 11

(x + 6) = 17

∴ The present age of Ajay is 11 years and Reena's age is 17 years
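Every word problem in this exercise follows the same pattern: introduce x, translate the sentence into a linear equation, then undo the operations on both sides. A small Python sketch of that pattern, an illustrative addition using Q4 (three consecutive natural numbers summing to 114):

```python
# Q4: x + (x + 1) + (x + 2) = 114 reduces to 3x + 3 = 114.
total = 114
x = (total - 3) // 3        # subtract 3 from both sides, then divide by 3
numbers = [x, x + 1, x + 2]
assert sum(numbers) == total
print(numbers)              # [37, 38, 39]
```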
### RS Aggarwal Solutions for Class 6 Maths Chapter 9 Linear Equations in One Variable

Chapter 9 – Linear Equations in One Variable consists of 3 exercises. The RS Aggarwal Solutions work through each question in every exercise in detail. The topics included in this chapter are:

• Systematic method for solving an equation
• Applications of equations

### Chapter Brief of RS Aggarwal Solutions for Class 6 Maths Chapter 9 – Linear Equations in One Variable

A linear equation is an equation in which the highest power of the variables is 1. Variables are denoted by letters such as a, b, c, x and y. There are four rules for solving an equation:

(i) We can add the same number to both sides of an equation

(ii) We can subtract the same number from both sides of an equation

(iii) We can multiply both sides of an equation by the same non-zero number

(iv) We can divide both sides of an equation by the same non-zero number

Linear equations can be solved easily by using these rules. They are used for comparing rates of pay, budgeting and making predictions.
[ null, "https://www.facebook.com/tr", null, "https://cdn1.byjus.com/wp-content/uploads/2019/10/rs-aggarwal-solution-for-class-6-maths-chapter-9-01.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/10/rs-aggarwal-solution-for-class-6-maths-chapter-9-02.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/10/rs-aggarwal-solution-for-class-6-maths-chapter-9-03.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/10/rs-aggarwal-solution-for-class-6-maths-chapter-9-04.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/10/rs-aggarwal-solution-for-class-6-maths-chapter-9-05.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/10/rs-aggarwal-solution-for-class-6-maths-chapter-9-06.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/10/rs-aggarwal-solution-for-class-6-maths-chapter-9-07.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/10/rs-aggarwal-solution-for-class-6-maths-chapter-9-08.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/10/rs-aggarwal-solution-for-class-6-maths-chapter-9-09.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/10/rs-aggarwal-solution-for-class-6-maths-chapter-9-10.jpg", null, "https://cdn1.byjus.com/wp-content/uploads/2019/10/rs-aggarwal-solution-for-class-6-maths-chapter-9-11.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8966271,"math_prob":1.0000075,"size":12363,"snap":"2019-51-2020-05","text_gpt3_token_len":5013,"char_repetition_ratio":0.17169674,"word_repetition_ratio":0.1732551,"special_character_ratio":0.44099328,"punctuation_ratio":0.04334939,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":1.0000057,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,2,null,2,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T22:50:13Z\",\"WARC-Record-ID\":\"<urn:uuid:8a1b48fe-d47d-40f1-b48b-f4c7d831e9c5>\",\"Content-Length\":\"578455\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ce221d17-7b8a-47f1-984e-0dadeb4417a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:818d178c-e934-437e-8391-3c1624dd40c9>\",\"WARC-IP-Address\":\"52.77.80.199\",\"WARC-Target-URI\":\"https://byjus.com/rs-aggarwal-solutions-class-6-maths-chapter-9-linear-equations-in-one-variable/\",\"WARC-Payload-Digest\":\"sha1:MVAGB6YPNNTVUDK4JB74VTXUUKQBCHXU\",\"WARC-Block-Digest\":\"sha1:AYKLZB5CSAZ2Z7UNPPDOQHJH7WPJM3AA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250606226.29_warc_CC-MAIN-20200121222429-20200122011429-00513.warc.gz\"}"}
https://codekomusic.com/qa/question-do-co-interior-angles-add-up-to-180.html
[ "", null, "# Question: Do Co Interior Angles Add Up To 180?\n\n## What does co interior angles mean?\n\nCo-interior angles lie between two lines and on the same side of a transversal.\n\nIn each diagram the two marked angles are called co-interior angles.\n\nIf the two lines are parallel, then co-interior angles add to give 180o and so are supplementary..\n\n## What is the difference between alternate interior angles and consecutive interior angles?\n\nIf they are on the same side, then the angles are considered consecutive. If they are on opposite sides, then the angles are considered alternate.\n\n## Are Cointerior angles equal?\n\nAlternate angles are always equal. Corresponding angles are always equal. Allied (or co-interior) angles are supplementary. Vertically opposite angles are always equal.\n\n## What is the sum of the interior angle of a triangle?\n\n180°Triangle/Sum of interior angles\n\n## Why do co interior angles add up to 180?\n\nIf the transversal cuts across parallel lines (the usual case) then the interior angles are supplementary (add to 180°). So in the figure above, as you move points A or B, the two interior angles shown always add to 180°.\n\n## Which polygon has an interior angle sum of 180?\n\nThe General RuleShapeSidesSum of Interior AnglesTriangle3180°Quadrilateral4360°Pentagon5540°Hexagon6720°6 more rows\n\n## What are the six types of angles?\n\nTypes of Anglesacute angle-an angle between 0 and 90 degrees.right angle-an 90 degree angle.obtuse angle-an angle between 90 and 180 degrees.straight angle-a 180 degree angle.\n\n## What type of angle is 180 degrees?\n\nstraight anglesAngles that are 180 degrees (θ = 180°) are known as straight angles. Angles between 180 and 360 degrees (180°< θ < 360°) are called reflex angles. Angles that are 360 degrees (θ = 360°) are full turn.\n\n## Are corresponding angles equal?\n\nWhen two lines are crossed by another line (which is called the Transversal), the angles in matching corners are called corresponding angles. When the two lines are parallel Corresponding Angles are equal. …\n\n## What are the 7 types of angles?\n\nThe different types of angles based on their measurements are: Acute Angle – An angle less than 90 degrees. Right Angle – An angle that is exactly 90 degrees….Types of Angles – Acute, Right, Obtuse, Straight and Reflex…Acute angle.Right angle.Obtuse angle.Straight angle.Reflex angle.\n\n## What is the rule for co interior angles?\n\nCo-interior Angles – are angles on the same side of the transversal and inside the parallel lines. The Consecutive Interior Angles Theorem states that if two parallel lines are cut by a transversal, then each pair of alternate interior angles is congruent.\n\n## What are co interior angles in a parallelogram?\n\nEach pair of co-interior angles are supplementary, because two right angles add to a straight angle, so the opposite sides of a rectangle are parallel. This means that a rectangle is a parallelogram, so: Its opposite sides are equal and parallel. Its diagonals bisect each other.\n\n## Why are same side interior angles supplementary?\n\nThe same-side interior angles theorem states that same-side interior angles are supplementary when the lines intersected by the transversal line are parallel. 2) Since the lines A and B are parallel, the same-side interior angles theorem states that same-side interior angles will be supplementary.\n\n## What is a zero angle?\n\nAn angle with a measure of zero degrees is called a zero angle. 
## What are the six types of angles?

Types of angles:

• Acute angle – an angle between 0 and 90 degrees
• Right angle – a 90 degree angle
• Obtuse angle – an angle between 90 and 180 degrees
• Straight angle – a 180 degree angle
• Reflex angle – an angle between 180 and 360 degrees
• Full angle – a 360 degree angle

## What type of angle is 180 degrees?

Angles that are 180 degrees (θ = 180°) are known as straight angles. Angles between 180 and 360 degrees (180° < θ < 360°) are called reflex angles. Angles that are 360 degrees (θ = 360°) are a full turn.

## Are corresponding angles equal?

When two lines are crossed by another line (which is called the transversal), the angles in matching corners are called corresponding angles. When the two lines are parallel, corresponding angles are equal.

## What are the 7 types of angles?

The different types of angles based on their measurements include: Acute angle – an angle less than 90 degrees; Right angle – an angle that is exactly 90 degrees; and the Obtuse, Straight and Reflex angles described above.

## What is the rule for co-interior angles?

Co-interior angles are angles on the same side of the transversal and inside the parallel lines. The Consecutive Interior Angles Theorem states that if two parallel lines are cut by a transversal, then each pair of consecutive (co-interior) angles is supplementary.

## What are co-interior angles in a parallelogram?

Each pair of co-interior angles is supplementary, because two right angles add to a straight angle, so the opposite sides of a rectangle are parallel. This means that a rectangle is a parallelogram, so: its opposite sides are equal and parallel, and its diagonals bisect each other.

## Why are same-side interior angles supplementary?

The same-side interior angles theorem states that same-side interior angles are supplementary when the lines intersected by the transversal line are parallel. Since the lines A and B are parallel, the theorem tells us that the same-side interior angles will be supplementary.

## What is a zero angle?

An angle with a measure of zero degrees is called a zero angle. If this is hard to visualize, consider two rays that form some angle greater than zero degrees, like the rays in the figure. Then picture one of the rays rotating toward the other ray until they both lie in the same line.

## Do interior angles add up to 180?

d and f are interior angles. These add up to 180 degrees (e and c are also interior). Any two angles that add up to 180 degrees are known as supplementary angles. Using some of the above results, we can prove that the sum of the three angles inside any triangle always adds up to 180 degrees.

## What do alternate interior angles look like?

When two lines are crossed by another line (called the transversal), alternate interior angles are a pair of angles on the inner side of each of those two lines but on opposite sides of the transversal. In this example, these are two pairs of alternate interior angles: c and f.

## What is the sum of the interior angles of a hexagon?

720°.

## How do you figure out angles?

Example:

Step 1: The two sides we know are Opposite (300) and Adjacent (400).
Step 2: SOHCAHTOA tells us we must use Tangent.
Step 3: Calculate Opposite / Adjacent = 300 / 400 = 0.75.
Step 4: Find the angle from your calculator using tan⁻¹.
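Completing the worked example: tan⁻¹(0.75) can be evaluated with any calculator or, as in this illustrative Python sketch, with the standard math module:

```python
import math

# Opposite = 300, Adjacent = 400, so tan(angle) = 300 / 400 = 0.75.
opposite, adjacent = 300, 400
angle = math.degrees(math.atan(opposite / adjacent))
print(round(angle, 2))  # 36.87 (degrees)
```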
[ null, "https://mc.yandex.ru/watch/66676240", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8868901,"math_prob":0.9584984,"size":5301,"snap":"2020-34-2020-40","text_gpt3_token_len":1212,"char_repetition_ratio":0.23673777,"word_repetition_ratio":0.1714922,"special_character_ratio":0.2303339,"punctuation_ratio":0.104228124,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976728,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T14:42:29Z\",\"WARC-Record-ID\":\"<urn:uuid:50981e90-8747-4090-ade9-2e54d9325d9d>\",\"Content-Length\":\"39926\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a05bc9a-8767-41be-aa65-f1e6641582d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:6990f1fd-6255-48af-a02f-98b6247261eb>\",\"WARC-IP-Address\":\"87.236.16.235\",\"WARC-Target-URI\":\"https://codekomusic.com/qa/question-do-co-interior-angles-add-up-to-180.html\",\"WARC-Payload-Digest\":\"sha1:LUCDHFSQTJR3PROKQ462M2XFLMYZN77W\",\"WARC-Block-Digest\":\"sha1:YCS6VPL3TK3CJTZMWL4JBI6QFSHZ5NDI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400279782.77_warc_CC-MAIN-20200927121105-20200927151105-00203.warc.gz\"}"}
https://percentage-calculator.net/x-is-y-percent-of-what-number/50-is-59.524-percent-of-what-number.php
[ "# 50 is 59.524 percent of what number?\n\nAnswer: 50 is 59.524 percent of 84\n\n## Fastest method for calculating 50 is 59.524 percent of what number\n\nAssume the unknown value is 'Y'\n\n50 = 59.524% x Y\n\n50 = 59.524 / 100 x Y\n\nMultiplying both sides by 100 and dividing both sides of the equation by 59.524 we will arrive at:\n\nY = 3 x 100 / 59.524\n\nY = 84%\n\nAnswer: 50 is 59.524 percent of 84\n\nIf you want to use a calculator, simply enter 50x100÷59.524 and you will get your answer which is 84\n\nYou may also be interested in:\n\nHere is a calculator to solve percentage calculations such as 50 is 59.524 percent of what number. You can solve this type of calculation with your own values by entering them into the calculator's fields, and click 'Calculate' to get the result and explanation.\n\nis\npercent of?\n\n## Have time and want to learn the details?\n\nLet's solve the equation for Y by first rewriting it as: 100% / Y = 59.524% / 50\n\nDrop the percentage marks to simplify your calculations: 100 / Y = 59.524 / 50\n\nMultiply both sides by Y to move Y on the right side of the equation: 100 = ( 59.524 / 50 ) Y\n\nSimplifying the right side, we get: 100 = 59.524 Y\n\nDividing both sides of the equation by 59.524, we will arrive at: 84 = Y\n\nThis leaves us with our final answer: 50 is 59.524 percent of 84" ]
https://www.teachoo.com/9503/2554/Example-4/category/Examples/
[ "Examples\n\nChapter 10 Class 7 Algebraic Expressions\nSerial order wise", null, "Learn in your speed, with individual attention - Teachoo Maths 1-on-1 Class\n\n### Transcript\n\nQuestion 1 Collect like terms and simplify the expression: 12m2 – 9m + 5m – 4m2 – 7m + 10 12m2 – 9m + 5m – 4m2 – 7m + 10 = 12m2 − 4m2 + 5m − 9m − 7m + 10 = m2 (12 − 4) + m (5 − 9 − 7) + 10 = m2 (8) + m (−4 − 7) + 10 = 8m2 + m (−11) + 10 = 8m2 − 11m + 10", null, "" ]
[ null, "https://d1avenlh0i1xmr.cloudfront.net/f3a9f7b1-dede-4f1d-b89d-34e8a810ab82/slide11.jpg", null, "https://www.teachoo.com/static/misc/Davneet_Singh.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7594236,"math_prob":0.99976546,"size":683,"snap":"2023-40-2023-50","text_gpt3_token_len":278,"char_repetition_ratio":0.17083947,"word_repetition_ratio":0.21192053,"special_character_ratio":0.44216692,"punctuation_ratio":0.024193548,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9959303,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,7,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T17:57:33Z\",\"WARC-Record-ID\":\"<urn:uuid:47f0a8db-b61a-4b12-b169-7ee249230947>\",\"Content-Length\":\"148377\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db054517-5541-4d55-bea2-c416ccd34b7d>\",\"WARC-Concurrent-To\":\"<urn:uuid:db4e4c73-15f5-4de7-8cf1-e414da01c343>\",\"WARC-IP-Address\":\"54.237.159.171\",\"WARC-Target-URI\":\"https://www.teachoo.com/9503/2554/Example-4/category/Examples/\",\"WARC-Payload-Digest\":\"sha1:FMIQV2GVFWLAHDKHRDC45W4G3MUKJ4BP\",\"WARC-Block-Digest\":\"sha1:6GTV63MI4I2KZZLY2USYPSF7TDJ2NHR2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506658.2_warc_CC-MAIN-20230924155422-20230924185422-00206.warc.gz\"}"}
https://anandavala.info/TASTMOTNOR/Computational%20Processes.html
[ "# Computation: Processes and Determinism\n\nBy John Ringland (www.anandavala.info)\n\nMain issues addressed in this discussion:\n\nIs a general ‘process’ or existential phenomenon conceptually equivalent to a computational process? Can this reality be thought of as a computational process?\n\n(I propose Yes)\n\nDoes a computational metaphysics imply a deterministic (clockwork) universe? Is everything pre-determined?\n\n(I propose No)\n\n# Abstract\n\nRegarding the first question, I argue that any system that can be characterised by a finite set of state variables can be represented as a state space of all possible configurations and a state transition mapping can be defined over that space. This is a form of computational process, which is described here, that can represent general systems. I show that the space of algorithmic processes is a subset of the set of general computational processes and that the space of general computational processes is equivalent to the space of general processes. Hence, although algorithmic processes are not able to implement all general processes, general computation can implement all general processes. Therefore, although not all general processes are equivalent to an algorithmic computational process, all general processes are equivalent to general computational processes.\n\nSMN is a deterministic transcendent process that manifests an empirical existential space. This empirical space is a space of pure potential existence – without any pre-determined structure. It is a space of pure potentiality within which any kind of universe can be represented.\n\nRegarding the second question, I argue that within the space of potential existence systems may exist. In the case of non-ergodic systems the state of these systems evolves according to what exists and happens (existential and causal data), thus the state of what happens (causal relations) also evolves – thus the empirical space is self-programming, (this is subtle, see later for details). It relates to the fact that any activity has two aspects, the active ‘process’ (or existential phenomenon) and the program (the data that defines the process). A process is determined by its program but the program may also be determined by the process, hence the process can be self-determined.\n\nThe empirical universe is still determined moment by moment but it is also self-determined rather than pre-determined. The moment by moment determinism leads to causal coherence and the self-determinism leads to the ability to both exercise one’s will and to decide one’s will. Hence one can choose one’s state of being, hence one is able to determine one’s own conditions of experience; it is this that underlies the concept of free will. Thus a deterministic transcendent process creates a programmable existential space in which empirical systems may determine their own existential states. The entire context is itself deterministic but from the perspective of the empirical systems it is the case that they can exert free will – they can act upon their will, they can decide their will, they can decide on how to decide their will and so on. 
The existential context is completely self-programmable and there is no deterministic algorithm determining the nature of empirical existence.

Some of the details of the mathematics of non-ergodic systems are still in development, so the second section is still a work in progress and is presented solely as an outline of the conceptual argument.

# Broad Computation has a Narrow Computational Basis

A re-expression of the first question:

Is 'computation' narrowly defined as deterministic, rule-based, algorithmic processing of data? Or can it be more broadly defined as the transformation of information and thereby related to any conceivable process?

To prove that processes ARE NOT computational it would be sufficient to identify a single system that could not be represented by any computational process. There are systems that are not representable by any algorithmic, rule-based process (narrowly defined computation), but is there some other computational process that can represent them?

To prove that processes ARE computational it would be necessary to define a computational process that can represent any conceivable system, so that any conceivable system can be equivalently thought of as a computational process.

In this section I provide the general outline of a proof that all processes can be thought of as computational processes. I define a computational process that can emulate any conceivable system; therefore any conceivable system can be equivalently thought of as a computational process. This is just an outline, since only the full implementation or detailed characterisation of such a computational process would constitute a complete proof.

## Introduction to System Matrix Notation (SMN)

SMN is complex and subtle so I will introduce it by degrees as required by this discussion. For a more complete introduction see the website (SMN Details). First I will discuss some of the general principles.

Before a system can be implemented or modelled, the system must be able to be represented in some form. There must be some finite set of state variables that describes the existential state and causal state of the system. If there exists a set of states that completely characterises the system then I call it 'representable' and I also say that there were 'sufficient' state variables.

Are all systems representable? Can all systems be characterised by some finite set of state variables? General relativity suggests that all dynamical values are finite in value; i.e. there is no infinite velocity or energy and so on. Quantum physics suggests that the values of all observable states are quantised; e.g. energy comes in discrete packets. System theory also suggests that all systems have a finite set of states that they may occupy. Finite state values imply a finite range of states, and discrete state values imply finite resolution or density of state values within the finite range. Hence there are a finite number of distinct states that the system can occupy (finite variation). Hence the system can be modelled within a finite state space as a finite computational process. But the proof that all systems are finite, discrete and representable is a complex issue.

The complete representation of a system requires both existential and causal states, i.e. what exists and what happens. The full characterisation of a system is equivalent to virtual creation, hence sufficient emulation is equivalent to implementation.
If a representable system can be completely characterised by a simulation, then because the equivalent simulation program is a computational process, that system is conceptually equivalent to a computational process. Consider to what degree a word processing program actually implements a word processor and to what degree it just models or emulates a word processor. If the emulation is sufficiently complete it can then be considered to be an implementation.

Any representable system can be modelled by SMN. SMN utilises a state vector (SV) that describes the existential state and a system matrix (SM) that describes the causal state. The SM describes a network of interaction channels that connect these states. The current existential state is input into the causal network, which then determines the next existential state. Thus the SV defines what exists and the SM defines what happens.

The SMN model can be thought of in terms of a network of inter-connected systems. An SV element represents the state of a system, a row is the input interface for the system and the intersecting column is the output interface. This is useful for engineering purposes but in this discussion we will look at it slightly differently.

The SV and SM can be thought of as a set of quantities and a set of information channels between the quantities. E.g. consider a set of reservoirs, each holding a certain quantity of fluid, and a set of pipes that connect these reservoirs, where each pipe has a particular capacity.

Simple Example:

[Figure: a 4×4 system matrix shown alongside the initial and next state vectors]

The above model consists of a 4×4 SM and two SVs. The left SV1 is the initial existential state and the right SV2 is the next existential state. Notice that the SM multiplied by SV1 results in SV2. Notice also that the columns of the SM all add to one, and both SVs add to thirteen. The normalised SM columns mean that the quantity is definitely distributed somewhere but it is under-determined (probabilistic) as to exactly where. The operation of the matrix on the state vector is to distribute the quantities through the pipes and thereby move the quantities around the network of reservoirs. The total quantity of substance is conserved and is just distributed by the network of channels. This example illustrates the general inter-connectivity and distribution functions of SMN.
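Since the figure itself has not survived, here is a minimal Python sketch of one SMN step as described: a column-normalised system matrix (the 'pipes') redistributes the quantities in the state vector (the 'reservoirs'). The particular numbers are invented stand-ins; only the stated constraints (columns summing to one, state vector summing to thirteen) come from the text.

```python
# One SMN iteration: SV2 = SM * SV1. Each SM column sums to one, so the
# total quantity in the state vector is conserved, merely redistributed.
SM = [
    [0.5, 0.0, 0.2, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.3, 1.0],
    [0.0, 0.0, 0.5, 0.0],
]
SV1 = [4, 3, 2, 4]  # total quantity: 13

SV2 = [sum(SM[r][c] * SV1[c] for c in range(4)) for r in range(4)]
print(SV2, sum(SV2))  # the total is still 13.0
```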
However SMN is a general class of algorithms, depending on how the system is represented and how the model is structured. This leads to two main representational methods: algorithmic (classical) and permutation space (quantum). The algorithmic approach represents distinct empirical states and distinct algorithmic relations between these states, whereas the state space method represents a probability distribution over the set of all possible states (permutations) and the transitions between them.

Aside from classical and quantum representation there are also two general types of modelling methods: ergodic and non-ergodic.

Ergodicity implies that the events that make up a process have a constant relation to each other. I.e. if in some state there is a 100% probability that this state is followed by another state, then that relation will apply in all such instances. For an ergodic system the causal programming is static – i.e. the elements in the SM do not change. An example of an ergodic process is traditional software, where the program itself does not change over time.

In non-ergodic SMN the SM elements can also evolve, thus the causal programming can evolve. Thus "that which exists" and "that which happens" can both evolve over time. Non-ergodic SMN is discussed briefly on the website (Non-Ergodic SMN). It is also covered in more detail later on in relation to the discussion on determinism, although that is still in development.

#### General Classes of SMN

| | Classical | Quantum |
|---|---|---|
| Ergodic | CESMN | QESMN |
| Non-Ergodic | CNSMN | QNSMN |

CESMN is the most computationally simple method, which has practical engineering applications. It has been implemented in software to illustrate its functionality as a general information processor.

QNSMN is the fully general and most powerful method, which is used in metaphysical analyses. It is the computational process that is proposed in this discussion to be a complete virtual-reality generative process that can simulate any system or process. QESMN is adequate for addressing the first issue, the computational nature of reality, but the full QNSMN is required to address the issue of determinism.

These four variations will be discussed below. The relation between classical and quantum representation illustrates the limitations of the algorithmic approach and the power of the permutation space approach. The relation between ergodic and non-ergodic systems illustrates the nature of determinism and its complexities.

### Algorithmic Representation

Algorithmic representation is possible when there is some set of equations or laws that defines the relations between the classical existential states. This is the traditional method of mathematical science and in the context of SMN it is referred to as Classical SMN (C_SMN).

This type of SMN operates on distinct empirical values that are directly represented. These values are transformed according to algorithmic relations between the variables.

See here for an example of A Simple Classical Particle implemented using classical SMN methods.

Notice that in this type of modelling the inner product of each row with the SV corresponds to a process that determines the next value of a particular state variable.

E.g. for a system with two state variables x and y:

x' = fx(x, y)

y' = fy(x, y)

Where 'f' indicates any computational process that takes x and y as input and produces a new value for either x or y. This need not be a simple mathematical equation, as for the simple classical particle; it can be any arbitrary process that takes the SV as input and produces a single appropriate state value. The row can be generalised as a program that operates on the SV and produces a single state value. In this way SMN can accommodate any arbitrary functions of the SV. Furthermore, as well as arbitrary functions in the SM, the SV contains arbitrary empirical data that is directly represented. This empirical data can be any data whatsoever.
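A sketch of this classical (C_SMN) reading in Python: each 'row' is an arbitrary program that maps the whole state vector to one new state value. The two functions below are invented for illustration (a toy damped particle) and are not taken from the linked examples.

```python
# Each row of the SM generalises to a function of the whole state vector
# that returns the next value of one state variable.
dt = 0.1

def next_x(sv):            # x' = fx(x, v)
    return sv["x"] + sv["v"] * dt

def next_v(sv):            # v' = fv(x, v)
    return sv["v"] * 0.99  # simple damping

sv = {"x": 0.0, "v": 1.0}
for _ in range(3):
    sv = {"x": next_x(sv), "v": next_v(sv)}
print(sv)
```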
See here for an example of this method being used to implement A Simple Spring and A Drink Machine.

This method can be used to implement models of classical algorithmic systems, which includes the class of all software programs, all traditional engineering applications and all system models of classical systems. Thus it has broad engineering applications.

The coherent, unified mathematical foundation opens up the possibility of using mathematical methods to operate on the system models. This could lead to advanced analysis and simulation methods, some of which have already been identified, such as:

• Modelling: Graphical modelling methods combined with model optimisation methods (more).
Automated modelling methods based upon observation of empirical behaviour (more).
• Analysis: Can explore and analyse the complete state space of a system (more).
Could produce error-free software that has been systematically verified (more).
• Simulation: Compression of multiple iterations into a single iteration (more).
Accelerated execution of certain processes (more).
Sparse matrices and energy-flow processing result in optimal representation and execution (more).

### Permutation Space Representation

The permutation space representation method is a more complete method, which makes it far more computationally demanding. Rather than represent particular empirical values it represents a probability distribution over all possible empirical values. Hence it is described as a quantum approach (Q_SMN). The permutation space is a state space where each point represents a particular state of the entire system. The permutation space maps the entire range of conceivable configurations of the system. The dynamics of the system is represented as transitions between states, thus there is also a probability mapping over the space of all state transitions.

For a brief discussion on quantum representation and simulation see A Quantum Logic Gate and the example of A NAND and XOR Logic Gate.

The SV is an actualisation probability distribution (a.p.d.) over the field of all possible existential states (what exists, the existential state). This is also known as a 'wavefunction' in quantum physics. The a.p.d. over the range of all possible states describes the potential for actualisation of each possible state.

The SM is a state transition probability distribution (s.t.p.d.) over the field of all possible transitions between existential states (what happens, the causal state). Thus the s.t.p.d. over the range of all possible state transitions describes the likelihood of any transition taking place.

This approach is similar to the earlier example of reservoirs containing some kind of fluid that are connected by pipes of different capacities. In the current case the reservoirs are quantum states (SV elements), the fluid is probability of actualisation (SV data), the pipes are state transitions (SM elements) and the pipe capacities are state transition probabilities (SM data).

In the NAND and XOR example the system matrix has the form:

[Figure: the NAND and XOR system matrix]

Where, for example, the top row of the SM indicates that if the system is in state (11) then the next state will definitely be (00). This is a very simple permutation space mapping but the SM elements may contain any probability values so long as the columns sum to one. Thus another possible state transition mapping, defining a far more complex two-qubit system, is:

[Figure: a non-collapsed state transition probability matrix for a two-qubit system]

This quantum example has a similar structure to the previous classical example but instead of the transitions being definite they are just most likely. Thus this system would exhibit behaviour similar to the previous system but it would also have more complex behaviours.
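A Python sketch of this permutation-space representation; the concrete transition rule, next state = (a NAND b, a XOR b), is my reconstruction of the lost figure, chosen because it sends state (11) to (00) exactly as described:

```python
# The state space of a two-bit system has four points. The SV is a
# probability distribution over them and the SM is column-stochastic:
# SM[row][col] = probability of moving to state `row` from state `col`.
states = ["00", "01", "10", "11"]

def step(s):
    a, b = int(s[0]), int(s[1])
    return f"{1 - (a & b)}{a ^ b}"   # (a NAND b, a XOR b)

SM = [[1.0 if step(src) == dst else 0.0 for src in states]
      for dst in states]

sv = [0.0, 0.0, 0.0, 1.0]            # definitely in state 11
sv = [sum(SM[r][c] * sv[c] for c in range(4)) for r in range(4)]
print(dict(zip(states, sv)))         # all probability now on state 00
```

Replacing the ones and zeros with intermediate probabilities (keeping each column summing to one) gives the 'blurred' non-collapsed variant described just above.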
In the classical case the variables are represented by distinct values. This is equivalent to a collapsed p.d. For example, if 'b' is a binary variable then b = 0 is equivalent to [p(b=0), p(b=1)] = [1, 0]. Thus there is a probability of one that b = 0 and a probability of zero that b = 1.

Thus a classical system is just a special case of a quantum system and for each classical system there is a corresponding quantum model, but the space of quantum models is much greater than the space of classical models, so not every quantum model corresponds to a classical model.

In both the SV and SM:

Classical distribution = focused, localised distribution

Quantum distribution = blurred, non-localised distribution

The SM is an s.t.p.d. over a permutation space, so the SM elements are all probabilities and each column is normalised to one (the cosmos definitely observes each state in some manner). The SV is also normalised to one (empirical states definitely exist in some form).

For any reasonably complex system there is a massive state space of possible permutations. The direct mapping of these permutations provides full generality and no algorithmic reliance. It is not a humanly practical method for simulating complex systems but it is a definite, finite (although vast) computational process that can simulate any complex system. The vastness of the process indicates the true complexity of reality, where even seemingly simple processes or phenomena can have significantly complex underlying state spaces.

The classical and quantum SMN methods can be combined in a single SMN model. Thus there may be quantum systems computing the state space mapping of a system and producing an a.p.d. over the range of all possible states. This can then be collapsed via a random process (wavefunction collapse) that results in only one classical state (a collapsed a.p.d.). This can then be represented as a distinct classical state variable and operated upon by classical systems that apply algorithmic transformations to the classical state.

A related example is A Two Tiered NAND Gate. It does not use classical systems but it does transform into and out of the permutation space. The variables are transported as individual a.p.d.'s, which are then merged into a permutation space where the collective state evolves and is then separated back into individual a.p.d.'s.

## Law Based Systems (C_SMN & Q_SMN)

For law-based systems there are definite algorithmic principles (laws, equations or algorithms) that determine the nature of the system, hence classical representation (C_SMN) can be used.

It is generally accepted that law-based (algorithmic) systems are conceptually equivalent to computational processes, so this is a good place to begin the analysis of general processes.

The SMN representation of law-based systems is described above in the section on Algorithmic Representation.

The example of the law-based system (the NAND and XOR system) illustrates an SMN model of a law-based system. Recall the law-based SMN model from above:

[Figure: the NAND and XOR system matrix]

Or the example of a 2-bit binary incrementer with overflow:

00 → 01 → 10 → 11 → 00

0 → 1 → 2 → 3 → 0

[Figure: the incrementer's system matrix]
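As a sketch, the incrementer's SM is nothing but a cyclic permutation matrix over the four states, which is exactly the kind of simplicity discussed next:

```python
# 2-bit incrementer with overflow as a state-transition matrix:
# a pure cyclic permutation 00 -> 01 -> 10 -> 11 -> 00.
N = 4
SM = [[1.0 if dst == (src + 1) % N else 0.0 for src in range(N)]
      for dst in range(N)]

sv = [1.0, 0.0, 0.0, 0.0]   # definitely in state 00
for _ in range(4):
    sv = [sum(SM[r][c] * sv[c] for c in range(N)) for r in range(N)]
print(sv)                   # after four steps: back to [1, 0, 0, 0]
```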
Notice that the SMs are very simple. This is a general property of the SMs of law-based systems. The SM is a probability distribution over the space of all possible state transitions. This can in general be very complex, but if there is some general law that describes the behaviour of the system in a compact form then the SM must contain some order or algorithmic symmetry.

Thus an algorithm is equivalent to an ordered or symmetric mapping over the space of state transitions. It encodes certain general relations between states that produce an ordered SM.

But the space of all possible SMs is far greater than the space of ordered SMs. If one takes a law-based ordered SM and changes an element in an arbitrary way, then that system can no longer be represented by the same algorithm and it is most likely that it cannot be represented in any compact algorithmic form.

Therefore algorithmic methods are adequate to represent law-based systems, but permutation space methods provide full generality and are adequate to represent general systems. Thus the space of broad computational processes (general processes) is greater than the space of algorithmic processes.

## Non Law Based Systems (Q_SMN)

For non-law-based systems there is no algorithmic representation and one must use the more general method of permutation space representation. The system has no symmetry or order for an algorithm to capture, hence the SM is described as non-symmetric or disordered.

Recall the SM from the earlier NAND and XOR system that is similar to the law-based SM but which also contains non-collapsed probability distributions. This is shown again below.

[Figure: the non-collapsed two-qubit state transition matrix]

Through comparison with the simple, symmetric, law-based SM above it should be clear that this SMN model represents an actual process, but this process cannot be described by any simple algorithm.

General systems cannot be represented by algorithmic methods but they can be represented by permutation space methods. Hence algorithmic computational processes are inadequate to model reality but permutation space methods are adequate. The empirical system may not be able to be represented by a simple law-based algorithm but it can still be represented by permutation space methods. Thus broad computation is more complex than a rule-based process.

## Conclusions Regarding Computation

The transcendent process (the SMN algorithm) is a narrow computational process (a deterministic algorithm) that can perform permutation space modelling of systems. Hence SMN is a narrow computational process that can represent arbitrary processes (broad computation). In this way SMN can represent general processes even though it is itself an algorithmic process.

The SMN algorithm itself and the SM and SV are transcendent 'machinery' that creates the potential for empirical systems to exist and interact. They create an undefined empirical space in which any conceivable state can exist and any conceivable event can happen. There is no intrinsic algorithmic programming that SMN imposes on the empirical space. The space arises and functions based upon an underlying algorithm but the space itself can be totally devoid of any influence from this. The machinery just creates the potential for empirical existence but it does not influence in any way what exists or happens.

For example, if the SM and SV elements are all zeros, this is a model of a universe in which nothing exists and nothing happens – this is non-existence. If the SM and SV are filled with uniform probability distributions then this is a universe within which something exists and happens, but that something could be anything – this is completely undefined existence. If the SM and SV are filled with non-uniform p.d.'s then this is a universe in which some things are more likely than others but things are not completely defined – this is under-defined existence (the quantum realm).
If the SM and SV are collapsed into a classical configuration then this is a universe within which things are completely defined (the classical realm).

The fact that SMN can create universes where nothing exists or happens, through to universes where everything that exists and happens is completely defined, indicates that SMN itself operates in the 'background' and provides the possibility of empirical existence without in any way conditioning empirical existence to take on a particular form or to behave in a particular way. Within the empirical space the simulated universe can conceivably take on any form.

Hence the transcendent process is a narrow computational process but it manifests a broad computational context in which any arbitrary empirical system can be represented. Hence broad computation has a narrow computational basis and any process is conceptually equivalent to the broad computational model plus the narrow computational transcendent process that implements it.

# Self-Determined Systems

Does a computational metaphysics imply a deterministic (clockwork) universe? Is everything pre-determined? (I propose No.)

In this section I introduce the concept of networks of non-ergodic systems. This leads to the concept of deterministic self-determinism. I first discuss types of determinism and then discuss ergodic systems. This is then developed further in a discussion on non-ergodic systems and ways of implementing and representing them. These are conceptually formed into a self-reflexive network that is self-determining. Then I discuss some more details on the issue of what actually determines the evolution of the universe and in what ways this impacts on our experience of existence. This section is still a work in progress and is released as an outline of the general conceptual argument, although some of the mathematical details are still in development.

#### What is determinism?

There are several aspects to the concept of determinism.

• Full Determinism: the idea that there exists a static 'program' (an ergodic computational process) that determines each existential state, and by extrapolating this program we can predict the future state of a system.
• Indeterminism: the idea that there exists a static 'program' (an ergodic computational process) that determines each existential state, but the program exhibits chaotic non-linear behaviour so there is no way of extrapolating. In order to predict future states one must run it.
• Self-Determinism: each existential moment determines the next moment but the mechanism by which this occurs is not pre-determined, i.e. there is no static pre-defined 'program' and the situation is self-programming. Hence it is self-determined instead of fully determined.
• Non-Determinism: the idea that either the universe is totally random from one moment to the next, or that there exists some non-determined process (a process without a program) that exerts free will and thereby determines the state of the universe in each moment according to its un-determined whim.

## Ergodic Systems (_ESM)

All the systems illustrated so far have been ergodic systems. Ergodic processes are processes with static probability distributions; hence the existential 'program' does not change. In SMN this corresponds to the SM elements having constant values. The state vector can change, thus representing a changing existential state, but the SM remains constant, hence the causal state remains constant.

Ergodic systems are fully pre-determined.
The causal programming (that which happens) is pre-defined. The network of causal interaction channels between systems is static. Thus systems remain in the same functional relations to each other.

## Non-Ergodic Systems (_NSM)

A non-ergodic system is one in which the causal process can also change. This changes the causal structure of the empirical space. Hence both what exists and what happens can evolve over time. There are many complexities in this situation, some of which are discussed below.

Consider a situation where there is a direct process that is the causal program but there is also a meta-process that controls the direct process. This is possible because the process is an active phenomenon in one respect but in another respect it is a 'program', i.e. data that determines the nature of the process. If this causal data changes then the causal process also changes.

[Figure: a non-ergodic SMN model with SV, SM, SMV and SMM]

This SMN model implements a non-ergodic system. The SV is the same as before. The SM is different in the sense that the values contained by the matrix elements are now contained in the SMV, and the SM elements are now references to these. The SMV contains the actual states of the SM elements. The SMM defines a process that controls the SMV.

The SV is the existential data. The SM is the causal process. The SMV defines the structure of the causal process. The SMM observes the state of the causal process and the existential state, and modifies the SMV and thereby the causal process.

See Non-Ergodic SMN for another description.

One could also implement SMN within SMN and just use a single ergodic matrix with the non-ergodic systems represented within the empirical SMN process.
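A minimal Python sketch of this non-ergodic arrangement: the matrix is itself mutable data, and a meta-rule rewrites it between steps. The particular meta-rule below (states that accumulate probability become 'stickier') is invented for illustration; it mimics the shape of the SMM/SMV mechanism rather than reproducing it.

```python
def renormalise(col):
    total = sum(col)
    return [v / total for v in col]

SM = [[0.5, 0.5],
      [0.5, 0.5]]            # causal data (the SMV contents)
sv = [1.0, 0.0]              # existential data

for _ in range(5):
    # Direct process: the causal data redistributes the existential data.
    sv = [sum(SM[r][c] * sv[c] for c in range(2)) for r in range(2)]
    # Meta process: re-program the causal data from the existential data.
    for c in range(2):
        SM[c][c] += 0.1 * sv[c]
        col = renormalise([SM[r][c] for r in range(2)])
        for r in range(2):
            SM[r][c] = col[r]

print(SM)  # the "program" has changed as a function of what happened
```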
This phenomenon of levels of control is related to the concept of direct and meta processes. The direct process is the primary process that implements the base-level functionality of the system, and there may also be other processes that monitor this direct process. For example, consider a control system program. It is aware of and responsive to the environment that is available to it through its input and output interfaces, but it is not aware of anything else, such as its own operational state or the meaning of its actions and so on. It has no feedback loops and is simply a direct pipeline for information transformation, hence it is described as the direct process.

If there were another meta-process operating, a control system that monitored the direct control system, then whilst the direct process is aware of the environment via the direct interfaces, the meta-system is aware of the direct process via its own interfaces. A system can only be 'aware' via its interfaces. If there is no interface then no information can flow and no computation or 'awareness' is possible.

At first these meta-systems can be thought of as forming a stack of successively higher-level feedback loops.

#### Ergodic / Non-Ergodic

[Figure: diagrams of an ergodic process and a non-ergodic process]

These diagrams are a shorthand for SMN processes. The external squares represent SV data, which represent either the state of an empirical system (existential data) or the state of a process (causal data). An internal square represents the static SM elements that implement an ergodic process. An ellipse represents a process or a region of the SM. A square joined to an ellipse represents the causal data (program) that implements the process. An arrow indicates the flow of information.

For example, in the ergodic case, there is existential data and an ergodic process that implements the causal process. In the non-ergodic case there is existential data, a programmable causal process and an ergodic process that programs the causal process.

In the non-ergodic case there is non-ergodicity of the causal process but there is still an ergodic foundation. This is the case for all such 'stacks' of processes; there is a static program that controls the program that controls the program that … etc.

It is more realistic to think of a complex network rather than a stack. This gives a system that is similar to a neural network. So consider the situation of a complex network of meta-systems, each of which monitors numerous other systems. The total system would not only implement causal dynamics in a direct manner, it would also be able to program its own state of functioning.

#### Self-Reflexivity

[Figure: a self-reflexive network of processes with no ergodic foundation]

If there is no "stack end" then there is no ergodic process. Every system programs other systems and every system is programmed by other systems. Thus there is no static program underlying the context and it is entirely self-programmable.

In this sense systems are able to determine their existential and causal states according to their will. Their will is also a deterministic process but the process may be self-determined. Hence the universe of systems is completely flexible and there are no truly static deterministic influences.

Consider the scenario of a nation and its governance. All of its actions as a whole are determined by the nature of its context: the nature of matter, human nature, the nature of ecological systems, technological systems, bureaucratic systems and so on. But if it is a 'free' nation it can exercise self-determinism. For example, laws determine many of the societal processes but these laws are able to be changed, and the process by which laws are changed can itself be changed, and so on. Although the societal processes determine the nature of the society, these processes are able to be determined by the society, hence it experiences freedom.

Consider the scenario of an individual addict. They may quit whenever they wish but the drug controls their will to some degree. Their internal decisions are driven by the addiction, and in order to exert free will they need to re-program their will. They need to exercise self-determinism to overcome the determinism imposed by the drug. It is not a matter of a disembodied "free will" trapped within a deterministic body; the entire situation can be re-programmed.

#### Complex Behaviour

Other factors that influence the experience of determinism or free will are the chaotic nature of complex non-linear systems – they are indeterminate and exhibit extreme sensitivity to conditions, thus a small perturbation can have a large influence. Furthermore, at the quantum level the state of the empirical universe is under-determined (probabilistic) and the process of determining the classical state is purely random.

If one visualises a process as a trajectory through the permutation space, then self-determinism means that the trajectories are always changing and they may attain any conceivable configuration over time. The chaos means that the trajectories are very complex and tangled; a small deviation in state at one point may send the system onto a trajectory that leads it far away from where it would have gone.
The probabilistic under-determinism means that there are no distinct trajectories but only blurred regions of interconnectedness. All of these lead to a very dynamic, complex and flexible state space.

It could be that the existential context described above is a deterministic process that can be driven by free will, much like how a car is a deterministic process that can be driven according to human will. Or perhaps there is no true "free will" as it is traditionally conceived of – perhaps there is only deterministic self-determinism and this leads us to experience free will.

The common concept of "free will" rests upon the supposition of a non-determined process, i.e. a process that has no program that structures it but which is determined by the momentary whim of a 'free' agent. This is represented as an empty ellipse in the diagrammatic language described above, since there is no program data at all. Such a process can impose decisions that rely upon no defined decision-making process. In my mind such a system is a myth that has no correspondence with real systems. I cannot conceive of how such a process could operate without any operational guidelines. It seems far more likely to me that all processes are determined but they are also self-determined, and it is from this self-determinism that the experience of 'freedom' arises and thus leads us to conceive of "free will".

## Evolution of Systems

What determines the evolution of the whole system?

If the universe consists of an ergodic stack then there is an end to the stack and therefore a deterministic foundation, thus the universe follows determined trajectories through state space. If there is a non-ergodic network then there is self-reflexivity and the universe follows a self-determined trajectory.

To what extent can the universe be said to follow trajectories, and to what degree do these trajectories determine the nature of the system?

Simple classical systems with collapsed p.d.'s are represented as a distinct point on a distinct trajectory. This leads to a landscape of classical trajectories in the permutation space and there is no intersection of paths. Thus the progression of any system depends upon the initial configuration and the system cannot deviate from its pre-determined trajectory.

If the system exhibits chaotic behaviour there is a complex "strange attractor" in the permutation space. Hence the system exhibits extreme sensitivity to conditions and even a slight change in the initial conditions can lead to vast differences in later conditions. Thus the trajectories are closely packed and entangled in some sense.

If there is a stochastic or random stage in the process, such as wavefunction collapse, this allows for path wandering. The system is no longer represented as a distinct point but as a distribution over a range of points; this means it partially traverses several paths and can collapse into any one of these. This leads to a permutation space with a blurred distribution rather than distinct trajectories.

The permutation space of a network of self-reflexive systems not only describes all existential permutations but also all causal permutations; hence it describes the complete space of cosmic variability. It encompasses the space of all states of being, all states of doing and all variations of being and doing – hence it can represent any conceivable state of the universe.

From what initial conditions could the simulation have started?

If it started from a particular collapsed (classical) SV that describes a particular cosmic configuration, then the universe will follow a particular path and may evolve from that path and can eventually attain any conceivable state. If it starts from a particular quantum SV that describes a particular distribution of cosmic configurations, then the universe will follow a particular range of paths and can evolve from there. If it starts from a uniform probability distribution then all possibilities are explored. For example, consider the NAND and XOR example above. If the initial SV is a uniform p.d. then:

[Figure: the uniform distribution evolving under the NAND and XOR system matrix]
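As a sketch, reusing the reconstructed NAND/XOR step from earlier (the transition rule remains an assumption), one can watch a uniform distribution immediately take on the character of the causal structure:

```python
states = ["00", "01", "10", "11"]

def step(s):
    a, b = int(s[0]), int(s[1])
    return f"{1 - (a & b)}{a ^ b}"

SM = [[1.0 if step(src) == dst else 0.0 for src in states]
      for dst in states]

sv = [0.25] * 4   # uniform p.d.: completely undefined existence
for _ in range(3):
    sv = [sum(SM[r][c] * sv[c] for c in range(4)) for r in range(4)]
    print(dict(zip(states, sv)))
```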
Thus in the first moment the state is completely undefined but it evolves from there into states that are characteristic of the causal structure of the space.

These probabilistic approaches are related to the idea of a quantum multiverse, where each possibility is an actuality in some 'parallel' universe. Hence all conceivable states of existence are explored and experienced. There is no actual collapse of the wavefunction into a single actuality; instead of just one possibility being explored, all possibilities are explored.

If every possibility is explored then there is no need for free will to decide what is experienced, because everything is experienced. The question "Why do we experience this particular universe?" could then only be answered by the anthropic principle – because this just happens to be the one in which these particular experiences are manifesting. If we occupied another universe in the multiverse then we would experience that instead of this one.

If not every possibility is an actuality then free will or self-determinism can exert influence over which universes become actual and are thereby experienced. By influencing the probabilities in the permutation space, self-reflexive systems can influence the state of what exists and what happens.

# Final Conclusions

All systems are representable and any representable system can be represented by permutation space approaches such as SMN.

SMN is a computational process, therefore any process is equivalent to a computational process.

SMN is a transcendent process that creates an empirical space of pure potential existence.

SMN can model self-determined systems without static programming.

The systems can be chaotic, stochastic, self-configuring, coherent systems with very complex state spaces.

The empirical space is completely self-programmable and can represent any conceivable state of what 'exists' and 'happens' in an empirical universe.

This scenario may constitute a deterministic context that can be 'driven' by a non-determined "free will", or it may be that systems are self-determining, or both of these may be anthropic perspectives on a multiverse.
[ null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image002.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image004.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image006.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image008.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image010.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image012.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image006.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image008.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image015.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image017.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image010.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image012.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image019.jpg", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image021.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image023.gif", null, "https://anandavala.info/TASTMOTNOR/Computational%20Processes_files/image025.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92251,"math_prob":0.9420372,"size":38540,"snap":"2022-05-2022-21","text_gpt3_token_len":7482,"char_repetition_ratio":0.17604318,"word_repetition_ratio":0.031117627,"special_character_ratio":0.18321225,"punctuation_ratio":0.07305869,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9751631,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,1,null,1,null,2,null,2,null,2,null,2,null,2,null,2,null,1,null,1,null,2,null,2,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-21T18:40:10Z\",\"WARC-Record-ID\":\"<urn:uuid:6237003d-9a70-4927-afcf-b1f5f1680989>\",\"Content-Length\":\"84971\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b8f8cc0-28f8-4911-aa77-7049ecd7a8ba>\",\"WARC-Concurrent-To\":\"<urn:uuid:d2367a95-e607-45a5-8751-2fa2e03f9339>\",\"WARC-IP-Address\":\"192.228.108.27\",\"WARC-Target-URI\":\"https://anandavala.info/TASTMOTNOR/Computational%20Processes.html\",\"WARC-Payload-Digest\":\"sha1:FG3UKHGSBJLVYUUM353R2JDLOKKCJERU\",\"WARC-Block-Digest\":\"sha1:DUZAGNXSAFM5UNGDE32NFO7ZKDYIND7I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662540268.46_warc_CC-MAIN-20220521174536-20220521204536-00213.warc.gz\"}"}
https://convertoctopus.com/117-7-feet-per-second-to-knots
[ "## Conversion formula\n\nThe conversion factor from feet per second to knots is 0.59248380129641, which means that 1 foot per second is equal to 0.59248380129641 knots:\n\n1 ft/s = 0.59248380129641 kt\n\nTo convert 117.7 feet per second into knots we have to multiply 117.7 by the conversion factor in order to get the velocity amount from feet per second to knots. We can also form a simple proportion to calculate the result:\n\n1 ft/s → 0.59248380129641 kt\n\n117.7 ft/s → V(kt)\n\nSolve the above proportion to obtain the velocity V in knots:\n\nV(kt) = 117.7 ft/s × 0.59248380129641 kt\n\nV(kt) = 69.735343412587 kt\n\nThe final result is:\n\n117.7 ft/s → 69.735343412587 kt\n\nWe conclude that 117.7 feet per second is equivalent to 69.735343412587 knots:\n\n117.7 feet per second = 69.735343412587 knots\n\n## Alternative conversion\n\nWe can also convert by utilizing the inverse value of the conversion factor. In this case 1 knot is equal to 0.01433993081648 × 117.7 feet per second.\n\nAnother way is saying that 117.7 feet per second is equal to 1 ÷ 0.01433993081648 knots.\n\n## Approximate result\n\nFor practical purposes we can round our final result to an approximate numerical value. We can say that one hundred seventeen point seven feet per second is approximately sixty-nine point seven three five knots:\n\n117.7 ft/s ≅ 69.735 kt\n\nAn alternative is also that one knot is approximately zero point zero one four times one hundred seventeen point seven feet per second.\n\n## Conversion table\n\n### feet per second to knots chart\n\nFor quick reference purposes, below is the conversion table you can use to convert from feet per second to knots\n\nfeet per second (ft/s) knots (kt)\n118.7 feet per second 70.328 knots\n119.7 feet per second 70.92 knots\n120.7 feet per second 71.513 knots\n121.7 feet per second 72.105 knots\n122.7 feet per second 72.698 knots\n123.7 feet per second 73.29 knots\n124.7 feet per second 73.883 knots\n125.7 feet per second 74.475 knots\n126.7 feet per second 75.068 knots\n127.7 feet per second 75.66 knots" ]
https://onlinejudge.org/board/viewtopic.php?f=41&t=24352&view=print
[ "Page 1 of 2\n\n### 11335 - Discrete Pursuit\n\nPosted: Sun Nov 04, 2007 3:34 am\nSorry.......\n\nPosted: Sun Nov 04, 2007 3:52 am\nRead the problem carefully. At time t=0, the only possible position of cop is (0,0) and velocity is (0,0). At time t=1, the possible velocity of the cop is (u,v) where -1<=u,v<=1 and the possible position is (x,y)=(0,0)+(u,v) where -1<=x,y<=1. For this problem it is sufficient to only know the maximum bounds on x,y,u,v.\nLet x[t]=maximum bound of x at time t, then\nx[t]=x[t-1]+t\n\n### need sample i/o\n\nPosted: Sun Nov 04, 2007 5:58 pm\nCan anyone give some sample i/o for 11335, plzzz? i am getting WA", null, "### Re: need sample i/o\n\nPosted: Mon Nov 05, 2007 12:48 am\ndeena sultana wrote:Can anyone give some sample i/o for 11335, plzzz? i am getting WA", null, "It might give the problem away if I give any more input/output. You could easily write a bruteforce bfs program to check your results.\n\n### Re: need sample i/o\n\nPosted: Mon Nov 05, 2007 12:27 pm\ndeena sultana wrote:Can anyone give some sample i/o for 11335, plzzz? i am getting WA", null, "and also it would be more helpful if you describe your algorithm first!!\n\nHint: consider x and y component separately.\n\nPosted: Mon Nov 05, 2007 12:58 pm\nwell, my algorithm is like this...\n1. for t=0, the object's position is (0,0) and u=0, v=0 and the thief's position is (a,0), as the problem states.\n2. then for t=1 the object will move at the position from where the distance of the thief's current position (at t=1) is minimum;\n3. repeat this process until the distance is 0.\n\nwont it work?\n\nPosted: Mon Nov 05, 2007 1:04 pm\nds wrote:the object will move at the position from where the distance of the thief's current position (at t=1) is minimum\nBy distance, do you mean the Manhattan distance or the Euclidean distance?\n\nAnd don't you take the current speed in consideration?\nWhat if you have two places with the same minimum distance, which point do you consider then?\n\nPosted: Mon Nov 05, 2007 1:19 pm\nops sorry for my incomplete description", null, "i've calculated the Manhattan distance, and also considered the current speed. but, in case of tie i 've chosen the 1st one :-S (may be this is the fault, no?)\n\nPosted: Mon Nov 05, 2007 9:38 pm\ndeena sultana wrote:ops sorry for my incomplete description", null, "i've calculated the Manhattan distance, and also considered the current speed. but, in case of tie i 've chosen the 1st one :-S (may be this is the fault, no?)\nYour algorithm is wrong. In any optimal solution, the direction of the cop doesn't change much. But in your algorithm, the cop can change direction.\n\nPosted: Tue Nov 06, 2007 12:01 pm\nyes, i understand", null, "it was a stupid algorithm", null, "sorry", null, "Posted: Tue Nov 06, 2007 5:12 pm\nHint: consider x and y component separately.\nWow! this hint is simply great! who r trying to solve 11335, plz think about it and have a pretty solution", null, "thanks to sohel. 
thanks to sclo too.

Posted: Wed Jan 16, 2008 9:12 pm
Is this problem solvable using Dynamic Programming? :oops:

Posted: Thu Jan 17, 2008 4:15 am
I don't think DP is suitable for this problem.

-----
Rio

### Re: 11335 - Discrete Pursuit

Posted: Fri Sep 05, 2008 8:12 pm
I honestly don't know where to begin to attack this problem, but it seems that this problem is really easy.

Could anyone give me a little hint about where to start?

By the way: I don't know what the other posters mean by the x component and the y component.

Thanks a lot.

### Re: 11335 - Discrete Pursuit

Posted: Sun Sep 07, 2008 2:04 am
Imagine you had to solve a simpler problem: the thief and the cop can only move along one axis. In that case, how would you calculate the minimum time needed for the cop to catch the thief in only that direction?

For example, imagine that the cop can't move along the y-axis (he can only move along the x-axis) and the same for the thief. If the thief is at position (0, a) and the cop is at position (0, 0), how would the cop move to catch him?

Now, imagine the opposite thing along the y-axis (that is, neither the cop nor the thief can move along the x-axis), and compute the time.

The final answer will be the maximum of these two values, because the cop can advance in both directions simultaneously and he can "work" on the two solutions at the same time.

Hope this helps. If I'm not clear enough please tell me so I can explain better.
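To make the per-axis hint concrete, here is a hedged Python sketch of one common reading of it. This is an illustration, not a verified accepted solution: it assumes the thief starts at (a, 0) with constant velocity (u, v), and that after t seconds a cop starting at rest can sit at any integer offset in [-t(t+1)/2, t(t+1)/2] along each axis, since its per-axis speed changes by at most 1 per second.

```
# Sketch of the "treat x and y separately" idea from the posts above.
def catch_time_1d(start: int, vel: int) -> int:
    """Smallest t with t*(t+1)//2 >= |start + vel*t| on a single axis."""
    t = 0
    while t * (t + 1) // 2 < abs(start + vel * t):
        t += 1
    return t

def catch_time(a: int, u: int, v: int) -> int:
    # The cop works on both axes at once, so the answer is the max.
    return max(catch_time_1d(a, u), catch_time_1d(0, v))

print(catch_time(1, 1, 1))  # 2: at t=2 the thief is at (3, 2), reachable
```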
[ null, "https://onlinejudge.org/board/images/smilies/icon_cry.gif", null, "https://onlinejudge.org/board/images/smilies/icon_cry.gif", null, "https://onlinejudge.org/board/images/smilies/icon_cry.gif", null, "https://onlinejudge.org/board/images/smilies/icon_razz.gif", null, "https://onlinejudge.org/board/images/smilies/icon_razz.gif", null, "https://onlinejudge.org/board/images/smilies/icon_frown.gif", null, "https://onlinejudge.org/board/images/smilies/icon_rolleyes.gif", null, "https://onlinejudge.org/board/images/smilies/icon_frown.gif", null, "https://onlinejudge.org/board/images/smilies/icon_smile.gif", null, "https://onlinejudge.org/board/images/smilies/icon_redface.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90410703,"math_prob":0.9416243,"size":4415,"snap":"2020-24-2020-29","text_gpt3_token_len":1272,"char_repetition_ratio":0.12604852,"word_repetition_ratio":0.18742293,"special_character_ratio":0.2903737,"punctuation_ratio":0.1554054,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9768959,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-24T23:48:15Z\",\"WARC-Record-ID\":\"<urn:uuid:5b2cfa1d-ce78-40b1-9996-f248fa7aaf79>\",\"Content-Length\":\"10769\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:83acaac3-85dc-46dd-bedb-b68eb3f9d17d>\",\"WARC-Concurrent-To\":\"<urn:uuid:ddecca29-2f3b-4bb7-a8e5-ce1ebcd2e378>\",\"WARC-IP-Address\":\"51.255.0.192\",\"WARC-Target-URI\":\"https://onlinejudge.org/board/viewtopic.php?f=41&t=24352&view=print\",\"WARC-Payload-Digest\":\"sha1:IS3HHN5MK52NON3GT6WLVNIID3R5H4CF\",\"WARC-Block-Digest\":\"sha1:7JZVDEURP2IFCBEFE3QEYCPHZDVFYNZE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347385193.5_warc_CC-MAIN-20200524210325-20200525000325-00255.warc.gz\"}"}
https://math.stackexchange.com/questions/2001193/combinatorics-sets-and-subsets-pigeonhole-principle/2001202
[ "# Combinatorics: sets and subsets (pigeonhole principle)\n\nShow that every subset with 6 elements of {1,2,3,4, ..., 9} contains 2 elements with sum 10.\n\nI solved this (solution below) but I want to do this easier using the pigeonhole principle.\n\nMy attempt:\n\nClaim: Every subset of {1,2,3, ..., 9} has either the following number combinations in it: (1,9),(2,8),(3,7),(4,6)\n\nProof:\n\nLet A be the set with subsets with 6 elements of {1,2, ..., 9} that have (1,9) in it. Let B be the set with subsets with 6 elements of {1,2, ..., 9} that have (2,8) in it. Let C be the set with subsets with 6 elements of {1,2, ..., 9} that have (3,7) in it. Let D be the set with subsets with 6 elements of {1,2, ..., 9} that have (4,6) in it.\n\n$|A \\cup B \\cup C \\cup D| = |A| + |B| + |C| + |D| - |A \\cap B| - |A \\cap C| - |A \\cap D| - |B \\cap D| - |B \\cap C| - |C \\cap D| + |A \\cap B \\cap C| + |A \\cap B \\cap D| + |B \\cap C \\cap D| + |A \\cap C \\cap D| - |A \\cap B \\cap C \\cap D| = 4\\binom{7}{4} - 6\\binom{5}{2} + 4\\binom{3}{0} + 0 = 84$\n\nBut, the total amount of subsets with 6 elements is $\\binom{9}{6} = 84$. We deduce that every subset has 2 elements with sum 10. QED.\n\nCan someone give a hint on a good way to do this using the pigeonhole principle? This is supposed to be an exercise that has to be solved using PHP and I believe there is a way easier approach.\n\n## 3 Answers\n\nI am not sure if this would qualify for as a PHP approach, but this is the first thing I noticed:\n\nYou have already identified that in the $\\{1,2,..9\\}$ set we have these four pairs that sum up to $10$: $(1,9), (2,8), (3,7), (4,6)$.\n\nSo taking $3$ numbers out of the $\\{1,2,...9\\}$ set we can \"break\" at most $3$ of the pairs. Thus we will have at least one pair out of these four remaining in our 6-element subset. QED.\n\nYour proof, with a little rewording, is a good use of the pigeonhole principle. You have five pigeonholes, the four two element subsets you name and $\\{5\\}$. Your six pigeons are the selected numbers. Two must be in the same hole, so you must have both elements of one of the sets you name. These two will sum to $10$\n\nThe naive approach (for PHP) would be to loop through every subset containing 6 elements and check the sum of every pair in each subset. If for every subset of 6 one of the pairs sum to 10, then the statement is true.\n\n• Then I would need to verify this for 84 subsets... And I was mainly looking for a solution that uses the pidgeon hole principle. – user370967 Nov 5 '16 at 22:29\n• That is correct. As I said, this is just the naive approach. Edit: I was responding to your question about how to do it in php specifically. – Ralff Nov 5 '16 at 22:42" ]
https://www.shakuhachi.net/how-do-you-convert-grams-to-kilograms/
[ "# How do you convert grams to kilograms?\n\n## How do you convert grams to kilograms?\n\nA kilogram is one thousand grams. This means that to get kilograms from grams, you just need to divide the number of grams by 1,000. In our example, we would get kilograms by dividing 20,000 grams by 1,000.\n\nWhat is 1g to 1kg?\n\nGrams to Kilograms conversion table\n\nGrams (g) Kilograms (kg)\n80 g 0.08 kg\n90 g 0.09 kg\n100 g 0.1 kg\n1000 g 1 kg\n\n### Is 1000 grams equal to 1 kg?\n\nA kilogram is 1,000 grams For every kilogram, there are 1000 grams. That means that the ratio between kilograms and grams is 1:1000. It also means 1 kilogram and 1000 grams are defined as being equal. Traditionally, grams are referred to as the base unit.\n\nHow much is 100 grams in kilograms?\n\n0.1 kg\n100 g is equal to 0.1 kg.\n\n#### How do you calculate kilograms?\n\nDivide the number of pounds by 2.2046 to use the standard equation. For example, if you want to convert 50 pounds to kilograms, divide 50 by 2.2046, which is equal to 22.67985 kg. To convert 200 pounds to kilograms, divide 200 by 2.2046, which is equal to 90.71940 kg.\n\nHow do you calculate weight in kilograms?\n\nOn Earth, a 1 kg object weighs 9.8 N, so to find the weight of an object in N simply multiply the mass by 9.8 N. Or, to find the mass in kg, divide the weight by 9.8 N.\n\n## How many kg is 50 grams?\n\nSimply put, g is smaller than kg. In fact, a gram is “10 to the power of -3” smaller than a kilogram. Since a gram is 10^-3 smaller than a kilogram, it means that the conversion factor for g to kg is 10^-3. Therefore, you can multiply 50 g by 10^-3 to get 50 g converted to kg.\n\nWhat makes up 1 kg?\n\nkilogram (kg), basic unit of mass in the metric system. A kilogram is very nearly equal (it was originally intended to be exactly equal) to the mass of 1,000 cubic cm of water. The pound is defined as equal to 0.45359237 kg, exactly.\n\n### Is 500g the same as 1 kg?\n\nIn order to convert from grams to kilograms, you need to know the following conversion fact: 1 kilogram = 1,000 grams. In this case, we find that 500 grams is equal to 1/2 or 0.5 kilograms.\n\nHow much is 300g in KG?\n\nKilograms to Grams conversion table\n\nKilograms (kg) Grams (g)\n0.1 kg 100 g\n1 kg 1000 g\n2 kg 2000 g\n3 kg 3000 g" ]
https://percentages.io/what-is-86-percent-of-8591
[ "What is % of ?\n7388.26\n\n# How to solve this problem\n\n## A step by step guide\n\nThe purpose of solving this problem is to determine what 86% of 8591 is. One common real life problem where a solution like this may be helpful include calculating how much tip to leave at a restaurant. Solving this problem requires two simple math operations that you can perform on any calculator. The first step is a division and the second step is a multiplication. Here's a cool tip though, you can actually reverse the order of these operations and the result will be the same! Here are the steps:\n\nStep 1: Divide 8591 by 100\nIn this case, the number that we are \"comparing\" to 100 is 8591, so we must first normalize the number by dividing it by 100. The operation we have to solve is this: $$\\frac{8591}{ 100 } = 8591 \\div {{ 100 }} = 85.91$$\n\nStep 2: Multiply 85.91 by 86 to get the solution\nNow that we have our normalized number, 85.91, we just have to multiply it by 86 to get our final answer. The forumla for this is obviously quite simple: $$85.91 \\times 86 = 7388.26$$\n\nThat's all there is to it! Note that you can replace these values with new ones from any similar problem.\n\nSimilar problems" ]
https://socratic.org/questions/how-do-you-solve-3-2x-8-2x
[ "# How do you solve 3- 2x = 8+ 2x?\n\nJun 5, 2018\n\n$x = - \\frac{5}{4}$ or about $- 1.25$\n\n#### Explanation:\n\n$3 - 2 x = 8 + 2 x$\n\nTo solve for the variable $x$, we have to make it by itself. First, subtract $\\textcolor{b l u e}{2 x}$ from both sides of the equation:\n$3 - 2 x \\quad \\textcolor{b l u e}{- \\quad 2 x} = 8 + 2 x \\quad \\textcolor{b l u e}{- \\quad 2 x}$\n\n$3 - 4 x = 8$\n\nNow subtract $\\textcolor{b l u e}{3}$ from both sides:\n$3 - 4 x \\quad \\textcolor{b l u e}{- \\quad 3} = 8 \\quad \\textcolor{b l u e}{- \\quad 3}$\n\n$- 4 x = 5$\n\nDivide both sides by $\\textcolor{b l u e}{- 4 x}$:\n$\\frac{- 4 x}{\\textcolor{b l u e}{- 4}} = \\frac{5}{\\textcolor{b l u e}{- 4}}$\n\n$x = - \\frac{5}{4}$ or about $- 1.25$\n\nHope this helps!" ]
https://www.codingbroz.com/python-program-to-find-largest-of-3-numbers/
[ "# Python Program to Find Largest of 3 Numbers\n\nIn this post, we will learn how to find the largest of 3 numbers using Python Programming language.\n\nThis program takes three numbers as input from the user, and compares all three of them to find the largest number using the if. . .else statement.\n\nSo, without further ado, let’s begin this tutorial.\n\nContents\n\n## Python Program to Find Largest of 3 Numbers\n\n```# Python Program to Find Largest of 3 Numbers\nnum1 = int(input(\"Enter first number: \"))\nnum2 = int(input(\"Enter second number: \"))\nnum3 = int(input(\"Enter third number: \"))\n\n# Finding largest number\nif (num1 >= num2) and (num1 >= num3):\nlargest = num1\nelif (num2 >= num1) and (num2 >= num3):\nlargest = num2\nelse:\nlargest = num3\n\n# Displaying output\nprint(\"Largest number is: \", largest)\n```\n\nOutput\n\n``````Enter first number: 22\nEnter second number: 27\nEnter third number: 21\nLargest number is: 27\n``````\n\n## How Does This Program Work ?\n\n```num1 = int(input(\"Enter first number: \"))\nnum2 = int(input(\"Enter second number: \"))\nnum3 = int(input(\"Enter third number: \"))\n```\n\nThe user is asked to enter three integers.\n\n```if (num1 >= num2) and (num1 >= num3):\nlargest = num1\n```\n\nNow, we compare whether num1 is greater than num2 and num3 or not. If the condition is true, then num1 is the largest number.\n\n```elif (num2 >= num1) and (num2 >= num3):\nlargest = num2\n```\n\nIf the above condition is false, we check whether the value of num2 is greater than num1 and num3 or not. If yes, then num2 is the largest number.\n\n```else:\nlargest = num3\n```\n\nIf neither of the two conditions are true, then it clearly means num3 is the largest number.\n\n```# Displaying output\nprint(\"Largest number is: \", largest)\n```\n\nFinally, the largest number is displayed on the screen with the help of print() function.\n\n## Conclusion\n\nI hope after going through this post, you understand how to find the largest of 3 numbers using Python Programming language.\n\nIf you have any doubt regarding the program, feel free to contact us in the comment section. We will be delighted to assist you." ]
https://hpmuseum.org/forum/thread-7683-post-67506.html
01-29-2017, 05:37 PM (This post was last modified: 01-29-2017 08:17 PM by compsystems.)
Post: #1
compsystems, Senior Member (Posts: 1,329, Joined: Dec 2013)

Hello. The following statements, written in the CAS history:

```
print("test ANS cmd");
id0:=0+1;
print("ans1: "+Ans); // ans1: 1
id0:=1+1;
print("ans2: "+Ans); // ans2: 2
id0:=2+1;
print("ans3: "+Ans); // ans3: 3
```

show in the terminal view:

```
test ANS cmd
ans1: 1
ans2: 2
ans3: 3
```

But within a program:

PHP Code:
```
#cas
  anscmd():=
  begin
    print;
    print("test ANS cmd");
    id0:=0+1;
    print("ans1: "+Ans); // ans1: 1
    id0:=1+1;
    print("ans2: "+Ans); // ans2: 2
    id0:=2+1;
    print("ans3: "+Ans); // ans3: 3
  end;
#end
```

(Note: the original post ran the statements straight to `#end`; an `end;` is added here to close the `begin` block.)

the output is:

```
test ANS cmd
ans1: 0
ans2: 0
ans3: 0
```

Why?

The following CAS code requires storing each statement in an identifier; you cannot use the 'Ans' cmd =(

PHP Code:
```
#cas
  script1():=
  begin
    local eq1, eq2, sol, answer;
    print;
    purge(x,y);
    print("Find the dimensions of a rectangle whose area is 45 and its perimeter 28");
    print("METHOD 1 (Algebraic solution, stepwise)");
    print("Let x = long, y = width");
    print("");
    print("The equations are:");
    eq1:=x*y=45;
    print("eq1: "+eq1);
    eq2:=2x+2y=28;
    print("eq2: "+eq2);
    print("");
    answer:=factor(eq2); // -> (2*(x+y)) = 28
    print("factor(eq2): "+answer);
    print("");
    answer:=answer/2; // -> (2*(x+y)/2) = 14
    print("Ans/2: "+answer);
    answer:=simplify(answer); // -> (x+y) = 14
    print("simplify(Ans): "+answer);
    print("");
    sol:= y=solve(answer,y); // y = (-x+14)
    print("sol(Ans,y): "+sol);
    print("");
    //answer:= (eq1|sol);
    answer:=subst(eq1,sol); // -> (x*(-x+14)) = 45
    print("subst(eq1,answer): "+answer);
    answer:=simplify(answer); // -> (-x^2+14*x) = 45
    print("simplify(Ans): "+answer);
    print("");
    sol:=solve(answer,x); // -> {5,9}
    print("sol(Ans,x): x1,2="+sol);
    print("");
    print("test solutions");
    answer:=(eq1|{x=sol, y=sol});
    print("eq1: "+answer);
    answer:=evalBool(answer,'=','==');
    print("eq1: "+answer);
    print("");
    answer:=(eq2|{x=sol, y=sol});
    print("eq2: "+answer);
    answer:=evalBool(answer,'=','==');
    print("eq2: "+answer);
    return "Done";
  end;
#end

#cas
  evalBool( expr1, str1, str2 ):=
  begin
    return( ifte( subst(expr1, str1, str2 ), "TRUE", "FALSE") );
  end;
#end
```

01-30-2017, 05:07 PM
Post: #2
KeithB, Member (Posts: 298, Joined: Jan 2017)
[ null, "https://hpmuseum.org/forum/uploads/avatars/avatar_176.jpg", null, "https://hpmuseum.org/forum/images/buddy_offline.gif", null, "https://hpmuseum.org/forum/uploads/avatars/avatar_6111.jpg", null, "https://hpmuseum.org/forum/images/buddy_offline.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.70026875,"math_prob":0.9926082,"size":994,"snap":"2022-27-2022-33","text_gpt3_token_len":344,"char_repetition_ratio":0.0979798,"word_repetition_ratio":0.0,"special_character_ratio":0.36418512,"punctuation_ratio":0.19213974,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999465,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T19:55:46Z\",\"WARC-Record-ID\":\"<urn:uuid:d66b8ddd-a209-446b-be6f-de99d40d2f47>\",\"Content-Length\":\"34026\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:901abfe0-a93d-4167-a559-f7d0798c1b35>\",\"WARC-Concurrent-To\":\"<urn:uuid:2287a154-0de3-4e01-8659-09efbdf7adb3>\",\"WARC-IP-Address\":\"209.197.117.170\",\"WARC-Target-URI\":\"https://hpmuseum.org/forum/thread-7683-post-67506.html\",\"WARC-Payload-Digest\":\"sha1:BVI4UPJSTBDL7S2TDDZW6I6E5SRYYTMN\",\"WARC-Block-Digest\":\"sha1:XE5SSZJIRL3SKJ2TNBLLENY547HWFT3A\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103271864.14_warc_CC-MAIN-20220626192142-20220626222142-00479.warc.gz\"}"}
https://kmmiles.com/6303-2-km-in-miles
[ "kmmiles.com\n\n# 6303.2 km in miles\n\n## Result\n\n6303.2 km equals 3914.2872 miles\n\nYou can also convert 6303.2 miles to km.\n\n## Conversion formula\n\nMultiply the amount of km by the conversion factor to get the result in miles:\n\n6303.2 km × 0.621 = 3914.2872 mi\n\n## How to convert 6303.2 km to miles?\n\nThe conversion factor from km to miles is 0.621, which means that 1 km is equal to 0.621 miles:\n\n1 km = 0.621 mi\n\nTo convert 6303.2 km into miles we have to multiply 6303.2 by the conversion factor in order to get the amount from km to miles. We can also form a proportion to calculate the result:\n\n1 km → 0.621 mi\n\n6303.2 km → L(mi)\n\nSolve the above proportion to obtain the length L in miles:\n\nL(mi) = 6303.2 km × 0.621 mi\n\nL(mi) = 3914.2872 mi\n\nThe final result is:\n\n6303.2 km → 3914.2872 mi\n\nWe conclude that 6303.2 km is equivalent to 3914.2872 miles:\n\n6303.2 km = 3914.2872 miles\n\n## Result approximation\n\nFor practical purposes we can round our final result to an approximate numerical value. In this case six thousand three hundred three point two km is approximately three thousand nine hundred fourteen point two eight seven miles:\n\n6303.2 km ≅ 3914.287 miles\n\n## Conversion table\n\nFor quick reference purposes, below is the kilometers to miles conversion table:\n\nkilometers (km) miles (mi)\n6304.2 km 3914.9082 miles\n6305.2 km 3915.5292 miles\n6306.2 km 3916.1502 miles\n6307.2 km 3916.7712 miles\n6308.2 km 3917.3922 miles\n6309.2 km 3918.0132 miles\n6310.2 km 3918.6342 miles\n6311.2 km 3919.2552 miles\n6312.2 km 3919.8762 miles\n6313.2 km 3920.4972 miles\n\n## Units definitions\n\nThe units involved in this conversion are kilometers and miles. This is how they are defined:\n\n### Kilometers\n\nThe kilometer (symbol: km) is a unit of length in the metric system, equal to 1000m (also written as 1E+3m). It is commonly used officially for expressing distances between geographical places on land in most of the world.\n\n### Miles\n\nA mile is a most popular measurement unit of length, equal to most commonly 5,280 feet (1,760 yards, or about 1,609 meters). The mile of 5,280 feet is called land mile or the statute mile to distinguish it from the nautical mile (1,852 meters, about 6,076.1 feet). Use of the mile as a unit of measurement is now largely confined to the United Kingdom, the United States, and Canada." ]
https://www.packtpub.com/product/r-high-performance-programming/9781783989263
[ "", null, "### R High Performance Programming", null, "", null, "", null, "", null, "", null, "4 (1 reviews total)\nBy Aloysius Lim , William Tjhi\n• Constantly updated with 100+ new titles each month\n• Breadth and depth in over 1,000+ technologies\n\nWith the increasing use of information in all areas of business and science, R provides an easy and powerful way to analyze and process the vast amounts of data involved. It is one of the most popular tools today for faster data exploration, statistical analysis, and statistical modeling and can generate useful insights and discoveries from large amounts of data.\n\nThrough this practical and varied guide, you will become equipped to solve a range of performance problems in R programming. You will learn how to profile and benchmark R programs, identify bottlenecks, assess and identify performance limitations from the CPU, identify memory or disk input/output constraints, and optimize the computational speed of your R programs using great tricks, such as vectorizing computations. You will then move on to more advanced techniques, such as compiling code and tapping into the computing power of GPUs, optimizing memory consumption, and handling larger-than-memory data sets using disk-based memory and chunking.\n\nPublication date:\nJanuary 2015\nPublisher\nPackt\nPages\n176\nISBN\n9781783989263\n\n## Chapter 1. Understanding R's Performance – Why Are R Programs Sometimes Slow?\n\nR is a great tool used for statistical analysis and data processing. When it was first developed in 1993, it was designed as a tool that would teach data analysis courses. Because it is so easy to use, it became more and more popular over the next 20 years, not only in academia, but also in government and industry. R is also an open source tool, so its users can use it for free and contribute new statistical packages to the R public repository called the Comprehensive R Archive Network (CRAN). As the CRAN library became richer with more than 6,000 well-documented and ready-to-use packages at the time of writing this book, the attractiveness of R increased even further. In these 20 years, the volume of data being created, transmitted, stored, and analyzed, by organizations and individuals alike, has also grown exponentially. R programmers who need to process and analyze the ever growing volume of data sometimes find that R's performance suffers under such heavy loads. Why does R sometimes not perform well, and how can we overcome its performance limitations? This book examines the factors behind R's performance and offers a variety of techniques to improve the performance of R programs, for example, optimizing memory usage, performing computations in parallel, or even tapping the computing power of external data processing systems.\n\nBefore we can find the solutions to R's performance problems, we need to understand what makes R perform poorly in certain situations. This chapter kicks off our exploration of the high-performance R programming by taking a peek under the hood to understand how R is designed, and how its design can limit the performance of R programs.\n\nWe will examine three main constraints faced by any computational task—CPU, RAM, and disk input/output (I/O)—and then look at how these play out specifically in R programs. 
By the end of this chapter, you will have some insights into the bottlenecks that your R programs could run into.

This chapter covers the following topics:

• Three constraints on computing performance—CPU, RAM, and disk I/O

• R is interpreted on the fly

• R requires all data to be loaded into memory

• Algorithm design affects time and space complexity

## Three constraints on computing performance – CPU, RAM, and disk I/O

First, let's see how R programs are executed in a computer. This is a very simplified version of what actually happens, but it suffices for us to understand the performance limitations of R. The following figure illustrates the steps required to execute an R program.

[Figure: Steps to execute an R program]

Take for example, this simple R program, which loads some data from a CSV file, computes the column sums, and writes the results into another CSV file:

```
data <- read.csv("mydata.csv")
totals <- colSums(data)
write.csv(totals, "totals.csv")
```

We use the numbering to understand the preceding diagram:

1. When we load and run an R program, the R code is first loaded into RAM.

2. The R interpreter then translates the R code into machine code and loads the machine code into the CPU.

3. The CPU executes the program.

4. The program loads the data to be processed from the hard disk into RAM (`read.csv()` in the example).

5. The data is loaded in small chunks into the CPU for processing.

6. The CPU processes the data one chunk at a time, and exchanges chunks of data with RAM until all the data has been processed (in the example, the CPU executes the instructions of the `colSums()` function to compute the column sums on the data set).

7. Sometimes, the processed data is stored back onto the hard drive (`write.csv()` in the example).

From this depiction of the computing process, we can see a few places where performance bottlenecks can occur:

• The speed and performance of the CPU determines how quickly computing instructions, such as `colSums()` in the example, are executed. This includes the interpretation of the R code into the machine code and the actual execution of the machine code to process the data.

• The size of RAM available on the computer limits the amount of data that can be processed at any given time. In this example, if the `mydata.csv` file contains more data than can be held in the RAM, the call to `read.csv()` will fail.

• The speed at which the data can be read from or written to the hard disk (`read.csv()` and `write.csv()` in the example), that is, the speed of the disk input/output (I/O), affects how quickly the data can be loaded into the memory and stored back onto the hard disk.

Sometimes, you might encounter these limiting factors one at a time. For example, when a dataset is small enough to be quickly read from the disk and fully stored in the RAM, but the computations performed on it are complex, then only the CPU constraint is encountered. At other times, you might find them occurring together in various combinations. For example, when a dataset is very large, it takes a long time to load it from the disk, only one small chunk of it can be loaded at any given time into the memory, and it takes a long time to perform any computations on it. In either case, these are the symptoms of performance problems.
In order to diagnose the problems and find solutions for them, we need to look at what is happening behind the scenes that might be causing these constraints to occur.

Let's now take a look at how R is designed and how it works, and see what the implications are for its performance.

## R is interpreted on the fly

In computer science parlance, R is known as an interpreted language. This means that every time you execute an R program, the R interpreter interprets and executes the R code on the fly. The following figure illustrates what happens when you run any R code:

[Figure: Interpreted language versus compiled language]

R first parses your source code into an internal R object representation of all the statements and expressions in your R code. R then evaluates this internal R object to execute the code.

This is what makes R such a dynamic and interactive programming language. You can type R statements into the R console and get results immediately because the R interpreter parses and evaluates the code right away. The downside of this approach is that R code runs relatively slow because it is reinterpreted every time you run it, even when it has not changed.

Contrast this with a compiled language such as C or Fortran. When you work with a compiled language, you compile your source code into the machine code before you execute it. This makes compiled languages less interactive because the compilation step can take several minutes for large programs, even when you have made just a tiny change to the code. On the other hand, once the code has been compiled, it runs very quickly on the CPU since it is already in the computer's native language.

Due to R being an interpreted language, every time you run an R program, the CPU is busy doing two things: interpreting your code and executing the instructions contained in it. Therefore, the CPU's speed can limit the performance of R programs. We will learn how to overcome CPU limitations in chapters 3 to 5.

Another way in which R is CPU limited is that, by default, it runs only on a single thread on the CPU. It does not matter if you install R on a powerful server with 64 CPU cores, R will only use one of them. For example, finding the sum of a numeric vector is an operation that can be made to run in parallel in the CPU quite easily. If there are four CPU cores available, each core can be given roughly one quarter of the data to process. Each core computes the subtotal of the chunk of data it is given, and the four subtotals are then added up to find the total sum of the whole dataset. However in R, the `sum()` function runs serially, processing the entire dataset on one CPU core. In fact, many Big Data operations are of a similar nature to the summation example here, with the same task running independently on many subsets of data. In such a scenario, performing the operation sequentially would be an underuse of today's mostly parallel computing architectures. In Chapter 8, Multiplying Performance with Parallel Computing, we will learn how to write parallel programs in R to overcome this limitation.
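The book develops the R solution in Chapter 8. Purely as a language-neutral illustration of the chunked parallel-sum pattern sketched in the paragraph above (my illustration, not the book's code), here is a minimal Python example:

```
# Chunked parallel sum: split the data, sum each chunk on its own
# worker process, then add up the subtotals.
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(chunk):
    return sum(chunk)

def parallel_sum(data, workers=4):
    size = (len(data) + workers - 1) // workers  # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_001))))  # 500000500000
```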
## R requires all data to be loaded into memory

All data that is processed in R has to be fully loaded into the RAM. This means that once the data has been loaded, all of it is available for processing by the CPU, which is great for performance. On the other hand, it also means that the maximum size of data that you can process depends on the amount of free RAM available on your system.

Remember that not all the RAM on your computer is available to R. The operating system, background processes, and any other applications that are running in the CPU also compete for the RAM. What is available for R to use might be a fraction of the total RAM installed on the system.

On top of that, R also requires free RAM to store the results of its computations. Depending on what kinds of computations you are performing, you might need the available RAM to be twice or even more times as large as the size of your data.

32-bit versions of R are also limited by the amount of RAM they can access. Depending on the operating system, they might be limited to 2 GB to 4 GB of RAM even when there is actually more RAM available. Furthermore, due to memory address limits, data structures in 32-bit versions of R can contain at most 2^31 - 1 = 2,147,483,647 elements. Because of these limits, you should use the 64-bit versions of R whenever you can.

### Note

In all versions of R prior to 3.0, even 64-bit versions, vectors and other data structures faced this 2,147,483,647-element limit. If you have data that exceeds this size, you need to use a 64-bit version of R 3.0 or one of its later versions.

What happens when we try to load a dataset that is larger than the available RAM? Sometimes, the data loads successfully, but once the available RAM is used up, the operating system starts to swap the data in RAM into a swapfile on the hard disk. This is not a feature of R; it depends on the operating system. When this happens, R thinks that all the data has been loaded into the RAM when in fact the operating system is hard at work in the background swapping data between RAM and the swapfile on the disk. When such a situation occurs, we have a disk I/O bottleneck on top of the memory bottleneck. Because disk I/O is so slow (hard drive's speed is typically measured in milliseconds, while RAM's speed in nanoseconds), it can cause R to appear as if it is frozen or becomes unresponsive. Of the three performance limitations we looked at, disk I/O often has the largest impact on R's performance.

Chapter 6, Simple Tweaks to Use Less RAM and Chapter 7, Processing Large Datasets with Limited RAM will discuss how to optimize memory usage and work with datasets that are too large to fit into the memory.

## Algorithm design affects time and space complexity

There is one other performance factor that we have not discussed—your code. The types of computations and algorithms that you run can have a huge impact on performance. Computer scientists describe the performance characteristics of programs in terms of complexity. In particular, we are concerned about two types of complexities:

• Time complexity: This refers to the computing time required to run an R program in relation to the size of the data being processed

• Space complexity: This refers to the memory that is required to run an R program in relation to the size of the data being processed

Let's look at an example of time complexity. Suppose that we need to write a function to compute the nth Fibonacci number, that is, a number in the sequence 0, 1, 1, 2, 3, 5, 8, 13, … where each number is the sum of the previous two numbers.
A simple way to do this would be to write a recursive function such as:

```
fibonacci_rec <- function(n) {
    if (n <= 1) {
        return(n)
    }
    return(fibonacci_rec(n - 1) + fibonacci_rec(n - 2))
}
```

Since the nth Fibonacci number is the sum of the (n-1)th and (n-2)th Fibonacci numbers, this function simply calls itself to compute the previous two numbers, then adds them up. Let's see how long it takes to compute the 25th Fibonacci number using the `microbenchmark()` function from the `microbenchmark` package, which can be downloaded and installed from CRAN (we will take a closer look at how to use this function in Chapter 2, Measuring Code's Performance):

```
microbenchmark(fibonacci_rec(25), unit = "ms")
## Unit: milliseconds
##              expr      min    lq     mean   median       uq
## fibonacci_rec(25) 170.1014 179.8 191.4213 183.5275 197.5833
##      max neval
## 253.1433   100
```

It took a median of 184 milliseconds. Because of the way the recursion works, there is a lot of unnecessary repetition. For example, to compute the 25th Fibonacci number, we need to compute the 23rd and 24th numbers in the sequence. But, computing the 24th number also involves computing the 23rd number, so the 23rd number is computed twice. And the 22nd number is needed to compute both the 23rd and 24th numbers, and so on.

We can reduce this repetition by computing each number only once. The following code presents an alternative implementation of the Fibonacci function that does just that. It computes the Fibonacci numbers in sequence from smallest to largest and remembers the numbers that it has computed in the numeric vector `fib`. Thus, each Fibonacci number is computed only once (the first two elements of `fib`, which the extraction had garbled, are restored here as `fib[1]` and `fib[2]`, matching the comment about the vector's indexing):

```
fibonacci_seq <- function(n) {
    if (n <= 1) {
        return(n)
    }
    # (n+1)th element of this vector is the nth Fibonacci number
    fib <- rep.int(NA_real_, n + 1)
    fib[1] <- 0
    fib[2] <- 1
    for (i in 2:n) {
        fib[i + 1] <- fib[i] + fib[i - 1]
    }
    return(fib[n + 1])
}
```

### Tip

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

By benchmarking this sequential function, we see that it takes a median of 0.04 milliseconds to run, a reduction of 99.98 percent from the recursive version!

```
microbenchmark(fibonacci_seq(25), unit = "ms")
## Unit: milliseconds
##              expr     min       lq      mean    median      uq
## fibonacci_seq(25) 0.03171 0.036133 0.0446416 0.0405555 0.04459
##      max neval
## 0.114714   100
```

To demonstrate the concept of time complexity, we ran the benchmark for different values of n ranging from 0 to 50. The median execution times are shown in the following figure:

[Figure: Execution time of recursive versus sequential versions of the Fibonacci function]

As we increase the value of n, the execution time of the recursive version of the Fibonacci function increases exponentially. It is roughly proportional to 1.6^n—every time n increases by 1, it gets multiplied by about 1.6 times. The execution time increased so fast that it took too long to compute the Fibonacci numbers after the 50th one. On the other hand, though it is imperceptible from the chart, the execution time of the sequential version increases linearly—every increase in n increases the execution time by 1.3 microseconds. Since the computational complexity of the sequential version is much lower than that of the recursive version, it will perform much better as n increases.
As a case in point, with a modest value of n=50, the sequential version took a fraction of a millisecond to get computed while the recursive version took over eight hours!

Though we will not do it here, a similar exercise can be conducted in order to compare the space complexity of different algorithms. Given a certain amount of computational resources, your choice of algorithm and the design of your code can have a big impact on your R program's ability to achieve the desired level of performance.

## Summary

In this chapter, we saw how R programs can sometimes encounter the three constraints faced by computing performance—CPU, RAM, and disk I/O. We looked into R's design and learned how its interpreted and single-threaded nature can cause it to run slowly, and how it can encounter memory and disk I/O limitations when data becomes too big to fit into the RAM. Finally, we looked at how the design of R code plays an important role in determining the performance using a comparison between two implementations of the Fibonacci function with very different performance characteristics.

These performance issues are not insurmountable. The rest of this book will show you different ways to overcome or work around them and unlock the hidden potential of R.

##### Aloysius Lim

Aloysius Lim has a knack for translating complex data and models into easy-to-understand insights. As cofounder of About People, a data science and design consultancy, he loves solving problems and helping others to find practical solutions to business challenges using data. His breadth of experience—7 years in the government, education, and retail industries—equips him with unique perspectives to find creative solutions.

##### William Tjhi

William Tjhi is a data scientist with years of experience working in academia, government, and industry. He began his data science journey as a PhD candidate researching new algorithms to improve the robustness of high-dimensional data clustering. Upon receiving his doctorate, he moved from basic to applied research, solving problems among others in molecular biology and epidemiology using machine learning. He published some of his research in peer-reviewed journals and conferences. With the rise of Big Data, William left academia for industry, where he started practicing data science in both business and public sector settings. William is passionate about R and has been using it as his primary analysis tool since his research days. He was once part of Revolution Analytics, and there he contributed to making R more suitable for Big Data.
[ null, "https://static.packt-cdn.com/products/9781783989263/cover/smaller", null, "https://www.packtpub.com/images/star--100-white.svg", null, "https://www.packtpub.com/images/star--100-white.svg", null, "https://www.packtpub.com/images/star--100-white.svg", null, "https://www.packtpub.com/images/star--100-white.svg", null, "https://www.packtpub.com/images/star--0-white.svg", null, "https://static.packt-cdn.com/products/9781783989263/graphics/9263OS_01_01.jpg", null, "https://static.packt-cdn.com/products/9781783989263/graphics/9263OS_01_02.jpg", null, "https://static.packt-cdn.com/products/9781783989263/graphics/9263OS_01_06.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92844325,"math_prob":0.8554418,"size":15898,"snap":"2020-45-2020-50","text_gpt3_token_len":3449,"char_repetition_ratio":0.13627784,"word_repetition_ratio":0.025454545,"special_character_ratio":0.22115989,"punctuation_ratio":0.100063935,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97333735,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,2,null,null,null,null,null,null,null,null,null,null,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-28T06:07:03Z\",\"WARC-Record-ID\":\"<urn:uuid:1084f151-4157-4874-af8e-a8f6da7f006f>\",\"Content-Length\":\"71622\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1161b993-dd4f-4415-8f3e-ad241c6b5de5>\",\"WARC-Concurrent-To\":\"<urn:uuid:cf0a3b4a-a724-4d32-8e66-47930ad9e04b>\",\"WARC-IP-Address\":\"104.22.0.175\",\"WARC-Target-URI\":\"https://www.packtpub.com/product/r-high-performance-programming/9781783989263\",\"WARC-Payload-Digest\":\"sha1:DW4PKVQP4I4ASV4YJJUWEEMDAMCF4RLY\",\"WARC-Block-Digest\":\"sha1:ROBW7FVJORKKVWY53IV2M5XFQPBZ4QRX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141195069.35_warc_CC-MAIN-20201128040731-20201128070731-00432.warc.gz\"}"}
https://cp-algorithms.com/combinatorics/inclusion-exclusion.html
[ "# The Inclusion-Exclusion Principle

The inclusion-exclusion principle is an important combinatorial way to compute the size of a set or the probability of complex events. It relates the sizes of individual sets with the size of their union.

## Statement

### The verbal formula

The inclusion-exclusion principle can be expressed as follows:

To compute the size of a union of multiple sets, it is necessary to sum the sizes of these sets separately, then subtract the sizes of all pairwise intersections of the sets, then add back the sizes of the intersections of triples of the sets, subtract the sizes of the intersections of quadruples of the sets, and so on, up to the intersection of all sets.

### The formulation in terms of sets

The above definition can be expressed mathematically as follows:

$$\left| \bigcup_{i=1}^n A_i \right| = \sum_{i=1}^n|A_i| - \sum_{1\leq i<j\leq n} |A_i \cap A_j| + \sum_{1\leq i<j<k\leq n}|A_i \cap A_j \cap A_k| - \cdots + (-1)^{n-1} | A_1 \cap \cdots \cap A_n |$$

And in a more compact way:

$$\left|\bigcup_{i=1}^n A_i \right| = \sum_{\emptyset \neq J\subseteq \{1,2,\ldots ,n\}} (-1)^{|J|-1}\Biggl|\bigcap_{j\in J}A_{j}\Biggr|$$

### The formulation using Venn diagrams

Let the diagram show three sets $A$, $B$ and $C$:", null, "Then the area of their union $A \cup B \cup C$ is equal to the sum of the areas $A$, $B$ and $C$, minus the doubly covered areas $A \cap B$, $A \cap C$, $B \cap C$, but with the addition of the area covered by all three sets $A \cap B \cap C$:

$$S(A \cup B \cup C) = S(A) + S(B) + S(C) - S(A \cap B) - S(A \cap C) - S(B \cap C) + S(A \cap B \cap C)$$

It can also be generalized for a union of $n$ sets.

### The formulation in terms of probability theory

If $A_i$ $(i = 1,2,\ldots,n)$ are events and ${\cal P}(A_i)$ is the probability of the event $A_i$ occurring, then the probability of their union (i.e. the probability that at least one of the events occurs) is equal to:

$$\begin{eqnarray} {\cal P} \left( \bigcup_{i=1}^n A_i \right) &=& \sum_{i=1}^n{\cal P}(A_i)\ - \sum_{1\leq i<j\leq n} {\cal P}(A_i \cap A_j)\ + \\ &+& \sum_{1\leq i<j<k\leq n}{\cal P}(A_i \cap A_j \cap A_k) - \cdots + (-1)^{n-1} {\cal P}( A_1 \cap \cdots \cap A_n ) \end{eqnarray}$$

And in a more compact way:

$${\cal P} \left(\bigcup_{i=1}^n A_i \right) = \sum_{\emptyset \neq J\subseteq \{1,2,\ldots ,n\}} (-1)^{|J|-1}\ {\cal P}\Biggl(\bigcap_{j\in J}A_{j}\Biggr)$$

## Proof

For the proof it is convenient to use the mathematical formulation in terms of set theory:

$$\left|\bigcup_{i=1}^n A_i \right| = \sum_{\emptyset \neq J\subseteq \{1,2,\ldots ,n\}} (-1)^{|J|-1}\Biggl|\bigcap_{j\in J}A_{j}\Biggr|$$

We want to prove that any element contained in at least one of the sets $A_i$ is counted in the formula exactly once (note that elements which are not present in any of the sets $A_i$ are never considered on the right-hand side of the formula).

Consider an element $x$ occurring in $k \geq 1$ sets $A_i$. We will show it is counted only once in the formula.
Note that:

• in terms in which $|J| = 1$, the item $x$ will be counted $+\ k$ times;
• in terms in which $|J| = 2$, the item $x$ will be counted $-\ \binom{k}{2}$ times, because it will be counted in those terms that include two of the $k$ sets containing $x$;
• in terms in which $|J| = 3$, the item $x$ will be counted $+\ \binom{k}{3}$ times;
• $\cdots$
• in terms in which $|J| = k$, the item $x$ will be counted $(-1)^{k-1}\cdot \binom{k}{k}$ times;
• in terms in which $|J| \gt k$, the item $x$ will be counted zero times.

This leads us to the following sum of binomial coefficients:

$$T = \binom{k}{1} - \binom{k}{2} + \binom{k}{3} - \cdots + (-1)^{i-1}\cdot \binom{k}{i} + \cdots + (-1)^{k-1}\cdot \binom{k}{k}$$

This expression is very similar to the binomial expansion of $(1 - x)^k$:

$$(1 - x)^k = \binom{k}{0} - \binom{k}{1} \cdot x + \binom{k}{2} \cdot x^2 - \binom{k}{3} \cdot x^3 + \cdots + (-1)^k\cdot \binom{k}{k} \cdot x^k$$

When $x = 1$, $(1 - x)^k$ looks a lot like $T$. However, the expansion has the additional leading term $\binom{k}{0} = 1$, and the remaining terms appear with opposite signs, so $(1 - 1)^k = 1 - T$. Therefore $T = 1 - (1 - 1)^k = 1$, which is what we wanted to prove: the element is counted only once.

## Generalization for calculating number of elements in exactly $r$ sets

The inclusion-exclusion principle can be rewritten to calculate the number of elements which are present in zero sets:

$$\left|\bigcap_{i=1}^n \overline{A_i}\right|=\sum_{m=0}^n (-1)^m \sum_{|X|=m} \left|\bigcap_{i\in X} A_{i}\right|$$

Consider its generalization to calculate the number of elements which are present in exactly $r$ sets:

$$\left|\bigcup_{|B|=r}\left[\bigcap_{i \in B} A_i \cap \bigcap_{j \not\in B} \overline{A_j}\right]\right|=\sum_{m=r}^n (-1)^{m-r}\dbinom{m}{r} \sum_{|X|=m} \left|\bigcap_{i \in X} A_{i}\right|$$

To prove this formula, consider some particular $B$. By the basic inclusion-exclusion principle we can say that:

$$\left|\bigcap_{i \in B} A_i \cap \bigcap_{j \not \in B} \overline{A_j}\right|=\sum_{m=r}^{n} (-1)^{m-r} \sum_{\substack{|X|=m \newline B \subset X}}\left|\bigcap_{i\in X} A_{i}\right|$$

The sets on the left side do not intersect for different $B$, thus we can sum them up directly. Also one should note that any set $X$ will always have the coefficient $(-1)^{m-r}$ if it occurs, and it will occur for exactly $\dbinom{m}{r}$ sets $B$.

## Usage when solving problems

The inclusion-exclusion principle is hard to understand without studying its applications.

First, we will look at three simple pen-and-paper tasks illustrating applications of the principle, and then consider more practical problems which are difficult to solve without the inclusion-exclusion principle.

Tasks asking to "find the number of ways" are worthy of note, as they sometimes lead to polynomial solutions, not necessarily exponential ones.

### A simple task on permutations

Task: count how many permutations of the numbers from $0$ to $9$ exist such that the first element is greater than $1$ and the last one is less than $8$.

Let's count the number of "bad" permutations, that is, permutations in which the first element is $\leq 1$ and/or the last is $\geq 8$.

We will denote by $X$ the set of permutations in which the first element is $\leq 1$ and by $Y$ the set of permutations in which the last element is $\geq 8$.
Then the number of "bad" permutations, by the inclusion-exclusion formula, will be:

$$|X \cup Y| = |X| + |Y| - |X \cap Y|$$

After a simple combinatorial calculation, we get:

$$2 \cdot 9! + 2 \cdot 9! - 2 \cdot 2 \cdot 8!$$

The only thing left is to subtract this number from the total of $10!$ to get the number of "good" permutations.

### A simple task on (0, 1, 2) sequences

Task: count how many sequences of length $n$ exist consisting only of the numbers $0,1,2$ such that each number occurs at least once.

Again let us turn to the inverse problem, i.e. we calculate the number of sequences which do not contain at least one of the numbers.

Let's denote by $A_i$ $(i = 0,1,2)$ the set of sequences in which the digit $i$ does not occur. The inclusion-exclusion formula for the number of "bad" sequences is:

$$|A_0 \cup A_1 \cup A_2| = |A_0| + |A_1| + |A_2| - |A_0 \cap A_1| - |A_0 \cap A_2| - |A_1 \cap A_2| + |A_0 \cap A_1 \cap A_2|$$

• The size of each $A_i$ is $2^n$, as each such sequence can only contain two of the digits.
• The size of each pairwise intersection $A_i \cap A_j$ is equal to $1$, as there will be only one digit left to build the sequence.
• The size of the intersection of all three sets is equal to $0$, as there will be no digits left to build the sequence.

As we solved the inverse problem, we subtract it from the total of $3^n$ sequences:

$$3^n - (3 \cdot 2^n - 3 \cdot 1 + 0)$$
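A quick brute-force verification of this formula for small $n$ (a sketch of our own, not part of the original article; all identifiers are illustrative):

```
#include <cstdio>

int main() {
    // Check 3^n - (3*2^n - 3) against direct enumeration for small n.
    for (int n = 1; n <= 10; ++n) {
        long long total = 1, p2 = 1;
        for (int i = 0; i < n; ++i) { total *= 3; p2 *= 2; }

        long long brute = 0;
        // Enumerate all 3^n sequences encoded as base-3 numbers.
        for (long long s = 0; s < total; ++s) {
            int seen[3] = {0, 0, 0};
            long long t = s;
            for (int i = 0; i < n; ++i) { seen[t % 3] = 1; t /= 3; }
            brute += seen[0] && seen[1] && seen[2];
        }

        long long formula = total - (3 * p2 - 3);
        printf("n=%d brute=%lld formula=%lld\n", n, brute, formula);
    }
    return 0;
}
```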
### The number of integer solutions to the equation

Consider the following equation: $$x_1 + x_2 + x_3 + x_4 + x_5 + x_6 = 20$$ where $0 \le x_i \le 8$ $(i = 1,2,\ldots,6)$.

Task: count the number of solutions to the equation.

Forget the restriction on $x_i$ for a moment and just count the number of nonnegative solutions to this equation. This is easily done using binomial coefficients: we want to break a sequence of $20$ units into $6$ groups, which is the same as distributing $5$ "walls" over $25$ slots:

$$N_0 = \binom{25}{5}$$

We will now calculate the number of "bad" solutions with the inclusion-exclusion principle. The "bad" solutions are those in which one or more $x_i$ are greater than or equal to $9$.

Denote by $A_k$ $(k = 1,2,\ldots,6)$ the set of solutions where $x_k \ge 9$ and all other $x_i \ge 0$ $(i \ne k)$ (they may be $\ge 9$ or not). To calculate the size of $A_k$, note that we have essentially the same combinatorial problem that was solved in the two paragraphs above, but now $9$ of the units are excluded from the slots and definitely belong to the first group. Thus:

$$| A_k | = \binom{16}{5}$$

Similarly, the size of the intersection between sets $A_k$ and $A_p$ is equal to:

$$\left| A_k \cap A_p \right| = \binom{7}{5}$$

The size of each intersection of three sets is zero, since $20$ units will not be enough for three or more variables greater than or equal to $9$.

Combining all this into the inclusion-exclusion formula, and given that we solved the inverse problem, we finally get the answer:

$$\binom{25}{5} - \left(\binom{6}{1} \cdot \binom{16}{5} - \binom{6}{2} \cdot \binom{7}{5}\right)$$

### The number of relative primes in a given interval

Task: given two numbers $n$ and $r$, count the number of integers in the interval $[1;r]$ that are relatively prime to $n$ (their greatest common divisor is $1$).

Let's solve the inverse problem: compute the number of integers that are not coprime with $n$.

We will denote the prime factors of $n$ as $p_i$ $(i = 1 \cdots k)$.

How many numbers in the interval $[1;r]$ are divisible by $p_i$? The answer to this question is:

$$\left\lfloor \frac{ r }{ p_i } \right\rfloor$$

However, if we simply sum these numbers, some numbers will be counted several times (those that share multiple $p_i$ as their factors). Therefore, it is necessary to use the inclusion-exclusion principle.

We will iterate over all $2^k$ subsets of the $p_i$s, calculate their product, and add or subtract the number of multiples of that product.

Here is a C++ implementation:

```
int solve (int n, int r) {
    vector<int> p;
    // collect the distinct prime factors of n
    for (int i=2; i*i<=n; ++i)
        if (n % i == 0) {
            p.push_back (i);
            while (n % i == 0)
                n /= i;
        }
    if (n > 1)
        p.push_back (n);

    // inclusion-exclusion over all subsets of prime factors
    int sum = 0;
    for (int msk=1; msk<(1<<p.size()); ++msk) {
        int mult = 1,
            bits = 0;
        for (int i=0; i<(int)p.size(); ++i)
            if (msk & (1<<i)) {
                ++bits;
                mult *= p[i];
            }

        int cur = r / mult;
        if (bits % 2 == 1)
            sum += cur;
        else
            sum -= cur;
    }

    return r - sum;
}
```

The asymptotics of the solution is $O(\sqrt{n})$.

### The number of integers in a given interval which are multiple of at least one of the given numbers

Given $n$ numbers $a_i$ and a number $r$, count the number of integers in the interval $[1; r]$ that are multiples of at least one of the $a_i$.

The solution algorithm is almost identical to the one for the previous task: construct the inclusion-exclusion formula on the numbers $a_i$, i.e. each term in this formula is the number of integers divisible by a given subset of the numbers $a_i$ (in other words, divisible by their least common multiple).

So we will now iterate over all $2^n$ subsets of the integers $a_i$, with $O(n \log r)$ operations to find their least common multiple, adding or subtracting the number of multiples of it in the interval. The asymptotics is $O (2^n\cdot n\cdot \log r)$.

### The number of strings that satisfy a given pattern

Consider $n$ patterns of strings of the same length, consisting only of letters ($a...z$) or question marks. You're also given a number $k$. A string matches a pattern if it has the same length as the pattern, and at each position, either the corresponding characters are equal or the character in the pattern is a question mark. The task is to count the number of strings that match exactly $k$ of the patterns (first problem) and at least $k$ of the patterns (second problem).

Notice first that we can easily count the number of strings that satisfy all of the specified patterns at once.
To do this, simply "cross" the patterns: iterate through the positions ("slots") and look at each position over all patterns. If all patterns have a question mark in this position, the character can be any letter from $a$ to $z$. Otherwise, the character of this position is uniquely defined by the patterns that do not contain a question mark.

Let us now solve the first version of the problem: when the string must satisfy exactly $k$ of the patterns.

To solve it, iterate over and fix a specific subset $X$ of size $k$ from the set of patterns. Then we have to count the number of strings that satisfy this set of patterns and only this set, that is, they don't match any other pattern. We will use the inclusion-exclusion principle in a slightly different manner: we sum over all supersets $Y$ (subsets of the original set of patterns that contain $X$), and either add their contribution to or subtract it from the current answer:

$$ans(X) = \sum_{Y \supseteq X} (-1)^{|Y|-k} \cdot f(Y)$$

where $f(Y)$ is the number of strings that match all patterns in $Y$ (at least those in $Y$).

(If you have a hard time figuring this out, you can try drawing Venn diagrams.)

If we sum up over all $ans(X)$, we get the final answer:

$$ans = \sum_{X ~ : ~ |X| = k} ans(X)$$

However, the asymptotics of this solution is $O(3^k \cdot k)$. To improve it, notice that different $ans(X)$ computations very often share $Y$ sets.

We will reverse the inclusion-exclusion formula and sum in terms of the $Y$ sets. Now it becomes clear that the same set $Y$ is taken into account in the computation of $ans(X)$ for $\binom{|Y|}{k}$ different sets $X$, always with the same sign $(-1)^{|Y| - k}$:

$$ans = \sum_{Y ~ : ~ |Y| \ge k} (-1)^{|Y|-k} \cdot \binom{|Y|}{k} \cdot f(Y)$$

Now our solution has asymptotics $O(2^k \cdot k)$.

We will now solve the second version of the problem: find the number of strings that match at least $k$ of the patterns.

Of course, we can just use the solution to the first version of the problem and add up the answers for sets with size greater than $k$. However, you may notice that in this problem, a set $Y$ is considered in the formula of $ans(X)$ for all sets $X$ of size $\ge k$ which are contained in $Y$. That said, we can write the part of the expression that is being multiplied by $f(Y)$ as:

$$(-1)^{|Y|-k} \cdot \binom{|Y|}{k} + (-1)^{|Y|-k-1} \cdot \binom{|Y|}{k+1} + (-1)^{|Y|-k-2} \cdot \binom{|Y|}{k+2} + \cdots + (-1)^{|Y|-|Y|} \cdot \binom{|Y|}{|Y|}$$

Looking into Graham, Knuth, and Patashnik's Concrete Mathematics, we see a well-known formula for binomial coefficients:

$$\sum_{k=0}^m (-1)^k \cdot \binom{n}{k} = (-1)^m \cdot \binom{n-1}{m}$$

Applying it here, we find that the entire sum of binomial coefficients collapses to:

$$(-1)^{|Y|-k} \cdot \binom{|Y|-1}{|Y|-k}$$

Thus, for this task, we also obtain a solution with the asymptotics $O(2^k \cdot k)$:

$$ans = \sum_{Y ~ : ~ |Y| \ge k} (-1)^{|Y|-k} \cdot \binom{|Y|-1}{|Y|-k} \cdot f(Y)$$

### The number of ways of going from a cell to another

There is a field of size $n \times m$, and $k$ of its cells are impassable walls. A robot is initially at the cell $(1,1)$ (bottom left). The robot can only move right or up, and eventually it needs to get into the cell $(n,m)$, avoiding all obstacles.
You need to count the number of ways it can do this.

Assume that the sizes $n$ and $m$ are very large (say, $10^9$), and the number $k$ is small (around $100$).

For now, sort the obstacles by their coordinate $x$, and in case of equality, by coordinate $y$.

Let us also first solve the problem without obstacles, i.e. learn how to count the number of ways to get from one cell to another. If along one axis we need to go through $x$ cells, and along the other, $y$ cells, then from simple combinatorics we get a formula using binomial coefficients:

$$\binom{x+y}{x}$$

Now, to count the number of ways to get from one cell to another while avoiding all obstacles, you can use inclusion-exclusion to solve the inverse problem: count the number of ways to walk through the board stepping on a given subset of obstacles (and subtract it from the total number of ways).

When iterating over a subset of the obstacles that the path must step on, count the number of such paths by multiplying the number of paths from the starting cell to the first of the selected obstacles, from the first obstacle to the second, and so on; then add or subtract this number from the answer, in accordance with the standard inclusion-exclusion formula.

However, this will again be non-polynomial in complexity: $O(2^k \cdot k)$.

Here goes a polynomial solution:

We will use dynamic programming: let's compute the numbers $d[i][j]$, the number of ways to get from the $i$-th point to the $j$-th without stepping on any other obstacle (except for $i$ and $j$, of course). We will compute this number for all the obstacle cells, and also the starting and ending ones (for all possible pairs of cells from these).

Let's forget for a second the obstacles and just count the number of paths from cell $i$ to $j$. We then need to consider the "bad" paths, the ones that pass through the obstacles, and subtract them from the total number of ways of going from $i$ to $j$.

When considering an obstacle $t$ between $i$ and $j$ ($i < t < j$) on which we can step, the number of paths from $i$ to $j$ that have $t$ as the first obstacle between them can be computed as $d[i][t]$ multiplied by the number of arbitrary paths from $t$ to $j$. We can count the number of "bad" paths by summing this over all $t$ between $i$ and $j$.

We can compute $d[i][j]$ in $O(k)$ for $O(k^2)$ pairs, so this solution has complexity $O(k^3)$.
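Below is a compact C++ sketch of this idea (our own, not from the article; all identifiers and the demo grid are illustrative). Since the raw path counts overflow for large grids, we follow the common contest convention of working modulo a prime; for the $10^9$-sized grids mentioned above, the factorial tables would have to be sized up to $n+m$.

```
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1000000007;
const int MAXF = 1000006;            // demo-sized factorial tables
long long f[MAXF], finv[MAXF];

long long power(long long a, long long b) {
    long long r = 1; a %= MOD;
    for (; b; b >>= 1, a = a * a % MOD)
        if (b & 1) r = r * a % MOD;
    return r;
}

long long C(long long x, long long y) {          // binom(x, y) mod p
    if (y < 0 || y > x) return 0;
    return f[x] * finv[y] % MOD * finv[x - y] % MOD;
}

// paths from cell a to cell b moving only right/up, no obstacles
long long ways(pair<int,int> a, pair<int,int> b) {
    int dx = b.first - a.first, dy = b.second - a.second;
    if (dx < 0 || dy < 0) return 0;
    return C(dx + dy, dx);
}

int main() {
    f[0] = 1;
    for (int i = 1; i < MAXF; ++i) f[i] = f[i-1] * i % MOD;
    finv[MAXF-1] = power(f[MAXF-1], MOD - 2);
    for (int i = MAXF-1; i > 0; --i) finv[i-1] = finv[i] * i % MOD;

    int n = 5, m = 5;                             // example grid
    vector<pair<int,int>> cells = {{2,2},{3,4}};  // example obstacles
    sort(cells.begin(), cells.end());
    cells.insert(cells.begin(), {1, 1});          // start
    cells.push_back({n, m});                      // end
    int k = cells.size();

    // d[i] = paths from start to cells[i] avoiding all other obstacles:
    // take all paths, then subtract those whose first obstacle is t.
    vector<long long> d(k);
    for (int i = 0; i < k; ++i) {
        d[i] = ways(cells[0], cells[i]);
        for (int t = 1; t < i; ++t)
            d[i] = (d[i] - d[t] * ways(cells[t], cells[i])) % MOD;
    }
    cout << (d[k-1] % MOD + MOD) % MOD << endl;   // prints 18 here
    return 0;
}
```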
### The number of coprime quadruples

You're given $n$ numbers: $a_1, a_2, \ldots, a_n$. You are required to count the number of ways to choose four numbers so that their combined greatest common divisor is equal to one.

We will solve the inverse problem: compute the number of "bad" quadruples, i.e. quadruples in which all numbers are divisible by a number $d > 1$.

We will use the inclusion-exclusion principle while summing over all possible groups of four numbers divisible by a divisor $d$:

$$ans = \sum_{d \ge 2} (-1)^{deg(d)-1} \cdot f(d)$$

where $deg(d)$ is the number of primes in the factorization of the number $d$ and $f(d)$ is the number of quadruples divisible by $d$ (the sum is effectively taken over squarefree $d$ only; a number with a repeated prime factor contributes nothing).

To calculate the function $f(d)$, you just have to count the number of multiples of $d$ (as mentioned in a previous task) and use binomial coefficients to count the number of ways to choose four of them.

Thus, using the inclusion-exclusion formula, we sum the number of groups of four divisible by a prime number, then subtract the number of quadruples which are divisible by the product of two primes, add quadruples divisible by three primes, etc.

### The number of harmonic triplets

You are given a number $n \le 10^6$. You are required to count the number of triples $2 \le a < b < c \le n$ that satisfy one of the following conditions:

• either ${\rm gcd}(a,b) = {\rm gcd}(a,c) = {\rm gcd}(b,c) = 1$,
• or ${\rm gcd}(a,b) > 1, {\rm gcd}(a,c) > 1, {\rm gcd}(b,c) > 1$.

First, go straight to the inverse problem, i.e. count the number of non-harmonic triples.

Second, note that any non-harmonic triplet is made of a pair of coprime numbers and a third number that is not coprime with at least one of the pair.

Thus, the number of non-harmonic triples that contain $i$ is equal to the number of integers from $2$ to $n$ that are coprime with $i$, multiplied by the number of integers that are not coprime with $i$.

Each non-harmonic triple $(a,b,c)$ has one of the two forms:

either $gcd(a,b) = 1 \wedge gcd(a,c) > 1 \wedge gcd(b,c) > 1$

or $gcd(a,b) = 1 \wedge gcd(a,c) = 1 \wedge gcd(b,c) > 1$

In both of these cases, the triple will be counted twice. The first case will be counted when $i = a$ and when $i = b$. The second case will be counted when $i = b$ and when $i = c$. Therefore, to compute the number of non-harmonic triples, we sum this calculation over all $i$ from $2$ to $n$ and divide it by $2$.

Now all we have left to solve is to learn to count the numbers coprime to $i$ in the interval $[2;n]$. Although this problem has already been mentioned, the above solution is not suitable here: it would require the factorization of each of the integers from $2$ to $n$, and then iterating through all subsets of these primes.

A faster solution is possible with the following modification of the sieve of Eratosthenes:

1. First, we find all numbers in the interval $[2;n]$ whose prime factorization does not include any prime factor twice (i.e., the squarefree numbers). We will also need to know, for these numbers, how many prime factors they include.

• To do this we will maintain an array $deg[i]$ to store the number of primes in the factorization of $i$, and an array $good[i]$ to mark whether $i$ contains each prime factor at most once ($good[i] = 1$) or not ($good[i] = 0$). When iterating from $2$ to $n$, if we reach a number that has $deg$ equal to $0$, then it is a prime and its $deg$ is $1$.
• During the sieve of Eratosthenes, we iterate $i$ from $2$ to $n$. When processing a prime number we go through all of its multiples and increase their $deg[]$. If one of these multiples is a multiple of the square of $i$, then we set its $good$ to false.
2.
Second, we need to calculate the answer for all $i$ from $2$ to $n$, i.e., the array $cnt[]$: the number of integers not coprime with $i$.

• To do this, remember how the inclusion-exclusion formula works; here we implement the same concept, but with inverted logic: we iterate over a component (a product of primes from the factorization) and add or subtract its term in the inclusion-exclusion formula of each of its multiples.
• So, let's say we are processing a number $i$ such that $good[i] = true$, i.e., it is involved in the inclusion-exclusion formula. Iterate through all numbers that are multiples of $i$, and either add or subtract $\lfloor n/i \rfloor$ from their $cnt[]$ (the sign depends on $deg[i]$: if $deg[i]$ is odd, we add, otherwise we subtract).

Here's a C++ implementation (the accumulator `ans_bad`, missing from the extracted text, has been restored):

```
const int MAXN = 1000006;

int n;
bool good[MAXN];
int deg[MAXN], cnt[MAXN];

long long solve() {
    memset (good, 1, sizeof good);
    memset (deg, 0, sizeof deg);
    memset (cnt, 0, sizeof cnt);

    long long ans_bad = 0;
    for (int i=2; i<=n; ++i) {
        if (good[i]) {
            if (deg[i] == 0) deg[i] = 1;
            for (int j=1; i*j<=n; ++j) {
                // mark multiples of i*i as not squarefree
                if (j > 1 && deg[i] == 1)
                    if (j % i == 0)
                        good[i*j] = false;
                    else
                        ++deg[i*j];
                cnt[i*j] += (n / i) * (deg[i]%2==1 ? +1 : -1);
            }
        }
        // pairs (coprime with i) x (not coprime with i)
        ans_bad += (cnt[i] - 1) * 1ll * (n-1 - cnt[i]);
    }

    return (n-1) * 1ll * (n-2) * (n-3) / 6 - ans_bad / 2;
}
```

The asymptotics of our solution is $O(n \log n)$, as for almost every number up to $n$ we make $n/i$ iterations in the nested loop.

### The number of permutations without fixed points (derangements)

Prove that the number of permutations of length $n$ without fixed points (i.e. no number $i$ is in position $i$; such a permutation is also called a derangement) is equal to the following number:

$$n! - \binom{n}{1} \cdot (n-1)! + \binom{n}{2} \cdot (n-2)! - \binom{n}{3} \cdot (n-3)! + \cdots \pm \binom{n}{n} \cdot (n-n)!$$

and approximately equal to:

$$\frac{ n! }{ e }$$

(if you round this expression to the nearest whole number, you get exactly the number of permutations without fixed points)

Denote by $A_k$ the set of permutations of length $n$ with a fixed point at position $k$ ($1 \le k \le n$) (i.e. element $k$ is at position $k$).

We now use the inclusion-exclusion formula to count the number of permutations with at least one fixed point. For this we need to compute the sizes of the intersections of the sets $A_i$, as follows:

$$\begin{eqnarray} \left| A_p \right| &=& (n-1)!\ , \\ \left| A_p \cap A_q \right| &=& (n-2)!\ , \\ \left| A_p \cap A_q \cap A_r \right| &=& (n-3)!\ , \\ \cdots , \end{eqnarray}$$

because if we know that the number of fixed points equals $x$, then we know the position of $x$ elements of the permutation, and all other $(n-x)$ elements can be placed anywhere.

Substituting this into the inclusion-exclusion formula, and given that the number of ways to choose a subset of size $x$ from a set of $n$ elements is equal to $\binom{n}{x}$, we obtain a formula for the number of permutations with at least one fixed point:

$$\binom{n}{1} \cdot (n-1)! - \binom{n}{2} \cdot (n-2)! + \binom{n}{3} \cdot (n-3)! - \cdots \pm \binom{n}{n} \cdot (n-n)!$$

Then the number of permutations without fixed points is equal to:

$$n! - \binom{n}{1} \cdot (n-1)! + \binom{n}{2} \cdot (n-2)! - \binom{n}{3} \cdot (n-3)! + \cdots \pm \binom{n}{n} \cdot (n-n)!$$
Simplifying this expression, we obtain exact and approximate expressions for the number of permutations without fixed points:

$$n! \left( 1 - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \cdots \pm \frac{1}{n!} \right ) \approx \frac{n!}{e}$$

(because the sum in brackets consists of the first $n+1$ terms of the Taylor series expansion of $e^{-1}$)

It is worth noting that a similar problem can be solved this way: when we require that there are no fixed points among the first $m$ elements of the permutation (rather than among all elements, as we just solved). The resulting formula is like the exact formula above, but the sum goes up to $m$ instead of $n$.

## Practice Problems

A list of tasks that can be solved using the inclusion-exclusion principle:" ]
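To complement the derangement discussion above, here is a small C++ sketch (ours, not from the article; identifiers are illustrative) that computes the derangement numbers two ways: via the recurrence $D(n) = (n-1)(D(n-1) + D(n-2))$ and via an integer form of the inclusion-exclusion sum, $S(n) = n \cdot S(n-1) + (-1)^n$:

```
#include <cstdio>

int main() {
    const int N = 10;

    // D(n) = (n-1) * (D(n-1) + D(n-2)), D(0) = 1, D(1) = 0
    long long d[N + 1];
    d[0] = 1; d[1] = 0;
    for (int i = 2; i <= N; ++i)
        d[i] = (i - 1) * (d[i - 1] + d[i - 2]);

    // Integer form of n! * (1 - 1/1! + 1/2! - ... +- 1/n!):
    // S(n) = n * S(n-1) + (-1)^n, S(0) = 1
    long long s = 1;
    for (int i = 1; i <= N; ++i)
        s = i * s + (i % 2 == 0 ? 1 : -1);

    // Both print 1334961 for N = 10.
    printf("D(%d) = %lld (recurrence), %lld (inclusion-exclusion)\n",
           N, d[N], s);
    return 0;
}
```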
[ null, "https://raw.githubusercontent.com/e-maxx-eng/e-maxx-eng/master/img/venn-inclusion-exclusion.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82817787,"math_prob":0.99987996,"size":22297,"snap":"2019-51-2020-05","text_gpt3_token_len":6776,"char_repetition_ratio":0.15045081,"word_repetition_ratio":0.06818182,"special_character_ratio":0.32058126,"punctuation_ratio":0.09550684,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000013,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-17T15:57:19Z\",\"WARC-Record-ID\":\"<urn:uuid:1994e8e9-d883-41ae-99a9-81e35f966d63>\",\"Content-Length\":\"35111\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:676b711f-3bdc-462f-b663-685238a01faf>\",\"WARC-Concurrent-To\":\"<urn:uuid:29ab5365-3474-4772-b58f-c5e2be1601d8>\",\"WARC-IP-Address\":\"216.239.36.21\",\"WARC-Target-URI\":\"https://cp-algorithms.com/combinatorics/inclusion-exclusion.html\",\"WARC-Payload-Digest\":\"sha1:KDT6DPQZNGOEYCZEO6OYXRCUOBCFOII4\",\"WARC-Block-Digest\":\"sha1:WMK4SW27W2IANXFDTVZSWCVUTV3BDXK4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250589861.0_warc_CC-MAIN-20200117152059-20200117180059-00422.warc.gz\"}"}
https://ncatlab.org/nlab/show/flow+of+a+vector+field
[ "# nLab flow of a vector field

# Contents

## Idea

Given a tangent vector field on a differentiable manifold $X$, its flow is the group of diffeomorphisms of $X$ that lets the points of the manifold "flow along the vector field", hence sends them along flow lines (integral curves) that are tangent to the vector field.

## Definition

Throughout, let $X$ be a differentiable manifold and let $v \in \Gamma(T X)$ be a continuously differentiable vector field on $X$ (i.e. of class $C^1$).

###### Definition

(integral curves/flow lines)

An integral curve or flow line of the vector field $v$ is a differentiable function of the form

$\gamma \;\colon\; U \longrightarrow X$

for $U \subset \mathbb{R}$ an open interval, with the property that its tangent vector at any $t \in U$ equals the value of the vector field $v$ at the point $\gamma(t)$:

$\underset{t \in U}{\forall} \left( d \gamma_t = v_{\gamma(t)} \right) \,.$

###### Definition

(flow of a vector field)

A global flow of $v$ is a function of the form

$\Phi \;\colon\; X \times \mathbb{R} \longrightarrow X$

such that for each $x \in X$ the function $\Phi(x,-) \colon \mathbb{R} \to X$ is an integral curve of $v$ (def. ).

A flow domain is an open subset $O \subset X \times \mathbb{R}$ such that for all $x \in X$ the intersection $O \cap \{x\} \times \mathbb{R}$ is an open interval containing $0$.

A flow of $v$ on a flow domain $O \subset X \times \mathbb{R}$ is a differentiable function

$X \times \mathbb{R} \supset O \overset{\phi}{\longrightarrow} X$

such that for all $x \in X$ the function $\phi(x,-)$ is an integral curve of $v$ (def. ).

###### Definition

(complete vector field)

The vector field $v$ is called a complete vector field if it admits a global flow (def. ).
### Synthetic definition

In synthetic differential geometry a tangent vector field is a morphism $v \colon X \to X^D$ such that

$\array{ && X^D \\ & {}^{\mathllap{v}}\nearrow & \downarrow^{\mathrlap{X^{\ast \to D}}} \\ X &=& X }$

The internal hom-adjunct of such a morphism is of the form

$\tilde v \;\colon\; D \longrightarrow X^X \,.$

If $X$ is sufficiently nice (a microlinear space should be sufficient) then this morphism factors through the internal automorphism group $\mathbf{Aut}(X)$ inside the internal endomorphisms $X^X$

$\tilde v \;\colon\; D \longrightarrow \mathbf{Aut}(X) \hookrightarrow X^X \,.$

Then a group homomorphism

$\phi_v \;\colon\; (R,+) \longrightarrow \mathbf{Aut}(X)$

with the property that, restricted along any of the affine inclusions $D \hookrightarrow \mathbb{R}$, it equals $\tilde v$

$\array{ D &\hookrightarrow& \mathbb{R} \\ & {}_{\mathllap{\tilde v}}\searrow & \downarrow^{\mathrlap{\phi}} \\ && \mathbf{Aut}(X) &\hookrightarrow& X^X }$

is a flow for $v$.

## Properties

###### Proposition

Let $\phi$ be a global flow of a vector field $v$ (def. ). This yields an action of the additive group $(\mathbb{R},+)$ of real numbers on the differentiable manifold $X$ by diffeomorphisms, in that

• $\phi_v(-,0) = id_X$;

• $\phi_v(-,t_2) \circ \phi_v(-,t_1) = \phi_v(-, t_1 + t_2)$;

• $\phi_v(-,-t) = \phi_v(-,t)^{-1}$.

###### Proposition

(fundamental theorem of flows)

Let $X$ be a smooth manifold and $v \in \Gamma(T X)$ a smooth vector field. Then $v$ has a unique maximal flow (def. ).

This unique flow is often denoted $\phi_v$ or $\exp(v)$ (see also at exponential map).

###### Proposition

Let $X$ be a compact smooth manifold. Then every smooth vector field $v \in \Gamma(T X)$ is a complete vector field (def. ), hence has a global flow (def. ).

• John Lee, chapter 12 "Integral curves and flows" of Introduction to smooth manifolds (pdf)" ]
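Integral curves can also be traced numerically. The following sketch (ours, not from the nLab page; the vector field, step size, and names are illustrative choices) integrates the flow of $v(x,y) = (-y, x)$ on $\mathbb{R}^2$, whose exact flow rotates the plane, using a fourth-order Runge-Kutta step:

```
#include <cstdio>
#include <cmath>

struct Vec { double x, y; };

// Example field v(x, y) = (-y, x); its exact flow is rotation.
Vec v(Vec p) { return { -p.y, p.x }; }

// One RK4 step along the integral curve through p.
Vec rk4_step(Vec p, double h) {
    Vec k1 = v(p);
    Vec k2 = v({ p.x + 0.5*h*k1.x, p.y + 0.5*h*k1.y });
    Vec k3 = v({ p.x + 0.5*h*k2.x, p.y + 0.5*h*k2.y });
    Vec k4 = v({ p.x + h*k3.x, p.y + h*k3.y });
    return { p.x + h*(k1.x + 2*k2.x + 2*k3.x + k4.x)/6.0,
             p.y + h*(k1.y + 2*k2.y + 2*k3.y + k4.y)/6.0 };
}

int main() {
    const double PI = 3.141592653589793;
    Vec p = { 1.0, 0.0 };
    double h = 0.01;
    // Flow for time t = pi: the exact flow sends (1,0) to (-1,0).
    int steps = (int)std::round(PI / h);
    for (int i = 0; i < steps; ++i) p = rk4_step(p, h);
    printf("phi_pi(1,0) ~= (%.6f, %.6f)\n", p.x, p.y);
    return 0;
}
```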
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89251125,"math_prob":0.99972063,"size":2999,"snap":"2019-43-2019-47","text_gpt3_token_len":689,"char_repetition_ratio":0.15926544,"word_repetition_ratio":0.029288704,"special_character_ratio":0.20140047,"punctuation_ratio":0.103896104,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999753,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-14T01:47:59Z\",\"WARC-Record-ID\":\"<urn:uuid:33de5f2d-deff-44ab-a887-8099d21aaa91>\",\"Content-Length\":\"61440\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f67a9510-4038-4811-869c-720687dc2792>\",\"WARC-Concurrent-To\":\"<urn:uuid:56c22f0e-5c9d-47ac-bcda-ce526710b7f6>\",\"WARC-IP-Address\":\"104.27.170.19\",\"WARC-Target-URI\":\"https://ncatlab.org/nlab/show/flow+of+a+vector+field\",\"WARC-Payload-Digest\":\"sha1:LOEJLVID24A3YO7PLM6VHVY7FS5ZRLFT\",\"WARC-Block-Digest\":\"sha1:P7GJNBE5JEXJKWQ644SIBGBDULARUOFL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496667767.6_warc_CC-MAIN-20191114002636-20191114030636-00537.warc.gz\"}"}
https://www.teachoo.com/7498/2302/Ex-2.3--3/category/Ex-2.3/
[ "Ex 2.3

Chapter 2 Class 6 Whole Numbers
Serial order wise", null, "### Transcript

Ex 2.3, 3 If the product of two whole numbers is 1, can we say that one or both of them will be 1? Justify through examples. 1 × 1 = 1 (the answer is 1 when both numbers are 1) 2 × 1 = 2 3 × 1 = 3 10 × 1 = 10 1000 × 1 = 1000 Multiplying any whole number other than 1 by 1 gives a product greater than 1. Thus, if the product is 1, both numbers must be 1 ∴ 1 × 1 = 1 is the only possible equation", null, "" ]
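A tiny brute-force check of this claim (our own illustrative sketch; the search bound is arbitrary):

```
#include <cstdio>

int main() {
    // Among whole numbers 0..1000, the only pair with product 1 is (1, 1).
    for (int a = 0; a <= 1000; ++a)
        for (int b = 0; b <= 1000; ++b)
            if (a * b == 1)
                printf("a = %d, b = %d\n", a, b);  // prints only 1, 1
    return 0;
}
```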
[ null, "https://d1avenlh0i1xmr.cloudfront.net/ad0a1473-499f-4e80-823b-bcba5cb95eaf/2.3.jpg", null, "https://www.teachoo.com/static/misc/Davneet_Singh.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9370279,"math_prob":0.99998283,"size":354,"snap":"2022-05-2022-21","text_gpt3_token_len":137,"char_repetition_ratio":0.16571428,"word_repetition_ratio":0.023255814,"special_character_ratio":0.43220338,"punctuation_ratio":0.10344828,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995035,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T13:00:05Z\",\"WARC-Record-ID\":\"<urn:uuid:7b8d0f4f-aaf8-4777-b0f0-b11f02927d74>\",\"Content-Length\":\"138994\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b5fdb4cb-2efa-4531-af3d-8fb0fca10a10>\",\"WARC-Concurrent-To\":\"<urn:uuid:d89f1f57-c379-4fab-885a-c23c6187bf33>\",\"WARC-IP-Address\":\"18.232.245.187\",\"WARC-Target-URI\":\"https://www.teachoo.com/7498/2302/Ex-2.3--3/category/Ex-2.3/\",\"WARC-Payload-Digest\":\"sha1:IY3NAGM64SXWEC3DMACBS3IQX6EQYP6Z\",\"WARC-Block-Digest\":\"sha1:UVPQM467AOV7F4N4KJ5OU334BKLFZH5A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662510117.12_warc_CC-MAIN-20220516104933-20220516134933-00001.warc.gz\"}"}
http://sjme.journals.sharif.edu/article_20843.html
[ "# Experimental study of the performance and elastic deformations of a flexible wing in flapping motion using image processing

Article type: Research article

Author

Department of Aerospace Engineering, Sharif University of Technology

Abstract

In flapping-wing mechanical birds, the dynamic lift and propulsive forces are produced by the flapping motion of flexible wings. These unsteady forces are a combination of aerodynamic forces, arising from the wing geometry and the flapping angular velocity, and inertial forces, arising from the angular acceleration and the mass distribution of the wing; together they cause aeroelastic deformation of the wing. In this paper, an experimental setup is introduced and a method is presented for experimentally measuring the bending deformations, as well as the angular displacement of the wing root, during the flapping motion. With the proposed method, the wing kinematics, including the flapping angle, the angular velocity and acceleration, and the relative deformations of any desired point on the wing, can be measured. The largest wing deformations occur at wing angles close to zero, i.e. when the wing root is parallel to the horizon; in this state the combination of dynamic forces acting on the wing is at its maximum. The results can be used to validate aeroelastic simulations of flexible flapping wings.

Keywords

Subjects

Article title [English]

### EXPERIMENTAL INVESTIGATION OF AEROELASTIC DEFORMATIONS OF A FLAPPING WING WITH IMAGE PROCESSING TECHNIQUE

Author [English]

• A. Ebrahimi
Dept. of Aerospace Engineering, Sharif University of Technology

Abstract [English]

In recent decades, flapping-wing micro aerial vehicles (FWMAVs) have attracted increasing interest for flight at low Reynolds numbers. The major components of a flapping-wing system are the flapping mechanism and the flexible wings. The degree of wing flexibility plays an important role in the production of the unsteady aerodynamic forces required for flight. In the present work, a simple four-bar crank-rocker mechanism transforms the rotational motion of a small electric motor into a harmonic flapping motion. The flapping frequency is controlled directly by altering the input voltage. A flexible membrane half-elliptical planform wing with a span of 100 cm, a mass of 10 grams and an aspect ratio of 6 is developed. Furthermore, a test bed is built to investigate the aeroelastic features of a flapping-wing vehicle. To extract important kinematic parameters such as relative deflection, angular velocity and acceleration, a high-speed camera facility and image processing techniques are used.
Results show that the total normal force has two components: the inertial force, which is a function of the wing mass distribution and the flapping kinematics, and the aerodynamic force, caused by the flapping motion and wing deformation. These inertial and aerodynamic forces bend and twist the wings during the flapping motion, resulting in passive shape variation that may affect many aspects of flight performance. Maximum deflection happens mostly in the zero-angle position of the wing, when the wing is parallel to the horizon; in this condition the combination of dynamic forces is at its maximum as well. In addition, by using this facility, verification of aeroelastic simulations becomes possible.

Keywords [English]

• Flapping wing
• flexible membrane wing
• aeroelastic deformation
• image processing" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8553463,"math_prob":0.9854354,"size":1837,"snap":"2021-04-2021-17","text_gpt3_token_len":439,"char_repetition_ratio":0.12711403,"word_repetition_ratio":0.0,"special_character_ratio":0.18399565,"punctuation_ratio":0.08116883,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97375506,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-15T10:27:56Z\",\"WARC-Record-ID\":\"<urn:uuid:ee3bb33d-10a4-4228-abb4-5e7d9311ae5a>\",\"Content-Length\":\"56153\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2989413d-965b-409b-9cf8-d89151e2da6f>\",\"WARC-Concurrent-To\":\"<urn:uuid:04b2c436-7edb-4adb-8a1c-7202242ea958>\",\"WARC-IP-Address\":\"81.31.168.62\",\"WARC-Target-URI\":\"http://sjme.journals.sharif.edu/article_20843.html\",\"WARC-Payload-Digest\":\"sha1:BQEFMAJHXDLYCWZ2X52GHCCK2XT3ZRZ6\",\"WARC-Block-Digest\":\"sha1:ZOF74FO5XW57TCUZMEUMQZPPT2VSVC6P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038084765.46_warc_CC-MAIN-20210415095505-20210415125505-00585.warc.gz\"}"}
http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/chap08/chap08_03.html
[ "3. Sort

3.1 Bubble Sort", null, "Bubble sort is a simple sorting algorithm. Although the algorithm is simple, it is not efficient for sorting large lists; other algorithms are better.", null, "Analysis: O(n²)

```
void bubble_sort(int list[], int n) {
    int i, j;
    for (i = 0; i < n-1; i++) {
        for (j = n-1; j > i; j--)
            if (list[j-1] > list[j])
                swap(&list[j-1], &list[j]);
    }
}
```

3.2 Insertion Sort", null, "Algorithm
– Every repetition of insertion sort removes an element from the input data, inserting it into the correct position in the already-sorted list, until no input elements remain.
– In each iteration the first remaining entry of the input is removed and inserted into the result at the correct position, thus extending the result:", null, "becomes", null, "```
void insertion_sort(int list[], int n) {
    int i, j;
    int temp;
    for (j = 1; j < n; j++) {
        temp = list[j];
        i = j - 1;
        while (i >= 0 && list[i] > temp) {
            list[i+1] = list[i];
            i--;
        }
        list[i+1] = temp;
    }
}
```

j | list
- | 5 4 3 2 1
1 | 4 5 3 2 1
2 | 3 4 5 2 1
3 | 2 3 4 5 1
4 | 1 2 3 4 5", null, "Analysis: O(n²)", null, "3.3 Quick Sort", null, "Quick sort is a sorting algorithm developed by Tony Hoare that, on average, makes O(n log n) comparisons.", null, "Analysis: O(n log n) in the average case and O(n²) in the worst case.
① Quick sort is often faster in practice than other O(n log n) algorithms.", null, "Algorithm
- The steps are:
① When n<=1, the list is sorted.
When n>1, select a pivot element from out of the n elements.
② Partition the n elements into 3 segments: left, middle, and right.
The middle segment contains only the pivot element.
All elements in the left segment are <= pivot.
All elements in the right segment are >= pivot.
③ Sort the left and right segments recursively.
- The answer is the sorted left segment, followed by the middle segment, followed by the sorted right segment.

Example", null, "list", null, "① Select a pivot as list[left] = 6
② Partition", null, "③ Sort left and right segments recursively.

Choice of pivot", null, "Leftmost element
– Pivot is the leftmost element in the list that is to be sorted.
– When sorting list[6:20], use list[6] as the pivot.", null, "Random selection
– Randomly select an index r such that left <= r <= right.
– When sorting list[6:20], generate a random number r in the range [6, 20]. Use list[r] as the pivot.", null, "Median-of-three rule
– Select the element with the median key as the pivot.
– When sorting list[6:20], examine list[6], list[(6+20)/2] = list[13], and list[20].
Select the element with the median (i.e., middle) key.
– If list[6] = 30, list[13] = 2, and list[20] = 10, then list[20] becomes the pivot.
– If list[6] = 3, list[13] = 2, and list[20] = 10, then list[6] becomes the pivot.", null, "C code (version 1)
```
void quick_sort(int list[], int left, int right) {
    int pivot, i, j;
    if (left < right) {
        // select a pivot
        pivot = list[left];
        // partition
        i = left;
        j = right+1;
        do {
            do i++; while (list[i] < pivot);
            do j--; while (list[j] > pivot);
            if (i < j) swap(&list[i], &list[j]);
        } while (i < j);
        swap(&list[left], &list[j]);
        // sort left and right segments recursively
        quick_sort(list, left, j-1);
        quick_sort(list, j+1, right);
    }
}
```

C code (version 2)
```
void quick_sort(int list[], int left, int right) {
    int pivot, i, mid;
    if (left < right) {
        // pivot is midpoint; move to left side
        swap(&list[left], &list[(left+right)/2]);
        pivot = list[mid=left];
        // partition
        // left side < pivot (left+1 to mid),
        // right side >= pivot (mid+1 to right)
        for (i=left+1; i<=right; i++) {
            if (list[i] < pivot)
                swap(&list[i], &list[++mid]);
        }
        // restore pivot position
        swap(&list[left], &list[mid]);
        if (mid > left) quick_sort(list, left, mid-1);
        if (mid < right) quick_sort(list, mid+1, right);
    }
}
```

Average Complexity", null, "If T(n) is the time taken to sort a list of n records, then when the list splits roughly into two equal parts each time a record is positioned correctly, we have", null, "3.4 Comparison of Sort Methods

Name           | Best     | Average  | Worst
Bubble sort    | n        | n²       | n²
Insertion sort | n        | n²       | n²
Quick sort     | n log n  | n log n  | n²
Merge sort     | n log n  | n log n  | n log n
Heap sort      | n log n  | n log n  | n log n" ]
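The comparison table lists merge sort, but the notes give no code for it. As a complement, here is a minimal top-down merge sort in C-style code (ours, not from the notes; the helper name and temporary buffer are our own choices), matching the table's O(n log n) bounds in the best, average, and worst cases:

```
#include <stdio.h>
#include <string.h>

void merge_sort(int list[], int tmp[], int left, int right) {
    if (left >= right) return;
    int mid = (left + right) / 2;
    merge_sort(list, tmp, left, mid);
    merge_sort(list, tmp, mid+1, right);
    /* merge the two sorted halves into tmp, then copy back */
    int i = left, j = mid+1, k = left;
    while (i <= mid && j <= right)
        tmp[k++] = (list[i] <= list[j]) ? list[i++] : list[j++];
    while (i <= mid)   tmp[k++] = list[i++];
    while (j <= right) tmp[k++] = list[j++];
    memcpy(list + left, tmp + left, (right - left + 1) * sizeof(int));
}

int main(void) {
    int a[5] = {5, 4, 3, 2, 1}, tmp[5];
    merge_sort(a, tmp, 0, 4);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);  /* 1 2 3 4 5 */
    return 0;
}
```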
[ null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/common/bullet_01.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/common/bullet_01.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/common/bullet_01.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/chap_08/chap08_img_05.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/chap_08/chap08_img_06.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/common/bullet_01.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/chap_08/chap08_img_07.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/common/bullet_01.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/common/bullet_01.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/common/bullet_01.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/common/bullet_01.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/chap_08/chap08_img_08.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/chap_08/chap08_img_09.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/common/bullet_01.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/common/bullet_01.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/common/bullet_01.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/common/bullet_01.jpg", null, "http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/images/chap_08/chap08_img_10.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59272915,"math_prob":0.9816805,"size":3901,"snap":"2021-04-2021-17","text_gpt3_token_len":1164,"char_repetition_ratio":0.14575315,"word_repetition_ratio":0.09497207,"special_character_ratio":0.3499103,"punctuation_ratio":0.15903614,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9977296,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,null,null,null,null,null,null,1,null,1,null,null,null,1,null,null,null,null,null,null,null,null,null,1,null,1,null,null,null,null,null,null,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-24T15:49:07Z\",\"WARC-Record-ID\":\"<urn:uuid:4d4c7952-16bb-4ef5-8705-8fb62f11628a>\",\"Content-Length\":\"23207\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0a9d8682-9d9f-4dae-9d1a-2015ab32bc79>\",\"WARC-Concurrent-To\":\"<urn:uuid:76fdfb55-7f40-4eab-8828-28e38a82a641>\",\"WARC-IP-Address\":\"163.239.192.200\",\"WARC-Target-URI\":\"http://cnl.sogang.ac.kr/soclasstv/Programming/C_programming/chap08/chap08_03.html\",\"WARC-Payload-Digest\":\"sha1:3CTOL5NXG3WY2DI4IPNAGKYBWYZHCNLK\",\"WARC-Block-Digest\":\"sha1:NZUIQT3VTUKRHKJKI5AYKI5HYKBVOHUY\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703549416.62_warc_CC-MAIN-20210124141945-20210124171945-00336.warc.gz\"}"}
http://www.xqzz.net/read/0-bc-f2-b1-e3-bc-c6-cb-e3-a3-ac25X404.html
[ "Simplified calculation: 25×404

=25×(400+4) =25×400+25×4 =10000+100 =10100

10100

(404)×25 =400×25+4×25 =10000+100 =10100

1. This problem can be simplified by decomposing the number and applying the associative law of multiplication: 404×25 = (101×4)×25 = 101×(4×25) = 101×100 = 10100. 2. Common shortcut methods for elementary school calculation: ① Use the commutative or associative law: when an expression contains only addition and subtraction and no parentheses, you can insert parentheses directly after a plus sign...

404×25 =(400+4)×25 =400×25+4×25 =10000+100 =10100 In simplified calculation there is a trick: whenever you see 25, think of 4, because 25×4=100. In this problem, 404 conveniently splits into 400+4, which is exactly what the 25 needs; then apply the distributive law: (a+b)×c = a×c + b×c

404×25 =400×25+4×25 =10100 If you find my answer satisfactory, I hope you will accept it, because accepted answers are the motivation for our tireless efforts!

Compute 404×9×25 with a shortcut: =101×4×9×25 =(101×9)×(4×25) =909×100 =90900" ]
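A one-minute machine check of the shortcuts above (our own illustrative sketch):

```
#include <cstdio>

int main() {
    int direct   = 25 * 404;
    int shortcut = 25 * 400 + 25 * 4;   // distributive law
    int grouped  = 101 * (4 * 25);      // 404 = 101*4, associative law
    printf("%d %d %d\n", direct, shortcut, grouped);  // 10100 10100 10100
    return 0;
}
```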
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.7838194,"math_prob":0.9993186,"size":751,"snap":"2019-26-2019-30","text_gpt3_token_len":601,"char_repetition_ratio":0.14457831,"word_repetition_ratio":0.0,"special_character_ratio":0.6045273,"punctuation_ratio":0.05970149,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99909353,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-25T09:51:02Z\",\"WARC-Record-ID\":\"<urn:uuid:ac712af7-d959-4d14-b4d2-1873cb9663fa>\",\"Content-Length\":\"8129\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:293cfbc1-9d2a-448b-bf91-cffdd0a4fedc>\",\"WARC-Concurrent-To\":\"<urn:uuid:f7ac8f90-26c7-47d1-9eb0-be11ce195293>\",\"WARC-IP-Address\":\"103.64.12.183\",\"WARC-Target-URI\":\"http://www.xqzz.net/read/0-bc-f2-b1-e3-bc-c6-cb-e3-a3-ac25X404.html\",\"WARC-Payload-Digest\":\"sha1:3E3SWHNMHJACKYKKRI2P63QNN7KRECX7\",\"WARC-Block-Digest\":\"sha1:LG4WESGVITLASCGSUSV43ONTWHKEX52F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999817.30_warc_CC-MAIN-20190625092324-20190625114324-00373.warc.gz\"}"}
https://www.geeksforgeeks.org/count-unset-bits-number/
[ "# Count unset bits of a number

Given a number n, count unset bits after MSB (Most Significant Bit).

Examples :

```Input : 17
Output : 3
Binary of 17 is 10001,
so the count of unset bits is 3

Input : 7
Output : 0
```

A Simple Solution is to traverse through all bits and count unset bits.

## C++

```
// C++ program to count unset bits in an integer
#include <iostream>
using namespace std;

int countunsetbits(int n)
{
    int count = 0;

    // x holds one set digit at a time
    // starting from LSB to MSB of n.
    for (int x = 1; x <= n; x = x << 1)
        if ((x & n) == 0)
            count++;

    return count;
}

// Driver code
int main()
{
    int n = 17;
    cout << countunsetbits(n);
    return 0;
}
```

## Java

```
// JAVA Code to Count unset bits in a number
class GFG {

    public static int countunsetbits(int n)
    {
        int count = 0;

        // x holds one set digit at a time
        // starting from LSB to MSB of n.
        for (int x = 1; x <= n; x = x << 1)
            if ((x & n) == 0)
                count++;

        return count;
    }

    /* Driver program to test above function */
    public static void main(String[] args)
    {
        int n = 17;
        System.out.println(countunsetbits(n));
    }
}
// This code is contributed by Arnav Kr. Mandal.
```

## Python3

```
# Python 3 program to count unset
# bits in an integer

def countunsetbits(n):
    count = 0

    # x holds one set digit at a time
    # starting from LSB to MSB of n.
    x = 1
    while(x < n + 1):
        if ((x & n) == 0):
            count += 1
        x = x << 1

    return count

# Driver code
if __name__ == '__main__':
    n = 17
    print(countunsetbits(n))

# This code is contributed by
# Shashank_Sharma
```

## C#

```
// C# Code to Count unset
// bits in a number
using System;

class GFG {

    // Function to count unset bits
    public static int countunsetbits(int n)
    {
        int count = 0;

        // x holds one set digit at a time
        // starting from LSB to MSB of n.
        for (int x = 1; x <= n; x = x << 1)
            if ((x & n) == 0)
                count++;

        return count;
    }

    // Driver Code
    public static void Main()
    {
        int n = 17;
        Console.Write(countunsetbits(n));
    }
}

// This code is contributed by Nitin Mittal.
```

## PHP

The PHP listing was lost in extraction; the following reconstruction mirrors the other languages:

```
<?php
// PHP program to count unset bits in an integer
function countunsetbits($n)
{
    $count = 0;

    // x holds one set digit at a time
    // starting from LSB to MSB of n.
    for ($x = 1; $x <= $n; $x = $x << 1)
        if (($x & $n) == 0)
            $count++;

    return $count;
}

// Driver code
$n = 17;
echo countunsetbits($n);
?>
```

Output :

`3`

The above solution's complexity is O(log n).

Efficient Solutions :
The idea is to toggle the bits (turn the unset bits below the MSB into set bits) in O(1) time, then apply any of the methods discussed in the count set bits article.

In GCC, we can directly count set bits using __builtin_popcount(). First toggle the bits and then apply the above function.

## C++
```
// An optimized C++ program to count unset bits
// in an integer.
#include <iostream>
using namespace std;

int countUnsetBits(int n)
{
    int x = n;

    // Make all bits below the MSB set
    // (including the MSB)

    // This makes sure two bits
    // (From MSB and including MSB)
    // are set
    n |= n >> 1;

    // This makes sure 4 bits
    // (From MSB and including MSB)
    // are set
    n |= n >> 2;

    n |= n >> 4;
    n |= n >> 8;
    n |= n >> 16;

    // Count set bits in toggled number
    return __builtin_popcount(x ^ n);
}

// Driver code
int main()
{
    int n = 17;
    cout << countUnsetBits(n);
    return 0;
}
```

## Java

```
// An optimized Java program to count unset bits
// in an integer.
class GFG
{

static int countUnsetBits(int n)
{
    int x = n;

    // Make all bits below the MSB set
    // (including the MSB)

    // This makes sure two bits
    // (From MSB and including MSB)
    // are set
    n |= n >> 1;

    // This makes sure 4 bits
    // (From MSB and including MSB)
    // are set
    n |= n >> 2;

    n |= n >> 4;
    n |= n >> 8;
    n |= n >> 16;

    // Count set bits in toggled number
    return Integer.bitCount(x ^ n);
}

// Driver code
public static void main(String[] args)
{
    int n = 17;
    System.out.println(countUnsetBits(n));
}
}

/* This code contributed by PrinciRaj1992 */
```

## Python3

```
# An optimized Python program to count
# unset bits in an integer.

def countUnsetBits(n):
    x = n

    # Make all bits below the MSB set (including the MSB)

    # This makes sure two bits (from MSB and
    # including MSB) are set
    n |= n >> 1

    # This makes sure 4 bits (from MSB and
    # including MSB) are set
    n |= n >> 2

    n |= n >> 4
    n |= n >> 8
    n |= n >> 16

    # Count set bits in the toggled number.
    # floor(log2(x ^ n)) is only the index of the highest zero
    # bit, not a bit count, so count the one-bits directly.
    return bin(x ^ n).count('1')

# Driver code
n = 17
print(countUnsetBits(n))

# This code is contributed by 29AjayKumar
```

## C#
## Java

```java
// An optimized Java program to count unset bits
// in an integer.
class GFG {

    static int countUnsetBits(int n)
    {
        int x = n;

        // Set all bits after the MSB
        // (including the MSB)

        // This makes sure two bits
        // (from MSB and including MSB)
        // are set
        n |= n >> 1;

        // This makes sure 4 bits
        // (from MSB and including MSB)
        // are set
        n |= n >> 2;

        n |= n >> 4;
        n |= n >> 8;
        n |= n >> 16;

        // Count set bits in the toggled number
        return Integer.bitCount(x ^ n);
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 17;
        System.out.println(countUnsetBits(n));
    }
}

/* This code contributed by PrinciRaj1992 */
```

## Python3

```python
# An optimized Python program to count
# unset bits in an integer.

def countUnsetBits(n):
    x = n

    # Set all bits after the MSB (including the MSB)

    # This makes sure two bits (from MSB
    # and including MSB) are set
    n |= n >> 1

    # This makes sure 4 bits (from MSB and
    # including MSB) are set
    n |= n >> 2

    n |= n >> 4
    n |= n >> 8
    n |= n >> 16

    # Count set bits in the toggled number.
    # (The original floor(log2(x ^ n)) here was incorrect
    # when the unset bits are not contiguous, e.g. n = 21.)
    return bin(x ^ n).count('1')

# Driver code
n = 17
print(countUnsetBits(n))

# This code is contributed by 29AjayKumar
```

## C#

```csharp
// An optimized C# program to count unset bits
// in an integer.
using System;

class GFG {

    static int countUnsetBits(int n)
    {
        int x = n;

        // Set all bits after the MSB
        // (including the MSB)
        n |= n >> 1;
        n |= n >> 2;
        n |= n >> 4;
        n |= n >> 8;
        n |= n >> 16;

        // Count set bits in the toggled number
        return BitCount(x ^ n);
    }

    static int BitCount(long x)
    {
        // To store the count
        // of set bits
        int setBits = 0;
        while (x != 0) {
            x = x & (x - 1);
            setBits++;
        }

        return setBits;
    }

    // Driver code
    public static void Main(String[] args)
    {
        int n = 17;
        Console.WriteLine(countUnsetBits(n));
    }
}

// This code contributed by Rajput-Ji
```

## PHP

```php
<?php
// An optimized PHP program to count
// unset bits in an integer.
function countUnsetBits($n)
{
    $x = $n;

    // Set all bits after the MSB
    // (including the MSB)
    $n |= $n >> 1;

    // This makes sure 4
    // bits (from MSB and
    // including MSB) are set
    $n |= $n >> 2;

    $n |= $n >> 4;
    $n |= $n >> 8;
    $n |= $n >> 16;

    // Count set bits in the toggled number.
    // (The original floor(log()) approach was incorrect
    // when the unset bits are not contiguous.)
    $t = $x ^ $n;
    $count = 0;
    while ($t != 0) {
        $t = $t & ($t - 1);
        $count++;
    }
    return $count;
}

// Driver code
$n = 17;
echo countUnsetBits($n);

// This code is contributed
// by ajit
?>
```

Output:

```
3
```

This article is contributed by Devanshu Agarwal.
http://proxy.osapublishing.org/oe/fulltext.cfm?uri=oe-22-S3-A742&id=282411
## Abstract

We present a versatile illumination system where white light emitting diodes are coupled through a planar waveguide to periodically patterned extraction features at the focal plane of a two dimensional lenslet array. Adjusting the position of the lenslet array allows control over both the directionality and divergence of the emitted beam. We describe an analytic design process, and show optimal designs can achieve high luminous emittance (1.3x10^4 lux) over a 2x2 foot aperture with over 75% optical efficiency while simultaneously allowing beam steering over ± 60° and divergence control from ± 5° to fully hemispherical output. Finally, we present experimental results of a prototype system which validate the design model.

© 2014 Optical Society of America

## 1. Introduction

Conventional illumination systems are typically designed to provide either directional or diffuse illumination (spot or flood lighting) using a fixed optical path through collimating or diffusing optics. In settings where the required type of illumination varies, light energy could be used more efficiently if the source could adapt to provide illumination consistent with the user's immediate need. For example, in home or office lighting the user may want to switch between directional task lighting to illuminate a workspace and diffuse lighting to illuminate an entire room.

Backlights for liquid crystal displays use waveguide illumination, varying the size and shape of features patterned on the light guide plate to control light extraction uniformity [1], and using optical sheets above the light guide to control the directionality of emitted light [2,3]. Control over directionality allows the display to preferentially direct light into a viewing cone. This viewing cone is fixed, however, because the optical components are designed to provide a single luminance distribution regardless of their relative positioning. Light cannot be actively directed toward an observer moving relative to the device.

Previous work on planar solar concentrators has demonstrated efficient, high-concentration designs that use a two dimensional lens array positioned above a micro-patterned waveguide [4]. The addition of a moveable lens array above the waveguide allows the concentrator to adapt to changing sun angle [5]. The same physical structure can be adapted for a versatile illuminator by reversing the direction of light propagation and re-optimizing the design for the light source and output constraints.

Figure 1 shows an illustration of the system, in which light emitting diodes (LEDs) are coupled to a planar multimode waveguide such that light is confined by total internal reflection (TIR) as defined by Snell's law. As light propagates, it is scattered out of confined modes by periodic extraction features and subsequently interacts with the corresponding lens array, which directs the extracted light toward the target.

Fig. 1 Conceptual illustration of the planar illumination system. The components have been exploded for clarity.
Aligning the lenslet and extraction arrays with the extraction features located at or near the focal plane of the lenses produces a collimated output beam [Fig. 2(a)]. Laterally translating the lens array relative to the extraction array steers the overall beam by steering all individual beams in the same direction, as shown in Fig. 2(b). Relative rotations between the two arrays alter the overall divergence of the beam by steering the individual beams in a 'spiral' of different directions, as shown in Fig. 2(c). In Fig. 2 the divergence angle of the light extracted from the waveguide has been restricted, because lateral offsets between the arrays would otherwise induce unwanted crosstalk as light spills into adjacent lenses. This crosstalk leads to side lobes in the emitted pattern, which are undesirable for most applications.

Fig. 2 Section of the array showing a collimated beam when the arrays are aligned (a), a redirected beam when the arrays are translated (b), and a diverging beam when the arrays are rotated (c).

The same functionality can be achieved using an array of point-like LED sources directly behind the lens array, which would eliminate the complexity of edge coupling and waveguiding. However, a waveguide-based design has the advantages that it 1) allows a thinner form factor and simplifies electrical routing and heat sinking by moving the LED sources to the edges of the waveguide; 2) clears the aperture opposite the lens array of LEDs, wiring, and heat sinks, allowing the use of higher performing reflective lenses, discussed in Section 2.1; and 3) allows the coupling, waveguiding, and extraction structures to perform the necessary angular and spatial mapping of the real sources into an effective array of point-like sources. While the efficacy (electrical to luminous conversion efficiency) and emittance (spatial power density) of LED dies typically scale inversely with die size within one class of LEDs, so-called 'high power' LEDs with apertures larger than 2 mm currently have higher emittance than small package LEDs with apertures less than 1 mm. From conservation of radiance, edge coupling a smaller number of high power LEDs will therefore produce a brighter beam than a large number of small LEDs located directly behind the lens array. This edge coupling approach will remain advantageous as LED technology improves, up to the point when the emittance of small aperture LEDs matches that of large aperture LEDs, which would warrant the direct array approach.

The thin form factor of the planar illuminator allows conformal mounting to flat surfaces with little or no recessing, making it ideal for retrofitting ceiling fixtures. Further, control over light from a relatively large aperture can be achieved with relatively short-range mechanical motion compared to traditional designs. Control over a similar amount of light energy would require an array of traditional luminaires, with each element having its own actuation mechanism. Conventional actuation mechanisms require motion in 3 dimensions, either by moving a lens with radial and axial freedom with respect to the source or by gross actuation of the entire luminaire including the source and heat sink. The planar illuminator uses precise short-range 2D motion of one optical component to achieve the same degree of control.

In the following section we present an analytic model of each element of the system; in Section 3 we combine the elements to obtain an overall system model and determine the potential performance of optimal designs. In Section 4 we describe an experimental full-scale proof-of-principle prototype and compare its performance to the model. We conclude in Section 5 with some comments on future directions of this technology.
## 2. System design

Typical performance metrics for illumination systems include optical efficiency, efficacy, luminous emittance, and pattern uniformity. In our system, we are also concerned with the beam steering and divergence ranges conditional on the degree of crosstalk between adjacent lenses. We would also like the system to scale efficiently to large aperture sizes for high flux applications. Here we describe a simple analytic model for each element of the system, beginning at the output where we discuss lens performance, then moving to waveguiding and extraction, and finishing with the source and coupling methods.

#### 2.1 Beam steering and diverging

The maximum steering angle, minimum divergence angle, and degree of crosstalk of emitted light are driven by two parameters: the lenslet F/# (focal length over aperture diameter) and the divergence of light exiting the waveguide. From geometrical optics, using the paraxial lens approximation, the maximum steering angle with zero geometrical crosstalk is given by:

$\psi_{max} = \sin^{-1}\!\left( n \sin\!\left( \tan^{-1}\!\left( \frac{1}{2(F/\#)} - \tan\theta_2 \right) \right) \right)$  (1)

where $\theta_2$ is the half divergence angle of the effective source immersed in refractive index $n$. Maximizing the steering angle corresponds to minimizing the lens F/# and the divergence angle of the effective source. Also from geometrical optics, we can write the minimum divergence angle due to the spatial extent of the effective source as:

$\varphi = \sin^{-1}\!\left( n \sin\!\left( \tan^{-1}\!\left( \frac{w_{facet}}{2f} \right) \right) \right)$  (2)

where $w_{facet}$ is the full width of the effective source and $f$ is the focal length of the lens. For a small minimum divergence angle, corresponding to a tightly collimated output beam, the lateral extent of the source must be small with respect to the focal length of the lens.

In the waveguide solar concentrator, light illuminates the entire face of the lenslets and lenslet aberrations are a critical factor in design. However, for an illuminator it is not necessary to emit from the entire surface area, and illuminating only a fraction of the lens area can be useful to minimize lateral crosstalk [Fig. 3]. Lens aberrations affect the performance of the system only to the extent that they increase beam divergence. Under-filled lenses contribute fewer aberrations because light interacts with only a localized section of the lens surface. Reflective plano-convex singlets produce lower F/#s than refractive designs for the same radius of curvature and, consequently, can be driven to lower overall F/#s [6]. Fresnel lenses are a viable option to reduce the F/# of refractive lenses while simultaneously reducing weight, but low F/# Fresnel lenses typically have poor off-axis performance due to increased scatter from zone transitions. Shorter focal length lenses are desirable both because aberrations scale with lens dimensions and because they make the illumination pattern more uniform by having more lenses per unit area. In some designs, it may be beneficial to induce a small fixed defocus by tuning the axial height of the lens in order to blur or 'smooth out' any sharp features present in an otherwise perfectly imaged intensity distribution.

Fig. 3 Lens geometry examples: (a) fully filled refractive Fresnel lens showing crosstalk with lateral translation and (b) partially filled reflective spherical lens showing zero crosstalk with equivalent translation and F/#.
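To make Eqs. (1) and (2) concrete, the short sketch below evaluates both for one candidate design point. The parameter values (n = 1.49, F/0.75, a 5° immersed source divergence, a 5.08 mm lens pitch, and a 0.39 mm facet) are illustrative assumptions in the spirit of the designs discussed later, not values asserted by the paper at this point.

```cpp
// A minimal numeric check of Eqs. (1) and (2); all parameter values are
// illustrative assumptions rather than the paper's final design values.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979;
    const double n      = 1.49;              // index light exits from (PMMA-like, assumed)
    const double Fnum   = 0.75;              // lenslet F/#
    const double theta2 = 5.0 * PI / 180.0;  // half divergence of immersed effective source
    const double D      = 5.08e-3;           // lens aperture (m), assumed
    const double f      = Fnum * D;          // focal length from the F/# definition
    const double wFacet = 0.39e-3;           // effective source (facet) width (m), assumed

    // Eq. (1): maximum steering angle with zero geometrical crosstalk.
    // The asin argument must stay below 1, otherwise the ray is trapped by TIR.
    double psiMax = std::asin(n * std::sin(std::atan(1.0 / (2.0 * Fnum) - std::tan(theta2))));

    // Eq. (2): minimum half divergence set by the facet's spatial extent
    double phiMin = std::asin(n * std::sin(std::atan(wFacet / (2.0 * f))));

    std::printf("psi_max = %.1f deg, phi_min = %.1f deg\n",
                psiMax * 180.0 / PI, phiMin * 180.0 / PI);  // ~48 deg and ~4.4 deg
    return 0;
}
```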
#### 2.2 Light guiding and extraction

The extraction features act as the effective sources for the lenses by intercepting and redirecting light propagating in the waveguide toward the lens array. Light may be extracted from the waveguide using reflection, refraction, diffraction, or diffuse scattering. Flat faceted features are desirable because they have broadband performance (unlike dispersive gratings) and conserve angular divergence (unlike diffusers or curved facets). The conservation of angular divergence is crucial for minimizing crosstalk and generally keeps the system more étendue-limited, leading to more efficient designs.

The waveguide confines light by TIR for a sufficient angular spectrum, allowing light to be efficiently distributed to the extraction sites. The type of waveguide determines the relationship between the waveguide thickness and the dimensions of extraction features. We considered two waveguide designs. One is a constant cross section, 'constant mode volume' (CMV) waveguide [Fig. 4(a)], where light is shared between extraction sites; the other is a laterally tapered 'stepped mode volume' (SMV) waveguide [Fig. 4(b)], where each extraction site adiabatically truncates the modal volume [7].

Fig. 4 Constant (a) and stepped (b) mode volume waveguide illustrations for N = 5 extraction sites. Each section as drawn supplies light to one row of lenses above the waveguide (not shown).

In the SMV design, light makes a single pass through the structure and is extracted uniformly up to a factor determined by the material's absorption coefficient. There is a fixed relationship between the facet and waveguide dimensions given by:

$w_{facet} = \frac{t_{wg}}{\tan\gamma} = t_{wg}\big|_{\gamma = 45°}$  (3)

where $t_{wg}$ is the waveguide thickness and $\gamma$ is the angle the facet makes with respect to the waveguide plane. Without loss of generality we set $\gamma$ = 45°, corresponding to the case where the average direction of guided propagation is in the plane of the waveguide. Altering $\gamma$ would necessitate a split in the angular spectrum (e.g. ± 30° out-of-plane propagation), which does not increase the total radiance in the guide, makes confinement more difficult, and tends to require more complicated coupling structures. We should also note that the stepped waveguide has a geometrical relationship limiting its length given the size and number of facets, as will be discussed in Section 3.2.

In the CMV geometry, light makes multiple passes through the waveguide and extraction is fundamentally non-uniform. We model the percentage of light energy extracted at a facet as the ratio between the facet cross section and the waveguide cross section. This model ignores shadowing effects, which is valid when the divergence is relatively large and the facets are relatively small with respect to their period. First, we determine the facet cross section $\sigma_f$, which is the cross sectional area of the facet seen by the average waveguide mode. By the reasoning presented above for the SMV waveguide, we set the facet angle $\gamma$ to 45°. Constraining the base dimensions of the facet to be square ($w_{facet}$ x $w_{facet}$) to produce a symmetric beam with a rotationally symmetric lens, the facet cross section is the product of the facet width and height, where the height is half the width: $\sigma_f|_{\gamma=45°} = w_{facet}^2/2$. We then write the distributed absorption and extraction per lens aperture as:

$\chi = \left(1 - \frac{\sigma_f}{t_{wg} D}\right) \exp(-\alpha D)$  (4)

where $D$ is the full lens aperture and $\alpha$ is the absorption coefficient of the waveguide material. Modifying the Beer-Lambert law, where $j$ runs from 1 to $N$ facets, the output power at the $j$th facet is given by:

$P_{ext,j} = P_0 \frac{\sigma_f}{t_{wg} D} \cdot \frac{\chi^{j-1} + \eta_2 \chi^{2N-j}}{1 - \eta_1 \eta_2 \chi^{2N}}$  (5)

where $P_0$ is the power coupled into the waveguide, $\eta_2$ and $\eta_1$ are the reflection efficiencies from the end of the waveguide and from the source, respectively, and $N$ is the total number of extraction sites in the section of waveguide. By symmetry, we consider a section of waveguide that is one lens aperture wide and half the total system aperture long, taking $\eta_2 = 1$ and $\eta_1 = \eta_{coupler}^2 R_{LED}$, where $\eta_{coupler}$ is the coupler efficiency (discussed in Section 2.3), modeled as being equivalent in both forward and reverse directions, and $R_{LED}$ is the percentage of light recycled by the LED. The incident light recycled by a typical die is about 50% and the phosphor efficiency can be as high as 70% per pass [8,9]. The total recycling efficiency can be approximated by two passes through the phosphor and one reflection from the die, which gives 25% total recycling efficiency. The total extracted power can be determined by evaluating the sum:

$P_{ext,total} = \sum_{j=1}^{N} P_{ext,j} = P_0 \frac{\sigma_f}{t_{wg} D (\chi - 1)} \cdot \frac{(\chi^N - 1)(1 + \eta_2 \chi^N)}{1 - \eta_1 \eta_2 \chi^{2N}}$  (6)

where we consider the term on the right hand side that scales the input power $P_0$ to be the average extraction efficiency $\eta_{ext}$, referred to later in Section 3. In the CMV geometry the relationship between waveguide and facet dimensions is:

$w_{facet} < 2 t_{wg}$  (7)

for $\gamma$ = 45°, in order for the facet to fit within the waveguide. Here, unlike for the SMV waveguide, there is no fixed geometrical relationship between facet geometry, number of facets, and waveguide length.

Recalling from Eq. (2) that minimizing the divergence of emitted light corresponds to minimizing $w_{facet}$, we find that by the geometry of the SMV waveguide [Eq. (3)] and by the desire for high extraction efficiency in the CMV waveguide [Eq. (6)], we would like to minimize the waveguide thickness $t_{wg}$ in both cases.
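The extraction model of Eqs. (4)-(6) is straightforward to evaluate numerically. The sketch below sums the per-facet powers for an assumed CMV design point; the parameter values, including $\eta_1$ built from an assumed 90% coupler efficiency and the 25% LED recycling estimate, are illustrative.

```cpp
// Evaluating the CMV extraction model, Eqs. (4)-(6), for an assumed design point.
#include <cmath>
#include <cstdio>

int main() {
    const int    N      = 60;         // extraction sites in a half-aperture section
    const double D      = 5.08e-3;    // lens aperture (m), assumed
    const double twg    = 0.762e-3;   // waveguide thickness (m)
    const double wFacet = 0.39e-3;    // facet width (m), assumed
    const double sigmaF = 0.5 * wFacet * wFacet;  // facet cross section at gamma = 45 deg
    const double alpha  = 3e-4;       // absorption coefficient (1/m), BK7-like
    const double eta2   = 1.0;        // end-of-guide mirror reflectance
    const double eta1   = 0.9 * 0.9 * 0.25;  // eta_coupler^2 * R_LED (0.9 coupler assumed)

    // Eq. (4): fraction surviving one lens pitch (extraction plus absorption)
    const double chi = (1.0 - sigmaF / (twg * D)) * std::exp(-alpha * D);

    const double denom = 1.0 - eta1 * eta2 * std::pow(chi, 2.0 * N);
    double total = 0.0, pMin = 1e9, pMax = 0.0;
    for (int j = 1; j <= N; ++j) {
        // Eq. (5), with the coupled power P0 normalized to 1
        double pj = (sigmaF / (twg * D))
                  * (std::pow(chi, j - 1.0) + eta2 * std::pow(chi, 2.0 * N - j)) / denom;
        total += pj;
        pMin = std::fmin(pMin, pj);
        pMax = std::fmax(pMax, pj);
    }
    // 'total' is the average extraction efficiency eta_ext of Eq. (6)
    std::printf("eta_ext = %.3f, per-facet spread = %.2f%% of the mean\n",
                total, 100.0 * (pMax - pMin) / (total / N));
    return 0;
}
```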
#### 2.3 Light sources and couplers

White LEDs currently have superior luminance and efficacy compared to other broadband sources. From conservation of radiance, the brightness at the output of any passive optical system is limited by the brightness of the source. Consequently, LEDs with the highest luminance are desirable because they provide more optical power within the same étendue. These 'high power' LEDs have die sizes exceeding 2 mm in width and typically obey Lambert's cosine law, leading us to calculate the fraction of Lambertian power in a beam of half angle $\theta_1$ to be:

$\eta_{beam} = \sin^2(\theta_1)$  (8)

For example, a Lambertian emitter output clipped at $\theta_1$ = ± 71.65° still contains 90% of the total power. Having such a clearly defined beam divergence simplifies étendue calculations.

From the above and per Sections 2.1 and 2.2, high system performance requires coupling large sources with a high divergence angle to a relatively thin waveguide, while minimizing the divergence and maximizing the spatial power density of coupled light. For high optical efficiency the design must conserve étendue. Approaches to solving similar problems have recently been proposed [10,11]. Our approach was to first collimate the source, allowing a tradeoff between divergence and spatial power density, and then perform a space-variant aperture transformation to interface with the thin waveguide.

The compound parabolic concentrator (CPC) is a standard nonimaging optical component that provides nearly étendue limited concentration and (path-reversed) collimation [Fig. 5, top row] [12]. However, any spatial nonuniformity in the collimated output intensity distribution reduces the uniformity of the waveguide illuminator output. Following previous work [13], we defined a CPC-like collimator with enhanced spatial uniformity at the output using quadratic Bezier curves [Fig. 5, bottom row].

Fig. 5 Angular and spatial output distributions for a conventional CPC and a Bezier collimator, both with a uniform Lambertian input.

To a high degree of accuracy, we can approximate both collimator designs as conserving étendue, so for two square apertures:

$h_1 \sin(\theta_1) = h_2 \sin(\theta_2)$  (9)

where $h_1$ and $h_2$ are the full widths of the source and exit apertures and $\theta_1$ and $\theta_2$ are the half divergence angles of light entering and exiting the collimator, respectively.
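A short sketch of the étendue bookkeeping in Eqs. (8) and (9): given the 2.5 mm class die and a clip angle keeping 90% of the Lambertian output, conservation of étendue fixes the collimator exit width for a desired divergence. The 5° target divergence here is an assumed example value.

```cpp
// Sizing the collimator exit aperture from etendue, Eqs. (8)-(9).
// The target divergence theta2 is an illustrative assumption.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979;
    const double h1     = 2.5e-3;              // source full width (m)
    const double theta1 = 71.65 * PI / 180.0;  // clip angle keeping 90% of Lambertian power
    const double theta2 = 5.0 * PI / 180.0;    // desired half divergence after collimation

    // Eq. (8): fraction of a Lambertian emitter's power inside +/- theta1
    double etaBeam = std::sin(theta1) * std::sin(theta1);

    // Eq. (9): etendue conservation fixes the exit aperture width
    double h2 = h1 * std::sin(theta1) / std::sin(theta2);

    std::printf("eta_beam = %.2f, h2 = %.1f mm\n", etaBeam, h2 * 1e3);
    // For a waveguide of thickness t_wg, Eq. (10) below then gives the
    // coupler aspect ratio M = h2 / t_wg.
    return 0;
}
```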
Next, we consider two designs to transform the exit aperture of the collimator to interface with the waveguide: 'faceted' and 'curled'. Both designs are variants of a stepped mode volume structure where the change in aspect ratio $M$ from collimator to waveguide is equal to the number of segments:

$M = \frac{h_2}{t_{wg}}$  (10)

where, as in Eq. (9), $h_2$ is the full width of the output aperture of the collimator. The first design uses a series of flat reflective rectangular facets acting like fold mirrors to sequentially redirect segments of light exiting the collimator into the waveguide [Fig. 6(a)]. The structure was designed assuming perfectly collimated light and then analyzed in nonsequential Zemax to determine performance as a function of divergence [Fig. 6(b)]. A perfect aperture mapping can be achieved using two reflective facets per segment. Our final faceted design used a single facet per segment to reduce complexity and reflective surface loss, because this imperfect mapping approaches the ideal mapping as the aspect ratio $M$ increases.

Fig. 6 Wireframe models of faceted coupler with M = 3 segments (a) and corresponding optical efficiency for M = 3, 6, and 9 segments (b).

The 'curled' coupler design we considered uses adiabatic light propagation through curved waveguide sections to 'strip' light energy and transform the aperture [Fig. 7(a)]. Following previous work on the confinement properties of curved multimode waveguides by conformal mapping [14], it can be shown that the half divergence angle $\theta_0$ incurred from interaction with the curved structure is related to the thickness of the waveguide $t$ and the outer bend radius $R$ by:

$\theta_0 = \cos^{-1}\!\left(1 - \frac{t}{2R}\right)$  (11)

Fig. 7 Wireframe models of curled coupler showing 3 segments (a) and corresponding optical efficiency for a few ratios of $t/R$ (b). The efficiency is independent of aspect ratio.

For small ratios of $t/R$, the structure preserves étendue and has nearly equivalent confinement properties to a flat waveguide of the same refractive index. The blue curve in Fig. 7(b) for $t/R = 0.1$ shows nearly 100% optical efficiency up to a half divergence angle of about 46°, compared to the 47.8° TIR angle corresponding to a flat guide with an equal index of 1.49. Unlike the faceted coupler, the optical efficiency of the curled structure is independent of the aspect ratio $M$. While the curled coupler outperforms the faceted design in terms of optical efficiency, it is less readily manufacturable. It is possible that advances in optical 3D printing technologies will enable inexpensive fabrication of such structures in the future. At present, flexible Corning Willow Glass presents a possible fabrication option [15].

As the aspect ratio $M$ increases, the 'staircase' shaped intermediate aperture in the faceted design [Fig. 6(a), shown with $M = 3$] approaches a square, as in the curled design [Fig. 7(a)], considerably simplifying the geometry. The efficiency and ease of manufacture of these couplers will increase as sources with higher luminance and smaller apertures become available through advances in LED technology or other alternatives [16].
## 3. System-level analytic model and optimization

System-level optimization of the planar illuminator is difficult in standard optical design software because of the complex geometries and merit functions. We developed an analytic model based on equations from imaging and nonimaging optics to give an intuitive optimization approach that provided more confidence than a 'black box' method. The designs resulting from the analytic optimization were modeled in Solidworks and ray traced with non-sequential Monte Carlo analysis in Zemax to ensure the accuracy of the analytic model. A truly 'optimal' solution is predicated on a detailed list of application-specific constraints and performance metrics. Without the information needed for a quantitative merit function, we optimized according to qualitative ideas of well-balanced performance.

We constrained certain aspects of the design space using parameters from commercially available LEDs and from a comparison lighting fixture. For the comparison fixture, we considered a 2x4 foot 3-tube fluorescent modular ceiling 'troffer' fixture with a luminous flux of 9000 lm, an efficacy of 92.19 lm/W, and an emittance of 1.475x10^4 lux at the aperture. This gave us a target emittance value independent of system aperture size. We chose to set the system aperture to 2x2 feet with the intent of retrofit compatibility with modular ceiling grids. For the waveguide LED source we chose the Cree XLamp XM-L2, one of the highest luminous emittance and efficacy single-die LED sources available, delivering 728 lumens at 2 A, 3 V (about 2/3 of maximum current) in a 2.5x2.5 mm die size. Low-loss BK7 glass was used for the waveguide for its low absorption coefficient of 3x10^-4 m^-1 [17].

From conservation of energy, we can relate the luminous emittance $I_{out}$ to the luminous flux of the LED $P_{LED}$ by:

$I_{out} = \eta_{beam}(\theta_1)\,\eta_{coupler}(M,\theta_2)\,\eta_{ext}(\sigma_f, D, t_{wg}, N, \chi, \eta_1, \eta_2)\,\frac{t_{wg}\cos\theta}{N D}\,\frac{P_{LED}}{h_2^2}$  (12)

where $\eta_{ext}$ is the term that scales $P_0$ on the right hand side of Eq. (6) and $\theta$ is the step angle of the waveguide, as shown in Fig. 8. The second to last term in Eq. (12) encompasses the ratio between the output area of the coupler and the input area of the waveguide, while the last term scales the output power by the output aperture to convert to emittance.

Fig. 8 Top down views of the CMV (top) and SMV (bottom) waveguides with N = 5 extraction sites. The grey squares indicate the position and size of a single lens.
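As a sanity check of Eq. (12), the sketch below evaluates the emittance for an SMV-like design point. The efficiency values and the coupler exit width $h_2$ are assumptions for illustration; with these numbers the result lands near the 1.475x10^4 lux target.

```cpp
// Sanity check of Eq. (12) for an assumed SMV-like design point.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979;
    const double etaBeam    = 0.90;          // Eq. (8), theta1 = 71.65 deg
    const double etaCoupler = 0.95;          // assumed curled-coupler efficiency
    const double etaExt     = 1.0;           // single-pass SMV extraction, assumed ideal
    const double twg   = 0.761e-3;           // waveguide step thickness (m)
    const double theta = 2.9 * PI / 180.0;   // SMV step angle, consistent with Eq. (19)
    const int    N     = 20;                 // extraction sites per section
    const double D     = 15.24e-3;           // lens aperture (m): 40 lenses over 2 feet
    const double h2    = 10.0e-3;            // coupler exit width (m), assumed
    const double PLED  = 728.0;              // LED luminous flux (lm)

    // Eq. (12): I_out = eta_beam * eta_coupler * eta_ext
    //                   * (t_wg cos(theta) / (N D)) * (P_LED / h2^2)
    double Iout = etaBeam * etaCoupler * etaExt
                * (twg * std::cos(theta) / (N * D)) * (PLED / (h2 * h2));
    std::printf("I_out = %.3g lux\n", Iout);  // ~1.6e4, near the 1.475e4 lux target
    return 0;
}
```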
In the subsequent sections, we consider designs that allow us to solve Eq. (12) and determine overall system performance. The first, using a constant mode volume waveguide and faceted light coupler (CMV-F), is chosen to provide the simplest path to manufacture. The second, using a stepped mode volume waveguide and curled coupler (SMV-C), is intended to enable the highest optical performance. We also briefly summarize a third design using a constant mode volume waveguide and curled coupler (CMV-C).

#### 3.1 Design 1: constant mode volume with faceted coupler

The first design aims for manufacturability at the cost of performance by using the faceted coupling structure and a constant mode volume waveguide. The coupler is compatible with injection molding and the waveguide with roll processing of glass or plastic sheets.

First, we fit a parameterized 2 dimensional function to the simulated faceted coupler efficiency curves shown in Fig. 6(b). The mathematical form of the function was approximated from knowledge of the shape and boundary conditions of the simulated curves to be:

$\eta_{coupler}(M,\theta_2) = \frac{f_1(M,\vec{A}_1)}{f_1(M,\vec{A}_2)\,\theta_2 + f_1(M,\vec{A}_3)} + \frac{f_2(M,\vec{A}_4)}{\left(f_2(M,\vec{A}_5)\,\theta_2\right)^2 + f_2(M,\vec{A}_6)}$  (13)

where:

$f_1(M,\vec{A}_i) = A_{i,1} M^2 + A_{i,2} M + A_{i,3}$  (14)

$f_2(M,\vec{A}_i) = \frac{A_{i,1}}{A_{i,2} M + A_{i,3}}$  (15)

and where the 3-element fit vectors $\vec{A}_1$ through $\vec{A}_6$ are determined by least squares minimization. The resulting parametric function is used in the optimization algorithm to predict the optical efficiency of the coupler in regimes that were not explicitly simulated beforehand.

From Eqs. (9) and (12), setting $\theta = 0$ for the CMV waveguide geometry, we arrive at an implicit transcendental equation for $\theta_2$:

$\frac{h_1^2 \sin^2\theta_1}{\sin^2\theta_2} = \frac{\eta_{beam}(\theta_1)\,\eta_{coupler}(M,\theta_2)\,\eta_{ext}(\sigma_f, D, t_{wg}, N, \chi, \eta_1, \eta_2)\,t_{wg}\,P_{LED}}{I_{out}\,N\,D}$  (16)

where we recast $M = (h_1\sin\theta_1)/(t_{wg}\sin\theta_2)$ using Eqs. (9) and (10) so that the optimization problem is constrained to 4 dimensions, $\{t_{wg}, \sigma_f, F/\#, N\}$, with the remaining variables fixed by design constraints. The optimization algorithm maps the design space by iterating through these 4 dimensions and numerically solving Eq. (16) over a grid of points in the space. For each point in $\{F/\#, N\}$ space, an optimal point in $\{t_{wg}, \sigma_f\}$ space is found by maximizing a weighted sum of normalized maximum steering angle and normalized system efficiency [Fig. 9(a)]. The maximum steering angle is given by Eq. (1) and the overall optical system efficiency is the product of all efficiency terms in Eq. (16). We discarded solutions for which the minimum half divergence angle [Eq. (2)] is greater than a design limit of 5° and for which the extraction deviation is greater than 1%, where the deviation is given by $\max_j\{|P_{ext,total} - N P_{ext,j}|/P_{ext,total}\}$ using Eqs. (5) and (6).

Fig. 9 CMV-F design space for 25% of target emittance. (a) Optimization metric for $N = 60$, $F/\# = 0.75$. (b) Maximum beam steering angle in $\{F/\#, N\}$ space. (c) Optical efficiency in $\{F/\#, N\}$ space. Note that the axes are rotated 90° counterclockwise from (b) to (c) to clearly illustrate the data.
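The transcendental Eq. (16) has no closed form in $\theta_2$ but is well suited to bracketing methods. The sketch below solves it by bisection; since the fitted vectors $\vec{A}_1$-$\vec{A}_6$ of Eq. (13) are not reproduced here, `etaCoupler()` is a stand-in monotone model, and all numeric inputs are illustrative assumptions.

```cpp
// Solving the transcendental Eq. (16) for theta2 by bisection.
// etaCoupler() is a placeholder for the fitted model of Eq. (13).
#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979;

// Placeholder coupler-efficiency model, falling with aspect ratio and divergence
double etaCoupler(double M, double theta2) {
    return 1.0 / (1.0 + 0.02 * M * theta2 * 180.0 / PI);
}

int main() {
    const double h1 = 2.5e-3, theta1 = 71.65 * PI / 180.0;
    const double twg = 0.762e-3, D = 5.08e-3, PLED = 728.0;
    const double etaBeam = 0.90, etaExt = 0.93;   // assumed values
    const double Iout = 3.69e3;                   // 1/4 of the reference emittance
    const int    N = 60;

    // g(theta2) = LHS - RHS of Eq. (16); a root is a self-consistent theta2
    auto g = [&](double t2) {
        double lhs = h1 * h1 * std::pow(std::sin(theta1) / std::sin(t2), 2.0);
        double M   = h1 * std::sin(theta1) / (twg * std::sin(t2));
        double rhs = etaBeam * etaCoupler(M, t2) * etaExt * twg * PLED
                   / (Iout * N * D);
        return lhs - rhs;
    };

    // Bisection: g is positive for small theta2 and negative for large theta2
    double lo = 1.0 * PI / 180.0, hi = 89.0 * PI / 180.0;
    for (int i = 0; i < 60; ++i) {
        double mid = 0.5 * (lo + hi);
        (g(lo) * g(mid) <= 0.0 ? hi : lo) = mid;
    }
    std::printf("theta2 = %.2f deg\n", 0.5 * (lo + hi) * 180.0 / PI);
    return 0;
}
```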
Figures 9(b) and 9(c) show the corresponding optimums mapped from $\{t_{wg}, \sigma_f\}$ to $\{F/\#, N\}$ space. There is a clear tradeoff between efficiency and maximum steering angle, which also depends on the target emittance. Higher emittance values drive both the maximum steering angle and the efficiency down. High emittance requires a low aspect ratio $M$ to maintain a high spatial power density, which requires either a thick waveguide or a small intermediate aperture [Eq. (10)]. To maintain the same minimum divergence angle at the same lens F/# when the waveguide is made thicker, the facet dimension must be held constant [Eq. (2)], meaning the extraction efficiency decreases [Eq. (6)]. The other alternative, shrinking the intermediate aperture $h_2$, means that for the same beam efficiency [Eq. (8)] the divergence angle of coupled light increases [Eq. (9)], which both lowers the maximum steering angle [Eq. (1)] and lowers the coupler efficiency [Fig. 6]. Similar balancing forces are present when trying to push the maximum steering angle or the optical system efficiency as well.

Sweeping emittance values from 1 to 1/10 of the target value (1.475x10^4 lux), we found that the performance metrics were balanced at about 1/4 of the reference emittance (3.69x10^3 lux). Using this value, we chose an 'optimal' faceted design with $N = 60$, $F/\# = 0.75$, $t_{wg} = 0.762$ mm, and $\sigma_f = 0.0762$ mm² [Fig. 10]. This design provided a good tradeoff between efficiency, steering angle, and emittance. Achieving such a low F/# required the use of a reflective lens array.

Fig. 10 Single section wireframe model of the optimal CMV-F design.

The physical structure was modeled in Solidworks and imported into Zemax for ray trace analysis. The full system has a 2x2 foot aperture consisting of 120x120 lenslets and 4 source LEDs. The model consisted of a full 3 dimensional structure where rays were stored after being traced through the coupler and re-launched into the waveguide, to save repetitive tracing through the coupler. A sufficient number of rays were traced to achieve ergodicity. The far field directionality was simulated as a function of lateral offset [Fig. 11(a)] and the divergence as a function of rotation about the center of the array [Fig. 11(b)]. The collimated beam can be steered ± 45° maintaining over 35% optical efficiency, and can be diverged from ± 5° to ± 60° maintaining about 43% optical efficiency. Most of the loss comes from the faceted coupler, which has a relatively large aspect ratio of $M = 22$. We see good agreement between the analytic model, which assumes a top-hat beam intensity profile characterized by $\psi$ and $\varphi$, and the Zemax simulation in Fig. 11(a).

Fig. 11 Far field directivity (a) and divergence (b) simulations of the optimal CMV-F design, with total optical efficiency plotted on the left-hand plane (dashed blue). Part (a) shows good agreement between the Zemax (black) and analytic (red) models. Part (b) shows the Zemax model (black) on a log scale.

Higher efficiencies can be reached if the minimum divergence requirement is relaxed, as this enables a reduction in the aspect ratio of the coupler, an increase in waveguide thickness, and a corresponding increase in facet size. This allows the coupler efficiency to be increased without reducing the extraction efficiency. Similarly, relaxing the uniformity requirement increases the extraction efficiency, which also increases overall system efficiency.

#### 3.2 Design 2: stepped mode volume with curled coupler
The second design uses light coupling and waveguide structures that may be challenging to fabricate, but that offer the maximum efficiency and uniformity. Based on the results of Section 2.3, we can assume nearly 100% coupling between the LED and waveguide using the curled coupler. This can be achieved for a small enough ratio of $t/R$, independent of aspect ratio and divergence. The fixed relationship between waveguide thickness and facet geometry [Eq. (3)] allows us to write a determined set of relationships describing the geometry of the stepped structure:

$\theta = \cos^{-1}\!\left( 2N(F/\#)\tan\varphi \right)$  (17)

$\tan\theta = \frac{N - \cos^2\theta}{N(N-1) + \cos\theta\,\sin\theta}$  (18)

$t_{wg} = \frac{D\cos\theta}{N}$  (19)

where $\theta$ is the step angle of the SMV structure [Fig. 8], which decreases with increasing $N$.

Using Eqs. (1), (9), (12), and (19), we can express the maximum steering angle as:

$\psi_{max} = \sin^{-1}\!\left( n \sin\!\left( \tan^{-1}\!\left( \frac{1}{2(F/\#)} - \tan\!\left( \sin^{-1}\!\left( \frac{h_1 N}{\cos\theta}\sqrt{\frac{I_{out}}{\eta_{coupler} P_{LED}}} \right) \right) \right) \right) \right)$  (20)

During optimization, we iterate through $\{F/\#, N\}$ space, first solving the transcendental system defined by Eqs. (17) and (18) for $\varphi$ and $\theta$, and then solving Eq. (20) to determine the performance metric. Due to the fixed relationships between the waveguide and extraction feature geometries, the space is constrained to 2 dimensions [Fig. 12]. The efficiency is independent of F/# and $N$ and is determined only by Eq. (8) and by parasitic Fresnel losses, which were not considered in the analytic model.

Fig. 12 SMV-C design space for 100% of the target emittance. (a) Maximum steering angle and (b) minimum beam divergence angle, constrained to $\{F/\#, N\}$ space.

This design benefits greatly from a nearly ideal coupling structure and extraction mechanism. The 1.475x10^4 lux target emittance could be met while retaining a useful portion of the design space. We chose an optimal design with $N = 20$, $F/\# = 0.5$, and $t_{wg} = 0.761$ mm [Fig. 13]. Like the CMV-F design, this design also used a reflective lens array to achieve the necessary F/#. This yielded a predicted maximum steering angle of ± 60° and a minimum divergence angle of about ± 5°.

Fig. 13 Single section wireframe model of the optimal SMV-C design.

The full system has a 2x2 foot aperture consisting of 40x40 lenslets and 6 source LEDs. We used the same modeling technique discussed in Section 3.1 to simulate the system performance. The results of the Zemax simulations, shown in Fig. 14, confirm that the system can steer the beam ± 60° while maintaining over 75% optical efficiency, and can diverge the beam from ± 5° to essentially hemispherical illumination while maintaining about 80% optical efficiency. The main source of loss in this design came from Fresnel reflections. To reach higher efficiencies the optics could be anti-reflection coated, at an increased manufacturing cost.

Fig. 14 Far field directivity (a) and divergence (b) simulations of the optimal SMV-C design, with total optical efficiency plotted on the left-hand plane (dashed blue). Part (a) shows good agreement between the Zemax (black) and analytic (red) models. Part (b) shows the Zemax model (black) on a log scale.

A third design using a constant mode volume waveguide with a curled coupler (CMV-C) was optimized and simulated, and occupied a middle ground between the previously discussed CMV-F (35% optical system efficiency) and SMV-C (75% optical system efficiency) designs in both manufacturability and performance. The optimal CMV-C design emitted 1.22x10^4 lux, could steer the beam ± 60° operating above 62% optical system efficiency, and could diverge the beam from ± 5° to hemispherical illumination.
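The SMV geometry of Eqs. (17)-(19) can be fixed numerically for a chosen $\{F/\#, N\}$. A minimal sketch follows, in which $D$ = 15.24 mm (40 lenses across the 2 foot aperture) is an inferred value: solving Eq. (18) by bisection and back-substituting reproduces a step thickness consistent with the 0.761 mm quoted above for the optimal SMV-C design.

```cpp
// Fixing the SMV geometry from Eqs. (17)-(19) for the chosen {F/#, N}.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979;
    const int    N    = 20;
    const double Fnum = 0.5;
    const double D    = 15.24e-3;   // lens aperture (m), inferred from 40 lenses / 2 ft

    // Eq. (18): solve tan(t) = (N - cos^2 t) / (N(N-1) + cos t sin t) by bisection.
    // tan(t) - rhs(t) is negative at t = 0 and positive at t = 45 deg.
    double lo = 0.0, hi = PI / 4.0;
    for (int i = 0; i < 60; ++i) {
        double t = 0.5 * (lo + hi);
        double rhs = (N - std::cos(t) * std::cos(t))
                   / (N * (N - 1.0) + std::cos(t) * std::sin(t));
        (std::tan(t) < rhs ? lo : hi) = t;
    }
    double theta = 0.5 * (lo + hi);

    // Eq. (17): cos(theta) = 2 N (F/#) tan(phi)  =>  minimum divergence phi
    double phi = std::atan(std::cos(theta) / (2.0 * N * Fnum));

    // Eq. (19): waveguide (step) thickness
    double twg = D * std::cos(theta) / N;

    std::printf("theta = %.2f deg, phi = %.2f deg, t_wg = %.3f mm\n",
                theta * 180.0 / PI, phi * 180.0 / PI, twg * 1e3);
    // With these inputs: theta ~ 2.86 deg and t_wg ~ 0.761 mm
    return 0;
}
```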
The final step in the design was to compare the overall light emission of the optimized SMV-C design to a benchmark LED troffer fixture. The far field polar intensity information for the waveguide system was exported from Zemax into Dialux to simulate the illumination pattern in a realistic environment. The result is shown in Fig. 15. The conventional LED fixture [Fig. 15(a)] has a 2x2 foot aperture, consumes 53 W, and produces 4000 lm with a nearly Lambertian pattern. The optimized SMV-C design [Figs. 15(b)-15(d)] also has a 2x2 foot aperture and consumes 52.84 W, but produces 4800 lm of output. The waveguide design can create a similarly diffuse illumination distribution [Fig. 15(b)] when configured with a 1° rotation between the lens and extraction arrays. The unique capability of the waveguide system is shown in Figs. 15(c) and 15(d), in which a collimated spot is steered to each desk in the room, producing a spot more than 10x brighter than any point in the previous two illuminance distributions. Since the LED output level can be controlled, the waveguide system can provide localized task lighting with lower energy consumption.

Fig. 15 Dialux simulations of a conventional 2x2 foot LED fixture (a) and the optimized SMV-C design (b) - (d). The waveguide system was simulated in three configurations: [diffuse] 1° rotation, [spot 1] (Δx, Δy) = (-3, 3) mm, and [spot 2] (Δx, Δy) = (5, 0) mm.

## 4. Prototype fabrication and characterization

The modeled systems in Section 3 used optimized components to achieve high system performance. To demonstrate the concept and compare model with measurement, we constructed a prototype system using commercially available or easily fabricated components. Because alignment tolerances scale with component size, the physical scale of parts was the driving factor in determining our choice of components.

We used F/1.04 refractive Fresnel lenses molded from poly(methyl methacrylate) (PMMA), available in 4x4 arrays measuring 3x3 inches. To reduce the F/# and increase the steering range, we increased the lens power by stacking two lens layers for a final F/0.7 lens, measured in the PMMA waveguide. The Fresnel lenses were oriented so that the grooved sides were both facing away from the source. For the extraction features, we used 1 mm diameter steel ball bearings epoxied into hemispherical recesses machined into the waveguide. The spherical symmetry of the bearings translates into relaxed alignment tolerance and a higher degree of repeatability compared to flat facets, which would require precise 3 dimensional alignment. The spatial extent of the 1 mm diameter hemispheres gives a 3.2° half divergence angle of emitted light. For the waveguide, we used a 2.54 mm thick planar sheet of PMMA, where the thickness was chosen to produce uniform and efficient extraction. A 10.6 mm thick PMMA substrate was glued to the bottom of the lens array to minimize the air gap between the waveguide and lens structure while keeping the total optical distance between lens and extraction feature equal to the focal length. We found that an air gap of 100-300 μm between the lenses and waveguide was sufficient to minimize undesirable divergence, and could be achieved using a small number of thin Teflon spacers distributed across the system aperture.
The curled and faceted couplers discussed previously provide a relatively collimated and axially symmetric angular spectrum, which is ideal for use with flat facets. However, when using spherical extraction features, there is no need for the illumination to be collimated or axially symmetric, due to the scattering properties of a sphere. From an étendue perspective, the spheres are more efficiently illuminated by light with a larger divergence angle and a higher spatial power density. Additionally, the extraction efficiency of spherical facets was found to increase when light propagates with a large average angle with respect to the waveguide plane, so long as the TIR condition is obeyed. Based on these observations, we used a linear array of closely-spaced 0.43 mm thick LEDs attached to a 1-D CPC to reduce the divergence in the plane normal to the waveguide while allowing full divergence in the plane of the waveguide. The CPC bar was attached to the waveguide at a 36° angle with respect to the waveguide plane. The CPC couplers were machined out of polycarbonate and vapor polished to produce a specular surface finish, then sputtered with a 1 micron thick silver reflector (measured to be >85% efficient) to increase reflectivity in regions of the CPC that were not TIR limited. The LEDs were chosen for their thin form factor, allowing adequate collimation as defined by the 1-D étendue relation, and for their high flux of 4.38 lm from a 2.3x0.3 mm aperture. The LEDs were reflow-soldered onto a printed circuit board (PCB) using an alignment fixture machined from FR-4 to register the LEDs to about 200 μm positional tolerance. This tight alignment tolerance allowed an efficient interface with the CPC coupler.

#### 4.1 Unit cell device

Prior to fabrication of a full 2x2 foot aperture system, we constructed a 'unit cell' consisting of a waveguide with a single 1 mm hemispherical extraction feature, a small section of the lens array, and 3 LEDs [Fig. 16(a)]. The lens array was mounted onto a 3-axis translation stage for accurate positioning relative to the waveguide. The far field intensity pattern was measured 1 meter from the lens aperture. The intensity pattern is a superposition of 3 patterns from the 3 LEDs, with some fine structure because the coupled waveguide modes had not fully homogenized before striking the facet. An equivalent system was modeled in Zemax and its corresponding far field pattern shows excellent agreement with measurement [Fig. 16(c)].

Fig. 16 (a) Unit cell system. (b) Cut-away schematic drawn to scale and illustrative ray path. (c) Measured (top) and simulated (bottom) far field intensity patterns.

The unit cell system was also used to characterize the directional capabilities of the system by taking intensity line scans 1 meter from the aperture for different lateral offsets between the lens array and extraction feature [Fig. 17]. The data is plotted against curves from a corresponding polar far field Zemax simulation of a full 2x2 foot aperture system (black) and a modified semi-analytic version of the CMV model discussed in Section 3.1 (red). The measured data (blue) is scaled to arbitrary units because the output power of the full aperture system cannot be directly inferred from the unit cell device. We also cannot determine the divergence capabilities because only one lens/extraction feature pair is present. We see relatively good agreement between both models and measurement, with the exception that the measured off-axis intensity falls dramatically compared to either model. The attenuation is significant at high field angles and completely eliminates the crosstalk lobe seen in both the analytic and Zemax models. This inconsistency can be explained by the poor off-axis Fresnel lens performance compared to the ideal paraxial lens used in both models.

Fig. 17 Far field directivity of the unit cell system: analytic model (red), Zemax simulation (black), and lab measurement (blue). The measured drop in off-axis intensity is due to poor off-axis lens performance.
#### 4.2 Full aperture system

Next we fabricated a full 2x2 foot aperture prototype composed of a 26x26 element extraction array and a 28x28 lens array, both with a 19 mm pitch, and 304 source LEDs. The lens array was larger than the extraction array to prevent clipping at the corners during rotation. Light was coupled into the waveguide from two edges, leaving room for mechanical control from the opposite edges. We attached high strength neodymium magnets to the lens array at 3 points on the edges opposite the sources and used ferromagnetic eccentric cams seated on the magnets to translate and rotate the lens array relative to the extraction array. Rotation of the cam through a 180° angle produced the 20 mm travel required for operation. Our prototype used manual control, but could easily be fitted with motorized controllers to enable remote electrical operation. The computer-aided-design (CAD) model as well as the physical realization of the system components and the fully assembled system is shown in Fig. 18.

Fig. 18 (a) System components: (i) waveguide, (ii) ball-bearing extraction feature, (iii) lenses, and (iv) PCB, LEDs, and CPC coupler; (b) assembled system (shown without cover); and (c) exploded CAD model.

Qualitative [Fig. 19] and quantitative [Fig. 20] measurements were taken 3 meters from the system aperture using a camera and a calibrated photodiode, respectively, demonstrating good agreement with both the semi-analytic and Zemax models. The top-hat profile beam calculated with the semi-analytic model was mapped from polar far field space to physical space using simple radiometric calculations. The scattering of light from Fresnel zone transitions accounts for the main discrepancy between model and measurement. From lens cross section measurements, the zone transitions were estimated to obscure about 30% of the clear lens aperture, accounting for the reduction in central beam power and the resultant increase in the noise pedestal surrounding the beam. This effect becomes more pronounced as the beam is steered to more extreme angles. It also explains the behavior observed for extreme rotations, where we find the system acts more like a diffuse emitter instead of preferentially 'spreading out' the light according to the Zemax model.

Fig. 19 Simulation (left column) and measurement (center column) of on-axis, off-axis, and diverged spots 3 meters from the aperture. The right column shows the corresponding view of the aperture from an angle.

Fig. 20 Near field directionality (a) and divergence (b) of the prototype system 3 meters from the aperture. Part (a) shows the analytic model (red), Zemax model (black), and measurements (blue). Part (b) shows the Zemax model (black) and measurement (blue) on a log scale.

Polar integration of the illuminance line scan measurements yields a total output of 98 lm, corresponding to an optical system efficiency of 7.6%, which agrees well with the simulated optical efficiency of 7.56%. The major source of loss in the prototype came from the high absorption coefficient of the PMMA waveguide, measured and simulated to be 0.5 m^-1.
Zemax simulations showed that using a BK7 waveguide with an absorption coefficient of 3x10^-4 m^-1 (as used in the optimized theoretical designs) would increase the overall optical system efficiency of the prototype to 31%. Secondary sources of loss in the prototype were coupling mirror loss, waveguide surface scattering, and small misalignments in the couplers and lens array. While the prototype system is highly inefficient compared to the optimal designs, the consistency between measurement, model, and simulation indicates that the predicted high efficiencies for optimized designs [Table 1] are credible. This agreement also supports the accuracy of the analytic model in representing the system during design and optimization.

Table 1. System Efficiencies and Loss Mechanisms

## 5. Conclusion

We showed how a planar waveguide illuminator with periodically patterned extraction features and a lens array can be used to control both the directionality and divergence of light output using short-range mechanical motion.

The system performance depends on a large number of variables, which led us to develop an analytic model compatible with the two coupling and two waveguiding designs considered, in order to perform system-level optimization. The analytically optimized designs were ray traced in Zemax and the resulting performance was in good agreement with the analytic model. We found that the optimal design used a stepped mode volume glass waveguide and a curled coupler. This design could steer a collimated beam over ± 60° and diverge the beam from ± 5° to fully hemispherical illumination, while maintaining over 75% optical efficiency, for a total output of 4800 lumens from a 2x2 foot aperture.

We constructed a proof-of-principle prototype from commercially available components which successfully demonstrated both the beam steering and diverging principles in a 2x2 foot aperture embodiment. Although the optical efficiency of the device was only 7%, good agreement between the measurement, Zemax simulation, and analytic model was established, supporting the predictions of high efficiency and high output power for optimal designs using fully custom optical components. The next step would be to fabricate an efficient system using the optimized optical structures, with electrical controllers to allow remote actuation.

In future research, the same basic concept could be extended to provide a thin, energy efficient flat panel display where light energy is actively directed toward one or more users, whose positions may be tracked using a video camera and face-tracking software. Given accuracy sufficient to selectively illuminate each of the user's eyes, this approach may be used for multi-user glasses-free 3D display.

## Acknowledgments

This research was made possible with support from CogniTek. The authors would also like to thank Dr. Ilya Agurok for helpful discussions.
## References

1. J.-G. Chang and Y.-B. Fang, "Dot-pattern design of a light guide in an edge-lit backlight using a regional partition approach," Opt. Eng. 46(4), 10984–10995 (2012).
2. D. Feng, Y. Yan, X. Yang, G. Jin, and S. Fan, "Novel integrated light-guide plates for liquid crystal display backlight," J. Opt. A: Pure Appl. Opt. 7(3), 111–117 (2005).
3. T. C. Teng and J. C. Ke, "A novel optical film to provide a highly collimated planar light source," Opt. Express 21(18), 21444–21455 (2013).
4. J. H. Karp, E. J. Tremblay, and J. E. Ford, "Planar micro-optic solar concentrator," Opt. Express 18(2), 1122–1133 (2010).
5. J. M. Hallas, K. A. Baker, J. H. Karp, E. J. Tremblay, and J. E. Ford, "Two-axis solar tracking accomplished through small lateral translations," Appl. Opt. 51(25), 6117–6124 (2012).
6. A. W. Lohmann, "Scaling laws for lens systems," Appl. Opt. 28(23), 4996–4998 (1989).
7. D. T. Moore, G. R. Schmidt, and B. L. Unger, "Concentrated photovoltaic stepped planar light guide," in International Optical Design Conference, Technical Digest (Optical Society of America, 2010), paper JMB46P.
8. J. K. Kim, T. Gessmann, H. Luo, and E. F. Schubert, "GaInN light emitting diodes with RuO2/SiO2/Ag omni-directional reflector," Appl. Phys. Lett. 84(22), 4508–4510 (2004).
9. H. Luo, J. K. Kim, E. F. Schubert, J. Cho, C. Sone, and Y. Park, "Analysis of high-power packages for phosphor-based white-light-emitting diodes," Appl. Phys. Lett. 86(24), 243505 (2005).
10. H. J. Cornelissen, H. Ma, C. Ho, M. Li, and C. Mu, "Compact collimators for high brightness blue LEDs using dielectric multilayers," Proc. SPIE 8123, 81230J (2011).
11. T.-C. Teng, W.-S. Sun, L.-W. Tseng, and W.-C. Chang, "A slim apparatus of transferring discrete LEDs' light into an ultra-collimated planar light source," Opt. Express 21(22), 26972–26982 (2013).
12. R. Winston, J. C. Minano, and P. Benitez, Nonimaging Optics (Academic, 2005).
13. F. Fournier, W. J. Cassarly, and J. P. Rolland, "Method to improve spatial uniformity with lightpipes," Opt. Lett. 33(11), 1165–1167 (2008).
14. M. Heiblum and J. H. Harris, "Analysis of curved optical waveguides by conformal transformation," IEEE J. Quantum Electron. 11(2), 75–83 (1975).
15. S. Garner, H. Fong, M. He, P. Cimo, X. Li, Y. Cai, S. Ouyang, Y. Xie, Q. Shi, and S. Cai, "Flexible glass substrates for display and lighting applications," in IEEE Photonics Conference (IPC) (2013), pp. 176–177.
16. K. A. Denault, M. Cantore, S. Nakamura, S. P. DenBaars, and R. Seshadri, "Efficient and stable laser-driven white lighting," AIP Adv. 3(7), 072107 (2013).
17. A. O. Marcano, C. Loper, and N. Melikechi, "High-sensitivity absorption measurement in water and glass samples using a mode-mismatched pump-probe thermal lens method," Appl. Phys. Lett. 78(22), 3415–3417 (2001).
18. P. S. Chechurov and G. E. Romanova, "Using the ZEMAX software complex to form photometric models of LED illuminator devices," J. Opt. Technol. 79(5), 302–304 (2012).
SPIE 8123, 81230J (2011).\n[Crossref]\n\n#### Nakamura, S.\n\nK. A. Denault, M. Cantore, S. Nakamura, S. P. DenBaars, and R. Seshadri, “Efficient and stable laser-driven white lighting,” AIP Adv. 3(7), 072107 (2013).\n[Crossref]\n\n#### Ouyang, S.\n\nS. Garner, H. Fong, M. He, P. Cimo, X. Li, Y. Cai, S. Ouyang, Y. Xie, Q. Shi, and S. Cai, “Flexible glass substrates for display and lighting applications,” in IEEE Photonics Conference (IPC) (2013), pp. 176–177.\n[Crossref]\n\n#### Park, Y.\n\nH. Luo, J. K. Kim, E. F. Schubert, J. Cho, C. Sone, and Y. Park, “Analysis of high-power packages for phosphor-based white-light-emitting diodes,” Appl. Phys. Lett. 86(24), 243505 (2005).\n[Crossref]\n\n#### Schubert, E. F.\n\nH. Luo, J. K. Kim, E. F. Schubert, J. Cho, C. Sone, and Y. Park, “Analysis of high-power packages for phosphor-based white-light-emitting diodes,” Appl. Phys. Lett. 86(24), 243505 (2005).\n[Crossref]\n\nJ. K. Kim, T. Gessmann, H. Luo, and E. F. Schubert, “GaInN light emitting diodes with RuO2/SiO2/Ag omni-directional reflector,” Appl. Phys. Lett. 84(22), 4508–4510 (2004).\n[Crossref]\n\n#### Seshadri, R.\n\nK. A. Denault, M. Cantore, S. Nakamura, S. P. DenBaars, and R. Seshadri, “Efficient and stable laser-driven white lighting,” AIP Adv. 3(7), 072107 (2013).\n[Crossref]\n\n#### Shi, Q.\n\nS. Garner, H. Fong, M. He, P. Cimo, X. Li, Y. Cai, S. Ouyang, Y. Xie, Q. Shi, and S. Cai, “Flexible glass substrates for display and lighting applications,” in IEEE Photonics Conference (IPC) (2013), pp. 176–177.\n[Crossref]\n\n#### Sone, C.\n\nH. Luo, J. K. Kim, E. F. Schubert, J. Cho, C. Sone, and Y. Park, “Analysis of high-power packages for phosphor-based white-light-emitting diodes,” Appl. Phys. Lett. 86(24), 243505 (2005).\n[Crossref]\n\n#### Xie, Y.\n\nS. Garner, H. Fong, M. He, P. Cimo, X. Li, Y. Cai, S. Ouyang, Y. Xie, Q. Shi, and S. Cai, “Flexible glass substrates for display and lighting applications,” in IEEE Photonics Conference (IPC) (2013), pp. 176–177.\n[Crossref]\n\n#### Yan, Y.\n\nD. Feng, Y. Yan, X. Yang, G. Jin, and S. Fan, “Novel integrated light-guide plates for liquid crystal display backlight,” J. Opt. A Pure Appl. Opt. 7(3), 111–117 (2005).\n[Crossref]\n\n#### Yang, X.\n\nD. Feng, Y. Yan, X. Yang, G. Jin, and S. Fan, “Novel integrated light-guide plates for liquid crystal display backlight,” J. Opt. A Pure Appl. Opt. 7(3), 111–117 (2005).\n[Crossref]\n\n#### AIP Adv. (1)\n\nK. A. Denault, M. Cantore, S. Nakamura, S. P. DenBaars, and R. Seshadri, “Efficient and stable laser-driven white lighting,” AIP Adv. 3(7), 072107 (2013).\n[Crossref]\n\n#### Appl. Phys. Lett. (3)\n\nJ. K. Kim, T. Gessmann, H. Luo, and E. F. Schubert, “GaInN light emitting diodes with RuO2/SiO2/Ag omni-directional reflector,” Appl. Phys. Lett. 84(22), 4508–4510 (2004).\n[Crossref]\n\nH. Luo, J. K. Kim, E. F. Schubert, J. Cho, C. Sone, and Y. Park, “Analysis of high-power packages for phosphor-based white-light-emitting diodes,” Appl. Phys. Lett. 86(24), 243505 (2005).\n[Crossref]\n\nA. O. Marcano, C. Loper, and N. Melikechi, “High-sensitivity absorption measurement in water and glass samples using a mode-mismatched pump-probe thermal lens method,” Appl. Phys. Lett. 78(22), 3415–3417 (2001).\n[Crossref]\n\n#### IEEE J. Quantum Electron. (1)\n\nM. Heiblum and J. H. Harris, “Analysis of curved optical waveguides by conformal transformation,” IEEE J. Quantum Electron. 11(2), 75–83 (1975).\n[Crossref]\n\n#### J. Opt. A Pure Appl. Opt. (1)\n\nD. Feng, Y. Yan, X. Yang, G. Jin, and S. 
### Figures (20)

Fig. 1 Conceptual illustration of the planar illumination system. The components have been exploded for clarity.

Fig. 2 Section of the array showing a collimated beam when the arrays are aligned (a), a redirected beam when the arrays are translated (b), and a diverging beam when the arrays are rotated (c).

Fig. 3 Lens geometry examples: (a) fully filled refractive Fresnel lens showing crosstalk with lateral translation and (b) partially filled reflective spherical lens showing zero crosstalk with equivalent translation and F/#.

Fig. 4 Constant (a) and stepped (b) mode volume waveguide illustrations for N = 5 extraction sites. Each section as drawn supplies light to one row of lenses above the waveguide (not shown).

Fig. 5 Angular and spatial output distributions for a conventional CPC and a Bezier collimator, both with a uniform Lambertian input.

Fig. 6 Wireframe models of a faceted coupler with M = 3 segments (a) and corresponding optical efficiency for M = 3, 6, and 9 segments (b).

Fig. 7 Wireframe models of a curled coupler showing 3 segments (a) and corresponding optical efficiency for a few ratios of $t/R$ (b). The efficiency is independent of aspect ratio.

Fig. 8 Top-down views of the CMV (top) and SMV (bottom) waveguides with N = 5 extraction sites. The grey squares indicate the position and size of a single lens.

Fig. 9 CMV-F design space for 25% of target emittance. (a) Optimization metric for $N = 60$, $F/\# = 0.75$. (b) Maximum beam steering angle in $\{F/\#, N\}$ space. (c) Optical efficiency in $\{F/\#, N\}$ space. Note that the axes are rotated 90° counterclockwise from (b) to (c) to clearly illustrate the data.

Fig. 10 Single-section wireframe model of the optimal CMV-F design.

Fig. 11 Far field directivity (a) and divergence (b) simulations of the optimal CMV-F design, with total optical efficiency plotted on the left-hand plane (dashed blue). Part (a) shows good agreement between the Zemax (black) and analytic (red) models. Part (b) shows the Zemax model (black) on a log scale.

Fig. 12 SMV-C design space for 100% of the target emittance. (a) Maximum steering angle and (b) minimum beam divergence angle, constrained to $\{F/\#, N\}$ space.

Fig. 13 Single-section wireframe model of the optimal SMV-C design.

Fig. 14 Far field directivity (a) and divergence (b) simulations of the optimal SMV-C design, with total optical efficiency plotted on the left-hand plane (dashed blue). Part (a) shows good agreement between the Zemax (black) and analytic (red) models. Part (b) shows the Zemax model (black) on a log scale.

Fig. 15 Dialux simulations of a conventional 2x2 foot LED fixture (a) and the optimized SMV-C design (b)-(d). The waveguide system was simulated in three configurations: [diffuse] 1° rotation, [spot 1] (Δx, Δy) = (-3, 3) mm, and [spot 2] (Δx, Δy) = (5, 0) mm.

Fig. 16 (a) Unit cell system. (b) Cut-away schematic drawn to scale and illustrative ray path. (c) Measured (top) and simulated (bottom) far field intensity patterns.

Fig. 17 Far field directivity of the unit cell system: analytic model (red), Zemax simulation (black), and lab measurement (blue). The measured drop in off-axis intensity is due to poor off-axis lens performance.

Fig. 18 (a) System components: (i) waveguide, (ii) ball-bearing extraction feature, (iii) lenses, and (iv) PCB, LEDs, and CPC coupler; (b) assembled system (shown without cover); and (c) exploded CAD model.

Fig. 19 Simulation (left column) and measurement (center column) of on-axis, off-axis, and diverged spots 3 meters from the aperture. The right column shows the corresponding view of the aperture from an angle.

Fig. 20 Near field directionality (a) and divergence (b) of the prototype system 3 meters from the aperture. Part (a) shows the analytic model (red), Zemax model (black), and measurements (blue). Part (b) shows the Zemax model (black) and measurement (blue) on a log scale.

### Tables (1)

Table 1 System Efficiencies and Loss Mechanisms

### Equations (20)

$\psi_{max} = \sin^{-1}\left( n \sin\left( \tan^{-1}\left( \frac{1}{2(F/\#)} - \tan(\theta_2) \right) \right) \right)$

$\varphi = \sin^{-1}\left( n \sin\left( \tan^{-1}\left( \frac{w_{facet}}{2f} \right) \right) \right)$

$w_{facet} = \frac{t_{wg}}{\tan\gamma} = t_{wg} \,\big|_{\gamma = 45^\circ}$

$\chi = \left( 1 - \frac{\sigma_f t_{wg}}{D} \right) \exp(-\alpha D)$

$P_{ext,j} = P_0 \frac{\sigma_f t_{wg}}{D} \cdot \frac{\chi^{j-1} + \eta_2 \chi^{2N-j}}{1 - \eta_1 \eta_2 \chi^{2N}}$

$P_{ext,total} = \sum_{j=1}^{N} P_{ext,j} = P_0 \frac{\sigma_f t_{wg}}{D(\chi - 1)} \cdot \frac{(\chi^N - 1)(1 + \eta_2 \chi^N)}{1 - \eta_1 \eta_2 \chi^{2N}}$

$w_{facet} < 2 t_{wg}$

$\eta_{beam} = \sin^2(\theta_1)$

$h_1 \sin(\theta_1) = h_2 \sin(\theta_2)$

$M = \frac{h_2}{t_{wg}}$

$\theta_0 = \cos^{-1}\left( 1 - \frac{t}{2R} \right)$

$I_{out} = \eta_{beam}(\theta_1)\, \eta_{coupler}(M, \theta_2)\, \eta_{ext}(\sigma_f, D, t_{wg}, N, \chi, \eta_1, \eta_2)\, \frac{t_{wg} \cos\theta\, N}{D h_2^2}\, P_{LED}$

$\eta_{coupler}(M, \theta_2) = \frac{f_1(M, \vec{A}_1)}{f_1(M, \vec{A}_2)\, \theta_2 + f_1(M, \vec{A}_3)} + \frac{f_2(M, \vec{A}_4)}{\left( f_2(M, \vec{A}_5)\, \theta_2 \right)^2 + f_2(M, \vec{A}_6)}$

$f_1(M, \vec{A}_i) = A_{i,1} M^2 + A_{i,2} M + A_{i,3}$

$f_2(M, \vec{A}_i) = \frac{A_{i,1}}{A_{i,2} M + A_{i,3}}$

$\frac{h_1^2 \sin^2\theta_1}{\sin^2\theta_2} = \frac{\eta_{beam}(\theta_1)\, \eta_{coupler}(M, \theta_2)\, \eta_{ext}(\sigma_f, D, t_{wg}, N, \chi, \eta_1, \eta_2)\, t_{wg}\, P_{LED}}{I_{out}\, N D}$

$\theta = \cos^{-1}\left( 2N(F/\#)\tan\varphi \right)$

$\tan\theta = \frac{N - \cos^2\theta}{N(N-1) + \cos\theta \sin\theta}$

$t_{wg} = \frac{D \cos\theta}{N}$

$\psi_{max} = \sin^{-1}\left( n \sin\left( \tan^{-1}\left( \frac{1}{2(F/\#)} - \tan\left( \sin^{-1}\left( \frac{h_1 N \cos\theta\, I_{out}}{\eta_{coupler}\, P_{LED}} \right) \right) \right) \right) \right)$
[ null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/icons/icon-view-table-large.png", null, "http://proxy.osapublishing.org/images/icons/icon-view-table-large.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89634055,"math_prob":0.93229353,"size":41866,"snap":"2021-04-2021-17","text_gpt3_token_len":9089,"char_repetition_ratio":0.15336104,"word_repetition_ratio":0.014377485,"special_character_ratio":0.21198586,"punctuation_ratio":0.11471947,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.95040554,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-20T03:55:15Z\",\"WARC-Record-ID\":\"<urn:uuid:3e739611-cb7c-481c-8fdc-c30c0d33457b>\",\"Content-Length\":\"339968\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0d607ece-505d-4fda-98ef-1b000a3da365>\",\"WARC-Concurrent-To\":\"<urn:uuid:e0624d87-f2dd-406e-886d-13b29bf1eabd>\",\"WARC-IP-Address\":\"65.202.222.160\",\"WARC-Target-URI\":\"http://proxy.osapublishing.org/oe/fulltext.cfm?uri=oe-22-S3-A742&id=282411\",\"WARC-Payload-Digest\":\"sha1:HZZDKCR4ZVS5WLJQR3N6ALKVYA2GONIF\",\"WARC-Block-Digest\":\"sha1:UBJCNVRGS54OCYYZV22Q2LTTTLK446R5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039375537.73_warc_CC-MAIN-20210420025739-20210420055739-00278.warc.gz\"}"}
https://acommonplace.cloud/2020/09/25/bayesian-theorem/
[ "## What Is Bayes’ Theorem?\n\nTests are not the event. We have a cancer test, separate from the event of actually having cancer. We have a test for spam, separate from the event of actually having a spam message.\n\nTests are flawed. Tests detect things that don’t exist (false positive), and miss things that do exist (false negative). People often use test results without adjusting for test errors.\n\nFalse positives skew results. Suppose you are searching for something really rare (1 in a million). Even with a good test, it’s likely that a positive result is really a false positive on somebody in the 999,999.\n\nPeople prefer natural numbers. Saying “100 in 10,000″ rather than “1%” helps people work through the numbers with fewer errors, especially with multiple percentages (“Of those 100, 80 will test positive” rather than “80% of the 1% will test positive”).\n\nEven science is a test. At a philosophical level, scientific experiments are “potentially flawed tests” and need to be treated accordingly. There is a test for a chemical, or a phenomenon, and there is the event of the phenomenon itself. Our tests and measuring equipment have a rate of error to be accounted for.\n\nBayes’ theorem converts the results from your test into the real probability of the event. For example, you can:\n\n• Correct for measurement errors. If you know the real probabilities and the chance of a false positive and false negative, you can correct for measurement errors.\n• Relate the actual probability to the measured test probability. Given mammogram test results and known error rates, you can predict the actual chance of having cancer given a positive test. In technical terms, you can find Pr(H|E), the chance that a hypothesis H is true given evidence E, starting from Pr(E|H), the chance that evidence appears when the hypothesis is true.\n\nBayes’ theorem, named after 18th-century British mathematician Thomas Bayes, is a mathematical formula for determining conditional probability. Conditional probability is the likelihood of an outcome occurring, based on a previous outcome occurring. Bayes’ theorem provides a way to revise existing predictions or theories (update probabilities) given new or additional evidence. In finance, Bayes’ theorem can be used to rate the risk of lending money to potential borrowers.\n\nBayes’ theorem is also called Bayes’ Rule or Bayes’ Law and is the foundation of the field of Bayesian statistics.\n\nMany modern machine learning techniques rely on Bayes’ theorem. For instance, spam filters use Bayesian updating to determine whether an email is real or spam, given the words in the email. Additionally, many specific techniques in statistics, such as calculating ppp-values or interpreting medical results, are best described in terms of how they contribute to updating hypotheses using Bayes’ theorem.\n\n### Key Takeaways\n\n• Bayes’ theorem allows you to update predicted probabilities of an event by incorporating new information.\n• Bayes’ theorem was named after 18th-century mathematician Thomas Bayes.\n• It is often employed in finance in updating risk evaluation.\n\n## Understanding Bayes’ Theorem\n\nApplications of the theorem are widespread and not limited to the financial realm. As an example, Bayes’ theorem can be used to determine the accuracy of medical test results by taking into consideration how likely any given person is to have a disease and the general accuracy of the test. 
Bayes' theorem relies on incorporating prior probability distributions in order to generate posterior probabilities. Prior probability, in Bayesian statistical inference, is the probability of an event before new data is collected. This is the best rational assessment of the probability of an outcome based on the current knowledge before an experiment is performed. Posterior probability is the revised probability of an event occurring after taking into consideration new information. Posterior probability is calculated by updating the prior probability using Bayes' theorem. In statistical terms, the posterior probability is the probability of event A occurring given that event B has occurred.

Bayes' theorem thus gives the probability of an event based on new information that is, or may be, related to that event. The formula can also be used to see how the probability of an event occurring is affected by hypothetical new information, supposing the new information will turn out to be true. For instance, say a single card is drawn from a complete deck of 52 cards. The probability that the card is a king is four divided by 52, which equals 1/13 or approximately 7.69%. Remember that there are four kings in the deck. Now, suppose it is revealed that the selected card is a face card. The probability the selected card is a king, given it is a face card, is four divided by 12, or approximately 33.3%, as there are 12 face cards in a deck.

## Examples of Bayes' Theorem

Below are two examples of Bayes' theorem, in which the first example shows how the formula can be derived in a stock investing example using Amazon.com Inc. (AMZN). The second example applies Bayes' theorem to pharmaceutical drug testing.

### Deriving the Bayes' Theorem Formula

Bayes' theorem follows simply from the axioms of conditional probability. Conditional probability is the probability of an event given that another event occurred. For example, a simple probability question may ask: "What is the probability of Amazon.com's stock price falling?" Conditional probability takes this question a step further by asking: "What is the probability of AMZN stock price falling given that the Dow Jones Industrial Average (DJIA) index fell earlier?"

The conditional probability of A given that B has happened can be expressed as:

P(A|B) = P(A and B) / P(B)

If A is: "AMZN price falls," then P(AMZN) is the probability that AMZN falls; and if B is: "DJIA is already down," and P(DJIA) is the probability that the DJIA fell; then the conditional probability expression reads as "the probability that AMZN drops given a DJIA decline is equal to the probability that AMZN price declines and DJIA declines over the probability of a decrease in the DJIA index," that is, P(AMZN|DJIA) = P(AMZN and DJIA) / P(DJIA).
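As a quick check, the card example can be computed directly from the conditional probability formula with exact fractions:

```python
from fractions import Fraction

p_face = Fraction(12, 52)           # P(B): the card is a face card
p_king_and_face = Fraction(4, 52)   # P(A and B): every king is a face card

# Conditional probability: P(A|B) = P(A and B) / P(B)
p_king_given_face = p_king_and_face / p_face
print(p_king_given_face)            # 1/3, i.e. about 33.3%
```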
https://socratic.org/questions/598b38b7b72cff5651cbdd0b
[ "# What is the boiling point for a 3 molal \"CaCl\"_2 aqueous solution? K_b = 0.512^@ \"C/m\".\n\nAug 9, 2017\n\nAs you should have seen in your book,\n\n$\\Delta {T}_{b} \\equiv {T}_{b} - {T}_{b}^{\\text{*}} = i {K}_{b} m$,\n\nwhere:\n\n• $\\Delta {T}_{b}$ is the change in boiling point in $\\text{^@ \"C}$, from that of the pure solvent, ${T}_{b}^{\\text{*}}$, to that of the solution, ${T}_{b}$.\n• $i$ is the van't Hoff factor, i.e. the effective number of solute particles in solution.\n• ${K}_{b} = {0.512}^{\\circ} \\text{C/m}$ is the boiling point elevation constant of water.\n• $m$ is the molality of the solution... $\\text{mol solute/kg solvent}$. What is the solvent?\n\nAssuming 100% dissociation...\n\n${\\text{CaCl\"_2(aq) -> \"Ca\"^(2+)(aq) + 2\"Cl}}^{-} \\left(a q\\right)$\n\nand $1 + 2 = 3 \\approx i$, so...\n\n$\\textcolor{b l u e}{{T}_{b}} = {T}_{b}^{\\text{*}} + i {K}_{b} m$\n\n$= {100}^{\\circ} \\text{C\" + 3 cdot 0.512^@ \"C/m\" cdot \"1.56 m}$\n\n$= \\textcolor{b l u e}{\\underline{{102.396}^{\\circ} \\text{C}}}$\n\nWhat was the change in boiling point?" ]
https://shrew.app/show/khedkar/testing-variable
[ "```Rectangle(color=\"green\")\ncircle = Circle(width=5, height=5, color=\"white\")\ngrow = 10\nwith animation(duration=2):\ncircle.width = grow\ncircle.height = grow\ncircle.color = \"white\"\ncircle1 = Circle(width=65, height=65, color=\"white\")\ngrow1 = 20\nwith animation(duration=2):\ncircle1.width = grow1\ncircle1.height = grow1\ncircle1.color = \"red\"```\n\n# testing variable\n\nby khedkar\n\nCreated 2 years, 10 months ago." ]
https://lernapparat.de/torchdrift-partial-mmd
# TorchDrift and Partial MMD Drift Detection

July 6, 2021

So I have not blogged about TorchDrift yet, even though I have done a lot of writing and talking about it since we released it in March.

## Introducing TorchDrift

The key idea behind TorchDrift is to give you tools to check whether the data your model sees is still compatible with what you used to train and test it on. Check out our poster from the PyTorch Ecosystem Day or the first PyTorch Community Voices podcast (YouTube link) for more details.

TorchDrift came to life when we looked into how to accomplish this and found that there was no PyTorch library providing the necessary tooling. (TorchDrift is a joint project of my company, MathInf GmbH, with the great colleagues at Orobix srl; if you are into PyTorch, you probably know Luca Antiga, the CTO, as the co-author of our book. It originated with an internal project for Orobix in the context of their invariant.ai product, but we decided to provide the library as open source. Alibi-Detect is a library that does drift detection on TensorFlow; it added PyTorch support later.)

## Detecting drift

The basic technique was relatively easy to select: following S. Rabanser et al.: Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift, NeurIPS 2019, we use two-sample testing. Because it is important, let us look at what we do for this in detail:

• We consider two distributions, the reference and the test distribution. At validation (or early production) time we draw some samples $X_1, ..., X_N$ from the reference distribution.
• In production, we then collect samples $Y_1, ..., Y_M$ which we consider to be drawn from the test distribution.
• To assess whether the model has drifted, we conduct a statistical test with the null hypothesis that the reference distribution and the test distribution are the same. As we are using $N$ and $M$ samples rather than fixing one distribution to some analytically known one, this is a two-sample test.

There are some details to this, and an important one is how the testing is done. Basically, we take a distance measure between the empirical distributions as the test statistic. A natural choice is an estimator for a distance between the underlying distributions; one that seems to work well is the Maximum Mean Discrepancy (MMD) distance. The mechanics of statistical testing then require that we compute the distribution of this quantity under the null hypothesis. If we have no idea how to do this analytically, we can always use the bootstrap distribution from permutations: for the two-sample test, this means that we combine the $M + N$ samples (because the null hypothesis says they are from the same distribution) and then repeatedly partition the combined set randomly and compute the distance. This gives us an estimate for the distribution $F$ of the distances under the null hypothesis. Now if our two samples (the real ones) have distance $d(X,Y)$, this corresponds to a $p$-value of $1 - F(d(X,Y))$.

So far so good. But it turns out that in practice, the $p$-values we get are often extremely small, say $10^{-10}$ (so it is considered extremely unlikely that the reference and test distributions are the same), and the drift alarm goes off quite frequently. Not good!

What do people do in these cases? The most common thing, and we do this too, is to calibrate the threshold, i.e.
forget the $p$-values and set the alarm threshold to some value that we expect or observe to be exceeded rarely enough in normal operations.

This sometimes works reasonably well, but now we are considering things normal that our statistical model considers very unusual! We should ask ourselves what went wrong.

## What is wrong with vanilla two-sample testing

At MathInf and Orobix, we looked at this very hard and eventually came to the conclusion that a (frequently large) part of the problem lies in the underlying assumption: if we ask whether the reference and test distributions are the same and the reference samples are OK, we are more or less asking whether the sample we have from the test distribution is somehow representative of the reference distribution. Quite often this is a highly unrealistic expectation in practice:

• There may be fluctuations in the environment that mean that the reference dataset is richer than we expect any single test dataset to be. For example, outdoor lighting conditions vary over time through day and night, with weather conditions, or with the seasons of the year. If our model is sufficiently trained to cover all of these, it will operate in normal conditions even though the test samples, drawn from a much shorter time interval than the reference, do not show this variety. Now, we could make the variation explicit and then test against conditional references, but this would cause significant additional effort.
• Inputs provided by human users, e.g. search queries, are likely to have spikes of user interest in a given time range rather than uniformly querying the entire reference (e.g. the contents of a knowledge database). This, too, should count as normal operation.

One way to mitigate these effects could be to enlarge the data sample (and the time to collect it), but this may render timely drift detection infeasible.

One thing to note here is that outlier detection, a technique where we ask whether a single sample could reasonably have originated from the reference distribution, would conclude that the inputs in the above situations are not outliers. However, we still want model monitoring to flag a collapse of the input distribution (e.g. a frozen camera image from a visual inspection system) as problematic, but outlier detection, unaware of the distribution of the inputs beyond a single given sample, cannot identify this. In this sense, we need a bridge between drift detection and outlier detection: we want the multi-sample approach on the test side, like drift detection. At the same time, we aim to remove the requirement that the test sample cover the full reference distribution, a requirement that outlier detection does not have.

## A toy example

As an example, consider a binary classifier. In an idealized setting, it might have a latent space feature density like this (darker = higher probability density):

Now one thing that might happen is that we get a very imbalanced batch. (Of course, there are contexts where we would desire to detect such a class imbalance as drift from a more balanced expectation. But, crucially, not always!)

Now is this test batch representative of our reference distribution? Certainly not! But has it drifted, i.e. does the model operate out of spec now?
It depends!

But now if we calibrate our drift detector to accept this by raising the detection threshold (above the 0.36 in the plot title), we miss configurations where drift has clearly occurred, such as the following:

So this is what we want to solve!

## How to improve drift detection

So it is natural to ask whether we can check if the test distribution is representative of part of the reference distribution. It turns out that we can, and here is how we did it:

Our first step was to look at the Wasserstein distance, also known as the Earth Mover's Distance. (The most ardent followers of this blog will know that I have a soft spot for the Wasserstein distance.) In a nutshell, and specializing to the discrete case, given some cost function $C(X_i, Y_j)$, it tries to match points $X_i$ and $Y_j$ (allowing fractional matches) such that the functional $W(X,Y) = \sum_{i, j} P(X_i, Y_j) C(X_i, Y_j)$ is minimal. (To get the $p$-Wasserstein distance, one would choose $C(X_i,Y_j) = d(X_i,Y_j)^p$ and consider $W(X,Y)^{1/p}$, but for the purpose of hypothesis testing, the exponent does not matter until you approximate the distribution of the test statistic.) Here $P(X_i, Y_j) \geq 0$ gives the mass that is matched.

The sum is then (a power of) the Wasserstein distance. The relation to the empirical distributions behind the $X_i$ and $Y_j$ is that if each point has weight $1/N$ or $1/M$, respectively, we ask that $\sum_j P(X_i, Y_j) = 1/N$ and $\sum_i P(X_i, Y_j) = 1/M$ (in addition to $P \geq 0$).

How does this help? It lets us do attribution, i.e. we can break down the overall distance into contributions of individual points. Now if we only want to match part of the reference distribution, we can just invent some unfavourable test point that is equally far away from all the reference points. Then the optimal transport plan will map the real test points to nearby reference points, and the remaining mass maps to the imaginary distant point. But now we can just leave out that part of the mapping when computing the distance, to get something that does not depend on the distant point. (I was all proud of this trick, but of course, L. Caffarelli and R. McCann knew it a decade ago: Free boundaries in optimal transport and Monge-Ampère obstacle problems, Annals of Mathematics (171), 2010.)

If we had mass $1-\alpha$ at the distant point, our partial matching now satisfies $\sum_i P(X_i, Y_j) = \alpha/M$ and $0 \leq \sum_j P(X_i, Y_j) \leq 1/N$, and we might rescale the cost functional by $1/\alpha$ (this means that if we mix in distant masses $1-\alpha$ on both sides and match only the "original part" $\alpha$, the $W_\alpha$ distance recovers the value of $W$ on the original distributions) to define $$W_\alpha(X,Y) = \frac{1}{\alpha} \sum_{ij} P(X_i, Y_j) C(X_i, Y_j).$$

So this helps us not to detect drift when there is a controlled narrowing. It turns out, however, that drift detectors using the Wasserstein distance as the test statistic have trouble detecting drift reliably (at least in our experiments), even in the vanilla "full match" case. So what to do?

## Revisiting the MMD distance

The maximum mean discrepancy distance (A. Gretton et al.: A Kernel Two-Sample Test, JMLR 13(25):723–773, 2012), which powers what has become our bread-and-butter drift detector, appears to have much better drift detection performance in the full match case. So it is natural to ask whether we can apply a similar trick as for the Wasserstein distance.

It turns out the trick is a bit different.
For empirical distributions, MMD is computed using the matrix of kernel evaluations at pairs of points, i.e.

\begin{aligned} MMD^2(X, Y) &= \frac{1}{n^2} \sum_{i} \sum_{j} k(x_i, x_j) + \frac{1}{m^2} \sum_{i} \sum_{j} k(y_i, y_j) \\ &\qquad - 2 \frac{1}{n m} \sum_{i} \sum_{j} k(x_i, y_j). \\ \end{aligned}

(There are several versions of this estimate. This one, considered as an estimate of the squared distance $|\mu_X - \mu_Y|^2$ between the distributions from which $x_i$ and $y_j$ are drawn, is biased; for an unbiased estimator one would want to remove the diagonals in the first two terms.) One thing we see here: in contrast to the coupling in the Wasserstein case, all points from $X$ and all points from $Y$ interact in the same way. This means that adding a point and then removing it from the summation, like we did above, does not help us here.

But if we introduce two vectors $w = (1/N)_{i=1,...,N}$ and $v = (1/M)_{j=1,...,M}$ and introduce the kernel matrices $K^X = k(x_i, x_j)$, $K^Y = k(y_i, y_j)$ and $K^{XY} = k(x_i, y_j)$, we can rewrite this in matrix notation as

$$MMD^2(X, Y) = w^T K^X w + v^T K^Y v - 2 w^T K^{XY} v.$$

But now $w$ is a weight vector representing a uniform distribution on the reference samples. A partial matching would deviate from this uniformity by allowing some weights to be $0$ and the others to grow larger. We can use the Wasserstein coupling above to get a replacement weight incorporating the idea of matching a fraction $\alpha$, by taking the marginal of the coupling (scaled by $\frac{1}{\alpha}$ to absorb the normalization factor we had above): $$w^{twostage}_i := \frac{1}{\alpha} \sum_j P(x_i, y_j).$$

We call this the two-stage weight (and define $MMD^2_{\alpha, twostage}$ with it) because we first use the Wasserstein distance and then the MMD distance. It turns out that this is a very good test statistic for our drift detection.

But we can expand on the idea of computing the MMD distance on a partial set of points by optimizing over the weight $w$ instead. The natural choice for the set $\mathcal M$ of admissible weight vectors $w$ is the set we identified as possible weights in our look at the partial Wasserstein distance: $$\mathcal M = \{w \in R^N \mid 0 \leq w_i \leq \frac{1}{\alpha N}, \sum_i w_i = 1 \}.$$ We thus define the partial MMD distance as the minimum $$MMD^2_{\alpha} = \min_{w \in \mathcal M} w^T K^X w + v^T K^Y v - 2 w^T K^{XY} v.$$

This is a quadratic programming problem with equality and inequality constraints. As such, it is standard, but not "easy" to solve, in the sense that libraries like quadprog exist that solve it for us, but the solution takes quite a long time to compute (for our application).

The simplicity of the problem means that we can also implement an ad-hoc active-set optimization scheme. (The algorithm we implemented for TorchDrift is described in the report. Our implementation cheats a bit because we do not perfectly project the solution back into the admissible set, potentially allowing some $w_i$ to be larger than $\frac{1}{\alpha N}$ when we scale $w$ to enforce summing to $1$.)

With this definition of the partial MMD distance, our two-stage weight $w^{twostage}$ is admissible, and so we get the upper bound $MMD^2_{\alpha, twostage} \geq MMD^2_{\alpha}$. But is it a good approximation? Our empirical experiments suggest no: it seemed that $MMD^2_{\alpha, twostage}$ was often an order of magnitude larger. A small standalone sketch of the two-stage computation follows below.
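To make the two-stage construction concrete, here is a minimal NumPy/SciPy sketch. It is not the TorchDrift implementation: the generic LP solver (scipy.optimize.linprog) for the coupling, the Gaussian kernel, and its bandwidth are illustrative choices of this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def gaussian_kernel(A, B, bw=1.0):
    # k(a, b) = exp(-|a - b|^2 / (2 bw^2)); A is (n, d), B is (m, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def two_stage_partial_mmd(X, Y, alpha=0.5, bw=1.0):
    """Two-stage partial MMD^2: partial OT coupling first, weighted MMD second."""
    N, M = len(X), len(Y)
    # Stage 1: balanced optimal transport between the reference points (mass 1/N
    # each) and the test points scaled to total mass alpha, plus one virtual
    # distant point of mass 1 - alpha. Its cost is the same from everywhere,
    # so we may set it to zero without changing the optimal coupling.
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    C = np.hstack([C, np.zeros((N, 1))])          # extra column: distant point
    A_eq, b_eq = [], []
    for i in range(N):                            # reference marginals: 1/N each
        row = np.zeros(N * (M + 1))
        row[i * (M + 1):(i + 1) * (M + 1)] = 1.0
        A_eq.append(row)
        b_eq.append(1.0 / N)
    for j in range(M + 1):                        # test marginals: alpha/M, then 1-alpha
        col = np.zeros(N * (M + 1))
        col[j::M + 1] = 1.0
        A_eq.append(col)
        b_eq.append(alpha / M if j < M else 1.0 - alpha)
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    P = res.x.reshape(N, M + 1)
    w = P[:, :M].sum(axis=1) / alpha              # two-stage weights, sum to 1
    v = np.full(M, 1.0 / M)
    # Stage 2: the (biased) MMD^2 estimate with w replacing the uniform weights.
    Kxx = gaussian_kernel(X, X, bw)
    Kyy = gaussian_kernel(Y, Y, bw)
    Kxy = gaussian_kernel(X, Y, bw)
    return w @ Kxx @ w + v @ Kyy @ v - 2.0 * w @ Kxy @ v

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))    # reference sample
Y = rng.normal(size=(10, 2))    # test sample covering only part of the reference
print(two_stage_partial_mmd(X, Y, alpha=0.5))
```

The TorchDrift implementation works on torch tensors and wraps the statistic in a drift detector that also handles fitting and $p$-values; the sketch above only illustrates the structure of the computation.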
However, since our interest is in the drift detector we get from the statistic, the loose approximation seemed to work rather well.

For TorchDrift, this means that while we implement both the quadratic programming solution and the somewhat faster approximation, the two-stage drift detector is much cheaper computationally. Thus, until we have a faster implementation of the specific QP problem, the two-stage drift detector is our first stop when we find that we have permissible fluctuation in our deployed input and feature distribution.

Of course, the best solution is still to make the monitoring fit the deployment, and the toy example shows how the partial matching achieves that.

## Back to our example

We can see how this works in our toy example. If we match the full reference (I use the Wasserstein coupling and return the largest matches), half the probability mass needs to go to the right-hand side, giving a very large distance.

On the other hand, if we only match 15% of the distribution (the size relation of the reference and test data), we get a rather clean match and a small distance.

The drifted data is also able to take advantage of the partial matching, but the increased distance remains very visible:

The MMD distances shown in the plot titles have been computed with the two-stage method discussed above.

## Bootstrapping improvements

There is another new feature of TorchDrift that incorporates what we think is mathematical progress. When computing $p$-values, there is a subtle loss of information when using two-sample testing, including the $p$-value computation, as a black box.

The two-sample test null hypothesis is, of course, that $x_i$, $i=1,...,N$ and $y_j$, $j=1,...,M$ are sampled from the same distribution $P$. The bootstrapping then pools the $x_i$ and $y_j$ and computes the test statistic for random splits of the joint set into subsets of cardinality $N$ and $M$ to simulate drawing from the distribution $P$. This is all well, but in drift detection, the stochastic model is that the distribution $P_X$ of the $x_i$ is fixed and the null hypothesis is that the $y_j$ are drawn from $P_X$. This makes the pooling step dubious, as it will "pollute" the distribution of the test statistic for non-drifted data with whatever we get as the test set. We can improve the bootstrapping by taking $N+M$ samples from the reference distribution $P_X$ during the fitting of the drift detector.

Not only is this mathematically more sound, but we may also gain from it computationally: we can now fit a suitable parametric distribution to the bootstrap sample during fitting of the drift detector. (Gretton suggested a gamma distribution approximation. We found that this works even better when incorporating a shift, so we determine the shift from the minimum value we observe and use moment fitting to obtain a shifted gamma distribution.) This saves us from having to do the bootstrap sampling during the actual drift detection, as we can compute the test statistic and plug it into (one minus) the test distribution to get the $p$-value.

TorchDrift does this if you provide the n_test argument to the call of the fit method.

## Is this the end of calibration?

So we started by discussing why calibration is a very unsatisfactory answer to dealing with overly large detection rates when using $p$-value thresholds. Will we not need calibration when deploying our new drift detectors?

While we think that this improved methodology takes drift detection a good step forward, it is very likely that there are still gaps.
Until we can further refine our methodology, there will be a need for some big calibration hammer to overcome the mismatch between model and reality. But in many instances, we can apply it much more judiciously with the improvements described above.

## Try it in TorchDrift

You can try the new partial MMD methods in TorchDrift today by checking out the git repository. A release will follow soon!

A more mathematical writeup is in our report Partial Wasserstein and Maximum Mean Discrepancy distances for bridging the gap between outlier detection and drift detection.

## Consulting and commercial support

If you need help with your drift detection or with the process of deploying your models in general, do check out our commercial offerings at MathInf GmbH and Orobix srl.

I hope you enjoyed this little overview of brand-new things in TorchDrift. I welcome your feedback at tv@lernapparat.de.
https://www.pbs.org/wgbh/aso/resources/guide/phyact4index.html
[ "", null, "", null, "Universal Proportions\n\nOverview: Create scale models of Earth, the Moon, and distances in space\nLearning Goal: Develop a sense of the relative sizes of Earth, the Moon, and of the immense distances within the universe\nVideo Link: New View of the Universe\n\nIntroduction\n\nWhen Edwin Hubble calculated the distance to a variable star, he was astounded to see that it was outside the Milky Way. The universe was far larger than anyone had expected! To give students an idea of the immense size of the universe, have them work in groups of three to model the relationship between Earth and the Moon, and then scale distances within and beyond our solar system. After the activity, show the New View of the Universe video segment so that students can learn more about the people and events that revolutionized our understanding of the universe.", null, "Model the Earth and Moon\n\nMaterials for each group:\n• three 8 oz. cans of clay (rolled into one ball)\n• measuring tape or yardstick\n• toothpicks\n1. Give each group a ball of clay, and ask students to divide it into 51 equal pieces. Allow time for students to figure out how to divide the clay.\n2. Have students draw upon their existing knowledge to create scale models of Earth and the Moon using the 51 pieces of clay. Students should record how many pieces they use to build each model. Have them record the volume ratio of Earth to the Moon using the number of clay pieces in each model. Compare results.\n3. Share with students that Earth is about 50 times the size of the Moon, based on its volume. Ask them to regroup the clay in a 50:1 ratio. How do their original models compare in volume to this ratio?\n4. Now that the models accurately show the relative sizes of Earth and the Moon, ask students to show the relative distance between them. Again, encourage them to use their own intuition and knowledge to decide how far apart to place their models. Have each group measure and compare distances, and discuss their reasoning.\n5. The mean distance between Earth and the Moon is 384,000 km, and Earth's diameter is 12,756 km. Have students use this information to figure out how far apart to place their models to represent the relative distance between Earth and the Moon. How close were their original estimates?\n6. The space shuttle orbits at approximately 611 km above Earth. Have students locate the shuttle's orbit based on the scale of their clay models. A toothpick inserted into the clay can show the distance of the shuttle from Earth.\n7. Where does the Sun fit in this scale model of planetary orbits? Let students know that the mean distance between the Sun and Earth is about 150,000,000 km. Ask students to determine the proper placement of a marker representing the Sun. Would the Sun model be in the classroom?", null, "Scale Distances\n\nMaterials for each group:\n• butcher paper\n• measuring tape or yardstick\n• copy of chart below\n1. The distance from Earth to the Moon is minuscule compared to distances within our solar system and beyond. Give each group a piece of butcher paper (about 1.25 m long) and a copy of the chart below. Have each group create a scale drawing of the solar system showing the relative distance of each planet from the Sun.\n2. When students are finished, have each group explain its scale. With that scale, can they show the position of the Moon? If not, what scale would work? Then how far away from the Sun would Pluto be?\n3. 
Finally, have each group use the scale it established to calculate the length of paper needed to show the distance from the solar system to Alpha Centauri, the nearest star system, 4.3 light-years away; from the Sun to the center of the Milky Way, 30,000 light-years away; and from our galaxy to the Andromeda nebula, the nearest spiral galaxy, 2 million light-years away.

Distances from the Sun

Mercury: 58 million km (36 million mi)
Venus: 108.2 million km (67 million mi)
Earth: 150 million km (93 million mi)
Mars: 227.9 million km (140 million mi)
Jupiter: 778.4 million km (483 million mi)
Saturn: 1.4 billion km (886 million mi)
Uranus: 2.9 billion km (1.8 billion mi)
Neptune: 4.5 billion km (2.8 billion mi)
Pluto: 5.9 billion km (3.7 billion mi)
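For checking the scaling arithmetic in steps 5-7 above, here is a small Python sketch (the 4 cm Earth-model diameter is an assumed example value; the distances are those given in the activity):

```python
EARTH_DIAMETER_KM = 12_756
EARTH_MOON_KM = 384_000
EARTH_SUN_KM = 150_000_000
SHUTTLE_ORBIT_KM = 611

model_earth_cm = 4.0                          # assumed clay-model diameter
scale = model_earth_cm / EARTH_DIAMETER_KM    # model cm per real km

print(f"Moon model distance: {EARTH_MOON_KM * scale:.0f} cm")        # ~120 cm
print(f"Shuttle orbit marker: {SHUTTLE_ORBIT_KM * scale:.2f} cm")    # ~0.19 cm
print(f"Sun marker distance: {EARTH_SUN_KM * scale / 100:.0f} m")    # ~470 m
```

At this scale the Sun marker would sit roughly 470 meters away, which answers the question in step 7: no, it would not be in the classroom.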
[ null, "https://www.pbs.org/wgbh/aso/resources/guide/images/sciodtitle.jpeg", null, "https://www.pbs.org/wgbh/aso/resources/guide/images/phyinv.gif", null, "https://www.pbs.org/wgbh/aso/resources/guide/images/procedure1.gif", null, "https://www.pbs.org/wgbh/aso/resources/guide/images/procedure2.gif", null, "https://www.pbs.org/wgbh/aso/resources/guide/images/sun.jpeg", null, "https://www.pbs.org/wgbh/aso/resources/guide/images/activities.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91863745,"math_prob":0.90558827,"size":4463,"snap":"2021-21-2021-25","text_gpt3_token_len":1015,"char_repetition_ratio":0.15317336,"word_repetition_ratio":0.017834395,"special_character_ratio":0.23325117,"punctuation_ratio":0.09873708,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95472014,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-09T15:04:53Z\",\"WARC-Record-ID\":\"<urn:uuid:22ee66ba-ba2d-43b8-bbdc-2fcd9029d596>\",\"Content-Length\":\"8922\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e688c399-30ed-44a2-85f1-d681f25cb640>\",\"WARC-Concurrent-To\":\"<urn:uuid:ecd51ee4-90aa-40fe-9fcc-b433d870bf60>\",\"WARC-IP-Address\":\"52.85.132.16\",\"WARC-Target-URI\":\"https://www.pbs.org/wgbh/aso/resources/guide/phyact4index.html\",\"WARC-Payload-Digest\":\"sha1:D2PFCRAOHI3TB6UG42HKLP2KYEVKTTVQ\",\"WARC-Block-Digest\":\"sha1:MVP7RBTJTYA6PVKJQAO7KEEVZVXZNXJE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988986.98_warc_CC-MAIN-20210509122756-20210509152756-00193.warc.gz\"}"}
https://docs.jbase.com/34463-mv-migration-station/randomize
[ "# RANDOMIZE\n\nDESCRIPTION\n\nUse the RANDOMIZE statement with an expression to make the RND function generate the same sequence of random numbers each time the program is run.\n\nexpression must be a positive integer or zero.\n\nIf no expression is supplied, or if expression evaluates to a null value, the internal time of day is used (the null value is ignored). In these cases, the sequence is different each time the program is run.\n\n`RANDOMIZE [(expression)]`\n\nAn example of use:\n\n```RANDOMIZE (0)\nFOR N=1 TO 10\nPRINT RND(4):' ':\nNEXT N\nPRINT\n*\nRANDOMIZE (0)\nFOR N=1 TO 10\nPRINT RND(4):' ':\nNEXT N\nPRINT\n*\nRANDOMIZE (3)\nFOR N=1 TO 10\nPRINT RND(4):' ':\nNEXT N\nPRINT```\n\nThis is the program output; note that the two runs seeded with 0 produce identical sequences:\n\n```0 2 1 2 0 2 1 2 1 1\n0 2 1 2 0 2 1 2 1 1\n2 0 1 1 2 1 0 1 2 3```" ]
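The seed-reproducibility behavior documented above can be illustrated outside jBASE BASIC as well; here is a minimal Python analogy (an illustration only, not part of the jBASE documentation):

```python
import random

def ten_draws(seed=None):
    """Mimic RANDOMIZE (expression) followed by ten RND(4) calls."""
    rng = random.Random(seed)   # seed=None acts like RANDOMIZE with no expression
    return [rng.randrange(4) for _ in range(10)]

print(ten_draws(0))   # same seed ...
print(ten_draws(0))   # ... same sequence
print(ten_draws(3))   # different seed, different sequence
```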
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.58339626,"math_prob":0.98855305,"size":738,"snap":"2020-24-2020-29","text_gpt3_token_len":233,"char_repetition_ratio":0.15395096,"word_repetition_ratio":0.28,"special_character_ratio":0.32791328,"punctuation_ratio":0.08522727,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.979782,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-28T12:25:11Z\",\"WARC-Record-ID\":\"<urn:uuid:1cb8b7bc-22ad-4824-98ab-5d8ba37df1c0>\",\"Content-Length\":\"20182\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d366f6b4-f2e6-462d-843f-83b34eef796c>\",\"WARC-Concurrent-To\":\"<urn:uuid:a36c0291-e7da-41da-97af-bcf0c2a46f57>\",\"WARC-IP-Address\":\"50.16.128.128\",\"WARC-Target-URI\":\"https://docs.jbase.com/34463-mv-migration-station/randomize\",\"WARC-Payload-Digest\":\"sha1:2ILKMRKYWR6RIFJBORAN3HYG5KE4BUFM\",\"WARC-Block-Digest\":\"sha1:GZO4DNZJ26U3WAYS7IINASM2YHRDOBAF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347396089.30_warc_CC-MAIN-20200528104652-20200528134652-00260.warc.gz\"}"}
https://docs.fd.io/csit/rls2005/report/introduction/methodology_data_plane_throughput/methodology_plrsearch.html
[ "# PLRsearch¶\n\n## Motivation for PLRsearch¶\n\nNetwork providers are interested in the throughput a system can sustain.\n\nRFC 2544 assumes the loss ratio is given by a deterministic function of offered load. But NFV software systems are not deterministic enough. This makes deterministic algorithms (such as binary search per RFC 2544 and MLRsearch with a single trial) return results that, when repeated, show relatively high standard deviation, thus making it harder to tell what "the throughput" actually is.\n\nWe need another algorithm, which takes this indeterminism into account.\n\n## Generic Algorithm¶\n\nA detailed description of the PLRsearch algorithm is included in the IETF draft draft-vpolak-bmwg-plrsearch-02, which is in the process of being standardized in the IETF Benchmarking Methodology Working Group (BMWG).\n\n### Terms¶\n\nThe rest of this page assumes the reader is familiar with the following terms defined in the IETF draft:\n\n• Trial Order Independent System\n\n• Duration Independent System\n\n• Target Loss Ratio\n\n• Zero Loss Region\n\n• Non-Deterministic Region\n\n• Guaranteed Loss Region\n\n• Fitting Function\n\n• Stretch Function\n\n• Erf Function\n\n• Bayesian Inference\n\n• Prior Distribution\n\n• Posterior Distribution\n\n• Numeric Integration\n\n• Monte Carlo\n\n• Importance Sampling\n\n## FD.io CSIT Implementation Specifics¶\n\nThe search receives min_rate and max_rate values, to avoid measurements at offered loads not supported by the traffic generator.\n\nThe implemented test cases use bidirectional traffic. The algorithm stores each rate as a bidirectional rate (internally, the algorithm is agnostic to flows and directions; it only cares about aggregate counts of packets sent and packets lost), but debug output from the traffic generator lists unidirectional values.\n\nIn a sample implementation in the FD.io CSIT project, there is roughly a 0.5 second delay between trials due to restrictions imposed by the packet traffic generator in use (T-Rex).\n\nAs measurement results come in, the posterior distribution computation takes more time (per sample), although there is a considerable constant part (mostly for inverting the fitting functions).\n\nAlso, the integrator needs a fair number of samples to reach the region the posterior distribution is concentrated at.\n\nAnd of course, the speed of the integrator depends on the computing power of the CPU the algorithm is able to use.\n\nAll those timing-related effects are addressed by arithmetically increasing trial durations with configurable coefficients (currently 5.1 seconds for the first trial, each subsequent trial being 0.1 second longer).\n\nTo avoid arithmetic overflows and underflows when handling extremely small quantities, the current implementation tracks the natural logarithm (instead of the original quantity) of any quantity which is never negative. The logarithm of zero is minus infinity (not supported by Python), so the special value "None" is used instead. Specific functions for frequent operations (such as "logarithm of sum of exponentials") are defined to handle None correctly.\n\nThe current implementation uses two fitting functions, called "stretch" and "erf". In general, their estimates for the critical rate differ, which adds a simple source of systematic error, on top of the randomness error reported by the integrator. 
Otherwise the reported stdev of the critical rate estimate is unrealistically low.\n\nBoth functions are not only increasing, but also convex (meaning the rate of increase is also increasing).\n\nBoth fitting functions have several mathematically equivalent formulas, each of which can lead to an arithmetic overflow or underflow in different sub-terms. Overflows can be eliminated by using different exact formulas for different argument ranges. Underflows can be avoided by using approximate formulas in the affected argument ranges; such ranges have their own formulas to compute. In the end, both fitting function implementations contain multiple "if" branches, and discontinuities are a possibility at range boundaries.\n\nThe numeric integrator expects all the parameters to be distributed (independently and) uniformly on the interval (-1, 1).\n\nAs both the "mrr" and "spread" parameters are positive and not dimensionless, a transformation is needed. Dimensionality is inherited from the max_rate value.\n\nThe "mrr" parameter follows a Lomax distribution with alpha equal to one, but shifted so that mrr is always greater than 1 packet per second.\n\nThe "stretch" parameter is generated simply as the "mrr" value raised to a random power between zero and one; thus it follows a reciprocal distribution.\n\nAfter a few measurements, the posterior distribution of the fitting function arguments gets quite concentrated in a small area. The integrator uses Monte Carlo with importance sampling, where the biased distribution is a bivariate Gaussian distribution with deliberately larger variance. If a generated sample falls outside the (-1, 1) interval, another sample is generated.\n\nThe center and the covariance matrix for the biased distribution are based on the first and second moments of the samples seen so far (within the computation). The center is used directly; the covariance matrix is scaled up by a heuristic constant (8.0 by default). The following additional features are applied, designed to avoid hyper-focused distributions.\n\nEach computation starts with the biased distribution inherited from the previous computation (a zero point and unit covariance matrix are used in the first computation), but the overall weight of the data is set to the weight of the first sample of the computation. Also, the center is set to the first sample point. When additional samples come, their weight (including the importance correction) is compared to the sum of the weights of the data seen so far (within the iteration). If the new sample is more than one e-fold more impactful, both weight values (for the data so far and for the new sample) are set to the (geometric) average of the two weights.\n\nThis combination showed the best behavior, as the integrator usually follows two phases. The first phase (where the inherited biased distribution or a single big sample is dominating) is mainly important for locating the new area the posterior distribution is concentrated at. The second phase (dominated by the whole sample population) is actually relevant for the critical rate estimation.\n\nThe first two measurements are hardcoded to happen at the middle of the rate interval and at max_rate. The next two measurements follow MRR-like logic: the offered load is decreased so that it would reach the target loss ratio if the decrease in offered load led to an equal decrease in loss rate.\n\nThe rest of the measurements start directly at the average of the erf and stretch estimates. There is one workaround implemented, aimed at reducing the number of consecutive zero-loss measurements (per fitting function). 
The workaround first stores every measurement result whose loss ratio was at the target loss ratio or higher. A sorted list (called lossy loads) of such results is maintained.\n\nWhen a sequence of one or more zero-loss measurement results is encountered, the smallest of the lossy loads is drained from the list. If the estimate average is smaller than the drained value, a weighted average of this estimate and the drained value is used as the next offered load. The weight of the estimate decreases exponentially with the length of the run of consecutive zero-loss results.\n\nThis behavior helps the algorithm with convergence speed, as it does not need as many zero-loss results to get near the critical region. Using the smallest (not yet drained) of the lossy loads makes sure the new offered load is unlikely to land in the big-loss region. Draining even if the estimate is large enough helps to discard early measurements where loss happened at too low an offered load. The current implementation adds 4 copies of the lossy loads and drains 3 of them, which leads to fairly stable behavior even for somewhat inconsistent SUTs.\n\nAs high-loss-count measurements add many bits of information, they need a large number of small-loss-count measurements to balance them, making the algorithm converge quite slowly. Typically, this happens when a few initial measurements suggest a spread way bigger than later measurements do. The workaround in offered load selection helps, but more intelligent workarounds could get faster convergence still.\n\nSome systems evidently do not follow the assumption of repeated measurements having the same average loss rate (when the offered load is the same). The idea of estimating the trend is not implemented at all, as the observed trends have varied characteristics.\n\nProbably, using more realistic fitting functions would give better estimates than trend analysis.\n\n## Bottom Line¶\n\nThe notion of throughput is easy to grasp, but it is harder to measure with any accuracy for non-deterministic systems.\n\nEven though the notion of critical rate is harder to grasp than the notion of throughput, it is easier to measure using probabilistic methods.\n\nIn testing, the difference between throughput measurements and critical rate measurements is usually small; see Soak Tests vs NDR Tests.\n\nIn practice, rules of thumb such as "send at max 95% of purported throughput" are common. The correct benchmarking analysis should ask "Which notion is 95% of throughput an approximation to?" before attempting to answer "Is 95% of critical rate safe enough?".\n\n## Algorithmic Analysis¶\n\nWhile the estimation computation is based on hard probability science, the offered load selection part of the PLRsearch logic is pure heuristics, motivated by what a human would do based on measurement and computation results.\n\nThe quality of any heuristic is not affected by the soundness of its motivation, just by its ability to achieve the intended goals. In the case of offered load selection, the goal is to help the search converge to the long duration estimates sooner.\n\nBut even those long duration estimates could still be of poor quality. 
Even though the estimate computation is Bayesian (so it is the best it could be within the applied assumptions), it can still be of poor quality when compared to what a human would estimate.\n\nOne possible source of poor quality is the randomness inherently present in Monte Carlo numeric integration, but that can be suppressed by tweaking the time-related input parameters.\n\nThe most likely source of poor quality, then, is the assumptions. Most importantly, the number and the shape of the fitting functions; but also others, such as trial order independence and duration independence.\n\nThe result can have poor quality in basically two ways. One way is related to location. Both upper and lower bounds can be overestimates or underestimates, meaning the entire estimated interval between the lower bound and the upper bound lies above or below (respectively) the human-estimated interval. The other way is related to the estimation interval width. The interval can be too wide or too narrow, compared to human estimation.\n\nAn estimate from a particular fitting function can be classified as an overestimate (or underestimate) just by looking at its time evolution (without a human examining measurement results). Overestimates decrease over time, underestimates increase over time (assuming the system performance stays constant).\n\nThe quality of the width of the estimation interval needs human evaluation, and is unrelated to both the rate of narrowing (both good and bad estimate intervals get narrower at approximately the same relative rate) and the relative width (which depends heavily on the system being tested).\n\nThe following pictures show the upper (red) and lower (blue) bound, as well as the average of the Stretch (pink) and Erf (light green) estimates, and the offered load chosen (grey), as computed by PLRsearch, after each trial measurement within the 30-minute duration of a test run.\n\nBoth graphs focus on later estimates. Estimates computed from the few initial measurements are wildly off the y-axis range shown.\n\nThe following analysis will rely on the frequency of zero-loss measurements and the magnitude of the loss ratio when it is nonzero.\n\nThe offered load selection strategy used implies that zero-loss measurements can be gleaned from the graph by looking at the offered load points. When the points move up, farther from the lower estimate, it means the previous measurement had zero loss. After a non-zero loss, the offered load starts again right between (the previous values of) the estimate curves.\n\nThe very big loss ratio results are visible as noticeable jumps of both estimates downwards. Medium and small loss ratios are much harder to distinguish just by looking at the estimate curves; the analysis is based on raw loss ratio measurement results.\n\nThe following descriptions should explain why the graphs seem to signal a low-quality estimate at first sight, but a more detailed look reveals the quality is good (considering the measurement results).\n\n### L2 patch¶\n\nBoth fitting functions give similar estimates, the graph shows the "stochasticity" of measurements (estimates increase and decrease within small time regions), and there is an overall trend of decreasing estimates.\n\nAt first look, the final interval looks fairly narrow, especially compared to the region the estimates have travelled during the search. But a look at the frequency of zero-loss results shows this is not a case of overestimation. 
Measurements at around the same offered load have a higher probability of zero loss earlier (when performed farther from the upper bound), but a smaller probability later (when performed closer to the upper bound). That means it is the performance of the system under test that decreases (slightly) over time.\n\nWith that in mind, the apparent narrowness of the interval is not a sign of low quality, just a consequence of PLRsearch assuming the performance stays constant.", null, "### Vhost¶\n\nThis test case shows what looks like a quite broad estimation interval, compared to other test cases with similar-looking zero-loss frequencies. Notable features are infrequent high-loss measurement results causing big drops of estimates, and a lack of long-term convergence.\n\nAny convergence in medium-sized intervals (during zero-loss results) is reverted by the big loss results, as they happen quite far from the critical load estimates, and the two fitting functions extrapolate differently.\n\nIn other words, a human seeing only the estimates from one fitting function would expect a narrower final interval, but a human seeing the measured loss ratios agrees that the interval should be wider than that.", null, "### Summary¶\n\nThe two graphs show the behavior of the PLRsearch algorithm applied to a soak test when some of the PLRsearch assumptions do not hold:\n\n• L2 patch measurement results violate the assumption of performance not changing over time.\n\n• Vhost measurement results violate the assumption of a Poisson distribution matching the loss counts.\n\nThe reported upper and lower bounds can be farther apart or closer together than a human would expect at first look, but a closer look reveals that the quality is good, considering the circumstances.\n\nThe critical load estimate is of questionable value when the assumptions are violated.\n\nSome improvements can be made via more specific workarounds; for example, the long-term limit of L2 patch performance could be estimated by some heuristic.\n\nOther improvements can be achieved only by asking users whether loss patterns matter. Is it better to have single-digit losses distributed fairly evenly over time (as a Poisson distribution would suggest), or is it better to have short periods of medium losses mixed with long periods of zero losses (as happens in the Vhost test) with the same overall loss ratio?" ]
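The log-domain bookkeeping described in the implementation notes above (tracking natural logarithms, with None standing in for log 0, and a safe "logarithm of sum of exponentials") can be sketched in Python as follows; the function name log_plus is chosen for illustration and is not necessarily the name used in CSIT:

```python
import math

def log_plus(log_a, log_b):
    """Return log(exp(log_a) + exp(log_b)), where None encodes log(0)."""
    if log_a is None:
        return log_b
    if log_b is None:
        return log_a
    hi, lo = max(log_a, log_b), min(log_a, log_b)
    # Factor out the larger exponent so exp() never overflows.
    return hi + math.log1p(math.exp(lo - hi))

print(log_plus(None, 0.0))         # 0.0, i.e. log(0 + 1)
print(log_plus(-1000.0, -1000.0))  # ~-999.31, computed without underflowing to zero
```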
[ null, "https://docs.fd.io/csit/rls2005/report/_images/PLR_patch.svg", null, "https://docs.fd.io/csit/rls2005/report/_images/PLR_vhost.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9247556,"math_prob":0.9505757,"size":15367,"snap":"2023-40-2023-50","text_gpt3_token_len":2948,"char_repetition_ratio":0.13096401,"word_repetition_ratio":0.005892256,"special_character_ratio":0.18116744,"punctuation_ratio":0.08154671,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97543395,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T01:33:58Z\",\"WARC-Record-ID\":\"<urn:uuid:79e25010-e754-4329-83d3-fb783d860315>\",\"Content-Length\":\"33459\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec0bf748-08b9-48ad-b406-f9e41c5b703e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7aad0697-f27e-4507-b669-c4562b0f2354>\",\"WARC-IP-Address\":\"99.84.208.118\",\"WARC-Target-URI\":\"https://docs.fd.io/csit/rls2005/report/introduction/methodology_data_plane_throughput/methodology_plrsearch.html\",\"WARC-Payload-Digest\":\"sha1:EQ6EM5MRKM7ALYF2YM5PE3ZRTGBPSGDI\",\"WARC-Block-Digest\":\"sha1:O66XKZCEJSCWAS2PYGD5BKWBGRVX5G4I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100575.30_warc_CC-MAIN-20231206000253-20231206030253-00573.warc.gz\"}"}
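Similarly, the weight-combining rule from the importance-sampling description in the PLRsearch notes above (when a new sample is more than one e-fold more impactful than the data seen so far, both weights are set to their geometric average) translates to a few lines; this is a paraphrase of the prose, not the actual CSIT code:

```python
def combine_weights(log_weight_data, log_weight_new):
    """Combine the accumulated data weight with a new sample's weight (log domain).

    A geometric average of two weights is an arithmetic average of their logs.
    """
    if log_weight_new > log_weight_data + 1.0:   # more than one e-fold more impactful
        avg = (log_weight_data + log_weight_new) / 2.0
        return avg, avg                          # clamp both to the geometric average
    return log_weight_data, log_weight_new
```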
https://gosciencegirls.com/ratios-and-proportions-worksheets/
[ "# 30+ Ratios and Proportions Worksheets For Middle Schoolers\n\nRatios and Proportions for kids - get free worksheets, practice tests and lessons. These worksheets contain exercises in proportion, ratio, and percent, with solutions.\n\nIn today's post, we will discuss what ratios and proportions are, using a wide range of interactive worksheets.\n\nDefine Ratio\n\nA ratio is a math term used to show the relationship between two quantities, telling how many parts of one there are relative to the other. For example: the fraction a/b is written as a:b in ratio format.\n\nExample: Rama has 3 chocolates and 2 lollipops. The ratio of chocolates to lollipops is 3:2 or 3 to 2 or 3/2, and lollipops to chocolates is 2:3 or 2 to 3 or 2/3.\n\nDefine Proportions\n\nProportion is another important math term, which tells us that two given ratios are equal to each other. For example: a bus reaches its destination, 50 km away, in one hour. The same bus takes 2 hours to cover a 100 km distance. Proportion tells us that the ratios of distance to time for 50 km and 100 km are equal: 50 km/1 hr = 100 km/2 hr.\n\n## Ratios and Proportions Worksheets\n\n### Planet Map Scaling Worksheets (Level-1)\n\nThese worksheets on map scaling of planets show students how to prepare a scale model to measure astronomical distances. They make it easy for students to calculate the known sizes of the planets and their distances from the sun.\n\nLearning Tip: In reality, the solar system is a vast world of stars, planets, galaxies, and much more. Students cannot imagine the exact sizes of the solar bodies and their distances from each other. So, the scale mapping models help them to calculate the distances between the solar bodies in terms of days, hours, and years.\n\n### Town Map Scaling Worksheets (Level-3)\n\nTown map scaling is an effective method to measure the distances between the territories of a town. Here are ready-to-use worksheets that help students determine the distances between the cities around the town. The best part is that students can measure the distances using squares and express them in miles.\n\nKey Concepts: Scale models in the form of diagrams are useful for converting between measurement systems.\n\n### Town Map Scaling Worksheets (Level-2)\n\nGrab the after-class practice handouts to help your young learners practice problems on latitudes and longitudes on the map. Besides, students will also learn to find the distances between geographical locations.\n\nHelpful Idea: Students need to revise their previous knowledge of map scaling to convert between measuring systems.\n\n### Town Map Scaling Worksheets (Level-1)\n\nStudents can create their own map scaling models and learn to convert measuring units to determine the distances between cities. This practice and knowledge help them relate the measuring system to a road map in reality.\n\nImportant Impressions: The skill-based worksheets teach what scale measurement is and methods for calculating distance. This knowledge helps learners calculate distances on the map.\n\n### USA Map Scaling Worksheets (Level 2)\n\nDo students want a vacation to the USA? Are they looking for a perfect guide on their routes and daily trips in the USA? 
Check out the worksheets that review the famous tourist places in the US and help them to calculate the distances from one place to another.\n\nNote: These worksheets are great for continuing math and geography lessons after the class.\n\n### Worksheets on Writing Ratios (Level 1)\n\nOur great interactive worksheets are the best resources to practice various word problems on ratios. Students learn how to express two numbers as a ratio using given shapes and objects, in different ways.\n\nKey Concept: Write numbers as ratios in different ways and compare the differences between them.\n\n### Worksheets on Writing Ratios (Level 2)\n\nThe activities in the skill-based worksheets connect students to the real world and let them practice part-to-whole ratios.\n\nLearning Concept: Students must identify the ratios of different numbers in the group, which helps in expressing the math equation in its simplest form.\n\n### Worksheets on Writing Ratios (Level 3)\n\nThe worksheets cover learning tips for writing different numbers in their ratio and fraction forms. Furthermore, students find the relevant information on the given objects to solve the problems on writing ratios.\n\nImportant Information: Students must find the best way to write the ratios because the correct format will help them find the missing number of an equivalent ratio.\n\n### Equivalent Ratios (Fraction Form) Worksheets (Level-1)\n\nEquivalent ratios are ratios that compare two numbers in the same way! These worksheets offer learners a good understanding of solving problems on generating equivalent ratios using only multiplication.\n\nNote: Students must learn that ratios are nothing but an alternate form of fractions, and the ratio problems teach simplifying methods to get correct answers.\n\n### Equivalent Ratios (Fraction Form) Worksheets (Level-2)\n\nImmerse your students in solving problems that compare equivalent ratios to fraction forms using fill-in-the-blank hints.\n\nCommon Misinterpretation: students assume that multiplying both terms of a ratio by the same number changes the relationship; in fact, it produces an equivalent ratio.\n\n### Equivalent Ratios (Fraction Form) Worksheets (Level-3)\n\nStudents of grades 5 to 7 can evaluate their knowledge of ratio problems that cover dividing quantities and recognizing parts of the whole.\n\nLearning Concept: Learn the easiest and quickest way of finding the equivalent ratios in a group of numbers! If students fail to identify equal proportions, they need to focus on learning fractions and divisions.\n\n### Free Worksheet to Practice Ways of Writing Equivalent Ratios (All Levels)\n\nIntroduce these fun worksheets to your student who is struggling to solve equivalent ratio problems! They provide all three possible ways to write the five equivalent ratios for a given single ratio.\n\nHelpful Idea: Students must remember to simplify a ratio before solving for the equivalent ratio of given numbers. However, they can use fundamental concepts of division and multiplication operations to find equivalent ratios.\n\nOur tailor-made proportion worksheets cover proportions using decimals, algebraic expressions, and simple proportion questions! 
Using the drawings in the worksheets, students can find the missing numbers in a proportion.\n\nLearning Tip: Ask students to view the picture patterns from a distance, since a full view makes the problems easier to solve.\n\n### Stained Glass Unit Rate Worksheets (Level 2)\n\nLet us use our interactive worksheets, which allow learners to draw a symmetrical art picture after answering the unit rate problems. Students can choose different colors to shade the images and get different shapes.\n\nLearning Concept: The term rate denotes the amount of one quantity present in another." ]
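A tiny Python sketch of the simplification idea behind the equivalent-ratio worksheets above (illustrative only; it is not taken from the worksheets):

```python
from math import gcd

def simplify_ratio(a, b):
    """Reduce a ratio a:b to simplest form, e.g. 4:6 -> 2:3."""
    g = gcd(a, b)
    return a // g, b // g

def in_proportion(a, b, c, d):
    """Check whether a:b and c:d form a proportion (cross products are equal)."""
    return a * d == b * c

print(simplify_ratio(50, 100))        # (1, 2)
print(in_proportion(50, 1, 100, 2))   # True: the bus example, 50 km/1 hr = 100 km/2 hr
```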
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90261203,"math_prob":0.95754373,"size":7080,"snap":"2023-40-2023-50","text_gpt3_token_len":1372,"char_repetition_ratio":0.15771623,"word_repetition_ratio":0.012646793,"special_character_ratio":0.18658192,"punctuation_ratio":0.08447305,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9966251,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-21T15:13:54Z\",\"WARC-Record-ID\":\"<urn:uuid:851c90ef-b60f-4d70-875f-fcbdc05dc575>\",\"Content-Length\":\"161190\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8b48c744-3c46-4d2b-a584-66d0fdc66851>\",\"WARC-Concurrent-To\":\"<urn:uuid:51506351-1e08-40a7-9e85-370e807f7913>\",\"WARC-IP-Address\":\"159.89.138.36\",\"WARC-Target-URI\":\"https://gosciencegirls.com/ratios-and-proportions-worksheets/\",\"WARC-Payload-Digest\":\"sha1:XNZJ56LLLWNNVJOC6Z4YXF5GDUEONQWL\",\"WARC-Block-Digest\":\"sha1:QEUFMCJXX7LNN5H6KWW6HUDCSDF3EFPN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506028.36_warc_CC-MAIN-20230921141907-20230921171907-00616.warc.gz\"}"}
https://www.storyofmathematics.com/math-calculators/solve-for-x-calculator/
[ "", null, "# Solve for X Calculator + Online Solver With Free Steps\n\nThe Solve For X Calculator is an online tool that is very helpful in finding the values for x in a given mathematical expression. When variables and numbers are combined using various operations, the result is a mathematical expression.\n\nMathematical expressions are very important for fields like physics and engineering. They can represent any shape or provide a way to find the area and volume of a region. As variables are involved, these expressions are solved to get their values, which ultimately helps in finding the solution to various mathematical problems.\n\nThe calculator evaluates the values for variables in each mathematical expression using different methods depending on the type of expression.\n\n## What Is the Solve for X Calculator?\n\nThe Solve For X Calculator is an online calculator that can be used to determine the roots of mathematical equations by solving them at a rate of knots.\n\nMathematical equations come in a wide variety of types. The most commonly used are linear, quadratic, and higher-degree polynomials. There are many techniques to solve these equations.\n\nThe important step is to select a technique to solve the given equation from among the available options. No single method can solve all types of equations. At the same time, a single equation may have multiple solving methods.\n\nTherefore, choosing a suitable technique depends on the nature of the equation. One must have a good understanding of mathematical equations and prior knowledge of different techniques to solve these equations manually.\n\nTo find the solution to such equations, you have to go through a complicated procedure that is an exhaustive and time-intensive task. You might end up with the wrong solution and have to perform the same process again and again.\n\nHere is the solution to all these problems. You can use the Solve For X Calculator, which gives relief from the painful job of solving equations. It is a simple and easy-to-understand tool that you can operate on your device just by using the browser.\n\n## How To Use the Solve for X Calculator?\n\nYou can use the Solve For X Calculator by inserting the input equation for which you want the solution. You don't need to specify the type of equation or its solution technique; the tool will do it for you.\n\nThere is a step-by-step procedure given below to use this calculator. You must follow these steps to get the best results.\n\n### Step 1\n\nInput the target equation. It should be a valid equation having a variable x. Put the equation in the field named Enter the equation. It can be a linear, quadratic, or higher-degree polynomial equation, or a trigonometric function of x.\n\n### Step 2\n\nAfter entering the equation, press the Solve button to get the final answer.\n\n### Result\n\nThe result will be the values for x that satisfy the input equation. The result may vary from problem to problem.\n\nFor mathematical equations, the number of values will be equal to the highest degree in the equation. For example, if we enter a quadratic equation, it will give two roots of x.\n\nOn the other hand, for trigonometric functions, our calculator gives answers in the form of periodical values (multiples). 
For instance, if the function is sin(x), it gives an answer like x = n$\pi$ where n $\in$ Z.\n\n## How Does the Solve for X Calculator Work?\n\nThe Solve for X Calculator works by applying the various equation-solving techniques, depending on the nature of the equation, to find the values of the involved variable.\n\nTherefore, it solves the equation according to its type to find the unknown variable.\n\nThere are different methods to solve the above-mentioned algebraic equations, but we should know about these equations first.\n\n### What Is a Linear Equation?\n\nA linear equation is an equation in which the unknown variable has power equal to one. This equation has only one root, which means that it has only one solution. When represented graphically, it is a straight line.\n\nThe linear equation is of the form:\n\nax + b = 0\n\n### What Is a Quadratic Equation?\n\nQuadratic equations are second-order algebraic equations, which means the highest power of the unknown variable is equal to two. Since the word quad means square, these equations have two solutions for the required variable.\n\nThe standard quadratic equation is given as:\n\n$ax^2 + bx + c = 0$\n\nThe graph of a quadratic equation is parabola-shaped, opening upward or downward depending on the sign of the leading coefficient.\n\n### What Are Higher-order Equations?\n\nHigher-order algebraic equations are equations in which the variable has a power greater than two. Some examples of higher-order equations are cubic ($x^3$), bi-quadratic ($x^4$), etc.\n\nThe standard form of a higher-order equation is:\n\n$ax^n + bx^{n-1} + \dots + c = 0$\n\nAfter discussing the types of equations, let us now discuss the methods to solve these equations. As mentioned above, the working of this calculator depends on these methods.\n\n### Method To Solve Linear Equations\n\nLinear equations are the easiest to solve. Separate all the unknown variables on one side of the equation and the constant terms on the other side by adding or subtracting the constants.\n\nThen combine the constant terms using the appropriate mathematical operations. After this, remove the coefficients of the variables by multiplying or dividing both sides of the equation. Finally, simplify the equation for the desired variable.\n\n### Methods To Solve Quadratic Equations\n\nA quadratic equation has two roots, which can be found by solving it for the unknown variable. There are three different methods to solve these equations.\n\n#### Factorization\n\nFactorization is the simplest method to solve quadratic equations. Factorization consists of several steps. For factorization, we first have to convert the given equation into standard form.\n\n$ax^2 + bx + c = 0$\n\nThen we have to apply the mid-term break method (splitting the middle term), which means breaking the middle term into two terms such that the two terms add up to the original middle term and their product equals the product of the coefficient of $x^2$ and the constant term.\n\nThen, to make the required factors, take out the common term from the available terms. To find the two required roots, simplify these obtained factors.\n\nThere are quadratic equations that are not solvable through factorization. For such types of equations, the Quadratic Formula will be used. To use the Quadratic Formula, first convert the quadratic equation into standard form. 
The Quadratic Formula is given as:\n\n$x= \frac {-b \pm \sqrt{b^2-4ac}}{2a}$\n\nIn the above equation, c is the constant term of the equation, whereas a and b are the coefficients of $x^2$ and x, respectively. To find the roots of the equation, simply put the values into the formula and we will have the answer.\n\n### Method of Completing the Square\n\nThe Method of Completing the Square involves rewriting the equation so that one side is a perfect square, then simplifying it to find the solution of the given equation. To understand this method, consider the standard form of the quadratic equation.\n\nThis method involves some steps. First, divide the whole equation by the coefficient of $x^2$. Separate the constant term by shifting it to the right side of the equation.\n\nNow here is the main concept. We have to complete the square on the left side of the equation, keeping in mind the expansion of $(a+b)^2$. This can be done by adding appropriate terms on both sides of the equation. After completing the square, take the square root of both sides of the equation, then simplify the equation to get the value of the required variable.\n\n### Methods To Solve Higher-order Equations\n\nHigher-order equations have degrees equal to three or more and, depending on the degree, these equations have three or more roots. Solving a higher-order equation is a very tedious task. Here are some methods to solve these equations.\n\n#### Recognizing Factors\n\nTake out the common term from the whole equation to convert it into quadratic form, then solve this quadratic equation by factoring or using the quadratic formula.\n\n#### Synthetic Division\n\nSome higher-order equations are not solvable by recognizing the factors. For these, we use the synthetic division method.\n\nIt is a technique in which a higher-order polynomial is divided by a first-order polynomial using coefficients only; the sign of the divisor's constant term is changed so that subtraction can be replaced by addition, yielding a new lower-order polynomial.\n\n## Solved Examples\n\nThe solved examples from this calculator are demonstrated below:\n\n### Example 1\n\nFind out the roots for the following quadratic equation:\n\n$x^2 - 18x + 45 = 0$\n\n### Solution\n\nAs the input equation is quadratic, the calculator finds two values of x, which are given as:\n\nx1 = 3\n\nx2 = 15\n\n### Example 2\n\nDetermine the values of x for the given 4th-degree polynomial:\n\n$x^4 - 2x^3 + 6x^2 + 8x - 40 = 0$\n\nUse the Solve For X Calculator to find the values.\n\n### Solution\n\nFor the 4th-degree polynomial, we get four values for x.\n\nx_{1,2} = $\pm$ 2\n\nx3 = 1 - 3i\n\nx4 = 1 + 3i\n\n### Example 3\n\nConsider the below-mentioned trigonometric function:\n\nf(x) = 5 + 2sin(x)\n\nFind the values using the calculator above.\n\n### Solution\n\nOnce you press the Solve button you get the following results. For a trigonometric function, it gives periodic values (multiples of 2$\pi$).\n\n$x_1 = 2 \pi n \, - \, \sin^{-1}(\frac{5}{2}) \quad \text{and} \; n \in \mathbb{Z}$\n\n$x_2 = 2 \pi n + \pi \, + \, \sin^{-1}(\frac{5}{2}) \quad \text{and} \; n \in \mathbb{Z}$" ]
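A short Python sketch of the quadratic-formula method described above, reproducing Example 1; this is an illustration, not the calculator's actual code:

```python
import cmath

def solve_quadratic(a, b, c):
    """Return both roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    disc = cmath.sqrt(b * b - 4 * a * c)  # complex sqrt also handles negative discriminants
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(solve_quadratic(1, -18, 45))   # Example 1: roots 15 and 3
print(solve_quadratic(1, -2, 10))    # quadratic factor of Example 2: roots 1+3i and 1-3i
```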
[ null, "https://www.storyofmathematics.com/wp-content/uploads/2022/02/som-header1.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89788306,"math_prob":0.9996051,"size":9536,"snap":"2022-40-2023-06","text_gpt3_token_len":2057,"char_repetition_ratio":0.17331095,"word_repetition_ratio":0.034696408,"special_character_ratio":0.21434563,"punctuation_ratio":0.09019608,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999863,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-02T07:51:41Z\",\"WARC-Record-ID\":\"<urn:uuid:6b70c702-9711-4574-be24-ec360737af4e>\",\"Content-Length\":\"543719\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:13bdc88d-7c4c-4621-9a42-0d67b8172ed7>\",\"WARC-Concurrent-To\":\"<urn:uuid:6673245f-d237-4147-a213-16016a4d0b8a>\",\"WARC-IP-Address\":\"172.67.190.47\",\"WARC-Target-URI\":\"https://www.storyofmathematics.com/math-calculators/solve-for-x-calculator/\",\"WARC-Payload-Digest\":\"sha1:YPH3GTEOXSUY43FTUYIRBF3KK6VLE5IY\",\"WARC-Block-Digest\":\"sha1:2MCPVK5GE37AO75P7IOLAPP7POA4SNVG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337287.87_warc_CC-MAIN-20221002052710-20221002082710-00718.warc.gz\"}"}
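The synthetic-division step described in the higher-order-equations section above can be sketched the same way; the helper below divides a polynomial (coefficients listed from the highest degree down) by (x - r), which is one way to peel a known root off Example 2's polynomial:

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial by (x - r); return (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])   # bring down, multiply by r, add
    return out[:-1], out[-1]

# Example 2's polynomial x^4 - 2x^3 + 6x^2 + 8x - 40, divided by (x - 2):
print(synthetic_division([1, -2, 6, 8, -40], 2))
# ([1, 0, 6, 20], 0): remainder 0 confirms that x = 2 is a root
```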
https://www.acmicpc.net/problem/13745
[ "Time Limit: 2 seconds\nMemory Limit: 512 MB\nSubmissions: 2\nAccepted: 1\nSolvers: 1\nRatio: 50.000%\n\n## Problem\n\nA circus is constructing their tent. The tent is a large piece of canvas held up by a given number of various length tent poles. The tent poles go in specific places on the ground, but which tent pole goes in which place is up to you. You need to choose a placement for the given tent poles that maximizes the total volume under the tent.\n\nThere will always be one central pole at the origin; the other poles are distributed around the periphery. The tent is always drawn tight between the central pole and two adjacent poles on the periphery, forming a perfect triangle. Only the volume under these triangles formed by two adjacent outer poles and the central origin pole counts towards the total volume. Adjacency is by angle around the origin.\n\n## Input\n\nEach input will consist of a single test case. Note that your program may be run multiple times on different inputs. The first line of input contains an integer n (3 ≤ n ≤ 30), which is the number of poles.\n\nThe next n-1 lines each contain two integers x and y (-1,000 ≤ x,y ≤ 1,000), representing a 2D coordinate, giving the locations where the poles may be placed. The locations may not be in order around the origin. After that, there will be n lines, each containing a single integer h (1 ≤ h ≤ 100). These are the heights of the poles.\n\nOne pole must be placed at the origin, and the rest must be placed at the (x,y) coordinates in the input. The (x,y) locations will surround the origin; that is, the polygon formed by the (x,y) locations, in order (by angle around the origin), will strictly include the origin. No two holes will be at the same angle with the origin (i.e. no triangle of roof fabric will have area 0).\n\n## Output\n\nOutput a single floating point number, which is the maximum volume achievable under the tent. Output this number to exactly two decimal places, rounded.\n\n## Sample Input 1\n\n5\n100 100\n-200 -200\n300 -300\n-400 400\n30\n20\n50\n60\n10\n\n## Sample Output 1\n\n8566666.67" ]
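Since the canvas is drawn tight over each triangle, the surface over a panel is the plane through the three pole tops, and the volume under it is the triangle's area times the average of the three heights. A hedged Python sketch of that per-panel computation follows; it evaluates one fixed pole assignment and deliberately does not attempt the hard part of the problem, searching over all assignments:

```python
import math

def panel_volume(p1, p2, h0, h1, h2):
    """Volume under the flat canvas over the triangle (origin, p1, p2)."""
    area = abs(p1[0] * p2[1] - p1[1] * p2[0]) / 2.0  # triangle area via cross product
    return area * (h0 + h1 + h2) / 3.0               # mean height of a plane over a triangle

def total_volume(points, heights):
    """Tent volume for one assignment: heights[0] at the origin, heights[i] at points[i-1]."""
    order = sorted(range(len(points)),
                   key=lambda i: math.atan2(points[i][1], points[i][0]))
    total = 0.0
    for a, b in zip(order, order[1:] + order[:1]):   # adjacent pairs, wrapping around
        total += panel_volume(points[a], points[b],
                              heights[0], heights[a + 1], heights[b + 1])
    return total
```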
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89912623,"math_prob":0.982615,"size":1955,"snap":"2019-43-2019-47","text_gpt3_token_len":485,"char_repetition_ratio":0.13634034,"word_repetition_ratio":0.005509642,"special_character_ratio":0.2690537,"punctuation_ratio":0.10817308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9513565,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-18T14:36:07Z\",\"WARC-Record-ID\":\"<urn:uuid:069c9f60-cd76-476f-a34e-8a2152331de6>\",\"Content-Length\":\"29982\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b7513d27-b1c4-490e-9858-95a26df3190f>\",\"WARC-Concurrent-To\":\"<urn:uuid:526e2f1a-caaa-45b0-96e6-f67d44f48676>\",\"WARC-IP-Address\":\"54.238.205.211\",\"WARC-Target-URI\":\"https://www.acmicpc.net/problem/13745\",\"WARC-Payload-Digest\":\"sha1:OYJOGLNNCTCI4HU5TFAB7JCV2XOCT3HF\",\"WARC-Block-Digest\":\"sha1:JO6M4O62QSLNKZ3AEIQ6CAQ33ILXUYSX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669795.59_warc_CC-MAIN-20191118131311-20191118155311-00119.warc.gz\"}"}
https://blog.simons.berkeley.edu/author/michael-walter/
[ "# Lattice Blog Reduction – Part III: Self-Dual BKZ\n\nThis is the third and last entry in a series of posts about lattice block reduction. See here and here for the first and second part, resp. In this post I will assume you have read the other parts.\n\nIn the first two parts we looked at BKZ and Slide reduction, the former being the oldest and most useful in practice, while the latter achieves the best provable bounds and has the cleaner analysis. While BKZ is a natural generalization of LLL, we have seen that the analysis of LLL does not generalize well to BKZ. One can view Slide reduction as a different generalization of LLL with the goal of also naturally generalizing its analysis. As we mentioned in the first part, there is another analysis technique based on dynamical systems, introduced in [HPS11]. Unfortunately, as applied to BKZ, there are some cumbersome technicalities and the resulting bounds on the output quality are not as tight as we would like them to be (i.e. as for Slide reduction). One can view the algorithm we are considering today – SDBKZ [MW16] – as a generalization of LLL that lends itself much more easily to this dynamical systems analysis: it is simpler, cleaner and yields better results. Since part of the goal of today's post is to demonstrate this very useful analysis technique, SDBKZ is a natural candidate.\n\n# SDBKZ\n\nRecall the two tools we've been relying on in the first two algorithms, SVP and DSVP reduction of projected subblocks:", null, "Effect of a call to the SVP oracle. GSO log norms of the input in black, of the output in red. Note that the sum of the GSO log norms is a constant, so reducing the first vector increases the (average of the) remaining vectors.", null, "Effect of a call to the DSVP oracle. GSO log norms of the input in black, of the output in blue. Note that the sum of the GSO log norms is a constant, so increasing the length of the last vector decreases the (average of the) remaining vectors.\n\nWe will use both of them again today. Like BKZ, a tour of SDBKZ starts by calling the SVP oracle on successive blocks of our basis. However, when we reach the end of the basis, we will not decrease the size of the window, since this is actually quite inconvenient for the analysis. Instead, we will keep the size of the window constant but switch to DSVP reduction, i.e. at the end of the BKZ tour we DSVP reduce the last block. This will locally maximize the last GSO vector in the basis, just as the first SVP call locally minimized the first vector of the basis. Then we will move the window successively backwards, mirroring a BKZ tour, but using DSVP reduction, until we reach the beginning of the basis again. At this point, we switch back to SVP reduction and move the window forward, etc. So SDBKZ runs in forward and backward tours.", null, "SDBKZ in one picture: apply the SVP oracle to the projected blocks from start to finish and when you reach the end, apply the DSVP oracle from finish to start. Repeat.\n\nA nice observation here is that the backward tour can be viewed equivalently as: 1) compute the reversed dual basis (i.e. the dual basis with reversed columns), 2) run a forward tour, 3) compute the primal basis again. The first of these steps is self-inverse: computing the reversed dual basis of the reversed dual basis yields the original primal basis. This means step 3) is actually the same as step 1). 
So in effect, one can view SDBKZ as simply repeating the following two steps: 1) run a forward tour, 2) compute the reversed dual basis. So it doesn't matter if we use the primal or the dual basis as input, the operations of the algorithm are the same. This is why it is called Self-Dual BKZ.\n\nThere is one caveat with this algorithm: it is not clear when one should terminate. In BKZ and Slide reduction one can formulate clear criteria for when the algorithm makes no more progress. In SDBKZ this is not the case, but the analysis will show that we can bound the number of required tours ahead of time.\n\n#### The Analysis\n\nWe will start by analyzing the effect of a forward tour. Let $${\mathbf{B}}$$ be our input basis. The first call to the SVP oracle in a forward tour replaces $${\mathbf{b}}_1$$ with the shortest vector in $${\mathbf{B}}_{[1,k]}$$. This means that the new basis $${\mathbf{B}}'$$ satisfies $$\| {\mathbf{b}}_1' \| \leq \sqrt{\gamma_k} (\prod_{i=1}^k \|{\mathbf{b}}_i^* \|)^{1/k}$$ by Minkowski's bound. Equivalently, this can be written as $\log \| {\mathbf{b}}_1' \| \leq \log \sqrt{\gamma_k} + \frac1k (\sum_{i=1}^k \log \|{\mathbf{b}}_i^* \|).$ So if we consider the $$\log \|{\mathbf{b}}_i^*\|$$ as variables, it seems like linear algebra could be useful here. So far, so good. The second step is more tricky though. We know that the next basis $${\mathbf{B}}''$$, i.e. after the call to the SVP oracle on $${\mathbf{B}}'_{[2,k+1]}$$, satisfies $${\mathbf{b}}_1'' = {\mathbf{b}}_1'$$ and $$\| ({\mathbf{b}}_2'')^* \| \leq \sqrt{\gamma_k} (\prod_{i=2}^{k+1} \|({\mathbf{b}}'_i)^* \|)^{1/k}$$. Unfortunately, we have no control over $$\|({\mathbf{b}}'_i)^* \|$$ for $$i \in \{2,\dots,k\}$$, since we do not know how the SVP oracle in the first call changed these vectors. However, we do know that the lattice $${\mathbf{B}}_{[1,k+1]}$$ did not change in that call. So we can write $\prod_{i=2}^{k+1} \|({\mathbf{b}}'_i)^* \| = \frac{\prod_{i=1}^{k+1} \|{\mathbf{b}}_i^* \|}{\| {\mathbf{b}}'_1 \|}$ and thus we obtain $\log \| ({\mathbf{b}}_2'')^* \| \leq \log \sqrt{\gamma_k} + \frac1k (\sum_{i=1}^{k+1} \log \|{\mathbf{b}}_i^* \| - \log \|{\mathbf{b}}'_1 \|).$ Again, this looks fairly "linear algebraicy", so it could be useful. But there is another issue now: in order to get an inequality purely in the input basis $${\mathbf{B}}$$, we would like to use our inequality for $$\log \|{\mathbf{b}}_1' \|$$ in the one for $$\log \| ({\mathbf{b}}_2'')^* \|$$. But the coefficient of $$\log \|{\mathbf{b}}_1' \|$$ is negative, so we would need a lower bound for $$\log \|{\mathbf{b}}_1' \|$$. Furthermore, we would like to use upper bounds for our variables later, since the analysis of a tour will result in upper bounds and we would like to apply it iteratively. For this, negative coefficients are a problem. So, we need one more modification: we will use a change of variable to fix this. Instead of considering the variables $$\log \| {\mathbf{b}}_i^* \|$$, we let the input variables to our forward tour be $$x_i = \sum_{j < k+i} \log \|{\mathbf{b}}^*_j \|$$ and the output variables $$y_i = \sum_{j \leq i} \log \|({\mathbf{b}}'_j)^* \|$$ for $$i \in [1,\dots,n-k]$$. 
Clearly, we can now write our upper bound on $$\log \|({\mathbf{b}}'_1)^*\|$$ as $y_1 \leq \log \sqrt{\gamma_k} + \frac{x_1}{k}.$ More generally, we have $\|({\mathbf{b}}'_i)^* \| \leq \sqrt{\gamma_k} \left(\frac{\prod_{j=1}^{i+k-1} \|{\mathbf{b}}_j^* \|}{\prod_{j=1}^{i-1} \|({\mathbf{b}}'_j)^* \|} \right)^{\frac1k}$ which means for our variables $$x_i$$ and $$y_i$$ that $y_i = y_{i-1} + \log \| ({\mathbf{b}}'_i)^* \| \leq y_{i-1} + \log \sqrt{\gamma_k} + \frac{x_i - y_{i-1}}{k} = (1-\frac1k) y_{i-1} + \frac1k x_i + \log \sqrt{\gamma_k}.$\n\nNote that we can write each $$y_i$$ in terms of $$x_i$$ and the previous $$y_i$$ with only positive coefficients. So now we can apply induction to write each $$y_i$$ only in terms of the $$x_i$$'s, which shows that $y_i \leq \frac1k \sum_{j=1}^i \omega^{i-j} x_j + (1-\omega^i) k \alpha$ where we simplified notation a little by defining $$\alpha = \log \sqrt{\gamma_k}$$ and $$\omega = 1-\frac1k$$. By collecting the $$x_i$$'s and $$y_i$$'s in a vector each, we have the vectorial inequality ${\mathbf{y}} \leq {\mathbf{A}} {\mathbf{x}} + {\mathbf{b}}$ where ${\mathbf{b}} = \alpha k \left[ \begin{array}{c} 1 - \omega \\ \vdots \\ 1 - \omega^{n-k} \end{array}\right] \qquad\qquad {\mathbf{A}} = \frac1k \left[ \begin{array}{cccc} 1 & & & \\ \omega & 1 & & \\ \vdots & \ddots & \ddots & \\ \omega^{n-k-1} & \cdots & \omega & 1 \end{array} \right].$\n\nNow recall that after a forward tour, SDBKZ computes the reversed dual basis. Given the close relationship between the primal and the dual basis and their GSO, one can show that simply reversing the vector $${\mathbf{y}}$$ will yield the right variables $${\mathbf{x}}'_i$$ to start the next "forward tour" (which is actually a backward tour, but on the dual). I.e. after reversing $${\mathbf{y}}$$, the variables represent the logarithm of the corresponding subdeterminants of the dual basis. (For this we assume for convenience and w.l.o.g. that the lattice has determinant 1; otherwise, there would be a scaling factor involved in this transformation.)\n\nIn summary, the effect on the vector $${\mathbf{x}}$$ of executing once the two steps, 1) forward tour and 2) computing the reversed dual basis, can be described as ${\mathbf{x}}' \leq {\mathbf{R}} {\mathbf{A}} {\mathbf{x}} + {\mathbf{R}} {\mathbf{b}}$ where $${\mathbf{R}}$$ is the reversed identity matrix (i.e. the identity matrix with reversed columns). Iterating the two steps simply means we will be iterating the vectorial inequality above. So analyzing the affine dynamical system ${\mathbf{x}} \mapsto {\mathbf{R}} {\mathbf{A}} {\mathbf{x}} + {\mathbf{R}} {\mathbf{b}}$ will allow us to deduce information about the basis after a certain number of iterations.\n\n#### Small Digression: Affine Dynamical Systems\n\nConsider some dynamical system $${\mathbf{x}} \mapsto {\mathbf{A}} {\mathbf{x}} + {\mathbf{b}}$$ and assume it has exactly one fixed point, i.e. $${\mathbf{x}}^*$$ such that $${\mathbf{A}} {\mathbf{x}}^* + {\mathbf{b}} = {\mathbf{x}}^*$$. We can write any input $${\mathbf{x}}'$$ as $${\mathbf{x}}' = {\mathbf{x}}^* + {\mathbf{e}}$$ for some "error vector" $${\mathbf{e}}$$. When applying the system to it, we get $${\mathbf{x}}' \mapsto {\mathbf{A}} {\mathbf{x}}' + {\mathbf{b}} = {\mathbf{x}}^* + {\mathbf{A}} {\mathbf{e}}$$. So the error vector $${\mathbf{e}}$$ is mapped to $${\mathbf{A}} {\mathbf{e}}$$. 
Applying this $$t$$ times maps $${\mathbf{e}}$$ to $${\mathbf{A}}^t {\mathbf{e}}$$, which means after $$t$$ iterations the error vector has norm $$\|{\mathbf{A}}^t {\mathbf{e}} \|_{p} \leq \|{\mathbf{A}}^t \|_{p} \| {\mathbf{e}} \|_{p}$$ (where $$\| \cdot \|_{p}$$ is the matrix norm induced by the vector $$p$$-norm). If we can show that $$\|{\mathbf{A}} \|_p \leq 1 - \epsilon$$, then $$\|{\mathbf{A}}^t \|_p \leq \|{\mathbf{A}} \|_p^t \leq (1-\epsilon)^t \leq e^{-\epsilon t}$$, so the error vector will decay exponentially in $$t$$ with base $$e^{-\epsilon}$$ and the algorithm converges to the fixed point $${\mathbf{x}}^*$$.\n\nBack to our concrete system above. As we just saw, we can analyze its output quality by computing its fixed point and its running time by computing $$\|{\mathbf{R}} {\mathbf{A}} \|_p$$ for some induced matrix $$p$$-norm. Since this has been a lengthy post already, I hope you'll trust me that our system above has a fixed point $${\mathbf{x}}^*$$, which can be written out explicitly in closed form. As a teaser, its first coordinate is $x^*_1 = \frac{(n-k)k}{k-1} \alpha.$ This means that if the algorithm converges, it will converge to a basis such that $$\sum_{j \leq k}\log \| {\mathbf{b}}_j^*\| \leq \frac{(n-k)k}{k-1} \log \sqrt{\gamma_k}$$. Applying Minkowski's Theorem to the first block $${\mathbf{B}}_{[1,k]}$$ now shows that the shortest vector in this block satisfies $$\lambda_1({\mathbf{B}}_{[1,k]}) \leq \sqrt{\gamma_k}^{\frac{n-1}{k-1}}$$. Note that the next forward tour will find a vector of such length. Recall that we assumed that our lattice has determinant 1, so this is exactly the Hermite factor achieved by Slide reduction, but for arbitrary block size (we do not need to assume that $$k$$ divides $$n$$) and better than what we can achieve for BKZ (even using the same technique). Moreover, the fixed point actually gives us more information: the other coordinates (that I have omitted here) allow us control over all but $$k$$ GSO vectors, and by terminating the algorithm at different positions, it allows us to choose which vectors we want control over.\n\nIt remains to show that the algorithm actually converges and figure out how fast. It is fairly straight-forward to show that $\|{\mathbf{R}} {\mathbf{A}}\|_{\infty} = \|{\mathbf{A}}\|_{\infty} = 1 - \omega^{n-k} \approx 1 - e^{-\frac{n-k}{k}}.$ (Consider the last row of $${\mathbf{A}}$$.) This is always smaller than 1, so the algorithm does indeed converge. For $$k = \Omega(n)$$ this is bounded far enough from 1 such that the system will converge to the fixed point up to an arbitrary constant in a number of SVP calls that is polynomial in $$n$$. Using another change of variable [N16] or considering the relative error instead of the absolute error [MW15], one can show that this also holds for smaller $$k$$.\n\nAs mentioned before, this type of analysis was introduced in [HPS11] and has inspired new ideas even in the heuristic analysis of BKZ. In particular, one can predict the behavior of BKZ by simply running such a dynamical system on typical inputs (and making some heuristic assumptions). This idea has been and is being used extensively in cryptanalysis and in optimizing parameters of state-of-the-art algorithms.\n\nFinally, a few last words on SDBKZ: we have seen that it achieves a good Hermite factor, but what can we say about the approximation factor? 
I actually do not know if the algorithm achieves a good approximation factor and also do not see a good way to analyze it. However, there is a reduction [L86] from achieving approximation factor $$\alpha$$ to achieving Hermite factor $$\sqrt{\alpha}$$. So SDBKZ can be used to achieve approximation factor $$\gamma_k^{\frac{n-1}{k-1}}$$. This is a little unsatisfactory in two ways: 1) the reduction results in a different algorithm, and 2) the bound is a little worse than the factor achieved by Slide reduction, which is $$\gamma_k^{\frac{n-k}{k-1}}$$. On a positive note, a recent work [ALNS20] has shown that, due to the strong bound on the Hermite factor, SDBKZ can be used to generalize Slide reduction to arbitrary block size $$k$$ in a way that achieves the approximation factor $$\gamma_k^{\frac{n-k}{k-1}}$$. Another recent work [ABFKSW20] exploited the fact that SDBKZ allows one to heuristically predict large parts of the basis to achieve better bounds on the running time of the SVP oracle.\n\n• [L86] Lovász. An Algorithmic Theory of Numbers, Graphs and Convexity. 1986\n\n• [HPS11] Hanrot, Pujol, Stehlé. Analyzing blockwise lattice algorithms using dynamical systems. CRYPTO 2011\n\n• [MW15] Micciancio, Walter. Practical, predictable lattice basis reduction – Full Version. http://eprint.iacr.org/2015/1123\n\n• [MW16] Micciancio, Walter. Practical, predictable lattice basis reduction. EUROCRYPT 2016\n\n• [N16] Neumaier. Bounding basis reduction properties. Designs, Codes and Cryptography 2016\n\n• [ALNS20] Aggarwal, Li, Nguyen, Stephens-Davidowitz. Slide Reduction, Revisited—Filling the Gaps in SVP Approximation. CRYPTO 2020\n\n• [ABFKSW20] Albrecht, Bai, Fouque, Kirchner, Stehlé, Wen. Faster Enumeration-based Lattice Reduction: Root Hermite Factor $$k^{(1/(2k))}$$ in Time $$k^{(k/8 + o(k))}$$. CRYPTO 2020\n\n# Lattice Blog Reduction – Part I: BKZ\n\nThis is the first entry in a (planned) series of at least three, potentially four or five, posts about lattice block reduction. The purpose of this series is to give a high level introduction to the most popular algorithms and their analysis, with pointers to the literature for more details. The idea is to start with the obvious – the classic BKZ algorithm. In the next two posts we will look at two lesser known algorithms, which allow us to highlight useful tools in lattice reduction. These three posts will focus on provable results. I have not decided how to proceed from there, but I could see the series being extended to topics involving heuristic analyses, practical considerations, and/or a survey of more exotic algorithms that have been considered in the literature.\n\n#### Target Audience\n\nI will assume that readers of this series are already familiar with basic concepts of lattices, e.g. bases, determinants, successive minima, Minkowski's bound, Gram-Schmidt orthogonalization, dual lattices and dual bases, etc. If any of these concepts seem new to you, there are great resources to familiarize yourself with them first (see e.g. lecture notes by Daniele, Oded, Daniel/Léo). It will probably help if you are familiar with the LLL algorithm (also covered in aforementioned notes), but I'll try to phrase everything so it is understandable even if you aren't.\n\nOk, so let's get started. Before we look at BKZ in particular, first some comments about lattice block reduction in general.\n\n# The Basics\n\n#### The Goal\n\nWhy would anyone use block reduction? There are (at least) two reasons.\n\n1) Block reduction allows you to find short vectors in a lattice. 
Recall that finding the shortest vector in a lattice (i.e. solving SVP) is really hard (as far as we know, this takes at least $$2^{\Omega(n)}$$ time, or even $$n^{\Omega(n)}$$ if you are not willing to also spend exponential amounts of memory). On the other hand, finding somewhat short vectors that are longer than the shortest vector by "only" an exponential factor is really easy (see LLL). So what do you do if you need something that is shorter than what LLL gives you, but you don't have enough time to actually find the shortest vector? (This situation arises practically every time you use lattice reduction for cryptanalysis.) You can try to find something in between and hope that it doesn't take as long. This is where block reduction comes in: it gives you a smooth trade-off between the two settings. It is worth mentioning that when it comes to approximation algorithms, block reduction is essentially the only game in town, i.e. there are, as far as I know, no non-trivial approximation algorithms that cannot be viewed as block reduction. (In fact, this is related to an open problem that Noah stated during the program: to come up with a non-trivial approximation algorithm that does not rely on a subroutine to find the shortest lattice vector in smaller dimensions.) The only exception to this are quantum algorithms that are able to find subexponential approximations in polynomial time in lattices with certain (cryptographically highly relevant) structure (see [CDPR16] and follow-up work).

2) Block reduction actually gives you more than just short vectors. It gives you guarantees on the "quality" of the basis. What do we mean by the quality of the basis? Consider the Gram-Schmidt vectors $${\mathbf{b}}_i^*$$ (GSO vectors) associated to a lattice basis $${\mathbf{B}}$$. What we want is that the lengths of these Gram-Schmidt vectors (the GSO norms) do not drop off too quickly. The reason why this is a useful measure of quality for lattice bases is that it gives a sense of how orthogonal the basis vectors are: conditioned on being bases of the same lattice, the less accentuated the drop-off in the GSO norms, the more orthogonal the basis, and the more useful this basis is for solving several problems in a lattice. In fact, recall that the product of the GSO norms is equal to the determinant of the lattice and thus remains constant. Accordingly, if the GSO norms do not drop off too quickly, the first vector can be shown to be relatively short. So by analyzing the quality of the basis that block reduction achieves, a guarantee on the length of the first vector comes for free (see goal 1)). If you are familiar with the analysis of LLL, this should not come as a surprise to you.

#### Tools

In order to ensure that the GSO norms do not drop off too quickly, it seems useful to be able to reduce them locally. To this end, we will work with projected lattice blocks (this is where the term "block" in block reduction comes from). More formally, given a basis $${\mathbf{B}}$$ we will consider the block $${\mathbf{B}}_{[i,j]}$$ for $$i < j$$ as the basis formed by the basis vectors $${\mathbf{b}}_i, {\mathbf{b}}_{i+1}, \dots, {\mathbf{b}}_{j}$$ projected orthogonally to the first $$i-1$$ basis vectors. So $${\mathbf{B}}_{[i,j]}$$ is a basis for the lattice obtained from the sublattice generated by $${\mathbf{b}}_1, {\mathbf{b}}_{2}, \dots, {\mathbf{b}}_{j}$$ by projecting onto the orthogonal complement of the vectors $${\mathbf{b}}_1, {\mathbf{b}}_{2}, \dots, {\mathbf{b}}_{i-1}$$. Notice that the first vector of $${\mathbf{B}}_{[i,j]}$$ is exactly $${\mathbf{b}}^*_i$$ – the $$i$$-th GSO vector. Another way to view this is to consider the QR-factorization $${\mathbf{B}} = {\mathbf{Q}} {\mathbf{R}}$$, where $${\mathbf{B}}$$ is the matrix whose columns are the basis vectors $${\mathbf{b}}_i$$. Since $${\mathbf{Q}}$$ is orthonormal, it represents a rotation of the lattice and we can consider the lattice generated by the columns of $${\mathbf{R}}$$ instead, which is an upper triangular matrix. For an upper triangular basis, projecting a basis vector orthogonally to the previous basis vectors simply amounts to dropping the first entries of the vector. So considering a projected block $${\mathbf{R}}_{[i,j]}$$ simply means taking the square submatrix of $${\mathbf{R}}$$ consisting of the rows and columns with index $$k$$ satisfying $$i \leq k \leq j$$.
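As a quick illustration of the QR point of view, extracting a projected block is a one-liner over the $${\mathbf{R}}$$ factor (indices are 1-based in the text and 0-based in the code, and the example basis is just a random full-rank matrix):

```python
import numpy as np

def projected_block(B, i, j):
    """Return a basis of B_[i,j] (1-based, inclusive) as columns.

    B has the basis vectors as columns. From the QR factorization
    B = Q R, the upper triangular R generates a rotated copy of the
    lattice, and projecting away the first i-1 basis vectors amounts
    to dropping the first i-1 rows, so the block is a submatrix.
    """
    _, R = np.linalg.qr(B)
    return R[i - 1:j, i - 1:j]

# Example: a random (diagonally dominant, hence full-rank) 5x5 basis.
rng = np.random.default_rng(1)
B = rng.integers(-5, 6, size=(5, 5)).astype(float) + 10 * np.eye(5)
block = projected_block(B, 2, 4)
print(block)               # upper triangular, 3 x 3
print(abs(block[0, 0]))    # = the norm of the 2nd GSO vector b_2*
```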
Now we need a tool that allows us to control these GSO vectors, which we view as the first basis vectors of projected sublattices. For this, we will fall back on algorithms that solve SVP. Recall that this is very expensive, so we will not call this on the basis $${\mathbf{B}}$$ but rather on the projected blocks $${\mathbf{B}}_{[i,j]}$$, where we ensure that the dimension $$k = j-i+1$$ of the lattice generated by this projected block is not too large. In fact, the maximum dimension $$k$$ on which we call the SVP algorithm controls the time/quality trade-off achieved by our block reduction algorithms and is usually called the block size. So we will assume that we have access to such an SVP algorithm. Actually, we will assume something slightly stronger: we will assume access to a subroutine that takes as input the basis $${\mathbf{B}}$$ and indices $$i,j$$ and outputs a basis $${\mathbf{C}}$$ such that

• the lattice generated by the basis remains the same

• the first $$i-1$$ vectors and the last vectors, starting from index $$j+1$$, remain unchanged

• the projected block $${\mathbf{C}}_{[i,j]}$$ is SVP reduced, meaning that $${\mathbf{c}}^*_i$$ is the shortest vector in the lattice generated by $${\mathbf{C}}_{[i,j]}$$. Additionally, if $${\mathbf{B}}_{[i,j]}$$ is already SVP reduced, we assume that the basis $${\mathbf{B}}$$ is left unchanged.

We will call an algorithm that achieves this an SVP oracle. Such an oracle can be implemented given any algorithm that solves SVP (for arbitrary lattices). The technical detail of filling in the gap is left as homework to the reader.

*Figure: Effect of a call to the SVP oracle. GSO log norms of the input in black, of the output in red. Note that the sum of the GSO log norms is a constant, so reducing the first vector increases the (average of the) remaining vectors.*

For the analysis we need to know what such an SVP oracle buys us. This is where Minkowski's theorem comes in: we know that for any $$n$$-dimensional lattice $$\Lambda$$ we have $$\lambda_1(\Lambda) \leq \sqrt{\gamma_n} \det(\Lambda)^{1/n}$$ (where $$\lambda_1(\Lambda)$$ is the length of the shortest vector in $$\Lambda$$ and $$\gamma_n = \Theta(n)$$ is Hermite's constant).
This tells us that after we've applied the SVP oracle to a projected block $${\mathbf{B}}_{[i,i+k-1]}$$, we have $\|{\mathbf{b}}^*_i \| \leq \sqrt{\gamma_{k}} \left(\prod_{j = i}^{i+k-1} \|{\mathbf{b}}_j^* \| \right)^{1/k}.$ Almost all of the analyses of block reduction algorithms, at least in terms of their output quality, rely on this single inequality.

#### Disclaimer

Before we finally get to talk about BKZ, I want to remark that throughout this series I will punt on a technical (but very important) topic: the number of arithmetic operations (outside of the oracle calls) and the size of the numbers. The number of arithmetic operations is usually not a problem, since it will be dominated by the calls to the SVP oracle. We will only compute projections of sublattices corresponding to projected blocks as described above to pass them to the oracle, which can be done efficiently using Gram-Schmidt orthogonalization. The size of the numbers is a more delicate issue. We need to ensure that the required precision for these projections does not explode somehow. This is usually addressed by interleaving the calls to the SVP oracle with calls to LLL. If you are familiar with the LLL algorithm, it should be intuitive that this allows one to control the size of the numbers. For a clean example of how this can be handled, we refer to e.g. [GN08a]. So, in summary, we will measure the running time of our algorithms throughout simply in the number of calls to the SVP oracle.

# BKZ

Schnorr [S87] introduced the concept of BKZ reduction in the 80's as a generalization of LLL. The first version of the BKZ algorithm as we consider it today was proposed by Schnorr and Euchner [SE94] a few years later. With our setup above, the algorithm can be described in a very simple way. Let $${\mathbf{B}}$$ be a basis of an $$n$$-dimensional lattice and $$k$$ be the block size. Recall that this is a parameter that will determine the time/quality trade-off, as we shall see in the analysis. We start by calling the SVP oracle on the first block $${\mathbf{B}}_{[1,k]}$$ of size $$k$$. Once this block is SVP reduced, we shift our attention to the next block $${\mathbf{B}}_{[2,k+1]}$$ and call the oracle on that. Notice that SVP reduction of $${\mathbf{B}}_{[2,k+1]}$$ may change the lattice generated by $${\mathbf{B}}_{[1,k]}$$, and $${\mathbf{b}}_1$$ may not be the shortest vector in the first block anymore, i.e. it can potentially be reduced even further. However, instead of going back and fixing that, we will simply leave this as a problem for "future us". For now, we continue in this fashion until we reach the end of the basis, i.e. until we have called the oracle on $${\mathbf{B}}_{[n-k+1,n]}$$. Note that so far this can be viewed as a constant-size window moving from the start of the basis to the end, where we reduce the first vector of the projected block in this window as much as possible using the oracle. Once we have reached the end of the basis, we start shrinking the window size, i.e. we call the oracle on $${\mathbf{B}}_{[n-k+2,n]}$$, then on $${\mathbf{B}}_{[n-k+3,n]}$$, etc. This whole process is called a BKZ tour.

Now that we have finished a tour, it is time to go back and fix the blocks that are not SVP reduced anymore. We do this simply by running another tour. Again, if the second tour modified the basis, there is no guarantee that all the blocks are SVP reduced. So we simply repeat, and repeat, and … you get the idea: we run as many tours as required until the basis does not change anymore. That's it.
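Spelled out in code, a tour is just a loop of oracle calls. The sketch below assumes a black-box svp_oracle with exactly the interface described above; everything else is bookkeeping:

```python
def bkz(B, k, svp_oracle):
    """Run BKZ tours until the basis stops changing.

    B is a basis of an n-dimensional lattice (in any representation
    that supports equality testing), k is the block size, and
    svp_oracle(B, i, j) returns a basis of the same lattice whose
    projected block [i, j] (1-based, inclusive) is SVP reduced,
    leaving the vectors outside the block untouched; it returns B
    itself if the block is already reduced.
    """
    n = len(B)
    while True:
        changed = False
        # One tour: full-size windows [1, k], [2, k+1], ..., [n-k+1, n],
        # followed by the shrinking windows [n-k+2, n], ..., [n-1, n].
        for i in range(1, n):
            j = min(i + k - 1, n)
            C = svp_oracle(B, i, j)
            if C != B:
                B, changed = C, True
        if not changed:
            return B
```

Note that nothing in this loop obviously bounds the number of tours; as we will see below, that is precisely where the provable analysis of BKZ gets stuck.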
If this looks familiar to you, that's not a coincidence: if we plug in $$k=2$$ as our block size, we obtain (a version of) LLL! So BKZ is a proper generalization of LLL.

*Figure: BKZ in one picture: apply the SVP oracle to the projected blocks from start to finish and when you reach the end, repeat.*

The obvious questions now are: what can we expect from the output? And how long does it take?

#### The Good

We will now take a closer look at the approximation factor achieved by BKZ. If you want to follow this analysis along, you might want to get out pen and paper. Otherwise, feel free to trust me on the calculations (I wouldn't!) and/or jump ahead to the end of this section for the result (no spoilers!). Let's assume for now that the BKZ algorithm terminates. If it does, we know that the projected block $${\mathbf{B}}_{[i, i+k-1]}$$ is SVP reduced for every $$i \in [1,\dots,n-k+1]$$. This means that we have $\|{\mathbf{b}}^*_i \|^k \leq \gamma_{k}^{k/2} \prod_{j = i}^{i+k-1} \|{\mathbf{b}}_j^* \|$ for all these $$n-k+1$$ values of $$i$$. Multiplying all of these inequalities and canceling terms gives the inequality $\|{\mathbf{b}}^*_1 \|^{k-1}\|{\mathbf{b}}^*_2 \|^{k-2} \dots \|{\mathbf{b}}^*_{k-1} \| \leq \gamma_{k}^{\frac{(n-k+1)k}{2}} \|{\mathbf{b}}_{n-k+2}^* \|^{k-1} \|{\mathbf{b}}_{n-k+3}^* \|^{k-2} \dots \|{\mathbf{b}}_{n}^* \|.$ Now we make two more observations: 1) not only is $${\mathbf{B}}_{[1, k]}$$ SVP reduced, but so is $${\mathbf{B}}_{[1, i]}$$ for every $$i < k$$. (Why? Think about it for 2 seconds!) This means we can multiply the inequalities $\|{\mathbf{b}}^*_1 \|^i \leq \gamma_{i}^{i/2} \prod_{j = 1}^{i} \|{\mathbf{b}}_j^* \|$ for all $$i \in [2,k-1]$$ together with the trivial inequality $$\|{\mathbf{b}}^*_1 \| \leq \|{\mathbf{b}}^*_1 \|$$, which gives $\|{\mathbf{b}}^*_1 \|^{\frac{k(k-1)}{2}} \leq \left(\prod_{i = 2}^{k-1} \gamma_{i}^{i/2} \right) \prod_{i = 1}^{k-1} \|{\mathbf{b}}_i^* \|^{k-i}.$ Now we use the fact that $$\gamma_k^k \geq \gamma_i^i$$ for all $$i \leq k$$ (Why? Homework!) and combine this with our long inequality above (note that its left-hand side is exactly the product we just obtained on the right) to get $\|{\mathbf{b}}^*_1 \|^{\frac{k(k-1)}{2}} \leq \gamma_k^{\frac{k(n-1)}{2}} \|{\mathbf{b}}_{n-k+2}^* \|^{k-1} \|{\mathbf{b}}_{n-k+3}^* \|^{k-2} \dots \|{\mathbf{b}}_{n}^* \|.$ (I'm aware that this is a lengthy calculation for a blog post, but we're almost there, so bear with me. It's worth it!)

We now use one final observation, which is a pretty common trick in lattice algorithms: w.l.o.g. we may assume that for some shortest vector $${\mathbf{v}}$$ in our lattice its projection orthogonal to the first $$n-1$$ basis vectors is non-zero (if this projection is zero for all of the shortest vectors, simply drop the last vector from the basis; the result is still BKZ reduced, so use induction). Then we must have $$\lambda_1 = \| {\mathbf{v}} \| \geq \|{\mathbf{b}}_i^* \|$$ for all $$i \in [n-k+2, \dots, n]$$, since otherwise the projected block $${\mathbf{B}}_{[i,n]}$$ would not be SVP reduced. This means we have $$\lambda_1 \geq \max_{i \in [n-k+2, \dots, n]} \|{\mathbf{b}}_i^* \|$$.
This is the final puzzle piece needed for our approximation bound: $\|{\mathbf{b}}^*_1 \| \leq \gamma_{k}^{\frac{n-1}{k-1}} \lambda_1.$ Note that this analysis (dating back to Schnorr [S94]) is reminiscent of the analysis of LLL, and if we plug in $$k=2$$, we get exactly what we'd expect from LLL. Though we do note a gap at the other extreme: if we plug in $$k=n$$, we know that the approximation factor is $$1$$ (we are solving SVP in the entire lattice), but the bound above only yields a factor of $$\gamma_n = \Theta(n)$$.

#### The Bad

Now that we've looked at the output quality of the basis, let's see what we can say about the running time (recall that our focus is on the number of calls to the SVP oracle). The short answer is: not much, and that's very unfortunate. Ideally, we'd want a bound on the number of SVP calls that is polynomial in $$n$$ and $$k$$. This would mean that the overall running time for large $$k$$ is dominated by the running time of the SVP oracle in dimension $$k$$, and the block size would give us exactly the expected trade-off. However, an LLL-style analysis has so far only yielded a bound on the number of tours of $$O(k^n)$$ [HPS11, Appendix]. This is quite bad – for large $$k$$ the number of calls would be the dominating factor in the running time.

#### The Ugly

Recall that the analysis of LLL does not only provide a bound on the approximation factor, but also on the Hermite factor, i.e. on the ratio $$\| {\mathbf{b}}_1\|/\det(\Lambda)^{1/n}$$. Since an LLL-style analysis worked out nicely for the approximation factor of BKZ, it stands to reason that a similar analysis should yield a similar bound for BKZ. By extrapolating from LLL, one could expect a bound along the lines of $$\| {\mathbf{b}}_1\|/\det(\Lambda)^{1/n} \leq \gamma_{k}^{\frac{n}{2k}}$$ (note the square-root improvement w.r.t. the trivial bound obtained from the approximation factor). And, in fact, a bound of $$\gamma_{k}^{\frac{n-1}{2(k-1)} + 1}$$ has been claimed in [GN08b], but without proof (as pointed out in [HPS11]), and it is not clear how one would prove it. ([GN08b] claims that one can use a similar argument as we did for the approximation factor, but I don't see it.)

#### The Rescue

So it seems different techniques are necessary to complete the analysis of BKZ. The work of [HPS11] introduced such a new technique based on the analysis of dynamical systems. This work applied the technique successfully to BKZ, but the analysis is quite involved. What it shows is that one can terminate BKZ after a polynomial number of tours and still get a guarantee on the output quality which is very close to the conjectured bound on the Hermite factor above. (Caveat: technically, [HPS11] only showed this result for a slight variant of BKZ, but the difference to the standard BKZ algorithm lies only in the scope of the interleaved LLL applications, which is something that we glossed over above.) This is in line with experimental studies [SE94, GN08b, MW16], which show that BKZ produces high-quality bases after a few tours already.

We will revisit this approach when considering a different block reduction variant, SDBKZ, where the analysis is much cleaner. As a teaser for the next post though, recall that BKZ can be viewed as a generalization of LLL (which corresponds to BKZ with block size $$k=2$$). Since the analysis of LLL did not carry over entirely to BKZ, one could wonder if there is a different generalization of LLL such that an LLL-style analysis also generalizes naturally.
The answer to this is yes, and we will consider such an algorithm in the next post.

• [CDPR16] Cramer, Ducas, Peikert, Regev. Recovering short generators of principal ideals in cyclotomic rings. EUROCRYPT 2016
• [GN08a] Gama, Nguyen. Finding short lattice vectors within Mordell's inequality. STOC 2008
• [GN08b] Gama, Nguyen. Predicting lattice reduction. EUROCRYPT 2008
• [HPS11] Hanrot, Pujol, Stehlé. Analyzing blockwise lattice algorithms using dynamical systems. CRYPTO 2011
• [MW16] Micciancio, Walter. Practical, predictable lattice basis reduction. EUROCRYPT 2016
• [SE94] Schnorr, Euchner. Lattice basis reduction: Improved practical algorithms and solving subset sum problems. Mathematical Programming 1994
• [S87] Schnorr. A hierarchy of polynomial time lattice basis reduction algorithms. Theoretical Computer Science 1987
• [S94] Schnorr. Block reduced lattice bases and successive minima. Combinatorics, Probability and Computing 1994
https://www.proprofs.com/quiz-school/story.php?title=integer-practice
# Integer Practice

16 Questions | Total Attempts: 7133

This quiz will help assess how well you understand integers. If you can answer these questions without any problem (and without a calculator), you're an integer master! If you're having some trouble, click here to watch videos about how to do the problems, or check back to the website!

1. What is the opposite of -7? (A) 49 (B) -49 (C) 7 (D) -7
2. What is the opposite of 12? (A) -12 (B) 12 (C) 144 (D) -144
3. What is the absolute value of 9? (A) -9 (B) 9 (C) 81 (D) -81
4. What is the absolute value of -2? (A) 2 (B) -2 (C) 4 (D) -4
5. What is the absolute value of -8 plus the absolute value of 3? (A) 11 (B) 5 (C) -11 (D) -5
6. What is -9 + 5? (A) 4 (B) -4 (C) 14 (D) -14
7. What is -17 + 4? (A) 13 (B) -13 (C) 21 (D) -21
8. What is -9 minus 3? (A) 12 (B) -12 (C) 6 (D) -6
9. What is -5 minus -8? (A) -13 (B) 13 (C) -3 (D) 3
10. What is 6 minus 14? (A) 8 (B) -8 (C) 20 (D) -20
11. What is 15 minus -11? (A) 4 (B) -4 (C) 26 (D) -26
12. What is -7 times 3? (A) 4 (B) -4 (C) 21 (D) -21
13. What is -11 times -8? (A) 88 (B) -88 (C) 3 (D) -3
14. What is 9 divided by -3? (A) 3 (B) -3 (C) 27 (D) -27
15. What is -24 divided by 8? (A) 3 (B) -3 (C) 4 (D) -4
16. What is -50 divided by -10? (A) 25 (B) -25 (C) 5 (D) -5
https://rwalk.xyz/sparse-quadratic-programming-with-osqp/
# Sparse quadratic programming with osqp

In the past, I wrote frequently about quadratic programming, especially in R, for example here and here. It's been a while, and at least one great new library has emerged since my last post on quadratic programming — OSQP. OSQP is based on a technique called operator splitting, which offers significant performance improvements over standard interior point algorithms on large, sparse QPs. OSQP is an open source C library, with interfaces for many languages, including R!

Let's take this new library for a spin and present some simple benchmarks.

## Getting started

The R OSQP interface is available on CRAN and installation is simple:

```r
install.packages("osqp")
```

OSQP solves quadratic programs of the form:

\begin{aligned} \underset{x \in \mathbb{R}^n}{\text{Minimize}}: \qquad & q^Tx + \frac{1}{2}x^T P x \\ \text{Subject to:} \qquad & l \leq Ax \leq u \end{aligned}

where $P$ is positive semi-definite.

The quadprog package is the gold standard for solving quadratic programs in R. As an example, consider how we'd solve the quadprog documentation example with OSQP (a sketch is given at the end of this post). Note that quadprog uses a slightly obtuse specification of the functional form for the QP that comes directly out of the original paper on which it is based. OSQP uses the more standard specification from above. To solve a quadprog-form QP in OSQP we just need to negate the dvec and transpose Amat. More substantively, two very important differences between OSQP and quadprog are:

• OSQP can handle sparse system matrices while quadprog cannot. This is quite important, as many practical problems, especially in physics, will exhibit sparsity in $P$ or $A$ or both.
• quadprog can handle only positive definite matrices $P$, while OSQP can handle positive semi-definite $P$. This distinction is important for problems like SVM.

## A Benchmark Problem

I benchmarked OSQP on two problems: a random, dense quadratic program and the circus tent problem I've used in previous posts.

I've packaged my benchmarking code in this Github repo.
Here are the results:

*Figure: benchmark timings on the random dense QP and on the circus tent problem.*

In these experiments, OSQP and ipoptr fare quite similarly and both are significantly faster than quadprog (FORTRAN) and kernlab's ipop (pure R). Though OSQP didn't appear to be significantly faster in these trials, it offers the following significant advantages over ipoptr:

• It is much easier to install OSQP. ipoptr has a number of system dependencies and is quite tricky to get running.
• As a general optimization library, ipoptr has a much more generic interface specification and requires a computation of the Jacobian, the Hessian, and several other inputs. By contrast, OSQP needs only the system matrices for the QP.

Bottom line: if you need to solve large QPs, OSQP is the new way to go!
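Here is that translation spelled out on the example from the quadprog documentation (a sketch: the conversion follows the rules stated above, and the solve_osqp call should be checked against the package documentation):

```r
library(quadprog)
library(osqp)

# quadprog's documentation example:
# minimize -d^T b + 1/2 b^T D b   subject to   A^T b >= b0
Dmat <- diag(3)
dvec <- c(0, 5, 0)
Amat <- matrix(c(-4, -3, 0, 2, 1, 0, 0, -2, 1), 3, 3)
bvec <- c(-8, 2, 0)
sol_qp <- solve.QP(Dmat, dvec, Amat, bvec)

# The same problem in OSQP form:
# minimize q^T x + 1/2 x^T P x   subject to   l <= A x <= u
P <- Dmat
q <- -dvec                 # negate dvec
A <- t(Amat)               # transpose Amat
l <- bvec                  # one-sided constraints:
u <- rep(Inf, length(l))   # no upper bounds
sol_osqp <- solve_osqp(P, q, A, l, u)

print(sol_qp$solution)
print(sol_osqp$x)          # should agree up to solver tolerance
```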
https://graz.pure.elsevier.com/en/publications/kirchhofflove-shell-theory-based-on-tangential-differential-calcu
# Kirchhoff–Love shell theory based on tangential differential calculus

Research output: Contribution to journal › Article › Research › peer-review

### Abstract

The Kirchhoff–Love shell theory is recast in the frame of the tangential differential calculus (TDC), where differential operators on surfaces are formulated based on global, three-dimensional coordinates. As a consequence, there is no need for a parametrization of the shell geometry implying curvilinear surface coordinates as used in the classical shell theory. Therefore, the proposed TDC-based formulation also applies to shell geometries which are zero-isosurfaces as in the level-set method, where no parametrization is available in general. For the discretization, the TDC-based formulation may be used based on surface meshes implying element-wise parametrizations. Then, the results are equivalent to those obtained based on the classical theory. However, it may also be used in recent finite element approaches such as the TraceFEM and CutFEM, where shape functions are generated on a background mesh without any need for a parametrization. Numerical results presented herein are achieved with isogeometric analysis for classical and new benchmark tests. Higher-order convergence rates in the residual errors are achieved when the physical fields are sufficiently smooth.

### Details

Authors: D. Schöllhammer, T. P. Fries
Original language: English
Pages: 113–131 (19 pages)
Journal: Computational mechanics (ISSN 0178-7675), Vol. 64, No. 1
Publisher: Springer Verlag
DOI: 10.1007/s00466-018-1659-5
Published: Jul 2019

### Keywords

• IGA
• Isogeometric analysis
• Manifolds
• Shells
• Tangential differential calculus
• TDC

### ASJC Scopus subject areas

• Computational Mechanics
• Ocean Engineering
• Mechanical Engineering
• Computational Theory and Mathematics
• Computational Mathematics
• Applied Mathematics
https://discuss.codechef.com/t/google-kickstart-round-c-editorial/90076
# Google Kickstart Round C Editorial

Hello Codechef Community, here are the video editorials for today's Google Kickstart Round C 2021. Each and every problem, along with its solution, is explained in detail.

Below are the links to the solutions.

The remaining problems will be updated soon.

Please do watch the videos, like, comment, and subscribe to this channel. Videos will be uploaded for most contest editorials as well. Any suggestions are welcome; do comment on this blog post.

Thank You!!

2 Likes

The solution for Alien Generator is just the number of odd factors of G.

2 Likes

yes, you are correct!

1 Like

Is there any mathematical proof for that?

I have one:
k + (k+1) + (k+2) + … + (k+n) can be written as
= (n+1) * k + (1 + 2 + 3 + … + n)
= (n+1) * k + (n * (n+1))/2
= (n+1) * (k + n/2)
so when n is even, (n+1) is odd and is an odd factor of the answer.

Yes, if we work through that relation, we get this.

yes. All you are doing is writing an AP (arithmetic progression):

A + (A+1) + … + (A+n) = G

The sum on the left is (n+1) * (2A + n) / 2 = G. Rewriting, we get

(n+1) * (2A + n) = 2G

Now observe that on the left exactly one of the two factors can be even, and the other is odd.

So you need to break 2G into factors d1 * d2 such that exactly one is odd. This can be done by removing all powers of 2 from 2G; say the result is G', which is odd. Now divide G' into two factors, say x and y, and put all the powers of 2 you removed from 2G into x.

This method gives you the required (n+1) and (2A + n), and since the number of ways depends on the number of ways of choosing x and y, it is the same as counting the odd factors of G', and hence of G.

2 Likes

Can you make a video on problem C, Rock Paper Scissors?

Solve these problems; that one is really trivial if you know how to solve these standard problems.

1 Like

very interesting problems.

my code
Why is my code giving TLE on submission? It's not even passing testcase 1. What's wrong? Can anyone help?
my approach:
You can represent G = k + (k+1) + (k+2) + (k+3) + … + (k+n), so
(n+1) * k + (n * (n+1))/2 = G
k = (G - (n * (n+1))/2) / (n+1)
The max value of n * (n+1) / 2 will be the nearest triangular number T smaller than G; we can find that, then recover n from that triangular number and iterate from n down to 0.

For 0 <= i <= n, count the number of i for which k is a positive integer; that count is the answer.
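Putting the accepted argument into code, counting odd factors takes a few lines (a generic sketch: the contest's exact input format, bounds, and whether the single-term sum counts are left to the official problem statement):

```python
def count_odd_divisors(G: int) -> int:
    # Only the odd part of G matters: strip all powers of 2.
    while G % 2 == 0:
        G //= 2
    # Count divisors of the (odd) remainder by trial division.
    count, d = 0, 1
    while d * d <= G:
        if G % d == 0:
            count += 1 if d * d == G else 2
        d += 2  # even numbers cannot divide the odd remainder
    return count

for G in (10, 15, 21, 28):
    print(G, count_odd_divisors(G))
```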
http://balance.wiw.org/~jkominek/lojban/9511/msg00334.html
Re: Colourless green ideas

la dilyn cusku di'e

> But I'm not sure I really understand And's argument. I'd like to see some
> explicit examples of meaningless statements using PA.

Any statement involving {li pipaipi}, for example.

> I'd also like to
> see a proposal for the substructure of PA, preferably one that doesn't
> rule out any current texts.

This is not a full proposal, but I posted this in March:

The parser accepts any string of PAs as a number, but not all
combinations are meaningful (at least to me). Here is an attempt
to describe which are the meaningful combinations, written in
bnf-ish notation. (I believe this mostly agrees with the grammar
paper, with the exception of my treatment of {ji'i}.)

(The parser also accepts letterals as parts of numbers. I've
ignored them here.)

<digit> = no|pa|re|ci|vo|mu|xa|ze|bi|so|dau|fei|gai|jau|rei|vai|ki'o

<natural> = <digit> ... | xo | no'o

<sign> = (su'o|su'e|me'i|za'u) & (ma'u|ni'u)

<real0> = <natural> & (pi [<natural>] & (ra'e <natural>)) | pai | te'o | ci'i

<real> = <sign> & <real0> & (ji'i <real0>)

<complex> = <real> & (ka'o <real>)

<fraction> = <complex> & (fi'u <complex>)

<quantifier0> = (su'o|su'e|me'i|za'u|da'a) &
(ro|so'a|so'e|so'i|so'o|so'u|rau|mo'a|du'e|ci'i|<natural>)

<quantifier> = <quantifier0> ...

<fractionator> = pi <quantifier>

<percentage> = (<real> | <quantifier>) ce'i

<general> = <fraction>|<quantifier>|<fractionator>|<percentage>|tu'o

<number> = <general> & (pi'e [<general>]) ...

Notes:

1- {ki'o} is a special digit. The number of digits between {ki'o}s, or
between {ki'o} and {pi}, has to be a multiple of three. If there are
fewer than three explicit digits, then 0s are assumed implicitly as
the higher-order digits. If ki'o is the first digit, a 1 is assumed
in front.
e.g. ki'ore = 1002 ; piciki'o = 0.003

2- Either {fi'u} or {ka'o} has to have higher scope than the other;
otherwise {1fi'u2ka'o3} would be ambiguous between .5+3i and
1/(2+3i). I prefer to give fi'u the higher scope, because that allows
{fi'u <complex>} to be the inverse of <complex>. The other
possibility would not allow an easy way to express inverses, and
things like "2/3 + i4/5" are not really as important as inverses.

3- My interpretation of {ji'i} allows you to say everything that you can
say with the one proposed in the grammar paper, and more. With my
interpretation, <number>ji'i<number> means a number between those
two, or approximately that. So 20ji'i30 would be a number between
20 and 30, but could eventually be 19 or 31; it is approximate, and
the difference between the numbers gives an idea of the uncertainty.

With the interpretation of the paper, 20ji'i30 would be a number
between 2010 and 2099, or something like that. To say that with my
interpretation, I would say 2050ji'i or ji'i2050. The uncertainty is
given by the last significant (non-zero) digit. {ji'i} would only say
that the total number is not exact, not a particular digit. (The
ji'i+ and ji'i- convention for rounding still works.)

Comments about all this are most welcome and solicited.

Jorge
https://docs.telerik.com/devtools/silverlight/controls/raddiagram/extensions/ruler
# Ruler

In order to use the control in your projects you have to add references to the following assemblies:

• Telerik.Windows.Controls
• Telerik.Windows.Controls.Diagrams
• Telerik.Windows.Controls.Diagrams.Extensions
• Telerik.Windows.Controls.Input
• Telerik.Windows.Diagrams.Core

The RadDiagramRuler is used to provide a visual indication of the diagram viewport coordinates. It resides in the Telerik.Windows.Controls.Diagrams.Extensions namespace.

## Overview

The RadDiagramRuler exposes a Diagram property which is used to associate the ruler with a particular diagram instance. The ruler uses this instance to collect the required information about the current viewport (position and size) and the zoom level in the diagram.

```xml
<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="20" />
        <RowDefinition Height="*" />
    </Grid.RowDefinitions>
    <telerik:RadDiagramRuler Diagram="{Binding ElementName=xDiagram}" />
    <telerik:RadDiagram x:Name="xDiagram" Grid.Row="1" />
</Grid>
```

*Figure: a RadDiagramRuler displayed above a RadDiagram.*

Please note that the examples in this tutorial are showcasing the Telerik Windows8 theme. In the Setting a Theme article you can find more information on how to set an application-wide theme.

## Visual Structure

*Figure: visual structure of the RadDiagramRuler.*

The structure of a RadDiagramRuler is pretty simple. It consists of four types of ticks - each used to display a different measurement unit - and a label.

• XSmallTick - the smallest ticks available in the ruler.
• SmallTick
• MediumTick
• LargeTick
• Label - the Label is used to display text describing the measurement unit value.

All of the above visual elements are described by the DiagramScaleItemDefinition class. Essentially the content of the RadDiagramRuler control describes a single scale that displays different scale items. The scale items describe the measurement units and they are calculated based on the zoom level of the associated RadDiagram object. You can easily configure the scale definitions and items to better match your scenario, using the RadDiagramRuler properties.

## Properties

The RadDiagramRuler can be configured through the following set of properties:

• Placement - this property controls the way the labels and ticks are aligned in the ruler. It is of type Dock and therefore it allows you to set the placement as:

  • Left - rotates the ticks and labels and aligns them to the right of the ruler.
  • Top - this is the default placement of the ruler and it aligns the ticks and labels at the bottom of the ruler.
  • Right - rotates the ticks and labels and aligns them to the left of the ruler.
  • Bottom - aligns the ticks and labels on top of the ruler.

• MeasurementUnit - this property controls the measurement units used in the RadDiagramRuler. It is an enumeration of type MeasurementUnit which exposes the following members:

  • Dip - represents device independent pixels. This is the default measurement unit used by the RadDiagramRuler.
  • Cm - represents centimeters.
  • Inch - represents inches.

• ScaleDefinitions - this property is of type DiagramScaleDefinitionCollection and it describes a collection of DiagramScaleDefinition objects. Each DiagramScaleDefinition object describes a scale in the ruler.
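For instance, the first two properties can be set directly in XAML (an illustrative snippet modeled on the Overview example above):

```xml
<!-- Dock the ruler below the diagram and switch to centimeters. -->
<telerik:RadDiagramRuler Diagram="{Binding ElementName=xDiagram}"
                         Placement="Bottom"
                         MeasurementUnit="Cm" />
```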
## Customizing the default Scales

The RadDiagramRuler by default sets the ScaleDefinitions collection to describe a set of predefined scales. In order to change these default settings, you should set the ScaleDefinitions property to an object of type DiagramScaleDefinitionCollection. The collection should describe different scales, each associated with a particular zoom level of the Diagram instance:

```xml
<telerik:RadDiagramRuler Diagram="{Binding ElementName=xDiagram}">
    <telerik:DiagramScaleDefinitionCollection>
        <telerik:DiagramScaleDefinition MaxZoom="0.99">
            <telerik:DiagramScaleItemDefinition Interval="10" Type="SmallTick" />
            <telerik:DiagramScaleItemDefinition Interval="50" Type="MediumTick" />
            <telerik:DiagramScaleItemDefinition Interval="100" Type="LargeTick" />
            <telerik:DiagramScaleItemDefinition Interval="100" Type="Label" />
        </telerik:DiagramScaleDefinition>
        <telerik:DiagramScaleDefinition MaxZoom="1.99">
            <telerik:DiagramScaleItemDefinition Interval="5" Type="XSmallTick" />
            <telerik:DiagramScaleItemDefinition Interval="10" Type="SmallTick" />
            <telerik:DiagramScaleItemDefinition Interval="50" Type="MediumTick" />
            <telerik:DiagramScaleItemDefinition Interval="100" Type="LargeTick" />
            <telerik:DiagramScaleItemDefinition Interval="100" Type="Label" />
        </telerik:DiagramScaleDefinition>
        <telerik:DiagramScaleDefinition>
            <telerik:DiagramScaleItemDefinition Interval="100" Type="LargeTick" />
            <telerik:DiagramScaleItemDefinition Interval="100" Type="Label" />
        </telerik:DiagramScaleDefinition>
    </telerik:DiagramScaleDefinitionCollection>
</telerik:RadDiagramRuler>
```

In the above sample we've created three DiagramScaleDefinitions. The MaxZoom property, of type double, sets the maximum zoom level of the RadDiagram for which a scale will be displayed in the RadDiagramRuler.

The first DiagramScaleDefinition will be displayed in the RadDiagramRuler when the associated Diagram zoom level is under 1. In this case, the ruler will display three types of scale items - small ticks to indicate each 10th pixel, medium ticks to indicate each 50th pixel and large ticks to indicate each 100th pixel. Next to each large tick, a label will be displayed to show the measurement unit value.

The second scale definition will be displayed when the zoom level in the RadDiagram is between 1 and 2 and it adds one more item to the ruler - extra small ticks which indicate every 5th pixel of the RadDiagram viewport.

The sample also demonstrates how to apply a default scale definition (one without the MaxZoom property set) to be used for zoom levels which don't have a manually defined scale definition.

Even though the RadDiagramRuler visual structure contains multiple tick types and labels, in the logical structure of the control all these elements are described by one class - DiagramScaleItemDefinition. Each item allows you to define its recurring interval as well as its type:

• The Interval property is of type double and it represents the recurring interval that controls how often the item is displayed on the RadDiagramRuler surface.
Note that the value of this property is interpreted based on the measurement unit defined in the ruler.

• The Type property is an enumeration that exposes the following members:

  • XSmallTick - represents extra small ticks
  • SmallTick - represents small ticks
  • MediumTick - represents medium ticks
  • LargeTick - represents large ticks
  • Label - represents labels

## Visual Containers

At runtime the RadDiagramRuler generates visual containers for each type of the DiagramScaleItemDefinition objects:

• XSmallTickContainer - a container that visualizes the DiagramScaleItemDefinition of Type XSmallTick.
• SmallTickContainer - a container that visualizes the DiagramScaleItemDefinition of Type SmallTick.
• MediumTickContainer - a container that visualizes the DiagramScaleItemDefinition of Type MediumTick.
• LargeTickContainer - a container that visualizes the DiagramScaleItemDefinition of Type LargeTick.
• LabelContainer - a container that visualizes the DiagramScaleItemDefinition of Type Label.

The described containers are used to control the visual appearance of the ticks and labels. This is why, if you need to customize the default Style or ControlTemplate of a tick or a label, you'll need to create a Style targeting the appropriate visual container.

For example, the default style of the extra small ticks is defined as follows:

```xml
<Style TargetType="telerik:XSmallTickContainer">
    <Setter Property="Template">
        <Setter.Value>
            <ControlTemplate TargetType="telerik:XSmallTickContainer">
                <Rectangle Fill="Black"
                           Width="1"
                           Height="3" />
            </ControlTemplate>
        </Setter.Value>
    </Setter>
</Style>
```
https://dsp.stackexchange.com/questions/79451/kalman-filter-on-sinusoidal-signal
# Kalman Filter on Sinusoidal Signal

Suppose a system follows this equation: $$x(t)=A \cos(\omega t + \phi)+\eta$$

where $$\omega = 2\pi f$$ and $$\eta$$ is a random error.

Using an Extended Kalman Filter, what will the estimated value $$\hat{x}$$ be?

I'm copying my answer to Estimate and Track the Amplitude, Frequency and Phase of a Sine Signal Using a Kalman Filter, which solves a more general problem with example code:

We can build a nonlinear dynamic model in order to estimate the parameters of a sine signal.

Let's model the signal as $$a \sin \left( \phi \right)$$ where $$\phi$$ is the instantaneous phase. So the model could also be written as $$a \sin \left( \omega t + \psi \right)$$.

Then the model can be:

$${a}_{k} \sin \left( {\omega}_{k} {t}_{k} + \psi \right) = {a}_{k} \sin \left( {\phi}_{k} \right)$$

With some math and some Kalman Filter preprocessing you can derive the model with the matrices:

$$\boldsymbol{x}_{k} = \begin{bmatrix} {a}_{k} \\ {\omega}_{k} \\ {\phi}_{k} \end{bmatrix}, F = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & \Delta t & 1 \end{bmatrix}, Q = \begin{bmatrix} \Delta t {\sigma}_{a}^{2} & 0 & 0 \\ 0 & \Delta t {\sigma}_{\omega}^{2} & \frac{ {\Delta t}^{2} {\sigma}_{\omega}^{2}}{2} \\ 0 & \frac{ {\Delta t}^{2} {\sigma}_{\omega}^{2}}{2} & \frac{ {\Delta t}^{3} {\sigma}_{\omega}^{2}}{3} \end{bmatrix}$$

where $${\sigma}_{a}^{2}$$ is the process noise variance of the amplitude and $${\sigma}_{\omega}^{2}$$ is the process noise variance of the instantaneous angular frequency.

The measurement model is a bit more tricky. The measurement model is:

$${z}_{k} = h \left( \boldsymbol{x}_{k} \right) = {a}_{k} \sin \left( {\phi}_{k} \right)$$

Hence the Jacobian is given by $$\frac{\partial h \left( \boldsymbol{x}_{k} \right )}{\partial \boldsymbol{x}_{k}} = \left[ \sin \left( {\phi}_{k} \right), 0, {a}_{k} \cos \left( {\phi}_{k} \right) \right]$$.

Wrapping all this into a Kalman model yields:

*Figure: simulation results of the filter tracking the signal parameters.*

You may see that the model can effectively track changes in the parameters.
There are other alternatives to this dynamic model, but I think this is a simple and effective one.

You may also use the Unscented Kalman Filter. I implemented it at Extended Kalman Filter (EKF) for Non Linear (Coordinate Conversion - Polar to Cartesian) Measurements and Linear Predictions.

The code is available at my StackExchange Signal Processing Q76443 GitHub Repository (look at the SignalProcessing\Q76443 folder).

– Peter K. Dec 6, 2021 at 18:46
• @PeterK., Thanks. There are many ways to derive the model matrix for harmonic signals. – Royi Dec 6, 2021 at 18:50
• The parameter dT is the rate at which new measurements arrive. Of course it has to satisfy Nyquist at the very least. You may use dT = 1 and then everything is in normalized frequency. – Royi Dec 8, 2021 at 11:23
• I am not sure what you mean. Assume you have something which measures a sine signal. By the Sampling Theorem you must sample it at a sampling rate which is larger than 2 times the bandwidth. But in many cases we sample at a much higher rate. In any case, dT must correspond to the actual sampling rate in practice and not the Nyquist rate. – Royi Dec 8, 2021 at 16:47
• @piercus, This is really nice! I wish I knew JS. – Royi Mar 24 at 18:11
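For completeness, here is a minimal sketch of the EKF described above; the noise levels, initial guesses, and simulated signal are arbitrary placeholders:

```python
import numpy as np

dt = 0.01
F = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, dt, 1]])            # a and w persist, phi += w * dt

sa2, sw2 = 1e-4, 1e-3                 # placeholder process variances
Q = np.array([[dt * sa2, 0, 0],
              [0, dt * sw2, dt**2 * sw2 / 2],
              [0, dt**2 * sw2 / 2, dt**3 * sw2 / 3]])
R = 0.1**2                            # measurement noise variance

x = np.array([0.5, 2 * np.pi * 0.5, 0.0])   # initial guess [a, w, phi]
P = np.diag([1.0, 10.0, np.pi**2])

def ekf_step(x, P, z):
    # Predict with the linear state model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the nonlinear measurement z = a * sin(phi) + noise.
    a, _, phi = x
    H = np.array([np.sin(phi), 0.0, a * np.cos(phi)])   # Jacobian of h
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (z - a * np.sin(phi))
    P = (np.eye(3) - np.outer(K, H)) @ P
    return x, P

# Track a 1 Hz, unit-amplitude sine from noisy samples.
rng = np.random.default_rng(0)
for k in range(2000):
    z = np.sin(2 * np.pi * 1.0 * k * dt) + 0.1 * rng.standard_normal()
    x, P = ekf_step(x, P, z)
print(x)   # ideally amplitude near 1 and w near 2*pi, up to noise
```

In practice you would also wrap $$\phi$$ into $$[0, 2\pi)$$ and tune $$Q$$ and $$R$$ for the signal at hand.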
– Peter K.\nDec 6, 2021 at 18:46\n• @PeterK., Thanks. There are many ways to derive the model matrix for harmonic signals.\n– Royi\nDec 6, 2021 at 18:50\n• The parameter dT is the rate at which new measurements arrive. Of course it has to satisfy Nyquist at least. You may use dT = 1 and then everything is in normalized frequency.\n– Royi\nDec 8, 2021 at 11:23\n• I am not sure what you mean. Assume you have a sensor which measures a sine signal. By the sampling theorem you must sample it at a sampling rate which is larger than 2 times the bandwidth. But in many cases we sample at a much higher rate. In any case, dT must be the sampling rate used in practice and not the Nyquist rate.\n– Royi\nDec 8, 2021 at 16:47\n• @piercus, This is really nice! I wish I knew JS.\n– Royi\nMar 24 at 18:11\n\nThis isn't quite what you're asking, because it neglects the amplitude, $$A$$, but it's a relatively straightforward example of application of an extended Kalman filter to the frequency tracking problem. See section 1.2 of this PDF, which I wrote some time ago.\n\nI'd also recommend starting with B. D. O. Anderson and J. B. Moore, Optimal Filtering, Prentice-Hall, Inc., Englewood Cliffs, New Jersey 07632, 1979.", null, "• I like the reference Peter, your PDF is a nice summary Dec 6, 2021 at 13:31\n• @DanBoschen For an unpublished article (in the journal or conference sense), that PDF has received more citations than the IEEE TSP paper for my PhD. Oh well. ;-)\n– Peter K.\nDec 6, 2021 at 15:53\n• I'm giving a callout to Dan Simon's "Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches", Wiley, 2006. I think it'll be clear to someone who's taken a senior-level statistics class and state-space control. The only downside is that after it was published someone came up with a formal way to determine the constellation for an Unscented Kalman filter, and I can't remember who wrote the paper or when (aside from "after 2006"). Dec 6, 2021 at 20:24\n• @TimWescott thanks, Tim! I'll see if I can get that.\n– Peter K.\nDec 6, 2021 at 20:47\n• @TimWescott, Could it be you're talking about the Cubature Kalman Filter? If so, then it is a generalization of the UKF and actually it shows that in most cases the UKF is the optimal constellation.\n– Royi\nDec 8, 2021 at 11:24" ]
[ null, "https://i.stack.imgur.com/LMVVV.png", null, "https://i.stack.imgur.com/vZYNJ.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7975327,"math_prob":0.99648577,"size":3718,"snap":"2023-40-2023-50","text_gpt3_token_len":1070,"char_repetition_ratio":0.11335488,"word_repetition_ratio":0.13957307,"special_character_ratio":0.31441635,"punctuation_ratio":0.090655506,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996296,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,6,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T14:46:49Z\",\"WARC-Record-ID\":\"<urn:uuid:4418d42e-235e-44c1-9adb-9e833937a2e7>\",\"Content-Length\":\"187060\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f51781ef-ca39-4f2c-98b4-d2a0a1e88f1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:1580be31-80fd-41e5-a6d2-9dcc1a822b12>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://dsp.stackexchange.com/questions/79451/kalman-filter-on-sinusoidal-signal\",\"WARC-Payload-Digest\":\"sha1:ZV5QCA7GUN7J3ZTSBZLAH75PFHLMWQLM\",\"WARC-Block-Digest\":\"sha1:GHUEXQ6J6GWINNGVQ32LGMODXC3444J7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679102469.83_warc_CC-MAIN-20231210123756-20231210153756-00057.warc.gz\"}"}
https://de.scribd.com/doc/292246863/Lab-2-docx
[ "You are on page 1of 3\n\nUNIVERSITI TEKNOLOGI MARA\n\nFAKULTI KEJURUTERAAN KIMIA\n\nPROCESS CONTROL AND INSTRUMENTATION\n(CPE642)\n\nNAME\nKARIM\n\n: SYED MUHAMMAD AMIRU B SYED ABDUL\n\nSTUDENT I.D\n\n: 2014490934\n\nEXPERIMENT\n\n: LAB 2\n\nDATE PERFORMED\n\n: 21 SEPTEMBER 2015\n\nSEMESTER\n\n:5\n\nPROGRAM\n\n: EH220\n\nLab 1\nControl loop\n\nA proportional-integral-derivative controller (PID controller) is a control loop feedback\n\nmechanism (controller) commonly used in industrial control systems. A PID controller continuously\ncalculates an \"error value\" as the difference between a measured process variable and a desired setpoint.\nThe controller attempts to minimize the error over time by adjustment of a control variable, such as the\nposition of a control valve, a damper, or the power supplied to a heating element.\nIn this model, P accounts for present values of the error (e.g. if the error is large and positive, the control\noutput will also be large and positive), I accounts for past values of the error (e.g. if the output is not\nsufficient to reduce the size of the error, error will accumulate over time, causing the controller to apply\nstronger output), and D accounts for predicted future values of the error, based on its current rate of\nchange.\nAs a PID controller relies only on the measured process variable, not on knowledge of the underlying\nprocess, it is a broadly useful controller. By tuning the three parameters of the model, one can design a\nPID controller for specific process requirements. The response of the controller can be described in terms\nof the responsiveness of the controller to an error, the degree to which the controller overshoots the\nsetpoint, and the degree of system oscillation. Note that the use of the PID algorithm for control does not\nguarantee optimal control of the system or system stability.\nThere was three experiments have been conducted. The control loop was prepared by setup the process\n\n5\ns +10 s\n2\n\nand 1 respectively. The PID controllers parameters P, I,\n\nand D are 0.05, 0.01, and 0 respectively. Then, simulation parameter was set for 600s. The experiment\nwas run. Then, the graph of experiment 1 was obtained. For experiment 2 and 3 constant value of P and\nD but changing the I value with 0.005 and 0.001 respectively. After that, all experiments were comparing\nto each other. The experiment 1 was overshoots highest and 2 higher than 3. The experiment 3 was going\nstable than 2 and 1. This shows that system depend on value I. When the value I high, the graph shoot\nhigh. The low value of I stable fast where takes less time to stable." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8863633,"math_prob":0.7178866,"size":2596,"snap":"2019-26-2019-30","text_gpt3_token_len":637,"char_repetition_ratio":0.15933642,"word_repetition_ratio":0.013856813,"special_character_ratio":0.21879815,"punctuation_ratio":0.119760476,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9717486,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-16T11:44:44Z\",\"WARC-Record-ID\":\"<urn:uuid:2864779f-cd36-44ef-95e2-4f7c97c7ba85>\",\"Content-Length\":\"161563\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:77be1248-d03c-4839-b2e4-9820f285e03a>\",\"WARC-Concurrent-To\":\"<urn:uuid:87f6e9ed-4e5e-45d8-bf1c-71218eefed83>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://de.scribd.com/doc/292246863/Lab-2-docx\",\"WARC-Payload-Digest\":\"sha1:C5EPGOZTDWUY2GWEWNGW6IZB7FF5YIQQ\",\"WARC-Block-Digest\":\"sha1:UEMUX6TB3TE4XMIRMHBCBMHPEQHRJKZB\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998100.52_warc_CC-MAIN-20190616102719-20190616124719-00491.warc.gz\"}"}
https://ww2.mathworks.cn/help/matlab/matlab_env/web-browsers-and-matlab.html
[ "## Web 浏览器和 MATLAB\n\n### 关于 Web 浏览器和 MATLAB\n\n• MATLAB Web 浏览器\n\n• 帮助浏览器\n\n• 您的系统 Web 浏览器,例如 Mozilla® Firefox®\n\nMATLAB 使用不同的浏览器显示不同类型的信息:\n\n• 网站显示在系统浏览器中。\n\n• 文档显示在帮助浏览器中。\n\n• 其他 HTML 文件显示在 MATLAB Web 浏览器中。例如,在将 MATLAB 程序文件发布到 HTML 后,HTML 文件会显示在 MATLAB Web 浏览器中:", null, "#### MATLAB Web 和帮助浏览器\n\nMATLAB Web 浏览器和帮助浏览器可能不支持特定网站或 HTML 页面使用的所有功能。例如,MATLAB Web 浏览器不显示 `.bmp`(位图)图像文件。对 HTML 页面中的图像文件,请改用 `.gif``.jpeg` 格式。\n\n#### 系统浏览器\n\nMATLAB 使用的系统浏览器取决于您的平台:\n\n• 在 Microsoft® Windows®Apple Macintosh 平台上,MATLAB 使用操作系统的默认浏览器。\n\n• 在 UNIX® 平台上,MATLAB 使用 Mozilla Firefox 浏览器。可以使用 Web 预设项为 MATLAB 指定其他系统浏览器。\n\n### 在 Web 浏览器中显示页面\n\n1. 使用 `web` 命令打开该浏览器。\n\n2. 位置字段中键入指向文件名的 URL 或完整路径。\n\n### 指定用于连接到 Internet 的代理服务器设置\n\n• MATLAB 支持非身份验证、基本、摘要式和 NTLM 代理身份验证类型。\n\n• 如果指定具有基本身份验证的代理,则 MATLAB 仅支持 HTTP 连接,不支持 HTTPS 连接。\n\n• 不能使用脚本指定代理服务器设置。\n\n• 没有自动方法可向 MATLAB 提供您的系统浏览器使用的代理服务器设置。\n\n1. 主页选项卡上的环境部分中,点击", null, "。选择 MATLAB > Web\n\n2. 选中使用代理服务器连接到 Internet 复选框。\n\n3. 代理主机代理端口指定值。\n\n下面是主机的可接受格式的示例:`172.16.10.8``ourproxy`。对于端口,仅输入整数,例如 `22`。如果您不知道代理服务器的这些值,请向您的系统管理员或网络管理员询问相关信息。\n\n如果您的代理服务器需要用户名和密码,请选中使用包含身份验证的代理复选框。然后输入您的代理服务器的用户名和密码。\n\n4. 通过点击按钮来确保您的设置工作正常。\n\nMATLAB 尝试连接到 `https://www.mathworks.com`\n\n• 如果 MATLAB 可以访问 Internet,则会在此按钮旁边显示成功!\n\n• 如果 MATLAB 无法访问 Internet,则会在此按钮旁边显示失败!。更正所输入的值并重试。如果仍然无法连接,请尝试使用在对 MATLAB 许可证进行身份验证时使用的值。\n\n5. 点击以接受更改。\n\n6. 重新启动 MATLAB 以启用更改。\n\n### 为 Linux 平台指定系统浏览器\n\n1. 主页选项卡上的环境部分中,点击", null, "。选择 MATLAB > Web\n\n2. 系统 Web 浏览器下的命令字段中,指定用于打开浏览器的系统命令,例如 `opera` 可打开 Opera Web 浏览器。\n\n3. 选项字段中添加用于打开系统浏览器的选项。例如,`geometry 1064x860` 指定 Opera 的窗口大小。\n\n4. 点击" ]
[ null, "https://ww2.mathworks.cn/help/matlab/matlab_env/web_browser.png", null, "https://ww2.mathworks.cn/help/matlab/matlab_env/help_browser_action_btn.png", null, "https://ww2.mathworks.cn/help/matlab/matlab_env/help_browser_action_btn.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.98451686,"math_prob":0.7647018,"size":1779,"snap":"2022-27-2022-33","text_gpt3_token_len":1186,"char_repetition_ratio":0.17859155,"word_repetition_ratio":0.010695187,"special_character_ratio":0.18156268,"punctuation_ratio":0.04330709,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9675595,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-08T02:15:30Z\",\"WARC-Record-ID\":\"<urn:uuid:da4208ec-a1c9-4035-8ca7-0fba4196549c>\",\"Content-Length\":\"77487\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8a6d08db-a287-4940-afb3-f5bda5036506>\",\"WARC-Concurrent-To\":\"<urn:uuid:c07157b6-804b-438a-81e3-69836c6a3b8c>\",\"WARC-IP-Address\":\"104.92.231.194\",\"WARC-Target-URI\":\"https://ww2.mathworks.cn/help/matlab/matlab_env/web-browsers-and-matlab.html\",\"WARC-Payload-Digest\":\"sha1:QACHP2GRM34GYMEFH7N4MWSTQZQAOBSW\",\"WARC-Block-Digest\":\"sha1:RK4HRDY5CUT6D6A5OXZCYKZGPMXENLFE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570741.21_warc_CC-MAIN-20220808001418-20220808031418-00393.warc.gz\"}"}
https://hol-theorem-prover.org/kananaskis-14-helpdocs/help/Docfiles/HTML/PairRules.PSELECT_INTRO.html
[ "`PSELECT_INTRO : (thm -> thm)`\nSTRUCTURE\nLIBRARY\npair\nSYNOPSIS\nIntroduces an epsilon term.\nDESCRIPTION\nPSELECT_INTRO takes a theorem with an applicative conclusion, say P x, and returns a theorem with the epsilon term \\$@ P in place of the original operand x.\n``` A |- P x\n-------------- PSELECT_INTRO\nA |- P(\\$@ P)\n```\nThe returned theorem asserts that \\$@ P denotes some value at which P holds.\nFAILURE\nFails if the conclusion of the theorem is not an application.\nCOMMENTS\nThis function is exactly the same as SELECT_INTRO, it is duplicated in the pair library for completeness.\nSEEALSO\nHOL  Kananaskis-14" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8921697,"math_prob":0.9127937,"size":282,"snap":"2021-04-2021-17","text_gpt3_token_len":73,"char_repetition_ratio":0.15467626,"word_repetition_ratio":0.0,"special_character_ratio":0.29432625,"punctuation_ratio":0.078431375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99029607,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-21T17:01:38Z\",\"WARC-Record-ID\":\"<urn:uuid:dd16180b-7bc8-4916-a35e-29b73e09717e>\",\"Content-Length\":\"2696\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cf2650b9-ddc8-484c-92fa-b5c1e0f0e3ff>\",\"WARC-Concurrent-To\":\"<urn:uuid:03e9c728-16bd-445c-a55c-f3ec305fd515>\",\"WARC-IP-Address\":\"176.58.119.245\",\"WARC-Target-URI\":\"https://hol-theorem-prover.org/kananaskis-14-helpdocs/help/Docfiles/HTML/PairRules.PSELECT_INTRO.html\",\"WARC-Payload-Digest\":\"sha1:ZRPE4UZFCTAFXQ44HJTP3W5IYS4N4JRJ\",\"WARC-Block-Digest\":\"sha1:TBXXTXFL5PCWJ6JXUNLKKVXXESXFCXQT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039546945.85_warc_CC-MAIN-20210421161025-20210421191025-00533.warc.gz\"}"}
https://genstat.kb.vsni.co.uk/knowledge-base/smoothsp/
[ "1. Home\n2. SMOOTHSPECTRUM procedure\n\n# SMOOTHSPECTRUM procedure\n\nForms smoothed spectrum estimates for univariate time series (G. Tunnicliffe Wilson & S.J. Welham).\n\n### Options\n\n`PRINT` = string token Controls printed output (`description`); default `desc` Method to be used for smoothing (`lagwindow`, `direct`, `YuleWalker`, `exactautoregressive`); default `lagw` Frequency domain bandwidth for the smoothing window; must be set if `METHOD=dire` Specifies the cut-off lag (i.e. the maximum lag of autocovariance used in the spectrum calculation) for `METHOD=lagw`, or the order of the autoregression for `METHOD=Yule` or `exac`; if this option is not set then `BANDWIDTH` must be set, and will be used to determine an appropriate value of `MAXLAG` Determines the number of frequency divisions into which the range [0.0, 0.5] is divided for calculating the spectrum; the default is chosen so that the bandwidth covers about four intervals Probability value used for confidence limits; default 0.9 The proportion of data to be tapered (applied for all settings of `METHOD` except `exac`); default 0.0 The shape of the trapezium window (a value of 1.0 specifies a rectangular, and 0.0 a triangular window); default 0.5 Whether to plot with a log-transformed Y-axis (`yes`, `no`); default `no` Whether to plot with a log-transformed X-axis (`yes`, `no`); default `no` What sort of graphics to use (`lineprinter`, `highresolution`); default `high` Window to be used for plotting; default 1 The two pens to be used (after being defined appropriately) for drawing the plots; default `!(1,2)`\n\n### Parameters\n\n`SERIES` = variates The series for which the spectrum is to be calculated Scalar specifying that the first N units of the series are to be used, or a variate specifying the first and last units of the series to be used Saves the smoothed spectrum; need not be declared in advance, but will be set up as a variate of the appropriate length within the procedure Scalar to save the multiplier of the spectrum used to calculate the lower limit, or a variate to save the values of the lower limit Scalar to save the multiplier of the spectrum used to calculate the upper limit, or a variate to save the values of the upper limit Saves the frequency values at which the spectrum is calculated\n\n### Description\n\n`SMOOTHSPECTRUM` calculates smoothed spectrum estimates for a univariate time series. The series is specified in a variate by the `SERIES` parameter. The parameter `LENGTH` can be used to specify that only part of the series is to be used: if `LENGTH` is set to a scalar `N`, then only units 1…`N` are used; alternatively, it can define a sub-series by being set to a variate of length 2 holding the numbers of the first and last units to be used. The spectrum can be saved by the `SPECTRUM` parameter. The method to be used for the smoothing is controlled by the `METHOD` option, with settings `lagwindow` for Parzen lag window smoothing, `direct` for frequency domain smoothing using a trapezium window, `YuleWalker` for autoregressive spectrum estimation based on Yule-Walker coefficients, and `exactautoregressive` for autoregressive estimation based on exact likelihood estimation of the coefficients.\n\nFor frequency domain smoothing (`METHOD=direct`), option `BANDWIDTH` specifies the bandwidth of the smoothing window and option `SHAPE` the shape of the trapezium window. 
The `BANDWIDTH` option is also used to determine an appropriate default for the `MAXLAG` option if this is not specified with other `METHOD` settings: for `METHOD=lagwindow`, `MAXLAG` specifies the cut-off lag (i.e. the maximum lag of autocovariance used in the spectrum calculation), while for `METHOD=YuleWalker` or `exactautoregressive`, it specifies the order of the autoregression.\n\nThe `DIVISIONS` option can define the number of frequency divisions into which the range [0.0, 0.5] is divided for calculating the spectrum; if this is omitted a default is chosen so that the bandwidth covers about four intervals. The frequency values at which the spectrum is calculated can be saved, in a variate, by the `FREQUENCY` parameter. The proportion of data to be tapered (relevant to all settings of `METHOD` except `exactautoregressive`) is controlled by the `TAPER` option; by default there is no tapering.\n\nThe `LOWER` and `UPPER` parameters can be set to scalars to save the scaling factor used to calculate the upper and lower bounds, or to variates to save the upper and lower bounds for the `SPECTRUM` variate.\n\nPrinted output can be suppressed by setting the option `PRINT=*`; by default, `PRINT=description`. The `PROBABILITY` option indicates the probability value used for confidence limits; 0.9 is used as the default.\n\nThe procedure will also plot the spectrum: option `GRAPHICS` controls whether this is done for line printer or on a high-resolution device. With high-resolution graphics, the plot will be produced using the current settings of the window specified by the `WINDOW` option; by default `WINDOW`=1. The `FRAME` directive can be used to set the attributes of the window prior to calling the procedure. The `PENS` option controls which pens are to be used for the plots; the attributes of these pens are modified within the procedure. By default pens 1 and 2 are used, but these can be changed by setting option `PENS` to a variate of length 2 containing the numbers of the two pens required. Options `YLOG` and `XLOG` allow the X- and Y-axes to be represented on a logarithmic scale.\n\nOptions: `PRINT`, `METHOD`, `BANDWIDTH`, `MAXLAG`, `DIVISIONS`, `PROBABILITY`, `TAPER`, `SHAPE`, `YLOG`, `XLOG`, `GRAPHICS`, `WINDOW`, `PENS`.\n\nParameters: `SERIES`, `LENGTH`, `SPECTRUM`, `LOWER`, `UPPER`, `FREQUENCY`.\n\n### Method\n\nA cosine bell window is used for the taper, with lag window and direct spectral smoothing carried out essentially as described in Bloomfield (1976). The autoregressive spectrum estimation uses the standard Yule-Walker equations, as presented for example in Box & Jenkins (1970). These are optionally refined by exact maximum likelihood estimation. The theoretical spectrum of the autoregressive model is then calculated. The error limits are calculated using scaled chi-square distributions. These are quite good for the case of lag window and direct smoothing, but in small samples are only very approximate for the autoregressive estimates. The series values are mean corrected before spectrum estimation, but not trend corrected.
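Purely as an added illustration (the procedure itself is implemented in Genstat, not Python), the sketch below shows the kind of calculation `METHOD=lagwindow` performs: mean-correct the series, compute autocovariances up to the cut-off lag, apply a Parzen lag window, and evaluate the spectrum on a grid of frequencies in [0.0, 0.5]. The function names and normalization are my own choices:

```python
import numpy as np

def parzen(u):
    """Parzen lag window on 0 <= |u| <= 1."""
    u = abs(u)
    if u <= 0.5:
        return 1.0 - 6.0*u**2 + 6.0*u**3
    return 2.0*(1.0 - u)**3 if u <= 1.0 else 0.0

def lag_window_spectrum(x, maxlag, divisions=100):
    x = np.asarray(x, float) - np.mean(x)     # series is mean-corrected first
    n = len(x)
    acov = [np.dot(x[:n-k], x[k:]) / n for k in range(maxlag + 1)]
    freqs = np.linspace(0.0, 0.5, divisions + 1)
    spec = np.array([acov[0] + 2.0*sum(parzen(k/maxlag)*acov[k]*np.cos(2.0*np.pi*f*k)
                                       for k in range(1, maxlag + 1)) for f in freqs])
    return freqs, spec                        # spectrum up to a normalization constant
```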
### Action with `RESTRICT`\n\nInput and output structures must not be restricted; restriction of the input series to a contiguous set of units can be achieved by use of the `LENGTH` parameter.\n\nBloomfield, P. (1976). Fourier Analysis of Time Series: an Introduction. Wiley, New York.\n\nBox, G.E.P. & Jenkins, G.M. (1970). Time Series Analysis, Forecasting and Control. Holden-Day, San Francisco.\n\nDirective: `FOURIER`.\n\nProcedures: `DFOURIER`, `MCROSSPECTRUM`, `PERIODTEST`, `PREWHITEN`, `REPPERIODOGRAM`.\n\nCommands for: Time series.\n\n### Example\n\n```CAPTION 'SMOOTHSPECTRUM example',\\\n!t('Data from D.F. Andrews & A.M. Herzberg (1985),',\\\n'Data: a collection of problems from many fields for the',\\\n'student and research worker, Springer-Verlag: New York, p. 369.');\\\nSTYLE=meta,plain\nVARIATE [VALUES =\\\n8.075, 7.819, 7.366, 8.113, 7.380, 7.134, 7.222, 7.768,\\\n7.386, 6.965, 6.478, 8.105, 8.060, 7.684, 7.580, 7.093,\\\n6.129, 6.026, 6.679, 7.414, 7.112, 7.762, 7.645, 8.639,\\\n7.667, 8.080, 6.678, 6.739, 5.569, 5.049, 5.642, 6.808,\\\n6.636, 8.241, 7.968, 8.044, 7.791, 7.024, 6.102, 6.053,\\\n5.941, 5.386, 5.811, 6.716, 6.923, 6.939, 6.705, 6.914] Profit\nSMOOTHSPECTRUM [GRAPHICS=line; BANDWIDTH=0.05] Profit\nSMOOTHSPECTRUM [GRAPHICS=line; METHOD=exactautoregressive; MAXLAG=8] Profit\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7492613,"math_prob":0.86937535,"size":7898,"snap":"2020-24-2020-29","text_gpt3_token_len":2086,"char_repetition_ratio":0.13516594,"word_repetition_ratio":0.107959025,"special_character_ratio":0.24765764,"punctuation_ratio":0.17388535,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9821999,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-06T11:59:14Z\",\"WARC-Record-ID\":\"<urn:uuid:99ef8d51-ec65-4a30-a87a-8dc4940593ab>\",\"Content-Length\":\"39460\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:563f7dc9-e3c6-4e52-bcb9-ec57ef10dfcb>\",\"WARC-Concurrent-To\":\"<urn:uuid:e02de46d-37e7-4b39-af85-ed974de318cb>\",\"WARC-IP-Address\":\"35.197.246.150\",\"WARC-Target-URI\":\"https://genstat.kb.vsni.co.uk/knowledge-base/smoothsp/\",\"WARC-Payload-Digest\":\"sha1:6XTK32NTSCDFAP2EDXXGV7MKXEKZNPWX\",\"WARC-Block-Digest\":\"sha1:PKF2CUBD6K42OQYUO5UZRQBOG7POVYBG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348513230.90_warc_CC-MAIN-20200606093706-20200606123706-00357.warc.gz\"}"}
https://www.tutorialspoint.com/sql/sql-numeric-functions-atan.htm
[ "# SQL - ATAN() Function\n\nThe SQL ATAN() function calculates the arc tangent of a numeric value. This function accepts a single numeric value as an argument. The domain of the argument must be (-∞, ∞) i.e. the set of all real numbers and the range of the result will be [-π/2, π/2]. If the value passed to this function doesn't lie in the given domain, it raises an error.\n\nAs we already know, a tangent function in trigonometry is defined as the ratio of the sine function to the cosine function; but the arc tangent function is defined as its inverse, where the domain of tangent function becomes the range of arc tangent function and vice-versa.\n\n### Syntax\n\nFollowing is the syntax of SQL ATAN() function −\n\n```ATAN(number)\n```\n\nwhere, number is the value for which we need to calculate the arc tangent.\n\n### Example\n\nIf we pass a positive value as an argument, then this function returns it's equivalent arc tangent value which is positive as shown below −\n\n```SELECT ATAN(0.8)\nAS Arctan_Value\n```\n\nWhen we run above program, it produces following result −\n\n```+-------------------+\n| Arctan_Value |\n+-------------------+\n| 0.674740942223553 |\n+-------------------+\n```\n\n### Example\n\nIf we pass a negative value as an argument to this function, then this function returns it's equivalent arc tangent value which is negative as shown below −\n\n```SELECT ATAN(-0.5)\nAS Arctan_Value\n```\n\nWhile executing the above code we get the following output −\n\n```+--------------------+\n| Arctan_Value |\n+--------------------+\n| -0.463647609000806 |\n+--------------------+\n```\n\n### Example\n\nIf the value passed is NULL, this function returns NULL.\n\n```SELECT ATAN(NULL)\nAS Arctan_Value\n```\n\nFollowing is an output of the above code −\n\n```+-------------------+\n| Arctan_Value |\n+-------------------+\n| NULL |\n+-------------------+\n```\n\n### Example\n\nThe arc tangent value of 0 is 0.\n\n```SELECT ATAN(0)\nAS Arctan_Value\n```\n\nOutput of the above code is as follows −\n\n```+-------------------+\n| Arctan_Value |\n+-------------------+\n| 0 |\n+-------------------+\n```\n\n### Example\n\nWhen we calculate the arc tangent value of a number and pass the result to the tan() function, the final result is approximately equivalent to the original number.\n\n```SELECT ATAN(1)\nAS Arctan_Value\n```\n\nThe result produced is as shown below −\n\n```+-------------------+\n| Arctan_Value |\n+-------------------+\n| 0.785398163397448 |\n+-------------------+\n```\n\nNow, we are trying to pass the value retrieved by the arc tangent to the tan() function −\n\n```SELECT TAN(0.785398163397448)\nAS tan_Value\n```\n\nThe result obtained is as follows −\n\n```+--------------------+\n| tan_Value |\n+--------------------+\n| 0.999999999999999 |\n+--------------------+\n```\n\n### Example\n\nAssume we have created a table with name CUSTOMERS as shown below −\n\n```create table CUSTOMERS(ID INT NOT NULL,\nNAME VARCHAR(20) NOT NULL,\nAGE INT NOT NULL,\nSALARY DECIMAL(18, 2),\nPRIMARY KEY(ID));\nCommands completed successfully.\n```\n\nLet us insert r values into it −\n\n```insert INTO CUSTOMERS VALUES(1, 'Ramesh', 32, 'Ahmedabad', 2000.00);\ninsert INTO CUSTOMERS VALUES(2, 'Khilan', 25, 'Delhi', 1500.00);\ninsert INTO CUSTOMERS VALUES(3, 'kaushik', 23, 'Kota', 2000.00);\ninsert INTO CUSTOMERS VALUES(4, 'Chaitali', 25, 'Mumbai', 6500.00);\ninsert INTO CUSTOMERS VALUES(5, 'Hardik', 27, 'Bhopal', 8500.00);\ninsert INTO CUSTOMERS VALUES(6, 'Komal', 22, 'MP', 4500.00);\ninsert INTO CUSTOMERS 
VALUES(7, 'Muffy', 24, 'Indore', 10000.00);\n```\n\nFollowing query calculates the arc tangent value of the salary of all the customers −\n\n```SELECT NAME,AGE,SALARY,\nATAN(SALARY)\nAS arc_salarytan\nFROM customers;\n```\n\nThe result produced is as follows −\n\n```+----------+-----+----------+--------------------+\n| NAME | AGE | SALARY | arc_salarytan |\n+----------+-----+----------+--------------------+\n| Ramesh | 32 | 2000.00 | 1.57029632683656 |\n| Khilan | 25 | 1500.00 | 1.570129660227 |\n| kaushik | 23 | 2000.00 | 1.57029632683656 |\n| Chaitali | 25 | 6500.00 | 1.57064248064226 |\n| Hardik | 27 | 8500.00 | 1.57067867973662 |\n| Komal | 22 | 4500.00 | 1.57057410457633 |\n| Muffy | 24 | 10000.00 | 1.57069632679523 |\n+----------+-----+----------+--------------------+\n```\nsql-numeric-functions.htm", null, "" ]
[ null, "https://www.tutorialspoint.com/static/images/library-cta.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5978982,"math_prob":0.96796584,"size":6883,"snap":"2023-40-2023-50","text_gpt3_token_len":1918,"char_repetition_ratio":0.26297426,"word_repetition_ratio":0.044871796,"special_character_ratio":0.3854424,"punctuation_ratio":0.095147476,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9975239,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T07:34:23Z\",\"WARC-Record-ID\":\"<urn:uuid:82c143b6-f8ab-470f-809e-ef38a2a2082e>\",\"Content-Length\":\"64175\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:67447108-fdea-412a-99c1-b34243ae9d8b>\",\"WARC-Concurrent-To\":\"<urn:uuid:f636d5a4-e8b1-4974-a56e-64bbac678925>\",\"WARC-IP-Address\":\"192.229.210.176\",\"WARC-Target-URI\":\"https://www.tutorialspoint.com/sql/sql-numeric-functions-atan.htm\",\"WARC-Payload-Digest\":\"sha1:3SRSFWQQZAC46MRKFHYMUY43HHY3XZLC\",\"WARC-Block-Digest\":\"sha1:YLL76ZREUILJBDPNK4DYCY6I54VAYPWW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510603.89_warc_CC-MAIN-20230930050118-20230930080118-00475.warc.gz\"}"}
https://hes.davidson.k12.nc.us/apps/pages/index.jsp?uREC_ID=792733&type=d&pREC_ID=1318960
[ "", null, "# Math Help Videos\n\nPLACE VALUE-TEN TIMES\nUse the video below with help on understanding how a digit in one place represents ten times the place to its right. NBT. 1\n\nWriting Numbers in Expanded Form\nWatch the following video to see how to write a number using expanded form. NBT. 2\n\nRounding Rock Song\nThis video is a catchy way to help with rounding!\n\nThis video will show an effective way to add with regrouping.\n\nMulti-digit Subtraction with Regrouping\nThis video will show an effective way to subtract with regrouping.\n\nMultiplying 3 Digit by 1 Digit (Standard Algorithm)\n\nMultiplication with the Area Model\n\nMultiplication with the Box Method (similar to area model)\nMost students use this strategy.\n\nPartial Quotient Method of Division\nThis video shows how to do the partial quotient method, students use their X1, X2, and X5 factors in class." ]
[ null, "https://counter.edlio.com/count.jsp", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7956115,"math_prob":0.5530562,"size":1242,"snap":"2019-26-2019-30","text_gpt3_token_len":284,"char_repetition_ratio":0.13570274,"word_repetition_ratio":0.089108914,"special_character_ratio":0.2004831,"punctuation_ratio":0.058558557,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9675294,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-23T18:24:09Z\",\"WARC-Record-ID\":\"<urn:uuid:9da37082-8a84-488b-a256-8330a5a56e23>\",\"Content-Length\":\"53352\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e604f1d-262e-4405-80a6-24932e3d6575>\",\"WARC-Concurrent-To\":\"<urn:uuid:a8497a79-cc0a-482c-beeb-3139f56bc6b5>\",\"WARC-IP-Address\":\"151.101.248.80\",\"WARC-Target-URI\":\"https://hes.davidson.k12.nc.us/apps/pages/index.jsp?uREC_ID=792733&type=d&pREC_ID=1318960\",\"WARC-Payload-Digest\":\"sha1:BIKYQS42JT3NQVVVCZAHYTXX67TXOYGN\",\"WARC-Block-Digest\":\"sha1:66ZZSBDFLZ6E3K5SEZQYU4WMOWKC7LZ3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195529481.73_warc_CC-MAIN-20190723172209-20190723194209-00217.warc.gz\"}"}
http://gardenchiro.com/ogilvy/greens/tiamat/80325357a8d0a32a33-270-counterclockwise-about-vertex-x
[ "C) Reflecting over the y-axis and then reflecting over the line y = x. Ungraded . Triangle RST is rotated 270 counterclockwise about the origin. Enter the email address you signed up with and we'll email you a reset link. 3. If we add up the above two angles we will get 270 degree angle. Draw an image of a polygon with vertices A(2,2), B(4,3), C(4,5) and D(1,4). Which transformations could have taken place? When we rotate a figure of 180 degrees about the origin either in the clockwise or counterclockwise direction, each point of the given figure has to be changed from (x, y) to (-x, -y) and graph the rotated figure. The side can be categorized into terminal sides and initial sides (or vertical sides) as shown in the image below. An angle v is in standard positionif the vertex of the angle is at the origin and the initial arm lies along the positive x-axis. arrow_forward. If the triangle is rotated 270 clockwise, find the vertices of the rotated figure and graph. The right one is rotated clockwise whereas the left triangle is rotated counterclockwise. And we can label this new vertex as prime. Here are some helpful Math I videos that Mrs. 270 (x,y) (y,-x (The pointer on the timing chain cover will be lined up with the timing mark on the pulley) 3 00 USD Add to Cart Consider the following graph 360 degree rotation Ai Dungeon Dragon Model Free Download 360 degree rotation.\n\n. Report an issue . Batteries, electrical parts, brakes, engines, oil, skis, and more. Each coordinate (x,y) is changed to (-y,-x) This is our general formula for rotating the figure 270 degrees about the origin; Notes. C. 360 o Rotation. 6.6.1 Find the parametric representations of a cylinder, a cone, and a sphere.\n\nas returned by pos()). Dilation . Mixed Transformations. x a number or a pair/vector of numbers. ; 6.6.2 Describe the surface integral of a scalar-valued function over a parametric surface. clockwise 270 counterclockwise 90 counterclockwise 360 clockwise 90 To perform a geometry rotation, we first need to know the point of rotation, the angle of rotation, and a direction (either clockwise or counterclockwise). SURVEY . When rotating a point 90 degrees counterclockwise about the origin our point A (x,y) becomes A' (-y,x). In other words, switch x and y and make y negative. 90 Counterclockwise Rotation 180 Degree Rotation -300. Analyze the graph below and answer the question that follows. 270 counterclockwise rotation: (x,y) becomes (y,-x) As you can see, our two experiments follow these rules. . 120 seconds .\n\nWhat is the ordered pair for point A? turtle.goto (x, y=None) turtle.setpos (x, y=None) turtle.setposition (x, y=None) Parameters. Shop snowmobile parts online. Clockwise vs. 270 CCW b. To make 270 degree rotation, we have to extend the existing angle by 147 degree. We could label the image of vertex as prime. Learning Objectives. 180 counterclockwise about vertex X 2. Here, YOA = 270 degree. close. If your object is straight and you rotate to 270 degrees (radians: pi + pi/2 = 3pi/2) it will rotate counter-clockwise because it's the smallest possible animation Tell whether the blue gure is a rotation of the red gure about the origin the image of QA under a counterclockwise rotation with center Q With the rotating 270 arm, is ideal for detail work with concentrated Rotating a shape 270 degrees is the same as rotating it 90 degrees clockwise. 270 123 = 147 degree. Write the rule for a 90 clockwise rotation and a 270 counter-clockwise rotation. 
A triangle ABC is shown with vertex A on ordered pair negative 4, negative 1, vertex B on ordered pair negative 3, negative 1 and vertex C on ordered pair negative 4, negative 4. Solution : Step 1 : Trace triangle XYZ and the x- and y-axes onto a piece of paper. clockwise 270. If y is None, x must be a pair of coordinates or a Vec2D (e.g. So I move this, to x equals negative two, y equals three. The opposite direction of clockwise is anticlockwise or counterclockwise (both words mean the same). i.e. The result is AR'S'T', as shown below. Enter the email address you signed up with and we'll email you a reset link. Q. Rotate the point (-5,8) around the origin 270 degrees counterclockwise. There is a neat 'trick' to doing these kinds of transformations. The diagram below shows vector v. Draw Angles - Plotting Program. Check all that apply. When we rotate a figure of 270 degree counterclockwise, each point of the given figure has to be changed Rotate text by 90, 180 or 270 degrees, mirror text, transpose text Clockwise motion (abbreviated CW) proceeds in the same direction as a clock's hands: from the top to the right, then down and then to the left, and back up to the top 1 lbs ft) at 2 Visualize a capital \"N Visualize a $$If we want to counterclockwise rotate a figure 270, or clockwise rotate a figure 90, we multiply the vertex matrix with$$\\begin{bmatrix} 0& 1\\\\ -1& 0 \\end{bmatrix}$$. Rotate the image 270clockwise around the origin Find the coordinates of the vertices of each figure given the rotation: 8. CPhill Oct 17, 2014 If the angle measure is positive, then the angle has been created by a counterclockwise rotation from the initial to the terminal side. Since streamlines can have the velocity direction either counterclockwise or clockwise and still have the same general form with the same equations as shown above the sign convention is that a counterclockwise rotation is positive, and a clockwise rotation is negative. From figure we can see the co-ordinate of the vertex are X (1,3) ,Y (5,2) and Z (3,-1) Now we are given that triangle XYZ is rotated 270 counterclockwise. Which of the following rotations is equivalent 270 clockwise rotation about the origin? Dilation . It is an online Geometry tool requires number of sides and side length of a regular polygon. Now that we know how to rotate a point, lets look at rotating a figure on the coordinate grid. If a point is rotated by 270 degree around the origin in clockwise direction, the coordinates of final point is given by following method. If (h, k) is the initial point, then after 270 degree clockwise rotation, the location of final point is (-k, h) The formula is similar to 90 degree anticlockwise rotation. 270: 270 degrees counter-clockwise or 90 clockwise. The rule for a rotation by 90 Counterclockwise about the origin is (x,y)(y,x) The rule for a rotation by 90 Clockwise about the origin is (x,y)(y,x) So when we rotate it 180 degrees, it will still have an -coordinate value of three, this time of positive three, and at a distance of seven in the -axis, this time at negative seven. As a last step, we rotate the triangles 90, each around its top vertex. 270 o Counter clockwise Rotation. The result is AR'S'T', as shown below. 
E) A rotation of 270 degrees counterclockwise about the origin, and then a reflection across the x-axis October 30, 2014 Geometry Notes Day 3 The angle of rotation is the number of degrees the figure rotates The diagram would show positive angles labeled in radians and degrees And the uploaded video size is up to 100MB And the uploaded video size is up to Here, YOA = 270 degree. A 270-degree clockwise rotation about the origin is equivalent to a 90-degree counterclockwise rotation about the origin. Which of the following rotations is equivalent 270 clockwise rotation about the origin? Step-by-step explanation. A triangle ABC is shown with vertex A on ordered pair . Use the rule (x, y) (x 2, y 4) to graph the image of the rectangle. Using this polygon calculator, we will understand methods Question 9. State the image of the point. 5 clockwise rotations is 1980. Triangle RST is rotated 270 counterclockwise about the origin.$$ If we want to counterclockwise rotate a figure 270, or clockwise rotate a figure 90, we multiply the vertex matrix with $$\\begin{bmatrix} 0& 1\\\\ -1& 0 \\end{bmatrix}$$. Now take a divider and set its length equal to OA. If I choose \"\"Auto(Sensor)\", it will rotate the text 90 degrees, 180 degrees or 270 degrees. A rotation is also the same as a composition of reflections over intersecting lines. As a last step, we rotate the triangles 90 o, each around its top vertex.The right one is rotated clockwise whereas the left triangle is rotated counterclockwise. ho (a) The arrows below show that the coordinates on the left are mapped to the coordinates on the right. 270 counterclockwise about vertex X. different rotations. A. heart outlined. So I move this, to x equals negative two, y equals three. Rotate the triangle XYZ 270 counterclockwise about the origin. 5 clockwise rotations is 1980.\n\nWhen you reflect a point across the y-axis, the sign of the x-coordinate changes, and the sign of the y-coordinate remains the same. This is due to what is called the Coriolis effect.\n\nTo translate ( p, q) to the origin, we subtract p from x -coordinates and q from y -coordinates, and to What is the angle of rotation of the minute hand of a clock moving from 6:10 to 7:00? Draw a of One second, denoted 100, is dened as 1 60 minute, or equivalently, 1 3600 degree Bikeman Performance Owner If we add up the above two angles we will get 270 degree angle. It The compass is numbered clockwise with north as 0, east 90, south 180, and west 270 To rotate counterclockwise about the origin, multiply the vertex matrix by the given matrix USING ROTATIONS You can rotate a fi gure more than 360 translation 3 units left, dilation of scale factor 2 centered at the origin D s divided into 360 equal arcs and each arc is one degree East Side (x, y) After Rotation. GHI is rotated 270 counterclockwise about the origin to form G'H'I. Move turtle to an absolute position. A) reflecting over the x-axis and the y-axis. 300 seconds . The following diagrams show rotation of 90, 180 and 270 about the origin. You can rotate either clockwise or counter-clockwise. Step 2 : Let X\", Y\" and Z\" be the vertices of the rotated figure. To make 270 degree rotation, we have to extend the existing angle by 147 degree. answer choices . Most of the problems youll get will involve mixed transformations, or multiple transformations, and we do need to worry about the order in which we perform the transformations. The image is rotated 90 degrees counterclockwise each time. See this process in action by watching this tutorial! 
The demonstration below that shows you how to easily perform the common Rotations (ie rotation by 90, 180, or rotation by 270) . about the origin. Now take a divider and set its length equal to OA. Rotation . Triangle WXY, with a vertex X at (3, 0), is rotated clockwise 270 about the origin.\n\nRotation Examples. A cat vertex . Conventionally, shapes are rotated counterclockwise on a coordinate plane" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86831504,"math_prob":0.9839327,"size":10668,"snap":"2022-40-2023-06","text_gpt3_token_len":2591,"char_repetition_ratio":0.20948987,"word_repetition_ratio":0.13984881,"special_character_ratio":0.24934383,"punctuation_ratio":0.12465374,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99803114,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-27T17:05:41Z\",\"WARC-Record-ID\":\"<urn:uuid:cf211927-fb5a-43b5-a803-7a5413f9ad55>\",\"Content-Length\":\"23845\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:68e53355-2cc8-4b12-ab7d-48afae2ca020>\",\"WARC-Concurrent-To\":\"<urn:uuid:1450983b-8c3d-4618-b8c5-d75ca1e77467>\",\"WARC-IP-Address\":\"72.47.228.176\",\"WARC-Target-URI\":\"http://gardenchiro.com/ogilvy/greens/tiamat/80325357a8d0a32a33-270-counterclockwise-about-vertex-x\",\"WARC-Payload-Digest\":\"sha1:B3XVZ7YXDDV6KYUPRDDLNGFKEC7G4MUM\",\"WARC-Block-Digest\":\"sha1:PD3ROLDXSQT4HTPTSLEZYBU6LT2AREJ2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335054.79_warc_CC-MAIN-20220927162620-20220927192620-00158.warc.gz\"}"}
https://www.weiy.city/tag/cgal/
[ "## CGAL: Useful Properties Of Matrix\n\nWe defined three different matrix A(m x n), B(n x r), and C(n x r), they satisfy distributivity of multiplication over addtion: A(B+C) = AB + AC.\n\n## Simple Demos About Using CGAL\n\nHere are simple demos show how to use CGAL, the Computational Geometry Algorithms Library. I use cmake to manage these small projects.\n\nE-book Converter" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88648933,"math_prob":0.9469963,"size":272,"snap":"2022-27-2022-33","text_gpt3_token_len":62,"char_repetition_ratio":0.11567164,"word_repetition_ratio":0.0,"special_character_ratio":0.19117647,"punctuation_ratio":0.094339624,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9954263,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-13T15:50:07Z\",\"WARC-Record-ID\":\"<urn:uuid:c4360bc5-d981-4dd5-b1dc-aeeda07a045c>\",\"Content-Length\":\"159485\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7618624a-831c-4818-b6d0-97a0b5838421>\",\"WARC-Concurrent-To\":\"<urn:uuid:f84fe41a-f825-4ece-bef0-b37bf64820de>\",\"WARC-IP-Address\":\"104.21.6.59\",\"WARC-Target-URI\":\"https://www.weiy.city/tag/cgal/\",\"WARC-Payload-Digest\":\"sha1:3WRH2Z5UKGNOXPXXXW6NGSJSMWTHG7FJ\",\"WARC-Block-Digest\":\"sha1:BULOKHTZEEUA4OCNHJQQQGEAADKOIYVZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571959.66_warc_CC-MAIN-20220813142020-20220813172020-00624.warc.gz\"}"}
http://mypalate.co/unifix-cube-worksheets/unifix-cube-printable-worksheets-cards-patterns-and-math-centers-on-pattern/
[ "# Unifix Cube Printable Worksheets Cards Patterns And Math Centers On Pattern\n\nMay 12, 2018 | By mypalate | Filed in: .", null, "unifix cube printable worksheets cards patterns and math centers on pattern.\n\nsnap cube addition worksheet unifix math worksheets counting cubes the best image collection pattern,free printable unifix cubes worksheets cube subtraction patterns with a visual spatial math challenge activities,kindergarten measurement worksheets math activities with animals for unifix cube snap addition worksheet,unifix cube worksheets activities snap math measurement for k pattern worksheet,teaching numbers lessons teach free printable unifix cubes worksheets cube math snap addition worksheet,measuring with cubes lesson plans worksheets unifix free cube patterns snap math,unifix cube subtraction worksheets snap measurement math recursive patterns worksheet number pattern grade,unifix cube addition worksheets math measuring with cubes worksheet gallery for kids patterns,unifix cube printable worksheets addition snap measurement nice template gift resume ideas,unifix cube printable worksheets free cubes pattern primary teaching resources and snap math.\n\n← Previous Next →" ]
[ null, "http://mypalate.co/wp-content/uploads/2018/05/unifix-cube-printable-worksheets-cards-patterns-and-math-centers-on-pattern.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7377354,"math_prob":0.7479918,"size":1046,"snap":"2019-13-2019-22","text_gpt3_token_len":166,"char_repetition_ratio":0.2591171,"word_repetition_ratio":0.015625,"special_character_ratio":0.13575526,"punctuation_ratio":0.07236842,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9950834,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-20T09:21:40Z\",\"WARC-Record-ID\":\"<urn:uuid:6e6f383a-6eda-4e90-80df-7df7a8b2ce78>\",\"Content-Length\":\"40563\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:be34a554-2af2-4944-8273-26e27014be60>\",\"WARC-Concurrent-To\":\"<urn:uuid:407c90c7-7216-4ee5-9d0d-2a1c6f06965e>\",\"WARC-IP-Address\":\"104.31.92.104\",\"WARC-Target-URI\":\"http://mypalate.co/unifix-cube-worksheets/unifix-cube-printable-worksheets-cards-patterns-and-math-centers-on-pattern/\",\"WARC-Payload-Digest\":\"sha1:OXCVF7JA37M2PB7H34BLBDFUQEG73IXO\",\"WARC-Block-Digest\":\"sha1:SUBFPFIKVW7WLV36RL7P6HEVFRFNUAED\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202324.5_warc_CC-MAIN-20190320085116-20190320111116-00014.warc.gz\"}"}
https://www.graduate.technion.ac.il/Theses/Abstracts.asp?Id=24056
[ "Ph.D Student Wiechman Gil Theoretical and Practical Aspects on the Performance versus Complexity Tradeoff for LDPC-Based Codes Department of Electrical Engineering Professor Igal Sason", null, "Abstract\n\nError-correcting codes employing iterative decoding algorithms are now considered state of the art in the field of low-complexity coding techniques. The graphical representation of these codes is used to describe their algebraic structure, and also enables a unified description of their iterative decoding algorithms. These codes closely approach the capacity limit of many standard communication channels under iterative decoding. The outstanding performance of these codes motivates an information-theoretic study of the tradeoff between their performance and complexity, as well as a study of the ultimate limitations of finite-length codes.\n\nWe begin our study of the performance versus complexity tradeoff by deriving bounds on the achievable rates and the graphical complexity of binary linear block codes under ML decoding. These bounds are derived under the assumption that the transmission takes place over memoryless binary-input output-symmetric (MBIOS) channels. The bounds are particularized to low-density parity-check (LDPC) codes, and apply to the tradeoff between achievable rates and decoding complexity per iteration under message-passing decoding. Further, we generalize the bounds to the case where the codes are transmitted over a set of independent parallel MBIOS channels. The latter results are applied to ensembles of punctured LDPC codes.\n\nSecondly, we consider the number of iterations required for successful iterative message-passing decoding of graph-based codes. The communication (this time) is assumed to take place over the binary erasure channel, and the analysis refers to the asymptotic case where the block length tends to infinity. We derive rigorous lower bounds on the number of iterations required to achieve a given bit erasure probability under standard message-passing decoding. For several modern code families, we show that the number of iterations scales at least like the inverse of the multiplicative gap to capacity; this matches a previous conjecture and experimental results.\n\nFinally, we consider sphere-packing lower bounds on the decoding error probability of optimal block codes. We focus on modifications to the 1967 sphere-packing (SP67) bounding technique, making it more attractive for finite-length block codes. We derive a new sphere-packing bound targeted at finite-length block codes transmitted over symmetric memoryless channels. This part of the work facilitates the assessment of the fundamental limitations of finite-length block codes, and is therefore very applicative for the evaluation of practical coded communication systems." ]
[ null, "https://www.graduate.technion.ac.il/Theses/Images\\pdficon_large.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8816792,"math_prob":0.8783923,"size":2841,"snap":"2019-43-2019-47","text_gpt3_token_len":515,"char_repetition_ratio":0.12407473,"word_repetition_ratio":0.005037783,"special_character_ratio":0.1657867,"punctuation_ratio":0.06458797,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95848036,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-20T08:38:11Z\",\"WARC-Record-ID\":\"<urn:uuid:c3b53ed9-a362-4264-8282-8e8974c34c2d>\",\"Content-Length\":\"8646\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c23bff46-3382-49ad-8e04-6dc012304c7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:1aa98283-6e89-40ce-8955-acbb431c7906>\",\"WARC-IP-Address\":\"132.69.246.237\",\"WARC-Target-URI\":\"https://www.graduate.technion.ac.il/Theses/Abstracts.asp?Id=24056\",\"WARC-Payload-Digest\":\"sha1:5UJDKNBTVGQ5K23LTCTHMSS64BGAMJT5\",\"WARC-Block-Digest\":\"sha1:WG2XNND4OORGOGIIPMDZBKZVBKKWMVTQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986705411.60_warc_CC-MAIN-20191020081806-20191020105306-00440.warc.gz\"}"}
http://gemres.org/abstracts/townsend.html
[ "Back to GEMRES Homepage\n\n## Computational Colloquium Series\n\n```Speaker: Dr. Alex Townsend, Department of Mathematics, MIT\nPlace : ILC 302\nDate/Time: Thursday, Feb. 4, 3:00-4:00pm\n\nTitle: Continuous analogues of matrix factorizations\n\nAbstract: A fundamental idea in matrix linear algebra is the factorization\nof a matrix into simpler matrices, such as orthogonal, tridiagonal, and\ntriangular. In this talk we extend this idea to a continuous setting,\nasking: \"What are the continuous analogues of matrix factorizations?\" The\nanswer we develop involves functions of two variables, an iterative\nvariant of Gaussian elimination, and sufficient conditions for\nconvergence. This leads to a test for non-negative definite kernels, a\ncontinuous definition of a triangular quasimatrix (a matrix whose columns\nare functions), and a fresh perspective on a classic subject. This is\nwork is with Nick Trefethen.\n\nhttp://math.mit.edu/~ajt/\n```\n\nLast modified: Sun Jan 31 21:12:28 MST 2016" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86235017,"math_prob":0.7344015,"size":960,"snap":"2022-40-2023-06","text_gpt3_token_len":217,"char_repetition_ratio":0.111924686,"word_repetition_ratio":0.0,"special_character_ratio":0.21354167,"punctuation_ratio":0.18131869,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9572858,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-25T14:50:46Z\",\"WARC-Record-ID\":\"<urn:uuid:b1a35745-7b1f-4452-a223-1d3892905948>\",\"Content-Length\":\"1717\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e6e7a948-9841-4ff4-950e-fb25542b23f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:9ef8091a-7bc6-419e-be94-bfd842f355fd>\",\"WARC-IP-Address\":\"107.180.21.15\",\"WARC-Target-URI\":\"http://gemres.org/abstracts/townsend.html\",\"WARC-Payload-Digest\":\"sha1:UKWB4SE52S6RCIBJFEB3BYYBUZALE5UK\",\"WARC-Block-Digest\":\"sha1:K5XNPQUMGXX7M2RJNUKJVKRSQERJYYO7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334579.46_warc_CC-MAIN-20220925132046-20220925162046-00586.warc.gz\"}"}
https://helpingwithmath.com/multiplication-as-comparison/
[ "Home » Math Theory » Operations of Numbers » Multiplication as Comparison\n\n# Multiplication as Comparison\n\n## What is multiplication?\n\nThe process of finding out the product between two or more numbers is called multiplication. The result thus obtained is called the product. Suppose, you bought 6 pens on one day and 6 pens on the next day. Total pens you bought are now 2 times 6 or 6 + 6 = 12.\n\nThis can also be written as 2 x 6 = 12\n\nNote the symbol used for multiplication. The symbol (x) is generally used to represent multiplication. Other common symbols that are used for multiplication are the asterisk (*) and dot (.)\n\n## What is meant by multiplication as a comparison?\n\nMultiplication, by comparison, means that you compare two quantities in such a manner that when one quantity is multiplied by a specific number the other quantity is produced. Let us take an example. Consider the sentence, “ Harry is twice as tall as Peter “. This means that if the age of Peter is “ p “ then, the age of Harry will be two times that of Peter, i.e. 2p.\n\nLet us take another example.\n\nExample\n\nThe length of the bench is 30 m. it is thrice as long as the length of the Stool. What is the length of the stool?\n\nSolution\n\nWe have been given that, the length of the bench is 30 m. it is thrice as long as the length of the Stool. Now, let the length of the stool be  “ s “. Therefore, we have,\n\n3 s = 30\n\n⇒ s = 10 m\n\nHence, the length of the stool is 30 m\n\n## Representing Multiplication table as multiplication by comparison\n\nLet us now understand how to read some of the multiplication tables in the form of multiplication by comparison\n\n## Advantages of learning multiplication by comparison\n\nFollowing are the advantages of learning multiplication by comparison –\n\n1. Understanding multiplication by comparing quantities allows the students to understand the concepts as they raise the level of difficulties in multiplication.\n2. It is easier to relate how numbers are multiplied with each other when projected as comparative quantities.\n\n## Key Facts and Summary\n\n1. The process of finding out the product between two or more numbers is called multiplication. The result thus obtained is called the product.\n2. The symbol (x) is generally used to represent multiplication. Other common symbols that are used for multiplication are the asterisk (*) and dot (.)\n3. Multiplication, by comparison, means that you compare two quantities in such a manner that when one quantity is multiplied by a specific number the other quantity is produced.\n4. Understanding multiplication by comparing quantities allows the students to understand the concepts as they raise the level of difficulties in multiplication.\n5. It is easier to relate how numbers are multiplied with each other when projected as comparative quantities." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.99811083,"math_prob":0.99054605,"size":12224,"snap":"2022-40-2023-06","text_gpt3_token_len":4029,"char_repetition_ratio":0.37405893,"word_repetition_ratio":0.48074278,"special_character_ratio":0.36673757,"punctuation_ratio":0.016369589,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99885285,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T00:25:47Z\",\"WARC-Record-ID\":\"<urn:uuid:cc8a11a7-a4db-4dac-9d1e-9ddcb1c48f6f>\",\"Content-Length\":\"160036\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6beefb2f-2ab5-4e3f-8999-bc6376c0a777>\",\"WARC-Concurrent-To\":\"<urn:uuid:b79d1cbb-e537-4e4e-9943-055b4c935c3c>\",\"WARC-IP-Address\":\"66.42.117.110\",\"WARC-Target-URI\":\"https://helpingwithmath.com/multiplication-as-comparison/\",\"WARC-Payload-Digest\":\"sha1:FO2YGZOST767GSFDESVRSLAWNPWFVB2P\",\"WARC-Block-Digest\":\"sha1:DRHJTAG2SILKTMAWENZXD4CKNJTK2H3I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500294.64_warc_CC-MAIN-20230205224620-20230206014620-00661.warc.gz\"}"}
https://socratic.org/questions/find-all-possible-functions-with-the-given-derivative-1-if-y-sin-7t-then-y-2-if-
[ "# How to find all possible functions with the given derivative ? If y′=sin(7t), then y = If y′=cos(t/7), then y = If y′=sin(7t)+cos(t/7), then y =\n\nSep 3, 2015\n\n$y = - \\frac{1}{7} \\cos \\left(7 t\\right) + C$, $y = \\frac{1}{7} \\sin \\left(7 t\\right) + C$ $y = - \\frac{1}{7} \\cos \\left(7 t\\right) + \\frac{1}{7} \\sin \\left(7 t\\right) + C$\n\n#### Explanation:\n\nWe know that the derivative (w.r.t. $t$) of $\\cos t$ is $- \\sin t$\n\nUsing the chain rule, the derivative (w.r.t.$t$) of $\\cos u$ is $- \\sin u \\frac{\\mathrm{du}}{\\mathrm{dt}}$\n\nSo $\\frac{d}{\\mathrm{dt}} \\left(\\cos \\left(7 t\\right)\\right) = - \\sin \\left(7 t\\right) \\cdot 7$\n\nIf we multiply by the constant $- \\frac{1}{7}$ before differentiating, we will multiply the derivative by the same constant:\n\n$\\frac{d}{\\mathrm{dt}} \\left(- \\frac{1}{7} \\cos \\left(7 t\\right)\\right) = - \\frac{1}{7} \\left(- \\sin \\left(7 t\\right) \\cdot 7\\right) = \\sin \\left(7 t\\right)$\n\nSo one possible function with derivative $y ' = \\sin \\left(7 t\\right)$ is\n\n$y = - \\frac{1}{7} \\cos \\left(7 t\\right)$\n\nBut there are others.\n\n$y = - \\frac{1}{7} \\cos \\left(7 t\\right) + 7$,\n$y = - \\frac{1}{7} \\cos \\left(7 t\\right) - 5$,\n$y = - \\frac{1}{7} \\cos \\left(7 t\\right) + \\frac{\\pi}{\\sqrt{17}}$\n\nIndeed, For any (every) constant $C$, the derivative of $y = - \\frac{1}{7} \\cos \\left(7 t\\right) + C$ is the desired derivative.\n\nNot only that, but due to an important consequence of the Mean Value Theorem, every function that has this derivativs differs from $y = - \\frac{1}{7} \\cos \\left(7 t\\right)$ by a constant $C$.\n\nSimilar reasoning leads us to the functions ahose derivative is $y ' = \\cos \\left(7 t\\right)$ being expressible as $y = \\frac{1}{7} \\sin \\left(7 t\\right) + C$ for constant $C$.\n\nBecause the derivative of a sum is the sum of the derivatives, every function whose derivative is $y ' = \\sin \\left(7 t\\right) + \\cos \\left(7 t\\right)$ can be written in the form:\n\n$y = - \\frac{1}{7} \\cos \\left(7 t\\right) + \\frac{1}{7} \\sin \\left(7 t\\right) + C$ for some constant $C$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8181135,"math_prob":1.000008,"size":719,"snap":"2022-27-2022-33","text_gpt3_token_len":176,"char_repetition_ratio":0.18881118,"word_repetition_ratio":0.0,"special_character_ratio":0.2294854,"punctuation_ratio":0.11564626,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000086,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-13T08:29:44Z\",\"WARC-Record-ID\":\"<urn:uuid:5f9d828d-e213-4339-b5d1-26f0d8b76284>\",\"Content-Length\":\"36102\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e06c53c1-d25f-4faf-a0b5-beae9611d5b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:abf2043d-1e45-4ce4-ae60-0e746c0910bc>\",\"WARC-IP-Address\":\"216.239.38.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/find-all-possible-functions-with-the-given-derivative-1-if-y-sin-7t-then-y-2-if-\",\"WARC-Payload-Digest\":\"sha1:GCXI2QSFB5VWKQ3OOBXJT265EI34FVMU\",\"WARC-Block-Digest\":\"sha1:O3ZTYDPKUYMS5LLPDU4KGS6FAYLLL47W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571911.5_warc_CC-MAIN-20220813081639-20220813111639-00504.warc.gz\"}"}
https://cris.vtt.fi/en/publications/on-convergence-to-stationarity-of-fractional-brownian-storage
[ "# On convergence to stationarity of fractional brownian storage\n\nMichel Mandjes, Ilkka Norros, Peter Glynn\n\nResearch output: Contribution to journalArticleScientificpeer-review\n\n7 Citations (Scopus)\n\n### Abstract\n\nWith M(t):=sups∈[0, t]A(s)−s denoting the running maximum of a fractional Brownian motion A(⋅) with negative drift, this paper studies the rate of convergence of ℙ(M(t)>x) to ℙ(M>x). We define two metrics that measure the distance between the (complementary) distribution functions ℙ(M(t)>⋅) and ℙ(M>⋅). Our main result states that both metrics roughly decay as exp(−ϑt2−2H), where ϑ is the decay rate corresponding to the tail distribution of the busy period in an fBm-driven queue, which was computed recently [Stochastic Process. Appl. (2006) 116 1269–1293]. The proofs extensively rely on application of the well-known large deviations theorem for Gaussian processes. We also show that the identified relation between the decay of the convergence metrics and busy-period asymptotics holds in other settings as well, most notably when Gärtner–Ellis-type conditions are fulfilled.\nOriginal language English 1385-1403 19 Annals of Applied Probability 19 4 https://doi.org/10.1214/08-AAP578 Published - 2009 A1 Journal article-refereed\n\n### Keywords\n\n• Convergence to stationarity\n• Fractional brownian motion\n• Large deviations\n• Storage process" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.820113,"math_prob":0.50770897,"size":1589,"snap":"2020-34-2020-40","text_gpt3_token_len":407,"char_repetition_ratio":0.09905363,"word_repetition_ratio":0.018348623,"special_character_ratio":0.25173065,"punctuation_ratio":0.11071429,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9503866,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-14T11:45:05Z\",\"WARC-Record-ID\":\"<urn:uuid:89b41e59-ab1f-43bc-87bb-835026b23be5>\",\"Content-Length\":\"48193\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:339fc012-726e-48f5-b814-670fdd84bab7>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe8e9217-774d-4be6-afb8-4ee0f665a3e4>\",\"WARC-IP-Address\":\"52.209.51.54\",\"WARC-Target-URI\":\"https://cris.vtt.fi/en/publications/on-convergence-to-stationarity-of-fractional-brownian-storage\",\"WARC-Payload-Digest\":\"sha1:3GK7SBHLDL6VKEHL33DG2RFAVAINLKQP\",\"WARC-Block-Digest\":\"sha1:GIF66OFPLWLYNOMO34CL7TEXDSSD2H27\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439739211.34_warc_CC-MAIN-20200814100602-20200814130602-00451.warc.gz\"}"}
https://mathoverflow.net/questions/42139/estimating-direction-from-a-distribution-on-a-circle/42229
[ "# Estimating direction from a distribution on a circle\n\nLet there be $n$ points on a unit circle. It is known they come from \"normal\" distribution around particular unknown direction (i.e. sum of 2 \"normal\" distributions on circle - one centered at point $p$ and the other at its opposite $-p$). What is the best way to estimate this direction? By best I mean an algorithm that is a. analytical, b. efficient and c. simple.\n\nThis seems too simple to be true, but, combining some of the ideas posted earlier, I think you could just interpret the vectors as complex numbers and take the RMS.\n\nSquaring will turn the bimodal distribution into a unimodal one. Then the square roots of the mean should give a good estimate of the modes of the original distribution.\n\n• That's a good answer Niels, and I believe it will work quite well in this case. One trouble I see is that by squaring you have multiplied the 'noise' (if you like) by 2. So, I think that this estimator will be consistent, but not efficient. – Robby McKilliam Oct 15 '10 at 1:13\n• Actually, I lie. In this case, I think this is exactly what you want to do! I recommend this answer be marked as correct. My answers can just be considered as advertising for the important and interesting the field of circular statistics. – Robby McKilliam Oct 15 '10 at 2:00\n• In principle the stretching of the squaring is undone by the shrinking of the square root. I do see a potential problem with the arithmetic mean getting smaller in absolute value, especially if the distribution is rather flat, but I don't think that's a linear effect, is it? – Niels J. Diepeveen Oct 15 '10 at 2:31\n• I think, due to the symmetry, it essential does act in a linear way. I'm pretty sure I can describe very accurately how well your estimator will work. I'll post this as an(other) answer a bit later. – Robby McKilliam Oct 15 '10 at 3:24\n• Niels, thanks a lot! I didn't know the RMS concept. If i did it might ring a bell. It's indeed THE best answer in the sense i asked for. – Andrei Kolin Oct 16 '10 at 10:42\n\nThe standard way to solve this is to just consider each of your data points as unit vectors, then take the average of those unit vectors. The direction of this averaged vector is the estimated direction.\n\nThere is a large literature on this topic which generally goes by the name of directional statistics. The seminal text on is Mardia and Jupp's book Directional Statistics. This field has a huge number of applications in astronomy, biology, meteorology, engineering etc.\n\n• The summation works in the case the direction is \"signed\" i.e. not symmetrical around 0. In my case this wont work: e.g. if there're 100 points at $p$ and 100 points at $-p$ their sum will be 0... Thanks for useful link, though. – Andrei Kolin Oct 14 '10 at 15:38\n• What would happen if you make a new circle by identifying opposite points on the original circle, and then take the average as suggested in this comment? (I haven't checked, so this may have an obvious reason for not working.) – gowers Oct 14 '10 at 17:06\n\nOk, so now I will describe why Niels's estimator works so well. Take a bimodal and symmetric circular density function $f$ with modes $p$ and $-p$ (we will assume that $p$ is positive) such as the one plotted in my previous answer. 
Formally, let $\\Theta_1, \\Theta_2, \\dots, \\Theta_N$ be $N$ observations drawn from $f$.\n\nNiels's estimator first computes the complex numbers $e^{i 2 \\Theta_n}$ and takes their average $$\\bar{C} = \\frac{1}{N} \\sum_{n=1}^{N} e^{i 2 \\Theta_n} .$$ The estimate, denoted $\\hat{p}$, is given by taking the complex argument of $\\bar{C}$ and dividing by 2, that is $$\\hat{p} = \\frac{\\angle{\\bar{C}}}{2}$$ where $\\angle{\\bar{C}} \\in [0,2\\pi)$ denotes the complex argument. The next theorem describes the asymptotic properties of this estimator. I use the notation $\\langle x \\rangle_{\\pi}$ to denote $x$ taken to its representative inside $[-\\pi, \\pi)$. So, for example, $\\langle 2\\pi \\rangle_{\\pi} = 0$ and $\\langle \\pi + 0.1 \\rangle_{\\pi} = -\\pi + 0.1$.\n\nTheorem: Let $\\lambda$ denote the difference $\\lambda = \\tfrac{1}{2}\\langle 2\\hat{p} - 2p \\rangle_{\\pi}.$ Then $\\lambda$ converges almost surely to zero as $N \\rightarrow \\infty$ and the distribution of the normalised difference $\\sqrt{N}\\lambda$ converges to the zero mean normal with variance $$\\frac{\\sigma_s^2}{c}$$ where $$\\sigma_s^2 = \\int_{-\\pi/2}^{\\pi/2}\\sin^2(\\theta) f(\\langle \\theta + p \\rangle_\\pi) d\\theta \\qquad \\text{and} \\qquad c = \\int_{-\\pi/2}^{\\pi/2}\\cos(\\theta) f(\\langle \\theta + p \\rangle_\\pi) d\\theta.$$\n\nThe definition of the difference $\\lambda$ might seem a little strange at first, but it is actually very natural. To see why, note that $p$ and the estimate $\\hat{p}$ are both in $[0,\\pi)$ but, for example, if $p = 0$ and $\\hat{p} = \\pi - 0.01$ then the difference between these is not $\\pi - 0.01$, because the two modes are actually very close to aligned in this case. The correct difference is $\\lambda = \\tfrac{1}{2}\\langle 2(\\pi-0.01) - 2 \\times 0 \\rangle_{\\pi} = 0.01$.\n\nThe proof of this theorem follows from a very similar argument to Theorem 6.1 (page 87) from my thesis. The original argument is due to Barry Quinn. Rather than restate the proof I'll just give you some convincing numerical evidence.\n\nI've run some simulations for the case when the noise is a sum of two weighted von Mises circular distributions with concentration parameter $\\kappa$. So, when $\\kappa$ is large the distribution is concentrated and looks something like the picture on the left below ($\\kappa = 20$ in this case), and when $\\kappa$ is small the distribution is quite spread out and looks something like the picture on the right below ($\\kappa = 0.5$). We obviously expect the estimator to perform better when the distribution is quite concentrated ($\\kappa$ is large).", null, "", null, "Here are the results. The plot below shows the simulated variance of $\\lambda$ after 5000 trials (the dots) versus the variance predicted in the theorem above for a range of values of $\\kappa$ and number of observations $N$. You can see that the theorem does a very good job of accurately predicting the performance if $\\kappa$ isn't too small.", null, "(source)\n\nThere is still an open question as to whether this is the best estimator (in the sense of maximally reducing the variance of $\\lambda$). It would be possible to derive a Cramer-Rao bound for this estimation problem to give an idea of the best possible performance of an unbiased estimator. I suspect that this estimator performs very near the Cramer-Rao bound. So, in that sense, it is close to best possible.\n\n• It seems your thesis quote may precede usage by Laurence J.
Peter, wordwizard.com/phpbb3/viewtopic.php?f=16&t=19330 but there is some question owing to the many books he wrote en.wikipedia.org/wiki/Laurence_J._Peter – Will Jagy Oct 16 '10 at 2:10\n• Oh yes! I much prefer Herman Wouk's rhyming version ''When in danger or in doubt, run in circles, scream and shout'' anyway. Thanks! – Robby McKilliam Oct 16 '10 at 2:51\n• Eternal Vigilance is the Price of Liberty. – Will Jagy Oct 16 '10 at 3:28\n• Robby, thanks for your exceptional and insightful comments! I'm lucky to stumble on a real expert in this field! However, let me point out that this last comment doesn't deliver what's promised in one of the comments above (i.e. explain WHY Niels' estimator is good). Rather, it provides a rigorous definition of it + simulation results. For the real explanations (proof of the Theorem) the reader is referred to your thesis :) Of course it is incorrect for me to expect the full answer in a page or 2. If I need to address more similar questions, I'll surely take a read of all the links. – Andrei Kolin Oct 16 '10 at 10:50\n\nI see now that Andrei would like to know what to do when the distribution has 2 modes and is symmetric about these modes. It seems better to just give a second (more detailed) answer rather than complicate the simple answer I gave above (basically I think the idea in gowers' comment above is sound, but it's a bit tricky to actually implement).\n\nSo, how do we deal with estimating the 'mean direction' of a distribution that looks something like:", null, "(source)\n\nGood questions at this point are ''what is mean direction anyway?'' and, specifically for the distribution above, ''does a mean direction even exist?''\n\nThis has been a question I have been looking at for a few months now. I'm wary of blowing my own horn a bit here, but I am going to attach a part of my thesis which I think gives satisfactory answers to these questions (I would love to give you the whole thesis, but it's not quite ready for the public to see). I suggest that there are (at least) two different, but equally reasonable and intuitive, definitions of mean direction. I argue that the distribution above has no mean in a rigorously definable sense for both of these definitions.\n\nGiven $N$ data points $\\Theta_1,\\dots, \\Theta_N$ on a circle there exist very accurate and efficient O(N)-time algorithms to estimate both of these means if they exist. Neither algorithm will converge if used on circular data drawn from the bimodal distribution above as (according to my definition) the means do not exist.\n\nStill, given $N$ data points $\\Theta_1,\\dots, \\Theta_N$ drawn from the bimodal distribution above, if what you want to do is estimate one of the ''modes'' rather than the mean direction, then my gut tells me that there probably are efficient and accurate algorithms to do this, although I don't know if they exist in the literature. You could try Fisher's book The Statistical Analysis of Circular Data.\n\n• Robby, mostly I want to know how to post a drawing of a peanut. – Will Jagy Oct 14 '10 at 23:17\n• Hi Will, I use metapost. The actual data for the pdf (it's a weighted sum of von Mises distributions by the way) comes from a java library (unfortunately) and then the plotting gets done by metapost. At some point I plan on releasing all of the code I have for simulations and plotting under the CRAPL matt.might.net/articles/crapl.
However, at the moment I feel the code is even too crap for the CRAPL :( – Robby McKilliam Oct 14 '10 at 23:26\n• @Andrei: If you have further questions, feel free to send me an email. – Robby McKilliam Oct 14 '10 at 23:32\n• Between Will's comment and your comment talking about \"CRAPL\", I'm laughing quite hard indeed. – Harry Gindi Oct 15 '10 at 0:27\n• Robby, release all the code. If it loves you, it will return. – Will Jagy Oct 15 '10 at 2:09" ]
[ null, "https://i.stack.imgur.com/wwS4S.png", null, "https://i.stack.imgur.com/d2mbv.png", null, "https://i.stack.imgur.com/0JcZD.png", null, "https://i.stack.imgur.com/5taQW.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90337646,"math_prob":0.9845412,"size":10903,"snap":"2020-45-2020-50","text_gpt3_token_len":2833,"char_repetition_ratio":0.12542436,"word_repetition_ratio":0.029252438,"special_character_ratio":0.266257,"punctuation_ratio":0.09872461,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986347,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T10:00:35Z\",\"WARC-Record-ID\":\"<urn:uuid:906c5f45-9103-443f-9fac-3be725048e92>\",\"Content-Length\":\"175885\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bf2636e7-963a-46f5-9906-a5153e5364a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:800cc94d-8d9c-4265-8e8d-54f31cb7cc4b>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/42139/estimating-direction-from-a-distribution-on-a-circle/42229\",\"WARC-Payload-Digest\":\"sha1:I6TGGZTFSUD2LP4WNYJBRA5AYMLNBLDZ\",\"WARC-Block-Digest\":\"sha1:PRH3EITS2RTO2CJ6WSCESOMWQ2PQNGA3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141191511.46_warc_CC-MAIN-20201127073750-20201127103750-00452.warc.gz\"}"}
http://hackage.haskell.org/package/generics-mrsop-2.2.0/docs/Generics-MRSOP-Base-NP.html
[ "generics-mrsop-2.2.0: Generic Programming with Mutually Recursive Sums of Products.\n\nGenerics.MRSOP.Base.NP\n\nDescription\n\nStandard representation of n-ary products.\n\nSynopsis\n\n# Documentation\n\ndata NP (a :: k -> Type) (b :: [k]) :: forall k. (k -> Type) -> [k] -> Type where #\n\nAn n-ary product.\n\nThe product is parameterized by a type constructor f and indexed by a type-level list xs. The length of the list determines the number of elements in the product, and if the i-th element of the list is of type x, then the i-th element of the product is of type f x.\n\nThe constructor names are chosen to resemble the names of the list constructors.\n\nTwo common instantiations of f are the identity functor I and the constant functor K. For I, the product becomes a heterogeneous list, where the type-level list describes the types of its components. For K a, the product becomes a homogeneous list, where the contents of the type-level list are ignored, but its length still specifies the number of elements.\n\nIn the context of the SOP approach to generic programming, an n-ary product describes the structure of the arguments of a single data constructor.\n\nExamples:\n\nI 'x' :* I True :* Nil :: NP I '[ Char, Bool ]\nK 0 :* K 1 :* Nil :: NP (K Int) '[ Char, Bool ]\nJust 'x' :* Nothing :* Nil :: NP Maybe '[ Char, Bool ]\n\nConstructors\n\n Nil :: forall k (a :: k -> Type) (b :: [k]). NP a ([] :: [k]) (:*) :: forall k (a :: k -> Type) (b :: [k]) (x :: k) (xs :: [k]). a x -> NP a xs -> NP a (x ': xs) infixr 5\nInstances\n HTrans (NP :: (k1 -> Type) -> [k1] -> Type) (NP :: (k2 -> Type) -> [k2] -> Type) Instance detailsDefined in Data.SOP.NP Methodshtrans :: AllZipN (Prod NP) c xs ys => proxy c -> (forall (x :: k10) (y :: k20). c x y => f x -> g y) -> NP f xs -> NP g ys #hcoerce :: (AllZipN (Prod NP) (LiftedCoercible f g) xs ys, HTrans NP NP) => NP f xs -> NP g ys # HPure (NP :: (k -> Type) -> [k] -> Type) Instance detailsDefined in Data.SOP.NP Methodshpure :: SListIN NP xs => (forall (a :: k0). f a) -> NP f xs #hcpure :: AllN NP c xs => proxy c -> (forall (a :: k0). c a => f a) -> NP f xs # HAp (NP :: (k -> Type) -> [k] -> Type) Instance detailsDefined in Data.SOP.NP Methodshap :: Prod NP (f -.-> g) xs -> NP f xs -> NP g xs # HCollapse (NP :: (k -> Type) -> [k] -> Type) Instance detailsDefined in Data.SOP.NP Methodshcollapse :: SListIN NP xs => NP (K a) xs -> CollapseTo NP a # HTraverse_ (NP :: (k -> Type) -> [k] -> Type) Instance detailsDefined in Data.SOP.NP Methodshctraverse_ :: (AllN NP c xs, Applicative g) => proxy c -> (forall (a :: k0). c a => f a -> g ()) -> NP f xs -> g () #htraverse_ :: (SListIN NP xs, Applicative g) => (forall (a :: k0). f a -> g ()) -> NP f xs -> g () # HSequence (NP :: (k -> Type) -> [k] -> Type) Instance detailsDefined in Data.SOP.NP Methodshsequence' :: (SListIN NP xs, Applicative f) => NP (f :.: g) xs -> f (NP g xs) #hctraverse' :: (AllN NP c xs, Applicative g) => proxy c -> (forall (a :: k0). c a => f a -> g (f' a)) -> NP f xs -> g (NP f' xs) #htraverse' :: (SListIN NP xs, Applicative g) => (forall (a :: k0). 
f a -> g (f' a)) -> NP f xs -> g (NP f' xs) # All (Compose Eq f) xs => Eq (NP f xs) Instance detailsDefined in Data.SOP.NP Methods(==) :: NP f xs -> NP f xs -> Bool #(/=) :: NP f xs -> NP f xs -> Bool # (All (Compose Eq f) xs, All (Compose Ord f) xs) => Ord (NP f xs) Instance detailsDefined in Data.SOP.NP Methodscompare :: NP f xs -> NP f xs -> Ordering #(<) :: NP f xs -> NP f xs -> Bool #(<=) :: NP f xs -> NP f xs -> Bool #(>) :: NP f xs -> NP f xs -> Bool #(>=) :: NP f xs -> NP f xs -> Bool #max :: NP f xs -> NP f xs -> NP f xs #min :: NP f xs -> NP f xs -> NP f xs # All (Compose Show f) xs => Show (NP f xs) Instance detailsDefined in Data.SOP.NP MethodsshowsPrec :: Int -> NP f xs -> ShowS #show :: NP f xs -> String #showList :: [NP f xs] -> ShowS # All (Compose Semigroup f) xs => Semigroup (NP f xs) Since: sop-core-0.4.0.0 Instance detailsDefined in Data.SOP.NP Methods(<>) :: NP f xs -> NP f xs -> NP f xs #sconcat :: NonEmpty (NP f xs) -> NP f xs #stimes :: Integral b => b -> NP f xs -> NP f xs # (All (Compose Monoid f) xs, All (Compose Semigroup f) xs) => Monoid (NP f xs) Since: sop-core-0.4.0.0 Instance detailsDefined in Data.SOP.NP Methodsmempty :: NP f xs #mappend :: NP f xs -> NP f xs -> NP f xs #mconcat :: [NP f xs] -> NP f xs # All (Compose NFData f) xs => NFData (NP f xs) Since: sop-core-0.2.5.0 Instance detailsDefined in Data.SOP.NP Methodsrnf :: NP f xs -> () # type Same (NP :: (k1 -> Type) -> [k1] -> Type) Instance detailsDefined in Data.SOP.NP type Same (NP :: (k1 -> Type) -> [k1] -> Type) = (NP :: (k2 -> Type) -> [k2] -> Type) type Prod (NP :: (k -> Type) -> [k] -> Type) Instance detailsDefined in Data.SOP.NP type Prod (NP :: (k -> Type) -> [k] -> Type) = (NP :: (k -> Type) -> [k] -> Type) type UnProd (NP :: (k -> Type) -> [k] -> Type) Instance detailsDefined in Data.SOP.NS type UnProd (NP :: (k -> Type) -> [k] -> Type) = (NS :: (k -> Type) -> [k] -> Type) type CollapseTo (NP :: (k -> Type) -> [k] -> Type) a Instance detailsDefined in Data.SOP.NP type CollapseTo (NP :: (k -> Type) -> [k] -> Type) a = [a] type SListIN (NP :: (k -> Type) -> [k] -> Type) Instance detailsDefined in Data.SOP.NP type SListIN (NP :: (k -> Type) -> [k] -> Type) = (SListI :: [k] -> Constraint) type AllN (NP :: (k -> Type) -> [k] -> Type) (c :: k -> Constraint) Instance detailsDefined in Data.SOP.NP type AllN (NP :: (k -> Type) -> [k] -> Type) (c :: k -> Constraint) = All c type AllZipN (NP :: (k -> Type) -> [k] -> Type) (c :: a -> b -> Constraint) Instance detailsDefined in Data.SOP.NP type AllZipN (NP :: (k -> Type) -> [k] -> Type) (c :: a -> b -> Constraint) = AllZip c\n\nappendNP :: NP p xs -> NP p ys -> NP p (xs :++: ys) Source #\n\nAppend two values of type NP\n\nlistPrfNP :: NP p xs -> ListPrf xs Source #\n\nProves that the index of a value of type NP is a list. This is useful for pattern matching on said list without having to carry the product around.\n\nmapNP :: (f :-> g) -> NP f ks -> NP g ks Source #\n\nMaps a natural transformation over a n-ary product\n\nmapNPM :: Monad m => (forall x. f x -> m (g x)) -> NP f ks -> m (NP g ks) Source #\n\nMaps a monadic natural transformation over a n-ary product\n\nelimNP :: (forall x. f x -> a) -> NP f ks -> [a] Source #\n\nEliminates the product using a provided function.\n\nelimNPM :: Monad m => (forall x. 
f x -> m a) -> NP f ks -> m [a] Source #\n\nzipNP :: NP f xs -> NP g xs -> NP (f :*: g) xs Source #\n\nCombines two products into one.\n\nunzipNP :: NP (f :*: g) xs -> (NP f xs, NP g xs) Source #\n\nUnzips a combined product into two separate products\n\ncataNP :: (forall a as. f a -> r as -> r (a ': as)) -> r '[] -> NP f xs -> r xs Source #\n\nConsumes a value of type NP.\n\ncataNPM :: Monad m => (forall a as. f a -> r as -> m (r (a ': as))) -> m (r '[]) -> NP f xs -> m (r xs) Source #\n\nConsumes a value of type NP.\n\neqNP :: (forall x. p x -> p x -> Bool) -> NP p xs -> NP p xs -> Bool Source #\n\nCompares two NPs pairwise with the provided function and return the conjunction of the results." ]
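A small usage sketch of these combinators (an editor's addition, not part of the package documentation; it assumes the module re-exports the NP constructors and that the extensions below suffice):

```haskell
{-# LANGUAGE DataKinds, TypeOperators #-}

import Data.Maybe (isJust, maybeToList)
import Generics.MRSOP.Base.NP

-- An NP is one 'f x' per element x of the type-level index list.
pair :: NP Maybe '[Int, Bool]
pair = Just 1 :* Just True :* Nil

single :: NP Maybe '[Char]
single = Nothing :* Nil

-- appendNP concatenates the indices as well as the values.
triple :: NP Maybe '[Int, Bool, Char]
triple = appendNP pair single

-- mapNP lifts a natural transformation (here Maybe ~> []) over every slot.
lists :: NP [] '[Int, Bool, Char]
lists = mapNP maybeToList triple

-- elimNP collapses the product with a uniform eliminator.
main :: IO ()
main = print (elimNP isJust triple)   -- [True,True,False]
```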
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79518247,"math_prob":0.9552636,"size":4872,"snap":"2023-14-2023-23","text_gpt3_token_len":1710,"char_repetition_ratio":0.19391948,"word_repetition_ratio":0.409009,"special_character_ratio":0.4240558,"punctuation_ratio":0.20088495,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99320847,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-22T22:34:34Z\",\"WARC-Record-ID\":\"<urn:uuid:e5bfb891-25ec-4837-ac3d-a3b7fb689940>\",\"Content-Length\":\"54680\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e45705b4-d820-4a20-8290-35b6f667b5d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:bd027756-7e03-4399-bfbf-b4ae67803a23>\",\"WARC-IP-Address\":\"146.75.32.68\",\"WARC-Target-URI\":\"http://hackage.haskell.org/package/generics-mrsop-2.2.0/docs/Generics-MRSOP-Base-NP.html\",\"WARC-Payload-Digest\":\"sha1:KCBG64GBXZNOHCJWXTOO3QJI42SYRLEU\",\"WARC-Block-Digest\":\"sha1:PRB27LD35S7JAHZNPKTE3Z6O4BEORE3D\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296944452.97_warc_CC-MAIN-20230322211955-20230323001955-00717.warc.gz\"}"}
https://www.codecademy.com/courses/ensembling-methods-in-ml/lessons/boosting-ml/exercises/gradient-boosting-implementation
[ "Learn\n\nNow that we have taken a look at what is going on under the hood, we are ready to implement Gradient Boosting on a real dataset and solve a classification problem.\n\nWe will be using a dataset from UCI’s Machine Learning Repository to evaluate the acceptability of a car based on a set of features that encompasses their price and technical characteristics.\n\n### Instructions\n\n1.\n\nCreate a Gradient Boosted Trees classification model using GradientBoostingClassifier() with the n_estimators set to 15. Leave all other parameters to their default values. Store the model in a variable named grad_classifier.\n\nPrint the parameters of the GradientBoostedTrees model using the .get_params() method.\n\n2.\n\nFit grad_classifier using the training features (X_train) and corresponding labels (y_train).\n\nPredict the classes of the testing dataset (X_test) and store them as an array in a variable named y_pred.\n\n3.\n\nNow we will explore some of the most common evaluation metrics for classification on our trained Gradient Boosted Trees model.\n\n• Calculate the accuracy and store it in a variable named accuracy.\n• Calculate the precision and store it in a variable named precision.\n• Calculate the recall and store it in a variable named recall.\n• Calculate the f1-score and store it in a variable named f1.\n\nRemove the comments from the code block to print the evaluation metrics you just stored.\n\n4.\n\nTake a look at the confusion matrix by removing the comments in the following code block." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7883297,"math_prob":0.9023894,"size":1455,"snap":"2022-40-2023-06","text_gpt3_token_len":292,"char_repetition_ratio":0.13783598,"word_repetition_ratio":0.051502146,"special_character_ratio":0.19312714,"punctuation_ratio":0.06827309,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97498935,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T06:18:39Z\",\"WARC-Record-ID\":\"<urn:uuid:c308cbd4-c16b-4116-af0c-745eca029b08>\",\"Content-Length\":\"102912\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:537fac66-0aef-4b87-8a48-5b9aa20210d7>\",\"WARC-Concurrent-To\":\"<urn:uuid:1938a8d6-7ac9-498b-a266-972876c6268f>\",\"WARC-IP-Address\":\"104.17.212.81\",\"WARC-Target-URI\":\"https://www.codecademy.com/courses/ensembling-methods-in-ml/lessons/boosting-ml/exercises/gradient-boosting-implementation\",\"WARC-Payload-Digest\":\"sha1:TDNRUPY4BLANHFHYMUSE7NHAJTXFA7N6\",\"WARC-Block-Digest\":\"sha1:IUGHXIS332VYNR6DBZLLTLKF6ND2A6QF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337537.25_warc_CC-MAIN-20221005042446-20221005072446-00163.warc.gz\"}"}