Columns: qid (int64, values 1 to 4.65M) · question (large_string, lengths 27 to 36.3k) · author (large_string, lengths 3 to 36) · author_id (int64, values -1 to 1.16M) · answer (large_string, lengths 18 to 63k)
3,111,906
<p>I have to show that for any <span class="math-container">$b &gt;1$</span>, we have <span class="math-container">$$ b^n &gt; n$$</span> for all <span class="math-container">$n$</span> sufficiently large, using only very basic analysis (no calculus). My attempt is as follows.</p> <hr> <p>We know that <span class="math-container">$b^{n+1} - b^n = b^n(b-1)$</span>. For <span class="math-container">$n$</span> sufficiently large, say <span class="math-container">$$n \geq N = \left\lceil \frac{\ln(2/(b-1))}{\ln b} \right\rceil + 1,$$</span> we have <span class="math-container">$$ b^{n+1} - b^n &gt; 2.$$</span></p> <p>Now let <span class="math-container">$\Delta = N - b^N$</span>. Then for any <span class="math-container">$j\geq 1$</span>, we have <span class="math-container">$$ b^{N+j} = (b^{N+j} - b^{N+j-1}) + \ldots + (b^{N+1} - b^N) + b^N &gt; 2j + b^N = 2j + N - \Delta = N+j + (j - \Delta).$$</span> Thus we have <span class="math-container">$b^n &gt; n$</span> for any <span class="math-container">$n \geq N + |\Delta|$</span>.</p> <hr> <p>This works, but it seems messy. <strong>Is there a better way?</strong> I know induction is usual for this type of problem, but establishing the base case for generic <span class="math-container">$b$</span> seems difficult. </p>
YiFan
496,634
<p>We can proceed by induction. Suppose <span class="math-container">$b^x&gt;x$</span> is true for <span class="math-container">$x= n$</span>. We want to show the claim for <span class="math-container">$x=n+1$</span>. We just need <span class="math-container">$$b^n(b-1)=b^{n+1}-b^n&gt;1,$$</span> because by the inductive hypothesis we have <span class="math-container">$b^n&gt;n$</span>; then we can sum the two inequalities. But <span class="math-container">$b^n(b-1)&gt;1$</span> is the same as <span class="math-container">$b^n&gt;1/(b-1)$</span>, which is clearly true when <span class="math-container">$n$</span> is sufficiently large, since <span class="math-container">$b^n$</span> grows without bound but <span class="math-container">$b-1$</span> is constant.</p>
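<p>(A worked version of the summing step, added for clarity and not part of the original answer: assuming both <span class="math-container">$b^n&gt;n$</span> and <span class="math-container">$b^n(b-1)&gt;1$</span> hold, adding them gives <span class="math-container">$$b^{n+1} = b^n + b^n(b-1) &gt; n + 1,$$</span> which is exactly the claim for <span class="math-container">$x=n+1$</span>.)</p>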
3,111,906
<p>I have to show that for any <span class="math-container">$b &gt;1$</span>, we have <span class="math-container">$$ b^n &gt; n$$</span> for all <span class="math-container">$n$</span> sufficiently large, using only very basic analysis (no calculus). My attempt is as follows.</p> <hr> <p>We know that <span class="math-container">$b^{n+1} - b^n = b^n(b-1)$</span>. For <span class="math-container">$n$</span> sufficiently large, say <span class="math-container">$$n \geq N = \left\lceil \frac{\ln(2/(b-1))}{\ln b} \right\rceil + 1,$$</span> we have <span class="math-container">$$ b^{n+1} - b^n &gt; 2.$$</span></p> <p>Now let <span class="math-container">$\Delta = N - b^N$</span>. Then for any <span class="math-container">$j\geq 1$</span>, we have <span class="math-container">$$ b^{N+j} = (b^{N+j} - b^{N+j-1}) + \ldots + (b^{N+1} - b^N) + b^N &gt; 2j + b^N = 2j + N - \Delta = N+j + (j - \Delta).$$</span> Thus we have <span class="math-container">$b^n &gt; n$</span> for any <span class="math-container">$n \geq N + |\Delta|$</span>.</p> <hr> <p>This works, but it seems messy. <strong>Is there a better way?</strong> I know induction is usual for this type of problem, but establishing the base case for generic <span class="math-container">$b$</span> seems difficult. </p>
Calum Gilhooley
213,690
<p>(Umberto P.'s argument from first principles is surely the best way to go, but here's another argument, just for the sake of variety.)</p> <p>Consider the sequence <span class="math-container">$a_n = \frac{b^n}{n}$</span>. We have: <span class="math-container">$$\frac{a_{n+1}}{a_n} = \frac{b}{1 + \frac{1}{n}} \geqslant \frac{2b}{b+1} &gt; 1 \text{ for } n \geqslant \frac{2}{b-1}. $$</span> If the eventually increasing sequence <span class="math-container">$(a_n)$</span> is bounded above, it tends to a limit, but this implies <span class="math-container">$$ 1 = \lim_{n\to\infty}\frac{a_{n+1}}{a_n}\geqslant \frac{2b}{b+1} &gt; 1, $$</span> a contradiction. Therefore, <span class="math-container">$(a_n)$</span> is not bounded above. In particular, there exists <span class="math-container">$N \geqslant \frac{2}{b-1}$</span> such that <span class="math-container">$a_N &gt; 1$</span>. Then we have <span class="math-container">$a_n &gt; 1$</span> for all <span class="math-container">$n \geqslant N$</span>.</p>
23,942
<p>I have tried to resolve the problem described in the following link: <a href="https://mathematica.stackexchange.com/questions/23931/how-can-i-solve-precision-problem">How can I solve precision problem</a>.</p> <p>Briefly, the problem is this: no matter how many digits appear after the decimal point, the result should be displayed with 2 or 3 digits after the decimal point, as the user requires. E.g. for the inputs</p> <pre><code>N[10/3]
3
</code></pre> <p>the output should have a user-chosen precision, say 2 digits after the decimal point:</p> <pre><code>3.33
3.00
</code></pre> <p>I found the <code>NumberForm[]</code> function as a solution, but since <code>NumberForm[]</code> wraps up the result, I can't get a number back out of it.</p> <p>I tried this:</p> <pre><code>NumberForm[N[10/3],{4,2}]
NumberForm[3, {4, 2}]
</code></pre> <p>The output I am getting is</p> <pre><code>3.33
3.00
</code></pre> <p>But when I try to do a further operation, it doesn't compute:</p> <pre><code>NumberForm[N[10/3], {4, 2}] + 1
NumberForm[3, {4, 2}] + 1
</code></pre> <p>Output:</p> <pre><code>1+3.33
1+3.00
</code></pre> <p>I also tried another way:</p> <pre><code>ToExpression[ToString[NumberForm[3, {4, 2}]]]
</code></pre> <p>Output:</p> <pre><code>3.
</code></pre> <p>but I need the output to be 3.00. Help me find a solution.</p>
Spawn1701D
255
<p>Use the <code>$PrePrint</code> global parameter:</p> <pre><code>$PrePrint = If[MatchQ[#, _?NumericQ], NumberForm[#, {4, 2}], #] &amp;; </code></pre> <p>Note: if you don't like the way rationals will be represented after setting the global variable (e.g. $3.00/4.00$), then use this instead: </p> <pre><code>$PrePrint = If[MatchQ[#, Except[_Rational,_?NumericQ]], NumberForm[#, {4, 2}], #] &amp;; </code></pre> <hr> <p>Update:</p> <p>If you just want it for the <code>InputField</code> command, try this instead:</p> <pre><code>InputField[Dynamic[ToString@NumberForm[b, {4, 2}]], String] </code></pre>
4,228,535
<p>First, the geometric inversion map <span class="math-container">$f:\mathbb{R}^n\setminus \{0\}\rightarrow \mathbb{R}^n$</span> is defined by <span class="math-container">$$f(x)=\frac{x}{|x|^2}=:X.$$</span> One of its properties is the following:</p> <p>For any <span class="math-container">$c\in\mathbb{R}^n, c\not=0$</span>, the sphere <span class="math-container">$|x-c|=|c|$</span> is mapped to the hyperplane <span class="math-container">$2X\cdot c=1$</span>.</p> <p>I can't understand what this means intuitively. As far as I know, <span class="math-container">$2X\cdot c=1$</span> means the set of <span class="math-container">$X$</span> whose projection onto <span class="math-container">$c$</span> has length <span class="math-container">$1/2$</span>. However, I failed to find a connection with the sphere <span class="math-container">$|x-c|=|c|$</span>. Can I get any help?</p>
Kavi Rama Murthy
142,385
<p>You are making things too complicated. <span class="math-container">$\psi$</span> is a bounded function and if <span class="math-container">$|\psi| \leq M$</span> we get <span class="math-container">$|\int_{-1/k}^{1/k} \psi (x)dx|\leq \frac {2M} {k} \to 0$</span>.</p>
4,557,576
<p>We are working on a 3U CubeSat as part of a project for a Space Engineering club. To calculate the maximum solar disturbance force, we are trying to find the largest shadow a 0.1 m * 0.1 m * 0.3 m rectangular prism can cast.</p> <p>If the satellite were oriented with its largest side facing the sun directly, the shadow cast would be 0.03 m^2. We think there is some orientation in which the shadow is larger than this, so that the solar disturbance force is maximized. Is there a function we could use to maximize this value?</p> <p>Any help would be greatly appreciated!</p>
Intelligenti pauca
255,730
<p>Even if a satisfying answer has been found by the asker, I want to show how the answer can be reached without matrices or calculus.</p> <p>If we take a plane surface with area <span class="math-container">$A$</span> and unit normal <span class="math-container">$\vec n$</span>, then the projection of that area along a direction <span class="math-container">$\vec v$</span> (with <span class="math-container">$|\vec v|=1$</span>) is <span class="math-container">$$ A'=A \vec n\cdot\vec v. $$</span> Let's now take a convex polyhedron: if the normals <span class="math-container">$\vec n_k$</span> of its faces are oriented outwards, then only those faces with <span class="math-container">$\vec n_k\cdot\vec v&gt;0$</span> are visible from direction <span class="math-container">$\vec v$</span>. The area projected by the polyhedron is then <span class="math-container">$$ S=\sum_{\text{$k$ visible}} A_k \vec n_k\cdot\vec v, $$</span> where <span class="math-container">$A_k$</span> is the area of the <span class="math-container">$k$</span>-th face, and the sum is only carried over the visible faces.</p> <p>In the case of a parallelepiped, we can set up our coordinates such that its center is at the origin, and its faces are parallel to coordinate planes, so that <span class="math-container">$\vec n$</span> can take one of the six values <span class="math-container">$(0,0,\pm1)$</span>, <span class="math-container">$(0,\pm1,0)$</span>,<span class="math-container">$(\pm1,0,0)$</span>. Without loss of generality, we can in addition suppose that <span class="math-container">$v=(x,y,z)$</span> is in the first octant. We then get: <span class="math-container">$$ \tag{1} S=A_x x+A_y y+A_z z, $$</span> where <span class="math-container">$A_x$</span> is the area of the faces perpendicular to the <span class="math-container">$x$</span> axis, and so on.</p> <p>We must find for which <span class="math-container">$v=(x,y,z)$</span> the value of <span class="math-container">$S$</span> is maximum, with the constraint <span class="math-container">$|\vec v|=1$</span>, that is: <span class="math-container">$$ \tag{2} x^2+y^2+z^2=1. $$</span> But <span class="math-container">$(1)$</span> is the equation of a plane (if <span class="math-container">$S$</span> is fixed) and <span class="math-container">$(2)$</span> is the equation of a sphere. We obtain the maximum value of <span class="math-container">$S$</span> when the plane is tangent to the sphere, i.e. when <span class="math-container">$v$</span> is perpendicular to the plane, leading to: <span class="math-container">$$ v={(A_x,A_y,A_z)\over\sqrt{A_x^2+A_y^2+A_z^2}}. $$</span> Substituting this into <span class="math-container">$(1)$</span> we can find the maximum value of <span class="math-container">$S$</span>: <span class="math-container">$$ S_\max=\sqrt{A_x^2+A_y^2+A_z^2}. $$</span> For the case at hand, substituting: <span class="math-container">$$ A_x=A_y=0.03\text{ m}^2,\quad A_z=0.01\text{ m}^2, $$</span> we get <span class="math-container">$S_\max={\sqrt{19}\over100}\approx0.0436\text{ m}^2$</span>.</p>
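<p>(Added note, not part of the original answer: the following Python snippet is an illustrative sanity check of the closed-form result above. It brute-forces directions <span class="math-container">$\vec v$</span> over the first octant and compares the best projected area with <span class="math-container">$\sqrt{A_x^2+A_y^2+A_z^2}$</span>; the face areas are taken from the question.)</p> <pre><code>import math

# Face areas of the 0.1 m x 0.1 m x 0.3 m prism (taken from the question)
Ax, Ay, Az = 0.03, 0.03, 0.01

# Closed-form maximum projected ("shadow") area derived in the answer
closed_form = math.sqrt(Ax**2 + Ay**2 + Az**2)

# Brute-force search over unit vectors v in the first octant:
# projected area S(v) = Ax*vx + Ay*vy + Az*vz (only visible faces contribute)
best = 0.0
steps = 400
for i in range(steps + 1):
    for j in range(steps + 1):
        theta = (math.pi / 2) * i / steps   # polar angle
        phi = (math.pi / 2) * j / steps     # azimuth
        vx = math.sin(theta) * math.cos(phi)
        vy = math.sin(theta) * math.sin(phi)
        vz = math.cos(theta)
        best = max(best, Ax * vx + Ay * vy + Az * vz)

print(closed_form)  # about 0.0436 m^2
print(best)         # approaches the closed-form value from below
</code></pre>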
32,150
<p>I want to test if expressions (a mix of variables, functions and numbers) are zero valued, as fast as possible, and <code>PossibleZeroQ</code> is sometimes very slow. One solution I found was to substitute random reals for the variables and test whether the value of the substituted expression is less than, say, $0.0001$.</p> <p>It works well, but maybe there are other solutions out there.</p> <p>I know it can give some wrong answers, but what matters most is speed, since I can check the false positives later with <code>PossibleZeroQ</code>.</p> <p>Can you think of an algorithm that can perform fast zero-value tests at the cost of some loss of accuracy?</p> <p>Edit:</p> <p>I'll post my algorithm here:</p> <pre><code>TestZeroValuedExpression[expression_,symbolslist_]:=Module[{numericvalue}, Quiet[TimeConstrained[If[Check[ numericvalue=N[Expand[expression/.Table[symbolslist[[i]]-&gt;RandomReal[{1,2}],{i,Length[symbolslist]}]]]; ,False]=!=False, If[Abs[numericvalue]&gt;0.00001,False,PossibleZeroQ[expression]], PossibleZeroQ[expression] ],3,False]] ]; </code></pre>
Giovanni F.
4,769
<p>Another improved version</p> <pre><code>TestConstantValuedExpression[expression_,zerovaluetest_]:=Module[{randomvaluestestresult,expressionrandomvalue,previousexpressionrandomvalue,symbolreplacementlist,extremesdifferencetestvalue}, randomvaluestestresult=True; previousexpressionrandomvalue=False; Quiet[ Do[ Block[{$MaxExtraPrecision=Infinity}, symbolreplacementlist={_-&gt;False}; TimeConstrained[ While[!FreeQ[$Assumptions/.symbolreplacementlist,False],symbolreplacementlist=Table[symbolslist[[dvi]]-&gt;RandomChoice[{-1,1}]*10^Round[RandomVariate[StudentTDistribution[2]]],{dvi,Length[symbolslist]}]] ,0.2,nskippedsymbolreplacementlists++;Continue[]]; TimeConstrained[ expressionrandomvalue=Check[N[expression/.symbolreplacementlist],Continue[]]; ,0.2,nskippedexpressionrandomvalues++;Continue[]]; ]; If[previousexpressionrandomvalue=!=False, If[Chop[expressionrandomvalue-previousexpressionrandomvalue,10^-4]=!=0,randomvaluestestresult=False;Break[],previousexpressionrandomvalue=expressionrandomvalue]; ,previousexpressionrandomvalue=expressionrandomvalue]; ,{60}]; If[randomvaluestestresult, extremesdifferencetestvalue=Chop[TimeConstrained[Minimize[{expression,$Assumptions},symbolslist][[1]],60,0]-TimeConstrained[Maximize[{expression,$Assumptions},symbolslist][[1]],60,0],10^-4]; If[extremesdifferencetestvalue===0||extremesdifferencetestvalue==={0,0}, If[zerovaluetest,TimeConstrained[PossibleZeroQ[FullSimplify[expression]],360,False],True] ,False] ,False] ] ]; </code></pre>
911,584
<p><em>Disclaimer: This thread is a record of thoughts.</em></p> <p><strong>Discussion</strong> Given a compact set.</p> <blockquote> <p>Do mere neighborhood covers admit finite subcovers? $$C\subseteq\bigcup_{i\in I}N_i\implies C\subseteq N_1\cup\ldots\cup N_n$$ <em>(The idea is that neighborhoods are in some sense fat.)</em></p> </blockquote> <p><strong>Application</strong></p> <p>Given a locally compact space.</p> <p>Every compact set has a compact neighborhood base: $$C\subseteq U:\quad N\subseteq U\quad(C\subseteq N^\circ)$$ <em>(The above would give clues how to prove this.)</em></p>
Rebecca J. Stones
91,818
<p>This is essentially a counting problem: in the $6^5=7776$ possible outcomes, how many satisfy the condition "the number of rolls resulting in 1 or 2 is greater than the number of rolls resulting in 6"? The probability is this number divided by $6^5$.</p> <p>We can find this number by filling five initially empty cells with the possible numbers.</p> <p><em>Hint</em>: We can find a formula for the number of ways of rolling exactly $s$ 6s and $n$ 1s or 2s:</p> <ul> <li><p>What is the number of ways of filling exactly $s$ 6s in those $5$ empty cells?</p></li> <li><p>After the above, what is the number of ways of choosing exactly $n$ empty cells (to be filled with 1s and 2s in the next step)?</p></li> <li><p>After the above, what is the number of ways of filling the chosen $n$ cells with 1s and/or 2s?</p></li> <li><p>After the above, what is the number of ways of filling the remaining empty cells with 3s, 4s, or 5s?</p></li> </ul> <p>Once we have the formula, we just do the bookkeeping:</p> <p>$$\begin{array}{cc|c} \text{number of 6s ($s$)} &amp; \text{number of 1s and 2s ($n$)} &amp; \text{number of ways this can occur} \\ \hline 0 &amp; 1 &amp; ??? \\ 0 &amp; 2 &amp; ??? \\ 0 &amp; 3 &amp; ??? \\ 0 &amp; 4 &amp; ??? \\ 0 &amp; 5 &amp; ??? \\ 1 &amp; 2 &amp; ??? \\ 1 &amp; 3 &amp; ??? \\ 1 &amp; 4 &amp; ??? \\ 2 &amp; 3 &amp; ??? \\ \end{array}$$</p>
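<p>(Added note, not part of the original answer: if you want to check your final bookkeeping, the following Python snippet is a brute-force enumeration of all $6^5$ outcomes. It only confirms the count; the formula above is still the intended route.)</p> <pre><code>from itertools import product
from fractions import Fraction

# Enumerate all 6^5 = 7776 equally likely outcomes of five die rolls
favourable = 0
total = 0
for rolls in product(range(1, 7), repeat=5):
    total += 1
    ones_twos = sum(1 for r in rolls if r in (1, 2))
    sixes = sum(1 for r in rolls if r == 6)
    if ones_twos &gt; sixes:
        favourable += 1

print(favourable, total)            # favourable outcomes out of 7776
print(Fraction(favourable, total))  # the exact probability
</code></pre>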
101,974
<p>We are familiar with <strong><em>Hurwitz’s theorem</em></strong> which implies there is only the Fibonacci 2-Square, Euler 4-Square, Degen 8-Square, and no more. However, if we relax conditions and allow for <em>rational expressions</em>, then <strong><em>Pfister's theorem</em></strong> states that similar identities are possible for <strong>ALL $2^n$ squares</strong>. His 4-square version is: </p> <p>$\begin{align}&amp;(a_1^2+a_2^2+a_3^2+a_4^2)(b_1^2+b_2^2+b_3^2+b_4^2)=\\ &amp;(a_1 b_4 + a_2 b_3 + a_3 b_2 + a_4 b_1)^2 +\\ &amp;(a_1 b_3 - a_2 b_4 + a_3 b_1 - a_4 b_2)^2 +\\ &amp;\left(a_1 b_2 + a_2 b_1 + \frac{a_3 (b_1^2b_4-2b_1b_2b_3-b_2^2b_4)}{b_1^2+b_2^2} - \frac{a_4 (b_1^2b_3+2b_1b_2b_4-b_2^2b_3)}{b_1^2+b_2^2}\right)^2+\\ &amp;\left(a_1 b_1 - a_2 b_2 - \frac{a_4 (b_1^2b_4-2b_1b_2b_3-b_2^2b_4)}{b_1^2+b_2^2} - \frac{a_3 (b_1^2b_3+2b_1b_2b_4-b_2^2b_3)}{b_1^2+b_2^2}\right)^2 \end{align}$</p> <p><em>Question</em>: What does the <em>Pfister 8-square</em> version look like? (<strong>And, if you have the strength, can you also give the 16-square version?</strong>) Here is K. Conrad's <a href="http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/pfister.pdf" rel="nofollow">pdf file</a> which describes the general method, but I can’t make heads or tails out of it. </p> <p>$\color{red}{\text{Attention}}$ (Feb. 16): Someone is trying to delete Wikipedia's article on <a href="https://en.wikipedia.org/wiki/Degen%27s_eight-square_identity" rel="nofollow">Degen's Eight-Square Identity</a> <em>simply because he finds it uninteresting</em>. Please <a href="https://en.wikipedia.org/wiki/Wikipedia%3aArticles_for_deletion/Degen%27s_eight-square_identity" rel="nofollow">vote to keep</a>.</p>
Robert Israel
8,508
<p>Here's Maple code for getting a 16-square version using the method Conrad outlines. The result is too big to show here (the equation <code>eq</code> has length 4808351).</p> <pre><code>C2:= &lt; &lt; c[1] | c[2] >,&lt;-c[2] | c[1]> >:
c2:= c[1]^2 + c[2]^2:
C2b:= subs(seq(c[i]=c[i+2],i=1..2),C2):
c4:= add(c[i]^2,i=1..4):
C4:= &lt; &lt; C2|C2b >,&lt; -C2 . C2b^%T . C2/c2 | C2 > >:
C4b:= subs(seq(c[i]=c[i+4],i=1..4),C4):
c8:= add(c[i]^2,i=1..8):
C8:= &lt; &lt; C4|C4b >,&lt; -C4 . C4b^%T . C4/c4 | C4 > >:
C8b:= subs(seq(c[i]=c[i+8],i=1..8),C8):
C16:= &lt; &lt; C8|C8b >,&lt; -C8 . C8b^%T . C8/c8 | C8 > >:
XY:= subs(c=x, C16) . subs(c=y, C16):
for i to 16 do z[i] := XY[1, i] end do:
</code></pre> <p>To verify:</p> <pre><code>eq:= add(x[i]^2, i=1..16) * add(y[i]^2, i=1..16) = add(z[i]^2, i=1..16):
testeq(eq);
</code></pre> <p>which returns <code>true</code>.</p>
3,331,865
<p>Find an orthonormal basis of the vector space <span class="math-container">$V$</span> of linear polynomials of the form <span class="math-container">$ax+b$</span>, regarded as functions <span class="math-container">$p:[0,1] \to \mathbb{R}$</span>, with inner product</p> <p><span class="math-container">$$\langle p,q \rangle= \int_0^1 p(x)q(x) dx$$</span> for <span class="math-container">$q, p \in V$</span>.</p> <p>The polynomials in <span class="math-container">$V$</span> are linear, so a basis of <span class="math-container">$V$</span> is <span class="math-container">$\left \{1,x \right \}$</span>; I am not sure how to proceed further.</p> <p>Please help.</p> <p>Thank you.</p>
Federico Fallucca
531,470
<p>An orthonormal basis can be </p> <p><span class="math-container">$\{1,\frac{(x-\frac{1}{2})}{\sqrt{\langle x-\frac{1}{2},x-\frac{1}{2}\rangle}}\}$</span></p> <p>The idea is simple. The space has dimension <span class="math-container">$2$</span>, so you must choose two polynomials of order at most <span class="math-container">$1$</span>. The first polynomial can be <span class="math-container">$1$</span>, which has norm <span class="math-container">$1$</span>. Then the second polynomial <span class="math-container">$p$</span> must satisfy the condition </p> <p><span class="math-container">$\langle 1,p\rangle =\int_0^1pdx=0$</span></p> <p>Moreover the second basis element must have norm <span class="math-container">$1$</span>, so you can simply take </p> <p><span class="math-container">$q=\frac{p}{\sqrt{\langle p,p\rangle}}$</span></p> <p>So </p> <p><span class="math-container">$\{1,q\}$</span> is an orthonormal basis for your space.</p> <p>How can you calculate <span class="math-container">$p$</span>? You can observe that </p> <p><span class="math-container">$\int_0^1 xdx=\frac{1}{2}$</span> so </p> <p><span class="math-container">$\int_0^1 xdx-\frac{1}{2}=\int_0^1( x-\frac{1}{2})dx=0$</span></p> <p>Then you can choose</p> <p><span class="math-container">$p:=x-\frac{1}{2}$</span></p> <p>You can observe that, for any fixed polynomial <span class="math-container">$s$</span>, you can always choose <span class="math-container">$p=s-\int_0^1sdx$</span> </p> <p>because</p> <p><span class="math-container">$\langle 1,p\rangle =\int_0^1 p dx=0$</span> </p> <p>This is not a mysterious result: it follows the same idea as Gram-Schmidt orthogonalization. </p>
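<p>(Added for completeness, not part of the original answer: carrying out the normalization explicitly, <span class="math-container">$\langle x-\tfrac{1}{2},x-\tfrac{1}{2}\rangle=\int_0^1 (x-\tfrac{1}{2})^2dx=\tfrac{1}{12}$</span>, so <span class="math-container">$q=\sqrt{12}\,(x-\tfrac{1}{2})=\sqrt{3}\,(2x-1)$</span> and the orthonormal basis is <span class="math-container">$\{1,\sqrt{3}(2x-1)\}$</span>.)</p>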
129,709
<p>Say we have a continuous function $u(x,y) : \mathbb{R}^2 \rightarrow \mathbb{R}$.</p> <p>I have seen several textbooks that make the following assertion:</p> <blockquote> <p>The length/area element of the zero level set of $u$ is given by $\lvert\nabla H\left(u\right)\rvert = \delta(u)\lvert\lvert\nabla u\rvert\rvert$, where $H\left(u\right)$ is the Heaviside step function, $\delta(u)= \partial H(u) / \partial u $ is the Dirac delta function.</p> <p>We can measure the length of the zero level set as $ \int\int \lvert \nabla H(u(x,y))\rvert dx dy = \int\int \delta (u(x,y))\lvert \nabla u (x,y)\rvert dx dy$</p> </blockquote> <p>I fail to see this. Why is the length (and/or area) element of the zero level set of $u$ given by $\lvert\nabla H\left(u\right)\rvert $ or $\delta(u)\lvert\lvert\nabla u\rvert\rvert$?</p> <p>Some papers and textbooks that make this assertion:</p> <ul> <li>Zhao, H.K. et al., 1996. <a href="http://www.math.lsa.umich.edu/~psmereka/LEVELSET/LSPAPERS/zzosherzhao.pdf" rel="nofollow">A Variational Level Set Approach to Multiphase Motion</a> (available online). <em>Journal of Computational Physics</em>, 127(1), p.179-195 (page 2, Equations 2.2a and 2.2b)</li> <li><p>Geometric Partial Differential Equations and Image Analysis. Guillermo Sapiro. Cambridge University Press 2001 (e.g. page 92).</p></li> <li><p>Image Processing and Analysis. Variational, PDE, Wavelet, and stochastic methods. Tony Chan and Jianhong Shen. SIAM 2005 (e.g. page 46)</p></li> </ul>
Christian Blatter
1,303
<p>Assume that $\phi:\ (x,y)\mapsto\phi(x,y)$ is a smooth real-valued function, and assume that the set $\Omega\subset{\mathbb R}^2$ defined by $$\Omega:=\bigl\{(x,y)\ \bigm|\ \phi(x,y)&gt;0\bigr\}$$ is bounded. Then the area $A(\Omega)$ is obviously given by $$A(\Omega)=\int 1_\Omega(x,y)\ {\rm d}(x,y)=\int H\bigl(\phi(x,y)\bigr)\ {\rm d}(x,y)\ .$$ This is formula 2.2b of the quoted source. </p> <p>The formula 2.2a for the length of $\partial\Omega$ is another matter. It reads as follows: $$L(\partial\Omega)\ =\ \int\delta\bigl(\phi(x,y)\bigr)\,\bigl|\nabla\phi(x,y)\bigr|\ {\rm d}(x,y)\ .\qquad(1)$$ I can only give a heuristic argument why such a formula might hold. In any case, this argument does not lead to a "length element", it only gives an interpretation for the integral appearing on the right of $(1)$.</p> <p>Replace the $\delta$-function by the function $$\delta_N(t):=\cases{N &amp; $\Bigl(|t|\leq{1\over 2N}\Bigr)$ \cr 0 &amp; $\Bigl(|t|>{1\over 2N}\Bigr)$\cr}\ ,$$ where $N\gg1$. Then $$\int\delta\bigl(\phi(x,y)\bigr)\,\bigl|\nabla\phi(x,y)\bigr|\ {\rm d}(x,y) \doteq N\ \int_{B_N}\bigl|\nabla\phi(x,y)\bigr|\ {\rm d}(x,y)\ ,\qquad(2)$$ where $B_N$ is the narrow band defined by $$B_N:=\bigl\{(x,y)\ \bigm|\ |\phi(x,y)|\leq{1\over 2N}\bigr\}\ .$$ The "centerline" of this band is the curve $\partial\Omega:\ \phi(x,y)=0$, resp. $$\partial\Omega:\quad s\mapsto{\bf z}(s)\qquad\bigl(0\leq s\leq L:=L(\partial\Omega)\bigr)$$ (parametrized by arc length $s$). Consider a point ${\bf p}:={\bf z}(s)\in\partial\Omega$. The direction of the band there is $\dot{\bf z}(s)$, and ${\bf n}:=\nabla\phi\bigl({\bf z}(s)\bigr)$ points into the normal direction. Starting at ${\bf p}$ in direction ${\bf n}$ we reach the boundary of $B_N$ when we have traversed the length $h&gt;0$ such that $|{\bf n}|\, h={1\over 2N}$. It follows that near ${\bf z}(s)$ the width $\rho(s)$ of this band is approximately given by $$\rho(s)=2 h={1 \over N\,|{\bf n}|}= {1\over N\, \bigl|\nabla\phi\bigl({\bf z}(s)\bigr)\bigr|}\ .$$ Therefore we have $$N\ \int_{B_N}\bigl|\nabla\phi(x,y)\bigr|\ {\rm d}(x,y)\doteq N\ \int_0^L \rho(s) \bigl|\nabla\phi\bigl({\bf z}(s)\bigr)\bigr|\ ds=\int_0^L ds =L(\partial\Omega)\ .$$ When we enter this in (2) we obtain the quoted formula $(1)$.</p>
619,526
<p>I'm studying up for my algebra exam, and I'm not exactly sure how to solve a problem like the following</p> <blockquote> <p>Let $f = X^2 + 1 \in \mathbb{F}_5[X]$, $R = \mathbb{F}_5[X]/\langle f \rangle$ and $\alpha = X + \langle f \rangle \in R$. Show that $\alpha \in R^*$ and that $\vert \alpha \vert = 4$ in $R^*$.</p> </blockquote> <p>I have already proven that R is not a field by proving that $\langle f \rangle$ is not a maximal ideal, but I'm unsure of how I should proceed.</p>
drhab
75,923
<p>$\alpha^{2}=-1_R$ in $R$, so $\alpha^{4}=\left(-1_R\right)^{2}=1_R$, showing that $\alpha$ is invertible (with inverse $\alpha^{3}$) and that the order of $\alpha$ divides $4$; since $\alpha^{2}=-1_R\neq 1_R$ (the characteristic is $5$, not $2$), the order is exactly $4$.</p> <p>Explaining:</p> <p>$\alpha^{2}=\left(X+\left(f\right)\right)^{2}=X^{2}+\left(f\right)=-1+\left(f\right)$ since $f\mid X^{2}-\left(-1\right)=X^{2}+1$.</p>
546,239
<p>My Math teacher and I disagree about the proper method of solving the question <em>In how many ways can four students be chosen from a group of 12 students?</em></p> <p>The question comes straight out of a Math revision sheet from a Math book distributed under the national curriculum. The options it gives for answers are:</p> <ol> <li>12</li> <li>48</li> <li>495</li> <li>11880</li> <li>40320</li> </ol> <p>As we are currently learning Permutations and Combinations, my interpretation is that it is asking for a <strong>Combination without repetition</strong>, i.e. $\frac{(n+r-1)!}{r!(n-1)!}$, which gives you the number of combinations without repetition (as you cannot pick the same student twice). My teacher argues that the answer the book provides is correct. The book's answer simply says to use $^{n}C_{r}$ or $\frac{n!}{r!(n-r)!}$.</p> <p>What is the correct method of answering this? The book states that <em>3. 495</em> is the answer.</p>
Ross Millikan
1,827
<p>To justify the $^nC_r$ formula, you can line up all the $n$ students in $n!$ ways. Then you take the first $r$ for your selection. You can scramble the first $r$ and the last $n-r$ in any way and get the same selection, so you divide by $r!$ and by $(n-r)!$ to get the formula.</p>
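<p>(A worked instance, added for reference and not part of the original answer: with $n=12$ and $r=4$ this gives $^{12}C_{4}=\frac{12!}{4!\,8!}=\frac{12\cdot11\cdot10\cdot9}{4!}=\frac{11880}{24}=495$, which is option 3 in the question; note that $11880$ by itself is $^{12}P_{4}$, option 4.)</p>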
1,039,487
<p>Suppose $a$, $b$ are real numbers such that $a+b=12$ and both roots of the equation $x^2+ax+b=0$ are integers. </p> <p>Determine all possible values of $a$. </p> <p>I don't know how to go about doing this without long, messy casework. I tried $(x-s)(x-r)=x^2+ax+b$ and got $-r-s=a$ and $rs=b$, but was unable to find all solutions based on only these and $a+b=12$. Could someone help me finish up? Thanks.</p>
André Nicolas
6,312
<p>Your method, carried a little further, works. We have $-(r+s)+rs=12$. Rewrite as $$(r-1)(s-1)=13.$$</p> <p>There are now not many possibilities, since $13$ is prime. Don't forget about negative numbers.</p>
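<p>(Spelling the hint out, added for reference and not part of the original answer: since $13$ is prime, $(r-1,s-1)$ must be one of $(1,13)$, $(13,1)$, $(-1,-13)$, $(-13,-1)$, so $\{r,s\}=\{2,14\}$ or $\{0,-12\}$. Hence $a=-(r+s)$ equals $-16$ (with $b=rs=28$) or $12$ (with $b=0$), and in both cases $a+b=12$ as required.)</p>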
128,695
<p>Is there any good guide on covering spaces for idiots? Like a really dumbed-down approach to it. I have an exam on this, but don't understand it, and it's about 1/6th of the exam. </p> <p>So I'm doing the Hatcher problems and am stuck on 4.</p> <ol> <li>Construct a simply-connected covering space of the space $X \subset \mathbb{R}^3$ that is a union of a sphere and a diameter. </li> </ol> <p>All I can think of is just connecting a bunch of spheres in a line.</p> <p>But, yeah, I'm pretty scared I will fail my degree because of this. So I need a good guide to covering spaces that isn't Hatcher. The only other one I know uses heavy category theory, which is even worse to read. </p>
C.F.G
272,127
<p>Here is a list of books that cover covering spaces:</p> <ul> <li><p><em>Munkres, James R.</em>, Topology. Upper Saddle River, NJ: Prentice Hall. xvi, 537 p. (2000). <a href="https://zbmath.org/?q=an:0951.54001" rel="nofollow noreferrer">ZBL0951.54001</a>.</p> </li> <li><p><em>Lee, John M.</em>, <a href="http://dx.doi.org/10.1007/978-1-4419-7940-7" rel="nofollow noreferrer"><strong>Introduction to topological manifolds</strong></a>, Graduate Texts in Mathematics 202. New York, NY: Springer (ISBN 978-1-4419-7939-1/hbk; 978-1-4419-7940-7/ebook). xvii, 433 p. (2011). <a href="https://zbmath.org/?q=an:1209.57001" rel="nofollow noreferrer">ZBL1209.57001</a>.</p> </li> <li><p><em>Lima, Elon Lages</em>, Fundamental groups and covering spaces. Transl. from the Spanish by Jonas Gomes, Natick, MA: A K Peters. ix, 210 p. (2003). <a href="https://zbmath.org/?q=an:1029.55001" rel="nofollow noreferrer">ZBL1029.55001</a>.</p> </li> <li><p><em>Kalajdzievski, Sasho</em>, <a href="https://www.taylorfrancis.com/books/9781482220810" rel="nofollow noreferrer"><strong>An illustrated introduction to topology and homotopy</strong></a>, Boca Raton, FL: CRC Press (ISBN 978-1-4398-4815-9/hbk; 978-1-4822-2081-0/ebook). xvi, 469 p. (2015). <a href="https://zbmath.org/?q=an:1323.55001" rel="nofollow noreferrer">ZBL1323.55001</a>.</p> </li> <li><p><em>Ault, Shaun V.</em>, Understanding topology. A practical introduction, Baltimore, MD: Johns Hopkins University Press (ISBN 978-1-4214-2407-1/hbk; 978-1-4214-2408-8/ebook). x, 399 p. (2018). <a href="https://zbmath.org/?q=an:1392.54001" rel="nofollow noreferrer">ZBL1392.54001</a>.</p> </li> </ul>
4,356,938
<p>I'm reading through <a href="https://press.princeton.edu/books/hardcover/9780691151199/elliptic-tales" rel="nofollow noreferrer">Elliptic Tales</a>.</p> <p>Addition of 2 points on an elliptic curve is described as follows:</p> <p><a href="https://i.stack.imgur.com/Hifi9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hifi9.png" alt="Image of addition" /></a></p> <p><span class="math-container">$L$</span> is the line between <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> and <span class="math-container">$R$</span></p> <p><span class="math-container">$L'$</span> is the line between <span class="math-container">$O$</span> and <span class="math-container">$P + Q$</span> and <span class="math-container">$R$</span></p> <p>The book describes the algebraic process of adding together 2 points on an elliptic curve.</p> <p>First: It describes adding together <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> to get <span class="math-container">$R$</span>. It then says we need to connect <span class="math-container">$O$</span> and <span class="math-container">$R$</span> with a line, and where that line intersects <span class="math-container">$E$</span> will be the point <span class="math-container">$P + Q$</span>. So far so good.</p> <p>It then says the line connecting <span class="math-container">$O$</span> and <span class="math-container">$R$</span> is vertical and is easy to describe in projective coordinates as <span class="math-container">$x = x_3z$</span> where <span class="math-container">$R$</span> is <span class="math-container">$(x_3, y_3)$</span>.</p> <p>But the line connecting <span class="math-container">$O$</span> and <span class="math-container">$R$</span> is <span class="math-container">$L'$</span>, which doesn't seem to be vertical. Clearly, in the picture it's slanted downwards.</p> <p>Does anyone know what's going on?</p>
Lubin
17,760
<p>For complete generality, let’s write the equation of any line in the Cartesian plane thus: <span class="math-container">$ax+by+c=0$</span>, with <span class="math-container">$a$</span> and <span class="math-container">$b$</span> not both zero. Now, remember that points <span class="math-container">$(x,y)$</span> in the Cartesian plane correspond to points <span class="math-container">$(x:y:1)$</span> in the projective plane. The projective version of the line above is <span class="math-container">$aX+bY+cZ=0$</span>.</p> <p>The neutral point of the elliptic curve is at <span class="math-container">$\Bbb O=(0:1:0)$</span>. For a line as above to contain <span class="math-container">$\Bbb O$</span>, the necessary and sufficient condition is that <span class="math-container">$b=0$</span>.</p> <p>Going back to the original Cartesian line, we see that its equation must be <span class="math-container">$ax+c=0$</span>: vertical.</p>
2,438,236
<p>Suppose a finite group $G$ has order smaller than the dimension of a vector space $V$ over an arbitrary field. How can one prove that any representation of $G$ on $V$ is reducible?</p>
reuns
276,986
<p>$\rho : G \to \text{End}(V)$ is a representation, $V$ is a $K$-vector space. Take a non-zero vector $v \in V$ and let $$W = \text{span}(\rho(g_1)v,\rho(g_2)v,\ldots) = \{ \sum_{g \in G} a_g\,\rho(g) v, a_g \in K\}$$</p> <p>Then $$\rho(h) (\sum_{g \in G} a_g\,\rho(g) v) = \sum_{g \in G} a_g\,\rho(hg) v=\sum_{g \in G} a_{h^{-1}g}\,\rho(g) v \in W$$</p> <p>Thus $W$ is a $\rho$-invariant subspace and $\rho|_W : G \to \text{End}(W)$ is a sub-representation with $\dim(W) \le |G|$. Since $W \ne 0$ (it contains $v = \rho(1_G)v$) and $\dim(W) \le |G| &lt; \dim(V)$, $W$ is a proper non-zero invariant subspace, so the representation is reducible.</p> <p>($\rho|_W(g)=P \rho(g) P$ where $P$ is the orthogonal projection $V \to W$)</p>
1,232,023
<p>For me, intuitively, the integral $\int 2\pi y~dx$ makes more sense. I know intuition cannot be a proof, but by far most of the math I've learned does match my intuition, so I think this one should 'make sense' as well. Probably I didn't understand the way surface area is measured. It would be great if anyone could tell me how the integral $\int 2\pi y~dx$ is wrong. (By the way, how do I use mathematical symbols in posts?)</p>
Mathemagical
446,771
<p>OP, if I'm guessing right, the intuition point that you are puzzling over is "how come we can get away with integrating $\pi y^2 dx$ for volume (there too, we are approximating an infinitesimal frustum by an infinitesimal cylinder) but we are not able to approximate the surface area of the infinitesimal frustum $2 \pi y ds$ by that of the infinitesimal cylinder $2 \pi y dx$?"</p> <p>To understand the difference, consider the error of the approximation in each case as a fraction of the thing you are trying to calculate. The difference arises when the function is not a constant ($y' \neq 0$). In the case of volume, that error is negligible (because $\pi y^2 dx$ does capture most of the volume) while in the case of the surface area calculation, that error is of the same order as the surface area itself. </p> <p>To be more convinced, you can use the <a href="http://jwilson.coe.uga.edu/emt725/Frustum/Formulas.html" rel="nofollow noreferrer">formula for the volume of that frustum</a> of thickness $dx$. It is $$dV=\pi y^2 dx + \pi y dy dx + \tfrac{\pi}{3} dy^2 dx$$ so the error is $$dE=dV-\pi y^2 dx= \pi y dy dx + \tfrac{\pi}{3} dy^2 dx$$ which is tiny in relation to $dV$ (it vanishes, relative to the true $V$, as we take $dx$ to zero and integrate).</p> <p>The error in the surface area approximation, on the other hand, is $$d\epsilon= dA - 2 \pi y dx = 2 \pi y (ds- dx) = 2 \pi y ds (1 - \frac{1}{\sqrt{1+y'^2}})$$which is of the same order as $ dA = 2 \pi y ds $, and upon taking the limit of $dx$ to $0$ to compute the integral, this results in a deviation from the true surface area. </p>
3,327,094
<p>Give an example of a non-abelian group of order <span class="math-container">$55$</span>.</p> <p>To find a non-abelian group, the simplest way is to find a non-abelian group whose order divides the order of the given group, and then take the external direct product of that non-abelian group with some abelian group.</p> <p>For example, to find a non-abelian group of order <span class="math-container">$36$</span> we take the permutation group <span class="math-container">$S_3$</span> and form the group <span class="math-container">$S_3\otimes \Bbb Z_6$</span>. But this approach cannot give a group of order <span class="math-container">$55$</span>, since any group of order <span class="math-container">$5$</span> or <span class="math-container">$11$</span> is abelian.</p> <p>So how do we proceed?</p>
Travis Willse
155,629
<p>By Sylow's Third Theorem the number <span class="math-container">$n_{11}$</span> of Sylow <span class="math-container">$11$</span>-subgroups of a group <span class="math-container">$G$</span> of order <span class="math-container">$55$</span> divides <span class="math-container">$5$</span> and is congruent to <span class="math-container">$1$</span> modulo <span class="math-container">$11$</span>, so any group of order <span class="math-container">$55$</span> has exactly one (hence normal) subgroup <span class="math-container">$N$</span> of order <span class="math-container">$11$</span>, which we just call <span class="math-container">$\Bbb Z_{11}$</span>. Pick any subgroup <span class="math-container">$H$</span> of order <span class="math-container">$5$</span> (which exists by Cauchy's Theorem); we likewise call it <span class="math-container">$\Bbb Z_5$</span>. Since <span class="math-container">$5$</span> and <span class="math-container">$11$</span> are coprime, <span class="math-container">$\Bbb Z_5 \cap \Bbb Z_{11} = \{1_G\}$</span> and thus <span class="math-container">$G = \Bbb Z_{11} \Bbb Z_5$</span>, so our group is isomorphic to a <a href="https://en.wikipedia.org/wiki/Semidirect_product" rel="nofollow noreferrer">semidirect product</a>: <span class="math-container">$$G \cong\Bbb Z_{11} \rtimes \Bbb Z_5 .$$</span> In other words, there is some homomorphism <span class="math-container">$\phi : \Bbb Z_5 \to \operatorname{Aut}(\Bbb Z_{11}) \cong \Bbb Z_{10}$</span> such that <span class="math-container">$G$</span> is isomorphic to the set <span class="math-container">$\Bbb Z_{11} \times \Bbb Z_5$</span> equipped with the group operation <span class="math-container">$$(n, h) \ast_\phi (n', h') = (n + \phi(h)(n'), h + h') .$$</span> Since <span class="math-container">$\Bbb Z_5$</span> is cyclic, <span class="math-container">$\phi$</span> is determined by <span class="math-container">$\phi(1)$</span> and <span class="math-container">$\phi(1) = n$</span> defines a group operation iff <span class="math-container">$n$</span> has order dividing <span class="math-container">$5$</span>. The choice <span class="math-container">$\phi(1) = 0$</span> determines the trivial homomorphism <span class="math-container">$\phi$</span> and thus the direct product <span class="math-container">$\Bbb Z_{11} \times \Bbb Z_5$</span>. All of the other choices are conjugate and so define isomorphic semidirect products, so we need consider only one, say, the one defined by <span class="math-container">$\phi(1) = 2$</span>, which corresponds to the homomorphism <span class="math-container">$\phi : n \mapsto n^2$</span>, and this defines a nonabelian group operation <span class="math-container">$\ast_\phi$</span>, <span class="math-container">$$(n, h) \ast_\phi (n', h') = (n + (n')^{2h}, h + h').$$</span> (In fact we have shown that up to isomorphism the group defined by this operation and the cyclic group of order <span class="math-container">$55$</span> are the only groups of that order.)</p> <p>We can interpret this group concretely: Consider the group <span class="math-container">$\operatorname{Aff}(\Bbb F_{11})$</span> (which has order <span class="math-container">$110 = 55 \cdot 2$</span>) of invertible affine transformations <span class="math-container">$t \mapsto a t + b$</span> of <span class="math-container">$\Bbb F_{11}$</span>.
The composition rule is <span class="math-container">$$(t \mapsto a t + b) \circ (t \mapsto a' t + b') = (t \mapsto (a a') t + (a b' + b)) ,$$</span> and so we can identify this group as the semidirect product <span class="math-container">$$\Bbb Z_{11} \rtimes \operatorname{Aut}(\Bbb Z_{11}) \cong \Bbb Z_{11} \rtimes \Bbb Z_{10} .$$</span> Thus, we can identify our nonabelian group <span class="math-container">$G$</span> of order <span class="math-container">$55$</span> with the subgroup of <span class="math-container">$\operatorname{Aff}(\Bbb F_{11})$</span> whose elements have <span class="math-container">$\operatorname{Aut}(\Bbb Z_{11})$</span>-component of order dividing <span class="math-container">$5$</span>, that is, whose elements have <span class="math-container">$\Bbb Z_{10}$</span>-component contained in the unique copy <span class="math-container">$\Bbb Z_5 \cong \{0, 2, 4, 6, 8\} &lt; \Bbb Z_{10}$</span>. So, these are just the squares in <span class="math-container">$\operatorname{Aut}(\Bbb Z_{11})$</span>, and so we thus recover an observation of Tobias Kildetoft from a comment about another answer: We can identify <span class="math-container">$G$</span> up to isomorphism as the group affine transformations <span class="math-container">$t \mapsto a t + b$</span> of <span class="math-container">$\Bbb F_{11}$</span> with <span class="math-container">$a$</span> a square, that is, <span class="math-container">$$G \cong \{t \mapsto a t + b : a \in (\Bbb F_{11}^\times)^2, b \in \Bbb F_{11}\} = \{t \mapsto a t + b : a \in \{1, 3, 4, 5, 9\}, b \in \Bbb F_{11}\} .$$</span> Applying the usual embedding <span class="math-container">$\operatorname{Aff}(\Bbb F) \hookrightarrow \operatorname{GL}(2, \Bbb F)$</span> then recovers the explicit matrix group realization in Chinnapparaj R.'s answer, <span class="math-container">$$G \cong \left\{ \pmatrix{1 &amp; b \\ &amp; a} : a \in (\Bbb F_{11}^\times)^2, b \in \Bbb F_{11} \right\} \subset \operatorname{GL}(2, \Bbb F_{11}).$$</span></p>
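<p>(Added note, not part of the original answer: the following Python snippet is a small illustrative check of the last description. It builds the affine maps <span class="math-container">$t \mapsto at+b$</span> over <span class="math-container">$\Bbb F_{11}$</span> with <span class="math-container">$a$</span> a nonzero square and verifies that they form a nonabelian group with <span class="math-container">$55$</span> elements.)</p> <pre><code># Affine maps sending t to a*t + b over F_11, with a a nonzero square mod 11
p = 11
squares = sorted({(x * x) % p for x in range(1, p)})   # {1, 3, 4, 5, 9}
G = [(a, b) for a in squares for b in range(p)]        # 5 * 11 = 55 elements

def compose(f, g):
    """(a, b) represents the map sending t to a*t + b; return the composition f o g."""
    a, b = f
    a2, b2 = g
    return ((a * a2) % p, (a * b2 + b) % p)

# G is closed under composition, contains the identity, and has inverses...
assert all(compose(f, g) in G for f in G for g in G)
assert (1, 0) in G
assert all(any(compose(f, g) == (1, 0) for g in G) for f in G)
# ...and it is not abelian
assert any(compose(f, g) != compose(g, f) for f in G for g in G)

print(len(G))  # 55
</code></pre>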
199,199
<p>Suppose a box contains 5 white balls and 5 black balls.</p> <p>If you want to extract a ball and then another:</p> <p>What is the probability of getting a black ball and then a black one?</p> <p>I think that this is the answer:</p> <p>Let $A:$ get a black ball in the first extraction, $B:$ get a black ball in the second extraction. Then: $P(A)=\frac{5}{10}$ and $$P(B|A)=\frac{P(A\cap B)}{P(A)}=\frac{\frac{4}{9}}{\frac{1}{2}}=\frac{8}{9}$$</p> <p>Is this correct? </p> <p>Now, if you want to extract two balls at the same time: what is the probability that both are black? Is it exactly the same as the probability in the first question? Why?</p> <p>Thanks for your help.</p>
osa
25,167
<p><strong>Why would you even split it into two events?!</strong></p> <p>You are just picking two balls. There are <code>C(10,2)=45</code> ways to pick two balls out of 10, but only <code>C(5,2)=10</code> give you two black ones. So <code>10/45 = 2/9</code> is your answer.</p> <p>This is also the value the sequential approach gives, since $\frac{5}{10}\cdot\frac{4}{9}=\frac{2}{9}$: drawing two balls at once and drawing them one after the other without replacement describe the same experiment.</p>
2,736,295
<p>Let $f(x,y) = xy^2$ on the domain $D = \lbrace (x,y)| x,y\geq0, x^2 + y^2 \leq 3 \rbrace$.</p> <p>$f_x(x,y) = y^2$ and $f_y(x,y) = 2xy$ </p> <p>Therefore, the critical points should be $\lbrace (x,y)| y = 0, \sqrt{3} \geq x \geq 0 \rbrace$. </p> <p>The determinant of the Hessian is $$\det(Hf(x,y))= \begin{vmatrix} 0 &amp; 2y \\ 2y &amp; 2x \end{vmatrix} = 0-4y^2$$</p> <p>But this doesn't make sense to me because, if $y = 0$, then $f(x,0) = 0$. However, in this domain, if I choose for example $(x,y) = (1,\sqrt{2})$, the value of $f$ is larger, and this should in fact be the maximum. </p> <p>Why didn't finding the first partials and the Hessian allow me to find this maximum point? </p>
robjohn
13,854
<p>On the circle of radius $r$, we have $x^2+y^2=r^2$. Therefore, $$ xy^2=r^2x-x^3\tag1 $$ This implies that the interior critical points are at $$ (x,y)=\frac r{\sqrt3}\left(\pm1,\pm\sqrt2\right)\tag2 $$ with the corresponding values of $$ xy^2=\pm r^3\frac2{3\sqrt3}\tag3 $$ At the endpoints $x=\pm r$, we get the values of $xy^2=0$. Therefore, $$ -r^3\frac2{3\sqrt3}\le xy^2\le r^3\frac2{3\sqrt3}\tag4 $$ Thus, the maxima and minima are on the bounding circle.</p> <p>If $x^2+y^2=3$, then $r=\sqrt3$ and so $(4)$ says that $$ -2\le xy^2\le2\tag5 $$</p>
306,212
<p>The only statement I'm sure of is that any hyperbolic or Euclidean manifold is a $K(G,1)$ (i.e. its higher homotopy groups vanish), since its universal cover must be $\mathbb H^n$ or $\mathbb E^n$. But for example, if a complete Riemannian manifold $M$ satisfies one of the following, can I conclude that $M$ is a $K(G,1)$?</p> <ol> <li><p>$M$ has sectional curvature bounded above by some negative number.</p></li> <li><p>$M$ has negative sectional curvature.</p></li> <li><p>$M$ has nonpositive sectional curvature.</p></li> <li><p>$M$ has sectional curvature bounded above by $f(\operatorname{vol}(M))$ (where $f: \mathbb R \to \mathbb R$ is some function depending only on the dimension of $M$ that I don't know).</p></li> <li><p>$M$ has scalar curvature bounded above by some negative number.</p></li> <li><p>$M$ has negative scalar curvature.</p></li> <li><p>$M$ has nonpositive scalar curvature.</p></li> <li><p>$M$ has scalar curvature bounded above by $f(\operatorname{vol}(M))$.</p></li> </ol> <p>Do the answers change if I assume that $M$ is compact? Have I left out a relevant condition of some kind?</p>
Tim Campion
2,362
<p>Let me summarize the information in the comments in a CW post. Feel free to edit.</p> <ul> <li><p>For "weaker" notions of curvature, negative curvature seems to not imply that a manifold is a $K(G,1)$. As Deane Yang pointed out, Lohkamp <a href="http://dx.doi.org/10.2307/2118620" rel="nofollow noreferrer">showed</a> that for each $d \geq 3$, there are numbers $a(d) &gt; b(d) &gt; 0$ such that every manifold $M$ of dimension $d$ admits a complete metric $g$ with $-a(d) &lt; \operatorname{Ric}(M,g) &lt; -b(d)$. I believe this implies an analogous result for scalar curvature.</p> <ul> <li><p>The only loophole I can see is that there might be a smaller interval $a(d) \geq a' \geq b' \geq b(d) &gt; 0$ such that if $M$ admits a metric $g$ with $-a'\leq \mathrm{Ric}(M,g) \leq -b'$, then $M$ is a $K(G,1)$. (Possibly $a',b'$ might depend on further parameters such as $\operatorname{diam}(M,g)$ or $\operatorname{vol}(M,g)$).</p> <p>For instance, if $M$ admits a metric of <em>constant</em> negative Ricci curvature, does this imply that $M$ is a $K(G,1)$? Igor Belegradek points out below that the answer is <em>no</em> in this case as shown by <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC431004/?page=1" rel="nofollow noreferrer">Yau</a>.</p></li> </ul></li> <li><p>For sectional curvature, the story is different. As several people pointed out, the <a href="https://en.wikipedia.org/wiki/Cartan%E2%80%93Hadamard_theorem" rel="nofollow noreferrer">Cartan-Hadamard theorem</a> says that any manifold admitting a complete metric of nonpositive sectional curvature is a $K(G,1)$.</p> <p>We may ask if this can be improved to allow a small amount of positive curvature. As Igor Belegradek pointed out, "small amount" can't be specified in terms of volume, since $R S^2 \times g(R) S^1$ has constant volume $a$ for appropriate $g(R)$, but by choosing $R$ sufficiently large, it has arbitrarily small positive curvature. But as Igor Belegradek also pointed out, Fukaya and Yamaguchi <a href="http://dx.doi.org/10.4310/jdg/1214446030" rel="nofollow noreferrer">showed</a> that there is a positive number $\epsilon(d,D)$ dependent only on the dimension $d$ and diameter $D$, such that any compact Riemannian manifold $M$ with $-1 \leq \operatorname{sec}(M) &lt; \epsilon(\operatorname{dim}(M), \operatorname{diam}(M))$ is a $K(G,1)$. The lower bound on the curvature is necessary; Fukaya says that Gromov constructed metrics on $S^3$ with fixed diameter and arbitrarily small sectional curvature.</p> <ul> <li>I don't know if Yamaguchi - Fukaya's result holds for complete Riemannian manifolds (as Igor Belegradek points out the question doesn't even make sense in this case).</li> <li>Another direction which might be interesting would be to control "small amounts of positive curvature" in some other way. For instance, rather than controlling the $L^\infty$ norm of the sectional curvature, one might ask for control over some averaged version of it -- this might allow the curvature to become very positive at a point so long as it's not very positive in a large region. Somehow the necessity of the lower curvature bound in Fukaya's result suggests to me that something like this might be a good idea.</li> </ul></li> </ul>
316,016
<p>Could you recommend any approachable books/papers/texts about matroids (maybe a chapter from somewhere)? The ideal reference would contain multiple examples, present some intuitions and keep formalism to a necessary minimum.</p> <p>I would appreciate any hints or appropriate sources.</p>
azimut
61,691
<p>You could have a look at the relatively new book</p> <p>Gary Gordon, Jennifer McNulty: <em>Matroids. A Geometric Introduction</em>, Cambridge University Press 2012.</p> <p>From the description: "This book provides the first comprehensive introduction to the field which will appeal to undergraduate students and to any mathematician interested in the geometric approach to matroids. Written in a friendly, fun-to-read style and developed from the authors' own undergraduate courses, the book is ideal for students."</p>
2,682,531
<p>The definition of a convex set is the following:</p> <blockquote> <p>A set $\Omega \subset \mathbb R^n$ is convex if $\alpha x + (1 − \alpha) y \in \Omega, \forall x, y \in \Omega$ and $\forall \alpha \in [0, 1]$.</p> </blockquote> <p>With this it should be easy enough to prove that a set is not convex: just find a counterexample. But how do you prove that it <em>is</em> convex? How do I do it for the unit disk? </p> <p>$$\Omega = \{(x, y) \in \mathbb R^2 \mid x^2 + y^2 \leq 1\}$$</p> <p>Also what exactly does it mean for a set to be convex? </p>
Emilio Novati
187,568
<p>Hint:</p> <p>If the two points $P=(x_P,y_P)$ and $Q=(x_Q,y_Q)$ are in the set then we have $x_P^2+y_P^2\leq1$ and $x_Q^2+y_Q^2\leq1$, and to prove that the set is convex we use the definition, which requires: $$ \alpha(x_P,y_P)+(1-\alpha)(x_Q,y_Q) \in \Omega $$ that is: $$ \left[\alpha x_P+(1-\alpha)x_Q \right]^2+\left[\alpha y_P+(1-\alpha)y_Q \right]^2\leq 1 $$</p> <p>Can you prove that this is true for all $P,Q$ such that $x_P^2+y_P^2\leq1$ and $x_Q^2+y_Q^2\leq1$, and for $\alpha \in [0,1]$?</p> <p>Note that $x_P^2+y_P^2$ is the square of the norm of the vector $\overrightarrow {OP}$ and use the <a href="https://en.wikipedia.org/wiki/Triangle_inequality" rel="nofollow noreferrer">triangle inequality</a> for the vector $\alpha \overrightarrow {OP}+(1-\alpha)\overrightarrow {OQ}$.</p> <p>Note that this set is a disc of radius $r=1$ centered at the origin, which contains the segment joining any two of its points. This is the meaning of the definition of a convex set.</p>
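<p>(Carrying out the suggested step, added for reference and not part of the original hint: by the triangle inequality and homogeneity of the norm, $$\|\alpha \overrightarrow{OP} + (1-\alpha)\overrightarrow{OQ}\| \le \alpha\|\overrightarrow{OP}\| + (1-\alpha)\|\overrightarrow{OQ}\| \le \alpha + (1-\alpha) = 1,$$ and squaring both sides gives exactly the inequality required by the definition, so $\Omega$ is convex.)</p>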
4,489,898
<p>After 18 months of studying an advanced junior high school mathematics course, I'm doing a review of the previous 6 months, starting with solving difficult quadratics that are not easily factored, for example: <span class="math-container">$$x^2+6x+2=0$$</span> This could be processed via the quadratic equation but the course I'm working through asks me to use the complete the square method. I can do it, and I appreciate the geometric illustration of what is happening.</p> <p>But it's so powerful and elegant, I can't help but wonder where else this method of adding something into an expression only to take it away in another is employed in mathematics. And is there a name for the general case of this kind of operation?</p> <p><a href="https://i.stack.imgur.com/I0fmi.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/I0fmi.jpg" alt="geometric complete the square" /></a></p>
KCd
619
<p>The method of inserting <span class="math-container">$0$</span> in a clever way by adding and subtracting a term is used many times in analysis. I learned long ago to call this clever use of <span class="math-container">$0$</span> a &quot;propitious zero&quot; and others have been taught that term too: look <a href="https://math.stackexchange.com/questions/2142889/translation-suggestion-of-an-operation-adding-and-subtracting-the-same-term">here</a> and <a href="https://math.stackexchange.com/questions/1827836/finding-partial-fractions-expansions-mentally">here</a>.</p> <p>I'll apply this idea to products, reciprocals, and to the terms in a sequence rewritten as a series.</p> <p>Example 1. Let's prove continuity of multiplication of real numbers. If <span class="math-container">$x$</span> is near <span class="math-container">$a$</span> and <span class="math-container">$y$</span> is near <span class="math-container">$b$</span> then we want to show <span class="math-container">$xy$</span> is near <span class="math-container">$ab$</span>. A standard way to do this is to write <span class="math-container">$$ xy - ab = (xy - ay) + (ay - ab) = (x-a)y + a(y-b) $$</span> and then <span class="math-container">$$ (x-a)y = (x-a)(y-b+b) = (x-a)(y-b) + (x-a)b, $$</span> so <span class="math-container">$$ xy - ab = (x-a)(y-b) + (x-a)b + a(y-b). $$</span> On the right side <span class="math-container">$x$</span> and <span class="math-container">$y$</span> only show up in the context of <span class="math-container">$x-a$</span> and <span class="math-container">$y-b$</span>, so by choosing <span class="math-container">$x$</span> and <span class="math-container">$y$</span> so that <span class="math-container">$|x-a|$</span> and <span class="math-container">$|y-b|$</span> are sufficiently small, we can make the right side arbitrarily close to <span class="math-container">$0$</span>. Thus multiplication as a mapping <span class="math-container">$\mathbf R^2 \to \mathbf R$</span> is continuous at each point <span class="math-container">$(a,b)$</span> in <span class="math-container">$\mathbf R^2$</span>.</p> <p>A similar argument shows other multiplication operations (on <span class="math-container">$\mathbf C$</span>, on <span class="math-container">${\rm M}_n(\mathbf R)$</span>, etc.) are continuous.</p> <p>UPDATE: the answer by CR Drost reminds me that a propitious zero occurs in the <em>proof</em> of the product rule from calculus for the derivative <span class="math-container">$(u(t)v(t))'$</span> in exactly the same way as in the last identity above for <span class="math-container">$xy - ab$</span>. In that identity, replace <span class="math-container">$a$</span> and <span class="math-container">$b$</span> with <span class="math-container">$u(t)$</span> and <span class="math-container">$v(t)$</span> and replace <span class="math-container">$x$</span> and <span class="math-container">$y$</span> with <span class="math-container">$u(t+h)$</span> and <span class="math-container">$v(t+h)$</span>. It tells us that <span class="math-container">$u(t+h)v(t+h) - u(t)v(t)$</span> equals <span class="math-container">$$ (u(t+h) - u(t))(v(t+h)-v(t)) + (u(t+h) - u(t))v(t) + u(t)(v(t+h)-v(t)). $$</span> Divide by <span class="math-container">$h$</span> and let <span class="math-container">$h \to 0$</span> to get in the limit <span class="math-container">$$ u'(t)0 + u'(t)v(t) + u(t)v'(t) = u'(t)v(t) + u(t)v'(t). $$</span></p> <p>Example 2. Let's prove continuity of inversion on the nonzero real numbers. 
If <span class="math-container">$a \not= 0$</span> and <span class="math-container">$x$</span> is close enough to <span class="math-container">$a$</span>, we want to show <span class="math-container">$1/x$</span> is close to <span class="math-container">$1/a$</span>. To begin, let's suppose <span class="math-container">$|x-a| &lt; |a|$</span>, so <span class="math-container">$x$</span> is inside the open interval around <span class="math-container">$a$</span> of radius <span class="math-container">$a$</span> and thus <span class="math-container">$x \not= 0$</span>. We have <span class="math-container">$$ \left|\frac{1}{x} - \frac{1}{a}\right| = \frac{|x-a|}{|x||a|}. $$</span> On the right side, in the numerator <span class="math-container">$x$</span> appears only in the context of <span class="math-container">$x-a$</span>, which is great. For the denominator, we want to get a (positive) lower bound on <span class="math-container">$|x|$</span> in terms of <span class="math-container">$|x-a|$</span> in order to get an upper bound on <span class="math-container">$1/|x|$</span>. It's time for a propitious zero: <span class="math-container">$$ |a| = |a-x+x| \leq |a-x| + |x| \Longrightarrow |x| \geq |a| - |a-x| = |a| - |x-a|. $$</span> As long as <span class="math-container">$|x-a| &lt; |a|$</span>, that lower bound is positive, so <span class="math-container">$$ |x-a| &lt; |a| \Longrightarrow \left|\frac{1}{x} - \frac{1}{a}\right| = \frac{|x-a|}{|x||a|} \leq \frac{|x-a|}{(|a| - |x-a|)|a|}. $$</span> The right side goes to <span class="math-container">$0$</span> as <span class="math-container">$|x-a| \to 0$</span> (with <span class="math-container">$a$</span> fixed). Concretely, sharpen <span class="math-container">$|x-a|&lt; |a|$</span> to <span class="math-container">$|x-a| \leq |a|/2$</span> and we get <span class="math-container">$|a| - |x-a| \geq |a| - |a|/2 = |a|/2$</span>, so <span class="math-container">$$ \left|\frac{1}{x} - \frac{1}{a}\right| \leq \frac{|x-a|}{|a|^2/2} = \frac{2}{|a|^2}|x-a|. $$</span></p> <p>A similar argument shows inversion is continuous on <span class="math-container">$\mathbf C^\times$</span> and <span class="math-container">${\rm GL}_n(\mathbf R)$</span>, although some extra care is needed for the matrix case (when <span class="math-container">$n &gt; 1$</span>) since matrix multiplication is not commutative.</p> <p>Example 3: If <span class="math-container">$\{a_n\}$</span> is a sequence of numbers where <span class="math-container">$|a_n - a_{n+1}| \leq 1/2^n$</span>, we can write each <span class="math-container">$a_m$</span> as a telescoping sum of the differences <span class="math-container">$a_n - a_{n+1}$</span> for <span class="math-container">$n \geq m$</span>, which amounts to using infinitely many propitious zeros: <span class="math-container">$$ a_m = (a_m - a_{m+1}) + (a_{m+1} - a_{m+2}) + (a_{m+2} - a_{m+3}) + \cdots = \sum_{k \geq m} (a_k - a_{k+1}). $$</span> This by itself does not seem very interesting, but using this idea with functions in place of numbers is how you prove in measure theory that an <span class="math-container">$L^1$</span>-convergent sequence of functions has a subsequence that is pointwise convergent almost everywhere. The argument for that is written in the accepted answer <a href="https://math.stackexchange.com/questions/714744/l1-convergence-gives-a-pointwise-convergent-subsequence">here</a>.</p>
1,515,417
<p>I understand the idea that some infinities are "bigger" than other infinities. The example I understand is that all real numbers between 0 and 1 would not be able to "fit" on an infinite list.</p> <p>I have to show whether these sets are countable or uncountable. If countable, how would you enumerate the set? If uncountable, how would you prove using diagonalization? </p> <p>Set 1. All real numbers represented only by 1's. EX) 1, .11, 111.11, 1.111...</p> <p>Set 2. All real numbers represented only by 2's and 3's. EX) .2, 23.2, 22.2232...</p>
John Douma
69,810
<p>Set $2$ can be put into one-to-one correspondence with the binary representation of the reals by the map that takes $2$ to $0$ and $3$ to $1$. Thus, this set has the same cardinality as $\mathbb R$ which is uncountable.</p>
2,493,481
<p>I'm currently studying calculus of variations. I couldn't find a rigorous definition of a functional on this site.</p> <ol> <li>What is the general definition of a functional?</li> <li>Why, in the calculus of variations in physics, must the functional be <em>a convex function</em> on the space of admissible functions?</li> </ol>
md2perpe
168,433
<p>Let $\mathscr F$ be a functional of the form $$\mathscr F(y) = \int_a^b f(x, y(x), y'(x)) \, dx.$$</p> <p>We want to find a function $y_0$ that gives a local minimum of $\mathscr F,$ i.e. if we take a "close" function $y_0+\delta y$ then $\mathscr F(y_0+\delta y)$ will not be as small.</p> <p>The idea is to let $\delta y = \lambda\eta,$ where $\eta$ is some function that is non-zero only in a small region and $\lambda$ is a real parameter. For a fixed $\eta$, then $\mathscr F(y_0+\lambda\eta)$ is a function of $\lambda$ which should have minimum for $\lambda=0.$</p> <p>Therefore we take the derivative of $\mathscr F(y_0+\lambda\eta)$: $$ \frac{d}{d\lambda} \mathscr F(y_0+\lambda\eta) = \frac{d}{d\lambda} \int_a^b f(x, y_0(x)+\lambda\eta(x), y_0'(x)+\lambda\eta'(x)) \, dx \\ = \int_a^b \frac{\partial}{\partial\lambda} f(x, y_0(x)+\lambda\eta(x), y_0'(x)+\lambda\eta'(x)) \, dx \\ = \int_a^b \left( \frac{\partial f}{\partial y}(\cdots) \, \eta(x) + \frac{\partial f}{\partial y'}(\cdots) \, \eta'(x) \right) \, dx \\ $$ where $\frac{\partial f}{\partial y}$ is the partial derivative of $f$ with respect to its second argument (which is $y(x)$ in the defining equation for $\mathscr F$) and $\frac{\partial f}{\partial y'}$ is the partial derivative of $f$ with respect to its third argument (which is $y'(x)$ in the defining equation for $\mathscr F$). Also, $(\cdots)$ stands for $(x, y_0(x)+\lambda\eta(x), y_0'(x)+\lambda\eta'(x)).$</p> <p>Now we use partial integration to remove the derivative from $\eta'(x)$: $$ \int_a^b \left( \frac{\partial f}{\partial y}(\cdots) \, \eta(x) + \frac{\partial f}{\partial y'}(\cdots) \, \eta'(x) \right) \, dx \\ = \int_a^b \left( \frac{\partial f}{\partial y}(\cdots) \, \eta(x) - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}(\cdots)\right) \, \eta(x) \right) \, dx \\ = \int_a^b \left( \frac{\partial f}{\partial y}(\cdots) - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}(\cdots)\right) \right) \, \eta(x) \, dx $$ if $\eta(a) = \eta(b) = 0$ (remember that we said that $\eta$ should be non-zero only in a small region).</p> <p>We shall have a minimum for $\lambda=0$ so $$ 0 = \frac{d}{d\lambda} \mathscr F(y_0+\lambda\eta) = \int_a^b \left( \frac{\partial f}{\partial y}(x, y_0(x), y_0'(x)) - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}(x, y_0(x), y_0'(x))\right) \right) \, \eta(x) \, dx $$</p> <p>This shall be valid for any choice of $\eta$ which requires $$0 = \frac{\partial f}{\partial y}(x, y_0(x), y_0'(x)) - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}(x, y_0(x), y_0'(x))\right)$$</p> <p>Why so? Because if the above expression isn't $0$ everywhere then there exists some interval where it is non-zero (say positive), and then we can take $\eta$ to be positive inside that interval and zero outside of it. Such an $\eta$ would make the integral non-zero, and we get a contradiction.</p>
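<p>As a quick illustration of how this equation is used (a standard example, with $f$ chosen just for concreteness): take $f(x, y, y') = \sqrt{1 + y'^2}$, so that $\mathscr F(y)$ is the length of the graph of $y$ from $x=a$ to $x=b$. Then $\frac{\partial f}{\partial y} = 0$ and $\frac{\partial f}{\partial y'} = \frac{y'}{\sqrt{1+y'^2}}$, so the condition above reads $$ 0 = -\frac{d}{dx}\left(\frac{y_0'(x)}{\sqrt{1+y_0'(x)^2}}\right), $$ i.e. $\frac{y_0'}{\sqrt{1+y_0'^2}}$ is constant, hence $y_0'$ is constant and the extremals are straight lines, as one would expect for shortest paths.</p>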
5,363
<p>There is something in the definition of the <a href="http://en.wikipedia.org/wiki/Free_product" rel="nofollow noreferrer">free product</a> of two groups that annoys me, and it's this "word" thing:</p> <blockquote> <p>If <span class="math-container">$G$</span> and <span class="math-container">$H$</span> are groups, a word in <span class="math-container">$G$</span> and <span class="math-container">$H$</span> is a product of the form</p> <p><span class="math-container">$$ s_1 s_2 \dots s_m, $$</span></p> <p>where each <span class="math-container">$s_i$</span> is either an element of <span class="math-container">$G$</span> or an element of <span class="math-container">$H$</span>. </p> </blockquote> <p>So what is this "word" guy? Does it come out of the blue? Does it come from some sort of new operation that I can perform with the two sets <span class="math-container">$G$</span> and <span class="math-container">$H$</span> -in addition to the well-known ones of union, intersection, Cartesian product...?</p> <p>Fortunatelly, I think there is nothing new under the sun of set operations: it's easy to realise that words can be identified with elements of some Cartesian product (see below):</p> <p><span class="math-container">$$ (s_1, s_2, \dots , s_m ) \ . $$</span></p> <p>And Cartesian product is a well-established set-theoretical operation.</p> <p>So I tried to translate the rest of Wikipedia's definition </p> <blockquote> <p>Such a word may be <em>reduced</em> using the following operations:</p> <p>Remove an instance of the identity element (of either <span class="math-container">$G$</span> or <span class="math-container">$H$</span>). Replace a pair of the form <span class="math-container">$g_1g_2$</span> by its product in <span class="math-container">$G$</span>, or a pair <span class="math-container">$h_1h_2$</span> by its product in <span class="math-container">$H$</span>.</p> <p>Every reduced word is an alternating product of elements of <span class="math-container">$G$</span> and elements of <span class="math-container">$H$</span>, e.g.</p> <p><span class="math-container">$$ g_1 h_1 g_2 h_2 \dots g_r h_r. $$</span></p> <p>The free product <span class="math-container">$G ∗ H$</span> is the group whose elements are the reduced words in <span class="math-container">$G$</span> and <span class="math-container">$H$</span>, under the operation of concatenation followed by reduction.</p> </blockquote> <p>in an elementary set setting. 
First, consider the set of "unreduced" tuples of elements of <span class="math-container">$G$</span> and <span class="math-container">$H$</span></p> <p><span class="math-container">$$ U = G \sqcup H \sqcup (G\times G) \times (G\times H) \sqcup (H\times G) \sqcup (H\times H) \sqcup (G\times G \times G) \sqcup \dots $$</span></p> <p>More concisely:</p> <hr> <p>EDIT:</p> <p>I think the following formula may be less messier than the one I wrote previously:</p> <p><span class="math-container">$$ U = \bigsqcup_{r \geq 1} (S_1 \times \cdots \times S_r), $$</span></p> <p>where <span class="math-container">$S_i = G$</span> or <span class="math-container">$S_i = H$</span>.</p> <hr> <p>So, elements of <span class="math-container">$U$</span> are ordered tuples (<em>unreduced</em> ones)</p> <p><span class="math-container">$$ (s_1, s_2, \dots , s_m), $$</span></p> <p>where each <span class="math-container">$s_i$</span> is either an element of <span class="math-container">$G$</span> or an element of <span class="math-container">$H$</span>.</p> <p>The product of two unreduced tuples is defined by concatenation</p> <p><span class="math-container">$$ (s_1, \dots , s_m) \cdot (t_1, \dots , t_n) = (s_1, \dots , s_m, t_1 , \dots , t_n) \ . $$</span></p> <p>Now, consider the following equivalence relation in the set of unreduced tuples <span class="math-container">$U$</span>:</p> <p><span class="math-container">$$ (s_1, s_2, \dots , s_{i-1}, 1, s_{i+1}, \dots , s_n) \sim (s_1, s_2, \dots, s_{i-1}, s_i, \dots , s_n) \ , $$</span></p> <p>where <span class="math-container">$1$</span> is either the unit element of <span class="math-container">$G$</span> or the one of <span class="math-container">$H$</span>. And</p> <p><span class="math-container">$$ (s_1, s_2, \dots , s_i,s_{i+1}, \dots , s_r) \sim (s_1, s_2, \dots , s_is_{i+1}, \dots , s_r ) $$</span></p> <p>whenever two adjacent <span class="math-container">$s_i, s_{i+1} \in G$</span> or <span class="math-container">$s_i, s_{i+1} \in H$</span> at the same time. </p> <p>If you want, you may call the equivalence class of a tuple under this equivalence relation a <em>reduced</em> tuple. So every reduced tuple is an alternating one, </p> <p><span class="math-container">$$ (g_1, h_1, \dots , g_r , h_r) \ , $$</span></p> <p>with <span class="math-container">$g_i \in G$</span> and <span class="math-container">$h_i \in H$</span> for all <span class="math-container">$i = 1, \dots , r$</span>.</p> <p>Define the free product of <span class="math-container">$G$</span> and <span class="math-container">$H$</span> as the quotient:</p> <p><span class="math-container">$$ G*H = U/\sim \ . $$</span></p> <p>Finally, one verifies that concatenation is well-defined on unreduced tuples and gives <span class="math-container">$G*H$</span> a group structure.</p> <p>After performing this elementary exercise I understand perfectly well why nobody defines the free product in this way, but I still wanted to ask:</p> <ol> <li>Is this correct?</li> <li>Is it written somewhere?</li> </ol>
Martin Brandenburg
1,650
<p>In the question above, it seems that the description is far too complicated as it actually is. I will elaborate on the construction of coproducts of groups or monoids a bit. This will go beyond the scope of the specific question, but I hope it's helpful in other situations.</p> <p>First assume $M,N$ are sets. What is their coproduct? Obviously it's the disjoint union, which I will denote by $M + N$. Now assume $M,N$ are monoids. What is their coproduct? Well we have to introduce products of elements in $M + N$. Thus consider $(M + N)^{&lt;\omega}$, the set of finite sequences with entries in $M + N$. This is a monoid by concatenation, but the inclusions $M,N \to (M + N)^{&lt;\omega}$ are no homomorphisms. Well, then let's force it! Mod out the smallest congruence relation $\sim$ on our monoid, which satisfies $1 \sim (1_M), (m,m') \sim (mm')$ for all $m,m' \in M$ and similarily for elements of $N$. Then the quotient $(M + N)^{&lt;\omega} / \sim$ is obviously a monoid (since $\sim$ is a congruence relation), and the universal property of the coproduct is also verified easily.</p> <p>Now what about groups? The correct definition of a group involves the operations of the underlying monoid and the inversion. But in practice, this inversion can be reconstructed from the rest of the data ($x=y^{-1}$ iff $xy=1$), and actually the category of groups is a full subcategory of the category of monoids. If the monoids $M,N$ above happen to be groups, their coproduct turns out to be a group, and clearly the universal property then also holds with respect to groups. Thus we have constructed the coproduct in the category of groups.</p> <p>Let's turn back to monoids. The above construction of $M \coprod N$ is rather general, it shows the existence of colimits in every finitary algebraic category, but what can be said about the <em>elements</em>? Do they have a canonical representation? Now there are two ways of doing it:</p> <p>a) The elegant, short, "geometric" one.</p> <p>b) The long, tedious one. This one is preferred in textbooks ...</p> <p>In our case, b) means that you write down the set of reduced words, endow it with a terribly complicated monoid structure, show that every monoid axiom is satisfied, and finally check the universal property. Good luck.</p> <p>In a) you just use that $M \coprod N$ exists. We have constructed it, but the construction does not answer such simple questions as: Do $M$ and $N$ intersect in $M \coprod N$ trivially? Therefore we just use the existence of the coproduct together with the structure maps $i : M \to M \coprod N, j : N \to M \coprod N$. The idea is now to define an action of $M \coprod N$ on another object. This may be a geometric one, but in our case it's our desired set of reduced words.</p> <p>Observe that the elements in $M \coprod N$, which are products of elements, which are in the image of $i$ or $j$, constitute a submonoid, which verifies the same universal property. In other words, every element in $M \coprod N$ is such a product. In such a product we may replace $i(m) i(m')$ by $i(mm')$ and cancel $i(1)$, similarily for $j$. Thus every element is in the image of the canonical map $X \to M \coprod N$, where</p> <p>$X := \{(... ,m_1,n_1,m_2,n_2, ...) : m_i \in M - \{1\}, n_i \in N - \{1\}\}.$</p> <p>Now we prove that this map $X \to M \coprod N$ is a bijection, i.e. every element of the coproduct has a <em>unique</em> representation as $... i(m_1) j(n_1) i(m_2) ...$. To do this, we define an action of the monoid $M \coprod N$ on $X$, i.e. 
a monoid homomorphism $M \coprod N \to End(X), (m \mapsto (x \mapsto mx))$, which should imitate the usual multiplication. By the universal property, it is enough to construct this homomorphism on $M$ and on $N$. If $m \in M$ and $x \in X$, define $mx$ as follows: If $x$ starts with an element in $N$, just concatenate with $(m)$. If $x$ starts with an element of $M$ which is not inverse to $m$, then multiply $m$ into the first entry. Otherwise delete the first entry. Similarly the homomorphism $N \to End(X)$ is defined. The resulting homomorphism $M \coprod N \to End(X)$ can be composed with the evaluation at the empty sequence to get a map $M \coprod N \to X$, which turns out to be a left inverse to $X \to M \coprod N$. Thus, $X \to M \coprod N$ is a bijection.</p>
5,363
<p>There is something in the definition of the <a href="http://en.wikipedia.org/wiki/Free_product" rel="nofollow noreferrer">free product</a> of two groups that annoys me, and it's this "word" thing:</p> <blockquote> <p>If <span class="math-container">$G$</span> and <span class="math-container">$H$</span> are groups, a word in <span class="math-container">$G$</span> and <span class="math-container">$H$</span> is a product of the form</p> <p><span class="math-container">$$ s_1 s_2 \dots s_m, $$</span></p> <p>where each <span class="math-container">$s_i$</span> is either an element of <span class="math-container">$G$</span> or an element of <span class="math-container">$H$</span>. </p> </blockquote> <p>So what is this "word" guy? Does it come out of the blue? Does it come from some sort of new operation that I can perform with the two sets <span class="math-container">$G$</span> and <span class="math-container">$H$</span> -in addition to the well-known ones of union, intersection, Cartesian product...?</p> <p>Fortunatelly, I think there is nothing new under the sun of set operations: it's easy to realise that words can be identified with elements of some Cartesian product (see below):</p> <p><span class="math-container">$$ (s_1, s_2, \dots , s_m ) \ . $$</span></p> <p>And Cartesian product is a well-established set-theoretical operation.</p> <p>So I tried to translate the rest of Wikipedia's definition </p> <blockquote> <p>Such a word may be <em>reduced</em> using the following operations:</p> <p>Remove an instance of the identity element (of either <span class="math-container">$G$</span> or <span class="math-container">$H$</span>). Replace a pair of the form <span class="math-container">$g_1g_2$</span> by its product in <span class="math-container">$G$</span>, or a pair <span class="math-container">$h_1h_2$</span> by its product in <span class="math-container">$H$</span>.</p> <p>Every reduced word is an alternating product of elements of <span class="math-container">$G$</span> and elements of <span class="math-container">$H$</span>, e.g.</p> <p><span class="math-container">$$ g_1 h_1 g_2 h_2 \dots g_r h_r. $$</span></p> <p>The free product <span class="math-container">$G ∗ H$</span> is the group whose elements are the reduced words in <span class="math-container">$G$</span> and <span class="math-container">$H$</span>, under the operation of concatenation followed by reduction.</p> </blockquote> <p>in an elementary set setting. 
First, consider the set of "unreduced" tuples of elements of <span class="math-container">$G$</span> and <span class="math-container">$H$</span></p> <p><span class="math-container">$$ U = G \sqcup H \sqcup (G\times G) \times (G\times H) \sqcup (H\times G) \sqcup (H\times H) \sqcup (G\times G \times G) \sqcup \dots $$</span></p> <p>More concisely:</p> <hr> <p>EDIT:</p> <p>I think the following formula may be less messier than the one I wrote previously:</p> <p><span class="math-container">$$ U = \bigsqcup_{r \geq 1} (S_1 \times \cdots \times S_r), $$</span></p> <p>where <span class="math-container">$S_i = G$</span> or <span class="math-container">$S_i = H$</span>.</p> <hr> <p>So, elements of <span class="math-container">$U$</span> are ordered tuples (<em>unreduced</em> ones)</p> <p><span class="math-container">$$ (s_1, s_2, \dots , s_m), $$</span></p> <p>where each <span class="math-container">$s_i$</span> is either an element of <span class="math-container">$G$</span> or an element of <span class="math-container">$H$</span>.</p> <p>The product of two unreduced tuples is defined by concatenation</p> <p><span class="math-container">$$ (s_1, \dots , s_m) \cdot (t_1, \dots , t_n) = (s_1, \dots , s_m, t_1 , \dots , t_n) \ . $$</span></p> <p>Now, consider the following equivalence relation in the set of unreduced tuples <span class="math-container">$U$</span>:</p> <p><span class="math-container">$$ (s_1, s_2, \dots , s_{i-1}, 1, s_{i+1}, \dots , s_n) \sim (s_1, s_2, \dots, s_{i-1}, s_i, \dots , s_n) \ , $$</span></p> <p>where <span class="math-container">$1$</span> is either the unit element of <span class="math-container">$G$</span> or the one of <span class="math-container">$H$</span>. And</p> <p><span class="math-container">$$ (s_1, s_2, \dots , s_i,s_{i+1}, \dots , s_r) \sim (s_1, s_2, \dots , s_is_{i+1}, \dots , s_r ) $$</span></p> <p>whenever two adjacent <span class="math-container">$s_i, s_{i+1} \in G$</span> or <span class="math-container">$s_i, s_{i+1} \in H$</span> at the same time. </p> <p>If you want, you may call the equivalence class of a tuple under this equivalence relation a <em>reduced</em> tuple. So every reduced tuple is an alternating one, </p> <p><span class="math-container">$$ (g_1, h_1, \dots , g_r , h_r) \ , $$</span></p> <p>with <span class="math-container">$g_i \in G$</span> and <span class="math-container">$h_i \in H$</span> for all <span class="math-container">$i = 1, \dots , r$</span>.</p> <p>Define the free product of <span class="math-container">$G$</span> and <span class="math-container">$H$</span> as the quotient:</p> <p><span class="math-container">$$ G*H = U/\sim \ . $$</span></p> <p>Finally, one verifies that concatenation is well-defined on unreduced tuples and gives <span class="math-container">$G*H$</span> a group structure.</p> <p>After performing this elementary exercise I understand perfectly well why nobody defines the free product in this way, but I still wanted to ask:</p> <ol> <li>Is this correct?</li> <li>Is it written somewhere?</li> </ol>
Arturo Magidin
742
<p>You can see essentially the same construction in two different ways in <a href="http://math.berkeley.edu/~gbergman/245/">George Bergman's <em>An Invitation to General Algebra and Universal Constructions</em></a> in <a href="http://math.berkeley.edu/~gbergman/245/Ch.2.ps">Chapter 2</a> (link is to a postscript file) for the free group. </p> <p>First, you define <em>"the set of all terms in the elements of the set $X$ under the formal group operations $\mu$, $i$, $e$"</em> to mean a set which is given with functions symb${}_T\colon X\to T$, $\mu_T\colon T^2\to T$, $i_T\colon T\to T$, and $e_T\colon T^0\to T$, such that each of these maps is one-to-one, their images are disjoint, and $T$ is the union of the images, and $T$ is generated by symb${}_T(X)$ under the operations $\mu_T$, $i_T$, and $e_T$. Such a set exists (it can be constructed inductively with enough care; given in <a href="http://math.berkeley.edu/~gbergman/245/Ch.1.ps">Chapter 1</a> of the same notes). Then one defines an apropriate equivalence relation $\sim$ on $T$; the set $T/\sim$ gives the underlying set of the free group, and one defines the operations in the free group via representatives in the natural way. Bergman labels this "the logician's approach" (section 2.2). </p> <p>An alternative construction ("the classical construction", section 2.4) gives "free groups as groups of words". Again, you start with a set $X$, and let $T$ be the set of all group-theoretic terms of $X$; identify $X$ with its image under symb, and one defines a subset $T_{red}$ of "reduced terms" (defining what this means appropriately) and then defining operations $\otimes$, ${}^{(-)}$, and $e_T$ on this set to make it into a group. Proving it is a group can be done either in the straightforward but tedious way, or by using "van der Waerden's trick" (embed the set $T_{red}$ into a group of permutations, and check that the operations you defined correspond to the operations in the image, so that "group"-ness gets inherited). </p> <p>To get the free product, you let $X$ be the disjoint union of the underlying sets of $G$ and $H$, and either adds to the equivalence relation (in the "logician's approach"), or restricts the definition of "reduced words" (in the "classical approach"), in essentially the way you did. </p>
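<p>If it helps to see "concatenation followed by reduction" concretely, here is a minimal computational sketch; the encoding of letters as tagged pairs and the choice of $G=\mathbf{Z}/2$ and $H=\mathbf{Z}/3$ (written additively) are only illustrative assumptions, not part of either construction above.</p>

<pre><code># A small sketch of "concatenation followed by reduction" in G*H,
# with G = Z/2 and H = Z/3 (both written additively) as toy factors.

IDENT = {"G": 0, "H": 0}                    # identity element of each factor
OP = {"G": lambda a, b: (a + b) % 2,        # group law of G = Z/2
      "H": lambda a, b: (a + b) % 3}        # group law of H = Z/3

def reduce_word(word):
    """Reduce a tuple of letters (tag, value), tag in {"G", "H"}."""
    out = []
    for tag, val in word:
        if val == IDENT[tag]:               # drop identity letters
            continue
        if out and out[-1][0] == tag:       # merge adjacent letters of the same factor
            merged = OP[tag](out[-1][1], val)
            out.pop()
            if merged != IDENT[tag]:
                out.append((tag, merged))
        else:
            out.append((tag, val))
    return tuple(out)

def multiply(w1, w2):
    """Product in G*H: concatenate the words, then reduce."""
    return reduce_word(tuple(w1) + tuple(w2))

# Example: the word (g, h) times the word (h', g) collapses completely,
# since 1 + 2 = 0 in Z/3 and then 1 + 1 = 0 in Z/2.
w1 = (("G", 1), ("H", 1))
w2 = (("H", 2), ("G", 1))
print(multiply(w1, w2))                     # prints the empty word ()
</code></pre>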
2,906,350
<p>I have already consulted V.K. Rohatgi, and it has an example that takes $Y=X^a$ where $a&gt;0$, but there the domain of $X$ is the positive real values.<br> Even the theorem for the transformation of continuous random variables restricts the derivative of $Y$ w.r.t. $X$ to be positive or negative on the entire domain of $X$, whereas in the case $Y = X^3$ the derivative of $Y$ w.r.t. $X$ is $0$ at $X=0$.<br> I am unable to proceed further.</p>
Daman
519,543
<p>I've been doing some questions from Rohatgi also. It's a very good book. To answer your question: we are given $X\sim N(0,1)$, $f(x)=\dfrac{1}{\sqrt{2\pi}}e^{-\frac{x^{2}}{2}} ;-\infty&lt;x&lt;\infty$.</p> <p>$Y=X^3$ is a monotone function, and for a monotone function you can directly apply the transformation formula given to us, which is</p> <p>$f(y)=f(x)\bigg|\dfrac{dx}{dy}\bigg|$</p> <p>$Y=X^3 \implies X=Y^{\frac{1}{3}}$</p> <p>$\dfrac{dx}{dy}=\dfrac{1}{3}y^{-\frac{2}{3}}$</p> <p>$f(x)=f(y^{\frac{1}{3}})$</p> <p>Combining these, we have $f(y)=f(y^{\frac{1}{3}})\dfrac{1}{3}y^{-\frac{2}{3}}=\dfrac{1}{\sqrt{2\pi}}e^{-\frac{y^{2/3}}{2}}\cdot\dfrac{1}{3}y^{-\frac{2}{3}} ;-\infty&lt;y&lt;\infty$</p>
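<p>A quick numerical sanity check of this density (using SciPy; the helper names below are mine, just for illustration): the exact cdf is $F_Y(y)=\Phi(y^{1/3})$, so the formula above should match a centered finite difference of that cdf.</p>

<pre><code># Compare the candidate density of Y = X^3, X ~ N(0,1),
# with a finite difference of the exact CDF F_Y(y) = Phi(cuberoot(y)).
import math
from scipy.stats import norm

def cuberoot(y):
    return math.copysign(abs(y) ** (1.0 / 3.0), y)   # real cube root

def f_Y(y):
    return norm.pdf(cuberoot(y)) / (3.0 * abs(y) ** (2.0 / 3.0))

def F_Y(y):
    return norm.cdf(cuberoot(y))

h = 1e-6
for y in (-2.0, -0.5, 0.7, 3.0):
    fd = (F_Y(y + h) - F_Y(y - h)) / (2 * h)          # numerical derivative of the CDF
    print(y, f_Y(y), fd)                              # the last two columns agree closely
</code></pre>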
719,056
<p>I'm writing a Maple procedure and I have a line that is "if ... then ..." and I would like it to be if k is an integer (or if k^2 is a square) - how would one say that in Maple? Thanks!</p>
Community
-1
<p>In terms of differentials (in the single-variable case), $\frac{dy}{dx}$ is the unique scalar with the property that $\frac{dy}{dx}dx = dy$.</p> <p>$\frac{dy}{du} \frac{du}{dx}$ therefore has the property that</p> <p>$$\frac{dy}{du} \frac{du}{dx} dx = \frac{dy}{du} du = dy $$</p> <p>therefore $\frac{dy}{du}\frac{du}{dx} = \frac{dy}{dx}$.</p> <p>You can't really justify the result by rearranging the expression like you would with fractions (e.g. by combining them into a single 'fraction'), which is what people mean when they say things like "you can't just cancel them". However, you can still prove (again this only makes sense in the single-variable case) that rearrangements are equal: e.g.</p> <p>$$ \frac{dw}{dx} \frac{dy}{dz} = \frac{dw}{dz} \frac{dy}{dx} $$</p> <p>(note that you could use this identity to prove your identity, because $\frac{du}{du} = 1$)</p>
689,546
<p>I know this problem involves using Cantor's theorem, but I'm not sure how to show that there are more subsets of an infinite enumerable set than there are positive integers. It seems like a lot of these problems are really the same problem, but they require some unique and creative thought to get them just right. Any idea how I can solve these more quickly? What train of thought do you go though when working with this specific example?</p>
João Víctor Melo
852,373
<p>Another way to make it clearer is to identify each subset of an enumerable set with its characteristic sequence of <span class="math-container">$0$</span>'s and <span class="math-container">$1$</span>'s, and to suppose these sequences could be listed as <span class="math-container">$ s_1, s_2, s_3, \dots$</span>; now, it's evident that we can take the first digit of <span class="math-container">$s_1$</span>, the second digit of <span class="math-container">$s_2$</span>, and so on, and flip each one, so that we obtain a new <span class="math-container">$s$</span> that differs from every <span class="math-container">$s_i$</span> and hence isn't in the list; moreover, even if we tried to add it to the list, we could do the same thing again; hence the set of all subsets is uncountable. Another way to see it is to regard these <span class="math-container">$s$</span>'s as binary representations of the numbers in the interval <span class="math-container">$(0,1)$</span>, which is clearly a non-enumerable set.</p> <p><span class="math-container">$ s_1 = (+0, 0, 0, 0, 0, 0, 0, ...)\\ s_2 = (1, +1, 1, 1, 1, 1, 1, ...)\\ s_3 = (0, 1, +0, 1, 0, 1, 0, ...)\\ s_4 = (1, 0, 1, +0, 1, 0, 1, ...)\\ s_5 = (1, 1, 0, 1, +0, 1, 1, ...)\\ s_6 = (0, 0, 1, 1, 0, +1, 1, ...)\\ s_7 = (1, 0, 0, 0, 1, 0, +0, ...)\\ ... \\ s = (1, 0, 1, 1, 1, 0, 1, ...)$</span></p> <p><em>obs.: the <span class="math-container">$+$</span> sign highlights the diagonal digits, each of which is flipped to form the new number <span class="math-container">$s$</span> according to the rule mentioned above</em>.</p> <p>The proof was given by Cantor, and the method is known as Cantor's diagonal method. The link for it is in:</p> <p><a href="https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument</a></p>
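<p>A minimal sketch of the same flipping rule applied to the seven rows listed above (just the finite truncation shown, to make the rule explicit):</p>

<pre><code># Flip the n-th digit of the n-th sequence to build the new s.
rows = [
    [0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 1, 1],
    [0, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 0, 1, 0, 0],
]
s = [1 - rows[n][n] for n in range(len(rows))]   # flip each diagonal digit
print(s)                                          # [1, 0, 1, 1, 1, 0, 1]
# s differs from the n-th row in its n-th digit, so it is not one of the rows
assert all(s[n] != rows[n][n] for n in range(len(rows)))
</code></pre>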
2,078,737
<p>I would gladly appreciate an explanation of how to do so for this matrix:</p> <p>$$ \begin{pmatrix} i &amp; 0 \\ 0 &amp; i \\ \end{pmatrix} $$</p> <p>I got as far as calculating the eigenvalues and came up with $λ = i$. When trying to find the eigenvectors I came up with the $0$ matrix.</p> <p>What am I doing wrong? </p> <p>Much appreciation and thanks in advance.</p>
Community
-1
<p>$$\lim_{x \to 1^{-}}\tan\left(\frac{\pi x}{2}\right)=\lim_{x \to \frac{\pi}{2}^-}\tan\left(x\right)=\lim_{x \to \frac{\pi}{2}^-} \frac{\sin x}{\cos x}.$$</p> <p>Now, $\lim_{x \to \frac{\pi}{2}^-} \sin x=1.$ But as $x \to \frac{\pi}{2}^-$, $\cos x$ approaches zero through positive values. Hence the limit you're looking for is $+\infty.$</p> <p>Or convince yourself with the following graph.</p> <p><a href="https://i.stack.imgur.com/TUGbt.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TUGbt.gif" alt="Or convince yourself with the graph."></a></p>
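<p>A quick numerical check of the blow-up (just for illustration):</p>

<pre><code># tan(pi*x/2) as x approaches 1 from the left
import math
for x in (0.9, 0.99, 0.999, 0.9999):
    print(x, math.tan(math.pi * x / 2))
# the printed values grow without bound, roughly like 2/(pi*(1-x))
</code></pre>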
4,465,528
<p>Solving the integral of <span class="math-container">$\cos^2x\sin^2 x$</span>:</p> <p>My steps are: <span class="math-container">$(\cos x\sin x)^2=\left(\frac{\sin(2x)}{2}\right)^2$</span>. Now we know that</p> <p><span class="math-container">$$\sin^2(\alpha)=\frac{1-\cos (2\alpha)}{2}\implies\left(\frac{\sin(2x)}{2}\right)^2=\frac 14\sin^2(2x)=\frac 14\cdot \frac{1-\cos (4x)}{2}=\frac 18(1-\cos (4x))$$</span> Rewriting all the steps:</p> <p><span class="math-container">$$\int \sin ^2\left(x\right)\cos ^2\left(x\right)dx=\frac{1}{8}\left(x-\frac{1}{4}\sin \left(4x\right)\right)+k, \,\, k\in \Bbb R \tag 1$$</span></p> <p>Is there another method to solve this integral (1)?</p>
gnasher729
137,175
<p>You know exactly 1000 wrong answers were given, and each of 1000 candidates gave at least one incorrect answer. Nobody can have given two wrong answers, or we would have had at least 1001 wrong answers. Therefore everyone gave exactly one wrong answer and got all the others right.</p> <p>Of 1000, 300 got answer 3 wrong and all others right, 400 got answer 4 wrong and all others right, 300 got either 1 or 2 wrong and got 3 and 4 right. 300 prizes.</p>
1,468,097
<p>Below is the problem:</p> <p>Choose a point uniformly at random from the triangle with vertices (0,0), (0, 30), and (20, 30). Let (X, Y) be the coordinates of the chosen point. (a) Find the cumulative distribution function of X. (b) Use part (a) to find the density of X.</p> <p>First, for part (a), a triangle is formed by the three vertices in the question. It would look something like an inverted triangle whose right angle is formed by the y-axis and the line that goes through (0,30) and (20, 30).</p> <p>The area of this triangle is 300, since (30)(20) / 2 = 300,</p> <p>so the joint density of the point over the triangle is 1/300.</p> <p>To find the cdf, since the equation of the line that goes through the origin and (20, 30) is y = (3/2)x, I think the cdf is</p> <p>F(X) =</p> <ol> <li>(3/2)x * x * 1/2 * 1/300 = (1/400) * x^2 for 0 &lt;= x &lt;= 20</li> <li>0 for x &lt; 0</li> <li>1 for x &gt; 20</li> </ol> <p>But I am not really sure if my process is right.</p> <p>For part (b), I know I just have to take the derivative, but if I take the derivative, I am only calculating the area under the slanted line, whereas I am supposed to find the area of the inverted triangle. Since the areas of the two triangles are equal, can I just take the derivative? Or should the answer be something like 1 - (derivative)?</p>
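<p>One way I thought of to sanity-check a candidate F(x) numerically is rejection sampling; the helper below is only for illustration and not part of the assignment.</p>

<pre><code># Estimate P(X <= t) by sampling points uniformly in the triangle.
import random

def sample_point():
    while True:
        x, y = random.uniform(0, 20), random.uniform(0, 30)
        if y >= 1.5 * x:                  # keep only points inside the triangle
            return x, y

N = 200_000
xs = [sample_point()[0] for _ in range(N)]
for t in (5, 10, 15):
    est = sum(x <= t for x in xs) / N
    print(t, est)                          # compare with the formula F(t) you derived
</code></pre>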
Yes
155,328
<p>A slightly wilder example for the 3-dimensional case: if $u:= (1,0,1)$, $v:= (-1,0,0)$, and $w := (0,0,-1)$, then $u,v,w$ are pairwise linearly independent, but $u + v + w = (0,0,0)$. </p>
3,207,453
<p>I am studying the series <span class="math-container">$\sum\limits_{n=2}^\infty \frac{1}{n(\log n)^{2}}$</span>.</p> <p>I've tried with the root criterion:</p> <p><span class="math-container">$\lim_{n \to \infty} \sqrt[n]{\frac{1}{n(\log n)^{2}}}&gt;1$</span>, and then the series should diverge.</p> <p>But I'm not sure. Can someone help me to understand?</p>
marty cohen
13,079
<p>Expanding on Wojowu's answer,</p> <p><span class="math-container">$\begin{array}\\ \sum_{k=2^n}^{2^{n+1}-1} \dfrac1{k\ln^2(k)} &amp;\le \sum_{k=2^n}^{2^{n+1}-1} \dfrac1{k\ln^2(2^n)}\\ &amp;\le 2^n\dfrac1{2^n(n \ln 2)^2}\\ &amp;= \dfrac1{n^2 \ln^2 2}\\ \end{array} $</span></p> <p>and the sum of these converges,</p> <p>Note that, for <span class="math-container">$a &gt; 0$</span>,</p> <p><span class="math-container">$\begin{array}\\ \sum_{k=2^n}^{2^{n+1}-1} \dfrac1{k\ln^{1+a}(k)} &amp;\le \sum_{k=2^n}^{2^{n+1}-1} \dfrac1{k\ln^{1+a}(2^n)}\\ &amp;\le 2^n\dfrac1{2^n(n \ln 2)^{1+a}}\\ &amp;= \dfrac1{n^{1+a} \ln^{1+a} 2}\\ \end{array} $</span></p> <p>and the sum of these converges.</p> <p>Also note that</p> <p><span class="math-container">$\begin{array}\\ \sum_{k=2^n}^{2^{n+1}-1} \dfrac1{k\ln(k)} &amp;\ge \sum_{k=2^n}^{2^{n+1}-1} \dfrac1{k\ln(2^{n+1})}\\ &amp;\ge 2^n\dfrac1{2^{n+1}(n \ln 2)}\\ &amp;= \dfrac1{2n \ln 2}\\ \end{array} $</span></p> <p>and the sum of these diverges.</p>
1,364,417
<p>Find a real number k such that the limit $$\lim_{n\to\infty}\ \left(\frac{1^4 + 2^4 + 3^4 +\cdots+ n^4}{n^k}\right)$$ has a positive value. If I am not mistaken, every even $k$ can be the answer. But the answer is 5.</p>
k170
161,538
<p>First note that $$\sum\limits_{i=1}^n i^4=\frac{n(n+1)(2n+1)(3n^2+3n-1)}{30}$$ $$=\frac{n^5}{5}+\frac{n^4}{2}+\frac{n^3}{3}-\frac{n}{30}$$ So now we have $$\lim\limits_{n\to\infty} \left(\frac{1^4 + 2^4 + 3^4 +\cdots + n^4}{n^k}\right)$$ $$=\lim\limits_{n\to\infty} \left(\frac{\frac{n^5}{5}+\frac{n^4}{2}+\frac{n^3}{3}-\frac{n}{30}}{n^k}\right)$$ $$=\lim\limits_{n\to\infty} \left(\frac{n^5}{5n^k}+\frac{n^4}{2n^k}+\frac{n^3}{3n^k}-\frac{n}{30n^k}\right)$$ If $k\lt 5$, then $$\lim\limits_{n\to\infty} \left(\frac{n^5}{5n^k}+\frac{n^4}{2n^k}+\frac{n^3}{3n^k}-\frac{n}{30n^k}\right)=\infty$$ If $k\gt 5$, then $$\lim\limits_{n\to\infty} \left(\frac{n^5}{5n^k}+\frac{n^4}{2n^k}+\frac{n^3}{3n^k}-\frac{n}{30n^k}\right)=0$$ If $k=5$, then $$\lim\limits_{n\to\infty} \left(\frac{n^5}{5n^5}+\frac{n^4}{2n^5}+\frac{n^3}{3n^5}-\frac{n}{30n^5}\right)$$ $$=\lim\limits_{n\to\infty} \left(\frac{1}{5}+\frac{1}{2n}+\frac{1}{3n^2}-\frac{1}{30n^4}\right)=\frac15$$ Therefore the answer is $k=5$ because zero is not positive nor negative and infinity is not quantifiable.</p>
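<p>A quick numerical check of the dominant term (not needed for the argument, just reassuring):</p>

<pre><code># The sum of fourth powers grows like n^5/5.
for n in (10**2, 10**4, 10**6):
    s = sum(i**4 for i in range(1, n + 1))
    print(n, s / n**5)        # tends to 0.2 as n grows
</code></pre>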
2,354,383
<p>Why doesn't a previous event affect the probability of (say) a coin showing tails?</p> <p>Let's say I have a <strong>fair</strong> and <strong>unbiased</strong> coin with two sides, <em>heads</em> and <em>tails</em>.</p> <p>For the first time I toss it up the probabilities of both events are equal to $\frac{1}{2}$. This much is <em>intuitive and clear</em> to me.</p> <p>Now suppose that I toss it up $1000000000$ times and the scores are,</p> <p>$501000000$ Heads</p> <p>$499000000$ Tails</p> <p>Now, for the $1001000000^{th}$ toss, shouldn't the probability of a tail coming up be <strong>greater</strong> than that of heads showing up?</p> <p>I have seen many books which say that even for the $1001000000^{th}$ toss, the probabilities of both events are equal to $\frac{1}{2}$.</p> <p>This <strong>seems wrong to me</strong> since the same books affirm that <strong>if a coin is tossed a large number of times, the quantity $\frac{heads}{tails}$ will approach $1$.</strong></p> <p>I know this is very elementary and naive, yet I had only superficially studied probability and I hope you all will bear with me.</p> <p><strong>My Objections with some of the top-voted answers</strong></p> <blockquote> <p>It isn't that future flips compensate for the imbalance, it is that there are so many of them it doesn't matter. </p> </blockquote> <p>I don't get this statement. What exactly does the second sentence mean? Moreover, if what you said is true then, the following comment by a user should be wrong,</p> <blockquote> <p>Law of large numbers</p> </blockquote> <p>So these are contradicting each other I feel. Please bear with my lack of knowledge.</p>
urbano
462,611
<p>Even on the 1000th toss, after 999 tails and 0 heads, the probability of heads is 1/2. But the probability that you get 1000 tails in a row is 0.5^1000 ≈ 9.332636e-302.</p>
2,354,383
<p>Why doesn't a previous event affect the probability of (say) a coin showing tails?</p> <p>Let's say I have a <strong>fair</strong> and <strong>unbiased</strong> coin with two sides, <em>heads</em> and <em>tails</em>.</p> <p>For the first time I toss it up the probabilities of both events are equal to $\frac{1}{2}$. This much is <em>intuitive and clear</em> to me.</p> <p>Now suppose that I toss it up $1000000000$ times and the scores are,</p> <p>$501000000$ Heads</p> <p>$499000000$ Tails</p> <p>Now, for the $1001000000^{th}$ toss, shouldn't the probability of a tail coming up be <strong>greater</strong> than that of heads showing up?</p> <p>I have seen many books which say that even for the $1001000000^{th}$ toss, the probabilities of both events are equal to $\frac{1}{2}$.</p> <p>This <strong>seems wrong to me</strong> since the same books affirm that <strong>if a coin is tossed a large number of times, the quantity $\frac{heads}{tails}$ will approach $1$.</strong></p> <p>I know this is very elementary and naive, yet I had only superficially studied probability and I hope you all will bear with me.</p> <p><strong>My Objections with some of the top-voted answers</strong></p> <blockquote> <p>It isn't that future flips compensate for the imbalance, it is that there are so many of them it doesn't matter. </p> </blockquote> <p>I don't get this statement. What exactly does the second sentence mean? Moreover, if what you said is true then, the following comment by a user should be wrong,</p> <blockquote> <p>Law of large numbers</p> </blockquote> <p>So these are contradicting each other I feel. Please bear with my lack of knowledge.</p>
Zdman
462,609
<p>The probability will always be 1/2 for each coin toss because the event physically has no tie to the previous throws. The ratio will approach 1 because, as the number of tosses grows, the difference between the number of heads and tails becomes negligible compared to the totals. That relative difference shrinks because the probabilities of heads and tails are equal to each other. So, when you have 10000000000003 heads vs 10000000000017 tails, you get 0.99999999999860000000000238, or very close to 1. Just keep going indefinitely and you will get closer and closer to 1.</p>
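<p>A small simulation along these lines (the sample sizes are arbitrary choices):</p>

<pre><code># Each flip is still fair, yet the ratio heads/tails drifts toward 1.
import random

heads = tails = 0
for n in (10**3, 10**4, 10**5, 10**6):
    while heads + tails < n:
        if random.random() < 0.5:
            heads += 1
        else:
            tails += 1
    print(n, heads / tails, heads - tails)
# typically the ratio column approaches 1, while the raw difference need not shrink
</code></pre>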
316,374
<p>If <span class="math-container">$a_n$</span> satisfies the linear recurrence relation <span class="math-container">$a_n = \sum_{i=1}^k c_i a_{n-i}$</span> for some constants <span class="math-container">$c_i$</span>, then is there an easy way to find a linear recurrence relation for <span class="math-container">$b_n = a_n^2$</span> ?</p> <p>For example, if <span class="math-container">$a_n = a_{n-1} + a_{n-3}$</span>, then <span class="math-container">$b_n=a_n^2$</span> seems to satisfy <span class="math-container">$b_n=b_{n-1}+b_{n-2}+3b_{n-3}+b_{n-4}-b_{n-5}-b_{n-6}$</span>.</p>
Josiah Park
118,731
<p>T. Brown and P.J. Shiue's paper <a href="https://www.fq.math.ca/Scanned/33-4/brown.pdf" rel="nofollow noreferrer">here</a> might be of interest as a first reference. In the introduction they mention that if <span class="math-container">$a_n$</span> is a second-order sequence then the sequence of squares <span class="math-container">$a_{n}^2$</span> is a third-order sequence. They go on to show necessary conditions for the squares sequence to be a second-order sequence when <span class="math-container">$a_n$</span> is a homogeneous sequence.</p> <p>In the paper of Cooper and Kennedy <a href="http://cs.ucmo.edu/~cnc8851/articles/jarcar.pdf" rel="nofollow noreferrer">here</a>, (section 5) they give an order six linear recurrence relation for the square of a third order linear recurrence relation (as appears in your example):</p> <p><span class="math-container">$$x^2_n = (a^2 + b)x^2_{n−1} + (a^2b + b^2 + ac)x^2_{n−2} + (a^3c + 4abc − b^3 +2c^2)x^2_{n−3}+(−ab^2c + a^2c^2 − bc^2)x^2_{n−4} + (b^2c^2 − ac^3)x^2_{n−5} − c^4x^2_{n−6}$$</span> where <span class="math-container">$x_n = ax_{n−1} + bx_{n−2} + cx_{n−3}$</span>. For similar questions where squares have been replaced with higher powers, the paper <a href="https://www.fq.math.ca/Scanned/36-5/stinchcombe.pdf" rel="nofollow noreferrer">here</a> by Stinchcombe might be interesting.</p> <p><strong>Edit:</strong> Qiaochu Yuan has provided the correct order for squares. Higher powers are addressed using the same argument in Theorem 3 of Stinchcombe's paper above. For convenience it is stated here: </p> <p><strong>Question:</strong> For what order does <span class="math-container">$y_{n} =x^l_{n}$</span> satisfy a linear recurrence relation, for <span class="math-container">$x_{n}$</span> a recurrence relation of order <span class="math-container">$k$</span>?</p> <p>A recurrence equation exists and the degree of the corresponding characteristic polynomial for the recurrence is counted by the number of elements in <span class="math-container">$B_{l}$</span>, where </p> <p><span class="math-container">$$B_l=\{(i_1,...,i_k)\ |\ \text{ each } i_j\in\mathbb{N}\ \text{and}\ i_1 + ... + i_k = l \}$$</span></p> <p>Given a value of <span class="math-container">$k$</span>, define <span class="math-container">$S(k, l) =|B_{l}|$</span>, then</p> <p><strong>Theorem 3</strong>: <span class="math-container">$S(k, l)$</span> obeys the relations: <span class="math-container">$S(k, l) = k$</span> for all <span class="math-container">$k$</span>, <span class="math-container">$S(1, l) = l$</span> for all <span class="math-container">$l$</span>, and <span class="math-container">$S(k,l) =S(k-1,l) + S(k, l-1)$</span> for every <span class="math-container">$k$</span> and <span class="math-container">$l$</span>. Equivalently, <span class="math-container">$S(k, l)=\binom{k+l-1}{l}$</span>.</p>
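<p>For the example in the question (<span class="math-container">$x_n = x_{n-1} + x_{n-3}$</span>, i.e. <span class="math-container">$a=1$</span>, <span class="math-container">$b=0$</span>, <span class="math-container">$c=1$</span>), the displayed formula specializes to the coefficients <span class="math-container">$1,1,3,1,-1,-1$</span>, which is exactly the recurrence conjectured in the question; a quick numerical check (the initial values are arbitrary, chosen only for illustration):</p>

<pre><code># Verify the order-6 recurrence for squares of x_n = x_{n-1} + x_{n-3}.
x = [1, 2, 5]                        # arbitrary initial values
for _ in range(30):
    x.append(x[-1] + x[-3])

b = [v * v for v in x]               # the squared sequence
coeffs = [1, 1, 3, 1, -1, -1]        # b_n = b_{n-1} + b_{n-2} + 3 b_{n-3} + b_{n-4} - b_{n-5} - b_{n-6}
print(all(
    b[n] == sum(c * b[n - k - 1] for k, c in enumerate(coeffs))
    for n in range(6, len(b))
))                                   # True
</code></pre>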
3,102,218
<p>Given a fraction:</p> <p><span class="math-container">$$\frac{a}{b}$$</span></p> <p>I now add a number <span class="math-container">$n$</span> to both numerator and denominator in the following fashion:</p> <p><span class="math-container">$$\frac{a+n}{b+n}$$</span></p> <p>The basic property is that the second fraction is supposed to be closer to <span class="math-container">$1$</span> than the first one. My question is: how can we prove that?</p> <p>What I have tried:</p> <p>I know <span class="math-container">$\frac{n}{n} = 1$</span>, so now adding numbers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> to it would actually "move it away" from <span class="math-container">$1$</span>. But I cannot understand why <span class="math-container">$\frac{a}{b}$</span> is actually farther away from <span class="math-container">$1$</span> than <span class="math-container">$\frac{a+n}{b+n}$</span>.</p> <p>Why is that? What does it mean to add a number to both the numerator and denominator?</p>
Martin R
42,969
<p>Visually: Consider the <em>slope</em> of the line segment from <span class="math-container">$(0, 0)$</span> to <span class="math-container">$(a+n, b+n)$</span>:</p> <p><a href="https://i.stack.imgur.com/J6y7Z.png" rel="noreferrer"><img src="https://i.stack.imgur.com/J6y7Z.png" alt="enter image description here"></a></p> <p>Mathematically (assuming <span class="math-container">$a, b, n &gt; 0$</span>): The distance <span class="math-container">$$ \left| \frac {a+n}{b+n} - 1\right| = \frac{|a-b|}{b+n} $$</span> is <em>decreasing</em> in <span class="math-container">$n$</span> (and approaches zero for <span class="math-container">$n \to \infty$</span>).</p>
3,102,218
<p>Given a fraction:</p> <p><span class="math-container">$$\frac{a}{b}$$</span></p> <p>I now add a number <span class="math-container">$n$</span> to both numerator and denominator in the following fashion:</p> <p><span class="math-container">$$\frac{a+n}{b+n}$$</span></p> <p>The basic property is that the second fraction is supposed to be closer to <span class="math-container">$1$</span> than the first one. My question is: how can we prove that?</p> <p>What I have tried:</p> <p>I know <span class="math-container">$\frac{n}{n} = 1$</span>, so now adding numbers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> to it would actually "move it away" from <span class="math-container">$1$</span>. But I cannot understand why <span class="math-container">$\frac{a}{b}$</span> is actually farther away from <span class="math-container">$1$</span> than <span class="math-container">$\frac{a+n}{b+n}$</span>.</p> <p>Why is that? What does it mean to add a number to both the numerator and denominator?</p>
toth
142,243
<p>There's a very simple way to see this. Just take the difference between the two fractions and 1. You want to show that this is smaller in modulus for the second fraction.</p> <p>You get <span class="math-container">$$ \frac{a}{b} - 1 = \frac{a-b}{b} $$</span> and <span class="math-container">$$ \frac{a+n}{b+n} -1 = \frac{a-b}{b+n} $$</span></p> <p>So the second is smaller in modulus (provided <span class="math-container">$b$</span> and <span class="math-container">$n$</span> are positive, although I suppose it also works if both are negative) because it has the same numerator and a larger (in modulus) denominator, QED.</p>
3,102,218
<p>Given a fraction:</p> <p><span class="math-container">$$\frac{a}{b}$$</span></p> <p>I now add a number <span class="math-container">$n$</span> to both numerator and denominator in the following fashion:</p> <p><span class="math-container">$$\frac{a+n}{b+n}$$</span></p> <p>The basic property is that the second fraction is supposed to be closer to <span class="math-container">$1$</span> than the first one. My question is: how can we prove that?</p> <p>What I have tried:</p> <p>I know <span class="math-container">$\frac{n}{n} = 1$</span>, so now adding numbers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> to it would actually "move it away" from <span class="math-container">$1$</span>. But I cannot understand why <span class="math-container">$\frac{a}{b}$</span> is actually farther away from <span class="math-container">$1$</span> than <span class="math-container">$\frac{a+n}{b+n}$</span>.</p> <p>Why is that? What does it mean to add a number to both the numerator and denominator?</p>
Bernard
202,857
<p>You have to suppose <span class="math-container">$a,b &gt;0$</span>. Now, it is clear that, if <span class="math-container">$a&lt;b,\;$</span> i.e. <span class="math-container">$\:\smash{\dfrac ab}&lt;1$</span>, <span class="math-container">$a+n&lt;b+n$</span>, hence <span class="math-container">$\smash{\dfrac{a+n}{b+n}}&lt;1$</span>, and similarly if <span class="math-container">$\dfrac ab&gt;1$</span>.</p> <ul> <li>If <span class="math-container">$\dfrac ab&lt;1$</span>, then <span class="math-container">$\;\dfrac ab&lt;\dfrac{a+n}{b+n}\:(&lt;1)$</span>, which is equivalent to <span class="math-container">$$a(b+n)&lt;b(a+n)\iff an&lt;bn\iff a&lt;b.$$</span></li> <li>Similar proof that if <span class="math-container">$\dfrac ab&gt;1$</span>, then <span class="math-container">$\;\dfrac ab&gt;\dfrac{a+n}{b+n}\:(&gt;1)$</span>.</li> </ul>
3,102,218
<p>Given a fraction:</p> <p><span class="math-container">$$\frac{a}{b}$$</span></p> <p>I now add a number <span class="math-container">$n$</span> to both numerator and denominator in the following fashion:</p> <p><span class="math-container">$$\frac{a+n}{b+n}$$</span></p> <p>The basic property is that the second fraction is supposed to be closer to <span class="math-container">$1$</span> than the first one. My question is: how can we prove that?</p> <p>What I have tried:</p> <p>I know <span class="math-container">$\frac{n}{n} = 1$</span>, so now adding numbers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> to it would actually "move it away" from <span class="math-container">$1$</span>. But I cannot understand why <span class="math-container">$\frac{a}{b}$</span> is actually farther away from <span class="math-container">$1$</span> than <span class="math-container">$\frac{a+n}{b+n}$</span>.</p> <p>Why is that? What does it mean to add a number to both the numerator and denominator?</p>
robjohn
13,854
<p>If <span class="math-container">$b$</span> and <span class="math-container">$d$</span> have the same sign, both <span class="math-container">$$ \frac ab-\frac{a+c}{b+d}=\frac1b\frac{ad-bc}{b+d}\tag1 $$</span> and <span class="math-container">$$ \frac{a+c}{b+d}-\frac cd=\frac1d\frac{ad-bc}{b+d}\tag2 $$</span> also have the same sign. Thus, <span class="math-container">$$ \frac{a+c}{b+d}\text{ is between }\frac ab\text{ and }\frac cd\tag3 $$</span> Therefore, if <span class="math-container">$bn\gt0$</span>, <span class="math-container">$$ \frac{a+n}{b+n}\text{ is between }\frac ab\text{ and }\frac nn=1\tag4 $$</span></p>
299,795
<p>I seem to have completely lost my bearings with implicit differentiation. Just a quick question:</p> <p>Given $y = y(x)$, what is $$\frac{d}{dx} (e^x(x^2 + y^2))?$$</p> <p>I think it's the $\frac d{dx}$ confusing me; I don't know what effect it has compared to $\frac{dy}{dx}$. Any help will be greatly appreciated.</p>
Andreas Caranti
58,401
<p>Look, $x = 0$ is definitely a solution. So you want to find solutions $x \ne 0$. Divide by $x^7$...</p>
2,356,593
<blockquote> <p>Quoting:" Prove: if $f$ and $g$ are continuous on $(a,b)$ and $f(x)=g(x)$ for every $x$ in a dense subset of $(a,b)$, then $f(x)=g(x)$ for all $x$ in $(a,b)$."</p> </blockquote> <p>Let $S \subset (a,b)$ be a dense subset such that every point $x \in (a,b)$ either belongs to S or is a limit point of S.</p> <p>There exists $x_n \in S$ such that $\lim\limits_{n \rightarrow \infty} x_n =x$.</p> <p>As $f$ and $g$ are continuous on (a,b), $$f(x)= \lim\limits_{n \rightarrow \infty} f(x_n)= \lim\limits_{n \rightarrow \infty} g(x_n)= g(x)$$</p> <ol> <li>How can I conclude that "$f(x)=g(x)$ for all $x$ in $(a,b)$"?</li> <li>Is there a proof with $\epsilon$, $\delta$ definition of limits?</li> </ol> <p>Much appreciated</p>
Marios Gretsas
359,315
<p>Let $\epsilon &gt;0$ and $x \in (a,b) \cap S^c$.</p> <p>$\forall t&gt;0, \exists z \in S$ such that $|x-z|&lt;t$ </p> <p>We have that $f,g$ are continuous at $x$ thus $\exists \delta_1, \delta_2&gt;0$ such that $$|f(y)-f(x)|&lt; \epsilon /2, \forall y: |y-x|&lt; \delta_1 $$ $$|g(y)-g(x)|&lt; \epsilon /2, \forall y:|y-x|&lt; \delta_2$$</p> <p>Take $t= \min \{\delta_1,\delta_2\}$,thus $\exists y_0 \in S$ such that $|y_0-x|&lt; t$.</p> <p>We have that $|f(x)-g(x)| \leqslant |f(x)-f(y_0)|+|f(y_0)-g(y_0)|+|g(y_0)-g(x)|&lt; \epsilon /2+0 +\epsilon /2 =\epsilon$</p> <p>Thus we proved that $|f(x)-g(x)|&lt; \epsilon ,\forall \epsilon&gt;0$,</p> <p>thus $f(x)=g(x)$</p>
592,963
<p>Find $x^4+y^4$ if $x+y=2$ and $x^2+y^2=8$.</p> <p>So I started the problem by noting that $x^2+y^2=(x+y)^2 - 2xy$, but that doesn't help!</p> <p>I also noticed that $x+y=2^1$ and $x^2+y^2=2^3$, so maybe $x^3+y^3=2^5$ and $x^4+y^4=2^7$, but I think this is just a coincidence.</p> <p>So how can I solve this problem?</p> <p>PLEASE, I need some help, and thanks for all!!</p>
ILoveMath
42,344
<p>Notice</p> <p>$$(x^2 + y^2)^2 = 64 \implies x^4 + y^4 + 2(xy)^2 = 64$$ </p> <p>and</p> <p>$$ (x + y )^2 = 4 \implies x^2 + y^2 + 2xy = 4 \implies 2xy = 4 - 8 = -4 \implies xy = -2 $$</p> <p>$$ \therefore x^4 + y^4 = 64 - 2(xy)^2 = 64 - 2(-2)^2 = 56 $$</p>
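<p>As a sanity check (not needed for the computation above): $x$ and $y$ are the roots of $t^2-(x+y)t+xy=t^2-2t-2=0$, i.e. $t=1\pm\sqrt3$, and indeed $$ (1+\sqrt3)^4+(1-\sqrt3)^4=(4+2\sqrt3)^2+(4-2\sqrt3)^2=(28+16\sqrt3)+(28-16\sqrt3)=56. $$</p>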
1,647,517
<p>I urgently need to solve this integral. I already know the result in closed form; does anybody know how to solve it? \begin{equation} \int_{\mathbb{R}}e^{-\frac{x^{2}}{2}}\left(\cos\left(\pi nx\right)\right)dx=\sqrt{2\pi}e^{-\frac{n^{2}\pi^{2}}{2}}, \end{equation} If anybody already knows how, that is welcome; otherwise I guess I will have to look at it more calmly tomorrow.</p>
Joe
107,639
<p><strong>HINT:</strong> $$ \int_{\mathbb{R}}e^{-\frac{x^{2}}{2}}\left(\cos\left(\pi nx\right)\right)dx= \Re\left[ \int_{\mathbb{R}}e^{-\frac{x^{2}}{2}}e^{i\pi nx}dx\right] $$ Then observe that $$ -\frac{x^2}2+i\pi nx=-\left(\frac x{\sqrt2}-\frac{i\pi n}{\sqrt2}\right)^2-\frac{\pi^2 n^2}{2} $$ and use the well known integral</p> <p>$$ \int_{\Bbb R}e^{-s^2}\,ds=\sqrt{\pi}\;\; $$</p> <p>with a suitable change of variable.</p>
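<p>A quick numerical check of the target identity for $n=1$ (SciPy is used here only for the check; the names are mine):</p>

<pre><code># Compare the integral with the claimed closed form for n = 1.
import numpy as np
from scipy.integrate import quad

n = 1
val, _ = quad(lambda x: np.exp(-x**2 / 2) * np.cos(np.pi * n * x), -np.inf, np.inf)
closed_form = np.sqrt(2 * np.pi) * np.exp(-(n * np.pi) ** 2 / 2)
print(val, closed_form)       # both are about 0.01803
</code></pre>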
2,338,321
<p>Given the standardized random variable $Z = (X-\mu)/\sigma$, show that</p> <p>$$C_3(Z) = M_3(Z)\;\;\text{and}\;\; C_4(Z) = M_4(Z) -3$$</p> <p>where $C_r(Z)$ refers to the $r$-th cumulant and $M_r(Z)$ refers to the $r$-th moment.</p> <hr> <p><strong>Question</strong></p> <p>I've been asked to show the above relationship by my textbook, but sadly, my textbook does not provide a well-constructed mathematical definition of moment and cumulant. I've checked Hogg's book, but this book also is insufficient to rigorously prove the above statement. Any advice or recommendation for the above proof?</p>
Sri-Amirthan Theivendran
302,692
<p>Note that $Z\sim N(0,1)$. The MGF of a standard normal is $$ M(t)=\exp\left(\mu t+\frac{1}{2}\sigma^2t^2\right)=\exp\left(\frac{1}{2}t^2\right). $$ In particular the cumulant generating function of a standard normal is $$ K(t)=\frac{1}{2}t^2. $$ It follows that the nth cumulant $\kappa_n=K^{(n)}(0)=0$ for $n&gt;2$. The first identity is clear since $$ E(Z^3)=0=\kappa_3 $$ as the the density of a standard normal is even. The second identity will follow from showing that $E(Z^4)=3$ from our comments above. Note that $$ E(Z^4)=V(Z^2)+E(Z^2)^2=2+1=3 $$ since $Z^2\sim \chi^2_{(1)}=\text{Gamma}(1/2,1/2)$ where we used the shape, rate parametrization of the gamma distribution. </p>
1,594,968
<p>How much knowledge of group theory is needed in order to begin Galois Theory? Which topics are most relevant?</p>
Nicky Hekster
9,605
<p>Most Galois Theory books are self-explanatory, but you need to familiarize yourself with concepts as <a href="https://en.wikipedia.org/wiki/Solvable_group" rel="nofollow">solvable groups</a> (this relates to equations being solvable by radicals), <a href="https://en.wikipedia.org/wiki/Simple_group" rel="nofollow">simple groups</a>. Also <a href="https://en.wikipedia.org/wiki/Sylow_theorems" rel="nofollow">Sylow theory</a> helps a lot. In addition, knowledge of rings and fields is necessary. <p>By the way, Ian Stewart's book <a href="http://math.illinoisstate.edu/schebol/teaching/407-14-files/Stewart-galois_theory.pdf" rel="nofollow">Galois Theory</a> makes a very nice read.</p>
1,270,107
<p>How many real roots does the below equation have?</p> <p>\begin{equation*} \frac{x^{2000}}{2001}+2\sqrt{3}x^2-2\sqrt{5}x+\sqrt{3}=0 \end{equation*}</p> <p>A) 0 B) 11 C) 12 D) 1 E) None of these</p> <p>I could not come up with anything.</p> <p>(Turkish Math Olympiads 2001)</p>
David K
139,123
<p>Consider the discriminant of $f(x) = 2\sqrt{3}x^2-2\sqrt{5}x+\sqrt{3}$:</p> <p>$$(-2\sqrt{5})^2 - 4(2\sqrt{3})\sqrt{3} = 20 - 24 &lt; 0.$$</p> <p>Therefore $f(x)$ has no real roots. But $f(0) = \sqrt{3} &gt; 0$, so $f(x) &gt; 0$ everywhere.</p> <p>Now combine this with $$\frac{x^{2000}}{2001} \geq 0.$$</p>
4,074,031
<p>You have been teaching Dennis and Inez math using the Moore method for their entire lives, and you're currently deep into topology class.</p> <p>You've convinced them that topological manifolds aren't quite the right object of study because you &quot;can't do calculus on them,&quot; and you've managed to motivate the definition of a smooth manifold.</p> <p>Dennis and Inez take your motivation about &quot;doing calculus&quot; seriously. They pose to you that they asked themselves how to take the derivative of a function <span class="math-container">$f: M \to \mathbb{R}$</span>. They figured out independently that such a thing couldn't meaningfully be another smooth function. Instead, it has to say how the function is changing in the direction of a given vector field.</p> <p>You are thrilled at their work: they've independently invented <span class="math-container">$d$</span> and the cotangent bundle, and they arrive at the correct formal definition of both with little prodding.</p> <p>You want to point them in the direction of generalization, so you ask them to consider how to extend <span class="math-container">$d$</span> to higher exterior powers of the cotangent bundle. You get blank stares. Why should they think about higher tensor powers of the cotangent bundle at all? Why the <strong>alternating</strong> tensors, in particular? Yes, they know what that means (you led them to it in their linear algebra course), but the cotangent bundle just naturally showed up when they went thinking about derivatives, and the alternating tensors don't.</p> <p>Oops! Now's the time to confess that your students never had a traditional 3-dimensional vector calculus course culminating in Stokes's theorem. Actually, it's worse: they have never heard of integrals. You see, you were torn about whether to start with the Lebesgue theory or spend time building the more intuitive Riemann integral, so you just skipped from the first half of Calc One to a topics course on higher topos theory. Lesson learned.</p> <p>So, how do you teach them? (Alternatively, answer this question in the negative by proving that Dennis and Inez will always invent integration if you make them think about higher exterior derivatives enough.)</p>
not all wrong
37,268
<p>So if we're going to insist on avoiding integration, I suppose we'd better take some more derivatives. In particular, we want to learn how to take derivatives of tensors. Of course, famously, without some additional structure on even a smooth manifold, there is no such generally well-defined notion. In fact, there are three common types of derivatives of tensors:</p> <ol> <li>covariant derivatives;</li> <li>Lie derivatives; and</li> <li>exterior derivatives.</li> </ol> <p>The former two require even more extra structure (a metric/connection, or an extension of tangent vectors to vector <em>fields</em> when we define directional derivatives); the latter requires instead a restriction on the class of tensors we can differentiate - to differential forms! So clearly the central idea in our motivation should be to answer the question</p> <blockquote> <p>For what class of tensors can a derivative be defined <strong>with no further structure</strong>?</p> </blockquote> <p>Focussing on covariant <span class="math-container">$(0,m)$</span> tensors as the objects which can be formed from scalar functions just by &quot;taking derivatives&quot; and multiplying them, a formal answer to this question is:</p> <blockquote> <p><em>Theorem</em>: For any connected smooth manifold <span class="math-container">$M$</span>, if <span class="math-container">$T$</span> maps differentiable covariant tensor fields of type <span class="math-container">$(0,m)$</span> to those of type <span class="math-container">$(0,m+1)$</span> and is <em>natural</em> in the technical sense that for all (diffeomorphic) automorphisms <span class="math-container">$\phi$</span> we have <span class="math-container">$\phi^\star(T \omega) = T(\phi^\star \omega)$</span>, then <span class="math-container">$T = k \ {\rm d}$</span> is a multiple of the exterior derivative. In particular, it is characterized entirely by its action on totally antisymmetric tensors, i.e. differential forms, and vanishes on all tensors with other symmetry structures.</p> </blockquote> <p>A fairly straightforward if tedious proof of this can be constructed by following e.g. <a href="https://projecteuclid.org/journals/journal-of-differential-geometry/volume-8/issue-1/Natural-operations-on-covariant-tensor-fields/10.4310/jdg/1214431486.full" rel="noreferrer"><em>Natural operations on covariant tensor fields</em> (Leicher)</a>. (In fact, various stronger results are true; essentially <span class="math-container">${\rm d}$</span> is more or less the unique natural differential operator acting on only one tensor. There are discussions in e.g. <a href="https://mathoverflow.net/questions/200723/are-there-any-natural-differential-operators-besides-d">this MathOverflow question</a> as well as in the Leicher paper.)</p> <hr /> <p>But for the purposes of motivation, what's the basic idea underlying this observation? Well, in order to be invariant under coordinate transformations, the expression for any such <span class="math-container">$T$</span> in local coordinates must be <span class="math-container">$$ (T \omega)_{i_1\ldots i_{m+1}} = c_{i_1 \ldots i_{m+1}}^{j_1 \ldots j_{m+1}} \frac{\partial}{\partial x^{j_1}} \omega_{j_2 \ldots j_{m+1}} $$</span> Why is this? 
Any explicit <span class="math-container">$x^i$</span> dependence on the RHS violates invariance under shifting coordinates <span class="math-container">$x^i$</span>, and homogeneous rescaling of coordinates implies that the RHS must have weight <span class="math-container">$-1$</span> in <span class="math-container">$x$</span>, and by smoothness this must arise from a single derivative with respect to <span class="math-container">$x$</span>. (One might quite reasonably even <em>impose</em> these requirements as part of your attempt to define a derivative.)</p> <p>Now we have to decide what linear combinations of the derivatives of <span class="math-container">$\omega$</span> coordinates can possibly be invariant under coordinate transformations. This is the effort of (4.2) in Leicher. But the gist of it is simply that under coordinate transformations, the LHS transforms by a product of <span class="math-container">$m+1$</span> terms <span class="math-container">$\partial x^j / \partial y^i$</span>, whilst the RHS also involves <span class="math-container">$m+1$</span> such factors, with <span class="math-container">$m$</span> of them appearing <strong>inside</strong> the existing derivative. But for invariance, one ultimately needs that we are left only with the term with a derivative acting on the components of <span class="math-container">$\omega$</span>. This can <strong>only</strong> be achieved if the <span class="math-container">$j_i$</span> are totally antisymmetrized, in which case all terms where the derivative <span class="math-container">$\partial / \partial y^{j_1}$</span> acts upon <span class="math-container">$\partial x^{k_p} / \partial y^{j_p}$</span> vanish due to the symmetry of partial derivatives.</p> <p>(In some sense, the result therefore comes down to the simple, neat fact that 'star transpositions' <span class="math-container">$(1 \ p)$</span> generate the whole symmetric group -- by requiring that the derivative is antisymmetrized with every index of <span class="math-container">$\omega$</span>, we require that all indices of <span class="math-container">$\omega$</span> are antisymmetrized with each other.)</p> <p>Therefore, in particular, only the totally antisymmetric part <span class="math-container">$\omega_{[j_2\ldots j_{m+1}]}$</span> contributes to <span class="math-container">$T \omega$</span>.</p> <hr /> <p>So whilst it would be nice to tidy this up and give a briefer and ideally coordinate-free version of the argument, the idea is just that in a simple technical sense</p> <blockquote> <p>the exterior derivative is the only <strong>natural</strong> notion of differentiation of a tensor</p> </blockquote> <p>and the intuitive fact this rests upon is that</p> <blockquote> <p>totally antisymmetrizing indices is the only way to avoid derivatives acting upon the Jacobian factors arising under a change of coordinates.</p> </blockquote>
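<p>To make the key step concrete, here is a small sympy sanity check in two dimensions (the particular 1-form and the nonlinear coordinate change are made up for illustration): the antisymmetrized derivative of a 1-form transforms as a $(0,2)$ tensor under the coordinate change, while the symmetrized derivative picks up exactly the extra second-derivative-of-Jacobian terms described above.</p> <pre><code>import sympy as sp

u, v = sp.symbols('u v')                      # new coordinates y = (u, v)
x1, x2 = u + v**2, u*v                        # hypothetical nonlinear change of coordinates x(y)

X1, X2 = sp.symbols('X1 X2')
omega_x = [X1**2 * X2, sp.sin(X1)]            # components of a sample 1-form in x-coordinates

J = sp.Matrix(2, 2, lambda a, i: sp.diff([x1, x2][i], [u, v][a]))   # J[a, i] = dx^i/dy^a

# the same 1-form in y-coordinates: omega'_a = omega_i dx^i/dy^a
subs = {X1: x1, X2: x2}
omega_y = [sum(omega_x[i].subs(subs) * J[a, i] for i in range(2)) for a in range(2)]

# naive derivative matrices D_{ij} = d_i omega_j in each coordinate system
D_x = sp.Matrix(2, 2, lambda i, j: sp.diff(omega_x[j], [X1, X2][i]).subs(subs))
D_y = sp.Matrix(2, 2, lambda a, b: sp.diff(omega_y[b], [u, v][a]))

# what a genuine (0,2) tensor with x-components D_x looks like in y-coordinates
D_x_transported = J * D_x * J.T

antisym = lambda M: (M - M.T) / 2
sym = lambda M: (M + M.T) / 2

# zero matrix: the antisymmetrized derivative (i.e. d omega) is natural
print((antisym(D_y) - antisym(D_x_transported)).applyfunc(sp.simplify))
# nonzero matrix: the symmetrized derivative is not a tensor
print((sym(D_y) - sym(D_x_transported)).applyfunc(sp.simplify))
</code></pre> <p>The nonzero entries in the second print are precisely the terms $\omega_j\,\partial^2 x^j/\partial y^a\partial y^b$, i.e. the derivative hitting the Jacobian factors.</p>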
85,288
<p>For some reason, I'd like to use <code>ParallelTable</code> with a variable number of iterators.</p> <ul> <li><code>Table[a[1], {a[1], 0, 10}]</code> works fine:</li> </ul> <blockquote> <p><em>Output:</em> <code>{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}</code></p> </blockquote> <ul> <li><code>ParallelTable[a[1], {a[1], 0, 10}]</code> returns an error:</li> </ul> <blockquote> <p><em>Output:</em> ParallelTable::nopar1: ParallelTable[ci[1],{ci[1],0,10}] cannot be parallelized; proceeding with sequential evaluation. &gt;&gt;</p> <pre><code>{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10} </code></pre> </blockquote> <hr /> <p>I am also interested in workarounds to use <code>ParallelTable</code> and a variable number of indices (typically: <code>ParallelTable[f[Table[a[i],{i,1,n}]],Table[a[j],{j,1,n}]]</code> where <code>f</code> is a function of a list with variable size).</p>
Mr.Wizard
121
<p>Using <a href="http://reference.wolfram.com/language/ref/Trace.html"><code>Trace</code></a> we can see that the evaluation of <code>ParallelTable[a[1], {a[1], 0, 10}]</code> becomes:</p> <pre><code>Parallel`Combine`Private`parallelIterateE[ ParallelTable, Table, Join, Identity, a[1], {a[1], 0, 10}, {Automatic, "Global`"} ] </code></pre> <p>Further using <code>PrintDefinitions</code> in the GeneralUtilities package lets us peek behind the curtain:</p> <pre><code>Needs["GeneralUtilities`"] PrintDefinitions @ Parallel`Combine`Private`parallelIterateE </code></pre> <p>We see definitions specifically wanting a Symbol:</p> <pre><code>Parallel`Combine`Private`parallelIterateE[orig_, iter_, comb_, f_, expr_, {i_Symbol, vals_List}, others___, {meth_, dist_, ___}] := . . . Parallel`Combine`Private`parallelIterateE[orig_, iter_, comb_, f_, expr_, it : {i_Symbol, w0_ : 1, w1_, dw_ : 1}, others___, {meth_, dist_, ___}] := . . . </code></pre> <p>If in each of these we replace <code>i_Symbol</code> with <code>i_Symbol | i : _Symbol[___]</code> and evaluate the cells we find that the problem is resolved:</p> <pre><code>ParallelTable[a[1], {a[1], 0, 10}] (* no error message *) </code></pre> <blockquote> <pre><code>{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10} </code></pre> </blockquote> <p>In light of this I think it was an accident to restrict the iterators to bare Symbols, and I propose tagging this as a bug.</p>
1,381,545
<p>I have the following problem:</p> <blockquote> <p>Show by induction that $F_n \geq 2^{0.5 \cdot n}$, for $n \geq 6$</p> </blockquote> <p>Where $F_n$ is the $nth$ Fibonacci number. </p> <hr> <h3>Proof</h3> <p><strong>Basis</strong></p> <p>$n = 6$. </p> <p>$F_6 = 8 \geq 2^{0.5 \cdot 6} = 2^{\frac{6}{2}} = 2^3 = 8$</p> <p><strong>Induction hypothesis</strong></p> <p>Assume $F_n \geq 2^{\frac{n}{2}}$, for some $n \geq 6$.</p> <p><strong>Inductive step</strong></p> <p>Lets shows that $F_{n+1} \geq 2^{\frac{n + 1}{2}}$. </p> <p>We know that </p> <ul> <li><p>$F_{n + 1} = F_n + F_{n - 1}$</p></li> <li><p>$2^{\frac{n + 1}{2}} &gt; 2^{\frac{n}{2}}$</p></li> <li><p>$2^{\frac{n + 1}{2}} = 2^{\frac{n}{2} + \frac{1}{2}} = 2^{\frac{n}{2}} \cdot 2^{\frac{1}{2}} \leq 2^{\frac{n}{2}} \cdot 2 = 2^{\frac{n}{2}} + 2^{\frac{n}{2}}$</p></li> </ul> <p>Since we have assumed that $F_n \geq 2^{\frac{n}{2}}$, then $$F_n + F_{n - 1} = F_{n + 1} \geq 2^{\frac{n}{2}} + F_{n - 1} \geq 2^{\frac{n}{2}} + F_{n - 1} + 2^{\frac{n}{2}} - F_n = 2^{\frac{n}{2}} + 2^{\frac{n}{2}} + F_{n - 1} - F_n$$</p> <p>The last inequality is true because $2^{\frac{n}{2}} - F_n$ is negative or $0$, since $F_n \geq 2^{\frac{n}{2}}$.</p> <hr> <p>I have tried a lot of things, but I cannot figure out how to proceed and how to conclude that indeed $F_{n + 1} \geq 2^{\frac{n}{2}} \cdot 2^{\frac{1}{2}}$. I feel really stupid after trying for a long time to do this alone and not managing to do it.</p>
Euler88 ...
252,332
<p>We have $F_n&gt;F_{n-1}$, so $$F_{n+1}=F_n+F_{n-1}&gt;2F_{n-1}\ge 2\cdot2^{(n-1)/2}=2^{(n+1)/2}.$$ Note that the last step uses the induction hypothesis at $n-1$ as well, so this is a two-step induction: verify the base cases $n=6$ ($F_6=8=2^3$) and $n=7$ ($F_7=13\ge 2^{7/2}\approx 11.3$), and the estimate above then carries the claim forward for all larger $n$.</p>
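<p>A quick numerical check of the inequality (not a proof, just a sanity check of the statement, using the indexing $F_1=F_2=1$ from the question):</p> <pre><code># compare squares to stay in exact integer arithmetic
fib = {1: 1, 2: 1}
for n in range(3, 41):
    fib[n] = fib[n - 1] + fib[n - 2]

print(all(fib[n] ** 2 &gt;= 2 ** n for n in range(6, 41)))   # True
</code></pre>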
3,034,374
<p>Two cyclists start from the same place to ride in the same direction.A starts at noon with a speed of 8km/hr and B starts at 2pm with a speed of 10km/hr.At what times A and B will be 5km apart ? My thought process: As A starts early at 12 so it will have already covered 16km(8*2). so S relative=V relative*t or say 16=2*t1 and thus t1=8. Now we want Srelative to be 5 so 5=2*t2 and t2=2 and a half hour so they will meet at t1-t2 . Is this correct process ?</p>
hamam_Abdallah
369,188
<p><span class="math-container">$$x_1(t)=v_1(t-t_1)+x_0=8(t-0)$$</span> <span class="math-container">$$x_2(t)=v_2(t-t_2)+x_0=10(t-2)$$</span></p> <p>where $t$ is measured in hours from noon. Thus the condition</p> <p><span class="math-container">$$|x_2(t)-x_1(t)|=|2t-20|=5$$</span> gives two solutions</p> <p><span class="math-container">$$t=7.5 \text{ or } t=12.5,$$</span></p> <p>that is, at 7:30 pm and at 0:30 am the next day.</p> <p>For example, at <span class="math-container">$t=7.5$</span></p> <p><span class="math-container">$$x_1=8(7.5-0)=60 \; km$$</span> and <span class="math-container">$$x_2=10(7.5-2)=55 \; km.$$</span></p>
2,005,365
<p>For example, p(getting two heads from tossing a coin twice) = 0.5 * 0.5...</p> <p>I passed my probability course in college, but I am still having trouble getting the intuition for this.</p>
anonymus
277,855
<p>So you have to study the limit when $h\to 0$ of the following quantity : $\frac{\sqrt{x}-\sqrt{x+h}}{h(\sqrt{x}\sqrt{x+h})}$.</p> <p>Multiplying and dividing by the "conjugate" quantity gives : $\frac{\sqrt{x}-\sqrt{x+h}}{h(\sqrt{x}\sqrt{x+h})} \frac{\sqrt{x}+\sqrt{x+h}}{\sqrt{x}+\sqrt{x+h}} = \frac{-1}{\sqrt{x}\sqrt{x+h}(\sqrt{x}+\sqrt{x+h})}$ and the limit is straightforward.</p>
2,944,397
<blockquote> <p>In one plane <span class="math-container">$Oxy$</span>, given the circle <span class="math-container">$(\!{\rm C}\!): x^{2}+ y^{2}- 2x+ 4y- 4= 0$</span> around point <span class="math-container">$O$</span> for <span class="math-container">$60^{\circ}$</span> and it maps in the circle <span class="math-container">$(\!{\rm C}'\!)$</span>. Find the loci of the circle <span class="math-container">$(\!{\rm C}'\!)$</span>.</p> </blockquote> <p>Firstly, we let <span class="math-container">${\rm M}(x, y)\in (\!{\rm C}\!), {\rm M}'({x}',{y}')\in (\!{\rm C}'\!)$</span>. We always have the following system of equations <span class="math-container">$$\left\{\begin{matrix} {x}'= x\cos\alpha- y\sin\alpha\\ {y}'= x\sin\alpha+ y\cos\alpha \end{matrix}\right.$$</span> or <span class="math-container">$$\left\{\begin{matrix} {x}'= x\cos 60^{\circ}- y\sin 60^{\circ}\\ {y}'= x\sin 60^{\circ}+ y\cos 60^{\circ} \end{matrix}\right.$$</span> so <span class="math-container">$$x= \frac{1}{2}{x}'+ \frac{\sqrt{3}}{2}{y}'$$</span> <span class="math-container">$$y= \frac{1}{2}{y}'- \frac{\sqrt{3}}{2}{x}'$$</span> Secondly <em><a href="https://www.wolframalpha.com/input/?i=(1%2F2a%2Bsqrt(3)%2F2b)%5E2%2B(-sqrt(3)%2F2a%2B1%2F2b)%5E2-2(1%2F2a%2Bsqrt(3)%2F2b)%2B4(-sqrt(3)%2F2a%2B1%2F2b)-4%3D0" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=(1%2F2a%2Bsqrt(3)%2F2b)%5E2%2B(-sqrt(3)%2F2a%2B1%2F2b)%5E2-2(1%2F2a%2Bsqrt(3)%2F2b)%2B4(-sqrt(3)%2F2a%2B1%2F2b)-4%3D0</a></em>. So, how can I prove the given system of equations as follow ? <span class="math-container">$$\left\{\begin{matrix} x'= x\cos\alpha- y\sin\alpha\\ y'= x\sin\alpha+ y\cos\alpha \end{matrix}\right.$$</span> Finally, I need to the helps! Thanks for all the nice comments!</p>
user376343
376,343
<p><strong>The question was:</strong></p> <p><em>The circle <span class="math-container">$\left ( C \right ): x^{2}+ y^{2}- 2\,x+ 4\,y- 4= 0$</span> in the plane <span class="math-container">$Oxy$</span> is rotated about the point <span class="math-container">$O$</span> through <span class="math-container">$60^\circ$</span> and maps to a circle <span class="math-container">$(C').$</span> Find <span class="math-container">$(C').$</span></em></p> <p><strong>Solution</strong></p> <p><span class="math-container">$\left ( C \right ): x^{2}+ y^{2}- 2\,x+ 4\,y- 4= 0$</span> rewrites as <span class="math-container">$$(x-1)^2+(y+2)^2=9,$$</span> thus the center is <span class="math-container">$S(1,-2)$</span> and the radius <span class="math-container">$r=3.$</span> Under the rotation, the circle is transformed into a circle <span class="math-container">$(C')$</span> with the same radius <span class="math-container">$r=3$</span> and center <span class="math-container">$$S'(1\cdot \cos 60^{\circ} -(-2)\sin 60^\circ,\,1\cdot \sin 60^\circ+(-2)\cos 60^\circ)=S'(1/2+\sqrt3,\sqrt3/2-1). $$</span></p> <p>From this <span class="math-container">$\left ( C' \right ): \left(x-\frac{1+2\sqrt3}{2}\right)^2 + \left(y-\frac{\sqrt3-2}{2}\right)^2=9.$</span></p>
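<p>A short sympy verification of the rotated centre and of the final equation (just a symbolic double check of the computation above):</p> <pre><code>import sympy as sp

c, s = sp.cos(sp.pi/3), sp.sin(sp.pi/3)
R = sp.Matrix([[c, -s], [s, c]])        # rotation by 60 degrees about O
S = sp.Matrix([1, -2])                  # centre of (C)

S2 = R * S
print(sp.simplify(S2[0]), sp.simplify(S2[1]))    # 1/2 + sqrt(3), sqrt(3)/2 - 1

# a generic point of (C), rotated, satisfies the claimed equation of (C')
t = sp.symbols('t', real=True)
P2 = R * (S + 3*sp.Matrix([sp.cos(t), sp.sin(t)]))
lhs = (P2[0] - (1 + 2*sp.sqrt(3))/2)**2 + (P2[1] - (sp.sqrt(3) - 2)/2)**2
print(sp.simplify(lhs))                          # 9
</code></pre>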
2,588,408
<p>A question from <em>Introduction to Analysis</em> by Arthur Mattuck:</p> <p>Suppose $f(x)$ is continuous for all $x$ and $f(a+b)=f(a)+f(b)$ for all $a$ and $b$. Prove that $f(x)=Cx$, where $C=f(1)$, as follows:</p> <p>(a)prove, in order, that it is true when $x=n, {1\over n}$ and $m\over n$, where $m, n$ are integers, $n\ne 0$;</p> <p>(b)use the continuity of $f$ to show it is true for all $x$.</p> <p>I can show the statement is true when $x=n$. As for $x={1\over n},{m\over n}$, I don't know how.</p>
ℋolo
471,959
<p>For (a):</p> <p>start with $x=\frac1n$ (with $n$ a positive integer):</p> <p>$f(1)=f(\sum_{i=1}^n x)=\sum_{i=1}^n f(x)=nf(x)=C\implies f(x)=\frac Cn$</p> <p>Now do the same thing for $y=\frac mn$ with $m$ a positive integer:</p> <p>$f(y)=f(mx)=f(\sum_{i=1}^m x)=\sum_{i=1}^m f(x)=\sum_{i=1}^m \frac Cn=\frac{Cm}n$</p> <p>To cover the remaining rationals, note that $f(0)=f(0+0)=2f(0)$ forces $f(0)=0$, and then $0=f(x+(-x))=f(x)+f(-x)$ gives $f(-x)=-f(x)$, so the formula also holds for negative $m$.</p> <p>For (b), use the fact that every real number is the limit of a sequence of rational numbers.</p>
1,400,876
<p>Given a measurable space $(\Omega, \cal{F})$, $f:(\Omega, \cal F) \to (\Bbb{R},\cal B)$, where $\cal B$ is the Borel $\sigma$-algebra of $\Bbb R$, is said to be $\cal {F}$-measurable if $f^{-1}(B)\in \cal F $ for any $B\in \cal B$.</p> <p>Any function $f:\Omega\to\Bbb R$ is trivially $2^\Omega$ measurable. A constant function on $\Omega$ is $\{\emptyset, \Omega\}$ measurable. If $\Omega$ is Lebesgue measurable and $\cal F$ is the set of all Lebesgue measurable subsets of $\Omega$, then all Lebesgue measurable functions are $\cal F$-measurable.</p> <p><strong>My question is</strong> can anyone help provide an example of an $\cal F$-measurable function besides the above three examples? Thank you!</p>
ncmathsadist
4,154
<p>Consider the symmetric Borel sets $$\{B\in \mathcal{B}(\mathbb{R})| - B = B\}.$$ The measurable functions in this $\sigma$-algebra are the even measurable functions.</p>
690,569
<p>Suppose a function is given by: $$ f(x)= \begin{cases} \cos\left(\dfrac{1}{x}\right) &amp; x\neq 0 \\ 0 &amp; x=0 \end{cases} $$</p> <p>Show that this function is not continuous. Please help - I don't know how to proceed with formally using the limits.</p>
LeoTheKub
131,252
<p>Take $x_n=(2n\pi)^{-1}$ $(n=1,2,3 \ldots)$ and observe that $\displaystyle\lim_{n\to\infty}x_n=0$. Then compute $f(x_n)$ and ask yourself: is $\displaystyle\lim_{n\to\infty}f(x_n)=f(0)\,$?</p> <p>If the answer is "no", then this shows that $f$ is discontinuous at $x=0$.</p>
244
<p>I know that Hilbert schemes can be very singular. But are there any interesting and nontrivial Hilbert schemes that are smooth? Are there any necessary conditions or sufficient conditions for a Hilbert scheme to be smooth?</p>
Ben Webster
66
<p>A very well-known result is that the Hilbert scheme of points on a smooth surface is smooth. As David pointed out below, the Hilbert scheme of points on a smooth curve is smooth and equal to the symmetric product (since k[t] has only one finite-dimensional quotient of each dimension). </p> <p>I don't know of any other examples, but one of the versions of <a href="http://arxiv.org/abs/math/0411469">Murphy's Law in algebraic geometry</a> is roughly "if you don't have a good reason for a Hilbert scheme to not be horrible, it will be as horrible as you can possibly imagine."</p>
1,990,105
<p>Presume that $(x_n)$ is a sequence s. t.</p> <p>$|x_n-x_{n+1}| \le 2^{-n}$ for all $n \in \mathbb N$</p> <p>Prove that $x_n$ converges.</p> <p>What I've tried to think: since $2^{-n}$ converges to 0, and the difference between the terms $x_n$ and $x_{n+1}$ is smaller or equal to it, then $x_n$ must be a cauchy sequence and converge. How do I make this an actual proof?</p>
hamam_Abdallah
369,188
<p>For each $n,p&gt;0$,</p> <p>by the triangle inequality,</p> <p>$|x_{n+p}-x_n|\leq2^{-n}(2^{-p+1}+2^{-p+2}+\dots+2^{-1}+1)$ $= 2^{1-n}(1-2^{-p})\leq 2^{1-n}$.</p> <p>Since $\lim_{n\to+\infty}2^{1-n}=0$, for every $\epsilon&gt;0$ there is $N\in \mathbb N$ with $2^{1-N}&lt;\epsilon$, and then</p> <p>$(\forall n&gt;N) (\forall p&gt;0)\quad |x_{n+p}-x_n|\leq 2^{1-n}\leq 2^{1-N}&lt;\epsilon.$</p> <p>Hence $(x_n)$ is a Cauchy sequence.</p>
125,834
<p>Is there a general formula for the number of distinguishable arrangements of length $\ell$ of a sequence of size $s$, given the repeated elements? So,</p> <p>For example, would there be a general formula to solve the problem:</p> <p>How many distinguishable 5-letter arrangements are there of the word "PROBABILITY" ?</p> <p>I'm looking for a general formula.</p>
André Nicolas
6,312
<p>I assume the problem is to find the number of $5$-letter words that can be formed, where we can use a letter no more times than it appears in PROBABILITY.</p> <p>There is no useful general formula. The closest I could get would be to use generating functions, which are certainly overkill for the concrete problem you mentioned. I think it is best to attack this problem by breaking the words up into types, then counting and adding. We have two $B$, two I, and seven single letters. Our words are of four types:</p> <p>(i) Two B, two I, and a single;</p> <p>(ii) Two B, the rest singles;</p> <p>(iii) Two I, the rest singles;</p> <p>(iv) All singles.</p> <p>For type (i), we can choose where the B's go in $\binom{5}{2}$ ways. For each of these ways we can choose where the I's go in $\binom{3}{2}$ ways, and for each choice of location for the B's and I's choose a letter from the remaining $7$, for a total of $\binom{5}{2}\binom{3}{2}(7)$.</p> <p>For type (ii), decide where the B's go in $\binom{5}{2}$ ways. Then we have $8$ letters left, since the I can be used, but not twice. We can fill the $3$ empty spots in $(8)(7)(6)$ ways, for a total of $\binom{5}{2}(8)(7)(6)$ ways.</p> <p>For type (iii) the analysis and answer are the same as for (ii).</p> <p>For type (iv), we have $9$ distinct letters, and can do the job in $(9)(8)(7)(6)(5)$ ways.</p>
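<p>Since there is no tidy general formula, a brute-force count is a useful cross-check; the following Python sketch enumerates the distinct $5$-letter words directly and compares the result with the sum of the four cases above:</p> <pre><code>from itertools import permutations
from math import comb

# all 5-letter words using each letter no more often than it occurs in PROBABILITY
distinct_words = set(permutations("PROBABILITY", 5))
print(len(distinct_words))

# the four cases from the answer
cases = (comb(5, 2)*comb(3, 2)*7      # two B, two I, one single
         + comb(5, 2)*8*7*6           # two B, rest singles
         + comb(5, 2)*8*7*6           # two I, rest singles
         + 9*8*7*6*5)                 # all singles
print(cases)                          # the two printed numbers agree
</code></pre>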
2,672,497
<p>$$\lim _{n\to \infty }\sum _{k=1}^n\frac{1}{n+k+\frac{k}{n^2}}$$ I unsuccessfully tried to find two different Riemann Sums converging to the same value close to the given sum so I could use the Squeeze Theorem. Is there any other way to solve this?</p>
Mark Viola
218,419
<blockquote> <p>I thought it might be instructive to present a method that relies only on the <strong>squeeze theorem</strong>. To that end, we proceed.</p> </blockquote> <hr> <p>First, it is trivial to see that </p> <p>$$\sum_{k=1}^n \frac{1}{n+k+k/n^2}\le \sum_{k=1}^n \frac{1}{n+k}\tag 1$$</p> <hr> <p>Second, note that we have</p> <p>$$\begin{align} \sum_{k=1}^n \frac{1}{n+k+k/n^2}&amp;=\sum_{k=1}^n \frac{1}{(n+k)\left(1+\frac{k/n^2}{n+k}\right)}\\\\ &amp;\ge \sum_{k=1}^n \frac{1}{(n+k)}\left(1-\frac{k/n^2}{n+k}\right)\\\\ &amp;\ge \left(1-\frac1{n^2}\right)\sum_{k=1}^n \frac{1}{n+k}\tag2 \end{align}$$</p> <hr> <p>Putting $(1)$ and $(2)$ together reveals</p> <p>$$\left(1-\frac1{n^2}\right)\sum_{k=1}^n \frac{1}{n+k}\le \sum_{k=1}^n \frac{1}{n+k+k/n^2}\le \sum_{k=1}^n \frac{1}{n+k}$$</p> <p>whence application of the squeeze theorem yields the coveted limit</p> <p>$$\lim_{n\to \infty}\sum_{k=1}^n \frac{1}{n+k+k/n^2}=\log(2)$$</p> <p>as expected!</p>
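<p>A quick numerical illustration of the convergence (using the known limit $\sum_{k=1}^n \frac1{n+k}\to\log 2$):</p> <pre><code>from math import log

def S(n):
    return sum(1.0 / (n + k + k / n**2) for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    print(n, S(n), abs(S(n) - log(2)))   # the error shrinks as n grows
</code></pre>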
1,018,672
<blockquote> <p><span class="math-container">$$\int_0^{\infty} \frac{1}{x^3-1}dx$$</span></p> </blockquote> <p>What I did:</p> <p><span class="math-container">$$\lim_{\epsilon\to0}\int_0^{1-\epsilon} \frac{1}{x^3-1}dx+\lim_{\epsilon\to0}\int_{1+\epsilon}^{\infty} \frac{1}{x^3-1}dx$$</span></p> <hr /> <p><span class="math-container">$$\lim_{\epsilon\to0}\int_0^{1-\epsilon}\frac{1}{3(x-1)}-\frac{2x+1}{6(x^2+x+1)}-\frac{1}{2(x^2+x+1)}dx+\lim_{\epsilon\to0}\int_{1+\epsilon}^{\infty}\frac{1}{3(x-1)}-\frac{2x+1}{6(x^2+x+1)}-\frac{1}{2(x^2+x+1)}dx$$</span></p> <hr /> <p><span class="math-container">$$\lim_{\epsilon\to0}\int_0^{1-\epsilon}\frac{1}{3(x-1)}-\frac{2x+1}{6(x^2+x+1)}-\frac{1}{2[(x+\frac{1}{2})^2+\frac{3}{4}]}dx+\lim_{\epsilon\to0}\int_{1+\epsilon}^{\infty}\frac{1}{3(x-1)}-\frac{2x+1}{6(x^2+x+1)}-\frac{1}{2[(x+\frac{1}{2})^2+\frac{3}{4}]}dx$$</span></p> <hr /> <p><span class="math-container">$$[\frac{1}{3}ln(x-1)-\frac{1}{6}ln(x^2+x+1)-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{0}^{1-\epsilon}+[\frac{1}{3}ln(x-1)-\frac{1}{6}ln(x^2+x+1)-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{1+\epsilon}^{\infty}$$</span></p> <hr /> <p><span class="math-container">$$[\frac{1}{6}(2ln(x-1)-ln(x^2+x+1))-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{0}^{1-\epsilon}+[\frac{1}{6}ln{2(x-1})-\frac{1}{6}ln(x^2+x+1)-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{1+\epsilon}^{\infty}$$</span></p> <hr /> <p><span class="math-container">$$[\frac{1}{6}ln(\frac{(x-1)^2}{x^2+x+1})-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{0}^{1-\epsilon}+[\frac{1}{6}ln(\frac{(x-1)^2}{x^2+x+1})-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{1+\epsilon}^{\infty}$$</span></p> <hr /> <p><span class="math-container">$$[\frac{1}{6}ln(\frac{x^2-2x+1}{x^2+x+1})-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{0}^{1-\epsilon}+[\frac{1}{6}ln(\frac{x^2-2x+1}{x^2+x+1})-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{1+\epsilon}^{\infty}$$</span></p> <hr /> <p><span class="math-container">$$\lim_{\epsilon\to0}[\frac{1}{6}ln(\frac{(1-\epsilon)^2-2(1-\epsilon)+1}{(1-\epsilon)^2+1-\epsilon+1})-\frac{1}{\sqrt3}\arctan(\frac{2(1-\epsilon)+1}{\sqrt3})+\frac{1}{\sqrt3}\arctan(\frac{1}{\sqrt3})]+\lim_{\epsilon\to 0} [ \frac{1}{6}ln(\frac{(\infty)^2-2(\infty)+1}{(\infty)^2+(\infty)+1})+\cdots]$$</span></p> <hr /> <blockquote> <p>This is where my problem is, what is :</p> <p><span class="math-container">$$ \frac{1}{6}ln(\frac{(\infty)^2-2(\infty)+1}{(\infty)^2+(\infty)+1})$$</span></p> </blockquote> <p>^^^ If I know past this, I know how to proceed. The only thing stopping me is this ^^^. Please help.</p>
Ari
185,398
<p>Remember that in a limit to infinity, only the highest-power terms in the numerator and denominator matter: the ratio inside the logarithm tends to $1$. Thus your expression tends to $\frac{1}{6}\ln(1)=0$.</p>
251,430
<p>Consider the measure space $(\mathbb{Z},\mathcal{P}(\mathbb{Z}),\#)$, where $\#$ is the counting measure on $\mathbb{Z}$ and $\mathcal{P}(\mathbb{Z})$ is its power set.</p> <p>I would like to show that for any measurable function we have $\int f(n)d\#(n)=\sum_{n}f(n)$.</p> <p>This is what I have done: Let $x\in\mathbb{Z}$ and consider the indicator function $1_{\{x\}}$. Then $$\int_\mathbb{Z} fd\#=\int_\mathbb{Z} 1_{\{x\}}d\#=\#\{x\}=1,$$ for $f=1_{\{x\}}$. Next, for a step function $f=\sum_{k=-n}^na_k1_{\{x_k\}}$ (where $x_k\in\mathbb{Z}$ and $a_k$ are real rumbers for all $k$) we have $$\int_\mathbb{Z} fd\#=\sum_{k=-n}^na_k\int_\mathbb{Z}1_{\{x\}}d\#=\sum_{k=-n}^na_k.$$</p> <p>How do I finish this proof? I still need to prove the statement for an arbitrarily measurable function. </p>
saz
36,150
<p>Hint: Let $f$ be a measurable function, $f \geq 0$. Then there exists a sequence $(f_n)_n$ of step functions such that $f = \sup_n f_n$. Now apply <a href="http://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem" rel="nofollow">monotone convergence</a> and use the formula for the step functions (which you already proved). </p> <p>If $f$ is an arbitrary measurable function you can write $f$ as $f=f^+-f^-$ where $f^+$, $f^- \geq 0$ are measurable functions.</p>
607,917
<p>Any Help solving this question ?</p> <blockquote> <p>a) Find ONE solution <span class="math-container">$\overline x\in\Bbb Z/325\Bbb Z$</span> such that <span class="math-container">$x^2\equiv-1\pmod{325}$</span>. (Hint: CRT and lifting.)</p> <p>b) How many solutions <span class="math-container">$\overline x$</span> to the above equation are there, and why?</p> </blockquote>
lab bhattacharjee
33,337
<p>We can prove by using <a href="http://mathworld.wolfram.com/DiscreteLogarithm.html" rel="nofollow">Discrete Logarithm</a> and <a href="http://www.proofwiki.org/wiki/Solution_of_Linear_Congruence" rel="nofollow">Linear Congruence</a> Theorem that $x^2\equiv a\pmod m$ has zero or two solutions if $m$ has a primitive root.</p> <p>Now, $\displaystyle x^2\equiv-1\pmod{325}\equiv-1\pmod{25}$</p> <p>$\displaystyle x^2\equiv-1\equiv49\pmod{25}\equiv7^2\implies x\equiv\pm7\pmod{25}\ \ \ \ (1)$</p> <p>Again, $x^2\equiv-1\pmod{325}\equiv-1\pmod{13}$</p> <p>$\displaystyle x^2\equiv-1\pmod{13}\equiv25\equiv5^2\implies x\equiv\pm5\pmod{13}\ \ \ \ (2)$</p> <p>Now apply CRT on $(1),(2)$ to find four in-congruent solutions </p>
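<p>A brute-force check confirms that there are exactly four solutions and that they match the CRT description (the script below is only a verification aid):</p> <pre><code>sols = [x for x in range(325) if (x * x + 1) % 325 == 0]
print(sols, len(sols))                        # four solutions
# each solution reduces to +-7 mod 25 and +-5 mod 13, as in the CRT argument
print([(x % 25, x % 13) for x in sols])
</code></pre>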
182,024
<p>Following my previous question <a href="https://math.stackexchange.com/q/182000/21813">Relationship between cross product and outer product</a> where I learnt that the Exterior Product generalises the Cross Product whereas the Inner Product generalises the Dot Product, I was wondering if the simple map that I have drawn below is at all an accurate representation of the links between these different products?</p> <p>Vertical lines denote generalisation-specification, horizontal lines denote &quot;in opposition to&quot;. I'm just trying to get a quick overview before I dive in. Thanks.</p> <p><img src="https://i.stack.imgur.com/1RE3I.jpg" alt="enter image description here" /></p>
celtschk
34,930
<p>The relation between the inner product of vectors and the interior product is that if you have a metric tensor (and thus a canonical relation between vectors and covectors = $1$-forms), the inner product of two vectors is the interior product of one of the vectors and the $1$-form associated with the other one. That is, if $g$ is the metric tensor, then the inner product of the vectors $v$ and $w$ is $g(v,w)$, and the $1$-form $\omega$ associated with $v$ is defined by $\omega(w)=g(v,w)$. Then it is obvious that $\iota_w(\omega) = \omega(w) = g(v,w)$.</p> <p>The outer product is, as noted in answer to the other question you referred to, related to the tensor product. Indeed, if we associate row vectors with $1$-forms and column vectors with vectors, then we can write (using Einstein summation convention) the outer product of the vector $w = w^ie_i$ and the $1$-form $\omega = \omega_i\,e^i$ as the $(1,1)$ tensor $M = w^i\omega_j e_i\otimes e^j$ which describes an object that maps vectors to vectors. Its relation to the inner product is that you get the inner product of $w$ and $\omega$ by contracting the two indices of $M$ (which in the language of matrices corresponds to the trace of $M$).</p> <p>The exterior product is related to the tensor product in that the exterior product of two forms (a form is a skew-symmetric tensor of type $(0,p)$) is just the antisymmetrization of the tensor product.</p> <p>The cross product is a speciality of the three-dimensional space; here the space of $2$-forms has the same dimension as the space of $1$-forms; indeed, given a metric, the hodge star maps between them. Since the metric also allows to associate vectors and $1$-forms, you can define the cross product of $v$ and $w$ by the following procedure: Determine the $1$-forms corresponding to $v$ and $w$, calculate their exterior product (which is a $2$-form), apply the Hodge star to the result (which, given that we are in three dimensions, again results in a $1$-form), and finally determine the vector corresponding to that $1$-form.</p>
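<p>A small numerical illustration of two of these relations, using the Euclidean metric on $\mathbb{R}^3$ so that vectors and $1$-forms can be identified componentwise (the two vectors are arbitrary sample data): contracting the outer product gives the inner product, and applying the Levi-Civita symbol (the Hodge star in this setting) to the wedge of two vectors gives the cross product.</p> <pre><code>import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([-2.0, 0.5, 4.0])

# outer product of w with the 1-form identified with v; its contraction (trace)
# reproduces the inner product g(v, w)
M = np.outer(w, v)
print(np.trace(M), np.dot(v, w))

# Levi-Civita symbol, playing the role of the Hodge star on 2-forms in R^3
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

wedge = np.outer(v, w) - np.outer(w, v)                        # (v wedge w)_{ij}
hodge = 0.5 * np.tensordot(eps, wedge, axes=([1, 2], [0, 1]))  # star of the 2-form
print(hodge, np.cross(v, w))                                   # the two agree
</code></pre>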
1,394,550
<p>I have tried to prove the following inequality:</p> <p>$$ \left(1+\frac{\log n}{n}\right)^n \gt\frac{n+1}{2}, \mbox{for}\;n\in\{2,3,\ldots\} $$</p> <p>which seems to be correct (confirmed by numerical result).</p> <p>Can anyone give me some help or hint? Thanks a lot.</p>
Björn Friedrich
203,412
<p>I just want to give a hint: Since the inequality is claimed to be true for all natural numbers greater than one, I would try a <a href="https://en.wikipedia.org/wiki/Mathematical_induction" rel="nofollow">proof by induction</a>. It consists of four steps:</p> <ol> <li>You show that the inequality is true for $n = 2$.</li> <li>You assume that the inequality is true for some $n \geq 2$.</li> <li>You show that it is then true for $n + 1$.</li> <li>You conclude that the inequality holds for all $n \geq 2$.</li> </ol>
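<p>Before setting up the induction, a quick numerical check of the first few cases can build confidence that the statement is true (this is of course not part of the proof):</p> <pre><code>from math import log

for n in range(2, 21):
    lhs = (1 + log(n) / n) ** n
    rhs = (n + 1) / 2
    print(n, lhs &gt; rhs, round(lhs, 3), rhs)
</code></pre>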
3,324,803
<p><strong>definition.</strong></p> <p>The phase flow of the differential equation <span class="math-container">$\dot{x}=\vec{v}\ (x)$</span> is the one-parameter diffeomorphism group for which <span class="math-container">$\vec{v}$</span> is the phase velocity vector field, namely, <span class="math-container">$$ \vec{v}=\frac{d}{dt} \Big|_{t=0} (g^tx) $$</span></p> <p>In the book, for the problem to find the phase flow of <span class="math-container">$\dot{x}=x-1$</span>, the provided answer, <span class="math-container">$g^tx=(x-1)e^t+1$</span>, is easy to verify. However, I have not idea to solve this problem. Any helps?</p>
Fareed Abi Farraj
584,389
<p>I don't know if this is what you want, but I hope it helps.</p> <p><span class="math-container">$\log_2(x)=a$</span> means that <span class="math-container">$2^a=x$</span></p> <p><span class="math-container">$\log_3(x+1)=b$</span> means that <span class="math-container">$3^b=x+1$</span></p> <p>To get <span class="math-container">$a+b=5$</span> where <span class="math-container">$3^b$</span> exceeds <span class="math-container">$2^a$</span> by exactly <span class="math-container">$1$</span> (consecutive numbers), it is most likely that <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are integers. (This step is only a guess, not a rigorous deduction.)</p> <p>Now if <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are integers, then of course <span class="math-container">$x$</span> is also an integer, so you'll be searching for <span class="math-container">$2$</span> consecutive numbers where the first is a power of <span class="math-container">$2$</span> and the second is a power of <span class="math-container">$3$</span>, with the condition that <span class="math-container">$a+b=5$</span>. Trying natural numbers, the first pair that satisfies these conditions is <span class="math-container">$8=2^3$</span> and <span class="math-container">$9=3^2$</span>, with <span class="math-container">$3+2=5$</span>, so your only answer is <span class="math-container">$x=8$</span>.</p>
3,324,803
<p><strong>definition.</strong></p> <p>The phase flow of the differential equation <span class="math-container">$\dot{x}=\vec{v}\ (x)$</span> is the one-parameter diffeomorphism group for which <span class="math-container">$\vec{v}$</span> is the phase velocity vector field, namely, <span class="math-container">$$ \vec{v}=\frac{d}{dt} \Big|_{t=0} (g^tx) $$</span></p> <p>In the book, for the problem to find the phase flow of <span class="math-container">$\dot{x}=x-1$</span>, the provided answer, <span class="math-container">$g^tx=(x-1)e^t+1$</span>, is easy to verify. However, I have not idea to solve this problem. Any helps?</p>
Michael Rozenberg
190,319
<p>Let <span class="math-container">$f(x)=\log_3(x+1)+\log_2x.$</span></p> <p>Since <span class="math-container">$f$</span> is increasing, our equation has at most one root.</p> <p><span class="math-container">$8$</span> is a root, so it is the unique root and we are done.</p>
3,324,803
<p><strong>definition.</strong></p> <p>The phase flow of the differential equation <span class="math-container">$\dot{x}=\vec{v}\ (x)$</span> is the one-parameter diffeomorphism group for which <span class="math-container">$\vec{v}$</span> is the phase velocity vector field, namely, <span class="math-container">$$ \vec{v}=\frac{d}{dt} \Big|_{t=0} (g^tx) $$</span></p> <p>In the book, for the problem to find the phase flow of <span class="math-container">$\dot{x}=x-1$</span>, the provided answer, <span class="math-container">$g^tx=(x-1)e^t+1$</span>, is easy to verify. However, I have not idea to solve this problem. Any helps?</p>
B. Goddard
362,009
<p>I tried to think of how a middle schooler might solve this:</p> <p>Let <span class="math-container">$x=2^a$</span>. Then we have</p> <p><span class="math-container">$$\log_3 (2^a+1) + a = 5.$$</span> So that</p> <p><span class="math-container">$$2^a+1 = 3^{5-a}.$$</span></p> <p>Multiply by <span class="math-container">$3^a$</span> to get</p> <p><span class="math-container">$$6^a+3^a = 3^5 = 243.$$</span></p> <p>If the student knows that <span class="math-container">$6^3 = 216,$</span> he knows that <span class="math-container">$a$</span> is pretty close to <span class="math-container">$3$</span>, which does in fact work, giving <span class="math-container">$x=8.$</span></p>
3,694,661
<p>I was stuck on a problem from <a href="https://rads.stackoverflow.com/amzn/click/com/0821804308" rel="nofollow noreferrer" rel="nofollow noreferrer" title="Quite a Fun Recreational Math Book">Mathematical Circles: Russian Experience</a>, which reads as follows:</p> <blockquote> <p><em>Prove that the number <span class="math-container">$6n^3 + 3$</span> cannot be a perfect sixth power of an integer for any natural number n</em>. </p> </blockquote> <p>The problems previous to this dealt with proving that numbers cannot be a cube and cannot be a square. The hints offered to these problem said that a square leaves a remainder of 0 or 1 when divided by 3 or 4, and that a cube leaves a remainder of 0, 1 or 8 when divided by 9. However, for this problem, the hint states that the reader should "experiment by dividing the number by 7 and comparing it remainders of sixth powers divided by 7". </p> <p>Where did that come from? How would the solver figure out that <span class="math-container">$6n^3 + 3$</span> should be divided by 7? Moreover, why are 3 and 4 used in proving facts about squares, and why is 9 used when proving facts about cubes? Was this mainly through trial and error over the years, or is there some obvious fact that I'm blanking out on? </p> <p>Thanks!</p>
Favst
742,787
<p>Here is some motivation for the choice of <span class="math-container">$7$</span> as the modulus, as you asked. The equation that you want to show that has no solutions in the integers is <span class="math-container">$$6n^3 +3 -m^6=0.$$</span> When it comes to polynomial Diophantine equations, especially of the olympiad variety, a common trick is to take everything to one side, look at the equation in a certain modulus <span class="math-container">$q,$</span> substitute in all possible combinations of the residues and show that the expression never equals the zero residue. Both because you want to be efficient in your computations and because you want to reduce the chances of everything cancelling out to zero (this heuristic is not rigorous), the idea is to pick a modulus where the various terms in the expression will take on very few distinct values.</p> <p>As far as I know, there is no general known method of finding the ideal modulus, but there are two general techniques of which I am aware: take advantage of <a href="https://en.wikipedia.org/wiki/Sophie_Germain_prime" rel="nofollow noreferrer">Sophie Germain primes</a> and <a href="https://en.wikipedia.org/wiki/Fermat%27s_little_theorem" rel="nofollow noreferrer">Fermat's little theorem</a>. Sophie Germain primes <span class="math-container">$p$</span> satisfy the fact that <span class="math-container">$2p+1$</span> is also a prime, and <span class="math-container">$3$</span> is a such a prime. By Fermat's little theorem, if <span class="math-container">$p$</span> is a Sophie Germain prime, then <span class="math-container">$$x^{2p}\equiv 1 \pmod{2p+1}$$</span> or <span class="math-container">$x\equiv 0\pmod{2p+1}.$</span> So <span class="math-container">$$x^p\equiv \pm 1 \pmod{2p+1}$$</span> or <span class="math-container">$x\equiv 0\pmod{2p+1}.$</span> This means <span class="math-container">$7$</span> is a really nice modulus because you have a cube whose resides can only be <span class="math-container">$0,1,-1,$</span> and a sixth power whose residues can only be <span class="math-container">$0,1.$</span> Then just compute the <span class="math-container">$2\cdot 3=6$</span> cases and none will work out. </p> <p>By the way, years ago I asked the general question on MathOverflow in this <a href="https://mathoverflow.net/questions/134352/the-modular-arithmetic-contradiction-trick-for-diophantine-equations/">thread</a>. (Sadly, I deleted the email address associated with that account and so can no longer access the account, sigh.)</p>
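<p>The six cases are quick to verify by hand, or with a two-line check such as the following:</p> <pre><code># residues mod 7: 6n^3 + 3 is never congruent to a sixth power
cubes = {n**3 % 7 for n in range(7)}          # {0, 1, 6}
sixth = {m**6 % 7 for m in range(7)}          # {0, 1}
print(cubes, sixth)
print(any((6*c + 3 - s) % 7 == 0 for c in cubes for s in sixth))   # False
</code></pre>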
3,480,890
<p>I do understand pure mathematical concepts of probability space and random variables as a (measurable) functions. </p> <p>The question is: what is the real-world meaning of probability and how can we apply the machinery of probability to the real situations?</p> <p>Ex1: probability of heads for fair coin is 1/2. Which means that "if we will make exactly the same experiment over and over again, we will obtain heads roughly 1/2 of the time". More rigorously, there is some convergences under the hood etc. But this is nonsense: if I will make exactly the same initial conditions, velocity, humidity, wind etc, then I will always obtain the same result. Moreover, I cannot guarantee the same circumstances: we have some planets flying around, and molecules are not in the same places etc. So it is not really possible to make the same experiment twice. Like, you know, one cannot step twice in the same river.</p> <p>Ex2: probability of rain is 0.9 for the next day. The "frequency" idea is really absurd in this case. </p> <p>Ex3: I'm throwing a coin and see the side. You don't see it. I ask you, what is the probability that it comes Heads. This is really somewhat vague...</p>
Robert Israel
8,508
<p>It's exactly the fact that the conditions can't be duplicated exactly that makes a probability model for the experiment viable.</p> <p>With some practice, a person with good fine-motor control can become an expert coin-flipper who can fairly reliably produce either "heads" or "tails" as desired. The "fair coin" probability model might not work well for coin flips by such a person.</p>
3,480,890
<p>I do understand pure mathematical concepts of probability space and random variables as a (measurable) functions. </p> <p>The question is: what is the real-world meaning of probability and how can we apply the machinery of probability to the real situations?</p> <p>Ex1: probability of heads for fair coin is 1/2. Which means that "if we will make exactly the same experiment over and over again, we will obtain heads roughly 1/2 of the time". More rigorously, there is some convergences under the hood etc. But this is nonsense: if I will make exactly the same initial conditions, velocity, humidity, wind etc, then I will always obtain the same result. Moreover, I cannot guarantee the same circumstances: we have some planets flying around, and molecules are not in the same places etc. So it is not really possible to make the same experiment twice. Like, you know, one cannot step twice in the same river.</p> <p>Ex2: probability of rain is 0.9 for the next day. The "frequency" idea is really absurd in this case. </p> <p>Ex3: I'm throwing a coin and see the side. You don't see it. I ask you, what is the probability that it comes Heads. This is really somewhat vague...</p>
Hymns For Disco
736,037
<p>Wikipedia's definition of "randomness" is </p> <blockquote> <p>the apparent lack of pattern or predictability in events</p> </blockquote> <p>The word "apparent" there is important. Just because something can be predicted in theory doesn't mean it's not random. Consider, for example, a pseudo-random number generator in a computer program. It is without a doubt predictable, since the output results deterministically from its seed. However, in the general case where we assume ignorance of the internal state of the generator, each output in itself is considered random.</p> <p>About the coin flip example, it is obvious in principle that given the right parameters and enough accuracy and precision, the outcome of a coin toss is predictable. This is where the question becomes a bit philosophical. What if we choose not to be aware of those parameters? Arguably the outcome is still random, at least to ourselves, but perhaps an independent observer could treat the same event as deterministic if they choose to make themselves aware of the information we ignored.</p>
96,957
<p>Stacks, of varying kinds, appear in algebraic geometry whenever we have moduli problems, most famously the stacks of (marked) curves. But these seem to be to be very geometric in motivation, so I was wondering if there are natural examples of stacks that arise in arithmetic geometry or number theory.</p> <p>To me, as a non-expert, it seems like it's all Galois representations and estimates on various numeric measures (counting points or dimensions of things) based on other numeric things (conductors, heights, etc). </p> <p>I asked this question at M.SE (here : <a href="https://math.stackexchange.com/questions/143746/stacks-in-arithmetic-geometry">https://math.stackexchange.com/questions/143746/stacks-in-arithmetic-geometry</a> please vote to close if you can) because I thought it a bit too 'recreational', but with no success. What I am after is not just stacks which can be seen as arithmetic using number fields or rings of integers, but which are actually used in number-theoretic problems, or have a number-theoretic origin. Maybe there aren't any, but it doesn't hurt to ask.</p> <p>EDIT: I have belatedly made this question CW, as I've realised, too late, that there is clearly not one correct answer.</p>
stankewicz
3,384
<p>Perhaps this doesn't count as "modern" but stacks are ubiquitous in the 1972 Antwerp paper of Deligne and Rapoport. Recall that the $\Gamma_0(N)$ moduli problem is not representable, and so they must frequently work directly with stacks before moving to the coarse moduli scheme we all know and love.</p>
390,644
<p>I'm trying to solve this recurrence relation:</p> <p>$$ a_n = \begin{cases} 0 &amp; \mbox{for } n = 0 \\ 5 &amp; \mbox{for } n = 1 \\ 6a_{n-1} - 5a_{n-2} + 1 &amp; \mbox{for } n &gt; 1 \end{cases} $$</p> <p>I calculated generator function as: $$ A = \frac{31x - 24x^2}{1 - 6x + 5x^2} + \frac{x^3}{(1-x)(1-6x+5x^2)} = \frac{31x - 24x^2}{(x-1)(x-5)} + \frac{x^3}{(1-x)(x-1)(x-5)} $$</p> <p>(I'm not sure if that's right)</p> <p>and its partial fractions decomposition looks like: $$ A = \left(\frac{-7}{4} \cdot \frac{1}{x-1} - \frac{445}{4} \cdot \frac{1}{x-5}\right) + \left( \frac{39}{16} \cdot \frac{1}{x-5} + \frac{3}{4} \cdot \frac{1}{(x-1)^2} - \frac{375}{16} \cdot \frac{1}{x-5} \right) $$ (again - I'm not sure if it's ok)</p> <p>I'm stuck here... From solutions I know that I should get: $$ a_n = \frac{-21}{16} - \frac{1}{4}n + \frac{21}{16}5^n $$</p> <p>but I have no idea how it's solved... I hope somebody can help me (I spend more than 3h trying to solve this myself...)</p>
TZakrevskiy
77,314
<p>Let's write your recurrence relation for $n$ and $n+1$:</p> <p>$a_{n}-6a_{n-1}+5a_{n-2}-1=0$</p> <p>$a_{n+1}-6a_{n }+5a_{n-1}-1=0$</p> <p>Now we subtract one from another: $a_{n+1}-7a_{n }+11a_{n-1}-5a_{n-2}=0$ (relation 2)</p> <p>Then, from Theorem on <a href="http://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow">wiki</a> we build a characteristic polynomial $x^3-7x^2+11x-5 $ with roots $1,1,5$. Hence, by that theorem, the solution of our recurrence relation (2) is $c_1 1^n+ c_2n1^n+c_3 5^n$. All you have to do now is to find those constants $c_i$ from initial conditions and from the fact that it should satisfy your initial recurrence relation.</p>
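<p>If it helps to see the constants worked out, here is a short sympy sketch that fits $c_1+c_2n+c_35^n$ to the first three terms and checks the result against the recurrence (it reproduces the closed form quoted in the question):</p> <pre><code>import sympy as sp

n = sp.Symbol('n')
c1, c2, c3 = sp.symbols('c1 c2 c3')

a = [0, 5]
for k in range(2, 12):
    a.append(6*a[-1] - 5*a[-2] + 1)

sol = sp.solve([c1 + c2*k + c3*5**k - a[k] for k in range(3)], [c1, c2, c3])
print(sol)                                   # c1 = -21/16, c2 = -1/4, c3 = 21/16

closed = (c1 + c2*n + c3*5**n).subs(sol)
print(all(closed.subs(n, k) == a[k] for k in range(12)))   # True
</code></pre>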
390,644
<p>I'm trying to solve this recurrence relation:</p> <p>$$ a_n = \begin{cases} 0 &amp; \mbox{for } n = 0 \\ 5 &amp; \mbox{for } n = 1 \\ 6a_{n-1} - 5a_{n-2} + 1 &amp; \mbox{for } n &gt; 1 \end{cases} $$</p> <p>I calculated generator function as: $$ A = \frac{31x - 24x^2}{1 - 6x + 5x^2} + \frac{x^3}{(1-x)(1-6x+5x^2)} = \frac{31x - 24x^2}{(x-1)(x-5)} + \frac{x^3}{(1-x)(x-1)(x-5)} $$</p> <p>(I'm not sure if that's right)</p> <p>and its partial fractions decomposition looks like: $$ A = \left(\frac{-7}{4} \cdot \frac{1}{x-1} - \frac{445}{4} \cdot \frac{1}{x-5}\right) + \left( \frac{39}{16} \cdot \frac{1}{x-5} + \frac{3}{4} \cdot \frac{1}{(x-1)^2} - \frac{375}{16} \cdot \frac{1}{x-5} \right) $$ (again - I'm not sure if it's ok)</p> <p>I'm stuck here... From solutions I know that I should get: $$ a_n = \frac{-21}{16} - \frac{1}{4}n + \frac{21}{16}5^n $$</p> <p>but I have no idea how it's solved... I hope somebody can help me (I spend more than 3h trying to solve this myself...)</p>
Adi Dani
12,848
<p>$$a_0 =0,a_1=5,a_n=6a_{n-1} - 5a_{n-2} + 1, n &gt; 1$$</p> <p>$$f(x)=\sum_{n=0}^{\infty}a_nx^n=5x+\sum_{n=2}^{\infty}a_nx^n=$$</p> <p>$$=5x+\sum_{n=2}^{\infty}(6a_{n-1} - 5a_{n-2} + 1)x^n=$$</p> <p>$$=5x+6\sum_{n=2}^{\infty}a_{n-1}x^n-5\sum_{n=2}^{\infty}a_{n-2}x^n+\sum_{n=2}^{\infty}x^n=$$</p> <p>$$=5x+6x\sum_{n=2}^{\infty}a_{n-1}x^{n-1}-5x^2\sum_{n=2}^{\infty}a_{n-2}x^{n-2}+\frac{x^2}{1-x}=$$</p> <p>$$=5x+6xf(x)-5x^2f(x)+\frac{x^2}{1-x}$$ from above follow that g.f. is</p> <p>$$f(x)=\frac{5x+\frac{x^2}{1-x}}{5x^2-6x+1}=\frac{5x-4x^2}{(1-x)(5x^2-5x-(x-1))}=$$ $$=\frac{x(5-4x)}{(1-x)(5x(x-1)-(x-1))}=\frac{x(5-4x)}{(1-x)(x-1)(5x-1)}$$</p>
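<p>One can double-check this generating function by expanding it as a power series and comparing the coefficients with the terms produced by the recurrence, e.g. with sympy:</p> <pre><code>import sympy as sp

x = sp.Symbol('x')
f = (5*x + x**2/(1 - x)) / (5*x**2 - 6*x + 1)

a = [0, 5]
for k in range(2, 10):
    a.append(6*a[-1] - 5*a[-2] + 1)

g = sp.series(f, x, 0, 10).removeO()
coeffs = [g.coeff(x, k) for k in range(10)]
print(coeffs)
print(all(c == v for c, v in zip(coeffs, a)))   # True
</code></pre>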
1,771,279
<p>Let $B=\{x \in \ell_2, ||x|| \leqslant 1\}$ - Hilbert ball $X \subset B$ - open convex connected set in Hilbert ball, $\bar{X}$ - closure of $X$. $F: \bar{X} \to \bar{X}$ - continuous map that holomorphic into each point of $X$. Is it true in general, that $F$ has fixed point?</p>
David C. Ullrich
248,223
<p>$X=\{x:1/2&lt;||x||&lt;1\}$, $f(x)=-x$. (This was before convexity was added to the hypotheses...)</p>
1,541,800
<p>I happened to stumble upon the following matrix: $$ A = \begin{bmatrix} a &amp; 1 \\ 0 &amp; a \end{bmatrix} $$</p> <p>And after trying a bunch of different examples, I noticed the following remarkable pattern. If $P$ is a polynomial, then: $$ P(A)=\begin{bmatrix} P(a) &amp; P'(a) \\ 0 &amp; P(a) \end{bmatrix}$$</p> <p>Where $P'(a)$ is the derivative evaluated at $a$.</p> <p>Futhermore, I tried extending this to other matrix functions, for example the matrix exponential, and wolfram alpha tells me: $$ \exp(A)=\begin{bmatrix} e^a &amp; e^a \\ 0 &amp; e^a \end{bmatrix}$$ and this does in fact follow the pattern since the derivative of $e^x$ is itself!</p> <p>Furthermore, I decided to look at the function $P(x)=\frac{1}{x}$. If we interpret the reciprocal of a matrix to be its inverse, then we get: $$ P(A)=\begin{bmatrix} \frac{1}{a} &amp; -\frac{1}{a^2} \\ 0 &amp; \frac{1}{a} \end{bmatrix}$$ And since $f'(a)=-\frac{1}{a^2}$, the pattern still holds!</p> <p>After trying a couple more examples, it seems that this pattern holds whenever $P$ is any rational function.</p> <p>I have two questions:</p> <ol> <li><p>Why is this happening?</p></li> <li><p>Are there any other known matrix functions (which can also be applied to real numbers) for which this property holds?</p></li> </ol>
ASCII Advocate
260,903
<p>If $e$ satisfies $Xe=eX$ and $e^k=0$, then $f(X+e)=f(X) + ef'(X)+\frac{e^2}{2}f''(X) + \dots + f^{(k-1)}(X)\frac{e^{k-1}}{(k-1)!}$ (a finite sum with $k$ terms). </p> <p>This is true for polynomials and thus for power series that converge in a neighborhood of $X$.</p>
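<p>The $2\times 2$ pattern from the question is easy to confirm symbolically; here is a small sympy check with one sample polynomial and with the inverse (the polynomial is arbitrary):</p> <pre><code>import sympy as sp

a, t = sp.symbols('a t')
A = sp.Matrix([[a, 1], [0, a]])

P = t**3 - 4*t + 7                      # an arbitrary sample polynomial
PA = A**3 - 4*A + 7*sp.eye(2)
expected = sp.Matrix([[P.subs(t, a), sp.diff(P, t).subs(t, a)],
                      [0, P.subs(t, a)]])
print((PA - expected).applyfunc(sp.simplify))     # zero matrix

# the same pattern for f(x) = 1/x, interpreted as the matrix inverse
inv_expected = sp.Matrix([[1/a, -1/a**2], [0, 1/a]])
print((A.inv() - inv_expected).applyfunc(sp.simplify))   # zero matrix
</code></pre>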
16,797
<p>Is there any good way to approximate following integral?<br> $$\int_0^{0.5}\frac{x^2}{\sqrt{2\pi}\sigma}\cdot \exp\left(-\frac{(x^2-\mu)^2}{2\sigma^2}\right)\mathrm dx$$<br> $\mu$ is between $0$ and $0.25$, the problem is in $\sigma$ which is always positive, but it can be arbitrarily small.<br> I was trying to expand it using Taylor series, but terms looks more or less this $\pm a_n\cdot\frac{x^{2n+3}}{\sigma^{2n}}$ and that can be arbitrarily large, so the error is significant.</p>
Ross Millikan
1,827
<p>If you write y=x^2 and pull the constants out you have $$\frac{1}{2\sqrt{2\pi}\sigma}\int_0^{0.25}\sqrt{y}\cdot \exp(-\frac{(y-\mu )^2}{2\sigma ^2})dy$$ If $\sigma$ is very small, the contribution will all come from a small area in $y$ around $\mu$. So you can set $\sqrt{y}=\sqrt{\mu}$ and use your error function tables for a close approximation. A quick search didn't turn up moments of $\sqrt{y}$ against the normal distribution, but maybe they are out there.</p>
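<p>As a rough illustration of how good the $\sqrt{\mu}$ approximation is, one can compare it with direct numerical quadrature of the original integral (the parameter values below are arbitrary; the agreement improves as $\sigma$ shrinks):</p> <pre><code>import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def exact(mu, sigma):
    f = lambda x: x**2 / (np.sqrt(2*np.pi)*sigma) * np.exp(-(x**2 - mu)**2 / (2*sigma**2))
    # tell quad where the narrow peak sits, so it is not missed
    return quad(f, 0.0, 0.5, points=[np.sqrt(mu)])[0]

def approx(mu, sigma):
    # 0.5*sqrt(mu) * P(0 &lt;= Y &lt;= 0.25) with Y ~ N(mu, sigma^2)
    return 0.5*np.sqrt(mu) * (norm.cdf((0.25 - mu)/sigma) - norm.cdf((0.0 - mu)/sigma))

for mu, sigma in [(0.1, 0.05), (0.1, 0.02), (0.1, 0.005)]:
    print(mu, sigma, exact(mu, sigma), approx(mu, sigma))
</code></pre>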
16,797
<p>Is there any good way to approximate following integral?<br> $$\int_0^{0.5}\frac{x^2}{\sqrt{2\pi}\sigma}\cdot \exp\left(-\frac{(x^2-\mu)^2}{2\sigma^2}\right)\mathrm dx$$<br> $\mu$ is between $0$ and $0.25$, the problem is in $\sigma$ which is always positive, but it can be arbitrarily small.<br> I was trying to expand it using Taylor series, but terms looks more or less this $\pm a_n\cdot\frac{x^{2n+3}}{\sigma^{2n}}$ and that can be arbitrarily large, so the error is significant.</p>
Community
-1
<p>Not an answer, but might still be helpful. Using the variable substitution that Ross mentions in his answer, we can treat the simpler case $\mu=0$ more easily. </p> <p>For the following integral, Wolfram Alpha tells us that (I hope I did not make a transcription error here):</p> <p>$$\int_0^\infty \sqrt{y}e^{-y^2/(2\sigma^2)}dy = \frac{\sigma^{3/2}\Gamma(3/4)}{2^{1/4}}.$$</p> <p>But your problem goes from $0$ to $0.25$, so some approximations are needed.</p> <p>Wolfram Alpha query: http://www.wolframalpha.com/input/?i=Integrate[Sqrt[x]+Exp[-x^2%2F%282+s^2%29]%2C{x%2C0%2Cinf}]</p>
2,796,618
<p>I am trying to</p> <p>i) determine the infimum</p> <p>ii) show that there's a function for which $\int_{0}^{1} {f'(x)}^2 dx$ is the infimum</p> <p>iii) show if such function is unique.</p> <p>I tried out several functions that suit the given condition, but couldn't see how $\int_{0}^{1} {f'(x)}^2 dx$ changes as $f(x)$ changes. How could we solve this problem?</p>
Botond
281,471
<p>So we want to minimize $$J[y]=\int_{0}^{1} (y'(x))^2 \mathrm{d}x.$$ The Lagrange function is $L(x, y, y')=(y')^2$, and a minimizer must satisfy the Euler–Lagrange equation $$\frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{\partial L}{\partial y'}\right)=\frac{\partial L}{ \partial y},$$ which here reduces to $2y''=0$, i.e. $y$ is an affine function of $x$. You then just need to fit the constants to the constraints of the problem.</p>
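<p>If it is useful, sympy can derive and solve this Euler–Lagrange equation directly (a sketch; the boundary conditions of the original problem then determine the particular affine function):</p> <pre><code>import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.Symbol('x')
y = sp.Function('y')

L = y(x).diff(x)**2
eqs = euler_equations(L, y(x), x)
print(eqs)                       # the Euler-Lagrange equation: -2*y''(x) = 0
print(sp.dsolve(eqs[0], y(x)))   # y(x) = C1 + C2*x, i.e. affine functions
</code></pre>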
1,999,352
<p>so what really is the meaning of a metric space and why is it so important in topology?</p>
Lee Mosher
26,501
<p>There is no single mathematical idea of "space". </p> <p>Each different kind of space --- vector space; metric space; topological space --- is studied in a different course of mathematics. Googling is not going to get you to a good understanding of all of these different kinds of spaces. You'll have to do some serious studying.</p> <p>For example, to study vector spaces, take a course (or read a book) on linear algebra; there are a zillion such books.</p> <p>To study metric spaces, take a course, or read a book, on metric spaces. There are a few good books devoted to metric spaces. I like <a href="http://rads.stackoverflow.com/amzn/click/0521318971" rel="nofollow noreferrer">this book</a>, and there are other good ones. Metric spaces are also studied in any topology book.</p> <p>To study topological spaces, take a course, or read a book, on topology.</p>
82,350
<p>Consider three circles of radius $1$ in $\mathbb{R}^3$, linked with each other in the same arrangement as three fibers of the Hopf fibration. Now thicken the circles up into non-overlapping standard round Euclidean solid tori of equal thickness. Allowing the tori to move, there will be some maximum thickness (distance from the core circle to the boundary of the solid torus) before the tori must overlap each other.</p> <p>I'm interested in the case that the three tori have $3$-fold rotational symmetry, as in the first image here (I think it might be possible to do better with a less symmetrical configuration).</p> <p><img src="https://i.stack.imgur.com/yHCtX.png" alt="Hopf tori and diagram"> <a href="http://www.ms.unimelb.edu.au/~segerman/misc/hopf_tori_and_diagram.png" rel="nofollow noreferrer">(source)</a></p> <p>If realised as a physical object this should have the interesting properly that the only way in which the tori can move is rotation along their axes.</p> <p>I have a numerical approximation to the best arrangement of the tori, given as follows:</p> <p>Set up a coordinate system as in the second image, with one circle (of radius $1$) centered on the $x$-axis, at distance $r$ from the origin, and rotated by angles $\theta$ and $\omega$ from the axes. The vectors $U$ and $V$ give the orientation of the circle, and are given by $$U = (\cos(\omega), \sin(\omega), 0), \qquad V=(-\sin(\omega)\sin(\theta), \cos(\omega)\sin(\theta), \cos(\theta)).$$</p> <p>The other two circles are copies of this one, rotated by $2\pi/3$ and $4\pi/3$ about the $z$-axis. The approximation I have is $r=0.4950, \omega = 0.0000, \theta = -0.8561$, with resulting distance between circles of $0.64576$ (and so torus thickness is $0.32884$). These are accurate to around $4$ decimal places. The arrangement in the picture is this approximation.</p> <p><strong>Questions:</strong></p> <ol> <li>Why does it appear that $\omega=0$? It doesn't seem obvious to me that this should be true. Is there a symmetry argument?</li> <li>Is there a closed form solution?</li> </ol> <p>A closed form solution is probably too much to hope for. In particular, this paper: <a href="http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5389850" rel="nofollow noreferrer">Finding the distance between two circles in three-dimensional space</a> shows that there is no closed form solution for the distance between arbitrary pairs of circles in $\mathbb{R}^3$. But maybe there is some symmetry argument that helps?</p> <p><strong>Edit:</strong> Better estimates: $r=0.4950197, \theta=-0.8560281$. By Ian Agol's answer, $w=0$. Here is another mysterious symmetry in the numerical solution. Set up a coordinate system on the surface of each torus. Each torus is parameterised by $\alpha, \beta\in[-\pi,\pi)$. The parameter $\alpha$ is in the longitude direction, with 0 at the vector $U$ and $V$ nearest in the positive direction. The parameter $\beta$ is in the meridian direction, with 0 at the biggest longitude (i.e. on the outside of the torus), and the direction $U\times V$ closest in the positive direction. 
With these coordinates, we can plot the points of contact with the other two tori ($\alpha$ on the x-axis, $\beta$ on the y-axis):</p> <p><img src="https://i.stack.imgur.com/nkuHx.jpg" alt="graph of alpha vs beta"> <a href="http://www.ms.unimelb.edu.au/~segerman/misc/tori_positions_contacts2.jpg" rel="nofollow noreferrer">(source)</a></p> <p>The (numerically approximated) positions of the points are:</p> <p>$(-2.941921822296, -1.2298655866392636),$<br> $(-1.9117269877782, 2.941921878383725),$<br> $(1.9117269877782, -2.941921878383725),$<br> $(2.941921822296, 1.2298655866392636)$</p> <p>Why does the number $2.941921$ appear in both $\alpha$ and $\beta$ coordinates? </p>
Joseph O'Rourke
6,094
<p>You might look at the work of Jason Cantarella, Robert B. Kusner, and John Sullivan, particularly their paper, &quot;<a href="https://doi.org/10.1007/s00222-002-0234-y" rel="nofollow noreferrer" title="zbMATH review at https://zbmath.org/01965446">On the Minimum Ropelength of Knots and Links</a>&quot; (<em>Inventiones Mathematicae</em>, Vol. 150, 2001). They have methods for constructing thick knots and links, which led to the tight configuration of the Borromean rings adopted by the International Mathematical Union as their logo. Perhaps their optimization methods could be adapted under the additional constraint that the rings must be geometric tori.</p> <p><a href="https://i.stack.imgur.com/Z4YC5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z4YC5.jpg" alt="Borromean" /></a><br /> <a href="http://www.isama.org/jms/Videos/imu/rope.jpg" rel="nofollow noreferrer">(source)</a></p> <p>For information on Cantarella's optimization software, see the references in this earlier MO question: &quot;<a href="https://mathoverflow.net/questions/76545/">Is it possible to reliably generate a particular approximation of an ideal knot via a simulated annealing approach?</a>&quot;</p>
82,350
<p>Consider three circles of radius $1$ in $\mathbb{R}^3$, linked with each other in the same arrangement as three fibers of the Hopf fibration. Now thicken the circles up into non-overlapping standard round Euclidean solid tori of equal thickness. Allowing the tori to move, there will be some maximum thickness (distance from the core circle to the boundary of the solid torus) before the tori must overlap each other.</p> <p>I'm interested in the case that the three tori have $3$-fold rotational symmetry, as in the first image here (I think it might be possible to do better with a less symmetrical configuration).</p> <p><img src="https://i.stack.imgur.com/yHCtX.png" alt="Hopf tori and diagram"> <a href="http://www.ms.unimelb.edu.au/~segerman/misc/hopf_tori_and_diagram.png" rel="nofollow noreferrer">(source)</a></p> <p>If realised as a physical object this should have the interesting properly that the only way in which the tori can move is rotation along their axes.</p> <p>I have a numerical approximation to the best arrangement of the tori, given as follows:</p> <p>Set up a coordinate system as in the second image, with one circle (of radius $1$) centered on the $x$-axis, at distance $r$ from the origin, and rotated by angles $\theta$ and $\omega$ from the axes. The vectors $U$ and $V$ give the orientation of the circle, and are given by $$U = (\cos(\omega), \sin(\omega), 0), \qquad V=(-\sin(\omega)\sin(\theta), \cos(\omega)\sin(\theta), \cos(\theta)).$$</p> <p>The other two circles are copies of this one, rotated by $2\pi/3$ and $4\pi/3$ about the $z$-axis. The approximation I have is $r=0.4950, \omega = 0.0000, \theta = -0.8561$, with resulting distance between circles of $0.64576$ (and so torus thickness is $0.32884$). These are accurate to around $4$ decimal places. The arrangement in the picture is this approximation.</p> <p><strong>Questions:</strong></p> <ol> <li>Why does it appear that $\omega=0$? It doesn't seem obvious to me that this should be true. Is there a symmetry argument?</li> <li>Is there a closed form solution?</li> </ol> <p>A closed form solution is probably too much to hope for. In particular, this paper: <a href="http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5389850" rel="nofollow noreferrer">Finding the distance between two circles in three-dimensional space</a> shows that there is no closed form solution for the distance between arbitrary pairs of circles in $\mathbb{R}^3$. But maybe there is some symmetry argument that helps?</p> <p><strong>Edit:</strong> Better estimates: $r=0.4950197, \theta=-0.8560281$. By Ian Agol's answer, $w=0$. Here is another mysterious symmetry in the numerical solution. Set up a coordinate system on the surface of each torus. Each torus is parameterised by $\alpha, \beta\in[-\pi,\pi)$. The parameter $\alpha$ is in the longitude direction, with 0 at the vector $U$ and $V$ nearest in the positive direction. The parameter $\beta$ is in the meridian direction, with 0 at the biggest longitude (i.e. on the outside of the torus), and the direction $U\times V$ closest in the positive direction. 
With these coordinates, we can plot the points of contact with the other two tori ($\alpha$ on the x-axis, $\beta$ on the y-axis):</p> <p><img src="https://i.stack.imgur.com/nkuHx.jpg" alt="graph of alpha vs beta"> <a href="http://www.ms.unimelb.edu.au/~segerman/misc/tori_positions_contacts2.jpg" rel="nofollow noreferrer">(source)</a></p> <p>The (numerically approximated) positions of the points are:</p> <p>$(-2.941921822296, -1.2298655866392636),$<br> $(-1.9117269877782, 2.941921878383725),$<br> $(1.9117269877782, -2.941921878383725),$<br> $(2.941921822296, 1.2298655866392636)$</p> <p>Why does the number $2.941921$ appear in both $\alpha$ and $\beta$ coordinates? </p>
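<p>For reference, here is a rough Python/scipy sketch of this kind of computation (the function names, grid sizes and starting values are arbitrary choices of mine, it is only a local search started near the estimates above, and I have not tried to match the exact thickness convention, only the circle-to-circle distance): it fixes $\omega=0$, computes the minimal distance between circle 1 and its copy rotated by $2\pi/3$, and then maximises that distance over $(r,\theta)$.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def circle(r, theta, omega=0.0):
    """Points of the unit circle centered at (r, 0, 0), oriented by the frame U, V above."""
    U = np.array([np.cos(omega), np.sin(omega), 0.0])
    V = np.array([-np.sin(omega) * np.sin(theta),
                  np.cos(omega) * np.sin(theta),
                  np.cos(theta)])
    c = np.array([r, 0.0, 0.0])
    return lambda a: c + np.cos(a) * U + np.sin(a) * V

def rotate_z(p, phi):
    # rotate a point about the z-axis by angle phi
    c, s = np.cos(phi), np.sin(phi)
    return np.array([c * p[0] - s * p[1], s * p[0] + c * p[1], p[2]])

def circle_distance(r, theta):
    """Minimal distance between circle 1 and its copy rotated by 2*pi/3."""
    C = circle(r, theta)
    gap = lambda ab: np.linalg.norm(C(ab[0]) - rotate_z(C(ab[1]), 2 * np.pi / 3))
    grid = np.linspace(-np.pi, np.pi, 40)
    pairs = [(a, b) for a in grid for b in grid]
    start = min(pairs, key=gap)          # coarse grid search, then local refinement
    return minimize(gap, x0=start, method="Nelder-Mead").fun

# maximise the inter-circle distance over (r, theta), starting near the reported optimum
result = minimize(lambda p: -circle_distance(p[0], p[1]), x0=[0.5, -0.85],
                  method="Nelder-Mead", options={"xatol": 1e-6, "fatol": 1e-8})
r_opt, theta_opt = result.x
print("r = %.5f, theta = %.5f, distance = %.5f" % (r_opt, theta_opt, -result.fun))
</code></pre>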
Ian Agol
1,345
<p>I think the minimizer should have dihedral symmetry. I'll give a heuristic explanation for this suggestion.</p> <p>Consider two linked solid tori of the same shape, such that the outer diameter is less than twice the inner diameter of the hole. If two such tori are tangent at a single point, then one may separate them by translating each torus in the direction of a vector pointing to one side of the tangent plane at the tangency point. </p> <p>Now assume that the two linked tori are related by a rotation of order three, rotating one torus to the other. If there is only one tangency point, then one may choose one vector pointing to one side of the tangency plane, which is rotated to the other side of the tangency plane by the rotation. Move each torus by a small translation in the direction of its vector; then, by the above observation, they will be separated. A third torus obtained by taking the inverse rotation will also be separated, by symmetry. </p> <p>Thus, a tight configuration must have two tangencies between each pair of solid tori. What I believe is that two tangent linked isometric tori with two tangency points should be related by a rotational involution exchanging the two (and exchanging the tangency points). If this is true, then in a tight configuration, there should be an extra dihedral symmetry, which would imply $\omega=0$. </p>
2,233,185
<p>I'm finding how many integers under a limit, $L$, only have prime factors from a given set of prime numbers, $P$. The numbers that meet these conditions are called n-smooth numbers. (I've never used sets before so feel free to correct any mistakes I make). Take, for example, $P = \left\{2, 3 \right\}$ and $L = 25$: there are 10 numbers that are 3-smooth: 2, 3, 4, 6, 8, 9, 12, 16, 18, 24.</p> <p>I'm trying to find a method/algorithm that easily finds <em>how many</em> n-smooth numbers there are for a given $P$ and $L$. Here's my work so far:</p> <p>To start, a number from $P$ to any power will be n-smooth as long as it's &lt; $L$. Using the same example from above, this would include the powers of two: 2, 4, 8, 16, and the powers of three: 3, 9. By using the floor function and log, I can essentially tell how many powers of a number there are that are less than or equal to $L$. The total count of these numbers can be modeled with this expression:</p> <p><a href="https://i.stack.imgur.com/LvlIT.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LvlIT.gif" alt="enter image description here"></a></p> <p>But this leaves the other numbers 6, 12, 18, and 24 leftover. I don't know how to account for these "leftover" numbers. Does anyone have any ideas?</p>
Rafael
303,887
<p>Take $a\in[0,1]$ and define $$R\longrightarrow\mathbb{R},\qquad f\longmapsto f(a).$$ This map is a surjective ring homomorphism whose kernel is $M_a$ (in particular, an ideal). So $R/M_a\simeq\mathbb{R}$, and therefore $M_a$ is maximal.</p>
1,985,849
<p>I am really stumped by this equation. $$e^x=x$$ I need to prove that this equation has no real roots. But I have no idea how to start.</p> <p>If you look at the graphs of $y=e^x$ and $y=x$, you can see that those graphs do not meet anywhere. But I am trying to find an algebraic and rigorous proof. Any help is appreciated. ( $e^\infty=\infty$ looks like a solution but is it?)</p>
bulbasaur
241,800
<p>At $x=0$ we have $e^x&gt;x$, since $e^0=1&gt;0$. For $x\le 0$ this is clear, because $e^x&gt;0\ge x$. For $x&gt;0$, the derivative of $y=x$ equals $1$, while the derivative of $e^x$ is $e^x&gt;1$; hence $e^x-x$ is increasing on $(0,\infty)$ and stays at least $1$. Thus the two graphs never meet. </p>
90,548
<p>Suppose we are handed an algebra $A$ over a field $k$. What should we look at if we want to determine whether $A$ can or cannot be equipped with structure maps to make it a Hopf algebra?</p> <p>I guess in order to narrow it down a bit, I'll phrase it like this: what are some necessary conditions on an algebra for it to be a Hopf algebra?</p> <p>Thoughts so far:</p> <p>The first obvious condition is that $A$ must be augmented, i.e. there must be a nontrivial character $\varepsilon : A \to k$. Since this is generally not that hard to determine if we are given the algebra in some fairly concrete way, let's suppose that $A$ is given to us with an augmentation map.</p> <p>If $A$ is finite-dimensional, then $A$ must be a Frobenius algebra. But not every finite-dimensional Frobenius algebra is a Hopf algebra, e.g. $\Lambda^\bullet(k^2)$ is not a Hopf algebra if the characteristic of $k$ is not 2. And generally I am more interested in the infinite-dimensional case.</p> <p>All I can come up with is this: the category of finite-dimensional $A$-modules must be a (left) rigid monoidal category. But I don't know if that is a helpful observation: given a category with a forgetful functor to finite-dimensional vector spaces over some field, how can one prove that it can't be given the structure of a <s>braided</s> rigid monoidal category?</p> <p>And perhaps there are some homological invariants that one can look at?</p> <p>To sum up, the question is:</p> <h3>Question</h3> <p>Given a $k$-algebra $A$ and a nonzero character $\varepsilon : A \to k$, are there invariants we can look at in order to show that $A$ cannot be given the structure of a Hopf algebra?</p>
M T
6,481
<p>A trivial consequence of what Vladimir says is that if $A$ is a Hopf algebra and $k$ is the trivial module (via an augmentation map $\epsilon$), then $\operatorname{Ext}_A(k,k)$ is graded commutative. It's possible to give necessary conditions for this; for example, the degree one elements are graded commutative iff you can find a map $f: I^2/I^3 \to S^2(I/I^2)$ (the symmetric square) such that $fm = p$, where $I$ is the augmentation ideal, $m: (I/I^2)^{\otimes 2} \to I^2/I^3$ is multiplication and $p: (I/I^2)^{\otimes 2} \to S^2(I/I^2)$ is the natural quotient. </p>
90,548
<p>Suppose we are handed an algebra $A$ over a field $k$. What should we look at if we want to determine whether $A$ can or cannot be equipped with structure maps to make it a Hopf algebra?</p> <p>I guess in order to narrow it down a bit, I'll phrase it like this: what are some necessary conditions on an algebra for it to be a Hopf algebra?</p> <p>Thoughts so far:</p> <p>The first obvious condition is that $A$ must be augmented, i.e. there must be a nontrivial character $\varepsilon : A \to k$. Since this is generally not that hard to determine if we are given the algebra in some fairly concrete way, let's suppose that $A$ is given to us with an augmentation map.</p> <p>If $A$ is finite-dimensional, then $A$ must be a Frobenius algebra. But not every finite-dimensional Frobenius algebra is a Hopf algebra, e.g. $\Lambda^\bullet(k^2)$ is not a Hopf algebra if the characteristic of $k$ is not 2. And generally I am more interested in the infinite-dimensional case.</p> <p>All I can come up with is this: the category of finite-dimensional $A$-modules must be a (left) rigid monoidal category. But I don't know if that is a helpful observation: given a category with a forgetful functor to finite-dimensional vector spaces over some field, how can one prove that it can't be given the structure of a <s>braided</s> rigid monoidal category?</p> <p>And perhaps there are some homological invariants that one can look at?</p> <p>To sum up, the question is:</p> <h3>Question</h3> <p>Given a $k$-algebra $A$ and a nonzero character $\varepsilon : A \to k$, are there invariants we can look at in order to show that $A$ cannot be given the structure of a Hopf algebra?</p>
David E Speyer
297
<p>Commutative finitely generated Hopf algebras over a field of characteristic zero are regular. See <a href="http://www.ams.org/mathscinet-getitem?mr=206005" rel="nofollow">Oort</a>.</p>
2,666,640
<p>Assume I have a monoid $M$ and I am not guaranteed that all elements have an inverse.</p> <p>Say I have the property that:</p> <p>$a^m = a^{m+n}$</p> <p>Can I claim that it must be that $i=a^n$, where $i$ is the identity?</p> <p>Why or why not?</p>
Netchaiev
517,746
<p>The answer is <strong>no</strong>: in $\mathcal{M}_2(\mathbb{R})$ with the usual matrix multiplication, take $$ A= \left( \begin{matrix} 1 &amp; 0 \\ 0 &amp; 0 \end{matrix} \right).$$ Then $$ A^{m+n}=A^m \neq I , \qquad \forall m,n&gt;0.$$</p>
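<p>(A quick numerical check of this, as a small numpy snippet of my own, in case it helps to see it concretely: $A$ is idempotent, so all of its positive powers coincide, and none of them is the identity.)</p>
<pre><code>import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
I = np.eye(2)

for m in range(1, 6):
    for n in range(1, 6):
        assert np.allclose(np.linalg.matrix_power(A, m),
                           np.linalg.matrix_power(A, m + n))
        assert not np.allclose(np.linalg.matrix_power(A, n), I)
print("A^m = A^(m+n) for all tested m, n, and no power of A is the identity")
</code></pre>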
4,164,069
<p>The question: if <span class="math-container">$E\subseteq(0,1)$</span> has Lebesgue measure zero, must <span class="math-container">$\{1/x \mid x\in E\}$</span> have measure zero as well?<br /> I guess it's true for functions that are Lipschitz or uniformly continuous, since we can limit the length of the intervals after the transformation.<br /> However, I don't know if it's true or not, and since <span class="math-container">$1/x$</span> is not one of those, I don't know how to solve this problem.</p>
Rob Arthan
23,171
<p>For the record, here is the proof I had in mind when I wrote my comment. This is intended to address directly the problem implicit in the question, namely that taking reciprocals blows up the length of intervals near <span class="math-container">$0$</span>. It uses two facts: (1) a set <span class="math-container">$E \subseteq \Bbb{R}$</span> has measure <span class="math-container">$0$</span> iff for any <span class="math-container">$\delta &gt; 0$</span>, there is a family of intervals <span class="math-container">$(x_i, y_i)$</span> that cover <span class="math-container">$E$</span> (i.e., <span class="math-container">$E \subseteq \bigcup_i (x_i, y_i)$</span>) and have total length less than <span class="math-container">$\delta$</span> (i.e., <span class="math-container">$\sum_i (y_i - x_i) &lt; \delta$</span>); (2) the union of a countable family of sets of measure <span class="math-container">$0$</span> has measure <span class="math-container">$0$</span>.</p> <p>Assume (1) holds for <span class="math-container">$E \subseteq (0, 1)$</span> and let <span class="math-container">$H_n = \{1/x \mid x \in E \cap [1/n, 1]\}$</span> for <span class="math-container">$n = 1, 2, \ldots$</span> Given <span class="math-container">$n$</span> and <span class="math-container">$\delta&gt;0$</span>, by our assumption, we can cover <span class="math-container">$E$</span> and hence <span class="math-container">$E \cap [1/n, 1]$</span> by intervals <span class="math-container">$(x_i, y_i)$</span> such that <span class="math-container">$\sum_i (y_i - x_i) &lt; \delta/(4n^2)$</span>; replacing each interval by its intersection with <span class="math-container">$(1/(2n), 2)$</span> (which still covers <span class="math-container">$E \cap [1/n, 1]$</span> and does not increase the total length), we may assume <span class="math-container">$1/(2n) \le x_i &lt; y_i \le 2$</span>. But then <span class="math-container">$H_n$</span> is covered by the intervals <span class="math-container">$(1/y_i, 1/x_i)$</span> and we have: <span class="math-container">$$ \sum_i\left(\frac{1}{x_i}- \frac{1}{y_i}\right) = \sum_i\left(\frac{y_i - x_i}{x_iy_i}\right) \le 4n^2 \sum_i(y_i - x_i) &lt; \delta $$</span> because for <span class="math-container">$x_i, y_i \ge 1/(2n)$</span>, <span class="math-container">$1/(x_iy_i) \le 4n^2$</span>. So, by (1), each <span class="math-container">$H_n$</span> has measure <span class="math-container">$0$</span> and <span class="math-container">$H = \{1/x \mid x \in E\} = \bigcup_n H_n$</span> is a countable union of sets of measure <span class="math-container">$0$</span> and hence by (2) has measure <span class="math-container">$0$</span>.</p>
3,474,926
<p>I came across this question:</p> <p>Find an isomorphism from the group of orientation preserving isometries of the plane to some subgroup of <span class="math-container">$GL_{2}(\mathbb C)$</span>.</p> <p>I'm having trouble with finding such isomorphism. Mainly, I'm having trouble with finding some representation of those isometries in a way that would respect the group structure. I know that you can represent any isometry of the plane using a <span class="math-container">$3\times 3$</span> real matrix, and I want to somehow use this representation in order to construct a new one (maybe since I'm looking at a smaller group and can also use complex numbers I can somehow use a smaller matrix), but I'm not entirely sure on how to do this.</p> <p>Any help would be appreciated - I'd like to know how one can approach such a problem.</p> <p>Thanks in advance</p>
GSofer
509,052
<p>We can represent any orientation preserving isometry with a single angle of rotation and a single vector of translation (or simply a complex number). We can thus use the following monomorphism into <span class="math-container">$GL(2,\mathbb C)$</span>: <span class="math-container">$$(\alpha,x+iy)\mapsto\begin{bmatrix} e^{i\alpha}&amp;x+iy\\0&amp;1\end{bmatrix}$$</span></p> <p>Note that this representation preserves the group structure, and it acts on the vector <span class="math-container">$\begin{bmatrix} z \\ 1\end{bmatrix}$</span> by sending it to <span class="math-container">$\begin{bmatrix} ze^{i\alpha}+x+iy \\ 1\end{bmatrix}$</span>, which is (in the first component) exactly the image of <span class="math-container">$z$</span> under our given isometry.</p>
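<p>As a quick sanity check (a short numpy sketch of my own, not part of the argument), one can verify numerically that multiplying two such matrices corresponds to composing the two isometries $z\mapsto e^{i\alpha}z + (x+iy)$:</p>
<pre><code>import numpy as np

def M(alpha, w):
    # matrix of the isometry sending z to e^{i*alpha} z + w
    return np.array([[np.exp(1j * alpha), w],
                     [0.0, 1.0]], dtype=complex)

def isometry(alpha, w):
    return lambda z: np.exp(1j * alpha) * z + w

rng = np.random.default_rng(0)
for _ in range(100):
    a1, a2 = rng.uniform(0, 2 * np.pi, size=2)
    w1, w2 = rng.normal(size=2) + 1j * rng.normal(size=2)
    z = rng.normal() + 1j * rng.normal()
    composed = isometry(a1, w1)(isometry(a2, w2)(z))   # apply the second map first
    vec = M(a1, w1) @ M(a2, w2) @ np.array([z, 1.0])
    assert np.isclose(vec[0], composed) and np.isclose(vec[1], 1.0)
print("matrix multiplication matches composition on 100 random tests")
</code></pre>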
505,617
<p>Let $a,b,c$ be nonnegative real numbers such that $a+b+c=3$, Prove that</p> <blockquote> <p>$$ \sqrt{\frac{a}{a+3b+5bc}}+\sqrt{\frac{b}{b+3c+5ca}}+\sqrt{\frac{c}{c+3a+5ab}}\geq 1.$$</p> </blockquote> <p>This problem is from <a href="http://www.artofproblemsolving.com/Forum/viewtopic.php?f=52&amp;t=555716">http://www.artofproblemsolving.com/Forum/viewtopic.php?f=52&amp;t=555716</a></p> <p>@Calvin Lin Thank you </p>
Macavity
58,320
<p>Let $\displaystyle A = \sqrt{\frac{a}{a+3b+5bc}}+\sqrt{\frac{b}{b+3c+5ca}}+\sqrt{\frac{c}{c+3a+5ab}}$ and $\displaystyle B = \sum_{cyc}a^2(a+3b+5bc)$.</p> <p>Then by Hölder's inequality we have $A^2B \ge (a+b+c)^3 = 27$.</p> <p>So it is sufficient to prove that $B \le 27$</p> <p>$$B = \sum_{cyc}a^3 + 3 \sum_{cyc}a^2b+5\sum_{cyc}a^2bc$$ </p> <p>As $\displaystyle \sum_{cyc}ab^2 \ge 3abc$ by AM-GM, we have</p> <p>$$B \le \left(\sum_{cyc}a^3 + 3 \sum_{cyc}a^2b + 3 \sum_{cyc}ab^2 + 6abc\right) - 15abc + 5\sum_{cyc}a^2bc \\ = (a+b+c)^3 - 5abc (3-\sum_{cyc}a) = 27$$ </p>
2,987,071
<blockquote> <p>I have to show that the set <span class="math-container">$$\{1, 1 + X, (1 + X)^2 , \ldots , (1 + X)^n \}$$</span> is a basis for <span class="math-container">$\Bbb{R}_n [X]$</span>, where <span class="math-container">$\Bbb{R}_n [X]$</span> denotes the vector space of all polynomials of degree less than or equal to <span class="math-container">$n$</span> over <span class="math-container">$\Bbb{R}$</span>. </p> </blockquote> <p>I guess I need to show that it's a spanning set and linearly independent.</p>
Fred
380,717
<p>The set <span class="math-container">$B:=\{1, 1 + X, (1 + X)^2 , \ldots , (1 + X)^n \}$</span> contains <span class="math-container">$n+1$</span> elements. Since <span class="math-container">$ \dim R_n[X] =n+1$</span>, you have only to show that <span class="math-container">$B$</span> is linearly independent. To this end, let <span class="math-container">$a_0,a_1,\ldots,a_n \in \mathbb R$</span> be such that</p> <p><span class="math-container">$0=a_0+a_1(1+X)+\ldots+a_n(1+X)^n$</span>.</p> <p>Define <span class="math-container">$q(X)$</span> by <span class="math-container">$q(X)=a_0+a_1X+\ldots+a_nX^n$</span> and <span class="math-container">$p(X):=q(1+X)$</span>. Then we have <span class="math-container">$p(X)=0$</span> for all <span class="math-container">$X$</span>.</p> <p>Conclusion?</p>
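<p>(If it helps to see the linear algebra concretely, here is a small sympy sketch of my own, not a substitute for the argument above: the matrix expressing the polynomials $(1+X)^k$ in the monomial basis is upper triangular with $1$s on the diagonal, hence invertible.)</p>
<pre><code>import sympy as sp

X = sp.symbols("X")
n = 5
# entry (i, k) is the coefficient of X^i in (1 + X)^k, i.e. binomial(k, i)
M = sp.Matrix(n + 1, n + 1, lambda i, k: sp.expand((1 + X)**k).coeff(X, i))
print(M)
print(M.det())   # 1: upper triangular with unit diagonal, hence invertible
</code></pre>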
414,432
<p>We define $f_n:\mathbb{R}\to\mathbb{R}$ by $f_n(x)=\dfrac{x}{1+nx^2}$ for each $n\ge 1$.</p> <p>I compute that $f(x):= \displaystyle\lim_{n\to \infty}f_n(x) = 0$ for each $x\in\mathbb{R}$.</p> <p>Now, I want to know in which intervals $I\subseteq \mathbb{R}$ the convergence is uniform.</p> <p>Any hint? Thanks.</p>
Mhenni Benghorbal
35,472
<p>If you use the <a href="https://math.stackexchange.com/questions/370023/how-to-prove-a-sequence-of-a-function-converges-uniformly/370071#370071">technique</a>, then the maximum of $|f_n|$ over $x\in(-\infty,\infty)$ is achieved at $x=\pm\frac{1}{\sqrt{n}}$ and it equals $\frac{1}{2\sqrt{n}}$. So, we have</p> <p>$$ \sup_{x\in\mathbb{R}}| f_n(x)-f(x) |= \sup_{x\in\mathbb{R}} \Big| \dfrac{x}{1+nx^2} \Big|= \frac{1}{2\sqrt{n}} \longrightarrow 0 \quad \text{as } n\to\infty,$$</p> <p>so for any $\epsilon&gt;0$ the supremum is eventually smaller than $\epsilon$. This shows the sequence converges uniformly over $\mathbb{R}$.</p> <p><strong>Added:</strong> Here is the plot of the function for $n=1,2,3$.</p> <p><img src="https://i.stack.imgur.com/3vLYU.jpg" alt="enter image description here"></p>
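<p>(A quick numerical check of the bound, as a small numpy snippet of my own: the supremum of $|f_n|$ over a fine grid matches $\frac{1}{2\sqrt{n}}$ and visibly tends to $0$.)</p>
<pre><code>import numpy as np

x = np.linspace(-50, 50, 2_000_001)    # the maximum of |f_n| occurs at x = 1/sqrt(n), well inside this window
for n in [1, 4, 25, 100, 10_000]:
    fn = x / (1 + n * x**2)
    print(n, np.abs(fn).max(), 1 / (2 * np.sqrt(n)))
</code></pre>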
659,988
<p>I understood the definition of a $\sigma$-algebra, that its elements are closed under complementation and countable union, but as I am not very good at maths, I could not visualize or understand the intuition behind the meaning of "closed under complementation and countable union". </p> <p>If we consider the set X to be a finite set, then what would be a good real life example of a $\sigma$-algebra, for a noob to understand. </p>
Clement C.
75,808
<p>Its power set, i.e. the set $2^X$ (or $\mathcal{P}(X)$ depending on the notations) of all subsets of $X$.</p>
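<p>If it helps to make this concrete, here is a tiny Python check (my own toy example, with an arbitrary three-element set): the power set contains $\varnothing$ and $X$, and is closed under complements and under (finite, hence in this case countable) unions.</p>
<pre><code>from itertools import combinations

X = frozenset({"rain", "sun", "snow"})
power_set = {frozenset(s) for r in range(len(X) + 1) for s in combinations(X, r)}

assert frozenset() in power_set and X in power_set
assert all(X - A in power_set for A in power_set)                     # complements
assert all(A | B in power_set for A in power_set for B in power_set)  # unions
print(len(power_set), "events: every subset of X is allowed")         # prints 8
</code></pre>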
894,917
<p>I'm stuck with the limit $\lim_{n\to\infty} (2^n)\sin(n) $. I've been trying the squeeze theorem but it doesn't seem to work. I can't think of a second way to tackle the problem. Any push in the right direction would be much appreciated. </p> <blockquote> <p>Also, please don't just post the answer up because I want to try and get it. </p> </blockquote> <p>Here's what I've got so far:</p> <p>$$ \lim_{n \to \infty} (2^n)\sin(n) $$</p> <p>So I know $\sin(n)$ is bounded by $-1$ and $1$, but multiplying the inequality by $2^n$ will give me a negative and a positive $2^n$. So I am stuck here. This would mean that the function is bounded by limits that tend to negative and positive infinity, pretty useless.</p> <p>So can I take the absolute value of each side of the inequality? Like this: </p> <p>$$ \lvert-2^n\rvert \le \lvert 2^n \sin(n) \rvert \le \lvert 2^n\rvert $$</p> <p>If this works, I can say it tends to infinity, but it seems a bit dodgy to me.</p> <p>Thank you for taking a look at this problem.</p>
Graham Kemp
135,106
<p>No. Taking absolute values changes the problem, so it cannot tell you the limit of the original sequence.</p> <p>Since the sequence $n \bmod 2\pi$ is equidistributed in $[0,2\pi)$, $\sin n$ exceeds $\tfrac12$ for infinitely many $n$ and is below $-\tfrac12$ for infinitely many $n$, so $2^n\sin n$ takes arbitrarily large positive and arbitrarily large negative values as $n$ grows. It does not converge to a limit, and it does not even tend to $+\infty$ or $-\infty$. No limit exists.</p>
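<p>(A quick numerical illustration, as a short Python snippet of my own: the terms keep changing sign while their size blows up, so they cannot approach any limit, finite or infinite.)</p>
<pre><code>import math

for n in range(1, 31):
    value = 2**n * math.sin(n)
    print(f"n = {n:2d}   2^n sin(n) = {value: .3e}")
</code></pre>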
441,448
<p><strong>Contextual Problem</strong></p> <p>A PhD student in Applied Mathematics is defending his dissertation and needs to make a 10-gallon keg consisting of vodka and beer to placate his thesis committee. Suppose that all committee members, being stubborn people, refuse to sign his dissertation paperwork until the next day. Since all committee members will be driving home immediately after his defense, he wants to make sure that they all drive home safely. To do so, he must ensure that his mixture doesn't contain too much alcohol in it! </p> <p>Therefore, his goal is to make a 10-gallon mixture of vodka and beer such that the total alcohol content of the mixture is only $12$ percent. Suppose that beer has $8\%$ alcohol while vodka has $40\%$. If $x$ is the volume of beer and $y$ is the volume of vodka needed, then clearly the system of equations is </p> <p>\begin{equation} x+y=10 \\ 0.08 x +0.4 y = 0.12\times 10 \end{equation}</p> <p><strong>My Question</strong></p> <p>The eigenvalues and eigenvectors of the corresponding matrix</p> <p>\begin{equation} \left[ \begin{array}{cc} 1 &amp; 1\\ 0.08 &amp; 0.4 \end{array} \right] \end{equation} </p> <p>are</p> <p>\begin{align} \lambda_1\approx 1.1123 \\ \lambda_2\approx 0.2877 \\ v_1\approx\left[\begin{array}{c} 0.9938 \\ 0.1116 \end{array} \right] \\ v_2\approx\left[\begin{array}{c} -0.8145 \\ 0.5802 \end{array} \right] \end{align}</p> <p>How do I interpret their physical meaning in the context of this particular problem?</p>
Hagen von Eitzen
39,174
<p>An interpretation of eigenvalues and eigenvectors of this matrix makes little sense because the matrix is not, in any natural fashion, an endomorphism of a single vector space: On the "input" side you have (gallons of beer, gallons of vodka) and on the output side (gallons of liquid, gallons of alcohol). For example, nothing speaks against switching the order of beer and vodka (or of liquid and alcohol), which would result in totally different eigenvalues.</p>
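<p>(To illustrate this with a quick numpy computation of my own: swapping the order of the two variables, i.e. the columns, or of the two equations, i.e. the rows, already changes the eigenvalues, so they cannot carry intrinsic meaning for the mixing problem.)</p>
<pre><code>import numpy as np

M = np.array([[1.0, 1.0],
              [0.08, 0.4]])
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # permutation swapping the two coordinates

print(np.linalg.eigvals(M))         # approximately 1.1123 and 0.2877, as in the question
print(np.linalg.eigvals(M @ P))     # columns (variables) swapped: different eigenvalues
print(np.linalg.eigvals(P @ M))     # rows (equations) swapped: also not the original ones
</code></pre>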
441,448
<p><strong>Contextual Problem</strong></p> <p>A PhD student in Applied Mathematics is defending his dissertation and needs to make a 10-gallon keg consisting of vodka and beer to placate his thesis committee. Suppose that all committee members, being stubborn people, refuse to sign his dissertation paperwork until the next day. Since all committee members will be driving home immediately after his defense, he wants to make sure that they all drive home safely. To do so, he must ensure that his mixture doesn't contain too much alcohol in it! </p> <p>Therefore, his goal is to make a 10-gallon mixture of vodka and beer such that the total alcohol content of the mixture is only $12$ percent. Suppose that beer has $8\%$ alcohol while vodka has $40\%$. If $x$ is the volume of beer and $y$ is the volume of vodka needed, then clearly the system of equations is </p> <p>\begin{equation} x+y=10 \\ 0.08 x +0.4 y = 0.12\times 10 \end{equation}</p> <p><strong>My Question</strong></p> <p>The eigenvalues and eigenvectors of the corresponding matrix</p> <p>\begin{equation} \left[ \begin{array}{cc} 1 &amp; 1\\ 0.08 &amp; 0.4 \end{array} \right] \end{equation} </p> <p>are</p> <p>\begin{align} \lambda_1\approx 1.1123 \\ \lambda_2\approx 0.2877 \\ v_1\approx\left[\begin{array}{c} 0.9938 \\ 0.1116 \end{array} \right] \\ v_2\approx\left[\begin{array}{c} -0.8145 \\ 0.5802 \end{array} \right] \end{align}</p> <p>How do I interpret their physical meaning in the context of this particular problem?</p>
Mohit
181,749
<p>The problem is very simple and interesting: $\lambda_1 \approx 1.1123$ is the pure-alcohol content of the beer by volume, and $\lambda_2\approx 0.2877$ is the pure-alcohol content of the vodka by volume ($8\%$ beer in one liter $= 80\ \mathrm{mL}$ of pure alcohol).</p> <p>The direction has no significance here, so in this problem the eigenvectors are purely mathematical objects.</p>
194,867
<p>Let $f(z)$ be a function meromorphic in a simply connected convex domain $D$ (a subset of the complex plane with positive area, or the whole complex plane), where $z$ is a complex number.</p> <p>Are there such functions $f(z)$ where $\Re(f(z))$ is periodic in the domain (no periods larger than the domain please :p ) but $f(z)$ is not periodic? (If $D\subset \mathbb C$ it is clear that $f(z)$ is not periodic, but $\Re(f(z))$ might still be for some shapes of $D$.)</p> <p>In particular the case when $D = \mathbb C$ is interesting (in other words, $f(z)$ meromorphic over $\mathbb C$).</p> <p>I guess it is a similar question to ask about $\Im$, $\operatorname{Arg}$ or $|\cdot|$ instead of $\Re$.</p> <p>I read about doubly periodic functions and Cauchy-Riemann equations but I still don't know. I can't find such a function in the literature (I mean the function I am looking for here; of course I don't mean that I can't find a doubly periodic one in the literature) and I don't know how to construct them or even if they exist.</p>
Mariano Suárez-Álvarez
274
<p>Suppose $f$ is meromorphic in $\mathbb C$ and its real part is periodic of period $p$. Then $z\mapsto f(z)-f(z+p)$ is a meromorphic function whose real part is identically zero. </p> <p>Can you conclude something from this?</p>