qid: int64 (values 1 to 4.65M)
question: large_string (lengths 27 to 36.3k)
author: large_string (lengths 3 to 36)
author_id: int64 (values -1 to 1.16M)
answer: large_string (lengths 18 to 63k)
3,100,420
<p>So my previous question states that: we are given the sequence <span class="math-container">$k_{n}= 6^{2^n} + 1$</span>. We must prove that the elements of this sequence are pairwise coprime, i.e. prove that if <span class="math-container">$m \neq n$</span> then <span class="math-container">$\gcd(k_{m},k_{n}) = 1$</span>.</p> <p>I previously proved that <span class="math-container">$k_n \mid k_{n+1} - 2$</span>.</p> <p>Then, using answers I got on my previous question, I was able to show that if <span class="math-container">$p \mid k_n$</span> then it follows that <span class="math-container">$p \mid k_{n+1} -2$</span>. Using this I could say that if <span class="math-container">$p\mid k_{n+1}$</span> then <span class="math-container">$p\mid (k_{n+1} - (k_{n+1} -2)) = 2$</span>. This means the only prime that could divide both <span class="math-container">$k_n$</span> and <span class="math-container">$k_{n+1}$</span> is <span class="math-container">$2$</span>; however, all the terms in the sequence are odd, so <span class="math-container">$\gcd(k_n,k_{n+1})=1$</span>.</p> <p>This proves that every pair of <em>consecutive</em> terms in the sequence is coprime, but I'm struggling to extend the argument to arbitrary pairs, i.e. to show <span class="math-container">$\gcd(k_m,k_n)=1$</span> for all <span class="math-container">$m \neq n$</span>. </p> <p>All help is very much appreciated, as always. Cheers. </p>
Bill Dubuque
242
<p><span class="math-container">${\bf Hint}\rm\quad\ \ \gcd(a+1,\,\ a^{\large 2K}\!+1)\ =\ \gcd(a+1,\ \ \color{#0a0}2)\,\ $</span> by the Euclidean algorithm </p> <p><span class="math-container">${\bf Proof}\rm\ \ mod\,\ a+1\!:\,\ \color{#c00}a^{\large 2K}\!+1 \equiv (\color{#c00}{-1})^{\large 2K}\!\!+1\equiv\color{#0a0} 2,\ \ {\rm by}\ \ \color{#c00}{a\equiv {-}1}\quad {\bf QED}$</span> </p> <p>Specializing <span class="math-container">$\rm\,\ a=6^{\Large 2^{M}}\!,\ \ 2K = 2^{\large \,N-M} \Rightarrow\, a^{\Large 2K}\! = 6^{\Large 2^{N}},\,\ N&gt;M,\, $</span> yields the claim. </p>
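As a quick numerical sanity check of the hint and of the pairwise-coprimality claim (my addition, not part of the answer; the helper name `k` is purely illustrative):

```python
from math import gcd

def k(n):
    """The sequence from the question: k_n = 6^(2^n) + 1."""
    return 6 ** (2 ** n) + 1

# Pairwise coprimality for the first few terms.
terms = [k(n) for n in range(6)]
assert all(gcd(terms[i], terms[j]) == 1
           for i in range(len(terms))
           for j in range(i + 1, len(terms)))

# The hint: gcd(a + 1, a^(2K) + 1) == gcd(a + 1, 2).
a, K = 6 ** 2, 2  # a = 6^(2^1), so a^(2K) = 6^(2^3)
assert gcd(a + 1, a ** (2 * K) + 1) == gcd(a + 1, 2)
```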
873,582
<p>How can one check that $ \sqrt[3]{\frac{1}{9}}+\sqrt[3]{-\frac{2}{9}}+\sqrt[3]{\frac{4}{9}}=\sqrt[3]{\sqrt[3]2-1} $?</p>
Hagen von Eitzen
39,174
<p>If we abbreviate $w=\sqrt[3]2$, the left hand side is $L=\frac1{\sqrt[3]9}(1-w+w^2)$. From $(1+w)(1-w+w^2)=1+w^3=3$ we see that $L=\frac3{\sqrt[3]9(1+\sqrt[3]2)}$, hence $$ L^3=\frac{27}{9(1+w)^3}=\frac3{1+3w+3w^2+w^3}=\frac1{1+w+w^2}$$ As above, note that $(1+w+w^2)(w-1)=w^3-1=1$, hence $$ L^3=w-1=R^3.$$</p>
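A quick floating-point check of the identity and of the key step $L^3 = w-1$ (my addition, not part of the answer; `cbrt` is a small helper for real cube roots of negative numbers):

```python
def cbrt(x):
    """Real cube root, handling negative arguments."""
    return x ** (1 / 3) if x >= 0 else -((-x) ** (1 / 3))

w = cbrt(2)
L = cbrt(1 / 9) + cbrt(-2 / 9) + cbrt(4 / 9)
R = cbrt(w - 1)

assert abs(L - R) < 1e-12             # the claimed identity
assert abs(L ** 3 - (w - 1)) < 1e-12  # the intermediate step L^3 = w - 1
```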
175,775
<p>To clarify the terms in the question above:</p> <p>The symmetric group Sym($\Omega$) on a set $\Omega$ consists of all bijections from $\Omega$ to $\Omega$ under composition of functions. A generating set $X \subseteq \operatorname{Sym}(\Omega)$ is minimal if no proper subset of $X$ generates Sym($\Omega$).</p> <p>This might be a difficult question, but perhaps the answer is known already?</p>
bof
43,266
<p>Jeremy Rickard's answer uses the fact that a symmetric group is not the union of a countable chain of proper subgroups. The following easy proof of that fact is quoted from Fred Galvin, <em>Generating countable sets of permutations</em>, J. London Math. Soc. (2) 51 (1995), 230-242.</p> <blockquote> <p>[. . .] permutations are regarded as right operators, and are composed from left to right. [. . .] The <em>pointwise stabilizer</em> of a subset $X$ of $E$ is the group $S_X=\{\pi\in\operatorname{Sym}(E):X\subseteq\operatorname{fix}(\pi)\}$.<br> We shall make heavy use of the following lemma, which was proved by Dixon, Neumann and Thomas [<strong>3</strong>, Lemma, p. 580] for the case $|E|=\aleph_0$, and generalized by Macpherson and Neumann [<strong>11</strong>, Lemma 2.1] to arbitrary infinite sets.<br> <br> LEMMA 2.1. <em>Let $E$ be an infinite set. If $E=A\cup B\cup C$ where $A,B,C$ are disjoint sets and $|A|=|B|=|C|$, then $\operatorname{Sym}(E)=S_AS_BS_A\cup S_BS_AS_B$.</em><br> <br> <em>Proof.</em> Let $\kappa=|E|$. Consider a permutation $\pi\in\operatorname{Sym}(E)$. It is easy to see that $\pi\in S_AS_BS_A$ if (and only if) $|(B\cup C)\setminus A\pi^{-1}|=\kappa$. In particular, $\pi\in S_AS_BS_A$ if $|C\setminus A\pi^{-1}|=\kappa$; similarly, $\pi\in S_BS_AS_B$ if $|C\setminus B\pi^{-1}|=\kappa$. At least one of these alternatives must hold, since $C=(C\setminus A\pi^{-1})\cup(C\setminus B\pi^{-1})$.<br> <br> [. . . .]<br> <br> THEOREM 3.1. <em>Let $E$ be an infinite set. Every countable subset of $\operatorname{Sym}(E)$ is contained in<br> a $4$-generator subgroup of $\operatorname{Sym}(E)$.</em><br> <br> <em>Proof.</em> We may assume that $E=\mathbb Z\times\mathbb Z\times T$, where $|T|=|E|=\kappa$. Let $E_0=\{0\}\times\{0\}\times T$. Choose $A\subset E_0$ with $|A|=|E_0\setminus A|=\kappa$; let $C=E_0\setminus A$ and let $B=E\setminus E_0$. Choose an involution $\varepsilon\in\operatorname{Sym}(E)$ so that $B\varepsilon=A$. 
Define $\alpha,\delta\in\operatorname{Sym}(E)$ by setting $(m,n,t)\alpha=(m+1,n,t),(0,n,t)\delta=(0,n+1,t)$, and $(m,n,t)\delta=(m,n,t)$ for $m\ne0$.<br> <br> Let a countable set $H\subseteq\operatorname{Sym}(E)$ be given; we shall show that $H\subseteq\langle\alpha,\beta,\delta,\varepsilon\rangle$ for some $\beta\in\operatorname{Sym}(E)$. By Lemma 2.1, we may assume that $H\subseteq S_A\cup S_B$. Let $H'=(H\cap S_B)\cup\varepsilon(H\cap S_A)\varepsilon$. Then $H'$ is a countable subset of $S_B$; let $H'=\{\phi_i:i\in\mathbb Z\}$. Since $\operatorname{supp}(\phi_i)\subseteq E_0$, we can define $\hat\phi_i\in\operatorname{Sym}(T)$ so that $(0,0,t)\phi_i=(0,0,t\hat\phi_i)$ for $t\in T,i\in\mathbb Z$. Finally, define $\beta\in\operatorname{Sym}(E)$ by setting $$(m,n,t)\beta= \begin{cases} (m,n,t\hat\phi_m)&amp;\text{if }n\ge0,\\ (m,n,t)&amp;\text{if }n\lt0.\\ \end{cases}$$ Then $\phi_i=(\alpha^i\beta\alpha^{-i})\delta^{-1}(\alpha^i\beta^{-1}\alpha^{-i})\delta$ for each $i\in\mathbb Z$; thus we have $H'\subseteq\langle\alpha,\beta,\delta\rangle$ and $H\subseteq H'\cup\varepsilon H'\varepsilon\subseteq\langle\alpha,\beta,\delta,\varepsilon\rangle$.<br> <br>COROLLARY 3.2. <em>A symmetric group is not the union of a countable chain of proper subgroups.</em><br> <br> [. . . .]<br> <br>THEOREM 3.3. <em>Let $E$ be an infinite set. Every countable subset of $\operatorname{Sym}(E)$ is contained in<br> a $2$-generator subgroup of $\operatorname{Sym}(E)$.</em><br> <br><em>Proof.</em> [. . . .]</p> </blockquote>
348,395
<p>If <span class="math-container">$f$</span> is a distribution with compact support, then there exist <span class="math-container">$m$</span> and measures <span class="math-container">$f_\beta$</span>, <span class="math-container">$|\beta|\leq m$</span>, such that <span class="math-container">$$f=\sum_{|\beta|\leq m}\frac{\partial^\beta f_\beta}{\partial x^\beta}$$</span></p> <p>How does one prove this result?</p>
reuns
84,768
<ul> <li><p>If <span class="math-container">$f\in D'(\Bbb{R})$</span> is a distribution with compact support <span class="math-container">$\subset (a,b)$</span> then it has finite order <span class="math-container">$m$</span>. Proof : assume it doesn't, for each <span class="math-container">$k$</span> take <span class="math-container">$\phi_k\in C^\infty_c[a,b]$</span> with <span class="math-container">$\sum_{l\le k} \|\phi_k^{(l)}\|_\infty\le 2^{-k}$</span> and <span class="math-container">$\langle f,\phi_k\rangle=1$</span> then <span class="math-container">$\lim_{K\to\infty}\sum_{k\le K} \phi_k$</span> converges in <span class="math-container">$C^\infty_c[a,b]$</span> whereas <span class="math-container">$\lim_{K\to\infty}\langle f,\sum_{k\le K} \phi_k\rangle=\infty$</span> contradicting that <span class="math-container">$f$</span> is a distribution.</p></li> <li><p><span class="math-container">$f$</span> is compactly supported and has order <span class="math-container">$m$</span> means that <span class="math-container">$F=f \ast 1_{x&gt;0}\frac{ x^{m+1}}{(m+1)!}$</span> is continuous and <span class="math-container">$$f = F^{(m+2)}$$</span></p></li> <li><p>On <span class="math-container">$(-\infty,a)$</span>, <span class="math-container">$F''=0$</span>, on <span class="math-container">$(b,\infty)$</span>, <span class="math-container">$F''$</span> is a polynomial, thus to show that <span class="math-container">$F''$</span> is a measure it suffices to show it has order <span class="math-container">$0$</span> on <span class="math-container">$(a-1,b+1)$</span>, assume it is not the case, then <span class="math-container">$F''\chi$</span> has order <span class="math-container">$1$</span> for some <span class="math-container">$\chi \in C^\infty_c$</span>, but <span class="math-container">$(F'' \chi)^{(m)}=f\chi +...$</span> has order <span class="math-container">$\le m$</span>, thus <span class="math-container">$F''\chi$</span> has order <span 
class="math-container">$0$</span>, <span class="math-container">$F''$</span> is a measure and <span class="math-container">$$f = (F'')^{(m)}$$</span> </p></li> </ul>
4,274,044
<p>I am trying to prove the following result.</p> <blockquote> <p>Let <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be sets and <span class="math-container">$\sim$</span> an equivalence relation on <span class="math-container">$X$</span>. Let <span class="math-container">$f: X \to Y$</span> be a function which preserves <span class="math-container">$\sim$</span>, and let <span class="math-container">$\pi$</span> denote the projection <span class="math-container">$X \to X/\sim$</span>. Prove that there exists a unique function <span class="math-container">$\tilde{f}$</span> such that <span class="math-container">$f = \tilde{f} \circ \pi$</span>.</p> </blockquote> <p>From the wording of the problem, it seems that I need to prove both existence and uniqueness, though I don't know how to prove uniqueness because that would require giving some sort of &quot;formula&quot; for <span class="math-container">$\tilde{f}$</span>, but if I can give a formula for it, it must be unique. What the map is doing is: <span class="math-container">$$ x \stackrel{\pi}{\to} [x] \stackrel{\tilde{f}}{\to} Y. $$</span> Some versions of this decomposition I've seen send <span class="math-container">$[x]$</span> to an element of <span class="math-container">$\mathrm{Im}(f)$</span> and then inject into <span class="math-container">$Y$</span>. I'm not sure if I need to include that because then I'd be adding an extra map. The <span class="math-container">$f$</span>, as defined here, is only the composition of two maps.</p> <p>As for proving uniqueness, it seems to me that this follows from the fact that a function is surjective if and only if it admits a right inverse. Suppose that <span class="math-container">$\tilde{f}$</span> and <span class="math-container">$\tilde{f}'$</span> have the property that <span class="math-container">$f = \tilde{f} \circ \pi = \tilde{f}' \circ \pi$</span>. 
Cancelling on the right gives <span class="math-container">$\tilde{f} = \tilde{f}'$</span>.</p> <p>What am I missing?</p> <p>Another thing I'm having trouble understanding is the exact meaning of a projection map. The map sending <span class="math-container">$x \to [x]$</span> is surely a projection, but the definition I'd read is that <span class="math-container">$p$</span> is a projection if <span class="math-container">$p^2 = p$</span>. I can't even compose <span class="math-container">$\pi$</span> with itself because <span class="math-container">$X \neq X/\sim$</span>. Is there another, more general definition? Or is it a shorthand for a surjective map?</p>
José Carlos Santos
446,262
<p>If you define <span class="math-container">$\tilde f\colon X/\sim\longrightarrow Y$</span> by <span class="math-container">$\tilde f\bigl([x]\bigr)=f(x)$</span>, does this make sense? That is, if <span class="math-container">$[x]=[y]$</span>, do we have <span class="math-container">$\tilde f\bigl([x]\bigr)=\tilde f\bigl([y]\bigr)$</span>? Yes, because<span class="math-container">$$[x]=[y]\implies f(x)=f(y)\iff\tilde f\bigl([x]\bigr)=\tilde f\bigl([y]\bigr).$$</span>And then we have<span class="math-container">$$(\forall x\in X):\left(\tilde f\circ\pi\right)(x)=\tilde f\bigl([x]\bigr)=f(x),$$</span>and therefore <span class="math-container">$\tilde f\circ\pi=f$</span>.</p> <p>And if <span class="math-container">$g\colon X/\sim\longrightarrow Y$</span> is such that <span class="math-container">$g\circ\pi=f$</span>, then, if <span class="math-container">$x\in X$</span>,<span class="math-container">$$g\bigl([x]\bigr)=\left(g\circ\pi\right)(x)=f(x),$$</span>and so <span class="math-container">$g=\tilde f$</span>.</p> <p>Finally, the map <span class="math-container">$\pi\colon X\longrightarrow X/\sim$</span> is called the projection of a set onto the set of its equivalence classes with respect to the equivalence relation <span class="math-container">$\sim$</span>. It has <em>nothing</em> to do with <span class="math-container">$p\circ p=p$</span>, which is the notion of a projection for a linear map from a vector space into itself.</p>
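The construction in this answer is easy to mirror concretely; the following sketch (my own toy example, not from the answer) takes $X=\{0,\dots,9\}$, $x\sim y\iff x\equiv y\pmod 3$, and $f(x)=x\bmod 3$, which preserves $\sim$:

```python
X = range(10)
pi = lambda x: x % 3   # projection X -> X/~, classes labelled 0, 1, 2
f = lambda x: x % 3    # a map that is constant on each class

# f_tilde is forced: on the class [x] it must take the value f(x).
f_tilde = {}
for x in X:
    c = pi(x)
    if c in f_tilde:
        assert f_tilde[c] == f(x)  # well-definedness: f agrees on the whole class
    else:
        f_tilde[c] = f(x)

# The factorisation f = f_tilde ∘ pi holds at every point of X.
assert all(f_tilde[pi(x)] == f(x) for x in X)
```

The dictionary-building loop is exactly the uniqueness argument: each class label can receive only one value, and that value is dictated by $f$.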
2,078,264
<p>I've seen the following question on group theory:</p> <p>Let $p$ be a prime, and let $G = SL2(p)$ be the group of $2 \times 2$ matrices of determinant $1$ with entries in the field $F(p)$ of integers $\mod p$.</p> <p>(i) Define the action of $G$ on $X = F(p) \cup \{ \infty \}$ by Mobius transformations. [You need not show that it is a group action.] </p> <p>State the orbit-stabiliser theorem. Determine the orbit of $\infty$ and the stabiliser of $\infty$. Hence compute the order of $SL2(p)$.</p> <p>I know matrices are isomorphic to Mobius maps, but not how the action of a Mobius map can be used to define the action of a matrix (I don't really know what this part means, to be honest). I tried the next part, but wasn't sure whether to consider the vector $(\infty,\infty)$, $(\infty,a)$ or $(b,\infty)$ $(a,b \in F(p))$. Any help would be greatly appreciated!!</p> <p>(Sorry the question title isn't very related to the question, I just didn't know what to put specifically!)</p>
MvG
35,416
<p>Use homogeneous coordinates. A homogeneous coordinate vector $[x:y]$ represents the number $x/y$. The vector $[x:0]$ for any $x\neq 0$ represents infinity. Conventionally one would write infinity as $[1:0]$, but the choice of representative is arbitrary.</p> <p>Here is how the matrix operation relates to the Möbius map:</p> <p>$$ \begin{pmatrix}x\\y\end{pmatrix}\mapsto \begin{pmatrix}a&amp;b\\c&amp;d\end{pmatrix}\cdot\begin{pmatrix}x\\y\end{pmatrix} \qquad \frac xy\mapsto \frac{ax+by}{cx+dy} =\frac{a\frac xy+b}{c\frac xy+d} $$</p> <p>So your regular $z\mapsto\frac{az+b}{cz+d}$ is modeled nicely in this way, as long as $y\neq 0$.</p>
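For the orbit-stabiliser part of the question, one can enumerate $SL_2(p)$ for a small prime and let the homogeneous-coordinate action do the work (my sketch, not part of the answer; `p = 5` is an arbitrary choice):

```python
# Enumerate SL(2,p) for a small prime and verify the orbit-stabiliser count
# for the Möbius action on F_p ∪ {∞}, modelled with homogeneous coordinates.
p = 5
INF = ('inf',)  # marker for the point at infinity

def act(M, pt):
    """Apply M = (a,b,c,d) to a point of F_p ∪ {∞} via [x:y] -> [ax+by : cx+dy]."""
    a, b, c, d = M
    x, y = (1, 0) if pt == INF else (pt, 1)
    u, v = (a * x + b * y) % p, (c * x + d * y) % p
    return INF if v == 0 else (u * pow(v, -1, p)) % p

group = [(a, b, c, d)
         for a in range(p) for b in range(p)
         for c in range(p) for d in range(p)
         if (a * d - b * c) % p == 1]

orbit = {act(M, INF) for M in group}
stab = [M for M in group if act(M, INF) == INF]

assert len(orbit) == p + 1                   # the action is transitive
assert len(stab) == p * (p - 1)              # upper-triangular matrices with det 1
assert len(group) == len(orbit) * len(stab)  # orbit-stabiliser: |SL2(p)| = p(p^2-1)
```

The counts confirm $|SL_2(p)| = (p+1)\cdot p(p-1) = p(p^2-1)$, which is the answer the exam question is after.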
1,166,524
<p>I am looking for all irreducible polynomials of degree 5 in $\mathbb{F}_{17}$ which have the form $h(y) = y^5+C$.</p> <p>I think there aren't any irreducible polynomials of this form, because for every $C$ I can find an element of $\mathbb{F}_{17}$ as a root.</p> <p>What would be the fastest way to find irreducible polynomials if the form were $h(y) = y^5+\dots$?</p>
MooS
211,913
<p>The unit group has 16 elements, hence any element $a$ has order relatively prime to $5$. In particular $a$ and $a^5$ have the same order in the cyclic group generated by $a$. We obtain $a \in \langle a \rangle = \langle a^5 \rangle$.</p> <p><strong>Edit</strong> There would be the following approach to easily find such an irreducible polynomial: if $17$ had order $5$ mod $11$, we could deduce that there exists an $11$th root of unity in $\mathbb F_{17^5}$. This would mean that $\frac{x^{11}-1}{x-1}$ factors into two irreducibles of degree $5$. Unfortunately $17$ is a primitive root modulo $11$, hence $\frac{x^{11}-1}{x-1}$ is irreducible itself.</p> <p>So this does not work here (though there were some chances a priori...), but of course there are many situations where this approach works well.</p>
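The first paragraph shows $x\mapsto x^5$ is injective (hence bijective) on $\mathbb F_{17}$, so every $y^5+C$ has a root and none is irreducible; a brute-force confirmation (my addition, not the answerer's):

```python
p = 17

# x -> x^5 is a bijection on F_17, since gcd(5, 16) = 1 on the unit group.
assert {pow(x, 5, p) for x in range(p)} == set(range(p))

# Consequently y^5 + C has a root for every C, so it is never irreducible.
for C in range(p):
    assert any((pow(y, 5, p) + C) % p == 0 for y in range(p))
```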
1,133,581
<p>$\lim\limits_{(x,y) \to (0,0)} \frac{\log(1-x^2y^2)}{x^2y^2}$</p> <p>in polar coordinates:</p> <p>$\lim\limits_{r \to 0} \frac{\log(1-r^4\cos^2t \sin^2t)}{r^4\cos^2t \sin^2t}$</p> <p>The first limit exists if the second is independent of $t$. But if $t$ is $k \pi/2$ that fraction does not exist. So I could argue that the limit does not exist. But this is wrong: the limit is $-1$. Where is the mistake?</p> <p><strong>EDIT</strong></p> <p>I know I can use another way to reach the result. My question is strictly about this method (polar coordinates). Since the second limit does not exist for some $t$, I would conclude that the multivariable limit does not exist. Perhaps there are some $t$ that I shouldn't care about?</p>
parsiad
64,601
<p>Think about... $$ \bigcup_{n=1}^{\infty}\left(0+\frac{1}{n},1\right) $$</p>
1,593,433
<p>How do I express this limit of a Riemann sum as an integral? $$\lim\limits_{m\to\infty}\frac{1}{m}\sum_{n=1}^m\cos\left({\frac{2\pi n x}{b}}\right)$$</p>
Nate 8
226,768
<p>If we replaced b with m, then indeed it would reflect a Riemann sum (though the expression you present does not).</p> <p>You can tell, because in a Riemann sum, you are adding up the areas of a series of rectangles. To find the area of each rectangle, multiply that rectangle's height by its width. To find the total area, add all the rectangle areas up. So $\frac{1}{m}\cos(\frac{2\pi}{m})+\frac{1}{m}\cos(\frac{4\pi}{m}) + ... \frac{1}{m}\cos(\frac{2m\pi}{m})$.</p> <p>Since the width $\frac{1}{m}$ is the same, simply factor it out. $\frac{1}{m}\left(\cos(\frac{2\pi}{m})+\cos(\frac{4\pi}{m}) + ... \cos(\frac{2m\pi}{m})\right)$. Then replace the sum with sum notation. $$\frac{1}{m} \sum_{n=1}^{m} \cos\left(\frac{2\pi n}{m}\right)$$</p> <p>To find the function you are integrating, note that the height of the rectangles is usually found by plugging in the x value (n/m in this case) into the function. To find the bounds, find what the largest and smallest values you're plugging in go towards.</p> <p>(Hint: if it's infinity or negative infinity, that means you didn't split up the pieces correctly. You should simply be sampling from the same interval, just with points that get closer and closer together as m goes to infinity.)</p>
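With $b$ replaced by $m$ as the answer suggests, the expression is a right-endpoint Riemann sum for $\int_0^1\cos(2\pi x)\,dx=0$, which is easy to check numerically (my sketch, not the answerer's):

```python
import math

def riemann_sum(m):
    """Right-endpoint Riemann sum of cos(2*pi*x) on [0, 1] with m rectangles."""
    return sum(math.cos(2 * math.pi * n / m) for n in range(1, m + 1)) / m

# The sums agree with the integral of cos(2*pi*x) over [0, 1], which is 0.
for m in (10, 100, 1000):
    assert abs(riemann_sum(m)) < 1e-9
```

In fact the sum vanishes exactly for every $m\ge 1$, since the summands are the real parts of all $m$-th roots of unity.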
42,016
<p>I am looking for algorithms on how to find rational points on an <a href="http://en.wikipedia.org/wiki/Elliptic_curve">elliptic curve</a> $$y^2 = x^3 + a x + b$$ where $a$ and $b$ are integers. Any sort of ideas on how to proceed are welcome. For example, how to find solutions in which the numerators and denominators are bounded, or how to find solutions with a randomized algorithm. Anything better than brute force is interesting.</p> <p>Background: a student worked on the <a href="http://en.wikipedia.org/wiki/Mordell-Weil_theorem">Mordell-Weil theorem</a> and illustrated it on some simple examples of elliptic curves. She looked for rational points by brute force (I really mean <em>brute</em>, by enumerating all possibilities and trying them). As a continuation of the project she is now interested in smarter algorithms for finding rational points. A cursory search on Math Reviews did not find much.</p>
Felipe Voloch
2,290
<p>There are clever ways of speeding the brute force enumeration. An implementation of such is M. Stoll's ratpoints program: <a href="http://www.mathe2.uni-bayreuth.de/stoll/programs/index.html" rel="noreferrer">http://www.mathe2.uni-bayreuth.de/stoll/programs/index.html</a></p> <p>A completely different way of generating points on elliptic curves is to use Heegner points, but it only works when the rank is one. This is technically more sophisticated but I believe there are also implementations. Google yields: <a href="http://www.math.mcgill.ca/darmon/programs/programs.html" rel="noreferrer">http://www.math.mcgill.ca/darmon/programs/programs.html</a></p>
42,016
<p>I am looking for algorithms on how to find rational points on an <a href="http://en.wikipedia.org/wiki/Elliptic_curve">elliptic curve</a> $$y^2 = x^3 + a x + b$$ where $a$ and $b$ are integers. Any sort of ideas on how to proceed are welcome. For example, how to find solutions in which the numerators and denominators are bounded, or how to find solutions with a randomized algorithm. Anything better than brute force is interesting.</p> <p>Background: a student worked on the <a href="http://en.wikipedia.org/wiki/Mordell-Weil_theorem">Mordell-Weil theorem</a> and illustrated it on some simple examples of elliptic curves. She looked for rational points by brute force (I really mean <em>brute</em>, by enumerating all possibilities and trying them). As a continuation of the project she is now interested in smarter algorithms for finding rational points. A cursory search on Math Reviews did not find much.</p>
David E Speyer
297
<p>The best practical solution is to have someone else do the work. You can look up the curve in <a href="http://www.warwick.ac.uk/~masgaj/ftp/data/">Cremona's tables</a>, if it is not too large. If it is larger than that, you can use <a href="http://www.warwick.ac.uk/staff/J.E.Cremona//mwrank/">mwrank</a>, a free-standing C++ program. I believe that SAGE and MAGMA also both have this functionality, although I couldn't find the syntax in a quick search.</p>
202,043
<p>A <a href="http://en.wikipedia.org/wiki/Square-free_word">square-free word</a> is a string of symbols (a "word") that avoids the pattern $XX$, where $X$ is any consecutive sequence of symbols in the string. For alphabets of two symbols, the longest square-free word has length $3$. But for alphabets of three symbols, there are infinitely long square-free words, due to Thue.</p> <p>Define a <em>three-halves-free word</em> as one that avoids the pattern $XYX$, where $|X|=|Y|$. Another view is that these words avoid $Z(\frac{1}{2}Z)$, i.e., $Z=XY$ with $X$ and $Y$ the same length. So $Z$ has even length, and we want to avoid immediately following with the first half of $Z$.</p> <p>For an alphabet of two symbols, $0$ &amp; $1$, here are the three-halves-free words of length $5$: \begin{eqnarray*} &amp;(0, 1, 1, 0, 0)\\ &amp;(0, 0, 1, 1, 0)\\ &amp;(1, 1, 0, 0, 1)\\ &amp;(1, 0, 0, 1, 1) \end{eqnarray*} All the $28$ other length-$5$ words fail. E.g., $(0,1,0,1,0)$ fails because $(0,1,0)$ matches with $XYX=010$. But there are no three-halves-free words of length $\ge 6$. For example, extending $(0, 1, 1, 0, 0)$ with $0$ gives $(0, 1, 1, 0, 0, 0)$, matching $XYX=000$, and extending with $1$ gives $(0, 1, 1, 0, 0, 1)$, matching with $X=01$.</p> <blockquote> <p><strong><em>Q</em></strong>. Are there infinitely long words in three-letter alphabets that are three-halves-free?</p> </blockquote> <p>If a word has a square $ZZ$ with $|Z|$ even, then it has a three-halves pattern. If a word is square-free, it may not be three-halves-free. For example, both $(1,0,1)$ and $(0,1,2,1,0,1)$ are square-free. And the square-free infinite word <a href="http://oeis.org/A029883">A029883</a> $$ (1, 0, -1, 1, -1, 0, 1, 0, -1, 0, 1, -1, 1, 0, -1, \ldots ) $$ is not three-halves-free: e.g., $(-1,1,-1)$ and $(-1,0,1,0,-1,0)$ are three-halves patterns.</p>
echinodermata
46,975
<p><em>Note: This is an extended comment, not a full answer.</em></p> <p>I don't know about you, but I didn't start out with any good intuition about the answer to the question. I decided to do some computer experiments/visualization.</p> <p>I wrote some code in Prolog which generates square-free and three-halves-free words over a given alphabet. I don't usually use Prolog for anything, but this seemed like the sort of problem that it's very well suited to. Source code and an interactive environment are available <a href="http://swish.swi-prolog.org/p/Three-halves-free%20words%20(analogous%20to%20square-free).pl" rel="nofollow noreferrer">here</a>. (Please comment if it doesn't work, the link is dead, etc.) Please <a href="http://swish.swi-prolog.org/p/Three-halves-free%20words%20(analogous%20to%20square-free).pl" rel="nofollow noreferrer">experiment</a> on your own too!</p> <p>The query below generates the three-halves-free words of length 5 over the alphabet x,y. In the code I called it "sandwich-free" rather than "three-halves-free".</p> <pre><code>?- sandwich_free(L,[x,y],5). L = [x, y, y, x, x] ; L = [x, x, y, y, x] ; L = [y, y, x, x, y] ; L = [y, x, x, y, y] ; false. </code></pre> <p>"?-" is the prompt; the next four lines are the results, and "false" indicates that there are no more answers. (The order of the results is lexicographic, if you read each result <em>from right to left</em>.)</p> <p>If you ask for the sandwich-free words of length 6 over the same alphabet, you get 0 results:</p> <pre><code>?- sandwich_free(L,[x,y],6). false. </code></pre> <p>Here's a sandwich-free word of length 500 over a 3-letter alphabet:</p> <pre><code>?- sandwich_free(L,[x,y,z],500). 
L = [y, x, x, y, z, x, y, z, x, y, y, x, x, z, y, y, x, x, y, z, x, y, z, x, x, z, z, y, y, x, x, y, z, x, y, y, x, x, z, y, y, x, x, z, y, y, x, x, y, z, x, y, y, x, x, z, z, x, y, z, x, x, z, y, y, x, x, y, z, x, y, z, x, y, y, x, x, z, y, y, x, x, y, z, x, y, z, x, y, y, x, x, z, y, y, x, x, y, z, x, y, z, x, x, z, z, y, y, x, x, y, z, x, y, y, x, x, z, y, y, x, x, z, z, x, y, z, x, y, z, x, x, z, y, y, x, x, y, z, x, y, y, x, x, z, z, x, y, z, x, x, z, y, y, x, x, y, z, x, y, z, x, y, y, x, x, z, y, y, x, x, y, z, x, y, z, x, y, y, x, x, z, y, y, x, x, y, z, x, y, z, x, x, z, z, y, y, x, x, y, z, x, y, y, x, x, z, y, y, x, x, z, y, y, x, x, y, z, x, y, y, x, x, z, z, x, y, y, x, x, z, z, x, y, z, x, y, z, x, x, z, y, y, x, x, y, z, x, y, y, x, x, z, y, y, z, x, x, y, y, x, z, y, x, x, y, y, z, x, x, z, y, y, x, x, y, z, x, y, z, x, y, y, x, x, z, y, y, x, x, y, z, x, y, z, x, y, y, x, x, z, y, y, x, x, y, z, x, y, z, x, x, z, z, y, y, x, x, y, z, x, y, y, x, x, z, y, y, x, x, z, y, y, x, x, y, z, x, y, y, x, x, z, z, x, y, z, x, x, z, y, y, x, x, y, z, x, y, z, x, y, y, x, x, z, y, y, x, x, y, z, x, y, z, x, y, y, x, x, z, y, y, x, x, y, z, x, y, z, x, x, z, z, y, y, x, x, y, z, x, y, y, x, x, z, y, y, x, x, z, y, y, x, x, y, z, x, y, y, x, x, z, z, x, y, z, x, x, z, y, y, x, x, y, z, x, y, z, x, y, y, x, x, z, y, y, x, x, y, z, x, y, z, x, y, y, x, x, z, y, y, x, x, y, z, x, y, z, x, x, z, z, y, y, x, x, y, z, x, y, y, x, x, z, y, y, x, x, z, y, y, x, x, y, z, x, y, y, x, x] ... </code></pre> <p>You can ask for the sandwich-free words of any length over a given alphabet:</p> <pre><code>?- sandwich_free_peano(L,[x,y],N). L = [], N = zero ; L = [x], N = s(zero) ; L = [y], N = s(zero) ; L = [x, x], N = s(s(zero)) ; L = [y, x], N = s(s(zero)) ; L = [x, y], N = s(s(zero)) ; L = [y, y], N = s(s(zero)) ; L = [y, x, x], N = s(s(s(zero))) ... 
</code></pre> <p>(Technical limitations in Prolog required me to write a separate version of <code>sandwich_free</code> using Peano numerals, when you're passing in a number not already instantiated as a constant.)</p> <p>You can also use it to check whether a given list is sandwich-free:</p> <pre><code>?- sandwich_free_peano([x,y,z,z,y],[x,y,z],_). true ; false. </code></pre> <p>(The answer is true; false just indicates no more answers.)</p> <p>Here's a plot of length vs. the number of square-free and sandwich-free words of that length over a 3-letter alphabet. The growth appears to be <strong>exponential</strong>:</p> <p><img src="https://i.stack.imgur.com/ItMks.png" alt="Length vs. the number of square-free and sandwich-free words of that length over a 3-letter alphabet"></p> <p>The query which generated the data points was</p> <pre><code>?- numlist(0,23,L),maplist(how_many_sandwich_free_alphabet_3,L,XS). </code></pre> <p>The data points on the graph are:</p> <ul> <li>Square-free: 1, 3, 6, 12, 18, 30, 42, 60, 78, 108, 144, 204, 264, 342, 456, 618, 798, 1044, 1392, 1830, 2388, 3180, 4146, 5418 (OEIS <a href="http://oeis.org/A006156" rel="nofollow noreferrer">A006156</a>)</li> <li>Sandwich-free: 1, 3, 9, 18, 36, 72, 108, 180, 288, 432, 648, 1008, 1548, 2376, 3636, 5400, 8172, 12276, 17892, 26532, 39492, 58428, 86688, 128592 (not in the OEIS at this time)</li> </ul> <p>I find this compelling evidence that there are arbitrarily long three-halves-free words, indeed exponentially many as a function of length. It also seems very likely that infinitely long three-halves-free words exist.</p> <hr> <p><strong>Edit:</strong></p> <p>Here's some interesting new data: Say that a sandwich-free word is "extensible" if you can add one more letter to the front of it to get another sandwich-free word. Most sandwich-free words are extensible. 
An example of one that is not:</p> <pre><code>_, y, y, x, x, y, z, z, x, x, z, y, y, x, x </code></pre> <p>Try to fill in the blank at the front.</p> <p>More generally we can talk about "$k$-extensible" sandwich-free words, by which I mean "$k$-extensible but not $(k+1)$-extensible". Now here's a plot of how many non-extensible and 1-extensible words there are of particular lengths:</p> <p><img src="https://i.stack.imgur.com/A0qFM.png" alt="Length vs. the number of sandwich-free words, non-extensible sandwich-free words, and 1-extensible sandwich-free words, of that length over a 3-letter alphabet"></p> <p>(Data: 1, 3, 9, 18, 36, 72, 108, 180, 288, 432, 648, 1008, 1548, 2376, 3636, 5400, 8172, 12276, 17892, 26532, 39492, 58428, 86688, 128592, 190512, 282492, 418680, 620208, 919044, 1362456, 2012940; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 72, 72, 144, 396, 252, 396, 720, 1044, 1440, 2376, 3456, 5040, 7524, 11088, 16272, 26604, 36864; 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 36, 36, 36, 72, 108, 180, 720, 540, 468)</p>
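The counts reported above are easy to reproduce with a short brute-force search; this sketch (mine, independent of the Prolog code) uses the same "sandwich-free" terminology:

```python
def has_sandwich(w):
    """True if w contains a factor XYX with |X| == |Y| >= 1."""
    n = len(w)
    return any(w[i:i + k] == w[i + 2 * k:i + 3 * k]
               for k in range(1, n // 3 + 1)
               for i in range(n - 3 * k + 1))

def count_sandwich_free(length, alphabet='xyz'):
    """Count sandwich-free words of the given length, extending letter by letter."""
    words = ['']
    for _ in range(length):
        words = [w + c for w in words for c in alphabet
                 if not has_sandwich(w + c)]
    return len(words)

# Matches the data points in the plot: 1, 3, 9, 18, 36, 72, 108, ...
assert [count_sandwich_free(n) for n in range(7)] == [1, 3, 9, 18, 36, 72, 108]
```

It also reproduces the two-letter behaviour from the question: 4 sandwich-free words of length 5, none of length 6.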
202,043
<p>A <a href="http://en.wikipedia.org/wiki/Square-free_word">square-free word</a> is a string of symbols (a "word") that avoids the pattern $XX$, where $X$ is any consecutive sequence of symbols in the string. For alphabets of two symbols, the longest square-free word has length $3$. But for alphabets of three symbols, there are infinitely long square-free words, due to Thue.</p> <p>Define a <em>three-halves-free word</em> as one that avoids the pattern $XYX$, where $|X|=|Y|$. Another view is that these words avoid $Z(\frac{1}{2}Z)$, i.e., $Z=XY$ with $X$ and $Y$ the same length. So $Z$ has even length, and we want to avoid immediately following with the first half of $Z$.</p> <p>For an alphabet of two symbols, $0$ &amp; $1$, here are the three-halves-free words of length $5$: \begin{eqnarray*} &amp;(0, 1, 1, 0, 0)\\ &amp;(0, 0, 1, 1, 0)\\ &amp;(1, 1, 0, 0, 1)\\ &amp;(1, 0, 0, 1, 1) \end{eqnarray*} All the $28$ other length-$5$ words fail. E.g., $(0,1,0,1,0)$ fails because $(0,1,0)$ matches with $XYX=010$. But there are no three-halves-free words of length $\ge 6$. For example, extending $(0, 1, 1, 0, 0)$ with $0$ gives $(0, 1, 1, 0, 0, 0)$, matching $XYX=000$, and extending with $1$ gives $(0, 1, 1, 0, 0, 1)$, matching with $X=01$.</p> <blockquote> <p><strong><em>Q</em></strong>. Are there infinitely long words in three-letter alphabets that are three-halves-free?</p> </blockquote> <p>If a word has a square $ZZ$ with $|Z|$ even, then it has a three-halves pattern. If a word is square-free, it may not be three-halves-free. For example, both $(1,0,1)$ and $(0,1,2,1,0,1)$ are square-free. And the square-free infinite word <a href="http://oeis.org/A029883">A029883</a> $$ (1, 0, -1, 1, -1, 0, 1, 0, -1, 0, 1, -1, 1, 0, -1, \ldots ) $$ is not three-halves-free: e.g., $(-1,1,-1)$ and $(-1,0,1,0,-1,0)$ are three-halves patterns.</p>
user38477
38,477
<p><strike>(Not a real answer, just a conjecture)</strike></p> <p>Let <span class="math-container">$w$</span> be the Pansiot word on <span class="math-container">$4$</span> letters defined here:<br> <a href="https://www.sciencedirect.com/science/article/pii/0166218X84900064" rel="nofollow noreferrer">J.J. Pansiot, A propos d'une conjecture de F. Dejean sur les répétitions dans les mots, Discrete Applied Mathematics Volume 7, Issue 3, March 1984, Pages 297–311</a><br> in French, or also in<br> <a href="https://www.sciencedirect.com/science/article/pii/030439759290264G" rel="nofollow noreferrer"> J. Moulin Ollagnier, Proof of Dejean's conjecture for alphabets with 5, 6, 7, 8, 9, 10 and 11 letters, Theoretical Computer Science, Volume 95, Issue 2, 30 March 1992, Pages 187–205</a>.<br> This word proves Dejean's Conjecture for <span class="math-container">$4$</span> letters, that is:<br> it avoids fractional repetitions of exponent greater than <span class="math-container">$7/5$</span>, and so it is also three-halves-free.</p> <p><span class="math-container">$$w=abcdbadcbdacdbcadcbacdabdcadbcdacbadcabdacdbadcbdabcadbacdab...$$</span></p> <blockquote> <p><strike><strong>Conjecture</strong></strike> <strong>Theorem</strong></p> <p>Let <span class="math-container">$h(a)=aabbaccabc$</span>, <span class="math-container">$h(b)=aacbacbacc$</span>, <span class="math-container">$h(c)=abbaaccbbc$</span>, <span class="math-container">$h(d)=abcaacbbcc$</span>. </p> <p>Then <span class="math-container">$h(w)$</span> is a three-halves-free word on <span class="math-container">$3$</span> letters.</p> </blockquote> <p><strike>I checked up to size 412030. I did not try to prove the result, but maybe the proof is simple and "standard" (suppose that the image has a three-halves pattern, then prove that the pre-image has one).</strike></p> <p>edit: The proof is easy. 
<span class="math-container">$h$</span> is <span class="math-container">$10$</span>-uniform and is <em>synchronizing</em>, that is, for every <span class="math-container">$x,y,z\in\{a,b,c,d\}$</span>, <span class="math-container">$h(x)$</span> is not a proper infix of <span class="math-container">$h(yz)$</span>. Thus, if a factor <span class="math-container">$XYX$</span> of <span class="math-container">$h(w)$</span> is a three-halves pattern, then either <span class="math-container">$XYX$</span> is small (of size at most <span class="math-container">$18\times 3$</span>), or <span class="math-container">$\vert X \vert = \vert Y \vert$</span> is a multiple of <span class="math-container">$10$</span>. There are only a finite number of "small" factors to check.<br> Now, if we are in the second case, and if <span class="math-container">$\vert X \vert$</span> is big (at least <span class="math-container">$42$</span>), then <span class="math-container">$XYX$</span> has a unique de-substitution in <span class="math-container">$w$</span>, and <span class="math-container">$w$</span> has a factor <span class="math-container">$X'Y'X'$</span> with <span class="math-container">$\vert Y' \vert / \vert X' \vert &gt; 2/5$</span>. Thus <span class="math-container">$w$</span> has a repetition of exponent greater than <span class="math-container">$7/5$</span>, and we have a contradiction with the fact that <span class="math-container">$w$</span> is a Dejean word (i.e. it is <span class="math-container">$7/5+$</span>-free).</p>
3,165,276
<p>I am attempting to determine whether the limit of <span class="math-container">$S_n = \frac{\sqrt{n}}{\sqrt{n}+1} $</span> exists, and if it exists, what it is. If the limit exists, I want to prove it in terms of the definition for <span class="math-container">${e}$</span> and <span class="math-container">${N}$</span>.</p> <p>I got the limit by: <span class="math-container">$$\lim_{n\to\infty}S_n= \lim_{n\to\infty}\frac{\sqrt{n}}{\sqrt{n}+1}=\lim_{n\to\infty}\Biggl(\frac{\frac{\sqrt{n}}{\sqrt{n}}}{\frac{\sqrt{n}}{\sqrt{n}}+\frac{1}{\sqrt{n}}}\Biggr)=\lim_{n\to\infty}\biggl(\frac{1}{1+{\frac{1}{\sqrt{n}}}}\Biggr)=1$$</span> Having the limit, I assume I can plug it into the definition, as follows:</p> <p><span class="math-container">$${\forall}{e}&gt;0\;\exists{N}={N}(e)\;s.t.\;\forall\;{n}&gt;{N}\;\left\vert\,\frac{\sqrt{n}}{\sqrt{n}+1}-1\,\right\vert&lt;{e} $$</span> which gives me: <span class="math-container">$$\vert\;\frac{\sqrt{n}}{\sqrt{n}+1}-{\frac{\sqrt{n}-1}{\sqrt{n}-1}}\;\vert=\vert\;\frac{\sqrt{n}\,(\sqrt{n}-1)-\;(\sqrt{n}+1)(\sqrt{n}-1)}{(\sqrt{n}+1)(\sqrt{n}-1)}\;\vert=\vert\frac{1-\sqrt{n}}{n-1}\vert$$</span> So now I have <span class="math-container">$$\vert\frac{1-\sqrt{n}}{n-1}\;\vert= \frac{\sqrt{n}-1}{n-1}$$</span> Am I on the right track? I am unsure of how to proceed from here?</p>
Kavi Rama Murthy
142,385
<p>Note that since <span class="math-container">$\sqrt n&gt;1$</span>, the absolute value resolves as <span class="math-container">$\left|\frac{1-\sqrt n}{n-1}\right|=\frac{\sqrt n-1}{n-1}$</span>. Hint for completing the proof: I claim that <span class="math-container">$n&gt;2$</span> and <span class="math-container">$n &gt;\frac 4 {\epsilon^{2}}$</span> gives <span class="math-container">$\frac {\sqrt n -1} {n-1} &lt;\epsilon$</span>. For this use the facts that <span class="math-container">$\sqrt n -1 &lt;\sqrt n$</span> and <span class="math-container">$n-1 &gt;\frac n 2$</span> for <span class="math-container">$ n&gt;2$</span>. </p>
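The hinted bound is easy to sanity-check numerically. A small sketch of mine (not part of the answer) verifying that $n>\max(2,\,4/\epsilon^2)$ forces $\frac{\sqrt n-1}{n-1}<\epsilon$:

```python
import math

def gap(n):
    # (sqrt(n) - 1)/(n - 1), which also equals 1/(sqrt(n) + 1)
    return (math.sqrt(n) - 1) / (n - 1)

for eps in (0.5, 0.1, 0.01):
    N = max(2, math.ceil(4 / eps ** 2))
    assert all(gap(n) < eps for n in range(N + 1, N + 500))
```

The inequality chain behind it: $\frac{\sqrt n-1}{n-1}<\frac{\sqrt n}{n/2}=\frac{2}{\sqrt n}<\epsilon$ once $n>4/\epsilon^2$.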
68,899
<p>I was rereading basic results on de Rham cohomology, and this led me inevitably to the fact that $H^q(X,\Omega^p)$ converges to $H^*(X)$ for any smooth proper variety (over any field). How does one view this spectral sequence "maturely" as a Grothendieck spectral sequence?</p>
Torsten Ekedahl
4,008
<p>If by "Grothendieck spectral sequence" you mean the spectral sequence associated to the composite of functors (fulfilling the Grothendieck condition) then I am skeptical as to whether this is possible. Also I do not see that there would be any particular point in being able to view it in that light (unless the functors involved would be interesting in themselves). As you seem to be looking for some general principle that would give the dRss, I would like to mention one such principle, which seems to give most spectral sequences in use very quickly. Usually one constructs the dRss by taking a Cartan-Eilenberg resolution of the de Rham complex and then considering the spectral sequence associated to a double complex. However, one can instead use the Massey exact couple construction; once one has an exact couple one automatically gets a spectral sequence. One systematic way of constructing such exact couples is to start with a triangulated category $T$, a sequence of morphisms $\cdots\to X_{i-1}\to X_i\to X_{i+1}\to\cdots$ and a homological functor $H$ on $T$. This gives a spectral sequence starting with $H^i(X_j)$ and going towards $\varinjlim_iH^\ast(X_i)$. (Convergence is not assured but is OK for instance if $X_i$ is $0$ for small $i$ and constant for large $i$.) Starting with the naive truncations of the de Rham complex gives the dRss (and starting with canonical truncations in the algebraic case gives the conjugate spectral sequence).</p> <p>This setup works very generally, for instance it is how Adams first constructed the Adams spectral sequence.</p>
2,263,431
<p>Let $$v=\sqrt{a^2\cos^2(x)+b^2\sin^2(x)}+\sqrt{b^2\cos^2(x)+a^2\sin^2(x)}$$</p> <p>Then find difference between maximum and minimum of $v^2$.</p> <p>I understand both of them are distance of a point on ellipse from origin, but how do we find maximum and minimum?</p> <p>I tried guessing, and got maximum $v$ when $x=45^{o}$ and minimum when $x=0$, but how do we justify this?</p>
Michael Rozenberg
190,319
<p>By C-S we obtain: $$v=$$ $$=\sqrt{a^2\cos^2x+b^2\sin^2x+b^2\cos^2x+a^2\sin^2x+2\sqrt{(a^2\cos^2x+b^2\sin^2x)(b^2\cos^2x+a^2\sin^2x)}}=$$ $$=\sqrt{a^2+b^2+2\sqrt{(a^2\cos^2x+b^2\sin^2x)(b^2\cos^2x+a^2\sin^2x)}}\geq$$ $$\geq\sqrt{a^2+b^2+2|ab|(\cos^2x+\sin^2x)}=|a|+|b|.$$ $$v\leq\sqrt{2(a^2\cos^2x+b^2\sin^2x+b^2\cos^2x+a^2\sin^2x)}=\sqrt{2(a^2+b^2)}.$$</p>
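As a numerical sanity check (my addition, not part of the proof above), both bounds can be tested on a grid of $x$ values:

```python
import math

def v(a, b, x):
    # v(x) as defined in the question
    c2, s2 = math.cos(x) ** 2, math.sin(x) ** 2
    return (math.sqrt(a * a * c2 + b * b * s2)
            + math.sqrt(b * b * c2 + a * a * s2))

for a, b in [(1.0, 2.0), (3.0, -1.0), (0.5, 0.5)]:
    lo = abs(a) + abs(b)                     # proven lower bound
    hi = math.sqrt(2 * (a * a + b * b))      # proven upper bound
    for i in range(629):                     # x from 0 to roughly 2*pi
        val = v(a, b, i / 100)
        assert lo - 1e-9 <= val <= hi + 1e-9
```

Both bounds are attained (at $x=0$ and $x=\pi/4$ respectively), so the tolerance only guards against floating-point roundoff.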
2,263,431
<p>Let $$v=\sqrt{a^2\cos^2(x)+b^2\sin^2(x)}+\sqrt{b^2\cos^2(x)+a^2\sin^2(x)}$$</p> <p>Then find difference between maximum and minimum of $v^2$.</p> <p>I understand both of them are distance of a point on ellipse from origin, but how do we find maximum and minimum?</p> <p>I tried guessing, and got maximum $v$ when $x=45^{o}$ and minimum when $x=0$, but how do we justify this?</p>
egreg
62,967
<p>We can restrict to $x\in[0,\pi/2]$, because of symmetries.</p> <p>The maximum and minimum of $v$ are attained at the same values as the maximum and minimum of $v^2$: $$ v^2=a^2+b^2+ 2\sqrt{(a^2\cos^2(x)+b^2\sin^2(x))(b^2\cos^2(x)+a^2\sin^2(x))} $$ We can also remove $a^2+b^2$, then the factor $2$ and square again, so we reduce to finding the maximum and minimum of $$ f(x)=(a^2\cos^2(x)+b^2\sin^2(x))(b^2\cos^2(x)+a^2\sin^2(x)) $$ If we set $t=\cos^2x$, we want to look at $$ g(t)=((a^2-b^2)t+b^2)((b^2-a^2)t+a^2) $$ subject to $0\le t\le 1$.</p> <p>If $a^2=b^2$, the function is constant, so it's not restrictive to assume $a^2\ne b^2$. The (global) function $g$ assumes its maximum at the middle point between its zeros: $$ \frac{1}{2}\left(\frac{b^2}{b^2-a^2}+\frac{a^2}{a^2-b^2}\right)=\frac{1}{2} $$ which is in $[0,1]$ so it's our required maximum.</p> <p>The minimum is either at $0$ or $1$ (the graph of $g$ is a parabola): since $g(0)=a^2b^2=g(1)$, we can choose either one.</p> <p>Thus we have a maximum for $t=1/2$, which corresponds to $x=\pi/4$, and a minimum at $x=0$ (or $x=\pi/2$).</p> <p>We have $$ v(\pi/4)=2\sqrt{\frac{1}{2}(a^2+b^2)} $$ and $$ v(0)=v(\pi/2)=|a|+|b| $$</p>
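A grid search (my own check, not part of the answer) confirms the locations of the extrema and that $v^2_{\max}-v^2_{\min}=2(a^2+b^2)-(|a|+|b|)^2=(|a|-|b|)^2$:

```python
import math

def v2(a, b, x):
    c2, s2 = math.cos(x) ** 2, math.sin(x) ** 2
    return (math.sqrt(a * a * c2 + b * b * s2)
            + math.sqrt(b * b * c2 + a * a * s2)) ** 2

for a, b in [(1, 2), (3, 1), (2, 5)]:
    xs = [i * (math.pi / 2) / 2000 for i in range(2001)]   # grid on [0, pi/2]
    vals = [v2(a, b, x) for x in xs]
    vmax, vmin = max(vals), min(vals)
    assert math.isclose(vmax, 2 * (a * a + b * b), rel_tol=1e-9)      # at x = pi/4
    assert math.isclose(vmin, (abs(a) + abs(b)) ** 2, rel_tol=1e-9)   # at x = 0
    assert math.isclose(vmax - vmin, (abs(a) - abs(b)) ** 2, rel_tol=1e-6)
```

The grid deliberately includes $x=0$ and $x=\pi/4$, where the two extrema are attained exactly.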
1,602,078
<p>There are $n$ prisoners and $n$ hats. Each hat is colored with one of $k$ given colors. Each prisoner is assigned a random hat, but the number of each color hat is not known to the prisoners. The prisoners will be lined up single file where each can see the hats in front of him but not behind. Starting with the prisoner in the back of the line and moving forward, they must each, in turn, say only one word which must be one of the $k$ given colors. If the word matches their hat color they are released, if not, they are killed on the spot. They can set up a strategy before the test, so they choose a strategy that maximizes the number of definitely released prisoners (that number is called the <em>number of the strategy</em>). What is that number?</p>
Asinomás
33,907
<p>Label the colors $\{1,2,\dots, k\}$. The first one says the sum of the hats in front of him $\bmod k$ (the last $n-1$ persons). After this the second one can deduce which number corresponds to the color of his hat (by subtracting the sum he can see from the sum previously announced).</p> <p>The third person, having heard all of this, can now deduce the sum of the last $n-2$ people, and by subtracting the sum of the hats he sees (the last $n-3$) from this, he can deduce which hat he has. </p> <p>This process continues on to the last person. All of them are saved except for the first one, who survives with probability $\frac{1}{k}$. It is clear that no matter which strategy we follow, the probability the first person survives is $\frac{1}{k}$. So this strategy is optimal.</p> <p>The maximum strategy number is hence $n-1$.</p>
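The strategy above can be simulated directly. A short Python sketch (my own illustration; the names are not from the answer), with `hats[0]` the prisoner at the back of the line who speaks first and colors encoded as `0..k-1`:

```python
import random

def run_strategy(hats, k):
    # hats[0] speaks first and sees everyone in front of him
    n = len(hats)
    guesses = [sum(hats[1:]) % k]        # announce the sum of all visible hats mod k
    known = guesses[0]                   # sum of hats[1:] mod k, updated as hats are revealed
    for i in range(1, n):
        seen = sum(hats[i + 1:]) % k     # hats in front of prisoner i
        guess = (known - seen) % k       # his own hat, deduced exactly
        guesses.append(guess)
        known = (known - guess) % k      # drop prisoner i's hat from the running sum
    return guesses

random.seed(0)
hats = [random.randrange(4) for _ in range(12)]
guesses = run_strategy(hats, 4)
assert guesses[1:] == hats[1:]           # everyone except possibly the first survives
```

Whatever the assignment, every prisoner after the first recovers his color exactly; the first is right only if his own hat happens to match his announcement.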
2,940,287
<p>My ring theory teacher has a very broad definition of rings: he doesn't require them to be associative. As such, he told us to work out a definition of a product operation <span class="math-container">$\cdot$</span> on <span class="math-container">$R = \mathbb{Z}_2 \times \mathbb{Z}_2$</span> that is distributive over the additive operation <span class="math-container">$+$</span> (the operation that makes <span class="math-container">$R$</span> an abelian group) and that makes the resulting ring <span class="math-container">$(R, +, \cdot)$</span> non associative, non commutative and without a unit element. Im having some trouble doing this and was wondering if someone could shed some light on the way to doing it - preferably without having to check for lots of cases or trial and error.</p>
Upstart
312,594
<p>Let <span class="math-container">$b\in im(T-I)$</span>; then there exists <span class="math-container">$x\in R^n$</span> such that <span class="math-container">$(T-I)x=b$</span>. Now consider <span class="math-container">$Tb= T(T-I)x=(T^2-T)x= 0$</span>, using <span class="math-container">$T^2=T$</span>. Hence <span class="math-container">$Tb=0$</span>, so <span class="math-container">$b \in nul(T)$</span>. Since <span class="math-container">$b$</span> was arbitrary, <span class="math-container">$im(T-I) \subseteq nul(T)$</span>. Can you proceed with the converse?</p>
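The argument assumes an idempotent operator ($T^2=T$). A tiny plain-Python sketch (my addition, purely illustrative; the $2\times2$ matrix is a hand-picked example) checks $im(T-I)\subseteq nul(T)$ numerically:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, x):
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

# An idempotent T: the projection (u, v) -> (u + v, 0)
T = [[1, 1], [0, 0]]
I = [[1, 0], [0, 1]]
TmI = [[T[i][j] - I[i][j] for j in range(2)] for i in range(2)]

assert matmul(T, T) == T                  # T is idempotent: T^2 = T
for x in [(1, 0), (0, 1), (3, -2)]:
    b = matvec(TmI, x)                    # b lies in im(T - I)
    assert matvec(T, b) == [0, 0]         # so T b = 0, i.e. b is in nul(T)
```

Replacing `T` by any other idempotent matrix leaves the assertions true, which is exactly the containment the answer proves.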
2,889,482
<blockquote> <p><strong>1.28</strong> In Section 1.6, we introduced the idea of entropy $h(x)$ as the information gained on observing the value of a random variable $x$ having distribution $p(x)$. We saw that, for independent variables $x$ and $y$ for which $p(x,y) = p(x) p(y)$, the entropy functions are additive, so that $h(x,y) = h(x) + h(y)$. In this exercise, we derive the relation between $h$ and $p$ in the form of a function $h(p)$. First show that $h(p^2) = 2 h(p)$, and hence by induction that $h(p^n) = n h(p)$ where $n$ is a positive integer. Hence show that $h(p^{n/m}) = (n/m) h(p)$ where $m$ is also a positive integer. This implies that $h(p^x) = x h(p)$ where $x$ is a positive rational number, and hence by continuity when it is a positive real number. Finally, show that this implies $h(p)$ must take the form $h(p) \propto \ln p$.</p> <p>(Original image <a href="https://i.stack.imgur.com/Hj5U2.png" rel="nofollow noreferrer">here</a>.)</p> </blockquote> <p>I am having a few problems with this exercise, I will try to list them and perhaps you can help on some of them:</p> <ol> <li>Essentially, I don't understand what kind of function is $h$, because the first two lines of the problem definition make me think that its input is a random variable $x$ and its ouput a real number $h(x)$. But next, the problem uses the notation $h(x,y)$, implying that now $h$ is a function of 2 random variables $x$ and $y$, yielding a real number $h(x,y)$ as output. So what would be the correct or formal way to define $h$?</li> <li>In connection with the first item, now in the 5th line it reads $h(p)$... so if switching between 1 and 2 random variables as inputs wasn't enough, now the input is a probability density? $h$ must be a very robust function for sure... 
ok I guess that so far the question is the same as in item 1, i.e., what is $h$?</li> <li>The first step of the problem definition is to show $h(p^2) = 2 \, h(p)$ - ignoring the issues I described previously - I can't see how this is achieved by using those two hypotheses about independent variables. I have a feeling that the idea is to do something like (warning: the following is nonsense): $h(p^2) = h(p,p) = h(p) + h(p) = 2\, h(p)$ the first equality being wrong because I magically split the $p^2$ into two input variables, the second one being also wrong because I apply a hypothesis given for <em>independent random variables</em> on <em>probability densities of dependent random variables</em> and the last one is ok (!!!). So, what would be a nicer way to fulfill this first step?</li> <li>Finally, is the logarithm family really the only solution that matches these requirements? I wonder, for example, what would happen if the argument function was used (like arg$(z)$ in complex analysis) since it satisfies the same properties as the logarithm regarding the product and power of its inputs.</li> </ol> <p>Help in any of these items will be appreciated!</p> <p>In case you are interested, there are many online sources of the book, e.g. <a href="http://users.isr.ist.utl.pt/~wurmd/Livros/school/Bishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf" rel="nofollow noreferrer">here</a>, the topic is covered in page 48 of the book (68 of the pdf) and the problem definition - picture uploaded - is in page 64 of the book (84 of the pdf).</p> <p>EDIT: I've found <a href="https://en.wikipedia.org/wiki/Self-information" rel="nofollow noreferrer">this</a> article on Wikipedia, in which my first two items are answered, via clearer notation for the functions and variables involved in the problem. Still looking for 3 and 4...</p>
Javi
554,663
<p>After some hours of research I've found a few sites which altogether answer these questions.</p> <p>Regarding items 1 and 2, it looks like there is indeed a severe abuse of notation every time the author refers to function $h$. This function seems to be the so-called <em>self-information</em> and it is usually defined over probability events or random variables as well. I find this <a href="https://en.wikipedia.org/wiki/Self-information#Derivation" rel="nofollow noreferrer">article</a> very clarifying in this respect.</p> <p>Regarding item 4, from what I have seen, it seems that under certain conditions that the self-information function must satisfy, the logarithm is the only possible choice. The selected answer in this <a href="https://math.stackexchange.com/questions/2318606/is-log-the-only-choice-for-measuring-information">post</a> was particularly useful, and also the comments on the question. This topic is also discussed <a href="https://www.reddit.com/r/askscience/comments/5syaj8/why_is_the_logarithm_formulation_in_self/" rel="nofollow noreferrer">here</a>, but I prefer the previous link. </p> <p>Finally, I have not found an answer for item 3. Actually, I really think that this step is wrongly formulated due to the imprecision in the definition of function $h$. Nevertheless, the links I have provided as an answer to item 4 lead to the desired result.</p>
388,815
<p>I am trying to compute the integral: $$\int_{4}^{5} \frac{dx}{\sqrt{x^{2}-16}}$$ The question is related to hyperbolic functions, so I let $x = 4\cosh(u)$ therefore the integral becomes: $$-\int_{0}^{\ln(2)}\frac{4\sinh(u)}{\sqrt{16-16\cosh^{2}(u)}}du = -\int_{0}^{\ln(2)}1du = -\ln(2)$$</p> <p>The answer is $\ln(2)$ so if someone could point out where I went wrong that would be great, thanks </p>
iostream007
76,954
<p>Use the formula $$\int \frac{dx}{\sqrt{x^{2}-a^2}}=\log|x+\sqrt{x^2-a^2}|+c.$$ Then $$\int_{4}^{5} \frac{dx}{\sqrt{x^{2}-16}}=\int_{4}^{5} \frac{dx}{\sqrt{x^{2}-4^2}}=\Bigl(\log|x+\sqrt{x^2-16}|\Bigr)_4 ^5=\log|8|-\log|4|=\log2$$</p>
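Both the antiderivative and the final value are easy to sanity-check numerically; a quick sketch of mine (not part of the answer):

```python
import math

# antiderivative of 1/sqrt(x^2 - 16), valid for x >= 4
F = lambda x: math.log(x + math.sqrt(x * x - 16))

# the stated value of the definite integral: F(5) - F(4) = log 8 - log 4 = log 2
assert math.isclose(F(5) - F(4), math.log(2))

# spot-check F' = 1/sqrt(x^2 - 16) by a central difference (away from x = 4)
for x in (4.5, 4.8, 5.0):
    h = 1e-6
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert math.isclose(deriv, 1 / math.sqrt(x * x - 16), rel_tol=1e-6)
```

The derivative check is kept away from $x=4$, where the integrand has an (integrable) singularity.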
4,064,618
<p><span class="math-container">$10$</span> individuals including <span class="math-container">$x$</span> and <span class="math-container">$y$</span> sit at random around a round table. All possible orders are equally likely. Let <span class="math-container">$W$</span> be the event that there are exactly two people seated between <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. What is the probability of this event?</p>
miracle173
11,206
<p>Hint: A valid placement is</p> <pre><code>... i-1 i i+1 i+2 i+3 i+4 ... ... a X b c Y d ... </code></pre> <p>another is</p> <pre><code>... i-1 i i+1 i+2 i+3 i+4 ... ... a Y b c X d ... </code></pre> <p>where <span class="math-container">$10$</span> must be subtracted from the seat number <span class="math-container">$i+k$</span> if it is larger than <span class="math-container">$10$</span>.</p> <p>The first placement is related to the placement</p> <pre><code>1 2 3 4 5 ... 10 X b c Y d ... a </code></pre> <p>Annotation:</p> <p>I thought you wanted to calculate the number of placements where 2 people are seated between <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. Once you have this number, divide it by <span class="math-container">$10!$</span> to get the probability you really want. But of course this probability can be computed more simply, as <a href="https://math.stackexchange.com/a/4064651/11206">this post</a> shows.</p>
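The seat-pattern count in the hint can be verified by enumerating the positions of $X$ and $Y$ alone (a quick check of mine, reading "exactly two people between" as an offset of 3 in at least one direction around the table):

```python
from fractions import Fraction

n, g = 10, 3
# ordered seat pairs (i, j) for X and Y with two people between them
hits = sum(1 for i in range(n) for j in range(n)
           if i != j and (j - i) % n in (g, n - g))
total = n * (n - 1)                 # all equally likely ordered seat pairs
assert Fraction(hits, total) == Fraction(2, 9)
```

Each of the 10 seats for $X$ leaves 2 valid seats for $Y$, so 20 of the 90 ordered pairs qualify, giving $2/9$; multiplying numerator and denominator by $8!$ recovers the count-over-$10!$ computation suggested in the annotation.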
609,744
<p>I am not sure about the last step of my proof:</p> <p>$(L^{p}(X,A,\mu), \|\cdot \|)$ is a normed $L^{p}$ space of p-integrable functions. $L^{p}(a,b)$ is the space of p-integrable functions on (a,b). $C[a,b]$ is the space of continuous functions on [a,b].</p> <p>The main goal in proving the claim is to show that the integral of $|f|^{p}$ is finite for any $f\in C[a,b]$. Let us take a function $f\in C[a,b]$. Now we see that the interval $[a,b]$ is compact (assuming both $a$ and $b$ are finite real numbers). Therefore, $f$ has a maximum. Similarly, the function $|f|^{p}$ is also continuous on $[a,b]$ and hence has a maximum on it. Now the integral of the function $|f|^{p}$ is dominated by $\max_{[a,b]}|f|^{p}$ times $(b-a)$ (the Lebesgue measure of the interval). And therefore it (the integral which I just mentioned) is finite. And this inequality shows that $C[a,b]$ is a subspace of $L_{p}$ for any $1\le p\le \infty$. And it is linear because $\forall f,g \in C[a,b]$ and $\lambda,\mu \in \mathbb{R}$, $\lambda f+\mu g \in C[a,b]$. It remains to be checked whether the linear subspace is indeed closed.</p> <p>My friend told me about this idea, but that still didn't help me: "Since all the step functions are dense in $L_p(a,b)$ and we know that any step function can be approximated by continuous functions on [a,b]."</p> <p>Any help would be greatly appreciated!</p> <p>Thanks</p>
David Holden
79,543
<p>set $y=kx$ for $k \gt 1$, so you wish to show that if $x \gt e$ then $$ x^{kx} \gt (kx)^x $$ i.e. $x \gt e$ implies $$x^{k-1} \gt k $$ so it is sufficient to show that for $k \gt 1$ we have $$ k \ge 1 + \ln k $$ so can you find a convexity proof for this?</p>
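Both inequalities in the answer above are easy to spot-check numerically (my own addition; comparing logarithms avoids overflow for large powers):

```python
import math

# k >= 1 + ln(k) for k > 1, with equality only at k = 1
for k in (1.001, 1.5, 2.0, 10.0, 1000.0):
    assert k > 1 + math.log(k)

# consequently x^y > y^x whenever e < x < y  (compare y*ln x with x*ln y)
for x in (2.72, 3.0, 5.0, 10.0):
    for k in (1.1, 2.0, 7.0):
        y = k * x
        assert y * math.log(x) > x * math.log(y)
```

The second loop is exactly the claim $x^{kx} > (kx)^x$ for $x > e$, $k > 1$, rewritten in logarithmic form.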
1,770,508
<p>I am confronted with the following definition:</p> <blockquote> <p>Let <span class="math-container">$K$</span> be a field and <span class="math-container">$e_1,e_2,\ldots,e_n$</span> the standard basis of the <span class="math-container">$K$</span> vector space <span class="math-container">$K^n$</span>.</p> <p>For <span class="math-container">$1\leq i\leq n$</span> let <span class="math-container">$V_i=Ke_1+Ke_2+\dots+Ke_n$</span>.</p> </blockquote> <p>For given <span class="math-container">$0&lt;m&lt;n$</span> let <span class="math-container">$$P=\{~g\in GL_n(k)~|~g(V_m)=V_m~\}$$</span></p> <hr /> <p>The part of the definition I don't understand is highlighted, the rest might use as clarification.</p> <p>So what does <span class="math-container">$Ke_1$</span> mean? (Side question: how does picking a different <span class="math-container">$i$</span> make a difference?)</p>
Felix Marin
85,343
<p>$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Leftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\, #2 \,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ With $\ds{\mathcal{V} = \braces{\pars{x,y,z}\ \left.\vphantom{A^A}\right\vert\ \root{{x^{2} \over 4} + y^{2}}\ &lt;\ z\ &lt;\ {a - x \over 4}}}$:</p> <p>\begin{align} \color{#f00}{\iiint_{\mathcal{V}}\dd x\,\dd y\,\dd z} &amp; = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \Theta\pars{z - \root{{x^{2} \over 4} + y^{2}}}\Theta\pars{{a - x \over 4} - z} \,\dd x\,\dd y\,\dd z \end{align}</p> <p>Where $\Theta$ is the <a href="https://en.wikipedia.org/wiki/Heaviside_step_function" rel="nofollow">Heaviside Step function</a>. 
Then,</p> <p>\begin{align} \color{#f00}{\iiint_{\mathcal{V}}\dd x\,\dd y\,\dd z} &amp; = 2\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \Theta\pars{z - \root{x^{2} + y^{2}}}\Theta\pars{{a - 2x \over 4} - z} \,\dd x\,\dd y\,\dd z \\[3mm] &amp; = 2\int_{-\infty}^{\infty}\int_{0}^{2\pi}\int_{0}^{\infty} \Theta\pars{z - \rho}\Theta\pars{{a - 2\rho\cos\pars{\phi} \over 4} - z} \rho\,\dd\rho\,\dd\phi\,\dd z \\[3mm] &amp; = 2\int_{0}^{2\pi}\int_{0}^{\infty} \bracks{{a - 2\rho\cos\pars{\phi} \over 4} - \rho} \Theta\pars{{a - 2\rho\cos\pars{\phi} \over 4} - \rho}\rho\,\dd\rho\,\dd\phi \\[3mm] &amp; = \color{#88f}{2\int_{0}^{2\pi}\int_{0}^{\infty} \braces{{1 \over 4}\,a\rho - \half\bracks{2 + \cos\pars{\phi}}\rho^{2}} \Theta\pars{{a/2 \over 2 + \cos\pars{\phi}} - \rho}\,\dd\rho\,\dd\phi} \\[3mm] &amp; = 2\int_{0}^{2\pi} \braces{{1 \over 8}\,a\,\bracks{a/2 \over 2 + \cos\pars{\phi}}^{2} - \half\bracks{2 + \cos\pars{\phi}} {1 \over 3}\bracks{{a/2 \over 2 + \cos\pars{\phi}}}^{3}}\,\dd\phi \\[3mm] &amp; = {1 \over 48}\,a^{3}\ \overbrace{\int_{0}^{2\pi}{\dd\phi \over \bracks{2 + \cos\pars{\phi}}^{\,2}}} ^{\ds{{4\root{3} \over 9}\,\pi}}\ =\ \color{#f00}{{\root{3} \over 108}\,\pi a^{3}} \end{align}</p> <p>Note that the $\color{#88f}{\mbox{"blue integration"}}$ requires $a &gt; 0$ as we can see from the $\Theta$ argument. Otherwise it vanishes out.</p>
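The only nontrivial ingredient left, $\int_{0}^{2\pi}\frac{\dd\phi}{[2+\cos\phi]^{2}}=\frac{4\sqrt3}{9}\pi$, is easy to confirm numerically; a quick check of mine (the trapezoid rule converges extremely fast on smooth periodic integrands):

```python
import math

N = 4096
# trapezoid rule over one full period; equal to the left Riemann sum by periodicity
s = sum(1 / (2 + math.cos(2 * math.pi * j / N)) ** 2 for j in range(N))
integral = 2 * math.pi * s / N
assert math.isclose(integral, 4 * math.sqrt(3) * math.pi / 9, rel_tol=1e-9)

# and the final volume prefactor: (1/48) * integral = sqrt(3) * pi / 108
assert math.isclose(integral / 48, math.sqrt(3) * math.pi / 108, rel_tol=1e-9)
```

This matches the closed form $\int_0^{2\pi}\frac{d\phi}{(a+b\cos\phi)^2}=\frac{2\pi a}{(a^2-b^2)^{3/2}}$ with $a=2$, $b=1$.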
1,770,508
<p>I am confronted with the following definition:</p> <blockquote> <p>Let <span class="math-container">$K$</span> be a field and <span class="math-container">$e_1,e_2,\ldots,e_n$</span> the standard basis of the <span class="math-container">$K$</span> vector space <span class="math-container">$K^n$</span>.</p> <p>For <span class="math-container">$1\leq i\leq n$</span> let <span class="math-container">$V_i=Ke_1+Ke_2+\dots+Ke_n$</span>.</p> </blockquote> <p>For given <span class="math-container">$0&lt;m&lt;n$</span> let <span class="math-container">$$P=\{~g\in GL_n(k)~|~g(V_m)=V_m~\}$$</span></p> <hr /> <p>The part of the definition I don't understand is highlighted, the rest might use as clarification.</p> <p>So what does <span class="math-container">$Ke_1$</span> mean? (Side question: how does picking a different <span class="math-container">$i$</span> make a difference?)</p>
Gathdi
301,858
<p>Cylindrical coordinates are polar coordinates extended to 3D. First find the projection onto the $xy$-plane, which is done by eliminating $z$ from the two bounding surfaces. Doing so you get the ellipse $$\left(x+\frac{a}{3}\right)^2 + \left(\frac{4y}{\sqrt 3}\right)^2 = \left(\frac{2a}{3}\right)^2$$</p> <p>Also your $z$ limits will be $ \sqrt{\frac{r^2\cos^2\theta}{4} + r^2\sin^2\theta}$ and $\frac{a-r\cos\theta}{4}$</p> <p>To solve the double integral, parametrize the ellipse by putting $x=-\frac{a}{3} + \frac{2a}{3}\cos\theta$</p> <p>$y= \frac{\sqrt3}{4}\cdot\frac{2a}{3}\sin\theta$</p>
2,590,537
<p>Let $A$ be a proper subset of of $X$, $B$ be a proper subset of $Y$. If $X$ and $Y$ are connected, show that $(X\times Y)-(A\times B)$ is connected.</p> <p>I have already seen the solution of this problem here on math.stackexchange, but my confusion is this. Suppose $X$=(a,b) &amp; $Y$=(c,d) and let $A$={m} &amp; $B$={n} be singleton sets such that $A\subset X$ &amp; $B\subset Y$ then $(X\times Y)-(A\times B)$= $[(a,m)\cup(m,b)]\times[(c,n)\cup(n,d)]$ = $[(a,m)\times(c,n)]\cup[(m,b)\times(n,d)]$ which implies that $(X\times Y)-(A\times B)$ is not connected. </p> <p>Can someone explain me what I have done wrong &amp; what is this question asking. </p>
Gödel
467,121
<p>You can see an $n$-dimensional vector with binary inputs as the characteristic function of a subset of a set of size $n$, for example, $\{0,1,...,n-1\}$.</p> <p>That is, you know that $\{0,1,3\}\subseteq\{0,1,...,n-1\}$ if $n\geq 5$, so the vector $(1,1,0,1,0,...,0)$ represents the characteristic function of $\{0,1,3\}$. This means that the input $x_i=1$, $(0\leq i\leq n-1)$, iff $i\in\{0,1,3\}$, and $x_i=0$ otherwise.</p> <p>Now, you can count the number of subsets of size $k\leq n$ in $\{0,1,...,n-1\}$ with the binomial coefficient $\binom{n}{k}$, and in this context you want to count the number of subsets of $\{0,1,...,n-1\}$. So, you need to calculate $\sum_{k=0}^n\binom{n}{k}$. But this is easy because by the binomial theorem you have $$2^n=(1+1)^n=\sum_{k=0}^n\binom{n}{k}1^k1^{n-k}=\sum_{k=0}^n\binom{n}{k}$$</p>
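The identity is trivial to verify for small $n$ (a sanity check of mine, not part of the answer):

```python
from math import comb

# sum_{k=0}^{n} C(n, k) == 2^n for the first few n
for n in range(12):
    assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
```

Equivalently: there are as many subsets of an $n$-set as binary vectors of length $n$.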
1,852,664
<p>I'm working through Mumford's Red Book, and after introducing the definition of a sheaf, he says "Sheaves are almost standard nowadays, and we will not develop their properties in detail." So I guess I need another source to read about sheafs from. Does anybody know of any expository papers that cover them? I'd prefer to not have to dig deep into a separate textbook if possible.</p>
carmichael561
314,708
<p>One relatively old but classic reference is Serre's paper "Faisceaux Algébriques Cohérents", often referred to simply as FAC. English translations can be found fairly easily online (see for instance <a href="https://mathoverflow.net/questions/14404/serres-fac-in-english">here</a>).</p> <p>Another reference is the book <em>The Geometry of Schemes</em> by Eisenbud and Harris. They cover sheaves before moving on to schemes.</p>
1,852,664
<p>I'm working through Mumford's Red Book, and after introducing the definition of a sheaf, he says "Sheaves are almost standard nowadays, and we will not develop their properties in detail." So I guess I need another source to read about sheafs from. Does anybody know of any expository papers that cover them? I'd prefer to not have to dig deep into a separate textbook if possible.</p>
Armando j18eos
224,305
<p>I suggest Mumford and Oda - <em>Algebraic Geometry II</em> <a href="http://www.dam.brown.edu/people/mumford/alg_geom/papers/AGII.pdf" rel="nofollow">(a penultimate draft)</a>, specifically the appendix to the first chapter; there one finds Mumford's style again.</p>
400,296
<p>There are n persons.</p> <p>Each person draws k interior-disjoint squares.</p> <p>I want to give each person a single square out of his chosen k, so that the n squares I give are interior-disjoint.</p> <p>What is the minimum k (as a function of n) for which I can do this?</p> <p>NOTES:</p> <ul> <li>For n=1, obviously k=1.</li> <li>For n=2, obviously k must be more than 2, since with 2 squares per person, it is easy to think of situations where both squares of person 1 intersect both squares of person 2. It seems that k=3 is enough, but I couldn't prove this formally.</li> <li>If we don't limit ourselves to squares, but allow general rectangles, then even for n=2, no k will be large enough, as it is possible that every rectangle of player 1 intersects every rectangle of player 2. So, the square limitation is important.</li> </ul> <p>EDIT: The problem has two versions: in one version, the squares are all axis-aligned. In the second version, the squares may be rotated. Solutions to any of these versions are welcome. </p> <p>EDIT: Here is a possibly useful claim, relevant for the axis-aligned version:</p> <p><strong>Claim 1</strong>: If two axis-aligned squares, A and B, intersect, then one of the following 3 options hold:</p> <ul> <li>At least 2 corners of A are covered by B, and B is as large or larger than A;</li> <li>One corner of A is covered by B, and one corner of B is covered by A,</li> <li>At least 2 corners of B are covered by A, and A is as large or larger than B.</li> </ul> <p>Thus, if A intersects B, then, out of the 8 corners of A and B, at most 6 corners remain uncovered.</p>
Erel Segal-Halevi
29,780
<p>I found an upper bound for the axis-aligned version. It is based on the following claim:</p> <p><strong>Claim 2</strong>: a square A can intersect at most 4 interior-disjoint squares that are as large or larger than A.</p> <ul> <li>Proof: If A intersects B and B is as large or larger than A, then at least one corner of A is covered by B (See Claim 1 at the end of the question). Since A has 4 corners, there can be at most 4 such squares that are interior-disjoint.</li> </ul> <p>Now we can give each person a square of his choice, using the following algorithm:</p> <ol> <li>Find the smallest square out of the $nk$ available squares. If there is more than one, choose one arbitrarily.</li> <li>Give the smallest square to its owner, send that owner home, and remove all squares of that owner.</li> <li>Remove all squares that are intersected by the smallest square. By Claim 2, there are at most 4 such squares per person. Thus, each of the n-1 remaining persons has at least k-4 squares. Return to step 1.</li> </ol> <p>If $k = 4n-3$, then the algorithm will eventually terminate with 1 person and 1 square.</p> <p>This is only half a solution. Can we do better than $4n-3$ in the axis-aligned version?</p>
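The greedy algorithm is straightforward to implement for axis-aligned squares. A sketch of mine (names and the toy instance are my own, not from the answer): it hands out the smallest remaining square, retires its owner, and discards every remaining square the chosen one touches, mirroring steps 1-3 above. It assumes, as the $k=4n-3$ analysis guarantees, that no person ever runs out of squares.

```python
def disjoint(s, t):
    # squares as (x, y, side); True iff the interiors do not meet
    (x1, y1, a), (x2, y2, b) = s, t
    return (x1 + a <= x2 or x2 + b <= x1 or
            y1 + a <= y2 or y2 + b <= y1)

def assign(squares_by_person):
    alive = {p: list(sqs) for p, sqs in squares_by_person.items()}
    chosen = {}
    while alive:
        # step 1: smallest remaining square (ties broken arbitrarily)
        p, s = min(((p, s) for p, sqs in alive.items() for s in sqs),
                   key=lambda ps: ps[1][2])
        chosen[p] = s            # step 2: give it to its owner, retire the owner
        del alive[p]
        for q in alive:          # step 3: drop every square intersected by s
            alive[q] = [t for t in alive[q] if disjoint(s, t)]
    return chosen

# toy instance: two persons, five unit squares each, heavily overlapping
A = [(2 * i, 0.0, 1.0) for i in range(5)]
B = [(2 * i + 0.5, 0.5, 1.0) for i in range(5)]
result = assign({'A': A, 'B': B})
assert set(result) == {'A', 'B'}
assert disjoint(result['A'], result['B'])
```

On the toy instance each of B's squares overlaps two of A's, yet the greedy pass still returns one interior-disjoint square per person.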
38,367
<p>Notation: Let $M$ be a smooth, closed manifold, $S$ any submanifold of $M$, $Diff(M)$ the group of diffeomorphisms of $M$ and $Imb(S, M)$ the space of smooth imbeddings of $S$ into $M$.</p> <p>A classical result of R. Palais from the 1960 paper <em>Local triviality of the restriction map for embeddings</em> says that the map $Diff(M)$ $\rightarrow$ $Imb(S, M)$ given by restriction is a fibration.</p> <p>I feel like I've heard during numerous teas that there are various refinements and generalizations of this due to J. Cerf and (possibly) others.</p> <p>(1) Can anyone summarize what else is known in this direction beyond the theorem of Palais? </p> <p>(2) Is there a way to see Palais' result easily? [added: from the responses it sounds like the original paper is still a great way to see this result. But see the answers of Randal-Williams and Palais for an alternate route.]</p>
Oscar Randal-Williams
318
<p>This is closely related to (and can be proved by the same methods as) the fact that if we fix another manifold $N$ then the restriction map $$\mathrm{Emb}(M, N) \to \mathrm{Emb}(S, N)$$ is a locally trivial fibration, which was proved in the same paper by Palais. However, I wanted to advertise a geodesic proof of this fact:</p> <p>E. L. Lima, <em>On the local triviality of the restriction map for embeddings</em>, Commentarii Mathematici Helvetici Volume 38, Number 1, pp 163-164.</p>
307,287
<p>According to <a href="https://en.wikipedia.org/wiki/Brouwer_fixed-point_theorem" rel="nofollow noreferrer">Brouwer's fixed point theorem</a>, for compact convex $K\subset\mathbb{R}^n$, every continuous map $K\rightarrow K$ has a fixed point.</p> <p>However, these fixed points cannot be chosen continuously, even for $K=[0,1]$, in the sense that there is no continuous map $fix:[0,1]^{[0,1]}\rightarrow[0,1]$ such that $\forall f:[0,1]\rightarrow[0,1]\;f(fix(f))=fix(f)$. To see this, consider a family of functions $f_x:[0,1]\rightarrow[0,1]$ ($x\in[0,1]$) such that $f_0(y)=\frac{1}{3}$; $f_{\frac{1}{2}}(y)=\frac{1}{3}$ for $y\leq\frac{1}{3}$, $f_{\frac{1}{2}}(y)=\frac{2}{3}$ for $y\geq\frac{2}{3}$, and linearly interpolates between those for $\frac{1}{3}&lt;y&lt;\frac{2}{3}$; $f_1(y)=\frac{2}{3}$; and $f_x$ linearly interpolates between $f_0$ and $f_\frac{1}{2}$ for $0&lt;x&lt;\frac{1}{2}$ and between $f_\frac{1}{2}$ and $f_1$ for $\frac{1}{2}&lt;x&lt;1$. In order for $fix$ to continuously select a fixed point, we would need $fix(f_x)=\frac{1}{3}$ for $x\leq\frac{1}{2}$ and $fix(f_x)=\frac{2}{3}$ for $x\geq\frac{1}{2}$, a contradiction.</p> <p>But one could imagine gradually shifting probability from the lower fixed point of $f_x$ to the upper fixed point as $x$ increases. [Edit: actually, one couldn't do that; as Noam Elkies points out in the comments, this example answers my own question.]</p> <p>Hence my question: For a compact convex $K\subset\mathbb{R}^n$, is there a continuous map $fix:K^K\rightarrow\Delta(K)$ such that $\forall f:K\rightarrow K$, $fix(f)$ is supported on fixed points of $f$? Here $K^K$ is given the <a href="https://en.wikipedia.org/wiki/Compact-open_topology" rel="nofollow noreferrer">compact-open topology</a> and $\Delta(K)$ is the space of probability distributions over $K$, equipped with the <a href="https://en.wikipedia.org/wiki/Convergence_of_measures#Weak_convergence_of_measures" rel="nofollow noreferrer">weak topology</a>.</p>
Arno
15,002
<p>The problem here is not restricted to obtaining continuous choice functions; it already fails at the level of continuous multivalued functions.</p> <p><strong>Theorem</strong> The multivalued function $\mathrm{BFT}_k : \mathcal{C}([0,1]^k,[0,1]^k) \rightrightarrows [0,1]^k$ mapping a continuous function to some arbitrary fixed point is not continuous. (See <a href="https://arxiv.org/abs/1206.4809" rel="nofollow noreferrer">https://arxiv.org/abs/1206.4809</a>)</p> <p><strong>Theorem</strong> The multivalued function $\mathrm{SuppPoint} : \mathcal{PM}(\mathbf{X}) \rightrightarrows \mathbf{X}$ mapping a probability measure to an arbitrary point in its support is continuous (in fact, computable) for every "sufficiently complete" space $\mathbf{X}$, in particular it works for $\mathbf{X} = [0,1]^k$. (See <a href="http://logicandanalysis.org/index.php/jla/article/view/241/0" rel="nofollow noreferrer">http://logicandanalysis.org/index.php/jla/article/view/241/0</a>)</p> <p>By the latter, we see that getting probability distributions supported on the fixed points cannot be easier than getting a fixed point. The former tells us that this doesn't work.</p>
286,852
<p>I'm currently trying to understand continuous-time Fourier transforms for my Signal Analysis class (Electrical Engineering major). In my textbook we have a set of "useful FT-pairs", one of which being $x(t) \leftrightarrow X(\omega)$:</p> <p>$\sin(\omega_0 t)\leftrightarrow i\pi[\delta(\omega+\omega_0)-\delta(\omega-\omega_0)]$</p> <p>I'm trying to derive this on my own, but I keep running into the same dead end. I replace the sine with $\frac 1{2i}(e^{i\omega_0 t}-e^{-i\omega_0 t})$ and drop that into $\int f(t)\, e^{-i\omega t}\,dt$. Multiplying the functions together by adding the exponents and then finding the anti-derivative isn't hard, but I don't understand how you get from that to the $\delta(\omega)$. I played around on Wolfram Alpha and found out that apparently $\int_{-\infty}^{\infty} e^{it(\omega + \omega_0)}\,dt$ doesn't converge except for $\omega = -\omega_0$, where it diverges to infinity. This seems close to the behavior of the delta function, but I don't think it's enough for me to replace them?</p> <p>Could <em>really</em> use some help understanding this.</p>
not all wrong
37,268
<p>Here are two ways to look at it.</p> <ul> <li>Assume that whatever mathematical trickery is used to define it (distribution theory) the result should be consistent with the inverse Fourier transform formula, and $$\int_{-\infty}^{\infty} dk\, \delta(k-k_0) \exp(ikx)\frac{1}{2\pi} = \exp(ik_0x)\frac{1}{2\pi}$$</li> <li>Think about this happening in a finite range. $$\int_{-L}^{L}dx \exp(-ikx)\exp(ik_0x) = \frac{2\sin\bigl((k_0-k)L\bigr)}{k_0-k}$$ is sharply peaked at $k=k_0$ with a peak of size $2L$, the length of the interval. For large $L$ this acts just like a $\delta$ function. (Integrate it against another continuous function and take $L\to\infty$ to prove this and check out the normalization.)</li> </ul>
1,926,593
<p>I am currently taking a course in Discrete Math. The first part of our lesson this week is regarding sequences. I am stuck on formulas like the ones shown in the images I attached... I was hoping someone might be able to help me learn how to solve them. :)</p> <p>Ps: What does it mean when <span class="math-container">$n-1$</span> is written below the function? Is it the inverse of <span class="math-container">$a^{n-1}$</span>?</p> <p>I'm sorry if this is a dumb question haha. I've been studying every day for the past 3 weeks and my brain is officially exhausted. Thank-you so much for your time!</p> <blockquote> <p>Given <span class="math-container">$a_n = 3 a_{n-1} + 1$</span> and <span class="math-container">$a_0 = 2$</span>, compute <span class="math-container">$a_2$</span>.</p> <p>Given the recurrence relation <span class="math-container">$a_n = -2a_{n-1}$</span> where <span class="math-container">$a_0 = 1$</span>, find <span class="math-container">$a_5$</span>.</p> </blockquote>
ClownInTheMoon
367,034
<p>Here is how you prove this via Induction. Let me know if you have questions on a particular step. Under the inductive step it's basically:</p> <p>1) Perform induction on $n$. You could substitute $k$ for $n$ if you'd like but I did not.</p> <p>2) Apply Pascal's Identity to split the sum into two sums.</p> <p>3) Factor out $n+1$ from your sums since $i=n+1$ yields ${(n+1)-1 \choose m}$ on the first sum and ${(n+1)-1 \choose m-1}$ on the second sum, we can write the factors we pull out as ${n \choose m}$ and ${n \choose m-1}$.</p> <p>4) Regroup</p> <p>5) Apply Pascal's Identity to your two factored out terms and your two sums.</p> <p>6) Apply your inductive hypothesis to $\sum_{i=0}^{n}{i\choose m}$.</p> <p>7) Finally, apply Pascal's Identity one last time.</p> <p>Here is the work:</p> <p><strong>Proposition</strong>. $\sum_{i=0}^n{i\choose m}={n+1\choose m+1}$ where $0\le m\le n$.</p> <p>$Proof.$ We use mathematical induction.</p> <p><strong>Base Case.</strong> Let $n=0$. Then $\sum_{i=0}^{n=0}{i\choose m}= {0\choose 0}=1$ and ${n+1\choose m+1}={0+1\choose 0+1}={1 \choose1} = 1$. Thus for $n=0$ our proposition holds.</p> <p><strong>Inductive Step.</strong> Let $n\ge0$. Assume $\sum_{i=0}^n{i\choose m}={n+1\choose m+1}$. Then</p> <p>$$\begin{align*}\sum_{i=0}^{n+1}{i\choose m}&amp;=\sum_{i=0}^{n+1}{i-1\choose m}+\sum_{i=0}^{n+1}{i-1\choose m-1}\\ &amp;={n\choose m}+\sum_{i=0}^{n}{i-1\choose m}+{n\choose m-1}+\sum_{i=0}^{n}{i-1\choose m-1}\\ &amp;={n\choose m}+{n\choose m-1}+\sum_{i=0}^{n}{i-1\choose m}+\sum_{i=0}^{n}{i-1\choose m-1}\\ &amp;={n+1\choose m}+\sum_{i=0}^{n}{i\choose m}\\ &amp;={n+1\choose m}+{n+1\choose m+1}\\ &amp;={n+2\choose m+1}\\ &amp;={(n+1)+1\choose m+1} \end{align*} $$</p> <p>It follows by mathematical induction that $\sum_{i=0}^n{i\choose m}={n+1\choose m+1}$ where $0\le m \le n$. $\Box$</p>
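<p>As a numerical sanity check (not a substitute for the induction), the identity can be verified for small parameters; the helper name below is mine:</p>

```python
from math import comb

def hockey_stick(n, m):
    # Left-hand side of the identity: sum of C(i, m) for i = 0..n.
    # math.comb(i, m) is 0 when i < m, matching the usual convention.
    return sum(comb(i, m) for i in range(n + 1))

assert all(
    hockey_stick(n, m) == comb(n + 1, m + 1)
    for n in range(12)
    for m in range(n + 1)
)
```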
374,694
<p>I believe I'm looking for a function: $f(x) : \mathbb N \mapsto \mathbb N^2$ and its inverse $f^{-1}(x) : \mathbb N^2 \mapsto \mathbb N$, a known mapping that can take any positive integer and map to a unique 2D integer point (and the inverse mapping as well). </p> <p>Like the title says, the conceptual idea is to take an infinitely long piece of rope, and starting at the origin (0,0), map the rope to the 2D plane in such a way as to fill the space efficiently, and allow for a simple algorithm to go from (d) -> (x,y) and from (x,y) -> (d).</p> <p>The mapping kind of resembles a Hilbert space-filling curve; however, here the space being filled is infinite.</p> <p>I apologize in advance, it's been years since I've done high level mathematics (I'm CS now) and the correct jargon escapes me.</p>
long tom
47,794
<p>Some possibilities as described <a href="http://en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Grand_Hotel#Infinitely_many_coaches_with_infinitely_many_guests_each" rel="nofollow">here</a>:</p> <ul> <li>Prime method: $(x,y)\mapsto p_x^y$ where $p_x$ is the $x$th prime number</li> <li>merge method: consider $(x,y)$. Write both numbers with the same number of digits by adding leading zeros. Now mix $x$ and $y$ into a new number: take the first digit of $x$, followed by the first digit of $y$, and so on... </li> <li>triangulation method: $(c,n)\mapsto\frac{(c+n)^2+c-n}{2}$ for positive integers $c,n$ (the linked article's names for $x$ and $y$)</li> </ul>
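<p>For concreteness, here is the closely related Cantor pairing (a variant of the triangulation method above, defined on $\mathbb N_0$) together with its inverse, sketched in Python with names of my own choosing:</p>

```python
from math import isqrt

def pair(x, y):
    # Cantor pairing: enumerate N^2 along anti-diagonals x + y = const.
    return (x + y) * (x + y + 1) // 2 + y

def unpair(d):
    # Invert by locating the anti-diagonal w that contains d.
    w = (isqrt(8 * d + 1) - 1) // 2
    t = w * (w + 1) // 2   # index of the first cell on that diagonal
    y = d - t
    return w - y, y
```

<p>Both directions are O(1) arithmetic, which gives the requested $d \to (x,y)$ and $(x,y) \to d$ maps, though unlike a Hilbert curve this enumeration is not locality-preserving.</p>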
165,382
<p>Is there any number $a+b\sqrt{5}$ with $a,b \in \mathbb{Z}$ with norm (defined by $|a^2−5b^2|$) equal 2?</p>
Arturo Magidin
742
<p>No. </p> <p>Since $a^2-5b^2\equiv a^2\pmod{5}$, the values of $a^2-5b^2$ must be congruent to either $0$, $1$, or $4$ modulo $5$, since those are the only squares modulo $5$. But neither $2$ nor $-2$ satisfy this condition, so it is impossible to have $a^2-5b^2=2$ or $a^2-5b^2=-2$ with $a,b\in\mathbb{Z}$.</p>
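<p>The congruence argument is easy to spot-check by brute force (a check, not a proof):</p>

```python
# Squares mod 5: a^2 - 5b^2 ≡ a^2 (mod 5) can only be 0, 1 or 4.
assert {a * a % 5 for a in range(5)} == {0, 1, 4}

# No small (a, b) attains |a^2 - 5 b^2| = 2.
assert not any(
    abs(a * a - 5 * b * b) == 2
    for a in range(-50, 51)
    for b in range(-50, 51)
)
```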
41,000
<p>The motivation for this question is producing a distribution that produces the gender and age of an individual when the distribution of ages depends on gender.</p> <p>Suppose I want a distribution which, when RandomVariate is applied to it, produces two values. The first value it produces is either 0 or 1 and the second value it produces is some real value between 0 and 65. But the second value is correlated with the first. By way of example, it might be the case that when the first value is 0, the second value tends to be lower than the when the first value is 1. I want the end product to be a distribution, so that one could apply things such as CDF to it.</p> <p>Obviously, the gender distribution can be modeled as a BernoulliDistribution. And, as it happens I know how to model the age distribution conditioned on the person being a male and the age distribution conditioned on the person being a female. But how do I put them together? I've looked at CopulaDistribution and, while conceptually it might be appropriate, I don't see the kind of kernel I would need.</p> <p>I have the feeling I am being stupid and that there is some obvious and elegant representation of this situation, but, at the moment, it escapes me. Help appreciated.</p>
Sjoerd C. de Vries
57
<p>I believe <code>TransformedDistribution</code> can do the job. I'll do the conditional part first and finish with producing the number pairs asked for.</p> <p>The trick is to let the transformation function contain the conditional using <code>Boole</code>:</p> <pre><code>dist[μm_, σm_, μf_, σf_, p_] = TransformedDistribution[ Boole[mf == 0] f + Boole[mf == 1] m, { m \[Distributed] NormalDistribution[μm, σm], f \[Distributed] NormalDistribution[μf, σf], mf \[Distributed] BernoulliDistribution[p] } ] </code></pre> <p>Let's try it:</p> <pre><code>Mean[dist[μm, σm, μf, σf, p]] (* μf - p μf + p μm *) StandardDeviation[dist[μm, σm, μf, σf, p]] (* Sqrt[p μf^2 - p^2 μf^2 - 2 p μf μm + 2 p^2 μf μm + p μm^2 - p^2 μm^2 + σf^2 - p σf^2 + p σm^2] *) RandomVariate[dist[6, 1, 0, 1/2, 1/4], 10000] // Histogram </code></pre> <p><img src="https://i.stack.imgur.com/riSHo.png" alt="Mathematica graphics"></p> <p>It also works for more complicated functions (if you're lucky):</p> <pre><code>dist2[am_, bm_, af_, bf_, p_] := TransformedDistribution[ Boole[mf == 0] f + Boole[mf == 1] m, { m \[Distributed] GompertzMakehamDistribution[am, bm], f \[Distributed] GompertzMakehamDistribution[af, bf], mf \[Distributed] BernoulliDistribution[p] } ] Mean[dist2[am, bm, af, bf, p]] (* ( am E^bf MeijerG[{{}, {1}}, {{0, 0}, {}}, bf] - am E^bf p MeijerG[{{}, {1}}, {{0, 0}, {}}, bf] + af E^bm p MeijerG[{{}, {1}}, {{0, 0}, {}}, bm])/(af am) *) </code></pre> <p>The above can be easily extended to generate the numbers pairs you need:</p> <pre><code>dist[μm_, σm_, μf_, σf_, p_] := TransformedDistribution[ {mf, Boole[mf == 0] f + Boole[mf == 1] m}, { m \[Distributed] NormalDistribution[μm, σm], f \[Distributed] NormalDistribution[μf, σf], mf \[Distributed] BernoulliDistribution[p] } ] RandomVariate[dist[65, 5, 75, 5, 1/4], 100000] // DensityHistogram </code></pre> <p><img src="https://i.stack.imgur.com/b74Ul.png" alt="Mathematica graphics"></p>
1,509,502
<p>I am trying to prove the following statement:</p> <blockquote> <p>Suppose <span class="math-container">$\mu$</span> and <span class="math-container">$\nu$</span> are finite measures on the measurable space <span class="math-container">$(X,\mathcal A)$</span> which have the same null sets. Show that there exists a measurable function <span class="math-container">$f$</span> such that <span class="math-container">$0 &lt; f &lt; \infty$</span> <span class="math-container">$\mu$</span>-a.e. and <span class="math-container">$\nu$</span>-a.e. and for all <span class="math-container">$A \in \mathcal A$</span> one has</p> <p><span class="math-container">$$ \nu(A) = \int_A f d\mu \quad \text{and} \quad \mu(A) = \int_A \frac{1}{f} d\nu. $$</span></p> </blockquote> <p>My approach is to define <span class="math-container">$f = \frac{d\nu}{d\mu}$</span> (applying the Radon-Nikodym theorem) and show that the second equality also holds. Showing the almost-everywhere-requirements is trivial.</p> <p>I am able to prove this when <span class="math-container">$f$</span> is a simple function, but I run into trouble when I try to extend this argument. I tried to do this by writing <span class="math-container">$f$</span> as the limit of a sequence of increasing simple positive functions <span class="math-container">$f_k$</span>, defining <span class="math-container">$\nu_k(A) = \int_A f_k d\mu$</span>, and then showing that <span class="math-container">$\lim\limits_{k\to\infty} \int_A \frac{1}{f_k} d\nu_k = \mu(A)$</span>. I also showed that <span class="math-container">$\nu(A) = \lim\limits_{k\to\infty}\nu_k(A)$</span>. However, I was unable to relate <span class="math-container">$\int_A \frac{1}{f} d\nu$</span> to these limits.</p> <p>This is a practice question for an exam, so I'm not looking for complete answers, but I would really appreciate it if someone pointed me in the right direction. Is this the correct approach, or am I overlooking a much simpler argument?</p>
Zhanxiong
192,408
<p><strong>Hint:</strong> Since $\nu$ and $\mu$ have the same null sets, $\nu \ll \mu$, so by the R-N theorem there exists $f$ with the required properties such that $$\nu(A) = \int_A f d \mu.$$ Therefore $d\nu = f \, d \mu$. Now evaluate $\int_A \frac{1}{f} d \nu$ as follows: $$\int_A \frac{1}{f} d \nu = \int_A \frac{1}{f} f d\mu = \int_A d\mu = \mu(A).$$</p>
1,648,354
<p>We have been taught that linear functions, usually expressed in the form $y=mx+b$, when given a input of 0,1,2,3, etc..., you can get from one output to the next by adding some constant (in this case, 1). $$ \begin{array}{c|l} \text{Input} &amp; \text{Output} \\ \hline 0 &amp; 1\\ 1 &amp; 2\\ 2 &amp; 3 \end{array} $$</p> <p>But with exponential functions (which are usually expressed in the form $y=a\cdot b^x $), instead of adding a constant, you multiply by a constant. (In this case, 2) $$ \begin{array}{c|l} \text{Input} &amp; \text{Output} \\ \hline 0 &amp; 1\\ 1 &amp; 2\\ 2 &amp; 4\\ 3 &amp; 8 \end{array} $$ But... we can keep going, can't we? $$ \begin{array}{c|l} \text{Input} &amp; \text{Output} \\ \hline 0 &amp; 1\\ 1 &amp; 2\\ 2 &amp; 4\\ 3 &amp; 16\\ 4 &amp; 256 \end{array} $$ In this example, you square the last output to get to the next one. I cannot find a 'general form' for such an equation, nor can I find much information online. Is there a name for these functions? Is there a general form for them? And can we keep going even past these 'super exponential functions'?</p>
Eleven-Eleven
61,030
<p>Note that your table can be written as</p> <p>$$ \begin{array}{c|l} \text{Input} &amp; \text{Output} \\ \hline 1 &amp; 2^{2^{1-1}}\\ 2 &amp; 2^{2^{2-1}}\\ 3 &amp; 2^{2^{3-1}}\\ 4 &amp; 2^{2^{4-1}} \end{array} $$</p> <p>Tetration is the operation that can be said to succeed exponentiation. As an example, notationally, if we want to represent "$a$ to the $a$ to the $a$ to the $a$th" power, we use</p> <p>$$ ^4 a=a^{a^{a^a}}$$</p>
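<p>To make the distinction concrete in code (a sketch of my own): the question's table comes from repeated squaring, $2^{2^{n-1}}$, while tetration evaluates the whole tower at once and grows faster still.</p>

```python
def repeated_square(n):
    # The question's rule: start at 2 and square the previous output.
    return 2 ** (2 ** (n - 1))

def tetrate(a, h):
    # Tetration ^h a: a tower of h copies of a, evaluated from the top down.
    result = 1
    for _ in range(h):
        result = a ** result
    return result

assert [repeated_square(n) for n in range(1, 5)] == [2, 4, 16, 256]
assert tetrate(2, 4) == 65536   # 2^(2^(2^2)), already larger than 256
```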
679,712
<p>Find the volume bounded by the cylinder $x^2 + y^2=1$ and the planes $y=z , x=0 ,z=0$ in the first octant.</p> <p>How do I go about doing this?</p>
Américo Tavares
752
<p>In general the volume of a region $S$ bounded above by a surface defined by a function of two variables $z=f(x,y)\ge 0$, whose domain is $R$, and below by the plane $z=0$, is given by:</p> <ul> <li>the double integral $\iint_R f(x,y)\,dA=\iint_R f(x,y)\,dx\,dy$, </li> <li>or by the triple integral $\iiint_S \,dV=\iiint_S \,dx\,dy\,dz.$ </li> </ul> <p>The following figure represents the given region of the <em>first octant</em> ($x\ge 0,y\ge 0,z\ge 0$) bounded by the cylinder $x^{2}+y^{2}=1$ (gray) and the plane $y=z$ (green).</p> <p><img src="https://i.stack.imgur.com/y1qm4.jpg" alt="enter image description here"></p> <p>The domain $R$ of the function $z=f(y)=y$ is defined in <em>Cartesian coordinates</em> $x,y$ by</p> <p>\begin{eqnarray*} R &amp;=&amp;\left\{ (x,y)\in\mathbb{R}^{2}:0\leq x^{2}+y^{2}\leq 1,x\ge 0,y\ge 0\right\} . \end{eqnarray*}</p> <p>or in <em><a href="http://en.wikipedia.org/wiki/Polar_coordinate_system" rel="nofollow noreferrer">polar coordinates</a></em> $r,\theta$, with $x=r\cos \theta , y=r\sin \theta $, by</p> <p>\begin{eqnarray*} T &amp;=&amp;\left\{ (r,\theta )\in \mathbb{R}^{2}:0\leq r\leq 1,0\leq \theta \leq \pi /2\right\} . \end{eqnarray*}</p> <p>As such the requested volume $V$ may be given in Cartesian coordinates by the double integral</p> <p>\begin{equation*} V=\iint_{R}z\,dx\,dy=\iint_{R}y\,dx\,dy. 
\end{equation*}</p> <p>Using polar coordinates, since the <a href="http://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant" rel="nofollow noreferrer">Jacobian determinant</a> of the transformation from Cartesian to <a href="http://en.wikipedia.org/wiki/Polar_coordinate_system#Generalization" rel="nofollow noreferrer">polar coordinates</a> is $J=\frac{\partial (x,y)}{\partial (r,\theta )}=r$, the integral is converted into the following <a href="http://math.uchicago.edu/~vipul/teaching-1011/195/doubleanditeratedintegrals.pdf" rel="nofollow noreferrer">separable</a> one:</p> <p>\begin{eqnarray*} V &amp;=&amp;\iint_{T}(r\sin \theta)\, |J| \,dr\,d\theta =\int_{r=0}^{1}\int_{\theta =0}^{\pi /2}r^{2}\sin \theta \,dr\,d\theta \\ &amp;=&amp;\left( \int_{0}^{1}r^{2}dr\right) \left( \int_{0}^{\pi /2}\sin \theta \,d\theta \right) =\left. \frac{1}{3}r^{3}\right\vert _{0}^{1}\left. \left( -\cos \theta \right) \right\vert_{0}^{\pi /2} =\frac{1}{3}. \end{eqnarray*}</p>
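<p>The value $1/3$ is easy to corroborate with a crude midpoint-rule integration over the quarter disk (a numerical check only; the helper name is mine):</p>

```python
def quarter_disk_volume(steps=400):
    # Midpoint rule for V = ∬_R y dA over the quarter disk x^2 + y^2 <= 1, x, y >= 0.
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        for j in range(steps):
            y = (j + 0.5) * h
            if x * x + y * y <= 1.0:
                total += y * h * h
    return total

assert abs(quarter_disk_volume() - 1.0 / 3.0) < 5e-3
```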
2,447,677
<p>Consider the parametric equations given by \begin{align*} x(t)&amp;=\sin{t}-t,\\ y(t) &amp; = 1-\cos{t}.\end{align*}</p> <p>I want to write these parametric equations in Cartesian form. </p> <p>In order to eliminate the sine and cosine terms I think I probably need to consider some combination of $x(t),y(t), x(t)^2$ and $y(t)^2$ but I can't see exactly how to do this. </p>
José Carlos Santos
446,262
<p>From the second equality, you'll get $t=\arccos(1-y)$. So,\begin{align}x&amp;=\sin\bigl(\arccos(1-y)\bigr)-\arccos(1-y)\\&amp;=\sqrt{1-(1-y)^2}-\arccos(1-y)\\&amp;=\sqrt{2y-y^2}-\arccos(1-y).\end{align}</p>
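<p>A quick numerical check (on $t \in (0,\pi)$, where $\arccos$ inverts $1-\cos t$) confirms the derived expression; the function names below are my own:</p>

```python
from math import sin, cos, acos, sqrt, pi

def x_param(t):
    return sin(t) - t

def x_cartesian(y):
    # From the answer: x = sqrt(2y - y^2) - arccos(1 - y).
    return sqrt(2 * y - y * y) - acos(1 - y)

for k in range(1, 100):
    t = pi * k / 100
    assert abs(x_cartesian(1 - cos(t)) - x_param(t)) < 1e-9
```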
1,570,983
<p>The volume of a sphere with radius $r$ is given by the formula $V(r) = \frac{4 \pi}{3} r^3$.</p> <p>a) If $a$ is a given fixed value for $r$, write the formula for the linearization of the volume function $V(r)$ at $a$.</p> <p>b) Use this linearization to calculate the thickness $\Delta r$ (in $cm$) of a layer of paint on the surface of a spherical ball with radius $r=52cm$ if the total volume of paint used is $340cm^3$. </p> <p>The first part is easy to calculate, but I don't know exactly how to get the second part?</p>
Narasimham
95,860
<p>For the second part, logarithmic differentiation of the volume with respect to the radius gives</p> <p><span class="math-container">$$\dfrac{\Delta V}{V}=3\dfrac{\Delta r}{r}$$</span></p> <p><span class="math-container">$$\dfrac{340}{\dfrac{4\pi}{3} 52^3}=3\dfrac{\Delta r}{52}$$</span></p> <p>from which we calculate the paint thickness <span class="math-container">$\Delta r \approx 0.01 $</span> cm.</p> <p>The same relation would hold if <span class="math-container">$a=r$</span> instead represented the side of a cube.</p>
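<p>Plugging the numbers in gives a quick check of the $\approx 0.01\,$cm figure:</p>

```python
from math import pi

V_paint = 340.0   # cm^3 of paint used
r = 52.0          # cm, radius of the ball
V_ball = 4 * pi / 3 * r ** 3

# From ΔV/V = 3 Δr/r:  Δr = (r / 3) * (ΔV / V)
delta_r = (r / 3) * (V_paint / V_ball)
assert abs(delta_r - 0.01) < 1e-3
```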
1,885,434
<p>I'm currently reading through <em>Introductory Discrete Mathematics</em> by V.K. Balakrishnan and came across the following theorem:</p> <p>If $X$ is a set of cardinality $n$, then the number of $r$-collections from $X$ is $\binom{r + n - 1}{n - 1}$, where $r$ is any positive integer.</p> <p>To me, it seems like the number of such $r$-collections ought to be $\dfrac{n^r}{r!}$ since each collection will have $r$ elements and for each of those $r$ elements there are $n$ choices. But, obviously, $n^r$ would be an overestimation considering order does not matter and since each collection of $r$ elements could be permuted in $r!$ ways, we divide by $r!$. </p> <p>It seems that either I have made a mistake (very likely), he author has made a mistake (much less likely), or maybe my solution and the author's are actually equivalent. If anyone could shed some light on this I'd greatly appreciate it!</p>
MarnixKlooster ReinstateMonica
11,994
<p>Your proof is correct. However, it is a proof that contains a 'rabbit' (see the introduction of <a href="http://www.cs.utexas.edu/users/EWD/transcriptions/EWD13xx/EWD1300.html" rel="nofollow">EWD1300</a>): it introduces the expression $\;(b+a)(b−a)+c(b−a)\;$ as a complete surprise to the reader. That makes the proof harder to read, harder to find, and harder to remember and reconstruct later.</p> <p>Here are two alternative proofs which avoid that pitfall: they solve this problem essentially by calculation.$ \newcommand{\calc}{\begin{align} \quad &amp;} \newcommand{\op}[1]{\\ #1 \quad &amp; \quad \unicode{x201c}} \newcommand{\hints}[1]{\mbox{#1} \\ \quad &amp; \quad \phantom{\unicode{x201c}} } \newcommand{\hint}[1]{\mbox{#1} \unicode{x201d} \\ \quad &amp; } \newcommand{\endcalc}{\end{align}} \newcommand{\Ref}[1]{\text{(#1)}} \newcommand{\then}{\Rightarrow} \newcommand{\when}{\Leftarrow} \newcommand{\true}{\text{true}} \newcommand{\false}{\text{false}} $</p> <p>As you observed, $\;x,y,z\;$ are an arithmetic sequence iff $\;y-x = z-y\;$, or simplified $\;2y = x+z\;$, or (solving for $\;y\;$) $\;y = \tfrac 1 2 (x+z)\;$.</p> <p>That last form gives us a very straightforward proof: given $\;b = \tfrac 1 2 (a+c)\;$, you are asked to prove $$ \tag 1 b^2-ac = \tfrac 1 2 ((a^2-bc)+(c^2-ab)) $$ So we can just substitute $\;b := \tfrac 1 2 (a+c)\;$ in $\Ref 1$, and then prove the resulting identity by simplifying and calculation.</p> <p>And using the form $\;2y = x+z\;$ gives another easy-to-find proof, which I find more appealing: asuming $\;2b = a+c\;$, we calculate $$\calc \tag{1'} 2(b^2-ac) = (a^2-bc)+(c^2-ab) \op\equiv\hint{move all negative terms to the other side} 2b^2 + bc + ab = a^2 + c^2 + 2ac \op\equiv\hints{simplify left hand side; simplify right hand side} \hints{-- this introduces both $\;2b\;$ and $\;a+c\;$, which} \hint{we know something about} b(2b + c + a) = (a + c)^2 \op\equiv\hint{substitute $\;2b\;$ for $\;a+c\;$, by the assumption} b(2b + 2b) = (2b)^2 
\op\equiv\hint{arithmetic} \true \endcalc$$</p> <p>Here's hoping that this gets more people interested in rabbit extermination:</p> <blockquote> <p>We don’t want to baffle or puzzle our readers [...] As time went by, we accepted as challenges to avoid pulling rabbits out of the magician’s hat. [...] Eventually, expelling rabbits became another joy of my professional life.</p> </blockquote> <p>-- Edsger W. Dijkstra, from "The notational conventions I adopted, and why" (EWD1300)</p>
165,154
<p>a) Let $\,f\,$ be an analytic function in the punctured disk $\,\{z\;\;;\;\;0&lt;|z-a|&lt;r\,\,,\,r\in\mathbb R^+\}\,$ . Prove that if the limit $\displaystyle{\lim_{z\to a}f'(z)}\,$ exists finitely, then $\,a\,$ is a removable singularity of $\,f\,$</p> <p><strong>My solution and doubt:</strong> If we develop $\,f\,$ in a Laurent series around $\,a\,$ we get $$f(z)=\frac{a_{-k}}{(z-a)^k}+\frac{a_{-k+1}}{(z-a)^{k-1}}+\ldots +\frac{a_{-1}}{z-a}+a_0+a_1(z-a)+\ldots \Longrightarrow$$ $$\Longrightarrow f'(z)=-\frac{ka_{-k}}{(z-a)^{k+1}}-\ldots -\frac{a_{-1}}{(z-a)^2}+a_1+...$$ and since $\,\displaystyle{\lim_{z\to a}f'(z)}\,$ exists <em>finitely</em> then it must be that $$a_{-k}=a_{-k+1}=...=a_{-1}=0$$ getting that the above series for $\,f\,$ is, in fact, a Taylor one and thus $\,f\,$ has a removable singularity at $\,a\,$ .</p> <p><strong><em>My doubt:</em></strong> is there any other "more obvious" or more elementary way to solve the above without having to resort to term-by-term differentiation of that Laurent series? </p> <p>b) Evaluate, using some complex contour, the integral $$\int_0^\infty\frac{\log x}{(1+x)^3}\,dx$$</p> <p><strong><em>First doubt:</em></strong> it is given in this exercise the hint(?) to use the function $$\frac{\log^2z}{(1+z)^3}$$Please do note the square in the logarithm! Now, is this some typo or perhaps it really helps to do it this way? After checking with WA, the original real integral equals $\,-1/2\,$ and, in fact, it is doable without need to use complex functions, and though the result is rather ugly <em>it nevertheless is</em> an elementary function (rational with logarithms, no hypergeometric or Li or stuff).</p> <p>The real integral with the logarithm squared gives the beautiful result of $\,\pi^2/6\,$ but, again, I'm not sure whether "the hint" is a typo.</p> <p><strong><em>Second doubt:</em></strong> In either case (logarithm squared or not), what would be the best contour to choose?
I thought of using one quarter of the circle $\,\{z\;\;;\;\;|z|=R&gt;1\}\,$ <em>minus one quarter of the circle</em> $\,\{z\;\;;\;\;|z|=\epsilon\,\,,0&lt;\epsilon\ll R\}\,$, both in the first quadrant, because</p> <p>$(i)\,$ to get the correct limits on the $\,x\,$-axis when passing to the limits $\,R\to\infty\,\,,\,\epsilon\to 0\,$</p> <p>$(ii)\,$ to avoid the singularity $\,z=0\,$ of the logarithm (not to mention going around it and changing logarithmic branch and horrible things like this!). </p> <p>Well, I'm pretty stuck here with the evaluations on the different segments of the path, besides being baffled by "the hint", and I definitely need some help here.</p> <p>As before: these exercises are supposed to be for a first course in complex variables and, thus, I think they should be more or less "elementary", though this integral looks really evil.</p> <p>For the time you've taken already to read this long post I already thank you, and any help, hints or ideas will be very much appreciated.</p>
Rajesh D
2,987
<p>If you are passing the signal through an ideal low pass filter, then the Fourier series coefficients of the filtered signal above $4$kHz vanish; that is, $a_k = 0 \;\forall\, k \ge \frac{4000T}{2\pi}$, and the rest of the coefficients remain the same. Here $T$ is the period of the square wave. </p> <p>Then what is the bandwidth? It depends on the definition of the bandwidth. Bandwidth usually means the width of the continuous region in the frequency domain within which most of the energy (usually $90\%$ or greater) of the signal is present. Bandwidth is usually inversely proportional to the width of a rectangular pulse. Since it's a square wave, the width is $\frac{T}{2}$ and hence the bandwidth is $\frac{2}{T}$. Now for the low pass filtered signal the bandwidth is $\min(\frac{2}{T},$ 4kHz$)$. Here $\min$ denotes the <em>minimum</em> operation.</p>
961,603
<p>Show that every positive integer $N$ less than or equal to $n$ factorial, is the sum of at most $n$ distinct positive integers, each of which divides $n!$.</p>
André Nicolas
6,312
<p>We deal with the induction step. Suppose that the result holds when $n=k$. We show the result holds when $n=k+1$. So we want to show that every positive integer $N\le (k+1)!$ is representable as a sum $y_1+\cdots +y_t$, where $t\le k+1$, and where the $y_j$ are distinct divisors of $(k+1)!$.</p> <p>If $N$ is divisible by $k+1$, we are essentially finished. For we have $\frac{N}{k+1}\le k!$. Thus, using the induction hypothesis, we can represent $\frac{N}{k+1}$ as $\frac{N}{k+1}=x_1+\cdots +x_t$, where $t\le k$ and the $x_j$ are distinct divisors of $k!$. Let $y_j=(k+1)x_j$. Then $N=y_1+\cdots +y_t$, and the $y_j$ are distinct divisors of $(k+1)!$.</p> <p>Now suppose that $N$ is not divisible by $k+1$. Then $N$ leaves some remainder $r$ on division by $k+1$, where $1\le r\le k$. </p> <p>Then $N-r$ is divisible by $k+1$. So again by the induction hypothesis there exists an $s\le k$, and distinct divisors $x_1,\dots, x_s$ of $k!$, such that $\frac{N-r}{k+1}=x_1+\cdots +x_s$.</p> <p>Let $y_j=(k+1)x_j$ for $1\le j\le s$, and let $y_{s+1}=r$. Then $s+1\le k+1$, and $N=y_1+\cdots+y_s+y_{s+1}$.</p> <p>For $1\le j\le s$, the $x_j$ are distinct divisors of $k!$. Thus the $y_j$ are distinct divisors of $(k+1)!$. Since $y_{s+1}=r\le k$, it follows that $y_{s+1}$ is also a divisor of $(k+1)!$. Finally, $y_{s+1}$ is not equal to any $y_j$ with $j\le s$, since $y_{s+1}$ is not divisible by $k+1$, but the others are. </p> <p>This completes the induction step. </p>
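<p>The induction is constructive, so it can be turned directly into a small program (all names are mine; the parameter <code>n</code> plays the role of $k+1$ at each level of the recursion):</p>

```python
from math import factorial

def represent(N, n):
    # Write 1 <= N <= n! as a sum of at most n distinct divisors of n!,
    # following the induction step of the proof.
    if n == 1:
        return [1]                      # N must be 1 = 1!
    r = N % n
    q = (N - r) // n                    # q <= (n-1)! by construction
    parts = [n * x for x in represent(q, n - 1)] if q else []
    if r:
        parts.append(r)                 # r < n, so r | n! and n does not divide r
    return parts

# Exhaustive check for every N up to 5! = 120.
for N in range(1, factorial(5) + 1):
    parts = represent(N, 5)
    assert sum(parts) == N and len(parts) <= 5
    assert len(set(parts)) == len(parts)
    assert all(factorial(5) % p == 0 for p in parts)
```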
1,782,800
<p>I am studying for an exam and found this in the solutions:</p> <blockquote> <p>"... with canonical parameters $\{\log\lambda_i\}^n_1$". </p> </blockquote> <p>Index $I=1,2,\ldots,n$. The professor that wrote the exam has retired. Does anyone know what the $1$ and $n$ might mean?</p>
user332239
332,239
<p>This usually indicates the range of your index parameter, $i$. Sometimes you will see it written more explicitly as $$\{A_i\}_{i=1}^n.$$ For more general index sets you may see something like $$\{A_i\}_{i \in I}.$$ This is useful if your index set is uncountable, for instance.</p>
163,889
<p>Find a closed form for $a_n:=\sum_{k=0}^{n}\binom{n}{k}(n-k)^n(-1)^k$ using generating functions.</p>
user26872
26,872
<p>We have $$\begin{eqnarray*} \sum_{k=0}^{n}(-1)^k \binom{n}{k}(n-k)^n &amp;=&amp; \sum_{k=0}^{n}(-1)^k \binom{n}{n-k}(n-k)^n \\ &amp;=&amp; \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} k^n \\ &amp;=&amp; \left.\sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} (x D)^n x^k\right|_{x=1} \\ &amp;=&amp; \left.(x D)^n \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} x^k\right|_{x=1} \\ &amp;=&amp; \left.(x D)^n (x-1)^n\right|_{x=1}, \end{eqnarray*}$$ where $D = \partial/\partial x$. But $$(x D)^n = x^n D^n + (\mathrm{const}) x^{n-1}D^{n-1} + \ldots$$ and $D^k(x-1)^n|_{x=1} = 0$ unless $k\ge n$. Therefore, $\left.(x D)^n (x-1)^n\right|_{x=1} = D^n(x-1)^n|_{x=1} = n!,$ and so $$\begin{equation*} \sum_{k=0}^{n}(-1)^k \binom{n}{k}(n-k)^n = n!.\tag{1} \end{equation*}$$</p> <hr> <p><sub> The argument above immediately implies that $$\sum_{k=0}^{n}(-1)^k \binom{n}{k}(n-k)^m = 0$$ if $m\in\mathbb{N}$ and $m&lt;n$. It also gives us a method to calculate the sum for $m&gt;n$. Sums of this type are related to the <a href="http://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow noreferrer">Stirling numbers of the second kind</a>, $$\begin{eqnarray*} \sum_{k=0}^{n}(-1)^k \binom{n}{k}(n-k)^m &amp;=&amp; \sum_{k=0}^{n}(-1)^{n-k} \binom{n}{k}k^m \\ &amp;=&amp; n! \left\{m\atop n\right\}. \end{eqnarray*}$$ The operator $(x D)^n$ and its connection to the Stirling numbers has been discussed <a href="https://math.stackexchange.com/questions/154887/nicer-expression-for-the-following-differential-operator/">here</a>. </sub></p>
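<p>Identity (1) and the vanishing case for $m &lt; n$ are easy to confirm numerically (a check of my own, not part of the derivation):</p>

```python
from math import comb, factorial

def alt_sum(n, m):
    # sum_{k=0}^{n} (-1)^k C(n, k) (n - k)^m; Python's 0**0 == 1 matches the m = 0 case.
    return sum((-1) ** k * comb(n, k) * (n - k) ** m for k in range(n + 1))

for n in range(1, 9):
    assert alt_sum(n, n) == factorial(n)   # identity (1)
    for m in range(n):
        assert alt_sum(n, m) == 0          # vanishes for m < n
```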
2,233,457
<p>I'm working on the following integral:</p> <p>$\int_0^{\infty}\frac{y^r}{\theta}ry^{r-1}e^{\frac{-y^r}{\theta}} dy$. </p> <p>I noticed that if I took the derivative:</p> <p>$\frac{d}{dy}e^{\frac{-y^r}{\theta}} = \frac{-ry^{r-1}}{\theta}e^{\frac{-y^r}{\theta}}$ </p> <p>I got most of the expression of the integrand. But I'm left with an extra factor of $y^r$:</p> <p>$\int_0^{\infty}(y^r\frac{d}{dy}e^{\frac{-y^r}{\theta}}) dy$</p> <p>I'm not sure how to simplify this anymore, so is this how I'm supposed to be going about this, or is there another way to take this integral?</p> <p>I got this from <em>Wackerly's Mathematical Statistics 7e</em> when trying to find the $E[Y^r]$ from the following distribution:</p> <p>$f(y|\theta) = \frac{1}{\theta}ry^{r-1}e^{\frac{-y^r}{\theta}}, \theta&gt;0,y&gt;0$</p> <p>Thank you!</p>
shrinklemma
67,953
<p>Here is an argument independent of the smooth structure of $M$. </p> <p>Let $M$ be $n$-dimensional ($n\geq 2$) and pick any $p\in M$. If $f:M\to\mathbb{R}$ is one-to-one, then $f$ restricted to a neighborhood of $p$ is also one-to-one, and we may assume that $f$ is now defined on $\mathbb{R}^n$ via a neighborhood chart around $p$. Now by continuity of $f$, there exists some $r&gt;0$ such that $f(B(0,r))\subset (f(0)-1,f(0)+1)$, where $B(0,r)$ denotes the open ball centered at $0$ with radius $r&gt;0$. Since $B(0,r)$ is connected, $f(B(0,r))$ is a bounded interval. Say the two boundary points of this interval are $a$ and $b$ with $a&lt;b$ ($a\neq b$ since $f$ is one-to-one on $B(0,r)$). Then it is not hard to show that in fact $f(\overline{B(0,r)})=[a,b]$. </p> <p>Now let $p,q\in \overline{B(0,r)}$ such that $f(p)=a$ and $f(q)=b$. Denote by $l$ the compact line segment connecting $p$ and $q$; then we have $f(l)=[a,b]$. This contradicts the fact that $f$ is one-to-one on $\overline{B(0,r)}$.</p>
4,219,193
<p>The <code>p implies q</code> statement is often described in various ways including:<br /> (1) <code>if p then q</code> (i.e. whenever p is true, q is true)<br /> (2) <code>p only if q</code> (i.e. whenever q is false, p is false)</p> <p>I see the truth table for (1) as</p> <pre><code>p | q | if p then q ------------------- T | T | T T | F | F | T | F | F | </code></pre> <p>I see the truth table for (2) as</p> <pre><code>p | q | p only if q ------------------- T | T | T | F | F F | T | F | F | T </code></pre> <p>How are the two statements the same? What is wrong with my understanding?</p> <hr /> <p><strong>Addenda</strong><br /> 1). There are some excellent answers/suggestions here but what really worked for me was the <a href="https://math.stackexchange.com/a/4219227/956241">following tip</a>:</p> <blockquote> <p>I think the intuitive way to think of this is if something is contradicted then it is false but if nothing can be contradicted it is by default true.</p> </blockquote> <p>2). I have now learned that a conditional statement that is true by virtue of the fact that its hypothesis is false (i.e. <code>true by default</code>) is called <code>vacuously true</code>.</p>
Isaiah
215,257
<p>&quot;If P then Q&quot; means that whenever P is true, Q is true as you have observed. Thus, if both are true, it follows that the statement as a whole is true. Now, what if P is true, but Q is false? What does this imply about the statement as a whole? In this case, it is <strong>not</strong> true that &quot;if P is true, Q is true&quot;. Hence, the statement as a whole is false.</p> <p>Now, consider the cases where P is false. Does it matter whether Q is true or false? Again, we are evaluating the truthfulness of the statement that &quot;If P is true, then Q is also.&quot; Since P is false, the statement no longer has a bearing on the factuality of the statement we are assessing. By convention, we thus assert that the statement is still true. (At the very least, it doesn't contradict the original statement. This is a complex topic; you would do well to look up some related questions regarding this. See <a href="https://math.stackexchange.com/questions/48161/in-classical-logic-why-is-p-rightarrow-q-true-if-both-p-and-q-are-false">this one</a> for instance.)</p> <p>Now, using the advice above, try to fill in the truth table for the other one. You will find that they are the same; therefore, the two are logically equivalent.</p>
4,219,193
<p>The <code>p implies q</code> statement is often described in various ways including:<br /> (1) <code>if p then q</code> (i.e. whenever p is true, q is true)<br /> (2) <code>p only if q</code> (i.e. whenever q is false, p is false)</p> <p>I see the truth table for (1) as</p> <pre><code>p | q | if p then q ------------------- T | T | T T | F | F | T | F | F | </code></pre> <p>I see the truth table for (2) as</p> <pre><code>p | q | p only if q ------------------- T | T | T | F | F F | T | F | F | T </code></pre> <p>How are the two statements the same? What is wrong with my understanding?</p> <hr /> <p><strong>Addenda</strong><br /> 1). There are some excellent answers/suggestions here but what really worked for me was the <a href="https://math.stackexchange.com/a/4219227/956241">following tip</a>:</p> <blockquote> <p>I think the intuitive way to think of this is if something is contradicted then it is false but if nothing can be contradicted it is by default true.</p> </blockquote> <p>2). I have now learned that a conditional statement that is true by virtue of the fact that its hypothesis is false (i.e. <code>true by default</code>) is called <code>vacuously true</code>.</p>
hamam_Abdallah
369,188
<p><span class="math-container">$ p\implies q $</span> can be seen as an argument <span class="math-container">$ p \therefore q$</span>. If the argument is valid and <span class="math-container">$ p $</span> is true, then we are sure that <span class="math-container">$ q $</span> is also true.</p> <p>If the argument is valid and <span class="math-container">$ p $</span> is false, we can say nothing about <span class="math-container">$ q $</span>.</p> <p>Using material implication, we can replace <span class="math-container">$ p\implies q $</span> by <span class="math-container">$ \lnot p \vee q$</span>.</p>
4,219,193
<p>The <code>p implies q</code> statement is often described in various ways including:<br /> (1) <code>if p then q</code> (i.e. whenever p is true, q is true)<br /> (2) <code>p only if q</code> (i.e. whenever q is false, p is false)</p> <p>I see the truth table for (1) as</p> <pre><code>p | q | if p then q ------------------- T | T | T T | F | F | T | F | F | </code></pre> <p>I see the truth table for (2) as</p> <pre><code>p | q | p only if q ------------------- T | T | T | F | F F | T | F | F | T </code></pre> <p>How are the two statements the same? What is wrong with my understanding?</p> <hr /> <p><strong>Addenda</strong><br /> 1). There are some excellent answers/suggestions here but what really worked for me was the <a href="https://math.stackexchange.com/a/4219227/956241">following tip</a>:</p> <blockquote> <p>I think the intuitive way to think of this is if something is contradicted then it is false but if nothing can be contradicted it is by default true.</p> </blockquote> <p>2). I have now learned that a conditional statement that is true by virtue of the fact that its hypothesis is false (i.e. <code>true by default</code>) is called <code>vacuously true</code>.</p>
mohottnad
955,538
<p>After practicing filling in truth tables and learning the logic terminology, the natural-language intuition for &quot;if p then q&quot; is generally that p is a <em>sufficient</em> condition for q, while for &quot;p only if q&quot; q is a <em>necessary</em> condition for p. With these intuitions you can usually find answers with more ease.</p> <p>A final note about the natural-language word &quot;implies&quot;: it usually indicates logical or semantic consequence, which the simple propositional logic you're learning here usually cannot fully express, as explained further in my earlier post <a href="https://math.stackexchange.com/questions/4223543/justification-words-in-symbolic-logic/4224187#4224187">here</a>.</p>
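A mechanical way to confirm the equivalence (my sketch, not from any of the answers above): encode each reading as a Boolean function and compare them on all four assignments.

```python
from itertools import product

def if_p_then_q(p, q):
    # reading (1): the only falsifying row is p true, q false
    return (not p) or q

def p_only_if_q(p, q):
    # reading (2): a false q forces a false p
    return q or (not p)

table = [(p, q, if_p_then_q(p, q), p_only_if_q(p, q))
         for p, q in product([True, False], repeat=2)]
```

Both result columns come out T, F, T, T, so the two readings define the same truth function, namely the material conditional $\lnot p \lor q$.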
533,628
<p>I'm learning how to take indefinite integrals with U-substitutions on <a href="https://www.khanacademy.org/math/calculus/integral-calculus/u_substitution/v/u-substitution" rel="nofollow">khanacademy.org</a>, and in one of the videos he says that: $$\int e^{x^3+x^2}(3x^2+2x) \, dx = e^{x^3+x^2} + \text{constant}$$ I understand that the differential goes away, but not how the whole $(3x^2+2x)$ term goes away together with the $dx$.</p>
Tacet
186,012
<p>Take $b_n = n$ and $c_n = \sum_{i = 1}^{n} a_i$. Then $$ \lim_{n\to\infty} \frac{c_{n+1} - c_n}{b_{n+1} - b_n} = \lim_{n\to\infty} \frac{a_{n+1}}{1} = a. $$</p> <p>Since $s_n = \frac{c_n}{b_n}$, $b_n \to \infty$, and the above limit exists, the <a href="https://en.wikipedia.org/wiki/Stolz%E2%80%93Ces%C3%A0ro_theorem" rel="nofollow">Stolz–Cesàro theorem</a> ends the proof.</p>
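A numerical illustration of the conclusion (my addition; the sample sequence $a_n = 2 + 1/n$ is an arbitrary choice): if $a_n \to a$, the averages $s_n = \frac1n\sum_{i=1}^n a_i$ approach the same limit, just more slowly.

```python
a = 2.0
N = 100_000
terms = [a + 1.0 / n for n in range(1, N + 1)]   # a_n -> a

running_sum = 0.0
means = []
for n, t in enumerate(terms, start=1):
    running_sum += t
    means.append(running_sum / n)   # s_n = (1/n) * sum of the first n terms

gap = abs(means[-1] - a)   # distance of s_N from the limit
```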
2,411,081
<p>How can I show (with one of the tests for convergence , <strong>not by solving</strong>) that the integral $$\int _{1}^\infty\frac{\ln^5(x)}{x^2}dx$$ converges?</p>
Michael Burr
86,421
<p>If you really meant: $$ \int_1^\infty \frac{\ln(x)^5}{x^2}dx, $$ then observe that for $x$ sufficiently large, $\frac{\ln(x)^5}{\sqrt{x}}$ is less than $1$. In other words, for some $c&gt;1$, $\ln(x)^5&lt;\sqrt{x}$ for all $x&gt;c$. Therefore, the tail of this integral is bounded above by $$ \int_c^\infty\frac{dx}{x^{3/2}}, $$ which converges. Then, $$ \int_1^\infty \frac{\ln(x)^5}{x^2}dx\leq \int_1^c \frac{\ln(x)^5}{x^2}dx+\int_c^\infty\frac{dx}{x^{3/2}}&lt;\infty. $$</p>
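As a numerical cross-check (my addition): the substitution $u=\ln x$, $du = dx/x$ turns the integral into $\int_0^\infty u^5 e^{-u}\,du = 5! = 120$, so it not only converges but has a simple exact value. A crude trapezoid approximation agrees:

```python
from math import exp

def f(u):
    # integrand after the substitution u = ln(x): u^5 * e^(-u)
    return u ** 5 * exp(-u)

# Trapezoid rule on [0, 50]; the tail beyond 50 is negligibly small
a, b, steps = 0.0, 50.0, 200_000
h = (b - a) / steps
total = 0.5 * (f(a) + f(b))
for i in range(1, steps):
    total += f(a + i * h)
approx = total * h   # should be close to 5! = 120
```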
4,447,641
<p>I can't figure out where to even start. I have looked up the answer on Desmos, but it uses L'Hopital's rule, which I haven't learned yet.</p> <p><span class="math-container">$$\lim_{h \to 0} \frac{(2+h)^{3+h} - 8}{h}$$</span> I see that I can use the log rules to rewrite it as <span class="math-container">$$\lim_{h \to 0} \frac{e^{(3+h)\ln(2+h)} - 8}{h}$$</span> but that only confuses me more.</p>
311411
688,046
<p>I follow up on your suggestion to use a logarithm. Define</p> <p><span class="math-container">$$ \phi(h)= e^{(3+h)\ln(2+h)}, \text{ noting that } \phi(0)= 8.$$</span></p> <p>On the one hand, <span class="math-container">$$\phi'(0) \;=\; \lim_{h \to 0}\frac{\phi(h)-\phi(0)}{h-0},$$</span> which is exactly your target.</p> <p>But applying the basic rules for derivatives of complicated expressions:</p> <p><span class="math-container">$$ \phi'(h)= \phi(h)\left[\ln(2+h) + (3+h)\frac1{2+h}\right],$$</span></p> <p><span class="math-container">$$ \phi'(0)= 12+8\ln 2.$$</span></p>
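The value $12 + 8\ln 2$ can be sanity-checked by evaluating the difference quotient at a small $h$ (my addition):

```python
from math import log

def quotient(h):
    # the difference quotient from the question
    return ((2 + h) ** (3 + h) - 8) / h

target = 12 + 8 * log(2)       # the limit derived above
approx = quotient(1e-6)        # difference quotient at a small h
```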
2,078,943
<p>My book says this but doesn't explain it:</p> <blockquote> <p>Row operations do not change the dependency relationships among columns.</p> </blockquote> <p>Can someone explain this to me? Also what is a dependency relationship? Are they referring to linear dependence? </p>
Michelle Zhuang
739,444
<p>I just figured out a way that makes sense of it, so I searched for such a question to post it. And gladly it's slightly different than hardmath's above.</p> <p>If two vectors <span class="math-container">$\vec{a}$</span> and <span class="math-container">$\vec{b}$</span> are dependent, <span class="math-container">$\vec{a}=c\vec{b}$</span>.</p> <p>One can consider a row operation as performing an identical linear transformation on all the columns of a matrix. A linear transformation is just a mapping from one vector space to another vector space, <span class="math-container">$T: V \mapsto W$</span> such that if <span class="math-container">$\vec{x}, \vec{y}$</span> are two vectors in <span class="math-container">$V$</span> and <span class="math-container">$c$</span> a scalar, <span class="math-container">$T(\vec{x})+T(\vec{y})=T(\vec{x}+\vec{y})$</span> and <span class="math-container">$T(c\vec{x})=cT(\vec{x})$</span>.</p> <p>Proof for &quot;Row operations do not change the dependency relationships among columns&quot;:</p> <p>We want to show that after a row operation, one column <span class="math-container">$\vec{a}$</span> will be linearly dependent on another column <span class="math-container">$\vec{b}$</span> iff it had been linearly dependent on <span class="math-container">$\vec{b}$</span> beforehand.</p> <p>So beforehand, <span class="math-container">$\vec{a}=c\vec{b}$</span>. Let <span class="math-container">$T$</span> be a row operation, which we said is a linear transformation. Then <span class="math-container">$T(\vec{a})=T(c\vec{b})=cT(\vec{b})$</span>. 
And this shows that after the row operation, two linearly dependent columns remain linearly dependent.</p> <p>Since every row operation is invertible and its inverse is again a row operation (hence a linear transformation), applying the inverse operation shows the converse as well: columns that are dependent after the operation were already dependent before it. So after a row operation, one column <span class="math-container">$\vec{a}$</span> is linearly dependent on another column <span class="math-container">$\vec{b}$</span> if and only if it was beforehand; equivalently, <span class="math-container">$\vec{a}$</span> is linearly independent of <span class="math-container">$\vec{b}$</span> after the operation if and only if it was beforehand.</p> <p><span class="math-container">$\square$</span></p>
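A small concrete illustration (my addition; the matrix and the row operation are arbitrary choices): a column that is a scalar multiple of another stays so after a row operation, and an independent column stays independent.

```python
# Columns: col 1 = 3 * col 0, while col 2 is not a multiple of col 0
M = [[1.0, 3.0, 2.0],
     [2.0, 6.0, 0.0],
     [4.0, 12.0, 5.0]]

def row_op(mat, i, j, c):
    """Elementary row operation: replace row i by row i + c * row j."""
    out = [row[:] for row in mat]
    out[i] = [a + c * b for a, b in zip(mat[i], mat[j])]
    return out

def is_multiple(mat, p, q):
    # True if column p is a scalar multiple of column q
    ratios = {row[p] / row[q] for row in mat if row[q] != 0}
    zeros_agree = all((row[q] == 0) <= (row[p] == 0) for row in mat)
    return zeros_agree and len(ratios) <= 1

N = row_op(M, 0, 2, -2.5)   # apply one elementary row operation
```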
1,490,828
<p>Consider an equilateral triangle $A \subset \mathbb{R}^2$ with vertices $x_1$, $x_2$, and $x_3$. I would like to show that the smallest circle which contains $A$ has radius $\frac{d}{\sqrt{3}}$, where $d$ is the length of each side. </p> <p>I've been able to show that a (closed) ball centered at $(x_1 + x_2 + x_3)/3$ with radius $\frac{d}{\sqrt{3}}$ contains A. To complete the proof, I need to show that no smaller ball can contain $A$. My strategy so far has been as follows. Pick a point $x \in \mathbb{R}^2$. If $d(x_1,x) \leq \frac{d}{\sqrt{3}}$ and $d(x_2,x) \leq \frac{d}{\sqrt{3}}$ with at least one inequality strict, then $d(x_3,x) &gt; \frac{d}{\sqrt{3}}$ (where $d(\cdot, \cdot)$ is the Euclidean distance function). I have not been able to find a simple way to verify this statement (either directly or by contradiction). Is it possible to show this? Or is another strategy better? </p>
cr3
250,652
<p>The <a href="http://mathworld.wolfram.com/Circumcenter.html" rel="nofollow">circumcenter</a> of a triangle is the intersection of the perpendicular bisectors of its edges. A right triangle <strong>B</strong> can be formed from the midpoint of an edge, a vertex of that edge, and the circumcenter. The corresponding right triangles at the other edges are congruent to it because triangle <strong>A</strong> is equilateral, so their hypotenuses all have the same length. A circle centered at the circumcenter with radius equal to this hypotenuse passes through each vertex and is therefore the smallest circle containing the triangle.</p> <p>To calculate the hypotenuse $h$ of triangle <strong>B</strong> given any side $x$, note that each perpendicular bisector meets its edge at the midpoint, and that the angle of triangle <strong>A</strong> at the vertex is bisected because the triangle is equilateral:</p> <p>$$ \cos(\pi/6) = \frac{x/2}{h}\\ h = \frac{x/2}{\cos(\pi/6)}\\ h = \frac{x/2}{\sqrt{3}/2}\\ h = \frac{x}{\sqrt{3}} $$</p> <p>This hopefully proves what you already know.</p>
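A quick coordinate check (my addition): placing an equilateral triangle with side $d$ in the plane, the distance from its center to each vertex is indeed $d/\sqrt{3}$.

```python
from math import sqrt, dist   # math.dist needs Python 3.8+

d = 2.0                                    # side length (arbitrary)
v1 = (0.0, 0.0)
v2 = (d, 0.0)
v3 = (d / 2, d * sqrt(3) / 2)              # equilateral triangle, side d

center = ((v1[0] + v2[0] + v3[0]) / 3,
          (v1[1] + v2[1] + v3[1]) / 3)     # centroid = circumcenter here
radii = [dist(center, v) for v in (v1, v2, v3)]
expected = d / sqrt(3)
```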
20,683
<p>The most elementary construction I know of quantum groups associated to a finite dimensional simple Lie algebra is to construct an algebra with generators $E_i$ and $F_i$ corresponding to the simple positive roots, and invertible $K_j$'s generating a copy of the weight lattice. Then one has a flurry of relations between them, and a coproduct defined on the generators by explicit formulas. These are not mortally complicated, but are still rather involved. Then come explicit checks of coassociativity, and compatibility between multiplication and comultiplication. Finally, one has the $R$-matrix which is an infinite sum with rather non-obvious normalizations. Enter more computations to verify $R$-matrix axioms.</p> <p>I recall learning about a nice way to construct the quantum group, which in addition to requiring fewer formulas has the advantage of making it clear conceptually why it's braided.</p> <blockquote> <p>I'm hoping someone can either point me to a reference for the complete picture, or perhaps fill in some of the details, since I only remember the rough outline. That, precisely, is my question.</p> </blockquote> <p>I include the remarks below in hopes it will jog someone's memory.</p> <p>You start with the tensor category $Vect_\Lambda$ of $\Lambda$-graded vector spaces, where $\Lambda$ is the weight lattice. We have a pairing $\langle,\rangle:\Lambda\times\Lambda\to \mathbb{Z}$, and we define a braiding $\sigma_{\mu,\nu}:\mu \otimes \nu \to \nu\otimes\mu$ to be $q^{\langle \mu,\nu \rangle}$. Here $q$ is either a complex number or a formal variable. We may need to pick some roots of $q$ if we regard it as a number; I don't remember (and am not too worried about that detail).
Also, here we denoted by $\mu$ and $\nu$ the one dimensional vector space supported at $\mu$ and $\nu$ respectively, and we used the fact that both $\mu\otimes\nu$ and $\nu\otimes\mu$ are as objects just $\mu+\nu$.</p> <p>Okay, so now we're supposed to build an algebra in this category, generated by the $E_i$'s, which generators we regard as living in their respective gradings, corresponding to the simple roots. Here's where things start to get fuzzy. Do we take only the simples as I said, or do we take all the $E_\alpha$'s, for all roots $\alpha$? Also, what algebra do we build with the $E_i$'s? Of course it should be the positive nilpotent part of the quantum group, but since we build it as an algebra in this category, there may be a nicer interpretation of the relations? Anyways, let's call the algebra we are supposed to build here $U_q(\mathfrak{n}^+)$. I definitely remember that it's now a bi-algebra in $Vect_\Lambda$, and the coproduct is just $\Delta(E_i)=E_i\otimes 1 + 1\otimes E_i$ (the pesky $K$ that appears there usually has been tucked into the braiding data). Now we take $U_q(\mathfrak{n}^-)$ to be generated by $F_i$'s in negative degree, and we construct a pairing between $U_q(\mathfrak{n}^+)$ and $U_q(\mathfrak{n}^-)$. The pairing is degenerate, and along the lines of Lusztig's textbook, one finds that the kernel of the pairing is the q-Serre relations in each set of variables $E_i$ and $F_i$.</p> <p>Finally, once we quotient out the kernel, we take a relative version of Drinfeld's double construction (the details here I also can't remember, but would very much like to), and we get a quasi-triangular Hopf algebra in $Vect_\Lambda$. As an object in $Vect_\Lambda$ it's just an algebra generated by the $E_i$'s and $F_i$'s, so no torus. 
But since we're working in this relative version, we can forget down to vector spaces, and along the way, we get back the torus action, because that was tucked into the data of $Vect_\Lambda$ all along.</p> <p>So, the construction (a) gives neater formulas for the products, coproducts, and relations (including the $q$-Serre relations), and (b) makes it clear why there's a braiding on $U_q(\mathfrak{g})$ by building it as the double.</p> <p>The only problem is that I learned it at a seminar where to my knowledge complete notes were never produced, and while I remember the gist, I don't remember complete details. Any help?</p>
Zahlendreher
33,854
<p>There is a detailed exposition of this in Majid's paper <em>Double-bosonization of braided groups and the construction of</em> $U_q(\mathfrak{g})$, Math Proc Cambridge Phil Soc 125(1). See especially Appendix B, where the quantum group is obtained by a version of Tannaka-Krein duality for braided monoidal categories applied to the category of Yetter-Drinfeld modules of the positive part $U_q(\mathfrak{n})$ living in the category of comodules over a weakly quasitriangular group algebra $k\Lambda$ with respect to a braiding coming from the Cartan datum and a parameter $q$.</p> <p>Here, the quantum groups are a special case of what Majid calls <em>double bosonization</em>, which is a braided version of the Drinfeld double (the ordinary Drinfeld double arises by applying reconstruction to Yetter-Drinfeld modules in the symmetric monoidal category of $k$-vector spaces). It is not possible to define the Drinfeld double in a braided monoidal category. This is why one needs to work with a fibre functor to vector spaces, and bosonizations (reflected in the commutator relations with the $K_i$) become necessary.</p> <p>The nice thing about this construction is that the category of Yetter-Drinfeld modules is very special: it is the <em>center</em> of the category of $U_q(\mathfrak{n})$-modules. And $U_q(\mathfrak{n})$ is a Nichols algebra.</p>
132,769
<p>My distribution histogram looks like it is not identically distributed, as in the negative counts have a different shape than the positive counts. Here is an image:</p> <p><img src="https://i.stack.imgur.com/VtlHd.jpg" alt="enter image description here"></p> <p>The chart has 800 data points, and the tallest count of 40 is for 0.</p> <p>I don't know a formal approach, but looking at it, it appears to me it is not identically distributed. My guess is that the negative counts (to the left of 0) are closer to a normal distribution, while the positive counts (to the right of 0) are closer to a Laplace distribution, or have higher kurtosis than the left side. </p> <p>Is this possible, are there cases where this happens, is there a topic in probability that studies it? Or, is this just my eyes playing tricks on me, or not having enough sample data or some other simple reason? </p> <p>Thanks in advance for any help.</p>
B R
5,773
<p>"<a href="http://en.wikipedia.org/wiki/Shape_parameter" rel="nofollow">Shape parameters</a>" are used to describe changes, such as "skewness", in a family of distributions other than those generated by "location" or "scale" parameters. There are many examples of distributions with shape parameters. Your distribution actually looks like a product of two normal distributions with different means, like $N(-1,1)\cdot N(1,1)$. </p>
75,656
<p>Is there a closed form for any function f(x,y) satisfying:</p> <p>a) $\frac{df}{dx}+\frac{df}{dy}=xy$</p> <p>b) $\frac{df}{dx}+\frac{df}{dy}+\frac{df}{dz}=xyz$</p>
Samuel Tan
18,039
<p>I think $$\tag{1} f(x,y) = \frac{ -x^3 + 3 x^2 y + 6 c_1(y-x)} {6} $$</p> <p>is a solution to part a). For part b), </p> <p>$$ f(x,y,z)= \frac {x^4 - 2 x^3 y - 2 x^3 z + 6 x^2 y z + 12 \, c_2(-x + y, -x + z)}{12} $$</p> <p>might be a solution. In both cases $c$ is a function, like the constant of integration for one variable. In equation (1), it might not be clear that $c$ is a function of $y-x$.</p>
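A finite-difference check of the part b) expression (my addition; I set the arbitrary function $c_2$ to zero and pick an arbitrary test point):

```python
def f(x, y, z):
    # part b) candidate with the arbitrary function c_2 taken to be zero
    return (x**4 - 2 * x**3 * y - 2 * x**3 * z + 6 * x**2 * y * z) / 12

def partial(g, args, i, h=1e-6):
    # central finite difference in the i-th argument
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (g(*hi) - g(*lo)) / (2 * h)

pt = (1.3, 0.7, -0.4)
lhs = sum(partial(f, pt, i) for i in range(3))   # f_x + f_y + f_z
rhs = pt[0] * pt[1] * pt[2]                      # xyz
```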
75,656
<p>Is there a closed form for any function f(x,y) satisfying:</p> <p>a) $\frac{df}{dx}+\frac{df}{dy}=xy$</p> <p>b) $\frac{df}{dx}+\frac{df}{dy}+\frac{df}{dz}=xyz$</p>
Thomas Belulovich
831
<p>(a) Is solved by $f(x,y) = xy^2/2 - y^3/6$.</p> <p>One way to get this: Let $f_x = a(x,y)$ so that $f_y = xy - a(x,y)$. Then use commutativity of mixed partials to get a handle on $a$; we'd have $a_y(x,y) = y - a_x(x,y)$. Then, why not set $a_x = 0$? We get $a_y = y$, and so $a(x,y) = y^2/2$ works fine. Since we can solve exact differential equations, we should be able to solve this given that $(f_x)_y = (f_y)_x$; indeed we can, with the above (among other solutions, of course!)</p>
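A finite-difference check (my addition) confirms $f_x + f_y = xy$ for this solution at a few sample points:

```python
def f(x, y):
    return x * y**2 / 2 - y**3 / 6

def partial(g, args, i, h=1e-6):
    # central finite difference in the i-th argument
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (g(*hi) - g(*lo)) / (2 * h)

errors = []
for x, y in [(0.5, 1.2), (-1.0, 2.0), (3.0, -0.7)]:
    lhs = partial(f, (x, y), 0) + partial(f, (x, y), 1)   # f_x + f_y
    errors.append(abs(lhs - x * y))
```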
75,656
<p>Is there a closed form for any function f(x,y) satisfying:</p> <p>a) $\frac{df}{dx}+\frac{df}{dy}=xy$</p> <p>b) $\frac{df}{dx}+\frac{df}{dy}+\frac{df}{dz}=xyz$</p>
Robert Israel
8,508
<p>More generally, suppose we want a function of $n$ variables with $$\sum_{i=1}^n \frac{\partial f}{\partial x_i} = x_1 \ldots x_n$$ We can start off with a change of variables: let $U$ be an $n \times n$ orthogonal matrix whose first row is $[1,1,\ldots 1]/\sqrt{n}$, and take $$ \pmatrix{u_1\cr u_2\cr \ldots\cr u_n\cr} = U \pmatrix{x_1\cr x_2\cr \ldots\cr x_n\cr} \ \text{and} \pmatrix{x_1\cr x_2\cr \ldots\cr x_n\cr} = U^T \pmatrix{u_1\cr u_2\cr \ldots\cr u_n\cr}$$ Expressed in terms of the $u$ variables, $x_1 \ldots x_n = p(u_1,\ldots, u_n)$ is a homogeneous polynomial of degree $n$ in $u_1,\ldots,u_n$. According to the chain rule, $$ \sum_i \frac{\partial f}{\partial x_i} = \sum_j \sum_i \frac{\partial g}{\partial u_j} \frac{\partial u_j}{\partial x_i} $$ But $$\sum_i \frac{\partial u_j}{\partial x_i} = \sum_i U_{ji} = (U e)_j$$ where $e$ is the vector of all 1's, and this is $\sqrt{n}$ for $j=1$ and 0 otherwise. Thus all we need is $\sqrt{n} \frac{\partial g}{\partial u_1} = p(u_1, \ldots, u_n)$, which can be obtained with $$g(u_1, \ldots, u_n) = n^{-1/2} \int p(u_1, \ldots, u_n) \ du_1$$</p> <p>For example, in the case $n=2$, we could take $$U = 2^{-1/2} \pmatrix{1 &amp; 1\cr 1 &amp; -1\cr}$$ Then $x_1 x_2 = p(u_1, u_2) = u_1^2/2 - u_2^2/2$, and we get $$f(x_1, x_2) = g(u_1, u_2) = 2^{-1/2} \int \left(\frac{u_1^2}{2} - \frac{u_2^2}{2}\right)\ du_1 = \frac{u_1^3}{6 \sqrt{2}} - \frac{u_1 u_2^2}{2 \sqrt{2}} + C(u_2) = \frac{-x_1^3 + 3 x_1^2 x_2 + 3 x_1 x_2^2 - x_2^3}{12} + C((x_1 - x_2)/\sqrt{2})$$ where $C$ is an arbitrary differentiable function.</p>
308,426
<p>Assume that a sequence $(x_n)_{n\in\omega}$ of points of a locally convex topological vector space converges to zero. Is it always possible to find increasing number sequences $(n_k)_{k\in\omega}$ and $(m_k)_{k\in\omega}$ such that the sequence $(m_kx_{n_k})_{k\in\omega}$ still converges to zero? </p> <p><strong>Added in Edit.</strong> So we already know that this property (called <em>the Mackey convergence condition</em>) does not hold in any locally convex space. But we can ask another </p> <p><strong>Problem.</strong> Assume that a locally convex space $X$ admits an indexed family $(B_\alpha)_{\alpha\in\omega^\omega}$ of bounded sets such that (i) $B_\alpha\subset B_\beta$ for all $\alpha\le \beta$ in $\omega^\omega$ and (ii) each bounded subset $B\subset X$ is contained in some $B_\alpha$, $\alpha\in\omega^\omega$.</p> <p>Does $X$ satisfy the Mackey convergence condition?</p>
Jochen Wengenroth
21,051
<p>This is satisfied for locally convex spaces with the so-called <em>Mackey convergence condition</em>, i.e., for every null sequence $x$ there is a bounded absolutely convex set $B$ such that $x$ tends to $0$ in the linear hull of $B$ endowed with the Minkowski functional of $B$ as a norm. This property was introduced by Grothendieck and is satisfied by metrizable spaces and virtually all <em>non-Banach</em> spaces arising in analysis (test function spaces $\mathscr D(\Omega)$, distributions $\mathscr D'(\Omega)$, spaces of germs of holomorphic functions on compact sets, spaces of real analytic functions,...).</p>
1,002,191
<p>I have a problem:</p> <blockquote> <p>A family has six children, A,B,C,D,E,F. The ages of these six children are added together to make an integer less than $32$ where none of the children are less than one year old.</p> </blockquote> <p>What are the chances of guessing the ages of all of the children?</p> <p>I thought stars and bars, and this is equivalent to choosing $7$ children for $$x_1+\dots+x_7=31$$</p> <p>So we get (1 in)${31-1 \choose 7-1}$</p> <p>But they said that $x_1+\dots+x_7=32$ is correct. I feel like that would be the case if the original question said $x_1+\dots+x_6\leq32$, but it said $x_1+\dots+x_6\lt32$</p> <p>So apparently that makes it (1 in)${32-1 \choose 7-1}$</p> <p>Which is right? (Probably theirs, I know, but I want to know why mine is wrong, please.)</p>
Brian M. Scott
12,042
<p>When you set the sum at $31$, you’re allowing the extra variable $x_7$ to be $0$. However, the formula that you’re using is for the number of solutions in strictly <strong>positive</strong> integers, so you want to make sure that $x_7$ will have to be at least $1$. You do this by insisting that the total be $32$ rather than $31$.</p>
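The count can be confirmed by brute force (my addition; ages are positive integers, six children, total strictly below $32$): the number of admissible age vectors is $\binom{31}{6}$, matching the $x_1+\dots+x_7=32$ formulation.

```python
from math import comb

def count_age_vectors(children=6, max_total=31):
    # ways[s] = number of ways for the children handled so far to sum to s,
    # each age being a positive integer
    ways = [1] + [0] * max_total
    for _ in range(children):
        new = [0] * (max_total + 1)
        for s in range(max_total + 1):
            if ways[s]:
                for age in range(1, max_total - s + 1):
                    new[s + age] += ways[s]
        ways = new
    return sum(ways)   # totals from `children` up to max_total, i.e. < 32

total = count_age_vectors()
```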
120,808
<pre><code>Limit[Sum[k/(n^2 - k + 1), {k, 1, n}], n -&gt; Infinity] </code></pre> <p>This should converge to <code>1/2</code>, but <code>Mathematica</code> simply returns <code>Indeterminate</code> without calculating (or so it would appear). Any specific reason why it can't handle this? Did I make a mistake somewhere?</p>
QuantumDot
2,048
<p>You can use the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula">Euler-Maclaurin</a> formula to get the limit (the sum can be approximated by an integral, which becomes exact in the infinite limit):</p> <pre><code>f[i_] = i/(n^2 - i + 1); Integrate[f[k], {k, 0, n}, Assumptions -&gt; n &gt; 0] Limit[%, n -&gt; Infinity] </code></pre> <blockquote> <p><code>1/2</code></p> </blockquote>
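Independently of the symbolic route, the limit $1/2$ can be seen numerically (my addition): the partial sums settle toward $0.5$ as $n$ grows.

```python
def s(n):
    # the finite sum: sum_{k=1}^{n} k / (n^2 - k + 1)
    return sum(k / (n * n - k + 1) for k in range(1, n + 1))

values = [s(n) for n in (10, 100, 1000, 5000)]
deviation = abs(values[-1] - 0.5)   # distance from the limit at n = 5000
```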
120,808
<pre><code>Limit[Sum[k/(n^2 - k + 1), {k, 1, n}], n -&gt; Infinity] </code></pre> <p>This should converge to <code>1/2</code>, but <code>Mathematica</code> simply returns <code>Indeterminate</code> without calculating (or so it would appear). Any specific reason why it can't handle this? Did I make a mistake somewhere?</p>
Bob Hanlon
9,362
<p>Reverse the order of the summation. i.e., <code>k -&gt; (n - k + 1)</code></p> <pre><code>s = Sum[(n - k + 1)/(n^2 - (n - k + 1) + 1), {k, 1, n}] // Simplify (* -n + (1 + n^2) PolyGamma[0, 1 + n^2] - (1 + n^2) PolyGamma[0, 1 - n + n^2] *) Limit[s, n -&gt; Infinity] (* 1/2 *) </code></pre> <p>For an alternative representation</p> <pre><code>s2 = FullSimplify[s] (* -n - (1 + n^2) HarmonicNumber[(-1 + n) n] + (1 + n^2) HarmonicNumber[n^2] *) Limit[s2, n -&gt; Infinity] (* 1/2 *) </code></pre>
3,162,294
<p>I've been trying to prove this statement by opening up things on the left hand side using the chain rule but am really getting nowhere. Any tips/hints would be very helpful and appreciated!</p>
user
293,846
<p>Your example is simple: <span class="math-container">$$ p=1-\left (\frac56\right)^{14}-\binom {14}1 \left (\frac56\right)^{13}\left (\frac16\right)\approx 0.704.$$</span></p> <p>In this expression we have subtracted from <span class="math-container">$1$</span> the probabilities of having no or only one 6. The general expression will be much more complicated.</p> <hr /> <p>The general expression: <span class="math-container">$$ p_X=\sum_{k=\max(1,X-K)}^{\min(K,\lfloor\frac X2\rfloor)}\sum_{n=2}^N\binom Nn \left (\frac1K\right)^n \left (\frac{k-1}K\right)^{N-n} n^{1-\delta_{k,\frac X2}}, $$</span> where <span class="math-container">$X$</span> is the sum whose probability is to be computed, <span class="math-container">$K$</span> is the number of dice faces (numbered from <span class="math-container">$1$</span> to <span class="math-container">$K$</span>), <span class="math-container">$N$</span> is the number of rolls, <span class="math-container">$\delta$</span> is the <a href="https://en.wikipedia.org/wiki/Kronecker_delta" rel="nofollow noreferrer">Kronecker delta</a>, and <span class="math-container">$\lfloor x\rfloor$</span> is the <a href="https://en.wikipedia.org/wiki/Floor_and_ceiling_functions" rel="nofollow noreferrer">floor function</a>.</p> <blockquote> <p>Explanation of the formula:</p> <p>Let the two largest numbers be <span class="math-container">$k$</span> and <span class="math-container">$k'$</span> (<span class="math-container">$1\le k\le k'\le K$</span>, <span class="math-container">$k+k'=X$</span>). Any other result of a trial, <span class="math-container">$k''$</span>, must be less than or equal to <span class="math-container">$k$</span>. To avoid over-counting, it is natural to consider separately the arrangements according to the number of trials giving the result &quot;<span class="math-container">$k$</span>&quot;, and then sum the resulting probabilities.
This idea is implemented in the formula in the following way:</p> <p>The first sum runs over all possible values of <span class="math-container">$k$</span>. The limits of summation are chosen to ensure <span class="math-container">$k'=X-k\ge k$</span>. The limits also take care of the finite set of allowed values <span class="math-container">$(1..K)$</span> for <span class="math-container">$k$</span>. In the next sum each <span class="math-container">$n$</span> represents the cardinality of a multiset consisting of one <span class="math-container">$k'$</span> and <span class="math-container">$(n-1)$</span> <span class="math-container">$k$</span>-numbers. The multiset can be placed in the overall result of <span class="math-container">$N$</span> trials in <span class="math-container">$\binom Nn$</span> ways (<span class="math-container">$k$</span> and <span class="math-container">$k'$</span> are assumed for a moment to be indistinguishable). The probability that the final result contains such a multiset is <span class="math-container">$\left (\frac1K\right)^n\left (\frac{k-1}K\right)^{N-n}$</span>, where the last factor stands for the probability of filling the remaining <span class="math-container">$(N-n)$</span> places with numbers <span class="math-container">$k''&lt;k$</span>. Finally, if <span class="math-container">$k\ne k'$</span> (which is equivalent to <span class="math-container">$k\ne\frac X2$</span>), one among the <span class="math-container">$n$</span> places chosen for the multiset must be filled with <span class="math-container">$k'$</span>. This can be done in <span class="math-container">$\binom n1=n$</span> ways, and this is the last factor in the expression.</p> </blockquote> <p>Hope this helps.</p>
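For the opening example (at least two 6s in 14 rolls), the closed form can be cross-checked against the binomial distribution (my addition):

```python
from math import comb

n, p = 14, 1 / 6   # 14 rolls, probability 1/6 of a six per roll

def pmf(k):
    # binomial probability of exactly k sixes in n rolls
    return comb(n, k) * p**k * (1 - p) ** (n - k)

at_least_two = sum(pmf(k) for k in range(2, n + 1))
closed_form = 1 - (5 / 6) ** 14 - 14 * (5 / 6) ** 13 * (1 / 6)
```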
1,063,774
<blockquote> <p>Prove that the <span class="math-container">$\lim_{n\to \infty} r^n = 0$</span> for <span class="math-container">$|r|\lt 1$</span>.</p> </blockquote> <p>I can't think of a sequence to compare this to that'll work. L'Hopital's rule doesn't apply. I know there's some simple way of doing this, but it just isn't coming to me. :(</p>
aes
194,204
<p>I'll assume $0 &lt; |r| &lt; 1$.</p> <p>We want $|r^n| &lt; \epsilon$, or equivalently $(1/|r|)^n &gt; 1/\epsilon$.</p> <p>Write $1/|r| = 1 + a$ with $a &gt; 0$. Then $(1/|r|)^n = (1+a)^n \geq 1 + na$ by the binomial theorem.</p> <p>Choose $N$ such that $1 + Na &gt; 1/\epsilon$, say let $N = \lceil \frac{1}{a\epsilon}\rceil$. Then $|r^n| &lt; \epsilon$ for $n \geq N$.</p>
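The explicit choice of $N$ can be illustrated numerically (a sketch; the particular $r$ and $\epsilon$ are arbitrary):

```python
import math

r, eps = 0.9, 1e-3
a = 1 / abs(r) - 1            # write 1/|r| = 1 + a with a > 0
N = math.ceil(1 / (a * eps))  # the N from the proof
assert 1 + N * a > 1 / eps    # the Bernoulli-type bound used above
assert abs(r) ** N < eps      # hence |r^n| < eps for all n >= N
```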
1,063,774
<blockquote> <p>Prove that the <span class="math-container">$\lim_{n\to \infty} r^n = 0$</span> for <span class="math-container">$|r|\lt 1$</span>.</p> </blockquote> <p>I can't think of a sequence to compare this to that'll work. L'Hopital's rule doesn't apply. I know there's some simple way of doing this, but it just isn't coming to me. :(</p>
Tacet
186,012
<p>If $r=0$ it's trivial, so we can skip this case. Now assume $r\neq0$. </p> <p>$$\lim_{n\to\infty}r^n=0 \Longleftrightarrow(\forall \epsilon \in \mathbb{R}^+)(\exists \delta\in\mathbb{R})(\forall n_{&gt; \delta})(|r^n| &lt; \epsilon)$$</p> <p>Now we can note that both sides of the above inequality are positive. We can use logarithms.</p> <p>$$|r^n|&lt;\epsilon \overset{|r|&lt;1}{\Longleftrightarrow} n &gt; \log_{|r|}\epsilon $$</p> <p>Because $\log_{|r|}\epsilon$ is constant, $\delta = \log_{|r|}\epsilon$ satisfies the thesis.</p> <p>$\mathscr{Q.E.D.}$</p>
1,063,774
<blockquote> <p>Prove that the <span class="math-container">$\lim_{n\to \infty} r^n = 0$</span> for <span class="math-container">$|r|\lt 1$</span>.</p> </blockquote> <p>I can't think of a sequence to compare this to that'll work. L'Hopital's rule doesn't apply. I know there's some simple way of doing this, but it just isn't coming to me. :(</p>
Mr.HiggsBoson
675,350
<p><a href="https://math.stackexchange.com/a/3277943/675350">This</a> is an answer I had earlier posted.</p> <p>This is not a rigorous proof but helps in visualisation.</p> <p>If <span class="math-container">$$-1&lt;x&lt;1$$</span> and you raise it to any power <span class="math-container">$n &gt; 1 $</span>, its absolute value decreases.</p> <p>E.g. say <span class="math-container">$ x = 0.1 , x ^2 = 0.01, x^3 = 0.001$</span>, or <span class="math-container">$x ^n = 0.1 ^ n = \frac{1}{10^n}$</span>, and as <span class="math-container">$ n$</span> keeps getting bigger, the denominator approaches <span class="math-container">$\infty $</span> and so the whole fraction approaches <span class="math-container">$0 $</span>.</p>
47,188
<p>I am an amateur mathematician, and I had an idea which I worked out a bit and sent to an expert. He urged me to write it up for publication. So I did, and put it on arXiv. There were a couple of rounds of constructive criticism, after which he tells me he thinks it ought to go a "top" journal (his phrase, and he went on to name two). </p> <p>His opinion is that my outsider status will have no effect on the reviewing process and my paper will be taken seriously. I am pleased and quite flattered to hear this, and my inclination is to do as he suggests. But I have to say it sounds too good to be true. Does this match other people's experience? I understand it's rare enough for undergraduates to publish anywhere, and I am not even an undergraduate! Surely a "non-mathematician submitting to top mathematical journal" must instantly rank high on the crackpot scale. How often does this actually occur successfully these days? Any advice?</p>
Alex B.
35,416
<p>Think of it this way: any graduate student who submits his first result for publication faces a similar issue: his paper will land on a referee's desk, who has never heard the name of the author before, so according to your logic, the better the result the more it must smell of crackpottery to the referee. Yet, PhD students and young postdocs do get their results published and in fact, they sometimes receive more patience from the referee, when it comes to the writing style, say, than experienced mathematicians.</p> <p>Fact is, that an expert in the respective discipline can recognise 99% of the crackpots within less than 10 seconds. If your write-up passes this 10 second test and looks like a serious attempt at a mathematical problem, then the referee will read it. Of course, he might be extra sceptical about actual mistakes and extra diligent, but that can only be good for you.</p> <p>At any rate, the threshold to receiving a careful reading is not nearly as high as you think. Key to this is a well written abstract and introduction with clear and precise statements of the results and, ideally, a clear indication of where the main difficulties were and how you overcame them, i.e. where the actual novelty lies. That's "all" it takes (I am not claiming that it's easy, but obtaining the result in the first place was likely harder).</p> <p>And one last word: receiving rejections is part of the business. A mathematician who cannot handle them will lead a miserable life. None of us likes rejections, but we all must contend with the possibility of getting one. If you think that you will not be able to digest a rejection, then send the paper to a less prestigious journal. After all, your mental well-being is more important than this kind of prestige, especially since you are not relying on getting things published.</p>
1,659,224
<p>Find a sequence $a_n$ such that $\lim_{n \to \infty}|a_{n+1} - a_n | = 0$ while the sequence does not converge.</p> <p>I am thrown for a loop. For a sequence not to converge it means that for all $\epsilon \geq 0$ and for all $N$, when $n \geq N$, $|a_n - L| \geq \epsilon$ as $n \longrightarrow \infty$</p> <p>or I could change some terms and state it in terms of a Cauchy sequence, but either way how can the terms have a distance of 0 but the sequence not be consider converging? hints??</p>
Pantelis Sopasakis
8,357
<p>Take $a_n=\log(n)$; then</p> <p>$$ |a_{n+1}-a_n|=\log(n+1)-\log(n) = \log(1+\frac{1}{n})\to 0, $$</p> <p>but the sequence diverges. In the definition of <em>Cauchy sequences</em> it is required that $|a_m-a_n|\to 0$ with $m,n$ chosen independently of each other; your hypothesis only controls the particular pair $m=n+1$.</p>
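The behaviour is easy to see numerically (a short Python check; the cutoff $10^5$ is arbitrary):

```python
import math

a = [math.log(n) for n in range(1, 100001)]
gaps = [a[i + 1] - a[i] for i in range(len(a) - 1)]

assert gaps[-1] < 1e-4   # consecutive differences tend to 0
assert a[-1] > 11        # yet log(n) itself grows without bound
```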
737,212
<p>Please how can I easily remember the following trig identities: $$ \sin(\;\pi-x)=\phantom{-}\sin x\quad \color{red}{\text{ and }}\quad \cos(\;\pi-x)=-\cos x\\ \sin(\;\pi+x)=-\sin x\quad \color{red}{\text{ and }}\quad \cos(\;\pi+x)=-\cos x\\ \sin(\frac\pi2-x)=\phantom{-}\cos x\quad \color{red}{\text{ and }}\quad \cos(\frac\pi2-x)=\phantom{-}\sin x\\[12pt] \text{and similar things where we add a radian angle inside cos or sin as you can see} $$</p> <p>So how can I remember all of them? They're pretty confusing! I can easily interchange some and do mistakes thus...</p>
Matt L.
70,664
<p>I wouldn't try to remember them, but try to understand them by visualizing the graphs of the sine and cosine function. If you draw the functions (mentally), you can easily "derive" all of these identities.</p>
737,212
<p>Please how can I easily remember the following trig identities: $$ \sin(\;\pi-x)=\phantom{-}\sin x\quad \color{red}{\text{ and }}\quad \cos(\;\pi-x)=-\cos x\\ \sin(\;\pi+x)=-\sin x\quad \color{red}{\text{ and }}\quad \cos(\;\pi+x)=-\cos x\\ \sin(\frac\pi2-x)=\phantom{-}\cos x\quad \color{red}{\text{ and }}\quad \cos(\frac\pi2-x)=\phantom{-}\sin x\\[12pt] \text{and similar things where we add a radian angle inside cos or sin as you can see} $$</p> <p>So how can I remember all of them? They're pretty confusing! I can easily interchange some and do mistakes thus...</p>
kingW3
130,953
<p>$$\sin(\alpha\pm\beta)=\sin\alpha\cos\beta\pm\sin\beta\cos\alpha\\\cos(\alpha\pm\beta)=\cos\alpha\cos\beta\mp\sin\alpha\sin\beta$$ Logically just replace $\alpha$ and $\beta$ with the value and you'll get the equation you need.</p>
2,565,763
<p>I was playing a standard solitaire game on my mobile app and I came across a round where I couldn't perform a single move thus resulting in a loss. I was then thinking as to what the probability of this event to happen in a single game. Would anybody know this probability and show the calculations given a standard deck of 52 cards? </p> <p>Here is a link to the rules: <a href="https://www.wikihow.com/Play-Solitaire" rel="nofollow noreferrer">https://www.wikihow.com/Play-Solitaire</a></p> <p>Note: There are three cards turned at a time but you can only play the second card after you played the top card. Also, the game is klondike.</p>
mucciolo
222,084
<p>First let's address your confusion with the example.</p> <p><strong>1. If in the end they use the Z statistic to calculate their value of $k$, why don't they just skip straight to calculating the Z statistic in the first place, instead of doing the LRT?</strong></p> <p>Notice that the $Z$ statistic is the tool they utilize to determine $k$. The example is built to show you how we find that the $Z$ statistic is useful for doing the LRT when inferring about the mean of a normal distribution using a simple null hypothesis versus a composite alternative hypothesis. So, in the end they use the $Z$ statistic because through the process they find out that they can use it to determine the value of $k$. Without going through these steps at least one time we would never find out that we can use a $Z$ statistic to determine $k$.</p> <p><strong>2. If they work to make sure they can input $\bar X$ into the Z statistic to be able to decide whether to reject $H_0$, why do they then continue with the LRT, or use it in the first place?</strong></p> <p>They work not to make sure, but to find out that the Z statistic is usable to develop the LRT. Observe that in this case the LRT depends on the Z statistic. Without using the Z statistic we would not be able to determine $k$.</p> <p><strong>3. On top of this, what determines what value you should choose for the null hypothesis?</strong></p> <p>The $H_0$ value is not arbitrarily chosen. Actually it is an educated guess based on the information we already have. Read carefully the example's statement:</p> <blockquote> <p>A food processing company packages honey in small glass jars. 
<strong>Each jar is supposed to contain 10 fluid ounces of the sweet and gooey good stuff.</strong> Previous experience suggests that the volume X, the volume in fluid ounces of a randomly selected jar of the company's honey is <strong>normally distributed with a known variance of 2.</strong></p> </blockquote> <p><strong>Each jar is supposed to contain 10 fluid ounces of the sweet and gooey good stuff.</strong> This means that on average a jar will contain 10 fluid ounces. So whatever distribution the liquid pouring machine follows, we want it to have mean equal to $10$, because we want to fill each jar with 10 fluid ounces. Furthermore we know that $X = N(\mu, 2)$. Hence, since we want to make sure that $\mu = 10$, it is logical to choose $H_0: \mu_0 = 10$. We want to test with $\alpha$ significance if indeed $\mu = 10$.</p> <p>Now some comments on your understanding of the LRT.</p> <blockquote> <p>And you do this by taking the (Log) Likelihood Function, differentiating and setting equal to zero, then solving for $\theta$.</p> </blockquote> <p>It is not always possible to do this. Sometimes we need to find such $\theta$ by inspection or numerically.</p> <blockquote> <p>Otherwise without doing an LRT, it might be that your proposed maximising $\theta$ was only marginally better, and maybe only for this sample, than the null value due to something like sampling error, yet you thought it was definitively better.</p> </blockquote> <p>It is not exactly about maximising $L(\theta)$. You should think more about this. The purpose of a statistical hypothesis test (such as the LRT) is to statistically infer whether the null hypothesis is true or false with probability $1-\alpha$, or in other words, $\alpha$ significance. Such tests are used when we want to obtain new information about the parameters being tested with a confidence of $&lt; 100\%$. 
We could have used other tests that do not make use of the likelihood function.</p> <p>Yet you are on the right track.</p> <p>(Also if you want to maximise the probability of your questions getting answered you should focus on one at a time :p)</p>
3,059,020
<p>The integral is<span class="math-container">$$\int_0^{2\pi}\frac{\mathrm dθ}{2-\cosθ}.$$</span>Just to skip time, the answer of the indefinite integral is <span class="math-container">$\dfrac2{\sqrt{3}}\tan^{-1}\left(\sqrt3\tan\left(\dfracθ2\right)\right)$</span>.</p> <p>Evaluating it from <span class="math-container">$0$</span> to <span class="math-container">$ 2 \pi$</span> yields<span class="math-container">$$\frac2{\sqrt3}\tan^{-1}(\sqrt3 \tanπ)-\frac2{\sqrt3}\tan^{-1}(\sqrt3 \tan0)=0-0=0.$$</span>But using complex analysis, the integral is transformed into<span class="math-container">$$2i\int_C\frac{\mathrm dz}{z^2-4z+1}=2i\int_C\frac{\mathrm dz}{(z-2+\sqrt3)(z-2-\sqrt3)},$$</span> where <span class="math-container">$C$</span> is the boundary of the circle <span class="math-container">$|z|=1$</span>. Then by Cauchy's integral formula, since <span class="math-container">$z=2-\sqrt3$</span> is inside the domain of the region bounded by <span class="math-container">$C$</span>, then: <span class="math-container">$$2i\int_C\frac{\mathrm dz}{(z-2+\sqrt3)(z-2-\sqrt3)}=2πi\frac{2i}{2-\sqrt3-2-\sqrt3}=2πi\frac{2i}{-2\sqrt3}=\frac{2π}{\sqrt3}.$$</span></p> <p>Using real analysis I get <span class="math-container">$0$</span>, using complex analysis I get <span class="math-container">$\dfrac{2π}{\sqrt3}$</span>. What is wrong?</p>
N. S.
9,176
<p>The problem with the real approach is that you make the change of variable <span class="math-container">$t=\tan\left(\dfrac{\theta}{2}\right)$</span> for <span class="math-container">$0 &lt; \theta &lt; 2 \pi$</span>. </p> <p>This is problematic since your substitution needs to be defined and continuous for all <span class="math-container">$\theta$</span>, but you have a problem when <span class="math-container">$\theta=\pi$</span>.</p> <p><strong>Edit:</strong> Note that if you split the integral into <span class="math-container">$\int_0^\pi+\int_\pi^{2 \pi}$</span>, you are going to get the right answer, as for one integral you are going to get <span class="math-container">$\arctan(- \infty)$</span> and for the other <span class="math-container">$\arctan(+\infty)$</span>:</p> <p><span class="math-container">$$\int_0^{2 \pi} \frac{\mathrm{d}θ}{2-\cos \theta}=\int_0^\pi \frac{\mathrm{d}θ}{2-\cos \theta}+\int_\pi ^{2 \pi} \frac{\mathrm{d}θ}{2-\cos \theta}\\ = \lim_{r \to \pi^-} \int_0^r \frac{\mathrm{d}θ}{2-\cos \theta}+ \lim_{w \to \pi^+} \int_w^{2 \pi} \frac{\mathrm{d}θ}{2-\cos \theta}\\= \lim_{r \to \pi^-} \left(\frac{2\tan^{-1}\left( \sqrt{3} \tan\left( \frac{r}{2}\right)\right)}{ \sqrt{3}}-0\right)+ \lim_{w \to \pi^+}\left(0- \frac{2\tan^{-1}\left( \sqrt{3} \tan\left( \frac{w}{2}\right)\right)}{ \sqrt{3}}\right).$$</span></p>
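Numerically, the complex-analysis value is indeed the correct one; a quick midpoint-rule check in Python (the number of subintervals is an arbitrary choice):

```python
import math

# midpoint rule over [0, 2*pi]; the integrand is smooth and periodic,
# so this converges very fast
n = 200000
h = 2 * math.pi / n
total = h * sum(1 / (2 - math.cos((k + 0.5) * h)) for k in range(n))
print(total, 2 * math.pi / math.sqrt(3))  # both ~ 3.6276
```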
730,018
<p>Assume ZFC (and AC in particular) as the background theory.</p> <p>If $(M,\in^M)$ is a model of ZFC (not necessarily transitive or standard), must there exist a bijection between $M$ and $$\{x \in M \mid (M,\in^M) \models x \mbox{ is an ordinal number}\}?$$</p> <p>I am also interested in the cases where $M$ is assumed to be a model of ZFC2.</p> <p><strong>Remark.</strong> I think this is different from what was asked <a href="https://math.stackexchange.com/questions/336956/on-models-of-zfc-does-there-exist-a-bijection-between-von-neumann-universe-and">here</a>. Please comment if you believe otherwise; I am happy to discuss.</p>
Martín-Blas Pérez Pinilla
98,199
<p>Modulo possible subtleties about non-transitive models, I understand that it is the same question, because being an ordinal is absolute for transitive models. About (2), infinite cardinals (and hence all the cardinals) are in bijective correspondence with ordinals via the $\aleph$ function.</p>
4,036,975
<p><span class="math-container">$\require{begingroup} \begingroup$</span> <span class="math-container">$\def\e{\mathrm{e}}\def\W{\operatorname{W}}\def\Wp{\operatorname{W_0}}\def\Wm{\operatorname{W_{-1}}}\def\Catalan{\mathsf{Catalan}}$</span></p> <p><a href="https://math.stackexchange.com/q/3590905/122782">Related question</a></p> <p>Is there a known closed form solution to</p> <p><span class="math-container">\begin{align} I_n&amp;= \int_0^1\frac{\ln(1+x^{2n})}{1+x^2} \,dx =? \tag{0}\label{0} \end{align}</span></p> <p>It checks out numerically, for <span class="math-container">$n=1,\dots,7$</span> that</p> <p><span class="math-container">\begin{align} I_1= \int_0^1\frac{\ln(1+x^2)}{1+x^2} \,dx &amp;= \tfrac\pi2\,\ln2-\Catalan \tag{1}\label{1} ,\\ I_2= \int_0^1\frac{\ln(1+x^{2\cdot2})}{1+x^2} \,dx &amp;= \tfrac\pi2\,\ln(2+\sqrt2)-2\Catalan \tag{2}\label{2} ,\\ I_3= \int_0^1\frac{\ln(1+x^{2\cdot3})}{1+x^2} \,dx &amp;= \tfrac\pi2\,\ln6-3\Catalan \tag{3}\label{3} ,\\ I_4= \int_0^1\frac{\ln(1+x^{2\cdot4})}{1+x^2} \,dx &amp;= \tfrac\pi2\,\ln(4+\sqrt2+2\sqrt{4+2\sqrt2})-4\Catalan \tag{4}\label{4} ,\\ I_5= \int_0^1\frac{\ln(1+x^{2\cdot5})}{1+x^2} \,dx &amp;= \tfrac\pi2\,\ln(10+4\sqrt5)-5\Catalan = \tfrac\pi2\,\ln(10\cot^2\tfrac\pi5)-5\Catalan \tag{5}\label{5} ,\\ I_6= \int_0^1\frac{\ln(1+x^{2\cdot6})}{1+x^2} \,dx &amp;= \tfrac\pi2\,\ln((5\sqrt2+2\sqrt{12})(1+\sqrt2))-6\Catalan \tag{6}\label{6} ,\\ I_7= \int_0^1\frac{\ln(1+x^{2\cdot7})}{1+x^2} \,dx &amp;= \tfrac\pi2\,\ln(14\cot^2\tfrac\pi7)-7\Catalan \tag{7}\label{7} , \end{align}</span></p> <p>so \eqref{0} seems to follow the pattern</p> <p><span class="math-container">\begin{align} I_n&amp;= \tfrac\pi2\,\ln(f(n))-n\Catalan \tag{8}\label{8} \end{align}</span></p> <p>for some function <span class="math-container">$f$</span>.</p> <p>Items \eqref{5} and \eqref{7} look promising as they agree to <span class="math-container">$f(n)=2n\cot^2(\tfrac\pi{n})$</span>, but the other fail on that.</p> <hr /> 
<p><strong>Edit:</strong></p> <p>Also, it looks like <span class="math-container">\begin{align} \int_1^\infty\frac{\ln(1+x^{2n})}{1+x^2} \,dx &amp;=\tfrac\pi2\,\ln(f(n))+n\Catalan \tag{9}\label{9} \end{align}</span></p> <p>and</p> <p><span class="math-container">\begin{align} \int_0^\infty\frac{\ln(1+x^{2n})}{1+x^2} \,dx &amp;=\pi\,\ln(f(n)) \tag{10}\label{10} \end{align}</span></p> <p>with the same <span class="math-container">$f$</span>.</p> <hr /> <p><strong>Edit</strong></p> <p>Thanks to <a href="https://math.stackexchange.com/a/4037168/122782">the great answer by @Quanto</a>, the function <span class="math-container">$f$</span> can be defined as</p> <p><span class="math-container">\begin{align} f(n)&amp;= 2^n\!\!\!\!\!\!\!\!\!\! \prod_{k = 1}^{\tfrac{2n-1+(-1)^n}4} \!\!\!\!\!\!\!\!\! \cos^2\frac{(n+1-2k)\pi}{4n} \tag{11}\label{11} . \end{align}</span></p> <p><span class="math-container">$\endgroup$</span></p>
Varun Vejalla
595,055
<p>Not a complete answer, but an elaboration of what I said in the comments. To evaluate <span class="math-container">$$\int_1^\infty\frac{\ln(1+x^{2n})}{1+x^2} dx$$</span></p> <p>Write it as <span class="math-container">$$\int_1^\infty\frac{\ln\left(x^{-2n}+1\right)+\ln\left(x^{2n}\right)}{1+x^2} dx = \int_1^\infty\frac{\ln\left(x^{-2n}+1\right)}{1+x^2} dx+2n\int_1^\infty\frac{\ln\left(x\right)}{1+x^2} dx$$</span></p> <p>The second integral is a standard integral for Catalan's constant. To solve the first, make the substitution <span class="math-container">$x \to \frac{1}{x}$</span> to get <span class="math-container">$$\int_0^1\frac{\ln(1+x^{2n})}{1+x^2} dx + 2nG$$</span></p> <p>So if you know the value of any of the integrals with bounds <span class="math-container">$(0,\infty)$</span> or <span class="math-container">$(1,\infty)$</span> or <span class="math-container">$(0,1)$</span>, you could find the other two.</p> <hr /> <p>The series for <span class="math-container">$\ln(1+x)$</span> converges for <span class="math-container">$|x|&lt;1$</span>, so expand it to get <span class="math-container">$$\int_0^1\frac{\sum_{k=1}^{\infty}\frac{\left(-1\right)^{k+1}}{k}x^{2nk}}{1+x^2}dx = \sum_{k=1}^{\infty}\frac{\left(-1\right)^{k+1}}{k}\int_{0}^{1}\frac{x^{2nk}}{1+x^{2}}dx$$</span></p> <p>Then you can expand <span class="math-container">$\frac{1}{1+x^2}$</span> to get <span class="math-container">$$\sum_{k=1}^{\infty}\frac{\left(-1\right)^{k+1}}{k}\int_{0}^{1}x^{2nk}\sum_{m=0}^{\infty}\left(-1\right)^{m}x^{2m}dx = \sum_{k=1}^{\infty}\frac{\left(-1\right)^{k+1}}{k}\sum_{m=0}^{\infty}(-1)^m\int_{0}^{1}x^{2nk}x^{2m}dx$$</span></p> <p>Finally, evaluate the inner integral to get <span class="math-container">$$\sum_{k=1}^{\infty}\frac{\left(-1\right)^{k+1}}{k}\sum_{m=0}^{\infty}(-1)^m\frac{1}{2nk+2m+1}$$</span></p>
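For a quick numerical confirmation, truncating this double sum at a modest depth for <span class="math-container">$n=1$</span> already approaches the closed form <span class="math-container">$\tfrac\pi2\ln 2-G$</span> quoted in the question (a Python sketch; the truncation depths are arbitrary):

```python
import math

G = 0.915965594177219    # Catalan's constant
n, K, M = 1, 1000, 1000  # truncation depths (arbitrary choices)

s = sum((-1)**(k + 1) / k
        * sum((-1)**m / (2*n*k + 2*m + 1) for m in range(M))
        for k in range(1, K))
closed = math.pi / 2 * math.log(2) - n * G
print(s, closed)  # agree to roughly two decimal places
```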
603,291
<p>Suppose $f:(a,b) \to \mathbb{R} $ satisfy $|f(x) - f(y) | \le M |x-y|^\alpha$ for some $\alpha &gt;1$ and all $x,y \in (a,b) $. Prove that $f$ is constant on $(a,b)$. </p> <p>I'm not sure which theorem should I look to prove this question. Can you guys give me a bit of hint? First of all how to prove some function $f(x)$ is constant on $(a,b)$? Just show $f'(x) = 0$?</p>
Yiorgos S. Smyrlis
57,021
<p>It suffices to show that $f$ is differentiable and its derivative vanishes everywhere.</p> <p>The hypothesis implies that, for every $x\in(a,b)$ and $h$, such that $x+h\in(a,b)$, we have that</p> <p>$$ \frac{|f(x+h)-f(x)-0\cdot h|}{|h|} \le M\,|h|^{\alpha-1}. $$</p> <p>The right hand side of the above tends to zero, as $h\to 0$, and therefore $f'(x)=0$!</p>
3,690,076
<p>Is there a rigorous proof that <span class="math-container">$|G|=|\text{Ker}(f)||\text{Im}(f)|$</span>, for some homomorphism <span class="math-container">$f\,:\,G\rightarrow G'$</span>? Can anyone provide such a proof with explanations?</p>
Didier
788,724
<p>Consider <span class="math-container">$f : G \to \mathrm{Im}f$</span>. It is by construction a surjective morphism. <span class="math-container">$\ker f$</span> is a subgroup of <span class="math-container">$G$</span>, and <span class="math-container">$f$</span> descends to the quotient as <span class="math-container">$\overline{f} : G/\ker f \to \mathrm{Im}f$</span>. This map is surjective and injective by construction, hence it's an isomorphism. So <span class="math-container">$|G/\ker f| = |\mathrm{Im} f|$</span>. It is then easy to show that <span class="math-container">$|G / \ker f| = |G| / |\ker f|$</span> for finite <span class="math-container">$G$</span>.</p>
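For finite groups the counting identity is easy to verify on a concrete example (a Python sketch; taking <span class="math-container">$G=\mathbb Z/6$</span> with the doubling map is my own choice of example):

```python
# f : Z/6 -> Z/6, f(x) = 2x mod 6, is a homomorphism of additive groups
G = range(6)
f = lambda x: (2 * x) % 6

kernel = {x for x in G if f(x) == 0}        # {0, 3}
image = {f(x) for x in G}                   # {0, 2, 4}
assert len(G) == len(kernel) * len(image)   # 6 == 2 * 3
```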
1,017,738
<p>So I know that the derivative of arccos is: $-dx/\sqrt{1-x^2}$</p> <p>So how would I find the derivative of $\arccos(x^2)$? What does the $-dx$ mean in the above formula?</p> <p>Would it just be $-2x/\sqrt{1-x^2}$ ?</p>
Michael Hardy
11,667
<p>"$-dx$" should not be there.</p> <p>$$ \frac{d}{dx} \arccos(x^2) = \frac{-1}{\sqrt{1-(x^2)^2}} \cdot \frac d {dx} x^2 = \frac{-2x}{\sqrt{1-x^4}}. $$</p>
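A central-difference check of the result (a Python sketch; the sample point is arbitrary):

```python
import math

def g(x):
    return math.acos(x * x)

def g_prime(x):
    # the derivative computed above: -2x / sqrt(1 - x^4)
    return -2 * x / math.sqrt(1 - x**4)

x, h = 0.5, 1e-6
numeric = (g(x + h) - g(x - h)) / (2 * h)   # central difference
assert abs(numeric - g_prime(x)) < 1e-6
```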
2,009,134
<p>Suppose $$\frac{{{{\sin }^4}(\alpha )}}{a} + \frac{{{{\cos }^4}(\alpha )}}{b} = \frac{1}{{a + b}}$$ for some $a,b\ne 0$. </p> <p>Why does $$\frac{{{{\sin }^8}(\alpha )}}{{{a^3}}} + \frac{{{{\cos }^8}(\alpha )}}{{{b^3}}} = \frac{1}{{{{(a + b)}^3}}}$$</p>
Community
-1
<p>The general expression is:$$\frac{\sin^{4n}\theta}{a^{2n-1}} + \frac{\cos^{4n}\theta}{b^{2n-1}} = \frac{1}{(a+b)^{2n-1}}~, ~~n\in \mathbb N$$</p> <p>From the given relation, it can be written \begin{align}(a+b)\left(\frac{\sin^4\theta}{a}+ \frac{\cos^4\theta}{b}\right) &amp;= \left(\sin^2\theta + \cos^2\theta\right)^2\\ \implies\sin^4\theta + \cos^4\theta + \frac ba\sin^4\theta + \frac ab\cos^4\theta &amp;= \sin^4\theta + 2\sin^2\theta\cos^2\theta + \cos^4\theta\\ \implies~~~~~~~~~~~ \left(\sqrt{\frac ba}\sin^2\theta - \sqrt\frac ab\cos^2\theta \right)^2&amp;= 0\\ \implies~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \frac{\sin^2\theta}{a}&amp;= \frac{\cos^2\theta }{b}\\ \implies~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \frac{\sin^2\theta}{a}= \frac{\cos^2\theta }{b} &amp;= \frac{\sin^2\theta + \cos^2\theta}{a+b}\end{align}</p> <p>From this, it can be concluded that $$\sin^2 \theta = \frac a{a+b};~~~\cos^2\theta = \frac{b}{a+b} \,.$$</p> <p>It's a matter of substituting the values to prove the general and the desired expression.</p>
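The conclusion can be checked numerically (a Python sketch; <span class="math-container">$a=2$</span>, <span class="math-container">$b=3$</span> are arbitrary positive values):

```python
import math

a, b = 2.0, 3.0
# the hypothesis forces sin^2(theta) = a/(a+b)
t = math.asin(math.sqrt(a / (a + b)))

s4 = math.sin(t)**4 / a + math.cos(t)**4 / b
s8 = math.sin(t)**8 / a**3 + math.cos(t)**8 / b**3
assert abs(s4 - 1 / (a + b)) < 1e-12
assert abs(s8 - 1 / (a + b)**3) < 1e-12
```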
209,728
<p>I'm attempting to calculate the gradient of a function defined by two variables. After following a tutorial I found, I have the following code: </p> <pre><code>f[x_, y_] = (2 x^4 - x^2 + 3 x - 7 y + 1)/(e^(2 x^2/3 + 3 y^2/4)); (*t is gradient of f*) t[{x_, y_}] := { D[f[x, y], x], D[f[x, y], y] }; (* Trying to use NestList to basically calculate gradient descent 'n' times/ use Euler's method starting at the point (.510, -.445) *) NestList[t, {.51, -.445}, n] </code></pre> <p>However, the output I'm getting is ".51" and "-.445" are not valid variables. I understand this is because I when I apply NestList, I am essentially taking the derivative with respect to ".51" and "-.445" which isn't possible obviously. However, I am unsure of how to apply NestList with respect to the variables <em>x</em> and <em>y</em> at the point <strong>(.51,-.445)</strong>.</p>
J. M.'s persistent exhaustion
50
<p>What you have is not quite the way to do steepest descent. Recall that you need to pick a step size for the direction where the next point is headed. In fancy code, that is done through some form of line search, but to keep things simple here, let us use a fixed step size.</p> <pre><code>γ = 1/10; (* step size *) iter[{x_, y_}] = {x, y} - γ D[f[x, y], {{x, y}}]; </code></pre> <p>(Note that I have used here the special syntax of <code>D[]</code> for evaluating the gradient; of course, <code>D[f[x, y], {{x, y}}] == {D[f[x, y], x], D[f[x, y], y]}</code>.)</p> <p>Generate the iterates until convergence and look at the last two:</p> <pre><code>its = FixedPointList[iter, {-0.8, 0.4}]; Take[its, -2] {{-0.414549, 0.791434}, {-0.414549, 0.791434}} </code></pre> <p>(If you only want a certain number of iterates, replace <code>FixedPointList[]</code> with <code>NestList[]</code>.)</p> <p>Visualize the steepest-descent path:</p> <pre><code>ContourPlot[f[x, y], {x, -1, 0}, {y, 0, 1}, Epilog -&gt; {Directive[Orange, Arrowheads[Small]], Arrow[its]}, PlotRange -&gt; All] </code></pre> <p><img src="https://i.stack.imgur.com/WSav5.png" alt="steepest descent"></p>
694,090
<p>Let $V$ and $W$ be vector spaces over $\Bbb{F}$ and $T:V \to W$ a linear map. If $U \subset V$ is a subspaec we can consider the map $T$ for elements of $U$ and call this the restriction of $T$ to $U$, $T|_{U}: U \to W$ which is a map from $U$ to $W$. Show that</p> <p>$$\ker T|_{U} = \ker T\cap U.$$</p> <p>I know the definition of a linear map is </p> <p>$f(x+y)=f(x) +f(y)$ and </p> <p>$f(ax)=a\cdot f(x)$</p> <p>I also know the kernel is the set of points which are mapped to zero.</p> <p>However, I am struggling to piece this all together.</p> <p>Thanks in advance for all the help.</p>
naslundx
130,817
<p>Indeed the kernel is the set of points mapped to zero, i.e.</p> <p>$$\ker{T} = \{x \in V | T(x) = 0\}$$</p> <p>When $T$ is restricted to $U$, we need only consider points $x \in U$:</p> <p>$$\ker{T|_U} = \{x \in U | T(x) = 0\}$$</p> <p>Since $U \subset V$, this is the same as the set of points $x$ that are both in $\ker{T}$ and in $U$, in other terms:</p> <p>$$\ker{T|_U} = \ker{T} \cap U$$</p>
1,265,987
<p>I am working on some Automata practice problems. I am working a 2 part question. Here it is:</p> <blockquote> <p>Let $\Sigma = \{a,b\}$ be an alphabet. Let $L = \left\{w \in \Sigma^* \mid n_a(w) \le 4\right\}$</p> <p>a) Create a transition graph for $L$</p> <p>b) Find a regular expression for $L$</p> </blockquote> <hr> <p>Here is my solution. I am no too sure about part b. If anyone can help me out, I'd really appreciate it. Thank you.</p> <p><img src="https://i.stack.imgur.com/uktBo.jpg" alt="enter image description here"></p>
mvw
86,776
<p>I have difficulties reading your answer from the image, but the DFA looks fine, while the regexp does not. </p> <p>Counter example: $ba$ seems not to be recognized.</p>
325,519
<p>I'm reading a book about inverse analysis and trying to figure out how the authors do the inversion.</p> <p>Assume that matrix $C$ is $$ C ~=~ \begin{bmatrix} 88.53 &amp; -33.60 &amp; -5.33 \\ -33.60 &amp; 15.44 &amp; 2.67 \\ -5.33 &amp; 2.67 &amp; 0.48 \end{bmatrix} $$ and at some point authors diagonalize this matrix to calculate matrix $P$ using $$ C^{-1} ~=~ P^{\rm t} L P $$ where $L$ is a diagonal matrix of positive eigenvalues and the columns of $P$ are orthonormal eigenvectors.</p> <p>The above equation for diagonalizing inverse of $C$ is a bit different from what is usually used and therefore I cannot calculate $P$ correctly (same as the book!). So, that would be great if somebody can show me the way to calculate $P$. $$ P ~=~ \begin{bmatrix} 0.93 &amp; 0.36 &amp; -0.03 \\ -0.36 &amp; 0.90 &amp; -0.23 \\ -0.06 &amp; 0.23 &amp; 0.97 \end{bmatrix} $$</p>
AndreasT
53,739
<p>The spectral theorem ensures that since $C$ is symmetric it has 3 real eigenvalues and its eigenspaces are orthogonal. Let $\lambda_1,\lambda_2,\lambda_3$ be the eigenvalues and $\vec v_1,\vec v_2,\vec v_3$ be orthonormal eigenvectors (whose existence is granted by the spectral theorem; note that $\lambda_i$ need not be distinct, since you can always orthonormalize a basis with the <a href="http://en.wikipedia.org/wiki/Gram-Schmidt_process" rel="nofollow">Gram-Schmidt process</a>). $\vec v_i$ are such that $C\vec v_i=\lambda_iv_i$ ($i=1,2,3$): in matrix notation this means $$ C \begin{bmatrix} ~\\ \vec v_1 &amp; \vec v_2 &amp; \vec v_3 \\~ \end{bmatrix} ~=~ \begin{bmatrix} ~\\ \vec v_1 &amp; \vec v_2 &amp; \vec v_3 \\~ \end{bmatrix} \begin{bmatrix} \lambda_1\\ &amp; \lambda_2\\ &amp;&amp; \lambda_3 \end{bmatrix} $$ where $\vec v\in\mathbb R^{3\times 1}$ is intended as a column vector. Therefore, setting $$ D = \begin{bmatrix} \lambda_1\\ &amp; \lambda_2\\ &amp;&amp; \lambda_3 \end{bmatrix} \quad\text{and}\quad P = \begin{bmatrix} ~\\ \vec v_1 &amp; \vec v_2 &amp; \vec v_3 \\~ \end{bmatrix} $$ you have that $C=PDP^{-1}$. Since the columns of $P$ are orthonormal, it follows that $P^{-1}=P^{\rm t}$ is the transpose of $P$, therefore $$ C=PDP^{\rm t} $$ Now, if you want to compute any power of $C$ (or just any integer power, if you work with real matrices and $\lambda_i$ are not all non-negative) you have that $$ C^n = PD^nP^{\rm t} $$ (since $P^{\rm t}P=\rm I$, the identity matrix). 
Choosing $n=-1$ you have $$ C^{-1} ~=~ P \begin{bmatrix} \lambda_1^{-1}\\ &amp; \lambda_2^{-1}\\ &amp;&amp; \lambda_3^{-1} \end{bmatrix} P^{\rm t} $$ so that $L=D^{-1}$.</p> <p>To sum up, $P$ is the matrix whose $i$-th column is the eigenvector $\vec v_i$ with eigenvalue $\lambda_i$, where $\vec v_1,\vec v_2,\vec v_3$ are orthonormal.</p> <p>(In your case, since you require $C^{-1}=P^{\rm t}LP$ rather than $PLP^{\rm t}$, $P$ is the matrix whose $i$-th <em>row</em> is the eigenvector $\vec v_i$, i.e. the transpose of the $P$ above.)</p>
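Here is a minimal pure-Python illustration on the <span class="math-container">$2\times2$</span> leading block of the question's matrix (my own simplification; for the full <span class="math-container">$3\times3$</span> case one would normally call a library eigensolver):

```python
import math

# symmetric 2x2 block taken from the matrix in the question
a, b, c = 88.53, -33.60, 15.44

# closed-form eigenvalues of [[a, b], [b, c]]
m = (a + c) / 2
d = math.sqrt(((a - c) / 2)**2 + b**2)
l1, l2 = m + d, m - d

# orthonormal eigenvectors: (l - c, b) is an eigenvector for eigenvalue l
v1 = (l1 - c, b)
n1 = math.hypot(*v1)
v1 = (v1[0] / n1, v1[1] / n1)
v2 = (-v1[1], v1[0])          # orthogonal unit vector, eigenvector for l2

# columns of P are the eigenvectors; rebuild P D P^t entrywise
P = [[v1[0], v2[0]], [v1[1], v2[1]]]
lam = [l1, l2]
rebuilt = [[sum(P[i][k] * lam[k] * P[j][k] for k in range(2))
            for j in range(2)] for i in range(2)]

C = [[a, b], [b, c]]
assert all(abs(rebuilt[i][j] - C[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```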
19,848
<p><strong>{Xn}</strong> is a sequence of independent random variables each with the same <strong>Sample Space {0,1}</strong> and <strong>Probability {1-1/$n^2$ ,1/$n^2$}</strong> <br> <em>Does this sequence converge with probability one (Almost Sure) to the <strong>constant 0</strong>?</em> <br><br>Essentially the {Xn} will look like this (its random,but just to show that the frequency of ones drops)<br> 010010000100000000100000000000000001......</p>
Nate Eldredge
822
<p>Hint: The Borel-Cantelli lemma.</p>
2,252,256
<p>The Chernoff bound, being a sharp quantitative version of the law of large numbers, is incredibly useful in many contexts. Some general applications that come to mind (which I guess are really the same idea) are:</p> <ul> <li>bounding the sample complexity of PAC algorithms;</li> <li>estimating confidence intervals for polling (somewhat surprisingly, the Chernoff bound tells you that if you want to poll a population of $N$ people, the number $N$ doesn't really matter for the tradeoff between randomly sampled people and accuracy of the empirical average)</li> <li>more generally, very often in the analysis of randomized algorithms you need to argue that you have 'enough' samples, and Chernoff bounds are the way to go.</li> </ul> <p>By now I feel like I have a good intuitive grasp of the power and limitations of Chernoff bounds. Basically, my question is about getting a similar understanding of matrix Chernoff bounds:</p> <ul> <li>How do I obtain a similar palette of 'classical' applications of matrix Chernoff bounds?</li> <li>What are some of the nice proofs they give us?</li> <li>Have they substantially simplified previous work that didn't explicitly use them?</li> </ul>
Shravas Rao
37,588
<p>I think matrix Chernoff bounds are still relatively new, so maybe there aren't really classical applications yet. But personally, I've used Tropp's monograph (<a href="https://arxiv.org/abs/1501.01571" rel="nofollow noreferrer">https://arxiv.org/abs/1501.01571</a>) and sections 5 and 6 of "Derandomizing the Ahlswede-Winter matrix-valued Chernoff bound using pessimistic estimators, and applications" (<a href="http://theoryofcomputing.org/articles/v004a003/" rel="nofollow noreferrer">http://theoryofcomputing.org/articles/v004a003/</a>) as references for applications.</p> <p>Some examples of matrix Chernoff bounds simplifying previous work include the Alon-Roichman theorem (simplified here - <a href="http://www.combinatorics.org/ojs/index.php/eljc/article/view/v11i1r62" rel="nofollow noreferrer">http://www.combinatorics.org/ojs/index.php/eljc/article/view/v11i1r62</a>) and results on Matrix Completion (simplified here - <a href="http://arxiv.org/abs/0910.0651" rel="nofollow noreferrer">http://arxiv.org/abs/0910.0651</a>).</p>
4,097,262
<p><a href="https://i.stack.imgur.com/E1UoR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E1UoR.png" alt="Cubic equation with a unit radius circle" /></a> A cubic curve and a circle (unit radius) intersect at A, B, C, D. ABCD is a square. <em>Find the angle <span class="math-container">$\theta$</span>.</em></p> <p>I tried:</p> <ol> <li><p><span class="math-container">$(0,0)$</span> is a solution, so the constant term is <span class="math-container">$0$</span>.</p> </li> <li><p>Substituting A(x,y) and C(-x,-y) and adding the two equations gives that the coefficient of <span class="math-container">$x^2$</span> is 0.</p> </li> </ol> <p>Then the cubic becomes f(x) = <span class="math-container">$ax^3+bx$</span>.</p> <p>3. Substituting A and B and adding the two equations.</p> <p>I found this interesting: <a href="https://math.stackexchange.com/questions/3680693/prove-that-from-n1-given-points-a-unique-polynomial-of-n-degree-will-pass">for n given points we can find a unique polynomial of degree n+1</a></p> <p>Also - can complex numbers be used here?</p> <p><strong>Please note</strong>: I am not sure whether we can find the angle (an integer) without knowing the coefficients of the cubic.</p> <p><strong>EDIT</strong>: From the answers<br /> 1. putting A <span class="math-container">$(\cos\theta,\sin\theta)$</span> in f(x): <span class="math-container">$a\cos^3\theta + b \cos\theta = \sin\theta$</span></p> <p>2. putting B <span class="math-container">$(-\sin\theta,\cos\theta)$</span> in f'(x): <span class="math-container">$3a\sin^2\theta + b = \tan\theta$</span> [as the circle has slope <span class="math-container">$\tan\theta$</span> at B]</p> <p>Equations <span class="math-container">$1, 2$</span> give <span class="math-container">$3a\sin^2\theta = a\cos^2\theta$</span></p> <p>So, <span class="math-container">$\sin^2\theta = \frac{1}{4}$</span></p> <p><em>But I am getting the value of <span class="math-container">$\theta$</span>, while an answer shows a plot of many cubics</em> -&gt; because in my case <span 
class="math-container">$ABCD$</span> is a square.</p>
Jean Marie
305,862
<p>We are looking for a third degree curve <span class="math-container">$(C)$</span> with Cartesian equation:</p> <p><span class="math-container">$$y=ax(x^2-B)\tag{1}$$</span></p> <p>Due to the fact it is an odd function, we can restrict our attention to the <span class="math-container">$x&gt;0$</span> part.</p> <p>The rest will follow by symmetry with respect to the origin.</p> <p>I am going to show that the solution is:</p> <p><span class="math-container">$$a=\tfrac{8}{\sqrt{3}}, \ \ B=\tfrac{5}{8} \tag{2}$$</span></p> <p>A parametric representation of <span class="math-container">$(C)$</span> is obtained (a classical method) by intersecting it with the line having the equation <span class="math-container">$y=tx$</span>, where <span class="math-container">$t$</span> is to be interpreted (this is important) as <span class="math-container">$t:=\tan \theta$</span>:</p> <p><span class="math-container">$$\begin{cases}x&amp;=&amp;\sqrt{\frac{t}{a}+B}\\y&amp;=&amp;t\sqrt{\frac{t}{a}+B}\end{cases}\tag{3}$$</span></p> <p>Intersecting now <span class="math-container">$(C)$</span> with the unit circle <span class="math-container">$U$</span> (<span class="math-container">$x^2+y^2=1$</span>) gives the equation:</p> <p><span class="math-container">$$t^3+t^2aB+t+a(B-1)=0\tag{4}$$</span></p> <p>As <span class="math-container">$\tan \theta = t$</span> is a root, <span class="math-container">$-\tfrac{1}{t}$</span> must also be a root, to fulfill the orthogonality condition (think of <span class="math-container">$f'(x)$</span> vs. <span class="math-container">$-\dfrac{1}{f'(x)}$</span>). 
Moreover, this root has to be doubled for the tangency condition.</p> <p>Therefore, equation (4) must have the form:</p> <p><span class="math-container">$$(t-t_0)(t+\tfrac{1}{t_0})^2=0\tag{5}$$</span></p> <p>which encompasses <em>all the constraints</em> of the issue.</p> <p>Identifying coefficients in (4) and (5) gives the 3 equations:</p> <p><span class="math-container">$$aB=\tfrac{2}{t_0}-t_0, \ \ \ \ (\tfrac{1}{t_0})^2 - 2 = 1, \ \ \ \ -\tfrac{1}{t_0}=a(B-1)$$</span></p> <p>implying <span class="math-container">$t_0=\tfrac{1}{\sqrt{3}}$</span> and, at once, (2).</p> <p>Of course, we also retrieve the angle <span class="math-container">$\theta_0=\operatorname{atan}(\tfrac{1}{\sqrt{3}})=\pi/6$</span>.</p> <p>The following figure displays different curves giving a rectangle, with, in red, the solution curve with coefficients given by (2), giving a square:</p> <p><a href="https://i.stack.imgur.com/6ZgZ7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6ZgZ7.jpg" alt="enter image description here" /></a></p> <p>Edit: the old (long) solution is kept below; I think it may also be useful for someone.</p> <p>We are looking for a third degree function with Cartesian equation:</p> <p><span class="math-container">$$y=ax(x^2-B)\tag{1}$$</span></p> <p>Due to the fact it's an odd function, we can restrict our attention to the <span class="math-container">$x&gt;0$</span> part. 
The rest will follow by symmetry with respect to the origin.</p> <p>I am going to show that the solution is:</p> <p><span class="math-container">$$a=\tfrac{8}{\sqrt{3}}, \ \ B=\tfrac{5}{8} \tag{2}$$</span></p> <p>Intersecting the cubic curve with the line with equation <span class="math-container">$y=tx$</span>, where <span class="math-container">$t$</span> is to be interpreted (this is important) as <span class="math-container">$t:=\tan \theta$</span>, we get the parametric representation:</p> <p><span class="math-container">$$\begin{cases}x&amp;=&amp;\sqrt{\frac{t}{a}+B}\\y&amp;=&amp;t\sqrt{\frac{t}{a}+B}\end{cases}\tag{3}$$</span></p> <p>Intersecting the curve with the unit circle <span class="math-container">$U$</span> (<span class="math-container">$x^2+y^2=1$</span>) gives the equation:</p> <p><span class="math-container">$$t^3+t^2aB+t+a(B-1)=0\tag{4}$$</span></p> <p>Now, as <span class="math-container">$\tan \theta = t$</span> is a root, we must also have <span class="math-container">$-\tfrac{1}{t}$</span> as a root, for the fulfillment of the orthogonality condition.</p> <p>A) First solution (long!)</p> <p><span class="math-container">$$\left(-\tfrac1t\right)^3+\left(-\tfrac1t\right)^2aB+\left(-\tfrac1t\right)+a(B-1)=0$$</span></p> <p>otherwise said</p> <p><span class="math-container">$$a(B-1)t^3-t^2+aBt-1=0\tag{5}$$</span></p> <p>Eliminating <span class="math-container">$t$</span> from (4) and (5) gives a first relationship between <span class="math-container">$A:=a^2$</span> and <span class="math-container">$B$</span>:</p> <p><span class="math-container">$$2AB^2-3AB+A+2=0\tag{6}$$</span></p> <p>Besides, the three roots of (4) must be such that two of them are the same (a double root), in order to express that <span class="math-container">$D$</span> is a tangency point. 
This is done by setting the discriminant of equation (4) to <span class="math-container">$0$</span>, giving a second relationship between <span class="math-container">$A$</span> and <span class="math-container">$B$</span>:</p> <p><span class="math-container">$$ 4 A^2 (B^4 - B^3) + A (8 B^2-36 B +27) + 4 =0\tag{7}$$</span></p> <p>Constraints (6) and (7) give the unique solution (2).</p> <p>The following figure displays different curves (for the values <span class="math-container">$B=0.1, 0.3, 0.5, 0.7, 0.9$</span>) fulfilling (7), and the solution curve (in red) with coefficients given by (2).</p> <p>Last but not least, plugging the values of <span class="math-container">$a$</span> and <span class="math-container">$B$</span> into equation (4), we get the following roots:</p> <p><span class="math-container">$$t_1=\tan \theta= \sqrt{3}/3, \ \ t_2=t_3=-\sqrt{3}\tag{8}$$</span></p> <p>corresponding to angles</p> <p><span class="math-container">$$\theta_1=30° \ \text{and} \ \theta_2=\theta_3=-60°\tag{9}$$</span></p> <p>which is very simple in fact.</p> <p>Remark: (6) and (7) have been obtained using a Computer Algebra System. 
For example (6), resulting from the elimination of <span class="math-container">$t$</span> between equations (4) and (5), has been obtained using the following request with Wolfram Alpha:</p> <p><code>Factor[Resultant[t^3 + t^2*a*B + t + a*(B-1), a*(B-1)*t^3 - t^2 + a*B*t - 1, t]]</code></p> <p>B) Second (very short) solution:</p> <p>As there must be a single root <span class="math-container">$t_0$</span> (assumed <span class="math-container">$&gt;0$</span>) and a double root of the form <span class="math-container">$-\dfrac{1}{t_0}$</span>, equation (4) must have the form:</p> <p><span class="math-container">$$(t-t_0)(t+\tfrac{1}{t_0})^2=0\tag{10}$$</span></p> <p>taking into account all the constraints of the issue. Identification of coefficients in (4) and (10) gives the 3 equations:</p> <p><span class="math-container">$$aB=\tfrac{2}{t_0}-t_0, \ \ \ \ (\tfrac{1}{t_0})^2 - 2 = 1, \ \ \ \ -\tfrac{1}{t_0}=a(B-1)$$</span></p> <p>implying <span class="math-container">$t_0=\tfrac{1}{\sqrt{3}}$</span>, from which (2) immediately follows!</p> <p>Of course, we also retrieve the angle <span class="math-container">$\theta_0=\operatorname{atan}(\tfrac{1}{\sqrt{3}})=30°$</span>.</p>
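<p>The claimed solution is easy to verify numerically: with $a=8/\sqrt3$ and $B=5/8$, equation (4) should have the simple root $1/\sqrt3=\tan 30°$ and the double root $-\sqrt3$ (double, so the derivative vanishes there too):</p>

```python
import math

a = 8 / math.sqrt(3)
B = 5 / 8

def f(t):
    # equation (4): t^3 + a*B*t^2 + t + a*(B-1)
    return t**3 + a * B * t**2 + t + a * (B - 1)

def df(t):
    return 3 * t**2 + 2 * a * B * t + 1

assert abs(f(1 / math.sqrt(3))) < 1e-9      # simple root tan(30 deg)
assert abs(f(-math.sqrt(3))) < 1e-9         # root at -sqrt(3)...
assert abs(df(-math.sqrt(3))) < 1e-9        # ...which is indeed double
```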
4,315,844
<p>Find the greatest number <span class="math-container">$k$</span> such that there exists a perfect square that is not a multiple of 10, with its last <span class="math-container">$k$</span> digits the same</p> <p>I could find <span class="math-container">$12^2 = 144$</span>, <span class="math-container">$38^2 = 1444$</span>, <span class="math-container">$62^2 = 3844$</span> and <span class="math-container">$88^2 = 7744$</span></p>
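<p>A brute-force check suggests the answer is $k=3$ (realised by $38^2=1444$), and that no square not divisible by $10$ can end in four identical digits. Since the last four digits of $n^2$ depend only on $n \bmod 10^4$, searching $n < 10^4$ is exhaustive for four-digit endings:</p>

```python
def trailing_repeats(m):
    """Length of the run of identical digits at the end of m."""
    s = str(m)
    d = s[-1]
    k = 0
    for ch in reversed(s):
        if ch == d:
            k += 1
        else:
            break
    return k

best = 0
for n in range(1, 10**4):
    sq = n * n
    if sq % 10 == 0:        # exclude multiples of 10
        continue
    best = max(best, trailing_repeats(sq))

assert best == 3  # three repeated digits occur (e.g. 1444), four never do
```

<p>This matches a quick residue argument: a square cannot end in 2, 3, 7 or 8; an ending of 5555 or 6666 fails mod 100 or mod 4; and 1111, 4444, 9999 fail mod 16.</p>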
Matthias Klupsch
19,700
<p>Take <span class="math-container">$\mathfrak{g} = \mathbb{R}^3$</span> with <span class="math-container">$[x,y] = x \times y$</span> the cross product and as symmetric bilinear form <span class="math-container">$\langle \cdot, \cdot \rangle$</span> the usual scalar product. Then the space <span class="math-container">$S = \langle e_1 \rangle$</span> is closed under the Lie bracket and <span class="math-container">$S^\perp = \langle e_2, e_3 \rangle$</span> but <span class="math-container">$e_2 \times e_3 = e_1$</span>.</p>
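<p>The counterexample is easy to verify directly (a minimal sketch):</p>

```python
def cross(u, v):
    # the Lie bracket on R^3: the usual cross product
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert cross(e1, e1) == (0, 0, 0)   # S = span(e1) is closed under the bracket
assert cross(e2, e3) == e1          # but [e2, e3] = e1 lies outside S-perp
```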
545,728
<p>So I have $X \sim \text{Geom}(p)$ and the probability mass function is:</p> <p>$$p(1-p)^{x-1}$$</p> <p>From the definition that:</p> <p>$$\sum_{n=1}^\infty ns^{n-1} = \frac {1}{(1-s)^2}$$</p> <p>How would I show that the $E(X)=\frac 1p$</p>
Michael Hardy
11,667
<p>\begin{align} \operatorname{E}(X) &amp; = \sum_{x=1}^\infty x\Pr(X=x) = \sum_{x=1}^\infty x p(1-p)^{x-1} = p \sum_{x=1}^\infty x(1-p)^{x-1} \\[10pt] &amp; = \underbrace{p \sum_{x=1}^\infty xs^{x-1} = p\cdot \frac 1 {(1-s)^2}}_{\text{This is the identity that you cited.}} \quad \text{where }s=1-p, \\[15pt] &amp; = p\cdot \frac 1 {p^2} = \frac 1 p. \end{align}</p>
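<p>A quick numeric check of the result, truncating the series at a point where the geometric tail is negligible (the value of $p$ is an arbitrary example):</p>

```python
p = 0.3
# E(X) = sum over x of x * p * (1-p)^(x-1); the tail past x = 2000 is tiny.
s = sum(x * p * (1 - p)**(x - 1) for x in range(1, 2000))
assert abs(s - 1 / p) < 1e-9   # matches E(X) = 1/p
```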
2,780,832
<p>Let's define: </p> <p>$$\sin(z) = \frac{\exp(iz) - \exp(-iz)}{2i}$$ $$\cos(z) = \frac{\exp(iz) + \exp(-iz)}{2}$$</p> <blockquote> <p>We are to prove that $$\sin(z+w)=\sin(w) \cos(z) + \sin(z)\cos(w), \forall_{z,w \in \mathbb{C}}$$ using only the following statement: $\exp(z+w) = \exp(w)\exp(z)$.</p> </blockquote> <p>I managed only to show that: $$\sin(z + w) = \frac{\exp(iz)\exp(iw)}{2i} - \frac{\exp(-iz)\exp(-iw)}{2i}.$$<br> Where can I go from here?</p>
Sujit Bhattacharyya
524,692
<p>Consider the RHS,</p> <p>$$\sin z\cos w+\cos z\sin w$$</p> <p>$$=\frac{e^{iz}-e^{-iz}}{2i}\frac{e^{iw}+e^{-iw}}{2}+\frac{e^{iw}-e^{-iw}}{2i}\frac{e^{iz}+e^{-iz}}{2}$$</p> <p>[By doing the usual calculation which is skipped]</p> <p>$$=2\frac{e^{i(z+w)}-e^{-i(z+w)}}{4i}$$ $$=\frac{e^{i(z+w)}-e^{-i(z+w)}}{2i}$$ $$=\sin(z+w)$$</p> <p>Proved.</p>
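<p>The identity can also be spot-checked numerically straight from the exponential definitions (the sample points are arbitrary):</p>

```python
import cmath

def sin_(z):
    return (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j

def cos_(z):
    return (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2

for z, w in [(0.7 + 0.2j, -1.1 + 0.5j), (2.0 - 0.3j, 0.4 + 1.0j)]:
    lhs = sin_(z + w)
    rhs = sin_(z) * cos_(w) + cos_(z) * sin_(w)
    assert abs(lhs - rhs) < 1e-12
```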
478,523
<p>I'm trying to reason through whether $\int_{x=2}^\infty \frac{1}{xe^x} dx$ converges.</p> <p>Intuitively, It would seem that since $\int_{x=2}^\infty \frac{1}{x} dx$ diverges, then multiplying the denominator by something to make it smaller would make it converge.</p> <p>If I apply a Taylor expansion to the $\frac{1}{e^x}$, then I get</p> <p>$\displaystyle \int_{x=2}^\infty \frac{1}{x} e^{-x} dx = \int_{x=2}^\infty \frac{1}{x}(1 + (-x) + \frac{(-x)^2}{2} + \cdots) dx$</p> <p>My initial thought was to multiply everything out and just look at the leading order behavior, $ x^{n-1}$, but this would seem to mean the integral diverges.</p> <p>What is wrong with this approach?</p>
Jonathan Y.
89,121
<p>I believe the easiest thing to do here is note that for all $x\geq 2$ one has $\frac{1}{xe^x}\leq e^{-x}$.</p> <p>As for your approach, beyond what @user1337 rightfully said, note that we can't just decide that everything (asymptotically-)smaller than $\frac{1}{x}$ yields a converging integral, because (for example) $$\int_2^\infty\frac{dx}{x\log x}=\infty.$$ However, it is true that $$\int_2^\infty\frac{dx}{x(\log x)^{1+\epsilon}}&lt;\infty,$$ so every function which grows at least as fast as $(\log x)^{1+\epsilon}$ (which includes, as we already know by the standard $p$-test, $x^\epsilon$) would do, so $e^x$ surely works.</p>
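<p>Numerically the comparison is easy to see: the partial integrals of $1/(xe^x)$ stabilise quickly and stay below the tail bound $\int_2^\infty e^{-x}\,dx = e^{-2}$. A rough trapezoid sketch (step counts and cutoffs are arbitrary choices):</p>

```python
import math

def trapezoid(f, a, b, n=200000):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

f = lambda x: 1 / (x * math.exp(x))
I_20 = trapezoid(f, 2, 20)
I_40 = trapezoid(f, 2, 40)

assert abs(I_40 - I_20) < 1e-6   # the tail past 20 is negligible
assert I_40 < math.exp(-2)       # bounded by the tail of e^{-x}
assert 0.04 < I_40 < 0.06        # the limit is the exponential integral E1(2) ~ 0.0489
```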
1,761,775
<p>Suppose we have angular momentum operators $L_1,L_2,L_3$ which satisfy $[L_1,L_2]=iL_3$, $[L_2,L_3]=iL_1$ and $[L_3,L_1]=iL_2$. We can show that the operator $L^2:=L_1^2+L_2^2+L_3^2$ commutes with $L_1,L_2$ and $L_3$. Now define $L_{\pm}=L_1\pm iL_2$ and then we can also show that $$L_+L_-=L^2-L_3(L_3-I)$$ and $$L_-L_+=L^2-L_3(L_3+I)$$ Suppose now we have a vector $v$ in our Hilbert space $\mathcal{H}$ such that $L^2v=\lambda v$ and $L_3 v=\lambda_3v$. Then it is easy to show using the commutation relations that $L_+v$ is an eigenvector of $L^2$ with eigenvalue $\lambda$ and it is also an eigenvector of $L_3$ with eigenvalue $1+\lambda_3$. Now I have to show that $$||L_+v||^2=|\lambda-\lambda_3(\lambda_3+1)|||v||^2$$</p> <p>I don't see how this can be deduced from the above calculations; we do know that $$||L_-L_+v||^2=|\lambda-\lambda_3(\lambda_3+1)|^2||v||^2$$ This is true because of the second formula above for $L_-L_+$.</p> <p>I need a hint for this calculation. Thanks.</p>
Jyrki Lahtonen
11,619
<p>(promoting my comment to an answer)</p> <p>You can use the fact that $L_+$ and $L_-$ are each other's adjoints; in other words, $$\langle L_+x|y\rangle=\langle x|L_-y\rangle$$ for all $x,y$. Applying this to $y=L_+v, x=v$ gives $$ \begin{aligned} \Vert L_+v\Vert^2&amp;=\langle L_+v\mid L_+v\rangle\\ &amp;=\langle v\mid L_-L_+ v\rangle\\ &amp;=\langle v\mid (\lambda-\lambda_3(1+\lambda_3))v\rangle\\ &amp;=(\lambda-\lambda_3(1+\lambda_3))\Vert v\Vert^2. \end{aligned} $$ Consequently the scalar $\lambda-\lambda_3(1+\lambda_3)$ is a non-negative real number (most likely you already knew this by other means), so it is unnecessary to wrap it inside absolute value signs.</p>
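<p>A concrete low-dimensional check (using the spin-$\frac12$ representation $L_i=\sigma_i/2$, an illustrative choice not taken from the question): with $v$ the $L_3$-eigenvector for $\lambda_3=-\frac12$ and $\lambda=\frac34$, the norm identity holds exactly:</p>

```python
# Pauli matrices divided by 2 satisfy the stated commutation relations.
L1 = [[0, 0.5], [0.5, 0]]
L2 = [[0, -0.5j], [0.5j, 0]]
Lp = [[L1[i][j] + 1j * L2[i][j] for j in range(2)] for i in range(2)]  # L+ = L1 + i L2

def apply(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

v = [0, 1]                 # L3 v = -1/2 v, and L^2 = (3/4) I for spin 1/2
lam, lam3 = 0.75, -0.5
w = apply(Lp, v)
norm2 = sum(abs(x)**2 for x in w)
assert abs(norm2 - (lam - lam3 * (lam3 + 1))) < 1e-12   # both equal 1
```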
2,174,508
<p>We have a box in 3D space defined by it's position, size and rotation. How can i find the smallest box that contains that box, but has a new rotation? I know this is a really bad description of the problem, i just can't describe it any better, i'm really bad at vector maths. i tried to draw a <a href="https://i.stack.imgur.com/ND7Cv.png" rel="nofollow noreferrer">little scene</a> in blender to clarify what i mean. here the white box is the original, and the red one is the one i want. the lines are the Forward vectors of these object. Sorry for the terrible formulation and thanks for reading :)</p>
John Hughes
114,036
<p>I don't know how your box is described, but at least one way is that you know the "forward" vector of the box (i.e., a length-one (or <em>unit</em>) vector that points parallel to one edge of the box) and you know two other vectors that are parallel to the other two edges of the box. If you call these $v_1, v_2, v_3$, and write them as columns, and put them into a matrix $M$, then you have $$ M^t M = I $$ because they're mutually perpendicular and are all unit vectors. </p> <p>Let's put the corresponding half-edge lengths into a matrix $S$: $$ S = \begin{bmatrix} s_1 &amp; 0 &amp; 0 \\ 0 &amp; s_2 &amp; 0 \\ 0 &amp; 0 &amp; s_3 \end{bmatrix} $$ where $2s_1$ is the length of the cube-edge corresponding to $v_1$, etc. Then the matrix $$ T = MS $$ contains three column-vectors that each represent one side of the (white) cube.</p> <p>In fact, if you apply $MS$ to each point of the bi-unit cube $-1 \le x,y,z \le 1$, you'll get a cube that looks just like your white cube (except that it's centered at the origin rather than at the white cube's position). </p> <p>For your target cube (the red one), you know three other unit edge-vectors, $w_1, w_2, w_3$, which we can write as the columns of a matrix $K$, but you don't know the half-edge-lengths, $r_1, r_2, r_3$, which we can place in a corresponding matrix $R$. </p> <p>What you DO know is that if you compute $Tu$ for any vector $u$ in the bi-unit cube, you get a point in the white cube, and that if you then transform this to red-cube coordinates, by $$ v = (KR)^{-1} T u = R^{-1} K^t T u $$ then we must get a point inside the bi-unit cube, i.e., all coordinates of $v$ must be between $-1$ and $1$. </p> <p>We can rewrite this equation in the form $$ Rv = K^t T u $$ Now the largest coordinates for points on the white cube are at the vertices, so you can compute the $8$ vectors $K^t Tu$ for $u = (\pm 1, \pm 1, \pm 1)$, and look at the $x$ coordinates of all those points, which might range from, say, $-3.7$ to $3.7$. 
If that happens, then you know that $r_1$ must be $3.7$. </p> <p>In fact, you can define $$ r_1 = \max( v_x ) \\ r_2 = \max( v_y ) \\ r_3 = \max( v_z ) $$ where $v = \begin{bmatrix} v_x \\ v_y \\v_z \end{bmatrix}$ ranges over all the vectors $K^t Tu$, where $u = (\pm 1, \pm 1, \pm 1)$. And this turns out to give the result that $r_1$ is the sum of the absolute values of the entries of the first row of $K^t T$, $r_2$ is the sum of the absolute values of the entries in the second row, and $r_3$ is the sum of the absolute values of the entries of the third row. </p> <p>When you double each of these, you get the edge-lengths for the red cube. </p>
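<p>A small sketch of the recipe (the white-box rotation, half-sizes and target frame below are made-up example values): compute $T = MS$, then each $r_i$ is the absolute-value row sum of $K^t T$:</p>

```python
import math

c = math.cos(math.pi / 4)
M = [[c, -c, 0], [c, c, 0], [0, 0, 1]]   # columns = white-box axes (45 deg about z)
S = [2.0, 1.0, 1.0]                      # half-edge lengths s_1, s_2, s_3
K = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # columns = red-box axes (axis-aligned)

# T = M S: scale column j of M by S[j].
T = [[M[i][j] * S[j] for j in range(3)] for i in range(3)]
# (K^t T)_{ij} = sum_k K_{ki} T_{kj}; r_i = sum of |entries| of row i.
KtT = [[sum(K[k][i] * T[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
r = [sum(abs(x) for x in row) for row in KtT]

# A 4x2x2 box rotated 45 deg needs half-extents (2+1)/sqrt(2) in x and y, 1 in z.
assert abs(r[0] - 3 / math.sqrt(2)) < 1e-12
assert abs(r[1] - 3 / math.sqrt(2)) < 1e-12
assert abs(r[2] - 1.0) < 1e-12
```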
1,130,029
<p>I am not sure if this question is off topic or not but a question like this has been asked on this site before - <a href="https://math.stackexchange.com/questions/409408/insertion-sort-proof">Insertion sort proof</a></p> <p>Here is an example of insertion sort running a on a set of data <a href="https://courses.cs.washington.edu/courses/cse373/13wi/lectures/02-25/19-sorting2-select-insert-shell.pdf" rel="nofollow noreferrer">https://courses.cs.washington.edu/courses/cse373/13wi/lectures/02-25/19-sorting2-select-insert-shell.pdf</a></p> <p><img src="https://i.stack.imgur.com/5Iamh.png" alt="enter image description here" /></p> <p>Here is the instructor's runtime proofs for the different cases (slide 10) <img src="https://i.stack.imgur.com/7Ml65.png" alt="enter image description here" /></p> <p>Can anyone explain the intuition behind the i/2 in average case? I get worst case(number of comparisons = element number) and the best case(everything in order, 1 comparison per element).</p>
AlexR
86,940
<p>Reading through the slides I noticed the insertion sort <em>implementation</em> discussed here is actually sub-optimal: an element is swapped into place in a bubble-sort-like manner. Since the sorted list at step $i$ has $i$ elements, the average number of comparisons (one more than the number of swaps, except when the element sinks all the way down) needed to sort the $(i+1)$-st element into place is $$\frac12(1_{\text{element is in place}} + i_{\text{element is smallest yet}})$$ So actually the correct average case estimate would be $$\sum_{i=1}^{N-1} \frac{i+1}2 = \frac{(N-1)N}4 + \frac{N-1}2 = \frac{(N-1)(N+2)}{4}$$ But this is also $\mathcal O(n^2)$ so that's a minor mistake.</p>
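<p>A quick Monte Carlo check of the $i/2$ intuition: on a uniformly random permutation, inserting the $i$-th element moves it past $i/2$ earlier elements on average, so the total number of swaps averages exactly $N(N-1)/4$ (a sketch; the sizes and trial count are arbitrary):</p>

```python
import random

def insertion_sort_swaps(a):
    """Swap-based insertion sort as in the slides; returns the swap count."""
    a = a[:]
    swaps = 0
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            swaps += 1
            j -= 1
    return swaps

random.seed(1)
N, trials = 50, 2000
avg = sum(insertion_sort_swaps(random.sample(range(N), N))
          for _ in range(trials)) / trials
expected = N * (N - 1) / 4   # expected number of inversions of a random permutation
assert abs(avg - expected) / expected < 0.02
```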
1,181,356
<p>Can such complex-valued function exist that</p> <p>$$\lim_{x\to x_0}f(x)=0$$</p> <p>but</p> <p>$$\lim_{x\to x_0}\frac 1{f(x)}=1$$?</p> <p>What about</p> <p>$$\lim_{x\to x_0}\frac 1{Re(f(x))}=1$$?</p>
Winther
147,873
<p>No, if it did then by the product law of limits $$1 = \lim_{x\to x_0}\frac{f(x)}{f(x)} = \lim_{x\to x_0}f(x) \cdot \lim_{x\to x_0}\frac{1}{f(x)} = 0\cdot 1 = 0$$ giving us a contradiction. If $L = \lim_{x\to x_0} f(x)$ exist and $L\not =0 $ then we always have $\lim_{x\to x_0}\frac{1}{f(x)} = \frac{1}{L}$.</p>
1,197,875
<p>I'm just starting partials and don't understand this at all. I'm told to hold $y$ "constant", so I treat $y$ like just some number and take the derivative of $\frac{1}{x}$, which I hope I'm correct in saying is $-\frac{1}{x^2}$, then multiply by $y$, getting $-\frac{y}{x^2}$.</p> <p>But apparently the correct answer is $\frac{1}{x}$. What am I missing?</p>
user225079
225,079
<p>You're supposed to hold x constant. </p>
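<p>A finite-difference check of the intended computation, $\partial_y\,(y/x) = 1/x$ with $x$ held fixed (sample point chosen arbitrarily):</p>

```python
x, y, h = 2.0, 3.0, 1e-6
# central difference in y, with x held constant
numeric = ((y + h) / x - (y - h) / x) / (2 * h)
assert abs(numeric - 1 / x) < 1e-9
```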
1,197,875
<p>I'm just starting partials and don't understand this at all. I'm told to hold $y$ "constant", so I treat $y$ like just some number and take the derivative of $\frac{1}{x}$, which I hope I'm correct in saying is $-\frac{1}{x^2}$, then multiply by $y$, getting $-\frac{y}{x^2}$.</p> <p>But apparently the correct answer is $\frac{1}{x}$. What am I missing?</p>
MichaelChirico
205,203
<p>have you looked at the graph? if you have a good enough plotting program (good enough to rotate 3D graphs), it should be easy to see that the surface is linear for all fixed x (and hyperbolic for all fixed y)</p> <p>first pass:</p> <p><a href="http://www.wolframalpha.com/input/?i=plot+y%2Fx+for+x%3D-1..1%2C+y%3D-1..1" rel="nofollow">http://www.wolframalpha.com/input/?i=plot+y%2Fx+for+x%3D-1..1%2C+y%3D-1..1</a></p>
84,982
<p>I am a new professor in Mathematics and I am running an independent study on Diophantine equations with a student of mine. Online I have found a wealth of very helpful expository notes written by other professors, and I would like to use them for guided reading. <strong>I am wondering whether it is customary to ask permission of the author before using his or her online notes for my own reading course.</strong> </p> <p>Also, if anyone has suggestions for good sources Diophantine Equations please feel free to enlighten me.</p>
François G. Dorais
2,000
<p>The <a href="http://fairuse.stanford.edu/Copyright_and_Fair_Use_Overview/chapter7/7-b.html" rel="nofollow">Stanford Copyright &amp; Fair Use Overview</a> is pretty clear about what constitutes fair use in the classroom and what doesn't:</p> <blockquote> <p><strong>Rules for Reproducing Text Materials for Use in Class</strong></p> <p>The guidelines permit a teacher to make one copy of any of the following: a chapter from a book; an article from a periodical or newspaper; a short story, short essay, or short poem; a chart, graph, diagram, drawing, cartoon, or picture from a book, periodical, or newspaper.</p> <p>Teachers may photocopy articles to hand out in class, but the guidelines impose restrictions. Classroom copying cannot be used to replace texts or workbooks used in the classroom. Pupils cannot be charged more than the actual cost of photocopying. The number of copies cannot exceed more than one copy per pupil. And a notice of copyright must be affixed to each copy.</p> <p>Examples of what can be copied and distributed in class include:</p> <ul> <li>a complete poem if less than 250 words or an excerpt of not more than 250 words from a longer poem</li> <li>a complete article, story, or essay if less than 2,500 words or an excerpt from any prose work of not more than 1,000 words or 10% of the work, whichever is less; or one chart, graph, diagram, drawing, cartoon, or picture per book or per periodical issue.</li> </ul> <p>Not more than one short poem, article, story, essay, or two excerpts may be copied from the same author, nor more than three from the same collective work or periodical volume (for example, a magazine or newspaper) during one class term. As a general rule, a teacher has more freedom to copy from newspapers or other periodicals if the copying is related to current events.</p> <p>The idea to make the copies must come from the teacher, not from school administrators or other higher authority. 
Only nine instances of such copying for one course during one school term are permitted. In addition, the idea to make copies and their actual classroom use must be so close together in time that it would be unreasonable to expect a timely reply to a permission request. For example, the instructor finds a newsweekly article on capital punishment two days before presenting a lecture on the subject.</p> <p>Teachers may not photocopy workbooks, texts, standardized tests, or other materials that were created for educational use. The guidelines were not intended to allow teachers to usurp the profits of educational publishers. In other words, educational publishers do not consider it a fair use if the copying provides replacements or substitutes for the purchase of books, reprints, periodicals, tests, workbooks, anthologies, compilations, or collective works.</p> </blockquote> <p>Note that this does not apply if the work is licensed appropriately. For example, the <a href="http://fairuse.stanford.edu/Copyright_and_Fair_Use_Overview/" rel="nofollow">Stanford Copyright &amp; Fair Use Overview</a> quoted above is licensed <a href="http://creativecommons.org/licenses/by-nc/3.0/us/" rel="nofollow">CC BY-NC</a> and this answer is licensed <a href="http://creativecommons.org/licenses/by-sa/3.0/" rel="nofollow">CC BY-SA</a>. For course notes, it makes sense to use such a license since that is probably the intended use of the work. 
Contact the author(s) and recommend that they license their work in a manner they find appropriate.</p> <hr> <p>The <a href="http://fairuse.stanford.edu/Copyright_and_Fair_Use_Overview/chapter0/0-b.html#3" rel="nofollow">Stanford Copyright &amp; Fair Use Overview</a> also has a clear notice regarding online material:</p> <blockquote> <p><strong>If You Want to Use Material on the Internet</strong></p> <p>Each day, people post vast quantities of creative material on the Internet -- material that is available for downloading by anyone who has the right computer equipment. Because the information is stored somewhere on an Internet server, it is fixed in a tangible medium and potentially qualifies for copyright protection. Whether it does, in fact, qualify depends on other factors that you would have no way of knowing about, such as when the work was first published (which affects the need for a copyright notice), whether the copyright in the work has been renewed (for works published before 1978), whether the work is a work made for hire (which affects the length of the copyright) and whether the copyright owner intends to dedicate the work to the public domain. If you want to download the material for use in your own work, you should be cautious. It's best to track down the author of the material and ask for permission. Generally, you can claim a fair use right for using a very small portion of text for commentary, scholarship or similar purposes.</p> </blockquote> <p>The thing to take away from this is that there is no substitute for proper licensing. If you intend for something to be used in some way or another: make that clear!</p>
999,304
<p>Frankie and Johnny play the following game. Frankie selects a number at random from the interval $[a, b]$. Johnny, not knowing Frankie’s number, is to pick a second number from that same inverval and pay Frankie an amount, W, equal to the squared difference between the two</p> <p>[so $0 ≤ W ≤ (b − a)^2]$. What should be Johnny’s strategy if he wants to minimize his expected loss?</p> <p>Attempt: Let $Y$ be Frankie's choice. Then Johnny wants to choose $k$ so that $E(Y - k)^2$ is minimize. </p> <p>Can someone please help me? I am having trouble. The answer is Johnny should pick $(a+b)/2$ to minimize his loss.</p>
mookid
131,738
<p>$M$ is not closed:</p> <p>Look at the sequence $$ f_n(x) = \min(1, nx) $$ and check that $f_n\to 1$ for the norm $\| .\|$.</p>
999,304
<p>Frankie and Johnny play the following game. Frankie selects a number at random from the interval $[a, b]$. Johnny, not knowing Frankie’s number, is to pick a second number from that same inverval and pay Frankie an amount, W, equal to the squared difference between the two</p> <p>[so $0 ≤ W ≤ (b − a)^2]$. What should be Johnny’s strategy if he wants to minimize his expected loss?</p> <p>Attempt: Let $Y$ be Frankie's choice. Then Johnny wants to choose $k$ so that $E(Y - k)^2$ is minimize. </p> <p>Can someone please help me? I am having trouble. The answer is Johnny should pick $(a+b)/2$ to minimize his loss.</p>
Simon S
21,495
<p>As for the "quotient norm", have a look at mookid's counter-example and see if you can modify it to force a violation of one of the norm axioms. I'd aim to show that $$\| [f] \|_{\hbox{am I a norm?}} = 0 \ \ \ \Longrightarrow \ \ \ f = 0$$ is violated.</p>