Columns: qid (int64, values 1 to 4.65M) · question (large_string, 27 to 36.3k chars) · author (large_string, 3 to 36 chars) · author_id (int64, -1 to 1.16M) · answer (large_string, 18 to 63k chars)
2,183,809
<p>The problem:</p> <blockquote> <p>If $p$ is prime and $4p^4+1$ is also prime, what can the value of $p$ be?</p> </blockquote> <p>I am sure this is a pretty simple question, but I just can't tackle it. I don't even know how I should begin...</p>
Joffan
206,402
<p>If $p\ne 5$, then $p^4\equiv 1 \bmod 5$ by Fermat's Little Theorem. Thus $4p^4+1\equiv 0 \bmod 5$, that is, $4p^4{+}1$ is divisible by $5$, and since $4p^4+1&gt;5$ it is not prime.</p> <p>If $p=5$, then $4p^4+1=2501=41\cdot 61$ is not prime either. So there is no such $p$.</p>
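<p>For a quick numerical sanity check (an illustrative sketch in the Wolfram Language, not part of the argument above), one can tabulate $4p^4+1 \bmod 5$ for the first few primes and factor the $p=5$ case:</p>
<pre><code>(* for primes p other than 5, the value 4 p^4 + 1 should be divisible by 5 *)
Table[{p, Mod[4 p^4 + 1, 5], PrimeQ[4 p^4 + 1]}, {p, Select[Range[40], PrimeQ]}]

(* the p = 5 case: 2501 factors as 41 * 61 *)
FactorInteger[4*5^4 + 1]
</code></pre>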
2,185,072
<p>Let A be the matrix: $$\begin{pmatrix} 1&amp;2&amp;3&amp;2&amp;1&amp;0\\2&amp;4&amp;5&amp;3&amp;3&amp;1\\1&amp;2&amp;2&amp;1&amp;2&amp;1 \end{pmatrix}$$.</p> <p>Show that {$\bigl( \begin{smallmatrix} 1 \\ 4\\3\end{smallmatrix} \bigr)$, $\bigl( \begin{smallmatrix} 3\\4\\1 \end{smallmatrix} \bigr)$} is a basis for the column space of A. Find a "nice" basis for the column space of A. </p> <p>So far, I have row reduced A to $$\begin{pmatrix} 1&amp;2&amp;0&amp;-1&amp;4&amp;3\\0&amp;0&amp;1&amp;1&amp;-1&amp;-1\\0&amp;0&amp;0&amp;0&amp;0&amp;0 \end{pmatrix}$$ where the pivots occur in column 1 and column 3, so {(1,2,1),(3,5,2)} should be a "nice" basis for the column space? I do not see where {$\bigl( \begin{smallmatrix} 1 \\ 4\\3\end{smallmatrix} \bigr)$, $\bigl( \begin{smallmatrix} 3\\4\\1 \end{smallmatrix} \bigr)$} come from, though. </p>
puhsu
417,064
<p>You can just show that the rank of the matrix $A$ is equal to $2$ (you actually did this by reducing $A$). If the rank is $2$, the dimension of the column space is also two, so any basis of the column space of $A$ has two elements. It then suffices to notice that the vectors $(1, 4, 3)^T$ and $(3, 4, 1)^T$ are linearly independent and lie in the column space, so they form a basis of the column space.</p> <p>Note that you can easily get these two vectors via simple column operations on $A$. For example, $(3, 4, 1)^T = (3, 5, 2)^T - (0, 1, 1)^T$ and $(1, 4, 3)^T = (1, 2, 1)^T + 2(0, 1, 1)^T$, and continuing in this way you can show that these column operations reduce $A$ to $$ \begin{pmatrix} 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 3 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 4 &amp; 4 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 3 &amp; 1 \end{pmatrix} $$</p>
2,185,072
<p>Let A be the matrix: $$\begin{pmatrix} 1&amp;2&amp;3&amp;2&amp;1&amp;0\\2&amp;4&amp;5&amp;3&amp;3&amp;1\\1&amp;2&amp;2&amp;1&amp;2&amp;1 \end{pmatrix}$$.</p> <p>Show that {$\bigl( \begin{smallmatrix} 1 \\ 4\\3\end{smallmatrix} \bigr)$, $\bigl( \begin{smallmatrix} 3\\4\\1 \end{smallmatrix} \bigr)$} is a basis for the column space of A. Find a "nice" basis for the column space of A. </p> <p>So far, I have row reduced A to $$\begin{pmatrix} 1&amp;2&amp;0&amp;-1&amp;4&amp;3\\0&amp;0&amp;1&amp;1&amp;-1&amp;-1\\0&amp;0&amp;0&amp;0&amp;0&amp;0 \end{pmatrix}$$ where the pivots occur in column 1 and column 3, so {(1,2,1),(3,5,2)} should be a "nice" basis for the column space? I do not see where {$\bigl( \begin{smallmatrix} 1 \\ 4\\3\end{smallmatrix} \bigr)$, $\bigl( \begin{smallmatrix} 3\\4\\1 \end{smallmatrix} \bigr)$} come from, though. </p>
dantopa
206,581
<p>Building on the insights of @puhsu, you know you have 2 vectors that span the image, columns 1 and 3. Can you construct the target vectors in this basis? $$ \left[ \begin{array}{r} 1 \\ 4 \\3 \end{array} \right] = \alpha \left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right] + \beta \left[ \begin{array}{r} 3 \\ 5 \\ 2 \end{array} \right], \quad \alpha = 7, \ \beta = -2. $$ $$ \left[ \begin{array}{r} 3 \\ 4 \\ 1 \end{array} \right] = \alpha \left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right] + \beta \left[ \begin{array}{r} 3 \\ 5 \\ 2 \end{array} \right], \quad \alpha = -3, \ \beta = 2. $$</p>
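<p>These coefficients are easy to double-check numerically (a Wolfram Language sketch, offered only as a cross-check of the values above):</p>
<pre><code>A = {{1, 2, 3, 2, 1, 0}, {2, 4, 5, 3, 3, 1}, {1, 2, 2, 1, 2, 1}};
MatrixRank[A]                                    (* 2 *)

pivotCols = Transpose[Transpose[A][[{1, 3}]]];   (* columns (1,2,1) and (3,5,2) *)
LinearSolve[pivotCols, {1, 4, 3}]                (* {7, -2} *)
LinearSolve[pivotCols, {3, 4, 1}]                (* {-3, 2} *)
</code></pre>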
2,328,567
<p>I'm trying to find any kind of development in mathematics (or science, astronomy, even astrology or other early studies that involve any kind of math), especially in early England and in the Carolingian Empire.</p> <p>The problem I have is that it seems that math died there: every work seems to be related to Arabic, Chinese or Indian work in the area. I have looked in science as well and nothing seems to appear. So my main question is:</p> <blockquote> <p><em>Is there any reference, article, name of a scientist or mathematician, book, page, etc. documenting contributions to the history of mathematics in England or in the Carolingian Empire between 550 and 1050 A.D.? Even the e-mail address of a specialist in the area would be a fruitful answer.</em></p> </blockquote> <p>Thanks in advance for any answers or critiques.</p>
Nigel Overmars
96,700
<p>Continuing where you stopped: since $f$ is bijective, we have that $f(f^{-1}(U)) = U$, and since $U$ is open, we have that $f(V)$ is open (in $Y$).</p> <p>To prove that $f^{-1}$ is continuous, we should prove that for every open set $U \subseteq X$, the set $(f^{-1})^{-1}(U)$ is open in $Y$. Since $(f^{-1})^{-1}(U) = f(U)$ and $f$ is an open map, this follows directly.</p>
1,469,859
<p>If I have a positive <code>x</code>, are there more integers below <code>x</code> or above <code>x</code>?</p> <p>I was discussing this with some friends and we came up with two opposing ideas:</p> <ol> <li>No, since you can always count one more in either direction.</li> <li>Yes, since the infinite amount of numbers below <code>x</code> is greater than the infinite amount of numbers above <code>x</code>.</li> </ol>
cr001
254,175
<p>Actually, there are equally many integers on either side. Denote the positive integer by $n$. Then the two sets you are talking about are $S_1=\{x\mid x\in \mathbb{Z} \land x&gt;n \}$ and $S_2=\{x\mid x\in \mathbb{Z} \land x&lt;n \}$.</p> <p>Now consider the function $f:S_1\rightarrow S_2$, $f(x) =2n-x$. This is a bijection: it is injective for the obvious reason, and it is surjective because for any $y&lt;n$ we can take $x=2n-y&gt;n$, which is in our domain and satisfies $f(x)=y$.</p>
3,840,643
<p>Assume that the three predicates below are given:</p> <p><span class="math-container">$H(x)$</span>: <span class="math-container">$x$</span> is a horse</p> <p><span class="math-container">$A(x)$</span>: <span class="math-container">$x$</span> is an animal</p> <p><span class="math-container">$T(x,y)$</span>: <span class="math-container">$x$</span> is a tail of <span class="math-container">$y$</span></p> <p>Then, translate the following inference into an inference using predicate logic expressions and prove whether the inference is valid or not (for instance, using natural deduction):</p> <p>Horses are animals.</p> <hr /> <p>Horses' tails are tails of animals.</p> <p>My thoughts: I am quite good at translating predicate logic expressions, but here I struggled to come up with a formula for horses' tails. My initial idea was to consider a similar sentence such as &quot;w is a tail of a horse&quot; to form the required inference, but it was not successful. I would welcome your ideas on this task.</p>
Taroccoesbrocco
288,417
<p>As correctly suggested in lemontree's <a href="https://math.stackexchange.com/a/3840664/288417">answer</a>, &quot;Horses' tails are tails of animals&quot; can be formalized as <span class="math-container">$\forall x \forall y \big((H(y) \land T(x,y)) \to A(y) \big)$</span> or more precisely, <span class="math-container">$\forall x \forall y \big((H(y) \land T(x,y)) \to (A(y) \land T(x,y))\big)$</span>.</p> <p>Of course, the argument</p> <p><span class="math-container">$\frac{\text{Horses are animals}}{\text{Horses' tails are tails of animals}} \quad \text{i.e.} \quad \frac{\forall y (H(y) \to A(y))}{\forall x \forall y \big((H(y) \land T(x,y)) \to (A(y) \land T(x,y))\big)}$</span></p> <p>is valid. First, I give you an informal proof of that.</p> <p>We want to prove that <span class="math-container">$\forall x \forall y \big((H(y) \land T(x,y)) \to (A(y) \land T(x,y))\big)$</span>, under the hypothesis <span class="math-container">$\forall y (H(y) \to A(y) )$</span>. So, let us fix arbitrary individuals <span class="math-container">$x$</span> and <span class="math-container">$y$</span> and let us suppose that <span class="math-container">$H(y) \land T(x,y)$</span>, we have to show that <span class="math-container">$A(y) \land T(x,y)$</span>. Since by hypothesis <span class="math-container">$\forall y (H(y) \to A(y) )$</span>, hence <span class="math-container">$H(y) \to A(y)$</span> holds for the particular <span class="math-container">$y$</span> we have chosen. Moreover, we are supposing that <span class="math-container">$H(y) \land T(x,y)$</span> and in particular <span class="math-container">$H(y)$</span> holds. By <em>modus ponens</em>, from <span class="math-container">$H(y) \to A(y)$</span> and <span class="math-container">$H(y)$</span> it follows that <span class="math-container">$A(y)$</span>. Also, since we are supposing that <span class="math-container">$H(y) \land T(x,y)$</span>, in particular <span class="math-container">$T(x,y)$</span> holds. So, <span class="math-container">$A(y) \land T(x,y)$</span>. Therefore, we have proved that, for arbitrary <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, if <span class="math-container">$H(y) \land T(x,y)$</span> then <span class="math-container">$A(y) \land T(x,y)$</span>. Thus, <span class="math-container">$\forall x \forall y \big((H(y) \land T(x,y)) \to (A(y) \land T(x,y))\big)$</span> holds, under the hypothesis <span class="math-container">$\forall y (H(y) \to A(y))$</span>.</p> <p>You can formalize this proof in natural deduction as follows:</p> <p><span class="math-container">$$ \dfrac {\dfrac {\dfrac {\dfrac{\dfrac{\forall y (H(y) \to A(y))}{H(y) \to A(y)}\forall_\text{elim} \qquad \dfrac{[H(y) \land T(x,y)]^*}{H(y)}\land_\text{elim}\!\!\!\!\!\!\!\!\!\!\!}{A(y)}\to_\text{elim} \quad \dfrac{[H(y) \land T(x,y)]^*}{T(x,y)}\land_\text{elim}} {A(y) \land T(x,y)}\land_\text{intro}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! } {(\!\!\!\!\!\!\!\!H(y) \land T(x,y)) \to (A(y) \land T(x,y))} \to_\text{intro}^*\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! } {\dfrac {\forall y \big((H(y) \land T(x,y)) \to (A(y) \land T(x,y))\big)} {\forall x \forall y \big((H(y) \land T(x,y)) \to (A(y) \land T(x,y))\big)} \forall_\text{intro} } \forall_\text{intro} $$</span></p>
275,308
<p>I have problems with calculating </p> <p>$$\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{x\sin x}$$</p> <p>$$\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{x\sin x}=\lim_{x\rightarrow0}\frac{\ln(2\cos^{2}(x)-1)}{(2\cos^{2}(x)-1)}\cdot \left(\frac{\sin x}{x}\right)^{-1}\cdot\frac{(2\cos^{2}(x)-1)}{x^{2}}=0$$</p> <p>The correct answer is $-2$. Please show where my error is this time. Thanks in advance!</p>
lab bhattacharjee
33,337
<p>Note that $\lim_{x\to0}\cos 2x=1$, not $0$ (unlike $\sin x$), so we cannot write $\lim_{x\rightarrow0}\frac{\ln(2\cos^{2}(x)-1)}{(2\cos^{2}(x)-1)}$ in order to utilize $\lim_{y\to 0}\frac{\ln(1+y)}y=1$: that standard limit needs the denominator to be the quantity $y$ that tends to $0$, whereas here $2\cos^{2}(x)-1\to1$, so this factor tends to $0$, not $1$.</p> <p>Instead, we can try the following way: $$\frac{\ln(1-2\sin^2x)}{x\sin x}=(-2)\frac{\ln(1-2\sin^2x)}{(-2\sin^2x)}\frac{\sin x}x$$</p>
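<p>For completeness (spelling out the last step, which is not written out above): as $x\to0$ we have $-2\sin^2x\to0$, so the middle factor tends to $1$ by $\lim_{y\to 0}\frac{\ln(1+y)}y=1$, and $\frac{\sin x}{x}\to1$, giving $$\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{x\sin x}=(-2)\cdot 1\cdot 1=-2.$$</p>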
2,135,918
<p>Does L'Hopital's rule hold for the second derivative, third derivative, etc.? Assuming the functions are differentiable up to the $(k+1)$th derivative, is the following true?</p> <p>$$\lim_{x\to c} \frac{f(x)}{g(x)} = \lim_{x\to c} \frac{f'(x)}{g'(x)} = \lim_{x\to c} \frac{f''(x)}{g''(x)} = \cdots = \lim_{x\to c} \frac{f^{(k)}(x)}{g^{(k)}(x)} = \lim_{x\to c} \frac{f^{(k+1)}(x)}{g^{(k+1)}(x)}$$</p>
Simply Beautiful Art
272,831
<p>Well, let's see. Under the specified conditions <strong><em>and</em></strong> $f(c)=g(c)=0$ (or both go to $\pm\infty$), then</p> <p>$$\lim_{x\to c}\frac{f(x)}{g(x)}=\lim_{x\to c}\frac{f'(x)}{g'(x)}$$</p> <p>Now, if $f'(x)$ and $g'(x)$ still hold all those conditions and $f'(c)=g'(c)=0$ (or both go to $\pm\infty$), then</p> <p>$$\lim_{x\to c}\frac{f'(x)}{g'(x)}=\lim_{x\to c}\frac{f''(x)}{g''(x)}$$</p> <p>And you may repeat this process until the limit is easily evaluated or it fails to uphold the conditions of L'Hospital's rule.</p>
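<p>A small illustrative example (not from the answer above) where two applications are needed: $$\lim_{x\to 0}\frac{1-\cos x}{x^{2}}=\lim_{x\to 0}\frac{\sin x}{2x}=\lim_{x\to 0}\frac{\cos x}{2}=\frac12,$$ where both the original quotient and the quotient of first derivatives are of the indeterminate form $0/0$, so the rule may legitimately be applied twice.</p>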
2,704,955
<p>In my test on complex analysis I encountered the following problem:</p> <blockquote> <p>Find $\oint\limits_{|z-\frac{1}{3}|=3} z \text{Im}(z)\text{d}z$</p> </blockquote> <p>So first I observed that the function $z\text{Im}(z)$ is not holomorphic, at least on the real axis. Therefore we have to integrate using a parametrization.</p> <p>First, let's change the variable to $w = z - \frac{1}{3}$. So we get $\oint\limits_{|w|=3} (w+\frac{1}{3}) \text{Im}(w+\frac{1}{3})\text{d}w = \oint\limits_{|w|=3} (w+\frac{1}{3}) \text{Im}(w)\text{d}w = \frac{1}{2i}\oint\limits_{|w|=3} (w+\frac{1}{3}) (w-\bar w)\text{d}w$.</p> <p>Then by letting $w=3e^{i \phi}$ we transform the integral to the form $\frac{1}{2}\int\limits_{0}^{2\pi}(3e^{i \phi}+\frac{1}{3})(3e^{i \phi}-3e^{-i \phi})ie^{i \phi}\text{d}\phi = -\frac{1}{2}\int\limits_{0}^{2\pi}\text{d}\phi=-\pi$.</p> <p>Is my reasoning correct? I'm not quite sure about the change of variable I made, since the function is not holomorphic on the real axis. Is there any other way this integral can be evaluated? Thanks! </p>
W99
546,048
<p>You can also use Stokes' theorem. Since $z\mathrm{Im}(z)=\frac{1}{2i}(z^2-z\overline{z})$ and $dx\wedge dy=\frac{1}{2i}d\overline{z}\wedge dz$ we have $$ \oint_{|z-\frac{1}{3}|=3}\frac{z^2-z\overline{z}}{2i}dz=\iint_{|z-\frac{1}{3}|\le3}-z\frac{d\overline{z}\wedge dz}{2i}=\iint_{(x-\frac{1}{3})^2+y^2\le 9}(-x-iy)dx\wedge dy=\\ =-\iint_{(x-\frac{1}{3})^2+y^2\le 9}xdx\wedge dy-i\iint_{(x-\frac{1}{3})^2+y^2\le 9}ydx\wedge dy=\\ =-\operatorname{Average}_D(x)\times \operatorname{Area}(D)-i\operatorname{Average}_D(y)\times \operatorname{Area}(D)=\\ =-\frac{1}{3}\times 9\pi-i\cdot 0\times 9\pi=-3\pi $$ where $D=\{(x,y)\in\mathbb{R}^2\mid(x-\frac{1}{3})^2+y^2\le 9\}$.</p>
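<p>As a numerical cross-check of the value $-3\pi$ (an illustrative Wolfram Language sketch using the direct parametrization, relying on default NIntegrate settings):</p>
<pre><code>(* z = 1/3 + 3 E^(I t) on |z - 1/3| = 3, dz = 3 I E^(I t) dt *)
NIntegrate[(1/3 + 3 Exp[I t]) Im[3 Exp[I t]] 3 I Exp[I t], {t, 0, 2 Pi}]
(* about -9.4248, i.e. -3 Pi, up to numerical noise *)

N[-3 Pi]
</code></pre>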
1,071,321
<p>The problem is quite simple to formulate. If you have a large group of people (n > 365), and their birthdays are uniformly distributed over the year (365 days), what's the probability that every day of the year is someone's birthday?</p> <p>I am thinking that the problem should be equivalent to finding the number of ways to place n unlabeled balls into k labeled boxes, such that all boxes are non-empty, but C((n-k)+k-1, (n-k))/C(n+k-1, n) (C(n,k) being the binomial coefficient) does not yield the correct answer.</p>
Robert Israel
8,508
<p>Use the <a href="http://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow">inclusion-exclusion principle</a>. For a given set of $m$ days, the probability that nobody has a birthday on those days is $(1 - m/365)^n$. </p> <p>EDIT: There are $365 \choose m$ such sets. So the probability that there is at least one day with no birthdays is $$\sum_{m=1}^{364} (-1)^{m-1}{365 \choose m} (1 - m/365)^n $$</p>
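<p>A Wolfram Language sketch of this formula (the function name <code>pAllDays</code> is chosen here just for illustration); the complement of the displayed sum gives the probability that every day of the year is hit:</p>
<pre><code>(* probability that every day is someone's birthday, by inclusion-exclusion *)
pAllDays[n_] := 1 - Sum[(-1)^(m - 1) Binomial[365, m] (1 - m/365)^n, {m, 1, 364}]

N[pAllDays[2000]]   (* roughly 0.22: even 2000 people are more likely than not to miss some day *)
</code></pre>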
4,496,913
<blockquote> <p>If <span class="math-container">$x=\frac12(\sqrt[3]{2009}-\frac{1}{\sqrt[3]{2009}})$</span>, what is the value of <span class="math-container">$(x+\sqrt{1+x^2})^3$</span>?</p> </blockquote> <p>I solved this problem as follows.</p> <p>Assuming <span class="math-container">$\sqrt[3]{2009}=\alpha$</span>, we have <span class="math-container">$x=\frac12(\alpha-\frac1{\alpha})$</span> and <span class="math-container">$$(x+\sqrt{1+x^2})^3=\left[\frac12(\alpha-\frac1{\alpha}) +\sqrt{1+\frac14(\alpha^2+\frac1{\alpha^2}-2)}\right]^3=\left[(\frac{\alpha}2-\frac1{2\alpha}) +\sqrt{\frac{\alpha^2}4+\frac1{4\alpha^2}+\frac12}\right]^3=\left(\frac{\alpha}2-\frac1{2\alpha}+\left|\frac{\alpha}2+\frac1{2\alpha}\right|\right)^3=\alpha^3=2009$$</span> I'm wondering, is it possible to solve this problem with other approaches?</p>
Ninad Munshi
698,724
<p>Here's a rather elegant approach that involves pattern matching. Let <span class="math-container">$x=\sinh t$</span>. Then</p> <p><span class="math-container">$$(x+\sqrt{1+x^2})^3 = (\sinh t + \cosh t)^3 = e^{3t}$$</span></p> <p>And from comparing definitions</p> <p><span class="math-container">$$\sinh t = \frac{1}{2}(e^t-e^{-t})$$</span></p> <p><span class="math-container">$$x = \frac{1}{2}\left(\sqrt[3]{2009}-\frac{1}{\sqrt[3]{2009}}\right)$$</span></p> <p>clearly, <span class="math-container">$e^t=\sqrt[3]{2009}$</span>, thus</p> <p><span class="math-container">$$e^{3t} = 2009$$</span></p>
1,593,679
<p>While evaluating $$\int^{\infty}_0 \frac{\sin x}xdx$$ I saw the Laplace transform proof. <br> It used that $$\cal L\left\{\frac{\sin t}{t}\right\}=\int^\infty_0 \cal L\left\{\sin(t)\right\}d\sigma$$ So, to understand it, I tried: $$\cal L\left\{\frac{\sin t}{t}\right\}=\int^\infty_0e^{-st}\frac{\sin t}{t}dt=\int^\infty_0\frac1t\cal L\left\{\sin t\right\}dt$$ But I cannot see how that $\sigma$ emerged and how the $t^{-1}$ vanished. Also, how do we know that, using the Laplace transform, we would get an integral that is equal to the original one ($\int^\infty_0\frac{\sin x}{x}dx$)?</p>
Albert
85,959
<p>You can write,</p> <p>$$ \int_{0}^{\infty}\frac{\sin t}{t}dt=L\bigg\{\frac{\sin t}{t}\bigg\}_{s=0}=\Bigg(\int_{s}^{\infty}L\big\{\sin t\big\}du\Bigg)_{s=0}\\ =\Bigg(\int_{s}^{\infty}\frac{1}{1+u^{2}}du\Bigg)_{s=0}=\Bigg(\tan^{-1}(u)\Bigg|^{\infty}_{s}\Bigg)_{s=0}=\frac{\pi}{2}. $$</p>
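<p>The step that puzzled the asker (where the extra integration variable comes from and why the $t^{-1}$ disappears) is the standard rule $\mathcal L\big\{\tfrac{f(t)}{t}\big\}(s)=\int_s^\infty \mathcal L\{f(t)\}(u)\,du$, which comes from swapping the order of integration (justified, e.g., by Fubini under suitable decay): $$\int_s^\infty \mathcal{L}\{f(t)\}(u)\,du=\int_s^\infty\!\!\int_0^\infty e^{-ut}f(t)\,dt\,du=\int_0^\infty f(t)\int_s^\infty e^{-ut}\,du\,dt=\int_0^\infty e^{-st}\,\frac{f(t)}{t}\,dt,$$ since $\int_s^\infty e^{-ut}\,du=e^{-st}/t$ for $t&gt;0$.</p>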
3,743,282
<p>One definition of the complex symplectic group I have encountered is (sourced from <a href="https://en.wikipedia.org/wiki/Symplectic_group" rel="nofollow noreferrer">Wikipedia</a>): <span class="math-container">$$Sp(2n,F)=\{M\in M_{2n\times 2n}(F):M^{\mathrm {T} }\Omega M=\Omega \}$$</span></p> <p>What is the motivation for imposing the condition <span class="math-container">$M^{\mathrm {T} }\Omega M=\Omega$</span> instead of others such as <span class="math-container">$M^{-1}\Omega M=\Omega$</span>?</p>
paul garrett
12,291
<p>Such issues become clearer when we do not commit to a basis of a vector space, and do not depend on matrices to express either linear maps or quadratic forms or...</p> <p>So, given a finite-dimensional <span class="math-container">$k$</span>-vectorspace <span class="math-container">$V$</span>, with <span class="math-container">$k$</span> a field of characteristic not <span class="math-container">$2$</span>, we can consider non-degenerate skew-symmetric bilinear forms <span class="math-container">$J$</span> on <span class="math-container">$V$</span>. Two such (<span class="math-container">$J_1$</span> and <span class="math-container">$J_2$</span>) might be said &quot;equivalent&quot; when there is an automorphism <span class="math-container">$A$</span> of <span class="math-container">$V$</span> such that <span class="math-container">$J_1(Ax,Ay)=J_2(x,y)$</span> for all <span class="math-container">$x,y\in V$</span>.</p> <p>If/when we describe the bilinear forms <span class="math-container">$J_i$</span> as <span class="math-container">$J_i(x,y)=y^\top S_i x$</span> for skew-symmetric matrices <span class="math-container">$S_i$</span>, we obtain the formulaic condition you mention.</p>
445,127
<p>I need to prove this limit:</p> <blockquote> <p>Given $f:(-1,1) \to \mathbb{R}\,$ and $\,f(x)&gt;0,\,$ if $\,\lim_{x\to 0} \left(f(x) + \dfrac{1}{f(x)}\right) = 2,\,$ then $\,\lim_{x\to 0} f(x) = 1$.</p> </blockquote>
Christian Bueno
86,451
<p>Let $z=\lim_{x\to 0} f(x)$, assuming for the moment that this limit exists; then the first limit can be written as $z+\dfrac{1}{z}=2$. Equivalently, we can write $z^2 -2z +1=0$, or $(z-1)^2=0$. Thus $z=1$ and so $\lim_{x\to 0} f(x)=1$.</p> <p>To make this rigorous one must first justify that the limit exists. One way: since $f(x)+\frac{1}{f(x)}\to 2$ and $f(x)&gt;0$, eventually $f(x)\le f(x)+\frac1{f(x)}\le 3$, and the identity $(f(x)-1)^2=f(x)\left(f(x)+\frac1{f(x)}-2\right)$ then gives $0\le (f(x)-1)^2\le 3\left(f(x)+\frac1{f(x)}-2\right)\to 0$, so $f(x)\to 1$ directly.</p>
2,159,083
<p>I am currently translating a research paper from French (which I do not speak well). I have made good progress with copious use of google translate &amp; switching between French &amp; English versions of articles on wikipedia, coupled with knowledge in the given field. However, I am stuck on the following ($M$ is a matrix):</p> <p><a href="https://i.stack.imgur.com/X4wQY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X4wQY.png" alt="enter image description here"></a></p> <p>The parenthetical implies that $M$ is similar to a <a href="https://en.wikipedia.org/wiki/Companion_matrix" rel="nofollow noreferrer">Companion Matrix</a>, but I am stumped when it comes to what "monogène" means. Google translate says "monogenic", which I would infer to be the author's term for "similar to the companion matrix of a polynomial." However, I can't say that I've come across the term "Monogenic Matrix" before, plus this would just be the Frobenius normal form / rational canonical form of a matrix with only one block.</p> <p>Is this a reasonable translation/interpretation?</p>
Community
-1
<p>The main idea connecting logic to set theory is the idea that <em>propositions are subsets</em>.</p> <p>On the one hand, if you are interpreting propositional logic in some (set-theoretic) domain $D$ of discourse, then each proposition gets interpreted as a subset of $D$, and connectives are operations on subsets.</p> <p>On the other hand, given any set $D$, we can make $\mathcal{P}(D)$ into a Boolean lattice in which we can compute with propositional logic.</p> <p>And this all extends fairly naturally to predicates and quantifiers and such.</p> <hr> <p>When connecting logic to <em>category theory</em> we use basically the same idea, except instead that propositions are subobjects rather than subsets.</p> <p>The posets $\operatorname{Sub}(X)$ of subobjects of $X$ are of crucial importance to doing propositional logic, and more generally we are interested in the maps back and forth between $\operatorname{Sub}(X)$ and $\operatorname{Sub}(Y)$ that may be induced by a map $X \to Y$.</p> <p>So, for internal first-order logic, classifying <em>subobjects</em> really is the thing we want. Even better, in a topos, $\operatorname{Sub}$ <em>is</em> a representable functor</p> <p>$$ \operatorname{Sub}(-) \cong \hom(-, \Omega) $$</p> <p>So, while $\operatorname{Sub}(-)$ is a relatively complicated object, in a topos the whole thing effectively collapses down to being a single object $\Omega$, thus allowing the internal logic to be treated in a more elementary way.</p>
891,137
<p>Count all $n$-length strings of digits $0, 1,\dots, m$ that have an equal number of $0$'s and $1$'s. Is there a closed form expression?</p>
DSinghvi
148,018
<p>Hint: go by calculus (increasing or decreasing).</p> <p>See the domain and that's all.</p>
3,369,438
<p><span class="math-container">$\begin{array}{|l} \forall xP(x) \vee \forall x \neg P(x) \quad premise \\ \exists xQ(x) \rightarrow \neg P(x) \quad premise \\ \forall xQ(x) \quad premise \\\hline \begin{array}{|l} \forall xP(x) \quad assumption \\\hline \vdots \quad \\ \forall x\neg P(x) \quad \end{array} \\\begin{array}{|l} \forall x\neg P(x) \quad assumption \\\hline \end{array} \\ \forall x \neg P(x) \quad \vee elim\\ \end{array}$</span></p> <p>Am I on the right path to solving this or how should I be thinking about it? I think I may have strayed from the path in trying to use <span class="math-container">$\vee$</span> elim, just didn't see a different path to take. </p>
lemontree
344,246
<p>What you did so far is correct, and you only have to fill in the "..."s in your subproofs.</p> <p>The second subproof is easy: You already got your conclusion <span class="math-container">$\forall x \neg P(x)$</span> as an assumption -- just reiterate (R) that line and you're done.</p> <p>As for the first subproof, the idea is to <strong>derive a contradiction between <span class="math-container">$\neg P(a)$</span> for some individual <span class="math-container">$a$</span> in the second premise -- of which we know there is an instance, since we are given <span class="math-container">$\forall x Q(x)$</span> -- and the universal claim <span class="math-container">$\forall x P(x)$</span></strong>:<br> Assume <span class="math-container">$Q(a) \to \neg P(a)$</span> for an arbitrary individual <span class="math-container">$a$</span>. By the third premise <span class="math-container">$\forall x Q(x)$</span> we know by universal instantiation (<span class="math-container">$\forall\!$</span> E) that <span class="math-container">$Q$</span> indeed holds of <span class="math-container">$a$</span>. By modus ponens (<span class="math-container">$\to\!$</span> E), we can conclude <span class="math-container">$\neg P(a)$</span>. But this contradicts the proposition that <span class="math-container">$P(a)$</span>, which we get out of the assumption <span class="math-container">$\forall x P(x)$</span> by <span class="math-container">$\forall\!$</span> E, so we get a contradiction <span class="math-container">$\bot$</span>. From this contradiction we are allowed to conclude anything thanks to ex falso quodlibet (<span class="math-container">$\bot$</span>) -- conveniently, we can choose <span class="math-container">$\forall x \neg P(x)$</span> as the next conclusion. Since we were able to derive this conclusion under the assumption that <span class="math-container">$Q(a) \to \neg P(a)$</span> holds for some individual <span class="math-container">$a$</span>, and by the second premise we know that at least one such individual does exist, we may apply <span class="math-container">$\exists\!$</span> E on the existential formula and the subproof, and thereby discharge the assumption <span class="math-container">$Q(a) \to \neg P(a)$</span> to conclude <span class="math-container">$\forall x \neg P(x)$</span> for sure.</p> <p><a href="https://i.stack.imgur.com/HmOIb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HmOIb.png" alt="enter image description here"></a></p>
35,964
<p>This is kind of an odd question, but can somebody please tell me whether I am crazy about the following question? I did the math, and what I am told to prove seems simply wrong:</p> <p>Question: Show that a ball dropped from a height of <em>h</em> feet, which bounces in such a way that each bounce is $\frac34$ of the height of the bounce before, travels a total distance of 7<em>h</em> feet.</p> <p>My Work: $$\sum_{n=0}^{\infty} h \left(\frac34\right)^n = 4h$$</p> <p>Obviously 4<em>h</em> does not equal 7<em>h</em>. What does the community get?</p> <p>I know that my calculation of the sum is correct (Wolfram Alpha confirms it), so that only leaves my formula, or the teacher, being incorrect...</p> <p>Edit: Thanks everyone for pointing out my flaw; it should be something like: $$-h + \sum_{n=0}^{\infty} 2h \left(\frac34\right)^n = 7h$$</p> <p>Thanks in advance for any help!</p>
Brandon Carter
1,016
<p>Hint: You are forgetting to count the distance traveled on the way up from the bounce. If the initial drop is from height $h$, the ball bounces up $\frac{3h}{4}$ and then drops from the same distance, and so on.</p>
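<p>Spelling the hint out (this computation is not part of the hint itself): every height after the initial drop is traversed twice, once up and once down, so the total distance is $$h+2\sum_{n=1}^{\infty}h\left(\frac34\right)^{n} = h+2\cdot\frac{\frac34 h}{1-\frac34}=h+6h=7h.$$</p>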
192,784
<p>First let me try to describe in more details below the approach of "reordering" digits of Pi, which is used in OEIS A096566</p> <p><a href="https://oeis.org/A096566" rel="nofollow noreferrer">https://oeis.org/A096566</a></p> <p>and what I have done analyzing it so far.</p> <p>I am looking at first 620 "reordered" digits (which is more than currently listed in A096566) of Pi (in decimal representation). The reordering is done in such way that, while looking at consecutive pipeline stream of digits, all "range" of 10 different decimal digits:</p> <p>1,2,...,7,8,9,0</p> <p>in their first occurrence are getting "collected", while all coming "repeating" digits of each kind (1,2,...,7,8,9,0 ) are getting "pushed back" to be written later ... .</p> <p>Then the second "next" unique ten digits are getting collected, that is written, first looking for them in already "pushed back" group and then looking for coming-in (again with all "repeating" digits getting "pushed back") and so on until entire (second) set of all unique digits (1,2,...,7,8,9,0 ) is completely collected (written).</p> <p>In total - I got 62 such sets (covering 62*10 decimal digits) - see all those 62 sets listed below - where each {....} line represents the set of such 10 digits collection.</p> <pre><code> {3,1,4,5,9,2,6,8,7,0} {1,5,3,9,2,8,4,6,7,0} {5,9,3,2,6,4,8,1,7,0} {3,2,9,5,8,4,1,6,7,0} {3,2,8,9,5,1,7,4,6,0} {3,9,5,8,2,4,7,1,6,0} {3,9,4,5,2,8,6,0,1,7} {3,9,4,2,8,6,5,1,0,7} {3,9,2,8,4,6,1,0,5,7} {9,3,8,2,4,6,1,0,5,7} {9,8,3,2,4,6,0,5,1,7} {9,8,3,2,6,4,0,5,1,7} {9,8,2,3,4,6,0,5,1,7} {9,8,2,3,4,5,0,1,6,7} {8,2,9,3,4,1,5,0,6,7} {8,9,2,3,4,1,0,5,6,7} {8,2,3,9,4,0,1,5,6,7} {8,2,4,9,3,1,0,5,6,7} {8,2,1,5,9,4,3,0,6,7} {8,2,4,9,5,3,1,0,6,7} {8,2,4,9,1,5,3,6,0,7} {8,2,4,9,5,3,1,6,0,7} {8,2,9,4,5,3,1,6,0,7} {2,8,4,9,3,5,1,6,0,7} {8,2,9,4,3,1,5,6,0,7} {8,4,2,9,1,5,3,6,0,7} {8,4,2,9,3,6,1,5,0,7} {8,4,2,9,3,6,1,5,0,7} {8,2,4,6,3,9,1,0,5,7} {8,4,2,3,6,9,1,5,0,7} {8,4,2,3,6,1,5,9,0,7} {8,4,2,3,6,1,9,5,0,7} {8,4,2,6,3,1,9,5,0,7} {4,8,2,6,9,1,3,5,0,7} {4,2,8,6,1,3,9,0,5,7} {4,2,8,6,9,3,1,0,5,7} {4,8,2,6,1,3,0,5,9,7} {4,8,2,6,3,0,1,5,9,7} {4,8,2,3,6,1,5,0,9,7} {8,2,4,6,3,1,0,5,9,7} {2,4,8,6,1,3,0,5,9,7} {2,4,8,1,6,3,5,9,0,7} {8,2,4,1,3,6,5,9,0,7} {2,8,4,1,5,3,6,9,0,7} {4,2,1,8,3,6,9,5,0,7} {4,2,1,8,3,5,6,9,0,7} {4,1,2,3,8,9,6,5,0,7} {1,4,8,2,3,9,6,5,0,7} {1,4,2,3,8,5,9,6,0,7} {1,4,8,5,2,9,3,6,0,7} {1,4,2,8,3,9,6,5,0,7} {1,4,2,8,9,3,6,5,0,7} {1,2,8,4,9,3,6,0,5,7} {1,2,8,3,4,6,9,5,0,7} {1,3,2,4,8,9,6,5,0,7} {1,4,3,2,9,8,6,0,5,7} {1,3,4,2,9,6,8,0,5,7} {1,4,3,2,9,6,8,5,0,7} {1,4,3,2,9,6,8,5,0,7} {1,4,3,2,9,6,8,5,7,0} {1,3,2,9,4,6,8,7,0,5} {1,2,3,4,6,9,8,7,0,5} </code></pre> <p>Here are results of arithmetic averages which I got for each (out of ten) positions between those 62 sets:</p> <p>first position digits average 4.694538</p> <p>second position digits average 4.306452</p> <p>third position digits average 4</p> <p>forth position digits average 5.048387097</p> <p>fifth position digits average 4.951612903</p> <p>sixth position digits average 4.548387097</p> <p>seventh position digits average 4.161290323</p> <p>eights position digits average 4.274193548</p> <hr> <p>ninths position digits average 2.870967742</p> <p>tenth position digits average 6.14516129</p> <hr> <p>Above results show that for the first eight positions, digits in each position were changing from the set to set progressively more and more randomly - thus averages for those positions are getting closer to 4.5 .</p> <p>However for positions 9 and 10 - such randomization was not 
achieved yet within the first 62 sets (though, looking beyond the 62 sets presented so far and relying on the known observation that the average digit value in the decimal expansion of Pi eventually comes practically to 4.5, I could speculate in advance that it will come to 4.5 eventually for positions 9 and 10 too). It looks, however, like positions 9 and 10 are "randomizing" at a much slower rate than the other 8 positions, and that might be (or might not be) interesting.</p> <p>I am not sure how many more sets (beyond the 62 I presented here) are needed to get the arithmetic averages for the "ninth" and "tenth" digit positions (within the set) to reach the same proximity to 4.5 that is already achieved by the first 8 positions within those 62 sets.</p> <p>It is also notable that if positions 9 and 10 are averaged together, the average between those two, within the 62 sets available so far, is close to 4.5.</p> <h1>Conclusion and questions</h1> <p>It appears that the first 62 sets listed above show a slight hint of retaining some loose organizational order between predecessor sets and successor sets.</p> <p>But I presume that further "down the road", beyond the first 62 sets, one will see that the level of randomness in the sets' digit composition order gradually increases and adjacent sets become more and more disconnected from each other.</p> <p>What I am trying to say is that in the case of the digits of Pi (after the reordering discussed above is applied) there appears to exist some sort of transition from initial order (within the first 620 digits) to total randomness.</p> <p>I used the Maximal Information-based Nonparametric Exploration statistical analysis program (MINE) by David Reshef et al.</p> <p>Applied (by me) pairwise to the first 62 terms, MINE shows high values (up to 1) of the maximal information coefficient (MIC), which is a measure of two-variable dependence designed specifically for rapid exploration of many-dimensional data sets.</p> <p>The links (thanks to LVK for the upload) to the Excel spreadsheet, turned into a comma-separated-value file (.csv), with the data (62 sets of reordered Pi digits), and to the MINE-generated output .csv file, which was produced (at my home Windows PC) upon executing</p> <p>java -jar MINE.jar PiReordered.csv -allPairs cv=1.0</p> <p>are, correspondingly,</p> <p><a href="https://dl.dropbox.com/u/29863189/PiReordered.csv" rel="nofollow noreferrer">https://dl.dropbox.com/u/29863189/PiReordered.csv</a></p> <p>and</p> <p><a href="https://dl.dropbox.com/u/29863189/PiReordered.csv%2Callpairs%2Ccv%3D1.0%2CB%3Dn%5E0.6%2CResults.csv" rel="nofollow noreferrer">https://dl.dropbox.com/u/29863189/PiReordered.csv%2Callpairs%2Ccv%3D1.0%2CB%3Dn%5E0.6%2CResults.csv</a></p> <p>Does such a concept of transition from order to randomness exist?</p> <p>Could the above observation be statistically confirmed or disproved?</p> <p>If "yes", what specific tools / methods of statistical analysis could be applied?</p> <p>I also received a suggestion that, in order to test whether the feature discussed above is characteristic only of (some initial digits of) Pi, the same reordering should be applied to a significant number of randomly generated very long strings of decimal digits, to see whether the same pattern behavior appears or not. Is that useful?</p> <p>Thanks,</p> <p>Best Regards,</p> <p>Alexander R. 
Povolotsky</p> <p>PS - in response to LVK's answer and his comment, which I am quotting here "The nth line of your table consists of the digits written in the order of their nth appearance in π. This could be in principle read off the graph by crossing it with the horizontal line y=n and reading off the intersection points from left to right. (In practice this is not convenient due to the low resolution and overlap between the curves.) ..... I don't think there is any statistical method for analysis of the data organized in this way. You'll probably need to devise one yourself. ..... – LVK Sep 10 at 16:02"</p> <p>LVK - thanks for your thoughts and valuable contribution ! I think though that the frequency chart somewhat hides away the positional dependency between the unique digits of the {1,2,...,9,0} set. The table presentation with the columns representing the particular combination of (all) digits (from 1 to 0 in above mentioned set) for each consecutive ten digit collection is, in my opinion, more revealing in that regard.</p> <p>My questions still remain to be in place:</p> <p>1) is there some "other" (I would call it "transitional") non-randomness exists for some few hundreds "initial" digits of Pi (beyond the order imposed by the re-arrangement itself), which is getting revealed by this re-arrangement ?</p> <p>2) what (other than MINE) quantative statistical methods/tools could be used in analysis of this situation - </p> <p>PPS I am trying to rework first 3 columns in already posted MINE results csv file (where first two columns are "textually enumerated" names of the 10 digitt sets, like for example "18thSet", and the 3rd column is MIC value for the two sets identified in the first two columns at the same row) into three (3)-dimensional "surface" chart with each column, mentioned above, be correspondingly x,y and z values ...</p> <p>Doing it manually via converting into table -- by keeping the first column in tact, transposing the second column into up-most row and filling the table's body by the MIC values from the 3rd column is very laborious.</p> <p>I found discussion at</p> <p><a href="https://stackoverflow.com/questions/7083044/mathematica-csv-to-multidimensional-charts">https://stackoverflow.com/questions/7083044/mathematica-csv-to-multidimensional-charts</a></p> <p>how to do it with Mathematica, but I don't have it ...</p> <p>Could some one (who has Mathematica) be kind enough to do it (and post) ?</p>
Community
-1
<blockquote> <p>I also received suggestion that in order to test whether this discussed above feature is only characteristic to (some initial digits of) Pi, the same reordering should be applied to the some significant number of randomly generated very long strings of decimal digits - to see if there the same pattern behavior will appear or not - is it useful ?</p> </blockquote> <p>Yes, this is an easy numerical experiment that could help you understand just how likely such patterns are to emerge under your process of rearrangement. </p> <blockquote> <p>In private communication I received the following: "It is rather obviously non-random. For example, the digit 7 appears at the end of 53/62 sets...</p> </blockquote> <p>Of course it's non-random. To begin with, every digit appears exactly once in every line. This already makes this set non-random, but we know that this non-randomness is the result of rearrangement. As for the digit 7, it simply lags in frequency in the beginning of pi, which is what places it at the end of your 10-digit groups. Here is the chart of digit counts in $\pi$ which I just made in a spreadsheet. Also a link to <a href="https://dl.dropbox.com/u/29863189/pi.png" rel="nofollow noreferrer">full size version</a> and to the actual <a href="https://dl.dropbox.com/u/29863189/pi.ods" rel="nofollow noreferrer">OpenOffice spreadsheet</a> that I used. </p> <p><img src="https://i.stack.imgur.com/qHtpt.png" alt="Digits of Pi"></p>
40,500
<blockquote> <p>What are the most fundamental/useful/interesting ways in which the concepts of Brownian motion, martingales and markov chains are related?</p> </blockquote> <p>I'm a graduate student doing a crash course in probability and stochastic analysis. At the moment, the world of probability is a confusing blur, but I'm starting with a grounding in the basic theory of markov chains, martingales and Brownian motion. While I've done a fair amount of analysis, I have almost no experience in these other matters and while understanding the definitions on their own isn't too difficult, the big picture is a long way away.</p> <p>I would like to <strong>gather together results and heuristics</strong>, each of which links together two or more of Brownian motion, martingales and Markov chains in some way. Answers which <strong>relate probability to real or complex analysis</strong> would also be welcome, such as "Result X about martingales is much like the basic fact Y about sequences".</p> <p>The thread may go on to contain a Big List in which each answer is the posters' favourite as yet unspecified result of the form "This expression related to a markov chain is always a martingale because blah. It represents the intuitive idea that blah".</p> <p>Because I know little, I can't gauge the worthiness of this question very well so apologies in advance if it is deemed untenable by the MO police.</p>
Simon Lyons
9,564
<p>If X is a continuous martingale of finite variation such that $X_0 = 0$, then $P(X_t = 0 \ \ \forall t) = 1$. </p>
59,486
<p>Chern-Weil theory tells us that the integral Chern classes of a flat bundle over a compact manifold (i.e. a bundle admitting a flat connection) are all torsion. Given a compact manifold $M$ whose integral cohomology contains torsion, one can then ask which (even-dimensional) torsion classes appear as the Chern classes of flat bundles. What is known about this question? I would be interested both in statements about specific manifolds and about general (non)-realizability results.</p> <p>One specific thing that I know: if $S$ is a non-orientable surface, then there is a flat bundle $E\to S$ whose first Chern class is the generator of $H^2 (S; \mathbb{Z}) = \mathbb{Z}/2$. This shows up, for example, in papers of C.-C. Melissa Liu and Nan-Kuo Ho. As Johannes pointed out in the comments, this also shows that the fundamental class of a product of surfaces can be realized by a flat bundle.</p> <p>However, I suspect that for a product of 3 Klein bottles, not all the 4-dimensional torsion classes can be realized as second Chern classes of flat bundles. In fact, I think I know a proof of this if one restricts to unitary flat connections: the space of unitary representations has too few connected components. </p>
Tom Goodwillie
6,666
<p>Small correction: For a non-compact manifold $M$, the group $H_{2k-1}(M)$ might not be finitely generated. In this case Chern-Weil does not imply that the $k$th Chern class of a flat bundle on $M$ has finite order. Rather, it just implies that it belongs to the subgroup $Ext(H_{2k-1}(M),\mathbb Z)\subset H^{2k}(M)$.</p> <p>Positive answer for first Chern class: Use the surjection $Hom(H_1(M),GL_1(\mathbb C))\to Ext(H_1(M),\mathbb Z)$ associated to the exponential exact sequence $0\to \mathbb Z\to \mathbb C\to GL_1(\mathbb C)\to 1$. An element of this $Hom$ group describes a flat complex line bundle on $M$ with prescribed Chern class in the $Ext$ part of $H^2(M)$.</p> <p>Negative answer in general, for a pretty trivial reason: If $M$ is simply connected, then flat bundles on $M$ are necessarily trivial, but $M$ can still have torsion in $H^{2k}$ if $k&gt;1$. </p> <p>So a better question is, if $\Gamma$ is a group then can every element of (the $Ext$ part of ?) $H^{2k}(B\Gamma)$ be $c_k$ of a vector bundle arising from a homomorphism $\Gamma\to GL_r(\mathbb C)$ for some $r$? I don't know the answer if $k&gt;1$.</p>
2,372,743
<p>Show that $$\lim_{x\to\infty} \int_{1}^x e^{-y^2} dy$$ exists and is in $[e^{-4},1]$.</p> <p>I cannot find a good lower bound (minoration), I must show it without using double integration, and I totally don't know how to solve it.</p>
Brevan Ellefsen
269,764
<p><strong>Existence:</strong> Note that $e^{-x^2} \le e^{-x}$ for all $x \in [1,\infty)$, so the (increasing) integral is bounded above by $\int_1^\infty e^{-t}\,dt$ and hence converges.</p> <hr> <p><strong>Bounds:</strong> For $x \ge 3$, $$\begin{align} \int_1^x e^{-t^2} dt &amp;\le\int_1^3 e^{-t^2} dt +\int_3^\infty e^{-t^2} dt\\ &amp;&lt;\int_1^3 e^{-t^2} dt +\int_3^\infty e^{-t} dt \end{align}$$ Estimate the finite integral in your favorite manner, and the second integral is trivial. For tighter bounds replace $3$ with any larger real number. For the lower bound requested in the problem, note that $\int_1^x e^{-t^2} dt \ge \int_1^2 e^{-t^2} dt \ge e^{-4}$ once $x \ge 2$, since $e^{-t^2}\ge e^{-4}$ on $[1,2]$.</p>
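<p>A quick numerical sanity check of where the limit actually sits inside $[e^{-4},1]$ (a Wolfram Language sketch, not needed for the proof):</p>
<pre><code>Integrate[Exp[-t^2], {t, 1, Infinity}]   (* (1/2) Sqrt[Pi] Erfc[1] *)
N[%]                                     (* about 0.1394 *)
N[{Exp[-4], 1}]                          (* about {0.0183, 1.} *)
</code></pre>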
4,133,760
<p>Dr Strang in his book linear algebra and it's applications, pg 108 says ,when talking about the left inverse of a matrix( <span class="math-container">$m$</span> by <span class="math-container">$n$</span>)</p> <blockquote> <p><strong>UNIQUENESS:</strong> For a full column rank <span class="math-container">$r=n . A x=b$</span> has at most one solution <span class="math-container">$x$</span> for every <span class="math-container">$b$</span> if and only if the columns are linearly independent. Then <span class="math-container">$A$</span> has an <span class="math-container">$n$</span> by <span class="math-container">$m$</span> left-inverse <span class="math-container">$B$</span> such that <span class="math-container">$B A=I_{n}$</span>. This is possible only if <span class="math-container">$m \geq n$</span>.</p> </blockquote> <p>I understand why there can be at most one solution for a full column rank but how does that lead to <span class="math-container">$A$</span> having a left inverse?</p> <p>I'd be grateful if someone could help or hint at the answer.</p>
Still Learning
362,881
<p>As an add on to user169852’s proof, I would note that Strang’s argument that <span class="math-container">$ A^T A $</span> is invertible if A has full column rank is fairly simple.</p> <p>He shows that for any matrix A, <span class="math-container">$ A^T A $</span> has the same nullspace as A:</p> <p>(1) Clearly the nullspace of A is contained in the nullspace of A^T A.</p> <p>(2) To show the reverse inclusion, suppose that <span class="math-container">$ A^T A x = 0 $</span>. Then <span class="math-container">$ x^T A^T A x = 0 $</span>, so <span class="math-container">$ (A x)^T (A x) = 0 $</span>. I.e., the norm of A x is zero and hence A x = 0. So the nullspace of <span class="math-container">$ A^T A $</span> is contained in the nullspace of A. Hence the nullspace of A equals the nullspace of <span class="math-container">$ A^T A $</span>.</p> <p>user169852 has already shown that A having full column rank implies its nullspace is trivial. So <span class="math-container">$ A^T A $</span> is a square matrix with trivial nullspace and hence is invertible.</p>
1,221,586
<p>How would one compute $$ \oint_{|z|=1} \frac{dz}{\sin \frac{1}{z}}\,? $$</p> <p>Can one "generalize" the residue theorem and take the infinite series of the residues at each singularity? </p>
GEdgar
442
<p>Another method for this case: Think of the outside of the unit disk as your domain. Go around the contour in the opposite direction. Outside the unit disk, the function $$ \frac{1}{\sin\frac{1}{z}} $$ is meromorphic. It has only one singularity in that region: a pole with residue $-1/6$ at $\infty$. So the value of the integral is $$ (-2\pi i)\left(\frac{-1}{6}\right) = \frac{i\pi}{3} . $$</p>
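<p>A numerical cross-check of the value $\frac{i\pi}{3}$ (an illustrative Wolfram Language sketch; the integrand is smooth on $|z|=1$, so default NIntegrate settings should suffice):</p>
<pre><code>(* z = E^(I t) on the unit circle, so 1/z = E^(-I t) and dz = I E^(I t) dt *)
NIntegrate[(1/Sin[Exp[-I t]]) I Exp[I t], {t, 0, 2 Pi}]
(* about 1.0472 I, i.e. I Pi/3, up to numerical noise *)

N[I Pi/3]
</code></pre>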
1,916,297
<p>Let $\{F_n\}_{n=1}^\infty $ be the Fibonacci sequence (defined by $ F_n=F_{n-1}+F_{n-2}$ with $F_1=F_2=1$).</p> <p>Is it true that for every integer $N$ there exist positive integers $n,m,k,l$ such that $N=F_n+F_m-F_k-F_l$?</p>
user133281
133,281
<p><strong>Short answer:</strong> No, because the Fibonacci sequence grows exponentially.</p> <p><strong>Longer answer:</strong> First suppose $\{n,m\} \cap \{k,\ell\} \neq \emptyset$, say $n=l$. Then $N = F_m - F_k$. This implies that $N = F_m - F_k \geq F_m - F_{m-1} = F_{m-2}$. It follows that $m = O(\log(N))$ because the Fibonacci sequence is exponential. Similarly we have $k = O(\log(N))$. It follows that for large $N$ we can choose $m$ and $k$ in at most $O(\log(N)^2)$ ways.</p> <p>Now suppose $\{n,m\}$ and $\{k,\ell\}$ are disjoint. Then we similarly find that we can choose $n$, $m$, $k$, $\ell$ in at most $O(\log(N)^4)$ ways. </p> <p>Consequently, to write the numbers $1$, $2$, ..., $N$ in the desired form, there are at most $O(\log(N)^2 + \log(N)^4)$ possible choices for $n$, $m$, $k$, $\ell$. However this becomes strictly smaller than $N$ for large $N$.</p>
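<p>An experimental companion to this counting argument (a Wolfram Language sketch; the index cutoff is arbitrary, so any integers reported as missing are only candidates for non-representability):</p>
<pre><code>fibs = Table[Fibonacci[i], {i, 1, 30}];

(* all values F_n + F_m - F_k - F_l with indices up to 30 *)
reachable = Union[Flatten[Outer[Plus, fibs, fibs, -fibs, -fibs]]];

(* integers in [1, 5000] not obtained with this cutoff *)
Complement[Range[5000], reachable]
</code></pre>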
628,236
<p>I am trying to give a name to this axiom in a definition: </p> <p>$(X \bullet R) \sqcup (Y \bullet S) \equiv (X \sqcup Y) \bullet (R \sqcup S)$</p> <p>(for all $X, Y, R, S$) where $\sqcup$ is the join of a lattice and $\bullet$ is some binary operation. It feels related to monotonicity/distributivity but I don't know a standard name for this. Any ideas? So far I have called it "full distributivity". I'd also like to have a name (possibly the same) for this axiom when $\sqcup$ isn't a lattice operation, just some binary operation.</p>
Community
-1
<p>Alternatively, it is a quadratic polynomial in $Z$. Complete the square in $Z$. You will get something like $(Z-A)^2+B$. For it to be reducible $B$ must be a square (over the complex numbers). But $B$ is again a quadratic polynomial in the remaining variables. If it is a square, it is reducible. Repeat the same work as before with this polynomial using another one of the variables.</p>
628,236
<p>I am trying to give a name to this axiom in a definition: </p> <p>$(X \bullet R) \sqcup (Y \bullet S) \equiv (X \sqcup Y) \bullet (R \sqcup S)$</p> <p>(for all $X, Y, R, S$) where $\sqcup$ is the join of a lattice and $\bullet$ is some binary operation. It feels related to monotonicity/distributivity but I don't know a standard name for this. Any ideas? So far I have called it "full distributivity". I'd also like to have a name (possibly the same) for this axiom when $\sqcup$ isn't a lattice operation, just some binary operation.</p>
Igor Rivin
109,865
<p>Yet another way: substitute some value for $Y,$ and check that the resulting polynomial is irreducible. My favorite value is $Y=2.$</p>
810,190
<blockquote> <p>Page 29 of Source 1: Denote the complex conjugate by * : $\mathbf{u \cdot v} = \sum_{1 \le i \le n} u_i^*v_i = (\mathbf{v \cdot u})^*$</p> <p><a href="http://www.math.sunysb.edu/~eitan/la13.pdf" rel="nofollow">Page 1 of Source 2:</a> $\mathbf{u \cdot v} = \mathbf{u}^T\mathbf{ \bar{v} }$.</p> <p><a href="http://www2.math.umd.edu/~hking/Hermitian.pdf" rel="nofollow">Page 1 of Source 3:</a> Denote $\mathbf{u^*} = \mathbf{\bar{u}^T} $. Then $ \mathbf{ &lt;u,v&gt; = u*v = \bar{u}^Tv } $.</p> </blockquote> <p>Would someone please explain and elucidate all these differences? Which is right? I'm confused. I believe that $u \cdot v = &lt;u, v&gt;$, if $&lt; &gt;$ is considered as the $\cdot$?</p> <p>In view of the answer below, which is the most convenient and powerful that I should remember?</p>
Branimir Ćaćić
49,610
<p>So, as you observed, there are two conventions in the literature for Hermitian inner products:</p> <ol> <li>anti-linearity in the first argument, linearity in the second (Sources 1 and 3);</li> <li>linearity in the first argument, anti-linearity in the second (Source 2).</li> </ol> <p>Convention (1) is standard in theoretical physics, probably because of Dirac—I can't remember, off the top of my head, which convention von Neumann uses—and from there in mathematical physics; moreover, it is very common in branches of mathematics traditionally linked to mathematical physics, e.g., operator theory and noncommutative geometry. Convention (2), on the other hand, is common in branches of mathematics where connections to mathematical physics are less prominent.</p> <p>It should be stressed that there is absolutely no difference in the resulting theory whatsoever from choosing one convention over the other, besides superficial differences in notation here and there, since each convention really is just the complex conjugate of the other.</p>
133,936
<p>I am trying to understand a part of the following theorem:</p> <blockquote> <p><strong>Theorem.</strong> Assume that $f:[a,b]\to\mathbb{R}$ is bounded, and let $c\in(a,b)$. Then, $f$ is integrable on $[a,b]$ if and only if $f$ is integrable on $[a,c]$ and $[c,b]$. In this case, we have $$\int_a^bf=\int_a^cf+\int_c^bf.$$ <em>Proof.</em> If $f$ is integrable on $[a,b]$, then for every $\epsilon&gt;0$ there exists a partition $P$ such that $U(f,P)-L(f,P)&lt;\epsilon$. Because refining a partition can only potentially bring the upper and lower sums closer together, we can simply add $c$ to $P$ if it is not already there. Then, let $P_1=P\cap[a,c]$ be a partition of $[a,c]$, and $P_2=P\cap[c,b]$ be a partition of $[c,b]$. It follows that $$U(f,P_1)-L(f,P_1)&lt;\epsilon\text{ and }U(f,P_2)-L(f,P_2)&lt;\epsilon,$$ implying that $f$ is integrable on $[a,c]$ and $[c,b]$.</p> <p>[...]</p> </blockquote> <p>How does that last expression "follow?" Neither $P_1$ nor $P_2$ are refinements of $P$, but they are still somehow less than $\epsilon$; will that not make their difference larger? That is, $$U(f,P_i)-L(f,P_i)\geqslant U(f,P)-L(f,P),$$ for $i=1,2$? Thanks in advance!</p>
Jonathan Prieto-Cubides
29,241
<p>I looked at your question, and here is how I think about it. By definition, with the partition $P$ you have $U(f,P) - L(f,P) &lt;\epsilon$, and $U(f,P)\geq L(f,P)$, since these are upper and lower Riemann sums.</p> <p>Now, because $c\in P$, the pieces $P_1=P\cap [a,c]$ and $P_2 = P \cap [c,b]$ satisfy $P = P_1 \cup P_2$, and the upper and lower sums split over the two pieces: $$ \begin{align*} L(f,P) &amp;= L(f,P_1) + L(f,P_2),\\ U(f,P) &amp;= U(f,P_1)+U(f,P_2),\\ L(f,P_j) &amp;\leq U(f,P_j)\quad (j=1,2). \end{align*} $$ Subtracting the piece with index $j\ne i$ and using $L(f,P_j)\leq U(f,P_j)$ gives $$ L(f,P) - L(f,P_i) \leq U(f,P) -U(f,P_i), \qquad\text{i.e.}\qquad U(f,P_i)-L(f,P_i) \leq U(f,P)-L(f,P)&lt;\epsilon. $$</p> <p>So the inequality $U(f,P_i)-L(f,P_i)&gt;U(f,P)-L(f,P)$ that you were worried about cannot happen; the difference for each piece is never larger than the difference for $P$.</p>
1,318,934
<blockquote> <p>Let $q$ be an odd prime power. Consider the map $f:\Bbb F_{q^3} \rightarrow \Bbb F_{q^3}$, defined by $$f(x)=\alpha x^q+\alpha^q x$$ for some fixed $\alpha \in \Bbb F_{q^3} \setminus \{ 0 \}$. Show that $f$ is a bijection.</p> </blockquote> <p>Hint: If $\beta \in \ker(f) \setminus \{ 0 \}$ consider the relative norm map $N_{\Bbb F_{q^3}/{\Bbb F_q}}(\alpha \beta^q)$.</p>
Adam Hughes
58,831
<p>Consider</p> <p>$$g(x) =f(x)-f(a)=\alpha(x-a)^q+\alpha^q(x-a),$$</p> <p>where we used that the characteristic divides $q$, so $(x-a)^q=x^q-a^q$ (the binomial coefficients $\binom{q}{i}$ vanish for $0&lt;i&lt;q$, and $(-a)^q=-a^q$ since $q$ is odd). Now if $f$ takes some value twice, say $f(b)=f(a)$ with $b\ne a$, then $g$ has the two zeros $a$ and $b$, and we have</p> <p>$$\alpha(b-a)^q+\alpha^q(b-a)=0\iff (b-a)^{q-1}=-\alpha^{q-1}.$$</p> <p>Then</p> <blockquote> <p>$$\left({b-a\over \alpha}\right)^{q-1}=-1$$</p> </blockquote> <p>So ${b-a\over \alpha}\not\in\Bbb F_{q}$: the nonzero elements of the base field satisfy $x^{q-1}=1$, and $-1\ne 1$ because $q$ is odd. On the other hand, raising the displayed identity to the (even) power $q+1$ gives</p> <p>$$\left({b-a\over \alpha}\right)^{q^2-1}=(-1)^{q+1}=1,\qquad\text{i.e.}\qquad \left({b-a\over \alpha}\right)^{q^2}={b-a\over\alpha},$$</p> <p>i.e. ${b-a\over\alpha}\in\Bbb F_{q^2}$. But $\Bbb F_{q^2}\cap \Bbb F_{q^3}=\Bbb F_q$ (since $\gcd(2,3)=1$), so ${b-a\over\alpha}$ would lie in $\Bbb F_q$ after all, a contradiction. Hence no such $b$ exists, $f$ is injective, and an injective map from the finite set $\Bbb F_{q^3}$ to itself is a bijection.</p>
239,544
<p>I'm working with the code below, which works well for my purposes; however, instead of the canonical ordering seen in the output, I want the output produced in the order I give in the Cases line of code. Any help is most appreciated.</p> <pre><code>ClearAll; m = {{N11, &quot;x&quot;}, {N12, &quot;y&quot;}, {N19, &quot;z&quot;}, {N20, &quot;w&quot;}, {N41, &quot;exz&quot;}, {N42, &quot;jy&quot;}, {N49, &quot;br&quot;}, {N50, &quot;mew&quot;}, {N61, &quot;qr&quot;}, {N62, &quot;xqe&quot;}, {N539, &quot;ycf&quot;}, {N551, &quot;pyu&quot;}, {N559, &quot;wq&quot;}, {N560, &quot;hra&quot;}}; Cases[m, {N19 | N560 | N41 | N61 | N551 | N20 | N539, __}]; TableForm[%] </code></pre> <p>Output from the above code</p> <pre><code>N19 z N20 w N41 exz N61 qr N539 ycf N551 pyu N560 hra </code></pre>
thorimur
63,584
<p>Welcome to MMA SE! The assumption <code>Element[t, Reals]</code> is only used inside <code>Refine</code>, and so in a sense is not &quot;known to&quot; any evaluations outside of that expression. <code>Refine</code> doesn't change the nature of <code>A</code>; it merely spits out an expression produced from its argument evaluated under the provided assumptions. The assumptions don't &quot;remain attached&quot; to the symbol <code>A</code>.</p> <p>You have a couple different ways to proceed:</p> <ol start="0"> <li>Simply <code>Refine[Conjugate[A], {Element[t, Reals], t &gt; 0}]</code>!</li> </ol> <p>In the following I'll mention other general strategies for applying assumptions, using <code>Simplify</code> as a way of applying them, but everything applies equally well to using <code>Refine</code> and other similar expressions in place of <code>Simplify</code>. (Note, though, that to use <code>Simplify</code> with assumptions, though, you need to provide them as an option, e.g. <code> Simplify[Conjugate[A], Assumptions -&gt; {Element[t, Reals]}]</code>)</p> <ol> <li>Set global assumptions via <code>$Assumptions</code>, e.g.</li> </ol> <pre><code>$Assumptions = {Element[t, Reals]} </code></pre> <p>Then e.g. <code>Simplify[Conjugate[t]]</code> will return <code>t</code> and <code>Simplify[Conjugate[A]]</code> will return <code>A</code>. Likewise for <code>Refine[Conjugate[A]]</code>. (Note that <code>Conjugate[t]</code> will still return <code>Conjugate[t]</code>.)</p> <ol start="2"> <li>Wrap your expressions in <code>Assuming</code> and use <code>Simplify</code>/<code>Refine</code>, e.g.</li> </ol> <pre><code>Assuming[{Element[t, Reals]}, Simplify[Conjugate[A]]] </code></pre> <p>This is essentially a local version of the above; it's equivalent to temporarily appending to <code>$Assumptions</code></p> <ol start="3"> <li>Bypass assumptions and set an <a href="https://reference.wolfram.com/language/ref/UpValues.html" rel="noreferrer">upvalue</a> for <code>t</code>:</li> </ol> <pre><code>t /: Conjugate[t] := t </code></pre> <p>This won't make <code>t</code> be assumed to be real in simplifying procedures, in general; it will only replace the actual expression <code>Conjugate[t]</code> with <code>t</code> when encountered. So, it relies on <code>Conjugate[t]</code> being present as-is, and while it does mean you don't have to use <code>Simplify</code>/<code>Refine</code>, it is rather fragile due to how specific it is.</p> <p>Let me know if anything here doesn't make sense or could use expanding upon!</p>
239,544
<p>I'm working with the code below, which works well for my purposes; however, instead of the canonical ordering seen in the output, I want the output produced in the order I give in the Cases line of code. Any help is most appreciated.</p> <pre><code>ClearAll; m = {{N11, &quot;x&quot;}, {N12, &quot;y&quot;}, {N19, &quot;z&quot;}, {N20, &quot;w&quot;}, {N41, &quot;exz&quot;}, {N42, &quot;jy&quot;}, {N49, &quot;br&quot;}, {N50, &quot;mew&quot;}, {N61, &quot;qr&quot;}, {N62, &quot;xqe&quot;}, {N539, &quot;ycf&quot;}, {N551, &quot;pyu&quot;}, {N559, &quot;wq&quot;}, {N560, &quot;hra&quot;}}; Cases[m, {N19 | N560 | N41 | N61 | N551 | N20 | N539, __}]; TableForm[%] </code></pre> <p>Output from the above code</p> <pre><code>N19 z N20 w N41 exz N61 qr N539 ycf N551 pyu N560 hra </code></pre>
E. Chan-López
53,427
<p>Using <strong>ComplexExpand</strong> command:</p> <pre><code>A = 5 t^2 + 3 t + 4; ComplexExpand[Conjugate[A]] </code></pre> <p>For expressions within radicals, we must use the <strong>Refine</strong> command for the <strong>ComplexExpand</strong> command to work properly. Here is an example:</p> <p>Without Refine command:</p> <pre><code> Simplify[ComplexExpand[Conjugate[Sqrt[a - b c]]]] (*((a - b c)^2)^(1/4) (Cos[1/2 Arg[a - b c]] - I Sin[1/2 Arg[a - b c]])*) </code></pre> <p>With <strong>Refine</strong> command for <span class="math-container">$a-bc&gt;0$</span> or <span class="math-container">$a-bc&lt;0$</span></p> <pre><code> Refine[ComplexExpand[Conjugate[Sqrt[a - b c]]], Assumptions -&gt; a - b c &gt; 0] (*Sqrt[a - b c]*) Refine[ComplexExpand[Conjugate[Sqrt[a - b c]]], Assumptions -&gt; a - b c &lt; 0] (*- I*Sqrt[b c - a]*) </code></pre>
195,006
<p>I am not very familiar with mathematical proofs, or the notation involved, so if it is possible to explain in 8th grade English (or thereabouts), I would really appreciate it.</p> <p>Since I may even be using incorrect terminology, I'll try to explain what the terms I'm using mean in my mind. Please correct my terminology if it is incorrect so that I can speak coherently about this answer, if you would.</p> <p>Sequential infinite set: A group of ordered items that flow in a straight line, of which there are infinitely many. So, all integers from least to greatest would be an example, because they are ordered from least to greatest in a sequential line, but an infinite set of bananas would not, since they are not linearly, sequentially ordered. An infinite set of bananas that were to be eaten one-by-one would be, though, because they are iterated through (eaten) one-by-one (in linear sequence).</p> <p>Sequential infinite subsets: Multiple sets within a sequential infinite set that naturally fall into the same order as the items of the sequential infinite set of which they are subsets. So, for example, the infinite set of all integers from least to greatest can be said to have the following two sequential infinite subsets within it: all negative integers; and all positive integers. They are sequential because the negative set comes before the positive set when ordered as stated. They are infinite because they both contain an infinite qty of items, and they are subsets because they are within the greater infinite set of all integers.</p> <p>So I'm wondering if every (not some, but every) sequential infinite set contains within it sequential infinite subsets. The subsets (not the items within them) being sequentially ordered is extremely important. Clearly, a person could take any infinite set, remove one item, and have an infinite subset. Put the item back, remove a different item, and you have multiple infinite subsets. But I need them to be not only non-overlapping, but also sequential in order.</p> <p>Please let me know if this does not make sense, and thank you for dumbing the answer down for me.</p>
Tarnation
37,670
<p>The short answer is, yes. </p> <p>Every infinite set has a countably infinite subset. In lay terms, 'countably infinite' means that we can index each element of the set with an integer, such as $a_5$. Say your original set, call it $S$, has a total ordering on it (I am assuming this is what you mean); this says we have an inequality $&lt;$ such that, for any two elements $x$ and $y$ in the set, either $x&lt;y$, $y&lt;x$, or $x=y$ (this is an abstract inequality, these elements don't have to be numbers).</p> <p>With these conditions in place, we can rewrite our set as $S=\{...,a_{-3},a_{-2},a_{-1},a_0,a_1,a_2,a_3,...\}$, where we have $a_i&lt;a_j$ whenever $i&lt;j$. Notice that we can split up the set in the same ways we can split up the integers, say into elements of odd and even indices, or by indices evenly divisible by three and those that are not. You can see that every such partition splits our original set into multiple, totally ordered sets.</p> <p>If you are interested in properties of infinity and infinite sets, the book Infinity: The Quest to Think the Unthinkable by Brian Clegg is a very accessible and interesting introduction to questions like this. It is written for a layperson, no math skills past junior high school level are required.</p>
3,201,996
<p>I have an 8-digit number and you have an 8-digit number - I want to see if our numbers are the same without either of us passing the other our actual number. Hashing the numbers is the obvious solution. However, if you send me your hashed number and I do not have it - it is very easy to hash all the permutations of an 8-digit number and see your number.</p> <p>I am looking for a way to increase the complexity of the 8-digit number while maintaining uniqueness and a universal process (i.e. we need to be able to apply the same process on both ends.) Squaring the number or something like that will not work because there are the same number of unique squares of an 8 digit number as there are unique 8 digit permutations. Salting will not work for the same reason.</p> <p>Is there anything I can do to the number to make brute-forcing all permutations not viable?</p>
Ross Millikan
1,827
<p>There are only <span class="math-container">$10^8$</span> eight digit numbers, even if you allow leading zeros. No deterministic operation can increase the entropy of this. Your example of squaring is a good one. It makes the number larger, but does not increase the number of results. You need more possible codes, either by adding digits or by enlarging the alphabet, i.e. allowing characters other than digits. </p> <p>One other approach is to make a <em>very</em> slow checking algorithm. If it takes an hour to check a code (and that is acceptable if somebody has the code), brute forcing would require a considerable investment in computing resources. One could still buy <span class="math-container">$10^4$</span> computers, set them running in parallel, and have the answer in a bit over a year (<span class="math-container">$10^8$</span> codes at an hour each, shared among <span class="math-container">$10^4$</span> machines, is <span class="math-container">$10^4$</span> hours).</p>
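<p>For a sense of scale, ordinary fast hashes really are cheap to brute force; here is a rough Wolfram Language sketch (the timing is machine-dependent, and SHA-256 is just a stand-in for whatever hash would actually be used):</p> <pre><code>AbsoluteTiming[Do[Hash[i, &quot;SHA256&quot;], {i, 10^6}];] (* {t, Null}; multiply t by 100 to estimate hashing all 10^8 codes *)</code></pre>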
1,225,122
<p>So I am currently studying a course in commutative algebra and the main object that we are looking at are ideals generated by polynomials in n variables. But the one thing I don't understand when working with these ideals is when we reduce the generating set to something much simpler. For e.g.</p> <p>Consider the Ideal $I$ = $&lt;x^2-4x + 3, x^2 +x -2&gt;$, then since $x -1$ is a common factor of both the polynomials in the generating set we deduce that I is infact $&lt;x-1&gt;$. So my question is what is the criteria that applies when we are reducing the generating set to something much simpler.</p> <p>Based on what I understand, I am guessing in the above example that since every polynomial is divisible by $x-1$ we can say the ideal is generated by $x-1$ (wouldn't this result in the loss of any elements?). But I am not entirely convinced by my reasoning and would prefer to hear it from someone who understands this stuff better. </p> <p>Also using the same reasoning as above can we then say that the ideal $I$ = $&lt;x^3 - x^2 + x&gt;$ = $&lt;x&gt;$ ?</p>
Bernard
202,857
<p>$I=\langle(x-1)(x-3),(x-1)(x+2)\rangle=(x-1)\langle x-3,x+2\rangle =(x-1)A$, since $x-3$ and $x+2$ are coprime: indeed $(x+2)-(x-3)=5$ is a unit in $\mathbb Q[x]$ (or over any field of characteristic $\neq 5$), so $\langle x-3,x+2\rangle=A$ and $I=\langle x-1\rangle$.</p> <p>You can't apply the same reasoning to $\langle x^3-x^2+x\rangle$ and $\langle x\rangle$: here $x^3-x^2+x=x(x^2-x+1)$ and $x^2-x+1$ is not a unit, so $\langle x^3-x^2+x\rangle$ is strictly contained in $\langle x\rangle$.</p>
3,283,600
<p>"Find all the complex roots of the following polynomials</p> <p>A) <span class="math-container">$S(x)=135x^4 -324x^3 +234x^2 -68x+7$</span>, knowing that all its real roots belong to the interval <span class="math-container">$(0.25;1.75)$</span></p> <p>B)<span class="math-container">$M(x)=(x^3 -1+i)(5x^3 +27x^2 -28x+6)$</span> "</p> <p>Well, in A) I don't know how to use the given information about real roots. I mean, I know that I can apply Bolzano but I don't think that's very useful. To find the complex roots I should have some information about a complex root in particular so that I could use Ruffini, but this is not the case.</p> <p>And in B) I know that <span class="math-container">$(x^3 -1+i)$</span> is giving me some information related to a complex root, but that "^3" bothers me. If it wasn't there, I would know that <span class="math-container">$1-i$</span> is a root...</p>
Michael Rozenberg
190,319
<p>Since <span class="math-container">$\frac{1}{3}$</span> is a root of <span class="math-container">$S$</span>, we obtain: <span class="math-container">$$S=135x^4-324x^3+234x^2-68x+7=$$</span> <span class="math-container">$$=135x^4-45x^3-279x^3+93x^2+141x^2-47x-21x+7=$$</span> <span class="math-container">$$=(3x-1)(45x^3-93x^2+47x-7)=$$</span> <span class="math-container">$$=(3x-1)(45x^3-15x^2-78x^2+26x+21x-7)=$$</span> <span class="math-container">$$=(3x-1)^2(15x^2-26x+7)=(3x-1)^2(15x^2-5x-21x+7)=(3x-1)^3(5x-7).$$</span> Since <span class="math-container">$\frac{3}{5}$</span> is a root of <span class="math-container">$5x^3+27x^2-28x+6$</span>, we obtain: <span class="math-container">$$5x^3+27x^2-28x+6=5x^3-3x^2+30x^2-18x-10x+6=(5x-3)(x^2+6x-2)=$$</span> <span class="math-container">$$=(5x-3)((x+3)^2-11)=(5x-3)(x+3-\sqrt{11})(x+3+\sqrt{11}).$$</span> Also, <span class="math-container">$$\sqrt[3]{1-i}=\sqrt[6]2\sqrt[3]{\cos315^{\circ}+i\sin315^{\circ}}=$$</span> <span class="math-container">$$=\sqrt[6]2(\cos(105^{\circ}+120^{\circ}k)+i\sin(105^{\circ}+120^{\circ}k)),$$</span> where <span class="math-container">$k\in\{0,1,2\}$</span>.</p>
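<p>These hand factorisations are easy to double-check, for instance in Mathematica:</p> <pre><code>{Factor[135 x^4 - 324 x^3 + 234 x^2 - 68 x + 7], Factor[5 x^3 + 27 x^2 - 28 x + 6]} (* {(-1 + 3 x)^3 (-7 + 5 x), (-3 + 5 x) (-2 + 6 x + x^2)} *)</code></pre>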
588,488
<p>I know that for the harmonic series $\lim_{n \to \infty} \frac1n = 0$ and $\sum_{n=1}^{\infty} \frac1n = \infty$.</p> <p>I was just wondering, is there a sequence ($a_n =\dots$) that converges "faster" (I am not entirely sure what's the exact definition here, but I think you know what I mean...) than $\frac1n$ to $0$ and its series $\sum_{n=1}^{\infty}{a_n}= \infty$?</p> <p>If not, is there proof of that?</p>
Eric Thoma
35,667
<p>If we use the notion of a partial sum:</p> <p>$$S_n = \sum_{k=1}^n a_k$$</p> <p>you are asking for a series in which the partial sums diverge more slowly than $\sum_{k=1}^n 1/k$ as $n\to\infty$. For this to happen, we just need to find $a_k$ such that $a_k &lt; 1/k$ for all $k&gt;c$ where $c$ is some value.</p> <p>Look at $a_k = \frac{1}{k\ln k}$. Let's do the integral test to show this diverges. The substitution used is $u = \ln k$ so that $du = dk/k$.</p> <p>$$\int \frac{dk}{k\ln k}=\int \frac{du}{u}=\ln u = \ln {\ln k}$$</p> <p>Putting in limits of $2$ and $\infty$ shows that this indeed diverges. And for $k&gt;c=e$, we have that $1/(k\ln k) &lt; 1/k$.</p>
249,332
<p>Consider the following list</p> <pre><code>list={{{0,0,0},0},{{0,0,1},a},{{0,0,-1},-a},{{1,0,1},b},{{1,0,0},-b},{{1,1,1},a+b},{{1,1,1},a-b},{{-1,0,-1},{-a-b}},{{-1,0,-1},{-a+b}}}; </code></pre> <p>How can this list be sorted assuming <code>a&gt;b&gt;0</code>, so that the expected result would be</p> <pre><code>list={{{1,1,1},a+b},{{0,0,1},a},{{1,0,1},b},{{1,1,1},a-b},{{0,0,0},0},{{-1,0,-1},{-a+b}},{{1,0,0},-b},{{0,0,-1},-a},{{-1,0,-1},{-a-b}}}; </code></pre> <p>In this specific example we also have that <code>2b&gt;a</code>; however, it is not that important.</p>
Lukas Lang
36,508
<p>After fixing your list (the last two items had a list wrapped around the second element), the following works:</p> <pre><code>list = {{{0, 0, 0}, 0}, {{0, 0, 1}, a}, {{0, 0, -1}, -a}, {{1, 0, 1}, b}, {{1, 0, 0}, -b}, {{1, 1, 1}, a + b}, {{1, 1, 1}, a - b}, {{-1, 0, -1}, -a - b}, {{-1, 0, -1}, -a + b}}; Sort[list, FullSimplify[#[[2]] &gt; #2[[2]], a &gt; b &gt; 0 &amp;&amp; 2 b &gt; a] &amp;] (* {{{1, 1, 1}, a + b}, {{0, 0, 1}, a}, {{1, 0, 1}, b}, {{1, 1, 1}, a - b}, {{0, 0, 0}, 0}, {{-1, 0, -1}, -a + b}, {{1, 0, 0}, -b}, {{0, 0, -1}, -a}, {{-1, 0, -1}, -a - b}} *) </code></pre> <p>This works by supplying a <a href="https://reference.wolfram.com/language/ref/Sort.html#359818967" rel="nofollow noreferrer">custom comparison function</a> to <a href="https://reference.wolfram.com/language/ref/Sort.html" rel="nofollow noreferrer"><code>Sort</code></a> that simplifies &quot;left&quot;&gt;&quot;right&quot; given the assumptions on <code>a</code> and <code>b</code>.</p>
312,649
<p>Hello, how do I show the following:</p> <p>Let $(X,\tau)$ be a topological space; then a single point is compact but not necessarily closed.</p> <p>Thank you!</p>
Brian M. Scott
12,042
<p>HINT: Let $X$ be the <a href="http://en.wikipedia.org/wiki/Sierpi%C5%84ski_space" rel="nofollow">Sierpiński space</a>: $X=\{0,1\}$, and the open sets are $\varnothing,\{1\}$, and $X$. Is $\{1\}$ a compact set in $X$? Is it a closed set?</p>
312,649
<p>Hello, how do I show the following:</p> <p>Let $(X,\tau)$ be a topological space; then a single point is compact but not necessarily closed.</p> <p>Thank you!</p>
Community
-1
<p>Take $\Bbb{R}$ with the trivial topology. The set $\{0\}$ is compact but is not closed.</p>
757,917
<p>According to <a href="http://www.wolframalpha.com/input/?i=sqrt%285%2bsqrt%2824%29%29-sqrt%282%29%20=%20sqrt%283%29" rel="nofollow">wolfram alpha</a> this is true: $\sqrt{5+\sqrt{24}} = \sqrt{3}+\sqrt{2}$</p> <p>But how do you show this? I know of no rules that work with addition inside square roots.</p> <p>I noticed I could do this:</p> <p>$\sqrt{24} = 2\sqrt{3}\sqrt{2}$</p> <p>But I still don't see how I should show this, since $\sqrt{5+2\sqrt{3}\sqrt{2}} = \sqrt{3}+\sqrt{2}$ still contains that addition.</p>
lab bhattacharjee
33,337
<p>$$5+\sqrt{24}=(\sqrt3)^2+(\sqrt2)^2+2\cdot\sqrt2\cdot\sqrt3=(\sqrt3+\sqrt2)^2$$</p>
3,684,331
<p>I am working on a problem from my Qual.</p> <p>"Let <span class="math-container">$T:V\to V$</span> be a bounded linear map where <span class="math-container">$V$</span> is a Banach space. Assume for each <span class="math-container">$v\in V$</span>, there exists <span class="math-container">$n$</span> s.t. <span class="math-container">$T^n(v)=0$</span>. Prove that <span class="math-container">$T^n=0$</span> for some <span class="math-container">$n$</span>."</p> <p>My impression is that it looks like an algebraic problem. But I know nothing about <span class="math-container">$V$</span> (not finitely generated or something like that). So this one is off.</p> <p>The hypothesis seems to be set up for the Uniform Boundedness Principle. Indeed, for each <span class="math-container">$v$</span>, <span class="math-container">$\sup_n \{||T^n(v)|| \}$</span> is bounded, so by the principle, <span class="math-container">$\sup_n ||T^n||$</span> is bounded. </p> <p>I'm stuck here. I don't think there is a relation between <span class="math-container">$||T^n||$</span> and <span class="math-container">$||T||$</span>; even if there is, I cannot shake the feeling that the sequence <span class="math-container">$||T^n||$</span> may be eventually constant. I am hoping <span class="math-container">$||T^n||=0$</span> eventually but have failed to prove it.</p>
Conrad
298,272
<p>Maybe an easier way to do it, though conceptually similar to @Integrand's.</p> <p>By multiplying out and simplifying the terms, while noting that <span class="math-container">$x(x+\pi)^2-x^3=\pi x(x+2\pi)=\pi(x+\pi)^2-\pi^3$</span>, the inequality is equivalent to:</p> <p><span class="math-container">$(\pi-\sin x)(x+\pi)^2 \ge \pi^3, x \ge 0$</span></p> <p>But now <span class="math-container">$\pi-\sin x \ge \pi -1$</span> and we see that <span class="math-container">$(\pi+1)^2(\pi-1)=(\pi^2-1)(\pi+1)=\pi^3+\pi^2-\pi-1&gt;\pi^3$</span>, so the inequality is automatically true for <span class="math-container">$x \ge 1$</span>.</p> <p>Then taking derivatives for <span class="math-container">$f(x)=(\pi-\sin x)(x+\pi)^2$</span> we have <span class="math-container">$f'(x)=2(x+\pi)(\pi-\sin x)-(x+\pi)^2 \cos x$</span> and for <span class="math-container">$0 \le x \le 1$</span>, <span class="math-container">$2(\pi-\sin x)-(x+\pi) \cos x \ge 2\pi-2 -\pi-1= \pi -3 &gt;0$</span>, so <span class="math-container">$f$</span> is increasing there, hence <span class="math-container">$f(x) \ge f(0)=\pi^3$</span> and the inequality holds in this interval too. So we are done!</p>
268,635
<p>Consider the triangle formed by randomly distributing three points on a circle. What is the probability that the center of the circle is contained within the triangle?</p>
Christian Blatter
1,303
<p>Let $A$ be the first point chosen, $M$ the center of the circle, and $AA'$ the diameter through $A$. The probability that the other two points $B$ and $C$ lie to either side of $AA'$ is ${1\over2}$. Given that this is the case, the central angles $X:=\angle(AMB)$ and $Y:=\angle(AMC)$ are independently and uniformly distributed in $[0,\pi]$. The probability that $X+Y\geq\pi$ corresponds to the area of half a square cut by a diagonal, so it is ${1\over2}$ again. Therefore the probability we are looking for amounts to ${1\over2}\cdot{1\over2}={1\over4}$.</p>
268,635
<p>Consider the triangle formed by randomly distributing three points on a circle. What is the probability of the center of the circle be contained within the triangle?</p>
Rayhunter
76,236
<p>Take any triangle that contains the centre. Connect each vertex with the centre and continue the lines until they cross the circle. Now form the three 'outer' triangles, as shown in Figure 1. </p> <p>These sets of 4 triangles are uniquely determined by any one of them, meaning that you can reconstruct all of them if you have any one. 1 of the 4 always contains the centre and 3 of them don't. In other words, the triangles that do not contain the centre are three times as many as those that contain it. </p> <p>Therefore, the probability sought must be <span class="math-container">$\frac14$</span>.</p> <p><a href="https://i.stack.imgur.com/UzR9V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UzR9V.png" alt="Figure 1"></a></p> <p>So far so good...</p> <p>What was not considered in this approach are the special cases when the centre lies on one of the sides of the triangle initially picked to contain the centre. Then there are 2 uniquely determined triangles both of which contain the centre. It would make sense that these cases are so negligibly few that they don't affect the overall probability. I could possibly be missing other cases as well. </p> <p>Any feedback is welcome.</p> <p><a href="https://i.stack.imgur.com/4WHRf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4WHRf.png" alt="Figure 2"></a></p>
1,058,515
<p>Determine whether $v_1=(1,1,2)$, $v_2=(1,0,1)$ and $v_3=(2,2,3)$ span $\mathbb{R}^3$.</p> <p>I think that I must prove that they are linearly independent.</p> <p>Let $α_1v_1+α_2v_2+α_3v_3=(0,0,0)$.</p> <p>So what next? Is this a correct way, or am I completely wrong?</p> <p>Please guide me to understand this.</p> <p>Thanks</p>
ml0105
135,298
<p>You have three vectors and $dim(\mathbb{R}^{3}) = 3$. So all you have to test for is linear independence. You can do this by constructing a matrix $A$ as JohnD suggested. Then just take $det(A)$. The vectors form a basis iff $det(A) \neq 0$.</p>
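<p>A quick check of the determinant route in Mathematica (here I call the matrix <code>A</code> and take the vectors as rows; the determinant is the same if you use them as columns):</p> <pre><code>A = {{1, 1, 2}, {1, 0, 1}, {2, 2, 3}}; Det[A] (* 1 *)</code></pre> <p>Since the determinant is nonzero, the three vectors are linearly independent and hence form a basis of $\mathbb{R}^{3}$.</p>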
1,058,515
<p>Determine whether $v_1=(1,1,2)$, $v_2=(1,0,1)$ and $v_3=(2,2,3)$ span $\mathbb{R}^3$.</p> <p>I think that I must prove that they are linearly independent.</p> <p>Let $α_1v_1+α_2v_2+α_3v_3=(0,0,0)$.</p> <p>So what next? Is this a correct way, or am I completely wrong?</p> <p>Please guide me to understand this.</p> <p>Thanks</p>
Community
-1
<p>There is a theorem in linear algebra which says that any set of $n$ linearly independent vectors in an $n$-dimensional space is a basis for that space. $\Bbb R^3$ is $3$-dimensional (I assume you've proven this at some point this semester?) and you've got $3$ vectors. Thus those vectors form a basis for $\Bbb R^3$ if and only if they are linearly independent. Below I've provided a few methods of determining whether or not they are linearly independent.</p> <p><strong>Method #1</strong></p> <p>Construct the following matrix: $$\begin{bmatrix} 1 &amp; 1 &amp; 2 \\ 1 &amp; 0 &amp; 2 \\ 2 &amp; 1 &amp; 3 \end{bmatrix}$$</p> <p>Now use Gauss-Jordan elimination to determine the rank of the matrix. If it is $3$ (the number of columns), then the columns are linearly independent.</p> <p><strong>Method #2</strong></p> <p>Construct the same matrix as above. Find the determinant of that matrix. If it is nonzero, then the columns are linearly independent.</p> <p><strong>Method #3</strong></p> <p>Look at the equation $a_1(1,1,2) + a_2(1,0,1) + a_3(2,2,3) = (0,0,0)$.</p> <p>This is equivalent to the set of scalar equations: $$\begin{cases} a_1+a_2+2a_3 = 0 \\ a_1+2a_3=0 \\ 2a_1 + a_2 +3a_3 = 0\end{cases}$$</p> <p>From the second equation we immediately see that $a_1 = -2a_3$. Plugging this into the first equation we get $(-2a_3)+a_2 +2a_3 = a_2 = 0$, so $a_2=0$. Plugging both of these into the third equation we get $2(-2a_3)+0+3a_3=-a_3=0 \implies a_3=0$, and then $a_1=-2a_3=0$ as well. So $a_1=a_2=a_3=0$. Therefore these $3$ vectors are linearly independent.</p> <p><strong>Method 4</strong></p> <p>We know that $(1,0,0), (0,1,0), (0,0,1)$ is a basis for this space. So if each of these standard basis vectors can be represented by a linear combination of your three vectors $v_1, v_2, v_3$, then $\operatorname{span}((1,0,0), (0,1,0), (0,0,1)) \subseteq \operatorname{span}(v_1, v_2, v_3)$. Because there are $3$ of them, we can see that $\{v_1, v_2, v_3\}$ will thus be a basis for $\Bbb R^3$ and thus linearly independent.</p> <p><strong>Method 5</strong></p> <p>Take the <a href="http://en.wikipedia.org/wiki/Exterior_algebra" rel="nofollow">exterior product</a> of the three vectors, $v_1 \wedge v_2 \wedge v_3$. If it is nonzero, then the three vectors are linearly independent.</p>
3,080,664
<p>Hi I've almost completed a maths question I am stuck on I just can't seem to get to the final result. The question is:</p> <p>Find the Maclaurin series of <span class="math-container">$g(x) = \cos(\ln(x+1))$</span> up to order 3.</p> <p>I have used the formulas which I won't type out as I'm not great with Mathjax yet sorry. But I have obtained:</p> <p><span class="math-container">$$ \ln(x+1) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} $$</span></p> <p><span class="math-container">$$ \cos x = 1 - \frac{x^2}{2} + \frac{x^4}{24} - \frac{x^6}{720} $$</span></p> <p>I'm sure what I have done above is correct however I can't seem to get to the final solution shown in the answers and would like some help please. Thanks </p>
José Carlos Santos
446,262
<p>Since<span class="math-container">$$\cos x=1-\frac{x^2}2+\cdots,$$</span>you have<span class="math-container">\begin{align}\cos\bigl(\log(x+1)\bigr)&amp;=1-\frac12\left(x-\frac{x^2}2+\frac{x^3}3-\cdots\right)^2+\cdots\\&amp;=1-\frac{x^2}2+\frac{x^3}2+\cdots\end{align}</span>Note that this argument works because the series for <span class="math-container">$\log(x+1)$</span> has no constant term. A consequence of this is that no matter how more terms you use from both series, you will get no new term whose degree is smaller than <span class="math-container">$4$</span>.</p>
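<p>A quick sanity check of the expansion (in Mathematica, say):</p> <pre><code>Series[Cos[Log[1 + x]], {x, 0, 3}] (* 1 - x^2/2 + x^3/2 + O[x]^4 *)</code></pre>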
155,373
<p>Consider $X,Y \subseteq \mathbb{N}$. </p> <p>We say that $X \equiv Y$ iff there exists a bijection between $X$ and $Y$.</p> <p>We say that $X \equiv_c Y$ iff there exist a bijective computable function between $X$ and $Y$. </p> <p>Can you show me some examples in which the two concepts disagree?</p>
Kaveh
468
<p>The structure with the computable equivalence (which is defined as $A\leq_T B$ and $B \leq_T A$) is called <a href="http://en.wikipedia.org/wiki/Turing_degree" rel="nofollow">Turing degrees</a> and has a very rich structure (unlike the usual bijection).</p>
1,220,002
<p>I am performing the ratio test on series and I come across many situations where I have any of those 3 in the numerator and denominator and I was wondering what I could cancel as n approaches infinity.</p>
user157227
157,227
<p>If I am interpreting your question correctly, here is how you can relate $n^n$ and $(n+1)^n$.</p> <p>$$\lim_{n\to\infty} \frac{(n+1)^n}{n^n} = \lim_{n\to\infty} \left(\frac{n+1}{n}\right)^n = \lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^n = e.$$</p> <p>More generally,</p> <p>$$\lim_{n\to\infty} \frac{(n+k)^n}{n^n} = \lim_{n\to\infty} \left(\frac{n+k}{n}\right)^n = \lim_{n\to\infty} \left(1 + \frac{k}{n}\right)^n = e^k.$$</p> <p>If you want to compare $n^n$ and $n!$ it might help to write $$\frac{n^n}{n!} = 1\cdot \frac{n}{n-1} \cdot \frac{n}{n-2} \cdots \frac{n}{2} \cdot \frac{n}{1} \geq n$$ which is a crude way to see that the limit must tend to $+\infty$.</p>
1,220,002
<p>I am performing the ratio test on series and I come across many situations where I have any of those 3 in the numerator and denominator and I was wondering what I could cancel as n approaches infinity.</p>
Alessandro17
224,351
<p>You should know the fundamental limit $$ \lim_{n\to +\infty} \frac{(n+1)^n}{n^n} = \lim_{n\to +\infty} \Bigl(\frac{n+1}{n}\Bigr)^n = \lim_{n\to +\infty} \Bigl(1 + \frac{1}{n}\Bigr)^n = e $$ so when $n$ approaches $+\infty$ the expressions $n^n$ and $(n+1)^n$ differ only by a constant factor (asymptotically, the factor is $e$). On the other hand we have $$ \lim_{n\to +\infty} \frac{n!}{n^n} = 0$$ and since we know the previous estimate, this limit also gives the relation between $(n+1)^n$ and $n!$. Another useful method is to use Stirling's approximation, which contains all the information explained here.</p>
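<p>Both limits are easy to confirm symbolically, e.g. in Mathematica:</p> <pre><code>{Limit[(1 + 1/n)^n, n -&gt; Infinity], Limit[n!/n^n, n -&gt; Infinity]} (* {E, 0} *)</code></pre>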
1,906,332
<p>I know every complex differentiable function is continuous. I would like to know if the converse is true. If not, could someone give me some counterexamples?</p> <p><strong>Remark:</strong> I know this is not true for the real functions (e.g. $f(x)=|x|$ is a continuous function in $\mathbb R$ which is not differentiable at the origin).</p>
Surb
154,545
<p>No: for example, take any non-constant continuous function such that $f(z)\in \mathbb R$ for all $z$; such a function cannot be complex differentiable everywhere. As you mentioned, $$f(z)=|z|$$ is a good counter-example. But a beautiful property is that any complex differentiable function is infinitely differentiable. </p>
1,906,332
<p>I know every complex differentiable function is continuous. I would like to know if the converse is true. If not, could someone give me some counterexamples?</p> <p><strong>Remark:</strong> I know this is not true for the real functions (e.g. $f(x)=|x|$ is a continuous function in $\mathbb R$ which is not differentiable at the origin).</p>
operatorerror
210,391
<p>A classic example is $f(z)=\bar{z}$. It is continuous everywhere, but the Cauchy–Riemann equations fail at every point (writing $f=u+iv$ with $u=x$ and $v=-y$ gives $u_x=1\neq -1=v_y$), so it is not complex differentiable at any point; in particular there is no open disk on which it is holomorphic.</p>
11,724
<p>What is the compelling need for introducing a theory of $p$-adic integration?</p> <p>Do the existing theories of $p$-adic integration use some kind of analogues of Lebesgue measures? That is, do we put a Lebesgue measure on $p$-adic spaces, and just integrate real or complex valued functions on $p$-adic spaces, or is something more possible like integrating $p$-adic valued functions on $p$-adic spaces? What is the machinery used?</p> <p>Then again, does the integration on spaces like $\mathbb C_p$ give something more than the usual integration in real analysis? I mean, the integration of complex valued functions of complex variables, or more precisely holomorphic functions, is much a much more interesting topic than measure theory. Is a similar analogue true in $p$-adic cases?</p> <p>I have also seen mentioned that Grothendieck's cohomology theories like etale cohomology, crystalline cohomology etc., fit into such $p$-adic integration theories. What could possibly be the connection?</p>
Community
-1
<p>I must say I do not know anything much about it; but the following is the general idea. </p> <p>Note that in the case of cohomology theories for smooth projective varieties over $\mathbb C$, there are various canonical isomorphisms between de Rham cohomology, Betti cohomology, etale cohomology, etc.. For instance the isomorphism between de Rham cohomology and Betti cohomology involves integrating differential forms over homology classes. Actually the situation is a bit more intricate; the de Rham cohomology decomposes into various other spaces via the Hodge decomposition. This whole stuff is all done over the complex numbers.</p> <p>In the $p$-adic case, an analogous theory is built, called the $p$-adic Hodge theory. Some details are available at the <a href="http://en.wikipedia.org/wiki/P-adic_Hodge_theory">wikipedia page</a>. There is an isomorphism between algebraic de Rham cohomology and $p$-adic etale cohomology, as per the Hodge-Tate conjecture. There is another conjectural functor from the algebraic de Rham cohomology to $p$-adic etale cohomology, which is called Grothendieck's mysterious functor. A theory for this was constructed by Jean-Marc Fontaine and it was applied in various situations such as the study of $p$-adic Galois representations. </p> <p>The theories of $p$-adic integration are useful in connection with such investigations. Pierre Colmez has a theory and so does Robert Coleman. Coleman's homepage has a <a href="http://math.berkeley.edu/~coleman/fontaine.html">course</a> on this material. Also see Perrin-Riou's Asterisque volume [3]:</p> <p>[3]: Fonctions L p-adiques des représentations p-adiques, Perrin-Riou, B.</p>
164,321
<p>I'm searching for a suitable (hopefully simple enough) solution to the following form of integral:</p> <p>$$\int_0^\infty \mathrm{d}x~x^n J_\nu(a x) J_\nu(b x) K_\mu(c x) $$</p> <p>Where $n$, $\nu$, and $\mu$ are all integers, and $a$, $b$, and $c$ are all real and positive.</p> <p>If not generally, a specific case would be quite helpful:</p> <p>$$\int_0^\infty \mathrm{d}x~x^2 J_\nu(a x) J_\nu(b x) K_1(c x) $$</p> <p>I am aware of the following:</p> <ul> <li>Gradshteyn &amp; Ryzhik eq. (6.522.3), which calculates the following integral as a relatively simple function:</li> </ul> <p>$$\int_0^\infty \mathrm{d}x~x K_0(ax) J_\nu(b x) J_\nu(c x)$$</p> <ul> <li><a href="http://dx.doi.org/10.1063/1.526062?ver=pdfcov" rel="nofollow">Gervois - integrals of three Bessel functions of the first kind</a>: by manipulating the order of the last Bessel I could change the order of the last Bessel function, but I don't know how to "replace" it by $K_\mu$.</li> <li><a href="http://dx.doi.org/10.1002/zamm.200310059" rel="nofollow">Fabrikant - Computation of infinite integrals involving three Bessel functions by introduction of new formalism</a>: helpful formulas, but unfortunately his eq. (21) and (22) is not valid in the case above, so all the results in his discussion are useless to me as they start from eq. (21). He does use this formula to "replace" $K_\mu$ by $J_\mu$ in eq. (24), using <a href="http://dlmf.nist.gov/10.27#E9" rel="nofollow">a property relating the two functions (or at least something close to what this links to)</a>:</li> </ul> <p>$$ \pi \mathrm{i} J_\nu(z) = \mathrm{e}^{-\nu\pi\mathrm{i}/2} K_\nu(-\mathrm{i} z) - \mathrm{e}^{\nu\pi\mathrm{i}/2} K_\nu(\mathrm{i} z), ~~~~|~\mathrm{arg}~z~| \leq \pi/2$$</p> <p>I simply fail to see how this formula results in a simple change in his equations, especially because of the complex nature of the arguments introduced by the above formula, and the involved definition of $K_\mu$ for integer order.</p> <p>Should I instead be looking for specific application of G&amp;R eq. 6.522.17-18, which could provide the required formulas, or are there better approaches to this problem? It seems that Fabrikant in the above-linked article says the validity of these formulas is more strictly bounded than shown in G&amp;R.</p> <p>Any help here would be much appreciated.</p>
Zurab Silagadze
32,389
<p>The integral should be calculable by the Mellin-transform technique. See the calculation of the similar (but different) integral in <a href="http://www.opticsinfobase.org/josaa/abstract.cfm?uri=josaa-7-7-1218" rel="nofollow">http://www.opticsinfobase.org/josaa/abstract.cfm?uri=josaa-7-7-1218</a> (Analysis of propagation through turbulence: evaluation of an integral involving the product of three Bessel functions, by Glenn A. Tyler).</p> <p>P.S. As for the V.I. Fabrikant's article. There is a tragic story behind it. He suffered from a severe personality disorder and after being denied tenure because of his erratic behaviour, he walked into the department and shot and killed 4 people. The article about computation of integrals involving three Bessel functions he wrote in the jail. See <a href="http://xcorr.net/2013/01/17/killer-among-us-dr-fabrikant/" rel="nofollow">http://xcorr.net/2013/01/17/killer-among-us-dr-fabrikant/</a> </p> <p>"O King, most high and wise Lord; How incomprehensible are thy judgments, and inscrutable thy ways!"</p>
4,330,852
<blockquote> <p>Let <span class="math-container">$A_1 ,\ldots ,A_m$</span> be subsets of an <span class="math-container">$n$</span>-element set such that <span class="math-container">$|A_i \cap A_j |\leq t$</span> for all <span class="math-container">$i\neq j$</span>. Prove that <span class="math-container">$$\sum_{i=1}^{m}|A_i |\leq n+t{m \choose 2}$$</span></p> </blockquote> <p><strong>What I've tried:</strong></p> <p>I've tried to use the following lemma to prove the fact above:</p> <p><em>Lemma: Let <span class="math-container">$X$</span> be a set of <span class="math-container">$n$</span> elements, and let <span class="math-container">$A_1 ,\ldots ,A_m$</span> be subsets of <span class="math-container">$X$</span> of average size at least <span class="math-container">$n/w$</span>. If <span class="math-container">$N\geq 2w^2$</span>, then there exist <span class="math-container">$i\neq j$</span> such that <span class="math-container">$|A_i \cap A_j |\geq \frac{n}{2w^2}$</span>.</em></p> <p>Now by contrary, let <span class="math-container">$\sum_{i=1}^{m}|A_i |\geq n+t{m \choose 2}+1$</span>. Then <span class="math-container">$\frac{1}{m}\sum_{i=1}^{m}|A_i |\geq \frac{n+t{m \choose 2}+1}{m}$</span>, where <span class="math-container">$\frac{1}{m}\sum_{i=1}^{m}|A_i |$</span> is the average size of <span class="math-container">$A_1 ,\ldots ,A_m$</span>. But now I don't know what <span class="math-container">$w$</span> is here. I was wondering if someone could help me.</p>
nonuser
463,553
<p>Use the <a href="https://en.wikipedia.org/wiki/Boole%27s_inequality#Bonferroni_inequalities" rel="nofollow noreferrer">Bonferroni inequality</a>: <span class="math-container">$$n\geq \Big|\bigcup _{i=1}^m A_i\Big| \geq \sum _{i=1}^m|A_i|-\sum _{1\leq i&lt;j\leq m} |A_i\cap A_j| \geq \sum _{i=1}^m|A_i| - t\cdot {m\choose 2}$$</span></p>
4,618,975
<p>I've read the definition of the ring homomorphism:<br /> <strong>Definition</strong>. Let <span class="math-container">$R$</span> and <span class="math-container">$S$</span> be rings. A ring homomorphism is a function <span class="math-container">$f : R → S$</span> such that:<br /> (a) For all <span class="math-container">$x, y ∈ R, f(x + y) = f(x) + f(y).$</span><br /> (b) For all <span class="math-container">$x, y ∈ R, f(xy) = f(x)f(y)$</span>.<br /> (c) <span class="math-container">$f(1) = 1.$</span></p> <hr /> <p>I want to see some examples of ring homomorphism <span class="math-container">$f : R → S$</span>, where <span class="math-container">$R$</span> is arbitrary. So, I consider <span class="math-container">$S=R/I$</span> , where <span class="math-container">$I$</span> is an ideal of <span class="math-container">$R$</span>. I can also imbed <span class="math-container">$R$</span> in a <em>Cartesian product</em>.</p> <hr /> <p><strong>What else?</strong> Are there some <em>famous</em> ring homomorphisms <span class="math-container">$f : R \to S$</span> for <span class="math-container">$R$</span> arbitrary?</p> <hr /> <p>Thank you.</p>
Math.mx
15,654
<p><span class="math-container">$f\colon R\longrightarrow R\otimes_\mathbb{Z} S$</span> where <span class="math-container">$S$</span> is another unital ring given by <span class="math-container">$f(r)=r\otimes 1$</span></p> <p><span class="math-container">$f\colon R\longrightarrow M_n(R)$</span> given by <span class="math-container">$f(r)=rI_n$</span> where <span class="math-container">$I_n$</span> is the identity matrix in <span class="math-container">$M_n(R)$</span>.</p>
1,856,409
<p>In a business with 80 workers, 7 of them are angry. If the business leader visits and picks 12 randomly, what is the probability of picking 12 where exactly 1 is angry? </p> <p>(7/80)*(73/79)*(72/78)*(71/77)*(70/76)*(69/75)*(68/74)*(67/73)*(66/72)*(65/71)*(64/70)*(63/69)*12=0.4134584151106464</p> <p>What is the probability more than 2 are angry? </p> <p>My idea is to calculate the probability of 2, 3, 4, 5, 6, and 7 angry people just like I did in the previous example and then add them all together. </p> <p>In the previous example I could seat the one angry person in any of the 12 different spots, and then multiply by 12. The problem I have now is, how many times can I seat 2 people in 12 spots? If I use the combinatorics formula I will get a negative factorial.</p> <p>There must be a much easier way than this.</p>
Arthur
15,500
<p>There is one easier way to do it, which even lets you directly use the result from the first part as part of your calculations: "at least $2$" and "at most $1$" are complementary events, which means that the probability of one of them happening is equal to one minus the probability of the other happening, or written with more symbols: $$ P(\text{at least }2)=1-P(\text{at most }1) $$ And "at most $1$" takes a lot fewer steps to calculate (although technically, it's done in exactly the way you planned to calculate "at least $2$").</p>
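<p>For reference, the whole computation can be phrased with hypergeometric probabilities; a quick Wolfram Language sketch (the helper name <code>p</code> is mine):</p> <pre><code>p[k_] := Binomial[7, k] Binomial[73, 12 - k]/Binomial[80, 12]; N[{p[1], 1 - p[0] - p[1]}] (* {0.413458, 0.28137} *)</code></pre> <p>The first number reproduces the value found in the question; the second is the probability that at least $2$ of the chosen workers are angry.</p>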
1,519,251
<p>This question was in a school maths challenge. I don't know how to approach this one; any help would be appreciated.</p>
JMoravitz
179,297
<p>First, note that $n!$ will end in a number of zeroes equal to $\sum\limits_{i=1}^\infty \lfloor\frac{n}{5^i}\rfloor$</p> <p>Alternatively, this could be written as $\sum\limits_{i=1}^n a_i$ where $a_i$ is the largest integer value such that $5^{a_i}\mid i$.</p> <p>This can be seen by the fact that in the expansion of $n!$, each term divisible by $5$ will contribute a zero to the end, with repetition considered for those numbers which are divisible by higher powers of $5$.</p> <p>For example, $30!$ ends in $7$ zeroes.</p> <p>In your specific question then, find an appropriate range of values of $n$ such that the number of zeroes at the end is $100$ and pick the value such that $n$ is furthermore divisible by $8$.</p> <p>Interesting to note, not every number of zeroes at the end of $n!$ is possible. Below is the beginning of a chart:</p> <p>$\begin{array}{|c|c|c|c|c|c|c}\hline \text{number of zeroes at end of}~n!&amp;1&amp;2&amp;3&amp;4&amp;5&amp;6&amp;\dots&amp;\\ \hline \text{range of}~n&amp;5-9&amp;10-14&amp;15-19&amp;20-24&amp;none&amp;25-29&amp;\dots&amp;\\ \hline\end{array}$</p> <p>Skipping ahead, we have</p> <p>$\begin{array}{|c|c|c|c|c|c|c}\hline \text{number of zeroes at end of}~n!&amp;96&amp;97&amp;98&amp;99&amp;\dots&amp;\\ \hline \text{range of}~n&amp;390-394&amp;395-399&amp;none&amp;400-404&amp;\dots&amp;\\ \hline\end{array}$</p> <blockquote class="spoiler"> <p> We see then that the range $405-409$ gives $100$ zeroes at the end of $n!$. The only such $n$ that is divisible by $8$ is $408$.</p> </blockquote>
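<p>If you want to let the computer do the hunting, a short Wolfram Language sketch (<code>zeros</code> is my own helper name):</p> <pre><code>zeros[n_] := Sum[Floor[n/5^i], {i, 1, 10}]; Select[Range[400, 420], zeros[#] == 100 &amp;] (* {405, 406, 407, 408, 409} *)</code></pre> <p>Of these, only $408$ is divisible by $8$.</p>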
275,527
<pre><code>Sqrt[Matrix[( { {0, 1}, {-1, 0} } )]] /. f_[Matrix[x__]] :&gt; Matrix[MatrixFunction[f, x]] </code></pre> <p><code>Matrix</code> is an undefined symbol but I want to define some substitutions with it.</p>
mikado
36,788
<p>If you look at the <code>FullForm</code> of <code>Sqrt</code>, you see that it is silently converted to a <code>Power</code>, a function taking two arguments, not one. Your pattern only matches functions of a single argument.</p> <pre><code>FullForm[Sqrt[x]] (* Power[x, Rational[1, 2]]*) </code></pre>
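<p>Building on this, a pattern that matches <code>Power</code> directly should do what the question's rule was aiming for — a minimal sketch (here I use <code>MatrixFunction</code> with a pure function as the scalar function; adapt as needed):</p> <pre><code>Sqrt[Matrix[{{0, 1}, {-1, 0}}]] /. Power[Matrix[x__], p_] :&gt; Matrix[MatrixFunction[#^p &amp;, x]] (* the principal matrix square root, wrapped in Matrix *)</code></pre>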
602,248
<p>This is the system of equations: $$\sqrt { x } +y=7$$ $$\sqrt { y } +x=11$$</p> <p>It's pretty visible that the solution is $(x,y)=(9,4)$.</p> <p>For this, I put $x={ p }^{ 2 }$ and $y={ q }^{ 2 }$. Then I subtracted one equation from the other such that I got $4$ on the RHS and factorized the LHS to get two factors in terms of $p$ and $q$.</p> <p>Then $4$ can be represented as $2*2$, $4*1$ or $1*4$. Comparing the two factors on both sides, I got the solution.</p> <p>As you can see, the major drawback here is that I assumed this system has only integral solutions and then went further. Is there any way I can prove that this system indeed has only integral solutions, or is there any other elegant way to solve this question?</p>
Igor Rivin
109,865
<p>Solve the first equation for $x,$ the second for $y,$ to get</p> <p>$$ x = (y-7)^2.$$</p> <p>$$ y = (x-11)^2.$$</p> <p>Substitute the value for $x$ into the second equation, to get:</p> <p>$$y = ((y-7)^2 -11)^2 = (y^2-14 y + 38)^2.$$ This is a quartic equation, but dividing through by $y-4,$ you get an irreducible cubic $y^3-24y^2+176 y - 361.$ Now, being cubic, it has at least one real solution, which would indicate that there is at least one more real pair $(x, y).$ </p> <p><strong>EDIT</strong> Mathematica seems to think that the only solution is $9, 4.$</p>
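<p>For the record, the check mentioned in the EDIT is easy to reproduce (a quick sketch):</p> <pre><code>Solve[{Sqrt[x] + y == 7, Sqrt[y] + x == 11}, {x, y}, Reals] (* {{x -&gt; 9, y -&gt; 4}} *)</code></pre> <pre><code>Factor[(y^2 - 14 y + 38)^2 - y] (* (-4 + y) (-361 + 176 y - 24 y^2 + y^3) *)</code></pre> <p>Numerically, the real roots of the cubic factor all violate $\sqrt x = 7-y\ge 0$ or $\sqrt y = 11-x\ge 0$, i.e. they are artifacts of the squaring, which is consistent with only $(9,4)$ surviving.</p>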
602,248
<p>This is the system of equations: $$\sqrt { x } +y=7$$ $$\sqrt { y } +x=11$$</p> <p>It's pretty visible that the solution is $(x,y)=(9,4)$.</p> <p>For this, I put $x={ p }^{ 2 }$ and $y={ q }^{ 2 }$. Then I subtracted one equation from the other such that I got $4$ on the RHS and factorized the LHS to get two factors in terms of $p$ and $q$.</p> <p>Then $4$ can be represented as $2*2$, $4*1$ or $1*4$. Comparing the two factors on both sides, I got the solution.</p> <p>As you can see, the major drawback here is that I assumed this system has only integral solutions and then went further. Is there any way I can prove that this system indeed has only integral solutions, or is there any other elegant way to solve this question?</p>
Max
2,633
<p>Second equation implies $y$ is a perfect square, first one implies it's at most $7$. There are only 3 options - $y=0$, $y=1$, $y=4$. The first two don't work (they lead to $x=49$ and $x=36$ respectively), so $y=4$, $x=9$ is the only integral solution.</p>
1,384,203
<p>The additive character of a finite field $\mathbb F_q$ is obtained by using the trace function to the base field $\mathbb F_p$.</p> <p>Can we write some of them via a middle field, using the trace function from $\mathbb F_{q^n}$ to $\mathbb F_q$? If yes, how?</p>
xxxxxxxxx
252,194
<p>Not for an additive character; since a finite field has characteristic $p$, an additive map from $\mathbb{F}_{q^{n}} \to \mathbb{C}$ must send every element to a $p$th root of unity. If you wrote into a middle field $\mathbb{F}_{q}$, how would you use these elements to obtain an additive map? I may be misunderstanding the question.</p>
210,071
<p>I have some lines of code that generate numerical solutions to equations. Then I want to combine two of these in a piecewise function. The way I did it is the following -lf1[r] and lf4[r] are the aforementioned numerical solutions</p> <pre><code>test[r_] := Piecewise[{{lf1[r], 0.688199 &lt;= r &lt;= 10}, {lf4[r], 0.687159 &lt;= r &lt;= 0.688199}}] Show[Plot[test[r], {r, 0.676319, rmax}, PlotStyle -&gt; {Thick}, BaseStyle -&gt; {18, FontFamily -&gt; "Times New Roman"}, AxesLabel -&gt; {"\[Rho]", "L(\[Rho])"}, PlotRange -&gt; {{0, rmax}, {0, 1.4}}], Plot[x, {x, 0, 1.4}, PlotStyle -&gt; {Thick, Black}]] </code></pre> <p>The plot is the following </p> <p><a href="https://i.stack.imgur.com/hQdkB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hQdkB.png" alt="enter image description here"></a></p> <p>Then I would like to have different colours in the different sectors of the piecewise function. I found some excellent answers here and I tried to adopt them - particularly <a href="https://mathematica.stackexchange.com/questions/1128/plotting-piecewise-function-with-distinct-colors-in-each-section?noredirect=1&amp;lq=1">in this link</a>. However, I am facing some difficulties that I do not understand. </p> <p>Example 1: Different colours, wrong plot.</p> <pre><code>pwSplit[_[pairs : {{_, _} ..}]] := Piecewise[{#}, Indeterminate] &amp; /@ pairs pwSplit[_[pairs : {{_, _} ..}, expr_]] := Append[pwSplit@{pairs}, pwSplit@{{{expr, Nor @@ pairs[[All, 2]]}}}] pw = Piecewise[{{lf4[r], 0.687159 &lt;= r &lt;= 0.688199}, {lf1[r], 0.688199 &lt;= r &lt;= 10}}]; Plot[Evaluate[pwSplit@pw], {r, 0, 1}, PlotStyle -&gt; Thick, Axes -&gt; True] </code></pre> <p>The plot is <a href="https://i.stack.imgur.com/KH8L5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KH8L5.png" alt="enter image description here"></a></p> <p>Example 2: This time I don't get many colours and I also get a wrong plot -if you see there is a black flat line in the bottom that should not be there</p> <pre><code>f = Piecewise[{{lf1[#], 0.688199 &lt;= # &lt;= 10}, {lf4[#], 0.687159 &lt;= # &lt;= 0.688199}}] &amp;; colorFunction = f; piecewiseParts = Length@colorFunction[[1, 1]]; colors = ColorData[1][#] &amp; /@ Range@piecewiseParts; colorFunction[[1, 1, All, 1]] = colors; Show[Plot[f[x], {x, 0, 10}, ColorFunction -&gt; colorFunction, ColorFunctionScaling -&gt; False, PlotRange -&gt; {{0, rmax}, {0, 1.4}}, PlotStyle -&gt; {Thick}], Plot[x, {x, 0, 1.4}, PlotStyle -&gt; {Thick, Black}]] </code></pre> <p><a href="https://i.stack.imgur.com/1XkYG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1XkYG.png" alt="enter image description here"></a></p> <p>I don't understand what I am doing wrong in either case and it is not clear if I should modify something due to the fact that I have numerical functions and not analytic. </p>
Carl Woll
45,431
<p>You can use a single plot if you separate the <a href="http://reference.wolfram.com/language/ref/Piecewise" rel="noreferrer"><code>Piecewise</code></a> conditions into separate <a href="http://reference.wolfram.com/language/ref/ConditionalExpression" rel="noreferrer"><code>ConditionalExpression</code></a> objects.</p> <p>To do this, you can use an internal function to determine the intervals of validity for each piecewise condition, and then create a list of <a href="http://reference.wolfram.com/language/ref/ConditionalExpression" rel="noreferrer"><code>ConditionalExpression</code></a> objects. The internal, undocumented (and hence subject to change) function to use is <code>PiecewiseDump`PWIntCases</code>:</p> <pre><code>test[r_] := Piecewise[ { {Exp[r], r&lt;-1}, {1-r^2, -2&lt;r&lt;1}, (* notice change in left endpoint! *) {Sin[Pi r], r&gt;1} } ]; PiecewiseDump`PWIntCases[test[r],{{r,-3,3}},{}] </code></pre> <blockquote> <p>{{-3 &lt; r &lt; -1, -1 &lt;= r &lt; 1, 1 &lt; r &lt; 3, r >= 3 || r &lt;= -3}, {E^r, 1 - r^2, Sin[π r], 0}}</p> </blockquote> <p>Even though I changed the conditions so that they overlap, <code>PiecewiseDump`PWIntCases</code> is smart enough to accommodate this change (the first condition took precedence over the second). We can use the above to create a list of <a href="http://reference.wolfram.com/language/ref/ConditionalExpression" rel="noreferrer"><code>ConditionalExpression</code></a> objects:</p> <pre><code>conditionals = MapThread[ ConditionalExpression[#2, #1]&amp;, PiecewiseDump`PWIntCases[test[r], {{r,-3,3}}, {}] ] </code></pre> <blockquote> <p>{ConditionalExpression[E^r, -3 &lt; r &lt; -1], ConditionalExpression[1 - r^2, -1 &lt;= r &lt; 1], ConditionalExpression[Sin[π r], 1 &lt; r &lt; 3], ConditionalExpression[0, r >= 3 || r &lt;= -3]}</p> </blockquote> <p>Each <a href="http://reference.wolfram.com/language/ref/ConditionalExpression" rel="noreferrer"><code>ConditionalExpression</code></a> object is only valid over the given condition. </p> <p>Visualization:</p> <pre><code>Plot[conditionals, {r, -3, 3}] </code></pre> <p><a href="https://i.stack.imgur.com/BxU8h.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BxU8h.png" alt="enter image description here"></a></p>
1,804,360
<p>The following is a classically valid deduction for any propositions <span class="math-container">$A,B,C$</span>. <span class="math-container">$\def\imp{\rightarrow}$</span></p> <blockquote> <p><span class="math-container">$A \imp B \lor C \vdash ( A \imp B ) \lor ( A \imp C )$</span>.</p> </blockquote> <p>But I'm quite sure it isn't intuitionistically valid, although I don't know how to prove it, which is my first question.</p> <p>If my conjecture is true, my next question is what happens if we add this rule to intuitionistic logic. I do not think we will get classical logic. Is my guess right?</p> <p>[Edit: The user who came to close and downvote this 4 years after I asked this presumably did not see my comment: "The BHK interpretation suggests to me that it isn't intuitionistically valid, but that's just my intuition...". If you are not familiar with Kripke frames (and I was not at that time), good luck trying to figure out what to try!]</p>
Paul
939,509
<p>That is <strong>G</strong>(Gödel–Dummett logic: <a href="https://en.wikipedia.org/wiki/Intermediate_logic#Properties_and_examples" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Intermediate_logic#Properties_and_examples</a>). <span class="math-container">$$ \forall A\forall B\forall C((A\to B\lor C)\to (A\to B)\lor (A\to C))\leftrightarrow \forall A\forall B((A\to B)\lor (B\to A)) $$</span> Let me do a tedious speech.<br /> <span class="math-container">$\forall A\forall B\forall C((A\to B\lor C)\to (A\to B)\lor(A\to C))\vdash \forall A\forall B((A\to B)\lor(B\to A))$</span></p> <blockquote> <p><span class="math-container">$ \forall A\forall B,\\ \ \ \ \ A\lor B,\\ \ \ \ \ \ \ \ \ (i)A,\\ \ \ \ \ \ \ \ \ \ \ \ \ B\to A\\ \ \ \ \ \ \ \ \ \ \ \ \ \therefore(A\to B)\lor(B\to A)\\ \ \ \ \ \ \ \ \ (ii)B,\\ \ \ \ \ \ \ \ \ \ \ \ \ A\to B\\ \ \ \ \ \ \ \ \ \ \ \ \ \therefore(A\to B)\lor(B\to A)\\ \ \ \ \ \ \ \ \ \therefore (A\to B)\lor(B\to A)\\ \ \ \ \ A\lor B\to (A\to B)\lor(B\to A)\\ \ \ \ \ by\ Hypothesis\\ \ \ \ \ (A\lor B\to A\to B)\lor(A\lor B\to B\to A)\\ \ \ \ \ (i)A\lor B\to A\to B,\\ \ \ \ \ \ \ \ \ A,\\ \ \ \ \ \ \ \ \ \ \ \ \ A\lor B\\ \ \ \ \ \ \ \ \ \ \ \ \ A\to B\\ \ \ \ \ \ \ \ \ \ \ \ \ \therefore B\\ \ \ \ \ \ \ \ \ A\to B\\ \ \ \ \ \ \ \ \ \therefore (A\to B)\lor(B\to A)\\ \ \ \ \ (ii)A\lor B\to B\to A,\\ \ \ \ \ \ \ \ \ B,\\ \ \ \ \ \ \ \ \ \ \ \ \ A\lor B\\ \ \ \ \ \ \ \ \ \ \ \ \ B\to A\\ \ \ \ \ \ \ \ \ \ \ \ \ \therefore A\\ \ \ \ \ \ \ \ \ B\to A\\ \ \ \ \ \ \ \ \ \therefore (A\to B)\lor(B\to A)\\ \ \ \ \ \therefore (A\to B)\lor(B\to A)\\ \therefore \forall A\forall B((A\to B)\lor(B\to A)) $</span></p> </blockquote> <p>Conversely,<br /> <span class="math-container">$\forall A\forall B((A\to B)\lor(B\to A))\vdash \forall A\forall B\forall C((A \to B \lor C)\to (A\to B)\lor(A\to C))$</span></p> <blockquote> <p><span class="math-container">$ \forall A\forall B\forall C,\\ \ \ \ \ A\to B\lor C,\\ \ \ \ \ \ \ \ \ by\ Hypothersis\\ \ \ \ \ \ \ \ \ ((A\to B)\to A\to C)\lor((A\to C)\to A\to B)\\ \ \ \ \ \ \ \ \ (i)(A\to B)\to A\to C,\\ \ \ \ \ \ \ \ \ \ \ \ \ A,\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ B\lor C\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (i)B,\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A\to B\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A\to C\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \therefore C\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (ii)C,\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \therefore C\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \therefore C\\ \ \ \ \ \ \ \ \ \ \ \ \ A\to C\\ \ \ \ \ \ \ \ \ \ \ \ \ \therefore (A\to B)\lor (A\to C)\\ \ \ \ \ \ \ \ \ (ii)(A\to C)\to A\to B,\\ \ \ \ \ \ \ \ \ \ \ \ \ A,\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ B\lor C\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (i)B,\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \therefore B\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (ii)C,\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A\to C\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ A\to B\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \therefore B\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \therefore B\\ \ \ \ \ \ \ \ \ \ \ \ \ A\to B\\ \ \ \ \ \ \ \ \ \ \ \ \ \therefore (A\to B)\lor (A\to C)\\ \ \ \ \ \ \ \ \ \therefore (A\to B)\lor (A\to C)\\ \ \ \ \ \therefore (A\to B\lor C)\to (A\to B)\lor (A\to C)\\ \therefore \forall A\forall B\forall C((A \to B \lor C)\to (A\to B)\lor (A\to C)) $</span></p> </blockquote>
4,590,713
<p>If I know the Maclaurin series of <span class="math-container">$$f(x)=\sum_{n=0}^\infty a_n x^n,$$</span> can I say something about the Maclaurin series of <span class="math-container">$$e^{f(x)}=\sum_{n=0}^\infty b_n x^n?$$</span> In other words, what is the relation between <span class="math-container">$a_n$</span> and <span class="math-container">$b_n$</span>?</p>
Gary
83,800
<p>A complete asymptotic expansion may be obtained as follows. First <span class="math-container">$$ \sum\limits_{k = 1}^n {\sqrt {k\log k} } \le \sqrt {n\log n} + \sum\limits_{k = 1}^{n - 1} {\int_k^{k + 1} {\sqrt {t\log t} \,{\rm d}t} } = \sqrt {n\log n} + \int_1^n {\sqrt {t\log t} \,{\rm d}t} $$</span> and <span class="math-container">$$ \sum\limits_{k = 1}^n {\sqrt {k\log k} } = \sum\limits_{k = 2}^n {\sqrt {k\log k} } \ge \sum\limits_{k = 2}^n {\int_{k - 1}^k {\sqrt {t\log t} \,{\rm d}t} } = \int_1^n {\sqrt {t\log t} \,{\rm d}t} $$</span> Therefore, <span class="math-container">$$\tag{$\star$} \sum\limits_{k = 1}^n {\sqrt {k\log k} } = \int_1^n {\sqrt {t\log t}\, {\rm d}t} + \mathcal{O}(\sqrt {n\log n} ) $$</span> as <span class="math-container">$n\to +\infty$</span>. With the substitution <span class="math-container">$t = {\rm e}^{2x/3}$</span>, <span class="math-container">$$ \int_1^n {\sqrt {t\log t}\, {\rm d}t} = \left( {\frac{2}{3}} \right)^{3/2} \int_0^{(3\log n)/2} {{\rm e}^x x^{1/2} {\rm d}x} $$</span> for any <span class="math-container">$n\geq 1$</span>. Taking <span class="math-container">$x = (1 - s)z$</span> and using <a href="https://en.wikipedia.org/wiki/Watson%27s_lemma" rel="nofollow noreferrer">Watson's lemma</a> yields <span class="math-container">$$ \int_0^z {{\rm e}^x x^{1/2} \,{\rm d}x} = - z^{3/2} {\rm e}^z \int_0^1 {{\rm e}^{ - zs} \sqrt {1 - s} \,{\rm d}s} \sim \sqrt z {\rm e}^z \sum\limits_{m = 0}^\infty {\frac{{(2m)!}}{{4^m (1 - 2m)m!}}\frac{1}{{z^m }}} $$</span> as <span class="math-container">$z\to +\infty$</span>. Accordingly, <span class="math-container">$$ \int_1^n {\sqrt {t\log t} \,{\rm d}t} \sim \frac{2}{3}n^{3/2} \sqrt {\log n} \sum\limits_{m = 0}^\infty {\frac{{(2m)!}}{{6^m (1 - 2m)m!}}\frac{1}{{\log ^m\! n}}} $$</span> as <span class="math-container">$n\to +\infty$</span>. In view of <span class="math-container">$(\star)$</span>, we finally have <span class="math-container">$$ \sum\limits_{k = 1}^n {\sqrt {k\log k} } \sim \frac{2}{3}n^{3/2} \sqrt {\log n} \sum\limits_{m = 0}^\infty {\frac{{(2m)!}}{{6^m (1 - 2m)m!}}\frac{1}{{\log ^m\! n}}} $$</span> as <span class="math-container">$n\to +\infty$</span>.</p> <p><strong>Addendum.</strong> With the aid of exponential asymptotics, one can show that <span class="math-container">$$ \sum\limits_{k = 1}^n {\sqrt {k\log k} } = \frac{2}{3}n^{3/2} \sqrt {\log n} \left( {\frac{3}{{4n}} + \sum\limits_{m = 0}^{\left\lfloor {\log (n^{3/2} )} \right\rfloor } {\frac{{(2m)!}}{{6^m (1 - 2m)m!}}\frac{1}{{\log ^m\! n}}} } \right) + \mathcal{O}(1). $$</span> Note that the number of terms on the right-hand side is exponentially small compared with that on the left-hand side. I omit the proof.</p>
3,817,760
<p>In geometry class, it is usually first shown that the medians of a triangle intersect at a single point. Then it is explained that this point is called the centroid and that it is the balance point and center of mass of the triangle. Why is that the case?</p> <p>This is the best explanation I could think of. I hope someone can come up with something better.</p> <p>Choose one of the sides of the triangle. Construct a thin rectangle with one side coinciding with the side of the triangle and extending into it. The center of mass of this rectangle is near the midpoint of the side of the triangle. Continue constructing thin rectangles, with each one on top of the previous one and having the lower side meet the two other sides of the triangle. In each case the centroid of the rectangle is near a point on the median. Making the rectangles thinner, in the limit all the centroids are on the median, and therefore the center of mass of the triangle must lie on the median. This follows because the center of mass of the combination of two regions lies on the segment joining the centroids of the two regions.</p>
Intelligenti pauca
255,730
<p>I think your approach is very simple and effective: you can substitute every chord of a triangle parallel to a fixed side with a mass at its midpoint, proportional to the length of the chord: the centre of mass of the triangle is the same as the centre of mass of those masses. But those masses are aligned along a median of the triangle, hence their centre of mass lies on that median.</p> <p>For a different proof, one may divide every side of a triangle <span class="math-container">$ABC$</span> into <span class="math-container">$n$</span> equal parts and connect the dividing points to form <span class="math-container">$n^2$</span> equal triangles (see figure below for case <span class="math-container">$n=4$</span>). Leaving aside <span class="math-container">$n$</span> triangles which have a side on <span class="math-container">$BC$</span> (yellow in the figure), the other <span class="math-container">$n^2-n$</span> triangles can be joined in pairs to form <span class="math-container">$(n^2-n)/2$</span> parallelograms (blue in the figure).</p> <p>The centre of mass of every parallelogram, by symmetry, lies at its geometric centre. Hence the centre of mass of the blue region is the centre of mass of the <span class="math-container">$(n^2-n)/2$</span> centres (points <span class="math-container">$PQ\ldots U$</span> in the figure), which is the intersection point <span class="math-container">$V$</span> of the medians of triangle <span class="math-container">$UPR$</span> embedding them (see the Appendix for a proof). Notice that median <span class="math-container">$UQ$</span> of that triangle lies on median <span class="math-container">$AM$</span> of triangle <span class="math-container">$ABC$</span>, and from <span class="math-container">$UV/QV=2$</span> one gets <span class="math-container">$$ {AV\over MV}={2n-1\over n+1}. $$</span> It follows that for <span class="math-container">$n\to\infty$</span> point <span class="math-container">$V$</span> tends to the centroid of <span class="math-container">$ABC$</span>.</p> <p>The centre of mass <span class="math-container">$G$</span> of <span class="math-container">$ABC$</span> lies then on the line joining <span class="math-container">$V$</span> with the centre of mass <span class="math-container">$V'$</span> of the yellow region: as the mass of the blue region is <span class="math-container">$n-1$</span> times that of the yellow region we also have <span class="math-container">$$ {GV\over GV'}={1\over n-1}. $$</span> When <span class="math-container">$n\to\infty$</span> we then have then <span class="math-container">$G\to V$</span> and that concludes the proof.</p> <p><a href="https://i.stack.imgur.com/XD0Nj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XD0Nj.png" alt="enter image description here" /></a></p> <p>Appendix.</p> <p>To prove that the centre of mass of equal mass points <span class="math-container">$PQ\ldots U$</span> lies at the intersection of the medians of triangle <span class="math-container">$UPR$</span>, consider the points lying on rows <span class="math-container">$PR$</span>, <span class="math-container">$ST$</span>, and so on, parallel to <span class="math-container">$PR$</span>. We can replace all the <span class="math-container">$k$</span> masses on a line with a single heavier point mass (<span class="math-container">$k$</span> times the small masses) lying at the midpoint of every line. 
The centre of mass of those large masses is the same as the centre of mass of all points <span class="math-container">$PQ\ldots U$</span>.</p> <p>But the midpoints of parallel lines <span class="math-container">$PR$</span>, <span class="math-container">$ST$</span>, and so on, lie on the median <span class="math-container">$UQ$</span> of triangle <span class="math-container">$UPR$</span>: it follows that the centre of mass also lies on that median. And the reasoning can be repeated by considering rows <span class="math-container">$PU$</span>, <span class="math-container">$TQ$</span>, ..., parallel to <span class="math-container">$UP$</span>, hence the centre of mass also lies on median <span class="math-container">$RS$</span>. This completes the proof.</p>
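<p>The strip argument also lends itself to a quick numerical check. Below is a minimal sketch in Python (the triangle vertices are an arbitrary choice, used only for illustration): it slices the triangle into thin strips parallel to one side, weights each strip's midpoint by the strip's length, and compares the resulting centre of mass with the average of the vertices.</p>

<pre><code>import numpy as np

# Arbitrary triangle with vertices A, B, C (illustration only)
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

def strip_centroid(A, B, C, strips=50000):
    # Slice the triangle into thin strips parallel to side AB and
    # accumulate (length of strip) * (midpoint of strip), mimicking
    # the limiting argument with thin rectangles.
    total_mass = 0.0
    moment = np.zeros(2)
    for k in range(strips):
        t = (k + 0.5) / strips          # relative distance from AB towards C
        P = A + t * (C - A)             # point on side AC
        Q = B + t * (C - B)             # point on side BC
        length = np.linalg.norm(Q - P)  # mass of a strip is proportional to its length
        total_mass += length
        moment += length * (P + Q) / 2
    return moment / total_mass

print(strip_centroid(A, B, C))  # approximately [1.6667, 1.0]
print((A + B + C) / 3)          # centroid (A+B+C)/3 = [1.6667, 1.0]
</code></pre>

<p>The two printed points agree to many decimal places, which is exactly what the limiting argument predicts.</p>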
1,190,798
<p>I'm having difficulty understanding when to use $\cos$ and $\sin$ to find $x$ and $y$ components of a vector. Do we always use $\cos$ for $x$-component or what?</p>
dk2ax
223,875
<p>One thing I've found useful is having a mental picture of the <span class="math-container">$\sin$</span> and <span class="math-container">$\cos$</span> functions. If you draw your vector and split it up into components like so:</p> <p><img src="https://i.stack.imgur.com/SJOhg.gif" alt="enter image description here" /></p> <p>and pick your angle <span class="math-container">$\theta$</span>, then you can vary that angle in your mind and see what happens to the components.</p> <p>If you let <span class="math-container">$\theta$</span> go to zero, all of the vector will be in the x-direction, so taking <span class="math-container">$F_x = |F| \alpha \hat e_x$</span> you'll see that we need to set <span class="math-container">$\alpha = 1$</span> (<span class="math-container">$\alpha$</span> is just a factor that varies between -1 and 1).</p> <p>So at <span class="math-container">$\theta = 0$</span> we have <span class="math-container">$\alpha = 1$</span>. Now redo this with <span class="math-container">$\theta = \pi/2$</span> and you'll see that there is no component in the x-direction, our <span class="math-container">$\alpha$</span> will be 0.</p> <p>We now need to find a function of <span class="math-container">$\theta$</span> that returns 1 for <span class="math-container">$\theta = 0$</span> and 0 for <span class="math-container">$\theta = \pi/2$</span>. Looking at this graph</p> <p><img src="https://i.stack.imgur.com/qf4UB.png" alt="enter image description here" /></p> <p>we see that the <span class="math-container">$\cos(\theta)$</span> does just that. The x-component of <span class="math-container">$\vec F$</span> is therefore given by <span class="math-container">$F_x = |F| \cos(\theta) \hat e_x$</span>.</p> <p>By doing the same again but this time watching out what happens to the y-component, we will see that the <span class="math-container">$\sin$</span> gives you the right relation between angle and scaling factor.</p> <p>So, without memorizing SOH CAH TOA or something like that, you can figure out how the components are given.</p>
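<p>For readers who like to experiment, here is a tiny numerical sketch of the same idea (the magnitude is an arbitrary choice): as the angle sweeps from $0$ to $\pi/2$, the $x$-component shrinks like $\cos\theta$ and the $y$-component grows like $\sin\theta$.</p>

<pre><code>import numpy as np

F = 2.5  # magnitude of the vector (arbitrary choice for illustration)

for theta in np.linspace(0, np.pi / 2, 5):
    Fx = F * np.cos(theta)  # all of F at theta = 0, none at theta = pi/2
    Fy = F * np.sin(theta)  # none at theta = 0, all of F at theta = pi/2
    print(f"theta = {theta:4.2f} rad   Fx = {Fx:5.3f}   Fy = {Fy:5.3f}")
</code></pre>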
38,594
<p>One of the definitions of a Galois extension is that $E/K$ is Galois iff $E$ is the splitting field of some <strong>separable</strong> polynomial $f(x) \in K[x]$, yes?</p> <p>I want to understand why the following is true:</p> <blockquote> <p>Let $f(x) \in \mathbb{Q}[x]$ be a polynomial and let $F$ be a splitting field of $f(x)$. Then $F/\mathbb{Q}$ is Galois. </p> </blockquote> <p>My question is: How do we know that $f(x)$ is a separable polynomial? Can you please explain this part?</p>
Qiaochu Yuan
232
<p>We don't. What we do know is that every irreducible polynomial over a field of characteristic zero is separable, and $f$ is the product of its irreducible factors, so its splitting field is a compositum of separable extensions, hence separable. (In other words, the splitting field of $f$ is the splitting field of a separable polynomial, but that polynomial is <em>not</em> $f$; it's the product of the distinct irreducible factors of $f$.)</p> <p>This isn't a particularly transparent definition of Galois extension, though. The definition I prefer is that $F/K$ is Galois if $F^{\text{Aut}(F/K)} = K$. Among other things, it has the benefit of working for infinite extensions. </p>
3,963,517
<blockquote> <p>Write the power series of <span class="math-container">$$f(x)=\frac{\sin(3x)}{2x}, \quad x\neq 0.$$</span></p> </blockquote> <p>I tried to get a series for <span class="math-container">$\sin(3x)$</span> about <span class="math-container">$x=0$</span> and then to divide the series by <span class="math-container">$2x$</span>.</p> <p>Is that right?</p>
Clerni
866,477
<p>Using the Taylor series for the sine: <span class="math-container">$$\sin{3x}=\sum_{n=0}^{\infty} (-1)^{n}\frac{(3x)^{2n+1}}{(2n+1)!}$$</span></p> <p>We can write the function <span class="math-container">$f(x)$</span> as:</p> <p><span class="math-container">$$f(x)=\frac{\sin{3x}}{2x}=\frac{1}{2x}\sum_{n=0}^{\infty} (-1)^{n}\frac{(3x)^{2n+1}}{(2n+1)!} =\frac{1}{2}\sum_{n=0}^{\infty} (-1)^{n}\frac{3^{2n+1}x^{2n}}{(2n+1)!}$$</span></p>
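<p>As a quick sanity check of the expansion, the following sketch compares a truncated sum with the closed form at a sample point (the value $x=0.3$ and the number of terms are arbitrary choices):</p>

<pre><code>import math

def f(x):
    return math.sin(3 * x) / (2 * x)

def series(x, terms=10):
    # f(x) = (1/2) * sum_{n&gt;=0} (-1)^n 3^(2n+1) x^(2n) / (2n+1)!
    return 0.5 * sum((-1) ** n * 3 ** (2 * n + 1) * x ** (2 * n) / math.factorial(2 * n + 1)
                     for n in range(terms))

x = 0.3
print(f(x), series(x))  # the two values agree to about 15 decimal places
</code></pre>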
1,563,004
<p>Assume that we calculate the expected value of some measurements $x=\dfrac {x_1 + x_2 + x_3 + x_4} 4$. What if we don't include $x_3$ and $x_4$, but instead we use $x_2$ as $x_3$ and $x_4$? Then we get the following expression $v=\dfrac {x_1 + x_2 + x_2 + x_2} 4$.</p> <p>How do I know if $v$ is an unbiased estimator of $x$?</p> <p>I am not sure how to approach this problem, any ideas are appreciated!</p>
Prabhdeep Singh
294,228
<p>$$\dfrac {\Bbb d} {\Bbb dx} \left[10^x \dfrac {\ln x} {\ln 10}\right] =\dfrac {\frac {\Bbb d} {\Bbb dx} (10^x \ln x)} {\ln 10} = \dfrac {\frac {\Bbb d} {\Bbb dx} [10^x] \ln x + 10^x \frac {\Bbb d} {\Bbb dx} \ln x} {\ln 10} = \dfrac {10^x \ln 10 \, \ln x + \frac 1 x 10^x} {\ln 10} = \dfrac {10^x \ln 10 \, \ln x + \frac {10^x} x} {\ln 10}$$</p>
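<p>The computation can be double-checked symbolically. Here is a minimal SymPy sketch (it assumes the function being differentiated is $10^x\log_{10}x=10^x\ln x/\ln 10$, as in the display above):</p>

<pre><code>import sympy as sp

x = sp.symbols('x', positive=True)
f = 10**x * sp.ln(x) / sp.ln(10)

derivative = sp.diff(f, x)
claimed = (10**x * sp.ln(10) * sp.ln(x) + 10**x / x) / sp.ln(10)

print(sp.simplify(derivative - claimed))  # prints 0
</code></pre>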
858,250
<p>I would like to evaluate the following alternating sum of products of binomial coefficients: $$\sum_{k=0}^{m} (-1)^k \binom m k \binom n k .$$ I had the idea to use Pascal recursion to re-express $\binom n k$ so that we always have $m$ as the upper index and I have been able to come up with nice expressions for $$ \sum_{k=0}^{m} (-1)^k \binom m k \binom m {k-j},$$ ($j=0$ gives the central binomial coefficient as is well known and the others turn out to be shifted away from the centre by $j$ steps, up to a sign).</p> <p>However we then end up with another alternating sum of products of these solutions with the binomial coefficients that came out of using the recursion. And this sum seems to be even worse to solve, at least combinatorially. Any help appreciated! Thanks in advance.</p>
Mhenni Benghorbal
35,472
<p>Here is a nice closed form in terms of <a href="http://en.wikipedia.org/wiki/Jacobi_polynomials" rel="nofollow">Jacobi polynomials</a> </p> <blockquote> <p>$$ \sum_{k=0}^{m} (-1)^k \binom m k \binom n k = P_{m}^{(0,-n-m-1)}(3) . $$</p> </blockquote>
440,452
<blockquote> <p>Let $b,c \in \mathbb{Z} $ and let $n \in \mathbb{N} $, $n \ge 2. $ Let $f(x) = x^{n} -bx+c$. Prove that $$\hbox{disc} (f(x)) = n^{n }c^{ n-1}-(n-1)^{n-1 }b^{n }.$$</p> </blockquote> <p>Here $\hbox{disc} (f(x)) = \prod_{i} f'(\alpha_{i} )$ where $\alpha_{1}, \dots, \alpha_{n}$ are the roots of $f(x)$.</p> <p>After some calculations I obtained $\hbox{disc} (f(x)) = \frac{\prod_{i} \alpha_{i}(n-1)b \ - \ nc }{\prod_{i} \alpha_{i}} $, but I'm afraid this is the wrong way.</p>
Jyrki Lahtonen
11,619
<p>Two solutions. The first is based on resultants. As I wasn't 100% confident in my handling of the leading coefficients, I calculated the discriminant also using the definition. I am keeping the first (worse) solution for "educational purposes".</p> <hr> <p>A promising way is to use the description of the discriminant as a <a href="http://en.wikipedia.org/wiki/Resultant" rel="nofollow">resultant</a> of $f(x)$ and $f'(x)$. The resultant is (when non-zero) also a generator of the ideal $I=(f(x),f'(x))\cap\mathbb{Z}$ and can be calculated with steps like the Euclidean algorithm, where we need to take care to keep everything as integers (necessitating appropriate scaling). </p> <p>Here we first calculate $$ r_1(x)=nf(x)-xf'(x)= -b(n-1)x +cn $$ and are "lucky" to get a linear remainder. Proceeding we see that $$ g(x)=(cn)^{n-1}-[b(n-1)x]^{n-1} $$ is a multiple of $r_1(x)$ by a polynomial in $\mathbb{Z}[x]$. We can then eliminate $x$ by calculating the combination $$ r_2(x)=(b(n-1))^{n-1}f'(x)+ng(x)=n^nc^{n-1}-(n-1)^{n-1}b^n. $$</p> <hr> <p>Here's a more convincing calculation. Let us write $$ \begin{aligned} f(x)&amp;=(x-\alpha_1)(x-\alpha_2)\cdots (x-\alpha_n),\\ f'(x)&amp;=n(x-\beta_1)(x-\beta_2)\cdots (x-\beta_{n-1}). \end{aligned} $$ Here the numbers $\beta_j, j=1,2,\ldots,n-1,$ are the zeros of $f'(x)$, i.e. we have $\beta_j=\zeta^j K$, where $K=\root{n-1}\of {b/n}$ and $\zeta=e^{2\pi i/(n-1)}$ is an appropriate root of unity.</p> <p>Now $$ \begin{aligned} \operatorname{disc}(f)&amp;=\prod_i f'(\alpha_i)=n^n\prod_{i,j}(\alpha_i-\beta_j)\\ &amp;=n^n(-1)^{n(n-1)}\prod_{i,j}(\beta_j-\alpha_i)\\ &amp;=n^n\prod_jf(\beta_j). \end{aligned} $$ We can calculate $$ f(\beta_j)=\beta_j(\beta_j^{n-1}-b)+c=c-(b-K^{n-1})\beta_j=c-\frac{b(n-1)K}n\zeta^j. $$ so $$ \operatorname{disc}(f)=n^n\prod_j(c-\frac{b(n-1)K}n\zeta^j). $$ We can use the factorization $$ u^{n-1}-v^{n-1}=\prod_j(u-\zeta^jv) $$ with $u=c$ and $v=b(n-1)K/n$ to evaluate this. The result is $$ \operatorname{disc}(f)=n^n\left(c^{n-1}-\left(\frac{b(n-1)K}n\right)^{n-1}\right) =n^n\left(c^{n-1}-\frac{(n-1)^{n-1}b^n}{n^n}\right), $$ which equals the claimed formula.</p>
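<p>For readers who want an independent check of the formula, here is a short SymPy sketch (an illustration only). It uses the fact that, for monic $f$, the product $\prod_i f'(\alpha_i)$ equals the resultant of $f$ and $f'$:</p>

<pre><code>import sympy as sp

x, b, c = sp.symbols('x b c')

for n in range(2, 7):
    f = x**n - b*x + c
    prod_fprime = sp.resultant(f, sp.diff(f, x), x)   # = prod_i f'(alpha_i), since f is monic
    claimed = n**n * c**(n - 1) - (n - 1)**(n - 1) * b**n
    print(n, sp.expand(prod_fprime - claimed) == 0)   # True for every n tested
</code></pre>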
3,830,971
<p>The function <span class="math-container">$f(x)=\cot^{-1} x$</span> is well known to be neither even nor odd because <span class="math-container">$\cot^{-1}(-x)=\pi-\cot^{-1} x$</span>. Its domain is <span class="math-container">$(-\infty, \infty)$</span> and range is <span class="math-container">$(0, \pi)$</span>. Today, I was surprised to notice that Mathematica treats it as an odd function, and yields its plot as given below:</p> <p><a href="https://i.stack.imgur.com/IcH3w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IcH3w.png" alt="enter image description here" /></a></p> <p>How to reconcile this? I welcome your comments.</p> <p>Edit: I used: Plot[ArcCot[x], {x, -3, 3}] there to plot</p>
Z Ahmed
671,540
<p>I think the convention that takes the range of <span class="math-container">$f(x)=\cot^{-1} x$</span> to be <span class="math-container">$(0,\pi)$</span>, and hence the mixed parity of this function, gets preference, as then <span class="math-container">$f(x)$</span> is continuous on its domain <span class="math-container">$(-\infty, \infty)$</span>, especially at <span class="math-container">$x=0$</span>. Then <span class="math-container">$\cot^{-1}(-x)=\pi-\cot^{-1} (x)$</span>.</p> <p>In problem solving, for students, teachers and examiners this convention is most welcome for the sake of consistency. According to it, the <span class="math-container">$\cot^{-1} x$</span> function should look as below:</p> <p><a href="https://i.stack.imgur.com/lgXrI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lgXrI.png" alt="enter image description here" /></a></p>
1,430,369
<p>I know how to prove this by induction but the text I'm following shows another way to prove it and I guess this way is used again in the future. I'm confused by it.</p> <p>So the expression for first n numbers is: $$\frac{n(n+1)}{2}$$</p> <p>And this second proof starts out like this. It says since:</p> <p>$$(n+1)^2-n^2=2n+1$$</p> <p>Absolutely no idea where this expression came from, doesn't explain where it came from either.</p> <p>Then it proceeds to say: \begin{align} 2^2-1^2&amp;=2*1+1 \\ 3^2-2^2&amp;=2*2+1\\ &amp;\dots\\ n^2-(n-1)^2&amp;=2(n-1)+1\\ (n+1)^2-n^2&amp;=2n+1 \end{align} At this point I'm completely lost. But it continues to say "adding and noting the cancellations on the left, we get" \begin{align} (n+1)^2-1&amp;=2(1+2+...+n)+n \\ n^2+n&amp;=2(1+2+...+n) \\ (n(n+1))/2&amp;=1+2+...+n \end{align}</p> <p>Which proves it but I have no clue what has happened. I am entirely new to these math proofs. Im completely lost. I was great at high school math and calculus but now I haven't got the slightest clue of what's going on. Thanks</p>
Mark Bennet
2,906
<p>The proof is a trick, of course. Working in the opposite direction, the idea is to write $$2\sum_{r=1}^nr=\sum 2r=\sum \left ((2r+1)-1\right)=\sum (2r+1)-\sum 1=\sum (2r+1)-n$$</p> <p>This seems elaborate, but the point is to write as much of the sum as possible in a form which cancels. We use $2r+1$ for this because we know that the difference between successive squares is an odd number, so $$\sum_{r=1}^n (2r+1)=\sum_{r=1}^n ((r+1)^2-r^2)=\sum_{r=2}^{n+1}r^2-\sum_{r=1}^n r^2=(n+1)^2-1$$because all the other terms cancel. Then $$2\sum_{r=1}^nr=(n+1)^2-1-n=n(n+1)$$</p> <hr> <p>Note that $(n+1)^k-n^k=kn^{k-1}+\text{ terms of lower degree in }n$, because the terms in $n^k$ cancel - so if you know the sums of $(k-1)^{th}$ powers and below, the sums of the $k^{th}$ powers can be worked out using the difference trick. The factor $k+1$ in the corresponding identity $(n+1)^{k+1}-n^{k+1}=(k+1)n^{k}+\dots$ makes it easy to see that the leading term in $\sum_{r=1}^nr^k$ is $\cfrac {n^{k+1}}{k+1}$.</p> <p>This strategy - where spotted - can be used to simplify problems, and is well worth knowing. If you are using it you don't need to know where it came from - the justification for using it is that it works. But having an idea where it came from can help you to spot other possible uses.</p> <p>It isn't quite as simple as saying this method avoids induction - rather the induction gets hidden in the fact that terms cancel instead of being made explicit.</p>
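<p>The telescoping step is easy to test numerically; the following small sketch checks it for a few values of $n$ (the chosen values are arbitrary):</p>

<pre><code>def telescoped(n):
    # sum of (r+1)^2 - r^2 for r = 1..n collapses to (n+1)^2 - 1
    return sum((r + 1) ** 2 - r ** 2 for r in range(1, n + 1))

for n in (5, 10, 100):
    lhs = 2 * sum(range(1, n + 1))     # 2(1 + 2 + ... + n)
    rhs = telescoped(n) - n            # (n+1)^2 - 1 - n = n(n+1)
    assert lhs == rhs == n * (n + 1)
    print(n, lhs // 2, n * (n + 1) // 2)   # both columns give 1 + 2 + ... + n
</code></pre>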
3,625,739
<p>My question is from Humphreys <em>Introduction to Lie Algebras and Representation Theory</em>. There is a lemma in section 6.3 which says, if <span class="math-container">$\phi: L \to \mathfrak{gl}(V)$</span> is a representation of a semisimple Lie algebra <span class="math-container">$L$</span> then <span class="math-container">$\phi(L) \subset \mathfrak{sl}(V)$</span> and in particular <span class="math-container">$L$</span> acts trivially on any one-dimensional <span class="math-container">$L$</span>-module.<br> I understand how the first part of the statement follows but not why <span class="math-container">$L$</span> acts trivially on any one-dimensional <span class="math-container">$L$</span>-module. First, "acts trivially" is never defined. Does this mean <span class="math-container">$x \cdot v = \phi(x)v = 0$</span> for all <span class="math-container">$x \in L$</span> and <span class="math-container">$v \in V$</span>? If so how do we get from <span class="math-container">$\phi(x)$</span> is a trace zero matrix to <span class="math-container">$\phi(L)(V) = 0$</span> where <span class="math-container">$V$</span> is a one-dimensional <span class="math-container">$L$</span>-module?</p>
José Carlos Santos
446,262
<p>Because a linear map <span class="math-container">$\psi$</span> from a <span class="math-container">$1$</span>-dimensional vector space <span class="math-container">$V$</span> into itself can only be of the form <span class="math-container">$v\mapsto\lambda v$</span> and then <span class="math-container">$\lambda=\operatorname{tr}\psi$</span>.</p>
50,479
<p>Can the number of solutions $xy(x-y-1)=n$ for $x,y,n \in Z$ be unbounded as n varies?</p> <p>x,y are integral points on an Elliptic Curve and are easy to find using enumeration of divisors of n (assuming n can be factored).</p> <p>If yes, will large number of solutions give moderate rank EC?</p> <p>If one drops $-1$ i.e. $xy(x-y)=n$ the number of solutions can be unbounded via multiples of rational point(s) and then multiplying by a cube. (Explanation): Another unbounded case for varying $a , n$ is $xy(x-y-a)=n$. If $(x,y)$ is on the curve then $(d x,d y)$ is on $xy(x-y-a d)=n d^3$. Find many rational points and multiply by a suitable $d$. Not using the group law seems quite tricky for me. The constant $-1$ was included on purpose in the initial post.</p> <p>I would be interested in this computational experiment: find $n$ that gives a lot of solutions, say $100$ (I can't do it), check which points are linearly independent and this is a lower bound on the rank.</p> <p>What I find intriguing is that <strong>all integral points</strong> in this model come from factorization/divisors only.</p> <p><strike> Current record is n=<strong>179071200</strong> with 22 solutions with positive x,y. Due to Matthew Conroy.</p> <p>Current record is n=<strong>391287046550400</strong> with 26 solutions with positive x,y. Due to Aaron Meyerowitz</p> <p>Current record is n=<strong>8659883232000</strong> with 28 solutions with positive x,y. Found by Tapio Rajala. </strike></p> <p>Current record is n=<strong>2597882099904000</strong> with 36 solutions with positive x,y. Found by Tapio Rajala.</p> <p>EDIT: $ab(a+b+9)=195643523275200$ has 48 positive integer points. – Aaron Meyerowitz (<em>note this is a different curve and 7 &lt;= rank &lt;= 13</em>)</p> <p>A variation: $(x^2-x-17)^2 - y^2 = n$ appears to be eligible for the same question. The quartic model is a difference of two squares and checking if the first square is of the form $x^2-x-17$ is easy.</p> <p>Is it possible some relation in the primes or primes or divisors of certain form to produce records: Someone is trying in $\mathbb{Z}[t]$ <a href="https://mathoverflow.net/questions/51193/can-the-number-of-solutions-xyx-y-1n-for-x-y-n-in-zt-be-unbounded-as-n">Can the number of solutions xy(x−y−1)=n for x,y,n∈Z[t] be unbounded as n varies?</a> ? Read an article I didn't quite understand about maximizing the Selmer rank by chosing the primes carefully.</p> <p>EDIT: The curve was chosen at random just to give a clear computational challenge.</p> <p>EDIT: On second thought, can a symbolic approach work? Set $n=d_1 d_2 ... d_k$ where d_i are variables. Pick, well, ?some 100? ($d_i$, $y_i$) for ($x$,$y$) (or a product of $d_i$ for $x$). The result is a nonlinear system (last time I tried this I failed to make it work in practice).</p> <p>EDIT: Related search seems <strong>"thue mahler" equation'</strong></p> <p>Related: <a href="https://mathoverflow.net/questions/50661/unboundedness-of-number-of-integral-points-on-elliptic-curves">unboundedness of number of integral points on elliptic curves?</a></p> <p>Crossposted on MATH.SE: <a href="https://math.stackexchange.com/questions/14932/can-the-number-of-solutions-xyx-y-1-n-for-x-y-n-in-z-be-unbounded-as-n">https://math.stackexchange.com/questions/14932/can-the-number-of-solutions-xyx-y-1-n-for-x-y-n-in-z-be-unbounded-as-n</a></p>
Chris Wuthrich
5,015
<p>With the transformation <span class="math-container">$X = -n/x$</span> and <span class="math-container">$Y= ny/x$</span>, the curve becomes isomorphic to the Weierstrass model <span class="math-container">$$ E_n\colon \ \ Y^2 - X\ Y - n\ Y = X^3.$$</span> The points in question are exactly the integral points in <span class="math-container">$E_n(\mathbb{Q})$</span> such that <span class="math-container">$X$</span> divides <span class="math-container">$n$</span>. I do not see why the number of these points should be bounded independently of <span class="math-container">$n$</span>; so my guess is that there is no bound and that it is going to be difficult to show this.</p> <p>The curve <span class="math-container">$E_n$</span> has always two rational 3-torsion points <span class="math-container">$(0,0)$</span> and <span class="math-container">$(0,n)$</span>. Unless <span class="math-container">$n$</span> is of the form <span class="math-container">$k\cdot (\tfrac{k-1}{2})^2$</span> for some integer <span class="math-container">$k\not\equiv 2\pmod{4}$</span>, these are all the torsion points in <span class="math-container">$E_n(\mathbb{Q})$</span>, otherwise there are 6 torsion points defined over <span class="math-container">$\mathbb{Q}$</span>. Hence, if <span class="math-container">$n$</span> is not of the above form, then any integral point with <span class="math-container">$X$</span> dividing <span class="math-container">$n$</span> will be of infinite order and hence the rank will be at least <span class="math-container">$1$</span>.</p> <p>(Edit:) Now, I have a reason to believe that the number <strong>is</strong> bounded. As pointed out by Felipe Voloch in <a href="https://mathoverflow.net/questions/50661/unboundedness-of-number-of-integral-points-on-elliptic-curves">this question</a>, the <a href="https://doi.org/10.1007/s002220050121" rel="nofollow noreferrer">paper</a> by Abramovich shows that:</p> <p><em>if the conjecture by Lang and Vojta about rational poitns on varieties of general type holds, then the number of solutions is bounded as <span class="math-container">$n$</span> varies.</em></p> <p>One has just to note that the equation <span class="math-container">$E_n$</span> is in fact minimal and that the curve <span class="math-container">$E_n$</span> is semistable for all <span class="math-container">$n$</span>. For all primes <span class="math-container">$p$</span> dividing <span class="math-container">$n$</span>, the curve has split multiplicative reduction with <span class="math-container">$3\cdot \text{ord}_p(n)$</span> components. For all primes <span class="math-container">$p$</span> dividing <span class="math-container">$27n+1$</span>, the reduction can be shown to be multiplicative, as well.</p> <p>Maybe a descent via three-isogeny could help to give an upper bound on the rank.</p>
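<p>The change of variables can be checked symbolically. The following SymPy sketch (an illustration only) substitutes $x=-n/X$, $y=-Y/X$, which inverts the transformation above, into the original equation and recovers the Weierstrass model up to the factor $n/X^3$:</p>

<pre><code>import sympy as sp

x, y, X, Y, n = sp.symbols('x y X Y n')

original = x*y*(x - y - 1) - n          # the curve xy(x - y - 1) = n
weierstrass = Y**2 - X*Y - n*Y - X**3   # the claimed Weierstrass model

# X = -n/x and Y = n*y/x invert to x = -n/X and y = -Y/X
transformed = original.subs({x: -n/X, y: -Y/X})

# Clearing the denominator X^3 shows the two equations agree up to the factor n
print(sp.simplify(transformed * X**3 - n * weierstrass))  # prints 0
</code></pre>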
59,949
<p>How to track <code>Initialization</code> step by step? Any tricky solution is ok. One condition, <code>x</code> should be <code>DynamicModule</code> variable.</p> <pre><code>DynamicModule[{x}, Dynamic[x], Initialization :&gt; ( x = 1; Pause@1; x = 2; Pause@1; x = 3; )] </code></pre> <p>I want the progress to be reflected in <code>Dynamic[x]</code>. </p>
WReach
142
<p>By default, <code>DynamicModule</code> uses <code>SynchronousInitialization -&gt; True</code>. This causes the initialization to be performed on the preemptive link, disabling any updates to the front-end. In particular, print statements, cell creation and dynamic box updates will all be deferred until the initialization completes.</p> <p>If we wish to monitor that initialization within the notebook itself, we must turn off synchronous initialization. If we do so, then any of a number of strategies are possible, such as...</p> <p>... using <code>Monitor</code> to watch <code>x</code> change within a temporary cell:</p> <pre><code>DynamicModule[{x} , Dynamic[x] , SynchronousInitialization -&gt; False , Initialization :&gt; Monitor[x = 1; Pause@1; x = 2; Pause@1; x = 3, x] ] </code></pre> <p>... or <code>CellPrint</code> to watch <code>x</code> change within a permanent cell:</p> <pre><code>DynamicModule[{x} , Dynamic[x] , SynchronousInitialization -&gt; False , Initialization :&gt; (CellPrint@Dynamic@x; x = 1; Pause@2; x = 2; Pause@2; x = 3) ] </code></pre> <p>... or even simply using <code>Print</code> to print out some intermediate information:</p> <pre><code>DynamicModule[{x} , Dynamic[x] , SynchronousInitialization -&gt; False , Initialization :&gt; (x = 1; Print@"1st"; Pause@2; x = 2; Print@"2nd"; Pause@2; x = 3) ] </code></pre> <p>We can even monitor the initialization of <code>x</code> from within the <code>DynamicModule</code> by overriding the default <code>UpdateInterval -&gt; Infinity</code> on <code>Dynamic</code>:</p> <pre><code>DynamicModule[{x} , Dynamic[x, UpdateInterval -&gt; 0] , SynchronousInitialization -&gt; False , Initialization :&gt; (x = 1; Pause@1; x = 2; Pause@1; x = 3) , UnsavedVariables :&gt; {x} ] </code></pre> <p>It is necessary to add <code>x</code> to the list of <code>UnsavedVariables</code> because without it the value of <code>x</code> will not be seen to change dynamically when the output cell is re-initialized in a new front-end session.</p> <p><strong>For Debugging Purposes</strong></p> <p>If debugging messages are our goal, and if it is not acceptable to perform asynchronous initialization for some reason, then a simple solution would be to write logging messages to an external file. That file could then be monitored in real-time using an external utility like <em>tail</em>.</p> <p>It is possible, if ambitious, to use <code>Mathlink</code> to send debugging messages to a second front-end. See, for example, the chat application in this <a href="https://mathematica.stackexchange.com/q/44027/142">this question</a>. Personally, I think this is overkill. I would probably spend the time getting the <code>DynamicModule</code> to work with asynchronous initialization instead.</p>
851,647
<p>I have $$ X_i, i=1,...,n_1, $$ $$ Y_j, j=1,...,n_2, $$ $$ Z_k, k=1,...,n_3, $$ where $X_i, Y_j$ and $Z_k$ are all independent. Does it then follow that $$ \bar{X}=\frac{1}{n_1}\sum_{i=1}^{n_1}X_i, \bar{Y}=\frac{1}{n_2}\sum_{j=1}^{n_2}Y_j, \bar{Z}=\frac{1}{n_3}\sum_{k=1}^{n_3}Z_k $$ are independent, too?</p>
Vishwa Iyer
71,281
<p>$$V = ((F + \frac{PG}{B}) * A^M) + \frac{HP}{B}$$ Solving it for $P$ means isolating $P$. So we want $P$ by itself. First we multiply both sides by $B$. $$BV = A^MBF+A^MPG+HP$$ $$BV = A^MBF+P(A^MG+H)$$ $$BV - A^MBF = P(A^MG+H) $$ $$P = \frac{BV-A^MBF}{A^MG+H}$$</p>
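<p>A quick symbolic check of the rearrangement (a sketch only; the symbols are declared positive merely to keep the algebra clean):</p>

<pre><code>import sympy as sp

V, F, P, G, B, A, M, H = sp.symbols('V F P G B A M H', positive=True)

equation = sp.Eq(V, (F + P*G/B) * A**M + H*P/B)
solution = sp.solve(equation, P)[0]
claimed = (B*V - A**M*B*F) / (A**M*G + H)

print(sp.simplify(solution - claimed))  # prints 0
</code></pre>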
160,392
<p>Did I solve the following quadratic equation correctly?</p> <p>$$W(W+2)-7=2W(W-3)$$</p> <p>I got:</p> <p>$$W^2-8W+7$$ Then for my solution I got:</p> <p>$$(W-1)(W-7)$$</p>
Gigili
181,853
<p>$$W(W+2)-7=2W(W-3)$$</p> <p>$$\Downarrow$$</p> <p>$$W^2+2W-7=2W^2-6W$$</p> <p>$$\Downarrow$$</p> <p>$$W^2-8W+7=0$$</p> <p>$$\Downarrow$$</p> <p>$$(W-1)(W-7)=0$$</p> <p>You're right, well done. The solutions are $W=7$ and $W=1$.</p> <p><strong>EDIT</strong>: It should be written as I showed above, you've omitted "$=0$" part of the equation, intentionally or unintentionally. </p>
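<p>As a quick check, a computer algebra system returns the same roots (a one-line sketch):</p>

<pre><code>import sympy as sp

W = sp.symbols('W')
print(sp.solve(sp.Eq(W*(W + 2) - 7, 2*W*(W - 3)), W))  # [1, 7]
</code></pre>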
4,023,366
<p>So the question is pretty much described in the title already. I have to show the following result. I have tried it but am failing to do so. Anyone who can please help me in understanding its proof. I have attached the picture of formulas that may prove to be helpful.</p> <p><span class="math-container">$\exists x.P(x) \lor \exists x.Q(x)$</span> <span class="math-container">$\vdash$</span> <span class="math-container">$\exists x.(P(x) \lor Q(x))$</span></p> <p><a href="https://i.stack.imgur.com/O5Hod.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O5Hod.jpg" alt="enter image description here" /></a></p>
Simone
586,627
<p>Hint: the proof has 3 instances of an elimination rule.</p> <p>Consider that the premise has as the principal operator a Vel, whereas on the right we have an existential quantifier.</p> <p>Try to work backward to see what you need: clearly the last rule will be an application of <span class="math-container">$\exists$</span>-introduction after a derivation of the formula <span class="math-container">$Pa \lor Qa$</span>.</p> <p>If you're still stuck let me know and I'll give you another hint.</p>
1,660,361
<blockquote> <p>$$x^4+x^2-2$$</p> </blockquote> <p>How do I write it in a form that can be used for partial fractions? Taking out $x$ does not help; what is the process? </p> <p>Using Wolfram I got to $(x^2+2)(x-1)(x+1)$</p>
John
314,739
<p>Find the greatest common factor $k$ amongst the exponents of the nonzero powers of $x$ and perform a substitution of $y$ for $x^k$. Often, this substitution is the first step in seeing directly how to quickly factor a relatively simple polynomial into its irreducible parts, for, the polynomial in $y$, from the substitution, is of a lower or equal degree and as such may be easier to factor or to realize it has no factors.</p>
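<p>Here is a small SymPy sketch of the substitution idea (an illustration only): for $x^4+x^2-2$ the greatest common factor of the exponents is $2$, so one substitutes $y=x^2$, factors the easier quadratic, and then compares with the factorization of the original quartic.</p>

<pre><code>import sympy as sp

x, y = sp.symbols('x y')

quartic = x**4 + x**2 - 2
quadratic = quartic.subs(x**2, y)   # y**2 + y - 2, lower degree and easier to factor

print(sp.factor(quadratic))         # (y - 1)*(y + 2)
print(sp.factor(quartic))           # (x - 1)*(x + 1)*(x**2 + 2)
</code></pre>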
9,880
<p>The ban on edits that change only a few characters can be quite annoying in some cases. I don't have a problem with trying to keep trivial spelling corrections and such out of the peer review queue, but sometimes a small edit is semantically significant. For example, it makes perfect sense to edit an otherwise good question/answer to change a $0$ to a $1$ or to change $a\implies b\implies c$ to $(a\implies b)\implies c$.</p>
PA6OTA
127,690
<p>I've run into the same problem. This is particularly bad for math, as one character can really dramatically change the essence of a question. </p> <p>Of course, there are workarounds suggested, like adding a comment describing the nature of your edit. Maybe that's the way to go.</p>
3,837,856
<p>In <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.29.6143&amp;rep=rep1&amp;type=pdf" rel="nofollow noreferrer">Fast Exact Multiplication by the Hessian</a> equation 1,</p> <p><span class="math-container">$O(\left\Vert\Delta w\right\Vert^2)$</span> gets taken from the RHS to the LHS and <span class="math-container">$\Delta w$</span> is substituted as <span class="math-container">$rv$</span>, where $r$ is a small scalar and $v$ is a vector. I understand that <span class="math-container">$O(\left\Vert rv\right\Vert^2) = O(r^2\left\Vert v\right\Vert^2)$</span>. But what I don't get is why the sign of <span class="math-container">$O$</span> did not become negative when it was taken to the LHS, and why <span class="math-container">$\left\Vert v\right\Vert^2 $</span> disappeared from <span class="math-container">$O$</span>. Is it because, as <span class="math-container">$r$</span> is tiny, it will govern the <span class="math-container">$O$</span> term and <span class="math-container">$\left\Vert v\right\Vert^2$</span> won't matter?</p>
Alex Ortiz
305,215
<p>When we write <span class="math-container">$x = O(z)$</span>, where <span class="math-container">$x$</span> is some expression, and <span class="math-container">$z$</span> is some other quantity, then we mean that <span class="math-container">$x$</span> is a quantity such that <span class="math-container">$|x| \le Cz$</span> for some number <span class="math-container">$C$</span>. If we have <span class="math-container">$x = y + O(z)$</span>, then we really mean <span class="math-container">$x-y = O(z)$</span> in the sense I just described. So <span class="math-container">$x-y$</span> is equal to some quantity <span class="math-container">$z'$</span> such that <span class="math-container">$|z'| \le Cz$</span>. So we can say <span class="math-container">$$ y = x + O(z) $$</span> too, since <span class="math-container">$y-x = -z'$</span>, and <span class="math-container">$|\!-\!z'| = |z'| \le Cz$</span>. So the minus sign does not need to be kept track of, and it's best to think of <span class="math-container">$O(z)$</span> as meaning a quantity, whose absolute value is bounded by a constant times <span class="math-container">$z$</span>.</p>
412,944
<p>Is it mathematically acceptable to use <a href="https://math.stackexchange.com/questions/403346/prove-if-n2-is-even-then-n-is-even">Prove if $n^2$ is even, then $n$ is even.</a> to conclude that since $2$ is even, $\sqrt 2$ is even? Furthermore, can that result also be used to conclude that $\sqrt [n]{2}$ is even for all $n$?</p> <p>A similar argument for odd numbers should give that $\sqrt[n]{k}$ is even or odd when $k$ is even or odd.</p> <p>My question is: has any of the above been considered under a more formal subject, or is it a correct/nonsensical observation?</p>
john
79,781
<p>Note that that theorem begins with "Suppose $n$ is an integer, and that $n^2$ is even." So it does not hold when considering $\sqrt{2}$</p>
2,086,598
<p>Let's start with a well known limit: $$\lim_{n \to \infty} \left(1 + {1\over n}\right)^n=e$$</p> <p>As $n$ approaches infinity, the expression evaluates to Euler's number $e\approx 2.7182$. Why that number? What properties does the limit have that lead it to 2.71828$\dots$ and why is this number important? Where can we find $e$, in what branch of Mathematics?</p>
Community
-1
<p>I know only one amazing property. </p> <hr> <p>For any number $a$, $(a^x)^\prime = ba^x$,where $\displaystyle b = \lim_{\Delta x \to 0} {a^{\Delta x} - 1\over \Delta x} = st\left({a^{\Delta x} - 1\over \Delta x}\right)$</p> <p>eg :- </p> <blockquote> <p>$$(10^x)^\prime = b(10^x)$$ Where $$b = \lim_{\Delta x \to 0} {10^{\Delta x} - 1\over \Delta x} = st\left({10^{\Delta x} - 1\over \Delta x}\right) = 2.303 \cdots$$</p> </blockquote> <p>It just so happens that for only $e$, $b = 1$ so $(e^x)^\prime = e^x$</p> <p>Which is just so amazing.</p>
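<p>The constant $b$ is easy to approximate numerically. The sketch below shows the quotient approaching $\ln a$, which is about $2.303$ for $a=10$ and exactly $1$ for $a=e$ (the step sizes are arbitrary choices):</p>

<pre><code>import math

for a in (10, math.e):
    for dx in (1e-3, 1e-6, 1e-9):
        b = (a**dx - 1) / dx
        print(f"a = {a:.5f}  dx = {dx:.0e}  (a^dx - 1)/dx = {b:.6f}")
    print(f"ln({a:.5f}) = {math.log(a):.6f}")
</code></pre>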
2,086,598
<p>Let's start with a well known limit: $$\lim_{n \to \infty} \left(1 + {1\over n}\right)^n=e$$</p> <p>As $n$ approaches infinity, the expression evaluates to Euler's number $e\approx 2.7182$. Why that number? What properties does the limit have that lead it to 2.71828$\dots$ and why is this number important? Where can we find $e$, in what branch of Mathematics?</p>
Ng Chung Tak
299,599
<p>Firstly,</p> <p>\begin{align*} e^{x} &amp;= 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\ldots \\ (e^{x})' &amp;= e^{x} \end{align*}</p> <p>Secondly,</p> <p>\begin{align*} y &amp;= e^{x} \\ \frac{dy}{dx} &amp;= y\\ x &amp;= \ln y \\ \frac{dx}{dy} &amp;= \frac{1}{y} \\ \frac{d}{dy} \ln y &amp;= \frac{1}{y} \\ \int \frac{dy}{y} &amp;= \ln |y|+C \end{align*}</p> <p>That's why we say $\log_{e} x=\ln x$ as natural logarithm.</p> <p>Similarly, we use radian as a natural unit for angles (in calculus).</p>
416,514
<p>I really think I have no talents in topology. This is a part of a problem from <em>Topology</em> by Munkres:</p> <blockquote> <p>Show that if $A$ is compact, $d(x,A)= d(x,a)$ for some $a \in A$. </p> </blockquote> <p>I always have the feeling that it is easy to understand the problem emotionally but hard to express it in math language. I am a student in Economics and I DO LOVE MATH. I really want to learn math well, could anyone give me some advice. Thanks so much!</p>
DonAntonio
31,254
<p>Hints:</p> <p>(1) By definition of the distance between a point and a set:</p> <p>$$D:=d(x,A):=\inf_{a\in A}d(x,a)$$</p> <p>(2) By definition of infimum:</p> <p>$$\forall\,n\in\Bbb N\;\exists\,a_n\in A\;\;s.t.\;\;D\le d(x,a_n)\le D+\frac1n$$</p> <p>(3) The sequence $\,\{a_n\}\,$ has a subsequence </p> <p>$$\,\{a_{n_k}\}\;\;s.t.\;\;a_{n_k}\xrightarrow[k\to\infty]{}a\in A$$</p> <p>(4) The distance function $\,d(x,.): X\to\Bbb R_+\;$ on a metric space $\,X\,$ is continuous </p>
1,136,458
<p>Suppose $G$ is an abelian group. Then the subset $H= \big\{ x\in G\ | \ x^3=1_G \big\}$ is a subgroup of $G$.</p> <p>I was able to show the statement is correct but the weird thing is that I didn't use the fact $G$ is abelian. <strong>Is it possible that the fact $G$ is abelian is redundant?</strong></p> <p>Here is how I proved the statement above without using the fact $G$ is abelian:</p> <p><strong>(Identity)</strong> Of course the identity is in H because $1_G^3=1_G$.</p> <p><strong>(Closure)</strong> Suppose $a,b\in H$ then $a^3=1_G$ and $b^3=1_G$. Therefore we have: $(ab)^3 = a^3b^3 = 1_G1_G = 1_G$.</p> <p><strong>(Inverse)</strong> Suppose $a\in H$ then $a^3=1_G \implies aa^2=1_G \implies a^{-1} = a^2$</p> <p>Am I correct?</p>
Geoff Robinson
13,147
<p>In fact, the sequence $( 1 + \frac{1}{n})^{n}$ is an increasing sequence, which is bounded above by $e$, and $e$ is in fact the least upper bound for the sequence, hence is the limit. The answer is that there is a fault in the computer's accuracy, there is no counterexample to the theory. If the computer contradicts the theory, then you can be sure that the fault is with the computer.</p>
3,225,151
<p><span class="math-container">$ZFC+V=L$</span> implies that <span class="math-container">$P(\mathbb{N})$</span> is a subset of <span class="math-container">$L_{\omega_1}$</span>. But I’m wondering what layer of the constructible Universe contains a smaller set.</p> <p>My question is, what is the smallest ordinal <span class="math-container">$\alpha$</span> such that for all formulas <span class="math-container">$\phi(n)$</span> in the language of second-order arithmetic, the set <span class="math-container">$\{n\in\mathbb{N}:\phi(n)\}\in L_\alpha$</span>? Does this depend on whether we assume <span class="math-container">$V=L$</span>?</p> <p>I’m guessing <span class="math-container">$\alpha&gt;\omega_1^{CK}$</span>, and that <span class="math-container">$\alpha$</span> is greater than the ordinal <span class="math-container">$\beta_0$</span> discussed in my question <a href="https://math.stackexchange.com/q/3224027/71829">here</a>. But can we say anything more about it?</p>
Not Mike
480,631
<p>What you want is the least <span class="math-container">$\delta \in \mathsf{On}$</span> such that <span class="math-container">$L_\delta \vDash "\mathsf{ZFC}^{-}+V=HC"$</span> (where <span class="math-container">$V=HC$</span> is the assertion that every set is hereditarily countable.) Since the theory "<span class="math-container">$\mathsf{ZFC}^{-} + V=HC$</span>" is bi-interpretable with second-order arithmetic, you'll get a <em>relativized</em> version of second-order comprehension. </p> <p>If the <em>relativzed</em> version of <span class="math-container">$\mathsf{SOA}$</span> is acceptable to you, then it's possible for <span class="math-container">$\delta$</span> to be countable; that said, if you want full comprehension, then as @Noah pointed out, it's consistent that no such <span class="math-container">$\delta$</span> exists (countable or otherwise.)</p>
747,997
<p>in <a href="https://math.stackexchange.com/questions/239521/why-no-trace-operator-in-l2">Why no trace operator in $L^2$?</a> it is mentioned, that there exists a linear continuous trace operator from $L^2(\Omega)$ to $H^\frac12(\partial\Omega)$* for sufficiently smooth boundary. Can you give me any reference for this statement? I need something like this and can not find it anywhere else.</p>
Jan
59,638
<p>I recommend you have a look into Girault&amp;Raviart's book on <em>Finite element methods for Navier-Stokes equations</em>. Chapter 1 provides a survey on basic concepts on Sobolev spaces including also the pretty involved definition of the trace spaces. The result you are out for is given in Thm. 1.5. </p> <p>Personally I find the presentation very accessible.</p> <p>Of course, as @Tomás says, you will find everything in the book by Lions&amp;Magenes <em>Non-Homogeneous Boundary Value Problems and Applications</em> in a more general setting.</p>
747,997
<p>in <a href="https://math.stackexchange.com/questions/239521/why-no-trace-operator-in-l2">Why no trace operator in $L^2$?</a> it is mentioned, that there exists a linear continuous trace operator from $L^2(\Omega)$ to $H^\frac12(\partial\Omega)$* for sufficiently smooth boundary. Can you give me any reference for this statement? I need something like this and can not find it anywhere else.</p>
bastienchaudet
390,718
<p>In Girault&amp;Raviart's book, Thm 1.5 does not apply in this case. Indeed, the continuity of the trace operator from $W^{s,p}(\Omega)$ to $W^{s-\frac{1}{p},p}(\partial\Omega)$ holds only if $s-\frac{1}{p}&gt;0$ (this is implied by the assumptions).</p>
1,638,490
<p>Yes, I know the title is bizarre. I was urinating and forgot to lift the seat up. That made me wonder: assuming I maintain my current position, is it possible for the toilet seat (assume it is a closed, but otherwise freely deformable curve) to be moved/deformed such that stream does not pass through the hole anymore without it intersecting the curve (in other words, spraying urine everywhere!)?</p>
Milo Brandt
174,927
<p>Purely topologically, the situation here is the same as having a ring (a toilet seat) surrounding a ray (with you at the end of the ray, urine in the middle, and the toilet anchored "off at infinity"). So, one can simply move the ring off the ray, since the ray has an end. Bringing this into real world terms, this looks something like the following:</p> <ul> <li>Firstly, step into the toilet bowl. Now the topology is quite clear - you are a ray pointing downwards and the toilet seat must be lifted up to escape. Thus:</li> <li>Secondly, duck into the toilet bowl. This will bring the seat over your head, hence off the end of the ray. Lift the seat at this point.</li> <li>Finally, stand up. You have successfully unlinked the urine stream from the toilet bowl.</li> </ul> <p>I write this solution mostly to point out that we can introduce some amount of rigidity to this problem without making it unsolvable: In particular, suppose you are a line segment, and there is a parabola emanating from your midpoint and terminating at a fixed location. There is a ring around this. Now, if the parabola and the line segment are perpendicular, and the line segment is longer than the ring (which it probably is), you're not going to be able to unlink the ring. However, you can unlink the system by making the parabola and line segment parallel, so they look like a point in cross section to the ring, making it easy to pass through.</p> <p>Now, there's a few positions that would achieve this - for instance, you could levitate horizontally in midair, or do a hand stand, but the easiest to achieve would be to make everything vertical and just stand in the toilet. Now, since the parabola and line segment are parallel, we're back to having a ray, which is nice. This also has the advantage that you do not need to pry the toilet seat off. In fact, it's conceivable that, if you have a small enough cross section, this solution would actually work without breaking anything.</p>
2,967,626
<p>I have a graph that is 5280 units wide, and 5281 is the length of the arc.</p> <p><img src="https://i.stack.imgur.com/4rFK4.png"></p> <p>Knowing the width of this arc, and how long the arc length is, how would I calculate exactly how high the highest point of the arc is from the line at the bottom?</p>
saulspatz
235,128
<p>The usual way of getting rid of square roots is to square both sides. Of course, this increases the degrees, but at least it gets rid of the root. Rewrite the last inequality as <span class="math-container">$$r-{b^2r\over z^2}&lt; b\sqrt{1-{b^2\over r^2}}\tag{1}$$</span> Now the right-hand side is nonnegative, so the first possibility to check is that the left-hand side is negative. Since <span class="math-container">$r&gt;0,$</span> <span class="math-container">$$r-{b^2r\over z^2}&lt;0\implies 1-{b^2\over z^2}&lt;0,$$</span> which is impossible, since the square root in <span class="math-container">$(1)$</span> wouldn't make sense. Therefore, the left-hand side of <span class="math-container">$(1)$</span> is nonnegative, and <span class="math-container">$(1)$</span> is equivalent to the the inequality we get by squaring both sides:<span class="math-container">$$ \left(r-{b^2r\over z^2}\right)^2&lt; b^2\left(1-{b^2\over r^2}\right)$$</span></p> <p>This should be simpler to work with. I suggest you start be clearing denominators.</p>
55,071
<p><strong>Bug introduced in 10.0.0 and persisting through 10.3.0 or later</strong></p> <hr> <p>I've upgraded my home installation of <em>Mathematica</em> from version 9 to 10 today on a Windows 8.1 machine, and I'm getting a weird font issue - the fonts are not anti-aliased, and look unbalanced and weird. Just look:</p> <blockquote> <p><img src="https://i.stack.imgur.com/cnuXE.png" alt="HALP"></p> </blockquote> <p>For comparison, here what it looks on Linux with <em>Mathematica</em> V10</p> <blockquote> <p><img src="https://i.stack.imgur.com/Qf6kO.png" alt="Mathematica graphics"></p> </blockquote> <p>At this point you may object, as these issues are too minor to get worked up about. But neither I nor my otherwise benign OCD can work like this. Any ideas?</p> <p><strong>EDIT</strong> I've just had an idea, that maybe I need to manually remove old fonts leftover from the old <em>Mathematica</em> 9 installation. I've read somewhere that they were not going to pollute the main font catalog with symbol fonts in the next release, and maybe the new <em>Mathematica</em> is using old fonts for some reason. I can't test it right now myself, unfortunately.</p>
Community
-1
<p>As a follow-up to Tetsuo Ichii's <a href="https://mathematica.stackexchange.com/a/55356/31159">answer</a>, the thinnings observed in version 10 are also present in 11.0.1:</p> <p><a href="https://i.stack.imgur.com/BSF4X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BSF4X.png" alt="enter image description here"></a></p>
31,948
<p>A Finsler manifold is defined as a differentiable manifold with a metric defined on it so that the length of any well-defined curve of finite arc length is given by a generalized arc length integral of an asymmetric norm over each tangent space defined at a point. This generalizes the Riemannian manifold structure since the norm is no longer required to be induced by an inner product and therefore the Finsler manifold is not necessarily Euclidean in the tangent space structure. </p> <p>A colleague of mine and I recently got into an argument over whether or not Finsler manifolds are semi- or pseudo-Riemannian. I say no: by definition, a semi-Riemannian manifold (like a Lorentzian manifold in relativity theory) is still required to have a metric tensor as its normed structure, which is clearly an inner product. We simply weaken the condition of positive definiteness (i.e. the associated quadratic form of the norm is real valued) to nondegeneracy (i.e. the tangent space is isomorphic with its dual space). Both conditions require the distance structure to be induced by an inner product. </p> <p>My colleague's argument is that the key property of semi-Riemannian manifolds is that they admit local signed coordinate structures that allow the distinction of different kinds of tangent spaces on the manifold. Local isomorphisms can be defined (especially in infinite-dimensional extensions of classical relativistic spaces) that make certain Finsler manifolds equivalent to relativistic models of space-time. </p> <p>I honestly don't know enough about the research that's been done on this. Is he right? This seems very bizarre to me, but it may indeed be possible to use specially constructed mappings to convert Finsler spaces to semi-Riemannian ones and vice-versa. I seriously doubt it could be done globally without running into serious topological barriers. </p> <p>I'd like the geometers to chime in on this, particularly ones who are well-versed in relativistic geometry: Am I right? Can Finsler manifolds be defined in such a manner as to be true semi-Riemannian manifolds? Can local isomorphisms or diffeomorphisms be defined to interconvert them? </p>
Deane Yang
613
<p>No, a Finsler metric is in general not semiriemannian. As you and José indicate, a semiriemannian metric is always given by a nondegenerate quadratic form on tangent vectors at each point in the manifold. In other words, if you fix a basis of the tangent space for a given point, then the norm of that vector is given by a homogeneous quadratic polynomial in the coefficients of the vector with respect to the basis.</p> <p>On the other hand, a Finsler metric is given by a norm function on the tangent space of each point in the manifold. This norm function must be convex, and additional regularity and convexity assumptions are often made. However, there is no requirement that the norm function be given by a quadratic form. It could be given by a higher even degree polynomial in the coefficients of a vector with respect to a basis. But it could be an arbitrary sufficiently smooth sufficiently convex function, too.</p> <p>One way to think about this is to consider the standard flat models. The standard flat semiriemannian model is just $R^n$ with the metric given by a non-degenerate quadratic form. The standard flat Finsler model is $R^n$ with a (sufficiently smooth and convex) Banach norm, i.e. a finite dimensional Banach space. There are obviously a lot more of the latter than the former.</p>
183,233
<p>I am having some problems joining sigma algebras. So I have: <span class="math-container">$$\textit{K} =\left \{ A\cap B: A\in \sigma (D),B\in \sigma (E) \right \}$$</span> and I need to show <span class="math-container">$$\sigma(K)= \sigma (D,E)$$</span></p> <h3>What I've done so far</h3> <p>I intend to do this by showing <span class="math-container">$$\sigma(K)\subseteq \sigma (D,E)$$</span> and <span class="math-container">$$\sigma(K)\supseteq\sigma (D,E)$$</span></p> <p>The first part is easy enough; my problem is <span class="math-container">$\sigma(K)\supseteq\sigma (D,E)$</span> as I'm not sure where to begin.</p> <p>I've started by saying by definition <span class="math-container">$\sigma(D,E)= \sigma(\sigma(D)\cup\sigma(E))$</span> and <span class="math-container">$\sigma(\sigma(D)\cup\sigma(E))=\sigma(\sigma(D)\cap\sigma(E))$</span> by De Morgan and <span class="math-container">$\sigma(D)\cap\sigma(E)=K$</span>, but I think that is wrong as <span class="math-container">$\sigma(D)\cap\sigma(E)$</span> may be a sigma algebra and thus it is smaller than <span class="math-container">$ \sigma (D,E)$</span>.</p> <p>I've also considered something like <span class="math-container">$\sigma(K^{c})$</span> to somehow argue that <span class="math-container">$\sigma(\sigma(D)\cup\sigma(E))$</span> is a subset of that but intuitively it feels like what I did with the De Morgan above.</p> <p>Thanks</p> <p>EDIT: sorry for the confusing notation everyone, everything has been changed.</p>
tomasz
30,222
<p>Notice that $K$ contains $A$ and $B$. The definition of $\sigma(A,B)$ would be $\sigma(A\cup B)$, I think. Not that it makes for much of a difference, but still...</p> <p>I think you're overthinking it. I believe that the first part is actually <em>harder</em> (if only a little bit).</p> <p>Also, you might want to use different symbols to denote the algebras and their elements (\mathcal might help you).</p> <p>If you're still stuck, see the further hint:</p> <blockquote class="spoiler"> <p>If we have a set $G$ and a sigma-algebra $\Sigma$, then if $G\subseteq \Sigma$, then $\sigma(G)\subseteq \Sigma$. Thus, if for some $G_1,G_2$ we have that $\sigma(G_1)\supseteq G_2$ and $\sigma(G_2)\supseteq G_1$, then $\sigma(G_1)=\sigma(G_2)$.</p> </blockquote>
183,233
<p>I am having some problems joining sigma algebras. So I have: <span class="math-container">$$\textit{K} =\left \{ A\cap B: A\in \sigma (D),B\in \sigma (E) \right \}$$</span> and I need to show <span class="math-container">$$\sigma(K)= \sigma (D,E)$$</span></p> <h3>What I've done so far</h3> <p>I intend to do this by showing <span class="math-container">$$\sigma(K)\subseteq \sigma (D,E)$$</span> and <span class="math-container">$$\sigma(K)\supseteq\sigma (D,E)$$</span></p> <p>The first part is easy enough; my problem is <span class="math-container">$\sigma(K)\supseteq\sigma (D,E)$</span> as I'm not sure where to begin.</p> <p>I've started by saying by definition <span class="math-container">$\sigma(D,E)= \sigma(\sigma(D)\cup\sigma(E))$</span> and <span class="math-container">$\sigma(\sigma(D)\cup\sigma(E))=\sigma(\sigma(D)\cap\sigma(E))$</span> by De Morgan and <span class="math-container">$\sigma(D)\cap\sigma(E)=K$</span>, but I think that is wrong as <span class="math-container">$\sigma(D)\cap\sigma(E)$</span> may be a sigma algebra and thus it is smaller than <span class="math-container">$ \sigma (D,E)$</span>.</p> <p>I've also considered something like <span class="math-container">$\sigma(K^{c})$</span> to somehow argue that <span class="math-container">$\sigma(\sigma(D)\cup\sigma(E))$</span> is a subset of that but intuitively it feels like what I did with the De Morgan above.</p> <p>Thanks</p> <p>EDIT: sorry for the confusing notation everyone, everything has been changed.</p>
Stéphane Laurent
38,217
<p>Your definition of $K$ does not quite make sense: you should say for instance $K=\sigma(X\cap Y, X \in\sigma(A), Y\in\sigma(B))$ (because $A \in \sigma(A)$ is puzzling).</p> <p>The first inclusion $\sigma(K) \subset \sigma(A,B)$ is easy: it follows from the inclusions $\sigma(A) \subset \sigma(A,B)$ and $\sigma(B) \subset \sigma(A,B)$ and from the stability of a $\sigma$-field under intersection.</p> <p>The second inclusion $\sigma(K) \supset \sigma(A,B)$ is easy too: clearly $\sigma(K)$ is a $\sigma$-algebra containing $\sigma(A)$ and $\sigma(B)$, therefore it contains the smallest $\sigma$-algebra containing $\sigma(A)$ and $\sigma(B)$, which is nothing but $\sigma(A,B)$.</p>
651,707
<p>Let $A=(a_{ij})\in \mathbb{M}_n(\mathbb{R})$ be defined by</p> <p>$$ a_{ij} = \begin{cases} i, &amp; \text{if } i+j=n+1 \\ 0, &amp; \text{ otherwise} \end{cases} $$ Compute $\det (A)$</p> <hr> <blockquote> <p>After calculation I get that it may be $(-1)^{n-1}n!$. Am I right?</p> </blockquote>
dato datuashvili
3,196
<p>For example, let us take the $3\times 3$ matrix; then it would be the following matrix</p> <p>a=[0 0 1;0 2 0;3 0 0]</p> <p>a =</p> <pre><code> 0 0 1
 0 2 0
 3 0 0
det(a)
ans = -6
</code></pre> <p>In your case, if we compute $(-1)^{n-1}\cdot n!=(-1)^{2}\cdot 3!=6$, it does not match the computed determinant $-6$.</p> <p>Checking $n=2$ (determinant $-2$) and $n=3$ (determinant $-6$) suggests that the sign is that of the order-reversing permutation, so maybe it is $(-1)^{n(n-1)/2}\cdot n!$ (for $n=3$ this gives $-6$, matching the computation above).</p>
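<p>The pattern of signs can be checked for several sizes at once. Here is a small NumPy sketch (an illustration only) that builds the matrix from the definition $a_{ij}=i$ when $i+j=n+1$ and compares its determinant with $(-1)^{n(n-1)/2}\,n!$:</p>

<pre><code>import math
import numpy as np

for n in range(2, 8):
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        A[i - 1, n - i] = i            # a_{ij} = i exactly when i + j = n + 1
    predicted = (-1) ** (n * (n - 1) // 2) * math.factorial(n)
    print(n, int(round(np.linalg.det(A))), predicted)   # the two columns agree
</code></pre>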
3,792,242
<p>A large part of the set theory is devoted to infinities of various kinds, and this has been built on Cantor's groundbreaking work on uncountable sets. However, even Cantor's proof is based on the assumption that certain sets exist, namely that the power set of a countably infinite set exists. Sure, after assuming that the set of natural numbers has a power set, Cantor's proof shows that this power set is uncountable. However, he does not address why such a power set should exist in the first place. Is the existence of the power set of an infinite set assumed as an axiom of set theory?</p> <p>Intuitively, I found the existence of such a power set troubling. By definition, most of the elements of this set consist of purely random infinite sets. For example, the power set of natural numbers can be partitioned into:</p> <ol> <li>Those elements that can be constructed by an algorithm (i.e. a Turing Machine), such as all the finite sets, or the infinite sets with a constructive algorithm (e.g., set of odd numbers, set of natural numbers that build the digits of <span class="math-container">$\pi$</span> in some form such as {3, 14, 159, ...}, etc.)</li> <li>All the other elements that no Turing Machine can generate them, e.g., an infinite set of entirely random natural numbers.</li> </ol> <p>These purely random sets have (1) an infinite amount of information packed into them, and (2) we cannot even construct them. Both these aspects are unsettling to me.</p> <p>Setting the intuition aside, is there a concrete reason why mathematicians accept the existence of the power set of an infinite set? Can we prove that such a set must exist? Has there been any inquiry on whether the existence of such a set would result in inconsistencies or not?</p>
Doctor Who
812,056
<p>Firstly, let's discuss the situation in classical logic (ordinary logic where <span class="math-container">$P \lor \neg P$</span> is a tautology). Suppose the existence of a set <span class="math-container">$S$</span>, and suppose that the set <span class="math-container">$\{f : S \to 2\}$</span> exists, where <span class="math-container">$2$</span> is some fixed 2-element set <span class="math-container">$\{\top, \bot\}$</span>. In other words, suppose that the collection of all functions <span class="math-container">$S \to 2$</span> forms a set. In this case, we can plainly see that the power set of <span class="math-container">$S$</span> exists by the Axiom of Replacement, since every subset <span class="math-container">$J \subseteq S$</span> defines a characteristic function <span class="math-container">$f : S \to 2$</span> by <span class="math-container">$f(j) = \top$</span> if <span class="math-container">$j \in J$</span>, and <span class="math-container">$f(x) = \bot$</span> if <span class="math-container">$x \notin J$</span>, and every function <span class="math-container">$f : S \to 2$</span> defines a subset <span class="math-container">$\{x : f(x) = \top\}$</span>. Thus, if we want the collection of functions between two fixed sets to form a set, we must accept the existence of power sets. On the other hand, accepting the existence of power sets (along with the existence of cartesian products, which follows from ZF's replacement axiom schema) allows one to define the set of functions between two sets. Basically, if we want to consider sets of functions with a given domain and codomain, we also must accept the existence of powersets.</p> <p>In general, this concept generalises to intuitionist logic (logic which doesn't necessarily accept <span class="math-container">$P \lor \neg P$</span> as a tautology) as follows: let <span class="math-container">$1$</span> be the one-element set, and suppose the set <span class="math-container">$\Omega = \{S : S \subseteq 1\}$</span> exists. Then the powerset of an arbitrary set <span class="math-container">$S$</span> exists iff the collection <span class="math-container">$\{f : S \to \Omega\}$</span> is a set. For if we have the set <span class="math-container">$\{f : S \to \Omega\}$</span>, then just as before we can define for every <span class="math-container">$f : S \to \Omega$</span> a subset <span class="math-container">$\{x \in S : f(x) = 1\}$</span>; and conversely, for every <span class="math-container">$J \subseteq S$</span>, we can define a function <span class="math-container">$f(x) = \{y \in 1 : x \in J\}$</span>, <span class="math-container">$f : S \to \Omega$</span>, so we can again apply Replacement.</p> <p>If we want the power set of <span class="math-container">$1$</span> to exist and we want function sets to exist, we must accept the existence of power sets as a necessary consequence. In classical logic, we can prove that the powerset of <span class="math-container">$1$</span> exists since it is equal to <span class="math-container">$\{\emptyset, 1\}$</span> which exists by the axiom of pairing.</p> <p>There are weaker theories of set theory in which the law of excluded middle (<span class="math-container">$P \lor \neg P$</span>) is not considered a tautology and in which function sets <span class="math-container">$\{f : A \to B\}$</span> exist but it cannot be proven that powersets exist (even <span class="math-container">$\Omega = P(1)$</span> may not exist). An example is CZF. 
Other examples of such theories (moving beyond the realm of &quot;set theory&quot;) include Homotopy Type Theory and, more generally, the theory of <span class="math-container">$\Pi W$</span>-pretoposes.</p> <p>If we drop the power set axiom from <span class="math-container">$ZFC$</span>, then power sets cannot be proven to exist; this is precisely why the power set axiom is an axiom. But we would then also be forced to give up sets of functions, which would make set theory a miserable place for most mathematics.</p> <p>Indeed, it seems difficult to even discuss point-set topology without the power set axiom; we would essentially have to write that branch of mathematics off.</p> <p>Of course, the power set axiom does introduce one complication: it is possible that <span class="math-container">$ZF$</span> with power set is inconsistent while <span class="math-container">$ZF$</span> without power set is consistent. In fact, because we can prove within <span class="math-container">$ZFC$</span> that <span class="math-container">$ZFC$</span> minus power set is consistent, we cannot possibly prove that <span class="math-container">$ZFC$</span> is consistent from <span class="math-container">$ZFC$</span> minus power set unless both theories are inconsistent. But at the end of the day, constant fear of foundational inconsistencies is no way to live one's mathematical life.</p>
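<p>As a purely finite illustration of the classical correspondence described above (this is not an argument about the axioms themselves; the sample set and the names TOP and BOT are arbitrary choices), here is a short Python sketch that pairs each function from a finite <span class="math-container">$S$</span> to a two-element set with the subset it carves out, and checks that the pairing is a bijection:</p>

<pre><code>from itertools import product

S = (0, 1, 2)            # a finite stand-in for the set S
TOP, BOT = True, False   # a fixed two-element set

def char_to_subset(f):
    # a function f from S to {TOP, BOT} determines the subset of points sent to TOP
    return frozenset(x for x in S if f[x] == TOP)

def subset_to_char(J):
    # a subset J of S determines its characteristic function
    return {x: (TOP if x in J else BOT) for x in S}

# enumerate every function from S to {TOP, BOT} and collect the subsets they define
functions = [dict(zip(S, values)) for values in product((TOP, BOT), repeat=len(S))]
power_set = {char_to_subset(f) for f in functions}

assert len(power_set) == 2 ** len(S)                                   # |P(S)| = 2^|S|
assert all(char_to_subset(subset_to_char(J)) == J for J in power_set)  # round trip is the identity
print(sorted(sorted(J) for J in power_set))
</code></pre>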
3,341,471
<p>Let <span class="math-container">$M$</span> be a multiplication operator on <span class="math-container">$L^2(\mathbb{R})$</span> defined by <span class="math-container">$$Mf(x) = m(x) f(x)$$</span> where <span class="math-container">$m(x)$</span> is continuous and bounded. Prove that <span class="math-container">$M$</span> is a bounded operator on <span class="math-container">$L^2(\mathbb{R})$</span> and that its spectrum is given by <span class="math-container">$$\sigma(M) = \overline{\{m(x) : x \in \mathbb{R}\}}.$$</span> Can <span class="math-container">$M$</span> have eigenvalues? </p> <p>My partial answer is below:</p> <p>First, observe that for <span class="math-container">$f \in L^2(\mathbb{R})$</span>, <span class="math-container">$$\| Mf\|_{L^2}^2 = \|m f \|^2_{L^2} = \int_\mathbb{R} |m(x)|^2 |f(x)|^2 \, dx \leq \|m\|_\infty^2 \int_\mathbb{R} |f(x)|^2 \, dx = \|m\|_\infty^2 \|f\|_{L^2}^2.$$</span> Since <span class="math-container">$m$</span> is bounded, <span class="math-container">$\|m\|_\infty &lt; \infty$</span>, so <span class="math-container">$M$</span> is bounded with <span class="math-container">$\|M\| \leq \|m\|_\infty$</span>. </p> <p>Define the set <span class="math-container">$X = \overline{\{m(x) : x \in \mathbb{R}\}}$</span>. I show <span class="math-container">$X \subseteq \sigma(M)$</span> by contrapositive. Suppose <span class="math-container">$\lambda \in \rho(M)$</span>. Then <span class="math-container">$(M-\lambda I)^{-1}$</span> exists and is bounded, say with norm <span class="math-container">$C$</span>. If we had <span class="math-container">$\lambda = m(x_0)$</span> for some <span class="math-container">$x_0 \in \mathbb{R}$</span>, then by continuity of <span class="math-container">$m$</span> there would be an interval <span class="math-container">$I_0$</span> around <span class="math-container">$x_0$</span> on which <span class="math-container">$|m(x) - \lambda| &lt; \frac{1}{2C}$</span>; taking <span class="math-container">$g = \chi_{I_0}$</span> would give <span class="math-container">$\|(M-\lambda I)g\| \leq \frac{1}{2C}\|g\|$</span>, hence <span class="math-container">$\|g\| \leq C\,\|(M-\lambda I)g\| \leq \frac{1}{2}\|g\|$</span>, which is impossible since <span class="math-container">$\|g\| &gt; 0$</span>. Thus <span class="math-container">$\lambda \neq m(x)$</span> for every <span class="math-container">$x\in\mathbb{R}$</span>, so <span class="math-container">$\{m(x) : x \in \mathbb{R}\} \subseteq \sigma(M)$</span>. Since the spectrum is closed, it follows that <span class="math-container">$X \subseteq \sigma(M)$</span>. </p> <p>I'm not sure how to proceed to show that <span class="math-container">$\sigma(M) \subseteq X$</span>. </p>
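<p>As a sanity check on the statement (purely numerical, not a proof; the particular choice of <span class="math-container">$m$</span> and the grid are arbitrary), I also discretised <span class="math-container">$M$</span> as a diagonal matrix in Python and looked at its norm and at the resolvent near a point of <span class="math-container">$\overline{\{m(x) : x \in \mathbb{R}\}}$</span>:</p>

<pre><code>import numpy as np

# finite-dimensional sanity check only: discretise M as the diagonal matrix
# whose entries are samples of m on a grid; m(x) = arctan(x) and the grid
# below are arbitrary illustrative choices
x = np.linspace(-50.0, 50.0, 401)
m = np.arctan(x)                    # continuous and bounded by pi/2
M = np.diag(m)

# the operator norm of the discretised M equals max |m| over the grid
print(np.linalg.norm(M, 2), np.max(np.abs(m)))

# lambda = pi/2 lies in the closure of the range of m but is never attained;
# the resolvent norm equals 1/min|m - lambda| and grows without bound as the
# grid is extended to larger x, consistent with lambda being in the spectrum
lam = np.pi / 2
resolvent = np.linalg.inv(M - lam * np.eye(len(x)))
print(np.linalg.norm(resolvent, 2), 1.0 / np.min(np.abs(m - lam)))
</code></pre>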
Kavi Rama Murthy
142,385
<p><span class="math-container">$X$</span> is closed and bounded, hence compact. If <span class="math-container">$\lambda \notin X$</span> then there exists <span class="math-container">$\epsilon &gt;0$</span> such that <span class="math-container">$|\lambda -y| &gt;\epsilon$</span> for all <span class="math-container">$y \in X$</span>. Define <span class="math-container">$T$</span> by <span class="math-container">$Tf(x)=\frac 1 {m(x)-\lambda} f(x)$</span>. Then <span class="math-container">$T$</span> is a bounded operator with <span class="math-container">$\|T\| \leq \frac 1 {\epsilon}$</span>. [This is proved exactly the way you proved that <span class="math-container">$\|M\| \leq \|m\|_{\infty}$</span>.]</p> <p>Now verify that <span class="math-container">$T(M-\lambda I)=(M-\lambda I)T=I$</span>. Thus <span class="math-container">$M-\lambda I$</span> is invertible and <span class="math-container">$\lambda$</span> is not in the spectrum of <span class="math-container">$M$</span>, which gives <span class="math-container">$\sigma(M) \subseteq X$</span>. </p>
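<p>For completeness, the bracketed step can be written out just as the bound for <span class="math-container">$\|M\|$</span> was: since each <span class="math-container">$m(x)$</span> lies in <span class="math-container">$X$</span>, we have <span class="math-container">$|m(x)-\lambda| &gt; \epsilon$</span>, so for <span class="math-container">$f \in L^2(\mathbb{R})$</span> <span class="math-container">$$\|Tf\|_{L^2}^2 = \int_{\mathbb{R}} \frac{|f(x)|^2}{|m(x)-\lambda|^2}\,dx \leq \frac{1}{\epsilon^2}\int_{\mathbb{R}} |f(x)|^2\,dx = \frac{1}{\epsilon^2}\,\|f\|_{L^2}^2,$$</span> hence <span class="math-container">$\|T\| \leq \frac{1}{\epsilon}$</span>.</p>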