757,702
<p>I am in my pre-academic year. We recently studied the Remainder Theorem (at least that's how I think it translates), which states that any polynomial can be written as <span class="math-container">$P = Q\cdot L + R$</span>.</p> <p>I am unable to solve the following:</p> <blockquote> <p>Show that <span class="math-container">$(x + 1)^{2n + 1} + x^{n + 2}$</span> can be divided by <span class="math-container">$x^2 + x + 1$</span> without remainder.</p> </blockquote>
enigne
143,559
<p>If $n=0$, then it is trivial.</p> <p>For $n&gt;0$, we have $$(x+1)^{2n+1}+x^{n+2}\\=(x^2+2x+1)(x+1)^{2n-1}+x^{n+2}\\=(x^2+x+1)(x+1)^{2n-1}+x(x+1)^{2n-1}+x^{n+2}\\=(x^2+x+1)(x+1)^{2n-1}+x\left((x+1)^{2n-1}+x^{n+1}\right).$$ By induction: if $(x+1)^{2n-1}+x^{n+1}$ is divisible by $x^2+x+1$, then $(x+1)^{2n+1}+x^{n+2}$ is also divisible by $x^2+x+1$.</p>
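Not part of the original answer: the claim is also easy to sanity-check numerically. Since the roots of $x^2+x+1$ are the primitive cube roots of unity (and the coefficients are real), divisibility is equivalent to the polynomial vanishing at $\omega = e^{2\pi i/3}$. A minimal sketch in plain Python:

```python
import cmath

# w is a primitive cube root of unity, i.e. a root of x^2 + x + 1.
# For a real-coefficient polynomial, vanishing at w (hence also at its
# conjugate) is equivalent to divisibility by x^2 + x + 1.
w = cmath.exp(2j * cmath.pi / 3)

def p(n):
    # (x+1)^(2n+1) + x^(n+2) evaluated at x = w
    return (w + 1) ** (2 * n + 1) + w ** (n + 2)

for n in range(20):
    assert abs(p(n)) < 1e-9  # vanishes for every n
```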
1,705,656
<p>I've hit a wall on the above question and was unable to find any online examples that also contain trig in $f(g(x))$. I'm sure I am missing something blatantly obvious but I can't quite get it.</p> <p>$$ g(x)=3x+4 , \quad f(g(x)) = \cos\left(x^2\right)$$</p> <p>So far I've managed to get to the point where I have $f(x+8) = \cos\left(x^2\right)$, by solving $g^{-1}(g(x))$ (loosely based on the last bit of advice <a href="https://www.physicsforums.com/threads/if-f-o-g-16x-7-and-g-x-2x-1-find-f-x.518117/">here</a>), but I can't make that final connection.</p> <p>My best attempt so far was $f(x)=\cos(x^2-16x+64)$, but while that does result in $x^2$, it still ends up wrong due to it being cosine.</p>
Roman83
309,360
<p>$$y=g(x)=3x+4$$ $$x=\frac{y-4}{3} \Rightarrow g^{-1}(x)=\frac{x-4}{3}$$ $$f=f \circ (g \circ g^{-1})=(f \circ g) \circ g^{-1}, \quad\text{so}\quad f(x)=\cos \left(\left(\frac{x-4}{3}\right)^{2}\right) $$</p>
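A quick numerical check of the answer's formula (editorial addition, not from the original post; `f_candidate` is my name for the proposed $f$):

```python
import math
import random

def g(x):
    return 3 * x + 4

def f_candidate(y):
    # the answer's proposal: f(y) = cos(((y - 4) / 3)^2)
    return math.cos(((y - 4) / 3) ** 2)

# f(g(x)) should reproduce cos(x^2) for arbitrary x
for _ in range(1000):
    x = random.uniform(-20, 20)
    assert abs(f_candidate(g(x)) - math.cos(x ** 2)) < 1e-9
```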
3,007,785
<p><span class="math-container">$\lim\limits_{x\to 0}\frac{e^x-\sqrt{1+2x+2x^2}}{x+\tan (x)-\sin (2x)}$</span> I know how to compute this limit with the help of l'Hôpital's rule, but it is very tedious, because I need to differentiate three times, which leads to very messy calculations. I have the answer <span class="math-container">$\frac{2}{5}$</span>. </p> <p>I want to know whether there are other ways to calculate it, without using l'Hôpital's rule three times. (I could write out my steps, but they are very long. I just took the third derivatives of the numerator and denominator.)</p>
Community
-1
<p>It is not as "very awful" as you say, if you tame the computation.</p> <p>For the second term of the numerator, you can work this out as</p> <p><span class="math-container">$$y^2=1+2x+2x^2,$$</span></p> <p><span class="math-container">$$2yy'=2+4x,$$</span></p> <p><span class="math-container">$$2y'^2+2yy''=4,$$</span></p> <p><span class="math-container">$$4y'y''+2y'y''+2yy'''=0.$$</span></p> <p>Then, evaluating at <span class="math-container">$x=0$</span>, you obtain</p> <p><span class="math-container">$$y=1,y'=1,y''=1,y'''=-3.$$</span></p> <p>For the tangent, use </p> <p><span class="math-container">$$z=\tan x,$$</span></p> <p><span class="math-container">$$z'=z^2+1,$$</span></p> <p><span class="math-container">$$z''=2zz',$$</span></p> <p><span class="math-container">$$z'''=2z'^2+2zz''$$</span></p> <p>giving</p> <p><span class="math-container">$$z=0,z'=1,z''=0,z'''=2.$$</span></p> <p>Finally, the limit is (by the three applications of L'Hospital)</p> <p><span class="math-container">$$\frac{e^0-(-3)}{2-(-8)}=\frac{4}{10}=\frac{2}{5}.$$</span></p>
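The derivative values above can be cross-checked with finite differences (an editorial sketch, not part of the answer): the third derivatives of the numerator and denominator at $0$ should be $1-(-3)=4$ and $2-(-8)=10$.

```python
import math

def third_derivative_at_0(f, h=1e-2):
    # central O(h^2) stencil for f'''(0)
    return (f(2 * h) - 2 * f(h) + 2 * f(-h) - f(-2 * h)) / (2 * h ** 3)

def num(x):
    return math.exp(x) - math.sqrt(1 + 2 * x + 2 * x ** 2)

def den(x):
    return x + math.tan(x) - math.sin(2 * x)

assert abs(third_derivative_at_0(num) - 4) < 0.05   # = e^0 - y'''(0)
assert abs(third_derivative_at_0(den) - 10) < 0.05  # = z'''(0) - (sin 2x)'''(0)
```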
trancelocation
467,003
<p><span class="math-container">$$\begin{eqnarray*} \frac{e^x-\sqrt{1+2x+2x^2}}{x+\tan (x)-\sin (2x)} &amp; = &amp; \underbrace{\frac{1}{e^x+\sqrt{1+2x+2x^2}}}_{\mbox{harmless}}\cdot \frac{e^{2x}-(1+2x+2x^2)}{x+\tan (x)-\sin (2x)} \\ &amp; \stackrel{\mbox{Taylor}}{=} &amp; \frac{1}{e^x+\sqrt{1+2x+2x^2}}\cdot \frac{\frac{4}{3}x^3+O(x^4)}{\frac{5}{3}x^3+O(x^4)} \\ &amp;\stackrel{x \to 0}{\longrightarrow} &amp; \frac{1}{2}\cdot \frac{4}{5} = \frac{2}{5} \end{eqnarray*}$$</span></p>
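Either way, the value is easy to confirm numerically (editorial sketch; since both numerator and denominator behave like $c\,x^3$, moderately small $x$ already gets close to $2/5$):

```python
import math

def ratio(x):
    num = math.exp(x) - math.sqrt(1 + 2 * x + 2 * x ** 2)
    den = x + math.tan(x) - math.sin(2 * x)
    return num / den

# the ratio should approach 2/5 = 0.4 as x -> 0
for x in (1e-1, 1e-2, 1e-3):
    assert abs(ratio(x) - 0.4) < 0.05
```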
1,952,337
<p>I have consulted some answers on SE already, such as these two:</p> <p><a href="https://math.stackexchange.com/questions/473831/if-am-i-then-a-is-diagonalizable">If $A^m=I$ then A is Diagonalizable</a></p> <p><a href="https://math.stackexchange.com/questions/507199/show-that-if-an-i-then-a-is-diagonalizable">Show that if $A^{n}=I$ then $A$ is diagonalizable.</a></p> <p>However, I am still confused. I understand that <em>if</em> we can show that $p_A (\lambda) = \lambda^n - 1$, then setting it to zero, we will have $(\lambda - 1)(\lambda^{n-1} + \cdots)=0$, so we have $n$ distinct roots. (or even simpler, root of unity, so it follows)</p> <p>But how do we get $p_A (\lambda) = \lambda^n - 1$ in the first place? Cayley-Hamilton says that $A$ should satisfy its own characteristic polynomial. But in this case we don't have the characteristic polynomial to begin with?</p>
211,792
<p>To show that $A$ is diagonalizable, it's sufficient to show that its minimal polynomial has distinct roots. Since $A$ satisfies $A^m-I=0$, its minimal polynomial must divide $\lambda^m-1$, which has distinct roots (the $m$-th roots of unity). So the minimal polynomial has distinct roots, meaning that $A$ is diagonalizable.</p> <p>For a proof of the first sentence, see for example <a href="https://math.stackexchange.com/questions/56745/minimal-polynomial-and-diagonalizable-matrix">Minimal polynomial and diagonalizable matrix</a></p>
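An editorial illustration in plain Python (not from the answer): the rotation-by-90° matrix satisfies $A^4=I$, and its eigenvalues are distinct 4th roots of unity, consistent with the minimal-polynomial argument.

```python
import cmath

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, -1], [1, 0]]  # rotation by 90 degrees

# A^4 = I
P = A
for _ in range(3):
    P = matmul(P, A)
assert P == [[1, 0], [0, 1]]

# eigenvalues from the characteristic polynomial t^2 - (tr A) t + det A
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

# both eigenvalues are 4th roots of unity (roots of t^4 - 1) and they are
# distinct, so the minimal polynomial has distinct roots: A is
# diagonalizable over C
assert abs(lam1 ** 4 - 1) < 1e-12 and abs(lam2 ** 4 - 1) < 1e-12
assert abs(lam1 - lam2) > 1
```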
Martin Argerami
22,857
<p>Here is a different argument that does not use characteristic or minimal polynomials. </p> <p>We have that $A$ is similar to its Jordan form, i.e., $A=SJS^{-1}$. Then $$ J^m=(S^{-1}AS)^m=S^{-1}A^mS=S^{-1}S=I. $$</p> <p>Now, to have $J^m=I$, we need that each Jordan block $J_k(\lambda)$ satisfies $J_k(\lambda)^m=I_k$. This can only happen when $k=1$ (see below), which forces $J$ to be diagonal, and so $A$ is diagonalizable. </p> <hr> <p>Here is the proof that $J_k(\lambda)^m\ne I$ if $k\geq2$. The case $\lambda=0$ is trivial because the diagonal entries will always be zero. When $\lambda\ne0$, we can prove by induction on $m$ that $[J_k(\lambda)^m]_{12}$ (the $1,2$ entry of $J_k(\lambda)^m$) is $m\lambda^{m-1}$. When $m=1$, we have $J_k(\lambda)_{12}=1=m\lambda^{m-1}$. Assume as inductive hypothesis that $[J_k(\lambda)^m]_{12}=m\lambda^{m-1}$ and $[J_k(\lambda)^m]_{rr}=\lambda^m$ for all $r$. Then \begin{align} [J_k(\lambda)^{m+1}]_{12}&amp;=[J_k(\lambda)\,J_k(\lambda)^{m}]_{12} =\sum_{r=1}^k[J_k(\lambda)]_{1r}\,[J_k(\lambda)^m]_{r2}\\ \ \\ &amp;=[J_k(\lambda)]_{11}\,[J_k(\lambda)^m]_{12}+[J_k(\lambda)]_{12}\,[J_k(\lambda)^m]_{22}\\ \ \\ &amp;=\lambda\,m\lambda^{m-1}+1\,\lambda^m=(m+1)\lambda^m. \end{align}</p>
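The key identity in the second half, $[J_k(\lambda)^m]_{12}=m\lambda^{m-1}$, is easy to verify numerically for a $2\times 2$ block (editorial sketch, plain Python):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

lam = 3
J = [[lam, 1], [0, lam]]  # the Jordan block J_2(lambda)

P = J
for m in range(2, 9):
    P = matmul(P, J)                    # P = J^m
    assert P[0][0] == lam ** m          # diagonal entries are lambda^m
    assert P[0][1] == m * lam ** (m - 1)  # (1,2) entry m*lambda^(m-1) != 0
```

Since the $(1,2)$ entry never vanishes for $\lambda\ne 0$, no power of the block can equal the identity, exactly as the answer argues.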
4,351,725
<p>This is what the solution says:</p> <p>Since each string of 4 digits are independent, having 2018 in a string has probability of <span class="math-container">$(1/10)^4$</span></p> <p>By geometric distribution, expected value of digits to obtain 2018 in a string would be <span class="math-container">$10^4$</span></p> <p>So we need to see how many powers do we need to write before we reach <span class="math-container">$10^4$</span> units.</p> <p>Since <span class="math-container">$2^{10} \approx 1000 $</span>, we can say <span class="math-container">$2^n$</span> has <span class="math-container">$0.4$</span> digits.</p> <p>So the number of digits is:</p> <p><span class="math-container">$\sum_{r=1}^{n}0.3r = 0.2n^2$</span></p> <p>Thus <span class="math-container">$0.2n^2 = 10^4$</span> where <span class="math-container">$n \approx 231$</span></p> <p>For the last part I don't get why the solution put summation of <span class="math-container">$0.3r$</span>. According to the explanation above, isn't it correct just to solve the equation</p> <p><span class="math-container">$0.4n = 10^4$</span> because we need to see whether <span class="math-container">$2^n$</span> has <span class="math-container">$10^4$</span> digits?</p> <p>summation implies the summation of the digits, but that means writing all numbers <span class="math-container">$2^n$</span> side by side, which the problem did not intended.</p> <p>Hope anyone would shed some light with this one.</p>
gnasher729
137,175
<p>The digits of 2^n are pseudo-random (hard to predict, behaving like random digits except that they are of course deterministic), except for the first and last few digits. So if you calculate values 2^n until you have produced 10,000 digits, there is a good chance that four consecutive digits equal 2018, or any other four-digit sequence. Of course it may happen a bit earlier or later.</p> <p>2^n has roughly 0.3n digits. The powers 2^k for k &lt;= n have a total of about 0.15n^2 digits. You can reasonably expect a solution for some k &lt;= sqrt(10,000 / 0.15), that is, k &lt;= 260, at which point 2^k would have about 78 digits.</p> <p>A much smaller solution (say k &lt;= 100) or having to calculate many more powers (say first k &gt;= 1000) is possible, but unlikely.</p> <p>And as said elsewhere, there are infinitely many k such that 2^k even <em>starts</em> with 2018.</p>
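The heuristic can be tested by brute force (editorial sketch, plain Python); the code only asserts that a hit exists well below the search limit, making no claim about its exact position:

```python
def first_power_of_2_containing(pattern, limit=2000):
    # smallest n <= limit such that the digit string `pattern`
    # appears in the decimal expansion of 2^n
    value = 1
    for n in range(1, limit + 1):
        value *= 2
        if pattern in str(value):
            return n
    return None

n = first_power_of_2_containing("2018")
assert n is not None          # a hit occurs long before n = 2000
assert "2018" in str(2 ** n)
```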
118,486
<p>I am seeking a deeper understanding of the representation of set-based objects in terms of Boolean algebras.</p> <p>Let $\wp(A)$ be the set of subsets of a set $A$. A relation $R \subseteq A \times B$ generates two operators $pre: \wp(B) \to \wp(A)$ and $post: \wp(A) \to \wp(B)$ where $pre$ maps a set $X \subseteq A$ to its preimage with respect to $R$ and $post$ maps $X$ to its image.</p> <p>In the standard Stone duality between the category of sets and Boolean algebras, a function is represented using the preimage operator. The preimage operator generated by a function turns out to be a Boolean algebra homomorphism but the image operator may not be a homomorphism. I see this as one reason to choose the preimage to represent a function. My first question is: Are there other reasons to choose the preimage representation? I feel like there should be something deeper going on. </p> <p>If we leave the setting of functions, the preimage operator generated by a relation isn't necessarily a Boolean algebra homomorphism. So, in the representation of a system of relations over a set by a Boolean algebra with operators (in the sense of Jonsson and Tarski) I see no specific motivation for using the preimage, as opposed to image operator. I see why we want to be consistent in convention, and also use the preimage because of its connection to the semantics of modal logic. However, this appears to be an aesthetic choice. My second question is: Is there a specific reason to choose the preimage, rather than image operator, when representing relations as Boolean algebras with operators?</p>
Qiaochu Yuan
290
<p>You should think of the preimage as taking the pullback of $\mathbb{F}_2$-valued functions. For any topological space $X$, the space of continuous functions $X \to \mathbb{F}_2$ may be identified with the Boolean algebra / ring of clopen subsets of $X$, the pullback of such a function is another such function, and taking pullbacks corresponds to taking preimages (thinking of subsets via their indicator functions). The natural thing to do with functions is pull them back, both in logic and in geometry. Moreover, pulling back functions is a ring homomorphism, and even in contexts where pushing forward functions can also be done it usually does not preserve multiplication. </p> <p><strong>Edit:</strong> As for relations instead of sets, taking the <a href="http://en.wikipedia.org/wiki/Inverse_relation" rel="nofollow">transpose</a> of a relation gives an equivalence of categories $\text{Rel} \cong \text{Rel}^{op}$ ($\text{Rel}$ is a <a href="http://en.wikipedia.org/wiki/Dagger_category" rel="nofollow">dagger category</a>) which exchanges inverse image and image, so in that sense there isn't a strong reason to prefer one over the other for relations. </p>
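The asymmetry the answer describes can be made concrete with a tiny editorial example in plain Python: the preimage operator of a function preserves meets and complements, while the image operator can fail to preserve meets.

```python
# a non-injective function f : {0,1,2} -> {'a','b'}
f = {0: 'a', 1: 'a', 2: 'b'}
DOMAIN, CODOMAIN = set(f), set(f.values())

def preimage(S):
    return {x for x in DOMAIN if f[x] in S}

def image(X):
    return {f[x] for x in X}

# preimage is a Boolean algebra homomorphism: it preserves meets...
S, T = {'a'}, {'b'}
assert preimage(S & T) == preimage(S) & preimage(T)
# ...and complements
assert preimage(CODOMAIN - S) == DOMAIN - preimage(S)

# image need not preserve meets
A, B = {0, 2}, {1, 2}
assert image(A & B) != image(A) & image(B)
```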
3,780,781
<p>How can I shift selected columns of a matrix (column-wise) by matrix multiplication?</p> <p>Let's say I have a matrix like this: <span class="math-container">$$A=\begin{pmatrix}1 &amp; 0 &amp; 0\\\ 1 &amp; 0 &amp; 1 \\\ 1&amp;1&amp;1\end{pmatrix}$$</span></p> <p>and I want to shift the first column as well as the third column one step (or multiple steps) vertically:</p> <p><span class="math-container">$$ B = \begin{pmatrix}0 &amp; 0 &amp; 0\\\ 1 &amp; 0 &amp; 0 \\\ 1&amp;1&amp;1\end{pmatrix}$$</span></p> <p>How should my matrix S look to have <span class="math-container">$$AS=B?$$</span></p> <p>With a shift matrix (<a href="https://en.wikipedia.org/wiki/Shift_matrix" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Shift_matrix</a>) the whole matrix is shifted in one &quot;direction&quot;. That is not what I am looking for. Rather, I want to control how many steps specific columns are shifted. To illustrate the problem, maybe it is easier to look at it as a First-In-First-Out system, or as balls falling downwards, where a &quot;1&quot; stands for a ball and &quot;0&quot; for no ball. When I remove the bottom &quot;ball&quot;, gravity makes the upper balls fall down. E.g. removing ball B31 makes ball B21 come into position P31 and B11 into position P21. Also, removing B33 makes B21 get into position P31.</p> <p>I am looking for more general rules for how to do this than the ones shown in (<a href="https://en.wikipedia.org/wiki/Shift_matrix" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Shift_matrix</a>)</p>
A learner
737,804
<p><strong>Hint</strong>: as <span class="math-container">$A$</span> is invertible, just find <span class="math-container">$A^{-1}$</span>.</p> <p>Then <span class="math-container">$S=A^{-1} B$</span>.</p> <p>Can you continue?</p>
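Following the hint (editorial sketch, plain Python, exact integer arithmetic): $A^{-1}$ can be found by solving $Ax=e_i$ column by column; the inverse below was computed by hand and is verified in the code before being used.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 0, 0], [1, 0, 1], [1, 1, 1]]
B = [[0, 0, 0], [1, 0, 0], [1, 1, 1]]

# A is invertible (det A = -1); its inverse, found by solving A x = e_i:
A_inv = [[1, 0, 0], [0, -1, 1], [-1, 1, 0]]
assert matmul(A, A_inv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

S = matmul(A_inv, B)
assert matmul(A, S) == B  # S = A^{-1} B indeed satisfies AS = B
```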
user
505,767
<p>We can operate by left multiplication which corresponds to operation on the rows, notably in this case we exchange first and second rows by the permutation matrix</p> <p><span class="math-container">$$S_1 = \begin{pmatrix}0 &amp; 1 &amp; 0\\\ 1 &amp; 0 &amp; 0 \\\ 0&amp;0&amp;1\end{pmatrix}$$</span></p> <p>then we erase the first row by</p> <p><span class="math-container">$$S_2 = \begin{pmatrix}0 &amp; 0 &amp; 0\\\ 0 &amp; 1 &amp; 0 \\\ 0&amp;0&amp;1\end{pmatrix}$$</span></p> <p>therefore</p> <p><span class="math-container">$$SA=B$$</span></p> <p><span class="math-container">$$S=S_2S_1=\begin{pmatrix}0 &amp; 0 &amp; 0\\\ 0 &amp; 1 &amp; 0 \\\ 0&amp;0&amp;1\end{pmatrix}\begin{pmatrix}0 &amp; 1 &amp; 0\\\ 1 &amp; 0 &amp; 0 \\\ 0&amp;0&amp;1\end{pmatrix}=\begin{pmatrix}0 &amp; 0 &amp; 0\\\ 1 &amp; 0 &amp; 0 \\\ 0&amp;0&amp;1\end{pmatrix}$$</span></p> <p>Operating by right multiplication which corresponds to column operation we exchange first and third column by</p> <p><span class="math-container">$$S_1 = \begin{pmatrix}0 &amp; 0 &amp; 1\\\ 0 &amp; 1 &amp; 0 \\\ 1&amp;0&amp;0\end{pmatrix}$$</span></p> <p>then we copy second column in the third one by</p> <p><span class="math-container">$$S_2 = \begin{pmatrix}1 &amp; 0 &amp; 0\\\ 0 &amp; 1 &amp; 1 \\\ 0&amp;0&amp;0\end{pmatrix}$$</span></p> <p>therefore</p> <p><span class="math-container">$$AS=B$$</span></p> <p><span class="math-container">$$S=S_1S_2=\begin{pmatrix}0 &amp; 0 &amp; 1\\\ 0 &amp; 1 &amp; 0 \\\ 1&amp;0&amp;0\end{pmatrix}\begin{pmatrix}1 &amp; 0 &amp; 0\\\ 0 &amp; 1 &amp; 1 \\\ 0&amp;0&amp;0\end{pmatrix}=\begin{pmatrix}0 &amp; 0 &amp; 0\\\ 0 &amp; 1 &amp; 1 \\\ 1&amp;0&amp;0\end{pmatrix}$$</span></p>
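Both factorizations can be verified mechanically (editorial check, plain Python):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 0, 0], [1, 0, 1], [1, 1, 1]]
B = [[0, 0, 0], [1, 0, 0], [1, 1, 1]]

S_row = [[0, 0, 0], [1, 0, 0], [0, 0, 1]]  # S = S2 S1 (row operations)
S_col = [[0, 0, 0], [0, 1, 1], [1, 0, 0]]  # S = S1 S2 (column operations)

assert matmul(S_row, A) == B  # SA = B
assert matmul(A, S_col) == B  # AS = B
```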
4,244,983
<p>In maths it is assumed that every statement is either true (t) or false (f). When proving theorems such as: For all vector spaces <span class="math-container">$V$</span> over a field <span class="math-container">$K$</span> and <span class="math-container">$\lambda \in K$</span> it holds that <span class="math-container">$f_{\lambda}:V \to V, v \mapsto \lambda v$</span> is a linear map. When proving such a statement I usually did not think of truth values at all, the steps in the proof rather felt &quot;intuitive&quot; and were thus correct to me. However, the formal reason why a proof is correct is always connected to truth values isn't it? This would mean that a formal proof would then look like this:</p> <p>Let <span class="math-container">$V$</span> be a vector space over <span class="math-container">$K$</span> and <span class="math-container">$\lambda \in K$</span> (this means that <span class="math-container">$V$</span> is a vector space is a true statement, as well as <span class="math-container">$K$</span> is a field and <span class="math-container">$\lambda \in K$</span> are true as well). Since <span class="math-container">$V$</span> is a vector space, it is by definition true that <span class="math-container">$V$</span> is a set. Thus <span class="math-container">$f_{\lambda} \subseteq V \times V$</span> with <span class="math-container">$f_{\lambda}:=\{(v,\lambda v) \ | \ v \in V\}$</span> is a subset (this is true by set theoretic axioms). Furthermore it holds that if <span class="math-container">$v_1=v_2$</span>, then <span class="math-container">$\lambda v_1 = \lambda v_2$</span> since <span class="math-container">$V$</span> is equipped with a well defined multiplication function and by definition of a function this implication is true. Therefore <span class="math-container">$f_{\lambda}$</span> is a function. 
Since <span class="math-container">$f_{\lambda}$</span> is a function, the element <span class="math-container">$f_{\lambda}(v_1+v_2)$</span> exists for all <span class="math-container">$v_1,v_2 \in V$</span>, and it is true that <span class="math-container">$f_{\lambda}(v_1+v_2)=f_{\lambda}(v_1)+f_{\lambda}(v_2)$</span> using the axioms of a vector space, which were assumed to be true. Thus the statement is true. Here I am also unsure whether it is necessary to know that <span class="math-container">$f_{\lambda}(v_1+v_2)$</span> exists in order to proceed in the proof, and what harm it would cause if it did not. Then the statement would be somewhat vacuous, I suppose.</p> <p>In textbooks I don't usually see authors saying that each step in their proof is true, but I am correct in saying that this is what is meant when proving things, right? I suspect that when authors write that &quot;something holds&quot; they also mean that the truth value of the statement is (t). Another question is whether it is good to think this way. I would say that, when one is unsure whether one's proof is correct, this might be helpful; however, it makes proofs a lot longer and hurts readability to permanently write down that something is true.</p>
Léreau
351,999
<p>It seems to me that, so far, you have encountered the (metaphorically speaking) coin that represents <em>logic and proofs</em>, but you haven't yet seen, or been shown, that it really has two individual sides.</p> <p>There is a deductive and a semantic side to it.</p> <p>On the <strong>deductive side</strong>, one settles on a set of inference rules which are deemed valid logical principles. For example (where <span class="math-container">$A, B, C$</span> are any propositions):</p> <ul> <li>If <span class="math-container">$A$</span> and <span class="math-container">$A \to B$</span> then we can conclude <span class="math-container">$B$</span>.</li> <li>If we want to show <span class="math-container">$C$</span>, we can instead try to show <span class="math-container">$A \to C$</span> and <span class="math-container">$A$</span>.</li> <li>If <span class="math-container">$A$</span> and <span class="math-container">$B$</span> can be shown then <span class="math-container">$A \land B$</span>.</li> <li>If <span class="math-container">$A \land B$</span> then we can conclude <span class="math-container">$A$</span>.</li> </ul> <p><a href="https://en.wikipedia.org/wiki/List_of_rules_of_inference" rel="nofollow noreferrer">And there are a bunch more</a>. These can then be used to give proofs, and in the context of mathematics these are usually presented in the form of text. Let's write <span class="math-container">$\vdash P$</span> when we have given a proof like this for the proposition <span class="math-container">$P$</span>.</p> <p>On the <strong>semantic side</strong>, we proceed differently. We settle on assigning truth values true <span class="math-container">$T$</span> or false <span class="math-container">$F$</span> to propositions. 
If the proposition is built from other propositions by combining them via a logical connective like <span class="math-container">$\to, \lor$</span> or <span class="math-container">$\land$</span>, we specify how the truth value of the composite proposition is defined. This is usually presented in the form of truth tables; like here in the case of <span class="math-container">$\land$</span> :</p> <p><span class="math-container">$$\boxed{\begin{array}{cc|c} A &amp; B &amp; A \land B \\ \hline T &amp; T &amp; T \\ T &amp; F &amp; F \\ F &amp; T &amp; F \\ F &amp; F &amp; F \end{array}}$$</span> Using these we can figure out the truth value of any composite proposition <span class="math-container">$P$</span> when we are given the truth values of the propositions it is built up from (let's call this the set <span class="math-container">$\mathsf{atoms}(P)$</span>). If <span class="math-container">$P$</span> is true no matter the truth values of the propositions in <span class="math-container">$\mathsf{atoms}(P)$</span>, then we call <span class="math-container">$P$</span> <strong>semantically true</strong> and let's write <span class="math-container">$\vDash P$</span> to express this. This is why you usually make a table consisting of all the propositions in <span class="math-container">$\mathsf{atoms}(P)$</span>, try out all their possible truth-value assignments over several rows, and check if <span class="math-container">$P$</span> always gets <span class="math-container">$\mathsf{true}$</span> as a result.</p> <p>And now, in the best-case scenario, when we have chosen the right deduction rules and the right definition of semantics, we get a wonderful connection between the two sides of the coin:</p> <blockquote> <p><span class="math-container">$\vdash P ~~\Longleftrightarrow ~~\vDash P$</span></p> </blockquote> <p>Or in words: There is a deductive proof of <span class="math-container">$P$</span> if and only if it is semantically true. 
And by this connection there is no need to repeat writing &quot;which is true&quot; or similar statements during a text-style proof, since we are on one side of the coin then.</p> <hr /> <p>To wrap this up, I will give an example by proving the proposition <span class="math-container">$(A \to B) \lor (B \to C)$</span> in both ways.</p> <p>First the more commonly known way, using a truth table:</p> <p><span class="math-container">$$\boxed{\begin{array}{ccc|ccc} A &amp; B &amp; C &amp; A \to B &amp; B \to C &amp; (A \to B) \lor (B \to C) \\ \hline T &amp; T &amp; T &amp; T &amp; T &amp; T \\ T &amp; T &amp; F &amp; T &amp; F &amp; T \\ T &amp; F &amp; T &amp; F &amp; T &amp; T \\ T &amp; F &amp; F &amp; F &amp; T &amp; T \\ F &amp; T &amp; T &amp; T &amp; T &amp; T \\ F &amp; T &amp; F &amp; T &amp; F &amp; T \\ F &amp; F &amp; T &amp; T &amp; T &amp; T \\ F &amp; F &amp; F &amp; T &amp; T &amp; T \end{array}}$$</span> Next with a deductive text-style proof where I implicitly use the inference rules for classical propositional logic:</p> <blockquote> <p>We need to show <span class="math-container">$(A \to B) \lor (B \to C)$</span>. By the law of excluded middle we know that either <span class="math-container">$B$</span> or <span class="math-container">$\neg B$</span>.</p> <ul> <li>In the case that we have <span class="math-container">$H_1 : B$</span> we can show <span class="math-container">$A \to B$</span>, since assuming <span class="math-container">$A$</span>, we can already show <span class="math-container">$B$</span> because of <span class="math-container">$H_1$</span>. 
Because of this we can in particular show <span class="math-container">$(A \to B) \lor (B \to C)$</span>.</li> <li>If we have <span class="math-container">$H_2 : \neg B$</span>, then we can prove <span class="math-container">$B \to C$</span>, since given <span class="math-container">$B$</span> we have a contradiction to <span class="math-container">$H_2$</span> and therefore <a href="https://en.wikipedia.org/wiki/Principle_of_explosion" rel="nofollow noreferrer">by explosion</a> (always sounds so dramatic!) we have <span class="math-container">$C$</span>. And again, this also shows <span class="math-container">$(A \to B) \lor (B \to C)$</span>. <span class="math-container">$\hspace{0.3em} \Box$</span></li> </ul> </blockquote> <hr /> <p>If you want a more thorough introduction to these aspects, I can really recommend the books by Peter Smith on Logic. Some are freely available <a href="https://www.logicmatters.net/ifl/" rel="nofollow noreferrer">on his website</a>.</p>
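The semantic side of the example also lends itself to a mechanical check (editorial sketch, plain Python): enumerate all eight truth-value assignments and confirm the proposition always comes out true.

```python
from itertools import product

def implies(a, b):
    # truth table of the material conditional a -> b
    return (not a) or b

# (A -> B) v (B -> C) is a tautology: true under every assignment
assert all(
    implies(A, B) or implies(B, C)
    for A, B, C in product([True, False], repeat=3)
)
```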
21,182
<p>In writing my senior thesis I met the following problem: sometimes I have an intuition about some mathematical statement, yet I find it extremely painful trying to put this intuition into precise form on paper. In particular, it is very hard to specify the correct conditions for a statement.</p> <p>Does anyone have some tips for me in doing so? How do you usually do it?</p> <p>Let me elaborate a bit further. I believe it is an experience that any mature mathematician must have gone through. We want to get to D, and we need to go through A, B, C. But A cannot be stated clearly until one knows B, B cannot be stated clearly until one knows A and C, and C cannot be stated clearly until one knows B. But we sort of have a vague picture of A, B, C in our mind. It sounds very stupid, but I don't know where to start.</p> <p>I wonder if this question is too vague for MO. So please close it if you see fit.</p>
Miguel
5,316
<p>I would say the "correct conditions" for a statement are discovered by trying to prove it and seeing what you need to be true in order for the statement to follow.</p> <p>Proofs are generally discovered in the opposite direction than they are written because proofs are written for elegance and conciseness, not for teaching purposes. The last thing you discover is the first step of your elegantly-presented proof. Which is why textbooks can be very confusing to students. They understand the statement of the theorem and then the proof starts with "Let blah blah be " and the student is left wondering where that came from. Well, it came from spending a long time working on the proof... and the process of discovery is almost never shown.</p> <p>Sometimes the very statement of a theorem is confusing because it is not clear where they got the maddeningly detailed conditions of the theorem. They got them by starting with the conclusion and working out the proof. </p> <p>And sometimes the basic definitions of an entire theory are obscure and baroque, and only much later one realises that the definition is just what one needs for a certain big theorem to be true.</p>
cheater
5,251
<p>My first intuition is always to talk to somebody. This can help, since the speech center is a different part of the brain: it works differently and taps different parts of the brain. In that way, even speaking out loud what you want to talk about can supply you with a surge of inspiration. Don't forget that written words are just a way to store the sounds we communicate with.</p> <p>That said, a second opinion is always very good. On the one hand someone smarter might have a ready answer; on the other hand, explaining the material (maybe from the ground up, if that's not too much) to someone who doesn't know anything about it yet can give you a lot of inspiration.</p> <p>Try thinking about the object at hand in terms of real-life words. Ask yourself 'what happens if it does X? What if it does Y?', trying to learn a bit more about the object at hand - just like a physicist could pick up a mechanical contraption, look at it from many angles, and try turning a gear here and there, we have to do that with mathematical objects. Most of this is done on paper, using formulas and words.</p> <p>Sometimes you can't put together theorems because the notation you came up with isn't intuitive enough - try thinking of a system of notation that, limited to your current point of interest, will be coherent, complete, and intuitive.</p> <p>However, sometimes some things simply will not 'click' unless seen or heard: in signal processing you sometimes <em>must</em> listen to (some form of) the signals or functions at hand to understand what's going on. Fourier analysis helps visualize, but it doesn't do justice to the information you can get. In real analysis the picture of a saddle point cannot be replaced by any amount of writing - you just have to see it. Similarly an animation cannot be replaced with a few pictures. 
Example: I would never have noticed this effect had the display not been animated: <a href="https://mathoverflow.net/questions/20837/what-is-this-effect-in-fourier-additive-synthesis-called">MO: effect in additive resynthesis</a></p> <p>Try visualizing your mathematical objects. Ask a fellow geometer to help you figure out a pretty illustration. Visualizing is important because, again, it taps into different parts of our brain, all of which can work for the cause rather than sitting around doing nothing. And they work better when they're looking at something pretty rather than ugly! If you have already visualized the data/theorem, try visualizing it <em>differently</em>. Maybe you're thinking of some specific representative of the kind of object your theorem talks about, while the theorem visualizes better on a different one? (Example: intermediate value theorems don't visualize too well on straight-line functions.)</p> <p>Another possibility is that it's "writer's block". Just forget about it for some time, and get back to it later. Stop thinking about it: have a full day without working on maths, meditate, take a nap, go to the movies, sleep on it, have a good, long, enjoyable dinner, relax listening to some music.</p>
2,495,176
<p>For how many positive values of $n$ are both $\frac n3$ and $3n$ four-digit integers?</p> <p>Any help is greatly appreciated. I think the smallest n value is 3000 and the largest n value is 3333. Does this make sense?</p>
Olimjon
300,977
<p>Your answer makes sense.</p> <p>The minimum four-digit number is $1000$ and the maximum is $9999$.</p> <p>$$3n_{\max} = 9999 \implies n_{\max}=3333$$</p> <p>$$\frac{n_{\min}}{3}=1000 \implies n_{\min}=3000$$</p> <p>Keep in mind that $n$ must be divisible by $3$. So the answer is: $$\frac{3333-3000}{3}+1=112$$</p>
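As a quick sanity check (my addition, not part of the original answer), a brute-force count in Python confirms both the bounds and the final count of 112:

```python
# Brute-force count of positive n with n/3 and 3n both four-digit integers.
valid = [n for n in range(1, 100000)
         if n % 3 == 0                       # n/3 must be an integer
         and 1000 <= n // 3 <= 9999          # n/3 has four digits
         and 1000 <= 3 * n <= 9999]          # 3n has four digits
print(min(valid), max(valid), len(valid))    # 3000 3333 112
```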
2,495,176
<p>For how many positive values of $n$ are both $\frac n3$ and $3n$ four-digit integers?</p> <p>Any help is greatly appreciated. I think the smallest n value is 3000 and the largest n value is 3333. Does this make sense?</p>
Piyush Agrawal
497,053
<p>There are $112$ positive values of $n$ for which both $n/3$ and $3n$ are four-digit integers.</p>
2,299,678
<p>Question:</p> <p>Assume $x, y$ are elements of a field $F$. Prove that if $xy = 0$, then $x = 0$ or $y = 0$.</p> <p>My thinking:</p> <p>I am not sure how to prove this. <strong>I can only use basic field axioms.</strong> Should I assume that both x and y are not equal to 0 and then prove by contradiction or should I assume one of x and y is not 0 and then prove the other one has to equal 0?</p> <p>Thanks </p>
baharampuri
50,080
<p>In a field every nonzero element has a multiplicative inverse. If $x \neq 0$ there is an inverse, say $t$, and $xy =0$, so $t(xy)=t\cdot 0=0$, i.e. $(tx)y=0$, i.e. $1 \cdot y=0$, which shows $y=0$. Similarly, if $y$ is not zero we can show $x$ is zero using the multiplicative inverse of $y$.</p>
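The argument can be sanity-checked in a small concrete field such as $\mathbb{Z}_7$ (my illustration, not part of the original answer); the inverse $t$ is computed here via Fermat's little theorem:

```python
p = 7  # Z_p is a field when p is prime
for x in range(1, p):
    t = pow(x, p - 2, p)           # multiplicative inverse of x (Fermat)
    assert (x * t) % p == 1        # t really is 1/x
    for y in range(1, p):
        assert (x * y) % p != 0    # no zero divisors: xy = 0 forces x = 0 or y = 0
print("no zero divisors in Z_%d" % p)
```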
101,078
<p>While reading through several articles concerned with mathematical constants, I kept on finding things like this:</p> <blockquote> <p>The continued fraction for $\mu$(Soldner's Constant) is given by $\left[1, 2, 4, 1, 1, 1, 3, 1, 1, 1, 2, 47, 2, ...\right]$. </p> <p>The <strong>high-water marks</strong> are 1, 2, 4, 47, 99, 294, 527, 616, 1152, ... , which occur at positions 1, 2, 3, 12, 70, 126, 202, 585, 1592, ... . </p> </blockquote> <p>(copied from <a href="http://mathworld.wolfram.com/SoldnersConstant.html" rel="nofollow">here</a>)</p> <p>I didn't find a definition of <strong>high-water marks</strong> in the web, so I assume that it's a listing of increasing largest integers, while going through the continued fraction expansion.</p> <p>Is this correct and is there special meaing behind them?</p>
Jonas Kibelbek
1,461
<p>The term comes from the common practice of marking bridges or walls with the water level and date during big floods. So, you can look at these high-water marks to see that in 1985, the river flooded to 5 feet above its current level. If you come back a decade later, you may find that a new high-water mark has surpassed it.</p> <hr> <p>I haven't seen the term "high-water marks" for continued fractions before (but I like the name). </p> <p>I'm sure your interpretation is correct: the $n$th coefficient $a_n$ is a <em>high-water mark</em> if $a_n&gt;a_i$ for $i&lt;n$. </p> <p>(Maybe it also qualifies if $a_n \geq a_i$ for $i&lt;n$? You would have to find a definition or example to settle that question for sure.)</p> <p>One significance of a high-water mark at position $n$ is that the $(n-1)$st and $n$th convergents will be big improvements in accuracy. For example, $\pi$ is given by [3;7,15,1,292,...]. In the step before including 292, you go from convergent 333/106=3.14150... to 355/113=3.1415929... When you include 292, you get 103993/33102=3.1415926530..., gaining 2 or 3 more accurate digits each time.</p>
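To make both notions concrete, here is a small Python sketch (mine, not from the answer): it computes convergents from the first few continued-fraction coefficients of $\pi$ quoted above, and extracts high-water marks, interpreted strictly ($a_n > a_i$ for all $i < n$), from the Soldner coefficient list quoted in the question:

```python
from fractions import Fraction

def convergents(coeffs):
    """Convergents h_n/k_n via the standard recurrence h_n = a_n h_{n-1} + h_{n-2}."""
    h0, k0, h1, k1 = 1, 0, coeffs[0], 1
    out = [Fraction(h1, k1)]
    for a in coeffs[1:]:
        h0, k0, h1, k1 = h1, k1, a * h1 + h0, a * k1 + k0
        out.append(Fraction(h1, k1))
    return out

def high_water_marks(coeffs):
    """(1-based position, value) pairs where a coefficient exceeds all earlier ones."""
    marks, best = [], 0
    for i, a in enumerate(coeffs, start=1):
        if a > best:
            marks.append((i, a))
            best = a
    return marks

pi_cf = [3, 7, 15, 1, 292]                              # from the answer
print(convergents(pi_cf)[2:])                           # 333/106, 355/113, 103993/33102
soldner_cf = [1, 2, 4, 1, 1, 1, 3, 1, 1, 1, 2, 47, 2]   # from the question
print(high_water_marks(soldner_cf))                     # (1,1), (2,2), (3,4), (12,47)
```

The marks and positions match the sequences quoted in the question.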
220,946
<p>I am a newcomer to Mathematica and I am trying to solve this differential equation:</p> <pre><code>s = NDSolve[{y'[x] == -4/3*y[x]/x + 4 a/(3 x^2) + 4/(9 x^3) a b - 8/(9 x^3) c, y[0.01] == 20}, y, {x, 0.01, 10}] </code></pre> <p>I am getting the error message "Encountered non-numerical value for a derivative at x == 0.01" and I don't know what I am doing wrong. Could someone help me?</p>
kglr
125
<p><strong>Update:</strong> Inspired by @thorimur's excellent answer, an alternative way to get a maximal matching:</p> <pre><code>ClearAll[swapRows] swapRows = #[[SparseArray`MaximalBipartiteMatching[# /. Except[List, _Symbol] -&gt; 1][[All, 1]]]] &amp;; </code></pre> <p><strong><em>Examples:</em></strong></p> <pre><code>MatrixForm @ swapRows[A] </code></pre> <p><a href="https://i.stack.imgur.com/x9DIH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x9DIH.png" alt="enter image description here"></a></p> <p>With <code>mat</code> as the 30X30 matrix in OP:</p> <pre><code>Diagonal @ swapRows[mat] </code></pre> <blockquote> <pre><code> {1, 1, 1, m4m21, m5m19, m6m26, m7m7, m8m8, m9m9, 1, 1, 1, m10m13, m11m14, m12m15, 1, 1, 1, m5m19, m4m20, m13m21, m8m19, m9m20, 1, 1, 1, 1, m10m25, m12m26, m11m27} </code></pre> </blockquote> <pre><code>MatrixForm[MapIndexed[If[Equal @@ #2, Style[#, Red], #] &amp;, swapRows[mat], {2}]] </code></pre> <p><a href="https://i.stack.imgur.com/UIGgL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UIGgL.png" alt="enter image description here"></a></p> <pre><code>SeedRandom[1] rm = RandomChoice[{5, 1, 1, 1} -&gt; {0, a, b, c}, {10, 10}]; Row[MatrixForm /@ {rm, ReplacePart[#, {i_, i_} :&gt; Style[#[[i, i]], Red, Bold]] &amp; @ swapRows[rm]}] </code></pre> <p><a href="https://i.stack.imgur.com/BSQXR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BSQXR.png" alt="enter image description here"></a></p> <pre><code>SeedRandom[1] im = RandomSample[IdentityMatrix[15]]; Row[MatrixForm /@ {im, ReplacePart[#, {i_, i_} :&gt; Style[#[[i, i]], Red, Bold]] &amp; @ swapRows[im]}] </code></pre> <p><a href="https://i.stack.imgur.com/szMPt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/szMPt.png" alt="enter image description here"></a></p> <p><strong>Original answer:</strong></p> <p>The following methods work for the input matrix <code>A</code> in OP:</p> <pre><code>A = {{a, 0, 0, 0, 0}, 
{0, 0, 0, d1, 0}, {0, 0, 0, 0, e1}, {0, b, 0, d2, 0}, {0, 0, c, 0, e2}}; B = SortBy[LengthWhile[#, # == 0 &amp;] &amp;] @ A; MatrixForm @ B </code></pre> <p><a href="https://i.stack.imgur.com/pEvPB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pEvPB.png" alt="enter image description here"></a></p> <p>Also</p> <pre><code>B2 = SortBy[-FromDigits[# /. Except[List, _Symbol] -&gt; 1] &amp;] @ A; </code></pre> <p>and</p> <pre><code>B3 = SortBy[FirstPosition[#, Except[0], Heads -&gt; False] &amp;] @ A; B == B2 == B3 </code></pre> <blockquote> <pre><code>True </code></pre> </blockquote>
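The `SortBy`/`LengthWhile` idea — reorder rows by the position of their first nonzero entry — translates directly to Python. This is my own sketch of the same trick on an analogue of `A` (zeros as `0`, symbols as strings); it works here because each row has a distinct first-nonzero position, though not for arbitrary matrices:

```python
A = [["a", 0, 0, 0, 0],
     [0, 0, 0, "d1", 0],
     [0, 0, 0, 0, "e1"],
     [0, "b", 0, "d2", 0],
     [0, 0, "c", 0, "e2"]]

def first_nonzero(row):
    # Index of the first nonzero entry, like LengthWhile[#, # == 0 &]
    return next(i for i, v in enumerate(row) if v != 0)

B = sorted(A, key=first_nonzero)        # analogue of SortBy[LengthWhile[...]]
diag = [B[i][i] for i in range(len(B))]
print(diag)                             # ['a', 'b', 'c', 'd1', 'e1'] -- all nonzero
```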
220,946
<p>I am a newcomer to Mathematica and I am trying to solve this differential equation:</p> <pre><code>s = NDSolve[{y'[x] == -4/3*y[x]/x + 4 a/(3 x^2) + 4/(9 x^3) a b - 8/(9 x^3) c, y[0.01] == 20}, y, {x, 0.01, 10}] </code></pre> <p>I am getting the error message "Encountered non-numerical value for a derivative at x == 0.01" and I don't know what I am doing wrong. Could someone help me?</p>
thorimur
63,584
<p>The question can be rephrased as:</p> <blockquote> <p>Can we assign an index to each row such that 1) the row contains a nonzero entry at that index and 2) no other row is assigned that index?</p> </blockquote> <p>Each row has a set of what we might call <em>relevant indices</em>, that is, the set of indices at which it is nonzero. (E.g. the list <code>{1,0,0,1,0}</code> has relevant indices <code>{1,4}</code>.) Armed with our sets of relevant indices, we can again rephrase the question as:</p> <blockquote> <p>Can we choose an element (index) from each set such that we do not choose the same element from any other set?</p> </blockquote> <p>Turns out: this is a bad problem to brute-force. We find that your matrix leads to <code>31 990 087 039 589 244 179 841 024</code> possibilities to check if we try to use, say, <code>Outer</code>! When I tried to run code based on <code>Outer</code> for this matrix, it crashed my kernel.</p> <p>So let's not use <code>Outer</code> or otherwise check each tuple. Instead, let's rephrase the problem as a graph problem, which Mathematica has quite the toolkit for. We want to draw an edge from each given set of relevant indices to each of its members. (This is a bipartite graph, with sets of relevant indices (arising from rows) on one side and indices (corresponding to columns) on another.)</p> <p>Our problem is now:</p> <blockquote> <p>Can we choose a set of edges in this graph such that every vertex belongs to some edge, and no two edges share a vertex?</p> </blockquote> <p>This is known as a matching, or independent edge set, which Mathematica has a built-in function for: <code>FindIndependentEdgeSet</code>.</p> <p>We can clean up our representation of things a bit. 
Instead of using sets of relevant indices and their elements, since sets of relevant indices each correspond to a given row, we can simply ask for a vertex representing <code>row i</code> to be connected to a vertex representing <code>column j</code> (one of the indices in the set) whenever <code>M[[i,j]]</code> is nonzero. We can do this by</p> <pre><code>A = Position[M, Except[0], {2}, Heads-&gt;False] </code></pre> <p>We can then turn an index pairs into an edge by</p> <pre><code>edge[{a_, b_}] := DirectedEdge[{"row", a}, {"column", b}] </code></pre> <p>(It's important we distinguish the indices representing rows from those representing columns somehow, or Mathematica will think they're the same vertex.)</p> <p>And we can get the whole graph by</p> <pre><code>G = edge /@ A </code></pre> <p>Then <code>matching = FindIndependentEdgeSet[G]</code> finds a maximum edge set.</p> <p>But is it enough? Can we, in fact, reorder the rows or columns to get a diagonal with no zeros for this matrix?</p> <p>Yes! <code>Length[matching]</code> is <code>30</code>, and <code>M</code> is a <code>30 x 30</code> matrix. So we have a corresponding row for each column, and vice versa.</p> <hr> <p>Now to apply it: we can switch rows <em>or</em> columns, whichever we like. The point is that now for each row, we have in <code>matching</code> an edge that lands on a unique column such that that row and column in <code>M</code> has a nonzero entry, and we want to simply move that column to the diagonal...</p> <p><em>or</em> we can read it vice versa, which is easier: for each column <code>j</code>, <code>matching</code> gives a unique row <code>i</code> such that the entry of <code>M</code> at that row and column is nonzero, and we want to move that <em>row</em> so that it appears at index <code>j</code>. In other words, we want to re-order our rows so that row <code>i</code> of <code>M</code> is row <code>j</code> of our new matrix (call it <code>M1</code>). 
So we want a list <code>L</code> that has <code>i</code> in the <code>j</code>th place; then we can say <code>M1 = M[[L]]</code> (which works just like the swapping function you have above, but all at once).</p> <p>To make this list we can call upon <code>SortBy</code> to reorder <code>matching</code> and put all the edges in the right place, and then extract their row values.</p> <p>To extract column values, we can apply the rule <code>DirectedEdge[_,{"column",b_}] :&gt; b</code>, and to extract row values, <code>DirectedEdge[{"row",a_},_] :&gt; a</code>.</p> <p>To sort by the column values and then extract row values:</p> <pre><code>reindexlist[matching_] := ( SortBy[matching, Replace[#,DirectedEdge[_,{"column",b_}] :&gt; b]&amp; ] /. DirectedEdge[{"row",a_},_] :&gt; a ) </code></pre> <p>Then</p> <pre><code>M1 = M[[reindexlist[matching]]] </code></pre> <p>And indeed, <code>Diagonal[M1]</code> yields</p> <pre><code>{1, m1m1, 1, m8m19, m5m5, m10m27, m7m20, m8m8, m7m9, 1, m14m21, 1, m10m26, m11m25, m11m15, m12m16, 1, 1, m9m19, m6m20, m13m21, 1, m9m20, m8m21, 1, m10m26, 1, m10m25, m12m26, m11m27} </code></pre> <p>So we're done! :)</p> <hr> <p>For convenience, here's the code in a block, except for your matrix:</p> <pre><code>M = (*your matrix here*); A = Position[M, Except[0], {2}, Heads -&gt; False]; edge[{a_, b_}] := DirectedEdge[{"row", a}, {"column", b}]; G = edge /@ A; matching = FindIndependentEdgeSet[G]; reindexlist[matching_] := (SortBy[matching, Replace[#, DirectedEdge[_, {"column", b_}] :&gt; b] &amp;] /. DirectedEdge[{"row", a_}, _] :&gt; a); M1 = M[[reindexlist[matching]]] </code></pre>
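The same bipartite-matching idea can be sketched in Python without any graph library (my illustration, using Kuhn's augmenting-path algorithm rather than Mathematica's `FindIndependentEdgeSet`), applied here to the 5×5 matrix `A` from earlier in the thread:

```python
def max_bipartite_matching(nonzero_cols, n_cols):
    """Kuhn's algorithm: match each row to a distinct column it is nonzero in.
    Returns match_col, where match_col[j] is the row assigned to column j (or -1)."""
    match_col = [-1] * n_cols

    def augment(r, seen):
        for c in nonzero_cols[r]:
            if not seen[c]:
                seen[c] = True
                if match_col[c] == -1 or augment(match_col[c], seen):
                    match_col[c] = r
                    return True
        return False

    for r in range(len(nonzero_cols)):
        augment(r, [False] * n_cols)
    return match_col

M = [["a", 0, 0, 0, 0],
     [0, 0, 0, "d1", 0],
     [0, 0, 0, 0, "e1"],
     [0, "b", 0, "d2", 0],
     [0, 0, "c", 0, "e2"]]
adj = [[j for j, v in enumerate(row) if v != 0] for row in M]
match_col = max_bipartite_matching(adj, len(M))
M1 = [M[match_col[j]] for j in range(len(M))]    # row match_col[j] becomes row j
print([M1[j][j] for j in range(len(M1))])        # nonzero diagonal
```

A full matching (size equal to the matrix dimension) exists exactly when some row permutation gives a zero-free diagonal, mirroring the argument above.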
1,573,341
<blockquote> <p>Prove that the roots of the equation $z^n=(z+3)^n$ are collinear</p> </blockquote> <p>Taking modulus on both sides, $$\vert z-0\vert =\vert z-3\vert$$</p> <p>This represents the perpendicular bisector of line joining $x=0$ and $x=3$</p> <p>That was easy. But I tried to solve it using algebra:</p> <p>$$\frac{z+3}{z}=1^{\frac{1}{n}}=\cos{\frac{2k\pi}{n}}+i\sin{\frac{2k\pi}{n}}$$</p> <p>After simplifying, I got $$z=\frac{3i ( \cos{\frac{k\pi}{n} }+i\sin{ \frac{k\pi}{n}})}{2\sin{ \frac{k\pi}{n}}}$$ What should I do next?</p>
Community
-1
<p>$\left(\dfrac{z+3}{z}\right)^n=1 \implies \dfrac{z+3}{z}= e^{\frac{2\pi i k}{n}}$ where $k=0,1,2,\dots,(n-1)$</p> <p>$1+\dfrac{3}{z}= \alpha^k$ where $\alpha=e^{\frac{2\pi i}{n}}$</p> <p>$\implies z=\dfrac{3}{\alpha^k-1};\; k\neq0$</p> <p>The roots of your equation are $z=\dfrac{3}{\alpha-1},\dfrac{3}{\alpha^2-1},\dots,\dfrac{3}{\alpha^{n-1}-1}$</p> <p>but we know, for the $n^{th}$ roots of unity, $\alpha^{n-1}=\dfrac{\alpha^n}{\alpha}=\dfrac{1}{\alpha},\;\alpha^{n-2}=\dfrac{1}{\alpha^2}$ and so on,</p> <p>thus the roots of your equation can also be written as $z=\dfrac{3}{\alpha-1},\dfrac{3}{\alpha^2-1},\dots,\dfrac{3\alpha^2}{1-\alpha^2},\dfrac{3\alpha}{1-\alpha}$ </p>
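A quick numerical check (my addition) confirms that each $z=\frac{3}{\alpha^k-1}$ really solves $z^n=(z+3)^n$ and that all the roots lie on the vertical line $\operatorname{Re}(z)=-\frac{3}{2}$, i.e. they are collinear:

```python
import cmath

n = 7
alpha = cmath.exp(2j * cmath.pi / n)
roots = [3 / (alpha**k - 1) for k in range(1, n)]
for z in roots:
    assert abs(z**n - (z + 3)**n) < 1e-6 * max(1.0, abs(z)**n)  # solves the equation
    assert abs(z.real + 1.5) < 1e-9                             # Re(z) = -3/2
print([round(z.real, 9) for z in roots])  # all -1.5
```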
4,360,054
<p><span class="math-container">$\textbf{Question}$</span>: Show that there exist an uncountable subset <span class="math-container">$X$</span> of <span class="math-container">$\mathbb{R}^{n}$</span> with property that every subset of <span class="math-container">$X$</span> with <span class="math-container">$n$</span> elements is a basis of <span class="math-container">$\mathbb{R}^{n}$</span>.</p> <p><span class="math-container">$\textbf{My Attempt}$</span>: For <span class="math-container">$n= 2$</span>, if we take <span class="math-container">$X = \{(\cos \theta, \sin \theta): 0&lt;\theta &lt; \frac{\pi}{2}\}$</span>, then <span class="math-container">$X$</span> has such property. But for <span class="math-container">$n\geq 3$</span>, I can not find any way. Welcome for answer of this question.</p>
Bart Michels
43,288
<p>You can consider all such subsets of <span class="math-container">$\mathbb R^n$</span> (not necessarily uncountable). They form a poset under inclusion and it is easy to see that every chain has an upper bound (their union). By Zorn's lemma, there exists a maximal subset <span class="math-container">$S$</span> of <span class="math-container">$\mathbb R^n$</span> with the required property. If <span class="math-container">$S$</span> were countable, then <span class="math-container">$\mathbb R^n$</span> would be a countable union of hyperplanes, and this is not true: <a href="https://math.stackexchange.com/questions/1284301/show-that-mathbbrn-cannot-be-written-as-a-countable-union-of-proper-subspa">Show that $\mathbb{R}^n$ cannot be written as a countable union of proper subspaces</a></p>
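The Zorn's-lemma argument is non-constructive; a classical explicit example (my addition, not from the answer) is the moment curve $\{(t,t^2,\dots,t^n): t>0\}$, where any $n$ distinct points form a basis because the relevant determinant is a multiple of a Vandermonde determinant. A spot-check for $n=3$:

```python
from itertools import combinations

def det3(M):
    # 3x3 determinant by cofactor expansion along the first row
    a, b, c = M
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

pts = [(t, t * t, t**3) for t in range(1, 9)]   # points on the moment curve
assert all(det3(list(trip)) != 0 for trip in combinations(pts, 3))
print("every 3-subset of these moment-curve points is a basis of R^3")
```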
3,552,695
<p>The question is to find all real numbers solutions to the system of equations: </p> <ul> <li><span class="math-container">$y=\Large\frac{4x^2}{4x^2+1}$</span>,</li> <li><span class="math-container">$z=\Large\frac{4y^2}{4y^2+1}$</span>,</li> <li><span class="math-container">$x=\Large\frac{4z^2}{4z^2+1}$</span>, </li> </ul> <p>This seems simple enough, so I tried substituting the values of x, y and z into the different equations but I only ended up with a huge degree 8 equation which clearly doesn't seem like the right approach. I really have no idea on how to go about solving this if substitution is not the answer.<br> Any help would be greatly appreciated :)</p>
Daniyar Aubekerov
741,637
<p>We can note that <span class="math-container">$4a^2+1 \ge 4a \ \ \forall a \in \mathbb{R}$</span>, since <span class="math-container">$4a^2-4a+1=(2a-1)^2\ge 0$</span>. Also, since <span class="math-container">$\frac{4a^2}{4a^2+1} \ge 0$</span>, we know that <span class="math-container">$x,y,z \ge 0$</span>.</p> <p>Therefore <span class="math-container">$y=\frac{4x^2}{4x^2+1} \le \frac{4x^2}{4x}=x$</span> for nonzero values of <span class="math-container">$x$</span>. Similarly <span class="math-container">$z \le y$</span> and <span class="math-container">$x \le z$</span>. Therefore <span class="math-container">$x \le z \le y \le x \implies x=y=z$</span>.</p> <p>Now solving the equation <span class="math-container">$a=\frac{4a^2}{4a^2+1} \implies 4a^2+1=4a \implies a = \frac{1}{2} \implies x=y=z=\frac{1}{2}$</span>.</p> <p>Finally, we assumed that the numbers are nonzero, so we should also include the solution <span class="math-container">$(0,0,0)$</span>.</p>
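Both solutions can be checked numerically (my addition); sampling $t\mapsto\frac{4t^2}{4t^2+1}$ also confirms the key inequality $f(t)\le t$, with equality only at the two fixed points:

```python
f = lambda t: 4 * t * t / (4 * t * t + 1)

# Both claimed solutions satisfy the cyclic system x -> y -> z -> x.
for s in (0.0, 0.5):
    assert f(s) == s

# f(t) <= t for all t >= 0, with equality only at 0 and 1/2 --
# this is the inequality that forces x = y = z.
samples = [k / 1000 for k in range(0, 2001)]
assert all(f(t) <= t for t in samples)
fixed = [t for t in samples if abs(f(t) - t) < 1e-12]
print(fixed)  # [0.0, 0.5]
```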
3,552,695
<p>The question is to find all real numbers solutions to the system of equations: </p> <ul> <li><span class="math-container">$y=\Large\frac{4x^2}{4x^2+1}$</span>,</li> <li><span class="math-container">$z=\Large\frac{4y^2}{4y^2+1}$</span>,</li> <li><span class="math-container">$x=\Large\frac{4z^2}{4z^2+1}$</span>, </li> </ul> <p>This seems simple enough, so I tried substituting the values of x, y and z into the different equations but I only ended up with a huge degree 8 equation which clearly doesn't seem like the right approach. I really have no idea on how to go about solving this if substitution is not the answer.<br> Any help would be greatly appreciated :)</p>
jacky
14,096
<p><span class="math-container">$\bullet\; $</span> Clearly <span class="math-container">$x=y=z=0$</span> is a solution of the system of equations.</p> <p><span class="math-container">$\bullet\; $</span> If <span class="math-container">$x,y,z\neq 0,\; $</span> then</p> <ul> <li><span class="math-container">$\displaystyle y=\frac{4x^2}{4x^2+1}\Longrightarrow \frac{1}{y}=1+\frac{1}{4x^2}$</span>,</li> <li><span class="math-container">$\displaystyle z=\frac{4y^2}{4y^2+1}\Longrightarrow \frac{1}{z}=1+\frac{1}{4y^2},$</span></li> <li><span class="math-container">$\displaystyle x=\frac{4z^2}{4z^2+1}\Longrightarrow \frac{1}{x}=1+\frac{1}{4z^2}$</span>. </li> </ul> <p>Adding all three and rearranging,</p> <p><span class="math-container">$\displaystyle \bigg(1-\frac{1}{2x}\bigg)^2+\bigg(1-\frac{1}{2y}\bigg)^2+\bigg(1-\frac{1}{2z}\bigg)^2=0,$</span></p> <p>which is possible only when</p> <p><span class="math-container">$\displaystyle 1-\frac{1}{2x}=0,\quad 1-\frac{1}{2y}=0,\quad 1-\frac{1}{2z}=0.$</span></p> <p>So the system of equations also has <span class="math-container">$\displaystyle (x,y,z)=\bigg(\frac{1}{2},\frac{1}{2},\frac{1}{2}\bigg)$</span> as a solution.</p>
4,026,795
<blockquote> <p>Let <span class="math-container">$V=\{(x,y,z)\in\mathbb{R^3}: x^2+y^2\le z, z\le x+2\}$</span></p> <p>Then the volume of V is:</p> <p>(A) Vol(V) = <span class="math-container">$\frac{75}{8}\pi$</span></p> <p>(B) Vol(V) = <span class="math-container">$\frac{81}{32}\pi$</span></p> <p>(C) Vol(V) = <span class="math-container">$\frac{9}{4}\pi$</span></p> <p>(D) Vol(V) = <span class="math-container">$\frac{45}{8}\pi$</span></p> </blockquote> <p>I already understand how the figure sits on 3D space, is limited by a paraboloid on the bottom and a plane on the top, which is inclined. I guessed it would be easier if I used cylindrical or spherical coordinates but I'm having a hard time with the inclined plane. I would appreciate any hint in or guided resolution in order to understand what thing I'm missing. Thank you.</p>
John Wayland Bales
246,513
<p>If we let <span class="math-container">$G^\prime(x)=g(x)$</span> then we have <span class="math-container">$f(x)=G(1)-G(x)$</span>. Then <span class="math-container">$f^\prime(x)=-G^\prime(x)=-g(x)$</span>.</p> <p>We know that <span class="math-container">$f$</span> will have a minimum value at the value <span class="math-container">$x_0$</span> where <span class="math-container">$f^\prime(x_0)=0$</span> and where <span class="math-container">$f^\prime(x)&lt;0$</span> on some interval <span class="math-container">$(a,x_0)$</span> and <span class="math-container">$f^\prime(x)&gt;0$</span> on some interval <span class="math-container">$(x_0,b)$</span>. This is because <span class="math-container">$f$</span> will be decreasing to the left of <span class="math-container">$x_0$</span> and increasing to the right of <span class="math-container">$x_0$</span>.</p> <p>Since <span class="math-container">$f^\prime(x)=-g(x)$</span> you can find <span class="math-container">$x_0$</span> by inspecting the graph of <span class="math-container">$y=-g(x)$</span>.</p> <p>Turn the graph of <span class="math-container">$g$</span> &quot;upside down&quot; and we see that the conditions are met for <span class="math-container">$x_0=2$</span>.</p>
622,883
<p>I'm finding maximum and minimum of a function $f(x,y,z)=x^2+y^2+z^2$ subject to $g(x,y,z)=x^3+y^3-z^3=3$.</p> <p>By the method of Lagrange multiplier, $\bigtriangledown f=\lambda \bigtriangledown g$ and $g=3$ give critical points. So I tried to solve these equalities, i.e.</p> <p>$\quad 2x=3\lambda x^2,\quad 2y=3\lambda y^2,\quad 2z=-3\lambda z^2,\quad x^3+y^3-z^3=3$. </p> <p>But it is too hard for me. Can anybody solve these?</p>
Eric Thoma
35,667
<p>For an arbitrary set of natural numbers, there is no shorter way to count the number of even numbers other than considering each number individually.</p> <p>I will assume that you are considering a set of natural numbers forming an arithmetic progression. The sequence starts with $a_1$ and $a_{k+1} = a_{k} + d$. There are $n$ terms in total. Note: $n = \frac{b-a}{d} + 1$, where $b$ is the last term and $a$ is the first.</p> <p>If $d$ is even and $a_1$ is odd, there are no even numbers.</p> <p>If $d$ is even and $a_1$ is even, all $n$ terms are even.</p> <p>If $d$ is odd and $a_1$ is odd, then the $k$th term is even iff $k$ is even. So the answer is $\lfloor n/2 \rfloor$, using the notation of the floor function.</p> <p>If $d$ is odd and $a_1$ is even, then the $k$th term is even iff $k$ is odd. So the answer is $\lceil n/2 \rceil$, using the ceiling function. Note that $\lceil n \rceil$ is $n$ for integers, not $n+1$.</p>
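The four cases can be bundled into one function and brute-force checked against direct counting (my sketch; `a` is the first term, `d` the common difference, `n` the number of terms):

```python
def even_count(a, d, n):
    """Number of even terms among a, a+d, ..., a+(n-1)d."""
    if d % 2 == 0:
        return n if a % 2 == 0 else 0
    # d odd: parities alternate, starting from the parity of a.
    return (n + 1) // 2 if a % 2 == 0 else n // 2   # ceiling / floor of n/2

for a in range(1, 9):
    for d in range(1, 6):
        for n in range(1, 13):
            brute = sum(1 for k in range(n) if (a + k * d) % 2 == 0)
            assert even_count(a, d, n) == brute
print("formula matches brute force")
```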
2,733,142
<p>This question has been asked a <a href="https://math.stackexchange.com/questions/207029/a-b2-for-which-matrix-a">few</a> <a href="https://math.stackexchange.com/questions/583442/square-root-of-nilpotent-matrix">times</a>. In the former case I noticed that there was some argument trending towards using the Jordan form. For the sake of completeness, I was hoping to see if I'm on the right track.</p> <p>Let $N$ be a nilpotent matrix such that $N^n=0$ and $N^{n-1}\neq0$. Suppose there exists $A^2=N$. Then I can write $A^2=\lambda I+N'$ where $N'$ is a nilpotent matrix. Using the Taylor expansion, I can see that $A^2=(\lambda I+N')^{1/2}=\sqrt{\lambda}\left(I+\frac{N'}{2\lambda}-\frac{N'^2}{8\lambda^2}+\frac{N'^3}{16\lambda^3}+....\right)$. This series is convergent since it only has finitely many terms. But, $A^2=N$ and $N$ is nilpotent. So $\lambda^i=0$ for all $i\in \{0,...,n\}.$ Hence such an $A^2$ doesn't exist.</p>
Theo Bendit
248,286
<p>I'm not 100% sure if I follow, but it looks invalid. When you write $A^2 = \lambda I + N'$, where $N'$ and $A^2$ are both nilpotent, the only scalar $\lambda$ that can possibly fit the bill here is $\lambda = 0$. Why? Because adding $\lambda I$ to a matrix increases all eigenvalues by $\lambda$, and nilpotent matrices have only $0$ as their eigenvalue. Therefore, from the outset, writing the series with $\lambda$ in the denominator is meaningless.</p> <p>You realise this, and you use it to conclude that no square root exists. This doesn't really work logically. All you've figured out is that the method you've got for computing a square root cannot help you. There's no guarantee that another method won't help you.</p> <p>What you need is an answer like the one you've linked to: one that assumes one exists, but arrives at a contradiction.</p> <p>EDIT: To draw an analogy, consider the differential equation $$\frac{\mathrm{d}y}{\mathrm{d}x} = y.$$ One common way to solve this is by separation of variables, that is, $$\mathrm{d}{y} = y \, \mathrm{d}x \implies \frac{\mathrm{d}{y}}{y} = \mathrm{d}x \implies \int \frac{\mathrm{d}{y}}{y} = \int \mathrm{d}x \implies \ln|y| = x + C \implies y = \pm e^C e^x.$$ Notice that we divide by $y$, which means this method cannot arrive at the solution $y = 0$. However, $y = 0$ is definitely a solution; the fact that we divided by $y$ does not prove $y = 0$ is not a solution, just that if it is a solution, we've missed it using this method.</p> <p>EDIT2: In order to prove it, what I'd do is suppose $A$ be a square root of $N$. Note that $A^{2n} = N^n = 0$, hence $A$ is nilpotent. Moreover, $A^{2n - 2} = N^{n - 1} \neq 0$. However, this contradicts the fact that $A^n = 0$, which is true since $A$ is nilpotent and $n \times n$.</p>
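The contradiction in EDIT2 can be illustrated numerically (my addition) with the $3\times3$ nilpotent Jordan block: $N^2\neq 0$ but $N^3=0$, so a square root $A$ would need $A^4=N^2\neq 0$, even though nilpotency of a $3\times3$ matrix forces $A^3=0$ and hence $A^4=0$:

```python
def matmul(X, Y):
    # Plain-list matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

N = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]          # nilpotent Jordan block, n = 3
N2 = matmul(N, N)
N3 = matmul(N2, N)
print(N2)  # still nonzero: N^(n-1) != 0
print(N3)  # zero matrix: N^n = 0
# If A^2 = N then A^(2n-2) = N^(n-1) != 0, yet A^n = 0 implies A^(2n-2) = 0
# whenever 2n-2 >= n, i.e. n >= 2 -- a contradiction.
```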
4,010,160
<h2>The Problem</h2> <p>Let <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> be a continuous i.i.d. random variables. By definition, we know that...</p> <ol> <li>Independence: <span class="math-container">$\mathbb P(X_1 \in A \ \cap \ X_2 \in B)=\mathbb P(X_1 \in A) \ \mathbb P(X_2 \in B)$</span></li> <li>Identically Distributed: <span class="math-container">$\mathbb P(X_1 \le x) = \mathbb P(X_2 \le x) \ \ \forall \ \ x \in \mathbb R^1$</span></li> </ol> <p>My hypothesis is that: <span class="math-container">$$\mathbb P(X_1 &gt; X_2) = 0.5$$</span> I have attempted to prove this by using the above definitions to form the expression: <span class="math-container">$$\mathbb P(X_1 \ge x \ \cap \ X_2 &lt; x)=\mathbb P(X_2 \ge x \ \cap \ X_1 &lt; x) \ \ \forall \ \ x \in \mathbb R^1$$</span></p> <h2>My Questions</h2> <ol> <li>Is this a valid proof, and am I missing anything?</li> <li>This seems like a fundamental property of i.i.d. random variables. Is this already a named theorem?</li> </ol>
Hayden
27,496
<p>No, consider <span class="math-container">$X = [0,1]$</span> and <span class="math-container">$Y = [0,1] \cup [2,3]$</span>, with their usual subspace topologies. You may let <span class="math-container">$A = [0,1/3] \cup [1/3,2/3]$</span> and <span class="math-container">$B = [0,1]$</span>. <span class="math-container">$X$</span> is connected while <span class="math-container">$Y$</span> is not.</p>
2,716,036
<p>In reviewing some old homework assignments, I found two problems that I really do not understand, despite the fact that I have the answers.</p> <p>The first is: R(x, y) if y = 2^d * x for some nonnegative integer d. What I do not understand about this relation is how it can possibly be transitive (according to my notes it is). My understanding is that if the relation were transitive, the following would apply: if y = 2^d * x and x = 2^d * z, then y = 2^d * z. That seems impossible unless x = z. Am I missing something?</p> <p>The second is: R(x, y) if x and y are both divisible by 17. What I do not understand about this relation is why it is not reflexive. My understanding is that if the relation is reflexive, if x is divisible by 17 then both x and x are divisible by 17. I think that I am possibly applying the quality of reflexiveness incorrectly to this relation, but I am not quite sure.</p> <p>Thank you for any help in correcting these misunderstandings!</p>
bitesizebo
547,202
<p>For your first relation, note you're allowed to use a different $d$ each time. So if $y = 2^dx$ and $x = 2^{d'} z$ then $y = 2^{d + d'} z$ so $R(y, z) $. </p> <p>For your second relation, we need $R(x, x) $ for all $x$ in order to be reflexive, but (for example) 1 will not be related to 1 for this relation. </p>
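Both points are easy to spot-check in code (my illustration); note how transitivity of the first relation uses a <em>different</em> exponent, $d+d'$, for the composite step:

```python
def R1(u, v):
    """R1(u, v) holds when u = 2^d * v for some nonnegative integer d."""
    return v != 0 and any(u == (2 ** d) * v for d in range(40))

# Transitivity: R1(40, 10) with d=2 and R1(10, 5) with d'=1 give R1(40, 5) with d+d'=3.
assert R1(40, 10) and R1(10, 5) and R1(40, 5)

def R2(x, y):
    """R2 holds when x and y are both divisible by 17."""
    return x % 17 == 0 and y % 17 == 0

assert R2(17, 34)
assert not R2(1, 1)   # reflexivity fails: 1 is not related to 1
print("checks pass")
```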
1,187,706
<p>I'd imagine this is a duplicate question, but I can't find it:</p> <p>How many quadratic residues are there $\pmod{2^n}$. </p> <p>I tried small $n$: $n=1: 2, n=2:2, n=3: 3, n=4: 4, n=5: $not 5: 0, 1, 4, 9, 16, 25</p> <p>No pattern :/</p>
Asinomás
33,907
<p>The multiplicative group $\bmod 2^n$ (for $n\ge 3$) is isomorphic to $\mathbb Z_{2^{n-2}}\times \mathbb Z_2$, so we have $2^{n-3}$ odd quadratic residues.</p> <p>How many even quadratic residues do we have? How many of the form $d\cdot 2^{2k}$ with $d$ odd? Such a $d$ is an odd quadratic residue $\bmod 2^{n-2k}$, so there are $2^{n-2k-3}$ options.</p> <p>Hence the answer is $\sum\limits_{k=0}^{\lceil\frac{n}{2}\rceil}\lceil2^{n-2k-3}\rceil$</p>
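A brute-force count (my addition) reproduces the sequence from the question, and incidentally shows the question's list for $n=5$ is missing one residue: $7^2=49\equiv 17 \pmod{32}$, so there are $7$ residues, not $6$:

```python
def qr_count(n):
    # Number of distinct squares mod 2^n
    m = 2 ** n
    return len({(x * x) % m for x in range(m)})

print([qr_count(n) for n in range(1, 6)])         # [2, 2, 3, 4, 7]
print(sorted({(x * x) % 32 for x in range(32)}))  # [0, 1, 4, 9, 16, 17, 25]
```

These counts agree with the formula above: for $n=5$ it gives $4+1+1+1=7$.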
2,900,283
<p>I'm having trouble proving this theorem from Rudin: that if $x \neq 0$ then $\frac{1}{\frac{1}{x}} = x$. </p> <p>Rudin seems to solve this by referring to an earlier result that $xy = xz$ for $x \neq 0$ implies that $y = z$. I haven't quite been able to grasp this approach, as we don't seem to have access to this assumption. Another, perhaps more intuitive approach, is to deduce it from the field axioms: \begin{align*} \frac{1}{\frac{1}{x}} &amp; = 1 \cdot \frac{1}{\frac{1}{x}} &amp; &amp; \text{Mult Identity} \\ &amp; = \left(x \cdot \frac{1}{x}\right) \cdot \frac{1}{\frac{1}{x}} &amp; &amp; \text{Mult Inverse} \\ &amp; = x \left(\frac{1}{x \cdot \frac{1}{x}} \right) &amp; &amp; \text{Associativity, simplification} \\ &amp; = x \cdot 1 &amp; &amp; \text{Mult Inverse} \\ &amp; = x &amp; &amp; \text{Mult Identity} \end{align*} I'm particularly unsure on whether we can perform the third line of this proof, wherein we write that $\frac{1}{x} \cdot \frac{1}{\frac{1}{x}} = \frac{1}{x \cdot \frac{1}{x}}$. This is surely the multiplication law for rational numbers, but our only assumption is that $x$ is an element of some field. Considering all of the possible fields -- reals, rationals, complex, finite fields, etc. -- it seems that this law would work, but I can't think of an axiom by which it would other than the fact that multiplication is defined and behaves as we would expect in fields, so this seems like a standard result. I'm unsure on whether I ought to prove such a result prior to using it, or if it simply follows from the definition.</p> <p>Any helpful insights or hints would be greatly appreciated. </p>
José Carlos Santos
446,262
<p>This is correct, except that the justification for the third equality is not appropriate. There is no &ldquo;simplification&rdquo; axiom. But, by definition of multiplicative inverse,$$\frac1x\cdot\frac1{\frac1x}=1,$$since, for each $z\neq0$, $z\cdot\frac1z=1$.</p>
2,900,283
<p>I'm having trouble proving this theorem from Rudin: that if $x \neq 0$ then $\frac{1}{\frac{1}{x}} = x$. </p> <p>Rudin seems to solve this by referring to an earlier result that $xy = xz$ for $x \neq 0$ implies that $y = z$. I haven't quite been able to grasp this approach, as we don't seem to have access to this assumption. Another, perhaps more intuitive approach, is to deduce it from the field axioms: \begin{align*} \frac{1}{\frac{1}{x}} &amp; = 1 \cdot \frac{1}{\frac{1}{x}} &amp; &amp; \text{Mult Identity} \\ &amp; = \left(x \cdot \frac{1}{x}\right) \cdot \frac{1}{\frac{1}{x}} &amp; &amp; \text{Mult Inverse} \\ &amp; = x \left(\frac{1}{x \cdot \frac{1}{x}} \right) &amp; &amp; \text{Associativity, simplification} \\ &amp; = x \cdot 1 &amp; &amp; \text{Mult Inverse} \\ &amp; = x &amp; &amp; \text{Mult Identity} \end{align*} I'm particularly unsure on whether we can perform the third line of this proof, wherein we write that $\frac{1}{x} \cdot \frac{1}{\frac{1}{x}} = \frac{1}{x \cdot \frac{1}{x}}$. This is surely the multiplication law for rational numbers, but our only assumption is that $x$ is an element of some field. Considering all of the possible fields -- reals, rationals, complex, finite fields, etc. -- it seems that this law would work, but I can't think of an axiom by which it would other than the fact that multiplication is defined and behaves as we would expect in fields, so this seems like a standard result. I'm unsure on whether I ought to prove such a result prior to using it, or if it simply follows from the definition.</p> <p>Any helpful insights or hints would be greatly appreciated. </p>
Community
-1
<p>$$\frac1{\dfrac1x}=x\iff \dfrac1x\frac1{\dfrac1x}=\dfrac1xx\iff1=1.$$</p>
563,129
<p>Let $\mathcal{X} = \{x_1,\dots,x_n\} \subset \mathbb{R}^d$ and let $Z$ be a random variable uniformly distributed over convex hull of $\mathcal{X}$, denoted as $\text{conv}(\mathcal{X})$. Assuming that $\mathcal{X}$ coincides with the set of extreme points of $\text{conv}(\mathcal{X})$, we can write $Z = \sum_{i=1}^n \alpha_i x_i$ for a unique choice of $\alpha = (\alpha_1,\dots,\alpha_n)$ such that $\alpha_i \ge 0$ and $\sum_i \alpha_i = 1$. What is the distribution of $\alpha$?</p> <p>EDIT: As pointed out below, we need to assume $\text{conv}(\mathcal{X})$ to be a simplex, for the representation to be unique. Assuming so, what is the distribution of the coordinates?</p>
Alex Ravsky
71,850
<p>It seems the following.</p> <p>We have no unique choice of $\alpha$. For instance, let $d=2$ and $\mathcal{X}$ be a set of vertices of a regular hexagon $H.$ Let $\mathcal{X_1}$ be a set of odd vertices of $H$ and $\mathcal{X_2}$ be a set of even vertices of $H$. Then each point $z$ from the intersection of triangles $\text{conv}(\mathcal{X}_1)$ and $\text{conv}(\mathcal{X}_2)$ has two natural choices of its $\alpha$. Moreover, it seems that we have a unique choice of $\alpha=\alpha(z)$ for each point $z\in \text{conv}(\mathcal{X})$ iff $\mathcal{X}$ is a set of vertices of a simplex. </p>
563,129
<p>Let $\mathcal{X} = \{x_1,\dots,x_n\} \subset \mathbb{R}^d$ and let $Z$ be a random variable uniformly distributed over convex hull of $\mathcal{X}$, denoted as $\text{conv}(\mathcal{X})$. Assuming that $\mathcal{X}$ coincides with the set of extreme points of $\text{conv}(\mathcal{X})$, we can write $Z = \sum_{i=1}^n \alpha_i x_i$ for a unique choice of $\alpha = (\alpha_1,\dots,\alpha_n)$ such that $\alpha_i \ge 0$ and $\sum_i \alpha_i = 1$. What is the distribution of $\alpha$?</p> <p>EDIT: As pointed out below, we need to assume $\text{conv}(\mathcal{X})$ to be a simplex, for the representation to be unique. Assuming so, what is the distribution of the coordinates?</p>
Tomilov Anatoliy
54,348
<p>I can propose the following algorithm for picking a point from the polytope:</p> <ol> <li><p>Firstly, triangulate your polytope (this is simple for convex ones).</p> </li> <li><p>Calculate the absolute values of the hypervolumes of the simplexes from <strong>1.</strong> (<a href="https://math.stackexchange.com/a/828854/54348">the hypervolume is just the determinant of the <span class="math-container">$d \times d$</span>-matrix whose rows are the vectors from the last vertex to each of the first <span class="math-container">$d$</span> vertices, divided by <span class="math-container">$d!$</span></a>). Then transform the above set into the set of its partial sums. Generate a uniformly distributed value between zero and the sum of the hypervolumes. By binary search in the above set you can choose the simplex <span class="math-container">$P = \{\mathbf{p}_i\}_{i = 0}^d$</span>.</p> </li> <li><p>For this simplex do:</p> <ol> <li>Step <span class="math-container">$0$</span>: <span class="math-container">$\mathbf{p} \leftarrow \mathbf{p}_0$</span></li> <li>Step <span class="math-container">$i$</span>: <span class="math-container">$w_i = (U[0;1])^\frac{1}{i},\; \mathbf{p} \leftarrow \mathbf{p} * w_i + \mathbf{p}_i \cdot (1 - w_i)$</span></li> </ol> <p>On the <span class="math-container">$d$</span>-th step (<span class="math-container">$d + 1$</span> steps in total, since <span class="math-container">$i$</span> is a zero-based index) you will obtain the desired uniformly distributed point <span class="math-container">$\mathbf{p}$</span> inside the simplex <span class="math-container">$P$</span>.</p> </li> </ol> <p>I don't know what the resulting distribution of barycentric coordinates is called, but the geometric interpretation of the algorithm is evident: on the <span class="math-container">$i$</span>-th step we successively add the barycentric dimension of the next vertex in turn relative to its opposite <span class="math-container">$i - 1$</span>-dimensional simplex <span class="math-container">$S_{i - 1}$</span> (which is defined by the set of vertices <span class="math-container">$\stackrel{def}{=} \{\mathbf{p}_j\}_{j = 0}^{i - 1}$</span>), formed by all previously involved vertices. The <span class="math-container">$i$</span>-hypervolume of the pyramid formed by <span class="math-container">$S_{i - 1}$</span> and a secant (also an <span class="math-container">$i - 1$</span>-simplex) of the simplex <span class="math-container">$S_i$</span>, parallel to the <span class="math-container">$S_{i - 1}$</span> simplex, is distributed as <span class="math-container">$\frac{1}{i} \cdot V_{i - 1} \cdot \displaystyle\sqrt[\displaystyle i]{h_i}$</span>, where <span class="math-container">$V_{i - 1}$</span> is the <span class="math-container">$i - 1$</span>-hypervolume of <span class="math-container">$S_{i - 1}$</span>, <span class="math-container">$h_i \in [0;h_{0i}]$</span> is the distance from the secant of <span class="math-container">$S_i$</span> to <span class="math-container">$S_{i - 1}$</span>, and <span class="math-container">$h_{0i}$</span> is the distance from the vertex <span class="math-container">$\mathbf{p}_i$</span> to <span class="math-container">$S_{i - 1}$</span>; <span class="math-container">$h_i$</span> is distributed uniformly, as stated above.</p> <p>If the number of points to generate is known at the start, then we can simplify step <strong>2.</strong>: generate the respective share of the total number of points for each <span class="math-container">$d$</span>-simplex.</p> <p>Also, I suspect that numerical stability is very bad in higher dimensions, due to the recurrent nature of step <strong>3.</strong> and the high degrees of the roots.</p> <p>There is an alternative approach using the standard unit <span class="math-container">$d - 1$</span>-simplex, mentioned <a href="https://stackoverflow.com/questions/3010837/sample-uniformly-at-random-from-an-n-dimensional-unit-simplex/3080946#3080946">here</a>. As mentioned above, affine transformations affect neither barycentric coordinates nor, as far as I know, the uniformity of the distribution. So, <strong>the answer</strong> to the question is: the distribution of the coordinates is the <strong>Dirichlet distribution</strong>. It looks quite &quot;symmetrical&quot; and, in my opinion, numerically stable, but requires one more uniformly distributed value.</p> <p><span class="math-container">$Y_i \sim \operatorname{Gamma}(1,1) = \operatorname{Exp}(1), i \in \{1, 2, \dots, dim\} \\ V = \sum \limits_{i = 1}^{dim}Y_i \sim \operatorname{Gamma}(\underbrace{\sum \limits_{i = 1}^{dim}1}_\text{= dim}, 1) \\ X = (X_1, X_2, \dots, X_{dim}) = (\frac{Y_1}{V}, \frac{Y_2}{V}, \dots, \frac{Y_{dim}}{V}) \sim \operatorname{Dir}(\underbrace{1, 1, \dots, 1}_\text{dim times})$</span></p>
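The Dirichlet construction at the end of the answer is a few lines of code. A minimal sketch in Python (stdlib only; function names are mine): exponential variates normalized by their sum give barycentric coordinates of a uniform point in the simplex.

```python
import random

def uniform_barycentric(dim, rng=random):
    """Barycentric coordinates of a uniform point in a (dim-1)-simplex.

    Y_i ~ Exp(1) i.i.d.; dividing by the sum gives a Dirichlet(1,...,1)
    vector, i.e. a uniform point of the standard simplex."""
    y = [rng.expovariate(1.0) for _ in range(dim)]
    total = sum(y)
    return [yi / total for yi in y]

def from_barycentric(vertices, coords):
    """Combine simplex vertices with barycentric weights into a point."""
    space_dim = len(vertices[0])
    return [sum(c * v[k] for c, v in zip(coords, vertices))
            for k in range(space_dim)]

random.seed(42)
alpha = uniform_barycentric(3)
p = from_barycentric([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], alpha)
```

For a polytope that is not a simplex, this would be combined with the triangulation step from the answer above.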
2,727,237
<p>$$ \begin{matrix} 1 &amp; 0 &amp; -2 \\ 0 &amp; 1 &amp; 1 \\ 0 &amp; 0 &amp; 0 \\ \end{matrix} $$</p> <p>I am told that the span of the vectors equals $R^m$ where $m$ is the number of rows with a pivot. So when describing the span of the above vectors, is it correct to say that they don't span $R^3$ but only span $R^2$?</p> <p>Thanks</p>
Mohammad Riazi-Kermani
514,496
<p>The coordinates may be orthonormal but the basis vectors do not have to be orthonormal. </p> <p>Let $v_1=(1,3)$ and $v_2= (3,2)$ then the coordinates of $v_1$ with respect to the basis $\{ v_1, v_2 \}$ is $(1,0)$.</p> <p>Also the coordinates of $v_2$ with respect to the basis $\{ v_1, v_2 \}$ is $(0,1)$</p> <p>The coordinates are orthonormal but the vectors are not. </p>
3,542,485
<p>Suppose my birthday fell on a Sunday. What is the maximum number of years I may have to wait till my birthday falls on a Sunday again?</p> <p>My intuition says 11 years:</p> <ul> <li><p>At the earliest, the next Sunday is in 6 years time. Reason: before I can encounter another Sunday, there must be at least one leap year. (7 - 1 = 6).</p></li> <li><p>Suppose by bad luck, 6 years later is a leap year (my birthday would skip Sunday and fall on a Monday). From this point, before I can encounter the next Sunday, there must be another leap year. There are two leap years involved, so the next Sunday is in 5 years time. (7 - 1 - 1 = 5)</p></li> <li>6 + 5 = 11</li> </ul> <p>Is 11 the correct answer? If yes, I would appreciate an explanation more rigorous than the above. If no, please teach.</p> <p>I am also looking for exceptional cases where the above reasoning could be faulty.</p>
Toby Mak
285,313
<p><span class="math-container">$365$</span> leaves a remainder of <span class="math-container">$1$</span> when divided by <span class="math-container">$7$</span>. Therefore, without accounting for leap years, your next birthday (<span class="math-container">$365$</span> days after your previous birthday) will shift by one weekday: for example, Sunday becomes Monday.</p> <p>This year, <span class="math-container">$2020$</span>, is a leap year. Now there are two cases: if your birthday is after February <span class="math-container">$29$</span>, the leap day does not add an extra day between this birthday and your next birthday, so the weekday will only be shifted by <span class="math-container">$1$</span> instead. However, if your birthday is before February the <span class="math-container">$29$</span>th, the leap day will be added before your next birthday, and your birthday in the <em>following</em> year will be shifted by <span class="math-container">$2$</span> weekdays.</p> <p>The years <span class="math-container">$2021, 2022, 2023$</span> are common years. For these years, we only need to shift by <span class="math-container">$1$</span> weekday, with the exception of <span class="math-container">$2$</span> weekdays as explained in the previous paragraph.</p> <p>The next year, <span class="math-container">$2024$</span> is a leap year. 
Apply the same logic as the year <span class="math-container">$2020$</span> again.</p> <p>Repeat this logic until you get to a Sunday again.</p> <p>The answers are below (spoilers!):</p> <blockquote class="spoiler"> <p> Birthday before February <span class="math-container">$29$</span>th: in <span class="math-container">$5$</span> years, <span class="math-container">$2025$</span> <br> Birthday after February <span class="math-container">$29$</span>th: in <span class="math-container">$6$</span> years, <span class="math-container">$2026$</span></p> </blockquote> <hr> <p>Surprising answer ahead:</p> <p>The maximum is actually <span class="math-container">$12$</span> years! This uses the fact that <span class="math-container">$2100$</span> is <em>not</em> a leap year in the Gregorian calendar.</p> <p>Take the date <span class="math-container">$17$</span> March <span class="math-container">$2097$</span>, which is a Sunday. Because <span class="math-container">$2100$</span> is not a leap year, the date is shifted back <span class="math-container">$1$</span> day, so the <span class="math-container">$17$</span>th of March <span class="math-container">$2103$</span> is not a Sunday, but a Saturday. The next Sunday is the <span class="math-container">$17$</span>th of March <span class="math-container">$2109$</span>.</p>
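The year-by-year weekday shifting described above can be checked with Python's `datetime` module, which implements the Gregorian calendar including the non-leap century year 2100 (a sketch; the function name is mine):

```python
from datetime import date

def years_until_same_weekday(year, month, day):
    """How many years until this birthday falls on the same weekday again.

    Relies on Python's proleptic Gregorian calendar, so the non-leap
    century year 2100 is handled correctly.  February 29 birthdays are
    excluded, since that date does not exist every year."""
    target = date(year, month, day).weekday()
    y = year + 1
    while date(y, month, day).weekday() != target:
        y += 1
    return y - year
```

For example, 1 January 2000 was a Saturday, and the next 1 January on a Saturday was in 2005 (a 5-year wait), while 17 March 2097 (a Sunday) waits 12 years because of the missing leap day in 2100.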
132,053
<p>I have an expression $e^{x-a}(e^{bx}+c)$. I just want it multiplied throughout to get $e^{(b+1)x-a}+ce^{x-a}$. To achieve this I have tried <code>Expand</code>, <code>Simplify</code>, <code>Collect</code> (using pattern <code>Exp[q_*x]</code>). The best output I could get is</p> <pre><code>Expand[Exp[x-a]*(Exp[b*x]+c)] (* Exp[-a+x+b*x]+c*Exp[-a+x] *) </code></pre> <p>I have tried <code>Evaluate</code> after this step to obtain proper exponent, but to no avail. How do I make it come with $e^{(b+1)x-a}$ in the first term?</p>
corey979
22,013
<p>Make a rule:</p> <pre><code>Expand[Exp[x - a]*(Exp[b*x] + c)] /. Exp[a_] :&gt; Exp[Collect[a, x]] </code></pre> <p><a href="https://i.stack.imgur.com/32DmV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/32DmV.png" alt="enter image description here"></a></p> <p>See also:</p> <ul> <li><a href="https://mathematica.stackexchange.com/questions/131857/variable-order-in-output">Variable order in output</a></li> <li><a href="https://mathematica.stackexchange.com/questions/15744/how-to-keep-collect-result-in-order">How to keep <code>Collect[]</code> result in order?</a></li> </ul>
1,489,362
<p>$$\lim_{x\to\infty} \left(\frac{3x^2}{\sqrt{4x^2+x+1}+\sqrt{x^2+x+1}}-x\right)$$</p> <p>I need help in finding this limit.</p>
mjo
235,913
<p><strong>Hint:</strong> Try putting everything over a common denominator, then multiply by the denominator's conjugate and see if nice stuff happens.</p>
2,145,098
<p>This was part of a three part question where I was supposed to prove two sets have equal cardinality by finding bijections. I've created a bijection $f: \Bbb Z \Rightarrow 2\Bbb Z$ by $f(x)=2x$. I've created a bijection $g: (0,1) \Rightarrow (4,50)$ by $g(x)=46x+4$. I think those are both correct. My last question is finding a a bijection between (0,1) and [0,1]. I've seen this question several times on this board, but I've yet to understand them and I can't really go back and ask any questions to the original posters. I know a bijective function exists between (0,1) and $\Bbb R$, but I don't think that helps me here. </p>
Przemysław Scherwentke
72,361
<p>HINT: Find first the bijection between sets $\{1/n:n\in\mathbb{N}\}\cup\{0\}$ and $\{1/n:n\in\mathbb{N},\ n\geq2\}$</p>
2,071,828
<p>Find the splitting field of $x^6-2x^4-8x^2+16$ over $\mathbb {F}_3$ and list the intermediate fields between the base field and the splitting field.</p>
Dietrich Burde
83,966
<p>Hint: Over $\mathbb{F}_3$ we have $$ x^6-2x^4-8x^2+16=(x^2 + 2x + 2)(x^2 + x + 2)(x^2 + 1). $$</p>
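The factorization in the hint can be double-checked by multiplying the three quadratics with coefficients reduced mod 3; a quick sanity check in pure Python (coefficient lists are ordered from lowest to highest degree):

```python
def polymul_mod(a, b, p):
    """Multiply polynomials (coefficient lists, lowest degree first) mod p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

# The three quadratics x^2+2x+2, x^2+x+2, x^2+1 over F_3:
f1, f2, f3 = [2, 2, 1], [2, 1, 1], [1, 0, 1]
product = polymul_mod(polymul_mod(f1, f2, 3), f3, 3)

# x^6 - 2x^4 - 8x^2 + 16 reduced mod 3 is x^6 + x^4 + x^2 + 1:
target = [c % 3 for c in [16, 0, -8, 0, -2, 0, 1]]
```

The two coefficient lists agree, confirming the factorization over $\mathbb{F}_3$.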
1,651,452
<p>I am trying to solve a modular arithmetic system and I got to the point where I have to solve $22t \equiv 9 \pmod{7}$ for $t$. I researched on the internet and found that there are many ways to solve this, such as using linear diophantine equations, the euclidean algorithm or using inverses.</p> <p>Can someone show me, using which ever method they choose how to solve for $t$.</p>
Gregory Grant
217,398
<p>The way it works is that you can subtract as many $7$'s from the numbers as you want. So subtract three $7$'s from the $22$ and it simply becomes $1$. And subtract one $7$ from the $9$ and it becomes $2$. So this really just says $t\equiv 2\pmod 7$. And there's your solution. This was a particularly easy linear equation to solve since you didn't need to find the inverse of the coefficient of $t$.</p>
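For a small modulus like 7, the reduction can also be confirmed by brute force over all residues (a sketch; the function name is mine):

```python
def solve_linear_congruence(a, b, m):
    """All residues t in {0, ..., m-1} with a*t ≡ b (mod m), by brute force."""
    return [t for t in range(m) if (a * t - b) % m == 0]

solutions = solve_linear_congruence(22, 9, 7)
```

Since $\gcd(22, 7) = 1$, there is exactly one residue class of solutions, namely $t \equiv 2 \pmod 7$.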
2,046,492
<p>Let's think of two events $1$ and $2$.</p> <p>Both events happen randomly $n_1$/$n_2$-times during a given time $T$ and last for a time of $t_1$/$t_2$.</p> <p><strong>What is probability $P$, that both events happen simultaneously at some moment?</strong></p> <hr> <hr> <p><em>EXAMPLE 1:</em></p> <p>$T = 60$ min</p> <p>Event $1$ - looking out of the office window: $n_1 = 8$ and $t_1 = 1$ min </p> <p>Event $2$ - a green car is on the street visible: $n_2 = 20$ and $t_2 = 0.5$ min</p> <p>$P$: How likely do I see a green car during these $60$ min?</p>
Thomas Andrews
7,933
<p>Consider the discrete version of this problem, where $T$, $t_1, t_2$ are integers under some fixed unit of time and the events always start at integer multiples of time.</p> <p>Then you are selecting $n_1$ values $\{a_i\}$ from $0,...,T-t_1+1$ so that each adjacent pair differs by at least $t_1$, and $n_2$ values $\{b_j\}$ from $0,\dots T-t_2+1$ with adjacent pairs differing by at least $t_2$. Counting these with a stars-and-bars argument gets you something like:</p> <p>$$\binom{T-(t_1-1)n_1}{n_1}$$</p> <p>The number of ways that they don't overlap can be written as the number of ways of ordering the $n_1+n_2$ blocks of type $1$ and $2$, and then inserting gaps before and after them totaling $A=T-n_1t_1-n_2t_2$. The number of orderings is $\binom{n_1+n_2}{n_1}$. A stars-and-bars argument means we can think of this as selecting $n_1+n_2$ elements from $A+n_1+n_2=T-n_1(t_1-1)-n_2(t_2-1)$ different locations. This totals $$\binom{n_1+n_2}{n_1}\binom{T-n_1(t_1-1)-n_2(t_2-1)}{n_1+n_2}$$</p> <p>So the probability they don't overlap is:</p> <p>$$p(t_1,t_2,T,n_1,n_2)=\frac {\binom{n_1+n_2}{n_1}\binom{T-n_1(t_1-1)-n_2(t_2-1)}{n_1+n_2}}{\binom{T-(t_1-1)n_1}{n_1}\binom{T-(t_2-1)n_2}{n_2}}$$</p> <p>Expanding and canceling, we get:</p> <p>$$p(t_1,t_2,T,n_1,n_2)=\frac{(T-n_1(t_1-1)-n_2(t_2-1))!(T-t_1n_1)!(T-t_2n_2)!}{(T-n_1t_1-n_2t_2)!(T-n_1(t_1-1))!(T-n_2(t_2-1))!}$$</p> <p>Letting $A_i=T-n_it_i$, and $A=T-n_1t_1-n_2t_2$ we get:</p> <p>$$p(t_1,t_2,T,n_1,n_2)=\frac{(A+n_1+n_2)!A_1!A_2!}{A!(A_1+n_1)!(A_2+n_2)!}$$ For continuous $t_1,t_2,T$, you'll need a limit of the above expression.</p> <p>$$\lim_{M\to\infty} p(\lfloor Mt_1\rfloor,\lfloor Mt_2\rfloor, \lfloor MT\rfloor,n_1,n_2)$$</p> <p>Which will give you the probability they do <em>not</em> overlap.</p> <p>Using the following result, which is easy to prove directly:</p> <blockquote> <p>For fixed $m$, $$\lim_{B\to\infty}\frac{(B+m)!}{B!B^m}=1$$</p> </blockquote> <p>we get that this is the same as:</p>
<p>$$\begin{align}\lim_{M\to\infty} \frac{(AM)^{n_1+n_2}}{(A_1M)^{n_1}(A_2M)^{n_2}}&amp;=\frac{A^{n_1+n_2}}{A_1^{n_1}A_2^{n_2}}\\ &amp;=\frac{(T-n_1t_1-n_2t_2)^{n_1+n_2}}{(T-n_1t_1)^{n_1}(T-n_2t_2)^{n_2}} \end{align}$$</p> <p>as the probability that the two events won't occur at the same time.</p> <p>When $n_1=n_2=1$, this agrees with a more direct calculation of $\frac{(T-t_1-t_2)^2}{(T-t_1)(T-t_2)}$.</p> <p>In your first case, $T=60, t_1=1, n_1=8, t_2=\frac{1}{2}, n_2=20$ you get $A=42, A_1=52, A_2=50$ and the probability is:</p> <p>$$1-\frac{42^{28}}{52^{8}50^{20}}\approx 0.99446$$</p>
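The closed form can be checked against a Monte Carlo simulation. The sketch below (variable names are mine) samples non-overlapping same-type intervals via the standard order-statistics trick: draw uniforms on $[0, T - nt]$, sort them, and add cumulative offsets of $t$, which is the continuous analogue of the stars-and-bars counting above.

```python
import random

def p_no_overlap(T, n1, t1, n2, t2):
    """(T - n1*t1 - n2*t2)^(n1+n2) / ((T - n1*t1)^n1 * (T - n2*t2)^n2)."""
    A, A1, A2 = T - n1 * t1 - n2 * t2, T - n1 * t1, T - n2 * t2
    return A ** (n1 + n2) / (A1 ** n1 * A2 ** n2)

def sample_starts(T, n, t, rng):
    """Uniform start times of n non-overlapping length-t intervals in [0, T]."""
    u = sorted(rng.uniform(0, T - n * t) for _ in range(n))
    return [ui + i * t for i, ui in enumerate(u)]

def mc_no_overlap(T, n1, t1, n2, t2, trials, rng):
    hits = 0
    for _ in range(trials):
        s1 = sample_starts(T, n1, t1, rng)
        s2 = sample_starts(T, n2, t2, rng)
        # Intervals [a, a+t1] and [b, b+t2] are disjoint iff one ends
        # before the other starts.
        if all(a + t1 <= b or b + t2 <= a for a in s1 for b in s2):
            hits += 1
    return hits / trials

rng = random.Random(1)
exact = p_no_overlap(60, 8, 1, 20, 0.5)          # Example 1 of the question
approx = mc_no_overlap(60, 8, 1, 20, 0.5, 10000, rng)
```

For the question's first example the closed form gives about $0.0055$ for no overlap, i.e. roughly $0.9945$ for seeing an overlap, in agreement with the value quoted above.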
854,671
<p>So I'm a bit confused with calculating a double integral when a circle isn't centered on $(0,0)$. </p> <p>For example: Calculating $\iint(x+4y)\,dx\,dy$ of the area $D: x^2-6x+y^2-4y\le12$. So I kind of understand how to center the circle and solve this with polar coordinates. Since the circle equation is $(x-3)^2+(y-2)^2=25$, I can translate it to $(u+3)^2+(v+2)^2=25$ and go on from there.</p> <p>However I would like to know if I could solve this without translating the circle to the origin. I thought I could, so I simply tried solving $\iint(x+4y)\,dx\,dy$ by doing this: $\int_0^{2\pi}\,d\phi\int_0^5(r\cos\phi + 4r\sin\phi)r\,dr$ but this doesn't work. I'm sure I'm missing something, but why should it be different? the radius is between 0 and 5 in the original circle as well, etc.</p> <p>So my questions are:</p> <ol> <li><p>How can I calculate something like the above integral without translating the circle to the origin? What am I doing wrong?</p></li> <li><p>I would appreciate a good explanation of what are the steps exactly when translating the circle. I kind of "winged it" with just saying "OK, I have to move the $X$ back by 3, so I'll call it $X+3$, the same with the $Y$ etc. If someone could give a clear breakdown of the steps that would be very nice :)</p></li> </ol> <p>Thanks!</p>
bof
111,012
<p>Look at a calendar and observe that the months of May, June, July, August, September, October, and November start on seven different days of the week. This will happen every year, because the number of days in each of those months is the same every year. Therefore one of those months will start on a Sunday, and so the 13th will fall on a Friday.</p>
611,198
<p>A corollary to the Intermediate Value Theorem is that if $f(x)$ is a continuous real-valued function on an interval $I$, then the set $f(I)$ is also an interval or a single point.</p> <p>Is the converse true? Suppose $f(x)$ is defined on an interval $I$ and that $f(I)$ is an interval. Is $f(x)$ continuous on $I$? </p> <p>Would the answer change if $f(x)$ is one-to-one?</p>
Nick D.
115,491
<p>No to both! Define $f:[0,1]\rightarrow \Bbb{R}$, $f(x)=x$ for $x\in [0,\frac{1}{2})$, and $f(x)=\frac{3}{2}-x$ for $x\in[\frac{1}{2},1]$. $f$ is injective and $f([0,1])$ is an interval, but $f$ is not continuous.</p>
3,555,687
<blockquote> <p>There are 3 straight lines and two circles on a plane. They divide the plane into regions. Find the greatest possible number of regions.</p> </blockquote> <p><a href="https://i.stack.imgur.com/YbUAU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YbUAU.png" alt="enter image description here"></a></p> <p>This is the best that I could come up with an I don't really know how to get even more regions. Using some mental calculations, I figured that I could edit the picture and make it into 20 regions. However, the answer key gave me 21 as the answer. Is there a way to find out the number of regions on a plane without using a calculator? Is there a formula for it? Please help.</p>
David G. Stork
210,401
<p>I get 21:</p> <p><a href="https://i.stack.imgur.com/XwOz5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XwOz5.png" alt="enter image description here"></a></p> <p>There can be only six regions "at infinity" (as defined by the straight lines).</p> <ul> <li>6 "at infinity"</li> <li>2 "local, but outside circles"</li> <li>8 "entirely within red circle"</li> <li>5 "within green circle but not inside red circle"</li> </ul>
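There is indeed a general counting formula, which can be derived incrementally: $L$ lines in general position give $1 + L + \binom{L}{2}$ regions, and the $k$-th circle added is cut by the existing curves in $2L + 2(k-1)$ points, each of which contributes one new region (a closed curve with no intersections still adds one region). A hedged sketch:

```python
def max_regions(lines, circles):
    """Maximum number of plane regions for the given counts of lines and
    circles in general position.

    Lines alone give 1 + L + L(L-1)/2 regions; the k-th circle is cut in
    2*L + 2*(k-1) points and adds that many regions (at least 1 even when
    it crosses nothing)."""
    regions = 1 + lines + lines * (lines - 1) // 2
    for k in range(1, circles + 1):
        regions += max(1, 2 * lines + 2 * (k - 1))
    return regions
```

With three lines and two circles this gives $7 + 6 + 8 = 21$, matching the figure.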
2,990,642
<p><span class="math-container">$\lim_{n\to \infty}(0.9999+\frac{1}{n})^n$</span></p> <p>Using Binomial theorem:</p> <p><span class="math-container">$(0.9999+\frac{1}{n})^n={n \choose 0}*0.9999^n+{n \choose 1}*0.9999^{n-1}*\frac{1}{n}+{n \choose 2}*0.9999^{n-2}*(\frac{1}{n})^2+...+{n \choose n-1}*0.9999*(\frac{1}{n})^{n-1}+{n \choose n}*(\frac{1}{n})^n=0.9999^n+0.9999^{n-1}+\frac{n-1}{2n}*0.9999^{n-2}+...+n*0.9999*(\frac{1}{n})^{n-1}+(\frac{1}{n})^n$</span></p> <p>A limit of each element presented above is 0. How should I prove that limit of "invisible" elements (I mean elements in "+..+") is also 0?</p>
Yanko
426,577
<p>Wow your approach to this question is way too difficult.</p> <p>Let me first explain what's wrong with your approach before I give you another way to do this.</p> <p>You wrote your term as a sum of <span class="math-container">$n$</span> terms, each of which converges to zero as <span class="math-container">$n$</span> goes to infinity. In fact you didn't show that the "invisible" elements converge to zero, but even if you did, it would not be enough. For instance the sum <span class="math-container">$\frac{1}{n}+\frac{1}{n}+...+\frac{1}{n}$</span> of <span class="math-container">$n$</span> times <span class="math-container">$\frac{1}{n}$</span> converges to <span class="math-container">$1$</span> even though each term converges to zero.</p> <p>What can you do instead:</p> <p>Instead you can say that for <span class="math-container">$n$</span> sufficiently large <span class="math-container">$0.9999 + \frac{1}{n} &lt; 0.99999$</span> (add one more 9). Then use the fact that <span class="math-container">$0.99999^n\rightarrow 0$</span> as <span class="math-container">$n\rightarrow \infty$</span>.</p>
2,990,642
<p><span class="math-container">$\lim_{n\to \infty}(0.9999+\frac{1}{n})^n$</span></p> <p>Using Binomial theorem:</p> <p><span class="math-container">$(0.9999+\frac{1}{n})^n={n \choose 0}*0.9999^n+{n \choose 1}*0.9999^{n-1}*\frac{1}{n}+{n \choose 2}*0.9999^{n-2}*(\frac{1}{n})^2+...+{n \choose n-1}*0.9999*(\frac{1}{n})^{n-1}+{n \choose n}*(\frac{1}{n})^n=0.9999^n+0.9999^{n-1}+\frac{n-1}{2n}*0.9999^{n-2}+...+n*0.9999*(\frac{1}{n})^{n-1}+(\frac{1}{n})^n$</span></p> <p>A limit of each element presented above is 0. How should I prove that limit of "invisible" elements (I mean elements in "+..+") is also 0?</p>
Jacob
181,986
<p><strong>Hint</strong> Look at <span class="math-container">$n\gt 10000$</span></p> <blockquote class="spoiler"> <p> For <span class="math-container">$n\gt 10000$</span>, we have <span class="math-container">$0.9999 + \frac{1}{n} \leq 0.9999+ \frac{1}{10001}\lt1$</span>, so <span class="math-container">$\lim_{n\rightarrow\infty}(0.9999 + \frac{1}{n})^n \leq \lim_{n\rightarrow\infty} (0.9999 + \frac{1}{10001})^n = 0$</span></p> </blockquote>
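The bounding argument in the two answers above can also be illustrated numerically: once $n > 10000$ the base stays below a constant less than $1$, so the powers collapse rapidly.

```python
def term(n):
    """The n-th term (0.9999 + 1/n)^n of the sequence."""
    return (0.9999 + 1.0 / n) ** n

# Once n > 10000 the base is below 0.99999 < 1, so the powers shrink fast:
values = [term(10 ** k) for k in (5, 6, 7)]
```

The values decrease extremely quickly toward zero, consistent with the limit being $0$.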
29,443
<p>I'm looking at a <a href="http://en.wikipedia.org/wiki/Partition_function_%28statistical_mechanics%29#Relation_to_thermodynamic_variables" rel="nofollow noreferrer">specific</a> derivation on wikipedia relevant to statistical mechanics and I don't understand a step.</p> <p>$$ Z = \sum_s{e^{-\beta E_s}} $$</p> <p>$Z$ (the partition function) encodes information about a physical system. $E_s$ is the energy of a particular system state. $Z$ is found by summing over all possible system states.</p> <p>The expected value of $E$ is found to be:</p> <p>$$ \langle E \rangle = -\frac{\partial \ln Z}{\partial \beta} $$</p> <p>Why is the variance of $E$ simply defined as:</p> <p>$$ \langle(E - \langle E\rangle)^2\rangle = \frac{\partial^2 \ln Z}{\partial \beta^2} $$</p> <p>just a partial derivative of the mean. </p> <p>What about this problem links the variance and mean in this way?</p>
Alex Becker
8,173
<p>The variance of a random variable $X$ is always defined as $\langle (X - \langle X\rangle)^2\rangle$; this is the expected square of the difference between the actual and expected values.</p>
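The identity the question asks about then follows by differentiating $\langle E\rangle$ once more; a standard derivation (nothing here is specific to a particular system):

```latex
\langle E\rangle = -\frac{\partial \ln Z}{\partial \beta}
  = \frac{1}{Z}\sum_s E_s\, e^{-\beta E_s},
\qquad
\frac{\partial^2 \ln Z}{\partial \beta^2}
  = -\frac{\partial \langle E\rangle}{\partial \beta}
  = \frac{1}{Z}\sum_s E_s^2\, e^{-\beta E_s}
    - \left(\frac{1}{Z}\sum_s E_s\, e^{-\beta E_s}\right)^2
  = \langle E^2\rangle - \langle E\rangle^2 .
```

The cross term arises from differentiating the $1/Z$ factor, using $\partial Z/\partial\beta = -\sum_s E_s e^{-\beta E_s}$, and $\langle E^2\rangle - \langle E\rangle^2$ is exactly the variance $\langle(E - \langle E\rangle)^2\rangle$.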
556,807
<p>What is the sum of this series ?</p> <p>$(n-1)+(n-2)+(n-3)+...+(n-k)$ </p> <p>$(n-1)+(n-2)+...+3+2+1 = \frac{n(n-1)}{2}$</p> <p>So how can we find the sum from $n-1$ to $n-k$ ?</p>
Shobhit
79,894
<p>$$(n-1)+(n-2)\cdots(n-k)=\underbrace{n+n+\cdots +n}_{\text{$k$ copies}}-(1+2+\cdots k)=nk-\frac{k}{2}(k+1)$$</p>
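The identity is easy to confirm numerically for a range of values (a quick sketch; function names are mine):

```python
def direct_sum(n, k):
    """(n-1) + (n-2) + ... + (n-k), summed term by term."""
    return sum(n - i for i in range(1, k + 1))

def closed_form(n, k):
    """n*k - k*(k+1)/2, the closed form from the answer."""
    return n * k - k * (k + 1) // 2
```

Note that `k * (k + 1)` is always even, so the integer division is exact.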
1,841,173
<blockquote> <p>Consider the symmetric group $S_{20}$ and its subgroup $A_{20}$ consisting of all even permutations. Let $H$ be a $7$-Sylow subgroup of $A_{20}$. Is $H$ cyclic? And is the statement that any $7$-Sylow subgroup of $S_{20}$ is a subset of $A_{20}$ correct?</p> </blockquote> <p>I know that the order of $H$ is $49$ and that $H$ is not a normal subgroup of $A_{20}$. But I don't understand whether it is cyclic or not.</p>
Shyam Saurabh
715,157
<p>For the last option: any subgroup of $S_n$ either contains exactly half even and half odd permutations or contains only even permutations. A $7$-Sylow subgroup of $S_{20}$ has odd order $49$, so the first case (which forces even order) is impossible; hence every $7$-Sylow subgroup of $S_{20}$ consists of even permutations and lies in $A_{20}$.</p>
2,032,501
<p>Let A and B be subsets of $R^n$ Define</p> <p>$A+B=\{a+b\ |\ a\in\, A , b\in \,B\}$</p> <p>Consider the sets $W=\{(x,y) \in\,R^2\ |\ x&gt;0 , y&gt;0\} \\ X=\{(x,y) \in\,R^2\ |\ x\in\,R , y=0\} \\ Y=\{(x,y) \in\,R^2\ |\ xy=1\} \\Z=\{(x,y) \in\,R^2\ |\ |x|\le 1,|y|\le 1\}$<br> Which of the following statements are true </p> <ol> <li>The Set $W+X$ is open</li> <li>The Set $X+Y$ is closed</li> <li>The Set $Y+Z$ is closed</li> </ol> <p>Try I sincerely don't know how to proceed The answer is 1 and 3 I do know that sum of two open set is open in $R^n$ but that doesn't work over here </p>
DanielWainfleet
254,665
<ol> <li><p>Prove that $W+X=\{(x,y):y&gt;0\}.$</p></li> <li><p>For positive integer $n$ let $p_n=(-n,0)\in X$ and $q_n=(n,1/n)\in Y.$ Then $p_n+q_n=(0,1/n)$ which converges to $(0,0)$ as $n\to \infty.$ But $(0,0)\not \in X+Y$ because if $a=(x_1,0)\in X$ and $b=(x_2,1/x_2)\in Y$ then the second co-ordinate of $a+b$ is $1/x_2 ,$ which can't be $0.$</p></li> <li><p>A sequence $(z_n)_n$ in $Z$ has a subsequence $(z'_n)_n$ that converges to a member of $Z.$</p> <p>$\quad$ So if $(y_n)_n$ is a sequence in $Y$ and $(z_n)_n$ is a sequence in $Z,$ such that $(y_n+z_n)_n$ converges to $p$, there is a subsequence $(y'_n+z'_n)_n,$ with $y'_n\in Y$ and $z'_n\in Z,$ such that $(z'_n)_n$ converges to $z\in Z.$</p> <p>$\quad$ Now $(y'_n+z'_n-p)$ and $(z-z'_n)$ both converge to $0$ as $n\to \infty,$ so their sum also does. Their sum is $y'_n-(p-z).$ So $y'_n$ converges to $p-z.$ </p> <p>$\quad Y$ is closed, so $p-z\in Y.$ Hence, with $p-z\in Y$ and $z\in Z$, we have $p=(p-z)+z\in Y+Z.$</p></li> </ol> <p>Remark. There are examples of two unbounded closed subsets in $\mathbb R^2$ (even in $\mathbb R$) whose sum is not closed.... Let $A=\mathbb N$ and $B=\{-n-\frac {1}{n+1}:n\in \mathbb N\}.$ So $\{-\frac {1}{n+1}:n \in \mathbb N\}\subset A+B$ but $0\not \in A+B.$....In $\mathbb R^2,$ take $A\times \{0\}$ and $B\times \{0\}.$</p>
3,097,816
<p>Show that the product of four consecutive odd integers is 16 less than a square.</p> <p>For the first part I first did <span class="math-container">$n=p(p+2)(p+4)(p+6) =(p^2+6p)(p^2+6p+8).$</span> I know that you are supposed to rearrange this to give an equation in the form <span class="math-container">$(ap^2+bp+c)^2$</span>, but I'm not sure how to. Also, once we get to that point, how do we prove it has to be odd numbers? </p>
EngineerInProgress
632,503
<p>Let those consecutive odd integers be <span class="math-container">$2p-3 , 2p-1 , 2p+1, 2p+3$</span>. Then their product is: <span class="math-container">$(2p-3)(2p+3)(2p-1)(2p+1)=(4p^2-9)(4p^2-1)=16p^4-40p^2+9$</span><br> Next step is to notice that <span class="math-container">$16p^4-40p^2+9+16=16p^4-40p^2+25=(4p^2-5)^2$</span></p>
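The algebraic identity $(2p-3)(2p-1)(2p+1)(2p+3) + 16 = (4p^2-5)^2$ can be spot-checked over a range of integers (a sketch; the function name is mine):

```python
from math import isqrt

def product_of_four_odds(p):
    """(2p-3)(2p-1)(2p+1)(2p+3): four consecutive odd integers."""
    return (2 * p - 3) * (2 * p - 1) * (2 * p + 1) * (2 * p + 3)

# product + 16 equals (4p^2 - 5)^2, hence is always a perfect square:
all_square = all(
    product_of_four_odds(p) + 16 == (4 * p * p - 5) ** 2
    for p in range(-100, 101)
)
```

For example, $11 \cdot 13 \cdot 15 \cdot 17 + 16 = 36481 = 191^2$ with $p = 7$.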
1,523,230
<p>Prove that for any primitive Pythagorean triple (a, b, c), exactly one of a and b must be a multiple of 3, and c cannot be a multiple of 3.</p> <p><strong>My attempt:</strong></p> <p>Let a and b be relatively prime positive integers.</p> <p>If $a\equiv \pm1 \pmod{3}$ and $b\equiv \pm1 \pmod{3}$, </p> <p>$c^2=a^2+b^2\equiv 1+1\equiv 2 \pmod{3}$</p> <p>This is impossible as the only quadratic residues modulo 3 are 0 and 1.</p> <p><em>So far, so good.</em></p> <p>If one of a, b is $\equiv 0 \pmod{3}$ and the other is $\equiv \pm1 \pmod{3}$,</p> <p>$c^2=a^2+b^2\equiv 0+1\equiv 1 \pmod{3}$</p> <p><em>This is the part I don't understand. Just because $c^2\equiv 1\pmod{3}$ doesn't mean that $c^2$ must be a perfect square. For example, $a=12$ and $b=13$ satisfy the above conditions but $c^2=a^2+b^2=313$, which isn't a perfect square.</em></p>
Mark Fischler
150,362
<p>Your first step was a good start: It proves that <em>at least one</em> of $a$ and $b$ must be a multiple of $3$. We will have to eliminate the case of <em>both</em> $a$ and $b$ being multiples of $3$, but that is easy: If $a=3p$ and $b=3q$ then $$c^2 = a^2+b^2 = (3p)^2 + (3q)^2 = 9(p^2+q^2) = 9m$$ so $c$ is also a multiple of $3$, in which case the triplet is not primitive, as $3$ divides all three of them.</p> <p>So exactly one of $a$ and $b$ must be a multiple of $3$.</p> <p>Your second step has shown that since exactly one of $a$ and $b$ is a multiple of $3$, $c^2 \equiv 1 \pmod{3}$. You don't care that not every number of the form $3k+1$ is a square; all you care about is that the number that happens to be $c^2$ must be expressible as $3k+1$. And this completes the proof that $c$ is not a multiple of $3$.</p>
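The conclusion can be verified empirically by generating primitive triples with Euclid's parametrization $a = m^2 - n^2$, $b = 2mn$, $c = m^2 + n^2$ (for coprime $m > n$ of opposite parity) and checking the divisibility pattern (a sketch; function names are mine):

```python
from math import gcd

def primitive_triples(limit):
    """Primitive Pythagorean triples (a, b, c) with c <= limit, via
    Euclid's parametrization a = m^2-n^2, b = 2mn, c = m^2+n^2
    with m > n, m, n coprime and of opposite parity."""
    for m in range(2, int(limit ** 0.5) + 1):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if c <= limit:
                    yield a, b, c

# Exactly one of a, b divisible by 3 (boolean != acts as XOR), c never:
mod3_pattern_holds = all(
    ((a % 3 == 0) != (b % 3 == 0)) and c % 3 != 0
    for a, b, c in primitive_triples(2000)
)
```

This is only a finite check, of course; the proof above covers all primitive triples.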
47,495
<p>Firstly, let me divulge. I've been doing a lot of research on the summation of two coprime numbers and unfortunately have failed to come up with the properties I'm seeking; it is my hope that someone here might be of some help.</p> <p>Let $(j, k)\in \mathbb{N}^2$ be coprime.</p> <p>Can $\Omega(j + k)$ or $\omega(j + k)$ be expressed as some function of $\omega(j)$, $\omega(k)$, $\lambda(j)$, $\lambda(k)$, $ j$, and/or $k$?</p> <p>If not, then maybe for the special case that $(j, k)$ are prime, or in particular, odd primes?</p> <p>If the answer is still no, any information regarding this topic is much appreciated.</p> <p>Note: The functions $\Omega$, $\omega$, and $\lambda$, are the total prime factors, distinct prime factors, and the Liouville function respectively.</p>
Chris Wuthrich
5,015
<p>Obviously, $\Omega(j+k)$ is a function in $j$ and $k$; certainly it would be difficult to write it in a closed form other than $\Omega(j+k)$. Similarly for $\omega(j+k)$. On the other hand, neither of them can be a function in $\omega(j)$, $\omega(k)$, $\lambda(j)$, $\lambda(k)$ and $j$ only. For instance, these take the same values for $(j,k)=(1,2)$ as for $(j,k)=(1,5)$, but $\omega(3)=\Omega(3)=1\neq \omega(6)=\Omega(6)=2$.</p>
1,397,538
<p>I know that if a distribution (generalized function) has zero derivative, then it is a constant. I also know the proof. But I have a hard time finding a <strong>reference</strong> which contains a statement of this fact. Any thoughts? Thanks.</p> <p>Update:</p> <p>I indeed found it in "Théorie des distributions" by L. Schwartz. Section 2.4, Theorem 1.</p>
David C. Ullrich
248,223
<p>Ok, you said you knew how to prove it. Others may not. And you may like this better than what you have (it's the second thing I always think of when this comes up, and <em>I</em> like it a lot better than the first thing I think of...)</p> <p>Say $u$ is a distribution and $u'=0$. By definition $u(\phi')=0$ for any test function $\phi$. <em>Hence</em> $u(\phi)=0$ for any test function $\phi$ with $\int\phi=0$.</p> <p>Fix $\psi_0$ with $\int\psi_0=1$, and let $c=u(\psi_0)$. Now for an arbitrary test function $\phi$, let $\alpha=\int\phi$. Then $u(\alpha\psi_0-\phi)=0$, which says $$u(\phi)=c\int\phi.$$Which is exactly what "$u=c$" means.</p> <hr> <p><strong>Detail:</strong> We used the following fact above: Given a test function $\phi$ on $\Bbb R$, there exists a test function $\psi$ with $\phi=\psi'$ if and only if $\int\phi=0$. In case this is not clear: First, if $\phi=\psi'$ then $\int\phi=\int\psi'=0$ because $\psi$ has compact support. Suppose on the other hand that $\int\phi=0$, and define $\psi(x)=\int_{-\infty}^x\phi$. Then $\psi'=\phi$ and hence $\psi$ is infinitely differentiable, while the fact that $\int\phi=0$ shows that $\psi$ has compact support.</p>
1,397,538
<p>I know that if a distribution (generalized function) has zero derivative, then it is a constant. I also know the proof. But I have a hard time finding a <strong>reference</strong> which contains a statement of this fact. Any thoughts? Thanks.</p> <p>Update:</p> <p>I indeed found it in "Théorie des distributions" by L. Schwartz. Section 2.4, Theorem 1.</p>
Lorenzo Pompili
884,561
<p>I am aware that this may be very bad timing, but I wanted to point out that there is an easy proof in <span class="math-container">$\mathbb{R}^N$</span> as well. Source: lecture notes by Professor P. D'Ancona (in Italian). <a href="https://www1.mat.uniroma1.it/people/dancona/IstAnSup/dispense-esercizi/4-Distribuzioni-20191024.pdf" rel="noreferrer">https://www1.mat.uniroma1.it/people/dancona/IstAnSup/dispense-esercizi/4-Distribuzioni-20191024.pdf</a></p> <p><strong>Proposition</strong>. Let <span class="math-container">$T \in \mathscr{D}^{\prime}\left(\mathbb{R}^{N}\right)$</span> such that <span class="math-container">$\nabla T=0 .$</span> Then there exists a constant <span class="math-container">$c$</span> such that <span class="math-container">$T=T_{c}$</span>, where, for <span class="math-container">$f\in L^1_{loc}$</span>, <span class="math-container">$T_f$</span> denotes the distribution represented by <span class="math-container">$f$</span>.</p> <p><em>Proof</em>. Let's define <span class="math-container">$f_{\varepsilon}:=\rho_{\varepsilon} * T$</span>, the standard regularization of <span class="math-container">$T$</span>. Then <span class="math-container">$$ \nabla f_{\varepsilon}=\nabla\left(\rho_{\varepsilon} * T\right)=\rho_{\varepsilon} *(\nabla T)=0, $$</span> thus <span class="math-container">$f_{\varepsilon}$</span> is a constant function <span class="math-container">$f_{\varepsilon}=c_{\varepsilon} .$</span> On the other hand, <span class="math-container">$T_{f_{\varepsilon}}=T_{c_{\varepsilon}}$</span> converges to <span class="math-container">$T$</span> in <span class="math-container">$\mathscr{D}^{\prime}(\mathbb{R}^{N})$</span>, hence the sequence of numbers <span class="math-container">$\left\{c_{\varepsilon}\right\}$</span> must be convergent, so there exists <span class="math-container">$c \in \mathbb{R}$</span> such that <span class="math-container">$T_{c_{\varepsilon}} \rightarrow T_{c}$</span>.</p>
1,615,732
<p>Consider $CW$-complex $X$ obtained from $S^1\vee S^1$ by glueing two $2$-cells by the loops $a^5b^{-3}$ and $b^3(ab)^{-2}$. As we can see in Hatcher (p. 142), abelianisation of $\pi_1(X)$ is trivial, so we have $\widetilde H_i(X)=0$ for all $i$. And if $\pi_2(X)=0$, we have that $X$ is $K(\pi,1)$.</p> <p>My question is: <em>how can one compute</em> $\pi_2(X)$? Computing homotopy groups is hard, what methods may i use?</p>
Andrey Ryabichev
257,176
<p>No, $\pi_2(X)=\mathbb Z^{119}$; it is obvious by considering the Euler characteristic.</p> <p>Denote the universal cover of $X$ by $\widetilde X$. In Hatcher we see (and I don't know how to prove it) that $\pi_1(X)$ has order $120$. So $\widetilde X$ has 120 $0$-cells, 240 $1$-cells, and 240 $2$-cells; therefore $\chi(\widetilde X)=120$. </p> <p>We know that $H_0(\widetilde X)=\mathbb Z$, $H_1(\widetilde X)=0$ and $H_2(\widetilde X)$ has no torsion. Since $\chi(\widetilde X)=1-0+\operatorname{rank} H_2(\widetilde X)=120$, we get $H_2(\widetilde X)=\pi_2(\widetilde X)=\pi_2(X)=\mathbb Z^{119}$.</p>
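<p>The counting argument can be spelled out as plain arithmetic (cell counts as in the answer):</p>

```python
# X has one 0-cell, two 1-cells (a and b) and two 2-cells.
chi_X = 1 - 2 + 2                     # Euler characteristic of X
order_pi1 = 120                       # |pi_1(X)|, as quoted from Hatcher

# The universal cover has 120 cells lying over each cell of X:
chi_cover = order_pi1 * 1 - order_pi1 * 2 + order_pi1 * 2
assert chi_cover == order_pi1 * chi_X == 120

# chi(cover) = rank H_0 - rank H_1 + rank H_2 = 1 - 0 + rank H_2
rank_H2 = chi_cover - 1
assert rank_H2 == 119                 # so pi_2(X) = H_2(cover) = Z^119
```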
119,686
<p>I have a very dumb question. Let $X = \mathbb{P}^2_k = Proj(k[x,y,z])$ where $k$ is algebraically closed. We have an invertible sheaf $\mathcal{O}(2)$ on $X$. Its space of global sections contains the elements $x^2, y^2, z^2, xy, yz, xz$. </p> <p>It seems to me (by my calculations), however, that $\mathcal{O}(2)$ is generated by $x^2, y^2$, and $z^2$. Meaning, these 3 global sections generate the stalks at each point of $X$. I'm suspicious, though. Is this true?</p> <p>David</p>
Georges Elencwajg
3,217
<p>One should not confuse the two different statements:<br> a) The sections $x^2,y^2,z^2$ generate the $k$-vector space $\Gamma(\mathbb P^2_k,\mathcal O(2))$, which happens to be false.<br> b) The sections $x^2,y^2,z^2$ generate the line bundle $O(2)$, which happens to be true. </p> <p>a) The first statement probably needs no further elaboration since the $k$-vector space $\Gamma(\mathbb P^2_k,\mathcal O(2))$ is the 6-dimensional space $k[x,y,z]_2$ of homogeneous polynomials of degree 2, which obviously cannot be generated by only 3 vectors. </p> <p>b) And what does the second statement even mean? It means that the fibre of $\mathcal O_{\mathbb P^2_k}(2)$ at any point $P\in \mathbb P^2_k$ is generated by the values of the three sections at $P$.<br> Since the fibres at any $P\in \mathbb P^2_k$ are 1-dimensional, the condition just states that the three sections do not vanish simultaneously at $P$.<br> And this is indeed the case since at any $P\in \mathbb P^2_k$ it is impossible to have simultaneously $x^2=y^2=z^2=0$. </p> <p>[It is also vital to carefully distinguish between the stalk $(\mathcal O(2))_P$, which is a free module of rank one over $\mathcal O_P$ and thus an infinite-dimensional vector space over $k$, and the fibre $\mathcal O(2)_P\otimes_{O_P} k$ which is a 1-dimensional vector space over $k$.]</p>
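<p>Both the dimension count in a) and the common-zero fact in b) can be sanity-checked symbolically (a Python/SymPy sketch; this only confirms the two elementary facts, not the sheaf-theoretic statements themselves):</p>

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# b) The only common zero of x^2, y^2, z^2 is the origin, which does not
#    correspond to a point of P^2 -- so the sections never vanish simultaneously.
sols = sp.solve([x**2, y**2, z**2], [x, y, z], dict=True)
assert len(sols) == 1 and all(v == 0 for v in sols[0].values())

# a) The space of degree-2 forms k[x,y,z]_2 is 6-dimensional, so it cannot
#    be spanned by the 3 vectors x^2, y^2, z^2.
degree2_monomials = set(sp.itermonomials([x, y, z], 2, 2))
assert len(degree2_monomials) == 6
```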
3,048,063
<p>I have to prove this proposition by mathematical induction:</p> <p><span class="math-container">$$\left(x^n+1\right)&lt;\left(x+1\right)^n \quad \forall n\geq 2 \quad \text{and}\quad x&gt;0,\,\, n \in \mathbb{N}$$</span></p> <p>I started the proof with <span class="math-container">$n=2$</span>:</p> <p><span class="math-container">$\left(x^{2}+1\right)&lt;\left(x+1\right)^{2}$</span></p> <p><span class="math-container">$x^{2}+1&lt;x^{2}+2x+1$</span></p> <p>We see that</p> <p><span class="math-container">$x^{2}+1-x^{2}-1&lt;2x$</span></p> <p><span class="math-container">$0&lt;2x$</span></p> <p>Then</p> <p><span class="math-container">$x&gt;0$</span></p> <p>And this holds for <span class="math-container">$n=2$</span>.</p> <p>Now for <span class="math-container">$\quad n=k \quad$</span> (Hypothesis)</p> <p><span class="math-container">$\left(x^{k}+1\right)&lt;\left(x+1\right)^{k}$</span></p> <p>We have</p> <p><span class="math-container">$\displaystyle x^{k}&lt;\left(x+1\right)^{k}-1\ldots \quad (1)$</span></p> <p>Then, we must prove for <span class="math-container">$\quad n= k+1 \quad$</span> (Thesis):</p> <p><span class="math-container">$x^{k+1}+1&lt;\left(x+1\right)^{k+1}$</span></p> <p>We rewrite this expression as:</p> <p><span class="math-container">$x^{k+1}&lt;\left(x+1\right)^{k+1}-1\ldots \quad (2)$</span></p> <p>According to the steps of mathematical induction, the next step would be to use the hypothesis <span class="math-container">$(1)$</span> to prove the thesis <span class="math-container">$(2)$</span>. 
It is here that I hesitate about whether what follows is correct:</p> <p><strong>First way:</strong></p> <p>We multiply hypothesis <span class="math-container">$(1)$</span> by <span class="math-container">$\left(x+1\right)$</span> and we have:</p> <p><span class="math-container">$x^{k}\left(x+1\right)&lt;\left[\left(x+1\right)^{k}-1\right]\left(x+1\right)$</span></p> <p><span class="math-container">$x^{k}\left(x+1\right)&lt;\left(x+1\right)^{k+1}-\left(x+1\right)$</span></p> <p>Dividing both sides of the last expression by <span class="math-container">$\left(x+1\right)$</span>, we recover expression <span class="math-container">$(1)$</span>:</p> <p><span class="math-container">$x^{k}&lt;\left(x+1\right)^{k}-1$</span></p> <p><strong>Second way:</strong></p> <p>If we multiply <span class="math-container">$(1)$</span> by <span class="math-container">$x$</span> we have:</p> <p><span class="math-container">$xx^{k}&lt;x\left[\left(x+1\right)^{k}-1\right]$</span></p> <p><span class="math-container">$x^{k+1}&lt;x\left(x+1\right)^{k}-x$</span></p> <p>And if we again divide the last expression by <span class="math-container">$x$</span>, we arrive at the same result:</p> <p><span class="math-container">$x^{k}&lt;\left(x+1\right)^{k}-1$</span></p> <p>I cannot find a way to complete the proof; an alternative would be to use Newton's binomial theorem, but the point of the exercise is the technique of mathematical induction. If someone can help me, I will be very grateful! Thanks -Víctor Hugo-</p>
Word Shallow
466,835
<p>Assume the statement is true for <span class="math-container">$n=k$</span>; we need to prove it for <span class="math-container">$n=k+1$</span>.</p> <p>That is, we need to prove <span class="math-container">$x^{k+1}+1&lt;\left(x+1\right)^{k+1}$</span>.</p> <p>We have: <span class="math-container">$RHS=(x+1)\cdot (x+1)^k&gt;(x+1)\cdot (x^k+1)$</span> (by the induction hypothesis)</p> <p><span class="math-container">$=x(x^k+1)+(x^k+1)=x^{k+1}+x^k+x+1$</span></p> <p><span class="math-container">$&gt;x^{k+1}+1=LHS$</span> (since <span class="math-container">$x&gt;0$</span>)</p>
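<p>A quick numerical spot-check of the inequality over a grid of sample points (Python, exact rational arithmetic; the sample values are arbitrary):</p>

```python
from fractions import Fraction

# Check (x^n + 1) < (x + 1)^n for n >= 2 and several x > 0, exactly.
samples = [Fraction(1, 10), Fraction(1, 2), Fraction(1), Fraction(3, 2), Fraction(100)]
for n in range(2, 12):
    for x in samples:
        assert x**n + 1 < (x + 1)**n, (x, n)
```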
4,623,748
<p>Let <span class="math-container">$X,Y$</span> be smooth vector fields on the unit circle <span class="math-container">$M = S^1$</span> such that <span class="math-container">$[X,Y] = 0$</span>, i.e. (treating tangent vectors as derivations) <span class="math-container">$X(x)(Yf) = Y(x)(Xf)$</span> for all <span class="math-container">$x\in M$</span>. Assuming that <span class="math-container">$X(x)\ne 0$</span> for all <span class="math-container">$x\in M$</span> I want to show that there is a <span class="math-container">$c\in \Bbb R$</span> such that <span class="math-container">$Y = cX$</span>.</p> <p>My attempt: for all <span class="math-container">$x\in M$</span> we have <span class="math-container">$X(x), Y(x) \in T_x M$</span> which is one dimensional, and since <span class="math-container">$X(x) \ne 0$</span> there is a <span class="math-container">$c(x)\in \Bbb R$</span> such that <span class="math-container">$Y(x) = c(x) X(x)$</span>. Assuming this <span class="math-container">$c: M\to \Bbb R$</span> is smooth (I don't know how to show this) we get that <span class="math-container">$Y(x)(Xf) = X(x)(Yf) = X(x)(cXf) = X(x)(c) (Xf)(x) + c(x) X(x)(Xf) = X(x)(c)X(x)(f) + Y(x)(Xf)$</span>, so <span class="math-container">$X(x)(c) X(x)(f) = 0$</span>. This holds for all smooth <span class="math-container">$f: M\to \Bbb R$</span>, so since <span class="math-container">$X(x)\ne 0$</span> we get <span class="math-container">$X(x)(c) = 0$</span>. I'm not sure how to proceed or if this is even correct.</p>
Lee Mosher
26,501
<p>Assuming <span class="math-container">$z^2+|z|=0$</span> it follows that <span class="math-container">$$-z^2=|z| $$</span> Taking the modulus of both sides we get <span class="math-container">$$|-z^2| = |(|z|)| \implies |z|^2=|z| \implies |z|=0 \,\,\text{or}\,\, 1 $$</span></p> <p>Case 1: If <span class="math-container">$|z|=0$</span> then <span class="math-container">$z=0$</span>. And <span class="math-container">$0$</span> is indeed a solution.</p> <p>Case 2: If <span class="math-container">$|z|=1$</span> then <span class="math-container">$z^2=-1$</span> so <span class="math-container">$z=-i$</span> or <span class="math-container">$+i$</span>. And <span class="math-container">$-i$</span>, <span class="math-container">$+i$</span> are indeed both solutions.</p>
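<p>A quick numerical confirmation of the three roots (Python):</p>

```python
# z^2 + |z| = 0 has exactly the roots found above: 0, i and -i.
for z in (0, 1j, -1j):
    assert z * z + abs(z) == 0

# A point with |z| = 1 but z^2 != -1 is not a root:
assert (1 + 0j) ** 2 + abs(1 + 0j) == 2
```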
687,897
<p>I need help with a differential equation, the trouble is I don't think it's separable and I have tried and failed to apply the method of characteristics to figure it out. z is also bound between zero and one.</p> <p>$$ \frac{\partial u}{\partial x}+z\frac{\partial u}{\partial y}-Cz\frac{\partial u}{\partial z}=0 $$</p> <p>The problem with characteristics is I couldn't figure out how to apply the boundary conditions.</p> <p>$$ u(0,y,z)=\delta(y)\delta(z-1) $$</p> <p>Any and all help would be greatly appreciated.</p>
doraemonpaul
30,938
<p>Assume $C\neq0$ for the key case:</p> <p>Follow the method in <a href="http://en.wikipedia.org/wiki/Method_of_characteristics#Example" rel="nofollow">http://en.wikipedia.org/wiki/Method_of_characteristics#Example</a>:</p> <p>$\dfrac{dx}{dt}=1$ , letting $x(0)=0$ , we have $x=t$</p> <p>$\dfrac{dz}{dt}=-Cz$ , letting $z(0)=z_0$ , we have $z=z_0e^{-Ct}=z_0e^{-Cx}$</p> <p>$\dfrac{dy}{dt}=z=z_0e^{-Ct}$ , letting $y(0)=y_0$ , we have $y=y_0+\dfrac{z_0}{C}-\dfrac{z_0e^{-Ct}}{C}=y_0+\dfrac{z_0}{C}-\dfrac{z_0e^{-Cx}}{C}$</p> <p>$\dfrac{du}{dt}=0$ , letting $u(0)=f(y_0,z_0)$ , we have $u(x,y,z)=f(y_0,z_0)=f\left(y+\dfrac{z}{C}-\dfrac{ze^{Cx}}{C},ze^{Cx}\right)$</p> <p>$u(0,y,z)=\delta(y)\delta(z-1)$ :</p> <p>$f(y,z)=\delta(y)\delta(z-1)$</p> <p>$\therefore u(x,y,z)=\delta\left(y+\dfrac{z}{C}-\dfrac{ze^{Cx}}{C}\right)\delta(ze^{Cx}-1)$</p>
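<p>One can verify symbolically that the general solution found from the characteristics satisfies the PDE, with a generic smooth <span class="math-container">$F$</span> standing in for the delta-function data (a Python/SymPy sketch; substituting a smooth <span class="math-container">$F$</span> for the product of deltas is my simplification):</p>

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
C = sp.symbols('C', nonzero=True)
F = sp.Function('F')  # generic smooth function of the two characteristic invariants

# u(x, y, z) = F(y + z/C - z*exp(C*x)/C, z*exp(C*x)), as derived above
u = F(y + z/C - z*sp.exp(C*x)/C, z*sp.exp(C*x))

pde = sp.diff(u, x) + z*sp.diff(u, y) - C*z*sp.diff(u, z)
assert sp.simplify(sp.expand(pde)) == 0   # the PDE is satisfied identically
```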
19,478
<p>Let $K$ and $L$ be two subfields of some field. If a variety is defined over both $K$ and $L$, does it follow that the variety can be defined over their intersection?</p>
Pete L. Clark
1,149
<p>No, not in general. Assuming we start with a variety over an algebraically closed field, this would be true if the variety could be defined over its field of moduli, which would then be the unique minimal field of definition. (In particular it is true if the variety has no nontrivial automorphisms.) Note also that it holds trivially for genus zero curves -- there are no moduli! -- and for genus one curves, by the theory of the $j$-invariant.</p> <p>Therefore the simplest counterexample seems to be a genus $2$ curve over $\mathbb{Q}$. Mestre has shown that the obstruction to such a curve being defined over its field of moduli is a quaternion algebra over $\mathbb{Q}$ -- i.e., the curve can be defined over its field of moduli iff this quaternion algebra is split, i.e., isomorphic to a $2 \times 2$ matrix algebra.</p> <p>Let $C$ be a genus $2$ curve with field of moduli $\mathbb{Q}$ and nontrivial Mestre obstruction. (Such curves certainly exist. Let me know if you want an explicit example.) Then $C$ may be defined over a quadratic field $K$ iff $K$ splits the quaternion algebra. By the structure theory of quaternion algebras, $C$ can therefore be defined over infinitely many quadratic fields, but not over $\mathbb{Q}$. </p> <p><b>Addendum</b>: Here is a nice paper by S. Baba and H. Granath:</p> <p><a href="http://www.math.kau.se/granath/research/qm.pdf" rel="noreferrer">http://www.math.kau.se/granath/research/qm.pdf</a></p> <p>It deals with the Mestre obstruction on certain genus 2 curves and includes specific examples. </p>
1,164,040
<p>Let $A$ be a $m \times n$ matrix. Determine whether or no the set $W= \{y : Ay=0\}$ is a vector space. This proof involves nullspace work and another way of asking it is also proving that $W$ is the nullspace of $A$.</p> <p>I think you can solve this with the use of the subspace theorem being that you can just prove axioms $A1$ ($U+V =$ something in the set) and $M1$ ($kU=$ something in the set) The problem that I am having here is how to set this up so that I can prove this.</p>
Auslander
218,333
<p>Yes, the subspace theorem is the way to go here. You first need to show that $W$ is not empty. This is pretty easy (the zero vector should come in handy). </p> <p>Next, suppose that $x,y\in W$. This means that $Ax=0$ and $Ay=0$. Therefore, $$A(x+y)=Ax+Ay=0+0=0.$$ Therefore $W$ is closed with respect to addition. Now suppose $x\in W$ and $c\in\mathbb{R}$. Then $Ax=0$ so that $$A(c\cdot x)=cAx=c\cdot 0=0.$$ Therefore $W$ is closed with respect to scalar multiplication.</p>
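<p>For a concrete sanity check of all three conditions (a Python/SymPy sketch; the matrix is an arbitrary rank-one example of my choosing):</p>

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3], [2, 4, 6]])        # arbitrary rank-1 example, m = 2, n = 3
basis = A.nullspace()                        # basis of W = {y : Ay = 0}
u, v = basis
zero = sp.zeros(A.rows, 1)

assert A * sp.zeros(A.cols, 1) == zero       # W contains the zero vector
assert A * (u + v) == zero                   # closed under addition
assert A * (sp.Rational(7, 3) * u) == zero   # closed under scalar multiplication
```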
4,502,150
<p>Is there a &quot;direct&quot; way to prove this equality?</p> <p><span class="math-container">$\displaystyle \binom{1/2}{n}=\frac{2(-1)^{n-1}}{n4^n}\binom{2n-2}{n-1}$</span></p> <p>I am trying to skip the induction. Maybe there is a rule or formula that will help me.</p> <p>Thank you</p>
G Cab
317,234
<p>Another way:</p> <p>Let's rewrite part of the RHS in terms of gamma <span class="math-container">$$ \frac{1}{n}\left( \begin{array}{c} 2n - 2 \\ n - 1 \\ \end{array} \right) = \frac{{\Gamma \left( {2n - 1} \right)}} {{n\Gamma \left( n \right)\Gamma \left( n \right)}} = \frac{{\Gamma \left( {2n - 1} \right)}}{{\Gamma \left( {n + 1} \right)\Gamma \left( n \right)}} $$</span></p> <p>To the above we can apply the <a href="https://en.wikipedia.org/wiki/Multiplication_theorem#Gamma_function%E2%80%93Legendre_formula" rel="nofollow noreferrer">Gamma duplication formula</a>: <span class="math-container">$$ \Gamma \left( {2\,z} \right) = \frac{{2^{\,2\,z - 1} }}{{\sqrt \pi }}\Gamma \left( z \right)\Gamma \left( {z + 1/2} \right) = 2^{\,2\,z - 1} \frac{{\Gamma \left( z \right)\Gamma \left( {z + 1/2} \right)}}{{\Gamma \left( {1/2} \right)}} $$</span> wherefrom <span class="math-container">$$ \Gamma \left( {2n - 1} \right) = \Gamma \left( {2\left( {n - 1/2} \right)} \right) = 2^{\,2n - 2} \frac{{\Gamma \left( {n - 1/2} \right)\Gamma \left( n \right)}}{{\Gamma \left( {1/2} \right)}} $$</span> so getting</p> <p><span class="math-container">$$ \begin{array}{l} \frac{1}{n}\left( \begin{array}{c} 2n - 2 \\ n - 1 \\ \end{array} \right) = \frac{{\Gamma \left( {2n - 1} \right)}}{{\Gamma \left( {n + 1} \right)\Gamma \left( n \right)}} = 2^{\,2n - 2} \frac{{\Gamma \left( {n - 1/2} \right)}}{{\Gamma \left( {1/2} \right)\Gamma \left( {n + 1} \right)}} = \\ = 2^{\,2n - 2} \frac{{\Gamma \left( {n - 1/2} \right)}} {{\left( { - \frac{1}{2}} \right)\Gamma \left( { - 1/2} \right)\Gamma \left( {n + 1} \right)}} = - 2^{\,2n - 1} \left( \begin{array}{c} n - 3/2 \\ n \\ \end{array} \right) = \\ = - 2^{\,2n - 1} \left( { - 1} \right)^n \left( \begin{array}{c} 1/2 \\ n \\ \end{array} \right) \\ \end{array} $$</span></p> <p>where:</p> <ul> <li>1st line : application of the duplication formula,</li> <li>2nd 
line: passing from <span class="math-container">$\Gamma(1/2)$</span> to <span class="math-container">$\Gamma(-1/2)$</span> to write the fraction as a binomial</li> <li>3rd line : applying the &quot;upper negation&quot;.</li> </ul>
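<p>The identity can also be checked exactly for small <span class="math-container">$n$</span> with rational arithmetic (a quick Python check; the helper name is mine):</p>

```python
from fractions import Fraction
from math import comb, prod

def binom_half(n):
    # generalized binomial coefficient C(1/2, n) = (1/2)(1/2-1)...(1/2-n+1) / n!
    return prod(Fraction(1, 2) - i for i in range(n)) / prod(range(1, n + 1))

for n in range(1, 16):
    rhs = Fraction(2 * (-1) ** (n - 1), n * 4 ** n) * comb(2 * n - 2, n - 1)
    assert binom_half(n) == rhs, n
```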
4,930
<p>This is not an urgent question, but something I've been curious about for quite some time.</p> <p>Consider a Boolean function in <em>n</em> inputs: the truth table for this function has 2<sup><em>n</em></sup> rows.</p> <p>There are uses of such functions in, for example, computer graphics: the so-called ROP3s (ternary raster operations) take three inputs: <em>D</em> (destination), <em>S</em> (source) and <em>M</em> (mask). All three are planes of bits, and the result is returned to the destination plane of bits. Now, this is only really applicable to 2-colour (black &amp; white) or 8-colour displays (regarding the red, green, blue planes separately). A given ROP3 is given a code from 0 to 255, a number which represents the pattern of the output bit, down the rows of the table. Similarly, ROP2s have a value from 0 to 15. ROPs may also be given names, especially when the logical connective of which the truth table is an extension is a simple one, such as AND, XOR or SRC (source).</p> <p>An expression for any truth table (or ROP) may be found in terms of an expressively-complete set of connectives (usually unary or binary, sometimes nullary too). [Well, I suppose this statement is itself a tautology!] For example, the sets {NOT, AND}, {NOT, OR}, {NAND} are each expressively complete.</p> <p>One commonly used (redundant) expressively complete set is {NOT, AND, OR}. Two particularly common canonical sets of expressions over this set are the conjunctive normal form (CNF) and disjunctive normal form (DNF). Both of these are rather verbose.</p> <p>There is also a notion of a minimal expression over a set of connectives, defined variously. The count might be of instances of a variable or of connectives. 
There might be a bound to the depth or breadth of the expression (perhaps).</p> <p>The Boolean connectives might be extended to the set <strong><em>R</em></strong>[0,1], for fuzzy logic; that is, the connectives are over <strong><em>R</em></strong>[0,1], with the restriction to {0,1} being the usual Boolean function. There are many ways to do this; it is possible to preserve some but not all the usual properties (e.g. associativity, idempotency) of the connectives. [NOT(x) would usually be interpreted as (1−x); AND(x,y) could be (x*y) or (min{x,y}), or in many other ways.]</p> <p>Such extensions may be used, for example, to give a meaning to a ROP3 as applied to 256-level monochrome images (to combine or 'mix' such images) or to planes of full-colour images. (Necessarily, some truncation or 'quantisation' must take place.)</p> <p>However, two expressions that have the same function over {0,1} will generally have different functions over <strong><em>R</em></strong>[0,1]. Rather than choosing some arbitrary expression, it would be an advantage to choose some canonical or minimal expression.</p> <p>How much is known about this field? Are there any good on-line references? I'm particularly interested in definitions of, theorems about, and algorithms for the generation of minimal or canonical expressions.</p>
Darsh Ranjan
302
<p>If you play around with this a bit, you'll probably come to the same conclusion that I did a long time ago, namely that there is indeed a <b>one true extension</b> of boolean functions $\{0,1\}^n\to \{0,1\}$ to functions $[0,1]^n\to [0,1]$. There are many simple characterizations of this extension (which tells you that it's important). The most intuitive one for me is to relax boolean logic to probability: take a boolean function of some number of variables, like $$y \text{ or not } (x \text{ and } z).$$ Now suppose we flip possibly biased coins independently to determine the truth values of $x,y,z$. The probability of $x$ being true is $p$, the probability of $y$ being true is $q$, and the probability of $z$ being true is $r$, so we have $0\leq p,q,r \leq 1$. Now consider our expression $y \text{ or not } (x \text{ and } z)$: what is the probability that this will be true? It's some function of $p,q,r$ that takes values in $[0,1]$. If you work it out (which is straightforward), you'll find that the function is $$f(p,q,r) = 1-p(1-q)r.$$ You should check that when you restrict $p,q,r$ to be 0 or 1, this agrees with the original boolean function when the corresponding boolean variables are respectively false or true. One of the other main characterizations of this extension is as the unique multiaffine interpolating polynomial. ("Multiaffine" means that if you set all but one of the variables, say $p$, to constant, then you get a linear function $ap + b$ of $p$ for some constants $a$ and $b$ (which depend on the constants you chose for the other variables)). </p> <p>Unfortunately, <i>computing</i> this extension for a boolean function of many variables is easily seen to be #P-hard, which means it's one of those hard problems that people hate because they can't solve it efficiently but can't prove that they can't. 
(Indeed, any easily computable canonical representation (extending to [0,1] or otherwise) applicable to all boolean functions would prove P=NP, so your problem as you stated it is pretty much hopeless. This particular canonical extension happens to be powerful enough to be #P-hard.) However, if your boolean expressions have only a few variables, then there is a very easy recursive algorithm to compute the extension. Here it is: $$f(p_1,p_2,\ldots,p_n) = p_1 f(1,p_2,\ldots,p_n) + (1-p_1)f(0,p_2,\ldots,p_n).$$ $f(1,p_2,\ldots,p_n)$ and $f(0,p_2,\ldots,p_n)$ are respectively the extensions in which you set the first variable to true and to false, which you can compute recursively. The basis case is when $n=0$, in which case you have a nullary function 0 or 1. </p> <p>[<b>Edit:</b> If your boolean function is actually represented literally as a truth table, then there's no difficulty in computing its canonical extension to real values; actually, the recursive algorithm given above has $O(N)$ complexity, where $N=2^n$ is the number of rows in the truth table. Of course, <i>producing</i> a truth table for a function of many variables is time- and space-consuming, but if you already have the truth table, then you've already committed to it.]</p> <p>There's actually one special case in which the extension is easy to compute for any number of variables, which may or may not help you: if you can write your boolean expression so each variable only appears once, then you can compute the extension in linear time. You use the transformation rules</p> <ul> <li>$(\text{not } A)\to (1-A)$,</li> <li>$(A \text{ and } B)\to (AB)$,</li> <li>$(A \text{ or } B)\to (A + B - AB)$</li> </ul> <p>and simply recurse on the expression. My example $y \text{ or not } (x \text{ and } z)$ has this property, so let's use this procedure on it: </p> <p>$$ y \text{ or } (\text {not } (x \text{ and } z)) \to y + (1-xz) - y(1-xz), $$ which you can see is the same as $1-x(1-y)z$. 
This, very unfortunately, fails miserably when you have multiple occurrences of any variable (just try $x \text{ and not } x$). However, the situation is just slightly more general than may appear initially: note that I gave transformation rules for AND, OR, and NOT. I did that because it's often convenient to express boolean expressions in terms of those three. You can in fact formulate a corresponding transformation rule for the boolean function $f$ of your choice (which is identical to the canonical extension of $f$: e.g., $1-x$ is the canonical extension of $\text{not }x$ and $x + y - xy$ is the canonical extension of $x\text{ or }y$), and you can apply that transformation to $f(A_1,\ldots,A_n)$ <em>provided that no variable appears in more than one subexpression $A_1,\ldots, A_n$</em>. For example, exclusive-or might be a useful boolean function, so you can use the transformation rule </p> <ul> <li>$(A \text{ xor } B)\to A + B - 2AB$.</li> </ul> <p>This way, you can effectively build up a library of your favorite boolean functions and their canonical extensions, which you can use as transformation rules whenever you apply those functions to subexpressions with disjoint variable sets. </p> <p>Finally, let me give you one "fun" way to compute the canonical extension based on the transformation rules that is tremendously useful when computing a canonical extension by hand: it turns out that you can apply any canonical transformation rule <em>arbitrarily</em>, even for arguments with non-disjoint sets of variables, expand everything out into a sum of monomials, and use the additional transformation </p> <ul> <li>$(x^2) \to x$.</li> </ul> <p>as many times as you can until you're left with a multiaffine polynomial. (In fact, you're free to apply that transformation even without expanding your polynomial: e.g., $(1 -xy)^2 \to (1-xy)$ is perfectly valid.) 
To illustrate, let's apply that to the exclusive-or function: </p> <p>$$ x \text{ xor } y = (x \text{ or } y) \text{ and not } (x \text{ and } y).$$</p> <p>If we blindly apply the transformations for AND, OR, and NOT, we get: </p> <p>$$ x \text{ xor } y \to (x + y - xy)(1-xy). $$ Expand that out into monomials: $$ (x + y - xy)(1-xy) = x + y - xy - x^2y - xy^2 + x^2y^2.$$Now replace $A^2$ by $A$ whenever possible: $$x + y - xy - x^2y - xy^2 + x^2y^2 \to x + y - xy - xy - xy + xy = x + y - 2xy,$$ which agrees with my canonical expression/transformation rule for XOR stated above. </p>
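<p>The recursive definition above is easy to machine-check (a small Python sketch; function and variable names are my own, and exact rationals stand in for probabilities):</p>

```python
from fractions import Fraction

def extend(f, ps):
    """Canonical multiaffine extension via the recursion
    f(p1,...,pn) = p1*f(1,p2,...,pn) + (1-p1)*f(0,p2,...,pn)."""
    if not ps:
        return Fraction(f())          # basis case: nullary function, value 0 or 1
    p, rest = ps[0], ps[1:]
    return (p * extend(lambda *a: f(1, *a), rest)
            + (1 - p) * extend(lambda *a: f(0, *a), rest))

p, q = Fraction(1, 3), Fraction(2, 5)
assert extend(lambda a, b: a ^ b, [p, q]) == p + q - 2*p*q   # xor -> p + q - 2pq
assert extend(lambda a, b: a | b, [p, q]) == p + q - p*q     # or  -> p + q - pq
assert extend(lambda a, b: a & b, [p, q]) == p * q           # and -> pq
assert extend(lambda a: 1 - a, [p]) == 1 - p                 # not -> 1 - p
```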
4,465,504
<p>Take an invertible formal series <span class="math-container">$f\in \mathbb{Z}_p[[T]]$</span> of inverse <span class="math-container">$g\in \mathbb{Z}_p[[T]]$</span> and let <span class="math-container">$x\in \mathbb{Z}_p$</span> such that the value of <span class="math-container">$f$</span> evaluated at <span class="math-container">$x$</span> exists. I have a few questions:</p> <ol> <li>Is <span class="math-container">$f(x)$</span> a <span class="math-container">$p$</span>-adic integer ? Since <span class="math-container">$\mathbb{Z}_p$</span> is a closed set I thought it was the case, but I have some doubts.</li> <li>Does <span class="math-container">$g(x)$</span> also exist ?</li> <li>If <span class="math-container">$g(x)$</span> does exist, is it the inverse of <span class="math-container">$f(x)$</span> ? I was thinking of evaluating at <span class="math-container">$x$</span> in the identity <span class="math-container">$f(T) g(T) =1$</span>, but still not sure of myself.</li> </ol> <p>I'm new to the p-adic world and to the formal series world, so thanks a lot.</p>
Lubin
17,760
<p>The answer of @reuns was perfect, but telegraphic. If you prefer prolixity, try this:</p> <p>You’ll find that when <span class="math-container">$R$</span> is a unital ring, the set of elements of <span class="math-container">$R$</span> that have inverses (i.e. reciprocals) is often denoted <span class="math-container">$R^\times$</span>. That is for <span class="math-container">$f\in R$</span>, <span class="math-container">$f$</span> is in <span class="math-container">$R^\times$</span> if and only if there is a <span class="math-container">$g\in R$</span> such that <span class="math-container">$fg=1_R$</span> (and if <span class="math-container">$R$</span> isn’t commutative, I guess you also need to specify <span class="math-container">$gf=1$</span>).</p> <p>Let’s suppose that <span class="math-container">$R$</span> is commutative, and look at the ring <span class="math-container">$R[[T]]$</span>, which I hope you know is complete under the <span class="math-container">$(T)$</span>-adic topology. You can think of this as saying that if you have an expression <span class="math-container">$\sum_mf_m(T)$</span> whose terms are in <span class="math-container">$R[[T]]$</span>, and if the initial degrees of the separate series <span class="math-container">$f_m$</span> have <span class="math-container">$\lim_m\bigl(\text{init deg}(f_m)\bigr)\to\infty$</span>, then the sum is convergent, with limit a good element of <span class="math-container">$R[[T]]$</span>.</p> <p>In particular, if you have a series with constant term equal to <span class="math-container">$1_R$</span>, i.e. <span class="math-container">$f(T)=1-\sum_1^\infty a_mT^m$</span>, in other words of the form <span class="math-container">$1-Tg(T)$</span> for <span class="math-container">$g\in R[[T]]$</span>, then the reciprocal of <span class="math-container">$f$</span> is easily written as <span class="math-container">$$ \frac1f=1+\sum_{n=1}^\infty\bigl(Tg(T)\bigr)^n\,, $$</span> as you know from high-school. 
I’ll leave it to you now to show that if the constant term of <span class="math-container">$f(T)$</span> is in <span class="math-container">$R^\times$</span>, then <span class="math-container">$f\in\bigl(R[[T]]\bigr)^\times$</span>.</p> <p>What about your question about evaluating <span class="math-container">$f(T)$</span> somewhere in <span class="math-container">$R$</span>? I’ll specialize to the case <span class="math-container">$R=\Bbb Z_p$</span>, as you have done. If you have <span class="math-container">$f(T)\in\Bbb Z[[T]]$</span>, then you’ll get a <span class="math-container">$p$</span>-adically convergent series when you plug in for <span class="math-container">$T$</span> <em>any</em> element <span class="math-container">$x\in p\Bbb Z_p$</span>. I’ll let you check that. On the other hand, if <span class="math-container">$x\notin p\Bbb Z_p$</span>, the convergence of the associated series of elements of <span class="math-container">$\Bbb Z_p$</span> is not guaranteed at all. Might converge, might not, mostly depending on whether the coefficients of the original series get <span class="math-container">$p$</span>-adically small as you look at them farther and farther out.</p> <p>What you can show, however, is that if <span class="math-container">$\xi$</span> is a chosen element of <span class="math-container">$p\Bbb Z_p$</span>, then the evaluation map <span class="math-container">$\text{ev}_\xi:\Bbb Z_p[[T]]\to\Bbb Z_p$</span> by the rule <span class="math-container">$\text{ev}_\xi\bigl(f(T)\bigr)=f(\xi)$</span> is continuous. We’re talking about the <span class="math-container">$(T)$</span>-adic topology on <span class="math-container">$\Bbb Z_p[[T]]$</span> and the <span class="math-container">$p$</span>-adic topology on <span class="math-container">$\Bbb Z_p$</span>. 
This means that when <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are reciprocals of each other, so that <span class="math-container">$f(T)g(T)=1$</span>, then also <span class="math-container">$f(\xi)g(\xi)=1$</span>. And that answers your other question, I hope.</p>
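The geometric-series inversion above is easy to check by machine. Here is a small sketch (code and names are mine, not from the answer): working mod $T^8$ in $\Bbb Z[[T]]$, invert $f(T)=1-Tg(T)$ with $g(T)=1+T$ by summing the powers of $Tg(T)$; the coefficients of the inverse come out as Fibonacci numbers, as they should for $1/(1-T-T^2)$.

```python
N = 8  # work with power series truncated mod T^N

def mul(p, q):
    """Multiply two truncated power series (lists of coefficients)."""
    r = [0] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < N:
                r[i + j] += a * b
    return r

f = [1, -1, -1] + [0] * (N - 3)   # f(T) = 1 - T - T^2 = 1 - T*g(T)
tg = [0, 1, 1] + [0] * (N - 3)    # T*g(T) = T + T^2

inv = [1] + [0] * (N - 1)         # 1/f = 1 + sum_{n>=1} (T*g(T))^n
power = [1] + [0] * (N - 1)
for _ in range(1, N):             # (T*g)^n vanishes mod T^N for n >= N
    power = mul(power, tg)
    inv = [a + b for a, b in zip(inv, power)]

assert mul(f, inv) == [1] + [0] * (N - 1)
print(inv)  # [1, 1, 2, 3, 5, 8, 13, 21] -- Fibonacci numbers
```

The truncation is exactly why only finitely many terms of the geometric series are needed: each power of $Tg(T)$ has initial degree at least $n$, so the tail is invisible mod $T^N$.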
1,745,832
<p>It is stated in Kolmogorov &amp; Fomin's <em>Introductory Real Analysis</em> that every normal space is Hausdorff. I cannot seem to find an explanation for this anywhere, and don't see why this is obviously true... since it is not necessarily the case in an arbitrary topological space that a singleton is closed. </p> <p>Any help in understanding this would be appreciated! </p>
Forever Mozart
21,137
<p>You must assume singletons are closed (equivalently, that the space is T$_1$). </p> <p>Every indiscrete space is normal, but fails to be Hausdorff if it has more than one point.</p>
991,530
<p>I am trying to prove the following: For any positive real number $x&gt;0$, there exists a positive integer $N$ such that $x&gt;1/N&gt;0$. </p> <p>What I know is: since $x$ is a real number, it can be expressed as the formal limit of a Cauchy sequence $$x:=\operatorname{LIM}\, (b_n)_{n=1}^{\infty}$$ Since $x&gt;0$, the sequence $(b_n)$ is bounded away from zero, i.e. $|b_n|&gt;c$ for some $c&gt;0$. Also, since $(b_n)$ is Cauchy, it is bounded by some $M$ ($|b_n|\leq M$).</p> <p>However, I don't know how to proceed further. I would appreciate your help. </p>
MAS
182,585
<p>By the Archimedean property, for any positive real numbers $x$ and $\epsilon$ there exists a positive integer $M$ such that $Mx&gt;\epsilon$. Choose $\epsilon=1$; then $Mx&gt;1$, so $$\frac{1}{M}&lt;x,$$ that is, $$x&gt;\frac{1}{M}&gt;0.$$</p>
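As a computational footnote (my own code, not part of the answer): for a given $x>0$ the witness can be produced explicitly as $M=\lfloor 1/x\rfloor+1$.

```python
import math

def find_M(x):
    """Smallest positive integer M with 1/M < x, i.e. the least M > 1/x."""
    return math.floor(1 / x) + 1

for x in (0.3, 0.007, 2.5, 0.25):
    M = find_M(x)
    assert M >= 1 and 0 < 1 / M < x
```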
2,551,368
<p>If I have a liquid that has a specific volume of 26.9 cubic inches per pound, and that liquid sells in the form of a gallon weighing 15.2 lbs, it would be correct to say its total volume is 15.2*26.9, correct? So that would be 408.88 cubic inches. So if I had a volume of 1.157 cubic inches to fill with that liquid, I would be able to fill that volume 408.88/1.157 = 353.4 times; am I correct? Thank you in advance for any assistance.</p>
Luke Peachey
506,520
<p>$X$ is a normed space, so it is Hausdorff. In particular, $C$ finite implies $C$ is closed.</p> <p>$C$ dense means by definition $C = \bar C = B_X $. Thus $B_X$ is finite.</p> <p>For any non-trivial vector-space, any subset with non-empty interior is infinite. Hence it is only possible when $X$ has dimension $0$.</p>
186,417
<p>Is it possible to write $0.3333(3)=\frac{1}{3}$ as sum of $\frac{1}{4} + \cdots + \cdots\frac{1}{512} + \cdots$ so that denominator is a power of $2$ and always different? As far as I can prove, it should be an infinite series, but I can be wrong. In case if it can't be written using pluses only, minuses are allowed as well.</p> <p>For example, $\frac{1}{2}$ is $\frac{1}{4}+\frac{1}{8}+\frac{1}{16}+\cdots$ So, what about $\frac{1}{3}$?</p>
André Nicolas
6,312
<p>There is no way to write it as a finite sum. For if you bring such a sum to a common denominator, that denominator will be a power of $2$. Minus signs won't help.</p> <p>It can be expressed as the infinite "sum" $$\frac{1}{4}+\frac{1}{4^2}+\frac{1}{4^3}+\frac{1}{4^4}+\cdots.$$</p> <p>For note that if $|r|\lt 1$ the geometric series $a+ar+ar^2+ar^3+\cdots$ has sum $\frac{a}{1-r}$. Put $a=\frac{1}{4}$ and $r=\frac{1}{4}$, and simplify.</p> <p>Another interesting representation of $\frac{1}{3}$ is $\frac{1}{2}-\frac{1}{4}+\frac{1}{8}-\frac{1}{16}+\cdots$.</p>
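A quick exact check with Python's `fractions` (my own code): the partial sums of the $1/4$-geometric series miss $1/3$ by exactly $1/(3\cdot 4^n)$, and the alternating series misses it by exactly $1/(3\cdot 2^n)$.

```python
from fractions import Fraction

def quarter_partial(n):
    """1/4 + 1/4^2 + ... + 1/4^n, exactly."""
    return sum(Fraction(1, 4**k) for k in range(1, n + 1))

def alternating_partial(n):
    """1/2 - 1/4 + 1/8 - 1/16 + ... (first n terms), exactly."""
    return sum(Fraction((-1)**(k + 1), 2**k) for k in range(1, n + 1))

for n in range(1, 12):
    assert Fraction(1, 3) - quarter_partial(n) == Fraction(1, 3 * 4**n)
    assert abs(Fraction(1, 3) - alternating_partial(n)) == Fraction(1, 3 * 2**n)
```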
186,417
<p>Is it possible to write $0.3333(3)=\frac{1}{3}$ as sum of $\frac{1}{4} + \cdots + \cdots\frac{1}{512} + \cdots$ so that denominator is a power of $2$ and always different? As far as I can prove, it should be an infinite series, but I can be wrong. In case if it can't be written using pluses only, minuses are allowed as well.</p> <p>For example, $\frac{1}{2}$ is $\frac{1}{4}+\frac{1}{8}+\frac{1}{16}+\cdots$ So, what about $\frac{1}{3}$?</p>
Michael Hardy
11,667
<p>Archimedes showed that if you have a finite sum in which each term is $1/4$ of the previous term, except that the last term is $1/3$ of the previous term, then the sum does not depend on the number of terms, but is just $4/3$ of the first term. In modern terminology: $$ 1+\frac 1 4 + \frac{1}{16} + \cdots + \frac{1}{4^n} + \frac{1}{4^n}\cdot\frac 1 3\ =\ \frac 4 3. $$ He deduced from that that the infinite sum $$ 1 + \frac 1 4 + \frac{1}{16} + \cdots = \frac 4 3. $$</p> <p>(If the first term is $1/4$ then $4/3$ of that is of course $1/3$.)</p>
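Archimedes' finite identity can be verified exactly for any particular $n$ (code mine):

```python
from fractions import Fraction

def archimedes(n):
    """1 + 1/4 + ... + 1/4^n, plus one third of the last term."""
    partial = sum(Fraction(1, 4**k) for k in range(n + 1))
    return partial + Fraction(1, 3) * Fraction(1, 4**n)

# the sum does not depend on the number of terms
assert all(archimedes(n) == Fraction(4, 3) for n in range(10))
```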
1,412,091
<p>The typewriter sequence is an example of a sequence which converges to zero in measure but does not converge to zero a.e.</p> <p>Could someone explain why it does not converge to zero a.e.?</p> <blockquote> <p><span class="math-container">$f_n(x) = \mathbb 1_{\left[\frac{n-2^k}{2^k}, \frac{n-2^k+1}{2^k}\right]} \text{, where } 2^k \leqslant n &lt; 2^{k+1}.$</span></p> </blockquote> <p>Note: the <a href="http://terrytao.wordpress.com/2010/10/02/245a-notes-4-modes-of-convergence/" rel="nofollow noreferrer">typewriter sequence</a> (Example 7).</p>
Jam
161,490
<h3>Animated visual demonstration</h3> <p><a href="https://i.stack.imgur.com/tP1Mb.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/tP1Mb.gif" alt="enter image description here" /></a></p> <p>As the sequence progresses, the indicator functions move from left to right over <span class="math-container">$[0,1]$</span>, then halve in width when they reach the end, and then repeat this ad infinitum. But then, each left-to-right run of a particular width must, at some point, have a value of <span class="math-container">$1$</span> for every <span class="math-container">$x$</span> in the interval. Therefore, throughout the infinitely many left-to-right runs, every <span class="math-container">$x$</span> attains a value of <span class="math-container">$1$</span> infinitely many times. But since <span class="math-container">$f_n(x)$</span> is usually <span class="math-container">$0$</span>, the sequence has no pointwise limit.</p> <p>On the other hand, the width of the interval over which each function is <span class="math-container">$1$</span> decreases to <span class="math-container">$0$</span>, and the functions are eventually infinitesimally thin spikes. So their 'size' (both in terms of <span class="math-container">$L^1$</span> norm and measure) converges to zero.</p> <p>This also explains the origin of the sequence's name, as it looks like a typewriter's carriage return jumping back to where it started.</p> <p><a href="https://i.stack.imgur.com/7fz8e.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/7fz8e.gif" alt="enter image description here" /></a></p> <hr /> <p>I wrote the sequence in Desmos by using <span class="math-container">$\frac{\operatorname{sign}\left(x-c\right)+1}{2}=\textbf{1}_{(c,\infty)}$</span>, except at <span class="math-container">$x=c$</span>, where it is <span class="math-container">$0.5$</span>, and <span class="math-container">$1-\operatorname{sign}(x-c)^2=\begin{cases}1,&amp;x=c\\0,&amp;\text{else}\end{cases}$</span>. 
This gives an expression for the functions in terms of the <span class="math-container">$\operatorname{sign}(\cdot)$</span> function, which can be interpreted by the program and can be simplified into the form in the animation above. A link to the project is available <a href="https://www.desmos.com/calculator/14qevyrmnc" rel="noreferrer">here</a>.</p>
1,412,091
<p>The typewriter sequence is an example of a sequence which converges to zero in measure but does not converge to zero a.e.</p> <p>Could someone explain why it does not converge to zero a.e.?</p> <blockquote> <p><span class="math-container">$f_n(x) = \mathbb 1_{\left[\frac{n-2^k}{2^k}, \frac{n-2^k+1}{2^k}\right]} \text{, where } 2^k \leqslant n &lt; 2^{k+1}.$</span></p> </blockquote> <p>Note: the <a href="http://terrytao.wordpress.com/2010/10/02/245a-notes-4-modes-of-convergence/" rel="nofollow noreferrer">typewriter sequence</a> (Example 7).</p>
shark
474,426
<p>The set on which <span class="math-container">${f_n}$</span> fails to converge to <span class="math-container">$0$</span> is exactly <span class="math-container">$[0, 1]$</span>, which has full measure <span class="math-container">$1$</span>. To see this, note that the interval <span class="math-container">$[0, 1]$</span> is repeatedly overwritten by indicator functions of intervals with decreasing length (<span class="math-container">$1/2$</span>, <span class="math-container">$1/4$</span>, and so on). Since any point in <span class="math-container">$[0, 1]$</span> is covered infinitely many times by such intervals, the values <span class="math-container">$f_n(x)$</span> oscillate between <span class="math-container">$0$</span> and <span class="math-container">$1$</span> infinitely often as <span class="math-container">$n \to \infty$</span>.</p> <p>The &quot;overwriting&quot; nature is what prevents the sequence from converging a.e., while the sequence <span class="math-container">${f_n := \frac{1}{n} 1_{[0,n]}}$</span> does converge to <span class="math-container">$0$</span> a.e. (in fact it converges uniformly).</p>
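To see the "covered infinitely often" claim concretely, this little script (my own, names hypothetical) lists, for a fixed $x$, the indices $n$ with $f_n(x)=1$; each generation $2^k \le n < 2^{k+1}$ contributes at least one index, so there are infinitely many.

```python
def hit_indices(x, k_max):
    """Indices n (one generation k at a time) with f_n(x) = 1."""
    hits = []
    for k in range(k_max + 1):
        for n in range(2**k, 2**(k + 1)):
            lo = (n - 2**k) / 2**k
            hi = (n - 2**k + 1) / 2**k
            if lo <= x <= hi:
                hits.append(n)
    return hits

print(hit_indices(0.3, 6))  # [1, 2, 5, 10, 20, 41, 83]
```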
1,935,320
<p>If $a^2-b^2=2$ then what is the least possible value of: \begin{vmatrix} 1+a^2-b^2 &amp; 2ab &amp;-2b\\ 2ab &amp; 1-a^2+b^2&amp;2a\\2b&amp;-2a&amp;1-a^2-b^2 \end{vmatrix}</p> <p>I tried to express the determinant as a product of two determinants but could not do so. Seeing no way out, I tried expanding it but that took too long and was difficult to evaluate. Please help me with this one, thanks.</p>
Bolton Bailey
165,144
<p>An example of a matrix $A$ such that $A^3 = I$ is $$ A = \left( \begin{matrix} 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 1 \\ 1 &amp; 0 &amp; 0 \\ \end{matrix} \right) $$</p>
765,738
<p>I am trying to prove the following inequality:</p> <p>$$(\sqrt{a} - \sqrt{b})^2 \leq \frac{1}{4}(a-b)(\ln(a)-\ln(b))$$</p> <p>for all $a&gt;0, b&gt;0$.</p> <p>Does anyone know how to prove it?</p> <p>Thanks a lot in advance!</p>
enigne
143,559
<p>The equality holds for $a=b$, so we can just consider the case for $a&gt;b$. In this case, to show this inequality, it is equivalent to show that $$4\frac{\sqrt{a}-\sqrt{b}}{\sqrt{a}+\sqrt{b}}\leq(\ln(a)-\ln(b)).$$ As $a&gt;b&gt;0$, we can suppose that $a=e^x$ and $b=e^y$ where $x&gt;y,\ x,y\in\mathbf{R}$. So we have to show that $$4\frac{e^\frac{x}{2}-e^\frac{y}{2}}{e^\frac{x}{2}+e^\frac{y}{2}}\leq(x-y),$$ or $$4\frac{e^\frac{x-y}{2}-1}{e^\frac{x-y}{2}+1}\leq(x-y).$$</p> <p>Now, denote $\frac{x-y}{4}=z$, the problem becomes to show that for $z&gt;0$ the following inequality holds $$\frac{e^{2z}-1}{e^{2z}+1}\leq{z}.$$</p> <p>Obviously, this holds for $z\geq{1}$. So we only consider the case for $0&lt;z&lt;1$, and we have that $$\frac{e^{2z}-1}{e^{2z}+1}=\frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}=\frac{2(z+\frac{z^3}{3!}+\frac{z^5}{5!}+...)}{2(1+\frac{z^2}{2!}+\frac{z^4}{4!}+...)}\leq{z}.$$</p>
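A numerical spot-check of the inequality (my own code, independent of the proof):

```python
import math, random

def lhs(a, b):
    return (math.sqrt(a) - math.sqrt(b))**2

def rhs(a, b):
    return 0.25 * (a - b) * (math.log(a) - math.log(b))

random.seed(0)
for _ in range(1000):
    a = random.uniform(1e-3, 100.0)
    b = random.uniform(1e-3, 100.0)
    if a != b:
        assert lhs(a, b) <= rhs(a, b) + 1e-12
```

Near $a=b$ the two sides agree to second order, which is why equality holds exactly at $a=b$.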
7,072
<p>I want to replace $x^{i+1}$ with $z_i$.</p> <p>EDIT: I have some latex equations, and wish to import them to MMa.</p> <p>How to replace x_i (NOT $x_i$) with $x[i]$</p> <p>But it seems the underscore "_" has a special meaning (function), can we do such substitution in MMa?</p> <p><strong>EDIT2:</strong></p> <p>For example, I want to convert the following two latex equations:</p> <pre><code> `{la_3^2=0, la_1la_3-6la_3ps=0}` </code></pre> <p>into</p> <pre><code> `{la[3]^2==0, la[1]*la[3]-6*la[3]*ps==0}` </code></pre> <p>so that I could apply Simplify or Solve to these equations later.</p> <p>Great thanks.</p>
Artes
184
<pre><code>x^3 + 3 x^2 - 5 x /. {x^(i_Integer) -&gt; Subscript[z, i], x -&gt; Subscript[z, 1]} </code></pre> <p><img src="https://i.stack.imgur.com/HXjhQ.gif" alt="enter image description here"></p> <p>As rcollyer suggested in the comment it can be shortened to</p> <pre><code>x^3 + 3 x^2 - 5 x /. x^(i_Integer: 1) :&gt; Subscript[z, i] </code></pre>
870,174
<p>If I have $6$ children and $4$ bedrooms, how many ways can I arrange the children if I want a maximum of $2$ kids per room?</p> <p>The problem is that there are two empty slots, and these empty slots are not unique.</p> <p>So, I assumed there are $8$ objects, $6$ kids and $2$ empties.</p> <p>$$C_2^8 \cdot C_2^6 \cdot C_2^4 \cdot C_2^2 = 2520.$$</p> <p>Subtract off combinations where empties are together:</p> <p>$$2520 - 4 \cdot C_2^6 \cdot C_2^4 \cdot C_2^2 = 2160$$</p> <p>Divide by $2!$ to get rid of identical combinations due to identical empties and I get $1080$.</p> <p>Is this right? </p>
KeithS
23,565
<p>Looks right, given some assumptions. Let's build it up a different way to check your work. </p> <p>We're basically interested in two patterns of putting kids into rooms. Those are all kids paired with one empty, 2-2-2-0, and two pairs and two solo sleepers, 2-2-1-1. Any other arrangement either breaks the 2 kids max rule or is a permutation of one of these, like 2-1-1-2 or 2-0-2-2. Permutations are easy so let's ignore that for now.</p> <p>Let's start by figuring out how many unique sets of pairs of kids we can have. There are 15 possibilities for a pair drawn from 6 kids, lettered A-F: AB, AC, AD, AE, AF, BC, BD, BE, BF, CD, CE, CF, DE, DF, and EF. We might now be tempted to take one pair, then figure out the number of combinations for pairs of the remaining four kids for the second pair, and the last two are the third pair. That would give us $C^6_2*C^4_2*C^2_2= 90$ sets of kids. </p> <p>However, we're double-counting; we could have chosen AB for the first pair and then CD for the second, or vice-versa, and our current calculation assumes AB-CD-EF is different from CD-AB-EF. It's not, so the figure of 90 is $P^3_2 = 6$ times too high; there are actually only 15 unique sets of pairs of all 6 kids that can be formed.</p> <p>Now, of those 15 sets of pairs, you can either just chuck 'em into rooms, leaving one empty, or you can split any one of the three pairs to form a unique 2-2-1-1 arrangement. So, there are just 60 ways (15 + 3*15) to put 6 kids into 4 rooms with a max of two per room if you don't care who gets what room. If there's an unwritten understanding of one child minimum, no empty rooms, then there are 45 ways just counting the unique 2-2-1-1 arrangements.</p> <p>Now, if you <em>do</em> care who goes in which room, then there are 24 permutations of 4 things taken 4 at a time. 
60 unique ways to split 6 kids up into 3 or 4 groups, times 24 ways to put those kids into 4 rooms, gives you 1440 unique permutations of kids in rooms if an empty room is allowed. If there must be at least 1 kid per room, then 45 unique 2-2-1-1 patterns, times the same 24 permutations, gives 1080 ways to put kids into specific rooms with at least 1 in each.</p> <p>QED.</p>
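Both totals are small enough to confirm by brute force (code mine): enumerate all $4^6 = 4096$ ways to assign the six labeled kids to four labeled rooms.

```python
from itertools import product

def count_assignments(max_per_room=2, allow_empty=False):
    count = 0
    for rooms in product(range(4), repeat=6):   # a room for each of 6 kids
        sizes = [rooms.count(r) for r in range(4)]
        if max(sizes) > max_per_room:
            continue
        if not allow_empty and 0 in sizes:
            continue
        count += 1
    return count

print(count_assignments())                  # 1080, at least 1 kid per room
print(count_assignments(allow_empty=True))  # 1440, empty room allowed
```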
1,035,091
<p>I try to get the limit of the following expression (it should be valid for $a\geq 0$):</p> <p>$\lim_{n\to\infty}\sum_{k=0}^n\frac{a}{ak+n}$</p> <p>Unfortunately nothing worked. I tried to rewrite the expression and to use l'Hospital's rule, but it didn't work.</p> <p>Thank you for any help:)</p> <p>regards</p> <p>Kevin</p>
Felix Marin
85,343
<p>$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,} \newcommand{\dd}{{\rm d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} \newcommand{\fermi}{\,{\rm f}} \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{{\rm i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}} \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} \newcommand{\sech}{\,{\rm sech}} \newcommand{\sgn}{\,{\rm sgn}} \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ Besides the simple 'Riemann Sum Method': \begin{align} &amp;\color{#66f}{\large\lim_{n\ \to\infty}\sum_{k\ =\ 0}^{n}{a \over ak + n}} =\lim_{n\ \to\infty}\sum_{k\ =\ 0}^{\infty} \pars{{1 \over k + n/a} - {1 \over k + n + 1+ n/a}} \\[5mm]&amp;=\lim_{n\ \to\infty}\bracks{\pars{n + 1}\sum_{k\ =\ 0}^{\infty} {1 \over \pars{k + n + 1 + n/a}\pars{k + n/a}}} \\[5mm]&amp;=\lim_{n\ \to\infty}\bracks{\pars{n + 1}\, {\Psi\pars{n + 1 + n/a} - \Psi\pars{n/a} \over \pars{n + 1 + n/a} - n/a}} =\lim_{n\ \to\infty}\bracks{\Psi\pars{n + 1 + {n \over a}} - \Psi\pars{n \over a}} \\[5mm]&amp;=\lim_{n\ \to\infty}\bracks{% \ln\pars{n + 1 + {n \over a}} - \ln\pars{n \over a}} =\lim_{n\ \to\infty}\ln\pars{n\bracks{a + 1} + a \over n} =\color{#66f}{\large\ln\pars{a + 1}} \end{align}</p> <blockquote> <p>$\ds{\Psi\pars{z}}$ is the Digamma Function which satisfies $$ \Psi\pars{z} \sim \ln\pars{z}\,,\qquad\verts{z} \ggg 1\,,\quad\verts{{\rm arg}\pars{z}} &lt; \pi $$</p> </blockquote>
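A numerical check of the limit (my own code): the partial sums approach $\ln(a+1)$ for several values of $a$.

```python
import math

def partial_sum(a, n):
    return sum(a / (a * k + n) for k in range(n + 1))

for a in (0.5, 1.0, 3.0):
    # the Riemann-sum error is O(1/n), so n = 100000 is plenty here
    assert abs(partial_sum(a, 100000) - math.log(a + 1)) < 1e-3
```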
2,101,894
<p>As part of a larger proof, I need to use the fact that for a homomorphism $\phi : A \rightarrow B$, where $A$ is cyclic and $a \in A$ its generator, $\phi(a^{-n}) = \phi(a^n)^{-1}$. All I came up with is that $\phi(a^{-n}) = \phi((a^n)^{-1})$ and then I hit a wall. Is it true in general that $\phi(a^{nm}) = \phi(a^n)^m$, even for $m &lt; 0$?</p> <p>I'm also not sure if it's important that $A$ is cyclic and $a$ is its generator.</p> <p>Thanks in advance for any help.</p>
positrón0802
334,676
<p>Since $\phi(a^{n})\phi(a^{-n})=\phi(a^na^{-n})=\phi(1)=1$, then $\phi(a^{-n})=\phi(a^n)^{-1}$ since inverse of $\phi(a^{n})$ is unique.</p>
135,235
<blockquote> <p>if $f(3)=-2$ and $f'(3)=5$, find $g'(3)$ if,<br> $g(x)=3x^2-5f(x)$</p> </blockquote> <p>The answer is $-7$; I find the question very hard to understand. Thanks. </p>
Community
-1
<p><strong>Hint</strong>:</p> <ul> <li><p>$(\alpha+\beta)\ '=\alpha\ '+\beta\ '$ </p></li> <li><p>$f$ is differentiable at $x=3$ as $f\ '(3)$ exists.</p></li> <li><p>$f(3)$ is not a relevant piece of information here...</p></li> </ul>
135,235
<blockquote> <p>if $f(3)=-2$ and $f'(3)=5$, find $g'(3)$ if,<br> $g(x)=3x^2-5f(x)$</p> </blockquote> <p>The answer is $-7$; I find the question very hard to understand. Thanks. </p>
J. J.
3,776
<p>The solution to the problem comes from the fact that differentiation is a linear operator. This means that $(cf(x) + dg(x))' = cf'(x) + dg'(x)$ where $c$ and $d$ are constants. Assuming we know this and the differentiation rule for powers ($(x^n)' = n x^{n-1}$) we can continue by differentiating the equation $$g(x) = 3x^2 - 5f(x)$$ to get $$g'(x) = 6x - 5f'(x).$$</p> <p>Therefore $g'(3) = 6\cdot 3 - 5 f'(3) = 18 - 5 \cdot 5 = 18 - 25 = -7$.</p>
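Since only $f'(3)$ matters, any concrete $f$ with $f'(3)=5$ can be used to sanity-check the result numerically; here I pick $f(x)=5x-17$ (which also happens to satisfy $f(3)=-2$). Code and the choice of $f$ are mine.

```python
def f(x):
    return 5 * x - 17        # one concrete choice with f(3) = -2, f'(3) = 5

def g(x):
    return 3 * x**2 - 5 * f(x)

h = 1e-6
g_prime_3 = (g(3 + h) - g(3 - h)) / (2 * h)   # central difference
assert abs(g_prime_3 - (-7)) < 1e-5
```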
1,865,868
<p>The problem goes like this: "An urn contains $A$ white balls and $B$ black balls. We draw 5 balls without replacement (we assume $A,B\gt4$). What is the probability that the 5th ball drawn is white, given that the 3rd ball drawn was black?"</p> <p>What I did is build a conditional probability tree. It gets really ugly and the parameters won't reduce, so the equation is huge; therefore this is probably not the correct path of solution.</p> <p>I've got this intuition that the probability of the <strong>q</strong>th draw being white given that the <strong>p</strong>th draw was black is the same as the probability of the second ball being white given that the first was black, namely $\frac{A}{A+B-1}$, but this is only intuition and I can't explain it.</p> <p>I would appreciate your advice.</p>
Rodrigo de Azevedo
339,790
<p>This is an instance of the <strong>least-norm</strong> problem</p> <p><span class="math-container">$$\begin{array}{ll} \text{minimize} &amp; \| {\bf x} \|_2^2\\ \text{subject to} &amp; {\bf A} {\bf x} = {\bf b} \end{array}$$</span></p> <p>As <span class="math-container">$2 \times 3$</span> matrix <span class="math-container">$\bf A$</span> has full row rank, the least-norm solution is</p> <p><span class="math-container">$$ {\bf x}_{\text{LN}} := {\bf A}^\top \left( {\bf A} {\bf A}^\top \right)^{-1} {\bf b} $$</span></p> <p>In <a href="http://live.sympy.org" rel="nofollow noreferrer">SymPy</a>:</p> <pre><code>&gt;&gt;&gt; A = Matrix([[1, 1, 1], [12, 3, 3]]) &gt;&gt;&gt; A ⎡1 1 1⎤ ⎢ ⎥ ⎣12 3 3⎦ &gt;&gt;&gt; b = Matrix([2, 12]) &gt;&gt;&gt; b ⎡2 ⎤ ⎢ ⎥ ⎣12⎦ &gt;&gt;&gt; x_LN = A.T * (A * A.T)**-1 * b &gt;&gt;&gt; x_LN ⎡2/3⎤ ⎢ ⎥ ⎢2/3⎥ ⎢ ⎥ ⎣2/3⎦ </code></pre> <hr /> <p><strong>Appendix</strong></p> <p>We can use Lagrange multipliers to find the least-norm solution. We define the Lagrangian</p> <p><span class="math-container">$$\mathcal{L} ({\bf x}, {\bf \lambda}) := \frac 12 {\bf x}^\top {\bf x} - {\bf \lambda}^\top ({\bf A} {\bf x} - {\bf b})$$</span></p> <p>Taking the partial derivatives and finding where they vanish, we obtain</p> <p><span class="math-container">$$ {\bf x} = {\bf A}^\top {\bf \lambda}, \qquad \qquad {\bf A} {\bf x} = {\bf b} $$</span></p> <p>from which it is easy to compute the least-norm solution, assuming that <span class="math-container">$\bf A$</span> has full row rank (so that <span class="math-container">$\bf A \bf A^\top$</span> is invertible).</p>
191,307
<p>How can I calculate the perimeter of an ellipse? What is the general method of finding out the perimeter of any closed curve?</p>
Mark
24,958
<p>I do not know if that's what you wanted, but the only general method is to calculate the arc length of the curve. If we have an ellipse equation:</p> <p>$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$</p> <p>with parametric representation:</p> <p>$x=a \cos t, \ \ y=b \sin t, \ \ \ t\in [0,2\pi]$</p> <p>the length of the curve is calculated knowing:</p> <p>$x'=-a \sin t, \ \ y'=b \cos t, \ \ \ t\in [0,2\pi]$</p> <p>and is (see <a href="http://en.wikipedia.org/wiki/Arc_length" rel="nofollow">Arc length</a>)</p> <p>$\int_{0}^{2 \pi} \sqrt{a^{2}\sin^{2}t+b^{2}\cos^{2} t} dt$</p> <p>this integral cannot be expressed in closed form (it is a complete elliptic integral). There are various approximations (they take advantage of power series) that you can see in this link</p> <p><a href="http://mathworld.wolfram.com/Ellipse.html" rel="nofollow">ellipse</a></p>
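The arc-length integral is easy to evaluate numerically; a minimal sketch (my own code), using the midpoint rule:

```python
import math

def ellipse_perimeter(a, b, n=20000):
    """Midpoint rule on the arc-length integral over [0, 2*pi]."""
    h = 2 * math.pi / n
    return h * sum(math.hypot(a * math.sin((i + 0.5) * h),
                              b * math.cos((i + 0.5) * h)) for i in range(n))

# sanity check: a circle of radius r has perimeter 2*pi*r
assert abs(ellipse_perimeter(1, 1) - 2 * math.pi) < 1e-9
print(ellipse_perimeter(2, 1))   # about 9.688448
```

Since the integrand is smooth and periodic, the midpoint rule converges very fast here.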
191,307
<p>How can I calculate the perimeter of an ellipse? What is the general method of finding out the perimeter of any closed curve?</p>
SN77
37,814
<p>For any ellipse with semi-major axis $a$ and eccentricity $\varepsilon$, the perimeter is given by $p=2\pi a\left(1-\left(\frac{1}{2}\right)^2\varepsilon^2-\left(\frac{1\cdot 3}{2\cdot 4}\right)^2\frac{\varepsilon^4}{3}-\cdots\right)$</p>
216,748
<ol> <li><p>So far I showed that if matrix A is left invertible (L) then in Ax = b, x has at most 1 solution. <strong>I got that LAx = x = Lb, so x = Lb</strong></p></li> <li><p>for a right inverse (R) of A, in Ax = b, x has at least one solution. <strong>I got that x = Rb</strong>. </p></li> </ol> <p>In the book it says that x has 1 solution for case 1, and at least one solution for case 2. Can someone explain to me WHY? HOW?</p>
N. S.
9,176
<p>Yes, you are right. In 1., you SOLVED the equation, and thus you PROVED that $x=Lb$.</p> <p>In 2., you can see easily that $x=Rb$ works, but this only proves that it is one solution. It could be the only one, or there could be others....</p> <p><strong>PS</strong> I don't know how you got the $x=Rb$; it is easy to guess, or to observe that $b=ARb$. Note that once you see that $Ax=ARb$, you cannot cancel the $A$ unless $A$ has a left inverse... </p>
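A tiny concrete illustration (matrices chosen by me): a wide matrix $A$ with a right inverse $R$, where $x=Rb$ solves $Ax=b$ but is not the only solution.

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[1, 0, 0],
     [0, 1, 0]]                 # 2x3, wide
R = [[1, 0],
     [0, 1],
     [0, 0]]                    # A R = I_2, so R is a right inverse of A

b = [3, 5]
x1 = matvec(R, b)               # x = Rb solves Ax = b ...
x2 = [3, 5, 7]                  # ... but so does this: no uniqueness
assert matvec(A, x1) == b
assert matvec(A, x2) == b
```

Note that $RA$ is not the identity here, which is exactly why the cancellation in the PS fails.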
3,845,144
<p>Let's say a number <span class="math-container">$n$</span> is insertable if for every digit <span class="math-container">$d$</span>, if we insert <span class="math-container">$d$</span> between any two digits of <span class="math-container">$n$</span>, then the obtained number is a multiple of <span class="math-container">$d$</span>. For example, <span class="math-container">$144$</span> is not insertable because <span class="math-container">$1474$</span> is not divisible by <span class="math-container">$7$</span>.</p> <p>The question is to find the smallest insertable positive integer with at least two digits.</p> <p>It is relatively easy to see that such a number has to be divisible by <span class="math-container">$2520$</span> (assuming it is at least <span class="math-container">$4$</span> digits long). I also ran a script to check all integers below 75,000,000,000 with no success (the issue might be my code).</p> <p><strong>Disclaimer.</strong> I do not know if such a number does exist.</p>
Evaristo
813,919
<p>With <span class="math-container">$\{e_1, e_2, e_3\}$</span> as your base, check what happens when you operate on them <span class="math-container">$$Ae_1 = e_3 \implies A^ne_1 = A^{n-1}e_3$$</span> <span class="math-container">$$Ae_2 = e_2+e_3 \implies A^ne_2 = A^{n-1}(e_2+e_3) = A^{n-1}e_3 + A^{n-1}e_2$$</span> <span class="math-container">$$Ae_3 = e_1+e_2+e_3\implies A^ne_3 = A^{n-1}(e_1+e_2+e_3) = $$</span> <span class="math-container">$$ = A^{n-1}e_1+A^{n-1}e_2+A^{n-1}e_3 = 2A^{n-2}e_3+A^{n-2}e_2 +A^{n-1}e_3$$</span> This implies a recursive expression for each of your columns, maybe it can be tweaked a bit further.</p>
120,910
<p>$f(x) = 2\sin x \hspace{10pt}(0 \leq x \leq \pi)$<br> $g(x) = -\sin x \hspace{10pt}(0 \leq x \leq \pi)$</p> <p>Rectangle ABCD is enclosed between the above functions' graphs (its edges are parallel to the axes).</p> <p>How would I go about finding the maximum perimeter of ABCD?</p> <p>I'm really clueless about this, I don't even know how to begin. How am I supposed to represent the edges?</p> <p>Thanks</p>
André Nicolas
6,312
<p>As a minor matter of strategy, after <em>drawing the picture</em>, I would move the $y$-axis to get symmetry. Then the picture becomes the picture of $2\cos x$ and $-\cos x$. If the upper right-hand corner of the rectangle in the new picture is $(x,2\cos x)$, then the perimeter is $4x+6\cos x$. </p> <p>We have $0\le x\le \pi/2$, and the maximum is obviously not reached at an endpoint. So the maximum is reached where the derivative of $4x+6\cos x$ is $0$. This happens where $4-6\sin x=0$, that is, where $\sin x=\frac{2}{3}$. There, $x=\arcsin(2/3)$ and $\cos x=\frac{\sqrt{5}}{3}$, and now we can calculate the maximum perimeter.</p>
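A numerical check (my code): the critical point $x=\arcsin(2/3)$ gives the perimeter $4\arcsin(2/3)+2\sqrt5\approx 7.391$, and no grid point on $[0,\pi/2]$ beats it.

```python
import math

def perimeter(x):                 # 4x + 6 cos x, after centering the picture
    return 4 * x + 6 * math.cos(x)

x_star = math.asin(2 / 3)
p_star = 4 * math.asin(2 / 3) + 2 * math.sqrt(5)   # since cos(x*) = sqrt(5)/3
assert abs(perimeter(x_star) - p_star) < 1e-12

grid = [i * (math.pi / 2) / 10**5 for i in range(10**5 + 1)]
assert max(perimeter(x) for x in grid) <= p_star + 1e-9
```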
265,509
<p>Given a (possibly disjoint) region, defined by a discrete set of points, how can I use <code>ListContourPlot[]</code> together with <code>Mesh</code> to highlight a specific area of the plot? For instance, how can I mesh the region where the points are smaller than a certain value?</p> <p>Here I construct a minimal example where I try to highlight the area where the values of a discrete sample of the function <span class="math-container">$f(x) = e^{x^2 - y^2}$</span> are smaller then one.</p> <pre><code>data = Table[Exp[x^2 - y^2], {x, -1, 1, .01}, {y, -1, 1, .01}]; ListContourPlot[ data , Contours -&gt; {1.0} , ContourStyle -&gt; Transparent , Mesh -&gt; 25 , MeshFunctions -&gt; {#1 + #2 &amp;} , MeshStyle -&gt; Thick ] </code></pre> <p>I also tried using <code>MeshFunctions -&gt; {Piecewise[{{#1 + #2 &amp;, #3 &lt;= 1 &amp;}, {None, #3 &gt; 0 &amp;}}]}</code>, but I had no luck.</p> <p>I am aware that this can be done for symbolic functions through <code>RegionPlot[]</code>, however I am not sure how to extend this to numerical data.</p> <p><a href="https://i.stack.imgur.com/RF0im.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RF0im.png" alt="Mesh in ListContourPlot" /></a></p>
Bob Hanlon
9,362
<pre><code>data = Table[Exp[x^2 - y^2], {x, -1, 1, .01}, {y, -1, 1, .01}]; Show[ ListContourPlot[data, Contours -&gt; {1}, Mesh -&gt; 25, MeshFunctions -&gt; {#1 + #2 &amp;}, MeshStyle -&gt; Directive[Thick, #[[1]]], RegionFunction -&gt; #[[2]]] &amp; /@ {{Red, Function[{x, y, f}, f &lt; 1]}, {Black, Function[{x, y, f}, f &gt; 1]}}] </code></pre> <p><a href="https://i.stack.imgur.com/yP3Ec.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yP3Ec.png" alt="enter image description here" /></a></p>
3,264,935
<p>For a given natural number <span class="math-container">$k$</span>, I'm going to call a subset <span class="math-container">$T$</span> of the plane <span class="math-container">$\Bbb{R}^2$</span> a <span class="math-container">$k$</span>-traversal if, for any <span class="math-container">$x \in \Bbb{R}$</span>, <span class="math-container">\begin{align*} k &amp;= \operatorname{card} \{(a, b) \in T : a = x\} \\ &amp;= \operatorname{card} \{(a, b) \in T : b = x\} \\ &amp;= \operatorname{card} \{(a, b) \in T : a + b = x\}. \end{align*}</span> It's not difficult to see that lines in the plane that aren't parallel to the <span class="math-container">$x$</span>-axis, the <span class="math-container">$y$</span>-axis, or the line <span class="math-container">$x + y = 0$</span> will be <span class="math-container">$1$</span>-traversals (but are far from the only examples!). We can make <span class="math-container">$2$</span>-traversals without difficulty by taking two disjoint <span class="math-container">$1$</span>-traversals and unioning them. My question is,</p> <blockquote> <p>Is there a <span class="math-container">$2$</span>-traversal that cannot be decomposed into the union of two <span class="math-container">$1$</span>-traversals?</p> </blockquote> <p>The idea of (irreducible) <span class="math-container">$k$</span>-traversals is a concept from latin squares. I'm trying to consider the concept when applied to an (uncountably) infinite Latin square generated by the group <span class="math-container">$(\Bbb{R}, + )$</span>.</p> <p>This was a question that popped into my head years ago when attending a combinatorics conference. Combinatorics is not my forte, but it seemed like an interesting question, and I thought I'd share it.</p>
Vincent
101,420
<p>I have the feeling that this (or a variation) would work, but maybe I am overlooking something. It is definitely the union of two things, but they are not 1-traversals.</p> <p>Take the union of the v-shape <span class="math-container">$y = |1/2(x-1)|$</span> and upside-down v-shape <span class="math-container">$y = -|1/2(x+1)|$</span>.</p>
77,092
<p>I am wondering if anyone could please post the solution to the following differential equation for the function $f(x)$:</p> <p>$$\frac{f}{f^\prime}=\frac{f^\prime}{f^{\prime\prime}}$$</p> <p>Thanks!</p>
anon
11,763
<p>$\rm\bf Start$: Multiply through by $f\,&#39;&#39;/f$ and integrate with respect to $x$: $$\frac{f\,&#39;&#39;}{f\,&#39;}=\frac{f\,&#39;}{f} \implies \ln (f\,&#39;)=\ln f+C.$$ Now exponentiate and solve another differential equation similarly...</p> <hr> <p>$\rm\bf Finish$:</p> <blockquote class="spoiler"> <p> $$f\,&#39;=e^Cf=Af\implies f(x)=Be^{Ax}.$$</p> </blockquote>
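A quick check (my own code) that the family $f(x)=Be^{Ax}$ really satisfies $f/f'=f'/f''$, both ratios being $1/A$:

```python
import math

A, B = 0.7, 2.0
for x in (-1.0, 0.0, 2.5):
    f   = B * math.exp(A * x)
    fp  = A * B * math.exp(A * x)        # f'
    fpp = A * A * B * math.exp(A * x)    # f''
    assert abs(f / fp - fp / fpp) < 1e-12
    assert abs(f / fp - 1 / A) < 1e-12
```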
129,072
<p>We know that for a representation $V$ of a Lie algebra or a quantum group, we can define character of $V$ as $ch(V)=\sum_{\mu} dim(V_{\mu})e^{\mu}$, where $V_{\mu}$ is the weight space of $V$ with weight $\mu$. I didn't find the corresponding definition for representations of finite dimensional algebras (for example, associative algebras). Is there some kind of character theory for representations of finite dimensional algebras? Thank you very much. </p>
Qiaochu Yuan
232
<p>Sure. If $A$ is a finite-dimensional algebra over a field $k$ and $V$ a finite-dimensional $A$-module (so we have a morphism $\rho : A \to \text{End}_k(V)$), we can define a character $\chi_V(a) = \text{tr}(\rho(a)) : A \to k$ just as in the case of finite groups (where $A = k[G]$). Characters are additive in short exact sequences, so if $\chi_V$ has a composition series consisting of $n_i$ copies of the simple $A$-module $V_i$, then $$\chi_V = \sum n_i \chi_{V_i}.$$</p> <p>Moreover, if $k$ has characteristic zero, the characters corresponding to distinct simple modules are linearly independent. (I don't know a short way to prove this although I am sure there is one; it follows from Artin-Wedderburn applied to $A/J(A)$.) So the character of a module tells you precisely what the simple modules in its composition series are. </p> <p>Note that any character $\chi_V$ above is among other things a linear functional $A/[A, A] \to k$ (where $[A, A]$ is the subspace, not the ideal, generated by commutators), so $\dim A/[A, A]$ is an upper bound on the number of simple modules. I don't know if this upper bound is attained in general. </p>
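A toy instance of this (my own construction): take $A=k[t]/(t^2)$ acting on $V=k^2$ by $\rho(a+bt)=\begin{pmatrix}a&b\\0&a\end{pmatrix}$. The composition series of $V$ consists of two copies of the one-dimensional simple module on which $t$ acts as $0$, and indeed $\chi_V=2\,\chi_{\text{simple}}$.

```python
def rho(a, b):
    """Matrix of a + b*t acting on V = k^2, where t^2 = 0."""
    return [[a, b],
            [0, a]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(M):
    return M[0][0] + M[1][1]

def chi_simple(a, b):
    return a   # on the simple module, t acts as 0

# rho is multiplicative: (a + b*t)(c + d*t) = ac + (ad + bc)t since t^2 = 0
assert matmul(rho(2, 3), rho(5, 7)) == rho(10, 29)

for a, b in [(1, 0), (2, 3), (-1, 5)]:
    assert trace(rho(a, b)) == 2 * chi_simple(a, b)
```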
849,025
<p>Say I have 100 people each with a height, weight, and age. I make a regression that predicts age based on height and weight. Now, I would like to update that model when I meet someone new. I don't want to just re-process 101 people though--I want to take the model that I already have and incorporate the new person into it. </p> <p>For example, say that I found from the first 100 that:</p> <p>age = .08*height + .06*weight + 7.</p> <p>Now, I meet someone with age 120, height 56, and weight 34. </p> <p>I do know the number of cases with which I originally made the regression. So, my initial idea was that I could just assume that they all fit the model age = .08*height + .06*weight + 7 and somehow weight the new case so that it takes the old into account (e.g., assume I have 100 people that fit the model, create a regression equation based on those 100 identical data points, and then just add the 101st). That's almost just like running the regression again on all 101 data points, except that nothing needs to be stored, because you could derive the 100 points from the equation. </p> <p>I would like to do something like this on a large scale, and I don't want to create an enormous database of cases; I just want to update the model with each new case.</p>
Claude Leibovici
82,404
<p>Let us suppose that you fit a model $Z = a +b X+cY$ based on $N$ data points $(X_i,Y_i,Z_i)$. The so-called normal equations are $$\sum _{i=1}^N Z_i= N a + b\sum _{i=1}^N X_i+ c\sum _{i=1}^N Y_i$$ $$\sum _{i=1}^N X_iZ_i= a\sum _{i=1}^N X_i + b\sum _{i=1}^N X_i^2+ c\sum _{i=1}^N X_iY_i$$ $$\sum _{i=1}^N Y_iZ_i= a\sum _{i=1}^N Y_i + b\sum _{i=1}^N X_iY_i+ c\sum _{i=1}^N Y_i^2$$ and you solve them for $a,b,c$. </p> <p>If we rewrite these equations in a more symbolic manner, as $$Sz=N a+b Sx + c Sy$$ $$Sxz=a Sx+b Sxx+c Sxy$$ $$Syz=a Sy+b Sxy+c Syy$$ the solutions are given by $$c=\frac{-N \text{Sxx} \text{Syz}+N \text{Sxy} \text{Sxz}+\text{Sx}^2 \text{Syz}-\text{Sx} \text{Sxy} \text{Sz}-\text{Sx} \text{Sxz} \text{Sy}+\text{Sxx} \text{Sy} \text{Sz}}{-N \text{Sxx} \text{Syy}+N \text{Sxy}^2+\text{Sx}^2 \text{Syy}-2 \text{Sx} \text{Sxy} \text{Sy}+\text{Sxx} \text{Sy}^2}$$ $$b=\frac{c N \text{Sxy}-c \text{Sx} \text{Sy}-N \text{Sxz}+\text{Sx} \text{Sz}}{\text{Sx}^2-N \text{Sxx}}$$ $$a=\frac{-b \text{Sx}-c \text{Sy}+\text{Sz}}{N}$$</p> <p>Now, you add another data point ($N+1$), so the equations are now $$\sum _{i=1}^{N+1} Z_i= (N+1) a + b\sum _{i=1}^{N+1} X_i+ c\sum _{i=1}^{N+1} Y_i$$ $$\sum _{i=1}^{N+1} X_iZ_i= a\sum _{i=1}^{N+1} X_i + b\sum _{i=1}^{N+1} X_i^2+ c\sum _{i=1}^{N+1} X_iY_i$$ $$\sum _{i=1}^{N+1} Y_iZ_i= a\sum _{i=1}^{N+1} Y_i + b\sum _{i=1}^{N+1} X_iY_i+ c\sum _{i=1}^{N+1} Y_i^2$$ But the new sums can be expressed as the old sums plus an extra term corresponding to the new data point $$\sum _{i=1}^{N+1} Z_i=\sum _{i=1}^{N} Z_i+Z_{N+1}$$ $$\sum _{i=1}^{N+1} X_iZ_i=\sum _{i=1}^{N} X_iZ_i+X_{N+1}Z_{N+1}$$ and so on for all the summations.</p> <p>So, you only need to keep the values of the sums and update them every time you add a new data point. The only thing left is to solve the new three normal equations for which I gave you the formulas.</p> <p>So, updating the regression by adding one extra point is really simple.</p>
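The running-sums scheme can be sketched in a few lines of Python. Instead of the closed-form expressions above, this sketch solves the $3\times 3$ normal equations with a small Gaussian elimination; the class and function names are my own:

```python
class OnlineRegression:
    """Fit z = a + b*x + c*y by least squares, updating running sums
    one data point at a time (no stored cases)."""
    def __init__(self):
        self.N = 0
        self.Sx = self.Sy = self.Sz = 0.0
        self.Sxx = self.Sxy = self.Syy = 0.0
        self.Sxz = self.Syz = 0.0

    def add(self, x, y, z):
        # Each new point only bumps the sums, exactly as in the answer.
        self.N += 1
        self.Sx += x; self.Sy += y; self.Sz += z
        self.Sxx += x * x; self.Sxy += x * y; self.Syy += y * y
        self.Sxz += x * z; self.Syz += y * z

    def coefficients(self):
        # Normal equations in matrix form M * (a, b, c)^T = v.
        M = [[self.N,  self.Sx,  self.Sy],
             [self.Sx, self.Sxx, self.Sxy],
             [self.Sy, self.Sxy, self.Syy]]
        v = [self.Sz, self.Sxz, self.Syz]
        return solve3(M, v)

def solve3(M, v):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    A = [row[:] + [b] for row, b in zip(M, v)]
    n = 3
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, n):
            factor = A[r][i] / A[i][i]
            for col in range(i, n + 1):
                A[r][col] -= factor * A[i][col]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

reg = OnlineRegression()
for x, y in [(1, 2), (2, 1), (3, 5), (4, 3), (5, 8)]:
    reg.add(x, y, 7 + 0.08 * x + 0.06 * y)   # noiseless synthetic ages
print(reg.coefficients())   # ~ [7.0, 0.08, 0.06]
```

Feeding in points generated from z = 7 + .08x + .06y recovers the coefficients up to floating point, and incorporating a 101st person later is just one more `add` call.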
1,668,487
<p>$$2^{-1} \equiv 6 \pmod{11}$$</p> <p>Sorry for the very strange question. I want to understand which algorithm is used to compute this expression. Similarly, I am interested in why the following expression is equal to two:</p> <p>$$6^{-1} \equiv 2 \pmod{11}$$</p>
N. F. Taussig
173,070
<p>The notation $2^{-1} \pmod{11}$ means the <em>multiplicative inverse</em> of $2$ modulo $11$, that is, a number $x$ satisfying the equation $2x \equiv 1 \pmod{11}$. </p> <p>Since $2 \cdot 6 \equiv 12 \equiv 1 \pmod{11}$, $2^{-1} \equiv 6 \pmod{11}$. The same calculation shows that $6^{-1} \equiv 2 \pmod{11}$. </p> <p>The multiplicative inverse of $6$ exists modulo $11$ since $\gcd(6, 11) = 1$.</p> <p>We can find the multiplicative inverse of $6$ modulo $11$ using the <a href="https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm" rel="nofollow">extended Euclidean algorithm</a>. \begin{align*} 11 &amp; = 1 \cdot 6 + 5\\ 6 &amp; = 1 \cdot 5 + 1\\ 5 &amp; = 5 \cdot 1 \end{align*} We solve for $1$ as a linear combination of $6$ and $11$. \begin{align*} 1 &amp; = 6 - 5\\ &amp; = 6 - (11 - 6)\\ &amp; = 2 \cdot 6 - 11 \end{align*} Hence, $2 \cdot 6 \equiv 1 \pmod{11} \iff 6^{-1} \equiv 2 \pmod{11}$.</p> <p>Let's see what happens when we use the extended Euclidean algorithm to solve for $2^{-1}$ modulo $11$.</p> <p>The multiplicative inverse of $2$ exists modulo $11$ since $\gcd(2, 11) = 1$. \begin{align*} 11 &amp; = 5 \cdot 2 + 1\\ 2 &amp; = 2 \cdot 1 \end{align*} Solving for $1$ as a linear combination of $2$ and $11$ yields $$1 = 11 - 5 \cdot 2$$ Thus, $$-5 \cdot 2 \equiv 1 \pmod{11} \iff 2^{-1} \equiv -5 \pmod{11}$$ What does this mean? It means that for each $t \in \mathbb{Z}$, $$2(-5 + 11t) \equiv 1 \pmod{11}$$ In particular, if $t = 1$, we obtain $$2 \cdot 6 \equiv 1 \pmod{11} \iff 2^{-1} \equiv 6 \pmod{11}$$ Why is $2(-5 + 11t) \equiv 1 \pmod{11}$ for $t \in \mathbb{Z}$? Observe that $$2(-5 + 11t) \equiv -10 + 22t \equiv 1 - 11 + 22t \equiv 1 + 11(2t - 1) \equiv 1 \pmod{11}$$</p>
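The extended Euclidean algorithm used above translates directly into code; here is a short sketch (the function names are mine):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """Multiplicative inverse of a modulo m; exists iff gcd(a, m) == 1."""
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("no inverse: gcd != 1")
    return x % m

print(mod_inverse(2, 11))  # 6
print(mod_inverse(6, 11))  # 2
```

Reducing the Bezout coefficient modulo $m$ performs exactly the final step described above, turning $-5$ into $6$.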
2,442,705
<p>Like the title says, I'm wondering if there's a non-brute-force way to determine $x$ such that $$45\le x&lt;200,$$ $$x\bmod5\equiv0,$$ $$x\bmod8\equiv1,$$ $$x\bmod12\equiv1$$ I know I can simply enumerate all integers in $[45, 200)$ to get the answer (145), but I'm wondering if there's a more elegant solution.</p>
paw88789
147,810
<p>From the last two congruences, $x\equiv 1 \pmod{24}$</p> <p>Then $x\equiv 1 \equiv 25 \equiv 49 \equiv...$</p> <p>Just keep going until you find something congruent to $0 \pmod{5}$ that is within your range of interest.</p> <p>Or you could note that $25$ works, and then so will $25 + 120k$ for any integer $k$. </p>
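The stepping idea, staying at $1$ modulo $24$ and testing divisibility by $5$ inside the range, takes only a few lines (the range bounds come from the question; the names are mine):

```python
def first_match(lo=45, hi=200):
    x = 1            # x = 1 (mod 24) combines the mod-8 and mod-12 conditions
    while x < hi:
        if x >= lo and x % 5 == 0:
            return x
        x += 24      # stay congruent to 1 mod 24
    return None

print(first_match())  # 145
```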
2,442,705
<p>Like the title says, I'm wondering if there's a non-brute-force way to determine $x$ such that $$45\le x&lt;200,$$ $$x\bmod5\equiv0,$$ $$x\bmod8\equiv1,$$ $$x\bmod12\equiv1$$ I know I can simply enumerate all integers in $[45, 200)$ to get the answer (145), but I'm wondering if there's a more elegant solution.</p>
Jihoon Kang
452,346
<p>Hint: Use the Chinese Remainder Theorem. In the case for modular arithmetic, this theorem is as follows:</p> <p><strong>Theorem</strong> Suppose you have a system of congruences: $$\begin{cases} x \equiv a_1 \ \mbox{mod} \ b_1 \\ \vdots \ \ \ \ \ \ \ \ \vdots \ \ \ \ \ \ \ \ \ \ \ \ \vdots \\ x\equiv a_n \ \mbox{mod} \ b_n \end{cases}$$ where $b_i$ are pairwise coprime. Then there is a unique $c$ such that $$x\equiv c \ \mbox{mod} \ \prod_{j=1}^nb_j$$</p> <p>Does this hint help?</p>
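Here is a sketch of the theorem as an algorithm. One caveat for the question at hand: the moduli $8$ and $12$ are not coprime, so they must first be merged into the single condition $x \equiv 1 \pmod{24}$ before the theorem applies. The helper names below are mine:

```python
def extended_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(residues, moduli):
    """Unique solution modulo prod(moduli), for pairwise-coprime moduli."""
    x, m = 0, 1
    for a, n in zip(residues, moduli):
        g, p, _ = extended_gcd(m, n)     # p is the inverse of m mod n
        assert g == 1, "moduli must be pairwise coprime"
        x = (x + (a - x) * p % n * m) % (m * n)   # adjust x by a multiple of m
        m *= n
    return x

# x = 0 (mod 5) together with x = 1 (mod 24):
print(crt([0, 1], [5, 24]))  # 25, so the general solution is 25 + 120k
```

Adding one period, $25 + 120 = 145$, lands in the requested range $[45, 200)$.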
2,294,585
<p>I can't figure out why $((p → q) ∧ (r → ¬q)) → (p ∧ r)$ isn’t a tautology. </p> <p>I tried solving it like this: </p> <p>$$((p ∧ ¬p) ∨ (r ∧ q)) ∨ (p ∧ r)$$ resulting in $(T) ∨ (p ∧ r)$ in the end that should result in $T$. What am I doing wrong?</p>
User4407
81,495
<p>It looks like you swapped a $p$ for $q$.</p> <p>You have:</p> <p>$$((p ∧ ¬p) ∨ (r ∧ q)) ∨ (p ∧ r)$$</p> <p>When it should be </p> <p>$$((p ∧ ¬q) ∨ (r ∧ q)) ∨ (p ∧ r)$$</p> <p>It also looks like you made the mistake of thinking that $p ∧ ¬p$ is true. That would be the case in classical logic if it were OR rather than AND: $p ∨ ¬p$.</p> <p>The expression simplifies by the <a href="https://en.wikipedia.org/wiki/Consensus_theorem" rel="nofollow noreferrer">consensus rule</a> to the following:</p> <p>$$(p \land \neg q) \vee (q \land r)$$</p>
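An exhaustive truth-table check confirms both claims: the original formula is not a tautology, and the consensus-rule form is equivalent to it (the code and names are mine):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def formula(p, q, r):
    # ((p -> q) and (r -> not q)) -> (p and r)
    return implies(implies(p, q) and implies(r, not q), p and r)

rows = list(product([False, True], repeat=3))

# Not a tautology: (p, q, r) = (False, False, False) already makes it false.
assert not formula(False, False, False)

# The consensus-rule simplification agrees with the formula on every row.
assert all(formula(p, q, r) == ((p and not q) or (q and r)) for p, q, r in rows)
print("not a tautology; consensus form verified")
```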
3,911,297
<p>Chapter 12 - Problem 26)</p> <blockquote> <p>Suppose that <span class="math-container">$f(x) &gt; 0$</span> for all <span class="math-container">$x$</span>, and that <span class="math-container">$f$</span> is decreasing. Prove that there is a <em>continuous</em> decreasing function <span class="math-container">$g$</span> such that <span class="math-container">$0 &lt; g(x) \le f(x)$</span> for all <span class="math-container">$x$</span>.</p> </blockquote> <p>So this question has already been asked and &quot;solved&quot; on MSE 9 years ago <a href="https://math.stackexchange.com/questions/30777/spivaks-calculus-chapter-12-problem-26">here</a>, but the accepted answer isn't very detailed, and I think it is in fact flawed (or I've just misunderstood it). I tried commenting to open up the question again, but it seems pretty dead now, hence why I'm making this follow up.</p> <p>The answer says to <em>&quot;make <span class="math-container">$g$</span> piecewise linear with <span class="math-container">$g(n) = f(n+1)$</span>&quot;</em>. Can someone explain what this means exactly? I will write my thoughts below, but it's a lot, so feel free to skip.</p> <p>My thoughts: Notice that if we try to simply let <span class="math-container">$g(x) = f(x+1)$</span>, then it works perfectly except for the fact that <span class="math-container">$g$</span> may not be continuous, because <span class="math-container">$f$</span> need not be continuous (otherwise we could just let <span class="math-container">$g(x) = f(x)$</span> in that case!). So if we could just modify this <span class="math-container">$g$</span> to make it continuous somehow then we're done.</p> <p>Fortunately, <span class="math-container">$f$</span> is decreasing on <span class="math-container">$\mathbb{R}$</span>, which means the left and right limits do exist, however they might disagree. 
This means <span class="math-container">$f$</span> can only have jump discontinuities that jump downwards.</p> <p>So what if we took all the points in <span class="math-container">$\mathbb{R}$</span> where <span class="math-container">$f$</span> has a jump discontinuity, and just joined lines between them? (I think this is what the answer meant by piecewise linear function?) This would guarantee that <span class="math-container">$g$</span> is continuous, however, this approach has some fixable flaws.</p> <p>First flaw, for starters, it isn't necessarily true that this <span class="math-container">$g$</span> would be always smaller than <span class="math-container">$f$</span>! For example, consider this picture, where <span class="math-container">$f$</span> is the red function, and <span class="math-container">$g$</span> is the black function:</p> <p><a href="https://i.stack.imgur.com/WrfB5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WrfB5.png" alt="" /></a></p> <p>Sure <span class="math-container">$g$</span> is continuous now, but we've lost the <span class="math-container">$g(x) \leq f(x)$</span> property! We can fix this easily by letting <span class="math-container">$g$</span> be the smaller of the piecewise linear function and <span class="math-container">$f$</span>. 
Then the picture becomes like this:</p> <p><a href="https://i.stack.imgur.com/2if25.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2if25.png" alt="" /></a></p> <p>To make this a bit more rigorous, first we need the set of all the points where <span class="math-container">$f$</span> is discontinuous:</p> <p><span class="math-container">$S = \big\{x: \lim_{y \rightarrow x^-} (f(y)) \neq \lim_{y \rightarrow x^+} (f(y)) \big\}$</span></p> <p>Then let <span class="math-container">$l(x)$</span> be the piecewise linear function joining all points <span class="math-container">$\big(x, \lim_{y \rightarrow x^+} [f(y)] \big)$</span>, where <span class="math-container">$x \in S$</span>.</p> <p>Then finally let <span class="math-container">$g(x) = \text{Min}\big(f(x), l(x)\big)$</span>.</p> <p>Now this would work fine, so long as <span class="math-container">$l(x)$</span> is well defined. But must it necessarily be so? I'm not sure, and this is where I'm stuck. For example, what if the set <span class="math-container">$S$</span> contains not isolated points, but an entire interval of points? For example, what if <span class="math-container">$f$</span> is a function that has a jump discontinuity at every point in <span class="math-container">$[0,1]$</span>? Then to construct <span class="math-container">$l(x)$</span>, we'd need to join all these jump discontinuity points in <span class="math-container">$[0,1]$</span>, and it isn't obvious at all that we can do that.</p> <p>Now you might say that an interval of jump discontinuities is impossible, and you'd be right. However the proof of that comes much much later in the book and is certainly beyond the knowledge of this chapter. 
But more importantly, even if <span class="math-container">$f$</span> doesn't have an interval of jump discontinuities, there are other ways <span class="math-container">$l(x)$</span> can be questionable.</p> <p>Consider this monstrous example: <span class="math-container">$f(x) = \begin{cases} 1-\frac{x}{2^{\lfloor 1 - \log_2(|x|)-1 \rfloor}} &amp; :x \leq \frac{1}{2} \\ \frac{6}{6x+5} &amp; :x &gt; \frac{1}{2} \end{cases} \Biggr\}$</span></p> <p>Looks something like this (click <a href="https://www.desmos.com/calculator/j13mpqs9io" rel="nofollow noreferrer">here</a> to view in Desmos): <a href="https://i.stack.imgur.com/3dPdp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3dPdp.png" alt="" /></a></p> <p>As it turns out, this <span class="math-container">$f$</span> satisfies the question's premises, but it also has the cool property that it contains an infinite number of jump discontinuities in any neighbourhood around 0! As such, in order to construct the piecewise linear function for it, you'd have to join lines between an infinite number of points, and still have a function, which might be possible? But for sure it isn't obvious that it is so...</p> <p>Those are my thoughts on the problem. So Q1) Is my approach so far in the right direction? Or could it be that I've missed some super simple trick that will make the problem trivial and everything I've said above redundant? Q2) If I am in the right direction, how can I justify taking a linear piecewise function for an infinite number of points in a given interval?</p>
Donald Splutterwit
404,247
<p>Your solution is fine. You could do it in one line ... <span class="math-container">\begin{eqnarray*} a=\frac{(a^{12})^3}{(a^7)^5}. \end{eqnarray*}</span></p>
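Returning to the Spivak problem: the construction the asker quotes, a piecewise-linear $g$ with $g(n) = f(n+1)$, can be sanity-checked numerically on a concrete decreasing positive $f$ with a jump. The specific $f$ below and all names are my own illustration:

```python
import math

def f(x):
    # A positive, decreasing function with a jump discontinuity at 0.
    return math.exp(-x) + (1.0 if x < 0 else 0.0)

def g(x):
    # Piecewise-linear interpolation through the knots (n, f(n+1)).
    n = math.floor(x)
    t = x - n
    return (1 - t) * f(n + 1) + t * f(n + 2)

# On [n, n+1] the knot values f(n+1) and f(n+2) are both <= f(x)
# because f is decreasing, so 0 < g <= f; g is continuous and decreasing.
xs = [k / 10 for k in range(-50, 50)]
assert all(0 < g(x) <= f(x) for x in xs)
assert all(g(xs[i]) >= g(xs[i + 1]) for i in range(len(xs) - 1))
print("0 < g <= f and g decreasing on the sample grid")
```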
1,276,177
<p>Four fair 6-sided dice are rolled. The sum of the numbers shown on the dice is 8. What is the probability that 2's were rolled on all four dice?</p> <hr> <p>The answer should be 1/(# of ways 4 numbers sum to 8) However, I can't find a way, other than listing all possibilities, to find the denominator. </p>
Michael Hardy
11,667
<p>Possibly brute force is nearly the best available option here. \begin{align} 2+2+2+2 &amp; &amp; &amp; 1 \text{ way} \\[6pt] 1+2+2+3 &amp; &amp; &amp; 12\text{ ways} \\[6pt] 1+1+2+4 &amp; &amp; &amp; 12\text{ ways} \\ 1+1+3+3 &amp; &amp; &amp; 6\text{ ways} \\[6pt] 1+1+1+5 &amp; &amp; &amp; 4\text{ ways} \end{align}</p> <p>So you get $1/35$.</p>
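The count of $35$ is easy to confirm by direct enumeration (a brute-force cross-check, not part of the answer):

```python
from itertools import product
from fractions import Fraction

# All ordered rolls of four 6-sided dice summing to 8.
rolls = [r for r in product(range(1, 7), repeat=4) if sum(r) == 8]
all_twos = [r for r in rolls if r == (2, 2, 2, 2)]

print(len(rolls))                           # 35
print(Fraction(len(all_twos), len(rolls)))  # 1/35
```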
4,013,638
<p>a) <span class="math-container">$F(x)&gt;0$</span> for every <span class="math-container">$x&gt;0$</span>.<br> b) <span class="math-container">$F(x)&gt;F(e)$</span> for every <span class="math-container">$x&gt;1$</span>.<br> c) <span class="math-container">$x=e^{-2}$</span> is an inflection point. <br> d) None of the answers are correct. <br> <br> Note: This was a question in my exam and I wrote it from my memory so if there's something that doesn't make sense it might be that I messed up. <br> My Work was to take second derivative of <span class="math-container">$F$</span> and saw that <span class="math-container">$F''(e^{-2})\ne 0$</span>, so I took out that option, and didn't know how to decide between (a) or (b), but I noticed that for (a) I might have to flip the integral, so I just went with (b), I would appreciate any explanation on how to really approach this question and not just with my intuition. (and know if I got it right too).</p>
Bernard
202,857
<p>If you apply the <em>First fundamental theorem</em> of integral calculus, you have <span class="math-container">$$F'(x)=x^{\sqrt x},\enspace \text{hence }\quad F''(x)=x^{\sqrt x}\biggl(\frac{\ln x}{2\sqrt x}+\frac{\sqrt x}x\biggr)=x^{\sqrt x}\,\frac{\ln x+2}{2\sqrt x}$$</span> Therefore, <span class="math-container">$F''(x)=0$</span> for <span class="math-container">$x=\mathrm e^{-2}$</span>, and it changes sign at that point, which means we have an inflection point.</p>
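A numerical cross-check of the sign change of $F''$ at $x=\mathrm e^{-2}$, using finite differences on $F'(x)=x^{\sqrt x}$ (the names are mine):

```python
import math

E2 = math.exp(-2)

def F_prime(x):
    return x ** math.sqrt(x)         # F'(x) = x^sqrt(x)

def F_second(x, h=1e-6):
    # Central-difference estimate of F''(x).
    return (F_prime(x + h) - F_prime(x - h)) / (2 * h)

# F'' is negative below e^{-2}, zero there, and positive above:
assert F_second(0.5 * E2) < 0 < F_second(2 * E2)
assert abs(F_second(E2)) < 1e-3
print("sign change at e^{-2} confirmed")
```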
428,291
<p>What is the solution of the following system?</p> <p>$$ \begin{align} a \cdot e-b \cdot d &amp; =\alpha \\ a \cdot f-c \cdot d &amp; =\beta \\ b \cdot f-c \cdot e &amp; =\gamma \end{align} $$</p> <p>Where the unknowns $a,b,c,d,e,f$ are real numbers and $\alpha, \beta, \gamma$ are fixed real numbers.</p> <p>I tried to solve it by giving it some geometric meaning, but failed.</p>
Glen O
67,842
<p>Hint: the system can actually be interpreted as $$ \begin{pmatrix}a\\b\\c\end{pmatrix}\times \begin{pmatrix}d\\e\\f\end{pmatrix}= \begin{pmatrix}\gamma\\-\beta\\\alpha\end{pmatrix} $$ where $\times$ is the vector cross product.</p>
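The identification can be checked componentwise with sample numbers (the numbers and names are mine):

```python
def cross(u, v):
    # Standard 3D vector cross product.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

a, b, c, d, e, f = 1, 2, 3, 4, 5, 6
alpha = a * e - b * d   # -3
beta  = a * f - c * d   # -6
gamma = b * f - c * e   # -3

# The three equations are exactly the components (gamma, -beta, alpha):
assert cross((a, b, c), (d, e, f)) == (gamma, -beta, alpha)
print("cross-product identification verified")
```

One consequence (my observation, not part of the hint): since every vector of $\mathbb{R}^3$ arises as a cross product of two vectors, the system is solvable for any $(\alpha,\beta,\gamma)$, and never uniquely.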
86,657
<p>Throughout my upbringing, I encountered the following annotations on Gauss's diary in several so-called accounts of the history of mathematics:</p> <blockquote> <p>"... A few of the entries indicate that the diary was a strictly private affair of its author's (sic). Thus for July 10, 1796, there is the entry</p> <p>ΕΥΡΗΚΑ! num = Δ + Δ + Δ.</p> <p>Translated , this echoes Archimedes' exultant "Eureka!" and states that every positive integer is the sum of three triangular numbers—such a number is one of the sequence 0, 1, 3, 6, 10, 15, ... where each (after 0) is of the form $\frac{1}{2}n(n+1)$, $n$ being a positive integer. Another way of saying the same thing is that every number of the form $8n+3$ is a sum of three odd squares... It is not easy to prove this from scratch.</p> <p>Less intelligible is the cryptic entry for October 11, 1796, "Vicimus GEGAN." What dragon had Gauss conquered this time? Or what giant had he overcome on April 8, 1799, when he boxes REV. GALEN up in a neat rectangle? Although the meaning of these is lost forever the remaining 144 are for the most part clear enough." "</p> </blockquote> <p>The preceding paragraphs have been quoted <i>verbatim</i> from J. Newman's The World of MATHEMATICS (Vol. I, pages 304-305) and the questions that I pose today were motivated from my recent spotting of [2]:</p> <ul> <li><p>Why is there no mention whatsoever to the REV. GALEN inscription in either Klein's or Gray's work?</p></li> <li><p>What is the reason that E. T. Bell expressed that Gauss had written the Vicimus GEGAN entry on October 11, 1796? According to Klein, Gray, and (even) the Wikipedians it was written on October 21, 1796. As far as I understand, Klein and Gray are just reporting the dates that appear on the original manuscript. Did Bell actually go over it?</p></li> <li><p>Last but not least, is there a compendium out there of all known potential explanations to the Vicimus GEGAN enigma? 
The only ones whereof I have notice can be found on page 112 of [1]:</p></li> </ul> <blockquote> <p>"... Following a suggestion of Schlesinger [Gauss, Werke, X.1, part 2, 29], Biermann ... proposed that GA stood for Geometricas, Arithmeticas, so reading GEGAN in reverse as Vicimus N[exum] A[rithmetico] G[eometrici cum] E[xspectationibus] G[eneralibus]. Schumann has since proposed other variants; including, for GA, (La) G(rangianae) A(nalysis)..."</p> </blockquote> <p>Heartfelt thanks for your comments, reading suggestions, and replies.</p> <p><strong>References</strong></p> <ol> <li>J. J. Gray. " A commentary on Gauss's mathematical diary, 1796-1814, with an English translation". <em>Expo. Math.</em> 2 (1984), 97-130.</li> <li>F. Klein. "Gauß' wissenschaftliches Tagebuch 1796–1814". <em>Math. Ann.</em> 57 (1903), 1–34.</li> <li>M. Perero. Historia e Historias de Matemáticas. Grupo Editorial Iberoamérica, 1994, pág. 40.</li> </ol>
Barry Cipra
15,837
<p>I think part of the answer may be found by consulting Volume <em>X</em> of Gauss's <em>Werke</em>. "REV. GALEN" doesn't actually appear in the <em>Tagebuch</em> itself, a facsimile of which appears following page 482. It was jotted down by Gauss elsewhere, as explained on page 539, in the commentary (which runs for nearly three pages) on the <em>Tagebuch</em> entry dated April 8, 1799.</p> <p>Just above the excerpted paragraphs from <em>Men of Mathematics</em>, Bell writes, "A facsimile reproduction [of Gauss's diary] was published in 1917 in the tenth volume (part 1) of Gauss' [sic] collected works, together with an exhaustive analysis of its contents by several expert editors." I think it's safe to assume that Bell actually looked at this 1917 publication (and I think it's reasonable to assume that the 1973 edition I'm looking at right now is not substantially different), and I think it's fair to conjecture that Bell paid more attention -- but maybe not enough! -- to the transcription and commentary than he did to the facsimile.</p> <p>As for the misdating of "Vicimus GEGAN," the correct date is clear enough in both the facsimile and in the transcription on page 507. For one thing, it appears immediately below an entry dated October 18. My guess is that either Bell or the typesetter made a simple mistake.</p> <p>Finally, a useful reference, especially for "GEGAN" (and a related notation, "WAEGEGAN") is <em>Mathematisches Tagebuch : 1796-1814</em>. Unfortunately, my command of German is insufficient to give a good synopsis of what's to be found there. I hope an actual historian will weigh in here. </p> <p><strong>Added Feb. 21</strong>: It turns out there is a 2005 edition of <em>Mathematisches Tagebuch 1796-1814</em> (the copy I found earlier was a 1985 edition) which has an update referring to a 1997 paper by Kurt Biermann. Here is a relevant Zentralblatt review of that paper:</p> <p>Zbl 0888.01025 Biermann, Kurt-R. Vicimus NAGEG. 
Confirmation of a hypothesis. (Vicimus NAGEG. Bestätigung einer Hypothese.) (German) [J] Mitt., Gauss-Ges. Gött. 34, 31-34 (1997). </p> <p>The author, a well-known expert on Carl Friedrich Gauss, reports on a Gauss-manuscript, which was found recently in the Göttingen astronomical observatory by H. Grosser and which confirms a hypothesis by Biermann from 1963. At that time Biermann read the frequent code GEGAN in Gauss' diary and manuscripts in inverse order as standing for (vicimus) N[exum medii] A[rithmetico-] G[eometricum] E[xpectationibus] G[eneralibus]. This in turn was alluding (in Biermann's opinion) to Gauss' discovery of the connections between the arithmetic geometric mean and the general theory of elliptic functions. The recently found Gauss-manuscript shows, for the first time, the code NAGEG, and, on the same sheet (which is reproduced in the article), the well-known GEGAN alongside with the picture (by Gauss' hand) of a lemniscate. Thus a remarkable historical hypothesis has been essentially solved after more than three decades. [R.Siegmund-Schultze (Berlin)]</p>