1,541,800
<p>I happened to stumble upon the following matrix: $$ A = \begin{bmatrix} a &amp; 1 \\ 0 &amp; a \end{bmatrix} $$</p> <p>After trying a bunch of different examples, I noticed the following remarkable pattern. If $P$ is a polynomial, then: $$ P(A)=\begin{bmatrix} P(a) &amp; P'(a) \\ 0 &amp; P(a) \end{bmatrix}$$</p> <p>where $P'(a)$ is the derivative evaluated at $a$.</p> <p>Furthermore, I tried extending this to other matrix functions, for example the matrix exponential, and Wolfram Alpha tells me: $$ \exp(A)=\begin{bmatrix} e^a &amp; e^a \\ 0 &amp; e^a \end{bmatrix}$$ and this does in fact follow the pattern, since the derivative of $e^x$ is itself!</p> <p>Furthermore, I decided to look at the function $P(x)=\frac{1}{x}$. If we interpret the reciprocal of a matrix to be its inverse, then we get: $$ P(A)=\begin{bmatrix} \frac{1}{a} &amp; -\frac{1}{a^2} \\ 0 &amp; \frac{1}{a} \end{bmatrix}$$ And since $P'(a)=-\frac{1}{a^2}$, the pattern still holds!</p> <p>After trying a couple more examples, it seems that this pattern holds whenever $P$ is any rational function.</p> <p>I have two questions:</p> <ol> <li><p>Why is this happening?</p></li> <li><p>Are there any other known matrix functions (which can also be applied to real numbers) for which this property holds?</p></li> </ol>
Dac0
291,786
<p>This is a special case of a general statement: if <span class="math-container">$J$</span> is a Jordan block and <span class="math-container">$f$</span> a matrix function, then <span class="math-container">\begin{equation} f(J)=\left(\begin{array}{ccccc} f(\lambda_{0}) &amp; \frac{f'(\lambda_{0})}{1!} &amp; \frac{f''(\lambda_{0})}{2!} &amp; \ldots &amp; \frac{f^{(n-1)}(\lambda_{0})}{(n-1)!}\\ 0 &amp; f(\lambda_{0}) &amp; \frac{f'(\lambda_{0})}{1!} &amp; &amp; \vdots\\ 0 &amp; 0 &amp; f(\lambda_{0}) &amp; \ddots &amp; \frac{f''(\lambda_{0})}{2!}\\ \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \frac{f'(\lambda_{0})}{1!}\\ 0 &amp; 0 &amp; 0 &amp; \ldots &amp; f(\lambda_{0}) \end{array}\right) \end{equation}</span> where <span class="math-container">\begin{equation} J=\left(\begin{array}{ccccc} \lambda_{0} &amp; 1 &amp; 0 &amp; 0\\ 0 &amp; \lambda_{0} &amp; 1&amp; 0\\ 0 &amp; 0 &amp; \ddots &amp; 1\\ 0 &amp; 0 &amp; 0 &amp; \lambda_{0} \end{array}\right) \end{equation}</span> This statement can be proved in various ways (none of them short), and it is a well-known formula. You can find it in various books, for example in Horn and Johnson's <em>Matrix Analysis</em>.</p>
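The $2\times 2$ case of this formula is easy to check numerically. Here is a small sketch in plain Python (the helper names `mat_mul`, `poly_of_matrix` and the sample polynomial are our own choices, not anything from the answer):

```python
# Sanity check of the 2x2 Jordan-block formula: for A = [[a, 1], [0, a]],
# P(A) should equal [[P(a), P'(a)], [0, P(a)]].

def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def poly_of_matrix(coeffs, A):
    """Evaluate sum(coeffs[k] * A^k) by Horner's rule; coeffs[0] is the constant term."""
    I = [[1, 0], [0, 1]]
    result = [[0, 0], [0, 0]]
    for c in reversed(coeffs):
        result = mat_add(mat_mul(result, A), mat_scale(c, I))
    return result

a = 3
A = [[a, 1], [0, a]]
coeffs = [5, -2, 0, 7]            # P(x) = 5 - 2x + 7x^3
P_a = 5 - 2 * a + 7 * a ** 3      # P(3)  = 188
dP_a = -2 + 21 * a ** 2           # P'(3) = 187
assert poly_of_matrix(coeffs, A) == [[P_a, dP_a], [0, P_a]]
```

Exact integer arithmetic makes the comparison an equality rather than a floating-point approximation.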
1,541,800
WW1
88,679
<p>$$ A =a \mathbb{I}+M $$ where $$M = \begin{bmatrix} 0 &amp; 1 \\ 0 &amp; 0 \end{bmatrix}$$</p> <p>Most relevantly, $M$ is a nilpotent upper triangular matrix: $M^n=0$ for $n \gt 1$.</p> <p>$$ A^n=\left(a \mathbb{I}+M \right)^n $$ Since these matrices ($\mathbb{I}$ and $M$) commute, we can write out the binomial formula: $$A^n = \sum_{i=0}^n\binom ni a^i \mathbb{I}^iM^{n-i}$$ The only non-zero terms are $i=n$ and $i=n-1$, so $$A^n= a^n \mathbb{I}+na^{n-1}M $$</p>
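The final identity can be verified with exact integer arithmetic; the sketch below (our own helper names) compares a brute-force power of $A$ with $a^n\mathbb{I} + na^{n-1}M$ for several $n$:

```python
# Check A^n = a^n I + n a^{n-1} M for A = aI + M, where M = [[0,1],[0,0]]
# is nilpotent (M^2 = 0). Exact integer 2x2 arithmetic throughout.

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(X, n):
    R = [[1, 0], [0, 1]]          # identity
    for _ in range(n):
        R = mul(R, X)
    return R

a = 5
A = [[a, 1], [0, a]]
for n in range(1, 8):
    expected = [[a ** n, n * a ** (n - 1)], [0, a ** n]]
    assert mat_pow(A, n) == expected
```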
1,541,800
CR Drost
42,154
<p>Just to expand on the comment of @Myself above...</p> <h1>Adjoining algebraics as matrix maths</h1> <p>Sometimes in mathematics or computation you can get away with adjoining an algebraic number $\alpha$ to some simpler ring $R$ of numbers like the integers or rationals $\mathbb Q$, and these extended numbers characterize all of your solutions. If this number obeys the algebraic equation $$\alpha^n = \sum_{k=0}^{n-1} q_k \alpha^k$$ for some $q_k \in R,$ we call the above polynomial equation $Q(\alpha) = 0$, and then we can adjoin this number by using polynomials of degree $n - 1$ with coefficients from $R$ and evaluated at $\alpha$: the ring is formally denoted $R[\alpha]/Q(\alpha),$ "the quotient ring of the polynomials with coefficients in $R$ of some parameter $\alpha$ given their equivalence modulo polynomial division by $Q(\alpha).$"</p> <p>If you write down the action of the multiplication $(\alpha\cdot)$ on the vector in $R^n$ corresponding to such a polynomial in this ring, it will look like the matrix $$\alpha \leftrightarrow A = \begin{bmatrix}0 &amp; 0 &amp; 0 &amp; \dots &amp; 0 &amp; q_0\\ 1 &amp; 0 &amp; 0 &amp; \dots &amp; 0 &amp; q_1 \\ 0 &amp; 1 &amp; 0 &amp; \dots &amp; 0 &amp; q_2 \\ \dots &amp; \dots &amp; \dots &amp; \dots &amp; \dots &amp; \dots \\ 0 &amp; 0 &amp; 0 &amp; \dots &amp; 0 &amp; q_{n-2} \\ 0 &amp; 0 &amp; 0 &amp; \dots &amp; 1 &amp; q_{n-1} \\ \end{bmatrix},$$ and putting such a matrix in row-reduced echelon form is actually so simple that we can immediately strengthen our claim: the above ring $R[\alpha]/Q(\alpha)$ is a field when $R$ is a field and $q_0 \ne 0.$ The matrix $\sum_k p_k A^k$ is then a matrix representation of the polynomial $\sum_k p_k \alpha^k$ which implements all of the required operations as matrix operations.</p> <h1>A simple example before \$#!&amp; gets real.</h1> <p>For example, there is a famous "O(1)" solution to generating the $k^\text{th}$ Fibonacci number which comes from observing that the recurrence 
$F_k = F_{k-1} + F_{k-2}$ can be solved by other functions for other boundary conditions than $F_{0,1} = 0,1$, and one very special set of solutions looks like $F_k = \phi^k$ for some $\phi$. Plugging and chugging, we get the algebraic equation $\phi^2 = \phi + 1,$ which we can solve for the golden ratio $\varphi = (1 + \sqrt{5})/2$ and its negative reciprocal $\bar\varphi = -\varphi^{-1}= (1-\sqrt{5})/2.$ However, since the Fibonacci recurrence relation is linear, this means that any linear combination $$F_n = A \varphi^n + B \bar\varphi^n$$obeys the Fibonacci recurrence, and we can actually just choose $A = \sqrt{1/5},\; B = -\sqrt{1/5}$ to get the standard $F_{0,1} = 0,1$ starting points: this is the Fibonacci sequence defined purely in terms of exponentiation.</p> <p>But, there is a problem with using this on a computer: the <code>Double</code> type that a computer has access to has only finite precision and the above expressions will round off wildly. What we <em>really</em> want is to use our arbitrary-precision <code>Integer</code> type to calculate this. We can do this with matrix exponentiation in a couple of different ways. The first would be to adjoin the number $\sqrt{5}$ to the integers, solving $\alpha^2 = 5.$ Then our ring consists of the numbers $a + b \sqrt{5}$ which are the matrices: $$\begin{bmatrix}a &amp; 5b\\b &amp; a\end{bmatrix}.$$ And that's easy-peasy to program. Your "unit vectors" can similarly be chosen as $1$ and $\varphi$ however, and that leads to the "no-nonsense" matrix $$a + b \varphi = \begin{bmatrix}a &amp; b\\ b &amp; a + b\end{bmatrix}$$ which I'm calling "no-nonsense" because for $a = 0, b = 1$ this is actually the Fibonacci recurrence relation on vectors $[F_{n-1}, F_{n}],$ which is a way to get to this result without going through the above hoops. 
There is also an interesting "symmetric" version where $\varphi$ and $\bar\varphi$ are our "unit vectors" and the matrix is (I think) $a \varphi + b \bar\varphi \leftrightarrow [2a-b,\; -a+b;\;a - b,\; -a + 2b].$ </p> <p>(In any case, it turns out that the supposedly "O(1)" algorithm is not: even when we exponentiate-by-squaring we have to perform $\log_2(k)$ multiplications of numbers $m_i = F_{2^i}$ which are growing asymptotically like $F_k \approx \varphi^k/\sqrt{5},$ taking some $O(k^2)$ time, just like adding up the numbers directly. The big speed gain is that the adding-bignums code will "naturally" allocate new memory for each bignum and will therefore write something like $O(k^2)$ bits in memory if you don't specially make it intelligent; the exponentiation improves this to $O(k~\log(k))$ and possibly even to $O(k)$ since the worst of these happen only at the very end.)</p> <h1>Going off of the algebraics into complex numbers</h1> <p>Interestingly, we don't need to restrict ourselves to real numbers when we do the above. We know that in $\mathbb R$ there is no $x$ satisfying $x^2 = -1,$ so the above prescribes that we extend our field to the field $$ a + b \sqrt{-1} \leftrightarrow \begin{bmatrix}a &amp; -b\\b &amp; a\end{bmatrix}.$$When we replace $a$ with $r\cos\theta$ and $b$ with $r\sin\theta$ we find out that in fact these "complex numbers" are all just scaled rotation matrices:$$ r (\cos\theta + i~\sin\theta) = r \begin{bmatrix}\cos\theta &amp; -\sin\theta\\\sin\theta &amp; \cos\theta\end{bmatrix} = r~R_\theta,$$giving us an immediate geometric understanding of a complex number as a scaled rotation (and then analytic functions are just the ones which locally look like a scaled rotation.)</p> <h1>Going way off the algebraics into infinitesimals.</h1> <p>Another interesting way to go with this is to consider adjoining a term $\epsilon$ which is not zero, but which squares to zero. 
This formalizes the idea of an "infinitesimal" with no real effort, although as mentioned before, the resulting algebra is doomed to be merely a ring. (We could adjoin an inverse $\infty = \epsilon^{-1}$ too, but presumably we'd then have $\infty^2 = \infty$ which breaks associativity, $(\epsilon\cdot\infty)\cdot\infty \ne \epsilon\cdot(\infty\cdot\infty),$ unless we insert more infinities to push the problem out to infinity.)</p> <p>Anyway, we then have the matrix: $$a + b \epsilon \leftrightarrow a I + b E = \begin{bmatrix}a &amp; 0\\ b &amp; a\end{bmatrix}.$$ It's precisely the transpose of what you were looking at. Following the rules, $(a + b \epsilon)^n = a^n + n~a^{n-1}~b~\epsilon$ with all of the other terms disappearing. Expanding it out we find by Taylor expansion that $f(x + \epsilon) = \sum_n \frac{f^{(n)}(x)}{n!} \epsilon^n = f(x) + f'(x) \epsilon,$ and this is the property that you have seen in your own examination.</p> <p>We can similarly keep infinitesimals out to second order with a 3x3 matrix $$a + b \epsilon + c \epsilon^2 \leftrightarrow \begin{bmatrix}a &amp; 0 &amp; 0\\ b &amp; a &amp; 0\\ c &amp; b &amp; a\end{bmatrix}$$Then $f(x I + E) = f(x) I + f'(x) E + f''(x) E^2 / 2$ straightforwardly.</p>
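The adjoined-$\epsilon$ picture is exactly forward-mode automatic differentiation with dual numbers. As an editorial illustration (the class `Dual` and the sample function are our own, not from the answer), carrying a pair $(a, b)$ for $a + b\epsilon$ through ordinary arithmetic computes $f(x)$ and $f'(x)$ together:

```python
# Dual numbers a + b*eps with eps^2 = 0, mirroring the 2x2 matrix aI + bE.
# Arithmetic on them yields f(x) and f'(x) simultaneously.

class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b          # value and infinitesimal part

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

    def __truediv__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # 1/(c + d eps) = 1/c - (d/c^2) eps, as in the question's 1/x example
        return self * Dual(1 / o.a, -o.b / o.a ** 2)

    def __rtruediv__(self, o):
        return Dual(o) / self

def f(x):
    return 3 * x * x + 2 / x        # f'(x) = 6x - 2/x^2

r = f(Dual(2.0, 1.0))               # evaluate at x = 2 + eps
assert abs(r.a - 13.0) < 1e-12      # f(2)  = 13
assert abs(r.b - 11.5) < 1e-12      # f'(2) = 11.5
```

The same arithmetic rules could be implemented with the $2\times 2$ matrices above; the `Dual` class just stores the two entries that matter.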
1,609,945
<p>Prove that the set of points for which $\arg(z-1)=\arg(z+1) +\pi/4$ is part of the set of points for which $|z-j|=\sqrt 2$, and show clearly in a diagram which part.</p> <p>Intuition for the geometrical interpretation of this would be really appreciated.</p> <p>I was also wondering about the greatest and least values of $|z+k|$, and why this represents a circle. The question might be too basic, but I just need an intuition behind this. The concept of vectors is quite hard to follow. Thanks in advance.</p>
OnoL
65,018
<p><strong>Hint:</strong></p> <ol> <li>Define for each $t\in T$ a function $d^t:S\to \mathbf R_+$ such that $d^t(s)=|s-t|$. Prove that: (i) $d^t$ is continuous for each $t\in T$ and (ii) $d^t$ achieves a minimum $m(t)&gt;0$. </li> <li>Let $M=\{m(t)\,|\,t\in T\}$. Prove that $\inf M&gt;0$. (<strong>further hint:</strong> suppose not. Then for each $n\in\mathbf N$ there are $t_n\in T$ and $s_n\in S$ such that $|t_n-s_n|&lt;\frac 1n$. Since the sequence $\{s_n\}\subseteq S$ and $S$ is compact, you can find a convergent subsequence $\{s_{n_k}\}$ of $\{s_n\}$ whose limit is $s\in S$. Show that $\lim_{k\to\infty}t_{n_k}=s$. Then argue that $s\in T$. Conclude.)</li> </ol>
1,999,352
<p>So what really is the meaning of a metric space, and why is it so important in topology?</p>
layman
131,740
<p>The idea of a mathematical space is that it is a set endowed with some type of structure (and the word structure just means there's more to the definition of the set than just a collection of elements -- there's an additional feature added to the set; we call this feature the structure of the set). Here are some examples to help you get a feel for it:</p> <ol> <li><p><strong>Metric space</strong>: This is a set $X$ of elements which we endow with a distance function $d: X \times X \to [0,\infty)$. You input two points $x$ and $y$ from $X$ into the distance function, and we interpret the output $d(x,y)$, which is a number, as the distance between $x$ and $y$. Different metric functions give different notions/interpretations of distance, but they all satisfy three important properties which allow us to call them <em>distance</em> functions (i.e., metrics). So the structure here is the idea of distance between two points.</p></li> <li><p><strong>Topological space</strong>: This is a set $X$ of elements which we endow with a <em>topology</em>, i.e., a collection of subsets which we call <em>open</em>. We can endow $X$ with different topologies to give us different ideas of what constitutes an open set, and under these different topologies we can examine if functions between topological spaces are open, and if sequences in a topological space converge. You can have different topologies, but all topologies must satisfy three important properties which allow us to call them topologies. So the structure here is a collection of subsets of $X$ which we call open.</p></li> <li><p><strong>Group</strong>: In Abstract Algebra, a group is a set $X$ of elements which we endow with a certain operation (call it multiplication, but it's not the same as ordinary number multiplication -- it's more general). Under this multiplication, the set now has structure because we can look at how two elements interact with each other under multiplication. 
If the elements in the set behave in such a way that they satisfy some important properties, we call the set a <em>group</em>. So the structure here is an operation between two elements called multiplication, and from here you can see how elements interact with each other under this operation. A group can be thought of as a space.</p></li> </ol>
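To make point 1 concrete, here is a small editorial sketch (function names are ours) that checks the three metric axioms for the usual absolute-value metric on a handful of real numbers:

```python
# Check the three metric axioms for d(x, y) = |x - y| on sample points:
# identity of indiscernibles, symmetry, and the triangle inequality.

def d(x, y):
    return abs(x - y)

points = [-2.5, 0.0, 1.0, 3.75]

for x in points:
    for y in points:
        assert (d(x, y) == 0) == (x == y)        # d(x,y)=0 iff x=y
        assert d(x, y) == d(y, x)                # symmetry
        for z in points:
            assert d(x, z) <= d(x, y) + d(y, z)  # triangle inequality
```

Any function satisfying these three checks on all of $X$ would equally qualify as a metric on it.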
2,687,932
<p>Let $x\in\mathbb{R}$. Prove that $x=-1$ if and only if $x^3+x^2+x+1=0$. This is a bi-conditional statement, thus to prove it we need to prove: <a href="https://i.stack.imgur.com/PRPSC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PRPSC.png" alt="enter image description here"></a></p>
Surb
154,545
<p>$$x=-1\implies x^3+x^2+x+1=(-1)^3+(-1)^2+(-1)+1=-1+1-1+1=0.$$ Conversely, $x^3+x^2+x+1=(x^2+1)(x+1)$, and $x^2+1\gt 0$ for every real $x$, so $x^3+x^2+x+1=0$ forces $x+1=0$, that is, $x=-1$.</p>
1,985,849
<p>I am really stumped by this equation. $$e^x=x$$ I need to prove that this equation has no real roots. But I have no idea how to start.</p> <p>If you look at the graphs of $y=e^x$ and $y=x$, you can see that those graphs do not meet anywhere. But I am trying to find an algebraic and rigorous proof. Any help is appreciated. ( $e^\infty=\infty$ looks like a solution but is it?)</p>
Astyx
377,528
<p>Study $f : x \mapsto e^x - x$.</p> <p>Its derivative is $x \mapsto e^x-1$, which vanishes at $0$, is negative before and positive after; thus $f$ attains a minimum at $0$, and that minimum is $f(0) = 1 \gt 0$. Hence $f$ never vanishes, and there are therefore no real solutions to the equation $e^x -x = 0$, i.e. $e^x = x$.</p>
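A quick numerical illustration of this argument (an editorial sketch, not part of the answer): on a grid of sample points, $f(x) = e^x - x$ stays positive and its smallest value occurs at $x = 0$.

```python
import math

# f(x) = e^x - x is positive everywhere; its minimum is f(0) = 1.
def f(x):
    return math.exp(x) - x

samples = [i / 10 for i in range(-50, 51)]   # grid on [-5, 5]
assert all(f(x) > 0 for x in samples)        # never crosses zero
assert min(samples, key=f) == 0.0            # grid minimum at x = 0
assert f(0) == 1.0
```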
90,548
<p>Suppose we are handed an algebra $A$ over a field $k$. What should we look at if we want to determine whether $A$ can or cannot be equipped with structure maps to make it a Hopf algebra?</p> <p>I guess in order to narrow it down a bit, I'll phrase it like this: what are some necessary conditions on an algebra for it to be a Hopf algebra?</p> <p>Thoughts so far:</p> <p>The first obvious condition is that $A$ must be augmented, i.e. there must be a nontrivial character $\varepsilon : A \to k$. Since this is generally not that hard to determine if we are given the algebra in some fairly concrete way, let's suppose that $A$ is given to us with an augmentation map.</p> <p>If $A$ is finite-dimensional, then $A$ must be a Frobenius algebra. But not every finite-dimensional Frobenius algebra is a Hopf algebra, e.g. $\Lambda^\bullet(k^2)$ is not a Hopf algebra if the characteristic of $k$ is not 2. And generally I am more interested in the infinite-dimensional case.</p> <p>All I can come up with is this: the category of finite-dimensional $A$-modules must be a (left) rigid monoidal category. But I don't know if that is a helpful observation: given a category with a forgetful functor to finite-dimensional vector spaces over some field, how can one prove that it can't be given the structure of a <s>braided</s> rigid monoidal category?</p> <p>And perhaps there are some homological invariants that one can look at?</p> <p>To sum up, the question is:</p> <h3>Question</h3> <p>Given a $k$-algebra $A$ and a nonzero character $\varepsilon : A \to k$, are there invariants we can look at in order to show that $A$ cannot be given the structure of a Hopf algebra?</p>
Vladimir Dotsenko
1,306
<p>A homological condition that might be useful: in the Hopf case, the Yoneda algebra $Ext_A^\bullet(k,k)$ embeds into the Hochschild cohomology $HH^\bullet(A,A)$, moreover, there is a Gerstenhaber algebra structure on the Yoneda algebra, and this embedding is an embedding of Gerstenhaber algebras. </p> <p>Reference: <a href="http://www.ams.org/journals/proc/2004-132-10/S0002-9939-04-07274-0/S0002-9939-04-07274-0.pdf" rel="nofollow noreferrer">this</a> article of Marco Farinati and Andrea Solotar.</p> <p><s>I have a feeling that it would give some information already for exterior algebras, though I don't have time to check it carefully now.</s> Of course, to use this observation for exterior algebras, the graded commutative product from the Gerstenhaber structure (highlighted by <a href="https://mathoverflow.net/users/6481/mt">mt</a> in his answer) is enough. But I think that there are cases where the Lie bracket will help to settle the answer.</p>
2,941,896
<p>I want to ask about the following statement:</p> <blockquote> <p>If <span class="math-container">$f$</span> is continuous from <span class="math-container">$X$</span> to <span class="math-container">$Y$</span>, where <span class="math-container">$X$</span> is compact Hausdorff and connected and <span class="math-container">$Y$</span> is an ordered set in the order topology, is <span class="math-container">$f$</span> onto?</p> </blockquote> <p>My attempt:</p> <p>For given <span class="math-container">$y \in Y$</span>, choose a nbhd <span class="math-container">$N_y$</span>. Since <span class="math-container">$f$</span> is continuous and <span class="math-container">$X$</span> is compact Hausdorff, <span class="math-container">$B=f^{-1}(\bar N_y$</span>) is compact in <span class="math-container">$X$</span>, where <span class="math-container">$\bar N_y$</span> is the closure of the nbhd. Since <span class="math-container">$f(B)$</span> is compact in <span class="math-container">$Y$</span> and contains <span class="math-container">$y$</span>, it attains extreme values. Now we can apply the Intermediate Value Theorem on <span class="math-container">$f(B)$</span>, so we can find <span class="math-container">$x^* \in B$</span> such that <span class="math-container">$f(x^*)=y$</span>. Thus <span class="math-container">$f$</span> is onto.</p> <p>First, I want to ask whether this proof is correct or not.</p> <p>Second, is there any counterexample if <span class="math-container">$X$</span> is only assumed connected?</p> <p>Third, are there other conditions on <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> under which <span class="math-container">$f$</span> is automatically onto?</p> <p>Thank you!</p>
Wuestenfux
417,848
<p>Hint: <span class="math-container">$\log(ab) = \log(a) + \log(b)$</span>. This should be enough to simplify the sum on the left-hand side.</p>
505,617
<p>Let $a,b,c$ be nonnegative real numbers such that $a+b+c=3$, Prove that</p> <blockquote> <p>$$ \sqrt{\frac{a}{a+3b+5bc}}+\sqrt{\frac{b}{b+3c+5ca}}+\sqrt{\frac{c}{c+3a+5ab}}\geq 1.$$</p> </blockquote> <p>This problem is from <a href="http://www.artofproblemsolving.com/Forum/viewtopic.php?f=52&amp;t=555716">http://www.artofproblemsolving.com/Forum/viewtopic.php?f=52&amp;t=555716</a></p> <p>@Calvin Lin Thank you </p>
Karuppiah Kannappan
95,107
<p>We know the following result: if the sum of two numbers $a$ and $b$ is constant, then the maximum of $ab$ occurs when $a$ and $b$ are equal. E.g., if $a+b=6$, then $\max\{ab\}=9$. Using this result, we get the proof.</p>
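The quoted fact in the example is easy to confirm numerically; a quick editorial sketch (names are ours) sweeps $a$ over $[0, 6]$ with $b = 6 - a$:

```python
# With a + b = 6 fixed, the product ab is maximized at a = b = 3, where ab = 9.
def product(a):
    return a * (6 - a)

grid = [i / 100 for i in range(0, 601)]   # a from 0.00 to 6.00
best = max(grid, key=product)
assert best == 3.0
assert product(best) == 9.0
```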
221,762
<p>Let $f$ be a newform of level $\Gamma_1(N)$ and character $\chi$ which is not induced by a character mod $N/p$. I learned from <a href="http://wstein.org/books/ribet-stein/main.pdf" rel="nofollow">these notes</a> by Ribet and Stein that $|a_p|=p^{(k-1)/2}$, where $k$ is the weight of $f$. So I wonder:</p> <p>1. What is the proof of this statement?</p> <p>2. Is $a_p$ for $p|N$ in the number field $K_f$ of $f$, i.e. the number field generated by all $a_q$ for $(q,N)=1$?</p> <p>Now I see that 2 is true. The reason I thought it was wrong is that $p^{(k-1)/2}$ is of degree 2, which would imply in this case that $2|[K_f:\mathbb{Q}]$. Seems strange...</p>
GH from MO
11,919
<p>Here is an answer to both questions.</p> <p><strong>First question.</strong> The quoted result is a special case of Theorem 9.1.10 in the <a href="http://wstein.org/books/ribet-stein/main.pdf" rel="nofollow">Ribet-Stein notes</a>, which in turn is identical to Theorem 3 in <a href="http://gdz.sub.uni-goettingen.de/dms/load/img/?PPN=PPN235181684_0212&amp;DMDID=DMDLOG_0047" rel="nofollow">Li: Newforms and functional equations</a> (in fact Ribet-Stein emphasize that they follow Li's treatment closely).</p> <p>As you can see, Li uses classical Atkin-Lehner theory and also refers to an older work of Ogg's from 1969. I am sure the same result could be deduced by looking at the underlying local newform at the prime $p$ in the Kirillov model, see e.g. <a href="http://www2.math.ou.edu/~rschmidt/papers/gl2.pdf" rel="nofollow">Schmidt's excellent notes</a> for that purpose. (Also, I realize that Ben Linowitz answered much the same 2 hours ago.)</p> <p><strong>Second question.</strong> Let $S_k^\text{new}(\Gamma_1(N))$ denote the set of primitive newforms in $S_k(\Gamma_1(N))$, and note that this is a finite set. By Proposition 2.7 in <a href="http://www.numdam.org/item?id=ASENS_1974_4_7_4_507_0" rel="nofollow">Deligne-Serre: Formes modulaires de poids 1</a>, for any $f\in S_k^\text{new}(\Gamma_1(N))$ and for any field automorphism $\sigma$ of $\mathbb{C}$, there exists an $f^\sigma\in S_k(\Gamma_1(N))$ such that $a_p(f^\sigma)=a_p(f)^\sigma$. It follows that for any $f\in S_k^\text{new}(\Gamma_1(N))$, the $a_p(f)$'s generate a number field $K_f$ (of finite degree) over $\mathbb{Q}$. </p> <p>Now let $S$ be any subset of primes with Dirichlet density strictly less than $1/8$ (e.g. $S$ is finite), and consider the subfield $K_{f,S}$ of $K_f$ generated by the $a_p(f)$'s with $p\not\in S$. I claim that $K_{f,S}=K_f$. 
Indeed, if $\sigma$ fixes $K_{f,S}$, then $f^\sigma=f$ by <a href="http://gdz.sub.uni-goettingen.de/dms/load/img/?PPN=PPN356556735_0116&amp;DMDID=DMDLOG_0040" rel="nofollow">Ramakrishnan's version of strong multiplicity one for $\mathrm{GL}_2$</a>, hence $\sigma$ fixes $K_f$ as a whole, and the claim follows by Galois theory. In particular, $K_f$ is generated by the $a_p(f)$'s with $p\nmid N$, and the remaining $a_p(f)$'s with $p\mid N$ lie in this number field as well.</p>
659,988
<p>I understood the definition of a $\sigma$-algebra, namely that its elements are closed under complementation and countable union, but as I am not very good at maths, I could not visualize or understand the intuition behind the meaning of "closed under complementation and countable union".</p> <p>If we consider the set $X$ to be a finite set, then what would be a good real-life example of a $\sigma$-algebra, for a noob to understand?</p>
Spock
108,632
<p>Let $X = \{a, b, c, d\}$, a possible sigma algebra on $X$ is $Σ = \{∅, \{a, b\}, \{c, d\}, \{a, b, c, d\}\}$. <br/> I think this is a good example.</p>
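The closure properties of this example are small enough to verify exhaustively. Below is a quick editorial sketch (the variable names are ours) checking that the given collection is closed under complementation and under unions, which for a finite collection covers countable unions as well:

```python
# Verify that sigma = {∅, {a,b}, {c,d}, {a,b,c,d}} on X = {a,b,c,d} is
# closed under complement (relative to X) and under union.

X = frozenset("abcd")
sigma = {frozenset(), frozenset("ab"), frozenset("cd"), X}

assert X in sigma                       # contains the whole space
for S in sigma:
    assert X - S in sigma               # closed under complementation
    for T in sigma:
        assert S | T in sigma           # closed under (finite) union
```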
894,917
<p>I'm stuck with the limit $\lim_{n\to\infty} (2^n)\sin(n) $. I've been trying the squeeze theorem but it doesn't seem to work. I can't think of a second way to tackle the problem. Any push in the right direction would be much appreciated. </p> <blockquote> <p>Also, please don't just post the answer up because I want to try and get it. </p> </blockquote> <p>Here's what I've got so far:</p> <p>$$ \lim_{n \to \infty} (2^n)\sin(n) $$</p> <p>So I know $\sin(n)$ is bounded by $-1$ and $1$, but multiplying the inequality by $\ 2^n $ will give me a negative and positive $\ 2^n $. So I am stuck here. This would mean that the function is bounded by limits that tend to negative and positive infinity, pretty useless.</p> <p>So can I take the absolute value of each side of the inequality? Like this: </p> <p>$$ \lvert-2^n\rvert \le \lvert 2^n \sin(n) \rvert \le \lvert 2^n\rvert $$</p> <p>If this works, I can say it tends to infinity, but it seems a bit dodgy to me.</p> <p>Thank you for taking a look at this problem.</p>
Community
-1
<p>Every interval $[n\pi/2-1/2, n\pi/2+1/2]$, $n$ odd contains an integer. Use this to show some subsequence tends to $\infty$ and some subsequence tends to $-\infty$. -- <a href="https://math.stackexchange.com/questions/894917/limit-of-2n-sinn-as-n-goes-to-infinity#comment1847157_894917">David Mitra</a> </p>
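The hint can be illustrated numerically (an editorial sketch, not part of the comment): among the first few dozen terms, $2^n\sin(n)$ already takes both very large positive and very large negative values, so no limit, finite or infinite, can exist.

```python
import math

# 2^n * sin(n) for n = 1..60 oscillates with exponentially growing amplitude:
# some terms blow up toward +infinity, others toward -infinity.
values = [2 ** n * math.sin(n) for n in range(1, 61)]
assert max(values) > 1e10     # a subsequence heading to +infinity
assert min(values) < -1e10    # another heading to -infinity
```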
2,882,678
<p>Why are the morphisms of the category of sets functions? Shouldn't the morphisms take an object in a category and turn it into another object of the category, i.e. map Set to Set? I don't understand how e.g. $f(x)=x^2$ is a map from a set to a set.</p> <p>If the morphism had been the image of a set under a function, it would make more sense to me.</p>
freakish
340,986
<blockquote> <p>Shouldn't the morphisms take an object in a category and turn it into another object of the category</p> </blockquote> <p>No, what makes you think so? Are you perhaps confusing <em>morphisms</em> with <em>functors</em>?</p> <p>A morphism is just an arrow between two objects. Objects in the category of sets are just sets. If $A,B$ are two sets, then what would an arrow $A\to B$ be? The natural choice is a function.</p>
2,882,678
red_trumpet
312,406
<blockquote> <p>take an object in a category and turn it into another object of the category</p> </blockquote> <p>This is what is called a <a href="https://en.wikipedia.org/wiki/Functor" rel="nofollow noreferrer">functor</a>.</p> <p>A morphism is (more or less) exactly a generalization of functions between sets. One function corresponds to one morphism / arrow.</p> <p>Your example $f(x) = x^2$ might be considered as a morphism $f: \mathbb{R} \rightarrow \mathbb{R}$. Sometimes we also write $f \in Hom(\mathbb{R}, \mathbb{R})$ or $f \in Mor(\mathbb{R}, \mathbb{R})$, but those are just different notations for the same thing.</p> <p>Morphisms are one of the building blocks of categories. Each category $C$ has some objects $O \in Ob(C)$, and for each pair of objects $O, P \in Ob(C)$, there is a set of morphisms $Mor(O, P)$.</p> <p>Does that make things clearer?</p>
1,225,551
<p>I have a vector $\mathbf{x}$ (10 $\times$ 1), a vector $\mathbf{f}(\mathbf{x})$ (10 $\times$ 1) and an invertible square matrix $\mathbf{Z}(\mathbf{x})$ (10 $\times$ 10) which is block lower triangular:</p> <p>$$\mathbf{Z}(\mathbf{x}) = \begin{bmatrix}\mathbf{A} &amp; 0\\\mathbf{C} &amp; \mathbf{B}\end{bmatrix}$$</p> <p>$\mathbf{A}$ is 4x4, $\mathbf{B}$ is 6x6 diagonal, $\mathbf{C}$ is 6x4 with columns 3 and 4 null.</p> <p>I need to compute (numerically) $\Delta(\mathbf{x})$</p> <p>$$\Delta(\mathbf{x}) = \frac{\partial(\mathbf{Z}^{-1}\mathbf{f})}{\partial \mathbf{x}}$$</p> <p>I came up with the following:</p> <p>$$\Delta(\mathbf{x}) = \frac{\partial(\mathbf{Z}^{-1})}{\partial \mathbf{x}}\mathbf{f}+ \mathbf{Z}^{-1}\frac{\partial\mathbf{f}}{\partial \mathbf{x}}$$</p> <p>The second term is trivial, but I can't compute $\frac{\partial(\mathbf{Z}^{-1})}{\partial \mathbf{x}}$.<br> I've mixed the matrix-by-vector derivative and the derivative of the inverse (I'm not sure this is legal, however) and got this:</p> <p>$$\frac{\partial(\mathbf{Z}^{-1})}{\partial \mathbf{x}} = -\mathbf{Z}^{-1}\frac{\partial\mathbf{Z}}{\partial \mathbf{x}}\mathbf{Z}^{-1}$$</p> <p>The middle term is an $n \times 1$ vector of $n \times n$ matrices, <a href="http://en.wikipedia.org/wiki/Matrix_calculus#Other_matrix_derivatives" rel="nofollow">according to Wikipedia</a>.</p> <p>How am I supposed to multiply a vector of matrices and two matrices in order to obtain an $n \times n$ matrix?</p> <p>And if the last deduction is wrong, how do I compute $\frac{\partial(\mathbf{Z}^{-1})}{\partial \mathbf{x}}$?</p>
Astrinus
229,726
<p>I've found that the solution is much simpler than I thought.</p> <p>Consider the whole term $\frac{\partial(\mathbf{Z}^{-1})}{\partial \mathbf{x}}\mathbf{f} = -\mathbf{Z}^{-1}\frac{\partial\mathbf{Z}}{\partial \mathbf{x}}\mathbf{Z}^{-1}\mathbf{f}$, with the vector $\mathbf{f}$ too; the derivative term on the right-hand side has the form</p> <p>$$ \frac{\partial\mathbf{Z}}{\partial \mathbf{x}} = \begin{bmatrix}\frac{\partial\mathbf{Z}}{\partial x_1} &amp; \frac{\partial\mathbf{Z}}{\partial x_2} &amp; \frac{\partial\mathbf{Z}}{\partial x_3} &amp; \cdots\end{bmatrix}$$</p> <p>whose elements are $n \times n$ matrices.</p> <p>Now, distribute the multiplications over the vector of matrices:</p> <p>$$\frac{\partial(\mathbf{Z}^{-1})}{\partial \mathbf{x}}\mathbf{f}= \begin{bmatrix}-\mathbf{Z}^{-1}\frac{\partial\mathbf{Z}}{\partial x_1}\mathbf{Z}^{-1}\mathbf{f} &amp; -\mathbf{Z}^{-1}\frac{\partial\mathbf{Z}}{\partial x_2}\mathbf{Z}^{-1}\mathbf{f} &amp; -\mathbf{Z}^{-1}\frac{\partial\mathbf{Z}}{\partial x_3}\mathbf{Z}^{-1}\mathbf{f} &amp; \cdots\end{bmatrix}$$</p> <p>These elements are column vectors, each equal to $\frac{\partial(\mathbf{Z}^{-1})}{\partial x_i}\mathbf{f}$, and their concatenation yields an $n \times n$ matrix, which can be added to the other term.</p> <p>I'm still not sure if this is true for $n \gt 3$, so if someone could point out pitfalls I have run into, I'll be grateful.</p>
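The per-component identity used here can be sanity-checked with finite differences. Below is an editorial sketch (all names and the sample $\mathbf{Z}(x)$ are ours), shrunk to a $2\times 2$ matrix depending on a single scalar $x$ and a constant vector $\mathbf{f}$:

```python
import math

# Finite-difference check of d(Z^{-1} f)/dx = -Z^{-1} (dZ/dx) Z^{-1} f
# for a 2x2 matrix Z(x), a constant vector f, and one scalar parameter x.

def Z(x):
    return [[x + 2.0, 0.0], [math.sin(x), x * x + 1.0]]

def dZ(x):                       # entrywise derivative of Z
    return [[1.0, 0.0], [math.cos(x), 2.0 * x]]

def inv2(M):                     # closed-form 2x2 inverse
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def matmat(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

f = [1.0, -2.0]
x, h = 0.7, 1e-6

# Central finite difference of Z^{-1} f
g_plus = matvec(inv2(Z(x + h)), f)
g_minus = matvec(inv2(Z(x - h)), f)
fd = [(p - m) / (2 * h) for p, m in zip(g_plus, g_minus)]

# Analytic formula: -Z^{-1} (dZ/dx) Z^{-1} f
Zi = inv2(Z(x))
analytic = [-v for v in matvec(matmat(Zi, matmat(dZ(x), Zi)), f)]

assert all(abs(a - b) < 1e-5 for a, b in zip(fd, analytic))
```

Repeating this check for each component $x_i$ of $\mathbf{x}$ is exactly the columnwise construction described in the answer.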
194,867
<p>Let $f(z)$ be a function meromorphic in a simply connected convex domain $D$ (subset of the complex plane with positive area or the whole complex plane) where $z$ is a complex number.</p> <p>Are there such functions $f(z)$ where $\Re(f(z))$ is periodic in the domain (no periods larger than the domain please :p ) but $f(z)$ is not periodic? (if $D\subset \mathbb C$ it is clear that $f(z)$ is not periodic but $\Re(f(z))$ might still be for some shapes of $D$).</p> <p>In particular the case when $D = \mathbb C$ is interesting. (in other words $f(z)$ meromorphic over $\mathbb C$)</p> <p>I guess it is a similar question to ask about $\Im$, $\operatorname{Arg}$ or $|\cdot|$ instead of $\Re$.</p> <p>I read about doubly periodic functions and the Cauchy-Riemann equations but I still don't know. I can't find such a function in the literature (I mean the function I am looking for here; of course I can find doubly periodic functions in the literature) and I don't know how to construct them or even if they exist.</p>
hmakholm left over Monica
14,366
<p>The real part of $z\mapsto iz$ has period $p$ for any $p\in \mathbb R$.</p>
4,306,419
<p>Can I write <span class="math-container">$\frac{dy}{dx}=\frac{2xy}{x^2-y^2}$</span> implies <span class="math-container">$\frac{dx}{dy}=\frac{x^2-y^2}{2yx}$</span>?</p> <p>Actually I have been given two differential equations to solve.</p> <p><span class="math-container">$\frac{dy}{dx}=\frac{2xy}{x^2-y^2}$</span> , <span class="math-container">$\frac{dy}{dx}=\frac{y^2-x^2}{2yx}$</span></p> <p>They did it in the following manner.</p> <p><span class="math-container">$\frac{dy}{dx}=\frac{2xy}{x^2-y^2} \implies \frac{dx}{dy}=\frac{x^2-y^2}{2xy}$</span></p> <p>Now they have solved this <span class="math-container">$\frac{dx}{dy}=\frac{x^2-y^2}{2xy}$</span>.</p> <p>The solution of this differential equation is the solution of the first one, and after that they interchanged <span class="math-container">$x$</span> and <span class="math-container">$y$</span> to get the solution of the second one.</p> <p>I cannot understand how they can do this.</p> <p>Can anyone please tell me when we can do this?</p>
JJacquelin
108,514
<p>If you are not convinced, instead of Eq.<span class="math-container">$(1)$</span> or Eq.<span class="math-container">$(2)$</span> consider the differential equation <span class="math-container">$(3)$</span> in which no <span class="math-container">$\frac{dy}{dx}$</span> or <span class="math-container">$\frac{dx}{dy}$</span> appear. <span class="math-container">$$\frac{dy}{dx}=\frac{2xy}{x^2-y^2} \tag 1$$</span><br /> <span class="math-container">$$\frac{dx}{dy}=\frac{x^2-y^2}{2xy} \tag 2$$</span> <span class="math-container">$$\boxed{2xy\,dx-(x^2-y^2)dy=0} \tag 3$$</span> Thanks to the method of integrating factor <span class="math-container">$(1/y^2)$</span> the solution is:</p> <p><span class="math-container">$\frac{2x}{y}dx-(\frac{x^2}{y^2}-1)dy=0$</span></p> <p><span class="math-container">$d\left(\frac{x^2}{y}+y\right)=0$</span></p> <p><span class="math-container">$\frac{x^2}{y}+y=C$</span> <span class="math-container">$$\boxed{x^2+y^2-Cy=0}\tag 4$$</span> For explicit solutions one can solve Eq.<span class="math-container">$(4)$</span> for <span class="math-container">$x$</span> or for <span class="math-container">$y$</span>.</p> <p>Solving Eq.<span class="math-container">$(4)$</span> for <span class="math-container">$y$</span> leads to: <span class="math-container">$$y=\frac12\left(C\pm\sqrt{C^2-4x^2}\right) \tag 5$$</span> which is exactly the solution of Eq.<span class="math-container">$(1)$</span>, which one can check by putting Eq.<span class="math-container">$(5)$</span> into Eq.<span class="math-container">$(1)$</span>.</p> <p>Solving Eq.<span class="math-container">$(4)$</span> for <span class="math-container">$x$</span> leads to: <span class="math-container">$$x=\pm\sqrt{Cy-y^2} \tag 6$$</span> which is exactly the solution of Eq.<span class="math-container">$(2)$</span>, which one can check by putting Eq.<span class="math-container">$(6)$</span> into Eq.<span class="math-container">$(2)$</span>.</p>
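<p>A quick numerical spot-check of the explicit branch $(5)$ against Eq.$(1)$ (sample points chosen arbitrarily with $C^2 &gt; 4x^2$; the derivative of $(5)$ is $-2x/\sqrt{C^2-4x^2}$):</p>

```python
import math

def y(x, C):
    # Eq. (5), '+' branch
    return (C + math.sqrt(C*C - 4*x*x)) / 2

def dy_dx(x, C):
    # derivative of Eq. (5): -2x / sqrt(C^2 - 4x^2)
    return -2*x / math.sqrt(C*C - 4*x*x)

# Check Eq. (1): y' = 2xy / (x^2 - y^2), at a few points with C^2 > 4x^2.
for C in (3.0, 5.0):
    for x in (0.5, 1.0):
        lhs = dy_dx(x, C)
        rhs = 2*x*y(x, C) / (x*x - y(x, C)**2)
        assert abs(lhs - rhs) < 1e-9
```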
1,045,941
<p>Usually this is just given as a straight up definition in a calculus course. I am wondering how you prove it? I tried using the limit definition, $$\lim\limits_{h\rightarrow 0} \dfrac{\log(x+h)-\log(x)}{h}$$ but this led to no developments.</p>
k170
161,538
<p>First note that</p> <blockquote> <p>$$ e= \lim\limits_{z\to 0}\left(1+z\right)^{\frac{1}{z}}$$</p> </blockquote> <p>So now $$ \lim\limits_{h\to 0}\frac{\ln(x+h)-\ln (x)}{h} = \lim\limits_{h\to 0}\frac{\ln\left(\frac{x+h}{x}\right)}{h} $$ $$ = \lim\limits_{h\to 0}\frac{1}{h}\ln\left(1+\frac{h}{x}\right) = \lim\limits_{h\to 0} \ln\left(1+\frac{h}{x}\right)^{\frac1h} $$ Let $z=\frac{h}{x}$, then $$ \lim\limits_{z\to 0} \ln\left(1+z\right)^{\frac{1}{zx}}= \lim\limits_{z\to 0} \frac{1}{x} \ln\left(1+z\right)^{\frac{1}{z}} $$ $$= \frac{1}{x} \ln\left( \lim\limits_{z\to 0}\left(1+z\right)^{\frac{1}{z}}\right)=\frac1x \ln (e)=\frac1x $$</p>
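<p>The limit can also be sanity-checked numerically (a small sketch; the forward difference error is $O(h)$ by the mean value theorem):</p>

```python
import math

x = 2.0
for h in (1e-2, 1e-4, 1e-6):
    diff = (math.log(x + h) - math.log(x)) / h
    # |diff - 1/x| <= h / (2 x^2), so well below h
    assert abs(diff - 1/x) < h
```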
1,045,941
<p>Usually this is just given as a straight up definition in a calculus course. I am wondering how you prove it? I tried using the limit definition, $$\lim\limits_{h\rightarrow 0} \dfrac{\log(x+h)-\log(x)}{h}$$ but this led to no developments.</p>
marty cohen
13,079
<p>As I have written here before (somewhere), I like using the functional equation for logs of $f(xy) = f(x)+f(y)$.</p> <p>From this, $f(xy)-f(x) = f(y)$ or $f(x)-f(y) = f(x/y) $.</p> <p>Getting this into a form that looks like a derivative, $f(x+h)-f(x) =f(1+h/x) $.</p> <p>Since $f(1) = 0$ (from $f(1) = f(1)-f(1) = 0$),</p> <p>$\begin{array}\\ \frac{f(x+h)-f(x)}{h} &amp;=\frac{f(1+h/x)}{h}\\ &amp;=\frac{f(1+h/x)-f(1)}{h}\\ &amp;=\frac1{x}\frac{f(1+h/x)-f(1)}{h/x}\\ \end{array} $</p> <p>If $f$ is differentiable at $1$, the right-hand side of this, as $h \to 0$, is $\frac{f'(1)}{x} $.</p> <p>Since the right-hand limit exists, the left-hand limit exists, so $f'(x) =\frac{f'(1)}{x} $.</p> <p>The natural log is the one where $f'(1) = 1$.</p> <p>This also works (for getting the derivative of a function from a functional equation) for $\exp$ (from $\exp(x)\exp(y) = \exp(x+y)$) and $\arctan$ ( from $\arctan(x)+\arctan(y) =\arctan\left(\frac{x+y}{1-xy}\right) $) (and probably more functions with functional equations).</p>
2,583,232
<p>Prove that $\lim_{n \to \infty} \dfrac{2n^2}{5n^2+1}=\dfrac{2}{5}$</p> <p>$$\forall \epsilon&gt;0 \ \ :\exists N(\epsilon)\in \mathbb{N} \ \ \text{such that} \ \ \forall n &gt;N(\epsilon) \ \ \text{we have} |\dfrac{-2}{25n^2+5}|&lt;\epsilon$$ $$ |\dfrac{-2}{25n^2+5}|&lt;\epsilon \\ |\dfrac{2}{25n^2+5}|&lt;\epsilon \\ \dfrac{5(n^2+1)}{2}&lt;\epsilon \\ {5(n^2+1)}&lt;2\epsilon \\ {n}&gt;\sqrt{\dfrac{2\epsilon -5}{25}}$$</p> <p>What do I do?</p>
Stu
460,772
<p>Let $\varepsilon &gt;0$</p> <p>$|\dfrac{-2}{25n^2+5}|&lt;\varepsilon\iff \dfrac{2}{25n^2+5}&lt;\varepsilon$</p> <p>$\dfrac{2}{25n^2+5}&lt;\dfrac{2}{25n^2}&lt;\varepsilon$ so $n^2&gt;\dfrac{2}{25\varepsilon}\iff n&gt;\dfrac{\sqrt{2}}{5\sqrt{\varepsilon}}$</p> <p>Then $N(\varepsilon)=\bigg\lfloor\dfrac{\sqrt{2}}{5\sqrt{\varepsilon}}\bigg\rfloor+1$</p>
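<p>This choice of $N(\varepsilon)$ can be sanity-checked numerically (a quick Python sketch):</p>

```python
import math

def a(n):
    return 2*n*n / (5*n*n + 1)

for eps in (1e-1, 1e-3, 1e-5):
    N = math.floor(math.sqrt(2) / (5 * math.sqrt(eps))) + 1
    # every n > N should satisfy |a(n) - 2/5| < eps; spot-check a range
    for n in range(N + 1, N + 200):
        assert abs(a(n) - 2/5) < eps
```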
3,984,778
<p>Ackermann's function and all the up-arrow notation are based on exponentiation. We know for a fact that the factorial function grows faster than any power law, so why not build an iterative sequence based on factorials? I am thinking of the simple function defined by <span class="math-container">$$ a(n)=a(n-1)! $$</span> and to start things off we need <span class="math-container">$a(1)=3$</span>. The numbers generated are <span class="math-container">$3,6,720$</span> and the next one (<span class="math-container">$720!$</span>) is roughly <span class="math-container">$10^{1746}$</span> and I can't even start to imagine what happens next!</p> <p>Is it primitive recursive? Can it be coded up using for loops, avoiding calls to itself? I think not but this is just a hunch. Computer scientists might have better chances of answering that than I.</p> <p>Thanks in advance, I appreciate it.</p>
Simply Beautiful Art
272,831
<p>To answer the question concerning writing this in for loops, here's some Python code to consider (the pseudocode-style ranges have been made executable):</p> <pre><code># Some integer to compute a(n); n = 4 already yields 720!,
# and n = 5 is computationally out of reach.
n = 4

# Start with 3,
solution = 3
# and then apply the factorial n-1 times.
for i in range(1, n):
    # To compute the factorial, start with 1,
    factorial = 1
    # and then multiply by everything from 1 to the current solution.
    for j in range(1, solution + 1):
        factorial *= j
    # and save the results.
    solution = factorial

print(solution)
</code></pre>
243,377
<p>Consider a random walk on the real line, starting from $0$. But this time assume that we can decide, for each step $i$, a step size $t_i&gt;0$ to the left or the right with equal probabilities. </p> <p>To formalize this, we have $(X_n)_{n\geq 0}$ such that $X_0=0$, $Pr[X_n=t_n]=0.5$ and $Pr[X_n=-t_n]=0.5$ (hence still $E(X_i)=0$). Let $I=[-1,1]$ and $$p=Pr[\sum_{i=1}^n X_i \in I,\ \forall n\geq 0]$$ </p> <p>The question is: does there exist a sequence $(t_i)$ such that $t_i&gt;0$ and $\sum_{i=1}^\infty t_i=\infty$ (i.e. the series diverges), that ensures $p&gt;0$? </p> <p>Clearly no such sequence exists if we demand $p=1$, and intuitively it seems to me that no $(t_i)$ exists that forces $p&gt;0$ either, but I am looking for a rigorous proof or disproof. Moreover, I am not exactly sure whether this kind of question has been studied, so references would be appreciated. </p>
js21
21,724
<p>Yes. Indeed, if $s = \sum_{i \geq 1} t_i^2 &lt;1$, then $$ \mathbb{P}[ \ \ \forall n, \sum_{i=1}^n X_i \in [-1,1] \ \ ] \geq 1-s &gt; 0. $$ To see this, note that $M_n = |\sum_{i=1}^n X_i|$ is a nonnegative submartingale, so that Doob's martingale inequality yields $$ \mathbb{P}[ \max_{1 \leq j \leq n} M_j &gt; 1 ] \leq \mathbb{E}[M_n^2] = \sum_{i=1}^n t_i^2 \leq s. $$ Letting $n$ tends to infinity, one gets $$ \mathbb{P}[ \sup_{j \geq 1} M_j &gt; 1 ] \leq s, $$ hence the result by taking the complement.</p> <p>Remark : In the other direction, one can show that if $$ \mathbb{P}[ \ \ \forall n, \sum_{i=1}^n X_i \in [-1,1] \ \ ] \geq c &gt; 0, $$ then $$ s = \sum_{i \geq 1} t_i^2 \leq \frac{14}{c^2}. $$ One first note that one must have $|t_i| \leq 2$ for all $i$. One has $|\cos(2 \pi \xi)| \leq e^{-2 \pi^2 \xi^2 }$ whenever $|\xi| \leq 2 \delta$, with $\delta = 0.14$, hence $$ |\mathbb{E}[e^{-2i \pi \xi \sum_{j=1}^n X_j} ]| = \prod_{j=1}^n |\cos(2 \pi \xi t_j)| \leq e^{-2 \pi^2 \xi^2 \sum_{i=1}^n t_i^2} $$ whenever $|\xi| \leq \delta$. Let $\chi$ be the Beurling-Selberg majorant of $\mathbb{1}_{[-1,1]}$ with parameter $\delta$ : one has $\chi \geq \mathbb{1}_{[-1,1]}$, $||\chi||_{L^1}= 2 + \delta^{-1}$, and its Fourier transform $\hat{\chi}$ is supported on $[-\delta,\delta]$. In particular : $$ c \leq \mathbb{E}[\chi(\sum_{i=1}^n X_i)] = \int_{\mathbb{R}} \hat{\chi}(\xi) \mathbb{E}[e^{-2i \pi \xi \sum_{j=1}^n X_j} ] d \xi \leq (2 + \delta^{-1}) \int_{\mathbb{R}} e^{-2 \pi^2 \xi^2 \sum_{i=1}^n t_i^2} d \xi, $$ and thus $$ c \leq \frac{2 + \delta^{-1}}{\sqrt{2 \pi \sum_{i=1}^n t_i^2}}. $$ This implies $$ \sum_{i=1}^n t_i^2 \leq \frac{(2 + \delta^{-1})^2}{2 \pi c^2} \leq \frac{14}{c^2}, $$ hence the result.</p>
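<p>For a finite prefix of the walk, the inequality $\mathbb{P}[\max_j M_j &gt; 1] \leq s$ can be verified by exhaustive enumeration (a small sketch with arbitrarily chosen step sizes):</p>

```python
from itertools import product

t = [0.5, 0.4, 0.3]                 # arbitrary steps with s = sum t_i^2 = 0.5 < 1
s = sum(ti * ti for ti in t)

good = 0
for signs in product((-1, 1), repeat=len(t)):
    partial, ok = 0.0, True
    for sg, ti in zip(signs, t):
        partial += sg * ti
        if abs(partial) > 1:
            ok = False
            break
    good += ok
prob = good / 2 ** len(t)           # P[ all partial sums stay in [-1,1] ]
assert prob >= 1 - s
```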
3,720,012
<blockquote> <p>(Feller Vol.1, P.241, Q.35) Let <span class="math-container">$S_n$</span> be the number of successes in <span class="math-container">$n$</span> Bernoulli trials. Prove <span class="math-container">$$E(|S_n-np|) = 2vq b(v; n, p) $$</span> where <span class="math-container">$v$</span> is the integer such that <span class="math-container">$np &lt; v \le np+1$</span> and <span class="math-container">$b(v; n,p)$</span> is a binomial distribution with <span class="math-container">$v$</span> successes of <span class="math-container">$n$</span> trials. Hint: The left side <span class="math-container">$= \sum_{k=0}^{v-1} (np - k) \frac{n}{k} p^k q^{n-k}$</span>.</p> </blockquote> <p>My attempt: I found that <span class="math-container">$P(|S_n - np| = j)= b(np +j ; n, p)$</span> if <span class="math-container">$S_n \ge np$</span>, <span class="math-container">$b(np-j; n,p)$</span> if <span class="math-container">$S_n &lt; np$</span>. Therefore, <span class="math-container">$$E(|S_n-np|)= \sum_{k=0}^{v-1} (np-k)b(k;n,p) + \sum_{k=v}^{n}(k-np)b(k; n,p).$$</span></p> <p>I am stuck here, and don't know how to proceed. I would appreciate if you give some help.</p>
Pepe Silvia
418,447
<p>I'd be tempted to take this hint and see how I can simplify it first. And by simplify I mean get rid of the summation. So we have <span class="math-container">\begin{align*} &amp;\sum_{k=0}^{v-1}(np-k)\left(\begin{array}{cc} n \\ k\end{array}\right)p^kq^{n-k} \\ =&amp;np\sum_{k=0}^{v-1}\left(\begin{array}{cc} n \\ k\end{array}\right)p^kq^{n-k}-n\sum_{k=1}^{v-1}\left(\begin{array}{cc} n-1 \\ k-1\end{array}\right)p^kq^{n-k} \end{align*}</span> using the identity <span class="math-container">$\left(\begin{array}{cc} n \\ k\end{array}\right)=\frac{n}{k}\left(\begin{array}{cc} n-1 \\ k-1\end{array}\right)$</span> which is easy to prove, and which should often be in the back of your head when you have these combination expressions multiplied by <span class="math-container">$n$</span> or <span class="math-container">$k$</span>. I replace <span class="math-container">$k=0$</span> with <span class="math-container">$k=1$</span> in the second sum also as the substitution is only valid when <span class="math-container">$k\geq 1$</span> and the <span class="math-container">$k=0$</span> term contributed nothing to the sum anyway.</p> <p>Now we don't know how to sum any of these really, unless they refer to an expectation or probability. If I'd kept the <span class="math-container">$k$</span> in the second sum it would look more like an expectation but as the sum is partial it will never be one, which is what motivated me to replace it with <span class="math-container">$n$</span> which can be removed from the sum. It's easy to see that the first sum is <span class="math-container">$\mathbb{P}[S_n\leq v-1]$</span>, it would be nice if we could make the second sum look similar. 
It would have to be a probability related to <span class="math-container">$S_{n-1}$</span>; see that <span class="math-container">\begin{align*} \sum_{k=1}^{v-1}\left(\begin{array}{cc} n-1 \\ k-1\end{array}\right)p^kq^{n-k}&amp;=\sum_{l=0}^{v-2}\left(\begin{array}{cc} n-1 \\ l\end{array}\right)p^{l+1}q^{n-(l+1)} \\ &amp;=p\sum_{l=0}^{v-2}\left(\begin{array}{cc} n-1 \\ l\end{array}\right)p^{l}q^{n-1-l} \\ &amp;=p\mathbb{P}[S_{n-1}\leq v-2]. \end{align*}</span> Then we have <span class="math-container">\begin{align*} np\sum_{k=0}^{v-1}\left(\begin{array}{cc} n \\ k\end{array}\right)p^kq^{n-k}-n\sum_{k=1}^{v-1}\left(\begin{array}{cc} n-1 \\ k-1\end{array}\right)p^kq^{n-k} &amp;=np\left(\mathbb{P}[S_n\leq v-1]-\mathbb{P}[S_{n-1}\leq v-2]\right). \end{align*}</span> It's annoying that these probabilities are for different variables, but consider the following. If, instead of being independent variables, <span class="math-container">$S_{n-1}$</span> was the number of successes in <span class="math-container">$n-1$</span> trials and <span class="math-container">$S_n$</span> was the number of successes in <span class="math-container">$n$</span> trials, and these trials were from the <span class="math-container">$\textbf{same sequence}$</span>, then by sketching a Venn diagram if necessary we can see that <span class="math-container">\begin{align*} \mathbb{P}[S_n\leq v-1]-\mathbb{P}[S_{n-1}\leq v-2]=&amp;\mathbb{P}[S_{n-1}=v-1,\;n\text{-th trial is a failure}] \\ =&amp;\left(\begin{array}{cc} n-1 \\ v-1\end{array}\right)p^{v-1}q^{n-1-(k-1)}q \end{align*}</span> so that ultimately we get <span class="math-container">\begin{align*} \mathbb{E}[\vert S_n-np\vert]=np\left(\begin{array}{cc} n-1 \\ v-1\end{array}\right)p^{v-1}q^{n-1-(v-1)}q=vq\left(\begin{array}{cc} n \\ v\end{array}\right)p^vq^{n-v}=vqb(v;n,p), \end{align*}</span> as required.</p>
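<p>The identity can also be confirmed numerically for small $n$ before wading through the algebra (a quick Python check; the parameter values are arbitrary):</p>

```python
from math import comb, floor

def check(n, p):
    q = 1 - p
    v = floor(n * p) + 1            # the integer with np < v <= np + 1
    exact = sum(abs(k - n*p) * comb(n, k) * p**k * q**(n-k) for k in range(n + 1))
    formula = 2 * v * q * comb(n, v) * p**v * q**(n-v)
    assert abs(exact - formula) < 1e-12

for n, p in [(2, 0.3), (10, 0.3), (17, 0.45), (25, 0.62)]:
    check(n, p)
```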
262,773
<p>In the curve obtained with following</p> <pre><code> Plot[{1/(3 x*Sqrt[1 - x^2])}, {x, 0, 1}, PlotRange -&gt; {0, 1}, GridLines -&gt; {{0.35, 0.94}, {}}] </code></pre> <p>how can one fill the top and bottom with different colors or patterns such that two regions are perfectly visible in a black-n-white printout?</p>
kglr
125
<pre><code>Plot[{1/(3 x*Sqrt[1 - x^2]), 1/(3 x*Sqrt[1 - x^2])}, {x, 0, 1}, PlotStyle -&gt; Directive[AbsoluteThickness[2], Opacity[1], Black], PlotRange -&gt; {0, 1}, GridLines -&gt; {{0.35, 0.94}, {}}, Method -&gt; &quot;GridLinesInFront&quot; -&gt; True, GridLinesStyle -&gt; Directive[Black, Dashed], Filling -&gt; {1 -&gt; {Bottom, GrayLevel[.8]}, 2 -&gt; {Top, GrayLevel[.6]}}, RegionFunction -&gt; (.35 &lt;= # &lt;= .94 &amp;)] </code></pre> <p><a href="https://i.stack.imgur.com/Thbek.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Thbek.png" alt="enter image description here" /></a></p> <p>To get hatched-filling in older versions we can use <code>ParametricPlot</code> with the options <a href="https://reference.wolfram.com/language/ref/MeshFunctions.html" rel="nofollow noreferrer"><code>MeshFunctions</code></a> + <a href="https://reference.wolfram.com/language/ref/Mesh.html" rel="nofollow noreferrer"><code>Mesh</code></a> + <a href="https://reference.wolfram.com/language/ref/MeshStyle.html" rel="nofollow noreferrer"><code>MeshStyle</code></a>:</p> <pre><code>Show @ MapThread[ ParametricPlot[{x, # t + (1 - t) 1/(3 x*Sqrt[1 - x^2])}, {x, 0.35, 0.94}, {t, 0, 1}, PlotRange -&gt; {0, 1}, ImageSize -&gt; Medium, AspectRatio -&gt; 1/GoldenRatio, GridLines -&gt; {{0.35, 0.94}, {}}, BoundaryStyle -&gt; None, PlotStyle -&gt; None, MeshStyle -&gt; Directive[GrayLevel[.3], Opacity[1], AbsoluteThickness[1], CapForm[&quot;Butt&quot;]], MeshFunctions -&gt; {#4 &amp;, #2}, Mesh -&gt; {{{0, Directive[Black, Opacity[1], AbsoluteThickness[3], CapForm[&quot;Butt&quot;]]}}, #3}] &amp;, {{0, 1}, {# + #2 &amp;, # - #2 &amp;}, {50, 25}}] </code></pre> <p><a href="https://i.stack.imgur.com/pJOce.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pJOce.png" alt="enter image description here" /></a></p> <pre><code>$Version </code></pre> <blockquote> <pre><code>&quot;11.3.0 for Microsoft Windows (64-bit) (March 7, 2018)&quot; </code></pre> </blockquote> 
<p><strong>See also:</strong></p> <ol> <li><a href="https://mathematica.stackexchange.com/q/64159/125">Generating hatched filling using Region functionality</a></li> <li><a href="https://mathematica.stackexchange.com/q/149113/125">Texture or shading to avoid requiring color printing</a></li> </ol>
2,901,971
<p>The question is as follows: </p> <blockquote> <p>Prove that for every $\beta \in \mathbb{R}$, $\sup(-\infty, \beta) = \beta$.</p> </blockquote> <p>The goal of the problem is to prove this without using the $\epsilon-\delta$ method. My professor gave us an idea in class, but for some reason, it isn't really making any sense to me. </p> <p>Call the set $S.$ We need to show that: </p> <p>1) $\beta$ is an upper bound of $S.$ </p> <p>2) $\beta \leq B \ \forall $ upper bounds $B$ of $S.$ </p> <p>The first condition is clearly satisfied by the fact that $x \leq \beta \ \forall x \in S$. </p> <p>The second condition is what we did in class and the part that I am most confused about. </p> <p>If $B$ is an upper bound of $S$, then $\beta \leq B.$ </p> <p>Now assume that $B &lt; \beta.$</p> <p>If $B &lt; \beta, \exists x$ (for instance $x = \frac{B+\beta}{2}$) such that $B &lt; x &lt; \beta$. But then $x &lt; \beta$ and $B&lt;x$, contradicting the assumption that $B$ is an upper bound of $S$, thus proving that $\sup(-\infty, \beta) = \beta$. </p> <p>Q.E.D. </p> <p>This proof does not make much sense to me, so I am asking for a clarification. If this proof is indeed wrong, I would appreciate it if someone could provide me with a clear and concise way of proving this. </p> <p>Thank you so much!</p>
nonuser
463,553
<ol start="4"> <li><p>If $|X|=|Y|=n$ and $\delta \geq n/2$.</p></li> <li><p>If $|X|=|Y|=n$ and $\varepsilon \geq n^2-n+1$.</p></li> <li><p>If $|X|=|Y|=n \ge 1$ Suppose that $X=\{x_1,x_2,..,x_n\}$ and $Y=\{y_1,y_2,...,y_n\}$. Two vertices $x_i$ and $y_j$ are adjacent in $G$ if and only if $i+j \ge n+1$.</p></li> </ol>
1,517,086
<p>I spent a long time trying to find a natural deduction derivation for the formula $\exists x(\exists y A(y) \rightarrow A(x))$, but I always got stuck at some point with free variables in the leaves. Could someone please help me or give me some hints to find a proof. </p> <p>Thanks.</p>
Jannik Vierling
287,757
<p>The following is a natural deduction proof of the formula <span class="math-container">$\exists x (\exists y A(y) \rightarrow A(x))$</span> <span class="math-container">$$ \dfrac { \dfrac { \dfrac { \dfrac { \dfrac { \dfrac {} { [\exists y A(y)]_2 } \dfrac { \dfrac { \dfrac { \dfrac{\dfrac{}{[A(b)]_3}}{\exists y A(y) \rightarrow A(b)} \rightarrow_i } { \exists x (\exists y A(y) \rightarrow A(x)) } \exists_i \dfrac{}{[\neg \exists x (\exists y A(y) \rightarrow A(x))]_1} } {\bot} \neg_e } { A(x) } \bot_e } { A(x) } \exists_{e,3} } { \exists y A(y) \rightarrow A(x) } \rightarrow_{i,2} } { \exists x (\exists y A(y) \rightarrow A(x)) } \dfrac{}{[\neg \exists x(\exists y A(y) \rightarrow A(x))]_1} } { \bot } \neg_e } { \exists x (\exists y A(y) \rightarrow A(x)) } \mathrm{RAA_1} $$</span> Since the formula <span class="math-container">$\exists x(\exists y A(y) \rightarrow A(x))$</span> is not valid in intuitionistic logic, it cannot be proven in this natural deduction system without the rule RAA (or the rule DNE).</p>
2,413,899
<p>When I was 7 or 8 years old, I was playing with drawing circles. For some reason, I thought about taking a 90° angle and placing the vertex on the circle and marking where the two sides intersected the circle.</p> <p><a href="https://i.stack.imgur.com/oskEP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oskEP.png" alt="90° angle on a circle"></a></p> <p>It appeared to me that connecting the two points of intersection created a line through the center regardless of how the square was rotated.</p> <p><a href="https://i.stack.imgur.com/IIMqF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IIMqF.png" alt="90° angle on a circle with line connecting points of intersection"></a></p> <p>I tested this many times and, within the error of my tools, my conjecture seemed to be correct. I had indeed discovered a method for finding the center of a circle. Simply create at least two of these bisecting lines and voila!</p> <p><a href="https://i.stack.imgur.com/MrFPw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MrFPw.png" alt="3 90° angles on a circle with lines connecting points of intersection that appear to meet in the center of the circle"></a></p> <p>The problem I have now, many years later, is that I am not satisfied with simply checking a bunch of times; I want a proof.</p> <p>I can intuitively explain two extremes and the middle without any fancy math, but I still have not come up with a general proof.</p> <p>As one of the points of intersection approaches the vertex of the angle, the leg with that point approaches a tangent of the circle. 
It is intuitive to me that a line, which is perpendicular to a tangent line and passes through the point at which the tangent line touches the circle, would bisect the circle.</p> <p><a href="https://i.stack.imgur.com/y1uqm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y1uqm.png" alt="90° angle on a circle being rotated until one side is tangent to the circle"></a></p> <p>In addition, when the angle is rotated such that the two line segments formed between the vertex and the points of intersection are equal, these line segments form half of an inscribed square. A line that passes through two opposite vertices of that square would also bisect the circle.</p> <p><a href="https://i.stack.imgur.com/wjDJf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wjDJf.png" alt="Square inscribed in circle"></a></p> <p>Can you prove whether placing the vertex of a 90° angle on a circle creates a bisector by connecting the points of intersection?</p>
Claude Leibovici
82,404
<p>If you cannot use special functions, then either numerical integration or approximation would be required.</p> <p>For example, consider the Taylor expansion built around $x=\frac 12$ (mid point of the integration interval, selected in order to avoid promoting one of the bounds). You would get $$2^{x^2+x}=2^{3/4}+2^{3/4} \left(x-\frac{1}{2}\right) \log (4)+2^{3/4} \log (2) (1+\log (4))\left(x-\frac{1}{2}\right)^2+O\left(\left(x-\frac{1}{2}\right)^3\right)$$ Integrate termwise to get $$\int 2^{x^2+x}\,dx=2^{3/4} \left(x-\frac{1}{2}\right)+\frac{\left(x-\frac{1}{2}\right)^2 \log (4)}{\sqrt[4]{2}}+\frac{1}{3} 2^{3/4} \left(x-\frac{1}{2}\right)^3 \log (2) (1+\log (4))+O\left(\left(x-\frac{1}{2}\right)^4\right)$$ Use the bounds to get, as an approximation, $$\int_0^1 2^{x^2+x}\,dx\approx\frac{24+\log ^2(4)+\log (4)}{12 \sqrt[4]{2}}\approx 1.91361$$ while Wolfram Alpha would give $\approx 1.93749$.</p> <p>For sure, you could improve using more terms. For illustration purposes, suppose that we make the expansion to $O\left(\left(x-\frac{1}{2}\right)^n\right)$. We should get $$\left( \begin{array}{cc} n &amp; \text{result} \\ 2 &amp; 1.91361 \\ 4 &amp; 1.93589 \\ 6 &amp; 1.93741 \\ 8 &amp; 1.93749 \end{array} \right)$$</p>
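<p>Alternatively, the integral can be evaluated by plain numerical integration (a sketch using a composite Simpson's rule; no special functions needed):</p>

```python
import math

def f(x):
    return 2 ** (x * x + x)

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule with n subintervals (n even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

val = simpson(f, 0.0, 1.0)
assert abs(val - 1.93749) < 1e-3    # agrees with the series value for n = 8
```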
745,876
<p>$$\exists! x : A(x) \Rightarrow \exists x : A(x)$$ Assuming that $A(x)$ is an open sentence. I'm new to abstract mathematics and proofs, so I came here to ask for some simplification. Thanks</p>
dtldarek
26,306
<p>Notation $\exists! x. P(x)$ means "there is a unique $x$ such that $P(x)$". The common way to express this is by adding "$P(y)$ implies $x = y$", that is:</p> <p>$$\exists x :P(x) \land \Big(\forall y : P(y) \implies (x = y)\Big).$$</p> <p>However, please check, if this actually is the definition of $\exists!$ your book/professor uses. If yes, then your problem could be rewritten to</p> <p>$$\Bigg(\exists x :A(x) \land \Big(\forall y : A(y) \implies (x = y)\Big)\Bigg) \implies \Bigg(\exists x : A(x)\Bigg),$$</p> <p>which you can see is of the form $\alpha \land \beta \implies \alpha$ modulo the quantifier. If this isn't your definition, use the one from your course and proceed accordingly.</p> <p>I hope this helps $\ddot\smile$</p>
1,792,402
<p>I am trying to solve a <a href="https://en.wikipedia.org/wiki/Mean_value_theorem" rel="nofollow">Mean Value Theorem</a> problem but I ran into a roadblock trying to solve the question. So the question is: Assuming</p> <p>$$ \frac{f(b) − f(a)}{b − a} = f'(c) \quad \text{for some } c \in (a, b)$$</p> <p>Let $f(x) = \sqrt x$ and $[a, b] = [0, 4]$. Find the point(s) $c$ specified by the theorem.</p> <p>So I attempted to solve this by first getting the derivative of $f$, which gave me $\frac {1}{2\sqrt{x}}$. After plugging into the equation, $\frac{\frac{1}{2\sqrt{4}} - \frac{1}{2\sqrt{0}}}{4- 0}$ gives me $\frac{1}{0}$, which equates to infinity.<br> Now I am convinced that my answer is incorrect because the theorem states that <strong>f(x) is a continuous function on the (closed and finite) interval</strong>. </p> <p>Can someone guide me in the right direction with this problem? </p>
HighEnergy
330,386
<p>The above answers are correct; however, I would like to point out that in lieu of utilizing $$\frac{f(b)-f(a)}{b-a},$$ you utilized $f'(a)$ and $f'(b)$. Before using <code>MVT</code>, make sure f is continuous on $[a,b]$ and differentiable on $(a,b)$, which they are here.</p>
2,513,509
<p>If $f,g,h$ are functions defined on $\mathbb{R},$ the function $(f\circ g \circ h)$ is even:</p> <p>i) If $f$ is even.</p> <p>ii) If $g$ is even.</p> <p>iii) If $h$ is even.</p> <p>iv) Only if all the functions $f,g,h$ are even. </p> <p>Shall I take different examples of functions and see?</p> <p>Could anyone explain this for me please?</p>
don-joe
360,559
<ul> <li><p>A good point to start is always to put in some values (n=10, n=100, n=1000, ...) to get a first guess.</p></li> <li><p>In this particular case, try to look at the limit $1/n = x \rightarrow 0$ and apply L'Hôpital's rule (multiple times)</p></li> </ul>
99,312
<p>Let C be a geometrically integral curve over a number field K and let K' be a number field containing K. Does there exist a number field L containing K such that</p> <ul> <li>$L \cap K' = K$, and</li> <li>$C(L) \neq \emptyset$?</li> </ul> <p>Note that the hypotheses on C are necessary -- the curve x^2 + y^2 = 0, with the origin removed, is not geometrically integral, but gives a counterexample for K = Q and K' = Q(i).</p> <p>Also, I can prove that this is true when C has prime gonality. It would be odd, though, for this to be a necessary hypothesis.</p>
Olivier Benoist
2,868
<p>I think this is a consequence of (variants of) Hilbert's irreducibility theorem. Let me explain why. Suppose that $C$ is a geometrically integral curve defined over a number field $K$. Let $K'/K$ be a normal finite extension. We will show that infinitely many points of $C$ are defined over a field disjoint from $K'$.</p> <p>Since both the hypotheses and the conclusion are birational invariants, we may suppose that $C$ is a closed subset of $\mathbb{A}_K^2$ (take an affine open of $C$, embed it in $\mathbb{A}^N$ for some $N$ and take a generic projection to $\mathbb{A}^2$). Choose a generic projection $p:C\to\mathbb{A}^1_K$. The curve $C$ is described by an equation $F(t,x)=0$, where $t$ is the coordinate of $\mathbb{A}^1_K$, and $F$ is an irreducible polynomial.</p> <p>Now, since $C$ is geometrically integral, $F_{K'}$ is still irreducible. By [Serre, Topics in Galois theory, Proposition 3.3.1], $x\mapsto F_{K'}(\lambda',x)$ is irreducible for every $\lambda'\in K'$ outside of a thin set. Hence, by [Serre, Topics in Galois theory, Proposition 3.2.1], $x\mapsto F_{K'}(\lambda,x)$ is irreducible for every $\lambda\in K$ outside of a thin set. Since $K$ is Hilbertian, this holds for infinitely many $\lambda\in K$.</p> <p>Let us fix such a $\lambda$. We denote by $q$ and $q'$, the points of $\mathbb{A}^1_K$ and $\mathbb{A}^1_{K'}$ with coordinate $\lambda$. By choice of $\lambda$, $x\mapsto F_{K'}(\lambda,x)$ hence also $x\mapsto F(\lambda,x)$ are irreducible polynomials. Hence $p_{K'}^{-1}(q')\subset C_{K'}$ (resp. $p^{-1}(q)\subset C$) consists of a unique (reduced) point $p'\in C_{K'}$ (resp. $p\in C$). Let $L$ and $L'$ be the residual fields of $p$ and $p'$. By construction, $p'=p\times_{q} q'$ so that $L'=L\otimes_K K'$. This implies that $L$ is disjoint from $K'$. </p>
1,972,252
<blockquote> <p>If $v^TAv = v^TBv$ for all constant vectors $v$ and $A,B$ are matrices of size $n$ by $n$, is it true that $A=B$? </p> </blockquote> <p>I have thought about using basis vectors but cannot get system of equations uniquely. Is there a trick here?</p>
W. mu
369,495
<p>The question is the same as: </p> <blockquote> <p>Is it true that $A=0$, if $v^TAv=0$ for all vectors $v$? </p> </blockquote> <p>If $A$ is anti-symmetric, it is not: since $v^TAv$ is a scalar, $v^TAv=(v^TAv)^T=v^TA^Tv=-v^TAv$, so $v^TAv=0$ for every $v$, even though $A$ need not be the zero matrix.</p>
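<p>A quick numerical illustration (a NumPy sketch, comparing a random anti-symmetric $A$ with $B = 0$):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = A - A.T                     # anti-symmetric: A.T == -A, but A != 0

for _ in range(100):
    v = rng.standard_normal(4)
    # v^T A v = (v^T A v)^T = v^T A^T v = -v^T A v, hence 0 (up to rounding)
    assert abs(v @ A @ v) < 1e-12
```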
2,375,023
<p>So I have to find an interval (in the real numbers) such that it contains all roots of the following function: $$f(x)=x^5+x^4+x^3+x^2+1$$</p> <p>I've tried to work with the derivatives of the function but it doesn't give any information about the interval, only how many possible roots the function might have.</p>
Masacroso
173,262
<p>HINT: observe that</p> <p>$$\lim_{x\to +\infty}f(x)=+\infty,\qquad\lim_{x\to -\infty}f(x)=-\infty$$</p> <p>and</p> <p>$$\lim_{x\to \pm\infty}\frac{4x^4}{x^5}=0$$</p> <p>Thus there exists some $\alpha&gt;1$ such that $$|x^5|&gt;|4x^4|&gt;| x^4+x^3+x^2+1|$$</p> <p>when $x&gt;\alpha$ or $x&lt;-\alpha$. Hence $f(x)\neq 0$ there, so every real root lies in $[-\alpha,\alpha]$.</p>
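<p>For a concrete choice of interval one can use the classical Cauchy bound: for this monic polynomial every root satisfies $|x| \le 1 + \max_i |a_i| = 2$. A quick NumPy sanity check:</p>

```python
import numpy as np

coeffs = [1, 1, 1, 1, 0, 1]          # x^5 + x^4 + x^3 + x^2 + 1
roots = np.roots(coeffs)

# Cauchy's bound: every root z satisfies |z| <= 1 + max|a_i| = 2,
# so in particular all real roots lie in [-2, 2].
assert np.all(np.abs(roots) <= 2 + 1e-9)
```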
1,015,264
<p>This is a worked out example in my book, but I am having a little trouble understanding it:</p> <p>Consider the system of equations:</p> <p>$$x'=y+x(1-x^2-y^2)$$ $$y'=-x+y(1-x^2-y^2)$$</p> <p>The orbits and limit sets of this example can be easily determined by using polar coordinates. (My question: what is the motivation for that thinking? What should clue me in to thinking that I should use polar coordinates?)</p> <p>The polar coordinate satisfies $r^2=x^2+y^2$ so by differentiating with respect to $t$ and using the differential equations we get:</p> <p>$r\cdot r'=x\cdot x'+y\cdot y'$ (I am unclear about how the book even got this first equation from $r^2=x^2+y^2$) </p> <p>$=x\cdot y+x^2(1-r^2)-x\cdot y+y^2(1-r^2)$ Substitute in $x'$ and $y'$ and then multiply out and replace with $r$, I get this step</p> <p>$=r^2(1-r^2)$ cancel terms, I get this step too</p> <p>$r'=r(1-r^2)$</p> <p>similarly, the angle variable $\theta$ satisfies $\tan\theta=\frac yx$, so the derivative with respect to $t$ yields $\sec^2(\theta)\theta'=x^{-2}[x^2+xy(1-r^2)-y^2-xy(1-r^2)]=-\frac{r^2}{x^2}$ so $\theta=1$</p> <p>Thus the solution goes clockwise around the origin at unit angular speed.</p> <p>I don't understand the $\theta$ step at all or how they reached the conclusion of clockwise around the origin with unit angular speed.</p> <p>But then it just jumps to saying "the origin is a fixed point, so α(0)=ω(0)={0}" but I have no idea how they reached this conclusion.</p>
AlexR
86,940
<p>Firstly, anything dependent on $x^2+y^2$ (such as $1-x^2-y^2 = 1-(x^2+y^2)$) lends itself to conversion into polar (or spherical or cylindrical) coordinates. We obtain $$\begin{align*} r^2 &amp; = x^2+y^2 \\ \Rightarrow \frac d{dt} r^2 &amp; = \frac d{dt} x^2 + \frac d{dt} y^2 \\ \Rightarrow 2rr' &amp; = 2xx' + 2yy' \\ \Rightarrow rr' &amp; = xx'+yy' \end{align*}$$ This trick of differentiating both sides has also been applied to $$\begin{align*} \tan \theta &amp; = \frac yx \\ \Rightarrow \sec^2 \theta \cdot \theta' &amp; = \frac{y'x - yx'}{x^2} \end{align*}$$ Substituting the system into the last equation gives $y'x - yx' = -(x^2+y^2) = -r^2$, and since $\sec^2 \theta = \frac{r^2}{x^2}$, it follows that $\theta' = -1$: a constant negative angular velocity is exactly what "clockwise around the origin at unit angular speed" means.</p>
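Both identities can be sanity-checked numerically at random points (my addition, not part of the original answer):

```python
import random

def field(x, y):
    # right-hand side of the system
    dx = y + x * (1 - x**2 - y**2)
    dy = -x + y * (1 - x**2 - y**2)
    return dx, dy

random.seed(0)
for _ in range(100):
    x, y = random.uniform(-2, 2), random.uniform(-2, 2)
    r2 = x**2 + y**2
    if r2 < 1e-6:
        continue
    dx, dy = field(x, y)
    # r r' = x x' + y y' should equal r^2 (1 - r^2)
    assert abs(x*dx + y*dy - r2*(1 - r2)) < 1e-9
    # theta' = (x y' - y x') / r^2 should equal -1 (clockwise, unit speed)
    assert abs((x*dy - y*dx)/r2 + 1) < 1e-9
```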
877,683
<p><img src="https://www.anonimg.com/img/c0423915f5f5fae8f9790506c36393f8.jpg" alt=""> Based on my last question I learned that this is an envelope of a parabola.</p> <p><a href="https://math.stackexchange.com/questions/874478/what-is-this-geometric-pattern-called">What is this geometric pattern called?</a></p> <p>But how can I prove it?</p>
Community
-1
<p>The curve you see is by <a href="http://en.m.wikipedia.org/wiki/B%C3%A9zier_curve#Quadratic_B.C3.A9zier_curves" rel="nofollow">definition</a> a quadratic Bézier curve, which is always a segment of a parabola.</p>
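The link between the string-art picture and the Bézier form can be checked directly via de Casteljau's algorithm: the point a fraction $s$ along the chord joining the point $s$ of the way along $P_0P_1$ to the point $s$ of the way along $P_1P_2$ is exactly the Bézier point $B(s)$. A small check (my addition; the control points are arbitrary):

```python
def lerp(p, q, s):
    return (p[0] + s*(q[0]-p[0]), p[1] + s*(q[1]-p[1]))

def bezier(p0, p1, p2, s):
    # quadratic Bezier: B(s) = (1-s)^2 P0 + 2s(1-s) P1 + s^2 P2
    return ((1-s)**2*p0[0] + 2*s*(1-s)*p1[0] + s**2*p2[0],
            (1-s)**2*p0[1] + 2*s*(1-s)*p1[1] + s**2*p2[1])

p0, p1, p2 = (0.0, 0.0), (1.0, 2.0), (2.0, 0.0)
for i in range(11):
    s = i / 10
    a = lerp(p0, p1, s)   # point s of the way along the first segment
    b = lerp(p1, p2, s)   # point s of the way along the second segment
    c = lerp(a, b, s)     # de Casteljau: point s of the way along the chord
    bx, by = bezier(p0, p1, p2, s)
    assert abs(c[0] - bx) < 1e-12 and abs(c[1] - by) < 1e-12
```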
1,031,464
<p>I am supposed to simplify this:</p> <p>$$(x^2-1)^2 (x^3+1) (3x^2) + (x^3+1)^2 (x^2-1) (2x)$$</p> <p>The answer is supposed to be this, but I cannot seem to get to it:</p> <p>$$x(x^2-1)(x^3+1)(5x^3-3x+2)$$</p> <p>Thanks</p>
Pauly B
166,413
<p>$$(x^2-1)^2 (x^3+1) (3x^2) + (x^3+1)^2 (x^2-1) (2x)$$ Factor out the $(x^2-1)$ term to get $$(x^2-1)\left((x^2-1) (x^3+1) (3x^2) + (x^3+1)^2(2x)\right)$$ then factor out the $(x^3+1)$ term to get $$(x^2-1)(x^3+1)\left((x^2-1) (3x^2) + (x^3+1)(2x)\right)$$ then factor out the $x$ term to get $$x(x^2-1)(x^3+1)\left((x^2-1) (3x) + (x^3+1)(2)\right)=x(x^2-1)(x^3+1)(5x^3-3x+2)$$</p>
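To gain confidence in the final identity (a check I am adding, not part of the original answer): both sides are degree-9 polynomials, so exact agreement at ten distinct rational points proves they are equal:

```python
from fractions import Fraction

def lhs(x):
    return (x**2 - 1)**2 * (x**3 + 1) * (3*x**2) + (x**3 + 1)**2 * (x**2 - 1) * (2*x)

def rhs(x):
    return x * (x**2 - 1) * (x**3 + 1) * (5*x**3 - 3*x + 2)

# Two degree-9 polynomials agreeing at 10 distinct points (in exact
# rational arithmetic) must be identical.
for k in range(10):
    x = Fraction(k, 3)
    assert lhs(x) == rhs(x)
```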
4,232,341
<p>To prove <span class="math-container">$\lim\limits_{n\to\infty} \left(1+\frac{x}{n}\right)^{n}$</span> exists, we prove that the sequence <span class="math-container">$$f_n=\left(1+\frac{x}{n}\right)^n$$</span> is bounded and monotonically increasing toward that bound.</p> <p><strong>Proof Attempt:</strong></p> <hr /> <p>We begin by showing <span class="math-container">$f_n=\left(1+\frac{x}{n}\right)^n$</span> is monotonically increasing by looking at the ratio of consecutive terms: <span class="math-container">\begin{align*} \frac{f_{n+1}}{f_n} &amp;=\frac{\left(1+\frac{x}{n+1}\right)^{n+1}}{\left(1+\frac{x}{n}\right)^{n}} \tag{Definition of $f_n$} \\ &amp;=\frac{\left(1+\frac{x}{n+1}\right)^{n+1}\left(1+\frac{x}{n}\right)}{\left(1+\frac{x}{n}\right)^{n}\left(1+\frac{x}{n}\right)} \tag{Multiplication by $\frac{\left(1+\frac{x}{n}\right)}{\left(1+\frac{x}{n}\right)}$} \\ &amp;=\frac{\left(1+\frac{x}{n+1}\right)^{n+1}}{\left(1+\frac{x}{n}\right)^{n+1}}\left(1+\frac{x}{n}\right) \tag{Simplify $a^n\cdot a = a^{n+1}$} \\ &amp;=\left(\frac{1+\frac{x}{n+1}}{1+\frac{x}{n}}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Simplify $\frac{a^{n+1}}{b^{n+1}}=\left(\frac{a}{b}\right)^{n+1}$} \\ &amp;=\left(\frac{\frac{n+1+x}{n+1}}{\frac{n+x}{n}}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Common denominators} \\ &amp;=\left(\frac{n+1+x}{n+1}\cdot \frac{n}{n+x}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Simplify $\frac{\frac{a}{b}}{\frac{c}{d}}=\frac{a}{b}\cdot \frac{d}{c}$} \\ &amp;=\left(\frac{n^2+n+nx}{(n+1)(n+x)}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Distribute $(n+1+x)n$} \\ &amp;=\left(\frac{n^2+n+nx+x-x}{(n+1)(n+x)}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Add and subtract $x$} \\ &amp;=\left(\frac{(n+1)(n+x)-x}{(n+1)(n+x)}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Factor $n^2+n+nx+x$} \\ &amp;=\left(1+\frac{-x}{(n+1)(n+x)}\right)^{n+1}\left(1+\frac{x}{n}\right) \tag{Simplify $\frac{a+b}{c}=\frac{a}{c}+\frac{b}{c}$} \\ 
&amp;\ge\left(1+\frac{-x}{(n+x)}\right)\left(1+\frac{x}{n}\right) \tag{Bernoulli: $(1+x)^n \ge 1+nx$} \\ &amp;=\left(\frac{n}{n+x}\right)\left(\frac{n+x}{n}\right) \tag{Common denominators} \\ &amp;=1 \tag{Simplify $\frac{a}{b} \cdot \frac{b}{a}=1$} \end{align*}</span> Since <span class="math-container">$\frac{f_{n+1}}{f_n}&gt;1$</span>, then <span class="math-container">$f_{n+1}&gt;f_n$</span>, which shows the sequence <span class="math-container">$f_n$</span> is monotonically increasing for all <span class="math-container">$n \in \mathbb{N}$</span>.</p> <p>Next, we show <span class="math-container">$f_n=\left(1+\frac{x}{n}\right)^n$</span> is bounded above. Note that <span class="math-container">\begin{align*} f_n &amp;=\left(1+\frac{x}{n}\right)^n \tag{Definition of $f_n$} \\ &amp;=\sum_{k=0}^n \binom{n}{k} (1)^{n-k} \left(\frac{x}{n}\right)^{k} \tag{Binomial Theorem} \\ &amp;=1+\frac{n}{1!}\left(\frac{x}{n}\right)+\frac{n(n-1)}{2!}\left(\frac{x}{n}\right)^2+\frac{n(n-1)(n-2)}{3!}\left(\frac{x}{n}\right)^3+\cdots+\frac{n!}{n!}\left(\frac{x}{n}\right)^n \\ &amp;=1+\frac{\frac{n}{n}}{1!}x+\frac{\frac{n(n-1)}{n^2}}{2!}x^2+\frac{\frac{n(n-1)(n-2)}{n^3}}{3!}x^3+\cdots+\frac{\frac{n!}{n^n}}{n!}x^n \tag{Simplify}\\ &amp;=1+\frac{1}{1!}x+\frac{\left(1-\frac{1}{n}\right)}{2!}x^2+\frac{\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)}{3!}x^3+\cdots+\frac{\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots \left(1-\frac{n-1}{n}\right)}{n!}x^n \\ &amp; \le 1+\frac{1}{1!}x+\frac{1}{2!}x^2+\frac{1}{3!}x^3+\cdots+\frac{1}{n!}x^n \tag{$1-\frac{k}{n}&lt;1$} \\ &amp; = \sum_{k=0}^n \frac{1}{k!} x^k \tag{Sigma notation}\\ &amp; \to %\underset{n \to \infty}{\to} \sum_{k=0}^\infty \frac{1}{k!}x^k \tag{as $n \to \infty$} \\ &amp; = \sum_{k=0}^K \frac{1}{k!} x^k + \sum_{k=K+1}^\infty \frac{1}{k!}x^k \tag{$\exists K$, $k&gt;K$ implies $k! \ge (2x)^k$}\\ &amp; \le \sum_{k=0}^K \frac{1}{k!} x^k + \sum_{k=K+1}^\infty \frac{1}{(2x)^k} x^k \tag{$k! 
\ge (2x)^k$ implies $\frac{1}{k!} \le \frac{1}{(2x)^k}$}\\ &amp; = \sum_{k=0}^K \frac{1}{k!} x^k + \sum_{k=K+1}^\infty \frac{1}{2^k} \tag{$\frac{1}{(2x)^k}x^k=\frac{1}{2^k x^k}x^k = \frac{1}{2^k}$}\\ &amp;= \sum_{k=0}^K \frac{1}{k!} x^k + \frac{1}{2^K} \tag{Geometric series evaluation} \end{align*}</span> which is finite. Thus, the sequence <span class="math-container">$f_n$</span> is bounded. Since it is both monotonically increasing and bounded, it is convergent by the Monotone convergence theorem.</p> <hr /> <p>Is my proof correct? I am suspicious of the step which says &quot;<span class="math-container">$\rightarrow \sum_{k=0}^n \frac{1}{k!}x^k$</span>&quot;, and would like to avoid taking another limit in the middle of the boundedness proof.</p> <p>I also compared my proof to the following references and saw something worrisome:</p> <ul> <li><p><a href="http://www.sci.brooklyn.cuny.edu/%7Emate/misc/exp_x.pdf" rel="nofollow noreferrer">Reference 1</a> &lt;- Assumes <span class="math-container">$x\ge 0$</span> (Why?)</p> </li> <li><p><a href="https://math.stackexchange.com/a/1590263/100167">Reference 2</a> &lt;- Assumes <span class="math-container">$x \ge -1$</span> (Why?)</p> </li> <li><p><a href="https://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html?m=0#.YSUsR8pUviD" rel="nofollow noreferrer">Reference 3 </a> &lt;- Considers <span class="math-container">$x=0$</span>, <span class="math-container">$x&gt;0$</span>, and <span class="math-container">$x&lt;0$</span> separately (Why?)</p> </li> </ul> <p>All of the above proofs either assumed <span class="math-container">$x&gt;0$</span> or considered cases where <span class="math-container">$x&gt;0$</span> and <span class="math-container">$x&lt;0$</span> separately, but I do not know why. 
In fact, the third reference considers <span class="math-container">$\left(1-\frac{x}{n}\right)^{-n}$</span> for <span class="math-container">$x&gt;0$</span> (I think this is a typo and should read <span class="math-container">$x&lt;0$</span>), but I am not sure why the negative exponent is needed (we are talking about a negative value of <span class="math-container">$x$</span>, not negative <span class="math-container">$n$</span>.)</p> <p>I could only find one proof that did not consider different cases on the sign of <span class="math-container">$x$</span>:</p> <ul> <li><a href="https://proofwiki.org/wiki/Exponential_Function_is_Well-Defined/Real/Proof_2" rel="nofollow noreferrer">Bonus reference 4</a> &lt;- Uses absolute values, but I am not sure why these are necessary either.</li> </ul> <p>I would like to verify my proof and ask 3 questions:</p> <ol> <li><p>Why is it necessary to consider cases <span class="math-container">$x&gt;0$</span> and <span class="math-container">$x&lt;0$</span> separately? Did any step in my proof implicitly assume that <span class="math-container">$x&gt;0$</span>? If so, which one?</p> </li> <li><p>Is there any way to avoid taking a limit in the middle of the boundedness proof?</p> </li> <li><p>Substituting <span class="math-container">$n=1$</span> in my boundedness proof shows <span class="math-container">$1+x \le \sum_{k=0}^n \frac{1}{k!}x^k$</span>. Does this imply <span class="math-container">$1+x \le \lim\limits_{n\to \infty} \left(1+\frac{x}{n}\right)^n$</span>, since <span class="math-container">$f_n$</span> is an increasing function of <span class="math-container">$n$</span>? Can this be seen explicitly, or would that require a separate proof?</p> </li> </ol> <p>Thank you.</p>
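As a sanity check on the two parts of the argument (this numerical experiment is mine, not part of the question, and it only exercises a positive $x$, the case where the inequalities above drop nonnegative terms):

```python
import math

# monotonicity and boundedness of f_n = (1 + x/n)^n for a positive x
x = 1.5
prev = 0.0
for n in range(1, 200):
    f_n = (1 + x/n)**n
    assert f_n > prev          # the sequence increases
    prev = f_n

# bounded above by the exponential series (60 terms are plenty here)
bound = sum(x**k / math.factorial(k) for k in range(60))
assert prev < bound
assert abs(bound - math.exp(x)) < 1e-12
```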
Daniel Sehn Colao
649,328
<p>I'll give you the intuition of how it works:</p> <p>To begin with, Fubini's Theorem states that <span class="math-container">$F(x,y) = \int_c^y \int_a^x f(u,v)dudv = \int_a^x \int_c^y f(u,v)dvdu$</span>, where $f$ is a continuous function on the region $R$ defined by $[a,b]\times[c,d]$.</p> <p>We know that <span class="math-container">$F(x,y) = \int_c^y \int_a^x f(u,v)dudv = \int_a^x \int_c^y f(u,v)dvdu$</span>.</p> <p>Differentiating both sides of <span class="math-container">$F(x,y) = \int_c^y \int_a^x f(u,v)dudv$</span> with respect to <span class="math-container">$y$</span>:</p> <p><span class="math-container">$\frac{\partial F}{\partial y}(x,y) = \int_a^x f(u,y)du$</span></p> <p>Now with respect to $x$:</p> <p><span class="math-container">$\frac{\partial^2 F}{\partial x\partial y}(x,y) = f(x,y)$</span></p> <p>Differentiating both sides of <span class="math-container">$F(x,y) = \int_a^x \int_c^y f(u,v)dvdu$</span> with respect to <span class="math-container">$x$</span>:</p> <p><span class="math-container">$\frac{\partial F}{\partial x}(x,y) = \int_c^y f(x,v)dv$</span>.</p> <p>Now, with respect to <span class="math-container">$y$</span>:</p> <p><span class="math-container">$\frac{\partial^2 F}{\partial y \partial x}(x,y) = f(x,y)$</span>.</p> <p>Therefore, <span class="math-container">$\frac{\partial^2 F}{\partial x\partial y}(x,y) = \frac{\partial^2 F}{\partial y \partial x}(x,y)$</span>.</p> <p>I hope I clarified a little more.</p>
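To illustrate the conclusion numerically (my addition, not part of the original answer; the integrand is an arbitrary smooth choice): approximating $F$ by a midpoint rule and taking a mixed second difference recovers $f(x,y)$:

```python
import math

def f(u, v):
    return u*v + math.sin(u) * math.cos(v)

def F(x, y, a=0.0, c=0.0, n=200):
    # midpoint-rule approximation of the double integral of f over [a,x] x [c,y]
    hu, hv = (x - a)/n, (y - c)/n
    total = 0.0
    for i in range(n):
        u = a + (i + 0.5)*hu
        for j in range(n):
            total += f(u, c + (j + 0.5)*hv)
    return total * hu * hv

# the mixed second difference of F approximates d^2 F / (dx dy) = f(x, y)
x, y, h = 0.7, 1.1, 1e-2
mixed = (F(x+h, y+h) - F(x+h, y) - F(x, y+h) + F(x, y)) / h**2
assert abs(mixed - f(x, y)) < 1e-2
```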
3,427,794
<p>Let's say I have the following equation</p> <p><span class="math-container">$$ x=1 $$</span></p> <p>I take the derivative of both sides with respect to <span class="math-container">$x$</span>:</p> <p><span class="math-container">$$ \frac{\partial }{\partial x} x = \frac{\partial }{\partial x}1 $$</span></p> <p>Therefore, <span class="math-container">$1=0$</span>. Clearly, that is not the right approach. </p> <p>So what is the right way to think of <span class="math-container">$x=1$</span>? What kind of object is it?</p>
user
505,767
<p>A partial derivative takes a function as its argument; therefore, if we regard, with a little abuse of notation, <span class="math-container">$x$</span> as a function <span class="math-container">$x= x(x,y,z,\ldots)$</span> and we state that <span class="math-container">$x$</span> is the constant function <span class="math-container">$x(x,y,z,\ldots)=1$</span>, then</p> <p><span class="math-container">$$\frac{\partial }{\partial x} x(x,y,z,\ldots) = \frac{\partial }{\partial x}1=0$$</span></p> <p>Note that of course it would be better to use a different notation for <span class="math-container">$x$</span> as a function, to distinguish it from the variable <span class="math-container">$x$</span>, for example <span class="math-container">$\bar x= \bar x(x,y,z,\ldots)$</span>.</p>
4,184,196
<blockquote> <p>Let <span class="math-container">$\hat{f} : \Bbb S^1 \to \Bbb R^2$</span>, <span class="math-container">$\hat{f}(x,y) = (x,y)$</span> and <span class="math-container">$\hat{g} : \Bbb S^1 \to \Bbb R^2$</span>, <span class="math-container">$\hat{g}(x,y) = -(x,y)$</span>. Show that there exists a Homotopy <span class="math-container">$\hat{H} : \Bbb S^1 \times [0,1] \to \Bbb R^2$</span> from <span class="math-container">$\hat{f}$</span> to <span class="math-container">$\hat{g}$</span>.</p> </blockquote> <p>So both <span class="math-container">$\hat{f}$</span> and <span class="math-container">$\hat{g}$</span> are mapping points from the unit disk to the plane. Isn't the image of both maps just the disk back itself? I'm confused on how to get an intuition for the problem here. The definition of Homotopy is that I would need to construct <span class="math-container">$\hat{H}$</span> such that <span class="math-container">$$\hat{H}(x,y,0) = \hat{f}$$</span> and that <span class="math-container">$$\hat{H}(x,y,1) = \hat{g}.$$</span> Certainly <span class="math-container">$$\hat{H}(x,y,t) = (tx, ty)$$</span> doesn't work since <span class="math-container">$\hat{H}(x,y,0) = (0,0) \ne (-x,-y).$</span> What can I do here?</p>
Andrew Paul
419,177
<p><span class="math-container">$\hat{f}$</span> and <span class="math-container">$\hat{g}$</span> map from the unit <em>circle</em> (<span class="math-container">$S^1$</span>) to the plane. Intuitively, one can imagine that <span class="math-container">$t\in[0,1]$</span> parameterizes a rotation along the circle. In particular, at <span class="math-container">$t=0$</span>, we rotate <span class="math-container">$(x,y)$</span> an angle of <span class="math-container">$0$</span>, and at <span class="math-container">$t=1$</span>, we rotate <span class="math-container">$(x,y)$</span> an angle of <span class="math-container">$\pi$</span>. In general, we can imagine the function <span class="math-container">$h_t(x,y)=\hat{H}(x,y,t)$</span> as rotating the point <span class="math-container">$(x,y)\in S^1$</span> by an angle of <span class="math-container">$\pi t$</span> and then throwing the resulting point to <span class="math-container">$\mathbb{R}^2$</span> via the canonical inclusion <span class="math-container">$S^1\hookrightarrow\mathbb{R}^2$</span>.</p> <p>So if we took coordinates on <span class="math-container">$S^1$</span> to be <span class="math-container">$(\cos{\theta},\sin{\theta})$</span> instead of <span class="math-container">$(x,y)$</span>, a homotopy would be given by <span class="math-container">$$\hat{H}(\cos{\theta},\sin{\theta},t)=\left(\cos{(\theta+\pi t)},\sin{(\theta+\pi t)}\right).$$</span> It is easy to see that this is indeed a homotopy from <span class="math-container">$\hat{f}$</span> to <span class="math-container">$\hat{g}$</span>.</p>
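A quick mechanical check of the endpoint conditions (my addition, not part of the original answer): at $t=0$ the map is the inclusion $\hat f$, and at $t=1$ it is the antipodal map $\hat g$:

```python
import math

def H(theta, t):
    # rotate the point (cos theta, sin theta) by an angle of pi*t
    return (math.cos(theta + math.pi*t), math.sin(theta + math.pi*t))

for k in range(12):
    theta = 2*math.pi*k/12
    x, y = math.cos(theta), math.sin(theta)
    h0 = H(theta, 0.0)
    h1 = H(theta, 1.0)
    assert abs(h0[0] - x) < 1e-12 and abs(h0[1] - y) < 1e-12   # H(., 0) = f
    assert abs(h1[0] + x) < 1e-12 and abs(h1[1] + y) < 1e-12   # H(., 1) = g = -f
```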
2,007,584
<p>Let $M$ be a smooth manifold and $f:M\rightarrow \mathbb{R}$ be a smooth function such that $f(M)=[0,1]$. Let $1/2$ be a regular value and suppose we consider the open and non-empty set $U:=f^{-1}(\frac{1}{2},\infty)\subset M$. I would like to show that $f^{-1}(\frac{1}{2})$ must coincide with the topological boundary of $U$, i.e. $\partial U=f^{-1}(\frac{1}{2})$.</p> <p>I could prove that $\partial U\subset f^{-1}(\frac{1}{2})$. But I have problems to show the opposite inclusion. How can one prove that $ f^{-1}(\frac{1}{2})\subset\partial U$?</p> <p>Best wishes</p>
Eric Schlarmann
381,012
<p>Let $f: (-\epsilon, 1+\epsilon) \to [0,1]$<br/> $t\mapsto \begin{cases}0 &amp; t\leq 0 \\ \frac{3}{2}t &amp; 0&lt;t&lt;\frac{1}{3}\\ \frac 1 2 &amp; \frac 1 3 \leq t \leq \frac 2 3\\ \frac 3 2 t- \frac 1 2 &amp; 1&gt;t&gt; \frac 2 3 \\ 1 &amp; t\geq 1\end{cases}$</p> <p>Then the preimage of $\frac 1 2$ is the whole interval $\left[\frac 1 3, \frac 2 3\right]$, while $U=f^{-1}\left(\frac 1 2,\infty\right)=\left(\frac 2 3, 1+\epsilon\right)$ has boundary $\left\{\frac 2 3\right\}$, so the preimage of $\frac 1 2$ is not contained in the boundary of $U$.</p> <p>Edit: The question was edited to live in the smooth category. If you assume $\frac 1 2$ to be a regular value of $f$, then this will be true for the manifold notion of boundary, as a consequence of the regular value theorem.</p>
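To make the counterexample concrete (a check I am adding, not part of the original answer): every $t\in[1/3,2/3]$ maps to $1/2$, yet a whole neighborhood of $t=1/2$ misses $U=f^{-1}(1/2,\infty)$, so $t=1/2$ cannot be a boundary point of $U$:

```python
def f(t):
    # the piecewise function from the answer (domain (-eps, 1+eps))
    if t <= 0:
        return 0.0
    if t < 1/3:
        return 1.5*t
    if t <= 2/3:
        return 0.5
    if t < 1:
        return 1.5*t - 0.5
    return 1.0

# points strictly inside [1/3, 2/3] all map to exactly 1/2 ...
for k in range(100):
    t = 0.34 + k * 0.0031
    assert f(t) == 0.5

# ... but a neighborhood of t = 1/2 avoids U = f^{-1}((1/2, oo)),
# so t = 1/2 is not a boundary point of U
for k in range(-50, 51):
    t = 0.5 + k * 0.001
    assert not (f(t) > 0.5)

# t = 2/3, by contrast, IS a boundary point of U
assert f(2/3 + 1e-6) > 0.5 and f(2/3 - 1e-6) == 0.5
```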
2,553,853
<p>I know that I am to use the theorem: $R/A$ is an integral domain iff $A$ is prime. So I want a case where for some $a,b$ in $R$ with $ab$ in $A$, we have neither $a$ nor $b$ in $A$.</p> <p>I thought it had something to do with the conjugate, but I'm lost.</p>
57Jimmy
356,190
<p>What would be "the conjugate", since you do not know what ring $R$ you are dealing with?</p> <p>As suggested in the comments, $x^7+1=\underset{a}{(x+1)} \underset{b}{(x^6-x^5+x^4-x^3+x^2-x+1)}$. Neither $a$ nor $b$ is in the ideal $A=(x^7 + 1)$, since all nonzero elements of $A$ have degree at least 7, but their product $ab$ is in $A$. Hence $A$ is not prime.</p>
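The factorization itself is easy to verify by convolving coefficient lists (a quick check I am adding, not part of the original answer):

```python
def poly_mul(p, q):
    # multiply two polynomials given as coefficient lists, lowest degree first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

a = [1, 1]                          # x + 1
b = [1, -1, 1, -1, 1, -1, 1]        # x^6 - x^5 + x^4 - x^3 + x^2 - x + 1
assert poly_mul(a, b) == [1, 0, 0, 0, 0, 0, 0, 1]   # x^7 + 1
```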
250,651
<p>(<a href="https://math.stackexchange.com/questions/1940157/coend-of-mathscrdf-bullet-g-bullet?noredirect=1#comment3982852_1940157">Crosspost</a> from stack)</p> <p>Given categories $\mathscr{C}$ and $\mathscr{D}$ and functors $F,G: \mathscr{C} \to \mathscr{D}$, we can form a bifunctor $$\mathscr{D}(F(\bullet), G(\bullet)): \mathscr{C}^\text{op} \times \mathscr{C} \to \mathsf{Set}$$ and the end of this functor is the set of natural transformations $F \Rightarrow G$. (I guess we need $\mathscr{C}$ to be small in order to guarantee there is a <em>set</em> of such natural transformations, in general.)</p> <p>Can I say anything about the coend? Is it some familiar thing?</p>
HeinrichD
98,306
<p>Basic rule in general category theory: Restrict to categories with one object, aka monoids, and see what happens. (Alternatively, restrict to thin categories, aka preorders.) Gain some intuition and generalize this to arbitrary categories.</p> <p>So here, if $M$ is a monoid, we have two objects $X$ and $Y$ of $\mathscr{D}$ on which $M$ acts from the left via morphisms of monoids $F:M \to \mathrm{End}(X)$ and $G:M \to \mathrm{End}(Y)$. Then, the set $\mathrm{Hom}(X,Y)$ carries a left $M$- and a right $M$-action. The coend identifies these two actions in a universal way: It is given by $\mathrm{Hom}(X,Y) / (G(m) \circ f \sim f \circ F(m))_{f \in \mathrm{Hom}(X,Y),\, m \in M}$. If the action $F$ is trivial, this is a generalization of the set of <strong>coinvariants</strong>. Namely, this is the special case $\mathscr{D}=\mathsf{Set}$ and $X=1$. Dually, notice that the end of $\mathrm{Hom}(X,Y)$ is the set of morphisms $f \in \mathrm{Hom}(X,Y)$ which "believe" that the two actions agree, i.e. $G(m) \circ f = f \circ F(m)$ for all $m \in M$, and that this generalizes the set of <strong>invariants</strong> when the action $F$ is trivial.</p> <p>Now, if $\mathscr{C}$ is any <strong>small</strong> category and $F,G : \mathscr{C} \to \mathscr{D}$ are functors, you may view these as "multi-object" actions of $\mathscr{C}$ on $\mathscr{D}$. The coend of $\mathrm{Hom}(F(-),G(-))$ is the quotient set of the coproduct $\coprod_{x \in \mathscr{C}} \mathrm{Hom}(F(x),G(x))$ which identifies the two actions of $\mathcal{C}$. This means that we identify, for every morphism $m : x \to y$ in $\mathscr{C}$ and every morphism $f : F(y) \to G(x)$ in $\mathscr{D}$ the two morphisms $G(m) \circ f : F(y) \to G(y)$ and $f \circ F(m) : F(x) \to G(y)$ in the coproduct.</p>
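The quotient description can be computed mechanically in a finite case. The following sketch (my addition, with an arbitrarily chosen example, not from the original answer) takes $M=\mathbb{Z}/2$ acting on $X=Y=\{0,1\}$ by the swap on both sides, and computes the coend as the quotient of $\mathrm{Hom}(X,Y)$ by $G(m)\circ f\sim f\circ F(m)$:

```python
from itertools import product

X = Y = (0, 1)
swap = {0: 1, 1: 0}
ident = {0: 0, 1: 1}
M = [ident, swap]   # Z/2, acting on both X and Y by F = G = the swap action

# all functions X -> Y, encoded as tuples (f(0), f(1))
homs = list(product(Y, repeat=len(X)))

# union-find to impose the relation  G(m) o f  ~  f o F(m)
parent = {f: f for f in homs}
def find(f):
    while parent[f] != f:
        f = parent[f]
    return f

for f in homs:
    for m in M:
        left = tuple(m[f[x]] for x in X)    # G(m) o f
        right = tuple(f[m[x]] for x in X)   # f o F(m)
        parent[find(left)] = find(right)

classes = {find(f) for f in homs}
print(len(classes))   # 3: {id}, {swap}, {const_0, const_1}
```

The two constant maps are identified, so the coend has three elements here.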
629,996
<p>At <a href="https://math.stackexchange.com/questions/629950/why-i-left-px-in-mathbbz-leftx-right2-mid-p0-right-is-not-a-prin#629954">this question</a> I asked about a specific one...</p> <p>But I think that I don't understand the basics: </p> <blockquote> <p>If I have an ideal $I$, what prevents it from being principal? </p> </blockquote> <p>I think that I really need a good example of some ideal $I$ that is <strong>not</strong>(!!) principal. </p> <p>In all the books and examples I saw that $\left&lt;m\right&gt;=m\cdot \mathbb{Z}$ is a principal ideal, but I am looking for the opposite: an ideal that is <strong>not</strong> principal, and why it is not principal... </p> <p>I opened another question because I really want to understand the general idea, not just via my example...</p> <p>Thank you! </p>
Amzoti
38,839
<p>Another approach: Let</p> <p>$$y = v x$$</p> <p>The derivatives are:</p> <p>$$y' = v + x v', ~~y'' = 2 v' + x v''$$</p> <p>Substitute those into the ODE and solve for $v$ and then $y$.</p> <p><strong>Update</strong></p> <p>Upon substitution, we get:</p> <p>$$x^4 v'' + x^2(2x - 1) v' = 0$$</p> <p>Let $v' = w$, hence:</p> <p>$$x^4 w' + x^2(2x-1)w = 0$$</p> <p>This is now a separable equation, which yields:</p> <p>$$w(x) = \dfrac{c_1e^{-1/x}}{x^2}$$</p> <p>Using this result, now solve for $v$. From the earlier substitution, we had $v' = w$, so we can integrate both sides, yielding:</p> <p>$$v(x) = c_1e^{-1/x} + c_2$$</p> <p>Lastly, from the original substitution, we have:</p> <p>$$y(x) = v x = x(c_1e^{-1/x} + c_2)$$</p>
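Both integrations can be sanity-checked with central differences (my addition, not part of the original answer): $w=e^{-1/x}/x^2$ should satisfy $x^4w'+x^2(2x-1)w=0$, and $v=e^{-1/x}$ should satisfy $v'=w$:

```python
import math

def w(x):
    return math.exp(-1/x) / x**2

def v(x):
    return math.exp(-1/x)

h = 1e-6
for x in (0.5, 1.0, 2.0, 3.0):
    wp = (w(x+h) - w(x-h)) / (2*h)   # central difference for w'
    vp = (v(x+h) - v(x-h)) / (2*h)   # central difference for v'
    assert abs(x**4 * wp + x**2 * (2*x - 1) * w(x)) < 1e-4
    assert abs(vp - w(x)) < 1e-6
```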
106,560
<p>Mochizuki has recently announced a proof of the ABC conjecture. It is far too early to judge its correctness, but it builds on many years of work by him. Can someone briefly explain the philosophy behind his work and comment on why it might be expected to shed light on questions like the ABC conjecture?</p>
Minhyong Kim
1,826
<p>I would have preferred not to comment seriously on Mochizuki's work before much more thought had gone into the very basics, but judging from the internet activity, there appears to be much interest in this subject, especially from young people. It would obviously be very nice if they were to engage with this circle of ideas, regardless of the eventual status of the main result of interest. That is to say, the current sense of urgency to understand something seems generally a good thing. So I thought I'd give the flimsiest bit of introduction imaginable at this stage. On the other hand, as with many of my answers, there's the danger I'm just regurgitating common knowledge in a long-winded fashion, in which case, I apologize.</p> <p>For anyone who wants to really get going, I recommend as starting point some familiarity with two papers, 'The Hodge-Arakelov theory of elliptic curves (HAT)' and 'The Galois-theoretic Kodaira-Spencer morphism of an elliptic curve (GTKS).' [It has been noted here and there that the 'Survey of Hodge Arakelov Theory I,II' papers might be reasonable alternatives.][I've just examined them again, and they really might be the better way to begin.] These papers depart rather little from familiar language, are essential prerequisites for the current series on IUTT, and will take you a long way towards a grasp at least of the motivation behind Mochizuki's imposing collected works. This was the impression I had from conversations six years ago, and then Mochizuki himself just pointed me to page 10 of IUTT I, where exactly this is explained. The goal of the present answer is to decipher just a little bit those few paragraphs.</p> <p>The beginning of the investigation is indeed the function field case (over $\mathbb{C}$, for simplicity), where one is given a family $$f:E \rightarrow B$$ of elliptic curves over a compact base, best assumed to be semi-stable and non-isotrivial.
There is an exact sequence $$0\rightarrow \omega_E \rightarrow H^1_{DR}(E) \rightarrow H^1(O_E)\rightarrow0,$$ which is moved by the logarithmic Gauss-Manin connection of the family. (I hope I will be forgiven for using standard and non-optimal notation without explanation in this note.) That is to say, if $S\subset B$ is the finite set of images of the bad fibers, there is a log connection $$H^1_{DR}(E) \rightarrow H^1_{DR}(E) \otimes \Omega_B(S),$$ which <em>does not preserve</em> $\omega_E$. This fact is crucial, since it leads to an $O_B$-linear Kodaira-Spencer map $$KS:\omega \rightarrow H^1(O_E)\otimes \Omega_B(S),$$ and thence to a non-trivial map $$\omega_E^2\rightarrow \Omega_B(S).$$ From this, one easily deduces Szpiro's inequality: $$\deg (\omega_E) \leq (1/2)( 2g_B-2+|S|).$$ At the most simple-minded level, one could say that Mochizuki's programme has been concerned with replicating this argument over a number field $F$. Since it has to do with differentiation on $B$, which eventually turns into $O_F$, some philosophical connection to $\mathbb{F}_1$-theory begins to appear. I will carry on using the same notation as above, except now $B=Spec(O_F)$.</p> <p>A large part of HAT is exactly concerned with the set-up necessary to implement this idea, where, roughly speaking, the Galois action has to play the role of the GM connection. Obviously, $G_F$ doesn't act on $H^1_{DR}(E)$. But it does act on $H^1_{et}(\bar{E})$ with various coefficients. The comparison between these two structures is the subject of $p$-adic Hodge theory, which sadly works only over local fields rather than a global one. 
But Mochizuki noted long ago that something like $p$-adic Hodge theory should be a key ingredient because over $\mathbb{C}$, the comparison isomorphism $$H^1_{DR}(E)\simeq H^1(E(\mathbb{C}), \mathbb{Z})\otimes_{\mathbb{Z}} O_B$$ allows us to completely recover the GM connection by the condition that the topological cohomology generates the flat sections.</p> <p>In order to get a global arithmetic analogue, Mochizuki has to formulate a <em>discrete non-linear</em> version of the comparison isomorphism. What is non-linear? This is the replacement of $H^1_{DR}$ by the universal extension $$E^{\dagger}\rightarrow E,$$ (the moduli space of line bundles with flat connection on $E$) whose tangent space is $H^1_{DR}$ (considerations of this nature already come up in usual p-adic Hodge theory). What is discrete is the \'etale cohomology, which will just be $E[\ell]$ with global Galois action, where $\ell$ can eventually be large, on the order of the height of $E$ (that is $\deg (\omega_E)$). The comparison isomorphism in this context takes the following form: $$\Xi: A_{DR}=\Gamma(E^{\dagger}, L)^{&lt;\ell}\simeq L|E[\ell]\simeq (L|e_{E})\otimes O_{E[\ell]}.$$ (I apologize for using the notation $A_{DR}$ for the space that Mochizuki denotes by a calligraphic $H$. I can't seem to write calligraphic characters here.) Here, $L$ is a suitably chosen line bundle of degree $\ell$ on $E$, which can then be pulled back to $E^{\dagger}$. The inequality refers to the polynomial degree in the fiber direction of $E^{\dagger} \rightarrow E$. The isomorphism is effected via evaluation of sections at $$E^{\dagger}[\ell]\simeq E[\ell].$$ Finally, $$ L|E[\ell]\simeq (L|e_{E})\otimes O_{E[\ell]}$$ comes from Mumford's theory of theta functions. The interpretation of the statement is that it gives an isomorphism between the space of functions of some bounded fiber degree on non-linear De Rham cohomology and the space of functions on discrete \'etale cohomology. 
This kind of statement is entirely due to Mochizuki. One sometimes speaks of $p$-adic Hodge theory with finite coefficients, but that refers to a theory that is not only local, but deals with linear De Rham cohomology with finite coefficients.</p> <p>Now for some corrections: As stated, the isomorphism is not true, and must be modified at the places of bad reduction, the places dividing $\ell$, and the infinite places. This correction takes up a substantial portion of the HAT paper. That is, the isomorphism is generically true over $B$, but to make it true everywhere, the integral structures must be modified in subtle and highly interesting ways, while one must consider also a comparison of metrics, since these will obviously figure in an arithmetic analogue of Szpiro's conjecture. The correction at the finite bad places can be interpreted via coordinates near infinity on the moduli stack of elliptic curves as the subtle phenomenon that Mochizuki refers to as 'Gaussian poles' (in the coordinate $q$). Since this is a superficial introduction, suffice it to say for now that these Gaussian poles end up being a major obstruction in this portion of Mochizuki's theory.</p> <p>In spite of this, it is worthwhile giving at least a small flavor of Mochizuki's Galois-theoretic KS map. The point is that $A_{DR}$ has a Hodge filtration defined by</p> <p>$F^rA_{DR}= \Gamma(E^{\dagger}, L)^{ &lt; r} $</p> <p>(the direction is unconventional), and <em>this is moved around by the Galois action induced by the comparison isomorphism.</em> So one gets thereby a map $$G_F\rightarrow Fil (A_{DR})$$ into some space of filtrations on $A_{DR}$. This is, in essence, the Galois-theoretic KS map. That is, if we consider the equivalence over $\mathbb{C}$ of $\pi_1$-actions and connections, the usual KS map measures the extent to which the GM connection moves around the Hodge filtration.
Here, we are measuring the same kind of motion for the $G_F$-action.</p> <p>This is already very nice, but now comes a very important variant, essential for understanding the motivation behind the IUTT papers. In the paper GTKS, Mochizuki modified this map, producing instead a 'Lagrangian' version. That is, he assumed the existence of a Lagrangian Galois-stable subspace $G^{\mu}\subset E[l]$ giving rise to another isomorphism $$\Xi^{Lag}:A_{DR}^{H}\simeq L\otimes O_{G^{\mu}},$$ where $H$ is a Lagrangian complement to $G^{\mu}$, which I believe does not itself need to be Galois stable. $H$ is acting on the space of sections, again via Mumford's theory. This can be used to get another KS morphism to filtrations on $A_{DR}^{H}$. But the key point is that </p> <p><em>$\Xi^{Lag}$, in contrast to $\Xi$, is free of the Gaussian poles</em> </p> <p>via an argument I can't quite remember (if I ever knew).</p> <p>At this point, it might be reasonable to see if $\Xi^{Lag}$ contributes towards a version of Szpiro's inequality (after much work and interpretation), except for one small problem. A subspace like $G^{\mu}$ has no reason to exist in general. This is why GTKS is mostly about the universal elliptic curve over a formal completion near $\infty$ on the moduli stack of elliptic curves, where such a space does exist. What Mochizuki explains on IUTT page 10 is exactly that the scheme-theoretic motivation for IUG was to enable the move to a single elliptic curve over $B=Spec(O_F)$, via the intermediate case of an elliptic curve 'in general position'.</p> <p>To repeat:</p> <p><em>A good 'nonsingular' theory of the KS map over number fields requires a global Galois invariant Lagrangian subspace $G^{\mu}\subset E[l]$.</em></p> <p>One naive thought might just be to change base to the field generated by the $\ell$-torsion, except one would then lose the Galois action one was hoping to use.
(Remember that Szpiro's inequality is supposed to come from <em>moving</em> the Hodge filtration inside De Rham cohomology.) On the other hand, such a subspace does often exist <em>locally</em>, for example, at a place of bad reduction. So one might ask if there is a way to globally extend such local subspaces.</p> <p>It seems to me that this is one of the key things going on in the IUTT papers I-IV. As he says in loc. cit., he works with various categories of collections of local objects that <em>simulate</em> global objects. It is crucial in this process that many of the usual scheme-theoretic objects, local or global, are encoded as suitable categories with a rich and precise combinatorial structure. The details here get very complicated, the encoding of a scheme into an associated Galois category of finite \'etale covers being merely the trivial case. For example, when one would like to encode the Archimedean data coming from an arithmetic scheme (which again, will clearly be necessary for Szpiro's conjecture), the attempt to come up with a category of about the same order of complexity as a Galois category gives rise to the notion of a <em>Frobenioid</em>. Since these play quite a central role in Mochizuki's theory, I will quote briefly from his first Frobenioid paper:</p> <p>'Frobenioids provide a single framework [cf.
the notion of a "Galois category"; the role of monoids in log geometry] that allows one to capture the essential aspects of both the Galois and the divisor theory of number fields, on the one hand, and function fields, on the other, in such a way that one may continue to work with, for instance, global degrees of arithmetic line bundles on a number field, but which also exhibits the new phenomenon [not present in the classical theory of number fields] of a "Frobenius endomorphism" of the Frobenioid associated to a number field.'</p> <p>I believe the Frobenioid associated to a number field is something close to the finite étale covers of $Spec(O_F)$ (equipped with some log structure) together with metrized line bundles on them, although it's probably more complicated. The Frobenius endomorphism for a prime $p$ is then something like the functor that just raises line bundles to the $p$-th power. This is a functor that would come from a map of schemes if we were working in characteristic $p$, but obviously not in characteristic zero. But this is part of the reason to start encoding in categories: </p> <p><em>We get more morphisms and equivalences.</em></p> <p>Some of you will notice at this point the analogy to developments in algebraic geometry where varieties are encoded in categories, such as the derived category of coherent sheaves. There as well, one has reconstruction theorems of the Orlov type, as well as the phenomenon of non-geometric morphisms of the categories (say actions of braid groups). Non-geometric morphisms appear to be very important in Mochizuki's theory, such as the Frobenius above, which allows us to simulate characteristic $p$ geometry in characteristic zero. Another important illustrative example is a non-geometric isomorphism between Galois groups of local fields (which can't exist for global fields because of the Neukirch-Uchida theorem). 
In fact, I think Mochizuki was rather fond of Ihara's comment that the positive proof of the anabelian conjecture was somewhat of a disappointment, since it destroys the possibility that encoding curves into their fundamental groups will give rise to a richer category. Anyways, I believe the importance of non-geometric maps of categories encoding rather conventional objects is that </p> <p><em>they allow us to glue together several standard categories in nonstandard ways.</em></p> <p>Obviously, to play this game well, some things need to be encoded in rigid ways, while others should have more flexible encodings.</p> <p>For a very simple example that gives just a bare glimpse of the general theory, you might consider a category of pairs $$(G,F),$$ where $G$ is a profinite topological group of a certain type and $F$ is a filtration on $G$. It's possible to write down explicit conditions that ensure that $G$ is the Galois group of a local field and $F$ is its ramification filtration in the upper numbering (actually, now I think about it, I'm not sure about 'explicit conditions' for the filtration part, but anyways). Furthermore, it is a theorem of Mochizuki and Abrashkin that the functor that takes a local field to the corresponding pair is fully faithful. So now, you can consider triples $$(G,F_1, F_2),$$ where $G$ is a group and the $F_i$ are <em>two</em> filtrations of the right type. If $F_1=F_2$, then this 'is' just a local field. But now you can have objects with $F_1\neq F_2$, that correspond to strange amalgams of two local fields.</p> <p>As another example, one might take a usual global object, such as $$ (E, O_F, E[l], V)$$ (where $V$ denotes a collection of valuations of $F(E[l])$ that restrict bijectively to the valuations $V_0$ of $F$), and associate to it a collection of local categories indexed by $V_0$ (something like Frobenioids corresponding to the $E_v$ for $v\in V_0$). 
One can then try to glue them together in non-standard ways along sub-categories, after performing a number of non-standard transformations. My rough impression at the moment is that the 'Hodge theatres' arise in this fashion. [This is undoubtedly a gross oversimplification, which I will correct in later amendments.] You might further imagine that some construction of this sort will eventually retain the data necessary to get the height of $E$, but also have data corresponding to the $G^{\mu}$, necessary for the Lagrangian KS map. In any case, I hope you can appreciate that a good deal of 'dismantling' and 'reconstructing,' what Mochizuki calls <em>surgery</em>, will be necessary.</p> <p>I can't emphasize enough times that much of what I write is based on faulty memory and guesswork. At best, it is superficial, while at worst, it is (not even) wrong. [<s>In particular, I am no longer sure that the GTKS map is used in an entirely direct fashion.</s>] I have not yet done anything more with the current papers than give them a cursory glance. If I figure out more in the coming weeks, I will make corrections. But in the meanwhile, I do hope what I wrote here is mostly more helpful than misleading.</p> <p>Allow me to make one remark about set theory, about which I know next to nothing. Even with more straightforward papers in arithmetic geometry, the question sometimes arises about Grothendieck's universe axiom, mostly because universes appear to be used in SGA4. Usually, number-theorists (like me) neither understand nor care about such foundational matters, and questions about them are normally met with a shrug. The conventional wisdom of course is that any of the usual theorems and proofs involving Grothendieck cohomology theories or topoi do not actually rely on the existence of universes, except general laziness allows us to insert some reference that eventually follows a trail back to SGA4. However, this doesn't seem to be the case with Mochizuki's paper. 
That is, universes and interactions between them seem to be important actors rather than conveniences. How this is really brought about, and whether more than the universe axiom is necessary for the arguments, I really don't understand enough yet to say. In any case, for a number-theorist or an algebraic geometer, I would guess it's still prudent to acquire a reasonable feel for the 'usual' background and motivation (that is, HAT, GTKS, and anabelian things) before worrying too much about deeper issues of set theory.</p>
17,270
<p>I just joined MathSE and it's beautiful here, except for the fact that some unregistered users ask a question and never come back. Most of the time these questions are trivial, though they still consume answerers' (valuable) time, which never gets rewarded. I thought it was okay until I saw someone's profile with the following statistics: active $1$ year $7$ months, $0$ answers, $72$ questions, $0$ accept votes. Yes, I agree that the answers are upvoted in this case, but is it really okay to never accept any answers?</p>
Robert Soupe
149,436
<p>No, it's not okay, except under very special circumstances.</p> <p>These are the special circumstances I can think of, in rough order of likelihood:</p> <ul> <li>You ask a lot of questions but get no answers at all to any of them.</li> <li>You ask a lot of questions but get only one answer on each question. The one answer gets more than two upvotes each time, so you're not worried about giving the extra 15 points, plus you worry someone might come along with a better answer and you'd look rude if you unaccepted the answer you originally accepted.</li> <li>You ask a lot of questions and get more than one answer but all the answers casually invoke advanced and/or esoteric concepts beyond your comprehension.</li> <li>You get locked out of your account (e.g., you forget the password) before you've had a chance to think about which answer is best.</li> </ul> <p>I seriously doubt any of these apply to the fellow who's asked more than 70 questions in the past few months.</p>
164,746
<p>For example</p> <p><a href="https://i.stack.imgur.com/bFDdw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bFDdw.png" alt="enter image description here"></a></p> <p>But I want to strip those box edges except the bottom ones, like this</p> <p><a href="https://i.stack.imgur.com/DpzG1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DpzG1.png" alt="enter image description here"></a></p> <p>Note that there is still one redundant edge left, because erasing it would be quite ugly with my image editing tool. Also, I need the "y" axis labeled as in the 2D plot case.</p> <p>I searched the documentation and didn't find a way to control the box edges in detail.</p>
Alexei Boulbitch
788
<p>Try this:</p> <pre><code>Plot3D[-x^2, {x, -1, 1}, {y, -1, 1}, Boxed -&gt; False, ViewPoint -&gt; {-1.94, -1.94, 1.94}] </code></pre> <p><a href="https://i.stack.imgur.com/ghoCf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ghoCf.jpg" alt="enter image description here"></a></p> <p>Maybe it will do for you. </p> <p>If not, you may remove all automatic axes and the box and draw the axes by hand as follows, for example:</p> <pre><code> Show[{ Plot3D[-x^2, {x, -1, 1}, {y, -1, 1}, PlotRange -&gt; {{-1.2, 1.3}, {-1.3, 1}, {-1, 0.2}}, Axes -&gt; None, Boxed -&gt; False, ViewPoint -&gt; {-1.94, -1.94, 1.94}], Graphics3D[{Arrow[{{-1, -1, -1}, {1.1, -1, -1}}], Arrow[{{-1.1, 1, -1}, {-1.1, -1.2, -1}}]}] }] </code></pre> <p><a href="https://i.stack.imgur.com/7l5Z2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7l5Z2.jpg" alt="enter image description here"></a></p> <p>You will, however, also need to draw the ticks and numbers, which I did not do here. It can be done with the same <code>Graphics3D</code> statement.</p> <p>Have fun!</p>
4,624,695
<p>I have an exercise with text: With lines</p> <p><span class="math-container">$$ p_1 \ldots \frac{x-1}{2}=\frac{y+1}{-1}=\frac{z-4}{1}, \quad p_2 \ldots \frac{x-5}{2}=\frac{y-1}{-1}=\frac{z-2}{1} . $$</span> Determine one plane <span class="math-container">$\pi$</span> with respect to which the lines <span class="math-container">$p_1$</span> and <span class="math-container">$p_2$</span> are symmetric.</p> <p>How should I approach this? I can see that the lines pass through the points <span class="math-container">$A_1(1,-1,4)$</span> and <span class="math-container">$A_2(5,1,2)$</span> and share the direction vector <span class="math-container">$(2,-1,1)$</span>.</p>
Awnon Bhowmik
515,900
<p>By the King's property of integration, we have</p> <p><span class="math-container">\begin{align}I&amp;=\int_a^b f(x)\,\mathrm dx=\int_a^b f(a+b-x)\,\mathrm dx\end{align}</span></p> <p>So, we can write <span class="math-container">$$J=\int_0^{2a} f(x)\,\mathrm dx=\int_0^{2a} f(2a-x)\,\mathrm dx$$</span></p> <p>Adding both integrals <span class="math-container">\begin{align}2J&amp;=\int_0^{2a} f(x)\,\mathrm dx+\int_0^{2a} \underbrace{f(2a-x)}_{{-f(x)}}\,\mathrm dx\\2J&amp;=0\\J&amp;=0\\\int_0^{2a}f(x)\,\mathrm dx&amp;=0\end{align}</span></p> <p>Notice that if <span class="math-container">$a=0$</span>, <span class="math-container">$f(x)=-f(x)$</span> simply means, <span class="math-container">$f(x)$</span> is an odd function, i.e. it is antisymmetric about <span class="math-container">$x=0$</span>. In the case of this question, an origin shift of <span class="math-container">$x=a$</span> is required. <span class="math-container">$f(x)$</span> is still odd and everything else becomes obvious.</p>
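A quick numeric sanity check of this argument (a Python sketch; the function below is a hypothetical example that is antisymmetric about $x=a$ with $a=1$, so that $f(2a-x)=-f(x)$):

```python
import math

A = 1.0

def f(x, a=A):
    # antisymmetric about x = a: f(2a - x) = -f(x)
    return (x - a) ** 3 + math.sin(x - a)

def midpoint_integral(g, lo, hi, n=100_000):
    # composite midpoint rule
    h = (hi - lo) / n
    return h * math.fsum(g(lo + (i + 0.5) * h) for i in range(n))

print(midpoint_integral(f, 0.0, 2 * A))  # ≈ 0, as the symmetry argument predicts
```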
2,822,126
<blockquote> <p>If $0⩽x⩽y⩽z⩽w⩽u$ and $x+y+z+w+u=1$, prove$$ xw+wz+zy+yu+ux⩽\frac15. $$</p> </blockquote> <p>I have tried using AM-GM, rearrangement, and Cauchy-Schwarz inequalities, but I always end up with squared terms. For example, applying AM-GM to each pair directly gives$$ x^2+y^2+z^2+w^2+u^2 ⩾ xw + wz + zy + yu + ux, $$ but I cannot seem to continue from here or use $x + y + z + w + u = 1$. Other inequalities like Chebyshev's rely on the multiplied pairs to be in order from least to greatest or vice versa, so I am stuck here.</p>
HK Lee
37,116
<p>(1) Consider $$ \Delta =\bigg\{x=(x_i)\in \mathbb{R}^5\bigg| \sum_i\ x_i=1,\ x_i\geq 0\bigg\} $$</p> <p>If $\sigma x =\sum_i\ x_ie_{i+1}$ where $\{e_i\}$ is the canonical basis in $\mathbb{R}^5$, then define $$ f: \Delta\rightarrow \mathbb{R},\ f(x)=x\cdot \sigma (x) $$</p> <p>Define $H=\bigg\{ v=(v_i)\in \mathbb{R}^5\bigg| \sum_i\ v_i=0\bigg\}$. For $v\in H$ we have $$ \frac{d^2}{dt^2} \bigg|_{t=0}\ f(p+tv) = 2v\cdot \sigma (v) &lt;0 \ \ast $$</p> <p>(2) Proof of $\ast$: If $\pi : \mathbb{R}^5\rightarrow H$ is the orthogonal projection, then $\{\pi (e_i)\}$ forms the vertex set of a simplex $T$ in $H$. And $\sigma$ acts transitively on this set, and note that the vertices pairwise subtend an angle $&gt;\frac{\pi}{2}$.</p> <p>(3) So $f$ is strictly concave, so it has a global maximum. Here $$f'(p+tv)\bigg|_{t=0} = p\cdot \{ \sigma (v) +\sigma^{-1}(v)\} =0 $$ when $p=\frac{1}{5}\sum_{i}\ e_i$.</p> <p>Hence $f(p)=\frac{1}{5}$.</p>
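For what it's worth, the original constrained inequality can be probed numerically (a Python sketch using random sampling only; this is evidence, not a proof):

```python
import random

def cyclic_sum(x, y, z, w, u):
    # the left-hand side of the inequality in the question
    return x * w + w * z + z * y + y * u + u * x

random.seed(0)
worst = 0.0
for _ in range(100_000):
    pt = sorted(random.random() for _ in range(5))
    s = sum(pt)
    x, y, z, w, u = (v / s for v in pt)  # 0 <= x <= y <= z <= w <= u, sum = 1
    worst = max(worst, cyclic_sum(x, y, z, w, u))

print(worst)  # stays below 1/5 in the samples; equality holds at the center point
```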
3,105,184
<p>Three shooters shoot at the same target, each of them shoots just once. The three of them respectively hit the target with a probability of 60%, 70%, and 80%. What is the probability that the shooters will hit the target</p> <p>a) at least once</p> <p>b) at least twice</p> <hr> <p>I have an approach to this but I'm not sure if there's just a formula for this type of thing. For part (a) I was thinking of simply adding the probability of all of the valid scenarios. For instance <span class="math-container">$$[P(A)*(1-P(B))*(1-P(C))]+[(1-P(A))*P(B)*(1-P(C))]+...$$</span></p> <p>and so on until I cover all scenarios in which "at least one" hits. Is there a simpler way? (i assume this would also apply to part b)</p>
user247327
247,327
<p>Writing "H" for a hit and "M" for a miss, and writing hits and misses in the order "ABC", first tabulate the probability of each exact number of hits.</p> <p>Exactly one hit: HMM: (0.6)(0.3)(0.2) = 0.036; MHM: (0.4)(0.7)(0.2) = 0.056; MMH: (0.4)(0.3)(0.8) = 0.096. The probability of exactly one hit is 0.036 + 0.056 + 0.096 = 0.188.</p> <p>Exactly two hits: HHM: (0.6)(0.7)(0.2) = 0.084; HMH: (0.6)(0.3)(0.8) = 0.144; MHH: (0.4)(0.7)(0.8) = 0.224. The probability of exactly two hits is 0.084 + 0.144 + 0.224 = 0.452.</p> <p>All three hits: HHH: (0.6)(0.7)(0.8) = 0.336. No hits at all: MMM: (0.4)(0.3)(0.2) = 0.024.</p> <p>a) "At least once" is the complement of no hits: 1 - 0.024 = 0.976.</p> <p>b) "At least twice" means exactly two or all three: 0.452 + 0.336 = 0.788.</p>
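This bookkeeping can be automated by enumerating all eight hit/miss patterns (a Python sketch using the probabilities from the question):

```python
from itertools import product

p = [0.6, 0.7, 0.8]  # P(hit) for shooters A, B, C

dist = [0.0] * 4  # dist[k] = P(exactly k hits)
for pattern in product([0, 1], repeat=3):
    prob = 1.0
    for hit, p_hit in zip(pattern, p):
        prob *= p_hit if hit else 1.0 - p_hit
    dist[sum(pattern)] += prob

print(dist)               # ≈ [0.024, 0.188, 0.452, 0.336]
print(1 - dist[0])        # P(at least one hit)  ≈ 0.976
print(dist[2] + dist[3])  # P(at least two hits) ≈ 0.788
```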
3,105,184
<p>Three shooters shoot at the same target, each of them shoots just once. The three of them respectively hit the target with a probability of 60%, 70%, and 80%. What is the probability that the shooters will hit the target</p> <p>a) at least once</p> <p>b) at least twice</p> <hr> <p>I have an approach to this but I'm not sure if there's just a formula for this type of thing. For part (a) I was thinking of simply adding the probability of all of the valid scenarios. For instance <span class="math-container">$$[P(A)*(1-P(B))*(1-P(C))]+[(1-P(A))*P(B)*(1-P(C))]+...$$</span></p> <p>and so on until I cover all scenarios in which "at least one" hits. Is there a simpler way? (i assume this would also apply to part b)</p>
awkward
76,172
<p>Consider the polynomial <span class="math-container">$$P(x)=(0.4\, +0.6 x) (0.3\, +0.7 x) (0.2\, +0.8 x)$$</span> The neat thing about <span class="math-container">$P(x)$</span> is that when expanded it gives all you could possibly want to know about the probabilities of the number of shooters hitting the target: <span class="math-container">$$P(x) = 0.024\, +0.188 x+0.452 x^2+0.336 x^3$$</span> The coefficient of <span class="math-container">$x^n$</span> in <span class="math-container">$P(x)$</span> is the probability that exactly <span class="math-container">$n$</span> shooters will hit the target, for <span class="math-container">$n=0,1,2,3$</span>. I.e., the probability that zero shooters will hit the target is 0.024, that one shooter will hit the target is 0.188, that two shooters will hit the target is 0.452, and that three shooters will hit the target is 0.336.</p> <p>This method is especially attractive if you have a computer algebra system, but in small examples like this one, hand computation is also easy.</p>
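The expansion is just a convolution of coefficient lists, so it is easy to reproduce even without a computer algebra system (a Python sketch):

```python
def poly_mul(p, q):
    # multiply two polynomials given as coefficient lists, lowest degree first
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (0.4 + 0.6x)(0.3 + 0.7x)(0.2 + 0.8x)
P = poly_mul(poly_mul([0.4, 0.6], [0.3, 0.7]), [0.2, 0.8])
print(P)  # ≈ [0.024, 0.188, 0.452, 0.336]
```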
256,138
<p>I need to generate four positive random values in the range [.1, .6] with (at most) two significant digits to the right of the decimal, and which sum to exactly 1. Here are three attempts that do not work.</p> <pre><code>x = {.15, .35, .1, .4}; While[Total[x] != 1, x = Table[Round[RandomReal[{.1, .6}], .010], 4]]; x = {.25, .25, .25, .25}; While[Total[x] == 1, x = Table[Round[RandomReal[{.1, .6}], .010], 4]]; NestWhileList[Total[x], x = Table[Round[RandomReal[{.1, .6}], .010], 4], Plus @@ x == 1][[1]] </code></pre>
agei
82,027
<p>Strategy:</p> <ol> <li>Offset the target value range by 0.1</li> <li>Generate 4 random values in the range [0, 1]</li> <li>Normalize by the sum of the values generated</li> <li>Scale the values so they sum to 0.6, then add the 0.1 offset to each (the four offsets account for the remaining 0.4)</li> <li>Round to 2 digits (up to floating-point error)</li> </ol> <pre><code>Array(4).fill(0).map(X =&gt; Math.random()).map((X, ii, A) =&gt;
  0.1 + Math.round(100 * X / A.reduce((C, I) =&gt; C + I, 0.0) * 0.6) / 100)
</code></pre> <p>I doubt the distribution is truly uniform, and the final rounding can nudge the total slightly away from exactly 1, but it might help, depending on the application.</p>
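An alternative that sidesteps the rounding issue entirely is to sample in integer hundredths, so the four values sum to exactly 1 by construction (a Python sketch; it uses rejection sampling, so the distribution is again not claimed to be uniform):

```python
import random

def four_values(lo=10, hi=60, total=100):
    """Four integers in [lo, hi] summing to `total`, returned as hundredths."""
    while True:  # rejection sampling
        a, b, c = (random.randint(lo, hi) for _ in range(3))
        d = total - a - b - c
        if lo <= d <= hi:
            return [v / 100 for v in (a, b, c, d)]

random.seed(0)
print(four_values())  # four values in [0.1, 0.6] with exactly two decimals
```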
239,387
<p>Someone could explain how to build the smallest field containing to $\sqrt[3]{2}$.</p>
Asaf Karagila
622
<p>Haskell Curry gave an internal definition, but we can also define this field externally:</p> <p>$$F=\bigcap\{K\mid\Bbb Q\subseteq K, K\text{ is a field}, \sqrt[3]2\in K\}$$</p> <p>Namely, $F$ is the intersection of all the fields which extend $\Bbb Q$ and contain $\sqrt[3]2$. We know this collection is not empty because $\Bbb R$ is in this collection.</p> <p>Now we need to show that $F$ is a field, and that it extends $\Bbb Q$ and $\sqrt[3]2\in F$. The last two things are simple. Every rational number, as well as $\sqrt[3]2$, is in $K$ for every $K$ in the collection we intersected over. Therefore $\sqrt[3]2\in F$ and $\Bbb Q\subseteq F$.</p> <p>To see that $F$ is a field we see that if $x,y\in F$ then $x,y\in K$ for all $K$ in the collection, and therefore $x+y,x\cdot y\in K$ (for all $K$) and therefore $F$ is closed under addition and multiplication, and the same argument shows that additive inverses and multiplicative inverses for non-zero elements also exist in $F$.</p> <p>Therefore $F$ is a field and it extends the rationals and contains $\sqrt[3]2$. To see that it is minimal, observe that if $K$ is in the collection then $F\subseteq K$ by definition of the intersection. Therefore the minimality is assured.</p> <hr> <p>Note that this construction does not specifically depend on $\Bbb Q$, and we may replace it with an arbitrary field of our liking as the "base" field. If $\sqrt[3]2$ is already in that base field then the construction will not add new elements, for obvious reasons.</p>
2,133,870
<p>I'm having a little trouble solving this contour integral. I've found the singularities to be at 0, 2, and $1\pm i$. I'm not quite sure how to evaluate the integral when there are two poles on the contour instead of just one. Any help is appreciated.</p>
Felix Marin
85,343
<p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p> <blockquote> <p>Assuming the question asks for the <em>Principal Value</em> $\pars{~\mrm{P.V.}~}$:</p> </blockquote> <p>\begin{align} &amp;\mrm{P.V.}\int_{-\infty}^{\infty}{x\,\dd x \over \pars{x - 1}^{4} - 1} \\[5mm] \stackrel{\mrm{def.}}{=} &amp;\ \lim_{\epsilon \to 0^{+}}\bracks{% \int_{-\infty}^{-\epsilon}{x\,\dd x \over \pars{x - 1}^{4} - 1} + \int_{\epsilon}^{2 - \epsilon}{x\,\dd x \over \pars{x - 1}^{4} - 1} + \int_{2 + \epsilon}^{\infty}{x\,\dd x \over \pars{x - 1}^{4} - 1}} \\[5mm] = &amp;\ \lim_{\epsilon \to 0^{+}}\bracks{% \int_{-\infty}^{-1 - \epsilon}{x + 1 \over x^{4} - 1}\,\dd x + \int_{-1 + \epsilon}^{1 - \epsilon}{x + 1 \over x^{4} - 1}\,\dd x + \int_{1 + \epsilon}^{\infty}{x + 1 \over x^{4} - 1}\,\dd x} \\[5mm] = &amp;\ \lim_{\epsilon \to 0^{+}}\bracks{% \int_{-\infty}^{-1 - \epsilon}{\dd x \over \pars{x - 1}\pars{x^{2} + 1}}\,\dd x + 2\int_{0}^{1 - \epsilon}{\dd x \over x^{4} - 1} - \int_{0}^{1/\pars{1 + \epsilon}}{x \over \pars{x - 1}\pars{x^{2} + 1}}\,\dd x} \\[1cm] = &amp;\ -\int_{1}^{\infty}{\dd x \over \pars{x + 1}\pars{x^{2} + 1}} \\[5mm] + &amp;\ \lim_{\epsilon \to 0^{+}}\braces{% \int_{0}^{1 - \epsilon}\bracks{% {2 \over x^{4} - 1} - {x \over \pars{x - 1}\pars{x^{2} + 1}}}\,\dd x - \int_{1 - \epsilon}^{1/\pars{1 + \epsilon}} {x \over \pars{x - 
1}\pars{x^{2} + 1}}\,\dd x} \\[1cm] &amp; = -\int_{1}^{\infty}{\dd x \over \pars{x + 1}\pars{x^{2} + 1}} - \int_{0}^{1}{x + 2 \over \pars{x + 1}\pars{x^{2} + 1}}\,\dd x \\[3mm] - &amp; \ \underbrace{% \lim_{\epsilon \to 0^{+}}\int_{1 - \epsilon}^{1/\pars{1 + \epsilon}} {x \over \pars{x - 1}\pars{x^{2} + 1}}\,\dd x}_{\ds{=\ 0}} = \bbx{-\ds{\pi \over 2}} \end{align}</p> <blockquote> <p>I left to you the integral evaluations and the limit evaluation.</p> </blockquote>
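One simplification worth noting: since $(x-1)^4-1 = x(x-2)\left((x-1)^2+1\right)$, the numerator cancels the factor $x$, so the pole at $x=0$ is removable and only a simple pole at $x=2$ remains. A crude numeric principal value built on this factorization (a Python sketch, standard library only) lands on the boxed $-\pi/2$:

```python
import math
from fractions import Fraction

# check the factorization (x-1)^4 - 1 == x (x-2) ((x-1)^2 + 1) at enough points
for x in map(Fraction, range(-5, 6)):
    assert (x - 1) ** 4 - 1 == x * (x - 2) * ((x - 1) ** 2 + 1)

def f(x):
    # the integrand x / ((x-1)^4 - 1), with the removable pole at 0 cancelled
    return 1.0 / ((x - 2) * ((x - 1) ** 2 + 1))

def simpson(g, lo, hi, n):
    # composite Simpson rule, n even
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    s += 4 * sum(g(lo + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(lo + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def paired(t):
    # f(2+t) + f(2-t) simplifies to -4 / ((t^2+2t+2)(t^2-2t+2)), regular at t = 0,
    # which is exactly the symmetric pairing the principal value performs at x = 2
    return -4.0 / ((t * t + 2 * t + 2) * (t * t - 2 * t + 2))

R, d = 200.0, 0.5
pv = (simpson(f, -R, 2 - d, 200_000)
      + simpson(f, 2 + d, R, 200_000)
      + simpson(paired, 0.0, d, 2_000))
print(pv, -math.pi / 2)  # the two agree to several decimals
```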
382,603
<p>Let $F$ be a finite field. Prove that the following are equivalent:</p> <p>i) $A \subset B$ or $B \subset A$ for each two subgroups $A,B$ of $F^*$.</p> <p>ii) $\#F^*$ equals 2, 9, a Fermat-prime or $\#F^* -1$ equals a Mersenne prime.</p> <p>Any ideas for i => ii ? I don't know where to start, except for remarking that $\#F^*$ is cyclic. Thanks.</p>
DonAntonio
31,254
<p>Jyrki's comment is a huge hint: if $\;\,p,q\,\mid\; |\Bbb F^*|\;$ for two different primes $\,p,q\,$, then there exist cyclic subgroups $\,P,Q\,$ of order $\,p,q\,$ resp., and clearly neither $\,P\subset Q\,$ nor $\,Q\subset P\,$, so $\,|F^*|\,$ is divisible only by one single prime (or it is the trivial group... trivial case), say $\;p\,$.</p> <p>Since $\,|F|=r^n\,$, for some prime $\,r\,$, then $\,|F^*|=r^n-1=p^m\,$, yet</p> <p>$$r^n-1=(r-1)(r^{n-1}+r^{n-2}+\ldots+r+1),$$</p> <p>so in particular $\,p\,\mid\, r-1\,$, and the only numbers that fulfill this are...</p>
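Since $F^*$ is cyclic, condition i) says that the subgroup lattice of a cyclic group of order $n=\#F^*$ is a chain, and this happens exactly when $n$ is a prime power; that equivalence is easy to probe numerically (a Python sketch):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def subgroups_form_chain(n):
    # subgroups of a cyclic group of order n correspond to divisors of n,
    # and containment of subgroups corresponds to divisibility of their orders
    ds = divisors(n)
    return all(a % b == 0 or b % a == 0 for a in ds for b in ds)

def is_prime_power(n):
    if n == 1:
        return True
    p = min(d for d in range(2, n + 1) if n % d == 0)  # least prime factor
    while n % p == 0:
        n //= p
    return n == 1

assert all(subgroups_form_chain(n) == is_prime_power(n) for n in range(1, 500))
print("chain condition <=> prime power order, checked for n < 500")
```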
931,748
<p>I do not understand this solution or this formula, or why we are using (1+1)^n... I need some help to get an idea of what is going on here. Thanks!</p> <p><img src="https://i.stack.imgur.com/S9IL5.png" alt="enter image description here"></p>
Zubin Mukerjee
111,946
<p>Suppose $2\mid (x^2-1)$. Then $2\mid (x-1)(x+1)$. </p> <p>By <a href="http://en.wikipedia.org/wiki/Euclid&#39;s_lemma" rel="nofollow">Euclid's lemma</a>, either $2\mid (x+1)$ <a href="http://en.wiktionary.org/wiki/inclusive_or" rel="nofollow">or</a> $2\mid (x-1)$. </p> <p>But $x+1$ is even <a href="http://en.wikipedia.org/wiki/Logical_biconditional" rel="nofollow">if and only if</a> $x-1$ is even. Thus, both must be even. </p> <p>The product of two even numbers must be divisible by four, so $$4\mid (x^2-1)$$</p>
931,748
<p>I do not understand this solution or this formula, or why we are using (1+1)^n... I need some help to get an idea of what is going on here. Thanks!</p> <p><img src="https://i.stack.imgur.com/S9IL5.png" alt="enter image description here"></p>
Deepak
151,732
<p>$x-1$ and $x+1$ are separated by $2$ so they are either both odd or both even. The first case means that the product will not be divisible by two, so the second case must hold true.</p> <p>Hence $2|(x-1)(x+1) \implies 2|(x+1)$ and $2|(x-1)$ so $4|(x-1)(x+1)$.</p>
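The conclusion is easy to spot-check, since $2\mid x^2-1$ exactly when $x$ is odd (a quick Python check):

```python
# (x-1) and (x+1) are consecutive even numbers when x is odd, so 4 | x^2 - 1
assert all((x * x - 1) % 4 == 0 for x in range(1, 10_001, 2))
print("4 divides x^2 - 1 for every odd x up to 10000")
```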
3,740,647
<p>I want to load an external Magma file within another Magma file. (Both files are saved in the same directory.) I want to be able to quickly change which external file is being loaded, ideally at the beginning of the file making the load call, so that I can easily run the same code with various inputs.</p> <p>(The external file contains computations, whose ultimate result is used by the file making the load call. These computations vary depending on the object being analyzed.)</p> <p>I tried creating a string-type variable that stores the external file's name, then using Magma's <code>load</code> command with this variable. For example,</p> <pre><code>fileName := &quot;externalMagmaFile.txt&quot;; load fileName; </code></pre> <p>However, this results in the error</p> <blockquote> <p>User error: Could not open file &quot;fileName&quot; (No such file or directory)</p> </blockquote> <p>The same error results when I include double quotes around the external file name:</p> <pre><code>fileName := &quot;\&quot;externalMagmaFile.txt\&quot;&quot;; load fileName; </code></pre> <p>It seems that, for the <code>load</code> command, Magma interprets the variable name as the string specifying the file name, instead of first evaluating the variable, then executing <code>load</code>.</p> <p>(I am using Magma V2.23-1 on MacOS Version 10.15.5.)</p> <p>Can I use a variable with the <code>load</code> command in Magma? If yes, how?</p>
ev.gal
682,411
<p>Here is sample code illustrating the approach proposed by @DavidCraven. (The file <code>realFile.txt</code> contains the Magma commands we desire to load.)</p> <pre><code>fileNameReal := &quot;realFile.txt&quot;; PrintFile(&quot;dummyLoadFile.txt&quot;,&quot;load \&quot;&quot; cat fileNameReal cat &quot;\&quot;;&quot; : Overwrite := true); load &quot;dummyLoadFile.txt&quot;; </code></pre>
1,465,989
<p>I'm trying to find the column space of $\begin{bmatrix}a&amp;0\\b&amp;0\\c&amp;0\end{bmatrix}$, which I think is $span\left(~\begin{bmatrix}a\\b\\c\end{bmatrix}~\begin{bmatrix}0\\0\\0\end{bmatrix}~\right)$. Since by definition a span needs to include the null vector, this is redundant. Would it also be correct to say that this is redundant because $\begin{bmatrix}0\\0\\0\end{bmatrix}$ is linearly dependent on $\begin{bmatrix}a\\b\\c\end{bmatrix}$ (can multiply it by a scalar of 0 to reach the null vector)? Or is that a misapplication of the concept of linear dependence?</p>
Community
-1
<p>If $\{\vec v_1,\vec v_2, \cdots, \vec v_n\}$ is linearly independent, then $c_1 \vec v_1+c_2 \vec v_2+\cdots+c_n \vec v_n= \vec 0$ iff $c_1=c_2=\cdots=c_n=0$. Considering $\vec v_n=\vec 0$, we can get $c_1 \vec v_1+c_2 \vec v_2+\cdots+c_n \vec v_n= \vec 0$ by setting $c_1=c_2=\cdots=c_{n-1}=0$ and taking any $c_n \neq 0$. So by definition, any set of vectors that contains the zero vector is linearly dependent.</p>
1,465,989
<p>I'm trying to find the column space of $\begin{bmatrix}a&amp;0\\b&amp;0\\c&amp;0\end{bmatrix}$, which I think is $span\left(~\begin{bmatrix}a\\b\\c\end{bmatrix}~\begin{bmatrix}0\\0\\0\end{bmatrix}~\right)$. Since by definition a span needs to include the null vector, this is redundant. Would it also be correct to say that this is redundant because $\begin{bmatrix}0\\0\\0\end{bmatrix}$ is linearly dependent on $\begin{bmatrix}a\\b\\c\end{bmatrix}$ (can multiply it by a scalar of 0 to reach the null vector)? Or is that a misapplication of the concept of linear dependence?</p>
user514425
514,425
<p>If <span class="math-container">$S=\{\vec 0\}$</span>, we can show that it is linearly dependent. Suppose, for contradiction, that <span class="math-container">$S$</span> were linearly independent; then <span class="math-container">$c\vec 0=\vec 0$</span> would force <span class="math-container">$c=0$</span>. But <span class="math-container">$c\vec 0=\vec 0$</span> holds for every scalar <span class="math-container">$c$</span>, including <span class="math-container">$c\neq 0$</span>, so <span class="math-container">$S$</span> is linearly dependent.</p>
2,983,370
<p><strong>question:</strong></p> <p>The maximum value of <span class="math-container">$\theta$</span> until which the approximation <span class="math-container">$\sin\theta\approx \theta$</span> holds to within <span class="math-container">$10\%$</span> error is </p> <p><span class="math-container">$(a)10^{\circ}$</span></p> <p><span class="math-container">$(b)18^{\circ}$</span></p> <p><span class="math-container">$(c)50^{\circ}$</span></p> <p><span class="math-container">$(d)90^{\circ}$</span></p> <p><strong>my attempt:</strong></p> <p>I calculated the percentage error for each of the 4 options and got <span class="math-container">$\theta = 50^{\circ}$</span></p> <p>but is there any quick method to arrive at the answer without verifying all the options one by one? Because it is an MCQ, there will be very little time available per question.</p> <p>Thank you.</p>
Hagen von Eitzen
39,174
<p>The next best approximation is <span class="math-container">$$\sin x \approx x-\frac 16x^3 =x\cdot \left(1-\frac16x^2\right)$$</span> (and this is an underestimation), so we look for <span class="math-container">$\frac16x^2\approx 0.1$</span>, <span class="math-container">$x^2\approx 0.6\approx 0.64=0.8^2$</span>. Finally, <span class="math-container">$0.8\,\text{rad}=0.8\cdot\frac{180}\pi\,^\circ$</span>, which from the available choices seems to be closest to <span class="math-container">$50^\circ$</span>.</p>
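The estimate can be checked by solving $\sin x = 0.9x$ directly, i.e. finding the angle where the relative error $(x-\sin x)/x$ reaches $10\%$ (a quick Python bisection sketch):

```python
import math

def rel_error(x):
    # relative error of the approximation sin(x) ≈ x, measured against x
    return (x - math.sin(x)) / x

lo, hi = 0.1, 1.5  # rel_error is increasing here: below 10% at 0.1, above at 1.5
for _ in range(60):
    mid = (lo + hi) / 2
    if rel_error(mid) < 0.1:
        lo = mid
    else:
        hi = mid

print(math.degrees(lo))  # ≈ 45°, which indeed sits closest to the 50° option
```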
4,283,063
<p>For positive real numbers <span class="math-container">$a,b,c$</span> prove that: <span class="math-container">$a^{b+c}b^{c+a}c^{a+b} ≤ (a^ab^bc^c)^2$</span></p> <p>My working</p> <p><span class="math-container">$c^c≥b^c≥b^a$</span></p> <p><span class="math-container">$(c^c)^2 ≥ b^{a+c}$</span></p> <p>Similarly <span class="math-container">$(c^c)^2≥ a^{c+b}$</span> and <span class="math-container">$(c^c)^2≥c^{a+b}$</span></p> <p>Therefore <span class="math-container">$(c^c)^6 ≥b^{a+c}c^{a+b}a^{c+b}$</span></p> <p>But I do not know how to use it.</p>
Ma Ye
977,864
<p>The notation in the question and the description do not agree, and I am using the latter.</p> <p>Consider <span class="math-container">$f(x)=e^x-1$</span>, <span class="math-container">$h(x)=x$</span>. Let <span class="math-container">$g(x)=x$</span> if <span class="math-container">$x$</span> is rational, and <span class="math-container">$g(x)=\frac{x}{2}$</span> if <span class="math-container">$x$</span> is irrational.</p> <p>We can see that the conditions are satisfied. Notice that there are no rational points on the curve <span class="math-container">$y=e^x-1$</span> except <span class="math-container">$(0,0)$</span>.</p> <p>Now consider <span class="math-container">$\lim_{x\rightarrow 0}\frac{g(f(x))}{g(h(x))}$</span>. We shall see that the ratio tends to <span class="math-container">$\frac{1}{2}$</span> along rational <span class="math-container">$x$</span> and to <span class="math-container">$1$</span> along irrational <span class="math-container">$x$</span> (with <span class="math-container">$f(x)$</span> irrational), so the limit does not exist.</p>
4,508,840
<p>I need to find the summation <span class="math-container">$$S=\sum_{r=0}^{1010} \binom{1010}r \sum_{k=2r+1}^{2021}\binom{2021}k$$</span></p> <p>I tried various things like replacing <span class="math-container">$k$</span> by <span class="math-container">$2021-k$</span> and trying to add the 2 summations to a pattern but was unable to find a solution. For more insights into the question, this was essential to solve a probability question wherein there were 2 players, A and B. A rolls a dice <span class="math-container">$2021$</span> times, and B rolls it <span class="math-container">$1010$</span> times. We had to find the probability of A having number of odd numbers more than twice of B. So if B had <span class="math-container">$r$</span> odd numbers, A could have odd numbers from <span class="math-container">$2r+1$</span> to <span class="math-container">$2021$</span>, hence the summation. I can get the required probability by dividing this by <span class="math-container">$2^{2021+1010}$</span>.</p>
drhab
75,923
<p>Finding that summation...., too lazy for that. But solving that puzzle you gave in your description is tempting enough to give it a try.</p> <hr /> <p>First a modulation.</p> <blockquote> <p>Let it be that <span class="math-container">$B$</span> throws <span class="math-container">$n$</span> dice and <span class="math-container">$A$</span> throws <span class="math-container">$2n$</span> dice.</p> </blockquote> <p>If <span class="math-container">$O_B$</span> and <span class="math-container">$O_A$</span> denote the number of <strong>odds</strong> thrown by <span class="math-container">$B$</span> and <span class="math-container">$A$</span> respectively then it can be shown that: <span class="math-container">$$P(O_A&gt;2O_B)=P(O_A&lt;2O_B)\tag1$$</span> I will prove this assertion below but let us first look at the consequences of <span class="math-container">$(1)$</span>.</p> <p>Suppose that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> want to avoid a draw and agree that after the <span class="math-container">$3n$</span> throws player <span class="math-container">$A$</span> always throws a die that is decisive in the case that there is indeed a draw. 
It has no impact if there is no draw but if there is one then <span class="math-container">$A$</span> wins iff this final throw results in an odd number.</p> <p>If <span class="math-container">$(1)$</span> is true then evidently this agreement provides equal chances for <span class="math-container">$A$</span> and <span class="math-container">$B$</span> to win.</p> <p>But this agreement also boils down to:</p> <blockquote> <p><span class="math-container">$B$</span> throws <span class="math-container">$n$</span> dice and <span class="math-container">$A$</span> throws <span class="math-container">$2n+1$</span> dice and <span class="math-container">$A$</span> wins if he throws more than twice as many odds as <span class="math-container">$B$</span>.</p> </blockquote> <p>which is exactly the game you describe in your question.</p> <p>So if <span class="math-container">$(1)$</span> is correct then we can conclude that <span class="math-container">$A$</span> will win the game with probability <span class="math-container">$\frac12$</span> or equivalently that:<span class="math-container">$$\sum_{r=0}^n\binom{n}r\sum_{k=2r+1}^{2n+1}\binom{2n+1}k=2^{3n}$$</span></p> <hr /> <p>Now a proof of <span class="math-container">$(1)$</span>, which is surprisingly simple.</p> <p>Let <span class="math-container">$E_B$</span> and <span class="math-container">$E_A$</span> denote the number of <strong>evens</strong> thrown by <span class="math-container">$B$</span> and <span class="math-container">$A$</span> respectively.</p> <p>Then by symmetry we find:<span class="math-container">$$P(O_A&gt;2O_B)=P(E_A&gt;2E_B)=P\left(2n-O_A&gt;2(n-O_B)\right)=P(O_A&lt;2O_B)$$</span>and we are done.</p>
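<p>The boxed identity at the end is easy to confirm by brute force; a quick computational sanity check (not part of the original answer):</p>

```python
from math import comb

def double_sum(n):
    # S = sum_{r=0}^{n} C(n,r) * sum_{k=2r+1}^{2n+1} C(2n+1,k), via suffix sums
    row = [comb(2 * n + 1, k) for k in range(2 * n + 2)]
    suffix = [0] * (2 * n + 3)
    for k in range(2 * n + 1, -1, -1):
        suffix[k] = suffix[k + 1] + row[k]      # suffix[k] = sum_{j >= k} C(2n+1, j)
    return sum(comb(n, r) * suffix[2 * r + 1] for r in range(n + 1))

for n in range(1, 9):
    assert double_sum(n) == 2 ** (3 * n)        # the claimed closed form
print(double_sum(1010) == 2 ** 3030)            # the original problem: prints True
```

<p>The suffix sums keep the <span class="math-container">$n=1010$</span> case fast, since each binomial coefficient of the row <span class="math-container">$\binom{2n+1}{k}$</span> is computed only once.</p>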
4,068,529
<p>I need to find <span class="math-container">$63^{63^{63}} \bmod 100$</span>. <br /> This is what I've got so far: <br /> Since <span class="math-container">$\gcd(63,100)=1$</span> we can use Euler's theorem. We have <span class="math-container">$\phi (100)=40$</span> so <span class="math-container">$63^{40} \equiv 1 \mod 100$</span> <br /> Again <span class="math-container">$\gcd(16,100)=1$</span> and <span class="math-container">$\phi (40)=16$</span>, that is <span class="math-container">$63^{16} \equiv 1 \mod 40$</span><br /> Using this I got that <span class="math-container">$63^{63} \equiv 7 \mod 40 $</span> which led me to <span class="math-container">$63^{63^{63}} \equiv 63^7 \mod 100$</span><br /> I'm stuck here and don't know what to do next, what could I do now?</p>
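<p>The reduction sketched in the question can be verified directly with Python's three-argument <code>pow</code> (a side check, not part of the original post):</p>

```python
# Euler: gcd(63, 100) = 1 and phi(100) = 40, so the exponent only matters mod 40.
inner = pow(63, 63, 40)
assert inner == 7                      # 63^63 ≡ 7 (mod 40), as found in the question
answer = pow(63, inner, 100)           # hence 63^(63^63) ≡ 63^7 (mod 100)
# Cross-check: 63^63 is only ~114 digits, so the full exponent is still tractable.
assert answer == pow(63, 63 ** 63, 100)
print(answer)                          # prints 67
```

<p>So the remaining step is just computing <span class="math-container">$63^7 \bmod 100$</span> by repeated squaring, which gives <span class="math-container">$67$</span>.</p>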
tommik
791,458
<p>Using properties of Gaussians, Expectation and variance you get that</p> <p><span class="math-container">$$Y_2\sim N(\alpha+\beta\mu;\beta^2\sigma^2+v^2)$$</span></p>
4,068,529
<p>I need to find <span class="math-container">$63^{63^{63}} \bmod 100$</span>. <br /> This is what I've got so far: <br /> Since <span class="math-container">$\gcd(63,100)=1$</span> we can use Euler's theorem. We have <span class="math-container">$\phi (100)=40$</span> so <span class="math-container">$63^{40} \equiv 1 \mod 100$</span> <br /> Again <span class="math-container">$\gcd(16,100)=1$</span> and <span class="math-container">$\phi (40)=16$</span>, that is <span class="math-container">$63^{16} \equiv 1 \mod 40$</span><br /> Using this I got that <span class="math-container">$63^{63} \equiv 7 \mod 40 $</span> which led me to <span class="math-container">$63^{63^{63}} \equiv 63^7 \mod 100$</span><br /> I'm stuck here and don't know what to do next, what could I do now?</p>
RobertTheTutor
883,326
<p>If you like, define <span class="math-container">$Y_3 = \alpha + \beta Y_1$</span>. Use properties of expectation. If <span class="math-container">$E[X] = \mu,$</span> then because expectation is a linear sum, <span class="math-container">$E[bX] = b\mu$</span>, and <span class="math-container">$E[X+c] = \mu + c$</span> For example, if you double every number, the average will double, and if you add <span class="math-container">$5$</span> to every single number, the average goes up by <span class="math-container">$5$</span> as well. Using those, you can find that <span class="math-container">$Y_3$</span> is distributed <span class="math-container">$N(\alpha + \beta \mu, \beta^2 \sigma^2$</span>)</p> <p>Now you have a sum <span class="math-container">$Y_2 = Y_3 + U$</span> of independent variables. The mean of a sum is the sum of the means. The variance of the sum is the sum of the variances for independent variables. In formulas, <span class="math-container">$$E[X+Y] = E[X] + E[Y]$$</span> and <span class="math-container">$$V[X+Y] = V[X] + V[Y] + Cov[X,Y]$$</span>. and for independent variables, the covariance is zero. Remember that standard deviation is the square root of the variance.</p>
4,068,529
<p>I need to find <span class="math-container">$63^{63^{63}} \bmod 100$</span>. <br /> This is what I've got so far: <br /> Since <span class="math-container">$\gcd(63,100)=1$</span> we can use Euler's theorem. We have <span class="math-container">$\phi (100)=40$</span> so <span class="math-container">$63^{40} \equiv 1 \mod 100$</span> <br /> Again <span class="math-container">$\gcd(16,100)=1$</span> and <span class="math-container">$\phi (40)=16$</span>, that is <span class="math-container">$63^{16} \equiv 1 \mod 40$</span><br /> Using this I got that <span class="math-container">$63^{63} \equiv 7 \mod 40 $</span> which led me to <span class="math-container">$63^{63^{63}} \equiv 63^7 \mod 100$</span><br /> I'm stuck here and don't know what to do next, what could I do now?</p>
chuck
244,783
<p>The CDF of random variable <span class="math-container">$Y_2$</span> is given by <span class="math-container">$$ F_{Y_2}(y_2)\equiv P(Y_2\leq y_2) $$</span> Therefore, <span class="math-container">$$ P(Y_2\leq y_2)=Pr(\alpha+\beta Y_1+U\leq y_2)=Pr(\beta Y_1+U\leq y_2-\alpha)=F_{\beta Y_1+U}(y_2-\alpha). $$</span> The sum of normal distributions is normal. Therefore, <span class="math-container">$\beta Y_1+U\sim N(\beta \mu,\beta^2\sigma^2+v^2)$</span>.</p>
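<p>A seeded Monte Carlo run supports the stated distribution <span class="math-container">$N(\alpha+\beta\mu,\ \beta^2\sigma^2+v^2)$</span>; the parameter values below are arbitrary illustrative choices, not from the original thread:</p>

```python
import random
import statistics

random.seed(0)
alpha, beta, mu, sigma, v = 2.0, 3.0, 1.0, 0.5, 2.0
n = 200_000
# Y2 = alpha + beta*Y1 + U with Y1 ~ N(mu, sigma^2) and U ~ N(0, v^2) independent
y2 = [alpha + beta * random.gauss(mu, sigma) + random.gauss(0.0, v) for _ in range(n)]

mean_hat = statistics.fmean(y2)
var_hat = statistics.pvariance(y2)
print(mean_hat, var_hat)   # close to alpha + beta*mu = 5.0 and beta^2*sigma^2 + v^2 = 6.25
assert abs(mean_hat - (alpha + beta * mu)) < 0.05
assert abs(var_hat - (beta ** 2 * sigma ** 2 + v ** 2)) < 0.15
```

<p>The sample mean and population variance land on the predicted values up to sampling noise.</p>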
598,838
<p><span class="math-container">$11$</span> out of <span class="math-container">$36$</span>? I got this by writing down the number of possible outcomes (<span class="math-container">$36$</span>) and then counting how many of the pairs had a <span class="math-container">$6$</span> in them: <span class="math-container">$(1,6)$</span>, <span class="math-container">$(2,6)$</span>, <span class="math-container">$(3,6)$</span>, <span class="math-container">$(4,6)$</span>, <span class="math-container">$(5,6)$</span>, <span class="math-container">$(6,6)$</span>, <span class="math-container">$(6,5)$</span>, <span class="math-container">$(6,4)$</span>, <span class="math-container">$(6,3)$</span>, <span class="math-container">$(6,2)$</span>, <span class="math-container">$(6,1)$</span>. Is this correct?</p>
Newb
98,587
<p>That's right. The easier approach would be to calculate the chance of <em>not</em> rolling a $6$ - that's just $\frac56$ for the first die, and $\frac56$ for the second die, so by the product rule (as the events are independent), the probability is $\frac56 \cdot \frac56 = \frac{25}{36}$. </p> <p>Then the probability of rolling a $6$ is $1$ minus the probability of <em>not</em> rolling a $6$, which we just calculated: so it is $1-\frac{25}{36}=\frac{11}{36}$.</p> <p>If calculating the probability of an event <em>not</em> occurring to calculate the probability of it occurring feels weird to you, you might want to read up on <strong>complementary events</strong>. The idea here is that the product rule can sometimes make probabilities smaller when that wouldn't make any sense --- in our example, if you had just multiplied $\frac16 \cdot \frac16 = \frac1{36}$, then that would've been obviously wrong. This is because when we work with probability, we're dealing with quantities in the $[0,1]$ interval, so multiplication usually makes things smaller, rather than larger.</p>
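<p>Both the direct count of $11$ and the complement calculation can be confirmed by enumerating all $36$ outcomes (a quick check, not part of the original answer):</p>

```python
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))    # all 36 ordered pairs of rolls
with_six = [p for p in outcomes if 6 in p]
no_six = [p for p in outcomes if 6 not in p]
print(len(outcomes), len(with_six))                # 36 11
assert len(with_six) == 11 and len(no_six) == 25
# Complement rule: P(at least one 6) = 1 - (5/6)^2 = 11/36
assert Fraction(len(with_six), 36) == 1 - Fraction(5, 6) ** 2
```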
815,963
<p>I am trying to understand how to do proof by induction for inequalities. The step that I don't fully understand is making an assumption that n=k+1. For equations it is simple. For example:</p> <blockquote> <p><strong>Prove that 1+2+3+...+n = $ \frac {n(n+1)}{2} $ is valid for $ i \ge 1 $</strong> </p> </blockquote> <p>1) Base case for n=1. (...)</p> <p>2) Assume that equation is true for n=k. (...)</p> <p>3) Proof for n=k+1.<br> 1+2+3+...+k+(k+1) = $ \frac {(k+1)[(k+1)+1]}{2} $</p> <p>So we add +1 to k on both sides.</p> <p>For inequalities:<br> <strong>Prove that $ 4n \lt 2^n $ is valid for $ n \ge 5 $</strong></p> <p>1) Base case. (...)</p> <p>2) n=k assumption. (...)</p> <p>3) n=k+1 proof. </p> <p>How I would do this(add 1 to k on both sides):<br> $ 4(k+1) &lt; 2^{k+1} $</p> <p>How it's done in tutorials:<br> $ 4(k+1) &lt; 2^k + 4 $<br> $ 4k+4 &lt; 2^k + 4 $</p> <p>Why they add extra 4 to RHS instead of increasing k by 1?</p>
amWhy
9,003
<p>I think you've got the logic wrong with respect to proof by induction. Let's say we are trying to prove a proposition $P(n)$, which might be, for example, $$P(n): \;1 + 2 + 3 +\cdots + n = \frac{n(n+1)}{2}, \;n\geq 1$$</p> <ul> <li><p>Base Case: Prove that $P(1)$ is true. That is, we test whether the proposition holds for the smallest applicable value of $n$, which in this case is $n = 1.$ This certainly holds, since $1 = \frac{1\cdot 2}{2}$.</p></li> <li><p>Inductive hypothesis (IH): We <em>assume</em> $P(k)$ holds, i.e. that the proposition holds for $n = k$. So we assume $$P(k):\;1 + 2 + \cdots + k = \dfrac{k(k+1)}{2}$$</p></li> <li><p>Inductive Step: Now, given that we have assumed that $P(k)$ is true, we need to <em>prove</em> that $P(k)$ <em>implies</em> $P(k+1)$. $P(k+1)$ is not an assumption. We need to prove that it follows from the Inductive Hypothesis (IH). So what we <em>want to prove</em> is that $$1 + 2 + \cdots + k + (k+1) = \dfrac{(k+1)((k+1) + 1)}2 = \dfrac{(k+1)(k+2)}{2}$$We can <em>prove</em> this by starting with what we know and what we have already assumed.$$\color{blue}{1 + 2 + \cdots + k} + (k+1) \overset{IH} = \color{blue}{\frac{k(k+1)}2} + k+1$$ Now, with a little algebra, we can express the right-hand side in the desired form.</p></li> </ul> <hr> <p>Now, for the inequality:</p> <p>Your base case is $n = 5$. Why?</p> <p>The inductive hypothesis that we <em>assume</em> for $n = k$ is $4k \lt 2^k$. </p> <p>Now, assuming $4k\lt 2^k$, we need to <em>prove</em> that $4(k+1) \lt 2^{k+1}$. We can start from the left-hand side and use the inductive hypothesis:</p> <p>$$4(k + 1) = \color{blue}{4k} + 4 \lt \color{blue}{2^k} + 4 = 2^k + 2^2 \leq 2\cdot 2^k = 2^{k+1}$$ as desired. The last step on the right holds for $k\geq 2$, which is fine, since we are proving the proposition for $n \geq 5$.</p>
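<p>Induction is the proof, but both claims (and the key estimate $2^k + 4 \leq 2\cdot 2^k$ for $k \geq 2$) are easy to spot-check numerically; a short sketch, not part of the original answer:</p>

```python
# Check the sum formula for n >= 1 and the inequality 4n < 2^n for n >= 5.
for n in range(1, 200):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2

for n in range(5, 200):
    assert 4 * n < 2 ** n

# The step used at the end of the inequality proof: 2^k + 4 <= 2*2^k whenever k >= 2.
for k in range(2, 200):
    assert 2 ** k + 4 <= 2 * 2 ** k
```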
423,158
<p>Maybe I'm too naive in asking this question, but I think it's important and I'd like to know your answer. So, for example I always see that people just write something like "let $f:R\times R\longrightarrow R$ such that $f(x,y)=x^2y+1$", or for example "Let $g:A\longrightarrow A\times \mathbb{N}$ such that for every function $f:\mathbb{N}\longrightarrow A$ we have $g(f(n))=(f(n),n)$)". They never justify the existence, but rather they assume the function to exist. So my question is why? How do you justify the existence? </p>
Dylan Yott
62,865
<p>For your first example, such a function exists because you've explicitly told us what $f(x,y)$ should be for each $x$ and $y$. On the other hand, $g$ may or may not exist. It isn't immediately clear what to make $g(x)$ for a given $x$.</p>
423,158
<p>Maybe I'm too naive in asking this question, but I think it's important and I'd like to know your answer. So, for example I always see that people just write something like "let $f:R\times R\longrightarrow R$ such that $f(x,y)=x^2y+1$", or for example "Let $g:A\longrightarrow A\times \mathbb{N}$ such that for every function $f:\mathbb{N}\longrightarrow A$ we have $g(f(n))=(f(n),n)$)". They never justify the existence, but rather they assume the function to exist. So my question is why? How do you justify the existence? </p>
Ittay Weiss
30,953
<p>This is a good question. To properly address it one needs to be very precise about what a function is and what it means to have defined/constructed one. So, in the modern era we define a function to be a triple $(A,f,B)$ where $A,B$ are sets and $f\subseteq A\times B$ is a relation from $A$ to $B$. We write $f(x)=y$ as shorthand for $(x,y)\in f$. The condition the relation needs to satisfy in order to be called a function is for all $x,y,z$: $f(x)=y$ together with $f(x)=z$ implies $y=z$.</p> <p>So now things are reduced to sets. So, when do we say that we defined or constructed a valid set? Well, to answer that we need to be very precise about what sets are. That is tricky. There are different axiomatization and quite a lot of intricate results regarding the foundations of set theory. So, here is just a little bit without getting technical. Defining what sets are is quite hopeless, since what on earth will you define sets in terms of? What is more primitive than a set? (if you have a good answer, please do write a paper about it). Instead, we abandon the idea of defining what sets are, and instead we adopt the axiomatic approach. We don't say what sets are, we say what we can do with sets. </p> <p>Well, that's good enough, since we want to know that certain sets exist, so if we know what we can do with sets, perhaps we can show them to exist by following the rules that tell us how to build sets. This turns out to work quite fine, after a bit of work of introducing all that is needed. </p> <p>Now, there is a rule of constructing sets that tells you that if you have a formula, like $x^2y+1$, then you can construct a set out of it that consists of all pairs $(x,y)$ of real numbers that satisfy this formula. So, we have a candidate for the function $(\mathbb R,f,\mathbb R)$, we just need to check that this set $f$ indeed satisfies the condition of being a function (which it does for the formula you mention). 
There are also rules for building sets that allow you to give recursive definitions of functions. </p> <p>So, the 'standard' ways of constructing functions are justified by axiomatic set theory. It should be noted though that no model of set theory is known, and that if one is ever found it will immediately imply that the theory is inconsistent. So, the foundations are tricky. We have not ruled out that mathematics contains contradictions, and we never will. But, it seems ok. </p>
3,717,172
<blockquote> <p>The random variable <span class="math-container">$X$</span> has an exponential distribution function given by density: <span class="math-container">$$ f_X(x) = \begin{cases} e^{-x}, &amp; x\ge 0,\\ 0, &amp; x&lt;0. \end{cases} $$</span> Find the distribution and density function of the random variable <span class="math-container">$Y=\max\left\{X^2,2-X\right\}$</span>.</p> </blockquote> <p>I honestly just don't know how to start here.</p> <ol> <li>Why should we be able to calculate this if we don't know what exactly X is?</li> <li>What is X^2 and 2-X ? I don't understand that</li> <li>And even knowing all of this, how should we be able to find the distribution and density function of Y?</li> </ol> <p>Thanks</p>
gt6989b
16,192
<p><strong>HINT</strong> <span class="math-container">$$ \begin{split} F_Y(y) &amp;= \mathbb{P}[Y \le y] \\ &amp;= \mathbb{P}[\max\{X^2,2-X\} \le y] \\ &amp;= \mathbb{P}[X^2 \le y,2-X \le y] \\ &amp;= \mathbb{P}[|X| \le \sqrt{y},X \ge 2-y] \\ \end{split} $$</span> Can you finish and express <span class="math-container">$F_Y(y)$</span> in terms of <span class="math-container">$F_X(\cdot)$</span> and <span class="math-container">$f_X(\cdot)$</span>?</p>
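<p>Spelling out the hint: since $2-X\le y$ is the same as $X\ge 2-y$, and $X\ge 0$ here, one gets $F_Y(y)=\mathbb P(\max(0,2-y)\le X\le\sqrt y)$, which is nonzero only for $y\ge 1$. A seeded Monte Carlo check of that completion (my own derivation, so treat it as a sketch rather than the official solution):</p>

```python
import math
import random

random.seed(1)
n = 400_000
samples = [max(x * x, 2 - x) for x in (random.expovariate(1.0) for _ in range(n))]

def F_Y(y):
    # P(max(X^2, 2-X) <= y) = P(max(0, 2-y) <= X <= sqrt(y)) for X ~ Exp(1);
    # the interval is empty (probability 0) when sqrt(y) < 2 - y, i.e. y < 1.
    if y < 1:
        return 0.0
    lo = max(0.0, 2.0 - y)
    return math.exp(-lo) - math.exp(-math.sqrt(y))

for y in (0.5, 1.5, 3.0, 9.0):
    emp = sum(s <= y for s in samples) / n
    assert abs(emp - F_Y(y)) < 0.01, (y, emp, F_Y(y))
```

<p>The empirical CDF agrees with the closed form at every test point, which is reassuring evidence for the completed hint.</p>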
3,326,310
<p>From Serge Lang's Linear Algebra:</p> <blockquote> <p>Let <span class="math-container">$x_1$</span>, <span class="math-container">$x_2$</span>, <span class="math-container">$x_3$</span> be numbers. Show that:</p> <p><span class="math-container">$$\begin{vmatrix} 1 &amp; x_1 &amp; x_1^2\\ 1 &amp;x_2 &amp; x_2^2\\ 1 &amp; x_3 &amp; x_3^2 \end{vmatrix}=(x_2-x_1)(x_3-x_1)(x_3-x_2)$$</span></p> </blockquote> <p>The matrix presented above seems to be the specific case of Vandermonde determinant:</p> <p><span class="math-container">$$ \begin{vmatrix} 1 &amp; x_1 &amp; ... &amp; x_1^{n-1}\\ 1 &amp;x_2 &amp; ... &amp; x_2^{n-1}\\ ... &amp; ... &amp; ... &amp; ...\\ 1 &amp; x_n &amp; ... &amp; x_n^{n-1} \end{vmatrix}=\prod_{1 \leq i &lt; j \leq n}(x_j - x_i) $$</span></p> <hr /> <p>I'm trying to prove the specific case to then generalize it for arbitrary Vandermonde matrices.</p> <h1>My incomplete &quot;proof&quot;</h1> <p>Since determinant is a multilinear alternating function, it can be seen that adding a scalar multiple of one column (resp. row) to another column (resp.
row) does not change the value (I omitted the proof to avoid too much text).</p> <p>Thus considering that <span class="math-container">$x_1$</span> is a scalar, we can multiply each column but the last one of our specific Vandermonde matrix by <span class="math-container">$x_1$</span> and then starting from <strong>right to left</strong> subtract <span class="math-container">$n-1$</span>th column from <span class="math-container">$n$</span>:</p> <p><span class="math-container">$$\begin{vmatrix} 1 &amp; x_1 &amp; x_1^2\\ 1 &amp;x_2 &amp; x_2^2\\ 1 &amp; x_3 &amp; x_3^2 \end{vmatrix}=\begin{vmatrix} x_1 &amp; 0 &amp; 0 \\ x_1 &amp; x_2 - x_1 &amp; x^{2}_2 - x^{2}_1\\ x_1 &amp; x_3 - x_1 &amp; x^{2}_3 - x^{2}_1 \end{vmatrix}$$</span></p> <p>Then using the expansion rule along the first row (since all the elements in it but <span class="math-container">$x_1$</span> are zero):</p> <p><span class="math-container">$$... =x_1\begin{vmatrix} x_2 - x_1 &amp; x^{2}_2 - x^{2}_1\\ x_3 - x_1 &amp; x^{2}_3 - x^{2}_1 \end{vmatrix}=(x_1x_2-x^2_1)(x^2_{3}-x^2_1)-(x^{2}_2x_1 - x^{3}_1)(x_3x_1 - x^2_1)$$</span></p> <p>The first expansion seems interesting because it contains <span class="math-container">$x_2 - x_1$</span> and <span class="math-container">$x_3 - x_1$</span> (which are first two factors of specific Vandermonde matrix), but further expansion does not give satisfying results.</p> <p><strong>Question</strong>:</p> <p>Is this a good simple start of inductively &quot;proving&quot; relation between Vandermonde matrix and its factors? If so what does it lack to show the complete result? Did I make mistake during evaluation?</p> <p>Thank you!</p>
Community
-1
<p>The general proof is not difficult.</p> <p>From the definition of a determinant (sum of products), the expansion must be a polynomial in <span class="math-container">$x_1,x_2,\cdots x_n$</span>, of degree <span class="math-container">$0+1+2+\cdots n-1=\dfrac{(n-1)n}2$</span>, and the coefficient of every term is <span class="math-container">$\pm1$</span>.</p> <p>On another hand, the determinant cancels whenever <span class="math-container">$x_j=x_k$</span>, so that the polynomial must be a multiple of</p> <p><span class="math-container">$$(x_1-x_2)(x_1-x_3)(x_1-x_4)\cdots(x_1-x_n)\\ (x_2-x_3)(x_2-x_4)\cdots(x_2-x_n)\\ (x_3-x_4)\cdots(x_3-x_n)\\ \cdots\\ (x_n-x_{n-1})$$</span> (<span class="math-container">$\dfrac{(n-1)n}2$</span> factors).</p> <p>Hence the determinant has no other choice than being <span class="math-container">$\pm$</span> this product.</p> <hr> <p>For the <span class="math-container">$3\times3$</span> case,</p> <p><span class="math-container">$$\begin{vmatrix} 1 &amp; x_1 &amp; x_1^2\\ 1 &amp;x_2 &amp; x_2^2\\ 1 &amp; x_3 &amp; x_3^2 \end{vmatrix}= \begin{vmatrix} 1 &amp; x_1 &amp; x_1^2\\ 0 &amp;x_2-x_1 &amp; x_2^2-x_1^2\\ 0 &amp; x_3-x_1 &amp; x_3^2-x_1^2 \end{vmatrix}=\begin{vmatrix} x_2-x_1 &amp; x_2^2-x_1^2\\ x_3-x_1 &amp; x_3^2-x_1^2 \end{vmatrix}=(x_2-x_1)(x_3-x_1)\begin{vmatrix} 1&amp;x_2+x_1 \\1&amp; x_3+x_1 \end{vmatrix}=(x_2-x_1)(x_3-x_1)(x_3-x_2).$$</span></p>
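<p>As a complement to the proof, the $3\times3$ identity can be verified with exact integer arithmetic for a few sample points (a quick check, not part of the original answer):</p>

```python
def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

for (x1, x2, x3) in [(1, 2, 3), (0, -1, 4), (5, 5, 7), (2, -3, 11)]:
    V = [[1, x1, x1 * x1], [1, x2, x2 * x2], [1, x3, x3 * x3]]
    assert det3(V) == (x2 - x1) * (x3 - x1) * (x3 - x2)
```

<p>The case with two equal nodes ($x_1=x_2=5$) gives $0$ on both sides, matching the vanishing argument used in the answer.</p>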
369,435
<p>My task is as in the topic; I'm given the function $$f(x)=\frac{1}{1+x+x^2+x^3}$$ My solution is the following (when $|x|&lt;1$):$$\frac{1}{1+x+x^2+x^3}=\frac{1}{(x+1)(x^2+1)}=\frac{1}{1-(-x)}\cdot\frac{1}{1-(-x^2)}=$$$$=\sum_{k=0}^{\infty}(-x)^k\cdot \sum_{k=0}^{\infty}(-x^2)^k$$ Now I try to calculate it the following way:</p> <p>\begin{align} &amp; {}\qquad \sum_{k=0}^{\infty}(-x)^k\cdot \sum_{k=0}^{\infty}(-x^2)^k \\[8pt] &amp; =(-x+x^2-x^3+x^4-x^5+x^6-x^7+x^8-x^9+\cdots)\cdot(-x^2+x^4-x^6+x^8-x^{10}+\cdots) \\[8pt] &amp; =x^3-x^4+0 \cdot x^5+0 \cdot x^6 +x^7-x^8+0 \cdot x^9 +0 \cdot x^{10} +x^{11}+\cdots \end{align}</p> <p>And now I conclude that it is equal to $\sum_{k=0}^{\infty}(x^{3+4 \cdot k}-x^{4+4 \cdot k})$ ($|x|&lt;1$) Is it correct? Are there any faster ways to solve these types of tasks? Any hints will be appreciated, thanks in advance.</p>
André Nicolas
6,312
<p>Let $x\ne 1$. By the usual formula for the sum of a finite geometric series, we have $1+x+x^2+x^3=\frac{1-x^4}{1-x}$. So your expression is equal to $\frac{1-x}{1-x^4}$.</p> <p>Expand $\frac{1}{1-x^4}$ in a power series, multiply by $1-x$. Unless we are operating purely formally, we will need to assume that $|x|\lt 1$.</p> <p>For details, note that $\frac{1}{1-t}=1+t+t^2+t^3+\cdots$.</p> <p>Substituting $x^4$ for $t$, we get that our expression is equal to $$(1-x)(1+x^4+x^8+x^{12}+\cdots).$$ Now multiply through by $1-x$. </p>
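<p>The coefficients can also be generated mechanically from the convolution recurrence for a reciprocal power series; they match the product $(1-x)(1+x^4+x^8+\cdots)$, i.e. the pattern $1-x+x^4-x^5+\cdots$ (a computational check, not part of the original answer):</p>

```python
# Power-series reciprocal of 1 + x + x^2 + x^3 via the convolution recurrence:
# (1 + x + x^2 + x^3) * sum(c_n x^n) = 1  =>  c_0 = 1, c_n = -(c_{n-1} + c_{n-2} + c_{n-3}).
N = 20
c = [1]
for n in range(1, N):
    c.append(-sum(c[n - j] for j in (1, 2, 3) if n - j >= 0))

print(c[:12])   # [1, -1, 0, 0, 1, -1, 0, 0, 1, -1, 0, 0]
# Same coefficients as (1 - x)(1 + x^4 + x^8 + ...) from the answer:
expected = [1 if n % 4 == 0 else (-1 if n % 4 == 1 else 0) for n in range(N)]
assert c == expected
```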
37,380
<p>I do have noisy data and want to smooth them by a Savitzky-Golay filter because I want to keep the magnitude of the signal. </p> <p>a) Is there a ready-to-use Filter available for that? </p> <p>b) what are appropriate values for m (the half width) and for the coefficients for 3000-4000 data points?</p>
Joseph
4,636
<p>The following code will filter noisy data…</p> <pre><code>SGKernel[left_?NonNegative, right_?NonNegative, degree_?NonNegative, derivative_? NonNegative] := Module[{i, j, k, l, matrix, vector}, matrix = Table[ (* matrix is symmetric *) l = i + j; If[l == 0, left + right + 1, (*Else*) Sum[k^l, {k, -left, right}] ], {i, 0, degree}, {j, 0, degree} ]; vector = LinearSolve[ matrix, MapAt[1&amp;, Table[0, {degree+1}], derivative+1] ]; (* vector = Inverse[matrix][[derivative + 1]]; *) Table[ vector.Table[If[i == 0, 1, k^i], {i, 0, degree}], {k, -left, right} ] ] /; derivative &lt;= degree &lt;= left+right SGSmooth[list_?VectorQ, window_, degree_, derivative_:0]:= Module[{pairs}, pairs = MapThread[List, {Range[Length[list]], list}]; Map[Last, SGSmooth[pairs, window, degree, derivative]] ] SGSmooth[list_?MatrixQ, window_, degree_, derivative_:0]:= Module[{kernel, list1, list2, margin, space, smoothData}, (* determine a symmetric margin at the ends of the raw dataset. The window width is split in half to make a symmetric window around a data point of interest *) margin = Floor[window/2]; (* take only the 1st column of data in the list to be smoothed (the independant Values) and extract the data from the list by removing half the window width 'i.e., margin' from the ends of the list *) list1 = Take[Map[First, list], {margin + 1, Length[list] - margin}]; (* take only the 2nd column of data in the list to be smoothed (the dependent Values) and Map them into list2 *) list2 = Map[Last, list]; (* get the kernel coefficients for the left and right margins, the degree, and the requested derivative *) kernel = SGKernel[margin, margin, degree, derivative]; (* correlation of the kernel with the list of dependent values *) list2 = ListCorrelate[kernel, list2]; (* Data _should_ be equally spaced, but... 
calculate spacing anyway by getting the minimum of all the differences in the truncated list1, remove the first and last points of list1 *) space = Min[Drop[list1, 1] - Drop[list1, -1]]; (* condition the dependant values based on spacing and the derivative *) list2 = list2*(derivative!/space^derivative); (* recombine the correlated (x-y) data pairs (that is list1 and list2), put these values back together again to form the smooth data list *) smoothData=Transpose[{list1, list2}] ] /; derivative &lt;= degree &lt;= 2*Floor[window/2] &amp;&amp; $VersionNumber &gt;= 4.0 </code></pre> <p>I did not apply this to your data, but you can do that later. This example is applied to noisy random data.</p> <p>Using a noisy sine data function…</p> <pre><code>dataFunction[x_] := Sin[x] + Random[Real, {-π, π}]; </code></pre> <p>Build a table of noisy tabular data from $0$ to $2\pi$…</p> <pre><code>dataTable = Table[{x, dataFunction[x]}, {x, 0, 2 π, .01}]; </code></pre> <p>Animate the smoothing operations. Notice the smoothed dataset shrinks with increasing 'window width'. This is an artifact of the <code>ListCorrelate</code> function used in the <code>SGSmooth</code> function. 
<code>ListCorrelate</code> uses an end buffered dataset.</p> <p>NOTE: The red line is the filtered data set…</p> <pre><code> Manipulate[ If[showRawData, Show[ ListPlot[dataTable, PlotRange -&gt; {{0, 2 π}, {-5.0, 5.0}}], ListPlot[ { SGSmooth[dataTable, win, order] }, PlotRange -&gt; {{0, 2 π}, {-5.0, 5.0}}, PlotStyle -&gt; {{Red, Thick}, {Green, Thick}}, Joined -&gt; True] ], (* ELSE just plot smooth data *) ListPlot[ { SGSmooth[dataTable, win, order] }, PlotRange -&gt; {{0, 2 π}, {-5.0, 5.0}}, PlotStyle -&gt; {{Red, Thick}, {Green, Thick}}, Joined -&gt; True] ], {{win, 100, "window width"}, 2, 300, 1, Appearance -&gt; "Labeled"}, {{order, 1, "order of polynomial"}, 1, 9, 1, Appearance -&gt; "Labeled"}, {{showRawData, True, "Raw Data"}, {True, False}}, SaveDefinitions -&gt; True ] </code></pre> <p>This will create a <code>Manipulate</code> similar to the following:</p> <p><img src="https://i.stack.imgur.com/1nLPZ.png" alt="demo"></p> <p>Hope this helps!</p>
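<p>As an independent cross-check of the <code>SGKernel</code> construction (done in Python rather than Mathematica, purely for verification), the classic width-5, degree-2 smoothing weights $(-3,12,17,12,-3)/35$ fall out of the same least-squares normal equations:</p>

```python
from fractions import Fraction

def sg_weights(left, right, degree, deriv=0):
    # Least-squares polynomial-fit weights, same normal equations as SGKernel.
    ks = range(-left, right + 1)
    n = degree + 1
    M = [[Fraction(sum(k ** (i + j) for k in ks)) for j in range(n)] for i in range(n)]
    rhs = [Fraction(int(i == deriv)) for i in range(n)]
    # Gauss-Jordan elimination with exact rationals.
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
                rhs[r] -= f * rhs[col]
    v = [rhs[i] / M[i][i] for i in range(n)]
    return [sum(v[i] * k ** i for i in range(n)) for k in ks]

w = sg_weights(2, 2, 2)
assert w == [Fraction(x, 35) for x in (-3, 12, 17, 12, -3)]
assert sum(w) == 1   # smoothing weights preserve a constant signal
```

<p>Agreement with the textbook coefficients suggests the <code>SGKernel</code> matrix setup above is doing the right thing.</p>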
37,380
<p>I do have noisy data and want to smooth them by a Savitzky-Golay filter because I want to keep the magnitude of the signal. </p> <p>a) Is there a ready-to-use Filter available for that? </p> <p>b) what are appropriate values for m (the half width) and for the coefficients for 3000-4000 data points?</p>
Frank
44,130
<p><br> I came across the code pasted below a couple of years back and have been using it ever since. It has the advantage that you can apply it also to {x,y} data right away. (BTW, it looks like PH Lundow's code below, last updated back in 2007, is in parts very similar to the one posted by John...)<br>Best,<br>Frank<br><br>PS<br>I just noticed, that the built-in SavitzkyGolayMatrix function does not seem to be capable of generating higher order (>2) derivatives. So it makes perfectly sense to use a separate package for the task even if it's slower.</p> <hr> <pre><code>(* :Title: Smooth *) (* :Context: "Smooth`" *) (* :Author: P.H. Lundow Bug reports to phl@kth.se *) (* :Summary: Functions for smoothing equidistant data with a Savitsky-Golay filter. *) (* :History: 020507 Created. 030417 Smooth extended to lists of y-values. 030425 Added SmoothAll. 030619 Added MinimumDistance and EquidistantData. 030923 Small improvement in SGKernel. 070618 Clean-up. *) (* :Mathematica Version: 4.0 *) (* :Keywords: *) (* :Limitations: *) (* :Discussion: There is room for improvement here. Returned list of smoothed data is shorter than original list. Find a better way to produce many kernels that takes care of the margins. *) BeginPackage["Smooth`"] SGKernel::usage = "SGKernel[left, right, degree, derivative] returns the Savitsky-Golay kernel used by the function Smooth when convoluting the data. The kernel has length left+right+1. Left is the number of leftward data points and right is the number of rightward points. Degree refers to the degree of the polynomial and derivative is the order of the desired derivative.\nRef: Numerical Recipes, chapter 14.8." Smooth::usage = "Smooth[list, window, degree, derivative] returns the smoothed form of equally spaced and sorted data. This is done by fitting polynomials of a given degree to a moving window of the list. 
Argument list is either a list of pairs {{x1,y1},{x2,y2},...} or a list {y1,y2,...} where the x-values are taken to be 1,2,3... If a derivative of the data is desired then give the order of the derivative, default is 0. \nExample:\n a=Table[{x,Sin[x]+Random[Real,{-1,1}],{x,0,2*Pi,0.001}]; \n b=Smooth[a,15,2]; \n c=Smooth[a,15,2,1]; \n This fits 2:nd degree polynomials to moving windows of width 15 to the data. List b is the smoothed data and list c is the derivative of the data." SmoothAll::usage = "SmoothAll[{vector,matrix}, window, degree, derivative] works like Smooth except that vector is a list of x-values and matrix is a list of lists of y-values at the corresponding x-values. The result is returned on the same form.\nExample:\n xold={1,2,3,4,5}; \n yold={{1,3,5,4,4},{2,3,3,2,1},{3,4,6,4,3}}; \n {xnew,ynew}=SmoothAll[{xold,yold},2,1,0];" MinimumDistance::usage = "MinimumDistance[data] returns the minimum distance between two x-values in the data which is a list of lists. The x-values are the first positions of each sublist." EquidistantData::usage = "EquidistantData[data] extracts the largest chunk of equidistant data from a list of lists.\nExample:\n EquidistantData[{{0,1},{2,8},{3,7},{4,9},{6,3}}] returns the list\n {{2,8},{3,7},{4,9}}." 
Begin["`Private`"] definitions = {MinimumDistance, EquidistantData, SGKernel, Smooth, SmoothAll} Scan[Unprotect, definitions] MinimumDistance[data:{__List}] := Module[{x}, x = Map[First, data]; Min[Drop[x, 1] - Drop[x, -1]] ] EquidistantData[data:{__List}] := Module[{min, len, pos, tmp}, min = MinimumDistance[data]; tmp = Split[data, (#1[[1]] + min == #2[[1]])&amp;]; len = Map[Length, tmp]; pos = Flatten[Position[len, Max[len], Infinity, 1]][[1]]; tmp[[pos]] ] SGKernel[left_?NonNegative, right_?NonNegative, degree_?NonNegative, derivative_?NonNegative] := Module[{i, j, k, l, matrix, vector}, matrix = Table[ (* matrix is symmetric *) l = i + j; If[l == 0, left + right + 1, (*Else*) Sum[k^l, {k, -left, right}] ], {i, 0, degree}, {j, 0, degree} ]; vector = LinearSolve[ matrix, MapAt[1&amp;, Table[0, {degree+1}], derivative+1] ]; (* vector = Inverse[matrix][[derivative + 1]]; *) Table[ vector.Table[If[i == 0, 1, k^i], {i, 0, degree}], {k, -left, right} ] ] /; derivative &lt;= degree &lt;= left+right Smooth[list_?MatrixQ, window_, degree_, derivative_:0]:= Module[{kernel, list1, list2, margin, space}, margin = Floor[window/2]; list1 = Take[ Map[First, list], {margin + 1, Length[list] - margin} ]; list2 = Map[Last, list]; kernel = SGKernel[margin, margin, degree, derivative]; list2 = ListCorrelate[kernel, list2]; (* Data _should_ be equally spaced, but... 
*) space = Min[Drop[list1, 1] - Drop[list1, -1]]; list2 = list2*(derivative!/space^derivative); Transpose[{list1, list2}] ] /; derivative &lt;= degree &lt;= 2*Floor[window/2] &amp;&amp; $VersionNumber &gt;= 4.0 Smooth[list_?VectorQ, window_, degree_, derivative_:0]:= Module[{pairs}, pairs = MapThread[List, {Range[Length[list]], list}]; Map[Last, Smooth[pairs, window, degree, derivative]] ] SmoothAll[{x_?VectorQ, y_?MatrixQ}, window_, degree_, derivative_:0]:= Module[{old, new, tmp}, tmp = Map[( old = Transpose[{x, #}]; new = Smooth[old, window, degree, derivative]; Map[Last, new])&amp;, y ]; {Map[First, new], tmp} ] Scan[Protect, definitions] End[] EndPackage[] </code></pre>
3,003,982
<p>By integrating by parts twice, show that <span class="math-container">$I_n$</span>, as defined below for integers <span class="math-container">$n &gt; 1$</span>, has the value shown.</p> <blockquote> <p><span class="math-container">$$I_n = \int_0^{\pi / 2} \sin n \theta \cos \theta \,d\theta = \frac{n-\sin(\frac{\pi n}{2})}{n^2 -1}$$</span></p> </blockquote> <p>I can do this using the formula <span class="math-container">$$\sin A \cos B = \frac{1}{2}[\sin(A-B)+\sin(A+B)] ,$$</span> but when I try using integration by parts I get stuck in a loop of integrating the same thing over and over.</p>
Aleksas Domarkas
562,074
<p><span class="math-container">$$\cos{(x)} \sin{\left( n x\right) }=\frac{\sin{\left( \left( n+1\right) x\right) }+\sin{\left( \left( n-1\right) x\right) }}{2}$$</span> <span class="math-container">$$\int{\left. \cos{(x)} \sin{\left( n x\right) }dx\right.}=-\frac{\cos{\left( \left( n+1\right) x\right) }}{2 \left( n+1\right) }-\frac{\cos{\left( \left( n-1\right) x\right) }}{2 \left( n-1\right) }$$</span> Then <span class="math-container">$$\int_{0}^{\frac{\pi}{2}}{\left. \cos{(x)} \sin{\left( n x\right) }dx\right.}\\= \frac{n}{n^2-1}-\frac{\left( n-1\right) \cos{\left( \frac{{\pi} n+{\pi} }{2}\right) }+\left( n+1\right) \cos{\left( \frac{{\pi} n-{\pi} }{2}\right) }}{2n^2-2}\\= \frac{n}{{{n}^{2}}-1}-\frac{\sin{\left( \frac{{\pi} n}{2}\right) }}{{{n}^{2}}-1}$$</span></p>
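<p>The closed form can be confirmed numerically with composite Simpson's rule for several integer <span class="math-container">$n&gt;1$</span> (a quick check, not part of the original answer):</p>

```python
import math

def simpson(f, a, b, m=2000):
    # Composite Simpson's rule with m (even) subintervals.
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

for n in range(2, 12):
    num = simpson(lambda t: math.sin(n * t) * math.cos(t), 0.0, math.pi / 2)
    closed = (n - math.sin(math.pi * n / 2)) / (n ** 2 - 1)
    assert abs(num - closed) < 1e-9, (n, num, closed)
```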
3,003,982
<p>By integrating by parts twice, show that <span class="math-container">$I_n$</span>, as defined below for integers <span class="math-container">$n &gt; 1$</span>, has the value shown.</p> <blockquote> <p><span class="math-container">$$I_n = \int_0^{\pi / 2} \sin n \theta \cos \theta \,d\theta = \frac{n-\sin(\frac{\pi n}{2})}{n^2 -1}$$</span></p> </blockquote> <p>I can do this using the formula <span class="math-container">$$\sin A \cos B = \frac{1}{2}[\sin(A-B)+\sin(A+B)] ,$$</span> but when I try using integration by parts I get stuck in a loop of integrating the same thing over and over.</p>
Community
-1
<p><strong>Hint:</strong></p> <p>There's an escape from the infinite loop. If you observe closely, you will see that after two iterations you get to something like</p> <p><span class="math-container">$$I=b+aI$$</span></p> <p>where the constants <span class="math-container">$a,b$</span> are computable.</p> <p>You naturally conclude</p> <p><span class="math-container">$$I=\frac{b}{1-a}.$$</span></p> <hr> <blockquote class="spoiler"> <p> <span class="math-container">$$\int\sin n\theta\cos\theta\,d\theta=\sin n\theta\sin\theta-n\int\cos n\theta\sin\theta\,d\theta\\=\sin n\theta\sin\theta+n\cos n\theta\cos\theta+n^2\int\sin n\theta\cos\theta\,d\theta.$$</span></p> </blockquote>
2,523,570
<p>$15x^2-4x-4$, I factored it out to this: $$5x(3x-2)+2(3x+2).$$ But I don’t know what to do next since the twos in the brackets have opposite signs, or is it still possible to factor them out?</p>
N. F. Taussig
173,070
<p>To split the linear term of $15x^2 - 4x - 4$, you must find two numbers with product $15 \cdot (-4) = -60$ and sum $-4$. They are $-10$ and $6$. Hence, \begin{align*} 15x^2 - 4x - 4 &amp; = 15x^2 - 10x + 6x - 4 &amp;&amp; \text{split the linear term}\\ &amp; = 5x(3x - 2) + 2(3x - 2) &amp;&amp; \text{factor by grouping}\\ &amp; = (5x + 2)(3x - 2) &amp;&amp; \text{extract the common factor} \end{align*} You made a sign error while extracting the factor of $2$ from $6x - 4$.</p>
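<p>As a quick check (an illustration, not part of the argument), one can multiply the two factors back together as coefficient lists in Python and confirm they reproduce <span class="math-container">$15x^2-4x-4$</span>:</p>

```python
# Polynomials as coefficient lists, constant term first
def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

product = poly_mul([2, 5], [-2, 3])   # (2 + 5x)(-2 + 3x)
assert product == [-4, -4, 15]        # -4 - 4x + 15x^2
```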
2,830,926
<p>I am implementing a program in C which requires that given 4 points should be arranged such that they form a quadrilateral.(assume no three are collinear)<br> Currently , I am ordering the points in the order of their slope with respect to origin.<br> See <a href="https://ibb.co/cfDHeo" rel="nofollow noreferrer">https://ibb.co/cfDHeo</a> .<br> In this case a,b,c,d are in descending order of slope but on joining a to b , b to c , c to d , d to a - I don't get a quadrilateral .<br> So my method fails in such cases.<br></p> <p>I need suggestion.</p> <p>Thanks.</p>
Graham Kemp
135,106
<p>Let $q:=1-p$</p> <p>$\begin{align}\mathsf M'_X(u) &amp; = \mathcal D_u\left(\dfrac{p}{1-(1-p)e^u}\right)^r\\[1ex] &amp; =p^r \mathcal D_u(1-qe^u)^{-r}\\[1ex] &amp;= -rp^r (1-qe^u)^{-r-1}\mathcal D_u (1-qe^u)\\[1ex] &amp;= rqp^r (1-qe^u)^{-r-1}e^u\\[2ex]\mathsf M''_X(u) &amp;= \mathcal D_u\left(rqp^r (1-qe^u)^{-r-1}e^u\right)\\ &amp; =rqp^r\left(e^u\mathcal D_u(1-qe^u)^{-r-1} + (1-qe^u)^{-r-1}\mathcal D_u e^u\right)\\ &amp; = rqp^r\left((r+1)e^u(1-qe^u)^{-r-2}qe^u + (1-qe^u)^{-r-1}e^u\right)\\&amp;=rqp^r(1-qe^u)^{-r-2}\left((r+1)qe^{2u} + e^u-qe^{2u}\right)\\&amp;=rqp^r(1-qe^u)^{-r-2}\left(rqe^{2u}+e^u\right)\\[2ex] \mathsf M'_X(0) &amp; = rqp^{-1}\\&amp;=r(1-p)p^{-1}\\[1ex] \mathsf M''_{X}(0) &amp; = (r^2q^2+rq)\,p^{-2} \\[1ex] \mathsf M''_X(0)-(\mathsf M'_X(0))^2 &amp;=rqp^{-2} \\&amp;=r(1-p)p^{-2} \end{align}$</p>
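<p>The moment calculations can be sanity-checked numerically. The sketch below (a Python illustration, using central finite differences rather than symbolic differentiation) confirms that <span class="math-container">$\mathsf M'_X(0)=rq/p$</span> and <span class="math-container">$\mathsf M''_X(0)-\mathsf M'_X(0)^2=rq/p^2$</span> for one choice of <span class="math-container">$r$</span> and <span class="math-container">$p$</span>:</p>

```python
import math

r, p = 4, 0.3
q = 1 - p
M = lambda u: (p / (1 - q * math.exp(u))) ** r   # the MGF above

h = 1e-4
M1 = (M(h) - M(-h)) / (2 * h)                    # ~ M'(0): the mean
M2 = (M(h) - 2 * M(0.0) + M(-h)) / h ** 2        # ~ M''(0): the second moment

assert abs(M1 - r * q / p) < 1e-4                # mean rq/p
assert abs((M2 - M1 ** 2) - r * q / p ** 2) < 1e-2   # variance rq/p^2
```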
2,727,974
<blockquote> <p>I am in need of this important inequality $$\log(x+1)\leqslant x$$.</p> </blockquote> <p>I understand that $\log(x)\leqslant x$. For $c\in\mathbb{R}$. However is it true that $\log(x+c)\leqslant x$?</p> <p>It is hard to accept because it seems like $c$ cannot be arbitrary. I have tried to prove this inequality:</p> <p>$\log(x+c)\leqslant x\iff x+c\leqslant e^x$</p> <p>It is true that $f(x)=x$ grows much faster than $g(x)=\log(x+c)$, since the $\frac{df(x)}{dx}=1\geqslant \frac{1}{x+c}=\frac{dg(x)}{dx}$ </p> <p><strong>Question:</strong></p> <p>Is the derivative argument enough to prove the more general inequality $\log(x+c)\leqslant x$?</p> <p>Thanks in advance!</p>
Dr. Sonnhard Graubner
175,066
<p>Hint: write $$x-\log(x+1)\geq 0$$ and define $$f(x)=x-\log(x+1)$$ and use calculus. Note that $$f'(x)=1-\frac{1}{x+1}=\frac{x}{x+1}$$</p>
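<p>A quick numerical illustration (just a Python sketch, not a proof): sampling <span class="math-container">$f(x)=x-\log(1+x)$</span> on a grid shows it stays non-negative, with its minimum <span class="math-container">$0$</span> at <span class="math-container">$x=0$</span>, as the calculus argument predicts.</p>

```python
import math

# f(0) = 0 and f'(x) = x/(1+x): negative on (-1,0), positive on (0,inf)
f = lambda x: x - math.log(1.0 + x)

xs = [i / 100.0 for i in range(-99, 1001)]
assert all(f(x) >= 0.0 for x in xs)
assert f(0.0) == 0.0
```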
2,727,974
<blockquote> <p>I am in need of this important inequality $$\log(x+1)\leqslant x$$.</p> </blockquote> <p>I understand that $\log(x)\leqslant x$. For $c\in\mathbb{R}$. However is it true that $\log(x+c)\leqslant x$?</p> <p>It is hard to accept because it seems like $c$ cannot be arbitrary. I have tried to prove this inequality:</p> <p>$\log(x+c)\leqslant x\iff x+c\leqslant e^x$</p> <p>It is true that $f(x)=x$ grows much faster than $g(x)=\log(x+c)$, since the $\frac{df(x)}{dx}=1\geqslant \frac{1}{x+c}=\frac{dg(x)}{dx}$ </p> <p><strong>Question:</strong></p> <p>Is the derivative argument enough to prove the more general inequality $\log(x+c)\leqslant x$?</p> <p>Thanks in advance!</p>
Simply Beautiful Art
272,831
<p>The derivative argument works. For your specific inequality, note that the derivative argument can be written in terms of integrals:</p> <p>$$y-1=\int_1^y\frac11{\rm~d}t\ge\int_1^y\frac1t{\rm~d}t=\ln(y)$$</p> <p>where $y=x+1&gt;1$.</p> <p>For $c&lt;1$ it follows trivially from above. For $c&gt;1,y&gt;0$, note that</p> <p>\begin{align}y+c&amp;\ge y+\ln(c)+1\\&amp;&gt;y+\ln(c)+\ln(2)\\&amp;=y+\ln(2c)\\&amp;=\ln(2c)+\int_{2c}^{y+2c}1{\rm~d}t\\&amp;\ge\ln(2c)+\int_{2c}^{y+2c}\frac1t{\rm~d}t\\&amp;=\ln(y+2c)\end{align}</p> <p>Let $y+c=x$ and you end up with</p> <p>$$x\ge\ln(x+c)\quad\forall x&gt;c$$</p>
2,174,270
<p>I have an elliptic curve $E$ over $\mathbb{F}_{7}$ defined by $y^2=x^3+2$ with the point at infinity $\mathcal{O}$</p> <p>I am given the point $(3,6)$ and need to find the line which intersects with $E$ at <strong>only this point</strong></p> <p>I am told that this line is $y\equiv (4x+1)\mod 7$</p> <p>I have verified that this is the case, however my question is, how would I go about finding that equation in the first place?</p> <hr> <p>What I'm asking for is a method, which would take in the elliptic curve $E$ and a point on the curve $P$ and output the line which intersects with $E$ at <strong>only this point</strong></p> <p>In the example above, we would have the following</p> <p>\begin{align}\text{input} &amp;= \begin{cases}y^2=x^3 + 2\\ P=(3,6)\end{cases}\\ \text{output} &amp;= y\equiv (4x+1)\mod 7\end{align}</p>
DonAntonio
31,254
<p>An idea: </p> <p>A general line through $\;(3,6)\;$ is of the form $\;y-6=m(x-3)\implies \color{red}{y=mx-(3m-6)}\;$ , and we want this line to intersect $\;E\;$ <strong>only</strong> at the given point, so we need the equation over $\;\Bbb F_7\;$ : $\;(mx-(3m-6))^2=x^3+2\;$, to have <em>one single solution</em> (the following is done modulo $\;7\;$ all along):</p> <p>$$m^2x^2-2m(3m-6)x+9m^2-36m+36=x^3+2\implies$$</p> <p>$$f(x)=x^3-m^2x^2+6m(m-2)x-2m^2+m+1=0$$</p> <p>The above cubic has $\;x=3:\;f(3)=0 , \;$ as a root, of course. If we require this root to be double (as we need a single solution), <em>also</em> its derivative must vanish at $\;x=3\;$ :</p> <p>$$f'(x)=3x^2-2m^2x+6m^2-5m\implies 0=f'(3)=-1-6m^2+6m^2-5m\implies$$</p> <p>$$5m=-1\implies m=(-1)\cdot\overbrace{3}^{=5^{-1}}=-3=4$$</p> <p>and that way we get the line </p> <p>$$\color{red}{y=4x+1}$$</p>
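<p>Since everything happens over the small field <span class="math-container">$\Bbb F_7$</span>, the conclusion is easy to verify by brute force. The following Python sketch (an independent check, not part of the derivation) enumerates the affine points and confirms that <span class="math-container">$y=4x+1$</span> meets <span class="math-container">$E$</span> only at <span class="math-container">$(3,6)$</span>:</p>

```python
p = 7
# affine points of E: y^2 = x^3 + 2 over F_7
curve = {(x, y) for x in range(p) for y in range(p)
         if (y * y - (x ** 3 + 2)) % p == 0}
line = {(x, (4 * x + 1) % p) for x in range(p)}

assert (3, 6) in curve
assert curve & line == {(3, 6)}   # y = 4x + 1 meets E only at (3, 6)
```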
72,012
<p>I'm trying to discretize a region with "pointy" boundaries to study dielectric breaking on electrodes with pointy surfaces. So far I've tried 3 types of boundaries, but the meshing functions stall indefinitely. I've let it run for several hours and it doesn't complete. Seems strange that it would take so long</p> <pre><code>ParametricPlot[{-Cos[u] (1.2 - Cos[(u - Pi)/2]^6) (0.2 + Cos[10 u]^10), Sin[u] (1.2 - Cos[(u - Pi)/2]^6) (0.2 + Cos[10 u]^10)}, {u, 0, 2 Pi}, PlotRange -&gt; All] ir = ParametricRegion[{{r Cos[u], r Sin[u]}, 0 &lt;= u &lt;= 2 Pi &amp;&amp; 0 &lt;= r &lt;= (0.5 + Cos[(u - (Pi/2))/2]^8 Cos[14 (u - (Pi/2))]^8)}, {r, u}] ird = DiscretizeRegion[ir, MaxCellMeasure -&gt; 0.02] (* never completes *) Needs["NDSolve`FEM`"] m = ToElementMesh[ir, "BoundaryMeshGenerator" -&gt; {"RegionPlot", "SamplePoints" -&gt; 50}, "MeshOrder" -&gt; 1] (* never completes *) </code></pre> <p>Another attempt:</p> <pre><code>ParametricPlot[{ Cos[u] Max[ 0.1, -Cos[u]] (1.2 - Abs[Sin[10 Abs[((u - Pi))^(1/2)]]^(1/3)]), (1.2 - Abs[Sin[10 Abs[((u - Pi))^(1/2)]]^(1/3)]) Sin[u] Max[ 0.1, -Cos[u]]}, {u, 0, 2 Pi}, PlotRange -&gt; All] ir2 = ParametricRegion[{{r Cos[u], r Sin[u]}, 0 &lt;= u &lt;= 2 Pi &amp;&amp; 0 &lt;= r &lt;= Max[0.1, -Cos[u]] (1.2 - Abs[Sin[10 Abs[((u - Pi))^(1/2)]]^(1/3)])}, {r, u}] m = ToElementMesh[ir2, "BoundaryMeshGenerator" -&gt; {"RegionPlot", "SamplePoints" -&gt; 50}, "MeshOrder" -&gt; 1] (* never completes *) </code></pre> <p>Last attempt:</p> <pre><code>ParametricPlot[{-Cos[u] (1.2 - Cos[(u - Pi)/2]^6) (0.2 + Cos[10 u]^10), Sin[u] (1.2 - Cos[(u - Pi)/2]^6) (0.2 + Cos[10 u]^10)}, {u, 0, 2 Pi}, PlotRange -&gt; All] ir3 = ParametricRegion[{{r Cos[u], r Sin[u]}, 0 &lt;= u &lt;= 2 Pi &amp;&amp; 0 &lt;= r &lt;= (1.2 - Cos[(u - Pi)/2]^6) (0.2 + Cos[10 u]^10)}, {r, u}] m = ToElementMesh[ir3, "BoundaryMeshGenerator" -&gt; {"RegionPlot", "SamplePoints" -&gt; 50}, "MeshOrder" -&gt; 1] (* never completes *) </code></pre>
Community
-1
<p>I would suggest not going through the <code>Plot</code> functions for this. They're designed to produce a good <em>visual</em> representation of the region, which is not necessarily the same as a good representation for doing numerical computation on. Besides, the plot already discretizes the region into a polygon, so the mesh refinement options of <code>DiscretizeRegion</code> or <code>ToElementMesh</code> cannot help to improve the accuracy of the boundary. It's best to maintain an analytical representation of the region right up until it goes into the mesh generation routine.</p> <p>Unfortunately, Mathematica is not very good at dealing with <code>ParametricRegion</code>s. (It can't even do</p> <pre><code>RegionPlot[ ParametricRegion[{r Cos@t, r Sin@t}, {{r, 1, 2}, {t, 0, Pi/2}}], PlotRange -&gt; {{0, 2}, {0, 2}}] </code></pre> <p>correctly.) Your region can be easily expressed as an <code>ImplicitRegion</code> instead:</p> <pre><code>region = ImplicitRegion[ With[{r = Sqrt[x^2 + y^2], u = ArcTan[-x, y]}, r &lt;= f[u]], {x, y}]; RegionPlot[region, PlotRange -&gt; {{-1.5, 0.6}, {-1.5, 1.5}}, AspectRatio -&gt; Automatic] </code></pre> <p><img src="https://i.stack.imgur.com/RhnsT.png" alt="enter image description here"></p> <p>(Mathematica also has a hard time figuring out the bounds of the region automatically, so sometimes you have to specify them manually.) Then <code>DiscretizeRegion</code> works fine:</p> <pre><code>DiscretizeRegion[region, {{-1.5, 0.6}, {-1.5, 1.5}}, MaxCellMeasure -&gt; 0.02, AccuracyGoal -&gt; 3] </code></pre> <p><img src="https://i.stack.imgur.com/iMNpi.png" alt="enter image description here"></p> <p>as does <code>ToElementMesh</code>:</p> <pre><code>Needs["NDSolve`FEM`"]; mesh = ToElementMesh[region, {{-1.5, 0.6}, {-1.5, 1.5}}, "BoundaryMeshGenerator" -&gt; "Continuation"]; Show[mesh["Wireframe"]] </code></pre> <p><img src="https://i.stack.imgur.com/tVpAa.png" alt="enter image description here"></p>
2,896,780
<p>With a trusted 3rd party, running Secret Santa is easy: The 3rd party labels each person $1,\dotsc,n$, and then randomly chooses a derangement from among all possible derangements of $n$ numbers. Person $i$ will then give a gift to the number in position $i$ of the derangement. The trusted 3rd party is responsible for keeping the derangement secure, and for telling each person whom to give a gift to.</p> <p>The question is: Is there an algorithm that would allow Secret Santa to be played without a trusted 3rd party?</p> <p>I thought perhaps a clever use of secret keys and a one way hash function could accomplish it, but I've failed to find an algorithm so far.</p> <p>So I'm looking for a description of a valid algorithm or a (informal) proof that one does not exist.</p> <h2>EDIT</h2> <p>I believe my problem is different from the possible duplicate. I want a solution that will work for a distributed group of players. That is, you cannot assume the players are in the same room and have the ability to shuffle envelopes or notecards or anything like that. </p> <p>To make this concrete, a valid solution must work across a group instant messenger or over group emails.</p> <p>Also, to clarify again, it must be a random derangement and not merely a random n-cycle.</p>
hmakholm left over Monica
14,366
<p>How about this:</p> <ol> <li><p>Everybody generates a random private-public key pair.</p></li> <li><p>Everybody publishes their public key <em>anonymously</em> (see below).</p></li> <li><p>Collaboratively choose a random seed:</p> <p>a. Everybody chooses a random seed component.<br> b. Everybody publishes a hash of their seed component openly.<br> c. Everybody publishes their actual seed component openly.<br> d. The seed is the sum of the components published in step (c).</p></li> <li><p>Use seed to derive a derangement of the public keys in a deterministic way. (Everybody can now do this for themselves).</p></li> <li><p>Everybody encrypts their name with their secret santa's public key and publishes the ciphertext.</p></li> <li><p>Everybody tries to decrypt all of the messages from the previous step with their own private key. When one of the decryptions succeed, they've found their target.</p></li> </ol> <p>This assumes that "encrypting" with a public key uses some randomness to produces a <em>non-deterministic</em> result, so someone who has only the public key cannot tell whether a given cleartext and ciphertext match up or not. (This is a standard property of real public-key protocols, though not of textbook RSA).</p> <p>The procedure does inherently make the <em>cycle structure</em> of the derangement public. In particular, everybody who is <em>mutually</em> santa with someone will know it. 
It is probably desirable to restrict oneself to "super-derangements" that don't have any $2$-cycles.</p> <p>A <strong>variant</strong> that doesn't reveal the cycle structure, at the cost of not having a hard bound on the running time would be:</p> <ol> <li><p>Everybody chooses a random number and hashes it to produce a "santa ID".</p></li> <li><p>Everybody publishes their santa ID anonymously.</p></li> <li><p>Collaboratively choose a random seed.</p></li> <li><p>Use the seed to deterministically choose a bijection between santa IDs and target <em>names</em>.</p></li> <li><p>If anyone got themselves as target, complain openly, proving that it is so by revealing the number they hashed to get their ID. Everybody starts over from step 1 with new random numbers for everything. On average, $e$ iterations will be needed before a derangement is found.</p></li> </ol> <hr> <p>Both of the above need a <strong>sub-algorithm for publishing things anonymously:</strong></p> <ol> <li><p>Everybody generates a public-private key pair just for the publication sub-algorithm and publishes the public key.</p></li> <li><p>Everybody selects a random permutation $(K_i)_{1\le i\le n}$ of all the public keys from step 1.</p></li> <li><p>Everybody wraps their message in a series of encryptions: $$ M_0 = \text{the message to publish} \\ M_k = \operatorname{encypt}(K_k, M_{k-1}) \quad\text{for }1\le k\le n $$</p></li> <li><p>Everybody publishes their own $M_n$.</p></li> <li><p>Everybody tries to decrypt all the messages from the previous round. Publish all contents where decryption succeeds.</p></li> <li><p>After repeating the previous step $n$ times, everybody's original messages will be on the table, and nobody knows where any of them comes from (except their own).</p></li> </ol> <p>A large enough collusion might be able to break the anonymous publishing step by traffic analysis, but if everybody except a few players collude, the game has lost much of its meaning anyway.</p>
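<p>Step 4 of the first protocol is left abstract ("derive a derangement in a deterministic way"). One concrete realization, sketched in Python under the assumption that all players hold the same seed, is rejection sampling with a seeded RNG, so every player computes the identical assignment on their own:</p>

```python
import random

def derangement_from_seed(names, seed):
    """Deterministically derive a derangement from a shared seed:
    everyone who knows the seed computes the same assignment."""
    rng = random.Random(seed)
    while True:  # rejection sampling; expected ~e attempts
        perm = list(names)
        rng.shuffle(perm)
        if all(a != b for a, b in zip(names, perm)):
            return dict(zip(names, perm))

names = ["ann", "bob", "eve", "dan"]
assignment = derangement_from_seed(names, seed=12345)
assert sorted(assignment.values()) == sorted(names)      # a permutation
assert all(g != t for g, t in assignment.items())        # with no fixed point
```

<p>(In the actual protocol the "names" would be the anonymously published public keys, and one would additionally reject derangements containing <span class="math-container">$2$</span>-cycles.)</p>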
195,556
<p>My math background is very narrow. I've mostly read logic, recursive function theory, and set theory.</p> <p>In recursive function theory one studies <a href="http://en.wikipedia.org/wiki/Partial_functions">partial functions</a> on the set of natural numbers. </p> <p>Are there other areas of mathematics in which (non-total) partial functions are important? If so, would someone please supply some references?</p> <p>Thanks!</p>
Qiaochu Yuan
232
<p>Sure. For example, <a href="http://en.wikipedia.org/wiki/Meromorphic_function" rel="noreferrer">meromorphic functions</a> in complex analysis are partial functions. <a href="http://en.wikipedia.org/wiki/Multiplicative_inverse" rel="noreferrer">Multiplicative inverse</a> is another partial function (say on a <a href="http://en.wikipedia.org/wiki/Ring_theory" rel="noreferrer">ring</a>). </p> <p>It is worth mentioning that mathematical tradition in most areas (in my experience) is to refer to a partial function as a total function by restricting the domain and not to actually talk about partial functions. For example, most people talk about multiplicative inverse as a total function, but defined on the <a href="http://en.wikipedia.org/wiki/Unit_(ring_theory)" rel="noreferrer">group of units</a> of a ring. </p> <p>This works fine but I think it is unnecessary. There are many places in mathematics where it would be more natural to use partial functions where people don't. For example, composition of <a href="http://en.wikipedia.org/wiki/Morphism" rel="noreferrer">morphisms in a category</a> is a partially defined operation on pairs of morphisms, but that's not how it gets formalized: instead it is formalized as a family of totally defined operations $\text{Hom}(a, b) \times \text{Hom}(b, c) \to \text{Hom}(a, c)$. </p>
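<p>A small programming-flavored illustration of the multiplicative-inverse example (a Python sketch, where the partial function returns <code>None</code> off its domain): inversion mod <span class="math-container">$n$</span> is defined exactly on the units.</p>

```python
# Multiplicative inverse mod n as a partial function on {0, ..., n-1}
def inverse_mod(a, n):
    try:
        return pow(a, -1, n)   # Python 3.8+: three-argument pow computes modular inverses
    except ValueError:
        return None            # a is not a unit mod n: the function is undefined here

assert inverse_mod(3, 7) == 5          # 3 * 5 = 15 = 1 (mod 7)
assert inverse_mod(2, 4) is None       # gcd(2, 4) != 1: outside the domain
```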
181,321
<p>Assuming that <code>a.b=z</code>,where <code>b={b1,b2,b3}</code> and <code>z=x b1+y b2+z b3</code>, obviously <code>a={x,y,z}</code>, how to realize it by mathematica?</p>
GerardF123
58,885
<p>If I understand correctly, you have a single linear equation with 3 unknowns.</p> <p>It's not possible to back out the 3 unknown values. You have an equation for a 3D hyperplane with an infinite number of solutions. </p>
3,454,395
<p>So I’m working on this equation <span class="math-container">$z^{10} + 2z^5 + 2 = 0$</span> to find all complex solutions, and I think I managed to solve it, but I can’t find a solution manual for it, since it is a really old exam task. The thing that makes me uncomfortable with my solution is that, shouldn’t I get just 10 solutions? But when I put in all <span class="math-container">$k$</span> values (<span class="math-container">$k = 0,1,2,3,4$</span>), I get 12 different angle solutions. Isn’t that wrong?</p> <p>My answer:</p> <h2><span class="math-container">$$ \sqrt{\mathstrut 2}^{1/5}e^{\frac{\left(\pm\frac\pi4i+2\pi k \right)}5} $$</span></h2> <p>Sorry, I couldn’t figure out how to put the 5 in the denominator of the polar formula...</p>
lab bhattacharjee
33,337
<p>Completing the square, <span class="math-container">$(z^5+1)^2=-1$</span>, so <span class="math-container">$$z^5=-1\pm i=\sqrt2e^{i(2n\pi+\pi\pm\pi/4)}$$</span> where <span class="math-container">$n$</span> is any integer</p> <p><span class="math-container">$$z=2^{1/10}\text{exp}\left(\dfrac{i\left(2n\pi+\pi\pm\pi/4\right)}5\right)$$</span> where <span class="math-container">$0\le n\le4$</span></p> <p>Observe that <span class="math-container">$$\text{exp}\left(\dfrac{i\left(2n\pi+\pi+\pi/4\right)}5\right)\ne\text{exp}\left(\dfrac{i\left(2n\pi+\pi-\pi/4\right)}5\right)$$</span> as the equality needs <span class="math-container">$\text{exp}\left(\dfrac{i\pi/2}5\right)=1$</span></p> <p>Like <a href="https://math.stackexchange.com/questions/192742/how-to-solve-x3-1/192743#192743">How to solve $x^3=-1$?</a>, we can prove that there is no repeated root of the given equation </p> <p>Now if for <span class="math-container">$n_1&lt;n_2$</span> <span class="math-container">$$\text{exp}\left(\dfrac{i\left(2n_1\pi+\pi+\pi/4\right)}5\right)=\text{exp}\left(\dfrac{i\left(2n_2\pi+\pi+\pi/4\right)}5\right)$$</span></p> <p><span class="math-container">$\iff\dfrac{2(n_1-n_2)\pi}5$</span> is a multiple of <span class="math-container">$2\pi$</span></p> <p><span class="math-container">$\iff n_1-n_2$</span> is a multiple of <span class="math-container">$5$</span> which is not possible as <span class="math-container">$0\le n_1&lt;n_2\le4$</span></p> <p>Similarly for </p> <p><span class="math-container">$$\text{exp}\left(\dfrac{i\left(2n_1\pi+\pi-\pi/4\right)}5\right)=\text{exp}\left(\dfrac{i\left(2n_2\pi+\pi-\pi/4\right)}5\right)$$</span></p>
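<p>A quick numeric confirmation (an illustration in Python, not part of the argument) that the ten listed values are distinct and all satisfy <span class="math-container">$z^{10}+2z^5+2=0$</span>:</p>

```python
import cmath

# z = 2^(1/10) exp(i(2n*pi + pi +/- pi/4)/5), n = 0..4
roots = [2 ** 0.1 * cmath.exp(1j * (2 * n * cmath.pi + cmath.pi + s * cmath.pi / 4) / 5)
         for n in range(5) for s in (+1, -1)]

assert len(roots) == 10
for z in roots:
    assert abs(z ** 10 + 2 * z ** 5 + 2) < 1e-9   # each one solves the equation
for i in range(10):
    for j in range(i + 1, 10):
        assert abs(roots[i] - roots[j]) > 1e-6    # all ten are distinct
```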
3,714,995
<blockquote> <p>Use the shell method to find the volume of the solid generated by revolving the region bounded by <span class="math-container">$$y=\sqrt{x},\quad y=\frac{x-3}{2},\quad y=0$$</span> about the <span class="math-container">$x$</span>-axis.</p> </blockquote> <p>What I tried:</p> <p><a href="https://i.stack.imgur.com/4D4Jw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4D4Jw.jpg" alt="enter image description here"></a></p> <p>Solving the two given curves simultaneously: <span class="math-container">$$\sqrt{x}=\frac{x-3}{2}\Longrightarrow x^2-10x+9=0$$</span></p> <p>We have <span class="math-container">$x=1$</span> (invalid) and <span class="math-container">$x=9$</span> (valid).</p> <p>Putting <span class="math-container">$x=9$</span> in <span class="math-container">$y=\sqrt{x}$</span>, we have <span class="math-container">$y=3$</span></p> <p>Now the volume of the solid formed by rotation about the <span class="math-container">$x$</span>-axis is </p> <p><span class="math-container">$$=\int^{9}_{0}2\pi y\bigg(y^2-2y-3\bigg)dy$$</span></p> <p>Is my volume integral right? If not, then how do I solve it? Help me please.</p>
Sumanta
591,889
<p>Since <span class="math-container">$A$</span> is self-adjoint, all eigenvalues of <span class="math-container">$A$</span> are real. Also, <span class="math-container">$\lambda$</span> is an eigenvalue of <span class="math-container">$A$</span> implies <span class="math-container">$Av=\lambda v$</span> for some non-zero <span class="math-container">$v$</span>, i.e. <span class="math-container">$v=A^kv=\lambda^kv$</span>. So, <span class="math-container">$\lambda^k=1\implies \lambda=\pm 1$</span>.</p> <p>Now, <span class="math-container">$0=A^k-I=\prod_j(A -\xi_jI)$</span>, where <span class="math-container">$\xi_j$</span> are <span class="math-container">$k$</span>-th roots of unity. Since, <span class="math-container">$\pm 1$</span> are the only eigenvalues of <span class="math-container">$A$</span>, we have <span class="math-container">$(A-\xi_j I)u\not=0$</span> for any non-zero vector <span class="math-container">$u$</span> if <span class="math-container">$\xi_j\not=\pm 1$</span>. In other words, <span class="math-container">$\prod_{\xi_j\not=\pm 1}(A-\xi_jI)$</span> is injective. But, <span class="math-container">$\prod_j(A -\xi_jI)=0$</span>, so <span class="math-container">$A^2=I$</span>.</p> <p><span class="math-container">$\textbf{Edit:---}$</span> So, let <span class="math-container">$T:=(A^2-I)\prod_{\xi_j\not=\pm 1}(A-\xi_jI)$</span>. Then, <span class="math-container">$T=0$</span>. Note that either <span class="math-container">$A-I$</span> or <span class="math-container">$A+I$</span> or both are factors of <span class="math-container">$\prod_{j}(A-\xi_jI)=0$</span>. Also, let <span class="math-container">$S=\prod_{\xi_j\not=\pm 1}(A-\xi_jI)$</span>, then, <span class="math-container">$T=S(A^2-I)=(A^2-I)S$</span>, and <span class="math-container">$S$</span> is injective. Therefore, for any <span class="math-container">$v\not=0$</span>, we have <span class="math-container">$0=0(v)=T(v)=S\big((A^2-I)v\big)$</span>. 
If <span class="math-container">$(A^2-I)v=0$</span> we are done, but if <span class="math-container">$w:=(A^2-I)v\not=0$</span>, then <span class="math-container">$S(w)=0$</span>, which contradicts the fact that <span class="math-container">$S$</span> is injective.</p>
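<p>A tiny numeric illustration of the statement (not a proof, and in pure Python for simplicity): a symmetric matrix satisfying <span class="math-container">$A^6=I$</span> has eigenvalues <span class="math-container">$\pm1$</span> by the argument above, so it already satisfies <span class="math-container">$A^2=I$</span>.</p>

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, e):
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(e):
        R = matmul(R, A)
    return R

A = [[0, 1], [1, 0]]         # symmetric, eigenvalues +1 and -1
I2 = [[1, 0], [0, 1]]
assert matpow(A, 6) == I2    # A^6 = I ...
assert matpow(A, 2) == I2    # ... and indeed A^2 = I already
```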
33,303
<p>Is it possible to attach certain pieces of code to certain controls in a Manipulate? For example, consider the following Manipulate</p> <pre><code>Manipulate[ data = Table[function[x], {x, -Pi*10, Pi*10, Pi/1000}]; ListPlot[{x, data}, PlotRange -&gt; {{start, stop}, Automatic}] , {function, {Sin, Cos, Tan}} , {start, 1, Length[data]} , {{stop, 300}, 1, Length[data]} ] </code></pre> <p>Generation of the data is expensive but it only needs to be done if I change function. So, I'd like the line</p> <pre><code>data = Table[function[x], {x, -Pi*10, Pi*10, Pi/1000}]; </code></pre> <p>to only run when I change the function control...i.e. I want to attach that line of code to the 'function' control. As it stands, the data is generated when I move the plot range too which is not what I want. </p> <p><img src="https://i.stack.imgur.com/OHJeN.png" alt="enter image description here"></p>
rm -rf
5
<p>In such cases, you can get better flexibility by switching to a <code>DynamicModule</code> and building up the GUI yourself. Then, you can pull the data generating step out of the plotting dynamic, so that the latter can be manipulated freely without regenerating the data. </p> <pre><code>DynamicModule[{function = Sin, start, stop = 300, x = Range[-10 Pi, 10 Pi, Pi/100]}, Dynamic@With[{data = function[x], s = Spacer@10, f = Print@10}, Panel@Column[{ Row[{"function", s, Control[{function, {Sin, Cos, Tan}}]}], Row[{"start", s, Slider[Dynamic@start, {1, Length@data}]}], Row[{"stop", s, Slider[Dynamic@stop, {1, Length@data}]}], Dynamic@ListPlot[data, PlotRange -&gt; {{start, stop}, Automatic}, ImageSize -&gt; 400, Background -&gt; White] }] ] ] </code></pre> <p><img src="https://i.stack.imgur.com/DuvsX.png" alt=""></p> <p>Note that the <code>f = Print@10</code> is there just to observe evaluation of <code>data</code>. You can check for yourself that nothing is printed when you move the sliders and prints only when the tabs are changed.</p>
803,800
<p>Define a sequence $(a_n)_{n = 1}^{\infty}$ of real numbers by $a_n = \sin(n)$. This sequence is bounded (by $\pm1$), and so by the <a href="http://en.wikipedia.org/wiki/Bolzano%E2%80%93Weierstrass_theorem" rel="noreferrer">Bolzano-Weierstrass Theorem</a>, there exists a convergent subsequence.</p> <p><strong>My question is this:</strong> Is there any way to construct an explicit convergent subsequence?</p> <p>Naïvely, I tried considering the subsequence $$\sin(3), \,\sin(31), \,\sin(314), \,\sin(3141),\, \sin(31415),\, \dots$$ hoping it would converge to $0$, but I was unable to prove this (and I think it's probably not even true).</p>
Stijn Hanson
145,554
<p>This isn't a very analytic way of doing it, but consider the continued fraction expansion $\pi = [a_0; a_1, a_2, \ldots]$. Generally we know that if $p_n/q_n$ are the convergents to $\pi$, then $|q_n\pi - p_n| \leq 1/q_n$, so the $p_n$ are the integers closest to being multiples of $\pi$. Hence $|\sin(p_n)| = |\sin(p_n - q_n\pi)| \leq |p_n - q_n\pi| \to 0$ as $n \rightarrow \infty$, meaning that this is an explicit construction of the sort that you're after.</p>
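<p>This can be made concrete with the first few convergents of $\pi$ (hard-coded below, since extracting them from floating point is delicate). The Python sketch checks that each convergent $p/q$ satisfies $|q\pi - p| &lt; 1/q$ and that $|\sin p|$ is correspondingly small:</p>

```python
import math

# First continued-fraction convergents p/q of pi: 3, 22/7, 333/106, 355/113, ...
convergents = [(3, 1), (22, 7), (333, 106), (355, 113), (103993, 33102)]

for p, q in convergents:
    assert abs(q * math.pi - p) < 1 / q        # convergent quality bound
    # |sin p| = |sin(p - q*pi)| <= |p - q*pi|, so sin(p_n) -> 0 along this subsequence
    assert abs(math.sin(p)) < 1 / q
```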
914,440
<p>From its definition, I feel topology is just a principle for defining "open sets" on a space (in other words, just a tool to expand the concept of an open set so that we can get new kinds of open sets). But in practical cases we mostly consider Euclidean space, and the traditional notion of an open set in Euclidean space works pretty well, while these new kinds of open sets seem to have no common use. So why do we need to expand the concept of open sets? What's the motivation behind it? Thanks.</p>
GWilliams
169,969
<p>I think your idea that topology is just to define open sets misses the point. I interpret the point of topology as primarily attempting to generalize the concept of a continuous function, and open sets are often the most direct pedagogical route there. </p> <p>Further, one often does want to leave Euclidean space. For example, we often want to consider spaces of continuous functions where the topology (say, pointwise convergence) may not even be metrizable! Topological spaces that are not first countable, and thus not metrizable, also arise quite naturally in functional analysis.</p> <p>Even at times when one can fundamentally embed the space being worked on into some Euclidean space, this is often a cumbersome process adding on unnecessary baggage. </p>
306,744
<p>So the definition of continuity is: $\forall$ $\epsilon \gt 0$ $\exists$ $\delta \gt 0:|x-t|\lt \delta \implies |f(x)-f(t)|\lt \epsilon$. However, I get confused when I think of it this way because it first talks about the $\epsilon$ and then it talks of the $\delta$ condition. Would it be equivalent to say: $\forall$ $\delta \gt 0$ $\exists$ $\epsilon \gt0$ $:|x-t|\lt \delta \implies|f(x)-f(t)|\lt \epsilon$? I guess what I'm asking is whether there is a certain order that proofs or more formal statements need to follow. I know I only changed the place where I said there is a $\delta$, but is that permissible in a "formal" way of writing?</p>
gnometorule
21,386
<p>Let $$f(x) := \begin{cases} 1, &amp; x\in \mathbb{Q} \\ 0, &amp; x\in \mathbb{R} - \mathbb{Q}, \\ \end{cases}$$ a very discontinuous function. Then $\epsilon := 2$ will do, for any $x, t, \delta &gt;0$ you choose. So this cannot be equivalent. </p>
4,272,214
<h2>The Equation</h2> <p>How can I analytically show that there are <strong>no real solutions</strong> for <span class="math-container">$\sqrt[3]{x-3}+\sqrt[3]{1-x}=1$</span>?</p> <h2>My attempt</h2> <p>With <span class="math-container">$u = -x+2$</span></p> <p><span class="math-container">$\sqrt[3]{u-1}-\sqrt[3]{u+1}=1$</span></p> <p>Raising to the power of <span class="math-container">$3$</span></p> <p><span class="math-container">$$(u+1)^{2/3}(u-1)^{1/3} - (u+1)^{1/3}(u-1)^{2/3}=1\\(u+1)^{1/3}(u^2-1)^{1/3} - (u-1)^{1/3}(u^2-1)^{1/3}=1\\(u^2-1)^{1/3}\cdot\boxed{\left[(u+1)^{1/3}-(u-1)^{1/3}\right]}=1$$</span></p> <p>Raising to the power of <span class="math-container">$3$</span>:</p> <p><span class="math-container">$$(u^2-1)\cdot\left[3(u+1)^{1/3}(u-1)^{2/3}-3(u+1)^{2/3}(u-1)^{1/3}+2\right]=1\\(u^2-1)\cdot\left[3(u^2-1)^{1/3}(u-1)^{1/3}-3(u+1)^{1/3}(u^2-1)^{1/3}+2\right]=1$$</span></p> <p>Thus: <span class="math-container">$(u^2-1)\cdot\left[3(u^2-1)^{1/3}\boxed{\left[(u-1)^{1/3}-(u+1)^{1/3}\right]}+2\right]=1$</span></p> <p>And with <span class="math-container">$y = (u-1)^{1/3}-(u+1)^{1/3}$</span>, we can say that:</p> <p><span class="math-container">$y^3=-3y(u^2-1)^{1/3}+2$</span></p> <p>I am stuck... Any tips for this radical equation?</p>
B. Goddard
362,009
<p>Certainly for any real <span class="math-container">$x$</span>,</p> <p><span class="math-container">$$x-3 &lt; x-1$$</span></p> <p>so</p> <p><span class="math-container">$$\sqrt[3]{x-3} &lt; \sqrt[3]{x-1}$$</span></p> <p>since cube root is a monotonically increasing function. So</p> <p><span class="math-container">$$\sqrt[3]{x-3} + \sqrt[3]{1-x} = \sqrt[3]{x-3} - \sqrt[3]{x-1} &lt; 0&lt;1.$$</span></p>
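<p>A numeric spot-check of this conclusion (illustration only, in Python; <code>copysign</code> is used to get the real cube root of a negative number): the left-hand side stays strictly negative on a wide grid, so it never reaches <span class="math-container">$1$</span>.</p>

```python
import math

def cbrt(t):
    # real cube root, valid for negative t as well
    return math.copysign(abs(t) ** (1.0 / 3.0), t)

xs = [i / 10.0 for i in range(-500, 501)]
lhs = [cbrt(x - 3) + cbrt(1 - x) for x in xs]
assert max(lhs) < 0.0   # always negative, hence never equal to 1
```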
2,971,980
<p>Show that if <span class="math-container">$0&lt;b&lt;1$</span> it follows that <span class="math-container">$$\lim_{n\to\infty}b^n=0$$</span> I have no idea how to express <span class="math-container">$N$</span> in terms of <span class="math-container">$\varepsilon$</span>. I tried using logarithms but I don't see how to find <span class="math-container">$N$</span> from this.</p>
user
505,767
<p>Recall that by the definition <span class="math-container">$\lim_{n\to\infty}a_n=L\in \mathbb{R}$</span> means that</p> <p><span class="math-container">$$\forall \epsilon&gt;0 \quad \exists \bar n \quad \forall n&gt;\bar n \quad |a_n-L|&lt;\epsilon$$</span></p> <p>In that case, note that for any <span class="math-container">$\epsilon&gt;0$</span></p> <p><span class="math-container">$$|b^n-0|&lt;\epsilon \iff b^n&lt;\epsilon \iff n&gt;\frac{ \log \epsilon}{\log b}$$</span></p> <p>then by the definition <span class="math-container">$b^n\to 0$</span>.</p>
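<p>The explicit choice <span class="math-container">$\bar n=\lceil \log \epsilon/\log b\rceil$</span> can be exercised directly (a Python sketch, illustration only):</p>

```python
import math

def N_for(b, eps):
    # any n > N satisfies b^n < eps, since log b < 0 flips the inequality
    return max(0, math.ceil(math.log(eps) / math.log(b)))

for b in (0.1, 0.5, 0.99):
    for eps in (1e-1, 1e-6, 1e-12):
        N = N_for(b, eps)
        assert b ** (N + 1) < eps
```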
843,763
<blockquote> <p>Let $x\in \mathbb{R}$ be an irrational number. Define $X=\{nx-\lfloor nx\rfloor: n\in \mathbb{N}\}$. Prove that $X$ is dense in $[0,1)$. </p> </blockquote> <p>Can anyone give a hint for solving this problem? I tried contradiction but could not reach a proof.</p> <p>I spent part of the day studying this question <a href="https://math.stackexchange.com/questions/450493/multiples-of-an-irrational-mod-1-are-dense">Positive integer multiples of an irrational mod 1 are dense</a> and its answers. Only one answer is clear and gives clues for solving the problem: the first one. However, that answer does not answer the question directly, nor does the proof follow from it. </p> <p>That answer also has some mistakes: it uses the identity $[(k_1-k_2)\alpha]=[k_1\alpha]-[k_2\alpha]$, which is not true. Consider $k_1=3, k_2=1, \alpha=\sqrt{2}$; we have $[(k_1-k_2)\alpha]=2\not= 3=[k_1\alpha]-[k_2\alpha] $. We can only assure that $[k_2\alpha]-[k_1\alpha]-1\leq [(k_2-k_1)\alpha]\leq[k_2\alpha]-[k_1\alpha]$. </p> <p>The answerer said something interesting about additive subgroups of $\mathbb{R}$, but unfortunately the set $X=\{nx-[nx] : n\in \mathbb{N} \}$ is not a subgroup. Considering the additive subgroup $G=\langle X \rangle$, if we prove part (a) of the link, we get that $G$ is indeed dense in $\mathbb{R}$, but we cannot conclude that $X$ is dense in $[0,1)$.</p> <p>I think this problem has not been solved.</p> <p>Thanks!</p>
user411953
411,953
<p>$f \colon x \mapsto x-\lfloor x \rfloor$ </p> <p>$F=\{f(nx): n \in \Bbb{N}\}$</p> <p>$F \neq \emptyset$ and every element of $F$ is $\ge 0$, so $d=\inf(F)$ exists.</p> <p>We can prove that $d=0$ (suppose $d \gt 0$ and derive a contradiction).</p> <p>Hence for every $y \gt 0$ there exists $n \in \Bbb{N}$ with $0 \lt f(nx) \lt y$. </p> <p>Let $a,b \in [0;1[$ with $b-a \gt 0$; then there exists $n \in \Bbb{N}$ with $0 \lt f(nx) \lt b-a$.</p> <p>There exists $p \in \Bbb{N}$ with $pf(nx) \gt a$ (1). If we take $p$ to be the smallest number which verifies (1), we have $(p-1)f(nx) \le a$, and therefore $pf(nx) \le a+f(nx) \lt b$. </p> <p>Therefore $pf(nx) \in [a;b]$, and since $pf(nx) \lt b \lt 1$ we have $pf(nx)=f(pnx)$. </p> <p>From this we can conclude the density. </p> <p>First time I use this way of writing; I hope I have been clear.</p>
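<p>The statement is also easy to probe numerically. This small sketch (my own illustration, with $x=\sqrt2$, not part of the proof) searches for a multiple whose fractional part lands in a given subinterval:</p>

```python
import math

# Find the first n with a < {n*x} < b, where {t} = t - floor(t) is the
# fractional part. Density says such an n exists for any 0 <= a < b < 1.
def first_hit(x, a, b, limit=10**6):
    for n in range(1, limit):
        frac = n * x - math.floor(n * x)
        if a < frac < b:
            return n, frac
    return None

n, frac = first_hit(math.sqrt(2), 0.123, 0.124)
assert 0.123 < frac < 0.124   # a multiple of sqrt(2) mod 1 hit the interval
```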
3,481,531
<p>I was solving a problem in which there are 4 points and we need to check whether they form a square. Now, if all 4 points are the same, is that a square or not? My initial thought was that it is a square of side zero, but I wanted to clarify whether my understanding is right. </p>
Fred
380,717
<p>If <span class="math-container">$a \le b$</span> and <span class="math-container">$c \le d$</span> then we have two cases:</p> <ol> <li><p><span class="math-container">$a &lt; b$</span> and <span class="math-container">$c&lt; d$</span>, then <span class="math-container">$[a,b] \times [c,d]$</span> is a rectangle (in <span class="math-container">$ \mathbb R^2$</span>).</p></li> <li><p><span class="math-container">$a= b$</span> or <span class="math-container">$c = d$</span>, then <span class="math-container">$[a,b] \times [c,d]$</span> is a degenerate rectangle (in <span class="math-container">$ \mathbb R^2$</span>).</p></li> </ol>
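<p>In code, the choice between the two cases becomes an explicit convention. Here is a sketch of the check being discussed (the function name and the <code>allow_degenerate</code> flag are my own illustration, assuming exact coordinates):</p>

```python
# Square test via the six pairwise squared distances: a (non-degenerate)
# square has 4 equal sides and 2 equal diagonals with diag^2 = 2 * side^2.
def is_square(pts, allow_degenerate=False):
    d2 = sorted((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                for i, p in enumerate(pts) for q in pts[i + 1:])
    side, diag = d2[0], d2[5]
    if side == 0:
        # All four points coincide (or some do): degenerate case, by convention.
        return allow_degenerate and diag == 0
    return (d2[0] == d2[1] == d2[2] == d2[3]
            and d2[4] == d2[5]
            and diag == 2 * side)

assert is_square([(0, 0), (1, 0), (1, 1), (0, 1)])
assert not is_square([(0, 0), (0, 0), (0, 0), (0, 0)])
assert is_square([(0, 0), (0, 0), (0, 0), (0, 0)], allow_degenerate=True)
```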
3,518,232
<p>I am given a triangle <span class="math-container">$ABC$</span> inscribed in a circle <span class="math-container">$\omega$</span>. The tangent lines to <span class="math-container">$\omega$</span> at <span class="math-container">$B$</span> and <span class="math-container">$C$</span> meet at <span class="math-container">$T$</span>. Point <span class="math-container">$S$</span> lies on ray <span class="math-container">$BC$</span> such that <span class="math-container">$AS$</span> is perpendicular to <span class="math-container">$AT$</span>. Points <span class="math-container">$B_1$</span> and <span class="math-container">$C_1$</span> lie on ray <span class="math-container">$ST$</span> (with <span class="math-container">$C_1$</span> between <span class="math-container">$B_1$</span> and <span class="math-container">$S$</span>) such that <span class="math-container">$B_{1}T=BT=C_{1}T$</span>. I need to prove that the triangles <span class="math-container">$ABC$</span> and <span class="math-container">$AB_1C_1$</span> are similar. </p>
Sunaina Pati
807,973
<p>Probably a solution different from the previous answer, hence sharing it!</p> <p><strong>My Proof</strong>: Beautiful problem indeed!</p> <p>Since <span class="math-container">$T$</span> is the point of intersection of the tangents to <span class="math-container">$\omega$</span> at <span class="math-container">$B$</span> and <span class="math-container">$C$</span>, we get that <span class="math-container">$BT=CT$</span>.</p> <p>Now, using the conditions given in the problem, we get that <span class="math-container">$CT=B_{1}T=C_{1}T$</span>.</p> <p>Hence <span class="math-container">$B$</span>, <span class="math-container">$C$</span>, <span class="math-container">$C_1$</span>, <span class="math-container">$B_1$</span> all lie on a circle centered at <span class="math-container">$T$</span>, so <span class="math-container">$BCC_1B_1$</span> is a cyclic quadrilateral.</p> <p>Now, define <span class="math-container">$D$</span> as the midpoint of <span class="math-container">$BC$</span>. Hence <span class="math-container">$TD\perp BC \implies TD \perp DS$</span>.</p> <p>Since <span class="math-container">$AS\perp AT$</span> (given in the question), we get that <span class="math-container">$TDAS$</span> is cyclic.</p> <p>Now, before proceeding further, we state a lemma:</p> <p><strong>Lemma</strong>: Let <span class="math-container">$T$</span> be the intersection of the tangents to <span class="math-container">$(ABC)$</span> at <span class="math-container">$B$</span> and <span class="math-container">$C$</span>. 
Then the line <span class="math-container">$AT$</span> is a symmedian, where a symmedian is the isogonal of a median in a triangle.</p> <p><strong>Proof</strong>: It can be proved directly using the sine rule!</p> <p>Hence, by the above lemma, since <span class="math-container">$AT$</span> is the <span class="math-container">$A$</span>-symmedian, we have <span class="math-container">$\angle BAD=\angle TAC$</span>.</p> <p><a href="https://i.stack.imgur.com/6M28n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6M28n.png" alt="enter image description here" /></a></p> <p>Now, we are ready for the angle chase!</p> <p>Let <span class="math-container">$\angle BAD=\angle TAC= \theta$</span>, <span class="math-container">$\angle BAC= \alpha$</span>, <span class="math-container">$\angle ABC= B $</span>.</p> <p><strong>Claim</strong>: <span class="math-container">$ACC_1S$</span> is cyclic.</p> <p><strong>Proof</strong>: By the tangent-chord angle theorem, we get that <span class="math-container">$\alpha= \angle CBT= \angle BCT \implies \angle DTC= 90-\alpha$</span>.</p> <p>Also, since <span class="math-container">$AS\perp AT$</span>, we get that <span class="math-container">$\angle CAS= 90-\theta$</span>.</p> <p>Also, since <span class="math-container">$TDAS$</span> is cyclic, we get <span class="math-container">$\angle DAT= \alpha -2\theta= \angle DST \implies \angle DTS = 90- (\alpha -2\theta) \implies \angle CTC_1=2\theta \implies \angle TC_1C=90-\theta \implies \angle CC_1S=90+\theta$</span>.</p> <p>Hence we have <span class="math-container">$\angle CAS= 90-\theta$</span> and <span class="math-container">$\angle CC_1S=90+\theta$</span>, opposite angles summing to <span class="math-container">$180$</span>. 
Hence <span class="math-container">$ACC_1S$</span> is cyclic.</p> <p><strong>Claim</strong>: <span class="math-container">$BB_1AS$</span> is cyclic.</p> <p><strong>Proof</strong>: Note that <span class="math-container">$\angle BAS= 90+ \alpha - \theta \implies \angle ASB= 90-( B+\alpha - \theta) $</span>.</p> <p>Using the previous observation that <span class="math-container">$\angle DST = \alpha -2\theta$</span>, we get that <span class="math-container">$\angle AST=\angle ASB_1= 90- (B+\theta)$</span>.</p> <p>Again, using the previous observations that <span class="math-container">$\angle BTC= 180-2\alpha$</span> and <span class="math-container">$\angle CTC_1=2\theta$</span>, we get that <span class="math-container">$ \angle BTB_1= 2 \alpha -2\theta \implies \angle B_1BT=90-(\alpha - \theta )$</span>.</p> <p>Hence we get that <span class="math-container">$\angle B_1BA= 90+(B+\theta)$</span>.</p> <p>Now, since <span class="math-container">$\angle ASB_1= 90- (B+\theta)$</span> and <span class="math-container">$\angle B_1BA= 90+(B+\theta)$</span> sum to <span class="math-container">$180$</span>, we get that <span class="math-container">$BB_1AS$</span> is cyclic.</p> <p><a href="https://i.stack.imgur.com/frBJ9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/frBJ9.png" alt="enter image description here" /></a></p> <p>Now, the most beautiful part of this problem: spiral similarity!</p> <p>Since <span class="math-container">$BB_1AS$</span> is cyclic and <span class="math-container">$ACC_1S$</span> is cyclic, there is a spiral similarity centered at <span class="math-container">$A$</span> taking <span class="math-container">$\Delta ABC$</span> to <span class="math-container">$\Delta AB_1C_1$</span>.</p> <p>Hence <span class="math-container">$\Delta ABC \sim \Delta AB_1C_1$</span>. 
And we are done!</p> <p><a href="https://i.stack.imgur.com/9V2hw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9V2hw.png" alt="enter image description here" /></a> Note: Can someone please verify? I am very new to spiral similarity (I just learned it today).</p> <p>Thanks in advance.</p>
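<p>Since the poster asks for verification: not a substitute for checking the synthetic argument, but the final claim can be verified numerically. The sketch below (the concrete triangle and all coordinates are my own choices) constructs $T$, $S$, $B_1$, $C_1$ on the unit circle as circumcircle and checks that the three side ratios of $\Delta AB_1C_1$ against $\Delta ABC$ agree, which is exactly the asserted similarity:</p>

```python
import math

def pt(deg):
    return (math.cos(math.radians(deg)), math.sin(math.radians(deg)))

def dot(p, q):
    return p[0] * q[0] + p[1] * q[1]

def sub(p, q):
    return (p[0] - q[0], p[1] - q[1])

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# An arbitrary scalene triangle on the unit circle (the circumcircle).
A, B, C = pt(80), pt(210), pt(330)

# T = intersection of the tangents at B and C: the tangent at a point P of
# the unit circle is P.X = 1, which gives T = (B + C) / (1 + B.C).
c = dot(B, C)
T = ((B[0] + C[0]) / (1 + c), (B[1] + C[1]) / (1 + c))

# S on line BC with AS perpendicular to AT: solve (B + t(C-B) - A).(T-A) = 0.
t = -dot(sub(B, A), sub(T, A)) / dot(sub(C, B), sub(T, A))
S = (B[0] + t * (C[0] - B[0]), B[1] + t * (C[1] - B[1]))

# B1, C1 on ray ST with TB1 = TB = TC1, and C1 between B1 and S.
r = dist(T, B)
L = dist(S, T)
d = ((T[0] - S[0]) / L, (T[1] - S[1]) / L)   # unit vector from S toward T
C1 = (T[0] - r * d[0], T[1] - r * d[1])      # the one closer to S
B1 = (T[0] + r * d[0], T[1] + r * d[1])

# Equal side ratios under A <-> A, B <-> B1, C <-> C1 means similarity.
ratios = (dist(A, B1) / dist(A, B),
          dist(A, C1) / dist(A, C),
          dist(B1, C1) / dist(B, C))
assert max(ratios) - min(ratios) < 1e-9
```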
3,152,532
<p>What is the meaning of <span class="math-container">$f^2(x)$</span>? There seems to be confusion about its interpretation.</p> <p>Is <span class="math-container">$f^2(x)$</span> the same as <span class="math-container">$(f(x))^2$</span> or <span class="math-container">$f\circ f$</span>?</p>
José Carlos Santos
446,262
<p>It depends upon the context. It can be both. But if <span class="math-container">$f$</span> is a function from some set <span class="math-container">$S$</span> into <span class="math-container">$\mathbb R$</span>, then it can only be <span class="math-container">$\bigl(f(x)\bigr)^2$</span>. And if it is a function from a set <span class="math-container">$S$</span> into itself and if that set has no multiplication defined in it, then it can only mean <span class="math-container">$f\circ f$</span>.</p>
3,152,532
<p>What is the meaning of <span class="math-container">$f^2(x)$</span>? There seems to be confusion about its interpretation.</p> <p>Is <span class="math-container">$f^2(x)$</span> the same as <span class="math-container">$(f(x))^2$</span> or <span class="math-container">$f\circ f$</span>?</p>
David
651,991
<p>I would interpret it as <span class="math-container">$(f(x))^2$</span> whenever that makes sense. If <span class="math-container">$(f(x))^2$</span> is not defined (for example, if <span class="math-container">$f(x)$</span> is a vector), then I would understand <span class="math-container">$f\circ f$</span>.</p> <p>Anyway, I would make sure the meaning is consistent throughout my work, never mixing the two.</p> <p>In topics such as linear algebra, I will always mean <span class="math-container">$f \circ f$</span>. If I am talking about calculus, I would expect you to understand <span class="math-container">$(f(x))^2$</span>.</p>
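<p>The two readings are easy to contrast in code (an illustrative sketch with a made-up <code>f</code>):</p>

```python
# A made-up function to compare the two interpretations of f^2(x).
def f(x):
    return x + 1

def square_of_value(x):
    """(f(x))**2: apply f, then square the result."""
    return f(x) ** 2

def composition(x):
    """(f o f)(x): apply f twice."""
    return f(f(x))

assert square_of_value(3) == 16   # (3 + 1)**2 = 16
assert composition(3) == 5        # f(f(3)) = f(4) = 5
```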
4,369,480
<p>It's commonly said that in VAEs we use the reparameterization trick because &quot;we can't backpropagate through a stochastic node&quot;. <a href="https://i.stack.imgur.com/ED72y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ED72y.png" alt="enter image description here" /></a></p> <p>It makes sense from the picture, but I find it hard to understand exactly what it means and why. Let's say X ~ N(u, 1).</p> <p>We want to compute <span class="math-container">$$\frac{d X}{d u}$$</span> which is not possible because the sampling operation is non-differentiable. That is, we don't know how changing u a little bit will affect the sample X we got.</p> <p>However, consider the MLE for a Gaussian. We are trying to maximize the following quantity:</p> <p><span class="math-container">$$\sum_{i=1}^N \log p(X_i;u)$$</span> for which the derivative <span class="math-container">$$\frac{d \log p(X_i ; u)}{d u}$$</span> can be easily calculated. My confusion comes from the fact that <span class="math-container">$$\frac{d \log p(X_i ; u)}{d u} = \frac{d \log p(X_i ; u)}{d X_i} \frac{d X_i}{d u}$$</span> by the chain rule. If we can't compute <span class="math-container">$\frac{d X_i}{d u}$</span>, why can we compute <span class="math-container">$\frac{d \log p(X_i ; u)}{d u}$</span>?</p>
Joe
693,577
<p>I think you are confused about back propagation. You never need to take the gradient of the input with respect to anything (because there is no layer BEFORE the input), nor do you need to make assumptions about the distribution of the input.</p> <p>The 'reparametrization trick' makes some assumption about the parametric form of the distribution of the latent vector <span class="math-container">$z$</span>, and represents sampling from that latent space as the output of some function of the parameter values and a noise vector. That allows you to backprop through the latent vector <span class="math-container">$z$</span> by taking the gradient of it with respect to the parameter values.</p> <p>For example, if <span class="math-container">$z$</span> is assumed to be multivariate Gaussian, then <span class="math-container">$z_i = \mu_i + \sigma_i \epsilon_i$</span>, where <span class="math-container">$\epsilon_i \sim N(0,1)$</span>, and <span class="math-container">$$\frac{\partial z_i}{\partial \mu_i} = 1$$</span> <span class="math-container">$$\frac{\partial z_i}{\partial \sigma_i} = \epsilon_i$$</span> The vectors <span class="math-container">$\mu$</span> and <span class="math-container">$\sigma$</span> are learned, i.e. they are connected to the previous layer in the network, and you can backprop through them in the usual way. The random noise <span class="math-container">$\epsilon$</span> is drawn from a fixed distribution, not learned, so you do not backprop through those nodes (which is why it is orange in your diagram).</p>
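<p>A minimal numerical sketch of this point (the objective <span class="math-container">$\mathbb{E}[z^2]$</span> and all numbers below are my own illustration, not part of any VAE): gradients with respect to <span class="math-container">$\mu$</span> and <span class="math-container">$\sigma$</span> flow through the deterministic map <span class="math-container">$z=\mu+\sigma\epsilon$</span>, while <span class="math-container">$\epsilon$</span> is just fixed noise. For this objective the exact gradients are <span class="math-container">$2\mu$</span> and <span class="math-container">$2\sigma$</span>, which the Monte Carlo estimates should approach:</p>

```python
import random

random.seed(0)
mu, sigma = 1.5, 0.8
N = 200_000
eps = [random.gauss(0.0, 1.0) for _ in range(N)]   # noise: fixed, not learned

# Sampling rewritten as a deterministic, differentiable map of (mu, sigma).
z = [mu + sigma * e for e in eps]

# Chain rule through the map: d(z^2)/d mu = 2z * 1, d(z^2)/d sigma = 2z * eps.
grad_mu = sum(2 * zi for zi in z) / N                        # ~ d E[z^2]/d mu
grad_sigma = sum(2 * zi * e for zi, e in zip(z, eps)) / N    # ~ d E[z^2]/d sigma

assert abs(grad_mu - 2 * mu) < 0.1       # exact value: 2*mu = 3.0
assert abs(grad_sigma - 2 * sigma) < 0.1 # exact value: 2*sigma = 1.6
```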