qid | question | author | author_id | answer |
---|---|---|---|---|
199,199 |
<p>Suppose a box contains 5 white balls and 5 black balls.</p>
<p>If you want to extract a ball and then another:</p>
<p>What is the probability of getting a black ball and then a black one?</p>
<p>I think that this is the answer:</p>
<p>Let $A:$ get a black ball in the first extraction, $B:$ get a black ball in the second extraction. Then: $P(A)=\frac{5}{10}$ and
$$P(B|A)=\frac{P(A\cap B)}{P(A)}=\frac{\frac{4}{9}}{\frac{1}{2}}=\frac{8}{9}$$</p>
<p>Is this correct? </p>
<p>Now, if you want to extract two balls at the same time:
what is the probability that both are black?
Is it exactly the same as the probability in the first question?
Why?</p>
<p>Thanks for your help.</p>
|
André Nicolas
| 6,312 |
<p>It is good to check an answer for plausibility. <strong>Given</strong> that the first ball is black, surely the probability that the second ball is black cannot be $\dfrac{8}{9}$!</p>
<p>In this case, it is the <strong>conditional probability</strong> that is obvious. Note that $\Pr(B|A)=\dfrac{4}{9}$. And what we want is $\Pr(A\cap B)$. We get
$$\Pr(A\cap B)=\Pr(B|A)\Pr(A)=\frac{4}{9}\cdot\frac{1}{2}.$$</p>
<p>As to your question about two <em>consecutive</em> picks versus two <em>simultaneous</em> picks, there is no effective difference. If Alicia picks two balls from the urn, what difference can it make whether she touches the two balls simultaneously, or with a delay of a millisecond?</p>
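<p>Both readings are small enough to confirm by brute-force enumeration; here is a quick sketch in Python that computes the probability for ordered (consecutive) draws and unordered (simultaneous) draws and checks they agree:</p>

```python
from fractions import Fraction
from itertools import combinations, permutations

balls = ['B'] * 5 + ['W'] * 5  # 5 black, 5 white

# Consecutive draws: all ordered pairs of distinct positions
ordered = list(permutations(range(10), 2))
p_ordered = Fraction(sum(balls[i] == balls[j] == 'B' for i, j in ordered),
                     len(ordered))

# Simultaneous draws: all unordered pairs
unordered = list(combinations(range(10), 2))
p_unordered = Fraction(sum(balls[i] == balls[j] == 'B' for i, j in unordered),
                       len(unordered))

assert p_ordered == p_unordered == Fraction(2, 9)
```

<p>Both counts give <span class="math-container">$\frac29$</span>, matching <span class="math-container">$\frac49\cdot\frac12$</span>.</p>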
<p><strong>Remark:</strong> For this kind of problem, using the formal conditional probability machinery is overkill, and may hinder the intuition. </p>
|
1,435,665 |
<p>Matrix $\mathbf{D}$ is a full rank diagonal matrix. $\mathbf{ADA}^T=\mathbf{D}$.</p>
<p>Can we conclude that $\mathbf{A}^2=\mathbf{I}_n$? (edited from $\mathbf{A}=\pm\mathbf{I}$)</p>
<p>It is almost certain that someone has asked this question before, but I cannot find the proper keywords to search for that post.</p>
|
DirkGently
| 88,378 |
<p>By taking the derivative of $y^2 + xy -3x = 5$ with respect to $t$, we get
$$2y\frac{dy}{dt}+x\frac{dy}{dt}+\frac{dx}{dt}y-3\frac{dx}{dt}=0.$$
By substitution,
$$-2-1-\frac{dx}{dt}-3\frac{dx}{dt}=0,$$
implying that
$$\frac{dx}{dt}=-\frac34.$$</p>
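<p>The differentiated identity can be sanity-checked numerically by picking arbitrary smooth test curves <span class="math-container">$x(t)$</span>, <span class="math-container">$y(t)$</span> (the choices below are illustrative, not from the problem) and comparing a finite difference of <span class="math-container">$y^2+xy-3x$</span> against the chain-rule expansion:</p>

```python
import math

def x(t): return math.sin(t)       # arbitrary smooth test curve
def y(t): return math.cos(2 * t)   # arbitrary smooth test curve

def g(t): return y(t)**2 + x(t) * y(t) - 3 * x(t)

t0, h = 0.7, 1e-6
num = (g(t0 + h) - g(t0 - h)) / (2 * h)        # numerical d/dt of y^2 + xy - 3x
dx = (x(t0 + h) - x(t0 - h)) / (2 * h)
dy = (y(t0 + h) - y(t0 - h)) / (2 * h)
sym = 2 * y(t0) * dy + x(t0) * dy + y(t0) * dx - 3 * dx   # chain-rule expansion
assert abs(num - sym) < 1e-6
```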
|
3,506,340 |
<p>I have no idea how to calculate <span class="math-container">$z_x+z_y$</span> at the point <span class="math-container">$\left( \frac{\pi +3}{3}, \frac{\pi+1}{2}\right)$</span>, if <span class="math-container">$z=uv^2$</span> and <span class="math-container">$x=u+\sin v$</span>, <span class="math-container">$y=v+\cos u$</span>. <span class="math-container">$z$</span> is not expressed in terms of <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. Maybe it is meant to be solved as <span class="math-container">$x_u=1$</span> and <span class="math-container">$x_v=\cos v$</span>, <span class="math-container">$y_u=-\sin u$</span> and <span class="math-container">$y_v=1,$</span> then <span class="math-container">$x_u=\cos^2 v$</span> and <span class="math-container">$y_u=-\sin u$</span>. </p>
<p><span class="math-container">$$z_u=v^2 \Rightarrow v^2= \frac{\partial z}{\partial x} \cdot 1+\frac{\partial z}{\partial y} \cdot (-\sin u)$$</span></p>
<p><span class="math-container">$$z_v=2uv \Rightarrow 2uv=\frac{\partial z}{\partial x} \cos v+\frac{\partial z}{\partial y}\cdot 1$$</span></p>
<p><span class="math-container">$$\Rightarrow \frac{\partial z}{\partial x}=v^2+\sin(u) \frac{\partial z}{\partial y}$$</span></p>
<p><span class="math-container">$$\Rightarrow 2uv=\cos(v)\left( v^2+\sin(u)\frac{\partial z}{\partial y} \right)+\frac{\partial z}{\partial y}$$</span></p>
<p><span class="math-container">$$\Rightarrow 2uv-v^2\cos(v)=\frac{\partial z}{\partial y}\left(\sin(u)+1 \right)$$</span></p>
<p><span class="math-container">$$\Rightarrow \frac{\partial z}{\partial y}=\frac{2uv-v^2\cos(v)}{\sin(u)+1}$$</span></p>
<p>I know that what I have done looks just confusing.</p>
<p><span class="math-container">$$v^2=\frac{\partial z}{\partial x}+\frac{\sqrt{3}}{2}\Rightarrow \frac{\partial z}{\partial x}=\frac{\pi^2}{9}-\frac{\sqrt{3}}{2}$$</span></p>
<p><span class="math-container">$$2uv=\frac{\partial z}{\partial y} \Rightarrow \frac{\partial z}{\partial y}=\frac{\pi^2}{18}$$</span></p>
<p><span class="math-container">$$z_x+z_y=\frac{3\pi^2-9\sqrt{3}}{18}$$</span></p>
|
Quanto
| 686,284 |
<p>Differentiate <span class="math-container">$u+\sin v=x$</span> and <span class="math-container">$v+\cos u=y$</span> with respect to <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, respectively,</p>
<p><span class="math-container">$$u_x+\cos v\> v_x=1,\>\>\>\>\>
v_x-\sin u\> u_x=0,\>\>\>\>\>
u_y+\cos v\> v_y = 0,\>\>\>\>\>
v_y-\sin u\> u_y = 1$$</span></p>
<p>Solve the linear equations,</p>
<p><span class="math-container">$$u_x= \frac1{1+\sin u\cos v},\>\>\>\>\>v_x= \frac{\sin u}{1+\sin u\cos v}$$</span>
<span class="math-container">$$u_y= -\frac{\cos v}{1+\sin u\cos v},\>\>\>\>\>v_y= \frac{1}{1+\sin u\cos v}$$</span></p>
<p>Then, from <span class="math-container">$z=uv^2$</span>, evaluate</p>
<p><span class="math-container">$$z_x+z_y =( z_u u_x+z_v v_x)+( z_u u_y+z_v v_y)=\frac{v^2(1-\cos v)+2uv(1+\sin u)}{1+\sin u\cos v}\tag 1$$</span></p>
<p>Given the point <span class="math-container">$(x,y)=(\frac{\pi+3}3,\frac{\pi+1}2)$</span>, set</p>
<p><span class="math-container">$$\frac{\pi+3}3=u+\sin v,\>\>\>\>\>\frac{\pi+1}2 = v+\cos u$$</span></p>
<p>to get <span class="math-container">$(u,v) = (\frac\pi3,\frac\pi2)$</span>. Plug the point into the expression (1) to obtain the result</p>
<p><span class="math-container">$$z_x+z_y =\frac{7+2\sqrt3}{12}\pi^2$$</span></p>
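<p>The closed form can be confirmed numerically: invert the map <span class="math-container">$(u,v)\mapsto(x,y)$</span> with a small 2×2 Newton iteration and take central differences of <span class="math-container">$z=uv^2$</span> (an illustrative verification sketch):</p>

```python
import math

def invert(x, y, u=1.0, v=1.5):
    # Newton iteration for u + sin v = x, v + cos u = y
    for _ in range(50):
        f1 = u + math.sin(v) - x
        f2 = v + math.cos(u) - y
        det = 1 + math.sin(u) * math.cos(v)   # det of Jacobian [[1, cos v], [-sin u, 1]]
        du = (f1 - math.cos(v) * f2) / det
        dv = (math.sin(u) * f1 + f2) / det
        u, v = u - du, v - dv
    return u, v

def z(x, y):
    u, v = invert(x, y)
    return u * v * v

x0, y0 = (math.pi + 3) / 3, (math.pi + 1) / 2
h = 1e-6
zx = (z(x0 + h, y0) - z(x0 - h, y0)) / (2 * h)
zy = (z(x0, y0 + h) - z(x0, y0 - h)) / (2 * h)
expected = (7 + 2 * math.sqrt(3)) * math.pi**2 / 12
assert abs(zx + zy - expected) < 1e-4
```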
|
316,016 |
<p>Could you recommend any approachable books/papers/texts about matroids (maybe a chapter from somewhere)? The ideal reference would contain multiple examples, present some intuitions and keep formalism to a necessary minimum.</p>
<p>I would appreciate any hints or appropriate sources.</p>
|
davidlowryduda
| 9,754 |
<p>Dover has released a cheap version of Welsh's <a href="http://rads.stackoverflow.com/amzn/click/0486474399">Matroid Theory</a> - can't argue with the price. On the other hand, I've heard better things about Oxley's <a href="http://rads.stackoverflow.com/amzn/click/0199603391">Matroid Theory</a>.</p>
<p>(Apparently there is absolutely no creativity in naming books on Matroid Theory)</p>
|
5,363 |
<p>There is something in the definition of the <a href="http://en.wikipedia.org/wiki/Free_product" rel="nofollow noreferrer">free product</a> of two groups that annoys me, and it's this "word" thing:</p>
<blockquote>
<p>If <span class="math-container">$G$</span> and <span class="math-container">$H$</span> are groups, a word in <span class="math-container">$G$</span> and <span class="math-container">$H$</span> is a product of the form</p>
<p><span class="math-container">$$
s_1 s_2 \dots s_m,
$$</span></p>
<p>where each <span class="math-container">$s_i$</span> is either an element of <span class="math-container">$G$</span> or an element of <span class="math-container">$H$</span>. </p>
</blockquote>
<p>So what is this "word" guy? Does it come out of the blue? Does it come from some sort of new operation that I can perform with the two sets <span class="math-container">$G$</span> and <span class="math-container">$H$</span> -in addition to the well-known ones of union, intersection, Cartesian product...?</p>
<p>Fortunately, I think there is nothing new under the sun of set operations: it's easy to realise that words can be identified with elements of some Cartesian product (see below):</p>
<p><span class="math-container">$$
(s_1, s_2, \dots , s_m ) \ .
$$</span></p>
<p>And Cartesian product is a well-established set-theoretical operation.</p>
<p>So I tried to translate the rest of Wikipedia's definition </p>
<blockquote>
<p>Such a word may be <em>reduced</em> using the following operations:</p>
<p>Remove an instance of the identity element (of either <span class="math-container">$G$</span> or <span class="math-container">$H$</span>).
Replace a pair of the form <span class="math-container">$g_1g_2$</span> by its product in <span class="math-container">$G$</span>, or a pair <span class="math-container">$h_1h_2$</span> by its product in <span class="math-container">$H$</span>.</p>
<p>Every reduced word is an alternating product of elements of <span class="math-container">$G$</span> and elements of <span class="math-container">$H$</span>, e.g.</p>
<p><span class="math-container">$$
g_1 h_1 g_2 h_2 \dots g_r h_r.
$$</span></p>
<p>The free product <span class="math-container">$G ∗ H$</span> is the group whose elements are the reduced words in <span class="math-container">$G$</span> and <span class="math-container">$H$</span>, under the operation of concatenation followed by reduction.</p>
</blockquote>
<p>in an elementary set setting. First, consider the set of "unreduced" tuples of elements of <span class="math-container">$G$</span> and <span class="math-container">$H$</span></p>
<p><span class="math-container">$$
U = G \sqcup H \sqcup (G\times G) \sqcup (G\times H) \sqcup (H\times G) \sqcup (H\times H) \sqcup (G\times G \times G) \sqcup \dots
$$</span></p>
<p>More concisely:</p>
<hr>
<p>EDIT:</p>
<p>I think the following formula may be less messy than the one I wrote previously:</p>
<p><span class="math-container">$$
U = \bigsqcup_{r \geq 1} (S_1 \times \cdots \times S_r),
$$</span></p>
<p>where <span class="math-container">$S_i = G$</span> or <span class="math-container">$S_i = H$</span>.</p>
<hr>
<p>So, elements of <span class="math-container">$U$</span> are ordered tuples (<em>unreduced</em> ones)</p>
<p><span class="math-container">$$
(s_1, s_2, \dots , s_m),
$$</span></p>
<p>where each <span class="math-container">$s_i$</span> is either an element of <span class="math-container">$G$</span> or an element of <span class="math-container">$H$</span>.</p>
<p>The product of two unreduced tuples is defined by concatenation</p>
<p><span class="math-container">$$
(s_1, \dots , s_m) \cdot (t_1, \dots , t_n) = (s_1, \dots , s_m, t_1 , \dots , t_n) \ .
$$</span></p>
<p>Now, consider the following equivalence relation in the set of unreduced tuples <span class="math-container">$U$</span>:</p>
<p><span class="math-container">$$
(s_1, s_2, \dots , s_{i-1}, 1, s_{i+1}, \dots , s_n) \sim (s_1, s_2, \dots, s_{i-1}, s_{i+1}, \dots , s_n) \ ,
$$</span></p>
<p>where <span class="math-container">$1$</span> is either the unit element of <span class="math-container">$G$</span> or the one of <span class="math-container">$H$</span>. And</p>
<p><span class="math-container">$$
(s_1, s_2, \dots , s_i,s_{i+1}, \dots , s_r) \sim (s_1, s_2, \dots , s_is_{i+1}, \dots , s_r )
$$</span></p>
<p>whenever two adjacent <span class="math-container">$s_i, s_{i+1} \in G$</span> or <span class="math-container">$s_i, s_{i+1} \in H$</span> at the same time. </p>
<p>If you want, you may call the equivalence class of a tuple under this equivalence relation a <em>reduced</em> tuple. So every reduced tuple is an alternating one, </p>
<p><span class="math-container">$$
(g_1, h_1, \dots , g_r , h_r) \ ,
$$</span></p>
<p>with <span class="math-container">$g_i \in G$</span> and <span class="math-container">$h_i \in H$</span> for all <span class="math-container">$i = 1, \dots , r$</span>.</p>
<p>Define the free product of <span class="math-container">$G$</span> and <span class="math-container">$H$</span> as the quotient:</p>
<p><span class="math-container">$$
G*H = U/\sim \ .
$$</span></p>
<p>Finally, one verifies that concatenation descends to a well-defined operation on the quotient and gives <span class="math-container">$G*H$</span> a group structure.</p>
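<p>For concreteness, the reduction procedure is easy to make executable. Here is a toy sketch in which both <span class="math-container">$G$</span> and <span class="math-container">$H$</span> are modelled (an arbitrary illustrative choice) as the additive group of integers, letters tagged by which factor they come from, and identities are <span class="math-container">$0$</span>:</p>

```python
# Each letter is (tag, value): tag 'G' or 'H', value an integer;
# both groups are (Z, +) with identity 0.
def reduce_word(word):
    out = []  # stack of already-reduced letters
    for tag, val in word:
        if val == 0:                      # drop identity letters
            continue
        if out and out[-1][0] == tag:     # merge adjacent same-group letters
            merged = out[-1][1] + val
            out.pop()
            if merged != 0:
                out.append((tag, merged))
            # if merged is the identity it disappears, and any newly adjacent
            # same-group letters are merged on later iterations automatically
        else:
            out.append((tag, val))
    return out

# concatenation followed by reduction is the group operation
assert reduce_word([('G', 1), ('H', 2), ('H', -2), ('G', 3)]) == [('G', 4)]
```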
<p>After performing this elementary exercise I understand perfectly well why nobody defines the free product in this way, but I still wanted to ask:</p>
<ol>
<li>Is this correct?</li>
<li>Is it written somewhere?</li>
</ol>
|
Dylan Wilson
| 423 |
<p>Well, I think that the usual definition of the free product is just a slightly less formal version of what you've written, which, by the way, is certainly correct. I haven't seen it completely written out like that before but maybe others have.</p>
<p>Meanwhile, one can avoid the whole messy construction and prove (using, for example, a version of the adjoint functor theorem) that the category Grp has small colimits. The free product is just the coproduct in Grp, so that would suffice for all practical purposes, and it is a rigorous formulation, if that's what you're worried about.</p>
|
5,363 |
<p>There is something in the definition of the <a href="http://en.wikipedia.org/wiki/Free_product" rel="nofollow noreferrer">free product</a> of two groups that annoys me, and it's this "word" thing:</p>
<blockquote>
<p>If <span class="math-container">$G$</span> and <span class="math-container">$H$</span> are groups, a word in <span class="math-container">$G$</span> and <span class="math-container">$H$</span> is a product of the form</p>
<p><span class="math-container">$$
s_1 s_2 \dots s_m,
$$</span></p>
<p>where each <span class="math-container">$s_i$</span> is either an element of <span class="math-container">$G$</span> or an element of <span class="math-container">$H$</span>. </p>
</blockquote>
<p>So what is this "word" guy? Does it come out of the blue? Does it come from some sort of new operation that I can perform with the two sets <span class="math-container">$G$</span> and <span class="math-container">$H$</span> -in addition to the well-known ones of union, intersection, Cartesian product...?</p>
<p>Fortunately, I think there is nothing new under the sun of set operations: it's easy to realise that words can be identified with elements of some Cartesian product (see below):</p>
<p><span class="math-container">$$
(s_1, s_2, \dots , s_m ) \ .
$$</span></p>
<p>And Cartesian product is a well-established set-theoretical operation.</p>
<p>So I tried to translate the rest of Wikipedia's definition </p>
<blockquote>
<p>Such a word may be <em>reduced</em> using the following operations:</p>
<p>Remove an instance of the identity element (of either <span class="math-container">$G$</span> or <span class="math-container">$H$</span>).
Replace a pair of the form <span class="math-container">$g_1g_2$</span> by its product in <span class="math-container">$G$</span>, or a pair <span class="math-container">$h_1h_2$</span> by its product in <span class="math-container">$H$</span>.</p>
<p>Every reduced word is an alternating product of elements of <span class="math-container">$G$</span> and elements of <span class="math-container">$H$</span>, e.g.</p>
<p><span class="math-container">$$
g_1 h_1 g_2 h_2 \dots g_r h_r.
$$</span></p>
<p>The free product <span class="math-container">$G ∗ H$</span> is the group whose elements are the reduced words in <span class="math-container">$G$</span> and <span class="math-container">$H$</span>, under the operation of concatenation followed by reduction.</p>
</blockquote>
<p>in an elementary set setting. First, consider the set of "unreduced" tuples of elements of <span class="math-container">$G$</span> and <span class="math-container">$H$</span></p>
<p><span class="math-container">$$
U = G \sqcup H \sqcup (G\times G) \sqcup (G\times H) \sqcup (H\times G) \sqcup (H\times H) \sqcup (G\times G \times G) \sqcup \dots
$$</span></p>
<p>More concisely:</p>
<hr>
<p>EDIT:</p>
<p>I think the following formula may be less messy than the one I wrote previously:</p>
<p><span class="math-container">$$
U = \bigsqcup_{r \geq 1} (S_1 \times \cdots \times S_r),
$$</span></p>
<p>where <span class="math-container">$S_i = G$</span> or <span class="math-container">$S_i = H$</span>.</p>
<hr>
<p>So, elements of <span class="math-container">$U$</span> are ordered tuples (<em>unreduced</em> ones)</p>
<p><span class="math-container">$$
(s_1, s_2, \dots , s_m),
$$</span></p>
<p>where each <span class="math-container">$s_i$</span> is either an element of <span class="math-container">$G$</span> or an element of <span class="math-container">$H$</span>.</p>
<p>The product of two unreduced tuples is defined by concatenation</p>
<p><span class="math-container">$$
(s_1, \dots , s_m) \cdot (t_1, \dots , t_n) = (s_1, \dots , s_m, t_1 , \dots , t_n) \ .
$$</span></p>
<p>Now, consider the following equivalence relation in the set of unreduced tuples <span class="math-container">$U$</span>:</p>
<p><span class="math-container">$$
(s_1, s_2, \dots , s_{i-1}, 1, s_{i+1}, \dots , s_n) \sim (s_1, s_2, \dots, s_{i-1}, s_{i+1}, \dots , s_n) \ ,
$$</span></p>
<p>where <span class="math-container">$1$</span> is either the unit element of <span class="math-container">$G$</span> or the one of <span class="math-container">$H$</span>. And</p>
<p><span class="math-container">$$
(s_1, s_2, \dots , s_i,s_{i+1}, \dots , s_r) \sim (s_1, s_2, \dots , s_is_{i+1}, \dots , s_r )
$$</span></p>
<p>whenever two adjacent <span class="math-container">$s_i, s_{i+1} \in G$</span> or <span class="math-container">$s_i, s_{i+1} \in H$</span> at the same time. </p>
<p>If you want, you may call the equivalence class of a tuple under this equivalence relation a <em>reduced</em> tuple. So every reduced tuple is an alternating one, </p>
<p><span class="math-container">$$
(g_1, h_1, \dots , g_r , h_r) \ ,
$$</span></p>
<p>with <span class="math-container">$g_i \in G$</span> and <span class="math-container">$h_i \in H$</span> for all <span class="math-container">$i = 1, \dots , r$</span>.</p>
<p>Define the free product of <span class="math-container">$G$</span> and <span class="math-container">$H$</span> as the quotient:</p>
<p><span class="math-container">$$
G*H = U/\sim \ .
$$</span></p>
<p>Finally, one verifies that concatenation descends to a well-defined operation on the quotient and gives <span class="math-container">$G*H$</span> a group structure.</p>
<p>After performing this elementary exercise I understand perfectly well why nobody defines the free product in this way, but I still wanted to ask:</p>
<ol>
<li>Is this correct?</li>
<li>Is it written somewhere?</li>
</ol>
|
Tsuyoshi Ito
| 926 |
<p>This is not an answer to your questions, but I guess I would point out that the “words” in the definition in Wikipedia can be given a structure: consider the free monoid freely generated by the set <span class="math-container">$G \sqcup H$</span>, and just call its elements <em>words in G and H</em>. Now you can introduce an equivalence relation and define the free product as the quotient (similarly to what you did), verifying that the operation on the quotient is well-defined and makes it a group.</p>
<p>Of course, this is not at all shorter than your written-down definition if we write down the definition of the free monoid and the definition of the equivalence relation. I just wanted to point out that the “words” are a natural concept and that this two-step construction may be slightly tidier than constructing the free product of groups in one big step (based on sets).</p>
|
3,317,088 |
<blockquote>
<p>Let
<span class="math-container">\begin{align*}
\mathcal{P} : \quad &\min_{x}~& (x + a)^2 \\
&\ \text{s.t.} & x + b \leq 0
\end{align*}</span></p>
</blockquote>
<p>where <span class="math-container">$ a, b \in \mathbb{R} $</span>. I would like to find <span class="math-container">$ x $</span>.</p>
<p><strong>My attempt:</strong> The Lagrangian is <span class="math-container">$ L_\pi = (x + a)^2 + \pi (x + b) $</span>.</p>
<p>Stationarity: <span class="math-container">$ x + a + \pi = 0 $</span></p>
<p>Complementary slackness: <span class="math-container">$ \pi (x + b) = 0 $</span></p>
<ul>
<li>Replacing <span class="math-container">$ x = -\pi - a $</span> in <span class="math-container">$ \pi (x + b) = 0 $</span>, we obtain that <span class="math-container">$ \pi (-\pi - a + b) = 0 $</span>.</li>
</ul>
<p>This is the point where I am not able to continue. I get two roots for <span class="math-container">$ \pi $</span>, but I am unable to tell which one I should use to get <span class="math-container">$ x $</span>.</p>
|
k.stm
| 42,242 |
<p>No, it doesn’t. The point is that any chain of primes in <span class="math-container">$R / \mathfrak p$</span> lifts to a chain of primes in <span class="math-container">$R$</span> <em>above</em> <span class="math-container">$\mathfrak p$</span>. But since <span class="math-container">$R$</span> is an integral domain, you can at least extend this chain below by <span class="math-container">$(0)$</span>, since <span class="math-container">$\mathfrak p ≠ 0$</span>.</p>
<p>So, no matter how long a chain of primes is in <span class="math-container">$R / \mathfrak p$</span>, you can always lift <em>and extend</em> it to a chain in <span class="math-container">$R$</span>, making any chain of primes in <span class="math-container">$R$</span> of <em>maximal length</em>, whatever it may be, longer than any such chain in <span class="math-container">$R/\mathfrak p$</span>.</p>
|
30,402 |
<p>The envelope of parabolic trajectories from a common launch point is itself a parabola.
In the U.S. soon many will have a chance to observe this fact directly, as the 4th of July is traditionally celebrated with fireworks.</p>
<p>If the launch point is the origin, and the trajectory starts off at angle $\theta$ and velocity $v$, then under unit gravity it follows the parabola
$$
y = x \tan \theta - [x^2 /(2 v^2)] (1 + \tan^2 \theta)
$$
and the envelope of all such trajectories is another parabola:
$$
y = v^2 /2 - x^2 / (2v^2)
$$
<hr>
<img src="https://i.stack.imgur.com/tn5pg.jpg" alt="envelope"></p>
<hr>
<p>These equations are not difficult to derive.
I have two questions.
First, is there a way to see that the envelope of parabolic trajectories is itself a parabola, without computing these equations?
Is there a purely geometric argument?
Perhaps there is a way to nest cones and obtain the above picture through conic sections, but I couldn't see it.</p>
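<p>As a numerical sanity check of the two displayed equations (an illustrative sketch with <span class="math-container">$v=1$</span> assumed), one can maximise the trajectory height over the launch angle at each fixed <span class="math-container">$x$</span> and compare with the claimed envelope:</p>

```python
import math

v = 1.0
def traj_y(x, T):
    # height at horizontal position x of the trajectory with tan(theta) = T
    return x * T - x * x * (1 + T * T) / (2 * v * v)

for x in (0.2, 0.5, 0.9):
    # scan T = tan(theta) over a fine grid; the maximum occurs at T = v^2/x
    best = max(traj_y(x, k / 1000) for k in range(1, 20001))
    envelope = v * v / 2 - x * x / (2 * v * v)
    assert abs(best - envelope) < 1e-4
```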
<p>Second, of course the trajectories are actually pieces of ellipses, not parabolas, if we follow the true inverse-square law of gravity.
Is the envelope of these elliptical trajectories also an ellipse?
(I didn't try to work out the equations.)
Perhaps the same geometric viewpoint (if it exists) could apply, e.g. by slightly tilting the sections.</p>
|
Joseph O'Rourke
| 6,094 |
<p>I found another article to supplement those to which Andrey linked:
Eugene I Butikov, "<a href="http://iopscience.iop.org/0143-0807/24/4/101" rel="nofollow">Comment on 'The envelope of projectile trajectories'</a>,"
<em>Eur. J. Phys.</em> <strong>24</strong> L5-L9, 2003.
He also explains the expanding-circles viewpoint that is Andrey's second bullet.
He imagines first that there is no gravity, in which case the particles are
on the surface of an expanding sphere whose radius $r$ equals $v t$.
"With gravity, this uniformly expanding sphere is falling freely as a whole with the
acceleration of free fall." He then finds the envelope of these falling, expanding circles.</p>
<p>Later in the note he considers water drops spun off a spinning, wet bicycle wheel.
To continue the 4th-of-July theme, these could be sparks from a spinning sparkler wheel.
He proves that again, the envelope of the drops/sparks is a parabola.</p>
|
2,906,350 |
<p>I have already consulted V.K. Rohatgi, and it has an example where it takes $Y=X^a$ where $a>0$, but the domain of $X$ is positive real values.<br>
Even the theorem for the transformation of continuous random variables restricts the derivative of $Y$ w.r.t. $X$ to be positive or negative over the entire domain of $X$, whereas when $Y = X^3$ the derivative of $Y$ w.r.t. $X$ is $0$ at $X=0$.<br>
I am unable to proceed further.</p>
|
Hongyu Wang
| 381,246 |
<p>We have</p>
<p>\begin{align*}
P(Y \leq y) &= P(X^3 \leq y)\\
&= P(X \leq y^\frac{1}{3})
\end{align*}</p>
<p>Now we take the derivative, with $F$ being the CDF of $X$ and $f$ the pdf of $X$:</p>
<p>$$\frac{d}{dy}F(y^\frac{1}{3}) = \frac{1}{3}f(y^\frac{1}{3})\,y^{-\frac{2}{3}}$$</p>
<p>Thus you just substitute in the pdf of $X$.</p>
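<p>To make the recipe concrete (an illustrative addition, taking $X$ to be standard normal, which is an assumption and not stated in the question), the formula can be checked for $y&gt;0$ against a numerical derivative of the CDF:</p>

```python
import math

def f(x):  # standard normal pdf (illustrative choice for the pdf of X)
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def F(x):  # standard normal CDF
    return (1 + math.erf(x / math.sqrt(2))) / 2

y, h = 2.0, 1e-5
numeric = (F((y + h) ** (1 / 3)) - F((y - h) ** (1 / 3))) / (2 * h)  # d/dy F(y^(1/3))
formula = f(y ** (1 / 3)) * y ** (-2 / 3) / 3
assert abs(numeric - formula) < 1e-6
```

<p>(For negative $y$ one needs the real, signed cube root rather than Python's <code>**</code> operator.)</p>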
|
719,056 |
<p>I'm writing a Maple procedure and I have a line that is "if ... then ..." and I would like it to be "if k is an integer" (or "if k^2 is a square") - how would one say that in Maple? Thanks!</p>
|
Abhishek Verma
| 94,307 |
<p>There is a $C$ which <strong>looks like</strong> $A/B$ (suppose) in some German notation. And now in some rule you denote some value by $C$ and others by $A$ and $B$, and $C = A/B$ is the rule. You won't say you broke the notation $C$ into $A/B$ just because it looked that way.</p>
<p>You will meet more such cases in integration.</p>
|
689,546 |
<p>I know this problem involves using Cantor's theorem, but I'm not sure how to show that there are more subsets of an infinite enumerable set than there are positive integers. It seems like a lot of these problems are really the same problem, but they require some unique and creative thought to get them just right. Any idea how I can solve these more quickly? What train of thought do you go though when working with this specific example?</p>
|
P Vanchinathan
| 28,915 |
<p>Given an enumerable set $S=\{x_1,x_2,\ldots\}$, and an infinite sequence of 0's and 1's, $\{b_1,b_2,\ldots\}$, we can associate a subset of $S$ consisting of those $x_i$ for which $b_i=1$. This is a bijection between the collection of subsets of $S$ and infinite binary sequences. These binary sequences can be thought of as base-2 representations of all real numbers between 0 and 1, hence the collection is uncountable. </p>
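<p>The indicator-sequence correspondence is easy to make explicit for a finite prefix of the enumeration (an illustrative sketch; the names and sample set are arbitrary):</p>

```python
def subset_to_bits(S, A):
    # indicator sequence of the subset A of the enumerated set S
    return [1 if x in A else 0 for x in S]

def bits_to_subset(S, bits):
    # inverse map: recover the subset from its indicator sequence
    return {x for x, b in zip(S, bits) if b}

S = ['x1', 'x2', 'x3', 'x4']
A = {'x2', 'x4'}
bits = subset_to_bits(S, A)
assert bits_to_subset(S, bits) == A  # round trip: the correspondence is a bijection
```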
|
2,078,737 |
<p>I will gladly appreciate explanation on how to do so on this matrix:</p>
<p>$$
\begin{pmatrix}
i & 0 \\
0 & i \\
\end{pmatrix}
$$</p>
<p>I got as far as calculating the eigenvalues and came up with $λ = i$.
When trying to find the eigenvectors I came up with the $0$ matrix.</p>
<p>What am I doing wrong? </p>
<p>Much appreciation and thanks in advance.</p>
|
Tsemo Aristide
| 280,301 |
<p>$x\rightarrow 1^{-}$, $x\pi/2\rightarrow \pi/2^{-}$, </p>
<p>$\sin(x\pi/2)\rightarrow 1$, $\cos(x\pi/2)\rightarrow 0^+$, so the limit is $+\infty$.</p>
|
1,810,582 |
<p>Statement 1: </p>
<blockquote>
<p>If $f :A\mapsto B$ & if $X,Y$ are subsets of $B$, then the inverse image of union of two sets $X$ & $Y$ is the union of inverse images of $X$ & $Y$.</p>
</blockquote>
<p>Statement 2:</p>
<blockquote>
<p>Let $f:A\mapsto B$ be a one-to-one correspondence from $A$ to $B$.
We define the inverse function $(f^{-1}): B \mapsto A$ by $f^{-1}(b)= a$ whenever $f(a)= b$. This inverse function is also a one-to-one correspondence.</p>
</blockquote>
<p>My question is: what should I note in these two statements?</p>
<p>The second statement states that the function whose inverse is to be determined should be a one-to-one correspondence. </p>
<p>Then what about the inverse in the first statement?</p>
<p>Also I have doubt about these notations:</p>
<ul>
<li>A) $(f^{-1})(x)$</li>
<li>B) $(f^{-1})(\{x\})$.</li>
</ul>
<p>Can I say A) is for a one-to-one onto function & B) is for the inverse image under any function in general?</p>
|
Henno Brandsma
| 4,280 |
<p>I'd fill in the details of "it's easy to check" steps. They are crucial here (you must use the triangle inequality somewhere in the proof, as the theorem does not hold for non-metric spaces). </p>
<p>Also state what the supposed countable base is, beforehand. Then show it is indeed countable, and then do this proof to show it's actually a base.</p>
<p>The proof itself is correct in essentials. But fill in all the dots.</p>
|
1,749,284 |
<p>As part of my homework I have the following question:</p>
<p>The tangent line $ L $ touches the graph of $ y = ax^3 + bx $ at the point $ x = x_0 $; find another point where the tangent line $L$ crosses the graph. Take $ a = 1$ and $b = 0$.</p>
<p>The second part of the question is to graph $y = x^3$ and show the crossing points.</p>
<p>I was able to find the equation of $L$ using the derivative of the function and the point: $$ y - x_0^3 = 3x_0^2(x - x_0) \\ y = 3x_0^2x - 2x_0^3 $$</p>
<p>I'm unable to find another point where the tangent $L$ crosses the graph (which I assume is $y = x^3$).
Should I solve for the value of $x_0$?</p>
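<p>One way to investigate: the difference $x^3-(3x_0^2x-2x_0^3)$ factors as $(x-x_0)^2(x+2x_0)$, which suggests the line meets the cubic again at $x=-2x_0$. A quick numerical check of that candidate (the sample value of $x_0$ is arbitrary):</p>

```python
x0 = 1.5  # sample tangency point; any nonzero value works

def curve(x):
    return x ** 3

def line(x):
    # tangent line at x0, as derived above: y = 3*x0^2*x - 2*x0^3
    return 3 * x0 ** 2 * x - 2 * x0 ** 3

# candidate second crossing at x = -2*x0, from the factorisation
# x^3 - 3*x0^2*x + 2*x0^3 = (x - x0)^2 * (x + 2*x0)
x1 = -2 * x0
assert abs(curve(x1) - line(x1)) < 1e-9   # the line meets the cubic there
assert abs(curve(x0) - line(x0)) < 1e-9   # and is tangent at x0
```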
|
SimpleProgrammer
| 1,102,366 |
<p>Tupper's self-referential formula (when suitably modified) can absolutely be used to plot any two-coloured image of arbitrary height and width. Thus, indeed, you can take a screenshot on a 5K iMac, make it monochromatic, and graph it, provided you have modified the formula suitably. The modifications have to do with the fact that the original formula was created for the pixelated image of the formula itself, whereas you want it for your 5K screenshot. I have mentioned the specific steps needed for these modifications in another answer: <a href="https://math.stackexchange.com/questions/1749330/how-to-generalize-tuppers-self-referential-formula">How to generalize Tupper's self-referential formula?</a></p>
<p>For my convenience and your ease of reference, I will copy my answer from that question:<br />
Suppose you have an image (two coloured bitmap) of arbitrary height and width and you want to find an inequality (that looks similar to Tupper's self referential formula) and the value of N for that image such that when you make a graph of that inequality with that particular value of N you get back your image. Then:</p>
<ol>
<li>if the height of your image (i.e. the length along the vertical Y direction) is other than 17 pixels (say 45 pixels) then just replace the 17 in the original formula with 45. Also don't forget to do so in the range of y (i.e. the new range of y should be N < y < N + 45)</li>
<li>if the width of your image (i.e. the length along the horizontal X direction) is other than 106 pixels (say 192 pixels) then just replace 106 with 192 in the range of x (i.e. the new range of x should be 0 < x < 192)</li>
<li>if both width and height of image are other than 106 and 17 respectively then do both the replacements mentioned above.</li>
</ol>
<p>Doing the above steps will give you the inequality (which will look identical to the original formula if the height of your image is exactly 17 pixels) to be plotted and the exact range of x in which to plot it. However, the exact range of y is still not known, as we don't know the value of N (since it depends on what your image is).<br />
The method for finding the value of N is well explained in the article <a href="https://shreevatsa.wordpress.com/2011/04/12/how-does-tuppers-self-referential-formula-work/" rel="nofollow noreferrer">https://shreevatsa.wordpress.com/2011/04/12/how-does-tuppers-self-referential-formula-work/</a>, which also explains well how the formula works in general. Briefly, the method first finds the value of N in binary (i.e. in base 2) from the image and then converts it into decimal (i.e. base 10). Finding the binary N involves traversing your image (bitmap) column-wise, i.e. going from bottom to top and then from left to right, starting from the bottom-left corner of the image. The bottom-left corner maps to the rightmost bit in the binary N, and so on. Whenever a pixel is black, the bit corresponding to that pixel is set to 1; otherwise it is kept at 0.<br />
Once all the above-mentioned things are completed, you will have an inequality which, when graphed in the range of x and y that you found, will give the image you desired.</p>
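<p>The column-wise traversal just described can be sketched in a few lines of Python (an illustrative sketch, not taken from the linked article or script; whether the resulting integer needs a final adjustment, such as a multiplication by 17, depends on the exact variant of the formula you plot):</p>

```python
def bitmap_to_n(bitmap):
    """bitmap: list of rows, top row first; truthy entries are black pixels."""
    height, width = len(bitmap), len(bitmap[0])
    n = 0
    pos = 0  # bit position; the bottom-left pixel is the rightmost bit
    for col in range(width):                   # left to right
        for row in range(height - 1, -1, -1):  # bottom to top
            if bitmap[row][col]:
                n |= 1 << pos
            pos += 1
    return n

# a tiny 2x2 example: black pixels at top-left and bottom-right
assert bitmap_to_n([[1, 0], [0, 1]]) == 0b0110
```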
<p>For better visualization, I have created a python script which is <a href="https://github.com/PranayPallavTripathi/Tupper_related_codes/" rel="nofollow noreferrer">available on github</a> and which allows you to draw any two coloured pixelated picture on a grid of height 17 pixels and width 106 pixels; it then gives you the value of N corresponding to that picture. So, for example, if you have the following picture
<a href="https://i.stack.imgur.com/SRhfn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SRhfn.png" alt="playing cards suites pixelated" /></a><br />
then the value of N corresponding to this picture which I found using the above linked script is<br />
573664048802431285558384669364352925059638745045637318771330283263689925623512983966825565546953441049160288866400662368999420358297824194432021809665303159472762426112693175725741721715844365367846948865802115889305652072084761535561585075425406795419160445636331380266234319267174478760508576590406346155428633201020192455643408277590918394462619142989337244768646480332610252190311941581014022074814334170654936502928583356066933856116345126839941691180729761792</p>
<p>If you plot Tupper's self referential formula on 0 < x < 106 and N < y < N + 17 with the above mentioned value of N, you will get the image displayed above.</p>
|
1,932,599 |
<p>Find the solution of the differential equation that satisfies the given initial condition <span class="math-container">$$~y'=\frac{x~y~\sin x}{y+1}, ~~~~~~y(0)=1~$$</span></p>
<p>When I integrate this function I get
<span class="math-container">$$y+\ln(y)= -x\cos x + \sin x + C.$$</span></p>
<p>Have I integrated the function correctly? </p>
<p>How do I complete the second part of the question <span class="math-container">$~y(0)=1~$</span></p>
|
nmasanta
| 623,924 |
<p>Given that <span class="math-container">$$~y'=\frac{x~y~\sin x}{y+1}$$</span>
<span class="math-container">$$\implies \frac{dy}{dx}=\frac{x~y~\sin x}{y+1}$$</span>
<span class="math-container">$$\implies \frac{y+1}{y}~dy=x\sin x~dx$$</span>
<span class="math-container">$$\implies \left(1+\frac{1}{y}\right)~dy=x\sin x~dx$$</span>
Integrating we have <span class="math-container">$$y+\log y=\int x\sin x~dx $$</span>
<span class="math-container">$$\implies y+\log y=-~ x\cos x+\sin x~+~C\qquad \text{where $~C~$ is the integrating constant.}$$</span>
Now given condition is <span class="math-container">$~~y(0)=1~~$</span>, which gives <span class="math-container">$$1+0=0+0+C\implies C=1$$</span>
So the solution of the differential equation that satisfies the given initial condition is <span class="math-container">$$y+\log y=-~ x\cos x+\sin x~+~1$$</span></p>
<hr>
<p>Using integrating by parts rule:
<span class="math-container">$$\int x\sin x~dx =x\int \sin x~dx~-~\int \left(\frac{d}{dx}x~\int \sin x~dx\right)~dx$$</span>
<span class="math-container">$$=-x~\cos x~-~\int \left(1\cdot(-\cos x)\right)~dx$$</span>
<span class="math-container">$$=-x\cos x+\sin x$$</span></p>
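As a sanity check of this implicit solution, one can integrate the ODE numerically from y(0) = 1 (a sketch with a hand-rolled RK4 stepper; the endpoint x = 2 and the step size are arbitrary choices of mine) and confirm that y + log y − (−x cos x + sin x + 1) stays near zero:

```python
import math

def f(x, y):
    return x * y * math.sin(x) / (y + 1)

# classical RK4 from y(0) = 1 up to x = 2
x, y, h = 0.0, 1.0, 1e-3
for _ in range(2000):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h

# the implicit solution predicts y + log y = -x cos x + sin x + 1
residual = abs(y + math.log(y) - (-x * math.cos(x) + math.sin(x) + 1))
print(residual)   # very close to 0
```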
|
2,354,383 |
<p>Why doesn't a previous event affect the probability of (say) a coin showing tails?</p>
<p>Let's say I have a <strong>fair</strong> and <strong>unbiased</strong> coin with two sides, <em>heads</em> and <em>tails</em>.</p>
<p>The first time I toss it, the probabilities of both outcomes are equal to $\frac{1}{2}$.
This much is <em>intuitive and clear</em> to me.</p>
<p>Now suppose that I toss it up $1000000000$ times and the scores are,</p>
<p>$501000000$ Heads</p>
<p>$499000000$ Tails</p>
<p>Now, for the $1001000000^{th}$ toss, shouldn't the probability of a tail coming up be <strong>greater</strong> than that of heads showing up?</p>
<p>I have seen many books which say that even for the $1001000000^{th}$ toss, the probabilities of both events are equal to $\frac{1}{2}$.</p>
<p>This <strong>seems wrong to me</strong> since the same books affirm that <strong>if a coin is tossed a large number of times, the quantity $\frac{heads}{tails}$ will approach $1$.</strong></p>
<p>I know this is very elementary and naive, yet I had only superficially studied probability and I hope you all will bear with me.</p>
<p><strong>My Objections with some of the top-voted answers</strong></p>
<blockquote>
<p>It isn't that future flips compensate for the imbalance, it is that there are so many of them it doesn't matter. </p>
</blockquote>
<p>I don't get this statement. What exactly does the second sentence mean? Moreover, if what you said is true then, the following comment by a user should be wrong,</p>
<blockquote>
<p>Law of large numbers</p>
</blockquote>
<p>So these are contradicting each other I feel. Please bear with my lack of knowledge.</p>
|
farruhota
| 425,072 |
<p>Imagine one tosses $1,000,000$ fair coins at once. Hence the outcomes of the coins are independent and then the numbers of heads and tails will be approximately (if not exactly) equal.</p>
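To make this concrete, here is a small seeded simulation (the seed is my arbitrary choice, fixed so the run is reproducible): after a million fair flips the ratio heads/tails is very close to 1, even though the raw difference heads − tails need not be small in absolute terms.

```python
import random

random.seed(12345)           # fixed seed so the run is reproducible
heads = tails = 0
for _ in range(1_000_000):
    if random.random() < 0.5:
        heads += 1
    else:
        tails += 1

print(heads / tails)         # very close to 1
print(abs(heads - tails))    # yet typically hundreds of flips apart
```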
|
3,758,536 |
<p>Theorem 3.29: If <span class="math-container">$p>1$</span>,<br />
<span class="math-container">$$
\sum_{n=2}^{\infty}\frac{1}{n(\log\ n)^p}
$$</span>
converges; if <span class="math-container">$p\leq1$</span>, the series diverges.</p>
<p>Proof: The monotonicity of the logarithmic function implies that <span class="math-container">$\{log\ n\}$</span> increases. Hence <span class="math-container">$\{1/n\ \log\ n\}$</span> decreases, and we can apply Theorem 3.27 to the series above; this leads us to the series
<span class="math-container">$$
\sum_{k=1}^{\infty}2^k\cdot \frac{1}{2^k(\log\ 2^k)^p}=\sum_{k=1}^{\infty}\frac{1}{(k\log\ 2)^p}=\frac{1}{(\log\ 2)^p}\sum_{k=1}^{\infty}\frac{1}{k^p}
$$</span>
and Theorem 3.29 follows from Theorem 3.28.</p>
<p>I have two questions:<br />
(1) How can we get the decrease of <span class="math-container">$\{1/n\ \log\ n\}$</span> from the increase of <span class="math-container">$\{\log\ n\}$</span>? I didn't see any connection between them.<br />
(2) The author said that we can apply Theorem 3.27 to the series. However, in order to apply Theorem 3.27, I think we need to show <span class="math-container">$\{1/n\ (\log\ n)^p\}$</span> is decreasing. But I don't know how to do that.</p>
<p>Theorem 3.27: Suppose <span class="math-container">$a_1\geq a_2\geq\cdots\geq0$</span>. Then the series <span class="math-container">$\sum_{n=1}^{\infty} a_n$</span> converges if and only if the series
<span class="math-container">$$
\sum_{k=0}^{\infty}2^ka_{2^k}=a_1+2a_2+4a_4+8a_8+\cdots
$$</span>
converges.</p>
|
Kavi Rama Murthy
| 142,385 |
<p>Product of two non-negative increasing functions is increasing. Any positive power of an increasing function is increasing. So <span class="math-container">$n \log n$</span> and <span class="math-container">$n \log^{p}n$</span> are increasing. If <span class="math-container">$f$</span> is increasing and <span class="math-container">$>0$</span> then <span class="math-container">$\frac 1 f$</span> is decreasing.</p>
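In other words: n and (log n)^p are both positive and increasing for n ≥ 2 (and p > 0), so their product increases and its reciprocal decreases, which is exactly the hypothesis Theorem 3.27 needs. A quick numerical spot-check of that monotonicity, for the illustrative choice p = 2:

```python
import math

p = 2
terms = [1 / (n * math.log(n) ** p) for n in range(2, 10_000)]
# each term should be no larger than the one before it
monotone = all(a >= b for a, b in zip(terms, terms[1:]))
print(monotone)   # -> True
```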
|
2,081,612 |
<p>I'd like to evaluate the following integral:</p>
<p>$$\int \frac{\cos^2 x}{1+\tan x}dx$$</p>
<p>I tried integration by substitution, but I was not able to proceed. </p>
|
DXT
| 372,201 |
<p>$\displaystyle \int \frac{\cos^2 x}{1+\tan x}dx = \int\frac{\cos^3 x}{\sin x+\cos x}dx = \frac{1}{2}\int\frac{(\cos^3 x+\sin ^3 x)+(\cos^3 x-\sin ^3 x)}{\cos x+\sin x}dx$</p>
<p>$\displaystyle = \frac{1}{4}\int (2-\sin 2x)dx+\frac{1}{4}\int\frac{(2+\sin 2x)(\cos x-\sin x)}{\cos x+\sin x}dx$</p>
<p>put $\cos x+\sin x= t$ and $1+\sin 2x = t^2$ in second integral</p>
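The algebraic split in the first line can be spot-checked numerically (the sample points are my arbitrary choice, avoiding zeros of cos x + sin x):

```python
import math

def lhs(x):
    return math.cos(x) ** 2 / (1 + math.tan(x))

def rhs(x):
    s, c = math.sin(x), math.cos(x)
    return 0.25 * (2 - math.sin(2 * x)) \
         + 0.25 * (2 + math.sin(2 * x)) * (c - s) / (c + s)

diffs = [abs(lhs(x) - rhs(x)) for x in (0.1, 0.5, 1.0, 1.3)]
print(max(diffs))   # essentially zero (rounding error only)
```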
|
592,963 |
<p>Find $x^4+y^4$ if $x+y=2$ and $x^2+y^2=8$</p>
<p>So I started the problem by noting that $x^2+y^2=(x+y)^2 - 2xy$, but that doesn't help!</p>
<p>I also saw that $x+y=2^1$ and $x^2+y^2=2^3$, so maybe $x^3+y^3=2^5$ and $x^4+y^4=2^7$, but I think this is just a coincidence.</p>
<p>So how can i solve this problem?</p>
<p>PLEASE i need some help and thanks for all!!</p>
|
Carl Love
| 104,071 |
<p>Solve the given equations simultaneously to obtain $x=1-\sqrt 3, y=1+\sqrt 3$ or vice versa. Then just compute $x^4+y^4$ directly.</p>
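Carrying the computation out (numerically, as a sketch): with x = 1 − √3 and y = 1 + √3, both given equations hold and x⁴ + y⁴ comes out to 56.

```python
x, y = 1 - 3 ** 0.5, 1 + 3 ** 0.5

print(abs(x + y - 2))         # ~0: first equation holds
print(abs(x**2 + y**2 - 8))   # ~0: second equation holds
print(round(x**4 + y**4))     # -> 56
```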
|
1,647,517 |
<p>I was in need to urgently solve this integral. I already know the result in the closed form, does anybody know how to solve it?
\begin{equation}
\int_{\mathbb{R}}e^{-\frac{x^{2}}{2}}\left(\cos\left(\pi nx\right)\right)dx=\sqrt{2\pi}e^{-\frac{n^{2}\pi^{2}}{2}},
\end{equation}
If anybody already knows it is welcome, otherwise I guess I will have to look tomorrow with more calm</p>
|
gniourf_gniourf
| 51,488 |
<p>Here's a funny one:</p>
<p>Define the function $f$ as
$$f:\mathbb{R}\longrightarrow\mathbb{R},\ a\longmapsto\int_{-\infty}^{+\infty}\mathrm{e}^{-x^2/2}\cos(a x)\,\mathrm{d}x.$$
It is well-known that the improper integral defining $f$ is convergent. We can apply the differentiation theorem to differentiate this improper integral (proof left to the reader) and we obtain:
$$\forall a\in\mathbb{R},\ f'(a)=-\int_{-\infty}^{+\infty}x\mathrm{e}^{-x^2/2}\sin(ax)\,\mathrm{d}x.$$
Using an integration by parts yields
$$\forall a\in\mathbb{R},\ f'(a)=-a\int_{-\infty}^{+\infty}\mathrm{e}^{-x^2/2}\cos(a x)\,\mathrm{d}x=-af(a).$$
Hence $f$ is a solution of the differential equation
$$f'(a)+af(a)=0,$$
the general solution of which is
$$f(a)=C\mathrm{e}^{-a^2/2}.$$
Now it is well-known that:
$$f(0)=\int_{-\infty}^{+\infty}\mathrm{e}^{-x^2/2}\,\mathrm{d}x=\sqrt{2\pi},$$
hence
$$\forall a\in\mathbb{R},\ f(a)=\sqrt{2\pi}\mathrm{e}^{-a^2/2}.$$
In particular,
$$\int_{-\infty}^{+\infty}\mathrm{e}^{-x^2/2}\cos(\pi nx)\,\mathrm{d}x=f(\pi n)=\sqrt{2\pi}\mathrm{e}^{-n^2\pi^2/2}.$$</p>
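A numerical cross-check of the closed form (trapezoidal rule on a truncated range, which is harmless given the Gaussian decay; the truncation bounds and step count are my choices), here for a = π, i.e. n = 1:

```python
import math

def f(a, lo=-20.0, hi=20.0, steps=200_000):
    # trapezoidal approximation of the integral defining f
    h = (hi - lo) / steps
    g = lambda x: math.exp(-x * x / 2) * math.cos(a * x)
    total = 0.5 * (g(lo) + g(hi))
    for i in range(1, steps):
        total += g(lo + i * h)
    return total * h

a = math.pi            # the case n = 1
approx = f(a)
exact = math.sqrt(2 * math.pi) * math.exp(-a * a / 2)
print(approx, exact)   # the two values agree closely
```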
|
1,647,517 |
<p>I was in need to urgently solve this integral. I already know the result in the closed form, does anybody know how to solve it?
\begin{equation}
\int_{\mathbb{R}}e^{-\frac{x^{2}}{2}}\left(\cos\left(\pi nx\right)\right)dx=\sqrt{2\pi}e^{-\frac{n^{2}\pi^{2}}{2}},
\end{equation}
If anybody already knows it is welcome, otherwise I guess I will have to look tomorrow with more calm</p>
|
Jack D'Aurizio
| 44,121 |
<p>Another chance is given by the expansion of $\cos(\pi n x)$ as a Taylor series:
$$ \cos(\pi n x) = \sum_{m\geq 0}\frac{(-1)^m (\pi n)^{2m} x^{2m}}{(2m)!}\tag{1} $$
together with the fact that:
$$ \int_{\mathbb{R}} x^{2m} e^{-x^2/2}\,dx = \int_{0}^{+\infty} x^{m-1/2} e^{-x/2}\,dx = 2^{m+1/2}\cdot\Gamma\left(m+\frac{1}{2}\right).\tag{2} $$
These identities give:
$$\begin{eqnarray*} \int_{\mathbb{R}}e^{-x^2/2}\cos(\pi n x)\,dx &=& \sum_{m\geq 0}\frac{(-1)^m (\pi n)^{2m}\, 2^{m+1/2}\,\Gamma(m+1/2)}{\Gamma(2m+1)} \\ &\color{red}{=}&\sqrt{2\pi}\cdot \sum_{m\geq 0}\frac{(-1)^m (\pi n)^{2m}\, 2^{m}\,\Gamma(m+1/2)}{2^{2m}\,\Gamma(m+1/2)\,\Gamma(m+1)}\\&=&\sqrt{2\pi}\sum_{m\geq 0}\frac{(-1)^m (\pi^2 n^2/2)^m}{m!}\\&\color{blue}{=}&\sqrt{2\pi}\cdot e^{-\pi^2 n^2/2}\tag{3}\end{eqnarray*} $$
where the equality in red follows from the <a href="http://mathworld.wolfram.com/LegendreDuplicationFormula.html" rel="nofollow">Legendre duplication formula</a> and the equality in blue from the Taylor series of the exponential function.</p>
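Formula (2) — with the Gamma factor Γ(m + 1/2), the form actually used in (3) — can be spot-checked numerically for small m (trapezoidal integration on a truncated range; the parameters are my choices):

```python
import math

def moment(m, lo=-20.0, hi=20.0, steps=200_000):
    # trapezoidal approximation of the integral of x**(2m)*exp(-x**2/2) over R
    h = (hi - lo) / steps
    g = lambda x: x ** (2 * m) * math.exp(-x * x / 2)
    total = 0.5 * (g(lo) + g(hi))
    for i in range(1, steps):
        total += g(lo + i * h)
    return total * h

for m in range(4):
    exact = 2 ** (m + 0.5) * math.gamma(m + 0.5)
    print(m, abs(moment(m) - exact) / exact)   # tiny relative errors
```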
|
27,666 |
<p>I have been reading a book on my own; more than 90% of the book is fine for me, but I have some problems with parts of it. Is it OK to ask my questions in a related chat-room? I believe they are quite easy, and it feels embarrassing to ask such basic questions.</p>
|
samerivertwice
| 334,732 |
<p>I frequently have this same problem but I have learnt to ask the question anyway. Provided the question or equivalent doesn't already exist, often the answer is given in a comment rather than an answer, as users are embarrassed to answer such trivial questions. But inevitably somebody later needs an answer to the same question and finds it useful. In fact I now see many others asking relatively trivial questions.</p>
<p>MSE is for mathematicians of all levels, and there are as many trivial misconceptions and problems of understanding as there are deep and interesting ones. While we of course aspire not to ask bad questions, bad or duplicate questions will get downvoted and removed, so you receive feedback from the community on what is considered acceptable.</p>
<p>I get the impression some of the more advanced mathematicians fail to appreciate the full extent of my mathematical ineptitude and find trivial questions frustrating but there are other less experienced mathematicians who know more than me while appreciating my difficulty with simple problems and are happy to help.</p>
|
2,913,156 |
<p>I'm working through the definitions given in <strong>2.18</strong> in Baby Rudin, and I was wondering how to prove that <span class="math-container">$E = [0,1] \cap \Bbb{Q}$</span> is closed in <span class="math-container">$\Bbb{Q}$</span>.</p>
<p>Clearly, the set <span class="math-container">$[0,1] \cap \Bbb{R}$</span> is closed in <span class="math-container">$\Bbb{R}$</span>, since for all neighborhoods of a point <span class="math-container">$p \in [0,1]$</span>, we can always find a <span class="math-container">$q \neq p$</span> such that <span class="math-container">$q \in [0,1]$</span>. Intuitively, this is because for all <span class="math-container">$p \in [0,1] \cap \Bbb{R}$</span>, we can always find a <span class="math-container">$q \in [0,1] \cap \Bbb{R}$</span> such that <span class="math-container">$p - \epsilon < q < p$</span> or <span class="math-container">$p < q < p+\epsilon$</span> for all <span class="math-container">$\epsilon >0$</span>.</p>
<p>However, that the set <span class="math-container">$E = [0,1] \cap \Bbb{Q}$</span> is closed in <span class="math-container">$\Bbb{Q}$</span> is sort of clear, but not intuitive to me, since <span class="math-container">$\Bbb{Q}$</span> is not uncountably infinite as <span class="math-container">$\Bbb{R}$</span> is. Does one have to use that fact that it is closed in <span class="math-container">$\Bbb{R}$</span> and that <span class="math-container">$\Bbb{Q}$</span> is dense in <span class="math-container">$\Bbb{R}$</span>? How would one solve it from first principles? i.e. Something along the lines of: Consider an arbitrary <span class="math-container">$p \in [0,1] \cap \Bbb{Q}$</span>, we show that it is a limit point by...</p>
<p>Any help would be greatly appreciated!</p>
|
fleablood
| 280,126 |
<p>I'm sorry to say I couldn't follow your argument that it is intuitive $[0,1]$ is closed in $\mathbb R$ at all.</p>
<p>$[0,1]$ is closed (in $\mathbb R$) because all its limit points are in $[0,1]$. That is: if $p \in \mathbb R$ and for every $\epsilon > 0$ the interval $(p-\epsilon, p+\epsilon)$ contains a point $q \in [0,1]$, then we can conclude that $p$, itself, is in $[0,1]$.</p>
<p>Pf: If $p < 0$ and $\epsilon = 0 - p= |p|$ then $(p-\epsilon, p + \epsilon) = (2p, 0)$ is disjoint from $[0,1]$ and there is no point, $q$, in common with $[0,1]$. And if $p > 1$ and $\epsilon = p - 1$ then $(p-\epsilon, p+ \epsilon) = (1, 2p - 1)$ and there is no point, $q$, in common with $[0,1]$. </p>
<p>So if $p$ is a limit point (if there <em>are</em> any limit points, there might not be) then it would have to be that $0 \le p \le 1$ so $p \in [0,1]$.</p>
<p>Thus by definition $[0,1]$ is closed.</p>
<p>.....</p>
<p>But notice there is <em>nothing</em> in that proof that pertains to any property of $\mathbb R$ and it would have held true for any subspace of $\mathbb R$ at all. Everything we said would be true for $\mathbb Q$ or any $B \subset \mathbb R$.</p>
<p>Also, although the proof to show $[0,1]$ was closed was particular to $[0,1]$, there is no proof of any other set being closed that would rely upon the space.</p>
<p><strong>Claim</strong>: If $A$ is closed in space $X$ then $A\cap Y$ is closed in $Y \subset X$.</p>
<p>Pf: Let $p \in Y$ be a limit point of $A\cap Y$. Then every neighborhood, $N$, of $p$ contains a point $q; q \ne p; q \in A\cap Y\subset A$. But then $N$ is a subset of a neighborhood $N'\subset X$. $q \in A$ and $q \in N'$. So $p$ is a limit point of $A$. And as $A$ is closed in $X$ that means $p \in A$. And as $p \in Y$, we get $p \in A\cap Y$. So $A\cap Y$ is closed in $Y$.</p>
|
897,955 |
<p>For a function $f:X\rightarrow Y$ we have $f(U)\subset V\iff U\subset f^{-1}(V)$ proof</p>
<p>I'm following a book and it just uses this; it doesn't say anything about the function, so I've not assumed it's one-to-one. I've spent too much time on this (as in it should be 4 lines for both ways at most) so I hope you can help me.</p>
<p><strong>What have I tried?</strong></p>
<p>For $\implies$</p>
<p>$x\in U\implies f(x)\in f(U)$ then by hypothesis $f(x)\in V$<br>
Here's where I get stuck, because $f^{-1}(\{f(x)\})\subset f^{-1}(V)$ isn't true.</p>
<p>Wait, It is because $f(x)\in V$ everything that maps to $f(x)$ will be in $f^{-1}(V)$, writing this really does help!</p>
<p>I then say $x\in f^{-1}(\{f(x)\})$ thus $x\in f^{-1}(V)$ - is that right?</p>
<p>For $\impliedby$</p>
<p>$y\in f(U)\implies \exists x\in U:f(x)=y$ thus $x\in f^{-1}(V)$ by hypothesis, so $f(x)\in f(f^{-1}(V))$ which is the same as $y\in V$</p>
<p>I don't like these sorts of proofs, largely because they're glossed over in the first chapter of all books, and the term "function" is used very ambiguously (is it implicitly one-to-one, for example?)</p>
<p>Are these proofs okay?</p>
|
Troy Woo
| 74,973 |
<p>$$
\Rightarrow: U\subset f^{-1}(f(U))\subset f^{-1}(V)
$$
$$
\Leftarrow: f(U)\subset f(f^{-1}(V))\subset V
$$
Just read through (maybe more than once) the first chapter of James Munkres, <strong>Topology</strong>. It's the best one, in my view. </p>
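For a concrete finite illustration of both inclusions (a toy non-injective function encoded as a dict; all names here are mine):

```python
def image(f, U):
    return {f[x] for x in U}

def preimage(f, V):
    return {x for x in f if f[x] in V}

# a non-injective toy function f : {1,2,3,4} -> {'a','b','c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}
U, V = {1, 3}, {'a', 'b'}

assert image(f, U) <= V                  # f(U) is contained in V ...
assert U <= preimage(f, V)               # ... iff U is contained in f^-1(V)
assert U <= preimage(f, image(f, U))     # U subset of f^-1(f(U))
assert image(f, preimage(f, V)) <= V     # f(f^-1(V)) subset of V
print("all four inclusions hold")
```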
|
1,270,107 |
<p>How many real roots does the below equation have?</p>
<p>\begin{equation*}
\frac{x^{2000}}{2001}+2\sqrt{3}x^2-2\sqrt{5}x+\sqrt{3}=0
\end{equation*}</p>
<p>A) 0 B) 11 C) 12 D) 1 E) None of these</p>
<p>I could not come up with anything.</p>
<p>(Turkish Math Olympiads 2001)</p>
|
wythagoras
| 236,048 |
<p>We have that $x^{2000} \geq 0$, because squares are nonnegative.</p>
<p>Further, we have $(x-\frac{1}{2}\sqrt{\frac{5}{3}})^2 \geq 0$. This gives $x^2-\sqrt{\frac{5}{3}}x+\frac{5}{12} \geq 0$</p>
<p>Therefore $2 \sqrt{3}x^2-2\sqrt{5}x+\frac{10}{12}\sqrt{3} \geq 0$</p>
<p>Therefore $2 \sqrt{3}x^2-2\sqrt{5}x+\sqrt{3} \geq \frac{2}{12}\sqrt{3} > 0$</p>
<p>Therefore $\frac{x^{2000}}{2001} + 2 \sqrt{3}x^2-2\sqrt{5}x+\sqrt{3} > 0$</p>
<p>Therefore there are no real roots. </p>
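Equivalently, the quadratic 2√3 x² − 2√5 x + √3 has discriminant (2√5)² − 4·2√3·√3 = 20 − 24 = −4 < 0 and a positive leading coefficient, so it is strictly positive. A numerical sketch:

```python
a, b, c = 2 * 3 ** 0.5, -2 * 5 ** 0.5, 3 ** 0.5

disc = b * b - 4 * a * c
print(disc)                    # -4: no real zeros of the quadratic
print(c - b * b / (4 * a))     # minimum value, 1/(2*sqrt(3)) > 0
```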
|
489,528 |
<p>I need to check for convergence here. Why is every example completely different and nothing which I've learned from the examples before helps me in the next? Hate it!</p>
<p>$$ \sum_{n\geq1} \frac{n^{n-1}}{n!} $$</p>
<p>so I tried the ratio test and I got</p>
<p>$$ \frac{1}{1+\frac{1}{n}} $$
which converges to $1$ which isn't $>$ or $<$ then $1,$ so that doesn't work.</p>
<p>which test will work and how can I transform the series?</p>
|
John
| 23,668 |
<blockquote>
<p>Why is every example completely different and nothing which I've
learned from the examples before helps me in the next?</p>
</blockquote>
<p>Because in mathematics you need to think in principles, not in rote repetition. Every problem is new, and you have to determine which of all the various facts/processes/algorithms/theorems you have learned to date can be applied to this specific problem. That is the nature of mathematics. It makes mathematics simultaneously hard, but exciting as h***.</p>
|
4,074,031 |
<p>You have been teaching Dennis and Inez math using the Moore method for their entire lives, and you're currently deep into topology class.</p>
<p>You've convinced them that topological manifolds aren't quite the right object of study because you "can't do calculus on them," and you've managed to motivate the definition of a smooth manifold.</p>
<p>Dennis and Inez take your motivation about "doing calculus" seriously. They pose to you that they asked themselves how to take the derivative of a function <span class="math-container">$f: M \to \mathbb{R}$</span>. They figured out independently that such a thing couldn't meaningfully be another smooth function. Instead, it has to say how the function is changing in the direction of a given vector field.</p>
<p>You are thrilled at their work: they've independently invented <span class="math-container">$d$</span> and the cotangent bundle, and they arrive at the correct formal definition of both with little prodding.</p>
<p>You want to point them in the direction of generalization, so you ask them to consider how to extend <span class="math-container">$d$</span> to higher exterior powers of the cotangent bundle. You get blank stares. Why should they think about higher tensor powers of the cotangent bundle at all? Why the <strong>alternating</strong> tensors, in particular? Yes, they know what that means (you led them to it in their linear algebra course), but the cotangent bundle just naturally showed up when they went thinking about derivatives, and the alternating tensors don't.</p>
<p>Oops! Now's the time to confess that your students never had a traditional 3-dimensional vector calculus course culminating in Stokes's theorem. Actually, it's worse: they have never heard of integrals. You see, you were torn about whether to start with the Lebesgue theory or spend time building the more intuitive Riemann integral, so you just skipped from the first half of Calc One to a topics course on higher topos theory. Lesson learned.</p>
<p>So, how do you teach them? (Alternatively, answer this question in the negative by proving that Dennis and Inez will always invent integration if you make them think about higher exterior derivatives enough.)</p>
|
David E Speyer
| 448 |
<p><span class="math-container">$\def\RR{\mathbb{R}}$</span>I have in fact tried to tackle this problem when I teach higher differential forms. Here are some of the things I try.</p>
<p><b>Before we do differential forms or manifolds</b> For functions <span class="math-container">$f : \RR^n \to \RR^p$</span>, introduce <span class="math-container">$Df$</span> and prove the multivariate change of variables formula.</p>
<p>Let <span class="math-container">$f$</span> be a function on <span class="math-container">$\RR^n$</span>. Define the Hessian of <span class="math-container">$f$</span>, <span class="math-container">$D^2 f$</span>, to be the matrix of second partials <span class="math-container">$\tfrac{\partial^2 f}{(\partial x_i) (\partial x_j)}$</span>. Prove the multivariate second derivative test: If <span class="math-container">$Df=0$</span> at <span class="math-container">$c$</span> and <span class="math-container">$D^2 f$</span> is positive definite, then <span class="math-container">$c$</span> is a local minimum for <span class="math-container">$f$</span>. (This isn't logically necessary for what follows, but helps show this is a useful concept.)</p>
<p>Show that, if <span class="math-container">$c$</span> is a critical point, then the Hessian has a simple chain rule like formula. Show that, if <span class="math-container">$c$</span> is not a critical point, the formula for changing variables inside the Hessian is a mess.</p>
<p>Also, during this pre-manifold time, I talk about the curl of a vector field on <span class="math-container">$\RR^2$</span> and prove Stokes' theorem for rectangles; but you asked me not to mention integration.</p>
<p><b>Spend a bunch of time working with and getting used to <span class="math-container">$1$</span>-forms, still in <span class="math-container">$\RR^n$</span></b> Note that the multivariate chain rule means that <span class="math-container">$df$</span> is well defined.
Note that <span class="math-container">$D^2 f$</span> is well defined as a quadratic form on the tangent space at critical points, but is not a well defined quadratic form on the tangent space in general. All of this is just the computations from the pre-manifold discussion, now placed in a more sophisticated context. This means that we don't have a coordinate independent notion of second derivative.</p>
<p>If we can't take the second derivative of a function, can we at least take the first derivative of a <span class="math-container">$1$</span>-form? In coordinates, if we have <span class="math-container">$\omega = \sum g_i d x_i$</span>, we can form the matrix of partials <span class="math-container">$\tfrac{\partial g_i}{\partial x_j}$</span>. Could this be coordinate independent in some sense?</p>
<p>A problem! If <span class="math-container">$\omega = df$</span>, then <span class="math-container">$\tfrac{\partial g_i}{\partial x_j}$</span> is the Hessian, which we just saw was bad. Let's try skew symmetrizing this matrix, to give <span class="math-container">$\tfrac{\partial g_i}{\partial x_j} - \tfrac{\partial g_j}{\partial x_i}$</span>. This throws away the Hessian: If <span class="math-container">$\omega = df$</span>, we just get <span class="math-container">$0$</span>. Is what is left over any better?</p>
<p>A miracle (which the students are assigned to grind out by brute force): This gives a well defined skew-symmetric form on the tangent space. <b>Define a <span class="math-container">$2$</span>-form on <span class="math-container">$\RR^n$</span></b>, and explain that we have just constructed <span class="math-container">$d: \Omega^1 \to \Omega^2$</span> and shown that it is well defined. In particular, when <span class="math-container">$n=2$</span>, we have just shown that curl is a well-defined skew-symmetric form.</p>
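The "miracle" that the skew-symmetrization kills the df contribution is, in coordinates, just equality of mixed partials. A numerical illustration (the function f and the sample point are my arbitrary choices): enter the gradient components of f(x, y) = sin(xy) + x³y by hand and check that the single entry of the skew-symmetrized derivative, ∂g₂/∂x − ∂g₁/∂y, vanishes.

```python
import math

# gradient of f(x, y) = sin(x*y) + x**3 * y, entered by hand
g1 = lambda x, y: y * math.cos(x * y) + 3 * x**2 * y    # df/dx
g2 = lambda x, y: x * math.cos(x * y) + x**3            # df/dy

def skew(x, y, h=1e-5):
    # the single entry of the skew-symmetrized derivative of w = g1 dx + g2 dy
    dg1_dy = (g1(x, y + h) - g1(x, y - h)) / (2 * h)
    dg2_dx = (g2(x + h, y) - g2(x - h, y)) / (2 * h)
    return dg2_dx - dg1_dy

print(abs(skew(0.7, -1.3)))   # ~0: for an exact form, d(df) = 0
```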
|
207,382 |
<p>I was interested in exploiting the Terminal` package, as demonstrated in <a href="https://mathematica.stackexchange.com/questions/13961/plotting-graphics-as-ascii-plots">this post</a>, to show some ASCII plots straight through the terminal of my Raspberry Pi Model 3 B+. However, it doesn't seem that I can get a plot to show when using the package either in the terminal or <em>even in the front-end</em>. Here's what happens in either case when I try to replicate a simple example:</p>
<pre><code>In[1]:= << Terminal`
-- Terminal graphics initialized --
In[2]:= Plot[Sinc[x],{x,0,20},PlotRange->All]
#
StringJoin[MapThread[System`ConvertersDump`RasterDataPacket[#"#", #"#", #"#", #"+", ColorFunction -> #"+"] & ]]
Out[2]= -Graphics-
</code></pre>
<p>The package seems to load fine, the stuff between the In[2] line and the Out[2] line is stuff Mathematica outputted (I didn't type it), and the graphics the system outputs in both cases are fully formatted correct plots; just not in ASCII. <strong>Is the package broken, or am I missing something</strong>?</p>
|
Lukas Lang
| 36,508 |
<p>Here's an approach to fix the code behind <code>Terminal`</code> - Execute the following code to restore the functionality of the package:</p>
<pre><code><< Terminal`
ExportString["", "TTY"];
DownValues@System`Convert`BitmapDump`ExportTTY = DownValues@System`Convert`BitmapDump`ExportTTY /.
{
TextStyle -> LabelStyle,
HoldPattern[pre : (data_ = _[___, gr_, "ColorReductionPalette" -> crp_, _, opts_])] :>
(
pre;
data = Replace[
ImageData@ColorQuantize[
Image[data, "Byte"],
crp,
FilterRules[{opts},
Options@ColorQuantize]
],
Thread[First /@ crp -> Range@Length@crp],
{2}
]
)
};
</code></pre>
<p>Now:</p>
<pre><code>Plot[Sinc[x], {x, 0, 20}, PlotRange -> All]
(* #
##
##+
##++
## ++
### +
## ++
## +
## ++
## +
## ++
## +
### ++
## +
## ++
## +
## ++
## + ++++++ # #
## # # +# # # # ++#+ #++#+ # # # #+++#+++# # # # # ++#
###############################################################################
## + ++ +++++ +++++ ++++++++++++
## + ++ +++++++ + + +
## ++ +++
## ++++++
## *)
</code></pre>
<p>Looking at the <a href="https://reference.wolfram.com/language/ref/$DisplayFunction.html" rel="noreferrer"><code>$DisplayFunction</code></a> set by <code>Terminal`</code>, we see that it uses <code>ExportString[#,"TTY"]</code> to generate the ASCII plot. That function in turn uses the <code>"PNM"</code> image format to do the actual conversion. It turns out that this format has lost some functionality at some point, specifically for the <code>"ColorReductionPalette"</code> option. This option was used to convert the image into a version with only six gray levels, which were then converted into the different ascii characters. Since that option no longer exists, we replicate it by inserting a call to <a href="https://reference.wolfram.com/language/ref/ColorQuantize.html" rel="noreferrer"><code>ColorQuantize</code></a>, followed by some post-processing. It should be noted that this does not produce exactly the same output as in older versions, presumably something in the rendering/quantization code got updated at some point.</p>
|
3,034,374 |
<p>Two cyclists start from the same place and ride in the same direction. A starts at noon with a speed of 8 km/hr and B starts at 2 pm with a speed of 10 km/hr. At what times will A and B be 5 km apart?</p>
<p>My thought process:
since A starts at noon, by 2 pm it will already have covered 16 km (8*2),
so S relative = V relative * t, i.e. 16 = 2*t1 and thus t1 = 8.
Now we want S relative to be 5, so
5 = 2*t2, giving t2 = 2.5 hours, so they will be 5 km apart at t1 - t2.
Is this a correct process?</p>
|
Shubham Johri
| 551,962 |
<p>By the time <span class="math-container">$B$</span> starts, <span class="math-container">$A$</span> is ahead by <span class="math-container">$16$</span> km. The velocity of <span class="math-container">$B$</span> with respect to <span class="math-container">$A, v_{BA}=v_B-v_A=2$</span> kmph. Therefore, the distance between <span class="math-container">$A$</span> and <span class="math-container">$B$</span> changes at <span class="math-container">$2$</span> kmph, and after <span class="math-container">$t$</span> hours from <span class="math-container">$B$</span>'s departure, the distance between them is given by <span class="math-container">$|16-2t|$</span>.</p>
<p>We want <span class="math-container">$|16-2t|=5\implies t=5.5, 10.5$</span> hours. Since <span class="math-container">$B$</span> started at <span class="math-container">$2$</span> pm, they are <span class="math-container">$5$</span> km apart at <span class="math-container">$5.5$</span> and <span class="math-container">$10.5$</span> hours after <span class="math-container">$2$</span> pm, that is, at <span class="math-container">$7:30$</span> pm, <span class="math-container">$12:30$</span> am.</p>
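The same computation as a small sketch (t measured in hours after 2 pm):

```python
# |16 - 2t| = 5  =>  16 - 2t = +5 or -5
times = sorted([(16 - 5) / 2, (16 + 5) / 2])
print(times)   # [5.5, 10.5] -> 7:30 pm and 12:30 am

# cross-check against the actual positions (A has a 2-hour head start)
posA = lambda t: 8 * (t + 2)
posB = lambda t: 10 * t
for t in times:
    print(t, abs(posA(t) - posB(t)))   # the gap is exactly 5 km
```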
|
2,260,975 |
<p>Problem: Let $\mu : \mathcal{B}(\mathbb{R}^k) \rightarrow \mathbb{R}$ be a nonnegative and finitely additive set function with $\mu(\mathbb{R}^k) < \infty $.
Suppose that
\begin{equation}
\mu(A) = \sup \{ \mu(K) : K \subset A, K \text{ compact}\}
\end{equation}
for each $A \in \mathcal{B}(\mathbb{R}^k)$.
Then $\mu$ is a finite measure.</p>
<p>I'm stuck showing that $\mu$ is countably additive.
First, let the $A_{i}$ be disjoint, and choose $K_i \subset A_i$ compact such that $\mu(A_i) \leq \mu(K_i)+\epsilon/2^i$, for given $\epsilon > 0$.
Then
\begin{equation}
\mu\left(\bigcup_{i=1}^{\infty} A_i \right)
\geq \mu\left(\bigcup_{i=1}^{n} A_i \right)
\geq \mu\left(\bigcup_{i=1}^{n} K_i \right)
= \sum_{i=1}^n \mu(K_i)
\geq \sum_{i=1}^n \left(\mu(A_i) - \frac{\epsilon}{2^i}\right) ,
\end{equation}
which follows from monotonicity (obvious) and finite additivity (on the class of compact sets).
Hence, letting $n \rightarrow \infty$, and since $\epsilon$ was arbitrary,
\begin{equation}
\mu\left(\bigcup_{i=1}^{\infty} A_i \right)
\geq \sum_{i=1}^\infty \mu(A_i) .
\end{equation}
Next, I want to establish the reverse inequality.
This is where I'm stuck. Any ideas?</p>
<p>Thanks in advance!</p>
<p>Chris</p>
|
Nathanael Skrepek
| 423,961 |
<p>$\newcommand{\R}{\mathbb{R}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\irun}[1]{_{#1 \in \N}}$</p>
<p>Unfortunately math.stackexchange has problems with the <code>\intertext</code> command.</p>
<p>This is a very detailed proof for your problem</p>
<p>$\Omega = \R^n$,
$\mathfrak{A} = \mathfrak{B}(\R^n)$</p>
<p>At first I want to show some regularity with open sets</p>
<p>\begin{align*}
\mu(A) &= \mu(\Omega\backslash A^{c}) = \mu(\Omega) - \mu(A^{c}) \\
&= \mu(\Omega) - \sup\{ \mu(K) : K \in \mathfrak{A},K \subseteq A^{c}, K \text{ compact}\}
\end{align*}
absorbing the minus sign into the set turns the $\sup$ into an $\inf$:
\begin{align*}
= \mu(\Omega) + \inf\{ -\mu(K) : K \in \mathfrak{A}, K \subseteq A^{c}, K \text{ compact}\} \\
= \inf\{ \mu(\Omega) -\mu(K) : K \in \mathfrak{A}, K \subseteq A^{c}, K \text{ compact}\}
\end{align*}
Have in mind, that $\mu(\Omega) -\mu(K)$ is nothing else than $\mu(K^{c})$ and that $K\in \mathfrak{A}$ is equivalent to $K^{c}\in\mathfrak{A}$</p>
<p>\begin{align*}
= \inf\{ \mu(K^{c}) : K^{c} \in \mathfrak{A}, (K^{c})^{c} \subseteq A^{c}, (K^{c})^{c} \text{ compact}\}
\end{align*}
now replace $K^{c}$ with $O$
\begin{align*}
= \inf\{ \mu(O) : O \in \mathfrak{A}, O^{c} \subseteq A^{c}, O^{c} \text{ compact}\} \\
= \inf\{ \mu(O) : O \in \mathfrak{A}, A \subseteq O, O^{c} \text{ compact}\}
\end{align*}
That means that for every $A \in \mathfrak{A}$ and every $\epsilon>0$ there is an $O\in\mathfrak{A}$ such that $A \subseteq O$ and $\mu(O) - \epsilon \leq \mu(A)$. Clearly $O$ is open, since $O^{c}$ is compact.</p>
<p>Let $(A_{n})\irun{n}$ be a sequence of disjoint sets in $\mathfrak{A}$ such that the union $\bigcup\irun{n} A_{n}$ is in $\mathfrak{A}$.
Then, by the above, there is for every $A_{n}$ a set $O_{n}$ such that $A_n \subseteq O_n$ and $\mu(O_{n}) - \frac{\epsilon}{2^{n}} \leq \mu(A_{n})$.
Now choose an arbitrary
\begin{align*}
K \in \bigg\{ K \in \mathfrak{A} \;\,\bigg|\;\, K \subseteq \bigcup\irun{n} A_{n}, K \text{ compact}\bigg\}
\end{align*}</p>
<p>It is easy to see that $K \subseteq \bigcup\irun{n} A_{n}\subseteq \bigcup\irun{n} O_{n}$. Since $K$ is compact and all $O_{n}$ are open, there must be an $n_{0}\in \N$ such that $K \subseteq \bigcup_{n=1}^{n_{0}} O_{n}$. Now we have
\begin{align*}
K \subseteq \bigcup_{n=1}^{n_{0}} O_{n}
\end{align*}
and, by monotonicity and finite subadditivity of $\mu$,
\begin{align*}
\mu(K) \leq \mu\bigg(\bigcup_{n=1}^{n_{0}} O_{n}\bigg)
\leq \sum_{n=1}^{n_{0}} \mu(O_{n})
\leq \sum_{n=1}^{n_{0}} \Big(\mu(A_{n}) + \frac{\epsilon}{2^{n}}\Big)
\leq \Big(\sum\irun{n} \mu(A_{n}) \Big)+ \epsilon
\end{align*}
for all $\epsilon > 0$. Since the right-hand side is independent of $K$, the inequality also holds for the supremum over all such $K$. This means nothing other than
\begin{align*}
\mu\Big(\bigcup\irun{n} A_{n}\Big) \leq \Big(\sum\irun{n} \mu(A_{n}) \Big)+ \epsilon \quad \text{for}\quad \epsilon > 0.
\end{align*}
Since $\epsilon > 0$ was arbitrary, the inequality also holds with $\epsilon = 0$, and this shows that $\mu$ is $\sigma$-subadditive.</p>
|
244 |
<p>I know that Hilbert schemes can be very singular. But are there any interesting and nontrivial Hilbert schemes that are smooth? Are there any necessary conditions or sufficient conditions for a Hilbert scheme to be smooth?</p>
|
David E Speyer
| 297 |
<p>The Hilbert scheme of n points on a 3-fold is not smooth for n sufficiently large, but the exact value of sufficiently large is unknown. See the chapter on Hilbert schemes in "Combinatorial Commutative Algebra", by Miller and Sturmfels.</p>
|
244 |
<p>I know that Hilbert schemes can be very singular. But are there any interesting and nontrivial Hilbert schemes that are smooth? Are there any necessary conditions or sufficient conditions for a Hilbert scheme to be smooth?</p>
|
mdeland
| 397 |
<p>Here is yet another example of a smooth Hilbert scheme. Let $X$ be a smooth degree 3 hypersurface in projective space of dimension $n \geq 3$ (say, over an algebraically closed field), and let $H$ be the Hilbert scheme of lines on $X$ (i.e., corresponding to Hilbert polynomial $t + 1$). </p>
<p>The tangent space to $H$ at a point $[L]$ (corresponding to a line $L$ in $X$) is $H^0(L, N)$ where $N$ is the normal bundle of $L$ in $X$. The rank of $N$ is $n - 2$ and the degree of $N$ is $n - 4$ (you can see this by looking at the standard tangent bundle and normal bundle sequences). Every vector bundle on $L = \mathbb{P}^1$ splits into the direct sum of line bundles. Then the degree of each rank 1 summand of $N$ is at most 1 ($N$ injects into the normal bundle of $L$ in $\mathbb P^n$) and then you can show that no piece can have degree less than $-1$. This allows us to conclude that $H^1(L, N) = 0$. This means that $H$ is smooth at the point $[L]$ (see for example Kollár's book <em>Rational Curves on Algebraic Varieties</em>, Chapter 1, where he explains the infinitesimal behavior of the Hilbert scheme). Since this is true for any line $L$ in $X$, the Hilbert scheme is smooth. </p>
<p>The same argument works for lines on a smooth Quadric. In the same book, Kollár proves that for a general degree $d$ hypersurface $X$ in $\mathbb P^n$, the Hilbert scheme of lines on $X$ is smooth. </p>
|
1,990,105 |
<p>Presume that $(x_n)$ is a sequence s. t.</p>
<p>$|x_n-x_{n+1}| \le 2^{-n}$ for all $n \in \mathbb N$</p>
<p>Prove that $x_n$ converges.</p>
<p>What I've tried to think: since $2^{-n}$ converges to 0, and the difference between the terms $x_n$ and $x_{n+1}$ is smaller or equal to it, then $x_n$ must be a cauchy sequence and converge. How do I make this an actual proof?</p>
|
Yiorgos S. Smyrlis
| 57,021 |
<p>Let $\varepsilon>0$.</p>
<p>There exists an $N$, such that $2^{-N}<\varepsilon$. (Say $N=\lfloor-\log_2\varepsilon\rfloor+1$.)</p>
<p>Now let $m>n>N$. Then
$$
\lvert x_m-x_n\rvert\le \sum_{k=n}^{m-1}\lvert x_{k+1}-x_k\rvert\le
\sum_{k=n}^{m-1}2^{-k}<2^{-(n-1)}\le 2^{-N}<\varepsilon.
$$</p>
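<p>As a quick numerical sanity check of the geometric tail bound used above (a sketch, not part of the proof):</p>

```python
# check the geometric tail bound: sum_{k=n}^{m-1} 2^{-k} < 2^{-(n-1)}
for n in range(1, 12):
    for m in range(n + 1, 25):
        tail = sum(2.0 ** -k for k in range(n, m))
        assert tail < 2.0 ** -(n - 1)
print("tail bound verified for 1 <= n < m <= 24")
```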
|
125,834 |
<p>Is there a general formula for the number of distinguishable arrangements of length $\ell$ of a sequence of size $s$, given the repeated elements? So,</p>
<p>For example, would there be a general formula to solve the problem:</p>
<p>How many distinguishable 5-letter arrangements are there of the word "PROBABILITY" ?</p>
<p>I'm looking for a general formula.</p>
|
Jair Taylor
| 28,545 |
<p>This may indeed be "overkill", but there is a fairly simple general formula for the exponential generating function. Let <span class="math-container">$n_i$</span> be the number of times the <span class="math-container">$i$</span>th letter is repeated - the order we put the letters in doesn't matter of course. In your example, for the word PROBABILITY we have one P, one R, one O, two B's, one A, two I's, one L, one T, and one Y, so we get <span class="math-container">$$(n_1, \ldots, n_9) = (1,1,1,2,1,2,1,1,1).$$</span> Now let <span class="math-container">$a_l$</span> be the number of words of length <span class="math-container">$l$</span>, and <span class="math-container">$m$</span> be the number of distinct letters. I claim <span class="math-container">$$\sum_l \frac{a_l}{l!} x^l = \prod_{i=1}^m \sum_{j=0}^{n_i} \frac{x^{j}}{j!};$$</span> in your example this is <span class="math-container">$$\sum_l \frac{a_l}{l!} x^l = (1+x)^7(1+x + x^2/2)^2$$</span> since we have <span class="math-container">$7$</span> letters appearing once and <span class="math-container">$2$</span> letters appearing twice each. So if you want to know the number of such words with a specific length, you can extract that coefficient (probably with a computer). In this case we get <span class="math-container">$$ (1+x)^7(1+x + x^2/2)^2 = 1 + 9x + 37x^2 + 92x^3 + 617/4 x^4 + 735/4x^5 + ...$$</span></p>
<p>and so the answer is <span class="math-container">$735/4 *5!$</span>.</p>
<p>To see this, note that if we restrict ourselves to a given (multi)-set of letters (with possible repetitions) and require that our word have all of these letters (each letter repeated as many times as it appears in our multiset), we can count the number of words by the so-called multinomial coefficient - if we have <span class="math-container">$r_i$</span> of the <span class="math-container">$i$</span>th letters, the total number of words is <span class="math-container">$${ r_1 + \ldots + r_m \choose r_1, \ldots , r_m} :=\frac{(r_1 + ... + r_m)!}{r_1!...r_m!}$$</span> since we may choose any permutation of the total set of symbols with repetition, and then divide out the number of permutations of the like symbols.</p>
<p>Now consider the expansion of the right-hand side above: if we choose to use the <span class="math-container">$r_i$</span>th power in the <span class="math-container">$i$</span>th factor, we get the term <span class="math-container">$$\frac{x^{r_1 + ... + r_m}}{r_1! ... r_m!} = \frac{x^s}{s!} {s \choose r_1, ..., r_m} $$</span> where <span class="math-container">$s = \sum_i r_i$</span>; and then summing all the terms gives the desired formula.</p>
<p>The same analysis shows that the answer is <span class="math-container">$$a_l = \sum_{r_1 + ... + r_m = l, r_i \leq n_i} {l \choose r_1, \ldots, r_m}$$</span> although this is not as pretty.</p>
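<p>As a sanity check of the arithmetic, here is a short Python sketch (the function name <code>count_arrangements</code> is mine) that multiplies out the EGF factors with exact fractions and compares against brute-force enumeration:</p>

```python
from fractions import Fraction
from math import factorial
from collections import Counter
from itertools import permutations

def count_arrangements(word, length):
    """Number of distinguishable `length`-letter arrangements drawn from the
    multiset of letters of `word`, read off the EGF prod_i sum_j x^j / j!."""
    poly = [Fraction(1)]                      # the constant polynomial 1
    for n_i in Counter(word).values():
        factor = [Fraction(1, factorial(j)) for j in range(n_i + 1)]
        new = [Fraction(0)] * min(len(poly) + len(factor) - 1, length + 1)
        for a, ca in enumerate(poly):         # polynomial product,
            for b, cb in enumerate(factor):   # truncated beyond x^length
                if a + b <= length:
                    new[a + b] += ca * cb
        poly = new
    return int(poly[length] * factorial(length))

print(count_arrangements("PROBABILITY", 5))      # 22050 = (735/4) * 5!
print(len(set(permutations("PROBABILITY", 5))))  # brute force agrees
```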
|
2,672,497 |
<p>$$\lim _{n\to \infty }\sum _{k=1}^n\frac{1}{n+k+\frac{k}{n^2}}$$ I unsuccessfully tried to find two different Riemann Sums converging to the same value close to the given sum so I could use the Squeeze Theorem. Is there any other way to solve this?</p>
|
Mark Viola
| 218,419 |
<p>It is easy to see that</p>
<p>$$ \begin{align}
\left |\sum_{k=1}^n \frac{1}{n+k+k/n^2}-\sum_{k=1}^n \frac1{n+k}\right|&=\frac1{n^2}\sum_{k=1}^n \frac{k}{(n+k+k/n^2)(n+k)}\\\\
&\le \frac1{n^2}\sum_{k=1}^n \frac1k\tag 1
\end{align}$$ </p>
<p>Then, using $\sum_{k=1}^n\frac1k =\gamma+\log(n) +O\left(\frac1n\right)$, we see that the limit of the left-hand side of $(1)$ as $n\to \infty$ is $0$.</p>
<p>Can you finish now?</p>
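<p>Since $\sum_{k=1}^n \frac1{n+k}$ is a Riemann sum for $\int_0^1\frac{dx}{1+x}=\log 2$, the limit should be $\log 2$; a quick numerical sketch supports this:</p>

```python
import math

def S(n):
    # the sum in the question
    return sum(1.0 / (n + k + k / n ** 2) for k in range(1, n + 1))

for n in (10, 100, 10000):
    print(n, S(n))
print("log 2 =", math.log(2))  # S(n) approaches 0.6931...
```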
|
496,882 |
<p>If I have an experiment that has $1000$ trials, and $10$% of the time there is an error, what is the approximate probability that I will have $125$ failures? I figured out that $\mu =100$ and $\sigma =9.4868$. I'm trying to approximate with a normal distribution, since with this many trials, and the fact that each experiment is independent the total number of failrues should be normally distributed. So I set up my problem as so $$P(X\le 125)=P\left(z\le \frac{125-100}{\frac{9.4868}{\sqrt{1000}}}\right)=\Phi\left(\frac{125-100}{\frac{9.4868}{\sqrt{1000}}}\right)$$ But there's no way that with probability $1$ there are at most $125$ errors. So what am I missing?</p>
|
robjohn
| 13,854 |
<p>You are computing the value for going over $83$ standard deviations above the mean. The chance of going above that is approximately
$$
\frac1{\sqrt{2\pi}}\frac1{83}e^{-83^2/2}
$$
which is extremely small (on the order of $10^{-1499}$). The chance of landing below $83$ standard deviations is therefore approximately $1$.</p>
<p>However, as Dilip Sarwate points out, we should use the standard deviation of $9.4868$ and not $\frac{9.4868}{\sqrt{1000}}$. That puts us at $2.6352$ standard deviations above the mean for $0.99580$ probability. </p>
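<p>For concreteness, here is a small Python sketch (using the error-function form of the standard normal CDF) that reproduces the corrected computation:</p>

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, p = 1000, 0.1
mu = n * p                            # 100
sigma = math.sqrt(n * p * (1 - p))    # 9.4868...
print(sigma)
print(Phi((125 - mu) / sigma))        # ≈ 0.9958
```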
|
1,018,672 |
<blockquote>
<p><span class="math-container">$$\int_0^{\infty} \frac{1}{x^3-1}dx$$</span></p>
</blockquote>
<p>What I did:</p>
<p><span class="math-container">$$\lim_{\epsilon\to0}\int_0^{1-\epsilon} \frac{1}{x^3-1}dx+\lim_{\epsilon\to0}\int_{1+\epsilon}^{\infty} \frac{1}{x^3-1}dx$$</span></p>
<hr />
<p><span class="math-container">$$\lim_{\epsilon\to0}\int_0^{1-\epsilon}\frac{1}{3(x-1)}-\frac{2x+1}{6(x^2+x+1)}-\frac{1}{2(x^2+x+1)}dx+\lim_{\epsilon\to0}\int_{1+\epsilon}^{\infty}\frac{1}{3(x-1)}-\frac{2x+1}{6(x^2+x+1)}-\frac{1}{2(x^2+x+1)}dx$$</span></p>
<hr />
<p><span class="math-container">$$\lim_{\epsilon\to0}\int_0^{1-\epsilon}\frac{1}{3(x-1)}-\frac{2x+1}{6(x^2+x+1)}-\frac{1}{2[(x+\frac{1}{2})^2+\frac{3}{4}]}dx+\lim_{\epsilon\to0}\int_{1+\epsilon}^{\infty}\frac{1}{3(x-1)}-\frac{2x+1}{6(x^2+x+1)}-\frac{1}{2[(x+\frac{1}{2})^2+\frac{3}{4}]}dx$$</span></p>
<hr />
<p><span class="math-container">$$[\frac{1}{3}ln(x-1)-\frac{1}{6}ln(x^2+x+1)-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{0}^{1-\epsilon}+[\frac{1}{3}ln(x-1)-\frac{1}{6}ln(x^2+x+1)-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{1+\epsilon}^{\infty}$$</span></p>
<hr />
<p><span class="math-container">$$[\frac{1}{6}(2ln(x-1)-ln(x^2+x+1))-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{0}^{1-\epsilon}+[\frac{1}{6}(2ln(x-1)-ln(x^2+x+1))-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{1+\epsilon}^{\infty}$$</span></p>
<hr />
<p><span class="math-container">$$[\frac{1}{6}ln(\frac{(x-1)^2}{x^2+x+1})-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{0}^{1-\epsilon}+[\frac{1}{6}ln(\frac{(x-1)^2}{x^2+x+1})-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{1+\epsilon}^{\infty}$$</span></p>
<hr />
<p><span class="math-container">$$[\frac{1}{6}ln(\frac{x^2-2x+1}{x^2+x+1})-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{0}^{1-\epsilon}+[\frac{1}{6}ln(\frac{x^2-2x+1}{x^2+x+1})-\frac{1}{\sqrt3}\arctan(\frac{2x+1}{\sqrt3})]_{1+\epsilon}^{\infty}$$</span></p>
<hr />
<p><span class="math-container">$$\lim_{\epsilon\to0}[\frac{1}{6}ln(\frac{(1-\epsilon)^2-2(1-\epsilon)+1}{(1-\epsilon)^2+1-\epsilon+1})-\frac{1}{\sqrt3}\arctan(\frac{2(1-\epsilon)+1}{\sqrt3})+\frac{1}{\sqrt3}\arctan(\frac{1}{\sqrt3})]+\lim_{\epsilon\to 0} [ \frac{1}{6}ln(\frac{(\infty)^2-2(\infty)+1}{(\infty)^2+(\infty)+1})+\cdots]$$</span></p>
<hr />
<blockquote>
<p>This is where my problem is, what is :</p>
<p><span class="math-container">$$ \frac{1}{6}ln(\frac{(\infty)^2-2(\infty)+1}{(\infty)^2+(\infty)+1})$$</span></p>
</blockquote>
<p>^^^ If I know past this, I know how to proceed. The only thing stopping me is this ^^^. Please help.</p>
|
Timbuc
| 118,527 |
<p>Using partial fractions directly, I think it is simpler:</p>

<p>$$\int\frac1{x^3-1}dx=\frac13\int\left(\frac1{x-1}-\frac{x+2}{x^2+x+1}\right)dx=$$</p>

<p>$$=\frac13\left(\log|x-1|-\frac12\int\frac{2x+1}{x^2+x+1}dx-\frac32\int\frac{dx}{\left(x+\frac12\right)^2+\frac34}\right)=$$</p>

<p>$$=\frac13\log|x-1|-\frac16\log(x^2+x+1)-\frac1{\sqrt3}\int\frac{\frac2{\sqrt3}\,dx}{1+\left(\frac{2x+1}{\sqrt3}\right)^2}=$$</p>

<p>$$=\log\frac{\sqrt[3]{|x-1|}}{\sqrt[6]{x^2+x+1}}-\frac1{\sqrt3}\arctan\frac{2x+1}{\sqrt3}+C,$$</p>

<p>and etc.</p>
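<p>Note that the integrand has a non-integrable pole at $x=1$, so the integral exists only as a Cauchy principal value — which is what the OP's symmetric $\epsilon$-limits compute. A numerical sketch using an antiderivative of $\frac1{x^3-1}$ gives $-\frac{\pi}{3\sqrt3}\approx-0.6046$:</p>

```python
import math

def F(x):
    """Antiderivative of 1/(x^3 - 1) for x != 1 (up to a constant)."""
    return (math.log(abs(x - 1)) / 3
            - math.log(x * x + x + 1) / 6
            - math.atan((2 * x + 1) / math.sqrt(3)) / math.sqrt(3))

# principal value: the log|x - 1| terms at 1 - eps and 1 + eps cancel
eps, big = 1e-8, 1e8
pv = (F(1 - eps) - F(0)) + (F(big) - F(1 + eps))
print(pv, -math.pi / (3 * math.sqrt(3)))  # both ≈ -0.604600
```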
|
607,917 |
<p>Any Help solving this question ?</p>
<blockquote>
<p>a) Find ONE solution <span class="math-container">$\overline x\in\Bbb Z/325\Bbb Z$</span> such that <span class="math-container">$x^2\equiv-1\pmod{325}$</span>. (Hint: CRT and lifting.)</p>
<p>b) How many solutions <span class="math-container">$\overline x$</span> to the above equation are there, and why?</p>
</blockquote>
|
Tim Ratigan
| 79,602 |
<p>$325=5^2\cdot 13$ so let's solve $x^2\equiv -1\pmod{25}$, $x^2\equiv -1\pmod{13}$.</p>
<p><strong>Case $1$: $\mod{25}$</strong></p>
<p>If $x^2\equiv -1\pmod{25}$, then $x^2\equiv -1\pmod 5$ so $x\equiv\pm2\pmod 5$</p>
<p>$$(5k+2)^2=25k^2+20k+4\equiv 20k+4\equiv-1\pmod{25}\implies 4k\equiv -1\pmod 5\\\implies k\equiv 1\pmod 5$$
$$(5k-2)^2=25k^2-20k+4\equiv -20k+4\equiv-1\pmod{25}\implies 4k\equiv 1\pmod 5\\\implies k\equiv 4\pmod 5$$
So we have $x^2\equiv-1\pmod{25}\implies x\equiv\pm7\pmod{25}$</p>
<p><strong>Case $2$: $\mod 13$</strong></p>
<p>We see upon inspection that $x^2\equiv -1\pmod{13}\implies x=\pm 5$</p>
<p>See if you can use CRT to find all solutions to $x^2\equiv -1\pmod{325}$</p>
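<p>For part (b): by CRT there are $2\times2=4$ solutions, one for each choice of sign modulo $25$ and modulo $13$. A brute-force Python sketch confirms this:</p>

```python
# all x with x^2 ≡ -1 (mod 325); CRT predicts 2 * 2 = 4 of them
sols = [x for x in range(325) if (x * x + 1) % 325 == 0]
print(sols)  # [18, 57, 268, 307]

for x in sols:
    # each solution reduces to ±7 mod 25 and ±5 mod 13
    assert x % 25 in (7, 18) and x % 13 in (5, 8)
```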
|
1,304,971 |
<p>Using a triangular facet approximation of a sphere based on <a href="http://paulbourke.net/geometry/circlesphere/" rel="nofollow">Sphere Generation by Paul Bourke</a>.</p>
<p>We take an octahedron and bisect the edges of its facets
to form 4 triangles from each triangle.</p>
<p><code>
/\ /\ /\
/ \ / \ /__\
/ \ / \ /\ /\
/ \ /______\ /__\/__\
/ \ /\ /\ /\ /\ /\
/ \ / \ / \ /__\/__\/__\
/ \ / \ / \ /\ /\ /\ /\
/______________\ /______\/______\ /__\/__\/__\/__\
0th generation 1st generation 2nd generation
</code></p>
<p>This happens for every face so 8 times for the first generation.</p>
<p>Between each generation new vertices are pushed to the surface of the sphere.</p>
<p>The number of facets will be $(4^\mathrm{generations}) \cdot 8$</p>
<p>Some facets will share vertices: in the $0^{th}$ generation there are 6 unique vertices (it's an octahedron).</p>
<p>How many unique vertices will there be in the $N^{th}$ generation?</p>
|
user7530
| 7,530 |
<p>The key is the Euler characteristic formula
$$V-E+F = 2.$$</p>
<p>You've already calculated that $F=8\cdot 4^N$. You also know that all of the facets are triangles; that means that each triangle has three half-edges or $3F=2E$. That means
$$2V - F = 4$$
or
$$V = 2 + 4^{N+1}.$$</p>
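<p>A quick Python sketch (the helper name <code>vertices</code> is mine) cross-checks the formula against Euler's formula for small $N$:</p>

```python
def vertices(N):
    # V = 2 + 4^(N+1): unique vertices after N subdivision generations
    return 2 + 4 ** (N + 1)

for N in range(6):
    F = 8 * 4 ** N        # triangular facets
    E = 3 * F // 2        # each triangle contributes 3 half-edges
    V = 2 + E - F         # Euler: V - E + F = 2
    assert V == vertices(N)
    print(N, V)           # N = 0 gives 6, the octahedron's vertices
```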
|
2,564,256 |
<p>The definition of general complex log for any non-zero complex number $z$ is
$$Log(z)=\log|z|+i[\arg(z)+2m\pi], m\in \mathbb{Z}$$</p>
<p>With this, if $n\in \mathbb{N}$ then
$Log(z^{1/n})=\frac{1}{n} Log(z)$ holds for all non-zero complex number $z$. </p>
<p>I verified this for $Log(i^{1/2})=\frac12 Log(i)$ successfully but could not make up with the following: $$Log(i^{1/3})=\frac13 Log(i)$$</p>
<p>Let me show what I have done and where I got stuck. </p>
<p>Since $i=\cos(2n\pi+\frac{\pi}{2})+i \sin(2n\pi+\frac{\pi}{2}), n\in \mathbb{Z}$ then by De-Moivre' s theorem, we have
\begin{align}
i^{1/3}= & \cos\left(\frac{2n\pi+\frac{\pi}{2}}{3}\right)+i\sin\left(\frac{2n\pi+\frac{\pi}{2}}{3}\right), n=0,1,2 \\
=& \cos\left(\frac{(4n+1)\pi}{6}\right)+i\sin\left(\frac{(4n+1)\pi}{6}\right), n=0,1,2\\
= &\begin{cases}
\cos(\frac{\pi}{6})+i\sin(\frac{\pi}{6}) \\
\cos(\frac{5\pi}{6})+i\sin(\frac{5\pi}{6}) \\
\cos(\frac{9\pi}{6})+i\sin(\frac{9\pi}{6})
\end{cases}\\
= &\begin{cases}
\cos(\frac{\pi}{6})+i\sin(\frac{\pi}{6}) \\
\cos(\pi-\frac{\pi}{6})+i\sin(\pi-\frac{5\pi}{6}) \\
-\cos(\frac{\pi}{2})-i\sin(\frac{\pi}{2})
\end{cases}\\
= &\begin{cases}
\frac{\sqrt{3}}{2}+i\frac{1}{2} \\
-\frac{\sqrt{3}}{2}+i\frac{1}{2} \\
0-i
\end{cases}
\end{align}</p>
<p>Now LHS:
\begin{align}
\frac13 Log(i)=&\frac13[\log|i|+i\{\arg(i)+2n_1 \pi\}], n_1\in \mathbb{Z}\\
=&\frac13(2n_1\pi+\frac{\pi}{2}), n_1\in \mathbb{Z}\\
=&(4n_1+1)\frac{\pi i}{6}, n_1\in \mathbb{Z}
\end{align}</p>
<p>whereas we see that
\begin{align}
&Log(i^{1/3})\\
=&\begin{cases}
\log|\frac{\sqrt{3}}{2}+i\frac{1}{2}|+i[\arg(\frac{\sqrt{3}}{2}+i\frac{1}{2})+2m_1\pi]\\
\log|-\frac{\sqrt{3}}{2}+i\frac{1}{2}|+i[\arg(-\frac{\sqrt{3}}{2}+i\frac{1}{2})+2m_2\pi]\\
\log|-i|+i[\arg(-i)+2m_3\pi]
\end{cases}\\
=&\begin{cases}
i[\frac{\pi}{6}+2m_1\pi]\\
i[\frac{5\pi}{6}+2m_2\pi]\\
i[\frac{-\pi}{2}+2m_3\pi]
\end{cases}, m_1, m_2, m_3\in \mathbb{Z}
\end{align}</p>
<p>And here I got stuck. I don't know how to finish. Any help will be appreciated. </p>
|
Gaurang Tandon
| 89,548 |
<p>Let $z^3-i=0$; then, by the cube roots of unity, its roots are $-i, -i\omega, -i\omega^2$, where $\omega=e^{i2\pi/3}$ is a primitive cube root of unity. These values are $e^{i3\pi/2}, e^{i\pi/6}, e^{i5\pi/6}$.</p>
<p><strong>Hint:</strong> I am showing this proof for $z=e^{i3\pi/2}$, but the procedure is the same for the other two values.</p>
<p>So, $$\ln i^{1/3}=i3\pi/2$$</p>
<p>So, if $\frac13\cdot \ln i=i3\pi/2$,<br>
then $\ln i=i9\pi/2$; that is, $i=e^{i9\pi/2}=e^{i\pi/2+i4\pi}=e^{i\pi/2}$, which is true. Hence, proved.</p>
<p><strong>Where did you go wrong?</strong> You were using De Moivre's theorem and expanding into $\cos$ and $\sin$, when you should have used the Euler form instead. That is, you were using the right theorem in the wrong place.</p>
<p><strong>Why use Euler's form?</strong> Well, you already have $\ln$ in your question, and we can easily manipulate complex numbers under $\ln$ when they are in Euler form.</p>
<p>Hope it helps!</p>
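<p>As a quick numerical check of the three cube roots of $i$ listed above, using Python's <code>cmath</code>:</p>

```python
import cmath

# the three cube roots of i: exp(i(pi/2 + 2k*pi)/3), k = 0, 1, 2
roots = [cmath.exp(1j * (cmath.pi / 2 + 2 * k * cmath.pi) / 3) for k in range(3)]
for r in roots:
    assert abs(r ** 3 - 1j) < 1e-12  # each cubes back to i
    print(r)
```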
|
2,424,722 |
<blockquote>
<p>Let a,b,c be real numbers such that $a+2b+c=4$. What is the value of $\max(ab+bc+ac)$</p>
</blockquote>
<p>My attempt:</p>
<p>Squaring both the sides: </p>
<p>$a^2 +4b^2+c^2+2ac+4bc+4ab=16$</p>
<p>Then I tried factoring after bringing 16 to LHS but couldn't. It's not even a quadratic in one variable or else I could have directly found the maximum. </p>
<p>How do I proceed? </p>
|
Donald Splutterwit
| 404,247 |
<p>Lagrange multipliers will give the solution
\begin{eqnarray*}
L=ab+bc+ca+ \lambda (a+2b+c-4)
\end{eqnarray*}
Differentiating gives
\begin{eqnarray*}
b+c+ \lambda =0 \\
c+a +2\lambda =0 \\
a+b+ \lambda =0 \\
\end{eqnarray*}
Add these equation together and substitute to get $ \lambda = \frac{b-4}{2}$ now substitute this back into the equations above and we get $b=0, a=c=2$. So the maximum value is $\color{blue}{4}$.</p>
<p>Edit : Ooops ... it is a precalculus question ...
\begin{eqnarray*}
L=ab+bc+ca=ab+(a+b)(4-a-2b)=-a^2-2ab-2b^2+4(a+b) = \\ \underbrace{-\frac{1}{2}(2b+a-2)^2}_{2b+a=2}-\underbrace{\frac{1}{2}(a-2)^2}_{a=2}+\color{blue}{4}
\end{eqnarray*}</p>
|
2,424,722 |
<blockquote>
<p>Let a,b,c be real numbers such that $a+2b+c=4$. What is the value of $\max(ab+bc+ac)$</p>
</blockquote>
<p>My attempt:</p>
<p>Squaring both the sides: </p>
<p>$a^2 +4b^2+c^2+2ac+4bc+4ab=16$</p>
<p>Then I tried factoring after bringing 16 to LHS but couldn't. It's not even a quadratic in one variable or else I could have directly found the maximum. </p>
<p>How do I proceed? </p>
|
dxiv
| 291,201 |
<p>From the given condition $\,2b=4-a-c\,$, then:</p>
<p>$$\require{cancel}
\begin{align}
2(ab+bc+ca) &= 2b(a+c)+2ac \\
&= (a+c)(4-a-c)+2ac= \\
&= 4a - a^2 - \bcancel{ac} + 4c - \bcancel{ac} - c^2 + \bcancel{2ac} = \\
&= \color{red}{8} -a^2 + 4a \color{red}{-4} - c^2 +4c \color{red}{-4} = \\
&= 8 - (a-2)^2 - (c-2)^2 \\
&\le 8
\end{align}
$$</p>
<p>Equality is attained for $a=c=2\,$ (and hence $b=0$), so the upper bound is in fact a maximum: $\max(ab+bc+ca)=8/2=4$.</p>
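<p>A crude grid-search sketch in Python (not a proof) agrees that the maximum is $4$, attained at $(a,b,c)=(2,0,2)$:</p>

```python
# grid search over a, b with c = 4 - a - 2b forced by the constraint
best = max(
    (a * b + b * c + c * a, a, b, c)
    for a in (i / 10 for i in range(-50, 51))
    for b in (j / 10 for j in range(-50, 51))
    for c in [4 - a - 2 * b]
)
print(best)  # (4.0, 2.0, 0.0, 2.0)
```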
|
3,744,610 |
<p>The following puzzle was introduced by the psychologist Peter Wason in 1966, and is one of the
most famous subject tests in the psychology of reasoning.</p>
<blockquote>
<p>Four cards are placed on the table in front of you. You are told
(truthfully) that each has a letter printed on one side and a digit on
the other, but of course you can only see one face of each. What you
see is:</p>
<pre><code> B E 4 7
</code></pre>
<p>You are now told that the cards you are looking at were chosen to
follow the rule "If there is a vowel on one side, then there is an odd
number on the other side." What is the least number of cards you have
to turn over to verify this rule, and which cards do you in fact have
to turn over?</p>
</blockquote>
<p>My solution:</p>
<blockquote>
<p>Assuming $A$: vowel on one side, $B$: odd number on the other side:</p>
<p>by the data given to us, $A \Rightarrow B$. We have to verify this rule. We can do this by proving that the contrapositive of the given statement is true, which says "no odd number implies no vowel." By doing this, we'll have to turn the "E" and the "4" card.</p>
</blockquote>
<p>The correct answer is : "E" and "7" card.</p>
<p>Working:</p>
<blockquote>
<p>You need to turn over 2 cards (the E and the 7). The rule can be
(simply) equivalently stated like so: There isn't a vowel on one side
OR there is an odd number on one side. Thus, if you see a letter and
it isn't a vowel, you don't need to verify the rule; likewise, if you
see a number and it's odd, you don't need to verify the rule.</p>
</blockquote>
<p>I have two concerns, 1. How is my reasoning wrong. and 2. How is the statement -"There isn't a vowel on one side OR there is an odd number on one side" equivalent to - "If there is a vowel on one side, then there is an odd number on the other side."</p>
|
Deepak
| 151,732 |
<p>Your answer (not the book's) is correct. You are given a proposition that is a unidirectional implication <span class="math-container">$vowel \implies odd$</span>. You need to test this by turning over any vowel card(s) you see to confirm the reverse side(s) is(are) odd(s).</p>
<p>But every implication also implies the contrapositive. Which means <span class="math-container">$not \ odd \implies not \ vowel$</span> which is equivalent to <span class="math-container">$even \implies consonant$</span>. So you need to test this by turning over any even card(s) to confirm the reverse side(s) is(are) consonant(s).</p>
<p>In your set, that means only <span class="math-container">$E$</span> and <span class="math-container">$4$</span> need be turned over. The rest are irrelevant to the proposition.</p>
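<p>Regarding the asker's second concern: "$A$ implies $B$" and "not $A$, or $B$" have the same truth table — this is just material implication, as a two-line Python check confirms:</p>

```python
# material implication: (vowel -> odd) agrees with ((not vowel) or odd)
for vowel in (False, True):
    for odd in (False, True):
        implies = odd if vowel else True   # truth value of "vowel -> odd"
        assert implies == ((not vowel) or odd)
print("the two statements agree on all four cases")
```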
|
3,694,661 |
<p>I was stuck on a problem from <a href="https://rads.stackoverflow.com/amzn/click/com/0821804308" rel="nofollow noreferrer" rel="nofollow noreferrer" title="Quite a Fun Recreational Math Book">Mathematical Circles: Russian Experience</a>, which reads as follows:</p>
<blockquote>
<p><em>Prove that the number <span class="math-container">$6n^3 + 3$</span> cannot be a perfect sixth power of an integer for any natural number n</em>. </p>
</blockquote>
<p>The problems previous to this dealt with proving that numbers cannot be a cube and cannot be a square. The hints offered to these problem said that a square leaves a remainder of 0 or 1 when divided by 3 or 4, and that a cube leaves a remainder of 0, 1 or 8 when divided by 9. However, for this problem, the hint states that the reader should "experiment by dividing the number by 7 and comparing it remainders of sixth powers divided by 7". </p>
<p>Where did that come from? How would the solver figure out that <span class="math-container">$6n^3 + 3$</span> should be divided by 7? Moreover, why are 3 and 4 used in proving facts about squares, and why is 9 used when proving facts about cubes? Was this mainly through trial and error over the years, or is there some obvious fact that I'm blanking out on? </p>
<p>Thanks!</p>
|
Anas A. Ibrahim
| 650,028 |
<p>I guess that the modulus is chosen because, when working with modular arithmetic, one would directly consider <strong>Euler's theorem</strong> (which is a generalization of Fermat's little theorem) and see if it helps:
<span class="math-container">$$a^{\phi(n)} \equiv 1 \pmod{n}$$</span>
where <span class="math-container">$a$</span> and <span class="math-container">$n$</span> are relatively prime and <span class="math-container">$\phi(n)$</span> is Euler's totient function. Now, seeing that the problem involves <span class="math-container">$m^6$</span>, one would look for an <span class="math-container">$n$</span> in Euler's theorem with <span class="math-container">$\phi(n)=6$</span>. A well-known property of the totient function is that for any prime <span class="math-container">$p$</span>, <span class="math-container">$\phi(p)=p-1$</span>; this works for us, so we check modulo <span class="math-container">$7$</span>. As far as I know, there isn't a secret formula for which modulus will be most useful. However, there are some useful things to consider; check the Brilliant wiki "Diophantine equations: modular arithmetic considerations".</p>
<p>Now, back to the problem!</p>
<p><strong>Hint</strong>: try the case <span class="math-container">$7|6n^3+3$</span> and prove there's a contradiction to get <span class="math-container">$7 \nmid6n^3+3$</span> so you can use fermat's little theorem/euler's theorem and modular arithmetic to prove that it can't be.</p>
<p><strong>Solution</strong>:</p>
<p><span class="math-container">$$6n^3+3=m^6$$</span>
If <span class="math-container">$7| 6n^3+3$</span>, we have
<span class="math-container">$$6n^3 \equiv-3 \pmod{7} \implies 2n^3 \equiv-1 \equiv6 \pmod{7}$$</span>
<span class="math-container">$$\implies n^3 \equiv3 \pmod{7}$$</span>
which is impossible. So <span class="math-container">$7 \nmid 6n^3+3 \implies 7 \nmid m^6$</span>
<span class="math-container">$$\implies m^6 \equiv1 \pmod{7} \tag{FLT}$$</span>
<span class="math-container">$$\implies 6n^3 \equiv-2\equiv5 \pmod{7} \implies 3n^3 \equiv-1\equiv6 \pmod{7}$$</span>
<span class="math-container">$$\implies n^3 \equiv2 \pmod{7}$$</span>
which is also impossible, since the cubes mod $7$ are $0,1,6$.</p>
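<p>Both impossibility steps come down to which cubic residues exist modulo $7$; a brute-force Python sketch verifies this and spot-checks the original claim for small $n$:</p>

```python
# cubic residues mod 7: only 0, 1, 6 occur
cubes_mod7 = sorted({(n ** 3) % 7 for n in range(7)})
print(cubes_mod7)  # [0, 1, 6]

# spot-check: 6n^3 + 3 is never a perfect sixth power for small n
sixth_powers = {m ** 6 for m in range(200)}  # covers all values 6n^3 + 3 below
assert all(6 * n ** 3 + 3 not in sixth_powers for n in range(1, 1000))
print("no counterexample for n < 1000")
```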
|
1,621,792 |
<p>I'm stuck on this problem:</p>
<blockquote>
<p>Let <span class="math-container">$C\subset \mathbb{R}^n$</span> be closed and unbounded.</p>
<p>Suppose <span class="math-container">$f:C\to\mathbb{R}^m$</span> is continuous and such that <span class="math-container">$\lim_{x\to\infty} f(x)$</span> exists and is finite.</p>
<p>Show that <span class="math-container">$f$</span> is uniformly continuous on <span class="math-container">$C$</span>.</p>
</blockquote>
<p>I tried to prove it using the definition of limit and continuity but it didn't get me anywhere. I appreciate any advice.</p>
|
ncmathsadist
| 4,154 |
<p>Let $\epsilon > 0$ and let $L$ be the limit of $f$ at $\infty$. Choose $M$ so that $|x| > M \Rightarrow |f(x) - L| <\epsilon/2$; then any two points beyond $M$ have images within $\epsilon$ of each other. Notice that $\{x\in C : |x|\le M+1\}$ is closed and bounded, hence compact, so $f$ is uniformly continuous there. Can you do the rest?</p>
|
1,621,792 |
<p>I'm stuck on this problem:</p>
<blockquote>
<p>Let <span class="math-container">$C\subset \mathbb{R}^n$</span> be closed and unbounded.</p>
<p>Suppose <span class="math-container">$f:C\to\mathbb{R}^m$</span> is continuous and such that <span class="math-container">$\lim_{x\to\infty} f(x)$</span> exists and is finite.</p>
<p>Show that <span class="math-container">$f$</span> is uniformly continuous on <span class="math-container">$C$</span>.</p>
</blockquote>
<p>I tried to prove it using the definition of limit and continuity but it didn't get me anywhere. I appreciate any advice.</p>
|
Wolfy
| 217,910 |
<p>Suppose $f$ is continuous but not uniformly continuous on $[a,b]$. Then there exists an $\varepsilon_0 > 0$ such that $$|x - y| < \delta \ \Rightarrow \ |f(x) - f(y)| < \varepsilon_0 \ \ \text{is false} \ \forall \delta$$ For each $n$, taking $\delta = \frac{1}{n}$, we can find two points $x_n,y_n\in [a,b]$ such that $$|x_n - y_n| < \frac{1}{n} \ \ \text{and} \ \ |f(x_n) - f(y_n)| \geq \varepsilon_0$$ This gives us two sequences $\{x_n\}$ and $\{y_n\}$, both lying in $[a,b]$, with $|x_n - y_n| \rightarrow 0$. By the Bolzano–Weierstrass theorem, there exists a subsequence $\{x_{n_{k}}\}$ of $x_n$ and a point $x\in [a,b]$ such that $x_{n_{k}}\rightarrow x$ (this is where we use the fact that the interval is closed and bounded). Now put $\tilde{x}_{n_{k}} = x_{n_{k}}$ and $\tilde{y}_{n_{k}} = y_{n_{k}}$. So $\tilde{x}_{n_{k}}\rightarrow x$, and $\tilde{y}_{n_{k}}$ is a subsequence of the original sequence $y_n$. </p>
<p>By the same reasoning, there exists $y\in [a,b]$ and a subsequence $\{\tilde{y}_{k_{s}}\}$ with $\tilde{y}_{k_{s}}\rightarrow y$. Put $\hat{x}_{s} = \tilde{x}_{k_{s}}$, $\hat{y}_{s} = \tilde{y}_{k_{s}}$. Then $\hat{x}_s$ is a subsequence of a convergent subsequence and hence also converges to $x$. Since $\hat{y}_s\rightarrow y$, and we have $$|\hat{x}_s - \hat{y}_s| \rightarrow 0,$$ we must have $$|f(\hat{x}_s) - f(\hat{y}_s)|\rightarrow 0.$$ This contradicts the fact that $|f(\hat{x}_s) - f(\hat{y}_s)| \geq \varepsilon_0$ is bounded away from zero. Therefore, our assumption that $f$ is not uniformly continuous is incorrect. </p>
|
131,579 |
<p>I need some help solving this problem.</p>
<p>A man is about to perform a random walk. He is standing a distance of 100 units from a wall. In his pocket, he has 10 playing cards: 5 red and 5 black.</p>
<p>He shuffles the cards and draws the top card.</p>
<p>If he draws a red card, he moves 50 units (half the distance from the wall) to the right (away from the wall).</p>
<p>If he draws a black card, he moves 50 units (half the distance from the wall) to the left (towards the wall).</p>
<p>How far from the wall will he be after all 10 cards have been drawn?</p>
<p>Thank you in advance for your help!</p>
|
Brian M. Scott
| 12,042 |
<p>Some hints: </p>
<ol>
<li><p>$792$ is even, so $13xy45z$ is even; this tells you something about the possibilities for $z$. In fact, it’s divisible by $8$, which means that the three-digit number $45z$ has to be divisible by $8$.</p></li>
<li><p>$792$ is divisible by $9$, so $13xy45z$ is as well; do you know the easy divisibility test for $9$?</p></li>
<li><p>$792$ is divisible by $11$, so $13xy45z$ is as well; do you know the easy divisibility test for $11$?</p></li>
</ol>
|
1,770,656 |
<p>In lecture I was given the following reasoning:</p>
<p>There are 7 options for the first character, 6 options for the last character, and 9! combinations for the 9 characters in between. Then you divide out the repeats -- 4 I's, 4 S's, and 2 P's. So the answer is 7 * 6 * 9! / (4! * 4! * 2!) = 13230</p>
<p>My own reasoning went like this:</p>
<p>There are 11! / (4! * 4! * 2!) combinations if you ignore the restriction on the I's. Then you just subtract out the combinations where I starts and ends the word, which is the same thing as finding out how many combinations are of the form I _ _ _ _ _ _ _ _ _ I, with the possible letters being 1 M, 2 I's (since we used two at the beginning and end), 4 S's, and 2 P's => 9! / (4! * 2! * 2!).</p>
<p>11! / (4! * 4! * 2!) - 9! / (4! * 2! * 2!) = 30870</p>
<p>Where does my reasoning go wrong?</p>
|
Ben
| 313,305 |
<p>You only subtracted the combinations that both begin <em>and</em> end with I. You must also exclude the cases that start with I or end with I (but not both).</p>
<p>So you want to calculate the number of combinations that start with I: $\frac{10!}{4!*3!*2!}$</p>
<p>and the number of combinations that end with I: $\frac{10!}{4!*3!*2!}$ (should be the same as number of combinations that start with I)</p>
<p>and the number of combinations that both start and end with I : $\frac{9!}{4!*2!*2!}$</p>
<p>So, by principle of inclusion and exclusion, the number of combinations that either start or end with I would be: $\frac{10!}{4!*3!*2!}+\frac{10!}{4!*3!*2!}-\frac{9!}{4!*2!*2!}=21420$</p>
<p>$\frac{11!}{4!*4!*2!}-21420=13230$</p>
<p>Hope that helps!</p>
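Both counts can be verified mechanically. A small sketch (my own, not part of the answer) that recomputes everything with multinomial coefficients:

```python
from math import factorial

def arrangements(counts):
    """Multinomial coefficient: distinct arrangements of a multiset."""
    result = factorial(sum(counts.values()))
    for c in counts.values():
        result //= factorial(c)
    return result

total = arrangements({'M': 1, 'I': 4, 'S': 4, 'P': 2})     # no restriction
start_I = arrangements({'M': 1, 'I': 3, 'S': 4, 'P': 2})   # I fixed in front
both_I = arrangements({'M': 1, 'I': 2, 'S': 4, 'P': 2})    # I at both ends
either = 2 * start_I - both_I            # inclusion-exclusion (start or end)
valid = total - either
# the lecture's direct count for comparison
direct = 7 * 6 * factorial(9) // (factorial(4) * factorial(4) * factorial(2))
```

Both routes land on the same number, 13230.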
|
889,728 |
<p>Evaluate $$\int_0^{1/2}x^3\arctan(x)\,dx$$</p>
<p>My work so far:
$x^3\arctan(x) = \sum_{n=0}^\infty(-1)^n \dfrac{x^{2n+4}}{2n+1}$</p>
<p>$$\int_0^{1/2}x^3\arctan(x)\,dx = \sum_{n=0}^\infty \frac{(-1)^{n+1}(1/2)^{2n+4}}{2n+1}$$</p>
<p>The desired accuracy needed is to four decimal places; that is where I am stuck.</p>
|
David H
| 55,051 |
<p><strong>Hint:</strong> You might consider first integrating parts:</p>
<p>$$\begin{align}
\int_{0}^{1/2}x^3\arctan{(x)}\,\mathrm{d}x
&=\frac14x^4\arctan{(x)}\bigg{|}_{0}^{1/2}-\frac14\int_{0}^{1/2}\frac{x^4}{1+x^2}\,\mathrm{d}x\\
&=\frac{1}{64}\arctan{\left(\frac12\right)}-\frac14\int_{0}^{1/2}\frac{x^4}{1+x^2}\,\mathrm{d}x\\
&=\frac{1}{64}\arctan{\left(\frac12\right)}-\frac14\int_{0}^{1/2}\left(x^2-1+\frac{1}{1+x^2}\right)\,\mathrm{d}x.
\end{align}$$</p>
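For a cross-check: finishing the hint's last elementary integral gives a closed form (the final expression below is my own algebra, worth re-deriving rather than taking on faith), which agrees with a direct numerical quadrature:

```python
from math import atan

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

numeric = simpson(lambda x: x ** 3 * atan(x), 0.0, 0.5)
# Finishing the hint: (1/64) arctan(1/2) - (1/4)[x^3/3 - x + arctan x] from 0 to 1/2
closed = 11.0 / 96.0 - (15.0 / 64.0) * atan(0.5)
```

Both evaluate to about 0.0059, which also answers the four-decimal-place requirement.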
|
96,957 |
<p>Stacks, of varying kinds, appear in algebraic geometry whenever we have moduli problems, most famously the stacks of (marked) curves. But these seem to be to be very geometric in motivation, so I was wondering if there are natural examples of stacks that arise in arithmetic geometry or number theory.</p>
<p>To me, as a non-expert, it seems like it's all Galois representations and estimates on various numeric measures (counting points or dimensions of things) based on other numeric things (conductors, heights, etc). </p>
<p>I asked this question at M.SE (here : <a href="https://math.stackexchange.com/questions/143746/stacks-in-arithmetic-geometry">https://math.stackexchange.com/questions/143746/stacks-in-arithmetic-geometry</a> please vote to close if you can) because I thought it a bit too 'recreational', but with no success. What I am after is not just stacks which can be seen as arithmetic using number fields or rings of integers, but which are actually used in number-theoretic problems, or have a number-theoretic origin. Maybe there aren't any, but it doesn't hurt to ask.</p>
<p>EDIT: I have belatedly made this question CW, as I've realised, too late, that there is clearly not one correct answer.</p>
|
Zack Wolske
| 18,086 |
<p>One big recent example would be Lafforgue's proof of the Langlands correspondence for $GL_n$ of function fields (<a href="http://arxiv.org/abs/math.NT/0212399" rel="noreferrer">http://arxiv.org/abs/math.NT/0212399</a>), which uses stacks of schtukas. It is similar to Drinfel'd's proof for $GL_2$, but with the moduli space being an essential component. </p>
<p>More readable versions, with additional context, are given by Lafforgue's advisor Gerard Laumon (<a href="http://arxiv.org/abs/math.AG/0003131" rel="noreferrer">http://arxiv.org/abs/math.AG/0003131</a> if you can read French) and by his student Ngo Dac Tuan (MR2402699 on MathSciNet, or <a href="http://www.impan.pl/~pragacz/download/Ngo.pdf" rel="noreferrer">http://www.impan.pl/~pragacz/download/Ngo.pdf</a>)</p>
|
390,644 |
<p>I'm trying to solve this recurrence relation:</p>
<p>$$
a_n = \begin{cases}
0 & \mbox{for } n = 0 \\
5 & \mbox{for } n = 1 \\
6a_{n-1} - 5a_{n-2} + 1 & \mbox{for } n > 1
\end{cases}
$$</p>
<p>I calculated generator function as:
$$
A = \frac{31x - 24x^2}{1 - 6x + 5x^2} + \frac{x^3}{(1-x)(1-6x+5x^2)} =
\frac{31x - 24x^2}{(x-1)(x-5)} + \frac{x^3}{(1-x)(x-1)(x-5)}
$$</p>
<p>(I'm not sure if that's right)</p>
<p>and its partial fractions decomposition looks like:
$$
A = \left(\frac{-7}{4} \cdot \frac{1}{x-1} - \frac{445}{4} \cdot \frac{1}{x-5}\right) +
\left( \frac{39}{16} \cdot \frac{1}{x-5} + \frac{3}{4} \cdot \frac{1}{(x-1)^2} - \frac{375}{16} \cdot \frac{1}{x-5} \right)
$$
(again - I'm not sure if it's ok)</p>
<p>I'm stuck here... From solutions I know that I should get:
$$
a_n = \frac{-21}{16} - \frac{1}{4}n + \frac{21}{16}5^n
$$</p>
<p>but I have no idea how it's solved... I hope somebody can help me (I spend more than 3h trying to solve this myself...)</p>
|
vonbrand
| 43,946 |
<p>Use Wilf's technique from "generatingfunctionology". Define $A(z) = \sum_{n \ge 0} a_n z^n$, and write:
$$
a_{n + 2} = 6 a_{n + 1} - 5 a_n + 1 \qquad a_0 = 0, a_1 = 5
$$
Multiply by $z^n$ and add for $n \ge 0$, which gives:
$$
\frac{A(z) - a_0 - a_1 z}{z^2} = 6 \frac{A(z) - a_0}{z} - 5 A(z) + \frac{1}{1 - z}
$$
Solving for $A(z)$ gives:
$$
A(z) = \frac{5 z - 4 z^2}{1 - 7 z + 11 z^2 - 5 z^3}
= \frac{21}{16} \cdot \frac{1}{1 - 5 z}
- \frac{17}{16} \cdot \frac{1}{1 - z}
- \frac{1}{4} \cdot \frac{1}{(1 - z)^2}
$$
Remember the expansions:
$$
(1 - u)^{-m} = \sum_{k \ge 0} \binom{-m}{k} (-u)^k
= \sum_{k \ge 0} \binom{k + m - 1}{m - 1} u^k
$$
(the binomial coefficient is just an $m-1$-degree polynomial in $k$) and you are all set:
$$
a_n = \frac{21}{16} \cdot 5^n - \frac{17}{16} - \frac{1}{4} \binom{n + 1}{1}
= \frac{21 \cdot 5^n - 4 n - 21}{16}
$$</p>
<p><a href="http://maxima.sourceforge.net" rel="nofollow">Maxima</a>'s help with the algebra is gratefully acknowledged.</p>
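The closed form can be checked against the recurrence directly; a quick sketch (my own illustration):

```python
def a_recursive(n):
    a, b = 0, 5           # a_0 = 0, a_1 = 5
    for _ in range(n):
        a, b = b, 6 * b - 5 * a + 1
    return a

def a_closed(n):
    # (21 * 5^n - 4n - 21) / 16, which divides exactly
    return (21 * 5 ** n - 4 * n - 21) // 16

values_match = all(a_recursive(n) == a_closed(n) for n in range(30))
```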
|
1,771,279 |
<p>Let $B=\{x \in \ell_2 : ||x|| \leqslant 1\}$ be the Hilbert ball, $X \subset B$ an open convex connected set in the Hilbert ball, and $\bar{X}$ the closure of $X$. Let $F: \bar{X} \to \bar{X}$ be a continuous map that is holomorphic at each point of $X$. Is it true in general that $F$ has a fixed point?</p>
|
Yiorgos S. Smyrlis
| 57,021 |
<p>Define $f: \overline{X}\to \overline{X}$, where $X=\{x\in B: 1/2<\|x\|<1\}$, as
$$
f(x_1,x_2,\ldots)=(0,x_1,x_2,\ldots).
$$
Then $f$ has no fixed point in $\overline{X}$ (the only candidate would be $0$, which does not lie in $\overline{X}$).</p>
|
1,541,800 |
<p>I happened to stumble upon the following matrix:
$$ A = \begin{bmatrix}
a & 1 \\
0 & a
\end{bmatrix}
$$</p>
<p>And after trying a bunch of different examples, I noticed the following remarkable pattern. If $P$ is a polynomial, then:
$$ P(A)=\begin{bmatrix}
P(a) & P'(a) \\
0 & P(a)
\end{bmatrix}$$</p>
<p>Where $P'(a)$ is the derivative evaluated at $a$.</p>
<p>Futhermore, I tried extending this to other matrix functions, for example the matrix exponential, and wolfram alpha tells me:
$$ \exp(A)=\begin{bmatrix}
e^a & e^a \\
0 & e^a
\end{bmatrix}$$
and this does in fact follow the pattern since the derivative of $e^x$ is itself!</p>
<p>Furthermore, I decided to look at the function $P(x)=\frac{1}{x}$. If we interpret the reciprocal of a matrix to be its inverse, then we get:
$$ P(A)=\begin{bmatrix}
\frac{1}{a} & -\frac{1}{a^2} \\
0 & \frac{1}{a}
\end{bmatrix}$$
And since $f'(a)=-\frac{1}{a^2}$, the pattern still holds!</p>
<p>After trying a couple more examples, it seems that this pattern holds whenever $P$ is any rational function.</p>
<p>I have two questions:</p>
<ol>
<li><p>Why is this happening?</p></li>
<li><p>Are there any other known matrix functions (which can also be applied to real numbers) for which this property holds?</p></li>
</ol>
|
Martin R
| 42,969 |
<p>If $$ A = \begin{bmatrix}
a & 1 \\
0 & a
\end{bmatrix}
$$
then by induction you can prove that
$$ A^n = \begin{bmatrix}
a^n & n a^{n-1} \\
0 & a^n
\end{bmatrix} \tag 1
$$
for $n \ge 1 $. If $f$ can be developed into a power series
$$
f(z) = \sum_{n=0}^\infty c_n z^n
$$
then
$$
f'(z) = \sum_{n=1}^\infty n c_n z^{n-1}
$$
and it follows that
$$
f(A) = \sum_{n=0}^\infty c_n A^n = c_0 I + \sum_{n=1}^\infty c_n
\begin{bmatrix}
a^n & n a^{n-1} \\
0 & a^n
\end{bmatrix} = \begin{bmatrix}
f(a) & f'(a) \\
0 & f(a)
\end{bmatrix} \tag 2
$$
From $(1)$ and
$$
A^{-1} = \begin{bmatrix}
a^{-1} & -a^{-2} \\
0 & a^{-1}
\end{bmatrix}
$$
one gets
$$
A^{-n} = \begin{bmatrix}
a^{-1} & -a^{-2} \\
0 & a^{-1}
\end{bmatrix}^n =
(-a^{-2})^{n} \begin{bmatrix}
-a & 1 \\
0 & -a
\end{bmatrix}^n \\ =
(-1)^n a^{-2n} \begin{bmatrix}
(-a)^n & n (-a)^{n-1} \\
0 & (-a)^n
\end{bmatrix} =
\begin{bmatrix}
a^{-n} & -n a^{-n-1} \\
0 & a^{-n}
\end{bmatrix}
$$
which means that $(1)$ holds for negative exponents as well.
As a consequence, $(2)$ can be generalized to functions
admitting a Laurent series representation:
$$
f(z) = \sum_{n=-\infty}^\infty c_n z^n
$$</p>
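A quick numerical sketch (my own illustration, not part of the answer) of the pattern for $f = \exp$: summing the truncated power series of $\exp$ on the $2\times 2$ block reproduces $e^a$ on the diagonal and $e^a$ (the derivative of $e^x$ at $a$) in the corner.

```python
from math import exp, factorial

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp_series(A, terms=30):
    """exp(A) via the truncated power series sum of A^n / n!."""
    result = [[0.0, 0.0], [0.0, 0.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]    # A^0 = I
    for n in range(terms):
        for i in range(2):
            for j in range(2):
                result[i][j] += power[i][j] / factorial(n)
        power = matmul(power, A)
    return result

a = 0.7
E = mat_exp_series([[a, 1.0], [0.0, a]])
expected = exp(a)
max_err = max(abs(E[0][0] - expected), abs(E[0][1] - expected),
              abs(E[1][0]), abs(E[1][1] - expected))
```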
|
4,101,068 |
<p>Can two random variables <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> have the covariance matrix</p>
<p><span class="math-container">$\begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix}? $</span></p>
<p>The matrix is positive definite if <span class="math-container">$xy<0$</span>. Does this imply that it could be the convariance matrix for some <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>?</p>
|
Cm7F7Bb
| 23,249 |
<p>The eigenvalues of the matrix
<span class="math-container">$$
\begin{pmatrix}
0&-1\\
-1&0
\end{pmatrix}
$$</span>
are <span class="math-container">$-1$</span> and <span class="math-container">$1$</span>. The eigenvalues of a positive semidefinite matrix are non-negative. Hence, this matrix is not positive semidefinite and it is not a covariance matrix.</p>
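For a $2\times 2$ symmetric matrix the eigenvalues have a closed form, so the check takes a few lines (an illustrative sketch):

```python
from math import sqrt

def sym2x2_eigenvalues(a, b, d):
    """Eigenvalues of the symmetric matrix [[a, b], [b, d]]."""
    mean = (a + d) / 2.0
    radius = sqrt(((a - d) / 2.0) ** 2 + b * b)
    return mean - radius, mean + radius

lo, hi = sym2x2_eigenvalues(0.0, -1.0, 0.0)
is_psd = lo >= 0.0    # a covariance matrix must be positive semidefinite
```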
|
1,956,855 |
<p>I'm doing some math work involving proofs, and one of the definitions is:</p>
<p>|a| = -a when a < 0</p>
<p>Isn't the absolute value of a positive, no matter what a is to begin with? Am I looking at this wrong? I could use an explanation.</p>
|
arkeet
| 50,739 |
<p>$\renewcommand\sgn{\operatorname{sgn}}$The general idea is this: For any nonzero real number $a$, we can decompose it as $s \cdot t$, where $s = +1$ or $-1$ is a sign, and $t > 0$ is a positive number. I'll call these $s = \sgn(a)$ and $t = |a|$. (Here $\sgn$ is called the <a href="https://en.wikipedia.org/wiki/Sign_function" rel="nofollow">signum function</a>.)</p>
<p>If $a$ is positive, then put $\sgn(a) = +1$ and $|a| = a$. Then it's clear that $|a|$ is positive, and $\sgn(a) \cdot |a| = +1 \cdot a = a$.</p>
<p>If $a$ is negative, then put $\sgn(a) = -1$ and $\color{red}{|a| = -a}$. Then $|a|$ is positive ($a$ is negative, so $-a$ is positive), and $\sgn(a) \cdot |a| = -1 \cdot -a = a$.</p>
<p>(If $a = 0$ then $\sgn(a)$ is undefined, but $|a| = 0$.)</p>
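The decomposition $a = \operatorname{sgn}(a)\cdot|a|$ translates directly into code (an illustrative sketch, not part of the answer):

```python
def sgn(a):
    """Signum of a nonzero real number."""
    return 1 if a > 0 else -1

def abs_value(a):
    # |a| = a when a > 0, and |a| = -a when a < 0: the definition in question
    return a if a > 0 else -a

samples = [3.5, -2.0, 0.25, -7.0]
# sgn(a) * |a| recovers a, and |a| is positive in every case
ok = all(sgn(a) * abs_value(a) == a and abs_value(a) > 0 for a in samples)
```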
|
4,580,478 |
<p>So I'm currently reading about signed measures in "Real analysis" by Folland. In it, he defines a signed measure as follows.</p>
<p><em>Let <span class="math-container">$(X,\mathcal{M})$</span> be a measurable space. Then a signed measure <span class="math-container">$\nu$</span> is a function from <span class="math-container">$\mathcal{M}$</span> to <span class="math-container">$[-\infty,\infty]$</span> satisfying<br />
(i) <span class="math-container">$\nu(\emptyset) = 0$</span><br />
(ii) At most one of the values <span class="math-container">$\pm\infty$</span> are assumed<br />
(iii) If <span class="math-container">$A_1, A_2, \dots$</span> are (pairwise) disjoint elements of <span class="math-container">$\mathcal{M}$</span>, then
<span class="math-container">$\nu(\cup_iA_i) = \sum_i\nu(A_i)$</span>
where the sum on the RHS converges absolutely if the LHS is finite.</em></p>
<p>Now suppose we have a sequence <span class="math-container">$(A_i)$</span> of sets in <span class="math-container">$\mathcal{M}$</span>. If <span class="math-container">$\nu(\cup_iA_i) = \infty$</span>, it is not obvious to me that, under these hypotheses, for any permutation <span class="math-container">$\sigma$</span>, one automatically has <span class="math-container">$\sum_i\nu(A_{\sigma(i)}) = \infty$</span>.</p>
<p>Is this in fact automatically true or is it an additional assumption that we bake into the definition?</p>
|
Simon SMN
| 712,612 |
<p>Okay, I've done some digging and I think I've formulated my original hesitancy and resolved it.</p>
<p>Let <span class="math-container">$(X,\mathcal{M})$</span> be a measurable space and suppose <span class="math-container">$\nu':\mathcal{M}\to[-\infty,\infty]$</span>. Suppose further that</p>
<p>(i) <span class="math-container">$\nu'(\emptyset) = 0$</span><br />
(ii) <span class="math-container">$\nu'$</span> assumes
at most one of the values <span class="math-container">$\pm\infty$</span>.</p>
<p>Then, given any sequence <span class="math-container">$(A_i)$</span> of sets in <span class="math-container">$\mathcal{M}$</span>, that (iii) hold for some permutation of the <span class="math-container">$A_i$</span>'s is in fact equivalent to it holding for all possible permutations of them. The left implication is obvious. For the right, one supposes the negation and, using the Riemann series theorem, shows that this leads to a violation of (ii). QED.</p>
<p>Hopefully, someone can confirm that I'm right. My hope is this answer will be of help to someone else reading in Folland about signed measures.</p>
|
16,797 |
<p>Is there any good way to approximate the following integral?<br>
$$\int_0^{0.5}\frac{x^2}{\sqrt{2\pi}\sigma}\cdot \exp\left(-\frac{(x^2-\mu)^2}{2\sigma^2}\right)\mathrm dx$$<br>
$\mu$ is between $0$ and $0.25$, the problem is in $\sigma$ which is always positive, but it can be arbitrarily small.<br>
I was trying to expand it using a Taylor series, but the terms look more or less like this: $\pm a_n\cdot\frac{x^{2n+3}}{\sigma^{2n}}$, and those can be arbitrarily large, so the error is significant.</p>
|
whuber
| 1,489 |
<p>A standard way to get a good approximation for integrals that "look" Gaussian is to evaluate the Taylor series of the logarithms of their integrands through second order, expanding around the point of maximum value thus (continuing with @Ross Millikan's substitution):</p>
<p>$$\eqalign{
&\log\left(\sqrt{y}\cdot \exp\left(-\frac{(y-\mu )^2}{2\sigma ^2}\right)\right) \cr
= &\frac{-\mu ^2-\sigma ^2+\mu \sqrt{\mu ^2+2 \sigma ^2}+2 \sigma ^2 \log\left[\frac{1}{2} \left(\mu +\sqrt{\mu ^2+2 \sigma ^2}\right)\right]}{4 \sigma ^2} \cr
+ &\left(-\frac{1}{2 \sigma ^2}-\frac{1}{\left(\mu +\sqrt{\mu ^2+2 \sigma ^2}\right)^2}\right) \left(y-\frac{1}{2} \left(\mu +\sqrt{\mu ^2+2 \sigma ^2}\right)\right)^2 \cr
+ &O\left[y-\frac{1}{2} \left(\mu +\sqrt{\mu ^2+2 \sigma ^2}\right)\right]^3 \cr
\equiv &\log(C) - (y - \nu)^2/(2\tau^2)\text{,}
}$$</p>
<p>say, with the parameters $C$, $\nu$, and $\tau$ depending on $\mu$ and $\sigma$ as you can see. The resulting integral now <em>is</em> a Gaussian, which can be computed (or approximated or looked up) in the usual ways. The approximation is superb for small $\sigma$ or large $\mu$ and still ok otherwise.</p>
<p>The plot shows the original integrand in red (dashed), this approximation in blue, and the simpler approximation afforded by replacing $\sqrt{y} \to \sqrt{\mu}$ in gold for $\sigma = \mu = 1/20$.</p>
<p><img src="https://i.stack.imgur.com/gDLAR.png" alt="alt text"></p>
<hr>
<h3>(Added)</h3>
<p><em>Mathematica</em> tells us the integral, when taken to $\infty$, can be expressed as a linear combination of modified Bessel Functions $I_\nu$ of orders $\nu = -1/4, 1/4, 3/4, 5/4$ with common argument $\mu^2/(4 \sigma^2)$. From the Taylor expansion we can see that when both $\mu$ and $\sigma$ are small w.r.t. $1/2$--specifically, $(1/4-\mu)/\sigma \gg 3$, the error made by including the entire right tail will be very small. (With a little algebra and some simple estimates we can even get good explicit bounds on the error as a function of $\mu$ and $\sigma$.) There are many ways to compute or approximate Bessel functions, including <a href="http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6TY8-4KKWVK6-4&_user=10&_coverDate=12%2F15%2F2006&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_searchStrId=1606463395&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=6b1a974efc7b54358be324bde6f4a00c&searchtype=a" rel="nofollow noreferrer">polynomial approximations</a>. From looking at graphs of the integrand, it appears that the cases where the Bessel function approximation works extremely well more or less complement the cases where the preceding "saddlepoint approximation" works extremely well.</p>
|
1,531,171 |
<p>if $fg$ and $f$ are differentiable at $a$, must $g$ be differentiable at $a$? If not, what condition is needed to imply that $g$ be differentiable at $a$?</p>
<p>I realized that people asked similar question before, like
"If $f+g$ and $f$ are differentiable at $a$, must $g$ be differentiable at $a?$" It can be easily proved by using limits. </p>
<p>However, when it comes to multiply, I find it no longer eligible to use limits to finish the proof.</p>
|
Thomas Andrews
| 7,933 |
<p>The most commonly referenced non-differentiable function is $g(x)=|x|$. Let $f(x)=x$. Then $f$ is differentiable, as is $fg$.</p>
<p>Obviously, if $f$ and $fg$ are differentiable at $a$ and $f(a)\neq 0$ then $g=(fg)/f$ is differentiable at $a$ by the usual quotient rule.</p>
|
4,206,994 |
<p>Let <span class="math-container">$F(X, Y)$</span> be an irreducible polynomial of <span class="math-container">$\mathbb{C}[X, Y]$</span> with <span class="math-container">$F(0, 0) = 0$</span> and non-singular there, and let <span class="math-container">$(X(t), Y(t))$</span> be a local analytical parametrization of the corresponding curve, that is, <span class="math-container">$F(X(t), Y(t)) = 0$</span> for <span class="math-container">$t$</span> in a neighborhood of <span class="math-container">$0$</span>, with <span class="math-container">$X(0) = Y(0) = 0$</span> and <span class="math-container">$X'(0) \neq 0$</span>.</p>
<p>If, for some polynomial <span class="math-container">$G(X, Y)$</span>, we have <span class="math-container">$G(X(t), Y(t)) = 0$</span> for <span class="math-container">$t$</span> in the neighborhood in question, does then <span class="math-container">$G(X, Y)$</span> necessarily belong to the ideal generated by <span class="math-container">$F(X, Y)$</span> ? In other words, can the whole curve <span class="math-container">$F(X, Y) = 0$</span> be reconstructed from its local snippet?</p>
<p>And for an arbitrary algebraically closed field, is there a well-known characterization for a subset of an irreducible algebraic curve to define the curve already ? Is it for instance enough for such a subset to be infinite ?</p>
|
orangeskid
| 168,051 |
<p>The result you need is the following: in an irreducible complex variety, every open subset in the standard topology is Zariski dense. Therefore, if two irreducible complex varieties <span class="math-container">$X$</span>, <span class="math-container">$X'$</span> (say inside some bigger space) are such that <span class="math-container">$X\cap X'$</span> has non-empty interior in both <span class="math-container">$X$</span> and <span class="math-container">$X'$</span> in the standard topologies, then <span class="math-container">$X = X'$</span>.</p>
<p>I think you can find a proof of the statement above in the book of Mumford, Algebraic Geometry I.</p>
|
1,985,849 |
<p>I am really stumped by this equation.
$$e^x=x$$
I need to prove that this equation has no real roots. But I have no idea how to start.</p>
<p>If you look at the graphs of $y=e^x$ and $y=x$, you can see that those graphs do not meet anywhere. But I am trying to find an algebraic and rigorous proof. Any help is appreciated. ( $e^\infty=\infty$ looks like a solution but is it?)</p>
|
Community
| -1 |
<p>Let $f(x) = e^x - x$. For $x \ge 0$ we have $f'(x) = e^x - 1 \geq 0$, so $f$ is increasing on $[0,\infty)$. Thus for any $x > 0$, $f(x) > f(0) = 1$, hence $e^x > x+1 > x$, and $f(x) = 0$ has no solution when $x > 0$. For $x < 0$, $e^x > 0 > x$ and hence $e^x \neq x$.</p>
|
1,985,849 |
<p>I am really stumped by this equation.
$$e^x=x$$
I need to prove that this equation has no real roots. But I have no idea how to start.</p>
<p>If you look at the graphs of $y=e^x$ and $y=x$, you can see that those graphs do not meet anywhere. But I am trying to find an algebraic and rigorous proof. Any help is appreciated. ( $e^\infty=\infty$ looks like a solution but is it?)</p>
|
Fred
| 380,717 |
<ol>
<li><p>For $x \le 0$ we have $e^x >0$, thus no solution of the above equation exists in $(-\infty, 0)$</p></li>
<li><p>If $x>0$, then $e^x=1+x+\frac{x^2}{2!}+\ldots>x$. Therefore: no solution of the above equation exists in $(0, \infty)$.</p></li>
</ol>
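Both arguments come down to the inequality $e^x \ge x+1$, with equality only at $x=0$; a quick grid check (illustrative only, not a proof):

```python
from math import exp

# Evaluate e^x - x on a grid over [-5, 5]; the minimum sits at x = 0 with value 1.
xs = [i / 100.0 for i in range(-500, 501)]
values = [exp(x) - x for x in xs]
min_value = min(values)
```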
|
1,595,243 |
<p>Let $f:\mathbb R \to \mathbb R$ be a function such that $|f(x)|\le x^2$, for all $x\in \mathbb R$. Then, at $x=0$, is $f$ both continuous and differentiable ?</p>
<p>No idea how to begin. Can someone help?</p>
|
Jimmy R.
| 128,037 |
<p>You need to use the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem">squeeze theorem</a> on the given condition $|f(x)|\le x^2$ in order to prove that $f$ is both continuous and differentiable at $x=0$:</p>
<ol>
<li>Continuity: You know that $|f(x)|\le x^2$. Substituting $x=0$ you find that $|f(0)|\le 0$ or $f(0)=0$. But this condition also can be written as $$|f(x)|\le x^2 \implies -x^2\le f(x)\le x^2$$ and so take limits ($x\to 0$) $$\lim_{x\to 0}-x^2\le \lim_{x\to 0}f(x)\le \lim_{x\to 0}x^2 $$ which gives you $$0\le \lim_{x\to 0}f(x)\le 0 \implies \lim_{x\to 0}f(x)=0$$
So the limit of $f$ as $x$ goes to $0$ and the value of $f$ at $x=0$ coincide which implies that $f$ is continuous at $x=0$.</li>
<li>Differentiability: $$\lim_{h\to 0}\frac{f(h)-f(0)}{h}\overset{1.}=\lim_{h \to 0}\frac{f(h)}{h}$$ And now bound again $$\lim_{h \to 0}\frac{-h^2}{h}\le \lim_{h \to 0}\frac{f(h)}{h}\le \lim_{h \to 0}\frac{h^2}{h}$$ which implies that $$\lim_{h \to 0}\frac{f(h)}{h}=0$$ or equivalently that $f'(0)=0$.</li>
</ol>
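A standard example satisfying the hypothesis is $f(x) = x^2\sin(1/x)$ with $f(0)=0$ (my choice of example, not from the answer); the difference quotients visibly squeeze to $0$:

```python
from math import sin

def f(x):
    # example with |f(x)| <= x^2 and f(0) = 0 (my choice, not from the answer)
    return x * x * sin(1.0 / x) if x != 0 else 0.0

steps = (1e-2, 1e-4, 1e-6, 1e-8)
quotients = [f(h) / h for h in steps]   # |f(h)/h| <= |h|, so these squeeze to 0
bounded = all(abs(f(x)) <= x * x + 1e-15 for x in [i / 10 for i in range(-20, 21)])
```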
|
1,764,729 |
<blockquote>
<p>Let $x(t) : [0,T] \rightarrow \mathbb{R}^n$ be a solution of a differential equation
$$
\frac{d}{dt} x(t) = f(x(t),t).
$$
In addition we have functions $E :\mathbb{R}^n \rightarrow \mathbb{R}$ and $h:\mathbb{R}^{n+1}\rightarrow \mathbb{R}$ such that
$$
\frac{d}{dt}{E(x(t))} = h(x(t),t) E(x(t))
$$
for any solution $x(t)$ of the differential equation. Can we show that if $E$ is positive at the time $0$ then it is positive at all times? i.e.
$$
E(x(0))>0 \Longrightarrow E(x(t))>0 \qquad \text{for } 0<t\leq T
$$
Assume that all functions are at least one continuously differentiable.</p>
</blockquote>
<p>It is easy to show that it is true if $h$ does not depend on $x$ or can be written in the form $h(x(t),t) =\hat h(E(x(t)),t)$. Then $E$ satisfies the following differential equation
$$
\frac{d}{dt}{E(t)} = \hat h(E(t),t) E(t)
$$
Thanks to the uniqueness of the solution, any solution $E(t)$ cannot cross the trivial solution $E(t)=0$ and therefore it has to have the same sign at all times.</p>
<hr>
<p>Application: Let's define matrix $A(t)$ via differential equation
$$
\frac{d}{dt}{A(t)} = B(A(t),t) A(t) \qquad A(0) = I
$$
where $B$ is matrix valued function with arguments $A$ and $t$. Is $A(t)$ invertible for all $t\geq 0$?</p>
<p>We have
$$
\frac{d}{dt}{\det{A(t)}} = Tr(B(A(t),t)) \det{A(t)}
$$
If $B$ is just a function of $t$ and $\det{A(t)}$ then the answer is yes, in general I do not know if $A(t)$ is invertible or not.</p>
|
Vincent
| 332,815 |
<p>This is a partial answer (but may put someone on the right track for a full one)</p>
<p>In the case $E$ and $h$ are $C^{\infty}$ (infinitely differentiable, as is often the case in physics) and if there is a constant $M$ that bounds ALL of the derivatives of $E$:</p>
<p>Let us assume $E(t_0) = 0$ for some $t_0$</p>
<p>Then we can prove by induction that all the derivatives of $E$ vanish at $t_0$, by repeatedly differentiating $E'= h \times E$ (each term always contains some $E^{(k)}$ as a factor, which gives $E^{(n+1)}(t_0) = 0$)</p>
<p>Let us use Taylor's inequality in $t_0$ (<a href="https://en.wikipedia.org/wiki/Taylor%27s_theorem#Estimates_for_the_remainder" rel="nofollow">https://en.wikipedia.org/wiki/Taylor%27s_theorem#Estimates_for_the_remainder</a>), we can have</p>
<p>$\forall n \in \mathbb{N}, E(0) \leq M\frac{t_0^n}{n!}$</p>
<p>Which, at its limit, gives</p>
<p>$E(0) \leq 0$, which is absurd.</p>
<p>Hence, $\forall t \in [0,T], E(t)>0$</p>
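For the case $h = h(t)$, where the sign-preservation argument applies, a numerical illustration (my own sketch): the forward-Euler update is the product $E \mapsto (1 + h\,\Delta t)E$, so the sign is preserved whenever $|h|\,\Delta t < 1$.

```python
from math import cos

def euler_E(E0, h, T=5.0, steps=5000):
    """Forward-Euler integration of E'(t) = h(t) * E(t)."""
    dt = T / steps
    E, t = E0, 0.0
    trajectory = [E]
    for _ in range(steps):
        E += dt * h(t) * E      # multiplicative update: E <- (1 + h dt) E
        t += dt
        trajectory.append(E)
    return trajectory

# A bounded coefficient (illustrative choice); here |h| * dt = 2 * 0.001 < 1.
traj = euler_E(1.0, lambda t: -2.0 * cos(3.0 * t))
always_positive = all(v > 0 for v in traj)
```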
|
3,474,926 |
<p>I came across this question:</p>
<p>Find an isomorphism from the group of orientation preserving isometries of the plane to some subgroup of <span class="math-container">$GL_{2}(\mathbb C)$</span>.</p>
<p>I'm having trouble with finding such isomorphism. Mainly, I'm having trouble with finding some representation of those isometries in a way that would respect the group structure. I know that you can represent any isometry of the plane using a <span class="math-container">$3\times 3$</span> real matrix, and I want to somehow use this representation in order to construct a new one (maybe since I'm looking at a smaller group and can also use complex numbers I can somehow use a smaller matrix), but I'm not entirely sure on how to do this.</p>
<p>Any help would be appreciated - I'd like to know how one can approach such a problem.</p>
<p>Thanks in advance</p>
|
Moishe Kohan
| 84,907 |
<p>Hint: Think of affine complex linear-fractional transformations of the form <span class="math-container">$z\mapsto az+b$</span>, <span class="math-container">$|a|=1$</span>, and the epimorphism <span class="math-container">$GL(2, {\mathbb C})\to PGL(2, {\mathbb C})$</span>, the group of complex linear-fractional transformations. </p>
<p>Edit: the epimorphism is standard:
<span class="math-container">$$\left[
\begin{array}{cc}
a&b\\
c&d\\
\end{array}\right]\mapsto \frac{az +b}{cz+d}
$$</span></p>
|
505,617 |
<p>Let $a,b,c$ be nonnegative real numbers such that $a+b+c=3$, Prove that</p>
<blockquote>
<p>$$ \sqrt{\frac{a}{a+3b+5bc}}+\sqrt{\frac{b}{b+3c+5ca}}+\sqrt{\frac{c}{c+3a+5ab}}\geq 1.$$</p>
</blockquote>
<p>This problem is from <a href="http://www.artofproblemsolving.com/Forum/viewtopic.php?f=52&t=555716">http://www.artofproblemsolving.com/Forum/viewtopic.php?f=52&t=555716</a></p>
<p>@Calvin Lin Thank you </p>
|
user1793963
| 96,691 |
<p>Since $a+b+c=3$ you can reduce $f(a,b,c)$ to $f(3-b-c, b, c)$. With only 2 variables you can make a 3D plot and see if it gets below $1$ for $b+c\le3$. Not an elegant proof, but it works. If nothing else it will at least show you where the edge cases are.</p>
|
4,373,216 |
<p>Find power series solution for <span class="math-container">$y''-xy=0$</span> close to <span class="math-container">$x=0$</span>.</p>
<p><span class="math-container">$\sum_{n=2}^\infty n(n-1)a_nx^{n-2}-x\sum_{n=0}^\infty a_nx^n=0$</span></p>
<p>Then,
<span class="math-container">$a_{n+2}(n+2)(n+1)-a_{n-1}=0 \implies a_{n+2}(n^2+3n+2)-a_{n-1}=0 \implies a_{n+3}=a_n\cdot \frac{1}{(n+3)(n+2)}$</span></p>
<p>I got two problems:</p>
<p><span class="math-container">$(1)$</span> The answer is <span class="math-container">$a_{n+3}=(n+3)(n+2)a_n$</span> , where am I wrong ?</p>
<p><span class="math-container">$(2)$</span> How can I find the general power series solution ?</p>
|
Michael Burr
| 86,421 |
<p>Hint: Break the integral into pieces depending on its domains of definition. In your linked example,</p>
<p><span class="math-container">$$
\int_{-1}^3 f(x)dx=\int_{-1}^0f(x)dx+\int_0^1f(x)dx+\int_1^2f(x)dx+\int_2^3f(x)dx.
$$</span>
Substituting formulas, you would have
<span class="math-container">$$
\int_{-1}^00dx+\int_0^1\frac{6}{7}xdx+\int_1^2\frac{6}{7}x(2-x)dx+\int_2^30dx.
$$</span>
After a quick calculus computation, the result is <span class="math-container">$1$</span> as expected for a probability distribution. For your problem, you'll need a different integrand and region of integration.</p>
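The arithmetic in the worked example can be reproduced exactly with rational arithmetic (a sketch; the antiderivatives are my own, standard computations):

```python
from fractions import Fraction

c = Fraction(6, 7)

def F1(x):
    # antiderivative of (6/7) x, valid on [0, 1]
    return Fraction(3, 7) * x * x

def F2(x):
    # antiderivative of (6/7) x (2 - x), valid on [1, 2]
    return c * (x * x - x ** 3 / 3)

piece1 = F1(Fraction(1)) - F1(Fraction(0))   # = 3/7
piece2 = F2(Fraction(2)) - F2(Fraction(1))   # = 4/7
total = piece1 + piece2     # the pieces on [-1, 0] and [2, 3] contribute 0
```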
|
782,427 |
<p><strong>I - First doubt</strong> : free variables on open formulas . </p>
<p>I'm having a hard time discovering what different kinds of variables in an open formula of fol are refering to., For example, lets take some open formulas : </p>
<p>1: $"x+2=5"$<br>
2: $"x=x"$<br>
3: $"x+y = y+x"$<br>
4: $"x+2 = 5 -> x=3 "$ .</p>
<p>So let's suppose we have a domain D of a structure which is interpreting that fol language.
Even though all those formulas are open formulas with variables occurring free, I think the variables refer to different kinds of sets (obviously all sets with the same type of objects, be it numbers or whatever) depending on the kind of formula... to what exactly, in each case? </p>
<p>Would the free variable in formula 1 be referring to a fixed element of the domain (whatever kind of elements it has)? Or to an arbitrary one? </p>
<p>Should we always use the phrase "referring to" for open formulas, instead of "ranging through"?</p>
<p>Would the free variable in formula 2 be referring to a bunch of elements of the domain, or ranging through a bunch of elements of the domain? How does the truth-set (the set of individuals that satisfy the open formula) of each formula enter the picture? </p>
<p><strong>II - Second doubt</strong> : When to close a formula, and what does it provide us ? </p>
<p>The second point I'm having a hard time with is understanding the difference between the uses of, and the reasons for choosing, either the formula<br>
$"x+2 =5 -> x = 3 "$ or $\forall x (x+2 = 5 -> x = 3 )$ . </p>
<p>And likewise the difference between using (and choosing to use) the open formula $"x+y = y+x"$ or the closed formula $"\forall x \forall y ( x + y = y + x ) "$. </p>
<p>Same thing about $"x = x$ and $"\forall x (x = x) "$. </p>
|
jdc
| 7,112 |
<p>I don't know if this will help, but here's the formalism I use.</p>
<p>Let a first-order language $L$ be given, an open formula $\varphi(x,y)$ in the language of $L$, with only free variables $x$ and $y$, and an $L$-structure $M$. Let $X$ be the collection of all variables you are using.</p>
<p>There is no way to determine whether $M \models \varphi(x,y)$ without knowing what $x$ and $y$ are. A <em>valuation</em> is a function $\sigma\colon X \to M$ that gives a value $\sigma(x)$ to each $x \in X$. Then, if $\varphi$ holds in $M$ with the constant $\sigma(x)$ plugged in for $x$ and $\sigma(y)$ plugged in for $y$, one can write $M \models_\sigma \varphi(x,y)$. We say $M \models \exists x\ \exists y \ \varphi(x,y)$ if there is <em>some</em> valuation $\sigma$ such that $M \models_\sigma \varphi(x,y)$, and $M \models \forall x\ \forall y\ \varphi(x,y)$ if for every valuation $\sigma$, it is the case $M \models_\sigma \varphi(x,y)$.</p>
<p>Some people use $M \models \varphi(x,y)$ interchangeably for $M \models \forall x\ \forall y\ \varphi(x,y)$. If that's your convention, of course one gains nothing from closing. Otherwise, you need to close to have a truth-value independently of valuation. A <em>sentence</em> is true or false in a structure, whereas an <em>open formula</em> is only true or false <em>under a given valuation</em> in a given structure.</p>
<p>In the sort of language you seem to be using, $x$ and $y$ all range over the same sorts of things, namely, individuals in (the underlying set of) $M$.</p>
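The valuation machinery can be made concrete for a small finite structure (a toy illustration, not part of the formal definition; the structure $\mathbb{Z}/5\mathbb{Z}$ and the formulas are my choices):

```python
from itertools import product

M = range(5)   # toy structure: the integers mod 5 (illustrative choice)

def satisfies(formula, valuation):
    """M |=_sigma formula(x, y): evaluate the formula under valuation sigma."""
    return formula(valuation['x'], valuation['y'])

phi = lambda x, y: (x + y) % 5 == (y + x) % 5   # open formula "x + y = y + x"
psi = lambda x, y: (x + y) % 5 == 0             # open formula "x + y = 0"

def forall(formula):
    # M |= forall x forall y formula: true under EVERY valuation
    return all(satisfies(formula, {'x': a, 'y': b}) for a, b in product(M, M))

def exists(formula):
    # M |= exists x exists y formula: true under SOME valuation
    return any(satisfies(formula, {'x': a, 'y': b}) for a, b in product(M, M))
```

Here `phi` is true under every valuation, while `psi` is true only under some, which is exactly the distinction between the universal and existential closures.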
|
1,918,674 |
<p>For what value $k$ is the following function continuous at $x=2$?
$$f(x) = \begin{cases}
\frac{\sqrt{2x+5}-\sqrt{x+7}}{x-2} & x \neq 2 \\
k & x = 2
\end{cases}$$</p>
<p>I was thinking about multiplying the numerator by its conjugate, but that makes the denominator very messy, so I don't really know what to do.</p>
|
ajotatxe
| 132,456 |
<p>Do as you were thinking: multiply the numerator and denominator by the conjugate. Don't expand the denominator; a factor of $x-2$ appears in the numerator and cancels.</p>
|
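Carrying the hint through (my algebra, worth re-deriving): the conjugate turns the numerator into $(2x+5)-(x+7)=x-2$, the factor cancels, and what remains is $\frac{1}{\sqrt{2x+5}+\sqrt{x+7}}$, giving $k=\frac16$ at $x=2$. A numeric cross-check:

```python
from math import sqrt

def f(x):
    return (sqrt(2 * x + 5) - sqrt(x + 7)) / (x - 2)

def simplified(x):
    # after multiplying by the conjugate and cancelling x - 2 (my algebra)
    return 1.0 / (sqrt(2 * x + 5) + sqrt(x + 7))

k = simplified(2.0)        # 1 / (3 + 3) = 1/6
near = f(2.0 + 1e-7)       # original form, evaluated just to the right of x = 2
```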
519,093 |
<p>Let $C[0,1]$ be the set of all continuous real-valued functions on $[0,1]$.</p>
<p>Let these be 3 metrics on $C$.</p>
<p>$p(f,g)=\sup_{t\in[0,1]}|f(t)-g(t)|$</p>
<p>$d(f,g)=(\int_0^1|f(t)-g(t)|^2dt)^{1/2}$</p>
<p>$t(f,g)=\int_0^1|f(t)-g(t)|dt$</p>
<p>Prove that for every $f,g\in C$, the following holds $t(f,g)\le d(f,g)\le p(f,g)$</p>
<p>I understand that $t(f,g)\le p(f,g)$ since $t(f,g)=\int_0^1|f(t)-g(t)|dt \le \int_0^1\sup_{t\in[0,1]}|f(t)-g(t)|dt =\sup_{t\in[0,1]}|f(t)-g(t)|=p(f,g)$.</p>
<p>But I can't get the others.</p>
<p>I think using Schwarz's inequality might be useful $(\int_0^1 w(t)v(t)dt)^2 \le (\int_0^1w^2(t)dt)(\int_0^1v^2(t)dt)$</p>
|
kel c.
| 58,740 |
<p>You can invoke the fact that if $x \le y$ and $z \ge 0$, then $zx \le zy$, together with the transitivity of the order.</p>
<p>Try $xy$ and compare it to $xx$ and $yy$.</p>
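The chain $t(f,g)\le d(f,g)\le p(f,g)$ can be spot-checked numerically for a concrete pair of functions (an illustration, not a proof; the pair is my choice):

```python
from math import sin, sqrt

def simpson(func, a, b, n=1000):
    h = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * func(a + i * h)
    return s * h / 3

f = lambda x: sin(5 * x)     # sample continuous functions on [0, 1] (my choice)
g = lambda x: x * x
diff = lambda x: abs(f(x) - g(x))

p = max(diff(i / 1000) for i in range(1001))              # sup metric (on a grid)
d = sqrt(simpson(lambda x: (f(x) - g(x)) ** 2, 0.0, 1.0))  # L^2 metric
t = simpson(diff, 0.0, 1.0)                                # L^1 metric
```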
|
1,045,941 |
<p>Usually this is just given as a straight up definition in a calculus course. I am wondering how you prove it?
I tried using the limit definition, $$\lim\limits_{h\rightarrow 0} \dfrac{\log(x+h)-\log(x)}{h}$$
but this led to no developments.</p>
|
Aditya Hase
| 190,645 |
<p>$$\begin{align}
f'(x)&=\lim_{h\rightarrow 0} \dfrac{f(x+h)-f(x)}{h}\tag{1}\\
&=\lim_{h\rightarrow 0} \dfrac{\log(x+h)-\log(x)}{h}\tag{2}\\
&=\lim_{h\rightarrow 0} \dfrac{\log\left(\frac{x+h}{x}\right)}{h}\tag{3}\\
&=\lim_{h\rightarrow 0} \dfrac{\log\left(1+\frac{h}{x}\right)}{h}\tag{4}\\
&=\lim_{h\rightarrow 0}\log\left(1+\frac{h}{x}\right)^{1/h}\tag{5}\\
&=\lim_{u\rightarrow 0}\frac1x\log(1+u)^{1/u}\tag{6}\\
&=\frac1x\lim_{u\rightarrow 0}\log(1+u)^{1/u}\tag{7}\\
&=\frac1x\ln e\tag{8}\\
&=\frac1x\tag{9}\\
\end{align}$$</p>
<blockquote>
<p>$$\frac{d}{dx}\Big[\log(x)\Big]=\frac{1}{x}$$</p>
</blockquote>
<hr>
<p>$\text{Explanation}$ $6$ substituting $\dfrac hx=u\iff h=ux$</p>
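<p>A quick numeric check (illustrative) of the result: the difference quotient at a sample point $x=3$ should approach $1/3$ as $h\to 0$:</p>

```python
import math

# Difference quotient of log at x = 3 (illustrative numeric check).
x = 3.0
for h in (1e-2, 1e-4, 1e-6):
    print(h, (math.log(x + h) - math.log(x)) / h)  # approaches 1/3

approx = (math.log(x + 1e-6) - math.log(x)) / 1e-6
```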
|
2,583,232 |
<p>Prove that $\lim_{n \to \infty} \dfrac{2n^2}{5n^2+1}=\dfrac{2}{5}$</p>
<p>$$\forall \epsilon>0 \ \ :\exists N(\epsilon)\in \mathbb{N} \ \ \text{such that} \ \ \forall n >N(\epsilon) \ \ \text{we have} |\dfrac{-2}{25n^2+5}|<\epsilon$$
$$ |\dfrac{-2}{25n^2+5}|<\epsilon \\ |\dfrac{2}{25n^2+5}|<\epsilon \\ \dfrac{5(n^2+1)}{2}<\epsilon \\ {5(n^2+1)}<2\epsilon \\ {n}>\sqrt{\dfrac{2\epsilon -5}{25}}$$</p>
<p>What do I do?</p>
|
Community
| -1 |
<p>Given the limit $\lim_{n \to \infty} \dfrac{2n^2}{5n^2+1}$, factor out $n^2$. It becomes:
$$\lim_{n \to \infty} \dfrac{2n^2}{5n^2+1}=\lim_{n \to \infty} \dfrac{n^2\cdot 2}{n^2\cdot\left(5+\frac 1{n^2}\right)}=\lim_{n \to \infty} \dfrac{2}{\left(5+\frac 1{n^2}\right)}=\dfrac 25$$</p>
<p>Because we know that $\lim_{n\to \infty}\left(5+\frac 1{n^2}\right)=5$</p>
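<p>A quick numeric check (illustrative) that the sequence indeed approaches $2/5$:</p>

```python
# Numeric check that 2n^2/(5n^2+1) approaches 2/5 (illustrative).
def a(n):
    return 2 * n**2 / (5 * n**2 + 1)

print(a(10), a(1000), a(10**6))
gap = abs(a(10**6) - 2 / 5)
```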
|
1,051,335 |
<p>Purely out of interest, I wanted to try and construct a sequence of differentiable functions converging to a non-differentiable function. I began with the first non-differentiable function that sprung to my mind, namely
\begin{align}
&f:\mathbb{R}\to\mathbb{R}\\
&f(x)=|x|.
\end{align}
After some testing I considered the function defined by
$$f_\varepsilon(x) = |x|+\frac{\varepsilon}{|x|+\sqrt{\varepsilon}} $$
for some $\varepsilon>0$. Then $\lim\limits_{\varepsilon\to0^+}f_\varepsilon(x)=f(x)$, and $f_\varepsilon(x)$ <em>looks</em> smooth, i.e. differentiable for every $\varepsilon>0$ on the entire domain.</p>
<p><strong>Question:</strong> How can I prove that $f_\varepsilon$ is differentiable for every $\varepsilon>0$ (or disprove) using the definition of the derivative?</p>
<p>If this assertion is true, then I construct the sequence simply by setting $\varepsilon = 1/n$ for $n\in\mathbb{N}$.</p>
<p><strong>Attempt:</strong> I set up the definition for the derivative
\begin{align}
\frac{\mathrm{d}f_\varepsilon}{\mathrm{d}x} &= \lim_{h\to 0}\frac{1}{h}\left[\left(|x+h|+\frac{\varepsilon}{|x+h|+\sqrt{\varepsilon}}\right)-\left(|x|+\frac{\varepsilon}{|x|+\sqrt{\varepsilon}}\right)\right]\\
&=\lim_{h\to 0}\frac{1}{h}\left[|x+h|-|x|+\frac{\varepsilon}{|x+h|+\sqrt{\varepsilon}}-\frac{\varepsilon}{|x|+\sqrt{\varepsilon}}\right],
\end{align}
but I could not figure out how to proceed.</p>
<p><strong>Sidenotes:</strong> An interesting thing I discovered when constructing $f_\varepsilon$, was that almost any small change removes its smoothness, for example
\begin{equation}
g_\varepsilon(x) = |x|+\frac{2\varepsilon}{|x|+\sqrt{\varepsilon}}\hspace{2cm} h_\varepsilon(x) = |x|+\frac{\varepsilon}{|x|+2\sqrt{\varepsilon}}
\end{equation}
do both not look smooth at all. Similarly for the other terms; changing the coefficients will remove the smoothness. I am also somewhat intrigued by this. So if anyone can shed some light on this, even better.</p>
|
Alain
| 138,095 |
<p>Consider the sequence of differentiable functions $h_n(x) = x^{1+\frac{1}{2n-1}}$ defined on $[-1,1]$ and note
$$
\lim_{n\rightarrow\infty} h_n(x) =
x\lim_{n\rightarrow\infty} x^\frac{1}{2n-1}=
|x|,\qquad \forall x\in[-1,1].
$$</p>
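<p>A numeric check (illustrative) of this convergence. For $x<0$ the real odd root satisfies $x^{1/(2n-1)} = -|x|^{1/(2n-1)}$, so $h_n(x) = |x|^{2n/(2n-1)}$, which is used below to stay in real floating-point arithmetic:</p>

```python
# h_n(x) = x^(1 + 1/(2n-1)) on [-1,1], computed via |x|^(2n/(2n-1))
# (valid because the exponent has even numerator over odd denominator).
def h(n, x):
    return abs(x) ** (2 * n / (2 * n - 1))

xs = [-1.0, -0.5, -0.1, 0.0, 0.3, 1.0]
err = max(abs(h(50, x) - abs(x)) for x in xs)
print(err)  # already small for n = 50
```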
|
1,977,817 |
<p>Find the perimeter of the square shown. I don't know how to solve it; do I use the Pythagorean theorem?
<a href="https://i.stack.imgur.com/cdzq7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cdzq7.jpg" alt="enter image description here"></a></p>
|
DonAntonio
| 31,254 |
<p>Hints (for you to complete):</p>
<p>1) Both diagonals of a square meet at the circumscribing circle's center.</p>
<p>2) If a square's side's length is $\;x\;$ , then any of its two diagonals' length is $\;\sqrt2\,x\;$</p>
|
1,977,817 |
<p>Find the perimeter of the square shown. I don't know how to solve it; do I use the Pythagorean theorem?
<a href="https://i.stack.imgur.com/cdzq7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cdzq7.jpg" alt="enter image description here"></a></p>
|
scott
| 330,966 |
<p>Try going this direction: draw another radius of the circle to an adjacent corner of the square. Now you have a right triangle, the length of the legs is $3\sqrt{2}$. Can you find the length of the hypotenuse? This is one of the sides of the square; multiply this length by 4 to find the perimeter.</p>
|
3,720,012 |
<blockquote>
<p>(Feller Vol.1, P.241, Q.35) Let <span class="math-container">$S_n$</span> be the number of successes in <span class="math-container">$n$</span> Bernoulli trials. Prove
<span class="math-container">$$E(|S_n-np|) = 2vq b(v; n, p) $$</span>
where <span class="math-container">$v$</span> is the integer such that <span class="math-container">$np < v \le np+1$</span> and <span class="math-container">$b(v; n,p)$</span> is a binomial distribution with <span class="math-container">$v$</span> successes of <span class="math-container">$n$</span> trials. Hint: The left side <span class="math-container">$= \sum_{k=0}^{v-1} (np - k) \frac{n}{k} p^k q^{n-k}$</span>.</p>
</blockquote>
<p>My attempt: I found that <span class="math-container">$P(|S_n - np| = j)= b(np +j ; n, p)$</span> if <span class="math-container">$S_n \ge np$</span>, <span class="math-container">$b(np-j; n,p)$</span> if <span class="math-container">$S_n < np$</span>. Therefore,
<span class="math-container">$$E(|S_n-np|)= \sum_{k=0}^{v-1} (np-k)b(k;n,p) + \sum_{k=v}^{n}(k-np)b(k; n,p).$$</span></p>
<p>I am stuck here, and don't know how to proceed. I would appreciate if you give some help.</p>
|
StubbornAtom
| 321,264 |
<p>The hint given is not quite correct as mentioned by @PepeSilvia.</p>
<p>For any random variable <span class="math-container">$X$</span> with finite mean <span class="math-container">$\mu$</span>, we have</p>
<p><span class="math-container">\begin{align}
\mathbb E[|X-\mu|]&=\mathbb E[|X-\mu|I_{X>\mu}]+\mathbb E[|X-\mu|I_{X\le\mu}]
\\&=\mathbb E[(X-\mu )I_{X>\mu}]+\mathbb E[(\mu-X )I_{X\le\mu}] \tag{1}
\end{align}</span></p>
<p>But <span class="math-container">$\mathbb E(X-\mu )=0$</span>, so</p>
<p><span class="math-container">$$0=\mathbb E[(X-\mu)I_{X>\mu}]+\mathbb E[(X-\mu)I_{X\le\mu}]$$</span></p>
<p>That is, <span class="math-container">$$\mathbb E[(X-\mu)I_{X>\mu}]=\mathbb E[(\mu-X)I_{X\le\mu}] \tag{2}$$</span></p>
<p>From <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> we see that</p>
<p><span class="math-container">$$\mathbb E[|X-\mu|]=2\,\mathbb E[(X-\mu)I_{X>\mu}]=2\,\mathbb E[(\mu-X)I_{X\le\mu}]$$</span></p>
<p>So a direct calculation here would involve evaluating <span class="math-container">$$\mathbb E[|S_n-np|]=2\sum_{j=v}^n (j-np) f(j)=2\sum_{j=0}^{v-1} (np-j) f(j)\,,\tag{3}$$</span></p>
<p>where <span class="math-container">$f(j)=\binom{n}{j}p^jq^{n-j}$</span> and <span class="math-container">$v=\lfloor np \rfloor +1$</span>.</p>
<p>But this can be done here without much work.</p>
<p>We have the relation</p>
<p><span class="math-container">$$\frac{f(j+1)}{f(j)}=\left(\frac{n-j}{j+1}\right)\frac{p}{q} \quad,\, j=0,1,\ldots,n$$</span></p>
<p>That is, <span class="math-container">$$q(j+1)f(j+1)=npf(j)-pjf(j) \quad,\, j=0,1,\ldots,n$$</span></p>
<p>Therefore, <span class="math-container">$$q\sum_{j=v}^n (j+1)f(j+1)=np\sum_{j=v}^n f(j)-p\sum_{j=v}^n jf(j)$$</span></p>
<p>Noting that <span class="math-container">$f(n+1)=0$</span>, this is same as <span class="math-container">$$q\sum_{j=v}^n jf(j)-q vf(v)=np\sum_{j=v}^n f(j)-p\sum_{j=v}^n jf(j)$$</span></p>
<p>That is, <span class="math-container">$$\sum_{j=v}^n jf(j)-np\sum_{j=v}^n f(j)=q vf(v) \tag{4}$$</span></p>
<p>So from <span class="math-container">$(3)$</span> and <span class="math-container">$(4)$</span> we end up with</p>
<p><span class="math-container">$$\mathbb E[|S_n-np|]=2\left[\sum_{j=v}^n jf(j)-np\sum_{j=v}^n f(j)\right]=2vqf(v)$$</span></p>
<p>(Working with the other sum <span class="math-container">$\sum_{j=0}^{v-1} (np-j) f(j)$</span> also results in the same answer).</p>
<p>This can be seen as a general method for finding mean absolute deviation about mean (if it exists) of a non-negative discrete distribution whenever the probability mass function <span class="math-container">$f$</span> is such that <span class="math-container">$$\frac{f(j+1)}{f(j)}=\frac{\alpha+\beta j}{j+1}\quad,\,j=0,1,\ldots$$</span> for some constant <span class="math-container">$\alpha,\beta$</span>.</p>
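<p>A brute-force numeric check (illustrative) of the identity $\mathbb E|S_n - np| = 2vq\,b(v;n,p)$ with $v=\lfloor np\rfloor + 1$, for a few sample pairs $(n,p)$:</p>

```python
from math import comb

# Compare the mean absolute deviation of Binomial(n, p) about np with the
# closed form 2 v q f(v), where v = floor(np) + 1 (illustrative check).
def check(n, p):
    q = 1 - p
    f = lambda j: comb(n, j) * p**j * q**(n - j)
    mad = sum(abs(j - n * p) * f(j) for j in range(n + 1))
    v = int(n * p) + 1
    return mad, 2 * v * q * f(v)

for n, p in [(10, 0.37), (17, 0.5), (25, 0.62)]:
    lhs, rhs = check(n, p)
    print(n, p, lhs, rhs)  # the two columns agree
```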
|
1,329,214 |
<p>I'm having difficulty solving a linear algebra problem:<br>
Let $A,B,C,D$ be real $n \times n$ matrices. Show that there is a non-zero $n \times n$ matrix $X$ such that $AXB$ and $CXD$ are both symmetric. </p>
<p>There is an accompanying hint:<br>
Show that the set of all matrices $X$ for which $AXB$ is symmetric is a vector space, and compute its dimension. </p>
<p>I introduced some notation:
let $S(A,B)$ denote the set of all $n \times n$ real matrices such that $AXB$ is symmetric. It's easy to prove that for any $A,B$ real $n \times n$ matrices $S(A,B)$ is a subspace of $\cal {M}_{n \times n} (\mathbb{R})$-it's closed under addition, scalar multiplication, and contains the zero matrix. Now I'm not entirely sure where to go. I'd like to be able to show that the space $S(A,B) \cap S (C,D)$ has dimension greater than $0$, but I haven't made any progress thus far. </p>
<p>Any help would be greatly appreciated!</p>
|
Matt Samuel
| 187,867 |
<p>Hint: The product $AXB$ has entries that can be expressed as linear combinations of the entries of $X$, where the coefficients come from the entries of $A$ and $B$. The symmetry condition imposes equations on the entries, so we have a linear system of equations with unknowns the off-diagonal entries of $X$. How many equations are there? You should be able to use this to compute the dimension, and it will be big enough to get what you want.</p>
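<p>To make the hint concrete for $n=2$ (illustrative only; the integer matrices below are arbitrary sample choices): each symmetry condition imposes $n(n-1)/2 = 1$ linear equation on the $n^2 = 4$ entries of $X$, so two conditions leave a solution space of dimension at least $2$:</p>

```python
from fractions import Fraction
from itertools import product

def sym_equation(M, N):
    # Coefficients, as a linear functional in the entries x_{kl} of X,
    # of the n = 2 symmetry condition (M X N)_{01} - (M X N)_{10} = 0.
    return [Fraction(M[0][k] * N[l][1] - M[1][k] * N[l][0])
            for k, l in product(range(2), repeat=2)]

def rank(rows):
    # Exact Gaussian elimination over the rationals.
    rows = [r[:] for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                factor = rows[i][c] / rows[r][c]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

A, B = [[1, 2], [3, 4]], [[0, 1], [5, 2]]   # arbitrary sample matrices
C, D = [[2, 7], [1, 1]], [[3, 0], [4, 6]]
system = [sym_equation(A, B), sym_equation(C, D)]
nullity = 4 - rank(system)
print(nullity)  # at least 2, so a non-zero X exists
```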
|
1,410,048 |
<p>I am unsure of how to proceed about finding the solution to this problem. $$\sum_{i=6}^8(\sum_{j=i}^8 (j+1)^2)$$</p>
<p>Obviously the last step is not too difficult, but since the lower limit of the inner summation is $i$, I am not sure how to solve this. In classes so far we have only really dealt with cases where $j=1$. I assume, however, that it would be beneficial to reindex, so that we have</p>
<p>$$\sum_{i=6}^8(\sum_{k=i+1}^9 k^2)$$</p>
<p>Any help is greatly appreciated!</p>
|
Hypergeometricx
| 168,053 |
<p>As pointed out in the first answer, it would be easier to compute it by simple expansion since $i$ can take on only $3$ values. </p>
<p>However since you asked about reindexing this is a possible approach:</p>
<p>$$\sum_{i=6}^8\sum_{j=i}^8(j+1)^2=\sum_{i=7}^9\sum_{j=i}^9 j^2=\sum_{7\le i\le j\le 9} j^2=\sum_{j=7}^9\sum_{i=7}^j j^2=\sum_{j=7}^9(j-6)j^2=7^2+2(8^2)+3(9^2)\;\;\blacksquare$$</p>
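<p>A direct enumeration (illustrative) confirming the reindexed closed form:</p>

```python
# Enumerate the double sum directly and compare with 7^2 + 2*8^2 + 3*9^2.
direct = sum((j + 1) ** 2 for i in range(6, 9) for j in range(i, 9))
closed = 7**2 + 2 * 8**2 + 3 * 9**2
print(direct, closed)  # both equal 420
```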
|
3,914,365 |
<p>I'm pretty stuck with this exercise. Hope somebody can help me:</p>
<p>show that
<span class="math-container">$$
\sum_{n=2^{k}+1}^{2^{k+1}} \frac{1}{n} \geq\left(2^{k+1}-2^{k}\right) \frac{1}{2^{k+1}}=\frac{1}{2}
$$</span>
and use this to show that the harmonic series is divergent.</p>
<p>First, I can't figure out how to show the inequality.
Second, I'm not quite sure how to use the inequality to show divergence, because it doesn't really help to let $k$ go to infinity.</p>
<hr />
<p>Writing out the series gave me this, but where do I go from here:
<span class="math-container">$$
\sum_{n=2^{k}+1}^{2^{k+1}} \frac{1}{n}=\frac{1}{2^{k}+1}+\frac{1}{2^{k}+2}+\cdots \frac{1}{2^{k}+2^{k}}=\frac{1}{2^{k}+1}+\frac{1}{2^{k}+2}+\cdots+\frac{1}{2^{k+1}}
$$</span></p>
<hr />
<p><strong>Edit:</strong>
So I understand how to show the inequality now. Thanks!
But I'm still lost as to how to use the inequality for the divergence proof.</p>
|
xpaul
| 66,420 |
<p>In fact
<span class="math-container">$$
\sum_{n=2^{k}+1}^{2^{k+1}} \frac{1}{n}=\frac{1}{2^{k}+1}+\frac{1}{2^{k}+2}+\cdots \frac{1}{2^{k}+2^{k}}\ge \frac{1}{2^{k}+2^k}+\frac{1}{2^{k}+2^k}+\cdots+\frac{1}{2^{k}+2^k}=(2^{k+1}-2^k)\frac{1}{2^{k+1}}=\frac12.
$$</span></p>
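<p>To finish the divergence argument: the partial sum $H_{2^{K+1}}$ contains the blocks for $k=0,1,\dots,K$, so $H_{2^{K+1}} \ge 1 + \frac{K+1}{2}$, which is unbounded as $K\to\infty$. A quick numeric check (illustrative) of the block inequality:</p>

```python
# Each dyadic block of the harmonic series has sum at least 1/2 (illustrative).
def block(k):
    return sum(1 / n for n in range(2**k + 1, 2**(k + 1) + 1))

vals = [block(k) for k in range(8)]
print(vals)  # every entry lies in [0.5, 1)
```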
|
3,843,807 |
<p>Let <span class="math-container">$d_1$</span> be a metric on <span class="math-container">$X$</span>. If there exists a strictly increasing, continuous, and subadditive <span class="math-container">$ f:\mathbb {R} \to \mathbb {R} _{+}$</span> such that <span class="math-container">$d_{2}=f\circ d_{1}$</span>. Then <span class="math-container">$d_1,d_2$</span> are topologically equivalent.</p>
<hr />
<p>Note : I found the proposition here:<a href="https://en.wikipedia.org/wiki/Equivalence_of_metrics" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Equivalence_of_metrics</a></p>
|
Hamilton
| 549,321 |
<p>It is extremely simple once you carefully read the content in the <a href="https://www.wikiwand.com/en/Equivalence_of_metrics#/Topological_equivalence" rel="nofollow noreferrer">link</a> you posted.</p>
<p>Note that there is an equivalent definition of equivalence of metric:</p>
<blockquote>
<p>the open balls "nest": for any point <span class="math-container">$x \in X$</span> and any radius <span class="math-container">$r > 0$</span>, there exist radii <span class="math-container">$r', r''$</span> such that</p>
<p><span class="math-container">$B_{r'} (x; d_1) \subseteq B_r (x; d_2) \text{ and } B_{r''} (x; d_2)
\subseteq B_r (x; d_1).$</span></p>
</blockquote>
<p>You can try to prove this.</p>
<p>Part of the solution:</p>
<p>Given a point <span class="math-container">$x$</span> and a radius <span class="math-container">$r$</span>, we want to find <span class="math-container">$r'$</span> such that <span class="math-container">$B_{r'} (x; d_1) \subseteq B_r (x; d_2)$</span>. Because the function <span class="math-container">$f$</span> is strictly increasing and continuous, it has an inverse function <span class="math-container">$f^{-1}$</span> which is also strictly increasing and continuous. Let <span class="math-container">$r' = f^{-1}(r) $</span>. You can prove <span class="math-container">$B_{r'} (x; d_1) \subseteq B_r (x; d_2)$</span>. Because <span class="math-container">$\forall y \in X, d_1(y,x) < r' = f^{-1}(r) \Longrightarrow f \circ d_1(y,x) < f \circ f^{-1}(r) \Longrightarrow d_2(y,x) < r$</span>.</p>
<p>Now, given a point <span class="math-container">$x$</span> and a radius <span class="math-container">$r$</span>, we want to find <span class="math-container">$r''$</span> such that <span class="math-container">$B_{r''} (x; d_2)\subseteq B_r (x; d_1)$</span>. Let <span class="math-container">$r'' = f(r)$</span>. You can proceed in a similar manner.</p>
<p>So, in fact, the sub-additivity condition is superfluous if you already know that <span class="math-container">$d_2$</span> is a metric. However, such conditions are useful when you don't know that <span class="math-container">$d_2$</span> is a metric and you want to construct a metric <span class="math-container">$d_2$</span> from <span class="math-container">$d_1$</span>; see: <a href="https://math.stackexchange.com/a/69431/549321">this link</a></p>
|
262,773 |
<p>In the curve obtained with following</p>
<pre><code> Plot[{1/(3 x*Sqrt[1 - x^2])}, {x, 0, 1}, PlotRange -> {0, 1},
GridLines -> {{0.35, 0.94}, {}}]
</code></pre>
<p>how can one fill the top and bottom with different colors or patterns such that two regions are perfectly visible in a black-n-white printout?</p>
|
MarcoB
| 27,951 |
<p>Since you mentioned that your final product will be in black and white (or perhaps grayscale), I recommend using hatched fillings instead of colors:</p>
<pre><code>Plot[
Evaluate@ConstantArray[1/(3 x*Sqrt[1 - x^2]),2],
{x, 0.35, .94},
PlotRange -> {{0, 1}, {0, 1}},
GridLines -> {{0.35, 0.94}, {}},
GridLinesStyle -> Directive[Black, Dashed],
PlotStyle -> Directive[Black, Thickness[0.007]],
Filling -> {1 -> Top, 2 -> Bottom},
FillingStyle->{
Directive[HatchFilling[-Pi/4, 1, 10], Black],
Directive[HatchFilling[Pi/4, 1, 10], Black]
}
]
</code></pre>
<p><a href="https://i.stack.imgur.com/ipaAM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ipaAM.jpg" alt="plot with two regions highlighted by different hatched filling styles" /></a></p>
|
3,898,327 |
<p>I tried expanding <span class="math-container">$n$</span> parentheses such as <span class="math-container">$a b a^{-1}$</span> and then tried to replace the second <span class="math-container">$a$</span> with <span class="math-container">$a^2 a^{-1}$</span> but didn't know how to proceed further.</p>
<p>Also, is it possible to use commutativity here somehow?</p>
|
Robert Lewis
| 67,071 |
<p>I do it <em>via</em> a simple inductive argument:</p>
<p>The trivial identity</p>
<p><span class="math-container">$aba^{-1} = aba^{-1} \tag 1$</span></p>
<p>provides the <span class="math-container">$k = 1$</span> base case; for general <span class="math-container">$k$</span> we assume</p>
<p><span class="math-container">$(aba^{-1})^k = ab^ka^{-1}; \tag 2$</span></p>
<p>then right multiplying by <span class="math-container">$aba^{-1}$</span>:</p>
<p><span class="math-container">$(aba^{-1})^{k + 1} = (ab^ka^{-1})aba^{-1} = (ab^k)(a^{-1}a)ba^{-1}$</span>
<span class="math-container">$= (ab^k)eba^{-1} = ab^kba^{-1} = ab^{k + 1}a^{-1}, \tag 3$</span></p>
<p>where <span class="math-container">$e \in G$</span> is the identity element. This completes the induction and hence we have</p>
<p><span class="math-container">$(aba^{-1})^n = ab^na^{-1}, \tag 4$</span></p>
<p>for all <span class="math-container">$n \in \Bbb N$</span>.</p>
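<p>A concrete check (illustrative) of the identity in a non-abelian group, here $GL_2(\mathbb{Z})$ with arbitrary sample matrices:</p>

```python
# Verify (a b a^{-1})^n = a b^n a^{-1} for 2x2 integer matrices (illustrative).
def mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def power(P, n):
    R = [[1, 0], [0, 1]]  # identity
    for _ in range(n):
        R = mul(R, P)
    return R

a = [[1, 1], [0, 1]]
a_inv = [[1, -1], [0, 1]]  # the inverse of a
b = [[1, 0], [1, 1]]

ok = all(power(mul(mul(a, b), a_inv), n) == mul(mul(a, power(b, n)), a_inv)
         for n in range(1, 7))
print(ok)  # True
```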
|
3,898,327 |
<p>I tried expanding <span class="math-container">$n$</span> parentheses such as <span class="math-container">$a b a^{-1}$</span> and then tried to replace the second <span class="math-container">$a$</span> with <span class="math-container">$a^2 a^{-1}$</span> but didn't know how to proceed further.</p>
<p>Also, is it possible to use commutativity here somehow?</p>
|
Community
| -1 |
<p>The inner adjacent <span class="math-container">$aa^{-1}$</span> terms cancel out, and this is fairly trivial: <span class="math-container">$(aba^{-1})^n=aba^{-1}aba^{-1}\dots aba^{-1}=ab^na^{-1}$</span>.</p>
|
494,338 |
<p>This is a GRE preparation question on data interpretation.
Distribution of test scores among students (score range -> total % of students):</p>
<blockquote>
<p>0-65 -> 16<br/>
65-69 -> 37<br/>
70-79 -> 25<br/>
80-89 -> 14 <br/>
90-100 -> 8<br/></p>
</blockquote>
<hr>
<p>Question is that</p>
<p>Which of the following point ranges includes the median reading test score for ninth grade students in School District X for 1993 ? </p>
<hr>
<p>From my understanding, median is the middle value of the dataset</p>
<blockquote>
<p>1,2,3,4,5 <strong>median</strong>=3<br/>
1,2,3,4,5,6 <strong>median</strong>=3.5</p>
</blockquote>
<p>From this logic the answer is 0-65, as 16 is the middle value of (8, 14, 16, 25, 37). However, that is not right; the correct answer is 65-69.</p>
<p>Can anyone explain to me what I am doing wrong? </p>
|
QED
| 91,884 |
<p>You are completely wrong. The median can be calculated as follows: write down the observations in increasing order, including multiplicities, and then choose the middlemost value. You wrote down the frequencies of the corresponding classes and chose the median of the frequencies, not the median of the given data.</p>
<p>You need to write down the cumulative frequencies and choose the class whose cumulative frequency just crosses $N/2$, where $N$ is the total number of observations.</p>
<p>$$\begin{array}{ccc}
\text{Classes}&\text{Frequencies}&\text{Cumulative Frequencies}\\
0-65 & 16 &16\\
65-69 & 37 &16+37=53\\
70-79 & 25 &53+25=78\\
80-89 & 14 &78+14=92\\
90-100 &8&92+8=100
\end{array}$$
Here $N/2=50$. That is, the $50^{th}$ value is the median. The class $65-69$ has cumulative frequency $53>50$ and the class $0-65$ has cumulative frequency $16<50$. So the median, that is the $50^{th}$ observation, lies in the class $65-69$.</p>
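<p>The cumulative-frequency computation can be sketched in a few lines (illustrative; the class labels are taken from the table above):</p>

```python
# Find the class containing the median from grouped frequencies (illustrative).
classes = ["0-65", "65-69", "70-79", "80-89", "90-100"]
freqs = [16, 37, 25, 14, 8]

N = sum(freqs)
cum = 0
median_class = None
for cls, f in zip(classes, freqs):
    cum += f
    if cum >= N / 2:  # first class whose cumulative frequency reaches N/2
        median_class = cls
        break

print(median_class)  # 65-69
```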
|
3,379,447 |
<p>I am currently studying for an exam in mathematical research tools. One concept that has come up again and again throughout the course is continuity. Although the definition of Cauchy continuity, or delta-epsilon continuity, was discussed, we were never shown how to actually prove that a function is continuous. So I would like to know: is there a general approach one can take to prove continuity of simple one-variable functions like $x^2$ or $\sqrt{x}$? Any sort of help would be greatly appreciated. </p>
|
Mohammad Riazi-Kermani
| 514,496 |
<p>If you want to prove that the function <span class="math-container">$f(x)$</span> is continuous at <span class="math-container">$x=a$</span> you need to show that for a given <span class="math-container">$\epsilon >0$</span> you can find a <span class="math-container">$\delta>0$</span> such that if <span class="math-container">$$|x-a|<\delta$$</span> then <span class="math-container">$$|f(x)-f(a)|<\epsilon $$</span></p>
<p>For example if <span class="math-container">$f(x)=x^2$</span> and <span class="math-container">$a=5$</span> you have to show that <span class="math-container">$$|x-5|<\delta \implies |x^2-25|<\epsilon $$</span></p>
<p>The trick is to factor <span class="math-container">$$|x^2-25| = |(x-5)||(x+5)| <|x+5|\delta $$</span>
Now if <span class="math-container">$\delta <1$</span> then <span class="math-container">$|x-5|<1$</span> gives <span class="math-container">$ |x+5|\le |x-5|+10<11$</span>, so <span class="math-container">$$|x^2-25| = |(x-5)||(x+5)| <|x+5|\delta <11\delta <\epsilon $$</span> provided that <span class="math-container">$$\delta =\min \{\epsilon /11, 1\}$$</span></p>
<p>Yes, it is complicated at first but gets better with practice. </p>
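<p>A numeric spot check (illustrative) of this style of argument, using the safe bound $|x+5| \le |x-5| + 10 < 11$ when $|x-5| < 1$, hence $\delta = \min\{\epsilon/11, 1\}$:</p>

```python
# Spot-check the epsilon-delta argument for f(x) = x^2 at a = 5 (illustrative).
def delta_for(eps):
    return min(eps / 11, 1.0)

for eps in (0.5, 0.01, 3.0):
    d = delta_for(eps)
    xs = [5 - 0.999 * d, 5 - 0.5 * d, 5 + 0.5 * d, 5 + 0.999 * d]
    # |x - 5| < delta forces |x^2 - 25| < eps at these sample points
    assert all(abs(x * x - 25) < eps for x in xs)
print("ok")
```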
|
2,513,509 |
<p>If $f,g,h$ are functions defined on $\mathbb{R},$ the function $(f\circ g \circ h)$ is even:</p>
<p>i) If $f$ is even.</p>
<p>ii) If $g$ is even.</p>
<p>iii) If $h$ is even.</p>
<p>iv) Only if all the functions $f,g,h$ are even. </p>
<p>Shall I take different examples of functions and see?</p>
<p>Could anyone explain this for me please?</p>
|
José Carlos Santos
| 446,262 |
<p>Does the limit$$\lim_{h\to0}\frac{\arcsin h-\arctan h}{\sin h-\tan h}$$exist? If it does, it will be equal to your limit. But\begin{align}\lim_{h\to0}\frac{\arcsin h-\arctan h}{\sin h-\tan h}&=\lim_{h\to0}\frac{\left(h+\frac{h^3}6+\frac{3h^5}{40}+\cdots\right)-\left(h-\frac{h^3}3+\frac{h^5}5+\cdots\right)}{\left(h-\frac{h^3}6+\frac{h^5}{120}+\cdots\right)-\left(h+\frac{h^3}3+\frac{2h^5}{15}+\cdots\right)}\\&=\lim_{h\to0}\frac{\frac{h^3}2-\frac{h^5}8+\cdots}{-\frac{h^3}2-\frac{h^5}8+\cdots}\\&=-1.\end{align}</p>
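<p>A numeric sanity check (illustrative) of the limit computed above:</p>

```python
import math

# (arcsin h - arctan h) / (sin h - tan h) should approach -1 as h -> 0.
def ratio(h):
    return (math.asin(h) - math.atan(h)) / (math.sin(h) - math.tan(h))

for h in (0.1, 0.01, 0.001):
    print(h, ratio(h))  # approaches -1
```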
|
121,791 |
<p>Now I'm trying to use </p>
<pre><code>Show[plota, Epilog -> Inset[inset, Scaled[{0.2, 0.7}]]]
</code></pre>
<p>operation to insert an inset of a figure, where plota is an Overlay of two plots</p>
<pre><code>plota = Overlay[{plotb, plotc}];
</code></pre>
<p>and then it cannot be successfully inserted, because "Overlay is not a type of graphics" and so it cannot be shown. </p>
<p>What should I do to make the inset works? </p>
|
Jason B.
| 9,490 |
<p>You can just make everything an <a href="http://reference.wolfram.com/language/ref/Inset.html" rel="nofollow noreferrer"><code>Inset</code></a>, this way you don't even need an underlying <code>Graphics</code> object to build on top of. Using the plots from Mr. Wizard's post,</p>
<pre><code>insetA = Overlay[{plotB, plotC}];
insetB = Overlay[{plotA, Graphics@Circle[{0, 0}, .1]}];
Show[Graphics /@ {Inset[insetA, {0, 0}, {0, 0}],
Inset[insetB, Scaled[{-.05, -.05}], {0, 0},
Scaled[{.5, .5}], {1, 1}]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/X0tOp.png" alt="Mathematica graphics"></p>
<p>I just combined two <code>Overlay</code> objects using <code>Show</code></p>
|
2,369,274 |
<p>More specifically, I am trying to solve the problem in Ravi Vakil's notes:</p>
<blockquote>
<p>Make sense of the following sentence: "The map <span class="math-container">$$\mathbb{A}^{n+1}_k \setminus \{0\} \rightarrow \mathbb{P}^n_k$$</span> given by <span class="math-container">$$(x_0,x_1,..x_n) \rightarrow [x_0,x_1,..,x_n]$$</span> is a morphism of schemes." Caution: you can’t just say where points go; you have
to say where functions go. So you may have to divide these up into affines, and describe the maps, and check that they glue.</p>
</blockquote>
<p>I understand why this works on a very technical level, essentially it seems to be because this diagram commutes (which is mostly because a map of affine schemes is given by its map of global sections):</p>
<p><a href="https://i.stack.imgur.com/DKoEU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DKoEU.jpg" alt="enter image description here" /></a></p>
<p>However, in <a href="https://math.stackexchange.com/questions/114165/on-a-certain-morphism-of-schemes-from-affine-space-to-projective-space">this</a> answer to the same problem, Michael Joyce argues that there is no need to check gluing because the map is globally defined. So my question is:</p>
<p>Is this map globally defined on points or on functions? If it is globally defined on points, then what makes this example different from the different maps of schemes <span class="math-container">$Spec(\mathbb{C}) \to Spec(\mathbb{C})$</span> given by the identity and conjugation maps <span class="math-container">$\mathbb{C} \to \mathbb{C}$</span>, which are different maps but agree on points? If the map is defined on functions, then why does it seem like the arrow is going in the wrong direction?</p>
|
Ross Millikan
| 1,827 |
<p>First you can just compute $f$ at $0,1,2$ getting $-2,1,-2$ so clearly A and C are wrong. Because the quadratic term is negative the function will go to $-\infty$ as $x$ gets large in either direction, so it will have some absolute maximum, and D is wrong. You can compute the derivative at $0,2$ and find it is nonzero to rule out $E$. The answer is B. There is no derivative at B because above $1$ is is $-2$ and below $1$ it is $+2$. Since the derivative fails to exist it is a candidate for a local maximum, which it is. It is also the global maximum.</p>
|
2,369,274 |
<p>More specifically, I am trying to solve the problem in Ravi Vakil's notes:</p>
<blockquote>
<p>Make sense of the following sentence: "The map <span class="math-container">$$\mathbb{A}^{n+1}_k \setminus \{0\} \rightarrow \mathbb{P}^n_k$$</span> given by <span class="math-container">$$(x_0,x_1,..x_n) \rightarrow [x_0,x_1,..,x_n]$$</span> is a morphism of schemes." Caution: you can’t just say where points go; you have
to say where functions go. So you may have to divide these up into affines, and describe the maps, and check that they glue.</p>
</blockquote>
<p>I understand why this works on a very technical level, essentially it seems to be because this diagram commutes (which is mostly because a map of affine schemes is given by its map of global sections):</p>
<p><a href="https://i.stack.imgur.com/DKoEU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DKoEU.jpg" alt="enter image description here" /></a></p>
<p>However, in <a href="https://math.stackexchange.com/questions/114165/on-a-certain-morphism-of-schemes-from-affine-space-to-projective-space">this</a> answer to the same problem, Michael Joyce argues that there is no need to check gluing because the map is globally defined. So my question is:</p>
<p>Is this map globally defined on points or on functions? If it is globally defined on points, then what makes this example different from the different maps of schemes <span class="math-container">$Spec(\mathbb{C}) \to Spec(\mathbb{C})$</span> given by the identity and conjugation maps <span class="math-container">$\mathbb{C} \to \mathbb{C}$</span>, which are different maps but agree on points? If the map is defined on functions, then why does it seem like the arrow is going in the wrong direction?</p>
|
zhw.
| 228,045 |
<p>The functions agree at $1$ so $f$ is continuous. Verify that $f'(x) > 0$ to the left of $1,$ and $f'(x) <0$ to the right of $1.$ It follows that $f$ has an absolute max at $1.$</p>
|
2,006,565 |
<p>I was solving a question to find $\lambda$ and this is the current situation:</p>
<p>$$\begin{bmatrix}
5 & 10\\
15 & 10
\end{bmatrix}
=\lambda\begin{bmatrix}2\\3\end{bmatrix}$$</p>
<p>My algebra suggested that I should divide the right-hand matrix with the left-hand one to find $\lambda$. Then after a few searches I found out that $A/B = AB^{-1}$, so now I think I need to find the inverse of that $2\times1$ matrix to multiply it with the other matrix to find $\lambda$, but the inverse is only present for a square matrix. I also have a feeling that this is not the right approach, so I am here for help. Basically I have no idea how to find $\lambda$.</p>
|
lab bhattacharjee
| 33,337 |
<p>Let $\cos^{-1}\sqrt x=y\implies x=(\cos y)^2=\dfrac{1+\cos2y}2\implies dx=-\sin2y\ dy$</p>
<p>and $\cos2y=2x-1$</p>
<p>As $\sqrt x\ge0,0\le y\le\dfrac\pi2,\sin2y=2\sqrt{x(1-x)}$</p>
<p>$$I=\int(a-\cos^{-1}\sqrt x)^2\ dx=-\int(a-y)^2\sin2y\ dy$$</p>
<p>Integrating by parts, </p>
<p>$$I=(a-y)^2\int(-\sin2y)\ dy-\int\left(\dfrac{d(y-a)^2}{dy}\int(-\sin2y)\ dy\right)dy$$</p>
<p>$$=\dfrac{(y-a)^2\cos2y}2-\dfrac12\int(y-a)\cos2y\ dy$$</p>
<p>Integrate by parts,
$$J=\int(y-a)\cos2y\ dy$$
$$=(y-a)\int\cos2y\ dy-\int\left(\dfrac{d(y-a)}{dy}\cdot\int\cos2y\ dy\right)dy=?$$</p>
|
828,148 |
<p>In Euclidean geometry, we know that SSS, SAS, AAS (or equivalently ASA) and RHS are the only 4 tools for proving the congruence of two triangles. I am wondering if the following can be added to that list:-</p>
<p>If ⊿ABC~⊿PQR and the area of ⊿ABC = that of ⊿PQR, then the two triangles are congruent.</p>
|
Bill Dubuque
| 242 |
<p><strong>Hint</strong> $\ {\rm mod}\ 5\!:\,\ \color{#c00}{3^2\equiv\, -1}\ \Rightarrow\ 3^{2k}\equiv (\color{#c00}{3^2})^k\equiv (\color{#c00}{-1})^k\ $ [$\,\equiv -1\ $ if $\,k\,$ is odd, as in your case]</p>
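<p>A quick computational check (illustrative) of the congruence:</p>

```python
# 3^(2k) is congruent to (-1)^k mod 5, hence to -1 = 4 mod 5 for odd k.
for k in range(1, 12):
    assert pow(3, 2 * k, 5) == (-1) ** k % 5
print("ok")
```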
|
4,333,062 |
<p>I stumbled upon the following identity
<span class="math-container">$$\sum_{k=0}^n(-1)^kC_k\binom{k+2}{n-k}=0\qquad n\ge2$$</span>
where <span class="math-container">$C_n$</span> is the <span class="math-container">$n$</span>th Catalan number. Any suggestions on how to prove it are welcome!</p>
<p>This came up as a special case of a generating function for labeled binary trees. Actually I can directly prove the identity is zero by showing that certain trees don't exist, but I expect that seeing a direct proof will help me find nice closed formulae for other coefficients of the generating function.</p>
|
Yiorgos S. Smyrlis
| 57,021 |
<p>If
<span class="math-container">$$
\sqrt{2k+1}-\sqrt{2k-5}=4
$$</span>
then
<span class="math-container">$$
6=(2k+1)-(2k-5)=(\sqrt{2k+1}-\sqrt{2k-5})(\sqrt{2k+1}+\sqrt{2k-5})
=4(\sqrt{2k+1}+\sqrt{2k-5}).
$$</span>
Hence we obtain the <em>system</em>
<span class="math-container">$$
\sqrt{2k+1}-\sqrt{2k-5}=4 \\
\sqrt{2k+1}+\sqrt{2k-5} = \frac{3}2
$$</span>
This implies that <span class="math-container">$\sqrt{2k-5}=-\frac{5}{4}$</span></p>
<p>IMPOSSIBLE!</p>
<p>Hence, no solution.</p>
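<p>Numeric evidence (illustrative, not a proof) for the conclusion: on the domain $k \ge 5/2$ the function $g(k)=\sqrt{2k+1}-\sqrt{2k-5}$ attains its maximum $\sqrt{6}\approx 2.449$ at $k=5/2$ and decreases toward $0$, so it never reaches $4$:</p>

```python
import math

def g(k):
    return math.sqrt(2 * k + 1) - math.sqrt(2 * k - 5)

samples = [2.5 + 0.5 * i for i in range(100)]
m = max(g(k) for k in samples)
print(m, math.sqrt(6))  # the sampled maximum is g(2.5) = sqrt(6) < 4
```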
|
1,466,281 |
<p>Let $X$ and $Y$ be two absolutely continuous random variables on some probability space having joint distribution function $F$ and joint density $f$. Find the joint distribution and the joint density of the random variables $W=X^2$ and $Z=Y^2$.</p>
<p>I tried the following. I know that $$F(w,y)=P[W \leq w,Z \leq z]=P[X^2 \leq w,Y^2 \leq z]$$
Equivalently $$P[-\sqrt{w} \leq X \leq \sqrt{w},-\sqrt{z} \leq Y \leq \sqrt{z}]=\int_{-\sqrt{z}}^{\sqrt{z}}\int_{-\sqrt{w}}^{\sqrt{w}} f(x,y) \, dx \, dy$$ Then I tried the change of variable $x=\sqrt{w}$, but I can't see how to recover the joint density and distribution of $W$ and $Z$</p>
|
Michael Hardy
| 11,667 |
<p>You want this:
$$
f_{W,Z}(w,z) = \frac{\partial^2}{\partial w\,\partial z} \int_{-\sqrt{z}}^{\sqrt{z}}\int_{-\sqrt{w}}^{\sqrt{w}} f(x,y) \, dx \, dy.
$$</p>
<p>Use the chain rule:</p>
<p>Write $u= \sqrt z$ so that $u^2 = z$ and $2u\,du = dz$ and then you have
\begin{align}
\frac{df}{dz} & = \frac{df}{du}\cdot\frac{du}{dz} = \left( \frac d {du} \int_{-u}^u \bullet\bullet\bullet \right) \cdot \frac d {dz} \sqrt z \\[10pt]
& = \left( \left(\frac d {du} \int_0^u \bullet\bullet\bullet \right) - \left(\frac d {du} \int_0^{-u} \bullet\bullet\bullet\right) \right) \cdot \frac d {dz} \sqrt z
\end{align}</p>
<p>You can handle $\displaystyle \frac d {du} \int_0^u \bullet\bullet\bullet$ via the fundamental theorem of calculus.</p>
<p>For $\displaystyle \frac d {du} \int_0^{-u} \bullet\bullet\bullet$ you can write
$$
g = \int_0^v\bullet\bullet\bullet \text{ and } v=-u
$$
and then find $\dfrac{dg}{dv}$ via the fundamental theorem, and $\frac{dv}{du}=-1$, and
$$
\frac{dg}{du} = \frac{dg}{dv}\cdot\frac{dv}{du}.
$$</p>
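<p>The Leibniz-plus-chain-rule computation above can be sanity-checked numerically on a one-dimensional stand-in. Here <code>f</code> is a hypothetical integrand chosen purely for illustration (not the post's joint density), and the expected result of differentiating $\int_{-\sqrt z}^{\sqrt z} f(x)\,dx$ with respect to $z$ is $\bigl(f(\sqrt z)+f(-\sqrt z)\bigr)/(2\sqrt z)$:</p>

```python
import math

def f(x):
    # stand-in integrand (an assumption for this sketch, not the post's f)
    return math.exp(-x * x)

def F(z, steps=20000):
    # F(z) = integral of f from -sqrt(z) to sqrt(z), composite midpoint rule
    a, b = -math.sqrt(z), math.sqrt(z)
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

z, h = 0.7, 1e-4
lhs = (F(z + h) - F(z - h)) / (2 * h)                            # numerical d/dz
rhs = (f(math.sqrt(z)) + f(-math.sqrt(z))) / (2 * math.sqrt(z))  # Leibniz + chain rule
print(lhs, rhs)  # the two values agree to several decimal places
```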
|
2,787,250 |
<p>I don't understand why there are constant $A,B,C$ s.t.
$$\frac{1}{(x-1)(x-2)^2}=\frac{A}{(x-1)}+\frac{B}{(x-2)}+\frac{C}{(x-2)^2}.$$</p>
<p>I know how to compute $A,B,C$, but I don't understand how someone thought to do this. In what way is this natural?</p>
|
Surb
| 154,545 |
<p>It's just Bézout's Identity. There are $A,B\in\mathbb R$ s.t. $$A(x-1)+B(x-2)=1.$$
Then $$\frac{1}{(x-1)(x-2)^2}=\frac{A(x-1)+B(x-2)}{(x-1)(x-2)^2}=\frac{A}{(x-2)^2}+\frac{B}{(x-1)(x-2)}=\frac{A}{(x-2)^2}+\frac{B(A(x-1)+B(x-2))}{(x-1)(x-2)}=\frac{A}{(x-2)^2}+\frac{BA}{(x-2)}+\frac{B^2}{(x-1)}.$$</p>
<hr>
<h2>Edit</h2>
<p>In fact Bézout says that there are polynomials $P(x)$ and $Q(x)$ s.t. $$P(x)(x-1)+Q(x)(x-2)=1,$$</p>
<p>but this implies that $P(x)$ and $Q(x)$ must have degree $0$.</p>
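<p>Evaluating $A(x-1)+B(x-2)=1$ at $x=2$ and $x=1$ gives the concrete constants $A=1$, $B=-1$, and the resulting decomposition from the chain above can be checked exactly with rational arithmetic. A small illustrative sketch:</p>

```python
from fractions import Fraction

A, B = 1, -1  # Bezout: A*(x-1) + B*(x-2) = 1 for every x

for x in [Fraction(5, 2), Fraction(7, 3), Fraction(10), Fraction(-4, 5)]:
    assert A * (x - 1) + B * (x - 2) == 1
    lhs = 1 / ((x - 1) * (x - 2) ** 2)
    # decomposition obtained in the answer: A/(x-2)^2 + BA/(x-2) + B^2/(x-1)
    rhs = B ** 2 / (x - 1) + B * A / (x - 2) + A / (x - 2) ** 2
    assert lhs == rhs
print("decomposition verified")
```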
|
2,787,250 |
<p>I don't understand why there are constant $A,B,C$ s.t.
$$\frac{1}{(x-1)(x-2)^2}=\frac{A}{(x-1)}+\frac{B}{(x-2)}+\frac{C}{(x-2)^2}.$$</p>
<p>I know how to compute $A,B,C$, but I don't understand how someone thought to do this. In what way is this natural?</p>
|
G Cab
| 317,234 |
<p>Expanding, upon request, on my comment above, I am going to briefly explain how Partial Fractions
naturally proceed from the chain<br>
Residue Theorem $\to$ <a href="https://en.wikipedia.org/wiki/Laurent_series" rel="nofollow noreferrer">Laurent Series</a> $\to$ <a href="https://en.wikipedia.org/wiki/Mittag-Leffler's_theorem" rel="nofollow noreferrer">Mittag-Leffler Theorem</a> , and corollaries.</p>
<p>Consider a function $f(z)$ analytic in the whole complex plane, except for a countable number of isolated poles.<br>
If $z_k$ is one of these poles, of multiplicity $m_k$, then Laurent's theorem tells us that, in an annulus
around and excluding $z_k$, the function can be expressed as (principal and regular part of the expansion)
$$
f(z) = {{a_{k,\; - m_{\,k} } } \over {\left( {z - z_{\,k} } \right)^{m_{\,k} } }}
+ {{a_{k,\; - \left( {m_{\,k} - 1} \right)} } \over {\left( {z - z_{\,k} } \right)^{m_{\,k} - 1} }}
+ \cdots + {{a_{k,\; - 1} } \over {\left( {z - z_{\,k} } \right)}} + g(z)
$$
and where the coefficients $a$ are derived from the Residue Theorem. </p>
<p>All other poles of $f(z)$, if any, are contained in $g(z)$.
Thus we can reiterate the above for $g(z)$, and continue until the remaining regular part of the expansion ($\gamma(z)$)<br>
does not contain any pole in the finite plane and thus it is an entire function.<br>
The behaviour of $f(z)$ and $\gamma(z)$ for $z \to \infty$ is the same.</p>
<p>Finally, if the function is meromorphic, i.e. the ratio of two entire functions $f(z)=g(z)/h(z)$, then from the above
follows the decomposition into Partial Fractions, which is analyzed and proved under the Mittag-Leffler Theorem.</p>
<p>For those interested in a more detailed presentation, I suggest reading the excellent dedicated chapter in
<em>" Theory of analytic functions - A.I. Markuševič "</em>.</p>
|
2,403,851 |
<p>Here is Proposition 3 (page 12) from Section 2.1 in <em>A Modern Approach to Probability</em> by Bert Fristedt and Lawrence Gray. </p>
<blockquote>
<p>Let $X$ be a function from a measurable space $(\Omega,\mathcal{F})$ to another measurable space $(\Psi,\mathcal{G})$. Suppose $\mathcal{E}$ is a family of subsets of $\Psi$ that generates $\mathcal{G}$ and that $X^{-1}(B) \in \mathcal{F}$ for every $B \in \mathcal{E}$. Then $X$ is a measurable function.</p>
</blockquote>
<p>I'm trying to solve this problem in probability but I don't know how to solve it.</p>
<p>I understand that this proposition (once proved) says that I can show a random variable is measurable by checking only a generating family of the $\sigma$-algebra, rather than all of its elements.</p>
<p>Can someone help me prove this, please? I'm really stuck with this proposition from the book. Thanks for the help and time.</p>
|
lhf
| 589 |
<p>All log functions differ by a multiplicative constant. Therefore, notations like $O(n \log n)$ are unambiguous, because the $O$ absorbs multiplicative constants.</p>
|
3,533,023 |
<p>We have <span class="math-container">$x_1+x_2+...+x_k=n$</span> for some integers <span class="math-container">$k,n$</span>. We have that <span class="math-container">$0 \leq x_1,...,x_k$</span> and individually <span class="math-container">$x_1 \leq a_1$</span>, <span class="math-container">$x_2 \leq a_2$</span>, ..., <span class="math-container">$x_k \leq a_k$</span>.</p>
<p>How many integer solutions are there to this problem?</p>
<p>The solution when there are no constraints to the maximum number of objects in any individual bin is just the "stars and bars" method, but what about for this more constrained version?</p>
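<p>The standard approach for the constrained version is inclusion–exclusion over which upper bounds are violated: a violated bound $x_i \ge a_i+1$ is handled by substituting $x_i \to x_i-(a_i+1)$, giving $\sum_{S\subseteq[k]}(-1)^{|S|}\binom{n-\sum_{i\in S}(a_i+1)+k-1}{k-1}$. A sketch comparing this count against brute force (the function names are my own):</p>

```python
from itertools import combinations, product
from math import comb

def bounded_solutions(n, caps):
    # Inclusion-exclusion over which upper bounds are violated.
    k = len(caps)
    total = 0
    for r in range(k + 1):
        for idxs in combinations(range(k), r):
            m = n - sum(caps[i] + 1 for i in idxs)  # force x_i >= caps[i]+1 for i in idxs
            if m >= 0:
                total += (-1) ** r * comb(m + k - 1, k - 1)  # stars and bars on the rest
    return total

def brute_force(n, caps):
    return sum(1 for xs in product(*(range(a + 1) for a in caps)) if sum(xs) == n)

caps = (3, 2, 4)
print([bounded_solutions(n, caps) == brute_force(n, caps) for n in range(10)])  # all True
```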
|
Ross Millikan
| 1,827 |
<p>There are six regions around the exterior that are not part of any of the smaller circles. You are asked what percentage of the large circle they represent. The inner area is divided into lenses and triangles with curved sides, which we can call deltas. You should compute the area of a lens and a delta, count how many of each there are, and add up the areas to get the total area of the small circles.</p>
|
3,427,794 |
<p>Let's say I have the following equation</p>
<p><span class="math-container">$$
x=1
$$</span></p>
<p>I take the derivative of both sides with respect to <span class="math-container">$x$</span>:</p>
<p><span class="math-container">$$
\frac{\partial }{\partial x} x = \frac{\partial }{\partial x}1
$$</span></p>
<p>Therefore, <span class="math-container">$1=0$</span>. Clearly, that is not the right approach. </p>
<p>So what is the right way to think of <span class="math-container">$x=1$</span>. What kind of object is it?</p>
|
eyeballfrog
| 395,748 |
<p>If we're taking derivatives, that means the things on each side of the equals sign are functions. What you've demonstrated is the correct statement that if <span class="math-container">$f$</span> is the identity function <span class="math-container">$f(x) = x$</span> and <span class="math-container">$g$</span> is the constant function <span class="math-container">$g(x) = 1$</span>, then <span class="math-container">$f\ne g$</span>.</p>
|
3,746,412 |
<blockquote>
<p>Prove: For all sets <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, ( <span class="math-container">$A \cap B = A \cup B \implies A = B$</span> )</p>
</blockquote>
<p>In the upcoming proof, we make use of the next lemma.</p>
<p>Lemma: For all sets <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, <span class="math-container">$A = B$</span> iff <span class="math-container">$A - B = B - A$</span>.</p>
<p>Proof: Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be arbitrary sets and let <span class="math-container">$S = A \cap B$</span> and <span class="math-container">$T = A \cup B$</span>. If <span class="math-container">$S = T$</span>, then <span class="math-container">$S$</span> and <span class="math-container">$T$</span> have the same elements. Thus, by the simplification law, we can state that</p>
<p><span class="math-container">$\forall x \in A \cup B \, ( x \in A \cap B ) \tag 1 \label 1$</span></p>
<p>To prove <span class="math-container">$A = B$</span> we might as well harness our lemma and establish that <span class="math-container">$A - B = B - A$</span>, which can be verified through the axiom of extension by showing that <span class="math-container">$\forall y \in A - B \,( y \in B - A )$</span> and <span class="math-container">$\forall y^* \in B - A \, (y^* \in A - B )$</span>. So, let</p>
<p><span class="math-container">$y \in A - B \tag 2 \label 2$</span></p>
<p>and let
<span class="math-container">$y^* \in B - A \tag 3 \label 3$</span></p>
<p>From \eqref{2} we rest assured that <span class="math-container">$y\in A$</span> and <span class="math-container">$y\not\in B$</span>. Furthermore, by \eqref{1} we know that</p>
<p><span class="math-container">$y \in A \land y \in B \tag 4 \label 4$</span></p>
<p>Nonetheless \eqref{4} is an antilogy since from \eqref{2} we deduced, inter alia, that <span class="math-container">$y\not\in B$</span>. Notice thus far we are trying to prove that \eqref{3} and <span class="math-container">$F$</span> implies <span class="math-container">$y \in B - A $</span> and <span class="math-container">$y^* \in A - B$</span>. Therefore, it’s vacuously true that</p>
<p><span class="math-container">$A = B$</span></p>
<p>Q.E.D</p>
<p>I believe I could've done this proof more easily by using the contrapositive method, but my question is, is this proof right?</p>
|
halrankard
| 688,699 |
<p>As others have already noted, it is indeed much simpler to prove this by contrapositive. But you asked if <em>your</em> proof is correct, which is a perfectly reasonable question that I would like to answer.</p>
<p>There is a gap in your logic.</p>
<p>You have set out to prove:</p>
<p><span class="math-container">$$
\forall y (y\in A-B \rightarrow y \in B-A)\textrm{ and }
\forall y^* (y^*\in B-A\rightarrow y^*\in A-B)
$$</span></p>
<p>You fixed arbitrary elements <span class="math-container">$y$</span> and <span class="math-container">$y^*$</span> and assumed <span class="math-container">$y\in A-B$</span> and <span class="math-container">$y^*\in B-A$</span>. Then you derived a contradiction from the assumption <span class="math-container">$y\in A-B$</span> and then said that from this false premise you can conclude anything, such as <span class="math-container">$y^*\in A-B$</span>. Note that you did all of this without ever using the assumption <span class="math-container">$y^*\in B-A$</span>. So, in symbols, what you really proved is</p>
<p><span class="math-container">$$
\forall y\big(y\in A-B\rightarrow (y\in B-A \textrm{ and }\forall y^*[y^*\in A-B])\big)
$$</span></p>
<p>But this is not logically equivalent to proving the two statements above.</p>
<p>In other words, the two statements you set out to prove are separate. Thus you need to prove the second statement:
<span class="math-container">$$
\forall y^* (y^*\in B-A\rightarrow y^*\in A-B)
$$</span>
on its own, and not under a running assumption that there is some <span class="math-container">$y\in A-B$</span> (which is only relevant in proving the <em>first</em> statement). To put it simply, the falsity <span class="math-container">$F$</span> you obtained is only applicable in proving <span class="math-container">$\forall y(y\in A-B\rightarrow y\in B)$</span> (since it was generated in the process of that proof). Thus to prove the second statement <span class="math-container">$\forall y^* (y^*\in B-A\rightarrow y^*\in A-B)$</span> you need to start with a clean slate. Of course the proof of the second claim is equally straightforward and you can obtain a similar contradiction.</p>
<p>These two statements are similar enough that you could employ a "without loss of generality". For example, you want to prove <span class="math-container">$A-B\subseteq B-A$</span> and <span class="math-container">$B-A\subseteq A-B$</span>. Since <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are interchangeable in this situation, you could say without loss of generality it is enough to prove one inclusion.</p>
<p>The main conclusion here is that if you want to prove two separate statements, then you should not try to prove them simultaneously, as you have done here, since it increases the risk of such oversights.</p>
<p>A final comment is that usually when one writes "Proof" after the statement of a lemma, it gives the impression that you are about to prove the lemma just stated. So it's good practice to reiterate the statement you are actually about to prove (which is a good idea here since the statement doesn't appear in the body of the question, only the title).</p>
<p>Other options would be:</p>
<blockquote>
<p>Theorem. blah blah <br><br>
We will use this lemma. <br><br>
Lemma: blah blah <br><br>
We now prove the theorem. <br><br>
Proof. blah blah</p>
</blockquote>
<p>OR</p>
<blockquote>
<p>Theorem. blah blah <br><br>
We will use this lemma. <br><br>
Lemma. blah blah <br><br>
Proof of the Theorem. blah blah</p>
</blockquote>
|
2,421,344 |
<p>I'm studying the weak Mordell-Weil theorem in Silverman's "Arithmetic of elliptic curves" and I can't understand the proof of proposition 1.6 of chapter 8.</p>
<p>Here $K$ is a number field, $S$ a finite set of places containing the archimedean ones and $m>2$ an integer. I want to prove that the maximal abelian extension of exponent $m$ unramified outside $S$ is finite. </p>
<p>We can enlarge $K$, so we add the $m^{th}$ roots of unity to use Kummer theory. We can also enlarge $S$ to make the ring of $S$-integers $R_S=\lbrace a\in K : v(a) \geq 0 \text{ for all } v\notin S\rbrace$ a PID.</p>
<p>The first question is, how can I do it? It says to add to $S$ the valuations corresponding to the primes dividing the integral representatives of the ideal classes. Why does this make $R_S$ a PID?</p>
<p>Then we apply Kummer theory and deduce that the extension we are looking for is generated by the $m^{th}$ roots of elements $a\in K^*/(K^*)^m$ such that $\text{ord}_v(a) =0$ mod $m$ for every $v\notin S$.</p>
<p>We have to prove that this set, called $T_S$, is finite by exhibiting a surjective map from $R_S^*/(R_S^*)^m$, which is finite thanks to Dirichlet's unit theorem.</p>
<p>So we take a representative $a\in T_S$. Then $aR_S$ is a (fractional?) ideal which is the $m^{th}$ power of an ideal in $R_S$, given that the primes in $R_S$ correspond to the valuations not in $S$. Obviously the set of elements $x\in R_S$ such that $v(x) >0$ is a prime, but how can I show that every prime is of this form?</p>
<p>The last question could make the previous easier to answer. Given a set as $S$ and a $v_0\notin S$ is there an element $x\in K$ such that $v_0(x)=1$ and $v(x) =0$ for every other $v \notin S$? </p>
|
farruhota
| 425,072 |
<p>Alternatively:
$$
\begin{array}{c|l|c}
X (\# \ of \ bad) & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ P(X) & P(X) \\
\hline
0 & P(GGG)= \frac{7}{10}\cdot \frac{6}{9}\cdot \frac{5}{8} & \frac{210}{720} \\
1 & P(BGG)+P(GBG)+P(GGB) =\frac{3}{10}\cdot \frac{7}{9}\cdot \frac{6}{8}+\frac{7}{10}\cdot \frac{3}{9}\cdot \frac{6}{8}+\frac{7}{10}\cdot \frac{6}{9}\cdot \frac{3}{8} & \frac{378}{720} \\
2 & P(BBG)+P(BGB)+P(GBB) =\frac{3}{10}\cdot \frac{2}{9}\cdot \frac{7}{8}+\frac{3}{10}\cdot \frac{7}{9}\cdot \frac{2}{8}+\frac{7}{10}\cdot \frac{3}{9}\cdot \frac{2}{8} & \frac{126}{720} \\
3 & P(BBB)=\frac{3}{10}\cdot \frac{2}{9}\cdot \frac{1}{8} & \frac{6}{720}
\end{array}
$$
Hence:
$$\begin{align} E(X)= & \sum XP(X)=\frac{0+378+252+18}{720}=\frac{648}{720}=\frac{9}{10}=0.9.\\
Var(X)= & \sum (X-E(X))^2P(X)= \\
= & \frac{9^2}{10^2}\cdot \frac{210}{720}+\frac{1^2}{10^2}\cdot \frac{378}{720}+\frac{11^2}{10^2}\cdot \frac{126}{720}+\frac{21^2}{10^2}\cdot \frac{6}{720}=\frac{49}{100}.\end{align}$$
</p>
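<p>The table can be double-checked by enumerating all $720$ equally likely ordered draws directly (a brute-force sketch in exact rational arithmetic):</p>

```python
from itertools import permutations
from fractions import Fraction

# 3 "bad" and 7 "good" balls; draw 3 without replacement; X = number of bad drawn.
balls = ['B'] * 3 + ['G'] * 7
draws = list(permutations(range(10), 3))   # the 720 equally likely ordered draws

xs = [sum(balls[i] == 'B' for i in d) for d in draws]
E = Fraction(sum(xs), len(xs))
Var = sum((x - E) ** 2 for x in xs) / len(xs)
print(E, Var)  # 9/10 49/100
```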
|
603,469 |
<p>Ok, so I'm not very good with these proving by induction thingies. Need a little help please.</p>
<p>How do I prove </p>
<p>$$\sum_{j=0}^n\left(-\frac12\right)^j=\frac{2^{n+1}+(-1)^n}{3\cdot2^n}$$</p>
<p>when $n$ is a non-negative integer.</p>
<p>I got the basis step and such down, but I'm pretty bad with exponents so I am having a difficult time in the induction step. </p>
|
Jean-Claude Arbaut
| 43,608 |
<p>Induction is a bit overkill here.</p>
<p>If $a \not = 1$,</p>
<p>$$\sum_{j=0}^n a^j=\frac{a^{n+1}-1}{a-1}$$</p>
<p>Here $a=-\frac 1 2$.</p>
<p>$$\frac{a^{n+1}-1}{a-1}=\frac{(-1/2)^{n+1}-1}{-1/2-1}=\frac{2}{3}\left(-\left(-\frac 1 2\right)^{n+1}+1\right)$$
$$=\frac{(-1)^n+2^{n+1}}{3 \cdot 2^n}$$</p>
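<p>The closed form (and hence the induction target) can also be spot-checked with exact arithmetic; a small illustrative sketch:</p>

```python
from fractions import Fraction

def lhs(n):
    # sum_{j=0}^n (-1/2)^j, computed exactly
    return sum(Fraction(-1, 2) ** j for j in range(n + 1))

def rhs(n):
    # the claimed closed form (2^(n+1) + (-1)^n) / (3 * 2^n)
    return Fraction(2 ** (n + 1) + (-1) ** n, 3 * 2 ** n)

print(all(lhs(n) == rhs(n) for n in range(20)))  # True
```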
|