qid | question | author | author_id | answer |
---|---|---|---|---|
4,316,771 |
<p>I'm asked to compute <span class="math-container">$$\sum_{k=-3}^{10} 2k^4$$</span>
I looked up for<a href="https://en.m.wikipedia.org/wiki/Bernoulli_number" rel="nofollow noreferrer">Bernoulli Number</a> on Wikipedia and found a general formula for that. But my teacher has asked me to evaluate this by breaking the summation into the summations whose sum we already know i.e. say <span class="math-container">$\sum k^2$</span>, arithmetic series, geometric series etc.</p>
<p>Can it be solved in this way, can anyone please tell me or give me some hint? It's really urgent and I can't do anything about it.</p>
<p><strong>Edit :</strong> since there is a confusion, I want to clear it by saying that I'm not allowed to use the formula for <span class="math-container">$$\sum_{k=1}^n k^4$$</span>. So I need an answer that doesn't include it.</p>
|
am301
| 382,813 |
<p><span class="math-container">$\sum^n_{k=1} k^m$</span> is a linear function of the powers <span class="math-container">$\{n^m\}_{m=1}^{n+1}$</span>. Therefore
<span class="math-container">$$
\sum^n_{k=1}k^4=An+Bn^2+Cn^3+Dn^4+En^5
$$</span>
We can plug in the values <span class="math-container">$n=1,\dots,5$</span> to get 5 equations:
<span class="math-container">$$
1=A+B+C+D+E\\
17=2A+4B+8C+16D+32E\\
98=3A+9B+27C+81D+243E\\
354=4A+16B+64C+256D+1024E\\
979=5A+25B+125C+625D+3125E
$$</span>
which can be solved (with appropriate software) to get:
<span class="math-container">$$
A=-\frac{1}{30}\\
B= 0\\
C=\frac{1}{3}\\
D=\frac{1}{2}\\
E=\frac{1}{5}\\
$$</span></p>
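<p>As a quick cross-check of those coefficients (a sketch assuming a Mathematica session; any linear-equation solver works the same way):</p>
<pre><code>(* exact sums for n = 1..5: {1, 17, 98, 354, 979} *)
sums = Table[Sum[k^4, {k, 1, n}], {n, 1, 5}];
Solve[Table[a n + b n^2 + c n^3 + d n^4 + e n^5 == sums[[n]], {n, 1, 5}], {a, b, c, d, e}]
(* {{a -> -(1/30), b -> 0, c -> 1/3, d -> 1/2, e -> 1/5}} *)
</code></pre>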
|
983,849 |
<p>Given</p>
<ul>
<li>$G$ is a finite group</li>
<li>$X$ is a subset of group $G$</li>
<li>$|X| > \frac{|G|}{2}$</li>
</ul>
<p>I noticed that any element in $G$ can be expressed as the product of 2 elements in $X$. Is there a valid way to prove this?</p>
<p>If the third condition was $|X| = \frac{|G|}{2}$ instead, does the above statement still hold?</p>
<p>Thank you.</p>
|
Matthias Klupsch
| 19,700 |
<p>Note that for any $g \in G$ the set $g X^{-1}$ has also more than $\frac{|G|}{2}$ elements, hence $gX^{-1} \cap X \neq \emptyset$ so that there are $x,y \in X$ such that $gx^{-1} = y$, i.e. $g = yx$.</p>
|
398,371 |
<p>How to calculate $$\lim_{t\rightarrow1^+}\frac{\sin(\pi t)}{\sqrt{1+\cos(\pi t)}}$$? I've tried to use L'Hospital, but then I'll get</p>
<p>$$\lim_{t\rightarrow1^+}\frac{\pi\cos(\pi t)}{\frac{-\pi\sin(\pi t)}{2\sqrt{1+\cos(\pi t)}}}=\lim_{t\rightarrow1^+}\frac{2\pi\cos(\pi t)\sqrt{1+\cos(\pi t)}}{-\pi\sin(\pi t)}$$
and this doesn't get me further. Any ideas?</p>
|
André Nicolas
| 6,312 |
<p>This is not quite a solution, more of a comment about the assertion that the L'Hospital's Rule calculation
$$\lim_{t\rightarrow1^+}\frac{\pi\cos(\pi t)}{\frac{-\pi\sin(\pi t)}{2\sqrt{1+\cos(\pi t)}}}=\lim_{t\rightarrow1^+}\frac{2\pi\cos(\pi t)\sqrt{1+\cos(\pi t)}}{-\pi\sin(\pi t)}$$
does not get us any further.</p>
<p>Let $L$ be the original limit, <strong>assumed</strong> to exist and be non-zero. Now look at the right-hand side of the expression you reached. The $\pi$'s cancel. The term $\cos(\pi t)$ sedately approaches $-1$, cancelling the minus sign. And the rest has limit $\dfrac{1}{L}$! What you saw as a flaw becomes a virtue. </p>
<p>We conclude that $L=2\cdot \dfrac{1}{L}$. Thus $L=\pm \sqrt{2}$, and a quick examination of signs shows that we need the negative one, since a little past $\pi$, the sine is negative. </p>
<p>I do not advocate this approach, since there are details of existence to fill in, and one could easily reach an incorrect conclusion. Anyway, there is a simple non-L'Hospital calculation that quickly yields the answer. </p>
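<p>For reference, one such non-L'Hospital calculation (a sketch using the half-angle identities $1+\cos(\pi t)=2\cos^2(\pi t/2)$ and $\sin(\pi t)=2\sin(\pi t/2)\cos(\pi t/2)$, noting that $\cos(\pi t/2)<0$ for $t$ slightly larger than $1$):
$$\frac{\sin(\pi t)}{\sqrt{1+\cos(\pi t)}}=\frac{2\sin(\pi t/2)\cos(\pi t/2)}{\sqrt{2}\,|\cos(\pi t/2)|}=-\sqrt{2}\,\sin(\pi t/2)\longrightarrow-\sqrt{2}\quad\text{as }t\to1^+.$$</p>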
|
3,958,142 |
<p>The following problem is from the Chapter 8 miscellaneous problems of <em>Mathematical Methods in the Physical Sciences</em>:
<span class="math-container">$$
(2x-y\sin(2x))dx = (\sin^2x-2y)dy
$$</span>
It isn't an exact equation; the only difference from an exact one is a <span class="math-container">$-$</span> sign.
Because it is not exact, I tried to rearrange it into a linear first-order equation, but that did not work.
I don't know what to do.
I assume there is an integrating factor for such a problem to make it exact, or some other method.</p>
<p>Also I tried to use Mathematica DSolve function to find a solution</p>
<pre><code>DSolve[{y'[x] == (2 x - y[x] sin (2 x))/(sin^2 x - 2 y[x])}, y[x], x]
</code></pre>
<p>This is what it produced:</p>
<pre><code>DSolve::deqn: Equation or list of equations expected instead of True in the first argument {True}.
DSolve[{True}, y[x], x]
</code></pre>
<p>thank you</p>
|
Alessio K
| 702,692 |
<p>Recall we say a differential equation of type
<span class="math-container">$$N(x,y)dx+M(x,y)dy=0$$</span> is exact if <span class="math-container">$$\frac{\partial N}{\partial y}=\frac{\partial M}{\partial x}$$</span></p>
<p>So re-writing the above differential equation as
<span class="math-container">$$
(2x-y\sin(2x))dx - (\sin^2x-2y)dy
=0$$</span></p>
<p>we can see that <span class="math-container">$$\frac{\partial N}{\partial y}=-\sin(2x)=\frac{\partial M}{\partial x}$$</span>
so it is in fact exact.</p>
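<p>A quick symbolic check (a sketch, assuming a Mathematica session; note the built-ins are <code>Sin[2 x]</code> and <code>Sin[x]^2</code>, which may be part of why the <code>DSolve</code> call in the question fails):</p>
<pre><code>n = 2 x - y Sin[2 x];    (* coefficient of dx *)
m = -(Sin[x]^2 - 2 y);   (* coefficient of dy *)
Simplify[D[n, y] == D[m, x]]
(* True *)
</code></pre>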
|
2,834,219 |
<p>I'd like to numerically evaluate the following integral using computer software but it has a singularity at $x=1$:</p>
<p>\begin{equation}
\int_1^{\infty} \frac{x}{1-x^4} dx
\end{equation}</p>
<p>I was thinking of a variable transformation, rewriting the expression, or something of a kind. One of my attempts was to do differentiation under the integral sign, but I only got so far:</p>
<p>\begin{equation}
I(b) = \int_1^{\infty} \frac{x}{1-x^4}e^{-bx} dx
\end{equation}</p>
<p>where setting $b=0$ gives the original expression. Differentiating w.r.t. $b$ gives</p>
<p>\begin{equation}
I'(b) = -\int_1^{\infty} \frac{x^2}{1-x^4}e^{-bx} dx = \int_1^{\infty} \frac{1}{1+x^2}e^{-bx} dx - \int_1^{\infty} \frac{1}{1-x^4}e^{-bx} dx
\end{equation}</p>
<p>It remains to integrate the two terms w.r.t. $x$ and then $b$, but they are not standard integrals.</p>
<p>Is there a better, faster, or easier way that I'm not thinking of?</p>
|
Community
| -1 |
<p>The integral is $-\infty$ because of the pole at $1$. To see this, factor</p>
<p>$$1 - x^4 = (1 - x)p(x)$$</p>
<p>where $p(x)$ is a polynomial which doesn't vanish at $1$. Then near $x = 1$ the integrand is well approximated by $$\frac 1 {p(1)} \frac 1 {1 - x}$$</p>
<p>which is non-integrable. </p>
|
4,616,155 |
<p>This is a Question from an Analysis 1 exam. The question is as follows: Decide if the functions <span class="math-container">$f: \mathbb{R} \longrightarrow \mathbb{R}$</span> can be written as the difference of two monotonically increasing functions</p>
<p>a) <span class="math-container">$f(x) = \cos(x)$</span></p>
<p>b) <span class="math-container">$f(x) = x^2$</span></p>
<p>For the moment I’m working on a) my first thought would be to use the MVT and receive something in the form of <span class="math-container">$\cos(x)+2x = -\sin(x)-2x$</span> but as we see -<span class="math-container">$\sin(x)$</span> is not monotonically increasing. Obviously one could also answer with <span class="math-container">$\cos(x) = (\cos(x)+2x) - 2x$</span> but I fear this answer would not be accepted by my professor.
If you have any tips or answers for either a) or b), I'd be grateful.</p>
|
PierreCarre
| 639,238 |
<p>If <span class="math-container">$f$</span> had compact support <span class="math-container">$K$</span>, it would zero outside <span class="math-container">$K$</span>. So, since <span class="math-container">$K$</span> would be bounded, there would be some <span class="math-container">$x_0 > 0$</span> for which <span class="math-container">$f(x_0)=0$</span>, which does occur.</p>
|
266,834 |
<p>I know the values for a function v[x,y] on an irregular grid of (x,y) points. Call the table storing all these points xyvtriples. Because of the irregular grid, the Mathematica function Interpolation only works as</p>
<p><code>interpolatedvfunc = Interpolation[xyvtriples, InterpolationOrder -> 1];</code></p>
<p>But what I really need are the partial derivatives of interpolatedvfunc with respect to each argument, and for those partials to be continuous, which won't happen due to the edges produced by InterpolationOrder -> 1.</p>
<p>Is there any way around this? I can make a very fine grid of (x,y) points to (I hope) counter any problems with forcing a spline like interpolation if I can somehow force this to happen.</p>
<p>Thanks.</p>
|
E. Chan-López
| 53,427 |
<p>Try this:</p>
<pre><code>Total[Map[If[Total[Grad[#, Variables[expr]]] === 0, #, Nothing] &, MonomialList[expr]]]
(*-(1/4)*)
</code></pre>
<p>In general, when there are no subscripts, we proceed as Bob Hanlon indicates:</p>
<pre><code>expr/. Thread[Variables[expr] -> 0]
</code></pre>
<p>Another way to do it is as follows:</p>
<pre><code>Total[Map[If[Total[Exponent[#, Variables[expr]]] === 0, #, Nothing] &,MonomialList[expr]]]
(*-(1/4)*)
</code></pre>
|
10,807 |
<p>Over the past few days, the Chrome browser page with Questions is crashing.</p>
<p>It has crashed while typing answers twice and several other times when just viewing and updating the questions listing.</p>
<p>Did something change in the past three days that could be affecting this as I have not seen this behavior before and it is very annoying when trying to write answers?</p>
<p>Has anyone else seen this or is this a bug?</p>
|
Antonio Vargas
| 5,531 |
<p>It seems that the crash can reliably be reproduced using the following method (supposing you have the ability to go into edit mode on an answer of mine).</p>
<ol>
<li><p>Close Chrome completely.</p></li>
<li><p>Open a new Chrome window and go to this answer: <a href="https://math.stackexchange.com/a/498905/5531">https://math.stackexchange.com/a/498905/5531</a></p></li>
<li><p>Open up another window next to it and go to this Wikipedia page, where you will find the opening paragraph of Charles Dickens' <em>A Tale of Two Cities</em>: <a href="http://en.wikiquote.org/wiki/A_Tale_of_Two_Cities#Chapter_I_-_The_Period" rel="nofollow noreferrer">http://en.wikiquote.org/wiki/A_Tale_of_Two_Cities#Chapter_I_-_The_Period</a></p></li>
<li><p>Click the "edit" link on my answer, scroll to the bottom of its raw text, and start typing the paragraph from the book into the answer.</p></li>
</ol>
<p>Both times I have tried this it has crashed before the end of the paragraph. You may need to type slightly more, for example by starting the paragraph over.</p>
<p>This should work with any answer with enough MathJax equations and any new text of sufficient length.</p>
<hr>
<p>I recall that someone wrote a script which would re-render only the paragraph you were currently editing in the preview pane rather than re-rendering the whole answer each time a character was added or deleted. Eventually something updated which caused it to stop working. I can't seem to find it now. Does anyone have the link?</p>
|
3,774,400 |
<p>Just like the title says, I don't know how to write an example matrix here so that it looks like a matrix. If this makes sense, <span class="math-container">$A$</span> can be <span class="math-container">$[1 0 0;0 1 0;0 1 0]$</span> (in MATLAB syntax); then if we compute the determinant of <span class="math-container">$A-\lambda I$</span> we get <span class="math-container">$0$</span> for every <span class="math-container">$\lambda$</span>. In MATLAB it says the eigenvalues are <span class="math-container">$0,1,1$</span>.</p>
|
mathcounterexamples.net
| 187,663 |
<p><strong>Two answers</strong></p>
<p><em>First one based on a bit of theory</em></p>
<p>Having <span class="math-container">$\det(A - \lambda I)=0$</span> for all <span class="math-container">$\lambda$</span> is impossible (at least if you work in the field of the reals <span class="math-container">$\mathbb R$</span>). <span class="math-container">$\det(A - \lambda I)$</span> is a non-zero polynomial function that can't be always vanishing.</p>
<p><em>Second one based on computation</em></p>
<p><span class="math-container">$$\det(A - \lambda I)=\begin{vmatrix}
1- \lambda & 0 & 0\\
0 & 1- \lambda & 0\\
0 & 1 & -\lambda
\end{vmatrix} = -\lambda(\lambda-1)^2$$</span></p>
|
2,906,797 |
<p>I want to express this polynomial as a product of linear factors:</p>
<p>$x^5 + x^3 + 8x^2 + 8$</p>
<p>I noticed that $\pm i$ were roots just by looking at it, so two factors must be $(x- i)$ and $(x + i)$, but I'm not sure how I would find the remaining polynomial. For real roots, I would usually just use long division, but it turns out a little messy in this instance (for me at least), and I was wondering if there was a simpler method of finding the remaining polynomial. </p>
<p>Apologies for the basic question!</p>
|
dxiv
| 291,201 |
<p>Alt. hint: you might as well notice that $\,x=-2\,$ is a root by the <a href="https://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow noreferrer">rational root theorem</a>. Then if the remaining two roots are $\,a,b\,$, by <a href="https://en.wikipedia.org/wiki/Vieta%27s_formulas" rel="nofollow noreferrer">Vieta's relations</a> $\,i+(-i)+(-2)+a+b=0 \iff a+b = 2\,$ and $\,i \cdot (-i)\cdot(-2)\cdot a \cdot b = -8 \iff a \cdot b = 4\,$, so $\,a,b\,$ are the roots of $\,x^2 - 2 x + 4 = 0\,$.</p>
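<p>For what it is worth, the same factorization also drops out of a direct grouping (just a cross-check, not part of the hint above):
$$x^5+x^3+8x^2+8=x^3(x^2+1)+8(x^2+1)=(x^2+1)(x^3+8)=(x^2+1)(x+2)(x^2-2x+4).$$</p>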
|
11,922 |
<p>Which is correct terminology: "A Cartesian plane" or "The Cartesian plane"? (As in the directions for a section of homework being, "Plot a point on ______ Cartesian plane.") In that context, I feel that one of the two should be used consistently, but I find there's little agreement on Google about which is appropriate.</p>
|
Anschewski
| 199 |
<p>I think from a philosophical point of view, both terms may be used. From an educational point of view, I would generally use the term "the" Cartesian plane in school context as every two Cartesian planes are isomorphic and should most of the time be considered the same object. This may reduce cognitive complexity. Only in cases where you have several distinct objects that might be called Cartesian plane, I would change the terminology towards "a" Cartesian plane. This could happen in a university course on geometry, where you may cut out Cartesian planes from projective spaces, for example.</p>
|
653,449 |
<p>According to <a href="http://en.wikibooks.org/wiki/Haskell/Category_theory" rel="noreferrer">the Haskell wikibook on Category Theory</a>, the category below is not a valid category due to the addition of the morphism <em>h</em>. The hint says to " think about associativity of the composition operation." But I don't don't see why it fails.</p>
<p><span class="math-container">$$
f \circ (g \circ h) = (f \circ g) \circ h\\
f \circ (\mathit{id}_B) = (\mathit{id}_A) \circ h\\
$$</span></p>
<p>Does this then reduce to <span class="math-container">$f = h$</span> ?</p>
<p>And is that not true because <em>f</em> and <em>h</em>, despite both being from B to A, are not equivalent?</p>
<p><a href="https://i.stack.imgur.com/gllAK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gllAK.png" alt="" /></a><br />
<sub>(source: <a href="https://upload.wikimedia.org/wikibooks/en/6/65/Not-a-cat.png" rel="noreferrer">wikimedia.org</a>)</sub></p>
|
Thomas Andrews
| 7,933 |
<p>hint: What is $hgf$? Write it in two different ways.</p>
|
2,420,435 |
<p>I'm learning logic for computer science and came across the question:</p>
<blockquote>
<p>If $\ n$ is a real number, $\frac{1}{n}$ is the reciprocal of $\ n$. Prove that all
numbers have a unique reciprocal.</p>
</blockquote>
<p>I came up with the following method, but it seems so simple that I doubt it'll work:</p>
<p>$\frac{1}{n}=p$</p>
<p>Since we know that $n\in\mathbb{R}$, and we assume that $p$ is the unique reciprocal, but will this work when proving <em>all</em> real numbers have unique reciprocals? </p>
|
vadim123
| 73,324 |
<p>The way to prove that a certain property is <em>uniquely</em> satisfied, is to assume that two things satisfy and then prove that they are equal.</p>
<p>In this case, suppose that $p,q$ are both reciprocals of $n$. Then, $qn=1=pn$. We divide both sides by $n$ (which needs to be nonzero) to conclude that $p=q$.</p>
<p>Now, this doesn't prove existence; i.e. there might be no reciprocal for $n$ at all. Indeed this is true for $n=0$; without considering this exception the problem is false as written.</p>
|
20,463 |
<p>I'm new here and I don't know how things work here and don't have any knowledge about commenting or answering questions. How do I get to know if my question has been commented or answered?</p>
|
k1.M
| 132,351 |
<p>Another way to handle this is to install the Stack Exchange app on your device (a phone or tablet running Android or iOS) and activate notifications, so that the app sends you push notifications.</p>
|
411,875 |
<p>This is an exam question I encountered while studying for my exam for our topology course:</p>
<blockquote>
<p>Give two continuous maps from $S^1$ to $S^1$ which are not homotopic. (Of course, provide a proof as well.)</p>
</blockquote>
<p>The only continuous maps from $S^1$ to $S^1$ I can think of are rotations, and I thought rotations on a circle can be continuously morphed into one another. </p>
|
Community
| -1 |
<p>Recall that for any map $f : S^1 \to S^1$ that we have the induced map $f_\ast : H_1(S^1) \to H_1(S^1)$ which is a map between two infinite cyclic groups. Any such map is given by multiplication by $n \in \Bbb{Z}$, so we define the <strong><em>degree of $f$</em></strong> to this integer $n$.</p>
<p>If you think of $S^1$ as lying in $\Bbb{R}^2$ then what about $f : S^1 \to S^1$ that is reflection about the $x$ - axis and $g : S^1 \to S^1$ that is the identity? $f$ cannot be homotopic to $g$ because $\deg f = - 1$ while $\deg g= 1$.</p>
<p><strong>Added for OP:</strong> Here is why the degree of the reflection map is $-1$. We construct $S^1$ as a simplicial structure as shown in the picture below. </p>
<p>$\hspace{4.5 cm}$<img src="https://i.stack.imgur.com/23fC9.jpg" alt="enter image description here"></p>
<p>The orientations are according to the arrows. Then our simplicial chain complex is just</p>
<p>$$0 \to \Bbb{Z}\{c,d\} \stackrel{\partial}{\to} \Bbb{Z}\{a,b\} \to 0$$</p>
<p>where the boundary map $\partial$ is just given by $\partial(c) = b - a$ and $\partial(d) = a - b$. Then $\partial(c+d) = 0$ and we easily see that $\ker \partial = \langle c +d\rangle $. It will now follow that</p>
<p>$$H_1(S^1)\cong \ker \partial/0 \cong \Bbb{Z}\{c+d\} \cong \Bbb{Z}.$$</p>
<p>From this it is clear that the degree of the reflection map is $-1$, since it sends the generator of the homology to its negative ($c\mapsto -d$ while $d \mapsto -c$). </p>
|
411,875 |
<p>This is an exam question I encountered while studying for my exam for our topology course:</p>
<blockquote>
<p>Give two continuous maps from $S^1$ to $S^1$ which are not homotopic. (Of course, provide a proof as well.)</p>
</blockquote>
<p>The only continuous maps from $S^1$ to $S^1$ I can think of are rotations, and I thought rotations on a circle can be continuously morphed into one another. </p>
|
kahen
| 1,269 |
<p>If you think of $S^1$ as the unit circle in $\mathbb C$, each homotopy class of loops in $S^1$ with base point $1$ is represented by the map $f_n(z) = z^n$ where $n$ can be any integer, and it's not too hard to show that $f_m$ and $f_n$ are in the same homotopy class if and only if $m=n$.</p>
|
268,461 |
<blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://math.stackexchange.com/questions/228080/operatornameimfz-leq-operatornamerefz-then-f-is-constant">$|\operatorname{Im}f(z)|\leq |\operatorname{Re}f(z)|$ then $f$ is constant</a> </p>
</blockquote>
<blockquote>
<p>Let $f\colon\mathbb C \to \mathbb C$ be entire. Show that if
$|\operatorname{Im}f(z)|\geqslant |\operatorname{Re}f(z)|$ for all $z \in \mathbb C$, then $f$ is constant on $\mathbb C$. </p>
</blockquote>
<p>Can I answer this by considering the distance between $f(z)$ and $i$ like in this problem <a href="https://math.stackexchange.com/questions/228080/operatornameimfz-leq-operatornamerefz-then-f-is-constant">$|\operatorname{Im}f(z)|\leq |\operatorname{Re}f(z)|$ then $f$ is constant</a>?</p>
|
Amr
| 29,267 |
<p>$$X\subseteq F\cup F^c$$
Since $F\subseteq\cup\Omega$ (Here $\cup\Omega$ is the union of all elements of $\Omega$), therefore:
$$X\subseteq\cup\Omega\cup F^c$$
Thus:
$$K\subseteq\cup\Omega\cup F^c$$</p>
|
3,733,428 |
<p>I would like to understand why <span class="math-container">$\mathbb{R}^2\setminus \mathbb{Q}^2$</span> endowed with the subspace topology is not a topological manifold. It seems to me it is Hausdorff and second countable. So I am wondering. Why is it not locally Euclidian?</p>
|
Alex Kruckman
| 7,062 |
<p>Because it has lots of holes!</p>
<p>One way to make this precise is to show that <span class="math-container">$\mathbb{R}^2\setminus \mathbb{Q}^2$</span> is not locally compact. In fact, no point has a compact neighborhood.</p>
<p>Let <span class="math-container">$p\in \mathbb{R}^2\setminus\mathbb{Q}^2$</span> and suppose for contradiction that <span class="math-container">$K$</span> is a compact neighborhood of <span class="math-container">$p$</span>, so we have <span class="math-container">$p\in U\subseteq K$</span> for some open <span class="math-container">$U$</span>. Then <span class="math-container">$U = U'\cap (\mathbb{R}^2\setminus\mathbb{Q}^2)$</span> for some open set <span class="math-container">$U'\subseteq \mathbb{R}^2$</span>, so <span class="math-container">$U'$</span> contains a point <span class="math-container">$q\in \mathbb{Q}^2$</span>. Pick some sequence <span class="math-container">$(q_i)_{i\in \mathbb{N}}$</span> in <span class="math-container">$U$</span> converging to <span class="math-container">$q$</span>. Then no subsequence of <span class="math-container">$(q_i)_{i\in \mathbb{N}}$</span> converges in <span class="math-container">$\mathbb{R}^2\setminus\mathbb{Q}^2$</span>, contradicting compactness of <span class="math-container">$K$</span>. (Since <span class="math-container">$\mathbb{R}^2\setminus\mathbb{Q}^2$</span> is a metric space, compactness is equivalent to sequential compactness.)</p>
|
1,232,143 |
<p>If
$$f(x)=\sum_{n=0}^\infty\frac{x^n}{n!}, x\in\mathbb{R}$$
and
$$
g(x) = 1 + \int_0^x f(t) \,dt
$$</p>
<p>prove that $g(x)=f(x)$ for all $x\in\mathbb{R}$
and prove that $f$ is differentiable on $\mathbb{R}$ as well as show that $f'(x)=f(x)$ for all $x\in\mathbb{R}$.</p>
<p>I know that $f$ is continuous on $\mathbb{R}$ and the series $f$ converges uniformly on $[-r,r]$ for all $r>0$, but I'm unsure of how to approach these questions. Any help is greatly appreciated! </p>
|
Peter
| 82,961 |
<p>$$f(x)=\sum_{n=0}^\infty \frac{x^n}{n!}=e^x$$</p>
<p>$$g(x)=1+\int_0^x f(t) dt=1+e^x-1=e^x$$</p>
<p>So, we have $f(x)=g(x)$ and $f'(x)=f(x)$</p>
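<p>To tie this to the uniform convergence mentioned in the question (a sketch of that route): on every $[-r,r]$ the series converges uniformly, so it can be integrated term by term,
$$g(x)=1+\int_0^x\sum_{n=0}^\infty\frac{t^n}{n!}\,dt=1+\sum_{n=0}^\infty\frac{x^{n+1}}{(n+1)!}=\sum_{n=0}^\infty\frac{x^n}{n!}=f(x),$$
and the fundamental theorem of calculus applied to $g(x)=1+\int_0^x f(t)\,dt$ then gives $f'(x)=g'(x)=f(x)$.</p>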
|
835,639 |
<p>These days I have read many descriptions of a noncooperative game like the one below.</p>
<p>A noncooperative game is a game in which players are unable to make enforceable contracts outside of the rules/description of such a game.</p>
<p>As a graduate student majoring in math, I wonder if there is any mathematically formal definition of a noncooperative( and cooperative) game.</p>
|
Asinomás
| 33,907 |
<p>You can define a cooperative game with $n$ players $A=a_1,a_2,\dots a_n$ to be a function from the power set of $A$ to $\mathbb R$. The idea is that each alliance between players has a value, and this value can be viewed as a cost or as a reward.</p>
<p>Non-cooperative games are usually harder and have many definitions; you might be interested in the <a href="http://en.wikipedia.org/wiki/Extensive-form_game" rel="nofollow">extensive form</a> way to look at games.</p>
|
2,973,825 |
<p>When you are checking to see if a sum of, say, <span class="math-container">$k^2$</span> from <span class="math-container">$k=1$</span> to <span class="math-container">$k=n$</span> is equal to a sum of <span class="math-container">$(k+1)^2$</span> from <span class="math-container">$k=0$</span> to <span class="math-container">$n-1$</span>, can someone explain what is going on here? Thanks.</p>
<p>(I'm looking for a fairly simple way to work the problem without writing out the sums, which may help me understand what is going on.)</p>
|
nonuser
| 463,553 |
<p>Let <span class="math-container">$S$</span> be a set of all cards.</p>
<p>So we have a set <span class="math-container">$A$</span> of all ordered <span class="math-container">$4$</span>-couples of the set <span class="math-container">$S$</span> and a set <span class="math-container">$B$</span> of all <span class="math-container">$5$</span>-subsets of the set <span class="math-container">$S$</span>. </p>
<p>Connect a 4-couple in <span class="math-container">$A$</span> with a subset in <span class="math-container">$B$</span> iff all four cards from that 4-couple are in that subset. Clearly each 4-couple is connected to <span class="math-container">$48$</span> 5-subsets (one for each remaining card of the deck) and every 5-subset is connected to <span class="math-container">$5!= 120$</span> 4-couples. </p>
<p>This relation gives us a bipartite graph <span class="math-container">$G=(A,B)$</span>. Now this graph satisfies Hall's matching condition. Take any subset <span class="math-container">$X$</span> of <span class="math-container">$B$</span>. Then the set of neighbours <span class="math-container">$N(X)$</span> satisfies: <span class="math-container">$$48 \cdot |N(X)| \geq 120\cdot |X|\implies |N(X)|\geq |X| $$</span>
So there exists a matching which saturates all vertices in <span class="math-container">$B$</span>.</p>
|
2,973,825 |
<p>When you are checking to see if a sum of, say, <span class="math-container">$k^2$</span> from <span class="math-container">$k=1$</span> to <span class="math-container">$k=n$</span> is equal to a sum of <span class="math-container">$(k+1)^2$</span> from <span class="math-container">$k=0$</span> to <span class="math-container">$n-1$</span>, can someone explain what is going on here? Thanks.</p>
<p>(I'm looking for a fairly simple way to work the problem without writing out the sums, which may help me understand what is going on.)</p>
|
Misha Lavrov
| 383,078 |
<p>There is in fact a solution which uses your idea: that some suit will be repeated twice.</p>
<p>Say that among the <span class="math-container">$13$</span> ranks within a suit, ordered <span class="math-container">$A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K$</span>, each rank "beats" the next six ranks, wrapping around when we get to the end. For example, <span class="math-container">$A$</span> beats <span class="math-container">$2,3,4,5,6,7$</span>, and <span class="math-container">$10$</span> beats <span class="math-container">$J, Q, K, A, 2, 3$</span>.</p>
<p>If you have two cards of the same suit, exactly one of them beats the other one. So you should pass, in order:</p>
<ol>
<li>A card of the same suit as the missing card, which beats the missing card. This leaves six possibilities for the missing card.</li>
<li>The remaining three cards, in an order that encodes which of the six possibilities it is.</li>
</ol>
<p>For the second step, we should order all <span class="math-container">$52$</span> cards in the deck somehow; for instance, say that <span class="math-container">$\clubsuit < \diamondsuit < \heartsuit < \spadesuit$</span> and <span class="math-container">$A < 2 < 3 < \dots < Q < K$</span>. Then the order of the last three cards is one of "Low, Middle, High", "Low, High, Middle", and so on through "High, Middle, Low". Just remember some correspondence between these six possibilities and the values <span class="math-container">$+1, +2, +3, +4, +5, +6$</span>, and add the value you get to the rank of the first card (wrapping around from <span class="math-container">$K$</span> to <span class="math-container">$A$</span> of the same suit).</p>
<hr>
<p>For example, say that the correspondence we chose in the second step is</p>
<p><span class="math-container">\begin{array}{ccc|c}
\text{Low} & \text{Middle} & \text{High} &+1 \\
\text{Low} & \text{High} & \text{Middle} &+2 \\
\text{Middle} & \text{Low} & \text{High} &+3 \\
\text{Middle} & \text{High} & \text{Low} &+4 \\
\text{High} & \text{Low} & \text{Middle} &+5 \\
\text{High} & \text{Middle} & \text{Low} &+6
\end{array}</span></p>
<p>and you draw the cards <span class="math-container">$\{4\clubsuit, 5\spadesuit, 5\diamondsuit, A\clubsuit, J\spadesuit\}$</span>. </p>
<ul>
<li>We have two possibilities for the repeated suit, so let's choose <span class="math-container">$\spadesuit$</span>.</li>
<li>In the cyclic order in that suit, <span class="math-container">$5\spadesuit$</span> beats <span class="math-container">$J\spadesuit$</span>, so the first card we pass is <span class="math-container">$5\spadesuit$</span>.</li>
<li>We want to encode the offset <span class="math-container">$+6$</span>, which is the ordering High, Middle, Low.</li>
<li>So we pass that ordering: after <span class="math-container">$5\spadesuit$</span> we pass <span class="math-container">$5\diamondsuit, 4\clubsuit, A\clubsuit$</span> in that order, because <span class="math-container">$5\diamondsuit > 4\clubsuit > A\clubsuit$</span>.</li>
</ul>
|
4,594,043 |
<p>For <span class="math-container">$|x|<1,$</span> we have
<span class="math-container">$$
\begin{aligned}
& \frac{1}{1-x}=\sum_{k=0}^{\infty} x^k \quad \Rightarrow \quad \ln (1-x)=-\sum_{k=0}^{\infty} \frac{x^{k+1}}{k+1}
\end{aligned}
$$</span></p>
<hr />
<p><span class="math-container">$$
\begin{aligned}
\int_0^1 \frac{\ln (1-x)}{x} d x & =-\sum_{k=0}^{\infty} \frac{1}{k+1} \int_0^1 x^k dx \\
& =-\sum_{k=0}^{\infty} \frac{1}{(k+1)^2} \\
& =- \zeta(2) \\
& =-\frac{\pi^2}{6}
\end{aligned}
$$</span></p>
<p><span class="math-container">$$
\begin{aligned}
\int_0^1 \frac{\ln (1-x) \ln x}{x} d x & =-\sum_{k=0}^{\infty} \frac{1}{k+1} \int_0^1 x^k \ln xdx \\
& =\sum_{k=0}^{\infty} \frac{1}{k+1}\cdot\frac{1}{(k+1)^2} \\
& =\zeta(3) \\
\end{aligned}
$$</span></p>
<hr />
<p><span class="math-container">$$
\begin{aligned}
\int_0^1 \frac{\ln (1-x) \ln ^2 x}{x} d x & =-\sum_{k=0}^{\infty} \frac{1}{k+1} \int_0^1 x^k \ln ^2 xdx \\
& =-\sum_{k=0}^{\infty} \frac{1}{k+1} \cdot \frac{2}{(k+1)^3} \\
& =-2 \zeta(4) \\
& =-\frac{\pi^4}{45}
\end{aligned}
$$</span></p>
<hr />
<p>In a similar way, I dare guess that</p>
<p><span class="math-container">$$\int_0^1 \frac{\ln (1-x) \ln ^n x}{x} d x =(-1)^{n+1}\Gamma(n)\zeta(n+2),$$</span></p>
<p>where <span class="math-container">$n$</span> is a non-negative <strong>real</strong> number.</p>
<p>Proof:
<span class="math-container">$$
\begin{aligned}
\int_0^1 \frac{\ln (1-x) \ln ^n x}{x} d x & =-\sum_{k=0}^{\infty} \frac{1}{k+1} \int_0^1 x^k \ln ^n xdx \\
\end{aligned}
$$</span>
Letting <span class="math-container">$y=-(k+1)\ln x $</span> transforms the last integral into a Gamma function as</p>
<p><span class="math-container">$$
\begin{aligned}
\int_0^1 x^k \ln ^n x d x & =\int_{\infty}^0 e^{-\frac{ky}{k+1}}\left(-\frac{y}{k+1}\right)^n\left(-\frac{1}{k+1} e^{-\frac{y}{k+1}} d y\right) \\
& =\frac{(-1)^n}{(k+1)^{n+1}} \int_0^{\infty} e^{-y} y^n d y \\
& =\frac{(-1)^n \Gamma(n+1)}{(k+1)^{n+1}}
\end{aligned}
$$</span></p>
<p>Now we can conclude that
<span class="math-container">$$
\begin{aligned}
\int_0^1 \frac{\ln (1-x) \ln ^n x}{x} d x & =(-1)^{n+1} \Gamma(n+1) \sum_{k=0}^{\infty} \frac{1}{(k+1)^{n+2}} \\
& =(-1)^{n+1} \Gamma(n+1)\zeta(n+2)
\end{aligned}
$$</span></p>
<p><strong>Can we</strong> evaluate <span class="math-container">$\int_0^1 \frac{\ln (1-x) \ln ^n x}{x} d x$</span> without expanding <span class="math-container">$\ln (1-x)$</span>?</p>
<p>Your comments and alternative methods are highly appreciated.</p>
|
Lai
| 732,917 |
<p>For a <strong>natural number</strong> <span class="math-container">$n$</span>, I can use differentiation with respect to the parameter <span class="math-container">$a$</span> of the integral
<span class="math-container">$$
\begin{aligned}
I(a) & =\int_0^1 x^a \ln (1-x) d x \\
& =-\int_0^1 x^a \sum_{k=0}^{\infty} \frac{1}{k+1} x^{k+1} d x \\
& =-\sum_{k=0}^{\infty} \frac{1}{k+1} \int_0^1 x^{a+k+1} d x \\
& =-\sum_{k=0}^{\infty}\left(\frac{1}{k+1} \cdot \frac{1}{a+k+2}\right)
\end{aligned}
$$</span>
Differentiating <span class="math-container">$I(a)$</span> w.r.t. <span class="math-container">$a$</span> by <span class="math-container">$n$</span> times at <span class="math-container">$a=-1$</span> yields our integral.</p>
<p><span class="math-container">$$
\begin{aligned}
I & =-\left.\sum_{k=0}^{\infty} \frac{1}{k+1} \cdot \frac{(-1)^n n !}{(a+k+2)^{n+1}}\right|_{a=-1} \\
& =(-1)^{n+1} n ! \sum_{k=0}^{\infty} \frac{1}{k+1} \cdot \frac{1}{(k+1)^{n+1}} \\
& =(-1)^{n+1} n ! \zeta(n+2)
\end{aligned}
$$</span></p>
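<p>As a numerical sanity check of the closed form (a sketch assuming a Mathematica session; any CAS would do), the case <span class="math-container">$n=2$</span> can be compared against <span class="math-container">$-2\zeta(4)=-\pi^4/45$</span>:</p>
<pre><code>NIntegrate[Log[1 - x] Log[x]^2/x, {x, 0, 1}]
(* -2.16465 *)
N[-2 Zeta[4]]
(* -2.16465 *)
</code></pre>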
|
2,287,878 |
<p>Given the following polynomial: $$P(x)=(x^2+x+1)^{100}$$ how do I find $$\sum_{k=1}^{200} \frac{1}{1+x_k},$$ where $x_k$ are its roots? Is there a general solution for this type of problem, because I've seen them tend to ask the same thing for $\sum_{k=1}^{200} \frac{1}{x_k}$? Also, how do I find the coefficient $a_1$ and the remainder for $$P(x)/(x^2+x)?$$ (I found that the coefficient $a_1$ is 100 and the remainder is 1, but I'm not sure.)</p>
|
dxiv
| 291,201 |
<blockquote>
<p>Is there a general solution for this type of problem</p>
</blockquote>
<p>Consider for example $P(x)=(x^2+x+1)^n\,$.</p>
<blockquote>
<p>$\sum_{k=1}^{2n} \frac{1}{x_k}$</p>
</blockquote>
<p>Let $y_k = \frac{1}{x_k} \ne 0\,$ then $P(\frac{1}{y_k})=P(x_k)=0\,$, so $y_k$ are the roots of: $$Q(y)=y^{2n} P\left(\frac{1}{y}\right)=y^{2n}\left(\frac{1}{y^2}+\frac{1}{y}+1\right)^n=(1+y+y^2)^n=y^{2n}+\binom{n}{0,1,n-1}y^{2n-1}+\cdots\,$$</p>
<p>Then by Vieta's relations $\sum_{k=1}^{2n} y_k=-\binom{n}{0,1,n-1}=-n\,$.</p>
<blockquote>
<p>$\sum_{k=1}^{2n} \frac{1}{1+x_k}$</p>
</blockquote>
<p>Let $z_k=x_k+1\,$, then $P(z_k-1)=P(x_k)=0\,$, so $z_k$ are the roots of:</p>
<p>$$\require{cancel}
R(z) = P(z-1)=\left((z-1)^2+(z-\bcancel{1})+\bcancel{1}\right)^n=(z^2-z+1)^n
$$</p>
<p>Then, by an argument entirely similar to the above, it follows that $\sum_{k=1}^{2n} \frac{1}{z_k}=\binom{n}{0,1,n-1}=n\,$.</p>
<blockquote>
<p>find the coefficient of $a_1$ and remainder for $P(x)/(x^2+x)$</p>
</blockquote>
<p>$$
P(x)=\big((x^2+x)+1\big)^n=(x^2+x)^n+\binom{n}{1}(x^2+x)^{n-1}+\cdots+\binom{n}{n-1}(x^2+x) + 1
$$</p>
<p>All terms are multiples of $x^2+x$ except the very last one, so the remainder of the division is $1\,$ which is a constant polynomial so $a_1=0\,$.</p>
|
3,757,972 |
<p>If <span class="math-container">$\frac{ab} {a+b} = y$</span>, where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are greater than zero, why is <span class="math-container">$y$</span> always smaller than the smallest number substituted?</p>
<p>Say <span class="math-container">$a=2$</span> , <span class="math-container">$b=4$</span> (smallest number here is <span class="math-container">$2$</span>. Thus, the answer would be smaller than <span class="math-container">$2$</span>)</p>
<p><span class="math-container">$\frac{2\cdot4}{ 2+4} = 1.\bar 3$</span></p>
<p>I got this equation from physics. It's the formula for total resistance (of resistors in parallel), and the teacher told us not to waste time on it in multiple-choice questions because the answer will always be smaller than the smallest number. But I can't explain to myself in words or by intuition why this happens. Any help?</p>
|
B. Goddard
| 362,009 |
<p>Another way to think about it: Assuming <span class="math-container">$0<a\leq b$</span>, divide the top and bottom of your fraction by <span class="math-container">$b$</span> to get</p>
<p><span class="math-container">$$\frac{a}{\frac{a}{b}+1}.$$</span></p>
<p><span class="math-container">$a$</span> is the smaller number and you're dividing it by a number greater than one, so the result is smaller than <span class="math-container">$a$</span>.</p>
|
757,702 |
<p>I am in my pre-academic year. We recently studied the remainder theorem (at least that's how I think it translates), which states that any polynomial can be written as <span class="math-container">$P = Q\cdot L + R$</span>.</p>
<p>I am unable to solve the following:</p>
<blockquote>
<p>Show that <span class="math-container">$(x + 1)^{(2n + 1)} + x^{(n + 2)}$</span> can be divided by <span class="math-container">$x^2 + x + 1$</span> without remainder.</p>
</blockquote>
|
Lai
| 732,917 |
<p>I am going to prove it using modular arithmetic. Note that <span class="math-container">$$
x^{2}+x+1 \equiv 0\left(\bmod x^{2}+x+1\right) \Rightarrow x+1\equiv -x^2\textrm{ and } x^3 \equiv 1 \left(\bmod x^{2}+x+1\right).
$$</span>
In modulo <span class="math-container">$x^{2}+x+1$</span>, we have
<span class="math-container">$$
\begin{aligned}
(x+1)^{2 n+1}+x^{n+2} \equiv &\left(-x^{2}\right)^{2 n+1}+x^{n+2} \\
\equiv &-x^{4 n+2}+x^{n+2} \\
\equiv & x^{n+2}\left(1-x^{3 n}\right) \\
\equiv & x^{n+2}\left(1-1\right) \\
\equiv & 0
\end{aligned}
$$</span>
Therefore it has no remainder when divided by <span class="math-container">$x^{2}+x+1$</span>.</p>
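<p>A quick machine check for small <span class="math-container">$n$</span> (a sketch, assuming a Mathematica session) confirms that the remainder vanishes:</p>
<pre><code>Table[PolynomialRemainder[(x + 1)^(2 n + 1) + x^(n + 2), x^2 + x + 1, x], {n, 0, 6}]
(* {0, 0, 0, 0, 0, 0, 0} *)
</code></pre>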
|
786,329 |
<p>$$f(x,y)=\begin{cases}
\frac{xy}{x^2+y^2}, \text{ if } x^2+y^2\neq 0
\\
0, \text{ if } x^2+y^2=0
\end{cases}$$</p>
<p>Is it continuous at $(0,0)$?</p>
|
dcs24
| 36,711 |
<p>No. To disprove it, show that sequential continuity doesn't hold: along the sequence $\{ x = 1/n , y = 1/n \}$ the function values are constantly $\tfrac{xy}{x^2+y^2}=\tfrac12$, which does not converge to $f(0,0)=0$ (the value along the constant sequence $\{ x = 0 , y = 0 \}$).</p>
|
11,290 |
<p>I have this code to produce an interactive visualization of a tangent plane to a function:</p>
<pre><code>Clear[f]
f[x_, y_] := x^3 + 2*y^3
Manipulate[
Show[
Plot3D[f[x, y], {x, -1, 1}, {y, -1, 1}, PlotStyle -> Opacity[0.8]],
Plot3D[f[point[[1]], point[[2]]] +
Limit[(f[point[[1]] + h, point[[2]]] - f[point[[1]], point[[2]]])/
h, h -> 0]*(x - point[[1]]) +
Limit[(f[point[[1]], point[[2]] + h] - f[point[[1]], point[[2]]])/
h, h -> 0]*(y - point[[2]]), {x, -1, 1}, {y, -1, 1},
PlotStyle -> Opacity[0.8], MeshStyle -> Gray]],
{{point, {0, 0}}, {-1, -1}, {1, 1}},
SaveDefinitions -> True]
</code></pre>
<p>It is, however, extremely slow. I suspect that the reason is that I unnecessarily compute the partial derivatives over and over again inside the second <code>Plot3D</code>, so my question is: how to change it?</p>
<p>Note: I am using the above code also for other functions as <a href="https://mathematica.stackexchange.com/questions/11278/how-to-find-numerical-value-of-a-derivative-at-point">discussed here</a>, and that is the reason for the <code>Limit</code> in computation of partial derivatives.</p>
|
halirutan
| 187 |
<p>The limits can be calculated analytically one time and then used inside your <code>Manipulate</code>. You can use <code>With</code> to place the final expression where you need them:</p>
<pre><code>f[x_, y_] := x^3 + 2*y^3;
With[{l1 = Limit[(f[px + h, py] - f[px, py])/h, h -> 0],
l2 = Limit[(f[px, py + h] - f[px, py])/h, h -> 0]},
Manipulate[
{px, py} = point;
Show[Plot3D[f[x, y], {x, -1, 1}, {y, -1, 1},
PlotStyle -> Opacity[0.8]],
Plot3D[f[point[[1]], point[[2]]] + l1*(x - point[[1]]) +
l2*(y - point[[2]]), {x, -1, 1}, {y, -1, 1},
PlotStyle -> Opacity[0.8],
MeshStyle -> Gray]], {{point, {0, 0}}, {-1, -1}, {1, 1}},
SaveDefinitions -> True]
]
</code></pre>
|
4,244,983 |
<p>In maths it is assumed that every statement is either true (t) or false (f). When proving theorems such as: For all vector spaces <span class="math-container">$V$</span> over a field <span class="math-container">$K$</span> and <span class="math-container">$\lambda \in K$</span> it holds that <span class="math-container">$f_{\lambda}:V \to V, v \mapsto \lambda v$</span> is a linear map. When proving such a statement I usually did not think of truth values at all, the steps in the proof rather felt "intuitive" and were thus correct to me. However, the formal reason why a proof is correct is always connected to truth values isn't it? This would mean that a formal proof would then look like this:</p>
<p>Let <span class="math-container">$V$</span> be a vector space over <span class="math-container">$K$</span> and <span class="math-container">$\lambda \in K$</span> (this means that <span class="math-container">$V$</span> is a vector space is a true statement, as well as <span class="math-container">$K$</span> is a field and <span class="math-container">$\lambda \in K$</span> are true as well). Since <span class="math-container">$V$</span> is a vector space, it is by definition true that <span class="math-container">$V$</span> is a set. Thus <span class="math-container">$f_{\lambda} \subseteq V \times V$</span> with <span class="math-container">$f_{\lambda}:=\{(v,\lambda v) \ | \ v \in V\}$</span> is a subset (this is true by set theoretic axioms). Furthermore it holds that if <span class="math-container">$v_1=v_2$</span>, then <span class="math-container">$\lambda v_1 = \lambda v_2$</span> since <span class="math-container">$V$</span> is equipped with a well defined multiplication function and by definition of a function this implication is true. Therefore <span class="math-container">$f_{\lambda}$</span> is a function. Since <span class="math-container">$f_{\lambda}$</span> is a function the element <span class="math-container">$f_{\lambda}(v_1+v_2)$</span> exists for all <span class="math-container">$v_1,v_2 \in V$</span> and it is true that <span class="math-container">$f_{\lambda}(v_1+v_2)=f_{\lambda}(v_1)+f_{\lambda}(v_2)$</span> using the axioms of a vector space which were assumed to be true. Thus the statement is true. Here I am also unsure if it is necessary to know that <span class="math-container">$f_{\lambda}(v_1+v_2)$</span> exists in order to proceed in the proof and what harm it would cause if it would not. Then the statement would be somewhat vacuous, I suppose.</p>
<p>In textbooks I don't usually see authors saying that the step in their proof is true, but I am correct in saying that this is what is meant when proving things, right? I suspect that when authors write that "something holds" they also mean that the truth value of the statement is (t).
Another question is, whether it is good to think this way. I would say that, when one is insecure whether ones proof is correct, this might be helpful, however it makes proofs a lot longer and restricts the readability to permanently write down that something is true.</p>
|
Dan
| 852,483 |
<p>Let's take another viewpoint.
Assume that you have a certain mathematical theory, with some axioms (e.g. in the theory of sets you would probably take the ZFC axioms, etc., together of course with the logical axioms, equalitarian axioms and so on). In this setting you have a "statement" of the form "<span class="math-container">$A\Rightarrow B$</span>" which you want to show (in this theory).</p>
<p>Now what you are doing is actually taking a new axiom into your theory (in this case the hypothesis <span class="math-container">$A$</span>). That is to say, you assume <span class="math-container">$A$</span> to be "true".
If, in the new theory you have obtained by adjoining this new axiom <span class="math-container">$A$</span>, <span class="math-container">$B$</span> is a "true statement", then <span class="math-container">$A\Rightarrow B$</span> is a "true statement" in your old theory.</p>
<p>So whenever you have a statement of the form "<span class="math-container">$A\Rightarrow B$</span>" which you want to show is true in your theory, what you are doing is: assuming <span class="math-container">$A$</span> to be "true", working in the new theory in which <span class="math-container">$A$</span> is added as an axiom to your old theory, and showing that <span class="math-container">$B$</span> is "true" in this new theory. Then you throw away your new theory (you do not need it anymore), because in your old theory <span class="math-container">$A\Rightarrow B$</span> is now shown to be "true".
Hope that answers your question somewhat.</p>
|
2,187,571 |
<p>Consider $e^{it}$ with $t$ real. I can rewrite $t$ as $t=2\pi \tau$:</p>
<p>$ e^{i2\pi\tau} = (e^{i2\pi})^\tau = (1)^\tau = 1 $ for arbitrary $\tau$ or $t$. Where is the mistake in this calculation? My guess is that $(e^x)^y = e^{x\cdot y}$ only applies for real $x$ and $y$, but I don't see why this should be the case despite the example I presented. </p>
|
Community
| -1 |
<p>You just (re)discovered that $a^{bc}=(a^b)^c$ may not hold for complex numbers.</p>
<hr>
<p>Actually, $a^b$ is not so well defined. By logarithms,</p>
<p>$$a^b=\exp(b \log a)=\exp(b(\log|a|+i\angle a+i2k\pi))$$ where the term $i2k\pi$ introduces an indeterminacy.</p>
<hr>
<p>Back to your example,</p>
<p>$$\log e^{i2\pi}=i2\pi+i2k\pi$$ and </p>
<p>$$(e^{i2\pi})^\tau=\exp(\tau(i2\pi+i2k\pi)),$$ which includes $1$ (for $k=-1$), but not only.</p>
|
2,495,176 |
<p>For how many positive values of $n$ are both $\frac n3$ and $3n$ four-digit integers?</p>
<p>Any help is greatly appreciated. I think the smallest n value is 3000 and the largest n value is 3333. Does this make sense?</p>
|
Soph
| 232,073 |
<p>The question is asking how many values of n satisfy </p>
<ul>
<li>$n \in \mathbb{N}$</li>
<li>$3n \in [1000, 9999]$ </li>
<li>$\frac n3 \in [1000, 9999]$</li>
</ul>
<p>So find the <em>min</em> value of $n$ satisfying the lower bound:</p>
<p>$$ \frac n3 \ge 1000 $$</p>
<p>And the <em>max</em> value of $n$ satisfying the higher bound:</p>
<p>$$ 3n \le 9999$$</p>
<p>Now only count the values in the range $[3000, 3333]$ where $\frac n3$ is an integer, i.e. the multiples of $3$ (adding 1 because both endpoints are multiples of $3$ and the count is inclusive).</p>
<p>$$ \frac {3333}{3} - \frac {3000}{3} + 1 = 112 $$</p>
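<p>A brute-force check (a sketch in Mathematica; any scripting language would do) agrees with this count:</p>
<pre><code>Length[Select[Range[9999], Divisible[#, 3] && 1000 <= #/3 <= 9999 && 1000 <= 3 # <= 9999 &]]
(* 112 *)
</code></pre>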
|
3,756,932 |
<p>Overall, I think I understand the Monty Hall problem, but there's one particular part that I either don't understand, or I don't agree with, and I'm hoping for an intuitive explanation why I'm wrong or confirmation that I'm right.</p>
<p>The variant that I'm working on is the one with 3 doors. Prize is uniformly located behind 1 of the doors. You choose Door 3. The host opens door 2 and reveals that it's empty. Do you switch to door 1 or stay with your original choice of door 3?</p>
<p>Okay so, if the problem was instead that the host opens either door 1 or door 2 (which ever is empty), then I would switch.</p>
<p>But I think this problem is different because the problem statement states explicitly that the host opens door 2 and reveals that it's empty. So therefore the prize is behind either door 1 or door 3, each with equal probability. So switching is not expected to be advantageous.</p>
<p>The author of the book I am studying from states this as a solution:</p>
<p><em>Suppose you play the game repeatedly and always
choose Door 3. If you look at all the times the host reveals Door 2 empty, you
will find that two-thirds of the time the prize lies behind Door 1, and one-third
of the time it is behind Door 3. Seeing Door 2 empty is thus a stronger signal
that Door 1 has the prize than it is that Door 3 has it. This argument is
more general, of course. Whichever door you choose, seeing the host reveal an
empty door is a signal that you should switch.</em></p>
<p>I disagree with this reasoning because of my intuition above. Am I wrong?</p>
<p>Edit: I agree with the last sentence, which doesn't constrain the host to only opening door 2, but I disagree with the first few sentences where the host is constrained to opening door 2.</p>
<p>Another edit: The original problem statement is (Verbatim):</p>
<p><em>You choose Door 3. He opens Door 2 and reveals that it is empty. You now know that the prize lies behind either Door 3 or Door 1. Should you switch your choice to Door 1?</em></p>
<p>I feel that this problem is equivalent to just eliminating the host and the second door, and asking the question "If the prize is uniformly distributed behind doors 1 and 3, and you choose door 3, should you switch to door 1."</p>
|
YJT
| 731,237 |
<p>The main question is why the host opens door #2. If he opens it because he knows it is empty, then indeed it sends a signal about door #1. But if he opens it because he wants to open door #2 and doesn't know it is empty, then the other two doors have equal probabilities and it doesn't matter.</p>
<p>In the latter case, one-third of the time he would accidentally reveal the prize behind door #2.
The rest of the time, he will open an empty door #2, so the fact that the door is empty doesn't imply anything about the other doors, and the ratio between their prior probabilities remains the same.</p>
<p>I think that this is why Monty Hall is such a difficult question. The wording matters a lot, and small inaccuracies change the problem significantly.</p>
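<p>The difference between the two host behaviours is easy to see in a short simulation (a sketch in Mathematica; <code>simulate</code> is a made-up helper for illustration). Calling <code>simulate[True, n]</code> models a host who deliberately opens an empty door among doors 1 and 2, while <code>simulate[False, n]</code> models a host who always opens door 2 regardless:</p>
<pre><code>simulate[hostKnows_, trials_] := Module[{prize, opened, stay = 0, switch = 0, games = 0},
  Do[
   prize = RandomInteger[{1, 3}]; (* prize uniform over the three doors; you always pick door 3 *)
   opened = If[hostKnows,
     RandomChoice[Complement[{1, 2}, {prize}]], (* host knowingly opens an empty door among 1 and 2 *)
     2];                                        (* host blindly opens door 2 *)
   If[opened == 2 && prize != 2,                (* keep only games where door 2 is shown and is empty *)
    games++;
    If[prize == 3, stay++];    (* staying with door 3 wins *)
    If[prize == 1, switch++]], (* switching to door 1 wins *)
   {trials}];
  N[{stay, switch}/games]]

simulate[True, 100000]  (* roughly {0.33, 0.67}: switching helps *)
simulate[False, 100000] (* roughly {0.5, 0.5}: switching is irrelevant *)
</code></pre>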
|
1,431,969 |
<p>Just like in the title:</p>
<blockquote>
<p>How to compute the CDF of $X\cdot Y$ if $X,Y$ are independent random variables, uniformly distributed over $(-1,1)$?</p>
</blockquote>
<p>I tried using the following formula:
the density of $X\cdot Y$ is the integral of
$f(u/v)\cdot g(v)$, where $f$ is the density of $X$ and $g$ is the density of $Y$, but it doesn't seem to make sense.</p>
|
Subhasish Basak
| 266,148 |
<p>Hint: Try a Jacobian transformation with $u=xy$ and $v=y$, then integrate over $v$ between $(-1,1)$.</p>
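<p>Concretely (a sketch of that hint): with $u=xy$, $v=y$ we have $x=u/v$, $y=v$, and the Jacobian is $\left|\frac{\partial(x,y)}{\partial(u,v)}\right|=\frac{1}{|v|}$, so
$$f_{U,V}(u,v)=f\!\left(\tfrac uv\right)g(v)\,\frac{1}{|v|},\qquad f_{XY}(u)=\int_{-1}^{1}f\!\left(\tfrac uv\right)g(v)\,\frac{dv}{|v|},$$
which is the integral from the question except for the extra factor $\frac{1}{|v|}$; integrating this density then gives the CDF.</p>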
|
220,946 |
<p>I am a newcomer to Mathematica and I am trying to solve this differential equation:</p>
<pre><code>s = NDSolve[{y'[x] == -4/3*y[x]/x + 4 a/(3 x^2) + 4/(9 x^3) a b -
8/(9 x^3) c, y[0.01] == 20}, y, {x, 0.01, 10}]
</code></pre>
<p>I am getting the error message "Encountered non-numerical value for a derivative at x == 0.01" and I don't know what I am doing wrong. Could someone help me?</p>
|
SonerAlbayrak
| 49,198 |
<p>EDIT: The method below only works for upper triangular matrices, I need to modify it to work for all matrices!</p>
<p>One quick way is as follows:</p>
<pre><code>ClearAll[getDiagonal];
getDiagonal[list_List] := getDiagonal[list, {}];
getDiagonal[{}, a_] := a;
getDiagonal[list_List, results_List] := With[{
chosen = DeleteCases[list, {0, ___} | {}]},
Which[Flatten[list] == {},
results,
chosen == {},
getDiagonal[(#[[2 ;;]] & /@ list), Join[results, {0}]],
True,
getDiagonal[(#[[2 ;;]] & /@ DeleteCases[list, chosen[[1]]]),
Join[results, {First[chosen[[1]]]}]]
]
];
</code></pre>
<p>It is a recursive approach: </p>
<ol>
<li>Given a matrix M, start with an empty list (call L)</li>
<li>Look for a row that starts with a nonzero element in M (call that row R)</li>
<li>Add the first element of R to L (add 0 to L if no R is found), call new L L'</li>
<li>Remove R from M and call resultant matrix M' (M'=M if no R is found)</li>
<li>Remove first column of M' (call resultant matrix M'')</li>
<li>If M'' does not have any column left, return L'. Otherwise go to step 1 with M'' as M and L' as L</li>
</ol>
<p>For OP's matrices:</p>
<pre><code>A={{a,0,0,0,0},{0,0,0,d1,0},{0,0,0,0,e1},{0,b,0,d2,0},{0,0,c,0,e2}};
B={{m1m1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0},{0,m1m1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0},{0,0,m1m1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0},{0,0,0,m4m4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,m4m20,m4m21,1,0,0,0,m4m26,m4m27,1,0,0},{0,0,0,0,m5m5,0,0,0,0,0,0,0,0,0,0,0,0,0,m5m19,0,m5m21,0,1,0,m5m25,0,m5m27,0,1,0},{0,0,0,0,0,m6m6,0,0,0,0,0,0,0,0,0,0,0,0,m6m19,m6m20,0,0,0,1,m6m25,m6m26,0,0,0,1},{0,0,0,0,0,0,m7m7,m7m8,m7m9,m7m10,m7m11,m7m12,0,0,0,0,0,0,m7m19,m7m20,m7m21,0,0,0,0,0,0,0,0,0},{0,0,0,0,0,0,m8m7,m8m8,m8m9,m8m10,m8m11,m8m12,0,0,0,0,0,0,m8m19,m8m20,m8m21,0,0,0,0,0,0,0,0,0},{0,0,0,0,0,0,m9m7,m9m8,m9m9,m9m10,m9m11,m9m12,0,0,0,0,0,0,m9m19,m9m20,m9m21,0,0,0,0,0,0,0,0,0},{0,0,0,0,0,0,0,0,0,0,0,0,m10m13,m10m14,m10m15,m10m16,m10m17,m10m18,0,0,0,0,0,0,m10m25,m10m26,m10m27,0,0,0},{0,0,0,0,0,0,0,0,0,0,0,0,m11m13,m11m14,m11m15,m11m16,m11m17,m11m18,0,0,0,0,0,0,m11m25,m11m26,m11m27,0,0,0},{0,0,0,0,0,0,0,0,0,0,0,0,m12m13,m12m14,m12m15,m12m16,m12m17,m12m18,0,0,0,0,0,0,m12m25,m12m26,m12m27,0,0,0},{0,0,0,0,0,0,m13m7,m13m8,m13m9,m13m10,m13m11,m13m12,0,0,0,0,0,0,m13m19,m13m20,m13m21,m7m19,m7m20,m7m21,0,0,0,0,0,0},{0,0,0,0,0,0,m14m7,m14m8,m14m9,m14m10,m14m11,m14m12,0,0,0,0,0,0,m14m19,m14m20,m14m21,m8m19,m8m20,m8m21,0,0,0,0,0,0},{0,0,0,0,0,0,m15m7,m15m8,m15m9,m15m10,m15m11,m15m12,0,0,0,0,0,0,m15m19,m15m20,m15m21,m9m19,m9m20,m9m21,0,0,0,0,0,0},{0,0,0,0,0,0,0,0,0,0,0,0,m16m13,m16m14,m16m15,m16m16,m16m17,m16m18,0,0,0,0,0,0,m16m25,m16m26,m16m27,m10m25,m10m26,m10m27},{0,0,0,0,0,0,0,0,0,0,0,0,m17m13,m17m14,m17m15,m17m16,m17m17,m17m18,0,0,0,0,0,0,m17m25,m17m26,m17m27,m11m25,m11m26,m11m27},{0,0,0,0,0,0,0,0,0,0,0,0,m18m13,m18m14,m18m15,m18m16,m18m17,m18m18,0,0,0,0,0,0,m18m25,m18m26,m18m27,m12m25,m12m26,m12m27},{0,0,0,m7m19,m7m20,m7m21,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,0,0,m8m19,m8m20,m8m21,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,0,0,m9m19,m9m20,m9m21,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,0,0,m10m25,m10m26,m10m27,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,0,0,m11m25,m11m26,m11m27,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,0,0,m12m25,m12m26,m12m27,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0},{1,0,0,0,m5m19,m6m19,m7m19,m8m19,m9m19,m13m19,m14m19,m15m19,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,1,0,m4m20,0,m6m20,m7m20,m8m20,m9m20,m13m20,m14m20,m26m12,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{0,0,1,m4m21,m5m21,0,m7m21,m8m21,m9m21,m13m21,m14m21,m15m21,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},{1,0,0,0,m5m25,m6m25,0,0,0,0,0,0,m10m25,m11m25,m12m25,m16m25,m17m25,m18m25,0,0,0,0,0,0,0,0,0,0,0,0},{0,1,0,m4m26,0,m6m26,0,0,0,0,0,0,m10m26,m11m26,m12m26,m16m26,m17m26,m29m18,0,0,0,0,0,0,0,0,0,0,0,0},{0,0,1,m4m27,m5m27,0,0,0,0,0,0,0,m10m27,m11m27,m12m27,m16m27,m17m27,m18m27,0,0,0,0,0,0,0,0,0,0,0,0}};
</code></pre>
<p>It yields following results:</p>
<pre><code>getDiagonal[A]
</code></pre>
<blockquote>
<p>{a, b, c, d1, e1}</p>
</blockquote>
<pre><code>getDiagonal[B]
</code></pre>
<blockquote>
<p>{m1m1, m1m1, m1m1, m4m4, m5m5, m6m6, m7m7, m8m8, m9m9, m13m10, m14m11, m15m12, m10m13, m11m14, m12m15, m16m16, m17m17, m18m18, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}</p>
</blockquote>
<p>Performance of the code seems to be fine:</p>
<pre><code>{RepeatedTiming[getDiagonal[A]][[1]], RepeatedTiming[getDiagonal[B]][[1]]}
</code></pre>
<blockquote>
<p>{0.000065, 0.0016}</p>
</blockquote>
|
1,621,302 |
<blockquote>
<p>Assume $a,n\in\mathbb{N}$ such that $\gcd{(a,n)}=1$. We say $n$ is prime if $a^{n-1}\equiv 1\mod{n}$ and $a^x\not\equiv1\mod{n}$ for any divisor $x$ of $n-1$.</p>
</blockquote>
<p>I am presented with the following proof but there are jumps within it that I don't understand.</p>
<hr>
<p><em>Proof</em>: Suppose $\gcd{(a,n)} = 1$. Let $d$ be the order of $a$ modulo $n$, i.e. the smallest $x$ such that $a^x\equiv1\mod{n}$. Then $d\mid n-1$. We write $n-1=sd+r$ where $0\le r<d$. Hence $$1\equiv a^{n-1} \equiv a^{sd+r} \equiv a^r\mod{n}$$</p>
<p>Since $d\mid n-1$, we know $d\mid\varphi(n)$. Since $a^{\varphi(n)}\equiv 1\mod{n}$ we note that $d=n-1$. Also $d\mid\varphi(n) \le n-1\implies \varphi(n)=n-1\implies n$ is prime.</p>
<hr>
<p><strong>Questions:</strong></p>
<ol>
<li>How does $d\mid n-1$?</li>
<li>If $d\mid n-1$ why do we write $n-1=sd+r$, surely $r=0$?</li>
<li>What relevance does showing $1\equiv a^r\mod{n}$ have?</li>
<li>How does $d\mid n-1\implies d\mid \varphi(n)$?</li>
<li>How does $a^\varphi(n)\equiv 1\mod{n}\implies d=n-1$?</li>
<li>Can we always say that $\varphi(n)\le n-1$?</li>
<li>Is it true to say if $\varphi(n)=n-1$ then $n$ is prime?</li>
</ol>
|
Community
| -1 |
<ul>
<li><p>For <span class="math-container">$1,2,3$</span> :</p>
<p>Let <span class="math-container">$n-1=sd+r$</span> where <span class="math-container">$0 \leq r <d$</span> .Then as you noted it follows that :</p>
</li>
</ul>
<p><span class="math-container">$$1 \equiv a^{n-1} \equiv \left (a^d \right )^s \cdot a^r \equiv a^r \pmod{n}$$</span> and so :</p>
<p><span class="math-container">$$a^r \equiv 1 \pmod{n}$$</span> But <span class="math-container">$d$</span> is the <em>smallest</em> non-zero number such that :</p>
<p><span class="math-container">$$a^d \equiv 1 \pmod{n}$$</span> so because <span class="math-container">$r<d$</span> it must follow that <span class="math-container">$r=0$</span> and then <span class="math-container">$n-1=sd$</span> or <span class="math-container">$d \mid n-1$</span> .</p>
<ul>
<li>For <span class="math-container">$4$</span> :</li>
</ul>
<p>It's known from Euler's theorem that <span class="math-container">$$a^{\phi(n)} \equiv 1 \pmod{n}$$</span> so now applying the same argument as above (with <span class="math-container">$\phi(n)=sd+r$</span> ) it follows that also <span class="math-container">$d \mid \phi(n)$</span> .</p>
<ul>
<li>For <span class="math-container">$5$</span> :</li>
</ul>
<p>You know that <span class="math-container">$a^x \not\equiv 1 \pmod{n}$</span> for <span class="math-container">$x$</span> a divisor of <span class="math-container">$n-1$</span> (except <span class="math-container">$x=n-1$</span> of course ) .</p>
<p>But <span class="math-container">$a^d \equiv 1 \pmod{n}$</span> and <span class="math-container">$d \mid n-1$</span> and so it must follow that <span class="math-container">$d=n-1$</span> and because <span class="math-container">$ d \mid \phi(n)$</span> it follows that <span class="math-container">$n-1 \mid \phi(n)$</span> .</p>
<ul>
<li>For <span class="math-container">$6,7$</span> :</li>
</ul>
<p>Do you know the definition of <span class="math-container">$\phi(n)$</span> ? :</p>
<blockquote>
<p><span class="math-container">$\phi(n)$</span> counts the positive integers that are less than <span class="math-container">$n$</span> and coprime to <span class="math-container">$n$</span>.</p>
</blockquote>
<p>There are exactly <span class="math-container">$n-1$</span> positive integers less than <span class="math-container">$n$</span>, so <span class="math-container">$$\phi(n) \leq n-1$$</span> is a direct consequence of the definition.</p>
<p>Equality holds exactly when every positive integer less than <span class="math-container">$n$</span> is coprime to <span class="math-container">$n$</span>, and this can only happen when <span class="math-container">$n$</span> is prime:</p>
<p>If <span class="math-container">$q \mid n$</span> for some prime <span class="math-container">$q$</span> with <span class="math-container">$q<n$</span>, then <span class="math-container">$(n,q)=q \neq 1$</span>, so <span class="math-container">$q$</span> is not counted and <span class="math-container">$\phi(n)<n-1$</span>. Hence <span class="math-container">$\phi(n)=n-1$</span> forces every prime divisor of <span class="math-container">$n$</span> to equal <span class="math-container">$n$</span> itself, i.e. <span class="math-container">$n$</span> is prime.</p>
|
1,573,341 |
<blockquote>
<p>Prove that the roots of the equation $z^n=(z+3)^n$ are collinear</p>
</blockquote>
<p>Taking modulus on both sides, $$\vert z-0\vert =\vert z-3\vert$$</p>
<p>This represents the perpendicular bisector of line joining $x=0$ and $x=3$</p>
<p>That was easy. But I tried to solve it using algebra:</p>
<p>$$\frac{z+3}{z}=1^{\frac{1}{n}}=\cos{\frac{2k\pi}{n}}+i\sin{\frac{2k\pi}{n}}$$</p>
<p>After simplifying, I got $$z=\frac{3i ( \cos{\frac{k\pi}{n} }+i\sin{ \frac{k\pi}{n}})}{2\sin{ \frac{k\pi}{n}}}$$
What should I do next?</p>
|
DeepSea
| 101,504 |
<p><strong>Hint</strong>: all the roots have the same real part, namely $-3/2$. Indeed, from your expression, $$z=\frac{3i\left(\cos\frac{k\pi}{n}+i\sin\frac{k\pi}{n}\right)}{2\sin\frac{k\pi}{n}}=-\frac32+\frac{3\cos\frac{k\pi}{n}}{2\sin\frac{k\pi}{n}}\,i,$$ so every root lies on the vertical line $\operatorname{Re}z=-\tfrac32$, and hence the roots are collinear.</p>
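<p>A quick numerical check of this (my addition; the choice $n=7$ is arbitrary):</p>
<pre><code>n = 7;
roots = z /. NSolve[z^n == (z + 3)^n, z];
Re[roots]
(* all six real parts equal -1.5, so the roots lie on the line Re(z) = -3/2 *)
</code></pre>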
|
4,010,160 |
<h2>The Problem</h2>
<p>Let <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> be continuous i.i.d. random variables. By definition, we know that...</p>
<ol>
<li>Independence: <span class="math-container">$\mathbb P(X_1 \in A \ \cap \ X_2 \in B)=\mathbb P(X_1 \in A) \ \mathbb P(X_2 \in B)$</span></li>
<li>Identically Distributed: <span class="math-container">$\mathbb P(X_1 \le x) = \mathbb P(X_2 \le x) \ \ \forall \ \ x \in \mathbb R^1$</span></li>
</ol>
<p>My hypothesis is that:
<span class="math-container">$$\mathbb P(X_1 > X_2) = 0.5$$</span>
I have attempted to prove this by using the above definitions to form the expression:
<span class="math-container">$$\mathbb P(X_1 \ge x \ \cap \ X_2 < x)=\mathbb P(X_2 \ge x \ \cap \ X_1 < x) \ \ \forall \ \ x \in \mathbb R^1$$</span></p>
<h2>My Questions</h2>
<ol>
<li>Is this a valid proof, and am I missing anything?</li>
<li>This seems like a fundamental property of i.i.d. random variables. Is this already a named theorem?</li>
</ol>
|
zero2infinity
| 862,690 |
<p>No. Suppose <span class="math-container">$A=[-1,1], X=\mathbb{R}, B=(-2,2)$</span> and <span class="math-container">$Y=[-3,3]$</span>. Then this gives a counterexample.</p>
|
4,010,160 |
<h2>The Problem</h2>
<p>Let <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> be continuous i.i.d. random variables. By definition, we know that...</p>
<ol>
<li>Independence: <span class="math-container">$\mathbb P(X_1 \in A \ \cap \ X_2 \in B)=\mathbb P(X_1 \in A) \ \mathbb P(X_2 \in B)$</span></li>
<li>Identically Distributed: <span class="math-container">$\mathbb P(X_1 \le x) = \mathbb P(X_2 \le x) \ \ \forall \ \ x \in \mathbb R^1$</span></li>
</ol>
<p>My hypothesis is that:
<span class="math-container">$$\mathbb P(X_1 > X_2) = 0.5$$</span>
I have attempted to prove this by using the above definitions to form the expression:
<span class="math-container">$$\mathbb P(X_1 \ge x \ \cap \ X_2 < x)=\mathbb P(X_2 \ge x \ \cap \ X_1 < x) \ \ \forall \ \ x \in \mathbb R^1$$</span></p>
<h2>My Questions</h2>
<ol>
<li>Is this a valid proof, and am I missing anything?</li>
<li>This seems like a fundamental property of i.i.d. random variables. Is this already a named theorem?</li>
</ol>
|
Noah Schweber
| 28,111 |
<p>The topological version of Schroeder-Bernstein <strong>fails badly</strong>. For example, consider <span class="math-container">$(0,1)$</span> and <span class="math-container">$[0,1]$</span> with their usual topologies, which are of course non-homeomorphic:</p>
<ul>
<li><p><span class="math-container">$(0,1)$</span> is homeomorphic to itself, which is a subspace of <span class="math-container">$[0,1]$</span>.</p>
</li>
<li><p><span class="math-container">$[0,1]$</span> is homeomorphic to <span class="math-container">$[{1\over 3}, {1\over 2}]$</span>, which is a subspace of <span class="math-container">$(0,1)$</span>.</p>
</li>
</ul>
<p>In general, the question of when Schroeder-Bernstein holds for a given category of mathematical objects is a quite interesting one; see e.g. <a href="https://mathoverflow.net/questions/1058/when-does-cantor-bernstein-hold">this MathOverflow question</a>. For example, while the counterexample above shows that Schroeder-Bernstein fails for metric spaces, it in fact <em>holds</em> for <a href="https://mathoverflow.net/a/73332/8133">compact metric spaces</a> (and while <span class="math-container">$[0,1]$</span> is compact, <span class="math-container">$(0,1)$</span> isn't).</p>
|
4,010,160 |
<h2>The Problem</h2>
<p>Let <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> be continuous i.i.d. random variables. By definition, we know that...</p>
<ol>
<li>Independence: <span class="math-container">$\mathbb P(X_1 \in A \ \cap \ X_2 \in B)=\mathbb P(X_1 \in A) \ \mathbb P(X_2 \in B)$</span></li>
<li>Identically Distributed: <span class="math-container">$\mathbb P(X_1 \le x) = \mathbb P(X_2 \le x) \ \ \forall \ \ x \in \mathbb R^1$</span></li>
</ol>
<p>My hypothesis is that:
<span class="math-container">$$\mathbb P(X_1 > X_2) = 0.5$$</span>
I have attempted to prove this by using the above definitions to form the expression:
<span class="math-container">$$\mathbb P(X_1 \ge x \ \cap \ X_2 < x)=\mathbb P(X_2 \ge x \ \cap \ X_1 < x) \ \ \forall \ \ x \in \mathbb R^1$$</span></p>
<h2>My Questions</h2>
<ol>
<li>Is this a valid proof, and am I missing anything?</li>
<li>This seems like a fundamental property of i.i.d. random variables. Is this already a named theorem?</li>
</ol>
|
SUNIL PASUPULATI
| 594,846 |
<p>For example, consider <span class="math-container">$X=[0,1]$</span> and <span class="math-container">$Y=(0,1)$</span>. <span class="math-container">$X$</span> is not homeomorphic to <span class="math-container">$Y$</span>, but <span class="math-container">$X$</span> is homeomorphic to <span class="math-container">$[1/4,3/4] \subset Y=(0,1)$</span>, and <span class="math-container">$Y$</span> is homeomorphic to <span class="math-container">$(0,1)\subset X$</span>.</p>
|
132,053 |
<p>I have an expression $e^{x-a}(e^{bx}+c)$. I just want it multiplied throughout to get $e^{(b+1)x-a}+ce^{x-a}$. To achieve this I have tried <code>Expand</code>, <code>Simplify</code>, <code>Collect</code> (using pattern <code>Exp[q_*x]</code>). The best output I could get is</p>
<pre><code>Expand[Exp[x-a]*(Exp[b*x]+c)]
(* Exp[-a+x+b*x]+c*Exp[-a+x] *)
</code></pre>
<p>I have tried <code>Evaluate</code> after this step to obtain proper exponent, but to no avail. How do I make it come with $e^{(b+1)x-a}$ in the first term?</p>
|
BoLe
| 6,555 |
<p><code>MapAt</code> with specified position:</p>
<pre><code>tmp = Exp[x - a] (Exp[b x] + c) // Expand;
pos = Position[tmp, p_Plus /; ! FreeQ[p, x]];
MapAt[Collect[#, x] &, tmp, pos]
</code></pre>
<blockquote>
<p><code>c E^(-a + x) + E^(-a + (1 + b) x)</code></p>
</blockquote>
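<p>An alternative (my own sketch, not part of the answer above) is to leave <code>Expand</code> alone and simply apply <code>Collect</code> inside each exponent with a replacement rule:</p>
<pre><code>Expand[Exp[x - a] (Exp[b x] + c)] /. E^p_ :> E^Collect[p, x]
(* gives c E^(-a + x) + E^(-a + (1 + b) x) *)
</code></pre>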
|
2,145,098 |
<p>This was part of a three part question where I was supposed to prove two sets have equal cardinality by finding bijections. I've created a bijection $f: \Bbb Z \to 2\Bbb Z$ by $f(x)=2x$. I've created a bijection $g: (0,1) \to (4,50)$ by $g(x)=46x+4$. I think those are both correct. My last question is finding a bijection between (0,1) and [0,1]. I've seen this question several times on this board, but I've yet to understand them and I can't really go back and ask any questions to the original posters. I know a bijective function exists between (0,1) and $\Bbb R$, but I don't think that helps me here. </p>
|
Forever Mozart
| 21,137 |
<p>Do you know that the rationals are countable? Let </p>
<p>$\{q_i:i\in\omega\}$ enumerate $\mathbb Q\cap (0,1)$, and </p>
<p>$\{p_i:i\in\omega\}$ enumerate $\mathbb Q \cap [0,1]$.</p>
<p>Map $q_i\to p_i$ and let every irrational number map to itself.</p>
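<p>If you prefer a completely explicit formula in the same spirit (my own variant of this back-shifting idea, using only the single sequence $1/n$ instead of an enumeration of all rationals): define $h\colon(0,1)\to[0,1]$ by
$$h(x)=\begin{cases}0, & x=\tfrac12,\\ 1, & x=\tfrac13,\\ \tfrac1{n-2}, & x=\tfrac1n,\ n\ge 4,\\ x, & \text{otherwise}.\end{cases}$$
Then $h$ is a bijection from $(0,1)$ onto $[0,1]$.</p>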
|
46,726 |
<p>In many proofs I see that some variable is "fixed" and/or "arbitrary". Sometimes I see only one of them and I miss a clear guideline for it. Could somebody point me to a reliable source (best a well-known standard book) which explains, when and how to use both in proofs?</p>
<p>EDIT: A little add-on to the question: Take a usual induction on natural numbers and assume that you are teaching it to students in their first semester. How do you explain "fixed" and "arbitrary" in this scenario?</p>
|
Eric Naslund
| 6,075 |
<p>Rather than arbitrary and fixed, let's look at something very similar where the difference is more pronounced. There is a very important distinction between <strong>uniform</strong> and <strong>fixed</strong>. A perfect example is convergence of functions.</p>
<p>Let <span class="math-container">$f_n$</span> be a sequence of functions on a set <span class="math-container">$X$</span>. For each point <span class="math-container">$x\in X$</span>, let <span class="math-container">$f$</span> be defined by <span class="math-container">$f(x)=\lim_{n\rightarrow \infty} f_n(x)$</span>. That is, <span class="math-container">$f$</span> is the <a href="http://en.wikipedia.org/wiki/Pointwise_convergence" rel="nofollow noreferrer">pointwise</a> limit, and it is the limit for <strong>fixed</strong> <span class="math-container">$x$</span>.</p>
<p>However, <a href="http://en.wikipedia.org/wiki/Uniform_convergence" rel="nofollow noreferrer">uniform</a> convergence is much stronger, and requires that the limit is small for all <span class="math-container">$x$</span> simultaneously.</p>
<p>Very often things come up which are easy to prove for fixed <span class="math-container">$x$</span>, but very difficult when we need uniformity in <span class="math-container">$x$</span>.</p>
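<p>A standard illustration (my addition, not part of the original answer): take <span class="math-container">$f_n(x)=x^n$</span> on <span class="math-container">$[0,1)$</span>. For each <strong>fixed</strong> <span class="math-container">$x\in[0,1)$</span> we have <span class="math-container">$f_n(x)\to 0$</span>, so the pointwise limit is <span class="math-container">$f\equiv 0$</span>; but
<span class="math-container">$$\sup_{x\in[0,1)}|f_n(x)-0|=1\quad\text{for every }n,$$</span>
so the convergence is not <strong>uniform</strong>: how large <span class="math-container">$n$</span> must be for <span class="math-container">$x^n$</span> to be small depends on which <span class="math-container">$x$</span> was fixed.</p>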
<p>Hope that helps,</p>
|
1,651,452 |
<p>I am trying to solve a modular arithmetic system and I got to the point where I have to solve $22t \equiv 9 \pmod{7}$ for $t$. I researched on the internet and found that there are many ways to solve this, such as using linear diophantine equations, the euclidean algorithm or using inverses.</p>
<p>Can someone show me, using whichever method they choose, how to solve for $t$.</p>
|
John Frederick Chionglo
| 190,989 |
<p>Since $22\equiv 1\pmod 7$ and $9\equiv 2\pmod 7$, the congruence reduces to $t\equiv 2\pmod 7$. So one family of solutions is $t = 2 - 7y$ (equivalently $t = 2 + 7k$), where $y$ ranges over the integers.
<a href="https://i.stack.imgur.com/FH2In.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FH2In.jpg" alt="enter image description here"></a></p>
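<p>A quick machine check of this (my addition; shown in Wolfram Language, but any CAS with modular arithmetic will do):</p>
<pre><code>Solve[22 t == 9, t, Modulus -> 7]
(* {{t -> 2}}, i.e. t = 2 mod 7 *)
</code></pre>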
<h2>Notes (Chionglo, 2015)</h2>
<ol>
<li>A linear system of congruences is equivalent to a linear system of equations where the coefficients and variables are elements of the set of Integers (Z). It is possible to solve a linear system of equations in Z by symbolic manipulation using the “generally accepted definitions, laws and theorems” for the set of Integers. The method of substitution is used in two ways in the manipulations. For the first way, substitution is used to add parameter variables and corresponding equations. For the second way, substitution is used to eliminate variables from existing equations. By judiciously applying the method of substitution in these two ways, a basis can be found where every original variable is expressed in terms of parameter variables.</li>
<li>A guiding principle in applying the method of substitution is to express the original variables in terms of parameter variables.</li>
<li>Based on the symbolic manipulations, it is possible to work with an augmented matrix instead of directly manipulating the symbols; in other words, perform matrix operations on the augmented matrix. This reduces the amount work and the time needed to find a basis. </li>
</ol>
<h2>References</h2>
<p>Chionglo, J. F. (2016). A Reply to "Solve 22t == 9 (mod 7) at Mathematics Stack Exchange". Available at <a href="http://www.aespen.ca/AEnswers/1455258289.pdf" rel="nofollow noreferrer">http://www.aespen.ca/AEnswers/1455258289.pdf</a>.</p>
<p>Chionglo, J. F. (2015). A Reply to “Find Solutions to Linear [Congruencies]”. Available at http://www.aespen.ca/AEnswers/Reply$1447217299.pdf.</p>
<p>“Solve “ (2016). Mathematics Stack Exchange. Retrieved on Feb. 11, 2016 at <a href="https://math.stackexchange.com/questions/1651452/solve-22t-equiv-9-pmod7">Solve $22t \equiv 9 \pmod{7}$</a>. </p>
|
1,518,258 |
<p>I haven't been able to come up with a counterexample so far. </p>
|
AnotherPerson
| 185,237 |
<p>$a=b=2$. $a$ divides $2!$, $b$ divides $3!$, but their product $4$ does not divide $3!$</p>
|
376,530 |
<p>Let <span class="math-container">$G$</span> be a graph which does not contain a simple cycle <span class="math-container">$v_1\ldots v_k$</span> and two "crossing" chords <span class="math-container">$v_iv_j$</span> and <span class="math-container">$v_pv_q$</span>, <span class="math-container">$i<p<j<q$</span>. An example of such graph is a triangulation of the convex polygon. Is it true that the number of edges in <span class="math-container">$G$</span> does not exceed <span class="math-container">$2n-3$</span>, where <span class="math-container">$n$</span> denotes the number of vertices?</p>
|
David Wood
| 25,980 |
<p>Assume <span class="math-container">$n=2k+2$</span> is even. Let <span class="math-container">$G$</span> be the graph obtained from a matching <span class="math-container">$v_1w_1,\dots,v_kw_k$</span> by adding two vertices <span class="math-container">$x$</span> and <span class="math-container">$y$</span> both adjacent to all of
<span class="math-container">$v_1,w_1,\dots,v_k,w_k$</span>. So <span class="math-container">$G$</span> has <span class="math-container">$n$</span> vertices and <span class="math-container">$5k=\frac{5}{2}(n-2)$</span> edges. Consider a cycle <span class="math-container">$C$</span> in <span class="math-container">$G$</span>. If <span class="math-container">$|C|=3$</span> then <span class="math-container">$C$</span> has no chords. Otherwise <span class="math-container">$|C|=4$</span> and both <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are in <span class="math-container">$C$</span>. If <span class="math-container">$C=(x,v_i,y,w_i)$</span> for some <span class="math-container">$i\in\{1,\dots,k\}$</span>, then <span class="math-container">$C$</span> has only one chord (namely <span class="math-container">$v_iw_i$</span>). Otherwise <span class="math-container">$C$</span> has no chords. So <span class="math-container">$G$</span> satisfies a stronger property (every cycle has at most one chord).</p>
|
2,168,554 |
<p>Assuming you are playing roulette.</p>
<p>The probabilities to win or to lose are:</p>
<p>\begin{align}
P(X=\mathrm{win})&=\frac{18}{37}\\
P(X=\mathrm{lose})&=\frac{19}{37}
\end{align}</p>
<p>Initially 1$ is used. Everytime you lose, you double up the stake. If you win once, you stop playing. If required you play forever.</p>
<p>We can calculate two expectations:</p>
<p>Win somewhen:</p>
<p>$E[X_{win}]=\lim_{n\to\infty}1-(\frac{19}{37})^n=1$</p>
<p>The expected payoff:</p>
<p>$E[X_{payoff}]=\lim_{n\to\infty}\left(p^n(-(2^n-1))+(1-p^n)\right)=1-(\frac{38}{37})^n=-\infty$</p>
<p>This result confuses me: We have the probability of 1 to win eventually, but the expected payoff is $-\infty$. Whats wrong here?</p>
<p>Thank you</p>
|
Breh
| 458,959 |
<p>I wrote this assuming the payoff is double the bet, as in Roulette when betting on spaces with these odds. So a \$1 bet yields \$2, or a \$1 net. When repeated, one is starting with a \$1 loss, and doubles their bet to \$2. If they win, they receive \$4, but have invested \$3 in the series, meaning the net gain is \$1, the original bet.</p>
<p>The EV of any bet in this game with these parameters is:</p>
<p>$(bet * P(win)) + (-bet * P(lose))$</p>
<p>So $(1 * (18/37)) + (-1 * (19/37)) = -0.027$</p>
<p>You can expect to lose 2.7 cents per $1 bet, or 2.7% of whatever money you put down. This holds true for a bet of any amount. Not by coincidence, this is precisely the house edge in European Roulette. The house edge is simply the EV of whatever game you are playing.</p>
<p>So if you were to repeat this process an infinite number of times, betting double your previous bet, with an EV of -2.7%, surely you can see why you would lose in the infinite case.</p>
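<p>To put numbers on that (a sketch I am adding here, which caps the martingale at n rounds and uses the expected-payoff formula from the question):</p>
<pre><code>With[{p = 19/37},
 Table[{n, N[(1 - p^n) - p^n (2^n - 1)]}, {n, {1, 5, 10, 20, 40}}]]
(* the expected payoff equals 1 - (38/37)^n, which keeps dropping without bound as n grows *)
</code></pre>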
<p>I honestly have no idea what all your math stuff is doing (not because you're wrong, just because my own understanding isn't there), but I have a pretty good understanding of infinity. When you calculated your "win somewhen" equation, you could conversely do the same thing for "lose somewhen" and would come to the exact same answer. Weird stuff happens with infinity and I think what you've come across is a situation where you have a 100% chance of winning and losing, depending on what you're looking at.</p>
<p>In simple terms, if infinity is the length of time in which all that can happen, will happen, then there is a probability of 1 that any possible event will occur in that time frame. Since during infinitely many iterations of the game, everything that can happen, will happen, you have 100% chance of winning and losing during that time. This, however, does not change your expected value. It will always be -2.7% of your bet. You can always add to infinity, but the resulting "value" will also be infinite.</p>
<p>$\infty$ * -.027 = -$\infty$</p>
<p>P.S. this is my first post on StackExchange, so let me know if I broke any rules or whatever. Also, criticism of my reasoning is highly encouraged.</p>
|
611,198 |
<p>A corollary to the Intermediate Value Theorem is that if $f(x)$ is a continuous real-valued function on an interval $I$, then the set $f(I)$ is also an interval or a single point.</p>
<p>Is the converse true? Suppose $f(x)$ is defined on an interval $I$ and that $f(I)$ is an interval. Is $f(x)$ continuous on $I$? </p>
<p>Would the answer change if $f(x)$ is one-to-one?</p>
|
Euler....IS_ALIVE
| 38,265 |
<p>The conclusion is not true even if $f$ is one to one. Consider the function on $[-1,1]$ defined by $f(x) = x$ for $x \not=-1,0,1 $, and $f(-1) = 0, f(0) = 1, f(1)=-1$. Then this function is not continuous, but is one to one and the range $f(I)$ is the interval $[-1,1]$.</p>
|
3,555,687 |
<blockquote>
<p>There are 3 straight lines and two circles on a plane. They divide the plane into regions. Find the greatest possible number of regions.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/YbUAU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YbUAU.png" alt="enter image description here"></a></p>
<p>This is the best that I could come up with an I don't really know how to get even more regions. Using some mental calculations, I figured that I could edit the picture and make it into 20 regions. However, the answer key gave me 21 as the answer. Is there a way to find out the number of regions on a plane without using a calculator? Is there a formula for it? Please help.</p>
|
WimC
| 25,313 |
<p>The number <span class="math-container">$21$</span> for a general configuration with the maximal number of intersections follows from Euler’s formula, applied on the sphere obtained by adding one point at infinity through which all three lines pass. There are <span class="math-container">$18$</span> vertices (<span class="math-container">$3$</span> from line/line, <span class="math-container">$12$</span> from line/circle, <span class="math-container">$2$</span> from circle/circle intersections, and <span class="math-container">$1$</span> at infinity). There are <span class="math-container">$37$</span> edges: each line passes through <span class="math-container">$7$</span> vertices (<span class="math-container">$2$</span> on the other lines, <span class="math-container">$4$</span> on the circles, <span class="math-container">$1$</span> at infinity), giving <span class="math-container">$21$</span> edges from the lines, and each circle passes through <span class="math-container">$8$</span> vertices, giving <span class="math-container">$16$</span> edges from the circles. So <span class="math-container">$18 - 37 + A=2$</span> and <span class="math-container">$A=21$</span>.</p>
|
29,443 |
<p>I'm looking at a <a href="http://en.wikipedia.org/wiki/Partition_function_%28statistical_mechanics%29#Relation_to_thermodynamic_variables" rel="nofollow noreferrer">specific</a> derivation on wikipedia relevant to statistical mechanics and I don't understand a step.</p>
<p>$$ Z = \sum_s{e^{-\beta E_s}} $$</p>
<p>$Z$ (the partition function) encodes information about a physical system. $E_s$ is the energy of a particular system state. $Z$ is found by summing over all possible system states.</p>
<p>The expected value of $E$ is found to be:</p>
<p>$$ \langle E \rangle = -\frac{\partial \ln Z}{\partial \beta} $$</p>
<p>Why is the variance of $E$ simply defined as:</p>
<p>$$ \langle(E - \langle E\rangle)^2\rangle = \frac{\partial^2 \ln Z}{\partial \beta^2} $$</p>
<p>just a partial derivative of the mean. </p>
<p>What about this problem links the variance and mean in this way?</p>
|
Ehsan M. Kermani
| 8,346 |
<p>Have you seen the definitions of variance and expected value on Wikipedia, <a href="http://en.wikipedia.org/wiki/Variance" rel="nofollow">here</a>?</p>
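<p>For completeness, here is the standard computation the question is asking about (my addition, using only the definitions above). Since $\partial Z/\partial\beta=-\sum_s E_s e^{-\beta E_s}$,
$$-\frac{\partial \ln Z}{\partial \beta}=\frac{1}{Z}\sum_s E_s e^{-\beta E_s}=\langle E\rangle,$$
and differentiating once more,
$$\frac{\partial^2 \ln Z}{\partial \beta^2}=\frac{1}{Z}\sum_s E_s^2 e^{-\beta E_s}-\left(\frac{1}{Z}\sum_s E_s e^{-\beta E_s}\right)^2=\langle E^2\rangle-\langle E\rangle^2=\langle(E-\langle E\rangle)^2\rangle,$$
which is exactly the variance. So the variance is not "defined as" that second derivative; it falls out of the same differentiation of $\ln Z$ that produced the mean.</p>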
|
348,674 |
<p>I am a beginner at matrices and I am trying to find out whether or not the linear transformation defined by the matrix $A$ is onto, and also whether it is 1-1.</p>
<p>Here is the matrix $A$: </p>
<p>$$\begin{bmatrix}
1 & 3 & 4 & -1 & 2\\
2 & 6 & 6 & 0 & -3\\
3 & 9 & 3 & 6 & -3\\
3 & 9 & 0 & 9 & 0
\end{bmatrix}$$</p>
<p>I reduced it to row echelon form but I am not sure what to do from there, thank you for any help.</p>
|
DonAntonio
| 31,254 |
<p>Seen as an operator, your matrix is a map from $\,5-$dimensional space to a $\,4-$dimensional one, so $\;1-1\;$ it certainly cannot be. </p>
<p>Now, if its rank is $\,4\,$ then it is onto (why?), so you really have to reduce it and check this...</p>
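<p>Once you have row reduced by hand, you can confirm the rank with a one-liner (my addition, in Mathematica):</p>
<pre><code>A = {{1, 3, 4, -1, 2}, {2, 6, 6, 0, -3},
     {3, 9, 3, 6, -3}, {3, 9, 0, 9, 0}};
MatrixRank[A]
(* 3, so the rank is less than 4 and the map is neither one-to-one nor onto *)
</code></pre>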
|
3,902,501 |
<p>Over <span class="math-container">$\mathbb R$</span>, the only linear maps are those of the form <span class="math-container">$ax$</span>.</p>
<p>If we discuss rational functions over <span class="math-container">$\mathbb R$</span>, this extra structure would allow us to describe a wider variety of linear maps.</p>
<p>But the obvious maps such as limits, differentiation, summation, doing <span class="math-container">$f(x)\mapsto f(x+k)$</span> seem too easy. Is there a linear map which specifically takes advantage of the fact that we have a rational function?</p>
<p>For instance, something like "double the coefficient of <span class="math-container">$x$</span> of the numerator and triple that of <span class="math-container">$x$</span> in the denominator". I know this is not a good example because it isn't linear, but it illustrates what I mean by "taking advantage of the fact that we have a rational function", i.e., something which doesn't generalise easily to a wider class of functions (say <span class="math-container">$C^1$</span>).</p>
|
Ethan
| 207,264 |
<p>Using partial fractions and the fact that <span class="math-container">$\mathbb{R}[X]$</span> is a PID, it's possible to write every element of <span class="math-container">$\mathbb{R}(X)$</span> <em>uniquely</em> as a sum <span class="math-container">$q+\sum_{p,n} \frac{f_{p,n}}{p^n}$</span>, where <span class="math-container">$q\in\mathbb{R}[X]$</span> is a polynomial, <span class="math-container">$p$</span> ranges over the (monic) primes of <span class="math-container">$\mathbb{R}[X]$</span>, <span class="math-container">$n\in\mathbb{N}$</span>, and <span class="math-container">$f_{p,n}$</span> has smaller degree than <span class="math-container">$p$</span>.</p>
<p>This expresses <span class="math-container">$\mathbb{R}(X)$</span> as an infinite direct sum of finite dimensional vector spaces (the spans of the monomials <span class="math-container">$X^k$</span>, together with pieces of dimension <span class="math-container">$1$</span> for the linear primes and dimension <span class="math-container">$2$</span> for the indecomposable quadratics), so we can apply any linear maps we like independently to each piece of the decomposition. Most of these maps will be strange and have no obvious generalization to arbitrary continuous functions.</p>
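<p>A concrete instance (my addition, just to make "strange but linear" tangible): define $T$ on the decomposition by $T(q)=q$ on the polynomial part and
$$T\!\left(\frac{f_{p,n}}{p^n}\right)=n\cdot\frac{f_{p,n}}{p^n},$$
extended linearly. This is a perfectly good $\mathbb{R}$-linear map on $\mathbb{R}(X)$ that rescales each partial-fraction "depth" differently, and it has no obvious extension to, say, $C^1$ functions.</p>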
|
3,097,816 |
<p>Show that the product of four consecutive odd integers is 16 less than a square.</p>
<p>For the first part I first did
<span class="math-container">$n=p(p+2)(p+4)(p+6)
=(p^2+6p)(p^2+6p+8).$</span>
I know that you are supposed to rearrange this to give an equation in the form <span class="math-container">$(ap^2+bp+c)^2$</span>, but I'm not sure how to. Also, once we get to that point, how do we prove it has to be odd numbers? </p>
<p>NOTE: I do not know how to edit Math LaTeX so I was hoping someone could edit my post for me. Thanks </p>
|
Keith Backman
| 29,783 |
<p>The trick is to start with the middle number, call it <span class="math-container">$k$</span>. Then the four consecutive odd integers are <span class="math-container">$k-3,k-1,k+1,k+3$</span>.</p>
<p>Multiply the outer two: <span class="math-container">$(k-3)(k+3)=k^2-9$</span></p>
<p>Multiply the inner two: <span class="math-container">$(k-1)(k+1)=k^2-1$</span></p>
<p>Multiply the two products: <span class="math-container">$(k^2-9)(k^2-1)=k^4-10k^2+9$</span></p>
<p>Now add <span class="math-container">$16$</span> to get <span class="math-container">$k^4-10k^2+25=(k^2-5)^2$</span></p>
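<p>A one-line symbolic check of this identity (my addition, in Mathematica):</p>
<pre><code>Expand[(k - 3) (k - 1) (k + 1) (k + 3) + 16 - (k^2 - 5)^2]
(* 0, so the product of the four numbers is always 16 less than (k^2 - 5)^2 *)
</code></pre>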
|
1,536,904 |
<blockquote>
<p>A father sends his $3$ sons to sell watermelons. The first son took $10$
watermelons, the second $20$ and the third $30$ watermelons. The father
ordered them to sell all the watermelons at the same price and to each
collect the same amount of money. How is this possible?</p>
</blockquote>
<p>Any ideas for this puzzle?</p>
|
mvw
| 86,776 |
<p>We can model this as:
$$
10 p = 20 p = 30 p \Rightarrow
$$
Subtracting $10 p$ from both sides of the first equation gives
$$
0 = 10 p \Rightarrow \\
p = 0
$$</p>
|
47,495 |
<p>Firstly, let me divulge. I've been doing a lot of research on the summation of two coprime numbers and unfortunately have failed to come up with the properties I'm seeking; it is my hope that someone here might be of some help.</p>
<p>Let $(j, k)\in \mathbb{N}^2$ be coprime.</p>
<p>Can $\Omega(j + k)$ or $\omega(j + k)$ be expressed as some function of $\omega(j)$, $\omega(k)$, $\lambda(j)$, $\lambda(k)$, $ j$, and/or $k$?</p>
<p>If not, then maybe for the special case that $(j, k)$ are prime, or in particular, odd primes?</p>
<p>If the answer is still no, any information regarding this topic is much appreciated.</p>
<p>Note: The functions $\Omega$, $\omega$, and $\lambda$, are the total prime factors, distinct prime factors, and the Liouville function respectively.</p>
|
Alastair Irving
| 8,084 |
<p>Suppose $p,q$ are prime. Thus $\omega,\Omega,\lambda$ are known constants at $p,q$ so we want to know if $\omega,\Omega(p+q)$ can be written as functions in $p,q$. To answer this you need to define what is meant by a function, as clearly $\omega(p+q)$ is a function in $p,q$.</p>
<p>Presumably you mean a function which is built from some class of elementary functions. Therefore, a possible approach would be to try and prove that certain classes of functions are not sufficient. For example, we cannot have $\omega(p+q)=Ap+Bq+C$, for constants $A,B,C$, as we can pick suitable combinations of primes to give a system of linear equations in $A,B,C$ with no solutions (I haven't actually done this computation). It seems reasonable that for any degree $d$ we can find suitable pairs of primes to show that the function isn't a polynomial with degree $\leq d$, but I don't have a proof of this. Thus showing that the function isn't a polynomial might be a worthwhile first step. It will probably help to use that the function is symmetric in $p,q$, so if it's a polynomial then it's a polynomial in $p+q,pq$.</p>
<p>Once polynomials are dealt with, then try and extend the class of possible functions to something bigger.</p>
|
1,615,732 |
<p>Consider $CW$-complex $X$ obtained from $S^1\vee S^1$ by glueing two $2$-cells by the loops $a^5b^{-3}$ and $b^3(ab)^{-2}$. As we can see in Hatcher (p. 142), abelianisation of $\pi_1(X)$ is trivial, so we have $\widetilde H_i(X)=0$ for all $i$. And if $\pi_2(X)=0$, we have that $X$ is $K(\pi,1)$.</p>
<p>My question is: <em>how can one compute</em> $\pi_2(X)$? Computing homotopy groups is hard, what methods may i use?</p>
|
Lee Mosher
| 26,501 |
<p>Letting $\widetilde X$ be the universal cover of $X$ we have
$$\pi_2(X) \approx \pi_2(\widetilde X) \approx H_2(\widetilde X,\mathbb{Z})
$$
The first isomorphism comes from the long exact sequence of homotopy groups of a fibration, using discreteness of the fiber of the universal covering map. The second isomorphism comes from the Hurewicz theorem, using simple connectivity of $\widetilde X$.</p>
|
3,702,309 |
<p>The question reads,
Find all differentiable functions <span class="math-container">$f$</span> such that <span class="math-container">$$f(x+y)=f(x)+f(y)+x^2y$$</span> for all <span class="math-container">$x,y \in \mathbb{R} $</span>. The function <span class="math-container">$f$</span> also satisfies <span class="math-container">$$\lim_{x \rightarrow 0}\frac {f(x)}x = 0$$</span></p>
<p>To solve the problem I wrote the expression for <span class="math-container">$\frac{df}{dx}$</span> using first principle and found that <span class="math-container">$\frac{df}{dx} = x^2.$</span>(Due to the given conditions.)</p>
<p>Using this and calculating <span class="math-container">$f(0)$</span> as zero I got the implication that <span class="math-container">$f(x)$</span> must be <span class="math-container">$\frac{x^3}{3}.$</span></p>
<p>But,
Clearly the calculated <span class="math-container">$f(x)$</span> does not satisfy the required condition for the problem. Thinking about where I had committed the mistake, I realized that <span class="math-container">$f(x+y)=f(x)+f(y)+x^2y$</span> cannot be true for all real <span class="math-container">$x,y$</span>. </p>
<p>So how is it that these operations that implied <span class="math-container">$f(x)=\frac{x^3}{3}$</span> went wrong?</p>
<p>And why is it that while doing these operations I could not 'see' that the conditions cannot be satisfied for all real <span class="math-container">$x$</span>?</p>
<p>EDIT:Reading the comments, I want to make the clarification that I realise the fact that there can not be any <span class="math-container">$ f(x) $</span> that satisfies these conditions using a different method. However I fail to understand why the method elaborated in the question does not reflect this fact to me??</p>
|
Alex Ravsky
| 71,850 |
<p>The problem conditions are implicitly self-contradictory. If we accept them and follow them, then we can rigorously prove a lot of claims about the function <span class="math-container">$f$</span>, and we can believe that everything is OK until we obtain an explicit contradiction.</p>
<p>I <a href="http://ceadserv1.nku.edu/longa//classes/mat385_resources/docs/russellpope.html" rel="nofollow noreferrer">quote</a> famous Bertrand Russell’s illustration.</p>
<blockquote>
<p>The story goes that Bertrand Russell, in a lecture on logic, mentioned that in the sense of material implication, a false proposition implies any proposition.<br />
A student raised his hand and said "In that case, given that <span class="math-container">$1 = 0$</span>, prove that you are the Pope."<br />
Russell immediately replied, "Add <span class="math-container">$1$</span> to both sides of the equation: then we have <span class="math-container">$2 = 1$</span>. The set containing just me and the Pope has 2 members. But <span class="math-container">$2 = 1$</span>, so it has only <span class="math-container">$1$</span> member; therefore, I am the Pope."</p>
</blockquote>
<p>Also we can easily construct a formal logical argument showing that a contradiction implies any proposition.</p>
|
3,702,309 |
<p>The question reads,
Find all differentiable functions <span class="math-container">$f$</span> such that <span class="math-container">$$f(x+y)=f(x)+f(y)+x^2y$$</span> for all <span class="math-container">$x,y \in \mathbb{R} $</span>. The function <span class="math-container">$f$</span> also satisfies <span class="math-container">$$\lim_{x \rightarrow 0}\frac {f(x)}x = 0$$</span></p>
<p>To solve the problem I wrote the expression for <span class="math-container">$\frac{df}{dx}$</span> using first principle and found that <span class="math-container">$\frac{df}{dx} = x^2.$</span>(Due to the given conditions.)</p>
<p>Using this and calculating <span class="math-container">$f(0)$</span> as zero I got the implication that <span class="math-container">$f(x)$</span> must be <span class="math-container">$\frac{x^3}{3}.$</span></p>
<p>But,
Clearly the calculated <span class="math-container">$f(x)$</span> does not satisfy the required condition for the problem. Thinking about where I had committed the mistake, I realized that <span class="math-container">$f(x+y)=f(x)+f(y)+x^2y$</span> cannot be true for all real <span class="math-container">$x,y$</span>. </p>
<p>So how is it that these operations that implied <span class="math-container">$f(x)=\frac{x^3}{3}$</span> went wrong?</p>
<p>And why is it that while doing these operations I could not 'see' that the conditions cannot be satisfied for all real <span class="math-container">$x$</span>?</p>
<p>EDIT:Reading the comments, I want to make the clarification that I realise the fact that there can not be any <span class="math-container">$ f(x) $</span> that satisfies these conditions using a different method. However I fail to understand why the method elaborated in the question does not reflect this fact to me??</p>
|
Alex Peter
| 808,129 |
<p>Your equation does not define a function on a domain as wide as <span class="math-container">$\mathbb{R}$</span>, but that does not mean it is totally meaningless.</p>
<p>Substitute <span class="math-container">$y=x$</span> and you get</p>
<p><span class="math-container">$$f(2x)=2f(x)+x^3$$</span></p>
<p>This function has a solution</p>
<p><span class="math-container">$$f(x)=\frac{x}{6}(x^2+c)$$</span></p>
<p>If you substitute this general solution back into the original equation you get</p>
<p><span class="math-container">$$ xy(x-y)=0 $$</span></p>
<p>which gives the domains 1. <span class="math-container">$x=0, y \in \mathbb{R}$</span>, 2. <span class="math-container">$y=0, x \in \mathbb{R}$</span>, 3. <span class="math-container">$y = x$</span>.</p>
<p>The domains <span class="math-container">$1.$</span> and <span class="math-container">$2.$</span> are covered with any function as you get no condition except <span class="math-container">$f(x)=f(x)$</span>.</p>
<p>For third one the solution is as above:</p>
<p><span class="math-container">$$f(x)=\frac{x}{6}(x^2+c)$$</span></p>
<p>Therefore no function satisfies the equation for all independent <span class="math-container">$x,y \in \mathbb{R}$</span>; it works only on the restricted domains above, where <span class="math-container">$y$</span> depends on <span class="math-container">$x$</span>.</p>
<p>Your second condition requires <span class="math-container">$c=0$</span> on the third domain, and merely any function that satisfies the limit on domains <span class="math-container">$1.$</span> and <span class="math-container">$2.$</span></p>
|
2,465,705 |
<p>In differential geometry, you are often asked to reparameterize a curve using arc-length. I understand the process of how to do this, but I don't understand what we are reparameterizing from.</p>
<p>What is the curve originally parameterized by (before we REparameterize it by arc-length)?</p>
|
Community
| -1 |
<p>It is parameterized by the parameter that appears in the parametric equations given to you!</p>
<p>It is often the case that there is a natural way to define a curve, be it from its geometric definition or from a convenient coordinate system, and the parameter stands out.</p>
<p>In the case of trajectories, it can be time.</p>
<hr>
<p>Let's take the case of Viviani's curve: the intersection of a sphere and a cylinder of half the diameter, tangent to it.</p>
<p>In Cartesian coordinates, we have an implicit system of equations</p>
<p>$$\begin{cases}x^2+y^2+z^2=4,\\(x-1)^2+y^2=1.\end{cases}$$</p>
<p>Then letting $x-1=\cos t,y=\sin t$ by educated guess, we have</p>
<p>$$z=\pm\sqrt{4-x^2-y^2}=\pm\sqrt{2-2\cos t}=\pm 2\sin\frac t2,$$ and letting $t$ run over $[-2\pi,2\pi]$ the single formula $z=2\sin\frac t2$ covers both signs.</p>
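<p>To see the resulting parameterization (my addition, as a quick sketch in Mathematica):</p>
<pre><code>ParametricPlot3D[{1 + Cos[t], Sin[t], 2 Sin[t/2]}, {t, -2 Pi, 2 Pi},
 PlotRange -> All]
</code></pre>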
|
402,970 |
<p>I had an exercise in my text book:</p>
<blockquote>
<p>Find $\frac{\mathrm d}{\mathrm dx}(6)$ by first principles.</p>
</blockquote>
<p>The answer that they gave was as follows:</p>
<p>$$\lim_{h\to 0} \frac{6-6}{h} = 0$$</p>
<p>However surely that answer would be undefined if we let $h$ tend towards $0$, as for finding all other limits like that we substitute $0$ in the place of $h$? </p>
|
Mikasa
| 8,581 |
<p>The answer is right. We know that if the limit $$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$$ exists, then it is denoted by $f'(x)$. When $f$ is a constant function, the numerator of this fraction is <strong>identically</strong> zero, while the variable $h$ in the denominator <strong>tends</strong> to zero but is never equal to zero. There are many functions from whose domain $x=0$ is excluded, yet whose limit at zero we can still prove.</p>
|
1,851,698 |
<p>While playing with the results of defining a new operation, I came across a number of interesting properties with little literature surrounding it; the link to my original post is here: <a href="https://math.stackexchange.com/questions/1785715/finding-properties-of-operation-defined-by-x%E2%8A%95y-frac1-frac1x-frac1y">Finding properties of operation defined by $x⊕y=\frac{1}{\frac{1}{x}+\frac{1}{y}}$? ("Reciprocal addition" common for parallel resistors)</a></p>
<p>and as you can see, the operation of interest is $x⊕y = \frac{1}{\frac{1}{x}+\frac{1}{y}} = \frac{xy}{x+y}$. </p>
<p>In wanting to find a condition such that $x⊕y = x-y$, I found that the ratio between x and y must be φ=1.618... the golden ratio, for this to work!</p>
<p>$x⊕y=x-y$</p>
<p>$\frac{1}{\frac{1}{x}+\frac{1}{y}} = x-y$</p>
<p>$\frac{xy}{x+y} = x-y$</p>
<p>$xy = x^2-y^2$</p>
<p>$0 = x^2-xy-y^2$</p>
<p>and, using the quadratic formula,</p>
<p>$x = \frac{y±\sqrt{y^2+4y^2}}{2}$</p>
<p>$x = y\frac{1±\sqrt{5}}{2}$</p>
<p>$x = φy$</p>
<p>This result is amazing in and of itself. Yet through the same basic setup, we find a new ratio pops out if we try $x⊕y = x+y$ and it is complex. </p>
<p>$x⊕y = x+y$</p>
<p>$\frac{1}{\frac{1}{x}+\frac{1}{y}} = x+y$</p>
<p>$\frac{xy}{x+y} = x+y$</p>
<p>$xy = x^2+2xy+y^2$</p>
<p>$0 = x^2+xy+y^2$</p>
<p>$x = \frac{-y±\sqrt{y^2-4y^2}}{2}$</p>
<p>$x = y\frac{1±\sqrt{-3}}{2}$</p>
<p>$x = y\frac{1±\sqrt{3}i}{2}$</p>
<p>and this is the "imaginary golden ratio"!</p>
<p>$φ_i = \frac{1+\sqrt{3}i}{2}$</p>
<p>It has many properties of the golden ratio, mirrored. This forum from 2011 is the only literature I could dig up on it, and it explains most of the properties I also found and more. <a href="http://mymathforum.com/number-theory/17605-imaginary-golden-ratio.html" rel="noreferrer">http://mymathforum.com/number-theory/17605-imaginary-golden-ratio.html</a></p>
<p>This number is extremely cool, because its mathematical properties mirror φ but also have their own coolness. </p>
<p>$φ_i = 1-\frac{1}{φ_i}$</p>
<p>$φ_i^2 = φ_i - 1$</p>
<p>and generally </p>
<p>$φ_i^n = φ_i^{n-1} - φ_i^{n-2}$</p>
<p>This complex ratio also lies on the unit circle in the complex plane, and has a representation as a power of e! </p>
<p>$φ_i = cos(π/3)+ isin(π/3) = e^{iπ/3}$</p>
<p>$|φ_i|=1$</p>
<p>It is also a nested radical, because of the identity $φ_i^2 + 1 = φ_i$</p>
<p>$φ_i=\sqrt{-1+\sqrt{-1+\sqrt{-1+\sqrt{-1+...}}}}$</p>
<p>Since the only other forum which I could find that has acknowledged the existence of the imaginary golden ratio (other than the context of it as a special case imaginary power of e) I'd like to share my findings and ask if anybody has heard of this ratio before, and if anybody could offer more fine tuned ideas or explorations into the properties of this number. One specific qustion I have involves its supposed connection (according to the 2011 forum) to the sequence </p>
<p>$f_n = f_{n-1} - f_{n-2}$</p>
<p>$f_0=0$</p>
<p>$f_1=1$</p>
<p>$0,1,1,0,-1,-1,0,1,1,...$</p>
<p>could somebody explain to me how this sequence is connected to φ_i? The forum states there is a connection, but I can't figure out what it is based on the wording. What am I missing?</p>
<p>Thanks for your help with my question/exploration. </p>
|
wyboo
| 373,047 |
<p>I found this number in an unrelated context.
Consider the differential equation:</p>
<p><span class="math-container">$$f'(x) = f(f(x))$$</span></p>
<p>The function that is the solution to this has the property that its derivative is the same as composing the function with itself. Assume the function takes the form of <span class="math-container">$f(x) = A x^r$</span> where <span class="math-container">$A$</span> and <span class="math-container">$r$</span> are constants. Then:</p>
<p><span class="math-container">$$r A x^{r - 1} = A^{r + 1} x^{r^2}$$</span></p>
<p>Assuming <span class="math-container">$x$</span> is nonzero:</p>
<p><span class="math-container">$$r A x^{ (r - 1) - r^2} = A^{r + 1}$$</span></p>
<p>The RHS is a constant and equivalent to the LHS, so the exponent must be <span class="math-container">$0$</span>, otherwise the LHS would vary with <span class="math-container">$x$</span>, so:</p>
<p><span class="math-container">$$(r - 1) - r^2 = 0$$</span></p>
<p>The solutions to this are <span class="math-container">$(1 \pm i \sqrt{3})/2$</span>.</p>
<p>So a function <span class="math-container">$f(x)$</span> of this form whose derivative equals <span class="math-container">$f(f(x))$</span> is <span class="math-container">$f(x) = A x^r$</span>, where <span class="math-container">$r$</span> is the 'imaginary golden ratio' and the constant <span class="math-container">$A$</span> satisfies <span class="math-container">$rA = A^{r+1}$</span> (from matching the remaining constant factors)!</p>
|
226,249 |
<p>I am struggling to convert a base64 string to a list of UnsignedInteger16 values.
I have limited experience with mathematica, so please excuse me if this should be obvious.</p>
<p>I read the base64 string from an XML file and ultimately into a variable base64String.
This is a long string with 7057 UnsignedInteger16 values, but it starts off as: "8ynnKdspzinCKbcprSmhKZUpiCl8KW8pYylXKUkpQCk0KSop..."</p>
<p>I can convert to a ByteArray with n = BaseDecode[base64String] // Normal;</p>
<p>Then n has the expected values {243, 41, 231, 41, 219, 41, 206, 41, 194, 41, 183, 41, 173, 41, 161, ...</p>
<p>However, what I am looking for is a list with the values {243 + 256 * 41, 231 + 256 * 41, 219 + 256 * 41, ...</p>
<p>i.e.: {10739, 10727, 10727, ...</p>
<p>Thank you in advance for your suggestions.</p>
<p>Edit: Thank you for the suggestion to use Partition. Works well. In the interest of improving my understanding, can anyone suggest why I cannot get ImportString[...] to work?</p>
<p>My source data is XML file with data along the lines of the following where I truncated the Base64 string for clarity.:</p>
<pre><code><?xml version="1.0" encoding="utf-8" ?>
<BPplus version="5.0" filename="BPplus_00100.xml">
<MeasDataLogger guid="ee7bee36-ffd6-30ae-53f1-257f0eab7ddd">
<Sys>130</Sys>
<Dia>77</Dia>
<Map>101</Map>
<PressureWaves>
<RawPressureWave>
<RawPressure>NIBP</RawPressure>
<RawSampleCount>7057</RawSampleCount>
<RawCuffPressureWave>8ynnKdspzinCKbcprSmhKZUpiCl8KW8pYylXKUk... bU1mjV+NWM1RjU=</RawCuffPressureWave>
</RawPressureWave>
</PressureWaves>
</MeasDataLogger>
</BPplus>
</code></pre>
<p>The script I then run with the C1, C2 & C3 suggestions.</p>
<pre><code>sampleFilenameString = "C:\\BPPdata\\BPplus_00112.xml";
xmldata = Import[sampleFilenameString, "XML"];
RawCuffPressureWave =
Cases[xmldata, XMLElement["RawCuffPressureWave", _, _], Infinity];
nibpxml = RawCuffPressureWave[[2]];
base64String = ToString[nibpxml[[3]]];
C1 = Partition[Normal@BaseDecode[base64String], 2].{1, 256};
C2 = ImportByteArray[BaseDecode[base64String], "UnsignedInteger16"];
C3 = ImportString[base64String, {"Base64", "UnsignedInteger16"},
ByteOrdering -> -1]
C1 == C2
C1 == C3
C1
</code></pre>
<p>produces the following output</p>
<pre><code>{17956, 26977, 25964}
True
False
{10739, 10727, 10715, 10702, 10690, 10679, 10669, 10657, ... }
</code></pre>
<p>I have tried to make the first parameter <code>ToString[base64String]</code>, but that did not change the outcome.</p>
<p>If I manually define the string <code>$base64 = "8ynnKdspzinCKbcprSmhKZUpiCl8KW8pYylXKUkpQCk0KSo....</code></p>
<p>It works as expected and C1 = C4</p>
<pre><code>C4 = ImportString[$base64, {"Base64", "UnsignedInteger16"},
ByteOrdering -> -1];
C1 == C4
</code></pre>
<p>Can anyone explain the difference why <code>$base64</code> variable works but <code>base64String</code> does not?</p>
<p>Edit: base64String is not a string! Not sure if this is the best way to convert, but it works in Mathematica 11.x and higher.</p>
<pre><code>sampleFilenameString = "C:\\BPPdata\\BPplus_00112.xml";
xmldata = Import[sampleFilenameString, "XML"];
RawCuffPressureWave = Cases[xmldata, XMLElement["RawCuffPressureWave", _, _], Infinity];
nibpxml = RawCuffPressureWave[[2]];
(* convert to string then base64 decode *)
base64Data =
Developer`DecodeBase64ToByteArray[
nibpxml //. XMLElement[_, _, t_] :> t // Flatten // StringJoin] // Normal;
C1 = Partition[base64Data, 2].{1, 256};
</code></pre>
|
alex
| 20,155 |
<p>There doesn't seem to be a built-in method within LineLegend or any other legend function to allow multiple lines. It seems that the fastest way forward is to have this created via Graphics, and then introduced in the plot as an inset using the Epilog option</p>
<pre><code>sep = 0.05; (* separation between the two lines *)
size = 0.11; (* thickness of the legends *)
cols = {Red, Orange, Black}; (* colours of choice *)
pix2mm = 72/25.4; (* pixel to mm conversion *)
(* creates two lines with a given colour, thickness and size. The \
second line is translated vertically
by the variable sep *)
legends =
Show[{Graphics[{#, Thickness[size], Line[{{{0, 0}, {0.1, 0}}}]},
ImageSize -> 8*pix2mm],
Graphics[{#, Dashed, Thickness[size],
Line[{{{0, -sep}, {0.1, -sep}}}]},
ImageSize -> 8*pix2mm]}] & /@ cols;
(* creating the string associated with each 'legend' *)
vals = Style["\!\(\*SubscriptBox[\(N\), \(B\)]\) = " <> ToString[#],
Black, 14, FontFamily -> "Helvetica"] & /@ {6, 4, 0};
(* Assembling the two using Grid *)
lgs = Grid[{legends[[#]], vals[[#]]} & /@ Range@3, Spacings -> {1, 2}]
</code></pre>
<p>Where the output looks like:</p>
<p><a href="https://i.stack.imgur.com/iiN68.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iiN68.png" alt="enter image description here" /></a></p>
<p>Introducing this into a plot is done hastily here, but it could easily be handled using Inset, Placed, Legended, Graphics or Grid.
In the example below I plot a variety of frequencies (and amplitudes), each with a small offset:</p>
<pre><code>Plot[ Evaluate[
k[[#]] Sin[ k[[#]] x + b] /. {b -> {1, 2}} & /@ Range@3], {x, 0,
2 \[Pi]},
PlotStyle ->
Flatten[{{cols[[#]]}, {cols[[#]], Dashed}} & /@ Range@3, 1],
FrameStyle -> Directive[Black, 16],
Frame -> {True, True, False, False},
ImageSize -> 450, PlotRange -> {-8, 8},
PlotLegends -> lgs]
</code></pre>
<p><a href="https://i.stack.imgur.com/aSYfj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aSYfj.png" alt="enter image description here" /></a></p>
|
1,948,634 |
<blockquote>
<p>Is it possible to find $6$ integers $a_1,a_2,\ldots,a_6 \geq 2$ such that $$a_1+a_1a_2+a_1a_2a_3+a_1a_2a_3a_4+a_1a_2a_3a_4a_5+a_1a_2a_3a_4a_5a_6 = 248?$$</p>
</blockquote>
<p>I was wondering how we could establish the existence of such numbers. Is there a way to do it without finding the actual $6$ numbers?</p>
|
trang1618
| 370,434 |
<p>Note that:$$a_1(1+a_2(1+a_3(1+a_4(1+a_5(1+a_6))))) = 248 = 2^3(31)$$
$a_1$ has to be 2, because $1+a_2(1+a_3(1+a_4(1+a_5(1+a_6))))\geq 63.$ And so now we're solving for:
$$a_2(1+a_3(1+a_4(1+a_5(1+a_6)))) = 123.$$ </p>
<p>Following the same kind of argument, keeping in mind the constraint $a_i\geq2$: since $1+a_3(1+a_4(1+a_5(1+a_6)))\geq 31$ and $a_2\mid 123=3\cdot 41$, we must have $a_2=3$, so $a_3(1+a_4(1+a_5(1+a_6)))=40$; the inner factor is at least $15$, forcing $a_3=2$ and then $a_4(1+a_5(1+a_6))=19$; here the inner factor is at least $7$, forcing $a_4=2$, which is impossible since $2\nmid 19$. We arrive at the conclusion that there do not exist such numbers.</p>
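<p>A brute-force confirmation (my addition): since $a_1a_2\cdots a_6\le 248$ and every $a_i\geq 2$, each $a_i\le 7$, so the whole search space is tiny.</p>
<pre><code>Select[Tuples[Range[2, 7], 6],
 Total[FoldList[Times, First[#], Rest[#]]] == 248 &]
(* {} : no solutions *)
</code></pre>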
|
238,673 |
<p><a href="http://vihart.com/doodling/" rel="nofollow">Vi Hart's doodling videos</a> and a 4 year old son interested in mazes has made me wonder:</p>
<p>What are some interesting mathematical "doodling" diversions/games that satisfy the following criteria:</p>
<p>1) They are "solitaire" games, i.e. require only one player;
2) They require only a pencil and blank sheet of paper;
3) They don't rely on abstract mathematical language/substitutions. More precisely, I'm interested in "picture" type games with very simple rules, and not the cheeky answer: "mathematics".</p>
<p>What I'd really like to see is a game with simple rules that, out of the planarity of a sheet of paper, somehow "generates" a maze for the player from the "automaton" type rules...A good starting point may be a solitaire version of dots-and-boxes or something like this...</p>
<p>If such a thing is impossible, under certain simple desirable assumptions, I'd like to see proofs of such a fact, too.</p>
|
Karolis Juodelė
| 30,701 |
<p>You could draw an <a href="http://en.wikipedia.org/wiki/Elementary_cellular_automaton" rel="nofollow">elementary cellular automaton</a> with random initial row.</p>
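<p>For instance (my addition; rule 30, the width and the number of steps are arbitrary choices), Mathematica will generate such a doodle from a random first row, and the same update rule can be followed by hand on graph paper:</p>
<pre><code>ArrayPlot[CellularAutomaton[30, RandomInteger[1, 101], 50]]
</code></pre>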
|
973,985 |
<p>I don't know where to start. </p>
<p>I know that the sum of the angles is less than or equal to 180, but how do I prove this?</p>
|
Hagen von Eitzen
| 39,174 |
<p>Let the (common) sum be $x$. Cut an arbitrary triangle into two triangles by a segment from a vertex to the opposite side. Together the two smaller triangles contain all the original angles plus the two supplementary angles at the foot of that segment, so their combined angle sum is $2x$ but also $x+180^\circ$; hence $x=180^\circ$.</p>
|
2,240,756 |
<p>I tried rewriting $(1+x+x^2)^\frac{1}{x}$ as $e^{\frac{1}{x}\ln(1+x+x^2)}$ and then computing the taylor series of $\frac{1}{x}$ and $\ln(1+x+x^2)$ but I'm still not getting the correct answer..</p>
|
Masacroso
| 173,262 |
<p>HINT: a workaround following your way to see the problem is this: you want to write the Taylor series around $x=a$ of $$f(x):=\frac{\ln (1+x+x^2)}x$$</p>
<p>in terms of $$\frac1{1-z}=\sum_{k=0}^\infty z^k,\quad |z|<1\tag{1}$$</p>
<p>and</p>
<p>$$\ln(1+z)=\sum_{k=1}^\infty(-1)^{k+1}\frac{z^k}k,\quad|z|<1\tag{2}$$</p>
<p>Then you want to solve the equations</p>
<p>$$K_1+\ln(1+x+x^2)=\ln(1+z),\quad\frac{K_2}x=\frac1{1-z}$$</p>
<p>for $x\in(a-\delta,a+\delta)$ and $|z|<1$, for some $\delta>0$, for some constants $K_1$ and $K_2$. When you have done that then the Cauchy product of $(1)$ and $(2)$, as functions of $x\in(a-\delta,a+\delta)$, is well-defined, so you can try to find a simple expression for this Cauchy product.</p>
<p>Finally you can try to write the double series of the exponential function and the previous Cauchy product as a series in an unique index, what will be the Taylor series of your function around $x=a$ with radius of convergence $\delta$.</p>
|
735,563 |
<p>For both sets C and D, provide a proof that C ∪ (D − /C) = C</p>
<p>"/C" is a set's complement of C</p>
|
Ellya
| 135,305 |
<p>It may help to know that the diagonal entries of $D$ are the eigenvalues of $A$, and the rows of $P$ are the corresponding eigenvectors of $A$, so calculate the eigenvalues and eigenvectors, then $A^5=(P^{-1}D^5P)$</p>
|
3,672,839 |
<p>What is the derivative of <span class="math-container">$f(x)=|x|^\frac{3}{2},\forall x\in \mathbb{R}$</span>.</p>
<p>When <span class="math-container">$x>0$</span> it is fine, but the problem is when <span class="math-container">$x\leq 0$</span>. I was trying the definition of the derivative, but what is <span class="math-container">$|x|^\frac{3}{2}$</span> when <span class="math-container">$x<0$</span>?</p>
|
Asaf Karagila
| 622 |
<p>I think that the first encounter I had was in linear algebra I or II. After proving that every finite dimensional vector space has a basis, some people wondered what happens if we omit "finite dimensional", and the answer is positive with Zorn's lemma being mentioned.</p>
<p>But maybe this doesn't count, since there is no discussion about the lemma or the proof of the statement. It's just a remark to satisfy the immediate curiosity of more advanced students.</p>
<p>During the second year, however, at least in mathematics programs in Israel, Zorn's lemma makes <em>a lot</em> of appearances. I would often hear from my second year students that "this week we saw Zorn's lemma in all the courses", so it sort of appeared out of nowhere.</p>
<ol>
<li><p>Algebraic structures and ring theory: every commutative ring with a unit has a maximal ideal.</p></li>
<li><p>Topology: Tychonoff's theorem, which may depend on how you prove it may already required you to talk about Zorn's lemma (e.g. talking about ultrafilters).</p></li>
<li><p>Logic: the completeness theorem (or compactness theorem, whichever is done first).</p></li>
<li><p>Set theory: Zorn's lemma itself. Obviously.</p></li>
</ol>
<p>Which one is the first one encountered depends <em>a lot</em> on the program, the course, the syllabus, and the professor. Some of these proofs may lend themselves a bit more naturally to the Teichmüller–Tukey lemma, or the Hausdorff Maximality Principle, or maybe something else. It also depends on the prerequisites of each course; in my first semester I already learned about partial orders and chains, so Zorn's lemma was easier to explain compared to the well-ordering theorem or transfinite recursion. </p>
<p>With the exception of the vector spaces example, which perhaps some students also see in their early linear algebra courses already, I think those are the natural examples.</p>
|
2,220,105 |
<p>Hatcher makes the following definition in exercise 7 on page 258. </p>
<p>For a map $f: M \rightarrow N $ between connected, closed, orientable $n$-manifolds with fundamental classes $[M]$ and $[N]$, the <em>degree</em> of $f$ is defined to be the integer $d$ such that $f_{*}[M]=d[N]$ (so the sign of the degree depends on the choice of fundamental classes).</p>
<blockquote>
<p>I'm confused as to why $f_{*}[M]$ has be of the form $d[N]$ for arbitrary coefficient ring $R$. I know that $[M]$ will be a generator in the group that it lies in, namely $H_{n}(M;R) \approx R$ and similarly $[N]$ is a generator for $H_{n}(N,R) \approx R$. I showed that for any isomorphism $f:R\rightarrow S$ a generator of $R$ must map to a generator of $S$, where my definition of generator for $R$ is an element $u$ such that $Ru=R$. But with this I still can't see why $f_{*}[M]=d[N]$. </p>
</blockquote>
|
Bram28
| 256,001 |
<p>The claim I will prove is that any string of $n$ numbers can be sorted <em>in any way I want</em> in at most $2n-3$ inversions.</p>
<p>Proof by weak induction over $n$:</p>
<p>Base: $n =2$. Then I have two numbers, which are either already in the order I want, which takes no inversions, or must be switched, which takes 1 inversion, so indeed at most $2n-3=4-3=1$ inversions.</p>
<p>Step:</p>
<p>Suppose I want to order $n$ numbers $1$ through $n$ in some way, but they are initially put in a random sequence. Now, let's say the order I want is $a_1 ... a_n$.</p>
<p>OK, suppose the number on the right is $a_i$</p>
<p>By inductive hypothesis, I can order the $n-1$ numbers to the left of $i$ in at most $2(n-1)-3$ inversions into the following order:</p>
<p>$a_n \: a_{n-1} ... \: a_{i+1} \: a_1 \: a_2 \: a_3 ... \: a_{i-1} \: a_i$</p>
<p>Now invert that whole string to</p>
<p>$a_i \: a_{i-1} ... \: a_3 \: a_2 \: a_1 \: a_{i+1} ... \: a_{n-1} \: a_n$</p>
<p>And then invert the first i numbers to get the desired sequence.</p>
<p>This takes 2 additional inversions, so the whole original sequence can be ordered in at most $2(n-1)-3+2=2n-2-3+2=2n-3$ inversions.</p>
|
4,930 |
<p>This is not an urgent question, but something I've been curious about for quite some time.</p>
<p>Consider a Boolean function in <em>n</em> inputs: the truth table for this function has 2<sup><em>n</em></sup> rows.</p>
<p>There are uses of such functions in, for example, computer graphics: the so-called ROP3s (ternary raster operations) take three inputs: <em>D</em> (destination), <em>S</em> (source) and <em>M</em> (mask). All three are planes of bits, and the result is returned to the destination plane of bits. Now, this is only really applicable to 2-colour (black & white) or 8-colour displays (regarding the red, green, blue planes separately). A given ROP3 is given a code from 0 to 255, a number which represents the pattern of the output bit, down the rows of the table. Similarly, ROP2s have a value from 0 to 15. ROPs may also be given names, especially when the logical connective of which the truth table is an extension is a simple one, such as AND, XOR or SRC (source).</p>
<p>An expression for any truth table (or ROP) may be found in terms of an expressively-complete set of connectives (usually unary or binary, sometimes nullary too). [Well, I suppose this statement is itself a tautology!] For example, the sets {NOT, AND}, {NOT, OR}, {NAND} are each expressively complete.</p>
<p>One commonly used (redundant) expressively complete set is {NOT, AND, OR}. Two particularly common canonical sets of expressions over this set are the conjunctive normal form (CNF) and disjunctive normal form (DNF). Both of these are rather verbose.</p>
<p>There is also a notion of a minimal expression over a set of connectives, defined variously. The count might be of instances of a variable or of connectives. There might be a bound to the depth or breadth of the expression (perhaps).</p>
<p>The Boolean connectives might be extended to the set <strong><em>R</em></strong>[0,1], for fuzzy logic; that is, the connectives are over <strong><em>R</em></strong>[0,1], with the restriction to {0,1} being the usual Boolean function. There are many ways to do this; it is possible to preserve some but not all the usual properties (e.g. associativity, idempotency) of the connectives. [NOT(x) would usually be interpreted as (1−x); AND(x,y) could be (x*y) or (min{x,y}), or in many other ways.]</p>
<p>Such extensions may be used, for example, to give a meaning to a ROP3 as applied to 256-level monochrome images (to combine or 'mix' such images) or to planes of full-colour images. (Necessarily, some truncation or 'quantisation' must take place.)</p>
<p>However, two expressions that have the same function over {0,1} will generally have different functions over <strong><em>R</em></strong>[0,1]. Rather than choosing some arbitrary expression, it would be an advantage to choose some canonical or minimal expression.</p>
<p>How much is known about this field? Are there any good on-line references? I'm particularly interested in definitions of, theorems about, and algorithms for the generation of minimal or canonical expressions.</p>
|
Jon Awbrey
| 1,636 |
<p>I doubt if there's anything like the best of all possible formal languages for boolean expressions, but there are many ways of coming up with calculi that are more efficient than most of the ones currently in common use.</p>
<p>You might enjoy exploring the possibilities of using <a href="http://mywikibiz.com/Minimal_negation_operator" rel="nofollow noreferrer">minimal negation operators</a> as the fundamental primitives of a propositional calculus.</p>
<p>A calculus that is very efficient from both conceptual and computational standpoints is based on just two types of logical connectives, both of variable $k$-ary scope. The formulas of this calculus parse into a family of graph-theoretical data structures whose underlying graphs are called "painted and rooted cacti" (PARCs). Hence the name "<a href="http://mywikibiz.com/Directory:Jon_Awbrey/Papers/Differential_Logic_:_Introduction#Cactus_Language_for_Propositional_Logic" rel="nofollow noreferrer">cactus language</a>" for this style of propositional calculus, in either its traversal string or parse graph forms.</p>
<ul>
<li><p>The first kind of propositional expression is a parenthesized sequence of propositional expressions, written as $\unicode{x2997} ~ e_1 \unicode{xFE50} e_2 \unicode{xFE50} \ldots \unicode{xFE50} e_{k-1} \unicode{xFE50} e_k ~ \unicode{x2998}$ and read to say that exactly one of the propositions $e_1, e_2, \ldots, e_{k-1}, e_k$ is false, in other words, that their minimal negation is true. A clause of this form maps into a PARC structure called a <em>lobe</em>, in this case, one that is <em>painted</em> with the colors $e_1, e_2, \ldots, e_{k-1}, e_k$ as shown below.</p>
<p><img src="https://i.stack.imgur.com/RoFpUm.jpg" alt="Cactus Graph Lobe Connective"> <a href="http://mywikibiz.com/images/5/5e/Cactus_Graph_Lobe_Connective.jpg" rel="nofollow noreferrer"><sup>(source)</sup></a></p></li>
<li><p>The second kind of propositional expression is a concatenated sequence of propositional expressions, written as $e_1 ~ e_2 ~ \ldots ~ e_{k-1} ~ e_k$ and read to say that all of the propositions $e_1, e_2, \ldots, e_{k-1}, e_k$ are true, in other words, that their logical conjunction is true. A clause of this form maps into a PARC structure called a <em>node</em>, in this case, one that is <em>painted</em> with the colors $e_1, e_2, \ldots, e_{k-1}, e_k$ as shown below.</p>
<p><img src="https://i.stack.imgur.com/hTaUKm.jpg" alt="Cactus Graph Node Connective"> <a href="http://mywikibiz.com/images/9/98/Cactus_Graph_Node_Connective.jpg" rel="nofollow noreferrer"><sup>(source)</sup></a></p></li>
</ul>
<p>All other propositional connectives can be obtained through combinations of these two forms. Strictly speaking, the parenthesized form is sufficient to define the concatenated form, making the latter formally dispensable, but it is convenient to maintain it as a concise way of expressing more complicated combinations of parenthesized forms. While working with expressions solely in propositional calculus, it is easiest to use plain parentheses for logical connectives. In contexts where ordinary parentheses are needed for other purposes an alternate typeface — for example, $\unicode{x2997} ~ \unicode{xFE50} ~ \unicode{x2998}$ — may be used for logical operators.</p>
<p>See above links for further details.</p>
|
4,930 |
<p>This is not an urgent question, but something I've been curious about for quite some time.</p>
<p>Consider a Boolean function in <em>n</em> inputs: the truth table for this function has 2<sup><em>n</em></sup> rows.</p>
<p>There are uses of such functions in, for example, computer graphics: the so-called ROP3s (ternary raster operations) take three inputs: <em>D</em> (destination), <em>S</em> (source) and <em>M</em> (mask). All three are planes of bits, and the result is returned to the destination plane of bits. Now, this is only really applicable to 2-colour (black & white) or 8-colour displays (regarding the red, green, blue planes separately). A given ROP3 is given a code from 0 to 255, a number which represents the pattern of the output bit, down the rows of the table. Similarly, ROP2s have a value from 0 to 15. ROPs may also be given names, especially when the logical connective of which the truth table is an extension is a simple one, such as AND, XOR or SRC (source).</p>
<p>An expression for any truth table (or ROP) may be found in terms of an expressively-complete set of connectives (usually unary or binary, sometimes nullary too). [Well, I suppose this statement is itself a tautology!] For example, the sets {NOT, AND}, {NOT, OR}, {NAND} are each expressively complete.</p>
<p>One commonly used (redundant) expressively complete set is {NOT, AND, OR}. Two particularly common canonical sets of expressions over this set are the conjunctive normal form (CNF) and disjunctive normal form (DNF). Both of these are rather verbose.</p>
<p>There is also a notion of a minimal expression over a set of connectives, defined variously. The count might be of instances of a variable or of connectives. There might be a bound to the depth or breadth of the expression (perhaps).</p>
<p>The Boolean connectives might be extended to the set <strong><em>R</em></strong>[0,1], for fuzzy logic; that is, the connectives are over <strong><em>R</em></strong>[0,1], with the restriction to {0,1} being the usual Boolean function. There are many ways to do this; it is possible to preserve some but not all the usual properties (e.g. associativity, idempotency) of the connectives. [NOT(x) would usually be interpreted as (1−x); AND(x,y) could be (x*y) or (min{x,y}), or in many other ways.]</p>
<p>Such extensions may be used, for example, to give a meaning to a ROP3 as applied to 256-level monochrome images (to combine or 'mix' such images) or to planes of full-colour images. (Necessarily, some truncation or 'quantisation' must take place.)</p>
<p>However, two expressions that have the same function over {0,1} will generally have different functions over <strong><em>R</em></strong>[0,1]. Rather than choosing some arbitrary expression, it would be an advantage to choose some canonical or minimal expression.</p>
<p>How much is known about this field? Are there any good on-line references? I'm particularly interested in definitions of, theorems about, and algorithms for the generation of minimal or canonical expressions.</p>
|
Jon Awbrey
| 1,636 |
<p>By way of clarifying the question to the point where it might get a definite answer, or at least a parametric set of answers, I think it would help to draw the distinction between the language-fixed and the language-variable sides of the problem a little more sharply. Rhubbarb alluded to this distinction in asking the question, but it could be formalized further by introducing a parameter $L$ for the particular calculus or formal language that is used to describe the object domain $O$, which I'm guessing is something like a union over sets of boolean functions of types $\mathbb{B}^k \to \mathbb{B}$, where $\mathbb{B} = \lbrace 0, 1 \rbrace$ and $k$ is a positive integer.</p>
|
1,363,882 |
<p>I am aware that the area under the curve of $\frac{1}{x}$ is infinite yet the area under the curve of $\frac{1}{x^2}$ is finite. </p>
<p>Calculus and series wise, I understand what is going on, but I can't seem to get a good geometric intuition of the problem.
Both curves can be shown to converge to $0$ (the curves themselves, not the area), and on the interval from $1$ to infinity, the two curves have nothing intrinsically different.
Can someone please provide me with a good geometric intuition of what's going on? I can't find anything on the web; people seem not to want to explain it geometrically.</p>
|
Ad van Straeten
| 573,311 |
<p>I made drawings of both functions.</p>
<p>With respect to 1/x^2 I was very strict: I calculated the area under the straight line connecting (1,1) and (2,1/4), and so on. This gives two series that are both convergent by comparison with the series 1, 1/2, 1/4, .... The combined area is larger than the area we are interested in, so our area under the integral is finite, as expected.</p>
<p>With respect to 1/x I was very lenient, and I calculated the area connecting (1,1/2) and (2,1/2), (2,1/3) and (3,1/3), and so on, which gives the harmonic series, which we know diverges. The area that we calculated is smaller than the area we are interested in, so the area of the integral here is infinite. The harmonic series was already suggested by Cardinal, thanks for that suggestion.</p>
<p>So even when you are very strict on the first function, and very lenient on the second, the first is clearly finite, and the other infinite. But in the end you use simple geometry (pre-calculus, I suppose) and combine it with knowledge about the convergence of series.</p>
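<p>For reference, here is a compact version of the same comparison, using rectangles of constant height on each interval [n,n+1] (slightly simpler shapes than in the drawings, but the same idea). Since $1/x \ge 1/(n+1)$ on $[n,n+1]$,
$$\int_1^\infty \frac{dx}{x} \;\ge\; \sum_{n=1}^\infty \frac{1}{n+1} \;=\; \infty,$$
while $1/x^2 \le 1/n^2$ on $[n,n+1]$, so
$$\int_1^\infty \frac{dx}{x^2} \;\le\; \sum_{n=1}^\infty \frac{1}{n^2} \;\le\; 1+\sum_{n=2}^\infty \frac{1}{n(n-1)} \;=\; 2.$$</p>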
<p>I now realize that Mark Viola hinted this already in his post "Frank just curious". His mysterious response was alluding to what would happen if you inspect the areas, thereby getting these series he made reference to. I guess he wanted to just give a hint;)!</p>
<p>It is common knowledge that both functions blow up at x=0. The function y=1/x diverges also at the tail, so I will leave it at that. By inspecting the areas of y=1/x^2 one gets a series that keeps adding terms, so the area is infinite if one includes x=0, as expected.</p>
<p>I took it from x=1 to make calculations easier and because I thought the question was about the tail of both functions. Since it is about the tail, choosing the left limit of the integration seems arbitrary.</p>
<p>My response is late: a student in the calculus course I am mentoring asked the question, now about x going from 1 to infinity. I saw this page, and I didn't notice any posts that did the same thing as I did. It is very simple, but I thought that this is what was meant by "geometrical" in the original question. I posted because I would like some feedback on the content, not because I wanted a medal;)!
As to your comment, hardmath: I thought it was clearly understood that the question was not about the blow-up of both functions.
Are there follow-up answers that I didn't see, or did the discussion pass away some four years ago? If the latter is true, is it senseless to add a post after a long time?
It seemed to me that the original post actually wanted to point out that things were questionable because they were not intuitive, brought philosophy into the discussion, and after that I saw some reference to space-time. After evaluating the level of some of these comments (only a few of them) I thought that simple calculations would at least meet that level;)!
Kind regards,
Ad van Straeten</p>
<p>P.S.
In fact this was a very simple answer to a very simple question. Neither is consistent with the advanced level of the other discussions at this site. But it is still an answer that had not been provided yet.</p>
|
2,867,718 |
<p>How do I integrate $\sin\theta + \sin\theta \tan^2\theta$ ?</p>
<p>First thing,
I have been studying maths for business for approximately 3 months now. Since then, I studied algebra and then I started studying calculus. Yet, my friend stopped me there, and asked me to study Fourier series as we'll need it for our incoming projects. So I feel that I am missing a lot of things as I haven't studied integrals yet. </p>
<p>Today, I encountered this solution that I couldn't understand at all. </p>
<p>Apparently, $\int(\sin\theta + \sin\theta \tan^2\theta)d\theta = \int(\sin\theta (1 + \tan^2\theta))d\theta$. </p>
<p>Below, a link to the solution at 5:08.
<a href="https://youtu.be/aw_VM_ZDeIo" rel="nofollow noreferrer">https://youtu.be/aw_VM_ZDeIo</a></p>
<p>He stated that we have to learn the integral identities. So, I started searching the whole internet looking for them, but I think I couldn't find them. The only thing that I found was something called the Magic Hexagon. I thought of reading about $\theta$ as it might mean something, but after all I learned that it is just a regular Greek letter used as a variable.</p>
|
Tyler6
| 411,359 |
<p>What happens in the video is that the $\sin(x)$ is factored out, and so it becomes $\sin(x)(1+\tan^2(x))$. This is simply factoring, and by expanding you can see it is equivalent to $\sin(x)+\sin(x) \tan^2(x)$. </p>
<p>As you said, the Greek letter $\theta$ is just a common variable, which is often used in trigonometric problems instead of the standard “$x$”. Many integrals of this kind require use of trigonometric identities, and if you wish to review them a list can be found <a href="http://www.sosmath.com/trig/Trig5/trig5/trig5.html" rel="nofollow noreferrer">here</a>. </p>
|
2,867,718 |
<p>How do I integrate $\sin\theta + \sin\theta \tan^2\theta$ ?</p>
<p>First thing,
I have been studying maths for business for approximately 3 months now. Since then, I studied algebra and then I started studying calculus. Yet, my friend stopped me there, and asked me to study Fourier series as we'll need it for our incoming projects. So I feel that I am missing a lot of things as I haven't studied integrals yet. </p>
<p>Today, I encountered this solution that I couldn't understand at all. </p>
<p>Apparently, $\int(\sin\theta + \sin\theta \tan^2\theta)d\theta = \int(\sin\theta (1 + \tan^2\theta))d\theta$. </p>
<p>Below, a link to the solution at 5:08.
<a href="https://youtu.be/aw_VM_ZDeIo" rel="nofollow noreferrer">https://youtu.be/aw_VM_ZDeIo</a></p>
<p>He stated that we have to learn the integral identities. So, I started searching the whole internet looking for them, but I think I couldn't find them. The only thing that I found was something called the Magic Hexagon. I thought of reading about $\theta$ as it might mean something, but after all I learned that it is just a regular Greek letter used as a variable.</p>
|
the_candyman
| 51,370 |
<p>This is a step-by-step solution using only <strong>basic facts</strong>.</p>
<blockquote>
<p>$$\tan(\theta) = \frac{\sin\theta}{\cos\theta}$$</p>
</blockquote>
<p>Then:</p>
<p>$$\int\sin\theta (1 + \tan^2\theta)d\theta = \int\sin\theta \left(1 + \frac{\sin^2\theta}{\cos^2\theta}\right)d\theta = \int\sin\theta \left(\frac{\cos^2\theta+\sin^2\theta}{\cos^2\theta}\right)d\theta.$$</p>
<blockquote>
<p>$$\sin^2\theta + \cos^2\theta = 1$$</p>
</blockquote>
<p>Then:</p>
<p>$$\int\sin\theta (1 + \tan^2\theta)d\theta = \int\sin\theta\left(\frac{1}{\cos^2\theta}\right)d\theta = \int\sin\theta\cos^{-2}\theta d\theta.$$</p>
<blockquote>
<p>$$\int f'(x)\, f^n(x)\, dx = \frac{f^{n+1}(x)}{n+1} + c \qquad (n \neq -1).$$</p>
</blockquote>
<p>Then, taking $f(\theta)=\cos\theta$, $f'(\theta)=-\sin\theta$ and $n=-2$ (the two minus signs cancel):</p>
<p>$$\int\sin\theta (1 + \tan^2\theta)d\theta = \frac{1}{\cos\theta}+c.$$</p>
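<p>As a quick check, differentiating the result recovers the integrand:
$$\frac{d}{d\theta}\left(\frac{1}{\cos\theta}\right)=\frac{\sin\theta}{\cos^2\theta}=\sin\theta\,(1+\tan^2\theta).$$</p>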
|
1,318,552 |
<blockquote>
<p>Let $f$ be a twice differentiable function on $\left[0,1\right]$ satisfying $f\left(0\right)=f\left(1\right)=0$. Additionally $\left|f''\left(x\right)\right|\leq1$ in $\left(0,1\right)$. Prove that $$\left|f'\left(x\right)\right|\le\frac{1}{2},\quad\forall x\in\left[0,1\right]$$</p>
</blockquote>
<p>The hint we were given is to expand into a first order Taylor polynomial at the minimum of $f'$. So I tried doing that:</p>
<p>As $f'$ is differentiable, it is continuous, and attains a minimum on $\left[0,1\right]$. Thus we can denote the minimum point by $x_{0}$, and expanding into a Taylor polynomial of the first order around it gives us, for some $c$ between $x$ and $x_0$:</p>
<p>$$T_{x_{0}}\left(x\right)=f\left(x_{0}\right)+f'\left(x_{0}\right)\left(x-x_{0}\right)+\frac{f''\left(c\right)}{2}\left(x-x_{0}\right)^{2}$$</p>
<p>Now at $x=0$ we have
$$T_{x_{0}}\left(0\right)=f\left(x_{0}\right)-x_{0}f'\left(x_{0}\right)+x_{0}^{2}\frac{f''\left(c\right)}{2}=0$$</p>
<p>And at $x=1$ we have $$T_{x_{0}}\left(1\right)=f\left(x_{0}\right)+\left(1-x_{0}\right)f'\left(x_{0}\right)+\left(1-x_{0}\right)^{2}\frac{f''\left(c\right)}{2}=0$$</p>
<p>and I'm pretty much stuck here..</p>
<p>So I tried a different approach using the Mean Value Theorem directly: by Rolle's theorem I know the derivative is $0$ somewhere (as $f(0)=f(1)$), say at $x_0$, so by the Mean Value Theorem $\frac{f'(x)}{x-x_0} =f''(c)\leq1$ for some $c$, and $f'(x)<x-x_0$.</p>
<p>But using this approach as well I'm not sure how to proceed, as it gives me the desired property only in a $1/2$-neighborhood around $x_0$...</p>
<p>Any help?</p>
|
wdacda
| 124,843 |
<p>This is an equivalent statement (to recover the original problem, take $g = f'$: then $\int_0^1 g(x)\, dx = f(1) - f(0) = 0$ and $|g'| = |f''| \leq 1$):</p>
<p>Let $g:[0,1] \to \mathbb R$ be a continuous function on $[0,1]$. Assume that $g$ is differentiable on $(0,1)$,<br>
$$
\int_0^1 g(x) dx = 0 \qquad \text{and} \qquad \forall x \in (0,1) \quad \bigl| g'(x) \bigr| \leq 1. \tag{1}\label{1}
$$
Then $|g(x)| \leq 1/2$ for all $x \in [0,1]$.</p>
<p><b>Proof.</b> By the <a href="http://en.wikipedia.org/wiki/Mean_value_theorem" rel="nofollow">Mean value theorem</a>, for all $x,y \in [0,1]$ such that $x \neq y$ there exists $c \in (0,1)$ such that
$$
\frac{g(y) - g(x)}{y-x} = g'(c).
$$
By the second assumption in $\eqref{1}$, for all $x,y \in [0,1]$ such that $x \neq y\ $ we have
$$
\left| \frac{g(y) - g(x)}{y-x} \right| \leq 1.
$$
Consequently, for all $x,y \in [0,1]$ we have
$$
-|y-x| \leq g(y) - g(x) \leq |y-x|.
$$
Integrating the last inequality with respect to $x$ over $[0,1]$ and using the first assumption in $\eqref{1}$, we get
$$
-\int_0^1 |y-x| dx \leq g(y) \leq \int_0^1 |y-x| dx.
$$
Thus, for all $y \in [0,1]$ we have
$$
\bigl| g(y) \bigr| \leq \int_0^1\!\! |y-x| dx =\frac{1}{2}\bigl( y^2 + (1-y)^2 \bigr) = \Bigl(y-\frac{1}{2}\Bigr)^2 + \frac{1}{4} \leq \frac{1}{2}.
$$</p>
|
2,551,368 |
<p>If I have a liquid that has a specific volume of 26.9 inches cubed per pound, and that liquid sells in the form of a gallon weighing 15.2 lbs, it would be correct to say its total volume is 15.2*26.9, correct? So that would be 408.88 inches cubed. So if I had a volume of 1.157 inches cubed to fill with that liquid, I would be able to fill that volume 408.88/1.157 = 353.4 times. 353.4 times I would be able to fill that volume if I had that specific liquid; am I correct? Thank you in advance for any assistance.</p>
|
DanielWainfleet
| 254,665 |
<p>Only if $X=\{0\}.$ If $X\ne \{0\}$ let $p\in X$ and let $\overline {B(p,1)}$ be a closed unit ball. Let $0\ne q\in X.$ Let $q'=q/\|q\|.$ Let $n\in \Bbb N.$ Then $$F(n)= \{B(p+q'(4j+1)/4n\; , 1/4n) : j\in \{0,...,n-1\}\}$$ is a family of pair-wise disjoint sets (by the Triangle Inequality) with $n$ members. Members of $F(n)$ are open in $X$ and are subsets of $B(p,1),$ so if $C$ is dense in $B(p,1)$ then $C$ intersects each member of $F(n).$ So for every $n\in \Bbb N,$ there are at least $n$ members of $C.$</p>
<p>Remark: We could also take $F=\{B(p+4^{-n}q',4^{-n-1}): n\in \Bbb N\},$ an infinite pair-wise disjoint family of open balls that are subsets of $B(p,1)$ and observe that $C$ must intersect every member of $F.$</p>
|
1,412,091 |
<p>The typewriter sequence is an example of a sequence which converges to zero in measure but does not converge to zero a.e.</p>
<p>Could someone explain why it does not converge to zero a.e.?</p>
<blockquote>
<p><span class="math-container">$f_n(x) = \mathbb 1_{\left[\frac{n-2^k}{2^k}, \frac{n-2^k+1}{2^k}\right]} \text{, where } 2^k \leqslant n < 2^{k+1}.$</span></p>
</blockquote>
<p>Note: the <a href="http://terrytao.wordpress.com/2010/10/02/245a-notes-4-modes-of-convergence/" rel="nofollow noreferrer">typewriter sequence</a> (Example 7).</p>
|
RozaTh
| 386,136 |
<p>I drew the first 63 functions in the sequence to help me understand its convergences. It might help others as well to understand answers given above:</p>
<p><a href="https://i.stack.imgur.com/2D209.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2D209.png" alt="The typewriter sequence"></a></p>
<p>Unfortunately, here I can only attach the rasterized format. In case someone wants to reproduce it, here is the Latex code:</p>
<pre><code>\documentclass[9pt]{standalone}
\usepackage{bbm}
\usepackage{amsmath}
\usepackage{tikz,pgfplots}
\usetikzlibrary{arrows}
\newcommand{\nMAX}{63}
\newcommand{\xGrSamp}{8}
\begin{document}
\centering
\begin{tikzpicture}[font=\Large,shorten >=-2.5pt,shorten <=-2.5pt]
\begin{axis}[
axis x line*=bottom,
axis y line*=right,
axis z line*=left,
plot box ratio = 3 1000 2,
view={.3}{.2},
xmin=-0.2, xmax=1.25,
ymin=0.6, ymax=\nMAX+0.3,
zmin=0, zmax=1.0,
xtick={0,1/8,2/8,3/8,4/8,5/8,6/8,7/8,1},
xticklabels={$0$,$\frac{1}{2^3}$,$\frac{2}{2^3}$,$\frac{3}{2^3}$,$\frac{4}{2^3}$,$\frac{5}{2^3}$,$\frac{6}{2^3}$,$\frac{7}{2^3}$,$1$},
ytick={0,...,\nMAX},
ztick={0,...,1.0},
xlabel=$x$,
ylabel=$n$,
zlabel=$f_n(x)$,
x label style={at={(axis description cs:0.067,-0.001)},anchor=north},
y label style={at={(axis description cs:0.062,0.145)},anchor=south},
z label style={at={(axis description cs:-0.002,0.035)},anchor=south},
yscale=5,
xscale=5,
legend entries={$f_n(x)=1\,$,
$f_n(x)=0\,$},
legend style={rounded corners=3pt,at={(0.023,0.14)}},
legend style={nodes={scale=1.5, transform shape}},
legend plot pos=right,
]
\foreach \n in {1, ..., \nMAX}
{
\pgfmathsetmacro\k{floor(log2(\n+1e-1))}
\pgfmathsetmacro{\xm}{-0.2}
\pgfmathsetmacro\xM{1.2}
\pgfmathsetmacro\xa{(\n-(2^(\k)))/(2^(\k))}
\pgfmathsetmacro\xb{(\n-(2^(\k))+1)/(2^(\k))}
\edef\temp
{
\noexpand\coordinate (d1) at (axis cs:\xm,\n,0);
\noexpand\coordinate (d2) at (axis cs:\xa,\n,0);
\noexpand\coordinate (d3) at (axis cs:\xa,\n,1);
\noexpand\coordinate (d4) at (axis cs:\xb,\n,1);
\noexpand\coordinate (d5) at (axis cs:\xb,\n,0);
\noexpand\coordinate (d6) at (axis cs:\xM,\n,0);
\noexpand\coordinate (g0) at (axis cs:\xm,\n,1);
\noexpand\coordinate (g1) at (axis cs:\xM,\n,1);
}
\temp
\draw[blue,<-o] (d1)--(d2);
\draw[black,dashed,line width=0.04mm] (d2)--(d3);
\draw[red,*-*] (d3)--(d4);
\draw[black,dashed,line width=0.04mm] (d4)--(d5);
\draw[blue,o->] (d5)--(d6);
\draw[black,dashed,line width=0.04mm] (g0)--(g1);
}
\pgfplotsinvokeforeach{0, ..., \xGrSamp}
{
\draw[black,dashed,line width=0.06mm] (axis cs:#1/\xGrSamp,0,0)--(axis cs:#1/\xGrSamp,\nMAX,0);
}
\addlegendimage{no markers,red}
\addlegendimage{no markers,blue}
\end{axis}
\node[rectangle,draw,rounded corners=3pt,text width=7.7cm] at (29.3,1.7)
{\huge Typewriter Sequence: \\$f_n(x)={\mathbbm{1}}_{[\frac{n-2^k}{2^k},\frac{n-2^k+1}{2^k}]},$\\
$\forall\, k\geq 0\,\, \&\,\, 2^k\leq n<2^{k+1} $};
\end{tikzpicture}
\end{document}
</code></pre>
|
1,412,091 |
<p>The typewriter sequence is an example of a sequence which converges to zero in measure but does not converge to zero a.e.</p>
<p>Could someone explain why it does not converge to zero a.e.?</p>
<blockquote>
<p><span class="math-container">$f_n(x) = \mathbb 1_{\left[\frac{n-2^k}{2^k}, \frac{n-2^k+1}{2^k}\right]} \text{, where } 2^k \leqslant n < 2^{k+1}.$</span></p>
</blockquote>
<p>Note: the <a href="http://terrytao.wordpress.com/2010/10/02/245a-notes-4-modes-of-convergence/" rel="nofollow noreferrer">typewriter sequence</a> (Example 7).</p>
|
robjohn
| 13,854 |
<p>I wrote this sequence of functions as a counterexample for <a href="https://math.stackexchange.com/questions/3735925/how-to-prove-that-f-n-to-0">a question</a> that was deleted.</p>
<p><span class="math-container">$$
f_n(x)=\left[\frac km\le x\lt\frac{k+1}m\right]
$$</span>
where <span class="math-container">$[\dots]$</span> are <a href="https://en.wikipedia.org/wiki/Iverson_bracket" rel="noreferrer">Iverson brackets</a>, <span class="math-container">$m=\left\lfloor\frac{1+\sqrt{1+8n}}2\right\rfloor$</span> and <span class="math-container">$k=n-\frac{m(m-1)}2$</span>.</p>
<p>The Typewriter Sequence is a subsequence of this sequence (when <span class="math-container">$m$</span> is a power of <span class="math-container">$2$</span>). Like the Typewriter Sequence, this sequence does not tend to <span class="math-container">$0$</span> pointwise, but its <span class="math-container">$L^1$</span> norms vanish.</p>
<p><a href="https://i.stack.imgur.com/O1TKQ.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/UK6JA.gif" alt="enter image description here" /></a></p>
|
3,839,244 |
<p>Is <span class="math-container">$\{3\}$</span> a subset of <span class="math-container">$\{\{1\},\{1,2\},\{1,2,3\}\}$</span>?</p>
<p>If the set contained $\{3\}$ plainly and simply I would know, but does the element $\{1,2,3\}$ include $\{3\}$ in a way that would make it a subset?</p>
|
User203940
| 333,294 |
<p>Let <span class="math-container">$X$</span> be a set. We say <span class="math-container">$Y \subseteq X$</span> (<span class="math-container">$Y$</span> is a subset of <span class="math-container">$X$</span>) if, for all <span class="math-container">$x \in Y$</span>, we have <span class="math-container">$x \in X$</span>.</p>
<p>Examine the sets <span class="math-container">$Y = \{3\}$</span>, <span class="math-container">$X = \{\{1\},\{1,2\},\{1,2,3\}\}$</span>. Take <span class="math-container">$x = 3 \in Y$</span>. Is <span class="math-container">$3 \in X$</span>?</p>
<p>Trickier problem: If <span class="math-container">$X = \{\{1\},\{3\},\{1,2\},\{1,2,3\}\}$</span>, is <span class="math-container">$3 \in X$</span>?</p>
|
1,935,320 |
<p>If $a^2-b^2=2$ then what is the least possible value of: \begin{vmatrix} 1+a^2-b^2 & 2ab &-2b\\ 2ab & 1-a^2+b^2&2a\\2b&-2a&1-a^2-b^2 \end{vmatrix}</p>
<p>I tried to express the determinant as a product of two determinants but could not do so. Seeing no way out, I tried expanding it but that took too long and was difficult to evaluate. Please help me with this one, thanks.</p>
|
Ivan Neretin
| 269,518 |
<p>Why, the general answer is just as simple: consider a $120^\circ$ rotation around any axis.</p>
<p>In 2D, the possibilities are limited to Jacky Chong's answer, its transposition, and $I$.</p>
<p>In 3D, the general answer is $$\left(\begin{array}{c|c|c}
{3\over2}x^2-{1\over2} & {3\over2}xy+{\sqrt3\over2}z& {3\over2}xz-{\sqrt3\over2}y\\ \hline
{3\over2}xy-{\sqrt3\over2}z& {3\over2}y^2-{1\over2} & {3\over2}yz+{\sqrt3\over2}x\\ \hline
{3\over2}xz+{\sqrt3\over2}y& {3\over2}yz-{\sqrt3\over2}x & {3\over2}z^2-{1\over2} \\
\end{array}\right)$$
where $x,y,z$ are any numbers such that $x^2+y^2+z^2=1$. Note that when $(x,y,z)=({1\over\sqrt3},{1\over\sqrt3},{1\over\sqrt3})$, this produces Bolton Bailey's answer.</p>
<p>In 4D and beyond, things get a bit hairy.</p>
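<p>If you want a quick numerical sanity check of the 3D matrix, here is a small sketch in Mathematica (the name <code>rot</code> is just a label introduced for this check): for any unit vector the matrix is orthogonal and its cube is the identity.</p>
<pre><code>(* the matrix above, as a function of a unit vector {x, y, z} *)
rot[{x_, y_, z_}] := {
  {3/2 x^2 - 1/2,         3/2 x y + Sqrt[3]/2 z,  3/2 x z - Sqrt[3]/2 y},
  {3/2 x y - Sqrt[3]/2 z, 3/2 y^2 - 1/2,          3/2 y z + Sqrt[3]/2 x},
  {3/2 x z + Sqrt[3]/2 y, 3/2 y z - Sqrt[3]/2 x,  3/2 z^2 - 1/2}};

n = Normalize[{1., 2., 2.}];       (* any unit vector will do *)
MatrixPower[rot[n], 3] // Chop     (* the 3x3 identity matrix *)
rot[n].Transpose[rot[n]] // Chop   (* also the identity, so rot[n] is orthogonal *)
</code></pre>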
|
765,738 |
<p>I am trying to prove the following inequality:</p>
<p>$$(\sqrt{a} - \sqrt{b})^2 \leq \frac{1}{4}(a-b)(\ln(a)-\ln(b))$$</p>
<p>for all $a>0, b>0$.</p>
<p>Does anyone know how to prove it?</p>
<p>Thanks a lot in advance!</p>
|
medicu
| 65,848 |
<p>This inequality is another form of the following well-known inequalities:</p>
<p>For $0<y<x$
$$ \sqrt{xy}<\frac{x-y}{\ln x-\ln y} < \frac{x+y}{2}. \tag{1}$$
For $a=b$ the claimed inequality is an equality, so assume $a>b$. The inequality in the statement then transforms as follows:
$$(\sqrt{a} - \sqrt{b})^2 < \frac{1}{4}(a-b)(\ln a-\ln b)\iff$$
$$(\sqrt{a} - \sqrt{b})^2 < \frac{1}{4}(\sqrt{a}-\sqrt{b})(\sqrt{a}+\sqrt{b})(\ln a-\ln b)\iff$$
$$\sqrt{a} - \sqrt{b} < \frac{1}{2}(\sqrt{a}+\sqrt{b})(\ln \sqrt{a}-\ln \sqrt{b})\iff$$
$$\frac{\sqrt{a} - \sqrt{b} }{\ln \sqrt{a}-\ln \sqrt{b}}< \frac{\sqrt{a}+\sqrt{b}}{2},$$
where in the third step we divided by $\sqrt{a}-\sqrt{b}>0$ and used $\ln a-\ln b=2(\ln\sqrt{a}-\ln\sqrt{b})$. The last inequality is obtained from (1) for $x=\sqrt{a},\ y=\sqrt{b}.$</p>
|
599,656 |
<p>If the objects of a category are algebraic structures in their own right, this often places additional structure on the homsets. Is there somewhere I can learn more about this general idea?</p>
<hr>
<p><strong>Example 1.</strong> Let $\cal M$ denote the category of magmas and <strong>functions</strong>, not necessarily structure-preserving. Then given objects $X$ and $Y$ of $\cal M$, the hom"set" $\mathrm{Hom}(X,Y)$ is probably best viewed as a <em>magma</em> as opposed to a set, with the pointwise operation inherited from $Y$. Furthermore we have a right (but not left) distributivity law. In particular, given objects $X, Y$ and $Z$ and arrows $f:X \rightarrow Y$ and $g,g' : Y \rightarrow Z,$</p>
<p>$$(g+g')\circ f = (g \circ f) + (g' \circ f).$$</p>
<p><strong>Example 2.</strong> Now let $\cal N$ denote the category of <a href="http://en.wikipedia.org/wiki/Medial_magma" rel="nofollow">medial</a> magmas and magma <strong>homomorphisms</strong>. Then given objects $X$ and $Y$ of $\cal N$, the hom"set" $\mathrm{Hom}(X,Y)$ can be viewed as a medial magma in its own right. Furthermore, we have both left and right distributivity laws. In the sense that given objects $X, Y$ and $Z$ and arrows $f,f':X \rightarrow Y$ and $g,g' : Y \rightarrow Z,$</p>
<p>$$(g+g')\circ f = (g \circ f) + (g' \circ f)$$</p>
<p>$$g \circ (f + f') = (g \circ f) + (g \circ f').$$</p>
|
Flowers
| 93,842 |
<p>I cannot comment, but you are obviously mistaken in your calculation. Your quadratic formula that is. Here, <a href="http://www.wolframalpha.com/input/?i=x%5E2%2B104x%E2%88%92896%3D0" rel="nofollow">WA does not make mistakes</a></p>
<p>EDIT:
$$\frac{-104\pm\sqrt{104^2+4\cdot 896}}{2}\\
=\frac{-104\pm\sqrt{14400}}{2}=-52\pm60$$</p>
|
1,593,007 |
<p>Maybe this is a stupid question, but I have the following expression:</p>
<p>$ 10^{-18}(e^{50,9702078⋅0,75}) = 10^{-18}(4⋅10^{16}) $</p>
<p>How would I go about simplifying the big exponent on the left to what's on the right? With the use of a calculator.</p>
<p>Thanks a lot! </p>
|
Yuri Negometyanov
| 297,350 |
<p>$$yy' = x+1$$
$$y^2 = (x+1)^2+C.$$
$$y = \pm\sqrt{(x+1)^2+C},\quad(x+1)^2 + C \geq 0.$$
Or:
$$y = \pm\sqrt{(x+1)^2+C},$$
$$
x\in
\begin{cases}
(-\infty, +\infty),\text{ if }C \geq 0\\[3pt]
\left(-\infty, -\sqrt{-C}-1\right)\, \cup (\sqrt{-C}-1, +\infty),\text{ if }C <0.
\end{cases}
$$</p>
|
1,865,868 |
<p>The problem goes like this: "In an urn there are $A$ white balls and $B$ black balls. We take out 5 balls without returning them (we assume $A,B\gt4$). What is the probability that the 5th ball removed is white, given that the 3rd was black?"</p>
<p>What I did is build a conditional probability tree. As it turned out, it gets really ugly and the parameters won't reduce, so the equation is huge; therefore it's probably not the correct path to a solution.</p>
<p>I've got this intuition that the probability of the <strong>p</strong>th ball removal being black while <strong>q</strong>th ball removal being white is the same as the probability of the first ball being black, and the second ball being white - $\frac{A}{A+B-1}$ but this is only intuition and I can't explain it.</p>
<p>I would appreciate your advice.</p>
|
Claude Leibovici
| 82,404 |
<p>You could even solve the problem using basic calculus since minimizing the distance is the same as minimizing the square of the distance. </p>
<p>The constraints being $$x+y+z=2, \qquad 12x+3y+3z=12,$$ take advantage of their linearity and solve these two equations for $x$ and $z$ as functions of $y$. You will get $x=\frac 23$ and $z=\frac 43-y$.</p>
<p>So $$d^2=x^2+y^2+z^2=\frac{4}{9}+y^2+\left(\frac{4}{3}-y\right)^2=2 y^2-\frac{8 y}{3}+\frac{20}{9}.$$ The derivative $(d^2)'=4y-\frac 83$ vanishes at $y=\frac 23$ and, using $z=\frac 43-y$, $z=\frac 23$.</p>
<p>Then the solution is $x=y=z=\frac 23$ and $d^2=\frac 43$.</p>
<p>Notice that the second derivative test $(d^2)''=4>0$ confirms that this is a minimum.</p>
|
1,993,034 |
<p>If I have, say, $x_4$ free, what value goes in the $x_4$ position of the parametric form? Is it $1$ or $0$, or can it be any value since it's free?</p>
|
rosaqq
| 384,490 |
<p>You just say $x_4 = t$, $t\in \Bbb R$</p>
<p>Example:
$$\mathbf v = [x_1, x_2, x_3, 0] + t[0, 0, 0, 1],\quad t\in \Bbb R$$
$x_1$, $x_2$, and $x_3$ are set values while $t$ is free.</p>
|
102,402 |
<p>I wanted to make a test bank of graphs of linear equations for my algebra classes. I want the $y$-intercept of each graph to be an integer no less than $-10$ and no greater than $10$. Generally, you want these graphs to be small, so I've decided on a $20 \times 20$ grid (10 units from the origin). Additionally, I would like students to be able to see a second integral point on this grid, so they can find the slope. How many possible graphs would be in this test bank? How did you get the solution?</p>
|
2'5 9'2
| 11,123 |
<p>If you extended by $11$ in all directions (instead of $10$), then your grid is a picture of $\mathbb{F}_{23}^2$. (It's nicer to work with the field $\mathbb{F}_{23}$ than the non-field $\mathbb{F}_{21}$.) Your question translates to counting how many affine lines are there in this vector space. Lines in this space wrap around the edges. Possible slopes are $0$ through $22$. After wrapping, every line encounters $23$ integral points on your grid.</p>
<hr>
<p>Edit: As Gerry points out in the comments, this whole argument doesn't work. It's in this next paragraph that I say something that can't be backed up.</p>
<hr>
<p>First, if $b=0$, every (non $y$-axis) point in the grid is on a unique line. So whatever line you imagine for $\mathbb{R}^2$ translates to a unique line in $\mathbb{F}_{23}^2$ by focusing on the integral point with the smallest positive $x$-value.</p>
<p>We have $23$ lines through the origin in $\mathbb{F}_{23}^2$, counting the $23$ possible slopes. Coupled with translation by $23$ possible $y$-intercepts, there are $23^2$ such lines. If you like, add vertical lines, of which there are $23$.</p>
<p>So either $23^2$ or $23\cdot24$, depending on what you are after. (This argument does not work exactly the same for $\mathbb{F}_{21}$, since lines with different slopes can cross at more than one point.)</p>
|
1,910,085 |
<p>For all integers $n \ge 0$, prove that the value $4^n + 1$ is not divisible by 3.</p>
<p>I need to use Proof by Induction to solve this problem. The base case is obviously 0, so I solved $4^0 + 1 = 2$. 2 is not divisible by 3.</p>
<p>I just need help proving the inductive step. I was trying to use proof by contradiction by saying that $4^n + 1 = 4m - 1$ for some integer $m$ and then disproving it. But I'd rather use proof by induction to solve this question. Thanks so much.</p>
|
parsiad
| 64,601 |
<p><strong>Hint:</strong> Suppose $3$ does not divide $4^{n}+1$ for some $n\in\{0,1,\ldots\}$.
If $3$ divides $4^{n+1}+1$,
$$
0\equiv4^{n+1}+1\equiv4^{n}4+1\equiv \text{ } ?\text{ (mod }3\text{)}.
$$
(I have left out a step which you should fill in to arrive at a contradiction)</p>
|
1,910,085 |
<p>For all integers $n \ge 0$, prove that the value $4^n + 1$ is not divisible by 3.</p>
<p>I need to use Proof by Induction to solve this problem. The base case is obviously 0, so I solved $4^0 + 1 = 2$. 2 is not divisible by 3.</p>
<p>I just need help proving the inductive step. I was trying to use proof by contradiction by saying that $4^n + 1 = 4m - 1$ for some integer $m$ and then disproving it. But I'd rather use proof by induction to solve this question. Thanks so much.</p>
|
Community
| -1 |
<p>Induction step:</p>
<p>$$4^{n+1}+1=3\cdot 4^n+4^n+1\equiv4^n+1\mod 3$$</p>
<hr>
<p>By the way, by induction,</p>
<p>$$4^n\equiv1\mod3,$$
$$4^n+1\equiv2\mod3,$$
$$4^n+2\equiv0\mod3.$$</p>
|
1,910,085 |
<p>For all integers $n \ge 0$, prove that the value $4^n + 1$ is not divisible by 3.</p>
<p>I need to use Proof by Induction to solve this problem. The base case is obviously 0, so I solved $4^0 + 1 = 2$. 2 is not divisible by 3.</p>
<p>I just need help proving the inductive step. I was trying to use proof by contradiction by saying that $4^n + 1 = 4m - 1$ for some integer $m$ and then disproving it. But I'd rather use proof by induction to solve this question. Thanks so much.</p>
|
Bill Dubuque
| 242 |
<p><strong>Hint</strong> $\ \ f_n =\, 4^n\!+1\ \Rightarrow\,\ f_{n+1}\!-f_n =\! \overbrace{\color{#c00}3\cdot 4^n}^{\large 4^{\Large n+1}-4^{\Large n} }\! $ so $\,\ \color{#c00}3\mid f_{n+1}\!\iff \color{#c00}3\mid f_n$</p>
<p><strong>Remark</strong> $ $ Said $\,{\rm mod}\ 3\!:\ f_{n+1}\equiv f_n\ $ so by induction $\,f_n\equiv f_0 \equiv 2,\ $ so $\ f_n\not\equiv 0$ </p>
<p>But using congruences it is clearer to prove by induction $\,4\equiv 1\,\Rightarrow\, 4^n\equiv 1^n\equiv 1\,$ (special case of <a href="https://math.stackexchange.com/a/879262/242">Congruence Power Rule)</a>. $ $ Then $\,f_n = 4^n+1\equiv 1+1\equiv 2\,$ by the Congruence Sum Rule.</p>
|
265,509 |
<p>Given a (possibly disjoint) region, defined by a discrete set of points, how can I use <code>ListContourPlot[]</code> together with <code>Mesh</code> to highlight a specific area of the plot? For instance, how can I mesh the region where the points are smaller than a certain value?</p>
<p>Here I construct a minimal example where I try to highlight the area where the values of a discrete sample of the function <span class="math-container">$f(x,y) = e^{x^2 - y^2}$</span> are smaller than one.</p>
<pre><code>data = Table[Exp[x^2 - y^2], {x, -1, 1, .01}, {y, -1, 1, .01}];
ListContourPlot[
data
, Contours -> {1.0}
, ContourStyle -> Transparent
, Mesh -> 25
, MeshFunctions -> {#1 + #2 &}
, MeshStyle -> Thick
]
</code></pre>
<p>I also tried using <code>MeshFunctions -> {Piecewise[{{#1 + #2 &, #3 <= 1 &}, {None, #3 > 0 &}}]}</code>, but I had no luck.</p>
<p>I am aware that this can be done for symbolic functions through <code>RegionPlot[]</code>; however, I am not sure how to extend this to numerical data.</p>
<p><a href="https://i.stack.imgur.com/RF0im.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RF0im.png" alt="Mesh in ListContourPlot" /></a></p>
|
cvgmt
| 72,111 |
<pre><code>data = Table[Exp[x^2 - y^2], {x, -1, 1, .01}, {y, -1, 1, .01}];
ListContourPlot[data, Contours -> {1.0}, ContourStyle -> Transparent,
ContourShading -> {Directive[Orange, HatchFilling[-Pi/4, 1, 10]],
Directive[Cyan, HatchFilling[Pi/4, 1, 8]]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/p4LTg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/p4LTg.png" alt="enter image description here" /></a></p>
|
686,848 |
<p>This is probably an elementary question about fields, but I think it is a little tricky. </p>
<blockquote>
<p>Can we make the integers $\mathbb{Z}$ into a field?</p>
</blockquote>
<p>Let me be more precise. Is it possible to make $\mathbb{Z}$ into a field so that the underlying additive group structure is the usual addition in $\mathbb{Z}$? In other words, we just need to define "multiplication" on $\mathbb{Z}$ that makes it into a field. But the question is: How to define such a multiplication?</p>
<p>The ordinary multiplication (that we learn in elementary school) doesn't work as $2$ does not have a multiplicative inverse. </p>
|
Community
| -1 |
<p><strong>Theorem:</strong> If $F$ is a field and $x \in F$ with $x \neq 0$, then either:</p>
<ul>
<li>there exists $y \in F$ such that $y + y = x$, or</li>
<li>$x + x = 0$.</li>
</ul>
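<p>Applied to the question: with the usual addition on $\mathbb{Z}$, take $x = 1$. There is no integer $y$ with $y + y = 1$, and $1 + 1 \neq 0$, so by the theorem no choice of multiplication can turn $(\mathbb{Z},+)$ into a field.</p>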
|
3,614,875 |
<p><strong>Prop:</strong> Every sequence has a monotone subsequence.</p>
<p><strong>Pf:</strong> Suppose <span class="math-container">$\{a_n\}_{n\in \mathbb{N}}$</span> is a sequence. Choose <span class="math-container">$a_{n_1} \in \{a_1,a_2,...\}$</span>. Further choose smallest possible <span class="math-container">$a_{n_2} \in \{a_1,a_2,...\}$</span> such that <span class="math-container">$a_{n_1}\leq a_{n_2}$</span>. Denote the subsequence of <span class="math-container">$\{a_n\}$</span> starting with <span class="math-container">$a_{n_2}$</span> by <span class="math-container">$\{a^{(2)}_{n}\}$</span>. Choose smallest possible <span class="math-container">$a_{n_3}\in \{a^{(2)}_n\}$</span> such that <span class="math-container">$a_{n_2}\leq a_{n_3}$</span>. Denote the subsequence of <span class="math-container">$\{a^{(2)}_n\}$</span> starting with <span class="math-container">$a_{n_3}$</span> by <span class="math-container">$\{a^{(3)}_{n}\}$</span>. Choose smallest possible <span class="math-container">$a_{n_4} \in \{a^{(3)}_n\}$</span> such that <span class="math-container">$a_{n_3}\leq a_{n_4}$</span>. Denote the subsequence of <span class="math-container">$\{a^{(3)}_n\}$</span> starting with <span class="math-container">$a_{n_4}$</span> by <span class="math-container">$\{a^{(4)}_{n}\}$</span>. Choose smallest possible <span class="math-container">$a_{n_5} \in \{a^{(3)}_n\}$</span> such that <span class="math-container">$a_{n_4}\leq a_{n_5}$</span>. Then, <span class="math-container">$a_{n_1}\leq a_{n_2}\leq ... \leq a_{k}\leq a_{k+1}$</span>. Conversely, if there exists an index <span class="math-container">$n_2$</span> such that <span class="math-container">$a_{n_1}\geq a_{n_2}$</span>, denote the subsequence of <span class="math-container">$\{a_n\}$</span> starting with <span class="math-container">$a_{n_2}$</span> by <span class="math-container">$\{b^{(2)}_n\}$</span>. Choose largest possible <span class="math-container">$a_{n_3} \in \{b^{(2)}_n\}$</span> such that <span class="math-container">$a_{n_2}\geq a_{n_3}$</span>. Repeating the same process, we inductively conclude that for all <span class="math-container">$k \in \mathbb{N}$</span>, <span class="math-container">$a_k \geq a_{k+1}$</span>. Thus, <span class="math-container">$a_1\geq a_2 \geq ... \geq a_k \geq a_{k+1}$</span>. </p>
<p>Does my proof look correct? I tried to write my argument as understandably as possible.</p>
|
Brian M. Scott
| 12,042 |
<p>Say that a term <span class="math-container">$a_n$</span> is <em>dominant</em> if <span class="math-container">$a_n\ge a_m$</span> whenever <span class="math-container">$n\le m$</span>.</p>
<ul>
<li>Show that if there are infinitely many dominant terms, they form an infinite non-increasing subsequence.</li>
</ul>
<p>Suppose, then, that there are only finitely many dominant terms, say <span class="math-container">$\{a_{k_0},\ldots,a_{k_\ell}\}$</span>. Fix <span class="math-container">$n_0>k_\ell$</span>; then for each <span class="math-container">$n\ge n_0$</span> there is an <span class="math-container">$m>n$</span> such that <span class="math-container">$a_n<a_m$</span>. (Why?) Now use your argument to construct an infinite strictly increasing subsequence.</p>
|
2,698,903 |
<p>So this has caused some confusion. In one exercise one was asked to prove that</p>
<p>$$\lim_{k \rightarrow \infty} \int_A \cos(kt)dt=0$$</p>
<p>where $A \subset [-\pi, \pi]$ is a measurable set.</p>
<p>My initial idea was to take any $a,b \in A$ and then show that:</p>
<p>$$\lim_{k \rightarrow \infty} \int_a^b cos(kt)dt= \lim_{k \rightarrow \infty} \frac{\sin(bk)-\sin(ak)}{k} =0$$</p>
<p>But my instructor said that this wouldn't work out because the lengths of the partitions on $A$ would make my computation impossible. I'm not sure about this.</p>
|
Giuseppe Negro
| 8,157 |
<p>I think that your approach is a good start, but you have to extend your result to general measurable sets. You can do that by approximation. </p>
<p>Let $\mathbb T=[-\pi, \pi)$. </p>
<p><strong>Step 1</strong>. Show that for any open set $O\subset \mathbb T$, it holds that $\lim_{k\to \infty} \int_O \cos(kt)\, dt =0$. To do this, use the fact that every open set in $\mathbb T$ is a countable disjoint union of intervals. Since $\mathbb T$ has a finite total measure, most of $O$ must be concentrated on a finite disjoint union of intervals. More precisely, for all $N\in\mathbb N$, $O$ can be decomposed as $$O=\bigcup_{j=1}^N (a_j, b_j) + R_N,\ \text{where }|R_N|\le\tfrac 1 N.$$
Now use the computation you have already done.</p>
<p><strong>Step 2</strong>. Fix $\epsilon>0$. We know from measure theory that every measurable set $M\subset \mathbb T$ is contained in an open set $O_\epsilon$ in such a way that $|O_\epsilon \setminus M|\le \epsilon$. (Absolute value of a set denotes Lebesgue measure). So
$$
\left| \int_M \cos(kt)\, dt\right|\le \left| \int_{O_\epsilon\setminus M} \cos(kt)\, dt \right| + \left| \int_{O_\epsilon} \cos(kt)\, dt\right| \le \epsilon + \eta_k, $$
where $\eta_k\to 0$ as $k\to \infty$.</p>
|
1,545,045 |
<p>Is it possible to cut a pentagon into two equal pentagons?</p>
<p>Pentagons are not necessarily convex, as otherwise it would be trivially impossible.</p>
<p>I was given this problem at a contest but cannot figure the solution out, can somebody help?</p>
<p>Edit: Buy "cut" I mean "describe as a union of two pentagons which intersect only on the boudaries".</p>
|
N. F. Taussig
| 173,070 |
<p>Graham Kemp and user164385 have provided you with nice solutions. Here is an alternative approach.</p>
<p>Line up the $28$ workers in some order, whether alphabetically, by seniority, or some other criterion. The first person in line has $27$ ways of selecting a work partner. The next person in line who is not already on a team has $25$ ways of selecting a work partner. Continue. The number of teams that can be selected is
$$27 \cdot 25 \cdot 23 \cdot 21 \cdot 19 \cdot 17 \cdot 15 \cdot 13 \cdot 11 \cdot 9 \cdot 7 \cdot 5 \cdot 3 \cdot 1 = \prod_{k = 1}^{14} (2k - 1) = 27!!$$
Observe that if we multiply the numerator and denominator of this answer by $2^{14}14!$, we obtain
$$\frac{28!}{2^{14}14!}$$</p>
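<p>As a quick sanity check with $4$ workers: the count is $3 \cdot 1 = 3 = \frac{4!}{2^{2}\,2!}$, matching the three ways to split $\{1,2,3,4\}$ into two pairs, namely $\{1,2\}\{3,4\}$, $\{1,3\}\{2,4\}$ and $\{1,4\}\{2,3\}$.</p>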
|
109,973 |
<p>I am stuck on this problem of financial mathematics, at this equation:
$$\frac{(1+x)^{8}-1}{x}=11$$</p>
<p>I'm stuck at this eighth-degree equation:
$$x^{8}+8x^{7}+28x^{6}+56x^{5}+70x^{4}+56x^{3}+28x^{2}-3x=0$$</p>
<p>But I cannot continue past this. This quickly led me to look for a general way of solving equations of any degree, but I couldn't find anything serious on the internet.</p>
<p>Do you know any simple methods to solve equations of any degree?</p>
|
Nick Alger
| 3,060 |
<p>As has been pointed out above, from Galois theory in general there is no algebraic solution to polynomials of degree 5+. </p>
<p>But what about numerical solvers? Consider the polynomial
$$p(x)=a+bx+cx^2+dx^3+x^4$$
and its companion matrix,
$$A(p)=\left[\begin{matrix}0 & 0 & 0 & -a \\ 1 & 0 & 0 & -b \\ 0 & 1 & 0 & -c \\ 0 & 0 & 1 & -d \end{matrix}\right]$$</p>
<p>If you stare at it long enough, hopefully you can convince yourself that the eigenvalues of the companion matrix are exactly the zeros of the original polynomial. (hint, what is $\det(A-xI)$?)</p>
<p>So finding the zeros of a polynomial is equivalent to solving an eigenvalue problem, which is well studied numerically.</p>
<p>If you want to find a single one of the roots, consider using some form of <a href="http://en.wikipedia.org/wiki/Inverse_iteration">inverse iteration</a>/<a href="http://en.wikipedia.org/wiki/Rayleigh_quotient_iteration">Rayleigh quotient iteration</a>. If you want to find all the roots, consider using some variant of the <a href="http://en.wikipedia.org/wiki/QR_algorithm">QR algorithm</a> (not to be confused with QR decomposition, which is used inside the QR algorithm).</p>
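<p>For illustration, here is a small numerical sketch of the companion-matrix idea (written in Mathematica; the example quartic and the variable names are chosen only for this illustration):</p>
<pre><code>(* p(x) = 9 + 0 x - 10 x^2 + 0 x^3 + x^4 = (x^2 - 1)(x^2 - 9), roots ±1, ±3 *)
{a, b, c, d} = {9, 0, -10, 0};
A = {{0, 0, 0, -a},
     {1, 0, 0, -b},
     {0, 1, 0, -c},
     {0, 0, 1, -d}};
Eigenvalues[A]                             (* the roots of p: ±3 and ±1, in some order *)
CharacteristicPolynomial[A, x] // Expand   (* 9 - 10 x^2 + x^4 *)
</code></pre>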
|
1,668,487 |
<p>$$2^{-1} \equiv 6\mod{11}$$</p>
<p>Sorry for the very strange question. I want to understand which algorithm is used to compute this expression. I am similarly interested in why the following expression is equal to two:</p>
<p>$$6^{-1} \equiv 2\mod11$$</p>
|
Clément Guérin
| 224,918 |
<p>You have $2\times 6=12$ which is $1$ mod $11$. Hence, we see that $2\times 6=1$ mod $11$.</p>
<p>Now the set of congruence numbers modulo $11$ (often noted $\mathbb{Z}/11$) is a ring with addition and multiplication coming from $\mathbb{Z}$. </p>
<p>Saying that $6^{-1}\text{ mod }11=2$ is saying that $6$ mod $11$ is invertible as an element of the ring $(\mathbb{Z}/11,+,\times)$ and that its inverse is $2$ mod $11$. </p>
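<p>As for the algorithm behind such computations in general: one standard method is the extended Euclidean algorithm. For example, $11 = 5\cdot 2 + 1$ gives $1 = 11 - 5\cdot 2$, so $2\cdot(-5)\equiv 1 \pmod{11}$ and therefore $2^{-1}\equiv -5\equiv 6 \pmod{11}$. (For a prime modulus $p$ one can also use Fermat's little theorem: $a^{-1}\equiv a^{p-2} \pmod p$, e.g. $2^{9}=512\equiv 6 \pmod{11}$.)</p>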
|
847,672 |
<p>I repeat the Peano Axioms:</p>
<ol>
<li><p>Zero is a number.</p></li>
<li><p>If a is a number, the successor of a is a number.</p></li>
<li><p>zero is not the successor of a number.</p></li>
<li><p>Two numbers of which the successors are equal are themselves equal.</p></li>
<li><p>If a set S of numbers contains zero and also the successor of every number in S, then every number is in S.</p></li>
</ol>
<p>Suppose we have two isomorphic "copies" of the natural numbers $\mathbb{N}':=\{0',1',2',...\}$ and $\mathbb{N}'':=\{0'',1'',2'',...\}$.
Then the set $NUMBERS:=\mathbb{N}'\cup \mathbb{N}''$, with "Zero"$:=0'$ and the "natural" successor for each element in either of the two sets, seems to satisfy the axioms.</p>
<p>Yes, P5 is very strange now, because it says that when I start with a set which I know to contain $0'$ and also the successor of every number in it, that set automatically contains $0''$, which is not the successor of any number.</p>
<p>If this way of reasoning is allowed, we could also use a number of copies of $\mathbb{N}$ indexed by a continuous index, so there would be two <em>non-isomorphic</em> Peano sets.</p>
<p>Because this sounds very strange to me, it's possible that there is a problem in my argument. What do you think?</p>
|
Thomas Andrews
| 7,933 |
<p>In logic, $0$ is a constant, not a property. That is, we can say "$x=0$" rather than "$x$ is a zero."</p>
<p>Even in second order logic, the statement of induction is $0\in S$. Not, "for all zeros $z, z\in S$."</p>
<p>You can, of course, interpret lots of language in lots of different ways. But mathematics uses "$0$ is a natural number" in a very specific way, and if you want to interpret that phrase differently, you are free to do so, but you are likely to confuse mathematicians and fail to communicate with them unless you are very explicit.</p>
|
749,035 |
<p>Call a square-free number a 3-prime if it is the product of three primes. Similarly for 2-primes, 4-primes, 5-primes, etc. Are there two consecutive 3-primes with no 2-prime between them? Are there infinitely many?</p>
|
Community
| -1 |
<p>Here's a list to get you started: $$[[230, 231], [255, 258], [285, 286], [429, 430], [434, 435], [609, 610], [645, 646], [741, 742], [805, 806], [902, 903], [969, 970], [986, 987], [1001, 1002], [1022, 1023], [1065, 1066], [1085, 1086], [1105, 1106], [1130, 1131], [1221, 1222], [1245, 1246], [1265, 1266], [1309, 1310], [1310, 1311], [1334, 1335], [1406, 1407], [1434, 1435], [1442, 1443], [1462, 1463], [1490, 1491], [1505, 1506], [1533, 1534], [1581, 1582], [1595, 1598], [1598, 1599], [1605, 1606], [1614, 1615], [1634, 1635], [1729, 1730], [1742, 1743], [1833, 1834], [1885, 1886], [1886, 1887], [1946, 1947], [2013, 2014], [2014, 2015], [2054, 2055], [2085, 2086], [2093, 2094], [2109, 2110], [2134, 2135]]$$</p>
<p>"Are there infinitely many?" Something around that...</p>
<p>Please note that I'm only posting this as an "answer" because it's too long for the comments - I don't have an actual proof that there are infinitely many but maybe this list will inspire you :)</p>
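<p>In case someone wants to reproduce or extend the list, here is a short sketch of the search it suggests (in Mathematica; <code>threePrimeQ</code> and <code>twoPrimeQ</code> are helper names introduced only for this sketch): take the square-free numbers with exactly three prime factors in increasing order, and keep the consecutive pairs with no square-free semiprime strictly between them.</p>
<pre><code>threePrimeQ[n_] := SquareFreeQ[n] && PrimeOmega[n] == 3;
twoPrimeQ[n_]   := SquareFreeQ[n] && PrimeOmega[n] == 2;

threes = Select[Range[2200], threePrimeQ];
Select[Partition[threes, 2, 1],
  NoneTrue[Range[#[[1]] + 1, #[[2]] - 1], twoPrimeQ] &]
(* {{230, 231}, {255, 258}, {285, 286}, ...} *)
</code></pre>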
|
825,848 |
<p>I'm trying to show that the minimal polynomial of a linear transformation $T:V \to V$ over some field $k$ has the same irreducible factors as the characteristic polynomial of $T$. So if $m = {f_1}^{m_1} ... {f_n}^{m_n}$ then $\chi = {f_1}^{d_1} ... {f_n}^{d_n}$ with $f_i$ irreducible and $m_i \le d_i$.</p>
<p>Now I've managed to prove this using the primary decomposition theorem and then restricting $T$ to $ker({f_i}^{m_i})$ and then using the fact that the minimal polynomial must divide the characteristic polynomial (Cayley-Hamilton) and then the irreducibility of $f_i$ gives us the result.</p>
<p>However I would like to be able to prove this directly using facts about polynomials/fields without relying on the primary decomposition theorem for vector spaces. Is this fact about polynomials true in general?</p>
<p>We know that $m$ divides $\chi$ and so certainly $\chi = {f_1}^{m_1} ... {f_n}^{m_n} \times g$ but then how do we show that $g$ must have only $f_i$ as it's factors? I'm guessing I need to use the fact that they share the same roots. And I'm also guessing that it depends on $k$, i.e. if $k$ is algebraically closed then it is easy because the polynomials split completely into linear factors.</p>
<p>Help is much appreciated,
Thanks</p>
|
Marc van Leeuwen
| 18,880 |
<p>One can reason using only some field theory, rather than linear algebra, if one prefers. I will assume the Cayley-Hamilton theorem, and the fact that all eigenvalues are roots of the minimal polynomial$~\mu$, which holds because an eigenvector for$~\lambda$ is killed by $P[T]$ (if and) only if $P$ is a multiple of $X-\lambda$.</p>
<p>First observe that the result is true when $\chi$ splits over $k$ (and hence so does$~\mu$, which divides$~\chi$ by the Cayley-Hamilton theorem), since the factors $X-\lambda$ of either polynomial are precisely those with $\lambda$ an eigenvalue. This in particular takes care of the case where $k$ is algebraically closed; the remainder deals with the case where $\chi$ does not split into linear factors in $k[X]$.</p>
<p>If $K/k$ is a field extension, one can extend scalars to obtain $\def\ext{\otimes_kK}T\ext: V\ext\to V\ext$. Since one gets identical matrices$~A$ for $T$ and for $T\ext$, using some $k$-basis of $V$ and the corresponding $K$-basis of $V\ext$, they have the same characteristic polynomials. They have the same minimal polynomials as well, since the minimal polynomial can be found from the unique solution of the linear system $x_0I+X_1A+\cdots+x_{d-1}A^{d-1}=A^d$ (one equation for each matrix coefficient) for the smallest $d$ for which this system has a solution at all; solving a linear system with coefficients in$~k$ over a larger field$~K$ will not change the existence of a solution, or the solution in case it is unique.</p>
<p>So over a sufficiently large field$~k$ (a splitting field of$~\chi$ will do), the polynomials $\mu$ and $\chi$ split, and give rise to the same <em>set</em> of linear factors $X-\lambda$ (though some may occur with larger multiplicity for$~\chi$ than for$~\mu$). In order to conclude that they also have the same set of irreducible factors in $k[X]$, I propose two arguments. One is that the presence of an irreducible factor$~f$ in $k[X]$ can be read off from the presence of any one of the factors into which $f$ splits in $K[X]$, because no two (monic) irreducible polynomials over$~k$ share a root in$~K$: if $\alpha\in K$ is a root of a monic irreducible $f\in k[X]$, then the minimal polynomial of$~\alpha$ over$~k$ is $f$ (this is actually the minimal polynomial of multiplication by$~\alpha$ viewed as $k$-linear map $K\to K$). The other argument is that, having the same set of (linear) factors, $\chi$ divides in $K[X]$ some power of $\mu$. But this division is valid in $k[X]$ as well (where both $\mu$ and $\chi$ live), which shows that the irreducible factors of$~\chi$ occur among those of$~\mu$ (and C-H gives the converse).</p>
|
272,588 |
<p>I am trying to partition the list variable called
<strong>test</strong> so that the answer will look like the variable called <strong>goal</strong>. The <strong>test</strong> list is actually a list of long data. In this example I have a short version according to the code below, which also includes my attempts called ans1 and ans2 using the functions Partition and ArrayReshape:</p>
<pre><code> Clear[ans];
goal={{{a,b,c},{d,e,f},{g,h,i},j},{{k,l,m},{n,o,p},{q,r,s},t}};
TableForm[goal]
</code></pre>
<p>And the output should be</p>
<pre><code> a d g
b e h j
c f i
k n q
l o r t
m p s
</code></pre>
<p>My attempt is:</p>
<pre><code> test={a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t};
ans1=Partition[test,3,3,1,Nothing]
{{a,b,c},{d,e,f},{g,h,i},{j,k,l},{m,n,o},{p,q,r},{s,t}}
TableForm[ans1]
ans2=ArrayReshape[test,{3,3,3},1]
{{{a,b,c},{d,e,f},{g,h,i}},{{j,k,l},{m,n,o},{p,q,r}},{{s,t,1},{1,1,1},{1,1,1}}}
TableForm[ans2]
</code></pre>
|
Syed
| 81,355 |
<p>Assuming that the number of items in <code>test</code> is not divisible by the total number of elements in each partition, e.g.,</p>
<blockquote>
<pre><code>test = {a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u,
v, w, x, y};
</code></pre>
</blockquote>
<p>a possible solution using <code>TakeList</code> can be:</p>
<pre><code>FlattenAt[#, {4}] & /@ (TakeList[#, UpTo /@ {3, 3, 3, 1}] & /@
Partition[test, UpTo[10]])
</code></pre>
<blockquote>
<pre><code>{{{a, b, c}, {d, e, f}, {g, h, i},
j}, {{k, l, m}, {n, o, p}, {q, r, s}, t}, {{u, v, w}, {x, y}, {}}}
</code></pre>
</blockquote>
|
317,981 |
<p>Prove the following:</p>
<p>$$\lim_{n \to \infty} \displaystyle \int_0 ^{2\pi} \frac{\sin nx}{x^2 + n^2} dx = 0$$</p>
<p>How would I prove this? I know you have to show your steps, but I'm literally stuck on the first one, so I can't. </p>
|
user 1591719
| 32,016 |
<p><strong>HINT:</strong> Make use of </p>
<p>$$\left|\dfrac{\sin nx}{x^2 + n^2}\right|\le\dfrac{1}{x^2 + n^2}$$</p>
<p>I think you also want to see <a href="https://math.stackexchange.com/questions/151839/absolute-value-integral-inequality-proof-step">Absolute value integral inequality proof step</a>. </p>
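<p>In case it helps to spell the hint out: since $x^2+n^2\ge n^2$ on $[0,2\pi]$,
$$\left|\int_0^{2\pi}\frac{\sin nx}{x^2+n^2}\,dx\right|\le\int_0^{2\pi}\frac{dx}{x^2+n^2}\le\frac{2\pi}{n^2}\xrightarrow[n\to\infty]{}0.$$</p>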
|