qid: int64 (1 to 4.65M)
question: large_string (lengths 27 to 36.3k)
author: large_string (lengths 3 to 36)
author_id: int64 (-1 to 1.16M)
answer: large_string (lengths 18 to 63k)
697,458
<p>Using a trivial example to illustrate the question -</p> <p>$$\int_0^\infty 2 dx$$</p> <p>$=2x \mid_0^\infty = 2(\infty) - 2(0)$</p> <p>Can we actually say $2(\infty)$? It doesn't seem valid to me.</p>
Community
-1
<p>The expression $2 \cdot (+\infty) - 2 \cdot 0$ does indeed make sense. It is rather useful to do arithmetic/analysis with $\pm \infty$ alongside the other real numbers; e.g $\lim_{x \to +\infty}$ has all of the same properties that $\lim_{x \to a}$ does, and $\lim_{x \to a} f(x) = +\infty$ behaves much more like a convergent limit than other sorts of divergence (e.g. that of $\lim_{x \to 0} \sin(1/x)$), and $[-\infty, +\infty]$ is a compact interval.</p> <p>The name of the structure that includes $\pm \infty$ at the endpoints of the real line is the "extended real numbers".</p>
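<p>For concreteness, here is the limit computation that the extended-real shorthand abbreviates (a worked restatement, not part of the original answer): $$\int_0^\infty 2\,dx := \lim_{b\to+\infty}\int_0^b 2\,dx=\lim_{b\to+\infty}2b=+\infty,$$ and in the extended reals one adopts the conventions $2\cdot(+\infty)=+\infty$ and $(+\infty)-0=+\infty$, so the shorthand $2(\infty)-2(0)$ agrees with the limit.</p>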
1,531,050
<p>a. Say $X_n$ and $Y_n$ are two sequences such that $|X_n-Y_n| &lt; \frac{1}{n}$ and $\lim X_n$ = $z$. Prove $\lim Y_n$ = $z$.</p> <p>b. Say $X_n$ takes values in closed interval $[a,b]$ and $\lim X_n$ = $z$. Prove $z$ takes values in closed interval $[a,b]$.</p> <p>c. Does b. hold when the interval is open?</p>
G H
282,682
<p>a) Let $\epsilon &gt; 0$ and choose $N$ such that $\forall n &gt; N$ $\ \ \ |X_n - z| &lt; \frac{\epsilon}{2}$ and $\frac{1}{n} &lt; \frac{\epsilon}{2}$.</p> <p>Then $|z - Y_n| \le |z - X_n| + |X_n - Y_n| &lt; \epsilon$ for $n &gt; N$, so $\lim Y_n = z$.</p> <p>b) If not, then $z \in \mathbb{R} \setminus [a,b]$, which is open, so $\exists \epsilon &gt; 0$ : $(z - \epsilon, z + \epsilon) \subseteq \mathbb{R} \setminus [a,b]$.</p> <p>But $\exists N$: $\forall n &gt; N$ $\ \ \ |X_n - z| &lt; \epsilon$, so $X_n \notin [a,b]$ - contradiction.</p> <p>c) If the interval is open, take $X_n = b - \frac{1}{n}$, which lies in $(a,b)$ for all large $n$, but $\lim X_n$ = $b$ does not.</p>
2,507,565
<p>I'm failing to find any reasonable solution to this given problem.</p> <p>Find an explicit formula for the $n$-th element of the given sequence: $$\begin{cases} a_1 = 1\\ a_2 = 2\\ a_{n+2} = 4 a_{n+1} + 4 a_n + 2^n \end{cases}$$</p> <p>I have tried finding some clues whilst typing out the first few terms, to no avail. I also tried inserting the recursion formula in place of $a_{n+1}$ or $a_n$, though it got very messy in no time and all I have gotten was something like this: $$a_{n+2} = 4^3 a_{n-3} + \sum\limits_{j=0}^3 (2^{n+j} + 4^{j+1}a_{n-j})$$ which I believe would get me to: $$a_{n+2} = 4^{n-1} a_{1} + \sum\limits_{j=0}^{n-1} (2^{n+j} + 4^{j+1}a_{n-j})$$</p> <p>Any tips on how I can solve this one?</p>
Benedict W. J. Irwin
330,257
<p>Solving recurrences can be tricky. A systematic approach is to try to solve for the <a href="https://en.wikipedia.org/wiki/Generating_function" rel="nofollow noreferrer">generating function</a> of the sequence and then work from there, but to get the general sequence term from the generating function can also be hard as it involves differentiating $n$ times. This kind of problem is usually solved nicely by a CAS like Mathematica. You can also try to look up the sequence terms in the <a href="https://oeis.org/" rel="nofollow noreferrer">Online Encyclopedia of Integer Sequences</a>, however for this sequence there doesn't seem to be an entry.</p> <p>Think about the sequence generating function, that is the formal power series $$ G(x) = \sum_{n=1}^\infty a_{n+2} x^n $$ we know the general term for $a_{n+2}$ giving $$ a_{n+2} = 4 a_{n+1} + 4 a_{n} +2^{n} $$ Then we know the generating function is $$ G(x) = \sum_{n=1}^\infty (4 a_{n+1} + 4 a_{n} +2^{n}) x^n \\ G(x) = 4\sum_{n=1}^\infty a_{n+1}x^n + 4\sum_{n=1}^\infty a_{n}x^n +\sum_{n=1}^\infty 2^{n} x^n $$ consider what these terms mean in relation to our definition of the generating function $$ G(x) = \sum_{n=1}^\infty a_{n+2} x^n = a_3x^1+a_4x^2+a_5x^3+\cdots\\ 4\sum_{n=1}^\infty a_{n+1}x^n =4(a_2x^1+a_3x^2+a_4x^3+\cdots)=4a_2x+4xG(x)\\ 4\sum_{n=1}^\infty a_{n}x^n = 4(a_1x^1+a_2x^2+a_3x^3+\cdots)=4a_1x+4a_2x^2+4x^2G(x) $$ we also know that (when it converges) from the geometric series $$ \sum_{n=1}^\infty 2^n x^n = \frac{2x}{1-2x} $$ Altogether we have $$ G(x) = 8x+4x G(x) + 4x+8x^2+4x^2G(x) + \frac{2x}{1-2x} $$ $$ G(x)(1-4x-4x^2) = 12x+8x^2 + \frac{2x}{1-2x}\\ G(x) = -\frac{2x(8x^2+8x-7)}{1-6x+4x^2+8x^3} = 14x + 68x^2 + 336x^3 + \cdots $$ which fits with our original definition. Now we need to extract the sequence of coefficients from this. We can define the full sequence generating function to include the first two terms (relabelling the coefficients so that they start at $a_0$) $$ g(x) = 1 + 2x + x G(x) = \sum_{n=0}^\infty a_n x^n = \frac{1-4x+6x^2}{1-6x+4x^2+8x^3} $$ from this we can differentiate as many times as necessary then set $x$ to zero to get the desired coefficient multiplied by some factor, this can formally be seen as the <a href="https://en.wikipedia.org/wiki/Z-transform#Inverse_Z-transform" rel="nofollow noreferrer">inverse Z transform</a> of g(1/x).</p> <blockquote> <p>InverseZTransform[(1-4x+6x^2)/(1-6x+4x^2+8x^3)/.x->1/x,x,n]</p> </blockquote> <p>Which in Mathematica gives the answer $$ a_n = -2^{n-2} + \frac{5}{8}(2-2\sqrt{2})^n + 5\cdot2^{n-3}(1+\sqrt{2})^n $$ which can be seen to recreate the relabelled coefficients $a_0=1, a_1=2, a_2=14, \cdots$. Other ways to get the $n^{th}$ derivative are the <a href="https://en.wikipedia.org/wiki/Cauchy%27s_integral_formula" rel="nofollow noreferrer">Cauchy differentiation formula</a> or by spotting patterns for simple generating functions. </p>
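<p>As a quick numerical sanity check of the closed form against the recurrence (my own sketch in Python, not part of the answer; it uses the relabelled indices $a_0=1$, $a_1=2,\dots$):</p>
<pre><code>from math import sqrt, isclose

def closed_form(n):
    # a_n = -2^(n-2) + (5/8)*(2 - 2*sqrt(2))^n + 5*2^(n-3)*(1 + sqrt(2))^n
    return (-2 ** (n - 2)
            + (5 / 8) * (2 - 2 * sqrt(2)) ** n
            + 5 * 2 ** (n - 3) * (1 + sqrt(2)) ** n)

# Relabelled recurrence: a_0 = 1, a_1 = 2, a_{n+1} = 4 a_n + 4 a_{n-1} + 2^n for n = 1, 2, 3, ...
a = [1, 2]
for n in range(1, 20):
    a.append(4 * a[n] + 4 * a[n - 1] + 2 ** n)

for n, value in enumerate(a):
    assert isclose(closed_form(n), value, rel_tol=1e-9)

print(a[:6])  # [1, 2, 14, 68, 336, 1632]
</code></pre>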
538,870
<p>Evaluate</p> <p>$$\lim_{n\to\infty}\frac{n}{\ln n}\left(\frac{\sqrt[n]{n!}}{n}-\frac{1}{e}\right).$$</p> <p>This sequence looks extremely horrible and it makes me crazy. How can we evaluate this?</p>
robjohn
13,854
<p>Stirling's asymptotic formula (without any series enhancement) $$ n!\sim\frac{n^n}{e^n}\sqrt{2\pi n}\tag{1} $$ put into Landau notation is $$ n!=\frac{n^n}{e^n}\sqrt{2\pi n}\,(1+o(1))\tag{2} $$ which gives $$ \begin{align} (n!)^{1/n} &amp;=\frac ne(2\pi n)^{\frac1{2n}}(1+o(1/n))\\ &amp;=\frac ne\left(1+\frac{\log(2\pi n)}{2n}\right)(1+o(1/n))\\ \frac{(n!)^{1/n}}{n} &amp;=\frac1e\left(1+\frac{\log(2\pi n)}{2n}\right)(1+o(1/n))\\ \frac{(n!)^{1/n}}{n}-\frac1e &amp;=\frac1e\frac{\log(2\pi n)}{2n}+o(1/n)\\ \frac{n}{\log(n)}\left(\frac{(n!)^{1/n}}{n}-\frac1e\right) &amp;=\frac1{2e}+\frac{\log(2\pi)}{2e\log(n)}+o(1/\log(n))\tag{3} \end{align} $$ Taking the limit as $n\to\infty$, $(3)$ yields $$ \lim_{n\to\infty}\frac{n}{\log(n)}\left(\frac{(n!)^{1/n}}{n}-\frac1e\right) =\frac1{2e}\tag{4} $$ which is the result sought.</p> <p>However, $(3)$ gives more $$ \lim_{n\to\infty}\log(n)\left(\frac{n}{\log(n)}\left(\frac{(n!)^{1/n}}{n}-\frac1e\right)-\frac1{2e}\right)=\frac{\log(2\pi)}{2e}\tag{5} $$</p>
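<p>A quick numerical illustration of the limit $(4)$ and of the slow $\frac{\log(2\pi)}{2e\log(n)}$ correction visible in $(3)$ (my own sketch in Python, using <code>lgamma</code> so that $n!$ does not overflow):</p>
<pre><code>from math import e, exp, lgamma, log

def term(n):
    # n/log(n) * ((n!)^(1/n)/n - 1/e), with n! handled via lgamma
    nth_root = exp(lgamma(n + 1) / n)
    return n / log(n) * (nth_root / n - 1 / e)

for n in (10 ** 3, 10 ** 5, 10 ** 7):
    print(n, term(n))

print("1/(2e) =", 1 / (2 * e))
</code></pre>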
1,299,552
<p>Can you tell me why my answer is wrong?</p> <p>$$\frac {x+y} {x-y} + \frac 1 {x+y} - \frac {x^2+y^2} {y^2-x^2} = \frac {x^2 + y^2} {x^2-y^2} + \frac {x-y} {x^2-y^2} + \frac {x^2+y^2} {x^2-y^2} = 2x^2 + 2y^2 + x-y$$</p>
jejeje
243,585
<p>$(x+y)(x+y)$ does not equal $x^2+y^2$. It equals $(x+y)^2$ or $x^2+2xy+y^2$.</p>
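<p>Carrying the corrected numerator through over the common denominator (a worked continuation, going one step beyond the answer above): $$\frac{x+y}{x-y}+\frac{1}{x+y}-\frac{x^2+y^2}{y^2-x^2}=\frac{(x+y)^2+(x-y)+(x^2+y^2)}{x^2-y^2}=\frac{2x^2+2xy+2y^2+x-y}{x^2-y^2}.$$</p>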
57,642
<p>I'm looking for the shortest and clearest proof of the following theorem:</p> <p>For $V$ a vector space of dimension $n$ over $\mathbb C$ and $T: V \to V$ a linear transformation, I need to show $V= \ker T^n \oplus \operatorname{Im} T^n$.</p> <p>Any hints? I don't know where to start from.</p> <p>Thank you.</p>
user89987
89,987
<p>This appears in Sheldon Axler's LADR in the exercises to chapter 8. The exercise right before it reads as follows:</p> <p>Suppose $P \in \mathcal L (V)$ and $P^2 =P$. Prove that $V=\mathrm{null}P \oplus \mathrm{range}P$.</p> <p>In the same chapter it is shown that if $n=\mathrm{dim}V$ then $\mathrm{null}T^n =\mathrm{null}T^{n+1} = \dots $ and $\mathrm{range}T^n =\mathrm{range}T^{n+1} = \dots $.</p> <p>You may combine these results to get the desired result as an immediate consequence, after noting that in particular $\mathrm{null}T^n =\mathrm{null}T^{2n}$ and $\mathrm{range}T^n =\mathrm{range}T^{2n}$.</p>
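<p>One way to combine those facts (a sketch of the argument, spelled out beyond what the answer states): if $v\in\mathrm{null}\,T^n\cap\mathrm{range}\,T^n$, write $v=T^n w$; then $T^{2n}w=T^n v=0$, so $w\in\mathrm{null}\,T^{2n}=\mathrm{null}\,T^n$ and hence $v=T^n w=0$. The intersection is therefore $\{0\}$, and since $\dim\mathrm{null}\,T^n+\dim\mathrm{range}\,T^n=\dim V$ by rank-nullity, $$V=\mathrm{null}\,T^n\oplus\mathrm{range}\,T^n.$$</p>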
566
<h3>We all love a good puzzle</h3> <p>To a certain extent, any piece of mathematics is a puzzle in some sense: whether we are classifying the homological intersection forms of four manifolds or calculating the optimum dimensions of a cylinder, it is an element of investigation and inherently puzzlish intrigue that drives us. Indeed most puzzles (cryptic crosswords aside) are somewhat mathematical (the mathematics of Sudoku for example is hidden in <a href="http://en.wikipedia.org/wiki/Latin_squares" rel="nofollow noreferrer">latin squares</a>). Mathematicians and puzzles get on, it seems, rather well.</p> <h3>But what is a good puzzle?</h3> <p>Okay, so in order to make this question worthwhile (and not a ten-page wade-athon through 57 varieties of the men with red and blue hats puzzle), we are going to have to impose some limitations. Not every puzzle-based answer that pops into your head will qualify as a good puzzle—to do so it must</p> <ul> <li><strong>Not be widely known:</strong> If you have a terribly interesting puzzle that motivates something in cryptography; well done you, but chances are we've seen it. If you saw that <em>hilarious</em> scene in the film 21, where Kevin Spacey explains the Monty hall paradox badly and want to share, don't do so here. Anyone found posting the liar/truth teller riddle will be immediately disembowelled.</li> <li><strong>Be mathematical (as much as possible):</strong> It's true that logic <em>is</em> mathematics, but puzzles beginning '<em>There is a street where everyone has a different coloured house …</em>' are tedious as hell. Note: there is a happy medium between this and trig substitutions.</li> <li><strong>Not be too hard:</strong> Any level of difficulty is cool, but if coming up with an answer requires more than two sublemmas, you are misreading your audience.</li> <li><strong>Actually have an answer:</strong> Crank questions will not be appreciated! You can post the answers/hints in <a href="http://en.wikipedia.org/wiki/Rot_13" rel="nofollow noreferrer">Rot-13</a> underneath as comments as on MO if you fancy.</li> <li><strong>Have that indefinable spark that makes a puzzle awesome:</strong> Like a situation that seems familiar, requiring unfamiliar thought …</li> </ul> <p>Ideally include where you found the puzzle so we can find more cool stuff like it. 
For ease of voting, one puzzle per post is best.</p> <hr /> <h1>Some examples to set the ball rolling</h1> <blockquote> <p>Simplify <span class="math-container">$\sqrt{2+\sqrt{3}}$</span></p> </blockquote> <p><strong>From:</strong> problem solving magazine</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> Try a two term solution</p> </blockquote> <hr /> <blockquote> <p>Can one make an equilateral triangle with all vertices at integer coordinates?</p> </blockquote> <p><strong>From:</strong> Durham distance maths challenge 2010</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> This is equivalent to the rational case</p> </blockquote> <hr /> <blockquote> <p>The collection of <span class="math-container">$n \times n$</span> <a href="http://en.wikipedia.org/wiki/Magic_squares" rel="nofollow noreferrer">Magic squares</a> form a vector space over <span class="math-container">$\mathbb{R}$</span> prove this, and by way of a linear transformation, derive the dimension of this vector space.</p> </blockquote> <p><strong>From:</strong> Me, I made this up (you can tell, can't you!)</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> Apply the rank nullity theorem</p> </blockquote>
BlueRaja - Danny Pflughoeft
136
<p>Most of us know that, being deterministic, computers cannot generate true <a href="http://en.wikipedia.org/wiki/Pseudorandom_number_generator" rel="noreferrer">random numbers</a>.</p> <p>However, let's say you have a box which generates truly random binary numbers, but is biased: it's more likely to generate either a <code>1</code> or a <code>0</code>, but you don't know the exact probabilities, or even which is more likely <em>(both probabilities are > 0 and sum to 1, obviously)</em></p> <p>Can you use this box to create an unbiased random generator of binary numbers?</p>
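<p>One standard solution to this puzzle is the von Neumann extractor; the post deliberately leaves the answer open, so treat the following Python sketch as a guess at the intended construction, with <code>biased_bit</code> standing in for the box:</p>
<pre><code>import random

def biased_bit(p_one=0.7):
    # Stand-in for the box: returns 1 with some fixed but unknown probability.
    return 1 if random.random() &lt; p_one else 0

def unbiased_bit():
    # Draw bits in pairs; (1,0) and (0,1) occur with equal probability p(1-p),
    # so mapping them to 1 and 0 (and discarding equal pairs) removes the bias.
    while True:
        a, b = biased_bit(), biased_bit()
        if a != b:
            return a

sample = [unbiased_bit() for _ in range(100000)]
print(sum(sample) / len(sample))  # close to 0.5 regardless of the bias
</code></pre>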
566
<h3>We all love a good puzzle</h3> <p>To a certain extent, any piece of mathematics is a puzzle in some sense: whether we are classifying the homological intersection forms of four manifolds or calculating the optimum dimensions of a cylinder, it is an element of investigation and inherently puzzlish intrigue that drives us. Indeed most puzzles (cryptic crosswords aside) are somewhat mathematical (the mathematics of Sudoku for example is hidden in <a href="http://en.wikipedia.org/wiki/Latin_squares" rel="nofollow noreferrer">latin squares</a>). Mathematicians and puzzles get on, it seems, rather well.</p> <h3>But what is a good puzzle?</h3> <p>Okay, so in order to make this question worthwhile (and not a ten-page wade-athon through 57 varieties of the men with red and blue hats puzzle), we are going to have to impose some limitations. Not every puzzle-based answer that pops into your head will qualify as a good puzzle—to do so it must</p> <ul> <li><strong>Not be widely known:</strong> If you have a terribly interesting puzzle that motivates something in cryptography; well done you, but chances are we've seen it. If you saw that <em>hilarious</em> scene in the film 21, where Kevin Spacey explains the Monty hall paradox badly and want to share, don't do so here. Anyone found posting the liar/truth teller riddle will be immediately disembowelled.</li> <li><strong>Be mathematical (as much as possible):</strong> It's true that logic <em>is</em> mathematics, but puzzles beginning '<em>There is a street where everyone has a different coloured house …</em>' are tedious as hell. Note: there is a happy medium between this and trig substitutions.</li> <li><strong>Not be too hard:</strong> Any level of difficulty is cool, but if coming up with an answer requires more than two sublemmas, you are misreading your audience.</li> <li><strong>Actually have an answer:</strong> Crank questions will not be appreciated! You can post the answers/hints in <a href="http://en.wikipedia.org/wiki/Rot_13" rel="nofollow noreferrer">Rot-13</a> underneath as comments as on MO if you fancy.</li> <li><strong>Have that indefinable spark that makes a puzzle awesome:</strong> Like a situation that seems familiar, requiring unfamiliar thought …</li> </ul> <p>Ideally include where you found the puzzle so we can find more cool stuff like it. 
For ease of voting, one puzzle per post is best.</p> <hr /> <h1>Some examples to set the ball rolling</h1> <blockquote> <p>Simplify <span class="math-container">$\sqrt{2+\sqrt{3}}$</span></p> </blockquote> <p><strong>From:</strong> problem solving magazine</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> Try a two term solution</p> </blockquote> <hr /> <blockquote> <p>Can one make an equilateral triangle with all vertices at integer coordinates?</p> </blockquote> <p><strong>From:</strong> Durham distance maths challenge 2010</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> This is equivalent to the rational case</p> </blockquote> <hr /> <blockquote> <p>The collection of <span class="math-container">$n \times n$</span> <a href="http://en.wikipedia.org/wiki/Magic_squares" rel="nofollow noreferrer">Magic squares</a> form a vector space over <span class="math-container">$\mathbb{R}$</span> prove this, and by way of a linear transformation, derive the dimension of this vector space.</p> </blockquote> <p><strong>From:</strong> Me, I made this up (you can tell, can't you!)</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> Apply the rank nullity theorem</p> </blockquote>
BlueRaja - Danny Pflughoeft
136
<p>Assuming you have unlimited time and cash, is there a strategy that's guaranteed to win at roulette?</p>
566
<h3>We all love a good puzzle</h3> <p>To a certain extent, any piece of mathematics is a puzzle in some sense: whether we are classifying the homological intersection forms of four manifolds or calculating the optimum dimensions of a cylinder, it is an element of investigation and inherently puzzlish intrigue that drives us. Indeed most puzzles (cryptic crosswords aside) are somewhat mathematical (the mathematics of Sudoku for example is hidden in <a href="http://en.wikipedia.org/wiki/Latin_squares" rel="nofollow noreferrer">latin squares</a>). Mathematicians and puzzles get on, it seems, rather well.</p> <h3>But what is a good puzzle?</h3> <p>Okay, so in order to make this question worthwhile (and not a ten-page wade-athon through 57 varieties of the men with red and blue hats puzzle), we are going to have to impose some limitations. Not every puzzle-based answer that pops into your head will qualify as a good puzzle—to do so it must</p> <ul> <li><strong>Not be widely known:</strong> If you have a terribly interesting puzzle that motivates something in cryptography; well done you, but chances are we've seen it. If you saw that <em>hilarious</em> scene in the film 21, where Kevin Spacey explains the Monty hall paradox badly and want to share, don't do so here. Anyone found posting the liar/truth teller riddle will be immediately disembowelled.</li> <li><strong>Be mathematical (as much as possible):</strong> It's true that logic <em>is</em> mathematics, but puzzles beginning '<em>There is a street where everyone has a different coloured house …</em>' are tedious as hell. Note: there is a happy medium between this and trig substitutions.</li> <li><strong>Not be too hard:</strong> Any level of difficulty is cool, but if coming up with an answer requires more than two sublemmas, you are misreading your audience.</li> <li><strong>Actually have an answer:</strong> Crank questions will not be appreciated! You can post the answers/hints in <a href="http://en.wikipedia.org/wiki/Rot_13" rel="nofollow noreferrer">Rot-13</a> underneath as comments as on MO if you fancy.</li> <li><strong>Have that indefinable spark that makes a puzzle awesome:</strong> Like a situation that seems familiar, requiring unfamiliar thought …</li> </ul> <p>Ideally include where you found the puzzle so we can find more cool stuff like it. 
For ease of voting, one puzzle per post is best.</p> <hr /> <h1>Some examples to set the ball rolling</h1> <blockquote> <p>Simplify <span class="math-container">$\sqrt{2+\sqrt{3}}$</span></p> </blockquote> <p><strong>From:</strong> problem solving magazine</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> Try a two term solution</p> </blockquote> <hr /> <blockquote> <p>Can one make an equilateral triangle with all vertices at integer coordinates?</p> </blockquote> <p><strong>From:</strong> Durham distance maths challenge 2010</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> This is equivalent to the rational case</p> </blockquote> <hr /> <blockquote> <p>The collection of <span class="math-container">$n \times n$</span> <a href="http://en.wikipedia.org/wiki/Magic_squares" rel="nofollow noreferrer">Magic squares</a> form a vector space over <span class="math-container">$\mathbb{R}$</span> prove this, and by way of a linear transformation, derive the dimension of this vector space.</p> </blockquote> <p><strong>From:</strong> Me, I made this up (you can tell, can't you!)</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> Apply the rank nullity theorem</p> </blockquote>
Sev
1,630
<p>Relatively simple, but still fun...</p> <p>Unknown source, other than my friend telling me yesterday</p> <p>There is a room with 50 doors in it, each door leading to a cell that is completely sound-proof and light-proof, and each cell can hold a maximum of 1 person. There are 50 prisoners. There also exists a light in the middle of the main room, so that when a prisoner is pulled out of their cell, they could see the light. There is also a light switch inside the main room which controls the light. The 50 prisoners are the only people allowed to touch the light switch that controls the light (even the prison guard cannot touch it)</p> <p>Before any prisoner is placed in the cells, the prison guard tells them that he will pull out, at random, any one of them at any time, and then put them back in their cell after they get a chance to turn the light on or off or do nothing at all to it. (Only one prisoner can be out of their cell at any given time) He also guarantees them that he will pull at least one prisoner out at least once a day.</p> <p>The goal is, when any one prisoner knows when all 50 prisoners have been pulled out of their cells at least one time, and he tells the prison guard this fact correctly, all prisoners will be let free.</p> <p>The prisoners are allowed one short meeting before they all get placed in their cells.</p> <p>They also have a (reasonably) infinite amount of time to let the prison guard know when all prisoners have been let out of their cells at least one time. (say 5 years or something to that effect)</p> <p>There is only one guess allowed ever, so it better be the right answer, or else they will all be fed to the cannibals.</p> <p>What method did they come up with to know exactly when all 50 prisoners have been out of their cells at least one time?</p>
3,387,024
<p>We denote by <span class="math-container">$H$</span> the subspace of <span class="math-container">$C([0,1])$</span> of functions <span class="math-container">$h$</span> such that <span class="math-container">$h(0)=0$</span>, <span class="math-container">$h$</span> is absolutely continuous and its derivative <span class="math-container">$h'$</span> (which exists a.e.) satisfies <span class="math-container">$$\int_0^1 h'(s)^2 ds &lt; \infty.$$</span></p> <p>Then <span class="math-container">$H$</span> is a Hilbert space for the scalar product </p> <p><span class="math-container">$$(g,h) = \int_0^1 g'(s)h'(s)ds.$$</span> </p> <p>2) For any bounded measure <span class="math-container">$\mu$</span> on <span class="math-container">$[0,1]$</span>, show that there exists an element <span class="math-container">$h$</span> in <span class="math-container">$H$</span> such that for every <span class="math-container">$f \in H$</span> <span class="math-container">$$\int_0^1 f(x)d\mu(x) = (f,h),$$</span> and that <span class="math-container">$h'(s) = \mu((s,1])$</span>.</p> <p>[Hint: The canonical injection of <span class="math-container">$H$</span> into <span class="math-container">$C([0,1])$</span> is continuous; use Riesz's theorem.]</p> <p>Following the hint, we know that since <span class="math-container">$\int_0^1 f(x)d\mu(x)$</span> as a functional of <span class="math-container">$f$</span> is bounded linear, so it can be represented in the form <span class="math-container">$(f,h)$</span> for some <span class="math-container">$h \in H$</span>. However I am stuck at showing that <span class="math-container">$h'(s)=\mu((s,1])$</span>. <span class="math-container">$\mu((s,1]) = \int 1_{(s,1]} d\mu$</span>, but this function does not belong to <span class="math-container">$H$</span>. I would greatly appreciate any help.</p>
N. S.
9,176
<p><strong>Hint</strong> For each odd prime <span class="math-container">$p$</span>, if <span class="math-container">$p^k |m \leq n$</span> then <span class="math-container">$$p^k |2^{\frac{p-1}{p}m}-1$$</span> and hence <span class="math-container">$$p^k| 2^{n}-2^{n-\frac{p-1}{p}m}$$</span> </p>
3,387,024
<p>We denote by <span class="math-container">$H$</span> the subspace of <span class="math-container">$C([0,1])$</span> of functions <span class="math-container">$h$</span> such that <span class="math-container">$h(0)=0$</span>, <span class="math-container">$h$</span> is absolutely continuous and its derivative <span class="math-container">$h'$</span> (which exists a.e.) satisfies <span class="math-container">$$\int_0^1 h'(s)^2 ds &lt; \infty.$$</span></p> <p>Then <span class="math-container">$H$</span> is a Hilbert space for the scalar product </p> <p><span class="math-container">$$(g,h) = \int_0^1 g'(s)h'(s)ds.$$</span> </p> <p>2) For any bounded measure <span class="math-container">$\mu$</span> on <span class="math-container">$[0,1]$</span>, show that there exists an element <span class="math-container">$h$</span> in <span class="math-container">$H$</span> such that for every <span class="math-container">$f \in H$</span> <span class="math-container">$$\int_0^1 f(x)d\mu(x) = (f,h),$$</span> and that <span class="math-container">$h'(s) = \mu((s,1])$</span>.</p> <p>[Hint: The canonical injection of <span class="math-container">$H$</span> into <span class="math-container">$C([0,1])$</span> is continuous; use Riesz's theorem.]</p> <p>Following the hint, we know that since <span class="math-container">$\int_0^1 f(x)d\mu(x)$</span> as a functional of <span class="math-container">$f$</span> is bounded linear, so it can be represented in the form <span class="math-container">$(f,h)$</span> for some <span class="math-container">$h \in H$</span>. However I am stuck at showing that <span class="math-container">$h'(s)=\mu((s,1])$</span>. <span class="math-container">$\mu((s,1]) = \int 1_{(s,1]} d\mu$</span>, but this function does not belong to <span class="math-container">$H$</span>. I would greatly appreciate any help.</p>
WhatsUp
256,378
<p>A nice solution is given by the natural embedding of <span class="math-container">$S_n$</span> into <span class="math-container">$\mathrm{GL}_n(\mathbb{F}_2)$</span>.</p> <p>Here <span class="math-container">$S_n$</span> is the symmetric group of <span class="math-container">$n$</span> elements. Every element of <span class="math-container">$S_n$</span> permutes the canonical basis of <span class="math-container">$\mathbb{F}_2^n$</span>, hence gives an <span class="math-container">$\mathbb{F}_2$</span>-linear automorphism of <span class="math-container">$\mathbb{F}_2^n$</span>. This embeds <span class="math-container">$S_n$</span> as a subgroup of <span class="math-container">$\mathrm{GL}_n(\mathbb{F}_2)$</span>.</p> <p>Now it only remains to say that the cardinalities of the two groups are <span class="math-container">$n!$</span> and <span class="math-container">$\prod_{i = 0}^{n - 1}(2^n - 2^i)$</span>, respectively.</p> <hr> <p>By replacing <span class="math-container">$\mathbb{F}_2$</span> with <span class="math-container">$\mathbb{F}_q$</span> for any prime power <span class="math-container">$q$</span>, one proves in the same way that <span class="math-container">$n!$</span> divides <span class="math-container">$\prod_{i = 0}^{n - 1}(q^n - q^i)$</span>.</p>
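<p>A small numerical sanity check of the resulting divisibility (my own Python sketch, not part of the answer):</p>
<pre><code>from math import factorial, prod

def gl_order(n, q=2):
    # |GL_n(F_q)| = (q^n - q^0)(q^n - q^1)...(q^n - q^(n-1))
    return prod(q ** n - q ** i for i in range(n))

for n in range(1, 11):
    assert gl_order(n) % factorial(n) == 0
    print(n, gl_order(n) // factorial(n))
</code></pre>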
655,591
<p>A list $S=(v_1, \cdots, v_n)$ of vectors in $\mathbb{F}^n$ is a basis iff the matrix (call it $A$) whose columns are the co-ordinate vectors of the $v_i$ is invertible.</p> <p>My Idea: I was able to prove the reverse direction, wherein we can show that $AX =0$ has only the trivial solution, so the vectors are linearly independent and also span the given vector space, so it is a basis.</p> <p>I am not sure about the forward direction, that is, to prove that $A$ is invertible. I am just confused about the coordinate vector for $v_i$. That is, if we assume $S$ is a basis, then how can each element (namely the $v_i$'s) in a basis have a coordinate vector? Wouldn't it just be a single column for each vector in the vector space?</p>
Blah
6,721
<p>You can rewrite $A$ as a row of vectors $A_1,\dots,A_n$. Then $$ Ax = x_1 A_1 + \dots + x_n A_n $$ Think about</p> <p>$Ax=b$ always solvable means $A_i$ generate $V$</p> <p>$Ax=b$ uniquely solvable (if at all) means $A_i$ independent</p> <p>If these two conditions are met, you can find $n$ solutions for $Ax_i=e_i$ (put them in a matrix). This matrix will be the inverse.</p> <p>If $A$ is invertible, then for every $b$ the equation $Ax=b$ has the unique solution $x=A^{-1}b$. So the $A_i$ are a basis.</p>
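<p>As a concrete illustration of the "solve $Ax_i=e_i$ and put the solutions in a matrix" step, here is a sketch in Python/NumPy (the example matrix is mine, not from the post):</p>
<pre><code>import numpy as np

# Columns of A are three linearly independent vectors in R^3 (my own example),
# so A x = b is uniquely solvable for every b.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Solve A x_i = e_i for each standard basis vector and collect the solutions
# as columns: that matrix is the inverse, as described above.
columns = [np.linalg.solve(A, e) for e in np.eye(3)]
A_inv = np.column_stack(columns)

print(np.allclose(A @ A_inv, np.eye(3)))  # True
</code></pre>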
724,302
<p>I have read the page about category theory on Wikipedia carefully, but I don't really get what this theory is.</p> <p>Is category theory a theory developed within ZFC set theory? (Just like measure theory, group theory, etc.) If not, is it just another formal logic system independent from the standard ZFC set theory?</p> <p>Following Wikipedia, I think it is the latter.</p> <p>If category theory is another formal logic system, then is there any theorem that relates category theory and ZFC set theory? For example, there is a theorem stating that "every theorem about sets in NBG is provable in ZFC, assuming consistency of ZFC" even though NBG and ZFC are completely different set theories. Moreover, is there a theorem which tells the relation between the consistencies of ZFC and category theory? Also, are the usual independence proofs in ZFC no longer available in category theory?</p> <p>Moreover, how does one write a <em>formula</em> of ZFC set theory in category theory and vice versa? Since there is no <strong>undefined notion: set</strong> in category theory, I think this is not possible.</p>
Geremia
128,568
<p>See:</p> <ul> <li>Bell, J. L., “<a href="https://isidore.co/misc/Physics%20papers%20and%20books/Zotero/storage/IY3V5UCD/Bell,%20J.%20L.%20-%201981%20-%20Category%20Theory%20and%20the%20Foundations%20of%20Mathematics.pdf" rel="nofollow noreferrer">Category Theory and the Foundations of Mathematics</a>,” <em>British Journal for the Philosophy of Science</em> 32, no. 4 (December 1981): 349–58. <sub>(cited in <a href="https://isidore.co/calibre#panel=book_details&amp;book_id=2920" rel="nofollow noreferrer"><em>The Way toward Wisdom</em></a> pt. 2, ch.7, §A.3. "Modern Mathematics")</sub></li> </ul>
387,519
<p>The domain of the following function $$y=2$$ is just 2? And the image of it?</p> <p>I don't think I quite understand what the image of a function means. The domain is all values that it can assume, correct?</p> <p>Could you please try to define the image of this equation too: $$y = 2x - 6$$ so I can try to understand it?</p>
fgp
42,986
<p>The <em>image</em> of a function is simply the set of all <em>possible</em> values the function can take. If you have a function $f(x)$, and evaluate it for all possible values of $x$ (i.e., for all $x$ in the <em>domain</em> of $f$), then the resulting set of values is called the <em>image</em> of $f$.</p> <p>You can also find an image of a certain subset of the domain - i.e., instead of evaluating $f(x)$ for all $x$ in the domain, you only evaluate it for a certain subset.</p> <blockquote> <p>Formally, the image $f(U)$ of some subset $U$ of the domain of $f$ is $$ f(U) = \left\{f(x) \,:\, x \in U\right\} \text{.} $$</p> </blockquote> <p>To check whether $y$ lies in the image of $f$, you thus have to ask</p> <blockquote> <p>Is there an $x$ such that $f(x)=y$?</p> </blockquote> <p>To check whether $y$ lies in the image of some subset $U$ of the domain of $f$, you similarly ask</p> <blockquote> <p>Is there an $x \in U$ such that $f(x)=y$?</p> </blockquote> <p>The image of <em>every</em> non-empty set $U$ under $f(x)=2$ is thus simply $\{2\}$, because $f$ never takes any value other than 2.</p>
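<p>Applied to the second function in the question, with the domain taken to be all of $\mathbb{R}$ (a worked example along the same lines, not part of the original answer): for $f(x)=2x-6$, every $y$ is attained, since $f\left(\frac{y+6}{2}\right)=y$; hence the image is all of $\mathbb{R}$. For $f(x)=2$, the domain is likewise all of $\mathbb{R}$ (any $x$ may be plugged in), while the image is just $\{2\}$.</p>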
506,767
<p>What is the largest number that, on multiplying with itself, appears as the last digits of the product?</p> <p>e.g., $(376 \times 376) = 141376$</p> <p>e.g., $(25\times 25) = 625$</p> <p>If the largest such number can't be found, can you prove that there is always such a number greater than any given number? (only in base $10$)</p>
robjohn
13,854
<p>Looking for $k$ so that $k^2\equiv k\pmod{10^n}$, we get $$ k(k-1)\equiv0\pmod{10^n} $$ Since $\gcd(k,k-1)=1$, we must have one of $$ \begin{align} k&amp;\equiv0\pmod{10^n}\tag{1}\\ k&amp;\equiv1\pmod{10^n}\tag{2}\\ k&amp;\equiv0\pmod{2^n}\quad\text{and}\quad k\equiv1\pmod{5^n}\tag{3}\\ k&amp;\equiv1\pmod{2^n}\quad\text{and}\quad k\equiv0\pmod{5^n}\tag{4} \end{align} $$ The Chinese Remainder Theorem guarantees a solution for each of these.</p> <p>$(1)$ and $(2)$ lead to $0$ and $1$, so they are bounded.</p> <p>$(3)$ and $(4)$ lead to more interesting, unbounded solutions. Note that a solution of $(3)$ or $(4)$ for $n$ is also a solution for $n{-}1$ but with another digit added to the left. Furthermore, it cannot be that there is a bounded solution of $(3)$ or $(4)$; that is, one for all $n$. If $k(k-1)\equiv0\pmod{10^n}$ for all $n$, then $k(k-1)=0$; that is, $k=0$ or $k=1$. Therefore, for any $n$, there is a larger $n'$, so that the digit added to the left for the solution mod $10^{n'}$ is non-zero. Thus, there is no largest such number.</p>
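<p>A brute-force illustration of the four cases for small $n$ (my own Python sketch, not part of the answer):</p>
<pre><code># Brute-force check: all k with k^2 congruent to k (mod 10^n), for small n.
for n in range(1, 7):
    mod = 10 ** n
    print(n, [k for k in range(mod) if (k * k - k) % mod == 0])
# e.g. n = 3 gives [0, 1, 376, 625]
</code></pre>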
2,050,939
<p>My daughter got the following optional question for 8th grade homework mixed into her simultaneous equations questions. All of the other questions were simple for me to solve using substitution or elimination which is what she is being taught at the moment but this one seems to me to be at a much higher grade.</p> <p>A high school play sold 168 tickets to one showing. If tickets were bought in advance, they were half price. If they made $2130 from that one show, how many tickets were sold in advance?</p> <p>I put together a few formulas but had to use guesswork for the last step which just felt wrong because math should be about processes, not guesswork.</p> <pre><code>a = advance r = regular v = advance price </code></pre> <p>$$a + r = 168$$ $$a(v) + r(2v) = 2130$$ $$a &gt; 0$$ $$r &gt; 0$$</p> <p>Solving I get:</p> <p>$$v(168-r) + 2rv = 2130$$ $$168v - rv + 2rv = 2130$$ $$168v + rv = 2130$$ $$v(168 + r) = 2130$$</p> <p>At this point I guessed v at 10 and worked out r and a from that. This concerned me because I can't teach my daughter to make educated guesses. What is the process/trick involved with solving these sort of questions?</p>
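<p>One way to finish without guessing (a continuation of the poster's own setup, under the extra assumption, not stated in the problem, that the advance price $v$ is a whole number of dollars): from $v(168+r)=2130=2\cdot 3\cdot 5\cdot 71$ and $0 &lt; r &lt; 168$, the factor $168+r$ must be a divisor of $2130$ strictly between $168$ and $336$. The only such divisor is $213$, so $$168+r=213 \ \Rightarrow\ r=45,\quad v=\frac{2130}{213}=10,\quad a=168-45=123,$$ and indeed $123\cdot 10+45\cdot 20=2130$.</p>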
Community
-1
<p>Let us consider $m$ red balls and $n$ blue balls. Let $u_{mn}(r)$ and $u_{mn}(b)$ be the probability that the sequence of balls being discarded should begin and end with a red ball and a blue ball respectively. We require $u_{mn} = u_{mn}(r) + u_{mn}(b)$.</p> <hr> <p>Consider $u_{mn}(r)$. The probability of choosing a red ball first is $\frac{m}{m+n}$. The probability of then getting a sequence ending in red is made up of two parts. Firstly, that the first and the last ball should be red is $u_{m-1,n}(r)$. Secondly, that the first ball should be blue namely $\frac{n}{m+n-1}$, multiplied by the probability of a sequence ending in red chosen from $m-1$ red and $n$ blue balls, remembering that the blue ball was replaced namely $u_{m-1,n}$. Hence for $m\geq 2, n\geq 1$, $$u_{mn}(r) = \frac{m}{m+n}[u_{m-1,n}(r) +\frac{n}{m+n-1}u_{m-1,n}]...(1)$$ $(1)$ also holds if $m=1, n\geq 1$. Similarly for $m\geq 1, n\geq 2$, $$u_{mn}(b) = \frac{n}{m+n}[u_{m,n-1}(b) + \frac{m}{m+n-1}u_{m,n-1}]...(2)$$. Since $u_{m,1}(b) =\frac{1}{m+1}$,$u_{m,0}(b) =0, u_{m,0} = 1$ equation $(2)$ holds if $m\geq 1, n=1$.</p> <hr> <p>The probability that the final ball should be blue is $1-u_{m,n}$. By interchanging the roles of the red and blue balls, it is easy to see that this is $u_{n,m}$. Hence, $$u_{m,n} + u_{n,m} = 1 (m\geq 1, n\geq 1)...(3)$$. Let $q_{m,n}(r) = \frac{(m+n-1)!}{m!n!} u_{m,n}(r) (m\geq 1, n\geq 1)$ with similar definitions for $q_{m,n}(b)$ and $q_{m,n}$. Writing $a$ in place of $m$ in $(1)$ and substituting for $u_{m,n}$, we have, $$(a+n)q_{a,n}(r)-(a+n-1)q_{a-1,n}(r) = nq_{a-1,n} (r\geq 1, n\geq 1)$$. Adding for $r=1,2...m$ and for $m\geq 1, n\geq 1$ we have $$(m+n)q_{m,n}(r)-nq_{0,n}(r) = n(q_{m-1,n}+q_{m-2,n}+...q_{0,n})...(4)$$ and for $m\geq 1, n\geq 1$, we have, $$(m+n)q_{m,n}(b)-mq_{m,0}(b) = m(q_{m,n-1}+q_{m,n-2}+...q_{m,0})...(5)$$. We also have $q_{0,n}(r)=0 (n\geq 1)$ and $q_{m,0}(b)=0 (m\geq 1)$. $(4)+(5)$ and $m\geq 1, n\geq 1$ gives $$(m+n)q_{m,n} =n(q_{m-1,n}+...q_{0,n}) +m(q_{m,n-1}+...q_{m,0})...(6)$$. If $n=1, m\geq 1$, we have $$(m+1)q_{m,1} = q_{m-1,1}+...q{0,1}+mq_{m,0} = q_{m-1,1}+...q_{1,1}+q_{0,1}+1$$. In terms of $u_{m,1} (m\geq 1)$, we have, $$(m+1)u_{m,1} = u_{m-1,1}+...u_{0,1}+1$$. Writing $m+1$ in place of $m$ and subtracting the two equations, we have, $$(m+2)u_{m+1,1}-(m+1)u_{m,1} = u_{m,1}$$ and so $u_{m+1,1} = u_{m,1} = u_{1,1} = \frac{1}{2}...(7)$.<br> Similarly by putting $m=1$ in $(6)$ we get $u_{1,n} (n\geq 1)$. Hence $u_{m,1} = u_{1,n} =\frac{1}{2}$ for $m\geq 1, n\geq 1$.</p> <hr> <p>We write equations by putting $m+1$ in place of $m$, $n+1$ in place of $n$ and combining $(m+1,n+1)$ in place of $(m,n)$. Subtracting $(6)$ and the sum of these three equations we have, $$(m+n)q_{m,n}-(m+n+1)(q_{m+1,n} + q_{m,n+1})-(m+n+2)q_{m+1,n+1} = -nq_{m,n} +(n+1)q_{m,n+1} + (m+1)q_{m+1,n} -mq_{m,n}$$. Reverting to $u_{m,n}$ for $m\geq 1, n\geq 1$, we have $$2(m+1)(n+1)u_{m,n} + (m+n+2)(m+n+1)u_{m+1,n+1} = (m+1)(m+2n+2)u_{m,n+1} + (2m+n+2)(n+1)u_{m+1,n}$$ $$\Rightarrow (m+n+2)(m+n+1)((u_{m+1,n+1}-u_{m,n+1})-(u_{m+1,n}-u_{m,n})) + m(m+1)(u_{m+1,n}-u_{m,n}) + n(n+1)(u_{m,n+1}-u_{m,n}) = 0$$ This implies that $u_{m,n} = c$(a constant)$(m\geq 1, n\geq 1)$ is a solution of the problem. It satisfies the boundary conditions of $(7)$ if $c=\frac{1}{2}$ hence $u_{m,n}$ is a solution of the problem. Hope it helps.</p>
2,084,671
<p>I've tried simplifying the term by <strong>substituting</strong> $y := x^2$ and also to use a certain <strong>algorithm</strong> that outputs the f(a), but it gets too big for the input r, that is a variable and not a number to begin with (added little example):</p> <p><a href="https://i.stack.imgur.com/8C9sV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8C9sV.png" alt="enter image description here"></a></p> <p>If you have a good idea how to do this, please let me know</p>
Arnaldo
391,612
<p><strong>Hint</strong></p> <p>Completing the square (it is equivalent to using Bhaskara's formula, i.e. the quadratic formula, in $x^2$):</p> <p>$x^4-6qx^2+q^2=(x^2-3q)^2-8q^2=0 \rightarrow (x^2-3q)^2=8q^2 \rightarrow x^2=(3 \pm2\sqrt{2})q=(1\pm \sqrt{2})^2q$</p> <p>$$x^2=(1\pm \sqrt{2})^2q$$</p> <p>Can you finish?</p>
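<p>For completeness, the remaining step (assuming $q \ge 0$ so that the roots are real; this continuation is not part of the hint above): $$x=\pm\sqrt{(1\pm\sqrt{2})^2\,q}=\pm\left|1\pm\sqrt{2}\right|\sqrt{q},$$ i.e. the four roots are $\pm(1+\sqrt{2})\sqrt{q}$ and $\pm(\sqrt{2}-1)\sqrt{q}$.</p>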
42,258
<p>I have a Table of values e.g. </p> <pre><code>{{x,y,z},{x,y,z},{x,y,z}…} </code></pre> <p>How do I replace the "z" column with a List of values?</p>
bill s
1,783
<p>When I read this question, it seems to be asking for set-delayed. For example, say we want to calculate <code>D[f[t],t]</code> over and over but want to give it a name like <code>q</code>. Then</p> <pre><code>q := D[f[t], t] </code></pre> <p>does this. If you don't have <code>f</code> defined then</p> <pre><code>q (* Derivative[1][f][t] *) </code></pre> <p>If <code>f</code> is defined, say</p> <pre><code>f[t_] := t^2 </code></pre> <p>then</p> <pre><code>q (* 2 t *) </code></pre>
549,585
<p>Let $a_1=1$ and $a_{n+1}=ma_n+10$ for $n\geq 1$. Determine for which values of $m$ the sequence $(a_n)$ is convergent and find the limit.</p> <p>I know that if we pass to the limit we will get $0=L(m-1)+10$, which gives $L=10/(1-m)$, but I have no clue what to do from there.</p>
detnvvp
85,818
<p>If $m=1$, then $a_n=10(n-1)+1$ for all $n\in\mathbb N$, so the sequence does not converge. Suppose that $m\neq 1$. Then, we can prove that $a_{n+1}=m^n+10\frac{m^n-1}{m-1}$ for $n\geq 1$ by induction: this holds for $n=1$, and, if it holds for some $n\geq 1$, then $$a_{n+2}=ma_{n+1}+10=m\left(m^n+10\frac{m^n-1}{m-1}\right)+10=m^{n+1}+10\frac{m^{n+1}-m}{m-1}+10=$$$$m^{n+1}+10\left(\frac{m^{n+1}-m}{m-1}+1\right)=m^{n+1}+10\frac{m^{n+1}-1}{m-1}.$$ Therefore, this holds for all $n\in\mathbb N$, so $$a_{n+1}=m^n+10\frac{m^n-1}{m-1}=\frac{m^{n+1}+9m^n-10}{m-1}.$$ So, $a_n$ converges if and only if $m^{n+1}+9m^n=m^n(m+9)$ converges. So, either $m=-9$, or $m^n$ converges, which happens only if $|m|&lt;1$. Hence $a_n$ converges only when $|m|&lt;1$ or $m=-9$. In all of those cases, you can check that the limit is $\frac{10}{1-m}$.</p>
1,143,889
<p>I am interested in counting the number of hyperedges in the complete, $t$-uniform hypergraph on $n$ vertices which intersect a specified set of $k$ vertices. This is trivial, the answer is:</p> <p>$$\sum_{i=1}^t {k \choose i}{n-k \choose t-i}.$$</p> <p>My questions is whether there is a nice simplification of this expression; I'd like to get rid of the sum if possible. Anyone know?</p> <p>Thanks a lot for the help!</p>
Alijah Ahmed
124,032
<p>We can use the <a href="http://en.wikipedia.org/wiki/Binomial_coefficient">Chu-Vandermonde identity</a> (see Equation 7 in linked page):- $$\sum_{i=0}^t {k \choose i}{n-k \choose t-i} = {n \choose t}$$ so that the sum can be simplified to $$\sum_{i=1}^t {k \choose i}{n-k \choose t-i} = {n \choose t}-{n-k \choose t}$$</p>
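<p>A quick computational check of this simplification over a grid of small parameters (my own Python sketch, using <code>math.comb</code>):</p>
<pre><code>from math import comb

def direct(n, k, t):
    return sum(comb(k, i) * comb(n - k, t - i) for i in range(1, t + 1))

def simplified(n, k, t):
    return comb(n, t) - comb(n - k, t)

# Spot-check the identity on small parameters.
for n in range(1, 13):
    for k in range(n + 1):
        for t in range(n + 1):
            assert direct(n, k, t) == simplified(n, k, t)
print("identity checked for all n, k, t up to 12")
</code></pre>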
173,745
<p>I am trying to prove the following: 'If $(X_1,d_1)$ and $(X_2,d_2)$ are separable metric spaces (that is, they have a countable dense subset), then the product metric space $X_1 \times X_2$ is separable.' It seems pretty straightforward, but I would really appreciate it if someone could verify that my proof works.</p> <p>Since $(X_1,d_1)$ and $(X_2,d_2)$ are separable, they each contain a countable dense subspace, say $D_1 \subset X_1$ and $D_2 \subset X_2$. We will show that $D_1 \times D_2 \subset X_1 \times X_2$ is dense and countable. First, $D_1 \times D_2$ is countable since both $D_1$ and $D_2$ are.</p> <p>Now let $x=(x_1,x_2) \in X_1 \times X_2$ and let $d$ be the product metric on $X_1 \times X_2$ (given by $d(x,y)=(\displaystyle\sum_{i=1}^2 d_i(x_i,y_i)^2)^{1/2}$). We will show that every open ball $B_d(x,\varepsilon)$ containing $x=(x_1,x_2)$ also contains a distinct point of $D_1 \times D_2$. Let $a_1 \in B_{d_1}(x_1,\frac{\sqrt{2}}{2}\varepsilon)\cap D_1$ and let $a_2 \in B_{d_2}(x_2,\frac{\sqrt{2}}{2}\varepsilon)\cap D_2$ (such points exist because $D_1$ and $D_2$ are dense in $X_1$ and $X_2$, respectively.) Letting $a=(a_1,a_2)$, we then have $d(x,a)=(\displaystyle\sum_{i=1}^2 d_i(x_i,a_i)^2)^{1/2})=(d_1(x_1,a_1)^2 + d_2(x_2,a_2)^2)^{1/2} &lt; ((\frac{\sqrt{2}}{2}\varepsilon)^2 + (\frac{\sqrt{2}}{2}\varepsilon)^2)^{1/2}=\varepsilon$, so we have that $a \in B_d(x,\varepsilon)$, so $D_1 \times D_2$ is dense in $X_1 \times X_2$. Then since $D_1 \times D_2 \subset X_1 \times X_2$ is a countable dense subspace of $X_1 \times X_2$, we have that $X_1 \times X_2$ is separable.</p> <p>I can see how this would easily generalize to finite products, but does it also extend to countable products?</p>
Kevin Arlin
31,228
<p>Your proof goes through. For countable products, make the analogous argument using the metric $$d(x,y)=\sum_{i=1}^\infty \frac{1}{2^i}\frac{d_i(x_i,y_i)}{1+d_i(x_i,y_i)}$$ The summands are now bounded by $\frac{1}{2^i}$, so the sum always converges, and defines a metric on the product space.</p> <p>Incidentally, if you take an even larger cardinality in your product, you're no longer guaranteed first countability, in which case you don't get a metric space. The product space in the pure topological category might still be separable, but it's no longer guaranteed. Steen and Seebach, Counterexamples in Topology, is a good place to look for general topology tidbits like this.</p>
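<p>To spell out the density argument for the countable product (a sketch; the base point $p$ below is my own choice): fix $p=(p_1,p_2,\dots)$ with $p_i\in D_i$, where $D_i$ is a countable dense subset of $X_i$, and let $$E=\bigcup_{N\ge 1}\{x : x_i\in D_i \text{ for } i\le N,\ x_i=p_i \text{ for } i&gt;N\},$$ a countable set. Given $x$ and $\varepsilon&gt;0$, choose $N$ with $\sum_{i&gt;N}2^{-i}&lt;\varepsilon/2$, and then $a\in E$ with $d_i(x_i,a_i)&lt;\varepsilon/2$ for $i\le N$. Since $\frac{t}{1+t}\le t$, this gives $d(x,a)&lt;\frac{\varepsilon}{2}\sum_{i\le N}2^{-i}+\sum_{i&gt;N}2^{-i}&lt;\varepsilon$, so $E$ is dense.</p>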
173,745
<p>I am trying to prove the following: 'If $(X_1,d_1)$ and $(X_2,d_2)$ are separable metric spaces (that is, they have a countable dense subset), then the product metric space $X_1 \times X_2$ is separable.' It seems pretty straightforward, but I would really appreciate it if someone could verify that my proof works.</p> <p>Since $(X_1,d_1)$ and $(X_2,d_2)$ are separable, they each contain a countable dense subspace, say $D_1 \subset X_1$ and $D_2 \subset X_2$. We will show that $D_1 \times D_2 \subset X_1 \times X_2$ is dense and countable. First, $D_1 \times D_2$ is countable since both $D_1$ and $D_2$ are.</p> <p>Now let $x=(x_1,x_2) \in X_1 \times X_2$ and let $d$ be the product metric on $X_1 \times X_2$ (given by $d(x,y)=(\displaystyle\sum_{i=1}^2 d_i(x_i,y_i)^2)^{1/2}$). We will show that every open ball $B_d(x,\varepsilon)$ containing $x=(x_1,x_2)$ also contains a distinct point of $D_1 \times D_2$. Let $a_1 \in B_{d_1}(x_1,\frac{\sqrt{2}}{2}\varepsilon)\cap D_1$ and let $a_2 \in B_{d_2}(x_2,\frac{\sqrt{2}}{2}\varepsilon)\cap D_2$ (such points exist because $D_1$ and $D_2$ are dense in $X_1$ and $X_2$, respectively.) Letting $a=(a_1,a_2)$, we then have $d(x,a)=(\displaystyle\sum_{i=1}^2 d_i(x_i,a_i)^2)^{1/2})=(d_1(x_1,a_1)^2 + d_2(x_2,a_2)^2)^{1/2} &lt; ((\frac{\sqrt{2}}{2}\varepsilon)^2 + (\frac{\sqrt{2}}{2}\varepsilon)^2)^{1/2}=\varepsilon$, so we have that $a \in B_d(x,\varepsilon)$, so $D_1 \times D_2$ is dense in $X_1 \times X_2$. Then since $D_1 \times D_2 \subset X_1 \times X_2$ is a countable dense subspace of $X_1 \times X_2$, we have that $X_1 \times X_2$ is separable.</p> <p>I can see how this would easily generalize to finite products, but does it also extend to countable products?</p>
Jonathan Hole
661,524
<p>The answer to your question whether the countable product of separable metric spaces is separable is affirmative. However I think it's useful to prove a slight generalization of this, namely that the countable product of second countable topological spaces is second countable. Indeed, suppose <span class="math-container">$\{X_n\}_{n=1}^{\infty}$</span> is a sequence of second countable topological spaces; for each <span class="math-container">$n$</span> let <span class="math-container">$\mathcal{U}_n$</span> be a countable basis for <span class="math-container">$X_n$</span> containing <span class="math-container">$X_n$</span>. Define <span class="math-container">$\mathcal{U}:=\{\prod\limits_n U_n: U_n\in \mathcal{U}_n \text{ for each }n \text{ and }U_n=X_n \text{ for all but finitely many }n\}.$</span> Then <span class="math-container">$\mathcal{U}$</span> is a countable collection of open sets of <span class="math-container">$\prod\limits_n X_n$</span> (there is an obvious bijection from <span class="math-container">$\mathcal{U}$</span> into the set of <span class="math-container">$\mathbb{N}$</span>-valued functions on finite subsets of <span class="math-container">$\mathbb{N}$</span>). It is easily checked that <span class="math-container">$\mathcal{U}$</span> is a basis for the product topology on <span class="math-container">$\prod\limits_n X_n$</span>, hence this space is second countable. What you wanted to prove now follows from the fact that separable metric spaces are second countable, and second countable spaces, in general, are separable.</p>
3,699,303
<p>Let <span class="math-container">$R$</span> be a commutative ring with <span class="math-container">$1$</span>. This is a simple question, but I can't see whether it is true or not.</p> <p>Suppose <span class="math-container">$R$</span> does not have a "unit" of (additive) order <span class="math-container">$2$</span>, i.e., a unit <span class="math-container">$u$</span> such that <span class="math-container">$u=-u$</span>. Then is it necessarily true that <span class="math-container">$R$</span> does not have an element of order <span class="math-container">$2$</span>?</p>
Ender Wiggins
424,790
<p>No, consider the example <span class="math-container">$R = \mathbb{C}[X]/\langle X^2-1\rangle$</span>. If you denote <span class="math-container">$x = X + \langle X^2-1\rangle$</span> then <span class="math-container">$x^2=1$</span> and hence it has order <span class="math-container">$2$</span> in the multiplicative monoid. </p> <p>On the other hand, <span class="math-container">$R$</span> cannot have a non-zero element <span class="math-container">$p(X)$</span> such that <span class="math-container">$2p(X) + \langle X^2-1\rangle = \langle X^2-1\rangle.$</span> Otherwise, <span class="math-container">$2p(X) \in \langle X^2-1\rangle$</span> which implies that <span class="math-container">$p(X) \in \langle X^2-1\rangle$</span>.</p>
2,812,518
<p>I am self learning calculus and have a problem that may be simple here but I cant find an answer on the web so here it is:</p> <p>If we have the equation:</p> <p>$$y = y^2x$$</p> <p>and differentiate it (implicitly and using product rule) we get:</p> <p>$$\frac{dy}{dx} = \frac{y^2}{1-2xy}.$$</p> <p>However, $y =y^2x$ can be simplified to $1 = xy$, i.e. $y = \frac{1}{x} $, which when differentiated using the power rule gives $\frac{dy}{dx} = -\frac{1}{x^2}$. The two differentials are different, as $\frac{dy}{dx} = -\frac{1}{x^2}$ does not equal $\frac{dy}{dx} = \frac{y^2}{1-2xy}$.</p> <p>Why is this when we are essentially differentiating the same equation?</p> <p>(I plotted the graph on wolfram alpha and they are indeed not the same.)</p>
Eric Wofsey
86,856
<p>Remember that you found that $y=\frac{1}{x}$. So, $$\frac{y^2}{1-2xy}=\frac{1/x^2}{1-2}=-\frac{1}{x^2}$$ and the two answers you have found actually are the same.</p> <p>However, there are some issues with division by $0$ in your second approach. It is <em>not</em> correct to simplify $y=y^2x$ to $1=xy$; that is only valid if $y\neq 0$. So in fact, the graph of the equation $y=y^2x$ consists of the hyperbola $y=\frac{1}{x}$ (along which $\frac{dy}{dx}$ is indeed $-\frac{1}{x^2}$) together with the line $y=0$ (since if $y=0$ the equation $y=y^2x$ is always true). At points on $y=0$, your first answer $\frac{dy}{dx}=\frac{y^2}{1-2xy}$ is correct (and it simplifies to $0$) but your second answer $\frac{dy}{dx}=-\frac{1}{x^2}$ is wrong.</p>
1,144,540
<p>I'm looking to determine the convergence of $\{e^{in}\}_{n=1}^\infty$ in $\mathbb{C}$, where $n \in \mathbb{N}$. I am using the criterion that $z_n \not\to z$ iff $\exists \ \epsilon &gt; 0: \forall \ n_0 \in \mathbb{N}, \exists \ n \ge n_0, |z_n - z| \ge \epsilon$. </p> <p>Using $z_n = e^{in} = \cos{n} + i\sin{n}$, and anticipating that the sequence is divergent, I am looking to show that $|z_n - z| \ge \epsilon$ but I'm not sure how, since $n$ is in radians, not degrees. Thank you in advance for any guidance!</p>
BCLC
140,308
<ol> <li><p>$$\infty &gt; \sum_{n} \lambda_n = \sum_{n} E[X_n] = E[\sum_{n} X_n]$$</p></li> </ol> <p>$$\to \infty &gt; \sum_{n} X_n$$</p> <ol start="2"> <li>$$E[X_n] = Var[X_n] = \lambda_n$$</li> </ol> <p>Use contrapositive of <a href="https://en.wikipedia.org/wiki/Kolmogorov%27s_three-series_theorem" rel="nofollow">Kolmogorov's three-series theorem</a> to show that $\forall A &gt; 0$, one of the three series diverges.</p>
1,785,044
<p>Looking at the picture below, it's easy to see why the perimeter of a polygon inscribed in a circle is an underestimation of the circle's perimeter. This follows from the triangle inequality: Any side (say $AB$) of the polygon is shorter than the circular arc with the same endpoints ($\stackrel{\frown}{AB}$). Summing all these inequalities shows the perimeter of the inscribed polygon is indeed smaller than that of the circle.</p> <p>I'm wondering if there is proof that the perimeter of a circumscribed polygon always overestimates the perimeter of the circle, which is as simple as that of the inscribed polygon case. Thanks!</p> <p><a href="https://i.stack.imgur.com/iYJuW.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iYJuW.gif" alt="enter image description here"></a></p>
Oscar Lanzi
248,217
<p>Pick a point $F$ on one of the arcs of the circle, let us say it's on arc that faces $C$ in your circumscribed quadrilateral. Construct a line segment $GFH$ with $G$ on $BC$ and $H$ on $CD$, tangent to the circle at $F$. $GFH$,being a straight segment, is shorter than $GC+CH$, so the circumscribed pentagon $ABGHD$ has less perimeter than the quadrilateral $ABCD$. Keep adding sides to the polygon by drawing additional tangents and the polygon perimeters will constitute a strictly monotonic decreasing sequence. So the terms of that sequence must be greater than the limiting value which is the circumference of the circle.</p>
1,785,044
<p>Looking at the picture below, it's easy to see why the perimeter of a polygon inscribed in a circle is an underestimation of the circle's perimeter. This follows from the triangle inequality: Any side (say $AB$) of the polygon is shorter than the circular arc with the same endpoints ($\stackrel{\frown}{AB}$). Summing all these inequalities shows the perimeter of the inscribed polygon is indeed smaller than that of the circle.</p> <p>I'm wondering if there is proof that the perimeter of a circumscribed polygon always overestimates the perimeter of the circle, which is as simple as that of the inscribed polygon case. Thanks!</p> <p><a href="https://i.stack.imgur.com/iYJuW.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iYJuW.gif" alt="enter image description here"></a></p>
Richard Hevener
17,411
<p>This proof assumes the fact (sometimes used as a definition) that the circumference of a circle is the least upper bound of the perimeters of all polygons inscribed within it.</p> <p>Def.: If $A$ is a polygon, $P(A)$ is its perimeter. If $C$ is a circle, $P(C)$ is its circumference.</p> <p>Lemma: If $A$ and $B$ are convex polygons with $A \subset B$, then $P(A) \leq P(B)$. (See <a href="http://www.cut-the-knot.org/m/Geometry/PerimetersOfTwoConvexPolygons.shtml" rel="nofollow noreferrer">http://www.cut-the-knot.org/m/Geometry/PerimetersOfTwoConvexPolygons.shtml</a> for a proof.)</p> <p>Now suppose that $B$ is a polygon circumscribed about circle $C$. Let $A$ be a polygon inscribed in $C$. By the lemma, $P(A) \leq P(B)$. Taking the sup over all such polygons $A$ and using our 'fact', we have that $P(C) \leq P(B)$. Now form a second circumscribed polygon $D$ within and smaller than $B$ (one additional side is enough). Then $P(C) \leq P(D) &lt; P(B)$, showing that $P(B)$ does indeed overestimate $P(C)$.</p>
3,176,482
<p>In my Econometrics class yesterday, our teacher discussed a sample dataset that measured the amount of money spent per patient on doctor's visits in a year. This excluded hospital visits and the cost of drugs. The context was a discussion of generalized linear models, and for the purposes of Stata, he found that a gamma distribution worked best. Good enough.</p> <p>But I started thinking about the dataset itself, and the pdf curve "total dollars per patient" might follow. I'm thinking of it as two variables: </p> <p>1) The number of visits per person (k). Let's use a Poisson distribution with λ=1, so p(0)=0.368, p(1)=0.368, p(2)=0.184, etc.</p> <p>2) The cost of each visit (x). Let's use a normal curve with a mean of 50 dollars and a SD of 5 dollars. A person's total expenditure would be the sum of normally distributed variables, or x ~ N(50k, 25k) for k=0 to infinity visits. So a person with no visits would spend nothing, ~.95 of people with one visit would spend between 40 and 60, ~.95 of those with two visits would spend between ~85.86 and ~114.14, three visits between ~132.68 and ~167.32, etc.</p> <p>I think a pdf curve would be some combination of these two functions. My guess is that it would look like a vertical line at 0 and a series of decreasing, ever flattening humps at 50, 100, 150 dollars, etc., where the integral over x in (0, inf.) equals 0.632 (because zero visits has p=0.368).</p> <p>I've taken through Calc III (multivariate/vector calculus), FYI. Thanks.</p>
r.e.s.
16,397
<p>(Expanding on a comment by @Michael, this might help some readers understand the R computations in the accepted answer by @Henry.)</p> <p>Let <span class="math-container">$$Y= \begin{cases} 0 &amp;\text{ if }K=0\\ X_1 + \ldots + X_K&amp;\text{ if }K&gt;0 \end{cases}$$</span> where the <span class="math-container">$X_i$</span> are i.i.d. Normal random variables with mean <span class="math-container">$\mu$</span> and variance <span class="math-container">$\sigma^2,$</span> and <span class="math-container">$K$</span> is a Poisson random variable.</p> <p>Then the CDF of <span class="math-container">$Y$</span> is <span class="math-container">$$\begin{align} P(Y\le y)&amp;=\sum_{k\ge 0}^\infty\, P(Y\le y\mid K=k)\,P(K=k)\\ &amp;=P(Y\le y\mid K=0)\cdot P(K=0)\ +\ \sum_{k\gt 0}^\infty\, P(Y\le y\mid K=k)\, P(K=k)\\ &amp;=[0\le y]\cdot P(K=0)\ +\ \sum_{k\gt 0}^\infty\, P(X_1+\ldots+X_k\le y)\, P(K=k)\\ &amp;=[0\le y]\cdot P(K=0)\ +\ \sum_{k\gt 0}^\infty\, \Phi_{}\left({y-k\mu\over \sigma\sqrt{k}}\right)\, P(K=k)\\ \end{align}$$</span> where [...] are Iverson brackets and <span class="math-container">$\Phi$</span> is the Standard Normal CDF; hence, the density function of <span class="math-container">$Y$</span> is </p> <p><span class="math-container">$$\begin{align} f_{Y}(y)&amp;=\delta(y)\cdot P(K=0)\ +\ \sum_{k\gt 0}^\infty\,\underbrace{\phi\left({y-k\mu\over \sigma\sqrt{k}}\right)\,{1\over\sigma\sqrt{k}} }_{\text{density of Normal$(k\mu,k\sigma^2)$}}\,P(K=k)\\ \end{align}$$</span> where <span class="math-container">$\delta()$</span> is the Dirac delta function and <span class="math-container">$\phi$</span> is the Standard Normal density function.</p> <p>That is, the density function of <span class="math-container">$Y$</span> is the Poisson-weighted sum of a delta function at <span class="math-container">$y=0$</span> together with infinitely many Normal densities with increasing variances <span class="math-container">$1\sigma^2,2\sigma^2,3\sigma^2,...,$</span> centered respectively at <span class="math-container">$y=1\mu,2\mu,3\mu,...$</span> </p>
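<p>A quick Monte Carlo check of this mixture description (my own Python sketch; the parameters match the question's $\lambda=1$, $\mu=50$, $\sigma=5$, and the fact that a sum of $k$ i.i.d. $N(\mu,\sigma^2)$ variables is $N(k\mu,k\sigma^2)$ is used to sample $Y$ directly):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
lam, mu, sigma = 1.0, 50.0, 5.0
size = 200000

k = rng.poisson(lam, size)                  # number of visits per person
y = rng.normal(mu * k, sigma * np.sqrt(k))  # given k, the total is N(k*mu, k*sigma^2)
y[k == 0] = 0.0                             # point mass at zero for people with no visits

print("P(Y = 0):", np.mean(y == 0), "  vs exp(-lam) =", np.exp(-lam))
print("E[Y]    :", y.mean(), "  vs lam*mu =", lam * mu)
</code></pre>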
988,566
<p>For x ∈ ℝ, define ⌊x⌋ by: ⌊x⌋ ∈ ℤ ∧ ⌊x⌋ ≤ x ∧ (∀z ∈ ℤ, z ≤ x ⇒ z ≤ ⌊x⌋).</p> <p>Use this definition to prove or disprove the following with a structured proof technique: ∀x ∈ ℝ, ∀y ∈ ℝ, x > y ⇒ ⌊x⌋ ≥ ⌊y⌋.</p> <p>I understand I need to start by assuming the domain conditions as well as the antecedent, and then work step by step towards the concluding statement, but I'm fairly new to this and could use some help with the thinking part of it.</p>
Ben Grossmann
81,360
<p>It is clear that the matrix is symmetric. It suffices, therefore, to show that its eigenvalues are non-negative.</p> <p>Note that the eigenvalues of $11^T$ are $0$ and $n$. Thus, the eigenvalues of $H$ must be $1-\rho$, and $(1 - \rho) + \rho n = 1 + (n-1) \rho$ respectively. Note that both of these must be non-negative.</p>
1,728,910
<p>Whenever I get this question, I have a hard time with it. </p> <p>An example of a problem:</p> <p>In the fall, the weather in the evening is <em>dry</em> on 40% of the days, <em>rainy</em> on 58% of days and <em>snowy</em> 2% of the days. </p> <p>At noon you notice clouds in the sky. </p> <p>Clouds appear at noon on 10% of the days that have <strong>dry evenings</strong>, </p> <p>25% of the days that have <strong>rainy evenings</strong>, </p> <p>and 35% of the days that have <strong>snowy evenings</strong>.</p> <p>From that I get:</p> <p>$P(S)$ = "prob. snows" = $.02$</p> <p>$P(R)$ = "prob. rains" = $.58$</p> <p>$P(D)$ = "prob. is dry" = $.40$</p> <p>$P(C|S)$ = "prob. that it is cloudy given that it snows" = $.35$</p> <p>$P(C|R)$ = "prob. that it is cloudy given that it rains" = $.25$</p> <p>$P(C|D)$ = "prob. that it is cloudy given that it is dry" = $.10$</p> <p>However, the question given is:</p> <p>Find the probability of snow in the evening given that you saw clouds at noon.</p> <p>I interpret this as:</p> <p>$$P(S|C)$$</p> <p>I can't get it right. Is there a formula to it without having P(C)? </p>
DeepSea
101,504
<p>Hint: $P\left(S \mid C \right) = \dfrac{P\left(C \mid S \right)P\left(S\right)}{P\left(C\right)}$.</p>
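<p>To spell the hint out with the numbers from the question, $P(C)$ comes from the law of total probability: $$P(C)=P(C\mid S)P(S)+P(C\mid R)P(R)+P(C\mid D)P(D)=(.35)(.02)+(.25)(.58)+(.10)(.40)=.192,$$ so $$P(S\mid C)=\frac{(.35)(.02)}{.192}=\frac{.007}{.192}\approx 0.0365.$$</p>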
226,534
<p>I have the following function for which I want to know the range</p> <pre><code>FunctionRange[{1/ 32 (8 - Sqrt[(-8 - 36 Abs[y] (1 + 2 Sqrt[Abs[z]]) - 27 (Abs[y] + 2 Abs[y] Sqrt[Abs[z]])^2)^2 - 64 (1 + Abs[y] + 2 Abs[y] Sqrt[Abs[z]])] + 36 Abs[y] (1 + 2 Sqrt[Abs[z]]) + 27 (Abs[y] + 2 Abs[y] Sqrt[Abs[z]])^2), 0 &lt; Abs[y] &lt; 1 &amp;&amp; 0 &lt; Abs[z] &lt; 1}, {Abs[y], Abs[z]}, t] </code></pre> <p>But once the code starts it just keeps on running with no output. I tried to use MaxValue and MinValue too so as to get the Range but that too doesn't seem to work. Is there any other way to find the range when the above options don't work?</p>
Bob Hanlon
9,362
<pre><code>expr = 1/32 (8 - Sqrt[(-8 - 36 Abs[y] (1 + 2 Sqrt[Abs[z]]) - 27 (Abs[y] + 2 Abs[y] Sqrt[Abs[z]])^2)^2 - 64 (1 + Abs[y] + 2 Abs[y] Sqrt[Abs[z]])] + 36 Abs[y] (1 + 2 Sqrt[Abs[z]]) + 27 (Abs[y] + 2 Abs[y] Sqrt[Abs[z]])^2); </code></pre> <p>Looking at the plot,</p> <pre><code>Plot3D[expr, {y, -1, 1}, {z, -1, 1}, PlotPoints -&gt; 100, MaxRecursion -&gt; 5, ClippingStyle -&gt; None, AxesLabel -&gt; (Style[#, 14, Bold] &amp; /@ {y, z, t}), WorkingPrecision -&gt; 20, PlotRange -&gt; All] </code></pre> <p><a href="https://i.stack.imgur.com/WHk1N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WHk1N.png" alt="enter image description here" /></a></p> <pre><code>max = expr /. y -&gt; 0 (* 1/4 *) min = Min[ Limit[expr, #] &amp; /@ {{y -&gt; -1, z -&gt; -1}, {y -&gt; -1, z -&gt; 1}, {y -&gt; 1, z -&gt; -1}, {y -&gt; 1, z -&gt; 1}}] (* 1/32 (359 - 35 Sqrt[105]) *) min // N (* 0.0111476 *) </code></pre>
259,751
<p>I'm playing this video game where people can get kills, deaths, and assists , and all this is recorded on a stats website. The stats website gives you a rating by directly manipulating these numbers.</p> <p>In the first entry, I have 26 kills, 5 deaths, and 19 assists. The KDA ratio the website gave me was 29.8. At first thought, I guessed that the formula was ratio = (kills*5 + assists)/deaths.</p> <p>But then the second entry threw me off. 21.33 Kills, 1.33 deaths, and 4.33 assists. And the ratio outputted here is 24.58.</p> <p>Is there a good approach to figuring out the formula or function when the inputs and outputs are given?</p>
Dan Brumleve
1,284
<p><a href="http://terrytao.files.wordpress.com/2011/02/polymath.pdf" rel="nofollow">No prime-generating function is known to be computable in polynomial time</a>. However, maybe there is a solution to the "almost-all" case.</p>
167,891
<p>I'm working through the James Stewart Calculus text to prep for school. I'm stuck at this particular point. </p> <p>How would you sketch the graph for the parametric equations: $x = \cos t$, $y = \sin t$, and $z = \sin 5t$? I understand that if it were the case that $z=t$, I'd merely get a helix around the $z$-axis, as $x$ and $y$ form an ellipse. However, I cannot make the leap to solve more exotic problems such as the problem posed or even the case when $z = \ln(t)$.</p> <p>Some help and a push in the right direction would be appreciated.</p>
Zev Chonoles
264
<p>Here is an animation I made that might help. The left is a plot of $(\cos(t),\sin(t),\sin(5t))$ and the right is a plot of $\sin(5t)$.</p> <p><img src="https://i.stack.imgur.com/y0gen.gif" alt="enter image description here"></p> <p>For the case of $(\cos(t),\sin(t),\ln(t))$, here is the corresponding animation:</p> <p><img src="https://i.stack.imgur.com/Ow63m.gif" alt="enter image description here"></p> <p>As a sanity check, note that in each animation, you can see that the point on the circle makes its first full revolution as $t=2\pi\approx 6.28$.</p> <p>Mathematica code for my (and anyone else's) future reference:</p> <pre><code>size = 1.5 slices = 150 Slice[t_,z_] := {Show[ParametricPlot3D[{Cos[2 Pi*s], Sin[2 Pi*s], z}, {s, 0, 1}, PlotRange -&gt; {{-size, size}, {-size, size}, {-size, size}}], Graphics3D[{PointSize[Large], Point[{Cos[t], Sin[t], z}]}]], Show[Plot[Sin[5 s], {s, 0, 2 Pi}, Ticks -&gt; {{0, 2 Pi/5, 4 Pi/5, 6 Pi/5, 8 Pi/5, 2 Pi}}], Graphics[{PointSize[Large], Point[{t, Sin[5 t]}]}]]} NewSlice[t_,z_] := {Show[ParametricPlot3D[{Cos[2 Pi*s], Sin[2 Pi*s], z}, {s, 0, 1}, PlotRange -&gt; {{-size, size}, {-size, size}, {-2, 2}}], Graphics3D[{PointSize[Large], Point[{Cos[t], Sin[t], z}]}]], Show[Plot[Log[s], {s, 0.5, 8}, PlotRange -&gt; {{0, 8}, {-1, 2}}, AspectRatio -&gt; 1/2], Graphics[{PointSize[Large], Point[{t, Log[t]}]}]]} Export["sin.gif", Table[Slice[2 Pi*t/slices, Sin[5*2 Pi*t/slices]], {t, 0,slices}], "DisplayDurations" -&gt; 0.15] Export["ln.gif",Table[NewSlice[t, Log[t]], {t, 0.5, 7.5, 7/slices}], "DisplayDurations" -&gt; 0.15] </code></pre>
1,687,500
<p>Prove that any group of order $15$ is abelian (without the help of Sylow's theorem or its applications).</p> <p>What I have done so far: </p> <p>by the class equation we know that $|G|=|Z|+\sum\frac{|G|}{C(a_i)}$. Now if I can show that $|G|=|Z|$ then the theorem is proved. The order $|Z|$ cannot be $3$ or $5$, because if it were then $G/Z$ would be cyclic, which cannot happen since $G$ is non-abelian. So the only possibilities are $|Z|=1$ or $15$. Now how can I conclude that $|Z|\not=1$?</p>
Shahab
10,575
<p>If there is a non-basic variable $x_r$ such that $z_r - c_r = 0$ (and all others are $\ge 0$) and point 3 (the one about the min ratio) is satisfied, one can still force $x_q$ to leave and $x_r$ to enter. This will not improve the optimal value but will give an alternative solution.</p>
1,687,500
<p>Prove that any group of order $15$ is abelian (without the help of Sylow's theorem or its applications).</p> <p>What I have done so far: </p> <p>by the class equation we know that $|G|=|Z|+\sum\frac{|G|}{C(a_i)}$. Now if I can show that $|G|=|Z|$ then the theorem is proved. The order $|Z|$ cannot be $3$ or $5$, because if it were then $G/Z$ would be cyclic, which cannot happen since $G$ is non-abelian. So the only possibilities are $|Z|=1$ or $15$. Now how can I conclude that $|Z|\not=1$?</p>
fredq
297,080
<h1>1. Let's show $x_q$ can't leave the basis</h1> <h2>Intuitively</h2> <p>If the row $s$ contains all zeros except in the column of $x_q$, it means the problem contains a constraint of the type</p> <p>$$x_q = b_s$$</p> <p>In that case, only solutions with $x_q = b_s$ are feasible, so the variable $x_q$ never leaves the basis.</p> <h2>Mathematically</h2> <blockquote> <p>$x_q$ leaves if there is some non-basic variable $x_r$ that enters because</p> <ol> <li><p>$$z_r - c_r &lt; 0$$</p></li> <li><p>$$z_r - c_r = \min_j (z_j - c_j)$$</p></li> <li><p>$$\frac{b_q'}{a_{qr}'} = \min_i \{\frac{b_i'}{a_{ir}'} | a_{ir}' &gt; 0 \}$$</p></li> </ol> </blockquote> <p>This is true, assuming the problem is about maximization.</p> <p>Now, let's apply this to the case where the row corresponding to $x_q$ contains only zeros except in the column corresponding to $x_q$. You have $$z_q = 1\times c_q = c_q$$ </p> <p>So $z_q- c_q = 0$, which is normal since $x_q$ is a basic variable.</p> <p>Now, let $x_r$ be the entering variable. From the assumptions of the problem, we know that $a'_{qr} = 0$. This implies </p> <p>$$\min_i \{\frac{b_i'}{a_{ir}'} | a_{ir}' &gt; 0 \} \neq \frac{b_q'}{a_{qr}'}$$</p> <p><strong>This means the variable $x_q$ never leaves the basis.</strong></p> <hr> <h1>2. Let's show row $s$ can't change</h1> <h2>Intuitively</h2> <p>Let's call $x_r$ the entering variable, and $x_t$ the exiting variable.</p> <p>Now let's see how the pivot will affect the column of $x_q$.</p> <p>Line $i$ of the new tableau is a linear combination of lines $i$ and $t$ of the old tableau. Since these values are 0 in the column of $x_q$, the column of $x_q$ will never change. The reduced cost of variable $x_q$ will always be 0 and $x_q$ will never leave the basis.</p> <h2>Mathematically</h2> <p>Let's call $x_r$ the entering variable, and $x_t$ the exiting variable.</p> <p>When making the pivot for row $s$, the formula is (assuming $a'_{ij}$ is the new value in the tableau and $a_{ij}$ is the old value):</p> <p>$$a'_{si} = a_{si} - \frac{a_{qr}}{a_{tr}} a_{ti}$$</p> <p>Since $a_{qr} = 0$, the row $s$ stays unchanged.</p>
89,638
<p>I'm trying to solve an exercise as follows:</p> <p>$\alpha, \beta \in \mathbb{C}$ such that $\alpha^{37}=2=\beta^{17}$. Note that both exponents are prime.</p> <p>a) Find the minimum polynomials of $\alpha$ and $\beta$ over $\mathbb{Q}$.</p> <p>Well, I guess that $(x^{37}-2)$ and $(x^{17}-2)$ are irreducible over $\mathbb{Q}$, hence minimum polynomials. But does that follow from $37$ and $17$ being prime, or just from the fact that the polynomial has no zero in $\mathbb{Q}$? How do I write this down rigorously?</p> <p>b) Compute $[\mathbb{Q}(\alpha, \beta):\mathbb{Q}]$, and compute $[\mathbb{Q}(\alpha\beta):\mathbb{Q}]$!</p> <p>My guess here on the first one is that this is equal to $[\mathbb{Q}(\alpha, \beta):\mathbb{Q}(\beta)]\cdot[\mathbb{Q}(\beta):\mathbb{Q}]$ and $[\mathbb{Q}(\alpha, \beta):\mathbb{Q}(\alpha)]\cdot[\mathbb{Q}(\alpha):\mathbb{Q}]$, so the first product is $17x$, the second is $37y$, so the degree of the field extension is at least $17\times 37$, but this is also the maximum number (right?), so $17\times 37$ is the answer? And this would be the same answer as for the last part, $\mathbb{Q}(\alpha \beta)$? How do I write this down nicely?</p>
Community
-1
<p>If $\varphi$ is the characteristic function of $X$, then $\Re(\varphi)$ is the characteristic function of a symmetrized version $X_{\rm sym}$ of $X$. </p> <p>That is, $X_{\rm sym}=\varepsilon X$, where $\varepsilon$ is independent of $X$ and $\mathbb{P}(\varepsilon=+1)=\mathbb{P}(\varepsilon=-1)=1/2$. </p>
2,431,052
<p>Let $M$ be an oriented Riemannian manifold. I have seen the following definition for the Hodge-star operator acting on a differential form. Starting with $\beta\in \Omega^p(M)$ we have $$\alpha \wedge \star \beta = \left&lt;\alpha,\beta\right&gt;\text{vol} ~~\forall \alpha \in \Omega^p(M)$$</p> <p>where $ \left&lt;\alpha,\beta\right&gt; = \left&lt;e^1\wedge\ldots\wedge e^p, f^1\wedge \ldots \wedge f^p\right&gt; = \det[\left&lt;e^i,f^j\right&gt;]$. </p> <p>My question is: if we have a Lie algebra (of a matrix group) valued form $F\in \Omega^p(M,g)$, how do we now define the Hodge star operator? I am asking this question because I want to understand expressions such as $$ F \wedge \star F$$ in the context of Yang-Mills theory. Thank you very much.</p> <p>EDIT: I think my confusion really comes from the question of whether I should use the commutator or the matrix product, as the first answer kindly mentioned. In the case of the Yang-Mills functional, is it meant as $$ L_{YM}= \int_{M}Tr(F\wedge\star F), ~~F = \sum\limits_{j}\omega_j \otimes g_j\in \Omega^2(M,g) \\Tr(F\wedge \star F) = \sum\limits_{j,k}\left&lt;\omega_j, \omega_k\right&gt;\text{vol}~\text{Tr}([g_j,g_k])$$ or is it meant as $$F\in \Omega^2(M,g) \\Tr(F\wedge \star F) = \sum\limits_{j,k}\left&lt;\omega_j, \omega_k\right&gt;\text{vol}~\text{Tr}(g_jg_k).$$ Thank you for your help.</p>
Ted Shifrin
71,348
<p>The Hodge star is not any different. But, depending on the notation you're using, if you have Lie-algebra valued forms, you use wedge on the forms and Lie bracket on the Lie-algebra part. To be explicit, if you have $F= \sum\limits_j \omega_j\otimes v_j$ and $G = \sum\limits_k\eta_k\otimes w_k$, with $\omega_j,\eta_k$ differential forms and $v_j,w_k\in\mathfrak g$, then $F\wedge G = \sum\limits_{j,k} (\omega_j\wedge\eta_k) \otimes [v_j,w_k]$.</p> <p>Some authors will use regular matrix product rather than matrix commutator, when working with matrix-valued forms, so you need to check your particular author.</p>
2,389,581
<p>A town has 2017 houses. Of these 2017 houses, 1820 have a dog, 1651 have a cat, and 1182 have a turtle. If x is the largest possible number of houses that have a dog, a cat, and a turtle, and y is the smallest possible number of houses that have a dog, a cat, and a turtle, then what is the value of x−y.</p>
Math Lover
348,257
<p>You already figured out $x=1182$. Now for computing $y$, we take the following strategy.</p> <p>Without loss of generality, we presume each of the houses 1 to 1820 has a dog. To minimize the 'overlap', each of the remaining houses (1821 to 2017, i.e. 197 houses) contains both a cat and a turtle. So we are left with $1651-197=1454$ cats and $1182-197=985$ turtles.</p> <p>Now the question is to find the minimum number of houses out of the first $1820$ houses that have both a cat and a turtle. It is very easy to figure out that it is $1454+985-1820=619$. So $y=619$, and therefore $x-y=1182-619=563$.</p>
1,887,395
<p>Let $D=\{Z\in\mathbb{C}:|z|&lt;1\}$. Suppose $f:\overline D\to\mathbb{C}$ is continuous, holomorphic in $D$, and $f(\partial D)\subset \mathbb{R}$. Define $h:\mathbb{C}\to\mathbb{C}$ by $$ h(z)=\begin{cases} f(z) &amp;\text{if}\ z\in \overline D\\ \overline{f(1/\overline z)} &amp;\text{if}\ z\in\mathbb{C}\setminus \overline{D} \end{cases}$$ Show that $h$ is entire and $f$ is a constant function.</p> <p>My work: I can show that $h$ is holomorphic in $\mathbb{C}\setminus \overline{D}$. </p> <p>To show $h$ is entire, by using Morera's theorem, considering arbitrary triangular path, the integral of $h$ is zero if it does not cross the boundary of the unit disc, however, what if the triangular path cross the unit disc? How to show in this case the integral is also zero.</p> <p>Could anyone kindly help? Thanks!</p>
Jack D'Aurizio
44,121
<p>We have: $$ -\log(1-x) = \sum_{n\geq 1}\frac{x^n}{n}\tag{1} $$ hence: $$ \frac{-\log(1-x)}{x(1-x)} = \sum_{n\geq 1} H_n x^{n-1} \tag{2} $$ and: $$ \sum_{n\geq 1}H_n\left(\frac{1}{n}-\frac{1}{n+k}\right) = \int_{0}^{1}\frac{-\log(1-x)}{x(1-x)}(1-x^k)\,dx \tag{3}$$ can be easily computed by expanding $\frac{1-x^k}{1-x}$ as a geometric sum, then exploiting: $$ \int_{0}^{1}-\log(1-x)x^{m-1}\,dx = \frac{H_m}{m}\tag{4} $$ and: $$ \sum_{m=1}^{k}\frac{H_m}{m}=\sum_{m=1}^{k}\frac{1}{m}\sum_{1\leq s\leq m}\frac{1}{s} = \frac{H_k^2+H_k^{(2)}}{2}.\tag{5} $$ Putting everything together, $$\boxed{\,\forall k\in\mathbb{N}^+,\qquad \sum_{n\geq 1}\frac{k H_n}{n(n+k)}=\color{red}{\zeta(2)+\frac{H_{k-1}^2+H_{k-1}^{(2)}}{2}}\,}\tag{6} $$</p>
2,415,957
<p>I would like to prove the Second Isomorphism Theorem in an arbitrary abelian category without appealing to Mitchell's Embedding Theorem.</p> <p>Given two subobjects $M', M''$ of an object $M$ in an abelian category $\mathcal A$, we define the intersection $M'\cap M'' := \ker(M\rightarrow M/M' \oplus M/M'')$ and the sum $M'+M'' := \text{im}(M'\oplus M''\rightarrow M)$, where these morphisms are all canonical.</p> <p>I want to show that $(M'+M'')/M' \cong M''/M'\cap M''$.</p> <p>We get a map $M'' \rightarrow (M'+M'')/M'$ by injecting $M''$ into $M'\oplus M''$ and using the universal property of the image to get a map to $M'+M''$ and then composing with the projection.</p> <p>I don't know where to go from here though. I know that for any map $f$ we have $\text{coker}\ker f = \ker\text{coker} f$, and I realise I probably need to use this since the first isomorphism theorem is used to prove the second in the case of modules, but I'm stuck and all the arrows are confusing me.</p>
emma
406,460
<p>An important identity in trigonometry is that $\sin^2 \theta + \cos^2 \theta = 1$. One way to see that this is true, at least for $0^{\circ} \leq \theta \leq 90^{\circ}$, is to draw a right triangle. Choose one of the acute angles as $\theta$, and label the leg opposite that angle as $o$, the side next to that angle as $a$, and the hypotenuse as $h$. Then from the Pythagorean Theorem we have $o^2 + a^2 = h^2$, from which we conclude $\frac{o^2}{h^2} + \frac{a^2}{h^2} = \frac{h^2}{h^2}$, or $\sin^2 \theta + \cos^2 \theta = 1$. Does that help?</p> <p>Note that you will need to know what quadrant $\theta$ is in in order to know whether $\sin \theta$ should be positive or negative.</p>
2,415,957
<p>I would like to prove the Second Isomorphism Theorem in an arbitrary abelian category without appealing to Mitchell's Embedding Theorem.</p> <p>Given two subobjects $M', M''$ of an object $M$ in an abelian category $\mathcal A$, we define the intersection $M'\cap M'' := \ker(M\rightarrow M/M' \oplus M/M'')$ and the sum $M'+M'' := \text{im}(M'\oplus M''\rightarrow M)$, where these morphisms are all canonical.</p> <p>I want to show that $(M'+M'')/M' \cong M''/M'\cap M''$.</p> <p>We get a map $M'' \rightarrow (M'+M'')/M'$ by injecting $M''$ into $M'\oplus M''$ and using the universal property of the image to get a map to $M'+M''$ and then composing with the projection.</p> <p>I don't know where to go from here though. I know that for any map $f$ we have $\text{coker}\ker f = \ker\text{coker} f$, and I realise I probably need to use this since the first isomorphism theorem is used to prove the second in the case of modules, but I'm stuck and all the arrows are confusing me.</p>
user577215664
475,762
<p>$$ \sin^2 \theta = 1 - \cos^2 \theta = 1 - \frac 1 9 = \frac 8 9 $$ $$ \sin \theta = \sqrt \frac 8 9 = \frac { 2 \sqrt 2} {3} $$</p> <p>Once you have one of the trigonometric functions you can get all of them, e.g. $\tan \theta$ or $\cos \theta$.</p>
186,395
<p>I want to write a module to find the integer combinations for a multi-variable formula. For example</p> <p>$8x + 9y \le 124$</p> <p>The module will return all possible positive integers for $x$ and $y$, e.g. $x=2$, $y=12$. It does not have to be exactly $124$; it could be any number less than or equal to $124$, but it must be as close as possible to $124$ if no exact solution can be found.</p> <p>I do not want to solve this by brute force, as the number of variables could be anything $(5,10,100,\ldots,n)$.</p> <p>Is there an algorithm that could solve this?</p>
Gerry Myerson
8,269
<p>Given a finite set $U$ of positive integers, and positive integers $B$ and $K$, the problem of determining whether there are non-negative integers $a(u)$ such that $K\le\sum_{u{\rm\ in\ }U}a(u)u\le B$ is NP-complete (which implies that it is computationally infeasible now and, perhaps, forever). This is discussed under the heading Integer Knapsack on page 247 of Garey and Johnson, Computers and Intractability. As has been pointed out already, it's easy if $\#U=2$. </p>
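<p>For a fixed bound such as $124$, one practical route (a rough sketch, not part of the result above; the function name and interface are made up) is a pseudo-polynomial dynamic program over the reachable totals $0,1,\ldots,B$. This does not contradict the hardness statement, since the running time grows with the magnitude of $B$:</p>

<pre><code>def closest_combination(coeffs, bound):
    # parent[s] = (previous total, index of coefficient used); None marks the start
    parent = {0: None}
    for s in range(bound + 1):
        if s not in parent:
            continue
        for i, c in enumerate(coeffs):
            t = s + c
            if t &lt;= bound and t not in parent:
                parent[t] = (s, i)
    best = max(parent)                     # largest reachable total &lt;= bound
    counts = [0] * len(coeffs)
    s = best
    while parent[s] is not None:
        s, i = parent[s]
        counts[i] += 1
    return best, counts

print(closest_combination([8, 9], 124))    # best total 124 and one multiplier vector achieving it
</code></pre>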
301,318
<p>As the title states, what is the meaning of "infinitely many"? When we say a set contains infinitely many elements, does this mean we cannot finish counting all the elements in the set? Is "infinitely many" the same as $\forall$? </p>
Asaf Karagila
622
<p>No, $\forall$ means that <em>for all</em> elements. Infinitely many just means there are infinitely many elements with a certain property.</p> <p>For example $A=\{p\in\mathbb N\mid p\text{ is a prime and } p\equiv 1\pmod 4\}$ contains <em>infinitely many</em> prime numbers, but it is not true that all the prime numbers are in $A$, or that all the natural numbers are in $A$. Therefore the property defining $A$ holds for infinitely many elements, but hardly for all of them.</p> <p>Another example would be $\mathbb R$ with the $\leq$ relation and $0$. For infinitely many numbers it holds that $0\leq x$, but hardly $\forall x.0\leq x$.</p>
301,318
<p>As the title states, what is the meaning of "infinitely many"? When we say a set contains infinitely many elements, does this mean we cannot finish counting all the elements in the set? Is "infinitely many" the same as $\forall$? </p>
Gautam Shenoy
35,983
<p>You have to understand this from basic definitions.</p> <p>Firstly, a set A is said to be infinite if it is "not finite" or, equivalently, if there exists a bijection from A to a <strong>strict</strong> subset of itself. Additionally, we don't say a set contains infinitely many elements; instead we say it is infinite. We use the phrase "infinitely many" in the following way, for example: </p> <p>"A has infinitely many elements satisfying property P"</p> <p>This is NOT synonymous with $\forall$, as the other answers have explained.</p> <p>So what does finite mean? A set A is finite if $\exists K \in \mathbb{N}$ such that there is a bijection from A to $\{1,2,\ldots,K\}$.</p>
531,895
<p>Assume $K$ is a field that is not algebraically closed. How does one prove that for any $m>0$ there is a polynomial in $m$ variables over $K$ that possesses a unique zero?</p>
Hagen von Eitzen
39,174
<p>As $K$ is not algebraically closed, there exists $p\in K [X]$ with no root in $K$ and $\deg p\ge1$. Let $f(X,Y)=Y^{\deg p}p\left(\frac XY\right)$, that is, if $\deg p=d$ and $$p(X)=a_0+a_1X+\ldots +a_dX^d $$ we let $$ f(X,Y)=a_0Y^d+a_1XY^{d-1}+\ldots +a_dX^d. $$</p> <p>Then clearly $f(0,0)=0$. If $v\ne 0$ then $f(u,v)=v^dp(\frac uv)\ne 0$ and if $v=0, u\ne 0$ then $f(u,v)=a_du^d\ne 0$. Thus $f(u,v)=0\iff u=v=0$. For $m\ge 2$ let $f_2=f$ and recursively $f_{m+1}(X_1,\ldots, X_{m+1}) = f(f_m(X_1,\ldots,X_m),X_{m+1})$. Then it follows quickly by induction that for $u_1,\ldots ,u_m\in K$ we have $f_m(u_1,\ldots ,u_m)=0\iff u_1=\ldots =u_m=0$.</p>
1,283,541
<p>Find the least value of $f(x)=3^{-x+1} + e^{-x-1}$.</p> <p>I tried to use the maxima/minima concept but it was of no use. Please help.</p>
WimC
25,313
<p>On some suitable interval around $x$ $$1\geq \cos(x) \geq \cos(x) - x \sin(x)\geq 0.$$ Now integrate on this interval to find for $x\geq 0$ $$x \geq \sin(x) \geq x \cos(x)$$ and the opposite inequalities for $x\leq 0$.</p>
89,622
<p>I'm having a hard time trying to understand a proof of the Principle of Local Reflexivity. I'm following the proofs from </p> <p>1) Topics in Banach Space Theory (by Fernando Albiac, Nigel J. Kalton) 2) <a href="http://www.math.tamu.edu/~schlump/sofar.pdf" rel="nofollow">http://www.math.tamu.edu/~schlump/sofar.pdf</a> (some notes from Professor Schlump)</p> <p>The proof in 2) is basically the same as in 1), but with further details. I don't quite get why $\ker S$ is contained in $\ker S_1$. And I don't get why that implies that there is a $T$ such that $S_1 = TS$. </p> <p>Can someone enlighten me? I would really appreciate it!</p>
Bill Johnson
2,554
<p>Enlarge $G$, if necessary, so that the restriction mapping from $F$ to $G^*$ is one to one. Then what you want is completely obvious from the equality 4 lines from the bottom of p. 274 of [AK]. (The proof is correct without this step, but enlarging $G$ makes it clear without thinking.)</p>
3,004,767
<p>If <span class="math-container">$\displaystyle\lambda_n = \int_{0}^{1} \frac{dt}{(1+t)^n}$</span> for <span class="math-container">$n \in \mathbb{N}$</span>. Then prove that <span class="math-container">$\lim_{n \to \infty} (\lambda_{n})^{1/n}=1.$</span></p> <p><span class="math-container">$$\lambda_n=\int_{0}^{1} \frac{dt}{(1+t)^n}= \frac{2^{1-n}}{1-n}-\frac{1}{1-n}$$</span></p> <p>Now if we use L'Hôpital's rule, then it gets cumbersome. Is there any short method? Thank you.</p>
Yiorgos S. Smyrlis
57,021
<p>Actually, <span class="math-container">$$ \int_0^1 \frac{dt}{(1+t)^n}=\left.\frac{1}{1-n}\frac{1}{(1+t)^{n-1}}\,\right|_0^1=\frac{1}{n-1}-\frac{2^{-n+1}}{n-1} $$</span> and hence, for all <span class="math-container">$n&gt;1$</span> <span class="math-container">$$ \frac{1}{2(n-1)}&lt;\int_0^1 \frac{dt}{(1+t)^n}&lt;\frac{1}{n-1}. $$</span> Next, observe that <span class="math-container">$$ \lim_{n\to\infty}\left(\frac{1}{2(n-1)}\right)^{1/n}=\lim_{n\to\infty}\left(\frac{1}{n-1}\right)^{1/n}=1. $$</span></p>
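<p>A quick numerical sanity check of this squeeze, using the closed form of the integral (just a verification script, not part of the argument):</p>

<pre><code>for n in (2, 10, 100, 1000, 100000):
    lam = (1 - 2.0 ** (1 - n)) / (n - 1)   # closed form of the integral for n &gt; 1
    print(n, lam ** (1.0 / n))             # tends to 1 as n grows
</code></pre>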
1,080,572
<p>I am having a problem with this question; kindly help me with it.</p> <p>Let $$S = \{x : x^{6} -x^{5} \leq 100\}$$</p> <p>and $$T =\{x^{2} - 2x : x \in ( 0, \infty)\}.$$</p> <p>I have to show that the set $S \cap T$ is closed and bounded. $S$ appears to be closed and bounded, since it is defined by $x^{6}-x^{5} \leq 100$.</p>
agha
118,032
<ol> <li>The set $S \cap T$ is bounded. Let $x \in S \cap T$. Then $x \in T$, so $x \geq -1$ (because $(t-1)^2=t^2-2t+1 \geq 0$ for all $t \in \mathbb{R}$), and $x \in S$, so $x \leq 101$ (because for $t&gt;101$ we have $t^6-t^5=t^5(t-1)&gt;100$). So $x \in [-1,101]$, thus $S \cap T \subseteq [-1,101]$.</li> <li>The set $S \cap T$ is closed. Note that $S$ is the preimage of the set $(-\infty,100]$ under the continuous function $f(t)=t^6-t^5$, so $S$ is closed. The set $T$ can be written as $[-1,\infty)$, because the continuous function $f(t)=t^2-2t$ reaches its minimum $-1$ at $t=1$ and $\lim_{t \to \infty}f(t)=\infty$. The sets $S$ and $T$ are closed, so $S \cap T$ is also closed.</li> </ol>
121,631
<p>How to prove most simply that if a polynomial $f$ has only real coefficients and $f(c)=0$, and $k$ is the complex conjugate of $c$, then $f(k)=0$?</p>
Martin Argerami
22,857
<p>You use the fact that the coefficients of $f$ are real to show that $$ f(\overline c)=\overline{f(c)}. $$</p>
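<p>Spelled out, writing $f(x)=\sum_{j=0}^n a_j x^j$ with every $a_j\in\mathbb{R}$ (so $\overline{a_j}=a_j$): $$f(\overline c)=\sum_{j=0}^n a_j\,\overline{c}^{\,j}=\sum_{j=0}^n \overline{a_j c^j}=\overline{\sum_{j=0}^n a_j c^j}=\overline{f(c)},$$ and hence $f(k)=f(\overline c)=\overline{f(c)}=\overline{0}=0$.</p>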
1,317,974
<p>Consider the affine toric variety $V \subset k^{5}$ parametrized by $$\Phi(s,t,u) = (s^{4},t^{4},u^{4},s^{8}u,t^{12}u^{3}) \in k^{5}$$ where k is an algebraically closed field of characteristic 2. This is problem 1.1.8 from Cox, Little, and Schenck, but my question regards a more general notion: How exactly does one determine the generators of the toric ideals. Just by looking at the parametrization, I can deduce the generators $x_{4}x_{5} - x_{1}^{2}x_{2}^{3}x_{3}, x_{1}^{8}x_{3} - x_{4}^{4}$, and $x_{5}^{4} - x_{2}^{12}x_{3}^{3}$ </p> <p>Would these then generate the toric ideal? In general, is there a way to determine whether the toric ideal is the correct one other than just looking at the relations?</p> <p>Thanks!</p>
Viktor Vaughn
22,912
<p>If you haven't already, you should consult Cox, Little, and O'Shea's <a href="http://www.cs.amherst.edu/~dac/iva.html" rel="nofollow">Ideals, Varieties, and Algorithms</a>: they treat this exact problem in Chapter 3 Elimination, particularly in $\S3$ Implicitization. The main result is the following theorem.</p> <p><strong>Theorem 1 (Polynomial Implicitization)</strong> <em>If $k$ is an infinite field, let $F: k^m \to k^n$ be the function determined by the polynomial parametrization \begin{align*} x_1 &amp;= f_1(t_1, \ldots, t_m)\\ &amp;\ \, \vdots\\ x_n &amp;= f_n(t_1, \ldots, t_m) \, . \end{align*} Let $I = \langle x_1 - f_1, \ldots, x_n - f_n \rangle$ and let $I_m = I \cap k[x_1, \ldots, x_n]$ be the $m^\text{th}$ elimination ideal. Then $\mathbb{V}(I_m)$ is the smallest variety in $k^n$ containing $F(k^m)$.</em></p> <p>For your particular problem, I used <a href="http://www.sagemath.org/" rel="nofollow">Sage</a> to compute a Gröbner basis.</p> <pre><code>R.&lt;s,t,u,x1,x2,x3,x4,x5&gt; = PolynomialRing(QQ,8, order = 'lex') I_gens = [x1 - s^4, x2 - t^4, x3 - u^4, x4 - s^8*u, x5 - t^12*u^3] I = ideal(I_gens) basis = I.groebner_basis() </code></pre> <p>By another theorem, the generators contained in this basis only involving the variables $x_1, \ldots, x_5$ form a Gröbner basis for the elimination ideal. In this case, they are $$ \{x_1^8 x_3 - x_4^4, x_1^6 x_5 - x_2^3 x_4^3, x_1^4 x_5^2 - x_2^6 x_3 x_4^2, x_1^2 x_2^3 x_3 - x_4 x_5, x_1^2 x_5^3 - x_2^9 x_3^2 x_4, x_2^{12} x_3^3 - x_5^4\}. $$ In particular, this shows that the polynomials you found do not generate $\mathbb{I}(V)$, since for instance the second generator $g_2 = x_1^6 x_5 - x_2^3 x_4^3 \in \mathbb{I}(V)$ but $g_2 \notin \langle x_{4}x_{5} - x_{1}^{2}x_{2}^{3}x_{3}, x_{1}^{8}x_{3} - x_{4}^{4}, x_{5}^{4} - x_{2}^{12}x_{3}^{3} \rangle$.</p>
921,868
<p>I'm not quite sure whether this question belongs here, because it has no definite answer. But I'll give it a shot. If any of the mods objects, then I will, of course, respectfully delete this contribution.</p> <p>A friend of mine, who knows a little arithmetic and only the bare rudiments of algebra, has asked me to explain what it is exactly that mathematicians do. Also, she wants to know what a mathematical proof is.</p> <p>Now I have this idea of presenting her with two sorts of mathematical 'artefacts':</p> <p>A. A few non-trivial (and preferably striking or beautiful) theorems whose proofs are easy and brief enough for my friend to understand. Euclid's proof of the infinitude of the primes, or Cantor's of the uncountability of the real numbers, would fit this category. But I would like something a bit off the beaten track, that's not been done to death in thousands of 'popular' books.</p> <p>B. A couple of results whose proofs are not necessarily easy, but which illustrate nicely how mathematicians deal with heuristics and with discovering new theorems. A few immediately plausible yet striking conjectures would be nice, too. Unfortunately, I can't think of any concrete examples off the top of my head right now. (Edit: the four color theorem comes to mind.) </p> <p>Your suggestions will be warmly appreciated.</p>
user164587
164,587
<p>A useful introduction to proof by induction is to take a $2 \times 2$ square with one corner removed as in the diagram below.</p> <p><img src="https://i.stack.imgur.com/D2tji.png" alt="The basic building block"></p> <p>The problem is to prove that using these shapes, one can cover any $2^n \times 2^n$ square with one corner removed as in the next diagram.</p> <p><img src="https://i.stack.imgur.com/l8GxR.png" alt="The 4x4 space"></p> <p>The inductive step: split a $2^{n+1} \times 2^{n+1}$ square with one corner removed into four $2^n \times 2^n$ quadrants, and place one piece at the center so that it covers the corner cell of each of the three quadrants that do not contain the missing corner. Each quadrant is then a $2^n \times 2^n$ square with one corner removed, which the induction hypothesis covers.</p>
2,696,097
<p>$ \lim_{n\to \infty} \lim_{x\to0} \left(1+\tan^2(x)+\tan^2(2x)+ \cdots + \tan^2(nx)\right)^{\frac{1}{n^3x^2}} $</p> <p>The answer should be $ {e}^\frac{1}{3} $.</p> <p>I haven't encountered problems like this before and I'm pretty confused, thank you.</p> <p>I guess we must use the well-known limit of $ \frac{\tan(x)}{x} $ as $ x $ approaches $ 0 $, by dividing and then multiplying by $x$.</p>
G Tony Jacobs
92,129
<p>Suppose $h\in H$ and $k\in K$. Where is $hk$? If $H$ and $K$ are both subgroups, with $H\cup K=G$ and $H\cap K=\{e\}$, then we have a problem. Do you see why?</p>
2,385,188
<p>$c_R,R_f$ are known (the constant part of the return and the risk-free rate). Let $R=R_f(e^r-1)=c_R+\epsilon_R$ with $r\sim N(\mu_r,\sigma_r)$. How does one specify the distribution of $\epsilon_R\sim \mathrm{Lognormal}(?,?)$ in terms of the mean and variance of the underlying normal distribution?</p>
Graham Kemp
135,106
<p>The implication/conditional: "$P\to Q$" means "If $P$, then $Q$" .</p> <p>It is a guarantee that $Q$ is true whenever $P$ is true.</p> <p>When $P$ is false, we might say the guarantee is <em>counterfactual</em>, but it is not <em>falsified</em>. &nbsp; If the guarantee is not false, what is it?</p> <p>The guarantee is only considered <em>falsified</em> when $P$ is true while $Q$ is false.</p> <p>So we consider the implication to be true when either $P$ is false or $Q$ is true. &nbsp; That is $\neg P\vee Q$.</p>
2,104,382
<p>I was trying to understand the proof of a theorem from Munkres's book. He wants to prove that a compact subset $Y$ of an Hausdorff space $X$ is closed. He simply proves that, for all $x\in X-Y$, there is a neighborhood which is disjoint from $Y$. Why is this sufficient to conclude that $X-Y$ is open?</p>
Kendall
400,051
<p>You can also prove this by showing that the closure of Y in X is open. </p>
3,346,543
<p>I am aware this is a pretty big topic, but the attempts at layman's explanations I have seen either barely provide commentary on the formal proofs, or fail to provide an explanation (e.g "it gets too complex" does not really say anything)</p> <p>Is there a good intuitive explanation as to why we fail to obtain a general solution for a 5th+ degree polynomial, and why this happens at the 5th degree and not below or above?</p>
IamKnull
610,697
<p>There is an intuitive explanation given <a href="https://math.stackexchange.com/a/1733136/610697">here</a> through Galois Theory.</p>

<p>You have to take the Galois group of the equation, the group of automorphisms of the <em>splitting field</em> of the polynomial, for a polynomial of any degree <span class="math-container">$n$</span>. By definition of an automorphism, it maps a root of the equation to another root, so the Galois group is a subgroup <span class="math-container">$G$</span> of <span class="math-container">$S_n$</span> (the symmetric group).</p>

<p>Galois Theory assigns, to each polynomial, a mathematical structure called a group. A polynomial is solvable in radicals (that is, you can write down its roots in terms of its coefficients, the 4 arithmetical operations, and square roots, cube roots, etc.) if and only if the corresponding group is a "solvable" group. The definition of solvable group won't mean much to you if you haven't done a course in group theory; there should be a sequence of groups, starting with the trivial group and ending with the group corresponding to the polynomial, such that each group in the sequence is a "normal" subgroup of the next group, and the "quotient" of each group by the previous group is "commutative". But for <span class="math-container">$n\ge 5$</span> no such sequence exists: the only proper nontrivial normal subgroup of <span class="math-container">$S_n$</span> is <span class="math-container">$A_n$</span>, which is simple and, of course, not abelian. Hence <span class="math-container">$S_n$</span>, which is the Galois group of the general equation of degree <span class="math-container">$n$</span>, is not solvable.</p>
225,730
<p>I am reading a paper "2-vector spaces and groupoid" by <a href="http://arxiv.org/abs/0810.2361" rel="noreferrer">Jeffrey Morton</a> and I need help understanding the following.</p> <p>Let $X$ and $Y$ be finite groupoids. Let $[X, \mathbb{Vect}]$ be the functor category from $X$ to $\mathbb{Vect}$. Suppose we have a functor $f: Y \rightarrow X$. The author defines a pushforward $f_{*} :[Y, \mathbb{Vect}] \rightarrow [X, \mathbb{Vect}]$ as follows.</p> <p>For each object $x \in X$, the comma category $(f \downarrow x)$ has objects which are objects $y \in Y$ with maps $f(y) \rightarrow x$ in $X$, and morphisms which are morphisms $a: y \rightarrow y'$ whose images make the evident triangle in $X$ commute.</p> <p>Then the author defines, for each $F\in [Y, \mathbb{Vect}]$, $f_{*}(F)(x):=\operatorname{colim} F(f \downarrow x)$.</p> <p>He also shows that $f_{*}(F)(y)=\bigoplus_{f(x)\cong y} \mathbb{C}[Aut(y)]\otimes_{\mathbb{C}[Aut(x)]}F(x)$ and similarly calculates $f^*f_*$ and $f_*f^*$.</p> <p>Later he says that this description accords with the usual description of these functors in the left adjunction. Then he mentions that the right adjoint is given as</p> <p>$f_*F(x)= \bigoplus_{[y], f(y)\cong x}hom_{\mathbb{C}[Aut(x)]}(\mathbb{C}[Aut(y)], F(y)).$</p> <p>Then he says:</p> <blockquote> <p>The Nakayama isomorphism gives the duality between the two descriptions of $f_*$, in terms of $hom_{\mathbb{C}[Aut(x)]}$ and $\otimes_{\mathbb{C}[Aut(x)]}$ by means of the exterior trace map.</p> </blockquote> <p>I understood the calculation of $f_*F(x)$ using the colimit defined above. But I don't know what is meant by left or right adjoint. Also I don't know how to get the formula for the right adjoint. I don't even know what the Nakayama isomorphism is. I searched for it but I couldn't find a good resource.</p> <p>So I would like to know what is going on here. In particular, I'd like to know:</p> <ol> <li><p>Why is the first construction using a colimit called "left adjoint"?</p></li> <li><p>What is the right adjoint, how is it defined, and how does one calculate it to get the formula above?</p></li> <li><p>What is the Nakayama isomorphism?</p></li> </ol> <p>I have never studied these things so I don't know where to look. I also want good references.</p> <p>I appreciate any help. Thank you in advance.</p>
Qiaochu Yuan
232
<p>You should learn something about the representation theory of finite groups. It suffices to understand the case that $X, Y$ have one object (note that the definition is as a direct sum of things that happen at one object); call those objects $x, y$ and let $G = \text{Aut}(x), H = \text{Aut}(y)$. Then $f$ may be regarded as a morphism $f : H \to G$. The functor categories $[X, \text{Vect}]$ and $[Y, \text{Vect}]$ are precisely the categories $\text{Rep}(G), \text{Rep}(H)$ of representations of $G$ and $H$ respectively. The pullback functor</p> <p>$$f^{\ast} : \text{Rep}(G) \to \text{Rep}(H)$$</p> <p>is known in representation theory as <strong><a href="http://en.wikipedia.org/wiki/Restriction_of_scalars">restriction</a></strong> and commonly denoted $\text{Res}_H^G$. Restriction happens to have two equivalent descriptions, and it will be cleaner for me to describe these by working in slightly greater generality. Namely, let $f : R \to S$ be a homomorphism of rings. $f$ induces a pullback functor</p> <p>$$\text{Res}_R^S : S\text{-Mod} \to R\text{-Mod}$$</p> <p>from the category of left $S$-modules to the category of left $R$-modules. (Recall that the category of left $R$-modules is the functor category $[R, \text{Ab}]$). When $R = \mathbb{C}[H], S = \mathbb{C}[G]$ are group algebras we recover the picture above. </p> <p>Restriction can be described in two ways. The first description is (writing $_RM_S$ for an $(R, S)$-bimodule $M$)</p> <p>$$_S M \mapsto \text{Hom}_S(_S S_R, _SM)$$</p> <p>where $_S S_R$ is $S$ regarded as a left $S$-module in the obvious way and as a right $R$-module via the map $f$ above, and</p> <p>$$_S M \mapsto _R S_S \otimes_S M$$</p> <p>where $_R S_S$ is $S$ regarded as a <em>right</em> $S$-module in the obvious way and as a left $R$-module via the map $f$ above. Via the <strong><a href="http://en.wikipedia.org/wiki/Tensor-hom_adjunction">tensor-hom adjunction</a></strong>, this shows that restriction has both a left and a right adjoint. The left adjoint is called <strong><a href="http://en.wikipedia.org/wiki/Restriction_of_scalars">extension of scalars</a></strong> in module theory and <strong>induction</strong> in group theory and is given by</p> <p>$$_R N \mapsto _S S_R \otimes_R N.$$</p> <p>In group theory it is usually denoted $\text{Ind}_H^G$. </p> <p>I don't know if the right adjoint has a standard name in module theory. In group theory I think it is sometimes called <strong>coinduction</strong> (so maybe in module theory it should be called <strong>coextension</strong>?) and is given by</p> <p>$$_R N \mapsto \text{Hom}_R(_R S_S, _R N).$$</p> <p>For an arbitrary map $f : R \to S$ of rings, there is no reason to expect these two functors to be naturally isomorphic. However, if $f : \mathbb{C}[H] \to \mathbb{C}[G]$ is a map on group algebras induced by a group homomorphism, then induction and coinduction are naturally isomorphic (which is why you won't see textbooks on the representation theory of finite groups talk about coinduction). 
The explicit isomorphism between them is what Morton calls the Nakayama isomorphism; it is an isomorphism</p> <p>$$\text{Nak} : \text{Hom}_{\mathbb{C}[H]}(\mathbb{C}[G], V) \to \mathbb{C}[G] \otimes_{\mathbb{C}[H]} V$$</p> <p>(I am dropping the bimodule subscripts for ease of notation) given by</p> <p>$$\text{Hom}_{\mathbb{C}[H]}(\mathbb{C}[G], V) \ni \phi \mapsto \frac{1}{|H|} \sum_{g \in G} g^{-1} \otimes \phi(g) \in \mathbb{C}[G] \otimes_{\mathbb{C}[H]} V.$$</p> <p>From a representation-theoretic perspective, the existence of this isomorphism merely reflects that $\mathbb{C}[G]$, as a representation of $H$, is self-dual, and this in turn follows from the fact that it has real character. But the Nakayama isomorphism is a preferred isomorphism for reasons I do not completely understand. </p>
542,397
<p>Consider the sequence $\{a_n\}_1^\infty$ such that $a_n = \frac{n^m + 1}{n^{m+1} + 1}$ and $m \in \mathbb{R}$.</p> <p>EDIT: This is incorrect for $m &lt; -1$, so add the condition $m \geq -1$.</p> <p>I want to show this sequence is monotonically decreasing.<br> This is not homework; I've seen many specific examples of this and graphed many cases, and I'd like to show it for the general case.</p> <p>I've tried considering the continuous case and taking the derivative. Using the quotient rule I get a square in the denominator, and since I'm only interested in the sign of the derivative I can throw away the denominator; I then have:</p> <p>$mx^{m-1}(x^{m+1} + 1) - (m+1)x^m(x^m + 1)$</p> <p>Plugging in $x = 1$ gives a negative result for the derivative, showing the continuous function is decreasing at that point, so I get a feeling my "conjecture" is correct. <br> Not sure where I can go from here, or if there's a simpler way without going to the continuous case. Expanding the brackets didn't help much either.</p>
Prahlad Vaidyanathan
89,789
<p>Take $$ f(x) = \frac{x^m + 1}{x^{m+1} + 1} $$</p> <p>When you differentiate, your numerator should be $$ (x^{m+1} +1)mx^{m-1} - (x^m+1)(m+1)x^m = -x^{m-1}(x^{m+1} + (m+1)x - m) $$ Now, $x^{m+1} + (m+1)x - m &gt; 0$ for large values of $x$, so $a_n$ is <em>eventually</em> decreasing. However, a little checking with small values of $x$ should tell you that it is decreasing everywhere.</p>
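<p>A quick numerical check (only a sanity test, not a proof) that $a_n$ is decreasing for a few values of $m \geq -1$:</p>

<pre><code>def a(n, m):
    return (n ** m + 1) / (n ** (m + 1) + 1)

for m in (-1, 0, 0.5, 2, 5):
    seq = [a(n, m) for n in range(1, 50)]
    print(m, all(x &gt; y for x, y in zip(seq, seq[1:])))   # True for each m tested
</code></pre>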
3,150,645
<p>Suppose <span class="math-container">$\dim V = \dim W$</span>. Given <span class="math-container">$\phi:V\rightarrow V$</span> and <span class="math-container">$\psi:W\rightarrow W$</span> when do we have bijective linear maps <span class="math-container">$a,b:V\rightarrow W$</span> such that <span class="math-container">$$\require{AMScd} \begin{CD} V @&gt;\phi&gt;&gt; V \\ @VaVV @VVbV \\ W @&gt;\psi&gt;&gt;W \end{CD} $$</span> commutes? </p> <p>My intuition says that if the rank of <span class="math-container">$\phi$</span> and <span class="math-container">$\psi$</span> match there should exist <span class="math-container">$a,b$</span>. Of course with <span class="math-container">$a=b$</span>, this requires that <span class="math-container">$\phi,\psi$</span> are similar (after identifying <span class="math-container">$V$</span> and <span class="math-container">$W$</span> in any way you like), but with <span class="math-container">$a\neq b$</span> allowed, it seems this is much weaker. </p>
egreg
62,967
<p>This is essentially the change of bases.</p> <p>Suppose that <span class="math-container">$\phi$</span> and <span class="math-container">$\psi$</span> have the same rank <span class="math-container">$k$</span>. Take a basis <span class="math-container">$\{v_1,\dots,v_n\}$</span> of <span class="math-container">$V$</span> such that <span class="math-container">$\{\phi(v_1),\dots,\phi(v_k)\}$</span> is a basis of the range of <span class="math-container">$\phi$</span>.</p> <p>Choose similarly a basis <span class="math-container">$\{w_1,\dots,w_n\}$</span> for <span class="math-container">$W$</span>.</p> <p>Now complete <span class="math-container">$\{\phi(v_1),\dots,\phi(v_k)\}$</span> to a basis <span class="math-container">$\{\phi(v_1),\dots,\phi(v_k),v_{k+1}',\dots,v_n'\}$</span> of <span class="math-container">$V$</span>. Similarly, complete <span class="math-container">$\{\psi(w_1),\dots,\psi(w_k)\}$</span> to a basis <span class="math-container">$\{\psi(w_1),\dots,\psi(w_k),w_{k+1}',\dots,w_n'\}$</span> of <span class="math-container">$W$</span>.</p> <p>Define <span class="math-container">$a$</span> by <span class="math-container">$a(v_i)=w_i$</span>; define <span class="math-container">$b$</span> by <span class="math-container">$b(\phi(v_i))=\psi(w_i)$</span> for <span class="math-container">$i=1,2,\dots,k$</span> and <span class="math-container">$b(v_i')=w_i'$</span> for <span class="math-container">$i=k+1,\dots,n$</span>.</p> <p>Then <span class="math-container">$a$</span> and <span class="math-container">$b$</span> induce bijective linear maps and, for <span class="math-container">$i=1,2,\dots,n$</span>, <span class="math-container">$$ b(\phi(v_i))=\psi(w_i)=\psi(a(v_i)) $$</span></p> <p>Can you show the converse?</p> <p>Without bases, you can use complements. Write <span class="math-container">$V=V_1\oplus\ker\phi$</span>, so <span class="math-container">$\phi$</span> induces an injective linear map <span class="math-container">$V_1\to V$</span>, with the same image as <span class="math-container">$\phi$</span>. Write <span class="math-container">$V=\operatorname{im}\phi\oplus V_2$</span>.</p> <p>Do similarly for <span class="math-container">$\psi$</span>, writing <span class="math-container">$W=W_1\oplus\ker\psi=\operatorname{im}\psi\oplus W_2$</span>.</p> <p>By assumption <span class="math-container">$\dim V_1=\dim W_1$</span> and <span class="math-container">$\dim V_2=\dim W_2$</span>. Choose suitably the isomorphisms, noting that <span class="math-container">$\phi$</span> and <span class="math-container">$\psi$</span> induce isomorphisms <span class="math-container">$V_1\to\operatorname{im}\phi$</span> and <span class="math-container">$W_1\to\operatorname{im}\psi$</span>. However this is not really different from choosing bases.</p>
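<p>As a rough numerical illustration of this construction (an ad hoc sketch; the helper name and the rank-deficient test matrices are invented), one can take the right singular vectors of $\phi$ and $\psi$ as the adapted bases and the left singular vectors as the completions:</p>

<pre><code>import numpy as np

def intertwiners(phi, psi, tol=1e-10):
    # Build invertible a, b with b @ phi == psi @ a, assuming rank(phi) == rank(psi).
    Up, sp, Vpt = np.linalg.svd(phi)
    Uq, sq, Vqt = np.linalg.svd(psi)
    k = int(np.sum(sp &gt; tol))
    assert int(np.sum(sq &gt; tol)) == k
    V, W = Vpt.T, Vqt.T            # columns k..n-1 span ker(phi) and ker(psi)
    a = W @ np.linalg.inv(V)       # a sends v_i to w_i
    Bv = np.hstack([phi @ V[:, :k], Up[:, k:]])   # basis {phi(v_1),...,phi(v_k)} plus completion
    Bw = np.hstack([psi @ W[:, :k], Uq[:, k:]])
    b = Bw @ np.linalg.inv(Bv)     # b sends phi(v_i) to psi(w_i) and completion to completion
    return a, b

rng = np.random.default_rng(0)
phi = rng.standard_normal((4, 4)); phi[:, 0] = phi[:, 1]   # rank 3
psi = rng.standard_normal((4, 4)); psi[3] = psi[2]         # rank 3
a, b = intertwiners(phi, psi)
print(np.allclose(b @ phi, psi @ a))   # True
</code></pre>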
286,932
<p>For convenience we work with commutative rings instead of commutative algebras.</p> <hr> <p>Fix a commutative ring $R$. Consider the functor $\mathsf{Mod}\longrightarrow \mathsf{CRing}$ defined by taking an $R$-module $M$ to $R\ltimes M$ (with dual number multiplication). Following the <a href="https://ncatlab.org/nlab/show/Mod#TangentsAndDeformationTheory" rel="noreferrer">nlab</a>, the module of Kähler differentials of $R$ is defined as the universal arrow from $R$ to this functor.</p> <p>There's also the equivalence between the category of <em>split</em> zero-square extensions and the category of modules. These result in the natural isomorphisms ($\operatorname{dom}$ takes an arrow to its domain).</p> <p>$$\begin{aligned}\mathsf{CRing}(R,\operatorname{dom}(A^{\prime}\twoheadrightarrow A)) &amp; \cong{\substack{\text{split square-zero}\\ \text{extensions} } }(R\ltimes\Omega_{R}\twoheadrightarrow R,A^{\prime}\twoheadrightarrow A)\\ &amp; \cong \mathsf{Mod}(\Omega_{R},\operatorname{Ker}(A^{\prime}\twoheadrightarrow A)). \end{aligned}$$</p> <p>So $\Omega _R\cong \bf 0$ iff $R$ has exactly one arrow to the domain of every <em>split</em> square-zero extension.</p> <hr> <p>On the other hand, we say $R$ is unramified if given a zero-square extension $A^\prime \twoheadrightarrow A$, the induced post-composition $$\mathsf{CRing}(R,A^\prime)\longrightarrow \mathsf{CRing}(R,A)$$ is injective. The diagrams of interest are below.</p> <p>$$\require{AMScd} \begin{CD} A^\prime /I @&lt;&lt;&lt; R\\ @AAA \\ A^\prime &amp; \end{CD} \; \; \; \begin{CD} A^\prime /I @&lt;&lt;&lt; R\\ @AAA @VVV\\ A^\prime @&lt;&lt;&lt; I \end{CD}$$</p> <hr> <p>Now, if $\Omega _R\cong \bf 0$ then there <em>exists and is unique</em> a diagonal arrow $R\to A^\prime $, which seems to easily give "unramified w.r.t <em>split</em> square-zero extensions". However, I see no way to deduce anything about the non-split ones.</p> <hr> <p>The thing is, I feel a proof is very close: suppose $g,h$ are diagonal fillers $R\to A^\prime$ on the left below. Viewing them as module arrows this is equivalent to saying $g-h$ lands in $I$, i.e factors through $I\vartriangleleft A^\prime $. Finally $g=h$ iff this factorization $R\to I$ of $g-h$ must be zero. If only this could be given by the universal property...</p> <p>Perhaps the correct way to proceed is using some universal property of the diagonal? (Not that it realizes the Kähler differentials.)</p>
Nick Mertes
178,527
<p>Your question can be rephrased as follows: I know something about split square-zero extensions. How can I (categorically) conclude something about all square-zero extensions? The key observation is that square-zero extensions are the <em>torsors</em> for the split square-zero extensions. I first learned about this general idea from section 7.4.1 of Lurie's <a href="http://people.math.harvard.edu/%7Elurie/papers/HA.pdf" rel="nofollow noreferrer">Higher algebra</a> and I formulated this idea more precisely <a href="https://arxiv.org/abs/2006.04230" rel="nofollow noreferrer">here</a>.</p> <p>Let <span class="math-container">$\varphi: B\to A$</span> be a square-zero extension. Then <span class="math-container">$\ker(\varphi)$</span> is an <span class="math-container">$A$</span>-module and <span class="math-container">$A\oplus \ker(\varphi)\to A$</span> is an abelian group object in <span class="math-container">$\text{CRing}/A$</span>. We have a natural group action <span class="math-container">$\tau: (A\oplus \ker(\varphi))\times_A B\to B$</span> which is a torsor in the sense that <span class="math-container">$$(\tau, \pi_B): (A\oplus \ker(\varphi))\times_A B\to B\times_A B$$</span> is an isomorphism.</p> <p>Now we'll relate this to your question. Let <span class="math-container">$C$</span> be such that <span class="math-container">$\Omega_C = 0$</span>. Let <span class="math-container">$\varphi: B\to A$</span> be a square-zero extension. We want to show that the canonical map <span class="math-container">$\text{Hom}(C, B)\to\text{Hom}(C, A)$</span> is injective. Let <span class="math-container">$\alpha, \beta\in\text{Hom}(C, B) $</span> be such that <span class="math-container">$\varphi\circ\alpha = \varphi\circ\beta$</span>. Thus <span class="math-container">$$(\alpha, \beta)\in \text{Hom}(C, B)\times_{\text{Hom}(C,A)} \text{Hom}(C, B).$$</span> As you already showed, <span class="math-container">$\Omega_C = 0$</span> implies <span class="math-container">$\text{Hom}(C, A\oplus\ker(\varphi))\approx\text{Hom}(C, A)$</span>. Thus <span class="math-container">$$\begin{split} \text{Hom}(C, B)\times_{\text{Hom}(C,A)} \text{Hom}(C, B) &amp;\approx \text{Hom}(C, B\times_A B) \\ &amp;\approx \text{Hom}(C, (A\oplus \ker(\varphi))\times_A B) \\ &amp;\approx \text{Hom}(C, A\oplus \ker(\varphi))\times_{\text{Hom}(C, A)} \text{Hom}(C, B)\\&amp;\approx \text{Hom}(C, A)\times_{\text{Hom}(C, A)} \text{Hom}(C, B)\\&amp;\approx \text{Hom}(C, A\times_A B)\\&amp;\approx \text{Hom}(C, B)\end{split}$$</span> and this final bijection maps <span class="math-container">$(\alpha, \beta)$</span> to <span class="math-container">$\beta$</span>. This establishes that there can be at most one lift for a general square-zero extension.</p> <p>This same argument works in any category with a notion of Kähler differentials, as I show <a href="https://arxiv.org/abs/2104.08999" rel="nofollow noreferrer">here</a>.</p>
1,529,731
<p>I'm currently studying for the GRE and a specific type of question has me stumped. I have a two part question, but first here is one problem and my work:</p> <p><strong>"If x is the remainder when a multiple of 4 is divided by 6, and y is the remainder when a multiple of 2 is divided by 3, what is the greatest possible value of x+y"</strong></p> <p>Since it says greatest I thought GCM and wrote(with arbitrary variables)</p> <p>$$N = 24a + x$$</p> <p>$$M = 6b + y$$ I thought that since 2 &amp; 3 are factors of 4 &amp; 6, respectively, maybe I could equate them: $$24a+x=6b+y$$ Or maybe N is twice a large as M: $$24a+x=2(6b+y)$$ <strong>1) How do I solve this? Am I even on the right track?</strong></p> <p>This kind of question has had me stumped for a while but the books explanation has me even more lost. </p> <p><strong>"The greatest possible remainder for a multiple of 4 being divided by 6, happens when 4 is divided by 6. When 4 is divided by 6 the result is 0 with a remainder of 6."</strong></p> <p>.....Don't remainders by nature have to be smaller than the divisor......</p> <p><strong>2)How is that possible? Shouldn't the result be 0 with a remainder of 4?</strong> </p> <p>How can there be a remainder of 6 when the divisor is 6? I found a similar problem and solution here: <a href="https://math.stackexchange.com/questions/346172/finding-a-number-given-its-remainder-when-divided-by-other-numbers?newreg=e42984c5474e4b2dbb127333cfb3656d">Finding a number given its remainder when divided by other numbers</a> and here: <a href="https://math.stackexchange.com/questions/1401850/how-to-find-the-greatest-remainder-of-a-number-that-is-a-multiple-of-another-num">How to find the greatest remainder of a number that is a multiple of another number</a> But the more I read the more lost I was. </p> <p>At this point I feel like a child throwing spaghetti about trying to make art. I have no idea how to even start on this problem any help is appreciated.</p>
Anurag A
68,092
<p>When any integer $a$ is divided by $6$ then the possible remainders are $0,1,2,3,4,5$. So if $4a$ is divided by $6$, then the possible remainders will be $0,4,2,0,4,2$ respectively (if you are familiar with modular arithmetic then this can be written more elegantly). So the maximum possible remainder when a multiple of $4$ is divided by $6$ is $4$. </p> <p>Likewise when you divide any integer $b$ by $3$, the possible remainders are $0,1,2$. In which case the remainders when $2b$ is divided by $3$ will be $0,2,1$. Here the maximum is $2$. Thus the maximum of the sum will be $4+2=6$.</p>
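<p>A two-line check of these remainder patterns:</p>

<pre><code>print(sorted({(4 * a) % 6 for a in range(6)}))   # [0, 2, 4]  -&gt; greatest remainder 4
print(sorted({(2 * b) % 3 for b in range(3)}))   # [0, 1, 2]  -&gt; greatest remainder 2
</code></pre>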
1,529,731
<p>I'm currently studying for the GRE and a specific type of question has me stumped. I have a two part question, but first here is one problem and my work:</p> <p><strong>"If x is the remainder when a multiple of 4 is divided by 6, and y is the remainder when a multiple of 2 is divided by 3, what is the greatest possible value of x+y"</strong></p> <p>Since it says greatest I thought GCM and wrote(with arbitrary variables)</p> <p>$$N = 24a + x$$</p> <p>$$M = 6b + y$$ I thought that since 2 &amp; 3 are factors of 4 &amp; 6, respectively, maybe I could equate them: $$24a+x=6b+y$$ Or maybe N is twice a large as M: $$24a+x=2(6b+y)$$ <strong>1) How do I solve this? Am I even on the right track?</strong></p> <p>This kind of question has had me stumped for a while but the books explanation has me even more lost. </p> <p><strong>"The greatest possible remainder for a multiple of 4 being divided by 6, happens when 4 is divided by 6. When 4 is divided by 6 the result is 0 with a remainder of 6."</strong></p> <p>.....Don't remainders by nature have to be smaller than the divisor......</p> <p><strong>2)How is that possible? Shouldn't the result be 0 with a remainder of 4?</strong> </p> <p>How can there be a remainder of 6 when the divisor is 6? I found a similar problem and solution here: <a href="https://math.stackexchange.com/questions/346172/finding-a-number-given-its-remainder-when-divided-by-other-numbers?newreg=e42984c5474e4b2dbb127333cfb3656d">Finding a number given its remainder when divided by other numbers</a> and here: <a href="https://math.stackexchange.com/questions/1401850/how-to-find-the-greatest-remainder-of-a-number-that-is-a-multiple-of-another-num">How to find the greatest remainder of a number that is a multiple of another number</a> But the more I read the more lost I was. </p> <p>At this point I feel like a child throwing spaghetti about trying to make art. I have no idea how to even start on this problem any help is appreciated.</p>
Archis Welankar
275,884
<p>When a multiple of $4$ is divided by $6$, the remainders follow the pattern $0,4,2$ (starting from $0$), and when a multiple of $2$ is divided by $3$, the remainders follow the pattern $0,2,1$ (starting from $0$). So the greatest value of $x+y$ is $4+2=6$. Hope it helped you.</p>
82,865
<p>I've looked through the other posts on rotating a function around the x or y axis and haven't had any luck.</p> <p>My question is this: how do I rotate the area between the functions $f(x)=x^2+1$ and $f(x)=2x^2-2$, where $x&gt;0$ and $y&gt;0$ and the region is bounded by the y and x axes?</p> <p>This is a plot of the area I'm talking about.</p> <p><img src="https://i.stack.imgur.com/aELQX.jpg" alt="enter image description here"></p> <p>Revolving the area around the y-axis is supposed to produce a vase, although I picture something more like a shallow bowl.</p> <p>Any help would be much appreciated. As I'm still quite new to <em>Mathematica</em>, an explanation of any code posted would be helpful.</p>
Dr. belisarius
193
<pre><code>Show[RevolutionPlot3D[#, {x, 0, Sqrt[3]}, MeshStyle -&gt; None, PlotStyle -&gt; Opacity[.5]] &amp; /@ {2 x^2 - 2, x^2 + 1}] </code></pre> <p><img src="https://i.stack.imgur.com/Jq6nx.png" alt="Mathematica graphics"></p>
82,865
<p>I've looked through the other posts on rotating a function around the x or y axis and haven't had any luck.</p> <p>my question is this how do i rotate the area between the functions $f(x)=x^2+1$, and $f(x)=2x^2-2$, whereby $x&gt;0$ and $y&gt;0$ and the region is bounded by the y and x axis.</p> <p>This is a plot of the area I'm talking about.</p> <p><img src="https://i.stack.imgur.com/aELQX.jpg" alt="enter image description here"></p> <p>Revolving the area around the y-axis is suppose to produce a vase, although I picture something of a shallow bowl.</p> <p>Any help would be much appreciated. As I'm still quite new to <em>Mathematica</em>, an explanation of any code posted would be helpful.</p>
ubpdqn
1,997
<p>Just for something different:</p> <pre><code>pp = ParametricPlot3D[{Sqrt[3] Cos[t], Sqrt[3] Sin[t], 4}, {t, 0, 2 Pi}, PlotStyle -&gt; {Red, Thickness[0.04]}, Boxed -&gt; False]; rp = RegionPlot3D[ 0 &lt; x^2 + y^2 &lt; 3 &amp;&amp; Max[0, 2 x^2 + 2 y^2 - 2] &lt; z &lt; x^2 + y^2 + 1, {x, -2, 2}, {y, -2, 2}, {z, 0, 5}, Mesh -&gt; None, PlotStyle -&gt; Red, PlotPoints -&gt; 200, PerformanceGoal -&gt; "Quality", Boxed -&gt; False]; Show[pp, rp, Axes -&gt; False, Background -&gt; Black] </code></pre> <p>Top ring just to deal with boundary of intersection...</p> <p><img src="https://i.stack.imgur.com/NKY69.gif" alt="enter image description here"></p>
1,160,454
<p>I missed two classes in calculus and we're on a subject that I do not understand at all. If someone could just walk me through this problem I could probably begin to comprehend the rest.</p> <p>The base of a solid elliptical cylinder is given by $ (x/5)^2 + (y/3)^2 = 1.$ A solid is formed by cutting off or removing some material such that the cross-sections perpendicular to the x-axis are all squares. Find the volume of such a solid. </p>
Narasimham
95,860
<p>$$ y = 3 \sqrt{ 1 -(x/5)^2 } $$</p> <p>The square cross-section at $x$ has side $2y$, so its area is $(2y)^2$ and the volume is $$ V = \int_{-5}^{5} (2y)^2 \, dx = 4 \int_{-5} ^5 9 ( 1 -(x/5)^2 )\, dx, $$ </p> <p>and you can take it from there.</p> <p>EDIT1:</p> <p>A 3D visualization of the solid, whose sections perpendicular to the x-axis are squares: <img src="https://i.stack.imgur.com/stAiS.png" alt="SquareXnSolid"></p>
1,160,454
<p>I missed two classes in calculus and we're on a subject that I do not understand at all. If someone could just walk me through this problem I could probably begin to comprehend the rest.</p> <p>The base of a solid elliptical cylinder is given by $ (x/5)^2 + (y/3)^2 = 1.$ A solid is formed by cutting off or removing some material such that the cross-sections perpendicular to the x-axis are all squares. Find the volume of such a solid. </p>
user84413
84,413
<p>You can use the fact that the volume can be found by integrating the areas of the cross-sections.</p> <p>In this case, the base of the solid is the region bounded by an ellipse, and the cross-sections are squares with sides $s=2\left(\frac{3}{5}\right)\sqrt{25-x^2}$, so</p> <p>$\displaystyle V=\int_{-5}^{5}A(x)\;dx=\int_{-5}^{5}\left(\frac{6}{5}\sqrt{25-x^2}\right)^2dx=\frac{36}{25}\int_{-5}^5(25-x^2)\;dx=\frac{72}{25}\int_0^5 (25-x^2)\;dx$.</p>
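<p>Completing the arithmetic, the last integral evaluates to $\displaystyle\frac{72}{25}\int_0^5(25-x^2)\,dx=\frac{72}{25}\left[25x-\frac{x^3}{3}\right]_0^5=\frac{72}{25}\cdot\frac{250}{3}=240$, so the volume of the solid is $240$.</p>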
3,567,245
<p>I am trying to compute <span class="math-container">$\int_{C}Log(z+3)$</span>, where C is a circle centered at the origin, with radius of 2, oriented once counterclockwise. </p> <p>I kind of get the idea of how to compute <span class="math-container">$\int Log(z)$</span> on the unit circle. I know <span class="math-container">$z=e^{i \theta}$</span> and <span class="math-container">$dz=ie^{i\theta}d\theta$</span> gives</p> <p><span class="math-container">$\int Log(e^{i\theta})ie^{i\theta}d\theta = \int -\theta e^{i \theta}d \theta$</span></p> <p>and it follows from there, but I am not sure if it is the same approach for <span class="math-container">$Log(z+3)$</span> or for a circle of radius 2. So far I am thinking <span class="math-container">$z=2e^{i \theta}$</span> since the radius is 2 but that's as far as I got. Do I take the same approach? Computing </p> <p><span class="math-container">$\int_{C}Log(2e^{i \theta}+3)2ie^{i \theta}d \theta$</span> </p> <p>if so I have no idea where to go from there.</p> <p>Thanks :)</p>
Paweł Czyż
551,592
<p>Let's apply <a href="https://regularize.wordpress.com/2011/11/29/substitution-and-integration-by-parts-for-functions-of-a-complex-variable/" rel="nofollow noreferrer">Proposition 2</a> to:</p> <ul> <li>the circle <span class="math-container">$\gamma\colon [0,2\pi]\to \mathbb C, t\mapsto 2\exp it$</span>,</li> <li>function <span class="math-container">$f(z)=\mathrm{Log}(z+3)$</span> that is continuous on a neighborhood of the circle <span class="math-container">$\mathrm{im} \,\gamma$</span>,</li> <li>biholomorphic function <span class="math-container">$\phi(z)=z-3$</span>.</li> </ul> <p><span class="math-container">$$\int_\gamma \mathrm{Log}(z+3)\,dz=\int_{\phi^{-1}\circ \gamma} \mathrm{Log}(z)\, dz$$</span></p> <p>But <span class="math-container">$\phi^{-1}\circ \gamma$</span> is basically the circle of radius <span class="math-container">$2$</span> centered at <span class="math-container">$3$</span> and <span class="math-container">$\mathrm{Log}$</span> is holomorphic there. Hence, by Cauchy's integral theorem, both integrals vanish: <span class="math-container">$\int_\gamma \mathrm{Log}(z+3)\,dz=0$</span>.</p>
2,226,150
<blockquote> <p>The following diophantine equation came up in the past paper of a Mathematics competition that I am doing soon: $$ 2(x+y)=xy+9.$$ </p> </blockquote> <p>Although I know that the solution is $(1,7)$, I am unsure as of how to reach this result. Clearly, the product $xy$ must be odd since $2(x+y)$ must be even, however beyond that, I am unable to see anything else that I can do to solve the problem. I have also tried using the AM-GM inequality, however, it did not simplify the problem much:$$(x+y)+(x-xy+y)\le(\frac{(x+y)+(x+y-xy)}{2})^2.$$ Any help would be greatly be appreciated.</p>
Michael Rozenberg
190,319
<p>We need to solve $$xy-2(x+y)+4=-5$$ or $$(x-2)(y-2)=-5$$ and it remains to solve the following systems.</p> <p>$x-2=1$ and $y-2=-5$;</p> <p>$x-2=-1$ and $y-2=5$;</p> <p>$x-2=5$ and $y-2=-1$ and</p> <p>$x-2=-5$ and $y-2=1$,</p> <p>which gives the answer: $\{(1,7),(7,1), (3,-3),(-3,3)\}$.</p>
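<p>For concreteness, a small brute-force sketch in Python (it only scans a finite window; the factoring argument above is what shows nothing else can occur):</p> <pre><code># Brute-force search for integer solutions of 2(x+y) = xy + 9 in a small window.
sols = [(x, y) for x in range(-50, 51) for y in range(-50, 51)
        if 2 * (x + y) == x * y + 9]
print(sols)   # [(-3, 3), (1, 7), (3, -3), (7, 1)]
</code></pre>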
693,550
<p>Consider the function $F(x) = x^2-2x+2$.</p> <p>Find an interval in which the function is contractive and find the fixed point in this interval.</p> <p>What is the convergence rate of the fixed point iteration: $x_{n+1} = F(x_n)$ in that interval?</p> <p>I'm lost on fixed point iteration. I've watched a few youtube videos, but I'm still not understanding contractive intervals and methods for finding fixed points in functions.</p>
Ian Coley
60,524
<p>You need an interval $[a,b]$ so that there's a $0\leq c&lt;1$ such that $|f(x)-f(y)|\leq c|x-y|$ for all $x,y\in[a,b]$. In our case, we have $$ |f(x)-f(y)|=|x^2-y^2-(2x-2y)|\leq c|x-y|. $$ Notice that, in fact, $f(1)=1$, so you're going to want to choose an interval somewhere around $1$.</p>
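<p>To see the contraction in action, here is a minimal numerical sketch (assuming an interval around the fixed point $1$ and an arbitrary starting value $x_0=1.3$):</p> <pre><code># Fixed point iteration x_{n+1} = F(x_n) with F(x) = x^2 - 2x + 2, started near the fixed point 1.
F = lambda x: x**2 - 2*x + 2
x = 1.3
for n in range(6):
    x = F(x)
    print(n, x)
# The iterates approach 1. Started near the other fixed point 2, the iteration
# does not converge to 2, since F'(2) = 2 &gt; 1 (no contraction there).
</code></pre>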
504,997
<p>I have a dynamic equation, $$ \frac{\dot{k}}{k} = s k^{\alpha - 1} + \delta + n$$ Where $\dot{k}/k$ is the capital growth rate as a function of savings $s$, capital $k$, capital depreciation rate $\delta$, and population growth rate $n$.</p> <p>I have been asked to find the change in the growth rate as $k$ increases. This is of course $$\frac{\partial \dot{k}/k}{\partial k} = (\alpha - 1) s k^{\alpha -2}$$ But what I want to find now is the change in growth rate as $k$ increases <em>proportionately</em>. This should be $$\frac{\partial \dot{k}/k}{\partial \ln(k)} = ?$$ How do you calculate the partial derivative with respect to the logarithm of a variable? I'm sure the answer is simple, but my analytical calculus is pretty rusty.</p>
Patrick
109,962
<p>The more formal/modern/natural way to answer this is via a change of variables.</p> <p>Let <span class="math-container">$z \equiv \log(k)$</span> on <span class="math-container">$\mathbb R_{&gt;0}$</span>, so that pairs <span class="math-container">$(k, z)$</span> belong to the graph of the function <span class="math-container">$\log$</span>. Note that, for any differentiable real-valued function <span class="math-container">$ k \mapsto g(k)$</span> on <span class="math-container">$\mathbb R_{&gt;0}$</span>, the function <span class="math-container">$z \mapsto h(z) = (g \circ \exp)(z)$</span> satisfies:</p> <p><span class="math-container">$$ g(k) = h(z) \quad \text{for every $k$ in $\mathbb R_{&gt;0}$}. $$</span> Now note that, via the chain rule, the derivative <span class="math-container">$h'$</span> of <span class="math-container">$h$</span> with respect to <span class="math-container">$z$</span> is <span class="math-container">$$ h'(z) = g'(k) \cdot \exp'(z) = g'(k) \cdot (\exp \circ \log)(k) = g'(k) \cdot k .$$</span> For <span class="math-container">$g'(k) = (\alpha - 1)\cdot s\cdot k^{\alpha - 2}$</span>, we arrive at the result the OP was after.</p>
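<p>Spelling out that final step: with $g'(k) = (\alpha - 1)\cdot s\cdot k^{\alpha-2}$ we get $$\frac{\partial\, \dot{k}/k}{\partial \ln k} = g'(k)\cdot k = (\alpha-1)\, s\, k^{\alpha-1}.$$</p>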
1,635,682
<p>I have the functions $$ f_n(x) = x + x^n(1 - x)^n $$</p> <p>that $\to x$ as $n \to \infty $ (pointwise convergence).</p> <p>Now I have to look whether the sequence converges uniformly, so I used the theorem and arrived at: $$ \sup\limits_{x\in[0,1]}|f_n(x)-x| = |x+x^n(1-x)^n - x| = |x^n(1-x)^n| = 0, n \to \infty $$</p> <p>However, the master solution suggested something different that I'm unable to comprehend:</p> <p>$$ \forall x \in [0,1]: 0 \leq x(1-x) = x - x^2 = 1/4 - (x - 1/2)^2 \leq 1/4 $$ $$ \sup\limits_{x\in[0,1]}|f_n(x) - x| = \sup\limits_{x\in[0,1]}|(x(1-x))^n| \leq (1/4)^n \to 0, (n \to \infty)$$</p> <p>So, is my solution wrong? And how did they come up with that estimate? Or the better question is; how can I learn to come up with that stuff? First semester University and Analysis is... difficult.</p>
Andrew
307,012
<p>You are correct in assuming the formula is invalid because of the presence of a quantifier inside a predicate. The following formula captures the essence of your English sentence: $$\forall x_1 (\forall x_2 (\mathop{\mathrm{Love}}(x_1,x_2)) \rightarrow \mathop{\mathrm{Good}}(x_1)).$$</p>
930,458
<p>would somebody algebraic-geometry-savvy please help me with this problem, as I am pretty new to algebraic geometry: I have to find smallest algebraic variety (irreducible algebraic set) in $\mathbb{C^{2}}$ that contains all the points with integer coordinates. I don't really know how to start, so at least an idea, hint or even better sketch of the solution would be much appreciated. Thank you very much.</p>
Jyrki Lahtonen
11,619
<p>Extended hint: Let $$ F(X,Y)=\sum_{i=0}^n F_i(Y)X^i $$ be a non-zero polynomial from $\Bbb{C}[X,Y]$. Assume that it vanishes at all the points where $(X,Y)\in\Bbb{Z}^2$. The polynomial $F_n(Y)$ is non-zero as the leading coefficient. It can only have finitely many zeros. Therefore there exists an integer $m$ such that $F_n(m)\neq0$. Thus $$ G(X):=F(X,m)=\sum_{i=0}^nF_i(m)X^i\in\Bbb{C}[X] $$ is non-zero.</p>
289,367
<p>Given a positive definite matrix $Q\in\mathbb{R}^{n \times n}$, I want to find a diagnonal matrix $D$ such that $rank(Q-D) \leq k &lt; n$.</p> <p>I think this can be regarded as a generalization of eigenvalue problem, which is basically problem of finding a diagonal matrix $\lambda I$ such that $rank(Q-\lambda I) &lt; n$.</p> <p>Is there any theory about this problem?</p>
Aaron Meyerowitz
8,008
<p>You can limit the question to real symmetric matrices $A$ (with diagonal $0$ if desired). For large enough $k,$ the matrix $A+kI$ will be positive definite and you are then allowed to change all the diagonal entries.</p> <p>As noted in a comment (and overlooked in my earlier answer) this $5 \times 5$ matrix (where the diagonal entries are free to be assigned) will have rank at least $4.$</p> <p>$$ \left[ \begin {array}{ccccc} a&amp;1&amp;0&amp;0&amp;0\\ 1&amp;b&amp;1&amp;0&amp;0\\ 0&amp;1&amp;c&amp;1&amp;0\\ 0&amp;0&amp;1&amp;d&amp;1\\ 0&amp;0&amp;0&amp;1&amp;e\end {array} \right] $$ The same construction works for any $n.$</p> <p>It is curious that having the freedom to choose any diagonal matrix may be no more effective than being restricted to choosing a multiple of the identity matrix. Perhaps the thing to generalize is not $\lambda I$ to $D$ but $\lambda I$ to $\lambda D$ for a given $D$. Hence:</p> <blockquote> <p>Given $n \times n$ matrices $Q,D$ with $D$ diagonal, consider scalars $\lambda$ such that $rank(Q-\lambda D) &lt; n.$ Discuss the theory.</p> </blockquote> <p>We might call $\lambda$ an <em>eigenvalue for $Q$ with respect to $D.$</em> Likewise a vector $x$ with $Qx=\lambda Dx$ (which there will then be) might be termed a <em>$\lambda$-eigenvector for $Q$ with respect to $D.$</em> </p> <p>When $D$ has all entries non-zero, $D^{-1}$ exists and we are simply looking at the eigenvalues and eigenvectors of $D^{-1}Q.$ When $D$ itself has rank $k \lt n$ one can still consider the degree $k$ polynomial $|Q-xD|$, its roots etc. I'm not sure how it would all work out. </p>
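<p>For readers who want to experiment numerically, here is a minimal sketch (it simply hands the condition $\det(Q-\lambda D)=0$ to SciPy's generalized eigenvalue solver; the matrices below are arbitrary examples):</p> <pre><code># "Eigenvalues of Q with respect to D": the lambdas with det(Q - lambda*D) = 0,
# computed as a generalized eigenproblem Q x = lambda D x.
import numpy as np
from scipy.linalg import eig

Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])
D = np.diag([1.0, 3.0])      # an arbitrary diagonal matrix
w, _ = eig(Q, D)             # generalized eigenvalues (and eigenvectors)
print(np.sort(w.real))
</code></pre>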
50,023
<p>two problems from Dugundji's book page $156$. (I don't know why the system deletes the word hi in this sentence)</p> <p>$1$. Let $X$ be a Hausdorff space. Show that:</p> <p>a) $\bigcap \{F: x \in F , F \ \textrm{closed}\} = \{x\}$.</p> <p>b) $\bigcap \{U : x \in U, U \ \textrm{open}\} = \{x\}$.</p> <p>c) The above properties are not equivalent to being Hausdorff.</p> <p>$2$. Prove every infinite Hausdorff space contains a countably infinite discrete subspace.</p> <p>My work:</p> <ol> <li>(a) So let $y \not \in \{x\}$ then $x \neq y$. Since $X$ is Hausdorff we can find disjoint open sets $U_{x}$, $V_{y}$. Define $F = X \setminus V_{y}$ then $x \in F$, $F$ is closed and $y \not \in F$, so $y \not \in \bigcap \{F: x \in F , F \ \textrm{closed}\}$.</li> </ol> <p>(b) Same as above, just note that $y \not \in U_{x}$, hence not in the intersection.</p> <p>(c) Not sure here, can we take $X$ any infinite set endowed with the cofinite topology?</p> <p>Claim: $\bigcap \{F: x \in F , F \ \textrm{closed}\} = \{x\}$. Suppose $y \not \in \{x\}$ then $y \neq x$, that is: $y \not \in \{x\}$. Note that since $X$ is a $T_{1}$ space then $F=\{x\}$ is closed and contains $x$, so $y$ is not in the intersection.</p> <p>Similarly for $(b)$. But $X$ is not Hausdorff since any two non-empty open sets intersect.</p> <p>2) How do you prove this one?</p>
Mark
4,460
<p>a) Hausdorff implies $T_1$ which implies singletons are closed. Note that closure of $\{x\}$ is defined: $\bigcap\{F:x\in F\}$, and this must equal $\{x\}$ by the aforementioned property of $T_1$ spaces.</p> <p>b) Choose a point $y \not= x$. We note that there is an open set containing $x$, that does not contain $y$ by choosing 2 open sets, one containing $x$, the other containing $y$, such that the sets are disjoint. The result follows - if $y$ is not in one open set containing $x$, it is certainly not in every open set containing $x$.</p> <p>c) a: Use the cofinite topology on the set of reals. Singletons are again closed. The fact that every open set intersects every other open set (except $\emptyset$) shows that this topology is not Hausdorff.</p> <p>b: Use the cofinite topology on the reals. Consider open sets of the form $\{r\}^c$ where $r$ is a real number different from $\{x\}$. The intersection of all these sets is {x} but again, the space is not Hausdorff.</p> <p>Part 2: <strong>Edit: This construction doesn't work, I can't think of a way to fix it sorry.</strong> Let $H$ be a Hausdorff space. Let's construct the discrete space inductively.</p> <p><strong>Induction</strong></p> <p>Suppose we have the set of points $\{x_i : 1\le i \le n\}$ and a corresponding collection of open sets $O_i$ such that<br> 1.$x_i \in O_i$<br> 2. $x_j \not \in O_i$ where $j \not= i$<br> 3. $\bigcup_{i=1} ^n O_i$ does not cover $H$.</p> <p>Choose points $x_{n+1}$ and $y$ in $[\bigcup_{i=1}^n O_i]^c$. Choose an open set $O_{n+1}$ around $x_{n+1}$ that does not contain $y$ or $x_i : 1\le i \le n$. (We can find this set by intersecting $n+1$ open sets that contain $x_{n+1}$ which are disjoint from the $x_i : 1\le i \le n$, and $y$. Now we have set of points $\{x_i : 1\le i \le n+1\}$ and a corresponding collection of open sets $O_i$ such that<br> 1. $x_i \in O_i$<br> 2. $x_j \not \in O_i$ where $j \not= i$<br> 3. $\bigcup_{i=1} ^{n+1} O_i$ does not cover $H$.</p> <p><strong>Base Case</strong></p> <p>Chose $x_1$ and $y$ in $H$, and choose $O_1$ which contains $x_1$ but not $y$.</p> <p><strong>Conclusion</strong></p> <p>Now we have shown the existence of an increasing sequence of subspaces $S_n = \{x_i: 0 \le i \le n\}$ which are discrete. Just consider that $\{x_i\} = S_n \cap O_i \Rightarrow \{x_i\} $ is open in the subspace topology. (By increasing sequence I mean $S_n \subset S_{n+1}$) It remains to be shown that the limit is discrete. </p> <p>Let $S_{\infty} = \bigcup_n S_n = \{x_i : 1\le i\}$. Choose any point $x_i$ in $S_{\infty}$. Note that, by construction, $O_i \cap S_{\infty} = \{x_i\}$ so all singletons are open and therefore $S_{\infty}$ is a discrete subspace which is countable and infinite.</p>
1,259,961
<p>I was looking at $$\int_0^1\left(\frac{1}{\sqrt{s}}\right)^2\ ds.$$</p> <p>So in calculus, I would evaluate $\ln(1) - \ln(0)$ as the answer. What I don't get and I don't remember why is the answer $\infty$?</p> <p>I know $\ln(1)=0$ and $\ln(0)=-\infty$</p> <p>Wouldn't the answer be $-\infty$? Any explanation would be helpful. I'm forgetting some of my basic calculus.</p>
HeeHaw
235,885
<p>The answer would be $0 - (- \infty) = + \infty$ (modulo technicalities, the wholesome way to write this is to say that the integral diverges to $\infty$).</p>
711,549
<p>Consider the successor of the largest finite ordinal that will ever be considered alone. But then it wasn't the largest finite ordinal that will ever be considered alone. How do we get around this paradox? The largest finite ordinal that will ever be considered alone does exist, and yet we can consider its successor.</p>
Charles
1,778
<p>This is essentially <a href="https://en.wikipedia.org/wiki/Richard%27s_paradox">Jules Richard's paradox</a>. You define an ordinal in a language and then talk about its consequences in the metalanguage. But it's not problematic in the original language and doesn't have the desired property in the metalanguage.</p> <p>Basically, the difficulty comes from the imprecision of natural language ("the largest finite ordinal that will ever be considered by itself") and its translation into a formal language where it can be unambiguously defined.</p>
1,157,877
<p>We have $n$ bags of sand with volumes $$v_1,\dots,v_n, \qquad \forall i: \space 0 &lt; v_i &lt; 1,$$ not necessarily sorted. We want to place all the bags into boxes of volume $1$. We propose the following algorithm:</p> <blockquote> <p>Consider the bags in the given order. Select a box and place bags $1, 2, 3,\dots$ into it as long as they fit. If the $i$-th bag does not fit, open a new box and continue placing bags $i, i+1,\dots$ into it as long as they fit, and so on.</p> </blockquote> <p>If $X$ is the number of boxes used by this algorithm and $Y$ is the number of boxes used by an optimal (minimum) packing, why is it not always true that $X &gt; Y$, and why is it always true that $X &lt; 2Y$?</p>
DeepSea
101,504
<p><strong>hint</strong>: Solve for $\displaystyle \int_{0}^x f(\alpha)d\alpha$, and then taking derivative both sides to proceed. </p>
2,526,865
<p>I came across this question while preparing for an interview.</p> <p>You draw cards from a $52$-card deck until you get first Ace. After each card drawn, you discard three cards from the deck. What's the expected sum of cards until you get the first Ace? </p> <p>Note</p> <ol> <li><p>J, Q, K have point value 11, 12 and 13, and Ace has point value 1</p></li> <li><p>discarded cards don't count towards the sum and if we don't get an Ace we shuffle the deck and continue</p></li> <li><p>when you shuffle, you shuffle all cards but you keep the sum, and when you draw a new card you add it to that sum (you don't start from zero after each shuffle) </p></li> </ol> <p>My thought so far: the expected sum is definitely between $73$ and $91$.</p> <p>$73$ is the expected sum if we don't discard any cards, so the problem simply becomes the expected sum until first Ace, that is, $(2+\dots+13) \cdot 4 \cdot \frac{1}{5}+1$.</p> <p>$91$ is the expected sum if we discard all $51$ remaining cards (shuffle the deck after each draw). In this case the number of draws needed to see the first Ace follows a Geometric distribution, so the answer is $(\frac{52}{4}-1) \cdot 7.5+1$</p> <p>Any help is appreciated!!! </p>
Charlotte
499,360
<p>Although an exact solution is complicated and worth more thought, I did a quick simulation which hopefully can provide some insights.</p> <p><a href="https://i.stack.imgur.com/tC8TB.png" rel="nofollow noreferrer">Expected sum v.s. number of cards thrown</a></p> <p>Essentially the plot above demonstrates how the expected sum till the first ACE changes with respect to the number of cards thrown after each draw. One can see that the increase is slower than linear – it might even be slightly slower than logarithmic, as can be seen from the red curve.</p> <p>I am also presenting the code for the simulation (Python 3 with numpy) in case someone else is interested in more investigation. On the other hand, I think that it will be more exciting to approach the problem exactly, for which I have an idea, but it involves very lengthy calculation that may lead to nowhere at all…</p> <pre><code>import numpy as np
from numpy.random import choice

class Cards(object):
    def __init__(self):
        ### a length-13 array with the number of remaining cards of each rank, i.e. A,2,3,...,13
        self.c = np.asarray([4]*13)
        self.s = np.sum(self.c)
        self.r = np.arange(1, 14)

    def draw_card(self):
        if self.s &lt;= 0:
            ### Enter a new deck
            self.c = np.asarray([4]*13)
            self.s = np.sum(self.c)
        card = choice(self.r, p=self.c/self.s)
        self.c[card-1] -= 1
        self.s -= 1
        return card

    def throw_cards(self, num=3):
        if num &lt;= 0:
            return
        if self.s &gt;= num:
            for n in range(num):
                card = choice(self.r, p=self.c/self.s)
                self.c[card-1] -= 1
                self.s -= 1
        else:
            ### number of remaining cards &lt; number of cards to be thrown away:
            ### enter a new deck
            self.c = np.asarray([4]*13)
            self.s = np.sum(self.c)
        return

    def exp_sum(self, num=3, maxiter=200):
        es = 0
        it = 0
        while it &lt; maxiter:
            it += 1
            card = self.draw_card()
            es += card
            if card == 1:
                break
            else:
                ### card != 1
                self.throw_cards(num)
        ### Reset the deck for the next game
        self.c = np.asarray([4]*13)
        self.s = np.sum(self.c)
        return es
</code></pre>
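<p>A possible driver for the class above (the number of simulated games is arbitrary; increase it for a smoother estimate):</p> <pre><code># Estimate the expected sum for a given number of discarded cards
# by averaging many simulated games.
import numpy as np

estimate = np.mean([Cards().exp_sum(num=3) for _ in range(100000)])
print(estimate)
</code></pre>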
1,461,484
<p>For example, look at this sentence from Perko's text on dynamical system</p> <p>"It follows from Cauchy Schwarz inequality that if $T \in L(R^n)$ is represented by the matrix $A$ with respect to the standard basis for $R^n$ $_\cdots$" pg 11</p> <p>What does it mean for a $T: R^n \to R^n$ to be represented by a matrix? Isn't it by definition that $T: R^n \to R^n$ is equivalent to an n by n matrix? Can someone translate exactly what it means by "represented" versus "not represented"?</p>
Brian M. Scott
12,042
<p>The operator $T$ is a function from $\Bbb R^n$ to $\Bbb R^n$; an $n\times n$ matrix of real numbers is not a function. If one chooses the matrix $A$ properly, one can say that the function $T(x)=Ax$ for each $x\in\Bbb R^n$, but that is far from saying that the function $T$ <strong>is</strong> the matrix $A$.</p> <p>Consider the more familiar case of real-valued functions on the reals. Specifically, consider the function $f(x)=ax$: this function is certainly not the same thing as the real number $a$. The relationship between $T$ and $A$ in the first paragraph is precisely similar to that between $f$ and $a$ in this paragraph.</p>
766,694
<p>I am stuck trying to get the values for x, y, and z. I keep moving variables around but I end up getting answers like x = x or z = z and I do not think that is what I want. It's really just algebra but I seem to have forgotten that. </p> <p>(x/2) + (2z/3) = x</p> <p>(x/2) + (y/3) = y</p> <p>(2y/3) + (z/3) = z</p> <p>I initially tried finding the values for z and y from the first two equations and plugged those into the third equation but ended up getting that x is equal to 0 and when I plug zero into the other equations to solve for the other three variables I get that x, y, and z are all equal to 0. I do not think I am tackling this correctly.</p>
DomJack
145,285
<p>Step 1: Move all unknowns to the left, in the same order, i.e. a*x + b*y + c*z = d</p> <p>-1/2 * x + 0 * y + 2/3 * z = 0 (1)</p> <p>1/2 * x - 2/3 * y + 0 * z = 0 (2)</p> <p>0 * x + 2/3 * y - 2/3 * z = 0 (3)</p> <p>This is a homogeneous linear system, and is normally expressed as the product of a matrix and vector. If you want a rigorous run through of how to solve a problem of this sort, look up Gaussian Elimination. I'll provide a quick run through in the context of separate equations, rather than matrix operations, though they are just different ways of expressing the same mathematical operations.</p> <p>The overall aim is to modify equation (3) to look like 0 * x + 0 * y + a * z = b. That's one equation in one unknown, and can be solved. Next, if we can make modify equation (2) such that it looks like 0 * x + c*y + d*z = e, since we know z, we'll be able to find y. We can then use known values of y and z in equation (1). This is known as backward substitution.</p> <p>Before we can do backward substitution, however, we need to make equations (2) and (3) look 'appropriate'.</p> <p>Let's start off by trying to make the coefficients of the x terms in equations (2) and (3) zero. (3) already has a zero coefficient of x (though this isn't always the case), so that leaves just (2).</p> <p>In systems like this, we're able to add any number of one equation to another, without changing the solution. How many lots of equation (1) should we add to equation (2) such that we get an equation with a zero x coefficient? 1!</p> <p>Replace (2) with (2) + (1)</p> <p>-1/2 * x + 0 * y + 2/3 * z = 0 (1)</p> <p>0 * x - 2/3 * y + 2/3 * z = 0 (2)</p> <p>0 * x + 2/3 * y - 2/3 * z = 0 (3)</p> <p>That's looking better - equation (2) is now in the correct form. Next, we want to make the coefficient of y in (3) zero. We could try and use equation (1) for this, but if we did this, we'd be adding a non-zero x-term - something we don't want! If, on the other hand, we use equation (2), we know we won't be adding a non-zero x-term, since we know the x-term in equation (2) is zero (a result of the previous step).</p> <p>How many lots of equation (2) do we add to equation (3) to get a zero y-coefficient? 1 again!</p> <p>Replace (3) with (3) + (2)</p> <p>-1/2 * x + 0 * y + 2/3 * z = 0 (1)</p> <p>0 * x - 2/3 * y + 2/3 * z = 0 (2)</p> <p>0 * x + 0 * y + 0 * z = 0 (3)</p> <p>Now we're ready to backward substitute.</p> <p>Looking at equation (3), we see it's now meaningless. Any value of x, y and z will satisfy equation 3, so we effectively only have two meaningful equations with 3 unknowns. This means we'll have infinitely many solutions - if you give me a specific value of z, I can give you values of x and y that will make each of (1) and (2) hold.</p> <p>In this situation, we say we let z be free - often by writing something along the lines, "let z = t", just so it is clearly identified as something we're not trying to solve for, rather than something we simply haven't solved for yet, and move on.</p> <p>The next step of backward substitution, as the name implies, is to go up an equaiton ('backwards') and substitute values we know. Equation (2) thus becomes</p> <p>-2/3 y + 2/3 t = 0</p> <p>=> y = t</p> <p>This might not look like much of an answer, but it's still meaningful. This says, whatever value z is, y is the same. They're both free, but they're not free independent of each other. 
If you give me a specific value of z, there's only one possible value of y that will satisfy these equations.</p> <p>Now that we've processed 2, we continue backward substitution by substituting in known values to (1)</p> <p>-1/2*x + 0 * t + 2/3*t = 0</p> <p>=> x = 4/3 * t</p> <p>We now have values of x, y and z, dependent on some free parameter, t.</p> <p>x = 4/3 t</p> <p>y = t</p> <p>z = t</p> <p>You give me a value of t - and it can be -any- value - and I'll give you values of x, y and z that satisfy those equations.</p> <p>E.g if t = 3,</p> <p>x = 4</p> <p>y = 3</p> <p>z = 3</p> <p>satisfies the original equations. Since there are an infinite number of possible values of t, there are an infinite number of solutions. When t = 0, we get what's known as the trivial solution - x = y = z = 0 - but this obviously isn't the only solution.</p>
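<p>The same homogeneous system can also be solved symbolically; here is a short sketch using sympy (just a cross-check of the hand computation above):</p> <pre><code># Null space of the coefficient matrix of the homogeneous system above.
from sympy import Matrix, Rational

A = Matrix([[-Rational(1, 2),  0,               Rational(2, 3)],
            [ Rational(1, 2), -Rational(2, 3),  0],
            [ 0,               Rational(2, 3), -Rational(2, 3)]])
print(A.nullspace())   # one basis vector, proportional to (4/3, 1, 1)
</code></pre>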
2,282,359
<p>I am trying to calculate this limit: $$\lim_{x \to \infty} x^2(\ln x-\ln (x-1))-x$$ The answer is $1/2$ but I am trying to verify this through proper means. I have tried L'Hospital's Rule by factoring out an $x$ and putting that as $\frac{1}{x}$ in the denominator (indeterminate form) but it becomes hopeless afterwards. Also I am a little hesitant about series involving the natural log because of restricted interval of convergence as $x$ is going to infinity. Is there a different approach how to do evaluate this limit? Thanks. </p>
lab bhattacharjee
33,337
<p>HINT:</p> <p>Set $1/x=h$ to get $$-\lim_{h\to0^+}\dfrac{\ln(1-h)+h}{h^2}$$</p> <p>Now either use Taylor Expansion or L'Hospital</p> <p>or See the third Question of <a href="https://math.stackexchange.com/questions/387333/are-all-limits-solvable-without-lh%c3%b4pital-rule-or-series-expansion">Are all limits solvable without L&#39;H&#244;pital Rule or Series Expansion</a></p>
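<p>For completeness, using the expansion $\ln(1-h) = -h - \frac{h^2}{2} + O(h^3)$ the numerator becomes $-\frac{h^2}{2}+O(h^3)$, so $$-\lim_{h\to0^+}\dfrac{\ln(1-h)+h}{h^2}=\lim_{h\to0^+}\left(\frac12+O(h)\right)=\frac12,$$ which matches the expected answer $1/2$.</p>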
11,315
<p>I just answered this question:</p> <p><a href="https://math.stackexchange.com/questions/528880/boolean-formula">Boolean formula over 64 Boolean variables X</a></p> <p>By the time I had posted my answer, another user had edited the question so as to remove all the mathematical interest.</p> <p>What is going on here? I feel very demotivated after spending time crafting an answer to a question and then seeing a sensible question trashed.</p> <p>Regards,</p> <p>Rob.</p>
user642796
8,348
<p>This isn't going to directly address the question (which has been amply answered by others), but I do want to highlight <a href="https://math.meta.stackexchange.com/questions/11315/it-looks-like-anyone-can-edit-anyones-questions-is-that-right#comment42866_11317">a comment made by Lord_Farin</a>:</p> <blockquote> <p>In case of defacing edits, it's good to <em>flag</em> the question with attention for a moderator as well.</p> </blockquote> <p>I'll go a step further and say that whenever you notice very suspicious edits, the moderators <strong>should</strong> be notified via a flag. We will then investigate to see if anything shady went on, and deal with the user(s) involved as appropriate. Starting meta-threads on the topic is not always the best way to get our attention, and simply reverting the edits may not be enough.</p>
1,149,907
<p>What are some different methods to evaluate</p> <p>$$ \int_{-\infty}^{\infty} x^4 e^{-ax^2} dx$$</p> <p>for $a &gt; 0$.</p> <p>This integral arises in a number of contexts in Physics and was the original motivation for my asking. It also arises naturally in statistics as a higher moment of the normal distribution.</p> <p><strong>I have given a few methods of evaluation below.</strong> Anyone know of others?</p>
Simon S
21,495
<p><strong>1</strong> Here's a relatively elegant method.</p> <p>Notice that $\frac{\partial \ }{\partial a} e^{-a x^2} = - x^2 e^{-a x^2}$ and hence $\frac{\partial^2 \ }{\partial a^2} e^{-a x^2} = + x^4 e^{-a x^2}$ </p> <p>Thus, as the integrand is bounded and $C^\infty$ in both variables,</p> <p>$$I = \int_{-\infty}^{\infty} x^4 e^{-ax^2} dx = \int_{-\infty}^{\infty} \frac{\partial^2 \ }{\partial a^2} e^{-ax^2} dx = \frac{d^2 \ }{da^2}\int_{-\infty}^{\infty} e^{-ax^2} dx $$</p> <p>Since $\displaystyle \int_{-\infty}^{\infty} e^{-ax^2} dx = \frac{\sqrt \pi}{\sqrt a}$,</p> <p>$$I = \frac{d^2 \ }{da^2} \frac{\sqrt \pi}{\sqrt a} = \frac{3\sqrt \pi}{4a^{5/2}}$$</p> <hr> <p><strong>2</strong> Another method:</p> <p>$$I^2 = \int_{-\infty}^{\infty} x^4 e^{-ax^2} dx \ \cdot \int_{-\infty}^{\infty} y^4 e^{-ay^2} dx = \int\int_{\mathbb R^2} (xy)^4 e^{-a(x^2 + y^2)} dx \ dy$$</p> <p>Moving to polar coordinates,</p> <p>$$I^2 = \int_0^{2\pi} \int_0^\infty r^8 \cos^4\theta\sin^4\theta e^{-ar^2} r \ dr \ d\theta = \int_0^\infty r^9e^{-ar^2} \ dr \ \cdot \ \int_0^{2\pi} \left(\frac{1}{2}\sin2\theta\right)^4 \ d\theta$$</p> <p>With substitution $u = r^2$, the first integral is $\frac{4!}{2a^5}$. As $\sin^4 2\theta = \frac{1}{8} ( -4\cos4\theta + \cos 8\theta + 3)$, in the second integral the first two terms vanish over the domain of integration $[0,2\pi]$ and</p> <p>$$I^2 = \frac{4!}{2a^5} \cdot \frac{1}{2^4} \frac{3}{8} 2\pi = \frac{9\pi}{16a^5}$$</p> <p>Hence, as $I$ is positive,</p> <p>$$I = \frac{3\sqrt \pi}{4a^{5/2}}$$</p> <hr> <p><strong>3</strong> High school method:</p> <p>Integrating by parts,</p> <p>$$I = {-1 \over 2a} \int_{-\infty}^{\infty} x^3 (-2ax)e^{-ax^2} dx = {3\over 2a} \int_{-\infty}^{\infty} x^2 e^{-ax^2} dx $$</p> <p>$$= {-3\over (2a)^2} \int_{-\infty}^{\infty} x (-2ax) e^{-ax^2} dx = {3 \over 4a^2} \int_{-\infty}^{\infty} e^{-ax^2} dx $$</p> <p>and hence</p> <p>$$I = {3 \over 4}{\sqrt\pi \over a^{5/2}}$$</p>
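<p>A symbolic cross-check of the result (a sketch using sympy; it should reproduce the closed form above for $a&gt;0$):</p> <pre><code># Symbolic evaluation of the integral of x^4 * exp(-a*x^2) over the real line.
from sympy import symbols, integrate, exp, oo, simplify

x = symbols('x')
a = symbols('a', positive=True)
I = integrate(x**4 * exp(-a * x**2), (x, -oo, oo))
print(simplify(I))   # expected: 3*sqrt(pi)/(4*a**(5/2))
</code></pre>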
4,048,335
<p>Question:</p> <blockquote> <p>Assume that a recurrence relation is given as below:</p> <p><span class="math-container">$T(n)=2T(n/2)+\log(n)$</span></p> <p>and we know that <span class="math-container">$T(1) = 1$</span></p> <p>We want to solve the relation (find an explicit definition of <span class="math-container">$T(n)$</span> which does not rely on itself).</p> </blockquote> <p>My solving:</p> <blockquote> <p>I wanted to solve it using substitution method but when i tried I get the series <span class="math-container">$$\log(n)+2\log(n/2)+4\log(n/4)+8\log(n/8) ... 2^k\log(n/2^k)$$</span> which I am not able to solve.</p> </blockquote>
jjagmath
571,433
<p>The recurrence relation is not well defined. You can't calculate, for example, <span class="math-container">$T(3)$</span>.</p>
4,048,335
<p>Question:</p> <blockquote> <p>Assume that a recurrence relation is given as below:</p> <p><span class="math-container">$T(n)=2T(n/2)+\log(n)$</span></p> <p>and we know that <span class="math-container">$T(1) = 1$</span></p> <p>We want to solve the relation (find an explicit definition of <span class="math-container">$T(n)$</span> which does not rely on itself).</p> </blockquote> <p>My solving:</p> <blockquote> <p>I wanted to solve it using substitution method but when i tried I get the series <span class="math-container">$$\log(n)+2\log(n/2)+4\log(n/4)+8\log(n/8) ... 2^k\log(n/2^k)$$</span> which I am not able to solve.</p> </blockquote>
Cesareo
397,348
<p>Assuming <span class="math-container">$\log(n) = \log_2 n$</span> we have</p> <p><span class="math-container">$$ T\left(2^{\log_2 n}\right) = 2T\left(2^{\log_2 \frac n2}\right)+\log_2 n $$</span></p> <p>now calling <span class="math-container">$\mathcal{T}(\cdot) = T\left(2^{(\cdot)}\right)$</span> and <span class="math-container">$z = \log_2 n$</span> we follow with the linear recurrence</p> <p><span class="math-container">$$ \mathcal{T}(z) = 2\mathcal{T}(z-1) + z $$</span></p> <p>with solution</p> <p><span class="math-container">$$ \mathcal{T}(z) =2^{z-1}c_0 + 2^{z+1}-(z+2) $$</span></p> <p>and now going backwards with <span class="math-container">$z = \log_2 n$</span> we arrive at</p> <p><span class="math-container">$$ T(n) = \frac n2 c_0 +2(n-1)-\log_2 n $$</span></p> <p>Finally, <span class="math-container">$T(1) = 1$</span> gives <span class="math-container">$c_0 = 2$</span>.</p>
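<p>Spelling out the constant: with $c_0=2$ this is $T(n) = 3n - 2 - \log_2 n$ (for $n$ a power of $2$); e.g. $T(2)=2T(1)+1=3$ and indeed $3\cdot 2-2-1=3$.</p>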
591,207
<p>Show a proof that $Z(G),$ the center of $G,$ is a normal subgroup of $G$ and every subgroup of the center is normal. I know that $Z(G)$ is a normal subgroup and it is abelian, but how would I show that every subgroup of $Z(G)$ is also normal?</p>
Igor Rivin
109,865
<p>If $H \leq Z(G)$ and $h \in H,$ then for every $g \in G$ we have $g^{-1}hg = h$ (since $h$ is central), and it follows that $g^{-1}H g = H$ (the equality holds elementwise); that is, $H$ is normal in $G$.</p>
873,183
<p>Are variables logical or non-logical symbols in a logic system? I understand constants are 0-ary logical operation symbols. I think variables are non-logical symbols.</p> <p>But here are two contrary examples:</p> <p>It seems that variables are logical symbols in a propositional logic system, according to <a href="http://en.wikipedia.org/wiki/First-order_logic#Logical_symbols" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/First-order_logic#Logical_symbols</a></p> <blockquote> <p><strong>Logical symbols</strong></p> <p>An infinite set of <strong>variables</strong>, often denoted by lowercase letters at the end of the alphabet x, y, z, … . Subscripts are often used to distinguish variables: x0, x1, x2, … .</p> </blockquote> <p>It seems that variables are non-logical symbols in a propositional logic system, according to <a href="http://en.wikipedia.org/wiki/Propositional_logic#Generic_description_of_a_propositional_calculus" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Propositional_logic#Generic_description_of_a_propositional_calculus</a></p> <blockquote> <ul> <li><p>The alpha set is a finite set of elements called <strong>proposition symbols or propositional variables</strong>. Syntactically speaking, these are the most basic elements of the formal language <span class="math-container">$\mathcal{L}$</span>, otherwise referred to as atomic formulæ or terminal elements. In the examples to follow, the elements of are typically the letters <span class="math-container">$p, q, r$</span>, and so on.</p> </li> <li><p>The omega set <span class="math-container">$\Omega$</span> is a finite set of elements called <strong>operator symbols or logical connectives</strong>.</p> </li> </ul> </blockquote> <p>Thanks.</p>
hmakholm left over Monica
14,366
<p>As Thomas Andrews pointed out in a comment, this terminology is not something where one can expect consistency from book to book.</p> <p>However, <strong>as a practical matter</strong>, when one specifies that the non-logical language of a particular theory is such-and-such, it is highly unusual to have to say, "oh, and there are variables too". Variables are just supposed to be <em>there</em>, implicitly, when we say that what we're speaking about is a first-order theory.</p> <p>As such, variables are <strong>treated as</strong> logical symbols, no matter whether the official definitions in any given book call them out as being so.</p> <hr> <p>Note that your second quote is about <strong>propositional logic</strong> where there are no object variables at all and there's no meaningful distinction between logical and non-logical vocabulary.</p>
2,869,305
<p>I need some help evaluating this limit... Wolfram says it blows up to infinity but I don't think so. I just can't prove it yet.</p> <p>$$ \lim_{x\to\infty}(x!)^{1/x}-\frac{x}{e} $$</p>
gammatester
61,216
<p>I used a simple one-liner and asked Maple 7 to <code>simplify(series(GAMMA(x+1)^(1/x)-x/exp(1), x=infinity,3));</code> and the answer is <code>1/2*(ln(2)+ln(Pi)+ln(x)+2*O(1/x)*exp(1))*exp(-1)</code>; this can be converted to mathematical notation as: $$\Gamma\left(x+1\right)^{1/x}-\frac{x}{e} = \frac{1}{2e}\ln(2\pi x) + O\left(\frac{1}{x}\right), \quad x\rightarrow \infty$$ In particular, the difference grows like $\frac{1}{2e}\ln(2\pi x)$, so it does diverge to $+\infty$ (just very slowly), consistent with what Wolfram reports.</p>
2,707,744
<h2>The Problem</h2> <p>I am really struggling to wrap my head around the definition of the integral in measure theory. According to <a href="http://measure.axler.net/" rel="nofollow noreferrer">Axler</a>, the integral of a nonnegative function is defined as follows:</p> <blockquote> <p>Suppose $(X, \mathcal S,\mu)$ is a measure space and $f : X \to [0,\infty]$ a measurable function. The integral of $f$ with respect to $\mu$, denoted $\int f\, \mathrm d \mu$ is defined by \begin{multline} \int f\,\mathrm d\mu = \sup \bigg\{\sum_{k=1}^{m} a_k \mu(A_k) : \sum_{k=1}^{m} a_k\chi_{A_k} \leq f,\quad m\in\mathbb Z^+,\quad a_1,\ldots,a_m &gt; 0\quad \text{and} \quad A_1,\ldots, A_m\in\mathcal S\bigg\}. \end{multline}</p> </blockquote> <p>I've been asked to prove Theorems 3.10 and 3.12 in the linked book, that the integral is order-preserving, as in if $f \leq g$, then \begin{equation} \int f\,\mathrm d \mu \leq \int g \,\mathrm d \mu \end{equation} and that the integral is positively homogeneous, as in if $c\in\mathbb R : c \geq 0$, then \begin{equation} \int cf\,\mathrm d\mu = c\int f\,\mathrm d\mu\,. \end{equation} The book gives hints in both cases, namely that</p> <blockquote> <h2>3.10</h2> <p>The supremum defining $\int f\,\mathrm d\mu$ is taken over a subset of the corresponding set for the supremum defining $\int g\,\mathrm d\mu$.</p> </blockquote> <p>and in the case of positive homogeneity:</p> <blockquote> <h2>3.12</h2> <p>The supremum defining $\int cf\,\mathrm d\mu$ is taken over a set consisting of $c$ times the set, whose supremum defines $\int f\,\mathrm d\mu$.</p> </blockquote> <p>You would think that this was enough of a hint, but I am having trouble justifying the steps in the "calculations".</p> <h2>3.10</h2> <p>Let $f,g : X\to[0,\infty]$ be measurable, meaning that \begin{equation} F = \bigcup_{k=1}^{m} F_k = f^{-1}([0,\infty]) \in\mathcal S \quad \text{and}\quad G = \bigcup_{k=1}^{m} G_k = g^{-1}([0,\infty]) \in\mathcal S\,. \end{equation} If we now take the hint, choose the sets $F\subseteq G \subseteq X$ so that each of the sums in the definition of the integral reach their supremums. Then \begin{align} \int_F f\,\mathrm d\mu &amp;= \sup\bigg\{ \sum_{k=1}^{m} a_k \mu(F_k) : \sum_{k=1}^{m} a_k \chi_{F_k} \leq f \bigg\}\\ &amp;\leq \sup\bigg\{ \sum_{k=1}^{m} a_k \mu(G_k) : \sum_{k=1}^{m} a_k \chi_{G_k} \leq g \bigg\}\\ &amp;= \int_G g\,\mathrm d\mu, \end{align} where the inequality follows from the fact that measure preserves order, as in if $F \subseteq G$, then $\mu(F) \leq \mu(G)$. This seems to be enough of a proof in my head, but I am not totally convinced. As for the next theorem...</p> <h2>3.12</h2> <p>Let $f$ be a measurable function and $c \geq 0$, and once again choose the pre-image of $f$, $F$, so that the sum in the definition of the integral reaches its supremum and also define the set \begin{equation} cF = \{ca : a\in F\} = c\bigcup_{k=1}^{m} F_k = \bigcup_{k=1}^{m} cF_k\,. \end{equation} Then \begin{align} \int_{cF} cf\,\mathrm d\mu &amp;= \sup\bigg\{ \sum_{k=1}^{m} a_k \mu(cF_k) : \sum_{k=1}^{m} a_k \chi_{cF_k} \leq cf \bigg\}\\ &amp;= \sup\bigg\{ \sum_{k=1}^{m} a_k c\mu(F_k) : \sum_{k=1}^{m} a_k \chi_{cF_k} \leq cf \bigg\}\\ &amp;= \sup\bigg\{c \sum_{k=1}^{m} a_k \mu(F_k) : \sum_{k=1}^{m} a_k \chi_{cF_k} \leq cf \bigg\}\\ &amp;= c\sup\bigg\{\sum_{k=1}^{m} a_k \mu(F_k) : \sum_{k=1}^{m} a_k \chi_{cF_k} \leq cf \bigg\}\\ &amp;= c\int_F f\,\mathrm d\mu\,. \end{align} This is where I'm not getting it. 
The proof works if \begin{equation} \mu(cF_k) = c\mu(F_k), \end{equation} but I don't remember such a result from the book or our exercise sessions. What I'm especially interested in is, is my understanding of the idea behind the proof correct, or have I totally just missed the target?</p>
Emilio Novati
187,568
<p>It seems that you know homogeneous coordinates. Using these coordinates, a point $X$ in an affine space that corresponds to the vector $x=[x_1,x_2]^T$ is represented by the triple $X=[x_1,x_2,1]^T$ and the matrix that represents a translation of vector $a=[2,5]^T$ is : $$ T_a=\begin{bmatrix} 1 &amp; 0 &amp; 2\\0 &amp; 1 &amp; 5\\ 0&amp;0&amp;1 \end{bmatrix} $$ that, applied to the point $X$ ( with the usual row-column product rule) gives: $$ T_a(X)=\begin{bmatrix} 1 &amp; 0 &amp; 2\\0 &amp; 1 &amp; 5\\ 0&amp;0&amp;1 \end{bmatrix}\begin{bmatrix} x_1\\x_2\\1 \end{bmatrix}= \begin{bmatrix} x_1+2\\x_2+5\\1 \end{bmatrix}$$</p> <p>That are the homogeneous coordinates of the translated point.</p>
4,580,484
<p>It is often said that &quot;differential forms are used for integration&quot;.</p> <p>Typically people like to talk about the integral <span class="math-container">$\int_M \omega$</span> of a differential form <span class="math-container">$\omega$</span>, and the exterior derivative, one of the most important operations on differential forms, is defined so that Stokes' theorem <span class="math-container">$\int_{\partial M} \omega = \int_M \mathrm{d} \omega$</span> holds.</p> <p>A symplectic manifold is a manifold equipped with a symplectic 2-form <span class="math-container">$\omega$</span>. Therefore, by the above considerations, it should be interesting to talk about the integral of the symplectic form over a two-dimensional submanifold of a symplectic manifold.</p> <p>However, it seems that people in the field of symplectic geometry are quite indifferent to this. Why?</p>
F. Conrad
403,916
<p>While I love the answer of Quaere Verum and can't give enough +1s on it, here are a few more &quot;simple&quot; facts about the symplectic form <span class="math-container">$\omega$</span> on a symplectic manifold <span class="math-container">$(M,\omega)$</span> of dimension <span class="math-container">$2n$</span>:</p> <p>First of all, note that a <span class="math-container">$2$</span>-form on a <span class="math-container">$2n$</span>-dimensional manifold (unless <span class="math-container">$n=1$</span>) is not really meant to be integrated. But let me explain why closedness still matters and how we can still integrate using this fact:</p> <p><span class="math-container">$\omega^n$</span> is a volume form, called the symplectic volume form. It gives you a &quot;canonical&quot; volume form on your manifold compatible with the structure. If in addition <span class="math-container">$M$</span> is closed (i.e. compact with no boundary), then you know that the even cohomology groups <span class="math-container">$H^{2k}(M)\neq 0$</span> for <span class="math-container">$0\le k\le n$</span>. Use the previous fact to prove this!</p> <p>Hamiltonian vector fields, moment maps - they all make use of the symplectic form. It usually involves the Cartan formula, which for the (closed) symplectic form just reads <span class="math-container">$\mathcal{L}_X\omega=d(i_{X}\omega)$</span>, where <span class="math-container">$d$</span> is exterior differentiation and <span class="math-container">$i$</span> is the contraction/interior product with a vector field <span class="math-container">$X$</span>.</p> <p>Last but not least, we also have so-called Kähler manifolds, where the symplectic form can be seen as an &quot;intermediate&quot; form between the Riemannian metric <span class="math-container">$g$</span> and the (almost-)complex structure <span class="math-container">$J$</span> on M, i.e. we have <span class="math-container">$$ \omega(\cdot,\cdot)=g(\cdot,J\cdot). $$</span> This construction gives a non-degenerate <span class="math-container">$2$</span>-form for any choice of <span class="math-container">$J$</span> and any <span class="math-container">$J$</span>-compatible metric <span class="math-container">$g$</span>; however, it is not necessarily closed.</p>
2,623,544
<p>I have to diagonolize a matrix A \begin{bmatrix}0&amp;-3&amp;-1&amp;1\\2&amp;5&amp;1&amp;-1\\-2&amp;-3&amp;1&amp;1\\2&amp;3&amp;1&amp;1\end{bmatrix}</p> <hr> <p>I do $det(A-λ)=0$ and I get $λ_{1}=1$, $λ_{2}=2$, $λ_{3}=2$, $λ_{4}=2$</p> <p>So possible diagonal matrix looks like: \begin{bmatrix}1&amp;0&amp;0&amp;0\\0&amp;2&amp;0&amp;0\\0&amp;0&amp;2&amp;0\\0&amp;0&amp;0&amp;2\end{bmatrix}</p> <hr> <p>Then I look for eigenvectors for $λ=1$ and $λ=2$</p> <p>I get:</p> <p>For $λ=1$: $(1, -1, 1,-1)$ </p> <p>For $λ=2$: $(1, 0, 0, 2)$, $(0, 1, 0, 3)$, $(0, 0, 1, 1)$ </p> <p>I create matrix P made of eigenvectors (as columns):</p> <p>\begin{bmatrix}1&amp;1&amp;0&amp;0\\-1&amp;0&amp;1&amp;0\\1&amp;0&amp;0&amp;1\\-1&amp;2&amp;3&amp;1\end{bmatrix}</p> <hr> <p>And when checking if the diagonal matrix is correct by using formula: </p> <p>$A=P^{-1}*D*P$</p> <p>I don't get the correct answer :(</p> <p>Can someone check my steps? I tried to do that a few times but still I am not getting the correct result.</p>
Radek Suchánek
300,610
<p>It seems you have not made any errors when computing the eigenvectors. I checked your results on the following matrix calculator: <a href="https://matrixcalc.org/en/#%7B%7B1,1,0,0%7D,%7B-1,0,1,0%7D,%7B1,0,0,1%7D,%7B-1,2,3,1%7D%7D%5E%28-1%29%2A%7B%7B1,0,0,0%7D,%7B0,2,0,0%7D,%7B0,0,2,0%7D,%7B0,0,0,2%7D%7D%2A%7B%7B1,1,0,0%7D,%7B-1,0,1,0%7D,%7B1,0,0,1%7D,%7B-1,2,3,1%7D%7D" rel="nofollow noreferrer">is $P$ the correct transformation matrix?</a></p> <p>Also as was pointed out, you must use the correct formula for going from one basis to the other. The matrix $P$ is a transformation from the eigenvectors basis to the canonical one, not the other way around, i.e. the formula is $A = P D P^{-1}$. </p>
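<p>A quick numerical cross-check of the decomposition (a sketch using numpy with the matrices from the question):</p> <pre><code># Verify that A = P D P^{-1} with D = diag(1, 2, 2, 2).
import numpy as np

A = np.array([[ 0., -3., -1.,  1.],
              [ 2.,  5.,  1., -1.],
              [-2., -3.,  1.,  1.],
              [ 2.,  3.,  1.,  1.]])
P = np.array([[ 1., 1., 0., 0.],
              [-1., 0., 1., 0.],
              [ 1., 0., 0., 1.],
              [-1., 2., 3., 1.]])
D = np.diag([1., 2., 2., 2.])
print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True
</code></pre>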
2,249,929
<p>Is there a more general form for the answer to this <a href="https://math.stackexchange.com/q/1314460/438622">question</a> where a random number within any range can be generated from a source with any range, while preserving uniform distribution? </p> <p><a href="https://stackoverflow.com/q/137783/866502">This</a> question for example looks familiar and is changing a range of 1-5 to 1-7</p>
Ian
83,396
<p>The simplest way to proceed is rejection sampling. For this we need to first assume $M \geq N$. If this isn't the case (maybe you're playing D&amp;D with a d6, so $N=20,M=6$), then you should roll at least $k=\lceil \log_M(N) \rceil$ times, and then label the $k$-tuples of rolls. For the discussion of the rejection approach let us assume you have already done this and accordingly rename $M^k$ as $M$ if need be.</p> <p>Now rejection sampling amounts to assigning each of the numbers $1,2,\dots,N$ to $q(M,N)$ of the possible rolls, where $q$ is the quotient when $M$ is divided by $N$. If you got one of the assigned rolls, you terminate, otherwise you start over.</p> <p>This rejection method takes constant space. It fails with probability $\frac{r(M,N)}{M}$ (where $r$ is the remainder when $M$ is divided by $N$), so the average number of steps taken is $\frac{M}{M-r(M,N)}$. In theory this can run forever but the probability of long runtimes decays exponentially fast.</p> <p>A different way to proceed is to simply solve a more general problem and then apply it to this one. Note that a statistically equivalent procedure to rolling a $N$ sided die is to generate a random variable $U$ which is uniform on $(0,1)$ and then return $\lceil NU \rceil$. Such a variable can be obtained using infinitely many rolls of an $M$ sided die: $U=\sum_{n=1}^\infty (X_n-1) M^{-n}$ where $X_n$ are uniform on $\{ 1,2,\dots,M \}$. </p> <p>Now as it stands, that seems like a bad thing, because you can't do infinitely many rolls. But you don't need full resolution, you only need to resolve which of the intervals $(k/N,(k+1)/N]$ that $U$ will eventually be in. Given $R$ rolls and a current value of $U$, say $U_R$, you know that the final value of $U$ will be somewhere between $U_R$ and $U_R+\sum_{n=R+1}^\infty (M-1) M^{-n}=U_R+M^{-R}$. If these numbers fall in the same interval of the form above then you are done computing. </p> <p>Again the runtime of this alternative method is random and not bounded. Additionally, the <em>memory</em> footprint of this method is also random and not bounded. The advantage of it is that it does not throw away entropy, so the probability that you will finish in the next step improves as you go on. Also, if $M&lt;N$, although you need to roll at least $k=\lceil \log_M(N) \rceil$ times, you do not have to perform units of $k$ rolls, which could be good if for some reason $k$ were large.</p> <p>An example of the latter method: $N=20,M=6$. I roll a 4, putting me in $[3/6,4/6]$, but $\lceil 3 \cdot 20/6 \rceil = 10 \neq 14 = \lceil 4 \cdot 20/6 \rceil$. Then I roll a 6, putting me in $[23/36,24/36]$. I'm still not done because $\lceil 23 \cdot 20/36 \rceil = 13 \neq 14 = \lceil 24 \cdot 20/36 \rceil$. I roll again and get a $1$, and now my roll is understood as a 13.</p>
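<p>For concreteness, here is a short Python sketch of the rejection method (the function names are my own; <code>random.randint</code> plays the role of the physical $M$-sided die):</p> <pre><code># Simulate an N-sided die using only an M-sided die, by rejection sampling.
import random

def roll(m):
    """One roll of an m-sided die, uniform on {1, ..., m}."""
    return random.randint(1, m)

def uniform_n_from_m(n, m):
    """Return a value uniform on {1, ..., n} using only rolls of an m-sided die."""
    # Roll k times so that m**k &gt;= n; the k-tuples of rolls act as one big die.
    k, big = 1, m
    while big &lt; n:
        big *= m
        k += 1
    limit = big - (big % n)             # accept only combined rolls below this bound
    while True:
        x = 0
        for _ in range(k):
            x = x * m + (roll(m) - 1)   # combine k rolls into a value in {0, ..., big-1}
        if x &lt; limit:                   # otherwise reject and re-roll everything
            return x % n + 1
</code></pre>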
1,546,599
<p>What method should I use for this limit? $$ \lim_{n\to \infty}{\frac{n^{n-1}}{n!}} $$</p> <p>I tried ratio test but I ended with the ugly answer $$\lim_{n\to \infty}\frac{(n+1)^{n-1}}{n^{n-1}} $$ which would go to 1? Which means we cannot use ratio test. I do not know how else I could find this limit.</p>
André Nicolas
6,312
<p>Note that for $n\ge 2$ the top is equal to $n\cdot n^{n-2}$.</p> <p>The bottom is $(1)(2)\left[(3)(4)\cdots(n)\right]$. The product $[(3)(4)\cdots(n)]$ consists of $n-2$ terms, all $\le n$. So the bottom is $\le (1)(2)n^{n-2}$. It follows that for $n\ge 2$ we have $$\frac{n^{n-1}}{n!}\ge \frac{n\cdot n^{n-2}}{(1)(2)n^{n-2}}=\frac{n}{2}.$$ Since $\frac{n}{2}\to\infty$ as $n\to\infty$, it follows that our sequence diverges to $\infty$.</p>
98,932
<p>Suppose I would like to use a method for data prediction, and that I have some empirical data (i.e., sequence of samples of the form [time, value]). Would it be possible to know in advance, based on the data only, if it makes sense to use a model for prediction (based on the samples, used as a training set). </p> <p>I am asking this for the following simple reason: it is possible that the sample data is totally random and that there is no correlation at all between the samples. Hence, I would like to avoid trying to find correlations where there is not.</p>
Andrew
23,236
<p>I lack a theoretical proof, however I strongly suspect that no such test exists.</p> <p>My reasoning is that it is possible to design pseudo random number generators that meet strong statistical measures of randomness ( For example, the BSI tests. See wikipedia for details ). Nonetheless, such generators are deterministic and so, in principle, it should be possible to discover the underlying algorithm via machine learning ( consider, for example, a genetic programming system that manages to reproduce the generator exactly ).</p>
2,576,801
<p>$$\frac {|A-B|}{15} = \frac {|B-A|}{10} = \frac {|A\cap B|}{6}$$</p> <ul> <li>What is the minimal value of $A \cup B$?</li> </ul> <p>This question is from my exam. I've tried to solve it by giving values and added them up. </p> <p>Regards</p>
random
513,275
<p>Multiplying with $30$ gives $2|A-B|=3|B-A|=5|A\cap B|$, which implies that $|A-B|$ must be divisible by $3$ and by $5$ and therefore by $15$. So $|A-B|=15k$, $|B-A|=10k$ and $|A\cap B|=6k$ for some integer $k\ge 1$ (taking the sets non-empty), and $$|A\cup B|=|A-B|+|B-A|+|A\cap B|=31k,$$ so the minimal value, attained at $k=1$, is $31$.</p>
4,007,450
<p>Let <span class="math-container">$y=f(x)$</span> the graph of a real-valued function. We define its curvature by : <span class="math-container">$$curv(f) = \frac{|f''|}{(1+(f')^2)^{3/2}}$$</span></p> <p>I would like to know if there is any function (apart from the trivial anwser <span class="math-container">$f(x)=0$</span>) whose curvature is itself. So what is the fixed point of <span class="math-container">$curv$</span> ?</p>
Robert Lewis
67,071
<p>Choose</p> <p><span class="math-container">$a = b; \tag 1$</span></p> <p>then</p> <p><span class="math-container">$aba = b \tag 2$</span></p> <p>becomes</p> <p><span class="math-container">$a^3 = a, \tag 3$</span></p> <p>which implies</p> <p><span class="math-container">$a^2 = e, \tag 4$</span></p> <p><span class="math-container">$e$</span> being the identity element of <span class="math-container">$G$</span>; thus,</p> <p><span class="math-container">$a = a^{-1}, \tag 5$</span></p> <p>whence (2) becomes</p> <p><span class="math-container">$aba^{-1} = b, \; \forall a, b \in G, \tag 6$</span></p> <p>or</p> <p><span class="math-container">$ab = ba, \tag 7$</span></p> <p>and we see that <span class="math-container">$G$</span> is abelian. <span class="math-container">$OE\Delta$</span>.</p>
2,336
<p>I have just started teaching mathematics up to secondary level. I don't have much idea as to how to handle the class. </p> <p>In order to make students learn well, how can we divide the time in order to put a mix of lecture and specially the group work?</p>
Dan Goldner
1,525
<p>In my 75-min algebra 2 class, I use this schedule daily: 5 min for warmup, 35 min for individual presentations to the class of homework solutions, up to 10 min for direct instruction if needed from me, 20 min for classwork in groups, and 5 for some quick exit exercise that I collect as a check on understanding. It's been working well for me; I got the basic structure from Lee Mahavier, a very experienced and successful teacher of secondary ELL's in Georgia. </p>
2,336
<p>I have just started teaching mathematics up to secondary level. I don't have much idea as to how to handle the class. </p> <p>In order to make students learn well, how can we divide the time in order to put a mix of lecture and specially the group work?</p>
Mike Pierce
5,220
<ol> <li><p>Identify what you want you'd like to teach the students for the class. Maybe more importantly, identify the <em>goals</em> of the course. What do you want the students to <em>know</em> and what do you want them to be able to <em>do</em> once the course is over? </p></li> <li><p>Write up a reasonable calendar for the course. Put the lessons you want to give in a reasonable order and designate which days will correspond to each lesson. You may need to alter your goals for the course at this point upon noticing you don't have enough time.</p></li> <li><p>Each day in the course, just <em>teach</em> the students the thing you set out to teach them. There's no need to have an immutable structure to each day (a 5 minute warm-up, followed by 27 minutes of lecture, then a 12½ minute breakout exercise, …); use your training as a teacher, and your own intuition, to teach them as best as you can each day. Some topics will require more lecturing than others to setup. Some topics will require very little scaffolding, and you can have the students explore during the class meeting. Some topics are better discussed in groups, whereas some are best thought about individually. Be flexible.</p></li> </ol>
3,373,576
<p>What are the non-trivial solutions of <span class="math-container">$$\tan x = \arctan x\,?$$</span></p> <p>Can these solutions be expressed, e.g., in terms of <span class="math-container">$\pi$</span> or in radicals? I mean, are they &quot;nice&quot; numbers?</p> <p>For example, do we know whether these solutions are irrational or rational, some rational multiple of <span class="math-container">$\pi$</span>, or something like <span class="math-container">$\frac{\sqrt{2}}{7}$</span>?</p> <p>What about <span class="math-container">$\cot x = \text{arccot}(x)$</span>? I became curious about these problems after graphing the four functions.</p>
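<p>Not an answer to the closed-form question, but the solutions are easy to pin down numerically, which at least shows where they live. On each branch <span class="math-container">$\bigl((k-\tfrac12)\pi,(k+\tfrac12)\pi\bigr)$</span> with <span class="math-container">$k\ge1$</span>, the difference <span class="math-container">$\tan x-\arctan x$</span> increases from <span class="math-container">$-\infty$</span> to <span class="math-container">$+\infty$</span>, so there is exactly one root per branch (the negative solutions mirror them, since the function is odd). A sketch of the root search (the bracketing offsets are arbitrary choices of mine):</p>
<pre><code># Locate the first few positive solutions of tan(x) = arctan(x).
# On each branch ((k - 1/2)*pi, (k + 1/2)*pi), k = 1, 2, ..., the difference
# tan(x) - arctan(x) runs from -inf to +inf, so it has exactly one zero there.
import numpy as np
from scipy.optimize import brentq

def g(x):
    return np.tan(x) - np.arctan(x)

offset = 1e-9                          # keep the bracket away from the poles of tan
for k in range(1, 5):
    a = (k - 0.5) * np.pi + offset
    b = (k + 0.5) * np.pi - offset
    r = brentq(g, a, b)
    print(f"k = {k}:  x = {r:.12f},  tan(x) - arctan(x) = {g(r):.2e}")
</code></pre>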
user
505,767
<p>Substituting <span class="math-container">$y=x-1 \to 0$</span>, we have</p> <p><span class="math-container">$$\lim\limits_{x\to1}\frac{\tan(x^2-1)}{\sin(x^2-4x+3)}=\lim\limits_{y\to0}\frac{\tan(y(y+2))}{\sin(y(y-2))}$$</span></p> <p>and</p> <p><span class="math-container">$$\frac{\tan(y(y+2))}{\sin(y(y-2))}=\frac{\tan(y(y+2))}{y(y+2)}\frac{y(y-2)}{\sin(y(y-2))}\frac{y(y+2)}{y(y-2)}$$</span></p> <p>As <span class="math-container">$y\to0$</span>, the first two factors tend to <span class="math-container">$1$</span> and the last tends to <span class="math-container">$\frac{2}{-2}=-1$</span>, so the limit equals <span class="math-container">$-1$</span>.</p>
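<p>A quick numerical sanity check of that value (my own addition; the sample points are arbitrary):</p>
<pre><code># Evaluate tan(x^2 - 1)/sin(x^2 - 4x + 3) at points approaching x = 1
# and compare with the claimed limit -1.
import numpy as np

def h(x):
    return np.tan(x**2 - 1) / np.sin(x**2 - 4*x + 3)

for d in (1e-2, 1e-4, 1e-6):
    print(f"x = 1 + {d:g}: {h(1 + d):.10f}    x = 1 - {d:g}: {h(1 - d):.10f}")
</code></pre>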
1,361,948
<p>$\frac{\partial f}{\partial x} = 2xe^{y^2-x^2}(1-x^2-y^2) = 0.$</p> <p>$\frac{\partial f}{\partial y} = 2ye^{y^2-x^2}(1+x^2+y^2) = 0.$</p> <p>So, $2xe^{y^2-x^2}(1-x^2-y^2) = 2ye^{y^2-x^2}(1+x^2+y^2)$.</p> <p>$x(1-x^2-y^2) = y(1+x^2+y^2)$</p> <p>$x-x^3-xy^2 = y + x^2y + y^3$</p> <p>Is guessing values for the variables the only way of solving this? With $y = 0$ I can already figure out that $x = 0$, $1$, or $-1$, but it bothers me that I had to guess that. </p>
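<p>For what it's worth, guessing is not the only route: the exponential factor is strictly positive, so the system reduces to two polynomial equations that a CAS can solve directly. A sketch with sympy (my own addition, restricted to real solutions; the symbol names are the obvious ones):</p>
<pre><code># Solve the critical-point system with sympy.  The factor exp(y**2 - x**2)
# is strictly positive, so only the polynomial factors matter.
import sympy as sp

x, y = sp.symbols('x y', real=True)
eq1 = 2*x*(1 - x**2 - y**2)        # from the x-derivative
eq2 = 2*y*(1 + x**2 + y**2)        # from the y-derivative

solutions = sp.solve([eq1, eq2], [x, y], dict=True)
real_solutions = [s for s in solutions if s[x].is_real and s[y].is_real]
print(real_solutions)              # expect y = 0 with x in {-1, 0, 1}
</code></pre>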
Jan Eerland
226,665
<p>I've checked my answer with Mathematica and it gives the same result; it's much more complicated than you thought, so your answer isn't correct!</p> <hr> <p>$$z = \frac{1 + i}{\sigma \delta \left(1 - e^{-\frac{(1+i)t}{\delta}} \right)}$$</p> <p>Assuming $\sigma,\delta,t\in \mathbb{R}$:</p> <p>$$\Re\left(\frac{1 + i}{\sigma \delta \left(1 - e^{-\frac{(1+i)t}{\delta}} \right)}\right)=$$ $$\Re\left(\frac{1 + i}{\sigma \delta - \sigma \delta e^{-\frac{(1+i)t}{\delta}} }\right)=$$</p> <p>Since $\frac{1}{1-w}=1+\frac{1}{w^{-1}-1}$ with $w=e^{-\frac{(1+i)t}{\delta}}$ (note that the sign of the exponent flips), this equals</p> <p>$$\Re\left(\frac{1+i}{\sigma \delta}\right)+\Re\left(\frac{1+i}{\sigma \delta\left(-1+e^{\frac{(1+i)t}{\delta}}\right)}\right)=$$</p> <p>$$\frac{1}{\sigma \delta}+\Re\left(\frac{1+i}{\sigma \delta\left(-1+e^{\frac{(1+i)t}{\delta}}\right)}\right)=$$</p> <p>$$\frac{1}{\sigma \delta}+\frac{\Re\left(\frac{1+i}{-1+e^{\frac{(1+i)t}{\delta}}}\right)}{\sigma \delta}=$$</p> <p>$$\frac{1}{\sigma \delta}+\frac{\Re\left(\frac{1}{-1+e^{\frac{(1+i)t}{\delta}}}\right)-\Im\left(\frac{1}{-1+e^{\frac{(1+i)t}{\delta}}}\right)}{\sigma \delta}=$$</p> <p>$$\frac{1}{\sigma \delta}+\frac{\frac{1-e^{\frac{t}{\delta}}\cos\left(\frac{t}{\delta}\right)}{-e^{\frac{2t}{\delta}}+2e^{\frac{t}{\delta}}\cos\left(\frac{t}{\delta}\right)-1}-\left(-\frac{e^{\frac{t}{\delta}}\sin\left(\frac{t}{\delta}\right)}{e^{\frac{2t}{\delta}}\sin^2\left(\frac{t}{\delta}\right)+\left(e^{\frac{t}{\delta}}\cos\left(\frac{t}{\delta}\right)-1\right)^2}\right)}{\sigma \delta}=$$</p> <p>$$\frac{1}{\sigma \delta}+\frac{\frac{1-e^{\frac{t}{\delta}}\cos\left(\frac{t}{\delta}\right)}{-e^{\frac{2t}{\delta}}+2e^{\frac{t}{\delta}}\cos\left(\frac{t}{\delta}\right)-1}+\frac{e^{\frac{t}{\delta}}\sin\left(\frac{t}{\delta}\right)}{e^{\frac{2t}{\delta}}\sin^2\left(\frac{t}{\delta}\right)+\left(e^{\frac{t}{\delta}}\cos\left(\frac{t}{\delta}\right)-1\right)^2}}{\sigma \delta}=$$</p> <p>$$\frac{1}{\sigma \delta}+\frac{e^{\frac{t}{\delta}}\sin\left(\frac{t}{\delta}\right)+e^{\frac{t}{\delta}}\cos\left(\frac{t}{\delta}\right)-1}{\sigma \delta\left(e^{\frac{2t}{\delta}}-2e^{\frac{t}{\delta}}\cos\left(\frac{t}{\delta}\right)+1\right)}$$</p> <p>So:</p> <p>$$\Re\left(\frac{1 + i}{\sigma \delta \left(1 - e^{-\frac{(1+i)t}{\delta}} \right)}\right)=\frac{1}{\sigma \delta}+\frac{e^{\frac{t}{\delta}}\sin\left(\frac{t}{\delta}\right)+e^{\frac{t}{\delta}}\cos\left(\frac{t}{\delta}\right)-1}{\sigma \delta\left(e^{\frac{2t}{\delta}}-2e^{\frac{t}{\delta}}\cos\left(\frac{t}{\delta}\right)+1\right)}$$</p>
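<p>For what it's worth, a quick numerical cross-check of the final formula against the original expression (my own addition; the parameter values below are arbitrary):</p>
<pre><code># Cross-check: real part of (1+i)/(sigma*delta*(1 - exp(-(1+i)*t/delta)))
# versus the closed-form expression above.
import numpy as np

def direct(sigma, delta, t):
    z = (1 + 1j) / (sigma * delta * (1 - np.exp(-(1 + 1j) * t / delta)))
    return z.real

def closed_form(sigma, delta, t):
    u = t / delta
    num = np.exp(u) * np.sin(u) + np.exp(u) * np.cos(u) - 1
    den = sigma * delta * (np.exp(2 * u) - 2 * np.exp(u) * np.cos(u) + 1)
    return 1 / (sigma * delta) + num / den

for sigma, delta, t in [(1.0, 2.0, 0.7), (0.5, 1.3, 3.1), (2.0, 0.4, 1.0)]:
    print(f"sigma={sigma}, delta={delta}, t={t}: "
          f"direct = {direct(sigma, delta, t):.12f}, "
          f"formula = {closed_form(sigma, delta, t):.12f}")
</code></pre>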