qid | question | author | author_id | answer |
---|---|---|---|---|
1,299,032 |
<p>The following problem is from Taylor's PDE I. I do not get why $D \det(I)B=\frac{d}{d t}\det(I+t B)|_{t=0}$ holds. Since $\det(I)=1$ and $D$ is an $n\times n$ matrix, the left side seems to be a matrix, while the right side is clearly a scalar. Could anyone tell me how I should correctly interpret $D\det(I)$ in this case?</p>
<p><img src="https://i.stack.imgur.com/pIVUt.png" alt="enter image description here"></p>
|
Robert Lewis
| 67,071 |
<p>Hoping I'm not completely off-base here! I think that mjqxxx had it right in his comment, i.e. that $D \det(I)$ refers to the linear map from $M_n(\Bbb C) \to \Bbb C$ which is the derivative of $\det: M_n(\Bbb C) \to \Bbb C$ at the point $I \in M_n(\Bbb C)$; then $D\det(I) B$ makes sense as the directional derivative of $\det$ in the $B$ direction at the point $I$, so that</p>
<p>$D\det(I)B = \dfrac{d}{dt}\det(I + tB) \Big|_{t = 0} \tag{0}$</p>
<p>is in fact a legitimate notation.</p>
<p>Having said these things, and clearly understanding the symbolism, I must say that I became quite intrigued by the "other question" presented in the body of this post, so I went ahead and answered it as well.</p>
<p><strong><em>Soooooooo,</em></strong> in case that anyone is interested: </p>
<p>Let $\lambda_i \in \Bbb C$, $1 \le i \le n$, be the $n$ eigenvalues of $B$; note that, if need be, we allow multiple occurrences of the same complex <em>number</em> in the list $\lambda_1$, $\lambda_2$, $\ldots$, $\lambda_n$ to accommodate repeated eigenvalues of $B$. Bearing this convention in mind, we recall that the determinant of a complex matrix is the product of its eigenvalues; we have</p>
<p>$\det(B) = \prod_1^n \lambda_i. \tag{1}$</p>
<p>We next observe that, for any complex square matrix $C$ and any eigenvalue $\mu$ of $C$, with eigenvector $v \ne 0$,</p>
<p>$Cv = \mu v, \tag{2}$</p>
<p>for any complex numbers $a, b \in \Bbb C$ we have</p>
<p>$(aC + bI)v = aCv + bIv = a\mu v + bv = (a\mu + b)v; \tag{3}$</p>
<p>that is, under the mapping $C \mapsto aC + bI$ the eigenvalues of $C$ are transformed as $\mu \mapsto a\mu + b$,
whilst the eigenvectors are unaffected; furthermore, if $v$ is a <em>generalized eigenvector</em>, that is</p>
<p>$Cv - \mu v = w, \tag{4}$</p>
<p>where $w \ne 0$ is also a generalized eigenvector, then</p>
<p>$(aC + bI)v - (a\mu + b)v = aCv + bv - a\mu v - bv$
$= aCv - a\mu v = a(Cv - \mu v) = aw; \tag{5}$</p>
<p>for $a \ne 0$, (5) indicates that $v$ is a generalized eigenvector corresponding to eigenvalue $a\mu + b$ of $aC + bI$; these considerations lead to the conclusion that the entire invariant subspace structure of $C$ is preserved under $C \mapsto aC + bI$, provided that $a \ne 0$; thus the multiplicities of the transformed eigenvalues $a\mu + b$ are the same as those of the original $\mu$.</p>
<p>We apply the above remarks to the matrix $B$; it follows that the eigenvalues of $I + tB$ are precisely the numbers $1 + t\lambda_i$; thus</p>
<p>$\det(I + tB) = \prod_1^n (1 + t\lambda_i). \tag{6}$</p>
<p>The product on the right-hand side of this equation may be expanded in terms of the <a href="http://en.wikipedia.org/wiki/Elementary_symmetric_polynomial" rel="nofollow noreferrer">elementary symmetric polynomials</a> $\sigma_k(t\lambda_1, t\lambda_2, \ldots, t\lambda_n)$ in the $n$ quantities $t\lambda_i$, yielding</p>
<p>$\det(I + tB) = \prod_1^n (1 + t\lambda_i) = \sum_{k=0}^n \sigma_k(t\lambda_1, t\lambda_2, \ldots, t\lambda_n); \tag{7}$</p>
<p>a detailed proof of (7) may be found in <a href="https://math.stackexchange.com/questions/1231781/determine-the-coefficient-of-polynomial-deti-xa">my answer to this question</a>. Each $\sigma_k(t\lambda_1, t\lambda_2, \ldots, t\lambda_n)$ is in fact a homogeneous polynomial of degree $k$ in the $\lambda_i$; thus (7) becomes</p>
<p>$\det(I + tB) = \prod_1^n (1 + t\lambda_i) = \sum_{k=0}^n t^k \sigma_k(\lambda_1, \lambda_2, \ldots, \lambda_n)$
$= 1 + t\sigma_1(\lambda_1, \lambda_2, \ldots, \lambda_n) + t^2\sigma_2(\lambda_1, \lambda_2, \ldots, \lambda_n) + \ldots + t^n\sigma_n(\lambda_1, \lambda_2, \ldots, \lambda_n); \tag{8}$</p>
<p>it follows immediately from (8) that</p>
<p>$\dfrac{d}{dt}\det(I + tB) \Big|_{t = 0} = \sigma_1(\lambda_1, \lambda_2, \ldots, \lambda_n); \tag{9}$</p>
<p>since</p>
<p>$\sigma_1(\lambda_1, \lambda_2, \ldots, \lambda_n) = \sum_1^n \lambda_i = \text{Tr}(B), \tag{10}$</p>
<p>we may combine (9) and (10) to arrive at</p>
<p>$\dfrac{d}{dt}\det(I + tB) \Big|_{t = 0} = \text{Tr}(B), \tag{11}$</p>
<p>the result desired (by Yours Truly in any event). <strong><em>QED!</em></strong></p>
<p><strong><em>Note:</em></strong> One reason I find this problem so engaging is that I think it may very well lead to nice proofs of things like <a href="http://en.wikipedia.org/wiki/Liouville%27s_formula" rel="nofollow noreferrer">Liouville's formula</a> and so forth; I have always found the usual demonstrations (see the wikipedia page I just linked) rather cumbersome. <strong><em>End of Note.</em></strong></p>
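<p>As a sanity check on the identity $\frac{d}{dt}\det(I + tB)\big|_{t=0} = \text{Tr}(B)$, here is a small SymPy sketch; the test matrix is arbitrary, chosen only for illustration (SymPy assumed available):</p>

```python
import sympy as sp

t = sp.symbols('t')
# An arbitrary test matrix, chosen only for illustration
B = sp.Matrix([[2, 1, 0], [3, -1, 4], [0, 5, 6]])

# Differentiate det(I + tB) with respect to t and evaluate at t = 0
lhs = sp.diff((sp.eye(3) + t * B).det(), t).subs(t, 0)
print(lhs, sp.trace(B))   # both equal Tr(B) = 7
```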
|
3,542,485 |
<p>Suppose my birthday fell on a Sunday. What is the maximum number of years I may have to wait till my birthday falls on a Sunday again?</p>
<p>My intuition says 11 years:</p>
<ul>
<li><p>At the earliest, the next Sunday is in 6 years time. Reason: before I can encounter another Sunday, there must be at least one leap year. (7 - 1 = 6).</p></li>
<li><p>Suppose by bad luck, 6 years later is a leap year (my birthday would skip Sunday and fall on a Monday). From this point, before I can encounter the next Sunday, there must be another leap year. There are two leap years involved, so the next Sunday is in 5 years time. (7 - 1 - 1 = 5)</p></li>
<li>6 + 5 = 11</li>
</ul>
<p>Is 11 the correct answer? If yes, I would appreciate an explanation more rigorous than the above. If no, please teach.</p>
<p>I am also looking for exceptional cases where the above reasoning could be faulty.</p>
|
Geoffrey Trang
| 684,071 |
<p>Assuming that one is not born on February 29, then the following possibilities can occur:</p>
<ul>
<li>The day of the week repeats after 5 years. For example, March 1, 2015, and March 1, 2020, are both Sundays.</li>
<li>The day of the week repeats after 6 years. For example, April 1, 2012, and April 1, 2018, are both Sundays.</li>
<li>The day of the week repeats after 7 years. This case can only occur for dates from March 1 in a year ending with "96" to February 28 the following year ending with "97". For example, March 1, 1896, and March 1, 1903, are both Sundays, because 1900 was not a leap year.</li>
<li>The day of the week repeats after 11 years. For example, October 1, 2006, and October 1, 2017, are both Sundays.</li>
<li>The day of the week repeats after 12 years. This case can only occur for dates within the last decade of a century. For example, October 1, 2090, and October 1, 2102, are both Sundays, with the skipped leap year here being 2100.</li>
</ul>
<p>Hence, the maximum is 12 years (for dates other than February 29). Of course, one could replace Sunday with any other day of the week and the same possible gaps between occurrences would still occur.</p>
<p>For February 29, the day of the week can only repeat after 12, 28, or 40 years. The 12-year gap can only occur for leap years ending with "92" or "96", while the 40-year gap can only occur for leap years ending with "72", "76", "80", "84", or "88".</p>
<p>Hence, the maximum for February 29 is 40 years. For example, February 29, 2088, and February 29, 2128, are both Sundays, with the skipped leap year here again being 2100.</p>
<p>By the way, it turns out that the 40-year gap can occur for leap years with any dominical letter:</p>
<ul>
<li>AG: 1888 and 1928</li>
<li>BA: 1684 and 1724, 1780 and 1820, or 1876 and 1916</li>
<li>CB: 1672 and 1712</li>
<li>DC: 1688 and 1728, 1784 and 1824, or 1880 and 1920</li>
<li>ED: 1676 and 1716, or 1772 and 1812</li>
<li>FE: 1788 and 1828, or 1884 and 1924</li>
<li>GF: 1680 and 1720, 1776 and 1816, or 1872 and 1912</li>
</ul>
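<p>The gaps listed above are easy to verify with Python's <code>datetime</code> module (which uses the proleptic Gregorian calendar); the helper below is mine, not part of the original answer:</p>

```python
from datetime import date

def next_same_weekday_gap(year, month, day):
    """Years until the same calendar date next falls on the same weekday."""
    start = date(year, month, day)
    y = year + 1
    while True:
        try:
            if date(y, month, day).weekday() == start.weekday():
                return y - year
        except ValueError:
            pass  # the date doesn't exist this year (February 29)
        y += 1

print(next_same_weekday_gap(2006, 10, 1))  # 11 (Oct 1, 2006 -> Oct 1, 2017)
print(next_same_weekday_gap(2090, 10, 1))  # 12 (across the skipped leap year 2100)
print(next_same_weekday_gap(2088, 2, 29))  # 40 (Feb 29, 2088 -> Feb 29, 2128)
```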
|
1,489,362 |
<p>$$\lim_{x\to\infty} \left(\frac{3x^2}{\sqrt{4x^2+x+1}+\sqrt{x^2+x+1}}-x\right)$$</p>
<p>I need help in finding this limit.</p>
|
egreg
| 62,967 |
<p>Rewrite it in the form
$$
\lim_{x\to\infty} \biggl(
\frac{3x^2\bigl(\sqrt{4x^2+x+1}-\sqrt{x^2+x+1}\,\bigr)}{3x^2}-x
\biggr)
$$
and simplify $3x^2$. Now change $x=1/t$ so you get
$$
\lim_{t\to0^+}
\frac{\sqrt{4+t+t^2}-\sqrt{1+t+t^2}-1}{t}
$$
which is the derivative at $0$ of the function
$$
f(t)=\sqrt{4+t+t^2}-\sqrt{1+t+t^2}
$$
because $f(0)=1$, so
$$
f'(t)=\frac{2t+1}{2\sqrt{4+t+t^2}}-\frac{2t+1}{2\sqrt{1+t+t^2}}
$$
and you just need to compute $f'(0)$.</p>
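<p>A quick symbolic check of the limit's value (SymPy assumed available):</p>

```python
import sympy as sp

x = sp.symbols('x', positive=True)
expr = 3*x**2 / (sp.sqrt(4*x**2 + x + 1) + sp.sqrt(x**2 + x + 1)) - x
print(sp.limit(expr, x, sp.oo))   # -1/4
```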
|
1,489,362 |
<p>$$\lim_{x\to\infty} \left(\frac{3x^2}{\sqrt{4x^2+x+1}+\sqrt{x^2+x+1}}-x\right)$$</p>
<p>I need help in finding this limit.</p>
|
mathlove
| 78,967 |
<p>Multiplying the first term by $$\frac{\sqrt{4x^2+x+1}-\sqrt{x^2+x+1}}{\sqrt{4x^2+x+1}-\sqrt{x^2+x+1}}$$gives$$\begin{align}&\lim_{x\to \infty}\left(\frac{3x^2}{\sqrt{4x^2+x+1}+\sqrt{x^2+x+1}}-x\right)\\\\&=\lim_{x\to\infty}\left(\sqrt{4x^2+x+1}-\sqrt{x^2+x+1}-x\right)\\\\&=\lim_{x\to\infty}x\left(\sqrt{4+\frac 1x+\frac{1}{x^2}}-\sqrt{1+\frac 1x+\frac{1}{x^2}}-1\right)\\\\&=\lim_{t\to 0}\frac{\sqrt{4+t+t^2}-\sqrt{1+t+t^2}-1}{t}\qquad (\text{set $t=\frac 1x$})\\\\&=\lim_{t\to 0}\left(\frac{1+2t}{2\sqrt{4+t+t^2}}-\frac{1+2t}{2\sqrt{1+t+t^2}}\right)\quad(\text{apply L'Hôpital's rule})\\\\&=\color{red}{-\frac 14}\end{align}$$</p>
|
4,361,626 |
<p><span class="math-container">$4$</span> balls are randomly distributed into <span class="math-container">$3$</span> cells (<span class="math-container">$3^4=81$</span> possibilities of equal probability).</p>
<p>What is the probability that there is a cell that contains exactly <span class="math-container">$2$</span> balls?</p>
<p>The correct answer is <span class="math-container">$\frac{2}{3}$</span>, but I don't know where I was mistaken.</p>
<p>Here was my idea:</p>
<p>Let's define: <span class="math-container">$\forall _{i=1,2,3}:A_i$</span> = The event that cell #<span class="math-container">$i$</span> contains exactly <span class="math-container">$2$</span> balls.</p>
<p>Then, according to the <em>Inclusion–exclusion principle</em>, the answer should be:</p>
<p><span class="math-container">$P_{solution} = P(A_1 \cup A_2 \cup A_3) = P(A_1)+P(A_2)+P(A_3)-P(A_1\cap A_2)-P(A_1\cap A_3)-P(A_2\cap A_3)+P(A_1\cap A_2 \cap A_3)$</span></p>
<p>Where:
<span class="math-container">$$
\forall _{i=1,2,3}: \quad P(A_i)=\frac{{4 \choose 2}*2^2}{3^4}=\frac{8}{27}
$$</span>
<span class="math-container">$$
\forall_{i \neq j}: \quad P(A_i \cap A_j)= \frac{{4 \choose 2}*2}{3^4}=\frac{4}{27}
$$</span>
<span class="math-container">$$
P(A_1\cap A_2 \cap A_3)=0
$$</span></p>
<p>and so:</p>
<p><span class="math-container">$$P_{solution}=3*\frac{8}{27}-3*\frac{4}{27} = \frac{4}{9}$$</span></p>
<p>I can see that IF my calculation of <span class="math-container">$P(A_i \cap A_j)$</span> were <span class="math-container">$\frac{{4 \choose 2}}{3^4}$</span> (without multiplying by <span class="math-container">$2$</span>), then the result would come out right, but I can't see why: it seems to me that I have to multiply by <span class="math-container">$2$</span>. Suppose we look at cell #1 and cell #2: I need to choose <span class="math-container">$2$</span> balls out of <span class="math-container">$4$</span>, that's <span class="math-container">$4 \choose 2$</span>. Let's say I chose the balls <span class="math-container">$\{1,3\}$</span> and <span class="math-container">$\{2,4\}$</span>; then I must decide which cell will get the <span class="math-container">$\{1,3\}$</span> set and which will get the <span class="math-container">$\{2,4\}$</span> set. That's <span class="math-container">$2$</span> options, so we multiply by <span class="math-container">$2$</span>.</p>
<p>Any idea? Where was I mistaken? Can you show me your solutions?</p>
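<p>Since the sample space is tiny, brute-force enumeration settles which count is right (a quick check, not part of the question):</p>

```python
from fractions import Fraction
from itertools import product

# All 3^4 = 81 equally likely assignments of the 4 balls to 3 cells
outcomes = list(product(range(3), repeat=4))
favorable = sum(1 for o in outcomes
                if any(o.count(cell) == 2 for cell in range(3)))
print(Fraction(favorable, len(outcomes)))   # 2/3
```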
|
Lubin
| 17,760 |
<p>It occurs to me that you may be looking at the offending equation
<span class="math-container">$$
(J+a)(J+b)=(J+ab)
$$</span>
and making too much of it. In particular, how is “<span class="math-container">$(J+a)(J+b)$</span>” to be understood? To multiply these two cosets, you don’t take all possible products, one factor from <span class="math-container">$(J+a)$</span>, the other from <span class="math-container">$(J+b)$</span>; rather, you <em>choose</em> one element, say <span class="math-container">$a_0\in(J+a)$</span>, and one, say <span class="math-container">$b_0\in(J+b)$</span>, and see what coset <span class="math-container">$a_0b_0$</span> belongs to. That will be your product coset.
But notice: I have italicized the word “choose”, so you have to verify that the resulting coset does not depend on your choice.</p>
<p>Accordingly, choose other <span class="math-container">$a_1\in(J+a)$</span> and <span class="math-container">$b_1\in(J+b)$</span>, so that <span class="math-container">$a_1-a_0\in J$</span> and <span class="math-container">$b_1-b_0\in J$</span>. Then
<span class="math-container">$$
a_1b_1-a_0b_0=a_1(b_1-b_0)+(a_1-a_0)b_0\in J\,,
$$</span>
because the parenthesized quantities are in <span class="math-container">$J$</span>. Thus <span class="math-container">$a_0b_0$</span> and <span class="math-container">$a_1b_1$</span> are in the same <span class="math-container">$J$</span>-coset, and your choice didn’t matter. In particular, the product coset is <span class="math-container">$(J+ab)$</span>.</p>
<p>(Note: because <span class="math-container">$p$</span> is irreducible, it happens that the set of all possible products <em>is</em> a coset, equal to the product coset. Not so if <span class="math-container">$p$</span> is reducible.)</p>
|
1,642,427 |
<p>If the stem of a mushroom is modeled as a right circular cylinder with diameter $1$, height $2$, its cap modeled as a hemisphere of radius $a$ the mushroom has axial symmetry, is of uniform density,and its center of mass lies at center of plane where the cap and stem join, then find $a$.</p>
<p>I really need help.</p>
|
user311100
| 311,100 |
<p>The arc length should also be $\pi$. The curve "converges" to a straight line pointwise, but arc length is not continuous under this kind of convergence, so you cannot approximate the arc length like that; the situation is similar to the fallacious proofs that $\pi=4$. In fact, at stage $n$ there are $2^n$ semicircles, each of diameter $\frac{2}{2^n}$ and hence of arc length $\frac{\pi}{2}\cdot\frac{2}{2^n}=\frac{\pi}{2^n}$, so the total arc length is $2^n\cdot\frac{\pi}{2^n}=\pi$ at every stage.</p>
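<p>Assuming the usual version of this construction, where the semicircles sit over a segment of length $2$ (so the stage-$n$ semicircles have diameter $2/2^n$; the exact diameter depends on the figure in the original question), the stage-by-stage arc length can be checked numerically:</p>

```python
import math

# Stage n: 2^n semicircles spanning a segment of length 2, so each
# semicircle has diameter 2/2^n and arc length (pi/2) * (2/2^n)
for n in range(6):
    length = 2**n * (math.pi / 2) * (2 / 2**n)
    print(n, length)   # the total arc length is pi at every stage
```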
|
510,488 |
<p>Using the fact that $x^2 + 2xy + y^2 = (x + y)^2 \ge 0$, show that the assumption $x^2 + xy + y^2 < 0$ leads to a contradiction...
So do I start off with...</p>
<p>"Assume that $x^2 + xy + y^2 <0$, then blah blah blah"?</p>
<p>It seems true...because then I go $(x^2 + 2xy + y^2) - (x^2 + xy + y^2) \ge 0$.
It becomes $2xy - xy \ge 0$, then $xy \ge 0$.
How is this a contradiction?
I think I'm missing some key point. </p>
|
peterwhy
| 89,922 |
<p>By completing square,</p>
<p>$$x^2+xy+y^2 = x^2+2x\frac{y}{2}+\frac{y^2}{4} + \frac{3y^2}{4} = \left(x+\frac{y}{2}\right)^2+\frac{3y^2}{4}\ge 0$$</p>
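<p>A quick symbolic check that the completed-square form agrees with the original expression (SymPy assumed available):</p>

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
lhs = x**2 + x*y + y**2
completed = (x + y/2)**2 + sp.Rational(3, 4) * y**2
print(sp.expand(lhs - completed))   # 0, so the two forms agree identically
```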
|
510,488 |
<p>Using the fact that $x^2 + 2xy + y^2 = (x + y)^2 \ge 0$, show that the assumption $x^2 + xy + y^2 < 0$ leads to a contradiction...
So do I start off with...</p>
<p>"Assume that $x^2 + xy + y^2 <0$, then blah blah blah"?</p>
<p>It seems true...because then I go $(x^2 + 2xy + y^2) - (x^2 + xy + y^2) \ge 0$.
It becomes $2xy - xy \ge 0$, then $xy \ge 0$.
How is this a contradiction?
I think I'm missing some key point. </p>
|
Abel
| 71,157 |
<p>Another way to see this is as follows:</p>
<p>Suppose $x^2+xy+y^2 <0$, then $xy<-x^2-y^2< 0$, hence $2xy<xy<-x^2-y^2$ and thus $x^2+2xy+y^2<0$, which is absurd.</p>
|
510,488 |
<p>Using the fact that $x^2 + 2xy + y^2 = (x + y)^2 \ge 0$, show that the assumption $x^2 + xy + y^2 < 0$ leads to a contradiction...
So do I start off with...</p>
<p>"Assume that $x^2 + xy + y^2 <0$, then blah blah blah"?</p>
<p>It seems true...because then I go $(x^2 + 2xy + y^2) - (x^2 + xy + y^2) \ge 0$.
It becomes $2xy - xy \ge 0$, then $xy \ge 0$.
How is this a contradiction?
I think I'm missing some key point. </p>
|
Clayton
| 43,239 |
<p>Assume $x^2+xy+y^2<0$. Adding and subtracting $xy$ on the left-hand side gives $x^2+2xy+y^2-xy=(x+y)^2-xy<0$, and therefore $0\leq(x+y)^2<xy$. Conversely, $x^2+xy+y^2<0$ implies $xy<-(x^2+y^2)\leq0$. Combining these, we have $$xy<0<xy,$$ a clear contradiction.</p>
|
510,488 |
<p>Using the fact that $x^2 + 2xy + y^2 = (x + y)^2 \ge 0$, show that the assumption $x^2 + xy + y^2 < 0$ leads to a contradiction...
So do I start off with...</p>
<p>"Assume that $x^2 + xy + y^2 <0$, then blah blah blah"?</p>
<p>It seems true...because then I go $(x^2 + 2xy + y^2) - (x^2 + xy + y^2) \ge 0$.
It becomes $2xy - xy \ge 0$, then $xy \ge 0$.
How is this a contradiction?
I think I'm missing some key point. </p>
|
Brian M. Scott
| 12,042 |
<p>Suppose that $x^2+xy+y^2<0$; then $x^2+2xy+y^2<xy$, so $(x+y)^2<xy$. Subtracting $3xy$ from both sides of the original inequality, we see that $x^2-2xy+y^2<-3xy$, so $(x-y)^2<-3xy$. Squares are non-negative, so on the one hand $xy>0$, and on the other hand $-3xy>0$ and therefore $xy<0$.</p>
|
46,726 |
<p>In many proofs I see that some variable is "fixed" and/or "arbitrary". Sometimes I see only one of them and I miss a clear guideline for it. Could somebody point me to a reliable source (best a well-known standard book) which explains, when and how to use both in proofs?</p>
<p>EDIT: A little add-on to the question: Take a usual induction on natural numbers and assume that you are teaching it to students in their first semester. How do you explain "fixed" and "arbitrary" in this scenario?</p>
|
Qiaochu Yuan
| 232 |
<p>Both "arbitrary" and "fixed" are just shorthand for a universal quantifier. When I say something like "fix $\epsilon > 0$" it means I am about to prove a statement that is true for all $\epsilon > 0$ (and thereby prove that some function is continuous, for example) but I don't want to actually write out "for all $\epsilon > 0$" in front of every sentence I'm about to write. That's really all there is to it. </p>
|
1,330,376 |
<p>In textbooks and online tutorials I see that the remainder is calculated by using a new unknown variable on the same interval. For example we take the Taylor polynomial $T_n(a)$ but find the remainder $R(x)$ with a new variable $z$ inside it. See <a href="http://www.millersville.edu/~bikenaga/calculus/remainder-term/remainder-term.html" rel="nofollow">this tutorial</a> for an example.</p>
<p>Maybe I'm dense but I haven't seen an explanation for this other variable being used. Or at least not one I can understand. I originally thought the remainder was the difference between the value of $f(a)$ and $T_n(a)$ but it appears instead that it is a function like the derivative that can find the error at any point. But I'm not even sure if that's the case.</p>
<p>What is the purpose of $z$ in that example? And why do we find the remainder at an arbitrary value instead of the value we are estimating? </p>
<p>I'm in Calc II if that's relevant. Thanks.</p>
|
KittyL
| 206,286 |
<p>Consider the easy case where $f(x)=f(c)+f'(z)(x-c)$. If you are in Calc 2, you should have learned linear approximation. The linear approximation of $f(x)$ around point $c$ is $f(x)\approx f(c)+f'(c)(x-c)$. See the following picture:</p>
<p><img src="https://i.stack.imgur.com/G4avH.png" alt="enter image description here"></p>
<p>If you use $x$ in that derivative, you only get an approximation of $f(x)$, but if you use $f'(z)$ where $z$ is some value such that the slope of the tangent line at that point is equal to the slope of the secant line between $c$ and $x$, then you get the exact value of $f(x)$. So that's why we have to use a new variable $z$. Because it depends on your $c$ and $x$. </p>
<p>Edit: This is to answer the question from the comment. The remainder, in this linear case, is the error when you use $f(c)$ to approximate $f(x)$. It is $f(x)-f(c)=f'(z)(x-c)$ for some $z$. You can see from the picture that $z$ exists somewhere between the two points. </p>
<p>In second order case, we write
$$f(x)=f(c)+f'(c)(x-c)+\frac{f''(z)}{2}(x-c)^2$$</p>
<p>We consider this as the linear approximation with remainder term $\frac{f''(z)}{2}(x-c)^2$: $f(x)$ is approximated by $f(c)+f'(c)(x-c)$, and the error is exactly the remainder term. </p>
<p>The following picture shows the remainder (or error) when you approximate $f(x)$ with $0$-th order $f(c)$ or with $1$st order $f(c)+f'(c)(x-c)$:</p>
<p><img src="https://i.stack.imgur.com/0iaKs.png" alt="enter image description here"></p>
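<p>The existence of such a $z$ is just the Mean Value Theorem; here is a tiny numerical illustration with $f = \exp$, $c = 0$, $x = 1$ (my example, not from the answer):</p>

```python
import math

# Mean-value form of the remainder: f(x) = f(c) + f'(z)(x - c)
# for some z strictly between c and x; here f = exp, c = 0, x = 1.
f = math.exp
c, x = 0.0, 1.0
secant_slope = (f(x) - f(c)) / (x - c)   # e - 1
z = math.log(secant_slope)               # solve f'(z) = e^z = secant slope
print(z)                                 # some z strictly between 0 and 1
```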
|
1,630,074 |
<p>My brother needs help coming up with a formula for a problem that I already did but failed to write out the formula for. </p>
<p>The problem is:</p>
<p>Consider a circle with the point (5,4) and a radius of 3.
Determine if a vertical line segment with the points (3,5) being one end of the segment and (3,1) being the other end of the segment, intersects this circle. List the intersecting points if so.</p>
<p>:
Again I need help coming up with a simple formula to determine where the line intersects the circle.</p>
<p>Thanks</p>
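<p>A formula of the requested kind can be sketched directly: plug $x = 3$ into the circle equation $(x-3)^2+... $ wait, into $(x-5)^2+(y-4)^2=9$, solve for $y$, and keep only the roots inside the segment's $y$-range. The helper name below is mine, for illustration:</p>

```python
import math

def vertical_segment_circle_intersections(cx, cy, r, x_line, y_lo, y_hi):
    """Points where the vertical segment x = x_line, y_lo <= y <= y_hi
    meets the circle (x - cx)^2 + (y - cy)^2 = r^2."""
    disc = r**2 - (x_line - cx)**2
    if disc < 0:
        return []                      # the whole line misses the circle
    lo, hi = min(y_lo, y_hi), max(y_lo, y_hi)
    ys = {cy - math.sqrt(disc), cy + math.sqrt(disc)}
    return sorted((x_line, y) for y in ys if lo <= y <= hi)

# Circle centered at (5, 4) with radius 3; segment from (3, 1) to (3, 5)
points = vertical_segment_circle_intersections(5, 4, 3, 3, 1, 5)
print(points)   # one intersection: (3, 4 - sqrt(5))
```

Here the other candidate root $y = 4 + \sqrt{5} \approx 6.24$ lies above the segment's top end $y = 5$, so only one intersection point remains.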
|
Balarka Sen
| 117,002 |
<p>The current answer already explains why the proposed homotopy cannot work. Let me give a geometric interpretation of the two-step homotopy on the linked answer. </p>
<p>Trying to contract $S^\infty$ to $(1, 0, 0,\cdots)$ directly using a straight-line homotopy cannot possibly work: the situation is the same as that of trying to contract $S^1$ to a point of itself inside $\Bbb R^2$. The straight-line homotopy at some point of time will run through the origin, in which case normalizing gives you undefined things. </p>
<p>So the point of the shift map $\sigma : S^\infty \to S^\infty$, $(x_0, x_1, x_2, \cdots) \mapsto (0, x_0, x_1, \cdots)$ is to pull $S^\infty$ up one dimension. <em>Now</em> you can contract the image of $\sigma$ to $(1, 0, 0, \cdots)$, because it lives in codimension one and $(1, 0, 0, \cdots)$ is just some other point outside it. The situation is the same as that of contracting $S^1 \subset \Bbb R^2$ inside $\Bbb R^3$ to a point outside the hyperplane it lives in. This can easily be done using a straight-line homotopy.</p>
<p>Irrelevant to the question, but here's a different way to do it. $S^\infty$ is the same as the colimit $\bigcup_n S^n$ with $S^{i-1} \subset S^i$ being inclusion as an equator. Note that each $S^n$ bounds a disk (i.e., hemisphere) on each side in $S^\infty$. Consider the homotopy which contracts $S^n$ through those. To make this work, one needs a $[1/2^{n+1}, 1/2^n]$ trick so that the composition is continuous. </p>
|
2,006,993 |
<p>Let $(a_{n,k})_{n, k \in \mathbb N} \subset \mathbb C$ be a double sequence satisfying</p>
<p>$$
\sum_{n=0}^\infty \left| \sum_{k=0}^\infty a_{n,k}\right| \lt \infty
$$
and
$$
\sum_{k=0}^\infty \left|a_{n,k}\right| \lt \infty \qquad \forall n\in \mathbb N.
$$
Does this imply that $\sum_{k=0}^\infty \sum_{n=0}^\infty a_{n,k}$ converges with
$$
\sum_{k=0}^\infty \sum_{n=0}^\infty a_{n,k} = \sum_{n=0}^\infty \sum_{k=0}^\infty a_{n,k},
$$
i.e. the order of summation can be interchanged?</p>
<p>In a lecture a proof was given (at least I think so!) for the case that instead of the first condition above the series satisfies
$$
\sum_{n=0}^\infty \sum_{k=0}^\infty |a_{n,k}| \lt \infty.
$$
(Am I right that this last condition itself is enough to justify switching the order because of the Fubini theorem?)</p>
<p>Can the series be rearranged in general or is there a (nasty) counter example?</p>
|
Victor Chen
| 289,695 |
<p>The condition
$$
\sum_{n=0}^\infty \left| \sum_{k=0}^\infty a_{n,k}\right| \lt \infty
$$
is not sufficient, see the counter-example below.</p>
<p>Define a double sequence $\{a_{n,k}\}$ (with the convention that the $n=0$ row and the $k=0$ column are identically zero), where, for $n, k \ge 1$,
$$
a_{n,k} = \cases{\frac{1}{n^2}-1 & if $n=k$, \\ 1 & if $n \ne k, k=1$, \\ 0 & else.}
$$
Then we can easily verify that $$\sum_{n=0}^\infty \left| \sum_{k=0}^\infty a_{n,k}\right| = \sum_{n=2}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} -1 \lt \infty,$$ but $$\sum_{k=0}^\infty \left|\ \sum_{n=0}^\infty a_{n,k}\right| = \infty.$$</p>
|
1,004,056 |
<p>Pick a piece of paper and a pen. Put the pen on a starting point and begin to draw an arbitrary curve, and don't lift your hand until you have returned to the starting point. While drawing, the curve may cross itself at single <em>one-point</em> intersections, but it may not overlap itself along a whole segment. For example the left curve is allowed but the right one is not.</p>
<p><img src="https://i.stack.imgur.com/ItJT2.png" alt="enter image description here"></p>
<p>Now your paper is divided to some areas. We call two areas "neighbors" if their <em>common borderline</em> has more than one point. A "coloring" for the areas of the paper is such that two neighbor areas have different colors. Recall the chess board coloring.</p>
<blockquote>
<p><strong>Question:</strong> Is it true that there is a $2$-coloring (Black and White) for the areas of a paper when we draw an arbitrary curve on it as described above? If yes, why?</p>
</blockquote>
<p><strong>An Example:</strong> Note the coloring in the following shape. You can try more complicated shapes on your own. It seems that this can always be done with $2$ colors.</p>
<p><img src="https://i.stack.imgur.com/fhlXZ.png" alt="enter image description here"></p>
|
Laars Helenius
| 112,790 |
<p>Probably. We can create a graph $G$ from the drawing by assigning each region to be colored a vertex. Then if $u$ and $v$ are vertices of $G$, $(u,v)$ is an edge if and only if the regions represented by $u$ and $v$ are neighbors, i.e. share a common borderline.</p>
<p>This should result in a bipartite graph which is always two colorable, which in turn implies your picture is two colorable.</p>
<p>The only thing you will have to prove is that $G$ is bipartite. But that shouldn't be too difficult.</p>
|
1,004,056 |
<p>Pick a piece of paper and a pen. Put the pen on a starting point and begin to draw an arbitrary curve, and don't lift your hand until you have returned to the starting point. While drawing, the curve may cross itself at single <em>one-point</em> intersections, but it may not overlap itself along a whole segment. For example the left curve is allowed but the right one is not.</p>
<p><img src="https://i.stack.imgur.com/ItJT2.png" alt="enter image description here"></p>
<p>Now your paper is divided to some areas. We call two areas "neighbors" if their <em>common borderline</em> has more than one point. A "coloring" for the areas of the paper is such that two neighbor areas have different colors. Recall the chess board coloring.</p>
<blockquote>
<p><strong>Question:</strong> Is it true that there is a $2$-coloring (Black and White) for the areas of a paper when we draw an arbitrary curve on it as described above? If yes, why?</p>
</blockquote>
<p><strong>An Example:</strong> Note the coloring in the following shape. You can try more complicated shapes on your own. It seems that this can always be done with $2$ colors.</p>
<p><img src="https://i.stack.imgur.com/fhlXZ.png" alt="enter image description here"></p>
|
Community
| -1 |
<p>Colour a point black if the <a href="https://en.wikipedia.org/wiki/Winding_number" rel="nofollow">winding number</a> of the curve around the point is odd.</p>
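<p>This parity rule is easy to experiment with numerically; below is a sketch of a winding-number computation for a closed polyline approximating the curve (my code, just for illustration):</p>

```python
import math

def winding_number(curve, px, py):
    """Total signed angle swept by the closed polyline `curve` around
    the point (px, py), divided by 2*pi. Point must not lie on the curve."""
    total = 0.0
    n = len(curve)
    for i in range(n):
        x0, y0 = curve[i]
        x1, y1 = curve[(i + 1) % n]
        d = math.atan2(y1 - py, x1 - px) - math.atan2(y0 - py, x0 - px)
        # normalize the angle increment to (-pi, pi]
        while d > math.pi:
            d -= 2 * math.pi
        while d <= -math.pi:
            d += 2 * math.pi
        total += d
    return round(total / (2 * math.pi))

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]   # a CCW square
print(winding_number(square, 0, 0) % 2)          # 1 -> colour black
print(winding_number(square, 5, 5) % 2)          # 0 -> colour white
```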
|
1,004,056 |
<p>Pick a piece of paper and a pen. Put the pen on a starting point and begin to draw an arbitrary curve, and don't lift your hand until you have returned to the starting point. While drawing, the curve may cross itself at single <em>one-point</em> intersections, but it may not overlap itself along a whole segment. For example the left curve is allowed but the right one is not.</p>
<p><img src="https://i.stack.imgur.com/ItJT2.png" alt="enter image description here"></p>
<p>Now your paper is divided to some areas. We call two areas "neighbors" if their <em>common borderline</em> has more than one point. A "coloring" for the areas of the paper is such that two neighbor areas have different colors. Recall the chess board coloring.</p>
<blockquote>
<p><strong>Question:</strong> Is it true that there is a $2$-coloring (Black and White) for the areas of a paper when we draw an arbitrary curve on it as described above? If yes, why?</p>
</blockquote>
<p><strong>An Example:</strong> Note the coloring in the following shape. You can try more complicated shapes on your own. It seems that this can always be done with $2$ colors.</p>
<p><img src="https://i.stack.imgur.com/fhlXZ.png" alt="enter image description here"></p>
|
Cyriac Antony
| 120,721 |
<p>I shall try to give an intuitive colouring method that doesn't require much mathematical background.
I thought I clearly understood why this method of colouring works, but now I doubt it.
Anyway, I shall give a colouring method for any such drawing using only 2 colours, without presenting a proof of validity (I don't have one now). Any proof of validity, or a counterexample showing that this doesn't work, is welcome.</p>
<p>I shall try to explain the colouring method with figures. Let us use colours 1 and 2 instead of black and white.</p>
<p>We first give colour 2 to the plane. We place the pen at a point <span class="math-container">$p$</span> on the plane. Without loss of generality, we may assume that we will draw a crossing at the point <span class="math-container">$p$</span>. We keep colour 1 on the right side of the pen and colour 2 on the left side. When we reach a crossing point, we recolour the two 'newborn' faces depending on which side of the pen they appear at the point of crossing (see the figure). At each crossing, we interchange the colours on the left and right sides of the pen. This continues till we finally reach the starting point.</p>
<p><a href="https://i.stack.imgur.com/y7FMN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y7FMN.png" alt="enter image description here"></a></p>
|
33,743 |
<p>I have a lot of sum questions right now ... could someone give me the convergence of, and/or formula for, $\sum_{n=2}^{\infty} \frac{1}{n^k}$ when $k$ is a fixed integer greater than or equal to 2? Thanks!!</p>
<p>P.S. If there's a good way to google or look up answers to these kinds of simple questions ... I'd love to know it...</p>
<p>Edit: Can I solve it by integrating $\frac{1}{x^k}$ ? I can show it converges, but to find the formula? Is my question just the Riemann Zeta function?</p>
<p>(edit)</p>
<p>Thanks guys! This got me the following result:</p>
<p>$\sum_{p} \frac{1}{p} \log \frac{p}{p-1} ~ ~ \leq ~ \zeta(2)$</p>
<p>summing over all primes $p$. (And RHS is Riemann zeta function.)</p>
<p>First, sum over all integers $p$ instead of primes. Then transform the log into $\sum_{m=1}^{\infty} \frac{1}{m p^{m}}$ (<a href="http://en.wikipedia.org/wiki/Natural_logarithm#Derivative.2C_Taylor_series" rel="nofollow">reference: wikipedia</a>. I know.). Now we have (with rearranging):</p>
<p>$\leq ~ \sum_{m=1}^{\infty} \frac{1}{m} \sum_{p=2}^{\infty} \frac{1}{p^{m+1}}$</p>
<p>By the result of this question (Arturo's answer), this inner sum, which equals $\zeta(m+1)-1$, is at most $\frac{1}{m+1-1} = \frac{1}{m}$. So we have</p>
<p>$\leq ~ ~ \sum_{m=1}^{\infty} \frac{1}{m} \frac{1}{m} = \zeta(2)$</p>
<p>I think this is a very pretty little proof. Thanks again, hope you math people enjoyed reading this....</p>
|
N. S.
| 9,176 |
<p>The easiest way to prove that $1+\frac{1}{2^2}+...+\frac{1}{n^2}$ is convergent, is proving by induction that</p>
<p>$$ 1+\frac{1}{2^2}+...+\frac{1}{n^2} \leq 2 -\frac{1}{n} \,.$$</p>
<p>If $k >2$, convergence follows immediately from $\frac{1}{j^k} \leq \frac{1}{j^2}$. </p>
<p>I always find it fascinating that the above inequality is a trivial induction problem, while the weaker inequality $ 1+\frac{1}{2^2}+...+\frac{1}{n^2} \leq 2 \,.$ cannot be proven by induction directly.... </p>
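<p>The induction bound is easy to check numerically for the first thousand terms:</p>

```python
# Numerical check of the bound 1 + 1/2^2 + ... + 1/n^2 <= 2 - 1/n
s = 0.0
for n in range(1, 1001):
    s += 1.0 / (n * n)
    assert s <= 2 - 1.0 / n    # holds at every step (equality at n = 1)
print(s)   # ~ 1.6439, comfortably below 2
```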
|
854,671 |
<p>So I'm a bit confused with calculating a double integral when a circle isn't centered on $(0,0)$. </p>
<p>For example: calculating $\iint(x+4y)\,dx\,dy$ over the region $D: x^2-6x+y^2-4y\le12$.
So I kind of understand how to center the circle and solve this with polar coordinates. Since the circle equation is $(x-3)^2+(y-2)^2=25$, I can translate it to $(u+3)^2+(v+2)^2=25$ and go on from there.</p>
<p>However I would like to know if I could solve this without translating the circle to the origin. I thought I could, so I simply tried solving $\iint(x+4y)\,dx\,dy$ by doing this:
$\int_0^{2\pi}\,d\phi\int_0^5(r\cos\phi + 4r\sin\phi)r\,dr$ but this doesn't work. I'm sure I'm missing something, but why should it be different? the radius is between 0 and 5 in the original circle as well, etc.</p>
<p>So my questions are:</p>
<ol>
<li><p>How can I calculate something like the above integral without translating the circle to the origin? What am I doing wrong?</p></li>
<li><p>I would appreciate a good explanation of what are the steps exactly when translating the circle. I kind of "winged it" with just saying "OK, I have to move the $X$ back by 3, so I'll call it $X+3$, the same with the $Y$ etc. If someone could give a clear breakdown of the steps that would be very nice :)</p></li>
</ol>
<p>Thanks!</p>
|
mesel
| 106,102 |
<p>I think this question has already good answers,</p>
<p>It is enough to show that there exists a month starting on Sunday.</p>
<p>Let $x$ be the starting day of the year. Then the next month's first day is $x+31\equiv x+3$, the following month's first day is $x+3+28\equiv x+3$, and so on.</p>
<p>At the end we have $$x,\;x+3,\;x+3,\;x+6,\;x+1,\;x+4,\;x+6,\;x+2,\;x+5,\;\ldots$$ These offsets hit seven different days, $x,x+1,\ldots,x+6$, so every day of the week occurs as the first day of some month, which concludes the result.</p>
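<p>The listed residues can be generated mechanically; here is a small sketch of mine (assuming a non-leap year) confirming that the first-of-month offsets cover all seven residues mod $7$:</p>

```python
# Offsets (mod 7) of the 1st of each month relative to Jan 1 = x, non-leap year.
lengths = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30]  # Jan..Nov
offsets = [0]                                           # Jan 1
for days in lengths:
    offsets.append((offsets[-1] + days) % 7)            # 1st of the next month

assert set(offsets) == set(range(7))  # every weekday is some month's 1st
```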
|
2,898,889 |
<p>Let $A$ be a real $3 \times 3$ matrix with characteristic polynomial $p(t) = t^3 + 2t^2 - 3t$.</p>
<p>Prove: the matrix $\ A^2 + A - 2I $ is similar to $\ D = \begin{bmatrix} 0 & 0 & 0 \\ 0 &4 & 0 \\ 0 & 0 & - 2 \end{bmatrix} $</p>
<p>$\ p(t) = t^3 +2t^2 -3t = t(t^2 + 2t -3) = t(t-1)(t+3) $ then $\ A $ has 3 eigenvalues $\ 0,1,-3 $ and is similar to $\ M = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ </p>
<p>also, $\ A^2 + A -2I = (A + 2I)(A-I) $</p>
<p>I can see that if I substitute $\ M$, then $\ (M+2I)(M-I) = D$, yet this falls short of a proof.</p>
|
Math Lover
| 348,257 |
<p>Hint: $$\frac{s}{(s+1)^2-2^2} = \frac{s}{(s-1)(s+3)} = \frac{1}{4(s-1)}+\frac{3}{4(s+3)},$$
and $$\mathcal{L}(e^{at}) = \frac{1}{s-a}.$$</p>
<hr>
<p>For the textbook solution, it is useful to write the given fraction as
$$\frac{s}{(s+1)^2-2^2} = \frac{s+1-1}{(s+1)^2-2^2} = \frac{s+1}{(s+1)^2-2^2} - \frac{1}{2}\cdot\frac{2}{(s+1)^2-2^2},$$
and note that
$$\mathcal{L}(e^{-at}\cosh(bt)) = \frac{s+a}{(s+a)^2-b^2},$$
and
$$\mathcal{L}(e^{-at}\sinh(bt)) = \frac{b}{(s+a)^2-b^2}.$$</p>
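<p>As a spot check of the partial-fraction decomposition above (my addition, not part of the hint), exact rational arithmetic at a few sample points:</p>

```python
from fractions import Fraction

def lhs(s):
    # s / ((s+1)^2 - 2^2) = s / ((s-1)(s+3))
    return s / ((s - 1) * (s + 3))

def rhs(s):
    return Fraction(1, 4) / (s - 1) + Fraction(3, 4) / (s + 3)

for s in [Fraction(2), Fraction(5), Fraction(-1, 2), Fraction(10, 3)]:
    assert lhs(s) == rhs(s)
```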
|
2,898,889 |
<p>Let $A$ be a real $3 \times 3$ matrix with characteristic polynomial $p(t) = t^3 + 2t^2 - 3t$.</p>
<p>Prove: the matrix $\ A^2 + A - 2I $ is similar to $\ D = \begin{bmatrix} 0 & 0 & 0 \\ 0 &4 & 0 \\ 0 & 0 & - 2 \end{bmatrix} $</p>
<p>$\ p(t) = t^3 +2t^2 -3t = t(t^2 + 2t -3) = t(t-1)(t+3) $ then $\ A $ has 3 eigenvalues $\ 0,1,-3 $ and is similar to $\ M = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ </p>
<p>also, $\ A^2 + A -2I = (A + 2I)(A-I) $</p>
<p>I can see that if I substitute $\ M$, then $\ (M+2I)(M-I) = D$, yet this falls short of a proof.</p>
|
Mohammad Riazi-Kermani
| 514,496 |
<p>$$\mathcal{L}^{-1} \left\{ \dfrac{s}{(s + 1)^2 - 4} \right\}$$</p>
<p>$$=\mathcal{L}^{-1} \left\{ \dfrac{s+1}{(s + 1)^2 - 4} \right\}-\mathcal{L}^{-1} \left\{ \dfrac{1}{(s + 1)^2 - 4} \right\}$$</p>
<p>$$=e^{-t} \cosh(2t) - \dfrac{1}{2}e^{-t}\sinh(2t)$$</p>
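<p>A numerical cross-check (my own sketch): the hyperbolic form above agrees with the exponential form $\frac14 e^{t} + \frac34 e^{-3t}$ coming from the partial-fraction decomposition of $\frac{s}{(s-1)(s+3)}$:</p>

```python
import math

def via_hyperbolic(t):
    return math.exp(-t) * math.cosh(2 * t) - 0.5 * math.exp(-t) * math.sinh(2 * t)

def via_partial_fractions(t):
    return 0.25 * math.exp(t) + 0.75 * math.exp(-3 * t)

for t in [0.0, 0.1, 0.5, 1.0, 2.0]:
    assert abs(via_hyperbolic(t) - via_partial_fractions(t)) < 1e-9
```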
|
708,633 |
<p>My question is as in the title: is there an example of a (unital but not necessarily commutative) ring $R$ and a left $R$-module $M$ with nonzero submodule $N$, such that $M \simeq M/N$?</p>
<p>What if $M$ and $N$ are finitely-generated? What if $M$ is free? My intuition is that if $N$ is a submodule of $R^n$, then $R^n/N \simeq R^n$ implies $N=0$. It seems like $N\neq 0$ implies $R^n/N$ has nontrivial relations, so $R^n/N$ can't be free.</p>
<p>If $R^n/N \simeq R^n$, we'd have an exact sequence</p>
<p>$0 \rightarrow N \hookrightarrow R^n \twoheadrightarrow R^n/N \simeq R^n \rightarrow 0$</p>
<p>which splits since $R^n$ is free, so $R^n \simeq R^n \oplus N$. Does this imply $N=0$? What if we assume $R$ is commutative, or even local? Maybe Nakayama can come in handy.</p>
<p>I'm interested in noncommutative examples too. Thanks!</p>
|
Ehsaan
| 78,996 |
<p>One can show that Noetherian modules are <strong>Hopfian</strong>, that is, any surjective endomorphism is injective. In particular if $R$ is a left Noetherian ring, then the free left $R$-module $R^n$ is Noetherian too hence Hopfian.</p>
<p><strong>Proposition.</strong> Let $R$ be any ring. Then the following are equivalent.</p>
<ol>
<li>All finitely-generated free left $R$-modules are Hopfian.</li>
<li>For any $n$ and any left $R$-module $L$, $R^n\simeq R^n\oplus L$ as left $R$-modules implies $L=0$.</li>
</ol>
<p>A ring satisfying one (hence both) of these conditions is called <strong>stably finite</strong>, and this is pretty much the condition expected in the OP post. Therefore all Noetherian rings are stably finite.</p>
<p>The proof of this proposition is pretty easy. First assume every free module $R^n$ is Hopfian. If $R^n\simeq R^n\oplus L$, then we have a surjection $R^n \simeq R^n \oplus L \twoheadrightarrow R^n$ whose kernel is $L$, hence $L=0$ by Hopfian-ness. Conversely, we must show $R^n$ is Hopfian: let $\varphi: R^n\rightarrow R^n$ be a surjective endomorphism. Then $R^n/\ker(\varphi)\simeq R^n$ splits since $R^n$ is free, so $R^n \simeq R^n \oplus \ker(\varphi)$, thus $\ker(\varphi)=0$ by assumption.</p>
|
708,633 |
<p>My question is as in the title: is there an example of a (unital but not necessarily commutative) ring $R$ and a left $R$-module $M$ with nonzero submodule $N$, such that $M \simeq M/N$?</p>
<p>What if $M$ and $N$ are finitely-generated? What if $M$ is free? My intuition is that if $N$ is a submodule of $R^n$, then $R^n/N \simeq R^n$ implies $N=0$. It seems like $N\neq 0$ implies $R^n/N$ has nontrivial relations, so $R^n/N$ can't be free.</p>
<p>If $R^n/N \simeq R^n$, we'd have an exact sequence</p>
<p>$0 \rightarrow N \hookrightarrow R^n \twoheadrightarrow R^n/N \simeq R^n \rightarrow 0$</p>
<p>which splits since $R^n$ is free, so $R^n \simeq R^n \oplus N$. Does this imply $N=0$? What if we assume $R$ is commutative, or even local? Maybe Nakayama can come in handy.</p>
<p>I'm interested in noncommutative examples too. Thanks!</p>
|
Slade
| 33,433 |
<p>If $R$ is a commutative ring, $M,N$ are finitely generated free $R$-modules of the same rank, and $\varphi: M\to N$ is surjective, then $\varphi$ is injective.</p>
<p>Note that the Noetherian hypothesis is unnecessary! We cannot use Nakayama, but we can use the same <em>proof technique</em> as Nakayama:</p>
<p>We represent $\varphi$ by a square matrix $A$ with entries in $R$. Since $\varphi$ is surjective, there is a square matrix $B$ with $A\cdot B=I$. Taking determinants, we see that $(\operatorname{det} A)(\operatorname{det} B) = 1$, so $\operatorname{det} A$ is invertible. It follows that $A$ has a two-sided inverse, namely $(\operatorname{det} A)^{-1} \cdot A^{\operatorname{adj}}$, therefore $\varphi$ is invertible as well.</p>
<p>So the answer to your question is negative for all commutative rings.</p>
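<p>To make the adjugate step concrete, here is a tiny sketch of mine for the $2\times 2$ case over $\mathbb{Z}$, where a matrix with unit determinant is inverted by its adjugate:</p>

```python
# For A = [[a, b], [c, d]] over a commutative ring, adj(A) = [[d, -b], [-c, a]]
# and A * adj(A) = adj(A) * A = det(A) * I; if det(A) is a unit, A is invertible.
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def adj2(A):
    return [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]

def mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[2, 3], [1, 2]]  # det = 1, a unit in Z
assert det2(A) == 1
assert mul2(A, adj2(A)) == [[1, 0], [0, 1]]
```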
|
851,959 |
<p>In Halmos's Naive Set Theory, regarding well-ordered sets, it states that if a collection <span class="math-container">$\mathbb{C}$</span> of well-ordered sets is a chain w.r.t. continuation, then the union of these sets is a well-ordered set. However, I cannot see why it is well-ordered. That is, I cannot see why each of its subsets has a smallest element.</p>
|
Mauro ALLEGRANZA
| 108,274 |
<p>You have to re-read the whole passage from page 67 on:</p>
<blockquote>
<p>We shall say that a well ordered set $A$ is a <em>continuation</em> of a well ordered set $B$, if, in the first place, $B$ is a subset of $A$, if, in fact, $B$ is an initial segment of $A$, and if, finally, the ordering of the elements in $B$ is the same as their ordering in $A$. </p>
<p>Thus if $X$ is a well ordered set and if $a$ and $b$ are elements of $X$ with $b < a$, then $s(a)$ is a continuation of $s(b)$, and, of course, $X$ is a continuation of both $s(a)$ and $s(b)$.</p>
<p>If $\mathcal C$ is an arbitrary collection of initial segments of a well ordered set,
then $\mathcal C$ is a <em>chain</em> [see page 54 : "a totally ordered set"] with respect to continuation; this means that $\mathcal C$ is a collection of well ordered sets with the property that of any two distinct members of the collection one is a continuation of the other. </p>
<p>A sort of converse of this comment is also true and is frequently useful. If a collection $\mathcal C$ of well ordered sets is a chain with respect to continuation, and if $U$ is the union of the sets of $\mathcal C$, then there is a unique well ordering of $U$ such that $U$ is a continuation of each set (distinct from $U$ itself) in the collection $\mathcal C$.</p>
<p>Roughly speaking, the union of a chain of well ordered sets is well ordered. This abbreviated formulation is dangerous because it does not explain that "chain" is meant with respect to continuation. If the ordering implied by the word "chain" is taken to be simply order-preserving inclusion, then the conclusion is not valid. </p>
</blockquote>
<p>The relevant fact is : the collection $\mathcal C$ of well-ordered set is a <em>chain w.r.t continuation</em>.</p>
<p>A collection $\mathcal C$ is a <em>chain</em> when, for all $A,B \in \mathcal C$ : $A \subseteq B$ or $B \subseteq A$.</p>
<p>If a collection $\mathcal C$ is a <em>chain w.r.t continuation</em>, it has "something more" : in addition to the property (common to all chains) that for all $A,B \in \mathcal C$ : $A \subseteq B$ or $B \subseteq A$, we have also that (supposing : $B \subseteq A$) $B$ is an initial segment of $A$, <strong>and</strong> the ordering of the elements in $B$ is the same as their ordering in $A$. </p>
<p>Thus, when we "merge" all the members of the collection $\mathcal C$ into the "mega-set" $U$, every subset "preserves" its "original" minimal element.</p>
<p><em>I hope it may help ...</em></p>
<p>I'm not able to "manufacture" an example different from the "trivial" one built from $\mathbb N$.</p>
<p>If you consider a collection $\mathcal C = \{ X_n \}$ where all $X_i$ form a <em>chain w.r.t continuation</em>, we have that $X_n = \{ 0,1,2, \ldots n \} = n+1$.</p>
<p>Thus the union $U$ of the collection is $\omega$ itself. </p>
|
2,624,272 |
<p>It is well known that, in order to track constant reference values with zero asymptotic error, in the presence of uncertainty, an integrator is required in the open loop (either by the plant or the controller itself).</p>
<p>Let's assume, for simplicity, that the plant $P$ has no integral component.</p>
<p>In order to track constants, $r(t) = 1,$ one integrator in the controller is needed. Tracking ramps, $r(t) = t$, needs two integrators, while tracking parabolas, $r(t) = t^2$ needs 3 integrators, and so on.</p>
<p>So, generally speaking, in order to track $r(t) = t^n$ with zero asymptotic error for $P$, you need $n + 1$ integrators in the controller.</p>
<p>However, we also know that the exponential function grows faster than any polynomial. So what about references like $r(t) = \exp(t)$? Wouldn't it then require an <em>infinite</em> number of integrators to track such a reference with zero asymptotic error? Is that even possible at all?</p>
|
JMJ
| 295,405 |
<p>The exponential function increases very, <em>very</em> rapidly, so in practice you never need to track an exponential reference. If you tried, you would quickly hit the saturation limits of your actuators.</p>
<p>The basic reason you can use the trick of adding a pole at the origin (pure integrator) to deal with polynomial references is that feeding forward $t^n$ is the same as using the transfer function $\frac{H(s)}{s^n}$, where $H(s)$ is the (closed-loop) transfer function for a step command. There's nothing deep --just the Laplace transform of the reference. If my reference generally has the Laplace form $R(s) = F(s)R_{step}(s)$ then I can always treat the input like a step function for the transfer function $F(s)H(s)$. Using this logic you need to use the correct Laplace transform for $e^{ax}$, which is $\frac{1}{s-a}$. Thus you would add a single pole at $a$ rather than some number of poles at zero. </p>
|
3,048,063 |
<p>I have to prove this proposition by mathematical induction:</p>
<p><span class="math-container">$$\left(x^n+1\right)<\left(x+1\right)^n \quad \forall n\geq 2 \quad \text{and}\quad x>0,\,\, n \in \mathbb{N}$$</span></p>
<p>I started the proof with <span class="math-container">$n=2$</span>:</p>
<p><span class="math-container">$\left(x^{2}+1\right)<\left(x+1\right)^{2}$</span></p>
<p><span class="math-container">$x^{2}+1<x^{2}+2x+1$</span></p>
<p>We see that:</p>
<p><span class="math-container">$x^{2}+1-x^{2}-1<2x$</span></p>
<p><span class="math-container">$0<2x$</span></p>
<p>Then</p>
<p><span class="math-container">$x>0$</span></p>
<p>So the statement holds for <span class="math-container">$n=2$</span>.</p>
<p>Now for <span class="math-container">$\quad n=k \quad$</span> (Hypothesis)</p>
<p><span class="math-container">$\left(x^{k}+1\right)<\left(x+1\right)^{k}$</span></p>
<p>We have</p>
<p><span class="math-container">$\displaystyle x^{k}<\left(x+1\right)^{k}-1\ldots \quad (1)$</span></p>
<p>Then, we must prove for <span class="math-container">$\quad n= k+1 \quad$</span> (Thesis):</p>
<p><span class="math-container">$x^{k+1}+1<\left(x+1\right)^{k+1}$</span></p>
<p>We rewrite the above expression as:</p>
<p><span class="math-container">$x^{k+1}<\left(x+1\right)^{k+1}-1\ldots \quad (2)$</span></p>
<p>According to the steps of mathematical induction, the next step would be to use the hypothesis <span class="math-container">$(1)$</span> to prove the thesis <span class="math-container">$(2)$</span>. This is where I hesitate about whether what I am about to write is correct:</p>
<p><strong>First way:</strong></p>
<p>We multiply hypothesis <span class="math-container">$(1)$</span> by <span class="math-container">$\left(x+1\right)$</span> and we have:</p>
<p><span class="math-container">$x^{k}\left(x+1\right)<\left[\left(x+1\right)^{k}-1\right]\left(x+1\right)$</span></p>
<p><span class="math-container">$x^{k}\left(x+1\right)<\left(x+1\right)^{k+1}-\left(x+1\right)$</span></p>
<p>Dividing the last expression by <span class="math-container">$\left(x+1\right)$</span>, we recover expression <span class="math-container">$(1)$</span>:</p>
<p><span class="math-container">$\displaystyle \frac{x^{k}\left(x+1\right)<\left(x+1\right)^{k+1}-\left(x+1\right)}{\left(x+1\right)}$</span></p>
<p><span class="math-container">$x^{k}<\left(x+1\right)^{k}-1$</span></p>
<p><strong>Second way:</strong></p>
<p>If we multiply <span class="math-container">$(1)$</span> by <span class="math-container">$x$</span> we have:</p>
<p><span class="math-container">$xx^{k}<x\left[\left(x+1\right)^{k}-1\right]$</span></p>
<p><span class="math-container">$x^{k+1}<x\left(x+1\right)^{k}-x$</span></p>
<p>And if we again divide the last expression by <span class="math-container">$x$</span>, we arrive at the same result:</p>
<p><span class="math-container">$\displaystyle \frac{x^{k+1}<x\left(x+1\right)^{k}-x}{x}$</span></p>
<p><span class="math-container">$x^{k}<\left(x+1\right)^{k}-1$</span></p>
<p>I cannot find another way to complete this proof. Another way to solve the problem is using Newton's binomial theorem, but the point of the exercise is the technique of mathematical induction. If someone can help me, I will be very grateful!
Thanks
-Víctor Hugo-</p>
|
Sri-Amirthan Theivendran
| 302,692 |
<p>Suppose that <span class="math-container">$(1+x)^n>1+x^n$</span> for some <span class="math-container">$n\ge 2$</span>. Then
<span class="math-container">$$
(1+x)^{n+1}=(1+x)^n(1+x)>(1+x^n)(1+x)=1+x+x^n+x^{n+1}>1+x^{n+1}
$$</span>
since <span class="math-container">$x>0$</span>; in the first inequality we used the induction hypothesis.</p>
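<p>A quick spot check of the statement (my addition), using exact rationals so no rounding is involved:</p>

```python
from fractions import Fraction

# Spot-check (1 + x)^n > 1 + x^n for a few x > 0 and all 2 <= n < 12.
for x in [Fraction(1, 3), Fraction(1), Fraction(7, 2)]:
    for n in range(2, 12):
        assert (1 + x) ** n > 1 + x ** n
```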
|
1,874,159 |
<blockquote>
<p>Given real numbers $a_0, a_1, ..., a_n$ such that $\dfrac {a_0}{1} + \dfrac {a_1}{2} + \cdots + \dfrac {a_n}{n+1}=0,$ prove that $a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n=0$ has at least one real solution.</p>
</blockquote>
<p>My solution:</p>
<hr>
<p>Let $$f(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$$</p>
<p>$$\int f(x) = \dfrac {a_0}{1} x + \dfrac {a_1}{2}x^2 + \cdots + \dfrac {a_n}{n+1} x^{n+1} + C$$</p>
<p>$$\int_0^1 f(x) = \left[ \dfrac {a_0}{1} + \dfrac {a_1}{2} + \cdots + \dfrac {a_n}{n+1} \right]-0$$</p>
<p>$$\int_0^1 f(x) = 0$$</p>
<p>Since $f$ is continuous, by the area interpretation of integration, it must have at least one zero.</p>
<hr>
<p>My question is, is this rigorous enough? Do I need to prove the last statement, perhaps by contradiction using Riemann sums? Is this a theorem I can/should quote?</p>
|
Aweygan
| 234,668 |
<p>Your proof looks fine. If you wanted to expand, you could add the following:</p>
<blockquote>
<p>Suppose $f(x)>0$ for all $x>0$. Then we must have
$$\int_0^1f(x)\ dx>0,$$
But we have already shown that $\int_0^1f(x)\ dx=0$, a contradiction.</p>
<p>If we assume $f(x)<0$ for all $x>0$, we arrive at a similar contradiction. </p>
</blockquote>
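<p>As an illustration (my own, with a made-up coefficient vector satisfying the constraint), both facts used above can be checked with exact rational arithmetic:</p>

```python
from fractions import Fraction

# a0/1 + a1/2 + a2/3 = 1 + 1/2 - 3/2 = 0
a = [Fraction(1), Fraction(1), Fraction(-9, 2)]
assert sum(ai / (i + 1) for i, ai in enumerate(a)) == 0

def f(x):
    return sum(ai * x ** i for i, ai in enumerate(a))

# The exact integral of f over [0, 1] equals the constrained sum, hence 0,
# and f changes sign on [0, 1], so it has a real root there.
assert f(Fraction(0)) > 0 and f(Fraction(1)) < 0
```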
|
402,970 |
<p>I had an exercise in my textbook:</p>
<blockquote>
<p>Find $\frac{\mathrm d}{\mathrm dx}(6)$ by first principles.</p>
</blockquote>
<p>The answer that they gave was as follows:</p>
<p>$$\lim_{h\to 0} \frac{6-6}{h} = 0$$</p>
<p>However, surely that answer would be undefined as we let $h$ tend towards $0$, since for all other limits like that we substitute $0$ in place of $h$? </p>
|
Adriano
| 76,987 |
<p>When evaluating the limit:
$$\lim_{h\to 0} \frac{6-6}{h}$$
notice that (provided that $h \ne 0$), we have:
$$\frac{6-6}{h}=\frac{0}{h}=0$$
Thus, since it is possible to evaluate $\lim_{x\to a}f(x)$ even if $f(a)$ is undefined, we have:
$$\lim_{h\to 0} \frac{6-6}{h} = \lim_{h\to 0} 0 = 0$$</p>
|
236,546 |
<p>$f(x)=x^4-16x^2+4$, the root of $f(x)$ is $a= \sqrt{3} + \sqrt{5}$</p>
<p>Factorise $f(x)$ as a product of irreducible polynomials over $\mathbb{Q}$, over $\mathbb{R}$ and over $\mathbb{C}$.</p>
<p>I am really confused as to how to start.</p>
|
Community
| -1 |
<p>$R=K[X,Y]_{(X,Y)}$ is a regular local ring of dimension $2$ with maximal ideal $M=(X,Y)R$. Then $F\in M-M^2$, so $R/(F)$ is regular local of dimension $1$, hence a DVR.</p>
<p><strong>Edit.</strong> At Makoto Kato's request I'll sketch a proof of the following assertion: if $(R,M,k)$ is regular local and $F\in M-M^2$, then $R^*=R/(F)$ is regular. </p>
<p>We have that $\dim R^*\ge\dim R-1$. On the other side, $\text{edim}(R^*)=\text{edim}(R)-1$, where $\text{edim}(R)$ is the minimal number of generators of $M$, i.e. $\dim_k M/M^2$. This can be proven easily by taking $F_1^*,\dots,F_n^*\in R^*$ a minimal system of generators for $M/(F)$ and showing that $F,F_1,\dots,F_n$ is a minimal system of generators for $M$. Now use the following inequality: $\dim R^*\le \text{edim}(R^*)$. We get $\dim R-1\le \dim R^*\le \text{edim}(R^*)=\text{edim}(R)-1$ and use the regularity of $R$.</p>
|
2,240,756 |
<p>I tried rewriting $(1+x+x^2)^\frac{1}{x}$ as $e^{\frac{1}{x}\ln(1+x+x^2)}$ and then computing the taylor series of $\frac{1}{x}$ and $\ln(1+x+x^2)$ but I'm still not getting the correct answer..</p>
|
Paramanand Singh
| 72,031 |
<p>Note that $$f(x) = (1 + x + x^{2})^{1/x} = \exp\left(\frac{\log(1 + x + x^{2})}{x}\right) = \exp\left(\frac{\log(1 - x^{3}) - \log(1 - x)}{x}\right)$$ so that we can get the series $$\frac{\log(1 - x^{3}) - \log(1 - x)}{x} = \sum_{n = 1}^{\infty}\frac{x^{n - 1} - x^{3n - 1}}{n} = 1 + \frac{x}{2} - \frac{2x^{2}}{3} + \cdots$$ and we can exponentiate this to get $$(1 + x + x^{2})^{1/x} = e\cdot\exp(x/2 - 2x^{2}/3 + \cdots) = e\left(1 + \frac{x}{2} - \frac{13}{24}x^{2} + \cdots\right)$$ Note that the calculation of coefficients in general is complicated with no specific formula.</p>
|
735,563 |
<p>For arbitrary sets C and D, provide a proof that C ∪ (D − /C) = C</p>
<p>"/C" is a set's complement of C</p>
|
gt6989b
| 16,192 |
<p>Imagine that $A$ is diagonalizable, i.e., you can write $A = P^{-1}DP$ for some $P$ and diagonal $D$. Then,
$$
A^2 = \left( P^{-1}DP \right) \left( P^{-1}DP \right)
= \left( P^{-1}D \left(P P^{-1}\right) DP \right)
= \left( P^{-1}D^2 P \right)
$$</p>
<p>Can you take this from here?</p>
<p><strong>EDIT</strong> In response to the comments below, I am getting
$$
P = \pmatrix{1 & -1\\-1 & 2}, P^{-1}= \pmatrix{2 & 1\\1 & 1}, D= \pmatrix{-1 & 0\\0 & 3}
$$
so
$$
A^5 = \pmatrix{2 & 1\\1 & 1} \pmatrix{-1 & 0\\0 & 243} \pmatrix{1 & -1\\-1 & 2}
= \pmatrix{-245 & 488\\-244 & 487}
$$</p>
|
735,563 |
<p>For arbitrary sets C and D, provide a proof that C ∪ (D − /C) = C</p>
<p>"/C" is a set's complement of C</p>
|
kleineg
| 120,144 |
<p>To find the diagonal form of the matrix you would calculated the eigenvalues and eigenvectors. $D$ is a diagonal matrix where the entries are the eigenvalues of $A$, $P$ is a matrix formed by the eigenvectors (each eigenvector is a column corresponding to the column its eigenvalue is in for $D$). </p>
|
4,930 |
<p>This is not an urgent question, but something I've been curious about for quite some time.</p>
<p>Consider a Boolean function in <em>n</em> inputs: the truth table for this function has 2<sup><em>n</em></sup> rows.</p>
<p>There are uses of such functions in, for example, computer graphics: the so-called ROP3s (ternary raster operations) take three inputs: <em>D</em> (destination), <em>S</em> (source) and <em>M</em> (mask). All three are planes of bits, and the result is returned to the destination plane of bits. Now, this is only really applicable to 2-colour (black & white) or 8-colour displays (regarding the red, green, blue planes separately). A given ROP3 is given a code from 0 to 255, a number which represents the pattern of the output bit, down the rows of the table. Similarly, ROP2s have a value from 0 to 15. ROPs may also be given names, especially when the logical connective of which the truth table is an extension is a simple one, such as AND, XOR or SRC (source).</p>
<p>An expression for any truth table (or ROP) may be found in terms of an expressively-complete set of connectives (usually unary or binary, sometimes nullary too). [Well, I suppose this statement is itself a tautology!] For example, the sets {NOT, AND}, {NOT, OR}, {NAND} are each expressively complete.</p>
<p>One commonly used (redundant) expressively complete set is {NOT, AND, OR}. Two particularly common canonical sets of expressions over this set are the conjunctive normal form (CNF) and disjunctive normal form (DNF). Both of these are rather verbose.</p>
<p>There is also a notion of a minimal expression over a set of connectives, defined variously. The count might be of instances of a variable or of connectives. There might be a bound to the depth or breadth of the expression (perhaps).</p>
<p>The Boolean connectives might be extended to the set <strong><em>R</em></strong>[0,1], for fuzzy logic; that is, the connectives are over <strong><em>R</em></strong>[0,1], with the restriction to {0,1} being the usual Boolean function. There are many ways to do this; it is possible to preserve some but not all the usual properties (e.g. associativity, idempotency) of the connectives. [NOT(x) would usually be interpreted as (1−x); AND(x,y) could be (x*y) or (min{x,y}), or in many other ways.]</p>
<p>Such extensions may be used, for example, to give a meaning to a ROP3 as applied to 256-level monochrome images (to combine or 'mix' such images) or to planes of full-colour images. (Necessarily, some truncation or 'quantisation' must take place.)</p>
<p>However, two expressions that have the same function over {0,1} will generally have different functions over <strong><em>R</em></strong>[0,1]. Rather than choosing some arbitrary expression, it would be an advantage to choose some canonical or minimal expression.</p>
<p>How much is known about this field? Are there any good on-line references? I'm particularly interested in definitions of, theorems about, and algorithms for the generation of minimal or canonical expressions.</p>
|
John D. Cook
| 136 |
<p>See <a href="http://en.wikipedia.org/wiki/Karnaugh_map" rel="nofollow">Karnaugh maps</a></p>
|
4,930 |
<p>This is not an urgent question, but something I've been curious about for quite some time.</p>
<p>Consider a Boolean function in <em>n</em> inputs: the truth table for this function has 2<sup><em>n</em></sup> rows.</p>
<p>There are uses of such functions in, for example, computer graphics: the so-called ROP3s (ternary raster operations) take three inputs: <em>D</em> (destination), <em>S</em> (source) and <em>M</em> (mask). All three are planes of bits, and the result is returned to the destination plane of bits. Now, this is only really applicable to 2-colour (black & white) or 8-colour displays (regarding the red, green, blue planes separately). A given ROP3 is given a code from 0 to 255, a number which represents the pattern of the output bit, down the rows of the table. Similarly, ROP2s have a value from 0 to 15. ROPs may also be given names, especially when the logical connective of which the truth table is an extension is a simple one, such as AND, XOR or SRC (source).</p>
<p>An expression for any truth table (or ROP) may be found in terms of an expressively-complete set of connectives (usually unary or binary, sometimes nullary too). [Well, I suppose this statement is itself a tautology!] For example, the sets {NOT, AND}, {NOT, OR}, {NAND} are each expressively complete.</p>
<p>One commonly used (redundant) expressively complete set is {NOT, AND, OR}. Two particularly common canonical sets of expressions over this set are the conjunctive normal form (CNF) and disjunctive normal form (DNF). Both of these are rather verbose.</p>
<p>There is also a notion of a minimal expression over a set of connectives, defined variously. The count might be of instances of a variable or of connectives. There might be a bound to the depth or breadth of the expression (perhaps).</p>
<p>The Boolean connectives might be extended to the set <strong><em>R</em></strong>[0,1], for fuzzy logic; that is, the connectives are over <strong><em>R</em></strong>[0,1], with the restriction to {0,1} being the usual Boolean function. There are many ways to do this; it is possible to preserve some but not all the usual properties (e.g. associativity, idempotency) of the connectives. [NOT(x) would usually be interpreted as (1−x); AND(x,y) could be (x*y) or (min{x,y}), or in many other ways.]</p>
<p>Such extensions may be used, for example, to give a meaning to a ROP3 as applied to 256-level monochrome images (to combine or 'mix' such images) or to planes of full-colour images. (Necessarily, some truncation or 'quantisation' must take place.)</p>
<p>However, two expressions that have the same function over {0,1} will generally have different functions over <strong><em>R</em></strong>[0,1]. Rather than choosing some arbitrary expression, it would be an advantage to choose some canonical or minimal expression.</p>
<p>How much is known about this field? Are there any good on-line references? I'm particularly interested in definitions of, theorems about, and algorithms for the generation of minimal or canonical expressions.</p>
|
Jose Brox
| 1,234 |
<p>For more information, you should look at any book about Electronic Design Technology. When you manufacture a microchip, you want to keep it as simple as possible while integrating all the functions you are interested in. Thus, you want to minimize the number of transistors you use (i.e., to minimize the number of logic gates which implement your functions), which leads you to develop general techniques to achieve this.</p>
<p>Although I have not read it, I think you can give the following book a try:</p>
<p>"Principles of CMOS VLSI Design: A System Perspective (2nd edition)", N. WESTE and K. ESHRAGHIAN, Addison-Wesley, 1993.</p>
|
388,950 |
<p>I've been trying to get my head around this for days. I understand what is going on with the calculation of a linear recurrence and I also understand how the characteristic is obtained.</p>
<p>What is confusing me is the general solution.</p>
<p>The general solution to the recurrence $a\,t(n) + b\,t(n-1) + c\,t(n-2) = 0$</p>
<p>For a repeated root $r$, the general solution is $t(n) = (A + Bn) r^n$. For two distinct roots $r_1$ and $r_2$, the general solution is $t(n) = Ar_1^n + Br_2^n$.</p>
<p>What are $A$ and $B$? What do they represent in the linear recurrence and in the general solution?</p>
<p>I just can't see it and I need to get it so I can start my discrete math assignment ...</p>
<p>Thanks heaps in advance :)</p>
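<p>To the best of my understanding, $A$ and $B$ are the free constants of the general solution: any choice of $A,B$ satisfies the recurrence, and a particular solution is pinned down by the initial values $t(0)$ and $t(1)$. A small sketch (my own example recurrence, not from the question):</p>

```python
# Example: t(n) - 5 t(n-1) + 6 t(n-2) = 0 has characteristic roots r1 = 2, r2 = 3,
# so the general solution is t(n) = A * 2^n + B * 3^n for arbitrary constants A, B.
def t(n, A, B):
    return A * 2 ** n + B * 3 ** n

for A, B in [(1, 0), (0, 1), (4, -7), (2, 5)]:   # every (A, B) satisfies it...
    for n in range(2, 10):
        assert t(n, A, B) - 5 * t(n - 1, A, B) + 6 * t(n - 2, A, B) == 0

# ...and initial conditions single out one pair: t(0) = A + B, t(1) = 2A + 3B.
```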
|
Andreas Blass
| 48,510 |
<p>Your computation and the resulting formula look fine, but there seems to be a problem with the graph. I think you plotted the intercept 32 on the vertical axis, assuming that the vertical direction represents $F$, but then I think you plotted a slope of $5/9$ rather than $9/5$, assuming that the vertical axis represents $C$ and the horizontal axis represents $F$. One thing I can say for sure, regardless of which axis represents which variable, is that the graph should go through the point $(-40,-40)$, which your graph doesn't. The reason is that $-40$ is the (one and only) temperature that comes out the same on the Fahrenheit and Celsius scales.</p>
|
2,461,918 |
<p>Often a function (real, say) is written without mentioning its domain, co-domain, just the rule $y=f(x)$ is given. In that case, how does one determine the domain, co-domain and range? For example, consider $f(x)=1/x$.</p>
|
Siong Thye Goh
| 306,553 |
<p>Not stating the domain and co-domain is bad practice.</p>
<p>A function is well defined only when the domain, co-domain and rule are given; they should be specified in order to define the function. The range is the set of values in the codomain that have a preimage. The codomain can be anything that contains the range as a subset.</p>
<p>However, for most examination-style questions, what is really being asked is the largest possible subset of $\mathbb{R}$ that can be used as the domain. </p>
|
1,485,463 |
<p>At a school function, the ratio of teachers to students is $5:18$. The ratio of female students to male students is $7:2$. If the ratio of the female teachers to female students is $1:7$, find the ratio of the male teachers to male students. </p>
|
hanish
| 281,010 |
<p>Let's suppose there are 7 female students and 2 male students ($7:2$).</p>
<p>Total students = 9.</p>
<p>teacher : student = 5 : 18</p>
<p>Therefore, for 18 students there will be 14 female and 4 male students:</p>
<p>5 : 14 : 4</p>
<p>Now, only 1 female teacher is required for every 7 female students ($1:7$).</p>
<p>Here we have 14 female students, hence 2 female teachers are required,</p>
<p>and the total number of teachers is 5,</p>
<p>of which 2 are female, hence 3 teachers are male.</p>
<p>5 : 14 : 4 (teachers : female students : male students)</p>
<p>3 : 2 : 14 : 4 (male teachers : female teachers : female students : male students)</p>
<p>Hence the answer is $3:4$.</p>
<p>This is what I found; if it's wrong, please reply.</p>
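<p>The same bookkeeping, done with exact fractions (my own check of the arithmetic above):</p>

```python
from fractions import Fraction

students, teachers = 18, 5                     # teacher : student = 5 : 18
female_students = students * Fraction(7, 9)    # F : M = 7 : 2 among students
male_students   = students * Fraction(2, 9)
female_teachers = female_students * Fraction(1, 7)  # female T : female S = 1 : 7
male_teachers   = teachers - female_teachers

assert female_students == 14 and male_students == 4
assert female_teachers == 2 and male_teachers == 3
assert male_teachers / male_students == Fraction(3, 4)   # answer: 3 : 4
```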
|
113,960 |
<p>When trying to compute the (Serre-generalized) intersection number of two varieties at a closed point, I came to a need to compute the following $\operatorname{Tor}$:</p>
<blockquote>
<p>Let $k$ be an algebrically closed field, $A=k[x_1,x_2,x_3,x_4]$ and $\mathfrak m=(x_1,x_2,x_3,x_4)$. Let $M = k[x_1,x_2,x_3,x_4]/(x_1x_3,x_1x_4,x_2x_3,x_2x_4)$, $N=k[x_1,x_2,x_3,x_4]/(x_1-x_3,x_2-x_4)$. I want to compute $\operatorname{Tor}^i_{A_{\mathfrak m}}(M_{\mathfrak m},N_{\mathfrak m})$.</p>
</blockquote>
<p>Any ideas how to do this?</p>
<p>I first noted that $N$ is obtained from $A$ as a quotient by a regular sequence, so that the Koszul complex of $(x_1-x_3,x_2-x_4)$ is a free resolution of $N$ over $A$. However, after tensoring with $M$ the computations became too hard, and I was not able to find the cohomology of the resulting complex.</p>
<p>Any ideas?</p>
<p>Thanks!</p>
|
curious
| 223 |
<p>(I will omit the localization in this answer). </p>
<p>As $N$ has projective dimension $2$, you only need to worry about $i=0,1,2$. When $i=0$ the Tor is just tensor product, and tensoring with $N$ means setting $x_1=x_3$, $x_2=x_4$. Thus $M\otimes N = k[x_1,x_2]/(x_1^2,x_1x_2,x_2^2)$. The length is $3$.</p>
<p>For the rest, here is a key observation </p>
<p>(*) $(x_1-x_3)$ is a regular element on $M$.</p>
<p>We will use the short exact sequence
$$0 \to L \to L \to N\to 0 $$
Here $L = A/(x_1-x_3)$ and the first map is multiplication by $(x_2-x_4)$. Because of (*), $\operatorname{Tor}_i(L,M)=0$ for $i>0$. So looking at the long exact sequence by tensoring with $M$ we immediately get $\operatorname{Tor}_2(M,N)=0$ and $\operatorname{Tor}_1(M,N)$ is the kernel of the map $$L\otimes M \to L\otimes M = M/(x_1-x_3)M=k[x_1,x_2,x_4]/(x_1^2,x_1x_2,x_1x_4,x_2x_4) $$
given by multiplication with $x_2-x_4$. This kernel is a vector space generated by the residue of $x_1$, so the length is $1$.</p>
<p>This is an example (simplest in some sense) where the naive count for multiplicity fails. Note that $M$ is not Cohen-Macaulay (this is not a coincidence). </p>
|
186,417 |
<p>Is it possible to write $0.3333(3)=\frac{1}{3}$ as a sum such as $\frac{1}{4} + \cdots + \frac{1}{512} + \cdots$ so that each denominator is a power of $2$ and all denominators are different?
As far as I can tell, it would have to be an infinite series, but I may be wrong.
In case it can't be written using pluses only, minuses are allowed as well.</p>
<p>For example, $\frac{1}{2}$ is $\frac{1}{4}+\frac{1}{8}+\frac{1}{16}+\cdots$
So, what about $\frac{1}{3}$?</p>
|
Richard Evans
| 39,080 |
<p>More generally we can do this for any fraction $\frac{m}{n}$. We know that the sum of the infinite geometric series
$$ S=a+ar+ar^2+ar^3+\dots=\frac{a}{1-r} $$
if $ |r| < 1 $. So we could take $r=-\frac{1}{n-1}$ and $a=\frac{m}{n-1}$ giving</p>
<p>$$ S=\frac{\frac{m}{n-1}}{1+\frac{1}{n-1}}=\frac{\frac{m}{n-1}}{\frac{n}{n-1}}=\frac{m}{n} $$</p>
<p>We can in fact be very general with both $r$ and $a$. Just suppose $r=\frac{s}{t}$ (with $|s|>|t|$ to ensure $|r|<1$), and let $a=1$ for now then</p>
<p>$$ S=\frac{1}{1-\frac{s}{t}}=\frac{t}{t-s} $$</p>
<p>So if instead we take $a=\frac{m(t-s)}{nt}$ we have again that $S=\frac{m}{n}$.</p>
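<p>As a quick sanity check (a sketch added here, not part of the original argument), the partial sums of this series for $m/n = 1/3$ — i.e. $\frac12 - \frac14 + \frac18 - \cdots$, where every term is a signed power of $2$ — can be computed exactly with rational arithmetic:</p>

```python
from fractions import Fraction

def partial_sum(m, n, terms):
    """Partial sum of a + a*r + a*r^2 + ... with a = m/(n-1), r = -1/(n-1).

    The full series converges to m/n; for m/n = 1/3 every term is a
    signed power of 1/2, as the question asks."""
    a = Fraction(m, n - 1)
    r = Fraction(-1, n - 1)
    total = Fraction(0)
    for _ in range(terms):
        total += a
        a *= r
    return total

print(partial_sum(1, 3, 30))   # within 1/(3*2^30) of 1/3
```

<p>For $m/n=1/3$ the error after $k$ terms is exactly $\frac{1}{3}\cdot 2^{-k}$, so convergence is fast.</p>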
|
3,839,244 |
<p>Is <span class="math-container">$\{3\}$</span> a subset of <span class="math-container">$\{\{1\},\{1,2\},\{1,2,3\}\}$</span>?</p>
<p>If the set contained <span class="math-container">$\{3\}$</span> plain and simply I would know but does the element <span class="math-container">$\{1,2,3\}$</span> include <span class="math-container">$\{3\}$</span> such that it would be a subset?</p>
|
fleablood
| 280,126 |
<p>No.</p>
<p>the elements inside elements of the set do not count.</p>
<p>The elements of your big set are:</p>
<ul>
<li><span class="math-container">$\{1\}$</span></li>
<li><span class="math-container">$\{1,2\}$</span></li>
<li><span class="math-container">$\{1,2,3\}$</span>.</li>
</ul>
<p>The elements of your small set are:</p>
<ul>
<li><span class="math-container">$3$</span></li>
</ul>
<p>So <span class="math-container">$\{3\}$</span> is a subset only if <span class="math-container">$3$</span> is equal to (the <em>same</em> thing as; <em><strong>NOT</strong></em> an element within) one of the elements <span class="math-container">$\{1\}$</span>, <span class="math-container">$\{1,2\}$</span>, or <span class="math-container">$\{1,2,3\}$</span>. But none of those are the same thing as <span class="math-container">$3$</span>, so <span class="math-container">$\{3\}$</span> is not a subset.</p>
<p>But in some text the natural numbers are defined as</p>
<p><span class="math-container">$0 = \emptyset$</span></p>
<p><span class="math-container">$1= \{\emptyset\}$</span></p>
<p><span class="math-container">$2= \{\emptyset, 1\}$</span>.</p>
<p><span class="math-container">$3 = \{\emptyset, 1, 2\}$</span></p>
<p>So we could have the tricky situation that <span class="math-container">$\{3\} \subset \{\{\emptyset,1\}, \{\emptyset,1,2\}, \{\emptyset,1,2,3\}\}$</span>, <em>not</em> because <span class="math-container">$3 \in \{\emptyset,1,2,3\}$</span> (that's utterly irrelevant), but because <span class="math-container">$3 = \{\emptyset, 1, 2\}$</span> and the set <span class="math-container">$\{\{\emptyset,1\}, \{\emptyset,1,2\}, \{\emptyset,1,2,3\}\}$</span> is simply the set <span class="math-container">$\{2,3,4\}$</span>, and <span class="math-container">$\{3\}\subset \{2,3,4\}$</span>.</p>
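<p>The von Neumann coding above can even be checked mechanically; here is a small sketch (added for illustration, using Python frozensets as hereditarily finite sets):</p>

```python
def von_neumann(n):
    """Encode n as a von Neumann ordinal: 0 = {}, k+1 = k ∪ {k}."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

# the "big set" {2, 3, 4}, i.e. {{∅,1}, {∅,1,2}, {∅,1,2,3}}
big = frozenset({von_neumann(2), von_neumann(3), von_neumann(4)})

print(frozenset({von_neumann(3)}) <= big)   # True: {3} ⊆ {2,3,4}
```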
|
159,438 |
<p>Can be easily proved that the following series onverges/diverges?</p>
<p>$$\sum_{k=1}^{\infty} \frac{\tan(k)}{k}$$</p>
<p>I'd really appreciate your support on this problem. I'm looking for some easy proof here. Thanks.</p>
|
Pedro
| 23,350 |
<p>A simple example would be the term $\tan 121 / 121$. Noting that $\dfrac{\pi}{2}+2 \cdot 19 \cdot \pi \approx 120.95131$ shows why this term is much larger relative to the others. Here you can see a plot and spot the rogues. Those are the first 20k terms.</p>
<p><img src="https://i.stack.imgur.com/pMtLh.jpg" alt="enter image description here"></p>
<p>Here's a much more interesting take, of the first 50k terms. Note the way they align.</p>
<p><img src="https://i.stack.imgur.com/WZBAV.jpg" alt="enter image description here"></p>
<p>Basically, what we're worried about is how close an integer can get to $$\dfrac{\pi}{2}+2 k \pi$$</p>
<p>and that is a tough question to answer.</p>
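<p>To make the $121$ example concrete (a numeric check added here, not in the original answer): $121$ lands only about $0.0487$ past the pole at $\frac{\pi}{2}+38\pi \approx 120.95131$, so $\tan 121$ is unusually large:</p>

```python
import math

# distance from the integer 121 to the nearest pole of tan
gap = 121 - (math.pi / 2 + 2 * 19 * math.pi)
print(gap)                   # ≈ 0.0487
print(math.tan(121))         # ≈ -20.5, since tan(pi/2 + e) ≈ -1/e
print(math.tan(121) / 121)   # a "rogue" term of the series
```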
|
1,935,320 |
<p>If $a^2-b^2=2$ then what is the least possible value of: \begin{vmatrix} 1+a^2-b^2 & 2ab &-2b\\ 2ab & 1-a^2+b^2&2a\\2b&-2a&1-a^2-b^2 \end{vmatrix}</p>
<p>I tried to express the determinant as a product of two determinants but could not do so. Seeing no way out, I tried expanding it but that took too long and was difficult to evaluate. Please help me with this one, thanks.</p>
|
Jacky Chong
| 369,395 |
<p>Consider the matrix
\begin{align}
A=
\begin{pmatrix}
\cos \frac{2\pi}{3} & -\sin\frac{2\pi}{3}\\
\sin\frac{2\pi}{3} & \cos \frac{2\pi}{3}
\end{pmatrix}
\end{align}</p>
|
135,235 |
<blockquote>
<p>if $f(3)=-2$ and $f'(3)=5$, find $g'(3)$ if,<br>
$g(x)=3x^2-5f(x)$</p>
</blockquote>
<p>The answer is $-7$, but I find the question very hard to understand.
Thanks. </p>
|
chemeng
| 25,845 |
<p>Since it's a homework question here are some tips. </p>
<ol>
<li>Are $f,g$ differentiable functions everywhere? If not, how can you find the derivative of $g$ at a specific point (in our case $3$)?</li>
<li>Find everything needed for calculating $g'(3)$ from the equation given.</li>
</ol>
|
135,235 |
<blockquote>
<p>if $f(3)=-2$ and $f'(3)=5$, find $g'(3)$ if,<br>
$g(x)=3x^2-5f(x)$</p>
</blockquote>
<p>The answer is $-7$, but I find the question very hard to understand.
Thanks. </p>
|
dato datuashvili
| 3,196 |
<p>$g'(x)=6x-5f'(x)$</p>
<p>$g'(3)=6\cdot 3-5\cdot f'(3)=18-25=-7$.
I hope this helps.</p>
|
2,992,477 |
<p>Can you help me prove that every simple module is cyclic? I've already proved that a cyclic module is isomorphic to $A$ quotiented by $I$ (with $I$ a left ideal of $A$) using homomorphisms, but I can't seem to make any progress in proving this. Thanks in advance.</p>
|
egreg
| 62,967 |
<p>Suppose <span class="math-container">$M$</span> is a simple left <span class="math-container">$R$</span>-module. This means that <span class="math-container">$M\ne\{0\}$</span> and that the only submodules of <span class="math-container">$M$</span> are <span class="math-container">$\{0\}$</span> and <span class="math-container">$M$</span>.</p>
<p>Now take <span class="math-container">$x\in M$</span>, <span class="math-container">$x\ne0$</span> (which exists by assumption). Then <span class="math-container">$Rx$</span> is a submodule of <span class="math-container">$M$</span> and <span class="math-container">$x=1x\in Rx$</span>. Thus <span class="math-container">$Rx\ne\{0\}$</span> and so <span class="math-container">$Rx=M$</span>.</p>
<hr />
<p>If your rings don't necessarily have an identity, the definition of simple module is slightly different. A left <span class="math-container">$R$</span>-module <span class="math-container">$M$</span> is simple if</p>
<ol>
<li><span class="math-container">$RM\ne\{0\}$</span>;</li>
<li>the only submodules of <span class="math-container">$M$</span> are <span class="math-container">$\{0\}$</span> and <span class="math-container">$M$</span>.</li>
</ol>
<p>By assumption, there exist <span class="math-container">$r\in R$</span> and <span class="math-container">$x\in M$</span> such that <span class="math-container">$rx\ne0$</span>. Now <span class="math-container">$Rrx$</span> is a nonzero submodule of <span class="math-container">$M$</span>, so <span class="math-container">$Rrx=M$</span> and, a fortiori, <span class="math-container">$Rx=M$</span>.</p>
|
1,867,352 |
<p>Find all the angles $v$ between $-\pi$ and $\pi$ such that</p>
<p>$$-\sin(v)+ \sqrt3 \cos(v) = \sqrt2$$</p>
<p>The answer has to be in a form like $\pi/2$, i.e. expressed in terms of $\pi$.</p>
<p>I have tried squaring but I get nowhere.</p>
|
Zain Patel
| 161,779 |
<p>It's sometimes a good idea to turn a sum of sines and cosines into a single trigonometric term when solving equations of the form $a \sin x + b\cos x = c$, in this case: $$\sqrt{3}\cos v - \sin v \equiv 2\cos \left(v + \frac{\pi}{6}\right)$$</p>
<p>So, if you set $x = v + \frac{\pi}{6}$ you need only solve $\cos x = \frac{1}{\sqrt{2}}$ which you can then do, I'm sure.</p>
|
1,867,352 |
<p>Find all the angles $v$ between $-\pi$ and $\pi$ such that</p>
<p>$$-\sin(v)+ \sqrt3 \cos(v) = \sqrt2$$</p>
<p>The answer has to be in a form like $\pi/2$, i.e. expressed in terms of $\pi$.</p>
<p>I have tried squaring but I get nowhere.</p>
|
DonAntonio
| 31,254 |
<p>Divide the equation by $\;2\;$ :</p>
<p>$$\frac1{\sqrt2}=-\frac12\sin v+\frac{\sqrt3}2\cos v=\sin\frac\pi3\cos v-\cos\frac\pi3\sin v=\sin\left(\frac\pi3-v\right)\implies$$</p>
<p>$$\frac\pi3-v=\begin{cases}\cfrac\pi4\\{}\\\cfrac{3\pi}4\end{cases}\implies \ldots$$</p>
<p>Observe this is similar to the other answer but, perhaps, a little, very little, easier to understand.</p>
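<p>Finishing the computation, the two cases give $v=\frac\pi3-\frac\pi4=\frac\pi{12}$ and $v=\frac\pi3-\frac{3\pi}4=-\frac{5\pi}{12}$, both inside $(-\pi,\pi)$; a quick numeric check (added, not part of the original answer):</p>

```python
import math

def lhs(v):
    return -math.sin(v) + math.sqrt(3) * math.cos(v)

for v in (math.pi / 12, -5 * math.pi / 12):
    print(v, lhs(v))   # both give sqrt(2) ≈ 1.41421
```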
|
3,450,692 |
<p>How to solve this equation?</p>
<p><span class="math-container">$$
\frac{dy}{dx}=\frac{x^{2}+y^{2}}{2xy}
$$</span></p>
|
Dr. Sonnhard Graubner
| 175,066 |
<p>The right-hand side is <span class="math-container">$$\frac{1}{2}\left(\frac{x}{y}+\frac{y}{x}\right),$$</span> so let <span class="math-container">$$u=\frac{y}{x}.$$</span> Then <span class="math-container">$$ux=y$$</span> and <span class="math-container">$$y'=u'x+u.$$</span></p>
|
3,450,692 |
<p>How to solve this equation?</p>
<p><span class="math-container">$$
\frac{dy}{dx}=\frac{x^{2}+y^{2}}{2xy}
$$</span></p>
|
user577215664
| 475,762 |
<p>Another approach:
<span class="math-container">$$\frac{dy}{dx}=\frac{x^{2}+y^{2}}{2xy}$$</span>
<span class="math-container">$$2xydy=x^2dx+y^2dx$$</span>
<span class="math-container">$$xdy^2-y^2dx=x^2dx$$</span>
<span class="math-container">$$\frac {xdy^2-y^2dx}{x^2}=dx$$</span>
<span class="math-container">$$d\left (\frac {y^2}x \right) =dx$$</span>
After integration:
<span class="math-container">$$y^2=x^2+Cx$$</span></p>
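<p>A numerical spot-check of the result (an added sketch; the constant $C=1$ is an arbitrary choice), comparing a finite-difference derivative of $y=\sqrt{x^2+Cx}$ against the right-hand side of the ODE:</p>

```python
def y(x, C=1.0):
    # candidate solution: y^2 = x^2 + C x
    return (x * x + C * x) ** 0.5

def rhs(x, y):
    return (x * x + y * y) / (2 * x * y)

h = 1e-6
for x in (0.5, 1.0, 2.0, 5.0):
    dydx = (y(x + h) - y(x - h)) / (2 * h)   # central difference
    print(x, dydx, rhs(x, y(x)))             # the two columns agree
```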
|
426,264 |
<ol>
<li><p>Given one sine wave in time domain, I want to find its frequency. Because I observe only a very small part of the sine wave ~1 cycle, FFT methods have a poor spectral resolution. </p></li>
<li><p>Has there been work that bounds the error on the frequency estimate? </p></li>
</ol>
<p>Thanks a ton</p>
|
Christian Blatter
| 1,303 |
<p>I don't think that FFT is of any use here. You are given a data set $(t_k,x_k)$ $\>(1\leq k\leq N)$ and want to know the best fit for a function of the form
$$x(t)=A\cos(\omega t)+B\sin(\omega t)+C\ ,$$
where $A$, $B$, $C$, $\omega$ are parameters to be determined. This is a standard problem of numerical analysis.</p>
<p>Note that FFT presupposes a base period (and its $2^{-n}$ parts) as given and will never be able to recover the "real period" as a floating point number.</p>
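<p>One concrete (if crude) way to carry out this fit — added here as a sketch, not from the original answer — is to grid-search $\omega$ and, for each candidate, solve the $3\times3$ normal equations for $A$, $B$, $C$ by linear least squares; the grid spacing limits the accuracy of the recovered frequency:</p>

```python
import math

def solve3(G, b):
    """Gaussian elimination with partial pivoting for a 3x3 system G p = b."""
    M = [row[:] + [bi] for row, bi in zip(G, b)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    p = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        p[i] = (M[i][3] - sum(M[i][c] * p[c] for c in range(i + 1, 3))) / M[i][i]
    return p

def fit_sinusoid(t, x, omegas):
    """For each candidate omega, least-squares fit A cos(wt)+B sin(wt)+C;
    return (residual, omega, [A, B, C]) for the best omega on the grid."""
    best = None
    for w in omegas:
        cols = ([math.cos(w * ti) for ti in t],
                [math.sin(w * ti) for ti in t],
                [1.0] * len(t))
        G = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(3)]
             for i in range(3)]
        b = [sum(u * xi for u, xi in zip(cols[i], x)) for i in range(3)]
        A, B, C = solve3(G, b)
        resid = sum((xi - (A * c + B * s + C)) ** 2
                    for xi, c, s in zip(x, cols[0], cols[1]))
        if best is None or resid < best[0]:
            best = (resid, w, [A, B, C])
    return best

# synthetic data: barely one cycle observed, true omega = 2.3
t = [0.05 * k for k in range(40)]
x = [1.0 * math.cos(2.3 * ti) + 0.5 * math.sin(2.3 * ti) + 0.2 for ti in t]
resid, w, p = fit_sinusoid(t, x, [1.0 + 0.01 * k for k in range(301)])
print(w, p)   # recovers omega ≈ 2.3 and [A, B, C] ≈ [1.0, 0.5, 0.2]
```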
|
102,402 |
<p>I wanted to make a test bank of graphs of linear equations for my algebra classes. I want the $y$-intercept of each graph to be an integer no less than $-10$ and no greater than $10$. Generally, you want these graphs to be small, so I've decided on a $20 \times 20$ grid (10 units from the origin). Additionally, I would like students to be able to see a second integral point on this grid, so they can find the slope. How many possible graphs would be in this test bank? How did you get the solution?</p>
|
Gerry Myerson
| 8,269 |
<p>Here's a simple upper bound. There are 21 points on the $y$-axis, and there are 420 points in your grid that are not on the $y$-axis, so there are at most $21\times420$ lines. Of course, there are actually fewer, because lines with 3 or more points are getting counted more than once. Two random integers less than $n$ are relatively prime with probability $6/\pi^2$ for large $n$, so my guess is that $(6/\pi^2)\times21\times420$ is a decent estimate for the answer. </p>
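<p>The exact count can be brute-forced (an added sketch; it identifies each line by its integer intercept and reduced slope, and — matching the bound above — counts horizontal lines while requiring a second grid point off the $y$-axis):</p>

```python
from fractions import Fraction

lines = set()
for b in range(-10, 11):                 # integer y-intercept
    for x in range(-10, 11):
        if x == 0:
            continue                     # second point must be off the y-axis
        for y in range(-10, 11):
            # the reduced slope, together with b, identifies the line
            lines.add((b, Fraction(y - b, x)))
print(len(lines))   # compare with the estimate (6/pi^2)*21*420 ≈ 5362
```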
|
245,623 |
<p><em>For the following vectors $v_1 = (3,2,0)$ and $v_2 = (3,2,1)$, find a third vector $v_3 = (x,y,z)$ which together build a base for $\mathbb{R}^3$.</em></p>
<p>My thoughts:</p>
<p>So the following must hold:</p>
<p>$$\left(\begin{matrix}
3 & 3 & x \\
2 & 2 & y \\
0 & 1 & z
\end{matrix}\right)
\left(\begin{matrix}
{\lambda}_1 \\
{\lambda}_2 \\
{\lambda}_3
\end{matrix}\right) =
\left(\begin{matrix}
0 \\
0 \\
0
\end{matrix}\right)
$$</p>
<p>The gauss reduction gives</p>
<p>$$
\left(\begin{matrix}
3 & 3 & x \\
0 & 1 & z \\
0 & 0 & -\frac{2}{3}x+y
\end{matrix}\right)
$$</p>
<p>(but here I'm not sure if I'm allowed to swap the $y$ and $z$ axes)</p>
<p>For ${\lambda}_1 = {\lambda}_2 = {\lambda}_3 = 0$, this gives me</p>
<p>$$
x = 0 \\
y = 0 \\
z = 0
$$</p>
<p>Is this third vector $v_3$ building a base of $\mathbb{R}^3$ together with the other two vectors? If not, where are my mistakes?</p>
|
egens
| 50,866 |
<p>There is a more general solution: find an orthonormal basis of the given linear subspace, then complete it to a full basis by solving several homogeneous systems.</p>
<p><strong>Basis normalisation.</strong>
Suppose we have $m$ linearly independent vectors $\tilde{v}_1,\dots,\tilde{v}_m$ in $R^n$. Linear independence says that they form a basis of some linear subspace of $R^n$. To orthonormalize this basis, do the following:</p>
<ol>
<li>Take the first vector $\tilde{v}_1$ and normalize it
$$v_1 = \frac{\tilde{v}_1}{||\tilde{v}_1||}.$$</li>
<li>Take the second vector and subtract from it its projection on the first vector:
$$\bar{v}_2 = \tilde{v}_2 - (\tilde{v}_2 \cdot v_1) {v}_1,$$
where $(\tilde{v}_2 \cdot v_1)$ is the scalar product, which equals the length of the projection since $||v_1||=1$. Normalize:
$$v_2 = \frac{\bar{v}_2}{||\bar{v}_2||}.$$</li>
<li>For $i=3,\dots,m$, take the vector $\tilde{v}_i$. Subtract from it its projections on all previously generated vectors of the new basis:
$$\bar{v}_i = \tilde{v}_i - \sum_{j=1}^{i-1}(\tilde{v}_i \cdot v_j) {v}_j,$$
and normalize:
$$v_i = \frac{\bar{v}_i}{||\bar{v}_i||}.$$</li>
</ol>
<p>The vectors $v_1,\dots,v_m$ form the new orthonormal basis: all their lengths are equal to $1$ and they are orthogonal to each other.</p>
<p><strong>Homogeneous systems.</strong> To get the $(m+1)$-th basis vector $v_{m+1}$, the following homogeneous system of scalar products must be solved:
$$\begin{cases} v_1 \cdot v_{m+1} = 0 \\ v_2 \cdot v_{m+1} = 0 \\ ... \\ v_m \cdot v_{m+1} = 0 \end{cases}$$</p>
<p>The solution set of this system is the subspace orthogonal to the given one. Any nonzero vector of it can be taken as $v_{m+1}$.</p>
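<p>The procedure above is easy to code. Here is a minimal sketch (added for illustration) that orthonormalizes the given vectors and then completes the basis by feeding in standard basis vectors — an alternative to solving the homogeneous system explicitly, producing vectors from the same orthogonal complement:</p>

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors, dim):
    """Orthonormalize `vectors`, then append standard basis vectors e_i,
    keeping each residual that is not (numerically) zero, until R^dim is spanned."""
    eye = [[float(i == j) for j in range(dim)] for i in range(dim)]
    basis = []
    for v in list(vectors) + eye:
        w = [float(a) for a in v]
        for b in basis:                        # subtract projections
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        n = dot(w, w) ** 0.5
        if n > 1e-10:                          # skip dependent vectors
            basis.append([wi / n for wi in w])
    return basis

# the question's example: complete v1 = (3,2,0), v2 = (3,2,1) to a basis of R^3
B = gram_schmidt([[3, 2, 0], [3, 2, 1]], 3)
print(B)   # three orthonormal vectors; the first two span the same plane as v1, v2
```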
|
3,845,144 |
<p>Let's say a number <span class="math-container">$n$</span> is insertable if for every digit <span class="math-container">$d$</span>, if we insert <span class="math-container">$d$</span> between any two digits of <span class="math-container">$n$</span>, then the obtained number is a multiple of <span class="math-container">$d$</span>.
For example, <span class="math-container">$144$</span> is not insertable because <span class="math-container">$1474$</span> is not divisible by <span class="math-container">$7$</span>.</p>
<p>The question is the find the smallest insertable positive integer with at least two digits.</p>
<p>It is relatively easy to see that such a number has to be divisible by <span class="math-container">$2520$</span> (assuming it is at least <span class="math-container">$4$</span> digits long). I also ran a script to check all integers below 75,000,000,000 with no success (the issue might be my code).</p>
<p><strong>Disclaimer.</strong> I do not know if such a number does exist.</p>
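<p>For reference, a straightforward checker for the definition (a sketch of the kind of script mentioned above; it reads "between any two digits" as the interior insertion positions, and skips $d=0$ since divisibility by $0$ is not meaningful):</p>

```python
def is_insertable(n):
    s = str(n)
    for d in "123456789":
        for i in range(1, len(s)):               # interior positions only
            if int(s[:i] + d + s[i:]) % int(d) != 0:
                return False
    return True

print(is_insertable(144))    # False (e.g. 1474 is not divisible by 7)

# search a small range of multiples of 2520; the question reports no
# hits below 75,000,000,000
print([n for n in range(2520, 10**6, 2520) if is_insertable(n)])
```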
|
N. S.
| 9,176 |
<p>The following solution uses the eigenvalues in an implicit way.</p>
<p><span class="math-container">$$A^3=2A^2+A-I_3$$</span></p>
<p>Which can be found by inspection or by calculating the characteristic polynomial.</p>
<p>Then, by long division,
<span class="math-container">$$X^n=Q(X)(X^3-2X^2-X+1)+aX^2+bX+c.$$</span>
You can find <span class="math-container">$a,b,c$</span> by setting <span class="math-container">$X=\lambda_{1,2,3}$</span>, the roots of <span class="math-container">$X^3-2X^2-X+1$</span>, in the above equation.</p>
<p>Then, setting <span class="math-container">$X=A$</span> in the equation we get:
<span class="math-container">$$A^n=aA^2+bA+cI_3$$</span></p>
<p><strong>Note</strong> The above implies that all entries of <span class="math-container">$A^n$</span> are of the form <span class="math-container">$c_1\lambda_1^n+c_2 \lambda_2^n+c_3 \lambda_3^n$</span> (which is consistent with diagonalisation). Since the roots
<span class="math-container">$\lambda_{1,2,3}$</span> of the equation
<span class="math-container">$$x^3-2x^2-x+1$$</span>
are ugly, the formula is almost impossible to guess.</p>
|
120,910 |
<p>$f(x) = 2\sin x \hspace{10pt}(0 \leq x \leq \pi)$<br>
$g(x) = -\sin x \hspace{10pt}(0 \leq x \leq \pi)$</p>
<p>Rectangle ABCD is enclosed between the above functions' graphs (its edges are parallel to the axes).</p>
<p>How would I go about finding the maximum perimeter of ABCD?</p>
<p>I'm really clueless about this, I don't even know how to begin. How am I supposed to represent the edges?</p>
<p>Thanks</p>
|
David Mitra
| 18,986 |
<p>If the left vertical side of the rectangle passes through $x=a$ then the height of the rectangle is $2\sin a +\sin a=3\sin a$. The right vertical side of the rectangle then passes through $\pi-a$ (draw a picture). So the width of the rectangle whose left vertical side passes through $x=a$ is $(\pi-2a)$. The perimeter of the rectangle is then
$$
2\cdot 3\sin a +2(\pi-2a).
$$<br>
You need to find the maximum value of the above expression over the interval $[0,\pi/2]$.</p>
<p><br></p>
<p><img src="https://i.stack.imgur.com/VTw1T.png" alt="enter image description here"></p>
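<p>A quick numerical check of the resulting maximization (an added sketch): setting the derivative $6\cos a - 4$ to zero gives $\cos a = \tfrac23$, and comparing against the endpoint values confirms this interior critical point is the maximum.</p>

```python
import math

def perimeter(a):
    return 6 * math.sin(a) + 2 * (math.pi - 2 * a)

a_star = math.acos(2 / 3)        # critical point: 6 cos a - 4 = 0
for a in (0.0, a_star, math.pi / 2):
    print(a, perimeter(a))
# the critical point wins: P(a*) ≈ 7.391 > P(0) = 2*pi > P(pi/2) = 6
```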
|
3,264,935 |
<p>For a given natural number <span class="math-container">$k$</span>, I'm going to call a subset <span class="math-container">$T$</span> of the plane <span class="math-container">$\Bbb{R}^2$</span> a <span class="math-container">$k$</span>-traversal if, for any <span class="math-container">$x \in \Bbb{R}$</span>,
<span class="math-container">\begin{align*}
k &= \operatorname{card} \{(a, b) \in T : a = x\} \\
&= \operatorname{card} \{(a, b) \in T : b = x\} \\
&= \operatorname{card} \{(a, b) \in T : a + b = x\}.
\end{align*}</span>
It's not difficult to see that lines in the plane that aren't parallel to the <span class="math-container">$x$</span>-axis, the <span class="math-container">$y$</span>-axis, or the line <span class="math-container">$x + y = 0$</span> will be <span class="math-container">$1$</span>-traversals (but are far from the only examples!). We can make <span class="math-container">$2$</span>-traversals without difficulty by taking two disjoint <span class="math-container">$1$</span>-traversals and unioning them. My question is,</p>
<blockquote>
<p>Is there a <span class="math-container">$2$</span>-traversal that cannot be decomposed into the union of two <span class="math-container">$1$</span>-traversals?</p>
</blockquote>
<p>The idea of (irreducible) <span class="math-container">$k$</span>-traversals is a concept from latin squares. I'm trying to consider the concept when applied to an (uncountably) infinite Latin square generated by the group <span class="math-container">$(\Bbb{R}, + )$</span>.</p>
<p>This was a question that popped into my head years ago when attending a combinatorics conference. Combinatorics is not my forte, but it seemed like an interesting question, and I thought I'd share it.</p>
|
GEdgar
| 442 |
<p>Vincent's example... <span class="math-container">$T$</span> is the union of two graphs <span class="math-container">$y=\frac{|x-1|}{2}$</span> and <span class="math-container">$y=-\frac{|x+1|}{2}$</span>.</p>
<p><a href="https://i.stack.imgur.com/1gMR6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1gMR6.jpg" alt="T" /></a></p>
<p>Each vertical line meets <span class="math-container">$T$</span> in two points, one in the top graph and one in the bottom graph.<br />
Each horizontal line meets <span class="math-container">$T$</span> in two points: either two in the top graph, or two in the bottom graph, or the two corners on the <span class="math-container">$x$</span>-axis.<br />
Each line of slope <span class="math-container">$-1$</span> meets <span class="math-container">$T$</span> in two points, one in the top graph and one in the bottom graph.<br />
Thus, <span class="math-container">$T$</span> is a <span class="math-container">$2$</span>-traversal.</p>
<p>Next, we claim it is impossible to have <span class="math-container">$T = A \cup B$</span> where <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are <span class="math-container">$1$</span>-traversals and <span class="math-container">$A \cap B = \varnothing$</span>. Indeed, consider the three points
<span class="math-container">$$
u:=\left(0,\frac12\right),\quad
v:=\left(2,\frac12\right),\quad
w:=\left(2,-\frac32\right).
$$</span></p>
<p><a href="https://i.stack.imgur.com/ya2Du.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ya2Du.jpg" alt="B" /></a></p>
<p>Now <span class="math-container">$A$</span> contains exactly one of the points <span class="math-container">$u,v$</span>, since they are both on the line <span class="math-container">$y=\frac12$</span>. <span class="math-container">$A$</span> contains exactly one of the points <span class="math-container">$v,w$</span>, since they are both on the line <span class="math-container">$x=2$</span>. And <span class="math-container">$A$</span> contains exactly one of the points <span class="math-container">$w,u$</span>, since they are both on the line <span class="math-container">$x+y=\frac12$</span>. Impossible.</p>
|
686,848 |
<p>This is probably an elementary question about fields, but I think it is a little tricky. </p>
<blockquote>
<p>Can we make the integers $\mathbb{Z}$ into a field?</p>
</blockquote>
<p>Let me be more precise. Is it possible to make $\mathbb{Z}$ into a field so that the underlying additive group structure is the usual addition in $\mathbb{Z}$? In other words, we just need to define "multiplication" on $\mathbb{Z}$ that makes it into a field. But the question is: How to define such a multiplication?</p>
<p>The ordinary multiplication (that we learn in elementary school) doesn't work as $2$ does not have a multiplicative inverse. </p>
|
Elchanan Solomon
| 647 |
<p>The multiplicative structure in $\mathbb{Z}$ is determined by the additive structure. We start with the axiom of the multiplicative identity: $1 \times n = n$ for all $n \in \mathbb{Z}$. The axiom of distributivity implies
$$2 \times n = (1 + 1) \times n = (1 \times n) + (1 \times n) = n + n = 2n$$</p>
<p>This determines multiplication by positive integers. I'll leave it to you to show that it is also determined for the negative integers and zero.</p>
<p>Morally: The distributive property tells us how to multiply in terms of how to add, assuming we know how to multiply certain "basic elements". In the case of $\mathbb{Z}$, the entire ring is <em>generated</em> by the basic element $1$, and multiplication by $1$ is governed by the axioms.</p>
|
849,025 |
<p>Say I have 100 people each with a height, weight, and age. I make a regression that predicts age based on height and weight. Now, I would like to update that model when I meet someone new. I don't want to just re-process 101 people though--I want to take the model that I already have and incorporate the new person into it. </p>
<p>For example, say that I found from the first 100 that:</p>
<p>age = .08*height + .06*weight + 7.</p>
<p>Now, I meet someone with age 120 height 56 and weight 34. </p>
<p>I do know the number of cases with which I originally made the regression. So, my initial idea was that I could just assume that they all fit the model age = .08*height + .06*weight + 7 and somehow weight the new case so that it takes the old into account (e.g., assume I have 100 people that fit the model, so create a regression equation based on the 100 identical data points and then just add the 101? That's almost just like running the regression again on the 101 data points, except that nothing needs to be stored then, because you could derive 100 from the equation. </p>
<p>I would like to do something like this on a large scale, and I don't want to be creating an enormous database of cases, I just want to update the model with each new case.</p>
|
Michael Hardy
| 11,667 |
<p>Here I am thinking I should remember something about this, or maybe I shouldn't. We have a matrix "equation":
$$
\begin{bmatrix} 1 & h_1 & w_1 \\ \vdots & \vdots & \vdots \\ 1 & h_n & w_n \end{bmatrix} \begin{bmatrix} 7 \\ 0.08 \\ 0.06 \end{bmatrix} \overset{\text{?}} = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} \tag 1
$$
The equality is not actually equality: on the right, if we put the fitted ages rather than the observed ages, we would have equality. Write $(1)$ in matrix form as
$$
X\beta = A.
$$
The matrix $X$ has no inverse: there is no $3\times n$ matrix we could put to its right and get the $n\times n$ identity matrix. But there is a <b>left</b> inverse: there is a $3\times n$ matrix we can put to its left to get the $3\times3$ identity matrix. That matrix is
$$
\underbrace{(X^T X)^{-1}}_{3\times3} \underbrace{{}\quad X^T\quad{}}_{3\times n}. \tag 2
$$
Hence
$$
\begin{bmatrix} 7 \\ 0.08 \\ 0.06 \end{bmatrix} = (X^T X)^{-1} X^T A \tag 3
$$
<b>Exercise:</b> Equality holds in $(3)$ regardless of whether $A$ is the column of fitted ages or the column of observed ages.</p>
<p>Now we add an $(n+1)$th row to $X$, getting $\begin{bmatrix} X \\ W \end{bmatrix}$ where $W\in\mathbb R^{1\times3}$. Now instead of $(3)$ the new coefficient vector will be
$$
\left(\begin{bmatrix} X^T & W^T \end{bmatrix} \begin{bmatrix} X \\ W \end{bmatrix}\right)^{-1} \begin{bmatrix} X^T & W^T \end{bmatrix} \begin{bmatrix} a_1 \\ \vdots \\ a_n \\ a_{n+1} \end{bmatrix}.
$$</p>
<p>The matrix we need to invert becomes the $3\times3$ matrix
$$
X^T X + W^T W.
$$
How to do that without doing the whole thing over from scratch is what I don't know at this moment.</p>
<p>Software experts must have considered this problem, but I don't know what has been done.</p>
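<p>For the record (this goes beyond the answer above, which stops here): appending the row $W$ changes $X^TX$ by the rank-one term $W^TW$, and the Sherman–Morrison formula updates the inverse directly, so the model can be refreshed in constant time per new case — exactly the incremental update the question asks for. A sketch in plain Python (the helper names are mine):</p>

```python
def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sherman_morrison(Minv, w):
    """Given Minv = M^{-1}, return (M + w w^T)^{-1} (rank-one update)."""
    Mw = mat_vec(Minv, w)
    denom = 1.0 + dot(w, Mw)
    k = len(w)
    return [[Minv[i][j] - Mw[i] * Mw[j] / denom for j in range(k)]
            for i in range(k)]

def add_case(XtX_inv, XtA, w, a_new):
    """Incorporate one new row w (here [1, height, weight]) with response a_new."""
    XtX_inv = sherman_morrison(XtX_inv, w)
    XtA = [s + wi * a_new for s, wi in zip(XtA, w)]
    return XtX_inv, XtA, mat_vec(XtX_inv, XtA)   # new coefficient vector

# toy example: start from X = I_3 (so XtX_inv = I_3), responses A = (1, 2, 3)
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
XtX_inv, XtA, beta = add_case(I3, [1.0, 2.0, 3.0], [1.0, 1.0, 0.0], 3.5)
print(beta)   # matches the batch least-squares fit on all four rows
```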
|
109,973 |
<p>I am stuck solving this problem of financial mathematics, at this equation:
$$\frac{(1+x)^{8}-1}{x}=11$$</p>
<p>Expanding, I get stuck at this eighth-degree equation:
$$x^{8}+8x^{7}+28x^{6}+56x^{5}+70x^{4}+56x^{3}+28x^{2}+9x-11=0$$</p>
<p>But I cannot continue past this. This quickly led me to look for a general way of solving equations of any degree, but I couldn't find anything serious on the internet.</p>
<p>Do you know any simple methods to solve equations of any degree?</p>
|
GeoffDS
| 8,671 |
<p>First off, let me say that the polynomial you gave does not match up with your original problem. So, I will solve it based on your original problem and not the polynomial you gave. Perhaps you typed it wrong.</p>
<p>Second off, since this is financial mathematics, there are most likely assumptions that go beyond the pure math. The point is, you don't need to know general methods for solving any polynomial equation. You need methods for very specific types of situations.</p>
<p>$x$ here represents an interest rate. And, it is often assumed that interest rates are positive. This simplifies the solution considerably. But, even without that assumption, I can tell by looking at your equation what your original problem was. Your original problem was that 1 dollar is put into an account at the end of each year for 8 years. The account accumulates at interest rate $x$, or $100x\%$, and the accumulated value at time 8 is 11. Since you end up with more money than you put in, it is definitely reasonable to assume your interest rate is positive. In fact, since all the payments by you were done ahead of the payment at time 8 by the bank of 11, there is guaranteed to be only one real solution, and it is positive clearly from the context.</p>
<p>So, now your job is just to figure out the one and only solution. We use the Intermediate Value Theorem. It says, if a function is continuous on some interval, and if the function is negative at one point and positive at another point, then it must be 0 somewhere in between. So consider</p>
<p>$$f(x) = \frac{(1 + x)^8 - 1}{x} - 11.$$</p>
<p>It is not continuous everywhere, because of division by 0 when $x = 0$, but it is continuous for $x > 0$ and therefore the Intermediate Value Theorem applies if we are only searching for a solution to $f(x) = 0$ for positive $x$.</p>
<p>Plug in a couple small numbers, like 0.05 and 0.2. $f(0.05) = -1.45089$ and $f(0.2) = 5.49908$. Now, we know the solution is between 0.05 and 0.2 by the Intermediate Value Theorem. Pick a number in between, like 0.1. $f(0.1) = 0.435888$. Since this is positive, our root is between 0.05 and 0.1. We're getting close. Try 0.075. $f(0.075) < 0$. This tells us our solution is between 7.5% and 10%. Try 0.09. $f(0.09) > 0$, so our solution is between 0.075 and 0.09. Try 0.08. You get $f(0.08) = -0.363372$. Continue in this manner and you'll eventually narrow down the answer to as many decimal places as you want. Of course, it's slow. This method is basically the bisection method, except I didn't always pick my next number to be right in the middle of the previous bounds. You could also try Newton's method. You'll eventually get x = 0.0892863...</p>
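<p>The hand iteration above is easily automated (an added sketch; the bracket $[0.05, 0.2]$ comes from the sign change computed above):</p>

```python
def f(x):
    return ((1 + x) ** 8 - 1) / x - 11

lo, hi = 0.05, 0.2           # f(lo) < 0 < f(hi)
while hi - lo > 1e-10:
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)   # 0.0892863..., the interest rate
```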
|
2,442,705 |
<p>Like the title says, I'm wondering if there's a non-brute-force way to determine $x$ such that
$$45\le x<200,$$
$$x\bmod5\equiv0,$$
$$x\bmod8\equiv1,$$
$$x\bmod12\equiv1$$
I know I can simply enumerate all integers in $[45, 200)$ to get the answer (145), but I'm wondering if there's a more elegant solution.</p>
|
Caleb Stanford
| 68,107 |
<p>Well:</p>
<ul>
<li><p>Since the number is $0$ mod $5$, we don't have to check <em>all</em> integers in $[45,200)$, but only the multiples of $5$: $45, 50, 55, \ldots$</p></li>
<li><p>But we also know $x \mod{8} = 1$, which happens for one in every 8 multiples of $5$. So $x \mod 40 = 25$. Now we only have to check $65, 105, 145, 185$.</p></li>
<li><p>The third statement, $x \mod 12 = 1$, is the same as $x \mod 3 = 1$ and $x \mod 4 = 1$. The latter thing we already know from $x \mod 8 = 1$. So we combine the facts $x \mod 40 = 25$ and $x \mod 3 = 1$ to get $x \mod 120 = 25$. Now the only possibility is $145$.</p></li>
</ul>
<p>If you would like to know the mathematical justification for these steps, it is known as the <a href="https://en.wikipedia.org/wiki/Chinese_remainder_theorem" rel="noreferrer">Chinese remainder theorem</a>.</p>
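<p>The pairwise combination steps can be sketched in a few lines (added here; the brute-force <code>crt</code> below assumes coprime moduli, which holds for $(5,8)$ and $(40,3)$):</p>

```python
def crt(r1, m1, r2, m2):
    """Smallest x with x % m1 == r1 and x % m2 == r2 (coprime moduli assumed)."""
    for x in range(m1 * m2):
        if x % m1 == r1 and x % m2 == r2:
            return x

step1 = crt(0, 5, 1, 8)        # 25, i.e. x ≡ 25 (mod 40)
step2 = crt(step1, 40, 1, 3)   # 25, i.e. x ≡ 25 (mod 120)
print([x for x in range(45, 200) if x % 120 == step2])   # [145]
```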
|
2,294,585 |
<p>I can't figure out why $((p → q) ∧ (r → ¬q)) → (p ∧ r)$ isn’t a tautology. </p>
<p>I tried solving it like this: </p>
<p>$$((p ∧ ¬p) ∨ (r ∧ q)) ∨ (p ∧ r)$$
resulting in $(T) ∨ (p ∧ r)$ in the end that should result in $T$. What am I doing wrong?</p>
|
egreg
| 62,967 |
<p>We want to see this is not a tautology. This is the case if we find a choice of $p$, $q$ and $r$ so that</p>
<ol>
<li>$(p → q) ∧ (r → ¬q)$ is true</li>
<li>$p ∧ r$ is false</li>
</ol>
<p>In order for $(p → q) ∧ (r → ¬q)$ to be true, both $p → q$ and $r → ¬q$ must be true. Let's choose $q$ true. Then, as $¬q$ is false, $r$ must be false.</p>
<p>OK, we're done: choose $p$ true, $q$ true and $r$ <em>false</em>. Then</p>
<ol>
<li><p>$p → q$ is true and $r → ¬q$ is true (vacuously, since $r$ is false), so $(p → q) ∧ (r → ¬q)$ is true</p></li>
<li><p>$p ∧ r$ is false</p></li>
</ol>
<p>Therefore $((p → q) ∧ (r → ¬q)) → (p ∧ r)$ is false for this assignment, so it is not a tautology.</p>
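<p>A brute-force truth-table check (not part of the original answer) confirms that the formula is falsifiable; one counterexample is $p$ true, $q$ true, $r$ false:</p>

```python
from itertools import product

def implies(a, b):
    """Material implication: a → b is false only when a is true and b is false."""
    return (not a) or b

def formula(p, q, r):
    return implies(implies(p, q) and implies(r, not q), p and r)

# not a tautology: some assignment makes it false
assert not all(formula(p, q, r) for p, q, r in product([False, True], repeat=3))

# one explicit counterexample
assert formula(True, True, False) is False
```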
|
825,848 |
<p>I'm trying to show that the minimal polynomial of a linear transformation $T:V \to V$ over some field $k$ has the same irreducible factors as the characteristic polynomial of $T$. So if $m = {f_1}^{m_1} ... {f_n}^{m_n}$ then $\chi = {f_1}^{d_1} ... {f_n}^{d_n}$ with $f_i$ irreducible and $m_i \le d_i$.</p>
<p>Now I've managed to prove this using the primary decomposition theorem and then restricting $T$ to $ker({f_i}^{m_i})$ and then using the fact that the minimal polynomial must divide the characteristic polynomial (Cayley-Hamilton) and then the irreducibility of $f_i$ gives us the result.</p>
<p>However I would like to be able to prove this directly using facts about polynomials/fields without relying on the primary decomposition theorem for vector spaces. Is this fact about polynomials true in general?</p>
<p>We know that $m$ divides $\chi$ and so certainly $\chi = {f_1}^{m_1} ... {f_n}^{m_n} \times g$ but then how do we show that $g$ must have only $f_i$ as its factors? I'm guessing I need to use the fact that they share the same roots. And I'm also guessing that it depends on $k$, i.e. if $k$ is algebraically closed then it is easy because the polynomials split completely into linear factors.</p>
<p>Help is much appreciated,
Thanks</p>
|
nombre
| 246,859 |
<p>There is another proof using the properties of the determinant.</p>
<p><strong>Claim: $\chi$ divides $m^n$ in $k[X]$.</strong></p>
<p>To see this, fix a basis of $V$ and consider the matrix $M$ of $T$ in said basis. We see $\mathcal{M}_n(k),\mathcal{M}_n(k[X])$ as subrings of $\mathcal{M}_n(k(X))$ via the natural embeddings $k\rightarrow k[X] \rightarrow k(X)$. In this context $\chi = \det(X.I_n-M)$ is an ordinary determinant of $X.I_n-M\in \mathcal{M}_n(k(X))$.</p>
<p>Write $m=\sum \limits_{k=0}^d a_kX^k$. We have: $m(X.I_n)=m(X.I_n) - m(M)=\sum \limits_{k=0}^d a_k(X^k.I_n-M^k)= \sum \limits_{k=1}^d a_k(X^k.I_n-M^k)=(X.I_n-M)\sum \limits_{k=1}^da_k\sum \limits_{p=0}^{k-1}X^p.M^{k-1-p}$.</p>
<p>Let $B:=\sum \limits_{k=1}^da_k\sum \limits_{p=0}^{k-1}X^p.M^{k-1-p}$. Since $B\in \mathcal{M}_n(k[X])$ where $k[X]$ is a ring, we have $\det(B) \in k[X]$.</p>
<p>We then compute the determinant of those matrices:
$m^n=m^n.\det(I_n)=\det(m(X.I_n))=\det((X.I_n-M)B)=\det(X.I_n-M)\det(B)=\chi\det(B)$, hence the result.</p>
|
428,291 |
<p>What is the solution of the following system?</p>
<p>$$
\begin{align}
a \cdot e-b \cdot d & =\alpha \\
a \cdot f-c \cdot d & =\beta \\
b \cdot f-c \cdot e & =\gamma
\end{align}
$$</p>
<p>Where the unknowns $a,b,c,d,e,f$ are real numbers and $\alpha, \beta, \gamma$ are fixed real numbers.</p>
<p>I tried to solve it by giving it some geometric meaning, but failed.</p>
|
Robert Israel
| 8,508 |
<p>The resultant of the first two equations with respect to variable $a$ is
$bdf - cde + \alpha f - \beta e$. You can solve this and the third equation as
linear equations in $e,f$ (assuming $\alpha c - \beta b \ne 0$).</p>
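<p>To see that the resultant really does eliminate $a$, here is a quick numerical sanity check (illustrative, with arbitrarily chosen values; not part of the original answer):</p>

```python
# choose arbitrary values and derive alpha, beta from the first two equations;
# the resultant with respect to a must then vanish identically
for a, b, c, d, e, f in [(2, 3, 5, 7, 11, 13), (1, 4, 2, 3, 6, 5)]:
    alpha = a * e - b * d
    beta = a * f - c * d
    assert b * d * f - c * d * e + alpha * f - beta * e == 0
```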
|
2,340,606 |
<p>$$x^2-2(3m-1)x+2m+3=0$$
Find the sum of solutions. It says that the sum equals to $-1$. I just can't wrap my head around this? Any help? Thx</p>
|
Robert Z
| 299,698 |
<p>In a quadratic equation $ax^2+bx+c=0$ the sum of the solutions (if any) is equal to $-b/a$. In fact, if $x_1$ and $x_2$ are the solutions then
$$ax^2+bx+c=a(x-x_1)(x-x_2)=ax^2-a(x_1+x_2)x+a(x_1\cdot x_2)$$
and by comparing the coefficients we get $x_1+x_2=-b/a$.</p>
<p>In your case $x_1+x_2=2(3m-1)$. Setting $2(3m-1)=-1$ gives $m=1/6$; for that value the two (complex) solutions indeed sum to $-1$.</p>
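<p>A quick numeric confirmation for $m=1/6$ (a check sketch, not in the original answer), using the quadratic formula over the complex numbers:</p>

```python
import cmath

m = 1 / 6
b = -2 * (3 * m - 1)          # coefficient of x
c = 2 * m + 3                 # constant term
d = cmath.sqrt(b * b - 4 * c)
x1, x2 = (-b + d) / 2, (-b - d) / 2

assert abs(x1 + x2 - (-1)) < 1e-12    # sum of roots = -b/a = -1
assert abs(x1 * x2 - c) < 1e-12       # product of roots = c/a
```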
|
317,981 |
<p>Prove the following:</p>
<p>$$\lim_{n \to \infty} \displaystyle \int_0 ^{2\pi} \frac{\sin nx}{x^2 + n^2} dx = 0$$</p>
<p>How would I prove this? I know you have to show your steps, but I'm literally stuck on the first one, so I can't. </p>
|
Fly by Night
| 38,495 |
<p>We can come up with a simple bound for this integral $I$. Since $|\sin(nx)|\le 1$ and $x^2+n^2 \ge n^2 > 0$ on $[0,2\pi]$, we have
$$|I|=\left|\int_0^{2\pi} \frac{\sin (nx)}{x^2+n^2} \operatorname{d}\!x \right| \le \int_0^{2\pi} \left|\frac{\sin (nx)}{x^2+n^2}\right| \operatorname{d}\!x \le \int_0^{2\pi} \frac{1}{n^2} \operatorname{d}\!x = \frac{2\pi}{n^2}$$
Hence:
$$\lim_{n\to\infty}|I|\le\lim_{n\to\infty}\frac{2\pi}{n^2} = 0$$
Since $|I|\to 0$ as $n \to \infty$, it follows that $I \to 0$ as $n \to \infty$.</p>
|
314,416 |
<p>Part of what it means to be a functor between two categories is to have a map of morphisms e.g. $F$ sends $f: A \to B$ to $Ff: FA \to FB$.</p>
<p>Suppose $F$ is a functor from a category to itself, and that category has (some or all) exponential objects. $B^A$ intuitively corresponds to an object of morphisms $A \to B$, so I'd hope to be able to apply the morphism-mapping of the functor $F$ to get $FB^{FA}$. I can do this in $\mathbf{Set}$ in an obvious way, but I can't see how to frame that idea category-theoretically so that I might generalise it. So my questions are:</p>
<ul>
<li>Under what conditions can I "internalise" the morphism map of $F$ to get a morphism $B^A \to FB^{FA}$?</li>
<li>If that doesn't work, what if I replace $B^A$ with the hom-functor of a category enriched over itself, or some other internal notion of function?</li>
</ul>
|
user1551
| 1,551 |
<p>Read the Wiki page again. It says $M$ is normal if and only if $M=U\Lambda U^\ast$ for some diagonal matrix $\Lambda$ and some unitary matrix $U$. Note that $\Lambda$ may be <strong><em>non-real</em></strong>. However, in a SVD $M=U\Sigma V^\ast$, $\Sigma$ is a <strong><em>real</em></strong> diagonal matrix. So, the two decompositions are not in conflict with each other.</p>
<p>Actually, as pointed out in Manos' answer, if $U=V$ in the SVD of $M$, then $M$ is not just normal, but it must be positive semidefinite.</p>
|
13,922 |
<p>How to sum up this series :</p>
<p>$$2C_o + \frac{2^2}{2}C_1 + \frac{2^3}{3}C_2 + \cdots + \frac{2^{n+1}}{n+1}C_n$$
</p>
<p>Any hint that will lead me to the correct solution will be highly appreciated.</p>
<p>EDIT: Here $C_i = ^nC_i $</p>
|
Aryabhata
| 1,102 |
<p>For an elementary method which does not use calculus:</p>
<p>Notice that $\displaystyle \dfrac{{n \choose k}}{k+1} = \dfrac{1}{n+1} {n+1 \choose k+1}$
</p>
<p>Thus your sum is </p>
<p>$$\sum_{k=0}^{n} \dfrac{1}{n+1} {n+1 \choose k+1} 2^{k+1} = \dfrac{\sum_{k=0}^{n+1} {n+1 \choose k} 2^k -1}{n+1} = \dfrac{3^{n+1} - 1}{n+1}$$
</p>
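<p>The identity is easy to spot-check with exact rational arithmetic (a verification sketch, not part of the original answer):</p>

```python
from fractions import Fraction
from math import comb

# sum_{k=0}^{n} 2^(k+1)/(k+1) * C(n,k)  ==  (3^(n+1) - 1)/(n+1)
for n in range(1, 16):
    s = sum(Fraction(2 ** (k + 1), k + 1) * comb(n, k) for k in range(n + 1))
    assert s == Fraction(3 ** (n + 1) - 1, n + 1)
```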
|
13,922 |
<p>How to sum up this series :</p>
<p>$$2C_o + \frac{2^2}{2}C_1 + \frac{2^3}{3}C_2 + \cdots + \frac{2^{n+1}}{n+1}C_n$$
</p>
<p>Any hint that will lead me to the correct solution will be highly appreciated.</p>
<p>EDIT: Here $C_i = ^nC_i $</p>
|
Bill Dubuque
| 242 |
<p><strong>REMARK</strong> $\ $ The various approaches are all equivalent. Namely, suppose that we desire to prove without calculus the identity arising from integrating the binomial formula, viz. $$\rm (1 + x)^{n+1}\ =\ 1 + \sum_{k=0}^{n}\: \frac{n+1}{k+1} {n\choose k}\ x^{k+1}$$
</p>
<p>Comparing coefficients reduces it to the identity</p>
<p>$$\rm \quad\quad\ {n+1 \choose k+1}\ =\ \frac{n+1}{k+1} {n\choose k} $$
</p>
<p>which is precisely the identity employed in Moron's "calculus free" approach.</p>
|
1,711,673 |
<p>Why are the inverses of 2-cycles of the symmetric permutation group $S_3$ the elements themselves rather than their reverses?</p>
<p>E.g. why is $(1,2)^{-1}=(1,2)$ and not $(2,1)$?</p>
|
Jack D'Aurizio
| 44,121 |
<p>$$\frac{1-\cos^3 x}{x^2} = \frac{2\sin^2\frac{x}{2}}{x^2}\cdot\left(1+\cos x+\cos^2 x\right)=\frac{1}{2}\cdot\left(\frac{\sin(x/2)}{x/2}\right)^2\cdot \left(1+\cos x+\cos^2 x\right)$$</p>
<p>and now no factor is troublesome in a neighborhood of zero, since:
$$ \frac{\sin z}{z}=\sum_{n\geq 0}\frac{(-1)^n z^{2n}}{(2n+1)!} $$
is an entire function.</p>
|
1,711,673 |
<p>Why are the inverses of 2-cycles of the symmetric permutation group $S_3$ the elements themselves rather than their reverses?</p>
<p>E.g. why is $(1,2)^{-1}=(1,2)$ and not $(2,1)$?</p>
|
Clement C.
| 75,808 |
<p>To "remove" the cancellation <em>at $0$</em>, one can use any of the "usual suspects" (basic limits, L'Hôpital, you name it; I would go for Taylor expansions, to low order). For instance, from $\cos u = 1- \frac{u^2}{2} + o(u^2)$ and $(1+u)^3 = 1+3u+o(u)$ at $0$, you get
$$
\frac{1-\cos^3 x}{x^2} = \frac{1-(1-\frac{x^2}{2}+o(x^2))^3}{x^2} = \frac{1-(1-\frac{3x^2}{2}+o(x^2))}{x^2}= \frac{3}{2}+o(1)
$$
when $x\to 0$.</p>
<p>However, besides $0$ your expression <strong>will</strong> cancel at any $x=2\pi n$, for $n\in\mathbb{Z}\setminus\{0\}$, no matter what transformations you do. (And it is easy to see these are the only points.)</p>
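<p>Numerically, the value $\frac32$ at the origin is easy to observe, and the numerator indeed vanishes again at the points $x=2\pi n$ (a quick check, not in the original answer):</p>

```python
import math

# (1 - cos^3 x)/x^2 -> 3/2 as x -> 0
for x in (1e-2, 1e-3, 1e-4):
    val = (1 - math.cos(x) ** 3) / x ** 2
    assert abs(val - 1.5) < 1e-3

# and the numerator vanishes again at x = 2*pi*n, n != 0
assert abs(1 - math.cos(2 * math.pi * 3) ** 3) < 1e-12
```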
|
4,093,406 |
<p>Which of the equations have at least two real roots?
<span class="math-container">\begin{aligned}
x^4-5x^2-36 & = 0 & (1) \\
x^4-13x^2+36 & = 0 & (2) \\
4x^4-10x^2+25 & = 0 & (3)
\end{aligned}</span>
I wasn't able to notice something clever, so I solved each of the equations. The first one has <span class="math-container">$2$</span> real roots, the second one <span class="math-container">$4$</span> real roots and the last one does not have real roots. I am pretty sure that the idea behind the problem wasn't solving each of the equations. What can we note to help us solve it faster?</p>
|
acernine
| 331,233 |
<p>This is a case of a "hidden quadratic" in that your equations are all really quadratic equations in a different variable. To see this, write <span class="math-container">$u=x^2$</span>. Then for example your first equation becomes
<span class="math-container">$$u^2-5u-36=0.$$</span>
The temptation is then to look at the discriminant of this equation and conclude from there. But a solution of this equation gives us a value of <span class="math-container">$u=x^2$</span>, which will give two values of <span class="math-container">$x$</span> if <span class="math-container">$u>0$</span>, one (repeated) value if <span class="math-container">$u=0$</span> and no values if <span class="math-container">$u<0$</span>. So in fact we have to solve this quadratic for <span class="math-container">$u$</span> and go from there. In this case we have
<span class="math-container">$$u=\frac{5\pm\sqrt{25+144}}{2}=\frac{5\pm 13}{2}=9,-4.$$</span>
This gives two values of <span class="math-container">$u$</span>, but the case <span class="math-container">$u=x^2=-4$</span> has no real solutions for <span class="math-container">$x$</span> so we discard it. The case <span class="math-container">$u=x^2=9$</span> has two solutions <span class="math-container">$\pm 3$</span> for <span class="math-container">$x$</span> so overall there are two solutions.</p>
<p>Really this is most of the way to solving the equations, although we only need to know the sign of the values of <span class="math-container">$u$</span>, so for example we can see that
<span class="math-container">$$u=\frac{5\pm\sqrt{25+144}}{2}$$</span>
clearly gives one positive solution without explicitly calculating.</p>
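<p>The sign analysis above translates directly into a little root-counting routine (an illustrative sketch; the helper name is my own):</p>

```python
import math

def real_root_count(b, c):
    """Number of real solutions of x^4 + b*x^2 + c = 0, counted via u = x^2."""
    disc = b * b - 4 * c
    if disc < 0:
        return 0                      # no real u = x^2 at all
    roots_u = {(-b + math.sqrt(disc)) / 2, (-b - math.sqrt(disc)) / 2}
    count = 0
    for u in roots_u:                 # each positive u gives x = ±sqrt(u)
        if u > 0:
            count += 2
        elif u == 0:
            count += 1
    return count

assert real_root_count(-5, -36) == 2        # equation (1)
assert real_root_count(-13, 36) == 4        # equation (2)
assert real_root_count(-10/4, 25/4) == 0    # equation (3), divided through by 4
```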
|
2,549,834 |
<p>Ive been looking at this problem and trying to use examples online to try to solve it but I get stuck. </p>
<p>It says to use mathematical induction to prove
$$\frac{1}{1\cdot 4}+\frac{1}{4\cdot 7}+\frac{1}{7\cdot 10}+\cdots+\frac{1}{(3n-2)(3n+1)} = \frac{n}{3n+1}$$</p>
<p>I solve for n=1 and substitute k for n, but I don’t really know what to do after that step.</p>
<p>It would really help to see this problem laid out step by step. Thank you!!</p>
|
Andres Mejia
| 297,998 |
<p><strong>Hint:</strong> $\big||f(x)|-|f(x_0)|\big| \leq |f(x)-f(x_0)|$ by the <a href="https://math.stackexchange.com/questions/127372/reverse-triangle-inequality-proof">reverse triangle inequality</a>.</p>
|
1,761,189 |
<p>Suppose that voters arrive to a voting booth according to a Poisson process with rate $\lambda = 100$ voters per hour. The voters will vote for two candidates, candidate $1$ and candidate $2$ with equal probability $\left(\frac{1}{2}\right)$. I am assuming that we start the voting at time $t=0$ and it continues indefinitely. I am trying to find probability of candidate $1$ receiving $k$ votes in the first $4$ hours of voting, CONDITIONAL on $1000$ voters arriving during the first $10$ hours. My calculation keeps coming up as $\frac{1}{2}$ due to the symmetry involved. </p>
<p>My idea is to define $A_t =$ number of votes for candidate $1$ at time $t$, then find $P(A_4 = k | N(10) = 1000)$, where $N(t)$ is the number of voters who have arrived at time $t$. Could someone tell me if this the right approach? Thanks.</p>
|
MJ73550
| 331,483 |
<p>You have :</p>
<p>$$\mathbb{P}(A_4=k|N_{10}= 1000)=\frac{\mathbb{P}(A_4=k,N_{10}=1000)}{\mathbb{P}(N_{10}=1000)}$$</p>
<p>$$\mathbb{P}(N_{10}=1000)=\frac{(10\times 100)^{1000}}{1000!}e^{-10\times 100}$$</p>
<p>You have :
$$A_t = \sum_{i=1}^{N_t} \xi_i$$
where $\xi_i$ are independent and identically distributed $$\mathbb{P}(\xi_i=1)=\mathbb{P}(\xi_i=0)=\frac{1}{2}$$</p>
<p>$$\mathbb{P}(A_4=k,N_{10}=1000)=\sum_{k\leq n\leq 1000}\mathbb{P}(\sum_{i=1}^n\xi_i=k,\ N_4=n,\ N_{10}-N_4=1000-n)$$
using independence of the increments of $N$ (and independence of the $\xi_i$ from $N$) you get:
$$\mathbb{P}(A_4=k,N_{10}=1000)=\sum_{k\leq n\leq 1000}\mathbb{P}(\sum_{i=1}^n\xi_i=k)\mathbb{P}(N_4=n)\mathbb{P}(N_{10}-N_4=1000-n)$$
using that increments are stationary, $N_{t+h}-N_t\stackrel{\mathcal{L}}{=}N_h$, and that $\sum_{i=1}^n\xi_i$ is a Binomial $\mathcal{B}(n,\frac{1}{2})$ you get:</p>
<p>$$\mathbb{P}(A_4=k,N_{10}=1000)=\sum_{k\leq n\leq 1000}\binom{n}{k}\frac{1}{2^n}\frac{(4\times 100)^{n}}{n!}e^{-4\times 100}\frac{(6\times 100)^{1000-n}}{(1000-n)!}e^{-6\times 100}$$</p>
<p>and finally you get:</p>
<p>$$\mathbb{P}(A_4=k|N_{10}= 1000)=\frac{\sum_{k\leq n\leq 1000}\binom{n}{k}\frac{1}{2^n}\frac{(4\times 100)^{n}}{n!}\frac{(6\times 100)^{1000-n}}{(1000-n)!}e^{-10\times 100}}{\frac{(10\times 100)^{1000}}{1000!}e^{-10\times 100}}$$</p>
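<p>Conditional on $N_{10}=1000$, each arrival independently lands in the first $4$ hours with probability $4/10$ (arrival times are uniform given the count) and votes for candidate $1$ with probability $1/2$, so $A_4\mid N_{10}=1000$ should be Binomial$(1000,\frac15)$. Here is an exact check of the sum-over-$n$ decomposition (including the $\mathbb{P}(N_4=n)$ factor) against that Binomial law, scaled down to $N=10$ arrivals so the rational arithmetic stays light; this verification is not part of the original answer:</p>

```python
from fractions import Fraction as F
from math import comb, factorial

lam, t1, T, N = 100, 4, 10, 10        # N scaled down from 1000

def p_cond(k):
    """P(A_{t1} = k | N_T = N); the e^{-lam*...} factors cancel in the ratio."""
    num = sum(
        comb(n, k) * F(1, 2 ** n)
        * F((lam * t1) ** n, factorial(n))
        * F((lam * (T - t1)) ** (N - n), factorial(N - n))
        for n in range(k, N + 1)
    )
    den = F((lam * T) ** N, factorial(N))
    return num / den

p = F(t1, T) * F(1, 2)                # = 1/5: in first 4 hours AND votes 1
for k in range(N + 1):
    assert p_cond(k) == comb(N, k) * p ** k * (1 - p) ** (N - k)
```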
|
1,166,524 |
<p>I am looking for all irreducible polynomials of degree 5 in $\mathbb{F}_{17}$
which have the form $h(y) = y^5+C$.</p>
<p>I think there aren't any irreducible polynomials of this form because for every C
I can find an element of $\mathbb{F}_{17}$ as a root.</p>
<p>What would be the fastest way to find irreducible polynomials if the form were $h(y) = y^5+\dots$?</p>
|
Brent Kerby
| 218,224 |
<p>You are correct that there are no irreducible polynomials of the form $h(y)=y^5+C$ over $\Bbb F_{17}$ because every such polynomial has a root in $\Bbb F_{17}$. An easy way to see this is to notice that $\Bbb F_{17}^*$ is a cyclic group of order 16, which is relatively prime to 5, so that $y \mapsto y^5$ is an automorphism of $\Bbb F_{17}^*$. This means that $y \mapsto y^5$ is a bijection from $\Bbb F_{17}$ to itself, so for any $C$, there exists some $y\in\Bbb F_{17}$ such that $y^5=-C$.</p>
<p>Now, to find an irreducible polynomial of degree 5, a reasonably fast way to do it is to simply pick monic polynomials of degree 5 at random and test them until you find one that is irreducible; about $\frac15$th of them are irreducible (see below), so you probably won't have to try too many. </p>
<p>The question then becomes: how do we test a degree 5 polynomial $f$ for irreducibility? Well, if $f$ is reducible then it must have either a quadratic or a linear factor. Now, the polynomial $x^{17^2}-x=x^{289}-x$ is the product of all irreducible linear and quadratic polynomials over $\Bbb F_{17}$, so we can test for a linear or quadratic factor by checking if $\text{gcd}(f,x^{289}-x) \neq 1$. We can make this easier by recognizing that if we replace $x^{289}-x$ by its reduction mod $f$ then it does not change the gcd with $f$, and we can do this by computing the power $x^{289}$ mod $f$ using repeated squaring. If you're trying to do this by hand, this is still going to require quite a bit of calculation, but on a computer it is very fast and also scales nicely to larger finite fields and higher-degree polynomials (Look up the Ben-Or irreducibility test).</p>
<p>(Proof of the fact that about $\frac15$th of monic degree 5 polynomials are irreducible: $x^{17^5}-x$ factors over $\Bbb F_{17}$ as the product of all of the irreducible polynomials of degree 1 or 5; there are 17 irreducible polynomials of degree 1, so by considering the degree of $x^{17^5}-x$ we conclude that there are $\frac{17^5-17}{5}$ irreducible polynomials of degree 5. So the fraction of all monic degree 5 polynomials that are irreducible is $\frac1{17^5}\cdot \frac{17^5-17}{5} = \frac{1-17^{-4}}5$.)</p>
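<p>The gcd test described above is short to implement from scratch (a sketch of the idea, with dense coefficient lists mod $17$; the naive trial division at the end is included only to cross-check it). It relies on the fact that a degree-5 polynomial is irreducible iff it has no factor of degree $\le 2$:</p>

```python
P = 17  # work over F_17; polynomials are coefficient lists, lowest degree first

def trim(a):
    while a and a[-1] % P == 0:
        a.pop()
    return a

def psub(a, b):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % P
                 for i in range(n)])

def pmul(a, b):
    if not a or not b:
        return []
    r = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            r[i + j] = (r[i + j] + ca * cb) % P
    return trim(r)

def pmod(a, f):
    """Remainder of a on division by f (f nonzero)."""
    a, inv = a[:], pow(f[-1], -1, P)
    while len(a) >= len(f):
        c, s = a[-1] * inv % P, len(a) - len(f)
        for i, cf in enumerate(f):
            a[s + i] = (a[s + i] - c * cf) % P
        trim(a)
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    inv = pow(a[-1], -1, P) if a else 0
    return [c * inv % P for c in a]          # monic gcd

def powmod(base, e, f):
    """base^e mod f by repeated squaring."""
    result, base = [1], pmod(base[:], f)
    while e:
        if e & 1:
            result = pmod(pmul(result, base), f)
        base, e = pmod(pmul(base, base), f), e >> 1
    return result

def is_irreducible_deg5(f):
    """Monic degree-5 f over F_17 is irreducible iff it has no factor of
    degree <= 2, i.e. iff gcd(f, x^(17^2) - x) is constant."""
    assert len(f) == 6
    x = [0, 1]
    g = pgcd(f[:], psub(powmod(x, P * P, f), x))
    return len(g) <= 1

# cross-check against trial division by all monic polynomials of degree 1 or 2
def divides(d, f):
    return not pmod(f[:], d)

def brute_force(f):
    small = [[a0, 1] for a0 in range(P)] + \
            [[a0, a1, 1] for a0 in range(P) for a1 in range(P)]
    return not any(divides(d, f) for d in small)

for c in range(P):
    f = [c, 1, 0, 0, 0, 1]                           # x^5 + x + c
    assert is_irreducible_deg5(f) == brute_force(f)
assert not is_irreducible_deg5(pmul([16, 1], [3, 0, 1, 0, 1]))  # (x-1)*(quartic)
```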
|
1,133,581 |
<p>$\lim\limits_{(x,y) \to (0,0)} \frac{\log(1-x^2y^2)}{x^2y^2}$</p>
<p>in polar coordinates:</p>
<p>$\lim\limits_{r \to 0} \frac{\log(1-r^2\cos^2t \sin^2t)}{r^2\cos^2t \sin^2t}$</p>
<p>The first limit exists if the second is independent from $t$. But if $t$ is $k \pi/2$ that fraction does not exist. So I could argue that the limit does not exist. Instead it is wrong the limit is $-1$. Where is the mistake?</p>
<p><strong>EDIT</strong></p>
<p>I know I can use another way to reach the result. My question is strictly related to this method (polar coordinates). Since the second limit does not exist for some $t$ I should think that the multivar limit does not exist. Perhaps are there some $t$ that I shouldn't care about?</p>
|
user2820579
| 141,841 |
<p>In case you are still interested, <a href="https://rads.stackoverflow.com/amzn/click/com/1493927116" rel="nofollow noreferrer" rel="nofollow noreferrer">Abbott's book</a> has this problem as Example 3.3.7. For each point <span class="math-container">$x\in(0,1)$</span> consider the open interval <span class="math-container">$O_x = (x/2,1)$</span>. The infinite union of the <span class="math-container">$O_x$</span>'s forms an open cover of <span class="math-container">$(0,1)$</span>. Take any finite subcollection <span class="math-container">$O_{x_1},\dots,O_{x_n}$</span>, and set <span class="math-container">$x'=\operatorname{min}(x_1,x_2,\dots,x_n)$</span>. You can still find a number <span class="math-container">$0<y\leq x'/2$</span> not in that finite union, so no finite subcover exists.</p>
|
4,246,461 |
<p>I am an economics undergrad who wants to develop a firm grasp over mathematics, so will I be able to understand Bourbaki's books on my own without needing additional books? What about the books titled analysis released by Roger Godement?</p>
|
hm2020
| 858,083 |
<p><strong>Question:</strong> "What mathematical education should I have in order to study the books released under the pseudonym N. Bourbaki?"</p>
<p><strong>Answer:</strong> For the algebra and commutative algebra books I recommend the algebra book of Lang and the commutative algebra books of Atiyah-Macdonald and Matsumura. When you have studied these books you will easily understand the Bourbaki books. The Bourbaki books in algebra and commutative algebra are used by researchers in the field. They contain much material and are self contained.</p>
|
42,016 |
<p>I am looking for algorithms on how to find rational points on an <a href="http://en.wikipedia.org/wiki/Elliptic_curve">elliptic curve</a> $$y^2 = x^3 + a x + b$$ where $a$ and $b$ are integers. Any sort of ideas on how to proceed are welcome. For example, how to find solutions in which the numerators and denominators are bounded, or how to find solutions with a randomized algorithm. Anything better than brute force is interesting.</p>
<p>Background: a student worked on the <a href="http://en.wikipedia.org/wiki/Mordell-Weil_theorem">Mordell-Weil theorem</a> and illustrated it on some simple examples of elliptic curves. She looked for rational points by brute force (I really mean <em>brute</em>, by enumerating all possibilities and trying them). As a continuation of the project she is now interested in smarter algorithms for finding rational points. A cursory search on Math Reviews did not find much.</p>
|
Robin Chapman
| 4,213 |
<p>There is a whole industry devoted to this. The basic method is by
<em>descent</em>, which is a formalized version of the infinite descent proofs
of Fermat and Euler. It helps if there are rational 2-torsion points
but it's not essential. Chapter X in Silverman's <em>The Arithmetic of Elliptic
Curves</em> is called "Computing the Mordell-Weil group". It has lots of good
information, but maybe isn't so easy for a beginner due to its heavy use of
group cohomology.</p>
|
47,181 |
<p>I'll ask you to consider a situation wherein one has a series of edges for a graph, $(e_1, e_2, ..., e_N) \in E$, each with a specifiable length $(l_1, l_2, ..., l_N) \in L$, and the goal is to insure that the connected graph has a unique topology in 3-space. More specifically, I'm interested in insuring that some graph with the connectivity of a polytope can only be drawn as the skeleton of that particular polytope - that there should be no crossed edges or knots possible for the specified edge lengths. </p>
<p>To provide a physical example: </p>
<p>I use a group of rods to represent the edges of the desired graph (with pencils or the like) and color/symbol-encode their ends to represent vertex-assignments. I want to choose rod lengths in such a way that if I hand them to a naive-constructor (i.e. a 3-year old or a computer-controlled robot), and tell him/her/it to connect the ends of the rods together that have the same color or symbol, after waiting an arbitrarily long time there will only be a unique geometry satisfying the connectivity constraints of the graph I originally had in mind. </p>
<p>Is there a known computational complexity for this problem? Is there even a solution in the general case, or in the case where we apply the restriction that the specified polytope is convex?</p>
<p>I appreciate any feedback!</p>
<p>EDIT 1: The edges of the graph must be straight lines in 3-space, they cannot be bent to accommodate a particular edge length. </p>
<p>EDIT 2: Does the problem become easier if one assumes some physical diameter for the edges?</p>
|
sleepless in beantown
| 8,676 |
<p>You are asking about the <strong>embedding of a graph structure into 3-space</strong> $\mathbb{R}^3$. A graph structure by itself does not specify its embedding into $n$-space. In chemistry, these two different chiral instances of (tetrahedral) molecules below would be called stereo-isomers or enantiomers of each other.</p>
<p>In mechanical engineering, you'd be talking about building trusses and support structures, and a lot is known about the fact that quadrilaterals do not define a rigid structure. Quadrilaterals are easily sheared within a plane, and are not restricted to being coplanar, whereas triangular faces are at least limited to being coplanar.</p>
<p>Also, the presence of these constraints (on edge length and vertex-edge connectivity) does not mean that it would be impossible to build partial structures that meet the specified partial constraints but which cannot be built upon to complete the structure. In other words, a "naive constructor" could generate a partial assembly that is impossible to continue to the final desired construction. There could be <strong>dead-end</strong> partial constructions which could not be completed. This type of problem could partially be avoided by also imposing a temporal constraint, or a sequence constraint, e.g. first add this, then add that. </p>
<p>However, there are chirality issues in play which cannot be avoided.</p>
<p>If the "vertices" do not impose restrictions on relative angles, then there are no additional contraints beyond edge-length, and the graph-structure and edge lengths will not usually define a single embedding in 3-space, relative to transformations such as translation and rotation.</p>
<p>If by topology, you do not also mean chirality, you may be correct. If you allow chirality differences to mean something, then there is a simple counterexample in the tetrahedron.</p>
<p>Let this tetrahedron $T_1$ in $\mathbb{R}^3$ be defined with a base triangle $ABC$ with the points $A=(0,0,0), B=(0,1,0), C=(1,0,0)$ and the top of the tetrahedron at $D=(0,0,1)$. Let the edge lengths of the skeleton of this polytope be defined based on this baseline instantiation in 3-space, $|AB|=1, |AC|=1, |BC|=\sqrt{2}, |AD|=1, |BD|=\sqrt{2}, |CD|=\sqrt{2}$.</p>
<p>Now note that if $D$ is instead placed at $D_2=(0,0,-1)$, that the this alternate tetrahedron (let's call it $T_2=ABCD_2$), has the same edge lengths as $T_1$, but has the mirror chirality. If we labeled the vertices with $A,B,C,D$, it is not possible to rotate and translate $T_1$ into $T_2$, whereas it is possible to turn $T_1$ inside-out and transform it into $T_2$.</p>
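<p>The two tetrahedra are easy to check computationally — the same six edge lengths, but opposite orientation, as the sign of the scalar triple product records (a small verification sketch, not in the original answer):</p>

```python
import math

A, B, C = (0, 0, 0), (0, 1, 0), (1, 0, 0)
D1, D2 = (0, 0, 1), (0, 0, -1)

def edge_lengths(D):
    pts = [A, B, C, D]
    return sorted(math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])

def signed_volume(D):
    """Scalar triple product of AB, AC, AD; its sign records the chirality."""
    u = [B[i] - A[i] for i in range(3)]
    v = [C[i] - A[i] for i in range(3)]
    w = [D[i] - A[i] for i in range(3)]
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

assert edge_lengths(D1) == edge_lengths(D2)        # same constraint data...
assert signed_volume(D1) * signed_volume(D2) < 0   # ...opposite handedness
```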
<p>If you don't have all triangular faces, e.g. you use the edge lengths of a cube as the only constraints on a skeleton of a cube, you'll quickly see the problem that engineers found in constructing trusses with square faces: parallelograms are not necessarily "rigid" and can be sheared easily and still maintain the correct edge-lengths between vertices. Thus it's not possible to build a rigid skeleton with only square faces.</p>
<p>Thus, it depends on the axiomatic construction of your objects:</p>
<p>if you disallow disassembly and reconstruction, then the tetrahedra $T_1$ and $T_2$ are separate chiral mirror-images of each other. If you allow for disassembly and reconstruction, then $T_1$ and $T_2$ have the same topology. If you also define "topologically equivalent" to allow for elastic stretching (at least for transforming from one 3-d realization to another, then back to being solid and rigid while in a specific 3-d realization), then $T_1$ can be transformed into $T_2$ by pushing the vertex $D$ through the center of the face $ABC$ and onto the other side. If the faces actually have a physical planar object defining that face (like a kite has its tissue paper), then this sort of transform is disallowed and the mirror image tetrahedra $T_1$ and $T_2$ are different.</p>
<p>You can also visualize this by allowing the edges to be made of elastic springy rods with spring constants $k_i$. If the $k$'s are very large, then the springs are very stiff and the inversion will be impossible; if the $k$'s are small, the springs have a lot of give and it's easily possible to change between the two mirror-image configurations.</p>
|
2,470,062 |
<blockquote>
<p><span class="math-container">$$\sqrt{k-\sqrt{k+x}}-x = 0$$</span></p>
<p>Solve for <span class="math-container">$k$</span> in terms of <span class="math-container">$x$</span></p>
</blockquote>
<p>I got all the way to
<span class="math-container">$$x^{4}-2kx^{2}-x+k^{2}-x^{2}$$</span>
but could not factor afterwards. My teacher mentioned that there was grouping involved</p>
<p>Thanks Guys!</p>
<p>Edit 1 : The exact problem was to solve for <span class="math-container">$x$</span> given that <span class="math-container">$$\sqrt{4-\sqrt{4+x}}-x = 0$$</span>, with a hint to substitute $4$ with $k$.</p>
|
Blue
| 409 |
<p>(Too long for a comment.)</p>
<p>As @Leucippus has shown, the solutions are</p>
<p>$$k \quad=\quad x^2 + x + 1 \qquad\text{or}\qquad x^2 - x$$</p>
<p>Thus,
$$k + x \quad=\quad (x+1)^2 \qquad\text{or}\qquad x^2$$
so that
$$\sqrt{k+x} \quad=\quad |x+1| \qquad\text{or}\qquad |x|$$
Further,
$$k - \sqrt{k+x} \quad=\quad x^2+x+1 - |x+1| \qquad\text{or}\qquad x^2-x-|x|$$
For us to get the perfect squares that we <em>know</em> we'll need, we require, in the first case, that $|x+1| = x+1$ (that is, $x \geq -1$); in the second case, $|x| = -x$ (that is, $x\leq 0$). These cause the above expressions to reduce to $x^2$. From there ...
$$\sqrt{k- \sqrt{k+x}}-x = \sqrt{x^2} - x = |x| - x$$
For this to vanish, we must have $x \geq 0$. So, the previous restrictions were not enough; rather, we have
$$k = \begin{cases} x^2 + x + 1 &, \text{for}\;x \geq 0 \\
x^2 - x &, \text{for}\; x = 0\end{cases}$$</p>
<hr>
<p>Incidentally, the solutions tell us that we <em>could have</em> factored the expanded polynomial equation as
$$k^2 - 2 k x^2 + x^4 - k - x = 0 \qquad\to\qquad( k - x^2 - x - 1 )( k - x^2 + x ) = 0$$
I'm not seeing an "obvious" grouping that would've led me there. I believe I would've jumped right to the Quadratic Formula and allowed myself to be pleasantly surprised that the square root went away.</p>
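<p>Both solution branches are easy to confirm numerically (a quick check, not part of the original answer): $k=x^2+x+1$ satisfies the original equation for every $x\ge 0$, since $k+x=(x+1)^2$ and $k-(x+1)=x^2$, while $k=x^2-x$ works only at $x=0$:</p>

```python
import math

# branch k = x^2 + x + 1: valid for all x >= 0
for x in (0.0, 0.5, 2.0, 10.0):
    k = x * x + x + 1
    assert abs(math.sqrt(k - math.sqrt(k + x)) - x) < 1e-9

# branch k = x^2 - x: valid only at x = 0 ...
x = 0.0
k = x * x - x
assert math.sqrt(k - math.sqrt(k + x)) == x

# ... and it fails for x > 0, e.g. x = 2 gives k = 2 and sqrt(2 - sqrt(4)) = 0
x, k = 2.0, 2.0
assert math.sqrt(k - math.sqrt(k + x)) != x
```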
|
68,899 |
<p>I was rereading basic results on de Rham cohomology, and this led me inevitably to the fact that $H^q(X,\Omega^p)$ converges to $H^*(X)$ for any smooth proper variety (over any field). How does one view this spectral sequence "maturely" as a Grothendieck spectral sequence?</p>
|
Mariano Suárez-Álvarez
| 1,409 |
<p>You probably want one (the second one...) of the two hypercohomology spectral sequences which compute $\mathbb H^\bullet(X,\Omega^\bullet)$, the hypercohomology of the de Rham complex. A reference for this is Weibel's book.</p>
<p>I doubt you can view this as a Grothendieck spectral sequence, but it has sufficiently much <em>hyper</em> in it to be considered mature, I guess.</p>
|
1,358,574 |
<p>When I searched for the derivative of the Gamma function I got something of the form:</p>
<p>$$\Gamma'(x)=\Gamma(x) \psi(x)$$</p>
<p>But from the definition of the Digamma function to me it's like writing:</p>
<p>$$\Gamma'(x)=\Gamma'(x)$$</p>
<p>And this doesn't seem very useful to me (if I'm wrong feel free to explain to me why), so I'm wondering: is there any other form for the derivative(s) of the Gamma function? This function is defined by an integral, so I think there could be, but I'm not sure how to deal with this.</p>
|
Pedro
| 23,350 |
<p>A bit useless in light of Steven's answer, but nonetheless useful for future ideas. Recall that the Catalan numbers are equal to $\frac{1}{n+1}\binom {2n}n$ and count the number of sequences of length $2n$ of zeros and ones for which the number of ones is always at least that of zeros at any truncation of the sequence. Since there are $4^n$ binary sequences of length $2n$ it follows that $$\frac{1}{n+1}\leqslant 4^n \binom{2n}n^{-1}$$ so the series diverges by comparison to the harmonic series. In fact $$\frac{1}{4^n}\binom{2n}n \simeq \frac{1}{\sqrt{n\pi }}$$</p>
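<p>Both the comparison bound and the stated asymptotic are easy to sanity-check (not part of the original answer):</p>

```python
import math
from math import comb

# 1/(n+1) <= 4^n / C(2n, n), i.e. C(2n, n) <= (n+1) * 4^n
for n in range(1, 200):
    assert comb(2 * n, n) <= (n + 1) * 4 ** n

# C(2n, n) / 4^n ~ 1 / sqrt(pi * n)
n = 2000
ratio = comb(2 * n, n) / 4 ** n        # Python handles the huge integers exactly
assert abs(ratio * math.sqrt(math.pi * n) - 1) < 1e-3
```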
|
2,590,537 |
<p>Let $A$ be a proper subset of of $X$, $B$ be a proper subset of $Y$. If $X$ and $Y$ are connected, show that $(X\times Y)-(A\times B)$ is connected.</p>
<p>I have already seen the solution of this problem here on math.stackexchange, but my confusion is this.
Suppose $X$=(a,b) & $Y$=(c,d) and let $A$={m} & $B$={n} be singleton sets such that $A\subset X$ & $B\subset Y$ then $(X\times Y)-(A\times B)$= $[(a,m)\cup(m,b)]\times[(c,n)\cup(n,d)]$ = $[(a,m)\times(c,n)]\cup[(m,b)\times(n,d)]$
which implies that $(X\times Y)-(A\times B)$ is not connected. </p>
<p>Can someone explain me what I have done wrong & what is this question asking. </p>
|
Dan
| 500,478 |
<p>For each $n-1$-vector you can append either a $0$ or a $1$ as its $n$th-coordinate to create an $n$-vector. So $A(k)=2A(k-1)$. If $A(k-1)=2^{k-1}$, then the result follows.</p>
|
400,296 |
<p>There are n persons.</p>
<p>Each person draws k interior-disjoint squares.</p>
<p>I want to give each person a single square out of his chosen k, so that the n squares I give are interior-disjoint.</p>
<p>What is the minimum k (as a function of n) for which I can do this?</p>
<p>NOTES:</p>
<ul>
<li>For n=1, obviously k=1.</li>
<li>For n=2, obviously k must be more than 2, since with 2 squares per person, it is easy to think of situations where both squares of person 1 intersect both squares or person 2. It seems that k=3 is enough, but I couldn't prove this formally.</li>
<li>If we don't limit ourselves to squares, but allow general rectangles, then even for n=2, no k will be large enough, as it is possible that every rectangle of player 1 intersects every other rectangle of player 2. So, the sqauare limitation is important.</li>
</ul>
<p>EDIT: The problem has two versions: in one version, the squares are all axis-aligned. In the second version, the squares may be rotated. Solutions to any of these versions are welcome. </p>
<p>EDIT: Here is a possibly useful claim, relevant for the axis-aligned version:</p>
<p><strong>Claim 1</strong>: If two axis-aligned squares, A and B, intersect, then one of the following 3 options hold:</p>
<ul>
<li>At least 2 corners of A are covered by B, and B is as large or larger than A;</li>
<li>One corner of A is covered by B, and one corner of B is covered by A,</li>
<li>At least 2 corners of B are covered by A, and A is as large or larger than B.</li>
</ul>
<p>Thus, if A intersects B, then, out of the 8 corners of A and B, at most 6 corners remain uncovered.</p>
|
Kris Williams
| 38,143 |
<p>Here is my counter-example to the case of $n=2$ and $k=3$. Provided of course that the squares need not be the same size. You will have to trust me that all shapes are squares. However, we can see that each square overlaps all 3 of the squares of the other color.</p>
<p><img src="https://i.stack.imgur.com/p2kjF.png" alt="Image"></p>
|
2,070,574 |
<p>I had a question </p>
<p>$$\sum_{a=1}^n \frac{1}{a^2}=?$$</p>
<p>I had learned Newton's method of undetermined coefficients, but that doesn't work here because of the negative power.</p>
<p>Then I saw another question:</p>
<p>$$\sum_{a=1}^n \frac{1}{a(a+1)}$$</p>
<p>here we transformed it as</p>
<p>$$\sum_{a=1}^n (\frac{1}{a}-\frac{1}{a+1})$$</p>
<p>which on expanding gave </p>
<p>$$\frac{1}{1}-\frac{1}{2}+\frac{1}{2}-\frac{1}{3}+......+\frac{1}{n}-\frac{1}{n+1}$$</p>
<p>where each element cancels out except</p>
<p>$$1-\frac{1}{n+1}$$</p>
<p>which gives the formula</p>
<p>$$\sum_{a=1}^n \frac{1}{a(a+1)}=1-\frac{1}{n+1}=\frac{n}{n+1}$$</p>
<p>but I can't express $a^{-2}$ as two different factors, so I thought to use limits (calculus) as </p>
<p>$$\frac{1}{a^2}=\lim_{h\to 0}\frac{1}{a(a+h)}$$</p>
<p>but when I tried to break it then I got $h$ in denominator.</p>
<p>I may be wrong, so please help to calculate the summation.</p>
|
Marsan
| 392,650 |
<p>I am not aware of any closed formula for the finite partial sum... however, we can compute the limit! I think the easiest way to compute it is through trigonometric sums and estimates. Here is a sketch of the proof:</p>
<p>By using De Moivre's Formula, you can easily show that, for all real $t$ :</p>
<p>\begin{equation*}
\sin(2n+1)t = \sum_{k=0}^{n} \binom{2n+1}{2k+1}(-1)^k \sin^{2k+1}(t) \cos^{2(n-k)}(t)
\end{equation*}</p>
<p>Then dividing by $\sin^{2n+1}(t)$, when $t \notin \pi\mathbb{Z}$, you get </p>
<p>\begin{equation*}
\frac{\sin(2n+1)t}{\sin^{2n+1}(t)} = P_n(\cot^{2}(t)), \; \text{where} \: P_n=\sum_{k=0}^{n} \binom{2n+1}{2k+1}(-1)^k X^{n-k}
\end{equation*}</p>
<p>For $t \in (0, \frac{\pi}{2})$, the formula above holds and thus $P_n$ vanishes at the points $x_k=\cot^2(t_k)$ where $t_k= \frac{k\pi}{2n+1}$ for $k \in \{1, \cdots, n\}$. Since $P_n$ is of degree $n$, we have found all of its roots. By the relations between coefficients and roots, we can assert that:</p>
<p>\begin{equation*}
\sum_{k=1}^n x_k = -\frac{-\binom{2n+1}{3}}{\binom{2n+1}{1}} = \frac{n(2n-1)}{3}
\end{equation*}</p>
<p>Then, squaring and inverting the inequalities $0<\sin(t)\leq t \leq \tan(t)$ for $t \in(0,\frac{\pi}{2})$, we get:</p>
<p>\begin{equation*}
\cot^2(t) \leq \frac{1}{t^2} \leq 1 + \cot^2(t)
\end{equation*}</p>
<p>which yields, on taking $t=t_k$ and summing over $k$:</p>
<p>\begin{equation*}
\frac{n(2n-1)}{3} \leq \frac{(2n+1)^2}{\pi^2} \sum_{k=1}^n \frac{1}{k^2} \leq n + \frac{n(2n-1)}{3}
\end{equation*}</p>
<p>Thanks to the squeeze theorem, you may conclude that:</p>
<p>\begin{equation*}
\sum_{k=1}^{+ \infty} \frac{1}{k^2} = \frac{\pi^2}{6}
\end{equation*}</p>
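<p>The two-sided bound above, and the limit itself, can be checked numerically; a small Python sketch (the helper name is mine):</p>

```python
import math

def partial_sum(n):
    # sum_{k=1}^{n} 1/k^2
    return sum(1.0 / k ** 2 for k in range(1, n + 1))

# check n(2n-1)/3 <= (2n+1)^2/pi^2 * partial_sum(n) <= n + n(2n-1)/3
for n in (10, 100, 1000):
    lower = n * (2 * n - 1) / 3
    scaled = (2 * n + 1) ** 2 / math.pi ** 2 * partial_sum(n)
    assert lower <= scaled <= n + lower

# and the limit is pi^2 / 6 (the tail beyond n is about 1/n)
assert abs(partial_sum(10 ** 6) - math.pi ** 2 / 6) < 1e-5
```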
|
550,757 |
<p>I don't see how $(5, 1 - 8i)$ would generate the element $5$. I think that $5$ and $1 - 8i$ would have to have a common factor but I can't find any that divide both.</p>
|
TBrendle
| 97,380 |
<p>The prime factors of $5$ are $1+2i$ and $1-2i$, while the prime factors of $1-8i$ are $1+2i$ and $3+2i$ (up to units: $(1+2i)(3+2i)=-1+8i$). Therefore the GCD of $5$ and $1-8i$ is $1+2i$, or, in the language of ideals, we have $(5, 1-8i)=(1+2i)$.</p>
<p>More abstractly, you can prove that <em>any</em> ideal in the Gaussian integers must be principal, by using the Euclidean algorithm.</p>
<p>You can also use the Euclidean algorithm to find the GCD of the two Gaussian integers. First compute:</p>
<p>$$\frac{1-8i}{5} = \frac{1}{5} + \frac{-8}{5}i$$
and round each part to the nearest integer to get the quotient $-2i$.</p>
<p>So the first step gives
$$1-8i = -2i(5) + (1+2i).
$$
In the next step we find $5=(1-2i)(1+2i)+0$ so $1+2i$ is the GCD of $5$ and $1-8i$. This is probably the simplest way.</p>
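<p>The Euclidean algorithm described above is easy to mechanize; here is a minimal Python sketch using the built-in <code>complex</code> type (the rounding convention and the helper names are my own, and the result is only determined up to a unit $\pm 1, \pm i$):</p>

```python
def g_divmod(a, b):
    # Gaussian division: round each part of a/b to the nearest integer
    q = a / b
    qr = complex(round(q.real), round(q.imag))
    return qr, a - qr * b

def g_gcd(a, b):
    # Euclidean algorithm; the remainder's norm strictly decreases
    while b != 0:
        a, b = b, g_divmod(a, b)[1]
    return a

g = g_gcd(complex(5, 0), complex(1, -8))
# the GCD should be an associate of 1+2i, whose norm is 5
assert round(abs(g) ** 2) == 5
```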
|
92,492 |
<p>If $M$ is a combinatorial model category, it is well known to experts that there is a ''natural'' model structure on diagram categories $Hom(C,M)$, which is the <strong>projective model structure</strong>. The fibrations and weak equivalences are defined pointwise. </p>
<p>There is also the one called the <strong>injective model structure</strong>, where the cofibrations and weak equivalences are defined pointwise. </p>
<p>I would like to know if for a given a morphism $\alpha \in Arr(C)$, the evaluation at $\alpha$ can be a right Quillen functor with the <strong>injective model structures on each side</strong>:</p>
<p>$Ev_\alpha: Hom(C,M) \to M^2$ </p>
<p>Thanks ! </p>
<p>Edit: Here $M^2= Hom([0 \to 1], M)= Arr(M)$, sorry for the confusion.</p>
|
user22479
| 22,479 |
<p>This is really a comment on Igor Rivin's answer (I don't have enough rep. to comment, it seems), but one has to be attentive to whether or not the polynomial $p$ (not to be confused with the prime $p$...) is absolutely irreducible over the ground field, or at least whether its irreducible factors are absolutely irreducible. In general the answer is influenced by the degrees of the finite extensions of the ground field over which the various "geometric" irreducible factors of $p$ are defined. Lang and Weil would have known how to formulate this, though in modern terms it comes out from how the structure of the top-degree compactly supported etale cohomology is influenced by the "field of definition" (or field of constants in the function fields) of the various irreducible components.</p>
|
1,926,593 |
<p>I am currently taking a course in Discrete Math. The first part of our lesson this week is regarding sequences. I am stuck on formulas like the ones shown in the images I attached... I was hoping someone might be able to help me learn how to solve them. :)</p>
<p>Ps: What does it mean when <span class="math-container">$n-1$</span> is written below the function? Is it the inverse of <span class="math-container">$a^{n-1}$</span>?</p>
<p>I'm sorry if this is a dumb question haha. I've been studying every day for the past 3 weeks and my brain is officially exhausted. Thank-you so much for your time!</p>
<blockquote>
<p>Given <span class="math-container">$a_n = 3 a_{n-1} + 1$</span> and <span class="math-container">$a_0 = 2$</span>, compute <span class="math-container">$a_2$</span>.</p>
<p>Given the recurrence relation <span class="math-container">$a_n = -2a_{n-1}$</span> where <span class="math-container">$a_0 = 1$</span>, find <span class="math-container">$a_5$</span>.</p>
</blockquote>
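<p>Not a dumb question at all: the $n-1$ written below the symbol is a subscript, so $a_{n-1}$ just means the previous term of the sequence, not an inverse. Both quoted exercises can be answered by unrolling the recurrence; a Python sketch (the helper name is mine):</p>

```python
def iterate(step, a0, n):
    # apply the recurrence step n times, starting from a_0
    a = a0
    for _ in range(n):
        a = step(a)
    return a

a2 = iterate(lambda a: 3 * a + 1, 2, 2)   # a_n = 3 a_{n-1} + 1, a_0 = 2
a5 = iterate(lambda a: -2 * a, 1, 5)      # a_n = -2 a_{n-1},    a_0 = 1
assert a2 == 22 and a5 == -32
```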
|
Bernard
| 202,857 |
<p>An inductive proof rests entirely on <em>Pascal's formula</em>. Note the above formula can as well be written:
$$\sum_{i=m}^n \binom{i}{m} = \binom{n+1}{m+1}\quad\text{or}\quad \binom{n+1}{m+1}=\sum_{k=0}^{n-m} \binom{n-k}{m}$$
We'll prove by induction on $l$ that, for all $l\;$ ($0\le l<n-m$):
$$ \binom{n+1}{m+1}=\sum_{k=0}^{l}\binom{n-k}{m}+\binom{n-l}{m+1} \tag{1}$$</p>
<p>Indeed, if $l=0$, this is simply Pascal's formula.</p>
<p>For the inductive step, suppose formula $(1)$ is true for some $l$. We have
\begin{align}
\binom{n+1}{m+1}&=\binom{n}{m}+\binom{n-1}{m}+\dots+\binom{n-l}{m}+\binom{n-l}{m+1}\\
&=\binom{n}{m}+\binom{n-1}{m}+\dots+\binom{n-l}{m}+\binom{n-l-1}{m}+\binom{n-l-1}{m+1}\\
&=\sum_{k=0}^{l+1}\binom{n-k}{m}+\binom{n-l-1}{m+1}.
\end{align}</p>
<p>Setting $l=n-m-1$, we obtain
\begin{align}
\binom{n+1}{m+1}&=\binom{n}{m}+\binom{n-1}{m}+\dots+\binom{m+1}{m}+\binom{m+1}{m+1}\\
&=\binom{n}{m}+\binom{n-1}{m}+\dots+\binom{m+1}{m}+\binom{m}{m}.
\end{align}</p>
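<p>For readers who want a quick check of the hockey-stick identity before working through the induction, here is a brute-force Python verification (a sanity check, not part of the proof):</p>

```python
from math import comb

# hockey-stick identity: sum_{i=m}^{n} C(i, m) == C(n+1, m+1)
for n in range(0, 25):
    for m in range(0, n + 1):
        assert sum(comb(i, m) for i in range(m, n + 1)) == comb(n + 1, m + 1)
```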
|
41,000 |
<p>The motivation for this question is producing a distribution that produces the gender and age of an individual when the distribution of ages depends on gender.</p>
<p>Suppose I want a distribution which, when RandomVariate is applied to it, produces two values. The first value it produces is either 0 or 1 and the second value it produces is some real value between 0 and 65. But the second value is correlated with the first. By way of example, it might be the case that when the first value is 0, the second value tends to be lower than the when the first value is 1. I want the end product to be a distribution, so that one could apply things such as CDF to it.</p>
<p>Obviously, the gender distribution can be modeled as a BernoulliDistribution. And, as it happens I know how to model the age distribution conditioned on the person being a male and the age distribution conditioned on the person being a female. But how do I put them together? I've looked at CopulaDistribution and, while conceptually it might be appropriate, I don't see the kind of kernel I would need.</p>
<p>I have the feeling I am being stupid and that there is some obvious and elegant representation of this situation, but, at the moment, it escapes me. Help appreciated.</p>
|
ciao
| 11,467 |
<p>I don't think there's any <em>direct</em> way to do this currently with the MM statistics/probability features - it would be nice if WRI extended things like <code>ProductDistribution</code> et al. to allow conditioning/other logic. In any case, here's one way you could accomplish the end result:</p>
<pre><code>dist = EmpiricalDistribution[{#,
If[# == 0, RandomReal[50], RandomReal[65]]} & /@
RandomVariate[BernoulliDistribution[.45], 100]]
RandomVariate[dist]
(* {0, 18.2127} *)
</code></pre>
<p>So in this trivialized example, I created a distribution where 45% are gender 1, 55% gender 0, and the 'age' is tacked on based on the gender. In your case, you'd replace the <code>RandomReal</code> with <code>RandomVariate</code> on your actual age vs gender distribution.</p>
<p>You can use pretty much all of the MM statistical/probability functions on the distribution:</p>
<pre><code>Probability[gender == 1 && age > 40, {gender, age} \[Distributed] dist]
(* 0.08 *)
</code></pre>
<p>You might also sniff around at <code>ParameterMixtureDistribution</code>, it could be of use here.</p>
|
4,220,751 |
<p>In the example in the following slide, we follow the highlighted formula. With regard to the highlight, I'm confused about why the number is greater <strong>or equal</strong> to <span class="math-container">$2^{n-1}$</span>, while it only needs to be less than <span class="math-container">$2^n$</span> (not less than <strong>or equal to</strong> <span class="math-container">$2^n$</span>)?
<a href="https://i.stack.imgur.com/YZ9He.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YZ9He.png" alt="enter image description here" /></a></p>
|
Mr.Infinity
| 930,710 |
<p>Here is a hint: Notice that the matrix <span class="math-container">$A$</span> can be decomposed into the sum of an identity matrix and a nilpotent, that is <span class="math-container">$A=N+I$</span>, where <span class="math-container">$N^{\dim V}=N^3=0$</span>.</p>
<p>Therefore we get <span class="math-container">$M=\sum_{i=1}^{20}(N+I)^{i}$</span>. Apply the binomial theorem to each term and you will derive the desired result.</p>
<p>Besides, <span class="math-container">$N=A-I$</span> is not invertible, so please give up any attempt to invert it.</p>
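<p>To see the hint in action, here is a numerical sketch with a concrete <span class="math-container">$3\times 3$</span> nilpotent <span class="math-container">$N$</span> (my own example, assuming NumPy is available): since <span class="math-container">$N^3=0$</span>, the binomial expansion of <span class="math-container">$(N+I)^i$</span> truncates after the <span class="math-container">$N^2$</span> term.</p>

```python
import numpy as np
from math import comb

N = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])          # nilpotent: N @ N @ N is the zero matrix
I = np.eye(3, dtype=int)
A = I + N

# M = sum_{i=1}^{20} A^i, computed directly ...
M = sum(np.linalg.matrix_power(A, i) for i in range(1, 21))
# ... and via the truncated binomial theorem (N^3 = 0 kills higher terms)
M_binom = sum(I + i * N + comb(i, 2) * (N @ N) for i in range(1, 21))
assert (M == M_binom).all()
```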
|
1,648,354 |
<p>We have been taught that for linear functions, usually expressed in the form $y=mx+b$, when given an input of 0, 1, 2, 3, etc., you can get from one output to the next by adding some constant (in this case, 1).
$$
\begin{array}{c|l}
\text{Input} & \text{Output} \\
\hline
0 & 1\\
1 & 2\\
2 & 3
\end{array}
$$</p>
<p>But with exponential functions (which are usually expressed in the form $y=a\cdot b^x $), instead of adding a constant, you multiply by a constant. (In this case, 2)
$$
\begin{array}{c|l}
\text{Input} & \text{Output} \\
\hline
0 & 1\\
1 & 2\\
2 & 4\\
3 & 8
\end{array}
$$
But... we can keep going, can't we?
$$
\begin{array}{c|l}
\text{Input} & \text{Output} \\
\hline
0 & 1\\
1 & 2\\
2 & 4\\
3 & 16\\
4 & 256
\end{array}
$$
In this example, you square the last output to get to the next one. I cannot find a 'general form' for such an equation, nor can I find much information online. Is there a name for these functions? Is there a general form for them? And can we keep going even past these 'super exponential functions'?</p>
|
Simply Beautiful Art
| 272,831 |
<p>Your last table is incorrect, and as others have mentioned, tetration would be the next operation.</p>
<p>Let's try to figure out what these kinds of operations are and how they work, to begin with.</p>
<p>We will be considering $x+2$, $x\cdot2$, and $x^2$.</p>
<p>What is $x+2$? In simple terms, it means $x+1+1$ where we have $2$ ones.</p>
<p>What is $x\cdot2$? In terms of addition, it is $x+x$, where we add $x$ to itself $2$ times.</p>
<p>What is $x^2$? In terms of multiplication, it is $x\cdot x$, where we multiply $x$ to itself $2$ times.</p>
<p>From here, there is an operation $H_n$ which $n$ tells us how high up this tree of operations it is. For addition, $n=-1$. For multiplication, $n=0$. For exponentiation, $n=1$. We will be using base $2$.</p>
<p>But what about $H_2$? Well, as you can imagine, it is a repetition of exponentiation:</p>
<p>$$H_2(x)=x^x$$</p>
<p>It is simply $x$ raised to itself an amount of times.</p>
<p>Let us define our function a little bit better. $H_n(x,b)$, where $n$ is the 'level,' $x$ is the input, and $b$ is the base.</p>
<p>We now see that $H_2(x,2)=x^x$.</p>
<p>Also, we see that $H_2(x,3)=x^{x^x}$, and we can do this for any whole number for the base. This is called tetration.</p>
<p>What is beyond that is pentation:</p>
<p>$$H_3(x,2)=H_2(x,x)$$</p>
<p>It is difficult to express such a thing with normal notation, and even with my notation. However, I can define the general operation:</p>
<p>$$H_n(x,b)=H_{n-1}(x,H_{n-1}(x,H_{n-1}(x,H_{n-1}(x,\dots1))))$$</p>
<p>where we apply the operation $H_{n-1}$ a total of $b$ times. Very complicated!</p>
|
3,497,919 |
<p><span class="math-container">$$\sum_{k=0}^{\Big\lfloor \frac{(n-1)}{2} \Big\rfloor} (-1)^k {n+1 \choose k} {2n-2k-1 \choose n} = \frac{n(n+1)}{2} $$</span></p>
<p>So I feel like <span class="math-container">$(-1)^k$</span> is almost designed for the inclusion-exclusion principle. And the left-hand side looks like some sort of pairing, so I am interested in some combinatorics proof like below-linked question. But using a generating function is always helpful. </p>
<p>[EDIT] now I am probably equally, if not more interested in a generating function solution now that I see below answer that completely makes sense to me, but with some issues in signs..</p>
<p><a href="https://math.stackexchange.com/questions/897948/evaluation-of-a-sum-of-1k-n-choose-k-2n-2k-choose-n1">Evaluation of a sum of $(-1)^{k} {n \choose k} {2n-2k \choose n+1}$</a></p>
|
Mike Earnest
| 177,399 |
<p>Both sides of the equation are the answer to the following question:</p>
<blockquote>
<p>How many sequences of <span class="math-container">$n$</span> zeroes and <span class="math-container">$n-1$</span> ones are there where no two ones are adjacent?</p>
</blockquote>
<p>To see why <span class="math-container">$n(n+1)/2$</span> answers this question, note that the <span class="math-container">$n-1$</span> ones divide each valid sequence into <span class="math-container">$n$</span> sections; a section before the first one (possibly empty), a section after the last one (possibly empty), and <span class="math-container">$n-2$</span> sections between each adjacent pair. Each of the middle <span class="math-container">$n-2$</span> sections must have at least one zero. After placing these <span class="math-container">$n-2$</span> zeroes, there are two remaining zeroes to place. The number of ways to place two identical objects into <span class="math-container">$n$</span> sections is <span class="math-container">$\binom{n}2+n=n(n+1)/2$</span>, by conditioning on whether the zeroes are placed into the same section.</p>
<p>Now, we need to explain why number of such sequences is also counted by the left hand side of the equation. Instead of using the ones to divide each sequence into <span class="math-container">$n$</span> sections, we use the zeroes to divide each sequence into <span class="math-container">$n+1$</span> sections. For each <span class="math-container">$1\le i\le n+1$</span>, let <span class="math-container">$A_i$</span> be the set of sequences where the <span class="math-container">$i^{th}$</span> section contains two or more ones. We want to count
<span class="math-container">$$
|A_1^c\cap A_2^c\cap \dots \cap A_{n+1}^c|
$$</span>
which by the principle of inclusion exclusion is
<span class="math-container">$$
\binom{2n-1}{n}-\sum_{k\ge 1}(-1)^{k+1}\sum_{1\le i_1<i_2<\dots<i_k\le n+1}|A_{i_1}\cap A_{i_2}\cap \dots \cap A_{i_k}|
$$</span>
Each intersection <span class="math-container">$A_{i_1}\cap A_{i_2}\cap \dots \cap A_{i_k}$</span> is easy to count: first, choose an arbitrary sequence of <span class="math-container">$n$</span> zeroes and <span class="math-container">$n-2k-1$</span> ones, then add an extra two ones to each of sections <span class="math-container">$i_1,i_2,\dots,i_k$</span>. This shows the number of such sequences in this intersection is <span class="math-container">$\binom{2n-1-2k}n$</span>. Since there are <span class="math-container">$\binom{n+1}k$</span> intersections contributing this count, the above simplifies exactly to the LHS of the equation we want to prove.</p>
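<p>Both counts are small enough to verify by machine; a brute-force Python check of the counting claim and of the alternating sum (the helper names are mine):</p>

```python
from math import comb
from itertools import combinations

def no_adjacent(n):
    # sequences of n zeroes and n-1 ones with no two ones adjacent,
    # counted by choosing the positions of the ones
    L = 2 * n - 1
    return sum(1 for ones in combinations(range(L), n - 1)
               if all(b - a > 1 for a, b in zip(ones, ones[1:])))

def alternating_sum(n):
    return sum((-1) ** k * comb(n + 1, k) * comb(2 * n - 2 * k - 1, n)
               for k in range((n - 1) // 2 + 1))

for n in range(1, 9):
    assert alternating_sum(n) == no_adjacent(n) == n * (n + 1) // 2
```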
|
1,703,332 |
<p>Let $f \in L^2 \backslash L^\infty$ on some bounded domain. Let $h$ be a function such that $h(p) \to \frac 32$ as $p \to \infty$.</p>
<p>Consider $$I_p = \left(\int |f|^{h(p)}\right)^{\frac{p}{h(p)}}.$$</p>
<p>Is it true that $\lim_{p \to \infty} I_p \leq C$ exist for some finite constant $C$?</p>
<p>I don't think so, since there is a $p$ in the exponent. </p>
|
Community
| -1 |
<p>It turns out the problem is (a lot) easier than I thought.</p>
<p>Attach numbers to the positions: from $1$ to $n^2$, left to right.</p>
<p>On the first column, delete all the files except the first, which is useful (namely the files $n+1$, $2n+1, \ldots, n^2-n+1$).</p>
<p>Now take the useful file on row $k$, namely $n(k-1)+k$.</p>
<p>Exactly $k-1$ of the deleted files are on smaller positions than this file, so after the operation this file will be at position:</p>
<p>$$n(k-1)+k-(k-1)=n(k-1)+1$$</p>
<p>This means that all files will be on the first column, and then we only need to delete the remaining rectangle and we're done.</p>
|
2,013,382 |
<blockquote>
<p>If $AX=\lambda X$ for every eigenvector $X\in \mathbb{R}^n$ then show that $A=\lambda I_n$.</p>
</blockquote>
<p>Given, $AX=\lambda X$ for every $X\in \mathbb{R}^n$.</p>
<p>Then $(A-\lambda I_n)X=0$ is true for every $X\in \mathbb{R}^n$.</p>
<p>But how to show $A=\lambda I_n$? Please help.</p>
|
Nick
| 359,825 |
<p><strong>HINT:</strong> $(A-\lambda I)x=0$ for every $x\in \Bbb R^n$ implies that $A-\lambda I$ maps every $x\in \Bbb R^n$ to $0$, which is exactly the <a href="http://mathworld.wolfram.com/ZeroMap.html" rel="nofollow noreferrer">definition</a> of a zero transformation.</p>
|
3,447,872 |
<p>Subtracting both equations
<span class="math-container">$$x(a-b)+b-a=0$$</span>
<span class="math-container">$$(x-1)(a-b)=0$$</span>
Since <span class="math-container">$a\not = b$</span>
<span class="math-container">$$x=1$$</span>
Substitution of x gives
<span class="math-container">$$1+a+b=0$$</span> which is contradictory to the question. What did I do wrong?</p>
<p><em>There is a solution for this question that proves the required condition satisfactorily, but I want to know the reason behind this contradiction.</em></p>
|
farruhota
| 425,072 |
<p>Let <span class="math-container">$x_1,x_2$</span> be the roots of the first equation and <span class="math-container">$y_1,y_2$</span> of the second.</p>
<p>By Vieta's formulas:
<span class="math-container">$$\begin{cases}x_1+x_2=-a \\ x_1x_2=b\end{cases} \Rightarrow x_1^2+x_2^2=a^2-2b\\
\begin{cases}y_1+y_2=-b \\ y_1y_2=a\end{cases} \Rightarrow y_1^2+y_2^2=b^2-2a$$</span>
The difference of roots is equal:
<span class="math-container">$$x_1-x_2=y_1-y_2 \Rightarrow \\
x_1^2+x_2^2-2x_1x_2=y_1^2+y_2^2-2y_1y_2 \Rightarrow \\
a^2-4b=b^2-4a \Rightarrow \\
(a-b)(a+b)=-4(a-b) \Rightarrow \\
a+b=-4.$$</span>
Note: <span class="math-container">$a-b\ne 0$</span>.</p>
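<p>A quick numerical check of the conclusion: whenever <span class="math-container">$a+b=-4$</span>, the two discriminants, and hence the squared root differences <span class="math-container">$(x_1-x_2)^2=a^2-4b$</span> and <span class="math-container">$(y_1-y_2)^2=b^2-4a$</span>, agree. A Python sketch (illustrative, not a proof):</p>

```python
import random

for _ in range(1000):
    a = random.uniform(-10.0, 10.0)
    b = -4.0 - a                       # impose a + b = -4
    # (x1 - x2)^2 = a^2 - 4b for x^2 + ax + b, and (y1 - y2)^2 = b^2 - 4a
    assert abs((a * a - 4 * b) - (b * b - 4 * a)) < 1e-9
```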
|
3,301,387 |
<p>I don't understand what it is about complex numbers that allows this to be true.</p>
<p>I mean, if I root both sides I end up with Z=Z+1. Why is this a bad first step when dealing with complex numbers?</p>
|
Community
| -1 |
<p>From</p>
<p><span class="math-container">$$(z+1)^6=z^6$$</span> you draw (taking the sixth root)</p>
<p><span class="math-container">$$z+1=\omega^k z$$</span> where <span class="math-container">$\omega$</span> is a primitive sixth root of unity, and <span class="math-container">$k=0,1,\cdots 5$</span>.</p>
<p>Then for <span class="math-container">$k\ne0$</span>*,</p>
<p><span class="math-container">$$z=\frac1{\omega^k-1}$$</span> are the five solutions. With <span class="math-container">$\omega^k=-1$</span>, the solution is real, <span class="math-container">$z=-\dfrac12$</span>.</p>
<hr>
<p>*With <span class="math-container">$k=0$</span>, <span class="math-container">$z+1=z$</span> is impossible.</p>
|
3,301,387 |
<p>I don't understand what it is about complex numbers that allows this to be true.</p>
<p>I mean, if I root both sides I end up with Z=Z+1. Why is this a bad first step when dealing with complex numbers?</p>
|
Nurator
| 685,412 |
<p>Try to understand it with real numbers first.
<span class="math-container">$$
y^2=x^2
$$</span>
does NOT mean
<span class="math-container">$$
y=x
$$</span></p>
<p>It means
<span class="math-container">$$
y=\pm x
$$</span>
instead.
With that in mind, try to understand that on the complex unit circle there are even more possibilities than just plus or minus, and thus such an equation can have solutions. In fact, there is a fundamental theorem which states that over the complex numbers, EVERY polynomial of degree <span class="math-container">$n$</span> has exactly <span class="math-container">$n$</span> roots, counted with multiplicity.</p>
|
3,301,387 |
<p>I don't understand what it is about complex numbers that allows this to be true.</p>
<p>I mean, if I root both sides I end up with Z=Z+1. Why is this a bad first step when dealing with complex numbers?</p>
|
Michael Rozenberg
| 190,319 |
<p><span class="math-container">$$(z+1)^6-z^6=((z+1)^2-z^2)((z+1)^4+(z+1)^2z^2+z^4)=$$</span>
<span class="math-container">$$=(2z+1)((z+1)^4+2(z+1)^2z^2+z^4-(z+1)^2z^2)=$$</span>
<span class="math-container">$$=(2z+1)((z+1)^2+z^2-(z+1)z)((z+1)^2+z^2+(z+1)z)=$$</span>
<span class="math-container">$$=(2z+1)(z^2+z+1)(3z^2+3z+1).$$</span>
Id est, we need to solve
<span class="math-container">$$(2z+1)(z^2+z+1)(3z^2+3z+1)=0.$$</span>
Can you end it now?</p>
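<p>A numerical cross-check of the factorization (assuming NumPy is available; variable names are mine): collect the roots of the three factors and verify that each satisfies the original equation. A pleasant by-product: every solution has real part <span class="math-container">$-\frac12$</span>, i.e. it satisfies <span class="math-container">$|z+1|=|z|$</span>.</p>

```python
import numpy as np

# roots of the factors (2z+1), (z^2+z+1), (3z^2+3z+1)
roots = np.concatenate([[-0.5 + 0j], np.roots([1, 1, 1]), np.roots([3, 3, 1])])
assert len(roots) == 5
for z in roots:
    assert abs((z + 1) ** 6 - z ** 6) < 1e-9   # each solves (z+1)^6 = z^6
assert np.allclose(roots.real, -0.5)            # all on the line Re z = -1/2
```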
|
1,377,927 |
<p>Prove using mathematical induction that
$(x^{2n} - y^{2n})$ is divisible by $(x+y)$.</p>
<p><strong>Step 1:</strong> Proving that the equation is true for $n=1 $</p>
<p>$(x^{2\cdot 1} - y^{2\cdot 1})$ is divisible by $(x+y)$ </p>
<p><strong>Step 2:</strong> Taking $n=k$</p>
<p>$(x^{2k} - y^{2k})$ is divisible by $(x+y)$</p>
<p><strong>Step 3:</strong> proving that the above equation is also true for $(k+1)$</p>
<p>$(x^{2k+2} - y^{2k+2})$ is divisible by $(x+y)$.</p>
<p>Can anyone assist me what would be the next step? Thank You in advance!</p>
|
bryan anthony
| 966,357 |
<p>When <span class="math-container">$n=1$</span>,</p>
<p><span class="math-container">$$x^{2n} - y^{2n} =x^2-y^2=(x+y)(x-y)$$</span></p>
<p>which is divisible by <span class="math-container">$(x+y)$</span>. Assume true for <span class="math-container">$n=k$</span>. Then <span class="math-container">$x^{2k}-y^{2k}$</span> is divisible by <span class="math-container">$x+y$</span>.
Putting <span class="math-container">$n=k+1$</span>,
<span class="math-container">$$x^{2n}-y^{2n}=x^{2k+2}-y^{2k+2}=\left(x^{k+1}-y^{k+1}\right) \left(x^{k+1}+y^{k+1}\right)$$</span></p>
<p>When <span class="math-container">$k$</span> is an odd number, <span class="math-container">$x+y$</span> is a factor of <span class="math-container">$x^{k+1}-y^{k+1}$</span>.
Similarly, when <span class="math-container">$k$</span> is an even number, <span class="math-container">$x+y$</span> is a factor of <span class="math-container">$x^{k+1} + y^{k+1}$</span>.
Hence <span class="math-container">$x^{2n}-y^{2n}$</span> is divisible by <span class="math-container">$x+y$</span> for integers <span class="math-container">$n>0$</span>.</p>
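<p>Before (or after) the induction, the divisibility claim is easy to sanity-check over the integers; a small Python sketch (random spot checks, not a proof):</p>

```python
import random

for _ in range(500):
    x = random.randint(-50, 50)
    y = random.randint(-50, 50)
    if x + y == 0:
        continue                      # avoid dividing by zero
    for n in range(1, 6):
        # x = -y is a root of x^(2n) - y^(2n), so (x + y) divides it
        assert (x ** (2 * n) - y ** (2 * n)) % (x + y) == 0
```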
|
2,411,081 |
<p>How can I show (with one of the tests for convergence , <strong>not by solving</strong>) that the integral $$\int _{1}^\infty\frac{\ln^5(x)}{x^2}dx$$ converges?</p>
|
Nosrati
| 108,128 |
<p>For $u\geq 1$ we know
$$\ln u=\int_1^u\dfrac{1}{t}dt\leq\int_1^u\dfrac{t^\alpha}{t}dt=\dfrac{u^\alpha-1}{\alpha}<\dfrac{u^\alpha}{\alpha}$$
for all $\alpha>0$, so with $\alpha=\dfrac{1}{10}$ we get $\ln^5(x)\leq 10^5\,x^{1/2}$ and hence
$$\int _{1}^\infty\frac{\ln^5(x)}{x^2}dx \leqslant 10^5\int _{1}^\infty\frac{1}{x^\frac32}dx<\infty$$</p>
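<p>As a side note, the substitution <span class="math-container">$x=e^t$</span> turns the integral into <span class="math-container">$\int_0^\infty t^5 e^{-t}\,dt=\Gamma(6)=120$</span>, which a crude trapezoid rule confirms; a Python sketch (the cutoff and step size are my choices):</p>

```python
import math

# x = e^t transforms ∫_1^∞ ln(x)^5 / x^2 dx into ∫_0^∞ t^5 e^(-t) dt = 5! = 120
f = lambda t: t ** 5 * math.exp(-t)
T, steps = 80.0, 200_000                # cutoff and resolution
h = T / steps
approx = h * (0.5 * (f(0.0) + f(T)) + sum(f(i * h) for i in range(1, steps)))
assert abs(approx - 120.0) < 1e-3

# the comparison used above: ln(x)^5 <= 10^5 * sqrt(x) for x >= 1
assert all(math.log(x) ** 5 <= 1e5 * math.sqrt(x) for x in (1.5, 10.0, 1e6, 1e12))
```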
|
422,318 |
<p>If $A$ is abelian group and $B$ is a subgroup of $A$, $B$ is normal subgroup of $A$.</p>
<p>Is it true that $B \times A/B \cong A$?</p>
<p>I ask because I was watching an online lecture from a course in abstract algebra at Harvard extension school. </p>
<p>And the lecturer (whose name is Peter) was taking about vector spaces and said that if $V$ is a vector space and $W$ is a subspace, then $V/W \times W \cong V$.</p>
<p>So the question which I thought of is: "is this true for all Abelian groups?" </p>
<p>Also, is there a less restricted condition on groups which will make this property hold? </p>
|
DonAntonio
| 31,254 |
<p>Take $\,A=C_4=$ the cyclic group of order $\,4\,$ , and take $\,B=C_2\,$ , so is</p>
<p>$$C_4\cong C_2\times C_4/C_2=C_2\times C_2\;\;?$$</p>
|
422,318 |
<p>If $A$ is abelian group and $B$ is a subgroup of $A$, $B$ is normal subgroup of $A$.</p>
<p>Is it true that $B \times A/B \cong A$?</p>
<p>I ask because I was watching an online lecture from a course in abstract algebra at Harvard extension school. </p>
<p>And the lecturer (whose name is Peter) was taking about vector spaces and said that if $V$ is a vector space and $W$ is a subspace, then $V/W \times W \cong V$.</p>
<p>So the question which I thought of is: "is this true for all Abelian groups?" </p>
<p>Also, is there a less restricted condition on groups which will make this property hold? </p>
|
amWhy
| 9,003 |
<p>As suggested by DonAntonio:</p>
<p>We need only take $A =\mathbb Z_4$, $B = \mathbb Z_2$. Then $A/B = \mathbb Z_4/\mathbb Z_2 \cong \mathbb Z_2 = B.\;$ Clearly, $$\mathbb Z_4 \not\cong \mathbb Z_2 \times \mathbb Z_2 = B \times A/B $$</p>
<p>The same "conjecture" would fail for $A = \mathbb Z_9$ and $B = \mathbb Z_3$, for the same reason</p>
<p>Indeed, what we can say is that <strong>if</strong> $A$ is a finite abelian group, then your conjecture will hold <em>provided</em> there is no prime $p$ such that $p^2$ divides $\,|A|$.</p>
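<p>The counterexamples can be confirmed by comparing multisets of element orders, which any isomorphism must preserve; a short Python sketch (the helper names are mine):</p>

```python
from math import gcd

def orders_cyclic(n):
    # multiset of element orders in Z_n
    return sorted(n // gcd(n, k) for k in range(n))

def orders_product(n, m):
    # element orders in Z_n x Z_m are lcm's of the componentwise orders
    lcm = lambda a, b: a * b // gcd(a, b)
    return sorted(lcm(n // gcd(n, a), m // gcd(m, b))
                  for a in range(n) for b in range(m))

assert orders_cyclic(4) != orders_product(2, 2)   # Z_4 is not Z_2 x Z_2
assert orders_cyclic(9) != orders_product(3, 3)   # Z_9 is not Z_3 x Z_3
assert orders_cyclic(6) == orders_product(2, 3)   # squarefree case: Z_6 = Z_2 x Z_3
```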
|