qid (int64) | question (large_string) | author (large_string) | author_id (int64) | answer (large_string) |
---|---|---|---|---|
108,277 |
<p>If Mochizuki's proof of abc is correct, why would this provide a new proof of FLT?</p>
<p>Edit: In proof of asymptotic FLT, does Mochizuki claim a specific value of n and if so what is this value?</p>
|
Joël
| 9,317 |
<p>The two answers of J.H.S and Igor Rivin don't really answer the question: they prove FLT
only for an exponent $n$ large enough.
Since Mochizuki's version of ABC is effective, one could certainly find an effective bound for $n$, and maybe prove the remaining $n$ by hand. But as they stand, they are incomplete.</p>
<p>Here is a short proof that Mochizuki implies Fermat. If the proof is correct, then since it is also incorrect (cf. the second answer to [this question][1] by Vesselin Dimitrov), it trivially implies Fermat's last theorem as well as its contrary.</p>
<p>I guess I should make my point explicit: it is premature to ask precise questions
about the consequences of Mochizuki's proof. Vague philosophical questions, such as the one
given in the link above, have a problem too: they are vague. Let us at least wait for the MO community to accept or reject Vesselin Dimitrov's very interesting answer, which
moreover is exactly about the effective version of the ABC conjecture needed to answer the OP's question, before asking such questions.</p>
<p>[1] <a href="https://mathoverflow.net/questions/106560/philosophy-behind-mochizukis-work-on-the-abc-conjecture">Philosophy behind Mochizuki's work on the ABC conjecture</a></p>
|
803,608 |
<p>I believe that the statement is true, and this is my argument:</p>
<p>Since there exists an element $(1-\sqrt{2})/(1-(\sqrt{2})^2)\in\mathbb{Q}[\sqrt{2}]$ whose product with $1+\sqrt{2}$ gives $1$ (the multiplicative identity), $1+\sqrt{2}$ is a unit.</p>
<p>A <em>unit</em> is an element $a\in R$ such that $ab=1$ for some $b\in R$, where $b$ is known as the multiplicative inverse of $a$.</p>
<p>Is my argument reasonable enough? And if there is/are any other possible argument, it would be really helpful to know.</p>
|
Thomas
| 26,188 |
<p>As noted in the comments, you have a field, so all non-zero elements are units. </p>
<p>But since you asked: to show that $1+\sqrt{2}$ is a unit, you just have to find an element $b$ such that $(1+\sqrt{2})b = 1$. And you can do
$$
(1 + \sqrt{2})(-1+\sqrt{2}) = -1 - \sqrt{2} + \sqrt{2} + 2 = 1.
$$
And that is it. You don't have to say anything else. It just has to be clear that the $b$ is indeed an element of $\mathbb{Q}[\sqrt{2}]$. And since $\mathbb{Q}[\sqrt{2}] = \{a + b\sqrt{2}: a,b\in \mathbb{Q}\}$, it is very clear that $-1 + \sqrt{2} \in \mathbb{Q}[\sqrt{2}]$.</p>
|
3,355,215 |
<blockquote>
<p>For <span class="math-container">$x$</span> and <span class="math-container">$k$</span> real numbers, for what values of <span class="math-container">$k$</span> will the graphs of <span class="math-container">$f(x)=-2\sqrt{x+1}$</span> and <span class="math-container">$g(x)=\sqrt{x-2}+k$</span> intersect?</p>
</blockquote>
<p>I tried to make an equation of them, but I’m stuck with the two variables and I couldn’t solve it. Much appreciation.</p>
<p>We didn’t do calculus yet..</p>
|
Dr. Sonnhard Graubner
| 175,066 |
<p>Hint: You must solve the equation
<span class="math-container">$$\sqrt{x-2}+k=-2\sqrt{x+1}$$</span> for <span class="math-container">$$x\geq 2$$</span> Writing this equation in the form
<span class="math-container">$$-k-\sqrt{x-2}=2\sqrt{x+1}$$</span> then it must be
<span class="math-container">$$k^2+2\geq x$$</span> and you can square it.
After squaring once we get
<span class="math-container">$$2k\sqrt{x-2}=3x+6-k^2$$</span>
Squaring again we get
<span class="math-container">$$-k^4+10 k^2 x+4 k^2-9 x^2-36 x-36=0$$</span>
Solving this we get
<span class="math-container">$$k\leq -2 \sqrt{3}\land x=\frac{1}{9} \left(5
k^2-18\right)-\frac{4}{9} \sqrt{k^4-9 k^2}$$</span></p>
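<p>A quick numerical spot check of the final formula (a sketch in plain Python; the particular value of <span class="math-container">$k$</span> below is my arbitrary choice among the admissible <span class="math-container">$k\leq -2\sqrt{3}$</span>):</p>
<pre><code>from math import sqrt

# Spot-check the claimed intersection point for one admissible k <= -2*sqrt(3).
k = -4
x = (5*k**2 - 18)/9 - (4/9)*sqrt(k**4 - 9*k**2)
print(x >= 2)                           # True
print(sqrt(x - 2) + k, -2*sqrt(x + 1))  # both approximately -3.5695
</code></pre>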
|
438,818 |
<p>I am looking for all two consecutive integers $A$ and $A+1$, which can be represented as sums of two squares
$A=a^2+b^2$ and $A+1=c^2+d^2$, $a,b,c,d>0$.</p>
|
Tamref Relue
| 447,449 |
<p>Take a member of the modular group, i.e. a matrix with integer entries and determinant one. Let's call it M={{a,b},{c,d}}. There are plenty of such matrices, in fact factorization of any two consecutive integers is a good source of these beasts <em>(it is a kind of poetic justice that any two consecutive integers will give you your special consecutive integers - or half integers)</em>.</p>
<p>For any such matrix consider these two integers (or half integers)</p>
<p>$X = (\frac{a+d}{2})^2+(\frac{c-b}{2})^2$<br>
$Y = (\frac{a-d}{2})^2+(\frac{c+b}{2})^2$</p>
<p>Elements of the modular group come in two flavors, they have either one even number among its entries or two. The latter case will give you the desired integers. The fact that they are consecutive follows from the det(M)=1 condition <em>(the former case will give you some interesting solutions with half integers, which you may donate at the Salvation Army)</em>.</p>
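<p>A brute-force sketch of this construction in Python (my reading of the recipe above; the search range and names are arbitrary):</p>
<pre><code>from itertools import product

# Search small integer matrices [[a, b], [c, d]] with det = 1 whose entries
# have the right parities, and read off the consecutive pair (Y, X) with X = Y + 1.
pairs = set()
for a, b, c, d in product(range(-6, 7), repeat=4):
    if a*d - b*c == 1 and (a + d) % 2 == 0 and (b + c) % 2 == 0:
        X = ((a + d)//2)**2 + ((c - b)//2)**2   # X - Y = ad - bc = 1
        Y = ((a - d)//2)**2 + ((c + b)//2)**2
        if Y > 0:
            pairs.add((Y, X))
print(sorted(pairs)[:8])   # pairs such as (4, 5) and (9, 10), each a sum of two squares
</code></pre>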
|
911,584 |
<p><em>Disclaimer: This thread is a record of thoughts.</em></p>
<p><strong>Discussion</strong>
Given a compact set.</p>
<blockquote>
<p>Do mere neighborhood covers admit finite subcovers?
$$C\subseteq\bigcup_{i\in I}N_i\implies C\subseteq N_{i_1}\cup\ldots\cup N_{i_n}$$
<em>(The idea is that neighborhoods are in some sense fat.)</em></p>
</blockquote>
<p><strong>Application</strong></p>
<p>Given a locally compact space.</p>
<p>Every compact set has a compact neighborhood base:
$$C\subseteq U:\quad N\subseteq U\quad(C\subseteq N^°)$$
<em>(The above would give clues how to prove this.)</em></p>
|
almagest
| 172,006 |
<p>@Stones Yes, but the table is not quite right. If there are three or more 1s or 2s, then it does not matter how many 6s there are. So N5, the number of ways of getting five 1s or 2s, is 32. Similarly, N4, the number of ways of getting four, is $5\times 2^4\times 4=320$, and N3, the number of ways of getting three, is $10\times 2^3\times 4^2=1280$. If we get just one 1/2 then we cannot get any 6s, so N1 is $5\times 2\times 3^4=810$.</p>
<p>That leaves the trickier case of two 1s/2s. If there are no 6s, then we have N2a as $10\times 2^2\times 3^3=1080$. If there is one 6, then we have N2b as $10\times 3\times 2^2\times 1\times 3^2=1080$. Adding those up, we get 4602, so the probability is 767/1296 = 0.5918 approx.</p>
<p>[The coefficients 1, 5, 10 are just the binomial coefficients, because we are looking at the number of ways of picking 5, 4, 3 things from 5. The extra 3 for N2b comes because there are 3 ways of choosing which of the three results not a 1 or 2 is the 6.]</p>
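<p>My reading of the event being counted here is "the number of 1s and 2s strictly exceeds the number of 6s" among five dice. Under that assumption, a brute-force check in plain Python reproduces the count 4602 and the probability 767/1296:</p>
<pre><code>from fractions import Fraction
from itertools import product

# Count 5-dice outcomes where #(1s and 2s) > #(6s).
favourable = sum(
    1
    for roll in product(range(1, 7), repeat=5)
    if sum(d in (1, 2) for d in roll) > sum(d == 6 for d in roll)
)
print(favourable, Fraction(favourable, 6**5))  # 4602 767/1296
</code></pre>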
|
20,802 |
<p>Look at the following example:</p>
<p>Which picture has four apples?</p>
<p>A<a href="https://i.stack.imgur.com/Tpm46.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Tpm46.png" alt="enter image description here" /></a></p>
<hr />
<p>B <a href="https://i.stack.imgur.com/AOv29.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AOv29.png" alt="enter image description here" /></a></p>
<hr />
<p>C <a href="https://i.stack.imgur.com/lZNmQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lZNmQ.png" alt="enter image description here" /></a></p>
<hr />
<p>D <a href="https://i.stack.imgur.com/BWqpH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BWqpH.png" alt="enter image description here" /></a></p>
<p>B is the expected answer but should not the correct answer be BCD? Technically if a set has <strong>exactly</strong> <span class="math-container">$m$</span> elements, then it has <span class="math-container">$k$</span> elements if <span class="math-container">$k\leq m$</span>. This is also how we talk in everyday language:</p>
<blockquote>
<p>"Do you have three dollars?"
"Yes."</p>
</blockquote>
<p>The second speaker is not indicating he has exactly three dollars. He simply indicates that he has <strong>at least</strong> three dollars.</p>
<p>So I am wondering if we are teaching children correct logic here. Shouldn't the original question be rephrased as "which picture has <strong>exactly</strong> four apples"?</p>
|
Graham
| 6,456 |
<h3>If you allowed this, how do you grade answers?</h3>
<p>Suppose the same maths test said "John has 2 apples and Lucy has 3 apples. How many apples do they have in total?"</p>
<p>By your logic, the child could say "1" and be entirely correct. If you have 5 apples and someone asks you "do you have an apple?", the answer of course is "yes". So clearly your proposal fails, because your logic is not internally consistent.</p>
<p>It is important to remember that what we say casually in everyday use is <em><strong>NOT</strong></em> mathematically sound, nor even necessarily logically or factually sound. Technically, if we ask "which set has 4 members?" (or perhaps more accurately, "which set has an ordinal of 4?"), then a set which has 5 members does <em><strong>NOT</strong></em> meet the question. For an even better example of how everyday usage does not match mathematical usage, look up the differences between Boolean AND/OR and how we might actually use "and" and "or" in conversation.</p>
|
20,802 |
<p>Look at the following example:</p>
<p>Which picture has four apples?</p>
<p>A<a href="https://i.stack.imgur.com/Tpm46.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Tpm46.png" alt="enter image description here" /></a></p>
<hr />
<p>B <a href="https://i.stack.imgur.com/AOv29.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AOv29.png" alt="enter image description here" /></a></p>
<hr />
<p>C <a href="https://i.stack.imgur.com/lZNmQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lZNmQ.png" alt="enter image description here" /></a></p>
<hr />
<p>D <a href="https://i.stack.imgur.com/BWqpH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BWqpH.png" alt="enter image description here" /></a></p>
<p>B is the expected answer but should not the correct answer be BCD? Technically if a set has <strong>exactly</strong> <span class="math-container">$m$</span> elements, then it has <span class="math-container">$k$</span> elements if <span class="math-container">$k\leq m$</span>. This is also how we talk in everyday language:</p>
<blockquote>
<p>"Do you have three dollars?"
"Yes."</p>
</blockquote>
<p>The second speaker is not indicating he has exactly three dollars. He simply indicates that he has <strong>at least</strong> three dollars.</p>
<p>So I am wondering if we are teaching children correct logic here. Shouldn't the original question be rephrased as "which picture has <strong>exactly</strong> four apples"?</p>
|
Nardya
| 15,952 |
<p>I showed this question to my three-year-old son. His response - because he counted the apples one by one in each picture, passing "4" each time - was B, C and D. Hence, we need to take into account how children arrive at their conclusion, since they do not apply formal logic. The thought process is very different from the abstract approach a programmer might take when deciding whether his boolean expression is correctly formulated. I have worked with kids taking ability score tests, and sometimes even very strictly standardised question sets with pictures, where the question itself was worded correctly, elicit unusual answers. But many children, if prompted, give a very compelling answer. "Odd one out" questions are especially problematic, and the question here is a variation of that.
It does not matter whether the formal logic is correct, linguistically or mathematically, if children cannot apply the rules in a conscious way. But of course we want a specific answer - at least for grading - and not to find out exactly how the child's analytical capabilities work. To limit the risk of such unusual reasoning, it is best to remove ambiguities, albeit without introducing logical loops that frame other questions in a more complicated way.</p>
<p>"Which picture shows exactly 4 apples" is the most accurate phrasing, and it avoids having to put "at least" into other questions to keep the logic consistent (see the "how many children do you have" example given in another reply).</p>
|
129,709 |
<p>Say we have a continuous function $u(x,y) : \mathbb{R}^2 \rightarrow \mathbb{R}$.</p>
<p>I have seen several textbooks that make the following assertion:</p>
<blockquote>
<p>The length/area element of the zero level set of $u$ is given by
$\lvert\nabla H\left(u\right)\rvert = \delta(u)\lvert\lvert\nabla u\rvert\rvert$, where $H\left(u\right)$ is the Heaviside step function, $\delta(u)= \partial H(u) / \partial u $
is the Dirac delta function.</p>
<p>We can measure the length of the zero level set as $
\int\int \lvert \nabla H(u(x,y))\rvert\, dx\, dy = \int\int \delta (u(x,y))\lvert \nabla u (x,y)\rvert\, dx\, dy$</p>
</blockquote>
<p>I fail to see this. Why is the length (or/and area) element of the zero level set of $u$ given by
$\lvert\nabla H\left(u\right)\rvert $ or $\delta(u)\lvert\lvert\nabla u\rvert\rvert$?</p>
<p>Some papers and textbooks that make this assertion:</p>
<ul>
<li>Zhao, H.K. et al., 1996. <a href="http://www.math.lsa.umich.edu/~psmereka/LEVELSET/LSPAPERS/zzosherzhao.pdf" rel="nofollow">A Variational Level Set Approach to Multiphase Motion</a> (available online). <em>Journal of Computational Physics</em>, 127(1), p.179-195 (page 2, Equations 2.2a and 2.2b)</li>
<li><p>Geometric Partial Differential Equations and Image Analysis. Guillermo Sapiro. Cambridge University Press 2001 (e.g. page 92).</p></li>
<li><p>Image Processing and Analysis. Variational, PDE, Wavelet, and stochastic methods. Tony Chan and Jianhong Shen. SIAM 2005 (e.g. page 46)</p></li>
</ul>
|
Charlie
| 31,893 |
<p>I wanted to remark that the answer above is only true when $\phi$ is a distance function, that is to say $||\nabla \phi|| = 1$ almost everywhere. In the heuristic argument in the accepted answer, the following equality
$$1/2N = |\mathbf{n}(s)| h = h = \textrm{"the distance to $\partial \Omega$"}$$ </p>
<p>is precisely true when $|\mathbf n(s)| = 1$. As noted in Christian's original comments, the scaling of $\phi$ will affect the so-called length element. </p>
<p>I, too, have noticed that there is a lack of justification for this definition in the literature, but the tutorial on Chan-Vese segmentation by Pascal Getreuer mentions this distance-function stipulation. I know that for the Chan-Vese model, imposing that $\phi$ be a distance function is often overlooked because it comes at a computational cost and slows convergence to "reasonable" answers.</p>
<p>Interestingly, there was a lot of research on the "reinitialization" of level set functions, so that an arbitrary level set function could be transformed, away from its interface (zero level set), into a distance function. I believe this was originally done here:</p>
<p>Sussman, Mark, Peter Smereka, and Stanley Osher. "A level set approach for computing solutions to incompressible two-phase flow." Journal of Computational physics 114.1 (1994): 146-159.</p>
<p>I found some sample code for it here: <a href="http://www.math.lsa.umich.edu/~psmereka/LEVELSET/LSCODES/reint2d.m" rel="nofollow">http://www.math.lsa.umich.edu/~psmereka/LEVELSET/LSCODES/reint2d.m</a></p>
<p>I have no idea how much in practice the approximate length element differs from a better guess using a distance function.</p>
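<p>As a numerical sanity check of the formula being discussed (a sketch assuming numpy, using a smoothed delta of the kind common in Chan-Vese style implementations; this is my own illustration, not code from the cited papers), evaluating <span class="math-container">$\iint \delta_\epsilon(u)\,\lVert\nabla u\rVert\,dx\,dy$</span> for the signed distance function of a circle of radius <span class="math-container">$r$</span> recovers the circumference <span class="math-container">$2\pi r$</span>:</p>
<pre><code>import numpy as np

# Signed distance function of a circle of radius r, sampled on a grid.
r, eps, h = 1.0, 0.05, 0.01
x = np.arange(-2, 2, h)
X, Y = np.meshgrid(x, x)
u = np.sqrt(X**2 + Y**2) - r             # ||grad u|| = 1 away from the origin

delta = (eps / np.pi) / (eps**2 + u**2)  # smoothed Dirac delta
uy, ux = np.gradient(u, h)               # central differences
grad_norm = np.sqrt(ux**2 + uy**2)

length = np.sum(delta * grad_norm) * h**2
print(length, 2 * np.pi * r)             # both close to 6.283
</code></pre>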
|
546,239 |
<p>My Math teacher and I disagree about the proper method of solving the question <em>In how many ways can four students be chosen from a group of 12 students?</em></p>
<p>The question comes straight out of a Math revision sheet from a Math book distributed under the national curriculum. The options it gives for answers are:</p>
<ol>
<li>12</li>
<li>48</li>
<li>495</li>
<li>11880</li>
<li>40320</li>
</ol>
<p>As we are currently learning Permutations and Combinations, my interpretation above is that it is asking for a <strong>Combination without repetition</strong>, or $\frac{(n+r-1)!}{r!(n-1)!}$, which gives you the number of combinations without repetition (as you cannot pick the same student twice). Now my teacher argues that the answer the book provides is correct. The book's answer simply says to use $^{n}C_{r}$ or $\frac{n!}{r!(n-r)!}$.</p>
<p>What is the correct method of answering this? The book states <em>3. 495</em> is the answer.</p>
|
ncmathsadist
| 4,154 |
<p>As asked, the question is seeking to know the number of subsets of size 4 in a set of size 12. A subsequent accounting for the order in which they are chosen is a change in the nature of the question. So there are
$${12\choose 4}$$
ways to do this. Your interpretation is correct. </p>
<p>Were order important, you would be asked the number of ordered subsets of size four, or you might be asked to name them something like president, vice-president, secretary and treasurer. This labeling can be done to each subset of size 4 freely, so it boosts the count by a factor of $4! = 24.$ No such secondary labeling is present in the question. </p>
<p>Being asked to choose "a group (set) of size 4" implies sampling without replacement. </p>
<p>In a word, the book is on rock-solid ground.</p>
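<p>A one-line check of the two counts, assuming Python 3.8+ where <code>math.comb</code> and <code>math.perm</code> exist:</p>
<pre><code>from math import comb, perm

print(comb(12, 4))  # 495   -- unordered choice of 4 students from 12 (the book's reading)
print(perm(12, 4))  # 11880 -- ordered choice, i.e. 495 * 4!
</code></pre>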
|
2,194,762 |
<p>I have to find all possible values of <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span>, and <span class="math-container">$d$</span> for which the <span class="math-container">$2 \times 2$</span> matrix
<span class="math-container">$\begin{pmatrix}a & b \\ c & d \end{pmatrix}$</span> is semisimple. I know that a matrix <span class="math-container">$S$</span> is semisimple if there is a nonsingular matrix <span class="math-container">$P$</span> such that <span class="math-container">$P^{-1}SP=A$</span> is diagonal. How can I use this to find all the values <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span> and <span class="math-container">$d$</span>?</p>
|
Travis Willse
| 155,629 |
<p>In the setting that the underlying field is algebraically closed (which we henceforth assume), a matrix is semisimple iff it is diagonalizable.</p>
<p>If the eigenvalues of a matrix are (pairwise) distinct then the matrix is diagonalizable. If the eigenvalues of a <span class="math-container">$2 \times 2$</span> matrix coincide (say, they are both <span class="math-container">$\lambda$</span>) and it is diagonalizable, it has the form <span class="math-container">$P^{-1} (\lambda I) P = \lambda I$</span>, i.e., it is already a multiple of the identity matrix.</p>
<p>If we denote the matrix by <span class="math-container">$$A := \pmatrix{a&b\\c&d},$$</span> its characteristic polynomial is <span class="math-container">$p_A(t) = t^2 - (\operatorname{tr} A) t + \det A$</span>, so the eigenvalues of <span class="math-container">$A$</span> (roots of <span class="math-container">$p_A$</span>) coincide iff the discriminant <span class="math-container">$$\Delta = (\operatorname{tr} A)^2 - 4 \det A = (a - d)^2 + 4 b c$$</span> vanishes.</p>
<p>So, <span class="math-container">$A$</span> is diagonalizable iff</p>
<ul>
<li><span class="math-container">$(a - d)^2 + 4 b c \neq 0$</span>, or</li>
<li><span class="math-container">$a = d$</span> and <span class="math-container">$b = c = 0$</span>.</li>
</ul>
<p><strong>Remark</strong> Conversely, we can parametrize the nondiagonalizable <span class="math-container">$2 \times 2$</span> matrices: Any such matrix is similar to a <span class="math-container">$2 \times 2$</span> Jordan block <span class="math-container">$J_2(\lambda) := \pmatrix{\lambda&1\\\cdot&\lambda}$</span>, i.e., it is equal to <span class="math-container">$QJ_2(\lambda)Q^{-1}$</span> for some (invertible) <span class="math-container">$Q = \pmatrix{p&q\\r&s}$</span>. Conjugating by <span class="math-container">$\mu Q$</span> and <span class="math-container">$Q$</span> yield the same matrix, so we may as well assume that <span class="math-container">$\det Q = ps-qr = 1$</span>, yielding
<span class="math-container">$$\pmatrix{\lambda - pr & p^2 \\ -r^2 & \lambda + pr} .$$</span></p>
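<p>A small sympy check of the criterion (my own illustration, assuming sympy is available):</p>
<pre><code>import sympy as sp

a, b, c, d = 1, 2, 3, 4
A = sp.Matrix([[a, b], [c, d]])
print((a - d)**2 + 4*b*c != 0, A.is_diagonalizable())  # True True

J = sp.Matrix([[5, 1], [0, 5]])   # repeated eigenvalue but not a multiple of I
print(J.is_diagonalizable())      # False
</code></pre>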
|
79,782 |
<p>Calculate $17^{14} \pmod{71}$</p>
<p>By Fermat's little theorem:<br>
$17^{70} \equiv 1 \pmod{71}$<br>
$17^{14} \equiv 17^{(70\cdot\frac{14}{70})}\pmod{71}$</p>
<p>And then I don't really know what to do from this point on. In another example, the terms were small enough that I could just simplify down to an answer, but in this example, I have no idea what to do with that $17^{(70\cdot\frac{14}{70})}$</p>
<p>What do I do from here?</p>
|
Brian M. Scott
| 12,042 |
<p>$17$ isn’t particularly close to a multiple of $71$, but as Ragib Zaman pointed out, $17^2=289$ is: $289=4\cdot71+5$. Thus, $17^{14}=(17^2)^7=289^7\equiv 5^7\pmod {71}$. At that point you can use brute force, or you might notice that $5^4=625$ is pretty close to $9\cdot71=639$. In fact $625=639-14$, so $5^4\equiv -14\pmod{71}$, $5^5\equiv -70\equiv 1\pmod{71}$, and finally $$17^{14}\equiv 5^7\equiv 5^2\equiv 25 \pmod{71}\;.$$</p>
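<p>A quick check with Python's built-in modular exponentiation (just a verification of the hand computation above):</p>
<pre><code># Confirms the two intermediate facts and the final answer.
print(pow(17, 2, 71), pow(5, 5, 71))  # 5 1
print(pow(17, 14, 71))                # 25
</code></pre>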
|
4,135,449 |
<p>A few years ago, I got interested in an apparently hard integration problem which had me fascinated. This was a Sophomore's-Dream-like integral, except with the bounds over the positive reals, denoted by <span class="math-container">$\Bbb R^+$</span>, rather than from 0 to 1 over the unit square (or unit line in this case). Nevertheless, this quirky problem caught my attention because its area is almost 2, as seen in this <a href="https://www.desmos.com/calculator/xuntecgloz" rel="noreferrer">graph</a>. Here is my solution, although there are a couple of calculations that I could have done differently:
<a href="https://i.stack.imgur.com/fPxAi.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/fPxAi.jpg" alt="enter image description here" /></a>
<span class="math-container">$$ A\mathit =\mathrm{\int_{\Bbb R^+}x^{-x}dx=\int_0^{\infty}\sum_{n=0}^{\infty}\frac{(-1)^nx^n ln^n(x)}{n!}dx}$$</span></p>
<p>I naively used Wolfram Alpha to integrate, but this can easily be done by hand with the substitution <span class="math-container">$x=e^{-y}$</span>, which is similar to the other <a href="https://en.m.wikipedia.org/wiki/Sophomore%27s_dream" rel="noreferrer">Sophomore's Dream integrals</a>, so it is omitted here for conciseness; an interchange theorem should justify swapping the sum and the integral. Here I chose not to do the substitution, as the same result appears anyway and this is simpler:</p>
<p><span class="math-container">$$\mathrm{\sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\int_0^{\infty}x^n ln^n(x)dx= \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}(-x^{-n}x^nln^n(x)ln(x)ln^{-(n+1)}(x)(-1)^{-(n+1)}(n+1)^{-(n+1)}Γ\big(n+1,-(n+1)ln(x)\big)\big|_0^{\infty}= \sum_{n=0}^{\infty}\frac{Γ\big(n+1,-(n+1)ln(\infty)\big)}{Γ(n+1)(n+1)^{n+1}}}$$</span></p>
<p>This -<span class="math-container">$\infty$</span> in the gamma function argument does not seem to exist, but taking a limit helps as well as some other forms with the <a href="https://functions.wolfram.com/GammaBetaErf/GammaRegularized/" rel="noreferrer">Regularized Incomplete Gamma Function</a>:
<span class="math-container">$$A=\mathrm{\sum_{n=1}^{\infty}n^{-n}\frac{Γ(n,-\infty n)}{Γ(n)}=\lim_{x\to \infty}\sum_{n\ge1}\frac{Q(n,-nx)}{n^n}=1.99545595…}$$</span></p>
<p>As seen from the link, the index n has to be a natural number which is true here. Also, there are other forms with the exponential sum function, and other series representations if non-elementary functions are not allowed. Here is <a href="https://www.wolframalpha.com/input/?i=sum+Q%28x%2C-3x%29%2Fx%5Ex+from+1+to+1000" rel="noreferrer">proof</a> of this answer from Wolfram Alpha.</p>
<p>Please do not give me any integral expressions of this constant, but rather special functions or other representations are encouraged. I will “check” the best answer. I just need a “nice” non-integral expression of the constant. Please give me feedback and correct me!</p>
<p>An alternate form solution is very needed for this problem also concerning power tower, so please check it out. I have left my context, attempts, and questions, therefore any help with alternate forms for either of these would be appreciated.</p>
<p>Using the definition of the Riemann Sum, we can rewrite this as:</p>
<p><span class="math-container">$$\mathrm{\int_{\Bbb R^+}x^{-x}dx=\lim_{b,n\to \infty}\frac bn \sum_{k=0}^n\left(\frac{bk}{n}\right)^{-\left(\frac{bk}{n}\right)},n\gg b}$$</span></p>
<p>Here is proof of this result: <a href="https://www.desmos.com/calculator/ictlmmzqwy" rel="noreferrer">Graph</a></p>
<p>Here are some unlikely <a href="https://www.wolframalpha.com/input/?i=1.99545595750013800041872469845272435208621663696797887278830006098303161714656636306695492778946387705816763770" rel="noreferrer">closed forms.</a></p>
<p>There is also a form of the constant using a summation expansion. Here is proof of a third form of this constant <a href="https://www.desmos.com/calculator/uyvdvtjpdq" rel="noreferrer">here</a>: <span class="math-container">$$\mathrm{A= \lim_{x\to \infty}\sum_{n\ge 1}\sum_{k=0}^{n-1}\frac{(-1)^k e^{nx} n^{k-n}x^k}{k!}}$$</span></p>
<p>Here is another sum and double sum answer <a href="https://www.desmos.com/calculator/tq5qjafgyg" rel="noreferrer">here</a>. This has a sophomore dream in the solution plus the same integrand from 1 to <span class="math-container">$\infty$</span>:
<span class="math-container">$$\mathrm{A=\sum_{n\ge 1}n^{-n}-\lim_{x\to \infty}\sum_{n\ge1}\sum_{k\ge1}\frac{(-x)^n(nx)^k}{(k+n)k!n!}}$$</span></p>
<p>Let <span class="math-container">$\bar F$</span> represent a Regularized Hypergeometric function then, as <span class="math-container">$\mathrm{Q(n,-nx)= 1-(-nx)^n\ _1\bar F_1(n,n+1,nx)}$</span>, the following can be proven with the <a href="https://mathworld.wolfram.com/ConfluentHypergeometricFunctionoftheFirstKind.html" rel="noreferrer">Kummer Confluent Hypergeometric Function</a>: <span class="math-container">$$\mathrm{\sum_{n\ge1}n^{-n}-\lim_{x\to\infty}\sum_{n\ge1}(-x)^n\ _1\bar F_1(n,n+1,nx)}$$</span></p>
<p>Also, feel free to use <a href="https://functions.wolfram.com/GammaBetaErf/Gamma2/06/01/04/01/02/" rel="noreferrer">these</a> identities to find an alternate form. I could go on, but I have found no form that does not need a limit in the answer.</p>
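<p>For completeness, here is a quick numerical sanity check of the quoted value (a sketch assuming mpmath is available; this is only a check of the number, not one of the requested non-integral closed forms):</p>
<pre><code>from mpmath import mp, quad, inf

mp.dps = 30
A = quad(lambda x: x**(-x), [0, inf])
print(A)  # 1.99545595750013800041872469845...
</code></pre>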
|
JJacquelin
| 108,514 |
<p>From : <a href="https://fr.scribd.com/doc/34977341/Sophomore-s-Dream-Function" rel="nofollow noreferrer">https://fr.scribd.com/doc/34977341/Sophomore-s-Dream-Function</a>
<span class="math-container">$$\text{Closed form : }\quad\int_0^\infty x^{-x}dx=\text{Sphd}(-1;\infty)$$</span>
<span class="math-container">$$\text{Numerical value : Eq.(8:3).}$$</span></p>
<p>Before downvoting, please read "A schoolboy joke" pp.2-3 in the above referenced paper.</p>
|
675,695 |
<p>[<b>Added by PLC</b>: This question is a followup to <a href="https://math.stackexchange.com/questions/298870/inner-product-change-of-axioms/298891">this already answered question</a>.]</p>
<p>Keep the axioms for a real inner product (symmetry, linearity, and homogeneity).
But make the fourth be
$$\langle x,x \rangle = 0 \text{ if and only if } x = 0.$$</p>
<p>I want to prove that either $\langle x,x \rangle > 0$ or $\langle x,x \rangle < 0$ for all $x \neq 0$. </p>
<p>Note: $c_1 = \langle x,x \rangle > 0$ and $c_2 = \langle y,y \rangle < 0$.</p>
<p>Here's the sketch of the proof I want: Assume $\langle x,x \rangle > 0$ for some $x$ and $\langle y,y \rangle < 0$ for some $y$. I'm trying to find a $z \neq 0$ such that $\langle z,z\rangle = 0$, where $z$ is in the space spanned by $\{x,y\}$. By contradiction, we know that $\langle x,x \rangle < 0$ or $\langle x,x \rangle > 0$ for all $x \neq 0$.</p>
<p>Unfortunately, my proof doesn't work that way, and I don't think it proves what I want it to prove.</p>
<p>I say let $\langle z,z \rangle = \langle ax + by, ax + by\rangle$, for $a,b \in \mathbb{R}$. Then
$$\langle z,z \rangle = a^2 \langle x,x \rangle + 2ab \langle x,y \rangle + b^2 \langle y,y \rangle = 0.$$
Let $a = \langle y,y \rangle$ and $b = \langle x,x \rangle$.
After plugging $a$ and $b$ in, I get:
\begin{align*}
& \langle y,y \rangle + 2\langle x,y \rangle + \langle x,x \rangle\langle y,y \rangle = 0 \\
\implies& c_2 + 2 \langle x,y \rangle + c_1 c_2 = 0 \\
\implies& 2\langle x,y \rangle = -(c_1 c_2 + c_2).
\end{align*}
Thus,
\begin{align*}
\langle z,z\rangle
&= c_1 c_2 + 2(-c_1 c_2 - c_2) + c_1 c_2 \\
&= -2c_1c_2 + 2(c_1c_2 c_2) \\
&= -c_1c_2+c_1c_2 +c_2 = 0
\end{align*}
Then $c_2 = 0$</p>
<p>The only thing I can think to do now is to claim a contradiction: we said $c_2 < 0$. But I don't think this proves what we want to prove that either $\langle x,x \rangle < 0$ OR $\langle x,x \rangle > 0$ for all $x \neq 0$.</p>
<p>I think my issue is that I don't know how to choose $a,b$ to make $\langle z,z \rangle = 0$. Someone please offer some help.</p>
|
Pete L. Clark
| 299 |
<p>Suppose that $\langle x, x \rangle > 0$ and $\langle y, y \rangle < 0$. I claim that there are $a,b \in \mathbb{R}$ such that $ax+by \neq 0$ and $\langle ax+by, ax+by \rangle = 0$. This shows that if
$q(v) = \langle v, v \rangle$ assumes both positive and negative values, then there is some nonzero $v$ with $q(v) = 0$.</p>
<p>We have $\langle ax+by, ax+by \rangle = a^2 \langle x,x \rangle + 2ab \langle x,y \rangle + b^2 \langle y,y \rangle$. If we view this as a quadratic equation in $a$, its discriminant is</p>
<p>$\Delta = 4b^2 \langle x,y \rangle^2 - 4 b^2 \langle x,x \rangle \langle y, y \rangle$. </p>
<p>Because of the assumptions on the sign of $\langle x,x \rangle$ and $\langle y, y \rangle$, $\Delta > 0$ when $b \neq 0$. So choose your favorite nonzero value of $b$; then the quadratic formula shows that the equation $\langle ax+by,ax+by \rangle = 0$ can be solved for $a$. </p>
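<p>A concrete instance of this argument, assuming sympy; the indefinite form <span class="math-container">$\langle u,v\rangle = u_1v_1 - u_2v_2$</span> and the vectors below are my own example, not from the question:</p>
<pre><code>import sympy as sp

def ip(u, v):                      # indefinite "inner product" u1*v1 - u2*v2
    return u[0]*v[0] - u[1]*v[1]

x, y = sp.Matrix([2, 1]), sp.Matrix([1, 2])          # ip(x,x) = 3 > 0, ip(y,y) = -3 < 0
a = sp.symbols('a')
roots = sp.solve(sp.Eq(ip(a*x + y, a*x + y), 0), a)  # real roots since discriminant > 0
z = roots[0]*x + y                                   # nonzero vector with ip(z, z) = 0
print(roots, sp.simplify(ip(z, z)))                  # [-1, 1] 0
</code></pre>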
|
1,329,539 |
<p>I'm trying to figure out how to solve this word problem. I'm pretty sure it involves calculus or something even harder, but I don't know how to solve <strong><em>the general form</em></strong>.</p>
<p>Let me start with the concrete form, however:</p>
<p><strong>Concrete Form:</strong></p>
<p>You start with 5 scientists. A scientist can train 50 students for 5 years, after which each student becomes a scientist. (Assume a perfect graduation rate always, and assume you have an infinite population from which to draw students). Or a scientist can work on a project.</p>
<p>The problem is you have 30 Type-A projects, 50 Type-B projects, and 75 Type-C Projects, and they all need to be completed in minimal time. Each Type-A Project requires at least 10 scientists and takes 200/x years to complete, where x is the number of scientists assigned to it. Type-B's require at least 18 scientists and take 150/x years to complete. Type-C's require at least 25 scientists and take 120/x years to complete.</p>
<p>What is the minimum time necessary to complete all projects, and what is the "event-order" of such an optimal solution?</p>
<p>I could solve this numerically by doing simulations in a computer program (although that will still be a pain in the neck), but what I really need is <strong>how to solve this in its general form.</strong></p>
<p><strong>General Form:</strong></p>
<p>Just assign constants to everything. You start with s scientists, who can train t students for y years. There are A type-a projects, B type-b's, and C type-c's. Respectively, they require a minimum of d, e, and f scientists, and take g/x, h/x, and i/x years to complete.</p>
<p>How do you go about solving this? Is that even possible? Solving this requires finding an optimal solution (completing all projects in minimal time), and proving that no other solution exists that has a smaller finish time.</p>
<p><strong>EDIT:</strong> Thanks to @Paul for this clarification. For projects, scientists can join or leave at any time; time here is continuous, not discrete. For training, however, only 1 scientist can train a group of 50. (Two scientists training 50 does not make it go 2x faster.) The training has to be "atomic", which I think is the right word.</p>
|
jose_castro_arnaud
| 292,886 |
<p>Here is a solution, although probably it's not optimal.</p>
<p>To complete all projects at once in one year, one needs (30 * 200) + (50 * 150) + (75 * 120) = 6000 + 7500 + 9000 = 22500 scientists. </p>
<p>Training almost all of them can be done in 10 years: </p>
<ul>
<li>Year 0: 5 scientists</li>
<li>Year 5: 5 + (5 * 50) = 255 scientists</li>
<li>Year 10: 255 + (255 * 50) = 13005 scientists</li>
</ul>
<p>Dividing the scientists equally for all projects, they should be done in less than two years. Total time: 11 to 12 years.</p>
|
1,329,539 |
<p>I'm trying to figure out how to solve this word problem. I'm pretty sure it involves calculus or something even harder, but I don't know how to solve <strong><em>the general form</em></strong>.</p>
<p>Let me start with the concrete form, however:</p>
<p><strong>Concrete Form:</strong></p>
<p>You start with 5 scientists. A scientist can train 50 students for 5 years, after which each student becomes a scientist. (Assume a perfect graduation rate always, and assume you have an infinite population from which to draw students). Or a scientist can work on a project.</p>
<p>The problem is you have 30 Type-A projects, 50 Type-B projects, and 75 Type-C Projects, and they all need to be completed in minimal time. Each Type-A Project requires at least 10 scientists and takes 200/x years to complete, where x is the number of scientists assigned to it. Type-B's require at least 18 scientists and take 150/x years to complete. Type-C's require at least 25 scientists and take 120/x years to complete.</p>
<p>What is the minimum time necessary to complete all projects, and what is the "event-order" of such an optimal solution?</p>
<p>I could solve this numerically by doing simulations in a computer program (although that will still be a pain in the neck), but what I really need is <strong>how to solve this in its general form.</strong></p>
<p><strong>General Form:</strong></p>
<p>Just assign constants to everything. You start with s scientists, who can train t students for y years. There are A type-a projects, B type-b's, and C type-c's. Respectively, they require a minimum of d, e, and f scientists, and take g/x, h/x, and i/x years to complete.</p>
<p>How do you go about solving this? Is that even possible? Solving this requires finding an optimal solution (completing all projects in minimal time), and proving that no other solution exists that has a smaller finish time.</p>
<p><strong>EDIT:</strong> Thanks to @Paul for this clarification. For projects, scientists can join or leave at any time; time here is continuous, not discrete. For training, however, only 1 scientist can train a group of 50. (Two scientists training 50 does not make it go 2x faster.) The training has to be "atomic", which I think is the right word.</p>
|
Ross Millikan
| 1,827 |
<p>There are some features to this problem that you might not think are present in the general form. All the projects require a fixed number of scientist-years of research and all the scientists are interchangeable. We just need to assign $22500$ scientist-years to research. The minimum number per project does not matter because we will have lots of scientists and can assign them all to the same project, finishing it quickly, then have them move to another. The only option is how many scientists to assign to training and how many to research. Because the training ratio is so high, we make back the cost of training in $0.1$ years. We should spend all our efforts training until we can complete the projects in less than $5$ years. After that, training is in vain, because the trainees will not be ready until the research is done. So we start with all five scientists training, then all $255$ scientists training, so at the end of $10$ years we have $13005$ scientists. We will be done in $\frac {16500}{13005}\approx 1.730$ years for a total of $11.730$ years.</p>
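<p>A back-of-the-envelope check of this schedule (a sketch in plain Python; the stopping rule "train while the remaining research would take at least 5 years" is my paraphrase of the argument above):</p>
<pre><code>work = 30 * 200 + 50 * 150 + 75 * 120   # total scientist-years of research = 22500

scientists, elapsed = 5, 0.0
while work / scientists >= 5:           # training still pays off
    scientists *= 51                    # each scientist trains 50 students in 5 years
    elapsed += 5
elapsed += work / scientists
print(scientists, round(elapsed, 3))    # 13005 scientists, about 11.73 years total
</code></pre>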
|
1,232,023 |
<p>For me, intuitively, the integral $2\pi y~dx$ makes more sense. I know intuition cannot be a proof, but by far most of the math I've learned does match my intuition. So I think this one should 'make sense' as well. Probably I didn't understand the way surface area is measured. It would be great if anyone could tell me how 'integral $2\pi y~dx$' is wrong. (By the way, how do I use mathematical symbols in texts?)</p>
|
Simon S
| 21,495 |
<p>Here's some intuition:</p>
<p>Draw a diagram of some element of a graph of $y = f(x)$ you are rotating. If the line element described by $dx$ or $ds$ is not parallel to the $x$ axis, then notice that $ds > dx$; that is, the length of the line element is longer than just $dx$ because there is a component in the $dy$ direction; i.e., $ds^2 = dx^2 + dy^2$.</p>
<p>So if we were to calculate the integral</p>
<p>$$\int_a^b 2\pi f(x) \ dx$$</p>
<p>that would be a lower bound on the actual surface area. For continuous $f$ on the interval $[a,b]$, this integral is equal to the actual surface area if and only if $f$ is a constant. That is, if we were calculating the surface area of the curved part of a cylinder. If $f(x) = r > 0$, then the surface area is indeed</p>
<p>$$\int_a^b 2\pi f(x) \ dx = \int_a^b 2\pi r \ dx = 2\pi rL \ \ \ \text{ where } L = b - a$$</p>
<p>For every other $f$, we need $ds$:</p>
<p>$$\int 2\pi f(x) \ ds$$</p>
<p>As $\displaystyle \left( { ds \over dx } \right)^2 = 1 + \left( { dy \over dx } \right)^2$, this last integral is normally written as</p>
<p>$$\int_a^b 2\pi f(x) \ \sqrt{1 + \left( { dy \over dx } \right)^2} \ dx $$</p>
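<p>A concrete numerical comparison (my own example of a cone, assuming scipy is available): rotating <span class="math-container">$f(x)=x$</span> on <span class="math-container">$[0,1]$</span> about the <span class="math-container">$x$</span>-axis has exact lateral area <span class="math-container">$\pi\sqrt 2$</span>, and the naive integral without <span class="math-container">$ds$</span> undershoots it:</p>
<pre><code>from math import pi, sqrt
from scipy.integrate import quad

f = lambda x: x                                               # f'(x) = 1
naive, _   = quad(lambda x: 2*pi*f(x), 0, 1)                  # ignores ds
correct, _ = quad(lambda x: 2*pi*f(x)*sqrt(1 + 1**2), 0, 1)   # uses ds
print(naive, correct, pi*sqrt(2))  # ~3.14 vs ~4.44; only the second matches pi*sqrt(2)
</code></pre>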
|
1,232,023 |
<p>For me, intuitively, the integral $2\pi y~dx$ makes more sense. I know intuition cannot be a proof, but by far most of the math I've learned does match my intuition. So I think this one should 'make sense' as well. Probably I didn't understand the way surface area is measured. It would be great if anyone could tell me how 'integral $2\pi y~dx$' is wrong. (By the way, how do I use mathematical symbols in texts?)</p>
|
user492893
| 492,893 |
<p>A bit late but either way... </p>
<p>The surface area depends on $ds$ because $ds$ runs on the surface, unlike $dx$ which runs through the shape. </p>
<p>Consider two points A and B (not mathematical points, just locations), which are separated by a mountain. Person X simply goes through the mountain tunnel to get from A to B. Person S climbs up the mountain, then climbs down the other side of the mountain to get to point B. It's the same intuition behind $ds$ instead of $dx$. </p>
<p>When calculating volume, we want the interior of the shape. $dx$ gives us just that, since it pierces through the shape, just as person X passes through the mountain. However, for surface area, we want the exterior of the shape, so we need to use $ds$, just as person S has to walk on the mountain. </p>
|
250,314 |
<p>I have a problem about proving the following. Can you please help me?</p>
<p>First of all, the boundary of A is the set of points x such that for every r>0 the ball B(x,r) contains points from both A and the outside of A.</p>
<p>Secondly, the definition of the closure of A is the intersection of all closed sets containing A. I am trying to prove that, for A a subset of a metric space X, the closure of A equals (A union boundary of A). To prove it, I try to show that the LHS is a subset of the RHS, and then that the RHS is a subset of the LHS. It is obvious to me that the RHS is a subset of the LHS. But why is the closure of A a subset of (A union boundary of A)? For example, let x be an element of the closure of A, and let us call the closure of A by B. Then, if x is in B, it can be in the set B\A, and I think it does not have to be in (A union boundary of A). So, I don't get it. Can anyone help?</p>
<p>Thanks.</p>
|
Brian M. Scott
| 12,042 |
<p>$\newcommand{\cl}{\operatorname{cl}}\newcommand{\bdry}{\operatorname{bdry}}$One way to show that $\cl A\subseteq A\cup\bdry A$ is to show that if $x\in X\setminus(A\cup\bdry A)$, then $x\notin\cl A$. To show that $x\notin\cl A$, find a closed set $F$ such that $A\subseteq F$ and $x\notin F$.</p>
<p>So suppose that $x\in X\setminus(A\cup\bdry A)$. Then $x\notin\bdry A$, so there is an $\epsilon>0$ such that either $B(x,\epsilon)\cap A=\varnothing$ or $B(x,\epsilon)\subseteq A$. However, $x\in B(x,\epsilon)\setminus A$, so clearly $B(x,\epsilon)\nsubseteq A$, and therefore $B(x,\epsilon)\cap A=\varnothing$. What happens if you set $F=X\setminus B(x,\epsilon)$?</p>
|
250,314 |
<p>I have a problem about proving the following. Can you please help me?</p>
<p>First of all, the boundary of A is the set of points x such that for every r>0 the ball B(x,r) contains points from both A and the outside of A.</p>
<p>Secondly, the definition of the closure of A is the intersection of all closed sets containing A. I am trying to prove that, for A a subset of a metric space X, the closure of A equals (A union boundary of A). To prove it, I try to show that the LHS is a subset of the RHS, and then that the RHS is a subset of the LHS. It is obvious to me that the RHS is a subset of the LHS. But why is the closure of A a subset of (A union boundary of A)? For example, let x be an element of the closure of A, and let us call the closure of A by B. Then, if x is in B, it can be in the set B\A, and I think it does not have to be in (A union boundary of A). So, I don't get it. Can anyone help?</p>
<p>Thanks.</p>
|
Grifulkin
| 17,398 |
<p>So let $a\in\bar{A}$, the closure. Then for every open set $U$ such that $a\in U$ we have the following: $$U\cap A \neq \emptyset.$$ And if $a \not\in A$, then also $U \cap (X-A) \neq \emptyset$ for every such open set $U$ (since $a\in U\cap(X-A)$). So $a \in \bar{A} \cap \overline{X-A} = \partial(A)=\operatorname{bdry} A.$ So either $a \in A$ or $a\in \partial(A)$.</p>
|
1,351,881 |
<p>This argument comes up every once in a while on Lambda the Ultimate. I want to know where the flaw is.</p>
<p><em>Take a countable number of TMs all generating different bitstreams. Construct a Cantor TM which runs every nth TM up to the nth bit and outputs the reversed bit.</em></p>
<p><em>Now I have established there are uncountably many TMs.</em></p>
<p>(The argument is that every bitstream is a unique real in the set [0,1]. Since every TM can be seen to be represented by that real, we can use Cantor's construction to prove that there are uncountably many TMs.)</p>
<p>Where does it go wrong?</p>
|
Qiaochu Yuan
| 232 |
<p>There are uncountably many countable sets of Turing machines, and only countably many of these can be enumerated by a Turing machine (since there are, in fact, only countably many Turing machines). So for most such subsets, your Cantor TM can't exist. </p>
|
1,351,881 |
<p>This argument comes up every once in a while on Lambda the Ultimate. I want to know where the flaw is.</p>
<p><em>Take a countable number of TMs all generating different bitstreams. Construct a Cantor TM which runs every nth TM up to the nth bit and outputs the reversed bit.</em></p>
<p><em>Now I have established there are uncountably many TMs.</em></p>
<p>(The argument is that every bitstream is a unique real in the set [0,1]. Since every TM can be seen to be represented by that real, we can use Cantor's construction to prove that there are uncountably many TMs.)</p>
<p>Where does it go wrong?</p>
|
Taumatawhakatangihangakoauauot
| 144,375 |
<p>There are a couple of issues here. To begin, I'll assume that by a Turing Machine we mean a Turing Machine with unary input alphabet $\{ \mathsf{1} \}$, and tape alphabet $\{ \mathsf{1} , \mathsf{-} \}$, where $\mathsf{-}$ is the blank symbol. The natural number $k$ is inputted into such a Turing Machine in the usual unary manner: $\overbrace{\mathsf{1}\cdots\mathsf{1}}^{k\text{ times}}$.</p>
<p>To view such a Turing Machine as generating a bitstream, we'll say that the $k$th bit generated by $M_n$ is $1$, if $M_n$ halts reading a $\mathsf{1}$ on input $k$, and is $0$ if $M_n$ halts reading a blank on that same input.</p>
<p>Now take your enumeration $\{ M_0, M_1 , \ldots \}$ of all these Turing machines, and devise a Turing machine $M_*$ so that on input $k$ it outputs the reverse of whatever $M_k$ outputs on input $k$. Now, if $M_* = M_k$ for some $k$, then we have the following</p>
<ul>
<li>if the $k$th bit outputted by $M_k$ is $1$, then the $k$th bit outputted by $M_*$ is $1$, and so by our description the $k$th bit outputted by $M_k$ is $0$, and similarly,</li>
<li>if the $k$th bit outputted by $M_k$ is $0$, then the $k$th bit outputted by $M_k$ is $1$.</li>
</ul>
<p>But there is a third possibility that hasn't even been counted upon, and that is that $M_k$ <em>may not halt</em> on input $k$. If $M_k$ does not halt on input $k$, then we say nothing of what $M_*$ does. In fact, when we say that a TM $M$ uses the output of another TM $M^\prime$ to determine its output, we generally mean that $M$ simulates $M^\prime$ on the desired input (or that $M^\prime$ is a "subroutine" of $M$). If $M_k$ does not halt on input $k$, then $M_*$ will also not halt on this input. Notice that we've alleviated the supposed contradiction to the countability of TMs quite nicely.</p>
<p>But, you may ask, what if we assume that our enumeration is set up so that for each $k$ the TM $M_k$ halts on input $k$? Well, you're not going to get <em>all</em> the TMs in this manner, because there are TMs that do not halt on any input (and so cannot be part of this enumeration). Also, for each $k$ there would be infinitely many TMs that halt only on input $k$. So this route is a bit of a non-starter.</p>
<p>But let's try to be even more generous and suppose that $\{ M_0, M_1 , \ldots \}$ is an enumeration of <em>all</em> TMs that halt on <em>all</em> inputs. Now $M_*$ also halts on all inputs, and we appear to have arrived at a contradiction. But to what?</p>
<p>Well, we now get into a matter that has been overlooked. In order for $M_*$ to simulate any of the infinitely many $M_k$ it must be that the enumeration is given to $M_*$ in such a way that it can algorithmically convert the input $k$ into a description of $M_k$. So our beginning enumeration is not arbitrary, but must be <em>computable</em>. The contradiction we run into now is that there is no <em>computable</em> enumeration of all TMs which halt on all inputs.</p>
|
3,019,882 |
<p>I have already found a proof of the statement which I did not understand. I will write it down, so maybe someone can explain the part that I did not understand. But if there are easier ways to prove the statement, I would be delighted to know.</p>
<p><span class="math-container">$\alpha:=\sqrt{2}+\sqrt{3}\notin U:= \mathbb{Q}(\sqrt{2})\cup\mathbb{Q}(\sqrt{3})\iff\alpha \notin \mathbb{Q}(\sqrt{2})$</span> and <span class="math-container">$\alpha \notin \mathbb{Q}(\sqrt{3})$</span></p>
<p>Proof: <span class="math-container">$\alpha \notin \mathbb{Q}(\sqrt{2})$</span></p>
<p>Assume <span class="math-container">$\exists_{r,s\in \mathbb{Q}}r+s\sqrt{2}=\sqrt{2}+\sqrt{3}$</span></p>
<p><span class="math-container">$\iff \sqrt{3}=r+(s-1)\sqrt2$</span></p>
<p><span class="math-container">$\iff 3=r^2+2r(s-1)\sqrt2+2(s-1)^2$</span></p>
<p>Which implies <span class="math-container">$3=0$</span> or <span class="math-container">$\sqrt3\in \mathbb{Q}$</span> or <span class="math-container">$\sqrt{3/2}\in \mathbb{Q}$</span> or <span class="math-container">$\sqrt{2}\in \mathbb{Q}$</span></p>
<p>I don't understand the last implication; I need some help here.</p>
<p>Many thanks. </p>
|
Wuestenfux
| 417,848 |
<p>Well, <span class="math-container">$\alpha\not\in {\Bbb Q}(\sqrt 2)$</span> is equivalent to <span class="math-container">$\sqrt 3\not\in {\Bbb Q}(\sqrt 2)$</span>.
Suppose on the contrary that <span class="math-container">$\sqrt 3 = a+b\sqrt 2$</span> for some rational numbers <span class="math-container">$a,b$</span>. Squaring gives <span class="math-container">$3 = a^2+2ab\sqrt 2 + 2b^2$</span>. If <span class="math-container">$ab\neq 0$</span>, then <span class="math-container">$\sqrt 2$</span> could be written as a rational number, which is impossible as it is irrational. The remaining cases are also impossible: <span class="math-container">$b=0$</span> would make <span class="math-container">$\sqrt 3$</span> rational, and <span class="math-container">$a=0$</span> would give <span class="math-container">$3=2b^2$</span>, which has no rational solution.</p>
|
3,019,882 |
<p>I have already found a proof of the statement which I did not understand. I will write it down, so maybe someone can explain the part that I did not understand. But if there are easier ways to prove the statement, I would be delighted to know.</p>
<p><span class="math-container">$\alpha:=\sqrt{2}+\sqrt{3}\notin U:= \mathbb{Q}(\sqrt{2})\cup\mathbb{Q}(\sqrt{3})\iff\alpha \notin \mathbb{Q}(\sqrt{2})$</span> and <span class="math-container">$\alpha \notin \mathbb{Q}(\sqrt{3})$</span></p>
<p>Proof: <span class="math-container">$\alpha \notin \mathbb{Q}(\sqrt{2})$</span></p>
<p>Assume <span class="math-container">$\exists_{r,s\in \mathbb{Q}}r+s\sqrt{2}=\sqrt{2}+\sqrt{3}$</span></p>
<p><span class="math-container">$\iff \sqrt{3}=r+(s-1)\sqrt2$</span></p>
<p><span class="math-container">$\iff 3=r^2+2r(s-1)\sqrt2+2(s-1)^2$</span></p>
<p>Which implies <span class="math-container">$3=0$</span> or <span class="math-container">$\sqrt3\in \mathbb{Q}$</span> or <span class="math-container">$\sqrt{3/2}\in \mathbb{Q}$</span> or <span class="math-container">$\sqrt{2}\in \mathbb{Q}$</span></p>
<p>I don't understand the last implication; I need some help here.</p>
<p>Many thanks. </p>
|
Stockfish
| 362,664 |
<p>Me neither - but if <span class="math-container">$r(s-1) \neq 0$</span> you can rewrite the last equation to obtain <span class="math-container">$$\mathbb{Q} \ni \frac{3-r^2-2(s-1)^2}{2r(s-1)} = \sqrt{2} \notin \mathbb{Q},$$</span> a contradiction. If <span class="math-container">$s=1$</span> we have <span class="math-container">$3=r^2$</span>, if <span class="math-container">$r=0$</span> the equation reads <span class="math-container">$3=2(s-1)^2$</span> - both is impossible because of the uniqueness of prime factorization.</p>
|
4,489,898 |
<p>After 18 months of studying an advanced junior high school mathematics course, I'm doing a review of the previous 6 months, starting with solving difficult quadratics that are not easily factored, for example:
<span class="math-container">$$x^2+6x+2=0$$</span>
This could be solved via the quadratic formula, but the course I'm working through asks me to use the complete-the-square method. I can do it, and I appreciate the geometric illustration of what is happening.</p>
<p>But it's so powerful and elegant, I can't help but wonder where else this method of adding something into an expression only to take it away again is employed in mathematics. And is there a name for the general case of this kind of operation?</p>
<p><a href="https://i.stack.imgur.com/I0fmi.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/I0fmi.jpg" alt="geometric complete the square" /></a></p>
|
Oscar Lanzi
| 248,217 |
<p>This may not be an example of what the OP wants, but there is an inside-out version of completing the square. Given the expression</p>
<p><span class="math-container">$a^4+4b^4,$</span></p>
<p>we recognize that adding the middle term <span class="math-container">$4a^2b^2$</span> forms a square and at the same time, the middle term itself is a square. Thus</p>
<p><span class="math-container">$a^4+4b^4=(a^4+4a^2b^2+4b^4)-4a^2b^2$</span></p>
<p><span class="math-container">$=(a^2+2b^2)^2-(2ab)^2$</span></p>
<p><span class="math-container">$=(a^2+2ab+2b^2)(a^2-2ab+2b^2).$</span></p>
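<p>A quick symbolic confirmation of this factorization, often called the Sophie Germain identity (a sketch assuming sympy is available):</p>
<pre><code>import sympy as sp

a, b = sp.symbols('a b')
print(sp.factor(a**4 + 4*b**4))  # (a**2 - 2*a*b + 2*b**2)*(a**2 + 2*a*b + 2*b**2)
</code></pre>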
|
718,850 |
<p>I see similar questions asked on here and obviously I did some research and read my book, but it seems like every explanation contradicts another in some way. There are basically infinite scenarios using these, and in every example problem/scenario I seem to convince myself it could be both!</p>
<p><strong>Here are some of my understandings of each:</strong></p>
<p><strong>Permutation:</strong> Every detail matters and ALL ways of doing something. "Think of permutations as a list."</p>
<p><strong>Combinations:</strong> Used for groups. Order and Position DOES NOT matter.</p>
<hr>
<p><strong>My Confusion:</strong> </p>
<p>a.) If permutations are ALL ways of doing something.. then why does order/position/type matter?</p>
<p>b.) If order does NOT matter with combinations.. why are "Locks" said to have a "combination" when clearly the order does matter with a lock? If the "combination" to unlock something is 1-2-3.. then clearly 1-3-2 would not work. Therefore it seems like order does matter..</p>
<p>c.) If permutations are ALL ways of doing something and if EVERY detail matters.. then why are the number of permutations larger than the number of combinations?</p>
<p>Sorry if I included too much. I'm really struggling with this and every time I think I understand a scenario/problem.. I look at another and have no idea how to do it! I'd greatly appreciate any help. Thank you!</p>
|
StephanCasey
| 157,220 |
<p>a) this one depends on the question at hand.</p>
<p>Say you have 3 people (A, B, C) and you want to put them in a row. Finding all the arrangements would include</p>
<p>A B C</p>
<p>A C B</p>
<p>B C A</p>
<p>B A C</p>
<p>C A B</p>
<p>C B A</p>
<p>3P3 = 3! = 6 arrangements</p>
<p>but the same three people were in there so if order didn't matter, this would only count as 1 arrangement since in each case there was 1A, 1B, 1C.</p>
<p>that's how a combination is different</p>
<p>$3\choose3$ = 1 arrangement</p>
<p>b) You're right, that's clever. Locks don't actually have permutations or combinations in this sense.</p>
<p>You can't say a lock has 10P3 = 10 x 9 x 8</p>
<p>A lock allows any of the 10 digits in each position, with repeats allowed,</p>
<p>so n(S) = 10x10x10</p>
<p>c) The combination is just the number of choices provided we are not ordering the items but choosing a certain amount of them and grouping them together. A permutation is all the ways of arranging all the combinations into specific orders like in a)</p>
<p>A question on combinations.</p>
<p>If there are 30 people in a class and you need to pick 2 people to clean up at the end of the day. How many ways can you choose those people. This is one where order doesn't matter because picking Eddie and Fred is the same as Fred and Eddie</p>
<p>n(S) = $30\choose2$ = 435 (not 30P2 = 870. Note that 870 is double 435, so in this case you would have had 870 full arrangements but only half of them are distinct choices. Why is that? Because for every "Eddie and Fred" there was also a "Fred and Eddie".)</p>
<p>If you are choosing 3 people it gets worse because</p>
<p>30P3 = 24 360 but $30\choose3$ is 4 060. Clearly you cannot just divide the permutation by 3 to get 4060. You need to divide it by 3! because there will be </p>
<p>Fred, Eddie, Freddie. </p>
<p>Fred, Freddie, Eddie.</p>
<p>Eddie, Fred, Freddie.</p>
<p>Eddie, Freddie, Fred.</p>
<p>Freddie, Fred, Eddie.</p>
<p>Freddie, Eddie, Fred.</p>
<p>${A \choose B} = {}^{A}P_{B} \,/\, B!$</p>
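<p>The counts above can be checked directly, assuming Python 3.8+ where <code>math.comb</code> and <code>math.perm</code> exist:</p>
<pre><code>from math import comb, perm, factorial

print(perm(3, 3), comb(3, 3))       # 6 arrangements of A, B, C vs 1 group
print(perm(30, 2), comb(30, 2))     # 870 vs 435
print(perm(30, 3), comb(30, 3))     # 24360 vs 4060
print(perm(30, 3) // factorial(3))  # 4060 -- dividing by 3! recovers the combinations
</code></pre>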
|
1,515,417 |
<p>I understand the idea that some infinities are "bigger" than other infinities. The example I understand is that all real numbers between 0 and 1 would not be able to "fit" on an infinite list.</p>
<p>I have to show whether these sets are countable or uncountable. If countable, how would you enumerate the set? If uncountable, how would you prove using diagonalization? </p>
<p>Set 1. All real numbers represented only by 1's. EX) 1, .11, 111.11, 1.111...</p>
<p>Set 2. All real numbers represented only by 2's and 3's. EX) .2, 23.2, 22.2232...</p>
|
Ittay Weiss
| 30,953 |
<p>Hints: 1) You just listed these numbers, so.... 2) Use the exact same technique of diagonalization you say you understand.</p>
|
335,258 |
<p>Find the domain of the function:
$$f(x)= \sqrt{x^2 - 4x - 45}$$</p>
<p>I'm just guessing here; how about if I square everything and then put it in the graphing calculator?
Thanks,
Lauri</p>
|
André Nicolas
| 6,312 |
<p>Note that $x^2-4x-45=(x-2)^2-49$. We completed the square. This is an almost universally useful move when we are dealing with quadratics.</p>
<p>Now when is the thing under the square root sign bad (negative)?</p>
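<p>For readers who want to check their final answer afterwards, a short sympy computation (assuming sympy is available; note that this gives away the punchline of the hint):</p>
<pre><code>import sympy as sp

x = sp.symbols('x', real=True)
print(sp.solveset(x**2 - 4*x - 45 >= 0, x, sp.S.Reals))
# Union(Interval(-oo, -5), Interval(9, oo))
</code></pre>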
|
2,550,655 |
<p>When we're integrating a one variable function, there is only one path to follow between points $a$ and $b$.</p>
<p>$F(b)=F(a)+h(f(a)+f(a+h)+f(a+2h)+\dots)$ where $h$ is very small. So, we go from $a$ to $b$ in steps of $h$ along the unique path (the line segment) joining $a$ and $b$.</p>
<p>I was wondering what if we have a multi-variable function $f(x,y)$. What does it mean to integrate $f(x,y)$ from $(a,b)$ to $(c,d)$? Clearly, there isn't a unique path joining $(a,b)$ and $(c,d)$.</p>
<p>We have partial integration:</p>
<p>$$\int_{(a,b)}^{(c,d)} f(x,y)dxdy=\int_{a}^{c}f(x,b)dx+\int_{b}^{d}f(c,y)dy$$</p>
<p>But that's basically doing it in two steps, first keeping $y$ constant then $x$ constant. How can we vary both $x$ and $y$ simultaneously?</p>
<p>I thought of finding a curve which contains both the points $(a,b)$ and $(c,d)$ and varying both variables simultaneously along that path. Suppose $g(x)$ is a curve such that $g(a)=b$ and $g(c)=d$. Then we can replace $y$ with $g(x)$ to get:</p>
<p>$$\int_{(a,b)}^{(c,d)} f(x,y)dxdy=\int_a^b f(x,g(x))g'(x)dx$$</p>
<p>Does this integration of $f(x,y)$ from $(a,b)$ to $(c,d)$ depend on the choice of curve $g(x)$?</p>
|
davidlowryduda
| 9,754 |
<p>Generically, yes. The integral of $f$ along a path depends on the choice of the path.</p>
<p>For perhaps the simplest example, suppose $f(x,y) \equiv 1$, and choose a path $\gamma(t)$. When I say choose a path $\gamma(t)$, I think of $t$ as indicating 'time' and $\gamma(t)$ as indicating the location of a particle at time $t$, or something. So $\gamma: [a,b] \longrightarrow \mathbb{R}^2$.</p>
<p>Then $\int_\gamma 1 \, \mathrm{d}s$ (the scalar line integral of $f\equiv 1$ with respect to arc length) is really another name for the length of the path, and this clearly depends on the path.</p>
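<p>A small numerical illustration of that path dependence (a sketch in Python, assuming Python 3.8+ for <code>math.dist</code>; the names are just for illustration):</p>

<pre><code>import math

def path_length(gamma, n=100000):
    # approximate the length of gamma: [0,1] -> R^2 by summing chord lengths
    pts = [gamma(i / n) for i in range(n + 1)]
    return sum(math.dist(pts[i], pts[i + 1]) for i in range(n))

straight = lambda t: (t, t)      # straight segment from (0,0) to (1,1)
parabola = lambda t: (t, t * t)  # parabolic arc with the same endpoints

print(path_length(straight))  # about 1.41421 = sqrt(2)
print(path_length(parabola))  # about 1.47894, so integrating f = 1 gives a different value
</code></pre>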
<p>For more, you should look up 'line integrals', which should be covered early in a multivariable calculus text (or in various places online).</p>
|
1,822,179 |
<p>Pointwise limit of the sequence of functions </p>
<p>$$h_n(x)=\begin{cases}1,&\text{if }x\ge \frac1n\\nx,&\text{if }x\in[0,\frac1n)\end{cases}$$</p>
<p>The trouble with this question is that I think that $h(0)$ is not defined, but I'm not completely sure. We can see that for any $n\in\Bbb N$ we have $h_n(0)=0$. But the region where $h_n(x)=1$ reaches toward zero as well, and only touches zero in the limit, since $\lim 1/n=0$.</p>
<p>Are my assumptions correct? Is $h(0)$ undefined?</p>
|
Michael Hardy
| 11,667 |
<p>You have $h_1(0)=0$, $h_2(0)=0$, $h_3(0)=0$, $h_4(0)= 0$, etc. So the sequence $h_1(0), h_2(0), h_3(0), h_4(0), h_5(0),\ldots$ is just the sequence $0,0,0,0,0,\ldots$ in which every term is $0$. That sequence does have a limit and the limit is $0$. In other words $\lim_{n\to\infty} h_n(0) = 0$. The limit does exist and is $0$.</p>
<p>If $x>0$, then $\lim_{n\to\infty} h_n(x)=1$. So this is an example of a sequence of continuous functions converging pointwise to a discontinuous function.</p>
|
5,363 |
<p>There is something in the definition of the <a href="http://en.wikipedia.org/wiki/Free_product" rel="nofollow noreferrer">free product</a> of two groups that annoys me, and it's this "word" thing:</p>
<blockquote>
<p>If <span class="math-container">$G$</span> and <span class="math-container">$H$</span> are groups, a word in <span class="math-container">$G$</span> and <span class="math-container">$H$</span> is a product of the form</p>
<p><span class="math-container">$$
s_1 s_2 \dots s_m,
$$</span></p>
<p>where each <span class="math-container">$s_i$</span> is either an element of <span class="math-container">$G$</span> or an element of <span class="math-container">$H$</span>. </p>
</blockquote>
<p>So what is this "word" guy? Does it come out of the blue? Does it come from some sort of new operation that I can perform with the two sets <span class="math-container">$G$</span> and <span class="math-container">$H$</span> -in addition to the well-known ones of union, intersection, Cartesian product...?</p>
<p>Fortunatelly, I think there is nothing new under the sun of set operations: it's easy to realise that words can be identified with elements of some Cartesian product (see below):</p>
<p><span class="math-container">$$
(s_1, s_2, \dots , s_m ) \ .
$$</span></p>
<p>And Cartesian product is a well-established set-theoretical operation.</p>
<p>So I tried to translate the rest of Wikipedia's definition </p>
<blockquote>
<p>Such a word may be <em>reduced</em> using the following operations:</p>
<p>Remove an instance of the identity element (of either <span class="math-container">$G$</span> or <span class="math-container">$H$</span>).
Replace a pair of the form <span class="math-container">$g_1g_2$</span> by its product in <span class="math-container">$G$</span>, or a pair <span class="math-container">$h_1h_2$</span> by its product in <span class="math-container">$H$</span>.</p>
<p>Every reduced word is an alternating product of elements of <span class="math-container">$G$</span> and elements of <span class="math-container">$H$</span>, e.g.</p>
<p><span class="math-container">$$
g_1 h_1 g_2 h_2 \dots g_r h_r.
$$</span></p>
<p>The free product <span class="math-container">$G ∗ H$</span> is the group whose elements are the reduced words in <span class="math-container">$G$</span> and <span class="math-container">$H$</span>, under the operation of concatenation followed by reduction.</p>
</blockquote>
<p>in an elementary set setting. First, consider the set of "unreduced" tuples of elements of <span class="math-container">$G$</span> and <span class="math-container">$H$</span></p>
<p><span class="math-container">$$
U = G \sqcup H \sqcup (G\times G) \sqcup (G\times H) \sqcup (H\times G) \sqcup (H\times H) \sqcup (G\times G \times G) \sqcup \dots
$$</span></p>
<p>More concisely:</p>
<hr>
<p>EDIT:</p>
<p>I think the following formula may be less messier than the one I wrote previously:</p>
<p><span class="math-container">$$
U = \bigsqcup_{r \geq 1} (S_1 \times \cdots \times S_r),
$$</span></p>
<p>where <span class="math-container">$S_i = G$</span> or <span class="math-container">$S_i = H$</span>.</p>
<hr>
<p>So, elements of <span class="math-container">$U$</span> are ordered tuples (<em>unreduced</em> ones)</p>
<p><span class="math-container">$$
(s_1, s_2, \dots , s_m),
$$</span></p>
<p>where each <span class="math-container">$s_i$</span> is either an element of <span class="math-container">$G$</span> or an element of <span class="math-container">$H$</span>.</p>
<p>The product of two unreduced tuples is defined by concatenation</p>
<p><span class="math-container">$$
(s_1, \dots , s_m) \cdot (t_1, \dots , t_n) = (s_1, \dots , s_m, t_1 , \dots , t_n) \ .
$$</span></p>
<p>Now, consider the following equivalence relation in the set of unreduced tuples <span class="math-container">$U$</span>:</p>
<p><span class="math-container">$$
(s_1, s_2, \dots , s_{i-1}, 1, s_{i+1}, \dots , s_n) \sim (s_1, s_2, \dots, s_{i-1}, s_{i+1}, \dots , s_n) \ ,
$$</span></p>
<p>where <span class="math-container">$1$</span> is either the unit element of <span class="math-container">$G$</span> or the one of <span class="math-container">$H$</span>. And</p>
<p><span class="math-container">$$
(s_1, s_2, \dots , s_i,s_{i+1}, \dots , s_r) \sim (s_1, s_2, \dots , s_is_{i+1}, \dots , s_r )
$$</span></p>
<p>whenever two adjacent <span class="math-container">$s_i, s_{i+1} \in G$</span> or <span class="math-container">$s_i, s_{i+1} \in H$</span> at the same time. </p>
<p>If you want, you may call the equivalence class of a tuple under this equivalence relation a <em>reduced</em> tuple. So every reduced tuple is an alternating one, </p>
<p><span class="math-container">$$
(g_1, h_1, \dots , g_r , h_r) \ ,
$$</span></p>
<p>with <span class="math-container">$g_i \in G$</span> and <span class="math-container">$h_i \in H$</span> for all <span class="math-container">$i = 1, \dots , r$</span>.</p>
<p>Define the free product of <span class="math-container">$G$</span> and <span class="math-container">$H$</span> as the quotient:</p>
<p><span class="math-container">$$
G*H = U/\sim \ .
$$</span></p>
<p>Finally, one verifies that concatenation is compatible with this equivalence relation, so it descends to the quotient and gives <span class="math-container">$G*H$</span> a group structure.</p>
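<p>For what it's worth, the reduction procedure above is easy to play with on a computer. Here is a small sketch in Python (the names <code>reduce_word</code>, <code>identities</code>, <code>multiply</code> are just illustrative); it reduces a word from left to right by dropping identity letters and merging adjacent letters that come from the same group:</p>

<pre><code>def reduce_word(word, identities, multiply):
    # word: sequence of (tag, element) pairs; the tag marks which group the letter lives in
    out = []
    for tag, s in word:
        if s == identities[tag]:               # rule 1: drop identity letters
            continue
        if out and out[-1][0] == tag:          # rule 2: merge adjacent letters from the same group
            prod = multiply[tag](out[-1][1], s)
            out.pop()
            if prod != identities[tag]:
                out.append((tag, prod))
        else:
            out.append((tag, s))
    return tuple(out)

# toy example: G = H = (Z, +), so the "product" is addition and the identity is 0
identities = {"G": 0, "H": 0}
multiply = {"G": lambda a, b: a + b, "H": lambda a, b: a + b}
word = [("G", 2), ("G", 3), ("H", 0), ("H", 1), ("H", -1), ("G", 4)]
print(reduce_word(word, identities, multiply))   # (('G', 9),)
</code></pre>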
<p>After performing this elementary exercise I understand perfectly well why nobody defines the free product in this way, but I still wanted to ask:</p>
<ol>
<li>Is this correct?</li>
<li>Is it written somewhere?</li>
</ol>
|
Jack Lee
| 1,421 |
<p>In answer to your question about whether this is written somewhere: the construction of free products in my book <em>Introduction to Topological Manifolds</em> proceeds very much along these lines. (I'm not the only one who's written it down, but I don't have other references handy.) I don't go through the set-theoretic description of the set of ordered tuples in as much detail as you do, but it can be described more simply as just the disjoint union of the sets $(G\amalg H)^n$ over all n. I like the concreteness of this construction as a way of giving one a handle on what the free product looks like. Once the group is constructed, it's a relatively easy matter to prove after the fact that it is uniquely determined up to isomorphism as the coproduct in the category of groups.</p>
|
4,570,396 |
<p>Is the set <span class="math-container">$\{\frac 1n :n\in\mathbb{N}\}$</span> countable?</p>
<p>This was part of the Exercises in a book on Real Analysis. The answer given is that this set is uncountable.</p>
<p><strong>My work:</strong></p>
<p>I'm just starting on Real Analysis and , to my knowledge, this set is countable, since we can map every element of this set to natural numbers as
<span class="math-container">\begin{eqnarray}1&\to&1\\
2&\to&\frac12\\
3&\to&\frac13
\end{eqnarray}</span>
and so on.</p>
<p>So, is there a mistake in the book?</p>
|
Adam Rubinson
| 29,156 |
<blockquote>
<p>...To my knowledge, this set is countable, since we can map every element of this set to natural numbers as
<span class="math-container">\begin{eqnarray}1&\to&1\\
2&\to&\frac12\\
3&\to&\frac13
\end{eqnarray}</span>
and so on.</p>
</blockquote>
<p>This is sort of the correct reasoning for why the set is countable.</p>
<p>There are two common definitions of "countable".</p>
<p>One common definition of "Set <span class="math-container">$X$</span> is <em>countable</em>" is that: "there exists an <em><a href="https://en.wikipedia.org/wiki/Injective_function" rel="nofollow noreferrer">injection</a></em> <span class="math-container">$f:X\to\mathbb{N}.$</span>"</p>
<p>The other definition of "Set <span class="math-container">$X$</span> is <em>countable</em>" which is that: "there exists a <a href="https://en.wikipedia.org/wiki/Bijection" rel="nofollow noreferrer">bijection</a> <span class="math-container">$f:X\to\mathbb{N}.$</span>"</p>
<p>The distinction between the two definitions is that the first definition allows for finite sets to be called "countable", whereas the second definition does not. [Note that countably infinite sets satisfy both definitions.]</p>
<p>Let's adopt the second definition. If you want to be more rigorous, you will have to <em>either</em> show that the map</p>
<p><span class="math-container">$$ f:\mathbb{N}\to \left\{ \frac{1}{n}: n\in\mathbb{N} \right\}; \qquad f(n)=\frac{1}{n}$$</span></p>
<p>is bijective, or equivalently - because "is bijective" is an <a href="https://en.wikipedia.org/wiki/Equivalence_relation" rel="nofollow noreferrer">equivalence relation</a> - that the map</p>
<p><span class="math-container">$$ g: \left\{ \frac{1}{n}: n\in\mathbb{N} \right\} \to \mathbb{N}; \qquad g(n)=\frac{1}{n} $$</span></p>
<p>is bijective.</p>
<p>Note that we cannot <em>assume</em> that either function has an inverse, since in order for a function to have an inverse, it must first be true that the function is a bijection, which is what we are trying to prove.</p>
<p>Anyway, the moral of the story is, if you're stuck in maths, then going back to the definitions is usually not a bad idea.</p>
|
719,056 |
<p>I'm writing a Maple procedure and I have a line that is "if ... then ..." and I would like it to be if k is an integer (or if k^2 is a square) - how would one say that in Maple? Thanks!</p>
|
Tony
| 39,296 |
<p>The notation $\frac{dy}{dx} = \frac{dy}{du} * \frac{du}{dx}$ is valid. However, you cannot prove the chain rule just by "cancelling" the two $du$'s; it doesn't work that way.</p>
|
400,715 |
<p>Consider the metric space $(\mathbb{Q},d)$ where $\mathbb{Q}$ denotes the rational numbers and $d(x,y)=|x-y|$. Let $$E:=\{x \in\mathbb{Q}:x>0, 2<x^2<3\}$$</p>
<p>Is $E$ closed and bounded in $\mathbb{Q}?$ Is it compact? Justify your answers.</p>
|
ncmathsadist
| 4,154 |
<p>It is a metric space. A compact metric space is necessarily complete. This will tell you about compactness.</p>
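<p>(To spell the hint out a bit: a sequence of rationals in $E$ that converges in $\mathbb{R}$ to $\sqrt 2$ is Cauchy but has no limit in $\mathbb{Q}$, so $E$ is not complete and therefore cannot be compact.)</p>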
|
26,192 |
<p>I have a list of rules that represents a list of parameters to be applied to a circuit model (Wolfram SystemModeler model): </p>
<pre><code>sk = {
{R1 -> 10080., R2 -> 10080., C1 -> 1.*10^-7, C2 -> 9.8419*10^-8},
{R1 -> 10820., R2 -> 4984.51, R3 -> 10000., R4 -> 10000., C1 -> 1.*10^-7,
C2 -> 1.85417*10^-7},
{R1 -> 12600., R2 -> 12600., C1 -> 1.*10^-7, C2 -> 6.29882*10^-8},
{R1 -> 16420., R2 -> 16420., C1 -> 1.*10^-7, C2 -> 3.70897*10^-8},
{R1 -> 26120., R2 -> 26120., C1 -> 1.*10^-7, C2 -> 1.46573*10^-8},
{R1 -> 76600., R2 -> 1283.61, R3 -> 10000., R4 -> 10000., C1 -> 1.*10^-7,
C2 -> 1.01704*10^-7}};
</code></pre>
<p>Before I can apply these values to the model parameters I have to rename them. The list above consists of six lists - four lists of four rules and two lists of six rules. Those that have 4 rules should be named "sallenKeyUnityGain" and those that have 6 rules should be named "sallenKey". This is what I have so far: </p>
<pre><code>Table[If[Length[sk[[i]]] > 4, cirname = "sallenKey", cirname = "sallenKeyUnityGain"];
(cirname <> ToString[i] <> "." <> ToString[sk[[i]][[All, 1]][[j]]]) -> sk[[i]][[All, 2]][[j]]
, {i, Length[sk]}, {j, Length[sk[[i]]]}]
</code></pre>
<p>And this is the output: </p>
<pre><code>{{"sallenKeyUnityGain1.R1" -> 10080., "sallenKeyUnityGain1.R2" -> 10080.,
"sallenKeyUnityGain1.C1" -> 1.*10^-7, "sallenKeyUnityGain1.C2" -> 9.8419*10^-8},
{"sallenKey2.R1" -> 10820., "sallenKey2.R2" -> 4984.51, "sallenKey2.R3" -> 10000.,
"sallenKey2.R4" -> 10000., "sallenKey2.C1" -> 1.*10^-7,
"sallenKey2.C2" -> 1.85417*10^-7},
{"sallenKeyUnityGain3.R1" -> 12600., "sallenKeyUnityGain3.R2" -> 12600.,
"sallenKeyUnityGain3.C1" -> 1.*10^-7, "sallenKeyUnityGain3.C2" -> 6.29882*10^-8},
{"sallenKeyUnityGain4.R1" -> 16420., "sallenKeyUnityGain4.R2" -> 16420.,
"sallenKeyUnityGain4.C1" -> 1.*10^-7, "sallenKeyUnityGain4.C2" -> 3.70897*10^-8},
{"sallenKeyUnityGain5.R1" -> 26120., "sallenKeyUnityGain5.R2" -> 26120.,
"sallenKeyUnityGain5.C1" -> 1.*10^-7, "sallenKeyUnityGain5.C2" -> 1.46573*10^-8},
{"sallenKey6.R1" -> 76600., "sallenKey6.R2" -> 1283.61, "sallenKey6.R3" -> 10000.,
"sallenKey6.R4" -> 10000., "sallenKey6.C1" -> 1.*10^-7,
"sallenKey6.C2" -> 1.01704*10^-7}}
</code></pre>
<p>This would work fine if all were sallenKey or if all were sallenKeyUnityGain. However, I would like the output to look like this:</p>
<pre><code> {{"sallenKeyUnityGain1.R1" -> 10080., "sallenKeyUnityGain1.R2" -> 10080.,
"sallenKeyUnityGain1.C1" -> 1.*10^-7, "sallenKeyUnityGain1.C2" -> 9.8419*10^-8},
{"sallenKey1.R1" -> 10820., "sallenKey1.R2" -> 4984.51, "sallenKey1.R3" -> 10000.,
"sallenKey1.R4" -> 10000., "sallenKey1.C1" -> 1.*10^-7,
"sallenKey1.C2" -> 1.85417*10^-7},
{"sallenKeyUnityGain2.R1" -> 12600., "sallenKeyUnityGain2.R2" -> 12600.,
"sallenKeyUnityGain2.C1" -> 1.*10^-7, "sallenKeyUnityGain2.C2" -> 6.29882*10^-8},
{"sallenKeyUnityGain3.R1" -> 16420., "sallenKeyUnityGain3.R2" -> 16420.,
"sallenKeyUnityGain3.C1" -> 1.*10^-7, "sallenKeyUnityGain3.C2" -> 3.70897*10^-8},
{"sallenKeyUnityGain4.R1" -> 26120., "sallenKeyUnityGain4.R2" -> 26120.,
"sallenKeyUnityGain4.C1" -> 1.*10^-7, "sallenKeyUnityGain4.C2" -> 1.46573*10^-8},
{"sallenKey2.R1" -> 76600., "sallenKey2.R2" -> 1283.61, "sallenKey2.R3" -> 10000.,
"sallenKey2.R4" -> 10000., "sallenKey2.C1" -> 1.*10^-7,
"sallenKey2.C2" -> 1.01704*10^-7}}
</code></pre>
<p>In other words, if I have 4 apples and 2 oranges, instead of having:<br>
apple1, orange2, apple3, apple4, apple5, orange6, I would like:<br>
apple1, orange1, apple2, apple3, apple4, orange2. How would I do this?<br>
Thank you<br>
Tatjana</p>
|
andre314
| 5,467 |
<p>Here is a solution :</p>
<pre><code>myFormat[ruleList_, {type_, counter_}] :=
MapAt[type <> ToString[counter] <> "." <> ToString[#] &, #, 1] & /@ ruleList
(* example : myFormat[{R1->1,R2->2}, {"sallenKey",5}] gives : *)
(* {"sallenKey5.R1"->1, "sallenKey5.R2"->2} *)
sallenKeyCounter = 0;
sallenKeyUnityGainCounter = 0;
res = myFormat[#, If[
Length[#] <= 4,
{"sallenKeyUnityGain", sallenKeyUnityGainCounter = sallenKeyUnityGainCounter + 1},
{"sallenKey", sallenKeyCounter = sallenKeyCounter + 1}
]] & /@ sk ;
</code></pre>
<p>myFormat is a auxiliary formatting function which is applied to each list of rules completed with the type/number of the item.</p>
<pre><code>res // Grid
</code></pre>
<p><img src="https://i.stack.imgur.com/jpmid.jpg" alt="enter image description here"></p>
|
186,477 |
<p>The question is.</p>
<p>Is the converse true: In a simply connected domain every harmonic function has its conjugate?</p>
<p>I am not able to get an example to disprove the statement.</p>
|
timur
| 2,473 |
<p>No, to the question as stated by OP in the comments. Take the real part of a holomorphic function on an annulus.</p>
<p>On the other hand, look at the comment by Jonas.</p>
|
186,477 |
<p>The question is.</p>
<p>Is the converse true: In a simply connected domain every harmonic function has its conjugate?</p>
<p>I am not able to get an example to disprove the statement.</p>
|
Federico
| 128,293 |
<p>The answer is yes. If a domain is not simply connected, you can always construct a harmonic function without a harmonic conjugate there. For example, for an annulus centred at the origin, take $f(x,y)=\log\sqrt{x^2+y^2}$ on the annulus. This cannot have a harmonic conjugate, as if it did you would get a branch of the logarithm analytic in such a domain. </p>
<p>You can look at a nice explanation for the construction in the general case here:</p>
<p><a href="https://math.stackexchange.com/questions/711950/show-omega-is-simply-connected-if-every-harmonic-function-has-a-conjugate?rq=1">Show $\Omega$ is simply connected if every harmonic function has a conjugate</a></p>
|
2,404,258 |
<p>I'm asked to prove that </p>
<h2>$|z-i||z+i|=2$</h2>
<p>defines an ellipse in the plane.</p>
<p>I have tried replacing $z = x+iy $ in the previous equation and brute forcing the result to no avail. Considering that $ |z-i||z+i| = |z^2+1| $ eases the algebra a bit but didn't help me that much.</p>
<p>Edit: I know it seems like there's a missing plus sign: $|z-i| + |z+i| = 2$ in the question, but that's what the exercise says. </p>
<p>In fact, the next exercise wants us to prove that $ |z-1||z+i| = 2$ defines a line in the complex plane.</p>
|
user1337
| 62,839 |
<p>The locus of the points $P$ which have the product of their distances from two given points (foci) constant is known as a <a href="https://en.wikipedia.org/wiki/Cassini_oval" rel="nofollow noreferrer">Cassini oval</a>. Thus I also think that there should be a + sign rather than multiplication in your expression.</p>
|
1,468,097 |
<p>Below is the problem:</p>
<p>Choose a point uniformly at random from the triangle with vertices (0,0), (0, 30), and (20, 30). Let (X, Y ) be the coordinates of the chosen point. (a) Find the cumulative distribution function of X. (b) Use part (a) to find the density of X.</p>
<p>First, for the first part of the question, a triangle is formed by three vertices in the question. Therefore it would look something like an inverted triangle whose right angle is formed by y-axis and the line that goes through (0,30) and (20, 30)</p>
<p>If I find the area of this triangle it would be 300 since
(30)(20) / 2 = 300</p>
<p>so for (X, Y) coordinates in the area, the probability of the point being in the triangle is 1/300.</p>
<p>So to find cdf, since the equation of the line that goes through the origin and (20, 30) is y = (3/2)x, I think cdf is</p>
<p>F(X) =
1. (3/2)x * x * 1/2 * 1/300 = (1/400) * x^2 for 0 <= x <= 20
2. 0 for x < 0
3. 1 for x > 20</p>
<p>But I am not really sure if my process is right </p>
<p>For part (b), I know I just have to take the derivative, but if I take the derivative, I am only calculating the area under the slope while I am supposed to find the area of the inverted triangle.
Since the areas of the two triangles are equal, can I just take the derivative? Or should the area be something like 1 - (derivative)?</p>
|
Empiricist
| 189,188 |
<p>Consider $V = \mathbb{C}^2$ and $u = (1,0), v = (0,1), w = (1,1)$. Then $u,v,w$ are pairwise linearly independent and but the whole set it not.</p>
|
141,112 |
<p>I'm trying to put the following equation in determinant form:
$12h^3 - 6ah^2 + ha^2 - V = 0$, where $h, a, V$ are variables (this is a volume for a pyramid frustum with $1:3$ slope, $h$ is the height and $a$ is the side of the base, $V$ is the volume). </p>
<p>The purpose of identifying the determinant is to construct a nomogram. I'm not sure if it actually can be placed in determinant form, and I'm curious if there is a Mathematica function that can do this?
I've been trying a pen and pencil approach as listed <a href="http://www.projectrho.com/nomogram/determinant06.html" rel="nofollow">here</a>. But this approach has hopefully been automated. </p>
<p>Any tips are appreciated!</p>
|
Jason Walker
| 53,357 |
<p>It's a genus 1 type as described <a href="http://www.projectrho.com/nomogram/standardForms.html" rel="nofollow">here</a> with determinantal form:</p>
<p>$$\left| \begin{array}{ccc}
0 & V & 1 \\
1 & 6a-a^2 & 1 \\
\frac{h}{h+1} & \frac{12h^3}{h+1} & 1 \end{array} \right|=0$$</p>
|
141,112 |
<p>I'm trying to put the following equation in determinant form:
$12h^3 - 6ah^2 + ha^2 - V = 0$, where $h, a, V$ are variables (this is a volume for a pyramid frustum with $1:3$ slope, $h$ is the height and $a$ is the side of the base, $V$ is the volume). </p>
<p>The purpose of identifying the determinant is to construct a nomogram. I'm not sure if it actually can be placed in determinant form, and I'm curious if there is a Mathematica function that can do this?
I've been trying a pen and pencil approach as listed <a href="http://www.projectrho.com/nomogram/determinant06.html" rel="nofollow">here</a>. But this approach has hopefully been automated. </p>
<p>Any tips are appreciated!</p>
|
Glen_b
| 47,748 |
<p>As it stands there are 6 linearly independent terms to deal with ($a$, $a^2$, $h$, $h^2$, $h^3$ and $V$). Without being able to spot a common factor somewhere it's not clear one can accommodate so many terms (the three terms in $h$ in particular make it hard). </p>
<p>Saint-Robert's criterion indicates that it can't be done with three straight axes.</p>
<p>After playing around trying to simplify it (by trying to eliminate a term in various ways) in order to get it in a form dealt with by Clark, I think that perhaps Clark's approach may not be sufficient, which suggests one might have to go to Warmus' laborious criteria to even see if it's possible as it stands, but I should go back and check it against either the Massau or Lecornu conditions first.</p>
<p>If you are allowed to reparameterize it to be in terms of say $p = \frac{h}{a}$ and $a$, or even easier, $q = 1 - \frac{h}{a}$ and $a$, then you can get somewhere. </p>
<p>If a good approximation is okay, it can be done with an order 4 (i.e. genus I) nomogram pretty closely, and one can generate better approximations by upping the order, possibly* to such a close approximation compared to the error in using an actual 'exact' nomogram that it wouldn't matter - typically anything below about half a percent error, though with care it can be a bit lower; if you get the worst approximation error much lower than that, you would have trouble telling an exact nomogram from an approximate one.</p>
<p>*(depending on what range of values you want to cover, I just assumed some values in order to play with it)</p>
<hr>
<p>Edit: I see now that $\frac{V}{a^3}-\frac{1}{18}=(6\frac{h}{a}-1)^3/18$.</p>
<p>This suggests some possibility involving a pair of (possibly overlaid or back-to-back) $N$-charts but I don't currently see how to make it work without requiring multiple isopleth lines drawn - which is likely more effort for the end-user than it's worth.</p>
<p>Still, the observation may help someone spot a way to take it further.</p>
|
3,207,453 |
<p>studying the series <span class="math-container">$\sum_\limits{n=2}^\infty \frac{1}{n(\log n)^ {2}}$</span>.</p>
<p>I've tried with the root criterion</p>
<p><span class="math-container">$\lim_{n \to \infty} \sqrt[n]{\frac{1}{n(\log n)^ {2}}}>1$</span>
and the series should diverge.</p>
<p>But I'm not sure.
Can someone help me understand?</p>
|
José Carlos Santos
| 446,262 |
<p>It turns out that that limit is equal to <span class="math-container">$1$</span>, not bigger.</p>
<p>On the other hand, you can use the integral test: in order to determine whether the integral<span class="math-container">$$\int_2^\infty\frac1{x\log^2x}\,\mathrm dx$$</span>converges or not, use the fact that it is equal to<span class="math-container">$$\int_2^\infty\frac{\log'x}{\log^2x}\,\mathrm dx.$$</span></p>
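<p>(Spelling the hint out one step further: <span class="math-container">$$\int_2^\infty\frac{\log'x}{\log^2x}\,\mathrm dx=\left[-\frac{1}{\log x}\right]_2^\infty=\frac{1}{\log 2}<\infty,$$</span> so the integral converges and, by the integral test, so does the series.)</p>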
|
4,215,824 |
<p>Is there a formal definition of "almost always less than" or "almost always greater than"? I think one could define it using probabilities but not sure how to go about it. If one could show the following, then I think you could say <span class="math-container">$X$</span> is almost always less than a value <span class="math-container">$x$</span>. Is there other ways of going about this?</p>
<p><span class="math-container">$$
P(X<x)=1
$$</span></p>
|
Ethan Bolker
| 72,858 |
<p>The precise meaning of "almost always" depends on context. In measure theory it means "except on a set of measure <span class="math-container">$0$</span>".</p>
<p>Probability is often discussed in a measure theoretic context. In that context your probability assertion makes sense. For example, on the unit interval the value of <span class="math-container">$x^2$</span> is almost always less than <span class="math-container">$1$</span> when <span class="math-container">$x$</span> is chosen at random from the uniform distribution since the set on which it has its maximum value <span class="math-container">$1$</span> is the singleton <span class="math-container">$\{1\}$</span> with measure (probability) <span class="math-container">$0$</span>.</p>
|
3,784,889 |
<p><span class="math-container">$$\begin{array}{ll} \text{minimize} & f(x,y,z) := x^2 + y^2 + z^2\\ \text{subject to} & g(x,y,z) := xy - z + 1 = 0\end{array}$$</span></p>
<hr />
<p>I tried the Lagrange multipliers method and the system resulted from has no solution. So I posted it to see if the question is wrong by itself or I'm missing something.</p>
<p>So I made the Lagrangian equation <span class="math-container">$L(x,y,z,λ)=x^2 + y^2 + z^2 + λ(xy -z+1)$</span></p>
<p>and then</p>
<p><span class="math-container">$θL/θx = 2x + λy =0$</span></p>
<p><span class="math-container">$θL/θy = 2y + λx =0$</span></p>
<p><span class="math-container">$θL/θz = 2z - λ =0$</span></p>
<p><span class="math-container">$θL/θλ = xy -z +1 =0 $</span></p>
<p>The obvious solution for that system is x=0 , y=0 , z=1 and λ=2</p>
<p>But solving it in an online solver for nonlinear systems of equation the answer I get is that it's unsolvable.</p>
<p>So my question is: what am I doing wrong?</p>
|
Michael Rozenberg
| 190,319 |
<p><span class="math-container">$$x^2+y^2+z^2=x^2+y^2+z^2+2(xy-z+1)=(x+y)^2+(z-1)^2+1\geq1.$$</span>
The equality occurs for <span class="math-container">$x=y=0$</span> and <span class="math-container">$z=1$</span>, which says that we got a minimal value.</p>
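<p>(So the constrained minimum of <span class="math-container">$x^2+y^2+z^2$</span> is <span class="math-container">$1$</span>, attained at <span class="math-container">$(0,0,1)$</span>, which is the same point the Lagrange system produces with <span class="math-container">$\lambda=2$</span>.)</p>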
|
1,932,599 |
<p>Find the solution of the differential equation that satisfies the given initial condition <span class="math-container">$$~y'=\frac{x~y~\sin x}{y+1}, ~~~~~~y(0)=1~$$</span></p>
<p>When I integrate this function I get
<span class="math-container">$$y+\ln(y)= -x\cos x + \sin x + C.$$</span></p>
<p>Have I integrated the function correctly? </p>
<p>How do I complete the second part of the question <span class="math-container">$~y(0)=1~$</span></p>
|
Dr. Sonnhard Graubner
| 175,066 |
<p>we write the equation in the form
$$\left(1+\frac{1}{y}\right)dy=x\sin(x)dx$$
integrating we obtain
$$y+\ln|y|=\sin(x)-x\cos(x)+C$$</p>
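<p>To use the initial condition $~y(0)=1~$, substitute $x=0$, $y=1$: $1+\ln 1=\sin(0)-0\cdot\cos(0)+C$, so $C=1$ and the solution is given implicitly by
$$y+\ln|y|=\sin(x)-x\cos(x)+1$$</p>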
|
3,102,218 |
<p>Given a fraction:</p>
<p><span class="math-container">$$\frac{a}{b}$$</span></p>
<p>I now add a number <span class="math-container">$n$</span> to both numerator and denominator in the following fashion:</p>
<p><span class="math-container">$$\frac{a+n}{b+n}$$</span></p>
<p>The basic property is that the second fraction is suppose to closer to <span class="math-container">$1$</span> than the first one. My question is how can we prove that?</p>
<p>What I have tried:</p>
<p>I know <span class="math-container">$\frac{n}{n} = 1$</span> so now adding numbers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> to it would actually "move it away" from <span class="math-container">$1$</span>. But I cannot understand why <span class="math-container">$\frac{a}{b}$</span> is actually farther away from <span class="math-container">$1$</span> than <span class="math-container">$\frac{a+n}{b+n}$</span>.</p>
<p>Why is that? What does it mean to add a number to both the numerator and denominator?</p>
|
Rhys Hughes
| 487,658 |
<p>Let <span class="math-container">$a=kb$</span>. (<span class="math-container">$k$</span> doesnt necessarily have to be an integer). Then:</p>
<p><span class="math-container">$$\frac ab = k$$</span></p>
<p><span class="math-container">$$\frac{a+n}{b+n}=\frac{k(b+n)-(k-1)n}{b+n}$$</span>
<span class="math-container">$$=k-\frac{kn-n}{b+n}$$</span></p>
<p>Can you show the extra term is positive when <span class="math-container">$k>1$</span>, and negative when <span class="math-container">$k<1$</span>? (Hint: let <span class="math-container">$k=1+t$</span> for first case and <span class="math-container">$k=1-t$</span> for the second)</p>
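<p>An alternative direct computation (assuming <span class="math-container">$a,b,n$</span> are positive): <span class="math-container">$$\frac{a+n}{b+n}-1=\frac{a-b}{b+n},\qquad \frac ab-1=\frac{a-b}{b},$$</span> and since <span class="math-container">$b+n>b>0$</span> the first quantity is smaller in absolute value, so <span class="math-container">$\frac{a+n}{b+n}$</span> is at least as close to <span class="math-container">$1$</span> as <span class="math-container">$\frac ab$</span> is (strictly closer when <span class="math-container">$a\neq b$</span>).</p>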
|
299,795 |
<p>I seem to have completely lost my bearing with implicit differentiation. Just a quick question:</p>
<p>Given $y = y(x)$ what is $$\frac{d}{dx} (e^x(x^2 + y^2))$$</p>
<p>I think it's the $\frac d{dx}$ confusing me; I don't know what effect it has compared to $\frac{dy}{dx}$. Any help will be greatly appreciated.</p>
|
Librecoin
| 53,846 |
<p>I just want to add that Wolfram Alpha's solutions $\left\{\text{$\color{blue}{x_\mathrm{I} = 0}$, $\color{green}{x_\mathrm{II} = (-1)^{2/3}}$, $\color{red}{x_\mathrm{III} = -\sqrt[3]{-1}}$} \right\}$ are indeed correct.</p>
<p>As others already have noted, the equation can be factorized into $x^7(x^2+x+1)=0$. Clearly, the first factor $x^7=0$ has only the root $\color{blue}{x_1 = 0 = x_\mathrm{I}}$. By the <a href="http://en.wikipedia.org/wiki/Quadratic_formula#Quadratic_formula" rel="nofollow">quadratic formula</a>, the second factor $x^2+x+1=0$ has the two roots $x_{\color{green}{2},\color{red}{3}} = \dfrac{-1 \pm i\sqrt{3}}{2}$. Now, recall <a href="http://en.wikipedia.org/wiki/Euler%27s_formula" rel="nofollow">Euler's formula</a>. We see that $e^{i\pi} = \cos \pi + i\sin \pi = -1$, so
\begin{equation*}
\color{green}{x_\mathrm{II} = (-1)^{2/3} = e^{i2\pi/3} = \,\underbrace{\cos \frac{2\pi}{3}}_{-1/2} + i\,\underbrace{\sin \frac{2\pi}{3}}_{\sqrt{3}/2} = x_2}
\end{equation*}
and
\begin{equation*}
\color{red}{x_\mathrm{III} = -\sqrt[3]{-1} = -e^{i\pi/3} = -\,\underbrace{\cos \frac{\pi}{3}}_{1/2} - i\,\underbrace{\sin \frac{\pi}{3}}_{\sqrt{3}/2} = x_3}.
\end{equation*}
Hence Wolfram Alpha's solutions are also correct.</p>
|
3,194,249 |
<p>If we are given <span class="math-container">$d$</span> an integer, then I've seen it written that the floor of <span class="math-container">$\sqrt{d^2-1}=d-1$</span>, but how is this found?</p>
<p>It's not immediately obvious, at least not to me. It makes me feel there must be an algorithm that can be used.</p>
<p>Would anyone be able to impart their knowledge on how to find the floor of such an expression?</p>
<p>I read it here : <a href="https://math.stackexchange.com/questions/2230529/show-that-the-simple-continued-fraction-of-sqrtd2-1-is-d-1-overline1">Show that the simple continued fraction of $\sqrt{d^2-1}$ is $[d-1; \overline{1, 2d-1}]$ for $d \geq 2$</a></p>
|
Martin Argerami
| 22,857 |
<p>Saying that <span class="math-container">$\lfloor\sqrt{ d^2-1}\rfloor=d-1$</span> amounts to say
<span class="math-container">$$
d-1\leq\sqrt{d^2-1}<d.
$$</span>
The inequality on the right is trivial since <span class="math-container">$d^2-1<d^2$</span>. For the inequality on the left,
<span class="math-container">$$
\sqrt{d^2-1}=\sqrt{(d+1)(d-1)}\geq\sqrt{(d-1)^2}=d-1.
$$</span>
All that we are using is that <span class="math-container">$d-1\geq0$</span> (so that the square root cancels the square) and that <span class="math-container">$d+1>d-1$</span>. </p>
|
2,259,099 |
<blockquote>
<p><em><strong>Theorem 29.2.</strong></em> Let <span class="math-container">$X$</span> be a Hausdorff space. Then <span class="math-container">$X$</span> is locally compact if and only if given <span class="math-container">$x \in X$</span>, and given a neighborhood <span class="math-container">$U$</span> of <span class="math-container">$x$</span>,there is a neighborhood <span class="math-container">$V$</span> of <span class="math-container">$x$</span> such that <span class="math-container">$\overline{V}$</span> is compact and <span class="math-container">$\overline{V} \subseteq U$</span>.</p>
<p><em>Proof</em>: Clearly this new formulation implies local compactness; the set <span class="math-container">$C = \overline{V}$</span> is the desired compact set containing a neighborhood of <span class="math-container">$x$</span>. To prove the converse, suppose <span class="math-container">$X$</span> is locally compact, let <span class="math-container">$x$</span> be a point of <span class="math-container">$X$</span> and let <span class="math-container">$U$</span> be a neighborhood of <span class="math-container">$x$</span>. Take the one-point compactification <span class="math-container">$Y$</span> of <span class="math-container">$X$</span>, and let <span class="math-container">$C$</span> be the set <span class="math-container">$Y - U$</span>. Then <span class="math-container">$C$</span> is closed in <span class="math-container">$Y$</span>, so that <span class="math-container">$C$</span> is a compact subspace of <span class="math-container">$Y$</span>. Apply Lemma 26.4 to choose disjoint open sets <span class="math-container">$V$</span> and <span class="math-container">$W$</span> containing <span class="math-container">$x$</span> and <span class="math-container">$C$</span>, respectively. Then the closure <span class="math-container">$\overline{V}$</span> of <span class="math-container">$V$</span> in <span class="math-container">$Y$</span> is compact, furthermore, <span class="math-container">$\overline{V}$</span> is disjoint from <span class="math-container">$C$</span>, so that <span class="math-container">$\overline{V} \subset U$</span>, as desired.</p>
</blockquote>
<p>I didn't understand how exactly the fact of <span class="math-container">$V$</span> and <span class="math-container">$W$</span> are disjoint opent sets containing <span class="math-container">$x$</span> and <span class="math-container">$C$</span>, respectively imply that <span class="math-container">$\overline{V}$</span> of <span class="math-container">$V$</span> in <span class="math-container">$Y$</span> is compact, someone can help me? Thanks in advance!</p>
|
Tob Ernack
| 275,602 |
<p>The set $\overline{V}$ is closed, and since $Y$ is compact, closed subsets of $Y$ are compact by Theorem 26.2 in Munkres.</p>
|
2,081,612 |
<p>I'd like to evaluate the following integral:</p>
<p>$$\int \frac{\cos^2 x}{1+\tan x}dx$$</p>
<p>I tried integration by substitution, but I was not able to proceed. </p>
|
lab bhattacharjee
| 33,337 |
<p>HINT:</p>
<p>$$\frac{\cos^2x}{1+\tan x}=\frac{\cos^3x}{\cos x+\sin x}$$</p>
<p>Now $\cos x+\sin x=\sqrt2\cos\left(x-\dfrac\pi4\right)$</p>
<p>Set $x-\dfrac\pi4=u\implies\cos x=\cos\left(u+\dfrac\pi4\right)=\dfrac{1}{\sqrt2}(\cos u-\sin u)$</p>
|
2,081,612 |
<p>I'd like to evaluate the following integral:</p>
<p>$$\int \frac{\cos^2 x}{1+\tan x}dx$$</p>
<p>I tried integration by substitution, but I was not able to proceed. </p>
|
StackTD
| 159,845 |
<p>Here's another way; rewrite:
$$\int \frac{\cos^2 x}{1+\tan x} \,\mbox{d}x
= \int \frac{\sec^2 x}{\left(1+\tan x\right) \sec^4 x} \,\mbox{d}x
= \int \frac{\sec^2 x}{\left(1+\tan x\right)\left( 1+\tan^2x\right)^2} \,\mbox{d}x $$</p>
<p>Now set $u = \tan x$ to get:
$$\int \frac{1}{\left(1+u\right)\left( 1+u^2\right)^2} \,\mbox{d}u$$
And you can continue with partial fractions. Tedious, but it works.</p>
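<p>For reference, the decomposition works out to (one can check it by clearing denominators)
$$\frac{1}{\left(1+u\right)\left( 1+u^2\right)^2} = \frac14\cdot\frac{1}{1+u}+\frac14\cdot\frac{1-u}{1+u^2}+\frac12\cdot\frac{1-u}{\left(1+u^2\right)^2},$$
after which each piece integrates to a logarithm, an arctangent, or a rational function.</p>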
|
2,081,612 |
<p>I'd like to evaluate the following integral:</p>
<p>$$\int \frac{\cos^2 x}{1+\tan x}dx$$</p>
<p>I tried integration by substitution, but I was not able to proceed. </p>
|
Alexandre C.
| 61,517 |
<p>This one always works for rational functions of $\sin x$ and $\cos x$ but can be a bit tedious. Set:
$$ z = \tan x / 2$$
so that
$$ \mathrm{d}x = \frac{2\,\mathrm{d} z}{1 + z^2}$$
$$ \cos x = \frac{1 - z^2}{1 + z^2}$$
$$\sin x = \frac{2z}{1 + z^2}$$
Now, you have a rational fraction in $z$ that you can integrate by standard methods (partial fraction decomposition).</p>
<p>There are often simpler (and trickier) substitutions for this kind of integrals, but this one will always do the job.</p>
|
5,927 |
<p>I have a problem with the binomial coefficient $\binom{5}{7}$. I know that the solution is zero, but I have problems to reproduce that:</p>
<p>${\displaystyle \binom{5}{7}=\frac{5!}{7!\times(5-7)!}=\frac{5!}{7!\times(-2)!}=\frac{120}{5040\times-2}=\frac{120}{-10080}=-\frac{1}{84}}$</p>
<p>Where is my mistake?</p>
|
Sy123
| 61,808 |
<p>Although not as formal, by relying only on the combinatorial definition of the binomial coefficient we can see that it is zero straight away.
Consider that:</p>
<p>$\binom{m}{k}$ is the number of ways to choose k elements from a numbered set of m elements.</p>
<p>It is obvious that we cannot select more than what we have, so if m < k, then the answer is already zero because for example in this case, if we have 5 apples, then it is impossible to select 7 apples from the 5, hence there are zero combinations.</p>
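<p>As for the algebra in the question: the slip is in treating $(-2)!$ as $-2$. The factorial is not defined for negative integers, so the formula $\binom{n}{k}=\frac{n!}{k!(n-k)!}$ only applies when $0\le k\le n$. If you instead use the falling-factorial form, $\binom{5}{7}=\frac{5\cdot4\cdot3\cdot2\cdot1\cdot0\cdot(-1)}{7!}$, the factor $0$ in the numerator gives $0$ directly.</p>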
|
2,356,593 |
<blockquote>
<p>Quoting:" Prove: if $f$ and $g$ are continuous on $(a,b)$ and $f(x)=g(x)$ for every $x$ in a dense subset of $(a,b)$, then $f(x)=g(x)$ for all $x$ in $(a,b)$."</p>
</blockquote>
<p>Let $S \subset (a,b)$ be a dense subset such that every point $x \in (a,b)$ either belongs to S or is a limit point of S.</p>
<p>There exists $x_n \in S$ such that $\lim\limits_{n \rightarrow \infty} x_n =x$.</p>
<p>As $f$ and $g$ are continuous on (a,b),
$$f(x)= \lim\limits_{n \rightarrow \infty} f(x_n)= \lim\limits_{n \rightarrow \infty} g(x_n)= g(x)$$</p>
<ol>
<li>How can I conclude that "$f(x)=g(x)$ for all $x$ in $(a,b)$"?</li>
<li>Is there a proof with $\epsilon$, $\delta$ definition of limits?</li>
</ol>
<p>Much appreciated</p>
|
gary
| 6,595 |
<p>Assuming these are real-valued functions of a real variable: consider $h=f-g$, which is continuous and equal to $0$ on the dense subset. By sequential continuity, $( x_n \rightarrow x ) \rightarrow (h(x_n) \rightarrow h(x))$, so approaching any point of the complement by points of the dense subset forces the extension to be $0$ there too. For an $\epsilon$-$\delta$ argument: assume $h(x)= a>0$ (WLOG) for some $x$ in the complement of the dense subset. Every neighbourhood of $x$ contains a point $y$ of the dense subset, and for such $y$ we have $|h(x)-h(y)|=|a-0| > a/2 $. So using $\epsilon < a/2 $ in the definition of continuity at $x$ gives a contradiction; hence $h(x)=0$, i.e., $f(x)=g(x)$ in the complement of the dense subset as well.</p>
|
978,399 |
<p>A friend of mine posed this question to me a couple days ago and it's been bugging me ever since. He told me to take the square root of 5 twenty times, subtract 1 from it, and then multiply it by 2^20, giving me a rough approximation of ln (5). I can't for the life of my figure out why this is.</p>
<p>Also: How can you find the natural log of any positive number X using only arithmetic operations (+, x, -, /) and square root?</p>
|
Mats Granvik
| 8,530 |
<p>I checked this in Mathematica. It really does seem to converge to Log(5) accurately after many iterations:</p>
<pre><code>a = N[Sqrt[5], 30];
Do[
a = Sqrt[a], {n, 1, 10000 - 1}]
(a - 1)*2^10000
</code></pre>
<p>Output:</p>
<p>1.60943791243410037460075933323</p>
<p>compared to:</p>
<pre><code>N[Log[5], 30]
</code></pre>
<p>1.60943791243410037460075933323</p>
|
978,399 |
<p>A friend of mine posed this question to me a couple days ago and it's been bugging me ever since. He told me to take the square root of 5 twenty times, subtract 1 from it, and then multiply it by 2^20, giving me a rough approximation of ln (5). I can't for the life of my figure out why this is.</p>
<p>Also: How can you find the natural log of any positive number X using only arithmetic operations (+, x, -, /) and square root?</p>
|
paw88789
| 147,810 |
<p>Let $a$ be a positive real number.</p>
<p>Then
$$\lim_{n\to\infty}(a^\frac1n-1)\cdot n=\lim_{n\to\infty}\frac{a^{\frac1n}-1}{\frac1n}=\lim_{n\to\infty} \frac{a^\frac1n\cdot\ln a\cdot\frac{-1}{n^2}}{\frac{-1}{n^2}}=\ln a$$</p>
<p>Your result is a special case of this with $a=5$, and $n=2^{20}$</p>
|
2,338,321 |
<p>Under given standardized random variable $Z = (X-\mu)/\sigma$ Show that </p>
<p>$$C_3(Z) = M_3(Z)\;\;\text{and}\;\; C_4(Z) = M_4(Z) -3$$</p>
<p>where $C_r(Z)$ refer to $r$-th cumulant, $M_r(Z)$ refer to $r$-th moment.</p>
<hr>
<p><strong>Question</strong></p>
<p>I've been required to show above relationship by my textbook, but sadly, my textbook doesnot provide well-constructed mathematical definition of moment and cumulant. I've checked Hogg's book, but this book also, is insufficient rigourously prove above statement. Any advice or recommendation for above proof?</p>
|
Michael Hardy
| 11,667 |
<p>The validity of an answer might depend on which characterizations of cumulants you're looking for.</p>
<p>The $r$th cumulant is a polynomial function of the first $r$ moments. The $r$th cumulant is shift-invariant if $r>1.$ "Shift invariant" means that for random variables $X$ and constants $c$ ("constant" = not random) the $r$th cumulant of the probability distribution of $X+c$ is the same as the $r$th cumulant of the probability distribution of $X,$ i.e. $C_r(X+c) = C_r(X).$ Because of shift invariance, you can take the $r$th cumulant to be a polynomial in the first $r$ <em>central</em> moments.</p>
<p>Thus the $4$th cumulant must be a polynomial function of the second central moment (i.e. the variance), the $3$rd central moment, and the $4$th central moment (the $4$th central moment of the probability distribution of $X$ is $\operatorname{E}((X-\mu)^4)$ where $\mu = \operatorname{E}(X)$).</p>
<p>The $4$th cumulant should also be $4$th-degree homogeneous, i.e. $C_4(cX) = c^4 C_4(X).$ That means the third central moment will not appear in the polynomial and the $2$nd central moment needs to get squared. So we have
$$
C_4(X) = \operatorname{E}((X-\mu)^4) + \left( \text{coefficient} \times \left( \operatorname{E}((X-\mu)^2) \right)^2 \right).
$$
(The coefficient of the $4$th central moment is $1$; that is how cumulants get scaled.)
The question now is, what must the “coefficient” be in order that the $4$th cumulant be “cumulative”, i.e. for independent random variables $X_1,\ldots,X_n$ we have
$$
C_4(X_1+\cdots+X_n) = C_4(X_1)+\cdots+C_4(X_n) \text{ ?}
$$
Some algebra using the binomial theorem will show that it must be $-3,$ so that we have
$$
C_4(X) = \operatorname{E}((X-\mu)^4) - 3\big( \operatorname{E}((X-\mu)^2) \big)^2
$$
To see that, first note that no generality is lost by assuming that $\mu=0.$ So we have $X_1,X_2$ independent and $\operatorname{E}(X_1) = \operatorname{E}(X_2) =0.$</p>
<p>Then$\require{cancel}$
\begin{align}
C_4(X_1+X_2) = {} & \operatorname{E}((X_1+X_2)^4) + \left( \text{coefficient} \times \left( \operatorname{E}((X_1+X_2)^2) \right)^2 \right) \\[10pt]
= {} & \operatorname{E}(X_1^4) + 4\operatorname{E}(X_1^3) \cancelto{0}{\operatorname{E}(X_2)} + 6\operatorname{E}(X_1^2)\operatorname{E}(X_2^2) + 4\cancelto{0}{\operatorname{E}(X_1)} \operatorname{E}(X_2^3) + \operatorname{E}(X_2^4) \\
& {} + \text{coefficient} \times (\operatorname{E}(X_1^2) + 2\cancelto{0}{\operatorname{E}(X_1) \operatorname{E}(X_2)} + \operatorname{E}(X_2^2))^2 \\[10pt]
= {} & \operatorname{E}(X_1^4) + 6\operatorname{E}(X_1^2) \operatorname{E}(X_2^2) + \operatorname{E}(X_2^4) \\
& {} + \left( (\text{coefficient}) \times \left( (\operatorname{E}(X_1^2))^2 + 2\operatorname{E}(X_1^2) \operatorname{E}(X_2^2) + (\operatorname{E}(X_2^2))^2 \right) \right)
\\[12pt]
= {} & \overbrace{\operatorname{E}(X_1^4) + \text{coefficient}\times (\operatorname{E}(X_1^2))^2} {} + {} \overbrace{\operatorname{E}(X_2^4) + \text{coefficient}\times (\operatorname{E}(X_2^2))^2} \\
& {} + 6 \operatorname{E}(X_1^2) \operatorname{E}(X_2^2) + (\text{coefficient}) \times 2 \operatorname{E}(X_1^2) \operatorname{E}(X_2^2) \\[10pt]
= {} & C_4(X_1) + C_4(X_2) + \underbrace{6 \operatorname{E}(X_1^2) \operatorname{E}(X_2^2) + (\text{coefficient}) \times 2 \operatorname{E}(X_1^2) \operatorname{E}(X_2^2)} \\[10pt]
= {} & C_4(X_1) + C_4(X_2) + \underbrace{(6 + 2\times\text{coefficient}) \operatorname{E}(X_1^2) \operatorname{E}(X_2^2)} \\[10pt]
= {} & C_4(X_1) + C_4(X_2) \text{ if and only if coefficient} = -3.
\end{align}
$$
\text{Conclusion: } C_4(X) = \operatorname{E}\big((X-\mu)^4\big) - 3 \Big( \operatorname{E}\big((X-\mu)^2\big) \Big)^2.
$$</p>
|
2,736,286 |
<p>Just wondering if I am anywhere near the correct answer.</p>
<p>We have the set $\{z \in \mathbb{C} : |z - 1| < |z + i|\}$</p>
<p>So we let $0 < \delta < |z + i| - |z - 1|$ then use the triangle inequality to prove $\delta + |z - 1| < |z + i|$.</p>
<p>Thanks!</p>
|
S. D. ZHU
| 156,697 |
<p>You are as far as $\delta$ to $0$. </p>
<p>Consider its geometric meaning instead: this set depicts the points closer to $1$ than to $-i$. Can you draw it? </p>
|
1,594,968 |
<p>How much knowledge of group theory is needed in order to begin Galois Theory? Which topics are most relevant?</p>
|
p Groups
| 301,282 |
<p>Some elementary: definitions and examples of groups (of small orders and especially $S_n,A_n$), normal subgroups, cyclic and abelian groups, quotient groups, isomorphism theorem. </p>
<p>[These terminologies enter into the (two) main theorems of Galois theory; the examples are needed just to understand what is going on.]</p>
|
1,578,378 |
<p>Why does $$\int_{-L}^{L} \sum_{n=1}^{\infty}a_n\cos \frac{n\pi x}{L}=\sum_{n=1}^{\infty}a_n\int_{-L}^{L}\cos \frac{n\pi x}{L}$$</p>
<p>This is used in a derivation of the Fourier coefficients. I see why they flip them but I don't understand why this is true and when we are allowed to do such a thing.</p>
<p>Thanks</p>
|
Julián Aguirre
| 4,791 |
<p>These are conditions allowing the interchange of integration and summation, from stronger to weaker:</p>
<ol>
<li>$\sum_{n=1}^\infty|a_n|<\infty$.</li>
<li>$\sum_{n=1}^\infty a_n\cos\dfrac{n\,\pi\,x}{L}$ converges uniformly.</li>
<li>$\sum_{n=1}^\infty a_n\cos\dfrac{n\,\pi\,x}{L}$ is the Fourier series of a Riemann integrable function (whether or not the series converges.)</li>
</ol>
<p>In the study of Fourier series, it is assumed that the interchange is valid to motivate the definition of the Fourier coefficients.</p>
|
2,839,554 |
<p>I would like to formalise some operations I am doing, however it is unclear how I should deal with categorical variables. </p>
<p>Imagine a dataset with 15 distinct couples (<code>ID</code>). Each couple was observed 3 times (<code>time</code>). </p>
<p>Each partner has responded to two questions: <code>p</code> and <code>act</code>. <code>p</code> is a dummy vector {0, 1} while <code>act</code> is a categorical variable with 4 levels {a,b,c,d}. <code>_m</code> refers to partner male and <code>_w</code> to partner female. </p>
<pre><code> ID time p_m p_w act_m act_w
1 A 1 1 1 c b
2 A 2 1 1 b c
3 A 3 1 1 c d
4 B 1 1 1 b b
5 B 2 0 1 a a
6 B 3 1 1 b b
7 C 1 1 1 b b
8 C 2 1 1 c c
9 C 3 1 1 c b
10 D 1 1 1 c b
11 D 2 1 0 b a
12 D 3 1 1 c b
13 E 1 1 1 d d
14 E 2 1 1 b c
15 E 3 1 1 c c
</code></pre>
<p>First, I am interested in formalising the matches on <code>p</code>. Because <code>p</code> is a dummy variable, it seems that I can simply write: </p>
<p>$joint_{jt} = (p_{jt}^{m} \times p_{jt}^{w})$</p>
<p>where $t$ denotes <em>time</em>, $p_{jt}^{m}$ denotes partner $m$ response at time $t$ belonging to couple number $j$ (and vice versa for partner $w$). </p>
<pre><code> ID time p_m p_w act_m act_w joint_j
1 A 1 1 1 c b 1
2 A 2 1 1 b c 1
3 A 3 1 1 c d 1
4 B 1 1 1 b b 1
5 B 2 0 1 a a 0
6 B 3 1 1 b b 1
7 C 1 1 1 b b 1
8 C 2 1 1 c c 1
9 C 3 1 1 c b 1
10 D 1 1 1 c b 1
11 D 2 1 0 b a 0
12 D 3 1 1 c b 1
13 E 1 1 1 d d 1
14 E 2 1 1 b c 1
15 E 3 1 1 c c 1
</code></pre>
<p>However, it is unclear to me if I can use the $\times$ operator for categorical variables. Basically, what I am interested in is when $joint_{jt} = 1$ and $p_m == p_w$, then 1. </p>
<p>My question is, how do you formalise $p_m == p_w$, when $p$ is categorical?
I have been told that I could not use <em>logical operators</em> in my papers (econ, sociology field), but that I should use the arithmetic operators. So, how do you express <code>TRUE/FALSE</code> mathematically? </p>
<p>Now I have </p>
<p>$jointact_{jt} = (joint_{jt} \times act_{jt}^{m} \times act_{jt}^{w})$</p>
<p>But this seems wrong to me. </p>
<p>Could I for instance do, first define a vector $act$ if $act_{jt}^{m} = act_{jt}^{j}$ is true</p>
<p>$$
act_{jt} = \left\{\begin{array}{ll}
1 & act_{jt}^{m} = act_{jt}^{w}\\
0 & otherwise
\end{array}\right.
$$</p>
<p>and then combine $act_{jt}$, with $join_{jt}$, like </p>
<p>$jointact_{jt} = (joint_{jt} \times act_{jt})$</p>
<pre><code> ID time p_m p_w act_m act_w join_j joinact_j
1 A 1 1 1 c b 1 0
2 A 2 1 1 b c 1 0
3 A 3 1 1 c d 1 0
4 B 1 1 1 b b 1 1
5 B 2 0 1 a a 0 0
6 B 3 1 1 b b 1 1
7 C 1 1 1 b b 1 1
8 C 2 1 1 c c 1 1
9 C 3 1 1 c b 1 0
10 D 1 1 1 c b 1 0
11 D 2 1 0 b a 0 0
12 D 3 1 1 c b 1 0
13 E 1 1 1 d d 1 1
14 E 2 1 1 b c 1 0
15 E 3 1 1 c c 1 1
</code></pre>
|
jnez71
| 295,791 |
<p>Such a choice is arbitrary. Do you view $A$ and $B$ as time varying parameters, or as inputs like $u$? (Probably the former, since you described (1) and (2) as a "linear system"). It's all about context. In control theory, equations (3) and (4) are the most common for a nonlinear ODE system in explicit state-space form. I.e. if the dependent variables and inputs are real curves like you said,
\begin{align}
x &: \mathbb{R} \to \mathbb{R}^n \\
y &: \mathbb{R} \to \mathbb{R}^m \\
u &: \mathbb{R} \to \mathbb{R}^k
\end{align}
then the usual situation is,
\begin{align}
\dot{x} &= f(x, u) \\
y &= g(x, u)
\end{align}
where,
\begin{align}
f &: \mathbb{R}^n \times \mathbb{R}^k \to \mathbb{R}^n \\
g &: \mathbb{R}^n \times \mathbb{R}^k \to \mathbb{R}^m \\
\end{align}
while $A(t)$, $B(t)$, $C(t)$, $D(t)$ would be hidden away within the workings of these functions as "time varying parameters" or "exogenous inputs." Finally, though I think it'd be misleading, you could write $\mathbb{R}^n \times \mathbb{R}^k$ as $\mathbb{R}^{n+k}$ to fit exactly the notation you are looking for.</p>
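<p>To make that concrete, here is a minimal simulation sketch in Python/NumPy (purely illustrative: the particular $A$, $B$, the readout, and the forward-Euler step are my own choices, not anything canonical):</p>

<pre><code>import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, -0.2]])   # illustrative "parameters" hidden inside f
B = np.array([[0.0],
              [1.0]])

def f(x, u):
    # xdot = f(x, u); A and B live inside f rather than in its signature
    return A @ x + B @ u

def g(x, u):
    # y = g(x, u); here we just read out the first state component
    return x[0:1, :]

# crude forward-Euler simulation of xdot = f(x, u) with a constant input
dt, steps = 0.01, 500
x = np.array([[1.0], [0.0]])
u = np.array([[0.5]])
for _ in range(steps):
    x = x + dt * f(x, u)
print(g(x, u))
</code></pre>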
<p>This is not "the most general system" one can write though. It need not be explicit in $\dot{x}$ and $y$, need not necessary have a real / Euclidean state (like for example if $x : \mathbb{R} \to \mathbb{SO}3$), and need not even be a finite state system (i.e. it could be a PDE instead of an ODE). Your equations (1) and (2) are just a very very special case. But also a very important one to study :)</p>
|
4,074,031 |
<p>You have been teaching Dennis and Inez math using the Moore method for their entire lives, and you're currently deep into topology class.</p>
<p>You've convinced them that topological manifolds aren't quite the right object of study because you "can't do calculus on them," and you've managed to motivate the definition of a smooth manifold.</p>
<p>Dennis and Inez take your motivation about "doing calculus" seriously. They pose to you that they asked themselves how to take the derivative of a function <span class="math-container">$f: M \to \mathbb{R}$</span>. They figured out independently that such a thing couldn't meaningfully be another smooth function. Instead, it has to say how the function is changing in the direction of a given vector field.</p>
<p>You are thrilled at their work: they've independently invented <span class="math-container">$d$</span> and the cotangent bundle, and they arrive at the correct formal definition of both with little prodding.</p>
<p>You want to point them in the direction of generalization, so you ask them to consider how to extend <span class="math-container">$d$</span> to higher exterior powers of the cotangent bundle. You get blank stares. Why should they think about higher tensor powers of the cotangent bundle at all? Why the <strong>alternating</strong> tensors, in particular? Yes, they know what that means (you led them to it in their linear algebra course), but the cotangent bundle just naturally showed up when they went thinking about derivatives, and the alternating tensors don't.</p>
<p>Oops! Now's the time to confess that your students never had a traditional 3-dimensional vector calculus course culminating in Stokes's theorem. Actually, it's worse: they have never heard of integrals. You see, you were torn about whether to start with the Lebesgue theory or spend time building the more intuitive Riemann integral, so you just skipped from the first half of Calc One to a topics course on higher topos theory. Lesson learned.</p>
<p>So, how do you teach them? (Alternatively, answer this question in the negative by proving that Dennis and Inez will always invent integration if you make them think about higher exterior derivatives enough.)</p>
|
Moishe Kohan
| 84,907 |
<p>Regarding the sentence:</p>
<p>"Alternatively, answer this question in the negative by proving that Dennis and Inez will always invent integration if you make them think about higher exterior derivatives enough."</p>
<p>The answer is "yes" (but "long enough" might take longer than their life expectancy). There are two ways to approach this:</p>
<p>a. Suppose they study smooth maps of <span class="math-container">$k$</span>-manifolds into an <span class="math-container">$m$</span>-manifold <span class="math-container">$M$</span> and come up with an idea of making a vector space out of this by working with chains (with real coefficients). Then they could ask "what is the dual vector space"? Under the extra analytical assumptions on the element of the dual space which I will not spell out here, the elements of the dual space are differential <span class="math-container">$k$</span>-forms on <span class="math-container">$M$</span> (and vice versa). See</p>
<p><em>Whitney, Hassler</em>, Geometric integration theory, Princeton Mathematical Series. Princeton, N. J.: Princeton University Press; London: Oxford University Press. XV, 387 p. (1957). <a href="https://zbmath.org/?q=an:0083.28204" rel="nofollow noreferrer">ZBL0083.28204</a>.</p>
<p>Theorem 10A, page 167, for details. Then they might discover the exterior differential as the dual of the boundary operator on chains (Stokes' Theorem).</p>
<p>b. Conversely, if they discover (somehow) differential forms, they might ask themselves "what is the dual space of <span class="math-container">$\Omega^k(M)$</span>?" The elements of the dual are known as <em><a href="https://en.wikipedia.org/wiki/Current_(mathematics)" rel="nofollow noreferrer">currents</a></em>. They probably will then try to find "concrete examples" of currents which will be, most likely, Lipschitz currents, given by integration over Lipschitz submanifolds.</p>
|
1,381,545 |
<p>I have the following problem:</p>
<blockquote>
<p>Show by induction that $F_n \geq 2^{0.5 \cdot n}$, for $n \geq 6$</p>
</blockquote>
<p>Where $F_n$ is the $nth$ Fibonacci number. </p>
<hr>
<h3>Proof</h3>
<p><strong>Basis</strong></p>
<p>$n = 6$. </p>
<p>$F_6 = 8 \geq 2^{0.5 \cdot 6} = 2^{\frac{6}{2}} = 2^3 = 8$</p>
<p><strong>Induction hypothesis</strong></p>
<p>Assume $F_n \geq 2^{\frac{n}{2}}$, for some $n \geq 6$.</p>
<p><strong>Inductive step</strong></p>
<p>Lets shows that $F_{n+1} \geq 2^{\frac{n + 1}{2}}$. </p>
<p>We know that </p>
<ul>
<li><p>$F_{n + 1} = F_n + F_{n - 1}$</p></li>
<li><p>$2^{\frac{n + 1}{2}} > 2^{\frac{n}{2}}$</p></li>
<li><p>$2^{\frac{n + 1}{2}} = 2^{\frac{n}{2} + \frac{1}{2}} = 2^{\frac{n}{2}} \cdot 2^{\frac{1}{2}} \leq 2^{\frac{n}{2}} \cdot 2 = 2^{\frac{n}{2}} + 2^{\frac{n}{2}}$</p></li>
</ul>
<p>Since we have assumed that $F_n \geq 2^{\frac{n}{2}}$, then $$F_n + F_{n - 1} = F_{n + 1} \geq 2^{\frac{n}{2}} + F_{n - 1} \geq 2^{\frac{n}{2}} + F_{n - 1} + 2^{\frac{n}{2}} - F_n = 2^{\frac{n}{2}} + 2^{\frac{n}{2}} + F_{n - 1} - F_n$$</p>
<p>The last inequality is true because $2^{\frac{n}{2}} - F_n$ is negative or $0$, since $F_n \geq 2^{\frac{n}{2}}$.</p>
<hr>
<p>I have tried a lot of things, but I cannot figure out how to proceed and how to conclude that indeed $F_{n + 1} \geq 2^{\frac{n}{2}} \cdot 2^{\frac{1}{2}}$. I feel really stupid after trying for a long time to do this alone and not managing to do it.</p>
|
Kelechi Nze
| 164,374 |
<p>Suppose we knew that $F_n \ge 2^{\frac n2}$ for two consecutive values of $n$, i.e. for $n = 6$ and $n = 7$.</p>
<p>We know this holds for n=6 and n=7.</p>
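<p>(Added check, for concreteness: $F_6 = 8 \geq 2^{6/2} = 8$ and $F_7 = 13 \geq 2^{7/2} = 8\sqrt{2} \approx 11.31$, so both base cases do hold.)</p>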
<p>We also know that $F_{n+1} = F_{n} + F_{n-1}$</p>
<p>So we assume for some k and k-1 (7 and 6) that $F_{k-1} \ge 2^{\frac {k-1}{2}}$ and $F_{k} \ge 2^{\frac {k}{2}}$</p>
<p>We know $F_{k} \ge F_{k-1}$</p>
<p>so $F_{k+1} = F_{k} + F_{k-1} \ge 2 \cdot F_{k-1}$</p>
<p>Using the assumption $F_{k+1} \ge 2 \cdot 2^{\frac {k-1}{2}} = 2^{\frac{k+1}{2}}$ as required.</p>
<p>EDIT: If you want a phrasing in the language of induction (propositional)
$$P(k) = F_{k} \ge 2^{\frac{k}{2}} \mbox{ & } F_{k-1} \ge 2^{\frac{k-1}{2}}\\$$
We then prove:
$$
P(k+1) = F_{k+1} \ge 2^{\frac{k+1}{2}} \mbox{ & } F_{k} \ge 2^{\frac{k}{2}}$$</p>
<p>Above I proved the second from the first.</p>
|
2,005,365 |
<p>For example, p(getting two heads from tossing a coin twice) = 0.5 * 0.5...</p>
<p>I passed my probability course in college, but I am still having trouble getting the intuition for this.</p>
|
Mark Viola
| 218,419 |
<p><strong>HINT:</strong></p>
<p>Rationalize the numerator to get</p>
<p>$$\frac{1}{\sqrt{x+h}}-\frac{1}{\sqrt x}=-\frac{h}{\sqrt{x+h}\sqrt{x}(\sqrt{x+h}+\sqrt{x})}$$</p>
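<p>(Spelling out the intermediate algebra: combine the fractions, then multiply numerator and denominator by the conjugate.)</p>
<p>$$\frac{1}{\sqrt{x+h}}-\frac{1}{\sqrt x}=\frac{\sqrt x-\sqrt{x+h}}{\sqrt{x+h}\sqrt x}=\frac{x-(x+h)}{\sqrt{x+h}\sqrt x\,(\sqrt x+\sqrt{x+h})}=-\frac{h}{\sqrt{x+h}\sqrt{x}(\sqrt{x+h}+\sqrt{x})}$$</p>
<p>If this comes from a difference quotient (an assumption on my part), dividing by $h$ and letting $h\to 0$ then gives $-\frac{1}{2x^{3/2}}$.</p>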
|
3,807,504 |
<p>Suppose that <span class="math-container">$z=f(x,y)$</span> is given. where <span class="math-container">$f(x,y)=x^3+y^3$</span>.
I am a bit confused about the notation of <span class="math-container">$z_x$</span> and <span class="math-container">$f_x$</span>.</p>
<p>A lot of people use the below notation for <span class="math-container">$z_x$</span>.
<span class="math-container">$$z_x=3x^2$$</span></p>
<p>But what if <span class="math-container">$y$</span> is related to <span class="math-container">$x$</span> in some equation. For example
<span class="math-container">$$y=x.$$</span>
Then
<span class="math-container">$${\partial z\over\partial x}$$</span>
should also consider variable <span class="math-container">$y$</span> since <span class="math-container">$y$</span> can be represented by <span class="math-container">$x$</span>.
But in a similar situation, I saw in a book
<span class="math-container">$${\partial f\over\partial x }=3x^2$$</span>
which doesn't consider y as a variable.</p>
<p>Summarizing, given that
<span class="math-container">$z=f(x,y)$</span>, <span class="math-container">$f(x,y)=x^3+y^3$</span>, <span class="math-container">$y=x$</span> what is <span class="math-container">$z_x$</span> and <span class="math-container">$f_x$</span>?</p>
|
Claude Leibovici
| 82,404 |
<p>There is an antiderivative in terms of the gaussian hypergeometric function
<span class="math-container">$$\int \frac{b\,x}{x^a-b}\,dx=-\frac{1}{2} x^2 \, _2F_1\left(1,\frac{2}{a};\frac{a+2}{a};\frac{x^a}{b}\right)$$</span> If <span class="math-container">$a >0$</span>
<span class="math-container">$$I_a=\int_0^R \frac{b\,x}{x^a-b}\,dx=-\frac{1}{2} R^2 \, _2F_1\left(1,\frac{2}{a};\frac{a+2}{a};\frac{R^a}{b}\right)$$</span> which, as usual, requires that <span class="math-container">$R^a < b$</span>.</p>
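<p>(Added sketch of where the hypergeometric series comes from, assuming <span class="math-container">$|x^a|<|b|$</span> so that the geometric expansion is valid.) Expanding the integrand and integrating termwise,
<span class="math-container">$$\frac{b\,x}{x^a-b}=-\frac{x}{1-x^a/b}=-x\sum_{n\ge 0}\frac{x^{an}}{b^n},\qquad \int_0^x\frac{b\,t}{t^a-b}\,dt=-\sum_{n\ge 0}\frac{x^{an+2}}{(an+2)\,b^n}=-\frac{x^2}{2}\sum_{n\ge 0}\frac{2}{an+2}\left(\frac{x^a}{b}\right)^n,$$</span>
and since <span class="math-container">$\frac{(1)_n\,(2/a)_n}{((a+2)/a)_n\,n!}=\frac{2}{an+2}$</span>, this is exactly the series of the <span class="math-container">$_2F_1$</span> above.</p>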
|
16,740 |
<p>Excuse me for my unclear question title, math terms have never been my forté. However being new here and knowing I shall be staying here for a while, I thought it's best to do things right with the community and site.</p>
<p>I just asked my first <a href="https://math.stackexchange.com/questions/920192/differentiation-with-respect-to-variable">question</a> here and got a great response and nice welcome. However I have no idea which <em>answer</em> I, as a community member should pick. </p>
<p>The reason being is that the first one explains how to get there <em>(which is something I requested in the question itself)</em> whilst the other one is the actual answer to the questions but doesn't directly tell me how he got there even though he did somewhat clarify that in the comments.</p>
<p>A general rule of thumb for such scenarios would be greatly appreciated!</p>
|
GEdgar
| 442 |
<p>Generally, I would be opposed to "check my proof" questions even if the link is to a permanent place like the arXiv.</p>
|
182,024 |
<p>Following my previous question <a href="https://math.stackexchange.com/q/182000/21813">Relationship between cross product and outer product</a> where I learnt that the Exterior Product generalises the Cross Product whereas the Inner Product generalises the Dot Product, I was wondering if the simple map that I have drawn below is at all an accurate representation of the links between these different products?</p>
<p>Vertical lines denote generalisation-specification, horizontal lines denote "in opposition to". I'm just trying to get a quick overview before I dive in. Thanks.</p>
<p><img src="https://i.stack.imgur.com/1RE3I.jpg" alt="enter image description here" /></p>
|
rschwieb
| 29,335 |
<p>I'm not sure what you're reading, but it certainly contains a lot of terms talked about in <a href="http://en.wikipedia.org/wiki/Clifford_algebra" rel="nofollow">Clifford algebra</a> (or "<a href="http://en.wikipedia.org/wiki/Geometric_algebra" rel="nofollow">geometric algebra</a>").</p>
<p>Clifford algebras were born of a synthesis of inner product spaces and Grassmann's exterior algebras, both of which have geometric applications.</p>
<p>A Clifford algebra is constructed from an <strong>inner product space</strong> $(V,Q)$ by generating an associative algebra (whose product is a descendant of the <strong>tensor product</strong> in the tensor algebra for $V$). These are compatible in a sense made clear in the Wiki.</p>
<p>Suppose we are in a real inner product space, and let's denote the algebra multiplication with $\otimes$ and the inner product with $\cdot$. One identity that holds for parallel vectors $v,w$ is $v\otimes w=v\cdot w$. Part of the reason this is true is that $v\otimes w=w\otimes v$ when $v,w$ are parallel.</p>
<p>When they are not parallel, then there is a skew component to $v\otimes w$. This can be retrieved explicitly by computing $\frac{1}{2}(v\otimes w-w\otimes v):=v\wedge w$. With this notation, $v\otimes w=v\cdot w+v\wedge w$ for all vectors $v,w$. This $\wedge$ is called the <strong>exterior or outer product</strong> in this algebra. </p>
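<p>(A small added example with the conventions above: for an orthonormal pair $e_1,e_2$, since $e_1$ is parallel to itself we get $e_1\otimes e_1=e_1\cdot e_1=1$ with no skew part, while $e_1\otimes e_2=e_1\wedge e_2=-e_2\otimes e_1$ has no symmetric part because $e_1\cdot e_2=0$.)</p>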
<p>The key thing to know is that this inner and this outer product don't uniquely extend to the whole algebra (well, the outer product has a slightly more natural extension than the inner product). They are not "natural" to the algebra really: they are mostly just a notational convenience with some algebraic properties that make them easy to view as products.</p>
<p>The <strong>cross product</strong> arises too in the discussion of Clifford algebras, but it seems to be regarded as second class to the outer product. I've seen this attributed to the cross-product somehow not carrying the correct information for physical applications.</p>
<p>I went and read a little and found out that "interior product" looks a lot like an important extension of the inner product in a Clifford algebra which is sometimes called the <a href="http://en.wikipedia.org/wiki/Geometric_algebra#Extensions_of_the_inner_and_outer_products" rel="nofollow"><strong>contraction</strong></a>.</p>
<p>In summary, I think these various products might become less mysterious if you read some of these linked articles and see where they are used. It's going to be especially enlightening when you find out where they are all defined (in the vector space $V$ or in an algebra containing $V$.)</p>
|
2,051,785 |
<p>$PQRS$ is a convex quadrilateral. The intersection of the two diagonals is $O$. The areas of triangles $PQS$, $QRP$, and $SPR$ are $1, 2,$ and $3$, respectively. What is the area of triangle $PQO$?</p>
|
Triskele
| 397,755 |
<p>If you want a hint, read on. </p>
<p>First determine $\mathrm{Area}(\triangle QRS)$. </p>
<p>Now, let $x = \mathrm{Area}(\triangle SPO)$, $y = \mathrm{Area}(\triangle PQO)$, and note that $\mathrm{Area}(\triangle RSO) : \mathrm{Area}(\triangle QRO) = x : y$. (Why?) </p>
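<p>(Added sketch of how the hint can be finished, once the "why" questions are answered.) Splitting along the diagonals, $\mathrm{Area}(\triangle QRS)=2+3-1=4$, and $x+y=\mathrm{Area}(\triangle PQS)=1$. Writing $\mathrm{Area}(\triangle QRO)=2-y$ and $\mathrm{Area}(\triangle RSO)=3-x$, the ratio condition gives $\frac{3-x}{2-y}=\frac{x}{y}$, i.e. $3y=2x$. Together with $x+y=1$ this yields $y=\mathrm{Area}(\triangle PQO)=\frac{2}{5}$.</p>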
|
3,480,890 |
<p>I do understand pure mathematical concepts of probability space and random variables as a (measurable) functions. </p>
<p>The question is: what is the real-world meaning of probability and how can we apply the machinery of probability to the real situations?</p>
<p>Ex1: the probability of heads for a fair coin is 1/2, which means that "if we make exactly the same experiment over and over again, we will obtain heads roughly 1/2 of the time". More rigorously, there are some convergences under the hood, etc.
But this is nonsense: if I reproduce exactly the same initial conditions (velocity, humidity, wind, etc.), then I will always obtain the same result. Moreover, I cannot guarantee the same circumstances: we have some planets flying around, and molecules are not in the same places, etc. So it is not really possible to make the same experiment twice. Like, you know, one cannot step twice into the same river.</p>
<p>Ex2: probability of rain is 0.9 for the next day. The "frequency" idea is really absurd in this case. </p>
<p>Ex3: I'm throwing a coin and see the side. You don't see it. I ask you, what is the probability that it comes Heads. This is really somewhat vague...</p>
|
kludg
| 42,926 |
<p>General interpretation of probabilities is Bayesian: probabilities are subjective opinions or beliefs; of course probabilities must satisfy probability axioms, but this is the only requirement; any probabilistic model satisfying probability axioms is good. I believe it is helpful to think about probabilities as subjective opinions, but don't confuse subjectivity with arbitrariness. Subjectivity means that different persons may have different data and use different statistical models to infer probabilities.</p>
<p>How a probabilistic model corresponds to the real world is another question. Sometimes (flipping a coin) you can use frequencies to define probabilities, sometimes you need more advanced statistical models to infer probabilities corresponding to the real world.</p>
<p>Ex 3 demonstrates the subjective nature of probabilities: you and your friend build different probabilistic models depending on the information each of you has.</p>
|
855,088 |
<p>Consider two vectors $V_1$ and $V_2$ in $\mathbb{R}^3$.
When we take their dot product we get a real number.
How is that number related to the vectors?
Is there any way we can visualize it? </p>
|
Eric Towers
| 123,905 |
<p>Temporarily imagine that $V_2$ is of unit length. Then, $V_1 \cdot V_2$ is the projection of the vector $V_1$ onto the vector $V_2$. Picture <a href="http://en.wikipedia.org/wiki/File%3aDot_Product.svg">here</a>. Now we let $V_2$ have its original length and to do so we multiply the result of the dot product by the new length of $V_2$. (This has the effect of making it not matter which one you pretend has unit length initially.)</p>
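<p>(A quick numerical illustration with made-up vectors: take $V_1=(3,4,0)$ and $V_2=(2,0,0)$. The unit vector along $V_2$ is $(1,0,0)$, so the projection of $V_1$ onto $V_2$ has length $3$; scaling back by the length of $V_2$ gives $V_1\cdot V_2=3\cdot 2+4\cdot 0+0\cdot 0=6=3\times 2$, as described above.)</p>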
<p>You do this sort of thing when you write a vector as a sum of multiples of the standard unit coordinate vectors (sometimes written $\hat{x}, \hat{y}$, and $\hat{z}$). Use the dot product to project your vector onto $\hat{x}$ getting the multiple of $\hat{x}$ that, when assembled with the other components will sum to your vector.</p>
<p>The dot product is a (poor) measure of the degree of parallelism of two vectors. If they point in the same (or opposite) directions, then the projection of one onto the other is not just a component of the length of the projected vector, but is the entire projected vector. It is a poor measure because it is scaled by the lengths of the two vectors -- so one has to know not only their dot product, but also their lengths, to determine how parallel or perpendicular they really are.</p>
<p>In physics, the dot product is frequently used to determine how parallel some vector quantity is to some geometric construct, for instance the direction of motion (displacement) versus a partially opposing force (to find out how much work must be expended to overcome the force). Another example is the direction of the electric field compared to a small patch of surface (which is represented by a <a href="http://en.wikipedia.org/wiki/Area_vector">vector</a> "<a href="http://en.wikipedia.org/wiki/Normal_%28geometry%29">normal</a>" to its surface and of length proportional to its area).</p>
|
889,728 |
<p>Evaluate $$\int_0^{1/2}x^3\arctan(x)\,dx$$</p>
<p>My work so far:
$x^3\arctan(x) = \sum_{n=0}^\infty(-1)^n \dfrac{x^{2n+4}}{2n+1}$</p>
<p>$$\int_0^{1/2}x^3\arctan(x)\,dx = \sum_{n=0}^\infty \frac{(-1)^{n+1}(1/2)^{2n+4}}{2n+1}$$</p>
<p>The desired accuracy needed is to four decimal places, that is were I am stuck.</p>
|
Claude Leibovici
| 82,404 |
<p>You wrote correctly $$x^3\arctan(x) = \sum_{n=0}^\infty(-1)^n \dfrac{x^{2n+4}}{2n+1}$$ but you forgot to integrate with respect to $x$. If you do it $$\int_0^{1/2}x^3\arctan(x)\,dx = \sum_{n=0}^\infty \frac{(-1)^{n}(1/2)^{2n+5}}{(2n+1)(2n+5)}$$</p>
<p>As said in the comments, you could start searching for which value of $n$ $$\frac{(1/2)^{2n+5}}{(2n+1)(2n+5)} \leq 5\times 10^{-5}$$ that is to say $$(2n+1)(2n+5)2^{2n+5} \geq 20000$$ Probably, plotting the function after taking logarithms of both sides will make life easier and $n=2$ satisfies the condition ($23040$ for the lhs).</p>
<p>The other solution, also said in comments, is to start adding the terms and so, you have for the integral $$\frac{1}{160}-\frac{1}{2688}+\frac{1}{23040}=\frac{191}{32256} \simeq 0.00592138$$</p>
<p>If you take the time to compute the integral exactly, as David H suggested in his answer, you probably obtained $$\int x^3\arctan{(x)}\,\mathrm{d}x=\frac{1}{4} x^4 \tan ^{-1}(x)-\frac{x^3}{12}+\frac{x}{4}-\frac{1}{4} \tan ^{-1}(x)$$ and then $$\int_{0}^{1/2}x^3\arctan{(x)}\,\mathrm{d}x=\frac{1}{192} \left(22-45 \tan ^{-1}\left(\frac{1}{2}\right)\right) \simeq 0.00591592$$</p>
|
1,326,816 |
<p>How to find the Maclaurin series of the function $$f(x)=\frac{1}{(9-x^2)^2}$$
I guess we are going to use derivatives but I have no idea how the final answer should be formed.</p>
|
abel
| 9,252 |
<p>$\bf hint:$ </p>
<p>(a) use the binomial series $$(1-u)^{-2} = 1+\frac21 u+\frac{2 \cdot 3}{1 \cdot 2}u^2+ \frac{2 \cdot 3 \cdot 4}{1 \cdot 2 \cdot 3}u^3+ \cdots \tag 1$$ </p>
<p>(b) $$\frac{1}{(9-x^2)^2} =\frac{1}{9^2(1-x^2/9)^2} = \frac1{81}\left(1-\frac{x^2}{9}\right)^{-2} \tag 2$$</p>
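<p>(c) Putting (a) and (b) together with $u=x^2/9$ gives, for $|x|<3$, the Maclaurin series $$\frac{1}{(9-x^2)^2}=\frac{1}{81}\sum_{n\ge 0}(n+1)\left(\frac{x^2}{9}\right)^n=\sum_{n\ge 0}\frac{n+1}{9^{n+2}}\,x^{2n} \tag 3$$ (This last step is added here only to make the hint self-contained.)</p>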
|
96,957 |
<p>Stacks, of varying kinds, appear in algebraic geometry whenever we have moduli problems, most famously the stacks of (marked) curves. But these seem to be to be very geometric in motivation, so I was wondering if there are natural examples of stacks that arise in arithmetic geometry or number theory.</p>
<p>To me, as a non-expert, it seems like it's all Galois representations and estimates on various numeric measures (counting points or dimensions of things) based on other numeric things (conductors, heights, etc). </p>
<p>I asked this question at M.SE (here : <a href="https://math.stackexchange.com/questions/143746/stacks-in-arithmetic-geometry">https://math.stackexchange.com/questions/143746/stacks-in-arithmetic-geometry</a> please vote to close if you can) because I thought it a bit too 'recreational', but with no success. What I am after is not just stacks which can be seen as arithmetic using number fields or rings of integers, but which are actually used in number-theoretic problems, or have a number-theoretic origin. Maybe there aren't any, but it doesn't hurt to ask.</p>
<p>EDIT: I have belatedly made this question CW, as I've realised, too late, that there is clearly not one correct answer.</p>
|
David Zureick-Brown
| 2 |
<p>Here are two applications of stacks to number theory.</p>
<p>1) Section 3 of <a href="http://math.mit.edu/~poonen/papers/pss.pdf" rel="nofollow noreferrer">this</a> paper, which solves the diophantine equation $x^2 + y^3 = z^7$, explains the connection between stacks and generalized Fermat equations.</p>
<p>2) <a href="https://mathoverflow.net/questions/24573/">This</a> post explains how stacks fit into the proof of Deuring's formula for the number of supersingular elliptic curves over a finite field.</p>
|
390,644 |
<p>I'm trying to solve this recurrence relation:</p>
<p>$$
a_n = \begin{cases}
0 & \mbox{for } n = 0 \\
5 & \mbox{for } n = 1 \\
6a_{n-1} - 5a_{n-2} + 1 & \mbox{for } n > 1
\end{cases}
$$</p>
<p>I calculated generator function as:
$$
A = \frac{31x - 24x^2}{1 - 6x + 5x^2} + \frac{x^3}{(1-x)(1-6x+5x^2)} =
\frac{31x - 24x^2}{(x-1)(x-5)} + \frac{x^3}{(1-x)(x-1)(x-5)}
$$</p>
<p>(I'm not sure if that's right)</p>
<p>and its partial fractions decomposition looks like:
$$
A = \left(\frac{-7}{4} \cdot \frac{1}{x-1} - \frac{445}{4} \cdot \frac{1}{x-5}\right) +
\left( \frac{39}{16} \cdot \frac{1}{x-5} + \frac{3}{4} \cdot \frac{1}{(x-1)^2} - \frac{375}{16} \cdot \frac{1}{x-5} \right)
$$
(again - I'm not sure if it's ok)</p>
<p>I'm stuck here... From solutions I know that I should get:
$$
a_n = \frac{-21}{16} - \frac{1}{4}n + \frac{21}{16}5^n
$$</p>
<p>but I have no idea how it's solved... I hope somebody can help me (I spend more than 3h trying to solve this myself...)</p>
|
Brian M. Scott
| 12,042 |
<p>I did not check your work, so I’ll outline what you need to do to finish. You have something of the form</p>
<p>$$A(x)=\frac{a}{1-x}+\frac{b}{5-x}+\frac{c}{(1-x)^2}=\frac{a}{1-x}+\frac{b/5}{1-\frac{x}5}+\frac{c}{(1-x)^2}\;.$$</p>
<p>Expand these three terms into power series:</p>
<p>$$\begin{align*}
A(x)&=a\sum_{n\ge 0}x^n+\frac{b}5\sum_{n\ge 0}\left(\frac{x}5\right)^n+c\sum_{n\ge 0}(n+1)x^n\\\\
&=\sum_{n\ge 0}\left(a+\frac{b}{5^{n+1}}+c(n+1)\right)x^n\;.
\end{align*}$$</p>
<p>Now you can read off the coefficient of $x^n$.</p>
<p><strong>Added:</strong> I’ve now had a chance to check your work, and it appears to be a bit off. I’ll use my preferred approach, which begins by assuming that $a_n=0$ for all $n<0$. Then the recurrence can be written</p>
<p>$$a_n=6a_{n-1}-5a_{n-2}+1-[n=0]+4[n=1]\;,$$</p>
<p>for all $n\ge 0$, where the last two terms contain <a href="http://en.wikipedia.org/wiki/Iverson_bracket" rel="nofollow">Iverson brackets</a> and are added to make the recurrence give the correct values to $a_0$ and $a_1$. Now multiply through by $x^n$ and sum over $n\ge 0$:</p>
<p>$$\begin{align*}
\sum_{n\ge 0}a_nx^n&=\sum_{n\ge 0}\Big(6a_{n-1}-5a_{n-2}+1-[n=0]+4[n=1]\Big)x^n\\\\
&=6x\sum_{n\ge 0}a_{n-1}x^{n-1}-5x^2\sum_{n\ge 0}a_{n-2}x^{n-2}+\sum_{n\ge 0}x^n-1+4x\;.
\end{align*}$$</p>
<p>Since $A(x)=\sum_{n\ge 0}a_nx^n=\sum_{n\ge 0}a_{n-1}x^{n-1}=\sum_{n\ge 0}a_{n-2}x^{n-2}$ (remember the blanket assumption that $a_n=0$ for $n<0$), we have</p>
<p>$$A(x)=6xA(x)-5x^2A(x)+\frac1{1-x}-1+4x\;,$$</p>
<p>and hence</p>
<p>$$A(x)=\frac1{(1-x)^2(1-5x)}+\frac{4x-1}{(1-x)(1-5x)}\;.$$</p>
<p>I’ll leave the partial fraction decomposition to you, at least for now.</p>
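<p><strong>Added</strong> (a sketch of the decomposition that was left as an exercise above, over the basis functions $\frac1{1-x}$, $\frac1{(1-x)^2}$ and $\frac1{1-5x}$): one finds</p>
<p>$$A(x)=-\frac{17}{16}\cdot\frac1{1-x}-\frac14\cdot\frac1{(1-x)^2}+\frac{21}{16}\cdot\frac1{1-5x}\;,$$</p>
<p>and expanding $\frac1{1-x}=\sum_{n\ge 0}x^n$, $\frac1{(1-x)^2}=\sum_{n\ge 0}(n+1)x^n$ and $\frac1{1-5x}=\sum_{n\ge 0}5^nx^n$ gives</p>
<p>$$a_n=-\frac{17}{16}-\frac{n+1}4+\frac{21}{16}5^n=-\frac{21}{16}-\frac{n}4+\frac{21}{16}5^n\;,$$</p>
<p>which matches the answer quoted in the question.</p>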
|
2,239,058 |
<blockquote>
<p>Find the spherically symmetric solution to $$\nabla^2u=1$$ in the
region $r=|\mathbf{r}|\le a$ for $a>0$ that satisfies the following
boundary condition at $r=a$:</p>
<p>$\frac{\partial u}{\partial n}=0$</p>
</blockquote>
<p>The solution I have looked at states to begin with $\frac{\partial u}{\partial r}=0$ and I can go from here. My question is not actually the above question (but rather I'm using that to illustrate my actual question), which is:</p>
<p>Why does $\frac{\partial u}{\partial n}=0 \implies \frac{\partial u}{\partial r}=0$? I thought maybe the chain rule, i.e. $\frac{\partial r}{\partial n}\frac{\partial u}{\partial r}=\frac{\partial u}{\partial n}=0$, but I have no idea what $\frac{\partial r}{\partial n}$ actually is, and so I don't know that it's not equal to $0$. In fact I don't have a great grasp on what $\frac{\partial u}{\partial n}$ actually $is$, so I would also greatly appreciate it if someone could just explain to me what these actually represent; at the moment they're very much just notation for me.</p>
<p>Sorry, one more thing: we also arrive at $\frac{\partial u}{\partial r}=\frac13a$ at $r=a$, which I'm fine with, but then it says $>0$, so contradiction. Where has this come from?</p>
<p>Thanks in advance and sorry there are so many parts to my question, they just all are related to the above question and hence why I did not ask each separately.</p>
|
user1551
| 1,551 |
<p>This is just a change of basis. Let $A$ and $B$ be the matrices of the same bilinear form, but with respect to two ordered bases $\{x_1,\ldots,x_n\}$ and $\{y_1,\ldots,y_n\}$ respectively. Let $P$ be the matrix such that $y_j=\sum_ip_{ij}x_i$, i.e. the matrix such that $\sum_jc_jy_j=\sum_i(Pc)_ix_i$ for any $c\in\mathbb R^n$. Then $B=P^TAP$.</p>
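<p>(A one-line check of the formula: if $v=\sum_jc_jy_j$ and $w=\sum_jd_jy_j$, then the coordinate vectors of $v,w$ in the $x$-basis are $Pc$ and $Pd$, so the form evaluates to $(Pc)^TA(Pd)=c^T(P^TAP)d$, which is why $B=P^TAP$.)</p>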
|
1,541,800 |
<p>I happened to stumble upon the following matrix:
$$ A = \begin{bmatrix}
a & 1 \\
0 & a
\end{bmatrix}
$$</p>
<p>And after trying a bunch of different examples, I noticed the following remarkable pattern. If $P$ is a polynomial, then:
$$ P(A)=\begin{bmatrix}
P(a) & P'(a) \\
0 & P(a)
\end{bmatrix}$$</p>
<p>Where $P'(a)$ is the derivative evaluated at $a$.</p>
<p>Furthermore, I tried extending this to other matrix functions, for example the matrix exponential, and Wolfram Alpha tells me:
$$ \exp(A)=\begin{bmatrix}
e^a & e^a \\
0 & e^a
\end{bmatrix}$$
and this does in fact follow the pattern since the derivative of $e^x$ is itself!</p>
<p>Furthermore, I decided to look at the function $P(x)=\frac{1}{x}$. If we interpret the reciprocal of a matrix to be its inverse, then we get:
$$ P(A)=\begin{bmatrix}
\frac{1}{a} & -\frac{1}{a^2} \\
0 & \frac{1}{a}
\end{bmatrix}$$
And since $f'(a)=-\frac{1}{a^2}$, the pattern still holds!</p>
<p>After trying a couple more examples, it seems that this pattern holds whenever $P$ is any rational function.</p>
<p>I have two questions:</p>
<ol>
<li><p>Why is this happening?</p></li>
<li><p>Are there any other known matrix functions (which can also be applied to real numbers) for which this property holds?</p></li>
</ol>
|
Vectornaut
| 16,063 |
<p><em>Here's one for the algebra lovers. It can probably be improved substantially, since I'm not an algebra lover myself. For simplicity, I'll use rational functions with rational coefficients; sophisticated readers may substitute their favorite coefficient fields.</em></p>
<p>A rational function in the variable $x$ is an element of the field $\mathbb{Q}(x)$. You can think of $\mathbb{Q}(x)$ as a vector space over itself. Multiplication by $x$ is a $\mathbb{Q}(x)$-linear map from $\mathbb{Q}(x)$ to itself, which can be written as the $1 \times 1$ matrix $$X_0 = \left[ \begin{array}{c} x \end{array} \right].$$
The derivative, sadly, is not a $\mathbb{Q}(x)$-linear map. However, we can represent it with a $\mathbb{Q}(x)$-linear map by resorting to trickery. If we happen to know the derivative $f'$ of a function $f$, we can bundle the two together into an element of $\mathbb{Q}(x)^2$:
$$\left[ \begin{array}{c} f' \\ f \end{array} \right].$$
Multiplication by $x$ sends this "derivative pair" to
$$\left[ \begin{array}{c} (xf)' \\ xf \end{array} \right]
= \left[ \begin{array}{c} xf' + f \\ xf \end{array} \right]
= \left[ \begin{array}{cc} x & 1 \\ 0 & x \end{array} \right] \left[ \begin{array}{c} f' \\ f \end{array} \right].$$
In other words, multiplication by $x$ acts on derivative pairs in $\mathbb{Q}(x)^2$ by the matrix
$$X_1 = \left[ \begin{array}{cc} x & 1 \\ 0 & x \end{array} \right].$$</p>
<hr>
<p>The set of derivative pairs isn't very nice. In particular, it's not a subspace of $\mathbb{Q}(x)^2$. We can say one thing about it, though: we know it contains
$$\left[ \begin{array}{c} 1' \\ 1 \end{array} \right]
= \left[ \begin{array}{c} 0 \\ 1 \end{array} \right].$$
Using the matrix $X_1$, we can develop this fact into a complete understanding of derivative pairs.</p>
<p>To see how, recall that the multiplication-by-$x$ operator on $\mathbb{Q}(x)$ is invertible. In fact, plugging the multiplication-by-$x$ operator into any rational function gives a well-defined $\mathbb{Q}(x)$-linear operator on $\mathbb{Q}(x)$. Every rational function can be built up from $1 \in \mathbb{Q}(x)$ by applying a rational function of the multiplication-by-$x$ operator. In matrices,
$$\left[ \begin{array}{c} f \end{array} \right] = f(X_0) \left[ \begin{array}{c} 1 \end{array} \right].$$
Similarly, any derivative pair in $\mathbb{Q}(x)^2$ can be built up from
$$\left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \in \mathbb{Q}(x)^2$$
by applying a rational function of $X_1$:
$$\left[ \begin{array}{c} f' \\ f \end{array} \right]
= f(X_1) \left[ \begin{array}{c} 0 \\ 1 \end{array} \right].$$
That explains the mystifying behavior you noticed.</p>
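<p>(A quick added sanity check with $f(x)=x^2$:
$$f(X_1)=X_1^2=\left[ \begin{array}{cc} x^2 & 2x \\ 0 & x^2 \end{array} \right],
\qquad
X_1^2\left[ \begin{array}{c} 0 \\ 1 \end{array} \right]=\left[ \begin{array}{c} 2x \\ x^2 \end{array} \right]=\left[ \begin{array}{c} (x^2)' \\ x^2 \end{array} \right],$$
exactly as the formula predicts.)</p>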
<hr>
<p>This idea can be extended to higher derivatives. For example, multiplication by $x$ acts on "derivative quadruples"
$$\left[ \begin{array}{c} f''' \\ f'' \\ f' \\ f \end{array} \right] \in \mathbb{Q}(x)^4$$
by the matrix
$$X_4 = \left[ \begin{array}{cc}
x & 3 & 0 & 0 \\
0 & x & 2 & 0 \\
0 & 0 & x & 1 \\
0 & 0 & 0 & x
\end{array} \right],$$
yielding the formula
$$\left[ \begin{array}{c} f''' \\ f'' \\ f' \\ f \end{array} \right] =
f(X_4) \left[ \begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array} \right].$$
I'm certain this whole business is related to Myself's <a href="https://math.stackexchange.com/questions/1541800/why-does-this-matrix-give-the-derivative-of-a-function#comment3139754_1541800">comment</a> about dual numbers, but I haven't worked out how.</p>
|
1,344,993 |
<p>Let $g_1, g_2, h_1, h_2 : \mathbb{R} \rightarrow \mathbb{R}$ be non-decreasing and right-continuous. Define
$$
\begin{align}
f_1 & := g_1 - h_1 \\
f_2 & := g_2 - h_2
\end{align}
$$
and suppose $f_1 = f_2$. In other words, for every $a, b \in \mathbb{R}$ with $a < b$, the restriction of $g_1 - h_1$ and $g_2 - h_2$ to $[a, b]$ are two Jordan decompositions of the same bounded-variation function.</p>
<p>Denote with $\mu_1, \mu_2$ the (positive) Lebesgue-Stieltjes measures engendered by $g_1, g_2$, respectively, and with $\nu_1, \nu_2$ the (positive) Lebesgue-Stieltjes measures engendered by $h_1, h_2$, respectively. Suppose that either $\mu_1$ or $\nu_1$ is finite (so we may define the signed measure $\mu_1 - \nu_1$, as we do below).</p>
<ol>
<li><p>Is it necessarily the case that either $\mu_2$ or $\nu_2$ is finite? (so we may define the signed measure $\mu_2 - \nu_2$, as we do below.)</p></li>
<li><p>Suppose that either $\mu_2$ or $\nu_2$ is finite. Define the signed measures
$$
\begin{align}
\varphi_1 & := \mu_1 - \nu_1 \\
\varphi_2 & := \mu_2 - \nu_2
\end{align}
$$
Is it the case that $\varphi_1 = \varphi_2$?</p></li>
</ol>
|
David C. Ullrich
| 248,223 |
<p>Actually no, a function has at most <em>one</em> Jordan decomposition.
You have two decompositions, but at most one is the <em>Jordan</em> decomposition.</p>
<p>Anyway, finiteness for one decomposition does not imply finiteness for the other. Consider $$0=0-0=x^+-x^+.$$</p>
<p><em>If</em> $f$ has bounded variation and all your measures happen to be finite then yes, $\phi_1=\phi_2$. This is because, for example, $\phi_j((x,y])=f(y)-f(x)$.</p>
|
1,344,993 |
<p>Let $g_1, g_2, h_1, h_2 : \mathbb{R} \rightarrow \mathbb{R}$ be non-decreasing and right-continuous. Define
$$
\begin{align}
f_1 & := g_1 - h_1 \\
f_2 & := g_2 - h_2
\end{align}
$$
and suppose $f_1 = f_2$. In other words, for every $a, b \in \mathbb{R}$ with $a < b$, the restriction of $g_1 - h_1$ and $g_2 - h_2$ to $[a, b]$ are two Jordan decompositions of the same bounded-variation function.</p>
<p>Denote with $\mu_1, \mu_2$ the (positive) Lebesgue-Stieltjes measures engendered by $g_1, g_2$, respectively, and with $\nu_1, \nu_2$ the (positive) Lebesgue-Stieltjes measures engendered by $h_1, h_2$, respectively. Suppose that either $\mu_1$ or $\nu_1$ is finite (so we may define the signed measure $\mu_1 - \nu_1$, as we do below).</p>
<ol>
<li><p>Is it necessarily the case that either $\mu_2$ or $\nu_2$ is finite? (so we may define the signed measure $\mu_2 - \nu_2$, as we do below.)</p></li>
<li><p>Suppose that either $\mu_2$ or $\nu_2$ is finite. Define the signed measures
$$
\begin{align}
\varphi_1 & := \mu_1 - \nu_1 \\
\varphi_2 & := \mu_2 - \nu_2
\end{align}
$$
Is it the case that $\varphi_1 = \varphi_2$?</p></li>
</ol>
|
Evan Aad
| 37,058 |
<p>I'd like to expand upon <a href="https://math.stackexchange.com/users/248223/david-c-ullrich">David C. Ullrich</a>'s answer to question #2. The answer, as he wrote, is: yes, $\phi_1 = \phi_2$. Indeed, let $B$ be a Borel set on the real line. Then for $i \in \{1, 2\}$,
$$
\phi_i(B) = \lim_{n \to \infty} \phi_i(B \cap (-n, n])
$$
since the sets $B \cap (-n, n]$, $n \in \mathbb{N}$, increase to $B$ and signed measures are continuous from below. Therefore, it suffices to show, for any $n \in \mathbb{N}$, that
$$
\phi_1(B \cap (-n, n]) = \phi_2(B \cap (-n, n])
$$
Let then $n \in \mathbb{N}$ be some positive integer. Since, restricted to the interval $(-n, n]$, $\phi_1$ and $\phi_2$ are finite, Dynkin's $\pi$-$\lambda$ theorem can be used to show that $\phi_1\big|_{(-n, n]} = \phi_2\big|_{(-n, n]}$ (use the easily verified fact that for every $a, b \in \mathbb{R}$ with $a < b$ we have $\phi_1((a, b]) = f_1(b) - f_1(a) = f_2(b) - f_2(a) = \phi_2((a, b])$). Hence, in particular, $\phi_1(B \cap (-n, n]) = \phi_2(B \cap (-n, n])$, Q.E.D.</p>
|
4,206,994 |
<p>Let <span class="math-container">$F(X, Y)$</span> be an irreducible polynomial of <span class="math-container">$\mathbb{C}[X, Y]$</span> with <span class="math-container">$F(0, 0) = 0$</span> and non-singular there, and let <span class="math-container">$(X(t), Y(t))$</span> be a local analytical parametrization of the corresponding curve, that is, <span class="math-container">$F(X(t), Y(t)) = 0$</span> for <span class="math-container">$t$</span> in a neighborhood of <span class="math-container">$0$</span>, with <span class="math-container">$X(0) = Y(0) = 0$</span> and <span class="math-container">$X'(0) \neq 0$</span>.</p>
<p>If, for some polynomial <span class="math-container">$G(X, Y)$</span>, we have <span class="math-container">$G(X(t), Y(t)) = 0$</span> for <span class="math-container">$t$</span> in the neighborhood in question, does then <span class="math-container">$G(X, Y)$</span> necessarily belong to the ideal generated by <span class="math-container">$F(X, Y)$</span> ? In other words, can the whole curve <span class="math-container">$F(X, Y) = 0$</span> be reconstructed from its local snippet?</p>
<p>And for an arbitrary algebraically closed field, is there a well-known characterization for a subset of an irreducible algebraic curve to define the curve already ? Is it for instance enough for such a subset to be infinite ?</p>
|
Aphelli
| 556,825 |
<p>The result that decides your problem is the following: let <span class="math-container">$f,g \in \mathbb{C}[x,y]$</span> be two non-associated irreducible polynomials. Then the set of common zeros of <span class="math-container">$f$</span> and <span class="math-container">$g$</span> is finite.</p>
<p>Proof: we can see <span class="math-container">$f,g$</span> as polynomials in <span class="math-container">$\mathbb{C}(x)[y]$</span>.</p>
<p>Suppose one of them is constant: then (for instance) <span class="math-container">$f=f_0(x)=x-x_0$</span> (<span class="math-container">$f$</span> irreducible) and <span class="math-container">$g=g(x,y)$</span>. If <span class="math-container">$f$</span> and <span class="math-container">$g$</span> have infinitely many common roots, this means that <span class="math-container">$g(x_0,\cdot)=0$</span>. Then <span class="math-container">$x-x_0$</span> divides all the coefficients of <span class="math-container">$g(x,y) \in \mathbb{C}[x][y]$</span>, so that <span class="math-container">$f|g$</span> and thus <span class="math-container">$f,g$</span> are proportional.</p>
<p>Otherwise, <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are two non-proportional irreducible elements of <span class="math-container">$\mathbb{C}[x,y]$</span>, so they are irreducible and non-proportional in <span class="math-container">$\mathbb{C}(x)[y]$</span>, hence coprime. Thus there are <span class="math-container">$a,b \in \mathbb{C}(x)[y]$</span> such that <span class="math-container">$af+bg=1$</span>. Multiplying by all denominators, it means that there is some nonzero <span class="math-container">$p \in \mathbb{C}[x] \cap (f,g)$</span>. Similarly, there is some nonzero <span class="math-container">$q \in \mathbb{C}[y] \cap (f,g)$</span>. Then, if <span class="math-container">$(x,y)$</span> is a root of <span class="math-container">$f$</span> and <span class="math-container">$g$</span>, <span class="math-container">$p(x)=0$</span> and <span class="math-container">$q(y)=0$</span>, which leaves finitely many possibilities. QED.</p>
|
2,185,434 |
<p>I recently read this problem</p>
<blockquote>
<p>A bag contains $500$ beads, each of the same size, but in $5$ different colors. Suppose there are $100$ beads of each color and I am blindfolded. What is the least number of beads I must pick before I can be sure there are $5$ beads of the same color among the beads I have picked blindfolded? <br/></p>
</blockquote>
<p>My intuitive answer was $401$, because the worst case would be picking $100$ beads of $4$ of the colors, then picking $1$ of the last remaining color. However, the given answer was $21$. Could someone explain how this answer was achieved?</p>
|
hunter
| 108,129 |
<p>Your answer is the number of beads you would have to pick to be sure there was at least one of each color. But that's not the question. You just want there to be one color such that there are at least 5 beads in that color. So 21 beads will do, since it's greater than (4 beads)*(5 colors) = 20.</p>
|
90,548 |
<p>Suppose we are handed an algebra $A$ over a field $k$. What should we look at if we want to determine whether $A$ can or cannot be equipped with structure maps to make it a Hopf algebra?</p>
<p>I guess in order to narrow it down a bit, I'll phrase it like this: what are some necessary conditions on an algebra for it to be a Hopf algebra?</p>
<p>Thoughts so far:</p>
<p>The first obvious condition is that $A$ must be augmented, i.e. there must be a nontrivial character $\varepsilon : A \to k$. Since this is generally not that hard to determine if we are given the algebra in some fairly concrete way, let's suppose that $A$ is given to us with an augmentation map.</p>
<p>If $A$ is finite-dimensional, then $A$ must be a Frobenius algebra. But not every finite-dimensional Frobenius algebra is a Hopf algebra, e.g. $\Lambda^\bullet(k^2)$ is not a Hopf algebra if the characteristic of $k$ is not 2. And generally I am more interested in the infinite-dimensional case.</p>
<p>All I can come up with is this: the category of finite-dimensional $A$-modules must be a (left) rigid monoidal category. But I don't know if that is a helpful observation: given a category with a forgetful functor to finite-dimensional vector spaces over some field, how can one prove that it can't be given the structure of a <s>braided</s> rigid monoidal category?</p>
<p>And perhaps there are some homological invariants that one can look at?</p>
<p>To sum up, the question is:</p>
<h3>Question</h3>
<p>Given a $k$-algebra $A$ and a nonzero character $\varepsilon : A \to k$, are there invariants we can look at in order to show that $A$ cannot be given the structure of a Hopf algebra?</p>
|
Peter May
| 14,447 |
<p>Let $I$ be the kernel of $\epsilon\colon A\to k$. Filter $A$ by the powers of $I$.
If $A$ is a Hopf algebra, then the associated graded algebra $E^0A$ is
a primitively generated Hopf algebra. If the characteristic of $k$ is
zero, $E^0A$ is isomorphic to the universal enveloping algebra of its
Lie algebra of primitive elements. By the Poincaré-Birkhoff-Witt (PBW)
theorem, the associated graded algebra of $E^0A$ with respect to its
Lie filtration is isomorphic (when regraded by total degree) to the
free commutative algebra on the primitive elements of $E^0A$, which is
a polynomial algebra (assuming the $A$ you start out with is ungraded,
or graded and concentrated in even degrees). Thus $A$ is two filtrations
away from a polynomial algebra, which gives a significant restriction on
the underlying vector space of $A$. If $k$ has positive characteristic $p$,
the same argument applies with Lie algebra replaced by restricted Lie algebra,
but of course the conclusion of the PBW theorem gives much less complete information,
depending on the restriction ($p$th power operation). </p>
|
60,081 |
<p>I have a stochastic matrix $A \in R^{n \times n}$ whose sum of the entries in each row is $1$. When I found out the eigenvalues and eigenvectors for this stochastic matrix, it always happens that one of the eigenvalues is $1$. </p>
<p>Is it true that for any square <a href="https://en.wikipedia.org/wiki/Stochastic_matrix" rel="nofollow noreferrer">row- or right-stochastic matrix</a> (i.e. each row sums up to $1$) one of the eigenvalues is $1$?
If so, how do we prove it?</p>
|
Geoff Robinson
| 13,147 |
<p>The column vector with every entry $1$ is an eigenvector with eigenvalue $1$ for your matrix. It is not necessarily true that the eigenvalue $1$ occurs with <em>algebraic multiplicity</em> $1$ as an eigenvalue for your matrix $A$. By the Frobenius-Perron theorem, that is the case if the entries of $A$ (or even some power of $A$) are all positive. What is true is that $x-1$ is not a repeated factor of the minimum polynomial of $A$ (using the Frobenius-Perron theorem on each block).</p>
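<p>(To spell out the first sentence: if $\mathbf{1}$ denotes the column vector with every entry $1$, then $(A\mathbf{1})_i=\sum_j a_{ij}=1$ for each $i$, since every row of $A$ sums to $1$; hence $A\mathbf{1}=\mathbf{1}$.)</p>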
|
782,929 |
<p>prove or disprove following matrix is
positive definite matrix ?</p>
<p>$$\begin{bmatrix}
\dfrac{\sin(a_1-a_1)}{a_1-a_1}&\cdots&\dfrac{\sin(a_1-a_n)}{a_1-a_n}\\
\vdots& &\vdots\\
\dfrac{\sin(a_n-a_1)}{a_n-a_1}&\cdots&\dfrac{\sin(a_n-a_n)}{a_n-a_n}
\end{bmatrix}_{n\times n}$$
and where we define $$\dfrac{\sin 0}{0}=1$$</p>
<p>Maybe this problem can be solved using an integral representation,
since
$$\frac{1}{x} = \int_0^\infty e^{-tx} \, dt\,?$$
Thank you</p>
|
Omran Kouba
| 140,450 |
<blockquote>
<p>Let $b_{kl}=\frac{\sin{(a_k-a_l)}}{a_k-a_l}$.
The symmetric $n\times n$ matrix $M=(b_{kl})$ is positive definite, if and only if $(a_1,\ldots,a_n)$ are distinct. </p>
</blockquote>
<p>Note that
$$\frac{\sin a}{a}=\frac{1}{2}\int_{-1}^1e^{iat}dt$$
Thus, for ${\bf x}=(x_1,\ldots,x_n)\in\Bbb{C}^n$ we have
$$\eqalign{
{\bf x}^*M{\bf x}&=\frac{1}{2}\int_{-1}^1\sum_{k,l=1}^nx_k\overline{x_l}e^{i(a_k-a_l)t}dt\cr
&=\frac{1}{2}\int_{-1}^1\left\vert\sum_{k=1}^n{x_k}e^{ia_k t}\right\vert^2 dt\geq0\tag{1}
}
$$
Thus $M$ is positive semidefinite. Moreover, suppose that ${\bf x}^*M{\bf x}=0$; we want to prove that $$x_1=x_2=\cdots=x_n=0.$$ From $(1)$ we conclude that
the continuous function $t\mapsto \sum_{k=1}^n{x_k}e^{ia_k t}$ is zero on $[-1,1]$.
Now, consider the function $f$ defined by $f(z)=\sum_{k=1}^n{x_k}e^{ia_k z}$. This
is an analytic function in $\Bbb{C}$ that is equal to $0$ for $z\in[-1,1]$, so it must be identically zero.</p>
<p>Consider $j\in\{1,\ldots,n\}$, we have
$$
\forall\,t\in \Bbb{R},\quad \sum_{k=1}^nx_ke^{i(a_k-a_j)t}=0
$$
hence, for $T>0$,
$$
\sum_{k=1}^nx_k\left(\frac{1}{2T}\int_{-T}^Te^{i(a_k-a_j)t}dt\right)=0
$$
Letting $T$ tend to $\infty$, and using that the $a_k$ are pairwise distinct (so that only the term $k=j$ survives in the limit), we conclude that $x_j=0$. Here we use the fact that $$
\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^Te^{i wt}dt=\left\{\matrix{1&w=0\cr 0&w\ne 0}\right.
$$ and the announced conclusion follows.</p>
|
1,760,148 |
<p>If I have a connected metric space $X$, is any ball around a point $x\in X$ also connected?</p>
|
hmakholm left over Monica
| 14,366 |
<p>There are even <em>complete path-connected</em> metric spaces that contain a point $x$ such that <em>no</em> ball around $x$ is connected, for example</p>
<p>$$ \{\langle x,0\rangle \mid x\ge 1\} \cup
\{\langle 1,y\rangle \mid 0\le y\le 1 \} \cup
\{\langle x,\tfrac1x \rangle \mid x\ge 1 \} \cup
\bigcup_{n=3}^\infty \{ \langle x,\tfrac1n \rangle \mid 2 \le x \le n \}
$$</p>
<p>No ball of finite radius around $\langle 2,0\rangle$ is connected.</p>
<hr>
<p>And here's a sketch of a complete and connected (but not path-connected) subspace of $\mathbb R^2$ where <em>no ball whatsoever, around any point,</em> is connected:</p>
<p><a href="https://i.stack.imgur.com/UxkjQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UxkjQ.png" alt="computer-generated sketch"></a></p>
<p>Each of the gaps in the Cantor set on the left will eventually be contracted -- but the smaller the gap, the farther to the right will this happen.</p>
|
1,760,148 |
<p>If I have a connected metric space $X$, is any ball around a point $x\in X$ also connected?</p>
|
Giulio Bresciani
| 118,028 |
<p>Just take the plane with a segment removed, and consider a small ball centered at a point near the middle of the removed segment, with radius large enough to cross the segment but not to reach its endpoints: such a ball is disconnected. There's no need for complicated constructions.</p>
|
2,941,896 |
<p>I want to ask you for following statement</p>
<blockquote>
<p><span class="math-container">$f$</span> is continuous from <span class="math-container">$X$</span> to <span class="math-container">$Y$</span>, where <span class="math-container">$X$</span> is compact Hausdorff, connected and <span class="math-container">$Y$</span> is ordered set in order topology, then <span class="math-container">$f$</span> is onto?</p>
</blockquote>
<p>My attempt </p>
<p>For given <span class="math-container">$y \in Y$</span> choose nbhd <span class="math-container">$N_y$</span>. Since <span class="math-container">$f$</span> is continuous and <span class="math-container">$X$</span> is compact Hausdorff,
<span class="math-container">$B=f^{-1}(\bar N_y)$</span> is compact in X, where <span class="math-container">$\bar N_y$</span> is the closure of the nbhd. Since <span class="math-container">$f(B)$</span> is compact in Y and contains <span class="math-container">$y$</span>, it attains its extreme values. Now we can apply the Intermediate Value Theorem on <span class="math-container">$f(B)$</span>, so we can find <span class="math-container">$x^* \in B$</span> such that <span class="math-container">$f(x^*)=y$</span>. Thus <span class="math-container">$f$</span> is onto.</p>
<p>First, I want to ask whether this proof is correct or not.</p>
<p>Second, is there any counterexample if X satisfies only the connectedness condition?</p>
<p>Third, is there any other condition on <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> under which <span class="math-container">$f$</span> is automatically onto?</p>
<p>Thank you!</p>
|
5xum
| 112,884 |
<p><span class="math-container">$3\log x - 2\log x = \log x$</span>, just like <span class="math-container">$3y-2y=y$</span> no matter what <span class="math-container">$y$</span> is equal to.</p>
<p>Alternatively, you can get
<span class="math-container">$$3\log x - 2\log x = \log(x^3)-\log(x^2) = \log\left(\frac{x^3}{x^2}\right) = \log x$$</span></p>
<p>but you can never under any manipulation get <span class="math-container">$$3\log x - 2\log x = \log(x)^\frac{3}{2}$$</span> because that equality is simply not true.</p>
<hr>
<p>That said, for your main question, use the fact that <span class="math-container">$\log(x^n) = n\cdot \log(x)$</span> and your equation should become much much simpler.</p>
|
2,941,896 |
<p>I want to ask you for following statement</p>
<blockquote>
<p><span class="math-container">$f$</span> is continuous from <span class="math-container">$X$</span> to <span class="math-container">$Y$</span>, where <span class="math-container">$X$</span> is compact Hausdorff, connected and <span class="math-container">$Y$</span> is ordered set in order topology, then <span class="math-container">$f$</span> is onto?</p>
</blockquote>
<p>My attempt </p>
<p>For given <span class="math-container">$y \in Y$</span> choose nbhd <span class="math-container">$N_y$</span>. Since <span class="math-container">$f$</span> is continuous and <span class="math-container">$X$</span> is compact Hausdorff,
<span class="math-container">$B=f^{-1}(\bar N_y)$</span> is compact in X, where <span class="math-container">$\bar N_y$</span> is the closure of the nbhd. Since <span class="math-container">$f(B)$</span> is compact in Y and contains <span class="math-container">$y$</span>, it attains its extreme values. Now we can apply the Intermediate Value Theorem on <span class="math-container">$f(B)$</span>, so we can find <span class="math-container">$x^* \in B$</span> such that <span class="math-container">$f(x^*)=y$</span>. Thus <span class="math-container">$f$</span> is onto.</p>
<p>First, I want to ask whether this proof is correct or not.</p>
<p>Second, is there any counterexample if X satisfies only the connectedness condition?</p>
<p>Third, is there any other condition on <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> under which <span class="math-container">$f$</span> is automatically onto?</p>
<p>Thank you!</p>
|
KM101
| 596,598 |
<p><span class="math-container">$$3\log x-2\log x$$</span>
Here, we use two identities.
<span class="math-container">$$\log a^b = b\cdot\log a$$</span>
<span class="math-container">$$\log_a b-\log_a c = \log_a \big(\frac{b}{c}\big)$$</span>
Use the first identity to reach the following expression.<br>
<span class="math-container">$$\implies \log x^3-\log x^2$$</span>
Use the second identity to simplify.<br>
<span class="math-container">$$\implies \log \big(\frac{x^3}{x^2}\big) \implies \boxed{\log x}$$</span>
So no, you can’t get <span class="math-container">$(\log x)^{\frac{3}{2}}$</span>... </p>
<p>For the next question, we have the following equation.
<span class="math-container">$$\log x+\log x^2+\log x^3+...+\log x^n = n\cdot(n+1)$$</span>
Simplify the left hand side using the first identity.<br>
<span class="math-container">$$\log x+2\cdot\log x+3\cdot\log x+...+n\cdot\log x = n\cdot(n+1)$$</span>
<span class="math-container">$$(1+2+3+...+n)\cdot\log x = n\cdot(n+1)$$</span>
<span class="math-container">$$\sum_{k = 1}^n k = n\cdot\frac{n+1}{2}$$</span>
<span class="math-container">$$\biggr(\frac{n\cdot(1+n)}{2}\biggr)\cdot \log x = n\cdot(n+1)$$</span>
<span class="math-container">$$\log x = \frac{n\cdot(n+1)}{\frac{n\cdot(1+n)}{2}}$$</span>
<span class="math-container">$$\log x = \frac{n\cdot(n+1)\cdot 2}{n\cdot(1+n)}$$</span>
<span class="math-container">$$\log x = 2$$</span>
Use the definition of logarithms.
<span class="math-container">$$\log_b x = y \longleftrightarrow b^y = x$$</span>
So, we get:<br>
<span class="math-container">$$\boxed{x = 10^{2} = 100}$$</span></p>
|
782,427 |
<p><strong>I - First doubt</strong> : free variables on open formulas . </p>
<p>I'm having a hard time discovering what different kinds of variables in an open formula of fol are refering to., For example, lets take some open formulas : </p>
<p>1: $"x+2=5"$<br>
2: $"x=x"$<br>
3: $"x+y = y+x"$<br>
4: $"x+2 = 5 -> x=3 "$ .</p>
<p>So let's suppose we have a domain D of a structure which is interpreting that fol language.
Even though all those formulas are open formulas with variables occurring free, I think the variables refer to different kinds of sets (obviously all sets with the same type of objects, be it numbers, or whatever) depending on the kind of formula... to what exactly, in each case? </p>
<p>Formula 1 would be referring to a fixed element of the domain (whatever kind of elements it has)? Would that be it? Or would it be an arbitrary one? </p>
<p>Should we always use the phrase "referring to" for open formulas, instead of "ranging through"?</p>
<p>The free variable in formula 2 would be referring to a bunch of elements of the domain? Or would it be ranging through a bunch of elements of the domain? How does the truth-set (the set of individuals that satisfy the open sentence) of each formula enter the picture? </p>
<p><strong>II - Second doubt</strong> : When to close a formula, and what does it provide us ? </p>
<p>The second point I'm having a hard time with is how to differentiate, and how to know the difference between the uses and the reasons of choosing to use either the formula<br>
$"x+2 =5 -> x = 3 "$ or $\forall x (x+2 = 5 -> x = 3 )$ . </p>
<p>And also between the difference of using ( and choosing to use ) the open formula $"x+y = y+x"$ or the closed formula $"\forall x \forall y ( x + y = y + x ) "$. </p>
<p>Same thing about $"x = x$ and $"\forall x (x = x) "$. </p>
|
Dan Christensen
| 3,515 |
<p>Quite often, quantifiers are left out of statements in informal discussions. When we talk about the commutativity of addition on the natural numbers, for example, some authors may simply write $x+y=y+x$. To be more precise, they should write $\forall x,y\in N: x+y=y+x.$ </p>
<p>In a statement in a <em>formal</em> proof, free variables are those that are not quantified within that statement. If we have a statement in formal proof like $y=x+1$ (with no quantifiers), it is best to think of it as referring to a single pair of numbers $x$ and $y$ even if we are later able to generalize for infinitely many cases using quantifiers.</p>
|
414,432 |
<p>We define $f_n:\mathbb{R}\to\mathbb{R}$ by $f_n(x)=\dfrac{x}{1+nx^2}$ for each $n\ge 1$.</p>
<p>I compute that $f(x):= \displaystyle\lim_{n\to \infty}f_n(x) = 0$ for each $x\in\mathbb{R}$.</p>
<p>Now, I want to know in which intervals $I\subseteq \mathbb{R}$ the convergence is uniform.</p>
<p>Any hint? Thanks.</p>
|
Euler....IS_ALIVE
| 38,265 |
<p>The sequence converges uniformly on $I$ if for every $\epsilon >0, \exists N$ such that $\forall x \in I $ and $\forall n \geq N$, we have $|f_n(x) - f(x)| < \epsilon$</p>
<p>For every $x \neq 0$ we have $1 + nx^2 \geq 2\sqrt{n}\,|x|$ (this is just $(1-\sqrt{n}\,|x|)^2 \geq 0$ rearranged), so that $|f_n(x) - f(x)| = \frac{|x|}{1+nx^2} \leq \frac{1}{2\sqrt{n}}$; and of course $f_n(0) = 0$. </p>
<p>So, given $\epsilon > 0$, choose $N > \frac{1}{4\epsilon^2}$. Then for all $n \geq N$ and all $x \in \mathbb{R}$ we have $|f_n(x) - f(x)| \leq \frac{1}{2\sqrt{n}} < \epsilon$.</p>
<p>Since this bound does not depend on $x$, convergence is uniform on $\mathbb{R}$, and hence on every interval $I$. </p>
|
221,762 |
<p>Let $f$ be a newform of level $\Gamma_1(N)$ and character $\chi$ which is not induced by a character mod $N/p$. I learned from <a href="http://wstein.org/books/ribet-stein/main.pdf" rel="nofollow">these notes</a> by Ribet and Stein that $|a_p|=p^{(k-1)/2}$ where $k$ is the weight of $f$. So I wonder</p>
<p>1, the proof of this statement,</p>
<p>2, is $a_p$ for $p|N$ in the number field $K_f$ of $f$, i.e the number field generated by all $a_q$ for $(q,N)=1$?</p>
<p>Now I see that 2 is true. Why I thought it's wrong is because $p^{(k-1)/2}$ is of degree 2, which will implies in this case $2|[K_f:\mathbb{Q}]$. Seems strange...</p>
|
Myshkin
| 43,108 |
<p>I think you can find the proof you want in:</p>
<ul>
<li>Andrew Ogg, <a href="http://wwwuser.gwdg.de/~subtypo3/gdz/pdf/PPN235181684_0179/PPN235181684_0179___LOG_0039.pdf" rel="nofollow">On the Eigenvalues of Hecke Operators</a> (1969)</li>
</ul>
<p>Perhaps you can find it more thoroughly explained in Shimura's book, or on more concrete notes on Atkin-Lehner.</p>
<p>As for your second question, the first part is way too broad, see for example Ken Ono's "The Web of Modularity: Arithmetic of the Coefficients of Modular Forms and q-series".</p>
<p>Not so sure anymore about the number field generated by those coefficients.</p>
|
221,762 |
<p>Let $f$ be a newform of level $\Gamma_1(N)$ and character $\chi$ which is not induced by a character mod $N/p$. I learned from <a href="http://wstein.org/books/ribet-stein/main.pdf" rel="nofollow">these notes</a> by Ribet and Stein that $|a_p|=p^{(k-1)/2}$ where $k$ is the weight of $f$. So I wonder</p>
<p>1, the proof of this statement,</p>
<p>2, is $a_p$ for $p|N$ in the number field $K_f$ of $f$, i.e the number field generated by all $a_q$ for $(q,N)=1$?</p>
<p>Now I see that 2 is true. Why I thought it's wrong is because $p^{(k-1)/2}$ is of degree 2, which will implies in this case $2|[K_f:\mathbb{Q}]$. Seems strange...</p>
|
paul Monsky
| 6,214 |
<p>This is really a comment, but the system won't let me post one. I think there's a confusion and that the absolute value is $p^{(k-2)/2}$ where $k$ is the weight. For example there's a level $\Gamma_0 (2)$ newform of weight 8, $f=(\eta(z)\eta(2z))^8$, with $f^3 =\Delta(z)\Delta(2z)=q^3-24q^4+...$ Then $f=q-8q^2+...$, the $U_2$ eigenvalue is $-8$, and the exponent is $(8-2)/2$.</p>
<p>EDIT: The confusion was mine--I missed the condition on the character. My example illustrated case (iii) of Li's Theorem 3, while the question related to case (ii). I'd be grateful for an explicit example from the databases of case (ii) to further clear up my confusion.</p>
|
1,319,761 |
<p>I'm stuck trying to figure out how to solve the following integral:</p>
<p>$\int_{C(0,1)^+}\sin(z)dz$</p>
<p>I've tried parameterizing z(t) but then I get</p>
<p>$\int_0^{2\pi}\sin(e^{it})\,ie^{it}\,dt$, which I don't know how to integrate.</p>
<p>So then I'm looking to use Cauchy's Integral Formula, but I'm not sure if I can use it because it doesn't fit the structure.</p>
<p>I'm not sure where to go from here so any help would be greatly appreciated!</p>
|
Jack D'Aurizio
| 44,121 |
<p>The integral of any entire function over any closed curve is just zero. That follows from:
$$ \forall n\in\mathbb{Z},\qquad \int_{0}^{2\pi} e^{nit}\,dt = 2\pi\cdot\delta(n). $$
In your case:
$$ \sin z = \sum_{n\geq 0}\frac{(-1)^n}{(2n+1)!}z^{2n+1} $$
can be integrated termwise. Can you see what happens, then?</p>
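<p>(Making the termwise integration explicit, with $z=e^{it}$, $dz=ie^{it}\,dt$:
$$\int_{C(0,1)^+}\sin(z)\,dz=\sum_{n\geq 0}\frac{(-1)^n}{(2n+1)!}\int_{0}^{2\pi}e^{(2n+1)it}\,ie^{it}\,dt=\sum_{n\geq 0}\frac{(-1)^n\,i}{(2n+1)!}\int_{0}^{2\pi}e^{(2n+2)it}\,dt=0,$$
since $2n+2\neq 0$ for every $n\geq 0$.)</p>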
|
659,988 |
<p>I understood the definition of a $\sigma$-algebra, that its elements are closed under complementation and countable union, but as I am not very good at maths, I could not visualize or understand the intuition behind the meaning of "closed under complementation and countable union". </p>
<p>If we consider the set X to be a finite set, then what would be a good real life example of a $\sigma$-algebra, for a noob to understand. </p>
|
Shivam Singh
| 377,513 |
<p>Steps to form the smallest sigma field:</p>
<p>Step 1: break the whole space into sets that are mutually exclusive and exhaustive (a partition).</p>
<p>Step 2: form a new collection from the sets obtained in step 1 in this way: take the empty set, the single sets, the unions of two at a time, the unions of three at a time, ..., and finally the union of all of them. The collection so formed is the sigma field you need.</p>
|
1,045,941 |
<p>Usually this is just given as a straight up definition in a calculus course. I am wondering how you prove it?
I tried using the limit definition, $$\lim\limits_{h\rightarrow 0} \dfrac{\log(x+h)-\log(x)}{h}$$
but this led to no developments.</p>
|
Darrin
| 117,053 |
<p>Usually, this is taken as the definition: $$\int_1^x \frac{1}{t}\,dt = \ln(x), \quad x \in (0, \infty).$$ </p>
<p>Nonetheless, suppose you were not working with this as a definition - how could we verify this from, say, what we know about $f(x)=e^x$ and then proceeding to define $\ln(x)$ as that function $f^{-1}(x)$ which is the inverse function of $e^x$?</p>
<p>We know from derivatives of inverses of differentiable functions that $$[f^{-1}]'(y)=\frac{1}{f'(x)}$$ where $y=f(x)$ (draw the tangent lines on $e^x$ and their corresponding reflected points on the inverse function to convince yourself of this geometrically). Therefore (being a bit liberal with our usage of $x$ here), we have that </p>
<p>$$\frac{d}{dx} \ln(x) = \frac{1}{e^{ln(x)}}=\frac{1}{x}$$ for $x \in (0, \infty)$ (as $f(x)>0$, the above expression of $f'(x)=e^{f^{-1}(x)}=e^{ln(x)} $exists for all such $x$). </p>
|
2,583,232 |
<p>Prove that $\lim_{n \to \infty} \dfrac{2n^2}{5n^2+1}=\dfrac{2}{5}$</p>
<p>$$\forall \epsilon>0 \ \ :\exists N(\epsilon)\in \mathbb{N} \ \ \text{such that} \ \ \forall n >N(\epsilon) \ \ \text{we have} |\dfrac{-2}{25n^2+5}|<\epsilon$$
$$ |\dfrac{-2}{25n^2+5}|<\epsilon \\ |\dfrac{2}{25n^2+5}|<\epsilon \\ \dfrac{5(n^2+1)}{2}<\epsilon \\ {5(n^2+1)}<2\epsilon \\ {n}>\sqrt{\dfrac{2\epsilon -5}{25}}$$</p>
<p>What do I do?</p>
|
Higurashi
| 204,434 |
<p>Note that $\frac{2n^2}{5n^2+1} = \frac{2}{5} \frac{n^2}{n^2+\frac{1}{5}}$. Subtract $\frac{2}{5}$, and you get:
$\frac{2}{5} \left|\frac{n^2}{n^2+\frac{1}{5}}-1\right| = \frac{2}{5}\left|\frac{\frac{1}{5}}{n^2+\frac{1}{5}}\right|$.
Now we can always take $N>0$ such that for all $n>N$, this is smaller than $\epsilon$. Can you see why?</p>
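<p>(One concrete choice, in case it helps: $\frac{2}{5}\left|\frac{\frac{1}{5}}{n^2+\frac{1}{5}}\right|=\frac{2}{25n^2+5}<\frac{1}{n^2}$, so taking $N>\frac{1}{\sqrt{\epsilon}}$ gives $\left|\frac{2n^2}{5n^2+1}-\frac{2}{5}\right|<\frac{1}{n^2}<\epsilon$ for all $n>N$.)</p>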
|
3,984,778 |
<p>Ackermann's function and all the up-arrow notation is based on exponentiating. We know for a fact that the factorial function grows faster than any power law so why not build an iterative sequence based on factorials? I am thinking of the simple function defined by
<span class="math-container">$$
a(n)=a(n-1)!
$$</span>
and to start things off we need <span class="math-container">$a(1)=3$</span>. The numbers generated are <span class="math-container">$3,6,720$</span> and the next one (<span class="math-container">$720!$</span>) is roughly <span class="math-container">$10^{1746}$</span> and I can't even start to imagine what happens next!</p>
<p>Is it primitive recursive? Can it be coded up using for loops avoiding calls to itself? I think not but this is just a hunch. Computer scientists might have better chances of answering that than I.</p>
<p>Thanks in advance, I appreciate it.</p>
|
Wuestenfux
| 417,848 |
<p>So you mean the function <span class="math-container">$a(n)$</span> defined by <span class="math-container">$a(1)=3$</span> and <span class="math-container">$a(n) = a(n-1)!$</span>.</p>
<p>This function is primitive recursive.
Multiplication can be defined by a primitive recursive scheme. Based on this, the factorial function can be defined by a primitive recursive scheme.
From here, you can use induction on <span class="math-container">$n$</span> to show that <span class="math-container">$a(n)$</span> is primitive recursive.</p>
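<p>Since the question also asks about for-loops: here is a loop-only Python sketch (an addition, not part of the argument above) that computes <span class="math-container">$a(n)$</span> without recursion and without a library factorial, mirroring the bounded-iteration flavour of primitive recursion.</p>
<pre><code># a(1) = 3 and a(n) = a(n-1)!, computed with nothing but for-loops.
def a(n):
    value = 3
    for _ in range(n - 1):              # apply the factorial n-1 times
        factorial = 1
        for k in range(2, value + 1):
            factorial *= k
        value = factorial
    return value

print([a(n) for n in range(1, 4)])      # [3, 6, 720]
print(len(str(a(4))))                   # ~1747 digits, i.e. 720! is about 10**1746
</code></pre>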
|
1,051,335 |
<p>Purely out of interest, I wanted to try and construct a sequence of differentiable functions converging to a non-differentiable function. I began with the first non-differentiable function that sprung to my mind, namely
\begin{align}
&f:\mathbb{R}\to\mathbb{R}\\
&f(x)=|x|.
\end{align}
After some testing I considered the function defined by
$$f_\varepsilon(x) = |x|+\frac{\varepsilon}{|x|+\sqrt{\varepsilon}} $$
for some $\varepsilon>0$. Then $\lim\limits_{\varepsilon\to0^+}f_\varepsilon(x)=f(x)$, and $f_\varepsilon(x)$ <em>looks</em> smooth, i.e. differentiable for every $\varepsilon>0$ on the entire domain.</p>
<p><strong>Question:</strong> How can I prove that $f_\varepsilon$ is differentiable for every $\varepsilon>0$ (or disprove) using the definition of the derivative?</p>
<p>If this assertion is true, then I construct the sequence simply by setting $\varepsilon = 1/n$ for $n\in\mathbb{N}$.</p>
<p><strong>Attempt:</strong> I set up the definition for the derivative
\begin{align}
\frac{\mathrm{d}f_\varepsilon}{\mathrm{d}x} &= \lim_{h\to 0}\frac{1}{h}\left[\left(|x+h|+\frac{\varepsilon}{|x+h|+\sqrt{\varepsilon}}\right)-\left(|x|+\frac{\varepsilon}{|x|+\sqrt{\varepsilon}}\right)\right]\\
&=\lim_{h\to 0}\frac{1}{h}\left[|x+h|-|x|+\frac{\varepsilon}{|x+h|+\sqrt{\varepsilon}}-\frac{\varepsilon}{|x|+\sqrt{\varepsilon}}\right],
\end{align}
but I could not figure out how to proceed.</p>
<p><strong>Sidenotes:</strong> An interesting thing I discovered when constructing $f_\varepsilon$, was that almost any small change removes its smoothness, for example
\begin{equation}
g_\varepsilon(x) = |x|+\frac{2\varepsilon}{|x|+\sqrt{\varepsilon}}\hspace{2cm} h_\varepsilon(x) = |x|+\frac{\varepsilon}{|x|+2\sqrt{\varepsilon}}
\end{equation}
do both not look smooth at all. Similarly for the other terms; changing the coefficients will remove the smoothness. I am also somewhat intrigued by this. So if anyone can shed some light on this, even better.</p>
|
Lutz Lehmann
| 115,115 |
<p>Why do you think your function is differentiable? Did you calculate the derivatives? What does the picture look like for $ε=1$ or $ε=10$?</p>
<hr>
<p>An easier differentiable approximation of the abs function is
$$\sqrt{ε^2+x^2}$$ or $$\sqrt{ε^2+x^2}-ε.$$</p>
<p>Differentiability here is obvious by the chain rule.</p>
<hr>
<p>Added: Close to zero, $|x|\lt \sqrt ε$, one can use the binomial formula to get
\begin{align}
|x|+\frac{ε}{|x|+\sqrt{ε}} &= |x|+ε\frac{\sqrt{ε}-|x|}{ε-x^2} =\frac{ε\sqrt{ε}-x^2|x|}{ε-x^2}
\end{align}
which tells that the function is twice continuously differentiable and symmetric at the origin (which we knew before), so that it has horizontal slope there.</p>
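<p>A small numerical illustration (a sketch, assuming the formulas above) comparing the two smoothings of $|x|$ near the origin: both have slopes that pass continuously through $0$ at $x=0$, unlike $|x|$ itself.</p>
<pre><code>import math

eps = 0.01

def f_eps(x):                      # the question's approximation
    return abs(x) + eps / (abs(x) + math.sqrt(eps))

def g_eps(x):                      # the answer's approximation
    return math.sqrt(eps**2 + x**2) - eps

def slope(f, x, h=1e-7):           # central difference
    return (f(x + h) - f(x - h)) / (2 * h)

for x in [-0.01, -0.001, 0.0, 0.001, 0.01]:
    print(x, round(slope(f_eps, x), 4), round(slope(g_eps, x), 4))
</code></pre>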
|
722,663 |
<p>I am totally stuck on how to do this exercise:
how can I find the irreducible components of $ V(X^2 - XY - X^2 Y + X^3) $ in $\mathbb{A}^2(\mathbb{R})$?</p>
<p>Given an algebraic set, what considerations do I have to make to find them? I know the definitions of components and irreducible components, but these don't help me solve this kind of exercise in the end.</p>
<p>Can anyone suggest a good reference book also?</p>
<p>Thanks</p>
|
BoZenKhaa
| 60,953 |
<p>$V(f)$, the zero set of a polynomial $f \in R[x,y]$, is given as set of points such that $f(x,y) = 0$. If this polynomial is reducible, i.e. $f=gh$, the points $(x_0, y_0)$ in $V(f)$ are such that $g(x_0, y_0) = 0$ or $h(x_0, y_0) = 0$. This means that $V(f)$ is a union of $V(g)$ and $V(h)$. </p>
<p>In your case, $f(x,y)=x^2-xy-x^2y+x^3=x(x+1)(x-y)$. The zeros of this polynomial are all the points at which some factor of $f$ vanishes. Thus we see that $V(f) = V(x) \cup V(x+1) \cup V(x-y)$. Geometrically, we can picture this in $\mathbb{R}^2$ as a union of three lines: the $y$-axis, the diagonal line through the origin passing through the first quadrant, and the line parallel to the $y$-axis passing through the point $(-1,0)$. </p>
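<p>If you want to check the factorisation by machine, a short sympy sketch (an illustration, not part of the argument) does it:</p>
<pre><code>from sympy import symbols, factor

x, y = symbols('x y')
f = x**2 - x*y - x**2*y + x**3
print(factor(f))   # expected: x*(x + 1)*(x - y)
</code></pre>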
<p>As for books, a very nice introductory book on algebraic curves, freely available, is by Fulton: <a href="http://www.math.lsa.umich.edu/~wfulton/CurveBook.pdf" rel="nofollow">http://www.math.lsa.umich.edu/~wfulton/CurveBook.pdf</a>.
A quick Google search also turned up these notes, which seem to contain some nice examples and theorems illustrating what is going on: <a href="http://csclub.uwaterloo.ca/~mlbaker/get.php?name=LW-1135-pmath764notes.pdf" rel="nofollow">http://csclub.uwaterloo.ca/~mlbaker/get.php?name=LW-1135-pmath764notes.pdf</a>.</p>
|
3,760,140 |
<p>My professor asked us to find the boundary points and isolated points of <span class="math-container">$\Bbb{Q}$</span>; my answer is that the set of rational numbers has no boundary points and no isolated points. However, the question confuses me: since it asks us "to find" them, it suggests that boundary points and isolated points exist and that I have to find them. So does the set of rational numbers have any boundary points or isolated points?</p>
|
Siddharth Bhat
| 261,373 |
<ul>
<li><p><a href="https://www.desmos.com/calculator/cpecxbejut" rel="nofollow noreferrer">Here's a desmos link to the function <code>peek-a-boo</code> showed me</a></p>
</li>
<li><p>Here's a drawing. The bump function is in purple, the function <span class="math-container">$h(x)$</span> is in red.</p>
</li>
</ul>
<img src="https://i.stack.imgur.com/VJKWD.png" width="300"/>
|
3,428,546 |
<p>Consider whether the following series converges or diverges <span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^n\sin\frac{n\pi}{3n+1}}{\sqrt{n+3}}$$</span></p>
<p>I have tried Leibniz's test but failed to show that <span class="math-container">$\sin\frac{n\pi}{3n+1} / \sqrt{n+3}$</span> is monotone, which I checked graphically that it is. I have managed to show only that the series does not converge absolutely, which by itself proves nothing. Also I have tried using Cauchy's criterion, but failed to arrive at any useful result.</p>
|
WoolierThanThou
| 686,397 |
<p>The point is that often-times, your question only depends on the <em>distributions</em> of the involved random variables, not about their behaviour as a map.</p>
<p>For instance, if <span class="math-container">$X:\Omega\to \mathbb{R}$</span> has the property that <span class="math-container">$X$</span> only ever assumes a finite number of values, <span class="math-container">$1,2,3,...,n$</span>, then <em>any</em> question relating to <span class="math-container">$X$</span> really reduces to questions about the <span class="math-container">$n$</span> sets <span class="math-container">$A_j=\{\omega|X(\omega)=j\},$</span> which is, naturally a partitition of <span class="math-container">$\Omega$</span>. In fact, the only thing I really need to know about <span class="math-container">$A_j$</span> is its probability.</p>
<p>So say <span class="math-container">$\mathbb{P}(A_j)=p_j$</span> for some collection of parameters <span class="math-container">$p_j$</span> such that <span class="math-container">$\sum_{j=1}^{n} p_j=1$</span>. And let <span class="math-container">$\Omega_2$</span> be any other set with a partition <span class="math-container">$(B_j)_{1\leq j\leq n}$</span> and a probability measure <span class="math-container">$\mathbb{P}_2$</span> such that <span class="math-container">$\mathbb{P}(B_j)=p_j$</span>. Then, we could define a random variable <span class="math-container">$Y:\Omega_2\to \mathbb{R}$</span> by <span class="math-container">$Y(\omega)=j$</span> if <span class="math-container">$\omega\in B_j$</span>. Then, what really is the difference between <span class="math-container">$X$</span>and <span class="math-container">$Y$</span>?</p>
<p>Surely, the background spaces are potentially different in all sorts of relevant ways, but <span class="math-container">$X$</span>and <span class="math-container">$Y$</span> can't tell us the difference. They give rise to the same probabilities, the same moments (expectation, variance, so on and so forth), so unless we have some reason to care about them as maps, their different background spaces won't matter - although, it does happen in probability theory that we <em>do</em> care about things on the level of maps - see the concept of almost sure convergence, for instance.</p>
<p>This generalises to non-discrete variables - whenever you are asking about probabilistic traits of a single variable, the only thing that's gonna matter is its distribution, and there are potentially many different background spaces that have a variable with a specific distribution - and it doesn't matter which one you use. For this reason, we usually don't care too much about the background space in question.</p>
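<p>A tiny simulation makes the point tangible (a sketch with made-up spaces, not part of the argument): two variables defined on completely different sample spaces, via completely different maps, yet with the same distribution and hence the same probabilities and moments.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)

# Space 1: a fair die; X = 1 if the roll is at most 2, else 0.
rolls = rng.integers(1, 7, size=100_000)
X = (rolls <= 2).astype(int)

# Space 2: a uniform draw from [0, 1); Y = 1 if the draw is below 1/3, else 0.
u = rng.random(100_000)
Y = (u < 1/3).astype(int)

print(X.mean(), Y.mean())   # both approximately 1/3: same Bernoulli(1/3) law
</code></pre>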
|
1,410,048 |
<p>I am unsure of how to proceed about finding the solution to this problem. $$\sum_{i=6}^8(\sum_{j=i}^8 (j+1)^2)$$</p>
<p>Obviously the last step is not too difficult, but since the lower limit of the inner summation is $i$, I am not sure how to solve this. In classes so far we have only really dealt with cases where $j=1$. I assume, however, that it would be beneficial to reindex, so that we have</p>
<p>$$\sum_{i=6}^8(\sum_{k=i+1}^9 k^2)$$</p>
<p>Any help is greatly appreciated!</p>
|
Christopher Carl Heckman
| 261,187 |
<p>Since there are only three values of $i$, I'd rewrite this as
$$\sum\limits_{j=6}^8 (j+1)^2 + \sum\limits_{j=7}^8 (j+1)^2 + \sum\limits_{j=8}^8 (j+1)^2$$</p>
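<p>As a quick sanity check (a throwaway sketch, not part of the answer), one can evaluate the double sum directly and via the three expanded single sums and confirm they agree:</p>
<pre><code># Both expressions evaluate the same double sum.
double_sum = sum((j + 1)**2 for i in range(6, 9) for j in range(i, 9))
expanded = (sum((j + 1)**2 for j in range(6, 9))
            + sum((j + 1)**2 for j in range(7, 9))
            + sum((j + 1)**2 for j in range(8, 9)))
print(double_sum, expanded)   # 420 420
</code></pre>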
|