qid | question | author | author_id | answer |
---|---|---|---|---|
2,238,830 |
<p>I have just come across "Foundations of Mathematics". Wikipedia describes it as </p>
<blockquote>
<p>Foundations of mathematics can be conceived as the study of the basic
mathematical concepts (number, geometrical figure, set, function,
etc.) and how they form hierarchies of more complex structures and
concepts, especially the fundamentally important structures that form
the language of mathematics (formulas, theories and their models
giving a meaning to formulas, definitions, proofs, algorithms, etc.)
also called metamathematical concepts, with an eye to the
philosophical aspects and the unity of mathematics.</p>
</blockquote>
<p>Now this seems a bit strange to me. If the foundations of mathematics really do deal with the study of basic mathematical concepts, shouldn't they be the first thing taught to us? Also, in theory, wouldn't this mean that someone who has a good grasp of the foundations of mathematics will have a very good understanding of mathematics as a whole?</p>
<p>If that is true, then why aren't the foundations of mathematics taught at an earlier stage in life, and more often?</p>
<p>I apologize for the oversimplification, but I am not an ace mathematician. Hence I want some insight into what the foundations of mathematics are.</p>
|
Mikhail Katz
| 72,694 |
<p>Let me put it this way: if you had to learn precisely how an engine works before you could drive a car, there would be very few drivers on the road. Mathematicians mainly drive cars rather than analyzing the functioning of the engine. For some details on the metaphor of set theory as the automobile, see <a href="http://u.cs.biu.ac.il/~katzmik/infinitesimals.html#17f" rel="nofollow noreferrer">this article</a>.</p>
|
216,748 |
<ol>
<li><p>So far I have shown that if a matrix $A$ is left invertible (with left inverse $L$), then in $Ax = b$, $x$ has at most one solution.
<strong>I got that $LAx = x = Lb$, so $x = Lb$.</strong></p></li>
<li><p>For a right inverse $R$ of $A$, in $Ax = b$, $x$ has at least one solution.
<strong>I got that $x = Rb$.</strong></p></li>
</ol>
<p>The book says that $x$ has one solution in case 1 and at least one solution in case 2.
Can someone explain to me WHY and HOW?</p>
|
Brian M. Scott
| 12,042 |
<p>What you did for the left inverse is fine, but what you did for the right inverse doesn’t work: you know that $AR=I$, but you don’t know that $RA=I$, so you can’t say that $x=Rb$. (Actually, you may have seen that $ARb=Ib=b$ and realized on that basis that $Rb$ is a solution; if so, you’re right, but that doesn’t say anything one way or the other about whether it’s the <strong>only</strong> solution. In any case, you should justify it.)</p>
<p>Suppose that $A$ is an $m\times n$ matrix with a right inverse $R$. Then $R$ is $n\times m$, and $AR$ is $m\times m$. This implies that the $m$ rows of $A$ are linearly independent (why?), i.e., that the rank of $A$ is $m$, and hence that $m\le n$. From here there are several ways to argue that $Ax=b$ has at least one solution, depending on what you’ve already proved. One very straightforward way, however, is to consider what happens to the augmented matrix $\begin{bmatrix}A&b\end{bmatrix}$ when you row-reduce it.</p>
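To make the asymmetry concrete, here is a small sketch in plain Python (the $1\times 2$ matrix $A$, the right inverse $R$, and $b$ are made-up illustrative values, not taken from the answer): it checks that $AR=I$ while $RA\neq I$, so $Rb$ is a solution of $Ax=b$ but not the only one.

```python
# Illustration (made-up 1x2 example): A has a right inverse R, so
# AR = I, but RA is not the identity, and Ax = b has *a* solution
# x = Rb without that solution being unique.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1]]           # 1x2: the single row is linearly independent
R = [[0.5], [0.0]]     # 2x1 right inverse: AR = [[1.0]]

assert matmul(A, R) == [[1.0]]                    # AR = I (1x1)
assert matmul(R, A) == [[1.0, 0.5], [0.0, 0.0]]   # RA is NOT the 2x2 identity

b = [[4]]
x = matmul(R, b)                                  # x = Rb
assert matmul(A, x) == b                          # Rb solves Ax = b
assert matmul(A, [[0], [4]]) == b                 # ...but so does another x
```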
|
426,264 |
<ol>
<li><p>Given one sine wave in the time domain, I want to find its frequency. Because I observe only a very small part of the sine wave (~1 cycle), FFT methods have poor spectral resolution. </p></li>
<li><p>Has there been work that bounds the error on the frequency estimate? </p></li>
</ol>
<p>Thanks a ton</p>
|
Ross Millikan
| 1,827 |
<p>If you have lots of points on the sine wave, an FFT will do very well. If you have $1024$ points and FFT them, the bin width is $0.1\%$ of the period. You can also just look at the time between two zero crossings. The fractional error in frequency will be twice the measurement error (the root sum of squares of the two crossing-time errors) divided by the time between them. If you measure the zero crossings well, that will be quite good. The reason to use zero crossings rather than peaks and troughs is that they are better defined.</p>
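The zero-crossing idea can be sketched as follows (a hypothetical illustration; the sample rate, duration, and test frequency are made-up values). Here successive *upward* crossings are located by linear interpolation, so each gap is one full period; adjacent up/down crossings, as in the answer, would be half a period.

```python
import math

# Hypothetical illustration: recover the frequency of a sampled sine
# from its upward zero crossings.
def zero_crossing_freq(ts, xs):
    crossings = []
    for i in range(len(xs) - 1):
        if xs[i] <= 0 < xs[i + 1]:                  # upward crossing between samples
            frac = -xs[i] / (xs[i + 1] - xs[i])     # linear interpolation
            crossings.append(ts[i] + frac * (ts[i + 1] - ts[i]))
    # successive upward crossings are one full period apart
    periods = [b - a for a, b in zip(crossings, crossings[1:])]
    return 1.0 / (sum(periods) / len(periods))

f_true = 3.7                                        # Hz (made-up)
ts = [i / 1000.0 for i in range(2000)]              # 2 s sampled at 1 kHz
xs = [math.sin(2 * math.pi * f_true * t) for t in ts]
assert abs(zero_crossing_freq(ts, xs) - f_true) < 1e-3
```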
|
1,910,085 |
<p>For all integers $n \ge 0$, prove that the value $4^n + 1$ is not divisible by 3.</p>
<p>I need to use Proof by Induction to solve this problem. The base case is obviously 0, so I solved $4^0 + 1 = 2$. 2 is not divisible by 3.</p>
<p>I just need help proving the inductive step. I was trying to use proof by contradiction by saying that $4^n + 1 = 4m - 1$ for some integer $m$ and then disproving it. But I'd rather use proof by induction to solve this question. Thanks so much.</p>
|
Prahlad Vaidyanathan
| 89,789 |
<p>Your induction step could look like this: </p>
<p>Suppose $4^n + 1 \equiv 1\pmod{3}$, then $4^n \equiv 0\pmod{3}$, so
$$
4^{n+1} + 1 \equiv 4(4^n) + 1 \equiv 1\pmod{3}
$$
and if $4^n+1 \equiv 2\pmod{3}$, then $4^n \equiv 1\pmod{3}$, so
$$
4^{n+1} +1 \equiv 4+1 \equiv 2 \pmod{3}
$$</p>
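As a quick empirical check of the statement being proved (a sketch, not part of the proof), one can test the residues with Python's three-argument `pow`:

```python
# Empirical check: for n = 0..999 the residue of 4**n + 1 mod 3 is
# never 0; in fact it is always 2, since 4 is congruent to 1 mod 3,
# hence 4**n + 1 is congruent to 2 mod 3.
for n in range(1000):
    r = (pow(4, n, 3) + 1) % 3
    assert r == 2   # in particular, never 0
```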
|
1,910,085 |
<p>For all integers $n \ge 0$, prove that the value $4^n + 1$ is not divisible by 3.</p>
<p>I need to use Proof by Induction to solve this problem. The base case is obviously 0, so I solved $4^0 + 1 = 2$. 2 is not divisible by 3.</p>
<p>I just need help proving the inductive step. I was trying to use proof by contradiction by saying that $4^n + 1 = 4m - 1$ for some integer $m$ and then disproving it. But I'd rather use proof by induction to solve this question. Thanks so much.</p>
|
2'5 9'2
| 11,123 |
<p>I think that if you need to use induction, then instead of proving "$4^n+1$ is not divisible by $3$", you should prove the more specific statement "$4^n+1$ has remainder $2$ when divided by $3$".</p>
<p>$$4^n+1=3k+2\implies4^n=3k+1\implies4^{n+1}=12k+4$$
$$\implies4^{n+1}+1=12k+5\implies4^{n+1}+1=3(4k+1)+2$$</p>
|
1,910,085 |
<p>For all integers $n \ge 0$, prove that the value $4^n + 1$ is not divisible by 3.</p>
<p>I need to use Proof by Induction to solve this problem. The base case is obviously 0, so I solved $4^0 + 1 = 2$. 2 is not divisible by 3.</p>
<p>I just need help proving the inductive step. I was trying to use proof by contradiction by saying that $4^n + 1 = 4m - 1$ for some integer $m$ and then disproving it. But I'd rather use proof by induction to solve this question. Thanks so much.</p>
|
Wojowu
| 127,263 |
<p>Hint: $4^n-1=4^n-1^n$ is divisible by $3$ (why?).</p>
|
3,928,429 |
<p>As titled, I was considering the minimization problem where <span class="math-container">$y(x)$</span> has two endpoints fixed. That is, minimizing <span class="math-container">$$\int_a^b L(x,y(x),y'(x)) \, dx$$</span></p>
<p>where <span class="math-container">$$\ y(a)=m, y(b)=n $$</span>for all <span class="math-container">$y(x)$</span>.</p>
<p>If the Euler-Lagrange expression vanishes identically for all functions <span class="math-container">$y(x)$</span>, I think the original integral <span class="math-container">$$\int_a^b L(x,y(x),y'(x)) \, dx$$</span> is constant over all <span class="math-container">$y(x)$</span>.</p>
<p>Which means if
<span class="math-container">$$\frac {d}{dx} \frac {\partial L}{\partial y'} = \frac {\partial L}{\partial y}$$</span> for all <span class="math-container">$y(x)$</span>, then
<span class="math-container">$$\int_a^b L(x,y(x),y'(x)) \, dx = C$$</span> for some constant <span class="math-container">$C$</span>.</p>
<p>But I don't know how to prove it. I start by using integration by parts, writing <span class="math-container">$$\int_a^b L(x,y(x),y'(x)) \, dx$$</span> as <span class="math-container">$$\left. x\,L(x,y(x),y'(x))\right|_a^b - \int_a^b x\,\frac {d}{dx}L(x,y(x),y'(x)) \, dx$$</span>
But I don't know how to continue from here, or maybe I shouldn't use IBP here.</p>
<p>Any help will be appreciated.</p>
|
Mozibur Ullah
| 26,254 |
<p>The simplest reference on this is a discussion in section 2 of Landau & Lifschitz's <em>Classical Mechanics</em>, the first volume of their Course of Theoretical Physics. This uses the same setup as you do: one spatial and one temporal dimension. They show that a Lagrangian that differs from another Lagrangian by a total temporal derivative, that is:</p>
<blockquote>
<p><span class="math-container">$L'(q,\dot{q},t) = L(q,\dot{q},t) + (d/dt)f(q,t)$</span></p>
</blockquote>
<p>gives actions</p>
<blockquote>
<p><span class="math-container">$S' := \int_T L' dt = \int_T L dt + f|\partial T = S + f|\partial T$</span></p>
</blockquote>
<p>where <span class="math-container">$S$</span> is the action of <span class="math-container">$L$</span>, <span class="math-container">$S:= \int_T L\,dt$</span>, and where by <span class="math-container">$f|\partial T$</span> I mean the evaluation of <span class="math-container">$f$</span> at the boundary <span class="math-container">$\partial T$</span> of the temporal interval <span class="math-container">$T$</span>. As the latter term is constant, we see that the variational conditions <span class="math-container">$\delta S'=0$</span> and <span class="math-container">$\delta S=0$</span> give the same Euler-Lagrange equations of motion.</p>
<p>This has a generalisation to higher spatial dimensions where the extra term is then recognised as a divergence. This relates to the inverse variational problem first posed by Helmholtz over a hundred years ago where he asked which systems of differential equations come from variational principles. The mathematical expression of this is the variational bicomplex; or more naively:</p>
<blockquote>
<p>Vector Fields <span class="math-container">$\rightarrow$</span> Lagrangians <span class="math-container">$\rightarrow$</span> Differential Equations <span class="math-container">$\rightarrow$</span> Differential Operators</p>
</blockquote>
<p>Here, the first arrow is taking divergences, <span class="math-container">$Div$</span>, and the second, the Euler-Lagrange operator, <span class="math-container">$EL$</span>, whilst the third is the Helmholtz operator, <span class="math-container">$H$</span>. It's also what is called an exact sequence or complex, and so the composition of two arrows vanishes, that is <span class="math-container">$EL•Div=0$</span> and <span class="math-container">$H•EL=0$</span>. The proper expression of this means we don't just have a complex, but a bicomplex.</p>
<p>In a sense, whilst the ordinary calculus generalises to the differential geometry of <em>manifolds</em>, aka tangent bundles, and the operators of vector analysis both generalise and assemble themselves into the de Rham complex; the variational calculus generalises to the differential geometry of <em>bundles</em>, aka jet bundles, and the variational operators both generalise and assemble themselves not into a complex, but a bicomplex, reflecting the fact that a bundle, in its most general description, is a surjective morphism between two manifolds - and which in a sense, we can call the de Rham bicomplex of a bundle.</p>
|
4,173,308 |
<p><a href="https://i.stack.imgur.com/G55Op.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G55Op.png" alt="triangle of area 0.5 on a lattice grid" /></a></p>
<p>I'm trying to find the area of this triangle using the <span class="math-container">$\frac{1}{2} \times b \times h$</span> formula, but for some reason, it isn't quite working out.</p>
<h2>My working:</h2>
<p><a href="https://i.stack.imgur.com/VPlsA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VPlsA.png" alt="working" /></a></p>
<p><span class="math-container">$$\alpha = \sqrt{1^{2}+1^{2}} = \sqrt{2}$$</span>
<span class="math-container">$$\beta = \sqrt{2^{2} + 2^{2}} = 2\sqrt{2}$$</span></p>
<p><span class="math-container">$$\frac{1}{2} \times \sqrt{2} \times 2\sqrt{2} = 2?$$</span></p>
<p>I know the area is supposed to be 0.5 units^2, so what am I doing wrong here?</p>
|
Laxmi Narayan Bhandari
| 931,957 |
<p>Here is an approach using the Beta function.</p>
<p>We start with the substitution <span class="math-container">$ x \mapsto zx $</span>.</p>
<p><span class="math-container">$$\begin{align} \int_0^z (xz-x^2)^{1/2}\, \mathrm dx &= z^2 \int_0^1 (x-x^2)^{1/2}\, \mathrm dx\\ &= z^2 \int_0^1 x^{1/2}(1-x)^{1/2}\, \mathrm dx \\ &= z^2 \mathrm{B}(3/2,3/2) \\ &= \dfrac{z^2\Gamma(3/2) \Gamma(3/2)}{\Gamma(3)} \\ \int_0^z (xz-x^2)^{1/2}\, \mathrm dx&= \dfrac{\pi z^2}{8} \end{align}$$</span></p>
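A numerical cross-check of the closed form (a sketch; the step count is arbitrary). Note that the integrand <span class="math-container">$\sqrt{xz-x^2}=\sqrt{(z/2)^2-(x-z/2)^2}$</span> traces a semicircle of radius <span class="math-container">$z/2$</span>, whose area <span class="math-container">$\frac{1}{2}\pi(z/2)^2=\pi z^2/8$</span> agrees with the Beta-function result.

```python
import math

def integral(z, steps=100000):
    """Midpoint Riemann sum of sqrt(x*z - x**2) over [0, z]."""
    h = z / steps
    return sum(math.sqrt(m * z - m * m) * h
               for m in ((i + 0.5) * h for i in range(steps)))

# Compare against pi * z**2 / 8 for a few arbitrary z values.
for z in (1.0, 2.0, 3.5):
    assert abs(integral(z) - math.pi * z * z / 8) < 1e-4
```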
|
77,092 |
<p>I am wondering if anyone could please post the solution to the following differential equation for the function $f(x)$:</p>
<p>$$\frac{f}{f^\prime}=\frac{f^\prime}{f^{\prime\prime}}$$</p>
<p>Thanks!</p>
|
Agustí Roig
| 664 |
<p>$$
\frac{f}{f'} = \frac{f'}{f''} \qquad \Longleftrightarrow \qquad \frac{f'}{f} = \frac{f''}{f'} \qquad \Longleftrightarrow \qquad \int \frac{f'}{f} = \int\frac{f''}{f'} + C \qquad \Longleftrightarrow \qquad \ln f = \ln f' + C
$$</p>
<p>Hence, taking exponentials on both sides,</p>
<p>$$
f = K f' \ ,
$$</p>
<p>where $K = e^C$. Renaming $K$ as $\frac{1}{K}$, this is the same as</p>
<p>$$
\frac{f'}{f} = K \qquad \Longleftrightarrow \qquad \int \frac{f'}{f} = \int K + C \qquad \Longleftrightarrow \qquad \ln f = Kx + C \qquad \Longleftrightarrow \qquad f(x) = A e^{Kx}
$$</p>
<p>where $A = e^C$.</p>
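A quick numerical sanity check of the solution (a sketch with arbitrary sample constants $A$ and $K$): for $f=Ae^{Kx}$ we have $f'=Kf$ and $f''=K^2f$, so both ratios equal $1/K$ at every $x$.

```python
import math

# Sanity check (arbitrary sample constants): for f = A*exp(K*x),
# f' = K*f and f'' = K^2*f, so f/f' = f'/f'' = 1/K everywhere.
A, K = 2.5, 0.7
for x in (-1.0, 0.0, 2.0):
    f = A * math.exp(K * x)
    fp, fpp = K * f, K * K * f
    assert abs(f / fp - fp / fpp) < 1e-12
    assert abs(f / fp - 1 / K) < 1e-12
```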
|
1,207,247 |
<blockquote>
<p>Prove that if $\gcd(a,b)=1$, then $\gcd(a^2+b^2, a^2b^2)=1$.</p>
</blockquote>
<p>My attempt:</p>
<p>If $a$ is prime to $b$, then $\gcd(a,b)=1$. Assume that $a^2+b^2$ and $a^2b^2$ are not prime to each other. Let $d=\gcd(a^2+b^2, a^2b^2)$. Then $d\mid a^2+b^2$ and $d\mid a^2b^2$. We shall have to prove that $d=1$.</p>
<p>How to prove that $d=1$?</p>
|
Prahlad Vaidyanathan
| 89,789 |
<p>If $p$ is a prime such that $p\mid (a^2+b^2)$ and $p\mid a^2b^2$, then either $p\mid a$ or $p\mid b$. Assume that $p\mid a$, then $p\mid a^2$. But then $p\mid a^2+b^2$ implies that $p\mid b^2$, which means that $p\mid b$. This would imply that $(a,b) \neq 1$ - a contradiction.</p>
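The claim can also be spot-checked numerically (a sketch; the range 40 is arbitrary):

```python
from math import gcd

# Spot-check: for every coprime pair (a, b) with 1 <= a, b < 40,
# gcd(a^2 + b^2, a^2 * b^2) is indeed 1.
for a in range(1, 40):
    for b in range(1, 40):
        if gcd(a, b) == 1:
            assert gcd(a * a + b * b, a * a * b * b) == 1
```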
|
129,072 |
<p>We know that for a representation $V$ of a Lie algebra or a quantum group, we can define character of $V$ as $ch(V)=\sum_{\mu} dim(V_{\mu})e^{\mu}$, where $V_{\mu}$ is the weight space of $V$ with weight $\mu$. I didn't find the corresponding definition for representations of finite dimensional algebras (for example, associative algebras). Is there some kind of character theory for representations of finite dimensional algebras? Thank you very much. </p>
|
Aaron
| 28,841 |
<p>In ordinary group representation theory, a (one-dimensional) character is a homomorphism $\chi:G\to \mathbb{C}^\times$, and the character associated to a representation $\rho: G\to GL(V)$ is $\chi_\rho: g\mapsto \operatorname{trace}(\rho(g))$. The character of a Lie algebra representation arising from differentiating a Lie group representation is the same thing, in the sense that the character of the Lie algebra representation is the character of the corresponding Lie group representation.</p>
<p>However, things are complicated once we switch the field from $\mathbb{C}$ to something else (in particular, to a field of positive characteristic). For example, the modular character and the Brauer character are two different definitions of characters for the group algebra $kG$ where char$(k)>0$. When we say 'things are complicated', it means many useful properties of ordinary characters are lost when we just extend the concept to other fields (that's why we have Brauer characters in place of modular characters in positive characteristic for group algebras). A 'good' character theory needs some sort of nice behaviour of the corresponding algebra, e.g. the division of a group into conjugacy classes and the semisimplicity of group algebras over $\mathbb{C}$, or the notion of weight spaces for Lie algebras.</p>
<p>Also, since the mid-20th century, after the work of, say, Dade and Green, one sees that the module-theoretic approach to representations is more convenient than the character-theoretic one. That's the reason why you don't see characters being mentioned for general associative algebras. On the other hand, a representation of an algebra $A$ is a map $\rho:A\to End(V)$, so in theory you can also define characters of algebras.</p>
<p>Nevertheless, character theory is not completely forgotten; one can still do character theory (although not as well-behaved as the group/Lie algebra version) for some special algebras to get some insight. The example I know of is the Brauer algebra, which is built by 'stacking' layers of symmetric group algebras and exhibits a nice basis which uses a large part of the symmetric group elements; hence one can use characters to see some elementary properties, but probably not very much can be said in general.</p>
|
2,698,903 |
<p>So this has caused some confusion. In one exercise one was asked to prove that</p>
<p>$$\lim_{k \rightarrow \infty} \int_A \cos(kt)dt=0$$</p>
<p>where $A \subset [-\pi, \pi]$ is a measurable set.</p>
<p>My initial idea was to take any $a,b \in A$ and then show that:</p>
<p>$$\lim_{k \rightarrow \infty} \int_a^b \cos(kt)\,dt= \lim_{k \rightarrow \infty} \frac{\sin(bk)-\sin(ak)}{k} =0$$</p>
<p>But my instructor said that this wouldn't work out because the lengths of the partitions on $A$ would make my computation impossible. I'm not sure about this.</p>
|
Community
| -1 |
<p>Trying to use an interval as a replacement for $A$ is not a good approach. $A$ could be a very complicated set, after all; it could be a countable union of intervals, or a set like the irrationals, or some totally disconnected Cantor set with positive measure. There is a general principle that measurable sets are <em>like</em> countable unions of intervals for many purposes, but this approach goes too far.</p>
<p>So there is <em>very</em> little relationship between the interval $[a, b]$ and the set $A$, even with $a, b \in A$.</p>
<hr>
<p>For an approach that you might find more successful: The integral you've written is the real part of the $k$-th Fourier coefficient of $\chi_A$ (the indicator function of $A$). What does Plancherel tell you?</p>
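To see the decay concretely in the easy case where $A$ is a finite union of intervals (a sketch only; as explained above, a general measurable $A$ needs more than this, e.g. Plancherel): the closed form $\sum_i(\sin(kb_i)-\sin(ka_i))/k$ is bounded by $2/k$ per interval.

```python
import math

# A is a sample finite union of intervals inside [-pi, pi] (made up
# for illustration). Each interval contributes at most 2/k to the
# integral of cos(kt), so the total tends to 0 as k grows.
A = [(-2.0, -0.5), (0.3, 1.1), (2.0, 3.0)]

def integral_cos(k):
    return sum((math.sin(k * b) - math.sin(k * a)) / k for a, b in A)

assert abs(integral_cos(10)) <= 2 * len(A) / 10
assert abs(integral_cos(1000)) < 0.01
```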
|
1,668,487 |
<p>$$2^{-1} \equiv 6\mod{11}$$</p>
<p>Sorry for the very strange question. I want to understand which algorithm is used to compute this expression. Similarly, I am interested in why the following expression equals two:</p>
<p>$$6^{-1} \equiv 2\mod11$$</p>
|
JMP
| 210,189 |
<p>It is part of the collection of integer pairs such that:</p>
<p>$$xy\equiv1\mod p$$</p>
<p>in this case with $p=11$.</p>
<p>For $p=11$, they are $1\cdot1,2\cdot6,3\cdot4,5\cdot9,7\cdot8$ and $10\cdot10$</p>
<p>Except for $1$ and $p-1$, which are their own inverses, each $x$ is paired with a unique $y\neq x$.</p>
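In Python 3.8+ these pairs can be computed directly with three-argument `pow` (a sketch):

```python
# pow(x, -1, p) computes the modular inverse of x mod p (Python 3.8+).
p = 11
assert pow(2, -1, p) == 6 and pow(6, -1, p) == 2

# Recover the full pairing listed above.
pairs = {(min(x, pow(x, -1, p)), max(x, pow(x, -1, p))) for x in range(1, p)}
assert pairs == {(1, 1), (2, 6), (3, 4), (5, 9), (7, 8), (10, 10)}

# Only 1 and p-1 are self-paired, since x*x = 1 (mod p) forces x = 1 or p-1.
assert [x for x in range(1, p) if pow(x, -1, p) == x] == [1, 10]
```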
|
847,672 |
<p>I repeat the Peano Axioms:</p>
<ol>
<li><p>Zero is a number.</p></li>
<li><p>If a is a number, the successor of a is a number.</p></li>
<li><p>Zero is not the successor of any number.</p></li>
<li><p>Two numbers of which the successors are equal are themselves equal.</p></li>
<li><p>If a set S of numbers contains zero and also the successor of every number in S, then every number is in S.</p></li>
</ol>
<p>Suppose we have two isomorphic "copies" of the natural numbers, $\mathbb{N}':=\{0',1',2',...\}$ and $\mathbb{N}'':=\{0'',1'',2'',...\}$.
Then the set $NUMBERS:=\mathbb{N}'\cup \mathbb{N}''$, with "Zero"$:=0'$ and the "natural" successor for each element in either of the two sets, seems to satisfy the axioms.</p>
<p>Yes, P5 is very strange now, because it says that when I start with a set which I know to contain at least $0'$ and every successor of the numbers in it, that set automatically contains $0''$, which is not a successor of any number.</p>
<p>If this way of reasoning is allowed, we could also use a number of copies of $\mathbb{N}$ indexed by a continuous index, so there would be TWO NON-ISOMORPHIC Peano sets.</p>
<p>Because this sounds very strange to me, it's possible that there is a problem in my argument. What do you think?</p>
|
Asaf Karagila
| 622 |
<p>Note that from the fifth axiom it follows that if $x\neq 0$ then $x=S(y)$ for some $y$. </p>
<p>Therefore if $x,y$ are both $0$'s (and are distinct) then at least one of them is a successor. Which means that one of them is not a $0$.</p>
<hr>
<p>I think that the source of the confusion here is the fact that $0$ is seen as just any number. The truth is that $0$ is a constant in the formal language of arithmetic. And in a given structure there can only be one interpretation of a constant. Not two or more.</p>
|
2,442,705 |
<p>Like the title says, I'm wondering if there's a non-brute-force way to determine $x$ such that
$$45\le x<200,$$
$$x\bmod5\equiv0,$$
$$x\bmod8\equiv1,$$
$$x\bmod12\equiv1$$
I know I can simply enumerate all integers in $[45, 200)$ to get the answer (145), but I'm wondering if there's a more elegant solution.</p>
|
Parcly Taxel
| 357,390 |
<p>Just use the Chinese remainder theorem:
$$x\equiv1\bmod12\implies x\equiv1\bmod3$$
$$x\equiv1\bmod8\land x\equiv1\bmod3\implies x\equiv1\bmod24$$
$$x\equiv1\bmod24\land x\equiv0\bmod5\implies x\equiv 25\bmod120$$
Then add 120 repeatedly to 25 until getting a number within the desired range, yielding $x=145$.</p>
|
3,911,297 |
<p>Chapter 12 - Problem 26)</p>
<blockquote>
<p>Suppose that <span class="math-container">$f(x) > 0$</span> for all <span class="math-container">$x$</span>, and that <span class="math-container">$f$</span> is decreasing. Prove that there is a <em>continuous</em> decreasing function <span class="math-container">$g$</span> such that <span class="math-container">$0 < g(x) \le f(x)$</span> for all <span class="math-container">$x$</span>.</p>
</blockquote>
<p>So this question has already been asked and "solved" on MSE 9 years ago <a href="https://math.stackexchange.com/questions/30777/spivaks-calculus-chapter-12-problem-26">here</a>, but the accepted answer isn't very detailed, and I think it is in fact flawed (or I've just misunderstood it). I tried commenting to open up the question again, but it seems pretty dead now, hence why I'm making this follow up.</p>
<p>The answer says to <em>"make <span class="math-container">$g$</span> piecewise linear with <span class="math-container">$g(n) = f(n+1)$</span>"</em>. Can someone explain what this means exactly? I will write my thoughts below, but it's a lot, so feel free to skip.</p>
<p>My thoughts: Notice that if we try to simply let <span class="math-container">$g(x) = f(x+1)$</span>, then it works perfectly except for the fact that <span class="math-container">$g$</span> may not be continuous, because <span class="math-container">$f$</span> need not be continuous (otherwise we could just let <span class="math-container">$g(x) = f(x)$</span> in that case!). So if we could just modify this <span class="math-container">$g$</span> to make it continuous somehow then we're done.</p>
<p>Fortunately, <span class="math-container">$f$</span> is decreasing on <span class="math-container">$\mathbb{R}$</span>, which means the left and right limits do exist, however they might disagree. This means <span class="math-container">$f$</span> can only have jump discontinuities that jump downwards.</p>
<p>So what if we took all the points in <span class="math-container">$\mathbb{R}$</span> where <span class="math-container">$f$</span> has a jump discontinuity, and just joined lines between them? (I think this is what the answer meant by piecewise linear function?) This would guarantee that <span class="math-container">$g$</span> is continuous, however, this approach has some fixable flaws.</p>
<p>First flaw, for starters, it isn't necessarily true that this <span class="math-container">$g$</span> would be always smaller than <span class="math-container">$f$</span>! For example, consider this picture, where <span class="math-container">$f$</span> is the red function, and <span class="math-container">$g$</span> is the black function:</p>
<p><a href="https://i.stack.imgur.com/WrfB5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WrfB5.png" alt="" /></a></p>
<p>Sure <span class="math-container">$g$</span> is continuous now, but we've lost the <span class="math-container">$g(x) \leq f(x)$</span> property! We can fix this easily by letting <span class="math-container">$g$</span> be the smaller of the piecewise linear function and <span class="math-container">$f$</span>. Then the picture becomes like this:</p>
<p><a href="https://i.stack.imgur.com/2if25.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2if25.png" alt="" /></a></p>
<p>To make this a bit more rigorous, first we need the set of all the points where <span class="math-container">$f$</span> is discontinuous:</p>
<p><span class="math-container">$S = \big\{x: \lim_{y \rightarrow x^-} (f(y)) \neq \lim_{y \rightarrow x^+} (f(y)) \big\}$</span></p>
<p>Then let <span class="math-container">$l(x)$</span> be the piecewise linear function joining all points <span class="math-container">$\big(x, \lim_{y \rightarrow x^+} [f(y)] \big)$</span>, where <span class="math-container">$x \in S$</span>.</p>
<p>Then finally let <span class="math-container">$g(x) = \text{Min}\big(f(x), l(x)\big)$</span>.</p>
<p>Now this would work fine, so long as <span class="math-container">$l(x)$</span> is well defined. But must it necessarily be so? I'm not sure, and this is where I'm stuck. For example, what if the set <span class="math-container">$S$</span> contains not isolated points, but an entire interval of points? For example, what if <span class="math-container">$f$</span> is a function that has a jump discontinuity at every point in <span class="math-container">$[0,1]$</span>? Then to construct <span class="math-container">$l(x)$</span>, we'd need to join all these jump discontinuity points in <span class="math-container">$[0,1]$</span>, of which it isn't obvious at all we can do that.</p>
<p>Now you might say that an interval of jump discontinuities is impossible, and you'd be right. However the proof of that comes much much later in the book and is certainly beyond the knowledge of this chapter. But more importantly, even if <span class="math-container">$f$</span> doesn't have an interval of jump discontinuities, there are other ways <span class="math-container">$l(x)$</span> can be questionable.</p>
<p>Consider this monstrous example: <span class="math-container">$f(x) =
\begin{cases}
1-\frac{x}{2^{\lfloor 1 - \log_2(|x|)-1 \rfloor}} & :x \leq \frac{1}{2} \\
\frac{6}{6x+5} & :x > \frac{1}{2}
\end{cases} \Biggr\}$</span></p>
<p>Looks something like this (click <a href="https://www.desmos.com/calculator/j13mpqs9io" rel="nofollow noreferrer">here</a> to view in Desmos): <a href="https://i.stack.imgur.com/3dPdp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3dPdp.png" alt="" /></a></p>
<p>As it turns out, this <span class="math-container">$f$</span> satisfies the question's premises, but it also has the cool property that it contains an infinite number of jump discontinuities in any neighbourhood around 0! As such, in order to construct the piecewise linear function for it, you'd have to join lines between an infinite number of points and still have a function, which might be possible? But for sure it isn't obvious that it is so...</p>
<p>Those are my thoughts on the problem. So Q1) Is my approach so far in the right direction? Or could it be that I've missed some super simple trick that will make the problem trivial and everything I've said above redundant? Q2) If I am in the right direction, how can I justify taking a linear piecewise function for an infinite number of points in a given interval?</p>
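For what it's worth, the hinted construction <span class="math-container">$g(n)=f(n+1)$</span> with linear interpolation can be sketched numerically (the sample <span class="math-container">$f$</span> below is a hypothetical positive decreasing function with jumps, not from the question): since <span class="math-container">$f$</span> is decreasing, on <span class="math-container">$[n,n+1]$</span> the interpolant satisfies <span class="math-container">$g(x)\le f(n+1)\le f(x)$</span> and stays positive.

```python
import math

def f(x):
    """A sample positive decreasing function with jumps (hypothetical)."""
    return 2.0 ** (-math.floor(x)) + 1.0 / (1.0 + math.exp(x))

def g(x):
    """Piecewise linear with g(n) = f(n+1) at integers n."""
    n = math.floor(x)
    t = x - n
    return (1 - t) * f(n + 1) + t * f(n + 2)

# Check 0 < g(x) <= f(x) on a grid over [-1, 10].
for i in range(-50, 501):
    x = i / 50.0
    assert 0 < g(x) <= f(x)
```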
|
Calvin Lin
| 54,563 |
<p>Hint: <span class="math-container">$\gcd(7, 12) = 1$</span>, so done.</p>
|
3,911,297 |
<p>Chapter 12 - Problem 26)</p>
<blockquote>
<p>Suppose that <span class="math-container">$f(x) > 0$</span> for all <span class="math-container">$x$</span>, and that <span class="math-container">$f$</span> is decreasing. Prove that there is a <em>continuous</em> decreasing function <span class="math-container">$g$</span> such that <span class="math-container">$0 < g(x) \le f(x)$</span> for all <span class="math-container">$x$</span>.</p>
</blockquote>
<p>So this question has already been asked and "solved" on MSE 9 years ago <a href="https://math.stackexchange.com/questions/30777/spivaks-calculus-chapter-12-problem-26">here</a>, but the accepted answer isn't very detailed, and I think it is in fact flawed (or I've just misunderstood it). I tried commenting to open up the question again, but it seems pretty dead now, hence why I'm making this follow up.</p>
<p>The answer says to <em>"make <span class="math-container">$g$</span> piecewise linear with <span class="math-container">$g(n) = f(n+1)$</span>"</em>. Can someone explain what this means exactly? I will write my thoughts below, but it's a lot, so feel free to skip.</p>
<p>My thoughts: Notice that if we try to simply let <span class="math-container">$g(x) = f(x+1)$</span>, then it works perfectly except for the fact that <span class="math-container">$g$</span> may not be continuous, because <span class="math-container">$f$</span> need not be continuous (otherwise we could just let <span class="math-container">$g(x) = f(x)$</span> in that case!). So if we could just modify this <span class="math-container">$g$</span> to make it continuous somehow then we're done.</p>
<p>Fortunately, <span class="math-container">$f$</span> is decreasing on <span class="math-container">$\mathbb{R}$</span>, which means the left and right limits do exist, however they might disagree. This means <span class="math-container">$f$</span> can only have jump discontinuities that jump downwards.</p>
<p>So what if we took all the points in <span class="math-container">$\mathbb{R}$</span> where <span class="math-container">$f$</span> has a jump discontinuity, and just joined lines between them? (I think this is what the answer meant by piecewise linear function?) This would guarantee that <span class="math-container">$g$</span> is continuous, however, this approach has some fixable flaws.</p>
<p>First flaw, for starters, it isn't necessarily true that this <span class="math-container">$g$</span> would be always smaller than <span class="math-container">$f$</span>! For example, consider this picture, where <span class="math-container">$f$</span> is the red function, and <span class="math-container">$g$</span> is the black function:</p>
<p><a href="https://i.stack.imgur.com/WrfB5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WrfB5.png" alt="" /></a></p>
<p>Sure <span class="math-container">$g$</span> is continuous now, but we've lost the <span class="math-container">$g(x) \leq f(x)$</span> property! We can fix this easily by letting <span class="math-container">$g$</span> be the smaller of the piecewise linear function and <span class="math-container">$f$</span>. Then the picture becomes like this:</p>
<p><a href="https://i.stack.imgur.com/2if25.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2if25.png" alt="" /></a></p>
<p>To make this a bit more rigorous, first we need the set of all the points where <span class="math-container">$f$</span> is discontinuous:</p>
<p><span class="math-container">$S = \big\{x: \lim_{y \rightarrow x^-} (f(y)) \neq \lim_{y \rightarrow x^+} (f(y)) \big\}$</span></p>
<p>Then let <span class="math-container">$l(x)$</span> be the piecewise linear function joining all points <span class="math-container">$\big(x, \lim_{y \rightarrow x^+} [f(y)] \big)$</span>, where <span class="math-container">$x \in S$</span>.</p>
<p>Then finally let <span class="math-container">$g(x) = \text{Min}\big(f(x), l(x)\big)$</span>.</p>
<p>Now this would work fine, so long as <span class="math-container">$l(x)$</span> is well defined. But must it necessarily be so? I'm not sure, and this is where I'm stuck. For example, what if the set <span class="math-container">$S$</span> contains not isolated points, but an entire interval of points? For example, what if <span class="math-container">$f$</span> is a function that has a jump discontinuity at every point in <span class="math-container">$[0,1]$</span>? Then to construct <span class="math-container">$l(x)$</span>, we'd need to join all these jump discontinuity points in <span class="math-container">$[0,1]$</span>, of which it isn't obvious at all we can do that.</p>
<p>Now you might say that an interval of jump discontinuities is impossible, and you'd be right. However the proof of that comes much much later in the book and is certainly beyond the knowledge of this chapter. But more importantly, even if <span class="math-container">$f$</span> doesn't have an interval of jump discontinuities, there are other ways <span class="math-container">$l(x)$</span> can be questionable.</p>
<p>Consider this monstrous example: <span class="math-container">$f(x) =
\begin{cases}
1-\frac{x}{2^{\lfloor 1 - \log_2(|x|)-1 \rfloor}} & :x \leq \frac{1}{2} \\
\frac{6}{6x+5} & :x > \frac{1}{2}
\end{cases} \Biggr\}$</span></p>
<p>Looks something like this (click <a href="https://www.desmos.com/calculator/j13mpqs9io" rel="nofollow noreferrer">here</a> to view in Desmos): <a href="https://i.stack.imgur.com/3dPdp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3dPdp.png" alt="" /></a></p>
<p>As it turns out, this <span class="math-container">$f$</span> satisfies the question's premises, but it also has the cool property that it contains an infinite number of jump discontinuities in any neighbourhood around 0! As such, in order to construct the piecewise linear function for it, you'd have to join lines between an infinite number of points, and still have a function, which might be possible? But for sure it isn't obvious that it is so...</p>
<p>Those are my thoughts on the problem. So Q1) Is my approach so far in the right direction? Or could it be that I've missed some super simple trick that will make the problem trivial and everything I've said above redundant? Q2) If I am in the right direction, how can I justify taking a linear piecewise function for an infinite number of points in a given interval?</p>
|
lone student
| 460,967 |
<p>Here is general way. You need to solve this linear equation</p>
<blockquote>
<p><span class="math-container">$$\dfrac {a^{12x}}{a^{7y}}=a \Longrightarrow 12x-7y=1$$</span>
where <span class="math-container">$x,y\in \mathbb {Z^+}$</span></p>
</blockquote>
<p>Now, since <span class="math-container">$\gcd (12,7) =1$</span>, a solution always exists. Thus, it is not even necessary to solve this equation with the Euclidean algorithm.</p>
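As an illustrative aside (not part of the original answer): a tiny search confirms the smallest positive solution of $12x-7y=1$ and the resulting exponent identity for a sample base; the search bound and the base $a=3$ are arbitrary choices of mine.

```python
# Find the smallest positive integers x, y with 12*x - 7*y = 1,
# so that a**(12*x) / a**(7*y) == a for any base a != 0.
def smallest_solution(bound=100):
    for x in range(1, bound):
        # 12x - 7y = 1  =>  y = (12x - 1)/7 must be a positive integer
        y, r = divmod(12 * x - 1, 7)
        if r == 0 and y > 0:
            return x, y

x, y = smallest_solution()

# Sanity-check the original identity with the concrete base a = 3:
a = 3
check = a ** (12 * x) // a ** (7 * y)  # exact integer division
```

The first hit is $(x,y)=(3,5)$, since $12\cdot3-7\cdot5=36-35=1$.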
|
1,139,723 |
<p>Given the differential equation $\frac{dy}{dt}+a(t)y=f(t)$ </p>
<p>with a(t) and f(t) continuous for:</p>
<p>$-\infty<t<\infty$</p>
<p>$a(t) \ge c>0$</p>
<p>$\lim_{t\to\infty}f(t)=0$</p>
<p>Show that every solution tends to 0 as t approaches infinity.</p>
<p>$$\frac{dy}{dt}+a(t)y=f(t)$$</p>
<p>$$μ(t)=e^{\int a(t)dt}$$
$$μ(t)=e^{\frac{1}{2}a^2(t)}$$</p>
<p>Multiplying both sides of the equation by $μ(t)$:</p>
<p>$$\frac{dy}{dt}+a(t)y=f(t)$$
$$μ(t)[\frac{dy}{dt}+a(t)y]=μ(t)[f(t)]$$
$$e^{\frac{1}{2}a^2(t)}[\frac{dy}{dt}+a(t)y]=e^{\frac{1}{2}a^2(t)}[f(t)]$$
$$\frac{d}{dt}e^{\frac{1}{2}a^2(t)}y=e^{\frac{1}{2}a^2(t)}[f(t)]$$
$$e^{\frac{1}{2}a^2(t)}y=\int e^{\frac{1}{2}a^2(t)}[f(t)]dt$$ </p>
<p>Did I make a mistake in the integrating factor step? I am not sure how to proceed..</p>
|
abel
| 9,252 |
<p>here is a way to do this. we will set the lower limit of integration arbitrarily to $t_0;$ you can change it to anything more convenient if you like. let
$$A = \exp\left(\int_{t_0}^t a(s)\,ds\right), \qquad \dfrac{1}{A}= \exp\left(-\int_{t_0}^t a(s)\,ds\right).$$ then $A$ is the unique solution of $$ \frac{dA}{dt} - aA = 0, \, A(t_0) = 1.$$ $A$ is called the integrating factor because if you multiply $\frac{dy}{dt} + ay = f$ by $A$ you get $$\dfrac{d}{dt}(Ay) = Af$$ which can be integrated to give a particular solution
$$y_P = \dfrac{1}{A(t)}\left(\int_{t_0}^t A(s)f(s)\,ds\right) = \int_{t_0}^t \exp\left(-\int_s^t a(u)\,du\right)f(s)\,ds$$ </p>
<p>we will show that $y_P(t) \to 0$ as $t \to \infty.$</p>
<p>$$ |y_P| \le \int_{t_0}^t exp\left(-\int_s^t a(u)\,du\right)|f(s)|\,ds
\le \int_{t_0}^t exp\left(-\int_s^t c\,du\right)|f(s)| \,ds$$
picking $t_0$ large enough to make $|f|$ small so that the limit of $y_P$ is zero as $t \to \infty.$</p>
<p>there must be a cleaner way to show that the limit is zero than the one i have here. i will think about it.</p>
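not a proof, but a quick numerical illustration of the claim (the choices $a(t)=1+\tfrac12\sin t \ge \tfrac12 = c$, $f(t)=e^{-t}$, the initial value, and the step size below are mine, not from the problem): integrating a sample equation shows the solution dying out.

```python
import math

# Illustrative data satisfying the hypotheses:
# a(t) = 1 + 0.5*sin(t) >= 0.5 = c > 0, and f(t) -> 0 as t -> infinity.
def a(t):
    return 1.0 + 0.5 * math.sin(t)

def f(t):
    return math.exp(-t)

# Forward Euler on y' = f(t) - a(t)*y, starting far from zero.
t, y, dt = 0.0, 5.0, 0.001
while t < 30.0:
    y += dt * (f(t) - a(t) * y)
    t += dt
# By t = 30 the solution is essentially zero.
```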
|
3,100,420 |
<p>So my previous question states that: We are given the sequence <span class="math-container">$k_{n}= 6^{({2}^n)} + 1$</span>. We must prove that the elements of this sequence are pairwise co-prime, i.e. prove that if m ≠ n then <span class="math-container">$(k_{m},k_{n}) = 1$</span>.</p>
<p>I previously proved that <span class="math-container">$k_n | k_{n+1} - 2$</span></p>
<p>Then using answers I got on my previous question I was able to show that if <span class="math-container">$p | k_n$</span> then it follows that <span class="math-container">$p | k_{n+1} -2$</span>. Using this I could say that if <span class="math-container">$p|k_{n+1}$</span> then <span class="math-container">$p|(k_{n+1} - (k_{n+1} -2)) = 2$</span>. This means the only prime that could divide <span class="math-container">$k_n$</span> and <span class="math-container">$k_{n+1}$</span> is <span class="math-container">$2$</span>, however all the terms in the sequence are odd so therefore <span class="math-container">$gcd(k_n,k_{n+1})=1$</span>.</p>
<p>This therefore proves that every pair of numbers in the sequence are coprime however I'm struggling to extend it to show every element is coprime so I need to show <span class="math-container">$gcd(k_m,k_n)=1$</span>. </p>
<p>All help is very much appreciated as always cheers. </p>
|
Don Fanucci
| 329,831 |
<p>I don't see why the same approach doesn't work. If <span class="math-container">$m>n$</span></p>
<p><span class="math-container">$$k_m-2=6^{2^m}-1=\big(6^{2^n}\big)^{2^{m-n}}-1\equiv(-1)^{2^{m-n}}-1\equiv1-1\equiv0\pmod{k_n}$$</span></p>
<p>So, indeed <span class="math-container">$k_n\mid k_m-2$</span> ...</p>
<p><strong>Edit</strong>: Further elaboration, as <span class="math-container">$k_n=6^{2^n}+1$</span> we must have</p>
<p><span class="math-container">$$6^{2^n}+1\equiv 0\pmod{k_n}\Rightarrow 6^{2^n}\equiv -1\pmod{k_n}$$</span></p>
<p>also, note that <span class="math-container">$m-n>0$</span> so <span class="math-container">$2^{m-n}$</span> is even, so <span class="math-container">$(-1)^{2^{m-n}}=1$</span>. </p>
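As a sanity check (mine, not part of the answer), both the divisibility $k_n\mid k_m-2$ and pairwise coprimality can be verified exactly for the first few terms:

```python
from math import gcd

# k_n = 6^(2^n) + 1; the terms grow fast, but Python integers are exact.
k = [6 ** (2 ** n) + 1 for n in range(6)]

# k_n divides k_m - 2 whenever m > n ...
divides = all((k[m] - 2) % k[n] == 0 for m in range(6) for n in range(m))

# ... hence any two distinct terms share no odd prime factor, and all
# terms are odd, so they are pairwise coprime.
coprime = all(gcd(k[m], k[n]) == 1 for m in range(6) for n in range(m))
```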
|
272,588 |
<p>I am trying to partition the list variable called
<strong>test</strong> so that the answer looks like the variable called <strong>goal</strong>. The <strong>test</strong> list is actually a list of long data. In this example I have a short version, per the code below, which also includes my attempts called ans1 and ans2 using the functions Partition and ArrayReshape:</p>
<pre><code> Clear[ans];
goal={{{a,b,c},{d,e,f},{g,h,i},j},{{k,l,m},{n,o,p},{q,r,s},t}};
TableForm[goal]
</code></pre>
<p>And the output should be</p>
<pre><code> a d g
b e h j
c f i
k n q
l o r t
m p s
</code></pre>
<p>My attempt is:</p>
<pre><code> test={a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t};
ans1=Partition[test,3,3,1,Nothing]
{{a,b,c},{d,e,f},{g,h,i},{j,k,l},{m,n,o},{p,q,r},{s,t}}
TableForm[ans1]
ans2=ArrayReshape[test,{3,3,3},1]
{{{a,b,c},{d,e,f},{g,h,i}},{{j,k,l},{m,n,o},{p,q,r}},{{s,t,1},{1,1,1},{1,1,1}}}
TableForm[ans2]
</code></pre>
|
cvgmt
| 72,111 |
<p><strong>Edit(reply to comment)</strong></p>
<p>For the pattern <code>{{#,#,#},{#,#,#},{#,#,#},{#,#,#},#}</code>, there are <code>3*4+1</code> elements, so we use</p>
<pre><code>test = Range[39];
FlattenAt[#, {-1}] & /@ (Partition[#, UpTo[3]] & /@
Partition[test, UpTo[3*4 + 1]])
</code></pre>
<p><a href="https://i.stack.imgur.com/FNexx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FNexx.png" alt="enter image description here" /></a></p>
<p><strong>Original</strong></p>
<pre><code>test={a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t};
FlattenAt[#, {-1}] & /@ (Partition[#, UpTo[3]] & /@
Partition[test, UpTo[10]])
FlattenAt[#, {-1}] & /@ (Partition[#, UpTo[3]] & /@
Partition[test, 10])
FlattenAt[#, {-1}] & /@ (Partition[#, 3, 3, 1, Nothing] & /@
Partition[test, 10])
</code></pre>
<p><a href="https://i.stack.imgur.com/U7UDH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U7UDH.png" alt="> {{{a, b, c}, {d, e, f}, {g, h, i}, j}, {{k, l, m}, {n, o, p}, {q, r, s}, t}}" /></a></p>
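For readers without Mathematica, here is a sketch of the same reshaping in Python (my own translation, assuming the pattern is "chunks of $3\cdot3+1=10$ elements, each split into three triples plus one trailing loose element"):

```python
test = list("abcdefghijklmnopqrst")

def reshape(seq, cols=3, rows=3):
    """Split seq into chunks of cols*rows + 1 elements; turn each chunk
    into `rows` groups of `cols` elements followed by the loose element."""
    size = cols * rows + 1  # 10 for the example in the question
    out = []
    for i in range(0, len(seq), size):
        chunk = seq[i:i + size]
        groups = [chunk[j:j + cols] for j in range(0, cols * rows, cols)]
        out.append(groups + chunk[cols * rows:])
    return out

result = reshape(test)
```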
|
13,922 |
<p>How to sum up this series :</p>
<p>$$2C_0 + \frac{2^2}{2}C_1 + \frac{2^3}{3}C_2 + \cdots + \frac{2^{n+1}}{n+1}C_n$$
</p>
<p>Any hint that will lead me to the correct solution will be highly appreciated.</p>
<p>EDIT: Here $C_i = {}^nC_i $</p>
|
Andrés E. Caicedo
| 462 |
<p>Let's assume $C_i=\binom ni$. I'll give a solution that is not precalculus level. Consider first the equality
$$ (1+x)^n=C_0+xC_1+x^2C_2+\dots+x^nC_n. $$
This is the <em>binomial theorem</em>.</p>
<p>Integrate from 0 to t. On the left hand side we get $\frac{(1+t)^{n+1}-1}{n+1}$ and on the right hand side $\sum \frac1{i+1}t^{i+1}C_i$. </p>
<p>Now set $t=2$, and a bit of algebra gives you the answer you want.</p>
<p>Pretty sure there is an elementary approach as well. </p>
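The "bit of algebra" yields $\sum_{i=0}^{n}\frac{2^{i+1}}{i+1}C_i=\frac{3^{n+1}-1}{n+1}$, which can be confirmed with exact rational arithmetic (a check of mine, not part of the answer):

```python
from fractions import Fraction
from math import comb

def series(n):
    # 2*C0 + (2^2/2)*C1 + ... + (2^(n+1)/(n+1))*Cn, exactly
    return sum(Fraction(2 ** (i + 1), i + 1) * comb(n, i)
               for i in range(n + 1))

def closed_form(n):
    return Fraction(3 ** (n + 1) - 1, n + 1)

ok = all(series(n) == closed_form(n) for n in range(1, 15))
```

For instance at $n=1$ the series is $2+2=4$ and the formula gives $(9-1)/2=4$.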
|
348,395 |
<p>If <span class="math-container">$f$</span> is a distribution with compact support, then there exist <span class="math-container">$m$</span> and measures <span class="math-container">$f_\beta$</span>, <span class="math-container">$|\beta|\leq m$</span>, such that
<span class="math-container">$$f=\sum_{|\beta|\leq m}\frac{\partial^\beta f_\beta}{\partial x^\beta}$$</span></p>
<p>How can one prove this result?</p>
|
Johannes Hahn
| 3,041 |
<p>That's an easy consequence of the Hahn-Banach theorem and the Riesz theorem characterising the dual of <span class="math-container">$C^0(\Omega)$</span>.</p>
<p>"Of order <span class="math-container">$m$</span>" means that the distribution is continuous w.r.t. the <span class="math-container">$C^m$</span>-norm. It is defined on a dense subspace of <span class="math-container">$C^m$</span> and thus extends uniquely to an element of the dual space of <span class="math-container">$C^m$</span>. Now <span class="math-container">$C^m(\Omega) \hookrightarrow \prod_{|\beta|\leq m} C^0(\Omega), \phi\mapsto (D^\beta \phi)_\beta$</span> is an isometric embedding, so that we can extend the distribution further with Hahn-Banach to an element of the dual space of this bigger space, which is <span class="math-container">$\prod_{|\beta|\leq m} (C^0(\Omega))'$</span>. With Riesz, identify <span class="math-container">$(C^0(\Omega))'$</span> with a space of measures on <span class="math-container">$\Omega$</span>. This gives you the measures <span class="math-container">$f_\beta$</span> you're looking for.</p>
|
3,441,779 |
<p>Given that <span class="math-container">$(b_k)_{k \in \mathbb{N}} \rightarrow m$</span> and that <span class="math-container">$b_k > 0$</span> for every <span class="math-container">$k \in \mathbb{N}$</span> and that <span class="math-container">$m > 0$</span>, how do I show that <span class="math-container">$(\frac{1}{(b_k)})_{k \in \mathbb{N}}$</span> converges to <span class="math-container">$\frac{1}{m}$</span>? I'm assuming that you would use the Archimedian property, but I'm not sure how to complete the proof. Any help would be appreciated, thank you!</p>
|
N.N.
| 469,634 |
<p>You can prove a more general statement.</p>
<p>In order that there exist integers <span class="math-container">$i$</span> and <span class="math-container">$j$</span> satisfying the equation
<span class="math-container">$$
ai+bj = c,
$$</span>
it is both necessary and sufficient that <span class="math-container">$\gcd(a,b)$</span> divides <span class="math-container">$c.$</span></p>
<p><em>Proof.</em> Let <span class="math-container">$a = nd$</span> and <span class="math-container">$b = md.$</span> Then, if it is true that <span class="math-container">$ai+bj = c,$</span> we have
<span class="math-container">$$
ndi+mdj = d(ni+mj) = c.
$$</span>
Thus, <span class="math-container">$d$</span> divides <span class="math-container">$c.$</span> The same argument works in the reverse direction.</p>
<p>Letting <span class="math-container">$c = 1,$</span> you recover your desired proof.</p>
|
3,441,779 |
<p>Given that <span class="math-container">$(b_k)_{k \in \mathbb{N}} \rightarrow m$</span> and that <span class="math-container">$b_k > 0$</span> for every <span class="math-container">$k \in \mathbb{N}$</span> and that <span class="math-container">$m > 0$</span>, how do I show that <span class="math-container">$(\frac{1}{(b_k)})_{k \in \mathbb{N}}$</span> converges to <span class="math-container">$\frac{1}{m}$</span>? I'm assuming that you would use the Archimedian property, but I'm not sure how to complete the proof. Any help would be appreciated, thank you!</p>
|
D_S
| 28,556 |
<p>Let <span class="math-container">$R$</span> be a commutative ring with identity. Let's say that two elements <span class="math-container">$a, b \in R$</span> have "GCD equal to <span class="math-container">$1$</span>" if there is no irreducible element <span class="math-container">$p$</span> of <span class="math-container">$R$</span> that divides both <span class="math-container">$a$</span> and <span class="math-container">$b$</span>. You're asking when the following property holds: </p>
<p><strong>Property</strong>: If <span class="math-container">$a, b \in R$</span>, then <span class="math-container">$\operatorname{GCD}(a,b) = 1$</span> if and only if the ideal generated by <span class="math-container">$a$</span> and <span class="math-container">$b$</span> is all of <span class="math-container">$R$</span>.</p>
<p>The implication <span class="math-container">$\Leftarrow$</span> is obviously always true, since if <span class="math-container">$xa + yb = 1$</span> for some <span class="math-container">$x, y \in R$</span>, then any irreducible element of <span class="math-container">$R$</span> which divides both <span class="math-container">$a$</span> and <span class="math-container">$b$</span> would have to divide <span class="math-container">$1$</span>, which is impossible, because irreducible elements are by definition nonunits.</p>
<p>The converse is true for principal ideal domains, but false for general unique factorization domains. If <span class="math-container">$k$</span> is a field, the ring <span class="math-container">$R = k[X,Y]$</span> is a UFD, the elements <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are relatively prime, but you cannot find polynomials <span class="math-container">$f, g \in R$</span> such that <span class="math-container">$X f(X,Y) + Y g(X,Y) = 1$</span>. </p>
<p>This is because the ideal <span class="math-container">$(X,Y)$</span> generated by <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> is properly contained in <span class="math-container">$R$</span>, being the kernel of the surjective ring homomorphism <span class="math-container">$R \rightarrow k, h(X,Y) \mapsto h(0,0)$</span>. So <span class="math-container">$\Rightarrow$</span> is false for the ring <span class="math-container">$k[X,Y]$</span>.</p>
<p><strong>Proof that the property holds for principal ideal domains</strong>:</p>
<p>Let <span class="math-container">$R$</span> be a PID, and let <span class="math-container">$a, b \in R$</span> be elements with GCD equal to one. Let <span class="math-container">$J$</span> be the ideal generated by <span class="math-container">$a$</span> and <span class="math-container">$b$</span>. We want to show that <span class="math-container">$J = R$</span>. Since <span class="math-container">$R$</span> is a PID, we have <span class="math-container">$J = Rc$</span> for some <span class="math-container">$c \in R$</span>. If <span class="math-container">$c$</span> is a unit, we are done. Otherwise, <span class="math-container">$c$</span> must be divisible by some irreducible element <span class="math-container">$p$</span>. The inclusions</p>
<p><span class="math-container">$$(a), (b) \subset (c) \subset (p)$$</span>
tell us that <span class="math-container">$p$</span> divides <span class="math-container">$a$</span> and <span class="math-container">$b$</span>. Therefore <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are not relatively prime, contradiction.</p>
|
1,133,581 |
<p>$\lim\limits_{(x,y) \to (0,0)} \frac{\log(1-x^2y^2)}{x^2y^2}$</p>
<p>in polar coordinates:</p>
<p>$\lim\limits_{r \to 0} \frac{\log(1-r^4\cos^2t \sin^2t)}{r^4\cos^2t \sin^2t}$</p>
<p>The first limit exists if the second exists and is independent of $t$. But if $t$ is $k \pi/2$ that fraction does not exist. So I could argue that the limit does not exist. But that is wrong: the limit is $-1$. Where is the mistake?</p>
<p><strong>EDIT</strong></p>
<p>I know I can use another way to reach the result. My question is strictly related to this method (polar coordinates). Since the second limit does not exist for some $t$ I should think that the multivar limit does not exist. Perhaps are there some $t$ that I shouldn't care about?</p>
|
layman
| 131,740 |
<p>Actually, if you haven't already learned this, you will learn that in <span class="math-container">$\Bbb R^{n}$</span>, a set is compact <em>if and only if</em> it is both closed and bounded.</p>
<p>The set <span class="math-container">$(0,1)$</span> is bounded, but it is not closed, so it can't be compact.</p>
<p>The solution to your exercise of finding an open cover with no finite subcover proves that <span class="math-container">$(0,1)$</span> is not compact, because the definition of a set being compact is that every open cover of the set has a finite subcover.</p>
<p>So, the whole trick to finding an open cover with no finite subcover is this:</p>
<p><span class="math-container">$(0,1)$</span> is not closed, and so it doesn't contain all of its limit points (it's easy to see that the two it doesn't contain are <span class="math-container">$0$</span> and <span class="math-container">$1$</span>). Well, can you construct an open cover whose open sets get closer and closer and closer to at least one (or maybe both) of these end points, but never quite reaches them? Hint: since we are in <span class="math-container">$\Bbb R$</span>, think about how you would get closer and closer to a real number <span class="math-container">$r$</span> without ever actually being <span class="math-container">$r$</span>. If you want to get closer from above, then <span class="math-container">$\{r + \frac{1}{n} \}_{n = 1}^{\infty}$</span> is a sequence that gets closer to <span class="math-container">$r$</span> from above. What would be a sequence that gets close to <span class="math-container">$r$</span> from below? You should hopefully say <span class="math-container">$\{r - \frac{1}{n} \}_{n = 1}^{\infty}$</span>.</p>
<p>Anyway, so maybe we can construct our open intervals so that the end points get closer and closer to the limit points <span class="math-container">$0$</span> and <span class="math-container">$1$</span>, but finitely many of the sets would always have gaps between them and <span class="math-container">$0$</span> and <span class="math-container">$1$</span>... hmm...</p>
<p>Well, we could do <span class="math-container">$(0 + \frac{1}{3}, 1 - \frac{1}{3}) \cup (0 + \frac{1}{4}, 1 - \frac{1}{4}) \cup (0 + \frac{1}{5}, 1 - \frac{1}{5}) \cup \ldots$</span>.</p>
<p>These open sets are in <span class="math-container">$(0,1)$</span> and have end points getting closer and closer to <span class="math-container">$0$</span> and <span class="math-container">$1$</span>, but any finite number of them won't contain all of <span class="math-container">$(0,1)$</span> (why?). If we use a formula to define these open intervals, it would be <span class="math-container">$\{ (0 + \frac{1}{n}, 1 - \frac{1}{n}) \}_{n = 3}^{\infty}$</span>.</p>
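A tiny numerical illustration of that "(why?)" (my own, assuming the cover $\{(1/n,\,1-1/n)\}_{n\ge3}$): any finite batch of these intervals leaves points near the endpoints uncovered.

```python
# The open cover of (0, 1): intervals (1/n, 1 - 1/n) for n = 3, 4, 5, ...
def covered(x, N):
    """Is x in the union of the intervals with index up to N?"""
    return any(1 / n < x < 1 - 1 / n for n in range(3, N + 1))

N = 100
inside = covered(0.5, N)     # an interior point, covered already at n = 3
witness = 1 / (2 * N)        # a point of (0,1) left of every left endpoint
missed = covered(witness, N)
```

Since the smallest left endpoint among the first intervals is $1/N$, the point $1/(2N)$ is never covered, no matter how large the finite batch is.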
|
1,593,433 |
<p>How do I define the integral of this Riemann sum?
$$\lim\limits_{m\to\infty}\frac{1}{m}\sum_{n=1}^m\cos\left({\frac{2\pi n x}{b}}\right)$$</p>
|
Mark Viola
| 218,419 |
<p>The term of interest is not a Riemann sum. But I thought it would be instructive to present a way to evaluate the limit. </p>
<p>To that end, let $a_m$ be the sequence defined by</p>
<p>$$a_m=\frac1m \sum_{n=1}^m\cos\left(\frac{2\pi nx}{b}\right) \tag 1$$</p>
<p>We can evaluate the sum in $(1)$ in closed form by writing for $x \ne \ell b$, $\ell$ an integer</p>
<p>$$\begin{align}
\sum_{n=1}^m\cos\left(\frac{2\pi nx}{b}\right) &=\text{Re}\left(\sum_{n=1}^m e^{i2\pi nx/b} \right)\\\\
&=\text{Re}\left(\frac{e^{i2\pi x/b}-e^{i2\pi(m+1)x/b}}{1-e^{i2\pi x/b}} \right)\\\\
&=\cos\left(\frac{\pi(m+1)x}{b}\right)\,\sin\left(\frac{\pi mx}{b}\right)\,\csc\left(\frac{\pi x}{b}\right)
\end{align}$$</p>
<p>Then, note that we have for $x\ne \ell b$, for integer $\ell$</p>
<p>$$\left|\frac1m \sum_{n=1}^m\cos\left(\frac{2\pi nx}{b}\right)\right|\le \left|\frac1m\csc\left(\frac{\pi x}{b}\right)\right|\to 0\,\,\text{as}\,\,m\to \infty$$</p>
<p>If $x= \ell b$, then the limit is trivial to compute and is $1$.</p>
<p>And we are done!</p>
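A quick numerical sanity check of the closed form and of the decay of $a_m$ (my verification, with the arbitrary sample values $x=0.3$, $b=1$):

```python
import math

x, b = 0.3, 1.0  # sample values with x not an integer multiple of b

def partial_sum(m):
    return sum(math.cos(2 * math.pi * n * x / b) for n in range(1, m + 1))

def closed_form(m):
    return (math.cos(math.pi * (m + 1) * x / b)
            * math.sin(math.pi * m * x / b)
            / math.sin(math.pi * x / b))

# The closed form matches the partial sums ...
match = all(math.isclose(partial_sum(m), closed_form(m), abs_tol=1e-9)
            for m in range(1, 200))

# ... and the averaged sequence a_m = partial_sum(m)/m tends to 0.
m = 10 ** 5
a_m = partial_sum(m) / m
```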
|
4,246,461 |
<p>I am an economics undergrad who wants to develop a firm grasp over mathematics, so will I be able to understand Bourbaki's books on my own without needing additional books? What about the books titled analysis released by Roger Godement?</p>
|
Alex Wertheim
| 73,817 |
<p>I want to preface what I'm about to say by first expressing my admiration for the Bourbaki textbooks, or at least the ones I've read from (namely, Algebra and Commutative Algebra). The Bourbaki series is the distillation of countless hours of work from some of the greatest mathematicians of the 20th century on what they felt was the correct pedagogical approach to certain core areas of mathematics. Some of the books are a little dated - for instance, Bourbaki's book on algebra is conspicuously devoid of categorical language - but many are nevertheless deep, beautiful works which contain treasures that I have been unable to find in any other books. I am an algebraist by training, and found the two books mentioned above indispensible for conducting my own research. Frankly, to go a step further, I find a lot of the criticism directed at the Bourbaki books misplaced and undeserved.</p>
<p>All that being said, my advice would be to reconsider whether reading the works of Bourbaki will really get you closer to what you want. You write in the comments, for instance, that you want to get a general picture of the whole of mathematics. This is an ambitious goal, and I think in practice, the way that most people do this is by getting <em>briefly</em> acquainted with the basics in many fields, and then going deeper based on what they're interested in and what their needs are. It's also important to look at different resources and approaches to material, because different presentations of the same material might resonate with you in entirely different ways. It's hard to know a priori what will produce the most understanding within yourself, and for many people, the only remedy for this problem is to see the same material over and over in different guises.</p>
<p>The Bourbaki tomes are encyclopedic, extremely dry, and ponderous at times. They are not, as far as I am aware, intended to be read from start to finish by a neophyte who wants to learn the subject. If you had extraordinary focus and maybe 15 or 20 years to spare, you might get somewhere with your goal by just reading through Bourbaki. The problem is, the Bourbaki series isn't well suited to jumping around, flitting from subject to subject, if you're not <em>already acquainted</em> with the material being discussed. One of the aims of the Bourbaki series was to build modern mathematics from the foundations up, addressing every last rigorous detail in a self-contained way. The result is that the proof of a given statement in Bourbaki's "Algebra" might reference several other statements in "Algebra", or even in (say) their book on set theory. This makes it very difficult to peruse the material non-linearly if you're seeing it for the first time, since you will have to either look up every reference you come across, or already understand the statements referenced.</p>
<p>Just trying to bulldoze through the books from start to finish isn't something I'd recommend either. One could spend a very long time on (say) Bourbaki's book on set theory without learning anything substantial about mathematics as a whole. I should add that this is especially true without the guidance of great teachers, which was indispensable for me on my own journey through these texts.</p>
<p>Anyway, the above represents my personal feelings, and shouldn't be taken as gospel - a person could surely learn a great deal of deep mathematics just by reading Bourbaki. If you're determined to do this, don't let this post discourage you. But I think that there are better ways to get what you want, and in time, you'll be in a better place to appreciate these classic works for what they are. Good luck!</p>
|
42,016 |
<p>I am looking for algorithms on how to find rational points on an <a href="http://en.wikipedia.org/wiki/Elliptic_curve">elliptic curve</a> $$y^2 = x^3 + a x + b$$ where $a$ and $b$ are integers. Any sort of ideas on how to proceed are welcome. For example, how to find solutions in which the numerators and denominators are bounded, or how to find solutions with a randomized algorithm. Anything better than brute force is interesting.</p>
<p>Background: a student worked on the <a href="http://en.wikipedia.org/wiki/Mordell-Weil_theorem">Mordell-Weil theorem</a> and illustrated it on some simple examples of elliptic curves. She looked for rational points by brute force (I really mean <em>brute</em>, by enumerating all possibilities and trying them). As a continuation of the project she is now interested in smarter algorithms for finding rational points. A cursory search on Math Reviews did not find much.</p>
|
James Weigandt
| 4,872 |
<p>A good reference to get started from the algorithmic point of view is Chapter 3 of Cremona's <a href="http://www.warwick.ac.uk/~masgaj/book/fulltext/index.html" rel="noreferrer"><em>Algorithms for Modular Elliptic Curves</em></a>. It contains a good deal of pseudocode which explains how Cremona's C++ package mwrank computes rational points on elliptic curves.</p>
<p>Some of the more intricate details, such as second descents are left to Cremona's papers <a href="http://www.warwick.ac.uk/~masgaj/papers/index.html" rel="noreferrer">here</a>. Given an elliptic curve with coefficients that aren't too big, your best bet to quickly find the points you're looking for will probably be to use mwrank as included in <a href="http://www.warwick.ac.uk/~masgaj/papers/index.html" rel="noreferrer">Sage</a>.</p>
<p>As has been explained to me in the comments, Sage is not the only way to get access to mwrank and the other programs that make up Cremona's elliptic curve library (eclib), but it is arguably the easiest way to get it, and it contains much more elliptic curve functionality, such as the method E.analytic_rank() which, if run on an elliptic curve of reasonably sized conductor, will return an integer that is probably the analytic rank of the curve.</p>
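As a flavor of what "smarter than brute force" means here (my own illustration, not from the answer): once a single rational point is known, the tangent (duplication) law of the group structure produces new rational points exactly, with no searching at all. The curve $y^2=x^3+17$ and the starting point $(-2,3)$ below are arbitrary choices of mine.

```python
from fractions import Fraction

# Curve y^2 = x^3 + a*x + b with a = 0, b = 17; P = (-2, 3) lies on it.
a, b = 0, 17

def on_curve(P):
    x, y = P
    return y * y == x ** 3 + a * x + b

def double(P):
    """Chord-tangent duplication: returns the point 2P, exactly."""
    x, y = P
    lam = Fraction(3 * x * x + a, 2 * y)  # slope of the tangent at P
    x3 = lam * lam - 2 * x
    y3 = lam * (x - x3) - y
    return (x3, y3)

P = (Fraction(-2), Fraction(3))
Q = double(P)   # a new rational point, found with no search
R = double(Q)   # another one, now with non-integral coordinates
```

Repeated doubling makes the numerators and denominators explode quickly, which is one reason dedicated tools such as mwrank are the practical choice.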
|
47,181 |
<p>I'll ask you to consider a situation wherein one has a series of edges for a graph, $(e_1, e_2, ..., e_N) \in E$, each with a specifiable length $(l_1, l_2, ..., l_N) \in L$, and the goal is to ensure that the connected graph has a unique topology in 3-space. More specifically, I'm interested in ensuring that some graph with the connectivity of a polytope can only be drawn as the skeleton of that particular polytope - that there should be no crossed edges or knots possible for the specified edge lengths. </p>
<p>To provide a physical example: </p>
<p>I use a group of rods to represent the edges of the desired graph (with pencils or the like) and color/symbol-encode their ends to represent vertex-assignments. I want to choose rod lengths in such a way that if I hand them to a naive-constructor (i.e. a 3-year old or a computer-controlled robot), and tell him/her/it to connect the ends of the rods together that have the same color or symbol, after waiting an arbitrarily long time there will only be a unique geometry satisfying the connectivity constraints of the graph I originally had in mind. </p>
<p>Is there a known computational complexity for this problem? Is there even a solution in the general case, or in the case where we apply the restriction that the specified polytope is convex?</p>
<p>I appreciate any feedback!</p>
<p>EDIT 1: The edges of the graph must be straight lines in 3-space, they cannot be bent to accommodate a particular edge length. </p>
<p>EDIT 2: Does the problem become easier if one assumes some physical diameter for the edges?</p>
|
j.c.
| 353 |
<p>This seems like a question in rigidity theory. In particular it seems like part of what you want is conditions for <strong>global rigidity</strong> in 3 dimensions. </p>
<p>Let me write down some definitions and basic facts from the introduction of <a href="http://www.math.columbia.edu/~dpt/speaking/Rigidity/talk.pdf" rel="nofollow">this nice set of slides by Dylan Thurston</a> and then post some references that might be helpful. </p>
<p>A <em>framework</em> is a graph and a map from its vertices into $d$-dimensional Euclidean space $\mathbb{E}^d$. A framework is <em>locally rigid</em> if every other framework in a small neighborhood with the same edge lengths is related to it by an isometry of $\mathbb{E}^d$. A framework is <em>globally rigid</em> if every other framework in $\mathbb{E}^d$ with the same edge lengths is related to it by an isometry of $\mathbb{E}^d$.</p>
<p>It turns out that checking global rigidity is NP hard, even in 1 dimension (Saxe 1979). However, if you're just interested in "generic" frameworks, i.e. those for which the edge lengths do not satisfy any polynomial relation, then work of Connelly and S. Gortler, A. Healy and D. Thurston characterizes these frameworks in any dimension with an efficient randomized algorithm. See <a href="http://www.cs.harvard.edu/~sjg/papers/ggr.pdf" rel="nofollow">the paper of GHT</a> or the slides above. I must admit that I have not yet studied their work in any detail.</p>
<p>Since you are requiring that your frameworks are skeleta of polytopes, there may be extra structure which you can exploit. Let me just point you to <a href="http://en.wikipedia.org/wiki/Cauchy%27s_theorem_%28geometry%29" rel="nofollow">Cauchy's rigidity theorem</a> which states that convex polyhedra are rigid if you force the faces to be rigid in addition to the edges. If you don't have this restriction on the faces, then there are nonrigid examples, e.g. the 1-skeleton of a cube can be sheared, also pointed out in sleepless in beantown's answer. If you do have the restriction on the faces, but you allow nonconvex polyhedra, then there are <a href="http://en.wikipedia.org/wiki/Flexible_polyhedra" rel="nofollow">flexible polyhedra</a>.</p>
<p>In addition to the links above, there are several surveys on the webpage of <a href="http://www.math.cornell.edu/~connelly/" rel="nofollow">Robert Connelly</a> on various topics in rigidity theory.</p>
|
2,954,991 |
<p>How do I prove that <span class="math-container">$\forall p, b, c \in \mathbb{Z}^+, (\text{Prime}(p) \wedge p^2 + b^2 = c^2) \Rightarrow p^2 = c+b$</span> ?
I know this much:
<span class="math-container">$$p^2 + b^2 = c^2\\
p^2= c^2 - b^2\\
p^2=(c-b)(c+b)
$$</span>
I know the next part has something to do with the fact that every number that is not prime can be expressed as a product of primes that is unique (up to the order of the factors). And using that, somehow, <span class="math-container">$c-b=1$</span> and, therefore, <span class="math-container">$p^2 = c+b$</span>. If someone could please offer me a rigorous explanation for this it would be appreciated.</p>
|
Fabio Lucchini
| 54,738 |
<p>If <span class="math-container">$p^2=(c-b)(c+b)$</span> and <span class="math-container">$c-b\neq 1$</span>, then <span class="math-container">$c-b=c+b=p$</span> because <span class="math-container">$p $</span> is prime.
This implies <span class="math-container">$b=0$</span>, a contradiction which proves <span class="math-container">$c-b=1$</span> and <span class="math-container">$c+b=p^2 $</span>.</p>
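<p>As a quick sanity check of this conclusion, here is a brute-force search in Python (an illustration I am adding, not part of the proof):</p>

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# Collect all Pythagorean triples (p, b, c) with a prime leg p < 50.
triples = [(p, b, c)
           for p in range(2, 50) if is_prime(p)
           for b in range(1, 2000)
           for c in [round((p * p + b * b) ** 0.5)]
           if c * c == p * p + b * b]

# The argument above predicts c - b = 1, i.e. p^2 = c + b, for every such triple.
assert triples
assert all(c - b == 1 and p * p == c + b for p, b, c in triples)
```

<p>Every triple found (e.g. $(3,4,5)$, $(5,12,13)$) indeed has $c-b=1$; note that $p=2$ produces no triple at all, consistent with $c-b=1$ forcing $2b+1=4$, which has no integer solution.</p>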
|
2,470,062 |
<blockquote>
<p><span class="math-container">$$\sqrt{k-\sqrt{k+x}}-x = 0$$</span></p>
<p>Solve for <span class="math-container">$k$</span> in terms of <span class="math-container">$x$</span></p>
</blockquote>
<p>I got all the way to
<span class="math-container">$$x^{4}-2kx^{2}-x+k^{2}-k=0$$</span>
but could not factor afterwards. My teacher mentioned that there was grouping involved</p>
<p>Thanks Guys!</p>
<p>Edit 1: The exact problem was to solve for <span class="math-container">$x$</span> given that <span class="math-container">$$\sqrt{4-\sqrt{4+x}}-x = 0$$</span> with a hint to substitute 4 with <span class="math-container">$k$</span></p>
|
Leucippus
| 148,155 |
<p>As discovered $\sqrt{k - \sqrt{k+x}} = x$ leads to the equation $k^2 - (2 \, x^2 + 1) \, k + (x^4 - x) = 0$. Now,
\begin{align}
k &= \frac{1}{2} \, \left[(2 \, x^2 + 1) \pm \sqrt{ (2 \, x^2 + 1)^2 - 4 \, (x^4 - x) } \right] \\
&= \frac{1}{2} \, [(2 \, x^2 + 1) \pm (2 x + 1)] \\
&= \begin{cases} x^2 + x + 1 = \frac{1 - x^3}{1-x} \\ x(x-1) \end{cases}
\end{align} </p>
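<p>A quick check in Python (added for illustration) that both expressions are exact roots of the quadratic in $k$, and that the branch $k = x^2+x+1$ satisfies the original radical equation for $x>0$:</p>

```python
import math
from fractions import Fraction

def quad(k, x):
    # k^2 - (2 x^2 + 1) k + (x^4 - x), the quadratic in k derived above
    return k * k - (2 * x * x + 1) * k + (x ** 4 - x)

for x in (Fraction(1, 2), Fraction(2), Fraction(7, 3)):
    assert quad(x * x + x + 1, x) == 0     # first root
    assert quad(x * (x - 1), x) == 0       # second root

# The branch k = x^2 + x + 1 satisfies the original equation for x > 0, since
# k + x = (x + 1)^2 and k - (x + 1) = x^2, so sqrt(k - sqrt(k + x)) = x.
for x in (0.5, 2.0, 3.75):
    k = x * x + x + 1
    assert math.isclose(math.sqrt(k - math.sqrt(k + x)), x)
```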
|
3,165,276 |
<p>I am attempting to determine whether the limit of <span class="math-container">$S_n = \frac{\sqrt{n}}{\sqrt{n}+1} $</span> exists, and if it exists, what it is.
If the limit exists, I want to prove it in terms of the <span class="math-container">$\varepsilon$</span>-<span class="math-container">$N$</span> definition.</p>
<p>I got the limit by:
<span class="math-container">$$\lim_{n\to\infty}S_n=\lim_{n\to\infty}\Biggl(\frac{\frac{\sqrt{n}}{\sqrt{n}}}{\frac{\sqrt{n}}{\sqrt{n}}+\frac{1}{\sqrt{n}}}\Biggr)=\lim_{n\to\infty}\biggl(\frac{1}{1+{\frac{1}{\sqrt{n}}}}\Biggr)=1$$</span>
Having the limit, I assume I can plug it into the definition, as follows:</p>
<p><span class="math-container">$$\forall \varepsilon>0\;\exists N=N(\varepsilon)\text{ s.t. }\forall n>N,\ \left\vert\frac{\sqrt{n}}{\sqrt{n}+1}-1\right\vert<\varepsilon $$</span>
which gives me:
<span class="math-container">$$\vert\;\frac{\sqrt{n}}{\sqrt{n}+1}-{\frac{\sqrt{n}-1}{\sqrt{n}-1}}\;\vert=\vert\;\frac{\sqrt{n}\,(\sqrt{n}-1)-\;(\sqrt{n}+1)(\sqrt{n}-1)}{(\sqrt{n}+1)(\sqrt{n}-1)}\;\vert=\vert\frac{1-\sqrt{n}}{n-1}\vert$$</span>
So now I have
<span class="math-container">$$\vert\frac{1-\sqrt{n}}{n-1}\;\vert\leq \frac{1-\sqrt{n}}{n-1}$$</span>
Am I on the right track?
I am unsure of how to proceed from here ?</p>
|
WaveX
| 323,744 |
<p>No need for the conjugate here.</p>
<p>Let <span class="math-container">$\varepsilon>0$</span>. Then by the Archimedean Property, we have that for some <span class="math-container">$N \in \mathbb{N}$</span>, <span class="math-container">$\frac1N \lt \varepsilon^2$</span> </p>
<p>See that for <span class="math-container">$n \ge N$</span>, we have <span class="math-container">$$\left| \frac{\sqrt{n}}{\sqrt{n}+1} - 1\right| = \left|\frac{\sqrt{n}}{\sqrt{n}+1} - \frac{\sqrt{n}+1}{\sqrt{n}+1}\right| = \left|\frac{-1}{\sqrt{n}+1}\right| = \frac{1}{\sqrt{n}+1} < \frac{1}{\sqrt{n} } \le \frac{1}{\sqrt{N} }<\varepsilon$$</span> </p>
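<p>A small numerical illustration (my addition) of this choice of $N$:</p>

```python
import math

def term(n):
    return math.sqrt(n) / (math.sqrt(n) + 1)

for eps in (0.5, 0.1, 0.01, 0.001):
    N = math.ceil(1 / eps ** 2)      # Archimedean choice, so 1/sqrt(N) <= eps
    for n in (N, N + 1, 10 * N):     # the bound then holds for all n >= N
        assert abs(term(n) - 1) < eps
```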
|
3,520,057 |
<p>How would you draw <span class="math-container">$A\cup(B\cap C\cap D)$</span>?
I tried with Paint, but it doesn't look good, and I don't know if there are tools to make this easier. I would appreciate any help or tips.</p>
<p><img src="https://i.stack.imgur.com/fc6Pq.png" alt="enter image description here"></p>
|
Jonas Linssen
| 598,157 |
<p>Note that the tensor product of algebras is in fact the coproduct in the category of <span class="math-container">$A$</span>-algebras. What AM defines there should therefore satisfy the universal property of an infinite coproduct. I am not sure about the universal property with respect to multilinear maps of the underlying modules though. I believe one can derive something like this, yet one has to ask oneself, if it would be worth the effort.</p>
<p>I recall doing this construction in some exercise quite a while ago and would prefer not to reconsider how the elements look like. So I am afraid I wont be of much use regarding your second question. Maybe someone else can help out there. Personally I strive to do as much as possible using universal properties only...</p>
|
273,463 |
<p>I want to solve this differential equation</p>
<p><span class="math-container">$$
\frac{1}{\Phi(\phi)} \frac{d^2 \Phi}{d \phi^2}=-m^2
$$</span></p>
<p>with b.c.</p>
<p><span class="math-container">$$
\Phi(\phi+2 \pi)=\Phi(\phi)
$$</span></p>
<p>it is easy to solve by wolfram</p>
<pre><code>eq1 = D[Phi[phi], {phi, 2}]/Phi[phi];
sol1 = DSolve[eq1 == -m^2, Phi[phi], phi];
</code></pre>
<p>out is <code>{{Phi[phi] -> C[1] Cos[m phi] + C[2] Sin[m phi]}}</code></p>
<hr />
<p>In <em>QUANTUM CHEMISTRY</em> by McQuarrie(<a href="https://libgen.unblockit.cat/libraryp2/main/0FAA97ECA7C67AEAE6D4DD67E0217420" rel="nofollow noreferrer">https://libgen.unblockit.cat/libraryp2/main/0FAA97ECA7C67AEAE6D4DD67E0217420</a>)</p>
<p><span class="math-container">$$
\Phi(\phi)=A_m e^{i m \phi} \quad \text { and } \quad \Phi(\phi)=A_{-m} e^{-i m \phi}
$$</span></p>
<p>periodicity condition</p>
<p><span class="math-container">$$
\Phi(\phi+2 \pi)=\Phi(\phi)
$$</span></p>
<p>so
<span class="math-container">$$
\begin{gathered}
A_m e^{i m(\phi+2 \pi)}=A_m e^{i m \phi} \\
A_{-m} e^{-i m(\phi+2 \pi)}=A_{-m} e^{-i m \phi}
\end{gathered}
$$</span></p>
<p>by all of above can get</p>
<p><span class="math-container">$$
e^{\pm i 2 \pi m}=1
$$</span></p>
<p>so</p>
<p><span class="math-container">$$
m=0, \pm 1, \pm 2, \ldots
$$</span></p>
<p>finally</p>
<p><span class="math-container">$$
\Phi_m(\phi)=A_m e^{i m \phi} \quad m=0, \pm 1, \pm 2, \ldots
$$</span></p>
<p>The periodicity condition is <span class="math-container">$\Phi(\phi+2 \pi)=\Phi(\phi)$</span>. How do I add this periodicity condition to my code?</p>
<hr />
<p>In general, texts on <em>mathematical physics equations</em> give this:
<span class="math-container">$$
\Phi(\phi)=A_m \mathrm{e}^{\mathrm{i} m \phi}+A_{-m} \mathrm{e}^{-\mathrm{i} m \phi} \quad(m=0,1,2 \cdots)
$$</span></p>
|
Nasser
| 70 |
<p>Just change constants.</p>
<pre><code>ClearAll[Phi, phi, m];
eq1 = D[Phi[phi], {phi, 2}]/Phi[phi];
sol = Phi[phi] /. First@DSolve[eq1 == -m^2, Phi[phi], phi]
</code></pre>
<p><img src="https://i.stack.imgur.com/ve3Lz.png" alt="Mathematica graphics" /></p>
<pre><code>sol = TrigToExp[sol]
</code></pre>
<p><img src="https://i.stack.imgur.com/pk1Ln.png" alt="Mathematica graphics" /></p>
<p>Now the idea is to collect on the exponential terms</p>
<pre><code>sol = Collect[sol, {E^(-I m phi), E^(I m phi)}]
</code></pre>
<p><img src="https://i.stack.imgur.com/AYO1N.png" alt="Mathematica graphics" /></p>
<p>We are free now to rename the constants (even if they are complex, it does not matter)</p>
<pre><code>sol = sol /. {C[1]/2 - (I C[2])/2 -> A0, C[1]/2 + (I C[2])/2 -> A1}
</code></pre>
<p><img src="https://i.stack.imgur.com/Ii8dQ.png" alt="Mathematica graphics" /></p>
<p>Which is what book says</p>
<p><span class="math-container">$$
\Phi_m(\phi)=A_m e^{i m \phi} \quad m=0, \pm 1, \pm 2, \ldots
$$</span></p>
<p><strong>Update</strong></p>
<p>Answering the comment below: if the book meant to show the negative side also, then this can be done by expanding each term of the solution using <code>FourierSeries</code> and then applying <code>TrigToExp</code> on each term and collecting.</p>
<p>For example for <code>n=2</code> applied on <code>Cos[m phi]</code> gives</p>
<pre><code>tab1 = Table[FourierSeries[Cos[m*phi], phi, n], {n, 0, 2}]
tab1 = Map[TrigToExp[#] &, tab1]
</code></pre>
<p><img src="https://i.stack.imgur.com/ld9Ck.png" alt="Mathematica graphics" /></p>
<p>Then need to do the same for <code>Sin[ m phi]</code> and add and simplify. This will give answer for infinite terms as shown in book.</p>
<p>I do not have the time to write this fully now. If someone wants to do it, they are welcome to.</p>
<p>I read it as: they just wanted to rewrite the solution as a sum of only 2 complex exponentials instead of two trig functions. But I could be wrong on this.</p>
|
3,299,072 |
<p>I want to prove that
<span class="math-container">$$\frac{(x-y)(x-z)}{x^2}+\frac{(y-x)(y-z)}{y^2}+\frac{(z-x)(z-y)}{z^2}\geq 0$$</span>
for positive numbers <span class="math-container">$x,y,z$</span>.
I don't know how to even begin. I must say I'm not 100% certain the inequality always holds.</p>
<p>I tried the sort of factoring involved in proving Schur's inequality, but it doesn't seem to work here. I also tried to distribute the denominators to obtain terms of the form (1-y/x)(1-z/x) and then maybe substituting x/y=a, y/z=b, z/x=c, etc. </p>
|
Michael Rozenberg
| 190,319 |
<p>After replacing <span class="math-container">$x$</span> with <span class="math-container">$\frac{1}{x}$</span> (and similarly for <span class="math-container">$y$</span> and <span class="math-container">$z$</span>) we need to prove that
<span class="math-container">$$\sum_{cyc}x(x-y)(x-z)\geq0,$$</span> which is Schur.</p>
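<p>For what it's worth, a brute-force check of the original inequality in exact rational arithmetic (an added illustration; after the substitution it is Schur's inequality for $t=1$):</p>

```python
import random
from fractions import Fraction

def cyclic_sum(x, y, z):
    # the left-hand side of the original inequality
    return ((x - y) * (x - z) / x ** 2
            + (y - x) * (y - z) / y ** 2
            + (z - x) * (z - y) / z ** 2)

random.seed(0)
for _ in range(2000):
    x, y, z = (Fraction(random.randint(1, 100)) for _ in range(3))
    assert cyclic_sum(x, y, z) >= 0   # exact rationals, no rounding issues
```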
|
16,158 |
<p>I have a Manipulate control, the main element of which is a single slider. Moving the <em>one</em> slider triggers calls to <em>two</em> functions - one which is evaluated very quickly, the other more slowly. I'd like to see 'live' update ( i.e. ContinuousAction->True ) for the fast one, but would only like to update the slow one when the mouse is released. Is there a simple way to achieve this?</p>
<p>Thanks in advance!</p>
|
Nasser
| 70 |
<p>Yes. Each control can have its own <code>ContinuousAction</code> option. As follows:</p>
<pre><code>Manipulate[{x, y, z},
{{x, 0, "x"}, 0, 10, 1, ContinuousAction -> True},
{{y, 0, "y"}, 0, 10, 1, ContinuousAction -> False},
{{z, 0, "z"}, 0, 10, 1},
ContinuousAction -> True
]
</code></pre>
<p>The inner options take over from the common options. So in the above, <code>z</code> will use <code>True</code> but <code>y</code> will use <code>False</code> since it is an inner option.</p>
<p><strong>Update</strong>
this is a follow-up to the comment below. I am not quite sure exactly what problem we are solving, but I think what you want is simply to use <code>ControlActive</code> </p>
<p><a href="http://reference.wolfram.com/mathematica/ref/ControlActive.html">http://reference.wolfram.com/mathematica/ref/ControlActive.html</a></p>
<pre><code>ControlActive can be used to switch between a faster computation to be
done while controls like sliders are being moved, and a slower
computation to be done once the controls are released.
</code></pre>
<p>If this sounds like what you want, I'll be happy to make you an example using it if you tell me more about the actions you want to perform, which one is slow and which is fast. (The help also has more examples, but feel free to follow up if it is not clear.)</p>
<p>I first understood the question as: you have 2 control variables and want to control each on its own.</p>
|
16,158 |
<p>I have a Manipulate control, the main element of which is a single slider. Moving the <em>one</em> slider triggers calls to <em>two</em> functions - one which is evaluated very quickly, the other more slowly. I'd like to see 'live' update ( i.e. ContinuousAction->True ) for the fast one, but would only like to update the slow one when the mouse is released. Is there a simple way to achieve this?</p>
<p>Thanks in advance!</p>
|
jVincent
| 1,194 |
<p>This is not a full answer to your question, however I'd just like to add how this can be solved outside of Manipulate, perhaps someone can translate this into using Manipulate.
As I understand it, your problem is that you dynamically display both <code>fast[a]</code> and <code>slow[a]</code> while interactively updating <code>a</code>, which means the control constantly waits for <code>slow[a]</code> to finish. You can avoid this by making another variable that's only updated upon completion and then use that for the slow update:</p>
<pre><code> fast[a_] := a
slow[a_] := (Pause[0.5]; a)
</code></pre>
<p>So rather than having</p>
<pre><code> {Dynamic@fast[a], Dynamic@slow[a]}]
Slide[Dynamic[a],{0,1}]
</code></pre>
<p>You should have</p>
<pre><code> Slider[Dynamic[a, {(a = #) &, (b = #) &}], {0, 1}]
 {Dynamic@fast[a], Dynamic@slow[b]}
</code></pre>
<p><strong>Updated</strong></p>
<p>to correct error caught by Mr. Wizard. Previusly used <code>Dynamic[{fast[a],slow[b]}]</code> to see the results which defies the point of letting them update independently.</p>
|
1,602,078 |
<p>There are $n$ prisoners and $n$ hats. Each hat is colored with one of $k$ given colors. Each prisoner is assigned a random hat, but the number of each color hat is not known to the prisoners. The prisoners will be lined up single file where each can see the hats in front of him but not behind. Starting with the prisoner in the back of the line and moving forward, they must each, in turn, say only one word which must be one of the $k$ given colors. If the word matches their hat color they are released, if not, they are killed on the spot. They can set up a strategy before the test, so they choose a strategy that maximizes the number of definitely released prisoners (that number is called the <em>number of the strategy</em>). What is that number?</p>
|
Parnad Bhattacharjee
| 778,200 |
<p>Since I don't have permission to comment yet, I am adding this here: we should label the k colours with the numbers from 0 to k-1, i.e. {0,1,2,...,k-1}, and then use the concept of modular arithmetic.</p>
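<p>To make the hint concrete, here is a sketch of the standard checksum strategy in Python (my illustration; it assumes the back prisoner speaks first and announces the sum mod $k$ of the hats he sees, sacrificing his own guarantee):</p>

```python
import random

def run(hats, k):
    """Simulate the mod-k strategy; prisoner 0 (back of the line) speaks first."""
    calls = []
    for i in range(len(hats)):
        visible = sum(hats[i + 1:]) % k          # hats prisoner i can see
        if i == 0:
            guess = visible                      # announce checksum; may be wrong
        else:
            already = sum(calls[1:]) % k         # later speakers' calls are correct
            guess = (calls[0] - already - visible) % k
        calls.append(guess)
    return sum(g == h for g, h in zip(calls, hats))

random.seed(1)
k, n = 5, 12
for _ in range(500):
    hats = [random.randrange(k) for _ in range(n)]
    assert run(hats, k) >= n - 1                 # n - 1 prisoners always survive
```

<p>Every prisoner except possibly the first deduces his own colour exactly, so $n-1$ releases are guaranteed.</p>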
|
3,422,861 |
<p>The problem is as follows:</p>
<blockquote>
<p>The acceleration of an oscillating sphere is defined by the equation
<span class="math-container">$a=-ks$</span>. Find the value of <span class="math-container">$k$</span> such that <span class="math-container">$v=10\,\frac{cm}{s}$</span> when <span class="math-container">$s=0$</span>
and <span class="math-container">$s=5$</span> when <span class="math-container">$v=0$</span>.</p>
</blockquote>
<p>The given alternatives in my book are as follows:</p>
<p><span class="math-container">$\begin{array}{ll}
1.&15\\
2.&20\\
3.&10\\
4.&4\\
5.&6\\
\end{array}$</span></p>
<p>What I attempted to do here is to use integration to find the value of <span class="math-container">$k$</span>.</p>
<p>Since the acceleration measures the rate of change between the speed and time then:</p>
<p>I'm assuming that they are using a weird notation <em>"s"</em> for the time.</p>
<p><span class="math-container">$\dfrac{d(v(s))}{ds}=-ks$</span></p>
<p><span class="math-container">$v(s)=-k\frac{s^2}{2}+c$</span></p>
<p>Using the given condition: <span class="math-container">$v(0)=10$</span></p>
<p><span class="math-container">$10=c$</span></p>
<p><span class="math-container">$v(s)=-k\frac{s^2}{2}+10$</span></p>
<p>Then it mentions: <span class="math-container">$v(5)=0$</span></p>
<p><span class="math-container">$0=-k\frac{25}{2}+10$</span></p>
<p>From this it can be obtained:</p>
<p><span class="math-container">$k=\frac{20}{25}=\frac{4}{5}$</span></p>
<p>However this value doesn't appear within the alternatives. What part did I missunderstood?. Can somebody help me here?.</p>
|
David K
| 139,123 |
<p>The first thing to realize is that <span class="math-container">$s$</span> is <em>not</em> a "weird notation for time". It's a fairly standard notation for distance along a path.
It is used much the same way as <span class="math-container">$x$</span> is used in other problems, except that we tend to use <span class="math-container">$x$</span> when we know the motion is in a straight line and can declare that line to be the <span class="math-container">$x$</span> axis, whereas not every oscillation problem involves straight-line motion.</p>
<p>So <span class="math-container">$v = \frac{ds}{dt}$</span> and <span class="math-container">$a = \frac{dv}{dt}.$</span></p>
<p>Then you might recognize <span class="math-container">$a = -ks$</span> as an example of Hooke's Law and you might recall that the result is sinusoidal oscillation around <span class="math-container">$s = 0.$</span>
But you don't actually need to know that.</p>
<p>All you really need to know is that along the path of length <span class="math-container">$5$</span> units from <span class="math-container">$s = 0$</span> to <span class="math-container">$s = 5,$</span> the velocity of the sphere is reduced from <span class="math-container">$v = 10$</span> to <span class="math-container">$v = 0.$</span>
If you let the letter <span class="math-container">$m$</span> stand for the mass of the sphere,
then the force <span class="math-container">$F = ma = -mks$</span> acts on the sphere along that path.
You can integrate over <span class="math-container">$s$</span> to figure out how much work that force does on the sphere
and match that to the change in kinetic energy of the sphere.
The change is negative, because we go from whatever kinetic energy the sphere has at <span class="math-container">$v = 10$</span> (I hope you remember the formula for that) to zero at <span class="math-container">$v = 0$</span>.</p>
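<p>Carrying that computation out (a worked check I am adding; the mass $m$ cancels from both sides of the work-energy balance):</p>

```python
from fractions import Fraction

v0 = 10   # cm/s, the speed at s = 0
s1 = 5    # cm, where v = 0

# Work done by F = -m k s from 0 to s1 is -m k s1^2 / 2; it equals the
# kinetic-energy change 0 - m v0^2 / 2.  The mass m cancels, leaving
# k s1^2 / 2 = v0^2 / 2, i.e.:
k = Fraction(v0 ** 2, s1 ** 2)
assert k == 4
```

<p>The resulting $k=4$ is one of the listed alternatives.</p>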
|
1,358,574 |
<p>When I searched for the derivative of the Gamma function I got something of the form:</p>
<p>$$\Gamma'(x)=\Gamma(x) \psi(x)$$</p>
<p>But from the definition of the Digamma function to me it's like writing:</p>
<p>$$\Gamma'(x)=\Gamma'(x)$$</p>
<p>And this doesn't seem very useful to me (if I'm wrong feel free to explain me why) so I'm wondering: is there any other form for the derivative(s) of the Gamma function ? This function is defined by an integral so I think that there could be but I'm not sure on how to deal with this.</p>
|
Community
| -1 |
<p>$4^n = \sum_{k=0}^{2n} \binom{2n}{k} \ge \binom{2n}{n}$, so the terms of this series are all at least one. (In particular, they don't converge to zero.)</p>
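<p>Both facts used here are easy to verify mechanically (illustration only):</p>

```python
from math import comb

for n in range(0, 40):
    # binomial theorem at x = y = 1: row 2n of Pascal's triangle sums to 4^n
    assert sum(comb(2 * n, k) for k in range(2 * n + 1)) == 4 ** n
    # hence the central coefficient is at most 4^n
    assert comb(2 * n, n) <= 4 ** n
```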
|
2,940,287 |
<p>My ring theory teacher has a very broad definition of rings: he doesn't require them to be associative. As such, he told us to work out a definition of a product operation <span class="math-container">$\cdot$</span> on <span class="math-container">$R = \mathbb{Z}_2 \times \mathbb{Z}_2$</span> that is distributive over the additive operation <span class="math-container">$+$</span> (the operation that makes <span class="math-container">$R$</span> an abelian group) and that makes the resulting ring
<span class="math-container">$(R, +, \cdot)$</span> non-associative, non-commutative and without a unit element. I'm having some trouble doing this and was wondering if someone could shed some light on how to do it - preferably without having to check lots of cases or resort to trial and error.</p>
|
egreg
| 62,967 |
<p>The statement is true when <span class="math-container">$T\colon V\to V$</span> is a linear transformation on the vector space <span class="math-container">$V$</span>, not just when <span class="math-container">$V=\mathbb{R}^{2\times2}$</span>.</p>
<p>Suppose <span class="math-container">$T^2=T$</span>; we want to prove that <span class="math-container">$\operatorname{im}(T-I)\subseteq\ker(T)$</span>. Let <span class="math-container">$v\in V$</span> and set <span class="math-container">$w=(T-I)v=Tv-v$</span>; then
<span class="math-container">$$
Tw=T(Tv-v)=T^2v-Tv=Tv-Tv=0
$$</span></p>
<p>Suppose, conversely, that <span class="math-container">$\operatorname{im}(T-I)\subseteq\ker(T)$</span>. This implies that <span class="math-container">$T(T-I)$</span> is the zero map. More generally, if <span class="math-container">$T_1\colon V\to V$</span> and <span class="math-container">$T_2\colon V\to V$</span> are linear maps, then <span class="math-container">$\operatorname{im}T_1\subseteq\ker T_2$</span> if and only if <span class="math-container">$T_2T_1=0$</span>. Therefore <span class="math-container">$T^2-T=0$</span>, hence <span class="math-container">$T^2=T$</span>.</p>
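<p>A concrete 2-by-2 illustration of the first implication (added; one idempotent matrix, checked numerically rather than proved):</p>

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

T = [[1, 1], [0, 0]]                 # idempotent: T.T == T
assert matmul(T, T) == T

TmI = [[T[i][j] - (i == j) for j in range(2)] for i in range(2)]   # T - I
for v in ([1, 0], [0, 1], [3, -2], [7, 11]):
    w = matvec(TmI, v)               # w lies in the image of T - I ...
    assert matvec(T, w) == [0, 0]    # ... and is killed by T
```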
|
609,744 |
<p>I am not sure about the last step of my proof:</p>
<p>$(L^{p}(X,A,\mu), \|\cdot \|)$ is a normed $L^{p}$ space of p-integrable functions. $L^{p}(a,b)$ is the space of p-integrable functions on (a,b). $C[a,b]$ is the space of continuous functions on [a,b].</p>
<p>Main goal in order to prove the claim is that the integral of $|f|^{p}$ is finite for any $f\in C[a,b]$.
Let us take a function $f\in C[a,b]$. Now we see that the interval $[a,b]$ is compact (assuming both $a$ and $b$ are finite real numbers). Therefore, $f$ attains a maximum. Similarly, the function $|f|^{p}$ is also continuous on $[a,b]$ and hence attains a maximum on it. Now the integral of the function $|f|^{p}$ is dominated by $\max_{[a,b]}|f|^{p}$ times $(b-a)$ (the Lebesgue measure of the interval), and is therefore finite. This inequality shows that $C[a,b]$ is a subspace of $L_{p}$ for any $1\le p\le \infty$.
And it is linear because $\forall f,g \in C[a,b]$ and $a,u$ $\in \mathbb{R}$ $af+ug \in C[a,b]$.
It remains to be checked whether the linear subspace is indeed closed.</p>
<p>My friend told me about this idea, but that still didn't help me:
"Since all the step functions are dense in $L_p(a,b)$ and we know that any step function can be approximated by continuous functions on [a,b]."</p>
<p>Any help would be greatly appreciated!</p>
<p>Thanks</p>
|
Stefan4024
| 67,746 |
<p>You can use this:</p>
<p>$$x^y > y^x \iff \sqrt[x]{x} > \sqrt[y]{y}$$</p>
<p>which is true because the function $f(x) = \sqrt[x]{x}$ is strictly decreasing on $[e,\infty)$, so the comparison is immediate when both $x$ and $y$ lie in that interval.</p>
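<p>A quick numerical illustration (added) of the equivalence and of the monotonicity of $\sqrt[x]{x}$ on $[e,\infty)$:</p>

```python
import math

def f(x):
    return x ** (1.0 / x)            # f(x) = x-th root of x

# For e <= x < y we have f(x) > f(y), and correspondingly x^y > y^x.
for x, y in ((3, 4), (4, 5), (3, 10), (2.8, 97.3)):
    assert f(x) > f(y)
    assert x ** y > y ** x

assert f(math.e) > f(3) > f(4)       # a glimpse of the monotonicity
```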
|
2,548,254 |
<p>Consider this problem:</p>
<p><em>Prove $x$ and $y$ in a complex vector space is orthogonal if and only if $||\alpha x + \beta y|| ^2 = ||\alpha x||^2 + ||\beta y||^2$ for all scalars $\alpha$ and $\beta$.</em></p>
<hr>
<p>I have solved a similar problem that asks:</p>
<p><em>Prove $x$ and $y$ in a real vector space is orthogonal if and only if $|| x + y|| ^2 = ||x||^2 + ||y||^2$.</em></p>
<p>I see that since $|| x + y|| ^2 = ||x||^2 + 2<x,y> + \space||y||^2$, if $x$ and $y$ are orthogonal, then $2<x,y> = 0$ and hence $|| x + y|| ^2 = ||x||^2 + ||y||^2$.</p>
<p>I also noticed that if the field is complex, then we could have $2<x,y> = 0$ yet $<x,y> \ne 0$, which would invalidate the second statement.</p>
<hr>
<p>I fail to see how adding a pair of scalars solves the issue. Any help would be appreciated.</p>
|
Jbag1212
| 509,460 |
<p>First we show that if $x$ and $y$ are orthogonal then
$$||\alpha x + \beta y||^2 = ||\alpha x||^2 + || \beta y||^2.$$</p>
<p>Assume $x$ and $y$ are orthogonal. Then by definition $\langle x, y \rangle =0.$ </p>
<p>Therefore, $ \alpha \beta^{*} \langle x, y \rangle = \alpha^{*} \beta \langle y, x \rangle =0.$</p>
<p>So
$$||\alpha x + \beta y||^2 = \langle \alpha x + \beta y, \alpha x + \beta y \rangle = \alpha \langle x, \alpha x + \beta y \rangle + \beta \langle y, \alpha x + \beta y \rangle$$
$$ = \alpha \alpha^{*}\langle x, x \rangle + \alpha \beta^{*}\langle x, y \rangle + \beta \beta^{*}\langle y, y \rangle + \beta \alpha^{*}\langle y, x \rangle$$
$$= \alpha \alpha^{*}\langle x, x \rangle + 0 + \beta \beta^{*}\langle y, y \rangle + 0 = ||\alpha x||^2 + ||\beta y||^2.$$</p>
<p>Showing the other direction is similar.</p>
<p>Edit:</p>
<p>Note that $\langle x, y \rangle$ is just some number, so if $\langle x, y \rangle = 0$ then $\alpha \langle x, y \rangle = 0$ as well.</p>
<p>For finite dimensional complex vector spaces $\langle x, y \rangle$ is defined:
$$\left [\begin{matrix} x_1 & x_2 & ... & x_{n-1} &x_n \end{matrix} \right] \left [ \begin{matrix} y_1^{*} \\
y_2^{*} \\
\vdots \\
y_{n-1}^{*} \\
y_n^{*}
\end{matrix} \right]
= x_1 y_1^{*} + ... + x_n y_n^{*}.$$</p>
|
2,548,254 |
<p>Consider this problem:</p>
<p><em>Prove $x$ and $y$ in a complex vector space is orthogonal if and only if $||\alpha x + \beta y|| ^2 = ||\alpha x||^2 + ||\beta y||^2$ for all scalars $\alpha$ and $\beta$.</em></p>
<hr>
<p>I have solved a similar problem that asks:</p>
<p><em>Prove $x$ and $y$ in a real vector space is orthogonal if and only if $|| x + y|| ^2 = ||x||^2 + ||y||^2$.</em></p>
<p>I see that since $|| x + y|| ^2 = ||x||^2 + 2<x,y> + \space||y||^2$, if $x$ and $y$ are orthogonal, then $2<x,y> = 0$ and hence $|| x + y|| ^2 = ||x||^2 + ||y||^2$.</p>
<p>I also noticed that if the field is complex, then we could have $2<x,y> = 0$ yet $<x,y> \ne 0$, which would invalidate the second statement.</p>
<hr>
<p>I fail to see how adding a pair of scalars solves the issue. Any help would be appreciated.</p>
|
Math Lover
| 348,257 |
<p>First, it is easy to show that if $x$ and $y$ are orthogonal then
$$\|\alpha x + \beta y\|^2 = \|\alpha x\|^2 + \|\beta y\|^2. \tag{1}$$
Now, let's try to prove that if $(1)$ is true <strong>for all</strong> scalars $\alpha$ and $\beta$ then $x$ and $y$ are orthogonal. To this end, we first observe that
$$\|\alpha x + \beta y\|^2 = \|\alpha x\|^2 + \|\beta y\|^2 + \alpha^* \beta x^H y + \alpha \beta^* y^H x. \tag{2}$$
If $(1)$ holds then
$$\alpha^* \beta x^H y =- \alpha \beta^* y^H x. \tag{3}$$
Since $(3)$ holds for all values of $\alpha$ and $\beta$, put $\alpha =1$ and $\beta=1$ to get
$$x^H y =- y^H x. \tag{4}$$
Now put $\alpha=1$ and $\beta=i$ to get
$$x^H y =+ y^H x. \tag{5}$$
Solving $(4)$ and $(5)$ simultaneously results in $x^H y = y^H x = 0$.</p>
<p>Q.E.D.</p>
<hr>
<p>Here, $x^H$ denotes the Hermitian transpose of $x$, and $x^H y = \langle x, y \rangle$.</p>
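<p>A small numerical illustration (added) of why a single pair of scalars does not suffice: a pair with nonzero but purely imaginary inner product passes the $\alpha=\beta=1$ test, while $\beta=i$ exposes it.</p>

```python
def inner(x, y):                       # <x, y> = sum_i x_i * conj(y_i)
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm2(v):
    return inner(v, v).real

def axpy(alpha, x, beta, y):           # alpha*x + beta*y
    return [alpha * a + beta * b for a, b in zip(x, y)]

x = [1 + 0j, 0j]
y = [1j, 1 + 0j]
assert inner(x, y) == -1j              # nonzero, but purely imaginary

# alpha = beta = 1: Pythagoras holds although x and y are NOT orthogonal,
# because only Re<x, y> enters the cross term.
assert abs(norm2(axpy(1, x, 1, y)) - (norm2(x) + norm2(y))) < 1e-12

# alpha = 1, beta = i exposes the non-orthogonality.
lhs = norm2(axpy(1, x, 1j, y))
rhs = norm2(x) + abs(1j) ** 2 * norm2(y)
assert abs(lhs - rhs) > 0.5
```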
|
1,770,508 |
<p>I am confronted with the following definition:</p>
<blockquote>
<p>Let <span class="math-container">$K$</span> be a field and <span class="math-container">$e_1,e_2,\ldots,e_n$</span> the standard basis of the <span class="math-container">$K$</span> vector space <span class="math-container">$K^n$</span>.</p>
<p>For <span class="math-container">$1\leq i\leq n$</span> let <span class="math-container">$V_i=Ke_1+Ke_2+\dots+Ke_n$</span>.</p>
</blockquote>
<p>For given <span class="math-container">$0<m<n$</span> let
<span class="math-container">$$P=\{~g\in GL_n(k)~|~g(V_m)=V_m~\}$$</span></p>
<hr />
<p>The part of the definition I don't understand is highlighted, the rest might use as clarification.</p>
<p>So what does <span class="math-container">$Ke_1$</span> mean? (Side question: how does picking a different <span class="math-container">$i$</span> make a difference?)</p>
|
user5713492
| 316,404 |
<p>My solution is another way of stating @Jack D'Aurizio's solution. We want to rotate coordinates so that the plane $x+4z=a$ is horizontal. This can be achieved through
$$\begin{array}{rl}x^{\prime}&=\frac4{\sqrt{17}}x-\frac1{\sqrt{17}}z\\
z^{\prime}&=\frac1{\sqrt{17}}x+\frac4{\sqrt{17}}z\end{array}$$
The inverse transformation is
$$\begin{array}{rl}x&=\frac4{\sqrt{17}}x^{\prime}+\frac1{\sqrt{17}}z^{\prime}\\
z&=-\frac1{\sqrt{17}}x^{\prime}+\frac4{\sqrt{17}}z^{\prime}\end{array}$$
In terms of these coordinates, the plane is $z^{\prime}=\frac a{\sqrt{17}}$. Then the equation of the boundary reads
$$\frac{(-x^{\prime}+4z^{\prime})^2}{17}=\frac{(4x^{\prime}+z^{\prime})^2}{4\cdot17}+y^2$$
Or
$$(x^{\prime})^2-8x^{\prime}z^{\prime}+16(z^{\prime})^2=\frac14\left(16(x^{\prime})^2+8x^{\prime}z^{\prime}+(z^{\prime})^2\right)+17y^2$$
$$0=3(x^{\prime})^2-6x^{\prime}z^{\prime}-\frac{63}4(z^{\prime})^2+17y^2=3(x^{\prime}-z^{\prime})^2-\frac{75}4(z^{\prime})^2+17y^2$$
<strong>EDIT:</strong> Whoops, mistake on the above line. Should be
$$0=3(x^{\prime})^2+10x^{\prime}z^{\prime}-\frac{63}4(z^{\prime})^2+17y^2=3\left(x^{\prime}+\frac53z^{\prime}\right)^2-\frac{289}{12}(z^{\prime})^2+17y^2$$
Now we can write this as the equation of an ellipse
$$\frac{\left(x^{\prime}+\frac53z^{\prime}\right)^2}{\frac{289}{36}(z^{\prime})^2}+\frac{y^2}{\frac{17}{12}(z^{\prime})^2}=\frac{\left(x^{\prime}+\frac53z^{\prime}\right)^2}{\bar a^2}+\frac{y^2}{\bar b^2}=1$$
This is the equation of an ellipse with semi-major axis $\bar a=\frac{17}6z^{\prime}$, semi-minor axis $\bar b=\frac12\sqrt{\frac{17}3}z^{\prime}$ parallel to the $x^{\prime}y^{\prime}$-plane and centered at $\langle x_c^{\prime},y_c,z_c^{\prime}\rangle=\langle-\frac53z^{\prime},0,z^{\prime}\rangle$, so we know its area is $A(z^{\prime})=\pi\bar a\bar b=\pi\frac{17}{12}\sqrt{\frac{17}3}(z^{\prime})^2$, so we can add up the areas of the ellipses to get the volume
$$V=\int_0^{\frac a{\sqrt{17}}}A(z^{\prime})dz^{\prime}=\int_0^{\frac a{\sqrt{17}}}\pi\frac{17}{12}\sqrt{\frac{17}3}(z^{\prime})^2dz^{\prime}=\left.\pi\frac{17}{12}\sqrt{\frac{17}3}\frac{(z^{\prime})^3}3\right|_0^{\frac a{\sqrt{17}}}=\frac{\pi a^3}{36\sqrt3}$$
So with the edit, my solution is now consistent with the others.</p>
|
2,590,537 |
<p>Let $A$ be a proper subset of of $X$, $B$ be a proper subset of $Y$. If $X$ and $Y$ are connected, show that $(X\times Y)-(A\times B)$ is connected.</p>
<p>I have already seen the solution of this problem here on math.stackexchange, but my confusion is this.
Suppose $X$=(a,b) & $Y$=(c,d) and let $A$={m} & $B$={n} be singleton sets such that $A\subset X$ & $B\subset Y$ then $(X\times Y)-(A\times B)$= $[(a,m)\cup(m,b)]\times[(c,n)\cup(n,d)]$ = $[(a,m)\times(c,n)]\cup[(m,b)\times(n,d)]$
which implies that $(X\times Y)-(A\times B)$ is not connected. </p>
<p>Can someone explain me what I have done wrong & what is this question asking. </p>
|
Michael Hardy
| 11,667 |
<p>Here's your list of $k$-vectors:
$$
\underbrace{ \begin{bmatrix} \vdots \\ \vdots \\ \vdots \end{bmatrix}, \ldots\ldots\ldots, \begin{bmatrix} \vdots \\ \vdots \\ \vdots \end{bmatrix} }_{\large \text{This list has $2^k$ items.}}
$$
Here's your list of $(k+1)$-vectors:
$$
\underbrace{ \begin{bmatrix} \vdots \\ \vdots \\ \vdots \\ 0 \end{bmatrix}, \ldots\ldots\ldots, \begin{bmatrix} \vdots \\ \vdots \\ \vdots \\ 0 \end{bmatrix} }_{\large \text{This list has $2^k$ items.}},\, \underbrace{ \begin{bmatrix} \vdots \\ \vdots \\ \vdots \\ 1 \end{bmatrix}, \ldots\ldots\ldots, \begin{bmatrix} \vdots \\ \vdots \\ \vdots \\ 1 \end{bmatrix} }_{\large \text{This list has $2^k$ items.}}
$$
You have two copies of the same list, except that $0$ has been appended at the bottom of every item in the first copy and $1$ has been appended at the bottom of every item in the second.</p>
<p>Thus the number of items is $2^k\times 2 = 2^{k+1}.$</p>
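<p>The doubling can be checked directly (an added illustration):</p>

```python
from itertools import product

for k in range(1, 8):
    vk = list(product((0, 1), repeat=k))
    vk1 = list(product((0, 1), repeat=k + 1))
    assert len(vk) == 2 ** k and len(vk1) == 2 ** (k + 1)
    # each (k+1)-vector is a k-vector with a 0 or a 1 appended
    assert sorted(vk1) == sorted(v + (b,) for v in vk for b in (0, 1))
```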
|
2,070,574 |
<p>I had a question </p>
<p>$$\sum_{a=1}^n \frac{1}{a^2}=?$$</p>
<p>I had learned Newton's method of undetermined coefficients,
but that doesn't work here because of the negative power</p>
<p>then I saw another question </p>
<p>$$\sum_{a=1}^n \frac{1}{a(a+1)}$$</p>
<p>here we transformed it as</p>
<p>$$\sum_{a=1}^n (\frac{1}{a}-\frac{1}{a+1})$$</p>
<p>which on expanding gave </p>
<p>$$\frac{1}{1}-\frac{1}{2}+\frac{1}{2}-\frac{1}{3}+......+\frac{1}{n}-\frac{1}{n+1}$$</p>
<p>where each element cancels out except</p>
<p>$$1-\frac{1}{n+1}$$</p>
<p>which gives the formula</p>
<p>$$\sum_{a=1}^n \frac{1}{n(n+1)}=1-\frac{1}{n+1}=\frac{n}{n+1}$$</p>
<p>but I can't express $a^{-2}$ as two diffrent factors so I thought to use limits(calculus) as </p>
<p>$$\frac{1}{a^2}=\lim_{h\to 0}\frac{1}{a(a+h)}$$</p>
<p>but when I tried to break it then I got $h$ in denominator.</p>
<p>I may be wrong, so please help to calculate the summation.</p>
|
Simply Beautiful Art
| 272,831 |
<p>There isn't such a beautiful solution as the one you've presented, since it is the case that as $n\to\infty$, you get the Basel problem.</p>
<p>Notice that</p>
<p>$$\frac1{a(a+h)}=\frac1h\left(\frac1a-\frac1{a+h}\right)$$</p>
<p>but since $h$ is not a whole number, we will not be seeing any cancellations like with your telescoping series.</p>
<hr>
<p>But, if you could recall the geometric series:</p>
<p>$$\frac{1-r^{n+1}}{1-r}=\sum_{k=0}^nr^k$$</p>
<p>Integrate both sides and divide by $x:$</p>
<p>$$\frac1x\int_0^x\frac{1-r^{n+1}}{1-r}\ dr=\frac1x\int_0^x\sum_{k=0}^nr^k\ dr=\sum_{k=0}^n\frac1{k+1}x^k$$</p>
<p>Repeating the process,</p>
<p>$$\int_0^y\frac1x\int_0^x\frac{1-r^{n+1}}{1-r}\ dr\ dx=\int_0^y\sum_{k=0}^n\frac1{k+1}x^k\ dx=\sum_{k=0}^n\frac1{(k+1)^2}y^{k+1}$$</p>
<p>And as $y\to1$,</p>
<p>$$\sum_{k=1}^nk^{-2}=\int_0^1\frac1x\int_0^x\frac{1-r^n}{1-r}\ dr\ dx$$</p>
<p>I believe I've read that this is how Euler approximated the Basel problem.</p>
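<p>Since the integrand is a polynomial for each fixed $n$, the identity can be verified exactly (an added illustration, integrating term by term in rational arithmetic):</p>

```python
from fractions import Fraction

def double_integral(n):
    """int_0^1 (1/x) int_0^x (1 - r^n)/(1 - r) dr dx, evaluated exactly."""
    # (1 - r^n)/(1 - r) = 1 + r + ... + r^(n-1)
    poly = [Fraction(1)] * n
    # int_0^x gives x^(j+1)/(j+1); dividing by x shifts back to x^j
    inner = [c / (j + 1) for j, c in enumerate(poly)]
    # int_0^1 x^j dx = 1/(j+1)
    return sum(c / (j + 1) for j, c in enumerate(inner))

for n in range(1, 25):
    assert double_integral(n) == sum(Fraction(1, k * k) for k in range(1, n + 1))
```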
|
3,072,274 |
<p><a href="https://i.stack.imgur.com/WaOF8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WaOF8.png" alt="enter image description here"></a></p>
<blockquote>
<p>Let <span class="math-container">$n\in\mathbb N$</span> be fixed. For <span class="math-container">$0\le k \le n$</span>, let <span class="math-container">$C_k=\binom nk$</span>. Evaluate: <span class="math-container">$$\sum_{\substack{0\le k\le n\\ k\text{ even}}} \frac{C_k}{k+1}.$$</span></p>
</blockquote>
<p>My attempt : <span class="math-container">$(x+1)^n=\sum_{k=0}^{n}{\binom{n}{k}x^k}\implies \frac{(x+1)^{n+1}}{n+1}+C=\int (x+1)^ndx=\sum_{k=0}^{n}{\binom{n}{k}\frac{1}{k+1}x^{k+1}}.$</span></p>
<p>Putting <span class="math-container">$x=0$</span> we have <span class="math-container">$C=\frac{-1}{n+1}$</span>. </p>
<p>so my answer is <span class="math-container">$$\frac{2^{n+1}-1}{n+1}$$</span></p>
<p>Is this true?</p>
<p>Any hints or solutions will be appreciated.</p>
<p>Thank you.</p>
|
Community
| -1 |
<p>The determinants of similar matrices are the same: <span class="math-container">$\operatorname{det}A=\operatorname{det}PBP^{-1}=\operatorname{det}P\cdot
\operatorname{det}B\cdot\operatorname{det}P^{-1}=\operatorname{det}P\cdot \frac1{\operatorname{det}P}\cdot \operatorname{det}B=\operatorname{det}B$</span>.
This means that <span class="math-container">$H\triangleleft G$</span>. </p>
<p>Alternatively, the surjective homomorphism <span class="math-container">$h:GL_n(\Bbb R)\to\{1,-1\}$</span> given by the sign of the determinant, has <span class="math-container">$H$</span> as kernel. Thus <span class="math-container">$H$</span> has index <span class="math-container">$2$</span>, hence is normal. </p>
|
92,492 |
<p>If $M$ is a combinatorial model category, it's known by the experts that there is a ''natural'' model structure on diagram categories $Hom(C,M)$, which is the <strong>projective model structure</strong>. The fibrations and weak equivalences are defined pointwise. </p>
<p>There is also the one called the <strong>injective model structure</strong>, where the cofibrations and weak equivalences are defined pointwise. </p>
<p>I would like to know if for a given a morphism $\alpha \in Arr(C)$, the evaluation at $\alpha$ can be a right Quillen functor with the <strong>injective model structures on each side</strong>:</p>
<p>$Ev_\alpha: Hom(C,M) \to M^2$ </p>
<p>Thanks ! </p>
<p>Edit: Here $M^2= Hom([0 \to 1], M)= Arr(M)$, sorry for the confusion.</p>
|
Igor Rivin
| 11,142 |
<p>The magic words are: <a href="http://dl.dropbox.com/u/5188175/langweil.pdf" rel="nofollow">Lang-Weil</a></p>
|
92,492 |
<p>If $M$ is a combinatorial model category, it's known by the experts that there is a ''natural'' model structure on diagram categories $Hom(C,M)$, which is the <strong>projective model structure</strong>. The fibrations and weak equivalences are defined pointwise. </p>
<p>There is also the one called the <strong>injective model structure</strong>, where the cofibrations and weak equivalences are defined pointwise. </p>
<p>I would like to know if for a given a morphism $\alpha \in Arr(C)$, the evaluation at $\alpha$ can be a right Quillen functor with the <strong>injective model structures on each side</strong>:</p>
<p>$Ev_\alpha: Hom(C,M) \to M^2$ </p>
<p>Thanks ! </p>
<p>Edit: Here $M^2= Hom([0 \to 1], M)= Arr(M)$, sorry for the confusion.</p>
|
David E Speyer
| 297 |
<p>This is a big field and improvements are being made all the time. The key phrases to search on are "zeta function" and "hypersurface" -- the $\zeta$-function of $F=0$ is $\exp(\sum \frac{N_n}{n} t^n )$ where $N_n$ is the number of solutions to $F=0$ over $\mathbb{F}_{p^n}$. </p>
<p>I started learning the current approaches from Kedlaya's <a href="http://swc.math.arizona.edu/aws/2007/KedlayaNotes11Mar.pdf" rel="nofollow">lecture notes</a> (and consider myself far from fully understanding them). A quick mathscinet search suggests that the state of the art is <a href="http://www.ams.org/mathscinet-getitem?mr=2078663" rel="nofollow">this paper</a> for smooth $F$. I don't know whether these are at your level or way too hard, but they might give you a sense for the sort of thing people are thinking about.</p>
|
4,227,776 |
<p>Let <span class="math-container">$R$</span> be a commutative unital ring, and let <span class="math-container">$p$</span> be a prime number.</p>
<p><strong>Question 1</strong>: If <span class="math-container">$R$</span> is a <em>local</em> ring (with unique maximal ideal <span class="math-container">$m$</span>) of characteristic <span class="math-container">$0$</span>, <span class="math-container">$p \in m$</span>, and <span class="math-container">$R/p$</span> is finite, does it follow that <span class="math-container">$R$</span> is noetherian?</p>
<p><strong>Question 2</strong>: In case the answer to 1 is "no", does it hold if we additionally assume <span class="math-container">$R$</span> to be a domain?</p>
<p>Motivation: This came out of discussions of the question <a href="https://math.stackexchange.com/q/4225280/96384">$\mathcal{O}_{\mathbb{C}_p}/ p \mathcal{O}_{\mathbb{C}_p}$ infinite</a>. I have come to suspect that the answer is no to both, but I have not succeeded in cooking up counterexamples. I'm afraid it's been too long since I studied all the local ring regularity properties, and just hope there is no easy proof with Nakayama or something that I'm missing.</p>
|
Minseon Shin
| 7,719 |
<p>(Sorry this might be incorrect, see below.)</p>
<p>(Set <span class="math-container">$\mathbb{N} := \mathbb{Z}_{\ge 0}$</span>.) Let <span class="math-container">$\mathbb{Z}_{p}[[\{x_{n}\}_{n \in \mathbb{N}}]]$</span> be the <span class="math-container">$\langle \{x_{n}\}_{n \in \mathbb{N}} \rangle$</span>-adic completion of <span class="math-container">$\mathbb{Z}_{p}[\{x_{n}\}_{n \in \mathbb{N}}]$</span>. (As power series, we have the following description: Let <span class="math-container">$S$</span> be the set of functions <span class="math-container">$e : \mathbb{N} \to \mathbb{N}$</span> with finite support (i.e. <span class="math-container">$e(n) = 0$</span> for all but finitely many <span class="math-container">$n$</span>); then an element of <span class="math-container">$\mathbb{Z}_{p}[[\{x_{n}\}_{n \in \mathbb{N}}]]$</span> can be identified with a function <span class="math-container">$F : S \to \mathbb{Z}_{p}$</span> such that for each "degree" <span class="math-container">$d$</span>, there are only finitely many <span class="math-container">$e \in S$</span> such that <span class="math-container">$|e| = d$</span> and <span class="math-container">$F(e) \ne 0$</span>.)</p>
<p>Set <span class="math-container">$R := \mathbb{Z}_{p}[[\{x_{n}\}_{n \in \mathbb{N}}]]/\langle \{x_{n} = px_{n+1}\}_{n \in \mathbb{N}} \rangle$</span>. Then <span class="math-container">$R$</span> is local because it is a quotient of a local ring, and <strong><error></strong><span class="math-container">$R/pR \simeq \mathbb{F}_{p}$</span> (since in every power series whose constant term is <span class="math-container">$0$</span>, we can replace <span class="math-container">$x_{n} \mapsto px_{n+1}$</span> then divide by <span class="math-container">$p$</span>)<strong></error></strong> <em>(we can't do infinitely many replacements, for example in <span class="math-container">$x_{1} + x_{2}^{2} + x_{3}^{3} + \dotsb$</span>)</em>. We have an injective ring map <span class="math-container">$\varphi : R \to \mathbb{Q}_{p}[[x]]$</span> sending <span class="math-container">$x_{n} \mapsto \frac{1}{p^{n}}x$</span> (in terms of the above description, we have <span class="math-container">$F \mapsto \sum_{d \in \mathbb{N}}(\sum_{|e| = d}F(e))x^{d}$</span>), thus <span class="math-container">$R$</span> is an integral domain. We show that <span class="math-container">$\langle x_{n} \rangle \subsetneq \langle x_{n+1} \rangle$</span> for all <span class="math-container">$n$</span>, so that <span class="math-container">$\langle x_{0} \rangle \subsetneq \langle x_{1} \rangle \subsetneq \langle x_{2} \rangle \subsetneq \dotsb$</span> is a strictly ascending chain of ideals of <span class="math-container">$R$</span>. Note that each <span class="math-container">$x_{n}$</span> is a nonzerodivisor in <span class="math-container">$R$</span> since its image in <span class="math-container">$\mathbb{Q}_{p}[[x]]$</span> is a nonzerodivisor. Suppose <span class="math-container">$x_{n+1} = x_{n} g$</span> for some <span class="math-container">$g \in R$</span>; then <span class="math-container">$x_{n+1} = x_{n+1} pg$</span> so <span class="math-container">$1 = pg$</span>, i.e. 
<span class="math-container">$R$</span> contains <span class="math-container">$\frac{1}{p}$</span>. But this contradicts the existence of a ring map <span class="math-container">$R \to \mathbb{Z}_{p}$</span> (for example, the one induced by the map <span class="math-container">$\mathbb{Z}_{p}[[\{x_{n}\}_{n \in \mathbb{N}}]] \to \mathbb{Z}_{p}$</span> sending <span class="math-container">$x_{n} \mapsto 0$</span> for all <span class="math-container">$n \in \mathbb{N}$</span>).</p>
|
2,959,628 |
<p>I'm struggling to find the density function of <span class="math-container">$Y=X^3$</span> where <span class="math-container">$X \sim \mathcal{N}(0,1)$</span> with density function <span class="math-container">$\phi(\cdot)$</span>. </p>
<p>Since <span class="math-container">$g(X) = X^3$</span> is monotonic I suppose that the "change of variables" formula <span class="math-container">$$f_Y(y) = f_X(g^{-1}(y)) \cdot \biggr\lvert\frac{d}{dy}g^{-1}(y) \biggr\rvert$$</span> is applicable here? With <span class="math-container">$g^{-1}(Y)=Y^{1/3}$</span> and <span class="math-container">$f_X(x) = \phi(x)$</span> this yields <span class="math-container">$$f_Y(y) = \phi(y^{1/3}) \cdot \biggr\lvert\frac{y^{-2/3}}{3}\biggr\rvert = \frac{\phi(y^{1/3})}{3y^{2/3}}$$</span> which is only defined for <span class="math-container">$y>0$</span> and does not integrate to unity so it can't be correct. </p>
<p>What am I doing wrong? </p>
|
lab bhattacharjee
| 33,337 |
<p>Let <span class="math-container">$1/x=h^2,h>0$</span> to find</p>
<p><span class="math-container">$$\lim_{h\to0}\dfrac{\sqrt{1+4h^4}-\sqrt{1+h^4}}{h^3}$$</span></p>
<p><span class="math-container">$$=\lim...\dfrac{1+4h^4-(1+h^4)}{h^3}\cdot\lim...\dfrac1{\sqrt{1+4h^4}+\sqrt{1+h^4}}=?$$</span></p>
|
2,806,164 |
<p>I recently came across a question that asked for the derivative of $e^x$ with respect to $y$. I answered $\frac{d}{dy}e^x$ but the answer was $e^x\frac{dx}{dy}$. How is that the answer? I am confused.</p>
|
Richard Ambler
| 65,729 |
<p>If $x$ is a function of $y$ then the given answer follows by the chain rule:</p>
<p>$$\frac {\text d}{\text dy} \left(e^x\right) = e^x \cdot \frac {\text d}{\text dy}(x)$$</p>
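<p>A quick numerical illustration of this chain rule (my own sketch; the dependence $x(y)=y^2$ is an assumption chosen just for the example):</p>

```python
import math

# Assume, for illustration only, that x depends on y via x(y) = y^2.
# Then d/dy e^{x(y)} = e^{x(y)} * dx/dy = e^{y^2} * 2y.
def x(y):
    return y * y

def f(y):
    return math.exp(x(y))

y0 = 0.7
chain_rule = math.exp(x(y0)) * 2 * y0              # e^x * dx/dy at y0
h = 1e-6
finite_diff = (f(y0 + h) - f(y0 - h)) / (2 * h)    # central difference
print(chain_rule, finite_diff)  # the two should agree closely
```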
|
531,080 |
<blockquote>
<p>Let $P$ be a plane in $\mathbb{R}^3$ parallel to the $xy$-plane. Let $\Omega$ be a closed, bounded set in the $xy$-plane with $2$-volume $B$. Pick a point $Q$ in $P$ and make a pyramid by joining each point in $\Omega$ to $Q$ with a straight line segment. Find the $3$-volume of this pyramid.</p>
</blockquote>
<p>I know that the volume with be dependent on $B$ and the distance from $\Omega$ to $P$, and the solution probably involves multiple integration, but beyond that I don't know where to start.</p>
<p>Any help would be appreciated.</p>
<p>N.b.: Although tagged as "homework," this problem is no longer "live," i.e. the homework has already been turned in.</p>
|
Daniel Robert-Nicoud
| 60,713 |
<p>Let's answer to a bit more ambitious question: <em>Let $\Omega$ be an open, bounded and connected subset of an $(n-1)$-dimensional hyperplane $P$ of $\mathbb{R}^n$, and let $x_0\in\mathbb{R}^n$ be a point not contained in $P$. What is the volume of the pyramid with basis $\Omega$ and vertex $x_0$?</em></p>
<p>By a rotation, we may assume $P=\{x\in\mathbb{R}^n|x_n=0\}$. By a reflection, we may also assume $(x_0)_n>0$. Our pyramid is given by
$$\Pi = \{tx_0+(1-t)x|t\in[0,1],\ x\in\Omega\}$$
and its volume is
$$V_\Pi=\int_\Pi dV$$
where $dV=dx_1\ldots dx_n$. We make the following change of variables:
$$\begin{align}
x_n = & t(x_0)_n\\
x_i = & (1-t)y_i + t(x_0)_i
\end{align}$$
where $i=1,\ldots,n-1$. With this we have $dV = (x_0)_n(1-t)^{n-1}dtdy_1\ldots dy_{n-1}$ and after the change of variables our integral becomes
$$\begin{align}
V_\Pi = & \int_\Omega dy_1\ldots dy_{n-1}\int_0^1 dt\ \bigg((x_0)_n(1-t)^{n-1}\bigg)\\
= & \frac{1}{n}(x_0)_nV_\Omega
\end{align}$$
where $V_\Omega$ is the $(n-1)$-dimensional volume of $\Omega$.</p>
<p>Notice how this reduces to the following known formulas</p>
<ul>
<li>$n=2$: $\mathrm{area\ triangle} = \frac{1}{2}\mathrm{basis}\times\mathrm{height}$</li>
<li>$n=3$: $\mathrm{volume\ pyramid} = \frac{1}{3}\mathrm{basis\ area}\times\mathrm{height}$</li>
</ul>
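<p>A Monte Carlo spot-check of the formula (my own sketch): take $n=3$, base $\Omega$ the unit square in the plane $z=0$, and apex $x_0=(0,0,1)$, so the predicted volume is $\frac13\cdot1\cdot1=\frac13$.</p>

```python
import random

# Estimate the volume of the pyramid with base [0,1]^2 in z = 0 and apex (0,0,1).
# A point (x, y, z) lies inside iff the slice at height z, scaled back to the
# base, contains (x, y); here that means x <= 1 - z and y <= 1 - z.
random.seed(0)
N = 200_000
hits = 0
for _ in range(N):
    x, y, z = random.random(), random.random(), random.random()
    if x <= 1 - z and y <= 1 - z:
        hits += 1
estimate = hits / N
print(estimate)  # should be close to 1/3
```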
|
4,480,276 |
<p>[This question is rather a very easy one which I found to be a little bit tough for me to grasp. If there is any other question that has been asked earlier which addresses the same topic then kindly link this here as I am unable to find such questions by far.]</p>
<p>Let <span class="math-container">$f(x)=x.$</span> Suppose we want to find its limit at <span class="math-container">$a$</span> by the definition: <span class="math-container">$|f(x)-a|<\epsilon \implies |x-a|<\epsilon$</span>, and again <span class="math-container">$|x-a|<\delta$</span>. How does this show that <span class="math-container">$\epsilon=\delta$</span>? Can't <span class="math-container">$\delta$</span> be larger or smaller than <span class="math-container">$\epsilon$</span>?</p>
<p>N.B.: I am self-learning limits from Calculus Early Transcendentals by James Stewart. I have reached the lesson "Precise Definition of Limits", which involves such a problem but doesn't explain much about it.</p>
|
CyclotomicField
| 464,974 |
<p>I think the <span class="math-container">$\epsilon$</span>-<span class="math-container">$\delta$</span> definition is best understood as trying to find bounds on the input that ensure the output is correct to within some tolerance. So <span class="math-container">$\epsilon$</span> is the acceptable error in the output, and we need to find some <span class="math-container">$\delta$</span> on the input that keeps us within that tolerance. In this case <span class="math-container">$f(x)=x$</span>, so if the acceptable tolerance is <span class="math-container">$\epsilon$</span>, we can set <span class="math-container">$\delta = \epsilon$</span>, which is the largest possible choice for <span class="math-container">$\delta$</span>; but we could have chosen any <span class="math-container">$\delta$</span> smaller than that too. Say I set <span class="math-container">$\delta = \epsilon /2$</span>: this also stays within the acceptable tolerance, since the output error would be less than <span class="math-container">$\epsilon$</span>. You don't have to find the biggest possible <span class="math-container">$\delta$</span>, and there will be times when such a bound is impractical; it just has to be small enough to get the job done. It's often useful to think of <span class="math-container">$\delta$</span> as a function of <span class="math-container">$\epsilon$</span>, but you have some freedom in which function you choose.</p>
|
2,253,676 |
<p>$f(x)$ continuous in $[0,\infty)$. Both $\int^\infty_0f^4(x)dx, \int^\infty_0|f(x)|dx$ converge. </p>
<p>I need to prove that $\int^\infty_0f^2(x)dx$ converges.</p>
<p>Knowing that $\int^\infty_0|f(x)|dx$ converges, I can tell that $\int^\infty_0f(x)dx$ converges.</p>
<p>Other than that, I don't know anything. I tried to maybe integrate by parts but I got to nowhere</p>
<p>$\int^\infty_0f^4(x)dx = \int^\infty_0f^2(x)|f(x)|\cdot |f(x)| dx$.. </p>
|
Jack D'Aurizio
| 44,121 |
<p>Improved version of Lord Shark's answer: for any $K>0$ we have, by AM-GM,
$$ 3 f(x)^2 \leq K^2 f(x)^4 + \frac{1}{K}|f(x)|+ \frac{1}{K}|f(x)| \tag{1}$$
from which it follows that
$$ \int_{0}^{+\infty}f(x)^2\,dx \leq \frac{K^2}{3}\int_{0}^{+\infty} f(x)^4\,dx + \frac{2}{3K}\int_{0}^{+\infty}|f(x)|\,dx \tag{2} $$
and the RHS of $(2)$ is minimized by choosing $K=\left(\frac{\int_{0}^{+\infty}|f(x)| dx}{\int_{0}^{+\infty}f(x)^4 dx}\right)^{1/3}$, leading to the following form of Holder's inequality:
$$ \int_{0}^{+\infty}f(x)^2\,dx \leq \left(\int_{0}^{+\infty}f(x)^4\,dx\right)^{1/3}\left(\int_{0}^{+\infty}|f(x)|\,dx\right)^{2/3}.\tag{3}$$
The same path can be followed for proving the more general <a href="https://en.wikipedia.org/wiki/Riesz%E2%80%93Thorin_theorem" rel="nofollow noreferrer">Riesz-Thorin theorem</a>, essentially equivalent to the log-convexity of the map $p\mapsto \|f\|_p=\left(\int_{0}^{+\infty}|f(x)|^p\,dx\right)^{1/p}$.</p>
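<p>Inequality $(3)$ can be spot-checked numerically (my own sketch) with the sample function $f(x)=e^{-x}$, for which $\int_0^\infty f^2=\frac12$, $\int_0^\infty f^4=\frac14$ and $\int_0^\infty|f|=1$ in closed form:</p>

```python
import math

def simpson(g, a, b, m=2000):
    # composite Simpson's rule with m (even) subintervals
    h = (b - a) / m
    s = g(a) + g(b)
    for j in range(1, m):
        s += (4 if j % 2 else 2) * g(a + j * h)
    return s * h / 3

f = lambda x: math.exp(-x)
# truncate [0, ∞) at 40; the neglected tail is below e^{-40}
I1 = simpson(lambda x: abs(f(x)), 0.0, 40.0)   # ≈ 1
I2 = simpson(lambda x: f(x) ** 2, 0.0, 40.0)   # ≈ 1/2
I4 = simpson(lambda x: f(x) ** 4, 0.0, 40.0)   # ≈ 1/4
print(I1, I2, I4)
assert I2 <= I4 ** (1 / 3) * I1 ** (2 / 3)     # the Hölder-type bound holds
```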
|
3,447,872 |
<p>Subtracting both equations
<span class="math-container">$$x(a-b)+b-a=0$$</span>
<span class="math-container">$$(x-1)(a-b)=0$$</span>
Since <span class="math-container">$a\not = b$</span>
<span class="math-container">$$x=1$$</span>
Substitution of x gives
<span class="math-container">$$1+a+b=0$$</span> which is contradictory to the question. What did I do wrong?</p>
<p><em>There is a solution for this question that proves the required condition satisfactorily, but I want to the know reason behind this contradiction.</em></p>
|
Alexey Burdin
| 233,398 |
<p>Let <span class="math-container">$c$</span> be the difference between roots of <span class="math-container">$x^2+ax+b$</span> and <span class="math-container">$x^2+bx+a$</span><br>
(in other words, let <span class="math-container">$x_{1,2}^2+ax_{1,2}+b=0$</span>,<br>
<span class="math-container">$x_{3,4}^2+bx_{3,4}+a=0$</span>,<br>
<span class="math-container">$x_1\le x_2$</span>,<br>
<span class="math-container">$x_3\le x_4$</span> and<br>
<span class="math-container">$x_1-x_3=x_2-x_4=c$</span>), so<br>
<span class="math-container">$x^2+ax+b\equiv(x-c)^2+b(x-c)+a$</span><br>
<span class="math-container">$\begin{cases}a=-2c+b\\b=c^2-bc+a\end{cases}$</span><br>
<span class="math-container">$b=c^2-bc-2c+b$</span><br>
<span class="math-container">$(c-b-2)c=0$</span>,<br>
<span class="math-container">$b=c-2$</span> or <span class="math-container">$c=0$</span>,<br>
<span class="math-container">$c\ne0$</span> as <span class="math-container">$x^2+ax+b$</span> and <span class="math-container">$x^2+bx+a$</span> are not the same, so <span class="math-container">$b=c-2$</span><br>
<span class="math-container">$a=-2c+b=-c-2$</span><br>
so <span class="math-container">$a+b=-4$</span>, QED.</p>
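<p>A quick numerical check of this derivation (my own sketch): take $c=3$, so $b=c-2=1$ and $a=-c-2=-5$; then $a+b=-4$, and the roots of the two quadratics should differ by exactly $c$.</p>

```python
import math

c = 3
b = c - 2          # b = 1
a = -c - 2         # a = -5
assert a + b == -4

def roots(p, q):
    # real roots of x^2 + p x + q, in increasing order (positive discriminant assumed)
    d = math.sqrt(p * p - 4 * q)
    return ((-p - d) / 2, (-p + d) / 2)

r1 = roots(a, b)   # roots of x^2 + ax + b
r2 = roots(b, a)   # roots of x^2 + bx + a
print(r1, r2)
# corresponding roots differ by exactly c
assert abs((r1[0] - r2[0]) - c) < 1e-9
assert abs((r1[1] - r2[1]) - c) < 1e-9
```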
|
3,301,387 |
<p>I don't understand what it is about complex numbers that allows this to be true.</p>
<p>I mean, if I root both sides I end up with Z=Z+1. Why is this a bad first step when dealing with complex numbers?</p>
|
José Carlos Santos
| 446,262 |
<p>Because it is not true (even in <span class="math-container">$\mathbb R$</span>) that <span class="math-container">$a^6=b^6\implies a=b$</span>, which is what you are assuming.</p>
<p>In <span class="math-container">$\mathbb C$</span>,<span class="math-container">$$a^6=b^6\iff a=b\times\left(\cos\theta+i\sin\theta\right),$$</span>for some <span class="math-container">$\theta\in\left\{0,\frac\pi3,\frac{2\pi}3,\pi,\frac{4\pi}3,\frac{5\pi}3\right\}$</span>.</p>
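<p>For concreteness (my own sketch, assuming the equation in question is $z^6=(z+1)^6$): the relation above with $a=z$, $b=z+1$ forces $z=(z+1)\omega$ for a sixth root of unity $\omega\neq1$, i.e. $z=\omega/(1-\omega)$, and one can check numerically that all five such $z$ solve the equation and lie on the line $\operatorname{Re}z=-\frac12$:</p>

```python
import cmath

# z^6 = (z+1)^6  forces  z = (z+1)·ω  for some sixth root of unity ω ≠ 1,
# hence z = ω / (1 - ω).  Verify all five resulting solutions.
solutions = []
for k in range(1, 6):
    w = cmath.exp(2j * cmath.pi * k / 6)
    z = w / (1 - w)
    assert abs(z ** 6 - (z + 1) ** 6) < 1e-9
    assert abs(z.real + 0.5) < 1e-9      # every solution has Re z = -1/2
    solutions.append(z)
print(len(solutions), "solutions verified")
```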
|
286,082 |
<p><a href="https://en.wikipedia.org/wiki/Budan's_theorem" rel="nofollow noreferrer">Budan's theorem</a> gives an upper bound for the number of real roots of a real polynomial in a given interval $(a,b)$. This bound is not sharp (see the example in Wikipedia).</p>
<p>My question is the following: let us suppose that Budan's theorem tells us "there are $0$ or $2$ roots in the interval $(a,b)$" (or more generally "there are $0$, $2$, ... $2n$ roots"). Let us suppose also that there are actually no roots in the interval. Is it true that there are $2$ roots whose real part lies in the interval $(a,b)$?</p>
<p>Thank you!</p>
|
Alexandre Eremenko
| 25,510 |
<p>If this were so, the real parts of non-real roots would form a subset of
the real critical points, which is absurd. So almost any random example is a counterexample.
Take
$$f(x)=(1+x^2)(1-x+x^2);$$
there are no real roots, and the real parts of the complex roots are known exactly: they are $0$ and $1/2$.
Now $$f'(x)=4x^3-3x^2+4x-1,$$
and this has a root at approximately $c=0.28839$ by Maple, but you can locate it
with less accuracy by hand. Then $f''$ has no roots and $f'''$ has
a single root at $1/4$. These roots $c$ and $1/4$ are the only Fourier critical points (the points where
the Budan estimate changes). So on the interval $(1/8,3/8)$ Budan says "at most four roots", while in fact there is no root with real part in this interval.</p>
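<p>The numbers in this counterexample are easy to confirm with a short script (my own sketch): $f'(x)=4x^3-3x^2+4x-1$ has its only real root near $0.28839$, and the roots of $1-x+x^2$ are $(1\pm i\sqrt3)/2$, with real part $\frac12$.</p>

```python
import cmath

# f(x) = (1+x^2)(1-x+x^2) = x^4 - x^3 + 2x^2 - x + 1, so
def fprime(x):
    return 4 * x ** 3 - 3 * x ** 2 + 4 * x - 1

# bisection for the real root of f' in (0, 1): f'(0) = -1 < 0 < 4 = f'(1)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if fprime(lo) * fprime(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = (lo + hi) / 2
print(c)  # close to the quoted 0.28839

# roots of x^2 - x + 1 via the quadratic formula
r = (1 + cmath.sqrt(1 - 4)) / 2
assert abs(r.real - 0.5) < 1e-12   # real part is exactly 1/2
```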
|
1,782,800 |
<p>I am studying for an exam and found this in the solutions:</p>
<blockquote>
<p>"... with canonical parameters $\{\log\lambda_i\}^n_1$". </p>
</blockquote>
<p>Index $I=1,2,\ldots,n$. The professor that wrote the exam has retired. Does anyone know what the $1$ and $n$ might mean?</p>
|
Laars Helenius
| 112,790 |
<p>It is an indexed set. For example:
$$
\{x_i\}_{i=1}^n=\{x_1,x_2,\ldots,x_n\}.
$$</p>
<p>Does this help?</p>
|
1,377,927 |
<p>Prove using mathematical induction that
$(x^{2n} - y^{2n})$ is divisible by $(x+y)$.</p>
<p><strong>Step 1:</strong> Proving that the equation is true for $n=1 $</p>
<p>$(x^{2\cdot 1} - y^{2\cdot 1})$ is divisible by $(x+y)$ </p>
<p><strong>Step 2:</strong> Taking $n=k$</p>
<p>$(x^{2k} - y^{2k})$ is divisible by $(x+y)$</p>
<p><strong>Step 3:</strong> proving that the above equation is also true for $(k+1)$</p>
<p>$(x^{2k+2} - y^{2k+2})$ is divisible by $(x+y)$.</p>
<p>Can anyone assist me what would be the next step? Thank You in advance!</p>
|
Community
| -1 |
<p>$$x^{2(k+1)}-y^{2(k+1)}=x^2x^{2k}-y^2y^{2k}=x^2x^{2k}\color{green}{-x^2y^{2k}+x^2y^{2k}}-y^2y^{2k}\\
=x^2(x^{2k}-y^{2k})+(x^2-y^2)y^{2k}.$$</p>
<p>The first term is divisible by $x+y$ by the induction hypothesis, and the second as well, by direct factorization.</p>
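<p>A brute-force integer check of the claim (my own sketch, separate from the induction proof): since $x\equiv -y \pmod{x+y}$, we have $x^{2n}\equiv y^{2n}$, and this is easy to confirm directly:</p>

```python
# Check that (x + y) divides x^(2n) - y^(2n) for many integer pairs.
checked = 0
for n in range(1, 6):
    for x in range(1, 10):
        for y in range(1, 10):
            assert (x ** (2 * n) - y ** (2 * n)) % (x + y) == 0
            checked += 1
print(checked, "cases verified")
```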
|
2,233,457 |
<p>I'm working on the following integral:</p>
<p>$\int_0^{\infty}\frac{y^r}{\theta}ry^{r-1}e^{\frac{-y^r}{\theta}} dy$. </p>
<p>I noticed that if I took the derivative:</p>
<p>$\frac{d}{dy}e^{\frac{-y^r}{\theta}} = \frac{-ry^{r-1}}{\theta}e^{\frac{-y^r}{\theta}}$ </p>
<p>I got most of the expression of the integrand. But I'm left with an extra factor of $y^r$:</p>
<p>$\int_0^{\infty}(y^r\frac{d}{dy}e^{\frac{-y^r}{\theta}}) dy$</p>
<p>I'm not sure how to simplify this anymore, so is this how I'm supposed to be going about this, or is there another way to take this integral?</p>
<p>I got this from <em>Wackerly's Mathematical Statistics 7e</em> when trying to find the $E[Y^r]$ from the following distribution:</p>
<p>$f(y|\theta) = \frac{1}{\theta}ry^{r-1}e^{\frac{-y^r}{\theta}}, \theta>0,y>0$</p>
<p>Thank you!</p>
|
Pedro
| 9,921 |
<p>The rank of $f$ at every point is $\leq 1$. Consider the subset of points where the rank is exactly $1$. This is an open subset (since rank is upper semicontinuous or whatever). Certainly it can't be empty or else you'd have a constant map. So there's a non-empty open subset where $f$ has constant rank $1$. Then the constant rank theorem makes it clear that $f$ can't be injective.</p>
|
2,233,457 |
<p>I'm working on the following integral:</p>
<p>$\int_0^{\infty}\frac{y^r}{\theta}ry^{r-1}e^{\frac{-y^r}{\theta}} dy$. </p>
<p>I noticed that if I took the derivative:</p>
<p>$\frac{d}{dy}e^{\frac{-y^r}{\theta}} = \frac{-ry^{r-1}}{\theta}e^{\frac{-y^r}{\theta}}$ </p>
<p>I got most of the expression of the integrand. But I'm left with an extra factor of $y^r$:</p>
<p>$\int_0^{\infty}(y^r\frac{d}{dy}e^{\frac{-y^r}{\theta}}) dy$</p>
<p>I'm not sure how to simplify this anymore, so is this how I'm supposed to be going about this, or is there another way to take this integral?</p>
<p>I got this from <em>Wackerly's Mathematical Statistics 7e</em> when trying to find the $E[Y^r]$ from the following distribution:</p>
<p>$f(y|\theta) = \frac{1}{\theta}ry^{r-1}e^{\frac{-y^r}{\theta}}, \theta>0,y>0$</p>
<p>Thank you!</p>
|
Angina Seng
| 436,618 |
<p>No continuous map from $M$ to $\mathbb{R}$ can be injective.</p>
<p>As $M$ has dimension at least $2$ it contains a subspace $C$ homeomorphic to a circle. Now there is no continuous injective map $g$ from $C$ to $\mathbb{R}$. If there were, then there are $a$ and $b$ on $C$ with $g(a)<g(b)$. Then on each arc of $C$ with endpoints $a$ and $b$ there is a point mapped to $\frac12(g(a)+g(b))$ (intermediate value theorem).</p>
|
2,629,115 |
<blockquote>
<p>We will define the convolution (slightly unconventionally to match Rudin's proof) of $f$ and $g$ as follows:
$$(f\star g)(x)=\int_{-1}^1f(x+t)g(t)\,dt\qquad(0\le x\le1)$$</p>
<ol>
<li>Let $\delta_n(x)$ be defined as $\frac n2$ for $-\frac1n<x<\frac1n$ and 0 for all other $x$. Let $f(x)$ be defined as $x$ for $.4<x<.7$ and let $f(x)=0$ for all other $x$. Find a piecewise algebraic expression for $f\star\delta_{10}$ and graph $f\star\delta_{10}$. Repeat the exercise for $f\star\delta_{20}$. In what sense does $f\star\delta_n$ converge on $[0,1]$ and to what function does it converge?</li>
</ol>
</blockquote>
<p>Hello everyone,
I just need help finding a piecewise algebraic expression for $f* \delta_{ 10}$. I think I should be able to figure out everything else in the question once I know how to do this. </p>
<p>Thoughts/things I know (how to do):
Delta 10 is defined as $5$ for $-1/10<x<1/10$ and $0$ otherwise.
$f(x)=x$ for $0.4<x<0.7$ and $0$ otherwise. I know what both graphs look like. I guess you could say there is a jump in the graph of $f(x)$ at $0.4$ and $0.7$ and a jump in the graph of $\delta_{10}$ at $-1/10$ and $1/10$.</p>
<p>Now my confusions: I believe that perhaps I will have to split up the problem into several cases/integrals, as both $f(x)$ and delta are piecewise. The main problem is that I don't understand what the $t$ represents, so I don't know how to set up my bounds, as my bounds are in terms of $t$ (since we integrate with respect to $dt$). I believe that the integral from the definition is only valid for $0\leq x\leq 1$ inclusive. Also, if I'm doing $f* \delta_{10}$, let's say for $x$ between $0.4<x<0.7$, then wouldn't $f(x+t)=x+t$ and $g(t)=0$? Overall, I think I'm confused, so I could really use some guidance on this problem for the first case $f*\delta_{10}$; then I believe I could figure out the rest.
Thank you!</p>
|
operatorerror
| 210,391 |
<p>By definition,
$$
f*\delta_{10}=\int_{-1}^1f(x+t)\delta_{10}(t)\mathrm dt\\
=5\int_{-.1}^{.1}f(x+t)\mathrm dt
$$
Now we use the argument of $f$ and the definition of $f$ to determine the rest.</p>
<p>The integrand vanishes (by the definition of $f$) unless $.4<x+t<.7$, i.e.
$$
.4-x<t<.7-x
$$
Noting that the size of this interval, $.7-x-.4+x=.3$, is larger than the length of the interval we are integrating over, $.2$. </p>
<p>We shift the window for $t$ around to figure out what the integral is in each case. </p>
<p>If the window misses the entire interval, i.e. if $.7-x\leq -.1$ or if $.4-x\geq .1$., the above integral vanishes. If instead the right hand side of the interval intersects the domain of integration, we have
$$
-.1\leq .7-x\leq .1
$$
then the convolution is the integral
$$
5\int_{-.1}^{.7-x}(x+t)\mathrm dt
$$
and if the left hand side does, $-.1\leq .4-x\leq .1$, then the convolution is
$$
5\int_{.4-x}^{.1}(x+t)\mathrm dt
$$
if the left and right endpoints straddle the entire interval, i.e.
$.4-x\leq -.1$ and $.7-x\geq .1$, then we have
$$
5\int_{-.1}^.1(x+t)\mathrm dt=x
$$
Putting this all together in piecewise function form we have
$$
f*\delta_{10}(x)=
\begin{cases}
0&x\leq .3\\
5\int_{.4-x}^{.1}(x+t)\mathrm dt&.3\leq x\leq .5\\
x&.5\leq x\leq .6\\
5\int_{-.1}^{.7-x}(x+t)\mathrm dt&.6\leq x\leq .8\\
0&x\geq .8
\end{cases}
$$
Where I leave to you to simplify the integrals.</p>
<p>As for the latter part of the question, a hint may be that
$$
\lim_{n\to \infty}\delta_n(x)=\begin{cases}\infty&x=0\\
0&x\ne 0\end{cases}
$$
and $\int_{\mathbb{R}}\delta_n\mathrm dx=1$ for any $n$.</p>
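<p>One way to gain confidence in the case analysis above (my own sketch) is to compare it against a direct numerical evaluation of the defining integral at a few sample points; for instance, the middle case predicts $f\star\delta_{10}(0.55)=0.55$, and the first case predicts $f\star\delta_{10}(0.2)=0$.</p>

```python
# Evaluate (f ⋆ δ10)(x) = 5 ∫_{-0.1}^{0.1} f(x+t) dt numerically,
# with f(u) = u for 0.4 < u < 0.7 and 0 otherwise.
def f(u):
    return u if 0.4 < u < 0.7 else 0.0

def conv(x, m=20_000):
    # midpoint rule on [-0.1, 0.1]
    h = 0.2 / m
    return 5 * h * sum(f(x + (-0.1 + (j + 0.5) * h)) for j in range(m))

print(conv(0.2), conv(0.55), conv(0.45))
assert abs(conv(0.2)) < 1e-9            # window (0.1, 0.3) misses (0.4, 0.7)
assert abs(conv(0.55) - 0.55) < 1e-3    # window lies inside (0.4, 0.7)
# second case at x = 0.45: 5 ∫_{-0.05}^{0.1} (x+t) dt = 0.35625
assert abs(conv(0.45) - 0.35625) < 1e-3
```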
|
2,889,651 |
<p>The given points are $M(3,-1,2)$ and $M1(0,1,2)$, and plane A passes through these two points. Plane B is $2x-y+2z-1=0$ and is normal to A. How do we find the equation of plane A? I have tried the cross product of vector $MM1$ and vector $b=(2,-1,2)$, but it didn't work.</p>
|
mathcounterexamples.net
| 187,663 |
<p><strong>The result is wrong for real vector spaces with dimension $n>2$</strong></p>
<p>The orthogonal transformation</p>
<p>$$Q=
\begin{pmatrix}
\dfrac{\sqrt{2}}{2} & -\dfrac{\sqrt{2}}{2} & 0\\
\dfrac{\sqrt{2}}{2} & \dfrac{\sqrt{2}}{2} & 0\\
0 & 0 &-1
\end{pmatrix}$$</p>
<p>is such that $\det Q= - 1$. However, $Q$ is not the matrix of a reflection ($Q$ has no fixed vector).</p>
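<p>Both claims can be confirmed numerically (my own sketch): $\det Q=-1$, yet $\det(Q-I)\neq0$, so $Qv=v$ has only the trivial solution and $Q$ fixes no nonzero vector.</p>

```python
import math

s = math.sqrt(2) / 2
Q = [[s, -s, 0.0],
     [s,  s, 0.0],
     [0.0, 0.0, -1.0]]

def det3(M):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

assert abs(det3(Q) + 1) < 1e-12    # det Q = -1
QmI = [[Q[r][c] - (1.0 if r == c else 0.0) for c in range(3)] for r in range(3)]
print(det3(QmI))  # nonzero, so Q - I is invertible and Q has no fixed vector
```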
|
1,634 |
<p>I am not too certain what these two properties mean geometrically. It sounds very vaguely to me that finite type corresponds to some sort of "finite dimensionality", while finite corresponds to "ramified cover". Is there any way to make this precise? Or can anyone elaborate on the geometric meaning of it?</p>
|
Andrew Critch
| 84,526 |
<p>I definitely agree with Peter's general intuitive description.</p>
<p>In response to some of the subsequent comments, here are some implications to keep in mind: </p>
<p><strong>Finite</strong> ==> <strong>finite fibres</strong> (1971 EGA I 6.11.1) and <strong>projective</strong> (EGA II 6.1.11), hence <strong>proper</strong> (EGA II 5.5.3), but <em>not conversely</em>, contrary to popular belief ;)</p>
<p><strong>Proper</strong> + <strong>locally finite presentation</strong> + <strong>finite fibres</strong> ==> <strong>finite</strong> (EGA IV (part 3) 8.11.1)</p>
<p>When reading about these, you'll need to know that "quasi-finite" means "finite type with finite fibres." Also be warned that in EGA (II.5.5.2) projective means $X$ is a closed subscheme of a "finite type projective bundle" $\mathbb{P}_Y(\mathcal{E})$, which gives a nice description via relative Proj, whereas "Hartshorne-projective" more restrictively means that $X$ is closed subscheme of "projective n-space" $\mathbb{P}^n_Y$.</p>
<p>When the target (or "base" scheme) is locally Noetherian, like pretty much anything that comes up in "geometry", a proper morphism is automatically of locally finite presentation, so in that case we do have </p>
<p><strong>finite</strong> <==> <strong>proper</strong> + <strong>finite fibres</strong></p>
<p>Regarding "locally finite type", its does <em>not</em> imply finite dimensionality of the fibres; rather, it's about finite dimensionality of small neighborhoods of the source of the map. For example, you can cover a scheme by some super-duper-uncountably-infinite disjoint union of copies of itself that is LFT but not FT, since it has gigantic fibres.</p>
|
758,466 |
<blockquote>
<p>A fair 6-sided die is tossed 8 times. The sequence of 8 results is recorded to form an 8-digit number. For example if the tosses give {3, 5, 4, 2, 1, 1, 6, 5}, the resultant number is $35421165$. What is the probability that the number formed is a multiple of 8.</p>
</blockquote>
<p>I solved this by listing all possibilities for the last 3 digits that give multiples of 8, and found this to be $\frac{1}{8}$. </p>
<p>The solution key agrees with my answer, but also says that "There are quicker ways to solve the problem using a more advanced knowledge of number theory"</p>
<hr>
<p>What would be a faster way to solve this using number theory?</p>
|
Anant
| 85,660 |
<p>Here's a generalization:</p>
<p>The sample space for a six-sided die is $S = \{1,2,3,4,5,6\}$.</p>
<p><strong>Theorem:</strong> There are exactly $\color{blue}{3}^n$ n-tuples $(x_{n-1}, x_{n-2}, \cdots, x_1, x_0)$ that satisfy $f(n) = 10^{n-1}x_{n-1} + 10^{n-2}x_{n-2} + \cdots + 10x_1 + x_0 \equiv 0 \mod{2^n}$ for $x_i \in S$</p>
<p><strong>Proof by induction:</strong></p>
<p>Base case $n=1$: We have $f(1) = x_0 \equiv 0 \mod{2}$ which is clearly satisfied by exactly $\color{blue}{3}$ 1-tuples $(2), (4)$ and $(6)$.</p>
<p>Inductive step: Assume true for $n$. Now for $n+1$ we have $\begin{array}{l}f(n+1) &= 10^nx_n &+ & 10^{n-1}x_{n-1} + \cdots + 10x_1 + x_0 \equiv 0 \mod{2^{n+1}} &\text{(i)}\\&\Rightarrow 0 &+ & 10^{n-1}x_{n-1} + \cdots + 10x_1 + x_0 \equiv 0 \mod{2^{n}}\\&\Rightarrow & & 10^{n-1}x_{n-1} + \cdots + 10x_1 + x_0 \equiv 0 \mod{2^{n}}& \text{(ii)}\end{array}$</p>
<p>But $\text{(ii)}$ has exactly $\color{blue}{3}^n$ n-tuple solutions from our inductive hypothesis. </p>
<p>For each such n-tuple $(x_{n-1}, x_{n-2}, \cdots, x_1, x_0)$ , there are $\color{blue}{3}$ unique $x_n \in S$ that satisfy $\text{(i)}$ per Lemma below. So, the number of (n+1)-tuple solutions $(x_n, x_{n-1}, \cdots, x_1, x_0)$ is $\color{blue}{3} \cdot \color{blue}{3}^n = \color{blue}{3}^{n+1}$ and this completes the proof by induction.</p>
<p><strong>Answer:</strong></p>
<p>The probability of forming an n-digit number $x_{n-1}x_{n-2}\cdots x_1x_0 = f(n)$ divisible by $2^n$ is thus $\frac{\text{count of n-digit numbers } \equiv 0 \mod{2^n}}{\text{total count of n-digit numbers}} = \frac{3^n}{6^n} = \frac{1}{2^n}$. In your case, $n=3$, giving the probability of $\frac{1}{8}$.</p>
<hr>
<p><strong>Lemma:</strong> $10^n x + m \equiv 0 \mod{2^{n+1}} $ has exactly $\color{blue}{3}$ unique solutions $x\in S$ given that $m \equiv 0 \mod{2^n}$.</p>
<p><strong>Proof:</strong> We have $m \equiv 0 \mod{2^n} \implies m = 2^n\ell$ where $\ell \in \mathbb{Z}$.</p>
<p>$\begin{array}{cl} \therefore &10^n x + m &\equiv 0\mod{2^{n+1}}&\\ \iff & 10^n x + 2^n\ell &= 2^{n+1} q & (q \in \mathbb{Z})\\ \iff & 5^nx + \ell &= 2q &(\text{divide by } 2^n)\\\iff & x + \ell &\equiv 0 \mod{2} & (5^n \equiv 1 \mod{2})\end{array}$</p>
<p>And this has exactly $\color{blue}{3}$ unique solutions $x\in S$ since,</p>
<p>$\mod{2}:\begin{cases}\ell \equiv 0 \implies x + 0 \equiv 0 \implies x \equiv 0 &\implies x \in \{2, 4, 6\} \\ \ell \equiv 1 \implies x + 1 \equiv 0 \implies x \equiv 1 &\implies x \in \{1, 3, 5\}\end{cases}$</p>
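<p>A quick brute-force confirmation of the $n=3$ case (an illustrative Python sketch): since an 8-digit number is divisible by $8$ exactly when its last three digits are, it suffices to scan the $6^3 = 216$ possible endings.</p>

```python
from itertools import product

# Count 3-dice endings (d1, d2, d3) whose 3-digit value is a multiple of 8.
hits = [t for t in product(range(1, 7), repeat=3)
        if (100 * t[0] + 10 * t[1] + t[2]) % 8 == 0]

count = len(hits)            # the theorem predicts 3^3 = 27
probability = count / 6**3   # 27/216 = 1/8
```

<p>This matches the theorem's count of $3^3$ favourable endings out of $6^3$.</p>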
|
1,470,244 |
<p>Prove that the sum of the moduli of the roots of $x^4-5x^3+6x^2-5x+1=0$ is $4$.<br></p>
<hr>
<p>I tried various methods to find its roots,but no luck.Is there any method to solve it without actually finding the roots of the equation.Please help me.<br>
Thanks.</p>
|
Math-fun
| 195,344 |
<p>\begin{align}
x^4-5x^3+6x^2-5x+1&=(\color{red}1\times x^4-\color{red}4\times x^3+\color{red}1\times x^2)\\
&-(\color{red}1\times x^3-\color{red}4\times x^2+\color{red}1\times x)\\
&+(\color{red}1\times x^2-\color{red}4\times x+\color{red}1)\\
&=x^2(x^2-4x+1)-x(x^2-4x+1)+(x^2-4x+1)\\
&=(x^2-4x+1)(x^2-x+1)
\end{align}</p>
<p>In general we have
$$1 \pm n x + (1 + n) x^2 \pm n x^3 + x^4=\pm\left(x^2\pm x+1\right) \left(n x\pm x^2-x\pm 1\right)$$
for example we have $$x^4+\sqrt{2} x^3+\left(1+\sqrt{2}\right) x^2+\sqrt{2} x+1=\left(x^2+x+1\right) \left(x^2+\sqrt{2} x-x+1\right)$$
Something similar you may find for $1 \pm n x + (1 - n) x^2 \pm n x^3 + x^4$. You could find similar rules for higher order polynomials, but the rules quickly get complicated :-)</p>
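<p>The factorization above can be checked mechanically; here is an illustrative Python sketch that multiplies coefficient lists (highest degree first), with no symbolic library needed.</p>

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, highest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x^2 - 4x + 1)(x^2 - x + 1) should give x^4 - 5x^3 + 6x^2 - 5x + 1
product_coeffs = poly_mul([1, -4, 1], [1, -1, 1])
```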
|
1,490,828 |
<p>Consider an equilateral triangle in $A \subset \mathbb{R}^2$ with vertices $x_1$, $x_2$, and $x_3$. I would like to show that the smallest circle which contains $A$ has radius $\frac{d}{\sqrt{3}}$, where $d$ is the length of each side. </p>
<p>I've been able to show that a (closed) ball centered at $(x_1 + x_2 + x_3)/3$ with radius $\frac{d}{\sqrt{3}}$ contains A. To complete the proof, I need to show that no smaller ball can contain $A$. My strategy so far has been as follows. Pick a point $x \in \mathbb{R}^2$. If $d(x_1,x) \leq \frac{d}{\sqrt{3}}$ and $d(x_2,x) \leq \frac{d}{\sqrt{3}}$ with at least one equality strict, then $d(x_3,x) > \frac{d}{\sqrt{3}}$ (where $d(\cdot, \cdot)$ is the Euclidean distance function). I have not been able to find a simple way to verify this statement (either directly or by contradiction). Is is possible to show this? Or is another strategy better? </p>
|
achille hui
| 59,379 |
<p>Choose a coordinate system so that the centroid of the equilateral triangle of side $d$ is the origin, i.e.
$$x_1 + x_2 + x_3 = 0$$
We will have $|x_1| = |x_2| = |x_3| = \frac{d}{\sqrt{3}}$.</p>
<p>Given any circle of radius $r$ centered at $x$: if it contains the three points $x_1, x_2, x_3$, then it satisfies:
$$3r^2 \ge |x_1 - x|^2 + |x_2 - x|^2 + |x_3 - x|^2 = |x_1|^2 + |x_2|^2 + |x_3|^2 + 3|x|^2 \ge 3\left(\frac{d}{\sqrt{3}}\right)^2$$
This leads to $r \ge \frac{d}{\sqrt{3}}$.</p>
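<p>A numeric sanity check of this setup (an illustrative sketch, not part of the proof): vertices placed at distance $d/\sqrt{3}$ from the origin, $120^\circ$ apart, do form an equilateral triangle of side $d$ with centroid at the origin.</p>

```python
import math

d = 2.0                                    # side length
R = d / math.sqrt(3)                       # claimed circumradius
angles = [math.pi / 2 + k * 2 * math.pi / 3 for k in range(3)]
verts = [(R * math.cos(t), R * math.sin(t)) for t in angles]

side = math.dist(verts[0], verts[1])       # should equal d
centroid = (sum(v[0] for v in verts) / 3, sum(v[1] for v in verts) / 3)
```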
|
132,769 |
<p>My distribution histogram looks like it is not identically distributed, as in the negative counts have a different shape than the positive counts. Here is an image:</p>
<p><img src="https://i.stack.imgur.com/VtlHd.jpg" alt="enter image description here"></p>
<p>The chart has 800 data points, and the tallest count of 40 is for 0.</p>
<p>I don't know a formal approach, but looking at it, it appears to me it is not identically distributed. My guess is that the negative counts (to the left of 0) are closer to a normal distribution, while the positive counts (to the right of 0) are closer to a Laplace distribution, or have higher kurtosis than the left side. </p>
<p>Is this possible, are there cases where this happens, is there a topic in probability that studies it? Or, is this just my eyes playing tricks on me, or not having enough sample data or some other simple reason? </p>
<p>Thanks in advance for any help.</p>
|
Robert Israel
| 8,508 |
<p>As @cardinal wrote, you probably don't mean "identically distributed": that's something you can't really tell from a histogram. If you want to test whether this is a normal distribution, you might use a D'Agostino-Pearson test, a Shapiro-Wilk test, or a Kolmogorov-Smirnov test. See e.g. <a href="http://www.graphpad.com/faq/viewfaq.cfm?faq=959" rel="nofollow">http://www.graphpad.com/faq/viewfaq.cfm?faq=959</a> and references there.</p>
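<p>Before running a formal test, you can quantify the visual impression with sample skewness and excess kurtosis (both $0$ for a normal distribution; excess kurtosis is $3$ for a Laplace), computed separately on each side of zero. This is only an illustrative dependency-free sketch — the <code>data</code> list is a placeholder for your 800 points; for the formal tests named above, <code>scipy.stats</code> provides <code>shapiro</code> and <code>kstest</code>.</p>

```python
def sample_shape(xs):
    """Return (skewness, excess kurtosis); both are 0 for normal data."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3

data = [-2, -1, 0, 1, 2]                  # placeholder for your 800 values
left_shape = sample_shape([x for x in data if x < 0])
right_shape = sample_shape([x for x in data if x > 0])
```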
|
619,580 |
<p>Please help me to solve the following problem that is in the Lebesgue integral discussion </p>
<blockquote>
<p>Give an example of a sequence $\,\,f_n : [0, 1] \to \Bbb R$ of continuous functions such that $\,\,\|f_n\|_\infty \to \infty$ but $\int_0^1\lvert\, f_n\rvert\,d\lambda \to 0$ as $n\to\infty$.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/IXYJq.jpg" rel="nofollow noreferrer">Original screenshot</a></p>
|
Yiorgos S. Smyrlis
| 57,021 |
<p>Let
$$
f_n(x)=\min\left\{n,n^{-1}x^{-1/2}\right\}.
$$</p>
<p>Then $\|\,f_n\|_\infty=n$ (the two pieces meet at $x=n^{-4}$), while $\|\,f_n\|_1=\dfrac{2}{n}-\dfrac{1}{n^{3}}\to 0$.</p>
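<p>A quick numeric check (an illustrative sketch; note it is the $\min$ of the two expressions — not the $\max$ — that keeps the integral small while the sup norm grows):</p>

```python
def f(n, x):
    # capped at n near 0, equal to x^(-1/2)/n elsewhere
    return min(n, x ** -0.5 / n) if x > 0 else n

def l1_norm(n, steps=200_000):
    # midpoint Riemann sum on [0, 1]; the integrand is bounded by n
    h = 1.0 / steps
    return sum(f(n, (i + 0.5) * h) for i in range(steps)) * h

sup_10 = max(f(10, k / 1000) for k in range(1001))   # ~ ||f_10||_inf = 10
```

<p>The exact value of the integral is $2/n - n^{-3}$, which the Riemann sum reproduces to a few decimal places.</p>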
|
619,580 |
<p>Please help me to solve the following problem that is in the Lebesgue integral discussion </p>
<blockquote>
<p>Give an example of a sequence $\,\,f_n : [0, 1] \to \Bbb R$ of continuous functions such that $\,\,\|f_n\|_\infty \to \infty$ but $\int_0^1\lvert\, f_n\rvert\,d\lambda \to 0$ as $n\to\infty$.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/IXYJq.jpg" rel="nofollow noreferrer">Original screenshot</a></p>
|
ABIM
| 83,860 |
<p>Below I have given a sufficient condition for your problem to have a positive solution in a rather general measure space, after which I give an example in the case of $[0,1]$ with its usual Euclidean topology to help illustrate my construction. </p>
<hr>
<p><strong>A general solution:</strong></p>
<p><strong>Proposition:</strong>
More generally, let $X$ be a locally-compact Hausdorff space and $(X,\mathscr{F},\mu)$ a non-trivial (i.e. $\mu \neq 0$), regular, $\sigma$-finite Borel measure space which is not purely atomic. Then for any real number $r$, there exists a sequence of measurable functions $f_n$ satisfying:
\begin{equation}
\|f_n\|_{L^{\infty}_{\mu}}\rightarrow \infty
\mbox{ and }
f_n\overset{L^{p}_{\mu}}{\rightarrow} r ,
\end{equation}
for every $p\in [1,\infty)$. </p>
<p><em>Proof:</em>
Since the measure space is $\sigma$-finite and non-trivial, there exists some $A\in \mathscr{F}$ with $0<\mu(A)<\infty$. Moreover, since every measure which is not purely atomic is the sum of a purely atomic measure $\mu_a$ and a purely nonatomic (or diffuse) measure $\mu_d$ <em>(see <a href="http://www.ams.org/journals/proc/1970-025-03/S0002-9939-1970-0279266-8/S0002-9939-1970-0279266-8.pdf" rel="nofollow">this article</a>)</em>, we may without loss of generality assume $\mu$ to be purely nonatomic (by only considering its nonatomic part). </p>
<p>Now write $A$ as a disjoint union of two sets $B$ and $C$, both of positive $\mu$-measure. </p>
<p>Since $B$ must also be diffuse, for every sufficiently large $k\in \mathbb{Z}^+$ there exists $B_k\in \mathscr{F}$ such that $B_{k+1}\subseteq B_k$ and $\mu(B_k)=\frac{1}{k}$. </p>
<p>Therefore the sequence of measurable functions:
\begin{equation}
g_k:=kI_{B_k} + \frac{r}{\mu(C)}I_{C},
\end{equation} must have the desired properties (for large enough $k$).</p>
<p>However, the functions $g_k$ need not be continuous; but since $C_c(X,\mathbb{R})$ is dense in $L_{\mu}^p$, for every $k \in \mathbb{Z}^+$ there exists some continuous function $f_k$ with compact support such that $\|f_k-g_k\|_{L^p_{\mu}} < 2^{-k}$.</p>
<p>Therefore the $f_k$ converge to $r$ in $L_{\mu}^p$, while $\|f_k\|_{L^{\infty}_{\mu}}\to\infty$, as desired. </p>
<hr>
<p><strong>A particular case for the Lebesgue measure space</strong></p>
<p><strong>Corollary:</strong> In the case of the Lebesgue measure space on $[0,2]^d$ we may take: \begin{equation}
B_k:=[0,\frac{1}{\sqrt[d]{k}}]^d \mbox{ and } C:=[1,2]^d
\end{equation}</p>
<p>Then
\begin{equation}
f_k(x):= k\prod_{i=1}^d \left(1-\frac{x_i-(\frac{1}{\sqrt[d]{k}}+2^{-k})}{2^{-k}}\right)I_{[0,\frac{1}{\sqrt[d]{k}}]}(x_i)
+
rI_{[1,2]^d}
\end{equation}</p>
<p><strong>Example:</strong> More particularly, for $\mathbb{R}$ we may take: \begin{equation}
B_k:=[0,\tfrac{1}{k}]
\end{equation}</p>
<p>Then $f_k(x):= k(1-\frac{x-(\frac{1}{k}+2^{-k})}{2^{-k}})I_{[0,\frac{1}{k}]}(x)$ solves your problem :). </p>
|
75,656 |
<p>Is there a closed form for any function f(x,y) satisfying:</p>
<p>a) $\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}=xy$</p>
<p>b) $\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}+\frac{\partial f}{\partial z}=xyz$</p>
|
Ilya
| 5,887 |
<p>Forgive me for using Mathematica, but the answers are:
$$
a) \quad f(x,y) = \frac16\left(3x^2y-x^3\right)+F(y-x)
$$
with $F$ - arbitrary continuously differentiable function,
$$
b)\quad f(x,y,z) = \frac1{12}(x^4-2x^3y-2x^3z+6x^2yz)+F(y-x,z-x)
$$
with $F$ - arbitrary continuously differentiable function.</p>
<p>Hope that it helps.</p>
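<p>A finite-difference sanity check of answer (a) — an illustrative sketch; the choice $F(t)=t^2$ is arbitrary. Along the direction $(1,1)$ the term $F(y-x)$ is exactly constant, so the directional derivative there equals $f_x+f_y$ and should reproduce $xy$:</p>

```python
def F(t):                      # any differentiable F of (y - x) works; t**2 is arbitrary
    return t ** 2

def f(x, y):                   # candidate solution to f_x + f_y = x*y
    return (3 * x ** 2 * y - x ** 3) / 6 + F(y - x)

def deriv_along_11(fn, x, y, h=1e-6):
    # central difference along (1, 1), which equals f_x + f_y
    return (fn(x + h, y + h) - fn(x - h, y - h)) / (2 * h)

max_err = max(abs(deriv_along_11(f, x, y) - x * y)
              for x in (-1.3, 0.7, 2.0) for y in (-0.5, 1.1, 3.0))
```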
|
9,540 |
<p>I'm following <a href="http://reference.wolfram.com/mathematica/ref/FinancialData.html">http://reference.wolfram.com/mathematica/ref/FinancialData.html</a></p>
<p>I get the following:</p>
<pre><code>In[6]:= DateListLogPlot[FinancialData["^DJI", All]]
</code></pre>
<blockquote>
<p>During evaluation of In[6]:= DateListLogPlot::ntdt: The first argument to DateListLogPlot should be a list of pairs of dates and real values, a list of real values, or a list of several such lists. >></p>
<p>Out[6]= DateListLogPlot[Missing["NotAvailable"]]</p>
</blockquote>
<pre><code>In[8]:= FinancialData["DJI", All]
</code></pre>
<blockquote>
<p>During evaluation of In[8]:= FinancialData::notent: DJI is not a known entity, class, or tag for FinancialData. Use FinancialData[] for a list of entities. >></p>
<p>Out[8]= FinancialData["DJI", All]</p>
</blockquote>
<pre><code>In[9]:= FinancialData["^DJI"]
</code></pre>
<blockquote>
<p>Out[9]= Missing["NotAvailable"]</p>
</blockquote>
<p>What's going on here? Is the DJI data unavailable somehow?</p>
|
Vitaliy Kaurov
| 13 |
<p>We have another option now: <a href="https://github.com/bajracha71/Quandl-Mathematica-QuandlLink" rel="nofollow noreferrer">QuandlLink</a> package. Using this package you can compute this as:</p>
<pre><code>Needs["QuandlLink`"]
dowJonesOHLCV =
QuandlFinancialData[
"YAHOO/INDEX_DJI",
startDate -> "2015-01-01",
column -> 1 ;; 6];
TradingChart[Rest@dowJonesOHLCV, PlotLabel -> "OHLCV"]
</code></pre>
<p><a href="https://i.stack.imgur.com/SY55r.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SY55r.jpg" alt="trading chart"></a></p>
|
1,002,191 |
<p>I have a problem:</p>
<blockquote>
<p>A family has six children, A,B,C,D,E,F. The ages of these six children are added together to make an integer less than $32$ where none of the children are less than one year old.</p>
</blockquote>
<p>What is the chances of guessing the ages of all of the children.</p>
<p>I thought stars and bars, and this is equivalent to choosing $7$ children for $$x_1+\dots+x_7=31$$</p>
<p>So we get (1 in)${31-1 \choose 7-1}$</p>
<p>But they said that $x_1+\dots+\dots+x_7=32$ is correct. I feel like that would be the case if the original question said $x_1+\dots+x_6\leq32$, but it said $x_1+\dots+x_6\lt32$</p>
<p>So apparently that makes it (1 in)${32-1 \choose 7-1}$</p>
<p>Which is right?(Probably them I know, but I want to know why mine is wrong please)</p>
|
RE60K
| 67,609 |
<p>Given $$\sum_{i=1}^6x_i<32\implies \sum_{i=1}^7x_i=31\qquad(\because\ x_7\ge0,\ x_i\ge1\text{ for }1\le i\le6),$$
let $$x_7'=x_7+1,\qquad x_7'\ge1,$$
so $$\sum_{i=1}^6x_i+x_7'=32.$$
Now all the variables are $\ge1$; can you solve it from here?</p>
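<p>You can confirm the count numerically (an illustrative Python check): six ages, each $\ge 1$, with total $s$ give $\binom{s-1}{5}$ tuples by stars and bars, and summing over every admissible total $s<32$ recovers $\binom{31}{6}$ via the hockey-stick identity.</p>

```python
from math import comb

# Sum the stars-and-bars counts over all admissible totals s = 6, ..., 31.
total = sum(comb(s - 1, 5) for s in range(6, 32))
```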
|
1,419,185 |
<p>I am attempting to help someone with their homework and these concepts are a bit above me. I apologize for the terrible graph drawing. I am using a surface pro 3 and it has an awful camera so I can't take a picture of the problem so I attempted to trace it.</p>
<p><a href="https://i.stack.imgur.com/FzMSf.png" rel="nofollow noreferrer">http://i.stack.imgur.com/FzMSf.png</a></p>
<p>The problem shows a graph in the shape of a W. The left part comes downward to -3,0, the center is at 0,0 and the right is at 3,0. There are no numbers shown on the y axis. I believe the W shape indicates that it is a graph of a quartic equation.</p>
<p>The problem states:</p>
<p>"Find the formula for the graph above, given that it is a polynomial, that all zeroes of the polynomial are shown, that the exponents of each of the zeroes are the least possible, and that it passes through the point (-1, -8)."</p>
<p>Now from what I could find trying to research quartic equations is that my formula should look like ax^4+bx^3+cx^2+dx+e, but I have no idea where to start.</p>
<p>Edit: I managed to get the solution as x^4-9x^2 by using Desmos and playing with the graph using your comments to guide me, but I am not sure how to go through the steps mathematically.</p>
|
Daniel R. Collins
| 266,243 |
<p>The graph shows roots at $x = -3, 0, 3$, and there's a double root at zero due to the way the graph is tangent to the x-axis at that point. The equation is therefore the product of the four factors $(x + 3)(x - 3)(x + 0)(x + 0) = (x^2 - 9)(x)(x) = x^4 - 9x^2$, possibly with another constant scaling multiplier. But if we check $x = -1$ for this expression, then we see $x^4 - 9x^2 = (-1)^4 - 9(-1)^2 = 1 - 9 = -8$, which meets the final constraint, and so no constant multiplier is needed. </p>
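<p>A quick check of all three constraints (an illustrative sketch): the candidate vanishes at $-3, 0, 3$, has a double root at $0$ (its derivative also vanishes there), and passes through $(-1,-8)$.</p>

```python
p = lambda x: x**4 - 9 * x**2           # candidate polynomial
dp = lambda x: 4 * x**3 - 18 * x        # its derivative

root_values = [p(-3), p(0), p(3)]       # all zero
tangency = dp(0)                        # 0: double root at the origin
through_point = p(-1)                   # should be -8
```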
|
1,897,212 |
<p>Does there exist any surjective group homomorphism from <span class="math-container">$(\mathbb R^* , .)$</span> (the multiplicative group of non-zero real numbers) onto <span class="math-container">$(\mathbb Q^* , .)$</span> (the multiplicative group of non-zero rational numbers)? </p>
|
Hanul Jeon
| 53,976 |
<p>Suppose that $f:\Bbb{R}^*\to\Bbb{Q}^*$ is such a homomorphism. Then we have some $x$ such that $f(x)=2$. Now take the cube root $\sqrt[3]{x}$ of $x$, which always exists in $\mathbb{R}^*$. Then $(f(\sqrt[3]{x}))^3 = f(x) = 2$, i.e. $f(\sqrt[3]{x})$ is a cube root of $2$. But $2$ has no cube root in $\mathbb{Q}^*$, so this is a contradiction.</p>
|
698,702 |
<p>Let $V$ be a vector space with finite dimension $n$ and $T:V\longrightarrow V$ is a linear transformation such that $T^{2}=0$. Then</p>
<ol>
<li><p>$rank(T)\leq\frac{n}{2}$</p></li>
<li><p>$n(T)\leq\frac{n}{2}$</p></li>
<li><p>$rank(T)\geq n(T)$</p></li>
<li><p>$rank(T)\geq \frac{n}{2}$</p></li>
</ol>
|
r9m
| 129,017 |
<p>By Sylvester's rank inequality, $2\operatorname{rank}(T)-n\le \operatorname{rank}(T^2)=0$, so $\operatorname{rank}(T)\le\frac{n}{2}$;</p>
<p>and $\operatorname{rank}(T)+\operatorname{null}(T)=n$, so $\operatorname{null}(T)\ge \frac{n}{2}$.</p>
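<p>For a concrete instance (my own illustration, not part of the original answer): with $n=4$, the matrix below squares to zero and achieves equality $\operatorname{rank}(T)=\operatorname{null}(T)=n/2$.</p>

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(A):
    # row reduction over the rationals, so no floating-point issues
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# T maps e1 -> e3, e2 -> e4, e3 -> 0, e4 -> 0, so T^2 = 0
T = [[0, 0, 0, 0],
     [0, 0, 0, 0],
     [1, 0, 0, 0],
     [0, 1, 0, 0]]
T_squared = matmul(T, T)
```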
|
2,820,842 |
<p>I am trying to prove that if $Q$ is an algebraic extension of $F$
($F$ is a field), if $R$ is a subring of $Q$, and we have
$F\subseteq R$, then $R$ is a subfield of $Q$.</p>
<p>My first approach was to show every element has an inverse. But there was no result.</p>
<p>Any help would be great.</p>
|
Hagen von Eitzen
| 39,174 |
<p>We are given that $a=b+up^n$ with $u$ not a multiple of $p$.
Then $$ a^p-b^p=\sum_{k=1}^p{p\choose k}b^{p-k}u^kp^{nk}$$
Using the fact that $p\mid {p\choose k}$ for $0<k<p$, we see:
The summand for $k=1$ is divisible precisely by $p^{n+1}$, the summand for $k=p$ is divisible by $p^{np}>p^{n+1}$, and all in-between summands are divisible at least by $p^{2n+1}$.</p>
|
1,823,023 |
<h2>1°</h2>
<p>$n_0=1$</p>
<p>$(1!)^2 \ge 1^1$</p>
<p>$1\ge1$</p>
<h2>2°</h2>
<p>$k \ge n_0$</p>
<p>assumption: $$(k!)^2 \ge k^k$$</p>
<p>and for k+1: $$((k+1)!)^2 \ge (k+1)^{k+1}$$</p>
<p>I also noticed that: $$((k+1)!)^2 = (k!)^2 * (k+1)^2$$</p>
|
Daniel McLaury
| 3,296 |
<p>Well, the simplest interpretation is that they take a vector and recover its coordinates. So, for instance, if you have a vector (2, 3, 1), and you want to express the fact that the second coordinate is 3, the most effective way of doing this is by writing something like</p>
<p>$$(2, 3, 1) \cdot (0, 1, 0) = 3$$</p>
<p>The geometric interpretation is really secondary much of the time.</p>
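<p>For instance (a minimal sketch), dotting against a standard basis vector extracts the corresponding coordinate:</p>

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v = (2, 3, 1)
e2 = (0, 1, 0)                    # second standard basis vector
second_coordinate = dot(v, e2)    # recovers 3
```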
|
1,823,023 |
<h2>1°</h2>
<p>$n_0=1$</p>
<p>$(1!)^2 \ge 1^1$</p>
<p>$1\ge1$</p>
<h2>2°</h2>
<p>$k \ge n_0$</p>
<p>assumption: $$(k!)^2 \ge k^k$$</p>
<p>and for k+1: $$((k+1)!)^2 \ge (k+1)^{k+1}$$</p>
<p>I also noticed that: $$((k+1)!)^2 = (k!)^2 * (k+1)^2$$</p>
|
Christian Blatter
| 1,303 |
<p>Well, it's a <em>symmetric bilinear function</em> $G:\>V\times V\to{\mathbb R}$ such that the associated quadratic form $q(x):=G(x,x)$ is positive definite. This means that
$$G(x,y)=G(y,x),\quad G(x+x',y)=G(x,y)+G(x',y),\quad G(\alpha x,y)=\alpha G(x,y), $$
and $G(x,x)>0$ for all $x\ne0$. If such a $G$ has been selected once and for all one writes, e.g., $\langle x,y\rangle$ instead of $G(x,y)$, and calls $\langle\cdot,\cdot\rangle$ a <em>scalar product</em> on $V$.</p>
<p>Given a basis $(e_i)_{1\leq i\leq n}$ of $V$, such a $G$ is determined by the data $g_{ik}:=\langle e_i,e_k\rangle$. If the matrix $[g_{ik}]$ is the identity matrix the basis $(e_i)_{1\leq i\leq n}$ is called <em>orthonormal</em>.</p>
<p>Similar things hold in a complex environment.</p>
|
3,137,681 |
<p>I'm trying to see if there is any way to solve for x? I'm told Option 1 is correct and I think I understand it but why is Option 2 incorrect?</p>
<p>Option 1: subtract $y$ from both sides.<br>
$xy + y - y = y - y$<br>
$xy = 0$<br>
No solution for $x$, as either $x$ or $y$ could be zero.</p>
<p>Option 2: factor out $y$.<br>
$y \cdot (x + 1) = y$<br>
$(1/y) \cdot y \cdot (x + 1) = y \cdot (1/y)$<br>
$(x + 1) - 1 = 1 - 1$<br>
$x = 0$<br>
Granted we still don't know what $y$ is, but can't we be sure that $x = 0$?</p>
|
Michael Rozenberg
| 190,319 |
<p>Take <span class="math-container">$x=1$</span> and <span class="math-container">$y=0$</span>.</p>
<p>I think the right reasoning is the following.</p>
<p>We need to solve <span class="math-container">$xy=0$</span>.</p>
<p>If <span class="math-container">$y=0$</span> so we see that all real number is a root of the equation.</p>
<p>For <span class="math-container">$y\neq0$</span> we obtain <span class="math-container">$x=0$</span>.</p>
|
1,063,774 |
<blockquote>
<p>Prove that the <span class="math-container">$\lim_{n\to \infty} r^n = 0$</span> for <span class="math-container">$|r|\lt 1$</span>.</p>
</blockquote>
<p>I can't think of a sequence to compare this to that'll work. L'Hopital's rule doesn't apply. I know there's some simple way of doing this, but it just isn't coming to me. :(</p>
|
user84413
| 84,413 |
<p>Let <span class="math-container">$a_n=|r|^n$</span> for <span class="math-container">$n\ge1$</span>,
so <span class="math-container">$(a_n)$</span> is a decreasing sequence which is bounded below by zero </p>
<p>and therefore converges, so let <span class="math-container">$\displaystyle\lim_{n\to\infty}a_n=L$</span>.</p>
<p>Since <span class="math-container">$a_{n+1}=|r|a_n$</span>, and <span class="math-container">$(a_{n+1})$</span> is a subsequence of <span class="math-container">$(a_n)$</span>, by the theorem that all subsequence of a convergent sequence converge to the same limit<span class="math-container">$, \;\;L=|r|L\implies L=0\;\;$</span> (for <span class="math-container">$r\ne0$</span>).</p>
<p>Since <span class="math-container">$\displaystyle\lim_{n\to\infty}|r|^n=0$</span>, <span class="math-container">$\displaystyle\lim_{n\to\infty}r^n=0$</span> also.</p>
|
2,076,910 |
<p>proving $\displaystyle \binom{n}{0}-\binom{n}{1}+\binom{n}{2}+\cdots \cdots +(-1)^{\color{red}{m}-1}\binom{n}{m-1}=(-1)^{m-1}\binom{n-1}{m-1}.$</p>
<p>$\displaystyle \Rightarrow 1-n+\frac{n(n-1)}{2}+\cdots \cdots (-1)^{n-1}\frac{n.(n-1)\cdot (n-2)\cdots(n-m+2)}{(m-1)!}$</p>
<p><strong>Added</strong></p>
<p>writting LHS as $\displaystyle \binom{n}{0}-\left(\binom{n-1}{0}+\binom{n-1}{1}\right)+\left(\binom{n-1}{1}+\binom{n-1}{2}\right)+\cdots \cdots +(-1)^{n-1}\left(\binom{n-1}{m-2}+\binom{n-1}{m-1}\right)=(-1)^{m-1}\binom{n-1}{m-1}.$</p>
<p>$\displaystyle \binom{n}{0}-\binom{n-1}{0}+\binom{n-1}{1}-\cdots +(-1)^{n-1}\binom{n-1}{m-2}-\left(\binom{n-1}{1}-\binom{n-1}{2}+\cdots +(-1)^n\binom{m-1}{m-1}\right)$ </p>
<p>I wasn't able to get further after that; please help me solve it.</p>
|
Brian M. Scott
| 12,042 |
<p>Extended HINT: The result is incorrect as originally stated; it should read</p>
<p>$$\binom{n}0-\binom{n}1+\binom{n}2-\ldots+(-1)^{\color{crimson}m-1}\binom{n}{m-1}=(-1)^{m-1}\binom{n-1}{m-1}\;,$$</p>
<p>or, more compactly,</p>
<p>$$\sum_{k=0}^{m-1}(-1)^k\binom{n}k=(-1)^{m-1}\binom{n-1}{m-1}\;.\tag{1}$$</p>
<p>Fix $n\in\Bbb N$. For $m=1$ the desired result is</p>
<p>$$\binom{n}0=(-1)^0\binom{n-1}0\;,$$</p>
<p>which is indeed true, since both sides are equal to $1$. Suppose as an induction hypothesis that $(1)$ holds for some $m$; for the induction step we want to prove that</p>
<p>$$\sum_{k=0}^m(-1)^k\binom{n}k=(-1)^m\binom{n-1}m\;.\tag{2}$$</p>
<p>Using the induction hypothesis we can rewrite the lefthand side of $(2)$:</p>
<p>$$\sum_{k=0}^m(-1)^k\binom{n}k=(-1)^m\binom{n}m+\sum_{k=0}^{m-1}(-1)^k\binom{n}k=(-1)^m\binom{n}m+(-1)^{m-1}\binom{n-1}{m-1}\;,$$</p>
<p>so to complete the induction step we need only show that</p>
<p>$$(-1)^m\binom{n}m+(-1)^{m-1}\binom{n-1}{m-1}=(-1)^m\binom{n-1}m\;.$$</p>
<p>This is easily done using one of the most basic identities involving binomial coefficients.</p>
<p><strong>Added:</strong> After some thought I realize that $(1)$ can be proved by direct calculation:</p>
<p>$$\begin{align*}
\sum_{k=0}^{m-1}(-1)^k\binom{n}k&=\sum_{k=0}^{m-1}(-1)^k\left(\binom{n-1}k+\binom{n-1}{k-1}\right)\\
&=\sum_{k=0}^{m-1}(-1)^k\binom{n-1}k+\sum_{k=0}^{m-1}(-1)^k\binom{n-1}{k-1}\\
&=\sum_{k=0}^{m-1}(-1)^k\binom{n-1}k+\sum_{k=1}^{m-1}(-1)^k\binom{n-1}{k-1}\\
&=\sum_{k=0}^{m-1}(-1)^k\binom{n-1}k+\sum_{k=0}^{m-2}(-1)^{k+1}\binom{n-1}k\\
&=\sum_{k=0}^{m-1}(-1)^k\binom{n-1}k-\sum_{k=0}^{m-2}(-1)^k\binom{n-1}k\\
&=(-1)^{m-1}\binom{n-1}{m-1}+\sum_{k=0}^{m-2}(-1)^k\binom{n-1}k-\sum_{k=0}^{m-2}(-1)^k\binom{n-1}k\\
&=(-1)^{m-1}\binom{n-1}{m-1}\;.
\end{align*}$$</p>
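<p>The identity is also easy to verify numerically for small parameters (an illustrative Python check of $(1)$):</p>

```python
from math import comb

def lhs(n, m):
    return sum((-1) ** k * comb(n, k) for k in range(m))

def rhs(n, m):
    return (-1) ** (m - 1) * comb(n - 1, m - 1)

identity_holds = all(lhs(n, m) == rhs(n, m)
                     for n in range(1, 13) for m in range(1, n + 2))
```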
|
47,188 |
<p>I am an amateur mathematician, and I had an idea which I worked out a bit and sent to an expert. He urged me to write it up for publication. So I did, and put it on arXiv. There were a couple of rounds of constructive criticism, after which he tells me he thinks it ought to go a "top" journal (his phrase, and he went on to name two). </p>
<p>His opinion is that my outsider status will have no effect on the reviewing process and my paper will be taken seriously. I am pleased and quite flattered to hear this, and my inclination is to do as he suggests. But I have to say it sounds too good to be true. Does this match other people's experience? I understand it's rare enough for undergraduates to publish anywhere, and I am not even an undergraduate! Surely a "non-mathematician submitting to top mathematical journal" must instantly rank high on the crackpot scale. How often does this actually occur successfully these days? Any advice?</p>
|
Andrew D. King
| 4,580 |
<p>It certainly does happen that amateur mathematicians publish articles in good journals. Unfortunately the good amateur mathematicians have the burden of distinguishing themselves from the crackpots.</p>
<p>Without more information it is hard to give advice. Everybody's ideas of a good paper and a top journal are different. But I can say two simple things for the general case.</p>
<p>First, it is definitely worth the effort to make sure your writing style is in line with that of good math papers. If your paper is written in a way that doesn't conform to writing standards/norms in the area, the referees will have a bad first impression. Second, make sure you acknowledge anyone who helped you with the writing. In my experience this is best done briefly in a final section at the very end of the paper.</p>
|
2,940,649 |
<p>solve <span class="math-container">$$\frac{|x|-1}{|x|-3} \ge 0$$</span></p>
<p>here <span class="math-container">$x$</span> is not equal to <span class="math-container">$3$</span> and <span class="math-container">$-3$</span>.</p>
|
Andrei
| 331,661 |
<p>You are oscillating between the <strong>meaning</strong> of having some particular color of stripes. You have the following categories: a) blue stripes only, b) red stripes only, c) both blue and red. In addition, in the problem appear d) blue stripes which is a+c, and e) red stripes, which is b+c.</p>
<p>Now look at the striped fish only. 70% have red stripes. That is category e. That means 30% have blue only (a). 50% have blue stripes (d), meaning that red only (b) is 50%. You can conclude now that 20% have both color stripes.</p>
<p>This should be enough to lead you to the answer</p>
|
81,348 |
<p>There is a well-known proof of the Compactness Theorem in propositional logic which uses the compactness of the space $\{0,1\}^P$, where $P$ is the set of propositional variables in consideration. In general, this compactness relies on the Tychonoff theorem which in turn requires the Axiom of Choice. Let me sketch it (in danger of boring the experts, but for reference it will be useful):</p>
<p>For a set $A$ of formulas in $P$, let $T(A)$ be the set of interpretations $P \to \{0,1\}$ which make the formulas in $A$ true. Then $T(\cup_i A_i)=\cap_i T(A_i)$ and each $T(A)$ is closed in $\{0,1\}^P$. Thus if $A$ is finitely consistent, then $T(A)$ is the directed intersection of nonempty closed sets, thus also nonempty (compactness).</p>
<p>My question concerns the case that $P = \{p_1,p_2,\dotsc\}$ is countable. Then it seems to me that it is provable in ZF that $\{0,1\}^P$ is compact: Use the homeomorphism to the closed cantor set $C \subseteq [0,1]$ and the compactness of $[0,1]$. From this we can conclude that the Compactness Theorem in propositional logic for countably many propositional variables is provable in ZF - if everything is correct so far.</p>
<p>Isn't this somehow counterintuitive? Or does this proof yield a constructive algorithm how to find an interpretation which makes all formulas in $A$ true provided one can find them for finitely many?</p>
|
Buschi Sergio
| 6,262 |
<p>In P. Johnstone "Notes on Logic and Set theory" there is a proof of completeness (and then compactness) of propositional logic for a countable set of base atomic proposition, avoiding the choise axiom (in the exercise Johnstone prove it in general by Zorn Lemma).
If you want I send you the outline of the proof. \</p>
<p>THE OUTLINE:</p>
<p>Let $S\subset Bool(P)$ be a set of propositions (where $Bool(P)$ is the set of propositional expressions built from a set of primitive propositions $P=\{p, q, r, s,\dots\}$; this is the free Boolean algebra on the set $P$). A valuation is a function $v: P\to \{0,1\}$; any valuation has a unique natural extension to $Bool(P)$, which we still denote by the same letter $v$. We write $S\models s$ if every valuation $v$ which is $1$ (true) on all elements of $S$ is also $1$ on $s$. We write $S\vdash s$ if there is a <em>deduction</em> of $s$ from $S$. We call $S$ inconsistent if $S\vdash \bot$, and consistent otherwise.
Then $S\models \bot$ means that no valuation is $1$ (true) on all members of $S$ (where $\bot$ is the atomic expression "false").</p>
<p>LEMMA1.</p>
<p>If $S\models \bot$ then $S\vdash \bot$ (i.e. $S$ is inconsistent).</p>
<p>PROOF (in the case of $P$ countable): We suppose $S$ consistent, and exhibit a valuation $v$ such that $v(s)=1$ for all $s\in S$.
Observe that for $t\in Bool(P)$, either $S\cup\{t\}$ or $S\cup\{\neg t\}$ is consistent: if $S\cup\{t\}\vdash \bot$, then from the deduction lemma it follows that $S\vdash (t\to \bot)$, i.e. $S\vdash \neg t$ (we can define $\neg t:= t\to \bot$); so, since $S$ is consistent (hypothesis), so is $S\cup\{\neg t\}$.<br>
Then we enumerate the elements of $Bool(P)$ and, one by one, add each to $S$ if this preserves consistency, or add its negation otherwise. We obtain a set $S'\supset S$ which is consistent (if $S'\vdash \bot$ then there is a deduction of $\bot$ from $S'$ involving only finitely many elements of $S'$), and such that for any $t\in Bool(P)$, either $t$ or $\neg t$ belongs to $S'$. Let $v(t)=1$ if $t\in S'$ and $v(t)=0$ if $t\not\in S'$. We claim that $v$ is a valuation, i.e. that it preserves the Boolean operations: clearly $v(\bot)=0$, and it is enough to show that $v(s\to t)= (v(s)\to v(t))$. We consider three cases:</p>
<p>1) $v(t)=1$, i.e. $t\in S'$. Then we cannot have $\neg (s\to t)\in S'$, since $t\vdash (s\to t)$ and $S'$ is consistent. So $(s\to t)\in S'$, i.e. $v(s\to t)=1= (v(s)\to v(t))$.</p>
<p>2) $v(s)=0$: then $\neg s\in S'$, and since $\neg s \vdash (s\to t)$ (exercise), as in (1) we must have $v(s\to t)=1= (v(s)\to v(t))$.</p>
<p>3) $v(s)=1,\ v(t)=0$: in this case we have $s\in S'$ and $t\not\in S'$, so since $\{s,\ s\to t\}\vdash t$ we have $(s\to t)\not\in S'$, hence $v(s\to t)=0= (v(s)\to v(t))$.</p>
<p>Theorem of Completeness: $S\models t$ iff $S\vdash t$.</p>
<p>PROOF. The "if" part is obvious. Suppose $S\models t$; from $\{t, \neg t\}\vdash \bot$ we have $S\cup\{\neg t\}\models \bot$, by the lemma we have $S\cup\{\neg t\}\vdash \bot$, and by the deduction lemma we have
$S\vdash ((\neg t)\to \bot)$, hence $S\vdash t$.</p>
<p>Theorem of Compactness: if $S\models t$ then $S\vdash t$, hence there exists a finite $S'\subset S$ such that $S'\vdash t$, and then $S'\models t$.</p>
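<p>To see the enumeration step of Lemma 1 concretely, here is a toy Python sketch (my own illustration, not Johnstone's treatment): over finitely many atoms, identify a proposition with its set of satisfying valuations, so consistency becomes nonemptiness of an intersection, and the completion adds each proposition or its "negation" (set complement) in turn.</p>

```python
from itertools import product

atoms = ("p", "q")
space = set(product((0, 1), repeat=len(atoms)))      # all valuations

def consistent(props):
    """S is consistent iff some valuation satisfies every member of S."""
    models = set(space)
    for s in props:
        models &= s
    return bool(models)

def complete(S, enumeration):
    """Lindenbaum-style completion: add t if consistent, else its negation."""
    S = list(S)
    for t in enumeration:
        S.append(t if consistent(S + [t]) else space - t)
    return S

p_true = {v for v in space if v[0] == 1}             # the proposition "p"
q_true = {v for v in space if v[1] == 1}             # the proposition "q"
S1 = complete([p_true], [q_true, space - p_true])
```

<p>The completed set stays consistent and decides every proposition it was offered, mirroring the construction of $S'$ above.</p>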
|
1,912,628 |
<p>Can someone give me an example or a hint to come up with a countable compact set in the real line with infinitely many accumulation points?
Thank you in advance!</p>
|
Reveillark
| 122,262 |
<p>First consider the set:</p>
<p>$$A=\left\{ \frac{1}{n}:n\in\mathbb{Z}^+\right\}\cup\{0\}$$</p>
<p>For each $n>1$, take a sequence $\{\varepsilon_k^n\}_{k=1}^\infty$ with $\frac{1}{n}<\varepsilon^n_k<\frac{1}{n-1}$, and for $n=1$ take $\varepsilon_k^1>1$, such that </p>
<p>$$\lim_{k\to\infty}\varepsilon_k^n=\frac{1}{n}$$</p>
<p>Put $$K=A\cup \{\varepsilon_k^n:n,k\in\mathbb{Z}^+\}$$</p>
<p>Notice that this $K$ works. </p>
|
1,912,628 |
<p>Can someone give me an example or a hint to come up with a countable compact set in the real line with infinitely many accumulation points?
Thank you in advance!</p>
|
Brian M. Scott
| 12,042 |
<p>HINT: Start with $\{0\}\cup\left\{\frac1n:n\in\Bbb Z^+\right\}$, and figure out how to add for each $n\in\Bbb Z^+$ a sequence between $\frac1{n+1}$ and $\frac1n$ converging downwards to $\frac1{n+1}$.</p>
|
1,912,628 |
<p>Can someone give me an example or a hint to come up with a countable compact set in the real line with infinitely many accumulation points?
Thank you in advance!</p>
|
DanielWainfleet
| 254,665 |
<p>Let $a_0=0$ and let $(a_n)_{n\in N}$ be a strictly increasing sequence of members of $(0,1)$ with $\lim_{n\to \infty}a_n=1.$</p>
<p>For each $n\in N$ (i.e. $n>0$) let $(b_{n,j})_{j\in N}$ be a strictly increasing sequence of members of $[a_{n-1},a_n)$ with $b_{n,1}=a_{n-1}$ and $\lim_{j\to \infty}b_{n,j}=a_n.$ </p>
<p>Let $S=\{b_{n,j}:n,j\in N\}\cup \{1\}.$ Then S is closed and bounded, hence compact, and the set of accumulation points of $S$ is $\{a_n:n\in N\}\cup \{1\}.$</p>
|
535,226 |
<p>I am trying to check whether or not the sequence $$a_{n} =\left\{\frac{n^n}{n!}\right\}_{n=1}^{\infty}$$ is bounded, convergent and ultimately monotonic (there exists an $N$ such that for all $n\geq N$ the sequence is monotonically increasing or decreasing). However, I'm having a lot of trouble finding a solution that sufficiently satisfies me.</p>
<p>My best argument so far is as follows,</p>
<p>$$a_{n} = \frac{n\cdot n\cdot n\cdot \ldots\cdot n}{n(n-1)(n-2)(n-3)\dots(2)(1)} = \frac{n}{n}\cdot \frac{n}{n-1}\cdot \ldots \cdot \frac{n}2\cdot n$$</p>
<p>so $\lim a_{n}\rightarrow \infty$ since $n<a_{n}$ for all $n>1$. Since the sequence is divergent, it follows that the function must be ultimately monotonic.</p>
<p>This feels a little dubious to me, I feel like I can form a much better argument than that, or at the very least a more elegant one. I've tried to assume $\{a_{n}\}$ approaches some limit $L$ so there exists some $N$ such that</p>
<p>$|a_{n} - L| < \epsilon$ whenever $n>N$ and derive a contradiction, but this approach got me nowhere.</p>
<p>Finally, I've also tried to use the fact that $\frac{a_{n+1}}{a_n}\rightarrow e$ to help me, but I couldn't find an argument where that fact would be useful.</p>
|
Noor
| 495,811 |
<p>To show that <span class="math-container">$a_n$</span> is unbounded, we can show that <span class="math-container">$a_n \geq n$</span> for all <span class="math-container">$n$</span>. And since <span class="math-container">$n$</span> is unbounded by Archimedes, <span class="math-container">$a_n$</span> must also be unbounded.</p>
<p><span class="math-container">$$a_n = \frac{n^n}{n!} = \underbrace{\frac{n}{n} \cdot \frac{n}{n-1} \cdot \frac{n}{n-2} \cdot \frac{n}{n-3}\cdots \frac{n}{2}}_{\text{Since $n \geq 1$, each of these terms are $\geq 1$}} \cdot \frac{n}{1} \geq n$$</span></p>
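<p>A quick numerical sanity check of this inequality (illustration only; the displayed argument already proves it):</p>

```python
from math import factorial

# n^n >= n * n!  is equivalent to a_n = n^n / n! >= n; exact integer
# arithmetic avoids any floating-point issues.
for n in range(1, 30):
    assert n**n >= n * factorial(n)
```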
|
323,781 |
<p>In the <a href="https://arxiv.org/abs/1902.07321" rel="noreferrer">paper</a> by Griffin, Ono, Rolen and Zagier which appeared on the arXiv today, (Update: published now in <a href="https://www.pnas.org/content/early/2019/05/20/1902572116" rel="noreferrer">PNAS</a>) the abstract includes</p>
<blockquote>
<p>In the case of the Riemann zeta function, this proves the GUE random
matrix model prediction in derivative aspect. </p>
</blockquote>
<p>In more detail, towards the bottom of the second page they say</p>
<blockquote>
<p>Theorem 3 in the case of the Riemann zeta function is the <em>derivative
aspect Gaussian Unitary Ensemble</em> (GUE) random matrix model
prediction for the zeros of Jensen polynomials. To make this precise,
recall that Dyson, Montgomery, and Odlyzko ... conjecture that the
non-trivial zeros of the Riemann zeta function are distributed like
the eigenvalues of random Hermitian matrices. These eigenvalues
satisfy Wigner's Semicircular Law, as do the roots of the Hermite
polynomials <span class="math-container">$H_d(X)$</span>, when suitably normalized, as
<span class="math-container">$d\rightarrow+\infty$</span> ... The roots of <span class="math-container">$J_{\gamma}^{d,0}(X)$</span>, as
<span class="math-container">$d\rightarrow+\infty$</span>, approximate the zeros of
<span class="math-container">$\Lambda\left(\frac{1}{2}+z\right)$</span>, ... and so GUE predicts that
these roots also obey the Semicircular Law. Since the derivatives of
<span class="math-container">$\Lambda\left(\frac{1}{2}+z\right)$</span> are also predicted to satisfy
GUE, it is natural to consider the limiting behavior of
<span class="math-container">$J_{\gamma}^{d,n}(X)$</span> as <span class="math-container">$n\rightarrow+\infty$</span>. The work here proves
that these derivative aspect limits are the Hermite polynomials
<span class="math-container">$H_d(X)$</span>, which, as mentioned above, satisfy GUE in degree aspect.</p>
</blockquote>
<p>I am hoping someone can further explain this. In particular, does this result shed any light on the horizontal distribution of the zeros of the derivative of the Riemann zeta function?</p>
<hr>
<p>Edit: Speiser showed that the Riemann hypothesis is equivalent to <span class="math-container">$\zeta^\prime(s)\ne 0$</span> for <span class="math-container">$0<\sigma<1/2$</span>. Since then quite a lot of work has gone into studying the horizontal distribution of the zeros of <span class="math-container">$\zeta^\prime(s)$</span>. For example, <a href="https://arxiv.org/abs/1002.0372" rel="noreferrer">Duenez et. al.</a><a href="https://iopscience.iop.org/article/10.1088/0951-7715/23/10/014" rel="noreferrer">compared</a> this distribution with the radial distribution of zeros
of the derivative of the characteristic polynomial of a random unitary matrix. (Caveat: I'm not up to date on all the relevant literature.)</p>
<p>This is a very significant question. If the GUE distribution holds for the Riemann zeros, then rarely but infinitely often there will be pair with less than than half the average (rescaled) gap. From this, by the work of Conrey and Iwaniec, one gets good lower bounds for the class number problem.</p>
<p>In <a href="https://arxiv.org/abs/1002.1616" rel="noreferrer">this</a> <a href="https://www.sciencedirect.com/science/article/pii/S0001870812001600" rel="noreferrer">paper</a> Farmer and Ki showed that if the derivative of the Riemann zeta function has sufficiently many zeros close to the
critical line, then the zeta function has many closely spaced zeros, which by the above, also solves the class number problem. </p>
<p>The question of modeling the horizontal distribution of the zeros of <span class="math-container">$\zeta^\prime(s)$</span> with the radial distribution of zeros
of the derivative of the characteristic polynomial of a random unitary matrix, is intimately connected to the class number problem. Based on the answer of Griffin below, I don't think that's what the Griffin-Ono-Rolen-Zagier paper does, but it's worth asking about.</p>
|
Michael Griffin
| 61,910 |
<p>Just for clarification, what do you mean by the horizontal distribution? Are you asking about the distribution of the real parts of the zeros of <span class="math-container">$\zeta'(z):=\frac{\operatorname{d}}{\operatorname{d}s}\zeta(s)$</span>? In that case, the answer is no. It is known that <span class="math-container">$\zeta'(s)$</span> does not satisfy the Riemann hypothesis, due to interaction with the trivial zeros, so the distribution of the real parts of these zeros is an interesting question, but not one on which I can comment. Our work only deals with the derivatives of the symmetric version of the zeta function, <span class="math-container">$\Lambda(z)$</span>, which are all expected to satisfy both the Riemann hypothesis and the GUE model. If your question was about the distribution of these zeros, then yes, our theorem suggests that the low-lying zeros follow the GUE model with increasing accuracy as the order of the derivative increases.</p>
|
868,384 |
<p>Let's assume I have a set like $S = \{2,1,3,4,8,10\}$. </p>
<p>What's the math notation for the smallest number in the set? </p>
|
DiegoMath
| 64,437 |
<p>The notation you are looking for is:
$$\min$$</p>
<p>Suppose you have an ordinary finite set $A=\{a_1,\ldots,a_k\}$; then you can write the minimum as follows:</p>
<p>$$\min\{a_1,\ldots,a_k\}$$</p>
<p>In your case,</p>
<p>$$\min\{2,1,3,4,8,10\}=1$$</p>
<p>In case of functions, you can represent its minimum over a set as follows:
$$\min_{x\in S}f(x).$$</p>
<p>An example:</p>
<p>$$S=\mathbb{R},\ f(x)=x^2\Rightarrow \min_{x\in S}f(x)=0.$$</p>
<p>See the comments above for more information.</p>
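<p>This notation maps directly onto the built-in minimum in most languages; a short Python illustration:</p>

```python
# min of a finite set
S = {2, 1, 3, 4, 8, 10}
assert min(S) == 1                       # min{2, 1, 3, 4, 8, 10} = 1

# min_{x in S} f(x) for f(x) = x^2 over the same finite set
f = lambda x: x**2
assert min(f(x) for x in S) == 1         # attained at x = 1
```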
|
40,463 |
<p>If a rectangle is formed from rigid bars for edges and joints
at vertices, then it is flexible in the plane: it can flex
to a parallelogram.
On any smooth surface with a metric, one can define a linkage
(e.g., a rectangle) whose edges are geodesics of fixed length,
and whose vertices are joints, and again ask if it is rigid
or flexible on the surface.
This leads to my first, specific question:</p>
<blockquote>
<p><b>Q1</b>.
Is a rhombus, or a rectangle,
always flexible on a sphere?</p>
</blockquote>
<p><br /><img src="https://i.stack.imgur.com/FRW0R.jpg" alt="alt text" /><br /></p>
<p>It seems the answer should be <em>Yes</em> but I am a bit uncertain
if there must be a restriction on the edge lengths.
(In the above figure, the four arcs are each <span class="math-container">$49^\circ$</span> in length, comfortably short.)</p>
<blockquote>
<p><b>Q2</b>.
The same question for other surfaces: Arbitrary convex surfaces? A torus?</p>
</blockquote>
<p>I am especially interested to learn if there are situations where a linkage that is flexible
in the plane is rendered rigid when embedded on some surface.
It seems this should be possible...?</p>
<blockquote>
<p><b>Q3</b>.
More generally,
<a href="https://mathworld.wolfram.com/LamansTheorem.html" rel="nofollow noreferrer">Laman's theorem</a>
provides a combinatorial characterization of the rigid linkages in the plane.
The <span class="math-container">$n{=}4$</span> rectangle is not rigid because it has fewer than <span class="math-container">$2n-3 = 5$</span> bars:
it needs a 5th diagonal bar to rigidify.
Has Laman's theorem been extended to arbitrary (closed, smooth) surfaces
embedded in <span class="math-container">$\mathbb{R}^3$</span>?
Perhaps at least to spheres, or to all convex surfaces?</p>
</blockquote>
<p>Thanks for any ideas or pointers to relevant literature!</p>
<p><b>Addendum</b>.
I found one paper related to my question:
"Rigidity of Frameworks Supported on Surfaces"
by A. Nixon, J.C. Owen, S.C. Power.
<a href="https://arxiv.org/abs/1009.3772v1" rel="nofollow noreferrer" title="Journal version at doi:10.1137/110848852. zbMATH review at https://zbmath.org/?q=an:1266.52018">arXiv:1009.3772v1 math.CO</a>
In it they prove an analog of Laman's theorem for
the circular cylinder in <span class="math-container">$\mathbb{R}^3$</span>.
If one phrases Laman's theorem as requiring for
rigidity that the number of edges <span class="math-container">$E \ge 2 V - 3$</span>
in both the graph and in all its subgraphs,
then their result (Thm. 5.3) is that, on the cylinder, rigidity requires
<span class="math-container">$E \ge 2 V -2$</span> in the graph and in all its subgraphs.
This is not the precise statement of their theorem.
They must also insist that the graph be <em>regular</em>
in a sense that depends on the rigidity matrix achieving maximal rank
(Def. 3.3).
They give as examples of <em>irregular</em> linkages on a sphere
one that
contains an edge with antipodal endpoints, or one that includes
a triangle all three of whose vertices lie on a great circle.
But modulo excluding irregular graphs and other minor technical
details, they essentially replace the constant 3 in Laman's
theorem for the plane with 2 for the cylinder.</p>
<p>Theirs is a very recent paper but contains few citations to related
work on surfaces, suggesting that perhaps the area
of linkages embedded on surfaces is
not yet well explored.
In light of this apparent paucity of information, it seems appropriate that I 'accept'
one of the excellent answers received. Thanks!</p>
<p><b>Addendum [31Jan11].</b>
I just learned of a 2010 paper by Justin Malestein and Louis Theran,
"Generic combinatorial rigidity of periodic frameworks"
<a href="https://arxiv.org/abs/1008.1837v2" rel="nofollow noreferrer" title="Journal version at doi:10.1016/j.aim.2012.10.007. zbMATH review at https://zbmath.org/?q=an:1268.52021">arXiv:1008.1837v2 (math.CO)</a>,
which pretty much completely solves the problem of linkages on a flat 2-torus,
generalizing to flat orbifolds. They obtain a combinatorial characterization for generic minimal
rigidity for "planar periodic frameworks," which encompass these surfaces.</p>
|
Jack Lee
| 6,751 |
<p>No rhombus on the round sphere is rigid, if we define a rhombus $ABCD$ to be a union of four geodesic segments $\overline{AB}\cup \overline{BC}\cup \overline{CD}\cup \overline{DA}$, such that $A$, $B$, $C$, and $D$ are all distinct points, no three of which are collinear, and such that all four segments have the same length, with self-intersections and overlaps allowed. Suppose $ABCD$ is a rhombus. First note that the hypotheses imply that $B$ is neither equal to nor antipodal to $A$: they're not equal because the vertices are distinct; while if they were antipodal, then $C$ would have to be equal to $A$ because $A$ is the only point at the correct distance from $B$. These facts imply that the endpoints of the geodesic segments starting at $A$ and having length equal to $AB$ trace out a circle, not a point. Let $B'$ be any point on this circle, and let $C'$ be the point obtained by reflecting $A$ through the great circle containing $B'$ and $D$. Because reflection through a great circle is an isometry, it follows that $AB'C'D$ is also a rhombus with the same side lengths as $ABCD$.</p>
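<p>The reflection argument above can be checked numerically. The sketch below is an illustration only; the side length and the sample positions of $B'$ are arbitrary choices:</p>

```python
import numpy as np

def sph_dist(u, v):
    """Great-circle distance between unit vectors."""
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def reflect(v, p, q):
    """Reflect v across the plane through the origin containing p and q,
    i.e. through the great circle spanned by p and q."""
    n = np.cross(p, q)
    n /= np.linalg.norm(n)
    return v - 2.0 * np.dot(v, n) * n

s = 0.8                                     # common side length (radians)
A = np.array([0.0, 0.0, 1.0])
D = np.array([np.sin(s), 0.0, np.cos(s)])   # at spherical distance s from A

# Slide B' around the circle of spherical radius s about A; each position
# yields a rhombus A B' C' D with all four sides of length s, since the
# reflection is an isometry fixing B' and D.
for t in np.linspace(0.3, 2.8, 7):
    Bp = np.array([np.sin(s) * np.cos(t), np.sin(s) * np.sin(t), np.cos(s)])
    Cp = reflect(A, Bp, D)
    sides = [sph_dist(A, Bp), sph_dist(Bp, Cp),
             sph_dist(Cp, D), sph_dist(D, A)]
    assert np.allclose(sides, s, atol=1e-9)
```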
<p>I would guess that this argument could be made to work in any symmetric space (with a suitable interpretation of "antipodal"), but I have no idea what would happen with more general metrics.</p>
|
3,690,076 |
<p>Is there a rigorous proof that <span class="math-container">$|G|=|\text{Ker}(f)||\text{Im}(f)|$</span>, for some homomorphism <span class="math-container">$f\,:\,G\rightarrow G'$</span>? Can anyone provide such a proof with explanations?</p>
|
Maryam
| 626,408 |
<p>If <span class="math-container">$f:G\to G'$</span> is a group homomorphism, then the subgroup <span class="math-container">$\operatorname{Im}(f)$</span> of <span class="math-container">$G'$</span> is isomorphic to the factor group <span class="math-container">$G/\operatorname{Ker}(f)$</span>. Therefore, the order of <span class="math-container">$\operatorname{Im}(f)$</span> is the same as the order of <span class="math-container">$G/\operatorname{Ker}(f)$</span>. The latter is also called the index of <span class="math-container">$\operatorname{Ker}(f)$</span> in <span class="math-container">$G$</span>, and is the number of distinct cosets of <span class="math-container">$\operatorname{Ker}(f)$</span> in <span class="math-container">$G$</span>. Since each coset has the same cardinality, namely the cardinality of <span class="math-container">$\operatorname{Ker}(f)$</span>, you get that the order of the factor group is the quotient of the orders, i.e. <span class="math-container">$|G|=|\operatorname{Ker}(f)|\,|\operatorname{Im}(f)|$</span>.</p>
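<p>A finite sanity check; the homomorphism below is an arbitrary illustrative choice:</p>

```python
# G = Z/6, G' = Z/6, and f(x) = 2x mod 6 is a homomorphism.
G = range(6)
f = lambda x: (2 * x) % 6

kernel = [x for x in G if f(x) == 0]     # {0, 3}
image = {f(x) for x in G}                # {0, 2, 4}

assert len(kernel) * len(image) == len(G)   # 2 * 3 == 6
```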
|
4,486,956 |
<p>If <span class="math-container">$f$</span> is a convex increasing function, then I need to show that
<span class="math-container">$$ \lim_{x\rightarrow -\infty} \frac{1}{x}\int_x^{x_0}f(y)dy=-\lim_{x\rightarrow -\infty}f(x).$$</span></p>
<p>I can prove this equality using L'Hopital's rule and the Fundamental Theorem of Caluclus. However, is there a way of proving this without use L'Hopital's rule?</p>
|
Cauchy's Sequence
| 744,327 |
<p>I think the limit on the right should have a negative sign in front of it. I didn't need the convexity assumption for this proof. Let <span class="math-container">$L = \lim_{x \to -\infty}f(x)$</span> and <span class="math-container">$g(x) = f(x) - L$</span>. Suppose <span class="math-container">$L$</span> is finite. Then <span class="math-container">$g$</span> is also increasing and its limit at <span class="math-container">$-\infty$</span> is <span class="math-container">$0$</span>. Now we show that the limit on the left with <span class="math-container">$f$</span> replaced by <span class="math-container">$g$</span> is also <span class="math-container">$0$</span>. Obviously <span class="math-container">$$ \frac{1}{x} \int_x^{x_0}g(y)\ dy \le 0 $$</span> for all <span class="math-container">$x < \textrm{min}\{x_0, 0\}$</span>. Notice that the limit of this expression does not depend on <span class="math-container">$x_0$</span>. Hence we can choose <span class="math-container">$x_0 < 0$</span> so that <span class="math-container">$g(x) < \varepsilon$</span> for all <span class="math-container">$x < x_0$</span> and some fixed <span class="math-container">$\varepsilon > 0$</span>. Then <span class="math-container">$$ \frac{1}{x}\int_x^{x_0}g(y)\ dy > \frac{\varepsilon(x_0 - x)}{x}\ .$$</span> Therefore <span class="math-container">$$ \liminf_{x \to -\infty} \frac{1}{x}\int_x^{x_0}g(y)\ dy \ge -\varepsilon\ .$$</span> Since <span class="math-container">$\varepsilon$</span> was arbitrary, we see that the limit exists and is equal to <span class="math-container">$0$</span>. It follows that <span class="math-container">$$0 = \lim_{x \to -\infty} \frac{1}{x}\int_x^{x_0}f(y) - L\ dy = \lim_{x \to -\infty} \frac{1}{x}\int_x^{x_0}f(y) \ dy - \lim_{x\to -\infty}\frac{L(x_0 - x)}{x} \\= \lim_{x \to -\infty} \frac{1}{x}\int_x^{x_0}f(y)\ dy + L\ .$$</span> This concludes the proof. 
The case where <span class="math-container">$L = -\infty$</span> is similar and I leave it to you.</p>
|
694,090 |
<p>Let $V$ and $W$ be vector spaces over $\Bbb{F}$ and $T:V \to W$ a linear map. If $U \subset V$ is a subspaec we can consider the map $T$ for elements of $U$ and call this the restriction of $T$ to $U$, $T|_{U}: U \to W$ which is a map from $U$ to $W$. Show that</p>
<p>$$\ker T|_{U} = \ker T\cap U.$$</p>
<p>I know the definition of a linear map is </p>
<p>$f(x+y)=f(x) +f(y)$ and </p>
<p>$f(ax)=a\cdot f(x)$</p>
<p>I also know the kernel is the set of points which are mapped to zero.</p>
<p>However, I am struggling to piece this all together.</p>
<p>Thanks in advance for all the help.</p>
|
hmmmm
| 18,301 |
<p>We know that $\ker T=\{v\in V\mid T(v)=0\}$,</p>
<p>so that $\ker T|_U=\{v\in U\mid T(v)=0\}$.</p>
<p>If $v\in \ker T|_U$ then $v\in U$ and $v\in \ker T$, which shows that $\ker T|_U \subset \ker T \cap U$.</p>
<p>Now if we take $v\in \ker T \cap U$ then $v\in U$ and $v\in \ker T$, and so $v\in \ker T|_U$, which gives $\ker T\cap U\subset \ker T|_U$.</p>
<p>Hence we have shown that $\ker T|_U=\ker T \cap U$.</p>
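<p>A concrete check with matrices; the map and subspace below are illustrative choices:</p>

```python
import numpy as np

# T : R^3 -> R^2, T(x, y, z) = (x, z);  U = {(x, y, 0)} = span{e1, e2}.
# Then ker T = span{e2}, so ker T ∩ U = span{e2}, and T restricted to U
# sends (x, y, 0) to (x, 0), whose kernel is again span{e2}.
T = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

v = np.array([0.0, 1.0, 0.0])    # in U and in ker T, hence in ker T|_U
assert np.allclose(T @ v, 0)

w = np.array([1.0, 0.0, 0.0])    # in U but not in either kernel
assert not np.allclose(T @ w, 0)
```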
|
268,482 |
<p>One has written a paper, the main contribution of which is a few conjectures. Several known theorems turned out to be special cases of the conjectures, however no new case of the conjectures was proven in the paper. In fact, no new theorem was proven in the paper. </p>
<p>The work was reported on a few seminars, and several experts found the conjectures interesting. </p>
<p>One would like to publish this paper in a refereed journal. The paper was rejected from a certain journal just two days after its submission because "this genre of article does not fit the journal".</p>
<blockquote>
<p><strong>QUESTION.</strong> Are there examples of publications of this genre in refereed journals?</p>
</blockquote>
<p><strong>ADD:</strong> The mentioned paper explains the background, states the conjectures, discusses various special cases and consequences, and lists known cases. It is 20 pages long. </p>
|
Carlo Beenakker
| 11,260 |
<p>If you have numerical evidence in support of the conjecture, the journal of <A HREF="https://en.wikipedia.org/wiki/Experimental_Mathematics_(journal)" rel="noreferrer">Experimental Mathematics</A> seems to fit the bill:</p>
<blockquote>
<p>Experimental Mathematics publishes original papers featuring formal
results inspired by experimentation, conjectures suggested by
experiments, and data supporting significant hypotheses.</p>
<p>Note that we do value proofs: experimentally inspired results that can be proved are more desirable than conjectural ones. However, we do publish significant conjectures or explorations in the hope of inspiring other, perhaps better-equipped researchers to carry on the investigation. The objective of Experimental Mathematics is to play a role in the discovery of formal proofs, not to displace them.</p>
</blockquote>
<p>Several publications in that journal have gotten quite some traction, like:</p>
<ul>
<li><p><A
HREF="http://www.tandfonline.com/doi/abs/10.1080/10586458.1998.10504357" rel="noreferrer">Mahler's
Measure and Special Values of L-functions</A></p></li>
<li><p><A
HREF="https://www.emis.de/journals/EM/expmath/volumes/15/15.3/Torquato.pdf" rel="noreferrer">New
Conjectural Lower Bounds on the Optimal Density of Sphere
Packings</A></p></li>
<li><p><A HREF="https://www.emis.de/journals/EM/expmath/volumes/11/11.3/Murakami427_435.pdf" rel="noreferrer">Kashaev’s Conjecture and the Chern-Simons Invariants of Knots and Links</A></p></li>
<li><p><A HREF="https://www.emis.de/journals/EM/expmath/volumes/12/12.4/Guillera.pdf" rel="noreferrer">About a new kind of Ramanujan-type series</A></p></li>
</ul>
|
268,482 |
<p>One has written a paper, the main contribution of which is a few conjectures. Several known theorems turned out to be special cases of the conjectures, however no new case of the conjectures was proven in the paper. In fact, no new theorem was proven in the paper. </p>
<p>The work was reported on a few seminars, and several experts found the conjectures interesting. </p>
<p>One would like to publish this paper in a refereed journal. The paper was rejected from a certain journal just two days after its submission because "this genre of article does not fit the journal".</p>
<blockquote>
<p><strong>QUESTION.</strong> Are there examples of publications of this genre in refereed journals?</p>
</blockquote>
<p><strong>ADD:</strong> The mentioned paper explains the background, states the conjectures, discusses various special cases and consequences, and lists known cases. It is 20 pages long. </p>
|
bof
| 43,266 |
<p>The editorial policy of the <a href="http://ulamquarterly.dyndns.org/editor.html" rel="noreferrer"><em>Ulam Quarterly Journal</em></a> claims that the journal is "devoted to the publication of original research <strong>and open problems</strong> in all areas of mathematics."</p>
|
325,519 |
<p>I'm reading a book about inverse analysis and trying to figure out how the authors do the inversion.</p>
<p>Assume that matrix $C$ is
$$
C
~=~
\begin{bmatrix}
88.53 & -33.60 & -5.33 \\
-33.60 & 15.44 & 2.67 \\
-5.33 & 2.67 & 0.48
\end{bmatrix}
$$
and at some point authors diagonalize this matrix to calculate matrix $P$ using
$$
C^{-1}
~=~
P^{\rm t} L P
$$
where $L$ is a diagonal matrix of positive eigenvalues and the columns of $P$ are orthonormal eigenvectors.</p>
<p>The above equation for diagonalizing the inverse of $C$ is a bit different from what is usually used, and therefore I cannot calculate $P$ correctly (i.e., matching the book).
So it would be great if somebody could show me how to calculate $P$.
$$
P
~=~
\begin{bmatrix}
0.93 & 0.36 & -0.03 \\
-0.36 & 0.90 & -0.23 \\
-0.06 & 0.23 & 0.97
\end{bmatrix}
$$</p>
|
Amzoti
| 38,839 |
<p>Have you heard of <a href="http://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow"><em>Jordan Normal Form</em></a></p>
<p>For your matrix:</p>
<p>$$C=\left(\begin{matrix}
88.53 & -33.60 &-5.33\\
-33.60 & 15.44 & 2.67\\
-5.33 & 2.67 & 0.48
\end{matrix}\right)$$</p>
<p>We would find the eigenvalues and the eigenvectors and then diagonalize it such that:</p>
<p>$$C = P J P^{-1}$$</p>
<p>For this matrix, we have:</p>
<p>$\lambda_1 = 101.976, v_1 = (0.929997, -0.362899, -0.0583849)$</p>
<p>$\lambda_2 = 2.47102 , v_2 = (0.366307, 0.901908, 0.228868)$</p>
<p>$\lambda_3 = 0.00312608 , v_3 = (-0.0303981, -0.234233, 0.971705)$</p>
<p>Using these values, we can now diagonalize the matrix $C$.</p>
<p>$$C=\left(\begin{matrix}
88.53 & -33.60 &-5.33\\
-33.60 & 15.44 & 2.67\\
-5.33 & 2.67 & 0.48
\end{matrix}\right) = P J P^{-1}$$ </p>
<p>Where:
$$P = \left(\begin{matrix}
0.929997 & 0.366307 & -0.0303981 \\
-0.362899 & 0.901908 & -0.234233 \\
-0.0583849 & 0.228868 & 0.971705
\end{matrix}\right)$$</p>
<p>$$J = \left(\begin{matrix}
101.976 & 0 & 0 \\
0 & 2.47102 & 0 \\
0 & 0 & 0.00312608
\end{matrix}\right)$$</p>
<p>$$P^{-1} = \left(\begin{matrix}
0.929997 & -0.362899 & -0.0583849 \\
0.366307 & 0.901908 & 0.228868 \\
-0.0303981 & -0.234233 & 0.971705
\end{matrix}\right)$$</p>
<p>Notice that the columns of $P$ are the eigenvectors $[v_1 \mid v_2 \mid v_3]$.</p>
<p>Notice that $J$ is the diagonal of the eigenvalues $\lambda_1, \lambda_2, \lambda_3$.</p>
<p>Note that $P^{-1}$ is just the inverse of $P$; since $C$ is symmetric, $P$ is orthogonal and $P^{-1}=P^{\rm t}$.</p>
<p>Lastly, you should understand that this works only when a matrix is diagonalizable (see site referenced above).</p>
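<p>A quick NumPy check of this decomposition, and of the book's formula $C^{-1}=P^{\rm t}LP$; note the sign and ordering of the eigenvector columns may differ from the values listed above:</p>

```python
import numpy as np

C = np.array([[ 88.53, -33.60, -5.33],
              [-33.60,  15.44,  2.67],
              [ -5.33,   2.67,  0.48]])

# C is symmetric, so eigh returns real eigenvalues and an orthonormal
# eigenvector matrix Q with C = Q diag(lam) Q^T.
lam, Q = np.linalg.eigh(C)
assert np.allclose(Q @ np.diag(lam) @ Q.T, C)

# With P = Q^T and L = diag(1/lam) this matches the book's convention:
# C^{-1} = P^T L P.
P = Q.T
L = np.diag(1.0 / lam)
assert np.allclose(P.T @ L @ P, np.linalg.inv(C))
```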
<p>Regards</p>
|